YYforymj 2017-04-21 09:32:12 +08:00
commit 96e1a1c928
111 changed files with 10219 additions and 8076 deletions

View File

@ -59,41 +59,42 @@ LCTT 的组成
* 2016/12/24 拟定 LCTT [Core 规则](core.md)增加新的 Core 成员 ucasFL、martin2011qi并调整了一些组。
* 2017/03/13 制作了 LCTT 主页、成员列表和成员主页LCTT 主页将移动至 https://linux.cn/lctt 。
* 2017/03/16 提升 GHLandy、bestony、rusking 为新的 Core 成员。创建 Comic 小组。
* 2017/04/11 启用头衔制,为各位重要成员颁发头衔。
活跃成员
核心成员
-------------------------------
目前 TP 活跃成员有:
- Leader @wxy,
- Source @oska874,
- Proofreaders @jasminepeng,
- CORE @geekpi,
- CORE @GOLinux,
- CORE @ictlyh,
- CORE @strugglingyouth,
- CORE @FSSlc,
- CORE @zpl1025,
- CORE @runningwater,
- CORE @bazz2,
- CORE @Vic020,
- CORE @alim0x,
- CORE @tinyeyeser,
- CORE @Locez,
- CORE @ucasFL,
- CORE @martin2011qi,
- CORE @GHLandy,
- CORE @bestony,
- CORE @rusking,
- Senior @DeadFire,
- Senior @reinoir222,
- Senior @vito-L,
- Senior @willqian,
- Senior @vizv,
- Senior @dongfengweixiao,
- Senior @PurlingNayuki,
- Senior @carolinewuyan,
目前 LCTT 核心成员有:
- 组长 @wxy,
- 选题 @oska874,
- 校对 @jasminepeng,
- 钻石译者 @geekpi,
- 钻石译者 @GOLinux,
- 钻石译者 @ictlyh,
- 技术组长 @bestony,
- 漫画组长 @GHLandy,
- LFS 组长 @martin2011qi,
- 核心成员 @strugglingyouth,
- 核心成员 @FSSlc,
- 核心成员 @zpl1025,
- 核心成员 @runningwater,
- 核心成员 @bazz2,
- 核心成员 @Vic020,
- 核心成员 @alim0x,
- 核心成员 @tinyeyeser,
- 核心成员 @Locez,
- 核心成员 @ucasFL,
- 核心成员 @rusking,
- 前任选题 @DeadFire,
- 前任校对 @reinoir222,
- 前任校对 @PurlingNayuki,
- 前任校对 @carolinewuyan,
- 功勋成员 @vito-L,
- 功勋成员 @willqian,
- 功勋成员 @vizv,
- 功勋成员 @dongfengweixiao,
全部成员列表请参见: https://linux.cn/lctt-list/ 。
谢谢大家的支持!

View File

@ -0,0 +1,310 @@
调试器的工作原理(一):基础篇
============================================================
这是调试器工作原理系列文章的第一篇,我不确定这个系列会有多少篇文章,会涉及多少话题,但我仍会从这篇基础开始。
### 这一篇会讲什么
我将为大家展示 Linux 中调试器的主要构成模块 - `ptrace` 系统调用。这篇文章所有代码都是基于 32 位 Ubuntu 操作系统。值得注意的是,尽管这些代码是平台相关的,将它们移植到其它平台应该并不困难。
### 缘由
为了理解我们要做什么,让我们先考虑下调试器为了完成调试都需要什么资源。调试器可以开始一个进程并调试这个进程,又或者将自己同某个已经存在的进程关联起来。调试器能够单步执行代码,设定断点并且将程序执行到断点,检查变量的值并追踪堆栈。许多调试器有着更高级的特性,例如在调试器的地址空间内执行表达式或者调用函数,甚至可以在进程执行过程中改变代码并观察效果。
尽管现代的调试器都十分的复杂(我没有检查,但我确信 gdb 的代码行数至少有六位数),但它们的工作的原理却是十分的简单。调试器的基础是操作系统与编译器 / 链接器提供的一些基础服务,其余的部分只是[简单的编程][14]而已。
### Linux 的调试 - ptrace
Linux 调试器中的瑞士军刀便是 `ptrace` 系统调用(使用 `man 2 ptrace` 命令可以了解更多)。这是一种复杂却强大的工具,可以允许一个进程控制另外一个进程,并且<ruby>窥探和修改<rt>peek and poke</rt></ruby>被控制进程内存映像中的值peek 和 poke 在系统编程中是很知名的叫法,指的是直接读写内存内容)。
接下来会深入分析。
### 执行进程的代码
我将编写一个示例,实现一个在“跟踪”模式下运行的进程。在这个模式下,我们将单步执行进程的代码,就像机器码(汇编代码)被 CPU 执行时一样。我将分段展示、讲解示例代码,在文章的末尾也有完整 C 文件的下载链接,你可以编译、执行或者随心所欲地更改。
更进一步的计划是实现一段代码,这段代码可以创建可执行用户自定义命令的子进程,同时父进程可以跟踪子进程。首先是主函数:
```
/* 注:原文分段展示代码,这里补上本段所需的头文件与函数原型(完整源码见文末链接) */
#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;
#include &lt;sys/types.h&gt;

void run_target(const char* programname);  /* 定义见下文 */
void run_debugger(pid_t child_pid);        /* 定义见下文 */

int main(int argc, char** argv)
{
    pid_t child_pid;

    if (argc < 2) {
        fprintf(stderr, "Expected a program name as argument\n");
        return -1;
    }

    child_pid = fork();
    if (child_pid == 0)
        run_target(argv[1]);
    else if (child_pid > 0)
        run_debugger(child_pid);
    else {
        perror("fork");
        return -1;
    }

    return 0;
}
```
看起来相当的简单:我们用 `fork` 创建了一个新的子进程(这篇文章假定读者有一定的 Unix/Linux 编程经验,我假定你知道或至少了解 fork、exec 族函数与 Unix 信号。if 语句的分支执行子进程(这里称之为 “target”`else if` 的分支执行父进程(这里称之为 “debugger”)。
下面是 target 进程的代码:
```
void run_target(const char* programname)
{
    procmsg("target started. will run '%s'\n", programname);

    /* Allow tracing of this process */
    if (ptrace(PTRACE_TRACEME, 0, 0, 0) < 0) {
        perror("ptrace");
        return;
    }

    /* Replace this process's image with the given program */
    execl(programname, programname, 0);
}
```
这段代码中最值得注意的是 `ptrace` 调用。在 `sys/ptrace.h` 中,`ptrace` 是如下定义的:
```
long ptrace(enum __ptrace_request request, pid_t pid,
            void *addr, void *data);
```
第一个参数是 _request_,这是许多预定义的 `PTRACE_*` 常量中的一个。第二个参数是请求所作用的进程的 ID。第三个与第四个参数是地址与数据指针用于操作内存。上面代码段中的 `ptrace` 调用发起了 `PTRACE_TRACEME` 请求,这意味着该子进程请求系统内核让其父进程跟踪自己。帮助页面上对于 request 的描述很清楚:
> 意味着该进程被其父进程跟踪。任何传递给该进程的信号(除了 `SIGKILL`)都将导致该进程停止,并通过 `wait()` 通知其父进程。**此外,该进程之后所有对 `exec()` 的调用都将导致 `SIGTRAP` 信号发送到此进程上,使得父进程在新的程序开始执行前有机会取得控制权**。如果一个进程并不需要它的父进程跟踪它那么这个进程不应该发送这个请求。pid、addr 与 data 会被忽略)
我高亮了这个例子中我们需要注意的部分。在 `ptrace` 调用后,`run_target` 接下来要做的就是通过 `execl` 启动作为参数传入的程序。如同高亮部分所说明的,这将导致系统内核在新程序开始执行前暂时停止该进程,并向父进程发送信号。
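顺带一提,示例中反复用到的 `procmsg` 并没有在文中定义(它包含在文末链接的完整源码里)。根据它的用法和后文输出中每行以 `[PID]` 开头的格式,可以推测出一个如下的最小实现草图,仅供参考:
```
#include <stdio.h>
#include <stdarg.h>
#include <unistd.h>

/* 推测的实现:先打印当前进程的 PID再像 printf 一样格式化其余参数 */
void procmsg(const char* format, ...)
{
    va_list ap;
    fprintf(stdout, "[%d] ", getpid());
    va_start(ap, format);
    vfprintf(stdout, format, ap);
    va_end(ap);
}
```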
是时候看看父进程做什么了。
```
void run_debugger(pid_t child_pid)
{
    int wait_status;
    unsigned icounter = 0;
    procmsg("debugger started\n");

    /* Wait for child to stop on its first instruction */
    wait(&wait_status);

    while (WIFSTOPPED(wait_status)) {
        icounter++;

        /* Make the child execute another instruction */
        if (ptrace(PTRACE_SINGLESTEP, child_pid, 0, 0) < 0) {
            perror("ptrace");
            return;
        }

        /* Wait for child to stop on its next instruction */
        wait(&wait_status);
    }

    procmsg("the child executed %u instructions\n", icounter);
}
```
如前文所述,一旦子进程调用了 `exec`,子进程会停止并被发送 `SIGTRAP` 信号。父进程在第一个 `wait()` 处等待这一事件的发生。一旦上述事件发生了,`wait()` 便会返回;如果子进程是因为收到信号而停止的,`WIFSTOPPED` 就会返回 `true`。
父进程接下来的动作就是整篇文章最需要关注的部分了。父进程会将 `PTRACE_SINGLESTEP` 与子进程 ID 作为参数调用 `ptrace`。这就会告诉操作系统“请恢复子进程,但在它执行下一条指令前再次阻塞它”。周而复始地,父进程等待子进程阻塞,循环继续。当 `wait()` 中传出的不再是关于子进程停止的信号时,循环终止——最常见的情况是子进程终止了(此时 `WIFEXITED` 将返回 `true`)。
`icounter` 存储了子进程执行指令的次数。这么看来,我们小小的例子也完成了些有用的事情:在命令行中指定程序,它将执行该程序并记录它从开始到结束所需要的 CPU 指令数量。接下来就让我们这么做吧。
### 测试
我编译了下面这个简单的程序并利用跟踪器运行它:
```
#include <stdio.h>

int main()
{
    printf("Hello, world!\n");
    return 0;
}
```
令我惊讶的是,跟踪器花了相当长的时间,并报告整个执行过程共有超过 100,000 条指令执行。仅仅是一条输出语句?什么造成了这种情况?答案很有趣(至少假如你同我一样痴迷于机器/汇编语言。Linux 上的 gcc 默认会将程序与 C 运行时库动态链接。这就意味着任何程序运行前的第一件事是需要动态库加载器去查找程序运行所需要的共享库。这些代码的数量很大 - 别忘了我们的跟踪器要跟踪每一条指令,不仅仅是主函数的,而是“整个进程中的指令”。
所以当我将测试程序静态编译时(相比之下,可执行文件会多出 500 KB 左右的大小,这是静态链接 C 运行时库的结果),跟踪器提示只有大概 7000 条指令被执行。这个数目仍然不小,但是考虑到在主函数执行前 libc 的初始化以及主函数执行后的清理代码,这个数目已经是相当不错了。此外,`printf` 也是一个复杂的函数。
但我仍然不满意,我需要的是“完全可控”的东西 - 例如可以逐条指令解释整个执行过程的程序。这当然可以通过汇编代码完成。所以我找到了这个汇编语言版本的 “Hello, world!” 并编译了它。
```
section .text

    ; The _start symbol must be declared for the linker (ld)
    global _start

_start:

    ; Prepare arguments for the sys_write system call:
    ;   - eax: system call number (sys_write)
    ;   - ebx: file descriptor (stdout)
    ;   - ecx: pointer to string
    ;   - edx: string length
    mov     edx, len
    mov     ecx, msg
    mov     ebx, 1
    mov     eax, 4

    ; Execute the sys_write system call
    int     0x80

    ; Execute sys_exit
    mov     eax, 1
    int     0x80

section .data
msg     db 'Hello, world!', 0xa
len     equ $ - msg
```
当然,现在跟踪器提示 7 条指令被执行了,这样一来很容易区分它们。
### 深入指令流
上面那个汇编语言编写的程序使得我可以向你介绍 `ptrace` 的另外一个强大的用途 - 详细显示被跟踪进程的状态。下面是 `run_debugger` 函数的另一个版本:
```
void run_debugger(pid_t child_pid)
{
    int wait_status;
    unsigned icounter = 0;
    procmsg("debugger started\n");

    /* Wait for child to stop on its first instruction */
    wait(&wait_status);

    while (WIFSTOPPED(wait_status)) {
        icounter++;
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child_pid, 0, &regs);
        unsigned instr = ptrace(PTRACE_PEEKTEXT, child_pid, regs.eip, 0);

        procmsg("icounter = %u. EIP = 0x%08x. instr = 0x%08x\n",
                icounter, regs.eip, instr);

        /* Make the child execute another instruction */
        if (ptrace(PTRACE_SINGLESTEP, child_pid, 0, 0) < 0) {
            perror("ptrace");
            return;
        }

        /* Wait for child to stop on its next instruction */
        wait(&wait_status);
    }

    procmsg("the child executed %u instructions\n", icounter);
}
```
不同之处仅仅在于 `while` 循环的开始几行。这个版本里增加了两个新的 `ptrace` 调用。第一条将进程的寄存器值读取进了一个结构体中。`user_regs_struct` 定义在 `sys/user.h` 中。如果你查看这个头文件,头部的注释这么写道:
```
/* 这个文件只是为了 GDB 而创建的。
   不要细读。如果你不知道自己在干什么,
   不要在 GDB 以外的任何地方使用它。 */
```
不知道你做何感想,但这让我觉得我们找对地方了。回到例子中,一旦我们在 `regs` 变量中取得了寄存器的值,我们就可以通过将 `PTRACE_PEEKTEXT` 作为参数、`regs.eip`x86 上的扩展指令指针)作为地址,调用 `ptrace`,读取当前进程的当前指令。(警告:如同我上面所说,文章很大程度上是平台相关的。我做了一些简化 - 例如x86 指令是变长的,并不需要对齐到 4 字节(在我的 32 位 Ubuntu 上 `unsigned int` 是 4 字节),而事实上许多指令也没有对齐。要从内存中正确地读出指令,需要一个完整的反汇编器;我们这里没有,但实际的调试器是有的。)下面是新跟踪器所展示出的调试效果:
```
$ simple_tracer traced_helloworld
[5700] debugger started
[5701] target started. will run 'traced_helloworld'
[5700] icounter = 1. EIP = 0x08048080. instr = 0x00000eba
[5700] icounter = 2. EIP = 0x08048085. instr = 0x0490a0b9
[5700] icounter = 3. EIP = 0x0804808a. instr = 0x000001bb
[5700] icounter = 4. EIP = 0x0804808f. instr = 0x000004b8
[5700] icounter = 5. EIP = 0x08048094. instr = 0x01b880cd
Hello, world!
[5700] icounter = 6. EIP = 0x08048096. instr = 0x000001b8
[5700] icounter = 7. EIP = 0x0804809b. instr = 0x000080cd
[5700] the child executed 7 instructions
```
现在,除了 `icounter`,我们也可以观察到指令指针与它每一步所指向的指令。怎么来判断这个结果对不对呢?使用 `objdump -d` 处理可执行文件:
```
$ objdump -d traced_helloworld

traced_helloworld:     file format elf32-i386

Disassembly of section .text:

08048080 <.text>:
 8048080:   ba 0e 00 00 00          mov    $0xe,%edx
 8048085:   b9 a0 90 04 08          mov    $0x80490a0,%ecx
 804808a:   bb 01 00 00 00          mov    $0x1,%ebx
 804808f:   b8 04 00 00 00          mov    $0x4,%eax
 8048094:   cd 80                   int    $0x80
 8048096:   b8 01 00 00 00          mov    $0x1,%eax
 804809b:   cd 80                   int    $0x80
```
这个结果和我们跟踪器的结果就很容易比较了。
### 将跟踪器关联到正在运行的进程
如你所知,调试器也能关联到已经在运行的进程。现在你应该不会惊讶,这同样是通过以 `PTRACE_ATTACH` 为参数调用 `ptrace` 完成的。这里我不会展示示例代码,通过上文的示例代码应该很容易实现这个过程。出于教学目的,本文使用的方法更简便(因为我们在子进程刚开始时就可以让它停止)。
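虽然原文没有给出这部分的示例代码,但按照 `man 2 ptrace` 的描述,可以大致勾勒出如下的最小草图(假设已经知道目标进程的 PID仅作示意
```
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <sys/types.h>

/* 推测性示例:关联到一个正在运行的进程,等待其停止,然后分离 */
int attach_and_detach(pid_t target_pid)
{
    if (ptrace(PTRACE_ATTACH, target_pid, 0, 0) < 0) {
        perror("ptrace(PTRACE_ATTACH)");
        return -1;
    }

    /* PTRACE_ATTACH 会向目标进程发送 SIGSTOP这里等待它真正停下来 */
    int wait_status;
    waitpid(target_pid, &wait_status, 0);

    /* 在这里可以像前文一样使用 PTRACE_SINGLESTEP、PTRACE_GETREGS 等 */

    if (ptrace(PTRACE_DETACH, target_pid, 0, 0) < 0) {
        perror("ptrace(PTRACE_DETACH)");
        return -1;
    }
    return 0;
}
```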
### 代码
上文中的简单跟踪器(以及更高级的、可以打印指令的版本)的完整 C 源代码可以在[这里][20]找到。它是用 4.4 版本的 gcc 以 `-Wall -pedantic --std=c99` 编译的。
### 结论与计划
诚然,这篇文章并没有涉及很多内容 - 我们距离亲手完成一个实际的调试器还有很长的路要走。但我希望这篇文章至少可以使得调试这件事少一些神秘感。`ptrace` 是功能多样的系统调用,我们目前只展示了其中的一小部分。
单步调试代码很有用,但也只是在一定程度上有用。上面我通过 C 的 “Hello World!” 做了示例。在到达主函数之前,可能需要先单步执行上万条初始化 C 运行环境的指令。这并不是很方便。最理想的是在 `main` 函数入口处放置断点并从断点处开始分步执行。为此,在这个系列的下一篇,我打算展示怎么实现断点。
### 参考
撰写此文时参考了如下文章:
* [Playing with ptrace, Part I][11]
* [How debugger works][12]
--------------------------------------------------------------------------------
via: http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1
作者:[Eli Bendersky][a]
译者:[YYforymj](https://github.com/YYforymj)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://eli.thegreenplace.net/
[1]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id1
[2]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id2
[3]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id3
[4]:http://www.jargon.net/jargonfile/p/peek.html
[5]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id4
[6]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id5
[7]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id6
[8]:http://eli.thegreenplace.net/tag/articles
[9]:http://eli.thegreenplace.net/tag/debuggers
[10]:http://eli.thegreenplace.net/tag/programming
[11]:http://www.linuxjournal.com/article/6100?page=0,1
[12]:http://www.alexonlinux.com/how-debugger-works
[13]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id7
[14]:http://en.wikipedia.org/wiki/Small_matter_of_programming
[15]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id8
[16]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id9
[17]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id10
[18]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id11
[19]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1#id12
[20]:https://github.com/eliben/code-for-blog/blob/master/2011/simple_tracer.c
[21]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1

View File

@ -0,0 +1,165 @@
深入解析面向数据的哈希表性能
============================================================
最近几年中,面向数据的设计已经受到了很多的关注 —— 这是一种强调内存中数据布局的编程风格,关注数据如何被访问、以及会引发多少 cache 缺失。由于内存读取缺失比命中要慢上几个数量级,所以缺失的数量通常是优化的关键指标。这不仅仅关乎那些对性能有要求的代码:对内存效率缺乏重视,正是一般软件运行缓慢、膨胀的一大原因。
缓存高效的数据结构的中心原则是将事情变得扁平和线性。比如,在大部分情况下,存储一个元素序列更倾向于使用普通数组而不是链表 —— 每一次通过指针来查找数据都会为 cache 缺失增加一份风险;而普通数组则可以预取,并使得内存系统以最大的效率运行。
如果你知道一点内存层级如何运作的知识,下面的内容会是想当然的结果 —— 但是有时候即便它们相当明显测试一下仍不失为一个好主意。几年前 [Baptiste Wicht 测试过了 `std::vector` vs `std::list` vs `std::deque`][4](后者通常使用分块数组来实现,比如:一个数组的数组)。结果大部分会和你预期的保持一致,但是会存在一些违反直觉的东西。作为实例:在序列链表的中间位置做插入或者移除操作被认为会比数组快,但如果该元素是一个 POD 类型,并且不大于 64 字节或者在 64 字节左右即一个缓存行cache line的大小通过对要操作的元素周围的数组元素进行移位操作要比从头遍历链表来得快。这是由于在遍历链表以及通过指针插入/删除元素的时候可能会导致不少的 cache 缺失相对而言数组移位则很少会发生。对于更大的元素、非 POD 类型,或者你已经有了指向链表元素的指针,此时和预期的一样,链表胜出。)
多亏了类似 Baptiste 这样的数据,我们知道了内存布局如何影响序列容器。但是关联容器,比如 hash 表会怎么样呢?已经有了些权威建议:[Chandler Carruth 推荐使用局部探测的开放寻址][5](这样我们就没必要追踪指针),以及 [Mike Acton 推荐将 value 和 key 在内存中隔离][6](这种情况下,我们可以在每一个缓存行中得到更多的 key这可以在我们必须查找多个 key 时改善缓存局部性)。这些想法很有意义,但再一次强调:测试永远是好习惯。由于我找不到任何数据,所以只好自己收集了。
### 测试
我测试了四个不同的 quick-and-dirty 哈希表实现,另外还包括 `std::unordered_map`。这五个哈希表都使用了同一个哈希函数 —— Bob Jenkins 的 [SpookyHash][8]64 位哈希值)。(由于哈希函数在这里不是重点,所以我没有测试不同的哈希函数;我同样也没有统计测试中的总内存消耗。)每个实现会在测试结果表中用一个简短的代号标注出来。
* **UM** `std::unordered_map`。在 VS2012 和 libstdc++-v3gcc 和 clang 都会用到 libstdc++-v3UM 都是以链表的形式实现所有的元素都在链表中bucket 数组中存储了链表的迭代器。VS2012 中用的是双链表,每一个 bucket 存储了起始迭代器和结束迭代器libstdc++ 中用的是单链表,每一个 bucket 只存储了一个起始迭代器。这两种情况里,链表节点是独立申请和释放的。最大负载因子是 1。
* **Ch**分离链接法separate chaining—— 每个 bucket 指向一个元素节点的单链表。为了避免为每一个节点单独申请内存,元素节点存储在普通数组池中。未使用的节点保存在一个空闲链表中。最大负载因子是 1。
* **OL**:开放寻址,线性探测 —— 每一个 bucket 存储一个 62 bit 的 hash 值、一个 2 bit 的状态值(包括 empty、filled、removed 三个状态以及 key 和 value。最大负载因子是 2/3。
* **DO1**“data-oriented 1” —— 和 OL 相似,但是哈希值与状态值、key 与 value 分别放在两个独立的扁平数组中(见列表后的结构草图)。
* **DO2**“data-oriented 2” —— 与 OL 类似,但是哈希/状态、key、value 被分离在 3 个独立的扁平数组中。
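原文没有给出这些实现的代码(完整测试代码见下文的下载链接);下面是一个按照 DO1 的文字描述推测出的最小结构草图(假设 key、value 都是 8 字节的 POD表大小为 2 的幂、最大负载因子小于 1仅作示意并非原作者的实现
```
#include <stdint.h>
#include <stddef.h>

/* 状态值占哈希字段的低 2 位,高 62 位存哈希 */
enum { SLOT_EMPTY = 0, SLOT_FILLED = 1, SLOT_REMOVED = 2 };

typedef struct { uint64_t key, value; } do1_kv;

typedef struct {
    uint64_t *hash_state; /* 哈希值与状态打包在一个扁平数组中 */
    do1_kv   *kv;         /* key/value 成对放在另一个扁平数组里 */
    size_t    capacity;   /* 2 的幂,可用掩码代替取模 */
} do1_table;

/* 线性探测查找:探测阶段只扫描 hash_state 数组(一个缓存行可容纳 8 个槽位),
   只有哈希匹配时才访问 kv 数组。依赖负载因子 < 1 保证表中存在空槽。 */
static int do1_lookup(const do1_table *t, uint64_t hash,
                      uint64_t key, uint64_t *value_out)
{
    size_t mask = t->capacity - 1;
    uint64_t packed = (hash & ~(uint64_t)3) | SLOT_FILLED;
    for (size_t i = (size_t)hash & mask; ; i = (i + 1) & mask) {
        uint64_t hs = t->hash_state[i];
        if ((hs & 3) == SLOT_EMPTY)
            return 0;                     /* 碰到空槽:查找失败 */
        if (hs == packed && t->kv[i].key == key) {
            *value_out = t->kv[i].value;  /* 哈希与 key 都匹配 */
            return 1;
        }
        /* removed 槽位不终止探测,继续向后扫描 */
    }
}
```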
在我的所有实现中,包括 VS2012 的 UM 实现,默认表大小都是 2 的 n 次幂如果超出了最大负载因子则扩展两倍。在 libstdc++ 中UM 默认表大小是一个素数,如果超出了最大负载因子,则扩展为下一个素数大小。但是我不认为这些细节对性能很重要。素数是一种对低位上没有足够熵的低劣 hash 函数的挽救手段,但是我们正在用的是一个很好的 hash 函数。
OL、DO1 和 DO2 的实现在下文中将统称为 OAopen addressing开放寻址—— 稍后我们将发现它们在性能特性上非常相似。在每一个实现中,单元数从 100 K 到 1 M有效负载即总的 key + value 大小)从 8 到 4 K 字节,我为几个不同的操作记了时间。key 和 value 永远是 POD 类型key 永远是 8 个字节(除了 8 字节的有效负载,此时 key 和 value 都是 4 字节)。因为我的目的是为了测试内存影响而不是哈希函数性能,所以我将 key 放在连续的范围中。每一个测试都会重复 5 遍,然后记录最小的耗时。
测试的操作在这里:
* **Fill**:将一个随机的 key 序列插入到表中key 在序列中是唯一的)。
* **Presized fill**:和 Fill 相似,但是在插入之间我们先为所有的 key 保留足够的内存空间,以防止在 fill 过程中 rehash 或者重申请。
* **Lookup**:执行 100 k 次随机 key 查找,所有的 key 都在 table 中。
* **Failed lookup**: 执行 100 k 次随机 key 查找,所有的 key 都不在 table 中。
* **Remove**:从 table 中移除随机选择的半数元素。
* **Destruct**:销毁 table 并释放内存。
你可以[在这里下载我的测试代码][9]。这些代码只能在 64 位机器上编译(包括 Windows 和 Linux。在 `main()` 函数顶部附近有一些开关,你可把它们打开或者关掉 —— 如果全开,可能会需要一两个小时才能结束运行。我收集的结果也放在了那个打包文件里的 Excel 表中。注意Windows 和 Linux 是在不同的 CPU 上跑的,所以时间不具备可比较性。)代码也跑了一些单元测试,用来验证所有的 hash 表实现都能运行正确。
我还顺带尝试了另外两个实现:一个是 Ch 的变种,它将第一个节点直接存放在 bucket 中而不是池里;另一个是使用二次探测的开放寻址。这两个都不够好,没有放进最终的数据,但是它们的代码仍放在了打包文件里面。
### 结果
这里有成吨的数据!!
这一节我将详细的讨论一下结果,但是如果你对此不感兴趣,可以直接跳到下一节的总结。
#### Windows
这是所有的测试的图表结果,使用 Visual Studio 2012 编译,运行于 Windows 8.1 和 Core i7-4710HQ 机器上。(点击可以放大。)
[
![Results for VS 2012, Windows 8.1, Core i7-4710HQ](http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png "Results for VS 2012, Windows 8.1, Core i7-4710HQ")
][12]
从左至右是不同的有效负载大小,从上往下是不同的操作。(注意:不是所有的 Y 轴都是相同的比例。)我将为每一个操作总结一下主要趋向。
**Fill**
在我的 hash 表中Ch 稍微好于任何 OA 变种。随着哈希表大小和有效负载的加大,差距也随之变大。我猜测这是由于 Ch 只需要从一个空闲链表中拉取一个元素,然后把它放在 bucket 前面,而 OA 不得不搜索一部分 bucket 来找到一个空位置。所有 OA 变种的性能表现基本相似,当然 DO1 稍有优势。
在小负载的情况下UM 几乎是所有 hash 表中表现最差的 —— 因为 UM 为每一次的插入申请(内存)付出了沉重的代价。但是在 128 字节的时候,这些 hash 表基本相当,大负载的时候 UM 还赢了点。因为,我所有的实现都需要重新调整元素池的大小,并需要移动大量的元素到新池里面,这一点我几乎无能为力;而 UM 一旦为元素申请了内存后便不需要移动了。注意大负载中图表上夸张的跳步,这更印证了重新调整大小带来的开销。相反UM 只是线性上升 —— 它只需要重新调整 bucket 数组大小。由于没有太多隆起的地方,所以相对有效率。
**Presized fill**
大致和 Fill 相似,但是图示结果更加的线性光滑,没有太大的跳步(因为不需要 rehash所有实现的差距在这一测试中有所缩小。大负载时 UM 依然稍快于 Ch问题还是在重新调整大小上。Ch 仍是稳定地稍快于 OA 变种,而 DO1 比其它的 OA 稍有优势。
**Lookup**
所有的实现都相当集中。除了最小负载时 DO1 和 OL 稍快,其余情况下 UM 和 DO2 都跑在了前面。LCTT 译注:你确定?)真的,我无法形容 UM 在这一步表现得多么好。尽管需要遍历链表,但是 UM 依然能与面向数据的实现一较高下。
顺带一提,查找时间和 hash 表的大小有着很弱的关联,这真的很有意思。
哈希表查找的期望时间是常量,所以在渐进意义上,性能不应该依赖于表的大小。但是那是在忽视了 cache 影响的情况下!作为具体的例子,当我们在具有 10 k 条目的表中做 100 k 次查找时,速度会变快,因为在最初的 10 k - 20 k 次查找后,大部分的表会处在 L3 缓存中。
**Failed lookup**
相对于成功查找这里就更分散一些。DO1 和 DO2 跑在了前面,但 UM 并没有落下OL 则是捉襟见肘。我猜测这是因为 OL 整体上具有更长的搜索路径尤其是在失败查找时内存中hash 值、key 和 value 交错存放也让每一步探测的代价更高。DO1 和 DO2 具有相同的搜索长度,但是它们将所有的 hash 值紧凑地打包在内存中,这使得问题有所缓解。
**Remove**
DO2 很显然是赢家,但 DO1 也未落下。Ch 则落后UM 则是差的不是一丁半点(主要是因为每次移除都要释放内存);差距随着负载的增加而拉大。移除操作是唯一不需要接触数据的操作,只需要 hash 值和 key 的帮助,这也是为什么 DO1 和 DO2 在移除操作中的表现大相径庭,而其它测试中却保持一致。(如果你的值不是 POD 类型的,并需要析构,这种差异应该是会消失的。)
**Destruct**
Ch 除了最小负载,其它的情况都是最快的(最小负载时,约等于 OA 变种)。所有的 OA 变种基本都是相等的。注意,在我的 hash 表中所做的所有析构操作都只是释放少量的内存 buffer。但是[在 Windows 中,释放内存的消耗和释放的大小成比例关系][13]。(而且,这是一个很显著的开销 —— 释放 1 GB 的内存需要 100 ms 的时间!)
UM 在析构时是最慢的一个(小负载时,慢的程度可以用数量级来衡量),大负载时依旧是稍慢些。对于 UM 来讲,释放每一个元素而不是释放一组数组真的是一个硬伤。
#### Linux
我还在装有 Linux Mint 17.1 的 Core i5-4570S 机器上使用 gcc 4.8 和 clang 3.5 来运行了测试。gcc 和 clang 的结果很相像,因此我只展示了 gcc 的;完整的结果集合包含在了代码下载打包文件中,链接在上面。(点击图来缩放)
[
![Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S](http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png "Results for g++ 4.8, Linux Mint 17.1, Core i5-4570S")
][15]
大部分结果和 Windows 很相似,因此我只高亮了一些有趣的不同点。
**Lookup**
这里 DO1 跑在前头,而在 Windows 中是 DO2 更快些。LCTT 译注:这里原文写错了吧?)同样UM 和 Ch 落后于其它所有的实现 —— 过多的指针追踪,而 OA 只需要在内存中线性移动即可。至于 Windows 和 Linux 结果为何不同则不是很清楚。UM 同样比 Ch 慢了不少,特别是大负载时,这很奇怪;我期望的是它们基本相同。
**Failed lookup**
UM 再一次落后于其它实现,甚至比 OL 还要慢。我再一次无法理解为何 UM 比 Ch 慢这么多,也无法理解 Linux 和 Windows 的结果为何有着如此大的差距。
**Destruct**
在我的实现中,小负载的时候,析构的消耗太少了,以至于无法测量;在大负载中,析构开销线性增加,其比例似乎和实际用到的虚拟内存页数量相关,而不是申请到的数量?同样,析构要比 Windows 中快上几个数量级。但是并不是所有的都和 hash 表有关我们在这里看到的是不同操作系统和运行时内存系统的表现。貌似 Linux 释放大内存块要比 Windows 快上不少(或者 Linux 很好地隐藏了开销,或许将释放工作推迟到了进程退出,又或者将工作推给了其它线程或者进程)。
UM 由于要释放每一个元素,所以在所有的负载中都比其它慢上几个数量级。事实上,我将图片做了剪裁,因为 UM 太慢了,以至于破坏了 Y 轴的比例。
### 总结
好,当我们凝视各种情况下的数据和矛盾的结果时,我们可以得出什么结论呢?我很想直截了当地告诉你这些 hash 表变种中有一个打败了其它所有,但事实显然没那么简单。不过我们仍然可以学到一些东西。
首先,在大多数情况下我们“很容易”做的比 `std::unordered_map` 还要好。我为这些测试所写的所有实现(它们并不复杂;我只花了一两个小时就写完了)要么与 `unordered_map` 持平,要么在其基础上有所提高,除了大负载(超过 128 字节)中的插入性能,此时 `unordered_map` 为每一个节点独立申请存储占了优势。(尽管我没有测试,我同样期望 `unordered_map` 能在非 POD 类型的负载上取得胜利。)具有指导意义的是,如果你非常关心性能,不要假设你的标准库中的数据结构是高度优化的。它们可能只是针对 C++ 标准的一致性做了优化,而不是性能。:P
其次,如果你的哈希表既要用于小负载、又要用于超大负载,那么 DO1开放寻址线性探测hash/状态与 key/value 分别存放在独立的扁平数组中很难被打败。它的插入不是最快的但也不坏还比 `unordered_map` 快),并且在查找、移除、析构中都很快。瞧 —— “面向数据设计”奏效了!
注意,我为这些测试写的哈希表代码远未达到生产环境的要求 —— 它们只支持 POD 类型没有拷贝构造函数以及类似的东西也不检测重复的 key等等。我可能会尽快构建一些实际可用的 hash 表放进我的实用库中。为了覆盖各种情况我想我将需要两个变种一个基于 DO1用于移动开销不大的小负载另一个使用链接法、避免重新申请和移动元素就像 `unordered_map` 那样),用于大负载或者移动开销大的负载情况。这应该能两全其美。
与此同时,我希望你们会有所启迪。最后记住,如果 Chandler Carruth 和 Mike Acton 在数据结构上给你提出些建议,你一定要听。
--------------------------------------------------------------------------------
作者简介:
我是一名图形程序员,目前在西雅图做自由职业者。之前我在 NVIDIA 的 DevTech 软件团队中工作,并在 Sucker Punch Productions 工作室中为 PS3 和 PS4 的 Infamous 系列游戏开发渲染技术。
自 2002 年起,我对图形非常感兴趣,并且已经完成了一系列的工作,包括:雾、大气雾霾、体积照明、水、视觉效果、粒子系统、皮肤和头发阴影、后处理、镜面模型、线性空间渲染、和 GPU 性能测量和优化。
你可以在我的博客了解更多和我有关的事。除了图形,我还对理论物理和程序设计感兴趣。
你可以通过 nathaniel.reed@gmail.com 或者 Twitter@Reedbeta/Google+ 联系我。我也会经常在 StackExchange 上回答计算机图形的问题。
--------------
via: http://reedbeta.com/blog/data-oriented-hash-table/
作者:[Nathan Reed][a]
译者:[sanfusu](https://github.com/sanfusu)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://reedbeta.com/about/
[1]:http://reedbeta.com/blog/data-oriented-hash-table/
[2]:http://reedbeta.com/blog/category/coding/
[3]:http://reedbeta.com/blog/data-oriented-hash-table/#comments
[4]:http://baptiste-wicht.com/posts/2012/12/cpp-benchmark-vector-list-deque.html
[5]:https://www.youtube.com/watch?v=fHNmRkzxHWs
[6]:https://www.youtube.com/watch?v=rX0ItVEVjHc
[7]:http://reedbeta.com/blog/data-oriented-hash-table/#the-tests
[8]:http://burtleburtle.net/bob/hash/spooky.html
[9]:http://reedbeta.com/blog/data-oriented-hash-table/hash-table-tests.zip
[10]:http://reedbeta.com/blog/data-oriented-hash-table/#the-results
[11]:http://reedbeta.com/blog/data-oriented-hash-table/#windows
[12]:http://reedbeta.com/blog/data-oriented-hash-table/results-vs2012.png
[13]:https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/
[14]:http://reedbeta.com/blog/data-oriented-hash-table/#linux
[15]:http://reedbeta.com/blog/data-oriented-hash-table/results-g++4.8.png
[16]:http://reedbeta.com/blog/data-oriented-hash-table/#conclusions

View File

@ -1,14 +1,14 @@
使用IBM Bluemix构建部署和管理自定义应用程序
使用 IBM Bluemix 构建,部署和管理自定义应用程序
============================================================
![IBM Bluemix logo](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/IBM-Blue-mix-logo.jpg?resize=300%2C266)
IBM Bluemix 为开发人员提供了构建部署和管理自定义应用程序的机会。Bluemix 建立在 Cloud Foundry 上。它支持多种编程语言,包括 IBM 的 OpenWhisk ,还允许开发人员无需资源管理就调用任何函数。
IBM Bluemix 为开发人员提供了构建部署和管理自定义应用程序的机会。Bluemix 建立在 Cloud Foundry 上。它支持多种编程语言,包括 IBM 的 OpenWhisk ,还允许开发人员无需资源管理就调用任何函数。
Bluemix 是由 IBM 实现的开放标准的基于平台。它具有开放的架构,其允许组织能够在云上创建开发和管理其应用程序。它基于 Cloud Foundry 因此可以被视为平台即服务PaaS。使用 Bluemix开发人员不必关心云端配置可以专注于他们的应用程序。 云端配置将由 Bluemix 自动完成。
Bluemix 是由 IBM 实现的基于开放标准的云平台。它具有开放的架构,其允许组织能够在云上创建开发和管理其应用程序。它基于 Cloud Foundry 因此可以被视为平台即服务PaaS。使用 Bluemix开发人员不必关心云端配置可以专注于他们的应用程序。 云端配置将由 Bluemix 自动完成。
Bluemix 还提供了一个仪表板,通过它,开发人员可以创建,管理和查看服务和应用程序,同时还可以监控资源使用情况。
它支持以下编程语言:
* Java
@ -21,143 +21,132 @@ Bluemix 还提供了一个仪表板,通过它,开发人员可以创建,管
![图1 IBM Bluemix概述](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-1-An-Overview-of-IBM-Bluemix.jpg?resize=296%2C307)
图1 IBM Bluemix 概述
*图1 IBM Bluemix 概述*
![图2 IBM Bluemix体系结构](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-2-The-IBM-Bluemix-architecture.jpg?resize=350%2C239)
图2 IBM Bluemix 体系结构
*图2 IBM Bluemix 体系结构*
![图3 在IBM Bluemix 中创建组织](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-3-Creating-an-organisation-in-IBM-Bluemix.jpg?resize=350%2C280)
图3 在 IBM Bluemix 中创建组织
*图3 在 IBM Bluemix 中创建组织*
**IBM Bluemix 如何工作**
### IBM Bluemix 如何工作
Bluemix 构建在 IBM 的 SoftLayer IaaS基础架构即服务之上。它使用 Cloud Foundry 作为开源 PaaS 平台。一切起于通过 Cloud Foundry 来推送代码,它扮演着整合代码和根据编写应用所使用的编程语言所适配的运行时环境的角色。IBM 服务、第三方服务或社区构建的服务可用于不同的功能。安全连接器可用于连接本地系统到云。
Bluemix 构建在 IBM 的 SoftLayer IaaS基础架构即服务之上。它使用 Cloud Foundry 作为开源 PaaS 平台。一切起于通过 Cloud Foundry 来推送代码,它扮演着将代码和编写应用所使用的编程语言运行时环境整合起来的角色。IBM 服务、第三方服务或社区构建的服务可用于不同的功能。安全连接器可用于将本地系统连接到云。
![图4 在IBM Bluemix中设置空间](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-4-Setting-up-Space-in-IBM-Bluemix.jpg?resize=350%2C267)
图4 在 IBM Bluemix 中设置空间
*图4 在 IBM Bluemix 中设置空间*
![图5 应用程序模板](http://i2.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-5-The-app-template.jpg?resize=350%2C135)
图5 应用程序模板
*图5 应用程序模板*
![图6 IBM Bluemix支持的编程语言](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-6-IBM-Bluemix-supported-programming-languages.jpg?resize=350%2C173)
图6 IBM Bluemix 支持的编程语言
*图6 IBM Bluemix 支持的编程语言*
### 在 Bluemix 中创建应用程序
**在 Bluemix 中创建应用程序**
在本文中,我们将使用 Liberty for Java 的入门包在 IBM Bluemix 中创建一个示例“Hello World”应用程序只需几个简单的步骤。
1. 打开 [_https//console.ng.bluemix.net/registration/_][2]
1、 打开 [https//console.ng.bluemix.net/registration/][2]
2. 注册 Bluemix 帐户
2 注册 Bluemix 帐户
3. 点击邮件中的确认链接完成注册过程
3 点击邮件中的确认链接完成注册过程
4. 输入您的电子邮件 ID然后点击 _Continue_ 进行登录
4、 输入您的电子邮件 ID然后点击 Continue 进行登录
5. 输入密码并点击 _Log in_
5、 输入密码并点击 Log in
6. 进入 _Set up_ -> _Environment_ 设置特定区域中的资源共享
6、 进入 Set up -> Environment 设置特定区域中的资源共享
7. 创建空间方便管理访问控制和在 Bluemix 中回滚操作。 我们可以将空间映射到多个开发阶段,如 dev testuatpre-prod 和 prod
7 创建空间方便管理访问控制和在 Bluemix 中回滚操作。 我们可以将空间映射到多个开发阶段,如 dev testuatpre-prod 和 prod
![图7 命名应用程序](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-7-Naming-the-app.jpg?resize=350%2C133)
图7 命名应用程序
*图7 命名应用程序*
![图8 了解应用程序何时准备就绪](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-8-Knowing-when-the-app-is-ready.jpg?resize=350%2C170)
图8 了解应用程序何时准备就绪
*图8 了解应用程序何时准备就绪*
![图9 IBM Bluemix Java应用程序](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-9-The-IBM-Bluemix-Java-App.jpg?resize=350%2C151)
图9 IBM Bluemix Java 应用程序
*图9 IBM Bluemix Java 应用程序*
8. 完成初始配置后,单击 _I'm ready_ -> _Good to Go_
8、 完成初始配置后,单击 I'm ready -> Good to Go
9. 成功登录后,此时检查 IBM Bluemix 仪表板,特别是 Cloud Foundry Apps其中2GB可用和 Virtual Server其中0个实例可用的部分
9 成功登录后,此时检查 IBM Bluemix 仪表板,特别是 Cloud Foundry Apps其中 2GB 可用)和 Virtual Server其中 0 个实例可用)的部分
10. 点击 _Create app_,选择应用创建模板。在我们的例子中,我们将使用一个 Web 应用程序
10、 点击 Create app,选择应用创建模板。在我们的例子中,我们将使用一个 Web 应用程序
11. 如何开始?单击 Liberty for Java ,然后查看其描述
11 如何开始?单击 Liberty for Java ,然后查看其描述
12. 单击 _Continue_
12、 单击 Continue
13. 为新应用命名。对于本文,让我们使用 osfy-bluemix-tutorial 命名然后单击 _Finish_
13 为新应用命名。对于本文,让我们使用 osfy-bluemix-tutorial 命名然后单击 Finish
14. 在 Bluemix 上创建资源和托管应用程序需要等待一些时间
14 在 Bluemix 上创建资源和托管应用程序需要等待一些时间
15. 几分钟后应用程式就会开始运作。注意应用程序的URL
15、 几分钟后应用程序就会开始运作。注意应用程序的 URL
16. 访问应用程序的URL _http//osfy-bluemix-tutorial.au-syd.mybluemix.net/_, Bingo,我们的第一个在 IBM Bluemix 上的 Java 应用程序成功运行
16、 访问应用程序的 URL http//osfy-bluemix-tutorial.au-syd.mybluemix.net/ 。不错,我们的第一个在 IBM Bluemix 上的 Java 应用程序成功运行!
17. 为了检查源代码,请单击 _Files_ 并在门户中导航到不同文件和文件夹
17、 为了检查源代码,请单击 Files 并在门户中导航到不同文件和文件夹
18. _Logs_ 部分提供包括从应用程序的创建时起的所有活动日志。
18、 Logs 部分提供包括从应用程序的创建时起的所有活动日志。
19. _Environment Variables_ 部分提供关于 VCAP_Services 的所有环境变量以及用户定义的环境变量的详细信息
19、 Environment Variables 部分提供关于 VCAP_Services 的所有环境变量以及用户定义的环境变量的详细信息
20. 要检查应用程序的资源消耗,需要到 Liberty for Java 那一部分。
20 要检查应用程序的资源消耗,需要到 Liberty for Java 那一部分。
21. 默认情况下,每个应用程序的 _Overview_ 部分包含资源,应用程序的运行状况和活动日志的详细信息
21、 默认情况下,每个应用程序的 Overview 部分包含资源,应用程序的运行状况和活动日志的详细信息
22. 打开 Eclipse转到帮助菜单然后单击 _Eclipse Marketplace_
22、 打开 Eclipse转到帮助菜单然后单击 Eclipse Marketplace
23. 查找 _IBM Eclipse tools for Bluemix_ 并单击 _Install_
23、 查找 IBM Eclipse tools for Bluemix 并单击 Install
24. 确认所选的功能并将其安装在 Eclipse 中
24 确认所选的功能并将其安装在 Eclipse 中
25. 下载应用程序启动器代码。点击 _File Menu_,将它导入到 Eclipse 中,选择 _Import Existing Projects_ -> _Workspace_, 然后开始修改代码
25、 下载应用程序启动器代码。点击 File Menu将它导入到 Eclipse 中,选择 Import Existing Projects -> Workspace, 然后开始修改代码
![图10 Java应用程序源文件](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-10-The-Java-app-source-files.jpg?resize=350%2C173)
图10 Java 应用程序源文件
*图10 Java 应用程序源文件*
![图11 Java应用程序日志](http://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-11-The-Java-app-logs.jpg?resize=350%2C133)
图11 Java 应用程序日志
*图11 Java 应用程序日志*
![图12 Java应用程序 - Liberty for Java](http://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/10/Figure-12-Java-app-Liberty-for-Java.jpg?resize=350%2C169)
图12 Java 应用程序 - Liberty for Java
*图12 Java 应用程序 - Liberty for Java*
### 为什么选择 IBM Bluemix
**为什么选择 IBM Bluemix**
以下是使用 IBM Bluemix 的一些令人信服的理由:
* 支持多种语言和平台
* 免费试用
1. 简化的注册过程
2. 不需要信用卡
3. 30 天试用期 - 配额 2GB 的运行时,支持 20 个服务500 个 route
4. 无限制地访问标准支持
5. 没有生产使用限制
* 仅为每个使用的运行时和服务付费
* 快速设置 - 从而加快上架时间
* 持续交付新功能
* 与本地资源的安全集成
* 用例
1. Web 应用程序和移动后端
2. API 和内部集成
* DevOps 服务可部署在云上的 SaaS ,并支持持续交付:
1. Web IDE
2. SCM
3. 敏捷规划
4. 交付流水线服务
--------------------------------------------------------------------------------

View File

@ -1,38 +1,39 @@
使用 Exercism 提升你的编程技巧
============================================================
### 这些练习目前已经支持 33 种编程语言了。
> 这些练习目前已经支持 33 种编程语言了。
![Improve your programming skills with Exercism ](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/code2.png?itok=CVgC8tlK "Improve your programming skills with Exercism ")
> 图片提供opensource.com
我们中的很多人有 2017 年的目标,将提高编程能力或学习如何编程放在第一位。虽然我们有许多资源可以访问,但练习独立于特定职业的代码开发的艺术还是需要一些规划。[Exercism.io][1] 就是为此目的而设计的一种资源。
我们中的很多人的 2017 年目标,将提高编程能力或学习如何编程放在第一位。虽然我们有许多资源可以访问,但练习独立于特定职业的代码开发的艺术还是需要一些规划。[Exercism.io][1] 就是为此目的而设计的一种资源。
Exercism 是一个 [开源][2] 项目和服务通过发现和协作帮助人们提高他们的编程技能。Exercism 提供了几十种不同编程语言的练习。实践者完成每个练习,并获得反馈,从而可以从他们的同行小组的经验中学习。
Exercism 是一个 [开源][2] 项目和服务通过发现和协作帮助人们提高他们的编程技能。Exercism 提供了几十种不同编程语言的练习。实践者完成每个练习,并获得反馈,从而可以从他们的同行小组的经验中学习。
这里有这么多同行! Exercism 在 2016 年留下了一些令人印象深刻的统计:
* 有来自201个不同国家的参与者
* 有来自 201 个不同国家的参与者
* 自 2013 年 6 月以来29,000 名参与者提交了练习,其中仅在 2016 年就有 15,500 名参加者提交练习
* 自 2013 年 6 月以来15,000 名参与者就练习解决方案提供反馈,其中 2016 年有 5,500 人提供反馈
* 每月 50,000 名访客,每周超过 12,000 名访客
* 目前练习支持 33 种编程语言,另外 22 种语言在筹备工作中
* 目前练习已经支持 33 种编程语言,另外 22 种语言在筹备工作中
该项目为所有级别的参与者提供了一系列小小的胜利,使他们能够“即使在低水平也能发展到高度流利Exercism 的创始人 [Katrina Owen][3] 这样说到。Exercism 并不旨在教导学员成为一名职业程序员,但它的练习使他们对一种语言及其瑕疵有深刻的了解。这种熟悉性消除了学习者对语言的认知负担(流利),使他们能够专注于更困难的架构和最佳实践(熟练)的问题。
该项目为各种级别的参与者提供了一系列小小的挑战,使他们能够“即使在低水平也能发展到高度谙熟Exercism 的创始人 [Katrina Owen][3] 这样说到。Exercism 并不旨在教导学员成为一名职业程序员,但它的练习使他们对一种语言及其瑕疵有深刻的了解。这种熟悉性消除了学习者对语言的认知负担(使之更谙熟),使他们能够专注于更困难的架构和最佳实践的问题。
Exercism 通过一系列练习(还有什么?)来做到这一点。程序员下载[命令行客户端][4],检索第一个练习,添加完成练习的代码,然后提交解决方案。提交解决方案后,程序员可以研究他人的解决方案,并学习到对同一个问题不同的解决方式。更重要的是,每个解决方案都会收到来自其他参与者的反馈。
Exercism 通过一系列练习(或者还有别的?)来做到这一点。程序员下载[命令行客户端][4],检索第一个练习,添加完成练习的代码,然后提交解决方案。提交解决方案后,程序员可以研究他人的解决方案,并学习到对同一个问题不同的解决方式。更重要的是,每个解决方案都会收到来自其他参与者的反馈。
反馈是 Exercism 的超级力量。鼓励所有参与者不仅接收反馈而且提供反馈。根据 Owen 说的Exercism 的社区成员提供反馈比完成练习学到更多。她说:“这是一个强大的学习经验,你被迫发表内心感受,并检查你的假设、习惯和偏见”。她还指出,反馈可以有多种形式。
反馈是 Exercism 的超级力量。鼓励所有参与者不仅接收反馈而且提供反馈。根据 Owen 说的Exercism 的社区成员提供反馈比完成练习学到更多。她说:“这是一个强大的学习经验,你需要发表内心感受,并检查你的假设、习惯和偏见”。她还指出,反馈可以有多种形式。
欧文说:“只需进入,观察并问问题”。
欧文说:“只需进入,观察并发问”。
那些刚刚接触编程,甚至只是一种特定语言的人,可以通过质疑假设来提供有价值的反馈,同时通过协作和对话来学习。
那些刚刚接触编程,甚至只是接触了一种特定语言的人,可以通过质疑假设来提供有价值的反馈,同时通过协作和对话来学习。
除了对新语言的 <ruby>“微课”学习<rt>bite-sized learning</rt></ruby> 之外Exercism 本身还强烈支持和鼓励项目的新贡献者。在 [SitePoint.com][5] 的一篇文章中,欧文强调:“如果你想为开源贡献代码,你所需要的技能水平只要‘够用’即可。” Exercism 不仅鼓励新的贡献者,它还尽可能地帮助新贡献者发布他们项目中的第一个补丁。到目前为止,有近 1000 人是[ Exercism 项目][6]的贡献者。
除了对新语言的 <ruby>“微课”学习<rt>bite-sized learning</rt></ruby> 之外Exercism 本身还强烈支持和鼓励项目的新贡献者。在 [SitePoint.com][5] 的一篇文章中,欧文强调:“如果你想为开源贡献代码,你所需要的技能水平只要‘够用’即可。” Exercism 不仅鼓励新的贡献者,它还尽可能地帮助新贡献者发布他们项目中的第一个补丁。到目前为止,有近 1000 人成为 [Exercism 项目][6]的贡献者。
新贡献者会有大量工作让他们忙碌。 Exercism 目前正在审查[其语言轨道的健康状况][7],目的是使所有轨道可持续并避免维护者的倦怠。它还在寻求[捐赠][8]和赞助,聘请设计师提高网站的可用性。
新贡献者会有大量工作让他们忙碌。 Exercism 目前正在审查[其语言发展轨迹的健康状况][7],目的是使所有发展轨迹可持续并避免维护者的倦怠。它还在寻求[捐赠][8]和赞助,聘请设计师提高网站的可用性。
Owen 说:“这些改进对于网站的健康以及为了 Exercism 参与者的发展是有必要的,这些变化还鼓励新贡献者加入并简化了加入的途径。” 她说:“如果我们可以重新设计,产品方面将更加可维护。。。当用户体验一团糟,华丽的代码一点用也没有”。该项目有一个非常活跃的[讨论仓库][9],这里社区成员合作来发现最好的新方法和功能。
Owen 说:“这些改进对于网站的健康以及为了 Exercism 参与者的发展是有必要的,这些变化还鼓励新贡献者加入并简化了加入的途径。” 她说:“如果我们可以重新设计,产品方面将更加可维护……当用户体验一团糟时,华丽的代码一点用也没有”。该项目有一个非常活跃的[讨论仓库][9],这里社区成员合作来发现最好的新方法和功能。
那些想关注项目但还没有参与的人可以关注[邮件列表][10]。
@ -42,10 +43,12 @@ Owen 说:“这些改进对于网站的健康以及为了 Exercism 参与者
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/vmb_helvetica_sm.png?itok=mSb3xriS)
VMVickyBrasseur - VM也称为 Vicky是技术人员、项目、流程、产品和 p^Hbusinesses 的经理。在她超过 18 年的科技行业从业中,她曾是分析师、程序员、产品经理、软件工程经理和软件工程总监。 目前,她是 Hewlett Packard Enterprise 上游开源开发团队的高级工程经理。 VM 的博客在 anonymoushash.vmbrasseur.comtweets 在 @vmbrasseur
VMVickyBrasseur - VM也称为 Vicky是技术人员、项目、流程、产品和 p^Hbusinesses 的经理。在她超过 18 年的科技行业从业中,她曾是分析师、程序员、产品经理、软件工程经理和软件工程总监。 目前,她是 Hewlett Packard Enterprise 上游开源开发团队的高级工程经理。 VM 的博客在 anonymoushash.vmbrasseur.comtweets 在 @vmbrasseur
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/exercism-learning-programming
作者:[VM (Vicky) Brasseur][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)

View File

@ -1,26 +1,24 @@
What engineers and marketers can learn from each other
============================================================
工程师和市场营销人员之间能够相互学习什么?
============================================================
### 营销人员觉得工程师在工作中都太严谨了;而工程师则认为营销人员都很懒散。但是他们都错了。
> 营销人员觉得工程师在工作中都太严谨了;而工程师则认为营销人员毫无用处。但是他们都错了。
![What engineers and marketers can learn from each other](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_fortunecookie3.png?itok=dlRX4vO9 "What engineers and marketers can learn from each other")
图片来源 :
opensource.com
图片来源opensource.com
在 B2B 行业从事多年的销售实践过程中,我经常听到工程师对营销人员的各种误解。下面这些是比较常见的:
* ”搞市场营销真是浪费钱,还不如把更多的资金投入到实际的产品开发中来。“
* 那些营销人员只是一个劲儿往墙上贴各种广告,还祈祷着它们不要掉下来。这么做有啥科学依据啊?
* ”谁愿意去看哪些广告啊?“
* ”对待一个营销人员最好的办法就是不订阅,不关注,也不理睬。“
* “搞市场营销真是浪费钱,还不如把更多的资金投入到实际的产品开发中来。”
* 那些营销人员只是一个劲儿往墙上贴各种广告,还祈祷着它们不要掉下来。这么做有啥科学依据啊?
* “谁愿意去看哪些广告啊?”
* “对待一个营销人员最好的办法就是不听,不看,也不理睬。”
这是我最感兴趣的一点:
_“营销人员都很懒散。”_
_“市场营销无足轻重。”_
最后一点说的不对,不够全面,懒散实际上是阻碍一个公司发展的巨大绊脚石。
最后一点说的不对,而且不仅如此,它实际上是阻碍一个公司创新的巨大绊脚石。
我来跟大家解释一下原因吧。
@ -28,27 +26,27 @@ _“营销人员都很懒散。”_
这些工程师的的评论让我十分的苦恼,因为我从中看到了自己当年的身影。
你们知道吗?我曾经也跟你们一样是一位自豪的技术极客。我在 Rensselaer Polytechnic 学院的电气工程专业本科毕业后便在美国空军开始了我的职业生涯,而且美国空军在那段时间还发动了军事上的沙漠风暴行动。在那里我主要负责开发并部属一套智能的实时战况分析系统,用于根据各种各样的数据源来构建出战场上的画面
你们知道吗?我曾经也跟你们一样是一位自豪的技术极客。我在 Rensselaer Polytechnic 学院的电气工程专业本科毕业后,便以军官身份在美国空军开始了我的职业生涯,而且美国空军在那段时间还发动了沙漠风暴行动。在那里我主要负责开发并部署一套智能的实时战况分析系统,用于综合多个数据源来构建出战场态势。
在我离开空军之后,我本打算去麻省理工学院攻读博士学位。但是上校强烈建议我去报读这个学校的商学院。“你真的想一辈子待实验室里吗?”他问我。“你想就这么去大学里当个教书匠吗? Jackie ,你在组织管理那些复杂的工作中比较有天赋。我觉得你非常有必要去了解下 MIT 的斯隆商学院。”
在我离开空军之后我本打算去麻省理工学院攻读博士学位。但是上校强烈建议我去报读这个学校的商学院。“你真的想一辈子待在实验室里吗”他问我。“你想就这么去大学里当个教书匠吗Jackie ,你在组织管理那些复杂的工作中比较有天赋。我觉得你非常有必要去了解下 MIT 的斯隆商学院。”
我觉得自己也可以同时参加一些 MIT 技术方面的课程,因此我采纳了他的建议。但是,如果要参加市场营销管理方面的课程,我还有很长的路要走,这完全是在浪费时间。因此,在日常工作学习中,我始终是用自己所擅长的分析能力去解决一切问题。
我觉得自己也可以同时参加一些 MIT 技术方面的课程,因此我采纳了他的建议。然而,如果要参加市场营销管理方面的课程,我还有很长的路要走,这完全是在浪费时间。因此,在日常工作学习中,我始终是用自己所擅长的分析能力去解决一切问题。
不久后,我在波士顿咨询集团公司做咨询顾问工作。在那六年的时间里,我经常听到大家对我的评论: Jackie ,你太没远见了。考虑问题也不够周全。你总是通过自己的分析数据去找答案。“
不久后,我在波士顿咨询集团公司做咨询顾问工作。在那六年的时间里,我经常听到大家对我的评论: Jackie ,你太没远见了。考虑问题也不够周全。你总是通过自己的分析去找答案。”
确实如此啊,我很赞同他们的想法——因为这个世界的工作方式本该如此,任何问题都要基于数据进行分析,不对吗?直到现在我才意识到(我多么希望自己早一些发现自己的问题)自己以前惯用的分析问题的方法遗漏了很多重要的东西:开放的心态,艺术修养,情感——人和创造性思维相关的因素。
确实如此啊,我很赞同他们的想法——因为这个世界的工作方式本该如此,任何问题都要基于数据进行分析,不对吗?直到现在我才意识到(我多么希望自己早一些发现自己的问题)自己以前惯用的分析问题的方法遗漏了很多重要的东西:开放的心态、艺术修养、情感——人和创造性思维相关的因素。
我在 2001 年 9 月 11 日加入达美航空公司不久后,被调去管理消费者市场部门,之前我意识到的所有问题变得更加明显。本来不是我的强项,但是在公司需要的情况下,我也愿意出手相肋。
我在 2001 年 9 月 11 日加入达美航空公司不久后,被调去管理消费者市场部门,之前我意识到的所有问题变得更加明显。市场方面本来不是我的强项,但是在公司需要的情况下,我也愿意出手相肋。
但是突然之间,我一直惯用的方法获取到的常规数据分析结果却与实际情况完全相反。这个问题导致上千人(包括航线内外的人)受到影响。我忽略了一个很重要的人本身的情感因素。我所面临的问题需要各种各样的解决方案才能处理,而不是简单的从那些死板的分析数据中就能得到答案。
但是突然之间,我一直惯用的方法获取到的分析结果却与实际情况完全相反。这个问题导致上千人(包括航线内外的人)受到影响。我忽略了一个很重要的人本身的情感因素。我所面临的问题需要各种各样的解决方案才能处理,而不是简单的从那些死板的数据中就能得到答案。
那段时间,我快速地学到了很多东西,因为如果我们想把达美航空公司恢复到正常状态,还需要做很多的工作——市场营销更像是一个以解决问题为导向以用户为中心的充满挑战性的大工程,只是销售人员和工程师这两大阵营都没有迅速地意识到这个问题。
那段时间,我快速地学到了很多东西,因为如果我们想把达美航空公司恢复到正常状态,还需要做很多的工作——市场营销更像是一个以解决问题为导向以用户为中心的充满挑战性的大工程,只是销售人员和工程师这两大阵营都没有迅速地意识到这个问题。
### 两大文化差异
工程管理和市场营销之间的这个“巨大鸿沟”确实是根深蒂固的,这跟 C.P. Snow (英语物理化学家和小说家)提出的[“两大文化差异"问题][1]很相似。具有科学素质的工程师和具有艺术细胞的营销人员操着不同的语言,不同的文化观念导致他们不同的价值取向。
工程管理和市场营销之间的这个“巨大鸿沟”确实是根深蒂固的,这跟(著名的科学家、小说家) C.P. Snow 提出的[“两大文化差异”问题][1]很相似。具有科学素质的工程师和具有艺术细胞的营销人员操着不同的语言,不同的文化观念导致他们不同的价值取向。
但是,事实上他们比想象中有更多的相似之处。华盛顿大学[最新研究][2](由微软、谷歌和美国国家科学基金会共同赞助)发现”一个伟大软件工程师必须具备哪些优秀的素质,毫无疑问,一个伟大的销售人员同样也应该具备这些素质。例如,专家们给出的一些优秀品质如下:
但是,事实上他们比想象中有更多的相似之处。一个由微软、谷歌和美国国家科学基金会共同赞助的华盛顿大学的[最新研究][2]发现了“一个伟大软件工程师必须具备哪些优秀的素质,毫无疑问,一个伟大的销售人员同样也应该具备这些素质。例如,专家们给出的一些优秀品质如下:
* 充满激情
* 性格开朗
@ -56,29 +54,29 @@ _“营销人员都很懒散。”_
* 技艺精湛
* 解决复杂难题的能力
这些只是其中很小的一部分!当然,并不是所有的素质都适用于市场营销人员,但是如果用文氏图来表示这“两大文化的交集,就很容易看出营销人员和工程师之间的关系要远比我们想象中密切得多。他们都是竭力去解决与用户或客户相关的难题,只是他们所采取的方式和角度不一致罢了。
这些只是其中很小的一部分!当然,并不是所有的素质都适用于市场营销人员,但是如果用文氏图来表示这“两大文化的交集,就很容易看出营销人员和工程师之间的关系要远比我们想象中密切得多。他们都是竭力去解决与用户或客户相关的难题,只是他们所采取的方式和角度不一致罢了。
看到上面的那几点后我深深的陷入思考_要是这两类员工彼此之间再多了解对方一些会怎样呢这会给公司带来很强大的动力吧_
确实如此。我在红帽公司就亲眼看到过样的情形,我身边都是一些“思想极端”的员工,要是之前,肯定早被我炒鱿鱼了。我相信公司里绝对发生过很多次类似这样的事情,一个销售人员看完工程师递交上来的分析报表后,心想,“这些书呆子,思想太局限了。真是一叶障目,不见泰山;两豆塞耳,不闻雷霆。”
确实如此。我在红帽公司就亲眼看到过这样的情形,我身边都是一些早些年肯定会被我当成“想法疯狂”而无视的人。而且我猜,某些时候营销人员看到工程师的分析后,心里也会想:“这些数据呆瓜,真是只见树木不见森林。”
现在我才明白了公司里有这两种人才的重要性。在现实工作当中,工程师和营销人员都是围绕着客户、创新及数据分析来完成工作。如果他们能够懂得相互尊重、彼此理解、相辅相成,那么我们将会看到公司里所产生的那种积极强大的动力,这种超乎寻常的革新力量要远比两个独立的团队强大得多。
### 听一听他们的想法
### 听一听疯子(和呆瓜)的想法
成功案例:_建立开放式组织_
成功案例:《开放式组织》
在红帽任职期间,我的主要工作就是想办法提升公司的品牌影响力——但是我从未想过让公司的 CEO 去写一本书。我把公司多个部门的“想法极端”的同事召集在一起,希望他们帮我设计出一个新颖的解决方案来提升公司的影响力,结果他们提出让公司的 CEO 写书这样一个想法。
在红帽任职期间,我的主要工作就是想办法提升公司的品牌影响力——但是就是给我一百万年我也不会想到让公司的 CEO 去写一本书。我把公司多个部门的“想法疯狂”的同事召集在一起,希望他们帮我设计出一个新颖的解决方案来提升公司的影响力,结果他们提出让公司的 CEO 写书这样一个想法。
当我听到这个想法的时候,我很快意识到应该把红帽公司一些经典的管理模式写入到这本书里:它将对整个开源社区的创业者带来很重要的参考价值,同时也有助于宣扬开源精神。通过优先考虑这两方面的作用,我们提升了红帽在整个开源软件世界中的品牌价值红帽是一个可靠的随时准备着为客户在[数字化颠覆][3]年代指明方向的公司。
当我听到这个想法的时候,我很快意识到这正是典型的红帽方式:它将对整个开源社区的从业者带来很重要的参考价值,同时也有助于宣扬开源精神。通过优先考虑这两方面的作用,我们提升了红帽在整个开源软件世界中的品牌价值——红帽是一个可靠的随时准备着为客户在[数字化颠覆][3]年代指明方向的公司。
这一点才是主要的:确切的说是指导红帽工程师解决代码问题的共同精神力量。 Red Hatters 小组一直在催着我赶紧把开放式组织的模式在全公司推广起来,以显示出内外部程序员共同推动整个开源社区发展的强大动力之一:那就是强烈的共享欲望。
这一点才是主要的:确切地说,是指导红帽工程师解决代码问题的那种共同精神力量。Red Hatters 小组敦促我出版《开放式组织》,这显示出来自内部和外部社区的程序员共同推动整个开源社区发展的强大动力之一:那就是强烈的共享欲望。
最后,要把开放式组织的管理模式完全推广起来,还需要大家的共同能力,包括工程师们强大的数据分析能力和营销人员美好的艺术素养。这个项目让我更加坚定自己的想法,工程师和营销人员有更多的相似之处。
最后,要把《开放式组织》这本书完成,还需要大家的共同能力,包括工程师们强大的数据分析能力和营销人员美好的艺术素养。这个项目让我更加坚定自己的想法,工程师和营销人员有更多的相似之处。
但是,有些东西我还得强调下:开放模式的实现,要求公司上下没有任何偏见,不能偏袒工程师和市场营销人员任何一方文化。一个更加理想的开放式环境能够促使员工之间和平共处,并在这个组织规定的范围内点燃大家的热情。
所以,这绝对不是我听到大家所说的懒散之意
这对我来说如春风拂面
--------------------------------------------------------------------------------
@ -94,7 +92,7 @@ via: https://opensource.com/open-organization/17/1/engineers-marketers-can-learn
作者:[Jackie Yeaney][a]
译者:[rusking](https://github.com/rusking)
校对:[Bestony](https://github.com/Bestony)
校对:[Bestony](https://github.com/Bestony), [wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,678 @@
2016 Git 新视界
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*1SiSsLMsNSyAk6khb63W9g.png)
2016 年 Git 发生了 _惊天动地_ 的变化,发布了五大新特性[¹][57](从 _v2.7_ 到 _v2.11_)和十六个补丁[²][58]。189 位作者[³][59]贡献了 3676 个提交[⁴][60]到 `master` 分支,比 2015 年多了 15%[⁵][61]!总计有 1545 个文件被修改,其中增加了 276799 行并移除了 100973 行[⁶][62]。
但是,通过统计提交的数量和代码行数来衡量生产力是一种十分愚蠢的方法。除非开发者深度参与其中、能凭直觉判断代码质量,否则由我们这些旁人来作仲裁,难免有失偏颇。
谨记这一条于心,我决定整理这一年里涵盖我最喜爱的六个 Git 特性的改进,来做一次分类回顾。以 Medium 文章的标准来看,这篇文章也有点太长了,所以我不介意你们直接跳到自己特别感兴趣的特性去。
* [完成][41] [`git worktree`][25] [命令][42]
* [更多方便的][43] [`git rebase`][26] [选项][44]
* [`git lfs`][27] [梦幻的性能加速][45]
* [`git diff`][28] [实验性的算法和更好的默认结果][46]
* [`git submodules`][29] [差强人意][47]
* [`git stash`][30] 的[90 个增强][48]
在我们开始之前,请注意在大多数操作系统上都自带有 Git 的旧版本,所以你需要检查你是否在使用最新并且最棒的版本。如果在终端运行 `git --version` 返回的结果小于 Git `v2.11.0`,请立刻跳转到 Atlassian 的快速指南 [更新或安装 Git][63] 并根据你的平台做出选择。
### [所需的“引用”]
在我们进入高质量内容之前还需要做一个短暂的停顿:我觉得我需要为你展示我是如何从公开文档生成统计信息(以及开篇的封面图片)的。你也可以使用下面的命令来对你自己的仓库做一个快速的 *年度回顾*
> ¹ Tags from 2016 matching the form vX.Y.0
```
$ git for-each-ref --sort=-taggerdate --format \
'%(refname) %(taggerdate)' refs/tags | grep "v\d\.\d*\.0 .* 2016"
```
> ² Tags from 2016 matching the form vX.Y.Z
```
$ git for-each-ref --sort=-taggerdate --format '%(refname) %(taggerdate)' refs/tags | grep "v\d\.\d*\.[^0] .* 2016"
```
> ³ Commits by author in 2016
```
$ git shortlog -s -n --since=2016-01-01 --until=2017-01-01
```
> ⁴ Count commits in 2016
```
$ git log --oneline --since=2016-01-01 --until=2017-01-01 | wc -l
```
> ⁵ ... and in 2015
```
$ git log --oneline --since=2015-01-01 --until=2016-01-01 | wc -l
```
> ⁶ Net LOC added/removed in 2016
```
$ git diff --shortstat `git rev-list -1 --until=2016-01-01 master` \
`git rev-list -1 --until=2017-01-01 master`
```
以上的命令是在 Git 的 `master` 分支上运行的,所以不会统计那些尚未合并的分支上的工作。如果你使用这些命令,请记住提交的数量和代码行数不是值得信赖的度量方式。请不要使用它们来衡量你的团队成员的贡献。
现在,让我们开始说好的回顾……
### 完成 Git 工作树worktree
`git worktree` 命令首次出现于 Git v2.5,但是在 2016 年有了一些显著的增强。两个有价值的新特性在 v2.7 被引入:`list` 子命令,以及为二分搜索增加了命名空间的 refs。而 `lock`/`unlock` 子命令则是在 v2.10 被引入。
#### 什么是工作树呢?
[`git worktree`][49] 命令允许你同步地检出和操作处于不同路径下的同一仓库的多个分支。例如,假如你需要做一次快速的修复工作但又不想扰乱你当前的工作区,你可以使用以下命令在一个新路径下检出一个新分支:
```
$ git worktree add -b hotfix/BB-1234 ../hotfix/BB-1234
Preparing ../hotfix/BB-1234 (identifier BB-1234)
HEAD is now at 886e0ba Merged in bedwards/BB-13430-api-merge-pr (pull request #7822)
```
工作树不仅仅是为分支服务。你可以检出多个标签tag作为不同的工作树来并行构建或测试它们。例如我从 Git v2.6 和 v2.7 的标签中创建工作树来检验不同版本 Git 的行为特征。
```
$ git worktree add ../git-v2.6.0 v2.6.0
Preparing ../git-v2.6.0 (identifier git-v2.6.0)
HEAD is now at be08dee Git 2.6
$ git worktree add ../git-v2.7.0 v2.7.0
Preparing ../git-v2.7.0 (identifier git-v2.7.0)
HEAD is now at 7548842 Git 2.7
$ git worktree list
/Users/kannonboy/src/git 7548842 [master]
/Users/kannonboy/src/git-v2.6.0 be08dee (detached HEAD)
/Users/kannonboy/src/git-v2.7.0 7548842 (detached HEAD)
$ cd ../git-v2.7.0 && make
```
你也可以使用同样的技术来并行构造和运行你自己应用程序的不同版本。
#### 列出工作树
`git worktree list` 子命令(于 Git v2.7 引入)显示所有与当前仓库有关的工作树。
```
$ git worktree list
/Users/kannonboy/src/bitbucket/bitbucket 37732bd [master]
/Users/kannonboy/src/bitbucket/staging d5924bc [staging]
/Users/kannonboy/src/bitbucket/hotfix-1234 37732bd [hotfix/1234]
```
#### 二分查找工作树
[`git bisect`][50] 是一个简洁的 Git 命令,可以让我们对提交记录执行一次二分搜索。通常用来找到哪一次提交引入了一个指定的退化。例如,如果在我的 `master` 分支最后的提交上有一个测试没有通过,我可以使用 `git bisect` 来搜索仓库的历史,找寻第一次造成这个错误的提交。
```
$ git bisect start
# 找到已知通过测试的最后提交
# (例如最新的发布标签)
$ git bisect good v2.0.0
# 找到已知的出问题的提交
# (例如 `master` 的顶端)
$ git bisect bad master
# 告诉 git bisect 要运行的脚本/命令;
# git bisect 会在 “good” 和 “bad”范围内
# 找到导致脚本以非 0 状态退出的最旧的提交
$ git bisect run npm test
```
在后台bisect 使用 refs 来跟踪 “good” 与 “bad” 的提交来作为二分搜索范围的上下界限。不幸的是,对工作树的粉丝来说,这些 refs 都存储在寻常的 `.git/refs/bisect` 命名空间,意味着 `git bisect` 操作如果运行在不同的工作树下可能会互相干扰。
到了 v2.7 版本bisect 的 refs 移到了 `.git/worktrees/$worktree_name/refs/bisect` 所以你可以并行运行 bisect 操作于多个工作树中。
#### 锁定工作树
当你完成了一棵工作树上的工作,你可以直接删除它,然后通过运行 `git worktree prune` 让它稍后被当做垃圾自动回收。但是,如果你把工作树存储在网络共享或者可移动介质上,那么一旦清理时该目录恰好不可访问,这个工作树就会被彻底清除掉——不管你喜不喜欢Git v2.10 引入了 `git worktree lock``unlock` 子命令来防止这种情况发生。
```
# 在我的 USB 盘上锁定 git-v2.7 工作树
$ git worktree lock /Volumes/Flash_Gordon/git-v2.7 --reason \
"In case I remove my removable media"
```
```
# 当我完成时,解锁(并删除)该工作树
$ git worktree unlock /Volumes/Flash_Gordon/git-v2.7
$ rm -rf /Volumes/Flash_Gordon/git-v2.7
$ git worktree prune
```
`--reason` 标签允许为未来的你留一个记号,描述为什么当初工作树被锁定。`git worktree unlock` 和 `lock` 都要求你指定工作树的路径。或者,你可以 `cd` 到工作树目录然后运行 `git worktree lock .` 来达到同样的效果。
### 更多 Git 变基rebase选项
2016 年三月Git v2.8 增加了在拉取过程中交互式变基的命令 `git pull --rebase=interactive`。对应地,六月份的 Git v2.9 则通过 `git rebase -x` 提供了不进入交互模式也能在变基中执行命令的支持。
#### Re-啥?
在我们继续深入前,我假设读者中有些并不是很熟悉或者没有完全习惯变基命令或者交互式变基。从概念上说,它很简单,但是与很多 Git 的强大特性一样变基散发着听起来很复杂的专业术语的气息。所以在我们深入前先来快速的复习一下什么是变基rebase
变基操作意味着将一个或多个提交在一个指定分支上重写。`git rebase` 命令是被深度重载了,但是 rebase 这个名字的来源事实上还是它经常被用来改变一个分支的基准提交(你基于此提交创建了这个分支)。
从概念上说rebase 通过将你的分支上的提交临时存储为一系列补丁包,接着将这些补丁包按顺序依次打在目标提交之上。
![](https://cdn-images-1.medium.com/max/800/1*mgyl38slmqmcE4STS56nXA.gif)
对 master 分支的一个功能分支执行变基操作 `git rebase master`)是一种通过将 master 分支上最新的改变合并到功能分支的“保鲜法”。对于长期存在的功能分支,规律的变基操作能够最大程度地减少开发过程中出现冲突的可能性和严重性。
有些团队会选择在合并他们的改动到 master 前立即执行变基操作以实现一次快速合并 `git merge --ff <feature>`)。对 master 分支快速合并你的提交是通过简单的将 master ref 指向你的重写分支的顶点而不需要创建一个合并提交。
![](https://cdn-images-1.medium.com/max/800/1*QXa3znQiuNWDjxroX628VA.gif)
变基是如此方便和功能强大以致于它已经被嵌入其他常见的 Git 命令中,例如拉取操作 `git pull` 。如果你在本地 master 分支有未推送的提交,运行 `git pull` 命令从 origin 拉取你队友的改动会造成不必要的合并提交。
![](https://cdn-images-1.medium.com/max/800/1*IxDdJ5CygvSWdD8MCNpZNg.gif)
这有点混乱,而且在繁忙的团队,你会获得成堆的不必要的合并提交。`git pull --rebase` 将你本地的提交在你队友的提交上执行变基而不产生一个合并提交。
![](https://cdn-images-1.medium.com/max/800/1*HcroDMwBE9m21-hOeIwRmw.gif)
这很整洁吧甚至更酷的是Git v2.8 引入了一个新特性,允许你在拉取时 _交互地_ 变基。
#### 交互式变基
交互式变基是变基操作的一种更强大的形态。和标准变基操作相似,它可以重写提交,但它也可以向你提供一个机会让你能够交互式地修改这些将被重新运用在新基准上的提交。
当你运行 `git rebase --interactive` (或 `git pull --rebase=interactive`)时,你会在你的文本编辑器中得到一个可供选择的提交列表视图。
```
$ git rebase master --interactive
pick 2fde787 ACE-1294: replaced miniamalCommit with string in test
pick ed93626 ACE-1294: removed pull request service from test
pick b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
pick e68f710 ACE-1294: added testing data to batch email file
# Rebase f32fa9d..0ddde5f onto f32fa9d (4 commands)
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
#
# These lines can be re-ordered; they are executed from top to
# bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
```
注意到每一条提交旁都有一个 `pick`。这是对 rebase 而言,“照原样留下这个提交”。如果你现在就退出文本编辑器,它会执行一次如上文所述的普通变基操作。但是,如果你将 `pick` 改为 `edit` 或者其他 rebase 命令中的一个,变基操作会允许你在它被重新运用前改变它。有效的变基命令有如下几种:
* `reword`:编辑提交信息。
* `edit`:编辑提交了的文件。
* `squash`:将提交与之前的提交(同在文件中)合并,并将提交信息拼接。
* `fixup`:将本提交与上一条提交合并,并且逐字使用上一条提交的提交信息(这很方便,如果你为一个很小的改动创建了第二个提交,而它本身就应该属于上一条提交,例如,你忘记暂存了一个文件)。
* `exec`: 运行一条任意的 shell 命令(我们将会在下一节看到本例一次简洁的使用场景)。
* `drop`: 这将丢弃这条提交。
你也可以在文件内重新整理提交,这样会改变它们被重新应用的顺序。当你对不同的主题创建了交错的提交时这会很顺手,你可以使用 `squash` 或者 `fixup` 来将其合并成符合逻辑的原子提交。
当你设置完命令并且保存这个文件后Git 将逐条遍历提交,在每个 `reword``edit` 命令处为你暂停来执行你设计好的改变,并且自动运行 `squash``fixup``exec` 和 `drop` 命令。
#### 非交互式执行
当你执行变基操作时,本质上你是在通过将你每一条新提交应用于指定基址的头部来重写历史。`git pull --rebase` 可能会有一点危险,因为如果上游分支发生了改动,你重写出的历史中某些提交就可能遭遇测试失败甚至编译问题。如果这些改动引起了合并冲突,变基过程将会暂停并且允许你来解决它们。但是,能干净合并的改动仍然有可能打断编译或测试过程,留下损坏的提交,弄乱你的提交历史。
但是,你可以指导 Git 为每一个重写的提交来运行你的项目测试套件。在 Git v2.9 之前,你可以通过绑定 `git rebase --interactive``exec` 命令来实现。例如这样:
```
$ git rebase master --interactive --exec="npm test"
```
……这会生成一个交互式变基计划,在重写每条提交后执行 `npm test` ,保证你的测试仍然会通过:
```
pick 2fde787 ACE-1294: replaced miniamalCommit with string in test
exec npm test
pick ed93626 ACE-1294: removed pull request service from test
exec npm test
pick b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
exec npm test
pick e68f710 ACE-1294: added testing data to batch email file
exec npm test
# Rebase f32fa9d..0ddde5f onto f32fa9d (4 command(s))
```
如果出现了测试失败的情况,变基会暂停并让你修复这些测试(并且将你的修改应用于相应提交)
```
291 passing
1 failing
1) Host request "after all" hook:
Uncaught Error: connect ECONNRESET 127.0.0.1:3001
npm ERR! Test failed.
Execution failed: npm test
You can fix the problem, and then run
git rebase --continue
```
这很方便,但是使用交互式变基有一点臃肿。到了 Git v2.9,你可以这样来实现非交互式变基:
```
$ git rebase master -x "npm test"
```
你可以简单地将 `npm test` 替换为 `make`、`rake`、`mvn clean install`,或者任何你用来构建或测试你的项目的命令。
#### 小小警告
就像电影里一样,重写历史可是一个危险的行当。任何提交被重写为变基操作的一部分都将改变它的 SHA-1 ID这意味着 Git 会把它当作一个全新的提交对待。如果重写的历史和原来的历史混杂,你将获得重复的提交,而这可能在你的团队中引起不少的疑惑。
为了避免这个问题,你仅仅需要遵照一条简单的规则:
> _永远不要变基一条你已经推送的提交_
坚持这一点你会没事的。
### Git LFS 的性能提升
[Git 是一个分布式版本控制系统][64],意味着整个仓库的历史会在克隆阶段被传送到客户端。对包含大文件的项目——尤其是大文件经常被修改——初始克隆会非常耗时,因为每一个版本的每一个文件都必须下载到客户端。[Git LFSLarge File Storage 大文件存储)][65] 是一个 Git 拓展包,由 Atlassian、GitHub 和其他一些开源贡献者开发,通过需要时才下载大文件的相对版本来减少仓库中大文件的影响。更明确地说,大文件是在检出过程中按需下载的而不是在克隆或抓取过程中。
在 Git 2016 年的五大发布中Git LFS 自身就有四个功能版本的发布v1.2 到 v1.5。你可以仅对 Git LFS 这一项来写一系列回顾文章,但是就这篇文章而言,我将专注于 2016 年解决的一项最重要的主题:速度。一系列针对 Git 和 Git LFS 的改进极大程度地优化了将文件传入/传出服务器的性能。
#### 长期过滤进程
当你 `git add` 一个文件时Git 的净化过滤系统会在文件被写入 Git 对象存储之前转化文件的内容。Git LFS 通过使用净化过滤器clean filter将大文件内容存储到 LFS 缓存中以缩减仓库的大小,并且增加一个小“指针”文件到 Git 对象存储中作为替代。
![](https://cdn-images-1.medium.com/max/800/0*Ku328eca7GLOo7sS.png)
污化过滤器smudge filter是净化过滤器的对立面——正如其名。在 `git checkout` 过程中从 Git 对象存储读取文件内容时污化过滤系统有机会在文件被写入用户的工作区前将其改写。Git LFS 污化过滤器会将指针文件替换为对应的大文件,文件内容可以来自本地 LFS 缓存,或者从 Bitbucket 上的 Git LFS 存储读取。
![](https://cdn-images-1.medium.com/max/800/0*CU60meE1lbCuivn7.png)
传统上,污化和净化过滤进程在每个文件被添加和检出时都要被唤起一次。所以,一个项目如果有 1000 个文件在被 Git LFS 追踪,做一次全新的检出需要唤起 `git-lfs-smudge` 命令 1000 次。尽管单次操作相对很迅速,但是执行 1000 次独立的污化进程的总耗费惊人。
随着 Git v2.11(和 Git LFS v1.5的发布污化和净化过滤器可以被定义为长期进程为第一个需要过滤的文件调用一次然后为之后的文件持续提供污化或净化过滤直到父 Git 操作结束。[Lars Schneider][66] 是 Git 长期过滤系统的贡献者,他简洁地总结了这一改变对 Git LFS 性能的影响。
> 使用 12k 个文件的测试仓库的过滤进程在 macOS 上快了 80 倍,在 Windows 上 快了 58 倍。在 Windows 上,这意味着测试运行了 57 秒而不是 55 分钟。
这真是一个让人印象深刻的性能增强!
#### LFS 专有克隆
长期运行的污化和净化过滤器对加速本地缓存的读写贡献很大,但是对加速大对象传入/传出 Git LFS 服务器的贡献很少。每次 Git LFS 污化过滤器在本地 LFS 缓存中无法找到一个文件时,它不得不使用两次 HTTP 请求来获得该文件:一个用来定位文件,另外一个用来下载它。在一次 `git clone` 过程中,你的本地 LFS 缓存是空的,所以 Git LFS 会天真地为你的仓库中每个 LFS 所追踪的文件创建两个 HTTP 请求:
![](https://cdn-images-1.medium.com/max/800/0*ViL7r3ZhkGvF0z3-.png)
幸运的是Git LFS v1.2 提供了专门的 [`git lfs clone`][51] 命令。它不再是一次下载一个文件;`git lfs clone` 会禁用 Git LFS 污化过滤器,等待检出结束,然后从 Git LFS 存储中按批下载所有需要的文件。这允许了并行下载,并且将需要的 HTTP 请求数量减半。
![](https://cdn-images-1.medium.com/max/800/0*T43VA0DYTujDNgkH.png)
#### 自定义传输适配器transfer adapter
正如之前讨论过的Git LFS 在 v1.5 中提供对长期过滤进程的支持。不过,对另外一种类型的可插入进程的支持早在今年年初就发布了。Git LFS 1.3 包含了对可插拔传输适配器pluggable transfer adapter的支持因此不同的 Git LFS 托管服务可以定义属于它们自己的协议来向 LFS 存储传入或传出文件。
直到 2016 年底Bitbucket 是唯一一个实现了自有 Git LFS 传输协议 [Bitbucket LFS Media Adapter][67] 的托管服务商。这是为了从 Bitbucket 的 LFS 存储 API 的一个被称为分块chunking的独特特性中获益。分块意味着在上传或下载过程中大文件被分解成 4MB 的文件块chunk。
![](https://cdn-images-1.medium.com/max/800/1*N3SpjQZQ1Ge8OwvWrtS1og.gif)
分块给予了 Bitbucket 支持的 Git LFS 三大优势:
1. 并行下载与上传。默认地Git LFS 最多并行传输三个文件。但是,如果只有一个文件被单独传输(这也是 Git LFS 污化过滤器的默认行为它会在一个单独的流中被传输。Bitbucket 的分块允许同一文件的多个文件块同时被上传或下载,经常能够神奇地提升传输速度。
2. 可续传的文件块传输。文件块都会在本地缓存所以如果你的下载或上传被打断Bitbucket 的自定义 LFS 媒体适配器会在下一次你推送或拉取时,仅为缺失的文件块恢复传输。
3. 免重复。Git LFS正如 Git 本身是内容寻址content addressable每一个 LFS 文件都由它的内容生成的 SHA-256 哈希值标识。所以,哪怕你只修改了一位数据,整个文件的 SHA-256 就会改变,你便不得不重新上传整个文件。分块允许你仅仅重新上传文件中真正被修改的部分。举例来说,想象一下 Git LFS 在追踪一个 41MB 的精灵表格spritesheet。如果我们在此精灵表格上增加 2MB 的新区域并且提交它,传统上我们需要推送整个新的 43MB 文件到服务器端。但是,使用 Bitbucket 的自定义传输适配器,我们仅仅需要推送大约 7MB一个 4MB 的文件块(因为文件的信息头会改变)加上我们刚刚添加的包含新区域的 3MB 文件块!其余未改变的文件块在上传过程中被自动跳过,节省了巨大的带宽和时间消耗。
可自定义的传输适配器是 Git LFS 的一个伟大的特性,它们使得不同服务商可以在不给核心项目增加负担的前提下,试验针对其服务器优化的传输协议。
### 更佳的 git diff 算法与默认值
不像其他的版本控制系统Git 不会明确地存储文件被重命名了的事实。例如,如果我编辑了一个简单的 Node.js 应用并且将 `index.js` 重命名为 `app.js`,然后运行 `git diff`,我会得到一个看起来像一个文件被删除另一个文件被新建的结果。
![](https://cdn-images-1.medium.com/max/800/1*ohMUBpSh_jqz2ffScJ7ApQ.png)
我猜测移动或重命名一个文件从技术上来讲是一次删除后跟着一次新建,但这不是对人类最友好的描述方式。其实,你可以使用 `-M` 标志来指示 Git 在计算差异时同时尝试检测是否是文件重命名。对之前的例子,`git diff -M` 给我们如下结果:
![](https://cdn-images-1.medium.com/max/800/1*ywYjxBc1wii5O8EhHbpCTA.png)
第二行显示的 similarity index 告诉我们文件内容经过比较后的相似程度。默认地,`-M` 会处理任意两个超过 50% 相似度的文件。这意味着,你需要编辑少于 50% 的行数来确保它们可以被识别成一个重命名后的文件。你可以通过加上一个百分比来选择你自己的 similarity index例如 `-M80%`。
从 Git v2.9 起,无论你是否使用了 `-M` 标志,`git diff``git log` 命令都会默认检测重命名。如果不喜欢这种行为(或者,更现实的情况,你在通过一个脚本来解析 diff 输出),那么你可以通过显式地传递 `--no-renames` 标志来禁用这种行为。
#### 详细的提交
你经历过调用 `git commit` 然后盯着空白的 shell 试图想起你刚刚做过的所有改动吗?`verbose` 标志就为此而来!
不像这样:
```
Ah crap, which dependency did I just rev?
# Please enter the commit message for your changes. Lines starting
# with # will be ignored, and an empty message aborts the commit.
# On branch master
# Your branch is up-to-date with origin/master.
#
# Changes to be committed:
# new file: package.json
#
```
……你可以调用 `git commit --verbose` 来查看你改动造成的行内差异。不用担心,这不会包含在你的提交信息中:
![](https://cdn-images-1.medium.com/max/800/1*1vOYE2ow3ZDS8BP_QfssQw.png)
`--verbose` 标志并不是新出现的,但是直到 Git v2.9 你才可以通过 `git config --global commit.verbose true` 永久的启用它。
#### 实验性的 Diff 改进
当一个被修改部分前后几行相同时,`git diff` 可能产生一些稍微令人迷惑的输出。如果在一个文件中有两个或者更多相似结构的函数时这可能发生。来看一个有些刻意人为的例子,想象我们有一个 JS 文件包含一个单独的函数:
```
/* @return {string} "Bitbucket" */
function productName() {
return "Bitbucket";
}
```
现在想象一下,我们刚提交的改动里专门添加了 _另一个_ 可以做相似事情的函数:
```
/* @return {string} "Bitbucket" */
function productId() {
return "Bitbucket";
}
/* @return {string} "Bitbucket" */
function productName() {
return "Bitbucket";
}
```
我们希望 `git diff` 显示开头五行被新增,但是实际上它不恰当地将最初提交的第一行也包含进来。
![](https://cdn-images-1.medium.com/max/800/1*9C7DWMObGHMEqD-QFGHmew.png)
错误的注释被包含在了 diff 中!这虽不是世界末日,但每次发生这种事情,总免不了花费几秒钟去想一句 _咦?_
在十二月Git v2.11 介绍了一个新的实验性的 diff 选项,`--indent-heuristic`,尝试生成从美学角度来看更赏心悦目的 diff。
![](https://cdn-images-1.medium.com/max/800/1*UyWZ6JjC-izDquyWCA4bow.png)
在后台,`--indent-heuristic` 在每一次改动造成的所有可能的 diff 中循环,并为它们分别打上一个 “不良” 分数。这是基于启发式的,如差异文件块是否以不同等级的缩进开始和结束(从美学角度讲“不良”),以及差异文件块前后是否有空白行(从美学角度讲令人愉悦)。最后,有着最低不良分数的块就是最终输出。
这个特性还是实验性的,但是你可以通过应用 `--indent-heuristic` 选项到任何 `git diff` 命令来测试它。或者,如果你喜欢尝鲜,你可以这样将其在你的整个系统内启用:
```
$ git config --global diff.indentHeuristic true
```
### 子模块Submodule差强人意
子模块允许你从 Git 仓库内部引用和包含其他 Git 仓库。这通常用在项目所依赖的源代码也被 Git 跟踪的场合,或者被某些公司用来作为包含一系列相关项目的 [monorepo][68] 的替代品。
由于某些用法比较复杂,而且一不小心用错命令就很容易破坏它们,子模块落下了一些坏名声。
![](https://cdn-images-1.medium.com/max/800/1*xNffiElY7BZNMDM0jm0JNQ.gif)
但是,它们还是有着它们的用处,而且,我想这仍然是用于需要厂商依赖项的最好选择。 幸运的是2016 对子模块的用户来说是伟大的一年,在几次发布中落地了许多意义重大的性能和特性提升。
#### 并行抓取
当克隆或者抓取一个仓库时,加上 `--recurse-submodules` 选项意味着任何引用的子模块也将被克隆或更新。传统上,这会被串行执行,每次只抓取一个子模块。从 Git v2.8 起,你可以附加 `--jobs=n` 选项来使用 _n_ 个并行线程来抓取子模块。
我推荐永久的配置这个选项:
```
$ git config --global submodule.fetchJobs 4
```
……或者你可以选择使用任意程度的并行化。
#### 浅层化子模块
Git v2.9 引入了 `git clone --shallow-submodules` 标志。它允许你抓取你仓库的完整克隆,然后递归地以一个提交的深度浅层克隆所有引用的子模块。如果你不需要项目依赖的完整历史,这会很有用。
例如,一个仓库包含一些混合的子模块,其中有其他厂商提供的依赖和你自己的其它项目。你可能希望初始化时执行浅层子模块克隆,然后有选择地深化几个你想工作于其上的子模块。
另一种情况可能是配置持续集成或部署工作。Git 需要一个包含了子模块的超级仓库以及每个子模块最新的提交以便能够真正执行构建。但是,你可能并不需要每个子模块全部的历史记录,所以仅仅检索最新的提交可以为你省下时间和带宽。
#### 子模块的替代品
`--reference` 选项可以和 `git clone` 配合使用来指定另一个本地仓库作为一个替代的对象存储,来避免跨网络重新复制你本地已经存在的对象。语法为:
```
$ git clone --reference <local repo> <url>
```
到了 Git v2.11,你可以结合使用 `--reference` 选项与 `--recurse-submodules`,来设置子模块指向另一个本地仓库中的子模块。其语法为:
```
$ git clone --recurse-submodules --reference <local repo> <url>
```
这潜在地可以省下大量的带宽和本地磁盘空间,但是如果引用的本地仓库不包含你克隆的远程仓库所必需的所有子模块,它可能会失败。
幸运的是,方便的 `--reference-if-able` 选项将会让它优雅地失败,对于在引用的本地仓库中找不到的子模块,回退为一次普通的克隆。
```
$ git clone --recurse-submodules --reference-if-able \
<local repo> <url>
```
#### 子模块的 diff
在 Git v2.11 之前Git 有两种模式来显示子模块更新时提交之间的差异。
`git diff --submodule=short` 显示你的项目引用的子模块中的旧提交和新提交(如果你完全省略 `--submodule` 选项,这也是默认的输出):
![](https://cdn-images-1.medium.com/max/800/1*K71cJ30NokO5B69-a470NA.png)
`git diff --submodule=log` 稍微啰嗦一些,显示更新了的子模块中新增或移除的提交的提交信息摘要行。
![](https://cdn-images-1.medium.com/max/800/1*frvsd_T44De8_q0uvNHB1g.png)
Git v2.11 引入了第三个更有用的选项:`--submodule=diff`。这会显示更新后的子模块中所有改动的完整 diff。
![](https://cdn-images-1.medium.com/max/800/1*nPhJTjP8tcJ0cD8s3YOmjw.png)
### git stash 的 90 个增强
不像子模块,几乎没有 Git 用户不钟爱 [`git stash`][52]。 `git stash` 临时搁置(或者 _藏匿_)你对工作区所做的改动使你能够先处理其他事情,结束后重新将搁置的改动恢复到先前状态。
#### 自动搁置
如果你是 `git rebase` 的粉丝,你可能很熟悉 `--autostash` 选项。它会在变基之前自动搁置工作区所有本地修改,然后等变基结束再将其重新应用。
```
$ git rebase master --autostash
Created autostash: 54f212a
HEAD is now at 8303dca It's a kludge, but put the tuple from the database in the cache.
First, rewinding head to replay your work on top of it...
Applied autostash.
```
这很方便,因为它使得你可以在一个不洁的工作区执行变基。有一个方便的配置标志叫做 `rebase.autostash` 可以将这个特性设为默认,你可以这样来全局启用它:
```
$ git config --global rebase.autostash true
```
`rebase.autostash` 实际上自从 [Git v1.8.4][69] 就可用了,但是 v2.7 引入了通过 `--no-autostash` 选项来取消这个标志的功能。如果你在有未暂存的改动时使用这个选项,变基会被一条工作树被污染的警告阻止:
```
$ git rebase master --no-autostash
Cannot rebase: You have unstaged changes.
Please commit or stash them.
```
#### 补丁式搁置
说到配置标志Git v2.7 也引入了 `stash.showPatch`。`git stash show` 的默认行为是显示你搁置文件的简要统计。
```
$ git stash show
package.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
```
`-p` 标志传入会将 `git stash show` 变为 “补丁模式”,这将会显示完整的 diff
![](https://cdn-images-1.medium.com/max/800/1*HpcT3quuKKQj9CneqPuufw.png)
`stash.showPatch` 将这个行为定为默认。你可以将其全局启用:
```
$ git config --global stash.showPatch true
```
如果你启用了 `stash.showPatch`,但之后又决定只想查看文件统计,你可以通过传入 `--stat` 选项来重新获得之前的行为。
```
$ git stash show --stat
package.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
```
顺便一提:`--no-patch` 是一个有效选项,但它不会如你所希望的那样取消 `stash.showPatch`。它反而会被传递给用来生成补丁的底层 `git diff` 命令,然后你会发现完全没有任何输出。
#### 简单的搁置标识
如果你惯用 `git stash` ,你可能知道你可以搁置多次改动然后通过 `git stash list` 来查看它们:
```
$ git stash list
stash@{0}: On master: crazy idea that might work one day
stash@{1}: On master: desperate samurai refactor; don't apply
stash@{2}: On master: perf improvement that I forgot I stashed
stash@{3}: On master: pop this when we use Docker in production
```
但是,你可能不知道为什么 Git 的搁置有着这么难以理解的标识(`stash@{1}`、`stash@{2}` 等),或许你只是把它们当作 “Git 的怪癖” 接受了。实际上,就像很多 Git 特性一样,这些奇怪的标识是对 Git 数据模型非常巧妙的使用(或者说滥用)的结果。
在后台,`git stash` 命令实际上创建了一系列特殊的提交对象,这些对象对你搁置的改动进行编码,并且维护一个 [reflog][70] 来保存对这些特殊提交的引用。这也是为什么 `git stash list` 的输出看起来很像 `git reflog` 的输出。当你运行 `git stash apply stash@{1}` 时,你实际上在说:“从 stash reflog 的位置 1 上应用这条提交。”
到了 Git v2.11,你不再需要使用完整的 `stash@{n}` 语法。相反,你可以通过一个简单的整数指出该搁置在 stash reflog 中的位置来引用它们。
```
$ git stash show 1
$ git stash apply 1
$ git stash pop 1
```
讲了很多了。如果你还想要多学一些搁置是怎么保存的,我在 [这篇教程][71] 中写了一点这方面的内容。
### </2016> <2017>
好了,结束了。感谢您的阅读!我希望您喜欢阅读这份长篇大论,正如我乐于在 Git 的源码、发布文档和 `man` 手册中探险一番来撰写它。如果你认为我忘记了一些重要的事,请留下一条评论或者在 [Twitter][72] 上让我知道,我会努力写一份后续篇章。
至于 Git 接下来会发生什么,这要靠广大维护者和贡献者了(其中有可能就是你!)。随着 Git 的采用日益增长,我猜测简化、改进的用户体验,和更好的默认行为将会是 2017 年 Git 主要的主题。随着 Git 仓库变得越来越大、越来越旧,我猜我们也可以看到对性能的持续关注,以及对大文件、深度树和长历史的改进处理。
如果你关注 Git 并且很期待能够和一些项目背后的开发者会面,请考虑参加几周后在 Brussels 举办的 [Git Merge][74]。我会在[那里发言][75]!但是更重要的是,很多维护 Git 的开发者将会出席这次会议,而且一年一度的 Git 贡献者峰会很可能会决定来年的发展方向。
或者如果你实在等不及,想要获得更多的技巧和指南来改进你的工作流,请参看这份 Atlassian 的优秀作品: [Git 教程][76] 。
封面图片是由 [instaco.de][78] 生成的。
--------------------------------------------------------------------------------
via: https://medium.com/hacker-daily/git-in-2016-fad96ae22a15
作者:[Tim Pettersen][a]
译者:[xiaow6](https://github.com/xiaow6)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://hackernoon.com/@kannonboy?source=post_header_lockup
[1]:https://medium.com/@g.kylafas/the-git-config-command-is-missing-a-yes-at-the-end-as-in-git-config-global-commit-verbose-yes-7e126365750e?source=responses---------1----------
[2]:https://medium.com/@kannonboy/thanks-giorgos-fixed-f3b83c61589a?source=responses---------1----------
[3]:https://medium.com/@TomSwirly/i-read-the-whole-thing-from-start-to-finish-415a55d89229?source=responses---------0-31---------
[4]:https://medium.com/@g.kylafas
[5]:https://medium.com/@g.kylafas?source=responses---------1----------
[6]:https://medium.com/@kannonboy
[7]:https://medium.com/@kannonboy?source=responses---------1----------
[8]:https://medium.com/@TomSwirly
[9]:https://medium.com/@TomSwirly?source=responses---------0-31---------
[10]:https://medium.com/@g.kylafas/the-git-config-command-is-missing-a-yes-at-the-end-as-in-git-config-global-commit-verbose-yes-7e126365750e?source=responses---------1----------#--responses
[11]:https://hackernoon.com/@kannonboy
[12]:https://hackernoon.com/@kannonboy?source=placement_card_footer_grid---------0-44
[13]:https://medium.freecodecamp.com/@BillSourour
[14]:https://medium.freecodecamp.com/@BillSourour?source=placement_card_footer_grid---------1-43
[15]:https://blog.uncommon.is/@lut4rp
[16]:https://blog.uncommon.is/@lut4rp?source=placement_card_footer_grid---------2-43
[17]:https://medium.com/@kannonboy
[18]:https://medium.com/@kannonboy
[19]:https://medium.com/@g.kylafas/the-git-config-command-is-missing-a-yes-at-the-end-as-in-git-config-global-commit-verbose-yes-7e126365750e?source=responses---------1----------
[20]:https://medium.com/@kannonboy/thanks-giorgos-fixed-f3b83c61589a?source=responses---------1----------
[21]:https://medium.com/@TomSwirly/i-read-the-whole-thing-from-start-to-finish-415a55d89229?source=responses---------0-31---------
[22]:https://hackernoon.com/setting-breakpoints-on-a-snowy-evening-df34fc3168e2?source=placement_card_footer_grid---------0-44
[23]:https://medium.freecodecamp.com/the-code-im-still-ashamed-of-e4c021dff55e?source=placement_card_footer_grid---------1-43
[24]:https://blog.uncommon.is/using-git-to-generate-versionname-and-versioncode-for-android-apps-aaa9fc2c96af?source=placement_card_footer_grid---------2-43
[25]:https://hackernoon.com/git-in-2016-fad96ae22a15#fd10
[26]:https://hackernoon.com/git-in-2016-fad96ae22a15#cc52
[27]:https://hackernoon.com/git-in-2016-fad96ae22a15#42b9
[28]:https://hackernoon.com/git-in-2016-fad96ae22a15#4208
[29]:https://hackernoon.com/git-in-2016-fad96ae22a15#a5c3
[30]:https://hackernoon.com/git-in-2016-fad96ae22a15#c230
[31]:https://hackernoon.com/tagged/git?source=post
[32]:https://hackernoon.com/tagged/web-development?source=post
[33]:https://hackernoon.com/tagged/software-development?source=post
[34]:https://hackernoon.com/tagged/programming?source=post
[35]:https://hackernoon.com/tagged/atlassian?source=post
[36]:https://hackernoon.com/@kannonboy
[37]:https://hackernoon.com/?source=footer_card
[38]:https://hackernoon.com/setting-breakpoints-on-a-snowy-evening-df34fc3168e2?source=placement_card_footer_grid---------0-44
[39]:https://medium.freecodecamp.com/the-code-im-still-ashamed-of-e4c021dff55e?source=placement_card_footer_grid---------1-43
[40]:https://blog.uncommon.is/using-git-to-generate-versionname-and-versioncode-for-android-apps-aaa9fc2c96af?source=placement_card_footer_grid---------2-43
[41]:https://hackernoon.com/git-in-2016-fad96ae22a15#fd10
[42]:https://hackernoon.com/git-in-2016-fad96ae22a15#fd10
[43]:https://hackernoon.com/git-in-2016-fad96ae22a15#cc52
[44]:https://hackernoon.com/git-in-2016-fad96ae22a15#cc52
[45]:https://hackernoon.com/git-in-2016-fad96ae22a15#42b9
[46]:https://hackernoon.com/git-in-2016-fad96ae22a15#4208
[47]:https://hackernoon.com/git-in-2016-fad96ae22a15#a5c3
[48]:https://hackernoon.com/git-in-2016-fad96ae22a15#c230
[49]:https://git-scm.com/docs/git-worktree
[50]:https://git-scm.com/book/en/v2/Git-Tools-Debugging-with-Git#Binary-Search
[51]:https://www.atlassian.com/git/tutorials/git-lfs/#speeding-up-clones
[52]:https://www.atlassian.com/git/tutorials/git-stash/
[53]:https://hackernoon.com/@kannonboy?source=footer_card
[54]:https://hackernoon.com/?source=footer_card
[55]:https://hackernoon.com/@kannonboy?source=post_header_lockup
[56]:https://hackernoon.com/@kannonboy?source=post_header_lockup
[57]:https://hackernoon.com/git-in-2016-fad96ae22a15#c8e9
[58]:https://hackernoon.com/git-in-2016-fad96ae22a15#408a
[59]:https://hackernoon.com/git-in-2016-fad96ae22a15#315b
[60]:https://hackernoon.com/git-in-2016-fad96ae22a15#dbfb
[61]:https://hackernoon.com/git-in-2016-fad96ae22a15#2220
[62]:https://hackernoon.com/git-in-2016-fad96ae22a15#bc78
[63]:https://www.atlassian.com/git/tutorials/install-git/
[64]:https://www.atlassian.com/git/tutorials/what-is-git/
[65]:https://www.atlassian.com/git/tutorials/git-lfs/
[66]:https://twitter.com/kit3bus
[67]:https://confluence.atlassian.com/bitbucket/bitbucket-lfs-media-adapter-856699998.html
[68]:https://developer.atlassian.com/blog/2015/10/monorepos-in-git/
[69]:https://blogs.atlassian.com/2013/08/what-you-need-to-know-about-the-new-git-1-8-4/
[70]:https://www.atlassian.com/git/tutorials/refs-and-the-reflog/
[71]:https://www.atlassian.com/git/tutorials/git-stash/#how-git-stash-works
[72]:https://twitter.com/kannonboy
[73]:https://git.kernel.org/cgit/git/git.git/tree/Documentation/SubmittingPatches
[74]:http://git-merge.com/
[75]:http://git-merge.com/#git-aliases
[76]:https://www.atlassian.com/git/tutorials
[77]:https://hackernoon.com/git-in-2016-fad96ae22a15#87c4
[78]:http://instaco.de/
[79]:https://medium.com/@Medium/personalize-your-medium-experience-with-users-publications-tags-26a41ab1ee0c#.hx4zuv3mg
[80]:https://hackernoon.com/

View File

@ -3,7 +3,7 @@ Samba 系列(六):使用 Rsync 命令同步两个 Samba4 AD DC 之间的 S
这篇文章讲的是在两个 **Samba4 活动目录域控制器**之间,通过一些强大的 Linux 工具来完成 SysVol 的复制操作,比如 [Rsync 数据同步工具][2][Cron 任务调度进程][3]和 [SSH 协议][4]。
#### 要求:
### 要求:
- [Samba 系列(五):将另一台 Ubuntu DC 服务器加入到 Samba4 AD DC 实现双域控主机模][1]
@ -25,7 +25,7 @@ Samba 系列(六):使用 Rsync 命令同步两个 Samba4 AD DC 之间的 S
# nano /etc/ntp.conf
```
把下面几行添加到 **ntp.conf** 配置文件。
把下面几行添加到 `ntp.conf` 配置文件。
```
pool 0.ubuntu.pool.ntp.org iburst
@ -35,7 +35,8 @@ pool 0.ubuntu.pool.ntp.org iburst
pool adc1.tecmint.lan
# Use Ubuntu's ntp server as a fallback.
pool ntp.ubuntu.com
```
```
[
![Configure NTP for Samba4](http://www.tecmint.com/wp-content/uploads/2017/01/Configure-NTP-for-Samba4.png)
][6]
@ -49,12 +50,13 @@ restrict source notrap nomodify noquery mssntp
ntpsigndsocket /var/lib/samba/ntp_signd/
```
4、最后关闭并保存该配置文件然后重启 NTP 服务以应用更改。等待几分钟后时间同步完成,执行 **ntpq** 命令打印出 **adc1** 时间同步情况。
4、最后关闭并保存该配置文件然后重启 NTP 服务以应用更改。等待几分钟后时间同步完成,执行 `ntpq` 命令打印出 **adc1** 时间同步情况。
```
# systemctl restart ntp
# ntpq -p
```
```
[
![Synchronize NTP Time with Samba4 AD](http://www.tecmint.com/wp-content/uploads/2017/01/Synchronize-Time.png)
][8]
@ -65,7 +67,7 @@ ntpsigndsocket /var/lib/samba/ntp_signd/
默认情况下,**Samba4 AD DC** 不会通过 **DFS-R**<ruby>分布式文件系统复制<rt>Distributed File System Replication</rt></ruby>)或者 **FRS**<ruby>文件复制服务<rt>File Replication Service</rt></ruby>)来复制 SysVol 目录。
这意味着只有在第一个域控制器联机时,<ruby>**组策略对象**<rt>Group Policy objects </rt></ruby>才可用。否则组策略设置和登录脚本不会应用到已加入域的 Windosws 机器上。
这意味着只有在第一个域控制器联机时,<ruby>**组策略对象**<rt>Group Policy objects</rt></ruby>才可用。否则组策略设置和登录脚本不会应用到已加入域的 Windosws 机器上。
为了克服这个障碍,以及基本实现 SysVol 目录复制的目的,我们通过执行一个[基于 SSH 的身份认证][10]并使用 SSH 加密通道的[Linux 同步命令][9]来从第一个域控制器安全地传输 **GPO** 对象到第二个域控制器。
@ -75,31 +77,33 @@ ntpsigndsocket /var/lib/samba/ntp_signd/
5、要进行 **SysVol** 复制,先到[第一个 AD DC 服务器上生成 SSH 密钥][11],然后使用下面的命令把该密钥传输到第二个 DC 服务器。
在生成密钥的过程中不要设置密码 **passphrase**,以便在无用户干预的情况下进行传输。
在生成密钥的过程中不要设置密码,以便在无用户干预的情况下进行传输。
```
# ssh-keygen -t rsa
# ssh-copy-id root@adc2
# ssh adc2
# exit
```
```
[
![Generate SSH Key on Samba4 DC](http://www.tecmint.com/wp-content/uploads/2017/01/Generate-SSH-Key.png)
][12]
*在 Samba4 DC 服务器上生成 SSH 密钥*
6、 当你确认 root 用户可以从第一个 **DC** 服务器以免密码方式登录到第二个 **DC** 服务器时,执行下面的 **Rsync** 命令,加上 `--dry-run` 参数来模拟 SysVol 复制过程。注意把对应的参数值替换成你自己的数据。
6、 当你确认 root 用户可以从第一个 **DC** 服务器以免密码方式登录到第二个 **DC** 服务器时,执行下面的 `rsync` 命令,加上 `--dry-run` 参数来模拟 SysVol 复制过程。注意把对应的参数值替换成你自己的数据。
```
# rsync --dry-run -XAavz --chmod=775 --delete-after --progress --stats /var/lib/samba/sysvol/ root@adc2:/var/lib/samba/sysvol/
```
7、如果模拟复制过程正常那么再次执行去掉 `--dry-run` 参数的 rsync 命令,来真实的在域控制器之间复制 GPO 对象。
7、如果模拟复制过程正常那么再次执行去掉 `--dry-run` 参数的 `rsync` 命令,来真实的在域控制器之间复制 GPO 对象。
```
# rsync -XAavz --chmod=775 --delete-after --progress --stats /var/lib/samba/sysvol/ root@adc2:/var/lib/samba/sysvol/
```
```
[
![Samba4 AD DC SysVol Replication](http://www.tecmint.com/wp-content/uploads/2017/01/SysVol-Replication-for-Samba4-DC.png)
][13]
@ -113,6 +117,7 @@ ntpsigndsocket /var/lib/samba/ntp_signd/
```
# ls -alh /var/lib/samba/sysvol/your_domain/Policies/
```
[
![Verify Samba4 DC SysVol Replication](http://www.tecmint.com/wp-content/uploads/2017/01/Verify-Samba4-DC-SysVol-Replication.png)
][14]
@ -125,7 +130,7 @@ ntpsigndsocket /var/lib/samba/ntp_signd/
# crontab -e
```
添加一条每隔 5 分钟运行的同步命令,并把执行结果以及错误信息输出到日志文件 /var/log/sysvol-replication.log 。如果执行命令异常,你可以查看该文件来定位问题。
添加一条每隔 5 分钟运行的同步命令,并把执行结果以及错误信息输出到日志文件 `/var/log/sysvol-replication.log` 。如果执行命令异常,你可以查看该文件来定位问题。
```
*/5 * * * * rsync -XAavz --chmod=775 --delete-after --progress --stats /var/lib/samba/sysvol/ root@adc2:/var/lib/samba/sysvol/ > /var/log/sysvol-replication.log 2>&1

View File

@ -0,0 +1,96 @@
使用 AWS 的 GO SDK 获取区域与终端节点信息
============================================================
LCTT 译注: 终端节点Endpoint详情请见: [http://docs.amazonaws.cn/general/latest/gr/rande.html](http://docs.amazonaws.cn/general/latest/gr/rande.html)
最新发布的 Go SDK [v1.6.0][1] 版本,加入了获取区域与终端节点信息的功能。它可以很方便地列出区域、服务和终端节点的相关信息。可以通过 [github.com/aws/aws-sdk-go/aws/endpoints][3] 包使用这些功能。
endpoints 包提供了一个易用的接口,可以获取到一个服务的终端节点的 url 列表和区域列表信息。并且我们将相关信息根据 AWS 服务区域进行了分组,如 AWS 标准、AWS 中国和 AWS GovCloud美国
### 解析终端节点
设置 SDK 的默认配置时SDK 会自动地使用 `endpoints.DefaultResolver` 函数。你也可以自己调用包中的 `EndpointFor` 方法来解析终端节点。
```Go
// 解析在us-west-2区域的S3服务的终端节点
resolver := endpoints.DefaultResolver()
endpoint, err := resolver.EndpointFor(endpoints.S3ServiceID, endpoints.UsWest2RegionID)
if err != nil {
fmt.Println("failed to resolve endpoint", err)
return
}
fmt.Println("Resolved URL:", endpoint.URL)
```
如果你需要自定义终端节点的解析逻辑,你可以实现 `endpoints.Resolver` 接口,并传值给 `aws.Config.EndpointResolver`。当你打算编写自定义的终端节点逻辑,让 SDK 可以用来解析服务的终端节点的时候,这个功能就会很有用。
以下示例,创建了一个配置好的 Session然后 [Amazon S3][4] 服务的客户端就可以使用这个自定义的终端节点。
```Go
s3CustResolverFn := func(service, region string, optFns ...func(*endpoints.Options)) (endpoints.ResolvedEndpoint, error) {
if service == "s3" {
return endpoints.ResolvedEndpoint{
URL: "s3.custom.endpoint.com",
SigningRegion: "custom-signing-region",
}, nil
}
return defaultResolver.EndpointFor(service, region, optFns...)
}
sess := session.Must(session.NewSessionWithOptions(session.Options{
Config: aws.Config{
Region: aws.String("us-west-2"),
EndpointResolver: endpoints.ResolverFunc(s3CustResolverFn),
},
}))
```
### 分区
`endpoints.DefaultResolver` 函数的返回值可以被 `endpoints.EnumPartitions` 接口使用。这样就可以获取 SDK 使用的分区片段,也可以列出每个分区的分区信息。
```Go
// 迭代所有分区表打印每个分区的ID
resolver := endpoints.DefaultResolver()
partitions := resolver.(endpoints.EnumPartitions).Partitions()
for _, p := range partitions {
fmt.Println("Partition:", p.ID())
}
```
除了分区表之外endpoints 包也提供了每个分区组的 getter 函数。这些工具函数可以方便列出指定分区,而不用执行默认解析器列出所有的分区。
```Go
partition := endpoints.AwsPartition()
region := partition.Regions()[endpoints.UsWest2RegionID]
fmt.Println("Services in region:", region.ID())
for id, _ := range region.Services() {
fmt.Println(id)
}
```
当你获取区域和服务值后,可以调用 `ResolveEndpoint`。这样解析端点时,就可以提供分区的过滤视图。
获取更多 AWS SDK for GO 信息, 请关注[其开源仓库][5]。若你有更好的看法,请留言评论。
--------------------------------------------------------------------------------
via: https://aws.amazon.com/cn/blogs/developer/using-the-aws-sdk-for-gos-regions-and-endpoints-metadata
作者:[Jason Del Ponte][a]
译者:[Vic020](http://vicyu.com)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://aws.amazon.com/cn/blogs/developer/using-the-aws-sdk-for-gos-regions-and-endpoints-metadata
[1]:https://github.com/aws/aws-sdk-go/releases/tag/v1.6.0
[2]:https://github.com/aws/aws-sdk-go
[3]:http://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/
[4]:https://aws.amazon.com/s3/
[5]:https://github.com/aws/aws-sdk-go/tree/master/example/aws/endpoints

View File

@ -0,0 +1,150 @@
Linux 系统上的可视化比较与合并工具 Meld
============================================================
我们已经[讲过][5] Linux 中[一些][6]基于命令行的比较和合并工具,再来讲解该系统的一些可视化的比较与合并工具也很合理。首要的原因是,不是每个人都习惯使用命令行,而且对于某些人来说,基于命令行的比较工具可能很难学习和理解。
因此,我们将会推出关于可视化工具 **Meld** 的系列文章。
在跳到安装和介绍部分前,我需要说明,这篇教程里所有的指令和用例都是可用的,而且它们已经在 Ubuntu 14.04 中测试过了,我们使用的 Meld 版本是 3.14.2。
### 关于 Meld
[Meld][7] 主要是一个可视化的比较和合并的工具,目标人群是开发者(当然,我们将要讲到的其它部分也会考虑到最终用户)。这个工具同时支持双向和三向的比较,不仅仅是比较文件,还可以比较目录,以及版本控制的项目。
“Meld 可以帮你回顾代码改动,理解补丁,”其官网如是说。“它甚至可以告知你如果你不进行合并将会发生什么事情。”该工具使用 GPL v2 协议进行授权。
### 安装 Meld
如果你用的是 Ubuntu 或者其它基于 Debian 的 Linux 分支,你可以用以下命令下载安装 Meld
```sh
sudo apt-get install meld
```
或者你也可以用系统自带的包管理软件下载这个工具。比如在 Ubuntu 上,你可以用 Ubuntu 软件中心Ubuntu Software Center或者用 [Ubuntu 软件][8],它从 Ubuntu 16.04 版本开始取代了 Ubuntu 软件中心。
当然Ubuntu 官方仓库里的 Meld 版本很有可能比较陈旧。因此如果你想要用更新的版本,你可以在[这里][9]下载软件包。如果你要用这个方法,你要做的就是解压下载好的软件包,然后运行 `bin` 目录下的 `meld` 程序。
```
~/Downloads/meld-3.14.2/bin$ ./meld 
```
以下是 Meld 依赖的软件,仅供参考:
* Python 2.7 (Python 3.3 开发版)
* GTK+ 3.14
* GLib 2.36
* PyGObject 3.14
* GtkSourceView 3.14
* pycairo
### 使用 Meld
装好了软件,就可以看到类似这样的画面:
[
![Meld started](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-launch-screen-1.png)
][10]
有三个选项:比较文件File comparison、比较目录Directory comparison以及版本控制视图Version control view。
点击“比较文件”选项,就可以选择需要比较的文件:
[
![Meld file comparison](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-file-comparison-2.png)
][11]
正如上面的截图所示Meld 也可以进行三向比较,但是在这一系列文章的第一部分,我们只会讲更常用的双向比较。
接着,选择你想要比较的文件,点击“比较”Compare按钮。软件会在左右两边分别打开两个文件高亮显示存在差异的行以及行内不同的部分
[
![Compare files in Meld](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-diff-in-action-3.png)
][12]
两个文件的不同之处在第二行,差别在于 `file2` 文件的第二行多了一个 `3`。你看到的黑色箭头是用来进行合并或修改的操作的。该例中,向右的箭头将会把 `file2` 文件的第二行改成文件 `file1` 中对应行的内容。左向箭头做的事情相反。
做完修改后,按下 `Ctrl+s` 来保存。
这个简单的例子,让你知道 Meld 的基本用法。让我们看一看稍微复杂一点的比较:
[
![Meld advanced file comparison](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-multiple-changes-4.png)
][13]
在讨论这些变化前,这里提一下, Meld 的界面中有几个区域,可以给出文件之间的差异,让概况变得直观。这里特别需要注意窗口的左右两边垂直的栏。比如下面这个截图:
[
![Visual Comparison](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-multiple-colors-5.png)
][14]
仔细观察,图中的这个栏包含几个不同颜色的区块。这些区块是用来让你对文件之间的差异有个大概的了解。“每一个着色的区块表示一个部分,这个部分可能是插入、删除、修改或者有差别的,取决于区块所用的颜色。”官方文档是这样说的。
现在,让我们回到我们之前讨论的例子中。接下来的截图展示了用 Meld 理解文件的改动是很简单的(以及合并这些改动):
[
![File changes visualized in Meld](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-makes-it-easy-6.png)
][15]
[
![Meld Example 2](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-makes-it-easy-7.png)
][16]
[
![Meld Example 3](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-makes-it-easy-8.png)
][17]
接着,我们可以滚动文件,从一个改动跳到另一个。但是,当要比较的文件很大时,这会耗费一点时间;当你想要滚动到某个改动的位置时,也会变得很困难。如果是这种情况,你可以使用工具栏上的橙色箭头,就在编辑区域的上方:
[
![Go to next change in Meld](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-go-next-prev-9.png)
][18]
这些是你使用 Meld 时做的一般性的事情:可以用标准的 `Ctrl+f` 组合键在编辑区域内进行查找,按 `F11` 键让软件进入全屏模式,再按 `Ctrl+r` 来刷新(通常在所有要比较的文件改变的时候使用)。
以下是 Meld 官方网站宣传的重要特性:
* 文件和目录的双向及三向比较
* 输入即更新文件的比较
* 自动合并模式,按块改动的动作让合并更加简单
* 可视化让比较文件更简单
* 支持 GitBazaarMercurialSubversion 等等
注意,特性还远不止以上所列的这些。官网上有个专门的[特性页面][19],里面提到了 Meld 提供的所有特性。页面上列出的特性分为几个部分,按照该软件是用于文件比较、目录比较、版本控制,还是处于合并模式来划分。
和其它软件相似,有些事情 Meld 做不到。官方网站上列出了其中的一部分:“当 Meld 展示文件之间的差异时,它同时显示两个文件,看起来就像在普通的文本编辑器中。它不会添加额外的行,让左右两边文件的特殊改动处于同样的行数。没有做这个事情的选项。”
### 总结
我们刚刚了解到的不过是皮毛,因为 Meld 还能做很多事情。考虑到这是教程系列的第一部分,这也挺不错的。这仅仅是让你了解 Meld 的作用,你可以配置它,让它忽略一些特定类型的改动,在文件之间移动、复制或者删除个别差异,也可以从命令行启动它。在即将推出的系列教程中,我们将会讲述所有这些重要功能。
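(这里先提前给出一个从命令行启动的小示意,细节会在后续教程中展开:)

```sh
meld file1.txt file2.txt    # 双向比较两个文件
meld file1 file2 file3      # 三向比较
meld dir_a/ dir_b/          # 比较两个目录
```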
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/
作者:[Ansh][a]
译者:[GitFuture](https://github.com/GitFuture)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/
[1]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/#about-meld
[2]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/#meld-installation
[3]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/#meld-usage
[4]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/#conclusion
[5]:https://www.howtoforge.com/tutorial/linux-diff-command-file-comparison/
[6]:https://www.howtoforge.com/tutorial/how-to-compare-three-files-in-linux-using-diff3-tool/
[7]:http://meldmerge.org/
[8]:https://www.howtoforge.com/tutorial/ubuntu-16-04-lts-overview/
[9]:https://git.gnome.org/browse/meld/refs/tags
[10]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-launch-screen-1.png
[11]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-file-comparison-2.png
[12]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-diff-in-action-3.png
[13]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-multiple-changes-4.png
[14]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-multiple-colors-5.png
[15]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-makes-it-easy-6.png
[16]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-makes-it-easy-7.png
[17]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-makes-it-easy-8.png
[18]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-go-next-prev-9.png
[19]:http://meldmerge.org/features.html

View File

@ -1,8 +1,7 @@
Linux 命令行工具使用小贴士及技巧(三) - 环境变量 CDPATH
Linux 命令行工具使用小贴士及技巧(三)
============================================================
在这个系列的第一部分,我们详细地讨论了 `cd -` 命令,在第二部分,我们深入探究了 `pushd``popd` 两个命令,以及它们使用的场景。
在这个系列的[第一部分][5],我们详细地讨论了 `cd -` 命令,在[第二部分][6],我们深入探究了 `pushd``popd` 两个命令,以及它们使用的场景。
继续对命令行的讨论,在这篇教程中,我们将会通过简单易懂的实例来讨论 `CDPATH` 这个环境变量。我们也会讨论关于此变量的一些进阶细节。
@ -10,14 +9,14 @@ _在这之前先声明一下此教程中的所有实例都已经在 Ubuntu 14
### 环境变量 CDPATH
即使你的命令行所有操作都在特定的目录下 - 例如你的主目录 - 在切换目录时你也不得不提供绝对路径。比如,考虑我现在的情况,就是在 _/home/himanshu/Downloads_ 目录下:
即使你的命令行所有操作都在特定的目录下 - 例如你的主目录,然而在你切换目录时也不得不提供绝对路径。比如,考虑我现在的情况,就是在 `/home/himanshu/Downloads` 目录下:
```
$ pwd
/home/himanshu/Downloads
```
现在要求切换至 _/home/himanshu/Desktop_ 目录,我一般会这样做:
现在要求切换至 `/home/himanshu/Desktop` 目录,我一般会这样做:
```sh
cd /home/himanshu/Desktop/
@ -35,7 +34,7 @@ cd ~/Desktop/
cd ../Desktop/
```
能不能只是运行以下命令就能简单地实现呢:
能不能只是运行以下命令就能简单地实现呢
```sh
cd Desktop
@ -43,22 +42,21 @@ cd Desktop
是的,这完全有可能。这就是环境变量 `CDPATH` 出现的时候了。你可使用这个变量来为 `cd` 命令定义基础目录。
如果你尝试打印它的值,你会看见这个环境变量默认是空值的:
如果你尝试打印它的值,你会看见这个环境变量默认是空值的
```sh
$ echo $CDPATH
$
```
现在 ,考虑到上面提到的场景,我们使用这个环境变量,将 _/home/himanshu_ 作为 `cd` 命令的基础目录来使用。
现在 ,考虑到上面提到的场景,我们使用这个环境变量,将 `/home/himanshu` 作为 `cd` 命令的基础目录来使用。
最简单的做法这样:
最简单的做法这样
```sh
export CDPATH=/home/himanshu
```
现在,我能做到之前所不能做到的事了 - 当前工作目录在 _/home/himanshu/Downloads_ 目录里时,成功地运行了 `cd Desktop` 命令。
现在,我能做到之前所不能做到的事了 - 当前工作目录在 `/home/himanshu/Downloads` 目录里时,成功地运行了 `cd Desktop` 命令。
```sh
$ pwd
@ -68,33 +66,33 @@ $ cd Desktop/
$
```
这表明了我可以使用 `cd` 命令来到达 _`/home/himanshu`_ 下的任意一个目录,而不需要在 `cd ` 命令中显式地指定 _`/home/himanshu`_ 或者 _`~`_,又或者是 _`../`_ (或者多个 _`../`_)。
这表明了我可以使用 `cd` 命令来到达 `/home/himanshu` 下的任意一个目录,而不需要在 `cd` 命令中显式地指定 `/home/himanshu` 或者 `~`,又或者是 `../` (或者多个 `../`)。
### 要点
现在你应该知道了怎样利用环境变量 CDPATH 在 _/home/himanshu/Downloads__/home/himanshu/Desktop_ 之间轻松切换。现在,考虑以下这种情况, 在 _/home/himanshu/Desktop_ 目录里包含一个名字叫做 _Downloads_ 的子目录,这是将要切换到的目录。
现在你应该知道了怎样利用环境变量 `CDPATH``/home/himanshu/Downloads``/home/himanshu/Desktop` 之间轻松切换。现在,考虑以下这种情况, 在 `/home/himanshu/Desktop` 目录里包含一个名字叫做 `Downloads` 的子目录,这是将要切换到的目录。
但突然你会意识到 _cd Desktop_ 会切换到 _/home/himanshu/Desktop_。所以,为了确保这不会发生,你可以这样做:
但突然你会意识到 `cd Desktop` 会切换到 `/home/himanshu/Desktop`。所以,为了确保这不会发生,你可以这样做:
```sh
cd ./Downloads
```
虽然上述命令本身没有问题,但你还是需要耗费点额外的精力( 虽然很小 ),尤其是每次这种情况发生时你都不得不这样做。所以,有一个更加优雅的解决方案来处理,就是以如下方式来设定 `CDPATH` 环境变量。
虽然上述命令本身没有问题,但你还是需要耗费点额外的精力(虽然很小),尤其是每次这种情况发生时你都不得不这样做。所以,有一个更加优雅的解决方案来处理,就是以如下方式来设定 `CDPATH` 环境变量。
```sh
export CDPATH=".:/home/himanshu"
```
它的意思是告诉 `cd` 命令先在当前的工作目录查找该目录,然后再尝试搜寻 _/home/himanshu_ 目录。当然, `cd` 命令是否以这样的方式运行,完全取决于你的偏好和要求 - 讨论这一点的目的是为了让你知道这种情况可能会发生。
它的意思是告诉 `cd` 命令先在当前的工作目录查找该目录,然后再尝试搜寻 `/home/himanshu` 目录。当然, `cd` 命令是否以这样的方式运行,完全取决于你的偏好和要求 - 讨论这一点的目的是为了让你知道这种情况可能会发生。
就如你现在所知道的,一旦环境变量 `CDPATH` 被设置,它的值 - 或者它所包含的路径集合 - 就是系统中 `cd` 命令搜索目录的地方 ( 当然除了使用绝对路径的场景 )。所以,完全取决于你来确保该命令行为的一致性。
继续说,如果一个 bash 脚本以相对路径使用 `cd` 命令,最好还是先清除或者重置环境变量 `CDPATH`,除非你觉得遇上不可预测的麻烦也无所谓。还有一个可选的方法,比起在终端使用 `export` 命令来设置 `CDPATH`,你可以在测试完交互式/非交互式 shell 之后,在你的 `.bashrc` 文件里设置环境变量,这样可以确保你对环境变量的改动只对交互式 shell 生效。
继续说,如果一个 bash 脚本以相对路径使用 `cd` 命令,最好还是先清除或者重置环境变量 `CDPATH`,除非你觉得遇上不可预测的麻烦也无所谓。还有一个可选的方法,比起在终端使用 `export` 命令来设置 `CDPATH`,你可以在测试完当前的 shell 是交互式还是非交互式之后,再在你的 `.bashrc` 文件里设置环境变量,这样可以确保你对环境变量的改动只对交互式 shell 生效。
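(下面用两个小示意演示这两种做法,目录名是假设的:)

```sh
# 示意一:在脚本开头重置 CDPATH避免用户环境里的 CDPATH 影响相对路径的 cd
unset CDPATH

# 示意二:在 ~/.bashrc 中只为交互式 shell 设置 CDPATH$- 中包含 i 即为交互式)
case $- in
*i*) export CDPATH=".:/home/himanshu" ;;
esac
```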
环境变量中,路径出现的顺序同样也是很重要。举个例子,如果当前目录是在 _/home/himanshu_ 目录之前列出来,`cd` 命令就会先搜索当前的工作目录然后才会移动到 _/home/himanshu_ 目录。然而,如果该值为 _"/home/himanshu:."_,搜索就首先从 _/home/himanshu_ 开始,然后到当前目录。不用说,这会影响 `cd` 命令的行为,并且不注意路径的顺序可能会导致一些麻烦。
环境变量中,路径出现的顺序同样也是很重要。举个例子,如果当前目录是在 `/home/himanshu` 目录之前列出来,`cd` 命令就会先搜索当前的工作目录然后才会搜索 `/home/himanshu` 目录。然而,如果该值为 `/home/himanshu:.`,搜索就首先从 `/home/himanshu` 开始,然后到当前目录。不用说,这会影响 `cd` 命令的行为,并且不注意路径的顺序可能会导致一些麻烦。
要牢记在心的是,环境变量 `CDPATH`,就像其名字表达的,只对 `cd` 命令有作用。意味着在 _/home/himanshu/Downloads_ 目录里面时,你能运行 `_cd Desktop_` 命令来切换到 _/home/himanshu/Desktop_ 目录,但你不能使用 `ls`。以下是一个例子:
要牢记在心的是,环境变量 `CDPATH`,就像其名字表达的,只对 `cd` 命令有作用。意味着在 `/home/himanshu/Downloads` 目录里面时,你能运行 `cd Desktop` 命令来切换到 `/home/himanshu/Desktop` 目录,但你不能使用 `ls`。以下是一个例子:
```sh
$ pwd
@ -114,9 +112,9 @@ backup backup~ Downloads gdb.html outline~ outline.txt outline.txt~
不过,不是每种情况就能变通处理的。
另一个重点是: 就像你可能已经观察到的,每次你使用 `CDPATH` 环境变量集来运行 `cd` 命令时,该命令都会在输出里显示你切换到的目录的完整路径。不用说,不是所有人都想在每次运行 `cd` 命令时看到这些信息。
另一个重点是就像你可能已经观察到的,每次你使用 `CDPATH` 环境变量集来运行 `cd` 命令时,该命令都会在输出里显示你切换到的目录的完整路径。不用说,不是所有人都想在每次运行 `cd` 命令时看到这些信息。
为了确保该输出被制止,你可以使用以下命令:
为了确保该输出被制止,你可以使用以下命令
```sh
alias cd='>/dev/null cd'
@ -124,11 +122,11 @@ alias cd='>/dev/null cd'
如果 `cd` 命令运行成功,上述命令不会输出任何东西,如果失败,则允许产生错误信息。
最后,假如你遇到设置 CDPATH 环境变量后,不能使用 shell 的 tab 自动补全功能的问题,可以尝试安装并启动 bash 自动补全( bash-completion )。更多请参考 [这里][4]。
最后,假如你遇到设置 `CDPATH` 环境变量后,不能使用 shell 的 tab 自动补全功能的问题,可以尝试安装并启用 bash 自动补全bash-completion。更多请参考 [这里][4]。
### 总结
`CDPATH` 环境变量时一把双刃剑,如果没有掌握完善的知识和随意使用,可能会令你陷入困境,并花费你大量宝贵时间去解决问题。当然,这不代表你不应该试一下;只需要了解一下所有的可用选项,如果你得出结论,使用 CDPATH 会带来很大的帮助,就继续使用它吧。
`CDPATH` 环境变量时一把双刃剑,如果没有掌握完善的知识和随意使用,可能会令你陷入困境,并花费你大量宝贵时间去解决问题。当然,这不代表你不应该试一下;只需要了解一下所有的可用选项,如果你得出结论,使用 `CDPATH` 会带来很大的帮助,就继续使用它吧。
你已经能够熟练地使用 `CDPATH` 了吗?你有更多的贴士要分享?请在评论区里发表一下你的想法吧。
@ -147,3 +145,5 @@ via: https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-c
[2]:https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/#points-to-keep-in-mind
[3]:https://www.howtoforge.com/tutorial/linux-command-line-tips-tricks-part-3-cdpath/#conclusion
[4]:http://bash-completion.alioth.debian.org/
[5]:https://linux.cn/article-8335-1.html
[6]:https://linux.cn/article-8371-1.html

View File

@ -0,0 +1,150 @@
Linux 命令行工具使用小贴士及技巧(四)
============================================================
到目前为止,在该系列指南中,我们已经讨论了 [cd -](https://linux.cn/article-8335-1.html) 和 [pushd/popd 命令](https://linux.cn/article-8371-1.html)的基本使用方法和相关细节,以及 [CDPATH 环境变量](https://linux.cn/article-8387-1.html)。在这第四期、也是最后一期文章中,我们会讨论别名的概念以及你可以如何使用它们使你的命令行导航更加轻松和平稳。
一如往常,在进入该指南的核心之前,值得指出本文中的所有命令以及展示的例子都在 Ubuntu 14.04 LTS 中进行了测试。我们使用的命令行 shell 是 bash4.3.11 版本)。
### Linux 中的命令行别名
按照外行人的定义,别名可以被认为是一个复杂命令或者一组命令(包括它们的参数和选项)的简称或缩写。所以基本上,使用别名,你可以为那些不那么容易书写/记忆的命令创建易于记忆的名称。
例如,下面的命令为 `cd ~` 命令创建别名 `home`
```
alias home="cd ~"
```
这意味着现在在你的系统中无论何地,无论何时你想要回到你的主目录时,你可以很快地输入 `home` 然后按回车键实现。
关于 `alias` 命令man 手册是这么描述的:
> alias 工具可以创建或者重定义别名定义,或者把现有别名定义输出到标准输出。别名定义提供了输入一个命令时应该被替换的字符串值
> 一个别名定义会影响当前 shell 的执行环境以及当前 shell 的所有子 shell 的执行环境。按照 IEEE Std 1003.1-2001 规定,别名定义不应该影响当前 shell 的父进程以及任何 shell 调用的程序环境。
那么,别名到底如何帮助命令行导航呢?这是一个简单的例子:
假设你正在 `/home/himanshu/projects/howtoforge` 目录工作,它包括很多子目录以及子子目录。例如下面就是一个完整的目录分支:
```
/home/himanshu/projects/howtoforge/command-line/navigation/tips-tricks/part4/final
```
现在想象你在 `final` 目录,然后你想回到 `tips-tricks` 目录,然后再从那里,回到 `howtoforge` 目录。你会怎么做呢?
是的,一般情况下,你会运行下面的一组命令:
```
cd ../..
cd ../../..
```
虽然这种方法并没有错误,但它绝对不方便,尤其是当你在一个很长的路径中想往回走例如说 5 个目录时。那么,有什么解决办法吗?答案就是:别名。
你可以做的是,为每个 `cd ..` 命令创建容易记忆(和书写)的别名。例如:
```
alias bk1="cd .."
alias bk2="cd ../.."
alias bk3="cd ../../.."
alias bk4="cd ../../../.."
alias bk5="cd ../../../../.."
```
现在无论你什么时候想从当前工作目录往回走,例如说 5 个目录,你只需要运行下面的命令:
```
bk5
```
现在这不是很简单吗?
### 相关细节
尽管我们目前在 shell 中定义别名的方式(通过使用 alias 命令)达到了效果,但这些别名只存在于当前终端会话。很有可能你会希望你定义的别名能保存下来,使得此后你可以在任何新启动的命令行窗口/标签页中使用它们。
为此,你需要在 `~/.bash_aliases` 文件中定义你的别名,你的 `~/.bashrc` 文件默认会加载该文件(如果你使用更早版本的 Ubuntu我没有验证过是否有效
下面是我的 `.bashrc` 文件中关于 `.bash_aliases` 文件的部分:
```
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
```
一旦你把别名定义添加到你的 `.bash_aliases` 文件,该别名在任何新终端中都可用。但是,在任何其它你定义别名时已经启动的终端中,你还不能使用它们 - 解决办法是在这些终端中重新加载 `.bashrc`。下面就是你需要执行的具体命令:
```
source ~/.bashrc
```
如果你觉得这要做的也太多了(是的,我期待你有更懒惰的办法),那么这里有一个快捷方式来做到这一切:
```
echo "alias [the-alias]" >> ~/.bash_aliases && source ~/.bash_aliases
```
毫无疑问,你需要用实际的命令替换 `[the-alias]`。例如:
```
echo "alias bk5='cd ../../../../..'" >> ~/.bash_aliases && source ~/.bash_aliases
```
接下来,假设你已经创建了一些别名,并时不时使用它们有一段时间了。突然有一天,你发现它们其中的一个并不像期望的那样。因此你觉得需要查看被赋予该别名的真正命令。你会怎么做呢?
当然,你可以打开你的 `.bash_aliases` 文件在那里看看,但这种方式可能有点费时,尤其是当文件中包括很多别名的时候。因此,如果你正在查找一种更简单的方式,这就有一个:你需要做的只是运行 `alias` 命令并把别名名称作为参数。
这里有个例子:
```
$ alias bk6
alias bk6='cd ../../../../../..'
```
你可以看到,上面提到的命令显示了被赋值给别名 `bk6` 的实际命令。这里还有另一种办法:使用 `type` 命令。下面是一个例子:
```
$ type bk6
bk6 is aliased to `cd ../../../../../..'
```
`type` 命令产生了一个易于人类理解的输出。
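(顺带补充一个原文未提及的小技巧:)如果想取消已定义的别名,可以使用 `unalias` 命令:

```
$ unalias bk6    # 只取消 bk6 这一个别名
$ unalias -a     # 取消当前会话中的全部别名
```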
另一个值得分享的是你可以将别名用于常见的输入错误。例如:
```
alias mroe='more'
```
_最后,还值得注意的是,并非每个人都喜欢使用别名。他们中的大部分人认为,一旦你习惯了自己为了简便而定义的别名,当你在其它不存在这些别名、而且不允许你创建别名的系统中工作时,就会变得非常困难。关于为什么一些专家不推荐使用别名的更多、更准确的原因,你可以到[这里][4]查看。_
### 总结
就像我们之前文章讨论过的 `CDPATH` 环境变量,别名也是一把应该谨慎使用的双刃剑。尽管如此也别太丧气,因为每个东西都有它自己的好处和劣势。遇到类似别名的概念时,更多的练习和完备的知识才是重点。
那么这就是该系列指南的最后章节。希望你喜欢它并能从中学到新的东西/概念。如果你有任何疑问或者问题,请在下面的评论框中和我们(以及其他人)分享。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/command-line-aliases-in-linux/
作者:[Ansh][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/command-line-aliases-in-linux/
[1]:https://www.howtoforge.com/tutorial/command-line-aliases-in-linux/#command-line-aliases-in-linux
[2]:https://www.howtoforge.com/tutorial/command-line-aliases-in-linux/#related-details
[3]:https://www.howtoforge.com/tutorial/command-line-aliases-in-linux/#conclusion
[4]:http://unix.stackexchange.com/questions/66934/why-is-aliasing-over-standard-commands-not-recommended
[5]:https://www.howtoforge.com/tutorial/linux-command-line-navigation-tips-and-tricks-part-1/

View File

@ -0,0 +1,126 @@
NMAP 常用扫描简介(二)
=====================
在我们之前的 [NMAP 安装][1]一文中,列出了 10 种不同的 ZeNMAP 扫描模式,大多数的模式使用了不同的参数。各种不同参数代表执行不同的扫描模式。这篇文章将介绍最后剩下的两种常用扫描类型。
### 四种通用扫描类型
下面列出了最常用的四种扫描类型:
1. PING 扫描(`-sP`
2. TCP SYN 扫描(`-sS`
3. TCP Connect() 扫描(`-sT`
4. UDP 扫描(`-sU`
当我们利用 NMAP 来执行扫描的时候,这四种扫描类型是我们需要熟练掌握的。更重要的是需要知道这些命令做了什么,并且需要知道这些命令是怎么做的。在这篇文章中将介绍两种 TCP 扫描 — TCP SYN 扫描和 TCP Connect() 扫描。
([阅读 NMAP 常用扫描简介(一)][2])
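在深入之前,先把这四种扫描对应的基本命令列在这里(示意,目标地址是假设的):

```
$ sudo nmap -sP 10.0.0.0/24    # PING 扫描,探测网段内的存活主机
$ sudo nmap -sS 10.0.0.2       # TCP SYN 扫描(默认方式,需要 root 权限)
$ nmap -sT 10.0.0.2            # TCP Connect() 扫描,不需要 root 权限
$ sudo nmap -sU 10.0.0.2       # UDP 扫描(需要 root 权限)
```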
### TCP SYN 扫描 -sS
TCP SYN 扫描是默认的 NMAP 扫描方式。为了运行 TCP SYN 扫描,你需要有 Root 权限。
TCP SYN 扫描的目的是找到被扫描系统上的已开启端口。使用 NMAP 可以扫描位于防火墙另一侧的系统。当扫描经过防火墙时,扫描时间会延长,因为数据包的传输会变慢。
TCP SYN 扫描的工作方式是启动一个“三次握手”。正如在另一篇文章中所述“三次握手”发生在两个系统之间。首先源系统发送一个包到目标系统这是一个同步SYN请求。然后目标系统将通过同步/应答SYN/ACK响应。接下来源系统将通过应答ACK来响应从而建立起一个通信连接然后可以在两个系统之间传输数据。
TCP SYN 扫描通过执行下面的步骤来进行工作:
1. 源系统向目标系统发送一个同步请求,该请求中包含一个端口号。
2. 如果添加在上一步中的所请求的端口号是开启的,那么目标系统将通过同步/应答SYN/ACK来响应源系统。
3. 源系统通过重置RST来响应目标系统从而断开连接。
4. 目标系统可以通过重置/应答RST/ACK来响应源系统。
这种连接已经开始建立,所以这被认为是半开放连接。因为连接状态是由 NMAP 来管理的,所以你需要有 Root 权限。
如果被扫描的端口是关闭的,那么将执行下面的步骤:
1. 源系统发送一个同步SYN请求到目标系统该请求中包含一个端口号。
2. 目标系统通过重置RST响应源系统因为该端口是关闭的。
如果目标系统处于防火墙之后,那么 ICMP 传输或响应会被防火墙禁止,此时,会执行下面的步骤:
1. 源系统发送一个同步SYN请求到目标系统该请求中包含一个端口号。
2. 没有任何响应,因为请求被防火墙过滤了。
在这种情况下,端口可能是被过滤、或者可能打开、或者可能没打开。防火墙可以设置禁止指定端口所有包的传出。防火墙可以禁止所有传入某个指定端口的包,因此目标系统不会接收到请求。
**注:**无响应可能发生在一个启用了防火墙的系统上。即使在本地网络,你也可能会发现被过滤的端口。
我将像图片1中那样执行对单一系统10.0.0.2)的 TCP SYN 扫描。使用命令 `sudo nmap -sS <IP 地址>` 来执行扫描。`<IP 地址>` 可以改为一个单一 IP 地址像图片1那样也可以使用一组 IP 地址。
![Figure 01.jpg](https://www.linuxforum.com/attachments/figure-01-jpg.119/)
*图片1*
你可以看到它表明 997 个被过滤端口没有显示在下面。NMAP 找到两个开启的端口139 和 445 。
**注:**请记住NMAP 默认只会扫描最常见的 1000 个端口。以后,我们会介绍可以扫描所有端口或者指定端口的其它扫描方式。
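(这里先给出 `-p` 选项的一个小示意,详细内容留待后续文章:)

```
$ sudo nmap -sS -p 1-65535 10.0.0.2    # 扫描全部 65535 个端口
$ sudo nmap -sS -p 139,445 10.0.0.2    # 只扫描指定的端口
```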
该扫描过程可以被 WireShark 捕获正如图片2所展示的那样。在这里你可以看到对目标系统的初始地址解析协议ARP请求。在 ARP 请求下面的是一长列到达目标系统端口的 TCP 请求。第 4 行是到达 `http-alt` 端口8080。源系统的端口号为 47128。正如图片3所展示的许多 SYN 请求是在收到响应之后才发送的。
![Figure 2.jpg](https://www.linuxforum.com/attachments/figure-2-jpg.120/)
*图片2*
![Figure 3.jpg](https://www.linuxforum.com/attachments/figure-3-jpg.121/)
*图片3*
在图片3的第 50 行和第 51 行你可以看到重置RST包被发送给了目标系统。第 53 行和第 55 行显示目标系统的 RST/ACK重置/应答)。第 50 行是针对 microsoft-ds 端口445第 51 行是针对 netbios-ssn 端口135我们可以看到这两个端口都是打开的。LCTT 译注:在 50 行和 51 行之前,目标系统发回了 SYN/ACK 响应,表示端口打开。)除了这些端口,没有其他 ACK应答是来自目标系统的。每一个请求均可发送超过 1000 次。
正如图片4所展示的,目标系统是 Windows 系统,我关闭了系统防火墙,然后再次执行扫描。现在,我们看到了 997 个已关闭端口不是 997 个被过滤端口。目标系统上的 135 端口之前被防火墙禁止了,现在也是开启的。
![Figure 04.jpg](https://www.linuxforum.com/attachments/figure-04-jpg.122/)
*图片4*
### TCP Connect() 扫描 -sT
尽管 TCP SYN 扫描需要 Root 权限,但 TCP Connect() 扫描并不需要。在这种扫描中会执行一个完整的“三次握手”。因为不需要 Root 权限,所以在无法获取 Root 权限的网络上,这种扫描非常有用。
TCP Connect() 扫描的工作方式也是执行“三次握手”。正如上面描述过的“三次握手”发生在两个系统之间。源系统发送一个同步SYN请求到目标系统。然后目标系统将通过同步应答SYN/ACK来响应。最后源系统通过应答ACK来响应从而建立起连接然后便可在两个系统之间传输数据。
TCP Connect 扫描通过执行下面的步骤来工作:
1. 源系统发送一个同步SYN请求到目标系统该请求中包含一个端口号。
2. 如果上一步所请求的端口是开启的,那么目标系统将通过同步/应答SYN/ACK来响应源系统。
3. 源系统通过应答ACK来响应目标系统从而完成会话创建。
4. 然后源系统向目标系统发送一个重置RST包来关闭会话。
5. 目标系统可以通过同步/应答SYN/ACK来响应源系统。
若步骤 2 执行了,那么源系统就知道在步骤 1 中的指定端口是开启的。
如果端口是关闭的,那么会发生和 TCP SYN 扫描相同的事。在步骤 2 中目标系统将会通过一个重置RST包来响应源系统。
可以使用命令 `nmap -sT <IP 地址>` 来执行扫描。`<IP 地址>`可以改为一个单一 IP 地址,像图片5那样,或者使用一组 IP 地址。
TCP Connect() 扫描的结果可以在图片5中看到。在这里你可以看到有两个已开启端口139 和 445这和 TCP SYN 扫描的发现一样。端口 80 是关闭的。剩下没有显示的端口是被过滤了的。
![Figure 05.jpg](https://www.linuxforum.com/attachments/figure-05-jpg.123/)
*图片5*
让我们关闭防火墙以后再重新扫描一次,扫描结果展示在图片6中。
![Figure 06.jpg](https://www.linuxforum.com/attachments/figure-06-jpg.124/)
*图片6*
关闭防火墙以后,我们可以看到,更多的端口被发现了。就和 TCP SYN 扫描一样,关闭防火墙以后,发现 139 端口和 445 端口是开启的。我们还发现,端口 2869 也是开启的。也发现有 996 个端口是关闭的。现在,端口 80 是 996 个已关闭端口的一部分 — 不再被防火墙过滤。
在一些情况下TCP Connect() 扫描可以在更短的时间内完成。和 TCP SYN 扫描相比TCP Connect() 扫描往往也可以找到更多的已开启端口。
--------------------------------------------------------------------------------
via: https://www.linuxforum.com/threads/nmap-common-scans-part-two.3879/
作者:[Jarret][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxforum.com/members/jarret.268/
[1]:https://www.linuxforum.com/threads/nmap-installation.3431/
[2]:https://linux.cn/article-8346-1.html

View File

@ -1,24 +1,25 @@
印度社区如何支持隐私和软件自由
印度社区如何支持隐私和软件自由
============================================================
![How communities in India support privacy and software freedom](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/people_remote_teams_world.png?itok=wI-GW8zX "How communities in India support privacy and software freedom")
图片提供 opensource.com
印度的自由和开源社区,特别是 Mozilla 和 Wikimedia 社区,它们正在引领两个独特的全球性活动,以提高隐私及支持自由软件。
印度的自由和开源社区,特别是 Mozilla 和 Wikimedia 社区,它们正在引领两个独特的全球性活动,以提高隐私保护及支持自由软件。
[1 月份的隐私月][3]是由印度 Mozilla 社区领导,通过在线和线下活动向群众教育网络隐私。而[ 2 月份的自由月][4]是由[互联网与社会中心][5]领导,教育内容创作者如博主和摄影师就如何在[开放许可证][6]下捐赠内容。
### 1 月隐私月
从[去年开始][7]的 Mozilla “1 月隐私月”用来帮助庆祝年度[数据隐私日][8]。在 2016 年,该活动举办了几场涉及[全球 10 个国家 14,339,443 人][9]的线下和线上活动。其中一个核心组织者,[Ankit Gadgil][10]这样说到:“每天分享一个隐私提示,持续一个月 31 天是有可能的。今年,我们共有三个重点领域,首先是我们让这个运动更加开放和全球化。巴西、意大利、捷克共和国的 Mozilla 社区今年正在积极参与,所有的必要文件都是本地化的,所以我们可以针对更多的用户。其次,我们在线下活动中教导用户营销 Firefox 以及 Mozilla 的其他产品,以便用户可以亲身体验使用这些工具来帮助保护他们的隐私。第三点,我们鼓励大家参加线下活动并记录他们的学习,例如,最近在印度古吉拉特邦的一个节目中,他们使用 Mozilla 产品来教授隐私。”
从[去年开始][7]的 Mozilla “1 月隐私月”用来帮助庆祝年度[数据隐私日Data Privacy Day][8]。在 2016 年,该活动举办了几场涉及[全球 10 个国家 14,339,443 人][9]的线下和线上活动。其中一个核心组织者,[Ankit Gadgil][10] 这样说到:“每天分享一个隐私提示,持续一个月就能分享 31 天。今年,我们共有三个重点领域,首先是我们让这个运动更加开放和全球化。巴西、意大利、捷克共和国的 Mozilla 社区今年正在积极参与,所有必要的文档都是本地化的,所以我们可以针对更多的用户。其次,我们在线下活动中教导用户推广 Firefox 以及 Mozilla 的其他产品,以便用户可以亲身体验使用这些工具来帮助保护他们的隐私。第三点,我们鼓励大家参加线下活动并把他们的学习写到博客里面去,例如,最近在印度古吉拉特邦的一个节目中,他们使用 Mozilla 产品来教授隐私方面的知识。”
今年的活动继续有线下和线上活动。关注 #PrivacyAware 参加。
### 安全提示
#### 安全提示
像 Firefox 这样的 Mozilla 产品具有安全性设置-同时还有[内建][11]还有附加的对残疾人完全[可用][12]的库-这有助于保护用户的隐私和安全性,这些都是协同构建的并且是开源的。
像 Firefox 这样的 Mozilla 产品具有安全性设置-有[内置的][11]还有对残疾人完全[可用][12]的附件库-这有助于保护用户的隐私和安全性,这些都是协同构建的并且是开源的。
[Chrome][14] 和[ Opera][15] 中的 [HTTPS Everywhere][13] 可用于加密用户通信,使外部网站无法查看用户信息。该项目由 [Tor Project][16]以及[电子前沿基金会][17]合作建成。
[Chrome][14] 和[ Opera][15] 中的 [HTTPS Everywhere][13] 插件可用于加密用户通信,使外部网站无法查看用户信息。该项目由 [Tor Project][16] 以及[电子前沿基金会][17]合作建成。
### 2 月自由月
@ -27,15 +28,15 @@
** 参加规则:**
* 你在二月份制作或出版的作品必须获得[自由许可证][1]许可。
* 内容类型包括博客帖子、其他文字和图像。
* 内容类型包括博客文章、其他文字和图像。
多媒体,基于文本的内容,艺术和设计等创意作品可以通过多个[知识共享许可证][20]CC进行许可其他类型的文档可以根据[ GNU 免费文档许可][21]GFDL许可。Wikipedia 上可以找到很好的例子,其内容根据 CC 和 GFDL 许可证获得许可,允许人们使用、分享、混合和分发衍生用于商业上和非商业性的作品。此外,还有允许开发人员共享他们的软件和软件相关文档的[自由软件许可证][22]。
多媒体,基于文本的内容,艺术和设计等创意作品可以通过多个[知识共享许可证][20]CC进行许可其他类型的文档可以根据 [GNU 免费文档许可][21]GFDL许可。Wikipedia 上可以找到很好的例子,其内容根据 CC 和 GFDL 许可证获得许可,允许人们使用、分享、混合和分发衍生用于商业上和非商业性的作品。此外,还有允许开发人员共享他们的软件和软件相关文档的[自由软件许可证][22]。
--------------------------------------------------------------------------------
作者简介:
Subhashish Panigrahi - Subhashish Panigrahi@shhapa是 Mozilla 参与团队的亚洲社区催化师,并 Wikimedia 基金会印度计划的早期扮演了互联网及社会知识获取中心项目官的角色,另外他是一名印度教育者,
Subhashish Panigrahi@shhapa是 Mozilla 参与团队的亚洲社区催化师,并在 Wikimedia 基金会印度计划的早期担任过互联网与社会中心“知识获取”项目官员的角色,另外他还是一名印度教育工作者。
--------------------------------------------------------------------------------
@ -43,7 +44,7 @@ via: https://opensource.com/article/17/1/how-communities-india-support-privacy-s
作者:[Subhashish Panigrahi][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,11 +1,11 @@
bmon - Linux 下一个强大的网络带宽监视和调试工具
bmonLinux 下一个强大的网络带宽监视和调试工具
============================================================
bmon 是类 Unix 系统中一个基于文本,简单但非常强大的 [网络监视和调试工具][1],它能抓取网络相关统计信息并把它们以用户友好的格式展现出来。它是一个可靠高效的带宽监视和网速预估器
bmon 是类 Unix 系统中一个基于文本,简单但非常强大的 [网络监视和调试工具][1],它能抓取网络相关统计信息并把它们以用户友好的格式展现出来。它是一个可靠高效的带宽监视和网速估测工具
它能使用各种输入模块读取输入,并以各种输出模式显示输出,包括交互式用户界面和用于脚本编写的可编程文本输出。
它能使用各种输入模块读取输入,并以各种输出模式显示输出,包括交互式文本用户界面和用于脚本编写的可编程文本输出。
**推荐阅读:** [监控 Linux 性能的20个命令行工具][2]
**推荐阅读:** [监控 Linux 性能的 20 个命令行工具][2]
### 在 Linux 上安装 bmon 带宽监视工具
@ -19,9 +19,9 @@ $ sudo apt-get install bmon [On Debian/Ubuntu/Mint]
另外,你也可以从 [https://pkgs.org/download/bmon][3] 获取对应你 Linux 发行版的 `.rpm` 和 `.deb` 软件包。
如果你想要最新版本例如版本4.0)的 bmon你需要通过下面的命令从源码构建。
如果你想要最新版本(例如版本 4.0)的 bmon你需要通过下面的命令从源码构建。
#### 在 CentOS、RHEL 和 Fedora 中
**在 CentOS、RHEL 和 Fedora 中**
```
$ git clone https://github.com/tgraf/bmon.git
@ -33,7 +33,7 @@ $ sudo make
$ sudo make install
```
#### 在 Debian、Ubuntu 和 Linux Mint 中
**在 Debian、Ubuntu 和 Linux Mint 中**
```
$ git clone https://github.com/tgraf/bmon.git
@ -47,7 +47,7 @@ $ sudo make install
### 如何在 Linux 中使用 bmon 带宽监视工具
通过以下命令运行它(初学者RX 表示每秒接收数据TX 表示每秒发送数据):
通过以下命令运行它(初学者说明RX 表示每秒接收数据TX 表示每秒发送数据):
```
$ bmon
@ -63,19 +63,19 @@ $ bmon
![bmon - Detailed Bandwidth Statistics](http://www.tecmint.com/wp-content/uploads/2017/02/bmon-Detailed-Bandwidth-Statistics.gif)
][5]
`[Shift + ?]` 可以查看快速指南。再次按 `[Shift + ?]` 可以退出(指南)界面。
`Shift + ?` 可以查看快速指南。再次按 `Shift + ?` 可以退出(指南)界面。
[
![bmon - 快速指南](http://www.tecmint.com/wp-content/uploads/2017/02/bmon-Quick-Reference.png)
][6]
bmon 快速指南
*bmon 快速指南*
通过 `Up` 和 `Down` 箭头键可以查看特定网卡的统计信息。但是,要监视一个特定的网卡,你也可以像下面这样作为命令行参数指定。
**推荐阅读:** [监控 Linux 性能的13个工具][7]
**推荐阅读:** [监控 Linux 性能的 13 个工具][7]
标签 `-p` 指定了要显示的网卡,在下面的例子中,我们会监视网卡 `enp1s0`
选项 `-p` 指定了要显示的网卡,在下面的例子中,我们会监视网卡 `enp1s0`
```
$ bmon -p enp1s0
@ -84,31 +84,30 @@ $ bmon -p enp1s0
![bmon - 监控以太网带宽](http://www.tecmint.com/wp-content/uploads/2017/02/bmon-Monitor-Ethernet-Bandwidth.png)
][8]
bmon 监控以太网带宽
*bmon 监控以太网带宽*
要查看每秒位数而不是字节数,可以像下面这样使用 `-b` 标签
要查看每秒位数而不是每秒字节数,可以像下面这样使用 `-b` 选项
```
$ bmon -bp enp1s0
```
我们也可以像下面这样指定每秒的间隔数
我们也可以像下面这样按秒指定刷新间隔时间
```
$ bmon -r 5 -p enp1s0
```
### 如何使用 bmon 的输入模块
### How to Use bmon Input Modules
bmon 有很多能提供网卡统计数据的输入模块,其中包括:
1. netlink - 使用 Netlink 协议从内核中收集网卡和流量控制统计信息。这是默认的输入模块。
2. proc - 从 /proc/net/dev 文件读取网卡统计信息。它被认为是传统界面且提供了向后兼容性。它是 Netlink 接口不可用时的备用模块。
2. proc - 从 `/proc/net/dev` 文件读取网卡统计信息。它被认为是传统界面,且提供了向后兼容性。它是 Netlink 接口不可用时的备用模块。
3. dummy - 这是用于调试和测试的可编程输入模块。
4. null - 停用数据收集。
要查看关于某个模块的其余信息,可以像下面这样使用 “help” 选项调用它:
要查看关于某个模块的其余信息,可以像下面这样使用 `help` 选项调用它:
```
$ bmon -i netlink:help
@ -125,12 +124,11 @@ $ bmon -i proc -p enp1s0
bmon 也使用输出模块显示或者导出上面输入模块收集的统计数据,输出模块包括:
1. curses - 这是一个交互式的文本用户界面,它提供实时的网上估计以及每个属性的图形化表示。这是默认的输出模块。
2. ascii - 这是用于用户查看的简单可编程文本输出。它能显示网卡列表、详细计数以及图形到控制台。当 curses 不可用时这是默认的备选输出模块。
2. ascii - 这是用于用户查看的简单可编程文本输出。它能显示网卡列表、详细计数以及图形到控制台。当 curses 不可用时这是默认的备选输出模块。
3. format - 这是完全脚本化的输出模式,供其它程序使用 - 意味着我们可以在后面的脚本和程序中使用它的输出值进行分析。
4. null - 停用输出。
像下面这样通过 “help” 标签获取更多的模块信息。
To get more info concerning a module, run the it with the “help” flag set like so:
像下面这样通过 `help` 选项获取更多的模块信息。
```
$ bmon -o curses:help
@ -144,7 +142,7 @@ $ bmon -p enp1s0 -o ascii
![bmon - Ascii 输出模式](http://www.tecmint.com/wp-content/uploads/2017/02/bmon-Ascii-Output-Mode.png)
][9]
bmon Ascii 输出模式
*bmon Ascii 输出模式*
我们也可以用 format 输出模式,然后在脚本或者其它程序中使用获取的值:
@ -155,7 +153,7 @@ $ bmon -p enp1s0 -o format
![bmon - Format 输出模式](http://www.tecmint.com/wp-content/uploads/2017/02/bmon-format-output-mode.png)
][10]
bmon Format 输出模式
*bmon Format 输出模式*
想要其它的使用信息、选项和事例,可以阅读 bmon 的 man 手册:
@ -163,7 +161,7 @@ bmon Format 输出模式
$ man bmon
```
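(补充一个脚本化输出的示意,模板令牌取自 bmon 的文档,具体可用的名称请以你所用版本的手册为准:)

```
$ bmon -p enp1s0 -o 'format:fmt=$(element:name) $(attr:rxrate:packets) $(attr:txrate:packets)\n'
```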
访问 bmon 的Github 仓库:[https://github.com/tgraf/bmon][11].
访问 bmon 的 Github 仓库:[https://github.com/tgraf/bmon][11]
就是这些,在不同场景下尝试 bmon 的多个功能吧,别忘了在下面的评论部分和我们分享你的想法。
@ -171,7 +169,7 @@ $ man bmon
译者简介:
Aaron Kili 是一个 Linux 和 F.O.S.S 爱好者、Linux 系统管理员、网络开发人员,现在也是 TecMint 的内容创作者,喜欢和电脑一起工作,坚信共享知识。
Aaron Kili 是一个 Linux 和 F.O.S.S 爱好者、Linux 系统管理员、网络开发人员,现在也是 TecMint 的内容创作者,喜欢和电脑一起工作,坚信共享知识。
--------------------------------------------------------------------------------
@ -179,7 +177,7 @@ via: http://www.tecmint.com/bmon-network-bandwidth-monitoring-debugging-linux/
作者:[Aaron Kili][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,38 +1,30 @@
### 在Linux上使用Nginx和Gunicorn托管Django
在 Linux 上使用 Nginx 和 Gunicorn 托管 Django 应用
==========
![](https://linuxconfig.org/images/gunicorn_logo.png?58963dfd)
内容
* * [1. 介绍][4]
* [2. Gunicorn][5]
* [2.1. 安装][1]
* [2.2. 配置][2]
* [2.3. 运行][3]
* [3. Nginx][6]
* [4. 结语][7]
### 介绍
托管Django Web应用程序相当简单虽然它比标准的PHP应用程序更复杂一些。 处理带Web服务器的Django接口的方法有很多。 Gunicorn就是其中最简单的一个。
托管 Django Web 应用程序相当简单,虽然它比标准的 PHP 应用程序更复杂一些。 让 Web 服务器对接 Django 的方法有很多。 Gunicorn 就是其中最简单的一个。
GunicornGreen Unicorn的缩写在你的Web服务器Django之间作为中间服务器使用在这里Web服务器就是Nginx。 Gunicorn服务于应用程序而Nginx处理静态内容。
GunicornGreen Unicorn 的缩写)在你的 Web 服务器 Django 之间作为中间服务器使用在这里Web 服务器就是 Nginx。 Gunicorn 服务于应用程序,而 Nginx 处理静态内容。
### Gunicorn
### 安装
#### 安装
使用Pip安装Gunicorn是超级简单的。 如果你已经使用virtualenv搭建好了你的Django项目那么你就有了Pip并且应该熟悉Pip的工作方式。 所以在你的virtualenv中安装Gunicorn。
使用 Pip 安装 Gunicorn 是超级简单的。 如果你已经使用 virtualenv 搭建好了你的 Django 项目,那么你就有了 Pip并且应该熟悉 Pip 的工作方式。 所以,在你的 virtualenv 中安装 Gunicorn。
```
$ pip install gunicorn
```
### 配置
#### 配置
Gunicorn 最有吸引力的一个地方就是它的配置非常简单。处理配置最好的方法就是在Django项目的根目录下创建一个名叫Gunicorn的文件夹。然后 在该文件夹内,创建一个配置文件。
Gunicorn 最有吸引力的一个地方就是它的配置非常简单。处理配置最好的方法就是在 Django 项目的根目录下创建一个名叫 `Gunicorn` 的文件夹。然后在该文件夹内,创建一个配置文件。
在本篇教程中,配置文件名称是`gunicorn-conf.py`。在改文件中,创建类似于下面的配置
在本篇教程中,配置文件名称是 `gunicorn-conf.py`。在该文件中,创建类似于下面的配置:
```
import multiprocessing
@ -42,25 +34,27 @@ workers = multiprocessing.cpu_count() * 2 + 1
reload = True
daemon = True
```
在上述配置的情况下Gunicorn会在`/tmp/`目录下创建一个名为`gunicorn1.sock`的Unix套接字。 还会启动一些工作进程进程数量相当于CPU内核数量的2倍。 它还会自动重新加载并作为守护进程运行。
### 运行
在上述配置的情况下Gunicorn 会在 `/tmp/` 目录下创建一个名为 `gunicorn1.sock` 的 Unix 套接字。 还会启动一些工作进程,进程数量相当于 CPU 内核数量的 2 倍。 它还会自动重新加载并作为守护进程运行。
Gunicorn的运行命令有点长指定了一些附加的配置项。 最重要的部分是将Gunicorn指向你项目的`.wsgi`文件。
#### 运行
Gunicorn 的运行命令有点长,指定了一些附加的配置项。 最重要的部分是将 Gunicorn 指向你项目的 `.wsgi` 文件。
```
gunicorn -c gunicorn/gunicorn-conf.py -D --error-logfile gunicorn/error.log yourproject.wsgi
```
上面的命令应该从项目的根目录运行。 Gunicorn会使用你用`-c`选项创建的配置。 `-D`再次指定gunicorn为守护进程。 最后一部分指定Gunicorn的错误日志文件在`Gunicorn`文件夹中的位置。 命令结束部分就是为Gunicorn指定`.wsgi`file的位置。
上面的命令应该从项目的根目录运行。`-c` 选项告诉 Gunicorn 使用你创建的配置文件。`-D` 再次指定 gunicorn 为守护进程。`--error-logfile` 则指定了错误日志文件在你创建的 `Gunicorn` 文件夹中的位置。命令的最后一部分为 Gunicorn 指定了 `.wsgi` 文件的位置。
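(补充一个可选的验证步骤,原文没有这一部分:)启动之后,可以确认 Gunicorn 的工作进程和套接字是否就绪:

```
$ ps aux | grep gunicorn        # 确认工作进程已经启动
$ ls -l /tmp/gunicorn1.sock     # 确认上文配置的 Unix 套接字已创建
$ curl --unix-socket /tmp/gunicorn1.sock http://localhost/   # curl 7.40+ 可以直接经套接字访问应用
```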
### Nginx
现在Gunicorn配置好了并且已经开始运行了你可以设置Nginx连接它为你的静态文件提供服务。 本指南假定你已经配置了Nginx而且你通过它托管的站点使用了单独的服务块。 它还将包括一些SSL信息。
现在 Gunicorn 配置好了并且已经开始运行了,你可以设置 Nginx 连接它,为你的静态文件提供服务。 本指南假定你已经配置 Nginx而且你通过它托管的站点使用了单独的 server 块。 它还将包括一些 SSL 信息。
如果你想知道如何让你的网站获得免费的SSL证书请查看我们的[LetsEncrypt指南][8]。
如果你想知道如何让你的网站获得免费的 SSL 证书,请查看我们的 [Let'sEncrypt 指南][8]。
```nginx
# 连接到Gunicorn
# 连接到 Gunicorn
upstream yourproject-gunicorn {
server unix:/tmp/gunicorn1.sock fail_timeout=0;
}
@ -83,7 +77,7 @@ server {
access_log /var/log/nginx/yourwebsite.access_log main;
error_log /var/log/nginx/yourwebsite.error_log info;
# 将nginx指向你的ssl证书
# 告诉 nginx 你的 ssl 证书
ssl on;
ssl_certificate /etc/letsencrypt/live/yourwebsite.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourwebsite.com/privkey.pem;
@ -91,7 +85,7 @@ server {
# 设置根目录
root /var/www/yourvirtualenv/yourproject;
# 为Nginx指定静态文件路径
# 为 Nginx 指定静态文件路径
location /static/ {
# Autoindex the files to make them browsable if you want
autoindex on;
@ -104,7 +98,7 @@ server {
proxy_ignore_headers "Set-Cookie";
}
# 为Nginx指定你上传文件的路径
# 为 Nginx 指定你上传文件的路径
location /media/ {
# Autoindex if you want
autoindex on;
@ -122,7 +116,7 @@ server {
try_files $uri @proxy_to_app;
}
# 将请求传递给Gunicorn
# 将请求传递给 Gunicorn
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
@ -130,7 +124,7 @@ server {
proxy_pass http://yourproject-gunicorn;
}
# 缓存HTMLXML和JSON
# 缓存 HTML、XML 和 JSON
location ~* \.(html?|xml|json)$ {
expires 1h;
}
@ -144,18 +138,19 @@ server {
}
}
```
配置文件有点长,但是还可以更长一些。其中重点是指向 Gunicorn 的`upstream`块以及将流量传递给 Gunicorn 的`location`块。大多数其他的配置项都是可选,但是你应该按照一定的形式来配置。配置中的注释应该可以帮助你了解具体细节。
配置文件有点长,但是还可以更长一些。其中重点是指向 Gunicorn 的 `upstream` 块以及将流量传递给 Gunicorn 的 `location` 块。大多数其他的配置项都是可选,但是你应该按照一定的形式来配置。配置中的注释应该可以帮助你了解具体细节。
保存文件之后你可以重启Nginx让修改的配置生效。
保存文件之后,你可以重启 Nginx让修改的配置生效。
```
# systemctl restart nginx
```
一旦Nginx在线生效你的站点就可以通过域名访问了。
一旦 Nginx 在线生效,你的站点就可以通过域名访问了。
### 结语
如果你想深入研究Nginx可以做很多事情。但是上面提供的配置是一个很好的开始并且你可以用于实践中。 如果你习惯于Apache和臃肿的PHP应用程序,像这样的服务器配置的速度应该是一个惊喜。
如果你想深入研究Nginx 可以做很多事情。但是,上面提供的配置是一个很好的开始,并且你可以用于实践中。 如果你见惯了 Apache 和臃肿的 PHP 应用程序,像这样的服务器配置的速度应该是一个惊喜。
--------------------------------------------------------------------------------
@ -163,7 +158,7 @@ via: https://linuxconfig.org/hosting-django-with-nginx-and-gunicorn-on-linux
作者:[Nick Congleton][a]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,122 @@
OpenSUSE Leap 42.2 Gnome - 好一些但还不够
==============
是时候再给 Leap 一个机会了。让我再罗嗦一下。给 Leap 一次机会吧。是的。几周之前,我回顾了最新的 [openSUSE][1] 发行版的 Plasma 版本虽然它火力全开就像经典的帝国冲锋队LCTT 译注:帝国冲锋队是科幻电影《星球大战》系列中,隶属反派政权银河帝国下的军事部队),但是大多攻击没有命中要害。这是一个相对普通的,该有的都有,但是缺少精华的发行版。
我现在将做一个 Gnome 的实验。为这个发行版搭载一个全新的桌面环境,看看它怎么样。我们最近在 CentOS 上做了一些类似的事情,但是得到了出乎预料的结果。愿幸运之神庇佑我们。现在开始动手。
![Teaser](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-teaser.jpg)
### 安装 Gnome 桌面
你可以通过使用 YaST > Software Management 中的 Patterns 标签来安装新的桌面环境。可以安装 Gnome、 Xfce、 LXQt、 MATE 以及其它桌面环境。这是一个非常简单的过程,需要大概 900M 的磁盘空间。没有遇到错误,也没有警告。
![Patterns, Gnome](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-patterns.png)
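(补充:如果你更喜欢命令行,也可以用 zypper 安装对应的 pattern下面是一个示意具体的 pattern 名称可以先搜索确认:)

```
$ sudo zypper search -t pattern | grep -i gnome   # 查看 GNOME 相关的 pattern
$ sudo zypper install -t pattern gnome            # 安装 GNOME 桌面 pattern
```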
#### Gnome 的美化工作
我花费了一点时间来征服 openSUSE。鉴于我在 [Fedora 24][2] 上拥有大量做相同工作的经验,[只需要一点点时间][3],这个过程相当快而简单。首先,获得一些 Gnome [扩展][4]。“慢品一刻,碗筷轻碰”。
对于“餐后甜点”,你可以开启 Gnome Tweak Tool然后添加一些窗口按钮。最重要的是要安装那个救命的插件 - [Dash to Dock][5],有了它你才能像正常人一样工作,而不必忍受那个名为 Activities 的低效设计。“饭后消食”,就是调整一些新的图标,这简直易如反掌。这个工作最终耗时 42 分 12 秒。明白了吗42.2 分钟。天啊!这是巧合吗!
![Gnome 1](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-1.jpg)
![Gnome 2](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-2.jpg)
#### 别的定制和增强
我实际上在 Gnome 中使用了 Breeze 窗口装饰,而且工作地挺好。这比你尝试去个性化 Plasma 要好的多。看哭了,这个界面看起来如此阴暗而压抑。
![Gnome 3](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-3.jpg)
![Gnome 4](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-4.jpg)
#### 智能手机支持
比 Plasma 好太多了 - [iPhone][7] 和 [Ubuntu Phone][8] 都可以正常地识别和挂载。这让我想起 CentOS 7.2 上 [KDE][9] 和 [Gnome][10] 的行为也是互有差异、不一致的,所以这个问题显然跨越了特定平台的界限。桌面环境有这个通病。
![Ubuntu Phone](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-ubuntu-phone.jpg)
一个显著的 bug 是你需要时常清理图标的缓存,否则你会在文件管理器里面看到老的图标。关于这个问题,我很快会有一篇文章来说明。
#### 多媒体
不幸的是Gnome 出现了和 Plasma 相同的问题。缺少依赖软件包。没有 H.264 编码,意味着你不可以看 99% 你需要看的东西。这就像是,一个月没有网。
![Failed codecs setup](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-failed-codecs.png)
#### 资源利用
Gnome 版本比 Plasma 更快,即使把 Plasma 的窗口合成器关掉、并忽略 KWin 的崩溃和反应迟缓,也是如此。CPU 的利用率在 2-3%,内存使用率徘徊在 900M 左右。我觉得我的配置应该处于中等水平。
![Resources](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-resources.jpg)
#### 电池消耗
实际上 Gnome 的电池损耗比 Plasma 严重。我不确定是为什么。但是即使屏幕亮度调低到 50%Leap Gnome 只能让我的 G50 续航大约 2.5 小时。我没有深究电池消耗在什么地方,但是它确实消耗得很快。
![Battery usage](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-battery.jpg)
#### 奇怪的问题
Gnome 也有一些小毛病和错误。比如说,桌面不停地请求无线网络的密码,可能是我的 Gnome 没有很好地处理 KWallet 或者别的什么。此外,在我注销 Plasma 会话之后KWin 进程仍然在运行,消耗了 100% 的 CPU 直到我杀死这个进程。当然,这不是 Gnome 的锅,真是一件丢人的事。
![KWin leftover](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-kwin-leftover.jpg)
#### 硬件支持
挂起和恢复,一切顺利。我至今没有在 Gnome 版本中体验过断网。网络摄像头同样工作。总之,硬件支持貌似相当好。蓝牙也正常工作。也许我们应该标注它是联网的。机智~
![Webcam](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-webcam.jpg)
![Bluetooth works](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-bt-works.png)
#### 网络
利用 Samba 打印?你会遇到和 [Yakkety Yak][11] 中一样差劲的小程序,它把桌面弄得一团乱。但随后,它又告诉你没有打印共享,请检查防火墙!无论如何,现在已经不再是 1999 年了。能够打印不应再是一项特权,而是一项基本权利,没有必要为此闹革命。但是,我没有对此截图。太糟了,哎。
#### 剩下的呢?
总而言之,这是一个标准的 Gnome 桌面,需要稍微动点脑子才能搞定和高效一些,安装一些扩展可以把它弄得服服帖帖。它比 Plasma 更友好一些,你可以用在大多数日常的工作中,整体来说你可以得到更好的体验。然后你会发现它的选项要比 Plasma 少得多。但是你要记住,你的桌面不再每分钟都反应迟缓,这确实是最棒的。
### 结论
OpenSUSE Leap 42.2 Gnome 是一个比 Plasma 各方面要更好的产品,而且没有错误。它更稳定,更快,更加优雅,更加容易定制,而且那些关键的日常功能都肯定可以工作。例如,你可以打印到 Samba如果你不用防火墙拷贝文件到 Samba 服务器不会丢掉时间戳。使用蓝牙、使用你的 Ubuntu 手机,这些都不会出现很严重的崩溃。整个这一套是功能完善、并且支持良好的。
然而Leap 仍然只是一个还算不错的发行版。它在其他发行版同样做得出色的核心领域表现得体面而优雅,但糟糕的 QA 直接导致了许多重大而明显的问题。至少在过去这些年里,质量的缺失几乎已经成为 openSUSE 一个不变的元素。偶尔,你会得到一个不错的早期产品,但它们中大多都很普通。这大概就是最能定义 openSUSE Leap 的词普通。你应该自己去尝试和观察你很有可能不会感到惊讶。这个结果很遗憾因为对我来说SUSE 有一些亮点,但不足以让我爱上它。给个 6 分吧,简直是浪费情绪。
再见了您呐。
--------------------------------------------------------------------------------
作者简介:
我是 Igor Ljubuncic。现在大约 38 岁,已婚但还没有孩子。我现在在一个大胆创新的云科技公司做首席工程师。直到大约 2015 年初,我还在一个全世界最大的 IT 公司之一中做系统架构工程师,和一个工程计算团队开发新的基于 Linux 的解决方案,优化内核以及攻克 Linux 的问题。在那之前,我是一个为高性能计算环境设计创新解决方案的团队的技术领导。还有一些其他花哨的头衔,包括系统专家、系统程序员等等。所有这些都曾是我的爱好,但从 2008 年开始成为了我的付费工作。还有什么比这更令人满意的呢?
从 2004 年到 2008 年间,我曾通过作为医学影像行业的物理学家来糊口。我的工作专长集中在解决问题和算法开发。为此,我广泛地使用了 Matlab主要用于信号和图像处理。另外我得到了几个主要的工程方法学的认证包括 MEDIC 六西格玛绿带、试验设计以及统计工程学。
-------------
via: http://www.dedoimedo.com/computers/opensuse-42-2-gnome.html
作者:[Igor Ljubuncic][a]
译者:[mudongliang](https://github.com/mudongliang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.dedoimedo.com/computers/opensuse-42-2.html
[2]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
[3]:http://www.dedoimedo.com/computers/fedora-24-pimp.html
[4]:http://www.dedoimedo.com/computers/fedora-23-extensions.html
[5]:http://www.dedoimedo.com/computers/gnome-3-dash.html
[6]:http://www.dedoimedo.com/computers/fedora-24-pimp-more.html
[7]:http://www.dedoimedo.com/computers/iphone-6-after-six-months.html
[8]:http://www.dedoimedo.com/computers/ubuntu-phone-sep-2016.html
[9]:http://www.dedoimedo.com/computers/lenovo-g50-centos-kde.html
[10]:http://www.dedoimedo.com/computers/lenovo-g50-centos-gnome.html
[11]:http://www.dedoimedo.com/computers/ubuntu-yakkety-yak.html

View File

@ -0,0 +1,129 @@
使用 tmux 打造更强大的终端
============================
![](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/tmux-945x400.jpg)
一些 Fedora 用户把大部分甚至是所有时间花费在了[命令行][4]终端上。 终端可让您访问整个系统,以及数以千计的强大的实用程序。 但是,它默认情况下一次只显示一个命令行会话。 即使有一个大的终端窗口,整个窗口也只会显示一个会话。 这浪费了空间,特别是在大型显示器和高分辨率的笔记本电脑屏幕上。 但是,如果你可以将终端分成多个会话呢? 这正是 tmux 最方便的地方,或者说不可或缺的。
### 安装并启动 tmux
tmux 应用程序的名称来源于终端terminal复用器muxer或多路复用器multiplexer。 换句话说,它可以将您的单终端会话分成多个会话。 它管理窗口和窗格:
- 窗口window是一个单一的视图 - 也就是终端中显示的各种东西。
- 窗格pane是该视图的一部分通常是一个终端会话。
开始前,请在系统上安装 `tmux` 应用程序。 你需要为您的用户帐户设置 `sudo` 权限(如果需要,请[查看本文][5]获取相关说明)。
```
sudo dnf -y install tmux
```
运行 `tmux` 程序:
```
tmux
```
### 状态栏
首先,似乎什么也没有发生,除了出现在终端的底部的状态栏:
![Start of tmux session](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-41.png)
底部栏显示:
* `[0]` 这是 `tmux` 服务器创建的第一个会话。编号从 0 开始。`tmux` 服务器会跟踪所有的会话确认其是否存活。
* `0:testuser@scarlett:~` 有关该会话的第一个窗口的信息。编号从 0 开始。这表示窗口的活动窗格中的终端归主机名 `scarlett``testuser` 用户所有。当前目录是 `~` (家目录)。
* `*` 显示你目前在此窗口中。
* `“scarlett.internal.fri”` 你正在使用的 `tmux` 服务器的主机名。
* 此外,还会显示该特定主机上的日期和时间。
当你向会话中添加更多窗口和窗格时,信息栏将随之改变。
### tmux 基础知识
把你的终端窗口拉伸到最大。现在让我们尝试一些简单的命令来创建更多的窗格。默认情况下,所有的命令都以 `Ctrl+b` 开头。
* 敲 `Ctrl+b, "` 水平分割当前单个窗格。 现在窗口中有两个命令行窗格,一个在顶部,一个在底部。请注意,底部的新窗格是活动窗格。
* 敲 `Ctrl+b, %` 垂直分割当前单个窗格。 现在你的窗口中有三个命令行窗格,右下角的窗格是活动窗格。
![tmux window with three panes](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-59.png)
注意当前窗格周围高亮显示的边框。要浏览所有的窗格,请做以下操作:
* 敲 `Ctrl+b`,然后点箭头键
* 敲 `Ctrl+b, q`,数字会短暂地出现在窗格上。在这期间,你可以敲你想要浏览的窗格所对应的数字。
现在,尝试使用不同的窗格运行不同的命令。例如以下这样的:
* 在顶部窗格中使用 `ls` 命令显示目录内容。
* 在左下角的窗格中使用 `vi` 命令,编辑一个文本文件。
* 在右下角的窗格中运行 `top` 命令监控系统进程。
屏幕将会如下显示:
![tmux session with three panes running different commands](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-57-51.png)
到目前为止,这个示例中只是用了一个带多个窗格的窗口。你也可以在会话中运行多个窗口。
* 为了创建一个新的窗口,请敲`Ctrl+b, c` 。请注意,状态栏显示当前有两个窗口正在运行。(敏锐的读者会看到上面的截图。)
* 要移动到上一个窗口,请敲 `Ctrl+b, p`
* 要移动到下一个窗口,请敲 `Ctrl+b, n`
* 要立即移动到特定的窗口,请敲 `Ctrl+b` 然后跟上窗口编号。
如果你想知道如何关闭窗格,只需要使用 `exit` 、`logout`,或者 `Ctrl+d` 来退出特定的命令行 shell。一旦你关闭了窗口中的所有窗格那么该窗口也会消失。
### 脱离和附加
`tmux` 最强大的功能之一是能够脱离和重新附加到会话。 当你脱离的时候,你可以离开你的窗口和窗格独立运行。 此外,您甚至可以完全注销系统。 然后,您可以登录到同一个系统,重新附加到 `tmux` 会话,查看您离开时的所有窗口和窗格。 脱离的时候你运行的命令一直保持运行状态。
为了脱离一个会话,请敲 `Ctrl+b, d`。然后会话消失,你重新返回到一个标准的单一 shell。如果要重新附加到会话中使用以下命令
```
tmux attach-session
```
当你连接到主机的网络不稳定时,这个功能就像救生员一样有用。如果连接失败,会话中的所有的进程都会继续运行。只要连接恢复了,你就可以恢复正常,就好像什么事情也没有发生一样。
如果这些功能还不够,除了每个会话可以拥有多个窗口和窗格之外,你还可以同时运行多个会话。你可以列出这些会话,然后通过编号或者名称附加到正确的会话上:
```
tmux list-sessions
```
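(补充一个示意:)你也可以在创建会话时给它命名,之后用 `-t` 选项按名称或编号附加到指定会话:

```
$ tmux new-session -s work        # 创建一个名为 work 的新会话
$ tmux attach-session -t work     # 按名称重新附加
$ tmux attach-session -t 0        # 按编号重新附加
```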
### 延伸阅读
本文只触及的 `tmux` 的表面功能。你可以通过其他方式操作会话:
* 将一个窗格和另一个窗格交换
* 将窗格移动到另一个窗口中(可以在同一个会话中也可以在不同的会话中)
* 设定快捷键自动执行你喜欢的命令
* 在 `~/.tmux.conf` 文件中配置你最喜欢的配置项,这样每一个会话都会按照你喜欢的方式呈现(参见下面的示例)
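例如,一个最小的 `~/.tmux.conf` 可能是这样的(以下配置项只是假设性示例,并非推荐配置):

```
# ~/.tmux.conf
set -g prefix C-a      # 把前缀键从 Ctrl+b 换成 Ctrl+a
unbind C-b
bind C-a send-prefix
set -g mouse on        # 启用鼠标支持tmux 2.1+
```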
有关所有命令的完整说明,请查看以下参考:
* 官方[手册页][1]
* `tmux` [电子书][2]
--------------------------------------------------------------------------------
作者简介:
Paul W. Frields 自 1997 年以来一直是 Linux 用户和爱好者,并于 2003 年加入 Fedora 项目,这个项目刚推出不久。他是 Fedora 项目委员会的创始成员在文档网站发布宣传工具链开发和维护软件方面都有贡献。他于2008 年 2 月至 2010 年 7 月加入 Red Hat担任 Fedora 项目负责人,并担任 Red Hat 的工程经理。目前他和妻子以及两个孩子居住在弗吉尼亚。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
作者:[Paul W. Frields][a]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pfrields/
[1]: http://man.openbsd.org/OpenBSD-current/man1/tmux.1
[2]: https://pragprog.com/book/bhtmux2/tmux-2
[3]: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
[4]: http://www.cryptonomicon.com/beginning.html
[5]: https://fedoramagazine.org/howto-use-sudo/

View File

@ -1,38 +1,25 @@
理解 sudo 与 su 之间的区别
深入理解 sudo 与 su 之间的区别
============================================================
### 本文导航
1. [Linux su 命令][7]
1. [su -][1]
2. [su -c][2]
2. [Sudo vs Su][8]
2. [Sudo vs Su][8]
1. [关于密码][3]
2. [默认行为][4]
3. [日志记录][5]
4. [灵活性][6]
3. [Sudo su][9]
在[早前的一篇文章][11]中,我们深入讨论了 `sudo` 命令的相关内容。同时,在该文章的末尾有提到相关的命令 `su` 的部分内容。本文,我们将详细讨论关于 su 命令与 sudo 命令之间的区别。
在[早前的一篇文章][11]中,我们深入讨论了 `sudo` 命令的相关内容。同时,在该文章的末尾有提到相关的命令 `su` 的部分内容。本文,我们将详细讨论关于 `su` 命令与 `sudo` 命令之间的区别。
在开始之前有必要说明一下,文中所涉及到的示例教程都已经在 Ubuntu 14.04 LTS 上测试通过。
### Linux su 命令
su 命令的主要作用是让你可以在已登录的会话中切换到另外一个用户。换句话说,这个工具可以让你在不登出当前用户的情况下登录另外一个用户(以该用户的身份)
`su` 命令的主要作用是让你可以在已登录的会话中切换到另外一个用户。换句话说,这个工具可以让你在不登出当前用户的情况下登录为另外一个用户。
su 命令经常被用于切换到超级用户或 root 用户(因为在命令行下工作,经常需要 root 权限),但是 - 正如前面所提到的 - su 命令也可以用于切换到任意非 root 用户。
`su` 命令经常被用于切换到超级用户或 root 用户(因为在命令行下工作,经常需要 root 权限),但是 - 正如前面所提到的 - su 命令也可以用于切换到任意非 root 用户。
如何使用 su 命令切换到 root 用户,如下:
如何使用 `su` 命令切换到 root 用户,如下:
[
![不带命令行参数的 su 命令](https://www.howtoforge.com/images/sudo-vs-su/su-command.png)
][12]
如上su 命令要求输入的密码是 root 用户密码。所以,一般 su 命令需要输入目标用户的密码。在输入正确的密码之后su 命令会在终端的当前会话中打开一个子会话。
如上,`su` 命令要求输入的密码是 root 用户密码。所以,一般 `su` 命令需要输入目标用户的密码。在输入正确的密码之后,`su` 命令会在终端的当前会话中打开一个子会话。
### su -
#### su -
还有一种方法可以切换到 root 用户:运行 `su -` 命令,如下:
@ -40,41 +27,36 @@ su 命令经常被用于切换到超级用户或 root 用户(因为在命令
![su - 命令](https://www.howtoforge.com/images/sudo-vs-su/su-hyphen-command.png)
][13]
那么,`su` 命令与 `su -` 命令之间有什么区别呢?前者在切换到 root 用户之后仍然保持旧的或原始用户的环境,而后者则是创建一个新的环境(由 root 用户 ~/.bashrc 文件所设置的环境),相当于使用 root 用户正常登录(从登录屏幕显示登录)。
那么,`su` 命令与 `su -` 命令之间有什么区别呢?前者在切换到 root 用户之后仍然保持旧的(或者说原始用户)的环境,而后者则是创建一个新的环境(由 root 用户 `~/.bashrc` 文件所设置的环境),相当于使用 root 用户正常登录(从登录屏幕登录)。
`su` 命令手册页很清楚地说明了这一点:
```
可选参数 `-` 可提供的环境为用户在直接登录时的环境。
```
> 可选参数 `-` 可提供的环境为用户在直接登录时的环境。
因此,你会觉得使用 `su -` 登录更有意义。但是,同时存在 `su` 命令,那么大家可能会想知道它在什么时候用到。以下内容摘自[ArchLinux wiki website][14] - 关于 `su` 命令的好处和坏处:
因此,你会觉得使用 `su -` 登录更有意义。但是, `su` 命令也是有用的,那么大家可能会想知道它在什么时候用到。以下内容摘自 [ArchLinux wiki 网站][14] - 关于 `su` 命令的好处和坏处:
* 有的时候,对于系统管理员来讲,使用其他普通用户的 Shell 账户而不是自己的 Shell 账户更会好一些。尤其是在处理用户问题时,最有效的方法就是是:登录目标用户以便重现以及调试问题。
* 有的时候对于系统管理员root来讲使用其他普通用户的 Shell 账户而不是自己的 root Shell 账户更好一些。尤其是在处理用户问题时,最有效的方法就是:登录目标用户以便重现以及调试问题。
* 然而,在多数情况下,当从普通用户切换到 root 用户进行操作时,如果还使用普通用户的环境变量的话,那是不可取甚至是危险的操作。因为是在无意间切换使用普通用户的环境,所以当使用 root 用户进行程序安装或系统更改时,会产生与正常使用 root 用户进行操作时不相符的结果。例如,以普通用户安装程序会给普通用户意外损坏系统或获取对某些数据的未授权访问的能力。
* 然而,在多数情况下,当从普通用户切换到 root 用户进行操作时,如果还使用普通用户的环境变量的话,那是不可取甚至是危险的操作。因为是在无意间使用了普通用户的环境,所以以 root 身份安装程序或更改系统时,可能会产生与正常使用 root 用户环境时不同的结果。例如,这可能给普通用户装上能够意外损坏系统的程序,或使其获得对某些数据的未授权访问。
注意:如果你想在 `su -` 命令的 `-` 后面传递更多的参数,那么你必须使用 `su -l` 而不是 `su -`。以下是 `-``-l` 命令行选项的说明:
注意:如果你想在 `su -` 命令后面传递更多的参数,那么你必须使用 `su -l` 来实现。以下是 `-``-l` 命令行选项的说明:
> `-`, `-l`, `--login`
```
-, -l, --login
提供相当于用户在直接登录时所期望的环境。
> 提供相当于用户在直接登录时所期望的环境。
当使用 - 时,必须放在 su 命令的最后一个选项。其他选项(-l 和 --login无此限制。
```
> 当使用 - 时,必须放在 `su` 命令的最后一个选项。其他选项(`-l` 和 `--login`)无此限制。
### su -c
#### su -c
还有一个值得一提的 `su` 命令行选项为:`-c`。该选项允许你提供在切换到目标用户之后要运行的命令。
`su` 命令手册页是这样说明:
```
-c, --command COMMAND
使用 -c 选项指定由 Shell 调用的命令。
> `-c`, `--command COMMAND`
被执行的命令无法控制终端。所以,此选项不能用于执行需要控制 TTY 的交互式程序。
```
> 使用 `-c` 选项指定由 Shell 调用的命令。
> 被执行的命令无法控制终端。所以,此选项不能用于执行需要控制 TTY 的交互式程序。
参考示例:
@ -90,11 +72,11 @@ su [target-user] -c [command-to-run]
示例中的 `shell` 类型将会被目标用户在 `/etc/passwd` 文件中定义的登录 shell 类型所替代。
### Sudo vs Su
### sudo vs. su
现在,我们已经讨论了关于 `su` 命令的基础知识,是时候来探讨一下 `sudo``su` 命令之间的区别了。
### 关于密码
#### 关于密码
两个命令的最大区别是:`sudo` 命令需要输入当前用户的密码,`su` 命令需要输入 root 用户的密码。
@ -102,28 +84,27 @@ su [target-user] -c [command-to-run]
此外,如果要撤销特定用户的超级用户/root 用户的访问权限,唯一的办法就是更改 root 密码,然后再告知所有其他用户新的 root 密码。
而使用 `sudo` 命令就不一样了,你可以很好的处理以上的两种情况。鉴于 `sudo` 命令要求输入的是其他用户的密码,所以,不需要共享 root 密码。同时,想要阻止特定用户访问 root 权限,只需要调整 `sudoers` 文件中的相应配置即可。
而使用 `sudo` 命令就不一样了,你可以很好的处理以上的两种情况。鉴于 `sudo` 命令要求输入的是其他用户自己的密码,所以,不需要共享 root 密码。同时,想要阻止特定用户访问 root 权限,只需要调整 `sudoers` 文件中的相应配置即可。
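(补充一个示意:)编辑 `sudoers` 文件时应当使用 `visudo` 命令,它会在保存前检查语法,避免配置错误把你锁在系统外:

```
$ sudo visudo
# 文件中类似下面的行决定了谁拥有 sudo 权限(用户名是假设的示例):
# himanshu ALL=(ALL:ALL) ALL
```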
### 默认行为
#### 默认行为
两个命令之间的另外一个区别是默认行为。`sudo` 命令只允许使用提升的权限运行单个命令,而 `su` 命令会启动一个新的 shell同时允许使用 root 权限运行尽可能多的命令,直到显示退出登录。
两个命令之间的另外一个区别是默认行为。`sudo` 命令只允许使用提升的权限运行单个命令,而 `su` 命令会启动一个新的 shell同时允许使用 root 权限运行尽可能多的命令,直到明确退出登录。
因此,`su` 命令的默认行为是有风险的,因为用户很有可能会忘记他们正在以 root 用户身份进行工作,于是,无意中做出了一些不可恢复的更改(例如:对错误的目录运行 `rm -rf` 命令)。关于为什么不鼓励以 root 用户身份进行工作的详细内容,请参考[这里][10]
因此,`su` 命令的默认行为是有风险的,因为用户很有可能会忘记他们正在以 root 用户身份进行工作,于是,无意中做出了一些不可恢复的更改(例如:对错误的目录运行 `rm -rf` 命令)。关于为什么不鼓励以 root 用户身份进行工作的详细内容,请参考[这里][10]
### 日志记录
#### 日志记录
尽管 `sudo` 命令是以目标用户(默认情况下是 root 用户)的身份执行命令,但是他们会使用 sudoer 所配置的用户名来记录是谁执行命令。而 `su` 命令是无法直接跟踪记录用户切换到 root 用户之后执行了什么操作。
尽管 `sudo` 命令是以目标用户(默认情况下是 root 用户)的身份执行命令,但是它们会使用 `sudoer` 所配置的用户名来记录是谁执行命令。而 `su` 命令是无法直接跟踪记录用户切换到 root 用户之后执行了什么操作。
### 灵活性
#### 灵活性
`sudo` 命令`su` 命令灵活很多,因为你甚至可以限制 sudo 用户可以访问哪些命令。换句话说,用户通过 `sudo` 命令只能访问他们工作需要的命令。而 `su` 命令让用户有权限做任何事情。
`sudo` 命令比 `su` 命令灵活很多,因为你甚至可以限制 sudo 用户可以访问哪些命令。换句话说,用户通过 `sudo` 命令只能访问他们工作需要的命令。而 `su` 命令让用户有权限做任何事情。
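举个例子,下面这条 `/etc/sudoers` 条目(请务必通过 `visudo` 编辑;用户名和命令均为示意)只允许用户 alice 以 root 身份重启 apache2 服务,而不能执行其他特权操作:

```
# 仅授权 alice 以 root 身份执行这一条命令
alice ALL=(root) /usr/sbin/service apache2 restart
```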
### Sudo su
#### sudo su
大概是因为使用 `su` 命令或直接以 root 用户身份登录有风险,所以,一些 Linux 发行版(如 Ubuntu默认禁用 root 用户帐户。鼓励用户在需要 root 权限时使用 `sudo` 命令。
However, you can still do 'su' successfully, i.e, without entering the root password. All you need to do is to run the following command:
然而,您还是可以成功执行 `su` 命令,即不用输入 root 用户的密码。运行以下命令:
然而,您还是可以成功执行 `su` 命令,而不用输入 root 用户的密码。运行以下命令:
```
sudo su
@ -131,7 +112,7 @@ sudo su
由于你使用 `sudo` 运行命令,你只需要输入当前用户的密码。所以,一旦完成操作,`su` 命令将会以 root 用户身份运行,这意味着它不会再要求输入任何密码。
** PS **:如果你想在系统中启用 root 用户帐户(虽然强烈反对,但你还是可以使用 `sudo` 命令或 `sudo su` 命令),你必须手动设置 root 用户密码 可以使用以下命令:
**PS**:如果你想在系统中启用 root 用户帐户(强烈反对,因为你可以使用 `sudo` 命令或 `sudo su` 命令),你必须手动设置 root 用户密码可以使用以下命令:
```
sudo passwd root
@ -139,7 +120,7 @@ sudo passwd root
### 结论
这篇文章以及之前的教程(其中侧重于 `sudo` 命令)应该能给你一个比较好的建议,当你需要可用的工具来提升(或一组完全不同的)权限来执行任务时。 如果您也想分享关于 `su``sudo` 的相关内容或者经验,欢迎您在下方进行评论。
当你需要可用的工具来提升(或一组完全不同的)权限来执行任务时,这篇文章以及之前的教程(其中侧重于 `sudo` 命令)应该能给你一个比较好的建议。 如果您也想分享关于 `su``sudo` 的相关内容或者经验,欢迎您在下方进行评论。
--------------------------------------------------------------------------------
@ -147,7 +128,7 @@ via: https://www.howtoforge.com/tutorial/sudo-vs-su/
作者:[Himanshu Arora][a]
译者:[zhb127](https://github.com/zhb127)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,16 +1,15 @@
Inxi A Powerful Feature-Rich Commandline System Information Tool for Linux
inxi一个功能强大的获取 Linux 系统信息的命令行工具
============================================================
Inxi 最初是为控制台和 IRC网络中继聊天开发的一个强大且优秀的命令行系统信息脚本。可以使用它即时获取用户的硬件和系统信息它也可以用作调试和社区技术支持的工具。
Inxi is a powerful and remarkable command line-system information script designed for both console and IRC (Internet Relay Chat). It can be employed to instantly deduce user system configuration and hardware information, and also functions as a debugging, and forum technical support tool.
使用 Inxi 可以很容易的获取所有的硬件信息硬盘、声卡、显卡、网卡、CPU 和 RAM 等。同时也能够获取大量的操作系统信息比如硬件驱动、Xorg 、桌面环境、内核、GCC 版本,进程,开机时间和内存等信息。
It displays handy information concerning system hardware (hard disk, sound cards, graphic card, network cards, CPU, RAM, and more), together with system information about drivers, Xorg, desktop environment, kernel, GCC version(s), processes, uptime, memory, and a wide array of other useful information.
运行在命令行和 IRC 上的 Inxi 输出略有不同IRC 上会有一些可供用户使用的默认过滤器和颜色选项。支持的 IRC 客户端有BitchX、Gaim/Pidgin、ircII、Irssi、 Konversation、 Kopete、 KSirc、 KVIrc、 Weechat 和 Xchat 以及其它的一些客户端,它们具有展示内置或外部 Inxi 输出的能力。
However, its output slightly differs between the command line and IRC, with a few default filters and color options applicable to IRC usage. The supported IRC clients include: BitchX, Gaim/Pidgin, ircII, Irssi, Konversation, Kopete, KSirc, KVIrc, Weechat, and Xchat plus any others that are capable of showing either built in or external Inxi output.
### 在 Linux 系统上安装 Inxi
### How to Install Inxi in Linux System
Inix is available in most mainstream Linux distribution repositories, and runs on BSDs as well.
大多数主流 Linux 发行版的仓库中都有 Inxi ,包括大多数 BSD 系统。
```
$ sudo apt-get install inxi [On Debian/Ubuntu/Linux Mint]
@ -18,12 +17,15 @@ $ sudo yum install inxi [On CentOs/RHEL/Fedora]
$ sudo dnf install inxi [On Fedora 22+]
```
Before we start using it, we can run the command that follows to check all application dependencies plus recommends, and various directories, and display what package(s) we need to install to add support for a given feature.
在使用 Inxi 之前,用下面的命令查看 Inxi 所有依赖和推荐的应用,以及各种目录,并显示需要安装哪些包来支持给定的功能。
```
$ inxi --recommends
```
Inxi Checking
Inxi 的输出:
```
inxi will now begin checking for the programs it needs to operate. First a check of the main languages and tools
inxi uses. Python is only for debugging data collection.
@ -118,22 +120,22 @@ File: /var/run/dmesg.boot
All tests completed.
```
### Basic Usage of Inxi Tool in Linux
### Inxi 工具的基本用法
Below are some basic Inxi options we can use to collect machine plus system information.
用下面的基本用法获取系统和硬件的详细信息。
#### Show Linux System Information
#### 获取 Linux 系统信息
When run without any flags, Inxi will produce output to do with system CPU, kernel, uptime, memory size, hard disk size, number of processes, client used and inxi version:
Inxi 不加任何选项就能输出下面的信息CPU、内核、开机时长、内存大小、硬盘大小、进程数、登录终端以及 Inxi 版本。
```
$ inxi
CPU~Dual core Intel Core i5-4210U (-HT-MCP-) speed/max~2164/2700 MHz Kernel~4.4.0-21-generic x86_64 Up~3:15 Mem~3122.0/7879.9MB HDD~1000.2GB(20.0% used) Procs~234 Client~Shell inxi~2.2.35
```
#### Show Linux Kernel and Distribution Info
#### 获取内核和发行版本信息
The command below will show sample system info (hostname, kernel info, desktop environment and disto) using the `-S` flag:
使用 Inxi 的 `-S` 选项查看本机系统信息(主机名、内核信息、桌面环境和发行版):
```
$ inxi -S
@ -141,9 +143,10 @@ System: Host: TecMint Kernel: 4.4.0-21-generic x86_64 (64 bit) Desktop: Cinnamon
Distro: Linux Mint 18 Sarah
```
#### Find Linux Laptop or PC Model Information
#### 获取电脑机型
使用 `-M` 选项查看机型(笔记本/台式机)、产品 ID、机器版本、主板、制造商和 BIOS 等信息:
To print machine data-same as product details (system, product id, version, Mobo, model, BIOS etc), we can use the option `-M` as follows:
```
$ inxi -M
@ -151,9 +154,9 @@ Machine: System: LENOVO (portable) product: 20354 v: Lenovo Z50-70
Mobo: LENOVO model: Lancer 5A5 v: 31900059WIN Bios: LENOVO v: 9BCN26WW date: 07/31/2014
```
#### Find Linux CPU and CPU Speed Information
#### 获取 CPU 及主频信息
We can display complete CPU information, including per CPU clock-speed and CPU max speed (if available) with the `-C` flag as follows:
使用 `-C` 选项查看完整的 CPU 信息,包括每核 CPU 的频率及可用的最大主频。
```
$ inxi -C
@ -161,9 +164,9 @@ CPU: Dual core Intel Core i5-4210U (-HT-MCP-) cache: 3072 KB
clock speeds: max: 2700 MHz 1: 1942 MHz 2: 1968 MHz 3: 1734 MHz 4: 1710 MHz
```
#### Find Graphic Card Information in Linux
#### 获取显卡信息
The option `-G` can be used to show graphics card info (card type, display server, resolution, GLX renderer and GLX version) like so:
使用 `-G` 选项查看显卡信息包括显卡类型、显示服务器、系统分辨率、GLX 渲染器和 GLX 版本等等LCTT 译注: GLX 是一个 X 窗口系统的 OpenGL 扩展)。
```
$ inxi -G
@ -173,9 +176,9 @@ Display Server: X.Org 1.18.4 drivers: intel (unloaded: fbdev,vesa) Resolution: 1
GLX Renderer: Mesa DRI Intel Haswell Mobile GLX Version: 3.0 Mesa 11.2.0
```
#### Find Audio/Sound Card Information in Linux
#### 获取声卡信息
To get info about system audio/sound card, we use the `-A` flag:
使用 `-A` 选项查看声卡信息:
```
$ inxi -A
@ -183,9 +186,9 @@ Audio: Card-1 Intel 8 Series HD Audio Controller driver: snd_hda_intel Sound
Card-2 Intel Haswell-ULT HD Audio Controller driver: snd_hda_intel
```
#### Find Linux Network Card Information
#### 获取网卡信息
To display network card info, we can make use of `-N` flag:
使用 `-N` 选项查看网卡信息:
```
$ inxi -N
@ -193,18 +196,17 @@ Network: Card-1: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet Contro
Card-2: Realtek RTL8723BE PCIe Wireless Network Adapter driver: rtl8723be
```
#### Find Linux Hard Disk Information
#### 获取硬盘信息
To view full hard disk information,(size, id, model) we can use the flag `-D`:
使用 `-D` 选项查看硬盘信息大小、ID、型号
```
$ inxi -D
Drives: HDD Total Size: 1000.2GB (20.0% used) ID-1: /dev/sda model: ST1000LM024_HN size: 1000.2GB
```
#### 获取简要的系统信息
#### Summarize Full Linux System Information Together
To show a summarized system information; combining all the information above, we need to use the `-b` flag as below:
使用 `-b` 选项显示上述信息的简要系统信息:
```
$ inxi -b
@ -223,19 +225,18 @@ Drives: HDD Total Size: 1000.2GB (20.0% used)
Info: Processes: 233 Uptime: 3:23 Memory: 3137.5/7879.9MB Client: Shell (bash) inxi: 2.2.35
```
#### Find Linux Hard Disk Partition Details
#### 获取硬盘分区信息
The next command will enable us view complete list of hard disk partitions in relation to size, used and available space, filesystem as well as filesystem type on each partition with the `-p` flag:
使用 `-p` 选项输出完整的硬盘分区信息,包括每个分区的分区大小、已用空间、可用空间、文件系统以及文件系统类型。
```
$ inxi -p
Partition: ID-1: / size: 324G used: 183G (60%) fs: ext4 dev: /dev/sda10
ID-2: swap-1 size: 4.00GB used: 0.00GB (0%) fs: swap dev: /dev/sda9
```
#### 获取完整的 Linux 系统信息
#### Shows Full Linux System Information
In order to show complete Inxi output, we use the `-F` flag as below (note that certain data is filtered for security reasons such as WAN IP):
使用 `-F` 选项可以查看完整的 Inxi 输出(出于安全考虑,某些数据被过滤,比如 WAN IP下面的示例只显示部分输出信息
```
$ inxi -F
@ -264,22 +265,22 @@ Fan Speeds (in rpm): cpu: N/A
Info: Processes: 234 Uptime: 3:26 Memory: 3188.9/7879.9MB Client: Shell (bash) inxi: 2.2.35
```
### Linux System Monitoring with Inxi Tool
### 使用 Inxi 工具监控 Linux 系统
Following are few options used to monitor Linux system processes, uptime, memory etc.
下面是监控 Linux 系统进程、开机时间和内存的几个选项的使用方法。
#### Monitor Linux Processes Memory Usage
#### 监控 Linux 进程的内存使用
Get summarized system info in relation to total number of processes, uptime and memory usage:
使用下面的命令查看进程数、开机时间和内存使用情况:
```
$ inxi -I
Info: Processes: 232 Uptime: 3:35 Memory: 3256.3/7879.9MB Client: Shell (bash) inxi: 2.2.35
```
#### Monitoring Processes by CPU and Memory Usage
#### 监控进程占用的 CPU 和内存资源
By default, it can help us determine the [top 5 processes consuming CPU or memory][1]. The `-t` option used together with `c` (CPU) and/or `m` (memory) options lists the top 5 most active processes eating up CPU and/or memory as shown below:
Inxi 默认显示[前 5 个最消耗 CPU 和内存的进程][1]。`-t` 选项和 `c` 选项一起使用,查看前 5 个最消耗 CPU 资源的进程;`-t` 选项和 `m` 选项一起使用,查看前 5 个最消耗内存的进程;`-t` 选项和 `cm` 选项一起使用,则同时显示前 5 个最消耗 CPU 和内存资源的进程。
```
----------------- Linux CPU Usage -----------------
@ -320,7 +321,7 @@ Memory: MB / % used - Used/Total: 3223.6/7879.9MB - top 5 active
5: mem: 211.68MB (2.6%) command: chrome pid: 6146
```
We can use `cm` number (number can be 1-20) to specify a number other than 5, the command below will show us the [top 10 most active processes][2] eating up CPU and memory.
可以在选项 `cm` 后跟一个整数(在 1-20 之间)设置显示多少个进程,下面的命令显示了前 10 个最消耗 CPU 和内存的进程:
```
$ inxi -t cm10
@ -348,9 +349,9 @@ Memory: MB / % used - Used/Total: 3163.1/7879.9MB - top 10 active
10: mem: 151.83MB (1.9%) command: mysqld pid: 1259
```
#### Monitor Linux Network Interfaces
#### 监控网络设备
The command that follows will show us advanced network card information including interface, speed, mac id, state, IPs, etc:
下面的命令会列出网卡信息包括接口信息、网络频率、mac 地址、网卡状态和网络 IP 等信息。
```
$ inxi -Nni
@ -362,9 +363,9 @@ WAN IP: 111.91.115.195 IF: wlp2s0 ip-v4: N/A
IF: enp1s0 ip-v4: 192.168.0.103
```
#### Monitor Linux CPU Temperature and Fan Speed
#### 监控 CPU 温度和电脑风扇转速
We can keep track of the [hardware installed/configured sensors][3] output by using the -s option:
可以使用 `-s` 选项查看[已安装/配置的传感器][2]的输出,监控 CPU 温度和风扇转速:
```
$ inxi -s
@ -372,9 +373,9 @@ Sensors: System Temperatures: cpu: 53.0C mobo: N/A
Fan Speeds (in rpm): cpu: N/A
```
#### Find Weather Report in Linux
#### 用 Linux 查看天气预报
We can also view whether info (though API used is unreliable) for the current location with the `-w` or `-W``<different_location>` to set a different location.
使用 `-w` 选项查看本地区的天气情况(虽然使用的 API 可能不是很可靠),使用 `-W <different_location>` 设置另外的地区。
```
$ inxi -w
@ -385,9 +386,9 @@ $ inxi -W Nairobi,Kenya
Weather: Conditions: 70 F (21 C) - Mostly Cloudy Time: February 20, 11:08 AM EAT
```
#### Find All Linux Repsitory Information
#### 查看所有的 Linux 仓库信息
We can additionally view a distro repository data with the `-r` flag:
另外,可以使用 `-r` 选项查看一个 Linux 发行版的仓库信息:
```
$ inxi -r
@ -421,34 +422,36 @@ deb http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu xenial main
deb-src http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu xenial main
```
To view its current installed version, a quick help, and open the man page for a full list of options and detailed usage info plus lots more, type:
下面是查看 Inxi 的安装版本、快速帮助和打开 man 手册页的方法,以及获取更多 Inxi 使用细节的途径。
```
$ inxi -v #show version
$ inxi -h #quick help
$ man inxi #open man page
$ inxi -v #显示版本信息
$ inxi -h #快速帮助
$ man inxi #打开 man 手册页
```
For more information, visit official GitHub Repository: [https://github.com/smxi/inxi][4]
浏览 Inxi 的官方 GitHub 主页 [https://github.com/smxi/inxi][4] 查看更多的信息。
Thats all for now! In this article, we reviewed Inxi, a full featured and remarkable command line tool for collecting machine hardware and system info. This is one of the best CLI based [hardware/system information collection tools][5] for Linux, I have ever used.
Inxi 是一个功能强大的获取硬件和系统信息的命令行工具。这也是我使用过的最好的 [获取硬件和系统信息的命令行工具][5] 之一。
写下你的评论。如果你知道其他的像 Inxi 这样的工具,我们很高兴和你一起讨论。
To share your thoughts about it, use the comment form below. Lastly, in case you know of other, such useful tools as Inxi out there, you can inform us and we will be delighted to review them as well.
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
Aaron Kili 是一个 Linux 和 F.O.S.S 的狂热爱好者,即将上任的 Linux 系统管理员web 开发者TecMint 网站的专栏作者,他喜欢使用计算机工作,并且乐于分享计算机技术。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/inxi-command-to-find-linux-system-information/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[vim-kakali](https://github.com/vim-kakali)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,14 +1,13 @@
Create a Shared Directory on Samba AD DC and Map to Windows/Linux Clients Part 7
Samba 系列(七):在 Samba AD DC 服务器上创建共享目录并映射到 Windows/Linux 客户机
============================================================
在 Samba AD DC 服务器上创建共享目录并映射到 Windows/Linux 客户机 ——(七)
这篇文章将指导你如何在 Samba AD DC 服务器上创建共享目录,然后通过 GPO 把共享目录挂载到域中的其它 Windows 成员机,并且从 Windows 域控的角度来管理共享权限。
这篇文章也包括在加入域的 Linux 机器上如何使用 Samba4 域帐号来访问及挂载共享文件。
#### 需求:
### 需求:
1. [在 Ubuntu 系统上使用 Samba4 创建活动目录架构][1]
1. [在 Ubuntu 系统上使用 Samba4 创建活动目录架构][1]
### 第一步:创建 Samba 文件共享
@ -21,12 +20,13 @@ Create a Shared Directory on Samba AD DC and Map to Windows/Linux Clients Pa
# chmod -R 775 /nas
# chown -R root:"domain users" /nas
# ls -alh | grep nas
```
```
[
![Create Samba Shared Directory](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Create Samba Shared Directory](http://www.tecmint.com/wp-content/uploads/2017/02/Create-Samba-Shared-Directory.png)
][2]
创建 Samba 共享目录
*创建 Samba 共享目录*
2、当你在 Samba4 AD DC 服务器上创建完成共享目录之后,你还得修改 samba 配置文件,添加下面的参数以允许通过 SMB 协议来共享文件。
@ -41,11 +41,12 @@ Create a Shared Directory on Samba AD DC and Map to Windows/Linux Clients Pa
path = /nas
read only = no
```
[
![Configure Samba Shared Directory](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Configure Samba Shared Directory](http://www.tecmint.com/wp-content/uploads/2017/02/Configure-Samba-Shared-Directory.png)
][3]
配置 Samba 共享目录
*配置 Samba 共享目录*
3、最后你需要通过下面的命令重启 Samba AD DC 服务,以让修改的配置生效:
@ -57,117 +58,119 @@ read only = no
4、我们准备使用在 Samba AD DC 服务器上创建的域帐号(包括用户和组)来访问这个共享目录(禁止 Linux 系统用户访问共享目录)。
可以直接通过 Windows 资源管理器来完成 Samba 共享权限的管理,就跟你在 Windows 资源管理器中设置其它文件夹权限的方法一样。
可以直接通过 Windows 资源管理器来完成 Samba 共享权限的管理,就跟你在 Windows 资源管理器中设置其它文件夹权限的方法一样。
首先,使用具有管理员权限的 Samba4 AD 域帐号登录到 Windows 机器。然后在 Windows 机器的资源管理器中输入双斜杠加 Samba AD DC 服务器的 IP 地址、主机名或者 FQDN来访问共享文件和设置权限。
```
\\adc1
Or
\\192.168.1.254
Or
\\adc1.tecmint.lan
```
```
[
![Access Samba Share Directory from Windows](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Access Samba Share Directory from Windows](http://www.tecmint.com/wp-content/uploads/2017/02/Access-Samba-Share-Directory-from-Windows.png)
][4]
从 Windows 机器访问 Samba 共享目录
*从 Windows 机器访问 Samba 共享目录*
5、右键单击共享文件选择属性来设置权限。打开安全选项卡依次修改域账号和组权限。使用高级选项来调整权限。
[
![Configure Samba Share Directory Permissions](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Configure Samba Share Directory Permissions](http://www.tecmint.com/wp-content/uploads/2017/02/Configure-Samba-Share-Directory-Permissions.png)
][5]
配置 Samba 共享目录权限
*配置 Samba 共享目录权限*
可参考下面的截图来为指定 Samba AD DC 认证用户设置权限。
可参考下面的截图来为指定 Samba AD DC 认证用户设置权限。
[
![Manage Samba Share Directory User Permissions](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Manage Samba Share Directory User Permissions](http://www.tecmint.com/wp-content/uploads/2017/02/Manage-Samba-Share-Directory-User-Permissions.png)
][6]
设置 Samba 共享目录用户权限
*设置 Samba 共享目录用户权限*
6、你也可以使用其它方法来设置共享权限打开计算机管理-->连接到另外一台计算机。
找到共享目录,右键单击你想修改权限的目录,选择属性,打开安全选项卡。你可以在这里修改任何权限,就跟上图的修改共享文件夹权限的方法一样。
[
![Connect to Samba Share Directory Machine](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Connect to Samba Share Directory Machine](http://www.tecmint.com/wp-content/uploads/2017/02/Connect-to-Samba-Share-Directory-Machine.png)
][7]
连接到 Samba 共享目录服务器
*连接到 Samba 共享目录服务器*
[
![Manage Samba Share Directory Properties](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Manage Samba Share Directory Properties](http://www.tecmint.com/wp-content/uploads/2017/02/Manage-Samba-Share-Directory-Properties.png)
][8]
管理 Samba 共享目录属性
*管理 Samba 共享目录属性*
[
![Assign Samba Share Directory Permissions to Users](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Assign Samba Share Directory Permissions to Users](http://www.tecmint.com/wp-content/uploads/2017/02/Assign-Samba-Share-Directory-Permissions-to-Users.png)
][9]
为域用户授予共享目录权限
*为域用户授予共享目录权限*
### 第三步:通过 GPO 来映射 Samba 文件共享
7、要想通过域组策略来挂载 Samba 共享的目录,你得先到一台[已安装了 RSAT 工具][10] 的服务器上,打开 AD DC 工具,右键单击域名,选择新建-->共享文件
7、要想通过域组策略来挂载 Samba 共享的目录,你得先到一台[已安装了 RSAT 工具][10] 的服务器上,打开 AD DC 工具,右键单击域名,选择新建-->共享文件
[
![Map Samba Share Folder](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Map Samba Share Folder](http://www.tecmint.com/wp-content/uploads/2017/02/Map-Samba-Share-Folder.png)
][11]
映射 Samba 共享文件夹
*映射 Samba 共享文件夹*
8、为共享文件夹添加一个名字然后输入共享文件夹的网络路径如下图所示。完成后单击 OK 按钮,你就可以在右侧看到文件夹了。
[
![Set Samba Shared Folder Name Location](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Set Samba Shared Folder Name Location](http://www.tecmint.com/wp-content/uploads/2017/02/Set-Samba-Shared-Folder-Name-Location.png)
][12]
设置 Samba 共享文件夹名称及路径
*设置 Samba 共享文件夹名称及路径*
9、下一步打开组策略管理控制台找到当前域的默认域策略脚本然后打开并编辑该文件。
在 GPM 编辑器界面中,找到用户配置 --> 首选项 --> Windows 设置,然后右键单击驱动器映射,选择新建 --> 映射驱动器。
[
![Map Samba Share Folder in Windows](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Map Samba Share Folder in Windows](http://www.tecmint.com/wp-content/uploads/2017/02/Map-Samba-Share-Folder-in-Windows.png)
][13]
在 Windows 机器上映射 Samba 共享文件夹
*在 Windows 机器上映射 Samba 共享文件夹*
10、通过单击右边的三个小点在新窗口中查询并添加共享目录的网络位置勾选重新连接复选框为该目录添加一个标签选择驱动盘符然后单击 OK 按钮来保存和应用配置。
[
![Configure Network Location for Samba Share Directory](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Configure Network Location for Samba Share Directory](http://www.tecmint.com/wp-content/uploads/2017/02/Configure-Network-Location-for-Samba-Share-Directory.png)
][14]
配置 Samba 共享目录的网络位置
*配置 Samba 共享目录的网络位置*
11、最后为了在本地机器上强制应用 GPO 更改而不重启系统,打开命令行提示符,然后执行下面的命令:
```
gpupdate /force
```
```
[
![Apply GPO Changes](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Apply GPO Changes](http://www.tecmint.com/wp-content/uploads/2017/02/Apply-GPO-Changes.png)
][15]
应用 GPO 更改
*应用 GPO 更改*
12、当你在本地机器上成功应用策略后打开 Windows 资源管理器,你就可以看到并访问共享的网络文件夹了,能否正常访问共享目录取决于你在前一步的授权操作。
如果没有在命令行下强制应用组策略,你网络中的其它客户机需要重启或重新登录系统才可以看到共享目录。
[
![Samba Shared Network Volume on Windows](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Samba Shared Network Volume on Windows](http://www.tecmint.com/wp-content/uploads/2017/02/Samba-Shared-Network-Volume-on-Windows.png)
][16]
Windows 机器上挂载的 Samba 网络磁盘
*Windows 机器上挂载的 Samba 网络磁盘*
### 第四步:从 Linux 客户端访问 Samba 共享目录
@ -183,14 +186,15 @@ $ sudo apt-get install smbclient cifs-utils
```
$ smbclient L your_domain_controller U%
or
$ smbclient L \\adc1 U%
```
```
[
![List Samba Share Directory in Linux](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![List Samba Share Directory in Linux](http://www.tecmint.com/wp-content/uploads/2017/02/List-Samba-Share-Directory-in-Linux.png)
][17]
在 Linux 机器上列出 Samba 共享目录
*在 Linux 机器上列出 Samba 共享目录*
15、在命令行下使用域帐号以交互试方式连接到 Samba 共享目录:
@ -198,13 +202,13 @@ $ smbclient L \\adc1 U%
$ sudo smbclient //adc/share_name -U domain_user
```
在命令行下,你可以列出共享目录内容,下载或上传文件到共享目录,或者执行其它操作。使用 来查询所有可用的 smbclient 命令。
在命令行下,你可以列出共享目录内容,下载或上传文件到共享目录,或者执行其它操作。使用 `?` 来查询所有可用的 smbclient 命令。
[
![Connect Samba Share Directory in Linux](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Connect Samba Share Directory in Linux](http://www.tecmint.com/wp-content/uploads/2017/02/Connect-Samba-Share-Directory-in-Linux.png)
][18]
在 Linux 机器上连接 Samba 共享目录
*在 Linux 机器上连接 Samba 共享目录*
16、在 Linux 机器上使用下面的命令来挂载 samba 共享目录。
@ -212,16 +216,16 @@ $ sudo smbclient //adc/share_name -U domain_user
$ sudo mount //adc/share_name /mnt -o username=domain_user
```
[
![Mount Samba Share Directory in Linux](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
![Mount Samba Share Directory in Linux](http://www.tecmint.com/wp-content/uploads/2017/02/Mount-Samba-Share-Directory-in-Linux.png)
][19]
在 Linux 机器上挂载 samba 共享目录
*在 Linux 机器上挂载 samba 共享目录*
根据实际情况,依次替换主机名、共享目录名、挂载点和域帐号。使用 mount 命令加上管道符和 grep 参数来过滤出 cifs 类型的文件系统。
根据实际情况,依次替换主机名、共享目录名、挂载点和域帐号。使用 `mount` 命令加上管道符和 `grep` 命令来过滤出 cifs 类型的文件系统。
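例如,沿用上文的挂载示例,可以这样确认共享目录已按 cifs 类型挂载(主机名、共享名和用户名仅为示意):

```
$ sudo mount //adc/share_name /mnt -o username=domain_user
$ mount | grep cifs
```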
通过上面的测试我们可以看出,在 Samba4 AD DC 服务器上配置的共享目录仅使用 Windows 访问控制列表ACL而不是 POSIX ACL。
通过文件共享把 Samba 配置为域成员以使用其它网络共享功能。同时,在另一个域控制器上[配置 Windbindd 服务][20] ——第二步——在你开始发起网络共享文件之前。
如果想使用其它网络共享功能,可以通过文件共享把 Samba 配置为域成员。同时,在你开始通过网络共享文件之前,请先在另一个域控制器上[配置 Winbindd 服务][20]。
--------------------------------------------------------------------------------
@ -230,6 +234,7 @@ $ sudo mount //adc/share_name /mnt -o username=domain_user
我是一个电脑迷,开源 Linux 系统和软件爱好者,有 4 年多的 Linux 桌面、服务器系统使用和 Base 编程经验。
译者简介:
春城初春/春水初生/春林初盛/春風十裏不如妳
[rusking](https://github.com/rusking)
@ -239,7 +244,7 @@ via: http://www.tecmint.com/create-shared-directory-on-samba-ad-dc-and-map-to-wi
作者:[Matei Cezar][a]
译者:[rusking](https://github.com/rusking)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -254,7 +259,7 @@ via: http://www.tecmint.com/create-shared-directory-on-samba-ad-dc-and-map-to-wi
[7]:http://www.tecmint.com/wp-content/uploads/2017/02/Connect-to-Samba-Share-Directory-Machine.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/02/Manage-Samba-Share-Directory-Properties.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/02/Assign-Samba-Share-Directory-Permissions-to-Users.png
[10]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
[10]:https://linux.cn/article-8097-1.html
[11]:http://www.tecmint.com/wp-content/uploads/2017/02/Map-Samba-Share-Folder.png
[12]:http://www.tecmint.com/wp-content/uploads/2017/02/Set-Samba-Shared-Folder-Name-Location.png
[13]:http://www.tecmint.com/wp-content/uploads/2017/02/Map-Samba-Share-Folder-in-Windows.png
@ -264,7 +269,7 @@ via: http://www.tecmint.com/create-shared-directory-on-samba-ad-dc-and-map-to-wi
[17]:http://www.tecmint.com/wp-content/uploads/2017/02/List-Samba-Share-Directory-in-Linux.png
[18]:http://www.tecmint.com/wp-content/uploads/2017/02/Connect-Samba-Share-Directory-in-Linux.png
[19]:http://www.tecmint.com/wp-content/uploads/2017/02/Mount-Samba-Share-Directory-in-Linux.png
[20]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/
[20]:https://linux.cn/article-8070-1.html
[21]:http://www.tecmint.com/author/cezarmatei/
[22]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[23]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -0,0 +1,137 @@
微软 Office 在线版变得更好 - 在 Linux 上亦然
=============
对于 Linux 用户,影响 Linux 使用体验的主要因素之一便是缺少微软 Office 套装。如果你非得靠 Office 谋生,而它又被绝大多数人使用,你可能承受不起使用开源替代品的代价。理解矛盾之所在了吗?
的确LibreOffice 是一个[很棒的][1]自由程序,但如果你的客户、顾客或老板需要 Word 和 Excel 文件呢?你确定能[承担][2]将这些文件从 ODT 或别的格式转换到 DOCX 之类格式(或者反过来)时的失误、错误或小问题吗?这是一系列难办的问题。不幸的是,在技术层面上,对大多数人而言,Linux 超出了能力范围。当然,这不是绝对的。
![Teaser](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-teaser.png)
### 加入微软 Office 在线, 加入 Linux
众所周知,微软拥有自己的 Office 云服务多年。通过任何现代浏览器都可以使用使得它很棒且具有意义,并且这意味着在 Linux 上也能使用!我前阵子刚测试了这个[方法][3]并且它表现出色。我能够轻松使用这个产品,以原本的格式保存文件,或是转换为我的 ODF 格式,这真的很棒。
我决定再次使用这个套装看看它在过去几年的进步以及是否依旧对 Linux 友好。我使用 [Fedora 25][4] 作为例子。我同时也去测试了 [SoftMaker Office 2016][5]。 听起来有趣,也确实如此。
### 第一印象
我得说我感到很高兴。Office 不需要任何特别的插件。没有 Silverlight 或 Flash 之类的东西。 单纯而大量的 HTML 和 Javascript 。 同时,交互界面反应极其迅速。唯一我不喜欢的就是 Word 文档的灰色背景,它让人很快感到疲劳。除了这个,整个套装工作流畅,没有延迟、行为古怪之处及意料之外的错误。接下来让我们放慢脚步。
这个套装需要你用在线账户或者手机号登录——不必是 Live 或 Hotmail 邮箱。任何邮箱都可以。如果你有微软 [手机][6], 那么你可以用相同的账户并且可以同步你的数据。账户也会免费分配 5GB OneDrive 的储存空间。这很有条理,不是优秀或令人超级兴奋而是非常得当。
![微软 Office, 欢迎页面](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-welcome-page.jpg)
你可以使用各种各样的程序,包括必需的三件套 - Word、Excel 和 Powerpoint并且其它的东西也可使用包括一些新奇事物。文档会自动保存但你也可以下载副本并转换成其它格式比如 PDF 和 ODF。
对我来说这简直完美。分享一个自己的小故事。我用 LibreOffice 写了一本[奇幻类的][7]书,之后当我需要把它送去出版社编辑或者校对时,我需要把它转换成 DOCX 格式。唉,这需要微软 Office。从我的 [Linux 问题解决大全][8]可知,我得一开始就使用 Word因为有一堆工作要与我的编辑合作完成而他们使用专有软件。这里没有任何情面可讲只有冷酷的金钱和商业考量错误是不容许接受的。
![Word, 新文档](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-word-new.png)
使用 Office 在线版能给很多偶尔需要使用的人以自由空间。偶尔使用 Word、Excel 等,不需要购买整个完整的套装。如果你表面上是 LibreOffice 的忠实粉丝,你也可以暗地里“加入微软 Office 负心者俱乐部”而不必感到愧疚。有人传给你一个 Word 或 PPT 文件,你可以上传然后在线操作它们,然后转换成所需要的。这样的话你就可以在线生成你的工作,发送给那些严格要求的人,同时自己留一个 ODF 格式的备份,有需要的话就用 LibreOffice 操作。虽然这种方法的灵活性很是实用,但这不应该成为你的主要手段。对于 Linux 用户,这给予了很多他们平时所没有的自由,毕竟即使你想用微软 Office 也不好安装。
![另存为,转换选项](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-save-as.jpg)
### 特性、选项、工具
我开始琢磨一个文档——考虑到这其中各种细枝末节。写点文本,用一种或三种风格,链接某些文本,嵌入图片,加上脚注,评论自己的文章甚至作为一个多重人格的极客巧妙地回复自己的评论。
除了灰色背景——我们得学会很好地完成一项无趣工作,即便是像臭鼬工厂那样的工作方式,因为浏览器里没有选项调整背景颜色——其实也还好啦。
Skype 甚至也整合到了其中,你可以边沟通边协作,或者在协作中倾听。其色调相当一致。鼠标右键可以选择一些快捷操作,包括链接、评论和翻译。不过需要改进的地方还有不少,它并没有给出我想要的结果,翻译有差错。
![Skype active](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-skype-active.jpg)
![右键选项](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-right-click.png)
![右键选项,更多](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-right-click-more.jpg)
![翻译,不准确](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-translations.png)
你也可以加入图片——包括默认嵌入的必应搜索可以基于它们的许可证和分发权来筛选图片。这很棒,特别是当你想要创作文档而又想避免版权纷争时。
![图片,在线搜索](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-images.jpg)
### 关于追踪多说几句
说老实话,这很实用。由于这个产品是在线使用的,默认情况下就可以跟踪更改和编辑,所以你就有了基本的版本控制功能。不过,如果直接关闭而不保存的话,阶段性的编辑会丢失。
![评论](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-comments.jpg)
![编译活动日志](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-edit-activity.png)
看到一个错误——如果你试着在 Linux 上(本地)编辑 Word 或 Excel 文件,会被提示你很调皮,因为这明显是个不支持的操作。
![编辑错误](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-edit-error.jpg)
### Excel
实际工作流程不止使用 Word。我也使用 Excel众所周知它包含了很多整齐有效的模板之类的。好用而且在更新单元格和公式时没有任何延迟它涵盖了你所需要的大多数功能。
![Excel有趣的模板](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-excel.jpg)
![空白电子表格](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-excel-new-spreadsheet.jpg)
![Excel预算模板](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-excel-budget.jpg)
### OneDrive
在这里你可以新建文件夹和文件、移动文件,如果需要的话,还可以把文件分享给你的朋友和同事们。5 GB 免费,当然,收费增容。总的来说,做得不错。在更新和展示内容上会花费一定时间。打开了的文档不能被删除,这可能看起来像一个漏洞,但从计算机角度来看是完美的体验。
![OneDrive](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-onedrive.jpg)
### 帮助
如果你感到疑惑——比如被人工智能戏耍,可以向微软的云智囊团寻求帮助。 虽然这种方式不那么直接,但至少好用,结果往往也能令人满意。
![能做什么, 交互式的帮助](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-what-to-do.png)
### 问题
在我三个小时的摸索中,我只遇到了两个小问题。一是文件编辑的时候浏览器会有警告(黄色三角),提醒我在 HTTPS 会话中加载了不安全的元素。二是创建 Excel 文件失败,只出现过一次。
![文件创建失败](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-error.jpg)
### 结论
微软 Office 在线版是一个优秀的产品,与我两年前的测试相比较,它变得更好了。外观出色,表现良好,使用期间错误很少,完美兼容,甚至对于 Linux 用户也具有个人和商业上的价值。我不能说它是自 VHSVideo Home System家用录像系统出现以来最好的东西但一定是很棒的它架起了 Linux 用户与微软 Office 之间的桥梁,解决了 Linux 用户长期以来的问题,方便且很好地支持 ODF。
现在我们来让事情变得更有趣些,如果你喜欢云概念的事物,那你可能对 [Open365][9] 感兴趣,这是一个基于 LibreOfiice 的办公软件,加上额外的邮件客户端和图像处理软件,还有 20 GB 的免费存储空间。最重要的是,你可以用浏览器同时完成这一切,只需要多开几个窗口。
回到微软,如果你是 Linux 用户,如今可能确实需要微软 Office 产品。在线 Office 套装无疑是更方便的使用方法——或者至少不需要更改操作系统。它免费、优雅、透明度高。值得一提的是,你可以把思维游戏放在一边,享受你的云端生活。
干杯~
--------------------------------------------------------------------------------
作者简介:
我的名字是 Igor Ljubuncic。38 岁左右,已婚未育。现在是一家云技术公司的首席工程师,这是一个新的领域。到 2015 年初之前,我作为团队中的一名系统工程师就职于世界上最大的信息技术公司之一,开发新的 Linux 解决方案、完善内核、研究 Linux。在这之前我是创新设计团队的技术指导致力于高性能计算环境的创新解决方案。还有一些像系统专家、系统程序员之类的新奇头衔。这些在 2008 年之前全是我的爱好,之后变成了工作,还有什么能比这更令人满意呢?
从 2004 到 2008我作为物理学家在医学影像行业谋生。我专攻解决问题和发展算法后来大量使用 Matlab 处理信号和图像。另外我考了很多工程计算方法的认证,包括 MEDIC Six Sigma Green Belt实验设计和数据化工程。
我也开始写书,包括奇幻类和 Linux 上的技术性工作。彼此交融。
-------------
via: http://www.dedoimedo.com/computers/office-online-linux-better.html
作者:[Igor Ljubuncic][a]
译者:[XYenChi](https://github.com/XYenChi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.ocsmag.com/2015/02/16/libreoffice-4-4-review-finally-it-rocks/
[2]:http://www.ocsmag.com/2014/03/14/libreoffice-vs-microsoft-office-part-deux/
[3]:http://www.dedoimedo.com/computers/office-online-linux.html
[4]:http://www.dedoimedo.com/computers/fedora-25-gnome.html
[5]:http://www.ocsmag.com/2017/01/18/softmaker-office-2016-your-alternative-to-libreoffice/
[6]:http://www.dedoimedo.com/computers/microsoft-lumia-640.html
[7]:http://www.thelostwordsbooks.com/
[8]:http://www.dedoimedo.com/computers/linux-problem-solving-book.html
[9]:http://www.ocsmag.com/2016/08/17/open365/

View File

@ -0,0 +1,266 @@
如何安装 Debian 的非 systemd 复刻版本 Devuan Linux
============================================================
Devuan Linux 是 Debian 最新的复刻版本,是基于 Debian 的一个被设计为完全去除了 systemd 的版本。
Devuan 宣布于 2014 年底,并经过了一段活跃的开发。最新的发行版本是 beta2发行代号为 Jessie (没错,和当前 Debian 的稳定版同名)。
当前稳定版的最后发行据说会在 2017 年初。如果想了解关于该项目的更多信息,请访问社区官网:[https://devuan.org/][1] 。
本文将阐述 Devuan 当前发行版的安装。在 Debian 上可用的大多数软件包在 Devuan 上也是可用的,这有利于用户从 Debian 到 Devuan 的无缝过渡,他们应该更喜欢自由选择自己的初始化系统。
### 系统要求
Devuan 和 Debian 类似,对系统的要求非常低。最大的决定性因素是,用户希望使用什么样的桌面环境。这篇指南假设用户将使用一个“俗气的”桌面环境,建议至少满足下面所示的最低系统要求:
1. 至少 15GB 的硬盘空间;强烈鼓励有更大空间
2. 至少 2GB 的内存空间;鼓励更多
3. 支持 USB 或 CD/DVD 启动
4. 网络连接;安装过程中将会从网上下载文件
### Devuan Linux 安装
正如所有的指南一样,这篇指南假设你有一个 USB 驱动器可作为安装媒介。注意USB 驱动器应该有大约 4GB 或 8 GB 大,**并且需要删除所有数据**。
作者在使用太大的 USB 驱动器时遇到过问题,不过你的驱动器也许没有问题。无论如何,接下来的一些步骤**将会导致 USB 驱动器上的数据全部丢失**。
在开始准备安装之前,请先备份 USB 驱动器上的所有数据。这个可启动的 Linux USB 启动器要在另一个 Linux 系统上创建。
1、首先从 [https://devuan.org/][2] 获取最新发行版的 Devuan 安装镜像,或者,你也可以在 Linux 终端上输入下面的命令来获取安装镜像:
```
$ cd ~/Downloads
$ wget -c https://files.devuan.org/devuan_jessie_beta/devuan_jessie_1.0.0-beta2_amd64_CD.iso
```
2、上面的命令将会把安装镜像文件下载到用户的 `Downloads` 目录。下一步是把安装镜像写入 USB 驱动器中,从而启动安装程序。
为了写入镜像,需要使用一个在 Linux 中叫做 `dd` 的工具。首先,需要使用 [lsblk 命令][3]来定位硬盘名字:
```
$ lsblk
```
[
![Find Device Name in Linux](http://www.tecmint.com/wp-content/uploads/2017/03/Find-Device-Name-in-Linux.png)
][4]
*找到 Linux 中的设备名字*
USB 驱动器的名字为 `/dev/sdc`,现在,可以使用 `dd` 工具把 Devuan 镜像写入驱动器中:
```
$ sudo dd if=~/Downloads/devuan_jessie_1.0.0-beta2_amd64_CD.iso of=/dev/sdc
```
重点:上面的命令需要有 root 权限,你可以使用 `sudo` 或者以 root 用户登录来运行命令。同时,这个命令将会删除 USB 驱动器上的所有数据,所以请确保备份了需要的数据。
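顺便一提,如果你的 `dd` 来自较新的 GNU coreutils8.24 及以上),可以加上 `status=progress` 来显示写入进度;写入完成后建议执行 `sync`,确保数据真正落盘(以下命令仅为示意,设备名请按实际情况替换):

```
$ sudo dd if=~/Downloads/devuan_jessie_1.0.0-beta2_amd64_CD.iso of=/dev/sdc status=progress
$ sync
```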
3、当镜像写入 USB 驱动器以后,把 USB 驱动器插入要安装 Devuan 的电脑上,然后从 USB 驱动器启动电脑。
从 USB 驱动器成功启动以后,将会出现下面所示的屏幕,你需要在 “Install” 和 “Graphical Install” 这两个选项间选择一个继续安装进程。
在这篇指南中,我将使用 “Graphical Install” 方式。
[
![Devuan Graphic Installation](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Graphic-Installation.png)
][5]
*Devuan 图形化安装*
4、当安装程序启动到“本地化”菜单以后将会提示用户选择键盘布局和语言。只需选择你想要的选项然后继续安装。
[
![Devuan Language Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Language-Selection.png)
][6]
*Devuan 语言选择*
[
![Devuan Location Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Location-Selection.png)
][7]
*Devuan 地区选择*
[
![Devuan Keyboard Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Keyboard-Configuration.png)
][8]
*Devuan 键盘配置*
5、下一步是向安装程序提供主机名和该机器所属的域名。
需要填写一个唯一的主机名,但如果电脑不属于任何域,那么域名可以不填。
[
![Set Devuan Linux Hostname](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Hostname.png)
][9]
*设置 Devuan Linux 的主机名*
[
![Set Devuan Linux Domain Name](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Domain-Name.png)
][10]
*设置 Devuan Linux 的域名*
6、填好主机名和域名信息以后需要提供一个 root 用户密码。
请务必记住这个密码,因为当你在这台 Devuan 机器上执行管理任务时需要提供这个密码。默认情况下, Devuan 不会安装 sudo 包,所以当安装完成以后,管理用户就是 root 用户。
[
![Setup Devuan Linux Root User](http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-Root-User.png)
][11]
*设置 Devuan Linux Root 用户*
7、下一步需要做的事情是创建一个非 root 用户。在任何可能的情况下,避免以 root 用户使用系统总是更好的。此时,安装程序将会提示你创建一个非 root 用户。
[
![Setup Devuan Linux User Account](http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-User-Account.png)
][12]
*创建 Devuan Linux 用户账户*
8、一旦输入 root 用户密码,提示非 root 用户已经创建好以后,安装程序将会请求[通过 NTP 设置时钟][13]。
这时需要再次连接网络,大多数系统都需要这样。
[
![Devuan Linux Timezone Setup](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Clock-on-Devuan-Linux.png)
][14]
*设置 Devuan Linux 的时区*
9、下一步需要做的是系统分区。对于绝大多数用户来说选择 “Guided use entire disk”(引导式使用整个磁盘)就够了。然而,如果需要高级配置,则应选择手动分区。
[
![Devuan Linux Partitioning](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Partitioning.png)
][15]
*Devuan Linux 分区*
在上面点击 “continue” 以后,请确认分区更改,从而把分区信息写入硬盘。
10、分区完成以后安装程序为 Devuan 安装一些基础文件。这个过程将会花费几分钟时间,直到系统开始配置网络镜像(软件库)才会停下来。当提示使用网络镜像时,通常点击 “yes”。
[
![Devuan Linux Configure Package Manager](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Configure-Package-Manager.png)
][16]
*Devuan Linux 配置包管理器*
点击 “yes” 以后将会给用户呈现一系列以国家分类的网络镜像。通常最好选择地理位置上离你的机器最近的镜像。
[
![Devuan Linux Mirror Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirror-Selection.png)
][17]
*Devuan Linux 镜像选择*
[
![Devuan Linux Mirrors](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirrors.png)
][18]
*Devuan Linux 镜像*
11、下一步是设置 Debian 传统的 “popularity contest”它能够追踪已下载包的使用统计。
在安装过程中,可以在管理员首选项中启用或禁用该功能。
[
![Configure Devuan Linux Popularity Contest](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Devuan-Linux-Popularity-Contest.png)
][19]
*配置 Devuan Linux 的 Popularity Contest*
12、在简单浏览仓库和一些包的更新以后安装程序会给用户展示一系列软件包安装这些包可以提供一个桌面环境、SSH 访问和其它系统工具。
Devuan 会列举出一些主流桌面环境,但应该指出的是,并不是所有的桌面在 Devuan 上均可用。作者在 Devuan 上成功使用过 Xfce、LXDE 和 Mate未来的文章将会探究如何从源代码安装这些桌面环境
如果想要安装别的桌面环境,不要点击 “Devuan Desktop Environment” 复选框。
[
![Devuan Linux Software Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Software-Selection.png)
][20]
*Devuan Linux 软件选择*
根据在上面的安装屏幕中选择的项目数,可能需要几分钟的时间来下载和安装软件。
当所有的软件都安装好以后,安装程序将会提示用户选择 grub 的安装位置。典型情况是选择安装在 `/dev/sda` 上。
[
![Devuan Linux Grub Install](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install.png)
][21]
*Devuan Linux 安装 grub 引导程序*
[
![Devuan Linux Grub Install Disk](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install-Disk.png)
][22]
*Devuan Linux Grub 程序的安装硬盘*
13、当 GRUB 程序成功安装到引导驱动器以后,安装程序将会提示用户安装已经完成,请重启系统。
[
![Devuan Linux Installation Completes](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Installation-Completes.png)
][23]
*Devuan Linux 安装完成*
14、如果安装顺利完成了那么系统要么启动到所选择的桌面环境要么如果没有选择桌面环境的话启动到一个基于文本的控制台。
[
![Devuan Linux Console](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Console.png)
][24]
*Devuan Linux 控制台*
这篇文章总结了最新版本的 Devuan Linux 的安装。在这个系列的下一篇文章将会阐述[如何从源代码为 Devuan Linux 安装 Enlightenment 桌面环境][25]。如果你有任何问题或疑问,请记得让我们知道。
--------------------------------------------------------------------------------
作者简介:
作者是 Ball 州立大学的计算机系讲师,目前教授计算机系的所有 Linux 课程,同时也教授 Cisco 网络课程。他是 Debian 以及其他 Debian 的衍生版比如 Mint、Ubuntu 和 Kali 的狂热用户。他拥有信息学和通信科学的硕士学位,同时获得了 Cisco、EC 理事会和 Linux 基金会的行业认证。
-----------------------------
via: http://www.tecmint.com/installation-of-devuan-linux/
作者:[Rob Turner][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/robturner/
[1]:https://devuan.org/
[2]:https://devuan.org/
[3]:http://www.tecmint.com/find-usb-device-name-in-linux/
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-Device-Name-in-Linux.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Graphic-Installation.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Language-Selection.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Location-Selection.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Keyboard-Configuration.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Hostname.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Domain-Name.png
[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-Root-User.png
[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-User-Account.png
[13]:http://www.tecmint.com/install-and-configure-ntp-server-client-in-debian/
[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Clock-on-Devuan-Linux.png
[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Partitioning.png
[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Configure-Package-Manager.png
[17]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirror-Selection.png
[18]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirrors.png
[19]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Devuan-Linux-Popularity-Contest.png
[20]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Software-Selection.png
[21]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install.png
[22]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install-Disk.png
[23]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Installation-Completes.png
[24]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Console.png
[25]:http://www.tecmint.com/install-enlightenment-on-devuan-linux/
[26]:http://www.tecmint.com/author/robturner/
[27]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[28]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -1,4 +1,4 @@
如何在 AWS EC2 的 Linux 服务器上开端口
如何在 AWS EC2 的 Linux 服务器上开放一个端口
============================================================
_这是一篇用屏幕截图解释如何在 AWS EC2 Linux 服务器上打开端口的教程。它能帮助你管理 EC2 服务器上特定端口的服务。_
@ -9,13 +9,13 @@ AWS即 Amazon Web Services不是 IT 世界中的新术语了。它是亚
AWS 提供服务器计算作为他们的服务之一,他们称之为 EC2弹性计算。使用它可以构建我们的 Linux 服务器。我们已经看到了[如何在 AWS 上设置免费的 Linux 服务器][11]了。
默认情况下,所有基于 EC2 的 Linux 服务器都只打开 22 端口,即 SSH 服务端口(所有 IP 的入站)。因此,如果你托管了任何特定端口的服务,则要为你的服务器在 AWS 防火墙上打开相应端口。
默认情况下,所有基于 EC2 的 Linux 服务器都只打开 22 端口,即 SSH 服务端口(允许所有 IP 的入站连接)。因此,如果你托管了任何特定端口的服务,则要为你的服务器在 AWS 防火墙上打开相应端口。
同样它的 1 到 65535 的端口是打开的(所有出站流量)。如果你想改变这个,你可以使用下面的方法编辑出站规则。
同样它的 1 到 65535 的端口是打开的(对于所有出站流量)。如果你想改变这个,你可以使用下面的方法编辑出站规则。
在 AWS 上为你的服务器设置防火墙规则很容易。你能够在几秒钟内为你的服务器打开端口。我将用截图指导你如何打开 EC2 服务器的端口。
 _步骤 1 _
### 步骤 1
登录 AWS 帐户并进入 **EC2 管理控制台**。进入<ruby>“网络及安全”<rt>Network & Security </rt></ruby>菜单下的<ruby>**安全组**<rt>Security Groups</rt></ruby>,如下高亮显示:
@ -23,9 +23,7 @@ AWS 提供服务器计算作为他们的服务之一,他们称之为 EC
*AWS EC2 管理控制台*
* * *
_步骤 2 :_
### 步骤 2 :
<ruby>安全组<rt>Security Groups</rt></ruby>中选择你的 EC2 服务器,并在 <ruby>**行动**<rt>Actions</rt></ruby> 菜单下选择 <ruby>**编辑入站规则**<rt>Edit inbound rules</rt></ruby>
@ -33,7 +31,7 @@ AWS 提供服务器计算作为他们的服务之一,他们称之为 EC
*AWS 入站规则菜单*
_步骤 3:_
### 步骤 3:
现在你会看到入站规则窗口。你可以在此处添加/编辑/删除入站规则。这有几个如 http、nfs 等列在下拉菜单中,它们可以为你自动填充端口。如果你有自定义服务和端口,你也可以定义它。
@ -46,15 +44,13 @@ AWS 提供服务器计算作为他们的服务之一,他们称之为 EC
* 类型http
* 协议TCP
* 端口范围80
* 源:任何来源(打开 80 端口接受来自任何IP0.0.0.0/0的请求我的 IP那么它会自动填充你当前的公共互联网 IP
* 源:任何来源:打开 80 端口,接受来自“任何 IP0.0.0.0/0”的请求如果选择“我的 IP”则会自动填充你当前的公共互联网 IP。
* * *
_步骤 4:_
### 步骤 4:
就是这样了。保存完毕后,你的服务器入站 80 端口将会打开!你可以通过 telnet 到 EC2 服务器公共域名的 80 端口来检验(可以在 EC2 服务器详细信息中找到)。
你也可以在 [ping.eu][12] 等网站上检验。
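例如,可以这样检验(请把域名换成你自己实例的公共 DNS 名称,下面的域名仅为示意):

```
$ telnet ec2-203-0-113-25.compute-1.amazonaws.com 80
```

如果端口已经打开telnet 会提示连接成功;否则会一直等待直至超时。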
* * *
@ -65,7 +61,7 @@ AWS 提供服务器计算作为他们的服务之一,他们称之为 EC
via: http://kerneltalks.com/virtualization/how-to-open-port-on-aws-ec2-linux-server/
作者:[Shrikant Lavhate ][a]
作者:[Shrikant Lavhate][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)

View File

@ -0,0 +1,84 @@
什么是 Linux VPS 托管?
============================================================
![what is linux vps hosting](https://www.rosehosting.com/blog/wp-content/uploads/2017/03/what-is-linux-vps-hosting.jpg)
如果你有一个吞吐量很大的网站,或者至少,预期网站吞吐量很大,那么你可以考虑使用 [Linux VPS 托管][6] 。如果你想对网站托管的服务器上安装的东西有更多控制,那么 Linux VPS 托管就是最好的选择之一。这里我会回答一些频繁被提及的关于 Linux VPS 托管的问题。
### Linux VPS 意味着什么?
基本上, **Linux VPS 就是一个运行在 Linux 系统上的虚拟专属服务器virtual private server**。虚拟专属服务器是一个在物理服务器上的虚拟服务主机。运行在物理主机的内存里的服务器就称之为虚拟服务器。物理主机可以轮流运行很多其他的虚拟专属服务器。
### 我必须和其他用户共享服务器吗?
一般是这样的。但这并不意味着下载时间变长或者服务器性能降低。每个虚拟服务器可以运行它自己的操作系统,这些系统之间可以相互独立的进行管理。一个虚拟专属服务器有它自己的操作系统、数据、应用程序;它们都与物理主机和其他虚拟服务器中的操作系统、应用程序、数据相互分离。
尽管必须和其他虚拟专属服务器共享物理主机,但是你却可以不需花费大价钱就可以得到一个昂贵专用服务器的诸多好处。
### Linux VPS 托管的优势是什么?
使用 Linux VPS 托管服务会有很多的优势,包括容易使用、安全性增加以及在更低的总体成本上提高可靠性。然而,对于大多数网站管理者、程序员、设计者和开发者来说,使用 Linux VPS 托管服务的最大的优势是它的灵活性。每个虚拟专属服务器都和它所在的操作环境相互隔离,这意味着你可以容易且安全的安装一个你喜欢或者需要的操作系统 — 本例中是 Linux — 任何想要做的时候,你还可以很容易的卸载或者安装软件及应用程序。
你也可以更改你的 VPS 环境以适应你的性能需求,同样也可以提高你的网站用户或访客的体验。灵活性会是你超越对手的主要优势。
记住,一些 Linux VPS 提供商可能不会给你对你的 Linux VPS 完全的 root 访问权限。这样你的功能就会受到限制。要确定你得到的是 [拥有 root 权限的 Linux VPS][7] ,这样你就可以做任何你想做的修改。
### 任何人都可以使用 Linux VPS 托管吗?
当然,即使你运行一个专门的个人兴趣博客,你也可以从 Linux VPS 托管中受益。如果你为公司搭建、开发一个网站,你也会获益匪浅。基本上,如果你想使你的网站更健壮并且增加它的网络吞吐量,那么 Linux VPS 就是为你而生。
在定制和开发中需要很大的灵活性的个人和企业,特别是那些正在寻找不使用专用服务器就能得到高性能和服务的人们,绝对应该选择 Linux VPS因为专用服务器会消耗大量的网站运营成本。
### 不会使用 Linux 也可以使用 Linux VPS 吗?
当然,如果 Linux VPS 由你管理,你的 VPS 提供商会为你管理整个服务器。更有可能,他们将会为你安装、配置一切你想要运行在 Linux VPS 上的服务。如果你使用我们的 VPS我们会 24/7 全天候看护,也会安装、配置、优化一切服务。
如果你使用我们的主机服务,你会从 Linux VPS 中获益匪浅,并且不需要任何 Linux 知识。
对于新手来说,另一个简化 Linux VPS 使用的方式是得到一个带有 [cPanel][9]、[DirectAdmin][10] 或者任何 [其他托管控制面板][11]的 VPS。如果你使用控制面板就可以通过一个图形界面管理你的服务器尤其对于新手它是很容易使用的。虽然[使用命令行管理 Linux VPS][12] 很有趣,而且那样做可以学到很多。
### Linux VPS 和专用服务器有什么不同?
如前所述,一个虚拟专属服务器仅仅是在物理主机上的一个虚拟分区。物理服务器被分为多个虚拟服务器,这些虚拟服务器用户可以分担降低成本和开销。这就是 Linux VPS 相比一个 [专用服务器][13] 更加便宜的原因,专用服务器的字面意思就是指只有一个用户专用。想要知道关于更多不同点的细节,可以查看 [物理服务器(专用)与 虚拟服务器(VPS) 比较][14]。
除了比专用服务器有更好的成本效益Linux 虚拟专属服务器经常运行在比专用服务器的性能更强大的主机上,因此其性能和容量常常比专用服务器更大。
### 我可以把网站从共享托管环境迁移到到 Linux VPS 上吗?
如果你当前使用 [<ruby>**共享托管服务**<rt>shared hosting</rt></ruby>][15],你可以很容易的迁移到 Linux VPS 上。一种做法就是 [**您自己做**][16],但是迁移过程有点复杂,不建议新手使用。最好的方法是找到一个提供 [免费网站迁移][17] 的主机,然后让他们帮你完成迁移。你还可以从一个带有控制面板的共享主机迁移到一个不带有控制面板的 Linux VPS 。
### 更多问题?
欢迎随时在下面留下评论。
PS. 如果你喜欢这个专栏,请把它分享给你的朋友,或者你也可以在下面的评论区写下你的回复。谢谢。
--------------------------------------------------------------------------------
via: https://www.rosehosting.com/blog/what-is-linux-vps-hosting/
作者:[https://www.rosehosting.com][a]
译者:[vim-kakali](https://github.com/vim-kakali)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.rosehosting.com/blog/what-is-linux-vps-hosting/
[1]:https://www.rosehosting.com/blog/what-is-linux-vps-hosting/
[2]:https://www.rosehosting.com/blog/what-is-linux-vps-hosting/#comments
[3]:https://www.rosehosting.com/blog/category/guides/
[4]:https://plus.google.com/share?url=https://www.rosehosting.com/blog/what-is-linux-vps-hosting/
[5]:http://www.linkedin.com/shareArticle?mini=true&url=https://www.rosehosting.com/blog/what-is-linux-vps-hosting/&title=What%20is%20Linux%20VPS%20Hosting%3F&summary=If%20you%20have%20a%20site%20that%20gets%20a%20lot%20of%20traffic,%20or%20at%20least,%20is%20expected%20to%20generate%20a%20lot%20of%20traffic,%20then%20you%20might%20want%20to%20consider%20getting%20a%20Linux%20VPS%20hosting%20package.%20A%20Linux%20VPS%20hosting%20package%20is%20also%20one%20of%20your%20best%20options%20if%20you%20want%20more%20...
[6]:https://www.rosehosting.com/linux-vps-hosting.html
[7]:https://www.rosehosting.com/linux-vps-hosting.html
[9]:https://www.rosehosting.com/cpanel-hosting.html
[10]:https://www.rosehosting.com/directadmin-hosting.html
[11]:https://www.rosehosting.com/control-panel-hosting.html
[12]:https://www.rosehosting.com/blog/basic-shell-commands-after-putty-ssh-logon/
[13]:https://www.rosehosting.com/dedicated-servers.html
[14]:https://www.rosehosting.com/blog/physical-server-vs-virtual-server-all-you-need-to-know/
[15]:https://www.rosehosting.com/linux-shared-hosting.html
[16]:https://www.rosehosting.com/blog/from-shared-to-vps-hosting/
[17]:https://www.rosehosting.com/website-migration.html

View File

@ -7,27 +7,25 @@ DHCPDynamic Host Configuration Protocol是一个网络协议它使得
在这篇指南中,我们会介绍如何在 CentOS/RHEL 和 Fedora 发行版中安装和配置 DHCP 服务。
#### 设置测试环境
### 设置测试环境
本次安装中我们使用如下的测试环境
本次安装中我们使用如下的测试环境
```
DHCP 服务器 - CentOS 7
DHCP 客户端 - Fedora 25 和 Ubuntu 16.04
```
- DHCP 服务器 - CentOS 7
- DHCP 客户端 - Fedora 25 和 Ubuntu 16.04
#### DHCP 如何工作?
### DHCP 如何工作?
在进入下一步之前,让我们首先了解一下 DHCP 的工作流程:
* 当已连接到网络的客户端计算机(配置为使用 DHCP启动时它会发送一个 DHCPDISCOVER 消息到 DHCP 服务器。
* 当 DHCP 服务器接收到 DHCPDISCOVER 请求消息时,它会回复一个 DHCPOFFER 消息。
* 客户端收到 DHCPOFFER 消息后,它再发送给服务器一个 DHCPREQUEST 消息,表示客户端已准备好获取 DHCPOFFER 消息中提供的网络配置。
* 最后DHCP 服务器收到客户端的 DHCPREQUEST 消息,并回复 DHCPACK 消息,表示允许客户端使用分配给它的 IP 地址。
* 当已连接到网络的客户端计算机(配置为使用 DHCP启动时它会发送一个 `DHCPDISCOVER` 消息到 DHCP 服务器。
* 当 DHCP 服务器接收到 `DHCPDISCOVER` 请求消息时,它会回复一个 `DHCPOFFER` 消息。
* 客户端收到 `DHCPOFFER` 消息后,它再发送给服务器一个 `DHCPREQUEST` 消息,表示客户端已准备好获取 `DHCPOFFER` 消息中提供的网络配置。
* 最后DHCP 服务器收到客户端的 `DHCPREQUEST` 消息,并回复 `DHCPACK` 消息,表示允许客户端使用分配给它的 IP 地址。
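如果想在服务器上实际观察上面这四步交互,可以用 `tcpdump` 抓取 DHCP 报文DHCP 使用 UDP 的 67/68 端口;网卡名请按实际情况替换,以下仅为示意):

```
# tcpdump -ni eth0 port 67 or port 68
```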
### 第一步:在 CentOS 上安装 DHCP 服务
1. 安装 DHCP 服务非常简单,只需要运行下面的命令即可。
1安装 DHCP 服务非常简单,只需要运行下面的命令即可。
```
$ yum -y install dhcp
@ -35,7 +33,7 @@ $ yum -y install dhcp
重要:假如系统中有多个网卡,但你想只在其中一个网卡上启用 DHCP 服务,可以按照下面的步骤在该网卡上启用 DHCP 服务。
2. 打开文件 /etc/sysconfig/dhcpd将指定网卡的名称添加到 DHCPDARGS 列表,假如网卡名称为 `eth0`,则添加:
2、 打开文件 `/etc/sysconfig/dhcpd`,将指定网卡的名称添加到 `DHCPDARGS` 列表,假如网卡名称为 `eth0`,则添加:
```
DHCPDARGS=eth0
@ -45,9 +43,9 @@ DHCPDARGS=eth0
### 第二步:在 CentOS 上配置 DHCP 服务
3. 对于初学者来说,配置 DHCP 服务的第一步是创建 `dhcpd.conf` 配置文件DHCP 主要配置文件一般是 /etc/dhcp/dhcpd.conf默认情况下该文件为空该文件保存了发送给客户端的所有网络信息。
3 对于初学者来说,配置 DHCP 服务的第一步是创建 `dhcpd.conf` 配置文件DHCP 主要配置文件一般是 `/etc/dhcp/dhcpd.conf`(默认情况下该文件为空),该文件保存了发送给客户端的所有网络信息。
但是,有一个样例配置文件 /usr/share/doc/dhcp*/dhcpd.conf.sample这是配置 DHCP 服务的良好开始。
但是,有一个样例配置文件 `/usr/share/doc/dhcp*/dhcpd.conf.sample`,这是配置 DHCP 服务的良好开始。
DHCP 配置文件中定义了两种类型的语句:
@ -60,7 +58,7 @@ DHCP 配置文件中定义了两种类型的语句:
$ cp /usr/share/doc/dhcp-4.2.5/dhcpd.conf.example /etc/dhcp/dhcpd.conf
```
4. 然后,打开主配置文件并定义你的 DHCP 服务选项:
4 然后,打开主配置文件并定义你的 DHCP 服务选项:
```
$ vi /etc/dhcp/dhcpd.conf
@ -76,7 +74,7 @@ max-lease-time 7200;
authoritative;
```
5、 然后,定义一个子网;在这个示例中,我们会为 `192.168.56.0/24` 局域网配置 DHCP注意使用你实际场景中的值
5 然后,定义一个子网;在这个事例中,我们会为 `192.168.56.0/24` 局域网配置 DHCP注意使用你实际场景中的值
```
subnet 192.168.56.0 netmask 255.255.255.0 {
@ -91,7 +89,7 @@ range 192.168.56.120 192.168.56.200;
### 第三步:为 DHCP 客户端分配静态 IP
只需要在 /etc/dhcp/dhcpd.conf 文件中定义下面的部分,其中你必须显式指定它的 MAC 地址和打算分配的 IP你就可以为网络中指定的客户端计算机分配一个静态 IP 地址:
只需要在 `/etc/dhcp/dhcpd.conf` 文件中定义下面的部分,其中你必须显式指定它的 MAC 地址和打算分配的 IP你就可以为网络中指定的客户端计算机分配一个静态 IP 地址:
```
host ubuntu-node {
@ -112,7 +110,7 @@ fixed-address 192.168.56.110;
$ ifconfig -a eth0 | grep HWaddr
```
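在较新的发行版上,`ifconfig` 可能已被废弃,此时可以改用 `ip` 命令查看网卡的 MAC 地址(示意):

```
$ ip link show eth0
```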
6. 现在,使用下面的命令启动 DHCP 服务,并使在下次系统启动时自动启动:
6 现在,使用下面的命令启动 DHCP 服务,并使在下次系统启动时自动启动:
```
---------- On CentOS/RHEL 7 ----------
@ -120,9 +118,10 @@ $ systemctl start dhcpd
$ systemctl enable dhcpd
---------- On CentOS/RHEL 6 ----------
$ service dhcpd start
$ chkconfig dhcpd on```
$ chkconfig dhcpd on
```
7、 另外,别忘了使用下面的命令允许 DHCP 服务通过防火墙DHCPD 守护进程通过 UDP 监听 67 号端口):
7 另外,别忘了使用下面的命令允许 DHCP 服务通过防火墙DHCPD 守护进程通过 UDP 监听67号端口
```
---------- On CentOS/RHEL 7 ----------
@ -135,7 +134,7 @@ $ service iptables save
### 第四步:配置 DHCP 客户端
8. 现在,你可以为网络中的客户端配置自动从 DHCP 服务器中获取 IP 地址。登录到客户端机器并按照下面的方式修改以太网接口的配置文件(注意网卡的名称和编号):
8 现在,你可以为网络中的客户端配置自动从 DHCP 服务器中获取 IP 地址。登录到客户端机器并按照下面的方式修改以太网接口的配置文件(注意网卡的名称和编号):
```
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
@ -152,15 +151,15 @@ ONBOOT=yes
保存文件并退出。
9. 你也可以在桌面服务器中按照下面的截图Ubuntu 16.04桌面版)通过 GUI 设置 Method 为 Automatic (DHCP)。
9 你也可以在桌面服务器中按照下面的截图Ubuntu 16.04桌面版)通过 GUI 设置 `Method``Automatic (DHCP)`
[
![Set DHCP in Client Network](http://www.tecmint.com/wp-content/uploads/2017/03/Set-DHCP-in-Client-Network.png)
][3]
在客户端网络中设置 DHCP
*在客户端网络中设置 DHCP*
10. 按照下面的命令重启网络服务(你也可以通过重启系统):
10 按照下面的命令重启网络服务(你也可以通过重启系统):
```
---------- On CentOS/RHEL 7 ----------
@ -190,7 +189,7 @@ via: http://www.tecmint.com/install-dhcp-server-in-centos-rhel-fedora/
作者:[Aaron Kili][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,9 +1,9 @@
[GitHub 风格的 Markdown 的正式规范][8]
《GitHub 风格的 Markdown 正式规范》发布
====================
很庆幸,我们当初选择 Markdown 作为用户在 GitHub 上托管内容的标记语言,它为用户提供了强大且直接的方式 (不管是技术的还是非技术的) 来编写可以很好的渲染成 HTML 的纯文本文档。
其最主要的限制,就是缺乏在最模糊的语言细节上的标准。比如,使用多少个空格来进行行缩进、两个不同元素之间需要使用多少空行区分、大量繁琐细节往往造成不同的实现:相似的 Markdown 文档会因为选用的不同的语法解析器而渲染成大量不同的呈现效果。
然而,其最主要的限制,就是在那些最模糊的语言细节上缺乏标准。比如,使用多少个空格来进行行缩进、两个不同元素之间需要使用多少个空行来区分等等,这些繁琐的细节往往造成不同的实现:相似的 Markdown 文档会因为选用的不同的语法解析器而渲染成相当不同的呈现效果。
五年前,我们在 [Sundown][13] 的基础之上开始构建 GitHub 自定义版本的 Markdown —— GFM (<ruby>GitHub 风格的 Markdown<rt>GitHub Flavored Markdown</rt></ruby>),这是我们特地为解决当时已有的 Markdown 解析器的不足而开发的一款解析器。
@ -11,19 +11,19 @@
该正式规范基于 [CommonMark][14],这是一个雄心勃勃的项目,旨在通过一个反映现实世界用法的方式来规范目前互联网上绝大多数网站使用的 Markdown 语法。CommonMark 允许人们以他们原有的习惯来使用 Markdown同时为开发者提供一个综合规范和参考实例从而实现跨平台的 Markdown 互操作和显示。
#### 规范
### 规范
使用 CommonMark 规范并围绕它来重设我们当前用户内容堆栈需要不少努力。我们纠结的主要问题是该规范 (及其参考实现) 过多关注由原生 Perl 实现支持的 Markdown 通用子集。这还不包括那些 GitHub 上已经在用的扩展特性。最明显的就是缺少 _表格 (tables)、删除线 (strikethrough)、自动链接 (autolinks)__任务列表 (task lists)_ 的支持。
使用 CommonMark 规范并围绕它来重新加工我们当前用户内容需要不少努力。我们纠结的主要问题是该规范 (及其参考实现) 过多关注由原生 Perl 实现支持的 Markdown 通用子集。这还不包括那些 GitHub 上已经在用的扩展特性。最明显的就是缺少表格 (tables)、删除线 (strikethrough)、自动链接 (autolinks) 和任务列表 (task lists) 的支持。
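为了直观起见,下面给出这几种 GFM 扩展语法的一个简单示意(内容为虚构,均不属于原始 CommonMark 规范):

```
| 功能   | 示例           |
| ------ | -------------- |
| 删除线 | ~~废弃的文字~~ |

- [x] 已完成的任务
- [ ] 未完成的任务

www.example.com 这样的裸链接会被自动转换为超链接
```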
为完全指定 GitHub 的 Markdown 版本 (也称为 GFM),我们必须要要正式定义这些特性的的语法和语意,这在以前从未做过。我们是在现存的 CommonMark 规范中来完成这一项工作的,同时还特意关注以确保我们的扩展是原有规范的一个严格且可选的超集。
为完全描述 GitHub 的 Markdown 版本 (也称为 GFM),我们必须要正式定义这些特性的语法和语意,这在以前从未做过。我们是在现存的 CommonMark 规范中来完成这一项工作的,同时还特意关注以确保我们的扩展是原有规范的一个严格且可选的超集。
当评估 [GFM 规范][15] 的时候,你可以清楚的知道哪些是 GFM 特定规范的补充内容,因为它们都高亮显示了。并且你也会看到原有规范的所有部分都保存原样因此GFM 规范能够与其他任何实现兼容。
当评估 [GFM 规范][15] 的时候,你可以清楚的知道哪些是 GFM 特定规范的补充内容,因为它们都高亮显示了。并且你也会看到原有规范的所有部分都保持原样因此GFM 规范能够与任何其他的实现保持兼容。
#### 实现
### 实现
为确保我们网站中的 Markdown 渲染能够完美兼容 CommonMark 规范GitHub 的 GFM 解析器的后端实现基于 `cmark` 来开发,这是 CommonMark 规范的一个参考实现,由 [John MacFarlane][16] 和 许多其他的 [出色的贡献者][17] 开发完成。
为确保我们网站中的 Markdown 渲染能够完美兼容 CommonMark 规范GitHub 的 GFM 解析器的后端实现基于 `cmark` 来开发,这是 CommonMark 规范的一个参考实现,由 [John MacFarlane][16] 和许多其他的 [出色的贡献者][17] 开发完成。
就像规范本身那样,`cmark` 是 Markdown 的严格子集解析器,所以我们还必须在现存解析器的基础上完成 GitHub 自定义扩展的解析功能。你可以通过 [`cmark` 的分支][18] 来查看变更记录;为了跟踪不断改进的上游项目,我们持续将我们的补丁变基到上游主线上去。我们希望,这些扩展的正式规范一旦确定,这些 patch 集同样可以应用到原始项目的上游变更中去。
就像规范本身那样,`cmark` 是 Markdown 的严格子集解析器,所以我们还必须在现存解析器的基础上完成 GitHub 自定义扩展的解析功能。你可以通过 [`cmark` 的分支][18] 来查看变更记录;为了跟踪不断改进的上游项目,我们持续将我们的补丁变基到上游主线上去。我们希望,这些扩展的正式规范一旦确定,这些补丁集同样可以应用到原始项目的上游变更中去。
除了在 `cmark` 分支中实现 GFM 规范特性,我们也同时将许多目标相似的变更贡献到上游。绝大多数的贡献都主要围绕性能和安全。我们的后端每天都需要渲染大量的 Markdown 文档,所以我们主要关注这些操作可以尽可能的高效率完成,同时还要确保那些滥用的恶意 Markdown 文档无法攻击到我们的服务器。
@ -31,24 +31,23 @@
`cmark` 在性能方面则是有点粗糙:基于实现 Sundown 时我们所学到的性能技巧,我们向上游贡献了许多优化方案,但除去所有这些变更之外,当前版本的 `cmark` 仍然无法与 Sundown 本身匹敌:我们的基准测试表明,`cmark` 在绝大多数文档渲染的性能上要比 Sundown 低 20% 到 30%。
那句古老的优化谚语 _最快的代码就是不需要运行的代码 (the fastest code is the code that doesnt run)_ 在此处同样适用:实际上,`cmark` 比 Sundown 要进行 _多一些操作_。在其他的功能上,`cmark` 支持 UTF8 字符集,对参考的支持、扩展的接口清理的效果更佳。最重要的是它如同 Sundown 那样,并不会将 Markdown _翻译成_ HTML。它实际上从 Markdown 源码中生成一个 AST (抽象语法树Abstract Syntax Tree),然后我们就看将之转换和逐渐渲染成 HTML。
那句古老的优化谚语 _最快的代码就是不需要运行的代码 (the fastest code is the code that doesn't run)_ 在此处同样适用:实际上,`cmark` 比 Sundown 要_多进行一些操作_。在其他的功能上,`cmark` 支持 UTF8 字符集,对参考的支持、扩展的接口清理的效果更佳。最重要的是,与 Sundown 不同,它并不会将 Markdown 直接_翻译成_ HTML。它实际上从 Markdown 源码中生成一个 AST (抽象语法树Abstract Syntax Tree),然后我们再将之转换和逐渐渲染成 HTML。
如果考虑下我们在 Sundown 的最初实现 (特别是文档中关于查询用户的 mention 和 issue 参考、插入任务列表等) 时的 HTML 语法剖析工作量,你会发现 `cmark` 基于 AST 的方法可以节约大量时间 _和_ 降低我们用户内容堆栈的复杂度。Markdown AST 是一个非常强大的工具,并且值得 `cmark` 生成它所付出的性能成本。
如果考虑下我们在 Sundown 的最初实现 (特别是文档中关于查询用户的 mention 和 issue 引用、插入任务列表等) 时的 HTML 语法剖析工作量,你会发现 `cmark` 基于 AST 的方法可以节约大量时间 _和_ 降低我们用户内容堆栈的复杂度。Markdown AST 是一个非常强大的工具,并且值得 `cmark` 生成它所付出的性能成本。
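如果安装了 `cmark` 的参考实现,可以通过它的命令行工具直观地看到这棵语法树(输出格式还支持 html、man、latex 等示例输入为任意虚构内容

```
$ echo '# Hello *world*' | cmark --to xml
```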
### 迁移
变更我们用户的内容堆栈以兼容 CommonMark 规范,并不同于转换我们用来解析 Markdown 的库那样容易:目前我们遇到的最根本的障碍就是,对于一些不常用语法 (LCTT 译注:原文是 the Corner作为名词的原意为角落、偏僻处、窘境这应该是指那些不常用语法)CommonMark 规范 (以及有歧义的 Markdown 原文) 可能会以一种意想不到的方式来渲染一些老旧的 Markdown 内容。
the fundamental roadblock we encountered here is that the corner cases that CommonMark specifies (and that the original Markdown documentation left ambiguous) could cause some old Markdown content to render in unexpected ways.
通过综合分析 GitHub 中大量的 Markdown 语料库,我们断定现存的用户内容只有不到 1% 会受到新版本实现的影响:我们是通过同时使用新 (`cmark`,兼容 CommonMark 规范) 旧 (Sundown) 版本的库来渲染大量的 Markdown 文档、标准化 HTML 结果、分析它们的不同点,最后才得到这一个数据的。
只有 1% 的文档存在少量的渲染问题,这使得换用新实现并获得其诸多好处看起来是非常合理的权衡,但是按照当前 GitHub 的规模,这 1% 也意味着非常多的内容和很多受影响的用户。我们真的不想导致任何用户需要重新校对一个老旧的问题、看到先前可以渲染成 HTML 的内容又呈现为 ASCII 码 —— 尽管这明显不会导致任何原始内容的丢失,却是糟糕的用户体验。
因此,我们想出相应的方法来缓和迁移过程。首先,第一件我们做的事就是收集用户托管在我们网站上的两种不同类型 Markdown 的数据:用户的评论 (比如 Gists、issues、PR 等)以及在 git 仓库中的 Markdown 文档。
因此,我们想出相应的方法来缓和迁移过程。首先,第一件我们做的事就是收集用户托管在我们网站上的两种不同类型 Markdown 的数据:用户的评论 (比如 Gist、issue、PR 等)以及在 git 仓库中的 Markdown 文档。
这两种内容有着本质上的区别:用户评论存储在我们的数据库中,这意味着他们的 Markdown 语法可以标准化 (比如添加或移除空格、修正缩进或则插入缺失的 Markdown 说明符,直到它们可正常渲染为止)。然而,那些存储在 Git 仓库中的 Markdown 文档则是 _根本_ 无法触及,因为这些内容已经散列成为 Git 存储模型的一部分。
幸运的是,我们发现绝大多数使用了复杂的 Markdown 特性的用户内容都是用户评论 (特别是 issue 主体和 PR 主体),而存储于仓库中的文档则大多数情况下都可以使用新旧渲染器正常进行渲染。
幸运的是,我们发现绝大多数使用了复杂的 Markdown 特性的用户内容都是用户评论 (特别是 issue 主体和 PR 主体),而存储于仓库中的文档则大多数情况下都可以使用新的和渲染器正常进行渲染。
因此,我们加快了标准化现存用户内容的语法的进程,以便使它们在新旧实现下渲染效果一致。
@ -56,23 +55,23 @@ the fundamental roadblock we encountered here is that the corner cases that Comm
除了转换之外,这还是一个高效的标准化过程,并且我们对此信心满满,毕竟完成这一任务的是我们在五年前就使用过的解析器。因此,所有的现存文档在保留其原始语意的情况下都能够进行明确的解析。
一旦升级 Sundown 来标准化输入文档并充分测试之后,我们就会做好开启转换进程的准备。最开始的一步,就是在新的 `cmark` 实现上为所有的用户内容进行反置转换,以便确保我们能有一个有限的分界点来进行过渡。我们将为网站上这几个月内所有 **新的** 用户评论启用 CommonMark这一过程不会引起任何人注意的 —— 他们这是一个关于 CommonMark 团队出色工作的圣约,通过一个最具现实世界用法的方式来正式规范 Markdown 语言。
一旦升级 Sundown 来标准化输入文档并充分测试之后,我们就会做好开启转换进程的准备。最开始的一步,就是对所有新用户内容切换到新的 `cmark` 实现上,以便确保我们能有一个有限的分界点来进行过渡。实际上,几个月前我们就为网站上所有 **新的** 用户评论启用了 CommonMark这一过程几乎没有引起任何人注意 —— 这是关于 CommonMark 团队出色工作的证明,通过一个最具现实世界用法的方式来正式规范 Markdown 语言。
在后端,我们开启 MySQL 转换来升级替代用户的 Markdown 内容。在所有的评论进行标准化之后,在将其写回到数据库之前,我们将使用新实现来进行渲染并与旧实现的渲染结果进行对比,以确保 HTML 输出结果视觉可鉴以及用户数据在任何情况下都不被破坏。总而言之,只有不到 1% 的输入文档会受到表彰进程的修改,这符合我们的的期望,同时再次证明 CommonMark 规范能够呈现语言的真实用法。
在后端,我们开启 MySQL 转换来升级替代所有 Markdown 用户内容。在所有的评论进行标准化之后,在将其写回到数据库之前,我们将使用新实现来进行渲染,并与旧实现的渲染结果进行对比,以确保 HTML 输出结果在视觉上感觉相同,并且用户数据在任何情况下都不会被破坏。总而言之,只有不到 1% 的输入文档会受到标准化进程的修改,这符合我们的期望,同时再次证明 CommonMark 规范能够呈现语言的真实用法。
整个过程会持续好几天,最后的结果是网站上所有的 Markdown 用户内容会得到全面升级以符合新的 Markdown 标准,同时确保所有的最终渲染输出效果都对用户视觉可辩
整个过程会持续好几天,最后的结果是网站上所有的 Markdown 用户内容会得到全面升级以符合新的 Markdown 标准,同时确保所有的最终渲染输出效果对用户视觉上感觉相同
#### 结论
### 结论
从今天 (LCTT 译注:原文发布于 2017 年 3 月 14 日,这里的今天应该是这个日期) 开始, 我们同样为所有存储在 Git 仓库中的 Markdown 内容启动 CommonMark 渲染。正如上文所述,所有的现存文档都不会进行标准化,因为我们所期望中的多数渲染效果都刚刚好。
能够让在 GitHub 上的所有 Markdown 内容符合一个动态变化且使用的标准,同时还可以为我的用户提供一个关于 GFM 如何进行解析和渲染 [清晰且权威的参考说明][19],我们是相当激动的。
我们还将致力于 CommonMark 规范,一直到在它正式发布之前消沉最后一个 bug。我们也希望 GitHub.com 在 1.0 规范发布之后可以进行完美兼容。
在它正式发布之前,我们还将继续致力于 CommonMark 规范,消除最后一个 bug。我们也希望 GitHub.com 在其 1.0 规范发布之后可以进行完美兼容。
作为结束,以下为想要学习 CommonMark 规范或者自己来编写实现的朋友提供一些有用的链接。
* [CommonMark 主页][1],可以了解该项目该多信息
* [CommonMark 主页][1],可以了解本项目更多信息
* [CommonMark 论坛讨论区][2],可以提出关于该规范的的问题和更改建议
* [CommonMark 规范][3]
* [使用 C 语言编写的参考实现][4]
@ -90,7 +89,7 @@ the fundamental roadblock we encountered here is that the corner cases that Comm
via: https://githubengineering.com/a-formal-spec-for-github-markdown/
作者:[Yuki Izumi][a][Vicent Martí][b]
作者:[Yuki Izumi][a] [Vicent Martí][b]
译者:[GHLandy](https://github.com/GHLandy)
校对:[jasminepeng](https://github.com/jasminepeng)

View File

@ -0,0 +1,101 @@
5 个开源 RSS 订阅阅读器
============================================================
![RSS feed](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/rss_feed.png?itok=FHLEh-fZ "RSS feed")
>Image by : [Rob McDonald][2] on Flickr. Modified by Opensource.com. [CC BY-SA 2.0][3].
### 你平时使用 RSS 阅读器么?
<form class="pollanon" action="https://opensource.com/article/17/3/rss-feed-readers" method="post" id="poll-view-voting" accept-charset="UTF-8"><label class="element-invisible" for="edit-choice" style="display: block; clip: rect(1px 1px 1px 1px); overflow: hidden; height: 1px; width: 1px; color: rgb(67, 81, 86); position: absolute !important;">选择</label><input type="radio" id="edit-choice-7621" name="choice" value="7621" class="form-radio" style="font-size: 16px; margin-top: 0px; max-width: 100%; -webkit-appearance: none; width: 0.8em; height: 0.8em; border-width: 1px; border-style: solid; border-color: rgb(51, 51, 51); border-radius: 50%; vertical-align: middle;"> <label class="option" for="edit-choice-7621" style="display: inline; font-weight: normal; color: rgb(67, 81, 86); margin-left: 0.2em; vertical-align: middle;"></label><input type="radio" id="edit-choice-7626" name="choice" value="7626" class="form-radio" style="font-size: 16px; margin-top: 0px; max-width: 100%; -webkit-appearance: none; width: 0.8em; height: 0.8em; border-width: 1px; border-style: solid; border-color: rgb(51, 51, 51); border-radius: 50%; vertical-align: middle;"> <label class="option" for="edit-choice-7626" style="display: inline; font-weight: normal; color: rgb(67, 81, 86); margin-left: 0.2em; vertical-align: middle;">不,但是我过去使用</label><input type="radio" id="edit-choice-7631" name="choice" value="7631" class="form-radio" style="font-size: 16px; margin-top: 0px; max-width: 100%; -webkit-appearance: none; width: 0.8em; height: 0.8em; border-width: 1px; border-style: solid; border-color: rgb(51, 51, 51); border-radius: 50%; vertical-align: middle;"> <label class="option" for="edit-choice-7631" style="display: inline; font-weight: normal; color: rgb(67, 81, 86); margin-left: 0.2em; vertical-align: middle;">不,我从没使用过</label><input type="submit" id="edit-vote" name="op" value="投票" class="form-submit" style="font-family: &quot;Swiss 721 SWA&quot;, &quot;Helvetica Neue&quot;, Helvetica, Arial, &quot;Nimbus Sans L&quot;, sans-serif; font-size: 1em; max-width: 100%; line-height: normal; font-style: normal; border-width: 1px; border-style: solid; border-color: rgb(119, 186, 77); color: rgb(255, 255, 255); background: rgb(119, 186, 77); padding: 0.6em 1.9em;"></form>
四年前,当 Google Reader 宣布停止服务的时候,许多“技术专家”声称 RSS 订阅将就此消亡。
对于某些人而言,社交媒体和其他聚合工具满足了 RSS、Atom 以及其它格式的阅读器的需求。但是老技术绝对不会因为新技术而死,特别是如果新技术不能完全覆盖旧技术的所有使用情况时。技术的目标受众可能会有所改变,人们使用这个技术的工具也可能会改变。
但是RSS 并不会比 email、JavaScript、SQL 数据库、命令行,或者其它十几年前就有人断言“时日无多”的技术更快地消失。(黑胶唱片的销售额去年刚刚达到了 [25 年来的顶峰][4],这不是个奇迹么?)只要看看在线 Feed 阅读器网站 Feedly 的成功,就能明白 RSS 阅读器仍然有市场。
事实是RSS 和相关的 Feed 格式,比任何试图取代它们的被广泛使用的方案都更加通用。作为一名内容读者,没有比它更简单的方式让我阅读大量的出版信息了:内容由我选择的客户端来排版,我可以确保看到发布的每一条内容,同时不会再看到已经读过的文章。而作为发布者,大多数发布软件都开箱即用地支持这种格式,它比我用过的其它方式都简单,可以把我的信息送达更多的人,并且可以很容易地分发多种不同类型的文档格式。
所以 RSS 没有死。RSS 长存!我们上一次回顾[开源 RSS 阅读器][5]的选择还是在 2013 年,现在是时候更新了。这里是我关于 2017 年开源 RSS 订阅阅读器的一些最佳选择,每一个的使用方式都略有不同。
### Miniflux
[Miniflux][6] 是一个极度简约的基于 Web 的 RSS 阅读器,但不要把它刻意的轻量设计误解为开发人员的懒惰,其目的就是构建一个简单而有效的设计。Miniflux 的思想似乎是让程序本身淡出,以便让读者专注于内容;在充斥着臃肿 web 程序的今天,我们会特别欣赏这一点。
但轻便并不意味着缺乏功能。其响应式设计在任何设备上看起来都很好并可以使用主题、API 接口、多语言、固定书签等等。
Miniflux 的[源代码][7]以 [Affero GPLv3][8] 许可证在 GitHub 上发布。如果你不想自行托管,则可以选择每年 15 美元的托管计划。
### RSSOwl
[RSSOwl][9] 是一个跨平台的桌面 Feed 阅读器。它用 Java 编写,风格和观感都很像流行的桌面邮件客户端。它具有强大的过滤和搜索功能、可定制的通知,以及用于给 Feed 分类的标签。如果你习惯使用 Thunderbird 或其他桌面客户端收发电子邮件,那么 RSSOwl 会让你感到宾至如归。
可以在 GitHub 中找到 [Eclipse Public 许可证][11]下发布的 [RSSOwl][10] 的源代码。
### Tickr
[Tickr][12] 在这个系列中有点不同。它是一个 Linux 桌面客户端,但它不是传统的浏览-阅读形式。相反,它会将你的 Feed 标题如滚动新闻那样在桌面横栏上滚动显示。对于想要从各种来源获得最新消息的新闻迷来说,这是一个不错的选择。点击标题将在你选择的浏览器中打开它。它不像这个列表中的其他程序那样是专门的阅读客户端,但是如果比起阅读每篇文章,你对阅读标题更感兴趣,这是一个很好的选择。
Tickr 的源代码和二进制文件以 GPL 许可证的形式在这个[网站][13]上可以找到。
### Tiny Tiny RSS
如果缺少了 [Tiny Tiny RSS][14],那么很难称之为一个现代化的 RSS 阅读器列表。它是最受欢迎的自主托管的基于 Web 的阅读器,功能丰富,支持 OPML 导入和导出、键盘快捷键、共享功能、主题界面、插件支持、过滤功能等等。
Tiny Tiny RSS 还有官方的 [Android 客户端][15],让你可以随时随地阅读。
Tiny Tiny RSS 的 [Web][16] 版本和 [Android][17] 源代码以 [GPLv3 许可][18] 在 GitLab 上发布。
### Winds
[Winds][19] 是一个建立在 React 之上、外观十分现代的自托管 web 订阅阅读器。它利用一个叫做 Stream 的机器学习个性化 API帮助你根据当前的兴趣找到更多可能感兴趣的内容。它提供了在线演示版本因此你可以在下载之前先[试用][20]。这是一个诞生只有几个月的新项目,要评估它能否取代我日常使用的 Feed 阅读器也许还为时过早,但这当然是一个我有兴趣持续关注的项目。
Winds 的[源代码][21] 以 [MIT][22] 许可证在 GitHub 上发布。
* * *
这些当然不是仅有的选择。RSS 是一种相对易于解析、文档完善的格式,因此有许许多多为满足不同需求而构建的 Feed 阅读器。这里有一个很长的自托管开源 Feed 阅读器[列表][23],除了我们列出的之外,你还可以从中挑选。我们希望你能在下面的评论栏与我们分享你最喜欢的 RSS 阅读器。
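顺便一提,“易于解析”并非虚言。下面是一个最小的命令行示意(假设系统中装有 `curl` 和 libxml2 附带的 `xmllint`,示例 URL 仅作演示,以 RSS 2.0 格式的 Feed 为例),它抓取一个 Feed 并列出其中所有文章的标题:

```
# 抓取 Feed并用 XPath 取出每篇文章的 <title> 节点
curl -s https://opensource.com/feed | xmllint --xpath '//item/title' -
```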
--------------------------------------------------------------------------------
作者简介:
Jason Baker - Jason 热衷于使用技术使世界更加开放从软件开发到阳光政府行动。Linux 桌面爱好者、地图/地理空间爱好者、树莓派工匠、数据分析和可视化极客、偶尔的码农、云本土主义者。在 Twitter 上关注他 @jehb
--------------
via: https://opensource.com/article/17/3/rss-feed-readers
作者:[Jason Baker][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jason-baker
[1]:https://opensource.com/article/17/3/rss-feed-readers?rate=2sJrLq0K3QPQCznBId7K1Qrt3QAkwhQ435UyP77B5rs
[2]:https://www.flickr.com/photos/evokeartdesign/6002000807
[3]:https://creativecommons.org/licenses/by/2.0/
[4]:https://www.theguardian.com/music/2017/jan/03/record-sales-vinyl-hits-25-year-high-and-outstrips-streaming
[5]:https://opensource.com/life/13/6/open-source-rss
[6]:https://miniflux.net/
[7]:https://github.com/miniflux/miniflux
[8]:https://github.com/miniflux/miniflux/blob/master/LICENSE
[9]:http://www.rssowl.org/
[10]:https://github.com/rssowl/RSSOwl
[11]:https://github.com/rssowl/RSSOwl/blob/master/LICENSE
[12]:https://www.open-tickr.net/
[13]:https://www.open-tickr.net/download.php
[14]:https://tt-rss.org/gitlab/fox/tt-rss/wikis/home
[15]:https://tt-rss.org/gitlab/fox/tt-rss-android
[16]:https://tt-rss.org/gitlab/fox/tt-rss/tree/master
[17]:https://tt-rss.org/gitlab/fox/tt-rss-android/tree/master
[18]:https://tt-rss.org/gitlab/fox/tt-rss-android/blob/master/COPYING
[19]:https://winds.getstream.io/
[20]:https://winds.getstream.io/app/getting-started
[21]:https://github.com/GetStream/Winds
[22]:https://github.com/GetStream/Winds/blob/master/LICENSE.md
[23]:https://github.com/Kickball/awesome-selfhosted#feed-readers
[24]:https://opensource.com/user/19894/feed
[25]:https://opensource.com/article/17/3/rss-feed-readers#comments
[26]:https://opensource.com/users/jason-baker

View File

@ -0,0 +1,127 @@
如何在 Ubuntu 和 Linux Mint 上启用桌面共享
============================================================
桌面共享是指通过图形终端仿真器在计算机桌面上实现远程访问和远程协作的技术。桌面共享允许两个或多个连接到网络的计算机用户在不同位置对同一个文件进行操作。
在这篇文章中,我将向你展示如何在 Ubuntu 和 Linux Mint 中启用桌面共享,并展示一些重要的安全特性。
### 在 Ubuntu 和 Linux Mint 上启用桌面共享
1、在 Ubuntu Dash 或 Linux Mint 菜单中,像下面的截图这样搜索 `desktop sharing`,搜索到以后,打开它。
[
![Search for Desktop Sharing in Ubuntu](http://www.tecmint.com/wp-content/uploads/2017/03/search-for-desktop-sharing.png)
][1]
*在 Ubuntu 中搜索 Desktop sharing*
2、打开 Desktop sharing 以后,有三个关于桌面共享设置的选项:共享、安全以及通知设置。
在共享选项下面,选中选项“允许其他用户查看桌面”来启用桌面共享。另外,你还可以选中选项“允许其他用户控制你的桌面”,从而允许其他用户远程控制你的桌面。
[
![Desktop Sharing Preferences](http://www.tecmint.com/wp-content/uploads/2017/03/desktop-sharing-settings-inte.png)
][2]
*桌面共享偏好*
3、接下来,在“安全”部分,你可以通过勾选选项“你必须确认任何对该计算机的访问”来手动确认每个远程连接。
另外,另一个有用的安全特性是通过选项“需要用户输入密码”来设置共享密码。这样,其他用户每次想要访问你的桌面时,都必须知道并输入这个密码。
4、对于通知,你可以勾选“仅当有人连接上时”来监视远程连接,这样每次当有人远程连接到你的桌面时,可以在通知区域查看。
[
![Configure Desktop Sharing Set](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Desktop-Sharing-Set.png)
][3]
*配置桌面共享设置*
当所有的桌面共享选项都设置好以后,点击“关闭”。现在,你已经在你的 Ubuntu 或 Linux Mint 上成功启用了桌面共享。
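顺带一提,在 Ubuntu 上,桌面共享功能底层通常由 GNOME 的 VNC 服务 Vino 提供。如果你更喜欢命令行,下面是一段与上述图形设置大致等效的示意(假设你的系统确实使用 Vino具体键名请以 `gsettings list-keys org.gnome.Vino` 的输出为准,密码仅为示例):

```
# 手动确认每个远程连接(对应“安全”部分的选项)
gsettings set org.gnome.Vino prompt-enabled true
# 要求输入共享密码VNC 密码需要 base64 编码)
gsettings set org.gnome.Vino authentication-methods "['vnc']"
gsettings set org.gnome.Vino vnc-password "$(echo -n 'mypassword' | base64)"
# 仅当有人连接上时才通知(对应“通知”部分的选项)
gsettings set org.gnome.Vino notify-on-connect true
```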
### 测试 Ubuntu 的远程桌面共享
你可以通过使用一个远程连接应用来进行测试,从而确保桌面共享可用。在这个例子中,我将展示上面设置的一些选项是如何工作的。
5、我将使用 VNC虚拟网络计算协议通过 [remmina 远程连接应用][4]连接到我的 Ubuntu PC。
[
![Remmina Desktop Sharing Tool](http://www.tecmint.com/wp-content/uploads/2017/03/Remmina-Desktop-Sharing-Tool.png)
][5]
*Remmina 桌面共享工具*
6、在点击 Ubuntu PC 以后,将会出现下面这个配置连接设置的界面,
[
![Remmina Desktop Sharing Preferences](http://www.tecmint.com/wp-content/uploads/2017/03/Remmina-Configure-Remote-Desk.png)
][6]
*Remmina 桌面共享偏好*
7、当完成所有设置以后点击“连接”。然后输入该用户名对应的 SSH 密码并点击 OK。
[
![Enter SSH User Password](http://www.tecmint.com/wp-content/uploads/2017/03/shared-pass.png)
][7]
*输入 SSH 用户密码*
点击 OK 以后,会出现下面这样的黑屏,这是因为远程机器还没有确认这次连接。
[
![Black Screen Before Confirmation](http://www.tecmint.com/wp-content/uploads/2017/03/black-screen-before-confirmat.png)
][8]
*连接确认前的黑屏*
8、现在在远程机器上我需要像下面的屏幕截图显示的那样点击 `Allow` 来接受远程访问请求。
[
![Allow Remote Desktop Sharing](http://www.tecmint.com/wp-content/uploads/2017/03/accept-remote-access-request.png)
][9]
*允许远程桌面共享*
9、在接受请求以后,我就成功地连接到了远程 Ubuntu 机器的桌面。
[
![Remote Ubuntu Desktop](http://www.tecmint.com/wp-content/uploads/2017/03/successfully-connected-to-rem.png)
][10]
*远程 Ubuntu 桌面*
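另外,如果你不想使用图形化的 Remmina也可以在另一台机器上用任意命令行 VNC 客户端(例如 TigerVNC 的 `vncviewer`)做同样的测试。下面只是一个示意IP 地址为假设值:

```
# 显示号 :0 对应 Vino 默认监听的 5900 端口
vncviewer 192.168.0.2:0
```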
这就是全部内容了。在这篇文章中,我们讲解了如何在 Ubuntu 和 Linux Mint 中启用桌面共享。欢迎使用评论部分给我们写下反馈。
--------------------------------------------------------------------------------
作者简介:
Aaron Kili 是 Linux 和 F.O.S.S 爱好者,将来的 Linux 系统管理员和网络开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并坚信分享知识。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/enable-desktop-sharing-in-ubuntu-linux-mint/
作者:[Aaron Kili][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/wp-content/uploads/2017/03/search-for-desktop-sharing.png
[2]:http://www.tecmint.com/wp-content/uploads/2017/03/desktop-sharing-settings-inte.png
[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Desktop-Sharing-Set.png
[4]:http://www.tecmint.com/remmina-remote-desktop-sharing-and-ssh-client
[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Remmina-Desktop-Sharing-Tool.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Remmina-Configure-Remote-Desk.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/shared-pass.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/black-screen-before-confirmat.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/accept-remote-access-request.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/successfully-connected-to-rem.png
[11]:http://www.tecmint.com/author/aaronkili/
[12]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[13]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -0,0 +1,140 @@
如何在 Linux 中添加一块大于 2TB 的新磁盘
============================================================
你有没有试过使用 [fdisk][1] 对大于 2TB 的硬盘进行分区,并且纳闷为什么会得到需要使用 GPT 的警告? 是的,你看到的没错。我们无法使用 fdisk 对大于 2TB 的硬盘进行分区。
在这种情况下,我们可以使用 `parted` 命令。两者的主要区别在于fdisk 使用 DOS 分区表格式,而 parted 使用 GPT 格式。
提示:你可以使用 `gdisk` 来代替 `parted`
在本文中,我们将介绍如何将大于 2TB 的新磁盘添加到现有的 Linux 服务器中(如 RHEL/CentOS 或 Debian/Ubuntu中。
我使用的是 `fdisk``parted` 来进行此配置。
首先使用 `fdisk` 命令列出当前的分区详细信息,如图所示。
```
# fdisk -l
```
[
![List Linux Partition Table](http://www.tecmint.com/wp-content/uploads/2017/04/List-Linux-Partition-Table.png)
][2]
*列出 Linux 分区表*
为了本文的目的,我加了一块 20GB 的磁盘,这也可以是大于 2TB 的磁盘。在你加完磁盘后,使用相同的 `fdisk` 命令验证分区表。
```
# fdisk -l
```
[
![List New Partition Table](http://www.tecmint.com/wp-content/uploads/2017/04/List-New-Partition-Table.png)
][3]
*列出新的分区表*
提示:如果你添加了一块物理磁盘,你可能会发现分区已经创建了。此种情况下,你可以在使用 `parted` 之前使用 `fdisk` 删除它。
```
# fdisk /dev/xvdd
```
在 fdisk 的交互界面中使用 `d` 命令删除分区,使用 `w` 命令保存更改并退出。
[
![Delete Linux Partition](http://www.tecmint.com/wp-content/uploads/2017/04/Delete-Linux-Partition.png)
][4]
*删除 Linux 分区*
**重要:在删除分区时你需要小心点。这会擦除磁盘上的数据。**
现在是使用 `parted` 命令分区新的磁盘了。
```
# parted /dev/xvdd
```
将分区表格式化成 GPT
```
(parted) mklabel gpt
```
创建主分区并分配磁盘容量,这里我使用 20GB (在你这里可能是 2TB
```
(parted) mkpart primary 0GB 20GB
```
[
![Create Partition using Parted](http://www.tecmint.com/wp-content/uploads/2017/04/Create-Partition-using-Parted.png)
][5]
*使用 parted 创建分区*
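在退出 `parted` 之前,你也可以直接用它内置的 `print` 命令确认分区表类型和刚创建的分区:

```
(parted) print
(parted) quit
```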
出于好奇,让我们用 `fdisk` 看看新的分区。
```
# fdisk /dev/xvdd
```
[
![Verify Partition Details](http://www.tecmint.com/wp-content/uploads/2017/04/Verify-Partition-Details.png)
][6]
*验证分区细节*
现在来格式化并挂载这个分区,然后在 `/etc/fstab` 中添加相应条目,该文件控制系统启动时的文件系统挂载。
```
# mkfs.ext4 /dev/xvdd1
```
[
![Format Linux Partition](http://www.tecmint.com/wp-content/uploads/2017/04/Format-Linux-Partition.png)
][7]
*格式化 Linux 分区*
一旦分区格式化之后,是时候在 `/data1` 下挂载分区了。
```
# mount /dev/xvdd1 /data1
```
要永久挂载,在 `/etc/fstab` 添加条目。
```
/dev/xvdd1 /data1 ext4 defaults 0 0
```
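在重启之前,建议先验证一下 `/etc/fstab` 条目是否正确,以免写错导致下次启动出问题(示意命令):

```
# 挂载 /etc/fstab 中尚未挂载的文件系统,如有条目写错会在此报错
mount -a
# 确认分区确实挂载到了 /data1
df -h /data1
```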
重要:要使用 GPT 分区格式需要内核支持。默认情况下RHEL/CentOS 的内核已经支持 GPT但是对于 Debian/Ubuntu你需要在修改配置之后重新编译内核。
就是这样了!在本文中,我们向你展示了如何使用 `parted` 命令。与我们分享你的评论和反馈。
--------------------------------------------------------------------------------
作者简介:
我在包括 IBM-AIX、Solaris、HP-UX 以及 ONTAP 和 OneFS 存储技术的不同平台上工作,并掌握 Oracle 数据库。
-----------------------
via: http://www.tecmint.com/add-disk-larger-than-2tb-to-an-existing-linux/
作者:[Lakshmi Dhandapani][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/lakshmi/
[1]:http://www.tecmint.com/fdisk-commands-to-manage-linux-disk-partitions/
[2]:http://www.tecmint.com/wp-content/uploads/2017/04/List-Linux-Partition-Table.png
[3]:http://www.tecmint.com/wp-content/uploads/2017/04/List-New-Partition-Table.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/04/Delete-Linux-Partition.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/04/Create-Partition-using-Parted.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/04/Verify-Partition-Details.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/04/Format-Linux-Partition.png
[8]:http://www.tecmint.com/author/lakshmi/
[9]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[10]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -0,0 +1,109 @@
pyinotify在 Linux 中实时监控文件系统更改
============================================================
`Pyinotify` 是一个简单而有用的 Python 模块,它可用于在 Linux 中实时[监控文件系统更改][1]。
作为一名系统管理员,你可以用它来监视你感兴趣的目录的更改,如 Web 目录或程序数据存储目录及其他目录。
**建议阅读:** [fswatch - 监控 Linux 中的文件和目录更改或修改][2]
它依赖于 `inotify`(一个在内核 2.6.13 中引入的 Linux 内核功能),这是一种事件驱动的通知机制,其通知通过三个系统调用从内核空间导出到用户空间。
`pyinotify` 的目的是绑定这三个系统调用,并在其上提供一套通用、抽象的方法来操作这些功能。
在本文中,我们将向你展示如何在 Linux 中安装并使用 `pyinotify` 来实时监控文件系统更改或修改。
#### 依赖
要使用 `pyinotify`,你的系统必须运行:
1. Linux kernel 2.6.13 或更高
2. Python 2.4 或更高
### 如何在 Linux 中安装 Pyinotify
首先在系统中检查内核和 Python 的版本:
```
# uname -r
# python -V
```
一旦依赖满足,我们会使用 `pip` 安装 `pyinotify`。在大多数 Linux 发行版中,如果你使用的是从 python.org 下载的 **Python 2 >= 2.7.9** 或者 **Python 3 >= 3.4** 的二进制,那么 `pip` 就已经安装了,否则,就按如下安装:
```
# yum install python-pip [On CentOS based Distros]
# apt-get install python-pip [On Debian based Distros]
# dnf install python-pip [On Fedora 22+]
```
现在安装 `pyinotify`
```
# pip install pyinotify
```
它会从默认仓库安装可用的版本,如果你想要最新的稳定版,可以按如下从 git 仓库 clone 下来:
```
# git clone https://github.com/seb-m/pyinotify.git
# cd pyinotify/
# ls
# python setup.py install
```
### 如何在 Linux 中使用 pyinotify
在下面的例子中,我以 root 用户(通过 ssh 登录)监视了用户 tecmint 的家目录(`/home/tecmint`)下的改变,如截图所示:
```
# python -m pyinotify -v /home/tecmint
```
[
![Monitor Directory Changes](http://www.tecmint.com/wp-content/uploads/2017/03/Monitor-Directory-File-Changes.png)
][3]
*监视目录更改*
接下来,我会监控 web 目录 `/var/www/html/tecmint.com` 的任何更改:
```
# python -m pyinotify -v /var/www/html/tecmint.com
```
要退出程序,只要按下 `Ctrl+C`
**注意**:当你在运行 `pyinotify` 时如果没有指定要监视的目录,`/tmp` 将作为默认目录。
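如果希望监控在后台长期运行,并把事件写入日志文件而不是终端,可以参考下面这个简单的示意(日志路径为假设值):

```
# 在后台监控 web 目录,把输出追加到日志文件
nohup python -m pyinotify -v /var/www/html/tecmint.com >> /var/log/pyinotify.log 2>&1 &
# 实时查看最新事件
tail -f /var/log/pyinotify.log
```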
可以在 Github 上了解更多 Pyinotify 信息:[https://github.com/seb-m/pyinotify][4]。
就是这样了!在本文中,我们向你展示了如何安装及使用 `pyinotify`,一个在 Linux 中监控文件系统更改的有用的 Python 模块。
你有遇到类似的 Python 模块或者相关的 [Linux 工具/小程序][5]么?请在评论中让我们了解,或许你也可以询问与这篇文章相关的问题。
--------------------------------------------------------------------------------
作者简介:
Aaron Kili 是 Linux 和 F.O.S.S 爱好者,将来的 Linux 系统管理员和网络开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并坚信分享知识。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/pyinotify-monitor-filesystem-directory-changes-in-linux/
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/fswatch-monitors-files-and-directory-changes-modifications-in-linux/
[2]:http://www.tecmint.com/fswatch-monitors-files-and-directory-changes-modifications-in-linux/
[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Monitor-Directory-File-Changes.png
[4]:https://github.com/seb-m/pyinotify
[5]:http://tecmint.com/tag/commandline-tools
[6]:http://www.tecmint.com/author/aaronkili/
[7]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[8]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -0,0 +1,51 @@
Ubuntu 17.04Zesty Zapus正式发布可以下载使用了
-------------------------------
![](http://i1-news.softpedia-static.com/images/news2/ubuntu-17-04-zesty-zapus-officially-released-available-to-download-now-514853-8.jpg)
今天2017 年 4 月 13 日Canonical 官方发布了 Ubuntu 17.04Zesty Zapus的最终版。自从去年十月发布 Ubuntu 16.10Yakkety Yak它已经开发了将近 6 个月。
如果直到今天,你一直在你的电脑上使用 Ubuntu 16.10,那么是时候升级到 Ubuntu 17.04 了。它是一个强大的发行版,“内外兼修”:由最新的稳定版 Linux 4.10 内核驱动,并配备了基于 X.org Server 1.19.3 和 Mesa 17.0.3 的最新图形栈。
上面提到的三项新技术,是那些使用 AMD Radeon 显卡玩游戏的人们需要立刻升级到 Ubuntu 17.04Zesty Zapus的唯一理由。而 Ubuntu 17.04Zesty Zapus配备的全都是最新的组件和应用程序。
Ubuntu 17.04Zesty Zapus的默认桌面环境仍然是 Unity 7所以你钟爱的 Ubuntu 桌面环境此刻还没有消失。在未来的 Ubuntu 17.10 中Unity 依然可用Ubuntu 17.10 将在下个月开始开发。之后,从 Ubuntu 18.04 LTS 开始,[将默认使用 GNOME 桌面][1]。
![](http://i1-news.softpedia-static.com/images/news2/ubuntu-17-04-zesty-zapus-officially-released-available-to-download-now-514853-3.jpg)
*默认使用 Unity 7 桌面*
Ubuntu 17.04 有一些新的特性:
- 免驱动打印
- 交换文件
- 不再支持 32 位 PPC 架构
在 Ubuntu 17.04 最终发行版搭载的各种有趣技术中,最值得一提的是交换文件:对于新安装的系统,可以用它来替代交换分区。所以如果你是从之前的 Ubuntu 发行版升级过来的,这是唯一一项不适用于你的特性。
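全新安装之后,可以用下面的命令确认系统使用的是交换文件还是交换分区(在全新安装的系统上,通常会看到类似 `/swapfile` 的条目;命令仅为示意):

```
# 列出当前启用的交换设备或交换文件
swapon --show
# 查看内存与交换空间的总体使用情况
free -h
```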
此外,默认的 DNS 解析器切换为了 `systemd-resolved`。对 IPP Everywhere 和苹果 AirPrint 打印机的免驱动支持开箱即用。绝大多数来自 GNOME 家族的包都升级到了 GNOME 3.24,只有 Nautilus 仍然保持在 GNOME 3.20.4 版本。
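要确认 DNS 解析确实交给了 `systemd-resolved`,可以查看它的状态(示意命令):

```
# 显示 systemd-resolved 的全局状态和每个网络接口使用的 DNS 服务器
systemd-resolve --status
```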
gconf 工具不再默认安装,因为它现在已经被 gsettings 所代替。而安装的应用大多数都是最新的,比如 LibreOffice 5.3 办公套件Mozilla Firefox 52.0.1 Web 浏览器,以及 Mozilla Thunderbird 45.8.0 邮箱和新闻客户端。
![](http://i1-news.softpedia-static.com/images/news2/ubuntu-17-04-zesty-zapus-officially-released-available-to-download-now-514853-4.jpg)
*Nautilus 文件管理器*
从本次发行版本开始,不再支持 32 位 PowerPCPPC架构以后的发行版也不再会支持。但是 PPC64elPowerPC 64 位 Little Endian会持续支持。现在已经可以从我们网站上[下载 Ubuntu 17.04][2]的 64 位amd64和 32 位 ISOi386镜像。
其他的 Ubuntu 风味版本也在今天开始发行,包括 Ubuntu GNOME 17.04、Ubuntu MATE 17.04、Kubuntu 17.04、Xubuntu 17.04、Lubuntu 17.04、Ubuntu Kylin 17.04、Ubuntu Studio 17.04 以及 Ubuntu Budgie 17.04,这也是 Budgie 桌面作为官方的 Ubuntu 风味版本的首次亮相。
请注意Ubuntu 17.04Zesty Zapus是一个短期支持版本仅提供 9 个月的安全更新,即从今天到 2018 年 1 月中旬。
--------------------------------------------------------
via: http://news.softpedia.com/news/ubuntu-17-04-zesty-zapus-officially-released-available-to-download-now-514853.shtml
作者:[Marius Nestor][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/marius-nestor
[1]:http://news.softpedia.com/news/canonical-to-stop-developing-unity-8-ubuntu-18-04-lts-ships-with-gnome-desktop-514604.shtml
[2]:http://linux.softpedia.com/get/Linux-Distributions/Ubuntu-Wily-Werewolf-103744.shtml

View File

@ -1,233 +0,0 @@
translating by XLCYun
Reactive programming vs. Reactive systems
============================================================
>Landing on a set of simple reactive design principles in a sea of constant confusion and overloaded expectations.
![Micro Fireworks](https://d3tdunqjn7n0wj.cloudfront.net/360x240/micro_fireworks-db2d0a45f22f348719b393dd98ebefa2.jpg)
Download Konrad Malawski's free ebook "[Why Reactive? Foundational Principles for Enterprise Adoption][5]" to dive deeper into the technical aspects and benefits of Reactive.
Since co-authoring the "[Reactive Manifesto][23]" in 2013, we've seen the topic of reactive go from being a virtually unacknowledged technique for constructing applications—used by only fringe projects within a select few corporations—to become part of the overall platform strategy in numerous big players in the middleware field. This article aims to define and clarify the different aspects of reactive by looking at the differences between writing code in a _reactive programming_ style, and the design of _reactive systems_ as a cohesive whole.
### Reactive is a set of design principles
One recent indicator of success is that "reactive" has become an overloaded term and is now being associated with several different things to different people—in good company with words like "streaming," "lightweight," and "real-time."
Consider the following analogy: When looking at an athletic team (think: baseball, basketball, etc.) it's not uncommon to see it composed of exceptional individuals, yet when they come together something doesn't click and they lack the synergy to operate effectively as a team and lose to an "inferior" team.

From the perspective of this article, reactive is a set of design principles, a way of thinking about systems architecture and design in a distributed environment where implementation techniques, tooling, and design patterns are components of a larger whole—a system.
This analogy illustrates the difference between a set of reactive applications put together without thought—even though _individually_ they're great—and a reactive system. In a reactive system, it's the _interaction between the individual parts_ that makes all the difference, which is the ability to operate individually yet act in concert to achieve their intended result.
_A reactive system_ is an architectural style that allows multiple individual applications to coalesce as a single unit, reacting to its surroundings, while remaining aware of each other—this could manifest as being able to scale up/down, load balancing, and even taking some of these steps proactively.
It's possible to write a single application in a reactive style (i.e. using reactive programming); however, that's merely one piece of the puzzle. Though each of the above aspects may seem to qualify as "reactive," in and of themselves they do not make a _system_ reactive.
When people talk about "reactive" in the context of software development and design, they generally mean one of three things:
* Reactive systems (architecture and design)
* Reactive programming (declarative event-based)
* Functional reactive programming (FRP)
We'll examine what each of these practices and techniques mean, with emphasis on the first two. More specifically, we'll discuss when to use them, how they relate to each other, and what you can expect the benefits from each to be—particularly in the context of building systems for multicore, cloud, and mobile architectures.
Let's start by talking about functional reactive programming, and why we chose to exclude it from further discussions in this article.
### Functional reactive programming (FRP)
_Functional reactive programming_, commonly called _FRP_, is most frequently misunderstood. FRP was very [precisely defined][24] 20 years ago by Conal Elliott. The term has most recently been used incorrectly[1][8] to describe technologies like Elm, Bacon.js, and Reactive Extensions (RxJava, Rx.NET, RxJS) amongst others. Most libraries claiming to support FRP are almost exclusively talking about _reactive programming_ and it will therefore not be discussed further.
### Reactive programming
_Reactive programming_, not to be confused with _functional reactive programming_, is a subset of asynchronous programming and a paradigm where the availability of new information drives the logic forward rather than having control flow driven by a thread-of-execution.
It supports decomposing the problem into multiple discrete steps where each can be executed in an asynchronous and non-blocking fashion, and then be composed to produce a workflow—possibly unbounded in its inputs or outputs.
[Asynchronous][25] is defined by the Oxford Dictionary as “not existing or occurring at the same time,” which in this context means that the processing of a message or event is happening at some arbitrary time, possibly in the future. This is a very important technique in reactive programming since it allows for [non-blocking][26] execution—where threads of execution competing for a shared resource don't need to wait by blocking (preventing the thread of execution from performing other work until current work is done), and can as such perform other useful work while the resource is occupied. Amdahl's Law[2][9] tells us that contention is the biggest enemy of scalability, and therefore a reactive program should rarely, if ever, have to block.
Reactive programming is generally _event-driven_, in contrast to reactive systems, which are _message-driven_—the distinction between event-driven and message-driven is clarified later in this article.
The application program interfaces (APIs) for reactive programming libraries are generally either:
* Callback-based—where anonymous side-effecting callbacks are attached to event sources, and are being invoked when events pass through the dataflow chain.
* Declarative—through functional composition, usually using well-established combinators like _map_, _filter_, _fold_, etc.
Most libraries provide a mix of these two styles, often with the addition of stream-based operators like windowing, counts, triggers, etc.
It would be reasonable to claim that reactive programming is related to [dataflow programming][27], since the emphasis is on the _flow of data_ rather than the _flow of control_.
Examples of programming abstractions that support this programming technique are:
* [Futures/Promises][10]—containers of a single value, many-read/single-write semantics where asynchronous transformations of the value can be added even if it is not yet available.
* Streams—as in [reactive streams][11]: unbounded flows of data processing, enabling asynchronous, non-blocking, back-pressured transformation pipelines between a multitude of sources and destinations.
* [Dataflow variables][12]—single assignment variables (memory-cells) which can depend on input, procedures and other cells, so that they are automatically updated on change. A practical example is spreadsheets—where the change of the value in a cell ripples through all dependent functions, producing new values downstream.
Popular libraries supporting the reactive programming techniques on the JVM include, but are not limited to, Akka Streams, Ratpack, Reactor, RxJava, and Vert.x. These libraries implement the reactive streams specification, which is a standard for interoperability between reactive programming libraries on the JVM, and according to its own description is “...an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure.”
The primary benefits of reactive programming are: increased utilization of computing resources on multicore and multi-CPU hardware; and increased performance by reducing serialization points as per Amdahl's Law and, by extension, Günther's Universal Scalability Law[3][13].
A secondary benefit is one of developer productivity as traditional programming paradigms have all struggled to provide a straightforward and maintainable approach to dealing with asynchronous and non-blocking computation and I/O. Reactive programming solves most of the challenges here since it typically removes the need for explicit coordination between active components.
Where reactive programming shines is in the creation of components and composition of workflows. In order to take full advantage of asynchronous execution, the inclusion of [back-pressure][28] is crucial to avoid over-utilization, or rather unbounded consumption of resources.
Even though reactive programming is a very useful piece when constructing modern software, in order to reason about a system at a higher level one has to use another tool: _reactive architecture_—the process of designing reactive systems. Furthermore, it is important to remember that there are many programming paradigms and reactive programming is but one of them, so just as with any tool, it is not intended for any and all use-cases.
### Event-driven vs. message-driven
As mentioned previously, reactive programming—focusing on computation through ephemeral dataflow chains—tend to be _event-driven_, while reactive systems—focusing on resilience and elasticity through the communication, and coordination, of distributed systems—is [_message-driven_][29][4][14](also referred to as _messaging_).
The main difference between a message-driven system with long-lived addressable components, and an event-driven dataflow-driven model, is that messages are inherently directed, events are not. Messages have a clear (single) destination, while events are facts for others to observe. Furthermore, messaging is preferably asynchronous, with the sending and the reception decoupled from the sender and receiver respectively.
The glossary in the Reactive Manifesto [defines the conceptual difference as][30]:
> A message is an item of data that is sent to a specific destination. An event is a signal emitted by a component upon reaching a given state. In a message-driven system addressable recipients await the arrival of messages and react to them, otherwise lying dormant. In an event-driven system notification listeners are attached to the sources of events such that they are invoked when the event is emitted. This means that an event-driven system focuses on addressable event sources while a message-driven system concentrates on addressable recipients.
Messages are needed to communicate across the network and form the basis for communication in distributed systems, while events on the other hand are emitted locally. It is common to use messaging under the hood to bridge an event-driven system across the network by sending events inside messages. This allows maintaining the relative simplicity of the event-driven programming model in a distributed context and can work very well for specialized and well-scoped use cases (e.g., AWS Lambda, Distributed Stream Processing products like Spark Streaming, Flink, Kafka, and Akka Streams over Gearpump, and distributed Publish Subscribe products like Kafka and Kinesis).
However, it is a trade-off: what one gains in abstraction and simplicity of the programming model, one loses in terms of control. Messaging forces us to embrace the reality and constraints of distributed systems—things like partial failures, failure detection, dropped/duplicated/reordered messages, eventual consistency, managing multiple concurrent realities, etc.—and tackle them head on instead of hiding them behind a leaky abstraction—pretending that the network is not there—as has been done too many times in the past (e.g. EJB, [RPC][31], [CORBA][32], and [XA][33]).
These differences in semantics and applicability have profound implications in the application design, including things like _resilience_, _elasticity_, _mobility_, _location transparency,_ and _management_ of the complexity of distributed systems, which will be explained further in this article.
In a reactive system, especially one which uses reactive programming, both events and messages will be present—as one is a great tool for communication (messages), and another is a great way of representing facts (events).
### Reactive systems and architecture
_Reactive systems_—as defined by the Reactive Manifesto—is a set of architectural design principles for building modern systems that are well prepared to meet the increasing demands that applications face today.
The principles of reactive systems are most definitely not new, and can be traced back to the '70s and '80s and the seminal work by Jim Gray and Pat Helland on the [Tandem System][34] and Joe Armstrong and Robert Virding on [Erlang][35]. However, these people were ahead of their time and it is not until the last 5-10 years that the technology industry has been forced to rethink current best practices for enterprise system development and learn to apply the hard-won knowledge of the reactive principles to today's world of multicore, cloud computing, and the Internet of Things.
The foundation for a reactive system is _message-passing_, which creates a temporal boundary between components that allows them to be decoupled in _time_—this allows for concurrency—and _space_—which allows for distribution and mobility. This decoupling is a requirement for full [isolation][36] between components, and forms the basis for both _resilience_ and _elasticity_.
### From programs to systems
The world is becoming increasingly interconnected. We are no longer building _programs_—end-to-end logic to calculate something for a single operator—as much as we are building _systems_.
Systems are complex by definition—each consisting of a multitude of components, who in and of themselves also can be systems—which mean software is increasingly dependent on other software to function properly.
The systems we create today are to be operated on computers small and large, few and many, near each other or half a world away. And at the same time users' expectations have become harder and harder to meet as everyday human life is increasingly dependent on the availability of systems to function smoothly.
In order to deliver systems that users—and businesses—can depend on, they have to be _responsive_, for it does not matter if something provides the correct response if the response is not available when it is needed. In order to achieve this, we need to make sure that responsiveness can be maintained under failure (_resilience_) and under load (_elasticity_). To make that happen, we make these systems _message-driven_, and we call them _reactive systems_.
### The resilience of reactive systems
Resilience is about responsiveness _under failure_ and is an inherent functional property of the system, something that needs to be designed for, and not something that can be added in retroactively. Resilience is beyond fault-tolerance—it's not about graceful degradation—even though that is a very useful trait for systems—but about being able to fully recover from failure: to _self-heal_. This requires component isolation and containment of failures in order to avoid failures spreading to neighbouring components—resulting in, often catastrophic, cascading failure scenarios.
So the key to building resilient, self-healing systems is to allow failures to be: contained, reified as messages, sent to other components (that act as supervisors), and managed from a safe context outside the failed component. Here, being message-driven is the enabler: moving away from strongly coupled, brittle, deeply nested synchronous call chains that everyone learned to suffer through, or ignore. The idea is to decouple the management of failures from the call chain, freeing the client from the responsibility of handling the failures of the server.
### The elasticity of reactive systems
[Elasticity][37] is about _responsiveness under load_—meaning that the throughput of a system scales up or down (as well as in or out) automatically to meet varying demand as resources are proportionally added or removed. It is the essential element needed to take advantage of the promises of cloud computing: allowing systems to be resource efficient, cost-efficient, environment-friendly and pay-per-use.
Systems need to be adaptive—allow intervention-less auto-scaling, replication of state and behavior, load-balancing of communication, failover, and upgrades, without rewriting or even reconfiguring the system. The enabler for this is _location transparency_: the ability to scale the system in the same way, using the same programming abstractions, with the same semantics, _across all dimensions of scale_—from CPU cores to data centers.
As the Reactive Manifesto [puts it][38]:
> One key insight that simplifies this problem immensely is to realize that we are all doing distributed computing. This is true whether we are running our systems on a single node (with multiple independent CPUs communicating over the QPI link) or on a cluster of nodes (with independent machines communicating over the network). Embracing this fact means that there is no conceptual difference between scaling vertically on multicore or horizontally on the cluster. This decoupling in space [...], enabled through asynchronous message-passing, and decoupling of the runtime instances from their references is what we call Location Transparency.
So no matter where the recipient resides, we communicate with it in the same way. The only way that can be done semantically equivalent is via messaging.
### The productivity of reactive systems
As most systems are inherently complex by nature, one of the most important aspects is to make sure that a system architecture will impose a minimal reduction of productivity, in the development and maintenance of components, while at the same time reducing the operational _accidental complexity_ to a minimum.
This is important since during the lifecycle of a system—if not properly designed—it will become harder and harder to maintain, and require an ever-increasing amount of time and effort to understand, in order to localize and to rectify problems.
Reactive systems are the most _productive_ systems architecture that we know of (in the context of multicore, cloud and mobile architectures):
* Isolation of failures offers [bulkheads][15] between components, preventing failures from cascading, which limits the scope and severity of failures.
* Supervisor hierarchies offer multiple levels of defenses paired with self-healing capabilities, which remove a lot of transient failures from ever incurring any operational cost to investigate.
* Message-passing and location transparency allow for components to be taken offline and replaced or rerouted without affecting the end-user experience, reducing the cost of disruptions, their relative urgency, and also the resources required to diagnose and rectify.
* Replication reduces the risk of data loss, and lessens the impact of failure on the availability of retrieval and storage of information.
* Elasticity allows for conservation of resources as usage fluctuates, allowing for minimizing operational costs when load is low, and minimizing the risk of outages or urgent investment into scalability as load increases.
Thus, reactive systems allow for the creation of systems that cope well under failure, varying load and change over time—all while offering a low cost of ownership over time.
### How does reactive programming relate to reactive systems?
Reactive programming is a great technique for managing internal logic and dataflow transformation, locally within the components, as a way of optimizing code clarity, performance and resource efficiency. Reactive systems, being a set of architectural principles, puts the emphasis on distributed communication and gives us tools to tackle resilience and elasticity in distributed systems.
One common problem with only leveraging reactive programming is that its tight coupling between computation stages in an event-driven callback-based or declarative program makes _resilience_ harder to achieve as its transformation chains are often ephemeral and its stages—the callbacks or combinators—are anonymous, i.e. not addressable.
This means that they usually handle success or failure directly without signaling it to the outside world. This lack of addressability makes recovery of individual stages harder to achieve as it is typically unclear where exceptions should, or even could, be propagated. As a result, failures are tied to ephemeral client requests instead of to the overall health of the component—if one of the stages in the dataflow chain fails, then the whole chain needs to be restarted, and the client notified. This is in contrast to a message-driven reactive system which has the ability to self-heal, without necessitating notifying the client.
Another contrast to the reactive systems approach is that pure reactive programming allows decoupling in _time_, but not _space_ (unless leveraging message-passing to distribute the dataflow graph under the hood, across the network, as discussed previously). As mentioned, decoupling in time allows for _concurrency_, but it is decoupling in space that allows for _distribution_, and _mobility_—allowing for not only static but also dynamic topologies—which is essential for _elasticity_.
A lack of location transparency makes it hard to scale out a program purely based on reactive programming techniques adaptively in an elastic fashion and therefore requires layering additional tools, such as a message bus, data grid, or bespoke network protocols on top. This is where the message-driven programming of reactive systems shines, since it is a communication abstraction that maintains its programming model and semantics across all dimensions of scale, and therefore reduces system complexity and cognitive overhead.
A commonly cited problem of callback-based programming is that while writing such programs may be comparatively easy, it can have real consequences in the long run.
For example, systems based on anonymous callbacks provide very little insight when you need to reason about them, maintain them, or most importantly figure out what, where, and why production outages and misbehavior occur.
Libraries and platforms designed for reactive systems (such as the [Akka][39] project and the [Erlang][40] platform) have long learned this lesson and are relying on long-lived addressable components that are easier to reason about in the long run. When a failure occurs, the component is uniquely identifiable along with the message that caused the failure. With the concept of addressability at the core of the component model, monitoring solutions have a _meaningful_ way to present data that is gathered—leveraging the identities that are propagated.
The choice of a good programming paradigm, one that enforces things like addressability and failure management, has proven to be invaluable in production, as it is designed with the harshness of reality in mind, to _expect and embrace failure_ rather than the lost cause of trying to prevent it.
All in all, reactive programming is a very useful implementation technique, which can be used in a reactive architecture. Remember that it will only help manage one part of the story: dataflow management through asynchronous and nonblocking execution—usually only within a single node or service. Once there are multiple nodes, there is a need to start thinking hard about things like data consistency, cross-node communication, coordination, versioning, orchestration, failure management, separation of concerns and responsibilities etc.—i.e. system architecture.
Therefore, to maximize the value of reactive programming, use it as one of the tools to construct a reactive system. Building a reactive system requires more than abstracting away OS-specific resources and sprinkling asynchronous APIs and [circuit breakers][41] on top of an existing, legacy, software stack. It should be about embracing the fact that you are building a distributed system comprising multiple services—that all need to work together, providing a consistent and responsive experience, not just when things work as expected but also in the face of failure and under unpredictable load.
### Summary
Enterprises and middleware vendors alike are beginning to embrace reactive, with 2016 witnessing a huge growth in corporate interest in adopting reactive. In this article, we have described reactive systems as being the end goal—assuming the context of multicore, cloud and mobile architectures—for enterprises, with reactive programming serving as one of the important tools.
Reactive programming offers productivity for developers—through performance and resource efficiency—at the component level for internal logic and dataflow transformation. Reactive systems offer productivity for architects and DevOps practitioners—through resilience and elasticity—at the system level, for building _cloud native_ and other large-scale distributed systems. We recommend combining the techniques of reactive programming within the design principles of reactive systems.
1 According to Conal Elliott, the inventor of FRP, in [this presentation][16][↩][17]
2 [Amdahl's Law][18] shows that the theoretical speedup of a system is limited by the serial parts, which means that the system can experience diminishing returns as new resources are added. [↩][19]
3 Neil Günther's [Universal Scalability Law][20] is an essential tool in understanding the effects of contention and coordination in concurrent and distributed systems, and shows that the cost of coherency in a system can lead to negative results, as new resources are added to the system.[↩][21]
4 Messaging can be either synchronous (requiring the sender and receiver to be available at the same time) or asynchronous (allowing them to be decoupled in time). Discussing the semantic differences is out of scope for this article.[↩][22]
--------------------------------------------------------------------------------
via: https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems
作者:[Jonas Bonér][a] , [Viktor Klang][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.oreilly.com/people/e0b57-jonas-boner
[b]:https://www.oreilly.com/people/f96106d4-4ce6-41d9-9d2b-d24590598fcd
[1]:https://www.flickr.com/photos/pixel_addict/2301302732
[2]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[3]:https://www.oreilly.com/people/e0b57-jonas-boner
[4]:https://www.oreilly.com/people/f96106d4-4ce6-41d9-9d2b-d24590598fcd
[5]:http://www.oreilly.com/programming/free/why-reactive.csp?intcmp=il-webops-free-product-na_new_site_reactive_programming_vs_reactive_systems_text_cta
[6]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[7]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[8]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-1
[9]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-2
[10]:https://en.wikipedia.org/wiki/Futures_and_promises
[11]:http://reactive-streams.org/
[12]:https://en.wikipedia.org/wiki/Oz_(programming_language)#Dataflow_variables_and_declarative_concurrency
[13]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-3
[14]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-4
[15]:http://skife.org/architecture/fault-tolerance/2009/12/31/bulkheads.html
[16]:https://begriffs.com/posts/2015-07-22-essence-of-frp.html
[17]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-1
[18]:https://en.wikipedia.org/wiki/Amdahl%2527s_law
[19]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-2
[20]:http://www.perfdynamics.com/Manifesto/USLscalability.html
[21]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-3
[22]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-4
[23]:http://www.reactivemanifesto.org/
[24]:http://conal.net/papers/icfp97/
[25]:http://www.reactivemanifesto.org/glossary#Asynchronous
[26]:http://www.reactivemanifesto.org/glossary#Non-Blocking
[27]:https://en.wikipedia.org/wiki/Dataflow_programming
[28]:http://www.reactivemanifesto.org/glossary#Back-Pressure
[29]:http://www.reactivemanifesto.org/glossary#Message-Driven
[30]:http://www.reactivemanifesto.org/glossary#Message-Driven
[31]:https://christophermeiklejohn.com/pl/2016/04/12/rpc.html
[32]:https://queue.acm.org/detail.cfm?id=1142044
[33]:https://cs.brown.edu/courses/cs227/archives/2012/papers/weaker/cidr07p15.pdf
[34]:http://www.hpl.hp.com/techreports/tandem/TR-86.2.pdf
[35]:http://erlang.org/download/armstrong_thesis_2003.pdf
[36]:http://www.reactivemanifesto.org/glossary#Isolation
[37]:http://www.reactivemanifesto.org/glossary#Elasticity
[38]:http://www.reactivemanifesto.org/glossary#Location-Transparency
[39]:http://akka.io/
[40]:https://www.erlang.org/
[41]:http://martinfowler.com/bliki/CircuitBreaker.html

View File

@ -1,78 +0,0 @@
translating by [kenxx](https://github.com/kenxx)
Hire a DDoS service to take down your enemies
========================
>With the rampant availability of IoT devices, cybercriminals offer denial of service attacks to take advantage of password problems.
![](http://images.techhive.com/images/article/2016/12/7606416730_e659cea89c_o-100698667-large.jpg)
With the onrush of connected internet of things (IoT) devices, distributed denial-of-service attacks are becoming a dangerous trend. Similar to what happened to [DNS service provider Dyn last fall][3], anyone and everyone is in the crosshairs. The idea of using unprotected IoT devices as a way to bombard networks is gaining momentum.
The advent of DDoS-for-hire services means that even the least tech-savvy individual can exact revenge on some website. Step on up to the counter and purchase a stresser that can systemically take down a company.
According to [Neustar][4], almost three quarters of all global brands, organizations and companies have been victims of a DDoS attack. And more than 3,700 [DDoS attacks occur each day][5].
Chase Cunningham, director of cyber operations at A10 Networks, said to find IoT-enabled devices, all you have to do is go on an underground site and ask around for the Mirai scanner code. Once you have that you can scan for anything talking to the internet that can be used for that type of attack.  
“Or you can go to a site like Shodan and craft a couple of simple queries to look for device specific requests. Once you get that information you just go to your DDoS for hire tool and change the configuration to point at the right target and use the right type of traffic emulator and bingo, nuke whatever you like,” he said.
“Basically everything is for sale," he added. "You can buy a 'stresser', which is just a simple botnet type offering that will allow anyone who knows how to click the start button access to a functional DDoS botnet.”
>Once you get that information you just go to your DDoS for hire tool and change the configuration to point at the right target and use the right type of traffic emulator and bingo, nuke whatever you like.
>Chase Cunningham, A10 director of cyber operations
Cybersecurity vendor Imperva says for just a few dozen dollars, users can quickly get an attack up and running. The company writes on its website that these kits contain the bot payload and the CnC (command and control) files. Using these, aspiring bot masters (a.k.a. herders) can start distributing malware, infecting devices through the use of spam email, vulnerability scanners, brute force attacks and more.
Most [stressers and booters][6] have embraced a commonplace SaaS (software as a service) business model, based on subscriptions. As the Incapsula [Q2 2015 DDoS report][7] has shown, the average one hour/month DDoS package will cost $38 (with $19.99 at the lower end of the scale).
![ddos hire](http://images.techhive.com/images/article/2017/03/ddos-hire-100713247-large.jpg)
“Stresser and booter services are just a byproduct of a new reality, where services that can bring down businesses and organizations are allowed to operate in a dubious grey area,” Imperva wrote.
While cost varies, [attacks can run businesses anywhere from $14,000 to $2.35 million per incident][8]. And once a business is attacked, there's an [82 percent chance they'll be attacked again][9].
DDoS of Things (DoT) uses IoT devices to build botnets that create large DDoS attacks. The DoT attacks have leveraged hundreds of thousands of IoT devices to attack anything from large service providers to enterprises.
“Most of the reputable DDoS sellers have changeable configurations for their tool sets so you can easily set the type of attack you want to take place. I haven't seen many yet that specifically include the option to purchase an IoT-specific traffic emulator but I'm sure it's coming. If it were me running the service I would definitely have that as an option,” Cunningham said.
According to an IDG News Service story, building a DDoS-for-service can also be easy. Often the hackers will rent six to 12 servers, and use them to push out internet traffic to whatever target. In late October, HackForums.net [shut down][10] its "Server Stress Testing" section, amid concerns that hackers were peddling DDoS-for-hire services through the site for as little as $10 a month.
Also in December, law enforcement agencies in the U.S. and Europe [arrested][11] 34 suspects involved in DDoS-for-hire services.
If it is so easy to do, why don't these attacks happen more often?
Cunningham said that these attacks do happen all the time, in fact they happen every second of the day. “You just don't hear about it because a lot of these are more nuisance attacks than big-time, bring-down-the-house DDoS-type events,” he said.
Also a lot of the attack platforms being sold only take systems down for an hour or a bit longer. Usually an hour-long attack on a site will cost anywhere from $15 to $50. It depends, though; sometimes for better attack platforms it can cost hundreds of dollars an hour, he said.
The solution to cutting down on these attacks involves users resetting factory preset passwords on anything connected to the internet. Change the default password settings and disable things that you really don't need.
--------------------------------------------------------------------------------
via: http://www.csoonline.com/article/3180246/data-protection/hire-a-ddos-service-to-take-down-your-enemies.html
作者:[Ryan Francis][a]
译者:[kenxx](https://github.com/kenxx)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.csoonline.com/author/Ryan-Francis/
[1]:http://csoonline.com/article/3103122/security/how-can-you-detect-a-fake-ransom-letter.html#tk.cso-infsb
[2]:https://www.incapsula.com/ddos/ddos-attacks/denial-of-service.html
[3]:http://csoonline.com/article/3135986/security/ddos-attack-against-overwhelmed-despite-mitigation-efforts.html
[4]:https://ns-cdn.neustar.biz/creative_services/biz/neustar/www/resources/whitepapers/it-security/ddos/2016-apr-ddos-report.pdf
[5]:https://www.a10networks.com/resources/ddos-trends-report
[6]:https://www.incapsula.com/ddos/booters-stressers-ddosers.html
[7]:https://www.incapsula.com/blog/ddos-global-threat-landscape-report-q2-2015.html
[8]:http://www.datacenterknowledge.com/archives/2016/05/13/number-of-costly-dos-related-data-center-outages-rising/
[9]:http://www.networkworld.com/article/3064677/security/hit-by-ddos-you-will-likely-be-struck-again.html
[10]:http://www.pcworld.com/article/3136730/hacking/hacking-forum-cuts-section-allegedly-linked-to-ddos-attacks.html
[11]:http://www.pcworld.com/article/3149543/security/dozens-arrested-in-international-ddos-for-hire-crackdown.html

View File

@ -1,53 +0,0 @@
Why do developers who could work anywhere flock to the world's most expensive cities?
============================================================
![](https://tctechcrunch2011.files.wordpress.com/2017/04/img_20170401_1835042.jpg?w=977)
Politicians and economists [lament][10] that certain alpha regions — SF, LA, NYC, Boston, Toronto, London, Paris — attract all the best jobs while becoming repellently expensive, reducing economic mobility and contributing to further bifurcation between haves and have-nots. But why don't the best jobs move elsewhere?
Of course, many of them can't. The average financier in NYC or London (until Brexit annihilates London's banking industry, of course…) would be laughed out of the office, and not invited back, if they told their boss they wanted to henceforth work from Chiang Mai.
But this isn't true of (much of) the software field. The average web/app developer might have such a request declined; but they would not be laughed at, or fired. The demand for good developers greatly outstrips supply, and in this era of Skype and Slack, there's nothing about software development that requires meatspace interactions.
(This is even more true of writers, of course; I did in fact post this piece from Pohnpei. But writers don't have anything like the leverage of software developers.)
Some people will tell you that remote teams are inherently less effective and productive than localized ones, or that “serendipitous collisions” are so important that every employee must be forced to the same physical location every day so that these collisions can be manufactured. These people are wrong, as long as the team in question is small — on the order of handfuls, dozens or scores, rather than hundreds or thousands — and flexible.
I should know: at [HappyFunCorp][11], we work extensively with remote teams, and actively recruit remote developers, and it works out fantastically well. A day in which I interact and collaborate with developers in Stockholm, São Paulo, Shanghai, Brooklyn and New Delhi, from my own home base in San Francisco, is not at all unusual.
At this point, whether it's a good idea is almost irrelevant, though. Supply and demand is such that any sufficiently skilled developer could become a so-called digital nomad if they really wanted to. But many who could, do not. I recently spent some time in Reykjavik at a house Airbnb-ed for the month by an ever-shifting crew of temporary remote workers, keeping East Coast time to keep up with their jobs, while spending mornings and weekends exploring Iceland — but almost all of us then returned to live in the Bay Area.
Economically, of course, this is insane. Moving to and working from Southeast Asia would save us thousands of dollars a month in rent alone. So why do people who could live in Costa Rica on a San Francisco salary, or in Berlin while charging NYC rates, choose not to do so? Why are allegedly hardheaded engineers so financially irrational?
Of course there are social and cultural reasons. Chiang Mai is very nice, but doesn't have the Met, or steampunk masquerade parties or 50 foodie restaurants within a 15-minute walk. Berlin is lovely, but doesn't offer kite surfing, or Sierra hiking or California weather. Neither promises an effectively limitless population of people with whom you share values and a first language.
And yet I think there's much more to it than this. I believe there's a more fundamental economic divide opening than the one between haves and have-nots. I think we are witnessing a growing rift between the world's Extremistan cities, in which truly extraordinary things can be achieved, and its Mediocristan towns, in which you can work and make money and be happy but never achieve greatness. (Labels stolen from the great Nassim Taleb.)
The arts have long had Extremistan cities. Thats why aspiring writers move to New York City, and even directors and actors who found international success are still drawn to L.A. like moths to a klieg light. Now it is true of tech, too. Even if you dont even want to try to (help) build something extraordinary — and the startup myth is so powerful today that its a very rare engineer indeed who hasnt at least dreamed about it — the prospect of being  _where great things happen_  is intoxicatingly enticing.
But the interesting thing about this is that it could, in theory, change; because — as of quite recently — distributed, decentralized teams can, in fact, achieve extraordinary things. The cards are arguably stacked against them, because VCs tend to be quite myopic. But no law dictates that unicorns may only be born in California and a handful of other secondary territories; and it seems likely that, for better or worse, Extremistan is spreading. It would be pleasantly paradoxical if that expansion ultimately leads to  _lower_  rents in the Mission.
--------------------------------------------------------------------------------
via: https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
作者:[ Jon Evans ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://techcrunch.com/author/jon-evans/
[1]:https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/#comments
[2]:https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/#
[3]:http://twitter.com/share?via=techcrunch&url=http://tcrn.ch/2owXJ0C&text=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F&hashtags=
[4]:https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Ftechcrunch.com%2F2017%2F04%2F02%2Fwhy-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities%2F&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F
[5]:https://plus.google.com/share?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[6]:http://www.reddit.com/submit?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F
[7]:http://www.stumbleupon.com/badge/?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[8]:mailto:?subject=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities?&body=Article:%20https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[9]:https://share.flipboard.com/bookmarklet/popout?v=2&title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F&url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[10]:https://mobile.twitter.com/Noahpinion/status/846054187288866
[11]:http://happyfuncorp.com/
[12]:https://twitter.com/rezendi
[13]:https://techcrunch.com/author/jon-evans/
[14]:https://techcrunch.com/2017/04/01/discussing-the-limits-of-artificial-intelligence/

View File

@ -1,3 +1,5 @@
Firstadream translating
[How debuggers work: Part 2 - Breakpoints][26]
============================================================

View File

@ -1,505 +0,0 @@
svtter 翻译中
GitLab Workflow: An Overview
======
GitLab is a Git-based repository manager and a powerful complete application for software development.
With an _"user-and-newbie-friendly" interface_, GitLab allows you to work effectively, both from the command line and from the UI itself. It's not only useful for developers, but can also be integrated across your entire team to bring everyone into a single and unique platform.
The GitLab Workflow logic is intuitive and predictable, making the entire platform easy to use and easier to adopt. Once you do, you won't want anything else!
* * *
### In this post
* [GitLab Workflow][53]
* [Stages of Software Development][22]
* [GitLab Issue Tracker][52]
* [Confidential Issues][21]
* [Due dates][20]
* [Assignee][19]
* [Labels][18]
* [Issue Weight][17]
* [GitLab Issue Board][16]
* [Code Review with GitLab][51]
* [First Commit][15]
* [Merge Request][14]
* [WIP MR][13]
* [Review][12]
* [Build, Test, and Deploy][50]
* [Koding][11]
* [Use-Cases][10]
* [Feedback: Cycle Analytics][49]
* [Enhance][48]
* [Issue and MR Templates][9]
* [Milestones][8]
* [Pro Tips][47]
* [For both Issues and MRs][7]
* [Subscribe][3]
* [Add TO-DO][2]
* [Search for your Issues and MRs][1]
* [Moving Issues][6]
* [Code Snippets][5]
* [GitLab WorkFlow Use-Case Scenario][46]
* [Conclusions][45]
* * *
### GitLab Workflow
The **GitLab Workflow** is a logical sequence of possible actions to be taken during the entire lifecycle of the software development process, using GitLab as the platform that hosts your code.
The GitLab Workflow takes into account the [GitLab Flow][97], which consists of **Git-based** methods and tactics for version management, such as **branching strategy**, **Git best practices**, and so on.
With the GitLab Workflow, the [goal][96] is to help teams work cohesively and effectively from the first stage of implementing something new (ideation) to the last stage—deploying implementation to production. That's what we call "going faster from idea to production in 10 steps."
![FROM IDEA TO PRODUCTION IN 10 STEPS](https://about.gitlab.com/images/blogimages/idea-to-production-10-steps.png)
### Stages of Software Development
The natural course of the software development process passes through 10 major steps; GitLab has built solutions for all of them:
1. **IDEA:** Every new proposal starts with an idea, which usually comes up in a chat. For this stage, GitLab integrates with [Mattermost][44].
2. **ISSUE:** The most effective way to discuss an idea is creating an issue for it. Your team and your collaborators can help you to polish and improve it in the [issue tracker][43].
3. **PLAN:** Once the discussion comes to an agreement, it's time to code. But wait! First, we need to prioritize and organize our workflow. For this, we can use the [Issue Board][42].
4. **CODE:** Now that we have everything organized, we're ready to write our code.
5. **COMMIT:** Once we're happy with our draft, we can commit our code to a feature-branch with version control.
6. **TEST:** With [GitLab CI][41], we can run our scripts to build and test our application.
7. **REVIEW:** Once our script works and our tests and builds succeed, we are ready to get our [code reviewed][40] and approved.
8. **STAGING:** Now it's time to [deploy our code to a staging environment][39] to check if everything worked as we were expecting or if we still need adjustments.
9. **PRODUCTION:** When we have everything working as it should, it's time to [deploy to our production environment][38]!
10. **FEEDBACK:** Now it's time to look back and check what stage of our work needs improvement. We use [Cycle Analytics][37] for feedback on the time we spent on key stages of our process.
To walk through these stages smoothly, it's important to have powerful tools to support this workflow. In the following sections, you'll find an overview of the toolset available from GitLab.
### GitLab Issue Tracker
GitLab has a powerful issue tracker that allows you, your team, and your collaborators to share and discuss ideas, before and while implementing them.
![issue tracker - view list](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issue-tracker-list-view.png)
Issues are the first essential feature of the GitLab Workflow. [Always start a discussion with an issue][95]; it's the best way to track the evolution of a new idea.
It's most useful for:
* Discussing ideas
* Submitting feature proposals
* Asking questions
* Reporting bugs and malfunction
* Obtaining support
* Elaborating new code implementations
Each project hosted by GitLab has an issue tracker. To create a new issue, navigate to your project's **Issues** > **New issue**, give it a title that summarizes the subject to be treated, and describe it using [Markdown][94]. Check the [pro tips][93] below to enhance your issue description.
The GitLab Issue Tracker presents extra functionalities to make it easier to organize and prioritize your actions, described in the following sections.
![new issue - additional settings](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issue-features-view.png)
### Confidential Issues
Whenever you want to keep the discussion presented in an issue within your team only, you can make that [issue confidential][92]. Even if your project is public, that issue will stay hidden: the browser will respond with a 404 error whenever someone who is not a project member with at least [Reporter level][91] access tries to access that issue's URL.
### Due dates
Every issue enables you to attribute a [due date][90] to it. Some teams work on tight schedules, and it's important to have a way to set up a deadline for implementations and for solving problems. This can be facilitated by due dates.
When you have due dates for multi-task projects—for example, a new release, product launch, or for tracking tasks by quarter—you can use [milestones][89].
### Assignee
Whenever someone starts to work on an issue, it can be assigned to that person. You can change the assignee as much as you need. The idea is that the assignee is responsible for that issue until he/she reassigns it to someone else to take it from there.
It also helps with filtering issues per assignee.
### Labels
GitLab labels are also an important part of the GitLab flow. You can use them to categorize your issues, to localize them in your workflow, and to organize them by priority with [Priority Labels][88].
Labels will enable you to work with the [GitLab Issue Board][87], facilitating your plan stage and organizing your workflow.
**New!** You can also create [Group Labels][86], which give you the ability to use the same labels per group of projects.
### Issue Weight
You can attribute an [Issue Weight][85] to make it clear how difficult the implementation of that idea is. Less difficult issues would receive weights of 01-03; more difficult ones, 07-09; and the ones in the middle, 04-06. Still, you can come to an agreement with your team to standardize the weights according to your needs.
### GitLab Issue Board
The [GitLab Issue Board][84] is a tool ideal for planning and organizing your issues according to your project's workflow.
It consists of a board with lists corresponding to their respective labels. Each list contains its corresponding labeled issues, displayed as cards.
The cards can be moved between lists, which will cause the label to be updated according to the list you moved the card into.
![GitLab Issue Board](https://about.gitlab.com/images/blogimages/designing-issue-boards/issue-board.gif)
**New!** You can also create issues right from the Board, by clicking the button at the top of a list. When you do so, that issue will be automatically created with the label corresponding to that list.
**New!** We've [recently introduced][83] **Multiple Issue Boards** per project ([GitLab Enterprise Edition][82] only); it is the best way to organize your issues for different workflows.
![Multiple Issue Boards](https://about.gitlab.com/images/8_13/m_ib.gif)
### Code Review with GitLab
After discussing a new proposal or implementation in the issue tracker, it's time to work on the code. You write your code locally and, once you're done with your first iteration, you commit your code and push to your GitLab repository. Your Git-based management strategy can be improved with the [GitLab Flow][81].
### First Commit
In your first commit message, you can add the number of the issue related to that commit message. By doing so, you create a link between the two stages of the development workflow: the issue itself and the first commit related to that issue.
To do so, if the issue and the code you're committing are both in the same project, you simply add `#xxx` to the commit message, where `xxx` is the issue number. If they are not in the same project, you can add the full URL to the issue (`https://gitlab.com/<username>/<projectname>/issues/<xxx>`).
```
git commit -m "this is my commit message. Ref #xxx"
```
or
```
git commit -m "this is my commit message. Related to https://gitlab.com/<username>/<projectname>/issues/<xxx>"
```
Of course, you can replace `gitlab.com` with the URL of your own GitLab instance.
**Note:** Linking your first commit to your issue is going to be relevant for tracking your process far ahead with [GitLab Cycle Analytics][80]. It will measure the time taken for planning the implementation of that issue, which is the time between creating an issue and making the first commit.
### Merge Request
Once you push your changes to a feature-branch, GitLab will identify this change and will prompt you to create a Merge Request (MR).
Every MR will have a title (something that summarizes the implementation) and a description supported by [Markdown][79]. In the description, you can briefly describe what the MR does, mention any related issues and MRs (creating a link between them), and you can also add the [issue closing pattern][78], which will close the related issue(s) once the MR is **merged**.
For example:
```
## Add new page

This MR creates a `readme.md` to this project, with an overview of this app.

Closes #xxx and https://gitlab.com/<username>/<projectname>/issues/<xxx>

Preview:

![preview the new page](#image-url)

cc/ @Mary @Jane @John
```
When you create an MR with a description like the one above, it will:
* Close both issues `#xxx` and `https://gitlab.com/<username>/<projectname>/issues/<xxx>` when merged
* Display an image
* Notify the users `@Mary`, `@Jane`, and `@John` by e-mail
You can assign the MR to yourself until you finish your work, then assign it to someone else to conduct a review. It can be reassigned as many times as necessary, to cover all the reviews you need.
It can also be labeled and added to a [milestone][77] to facilitate organization and prioritization.
When you add or edit a file and commit to a new branch from the UI instead of from the command line, it's also easy to create a new merge request. Just mark the checkbox "start a new merge request with these changes" and GitLab will automatically create a new MR once you commit your changes.
![commit to a feature branch and add a new MR from the UI](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/start-new-mr-edit-from-ui.png)
**Note:** It's important to add the [issue closing pattern][76] to your MR in order to be able to track the process with [GitLab Cycle Analytics][75]. It will track the "code" stage, which measures the time between pushing a first commit and creating a merge request related to that commit.
**New!** We're currently developing [Review Apps][74], a new feature that gives you the ability to deploy your app to a dynamic environment, from which you can preview the changes based on the branch name, per merge request. See a [working example][73] here.
### WIP MR
A WIP MR, which stands for **Work in Progress Merge Request**, is a technique we use at GitLab to prevent an MR from getting merged before it's ready. Just add `WIP:` to the beginning of the title of an MR, and it will not be merged unless you remove it from there.
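For instance (the title below is made up for illustration), a work-in-progress MR might be titled:
```
WIP: Add an "About" page to the website
```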
When your changes are ready to get merged, remove the `WIP:` pattern, either by editing the MR title and deleting it manually, or by using the shortcut available for you just below the MR description.
![WIP MR click to remove WIP from the title](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-wip-mr.png)
**New!** The `WIP` pattern can also be [quickly added to the merge request][72] with the [slash command][71] `/wip`. Simply type it and submit the comment or the MR description.
### Review
Once you've created a merge request, it's time to get feedback from your team or collaborators. Using the diffs available on the UI, you can add inline comments, reply to them and resolve them.
You can also grab the link for each line of code by clicking on the line number.
The commit history is available from the UI, from which you can track the changes between the different versions of that file. You can view them inline or side-by-side.
![code review in MRs at GitLab](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-code-review.png)
**New!** If you run into merge conflicts, you can quickly [solve them right for the UI][70], or even edit the file to fix them as you need:
![mr conflict resolution](https://about.gitlab.com/images/8_13/inlinemergeconflictresolution.gif)
### Build, Test, and Deploy
[GitLab CI][69] is a powerful built-in tool for [Continuous Integration, Continuous Deployment, and Continuous Delivery][68], which can be used to run scripts as you wish. The possibilities are endless: think of it as your own command line running the jobs for you.
It's all set up by a YAML file called `.gitlab-ci.yml`, placed in your project's repository. You can enjoy the CI templates by simply adding a new file through the web interface and typing the file name as `.gitlab-ci.yml`, which triggers a dropdown menu with dozens of possible templates for different applications.
![GitLab CI templates - dropdown menu](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-ci-template.png)
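As a minimal sketch of what such a file can look like (the image, job names, and scripts below are illustrative assumptions, not taken from the article), a two-stage pipeline might be defined like this:
```
# .gitlab-ci.yml (hypothetical example)
image: node:6              # Docker image the jobs run in (an assumption)

stages:
  - test
  - deploy

run_tests:
  stage: test
  script:                  # shell commands executed by the runner
    - npm install
    - npm test

deploy_staging:
  stage: deploy
  script:
    - ./deploy.sh staging  # hypothetical deploy script in the repo
  environment: staging     # ties the job to a GitLab environment
```
Jobs in the same stage run in parallel, and a stage starts only after the previous stage has succeeded.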
### Koding
Use GitLab's [Koding integration][67] to run your entire development environment in the cloud. This means that you can check out a project or just a merge request in a full-fledged IDE with the press of a button.
### Use-Cases
Examples of GitLab CI use-cases:
* Use it to [build][36] any [Static Site Generator][35], and deploy your website with [GitLab Pages][34]
* Use it to [deploy your website][33] to `staging` and `production` [environments][32]
* Use it to [build an iOS application][31]
* Use it to [build and deploy your Docker Image][30] with [GitLab Container Registry][29]
We have prepared a dozen [GitLab CI Example Projects][66] to offer you guidance. Check them out!
### Feedback: Cycle Analytics
When you follow the GitLab Workflow, you'll be able to gather feedback with [GitLab Cycle Analytics][65] on the time your team took to go from idea to production, for [each key stage of the process][64]:
* **Issue:** the time from creating an issue to assigning the issue to a milestone or adding the issue to a list on your Issue Board
* **Plan:** the time from giving an issue a milestone or adding it to an Issue Board list, to pushing the first commit
* **Code:** the time from the first commit to creating the merge request
* **Test:** the time CI takes to run the entire pipeline for the related merge request
* **Review:** the time from creating the merge request to merging it
* **Staging:** the time from merging until deploy to production
* **Production** (Total): The time it takes between creating an issue and deploying the code to [production][28]
### Enhance
### Issue and MR Templates
[Issue and MR templates][63] allow you to define context-specific templates for issue and merge request description fields for your project.
You write them in [Markdown][62] and add them to the default branch of your repository. They can be accessed by the dropdown menu whenever an issue or MR is created.
They save time when describing issues and MRs and standardize the information necessary to follow along, making sure everything you need to proceed is there.
As you can create multiple templates, they can serve different purposes. For example, you can have one for feature proposals and a different one for bug reports. Check the ones in the [GitLab CE project][61] for real examples.
![issues and MR templates - dropdown menu screenshot](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issues-choose-template.png)
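As a rough sketch (the file name and headings below are hypothetical), a bug-report template is just a Markdown file committed to the default branch, under the `.gitlab/issue_templates/` directory described in the linked documentation:
```
<!-- .gitlab/issue_templates/Bug.md (hypothetical) -->
## Summary

## Steps to reproduce

1.
2.

## Expected behavior

## Actual behavior
```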
### Milestones
[Milestones][60] are the best tool you have in GitLab to track the work of your team toward a common target, with a specific due date.
The goal can be different for each situation, but the panorama is the same: you have a collection of issues and merge requests being worked on to achieve that particular objective.
This goal can be basically anything that groups the team's work and effort toward a deadline: for example, publishing a new release, launching a new product, or getting things done by year quarters.
For instance, you can create a milestone for Q1 2017 and assign to it every issue and MR that should be finished by the end of March 2017. You can also create a milestone for an event that your company is organizing. Then you can access that milestone and view an entire panorama of your team's progress.
![milestone dashboard](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-milestone.png)
### Pro Tips
### For both Issues and MRs
* In issues and MRs descriptions:
* Type `#` to trigger a dropdown list of existing issues
* Type `!` to trigger a dropdown list of existing MRs
* Type `/` to trigger [slash commands][4]
* Type `:` to trigger emojis (also supported for inline comments)
* Add images (jpg, png, gif) and videos to inline comments with the button **Attach a file**
* [Apply labels automatically][27] with [GitLab Webhooks][26]
* [Fenced blockquote][24]: use the syntax `>>>` to start and finish a blockquote
```
>>>
Quoted text

Another paragraph
>>>
```
* Create [task lists][23]:
```
- [ ] Task 1
- [ ] Task 2
- [ ] Task 3
```
#### Subscribe
Have you found an issue or an MR that you want to follow up on? Expand the navigation on your right, click [Subscribe][59], and you'll be updated whenever a new comment comes up. What if you want to subscribe to multiple issues and MRs at once? Use [bulk subscriptions][58]. 😃
#### Add TO-DO
Besides keeping an eye on an issue or MR, if you want to take a future action on it, or whenever you want it in your GitLab TO-DO list, expand the navigation tab at your right and [click on **Add todo**][57].
#### Search for your Issues and MRs
When you're looking for an issue or MR you opened long ago, in a project with dozens, hundreds, or even thousands of them, it can be hard to find. Expand the navigation on your left and click on **Issues** or **Merge Requests**, and you'll see the ones assigned to you. From there, or from any issue tracker, you can filter issues or MRs by author, assignee, milestone, label, and weight, and also search among open, merged, and closed ones, or all of them at once.
### Moving Issues
Did an issue end up in the wrong project? Don't worry. Click on **Edit**, and [move the issue][56] to the correct project.
### Code Snippets
Do you sometimes use exactly the same code snippet or template in different projects or files? Create a code snippet and keep it available whenever you want. Expand the navigation on your left and click **[Snippets][25]**. All of your snippets will be there. You can set them to public, internal (only for logged-in GitLab users), or private.
![Snippets - screenshot](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-code-snippet.png)
### GitLab WorkFlow Use-Case Scenario
To wrap up, let's put everything together. It's easy!
Let's suppose you work at a company focused on software development. You created a new issue for developing a new feature to be implemented in one of your applications.
### Labels Strategy
For this application, you have already created labels for "discussion", "backend", "frontend", "working on", "staging", "ready", "docs", "marketing", and "production." All of them already have their own lists in the Issue Board. Your issue currently has the label "discussion."
After the discussion in the issue tracker came to an agreement, your backend team started to work on that issue, so their lead moved the issue from the list "discussion" to the list "backend." The first developer to start writing the code assigned the issue to himself, and added the label "working on."
### Code & Commit
In his first commit message, he referenced the issue number. After some work, he pushed his commits to a feature-branch and created a new merge request, including the issue closing pattern in the MR description. His team reviewed his code and made sure all the tests and builds were passing.
### Using the Issue Board
Once the backend team finished their work, they removed the label "working on" and moved the issue from the list "backend" to "frontend" in the Issue Board. So, the frontend team knew that issue was ready for them.
### Deploying to Staging
When a frontend developer started working on that issue, he or she added back the label "working on" and reassigned the issue to him/herself. When ready, the implementation was deployed to a **staging** environment. The label "working on" was removed and the issue card was moved to the "staging" list in the Issue Board.
### Teamwork
Finally, when the implementation succeeded, your team moved it to the list "ready."
Then, the time came for your technical writing team to create the documentation for the new feature, and once someone got started, he/she added the label "docs." At the same time, your marketing team started to work on the campaign to launch and promote that feature, so someone added the label "marketing." When the tech writer finished the documentation, he/she removed the label "docs." Once the marketing team finished their work, they moved the issue from the list "marketing" to "production."
### Deploying to Production
At last, you, being the person responsible for new releases, merged the MR and deployed the new feature into the **production** environment, and the issue was **closed**.
### Feedback
With [Cycle Analytics][55], you studied the time taken to go from idea to production with your team, and opened another issue to discuss the improvement of the process.
### Conclusions
GitLab Workflow helps your team get from idea to production faster using a single platform:
* It's **effective**, because you get your desired results.
* It's **efficient**, because you achieve maximum productivity with minimum effort and expense.
* It's **productive**, because you are able to plan effectively and act efficiently.
* It's **easy**, because you don't need to set up different tools; you can accomplish everything you need with just one: GitLab.
* It's **fast**, because you don't need to jump across multiple platforms to get your job done.
A new GitLab version is released every single month (on the 22nd), making it an ever better integrated solution for software development and bringing teams to work together in one single, unified interface.
At GitLab, everyone can contribute! Thanks to our amazing community, we've gotten to where we are. And thanks to them, we keep moving forward to provide you with a better product.
Questions? Feedback? Please leave a comment or tweet at us [@GitLab][54]! 🙌
--------------------------------------------------------------------------------
via: https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/
作者:[Marcia Ramos][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twitter.com/XMDRamos
[1]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#search-for-your-issues-and-mrs
[2]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#add-to-do
[3]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#subscribe
[4]:https://docs.gitlab.com/ce/user/project/slash_commands.html
[5]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#code-snippets
[6]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#moving-issues
[7]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#for-both-issues-and-mrs
[8]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
[9]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#issue-and-mr-templates
[10]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#use-cases
[11]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#koding
[12]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#review
[13]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#wip-mr
[14]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#merge-request
[15]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#first-commit
[16]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
[17]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#issue-weight
[18]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#labels
[19]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#assignee
[20]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#due-dates
[21]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#confidential-issues
[22]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#stages-of-software-development
[23]:https://docs.gitlab.com/ee/user/markdown.html#task-lists
[24]:https://about.gitlab.com/2016/07/22/gitlab-8-10-released/#blockquote-fence-syntax
[25]:https://gitlab.com/dashboard/snippets
[26]:https://docs.gitlab.com/ce/web_hooks/web_hooks.html
[27]:https://about.gitlab.com/2016/08/19/applying-gitlab-labels-automatically/
[28]:https://docs.gitlab.com/ce/ci/yaml/README.html#environment
[29]:https://about.gitlab.com/2016/05/23/gitlab-container-registry/
[30]:https://about.gitlab.com/2016/08/11/building-an-elixir-release-into-docker-image-using-gitlab-ci-part-1/
[31]:https://about.gitlab.com/2016/03/10/setting-up-gitlab-ci-for-ios-projects/
[32]:https://docs.gitlab.com/ce/ci/yaml/README.html#environment
[33]:https://about.gitlab.com/2016/08/26/ci-deployment-and-environments/
[34]:https://pages.gitlab.io/
[35]:https://about.gitlab.com/2016/06/17/ssg-overview-gitlab-pages-part-3-examples-ci/
[36]:https://about.gitlab.com/2016/04/07/gitlab-pages-setup/
[37]:https://about.gitlab.com/solutions/cycle-analytics/
[38]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
[39]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
[40]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-code-review
[41]:https://about.gitlab.com/gitlab-ci/
[42]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
[43]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-tracker
[44]:https://about.gitlab.com/2015/08/18/gitlab-loves-mattermost/
[45]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#conclusions
[46]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-workflow-use-case-scenario
[47]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#pro-tips
[48]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#enhance
[49]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
[50]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#build-test-and-deploy
[51]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#code-review-with-gitlab
[52]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-tracker
[53]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-workflow
[54]:https://twitter.com/gitlab
[55]:https://about.gitlab.com/solutions/cycle-analytics/
[56]:https://about.gitlab.com/2016/03/22/gitlab-8-6-released/#move-issues-to-other-projects
[57]:https://about.gitlab.com/2016/06/22/gitlab-8-9-released/#manually-add-todos
[58]:https://about.gitlab.com/2016/07/22/gitlab-8-10-released/#bulk-subscribe-to-issues
[59]:https://about.gitlab.com/2016/03/22/gitlab-8-6-released/#subscribe-to-a-label
[60]:https://about.gitlab.com/2016/08/05/feature-highlight-set-dates-for-issues/#milestones
[61]:https://gitlab.com/gitlab-org/gitlab-ce/issues/new
[62]:https://docs.gitlab.com/ee/user/markdown.html
[63]:https://docs.gitlab.com/ce/user/project/description_templates.html
[64]:https://about.gitlab.com/2016/09/21/cycle-analytics-feature-highlight/
[65]:https://about.gitlab.com/solutions/cycle-analytics/
[66]:https://docs.gitlab.com/ee/ci/examples/README.html
[67]:https://about.gitlab.com/2016/08/22/gitlab-8-11-released/#koding-integration
[68]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
[69]:https://about.gitlab.com/gitlab-ci/
[70]:https://about.gitlab.com/2016/08/22/gitlab-8-11-released/#merge-conflict-resolution
[71]:https://docs.gitlab.com/ce/user/project/slash_commands.html
[72]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#wip-slash-command
[73]:https://gitlab.com/gitlab-examples/review-apps-nginx/
[74]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#ability-to-stop-review-apps
[75]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
[76]:https://docs.gitlab.com/ce/administration/issue_closing_pattern.html
[77]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
[78]:https://docs.gitlab.com/ce/administration/issue_closing_pattern.html
[79]:https://docs.gitlab.com/ee/user/markdown.html
[80]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
[81]:https://about.gitlab.com/2014/09/29/gitlab-flow/
[82]:https://about.gitlab.com/free-trial/
[83]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#multiple-issue-boards-ee
[84]:https://about.gitlab.com/solutions/issueboard
[85]:https://docs.gitlab.com/ee/workflow/issue_weight.html
[86]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#group-labels
[87]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
[88]:https://docs.gitlab.com/ee/user/project/labels.html#prioritize-labels
[89]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
[90]:https://about.gitlab.com/2016/08/05/feature-highlight-set-dates-for-issues/#due-dates-for-issues
[91]:https://docs.gitlab.com/ce/user/permissions.html
[92]:https://about.gitlab.com/2016/03/31/feature-highlihght-confidential-issues/
[93]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#pro-tips
[94]:https://docs.gitlab.com/ee/user/markdown.html
[95]:https://about.gitlab.com/2016/03/03/start-with-an-issue/
[96]:https://about.gitlab.com/2016/09/13/gitlab-master-plan/
[97]:https://about.gitlab.com/2014/09/29/gitlab-flow/

View File

@ -1,266 +0,0 @@
willcoderwang 翻译中
How to Install Jenkins Automation Server with Apache on Ubuntu 16.04
============================================================
Jenkins is an automation server forked from the Hudson project. Jenkins is a server-based application running in a Java servlet container, and it has support for many SCM (Source Control Management) software systems, including Git, SVN, and Mercurial. Jenkins provides hundreds of plugins to automate your project. Jenkins was created by Kohsuke Kawaguchi, first released in 2011 under the MIT License, and is free software.
In this tutorial, I will show you how to install the latest Jenkins version on Ubuntu Server 16.04. We will run Jenkins on our own domain name, and we will install and configure Jenkins to run behind the Apache web server, with Apache acting as a reverse proxy for Jenkins.
### Prerequisite
* Ubuntu Server 16.04 - 64bit
* Root Privileges
### Step 1 - Install Java OpenJDK 7
Jenkins is based on Java, so we need to install Java OpenJDK version 7 on the server. In this step, we will install Java 7 from a PPA repository, which we will add first.
By default, Ubuntu 16.04 ships without the python-software-properties package for managing PPA repositories, so we must install this package first. Install python-software-properties with the apt command:
```
apt-get install python-software-properties
```
Next, add the Java PPA repository to the server:
```
add-apt-repository ppa:openjdk-r/ppa
```
Just press ENTER when asked to confirm.
Update the Ubuntu repository and install the Java OpenJDK with the apt command:
```
apt-get update
apt-get install openjdk-7-jdk
```
Verify the installation by typing the command below:
```
java -version
```
You will see the Java version that is installed on the server.
[
![Install Java 7 openJDK on Ubuntu 16.04](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/1.png)
][9]
### Step 2 - Install Jenkins
Jenkins provides an Ubuntu repository for the installation packages and we will install Jenkins from this repository.
Add the Jenkins key and repository to the system with the commands below:
```
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
echo 'deb https://pkg.jenkins.io/debian-stable binary/' | tee -a /etc/apt/sources.list
```
Update the repository and install Jenkins:
```
apt-get update
apt-get install jenkins
```
When the installation is done, start Jenkins with this systemctl command:
```
systemctl start jenkins
```
Verify that Jenkins is running by checking the default port used by Jenkins (port 8080). I will check it with the netstat command below:
```
netstat -plntu
```
Jenkins is installed and running on port 8080.
[
![Jenkins has been installed on port 8080](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/2.png)
][10]
### Step 3 - Install and Configure Apache as Reverse Proxy for Jenkins
In this tutorial, we will run Jenkins behind an Apache web server and configure Apache as the reverse proxy for Jenkins. First I will install Apache and enable some required modules, and then I'll create the virtual host file with the domain name my.jenkins.id for Jenkins. Please use your own domain name here and replace it in all config files wherever it appears.
Install the apache2 web server from the Ubuntu repository:
```
apt-get install apache2
```
When the installation is done, enable the proxy and proxy_http modules so we can configure Apache as a frontend server/reverse proxy for Jenkins:
```
a2enmod proxy
a2enmod proxy_http
```
Next, create a new virtual host file in the sites-available directory:
```
cd /etc/apache2/sites-available/
vim jenkins.conf
```
Paste the virtual host configuration below.
```
<VirtualHost *:80>
    ServerName        my.jenkins.id
    ProxyRequests     Off
    ProxyPreserveHost On
    AllowEncodedSlashes NoDecode
    <Proxy http://localhost:8080/*>
      Order deny,allow
      Allow from all
    </Proxy>
    ProxyPass         /  http://localhost:8080/ nocanon
    ProxyPassReverse  /  http://localhost:8080/
    ProxyPassReverse  /  http://my.jenkins.id/
</VirtualHost>
```
Save the file. Then activate the Jenkins virtual host with the a2ensite command:
```
a2ensite jenkins
```
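Optionally, before restarting, you can check the Apache configuration for syntax errors (apache2ctl ships with the apache2 package; this extra check is not part of the original tutorial):
```
apache2ctl -t
```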
Restart Apache and Jenkins:
```
systemctl restart apache2
systemctl restart jenkins
```
Check that ports 80 and 8080 are in use by Apache and Jenkins:
```
netstat -plntu
```
[
![Check that Apache and Jenkins are running](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/3.png)
][11]
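As another optional check (not part of the original tutorial, and assuming DNS or an /etc/hosts entry already points my.jenkins.id at this server), you can request the site through Apache with curl; the response headers should come from the proxied Jenkins instance:
```
curl -I http://my.jenkins.id
```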
### Step 4 - Configure Jenkins
Jenkins is running on the domain name 'my.jenkins.id'. Open your web browser and type in the URL. You will get a screen that asks you to enter the initial admin password. A password has already been generated by Jenkins, so we just need to display it and copy the result into the password box.
Show the initial Jenkins admin password with the cat command:
```
cat /var/lib/jenkins/secrets/initialAdminPassword
```
The output is a password string like this:
```
a1789d1561bf413c938122c599cf65c9
```
[
![Get the Jenkins admin password](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/4.png)
][12]
Paste the result into the password box and click '**Continue**'.
[
![Jenkins Installation and Configuration](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/5.png)
][13]
Now we should install some plugins in Jenkins to get a good foundation for later use. Choose '**Install Suggested Plugins**' and click on it.
[
![Install jenkins Plugins](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/6.png)
][14]
The Jenkins plugin installation is in progress.
[
![Jenkins plugins get installed](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/7.png)
][15]
After the plugin installation, we have to create a new admin user. Type in your admin username, password, email, etc., and click on '**Save and Finish**'.
[
![Create Jenkins admin account](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/8.png)
][16]
Click the start button to begin using Jenkins. You will be redirected to the Jenkins admin dashboard.
[
![Get redirected to the admin dashboard](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/9.png)
][17]
Jenkins installation and configuration finished successfully.
[
![The Jenkins admin dashboard](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/10.png)
][18]
### Step 5 - Jenkins Security
From the Jenkins admin dashboard, we need to configure the standard security settings for Jenkins. Click on '**Manage Jenkins**' and then '**Configure Global Security**'.
[
![Jenkins Global Security Settings](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/11.png)
][19]
Jenkins provides several authorization methods in the '**Access Control**' section. I select '**Matrix-based Security**' to be able to control all user privileges. Add the admin user in the '**User/Group**' box and click **add**. Give the admin user all privileges by **checking all options**, and give anonymous users just read permissions. Now click '**Save**'.
[
![Configure Jenkins Permissions](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/12.png)
][20]
You will be redirected to the dashboard, and if a login prompt appears, just type in your admin username and password.
### Step 6 - Testing a simple automation job
In this section, I just want to test a simple job on the Jenkins server. I will create a simple job to test Jenkins and to check the server load with the top command.
From the Jenkins admin dashboard, click '**Create New Job**'.
[
![Create a new Job in Jenkins](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/13.png)
][21]
Enter the job name (I'll use 'Checking System' here), select '**Freestyle Project**', and click '**OK**'.
[
![Configure new Jenkins Job](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/14.png)
][22]
Go to the '**Build**' tab. Under '**Add build step**', select the option '**Execute shell**'.
Type the command below into the box (`-b` runs top in batch mode, `-n 1` limits it to a single iteration, and `head -n 5` keeps only the first five lines of the output):
```
top -b -n 1 | head -n 5
```
Click '**Save**'.
[
![Start a Jenkins Job](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/15.png)
][23]
Now you are on the page of the 'Checking System' job. Click '**Build Now**' to execute the job.
After the job has been executed, you will see the '**Build History**'; click on the first build to see the results.
Here are the results from the job executed by Jenkins.
[
![Build and run a Jenkins Job](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/16.png)
][24]
Jenkins installation with Apache web server on Ubuntu 16.04 completed successfully.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/
作者:[Muhammad Arul ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/howtoforgecom
[1]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#prerequisite
[2]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-install-java-openjdk-
[3]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-install-jenkins
[4]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-install-and-configure-apache-as-reverse-proxy-for-jenkins
[5]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-configure-jenkins
[6]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-jenkins-security
[7]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-testing-a-simple-automation-job
[8]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#reference
[9]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/1.png
[10]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/2.png
[11]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/3.png
[12]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/4.png
[13]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/5.png
[14]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/6.png
[15]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/7.png
[16]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/8.png
[17]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/9.png
[18]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/10.png
[19]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/11.png
[20]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/12.png
[21]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/13.png
[22]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/14.png
[23]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/15.png
[24]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/16.png

View File

@ -1,320 +0,0 @@
Top open source creative tools in 2016
============================================================
### Whether you want to manipulate images, edit audio, or animate stories, there's a free and open source tool to do the trick.
![Top 34 open source creative tools in 2016 ](https://opensource.com/sites/default/files/styles/image-full-size/public/u23316/art-yearbook-paint-draw-create-creative.png?itok=KgEF_IN_ "Top 34 open source creative tools in 2016 ")
>Image by: opensource.com
A few years ago, I gave a lightning talk at Red Hat Summit that took attendees on a tour of the [2012 open source creative tools][12] landscape. Open source tools have evolved a lot in the past few years, so let's take a tour of the 2016 landscape.
### Core applications
These six applications are the juggernauts of open source design tools. They are well-established, mature projects with full feature sets, stable releases, and active development communities. All six applications are cross-platform; each is available on Linux, OS X, and Windows, although in some cases the Linux versions are the most quickly updated. These applications are so widely known that I've also included highlights of the latest features available, which you may have missed if you don't closely follow their development.
If you'd like to follow new developments more closely, and perhaps even help out by testing the latest development versions of the first four of these applications—GIMP, Inkscape, Scribus, and MyPaint—you can install them easily on Linux using [Flatpak][13]. Nightly builds of each of these applications are available via Flatpak by [following the instructions][14] for _Nightly Graphics Apps_. One thing to note: if you'd like to install brushes or other extensions for the Flatpak version of an app, the directory to drop the extensions in is under the directory corresponding to that application inside **~/.var/app**.
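For example, for the Flatpak build of GIMP (whose Flatpak application ID is org.gimp.GIMP), you could inspect that per-app directory like this; the exact subdirectory to drop brushes into depends on the application's own layout:
```
# Each Flatpak app keeps its per-user data under ~/.var/app/<application-id>/
ls ~/.var/app/org.gimp.GIMP/
```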
### GIMP
[GIMP][15] [celebrated its 20th anniversary in 2015][16], making it one of the oldest open source creative applications out there. GIMP is a solid program for photo manipulation, basic graphic creation, and illustration. You can start using GIMP by trying simple tasks, such as cropping and resizing images, and over time work into a deep set of functionality. Available for Linux, Mac OS X, and Windows, GIMP is cross-platform and can open and export to a wide breadth of file formats, including those popularized by its proprietary analogue, Photoshop.
The GIMP team is currently working toward the 2.10 release; [2.8.18][17] is the latest stable version. More exciting is the unstable version, [2.9.4][18], with a revamped user interface featuring space-saving symbolic icons and dark themes, improved color management, more GEGL-based filters with split-preview, MyPaint brush support (shown in screenshot below), symmetrical drawing, and command-line batch processing. For more details, check out [the full release notes][19].
![GIMP screenshot](https://opensource.com/sites/default/files/gimp_520.png "GIMP screenshot")
### Inkscape
[Inkscape][20] is a richly featured vector-based graphic design workhorse. Use it to create simple graphics, diagrams, layouts, or icon art.
The latest stable version is [0.91][21]; similarly to GIMP, more excitement can be found in a pre-release version, 0.92pre3, which was released in November 2016. The premiere feature of the latest pre-release is the [gradient mesh feature][22] (demonstrated in the screenshot below); new features introduced in the 0.91 release include [power stroke][23] for fully configurable calligraphic strokes (the "open" in "opensource.com" in the screenshot below uses power stroke), the on-canvas measure tool, and [the new symbols dialog][24] (shown on the right side of the screenshot below). (Many symbol libraries for Inkscape are available on GitHub; [Xaviju's inkscape-open-symbols set][25] is fantastic.) A new feature available in development/nightly builds is the _Objects_ dialog, which catalogs all objects in a document and provides tools to manage them.
![Inkscape screenshot](https://opensource.com/sites/default/files/inkscape_520.png "Inkscape screenshot")
### Scribus
[Scribus][26] is a powerful desktop publishing and page layout tool. Scribus enables you to create sophisticated and beautiful items, including newsletters, books, and magazines, as well as other print pieces. Scribus has color management tools that can handle and output CMYK and spot colors for files that are ready for reliable reproduction at print shops.
[1.4.6][27] is the latest stable release of Scribus; the [1.5.x][28] series of releases is the most exciting, as it serves as a preview of the upcoming 1.6.0 release. Version 1.5.3 features a Krita (*.kra) file import tool; other developments in the 1.5.x series include the _Table_ tool, text frame welding, footnotes, additional PDF formats for export, improved dictionary support, dockable palettes, a symbols tool, and expanded file format support.
![Scribus screenshot](https://opensource.com/sites/default/files/scribus_520.png "Scribus screenshot")
### MyPaint
[MyPaint][29] is a drawing tablet-centric expressive drawing and illustration tool. It's lightweight and has a minimal interface with a rich set of keyboard shortcuts so that you can focus on your drawing without having to drop your pen.
[MyPaint 1.2.0][30] is the latest stable release and includes new features such as the [intuitive inking tool][31] for tracing over pencil drawings, a new flood fill tool, layer groups, a brush and color history panel, a user interface revamp including a dark theme and small symbolic icons, and editable vector layers. To try out the latest developments in MyPaint, I recommend installing the nightly Flatpak build, although there have not been significant feature additions since the 1.2.0 release.
![MyPaint screenshot](https://opensource.com/sites/default/files/mypaint_520.png "MyPaint screenshot")
### Blender
Initially released in January 1995, [Blender][32], like GIMP, has been around for more than 20 years. Blender is a powerful open source 3D creation suite that includes tools for modeling, sculpting, rendering, realistic materials, rigging, animation, compositing, video editing, game creation, and simulation.
The latest stable Blender release is [2.78a][33]. The 2.78 release was a large one and includes features such as the revamped _Grease Pencil_ 2D animation tool; VR rendering support for spherical stereo images; and a new drawing tool for freehand curves.
![Inkscape screenshot](https://opensource.com/sites/default/files/blender_520.png "Inkscape screenshot")
To try out the latest exciting Blender developments, you have many options, including:
* The Blender Foundation makes [unstable daily builds][2] available on the official Blender website.
* If you're looking for builds that include particular in-development features, [graphicall.org][3] is a community-moderated site that provides special versions of Blender (and occasionally other open source creative apps) to enable artists to try out the latest available code and experiments.
* Mathieu Bridon has made development versions of Blender available via Flatpak. See his blog post for details: [Blender nightly in Flatpak][4].
### Krita
[Krita][34] is a digital drawing application with a deep set of capabilities. The application is geared toward illustrators, concept artists, and comic artists and is fully loaded with extras, such as brushes, palettes, patterns, and templates.
The latest stable version is [Krita 3.0.1][35], released in September 2016. Features new to the 3.0.x series include 2D frame-by-frame animation; improved layer management and functionality; expanded and more usable shortcuts; improvements to grids, guides, and snapping; and soft-proofing.
![Krita screenshot](https://opensource.com/sites/default/files/krita_520.png "Krita screenshot")
### Video tools
There are many, many options for open source video editing tools. Of the members of the pack, [Flowblade][36] is a newcomer and Kdenlive is the established, newbie-friendly, and most fully featured contender. The main criterion that may help you eliminate some of this array of options is supported platforms: some of these only support Linux. They all have active upstreams, and the latest stable versions of each have been released recently, within weeks of each other.
### Kdenlive
[Kdenlive][37], which was initially released back in 2002, is a powerful non-linear video editor available for Linux and OS X (although the OS X version is out-of-date). Kdenlive has a user-friendly, drag-and-drop-based user interface that accommodates beginners, with the depth experts need.
Learn how to use Kdenlive with the [multi-part Kdenlive tutorial series][38] by Seth Kenlon.
* Latest Stable: 16.08.2 (October 2016)
![](https://opensource.com/sites/default/files/images/life-uploads/kdenlive_6_leader.png)
### Flowblade
Released in 2012, [Flowblade][39], a Linux-only video editor, is a relative newcomer.
* Latest Stable: 1.8 (September 2016)
### Pitivi
[Pitivi][40] is a user-friendly free and open source video editor. Pitivi is written in [Python][41] (the "Pi" in Pitivi), uses the [GStreamer][42] multimedia framework, and has an active community.
* Latest stable: 0.97 (August 2016)
* Get the [latest version with Flatpak][5]
### Shotcut
[Shotcut][43] is a free, open source, cross-platform video editor that started [back in 2004][44] and was later rewritten by current lead developer [Dan Dennedy][45].
* Latest stable: 16.11 (November 2016)
* 4K resolution support
* Ships as a tarballed binary
### OpenShot Video Editor
Started in 2008, [OpenShot Video Editor][46] is a free, open source, easy-to-use, cross-platform video editor.
* Latest stable: [2.1][6] (August 2016)
### Utilities
### SwatchBooker
[SwatchBooker][47] is a handy utility, and although it hasn't been updated in a few years, it's still useful. SwatchBooker helps users legally obtain color swatches from various manufacturers in a format that you can use with other free and open source tools, including Scribus.
### GNOME Color Manager
[GNOME Color Manager][48] is the built-in color management system for the GNOME desktop environment, the default desktop for a bunch of Linux distros. The tool allows you to create profiles for your display devices using a colorimeter, and also allows you to load/manage ICC color profiles for those devices.
### GNOME Wacom Control
[The GNOME Wacom controls][49] allow you to configure your Wacom tablet in the GNOME desktop environment; you can modify various options for interacting with the tablet, including customizing the sensitivity of the tablet and which monitors the tablet maps to.
### Xournal
[Xournal][50] is a humble but solid app that allows you to hand write/doodle notes using a tablet. Xournal is a useful tool for signing or otherwise annotating PDF documents.
### PDF Mod
[PDF Mod][51] is a handy utility for editing PDFs. PDF Mod lets users remove pages, add pages, bind multiple single PDFs together into a single PDF, reorder the pages, and rotate the pages.
### SparkleShare
[SparkleShare][52] is a git-backed file-sharing tool artists use to collaborate and share assets. Hook it up to a GitLab repo and you've got a nice open source infrastructure for asset management. The SparkleShare front end nullifies the inscrutability of git by providing a Dropbox-like interface on top of it.
### Photography
### Darktable
[Darktable][53] is an application that allows you to develop digital RAW files and has a rich set of tools for the workflow management and non-destructive editing of photographic images. Darktable includes support for an extensive range of popular cameras and lenses.
![Changing color balance screenshot](https://opensource.com/sites/default/files/dt_colour.jpg "Changing color balance screenshot")
### Entangle
[Entangle][54] allows you to tether your digital camera to your computer and enables you to control your camera completely from the computer.
### Hugin
[Hugin][55] is a tool that allows you to stitch together photos in order to create panoramic photos.
### 2D animation
### Synfig Studio
[Synfig Studio][56] is a vector-based 2D animation suite that also supports bitmap artwork and is tablet-friendly.
### Blender Grease Pencil
I covered Blender above, but particularly notable from a recent release is [a refactored grease pencil feature][57], which adds the ability to create 2D animations.
### Krita
[Krita][58] also now provides 2D animation functionality.
### Music and audio editing
### Audacity
[Audacity][59] is a popular, user-friendly tool for editing audio files and recording sound.
### Ardour
[Ardour][60] is a digital audio workstation with an interface centered around a record, edit, and mix workflow. It's a little more complicated than Audacity to use but allows for automation and is generally more sophisticated. (Available for Linux, Mac OS X, and Windows.)
### Hydrogen
[Hydrogen][61] is an open source drum machine with an intuitive interface. It provides the ability to create and arrange various patterns using synthesized instruments.
### Mixxx
[Mixxx][62] is a four-deck DJ suite that allows you to DJ and mix songs together with powerful controls, including beat looping, time stretching, and pitch bending, as well as live broadcast your mixes and interface with DJ hardware controllers.
### Rosegarden
[Rosegarden][63] is a music composition suite that includes tools for score writing and music composition/editing and provides an audio and MIDI sequencer.
### MuseScore
[MuseScore][64] is a music score creation, notation, and editing tool with a community of musical score contributors.
### Additional creative tools
### MakeHuman
[MakeHuman][65] is a 3D graphical tool for creating photorealistic models of humanoid forms.
<iframe allowfullscreen="" frameborder="0" height="293" src="https://www.youtube.com/embed/WiEDGbRnXdE?rel=0" width="520"></iframe>
### Natron
[Natron][66] is a node-based compositing tool used for video post-production and motion graphics and special effects design.
### FontForge
[FontForge][67] is a typeface creation and editing tool. It allows you to edit letter forms in a typeface as well as generate fonts from those typeface designs.
### Valentina
[Valentina][68] is an application for drafting sewing patterns.
### Calligra Flow
[Calligra Flow][69] is a Visio-like diagramming tool. (Available for Linux, Mac OS X, and Windows.)
### Resources
There are a lot of toys and goodies to try out there. Need some inspiration to start your exploration? These websites and conferences are chock-full of tutorials and beautiful creative works to inspire you and get you going:
1. [pixls.us][7]: Blog hosted by photographer Pat David that focuses on free and open source tools and workflow for professional photographers.
2. [David Revoy's Blog][8]: The blog of David Revoy, an immensely talented free and open source illustrator, concept artist, and advocate, with credits on several of the Blender Foundation films.
3. [The Open Source Creative Podcast][9]: Hosted by Opensource.com community moderator and columnist [Jason van Gumster][10], who is a Blender and GIMP expert, and author of _[Blender for Dummies][1]_, this podcast is directed squarely at those of us who enjoy open source creative tools and the culture around them.
4. [Libre Graphics Meeting][11]: Annual conference for free and open source creative software developers and the creatives who use the software. This is the place to find out about what cool features are coming down the pipeline in your favorite open source creative tools, and to enjoy what their users are creating with them.
--------------------------------------------------------------------------------
作者简介:
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-343-8e0fb148b105b450634e30acd8f5b22b.png?itok=oxzTm70z)
Máirín Duffy - Máirín is a principal interaction designer at Red Hat. She is passionate about software freedom and free & open source tools, particularly in the creative domain: her favorite application is Inkscape (http://inkscape.org).
--------------------------------------------------------------------------------
via: https://opensource.com/article/16/12/yearbook-top-open-source-creative-tools-2016
作者:[Máirín Duffy][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mairin
[1]:http://www.blenderbasics.com/
[2]:https://builder.blender.org/download/
[3]:http://graphicall.org/
[4]:https://mathieu.daitauha.fr/blog/2016/09/23/blender-nightly-in-flatpak/
[5]:https://pitivi.wordpress.com/2016/07/18/get-pitivi-directly-from-us-with-flatpak/
[6]:http://www.openshotvideo.com/2016/08/openshot-21-released.html
[7]:http://pixls.us/
[8]:http://davidrevoy.com/
[9]:http://monsterjavaguns.com/podcast/
[10]:https://opensource.com/users/jason-van-gumster
[11]:http://libregraphicsmeeting.org/2016/
[12]:https://opensource.com/life/12/9/tour-through-open-source-creative-tools
[13]:https://opensource.com/business/16/8/flatpak
[14]:http://flatpak.org/apps.html
[15]:https://opensource.com/tags/gimp
[16]:https://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/
[17]:https://www.gimp.org/news/2016/07/14/gimp-2-8-18-released/
[18]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
[19]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
[20]:https://opensource.com/tags/inkscape
[21]:http://wiki.inkscape.org/wiki/index.php/Release_notes/0.91
[22]:http://wiki.inkscape.org/wiki/index.php/Mesh_Gradients
[23]:https://www.youtube.com/watch?v=IztyV-Dy4CE
[24]:https://inkscape.org/cs/~doctormo/%E2%98%85symbols-dialog
[25]:https://github.com/Xaviju/inkscape-open-symbols
[26]:https://opensource.com/tags/scribus
[27]:https://www.scribus.net/scribus-1-4-6-released/
[28]:https://www.scribus.net/scribus-1-5-2-released/
[29]:http://mypaint.org/
[30]:http://mypaint.org/blog/2016/01/15/mypaint-1.2.0-released/
[31]:https://github.com/mypaint/mypaint/wiki/v1.2-Inking-Tool
[32]:https://opensource.com/tags/blender
[33]:http://www.blender.org/features/2-78/
[34]:https://opensource.com/tags/krita
[35]:https://krita.org/en/item/krita-3-0-1-update-brings-numerous-fixes/
[36]:https://opensource.com/life/16/9/10-reasons-flowblade-linux-video-editor
[37]:https://opensource.com/tags/kdenlive
[38]:https://opensource.com/life/11/11/introduction-kdenlive
[39]:http://jliljebl.github.io/flowblade/
[40]:http://pitivi.org/
[41]:http://wiki.pitivi.org/wiki/Why_Python%3F
[42]:https://gstreamer.freedesktop.org/
[43]:http://shotcut.org/
[44]:http://permalink.gmane.org/gmane.comp.lib.fltk.general/2397
[45]:http://www.dennedy.org/
[46]:http://openshot.org/
[47]:http://www.selapa.net/swatchbooker/
[48]:https://help.gnome.org/users/gnome-help/stable/color.html.en
[49]:https://help.gnome.org/users/gnome-help/stable/wacom.html.en
[50]:http://xournal.sourceforge.net/
[51]:https://wiki.gnome.org/Apps/PdfMod
[52]:https://www.sparkleshare.org/
[53]:https://opensource.com/life/16/4/how-use-darktable-digital-darkroom
[54]:https://entangle-photo.org/
[55]:http://hugin.sourceforge.net/
[56]:https://opensource.com/article/16/12/synfig-studio-animation-software-tutorial
[57]:https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.78/GPencil
[58]:https://opensource.com/tags/krita
[59]:https://opensource.com/tags/audacity
[60]:https://ardour.org/
[61]:http://www.hydrogen-music.org/
[62]:http://mixxx.org/
[63]:http://www.rosegardenmusic.com/
[64]:https://opensource.com/life/16/03/musescore-tutorial
[65]:http://makehuman.org/
[66]:https://natron.fr/
[67]:http://fontforge.github.io/en-US/
[68]:http://valentina-project.org/
[69]:https://www.calligra.org/flow/

Translating by fristadream
Will Android do for the IoT what it did for mobile?
============================================================
![](https://cdn-images-1.medium.com/max/1000/1*GF6e6Vd-22PViWT8EDpLNA.jpeg)
Android Things gives the IoT Wings
### My first 24 hours with Android Things
Just when I was in the middle of an Android-based IoT commercial project running on a Raspberry Pi 3, something awesome happened. Google released the first preview of [Android Things][1], their SDK targeted specifically at (initially) 3 SBCs (Single Board Computers): the Pi 3, the Intel Edison and the NXP Pico. To say I was struggling is a bit of an understatement: without even an established port of Android to the Pi, we were at the mercy of the various quirks and omissions of the well-meaning but problematic homebrew distro brigade. One of these problems was a deal breaker too: no touchscreen support, not even for the official one sold by [Element14][2]. I had an idea Android was heading for the Pi already, and earlier a mention in a [commit to the AOSP project from Google][3] got everyone excited for a while. So when, on 12th Dec 2016, without much fanfare I might add, Google announced "Android Things" plus a downloadable SDK, I dived in with both hands, a map and a flashlight, and hung a "do not disturb" sign on my door…
### Questions?
I had many questions regarding Google's Android on the Pi, having done extensive work with Android previously and a few Pi projects, including being involved right now in the one mentioned. I'll try to address these as I proceed, but the first and biggest was answered right away: there is full Android Studio support and the Pi becomes just another regular ADB-addressable device on your list. Yay! The power, convenience and sheer ease of use we get within Android Studio is available at last to real IoT hardware, so we get all the layout previews, debug system, source checkers, automated tests etc. I can't stress this enough. Up until now, most of my work onboard the Pi had been in Python, having SSH'd in using some editor running on the Pi (MC if you really want to know). This worked, and no doubt hardcore Pi/Python heads could point out far better ways of working, but it really felt like I'd timewarped back to the 80s in terms of software development. My projects involved writing Android software on handsets which controlled the Pi, so this rubbed salt in the wound: I was using Android Studio for "real" Android work, and SSH for the rest. That's all over now.
All samples are for the 3 SBCs, of which the Pi 3 is just one. The Build.DEVICE constant lets you determine this at runtime, so you see lots of code like:
```
public static String getGPIOForButton() {
switch (Build.DEVICE) {
case DEVICE_EDISON_ARDUINO:
return "IO12";
case DEVICE_EDISON:
return "GP44";
case DEVICE_RPI3:
return "BCM21";
case DEVICE_NXP:
return "GPIO4_IO20";
default:
throw new IllegalStateException("Unknown Build.DEVICE " + Build.DEVICE);
}
}
```
Of keen interest is the GPIO handling. Since I'm only familiar with the Pi, I can only assume the other SBCs work the same way, but this is the set of pins which can be defined as inputs/outputs and is the main interface to the physical outside world. The Pi's Linux-based OS distros have full and easy support via read and write methods in Python, but for Android you'd have to use the NDK to write C++ drivers, and talk to these via JNI in Java. Not that difficult, but something else to maintain in your build chain. The Pi also designates 2 pins for I2C, the clock and the data, so extra work would be needed handling those. I2C is the really cool bus-addressable system which turns many separate pins of data into one by serialising it. So here's the kicker: all that's done directly in Android Things for you. You just _read()_ and _write()_ to/from whatever GPIO pin you need, and I2C is as easy as this:
```
public class HomeActivity extends Activity {
// I2C Device Name
private static final String I2C_DEVICE_NAME = ...;
// I2C Slave Address
private static final int I2C_ADDRESS = ...;
private I2cDevice mDevice;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
// Attempt to access the I2C device
try {
PeripheralManagerService manager = new PeripheralManagerService();
mDevice = manager.openI2cDevice(I2C_DEVICE_NAME, I2C_ADDRESS);
} catch (IOException e) {
Log.w(TAG, "Unable to access I2C device", e);
}
}
@Override
protected void onDestroy() {
super.onDestroy();
if (mDevice != null) {
try {
mDevice.close();
mDevice = null;
} catch (IOException e) {
Log.w(TAG, "Unable to close I2C device", e);
}
}
}
}
```
### What version of Android is Android Things based on?
This looks to be Android 7.0, which is fantastic because we get all the material design UI, the optimisations, the security hardening and so on from all the previous versions of Android. It also raises an interesting question: how are future platform updates rolled out, as opposed to your app, which you have to manage separately? Remember, these devices may not be connected to the internet. We are no longer in the comfortable space of cellular/WiFi connections being assumed to at least be available, even if sometimes unreliable.
The other worry was this being an offshoot version of Android in name only, where to accommodate the lowest common denominator, something so simple it could power an Arduino has been released: more of a marketing exercise than a rich OS. That's quickly put to bed by looking at the [samples][4], actually: some even use SVG graphics as resources, a very recent Android innovation, rather than the traditional bitmap-based graphics, which of course it also handles with ease.
Inevitably, regular Android will throw up issues when compared with Android Things. For example, there is the permissions conundrum. Mitigated somewhat by the fact Android Things is designed to power fixed hardware devices, so the user wouldnt normally install apps after its been built, its nevertheless a problem asking them for permissions on a device which might not have a UI! The solution is to grant all the permissions an app might need at install time. Normally, these devices are one app only, and that app is the one which runs when it powers up.
![](https://cdn-images-1.medium.com/max/800/1*pi7HyLT-BVwHQ_Rw3TDSWQ.png)
### What happened to Brillo?
Brillo was the codename given to Googles previous IoT OS, which sounds a lot like what Android Things used to be called. In fact you see many references to Brillo still, especially in the source code folder names in the GitHub Android Things examples. However, it has ceased to be. All hail the new king!
### UI Guidelines?
Google issues extensive guidelines regarding Android smartphone and tablet apps, such as how far apart on screen buttons should be and so on. Sure, it's best to follow these where practical, but we're not in Kansas any more. There is nothing there by default: it's up to the app author to manage _everything_. This includes the top status bar, the bottom navigation bar, absolutely everything. Years of Google telling Android app authors never to render an onscreen BACK button because the platform will supply one is thrown out, because for Android Things there [might not even be a UI at all!][5]
### How much support for the Google services we're used to from smartphones can we expect?
Quite a bit actually, but not everything. The first preview has no bluetooth support. No NFC either, both of which are heavily contributing to the IoT revolution. The SBCs support them, so I can't see them not being available for too long. Since there's no notification bar, there can't be any notifications. No Maps. There's no default soft keyboard, you have to install one yourself. And since there is no Play Store, you have to get down and dirty with ADB to do this, and many other operations.
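For reference, here's a minimal sketch of what that ADB workflow looks like; the device IP address and APK name are hypothetical, and it assumes the device exposes ADB over TCP on the standard port:

```
# connect to the (hypothetical) Android Things device over the network
$ adb connect 192.168.0.42:5555

# install or update your app from the development machine
$ adb install -r app-debug.apk
```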
When developing for Android Things I tried to make the same APK I was targeting for the Pi run on a regular handset. This threw up an error preventing it from being installed on anything other than an Android Things device: library “_com.google.android.things_” not present. Kinda makes sense, because only Android Things devices would need this, but it seemed limiting because not only would no smartphones or tablets have it present, but neither would any emulators. It looked like you could only run and test your Android Things apps on physical Android Things devices … until Google helpfully replied to my query on this in the [G+ Googles IoT Developers Community][6] group with a workaround. Bullet dodged there, then.
### How can we expect the Android Things ecosystem to evolve now?
I'd expect to see a lot more porting of traditional Linux server based apps which didn't really make sense on an Android restricted to smartphones and tablets. For example, a web server suddenly becomes very useful. Some exist already, but nothing like the heavyweights such as Apache or Nginx. IoT devices might not have a local UI, but administering them via a browser is certainly viable, so something to present a web panel this way is needed. Similarly comms apps from the big names: all it needs is a mic and speaker and in theory it's good to go for any video calling app, like Duo, Skype, FB etc. How far this evolution goes is anyone's guess. Will there be a Play Store? Will they show ads? Can we be sure they won't spy on us, or let hackers control them? The IoT from a consumer point of view always was net-connected devices with touchscreens, and everyone's already used to that way of working from their smartphones.
I'd also expect to see rapid progress regarding hardware: in particular, many more SBCs at even lower costs. Look at the amazing $5 Raspberry Pi Zero, which unfortunately almost certainly can't run Android Things due to its limited CPU and RAM. How long until one like this can? It's pretty obvious, now the bar has been set, any self-respecting SBC manufacturer will be aiming for Android Things compatibility, and probably the economies of scale will apply to the peripherals too, such as a $2 3" touchscreen. Microwave ovens just won't sell unless you can watch YouTube on them, and your dishwasher just put in a bid for more powder on eBay since it noticed you're running low…
However, I don't think we can get too carried away here. Knowing a little about Android architecture helps when thinking of it as an all-encompassing IoT OS. It still uses Java, which has been hammered to death in the past with all its garbage-collection-induced timing issues. That's the least of it though. A genuine realtime OS relies on predictable, accurate and rock-solid timing or it can't be described as "mission critical". Think about medical applications, safety monitors, industrial controllers etc. With Android, your Activity/Service can, in theory, be killed at any time if the host OS thinks it needs to. Not so bad on a phone: the user restarts the app, kills other apps, or reboots the handset. A heart monitor is a different kettle altogether though. If that foreground Activity/Service is watching a GPIO pin, and the signal isn't dealt with exactly when it is supposed to, we have failed. Some pretty fundamental changes would have to be made to Android to support this, and so far there's no indication it's even planned.
### Those 24 hours
So, back to my project. I thought I'd take the work I'd done already and just port as much as I could over, waiting for the inevitable roadblock where I had to head over to the G+ group, cap in hand, for help. Which, apart from the query about running on non-AT devices, never happened. And it ran great! This project uses some oddball stuff too: custom fonts, precise timers, all of which appeared perfectly laid out in Android Studio. So it's top marks from me, Google: at last I can start giving actual prototypes out rather than just videos and screenshots.
### The big picture
The IoT OS landscape today looks very fragmented. There is clearly no market leader and despite all the hype and buzz we hear, its still incredibly early days. Can Google do to the IoT with Android Things what it did to mobile, where its dominance is now very close to 90%? I believe so, and if that is to happen, this launch of Android Things is exactly how they would go about it.
Remember all the open vs closed software wars, mainly between Apple who never licence theirs, and Google who cant give it away for free to enough people? That policy now comes back once more, because the idea of Apple launching a free IoT OS is as far fetched as them giving away their next iPhone for nothing.
The IoT OS game is wide open for someone to grab, and the opposition wont even be putting their kit on this time…
Head over to the [Developer Preview][7] site to get your copy of the Android Things SDK now.
--------------------------------------------------------------------------------
via: https://medium.com/@carl.whalley/will-android-do-for-iot-what-it-did-for-mobile-c9ac79d06c#.hxva5aqi2
作者:[Carl Whalley][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.com/@carl.whalley
[1]:https://developer.android.com/things/index.html
[2]:https://www.element14.com/community/docs/DOC-78156/l/raspberry-pi-7-touchscreen-display
[3]:http://www.androidpolice.com/2016/05/24/google-is-preparing-to-add-the-raspberry-pi-3-to-aosp-it-will-apparently-become-an-officially-supported-device/
[4]:https://github.com/androidthings/sample-simpleui/blob/master/app/src/main/res/drawable/pinout_board_vert.xml
[5]:https://developer.android.com/things/sdk/index.html
[6]:https://plus.google.com/+CarlWhalley/posts/4tF76pWEs1D
[7]:https://developer.android.com/things/preview/index.html

**translating by [erlinux](https://github.com/erlinux)**
inxi A Great Tool to Check Hardware Information on Linux
============================================================
One of the big challenges for a Linux administrator is finding all the hardware information about a system. There are many command-line utilities available on Linux for getting hardware information, but each of them misses some of it.
[inxi][1] is a nifty tool to check hardware information on Linux and offers a wide range of options for getting all the hardware information on a Linux system, more than I have found in any other Linux utility. It was forked from the ancient and mindbendingly perverse yet ingenious infobash, by locsmif.
inxi is a script that quickly shows system hardware, CPU, drivers, Xorg, desktop, kernel, GCC version(s), processes, RAM usage, and a wide variety of other useful information. It is also used as a technical support and debugging tool on forums.
#### Install inxi on Linux
inxi supports all Linux distributions and doesn't require the latest dependencies, so there is no need for manual installation. Simply install inxi from your distribution's official repository using the commands below.
```
[Install inxi on CentOS/RHEL]
$ sudo yum install inxi
[Install inxi on Fedora]
$ sudo dnf install inxi
[Install inxi on Debian/Linux Mint/Ubuntu]
$ sudo apt-get install inxi
[Install inxi on openSUSE]
$ sudo zypper in inxi
[Install inxi on Mageia]
$ sudo urpmi inxi
[Install inxi on Arch based system]
$ yaourt -S inxi
```
By default, inxi output comes with colors, which can be turned off by using `-c 0` (values from 0 to 32 select different color schemes) to get better visibility.
#### Print one-line output with inxi
Issue the inxi command without any options to print hardware information on one line: CPU, kernel, architecture, uptime, memory, HDD, process count, and the inxi version.
```
$ inxi -c 0
CPU~Dual core Intel Core i7-6700HQ (-MCP-) speed~2591 MHz (max) Kernel~4.8.0-32-generic x86_64 Up~50 min Mem~1609.9/1999.8MB HDD~42.9GB(17.6% used) Procs~197 Client~Shell inxi~2.3.1
```
#### Print basic system hardware
Issue the inxi command with `-b` to print basic system hardware information. It shows the System, Machine, CPU, Graphics, Network, Drives, and Info sections.
```
$ inxi -b
System: Host: daygeek Kernel: 4.8.0-32-generic x86_64 (64 bit) Desktop: Unity 7.5.0 Distro: Ubuntu 16.10
Machine: System: innotek (portable) product: VirtualBox v: 1.2
Mobo: Oracle model: VirtualBox v: 1.2 BIOS: innotek v: VirtualBox date: 12/01/2006
Battery BAT0: charge: 31.5 Wh 63.0% condition: 50.0/50.0 Wh (100%)
CPU: Dual core Intel Core i7-6700HQ (-MCP-) speed: 2591 MHz (max)
Graphics: Card: InnoTek Systemberatung VirtualBox Graphics Adapter
Display Server: X.Org 1.18.4 drivers: (unloaded: fbdev,vesa) Resolution: 1920x955@59.89hz
GLX Renderer: Gallium 0.4 on llvmpipe (LLVM 3.8, 256 bits) GLX Version: 3.0 Mesa 12.0.3
Network: Card: Intel 82540EM Gigabit Ethernet Controller driver: e1000
Drives: HDD Total Size: 42.9GB (17.6% used)
Info: Processes: 197 Uptime: 50 min Memory: 1586.2/1999.8MB Client: Shell (bash) inxi: 2.3.1
```
* System : Host Name, Kernel version, Architecture, Desktop & Distribution
* Machine : Motherboard & Bios information
* CPU : Processor Name and core
* Graphics : Graphics card info
* Network : Network card info
* Drives : HDD size and used percent
* Info : Total process count, Server Uptime, Memory total and used, inxi version
#### Show Audio/sound card information
Issue the inxi command with `-A` to show audio/sound card information.
```
$ inxi -A
Audio: Card Intel 82801AA AC'97 Audio Controller driver: snd_intel8x0 Sound: ALSA v: k4.8.0-32-generic
```
#### Show full CPU info
Issue the inxi command with `-C` to show full CPU information, including per-core clock speed and the CPU's maximum speed (if available).
```
$ inxi -C
CPU: Dual core Intel Core i7-6700HQ (-MCP-) cache: 6144 KB
clock speeds: max: 2591 MHz 1: 2591 MHz 2: 2591 MHz
```
#### Show optical drive information
Issue the inxi command with `-d` to show optical drive information, along with all storage devices.
```
$ inxi -d
Drives: HDD Total Size: 42.9GB (17.6% used) ID-1: /dev/sda model: VBOX_HARDDISK size: 42.9GB
Optical: /dev/sr0 model: VBOX CD-ROM dev-links: cdrom,dvd
Features: speed: 32x multisession: yes audio: yes dvd: yes rw: none
```
#### Show full hard disk information
Issue the inxi command with `-D` to show full hard disk information, including each disk's total size, used percentage, device ID, and model.
```
$ inxi -D
Drives: HDD Total Size: 42.9GB (17.6% used) ID-1: /dev/sda model: VBOX_HARDDISK size: 42.9GB
```
Issue the inxi command with `-p` to show full partition information, including size, used space, file system type, and mount point.
```
$ inxi -p
Partition: ID-1: / size: 38G used: 5.2G (15%) fs: ext4 dev: /dev/sda1
ID-2: swap-1 size: 2.15GB used: 0.20GB (9%) fs: swap dev: /dev/sda5
```
Issue the inxi command with `-o` to show unmounted partition information.
```
$ inxi -o
Unmounted: No unmounted partitions detected
```
#### Show graphics card information
Issue the inxi command with `-G` to show graphics card information.
```
$ inxi -G
Graphics: Card: InnoTek Systemberatung VirtualBox Graphics Adapter
Display Server: X.Org 1.18.4 drivers: (unloaded: fbdev,vesa) Resolution: 1920x955@59.89hz
GLX Renderer: Gallium 0.4 on llvmpipe (LLVM 3.8, 256 bits) GLX Version: 3.0 Mesa 12.0.3
```
#### Show server public IP address
Issue the inxi command with `-i` (requires the ifconfig network tool) to show the server's public IP address.
```
$ inxi -i
Network: Card: Intel 82540EM Gigabit Ethernet Controller driver: e1000
IF: enp0s3 state: up speed: 1000 Mbps duplex: full mac: 08:00:27:ae:1d:fe
WAN IP: 103.5.134.167 IF: enp0s3 ip-v4: 10.0.2.15
```
#### Show machine data information
Issue the inxi command with `-M` to show machine data, including the device, motherboard, BIOS, and, if present, the system builder (like Lenovo).
```
$ inxi -M
Machine: System: innotek (portable) product: VirtualBox v: 1.2
Mobo: Oracle model: VirtualBox v: 1.2 BIOS: innotek v: VirtualBox date: 12/01/2006
Battery BAT0: charge: 32.5 Wh 65.0% condition: 50.0/50.0 Wh (100%)
```
#### Show network card information
Issue the inxi command with `-N` to show network card information.
```
$ inxi -N
Network: Card: Intel 82540EM Gigabit Ethernet Controller driver: e1000
```
Issue the inxi command with `-n` to show advanced network card information, including interface, speed, MAC address, state, etc.
```
$ inxi -n
Network: Card: Intel 82540EM Gigabit Ethernet Controller driver: e1000
IF: enp0s3 state: up speed: 1000 Mbps duplex: full mac: 08:00:27:ae:1d:fe
```
#### Show distro repository data information
Issue the inxi command with `-r` to show distro repository data.
```
$ inxi -r
Repos: Active apt sources in file: /etc/apt/sources.list
deb http://in.archive.ubuntu.com/ubuntu/ yakkety main restricted
deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates main restricted
deb http://in.archive.ubuntu.com/ubuntu/ yakkety universe
deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates universe
deb http://in.archive.ubuntu.com/ubuntu/ yakkety multiverse
deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates multiverse
deb http://in.archive.ubuntu.com/ubuntu/ yakkety-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu yakkety-security main restricted
deb http://security.ubuntu.com/ubuntu yakkety-security universe
deb http://security.ubuntu.com/ubuntu yakkety-security multiverse
Active apt sources in file: /etc/apt/sources.list.d/arc-theme.list
deb http://download.opensuse.org/repositories/home:/Horst3180/xUbuntu_16.04/ /
Active apt sources in file: /etc/apt/sources.list.d/snwh-ubuntu-pulp-yakkety.list
deb http://ppa.launchpad.net/snwh/pulp/ubuntu yakkety main
```
#### Show all possible system hardware information
Issue the inxi command with `-F` to show all possible system hardware information.
```
$ inxi -F
System: Host: daygeek Kernel: 4.8.0-32-generic x86_64 (64 bit) Desktop: Unity 7.5.0 Distro: Ubuntu 16.10
Machine: System: innotek (portable) product: VirtualBox v: 1.2
Mobo: Oracle model: VirtualBox v: 1.2 BIOS: innotek v: VirtualBox date: 12/01/2006
Battery BAT0: charge: 33.0 Wh 66.0% condition: 50.0/50.0 Wh (100%)
CPU: Dual core Intel Core i7-6700HQ (-MCP-) cache: 6144 KB
clock speeds: max: 2591 MHz 1: 2591 MHz 2: 2591 MHz
Graphics: Card: InnoTek Systemberatung VirtualBox Graphics Adapter
Display Server: X.Org 1.18.4 drivers: (unloaded: fbdev,vesa) Resolution: 1920x955@59.89hz
GLX Renderer: Gallium 0.4 on llvmpipe (LLVM 3.8, 256 bits) GLX Version: 3.0 Mesa 12.0.3
Audio: Card Intel 82801AA AC'97 Audio Controller driver: snd_intel8x0 Sound: ALSA v: k4.8.0-32-generic
Network: Card: Intel 82540EM Gigabit Ethernet Controller driver: e1000
IF: enp0s3 state: up speed: 1000 Mbps duplex: full mac: 08:00:27:ae:1d:fe
Drives: HDD Total Size: 42.9GB (17.6% used) ID-1: /dev/sda model: VBOX_HARDDISK size: 42.9GB
Partition: ID-1: / size: 38G used: 5.2G (15%) fs: ext4 dev: /dev/sda1
ID-2: swap-1 size: 2.15GB used: 0.20GB (9%) fs: swap dev: /dev/sda5
RAID: No RAID devices: /proc/mdstat, md_mod kernel module present
Sensors: None detected - is lm-sensors installed and configured?
Info: Processes: 198 Uptime: 53 min Memory: 1587.5/1999.8MB Client: Shell (bash) inxi: 2.3.1
```
#### Get extra information about the device
Add `-x` to any of the above options to show extra information about the device.
```
$ inxi -F -x
System: Host: daygeek Kernel: 4.8.0-32-generic x86_64 (64 bit gcc: 6.2.0)
Desktop: Unity 7.5.0 (Gtk 3.20.9-1ubuntu2) Distro: Ubuntu 16.10
Machine: System: innotek (portable) product: VirtualBox v: 1.2
Mobo: Oracle model: VirtualBox v: 1.2 BIOS: innotek v: VirtualBox date: 12/01/2006
Battery BAT0: charge: 33.0 Wh 66.0% condition: 50.0/50.0 Wh (100%) model: innotek 1 status: Charging
CPU: Dual core Intel Core i7-6700HQ (-MCP-) cache: 6144 KB
flags: (lm nx sse sse2 sse3 sse4_1 sse4_2 ssse3) bmips: 10368
clock speeds: max: 2591 MHz 1: 2591 MHz 2: 2591 MHz
Graphics: Card: InnoTek Systemberatung VirtualBox Graphics Adapter bus-ID: 00:02.0
Display Server: X.Org 1.18.4 drivers: (unloaded: fbdev,vesa) Resolution: 1920x955@59.89hz
GLX Renderer: Gallium 0.4 on llvmpipe (LLVM 3.8, 256 bits)
GLX Version: 3.0 Mesa 12.0.3 Direct Rendering: Yes
Audio: Card Intel 82801AA AC'97 Audio Controller driver: snd_intel8x0 ports: d100 d200 bus-ID: 00:05.0
Sound: Advanced Linux Sound Architecture v: k4.8.0-32-generic
Network: Card: Intel 82540EM Gigabit Ethernet Controller
driver: e1000 v: 7.3.21-k8-NAPI port: d010 bus-ID: 00:03.0
IF: enp0s3 state: up speed: 1000 Mbps duplex: full mac: 08:00:27:ae:1d:fe
Drives: HDD Total Size: 42.9GB (17.6% used) ID-1: /dev/sda model: VBOX_HARDDISK size: 42.9GB
Partition: ID-1: / size: 38G used: 5.2G (15%) fs: ext4 dev: /dev/sda1
ID-2: swap-1 size: 2.15GB used: 0.20GB (9%) fs: swap dev: /dev/sda5
RAID: No RAID devices: /proc/mdstat, md_mod kernel module present
Sensors: None detected - is lm-sensors installed and configured?
Info: Processes: 198 Uptime: 54 min Memory: 1592.5/1999.8MB Init: systemd runlevel: 5 Gcc sys: 6.2.0
Client: Shell (bash 4.3.461) inxi: 2.3.1
```
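One extra tip: if you plan to paste inxi output on a public forum, recent inxi versions also provide a `-z` option that applies security filters, masking identifiers such as IP and MAC addresses; this is an assumption worth verifying with `inxi -h` on your build:

```
$ inxi -Fxz
```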
--------------------------------------------------------------------------------
via: http://www.2daygeek.com/inxi-system-hardware-information-on-linux/2/
作者:[ MAGESH MARUTHAMUTHU ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.2daygeek.com/author/magesh/
[1]:http://smxi.org/docs/inxi.htm

Why we need an open model to design and evaluate public policy
============================================================
### Imagine an app that allows citizens to test drive proposed policies.
![Why we need an open model to design and evaluate public policy](https://opensource.com/sites/default/files/styles/image-full-size/public/images/government/GOV_citizen_participation.jpg?itok=eeLWQgev "Why we need an open model to design and evaluate public policy")
Image by: opensource.com
In the months leading up to political elections, public debate intensifies and citizens are exposed to a proliferation of information around policy options. In a data-driven society where new insights have been informing decision-making, a deeper understanding of this information has never been more important, yet the public still hasn't realized the full potential of public policy modeling.
At a time where the concept of "open government" is constantly evolving to keep pace with new technological advances, government policy models and analysis could be the new generation of open knowledge.
Government Open Source Models (GOSMs) refer to the idea that government-developed models, whose purpose is to design and evaluate policy, are freely available to everyone to use, distribute, and modify without restrictions. The community could potentially improve the quality, reliability, and accuracy of policy modeling, creating new data-driven apps that benefit the public.
Today's generation interacts with technology like it's second nature, absorbing vast amounts of information tacitly. What if we could interact with different public policies in a virtual, immersive environment using a GOSM?
Imagine an app that allows citizens to test drive proposed policies to determine the future in which they want to live. They would instinctively learn the key drivers and what to expect. Before long the public would have a deeper understanding of public policy impacts and become more savvy at navigating the controversial terrains of public debate.
Why haven't we had greater access to these models before? The reason lies behind the veils of public policy modeling.
In a society as complex as the one we live in, quantifying policy impacts is a difficult task and has been described as a "fine-art." Moreover, most government policy models are based on administrative and other privately held data. Nevertheless, policy analysts valiantly go about their quest with the aim of guiding policy design, and many a political battle has been won with a quantitative call to arms.
Numbers are powerful. They build credibility and are often used as a rationale for introducing new policies. The development of public policy models lends power to both politicians and bureaucrats, who may be reluctant to disrupt the status quo. Giving that up may not be easy, but GOSMs provide an opportunity for unprecedented public policy reform.
GOSMs level the playing field for everyone: politicians, the media, lobby groups, stakeholders, and the general public. By opening the doors of policy evaluation to the community, governments can tap into new and undiscovered capabilities for creativity, innovation, and efficiency within the public sphere. But what are the practical implications for the strategic interactions between stakeholders and governments in public policy design?
GOSMs are unique because they are primarily a tool for designing public policy and do not necessarily require re-distribution for private gains. Stakeholders and lobby groups could potentially employ GOSMs along with their own privately held information to gain new insights into the workings of the policy environment in which they are economic players for private benefit.
Could GOSMs become a weapon where stakeholders hold the balance of power in public debate and strategize for optimal benefit?
Being a modifiable public good, GOSMs are, notionally, funded by the taxpayer and attributed to the state. Would it be ethically appropriate for private entities to gain from GOSMs without passing on the benefits to society? Unlike apps that may be used for more efficient service provision, alternate policy proposals are more likely to be used by consultancies and contribute to public debate.
The open source community has frequently used the "copyleft license" to ensure that code and any derivative works under this license remains open to everyone. This works well when the product of value is the code itself, which requires re-distribution for maximum benefit. However, what if the code or GOSM redistribution is incidental to the main product, which may be new strategic insights into the existing policy environment?
At a time when privately collected data is becoming prolific, the real value behind GOSMs may be the underlying data, which could be used to refine the models themselves. Ultimately, government is the only consumer with the authority to implement policy, and stakeholders may choose to share their modified GOSMs in negotiations.
The big challenge government has when publicly releasing policy models is increasing transparency while protecting privacy. Ideally, releasing GOSMs would require securing closed data in a way that preserves the key features of the modeling.
Publicly releasing GOSMs empowers citizens by promoting greater understanding of and participation in our democracy, which would lead to improved policy outcomes and greater public satisfaction. In an open government utopia, open public policy development will be a collaborative effort between government and the community, where knowledge, data, and analysis are freely available to everyone.
_Learn more in Audrey Lobo-Pulo's talk at linux.conf.au 2017 ([#lca2017][1]) in Hobart: [Publicly Releasing Government Models][2]._
_Disclaimer: The views presented in this article belong to Audrey Lobo-Pulo and are not necessarily those of the Australian Government._
--------------------------------------------------------------------------------
作者简介:
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/1-_mg_2552.jpg?itok=-RflZ4Wv)
Audrey Lobo-Pulo - Dr. Audrey Lobo-Pulo is a co-founder of Phoensight and is an advocate for open government and open source software in government modelling. A physicist, she moved to working in economic policy modelling after joining the Australian Public Service. Audrey has been involved in modelling a wide range of economic policy options and is currently interested in government open data and open policy modelling. Audrey's vision for government is to bring data science to public policy analytics.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/government-open-source-models
作者:[Audrey Lobo-Pulo ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/audrey-lobo-pulo
[1]:https://twitter.com/search?q=%23lca2017&src=typd
[2]:https://linux.conf.au/schedule/presentation/31/
[3]:https://opensource.com/article/17/1/government-open-source-models?rate=p9P_dJ3xMrvye9a6xiz6K_Hc8pdKmRvMypzCNgYthA0

translating by xiaow6
Git in 2016
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*1SiSsLMsNSyAk6khb63W9g.png)
Git had a  _huge_  year in 2016, with five feature releases[¹][57] ( _v2.7_  through  _v2.11_ ) and sixteen patch releases[²][58]. 189 authors[³][59] contributed 3,676 commits[⁴][60] to `master`, which is up 15%[⁵][61] over 2015! In total, 1,545 files were changed with 276,799 lines added and 100,973 lines removed[⁶][62].
However, commit counts and LOC are pretty terrible ways to measure productivity. Until deep learning develops to the point where it can qualitatively grok code, were going to be stuck with human judgment as the arbiter of productivity.
With that in mind, I decided to put together a retrospective of sorts that covers changes and improvements made to six of my favorite Git features over the course of the year. This article is pretty darn long for a Medium post, so I will forgive you if you want to skip ahead to a feature that particularly interests you:
* [Rounding out the ][41]`[git worktree][25]`[ command][42]
* [More convenient ][43]`[git rebase][26]`[ options][44]
* [Dramatic performance boosts for ][45]`[git lfs][27]`
* [Experimental algorithms and better defaults for ][46]`[git diff][28]`
* `[git submodules][29]`[ with less suck][47]
* [Nifty enhancements to ][48]`[git stash][30]`
Before we begin, note that many operating systems ship with legacy versions of Git, so its worth checking that youre on the latest and greatest. If running `git --version` from your terminal returns anything less than Git `v2.11.0`, head on over to Atlassian's quick guide to [upgrade or install Git][63] on your platform of choice.
### [`Citation` needed]
One more quick stop before we jump into the qualitative stuff: I thought Id show you how I generated the statistics from the opening paragraph (and the rather over-the-top cover image). You can use the commands below to do a quick  _year in review_  for your own repositories as well!
```
# ¹ Tags from 2016 matching the form vX.Y.0
$ git for-each-ref --sort=-taggerdate --format \
'%(refname) %(taggerdate)' refs/tags | grep "v\d\.\d*\.0 .* 2016"

# ² Tags from 2016 matching the form vX.Y.Z
$ git for-each-ref --sort=-taggerdate --format '%(refname) %(taggerdate)' refs/tags | grep "v\d\.\d*\.[^0] .* 2016"

# ³ Commits by author in 2016
$ git shortlog -s -n --since=2016-01-01 --until=2017-01-01

# ⁴ Count commits in 2016
$ git log --oneline --since=2016-01-01 --until=2017-01-01 | wc -l

# ⁵ ... and in 2015
$ git log --oneline --since=2015-01-01 --until=2016-01-01 | wc -l

# ⁶ Net LOC added/removed in 2016
$ git diff --shortstat `git rev-list -1 --until=2016-01-01 master` \
`git rev-list -1 --until=2017-01-01 master`
```
The commands above were run on Git's `master` branch, so they don't represent any unmerged work on outstanding branches. If you use these commands, remember that commit counts and LOC are not metrics to live by. Please don't use them to rate the performance of your teammates!
And now, on with the retrospective…
### Rounding out Git worktrees
The `git worktree` command first appeared in Git v2.5, but had some notable enhancements in 2016. Two valuable new features were introduced in v2.7 (the `list` subcommand, and namespaced refs for bisecting), and the `lock`/`unlock` subcommands were implemented in v2.10.
#### Whats a worktree again?
The `[git worktree][49]` command lets you check out and work on multiple repository branches in separate directories simultaneously. For example, if you need to make a quick hotfix but don't want to mess with your working copy, you can check out a new branch in a new directory with:
```
$ git worktree add -b hotfix/BB-1234 ../hotfix/BB-1234
Preparing ../hotfix/BB-1234 (identifier BB-1234)
HEAD is now at 886e0ba Merged in bedwards/BB-13430-api-merge-pr (pull request #7822)
```
Worktrees arent just for branches. You can check out multiple tags as different worktrees in order to build or test them in parallel. For example, I created worktrees from the Git v2.6 and v2.7 tags in order to examine the behavior of different versions of Git:
```
$ git worktree add ../git-v2.6.0 v2.6.0
Preparing ../git-v2.6.0 (identifier git-v2.6.0)
HEAD is now at be08dee Git 2.6
```
```
$ git worktree add ../git-v2.7.0 v2.7.0
Preparing ../git-v2.7.0 (identifier git-v2.7.0)
HEAD is now at 7548842 Git 2.7
```
```
$ git worktree list
/Users/kannonboy/src/git 7548842 [master]
/Users/kannonboy/src/git-v2.6.0 be08dee (detached HEAD)
/Users/kannonboy/src/git-v2.7.0 7548842 (detached HEAD)
```
```
$ cd ../git-v2.7.0 && make
```
You could use the same technique to build and run different versions of your own applications side-by-side.
#### Listing worktrees
The `git worktree list` subcommand (introduced in Git v2.7) displays all of the worktrees associated with a repository:
```
$ git worktree list
/Users/kannonboy/src/bitbucket/bitbucket 37732bd [master]
/Users/kannonboy/src/bitbucket/staging d5924bc [staging]
/Users/kannonboy/src/bitbucket/hotfix-1234 37732bd [hotfix/1234]
```
#### Bisecting worktrees
`[git bisect][50]` is a neat Git command that lets you perform a binary search of your commit history. It's usually used to find out which commit introduced a particular regression. For example, if a test is failing on the tip commit of my `master` branch, I can use `git bisect` to traverse the history of my repository looking for the commit that first broke it:
```
$ git bisect start
```
```
# indicate the last commit known to be passing the tests
# (e.g. the latest release tag)
$ git bisect good v2.0.0
```
```
# indicate a known broken commit (e.g. the tip of master)
$ git bisect bad master
```
```
# tell git bisect a script/command to run; git bisect will
# find the oldest commit between "good" and "bad" that causes
# this script to exit with a non-zero status
$ git bisect run npm test
```
Under the hood, bisect uses refs to track the good and bad commits used as the upper and lower bounds of the binary search range. Unfortunately for worktree fans, these refs were stored under the generic `.git/refs/bisect` namespace, meaning that `git bisect` operations that are run in different worktrees could interfere with each other.
As of v2.7, the bisect refs have been moved to`.git/worktrees/$worktree_name/refs/bisect`, so you can run bisect operations concurrently across multiple worktrees.
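For example, here's a rough sketch of two bisect sessions running side by side in separate worktrees (paths and tags are illustrative):

```
# each worktree gets its own refs/bisect namespace
$ git worktree add --detach ../bisect-frontend master
$ git worktree add --detach ../bisect-backend master

# the two searches proceed independently without clobbering each other
$ (cd ../bisect-frontend && git bisect start && git bisect bad && git bisect good v2.0.0)
$ (cd ../bisect-backend && git bisect start && git bisect bad && git bisect good v1.9.0)
```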
#### Locking worktrees
When you're finished with a worktree, you can simply delete it and then run `git worktree prune` or wait for it to be garbage collected automatically. However, if you're storing a worktree on a network share or removable media, then it will be cleaned up if the worktree directory isn't accessible during pruning, whether you like it or not! Git v2.10 introduced the `git worktree lock` and `unlock` subcommands to prevent this from happening:
```
# to lock the git-v2.7 worktree on my USB drive
$ git worktree lock /Volumes/Flash_Gordon/git-v2.7 --reason \
"In case I remove my removable media"
```
```
# to unlock (and delete) the worktree when I'm finished with it
$ git worktree unlock /Volumes/Flash_Gordon/git-v2.7
$ rm -rf /Volumes/Flash_Gordon/git-v2.7
$ git worktree prune
```
The `--reason` flag lets you leave a note for your future self, describing why the worktree is locked. `git worktree unlock` and `lock` both require you to specify the path to the worktree. Alternatively, you can `cd` to the worktree directory and run `git worktree lock .` for the same effect.
### More Git `rebase` options
In March, Git v2.8 added the ability to interactively rebase whilst pulling, with `git pull --rebase=interactive`. Conversely, June's Git v2.9 release implemented support for performing a rebase exec without needing to drop into interactive mode, via `git rebase -x`.
#### Re-wah?
Before we dive in, I suspect there may be a few readers who arent familiar or completely comfortable with the rebase command or interactive rebasing. Conceptually, its pretty simple, but as with many of Gits powerful features, the rebase is steeped in some complex-sounding terminology. So, before we dive in, lets quickly review what a rebase is.
Rebasing means rewriting one or more commits on a particular branch. The `git rebase` command is heavily overloaded, but the name rebase originates from the fact that it is often used to change a branch's base commit (the commit that you created the branch from).
Conceptually, rebase unwinds the commits on your branch by temporarily storing them as a series of patches, and then reapplying them in order on top of the target commit.
![](https://cdn-images-1.medium.com/max/800/1*mgyl38slmqmcE4STS56nXA.gif)
Rebasing a feature branch on master (`git rebase master`) is a great way to "freshen" your feature branch with the latest changes from master. For long-lived feature branches, regular rebasing minimizes the chance and severity of conflicts down the road.
Some teams also choose to rebase immediately before merging their changes onto master in order to achieve a fast-forward merge (`git merge --ff <feature>` ). Fast-forwarding merges your commits onto master by simply making the master ref point at the tip of your rewritten branch without creating a merge commit:
![](https://cdn-images-1.medium.com/max/800/1*QXa3znQiuNWDjxroX628VA.gif)
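As a minimal sketch of that flow (the branch name is illustrative):

```
# replay the feature branch on top of the current master
$ git checkout feature/ACE-1294
$ git rebase master

# fast-forward master to the rewritten branch tip; no merge commit is created
$ git checkout master
$ git merge --ff-only feature/ACE-1294
```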
Rebasing is so convenient and powerful that it has been baked into some other common Git commands, such as `git pull`. If you have some unpushed changes on your local master branch, running `git pull` to pull your teammates' changes from the origin will create an unnecessary merge commit:
![](https://cdn-images-1.medium.com/max/800/1*IxDdJ5CygvSWdD8MCNpZNg.gif)
This is kind of messy, and on busy teams, youll get heaps of these unnecessary merge commits. `git pull --rebase` rebases your local changes on top of your teammates' without creating a merge commit:
![](https://cdn-images-1.medium.com/max/800/1*HcroDMwBE9m21-hOeIwRmw.gif)
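If you find yourself typing `--rebase` on every pull, you can make it the default behavior via Git's standard `pull.rebase` configuration key:

```
# rebase (rather than merge) local commits on every git pull
$ git config --global pull.rebase true
```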
This is pretty neat! Even cooler, Git v2.8 introduced a feature that lets you rebase  _interactively_  whilst pulling.
#### Interactive rebasing
Interactive rebasing is a more powerful form of rebasing. Like a standard rebase, it rewrites commits, but it also gives you a chance to modify them interactively as they are reapplied onto the new base.
When you run `git rebase --interactive` (or `git pull --rebase=interactive`), you'll be presented with a list of commits in your text editor of choice:
```
$ git rebase master --interactive
```
```
pick 2fde787 ACE-1294: replaced miniamalCommit with string in test
pick ed93626 ACE-1294: removed pull request service from test
pick b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
pick e68f710 ACE-1294: added testing data to batch email file
```
```
# Rebase f32fa9d..0ddde5f onto f32fa9d (4 commands)
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
#
# These lines can be re-ordered; they are executed from top to
# bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
```
Notice that each commit has the word `pick` next to it. That's rebase-speak for, "Keep this commit as-is." If you quit your text editor now, it will perform a normal rebase as described in the last section. However, if you change `pick` to `edit` or one of the other rebase commands, rebase will let you mutate the commit before it is reapplied! There are several available rebase commands:
* `reword`: Edit the commit message.
* `edit`: Edit the files that were committed.
* `squash`: Combine the commit with the previous commit (the one above it in the file), concatenating the commit messages.
* `fixup`: Combine the commit with the commit above it, and uses the previous commit's log message verbatim (this is handy if you created a second commit for a small change that should have been in the original commit, i.e., you forgot to stage a file).
* `exec`: Run an arbitrary shell command (we'll look at a neat use-case for this later, in the next section).
* `drop`: This kills the commit.
You can also reorder commits within the file, which changes the order in which theyre reapplied. This is handy if you have interleaved commits that are addressing different topics and you want to use `squash` or `fixup` to combine them into logically atomic commits.
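For instance, an edited version of the todo list above (hypothetical, for illustration) might squash the second commit into the first and drop the last one entirely:

```
pick 2fde787 ACE-1294: replaced miniamalCommit with string in test
squash ed93626 ACE-1294: removed pull request service from test
pick b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
drop e68f710 ACE-1294: added testing data to batch email file
```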
Once youve set up the commands and saved the file, Git will iterate through each commit, pausing at each `reword` and `edit` for you to make your desired changes and automatically applying any `squash`, `fixup`, `exec`, and `drop` commands for you.
#### Non-interactive exec
When you rebase, youre essentially rewriting history by applying each of your new commits on top of the specified base. `git pull --rebase` can be a little risky because depending on the nature of the changes from the upstream branch, you may encounter test failures or even compilation problems for certain commits in your newly created history. If these changes cause merge conflicts, the rebase process will pause and allow you to resolve them. However, changes that merge cleanly may still break compilation or tests, leaving broken commits littering your history.
However, you can instruct Git to run your project's test suite for each rewritten commit. Prior to Git v2.9, you could do this with a combination of `git rebase --interactive` and the `exec` command. For example, this:
```
$ git rebase master --interactive --exec="npm test"
```
…would generate an interactive rebase plan that invokes `npm test` after rewriting each commit, ensuring that your tests still pass:
```
pick 2fde787 ACE-1294: replaced miniamalCommit with string in test
exec npm test
pick ed93626 ACE-1294: removed pull request service from test
exec npm test
pick b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
exec npm test
pick e68f710 ACE-1294: added testing data to batch email file
exec npm test
```
```
# Rebase f32fa9d..0ddde5f onto f32fa9d (4 command(s))
```
In the event that a test fails, rebase will pause to let you fix the tests (and apply your changes to that commit):
```
291 passing
1 failing
```
```
1) Host request "after all" hook:
Uncaught Error: connect ECONNRESET 127.0.0.1:3001
npm ERR! Test failed.
Execution failed: npm test
You can fix the problem, and then run
git rebase --continue
```
This is handy, but needing to do an interactive rebase is a bit clunky. As of Git v2.9, you can run `exec` commands as part of a non-interactive rebase, with:
```
$ git rebase master -x "npm test"
```
Just replace `npm test` with `make`, `rake`, `mvn clean install`, or whatever you use to build and test your project.
#### A word of warning
Just like in the movies, rewriting history is risky business. Any commit that is rewritten as part of a rebase will have its SHA-1 ID changed, which means that Git will treat it as a totally different commit. If rewritten history is mixed with the original history, you'll get duplicate commits, which can cause a lot of confusion for your team.
To avoid this problem, you only need to follow one simple rule:
> _Never rebase a commit that you've already pushed!_
Stick to that and you'll be fine.
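If you're ever unsure whether a commit has already been pushed, you can ask Git which remote branches (if any) contain it; an empty result means the commit only exists locally. A quick sketch, using one of the SHAs from the earlier examples:
```
$ git branch -r --contains 2fde787
```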
### Performance boosts for `Git LFS`
[Git is a distributed version control system][64], meaning the entire history of the repository is transferred to the client during the cloning process. For projects that contain large files, particularly large files that are modified regularly, the initial clone can be expensive, as every version of every file has to be downloaded by the client. [Git LFS (Large File Storage)][65] is a Git extension developed by Atlassian, GitHub, and a few other open source contributors that reduces the impact of large files in your repository by downloading the relevant versions of them lazily. Specifically, large files are downloaded as needed during the checkout process rather than during cloning or fetching.
Alongside Git's five huge releases in 2016, Git LFS had four feature-packed releases of its own: v1.2 through v1.5. You could write a retrospective series on Git LFS in its own right, but for this article, I'm going to focus on one of the most important themes tackled in 2016: speed. A series of improvements to both Git and Git LFS have greatly improved the performance of transferring files to and from the server.
#### Long-running filter processes
When you `git add` a file, Git's system of clean filters can be used to transform a file's contents before they are written to the Git object store. Git LFS reduces your repository size by using a clean filter to squirrel away large file content in the LFS cache, adding a tiny “pointer” file to the Git object store instead.
![](https://cdn-images-1.medium.com/max/800/0*Ku328eca7GLOo7sS.png)
Smudge filters are the opposite of clean filters, hence the name. When file content is read from the Git object store during a `git checkout`, smudge filters have a chance to transform it before it's written to the user's working copy. The Git LFS smudge filter transforms pointer files by replacing them with the corresponding large file, either from your LFS cache or by reading through to your Git LFS store on Bitbucket.
![](https://cdn-images-1.medium.com/max/800/0*CU60meE1lbCuivn7.png)
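If you're curious how this is wired up: tracking a pattern with `git lfs track` adds an entry to your repository's `.gitattributes`, and `git lfs install` registers the clean and smudge commands in your Git config. The snippets below are roughly what Git LFS writes; the exact contents may vary between versions:
```
# .gitattributes, after running: git lfs track "*.psd"
*.psd filter=lfs diff=lfs merge=lfs -text
```
```
# Git config entries added by: git lfs install
[filter "lfs"]
    clean = git-lfs clean -- %f
    smudge = git-lfs smudge -- %f
    required = true
```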
Traditionally, smudge and clean filter processes were invoked once for each file that was being added or checked out. So, a project with 1,000 files tracked by Git LFS invoked the `git-lfs-smudge` command 1,000 times for a fresh checkout! While each operation is relatively quick, the overhead of spinning up 1,000 individual smudge processes is costly.
As of Git v2.11 (and Git LFS v1.5), smudge and clean filters can be defined as long-running processes that are invoked once for the first filtered file, then fed subsequent files that need smudging or cleaning until the parent Git operation exits. [Lars Schneider][66], who contributed long-running filters to Git, neatly summarized the impact of the change on Git LFS performance:
> The filter process is 80x faster on macOS and 58x faster on Windows for the test repo with 12k files. On Windows, that means the tests runs in 57 seconds instead of 55 minutes!
That's a seriously impressive performance gain!
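Under the hood, the long-running protocol is switched on by an additional `process` entry in the same `filter` configuration; with Git v2.11 and Git LFS v1.5 or later, `git lfs install` adds something along these lines:
```
[filter "lfs"]
    process = git-lfs filter-process
```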
#### Specialized LFS clones
Long-running smudge and clean filters are great for speeding up reads and writes to the local LFS cache, but they do little to speed up transferring of large objects to and from your Git LFS server. Each time the Git LFS smudge filter can't find a file in the local LFS cache, it has to make two HTTP calls to retrieve it: one to locate the file and one to download it. During a `git clone`, your local LFS cache is empty, so Git LFS will naively make two HTTP calls for every LFS tracked file in your repository:
![](https://cdn-images-1.medium.com/max/800/0*ViL7r3ZhkGvF0z3-.png)
Fortunately, Git LFS v1.2 shipped the specialized [`git lfs clone`][51] command. Rather than downloading files one at a time, `git lfs clone` disables the Git LFS smudge filter, waits until the checkout is complete, and then downloads any required files as a batch from the Git LFS store. This allows downloads to be parallelized and halves the number of required HTTP requests:
![](https://cdn-images-1.medium.com/max/800/0*T43VA0DYTujDNgkH.png)
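Usage is the same as a regular clone; the URL below is just a placeholder for your own repository:
```
$ git lfs clone https://bitbucket.org/your-team/your-repo.git
```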
### Custom Transfer Adapters
As discussed earlier, Git LFS shipped support for long-running filter processes in v1.5. However, support for another type of pluggable process actually shipped earlier in the year. Git LFS v1.3 included support for pluggable transfer adapters so that different Git LFS hosting services could define their own protocols for transferring files to and from LFS storage.
As of the end of 2016, Bitbucket is the only hosting service to implement its own Git LFS transfer protocol, via the [Bitbucket LFS Media Adapter][67]. This was done to take advantage of a unique feature of Bitbucket's LFS storage API called chunking. Chunking means large files are broken down into 4MB chunks before uploading or downloading.
![](https://cdn-images-1.medium.com/max/800/1*N3SpjQZQ1Ge8OwvWrtS1og.gif)
Chunking gives Bitbucket's Git LFS support three big advantages:
1. Parallelized downloads and uploads. By default, Git LFS transfers up to three files in parallel. However, if only a single file is being transferred (which is the default behavior of the Git LFS smudge filter), it is transferred via a single stream. Bitbucket's chunking allows multiple chunks from the same file to be uploaded or downloaded simultaneously, often dramatically improving transfer speed.
2. Resumable chunk transfers. File chunks are cached locally, so if your download or upload is interrupted, Bitbucket's custom LFS media adapter will resume transferring only the missing chunks the next time you push or pull.
3. Deduplication. Git LFS, like Git itself, is content addressable; each LFS file is identified by a SHA-256 hash of its contents. So, if you flip a single bit, the file's SHA-256 changes and you have to re-upload the entire file. Chunking allows you to re-upload only the sections of the file that have actually changed. To illustrate, imagine we have a 41MB spritesheet for a video game tracked in Git LFS. If we add a new 2MB layer to the spritesheet and commit it, we'd typically need to push the entire new 43MB file to the server. However, with Bitbucket's custom transfer adapter, we only need to push ~7MB: the first 4MB chunk (because the file's header information will have changed) and the last 3MB chunk containing the new layer we've just added! The other unchanged chunks are skipped automatically during the upload process, saving a huge amount of bandwidth and time.
Customizable transfer adapters are a great feature for Git LFS, as they allow different hosts to experiment with optimized transfer protocols to suit their services without overloading the core project.
### Better `git diff` algorithms and defaults
Unlike some other version control systems, Git doesn't explicitly store the fact that files have been renamed. For example, if I edited a simple Node.js application and renamed `index.js` to `app.js` and then ran `git diff`, I'd get back what looks like a file deletion and an addition:
![](https://cdn-images-1.medium.com/max/800/1*ohMUBpSh_jqz2ffScJ7ApQ.png)
I guess moving or renaming a file is technically just a delete followed by an add, but this isn't the most human-friendly way to show it. Instead, you can use the `-M` flag to instruct Git to attempt to detect renamed files on the fly when computing a diff. For the above example, `git diff -M` gives us:
![](https://cdn-images-1.medium.com/max/800/1*ywYjxBc1wii5O8EhHbpCTA.png)
The similarity index on the second line tells us how similar the content of the two files is. By default, `-M` will treat any two files that are more than 50% similar, i.e., files where you'd need to modify less than 50% of the lines to make them identical, as a renamed file. You can choose your own similarity index by appending a percentage, i.e., `-M80%`.
As of Git v2.9, the `git diff` and `git log` commands will both detect renames by default, as if you'd passed the `-M` flag. If you dislike this behavior (or, more realistically, are parsing the diff output via a script), then you can disable it by explicitly passing the `--no-renames` flag.
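If you want that choice to stick without typing the flag every time, rename detection can also be toggled through configuration; a sketch using the `diff.renames` setting:
```
$ git config --global diff.renames false
```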
#### Verbose Commits
Do you ever invoke `git commit` and then stare blankly at your shell trying to remember all the changes you just made? The verbose flag is for you!
Instead of:
```
Ah crap, which dependency did I just rev?
```
```
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch master
# Your branch is up-to-date with 'origin/master'.
#
# Changes to be committed:
# new file: package.json
#
```
…you can invoke `git commit --verbose` to view an inline diff of your changes. Don't worry, it won't be included in your commit message:
![](https://cdn-images-1.medium.com/max/800/1*1vOYE2ow3ZDS8BP_QfssQw.png)
The `--verbose` flag isn't new, but as of Git v2.9 you can enable it permanently with `git config --global commit.verbose true`.
#### Experimental Diff Improvements
`git diff` can produce some slightly confusing output when the lines before and after a modified section are the same. This can happen when you have two or more similarly structured functions in a file. For a slightly contrived example, imagine we have a JS file that contains a single function:
```
/* @return {string} "Bitbucket" */
function productName() {
return "Bitbucket";
}
```
Now imagine we've committed a change that prepends _another_ function that does something similar:
```
/* @return {string} "Bitbucket" */
function productId() {
return "Bitbucket";
}
```
```
/* @return {string} "Bitbucket" */
function productName() {
return "Bitbucket";
}
```
You'd expect `git diff` to show the top five lines as added, but it actually incorrectly attributes the very first line to the original commit:
![](https://cdn-images-1.medium.com/max/800/1*9C7DWMObGHMEqD-QFGHmew.png)
The wrong comment is included in the diff! Not the end of the world, but the couple of seconds of cognitive overhead from the  _Whaaat?_  every time this happens can add up.
In December, Git v2.11 introduced a new experimental diff option, `--indent-heuristic`, that attempts to produce more aesthetically pleasing diffs:
![](https://cdn-images-1.medium.com/max/800/1*UyWZ6JjC-izDquyWCA4bow.png)
Under the hood, `--indent-heuristic` cycles through the possible diffs for each change and assigns each a “badness” score. This is based on heuristics like whether the diff block starts and ends with different levels of indentation (which is aesthetically bad) and whether the diff block has leading and trailing blank lines (which is aesthetically pleasing). Then, the block with the lowest badness score is output.
This feature is experimental, but you can test it out ad-hoc by applying the `--indent-heuristic` option to any `git diff` command. Or, if you like to live on the bleeding edge, you can enable it across your system with:
```
$ git config --global diff.indentHeuristic true
```
### Submodules with less suck
Submodules allow you to reference and include other Git repositories from inside your Git repository. This is commonly used by some projects to manage source dependencies that are also tracked in Git, or by some companies as an alternative to a [monorepo][68] containing a collection of related projects.
Submodules get a bit of a bad rap due to some usage complexities and the fact that it's reasonably easy to break them with an errant command.
![](https://cdn-images-1.medium.com/max/800/1*xNffiElY7BZNMDM0jm0JNQ.gif)
However, they do have their uses and are, I think, still the best choice for vendoring dependencies. Fortunately, 2016 was a great year to be a submodule user, with some significant performance and feature improvements landing across several releases.
#### Parallelized fetching
When cloning or fetching a repository, appending the `--recurse-submodules` option means any referenced submodules will be cloned or updated as well. Traditionally, this was done serially, with each submodule being fetched one at a time. As of Git v2.8, you can append the `--jobs=n` option to fetch submodules in _n_ parallel threads.
I recommend configuring this option permanently with:
```
$ git config --global submodule.fetchJobs 4
```
…or whatever degree of parallelization you choose to use.
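If you'd rather not change your configuration, the same parallelism can be requested per invocation; for example:
```
$ git fetch --recurse-submodules --jobs=4
```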
#### Shallow submodules
Git v2.9 introduced the `git clone --shallow-submodules` flag. It allows you to grab a full clone of your repository and then recursively shallow clone any referenced submodules to a depth of one commit. This is useful if you don't need the full history of your project's dependencies.
For example, consider a repository with a mixture of submodules containing vendored dependencies and other projects that you own. You may wish to clone with shallow submodules initially and then selectively deepen the few projects you want to work with.
Another scenario would be configuring a continuous integration or deployment job. Git needs the super repository as well as the latest commit from each of your submodules in order to actually perform the build. However, you probably don't need the full history for every submodule, so retrieving just the latest commit will save you both time and bandwidth.
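A minimal sketch of that workflow, assuming a hypothetical super repository and a vendored submodule you later decide to dig into:
```
$ git clone --recurse-submodules --shallow-submodules https://example.com/super.git
$ cd super/vendor/widget-lib
$ git fetch --unshallow
```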
#### Submodule alternates
The `--reference` option can be used with `git clone` to specify another local repository as an alternate object store to save recopying objects over the network that you already have locally. The syntax is:
```
$ git clone --reference <local repo> <url>
```
As of Git v2.11, you can use the `--reference` option in combination with `--recurse-submodules` to set up submodule alternates pointing to submodules from another local repository. The syntax is:
```
$ git clone --recurse-submodules --reference <local repo> <url>
```
This can potentially save a huge amount of bandwidth and local disk space, but it will fail if the referenced local repository does not have all the required submodules of the remote repository that you're cloning from.
Fortunately, the handy `--reference-if-able` option will fail gracefully and fall back to a normal clone for any submodules that are missing from the referenced local repository:
```
$ git clone --recurse-submodules --reference-if-able \
<local repo> <url>
```
#### Submodule diffs
Prior to Git v2.11, Git had two modes for displaying diffs of commits that updated your repository's submodules:
`git diff --submodule=short` displays the old commit and new commit from the submodule referenced by your project (this is also the default if you omit the `--submodule` option altogether):
![](https://cdn-images-1.medium.com/max/800/1*K71cJ30NokO5B69-a470NA.png)
`git diff --submodule=log` is slightly more verbose, displaying the summary line from the commit message of any new or removed commits in the updated submodule:
![](https://cdn-images-1.medium.com/max/800/1*frvsd_T44De8_q0uvNHB1g.png)
Git v2.11 introduces a third much more useful option: `--submodule=diff`. This displays a full diff of all changes in the updated submodule:
![](https://cdn-images-1.medium.com/max/800/1*nPhJTjP8tcJ0cD8s3YOmjw.png)
### Nifty enhancements to `git stash`
Unlike submodules, [`git stash`][52] is almost universally beloved by Git users. `git stash` temporarily shelves (or _stashes_) changes you've made to your working copy so you can work on something else, and then come back and re-apply them later on.
#### Autostash
If you're a fan of `git rebase`, you might be familiar with the `--autostash` option. It automatically stashes any local changes made to your working copy before rebasing and reapplies them after the rebase is completed.
```
$ git rebase master --autostash
Created autostash: 54f212a
HEAD is now at 8303dca It's a kludge, but put the tuple from the database in the cache.
First, rewinding head to replay your work on top of it...
Applied autostash.
```
This is handy, as it allows you to rebase from a dirty worktree. There's also a handy config flag named `rebase.autostash` to make this behavior the default, which you can enable globally with:
```
$ git config --global rebase.autostash true
```
`rebase.autostash` has actually been available since [Git v1.8.4][69], but v2.7 introduces the ability to cancel this flag with the `--no-autostash` option. If you use this option with unstaged changes, the rebase will abort with a dirty worktree warning:
```
$ git rebase master --no-autostash
Cannot rebase: You have unstaged changes.
Please commit or stash them.
```
#### Stashes as Patches
Speaking of config flags, Git v2.7 also introduces `stash.showPatch`. The default behavior of `git stash show` is to display a summary of your stashed files.
```
$ git stash show
package.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
```
Passing the `-p` flag puts `git stash show` into "patch mode," which displays the full diff:
![](https://cdn-images-1.medium.com/max/800/1*HpcT3quuKKQj9CneqPuufw.png)
`stash.showPatch` makes this behavior the default. You can enable it globally with:
```
$ git config --global stash.showPatch true
```
If you enable `stash.showPatch` but then decide you want to view just the file summary, you can get the old behavior back by passing the `--stat` option instead.
```
$ git stash show --stat
package.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
```
As an aside: `--no-patch` is a valid option but it doesn't negate `stash.showPatch` as you'd expect. Instead, it gets passed along to the underlying `git diff` command used to generate the patch, and you'll end up with no output at all!
#### Simple Stash IDs
If you're a `git stash` fan, you probably know that you can shelve multiple sets of changes, and then view them with `git stash list`:
```
$ git stash list
stash@{0}: On master: crazy idea that might work one day
stash@{1}: On master: desperate samurai refactor; don't apply
stash@{2}: On master: perf improvement that I forgot I stashed
stash@{3}: On master: pop this when we use Docker in production
```
However, you may not know why Git's stashes have such awkward identifiers (`stash@{1}`, `stash@{2}`, etc.) and may have written them off as "just one of those Git idiosyncrasies." It turns out that like many Git features, these weird IDs are actually a symptom of a very clever use (or abuse) of the Git data model.
Under the hood, the `git stash` command actually creates a set of special commit objects that encode your stashed changes and maintains a [reflog][70] that holds references to these special commits. This is why the output from `git stash list` looks a lot like the output from the `git reflog` command. When you run `git stash apply stash@{1}`, you're actually saying, “Apply the commit at position 1 from the stash reflog.”
As of Git v2.11, you no longer have to use the full `stash@{n}` syntax. Instead, you can reference stashes with a simple integer indicating their position in the stash reflog:
```
$ git stash show 1
$ git stash apply 1
$ git stash pop 1
```
And so forth. If you'd like to learn more about how stashes are stored, I wrote a little bit about it in [this tutorial][71].
### </2016> <2017>
And we're done. Thanks for reading! I hope you enjoyed reading this behemoth as much as I enjoyed spelunking through Git's source code, release notes, and `man` pages to write it. If you think I missed anything big, please leave a comment or let me know [on Twitter][72] and I'll endeavor to write a follow-up piece.
As for what's next for Git, that's up to the maintainers and contributors (which [could be you!][73]). With ever-increasing adoption, I'm guessing that simplification, improved UX, and better defaults will be strong themes for Git in 2017. As Git repositories get bigger and older, I suspect we'll also see continued focus on performance and improved handling of large files, deep trees, and long histories.
If you're into Git and excited to meet some of the developers behind the project, consider coming along to [Git Merge][74] in Brussels in a few weeks' time. I'm [speaking there][75]! But more importantly, many of the developers who maintain Git will be in attendance for the conference and the annual Git Contributors Summit, which will likely drive much of the direction for the year ahead.
Or if you can't wait 'til then, head over to Atlassian's excellent selection of [Git tutorials][76] for more tips and tricks to improve your workflow.
_If you scrolled to the end looking for the footnotes from the first paragraph, please jump to the [[Citation needed]][77] section for the commands used to generate the stats. Gratuitous cover image generated using [instaco.de][78]._
--------------------------------------------------------------------------------
via: https://hackernoon.com/git-in-2016-fad96ae22a15#.t5c5cm48f
作者:[Tim Pettersen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://hackernoon.com/@kannonboy?source=post_header_lockup
[1]:https://medium.com/@g.kylafas/the-git-config-command-is-missing-a-yes-at-the-end-as-in-git-config-global-commit-verbose-yes-7e126365750e?source=responses---------1----------
[2]:https://medium.com/@kannonboy/thanks-giorgos-fixed-f3b83c61589a?source=responses---------1----------
[3]:https://medium.com/@TomSwirly/i-read-the-whole-thing-from-start-to-finish-415a55d89229?source=responses---------0-31---------
[4]:https://medium.com/@g.kylafas
[5]:https://medium.com/@g.kylafas?source=responses---------1----------
[6]:https://medium.com/@kannonboy
[7]:https://medium.com/@kannonboy?source=responses---------1----------
[8]:https://medium.com/@TomSwirly
[9]:https://medium.com/@TomSwirly?source=responses---------0-31---------
[10]:https://medium.com/@g.kylafas/the-git-config-command-is-missing-a-yes-at-the-end-as-in-git-config-global-commit-verbose-yes-7e126365750e?source=responses---------1----------#--responses
[11]:https://hackernoon.com/@kannonboy
[12]:https://hackernoon.com/@kannonboy?source=placement_card_footer_grid---------0-44
[13]:https://medium.freecodecamp.com/@BillSourour
[14]:https://medium.freecodecamp.com/@BillSourour?source=placement_card_footer_grid---------1-43
[15]:https://blog.uncommon.is/@lut4rp
[16]:https://blog.uncommon.is/@lut4rp?source=placement_card_footer_grid---------2-43
[17]:https://medium.com/@kannonboy
[18]:https://medium.com/@kannonboy
[19]:https://medium.com/@g.kylafas/the-git-config-command-is-missing-a-yes-at-the-end-as-in-git-config-global-commit-verbose-yes-7e126365750e?source=responses---------1----------
[20]:https://medium.com/@kannonboy/thanks-giorgos-fixed-f3b83c61589a?source=responses---------1----------
[21]:https://medium.com/@TomSwirly/i-read-the-whole-thing-from-start-to-finish-415a55d89229?source=responses---------0-31---------
[22]:https://hackernoon.com/setting-breakpoints-on-a-snowy-evening-df34fc3168e2?source=placement_card_footer_grid---------0-44
[23]:https://medium.freecodecamp.com/the-code-im-still-ashamed-of-e4c021dff55e?source=placement_card_footer_grid---------1-43
[24]:https://blog.uncommon.is/using-git-to-generate-versionname-and-versioncode-for-android-apps-aaa9fc2c96af?source=placement_card_footer_grid---------2-43
[25]:https://hackernoon.com/git-in-2016-fad96ae22a15#fd10
[26]:https://hackernoon.com/git-in-2016-fad96ae22a15#cc52
[27]:https://hackernoon.com/git-in-2016-fad96ae22a15#42b9
[28]:https://hackernoon.com/git-in-2016-fad96ae22a15#4208
[29]:https://hackernoon.com/git-in-2016-fad96ae22a15#a5c3
[30]:https://hackernoon.com/git-in-2016-fad96ae22a15#c230
[31]:https://hackernoon.com/tagged/git?source=post
[32]:https://hackernoon.com/tagged/web-development?source=post
[33]:https://hackernoon.com/tagged/software-development?source=post
[34]:https://hackernoon.com/tagged/programming?source=post
[35]:https://hackernoon.com/tagged/atlassian?source=post
[36]:https://hackernoon.com/@kannonboy
[37]:https://hackernoon.com/?source=footer_card
[38]:https://hackernoon.com/setting-breakpoints-on-a-snowy-evening-df34fc3168e2?source=placement_card_footer_grid---------0-44
[39]:https://medium.freecodecamp.com/the-code-im-still-ashamed-of-e4c021dff55e?source=placement_card_footer_grid---------1-43
[40]:https://blog.uncommon.is/using-git-to-generate-versionname-and-versioncode-for-android-apps-aaa9fc2c96af?source=placement_card_footer_grid---------2-43
[41]:https://hackernoon.com/git-in-2016-fad96ae22a15#fd10
[42]:https://hackernoon.com/git-in-2016-fad96ae22a15#fd10
[43]:https://hackernoon.com/git-in-2016-fad96ae22a15#cc52
[44]:https://hackernoon.com/git-in-2016-fad96ae22a15#cc52
[45]:https://hackernoon.com/git-in-2016-fad96ae22a15#42b9
[46]:https://hackernoon.com/git-in-2016-fad96ae22a15#4208
[47]:https://hackernoon.com/git-in-2016-fad96ae22a15#a5c3
[48]:https://hackernoon.com/git-in-2016-fad96ae22a15#c230
[49]:https://git-scm.com/docs/git-worktree
[50]:https://git-scm.com/book/en/v2/Git-Tools-Debugging-with-Git#Binary-Search
[51]:https://www.atlassian.com/git/tutorials/git-lfs/#speeding-up-clones
[52]:https://www.atlassian.com/git/tutorials/git-stash/
[53]:https://hackernoon.com/@kannonboy?source=footer_card
[54]:https://hackernoon.com/?source=footer_card
[55]:https://hackernoon.com/@kannonboy?source=post_header_lockup
[56]:https://hackernoon.com/@kannonboy?source=post_header_lockup
[57]:https://hackernoon.com/git-in-2016-fad96ae22a15#c8e9
[58]:https://hackernoon.com/git-in-2016-fad96ae22a15#408a
[59]:https://hackernoon.com/git-in-2016-fad96ae22a15#315b
[60]:https://hackernoon.com/git-in-2016-fad96ae22a15#dbfb
[61]:https://hackernoon.com/git-in-2016-fad96ae22a15#2220
[62]:https://hackernoon.com/git-in-2016-fad96ae22a15#bc78
[63]:https://www.atlassian.com/git/tutorials/install-git/
[64]:https://www.atlassian.com/git/tutorials/what-is-git/
[65]:https://www.atlassian.com/git/tutorials/git-lfs/
[66]:https://twitter.com/kit3bus
[67]:https://confluence.atlassian.com/bitbucket/bitbucket-lfs-media-adapter-856699998.html
[68]:https://developer.atlassian.com/blog/2015/10/monorepos-in-git/
[69]:https://blogs.atlassian.com/2013/08/what-you-need-to-know-about-the-new-git-1-8-4/
[70]:https://www.atlassian.com/git/tutorials/refs-and-the-reflog/
[71]:https://www.atlassian.com/git/tutorials/git-stash/#how-git-stash-works
[72]:https://twitter.com/kannonboy
[73]:https://git.kernel.org/cgit/git/git.git/tree/Documentation/SubmittingPatches
[74]:http://git-merge.com/
[75]:http://git-merge.com/#git-aliases
[76]:https://www.atlassian.com/git/tutorials
[77]:https://hackernoon.com/git-in-2016-fad96ae22a15#87c4
[78]:http://instaco.de/
[79]:https://medium.com/@Medium/personalize-your-medium-experience-with-users-publications-tags-26a41ab1ee0c#.hx4zuv3mg
[80]:https://hackernoon.com/

View File

@ -1,147 +0,0 @@
ictlyh Translating
Partition Backup
============
Many times you may have data on a partition, especially on a Universal Serial Bus (USB) drive. It may be necessary at times to make a copy of a drive or a single partition on it. Raspberry Pi users definitely have this need for the bootable SD Cards. Owners of other small form computers can also find this useful. Sometimes it is best to make a backup quickly if a device seems to be failing.
To perform the examples in this article you will need a utility called 'dcfldd'.
**The dcfldd Utility**
The utility is an enhancement of the 'dd' utility from the 'coreutils' package. The 'dcfldd' utility was made by Nicholas Harbour while he worked at the Department of Defense Computer Forensics Lab (DCFL), which is where its name comes from: 'dcfldd'.
For systems still using coreutils version 8.23 or earlier, there isn't an option to easily see the progress of the copy being performed. Sometimes it seems as if nothing is happening and you are tempted to stop the copy.
**NOTE:** If you have dd version 8.24 or later then you need not use dcfldd; simply replace any 'dcfldd' in the examples with 'dd'. All other parameters will work.
On a Debian system simply use your Package Manager and search for 'dcfldd'. You can also open a Terminal and use the command:
_sudo apt-get install dcfldd_
For a Red Hat system try the following:
_cd /tmp
wget dl.fedoraproject.org/pub/epel/6/i386/dcfldd-1.3.4.1-4.el6.i686.rpm
sudo yum install dcfldd-1.3.4.1-4.el6.i686.rpm
dcfldd --version_
**NOTE:** The above installs the 32-bit version. For the 64-bit version use the following commands:
_cd /tmp
wget dl.fedoraproject.org/pub/epel/6/x86_64/dcfldd-1.3.4.1-4.el6.x86_64.rpm
sudo yum install dcfldd-1.3.4.1-4.el6.x86_64.rpm
dcfldd --version_
The last command of each set lists the version of 'dcfldd' and confirms that the command has been installed.
**NOTE:** Be sure to execute dd or dcfldd as root.
Now that you have the command installed you can continue on with using it to backup and restore partitions.
**Backup Partitions**
When backing up a drive it is possible to back up the whole drive or a single partition. If the drive has multiple partitions then each can be backed up separately.
Before we get too far in performing a backup, let's look at the difference between a drive and a partition. Let's assume we have an SD Card which is formatted as one large drive. The SD Card contains only one partition. If the space is split to allow the SD Card to be seen as two drives, then it has two partitions. If a program like GParted were opened for an SD Card, as shown in Figure 1, you can see it has two partitions.
**FIGURE 1**
The drive is /dev/sdc and contains both partitions /dev/sdc1 and /dev/sdc2.
Let's take, for instance, an SD Card from a Raspberry Pi. The SD Card is 8 GB in size and has two partitions (as shown in Figure 1). The first partition contains BerryBoot which is a boot loader. The second partition contains Kali. There is no space available to install a second Operating System (OS). A second SD Card is to be used which is 16 GB in size, but to copy it to the second SD Card the first one must be backed up.
To back up the first SD Card we will back up the drive which is /dev/sdc. The command to perform the backup is as follows:
_dcfldd if=/dev/sdc of=/tmp/SD-Card-Backup.img_
The backup is made of the Input File (if) and the output file (of) is set to the folder '/tmp' and a file called 'SD-Card-Backup.img'.
Both 'dd' and 'dcfldd' read and write the data in blocks. With the above command, a default of 512 bytes is read and written at a time. Keep in mind that the copy is an exact copy, bit for bit and byte for byte.
The default of 512 bytes can be changed with the parameter for Block Size - 'bs='. For instance, to read/write 1 megabyte at a time, use the parameter 'bs=1M'. There can be some discrepancy in the abbreviations; they are used as follows:
* b 512 bytes
* KB 1000 bytes
* K 1024 bytes
* MB 1000x1000 bytes
* M 1024x1024 bytes
* GB 1000x1000x1000 bytes
* G 1024x1024x1024 bytes
You can specify the read blocks and write blocks separately. To specify the read amount use ibs=. To specify the write amount then use obs=.
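For example, to repeat the earlier backup using 1 MB blocks, or with separate read and write sizes, the commands would look like the following (the sizes shown are only examples; tune them for your hardware):
_dcfldd if=/dev/sdc of=/tmp/SD-Card-Backup.img bs=1M_
_dcfldd if=/dev/sdc of=/tmp/SD-Card-Backup.img ibs=1M obs=4M_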
I performed a test to back up a 120 MB partition using three different Block Sizes. The first was the default of 512 bytes and it took 7 seconds. The second was a Block Size of 1024 K and took 2 seconds. The third had a Block Size of 2048 K and took 3 seconds. The times will vary depending on the system and various other hardware implementations, but in general block sizes larger than the default can be a little faster.
Once you have a backup made you will need to know how to restore the data to a drive.
**Restore Partitions**
Now that we have a backup, we can assume that at some point the data may become corrupted or need to be restored for some reason.
The command is the same as the backup except that the source and target are reversed. In the above example the command would be changed to:
_dcfldd of=/dev/sdc if=/tmp/SD-Card-Backup.img_
Here, the image file is being used as the input file (if) and the drive (sdc) is used as the output file (of).
**NOTE:** Do remember that the output device will be overwritten and all data on it will be lost. It is usually best to remove all partitions from the SD Card with GParted before restoring data.
If you have a use for multiple SD Cards, such as having multiple Raspberry Pi boards, you can write to multiple cards at once. To do this you need to know each card's ID on the system. For example, let's say we want to copy the image BerryBoot.img to two SD Cards. The SD Cards are at /dev/sdc and /dev/sdd. The command will be set to read/write in 1 MB Blocks while showing the progress. The command would be:
_dcfldd if=BerryBoot.img bs=1M status=progress | tee >(dcfldd of=/dev/sdc) | dcfldd of=/dev/sdd_
In this command, the first dcfldd uses the input file and sets the Block Size to 1 MB. The status is set to show the progress. The input is then piped (|) to the command tee. The tee command is used to split the input to multiple places. The first output is to the command (dcfldd of=/dev/sdc). The command is in parentheses and will be performed as a command. The last pipe (|) is needed, otherwise the tee command will send the information to stdout (the screen) as well. So the final output is to the command _dcfldd of=/dev/sdd_. If you had a third card, or even more, simply add another redirector and command, such as _>(dcfldd of=/dev/sde)_.
**NOTE:** Do remember that the final command must be after a pipe (|).
Data being written must be verified at times to be sure the data is correct.
**Verify Data**
Once an image is made or a backup restored you can verify the data being written. To verify data you will use a different program called _diff_.
To use diff you will need to designate the location of the image file and the physical media where it was copied from or written to on the system. The _diff_ command can be used after the backup was made or after the image was restored.
The command has two parameters. The first is the physical media and the second is the image file name.
From the example of _dcfldd of=/dev/sdc if=/tmp/SD-Card-Backup.img_ the _diff_ command would be:
_diff /dev/sdc /tmp/SD-Card-Backup.img_
If any differences are found between the image and the physical device you will be alerted. If no messages are given then the data has been verified as identical.
Making sure the data is identical is key to verifying the integrity of the backup or restore. One main problem to watch for when performing a Backup is image size.
**Splitting The Image**
Let's assume you want to back up a 16 GB SD Card. The image will then be approximately the same size. What if you can only back it up to a FAT32 partition? The maximum file size limit on FAT32 is 4 GB.
What must be done is that the file will have to be split into 4 GB pieces. The splitting of the image file can be done while it is being written by piping (|) the data to the _split_ command.
The backup is performed in the same way, but the command will include the pipe and split command. The example backup command is _dcfldd if=/dev/sdc of=/tmp/SD-Card-Backup.img_ and the new command for splitting the file is as follows:
_dcfldd if=/dev/sdc | split -b 4000MB - /tmp/SD-Card-Backup.img_
**NOTE:** The size suffix means the same as for the _dd_ and _dcfldd_ commands. The dash by itself in the _split_ command stands for the input, which is being piped from the _dcfldd_ command.
The files will be saved as _SD-Card-Backup.imgaa_ and _SD-Card-Backup.imgab_ and so on. If you are worried about the file size being too close to the 4 GB limit, then try 3500MB.
Restoring the files back to the drive is simple. You use the command _cat_ to join them and then write the output using _dcfldd_ as follows:
_cat /tmp/SD-Card-Backup.img* | dcfldd of=/dev/sdc_
You can include any desired parameters to the command for the _dcfldd_ portion.
I hope you understand and can perform any needed backup and restoration of data as you need for SD Cards and the like.
--------------------------------------------------------------------------------
via: https://www.linuxforum.com/threads/partition-backup.3638/
作者:[Jarret][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxforum.com/members/jarret.268/

View File

@ -1,3 +1,5 @@
vim-kakali translating
3 open source music players: Aqualung, Lollypop, and GogglesMM
============================================================
![3 open source music players: Aqualung, Lollypop, and GogglesMM](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/music-birds-recording-520.png?itok=wvh1g4Lw "3 open source music players: Aqualung, Lollypop, and GogglesMM")

View File

@ -1,102 +0,0 @@
   Vic020
Using the AWS SDK for Gos Regions and Endpoints Metadata
============================================================
In release [v1.6.0][1] of the [AWS SDK for Go][2], we added Regions and Endpoints metadata to the SDK. This feature enables you to easily enumerate the metadata and discover Regions, Services, and Endpoints. You can find this feature in the [github.com/aws/aws-sdk-go/aws/endpoints][3] package.
The endpoints package provides a simple interface to get a service's endpoint URL and enumerate the Region metadata. The metadata is grouped into partitions. Each partition is a group of AWS Regions such as AWS Standard, AWS China, and AWS GovCloud (US).
### Resolving Endpoints
The SDK automatically uses the endpoints.DefaultResolver function when setting the SDK's default configuration. You can resolve endpoints yourself by calling the EndpointFor methods in the endpoints package.
```go
// Resolve endpoint for S3 in us-west-2
resolver := endpoints.DefaultResolver()
endpoint, err := resolver.EndpointFor(endpoints.S3ServiceID, endpoints.UsWest2RegionID)
if err != nil {
    fmt.Println("failed to resolve endpoint", err)
    return
}
fmt.Println("Resolved URL:", endpoint.URL)
```
If you need to add custom endpoint resolution logic to your code, you can implement the endpoints.Resolver interface, and set the value to aws.Config.EndpointResolver. This is helpful when you want to provide custom endpoint logic that the SDK will use for resolving service endpoints.
The following example creates a Session that is configured so that [Amazon S3][4] service clients are constructed with a custom endpoint.
```go
// Keep a reference to the default resolver so we can fall back to it.
defaultResolver := endpoints.DefaultResolver()
s3CustResolverFn := func(service, region string, optFns ...func(*endpoints.Options)) (endpoints.ResolvedEndpoint, error) {
    if service == "s3" {
        return endpoints.ResolvedEndpoint{
            URL:           "s3.custom.endpoint.com",
            SigningRegion: "custom-signing-region",
        }, nil
    }
    // Use the SDK's default resolution logic for all other services.
    return defaultResolver.EndpointFor(service, region, optFns...)
}
sess := session.Must(session.NewSessionWithOptions(session.Options{
    Config: aws.Config{
        Region:           aws.String("us-west-2"),
        EndpointResolver: endpoints.ResolverFunc(s3CustResolverFn),
    },
}))
```
### Partitions
The return value of the endpoints.DefaultResolver function can be cast to the endpoints.EnumPartitions interface. This will give you access to the slice of partitions that the SDK will use, and can help you enumerate over partition information for each partition.
```go
// Iterate through all partitions printing each partition's ID.
resolver := endpoints.DefaultResolver()
partitions := resolver.(endpoints.EnumPartitions).Partitions()
for _, p := range partitions {
    fmt.Println("Partition:", p.ID())
}
```
In addition to the list of partitions, the endpoints package also includes a getter function for each partition group. These utility functions enable you to enumerate a specific partition without having to cast and enumerate over all of the default resolver's partitions.
```go
partition := endpoints.AwsPartition()
region := partition.Regions()[endpoints.UsWest2RegionID]
fmt.Println("Services in region:", region.ID())
for id := range region.Services() {
    fmt.Println(id)
}
```
Once you have a Region or Service value, you can call ResolveEndpoint on it. This provides a filtered view of the Partition when resolving endpoints.
Check out the AWS SDK for Go repo for [more examples][5]. Let us know in the comments what you think of the endpoints package.
--------------------------------------------------------------------------------
via: https://aws.amazon.com/cn/blogs/developer/using-the-aws-sdk-for-gos-regions-and-endpoints-metadata
作者:[ Jason Del Ponte][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://aws.amazon.com/cn/blogs/developer/using-the-aws-sdk-for-gos-regions-and-endpoints-metadata
[1]:https://github.com/aws/aws-sdk-go/releases/tag/v1.6.0
[2]:https://github.com/aws/aws-sdk-go
[3]:http://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/
[4]:https://aws.amazon.com/s3/
[5]:https://github.com/aws/aws-sdk-go/tree/master/example/aws/endpoints

View File

@ -1,155 +0,0 @@
GitFuture is translating
A beginner's guide to comparing files using visual diff/merge tool Meld on Linux
============================================================
### On this page
1. [About Meld][1]
2. [Meld Installation][2]
3. [Meld Usage][3]
4. [Conclusion][4]
Now that we've [covered][5] [some][6] command line-based diff/merge tools in Linux, it'd be logical to explain some visual diff/merge tools available for the OS as well. The reason being, not everybody is used to the command line, and command line-based comparison tools can be more difficult to learn and understand for some.
So, we'll kick off this new series with a GUI-based tool dubbed **Meld**.
But before we jump onto the installation and explanation part, it'd be worth sharing that all the instructions and examples presented in this tutorial have been tested on Ubuntu 14.04 and the Meld version we've used is 3.14.2.
### About Meld
[Meld][7] is basically a visual comparison and merging tool that's primarily aimed at developers (however, rest assured that we'll be explaining the tool keeping in mind end-users). The tool supports both two- and three-way comparisons, and not only lets you compare files, but directories and version controlled projects as well.
"Meld helps you review code changes and understand patches," the official website says. "It might even help you to figure out what is going on in that merge you keep avoiding." The tool is licensed under GPL v2.
### Meld Installation
If you are using Ubuntu or any-other Debian-based Linux distro, you can download and install Meld using the following command:
sudo apt-get install meld
Alternatively, you can also use your system's package manager to download the tool. For example, on Ubuntu, you can use the Ubuntu Software Center, or [Ubuntu Software][8], which has replaced the former starting version 16.04 of the OS.
However, it may be possible that the version of Meld in Ubuntu's official repositories is old. So, if you want to use a more recent version, you can download the package from [here][9]. If you choose this method, all you have to do is extract the downloaded package and then run the 'meld' binary present under the 'bin' folder:
~/Downloads/meld-3.14.2/bin$ **./meld** 
FYI, following are the packages that Meld requires:
* Python 2.7 (Python 3.3 in development)
* GTK+ 3.14
* GLib 2.36
* PyGObject 3.14
* GtkSourceView 3.14
* pycairo
### Meld Usage
When the tool is launched, you'll see a screen similar to the following:
[
![Meld started](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-launch-screen-1.png)
][10]
So you have three options: File comparison, Directory comparison, and Version control view.
Click the 'File comparison' option, and you'll be asked to select the files to compare:
[
![Meld file comparison](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-file-comparison-2.png)
][11]
As clear from the screenshot above, Meld also lets you perform 3-way comparisons, but - in this first part of this article series - we'll stick to two-way comparisons that are more common.
Moving on, select the files that you want to compare and then click the 'Compare' button. You'll see that the tool opens both files side by side and also highlights the differing lines (as well as differences).
[
![Compare files in Meld](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-diff-in-action-3.png)
][12]
So the difference is in the second line of both files, the actual difference being the extra '3' in the second line of file2. The black arrows that you see are there to perform the merge/change operation. The right arrow, in this case, will replace the second line in 'file2' with the corresponding line from 'file1'. The left arrow will do the opposite.
After making changes, you can do a Ctrl+s to save them.
So that was a simple example to let you know how Meld works on a basic level. Let's take a look at a slightly more complicated comparison:
[
![Meld advanced file comparison](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-multiple-changes-4.png)
][13]
Before discussing the changes, it's worth mentioning here that there are areas in Meld GUI that give you visual overview of the changes between the files. Specifically, what we're trying to bring into your notice here are vertical bars at the left and right-hand sides of the window. For example, see the following screenshot:
[
![Visual Comparison](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-multiple-colors-5.png)
][14]
If you observe closely, the bar in the screenshot above contains some coloured blocks. These blocks are designed to give you an overview of all of the differences between the two files. "Each coloured block represents a section that is inserted, deleted, changed or in conflict between your files, depending on the block's colour used," the official documentation explains.
Now, let's come back to the example we were discussing. The following screenshots show how easy is to understand file changes (as well as merge them) when using Meld:
[
![File changes visualized in Meld](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-makes-it-easy-6.png)
][15]
[
![Meld Example 2](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-makes-it-easy-7.png)
][16]
[
![Meld Example 3](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-makes-it-easy-8.png)
][17]
Moving on, so far, we jumped from one change to another by scrolling the files. However, there may be times when the files being compared are very large, making it difficult to scroll every time you want to jump to a change. For this, you can use the orange-colored arrows in the toolbar which itself sits above the editing area:
[
![Go to next change in Meld](https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/meld-go-next-prev-9.png)
][18]
Here's how you do some of the common things while using Meld: you can use the standard Ctrl+f key combination to find something in the editor area, press the F11 key to make the tool go into full-screen mode, and Ctrl+r to refresh (usually used when either or both of the files being compared have changed).
Following are some of the key features that the official Meld website advertises:
* Two- and three-way comparison of files and directories
* File comparisons update as you type
* Auto-merge mode and actions on change blocks help make merges easier
* Visualisations make it easier to compare your files
* Supports Git, Bazaar, Mercurial, Subversion, etc.
Note that the list above is not exhaustive. The website contains a dedicated [Features page][19] that contains an exhaustive list of features that Meld offers. All the features listed there are divided in sections based on whether the tool is being used for file comparison, directory comparison, version control, or in the merge mode.
Like any other software tool, there are certain things that Meld can't do. The official website lists at least one of them: "When Meld shows differences between files, it shows both files as they would appear in a normal text editor. It does not insert additional lines so that the left and right sides of a particular change are the same size. There is no option to do this."
### Conclusion
We've just scratched the surface here, as Meld is capable of doing a lot more. But it's ok for now, given that it's the first part of the tutorial series. Just to give you an idea about Meld's capabilities, you can configure the tool to ignore certain type of changes, ask it to move, copy or delete individual differences between files, as well as launch it from the command line. We'll discuss all these key functionalities in upcoming parts of this tutorial series.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/
作者:[Ansh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/
[1]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/#about-meld
[2]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/#meld-installation
[3]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/#meld-usage
[4]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/#conclusion
[5]:https://www.howtoforge.com/tutorial/linux-diff-command-file-comparison/
[6]:https://www.howtoforge.com/tutorial/how-to-compare-three-files-in-linux-using-diff3-tool/
[7]:http://meldmerge.org/
[8]:https://www.howtoforge.com/tutorial/ubuntu-16-04-lts-overview/
[9]:https://git.gnome.org/browse/meld/refs/tags
[10]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-launch-screen-1.png
[11]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-file-comparison-2.png
[12]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-diff-in-action-3.png
[13]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-multiple-changes-4.png
[14]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-multiple-colors-5.png
[15]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-makes-it-easy-6.png
[16]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-makes-it-easy-7.png
[17]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-makes-it-easy-8.png
[18]:https://www.howtoforge.com/images/how-to-use-visual-diff-and-merge-tools-in-linux-meld-and-kdiff/big/meld-go-next-prev-9.png
[19]:http://meldmerge.org/features.html

View File

@ -1,562 +0,0 @@
How to Install Elastic Stack on CentOS 7
============================================================
### On this page
1. [Step 1 - Prepare the Operating System][1]
2. [Step 2 - Install Java][2]
3. [Step 3 - Install and Configure Elasticsearch][3]
4. [Step 4 - Install and Configure Kibana with Nginx][4]
5. [Step 5 - Install and Configure Logstash][5]
6. [Step 6 - Install and Configure Filebeat on the CentOS Client][6]
7. [Step 7 - Install and Configure Filebeat on the Ubuntu Client][7]
8. [Step 8 - Testing][8]
9. [Reference][9]
**Elasticsearch** is an open source search engine based on Lucene, developed in Java. It provides a distributed and multitenant full-text search engine with an HTTP dashboard web interface (Kibana). The data is queried, retrieved and stored with a JSON document scheme. Elasticsearch is a scalable search engine that can be used to search all kinds of text documents, including log files. Elasticsearch is the heart of the 'Elastic Stack' or ELK Stack.
**Logstash** is an open source tool for managing events and logs. It provides real-time pipelining for data collections. Logstash will collect your log data, convert the data into JSON documents, and store them in Elasticsearch.
**Kibana** is an open source data visualization tool for Elasticsearch. Kibana provides a pretty dashboard web interface. It allows you to manage and visualize data from Elasticsearch. It's not just beautiful, but also powerful.
In this tutorial, I will show you how to install and configure Elastic Stack on a CentOS 7 server for monitoring server logs. Then I'll show you how to install 'Elastic Beats' on CentOS 7 and Ubuntu 16 client operating systems.
**Prerequisite**
* CentOS 7 64 bit with 4GB of RAM - elk-master
* CentOS 7 64 bit with 1 GB of RAM - client1
* Ubuntu 16 64 bit with 1GB of RAM - client2
### Step 1 - Prepare the Operating System
In this tutorial, we will disable SELinux on the CentOS 7 server. Edit the SELinux configuration file.
vim /etc/sysconfig/selinux
Change SELINUX value from enforcing to disabled.
SELINUX=disabled
Then reboot the server.
reboot
Login to the server again and check the SELinux state.
getenforce
Make sure the result is disabled.
### Step 2 - Install Java
Java is required for the Elastic Stack deployment. Elasticsearch requires Java 8; it is recommended to use the Oracle JDK 1.8. I will install Java 8 from the official Oracle rpm package.
Download Java 8 JDK with the wget command.
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"
Then install it with this rpm command:
rpm -ivh jdk-8u77-linux-x64.rpm
Finally, check java JDK version to ensure that it is working properly.
java -version
You will see Java version of the server.
### Step 3 - Install and Configure Elasticsearch
In this step, we will install and configure Elasticsearch. I will install Elasticsearch from an rpm package provided by elastic.co and configure it to run on localhost (to make the setup secure and ensure that it is not reachable from the outside).
Before installing Elasticsearch, add the elastic.co key to the server.
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Next, download Elasticsearch 5.1 with wget and then install it.
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
rpm -ivh elasticsearch-5.1.1.rpm
Elasticsearch is installed. Now go to the configuration directory and edit the elasticsearch.yml configuration file.
cd /etc/elasticsearch/
vim elasticsearch.yml
Enable memory lock for Elasticsearch by uncommenting line 40. This disables memory swapping for Elasticsearch.
bootstrap.memory_lock: true
In the 'Network' block, uncomment the network.host and http.port lines.
network.host: localhost
http.port: 9200
Save the file and exit the editor.
Now edit the elasticsearch.service file for the memory lock configuration.
vim /usr/lib/systemd/system/elasticsearch.service
Uncomment the LimitMEMLOCK line.
LimitMEMLOCK=infinity
Save and exit.
Edit the sysconfig configuration file for Elasticsearch.
vim /etc/sysconfig/elasticsearch
Uncomment line 60 and make sure the value is 'unlimited'.
MAX_LOCKED_MEMORY=unlimited
Save and exit.
The Elasticsearch configuration is finished. Elasticsearch will run on the localhost IP address on port 9200, and we have disabled memory swapping for it by enabling mlockall on the CentOS server.
Reload systemd, enable Elasticsearch to start at boot time, then start the service.
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
Wait a moment for Elasticsearch to start, then check the open ports on the server and make sure the 'State' for port 9200 is 'LISTEN'.
netstat -plntu
[
![Check elasticsearch running on port 9200](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/1.png)
][10]
Then check the memory lock to ensure that mlockall is enabled, and verify that Elasticsearch is running, using the commands below.
curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
curl -XGET 'localhost:9200/?pretty'
You will see the results below.
[
![Check memory lock elasticsearch and check status](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/2.png)
][11]
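In the first response, the interesting part is the mlockall flag; trimmed down (node ID abbreviated), it should look roughly like this, with true meaning the memory lock is active:

```
{
  "nodes" : {
    "<node-id>" : {
      "process" : {
        "mlockall" : true
      }
    }
  }
}
```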
### Step 4 - Install and Configure Kibana with Nginx
In this step, we will install and configure Kibana with an Nginx web server. Kibana will listen on the localhost IP address, and Nginx will act as a reverse proxy for the Kibana application.
Download Kibana 5.1 with wget, then install it with the rpm command:
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
rpm -ivh kibana-5.1.1-x86_64.rpm
Now edit the Kibana configuration file.
vim /etc/kibana/kibana.yml
Uncomment the configuration lines for server.port, server.host and elasticsearch.url.
server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
Save and exit.
Add Kibana to run at boot and start it.
sudo systemctl enable kibana
sudo systemctl start kibana
Kibana will run on port 5601 as a Node.js application.
netstat -plntu
[
![Kibana running as node application on port 5601](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/3.png)
][12]
The Kibana installation is finished. Now we need to install Nginx and configure it as a reverse proxy so that Kibana is reachable from the public IP address.
Nginx is available in the EPEL repository; install epel-release with yum.
yum -y install epel-release
Next, install the Nginx and httpd-tools package.
yum -y install nginx httpd-tools
The httpd-tools package contains tools for the web server; we will use its htpasswd utility to set up basic authentication for Kibana.
Edit the Nginx configuration file and remove the **server { }** block, so that we can add a new virtual host configuration.
cd /etc/nginx/
vim nginx.conf
Remove the server { } block.
[
![Remove Server Block on Nginx configuration](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/4.png)
][13]
Save and exit.
Now we need to create a new virtual host configuration file in the conf.d directory. Create the new file 'kibana.conf' with vim.
vim /etc/nginx/conf.d/kibana.conf
Paste the configuration below (the server_name value 'elk-stack.co' is an example domain; replace it with your own).
```
server {
    listen 80;
    server_name elk-stack.co;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```
Save and exit.
Then create a new basic authentication file with the htpasswd command.
sudo htpasswd -c /etc/nginx/.kibana-user admin
Type your password when prompted.
Test the Nginx configuration and make sure there are no errors. Then add Nginx to run at boot time and start it.
nginx -t
systemctl enable nginx
systemctl start nginx
[
![Add nginx virtual host configuration for Kibana Application](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/5.png)
][14]
### Step 5 - Install and Configure Logstash
In this step, we will install Logstash and configure it to centralize server logs from the clients with Filebeat, then filter and transform the syslog data and move it into the stash (Elasticsearch).
Download Logstash and install it with rpm.
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
rpm -ivh logstash-5.1.1.rpm
Generate a new SSL certificate file so that the client can identify the elastic server.
Go to the tls directory and edit the openssl.cnf file.
cd /etc/pki/tls
vim openssl.cnf
Add a new line in the '[ v3_ca ]' section for the server identification (replace the IP address with your Logstash server's IP).
[ v3_ca ]
# Server IP Address
subjectAltName = IP: 10.0.15.10
Save and exit.
Generate the certificate file with the openssl command.
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
The certificate files can be found in the '/etc/pki/tls/certs/' and '/etc/pki/tls/private/' directories.
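Optionally, you can inspect the generated certificate to confirm that the IP address was embedded as a subject alternative name:

```
openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 "Subject Alternative Name"
```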
Next, we will create new configuration files for Logstash. We will create a new 'filebeat-input.conf' file to configure the log sources for filebeat, then a 'syslog-filter.conf' file for syslog processing and the 'output-elasticsearch.conf' file to define the Elasticsearch output.
Go to the logstash configuration directory and create the new configuration files in the 'conf.d' subdirectory.
cd /etc/logstash/
vim conf.d/filebeat-input.conf
Input configuration: paste the configuration below.
```
input {
  beats {
    port => 5443
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```
Save and exit.
Create the syslog-filter.conf file.
vim conf.d/syslog-filter.conf
Paste the configuration below.
```
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
```
We use a filter plugin named '**grok**' to parse the syslog files.
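As an illustration (with a made-up log line), an entry such as the one below would be split by this pattern into syslog_timestamp, syslog_hostname ('elk-client1'), syslog_program ('sshd'), syslog_pid ('1234') and syslog_message (the rest of the line):

```
Feb  3 12:04:05 elk-client1 sshd[1234]: Failed password for invalid user admin from 203.0.113.5 port 51234 ssh2
```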
Save and exit.
Create the output configuration file 'output-elasticsearch.conf'.
vim conf.d/output-elasticsearch.conf
Paste the configuration below.
```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```
Save and exit.
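Before starting the service, it is worth validating the pipeline configuration. With the default paths of the Logstash 5.x RPM package (an assumption; adjust if your layout differs), something like this should work:

```
/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/
```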
Finally, add Logstash to start at boot time and start the service.
sudo systemctl enable logstash
sudo systemctl start logstash
[
![Logstash started on port 5443 with SSL Connection](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/6.png)
][15]
### Step 6 - Install and Configure Filebeat on the CentOS Client
Beats are data shippers: lightweight agents that can be installed on client nodes to send large amounts of data from the client machine to the Logstash or Elasticsearch server. There are 4 Beats available: 'Filebeat' for log files, 'Metricbeat' for metrics, 'Packetbeat' for network data and 'Winlogbeat' for the Windows event log.
In this tutorial, I will show you how to install and configure 'Filebeat' to transfer data log files to the Logstash server over an SSL connection.
Log in to the client1 server, then copy the certificate file from the elastic server to the client1 server.
ssh root@client1IP
Copy the certificate file with the scp command.
scp root@elk-serverIP:/etc/pki/tls/certs/logstash-forwarder.crt .
Type the elk-server password when prompted.
Create a new directory and move the certificate file to that directory.
sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
Next, import the elastic key on the client1 server.
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Download Filebeat and install it with rpm.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
rpm -ivh filebeat-5.1.1-x86_64.rpm
Filebeat has been installed. Go to the configuration directory and edit the file 'filebeat.yml'.
cd /etc/filebeat/
vim filebeat.yml
In the paths section around line 21, add the new log files. We will add two files: '/var/log/secure' for ssh activity and '/var/log/messages' for the general server log.
  paths:
    - /var/log/secure
    - /var/log/messages
Add a new configuration on line 26 to set the document type to syslog.
  document_type: syslog
Filebeat uses Elasticsearch as the output target by default. In this tutorial, we will change it to Logstash. Disable the Elasticsearch output by commenting out lines 83 and 85, as shown below.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]
Now add the new Logstash output configuration: uncomment the logstash output section and change the values to the configuration shown below.
output.logstash:
  # The Logstash hosts
  hosts: ["10.0.15.10:5443"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false
Save the file and exit vim.
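Before starting Filebeat, you can optionally check from this client that the TLS connection to Logstash works (assuming the Logstash server is reachable at 10.0.15.10):

```
echo | openssl s_client -connect 10.0.15.10:5443 -CAfile /etc/pki/tls/certs/logstash-forwarder.crt
```

A 'Verify return code: 0 (ok)' near the end of the output means the certificate checks out.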
Add Filebeat to start at boot time and start it.
sudo systemctl enable filebeat
sudo systemctl start filebeat
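If something does not work as expected, the Filebeat log is the first place to look. With the RPM package it should end up under /var/log/filebeat (an assumption based on the default logging settings):

```
sudo tail -f /var/log/filebeat/filebeat
```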
### Step 7 - Install and Configure Filebeat on the Ubuntu Client
Connect to the server by ssh.
ssh root@ubuntu-clientIP
Copy the certificate file to the client with the scp command.
scp root@elk-serverIP:~/logstash-forwarder.crt .
Create a new directory for the certificate file and move the file to that directory.
sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
Next, import the elastic key on the Ubuntu client.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Download the Filebeat .deb package and install it with the dpkg command.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
dpkg -i filebeat-5.1.1-amd64.deb
Go to the filebeat configuration directory and edit the file 'filebeat.yml' with vim.
cd /etc/filebeat/
vim filebeat.yml
Add the new log file paths in the paths configuration section.
  paths:
    - /var/log/auth.log
    - /var/log/syslog
Set the document type to syslog.
  document_type: syslog
Disable the Elasticsearch output by commenting out the lines shown below.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]
Enable the Logstash output: uncomment the configuration and change the values as shown below.
output.logstash:
  # The Logstash hosts
  hosts: ["10.0.15.10:5443"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false
Save the file and exit vim.
Add Filebeat to start at boot time and start it.
sudo systemctl enable filebeat
sudo systemctl start filebeat
Check the service status.
systemctl status filebeat
[
![Filebeat is running on the client Ubuntu](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/12.png)
][16]
### Step 8 - Testing
Open your web browser and visit the Elastic Stack domain that you used in the Nginx configuration; mine is 'elk-stack.co'. Log in as the admin user with your password to reach the Kibana dashboard.
[
![Login to the Kibana Dashboard with Basic Auth](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/7.png)
][17]
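If the index pattern does not show up, you can first check on the elk-master server whether any Filebeat data has arrived at all. Elasticsearch only listens on localhost, so run the query on the server itself:

```
curl -XGET 'localhost:9200/_cat/indices?v'
```

You should see one or more 'filebeat-YYYY.MM.DD' indices in the list.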
Create a new default index 'filebeat-*' and click on the 'Create' button.
[
![Create First index filebeat for Kibana](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/8.png)
][18]
The default index has been created. If you have multiple Beats shipping into the Elastic Stack, you can set the default one with a single click on the 'star' button.
[
![Filebeat index as default index on Kibana Dashboard](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/9.png)
][19]
Go to the '**Discover**' menu and you will see all the log files from the elk-client1 and elk-client2 servers.
[
![Discover all Log Files from the Servers](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/10.png)
][20]
An example of JSON output from the elk-client1 server log for an invalid ssh login.
[
![JSON output for Failed SSH Login](https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/11.png)
][21]
And there is much more that you can do with Kibana dashboard, just play around with the available options.
Elastic Stack has been installed on a CentOS 7 server. Filebeat has been installed on a CentOS 7 and a Ubuntu client.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
作者:[Muhammad Arul][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
[1]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-nbspprepare-the-operating-system
[2]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-java
[3]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-elasticsearch
[4]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-kibana-with-nginx
[5]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-logstash
[6]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-centos-client
[7]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-ubuntu-client
[8]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-testing
[9]:https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#reference
[10]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/1.png
[11]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/2.png
[12]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/3.png
[13]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/4.png
[14]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/5.png
[15]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/6.png
[16]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/12.png
[17]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/7.png
[18]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/8.png
[19]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/9.png
[20]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/10.png
[21]:https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/11.png

View File

@ -1,81 +0,0 @@
How to Keep Hackers out of Your Linux Machine Part 3: Your Questions Answered
============================================================
![Computer security](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/keep-hackers-out.jpg?itok=lqgHDxDu "computer security")
Mike Guthrie answers some of the security-related questions received during his recent Linux Foundation webinar. Watch the free webinar on-demand.[Creative Commons Zero][1]
Articles [one][6] and [two][7] in this series covered the five easiest ways to keep hackers out of your Linux machine, and know if they have made it in. This time, Ill answer some of the excellent security questions I received during my recent Linux Foundation webinar. [Watch the free webinar on-demand.][8]
**How can I store a passphrase for a private key if private key authentication is used by automated systems?**
This is tough. This is something that we struggle with on our end, especially when we are doing Red Teams because we have stuff that calls back automatically. I use Expect but I tend to be old-school on that. You are going to have to script it and, yes, storing that passphrase on the system is going to be tough; you are going to have to encrypt it when you store it.
My Expect script encrypts the passphrase stored and then decrypts, sends the passphrase, and re-encrypts it when it's done. I do realize there are some flaws in that, but it's better than having a no-passphrase key.
If you do have a key without a passphrase and you need to use it, then I would suggest limiting the user that requires it to almost nothing. For instance, if you are doing some automated log transfers or automated software installs, limit the access to only what it requires to perform those functions.
You can run commands over SSH, so don't give such accounts a shell; set them up so they can just run that one command. That will actually prevent somebody from stealing the key and doing anything other than running that single command.
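One common way to pin a key to a single command is the command= option in authorized_keys. This is only a sketch; the script path and the key itself are placeholders:

```
command="/usr/local/bin/push-logs.sh",no-port-forwarding,no-pty,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA... automation@host
```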
**What do you think of password managers such as KeePass2?**
Password managers, for me, are a very juicy target. With the advent of GPU cracking and some of the cracking capabilities in EC2, they become pretty easy to get past.  I steal password vaults all the time.
Now, our success rate at cracking those, that's a different story. We are still in about the 10 percent range of crack versus no crack. If a person doesn't do a good job at keeping a secure passphrase on their password vault, then we tend to get into it and we have a large amount of success. It's better than nothing but still you need to protect those assets. Protect the password vault as you would protect any other passwords.
**Do you think it is worthwhile from a security perspective to create a new Diffie-Hellman moduli and limit them to 2048 bit or higher in addition to creating host keys with higher key lengths?**
Yeah. There have been weaknesses in SSH products in the past where you could actually decrypt the packet stream. With that, you can pull all kinds of data across. People use SSH as a safe way to transfer files and passwords, and they do it thoughtlessly, trusting it as an encryption mechanism. Doing what you can to use strong encryption and changing your keys and whatnot is important. I rotate my SSH keys -- not as often as I do my passwords -- but I rotate them about once a year. And, yeah, it's a pain, but it gives me peace of mind. I would recommend doing everything you can to make your encryption technology as strong as you possibly can.
**Is using four completely random English words (around 100k words) for a passphrase okay?**
Sure. My passphrase is actually a full phrase. It's a sentence. With punctuation and capitalization. I don't use anything longer than that.
I am a big proponent of having passwords that you can remember that you dont have to write down or store in a password vault. A password that you can remember that you don't have to write down is more secure than one that you have to write down because it's funky.
Using a phrase or using four random words that you will remember is much more secure than having a string of numbers and characters and having to hit shift a bunch of times. My current passphrase is roughly 200 characters long. It's something that I can type quickly and that I remember.
**Any advice for protecting Linux-based embedded systems in an IoT scenario?**
IoT is a new space, this is the frontier of systems and security. It is starting to be different every single day. Right now, I try to keep as much offline as I possibly can. I don't like people messing with my lights and my refrigerator. I purposely did not buy a connected refrigerator because I have friends that are hackers, and I know that I would wake up to inappropriate pictures every morning. Keep them locked down. Keep them locked up. Keep them isolated.
The current malware for IoT devices is dependent on default passwords and backdoors, so just do some research into what devices you have and make sure that there's nothing there that somebody could particularly access by default. Then make sure that the management interfaces for those devices are well protected by a firewall or another such device.
**Can you name a firewall/UTM (OS or application) to use in SMB and large environments?**
I use pfSense; its a BSD derivative. I like it a lot. There's a lot of modules, and there's actually commercial support for it now, which is pretty fantastic for small business. For larger devices, larger environments, it depends on what admins you can get a hold of.
I have been a CheckPoint admin for most of my life, but Palo Alto is getting really popular, too. Those types of installations are going to be much different from a small business or home use. I use pfSense for any small networks.
**Is there an inherent problem with cloud services?**
There is no cloud; there are only other people's computers. There are inherent issues with cloud services. Just know who has access to your data and know what you are putting out there. Realize that when you give something to Amazon or Google or Microsoft, then you no longer have full control over it and the privacy of that data is in question.
**What preparation would you suggest to get an OSCP?**
I am actually going through that certification right now. My whole team is. Read their materials. Keep in mind that OSCP is going to be the offensive security baseline. You are going to use Kali for everything. If you don't -- if you decide not to use Kali -- make sure that you have all the tools installed to emulate a Kali instance.
It's going to be a heavily tools-based certification. It's a good look into methodologies. Take a look at something called the Penetration Testing Framework because that would give you a good flow of how to do your test and their lab seems to be great. It's very similar to the lab that I have here at the house.
_[Watch the full webinar on demand][3], for free. And see [parts one][4] and [two][5] of this series for five easy tips to keep your Linux machine secure._
_Mike Guthrie works for the Department of Energy doing Red Team engagements and penetration testing._
--------------------------------------------------------------------------------
via: https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-3-your-questions-answered
作者:[MIKE GUTHRIE][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/anch
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/keep-hackers-outjpg
[3]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco
[4]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-1-top-two-security-tips
[5]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-2-three-more-easy-security-tips
[6]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-1-top-two-security-tips
[7]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-2-three-more-easy-security-tips
[8]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco

View File

@ -1,139 +0,0 @@
ictlyh Translating
Command line aliases in the Linux Shell
============================================================
### On this page
1. [Command line aliases in Linux][1]
2. [Related details][2]
3. [Conclusion][3]
So far in this tutorial series, we have discussed the basic usage as well as related details of the [cd -][5] and **pushd**/**popd** commands, as well as the **CDPATH** environment variable. In this fourth and final installment, we will discuss the concept of aliases, as well as how you can use them to make your command line navigation easier and smoother.
As always, before jumping to the heart of the tutorial, it's worth sharing that all the instructions as well as the examples presented in this article have been tested on Ubuntu 14.04 LTS. The command line shell we've used is bash (version 4.3.11).
### Command line aliases in Linux
In layman's terms, aliases can be thought of as short names or abbreviations to a complex command or a group of commands, including their arguments or options. So basically, with aliases, you create easy to remember names for not-so-easy-to-type/remember commands.
For example, the following command creates an alias 'home' for the 'cd ~' command:
alias home="cd ~"
This means that now you can quickly type 'home' and press enter whenever you want to come back to your home directory from anywhere on your system.
Here's what the alias command man page says about this utility:
```
The alias utility shall create or redefine alias definitions or write the values of existing alias definitions to standard output. An alias definition provides a string value that shall replace a command name when it is encountered
An alias definition shall affect the current shell execution environment and the execution environments of the subshells of the current shell. When used as specified by this volume of IEEE Std 1003.1-2001, the alias definition shall not affect the parent process of the current shell nor any utility environment invoked by the shell.
```
So, how exactly do aliases help with command line navigation? Well, here's a simple example:
Suppose you are working in the _/home/himanshu/projects/howtoforge_ directory, which further contains many subdirectories, and sub-subdirectories. For example, following is one complete directory branch:
```
/home/himanshu/projects/howtoforge/command-line/navigation/tips-tricks/part4/final
```
Now imagine, you are in the 'final' directory, and then you want to get back to the 'tips-tricks' directory, and from there, you need to get back to the 'howtoforge' directory. What would you do?
Well, normally, you'd run the following set of commands:
cd ../..
cd ../../..
While this approach isn't wrong per se, it's definitely not convenient, especially when you have to go back, say, 5 directories in a very long path. So, what's the solution? The answer is: aliases.
What you can do is, you can create easy to remember (and type) aliases for each of the _cd .._ commands. For example:
alias bk1="cd .."
alias bk2="cd ../.."
alias bk3="cd ../../.."
alias bk4="cd ../../../.."
alias bk5="cd ../../../../.."
So now whenever you want to go back, say 5 places, from your current working directory, you can just run the following command:
bk5
Isn't that easy now?
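If typing out all of those definitions feels repetitive, a small bash loop (a sketch; drop it into your shell or ~/.bash_aliases) can generate them:

```
# Define bk1..bk5 as aliases that go up 1..5 directories.
for i in 1 2 3 4 5; do
  p=$(printf '../%.0s' $(seq 1 "$i"))  # e.g. '../../../' for i=3
  alias "bk$i"="cd ${p%/}"             # strip the trailing slash
done
```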
### Related details
While the technique we've used to define aliases so far (using the alias command) on the shell prompt does the trick, the aliases exist only for the current terminal session. There are good chances that you may want aliases defined by you to persist so that they can be used in any new command line terminal window/tab you launch thereafter.
For this, you need to define your aliases in the _~/.bash_aliases_ file, which is loaded by your _~/.bashrc_ file by default (please verify this if you are using an older Ubuntu version).
Following is the excerpt from my .bashrc file that talks about the .bash_aliases file:
```
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
```
Once you've added an alias definition to your .bash_aliases file, that alias will be available on any and every new terminal. However, you'll not be able to use it in any other terminal which was already open when you defined that alias - the way out is to source .bashrc from those terminals. Following is the exact command that you'll have to run:
source ~/.bashrc
If that sounds a little too much of work (yes, I am looking at you LAZY ONES), then here's a shortcut to do all this:
"alias [the-alias]" >> ~/.bash_aliases && source ~/.bash_aliases
Needless to say, you'll have to replace [the-alias] with the actual command. For example:
"alias bk5='cd ../../../../..'" >> ~/.bash_aliases && source ~/.bash_aliases
Moving on, now suppose you've created some aliases, and have been using them on and off for a few months. Suddenly, one day, you suspect that one of them isn't working as expected, so you feel the need to look at the exact command that was assigned to that alias. What would you do?
Of course, you can open your .bash_aliases file and take a look there, but this process can be a bit time consuming, especially when the file contains a lot of aliases. So, if you're looking for an easy way out, here's one: all you have to do is to run the _alias_ command with the alias-name as argument.
Here's an example:
$ alias bk6
alias bk6='cd ../../../../../..'
As you can see, the aforementioned command displayed the actual command assigned to the bk6 alias. There's one more way: to use the _type_ command. Following is an example:
$ type bk6
bk6 is aliased to `cd ../../../../../..'
So the type command produces a more human-understandable output.
Another thing worth sharing here is that you can use aliases for the common typos you make. For example:
alias mroe='more'
_Finally, it's also worth mentioning that not everybody is in favor of using aliases. Most of them argue that once you get used to the aliases you define for your convenience, it gets really difficult to work on some other system where those aliases don't exist (and where you're not allowed to create any). For more (as well as more precise) reasons why some experts don't recommend using aliases, you can head [here][4]._
### Conclusion
Like the CDPATH environment variable we discussed in the previous part, alias is also a double edged sword that one should use very cautiously. Don't get discouraged though, as everything has its own advantages and disadvantages. Just that practice and complete knowledge is the key when you're dealing with concepts like aliases.
So this marks the end of this tutorial series. Hope you enjoyed it as well as learned some new things/concepts from it. In case you have any doubts or queries, please share them with us (and the rest of the world) in the comments below.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/command-line-aliases-in-linux/
作者:[Ansh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/command-line-aliases-in-linux/
[1]:https://www.howtoforge.com/tutorial/command-line-aliases-in-linux/#command-line-aliases-in-linux
[2]:https://www.howtoforge.com/tutorial/command-line-aliases-in-linux/#related-details
[3]:https://www.howtoforge.com/tutorial/command-line-aliases-in-linux/#conclusion
[4]:http://unix.stackexchange.com/questions/66934/why-is-aliasing-over-standard-commands-not-recommended
[5]:https://www.howtoforge.com/tutorial/linux-command-line-navigation-tips-and-tricks-part-1/

View File

@ -1,146 +0,0 @@
vim-kakali translating
How to compare directories with Meld on Linux
============================================================
### On this page
1. [Compare directories using Meld][1]
2. [Conclusion][2]
We've [already covered][4] Meld from a beginner's point of view (including the tool's installation part), and we've also covered some tips/tricks that are primarily aimed at intermediate Meld users. If you remember, in the beginner's tutorial, we mentioned that Meld can be used to compare both files as well as directories. Now that we've already covered file comparison, it's time to discuss the tool's directory comparison feature.
_But before we do that it'd be worth sharing that all the instructions and examples presented in this tutorial have been tested on Ubuntu 14.04 and the Meld version we've used is 3.14.2._
### Compare directories using Meld
To compare two directories using Meld, launch the tool and select the _Directory comparison_ option.
[
![Compare directories using Meld](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-dir-comp-1.png)
][5]
Then select the directories that you want to compare:
[
![select the directories](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-sel-dir-2.png)
][6]
Once that is done, click the _Compare_ button, and you'll see that Meld compares both directories side by side, just as it does with files:
[
![Compare directories visually](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-dircomp-begins-3.png)
][7]
Of course, these being directories, they are displayed as side-by-side trees. And as you can see in the screenshot above, the differences - whether it's a new file or a changed file -  are highlighted in different colors.
According to Meld's official documentation, each file or folder that you see in the comparison area of the window has a state of its own. A state basically reveals how a particular file/folder is different from the corresponding entry in the other directory.
The following table - taken from the tool's website - explains in details the folder comparison states in Meld.
|**State** | **Appearance** | **Meaning** |
| --- | --- | --- |
| Same | Normal font | The file/folder is the same across all compared folders. |
| Same when filtered | Italics | These files are different across folders, but once text filters are applied, these files become identical. |
| Modified | Blue and bold | These files differ between the folders being compared. |
| New | Green and bold | This file/folder exists in this folder, but not in the others. |
| Missing | Greyed out text with a line through the middle | This file/folder doesn't exist in this folder, but does in one of the others. |
| Error | Bright red with a yellow background and bold | When comparing this file, an error occurred. The most common error causes are file permissions (i.e., Meld was not allowed to open the file) and filename encoding errors. |
By default, Meld shows all the contents of the folders being compared, even if they are the same (meaning there's no difference between them). However, you can ask the tool not to display these files/directories by clicking the _Same_ button in the toolbar - clicking it disables the button.
[
![same button](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-same-button.png)
][3]
[
![Meld compare buttons](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-same-disabled.png)
][8]
For example, here's our directory comparison when I clicked and disabled the _Same_ button:
[
![Directory Comparison without same files](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-only-diff.png)
][9]
So you can see that only the differences between the two directories (new and modified files) are shown now. Similarly, if you disable the _New_ button, only the modified files will be displayed. So basically, you can use these buttons to customize what kind of changes are displayed by Meld while comparing two directories.
Coming to the changes, you can hop from one change to another using the up and down arrow buttons that sit above the display area in the tool's window. To open two files for side-by-side comparison, you can either double-click the name of either file, or click the _Compare_ button that sits beside the arrows.
[
![meld compare arrow keys](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-compare-arrows.png)
][10]
**Note 1**: If you observe closely, there are bars on the left and right-hand sides of the display area in Meld window. These bars basically provide "a simple coloured summary of the comparison results." For each differing file or folder, there's a small colored section in these bars. You can click any such section to directly jump to that place in the comparison area.
**Note 2**: While you can always open files side by side and merge changes the way you want, in case you want all the changes to be merged to the corresponding file/folder (meaning you want to make the corresponding file/folder exactly same) then you can use the _Copy Left_ and _Copy Right_ buttons:
[
![meld copy right part](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-copy-right-left.png)
][11]
For example, select a file or folder in the left pane and click the _Copy Right_ button to make the corresponding entry in the right pane exactly same.
Moving on, there's a _Filters_ drop-down menu that sits just next to the _Same_, _New_, and _Modified_ trio of buttons. Here you can select/deselect file types to tell Meld whether or not to show these kind of files/folders in the display area during a directory comparison. The official documentation explains the entries in this menu as "patterns of filenames that will not be looked at when performing a folder comparison."
The entries in the list include backups, OS-specific metadata, version control, binaries, and media.
[
![Meld filters](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-filters.png)
][12]
The aforementioned menu is also accessible by heading to _View->File Filters_. You can add new elements to this menu (as well as remove existing ones if you want) by going to _Edit->Preferences->File Filters_.
[
![Meld preferences](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-edit-filters-menu.png)
][13]
To create a new filter, you need to use shell glob patterns. Following is the list of shell glob characters that Meld recognises:
| **Wildcard** | **Matches** |
| --- | --- |
| * | anything (i.e., zero or more characters) |
| ? | exactly one character |
| [abc] | any one of the listed characters |
| [!abc] | anything except one of the listed characters |
| {cat,dog} | either "cat" or "dog" |
Finally, an important point worth knowing about Meld is that the case of a file's name plays an important part, as comparison is case sensitive by default. This means that, for example, the files README, readme, and ReadMe would all be treated by the tool as different files.
Thankfully, however, Meld also provides you a way to turn off this feature. All you have to do is to head to the _View_ menu and then select the _Ignore Filename Case_ option.
[
![Meld ignore filename case](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-ignore-case.png)
][14]
### Conclusion
As you'd agree, directory comparison using Meld isn't difficult - in fact I'd say it's pretty easy. The only area that might require time to learn is creating file filters, but that's not to say you should never learn it. Obviously, it all depends on what your requirement is. 
Oh, and yes, you can even compare three directories using Meld, a feature that you can access by clicking the _3-way comparison_ box when you choose the directories that you want to compare. We didn't discuss that feature in this article, but we definitely will in one of our future articles.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/
作者:[Ansh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/
[1]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/#compare-directories-using-meld
[2]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/#conclusion
[3]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-same-button.png
[4]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/
[5]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-dir-comp-1.png
[6]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-sel-dir-2.png
[7]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-dircomp-begins-3.png
[8]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-same-disabled.png
[9]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-only-diff.png
[10]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-compare-arrows.png
[11]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-copy-right-left.png
[12]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-filters.png
[13]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-edit-filters-menu.png
[14]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-ignore-case.png

View File

@ -1,97 +0,0 @@
Building your own personal cloud with Cozy
============================================================
![Building your own personal cloud with Cozy](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_tree_clouds.png?itok=dSV0oTDS "Building your own personal cloud with Cozy")
>Image by : [Pixabay][2]. Modified by Opensource.com. [CC BY-SA 4.0][3]
Most everyone I know uses some sort of web-based application for their calendar, emails, file storage, and much more. But what if, like me, you've got concerns about privacy, or just want to simplify your digital life into a place you control? [Cozy][4] is a project that is moving in the right direction—toward a robust self-hosted cloud platform. Cozy's source code is available on [GitHub][5], and it is licensed under the AGPL 3.0 license.
### Installation
Installing Cozy is snap-easy, with [easy-to-follow instructions][6] for a variety of platforms. For my testing, I'm using Debian 8, 64-bit. The installation takes a few minutes, but then you just go to the server's IP, register an account, and a default set of applications is loaded up and ready to roll.
One note on this—the installation assumes no other web server is running, and it will want to install [Nginx web server][7]. If your server already has websites running, configuration may be a bit more challenging. My install was on a brand new Virtual Private Server (VPS), so this was a snap. Run the installer, start Nginx, and you're ready to hit the cloud.
Cozy has [an App Store][8] where you can download additional applications. Some look pretty interesting, like the [Ghost blogging platform][9] and [TiddlyWiki][10] open source wiki. The intent here, clearly, is to allow integration in the platform with lots of other goodies. I think it's just a matter of time before you'll see this start to take off, with many other popular open source applications offering integration abilities. Right now, [Node.js][11] applications are supported, but if other application layers were possible, you'd see that many other good things could happen.
One capability that is possible is using the free Android application to access your information from Android devices. No iOS app exists, but there is a plan to solve that problem.
Currently, Cozy comes with a nice set of core applications.
![Main Cozy Interface](https://opensource.com/sites/default/files/main_cozy_interface.jpg "Main Cozy Interface")
Main Cozy interface
### Files
Like a lot of folks, I use [Dropbox][12] for file storage. In fact, I pay for Dropbox Pro because I use _so much_ storage. For me, then, moving my files to Cozy would be a money-saver—if it has the features I want.
I wish I could say it does, truly I do. I was very impressed with the web-based file upload and file-management tool built into the Cozy web application. Drag-and-drop works like you'd expect, and the interface is clean and uncluttered. I had no trouble at all uploading some sample files and folders, navigating around, and moving, deleting, and renaming files.
If what you want is a web-based cloud file storage, you're set. What's missing, for me, is the selective synchronization of files and folders, which Dropbox has. With Dropbox, if you drop a file in a folder, it's copied out to the cloud and made available to all your synced devices in a matter of minutes. To be fair, [Cozy is working on it][13], but it's in beta and only for Linux clients at the moment. Also, I have a [Chrome][14] extension called [Download to Dropbox][15] that I use to capture images and other content from time to time, and no such tool exists yet for Cozy.
![File Manager Interface](https://opensource.com/sites/default/files/cozy_2.jpg "File Manager Interface")
File Manager interface
### Importing data from Google
If you currently use Google Calendar or Contacts, importing these is very easy with the app installed in Cozy. When you authorize access to Google, you're given an API key, which you paste in Cozy, and it performs the copy quickly and efficiently. In both cases, the contents were tagged as having been imported from Google. Given the clutter in my contacts, this is probably a good thing, as it gives you the opportunity to tidy up as you relabel them into more meaningful categories. Calendar events all imported onto the "Google Calendar," but I noticed that _some_ of my events had incorrect times, possibly an artifact of time zone settings on one end or the other.
### Contacts  
Contacts works like you'd expect, and the interface looks a _lot_ like Google Contacts. There are a few sticky areas, though. Synchronization with (for instance) your smart phone happens via [CardDAV][16], a standard protocol for sharing contacts data—and a technology that Android phones _do not speak natively_. To sync your contacts to an Android device, you're going to have to load an app on your phone for that. For me, that's a _huge_ strike against it, as I have enough odd apps like that already (work mail versus Gmail versus other mail, oh my!), and I do not want to install another that won't sync up with my smartphone's native contacts application. If you're using an iPhone, you can do CardDAV right out of the box.
### Calendar  
The good news for Calendar users is that an Android device _can_ speak [CalDAV][17], the interchange format for this type of data. As I noted with the import app, some of my calendar items came over with the wrong times. I've seen this before with other calendar system moves, so that really didn't bother me much. The interface lets you create and manage multiple calendars, just like with Google, but you can't subscribe to other calendars outside of this Cozy instance. One other quirk of the app is that the week starts on Monday, which you can't change. Normally, I start my week on Sunday, so being able to change from Monday would be a useful feature for me. The Settings dialog didn't actually have any settings; it merely gave instructions on how to connect via CalDAV. Again, this application is close to what I need, but Cozy is not quite there.
### Photos
The Photos app is pretty impressive, borrowing a lot from the Files application. You can even add photos to an album that are in files in the other app, or upload directly with drag and drop. Unfortunately, I don't see any way to reorder or edit pictures once you've uploaded them. You can only delete them from the album. The app does have a tool for sharing via a token link, and you can specify one or more contact. The system will send those contacts an email inviting them to view the album. There are more feature-rich album applications than this, to be sure, but this is a good start for the Cozy platform.
![Photos Interface](https://opensource.com/sites/default/files/cozy_3_0.jpg "Photos Interface")
Photos interface
### Final thoughts
Cozy has some really big goals. They're trying to build a platform where you can deploy _any_ cloud-based service you like. Is it ready for prime time? Not quite. Some of the shortcomings I've mentioned are problematic for power users, and there is no iOS application yet, which may hamper adoption for those users. However, keep an eye on this one—Cozy has the potential, as development continues, to become a one-stop replacement for a great many applications.
--------------------------------------------------------------------------------
译者简介:
D Ruth Bavousett - D Ruth Bavousett has been a system administrator and software developer for a long, long time, getting her professional start on a VAX 11/780, way back when. She spent a lot of her career (so far) serving the technology needs of libraries, and has been a contributor since 2008 to the Koha open source library automation suite.Ruth is currently a Perl Developer at cPanel in Houston, and also serves as chief of staff for two cats.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/2/cozy-personal-cloud
作者:[D Ruth Bavousett][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://opensource.com/article/17/2/cozy-personal-cloud?rate=FEMc3av4LgYK-jeEscdiqPhSgHZkYNsNCINhOoVR9N8
[2]:https://pixabay.com/en/tree-field-cornfield-nature-247122/
[3]:https://creativecommons.org/licenses/by-sa/4.0/
[4]:https://cozy.io/
[5]:https://github.com/cozy/cozy
[6]:https://docs.cozy.io/en/host/install/
[7]:https://www.nginx.com/
[8]:https://cozy.io/en/apps/
[9]:https://ghost.org/
[10]:http://tiddlywiki.com/
[11]:http://nodejs.org/
[12]:https://www.dropbox.com/
[13]:https://github.com/cozy-labs/cozy-desktop
[14]:https://www.google.com/chrome/
[15]:https://github.com/pwnall/dropship-chrome
[16]:https://en.wikipedia.org/wiki/CardDAV
[17]:https://en.wikipedia.org/wiki/CalDAV
[18]:https://opensource.com/user/36051/feed
[19]:https://opensource.com/article/17/2/cozy-personal-cloud#comments
[20]:https://opensource.com/users/druthb

View File

@ -1,54 +0,0 @@
OpenContrail: An Essential Tool in the OpenStack Ecosystem
============================================================
![OpenContrail](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/contrails-cloud.jpg?itok=aoNIH-ar "OpenContrail")
OpenContrail, an SDN platform used with the OpenStack cloud computing platform, is emerging as an essential tool around which administrators will need to develop skillsets.[Creative Commons Zero][1]Pixabay
Throughout 2016, software-defined networking (SDN) rapidly evolved, and numerous players in the open source and cloud computing arenas are now helping it gain momentum. In conjunction with that trend, [OpenContrail][3], a popular SDN platform used with the OpenStack cloud computing platform, is emerging as an essential tool around which many administrators will have to develop skillsets.
Just as administrators and developers have ramped up their skillsets surrounding essential tools like Ceph in the OpenStack ecosystem, they will need to embrace OpenContrail, which is fully open source and stewarded by the Apache Software Foundation.
With all of this in mind, Mirantis, one of the most active companies on the OpenStack scene, has [announced][4] commercial support for and contributions to OpenContrail. "With the addition of OpenContrail, Mirantis becomes a one-stop support shop for the entire stack of popular open source technologies used in conjunction with OpenStack, including Ceph for storage, OpenStack/KVM for compute and OpenContrail or Neutron for SDN," the company noted.
According to a Mirantis announcement, "OpenContrail is an Apache 2.0-licensed project that is built using standards-based protocols and provides all the necessary components for network virtualization: SDN controller, virtual router, analytics engine, and published northbound APIs. It has an extensive REST API to configure and gather operational and analytics data from the system. Built for scale, OpenContrail can act as a fundamental network platform for cloud infrastructure."
The news follows Mirantis' [acquisition of TCP Cloud][5], a company specializing in managed services for OpenStack, OpenContrail, and Kubernetes. Mirantis will use TCP Cloud's technology for continuous delivery of cloud infrastructure to manage the OpenContrail control plane, which will run in Docker containers. As part of the effort, Mirantis has also been contributing to OpenContrail.
Many contributors behind OpenContrail are working closely with Mirantis, and they have especially taken note of the support programs that Mirantis will offer.
“OpenContrail is an essential project within the OpenStack community, and Mirantis is smart to containerize and commercially support it. The work our team is doing will make it easy to scale and update OpenContrail and perform seamless rolling upgrades alongside the rest of Mirantis OpenStack,” said Jakub Pavlik, Mirantis director of engineering and OpenContrail Advisory Board member. “Commercial support will also enable Mirantis to make the project compatible with a variety of switches, giving customers more choice in their hardware and software,” he said.
In addition to commercial support for OpenContrail, we are very likely to see Mirantis serve up educational offerings for cloud administrators and developers who want to learn how to leverage it. Mirantis is already well-known for its [OpenStack training][6] curriculum and has wrapped Ceph into its training.
In 2016, the SDN category rapidly evolved, and it also became meaningful to many organizations with OpenStack deployments. IDC published [a study][7] of the SDN market recently and predicted a 53.9 percent CAGR from 2014 through 2020, at which point the market will be valued at $12.5 billion. In addition, the Technology Trends 2016 report ranked SDN as one of the best technology investments that organizations can make.
"Cloud computing and the 3rd Platform have driven the need for SDN, which will represent a market worth more than $12.5 billion in 2020\. Not surprisingly, the value of SDN will accrue increasingly to network-virtualization software and to SDN applications, including virtualized network and security services. Large enterprises are now realizing the value of SDN in the datacenter, but ultimately, they will also recognize its applicability across the WAN to branch offices and to the campus network," said[ Rohit Mehra][8], Vice President of Network Infrastructure at IDC.
Meanwhile, The Linux Foundation recently[ announced][9] the release of its 2016 report ["Guide to the Open Cloud: Current Trends and Open Source Projects."][10] This third annual report provides a comprehensive look at the state of open cloud computing, and includes a section on SDN.
The Linux Foundation also offers [Software Defined Networking Fundamentals][11] (LFS265), a self-paced, online course on SDN, and functions as the steward of the[ Open Daylight][12] project, another important open source SDN platform that is quickly gaining momentum.
--------------------------------------------------------------------------------
via: https://www.linux.com/news/event/open-networking-summit/2017/2/opencontrail-essential-tool-openstack-ecosystem
作者:[SAM DEAN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/sam-dean
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/contrails-cloudjpg
[3]:https://www.globenewswire.com/Tracker?data=brZ3aJVRyVHeFOyzJ1Dl4DMY3CsSV7XcYkwRyOcrw4rDHplSItUqHxXtWfs18mLsa8_bPzeN2EgZXWcQU8vchg==
[4]:http://www.econotimes.com/Mirantis-Becomes-First-Vendor-to-Offer-Support-and-Managed-Services-for-OpenContrail-SDN-486228
[5]:https://www.globenewswire.com/Tracker?data=Lv6LkvREFzGWgujrf1n6r_qmjSdu67-zdRAYt2itKQ6Fytomhfphuk5EbDNjNYtfgAsbnqI8H1dn_5kB5uOSmmSYY9XP2ibkrPw_wKi5JtnAyV43AjuR_epMmOUkZZ8QtFdkR33lTGDmN6O5B4xkwv7fENcDpm30nI2Og_YrYf0b4th8Yy4S47lKgITa7dz2bJpwpbCIzd7muk0BZ17vsEp0S3j4kQJnmYYYk5udOMA=
[6]:https://training.mirantis.com/
[7]:https://www.idc.com/getdoc.jsp?containerId=prUS41005016
[8]:http://www.idc.com/getdoc.jsp?containerId=PRF003513
[9]:https://www.linux.com/blog/linux-foundation-issues-2016-guide-open-source-cloud-projects
[10]:http://ctt.marketwire.com/?release=11G120876-001&id=10172077&type=0&url=http%3A%2F%2Fgo.linuxfoundation.org%2Frd-open-cloud-report-2016-pr
[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/software-defined-networking-fundamentals
[12]:https://www.opendaylight.org/

View File

@ -1,194 +0,0 @@
lnav An Advanced Console Based Log File Viewer for Linux
============================================================
[lnav][3] stands for Log File Navigator and is an advanced console-based log file viewer for Linux. It does the same job as other file viewers such as cat, more and tail, but offers enhanced features that normal file viewers lack; in particular, it presents the output in a colored, easy-to-read format.
It can decompress all compressed log files (zip, gzip, bzip) on the fly and merge them together for easy navigation. lnav merges more than one log file into a single view (Single Log View) based on message timestamps, which saves you from keeping multiple windows open. The color bars on the left-hand side help to show which file a message belongs to.
The number of warnings and errors is highlighted in the display (yellow and red respectively), so we can easily see where problems have occurred. New log lines are loaded automatically.
It displays the log messages from all files sorted by their timestamps. The top and bottom status bars tell you where you are in the logs. If you want to grep for a particular pattern, just type it at the search prompt and it will be highlighted instantly.
The built-in log message parser can automatically discover and extract detailed information from each line.
A server log is a log file that is created and frequently updated by a server to capture all the activity of a particular service or application. It can be very useful when you have an issue with an application or service, since the log records all the information about the issue, such as when the service started behaving abnormally, based on warning or error messages.
When you open a log file with a normal file viewer, it displays all the details in plain format (to put it bluntly: plain white), and it is very difficult to identify or understand where the warning and error messages are. To overcome this kind of situation and quickly find the warning and error messages to troubleshoot an issue, lnav comes in handy as a better solution.
Most of the common Linux log files are located at `/var/log/`.
**lnav automatically detects the following log formats:**
* Common Web Access Log format
* CUPS page_log
* Syslog
* Glog
* VMware ESXi/vCenter Logs
* dpkg.log
* uwsgi
* “Generic”: any message that starts with a timestamp
* Strace
* sudo
* gzib & bizp
**Awesome lnav features**
* Single Log View: all log file contents are merged into a single view based on message timestamps.
* Automatic Log Format Detection: most common log formats are supported by lnav.
* Filters: filtering based on regular expressions can be performed.
* Timeline View
* Pretty-Print View
* Query Logs Using SQL (see the sketch after this list)
* Automatic Data Extraction
* “Live” Operation
* Syntax Highlighting
* Tab-completion
* Session information is saved automatically and restored when you view the same set of files again.
* Headless Mode
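As a small illustration of the SQL feature mentioned in the list above, lnav exposes parsed log lines as SQLite virtual tables that you can query from its `;` prompt. Here is a sketch; the `syslog_log` table and its column names follow lnav's documented schema, so verify them on your version before relying on them:
```
# lnav /var/log/syslog
# then, inside lnav, press ';' to open the SQL prompt and type:
;SELECT log_time, log_level, log_text FROM syslog_log WHERE log_level = 'error' LIMIT 10;
```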
#### How to install lnav on Linux
Most distributions (Debian, Ubuntu, Mint, Fedora, SUSE, openSUSE, Arch Linux, Manjaro, Mageia, etc.) carry the lnav package by default, so we can easily install it from the distribution's official repository with the help of the package manager. For CentOS/RHEL we need to enable the **[EPEL Repository][1]**.
```
[Install lnav on Debian/Ubuntu/LinuxMint]
$ sudo apt-get install lnav
[Install lnav on RHEL/CentOS]
$ sudo yum install lnav
[Install lnav on Fedora]
$ sudo dnf install lnav
[Install lnav on openSUSE]
$ sudo zypper install lnav
[Install lnav on Mageia]
$ sudo urpmi lnav
[Install lnav on Arch Linux based system]
$ yaourt -S lnav
```
If your distribution doesn't have the lnav package, don't worry: the developer offers `.rpm` & `.deb` packages, so we can easily install it without any issues. Make sure to download the latest one from the [developer's GitHub page][4].
```
[Install lnav on Debian/Ubuntu/LinuxMint]
$ sudo wget https://github.com/tstack/lnav/releases/download/v0.8.1/lnav_0.8.1_amd64.deb
$ sudo dpkg -i lnav_0.8.1_amd64.deb
[Install lnav on RHEL/CentOS]
$ sudo yum install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
[Install lnav on Fedora]
$ sudo dnf install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
[Install lnav on openSUSE]
$ sudo zypper install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
[Install lnav on Mageia]
$ sudo rpm -ivh https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
```
#### Run lnav without any argument
By default, lnav opens the `syslog` file when you run it without any arguments.
```
# lnav
```
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png)
][5]
#### To view specific logs with lnav
To view a specific log with lnav, add the log file path after the lnav command. For example, we are going to view the `/var/log/dpkg.log` log.
```
# lnav /var/log/dpkg.log
```
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png)
][6]
#### To view multiple log files with lnav
To view multiple log files with lnav, add the log file paths one by one, separated by a space, after the lnav command. For example, we are going to view the `/var/log/dpkg.log` & `/var/log/kern.log` logs.
The color bars on the left-hand side help to show which file a message belongs to; alternatively, the top bar also shows the current log file name. Most applications open multiple windows, or split the window horizontally or vertically, to display more than one log, but lnav does it differently: it displays multiple logs in the same window, interleaved by date.
```
# lnav /var/log/dpkg.log /var/log/kern.log
```
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png)
][7]
#### To view older/compressed logs with lnav
To view older or compressed logs, which lnav decompresses (zip, gzip, bzip) on the fly, add the `-r` option to the lnav command.
```
# lnav -r /var/log/Xorg.0.log.old.gz
```
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png)
][8]
#### Histogram view
First run `lnav`, then hit `i` to switch to/from the histogram view.
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png)
][9]
#### View log parser results
First run `lnav`, then hit `p` to toggle the display of the log parser results.
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png)
][10]
#### Syntax Highlighting
You can search for any given string, and it will be highlighted on screen. First run `lnav`, then hit `/` and type the string you want to grep for. For testing purposes, I'm searching for the string `Default`; see the screenshot below.
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png)
][11]
#### Tab-completion
The command prompt supports tab-completion for almost all operations. For example, when doing a search, you can tab-complete words that are displayed on screen rather than having to copy & paste. For testing purposes, I'm searching for the string `/var/log/Xorg`; see the screenshot below.
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png)
][12]
--------------------------------------------------------------------------------
via: http://www.2daygeek.com/install-and-use-advanced-log-file-viewer-navigator-lnav-in-linux/
作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.2daygeek.com/author/magesh/
[1]:http://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
[2]:http://www.2daygeek.com/author/magesh/
[3]:http://lnav.org/
[4]:https://github.com/tstack/lnav/releases
[5]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png
[6]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png
[7]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png
[8]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png
[9]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png
[10]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png
[11]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png
[12]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png

View File

@ -1,3 +1,4 @@
ucasFL translating
### Record and Replay Terminal Session with Asciinema on Linux
![](https://linuxconfig.org/images/asciimena-video-example.jpg?58942057)

View File

@ -1,131 +0,0 @@
translating by mudongliang
# OpenSUSE Leap 42.2 Gnome - Better but not really
Updated: February 6, 2017
It is time to give Leap a second chance. Let me be extra corny. Give Leap a chance. Yes. Well, several weeks ago, I reviewed the Plasma edition of the latest [openSUSE][1] release, and while it was busy firing all cannon, like a typical Stormtrooper, most of the beams did not hit the target. It was a fairly mediocre distro, delivering everything but then stopping just short of the goodness mark.
I will now conduct a Gnome experiment. Load the distro with a fresh new desktop environment, and see how it behaves. We did something rather similar with CentOS recently, with some rather surprising results. Hint. Maybe we will get lucky. Let's do it.
![Teaser](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-teaser.jpg)
### Gnome it up
You can install new desktop environments by checking the Patterns tab in YaST > Software Management. Specifically, you can install Gnome, Xfce, LXQt, MATE, and others. A very simple procedure worth some 900 MB of disk data. No errors, no woes.
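If you prefer the command line, the same can be achieved with zypper's pattern support. A sketch follows; pattern names can differ between releases, so list the available ones first:
```
# list the available desktop patterns, then install the Gnome one
zypper search -t pattern gnome
sudo zypper install -t pattern gnome
```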
![Patterns, Gnome](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-patterns.png)
### Pretty Gnome stuff
I spent a short period of time taming openSUSE. Having had a lot of experience with [Fedora 24][2] doing this exact same stuff, i.e. [pimping][3], the procedure was rather fast and simple. Get some Gnome [extensions][4] first. Keep on low fire for 20 minutes. Stir and serve in clay bowls.
For dessert, launch Gnome Tweak Tool and add the window buttons. Most importantly, install the abso-serious-lutely needed, life-saving [Dash to Dock][5] extension, because then you can finally work like a human being without that maddening lack of efficiency called Activities. Digest, toss in some fresh [icons][6], and Bob's our uncle. All in all, it took me exactly 42 minutes and 12 seconds to get this completed. Get it? 42.2 minutes. OMGZ!
![Gnome 1](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-1.jpg)
![Gnome 2](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-2.jpg)
### Other customization and tweaks
I actually used Breeze window decorations in Gnome, and this seems to work very well. So much better than trying to customize Plasma. Behold and weep, for the looks were dire and pure!
![Gnome 3](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-3.jpg)
![Gnome 4](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-4.jpg)
### Smartphone support
So much better than Plasma - both [iPhone][7] and [Ubuntu Phone][8] were correctly identified and mounted. This reminds me of all the discrepancies and inconsistencies in the behavior of the [KDE][9] and [Gnome][10] editions of CentOS 7.2. So this definitely crosses the boundaries of specific platforms. It has everything to do with the desktop environment.
![Ubuntu Phone](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-ubuntu-phone.jpg)
The one outstanding bug is, you need to purge the icon cache sometimes, or you will end up with old icons in file managers. There will be a whole article on this coming soon.
### Multimedia
No luck. Same problems like the Plasma edition. Missing dependencies. Can't have H.264 codecs, meaning you cannot really watch 99% of all the things that you need. That's like saying, no Internet for a month.
![Failed codecs setup](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-failed-codecs.png)
### Resource utilization
The Gnome edition is faster than the Plasma one, even with the Compositor turned off, and ignoring the KWin crashes and freezes. The CPU ticks at about 2-3%, and memory hovers around the 900MB mark. Middle of the road results, I say.
![Resources](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-resources.jpg)
### Battery usage
Worse than Plasma actually. Not sure why. But even with the brightness adjusted to about 50%, Leap Gnome gave my G50 only about 2.5 hours of electronic love. I did not explore as to where it all gets wasted, but it sure does.
![Battery usage](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-battery.jpg)
### Weird issues
There were also some glitches and errors. For instance, the desktop keeps on asking me for the Wireless password, maybe because Gnome does not handle KWallet very well or something. Also, KWin was left running after I logged out of a Plasma session, eating a good solid 100% CPU until I killed it. Such a disgrace.
![KWin leftover](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-kwin-leftover.jpg)
### Hardware support
Suspend & resume, alles gut. I did not experience network drops in the Gnome version yet. The webcam works, too. In general, hardware support seems quite decent. Bluetooth works, though. Yay! Maybe we should label this under networking? To wit.
![Webcam](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-webcam.jpg)
![Bluetooth works](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-bt-works.png)
### Networking
Samba printing? You get that same, lame applet like in [Yakkety Yak][11], which all gets messed up visually. But then it says no print shares, check firewall. Ah whatever. It's no longer 1999. Being able to print is not a privilege, it's a basic human right. People have staged revolutions over far less. And I cannot take a screenshot of this. That bad.
### The rest of it?
All in all, it was a standard Gnome desktop, with its slightly mentally challenged approach to computing and ergonomics, tamed through the rigorous use of extensions. It is a little friendlier than the Plasma version, and you get better overall results with most of the normal, everyday stuff. Then you get stumped by a silly lack of options that Plasma has in overwhelming abundance. But then you remember your desktop isn't freezing every minute or so, and that's a definite bonus.
### Conclusion
OpenSUSE Leap 42.2 Gnome is a better product than its Plasma counterpart, and no mistake. It is more stable, it is faster, more elegant, more easily customizable, and most of the critical everyday functions actually work. For example, you can print to Samba, if you are inclined to fight the firewall, copy files to Samba without losing timestamps, use Bluetooth, use your Ubuntu Phone, and all this without the crippling effects of constant crashes. The entire stack is just more fully featured and better supported.
However, Leap is still only a reasonable release and nothing more. It struggles in many core areas that other distros do with more panache and elegance, and there are some big, glaring problems in the overall product that are a direct result of bad QA. At the very least, this lack of quality has been an almost consistent element with openSUSE these past few years. Now and then, you get a decent hatchling, but most of them are just average. That's probably the word that best defines openSUSE Leap. Average. You should try and see for yourself. You will most likely not be amazed. Such a shame, because for me, SUSE has a sweet spot, and yet, it stubbornly refuses to rekindle the love. 6/10\. Have a go, play with your emotions.
Cheers.
--------------------------------------------------------------------------------
作者简介:
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
Please see my full list of open-source projects, publications and patents, just scroll down.
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-------------
via: http://www.dedoimedo.com/computers/opensuse-42-2-gnome.html
作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.dedoimedo.com/computers/opensuse-42-2.html
[2]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
[3]:http://www.dedoimedo.com/computers/fedora-24-pimp.html
[4]:http://www.dedoimedo.com/computers/fedora-23-extensions.html
[5]:http://www.dedoimedo.com/computers/gnome-3-dash.html
[6]:http://www.dedoimedo.com/computers/fedora-24-pimp-more.html
[7]:http://www.dedoimedo.com/computers/iphone-6-after-six-months.html
[8]:http://www.dedoimedo.com/computers/ubuntu-phone-sep-2016.html
[9]:http://www.dedoimedo.com/computers/lenovo-g50-centos-kde.html
[10]:http://www.dedoimedo.com/computers/lenovo-g50-centos-gnome.html
[11]:http://www.dedoimedo.com/computers/ubuntu-yakkety-yak.html

View File

@ -1,3 +1,5 @@
translating---geekpi
OpenVAS - Vulnerability Assessment install on Kali Linux
============================================================

View File

@ -1,297 +0,0 @@
ictlyh Translating
How to Make Vim Editor as Bash-IDE Using bash-support Plugin in Linux
============================================================
An IDE ([Integrated Development Environment][1]) is simply a software that offers much needed programming facilities and components in a single program, to maximize programmer productivity. IDEs put forward a single program in which all development can be done, enabling a programmer to write, modify, compile, deploy and debug programs.
In this article, we will describe how to [install and configure Vim editor][2] as a Bash-IDE using bash-support vim plug-in.
#### What is bash-support.vim plug-in?
bash-support is a highly-customizable vim plug-in, which allows you to insert: file headers, complete statements, comments, functions, and code snippets. It also enables you to perform syntax checking, make a script executable, start a debugger simply with a keystroke; do all this without closing the editor.
It generally makes bash scripting fun and enjoyable through organized and consistent writing/insertion of file content using shortcut keys (mappings).
The current version plug-in is 4.3, version 4.0 was a rewriting of version 3.12.1; versions 4.0 or better, are based on a comprehensively new and more powerful template system, with changed template syntax unlike previous versions.
### How To Install Bash-support Plug-in in Linux
Start by downloading the latest version of the [bash-support plug-in][19] using the command below:
```
$ cd Downloads
$ curl http://www.vim.org/scripts/download_script.php?src_id=24452 >bash-support.zip
```
Then install it as follows; create the `.vim` directory in your home folder (in case it doesnt exist), move into it and extract the contents of bash-support.zip there:
```
$ mkdir ~/.vim
$ cd .vim
$ unzip ~/Downloads/bash-support.zip
```
Next, activate it from the `.vimrc` file:
```
$ vi ~/.vimrc
```
By inserting the line below:
```
filetype plugin on
set number    " optionally show line numbers in vim
```
### How To Use Bash-support plug-in with Vim Editor
To simplify its usage, the frequently used constructs as well as certain operations can be inserted/performed with key mappings respectively. The mappings are described in ~/.vim/doc/bashsupport.txt and ~/.vim/bash-support/doc/bash-hotkeys.pdf or ~/.vim/bash-support/doc/bash-hotkeys.tex files.
##### Important:
1. All mappings (a `\` leader plus character(s) combination) are filetype specific: they only work with sh files, in order to avoid conflicts with mappings from other plug-ins.
2. Typing speed matters: when using a key mapping, the combination of the leader (`\`) and the following character(s) is only recognized for a short time (roughly three seconds, depending on your settings).
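If that recognition window feels too short, Vim's `timeoutlen` option controls how many milliseconds it waits for a mapped key sequence to complete. A sketch you could add to `~/.vimrc`; the 3000 ms value is just an example:
```
" wait up to 3 seconds for a mapped key sequence to complete
set timeoutlen=3000
```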
Below are certain remarkable features of this plug-in that we will explain and learn how to use:
#### How To Generate an Automatic Header for New Scripts
Look at the sample header below, to have this header created automatically in all your new bash scripts, follow the steps below.
[
![Script Sample Header Options](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][3]
Script Sample Header Options
Start by setting your personal details (author name, author reference, organization, company etc). Use the map `\ntw` inside a Bash buffer (open a test script as the one below) to start the template setup wizard.
Select option (1) to setup the personalization file, then press [Enter].
```
$ vi test.sh
```
[
![Set Personalizations in Scripts File](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][4]
Set Personalizations in Scripts File
Afterwards, hit [Enter] again. Then select the option (1) one more time to set the location of the personalization file and hit [Enter].
[
![Set Personalization File Location](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][5]
Set Personalization File Location
The wizard will copy the template file .vim/bash-support/rc/personal.templates to .vim/templates/personal.templates and open it for editing, where you can insert your details.
Press `i` to insert the appropriate values within the single quotes as shown in the screenshot.
[
![Add Info in Script Header](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][6]
Add Info in Script Header
Once you have set the correct values, type `:wq` to save and exit the file. Close the Bash test script, open another script to check the new configuration. The file header should now have your personal details similar to that in the screen shot below:
```
$ vi test2.sh
```
[
![Auto Adds Header to Script](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][7]
Auto Adds Header to Script
#### Make Bash-support Plug-in Help Accessible
To do this, type the command below on the Vim command line and press [Enter], it will create a file .vim/doc/tags:
```
:helptags $HOME/.vim/doc/
```
[
![Add Plugin Help in Vi Editor](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][8]
Add Plugin Help in Vi Editor
#### How To Insert Comments in Shell Scripts
To insert a framed comment, type `\cfr` in normal mode:
[
![Add Comments to Scripts](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][9]
Add Comments to Scripts
#### How To Insert Statements in a Shell Script
The following are key mappings for inserting statements (`n`  normal mode, `i`  insert mode):
1. `\sc`  case in … esac (n, i)
2. `\sei`  elif then (n, i)
3. `\sf`  for in do done (n, i, v)
4. `\sfo`  for ((…)) do done (n, i, v)
5. `\si`  if then fi (n, i, v)
6. `\sie`  if then else fi (n, i, v)
7. `\ss`  select in do done (n, i, v)
8. `\su`  until do done (n, i, v)
9. `\sw`  while do done (n, i, v)
10. `\sfu`  function (n, i, v)
11. `\se`  echo -e “…” (n, i, v)
12. `\sp`  printf “…” (n, i, v)
13. `\sa`  array element, ${.[.]} (n, i, v) and many more array features.
#### Insert a Function and Function Header
Type `\sfu` to add a new empty function, then add the function name and press [Enter] to create it. Afterwards, add your function code.
[
![Insert New Function in Script](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][10]
Insert New Function in Script
To create a header for the function above, type `\cfu`, enter name of the function, click [Enter] and fill in the appropriate values (name, description, parameters and returns):
[
![Create Header Function in Script](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][11]
Create Header Function in Script
#### More Examples of Adding Bash Statements
Below is an example showing insertion of an if statement using `\si`:
[
![Add Insert Statement to Script](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][12]
Add Insert Statement to Script
Next example showing addition of an echo statement using `\se`:
[
![Add echo Statement to Script](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][13]
Add echo Statement to Script
#### How To Use Run Operation in Vi Editor
The following is a list of some run operations key mappings:
1. `\rr`  update file, run script (n, i)
2. `\ra`  set script cmd line arguments (n, i)
3. `\rc`  update file, check syntax (n, i)
4. `\rco`  syntax check options (n, i)
5. `\rd`  start debugger (n, i)
6. `\re`  make script executable/not executable (*) (n, i)
#### Make Script Executable
After writing script, save it and type `\re` to make it executable by pressing [Enter].
[
![Make Script Executable](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][14]
Make Script Executable
#### How To Use Predefined Code Snippets To a Bash Script
Predefined code snippets are files that contain already written code meant for a specific purpose. To add code snippets, type `\nr` and `\nw` to read/write predefined code snippets. Issue the command that follows to list default code snippets:
```
$ ls ~/.vim/bash-support/codesnippets/
```
[
![List of Code Snippets](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][15]
List of Code Snippets
To use a code snippet such as free-software-comment, type `\nr` and use auto-completion feature to select its name, and press [Enter]:
[
![Add Code Snippet to Script](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][16]
Add Code Snippet to Script
#### Create Custom Predefined Code Snippets
It is possible to write your own code snippets under ~/.vim/bash-support/codesnippets/. Importantly, you can also create your own code snippets from normal script code:
1. choose the section of code that you want to use as a code snippet, then press `\nw`, and give it a filename.
2. to read it, type `\nr` and use the filename to add your custom code snippet.
#### View Help For the Built-in and Command Under the Cursor
To display help, in normal mode, type:
1. `\hh`  for built-in help
2. `\hm`  for a command help
[
![View Built-in Command Help](http://www.tecmint.com/wp-content/uploads/2017/02/View-Built-in-Command-Help.png)
][17]
View Built-in Command Help
For more reference, read through these files:
```
~/.vim/doc/bashsupport.txt #copy of online documentation
~/.vim/doc/tags
```
Visit the Bash-support plug-in Github repository: [https://github.com/WolfgangMehner/bash-support][18]
Visit Bash-support plug-in on the Vim Website: [http://www.vim.org/scripts/script.php?script_id=365][19]
That's all for now. In this article, we described the steps for installing and configuring Vim as a Bash IDE in Linux using the bash-support plug-in. Check out the other exciting features of this plug-in, and do share them with us in the comments.
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/use-vim-as-bash-ide-using-bash-support-in-linux/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/best-linux-ide-editors-source-code-editors/
[2]:http://www.tecmint.com/vi-editor-usage/
[3]:http://www.tecmint.com/wp-content/uploads/2017/02/Script-Header-Options.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/02/Set-Personalization-in-Scripts.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/02/Set-Personalization-File-Location.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-Info-in-Script-Header.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/02/Auto-Adds-Header-to-Script.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-Plugin-Help-in-Vi-Editor.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-Comments-to-Scripts.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/02/Insert-New-Function-in-Script.png
[11]:http://www.tecmint.com/wp-content/uploads/2017/02/Create-Header-Function-in-Script.png
[12]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-Insert-Statement-to-Script.png
[13]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-echo-Statement-to-Script.png
[14]:http://www.tecmint.com/wp-content/uploads/2017/02/make-script-executable.png
[15]:http://www.tecmint.com/wp-content/uploads/2017/02/list-of-code-snippets.png
[16]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-Code-Snippet-to-Script.png
[17]:http://www.tecmint.com/wp-content/uploads/2017/02/View-Built-in-Command-Help.png
[18]:https://github.com/WolfgangMehner/bash-support
[19]:http://www.vim.org/scripts/script.php?script_id=365

View File

@ -1,129 +0,0 @@
translating by Flowsnow
# [Use tmux for a more powerful terminal][3]
![](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/tmux-945x400.jpg)
Some Fedora users spend most or all their time at a [command line][4] terminal. The terminal gives you access to your whole system, as well as thousands of powerful utilities. However, it only shows you one command line session at a time by default. Even with a large terminal window, the entire window only shows one session. This wastes space, especially on large monitors and high resolution laptop screens. But what if you could break up that terminal into multiple sessions? This is precisely where  _tmux_  is handy — some say indispensable.
### Install and start  _tmux_
The  _tmux_  utility gets its name from being a terminal muxer, or multiplexer. In other words, it can break your single terminal session into multiple sessions. It manages both  _windows_  and  _panes_ :
* A  _window_  is a single view — that is, an assortment of things shown in your terminal.
* A  _pane_  is one part of that view, often a terminal session.
To get started, install the _tmux_ utility on your system. You'll need to have _sudo_ set up for your user account ([check out this article][5] for instructions if needed).
```
sudo dnf -y install tmux
```
Run the utility to get started:
```
tmux
```
### The status bar
At first, it might seem like nothing happens, other than a status bar that appears at the bottom of the terminal:
![Start of tmux session](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-41.png)
The bottom bar shows you:
* _[0]_ - You're in the first session that was created by the _tmux_ server. Numbering starts with 0. The server tracks all sessions whether they're still alive or not.
* _0:username@host:~_ - Information about the first window of that session. Numbering starts with 0. The terminal in the active pane of the window is owned by _username_ at hostname _host_. The current directory is _~_ (the home directory).
* _*_ - Shows that you're currently in this window.
* _"hostname"_ - The hostname of the _tmux_ server you're using.
* Also, the date and time on that particular host is shown.
The information bar will change as you add more windows and panes to the session.
### Basics of tmux
Stretch your terminal window to make it much larger. Now lets experiment with a few simple commands to create additional panes. All commands by default start with  _Ctrl+b_ .
* Hit _Ctrl+b, "_ to split the current single pane horizontally. Now you have two command line panes in the window, one on top and one on bottom. Notice that the new bottom pane is your active pane.
* Hit  _Ctrl+b, %_  to split the current pane vertically. Now you have three command line panes in the window. The new bottom right pane is your active pane.
![tmux window with three panes](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-54-59.png)
Notice the highlighted border around your current pane. To navigate around panes, do any of the following:
* Hit  _Ctrl+b _ and then an arrow key.
* Hit  _Ctrl+b, q_ . Numbers appear on the panes briefly. During this time, you can hit the number for the pane you want.
Now, try using the panes to run different commands. For instance, try this:
* Use  _ls_  to show directory contents in the top pane.
* Start  _vi_  in the bottom left pane to edit a text file.
* Run  _top_  in the bottom right pane to monitor processes on your system.
The display will look something like this:
![tmux session with three panes running different commands](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/Screenshot-from-2017-02-04-12-57-51.png)
So far, this example has only used one window with multiple panes. You can also run multiple windows in your session.
* To create a new window, hit  _Ctrl+b, c._  Notice that the status bar now shows two windows running. (Keen readers will see this in the screenshot above.)
* To move to the previous window, hit  _Ctrl+b, p._
* If you want to move to the next window, hit  _Ctrl+b, n_ .
* To immediately move to a specific window (0-9), hit  _Ctrl+b_  followed by the window number.
If you're wondering how to close a pane, simply quit that specific command line shell using _exit_, _logout_, or _Ctrl+d._ Once you close all panes in a window, that window disappears as well.
### Detaching and attaching
One of the most powerful features of _tmux_ is the ability to detach from and reattach to a session. You can leave your windows and panes running when you detach. Moreover, you can even log out of the system entirely. Then later you can log in to the same system, reattach to the _tmux_ session, and see all your windows and panes where you left them. The commands you were running stay running while you're detached.
To detach from a session, hit _Ctrl+b, d._ The session disappears and you'll be back at the standard single shell. To reattach to the session, use this command:
```
tmux attach-session
```
This function is also a lifesaver when your network connection to a host is shaky. If your connection fails, all the processes in the session will stay running. Once your connection is back up, you can resume your work as if nothing happened.
And if that weren't enough, on top of multiple windows and panes per session, you can also run multiple sessions. You can list these and then attach to the correct one by number or name:
```
tmux list-sessions
```
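For example, here is a quick sketch of a named-session workflow; the session name `work` is arbitrary:
```
tmux new-session -s work      # start a new session named "work"
# ... work inside it, then press Ctrl+b, d to detach ...
tmux list-sessions            # show all running sessions
tmux attach-session -t work   # reattach to the session by name
```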
### Further reading
This article only scratches the surface of _tmux_'s capabilities. You can manipulate your sessions in many other ways:
* Swap one pane with another
* Move a pane to another window (in the same or a different session!)
* Set keybindings that perform your favorite commands automatically
* Configure a _~/.tmux.conf_ file with your favorite settings by default so each new session looks the way you like (see the sketch after this list)
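A minimal _~/.tmux.conf_ might look like the sketch below; every setting shown is an illustrative example, not a default you must adopt:
```
# ~/.tmux.conf - a few common tweaks
set -g mouse on           # enable mouse support for selecting and resizing panes
set -g base-index 1       # number windows starting at 1 instead of 0
bind | split-window -h    # Ctrl+b, | makes a left/right split
bind - split-window -v    # Ctrl+b, - makes a top/bottom split
```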
For a full explanation of all commands, check out these references:
* The official [manual page][1]
* This [eBook][2] all about  _tmux_
--------------------------------------------------------------------------------
作者简介:
Paul W. Frields has been a Linux user and enthusiast since 1997, and joined the Fedora Project in 2003, shortly after launch. He was a founding member of the Fedora Project Board, and has worked on documentation, website publishing, advocacy, toolchain development, and maintaining software. He joined Red Hat as Fedora Project Leader from February 2008 to July 2010, and remains with Red Hat as an engineering manager. He currently lives with his wife and two children in Virginia.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
作者:[Paul W. Frields][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pfrields/
[1]: http://man.openbsd.org/OpenBSD-current/man1/tmux.1
[2]: https://pragprog.com/book/bhtmux2/tmux-2
[3]: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
[4]: http://www.cryptonomicon.com/beginning.html
[5]: https://fedoramagazine.org/howto-use-sudo/

View File

@ -1,111 +0,0 @@
# Recover from a badly corrupt Linux EFI installation
In the past decade or so, Linux distributions would occasionally fail before, during and after the installation, but I was always able to somehow recover the system and continue working normally. Well, [Solus][1] broke my laptop. Literally.
GRUB rescue. No luck. Reinstall. No luck still! Ubuntu refused to install, complaining about the target device not being this or that. Wow. Something like this has never happened to me before. Effectively my test machine had become a useless brick. Should we despair? No, absolutely not. Let me show you how you can fix it.
### Problem in more detail
It all started with Solus trying to install its own bootloader - goofiboot. No idea what, who or why, but it failed to complete successfully, and I was left with a system that would not boot. After BIOS, I would get a GRUB rescue shell.
![Installation failed](http://www.dedoimedo.com/images/computers-years/2016-2/solus-installation-failed.png)
I tried manually working in the rescue shell, using this and that command, very similar to what I have outlined in my extensive [GRUB2 tutorial][2]. This did not really work. My next attempt was to recover from a live CD, again following my own advice, as I have outlined in my [GRUB2 & EFI tutorial][3]. I set up a new entry, and made sure to mark it active with the efibootmgr utility. Just as we did in the guide, and this has served us well before. Alas, this recovery method did not work, either.
I tried to perform a complete Ubuntu installation, into the same partition used by Solus, expecting the installer to sort out some of the fine details. But Ubuntu was not able to finish the install. It complained about: failed to install into /target. This was a first. What now?
### Manually clean up EFI partition
Obviously, something is very wrong with our EFI partition. To briefly recap: if you are using UEFI, then you must have a separate FAT32-formatted partition. This partition is used to store EFI boot images. For instance, when you install Fedora, the Fedora boot image is copied into the EFI subdirectory. Every operating system is stored in a folder of its own, e.g. /boot/efi/EFI/<os version>/.
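To see this on your own machine, you can locate the EFI System Partition and list its contents. A sketch; the device names and the mount point are assumptions that vary from system to system:
```
lsblk -f                # look for the vfat-formatted partition, e.g. /dev/sda1 or /dev/sda2
ls /boot/efi/EFI/       # one subdirectory per installed operating system
```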
![EFI partition contents](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-efi-partition-contents.png)
On my [G50][4] machine, there were multiple entries, from a variety of my distro tests, including: centos, debian, fedora, mx-15, suse, ubuntu, zorin, and many others. There was also a goofiboot folder. However, the efibootmgr was not showing a goofiboot entry in its menu. There was obviously something wrong with the whole thing.
```
sudo efibootmgr -d /dev/sda
BootCurrent: 0001
Timeout: 0 seconds
BootOrder: 0001,0005,2003,0000,2001,2002
Boot0000* Lenovo Recovery System
Boot0001* ubuntu
Boot0003* EFI Network 0 for IPv4 (68-F7-28-4D-D1-A1)
Boot0004* EFI Network 0 for IPv6 (68-F7-28-4D-D1-A1)
Boot0005* Windows Boot Manager
Boot0006* fedora
Boot0007* suse
Boot0008* debian
Boot0009* mx-15
Boot2001* EFI USB Device
Boot2002* EFI DVD/CDROM
Boot2003* EFI Network
...
```
P.S. The output above was generated running the command in a LIVE session!
I decided to clean up all the non-default and non-Microsoft entries and start fresh. Obviously, something was corrupt, and preventing new distros from setting up their own bootloader. So I deleted all the folders in the /boot/efi/EFI partition except Boot and Windows. And then, I also updated the boot manager by removing all the extras.
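In shell terms, the cleanup looked roughly like this. A sketch based on the folder names listed earlier; double-check your own listing before deleting anything:
```
cd /boot/efi/EFI
# remove the stale distro folders, keeping Boot and the Windows entries intact
sudo rm -rf centos debian fedora mx-15 suse ubuntu zorin goofiboot
```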
```
efibootmgr -b <hex> -B
```
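For example, based on the efibootmgr listing above, `sudo efibootmgr -b 0006 -B` would delete the stale fedora entry (Boot0006); repeat for each leftover entry.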
Lastly, I reinstalled Ubuntu and closely monitored the progress of the GRUB installation and setup. This time, things completed fine. There were some errors with several invalid entries, as can be expected, but the whole sequence completed just fine.
![Install errors](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-errors.jpg)
![Install successful](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-successful.jpg)
### More reading
If you don't fancy this manual fix, you may want to read:
* The [Boot-Info][5] page, with automated tools to help you recover your system
* The [Boot-repair-cd][6] automatic repair tool download page
### Conclusion
If you ever encounter a situation where your system is badly botched due to an EFI partition clobbering, then you may want to follow the advice in this guide. Delete all non-default entries. Make sure you do not touch anything Microsoft, if you're multi-booting with Windows. Then update the boot menu accordingly so the baddies are removed. Rerun the installation setup for your desired distro, or try to fix with a less stringent method as explained before.
I hope this little article saves you some bacon. I was quite annoyed by what Solus did to my system. This is not something that should happen, and the recovery ought to be simpler. However, while things may seem dreadful, the fix is not difficult. You just need to delete the corrupt files and start again. Your data should not be affected, and you will be able to promptly boot into a running system and continue working. There you go.
Cheers.
--------------------------------------------------------------------------------
作者简介:
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
Please see my full list of open-source projects, publications and patents, just scroll down.
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-------------
via: http://www.dedoimedo.com/computers/grub2-efi-corrupt-part-recovery.html
作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.dedoimedo.com/computers/solus-1-2-review.html
[2]:http://www.dedoimedo.com/computers/grub-2.html
[3]:http://www.dedoimedo.com/computers/grub2-efi-recovery.html
[4]:http://www.dedoimedo.com/computers/lenovo-g50-distros-second-round.html
[5]:https://help.ubuntu.com/community/Boot-Info
[6]:https://sourceforge.net/projects/boot-repair-cd/

View File

@ -1,235 +0,0 @@
How to Secure a FTP Server Using SSL/TLS for Secure File Transfer in CentOS 7
============================================================
By its original design, FTP (File Transfer Protocol) is not secure, meaning it doesn't encrypt the data being transmitted between two machines, nor the users' credentials. This poses a massive threat to data as well as server security.
In this tutorial, we will explain how to manually enable data encryption services on an FTP server in CentOS/RHEL 7 and Fedora; we will go through the various steps of securing VSFTPD (Very Secure FTP Daemon) services using SSL/TLS certificates.
#### Prerequisites:
1. You must have [installed and configured a FTP server in CentOS 7][1]
Before we start, note that all the commands in this tutorial will be run as root, otherwise, use the [sudo command][2] to gain root privileges if you are not controlling the server using the root account.
### Step 1. Generating SSL/TLS Certificate and Private Key
1. We need to start by creating a subdirectory under: `/etc/ssl/` where we will store the SSL/TLS certificate and key files:
```
# mkdir /etc/ssl/private
```
2. Then run the command below to create the certificate and key for vsftpd in a single file; here is the explanation of each flag used:
1. req - a command for X.509 Certificate Signing Request (CSR) management.
2. x509 - X.509 certificate data management.
3. days - defines the number of days the certificate is valid for.
4. newkey - specifies the certificate key processor.
5. rsa:2048 - the RSA key processor; it will generate a 2048-bit private key.
6. keyout - sets the key storage file.
7. out - sets the certificate storage file; note that both the certificate and the key are stored in the same file: /etc/ssl/private/vsftpd.pem.
```
# openssl req -x509 -nodes -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem -days 365 -newkey rsa:2048
```
The above command will ask you to answer the questions below, remember to use values that apply to your scenario.
```
Country Name (2 letter code) [XX]:IN
State or Province Name (full name) []:Lower Parel
Locality Name (eg, city) [Default City]:Mumbai
Organization Name (eg, company) [Default Company Ltd]:TecMint.com
Organizational Unit Name (eg, section) []:Linux and Open Source
Common Name (eg, your name or your server's hostname) []:tecmint
Email Address []:admin@tecmint.com
```
### Step 2. Configuring VSFTPD To Use SSL/TLS
3. Before we perform any VSFTPD configuration, let's open port 990 to allow TLS connections, and the port range 40000-50000 for the passive ports we will define in the VSFTPD configuration file:
```
# firewall-cmd --zone=public --permanent --add-port=990/tcp
# firewall-cmd --zone=public --permanent --add-port=40000-50000/tcp
# firewall-cmd --reload
```
4. Now, open the VSFTPD config file and specify the SSL details in it:
```
# vi /etc/vsftpd/vsftpd.conf
```
Look for the option ssl_enable and set its value to `YES` to activate the use of SSL; in addition, since TLS is more secure than SSL, we will restrict VSFTPD to employ TLS instead, using the ssl_tlsv1_2 option:
```
ssl_enable=YES
ssl_tlsv1_2=YES
ssl_sslv2=NO
ssl_sslv3=NO
```
5. Then, add the lines below to define the location of the SSL certificate and key file:
```
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
```
6. Next, we have to prevent anonymous users from using SSL, then force all non-anonymous logins to use a secure SSL connection for data transfer and to send the password during login:
```
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
```
7. In addition, we can add the options below to boost FTP server security. When the option require_ssl_reuse is set to `YES`, all SSL data connections are required to exhibit SSL session reuse, proving that they know the same master secret as the control channel. However, this requirement breaks many FTP clients, so we have to turn it off:
```
require_ssl_reuse=NO
```
Again, we need to select which SSL ciphers VSFTPD will permit for encrypted SSL connections with the ssl_ciphers option. This can greatly limit the efforts of attackers who try to force a particular cipher in which they have probably discovered vulnerabilities:
```
ssl_ciphers=HIGH
```
8. Now, set the port range (min and max port) of passive ports.
```
pasv_min_port=40000
pasv_max_port=50000
```
9. Optionally, allow SSL debugging, meaning openSSL connection diagnostics are recorded to the VSFTPD log file with the debug_ssl option:
```
debug_ssl=YES
```
Save all the changes and close the file. Then let's restart the VSFTPD service:
```
# systemctl restart vsftpd
```
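Optionally, you can confirm from another machine that the server now negotiates TLS by probing the control channel with OpenSSL's s_client tool (assuming your OpenSSL build supports `-starttls ftp`; the IP address matches the example server used below):
```
# request a STARTTLS upgrade on the FTP control port and print the server certificate
openssl s_client -connect 192.168.56.10:21 -starttls ftp
```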
### Step 3: Testing FTP server With SSL/TLS Connections
10. After doing all the above configurations, test if VSFTPD is using SSL/TLS connections by attempting to use FTP from the command line as follows:
```
# ftp 192.168.56.10
Connected to 192.168.56.10 (192.168.56.10).
220 Welcome to TecMint.com FTP service.
Name (192.168.56.10:root) : ravi
530 Non-anonymous sessions must use encryption.
Login failed.
421 Service not available, remote server has closed connection
ftp>
```
[
![Verify FTP SSL Secure Connection](http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-Secure-Connection.png)
][3]
Verify FTP SSL Secure Connection
From the screenshot above, we can see an error informing us that VSFTPD only allows users to log in from clients that support encryption services.
The command-line FTP client does not offer encryption services, hence the error. So, to connect to the server securely, we need an FTP client that supports SSL/TLS connections, such as FileZilla.
### Step 4: Install FileZilla to Securely Connect to a FTP Server
11. FileZilla is a modern, popular and, importantly, cross-platform FTP client that supports SSL/TLS connections by default.
To install FileZilla in Linux, run the command below:
```
--------- On CentOS/RHEL/Fedora ---------
# yum install epel-release filezilla
--------- On Debian/Ubuntu ---------
$ sudo apt-get install filezilla
```
12. When the installation completes (or if you already have it installed), open it and go to File => Site Manager (or press `Ctrl+S`) to get the Site Manager interface below.
Click on the New Site button to add the connection details of a new site/host.
[
![Add New FTP Site in Filezilla](http://www.tecmint.com/wp-content/uploads/2017/02/Add-New-FTP-Site-in-Filezilla.png)
][4]
Add New FTP Site in Filezilla
13. Next, set the host/site name, add the IP address, define the protocol to use, encryption and logon type as in the screen shot below (use values that apply to your scenario):
```
Host: 192.168.56.10
Protocol: FTP File Transfer Protocol
Encryption: Require explicit FTP over TLS #recommended
Logon Type: Ask for password #recommended
User: username
```
[
![Add FTP Server Details in Filezilla](http://www.tecmint.com/wp-content/uploads/2017/02/Add-FTP-Server-Details-in-Filezilla.png)
][5]
Add FTP Server Details in Filezilla
14. Then click on Connect, enter the password when prompted, then verify the certificate being used for the SSL/TLS connection, and click `OK` once more to connect to the FTP server:
[
![Verify FTP SSL Certificate](http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-SSL-Certificate.png)
][6]
Verify FTP SSL Certificate
At this stage, we should have logged successfully into the FTP server over a TLS connection; check the connection status section of the interface below for more information.
[
![Connected to FTP Server Over TLS/SSL ](http://www.tecmint.com/wp-content/uploads/2017/02/connected-to-ftp-server-with-tls.png)
][7]
Connected to FTP Server Over TLS/SSL
15. Last but not least, try [transferring files from the local machine to the FTP server][8]; take a look at the lower pane of the FileZilla interface to view reports on the file transfers.
[
![Transfer Files Securely Using FTP](http://www.tecmint.com/wp-content/uploads/2017/02/Transfer-Files-Securely-Using-FTP.png)
][9]
Transfer Files Securely Using FTP
That's all! Always keep in mind that FTP is not secure by default unless we configure it to use SSL/TLS connections, as we showed you in this tutorial. Do share your thoughts about this tutorial/topic via the feedback form below.
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/secure-vsftpd-using-ssl-tls-on-centos/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-ftp-server-in-centos-7/
[2]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
[3]:http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-Secure-Connection.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-New-FTP-Site-in-Filezilla.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-FTP-Server-Details-in-Filezilla.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-SSL-Certificate.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/02/connected-to-ftp-server-with-tls.png
[8]:http://www.tecmint.com/sftp-command-examples/
[9]:http://www.tecmint.com/wp-content/uploads/2017/02/Transfer-Files-Securely-Using-FTP.png

View File

@ -1,142 +0,0 @@
XYenChi is translating
# Microsoft Office Online gets better - on Linux, too
One of the core things that will make or break your Linux experience is the lack of the Microsoft Office suite, well, for Linux. If you are forced to use Office products to make a living, and this applies to a very large number of people, you might not be able to afford open-source alternatives. Get the paradox?
Indeed, LibreOffice is a [great][1] free program, but what if your client, customer or boss demands Word and Excel files? Can you, indeed, [afford any mistakes][2] or errors or glitches in converting these files from ODT or whatnot into DOCX and such, and vice versa? This is a very tricky set of questions. Unfortunately, for most people, technically, this means Linux is off limits. Well, not quite.
![Teaser](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-teaser.png)
### Enter Microsoft Office Online, Enter Linux
For a number of years, Microsoft has had its cloud office offering. No news there. What makes this cool and relevant is, it's available through any modern browser interface, and this means Linux, too! I have also tested this [solution][3] a while back, and it worked great. I was able to use the product just fine, save files in their native format, or even export my documents in the ODF format, which is really nice.
I decided to revisit this suite and see how it's evolved in the past few years, and see whether it still likes Linux. My scapegoat for this experience was a [Fedora 25][4] instance, and I had Microsoft Office Online open in several tabs. I did this in parallel to testing [SoftMaker Office 2016][5]. Sounds like a lot of fun, and it was.
### First impressions
I have to say, I was pleased. The Office does not require any special plugins. No Silverlight or Flash or anything like that. Pure HTML and Javascript, and lots of it. Still, the interface is fairly responsive. The only thing I did not like was the gray background in Word documents, which can be exhausting after a while. Other than that, the suite was working fine, there were no delays, lags or weird, unexpected errors. But let us proceed slowly then, shall we.
The suite does require that you log in with an online account or a phone number - it does not have to be a Live or Hotmail email. Any one will do. If you also have a Microsoft [phone][6], then you can use the same account, and you will be able to sync your data. The account grants you 5 GB of OneDrive storage for free, as well. This is quite neat. Not stellar or super exciting, but rather decent.
![MS Office, welcome page](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-welcome-page.jpg)
You have access to a whole range of programs, including the mandatory trio - Word, Excel and Powerpoint, but then, the rest of the stuff is also available, including some new fancy stuff. Documents are auto-saved, but you can also download copies and convert to other formats, like PDF and ODF.
For me, this is excellent. And let me share a short personal story. I write my [fantasy][7] books using LibreOffice. But then, when I need to send them to a publisher for editing or proofreading, I need to convert them to DOCX. Alas, this requires Microsoft Office. With my [Linux problem solving book][8], I had to use Word from the start, because there was a lot of collaboration work required with my editor, which mandated the use of the proprietary solution. There are no emotions here. Only cold monetary and business considerations. Mistakes are not acceptable.
![Word, new document](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-word-new.png)
Having access to Office Online can give a lot of people the leeway they need for occasional, recreational use of Word and Excel and alike without having to buy the whole, expensive suite. If you are a daytime LibreOffice fan, you can be a nighttime party animal at the Microsoft Office Heartbreakers Club without a guilty conscience. When someone ships you a Word or Powerpoint file, you can upload and manipulate them online, then export as needed. Likewise, you can create your work online, send it to people with strict requirements, then grab yourself a copy in ODF, and work with LibreOffice if needed. The flexibility is quite useful, but that should not be your main driver. Still, for Linux people, this gives them a lot of freedom they do not normally have. Because even if they do want to use Microsoft Office, it simply isn't available as a native install.
![Save as, export options](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-save-as.jpg)
### Features, options, tools
I started hammering out a document - with all the fine trimming of a true jousting rouncer. I wrote some text, applied a style or three, hyperlinked some text, embedded an image, added a footnote, and then commented on my writing and even replied to myself in the best fashion of a poly-personality geek.
Apart from the gray background - and we will learn how to work around this in a nice yet nerdy way skunkworks style, because there isn't an option to tweak the background color in the browser interface - things were looking fine.
You even have Skype integrated into the suite, so you can chat and collaborate. Or rather collaborate and listen. Hue hue. Quite neat. The right-click button lets you select a few quick actions, including links, comments and translations. The last piece still needs a lot of work, because it did not quite give me what I expected. The translations are wonky.
![Skype active](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-skype-active.jpg)
![Right click options](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-right-click.png)
![Right click options, more](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-right-click-more.jpg)
![Translations, still wonky](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-translations.png)
You can also add images - including embedded Bing search, which will also, by default, filter images based on their licensing and re-distribution rights. This is neat, especially if you need to create a document and must avoid any copyright claims and such.
![Images, online search](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-images.jpg)
### More on comments, tracking
Quite useful. For realz. The online nature of this product also means changes and edits to the documents will be tracked by default, so you also have a basic level of versioning available. However, session edits are lost once you close the document.
![Comments](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-comments.jpg)
![Edit activity log](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-edit-activity.png)
The one error that will visibly come up - if you try to edit the document in Word or Excel on Linux, you will get prompted that you're being naughty, because this is not a supported action, for obvious reasons.
![Edit error](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-edit-error.jpg)
### Excel and friends
The practical workflow extends beyond Word. I also tried Excel, and it did as advertised, including having some neat and useful templates and such. It worked just fine, and there's no lag updating cells and formulas. You get most of the functionality you need and expect.
![Excel, interesting templates](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-excel.jpg)
![New blank spreadsheet](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-excel-new-spreadsheet.jpg)
![Excel, budget template](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-excel-budget.jpg)
### OneDrive
This is where you can create and organize folders and files, move documents about, and share them with your friends (if you have any) and colleagues. 5 GB for free, upgradeable for a fee, of course. Worked fine, overall. It does take a few moments to refresh and display contents. Open documents will not be deleted, so this may look like a bug, but it makes perfect sense from the computational perspective.
![OneDrive](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-onedrive.jpg)
### Help
If you get confused - or feel like being dominatrixed by AI, you can ask the cloud collective intelligence of the Redmond Borg ship for assistance. This is quite useful, if not as straightforward or laser-sharp as it can be. But the effort is benevolent.
![What to do, interactive help](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-what-to-do.png)
### Problems
During my three-hour adventure, I only encountered two glitches. One, during a document edit, the browser had a warning (yellow triangle) about an insecure element loaded and used in an otherwise secure HTTPS session. Two, I hit a snag of failing to create a new Excel document. A one-time issue, and it hasn't happened since.
![Document creation error](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-error.jpg)
### Conclusion
Microsoft Office Online is a great product, and better than it was when I tested it some two years ago. It's fairly snappy, it looks nice, it behaves well, the errors are far and few in between, and it offers genuine Microsoft Office compatibility even to Linux users, which can be of significant personal and business importance to some. I won't say this is the best thing that happened to humanity since VHS was invented, but it's a nice addition, and it bridges a big gap that Linux folks have faced since day one. Quite handy, and the ODF support is another neat touch.
Now, to make things even spicier, if you like this whole cloud concept thingie, you might also be interested in [Open365][9], a LibreOffice-based office productivity platform, with an added bonus of a mail client and image processing software, plus 20 GB free storage. Best of all, you can have both of these running in your browser, in parallel. All it takes is another tab or two.
Back to Microsoft, if you are a Linux person, you may actually require Microsoft Office products now and then. The easier way to enjoy them - or at the very least, use them when needed without having to commit to a full operating system stack - is through the online office suite. Free, elegant, and largely transparent. Worth checking out, provided you can put the ideological game aside. There you go. Enjoy thy clouden. Or something.
Cheers.
--------------------------------------------------------------------------------
作者简介:
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
Please see my full list of open-source projects, publications and patents, just scroll down.
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-------------
via: http://www.dedoimedo.com/computers/office-online-linux-better.html
作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.ocsmag.com/2015/02/16/libreoffice-4-4-review-finally-it-rocks/
[2]:http://www.ocsmag.com/2014/03/14/libreoffice-vs-microsoft-office-part-deux/
[3]:http://www.dedoimedo.com/computers/office-online-linux.html
[4]:http://www.dedoimedo.com/computers/fedora-25-gnome.html
[5]:http://www.ocsmag.com/2017/01/18/softmaker-office-2016-your-alternative-to-libreoffice/
[6]:http://www.dedoimedo.com/computers/microsoft-lumia-640.html
[7]:http://www.thelostwordsbooks.com/
[8]:http://www.dedoimedo.com/computers/linux-problem-solving-book.html
[9]:http://www.ocsmag.com/2016/08/17/open365/

View File

@ -1,267 +0,0 @@
ucasFL translating
Installation of Devuan Linux (Fork of Debian)
============================================================
Devuan Linux, the most recent fork of Debian, is a version of Debian that is designed to be completely free of systemd.
Devuan was announced towards the end of 2014 and has been actively developed over that time. The most recent release is the beta2 release, codenamed Jessie (yes, the same name as the current stable version of Debian).
The final version of the current stable release is said to be ready in early 2017. To read more about the project please visit the community's home page: [https://devuan.org/][1].
This article will walk through the installation of Devuans current release. Most of the packages available in Debian are available in Devuan allowing for a fairly seamless transition for Debian users to Devuan should they prefer the freedom to choose their initialization system.
#### System Requirements
Devuan, like Debian, is very light on system requirements. The biggest determining factor is the desktop environment the user wishes to use. This guide will assume that the user would like a flashier desktop environment and will suggest the following minimums:
1. At least 15GB of disk space; strongly encouraged to have more
2. At least 2GB of ram; more is encouraged
3. USB or CD/DVD boot support
4. Internet connection; installer will download files from the Internet
### Devuan Linux Installation
As with all of the author's guides, this guide will assume that a USB drive is available to use as the installation media. Take note that the USB drive should be as close to 4/8GB as possible and ALL DATA ON IT WILL BE REMOVED!
The author has had issues with larger USB drives but some may still work. Regardless, following the next few steps WILL RESULT IN DATA LOSS ON THE USB DRIVE.
Please be sure to backup all data before proceeding. This bootable Devuan Linux USB drive is going to be created from another Linux machine.
1. First, obtain the latest release of the Devuan installation ISO from [https://devuan.org/][2]. Alternatively, to download it from a Linux station, type the following commands:
```
$ cd ~/Downloads
$ wget -c https://files.devuan.org/devuan_jessie_beta/devuan_jessie_1.0.0-beta2_amd64_CD.iso
```
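Before flashing, it may be worth checking that the download is intact (a minimal sketch; compare the printed hash against the checksum published on the Devuan download site):

```
$ sha256sum ~/Downloads/devuan_jessie_1.0.0-beta2_amd64_CD.iso
```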
2. The commands above will download the installer ISO file to the user's Downloads folder. The next step is to write the ISO to a USB drive from which to boot the installer.
To accomplish this we can use the `dd` tool within Linux. First, though, the disk name of the USB drive needs to be located with the [lsblk command][3].
```
$ lsblk
```
[
![Find Device Name in Linux](http://www.tecmint.com/wp-content/uploads/2017/03/Find-Device-Name-in-Linux.png)
][4]
Find Device Name in Linux
With the name of the USB drive determined as `/dev/sdc`, the Devuan ISO can be written to the drive with the `dd` tool.
```
$ sudo dd if=~/Downloads/devuan_jessie_1.0.0-beta2_amd64_CD.iso of=/dev/sdc
```
Important: The above command requires root privileges, so utilize sudo or log in as the root user to run the command. Also, this command will REMOVE EVERYTHING on the USB drive. Be sure to back up needed data.
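On recent versions of coreutils, a slightly friendlier invocation shows write progress, and a final `sync` flushes the buffers before you unplug the drive (a sketch, still assuming the drive is `/dev/sdc`; drop `status=progress` on older systems):

```
$ sudo dd if=~/Downloads/devuan_jessie_1.0.0-beta2_amd64_CD.iso of=/dev/sdc bs=4M status=progress
$ sync # make sure all buffered data has been written before removing the drive
```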
3. Once the ISO is copied over to the USB drive, plug the USB drive into the respective computer that Devuan should be installed upon and proceed to boot to the USB drive.
Upon successful booting to the USB drive, the user will be presented with the following screen and should proceed with the Install or Graphical Install options.
This guide will be using the Graphical Install method.
[
![Devuan Graphic Installation](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Graphic-Installation.png)
][5]
Devuan Graphic Installation
4. Allow the installer to boot to the localization menus. Once here the user will be prompted with a string of windows asking about the users keyboard layout and language. Simply select the desired options to continue.
[
![Devuan Language Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Language-Selection.png)
][6]
Devuan Language Selection
[
![Devuan Location Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Location-Selection.png)
][7]
Devuan Location Selection
[
![Devuan Keyboard Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Keyboard-Configuration.png)
][8]
Devuan Keyboard Configuration
5. The next step is to provide the installer with the hostname and the domain name of which this machine will be a member.
The hostname should be something unique, but the domain can be left blank if the computer won't be part of a domain.
[
![Set Devuan Linux Hostname](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Hostname.png)
][9]
Set Devuan Linux Hostname
[
![Set Devuan Linux Domain Name](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Domain-Name.png)
][10]
Set Devuan Linux Domain Name
6. Once the hostname and domain name information have been provided the installer will want the user to provide a root user password.
Take note to remember this password as it will be required to do administrative tasks on this Devuan machine! Devuan doesn't install the sudo package by default, so the admin user will be root when this installation finishes.
[
![Setup Devuan Linux Root User](http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-Root-User.png)
][11]
Setup Devuan Linux Root User
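If you do want sudo on the finished system, it is easy to add after the first boot (a minimal sketch, run as root; replace `username` with the non-root account created below):

```
# apt-get update
# apt-get install sudo
# adduser username sudo # log out and back in for the new group membership to take effect
```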
7. The next series of questions will be for the creation of a non-root user. It is always a good idea to avoid using your system as the root user whenever possible. The installer will prompt for the creation of a non-root user at this point.
[
![Setup Devuan Linux User Account](http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-User-Account.png)
][12]
Setup Devuan Linux User Account
8. Once the root user password and user creation prompts have completed, the installer will request that the clock be [set up with NTP][13].
Again a connection to the internet will be required in order for this to work for most systems!
[
![Devuan Linux Timezone Setup](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Clock-on-Devuan-Linux.png)
][14]
Devuan Linux Timezone Setup
9. The next step will be the act of partitioning the system. For most users, 'Guided - use entire disk' is typically sufficient. However, if advanced partitioning is desired, this would be the time to set it up.
[
![Devuan Linux Partitioning](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Partitioning.png)
][15]
Devuan Linux Partitioning
Be sure to confirm the partition changes after clicking continue above in order to write the partitions to the disk!
10. Once the partitioning is completed, the installer will begin to install the base files for Devuan. This process will take a few minutes but will stop when the system is ready to configure a network mirror (software repository). Most users will want to click yes when prompted to use a network mirror.
[
![Devuan Linux Configure Package Manager](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Configure-Package-Manager.png)
][16]
Devuan Linux Configure Package Manager
Clicking `yes` here will present the user with a list of network mirrors by country. It is typically best to pick the mirror that is geographically closest to the machines location.
[
![Devuan Linux Mirror Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirror-Selection.png)
][17]
Devuan Linux Mirror Selection
[
![Devuan Linux Mirrors](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirrors.png)
][18]
Devuan Linux Mirrors
11. The next screen is the traditional Debian popularity contest; all this does is track which packages are downloaded, for statistics on package usage.
This can be enabled or disabled according to the administrator's preference during the installation process.
[
![Configure Devuan Linux Popularity Contest](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Devuan-Linux-Popularity-Contest.png)
][19]
Configure Devuan Linux Popularity Contest
12. After a brief scan of the repositories and a couple of package updates, the installer will present the user with a list of software packages that can be installed to provide a Desktop Environment, SSH access, and other system tools.
While Devuan has some of the major Desktop Environments listed, it should be noted that not all of them are ready for use in Devuan yet. The author has had good luck with Xfce, LXDE, and Mate in Devuan (Future articles will walk the user through how to install Enlightenment from source in Devuan as well).
If interested in installing a different Desktop Environment, un-check the Devuan Desktop Environment check box.
[
![Devuan Linux Software Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Software-Selection.png)
][20]
Devuan Linux Software Selection
Depending on the number of items selected in the above installer screen, there may be a couple of minutes of downloads and installations taking place.
When all the software installation is completed, the installer will prompt the user for the location to install GRUB. This is typically done on /dev/sda as well.
[
![Devuan Linux Grub Install](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install.png)
][21]
Devuan Linux Grub Install
[
![Devuan Linux Grub Install Disk](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install-Disk.png)
][22]
Devuan Linux Grub Install Disk
13. After GRUB successfully installs to the boot drive, the installer will alert the user that the installation is complete and to reboot the system.
[
![Devuan Linux Installation Completes](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Installation-Completes.png)
][23]
Devuan Linux Installation Completes
14. Assuming that the installation was indeed successful, the system should either boot into the chosen Desktop Environment or if no Desktop Environment was selected, the machine will boot to a text based console.
[
![Devuan Linux Console](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Console.png)
][24]
Devuan Linux Console
This concludes the installation of the latest version of Devuan Linux. The next article in this short series will cover the [installation of the Enlightenment Desktop Environment][25] from source code on a Devuan system. Please let Tecmint know if you have any issues or questions and thanks for reading!
--------------------------------------------------------------------------------
作者简介:
He is an Instructor of Computer Technology with Ball State University where he currently teaches all of the department's Linux courses and co-teaches Cisco networking courses. He is an avid user of Debian as well as many of the derivatives of Debian, such as Mint, Ubuntu, and Kali. Rob holds a Masters in Information and Communication Sciences as well as several industry certifications from Cisco, EC-Council, and Linux Foundation.
-----------------------------
via: http://www.tecmint.com/installation-of-devuan-linux/
作者:[Rob Turner ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/robturner/
[1]:https://devuan.org/
[2]:https://devuan.org/
[3]:http://www.tecmint.com/find-usb-device-name-in-linux/
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-Device-Name-in-Linux.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Graphic-Installation.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Language-Selection.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Location-Selection.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Keyboard-Configuration.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Hostname.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Domain-Name.png
[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-Root-User.png
[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-User-Account.png
[13]:http://www.tecmint.com/install-and-configure-ntp-server-client-in-debian/
[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Clock-on-Devuan-Linux.png
[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Partitioning.png
[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Configure-Package-Manager.png
[17]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirror-Selection.png
[18]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirrors.png
[19]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Devuan-Linux-Popularity-Contest.png
[20]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Software-Selection.png
[21]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install.png
[22]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install-Disk.png
[23]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Installation-Completes.png
[24]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Console.png
[25]:http://www.tecmint.com/install-enlightenment-on-devuan-linux/
[26]:http://www.tecmint.com/author/robturner/
[27]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[28]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -1,308 +0,0 @@
translating by chenxinlong
How to set up a personal web server with a Raspberry Pi
============================================================
![How to set up a personal web server with a Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/lightbulb_computer_person_general_.png?itok=ZY3UuQQa "How to set up a personal web server with a Raspberry Pi")
>Image by : opensource.com
A personal web server is "the cloud," except you own and control it as opposed to a large corporation.
Owning a little cloud has a lot of benefits, including customization, free storage, free Internet services, a path into open source software, high-quality security, full control over your content, the ability to make quick changes, a place to experiment with code, and much more. Most of these benefits are immeasurable, but financially these benefits can save you over $100 per month.
![Building your own web server with Raspberry Pi](https://opensource.com/sites/default/files/1-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Building your own web server with Raspberry Pi")
Image by Mitchell McLaughlin, CC BY-SA 4.0
I could have used AWS, but I prefer complete freedom, full control over security, and learning how things are built.
* Self web-hosting: No BlueHost or DreamHost
* Cloud storage: No Dropbox, Box, Google Drive, Microsoft Azure, iCloud, or AWS
* On-premise security
* HTTPS: Lets Encrypt
* Analytics: Google
* OpenVPN: No need for a paid VPN service such as Private Internet Access (at an estimated $7 per month)
Things I used:
* Raspberry Pi 3 Model B
* MicroSD Card (32GB recommended, [Raspberry Pi Compatible SD Cards][1])
* USB microSD card reader
* Ethernet cable
* Router connected to Wi-Fi
* Raspberry Pi case
* Amazon Basics MicroUSB cable
* Apple wall charger
* USB mouse
* USB keyboard
* HDMI cable
* Monitor (with HDMI input)
* MacBook Pro
### Step 1: Setting up the Raspberry Pi
Download the most recent release of Raspbian (the Raspberry Pi operating system). The [Raspbian Jessie][6] ZIP version is ideal [1]. Unzip or extract the downloaded file. Copy it onto the SD card. [Pi Filler][7] makes this process easy. [Download Pi Filler 1.3][8] or the most recent version. Unzip or extract the downloaded file and open it. You should be greeted with this prompt:
![Pi Filler prompt](https://opensource.com/sites/default/files/2-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Pi Filler prompt")
Make sure the USB card reader has NOT been inserted yet. If it has, eject it. Proceed by clicking Continue. A file explorer should appear. Locate the uncompressed Raspberry Pi OS file from your Mac or PC and select it. You should see another prompt like the one pictured below:
![USB card reader prompt](https://opensource.com/sites/default/files/3-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "USB card reader")
Insert the MicroSD card (32GB recommended, 16GB minimum) into the USB MicroSD Card Reader. Then insert the USB reader into the Mac or PC. You can rename the SD card to "Raspberry" to distinguish it from others. Click Continue. Make sure the SD card is empty. Pi Filler will _erase_ all previous storage at runtime. If you need to back up the card, do so now. When you are ready to continue, the Raspbian OS will be written to the SD card. It should take between one and three minutes. Once the write is completed, eject the USB reader, remove the SD card, and insert it into the Raspberry Pi SD card slot. Give the Raspberry Pi power by plugging the power cord into the wall. It should start booting up. The Raspberry Pi default login is:
**username: pi
password: raspberry**
When the Raspberry Pi has completed booting for the first time, a configuration screen titled "Setup Options" should appear like the image below [2]:
![Raspberry Pi software configuration setup](https://opensource.com/sites/default/files/4-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Raspberry Pi software configuration setup")
Select the "Expand Filesystem" option and hit the Enter key [3]. Also, I recommend selecting the second option, "Change User Password." It is important for security. It also personalizes your Raspberry Pi.
Select the third option in the setup options list, "Enable Boot To Desktop/Scratch" and hit the Enter key. It will take you to another window titled "Choose boot option" as shown in the image below.
![Choose boot option](https://opensource.com/sites/default/files/5-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Choose boot option")
In the "Choose boot option" window, select the second option, "Desktop log in as user 'pi' at the graphical desktop" and hit the Enter button [4]. Once this is done you will be taken back to the "Setup Options" page. If not, select the "OK" button at the bottom of this window and you will be taken back to the previous window.
Once both these steps are done, select the "Finish" button at the bottom of the page and it should reboot automatically. If it does not, then use the following command in the terminal to reboot.
**$ sudo reboot**
After the reboot from the previous step, if everything went well, you will end up on the desktop similar to the image below.
![Raspberry Pi desktop](https://opensource.com/sites/default/files/6-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Raspberry Pi desktop")
Once you are on the desktop, open a terminal and enter the following commands to update the firmware of the Raspberry Pi.
```
$ sudo apt-get update
$ sudo apt-get upgrade -y
$ sudo apt-get dist-upgrade -y
$ sudo rpi-update
```
This may take a few minutes. Now the Raspberry Pi is up-to-date and running.
### Step 2: Configuring the Raspberry Pi
SSH, which stands for Secure Shell, is a cryptographic network protocol that lets you securely transfer data between your computer and your Raspberry Pi. You can control your Raspberry Pi from your Mac's command line without a monitor or keyboard.
To use SSH, first, you need your Pi's IP address. Open the terminal and type:
```
$ sudo ifconfig
```
If you are using Ethernet, look at the "eth0" section. If you are using Wi-Fi, look at the "wlan0" section.
Find "inet addr" followed by an IP address—something like 192.168.1.115, a common default IP I will use for the duration of this article.
With this address, open terminal and type:
```
$ ssh pi@192.168.1.115
```
For SSH on PC, see footnote [5].
Enter the default password "raspberry" when prompted, unless you changed it.
You are now logged in via SSH.
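As an optional convenience, you can set up key-based SSH logins so you are not prompted for the password every time (a sketch using the same example IP; skip the keygen step if you already have a key pair):

```
$ ssh-keygen -t rsa # accept the defaults to create a key pair
$ ssh-copy-id pi@192.168.1.115 # copies your public key to the Pi
```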
### Remote desktop
Using a GUI (graphical user interface) is sometimes easier than a command line. On the Raspberry Pi's command line (using SSH) type:
```
$ sudo apt-get install xrdp
```
Xrdp supports the Microsoft Remote Desktop Client for Mac and PC.
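Once the install finishes, you can confirm the service is up before reaching for a client (a quick check; xrdp listens on TCP port 3389 by default):

```
$ sudo service xrdp status
$ sudo netstat -tlnp | grep 3389 # should show xrdp listening
```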
On Mac, navigate to the app store and search for "Microsoft Remote Desktop." Download it. (For a PC, see footnote [6].)
After installation, search your Mac for a program called "Microsoft Remote Desktop." Open it. You should see this:
![Microsoft Remote Desktop](https://opensource.com/sites/default/files/7-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Microsoft Remote Desktop")
Image by Mitchell McLaughlin, CC BY-SA 4.0
Click "New" to set up a remote connection. Fill in the blanks as shown below.
![Setting up a remote connection](https://opensource.com/sites/default/files/8-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Setting up a remote connection")
Image by Mitchell McLaughlin, CC BY-SA 4.0
Save it by exiting out of the "New" window.
You should now see the remote connection listed under "My Desktops." Double click it.
After briefly loading, you should see your Raspberry Pi desktop in a window on your screen, which looks like this:
![Raspberry Pi desktop](https://opensource.com/sites/default/files/6-image_by_mitchell_mclaughlin_cc_by-sa_4.0_0.png "Raspberry Pi desktop")
Perfect. Now, you don't need a separate mouse, keyboard, or monitor to control the Pi. This is a much more lightweight setup.
### Static local IP address
Sometimes the local IP address 192.168.1.115 will change. We need to make it static. Type:
```
$ sudo ifconfig
```
From the "eth0" section (or the "wlan0" section if you are on Wi-Fi), write down the "inet addr" (the Pi's current IP), the "bcast" (the broadcast IP range), and the "mask" (the subnet mask address). Then, type:
```
$ netstat -nr
```
Write down the "destination" and the "gateway/network."
![Setting up a local IP address](https://opensource.com/sites/default/files/setting_up_local_ip_address.png "Setting up a local IP address")
The cumulative records should look something like this:
```
net address 192.168.1.115
bcast 192.168.1.255
mask 255.255.255.0
gateway 192.168.1.1
network 192.168.1.1
destination 192.168.1.0
```
With this information, you can set a static internal IP easily. Type:
```
$ sudo nano /etc/dhcpcd.conf
```
Do not use **/etc/network/interfaces**.
Then all you need to do is append this to the bottom of the file, substituting the correct IP address you want.
```
interface eth0
static ip_address=192.168.1.115/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```
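If your Pi connects over Wi-Fi rather than Ethernet, the same stanza applies to the wireless interface instead (a sketch, reusing the example addresses from above):

```
interface wlan0
static ip_address=192.168.1.115/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```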
Once you have set the static internal IP address, reboot the Raspberry Pi with:
```
$ sudo reboot
```
After rebooting, from terminal type:
```
$ sudo ifconfig
```
Your new static settings should appear for your Raspberry Pi.
### Static global IP address
If your ISP (internet service provider) has already given you a static external IP address, you can skip ahead to the port forwarding section. If not, continue reading.
You have set up SSH, a remote desktop, and a static internal IP address, so now computers inside the local network will know where to find the Pi. But you still can't access your Raspberry Pi from outside the local Wi-Fi network. You need your Raspberry Pi to be accessible publicly from anywhere on the Internet. This requires a static external IP address [7].
It can be a sensitive process initially. Call your ISP and request a static external (sometimes referred to as static global) IP address. The ISP holds the decision-making power, so I would be extremely careful dealing with them. They may refuse your static external IP address request. If they do, you can't fault the ISP because there is a legal and operational risk with this type of request. They particularly do not want customers running medium- or large-scale Internet services. They might explicitly ask why you need a static external IP address. It is probably best to be honest and tell them you plan on hosting a low-traffic personal website or a similar small not-for-profit internet service. If all goes well, they should open a ticket and call you in a week or two with an address.
### Port forwarding
This newly obtained static global IP address your ISP assigned is for accessing the router. The Raspberry Pi is still unreachable. You need to set up port forwarding to access the Raspberry Pi specifically.
Ports are virtual pathways where information travels on the Internet. You sometimes need to forward a port in order to make a computer, like the Raspberry Pi, accessible to the Internet because it is behind a network router. A YouTube video titled [What is TCP/IP, port, routing, intranet, firewall, Internet][9] by VollmilchTV helped me visually understand ports.
Port forwarding can be used for projects like a Raspberry Pi web server, or applications like VoIP or peer-to-peer downloading. There are [65,000+ ports][10] to choose from, so you can assign a different port for every Internet application you build.
The way to set up port forwarding can depend on your router. If you have a Linksys, a YouTube video titled  _[How to go online with your Apache Ubuntu server][2]_  by Gabriel Ramirez explains how to set it up. If you don't have a Linksys, read the documentation that comes with your router in order to customize and define ports to forward.
You will need to port forward for SSH as well as the remote desktop.
Once you believe you have port forwarding configured, check to see if it is working via SSH by typing:
```
$ ssh pi@your_global_ip_address
```
It should prompt you for the password.
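You can also probe the forwarded ports directly (a rough check; `your_global_ip_address` is the same placeholder as above, and this assumes your netcat build supports the `-z` scan flag):

```
$ nc -zv your_global_ip_address 22 # SSH; expect "succeeded" or "open"
$ nc -zv your_global_ip_address 3389 # remote desktop (xrdp)
```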
Check to see if port forwarding is working for the remote desktop as well. Open Microsoft Remote Desktop. Your previous remote connection settings should be saved, but you need to update the "PC name" field with the static external IP address (for example, 195.198.227.116) instead of the static internal address (for example, 192.168.1.115).
Now, try connecting via remote desktop. It should briefly load and arrive at the Pi's desktop.
![Raspberry Pi desktop](https://opensource.com/sites/default/files/6-image_by_mitchell_mclaughlin_cc_by-sa_4.0_1.png "Raspberry Pi desktop")
Good job. The Raspberry Pi is now accessible from the Internet and ready for advanced projects.
As a bonus option, you can maintain two remote connections to your Pi. One via the Internet and the other via the LAN (local area network). It's easy to set up. In Microsoft Remote Desktop, keep one remote connection called "Pi Internet" and another called "Pi Local." Configure Pi Internet's "PC name" to the static external IP address (for example, 195.198.227.116). Configure Pi Local's "PC name" to the static internal IP address (for example, 192.168.1.115). Now, you have the option to connect globally or locally.
If you have not seen it already, watch _[How to go online with your Apache Ubuntu server][3]_ by Gabriel Ramirez as a transition into Project 2. It will show you the technical architecture behind your project. In our case, you are using a Raspberry Pi instead of an Ubuntu server. The dynamic DNS sits between the domain company and your router, which Ramirez omits. Beside this subtlety, the video is spot on when explaining visually how the system works. You might notice this tutorial covers the Raspberry Pi setup and port forwarding, which is the server-side or back end. See the original source for more advanced projects covering the domain name, dynamic DNS, Jekyll (static HTML generator), and Apache (web hosting), which is the client-side or front end.
### Footnotes
[1] I do not recommend starting with the NOOBS operating system. I prefer starting with the fully functional Raspbian Jessie operating system.
[2] If "Setup Options" does not pop up, you can always find it by opening Terminal and executing this command:
```
$ sudo raspi-config
```
[3] We do this to make use of all the space present on the SD card as a full partition. All this does is expand the operating system to fit the entire space on the SD card, which can then be used as storage memory for the Raspberry Pi.
[4] We do this because we want to boot into a familiar desktop environment. If we do not do this step, the Raspberry Pi boots into a terminal each time with no GUI.
[5]
![PuTTY configuration](https://opensource.com/sites/default/files/putty_configuration.png "PuTTY configuration")
[Download and run PuTTY][11] or another SSH client for Windows. Enter your IP address in the field, as shown in the above screenshot. Keep the default port at 22. Hit Enter, and PuTTY will open a terminal window, which will prompt you for your username and password. Fill those in, and begin working remotely on your Pi.
[6] If it is not already installed, download [Microsoft Remote Desktop][12]. Search your computer for Microsoft Remote Desktop. Run it. Input the IP address when prompted. Next, an xrdp window will pop up, prompting you for your username and password.
[7] The router has a dynamically assigned external IP address, so in theory, it can be reached from the Internet momentarily, but you'll need the help of your ISP to make it permanently accessible. If this was not the case, you would need to reconfigure the remote connection on each use.
_For the original source, visit [Mitchell McLaughlin's Full-Stack Computer Projects][4]._
--------------------------------------------------------------------------------
作者简介:
Mitchell McLaughlin - I'm an open-web contributor and developer. My areas of interest are broad, but specifically I enjoy open source software/hardware, bitcoin, and programming in general. I reside in San Francisco. My work experience in the past has included brief stints at GoPro and Oracle.
-------------
via: https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3
作者:[Mitchell McLaughlin ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mitchm
[1]:http://elinux.org/RPi_SD_cards
[2]:https://www.youtube.com/watch?v=i1vB7JnPvuE#t=07m08s
[3]:https://www.youtube.com/watch?v=i1vB7JnPvuE#t=07m08s
[4]:https://mitchellmclaughlin.com/server.html
[5]:https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3?rate=Zdmkgx8mzy9tFYdVcQZSWDMSy4uDugnbCKG4mFsVyaI
[6]:https://www.raspberrypi.org/downloads/raspbian/
[7]:http://ivanx.com/raspberrypi/
[8]:http://ivanx.com/raspberrypi/files/PiFiller.zip
[9]:https://www.youtube.com/watch?v=iskxw6T1Wb8
[10]:https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
[11]:http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
[12]:https://www.microsoft.com/en-us/store/apps/microsoft-remote-desktop/9wzdncrfj3ps
[13]:https://opensource.com/user/41906/feed
[14]:https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3#comments
[15]:https://opensource.com/users/mitchm

View File

@ -1,3 +1,5 @@
translating by Flowsnow!
Many SQL Performance Problems Stem from “Unnecessary, Mandatory Work”
============================================================ 

View File

@ -1,82 +0,0 @@
8 reasons to use LXDE
============================================================
### Learn reasons to consider using the lightweight LXDE desktop environment as your Linux desktop.
![8 reasons to use LXDE](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/rh_003499_01_linux31x_cc.png?itok=1HXbvw2E "8 reasons to use LXDE")
>Image by : opensource.com
Late last year, an upgrade to Fedora 25 brought issues with the new version of [KDE][7] Plasma that were so bad it was difficult to get any work done. I decided to try other Linux desktop environments for two reasons. First, I needed to get my work done. Second, having used KDE exclusively for many years, I thought it was time to try some different desktops.
The first alternate desktop I tried for several weeks was [Cinnamon][8], which I wrote about in January. This time I have been using LXDE (Lightweight X11 Desktop Environment) for about six weeks, and I have found many things about it that I like. Here is my list of eight reasons to use LXDE.
More Linux resources
* [What is Linux?][1]
* [What are Linux containers?][2]
* [Managing devices in Linux][3]
* [Download Now: Linux commands cheat sheet][4]
* [Our latest Linux articles][5]
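If you want to try LXDE along with this article, it is easy to add to an existing Fedora system (a sketch; the exact group name can vary between releases, so check it first):

```
$ dnf group list | grep -i lxde # confirm the group name on your release
$ sudo dnf group install "LXDE Desktop" # then pick LXDE at the login screen
```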
**1\. LXDE supports multiple panels.** As with KDE and Cinnamon, LXDE sports panels that contain the system menu, application launchers, and a taskbar that displays buttons for the running applications. The first time I logged in to LXDE the panel configuration looked surprisingly familiar. LXDE appears to have picked up the KDE configuration for my favored top and bottom panels, including system tray settings. The application launchers on the top panel appear to have been from the Cinnamon configuration. The contents of the panels make it easy to launch and manage programs. By default, there is only one panel at the bottom of the desktop.
![The LXDE desktop with the Openbox Configuration Manager open.](https://opensource.com/sites/default/files/lxde-openboxconfigurationmanager.png "The LXDE desktop with the Openbox Configuration Manager open.")
The LXDE desktop with the Openbox Configuration Manager open. This desktop has not been modified, so it uses the default color and icon schemes.
**2\. The Openbox configuration manager provides a single, simple tool for managing the look and feel of the desktop.** It provides options for themes, window decorations, window behavior with multiple monitors, moving and resizing windows, mouse control, multiple desktops, and more. Although that seems like a lot, it is far less complex than configuring the KDE desktop, yet Openbox provides a surprisingly great amount of control.
**3\. LXDE has a powerful menu tool.** There is an interesting option that you can access on the Advanced tab of the Desktop Preferences menu. The long name for this option is, “Show menus provided by window managers when desktop is clicked.” When this checkbox is selected, the Openbox desktop menu is displayed instead of the standard LXDE desktop menu, when you right-click the desktop.
The Openbox desktop menu contains nearly every menu selection you would ever want, with all easily accessible from the desktop. It includes all of the application menus, system administration, and preferences. It even has a menu containing a list of all the terminal emulator applications installed so that sysadmins can easily launch their favorite.
**4\. By design, the LXDE desktop is clean and simple.** It has nothing to get in the way of getting your work done. Although you can add some clutter to the desktop in the form of files, directory folders, and links to applications, there are no widgets that can be added to the desktop. I do like some widgets on my KDE and Cinnamon desktops, but they are easy to cover and then I need to move or minimize windows, or just use the "Show desktop" button to clear off the entire desktop. LXDE does have an "Iconify all windows" button, but I seldom need to use it unless I want to look at my wallpaper.
**5\. LXDE comes with a strong file manager.** The default file manager for LXDE is PCManFM, so that became my file manager for the duration of my time with LXDE. PCManFM is very flexible and can be configured to make it work well for most people and situations. It seems to be somewhat less configurable than Krusader, which is usually my go-to file manager, but I really like the sidebar on PCManFM that Krusader does not have.
PCManFM allows multiple tabs, which can be opened with a right-click on any item in the sidebar or by a left-click on the new tab icon in the icon bar. The Places pane at the left of the PCManFM window shows the applications menu, and you can launch applications from PCManFM. The upper part of the Places pane also shows a devices icon, which can be used to view your physical storage devices, a list of removable devices along with buttons to allow you to mount or unmount them, and the Home, Desktop, and trashcan folders to make them easy to access. The bottom part of the Places panel contains shortcuts to some default directories, Documents, Music, Pictures, Videos, and Downloads. You can also drag additional directories to the shortcut part of the Places pane. The Places pane can be swapped for a regular directory tree.
**6\. The title bar of a new window flashes if it is opened behind existing windows.** This is a nice way to locate new windows among a large number of existing ones.
**7\. Most modern desktop environments allow for multiple desktops and LXDE is no exception to that.** I like to use one desktop for my development, testing, and writing activities, and another for mundane tasks like email and web browsing. LXDE provides two desktops by default, but you can configure just one or more. Right-click on the Desktop Pager to configure it.
Through some disruptive but not destructive testing, I was able to determine that the maximum number of desktops allowed is 100. I also discovered that when I reduced the number of desktops to fewer than the three I actually had in use, the windows on the defunct desktops were moved to desktop 1. What fun I have had with this!
**8\. The Xfce power manager is a powerful little application that allows you to configure how power management works.** It provides a tab for General configuration as well as tabs for System, Display, and Devices. The Devices tab displays a table of attached devices on my system, such as battery-powered mice, keyboards, and even my UPS. It displays information about each, including the vendor and serial number, if available, and the state of the battery charge. As I write this, my UPS is 100% charged and my Logitech mouse is 75% charged. The Xfce power manager also displays an icon in the system tray so you can get a quick read on your devices' battery status from there.
There are more things to like about the LXDE desktop, but these are the ones that either grabbed my attention or are so important to my way of working in a modern GUI interface that they are indispensable to me.
One quirk I noticed with LXDE is that I never did figure out what the “Reconfigure” option does on the desktop (Openbox) menu. I clicked on that several times and never noticed any activity of any kind to indicate that that selection actually did anything.
I have found LXDE to be an easy-to-use, yet powerful, desktop. I have enjoyed my weeks using it for this article. LXDE has enabled me to work effectively mostly by allowing me access to the applications and files that I want, while remaining unobtrusive the rest of the time. I also never encountered anything that prevented me from doing my work. Well, except perhaps for the time I spent exploring this fine desktop. I can highly recommend the LXDE desktop.
I am now using GNOME 3 and the GNOME Shell and will report on that in my next installment.
--------------------------------------------------------------------------------
作者简介:
David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years.
--------------------------------------
via: https://opensource.com/article/17/3/8-reasons-use-lxde
作者:[David Both ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
[3]:https://opensource.com/article/16/11/managing-devices-linux?src=linux_resource_menu
[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
[5]:https://opensource.com/tags/linux?src=linux_resource_menu
[6]:https://opensource.com/article/17/3/8-reasons-use-lxde?rate=QigvkBy_9zLvktdsL-QaIWedjIqjtlwwJIVFQDQzsSY
[7]:https://opensource.com/life/15/4/9-reasons-to-use-kde
[8]:https://opensource.com/article/17/1/cinnamon-desktop-environment
[9]:https://opensource.com/user/14106/feed
[10]:https://opensource.com/article/17/3/8-reasons-use-lxde#comments
[11]:https://opensource.com/users/dboth

View File

@ -1,3 +1,5 @@
Yoo-4x Translating
OpenGL & Go Tutorial Part 1: Hello, OpenGL
============================================================

View File

@ -0,0 +1,180 @@
Translating
Getting started with Perl on the Raspberry Pi
============================================================
> We're all free to pick what we want to run on our Raspberry Pi.
![Getting started with Perl on the Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/raspberry_pi_blue_board.jpg?itok=01NR5MX4 "Getting started with Perl on the Raspberry Pi")
>Image by : opensource.com
When I spoke recently at SVPerl (Silicon Valley Perl) about Perl on the Raspberry Pi, someone asked, "I heard the Raspberry Pi is supposed to use Python. Is that right?" I was glad he asked because it's a common misconception. The Raspberry Pi can run any language. Perl, Python, and others are part of the initial installation of Raspbian Linux, the official software for the board.
The origin of the myth is simple. The Raspberry Pi's creator, UK Computer Science professor Eben Upton, has told the story that the "Pi" part of the name was intended to sound like Python because he likes the language. He chose it as his emphasis for kids to learn coding. But he and his team made a general-purpose computer. The open source software on the Raspberry Pi places no restrictions on us. We're all free to pick what we want to run and make each Raspberry Pi our own.
More on Raspberry Pi
* [Our latest on Raspberry Pi][1]
* [What is Raspberry Pi?][2]
* [Getting started with Raspberry Pi][3]
* [Send us your Raspberry Pi projects and tutorials][4]
The second point to my presentation at SVPerl and this article is to introduce my "PiFlash" script. It was written in Perl, but it doesn't require any knowledge of Perl to automate your task of flashing SD cards for a Raspberry Pi from a Linux system. It provides safety for beginners, so they won't accidentally erase a hard drive while trying to flash an SD card. It offers automation and convenience for power users, which includes me and is why I wrote it. Similar tools already existed for Windows and Macs, but the instructions on the Raspberry Pi website oddly have no automated tools for Linux users. Now one exists.
Open source software has a long tradition of new projects starting because an author wanted to "scratch their own itch," or to solve their own problems. That's the way Eric S. Raymond described it in his 1997 paper and 1999 book "[The Cathedral and the Bazaar][8]," which defined the open source software development methodology. I wrote PiFlash to fill a need for Linux users like myself.
### Downloadable system images
When setting up a Raspberry Pi, you first need to download an operating system for it. We call it a "system image" file. Once you download it to your desktop, laptop, or even another Raspberry Pi, you have to write or "flash" it to an SD card. The details are covered online already. It can be a bit tricky to do manually because getting the system image on the whole SD card and not on a partition matters. The system image will actually contain at least one partition of its own because the Raspberry Pi's boot procedure needs a FAT32 filesystem partition from which to start. Other partitions after the boot partition can be any filesystem type supported by the OS kernel.
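You can see that layout for yourself before flashing by pointing fdisk at the image file (a minimal check; the filename is a placeholder for whatever image you downloaded):

```
$ fdisk -l 2017-04-10-raspbian-jessie.img # lists the FAT32 boot partition and the root partition inside the image
```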
In most cases on the Raspberry Pi, we're running some distribution with a Linux kernel. Here's a list of common system images that you can download for the Raspberry Pi (but there's nothing to stop you from building your own from scratch).
The ["NOOBS"][9] system from the Raspberry Pi Foundation is their recommended system for new users. It stands for "New Out of the Box System." It's obviously intended to sound like the term "noob," short for "newbie." NOOBS starts a Raspbian-based Linux system, which presents a menu that you can use to automatically download and install several other system images on your Raspberry Pi.
[Raspbian Linux][10] is Debian Linux specialized for the Raspberry Pi. It's the official Linux distribution for the Raspberry Pi and is maintained by the Raspberry Pi Foundation. Nearly all Raspberry Pi software and drivers start with Raspbian before going to other Linux distributions. It runs on all models of the Raspberry Pi. The default installation includes Perl.
Ubuntu Linux (and the community edition Ubuntu MATE) includes the Raspberry Pi as one of its supported platforms for the ARM (Advanced RISC Machines) processor, which uses a RISC (Reduced Instruction Set Computer) architecture. Ubuntu is a commercially supported open source variant of Debian Linux, so its software comes as DEB packages. Perl is included. It only works on the Raspberry Pi 2 and 3 models with their 32-bit ARMv7 and 64-bit ARMv8 processors. The ARMv6 processor of the Raspberry Pi 1 and Zero was never supported by Ubuntu's build process.
[Fedora Linux][12] supports the Raspberry Pi 2 and 3 as of Fedora 25. Fedora is the open source project affiliated with Red Hat. Fedora serves as the base that the commercial RHEL (Red Hat Enterprise Linux) adds commercial packages and support to, so its software comes as RPM (Red Hat Package Manager) packages like all Red Hat-compatible Linux distributions. Like the others, it includes Perl.
[RISC OS][13] is a single-user operating system made specifically for the ARM processor. If you want to experiment with a small desktop that is more compact than Linux (due to fewer features), it's an option. Perl runs on RISC OS.
[RaspBSD][14] is the Raspberry Pi distribution of FreeBSD. It's a Unix-based system, but isn't Linux. As an open source Unix, it has many similarities to Linux, including that the operating system environment is made from a similar set of open source packages, including Perl.
[OSMC][15], the Open Source Media Center, and [LibreElec][16] are TV entertainment center systems. They are both based on the Kodi entertainment center, which runs on a Linux kernel. It's a really compact and specialized Linux system, so don't expect to find Perl on it.
[Microsoft Windows IoT Core][17] is a new entrant that runs only on the Raspberry Pi 3. You need Microsoft developer access to download it, so as a Linux geek, that deterred me from looking at it. My PiFlash script doesn't support it, but if that's what you're looking for, it's there.
### The PiFlash script
If you look at the Raspberry Pi's [SD card flashing instructions][19], you'll see the instructions to do that from Windows or Mac involve downloading a tool to write to the SD card. But for Linux systems, it's a set of instructions to do manually. I've done that manual procedure so many times that it triggered my software-developer instinct to automate the process, and that's where the PiFlash script came from. It's tricky because there are many ways a Linux system can be set up, but they are all based on the Linux kernel.
I always imagined one of the biggest potential errors of the manual procedure is accidentally erasing the wrong device, instead of the SD card, and destroying the data on a hard drive that I wanted to keep. In my presentation at SVPerl, I was surprised to find someone in the audience who has made that mistake (and wasn't afraid to admit it). Therefore, one of the purposes of the PiFlash script, to provide safety for new users by refusing to erase a device that isn't an SD card, is even more needed than I expected. PiFlash will also refuse to overwrite a device that contains a mounted filesystem.
For experienced users, including me, the PiFlash script offers the convenience of automation. After downloading the system image, I don't have to uncompress it or extract the system image from a zip archive. PiFlash will extract it from whichever format it's in and directly flash the SD card.
I posted [PiFlash and its instructions][21] on GitHub.
It's a command-line tool with the following usages:
**piflash [--verbose] input-file output-device**
**piflash [--verbose] --SDsearch**
The **input-file** parameter is the system image file, whatever you downloaded from the Raspberry Pi software distribution sites. The **output-device** parameter is the path of the block device for the SD card you want to write to.
Alternatively, use **--SDsearch** to print a list of the device names of SD cards on the system.
The optional **--verbose** parameter is useful for printing out all of the program's state data in case you need to ask for help, submit a bug report, or troubleshoot a problem yourself. That's what I used for developing it.
This example of using the script writes a Raspbian image, still in its zip archive, to the SD card at **/dev/mmcblk0**:
**piflash 2016-11-25-raspbian-jessie.img.zip /dev/mmcblk0**
If you had specified **/dev/mmcblk0p1** (the first partition on the SD card), it would have recognized that a partition is not the correct location and refused to write to it.
One tricky aspect is recognizing which devices are SD cards on various Linux systems. The example with **mmcblk0** is from the PCI-based SD card interface on my laptop. If I used a USB SD card interface, it would be **/dev/sdb**, which is harder to distinguish from hard drives present on many systems. However, there are only a few Linux block drivers that support SD cards. PiFlash checks the parameters of the block devices in both those cases. If all else fails, it will accept USB drives which are writable, removable and have the right physical sector count for an SD card.
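As an illustration of that last-resort check (a standalone sketch of the idea only, not PiFlash's actual source; the size thresholds here are invented for the example), the relevant attributes can be read straight out of sysfs:
```
// sd_check.cpp - sketch of sysfs-based sanity checks before writing to a
// block device on Linux. Illustrative only; thresholds are made up.
#include <fstream>
#include <iostream>
#include <string>

// Read one value from a sysfs attribute, e.g. /sys/block/sdb/removable.
static std::string sysfs_attr(const std::string& dev, const std::string& attr) {
    std::ifstream f("/sys/block/" + dev + "/" + attr);
    std::string value;
    f >> value;  // stays empty if the attribute is missing
    return value;
}

int main(int argc, char* argv[]) {
    if (argc < 2) {
        std::cerr << "usage: sd_check <device name, e.g. mmcblk0 or sdb>\n";
        return 1;
    }
    std::string dev = argv[1];

    // mmcblk* devices come from an SD/MMC block driver.
    if (dev.rfind("mmcblk", 0) == 0) {
        std::cout << dev << ": SD/MMC block device\n";
        return 0;
    }

    // For sd* (USB card readers), require the removable flag and
    // sanity-check the size; /sys/block/<dev>/size is in 512-byte sectors.
    bool removable = sysfs_attr(dev, "removable") == "1";
    long long sectors = std::stoll("0" + sysfs_attr(dev, "size"));
    long long gib = sectors * 512 / (1024LL * 1024 * 1024);

    if (removable && gib >= 1 && gib <= 512) {
        std::cout << dev << ": removable, ~" << gib << " GiB, plausibly an SD card\n";
        return 0;
    }
    std::cerr << dev << ": does not look like an SD card, refusing\n";
    return 2;
}
```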
I think that covers most cases. However, what if you have another SD card interface I haven't seen? I'd like to hear from you. Please include the **--verbose --SDsearch** output, so I can see what environment was present on your system when it tried. Ideally, if the PiFlash script becomes widely used, we should build up an open source community around maintaining it for as many Raspberry Pi users as we can.
### CPAN modules for Raspberry Pi
CPAN is the [Comprehensive Perl Archive Network][22], a worldwide network of download mirrors containing a wealth of Perl modules. All of them are open source. The vast quantity of modules on CPAN has been a huge strength of Perl over the years. For many thousands of tasks, there is no need to re-invent the wheel, you can just use the code someone else already posted, then submit your own once you have something new.
As Raspberry Pi is a full-fledged Linux system, most CPAN modules will run normally on it, but I'll focus on some that are specifically for the Raspberry Pi's hardware. These would usually be for embedded systems projects like measurement, control, or robotics. You can connect your Raspberry Pi to external electronics via its GPIO (General-Purpose Input/Output) pins.
Modules specifically for accessing the Raspberry Pi's GPIO pins include [Device::SMBus][23], [Device::I2C][24], [RPi::PIGPIO][25], [RPi::SPI][26], [RPi::WiringPi][27], [Device::WebIO::RaspberryPi][28] and [Device::PiGlow][29]. Modules for other embedded systems with Raspberry Pi support include [UAV::Pilot::Wumpus::Server::Backend::RaspberryPiI2C][30], [RPi::DHT11][31] (temperature/humidity), [RPi::HCSR04][32] (ultrasonic), [App::RPi::EnvUI][33] (lights for growing plants), [RPi::DigiPot::MCP4XXXX][34] (potentiometer), [RPi::ADC::ADS][35] (A/D conversion), [Device::PaPiRus][36] and [Device::BCM2835::Timer][37] (the on-board timer chip).
### Examples
Here are some examples of what you can do with Perl on a Raspberry Pi.
### Example 1: Flash OSMC with PiFlash and play a video
For this example, you'll practice setting up and running a Raspberry Pi using the OSMC (Open Source Media Center).
* Go to [RaspberryPi.Org][5]. In the downloads area, get the latest version of OSMC.
* Insert a blank SD card in your Linux desktop or laptop. The Raspberry Pi 1 uses a full-size SD card. Everything else uses a microSD, which may require a common adapter to insert it.
* Check "cat /proc/partitions" before and after inserting the SD card to see which device name it was assigned by the system. It could be something like **/dev/mmcblk0** or **/dev/sdb**. Substitute your correct system image file and output device in a command that looks like this:
**piflash OSMC_TGT_rbp2_20170210.img.gz /dev/mmcblk0**
* Eject the SD card. Put it in the Raspberry Pi and boot it connected to an HDMI monitor.
* While OSMC is setting up, get a USB stick and put some videos on it. For purposes of the demonstration, I suggest using the "youtube-dl" program to download two videos. Run "youtube-dl OHF2xDrq8dY" (The Bloomberg "Hello World" episode about UK tech including Raspberry Pi) and "youtube-dl nAvZMgXbE9c" (CNet's Top 5 Raspberry Pi projects). Move them to the USB stick, then unmount and remove it.
* Insert the USB stick in the OSMC Raspberry Pi. Follow the Videos menu to the external device.
* When you can play the videos on the Raspberry Pi, you have completed the exercise. Have fun.
### Example 2: A script to play random videos from a directory
This example uses a script to shuffle-play videos from a directory on the Raspberry Pi. Depending on the videos and where it's installed, this could be a kiosk display. I wrote it to display videos while using indoor exercise equipment.
* Set up a Raspberry Pi to boot Raspbian Linux. Connect it to an HDMI monitor.
* Download my ["do-video" script][6] from GitHub and put it on the Raspberry Pi.
* Follow the installation instructions on the page. The main thing is to install the **omxplayer** package, which plays videos smoothly using the Raspberry Pi's hardware video acceleration.
* Put some videos in a directory called Videos under the home directory.
* Run "do-video" and videos should start playing.
### Example 3: A script to read GPS data
This example is more advanced and optional, but it shows how Perl can read from external devices. At my "Perl on Pi" page on GitHub from the previous example, there is also a **gps-read.pl** script. It reads NMEA (National Marine Electronics Association) data from a GPS via the serial port. Instructions are on the page, including parts I used from AdaFruit Industries to build it, but any GPS that outputs NMEA data could be used.
With these tasks, I've made the case that you really can use Perl as well as any other language on a Raspberry Pi. I hope you enjoy it.
--------------------------------------------------------------------------------
作者简介:
Ian Kluft - Ian has had parallel interests since grade school in computing and flight. He was coding on Unix before there was Linux, and started on Linux 6 months after the kernel was posted. He has a masters degree in Computer Science and is a CSSLP (Certified Secure Software Lifecycle Professional). On the side he's a pilot and a certified flight instructor. As a licensed Ham Radio operator for over 25 years, experimentation with electronics has evolved in recent years to include the Raspberry Pi
------------------
via: https://opensource.com/article/17/3/perl-raspberry-pi
作者:[Ian Kluft ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/ikluft
[1]:https://opensource.com/tags/raspberry-pi?src=raspberry_pi_resource_menu
[2]:https://opensource.com/resources/what-raspberry-pi?src=raspberry_pi_resource_menu
[3]:https://opensource.com/article/16/12/getting-started-raspberry-pi?src=raspberry_pi_resource_menu
[4]:https://opensource.com/article/17/2/raspberry-pi-submit-your-article?src=raspberry_pi_resource_menu
[5]:http://raspberrypi.org/
[6]:https://github.com/ikluft/ikluft-tools/tree/master/perl-on-pi
[7]:https://opensource.com/article/17/3/perl-raspberry-pi?rate=OsZH1-H_xMfLtSFqZw4SC-_nyV4yo_sgKKBJGjUsbfM
[8]:http://www.catb.org/~esr/writings/cathedral-bazaar/
[9]:https://www.raspberrypi.org/downloads/noobs/
[10]:https://www.raspberrypi.org/downloads/raspbian/
[11]:https://www.raspberrypi.org/downloads/raspbian/
[12]:https://fedoraproject.org/wiki/Raspberry_Pi#Downloading_the_Fedora_ARM_image
[13]:https://www.riscosopen.org/content/downloads/raspberry-pi
[14]:http://www.raspbsd.org/raspberrypi.html
[15]:https://osmc.tv/
[16]:https://libreelec.tv/
[17]:http://ms-iot.github.io/content/en-US/Downloads.htm
[18]:http://ms-iot.github.io/content/en-US/Downloads.htm
[19]:https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[20]:https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[21]:https://github.com/ikluft/ikluft-tools/tree/master/piflash
[22]:http://www.cpan.org/
[23]:https://metacpan.org/pod/Device::SMBus
[24]:https://metacpan.org/pod/Device::I2C
[25]:https://metacpan.org/pod/RPi::PIGPIO
[26]:https://metacpan.org/pod/RPi::SPI
[27]:https://metacpan.org/pod/RPi::WiringPi
[28]:https://metacpan.org/pod/Device::WebIO::RaspberryPi
[29]:https://metacpan.org/pod/Device::PiGlow
[30]:https://metacpan.org/pod/UAV::Pilot::Wumpus::Server::Backend::RaspberryPiI2C
[31]:https://metacpan.org/pod/RPi::DHT11
[32]:https://metacpan.org/pod/RPi::HCSR04
[33]:https://metacpan.org/pod/App::RPi::EnvUI
[34]:https://metacpan.org/pod/RPi::DigiPot::MCP4XXXX
[35]:https://metacpan.org/pod/RPi::ADC::ADS
[36]:https://metacpan.org/pod/Device::PaPiRus
[37]:https://metacpan.org/pod/Device::BCM2835::Timer
[38]:https://opensource.com/user/120171/feed
[39]:https://opensource.com/article/17/3/perl-raspberry-pi#comments
[40]:https://opensource.com/users/ikluft

View File

@ -1,3 +1,4 @@
translating by chenxinlong
AWS cloud terminology
============================================================
@ -17,7 +18,7 @@ As of today, AWS offers total of 71 services which are grouped together in
* * *
_Compute _
_Compute_
Its a cloud computing means virtual server provisioning. This group provides below services.
@ -210,7 +211,7 @@ Its desktop app streaming over cloud.
via: http://kerneltalks.com/virtualization/aws-cloud-terminology/
作者:[Shrikant Lavhate][a]
译者:[译者ID](https://github.com/译者ID)
译者:[chenxinlong](https://github.com/chenxinlong)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,280 +0,0 @@
How to deploy Node.js Applications with pm2 and Nginx on Ubuntu
============================================================
### On this page
1. [Step 1 - Install Node.js LTS][1]
2. [Step 2 - Generate Express Sample App][2]
3. [Step 3 - Install pm2][3]
4. [Step 4 - Install and Configure Nginx as a Reverse proxy][4]
5. [Step 5 - Testing][5]
6. [Links][6]
pm2 is a process manager for Node.js applications, it allows you to keep your apps alive and has a built-in load balancer. It's simple and powerful, you can always restart or reload your node application with zero downtime and it allows you to create a cluster of your node app.
In this tutorial, I will show you how to install and configure pm2 for the simple 'Express' application and then configure Nginx as a reverse proxy for the node application that is running under pm2.
**Prerequisites**
* Ubuntu 16.04 - 64bit
* Root Privileges
### Step 1 - Install Node.js LTS
In this tutorial, we will start our project from scratch. First, we need Nodejs installed on the server. I will use the Nodejs LTS version 6.x which can be installed from the nodesource repository.
Install the package '**python-software-properties**' from the Ubuntu repository and then add the 'nodesource' Nodejs repository.
sudo apt-get install -y python-software-properties
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
Install the latest Nodejs LTS version.
sudo apt-get install -y nodejs
When the installation succeeded, check node and npm version.
node -v
npm -v
[
![Check the node.js version](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/1.png)
][10]
### Step 2 - Generate Express Sample App
I will use a simple web application skeleton generated with a package named '**express-generator**' for this example installation. Express-generator can be installed with the npm command.
Install '**express-generator**' with npm:
npm install express-generator -g
**-g:** install the package globally on the system
We will run the application as a normal user, not a root or super user. So we need to create a new user first.
Create a new user, I name mine '**yume**':
useradd -m -s /bin/bash yume
passwd yume
Login to the new user by using su:
su - yume
Next, generate a new simple web application with the express command:
express hakase-app
The command will create new project directory '**hakase-app**'.
[
![Generate app skeleton with express-generator](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/2.png)
][11]
Go to the project directory and install all dependencies needed by the app.
cd hakase-app
npm install
Then test and start a new simple application with the command below:
DEBUG=myapp:* npm start
By default, our express application will run on port **3000**. Now visit server IP address: [192.168.33.10:3000][12]
[
![express nodejs running on port 3000](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/3.png)
][13]
The simple web application skeleton is running on port 3000, under user 'yume'.
### Step 3 - Install pm2
pm2 is a node package and can be installed with the npm command. So let's install it with npm (with root privileges; if you are still logged in as user yume, run the command "exit" to become root again):
npm install pm2 -g
Now we can use pm2 for our web application.
Go to the app directory '**hakase-app**':
su - yume
cd ~/hakase-app/
There you can find a file named '**package.json**', display its content with the cat command.
cat package.json
[
![express nodejs services configuration](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/4.png)
][14]
You can see the '**start**' line contains a command that is used by Nodejs to start the express application. This command we will use with the pm2 process manager.
Run the express application with the pm2 command below:
pm2 start ./bin/www
Now you can see the result below:
[
![Running nodejs app with pm2](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/5.png)
][15]
Our express application is running under pm2 with name '**www**', id '**0**'. You can get more details about the application running under pm2 with the show option '**pm2 show <id|name>**'.
pm2 show www
[
![pm2 service status](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/6.png)
][16]
If you would like to see the logs of our application, you can use the logs option. It shows the access and error logs, and you can see the HTTP status of the application.
pm2 logs www
[
![pm2 services logs](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/7.png)
][17]
You can see that our process is running. Now, let's enable it to start at boot time.
pm2 startup systemd
**systemd**: Ubuntu 16 is using systemd.
You will get a message asking you to run a command as root. Go back to root privileges with "exit" and then run that command.
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u yume --hp /home/yume
It will generate the systemd configuration file for application startup. When you reboot your server, the application will automatically run on startup.
[
![pm2 add service to the boot time startup](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/8.png)
][18]
### Step 4 - Install and Configure Nginx as a Reverse proxy
In this tutorial, we will use Nginx as a reverse proxy for the node application. Nginx is available in the Ubuntu repository, install it with the apt command:
sudo apt-get install -y nginx
Next, go to the '**sites-available**' directory and create a new virtual host configuration file.
cd /etc/nginx/sites-available/
vim hakase-app
Paste configuration below:
```
upstream hakase-app {
    # Nodejs app upstream
    server 127.0.0.1:3000;
    keepalive 64;
}
# Server on port 80
server {
    listen 80;
    server_name hakase-node.co;
    root /home/yume/hakase-app;
    location / {
        # Proxy_pass configuration
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_max_temp_file_size 0;
        proxy_pass http://hakase-app/;
        proxy_redirect off;
        proxy_read_timeout 240s;
    }
}
```
Save the file and exit vim.
On the configuration:
* The node app is running with domain name '**hakase-node.co**'.
* All traffic from nginx will be forwarded to the node app that is running on port **3000**.
Test Nginx configuration and make sure there is no error.
nginx -t
Start Nginx and enable it to start at boot time:
systemctl start nginx
systemctl enable nginx
### Step 5 - Testing
Open your web browser and visit the domain name (mine is):
[http://hakase-node.co][19]
You will see the express application is running under the nginx web server.
[
![Nodejs ap running with pm2 and nginx](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/9.png)
][20]
Next, reboot your server, and make sure the node app is running at the boot time:
pm2 save
sudo reboot
If you have logged in again to your server, check the node app process. Run the command below as '**yume**' user.
su - yume
pm2 status www
[
![nodejs running at boot time with pm2](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/10.png)
][21]
The Node Application is running under pm2 and Nginx as reverse proxy.
### Links
* [Ubuntu][7]
* [Node.js][8]
* [Nginx][9]
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/
作者:[Muhammad Arul ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/
[1]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-install-nodejs-lts
[2]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-generate-express-sample-app
[3]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-install-pm
[4]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-install-and-configure-nginx-as-a-reverse-proxy
[5]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#step-testing
[6]:https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/#links
[7]:https://www.ubuntu.com/
[8]:https://nodejs.org/en/
[9]:https://www.nginx.com/
[10]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/1.png
[11]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/2.png
[12]:https://www.howtoforge.com/admin/articles/edit/192.168.33.10:3000
[13]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/3.png
[14]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/4.png
[15]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/5.png
[16]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/6.png
[17]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/7.png
[18]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/8.png
[19]:http://hakase-app.co/
[20]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/9.png
[21]:https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/big/10.png

View File

@ -1,236 +0,0 @@
Writing a Linux Debugger Part 1: Setup
============================================================
Anyone who has written more than a hello world program should have used a debugger at some point (if you haven't, drop what you're doing and learn how to use one). However, although these tools are in such widespread use, there aren't a lot of resources which tell you how they work and how to write one[1][1], especially when compared to other toolchain technologies like compilers. In this post series we'll learn what makes debuggers tick and write one for debugging Linux programs.
We'll support the following features:
* Launch, halt, and continue execution
* Set breakpoints on
* Memory addresses
* Source code lines
* Function entry
* Read and write registers and memory
* Single stepping
* Instruction
* Step in
* Step out
* Step over
* Print current source location
* Print backtrace
* Print values of simple variables
In the final part I'll also outline how you could add the following to your debugger:
* Remote debugging
* Shared library and dynamic loading support
* Expression evaluation
* Multi-threaded debugging support
I'll be focusing on C and C++ for this project, but it should work just as well with any language which compiles down to machine code and outputs standard DWARF debug information (if you don't know what that is yet, don't worry, this will be covered soon). Additionally, my focus will be on just getting something up and running which works most of the time, so things like robust error handling will be eschewed in favour of simplicity.
* * *
### Series index
These links will go live as the rest of the posts are released.
1. [Setup][2]
2. [Breakpoints][3]
3. Registers and memory
4. Elves and dwarves
5. Stepping, source and signals
6. Stepping on dwarves
7. Source-level breakpoints
8. Stack unwinding
9. Reading variables
10. Next steps
* * *
### Getting set up
Before we jump into things, let's get our environment set up. I'll be using two dependencies in this tutorial: [Linenoise][4] for handling our command line input, and [libelfin][5] for parsing the debug information. You could use the more traditional libdwarf instead of libelfin, but the interface is nowhere near as nice, and libelfin also provides a mostly complete DWARF expression evaluator, which will save you a lot of time if you want to read variables. Make sure that you use the fbreg branch of my fork of libelfin, as it hacks on some extra support for reading variables on x86.
Once you've either installed these on your system, or got them building as dependencies with whatever build system you prefer, it's time to get started. I just set them to build along with the rest of my code in my CMake files.
* * *
### Launching the executable
Before we actually debug anything, we'll need to launch the debugee program. We'll do this with the classic fork/exec pattern.
```
int main(int argc, char* argv[]) {
if (argc < 2) {
std::cerr << "Program name not specified";
return -1;
}
auto prog = argv[1];
auto pid = fork();
if (pid == 0) {
//we're in the child process
//execute debugee
}
else if (pid >= 1) {
//we're in the parent process
//execute debugger
}
```
We call `fork` and this causes our program to split into two processes. If we are in the child process, `fork` returns `0`, and if we are in the parent process, it returns the process ID of the child process.
If we're in the child process, we want to replace whatever we're currently executing with the program we want to debug.
```
ptrace(PTRACE_TRACEME, 0, nullptr, nullptr);
execl(prog.c_str(), prog.c_str(), nullptr);
```
Here we have our first encounter with `ptrace`, which is going to become our best friend when writing our debugger. `ptrace` allows us to observe and control the execution of another process by reading registers, reading memory, single stepping and more. The API is very ugly; it's a single function which you provide with an enumerator value for what you want to do, and then some arguments which will either be used or ignored depending on which value you supply. The signature looks like this:
```
long ptrace(enum __ptrace_request request, pid_t pid,
void *addr, void *data);
```
`request` is what we would like to do to the traced process; `pid` is the process ID of the traced process; `addr` is a memory address, which is used in some calls to designate an address in the tracee; and `data` is some request-specific resource. The return value often gives error information, so you probably want to check that in your real code; I'm just omitting it for brevity. You can have a look at the man pages for more information.
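For example, a minimal checked wrapper might look like this (a sketch of my own, not part of the tutorial's code; note the errno dance, since `PTRACE_PEEK*` requests can legitimately return `-1`):
```
#include <sys/ptrace.h>
#include <sys/types.h>
#include <cerrno>
#include <cstring>
#include <stdexcept>
#include <string>

// Turn ptrace failures into exceptions instead of silently ignoring them.
long checked_ptrace(__ptrace_request request, pid_t pid,
                    void* addr, void* data) {
    errno = 0;  // PTRACE_PEEK* can return -1 on success, so clear errno first
    long ret = ptrace(request, pid, addr, data);
    if (ret == -1 && errno != 0) {
        throw std::runtime_error(std::string{"ptrace: "} + std::strerror(errno));
    }
    return ret;
}
```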
The request we send in the above code, `PTRACE_TRACEME`, indicates that this process should allow its parent to trace it. All of the other arguments are ignored, because API design isn't important /s.
Next, we call `execl`, which is one of the many `exec` flavours. We execute the given program, passing the name of it as a command-line argument and a `nullptr` to terminate the list. You can pass any other arguments needed to execute your program here if you like.
After we've done this, we're finished with the child process; we'll just let it keep running until we're finished with it.
* * *
### Adding our debugger loop
Now that we've launched the child process, we want to be able to interact with it. For this, we'll create a `debugger` class, give it a loop for listening to user input, and launch that from our parent fork of our `main` function.
```
else if (pid >= 1) {
//parent
debugger dbg{prog, pid};
dbg.run();
}
```
```
class debugger {
public:
debugger (std::string prog_name, pid_t pid)
: m_prog_name{std::move(prog_name)}, m_pid{pid} {}
void run();
private:
std::string m_prog_name;
pid_t m_pid;
};
```
In our `run` function, we need to wait until the child process has finished launching, then just keep on getting input from linenoise until we get an EOF (ctrl+d).
```
void debugger::run() {
int wait_status;
auto options = 0;
waitpid(m_pid, &wait_status, options);
char* line = nullptr;
while((line = linenoise("minidbg> ")) != nullptr) {
handle_command(line);
linenoiseHistoryAdd(line);
linenoiseFree(line);
}
}
```
When the traced process is launched, it will be sent a `SIGTRAP` signal, which is a trace or breakpoint trap. We can wait until this signal is sent using the `waitpid` function.
After we know the process is ready to be debugged, we listen for user input. The `linenoise` function takes a prompt to display and handles user input by itself. This means we get a nice command line with history and navigation commands without doing much work at all. When we get the input, we give the command to a `handle_command` function which we'll write shortly, then we add this command to the linenoise history and free the resource.
* * *
### Handling input
Our commands will follow a similar format to gdb and lldb. To continue the program, a user will type `continue` or `cont` or even just `c`. If they want to set a breakpoint on an address, they'll write `break 0xDEADBEEF`, where `0xDEADBEEF` is the desired address in hexadecimal format. Let's add support for these commands.
```
void debugger::handle_command(const std::string& line) {
auto args = split(line,' ');
auto command = args[0];
if (is_prefix(command, "continue")) {
continue_execution();
}
else {
std::cerr << "Unknown command\n";
}
}
```
`split` and `is_prefix` are a couple of small helper functions:
```
std::vector<std::string> split(const std::string &s, char delimiter) {
std::vector<std::string> out{};
std::stringstream ss {s};
std::string item;
while (std::getline(ss,item,delimiter)) {
out.push_back(item);
}
return out;
}
bool is_prefix(const std::string& s, const std::string& of) {
if (s.size() > of.size()) return false;
return std::equal(s.begin(), s.end(), of.begin());
}
```
We'll add `continue_execution` to the `debugger` class.
```
void debugger::continue_execution() {
ptrace(PTRACE_CONT, m_pid, nullptr, nullptr);
int wait_status;
auto options = 0;
waitpid(m_pid, &wait_status, options);
}
```
For now our `continue_execution` function will just use `ptrace` to tell the process to continue, then `waitpid` until it's signalled.
* * *
### Finishing up
Now you should be able to compile some C or C++ program, run it through your debugger, see it halting on entry, and be able to continue execution from your debugger. In the next part we'll learn how to get our debugger to set breakpoints. If you come across any issues, please let me know in the comments!
You can find the code for this post [here][6].
--------------------------------------------------------------------------------
via: http://blog.tartanllama.xyz/c++/2017/03/21/writing-a-linux-debugger-setup/
作者:[Simon Brand ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linkedin.com/in/simon-brand-36520857
[1]:http://blog.tartanllama.xyz/c++/2017/03/21/writing-a-linux-debugger-setup/#fn:1
[2]:http://blog.tartanllama.xyz/c++/2017/03/21/writing-a-linux-debugger-setup/
[3]:http://blog.tartanllama.xyz/c++/2017/03/24/writing-a-linux-debugger-breakpoints/
[4]:https://github.com/antirez/linenoise
[5]:https://github.com/TartanLlama/libelfin/tree/fbreg
[6]:https://github.com/TartanLlama/minidbg/tree/tut_setup

View File

@ -1,203 +0,0 @@
Writing a Linux Debugger Part 2: Breakpoints
============================================================
In the first part of this series we wrote a small process launcher as a base for our debugger. In this post we'll learn how breakpoints work in x86 Linux and augment our tool with the ability to set them.
* * *
### Series index
These links will go live as the rest of the posts are released.
1. [Setup][1]
2. [Breakpoints][2]
3. Registers and memory
4. Elves and dwarves
5. Stepping, source and signals
6. Stepping on dwarves
7. Source-level breakpoints
8. Stack unwinding
9. Reading variables
10. Next steps
* * *
### How is breakpoint formed?
There are two main kinds of breakpoints: hardware and software. Hardware breakpoints typically involve setting architecture-specific registers to produce your breaks for you, whereas software breakpoints involve modifying the code which is being executed on the fly. We'll be focusing solely on software breakpoints for this article, as they are simpler and you can have as many as you want. On x86 you can only have four hardware breakpoints set at a given time, but they give you the power to make them fire on just reading from or writing to a given address rather than only executing code there.
I said above that software breakpoints are set by modifying the executing code on the fly, so the questions are:
* How do we modify the code?
* What modifications do we make to set a breakpoint?
* How is the debugger notified?
The answer to the first question is, of course, `ptrace`. We've previously used it to set up our program for tracing and continuing its execution, but we can also use it to read and write memory.
The modification we make has to cause the processor to halt and signal the program when the breakpoint address is executed. On x86 this is accomplished by overwriting the instruction at that address with the `int 3` instruction. x86 has an _interrupt vector table_ which the operating system can use to register handlers for various events, such as page faults, protection faults, and invalid opcodes. It's kind of like registering error handling callbacks, but right down at the hardware level. When the processor executes the `int 3` instruction, control is passed to the breakpoint interrupt handler, which in the case of Linux signals the process with a `SIGTRAP`. You can see this process in the diagram below, where we overwrite the first byte of the `mov` instruction with `0xcc`, which is the instruction encoding for `int 3`.
![breakpoint](http://blog.tartanllama.xyz/assets/breakpoint.png)
The last piece of the puzzle is how the debugger is notified of the break. If you remember back in the previous post, we can use `waitpid` to listen for signals which are sent to the debugee. We can do exactly the same thing here: set the breakpoint, continue the program, call `waitpid` and wait until the `SIGTRAP` occurs. This breakpoint can then be communicated to the user, perhaps by printing the source location which has been reached, or changing the focused line in a GUI debugger.
* * *
### Implementing software breakpoints
We'll implement a `breakpoint` class to represent a breakpoint on some location which we can enable or disable as we wish.
```
class breakpoint {
public:
breakpoint(pid_t pid, std::intptr_t addr)
: m_pid{pid}, m_addr{addr}, m_enabled{false}, m_saved_data{}
{}
void enable();
void disable();
auto is_enabled() const -> bool { return m_enabled; }
auto get_address() const -> std::intptr_t { return m_addr; }
private:
pid_t m_pid;
std::intptr_t m_addr;
bool m_enabled;
uint64_t m_saved_data; //data which used to be at the breakpoint address
};
```
Most of this is just tracking of state; the real magic happens in the `enable` and `disable` functions.
As we've learned above, we need to replace the instruction which is currently at the given address with an `int 3` instruction, which is encoded as `0xcc`. We'll also want to save out what used to be at that address so that we can restore the code later; we don't want to just forget to execute the user's code!
```
void breakpoint::enable() {
m_saved_data = ptrace(PTRACE_PEEKDATA, m_pid, m_addr, nullptr);
uint64_t int3 = 0xcc;
uint64_t data_with_int3 = ((m_saved_data & ~0xff) | int3); //set bottom byte to 0xcc
ptrace(PTRACE_POKEDATA, m_pid, m_addr, data_with_int3);
m_enabled = true;
}
```
The `PTRACE_PEEKDATA` request to `ptrace` is how to read the memory of the traced process. We give it a process ID and an address, and it gives us back the 64 bits which are currently at that address. `(m_saved_data & ~0xff)` zeroes out the bottom byte of this data, then we bitwise `OR` that with our `int 3` instruction to set the breakpoint. Finally, we set the breakpoint by overwriting that part of memory with our new data with `PTRACE_POKEDATA`.
The implementation of `disable` is easier, as we simply need to restore the original data which we overwrote with `0xcc`.
```
void breakpoint::disable() {
ptrace(PTRACE_POKEDATA, m_pid, m_addr, m_saved_data);
m_enabled = false;
}
```
* * *
### Adding breakpoints to the debugger
We'll make three changes to our debugger class to support setting breakpoints through the user interface:
1. Add a breakpoint storage data structure to `debugger`
2. Write a `set_breakpoint_at_address` function
3. Add a `break` command to our `handle_command` function
I'll store my breakpoints in a `std::unordered_map<std::intptr_t, breakpoint>` structure so that it's easy and fast to check if a given address has a breakpoint on it and, if so, retrieve that breakpoint object.
```
class debugger {
//...
void set_breakpoint_at_address(std::intptr_t addr);
//...
private:
//...
std::unordered_map<std::intptr_t,breakpoint> m_breakpoints;
};
```
In `set_breakpoint_at_address` we'll create a new breakpoint, enable it, add it to the data structure, and print out a message for the user. If you like, you could factor out all message printing so that you can use the debugger as a library as well as a command-line tool, but I'll mash it all together for simplicity.
```
void debugger::set_breakpoint_at_address(std::intptr_t addr) {
std::cout << "Set breakpoint at address 0x" << std::hex << addr << std::endl;
breakpoint bp {m_pid, addr};
bp.enable();
m_breakpoints[addr] = bp;
}
```
Now we'll augment our command handler to call our new function.
```
void debugger::handle_command(const std::string& line) {
auto args = split(line,' ');
auto command = args[0];
if (is_prefix(command, "cont")) {
continue_execution();
}
else if(is_prefix(command, "break")) {
std::string addr {args[1], 2}; //naively assume that the user has written 0xADDRESS
set_breakpoint_at_address(std::stol(addr, 0, 16));
}
else {
std::cerr << "Unknown command\n";
}
}
```
I've simply removed the first two characters of the string and called `std::stol` on the result, but feel free to make the parsing more robust. `std::stol` optionally takes a radix to convert from, which is handy for reading in hexadecimal.
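If you do want something a little more defensive, a sketch might look like this (purely illustrative; `parse_address` is my own name for it, and it assumes C++17 for `std::optional`):
```
#include <cstdint>
#include <optional>  // C++17
#include <string>

// Accept an optional 0x prefix, reject trailing junk, and report failure
// instead of letting std::stol throw out of the command handler.
std::optional<std::intptr_t> parse_address(const std::string& s) {
    std::size_t start = (s.rfind("0x", 0) == 0 || s.rfind("0X", 0) == 0) ? 2 : 0;
    try {
        std::size_t consumed = 0;
        long addr = std::stol(s.substr(start), &consumed, 16);
        if (consumed != s.size() - start) return std::nullopt;  // trailing junk
        return static_cast<std::intptr_t>(addr);
    } catch (const std::exception&) {  // invalid_argument or out_of_range
        return std::nullopt;
    }
}
```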
* * *
### Continuing from the breakpoint
If you try this out, you might notice that if you continue from the breakpoint, nothing happens. That's because the breakpoint is still set in memory, so it's just hit repeatedly. The simple solution is to just disable the breakpoint, single step, re-enable it, then continue. Unfortunately we'd also need to modify the program counter to point back before the breakpoint, so we'll leave this until the next post where we'll learn about manipulating registers.
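As a rough preview of that fix (x86-64 only, error handling omitted; `step_over_breakpoint` is my own name for it, and the polished version belongs to the later post on registers):
```
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <cstdint>

void debugger::step_over_breakpoint() {
    user_regs_struct regs;
    ptrace(PTRACE_GETREGS, m_pid, nullptr, &regs);

    // The int 3 has already executed, so the saved PC is one past it.
    auto possible_bp = static_cast<std::intptr_t>(regs.rip - 1);

    auto it = m_breakpoints.find(possible_bp);
    if (it != m_breakpoints.end() && it->second.is_enabled()) {
        regs.rip = possible_bp;                 // rewind to the real instruction
        ptrace(PTRACE_SETREGS, m_pid, nullptr, &regs);

        it->second.disable();                   // restore the original byte
        ptrace(PTRACE_SINGLESTEP, m_pid, nullptr, nullptr);
        int wait_status;
        waitpid(m_pid, &wait_status, 0);        // wait for the step to complete
        it->second.enable();                    // re-arm for the next hit
    }
}
```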
* * *
### Testing it out
Of course, setting a breakpoint on some address isn't very useful if you don't know what address to set it at. In the future we'll be adding the ability to set breakpoints on function names or source code lines, but for now, we can work it out manually.
A simple way to test out your debugger is to write a hello world program which writes to `std::cerr` (to avoid buffering) and set a breakpoint on the call to the output operator. If you continue the debugee then hopefully the execution will stop without printing anything. You can then restart the debugger and set a breakpoint just after the call, and you should see the message being printed successfully.
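A minimal test target might be as simple as this (build it with `g++ -g -O0 hello.cpp -o hello` so the call is easy to spot in the disassembly):
```
// hello.cpp - std::cerr is unbuffered, so if the breakpoint works you will
// see no output until you continue past the call to the output operator.
#include <iostream>

int main() {
    std::cerr << "Hello, breakpoints!\n";
    return 0;
}
```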
One way to find the address is to use `objdump`. If you open up a shell and execute `objdump -d <your program>`, then you should see the disassembly for your code. You should then be able to find the `main` function and locate the `call` instruction which you want to set the breakpoint on. For example, I built a hello world example, disassembled it, and got this as the disassembly for `main`:
```
0000000000400936 <main>:
400936: 55 push %rbp
400937: 48 89 e5 mov %rsp,%rbp
40093a: be 35 0a 40 00 mov $0x400a35,%esi
40093f: bf 60 10 60 00 mov $0x601060,%edi
400944: e8 d7 fe ff ff callq 400820 <_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc@plt>
400949: b8 00 00 00 00 mov $0x0,%eax
40094e: 5d pop %rbp
40094f: c3 retq
```
As you can see, we would want to set a breakpoint on `0x400944` to see no output, and `0x400949` to see the output.
* * *
### Finishing up
You should now have a debugger which can launch a program and allow the user to set breakpoints on memory addresses. Next time we'll add the ability to read from and write to memory and registers. Again, let me know in the comments if you have any issues.
You can find the code for this post [here][3].
--------------------------------------------------------------------------------
via: http://blog.tartanllama.xyz/c++/2017/03/24/writing-a-linux-debugger-breakpoints/
作者:[Simon Brand ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://blog.tartanllama.xyz/
[1]:http://blog.tartanllama.xyz/c++/2017/03/21/writing-a-linux-debugger-setup/
[2]:http://blog.tartanllama.xyz/c++/2017/03/24/writing-a-linux-debugger-breakpoints/
[3]:https://github.com/TartanLlama/minidbg/tree/tut_break

View File

@ -1,102 +0,0 @@
5 open source RSS feed readers
============================================================
![RSS feed](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/rss_feed.png?itok=FHLEh-fZ "RSS feed")
>Image by : [Rob McDonald][2] on Flickr. Modified by Opensource.com. [CC BY-SA 2.0][3].
### Do you use an RSS reader regularly?
When Google Reader was discontinued four years ago, many "technology experts" called it the end of RSS feeds.
And it's true that for some people, social media and other aggregation tools are filling a need that feed readers for RSS, Atom, and other syndication formats once served. But old technologies never really die just because new technologies come along, particularly if the new technology does not perfectly replicate all of the use cases of the old one. The target audience for a technology might change a bit, and the tools people use to consume the technology might change, too.
But RSS is no more gone than email, JavaScript, SQL databases, the command line, or any number of other technologies that various people told me more than a decade ago had numbered days. (Is it any wonder that vinyl album sales just hit a [25-year peak][4] last year?) One only has to look at the success of online feed reader site Feedly to understand that there's still definitely a market for RSS readers.
The truth is, RSS and related feed formats are just more versatile than anything in wide usage that has attempted to replace it. There is no other easy way for me as a consumer to read a wide variety of publications, formatted in a client of my choosing, where I am virtually guaranteed to see every item that is published, while simultaneously not being shown a bunch of articles I have already read. And as a publisher, it's a simple format that most any publishing software I already use will support out of the box, letting me reach more people and easily distribute many types of documents.
So no, RSS is not dead. Long live RSS! We last looked at [open source RSS reader][5] options in 2013, and it's time for an update. Here are some of my top choices for open source RSS feed readers in 2017, each a little different in its approach.
### Miniflux
[Miniflux][6] is an absolutely minimalist web-based RSS reader, but don't confuse its intentionally light approach with laziness on the part of the developers; it is purposefully built to be a simple and efficient design. The philosophy of Miniflux seems to be to keep the application out of the way so that the reader can focus on the content, something many of us can appreciate in a world of bloated web applications.
But lightweight doesn't mean devoid of features; its responsive design looks good across any device, and allows for theming, an API interface, multiple languages, bookmark pinning, and more.
Miniflux's [source code][7] can be found on GitHub under the [Affero GPLv3][8] license. If you don't want to set up your own self-hosted version, a paid hosting plan is available for $15/year.
### RSSOwl
[RSSOwl][9] is a cross-platform desktop feed reader. Written in Java, it is reminiscent of many popular desktop email clients in style and feel. It features powerful filtering and search capabilities, customizable notifications, and labels and bins for sorting your feeds. If you're used to using Thunderbird or other desktop readers for email, you'll feel right at home in RSSOwl.
You can find the source code for [RSSOwl][10] on GitHub under the [Eclipse Public License][11].
### Tickr
[Tickr][12] is a slightly different entry in this mix. It's a Linux desktop client, but it's not your traditional browse-and-read format. Instead, it slides your feed's headlines across a bar on your desktop like a news ticker; it's a great choice for news junkies who want to get the latest from a variety of sources. Clicking on a headline will open it in your browser of choice. It's not a dedicated reading client like the rest of the applications on this list, but if you're more interested in skimming headlines than reading every article, it's a good pick.
Tickr's source code and binaries can be found on the project's [website][13] under a GPL license.
### Tiny Tiny RSS
It would be difficult to build a list of modern RSS readers without including [Tiny Tiny RSS][14]. It's among the most popular self-hosted web-based readers, and it's chock-full of features: OPML import and export, keyboard shortcuts, sharing features, a themeable interface, an infrastructure for plug-ins, filtering capabilities, and lots more.
Tiny Tiny RSS also hosts an official [Android client][15], for those hoping to read on the go.
Both the [web][16] and [Android][17] source code for Tiny Tiny RSS can be found on GitLab under a [GPLv3 license][18].
### Winds
[Winds][19] is a modern looking self-hosted web feed reader, built on React. It makes use of a hosted machine learning personalization API called Stream, with the intent of helping you find more content that might be of interest to you based on your current interests. An online demo is available so you can [try it out][20] before you download. It's a new project, just a few months old, and so it's perhaps too soon to evaluate whether it's ready to replace my daily feed reader yet, but it's certainly a project I'm watching with interest.
You can find the [source code][21] for Winds on GitHub under an [MIT][22] license.
* * *
These are most definitely not the only options out there. RSS is a relatively easy-to-parse, well-documented format, and so there are many, many different feed readers out there built to suit just about every taste. Here's a [big list][23] of self-hosted open source feed readers you might consider in addition to the ones we listed. We hope you'll share with us what your favorite RSS reader is in the comments below.
--------------------------------------------------------------------------------
作者简介:
Jason Baker - Jason is passionate about using technology to make the world more open, from software development to bringing sunlight to local governments. Linux desktop enthusiast. Map/geospatial nerd. Raspberry Pi tinkerer. Data analysis and visualization geek. Occasional coder. Cloud nativist. Follow him on Twitter.
--------------
via: https://opensource.com/article/17/3/rss-feed-readers
作者:[ Jason Baker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jason-baker
[1]:https://opensource.com/article/17/3/rss-feed-readers?rate=2sJrLq0K3QPQCznBId7K1Qrt3QAkwhQ435UyP77B5rs
[2]:https://www.flickr.com/photos/evokeartdesign/6002000807
[3]:https://creativecommons.org/licenses/by/2.0/
[4]:https://www.theguardian.com/music/2017/jan/03/record-sales-vinyl-hits-25-year-high-and-outstrips-streaming
[5]:https://opensource.com/life/13/6/open-source-rss
[6]:https://miniflux.net/
[7]:https://github.com/miniflux/miniflux
[8]:https://github.com/miniflux/miniflux/blob/master/LICENSE
[9]:http://www.rssowl.org/
[10]:https://github.com/rssowl/RSSOwl
[11]:https://github.com/rssowl/RSSOwl/blob/master/LICENSE
[12]:https://www.open-tickr.net/
[13]:https://www.open-tickr.net/download.php
[14]:https://tt-rss.org/gitlab/fox/tt-rss/wikis/home
[15]:https://tt-rss.org/gitlab/fox/tt-rss-android
[16]:https://tt-rss.org/gitlab/fox/tt-rss/tree/master
[17]:https://tt-rss.org/gitlab/fox/tt-rss-android/tree/master
[18]:https://tt-rss.org/gitlab/fox/tt-rss-android/blob/master/COPYING
[19]:https://winds.getstream.io/
[20]:https://winds.getstream.io/app/getting-started
[21]:https://github.com/GetStream/Winds
[22]:https://github.com/GetStream/Winds/blob/master/LICENSE.md
[23]:https://github.com/Kickball/awesome-selfhosted#feed-readers
[24]:https://opensource.com/user/19894/feed
[25]:https://opensource.com/article/17/3/rss-feed-readers#comments
[26]:https://opensource.com/users/jason-baker

View File

@ -0,0 +1,345 @@
ictlyh Translating
All You Need To Know About Processes in Linux [Comprehensive Guide]
============================================================
In this article, we will walk through a basic understanding of processes and briefly look at [how to manage processes in Linux][9] using certain commands.
A process refers to a program in execution; it's a running instance of a program. It is made up of the program instructions, data read from files, other programs, or input from a system user.
#### Types of Processes
There are fundamentally two types of processes in Linux:
* Foreground processes (also referred to as interactive processes) these are initialized and controlled through a terminal session. In other words, there has to be a user connected to the system to start such processes; they haven't started automatically as part of the system functions/services.
* Background processes (also referred to as non-interactive/automatic processes) these are processes not connected to a terminal; they don't expect any user input.
#### What are Daemons?
These are special types of background processes that start at system startup and keep running forever as a service; they don't die. They start spontaneously as system tasks (run as services). However, they can be controlled by a user via the init process.
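For the curious, the classic way a traditional daemon detaches itself from the terminal is the double-fork pattern sketched below (a generic illustration, not tied to any particular service; modern init systems such as systemd usually make this unnecessary):
```
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdlib>

// Classic SysV-style daemonization: double-fork so the process is
// re-parented to init (PID 1) and can never reacquire a controlling
// terminal. Error handling trimmed for brevity.
void daemonize() {
    if (fork() > 0) std::exit(0);  // parent exits; child is adopted by init
    setsid();                      // become leader of a new session
    if (fork() > 0) std::exit(0);  // session leader exits: no controlling tty

    umask(0);                      // reset the file mode creation mask
    chdir("/");                    // don't keep any filesystem busy

    // Detach stdin/stdout/stderr from the terminal.
    int fd = open("/dev/null", O_RDWR);
    dup2(fd, STDIN_FILENO);
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    if (fd > STDERR_FILENO) close(fd);
}
```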
[
![Linux Process State](http://www.tecmint.com/wp-content/uploads/2017/03/ProcessState.png)
][10]
Linux Process State
### Creation of a Processes in Linux
A new process is normally created when an existing process makes an exact copy of itself in memory. The child process will have the same environment as its parent, but only the process ID number is different.
There are two conventional ways of creating a new process in Linux:
* Using the system() function this method is relatively simple; however, it's inefficient and carries certain security risks.
* Using the fork() and exec() functions this technique is a little more advanced but offers greater flexibility, speed, and security (see the sketch below).
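As a minimal sketch of the fork()/exec() pattern (a generic illustration; the child program and its arguments are arbitrary):
```
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main() {
    pid_t pid = fork();            // duplicate the calling process
    if (pid == 0) {
        // Child: same environment as the parent, but a new PID.
        execlp("ls", "ls", "-l", static_cast<char*>(nullptr));
        std::perror("execlp");     // only reached if exec fails
        return 1;
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);  // parent waits for the child to finish
        std::printf("child %d exited\n", static_cast<int>(pid));
    } else {
        std::perror("fork");       // fork itself failed
        return 1;
    }
    return 0;
}
```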
### How Does Linux Identify Processes?
Because Linux is a multi-user system, meaning different users can be running various programs on the system, each running instance of a program must be identified uniquely by the kernel.
A program is identified by its process ID (PID) as well as its parent process's ID (PPID); processes can therefore be further categorized into:
* Parent processes – these are processes that create other processes during run-time.
* Child processes – these processes are created by other processes during run-time.
#### The Init Process
Init process is the mother (parent) of all processes on the system; it's the first program that is executed when the [Linux system boots up][11], and it manages all other processes on the system. It is started by the kernel itself, so in principle it does not have a parent process.
The init process always has a process ID of 1. It functions as an adoptive parent for all orphaned processes.
You can use the pidof command to find the ID of a process:
```
# pidof systemd
# pidof top
# pidof httpd
```
[
![Find Linux Process ID](http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Process-ID.png)
][12]
Find Linux Process ID
To find the process ID and parent process ID of the current shell, run:
```
$ echo $$
$ echo $PPID
```
[
![Find Linux Parent Process ID](http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Parent-Process-ID.png)
][13]
Find Linux Parent Process ID
#### Starting a Process in Linux
Once you run a command or program (for example cloudcmd – CloudCommander), it will start a process in the system. You can start a foreground (interactive) process as follows; it will be connected to the terminal and a user can send input to it:
```
# cloudcmd
```
[
![Start Linux Interactive Process](http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Interactive-Process.png)
][14]
Start Linux Interactive Process
#### Linux Background Jobs
To start a process in the background (non-interactive), use the `&` symbol; here, the process doesn't read input from a user until it's moved to the foreground.
```
# cloudcmd &
# jobs
```
[
![Start Linux Process in Background](http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Process-in-Background.png)
][15]
Start Linux Process in Background
You can also send a process to the background by suspending it using `[Ctrl + Z]`; this will send the SIGSTOP signal to the process, thus stopping its operations, and it becomes idle:
```
# tar -cf backup.tar /backups/* #press Ctrl+Z
# jobs
```
To continue running the above-suspended command in the background, use the bg command:
```
# bg
```
To send a background process to the foreground, use the fg command together with the job ID like so:
```
# jobs
# fg %1
```
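The same job-control signals can also be sent programmatically; this is an illustrative Python sketch (not from the original article), assuming a Linux system where the sleep command is available:

```
import os
import signal
import subprocess
import time

# start a child process; sleep is just a stand-in workload
child = subprocess.Popen(["sleep", "60"])

os.kill(child.pid, signal.SIGSTOP)   # suspend it, like Ctrl+Z
time.sleep(2)
os.kill(child.pid, signal.SIGCONT)   # resume it, like bg/fg
child.terminate()                    # clean up (sends SIGTERM)
```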
[
![Linux Background Process Jobs](http://www.tecmint.com/wp-content/uploads/2017/03/Linux-Background-Process-Jobs.png)
][16]
Linux Background Process Jobs
You may also like: [How to Start Linux Command in Background and Detach Process in Terminal][17]
#### States of a Process in Linux
During execution, a process changes from one state to another depending on its environment/circumstances. In Linux, a process has the following possible states:
* Running – here it's either running (it is the current process in the system) or it's ready to run (it's waiting to be assigned to one of the CPUs).
* Waiting – in this state, a process is waiting for an event to occur or for a system resource. Additionally, the kernel also differentiates between two types of waiting processes; interruptible waiting processes can be interrupted by signals, and uninterruptible waiting processes are waiting directly on hardware conditions and cannot be interrupted by any event/signal.
* Stopped – in this state, a process has been stopped, usually by receiving a signal. For instance, a process that is being debugged.
* Zombie – here, a process is dead; it has been halted but it still has an entry in the process table.
#### How to View Active Processes in Linux
There are several Linux tools for viewing/listing running processes on the system, the two traditional and well known are [ps][18] and [top][19] commands:
#### 1\. ps Command
It displays information about a selection of the active processes on the system as shown below:
```
# ps
# ps -e | head
```
[
![List Linux Active Processes](http://www.tecmint.com/wp-content/uploads/2017/03/ps-command.png)
][20]
List Linux Active Processes
#### 2\. top System Monitoring Tool
[top is a powerful tool][21] that offers you a [dynamic real-time view of a running system][22] as shown in the screenshot below:
```
# top
```
[
![List Linux Running Processes](http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png)
][23]
List Linux Running Processes
Read this for more top usage examples: [12 TOP Command Examples in Linux][24]
#### 3\. glances System Monitoring Tool
glances is a relatively new system monitoring tool with advanced features:
```
# glances
```
[
![Glances - Linux Process Monitoring](http://www.tecmint.com/wp-content/uploads/2017/03/glances.png)
][25]
Glances Linux Process Monitoring
For a comprehensive usage guide, read through: [Glances An Advanced Real Time System Monitoring Tool for Linux][26]
There are several other useful Linux system monitoring tools you can use to list active processes; open the links below to read more about them:
1. [20 Command Line Tools to Monitor Linux Performance][1]
2. [13 More Useful Linux Monitoring Tools][2]
### How to Control Processes in Linux
Linux also has some commands for controlling processes such as kill, pkill, pgrep and killall; below are a few basic examples of how to use them:
```
$ pgrep -u tecmint top
$ kill 2308
$ pgrep -u tecmint top
$ pgrep -u tecmint glances
$ pkill glances
$ pgrep -u tecmint glances
```
[
![Control Linux Processes](http://www.tecmint.com/wp-content/uploads/2017/03/Control-Linux-Processes.png)
][27]
Control Linux Processes
To learn in depth how to use these commands to kill/terminate active processes in Linux, open the links below:
1. [A Guide to Kill, Pkill and Killall Commands to Terminate Linux Processes][3]
2. [How to Find and Kill Running Processes in Linux][4]
Note that you can use them to kill [unresponsive applications in Linux][28] when your system freezes.
#### Sending Signals To Processes
The fundamental way of controlling processes in Linux is by sending signals to them. There are multiple signals that you can send to a process; to view all the signals, run:
```
$ kill -l
```
[
![List All Linux Signals](http://www.tecmint.com/wp-content/uploads/2017/03/list-all-signals.png)
][29]
List All Linux Signals
To send a signal to a process, use the kill, pkill or killall commands we mentioned earlier on. But programs can only respond to signals if they are programmed to recognize those signals.
And most signals are for internal use by the system, or for programmers when they write code. The following are signals which are useful to a system user:
* SIGHUP 1 – sent to a process when its controlling terminal is closed.
* SIGINT 2 – sent to a process by its controlling terminal when a user interrupts the process by pressing `[Ctrl+C]`.
* SIGQUIT 3 – sent to a process if the user sends a quit signal by pressing `[Ctrl+\]`.
* SIGKILL 9 – this signal immediately terminates (kills) a process and the process will not perform any clean-up operations.
* SIGTERM 15 – this is a program termination signal (kill will send this by default).
* SIGTSTP 20 – sent to a process by its controlling terminal to request it to stop (terminal stop); initiated by the user pressing `[Ctrl+Z]`.
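As noted above, a program only reacts to a signal it is written to recognize. A minimal, illustrative Python sketch (not from the original article) that installs a SIGTERM handler:

```
import os
import signal
import time

def handler(signum, frame):
    print("caught SIGTERM, cleaning up before exit")
    raise SystemExit(0)

signal.signal(signal.SIGTERM, handler)   # now the process "recognizes" SIGTERM

print("running as pid", os.getpid())     # try: kill <pid> from another terminal
while True:
    time.sleep(1)                        # press Ctrl+C or send SIGTERM to stop
```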
The following are kill command examples to kill the Firefox application using its PID once it freezes:
```
$ pidof firefox
$ kill -9 2687
OR
$ kill -KILL 2687
OR
$ kill -SIGKILL 2687
```
To kill an application using its name, use pkill or killall like so:
```
$ pkill firefox
$ killall firefox
```
#### Changing Linux Process Priority
On the Linux system, all active processes have a priority and a certain nice value. Processes with higher priority will normally get more CPU time than lower priority processes.
However, a system user with root privileges can influence this with the nice and renice commands.
From the output of the top command, the NI column shows the process nice value:
```
$ top
```
[
![List Linux Running Processes](http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png)
][30]
List Linux Running Processes
Use the nice command to set a nice value for a process. Keep in mind that normal users can assign a nice value from 0 to 19 to processes they own.
Only the root user can use negative nice values.
To change the priority of a running process, use the renice command as follows:
```
$ renice +8 2687
$ renice +8 2103
```
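For illustration only (not from the original article), the same rules are visible from Python's os.nice(), which wraps the underlying nice() system call; the increment of 8 below is arbitrary:

```
import os

# os.nice(increment) adds increment to the niceness and returns the new value;
# only root may pass a negative increment (i.e. raise priority)
print("current niceness:", os.nice(0))
print("after +8:", os.nice(8))
```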
Check out some of our useful articles on how to manage and control Linux processes.
1. [Linux Process Management: Boot, Shutdown, and Everything in Between][5]
2. [Find Top 15 Processes by Memory Usage with top in Batch Mode][6]
3. [Find Top Running Processes by Highest Memory and CPU Usage in Linux][7]
4. [How to Find a Process Name Using PID Number in Linux][8]
That's all for now! If you have any questions or additional ideas, share them with us via the feedback form below.
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-process-management/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/command-line-tools-to-monitor-linux-performance/
[2]:http://www.tecmint.com/linux-performance-monitoring-tools/
[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/
[4]:http://www.tecmint.com/find-and-kill-running-processes-pid-in-linux/
[5]:http://www.tecmint.com/rhcsa-exam-boot-process-and-process-management/
[6]:http://www.tecmint.com/find-processes-by-memory-usage-top-batch-mode/
[7]:http://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
[8]:http://www.tecmint.com/find-process-name-pid-number-linux/
[9]:http://www.tecmint.com/dstat-monitor-linux-server-performance-process-memory-network/
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/ProcessState.png
[11]:http://www.tecmint.com/linux-boot-process/
[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Process-ID.png
[13]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-Linux-Parent-Process-ID.png
[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Interactive-Process.png
[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Start-Linux-Process-in-Background.png
[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Linux-Background-Process-Jobs.png
[17]:http://www.tecmint.com/run-linux-command-process-in-background-detach-process/
[18]:http://www.tecmint.com/linux-boot-process-and-manage-services/
[19]:http://www.tecmint.com/12-top-command-examples-in-linux/
[20]:http://www.tecmint.com/wp-content/uploads/2017/03/ps-command.png
[21]:http://www.tecmint.com/12-top-command-examples-in-linux/
[22]:http://www.tecmint.com/bcc-best-linux-performance-monitoring-tools/
[23]:http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png
[24]:http://www.tecmint.com/12-top-command-examples-in-linux/
[25]:http://www.tecmint.com/wp-content/uploads/2017/03/glances.png
[26]:http://www.tecmint.com/glances-an-advanced-real-time-system-monitoring-tool-for-linux/
[27]:http://www.tecmint.com/wp-content/uploads/2017/03/Control-Linux-Processes.png
[28]:http://www.tecmint.com/kill-processes-unresponsive-programs-in-ubuntu/
[29]:http://www.tecmint.com/wp-content/uploads/2017/03/list-all-signals.png
[30]:http://www.tecmint.com/wp-content/uploads/2017/03/top-command.png
[31]:http://www.tecmint.com/author/aaronkili/
[32]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[33]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -1,196 +0,0 @@
translated by zhousiyu325
Yes, Python is Slow, and I Don't Care
============================================================
### A rant on sacrificing performance for productivity.
![](https://cdn-images-1.medium.com/max/800/0*pWAgROZ2JbYzlDgj.jpg)
I'm taking a break from my discussion on asyncio in Python to talk about something that has been on my mind recently: the speed of Python. For those who don't know, I am somewhat of a Python fanboy, and I aggressively use Python everywhere I can. One of the biggest complaints people have against Python is that it's slow. Some people almost refuse to even try Python because it's slower than X. Here are my thoughts as to why you should try Python, despite it being slow.
### Speed No Longer Matters
It used to be the case that programs took a really long time to run. CPUs were expensive, memory was expensive. Running time of a program used to be an important metric. Computers were very expensive, and so was the electricity to run them. Optimization of these resources was done because of an eternal business law:
> Optimize your most expensive resource.
Historically, the most expensive resource was computer run time. This is what led to the study of computer science, which focuses on the efficiency of different algorithms. However, this is no longer true, as silicon is now cheap. Like really cheap. Run time is no longer your most expensive resource. A company's most expensive resource is now its employees' time. Or in other words, you. It's more important to get stuff done than to make it go fast. In fact, this is so important, I am going to put it again right here as if it was a quote (for those who are just browsing):
> It's more important to get stuff done than to make it go fast.
You might be saying, “My company cares about speed, I build a web application and all responses have to be faster than x milliseconds.” Or, “We have had customers cancel because they think our app is too slow.” I am not trying to say that speed doesn't matter at all; I am simply trying to say that it's no longer the most important thing; it's not your most expensive resource.
![](https://cdn-images-1.medium.com/max/800/0*Z6j9zMua_w-T25TC.jpg)
### Speed Is The Only Thing That Matters
When you say _speed_ in the context of programming, you typically mean performance, aka CPU cycles. When your CEO says _speed_ in the context of programming he means business speed. The most important metric is time-to-market. Ultimately, it doesn't matter how fast your product/web app is. It doesn't matter what language it's written in. It doesn't even matter how much money it takes to run. At the end of the day, the one thing that will make your company survive or die is time-to-market. I'm not just talking about the startup idea of how long it takes till you make money, but more so the time frame of “from idea, to customer's hands.” The only way to survive in business is to innovate faster than your competitors. It doesn't matter how many good ideas you come up with if your competitors “ship” before you do. You have to be the first to market, or at least keep up. Once you slow down, you are done.
> The only way to survive in business is to innovate faster than your competitors.
#### A Case of Microservices
Companies like Amazon, Google, and Netflix understand the importance of moving fast. They have created a business system where they can move fast and innovate quickly. Microservices are the solution to their problem. This article has nothing to do with whether or not you should be using microservices, but at least accept that Amazon and Google think they should be using them.
![](https://cdn-images-1.medium.com/max/600/0*MBM9zatYv_Lzr3QN.jpg)
Microservices are inherently slow. The very concept of a microservice is to break up a boundary by a network call. This means you are taking what was a function call (a couple CPU cycles) and turning it into a network call. There isn't much you could do that is worse in terms of performance. Network calls are really slow compared to the CPU. But these big companies still choose to use microservices. There really isn't an architecture slower than microservices that I know of. Microservices' biggest con is performance, but their greatest pro is time-to-market. By building teams around smaller projects and code bases, a company is able to iterate and innovate at a much faster pace. This just goes to show that very large companies also care about time-to-market, not just startups.
#### CPU is Not your Bottleneck
![](https://cdn-images-1.medium.com/max/800/0*s1RKhkRIBMEYji_w.jpg)
If you write a network application, such as a web server, chances are, CPU time is not the bottleneck of your application. When your web server handles a request, it probably makes a couple network calls, such as to your database, or perhaps a cache server like Redis. While these services themselves may be fast, the network call to them is slow. [There is a really great blog article on the speed differences of certain operations.][1] In the article, the author scales CPU cycle times into more understandable human times. If a single CPU cycle was the equivalent of 1 second, then a network call from California to New York would be the equivalent of 4 years. That is how much slower the network is. For some rough estimates, let's say a normal network call inside the same data center takes about 3 ms. That would be the equivalent of 3 months in our “human scale”. Now imagine your program is very CPU intensive; it takes 100,000 cycles to respond to a single call. That would be the equivalent of just over 1 day. Now let's say you use a language that is 5 times as slow; now it takes about 5 days. Well, compare that to our 3 month network call, and the 4 day difference doesn't really matter much at all. If someone has to wait at least 3 months for a package, I don't think an extra 4 days will really matter all that much to them.
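The back-of-the-envelope numbers above are easy to reproduce. A minimal sketch, assuming a roughly 3 GHz clock (my assumption, not a figure from the article):

```
GHZ = 3e9                  # assumed clock: ~3 billion cycles per second
SECONDS_PER_DAY = 86400

# a 3 ms in-datacenter network call, in cycles, then in "human scale" days
network_cycles = 0.003 * GHZ
print(network_cycles / SECONDS_PER_DAY, "days")   # ~104 days, i.e. ~3 months

# a CPU-heavy request costing 100,000 cycles
print(100_000 / SECONDS_PER_DAY, "days")          # ~1.16 days, "just over 1 day"
```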
What this ultimately means is that, even if Python is slow, it doesn't matter. The speed of the language (or CPU time) is almost never the issue. Google actually did a study on this very concept, and [they wrote a paper on it][2]. The paper talks about designing a high throughput system. In the conclusion, they say:
> It may seem paradoxical to use an interpreted language in a high-throughput environment, but we have found that the CPU time is rarely the limiting factor; the expressibility of the language means that most programs are small and spend most of their time in I/O and native run-time code. Moreover, the flexibility of an interpreted implementation has been helpful, both in ease of experimentation at the linguistic level and in allowing us to explore ways to distribute the calculation across many machines.
or, to emphasise:
> the CPU time is rarely the limiting factor
#### What if CPU time is an issue?
You might be saying, “That's great and all, but we have had issues where CPU was our bottleneck and caused much slowdown for our web app”, or “Language _x_ requires much less hardware to run than language _y_ on the server.” This all might be true. The wonderful thing about web servers is that you can load balance them almost infinitely. In other words, throw more hardware at it. Sure, Python might require better hardware than other languages, such as C. Just throw hardware at your CPU problem. Hardware is very cheap compared to your time. If you save a couple weeks' worth of time in productivity in a year, that will more than pay for the added hardware cost.
* * *
![](https://cdn-images-1.medium.com/max/1000/0*mJFOcWsdEQq98gkF.jpg)
### So, is Python faster?
This whole time I have been talking about how the most important thing is development time. So the question remains: Is Python faster than language X when it comes to development time? Anecdotally, I, [google][3], [and][4] [several][5] [others][6], can tell you how much more [productive][7] Python is. It abstracts so many things for you, helping you focus on what you're really trying to code, without getting stuck in the weeds of the small things such as whether you should use a vector or an array. But you might not like to take others' word for it, so let's look at some more empirical data.
For the most part, this debate on whether Python is more productive or not really comes down to scripting (or dynamic languages) vs statically typed languages. I think it is commonly accepted that statically typed languages are less productive, but [here is a good paper][8] that explains why. In terms of Python specifically, [here is a good summary][9] from a study that looked at how long it took to write code for string processing in various languages.
![](https://cdn-images-1.medium.com/max/800/1*cw7Oq54ZflGZhlFglDka4Q.png)
How long it takes to write a string processing application in various languages. (Prechelt and Garret)
Python is more than 2x as productive as Java in the above study. There are some other studies that show the same thing as well. Rosetta Code did a [fairly in-depth study][10] of the difference of programming languages. In the paper, they compare python to other scripting/interpreted languages and say:
> Python tends to be the most concise, even against functional languages (1.2 to 1.6 times shorter on average)
The common trend seems to be that “lines of code” is always less in Python. Lines of code might sound like a terrible metric, but [multiple studies][11], including the two already mentioned show that time spent per line of code is about the same in every language. Therefore, limiting the number of lines of code, increases productivity. Even the codinghorror himself (a C# programmer) [wrote an article on how Python is more productive][12].
I think it is fair to say that Python is more productive than many other languages. This is mainly due to the fact that Python comes with “batteries included” and has many 3rd party libraries. [Here is a simple article talking about the differences between Python and X.][13] If you don't know why Python is so “small” and productive, I invite you to take this opportunity to learn some Python and see for yourself. Here is your first program: `import __hello__`
* * *
### But what if speed really does matter?
![](https://cdn-images-1.medium.com/max/600/0*bg31_URKZ7xzWy5I.jpg)
The tone of the points above might make it sound like optimization and speed don't matter at all. But the truth is, there are many times when runtime performance really does matter. One example is, you have a web application, and there is a specific endpoint that is taking a really long time to respond. You know how fast it needs to be, and how much it needs to be improved.
In our example, a couple things happened:
1. We noticed a single endpoint that was performing slowly
2. We recognize it as slow because we have a metric of what is considered _fast enough_, and it's failing that metric.
We don't have to micro-optimize everything in an application. Everything only needs to be “fast enough”. Your users might notice if an endpoint takes a couple seconds to respond, but they won't notice you improved the response time of a 35 ms call to 25 ms. “Good enough” really is all you need to achieve. _Disclaimer: I should probably state that there are some applications, such as real-time bidding applications, that do need micro optimizations, and every millisecond does matter. But that is the exception, not the rule._
In order to figure out how to optimize the endpoint, your first step would be to profile the code and try to figure out where your bottleneck is. After all:
> Any improvements made anywhere besides the bottleneck are an illusion. – Gene Kim
If your optimizations aren't touching the bottleneck, you're wasting your time and not fixing the real issue. You won't get any serious improvements until you optimize the bottleneck. If you try to optimize before you know what the bottleneck is, you'll just end up playing whack-a-mole with parts of your code. Optimizing code before you measure and determine where the bottleneck is, is known as “premature optimization”. Donald Knuth is often attributed for the following quote, but he claims he stole the quote from someone else:
> _Premature optimization is the root of all evil._
In talking about maintaining code bases, the more full quote from Donald Knuth is:
> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
In other words, he is saying that most of the time, you need to forget about optimizing your code. It's almost always good enough. In the cases when it isn't good enough, we typically only need to touch three percent of the code path. You don't win any prizes for making your endpoint a couple nanoseconds faster because, for example, you used an if statement instead of a function. Optimize only after you measure.
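As a sketch of the “measure first” advice, Python's built-in cProfile and pstats modules can locate the bottleneck before any optimization is attempted; the slow_endpoint function below is a made-up stand-in:

```
import cProfile
import pstats

def slow_endpoint():
    # made-up stand-in for a slow request handler
    return sum(i * i for i in range(1_000_000))

cProfile.run("slow_endpoint()", "endpoint.prof")  # measure first...
stats = pstats.Stats("endpoint.prof")
stats.sort_stats("cumulative").print_stats(10)    # ...then read the top offenders
```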
Premature optimization includes calling certain _faster_ methods, or even using a specific data structure because it's generally faster. Computer Science argues that if a method or algorithm has the same asymptotic growth (or Big-O) as another, then they are equivalent, even if one is 2x as slow in practice. Computers are so fast that the computational growth of an algorithm as data/usage increases matters much more than the actual speed itself. In other words, if you have two _O(log n)_ functions, but one is twice as slow, it doesn't really matter. As the size of data increases, they both “slow down” at the same rate. This is why premature optimization is the root of all evil; it wastes our time, and almost never actually helps our general performance anyway.
In terms of Big-O, you could argue that all languages are _O(n)_ for your program, where n is lines of code, or instructions. They all grow at the same rate for the same instructions. It doesn't matter how slow a language/runtime is; in terms of asymptotic growth, all languages are created equal. Under this logic, you could say that choosing a language for your application simply because it's “fast” is the ultimate form of premature optimization. You're choosing something supposedly fast without measuring, without understanding where the bottleneck is going to be.
> Choosing a language for your application simply because it's “fast” is the ultimate form of premature optimization.
* * *
![](https://cdn-images-1.medium.com/max/1000/0*6WaZOtaXLIo1Vy5H.png)
### Optimizing Python
One of my favorite things about Python is that it lets you optimize code a little bit at a time. Let's say you have a method in Python that you find to be your bottleneck. You have optimized it several times, possibly following some guidance from [here][14] and [there][15], and now you are at the point where you are pretty sure Python itself is the bottleneck. Python has the ability to call into C code, which means that you can rewrite this one method in C to reduce the performance issue. You can do this one method at a time. This process allows you to write well-optimized bottleneck methods in any language that compiles to C-compatible assembler. This allows you to stay in Python most of the time, and only go into the lower level things when you really need it.
There is a language called Cython that is a super-set of Python. It is almost a merge of Python and C, and is a progressively typed language. Any Python code is valid Cython code, and Cython compiles to C code. With Cython, you can write a module or method, and slowly progress to more and more C types and performance. You can intermingle C types and Python's duck types together. Using Cython you get the perfect mix of optimizing only at the bottleneck and the beauty of Python everywhere else.
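As a rough illustration of moving a hot spot into C – here via a C-implemented CPython builtin rather than Cython itself – a timeit comparison with a made-up workload:

```
import timeit

loop = """
total = 0
for i in range(10_000):
    total += i
"""
builtin = "sum(range(10_000))"

# the for loop runs in the interpreter; sum() is implemented in C
print("interpreted loop:", timeit.timeit(loop, number=1_000))
print("C builtin sum(): ", timeit.timeit(builtin, number=1_000))
```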
![](https://cdn-images-1.medium.com/max/600/0*LStEb38q3d2sOffq.jpg)
A screenshot of Eve online: A space MMO written in Python
When you do eventually run into a Python wall of performance woes, you don't need to move your whole code base to a different language. You can almost always get the performance you need by just re-writing a couple methods in Cython. This is the strategy [Eve Online][16] takes. Eve is a Massive Multiplayer Computer Game, that uses Python and Cython for the entire stack. They achieve game level performance by optimizing the bottlenecks in C/Cython. If it works for them, it should work for most anyone. Alternatively, there are also other ways to optimize your python. For example, [PyPy][17] is a JIT implementation of Python that could give you significant runtime improvements for long running applications (such as a web server) simply by swapping out CPython (the default implementation) with PyPy.
![](https://cdn-images-1.medium.com/max/1000/0*mPc5j1btWBFz6YK7.jpg)
Lets review some of the main points:
* Optimize for your most expensive resource. That's YOU, not the computer.
* Choose a language/framework/architecture that helps you develop quickly (such as Python). Do not choose technologies simply because they are fast.
* When you do have performance issues: find your bottleneck
* Your bottleneck is most likely not CPU or Python itself.
* If Python is your bottleneck (you've already optimized algorithms/etc.), then move the hot-spot to Cython/C
* Go back to enjoying getting things done quickly
* * *
I hope you enjoyed reading this article as much as I enjoyed writing it. If you'd like to say thank you, just hit the heart button. Also, if you'd like to talk to me about Python sometime, you can hit me up on twitter (@nhumrich) or I can be found on the [Python slack channel][18].
--------------------------------------------------------------------------------
作者简介:
I drink the KoolAid of Continuous Delivery and write tools to help us all get there. Python hacker, Technology fanatic. Currently a devops engineer @canopytax
--------------
via: https://hackernoon.com/yes-python-is-slow-and-i-dont-care-13763980b5a1
作者:[Nick Humrich ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://hackernoon.com/@nhumrich
[1]:https://blog.codinghorror.com/the-infinite-space-between-words/
[2]:https://static.googleusercontent.com/media/research.google.com/en//archive/sawzall-sciprog.pdf
[3]:https://www.codefellows.org/blog/5-reasons-why-python-is-powerful-enough-for-google/
[4]:https://www.lynda.com/Python-tutorials/Python-Programming-Efficiently/534425-2.html
[5]:https://www.linuxjournal.com/article/3882
[6]:https://www.codeschool.com/blog/2016/01/27/why-python/
[7]:http://pythoncard.sourceforge.net/what_is_python.html
[8]:http://www.tcl.tk/doc/scripting.html
[9]:http://www.connellybarnes.com/documents/language_productivity.pdf
[10]:https://arxiv.org/pdf/1409.0252.pdf
[11]:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.1831&rep=rep1&type=pdf
[12]:https://blog.codinghorror.com/are-all-programming-languages-the-same/
[13]:https://www.python.org/doc/essays/comparisons/
[14]:https://wiki.python.org/moin/PythonSpeed
[15]:https://wiki.python.org/moin/PythonSpeed/PerformanceTips
[16]:https://www.eveonline.com/
[17]:http://pypy.org/
[18]:http://pythondevelopers.herokuapp.com/

View File

@ -0,0 +1,213 @@
(翻译中 by runningwater)
FreeFileSync – Compare and Synchronize Files in Ubuntu
============================================================
FreeFileSync is a free, open source and cross platform folder comparison and synchronization software, which helps you [synchronize files and folders on Linux][2], Windows and Mac OS.
It is portable and can also be installed locally on a system; it's feature-rich and is intended to save time in setting up and executing backup operations, while offering an attractive graphical interface as well.
#### FreeFileSync Features
Below are its key features:
1. It can synchronize network shares and local disks.
2. It can synchronize MTP devices (Android, iPhone, tablet, digital camera).
3. It can also synchronize via [SFTP (SSH File Transfer Protocol)][1].
4. It can identify moved and renamed files and folders.
5. Displays disk space usage with directory trees.
6. Supports copying locked files (Volume Shadow Copy Service).
7. Identifies conflicts and propagate deletions.
8. Supports comparison of files by content.
9. It can be configured to handle Symbolic Links.
10. Supports automation of sync as a batch job.
11. Enables processing of multiple folder pairs.
12. Supports in-depth and detailed error reporting.
13. Supports copying of NTFS extended attributes such as (compressed, encrypted, sparse).
14. Also supports copying of NTFS security permissions and NTFS Alternate Data Streams.
15. Support long file paths with more than 260 characters.
16. Supports fail-safe file copy to prevent data corruption.
17. Allows expanding of environment variables such as %UserProfile%.
18. Supports accessing of variable drive letters by volume name (USB sticks).
19. Supports managing of versions of deleted/updated files.
20. Prevents disk space issues via optimal sync sequence.
21. Supports full Unicode.
22. Offers a highly optimized run time performance.
23. Supports filters to include and exclude files plus lots more.
### How To Install FreeFileSync in Ubuntu Linux
We will add official FreeFileSync PPA, which is available for Ubuntu 14.04 and Ubuntu 15.10 only, then update the system repository list and install it like so:
```
-------------- On Ubuntu 14.04 and 15.10 --------------
$ sudo apt-add-repository ppa:freefilesync/ffs
$ sudo apt-get update
$ sudo apt-get install freefilesync
```
On Ubuntu 16.04 and newer versions, go to the [FreeFileSync download page][3] and get the appropriate package file for Ubuntu and Debian Linux.
Next, move into the Downloads folder and extract the FreeFileSync_*.tar.gz into the /opt directory as follows:
```
$ cd Downloads/
$ sudo tar xvf FreeFileSync_*.tar.gz -C /opt/
$ cd /opt/
$ ls
$ sudo unzip FreeFileSync/Resources.zip -d /opt/FreeFileSync/Resources/
```
Now we will create an application launcher (.desktop file) using Gnome Panel. To view examples of `.desktop` files on your system, list the contents of the directory /usr/share/applications:
```
$ ls /usr/share/applications
```
In case you do not have Gnome Panel installed, type the command below to install it:
```
$ sudo apt-get install --no-install-recommends gnome-panel
```
Next, run the command below to create the application launcher:
```
$ sudo gnome-desktop-item-edit /usr/share/applications/ --create-new
```
And define the values below:
```
Type: Application
Name: FreeFileSync
Command: /opt/FreeFileSync/FreeFileSync
Comment: Folder Comparison and Synchronization
```
To add an icon for the launcher, simply click on the spring icon and select it: /opt/FreeFileSync/Resources/FreeFileSync.png.
When you have set all the above, click OK to create it.
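Alternatively, if gnome-panel is not available, you can write the launcher file yourself. A minimal sketch (the file name freefilesync.desktop is my own choice; note that the Command field of the dialog corresponds to the Exec= key of the desktop-entry format):

```
launcher = """[Desktop Entry]
Type=Application
Name=FreeFileSync
Exec=/opt/FreeFileSync/FreeFileSync
Comment=Folder Comparison and Synchronization
Icon=/opt/FreeFileSync/Resources/FreeFileSync.png
"""

# /usr/share/applications is root-owned, so run this with sudo
with open("/usr/share/applications/freefilesync.desktop", "w") as f:
    f.write(launcher)
```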
[
![Create Desktop Launcher](http://www.tecmint.com/wp-content/uploads/2017/03/Create-Desktop-Launcher.png)
][4]
Create Desktop Launcher
If you don't want to create a desktop launcher, you can start FreeFileSync from the directory itself.
```
$ ./FreeFileSync
```
### How to Use FreeFileSync in Ubuntu
In Ubuntu, search for FreeFileSync in the Unity Dash, whereas in Linux Mint, search for it in the System Menu, and click on the FreeFileSync icon to open it.
[
![FreeFileSync ](http://www.tecmint.com/wp-content/uploads/2017/03/FreeFileSync-launched.png)
][5]
FreeFileSync
#### Compare Two Folders Using FreeFileSync
In the example below, we'll use:
```
Source Folder: /home/aaronkilik/bin
Destination Folder: /media/aaronkilik/J_CPRA_X86F/scripts
```
To compare the file time and size of the two folders (default setting), simply click on the Compare button.
[
![Compare Two Folders in Linux](http://www.tecmint.com/wp-content/uploads/2017/03/compare-two-folders.png)
][6]
Compare Two Folders in Linux
Press `F6` to change what is compared by default in the two folders – file time and size, content, or file size – from the interface below. Note that the meaning of each option you select is explained there as well.
[
![File Comparison Settings](http://www.tecmint.com/wp-content/uploads/2017/03/comparison-settings.png)
][7]
File Comparison Settings
#### Synchronize Two Folders Using FreeFileSync
You can start by comparing the two folders, and then click on the Synchronize button to start the synchronization process; click Start in the dialog box that appears thereafter:
```
Source Folder: /home/aaronkilik/Desktop/tecmint-files
Destination Folder: /media/aaronkilik/Data/Tecmint
```
[
![Compare and Synchronize Two Folders](http://www.tecmint.com/wp-content/uploads/2017/03/compare-and-sychronize-two-folders.png)
][8]
Compare and Synchronize Two Folders
[
![Start File Synchronization](http://www.tecmint.com/wp-content/uploads/2017/03/start-sychronization.png)
][9]
Start File Synchronization
[
![File Synchronization Completed](http://www.tecmint.com/wp-content/uploads/2017/03/synchronization-complete.png)
][10]
File Synchronization Completed
To set the default synchronization option (two way, mirror, update or custom) from the following interface, press `F8`. The meaning of each option is explained there.
[
![File Synchronization Settings](http://www.tecmint.com/wp-content/uploads/2017/03/synchronization-setttings.png)
][11]
File Synchronization Settings
For more information, visit FreeFileSync homepage at [http://www.freefilesync.org/][12]
That's all! In this article, we showed you how to install FreeFileSync in Ubuntu and its derivatives such as Linux Mint, Kubuntu and many more. Drop your comments via the feedback section below.
--------------------------------------------------------------------------------
作者简介:
I am Ravi Saive, creator of TecMint. A Computer Geek and Linux Guru who loves to share tricks and tips on Internet. Most Of My Servers runs on Open Source Platform called Linux. Follow Me: [Twitter][00], [Facebook][01] and [Google+][02]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/freefilesync-compare-synchronize-files-in-ubuntu/
作者:[Ravi Saive ][a]
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/admin/
[00]:https://twitter.com/ravisaive
[01]:https://www.facebook.com/ravi.saive
[02]:https://plus.google.com/u/0/+RaviSaive
[1]:http://www.tecmint.com/sftp-command-examples/
[2]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
[3]:http://www.freefilesync.org/download.php
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Create-Desktop-Launcher.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/03/FreeFileSync-launched.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/compare-two-folders.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/comparison-settings.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/compare-and-sychronize-two-folders.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/start-sychronization.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/synchronization-complete.png
[11]:http://www.tecmint.com/wp-content/uploads/2017/03/synchronization-setttings.png
[12]:http://www.freefilesync.org/
[13]:http://www.tecmint.com/author/admin/
[14]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[15]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -0,0 +1,168 @@
ictlyh Translating
Cpustat – Monitors CPU Utilization by Running Processes in Linux
============================================================
Cpustat is a powerful system performance measurement program for Linux, written in the [Go programming language][3]. It attempts to reveal CPU utilization and saturation in an effective way, using the Utilization, Saturation and Errors (USE) method (a methodology for analyzing the performance of any system).
It extracts higher frequency samples of every process being executed on the system and then summarizes these samples at a lower frequency. For instance, it can measure every process every 200ms and summarize these samples every 5 seconds, including min/average/max values for certain metrics.
**Suggested Read:** [20 Command Line Tools to Monitor Linux Performance][4]
Cpustat outputs data in two possible ways: a pure text list of the summary interval and a colorful scrolling dashboard of each sample.
### How to Install Cpustat in Linux
You must have Go (GoLang) installed on your Linux system in order to use cpustat; if you do not have it installed, click on the link below to follow the GoLang installation steps:
1. [Install GoLang (Go Programming Language) in Linux][1]
Once you have installed Go, type the go get command below to install cpustat; this command will install the cpustat binary in the directory pointed to by your GOBIN variable:
```
# go get github.com/uber-common/cpustat
```
### How to Use Cpustat in Linux
When the installation process completes, run cpustat as follows with root privileges, using the sudo command if you are controlling the system as a non-root user; otherwise you'll get the error shown below:
```
$ $GOBIN/cpustat
This program uses the netlink taskstats interface, so it must be run as root.
```
Note: To run cpustat, as well as all other Go programs you have installed on your system, like any other command, include the GOBIN directory in your PATH environment variable. Open the link below to learn how to set the PATH variable in Linux.
1. [Learn How to Set Your $PATH Variables Permanently in Linux][2]
This is how cpustat works; the `/proc` directory is queried to get the current [list of process IDs][5] for every interval, and:
* for each PID, read /proc/pid/stat, then compute difference from previous sample.
* in case its a new PID, read /proc/pid/cmdline.
* for each PID, send a netlink message to fetch the taskstats, compute difference from previous sample.
* fetch /proc/stat to get the overall system stats.
Again, each sleep interval is adjusted to account for the amount of time consumed fetching all of these stats. Furthermore, each sample also records the time it took to scale each measurement by the actual elapsed time between samples. This attempts to account for delays in cpustat itself.
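To make the sampling idea concrete, here is a minimal, illustrative Python sketch (this is not cpustat's actual code) that reads /proc/&lt;pid&gt;/stat twice and diffs the CPU tick counters:

```
import time

def cpu_ticks(pid):
    # fields 14 and 15 of /proc/<pid>/stat are utime and stime;
    # naive split, so a comm name containing spaces would shift the fields
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().split()
    return int(fields[13]) + int(fields[14])

pid = 1                      # PID 1 is a convenient always-present target
before = cpu_ticks(pid)
time.sleep(0.2)              # cpustat's default 200 ms sample interval
print("ticks used:", cpu_ticks(pid) - before)
```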
When run without any arguments, cpustat will display the following by default: sampling interval: 200ms, summary interval: 2s (10 samples), [showing top 10 procs][6], user filter: all, pid filter: all as shown in the screenshot below:
```
$ sudo $GOBIN/cpustat
```
[
![Cpustat - Monitor Linux CPU Usage](http://www.tecmint.com/wp-content/uploads/2017/03/Cpustat-Monitor-Linux-CPU-Usage.png)
][7]
Cpustat Monitor Linux CPU Usage
From the output above, the following are the meanings of the system-wide summary metrics displayed before the fields:
* usr – min/avg/max user mode run time as a percentage of a CPU.
* sys – min/avg/max system mode run time as a percentage of a CPU.
* nice – min/avg/max user mode low priority run time as a percentage of a CPU.
* idle – min/avg/max idle time as a percentage of a CPU.
* iowait – min/avg/max delay time waiting for disk IO.
* prun – min/avg/max count of processes in a runnable state (same as load average).
* pblock – min/avg/max count of processes blocked on disk IO.
* pstart – number of processes/threads started in this summary interval.
Still from the output above, for a given process, the different columns mean:
* name – common process name from /proc/pid/stat or /proc/pid/cmdline.
* pid – process id, also referred to as “tgid”.
* min – lowest sample of user+system time for the pid, measured from /proc/pid/stat. Scale is a percentage of a CPU.
* max – highest sample of user+system time for this pid, also measured from /proc/pid/stat.
* usr – average user time for the pid over the summary period, measured from /proc/pid/stat.
* sys – average system time for the pid over the summary period, measured from /proc/pid/stat.
* nice – indicates the current “nice” value for the process, measured from /proc/pid/stat. Higher means “nicer”.
* runq – time the process and all of its threads spent runnable but waiting to run, measured from taskstats via netlink. Scale is a percentage of a CPU.
* iow – time the process and all of its threads spent blocked by disk IO, measured from taskstats via netlink. Scale is a percentage of a CPU, averaged over the summary interval.
* swap – time the process and all of its threads spent waiting to be swapped in, measured from taskstats via netlink. Scale is a percentage of a CPU, averaged over the summary interval.
* vcx and icx – total number of voluntary and involuntary context switches by the process and all of its threads over the summary interval, measured from taskstats via netlink.
* rss – current RSS value fetched from /proc/pid/stat. It is the amount of memory this process is using.
* ctime – sum of user+sys CPU time consumed by waited-for children that exited during this summary interval, measured from /proc/pid/stat.
Note that long running child processes can often confuse this measurement, because the time is reported only when the child process exits. However, this is useful for measuring the impact of frequent cron jobs and health checks where the CPU time is often consumed by many child processes.
* thrd – number of threads at the end of the summary interval, measured from /proc/pid/stat.
* sam – number of samples for this process included in the summary interval. Processes that have recently started or exited may have been visible for fewer samples than the summary interval.
The following command displays the top 10 root user processes running on the system:
```
$ sudo $GOBIN/cpustat -u root
```
[
![Find Root User Running Processes](http://www.tecmint.com/wp-content/uploads/2017/03/show-root-user-processes.png)
][8]
Find Root User Running Processes
To display output in a fancy terminal mode, use the `-t` flag as follows:
```
$ sudo $GOBIN/cpustat -u root -t
```
[
![Running Process Usage of Root User](http://www.tecmint.com/wp-content/uploads/2017/03/Root-User-Runnng-Processes.png)
][9]
Running Process Usage of Root User
To view the [top x number of processes][10] (the default is 10), you can use the `-n` flag, the following command shows the [top 20 Linux processes running][11] on the system:
```
$ sudo $GOBIN/cpustat -n 20
```
You can also write CPU profile to a file using the `-cpuprofile` option as follows and then use the [cat command][12] to view the file:
```
$ sudo $GOBIN/cpustat -cpuprofile cpuprof.txt
$ cat cpuprof.txt
```
To display help info, use the `-h` flag as follows:
```
$ sudo $GOBIN/cpustat -h
```
Find additional info from the cpustat Github Repository: [https://github.com/uber-common/cpustat][13]
That's all! In this article, we showed you how to install and use cpustat, a useful system performance measurement tool for Linux. Share your thoughts with us via the comment section below.
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/cpustat-monitors-cpu-utilization-by-processes-in-linux/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-go-in-linux/
[2]:http://www.tecmint.com/set-path-variable-linux-permanently/
[3]:http://www.tecmint.com/install-go-in-linux/
[4]:http://www.tecmint.com/command-line-tools-to-monitor-linux-performance/
[5]:http://www.tecmint.com/find-process-name-pid-number-linux/
[6]:http://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Cpustat-Monitor-Linux-CPU-Usage.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/show-root-user-processes.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Root-User-Runnng-Processes.png
[10]:http://www.tecmint.com/find-processes-by-memory-usage-top-batch-mode/
[11]:http://www.tecmint.com/install-htop-linux-process-monitoring-for-rhel-centos-fedora/
[12]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
[13]:https://github.com/uber-common/cpustat
[14]:http://www.tecmint.com/author/aaronkili/
[15]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[16]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -0,0 +1,234 @@
响应式编程 vs. 响应式系统
============================================================
>在恒久的迷惑与过多期待的海洋中,登上一组简单响应式设计原则的小岛。
>
![Micro Fireworks](https://d3tdunqjn7n0wj.cloudfront.net/360x240/micro_fireworks-db2d0a45f22f348719b393dd98ebefa2.jpg)
下载 Konrad Malawski 的免费电子书[《为什么选择响应式?企业应用中的基本原则》][5],深入了解更多响应式技术的知识与好处。
自从2013年一起合作写了[《响应式宣言》][23]之后,我们看着响应式从一种几乎无人知晓的软件构建技术——当时只有少数几个公司的边缘项目使用了这一技术——最后成为中间件领域(middleware field)大佬们全平台战略中的一部分。本文旨在定义和澄清响应式各个方面的概念方法是比较在_响应式编程_风格下以及把_响应式系统_视作一个紧密整体的设计方法下写代码的不同。
### 响应式是一组设计原则
响应式技术目前成功的标志之一是“响应式”成为了一个热词,并且跟一些不同的事物与人联系在了一起——常常伴随着像“流(streaming)”,“轻量级(lightweight)”和“实时(real-time)”这样的词。
举个例子:当我们看到一支运动队时(像棒球队或者篮球队),我们一般会把他们看成一个个单独个体的组合,但是当他们之间碰撞不出火花,无法像一个团队一样高效地协作时,他们就会输给一个“更差劲”的队伍。从这篇文章的角度来看,响应式是一组设计原则,一种关于系统架构与设计的思考方式,一种关于在一个分布式环境下,当实现技术(implementation techniques),工具和设计模式都只是一个更大系统的一部分时如何设计的思考方式。
这个例子展示了不经考虑地将一堆软件拼凑在一起——尽管单独来看这些软件都很优秀——和响应式系统之间的不同。在一个响应式系统中正是_不同组件(parts)间的相互作用_让响应式系统如此不同它使得不同组件能够独立地运作同时又一致协作从而达到最终想要的结果。
_一个响应式系统_ 是一种架构风格(architectural style),它允许许多独立的应用结合在一起成为一个单元,共同响应它们所处的环境,同时保留着对单元内其它应用的“感知”——这能够表现为它能够做到放大/缩小规模(scale up/down),负载平衡,甚至能够主动地执行这些步骤。
以响应式的风格(或者说通过响应式编程)写一个软件是可能的;然而,那也不过是拼图中的一块罢了。虽然上面提到的各个方面似乎都足以称其为“响应式的”但仅就它们自身而言还不足以让一个_系统_成为响应式的。
当人们在软件开发与设计的语境下谈论“响应式”时,他们的意思通常是以下三者之一:
* 响应式系统(架构与设计)
* 响应式编程(基于声明的事件的)
* 函数响应式编程FRP
我们将考察这些做法与技术的含义,特别是前两个。更明确地说,我们会在使用它们的时候讨论它们,例如它们是怎么联系在一起的,从它们身上又能得到什么样的好处——特别是在为多核、云或移动架构搭建系统的情境下。
让我们先来说一说函数响应式编程吧,以及我们在本文后面不再讨论它的原因。
### 函数响应式编程FRP
_函数响应式编程_通常被称作_FRP_是最常被误解的。FRP在二十年前就被Conal Elliott[精确地定义][24]了。但是最近这个术语却被错误地用来描述一些像Elm、Bacon.js这样的技术以及其它技术中的响应式插件RxJava、Rx.NET、RxJS。许多库(libraries)声称它们支持FRP但它们谈论的其实几乎都是_响应式编程_而非FRP因此我们不会再进一步讨论它们。
### 响应式编程
_响应式编程_不要把它跟_函数响应式编程_混淆了它是异步编程下的一个子集也是一种范式在这种范式下由新信息的有效性(availability)推动逻辑的前进,而不是让一条执行线程(a thread-of-execution)去推动控制流(control flow)。
它能够把问题分解为多个独立的步骤,这些独立的步骤可以以异步且非阻塞(non-blocking)的方式被执行,最后再组合在一起产生一条工作流(workflow)——它的输入和输出可能是非绑定的(unbounded)。
[“异步地(Asynchronous)”][25]被牛津词典定义为“不在同一时刻存在或发生”,在我们的语境下,它意味着一条消息或者一个事件可发生在任何时刻,有可能是在未来。这在响应式编程中是非常重要的一项技术,因为响应式编程允许非阻塞式(non-blocking)的执行方式——执行线程在竞争一块共享资源时不会因为阻塞(blocking)而陷入等待(防止执行线程在当前的工作完成之前执行任何其它操作),而是在共享资源被占用的期间转而去做其它工作。阿姆达尔定律(Amdahl's Law)[2][9]告诉我们,竞争是可伸缩性(scalability)最大的敌人,所以一个响应式系统应当在极少数的情况下才不得不做阻塞工作。
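作为示意(并非原文内容),下面用 Python 的 asyncio 粗略演示非阻塞执行:等待时让出执行权,使两个任务的等待相互重叠:

```
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)     # 非阻塞等待,让出执行权
    return name

async def main():
    # 两个任务并发推进,总耗时约 2 秒而非 3 秒
    print(await asyncio.gather(fetch("a", 1), fetch("b", 2)))

asyncio.run(main())
```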
响应式编程一般是_事件驱动(event-driven)_ 相比之下响应式系统则是_消息驱动(message-driven)_ 的——事件驱动与消息驱动之间的差别会在文章后面阐明。
响应式编程库的应用程序接口API一般是以下二者之一
* 基于回调的Callback-based)——匿名的间接作用(side-effecting)回调函数被绑定在事件源(event sources)上,当事件被放入数据流(dataflow chain)中时,回调函数被调用。
* 声明式的Declarative)——通过函数的组合,通常是使用一些固定的函数,像 _map_, _filter_, _fold_ 等等。
大部分的库会混合这两种风格,一般还带有像窗口(windowing)、计数(counts)、触发(triggers)这样的基于流(stream-based)的操作符(operators)。
说响应式编程跟[数据流编程(dataflow programming)][27]有关是很合理的因为它强调的是_数据流_而不是_控制流_。
举几个为这种编程技术提供支持的编程抽象概念:
* [Futures/Promises][10]——一个值的容器,具有读共享/写独占many-read/single-write)的语义,即使变量尚不可用也能够添加异步的值转换操作(示例见本列表之后)。
* 流(streams)——[响应式流][11]——无限制的数据处理流,支持异步,非阻塞式,支持多个源与目的的反压转换管道(back-pressured transformation pipelines)。
* [数据流变量][12]——依赖于输入,过程(procedures)或者其它单元的单赋值变量(存储单元)(single assignment variables),它能够自动更新值的改变。其中一个应用例子是表格软件——一个单元的值的改变会像涟漪一样荡开,影响到所有依赖于它的函数,顺流而下地使它们产生新的值。
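作为示意(并非原文内容),下面用 Python 标准库中的 Future 粗略演示上面第一条抽象:在值尚不可用时就注册好异步的值转换:

```
from concurrent.futures import ThreadPoolExecutor

def compute():
    return 21

with ThreadPoolExecutor() as pool:
    fut = pool.submit(compute)                      # 此刻值尚不可用
    fut.add_done_callback(
        lambda f: print("结果:", f.result() * 2))   # 预先注册的异步转换
```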
在JVM中支持响应式编程的流行库有Akka Streams、Ratpack、Reactor、RxJava和Vert.x等等。这些库实现了响应式流(Reactive Streams)规范——JVM上响应式编程库之间的互通标准(standard for interoperability)——并且根据它自身的叙述,它是“……一个为如何处理非阻塞式反压异步流提供标准的倡议”。
响应式编程的基本好处是提高多核和多CPU硬件的计算资源利用率根据阿姆达尔定律以及引申的Günther的通用可伸缩性定律[3][13](Günthers Universal Scalability Law),通过减少序列化点(serialization points)来提高性能。
另一个好处是开发者生产效率传统的编程范式都尽力想提供一个简单直接的可持续的方法来处理异步非阻塞式计算和I/O。在响应式编程中因活动(active)组件之间通常不需要明确的协作,从而也就解决了其中大部分的挑战。
响应式编程真正的发光点在于组件的创建跟工作流的组合。为了在异步执行上取得最大的优势,把[反压(back-pressure)][28]加进来是很重要的,这样能避免过度利用,或者确切地说,避免无限度地消耗资源。
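作为示意(并非原文内容),下面用一个有界队列粗略勾勒反压:缓冲一旦填满,生产者就会被挂起,消费速度决定生产速度:

```
import asyncio

async def producer(q):
    for i in range(10):
        await q.put(i)             # 队列满时在此挂起:下游的速度反压了上游
        print("produced", i)

async def consumer(q):
    while True:
        await q.get()
        await asyncio.sleep(0.1)   # 模拟较慢的下游处理
        q.task_done()

async def main():
    q = asyncio.Queue(maxsize=2)   # 有界缓冲是反压的关键
    task = asyncio.create_task(consumer(q))
    await producer(q)
    await q.join()
    task.cancel()

asyncio.run(main())
```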
尽管如此,响应式编程在搭建现代软件上仍然非常有用,为了在更高层次上理解(reason about)一个系统那么必须要使用到另一个工具_响应式架构_——设计响应式系统的方法。此外要记住编程范式有很多而响应式编程仅仅只是其中一个所以如同其它工具一样响应式编程并不是万金油它不意图适用于任何情况。
### 事件驱动 vs. 消息驱动
如上面提到的响应式编程——专注于短时间的数据流链条上的计算——因此倾向于_事件驱动_而响应式系统——关注于通过分布式系统的通信和协作所得到的弹性和韧性——则是[_消息驱动的_][29][4][14](或者称之为 _消息式(messaging)_ 的)。
一个拥有长期存活的可寻址(long-lived addressable)组件的消息驱动系统跟一个事件驱动的数据流驱动模型的不同在于,消息具有固定的导向,而事件则没有。消息会有明确的(一个)去向,而事件则只是一段等着被观察(observe)的信息。另外,消息式(messaging)更适用于异步,因为消息的发送与接收和发送者和接收者是分离的。
响应式宣言中的术语表定义了两者之间[概念上的不同][30]
> 一条消息就是一则被送往一个明确目的地的数据。一个事件则是达到某个给定状态的组件发出的一个信号。在一个消息驱动系统中,可寻址到的接收者等待消息的到来然后响应它,否则保持休眠状态。在一个事件驱动系统中,通知的监听者被绑定到消息源上,这样当消息被发出时它就会被调用。这意味着一个事件驱动系统专注于可寻址的事件源而消息驱动系统专注于可寻址的接收者。
分布式系统需要通过消息在网络上传输进行交流,以实现其沟通基础,与之相反,事件的发出则是本地的。在底层通过发送包裹着事件的消息来搭建跨网络的事件驱动系统的做法很常见。这样能够维持在分布式环境下事件驱动编程模型的相对简易性并且在某些特殊的和合理范围内的使用案例上工作得很好。
然而,这是有利有弊的:在编程模型的抽象性和简易性上得一分,在控制上就减一分。消息强迫我们去拥抱分布式系统的真实性和一致性——像局部错误(partial failures),错误侦测(failure detection),丢弃/复制/重排序 消息dropped/duplicated/reordered messages),最后还有一致性,管理多个并发真实性等等——然后直面它们,去处理它们,而不是像过去无数次一样,藏在一个蹩脚的抽象面罩后——假装网络并不存在(例如EJB, [RPC][31], [CORBA][32], 和 [XA][33])。
这些在语义学和适用性上的不同在应用设计中有着深刻的含义,包括分布式系统的复杂性(complexity)中的 _弹性(resilience)_ _韧性(elasticity)__移动性(mobility)__位置透明性(location transparency)_ 和 _管理(management)_,这些在文章后面再进行介绍。
在一个响应式系统中,特别是使用了响应式编程技术的,这样的系统中就即有事件也有消息——一个是用于沟通的强大工具(消息),而另一个则呈现现实(事件)。
### 响应式系统和架构
_响应式系统_ —— 如同在《响应式宣言》中定义的那样——是一组用于搭建现代系统——已充分准备好满足如今应用程序所面对的不断增长的需求的现代系统——的架构设计原则。
响应式系统的原则绝对不是什么新东西它可以被追溯到70和80年代Jim Gray和Pat Helland在[串级系统(Tandem System)][34]上以及Joe Armstrong和Robert Virding在[Erlang][35]上做出的重大工作。然而这些人在当时都超越了时代只有到了最近5-10年技术行业才不得不反思当前企业系统最好的开发实践活动并且学习如何将来之不易的响应式原则应用到今天这个多核、云计算和物联网的世界中。
响应式系统的基石是_消息传递(message-passing)_ ,消息传递为两个组件之间创建一条暂时的边界,使得他们能够在 _时间_ 上分离——实现并发性——和 _空间(space)_ ——实现分布式(distribution)与移动性(mobility)。这种分离是两个组件完全[隔离(isolation)][36]以及实现 _弹性(resilience)__韧性(elasticity)_ 基础的必需条件。
### 从程序到系统
这个世界的连通性正在变得越来越高。我们构建 _程序_ ——为单个操作子计算某些东西的端到端逻辑——已经不如我们构建 _系统_ 来得多了。
系统从定义上来说是复杂的——每一部分都包含多个组件,每个组件的自身或其子组件也可以是一个系统——这意味着软件要正常工作已经越来越依赖于其它软件。
我们今天构建的系统会在多个计算机上被操作,小型的或大型的,数量少的或数量多的,相近的或远隔半个地球的。同时,由于人们的生活正变得越来越依赖于系统顺畅运行的有效性,用户的期望也变得越得越来越难以满足。
为了实现用户——和企业——能够依赖的系统,这些系统必须是 _灵敏的(responsive)_ ,这样无论是某个东西提供了一个正确的响应,还是当需要一个响应时响应无法使用,都不会有影响。为了达到这一点,我们必须保证在错误( _弹性_ )和欠载( _韧性_ )下,系统仍然能够保持灵敏性。为了实现这一点,我们把系统设计为 _消息驱动的_ ,我们称其为 _响应式系统_
### 响应式系统的弹性
弹性是与 _错误下_ 的灵敏性(responsiveness)有关的,它是系统内在的功能特性,是需要被设计的东西,而不是能够被动的加入系统中的东西。弹性是大于容错性的——弹性无关于故障退化(graceful degradation)——虽然故障退化对于系统来说是很有用的一种特性——与弹性相关的是与从错误中完全恢复达到 _自愈_ 的能力。这就需要组件的隔离以及组件对错误的包容,以免错误散播到其相邻组件中去——否则,通常会导致灾难性的连锁故障。
因此构建一个弹性的,自愈(self-healing)系统的关键是允许错误被:容纳,具体化为消息,发送给其他的(担当监管者的(supervisors))组件,从而在错误组件之外修复出一个安全环境。在这,消息驱动是其促成因素:远离高度耦合的、脆弱的深层嵌套的同步调用链,大家长期要么学会忍受其煎熬或直接忽略。解决的想法是将调用链中的错误管理分离,将客户端从处理服务端错误的责任中解放出来。
### 响应式系统的韧性
[韧性(Elasticity)][37]是关于 _欠载下的灵敏性(responsiveness)_ 的——意味着一个系统的吞吐量在资源增加或减少时能够自动地相应增加或减少(scales up or down)(同样能够向内或外扩展(scales in or out))以满足不同的需求。这是利用云计算承诺的特性所必需的因素:使系统利用资源更加有效,成本效益更佳,对环境友好以及实现按次付费。
系统必须能够在不重写甚至不重新设置的情况下,适应性地——即无需介入自动伸缩——响应状态及行为,沟通负载均衡,故障转移(failover),以及升级。实现这些的就是 _位置透明性(location transparency)_ :使用同一个方法,同样的编程抽象,同样的语义,在所有向度中伸缩(scaling)系统的能力——从CPU核心到数据中心。
如同《响应式宣言》所述:
> 一个极大地简化问题的关键洞见在于意识到我们都在使用分布式计算。无论我们的操作系统是运行在一个单一结点上拥有多个独立的CPU并通过QPI链接进行交流,还是在一个节点集群(cluster of nodes独立的机器通过网络进行交流)上。拥抱这个事实意味着在垂直方向上多核的伸缩与在水平方面上集群的伸缩并无概念上的差异。在空间上的解耦 [...],是通过异步消息传送以及运行时实例与其引用解耦从而实现的,这就是我们所说的位置透明性。
因此,不论接收者在哪里,我们都以同样的方式与它交流。唯一能够在语义上等同实现的方式是消息传送。
### 响应式系统的生产效率
既然大多数的系统生来即是复杂的,那么其中一个最重要的点即是保证一个系统架构在开发和维护组件时,最小程度地减低生产效率,同时将操作的 _偶发复杂性(accidental complexity_ 降到最低。
这一点很重要,因为在一个系统的生命周期中——如果系统的设计不正确——系统的维护会变得越来越困难,理解、定位和解决问题所需要花费时间和精力会不断地上涨。
响应式系统是我们所知的最具 _生产效率_ 的系统架构(在多核、云及移动架构的背景下):
* 错误的隔离为组件与组件之间裹上[舱壁][15](译者注:当船遭到损坏进水时,舱壁能够防止水从损坏的船舱流入其他船舱),防止引发连锁错误,从而限制住错误的波及范围以及严重性。
* 监管者的层级制度提供了多个等级的防护,搭配以自我修复能力,免除了调查(investigate)和处理大量瞬时故障(transient failures)的运维代价(cost)。
* 消息传送和位置透明性允许组件被卸载下线、代替或重新布线(rerouted)同时不影响终端用户的使用体验,并降低中断的代价、它们的相对紧迫性以及诊断和修正所需的资源。
* 复制减少了数据丢失的风险,减轻了数据检索(retrieval)和存储的有效性错误的影响。
* 韧性允许在使用率波动时节省资源:在负载很低时,最小化运维开销;在负载增加时,最小化运行中断(outage)或紧急投入(urgent investment)于可伸缩性的风险。
因此,响应式系统使得所构建的系统能够很好地应对错误和随时间变化的负载——同时还能保持低运营成本。
### 响应式编程与响应式系统的关联
响应式编程是一种管理内部逻辑(internal logic)和数据流转换(dataflow transformation)的好技术,在本地的组件中,做为一种优化代码清晰度、性能以及资源利用率的方法。响应式系统,是一组架构上的原则,旨在强调分布式信息交流并为我们提供一种处理分布式系统弹性与韧性的工具。
只使用响应式编程常遇到的一个问题,是一个事件驱动的基于回调的或声明式的程序中两个计算阶段的高度耦合(tight coupling),使得 _弹性_ 难以实现,因此时它的转换链通常存活时间短,并且它的各个阶段——回调函数或组合子(combinator)——是匿名的,也就是不可寻址的。
这意味着,它通常在内部处理成功与错误的状态而不会向外界发送相应的信号。这种寻址能力的缺失导致单个阶段(stages)很难恢复,因为它通常并不清楚异常应该,甚至不清楚异常可以,发送到何处去。
另一个与响应式系统方法的不同之处在于单纯的响应式编程允许 _时间_ 上的解耦(decoupling),但不允许 _空间_ 上的(除非是如上面所述的,在底层通过网络传送消息来分发(distribute)数据流)。正如叙述的,在时间上的解耦使 _并发性_ 成为可能,但是是空间上的解耦使 _分布(distribution)__移动性(mobility)_ (使得不仅仅静态拓扑可用,还包括了动态拓扑)成为可能的——而这些正是 _韧性_ 所必需的要素。
位置透明性的缺失使得很难以韧性方式对一个基于适应性响应式编程技术的程序进行向外扩展,因为这样就需要额外的工具,例如消息总线(message bus)、数据网格(data grid)或者架设在其上的定制网络协议(bespoke network protocol)。而这正是响应式系统的消息驱动编程的闪光之处,因为它是一个包含了其编程模型和所有伸缩向度语义的交流抽象概念,因此降低了复杂性与认知超载。
对于基于回调的编程,常会被提及的一个问题是写这样的程序或许相对来说会比较简单,但最终会引发一些真正的后果。
例如,对于基于匿名回调的系统,当你想理解它们,维护它们或最重要的是在生产供应中断(production outages)或错误行为发生时,你想知道到底发生了什么、发生在哪以及为什么发生,但此时它们只提供极少的内部信息。
为响应式系统设计的库与平台(例如[Akka][39]项目和[Erlang][40]平台)学到了这一点,它们依赖于那些更容易理解的长期存活的可寻址组件。当错误发生时,根据导致错误的消息可以找到唯一的组件。当可寻址的概念存在组件模型的核心中时,监控方案(monitoring solution)就有了一个 _有意义_ 的方式来呈现它收集的数据——利用传播(propagated)的身份标识。
选择一个好的编程范式——一个把可寻址能力和错误管理作为内建特性来实现的范式——已经被证明在生产中是无价的,因为它在设计中承认了现实并非一帆风顺, _接受并拥抱错误的出现_ ,而不是毫无希望地去尝试避免错误。
总而言之,响应式编程是一个非常有用的实现技术,可以用在响应式架构当中。但是记住这只能帮助管理一部分:异步且非阻塞执行下的数据流管理——通常只在单个结点或服务中。当有多个结点时,就需要开始认真地考虑像数据一致性(data consistency)、跨结点沟通(cross-node communication)、协调(coordination)、版本控制(versioning)、编制(orchestration)、错误管理(failure management)、关注与责任(concerns and responsibilities)的分离等等的东西——也即是:系统架构。
因此,要最大化响应式编程的价值,就把它作为构建响应式系统的工具来使用。构建一个响应式系统需要的不仅是在一个已存在的遗留下来的软件栈(software stack)上抽象掉特定的操作系统资源和少量的异步API和[断路器(circuit breakers)][41]。此时应该拥抱你在创建一个包含多个服务的分布式系统这一事实——这意味着所有东西都要共同合作,提供一致性与灵敏的体验,而不仅仅是如预期工作,但同时还要在发生错误和不可预料的负载下正常工作。
### 总结
企业和中间件供应商在目睹了应用响应式所带来的企业利润增长后,同样开始拥抱响应式。在本文中,我们把响应式系统做为企业最终目标进行描述——假设了多核、云和移动架构的背景——而响应式编程则从中担任重要工具的角色。
响应式编程在内部逻辑及数据流转换的组件层次上为开发者提高了生产率——通过性能与资源的有效利用实现。而响应式系统在构建 _原生云(cloud native)_ 和其它大型分布式系统的系统层次上为架构师及DevOps从业者提高了生产率——通过弹性与韧性。我们建议在响应式系统设计原则中结合响应式编程技术。
1. 参考 FRP 的发明者 Conal Elliott 的[这个演示][16]。[↩][17]
2. [Amdahl 定律][18]揭示了系统理论上的加速会被一系列的子部件限制,这意味着系统在新的资源加入后会出现收益递减(diminishing returns)。[↩][19]
3. Neil Günther 的[通用可伸缩性定律(Universal Scalability Law)][20]是理解并发与分布式系统的竞争与协作的重要工具,它揭示了当新资源加入到系统中时,保持一致性的开销会导致不好的结果。[↩][21]
4. 消息可以是同步的(要求发送者和接受者同时存在),也可以是异步的(允许他们在时间上解耦)。其语义上的区别超出本文的讨论范围。[↩][22]
--------------------------------------------------------------------------------
via: https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems
作者:[Jonas Bonér][a] , [Viktor Klang][b]
译者:[XLCYun](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.oreilly.com/people/e0b57-jonas-boner
[b]:https://www.oreilly.com/people/f96106d4-4ce6-41d9-9d2b-d24590598fcd
[1]:https://www.flickr.com/photos/pixel_addict/2301302732
[2]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[3]:https://www.oreilly.com/people/e0b57-jonas-boner
[4]:https://www.oreilly.com/people/f96106d4-4ce6-41d9-9d2b-d24590598fcd
[5]:http://www.oreilly.com/programming/free/why-reactive.csp?intcmp=il-webops-free-product-na_new_site_reactive_programming_vs_reactive_systems_text_cta
[6]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[7]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[8]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-1
[9]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-2
[10]:https://en.wikipedia.org/wiki/Futures_and_promises
[11]:http://reactive-streams.org/
[12]:https://en.wikipedia.org/wiki/Oz_(programming_language)#Dataflow_variables_and_declarative_concurrency
[13]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-3
[14]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#dfref-footnote-4
[15]:http://skife.org/architecture/fault-tolerance/2009/12/31/bulkheads.html
[16]:https://begriffs.com/posts/2015-07-22-essence-of-frp.html
[17]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-1
[18]:https://en.wikipedia.org/wiki/Amdahl%2527s_law
[19]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-2
[20]:http://www.perfdynamics.com/Manifesto/USLscalability.html
[21]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-3
[22]:https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems#ref-footnote-4
[23]:http://www.reactivemanifesto.org/
[24]:http://conal.net/papers/icfp97/
[25]:http://www.reactivemanifesto.org/glossary#Asynchronous
[26]:http://www.reactivemanifesto.org/glossary#Non-Blocking
[27]:https://en.wikipedia.org/wiki/Dataflow_programming
[28]:http://www.reactivemanifesto.org/glossary#Back-Pressure
[29]:http://www.reactivemanifesto.org/glossary#Message-Driven
[30]:http://www.reactivemanifesto.org/glossary#Message-Driven
[31]:https://christophermeiklejohn.com/pl/2016/04/12/rpc.html
[32]:https://queue.acm.org/detail.cfm?id=1142044
[33]:https://cs.brown.edu/courses/cs227/archives/2012/papers/weaker/cidr07p15.pdf
[34]:http://www.hpl.hp.com/techreports/tandem/TR-86.2.pdf
[35]:http://erlang.org/download/armstrong_thesis_2003.pdf
[36]:http://www.reactivemanifesto.org/glossary#Isolation
[37]:http://www.reactivemanifesto.org/glossary#Elasticity
[38]:http://www.reactivemanifesto.org/glossary#Location-Transparency
[39]:http://akka.io/
[40]:https://www.erlang.org/
[41]:http://martinfowler.com/bliki/CircuitBreaker.html

View File

@ -0,0 +1,74 @@
雇个 `DDoS` 服务干掉你的对手
========================
>随着物联网设备的普及,网络犯罪分子通过出租拒绝服务攻击服务,来占弱密码问题的便宜。
![](http://images.techhive.com/images/article/2016/12/7606416730_e659cea89c_o-100698667-large.jpg)
随着物联网设备的飞速发展,分布式拒绝服务(DDoS)攻击也变得越来越危险了。就如 [DNS 服务商 Dyn 去年秋季的遭遇][3]一样,黑客似乎瞄上了每个人,而利用未受保护的物联网设备来轰炸网络的攻击正迎面而来。
可雇佣的分布式拒绝服务攻击的出现,意味着稍微会点技术的人都能精准报复某个网站。加大攻击规模,甚至能从系统层面搞垮一家公司。
根据 [Neustar][4] 的报告,全球四分之三的品牌、组织和公司都是 DDoS 攻击的受害者。[每天 DDoS 攻击发生的次数][5]不少于 3700 次。
睿科网络公司(A10 Networks)网络运营总监 Chase Cunningham 说:“想要找个可用的物联网设备,你只需要在地下网站找一个 Mirai 扫描器,一旦你得到了这款扫描器,你就能够利用在线的每一台脆弱设备来进行攻击。”
“或者你可以去一些类似 Shodan 的网站,简单地搜一下特定设备的请求。当你得到这些信息之后,你就可以将你雇佣的 DDoS 工具配置为正确的流量模拟器类型、指向正确的目标并发动攻击。”
“几乎所有东西都是可售的。”他补充道,“你可以购买一个 ‘stresser’(压力器),这就是个连只会点按钮的人都能用来发动 DDoS 攻击的僵尸网络。”
>当你得到这些信息之后,你就可以将你雇佣的 DDoS 工具配置为正确的流量模拟器类型、指向正确的目标并发动攻击。
>Chase Cunningham睿科网络公司A10 Networks网络运营总监
网络安全提供商 Imperva 说,用户只需要出几十美元,就可以快速发动攻击。有些公司编写的工具包含了肉鸡负载和 CnC(命令与控制)文件。使用这些工具,那些有点想法的肉鸡牧人(即 herder)就可以开始通过垃圾邮件传播恶意软件、进行漏洞扫描、暴力破解等等,使设备受到感染。
大部分[压力器和引导器(stresser and booter)][6]都采用常见的、基于订阅的 SaaS(软件即服务)业务模式。来自 Incapsula 公司的 [2015 年第二季度 DDoS 报告][7]显示,购买一个月的 DDoS 服务平均每小时花费 38 美元(较低端的为 19.99 美元)。
![雇佣ddos服务](http://images.techhive.com/images/article/2017/03/ddos-hire-100713247-large.jpg)
“`Stresser` 和 `booter` 只是一个新型现实的副产品这些可以扳倒企业和组织的服务只被允许运作在灰色领域”Imperva 写道。
虽然成本各有不同,但是企业受到[一次攻击的损失在 1.4 万美元到 235 万美元之间][8]。而且企业受到一次攻击后,[有 82% 的可能性会再次受到攻击][9]。
物联网洪水攻击(DoT,DDoS of Things)使用物联网设备建立的僵尸网络,可造成非常大规模的 DDoS 攻击。物联网洪水攻击会利用成百上千的物联网设备形成杠杆,来攻击大型服务提供商。
“大部分可信的 DDoS 卖家都会把他们工具的配置设置得很简单,这样你就可以简单地更换配置开始攻击。虽然我还没怎么看到哪些有付费的物联网流量模拟器选项,但我敢肯定快要有了。如果是我来搞这个服务,我是绝对会加入这个选项的。”Cunningham 如是说。
从 IDG 新闻服务的报道我们可知,搭建一个可供出租的 DDoS 攻击服务也很简单。通常黑客会租用 6 到 12 个左右的服务器,然后使用它们随意地攻击任何目标。十月下旬,HackForums.net [关闭][10]了他们的“服务器压力测试”版块,就是考虑到黑客可能通过使用他们每月十美元的服务搭建可雇佣的 DDoS 服务。
同样,在十二月,美国和欧洲的执法机构[逮捕][11]了 34 名参与可雇佣 DDoS 服务的嫌犯。
既然这么简单,为什么这种攻击没有到处泛滥?
Cunningham 说这其实每时每刻都在发生,实际上每天每秒都没完没了。他说:“你不知道的原因是,大部分都是扰乱性的攻击,而不是大规模的、想要搞垮公司的攻击。”
他说,大部分攻击平台只出售那些会让系统宕机一个小时或更长一点的攻击。通常宕机一小时的攻击大概需要 15 到 50 美元的成本。当然这得看平台,有些平台一小时的攻击可能要价上百美元。
减少这些攻击的办法,是让用户把所有联网设备的出厂预设密码改掉,并禁用那些不需要的功能。
--------------------------------------------------------------------------------
via: http://www.csoonline.com/article/3180246/data-protection/hire-a-ddos-service-to-take-down-your-enemies.html
作者:[Ryan Francis][a]
译者:[kenxx](https://github.com/kenxx)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.csoonline.com/author/Ryan-Francis/
[1]:http://csoonline.com/article/3103122/security/how-can-you-detect-a-fake-ransom-letter.html#tk.cso-infsb
[2]:https://www.incapsula.com/ddos/ddos-attacks/denial-of-service.html
[3]:http://csoonline.com/article/3135986/security/ddos-attack-against-overwhelmed-despite-mitigation-efforts.html
[4]:https://ns-cdn.neustar.biz/creative_services/biz/neustar/www/resources/whitepapers/it-security/ddos/2016-apr-ddos-report.pdf
[5]:https://www.a10networks.com/resources/ddos-trends-report
[6]:https://www.incapsula.com/ddos/booters-stressers-ddosers.html
[7]:https://www.incapsula.com/blog/ddos-global-threat-landscape-report-q2-2015.html
[8]:http://www.datacenterknowledge.com/archives/2016/05/13/number-of-costly-dos-related-data-center-outages-rising/
[9]:http://www.networkworld.com/article/3064677/security/hit-by-ddos-you-will-likely-be-struck-again.html
[10]:http://www.pcworld.com/article/3136730/hacking/hacking-forum-cuts-section-allegedly-linked-to-ddos-attacks.html
[11]:http://www.pcworld.com/article/3149543/security/dozens-arrested-in-international-ddos-for-hire-crackdown.html

View File

@ -0,0 +1,63 @@
为什么可以在任何地方工作的开发者要聚集到世界上最昂贵的城市?
============================================================
![](https://tctechcrunch2011.files.wordpress.com/2017/04/img_20170401_1835042.jpg?w=977)
政治家和经济学家都在[哀嚎][10]:某几个阿尔法地区——三番、洛杉矶、纽约、波士顿、多伦多、伦敦、巴黎——在吸引了所有最好的工作的同时,变得昂贵得令人却步,降低了经济流动性,扩大了贫富差异。但是为什么那些最好的工作不能搬到其他地区呢?
当然,很多都不能。在纽约或者伦敦(当然,是在英国脱欧毁掉伦敦的银行体系之前)工作的普通金融从业人员,如果告诉他们的老板以后想在清迈工作,只会在办公室里受到嘲笑,而且不再受待见。
但是这对(大部分)软件领域不适用。大部分 web/app 开发者如果这样要求的话可能会被拒绝;但是他们至少不会被嘲笑或者被炒。优秀开发者往往供不应求,而且处在 Skype 和 Slack 的时代,软件开发完全可以不依赖物质世界的交互。
(这一切对作家来说更加正确,真的;事实上我是在波纳配发表的这篇文章。但是作家并不像软件开发者一样具有足够的影响力。)
有些人会告诉你,远程团队天生比本地团队效率和生产力低下,或者说那些“不经意的灵感碰撞”是如此重要,以致于每一位员工每天都必须到同一个地方来人为地制造这样的碰撞。这些人错了——只要团队规模够小(以几人、一打或几十人计,而不是成百上千),并且足够专注。
我应该知道:在 [HappyFunCorp][11] 时,我们广泛的与远程团队工作,而且长期雇佣远程开发者,而结果难以置信的好。我在我三番的家中与斯德哥尔摩,圣保罗,上海,布鲁克林,新德里的开发者交流和合作的一天,完全没有任何不寻常。
目前为止,不管是不是个好主意,但我有点跑题了。供求关系指出那些拥有足够技能的开发者可以成为被称为“数字流浪者”的人,如果他们愿意的话。但是许多可以做到的却不愿意。最近,我在雷克雅维克的一栋通过 Airbnb 共享的房子和一伙不断变化的临时远程工作团队度过了一段时间,我保持着东海岸时间来跟上他们的工作,也把早上和周末的时光都花费在探索冰岛了——但是最后几乎所有人都回到了湾区生活。
从经济层面来说,当然,这太疯狂了。搬到东南亚工作光在房租一项上每月就会为我们省下几千美金。 为什么那些可以在哥斯达黎加挣着三番工资,或者在柏林赚着纽约水平薪资的人们,选择不这样做?为什么那些据说冷静固执的工程师在财政方面如此荒谬?
当然这里有社交和文化原因。清迈很不错,但没有大都会博物馆或者蒸汽朋克化装舞会,也没有 15 分钟脚程可以到的 50 家美食餐厅。柏林也很可爱,但没法提供风筝冲浪或者山脉远足和加州气候。当然也无法保证拥有无数和你一样分享同样价值观和母语的人们。
但是我觉得原因除了这些还有很多。我相信相比贫富之间的差异,还有一个更基础的经济分水岭存在。我认为我们在目睹世界上可以实现超凡成就的极端斯坦城市和无法成就伟大但可以工作和赚钱的平均斯坦城市之间正在生成巨大的裂缝。(名词是从伟大的纳西姆·塔勒布那里偷来的)
(译者注:[平均斯坦与极端斯坦的概念是美国学者纳西姆·塔勒布首先提出来的。他发现在我们所处的世界上,有些事物表现出相当的平均性,大部分个体都靠近均值,离均值越远则个体数量越稀少,与均值的偏离达到一定程度的个体数量将趋近于零。有些事物则表现出相当的极端性,均值这个概念在这个领域没有太多的意义,剧烈偏离均值的个体大量存在,而且偏离程度大得惊人。他把前者称为平均斯坦,把后者称为极端斯坦。][15])
艺术行业的极端斯坦城市有着悠久的历史。这也解释了为什么有抱负的作家纷纷搬到纽约城,而那些已经在国际上大获成功的导演和演员,仍然像飞蛾扑火一样不断被吸引到洛杉矶。现在,这对技术行业同样适用。即使你不曾想试着(帮助)构造一些非凡的事物——如今创业神话如此恢弘,很难想象有工程师完全没有梦想过它—— _伟大事物发生的地方_ 正在以美好的前景如梦如幻地吸引着人们。
但是关于这一切有趣的是,理论上讲,它会改变;因为——直到最近——分布式的、分散管理的团队实际上也可以获得超凡的成就。情况对这些团队很不利,因为风投的目光趋于短浅。但是没有任何法律规定独角兽公司只能诞生在加州和某些屈指可数的次级领地;而且似乎,不管结果好坏,极端斯坦正在扩散。如果这样的扩散最终能矛盾地导致米申地区的房租变 _便宜_ ,那就太棒了!
--------------------------------------------------------------------------------
via: https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
作者:[ Jon Evans ][a]
译者:[xiaow6](https://github.com/xiaow6)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://techcrunch.com/author/jon-evans/
[1]:https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/#comments
[2]:https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/#
[3]:http://twitter.com/share?via=techcrunch&amp;url=http://tcrn.ch/2owXJ0C&amp;text=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F&amp;hashtags=
[4]:https://www.linkedin.com/shareArticle?mini=true&amp;url=https%3A%2F%2Ftechcrunch.com%2F2017%2F04%2F02%2Fwhy-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities%2F&amp;title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F
[5]:https://plus.google.com/share?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[6]:http://www.reddit.com/submit?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/&amp;title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F
[7]:http://www.stumbleupon.com/badge/?url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[8]:mailto:?subject=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities?&amp;body=Article:%20https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[9]:https://share.flipboard.com/bookmarklet/popout?v=2&amp;title=Why%20do%20developers%20who%20could%20work%20anywhere%20flock%20to%20the%20world%E2%80%99s%20most%20expensive%C2%A0cities%3F&amp;url=https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/
[10]:https://mobile.twitter.com/Noahpinion/status/846054187288866
[11]:http://happyfuncorp.com/
[12]:https://twitter.com/rezendi
[13]:https://techcrunch.com/author/jon-evans/
[14]:https://techcrunch.com/2017/04/01/discussing-the-limits-of-artificial-intelligence/
[15]:http://blog.sina.com.cn/s/blog_5ba3d8610100q3b1.html

View File

@ -0,0 +1,501 @@
GitLab工作流概览
======
GitLab是一个基于git的仓库管理程序也是一个方便软件开发的强大完整应用。
GitLab 拥有一个对新手友好的界面,通过图形界面和命令行界面,使你的工作更加高效。GitLab 不仅仅对开发者是一个有用的工具,它甚至可以被集成到你的整个团队中,使得每一个人都在同一个平台上协作。
GitLab工作流逻辑符合使用者思维使得整个平台变得更加易用。相信我使用一次你就离不开它了
* * *
### 在这篇文章中
* [GitLab工作流][53]
* [软件开发阶段][22]
* [GitLab工单跟踪][52]
* [秘密工单][21]
* [截止日期][20]
* [委托人][19]
* [标签][18]
* [工单重要性][17]
* [GitLab工单看板][16]
* [GitLab中的代码审查][51]
* [第一次提交][15]
* [合并请求][14]
* [WIP MR][13]
* [审查][12]
* [建立,测试以及部署][50]
* [Koding][11]
* [用户案例][10]
* [反馈: 循环分析][49]
* [增强][48]
* [工单 & MR模版][9]
* [里程碑][8]
* [高级技巧][47]
* [对于工单 & MRs][7]
* [订阅][3]
* [添加 TO-DO][2]
* [搜索你的工单 & MRs][1]
* [转移工单][6]
* [代码片段][5]
* [GitLab 工作流 用户案例 梗概][46]
* [尾声][45]
* * *
### GitLab 工作流
**GitLab 工作流**是在整个软件开发生命周期中,以 GitLab 作为平台管理你的代码时,可以采取的一系列符合逻辑的动作。
GitLab 工作流将 [GitLab Flow][97] 考虑在内,后者由一系列**基于 Git** 的方法和策略组成,例如**分支策略**、**Git 最佳实践**等等,为版本管理提供了保障。
通过GitLab工作流可以很方便的提升团队的工作效率以及凝聚力。这种提升在引入一个新的项目的开始一直到发布这个项目成为一个产品都有所体现。这就是我们所说的“如何通过最快的速度把一个点子在10步之内变成一个产品”。
![FROM IDEA TO PRODUCTION IN 10 STEPS](https://about.gitlab.com/images/blogimages/idea-to-production-10-steps.png)
### 软件开发阶段
一般情况下软件开发经过10个主要阶段GitLab为这10个阶段依次提供了解决方案
1. **IDEA:** 每一个从点子开始的项目通常来源于一次闲聊。在这个阶段GitLab集成了[Mattermost][44]。
2. **ISSUE:** 最有效的讨论一个点子的方法,就是为这个点子建立一个工单讨论。你的团队和你的合作伙伴可以帮助你去提升这个点子,通过[issue tracker][43]
3. **PLAN:** 一旦讨论得到一致的同意,就是开始编码的时候了。但是等等!首先,我们需要优先考虑组织我们的工作流。对于此,我们可以使用[Issue Board][42]。
4. **CODE:** 现在,当一切准备就绪,我们可以开始写代码了。
5. **COMMIT:** 当我们对草稿感到满意时,就可以在版本控制下,把代码提交到功能分支了。
6. **TEST:** 通过[GitLab CI][41],我们可以运行脚本来创建和测试我们的应用
7. **REVIEW:** 一旦脚本成功运行,我们的创建和测试成功,我们就可以进行[code review][40]以及批准。
8. **STAGING:** 现在是时候[将我们的代码部署到演示环境][39]来检查一下,是否一切就像我们预估的那样顺畅——或者我们可能仍然需要修改。
9. **PRODUCTION:** 当项目运行得十分通畅,就是[部署到生产环境][38]的时候了!
10. **FEEDBACK**: 现在是时候翻回去看我们能在项目中提升的部分了。我们使用[循环分析][37]来对当前项目中关键的部分进行的反馈。
简单浏览这些步骤我们可以发现提供强大的工具来支持这些步骤是十分重要的。在接下来的部分我们为GitLab的可用工具提供一个简单的概览。
### GitLab 工单追踪
GitLab有一个强大的工单追溯系统在使用过程中允许你和你的团队以及你的合作者分享和讨论建议。
![issue tracker - view list](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issue-tracker-list-view.png)
工单是 GitLab 工作流的第一个重要特性。[以工单的讨论为开始][95],是跟进一个点子的演变的最好方式。
这十分有利于:
* 讨论点子
* 提交功能建议
* 提问题
* 提交bug
* 获取支持
* 精细化新代码的引入
对于每一个在GitLab上部署的项目都有一个工单追踪器。找到你的项目中的 **Issues** > **New issue**,来创建一个新的工单。建立一个标题来总结要被讨论的主题,并且使用[Markdown][94]来形容它。检查[pro tips][93]来加强你的工单描述。
GitLab 工单追踪器还提供了一些额外的实用功能,使得工单更易于管理和排定优先级。下面的部分将详细描述这些功能。
![new issue - additional settings](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issue-features-view.png)
### 秘密工单
无论何时,如果你只想在团队内部讨论某个工单,可以把该工单设为[秘密工单][92]。即使你的项目是公开的,这个工单也会保持私密。当一个不是本项目成员的人,哪怕是 [Reporter 级别][91],想要访问工单的地址时,浏览器也会返回一个 404 错误。
### 截止日期
每一个工单都允许你填写一个[截止日期][90]。有些团队以紧凑的时间表工作,需要一种方式来设定问题的解决期限,截止日期功能正好满足这一点。
当你有一个适用于多个工单的截止日期时——比如一个新的发布、项目的启动,或者按主题跟踪团队任务——你可以使用[里程碑][89]。
### 受托者
任何时候,只要有人开始处理某个工单,就可以把这个工单分配给那个人。你可以任意修改被分配者,直到满足你的需求。这个功能的理念是:受托者对这个工单负责,直到其将这个工单重新分配给其他人。
这对于筛选每个受托者的工单也有帮助。
### 标签
GitLab标签也是GitLab流的一个重要组成部分。你可以使用它们来分类你的工单在工作流中定位以及通过[优先级标签][88]来组织它们。
标签使得你与[GitLab Issue Board][87]协同工作,加快工程进度以及组织你的工作流。
**New!** 你可以创建[组标签][86]。它可以使得在每一个项目组中使用相同的标签。
### 工单权重
你可以添加[工单权重][85],使得工单的重要性更为清晰。01 - 03 表示工单不是特别重要,07 - 09 表示十分重要,04 - 06 表示程度适中。此外,你可以与你的团队自行定义工单权重的指标。
### GitLab工单看板
在项目中,[GitLab工单看板][84]是一个计划以及组织你的工单理想工具。
看板包含了与其相关的各自标签,每一个列表包含了相关的被标记的工单,并且以卡片的形式展示出来。
这些卡片可以在列表之间移动,被移动的卡片,其标签将会依据你移动的位置发生改变。
![GitLab Issue Board](https://about.gitlab.com/images/blogimages/designing-issue-boards/issue-board.gif)
**New!** 你也可以直接在看板上创建工单:点击列表上方的按钮即可。当你这么做的时候,这个工单将会自动添加与该列表相关的标签。
**New!** 我们[最近宣布][83],每一个 GitLab 项目都可以拥有**多个工单看板**(仅在 [GitLab 企业版][82]中可用);这是为不同的工作流组织工单的最好方式。
![Multiple Issue Boards](https://about.gitlab.com/images/8_13/m_ib.gif)
### 通过GitLab进行代码复审
在工单追踪器中讨论了新的提议之后,就是在代码上开工的时候了。你在本地编写代码,完成第一个版本后,提交代码并推送到你的 GitLab 仓库。你可以通过 [GitLab Flow][81] 来改进你的基于 Git 的管理策略。
### 第一次提交
在你的第一次提交信息中,你可以把相关的工单号添加在其中。通过这样做,你可以将开发工作流的两个阶段链接起来:工单本身,以及关于这个工单的第一次提交。
这样做,如果你提交的代码和工单属于同一个项目,你可以简单的添加 `#xxx` 到提交信息中译者注git commit message`xxx`是一个工单号。如果它们不在一个项目中你可以添加整个工单的整个URL(`https://gitlab.com/<username>/<projectname>/issues/<xxx>`)。
```
`git commit -m "this is my commit message. Ref #xxx"`
```
或者
```
`git commit -m "this is my commit message. Related to https://gitlab.com/<username>/<projectname>/issues/<xxx>"`
```
当然,你也可以用你自己的 GitLab 实例地址替换这个 URL 中的 `gitlab.com`。
**Note:** 链接工单和你的第一次提交,是为了通过 [GitLab 循环分析][80]追踪你的进展。它会衡量计划实施工单所花的时间,即从创建工单到第一次提交的间隔时间。
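把以上几点串起来,本地的一个典型操作序列大致如下(其中的分支名 `feature/new-page` 与工单号 `#42` 均为示意,并非本文原有的例子):
```
git checkout -b feature/new-page        # 创建功能分支
git add readme.md                       # 暂存改动
git commit -m "Add readme. Ref #42"     # 在提交信息中引用工单
git push -u origin feature/new-page     # 推送后即可在 GitLab 上发起 MR
```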
### 合并请求
一旦你提交改动到功能分支,GitLab 将识别这次修改,并建议你发起一个合并请求(MR)。
每一个 MR 都有一个标题(总结这次改动),以及一个用 [Markdown][79] 书写的描述。在描述中,你可以简单地描述这个 MR 做了什么,通过添加链接提及任何相关的工单和 MR,并且你也可以添加[关闭工单模式][78],当 MR 被**合并**时,相关联的工单就会被关闭。
例如:
```
## 增加一个新页面

这个 MR 将会为这个项目创建一个 `readme.md`,此文件包含这个 app 的概览。

Closes #xxx and https://gitlab.com/<username>/<projectname>/issues/<xxx>

预览:

![preview the new page](#image-url)

cc/ @Mary @Jane @John
```
当你创建一个带有描述的MR就像是上文叙述的那样它将会
* 当合并时,关闭工单 `#xxx` 以及 `https://gitlab.com/<username>/<projectname>/issues/<xxx>`
* 展示一张图片
* 提醒用户 `@Mary`, `@Jane`,以及给`@John`发邮件
你可以把这个 MR 分配给你自己,直到你完成你的工作,然后把它分配给其他人来做一次代码复审。如果有必要的话,它可以被重新分配多次,直到完成你所需要的所有复审。
它也可以被标记,并且添加一个[milestone][77]来促进管理。
当你从 UI(而不是命令行)添加或者修改一个文件并提交到一个新分支时,创建一个新的合并请求也一样简单。只需要勾选复选框“以这些改变开始一个新的合并请求”,然后一旦你提交改动,GitLab 将会自动创建一个新的 MR。
![commit to a feature branch and add a new MR from the UI](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/start-new-mr-edit-from-ui.png)
**Note:** 把[关闭工单模式][76]添加到你的 MR 中,使得 [GitLab 循环分析][75]能够追踪你的项目进展,是十分重要的。它会追踪“代码”阶段,即从第一次提交到创建合并请求的间隔时间。
**New!** 我们已经开发了[审查应用(Review Apps)][74],这个新功能使得你可以把改动部署到一个动态环境中进行预览,环境基于分支名以及每个合并请求创建。参见[可用示例][73]。
### WIP MR
WIP MR 的含义是**正在进行中的合并请求**(Work In Progress),是我们在 GitLab 中避免 MR 在准备就绪前被合并的一种技术。只需要在 MR 标题的开头添加 `WIP:`,它就不会被合并,除非你把 `WIP:` 删除。
当你的改动准备好被合并时,删除 `WIP:` ——可以手动编辑标题删除,或者使用 MR 描述下方提供的快捷方式。
![WIP MR click to remove WIP from the title](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-wip-mr.png)
**New!** `WIP`模式可以被[很快的添加到合并请求][72],通过[slash command][71]`/wip`。只需要输入它并且在评论或者MR描述中提交。
### 复审
一旦你创建了一个合并请求,就是你开始从团队以及合作方收集反馈的时候了。使用 UI 中提供的差异比较(diff)功能,你可以简单地添加行内评论,来回复或解决它们。
你也可以在每一行代码中获取一个链接,通过点击行号。
提交历史在 UI 中是可见的,通过提交历史,你可以追踪文件的每一次改变,并可以逐行浏览这些改动。
![code review in MRs at GitLab](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-code-review.png)
**New!** 你可以找到合并冲突,快速[通过UI界面来解决][70],或者依据你的需要修改文件来修复冲突。
![mr conflict resolution](https://about.gitlab.com/images/8_13/inlinemergeconflictresolution.gif)
### 创建,测试以及发布
[GitLab CI][69] 是一个强大的内建工具,用于[持续集成、持续交付以及持续部署][68]:它可以按照你的期望运行一些脚本。它的可能性是无穷尽的:就像是你自己的命令行在为你工作。
它完全通过一个放在项目仓库中的 YAML 文件 `.gitlab-ci.yml` 来配置。在 web 界面上,只要添加一个名为 `.gitlab-ci.yml` 的文件,就会打开一个下拉菜单,可以为不同的应用选择各种 CI 模版。
![GitLab CI templates - dropdown menu](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-ci-template.png)
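作为参考,下面给出一个极简的 `.gitlab-ci.yml` 草稿,用来展示它的基本结构(其中的阶段划分和 echo 命令只是示意性的假设,并非某个真实项目的配置):
```
# 定义流水线的两个阶段:先构建,再测试
stages:
  - build
  - test

build_job:
  stage: build
  script:
    - echo "在这里运行你的构建命令"

test_job:
  stage: test
  script:
    - echo "在这里运行你的测试命令"
```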
### Koding
使用 GitLab 的 [Koding 集成][67],在云端运行你的整个开发环境。这意味着你只需按一下按钮,就可以在一个功能齐全的 IDE 中检出一个项目,或者仅仅是一个合并请求。
### 使用案例
GitLab CI的使用案例
* 使用它来[构建][36]任何[静态网站生成器][35]生成的站点,并通过 [GitLab Pages][34] 发布你的网站。
* 使用它来[把你的网站发布][33]到 `staging`(预发布)以及 `production`(生产)[环境][32]。
* 使用它来[构建一个 iOS 应用][31]。
* 使用它来[构建并发布你的 Docker 镜像][30],通过 [GitLab 容器注册表][29]。
我们已经准备一大堆[GitLab CI样例工程][66]作为您的指南。看看他们吧!
### 反馈:循环分析
当你依照 GitLab 工作流工作时,你的团队从点子到产品的每一个[过程关键阶段][64],都会从 [GitLab 循环分析][65]获得即时反馈:
* **Issue:** 创建一个工单到分配这个工单到一个里程碑,或者添加一个工单到你的工单看板的时间
* **Plan:** 给工单分配一个里程碑或者把它添加到工单看板,到发布第一次提交的时间。
* **Code:** 第一次提交到提出合并请求的时间
* **Test:** CI为了相关合并请求运行整个管道的时间
* **Review:** 创建一个合并请求到合并的时间
* **Staging:** 合并到发布成为产品的时间
* **Production** (总的): 创建一个工单到把代码发布成[产品][28]的时间
### 加强
### 工单及 MR 模版
[工单及 MR 模版][63]允许你为你项目中的工单和合并请求的描述部分,定义特定的模版。
你可以用 [Markdown][62] 书写它们,并把它们加入仓库的默认分支。任何时候创建工单或 MR,都可以通过下拉菜单访问它们。
它们节省了你描述工单和 MR 的时间,并使需要持续跟踪的重要信息标准化。它确保了你需要的一切尽在掌握。
你可以创建多个模版,用于不同的目的。例如,你可以有一个用于功能建议的工单模版,和一个用于 bug 汇报的工单模版。在 [GitLab CE 项目][61]中寻找真实的例子吧!
![issues and MR templates - dropdown menu screenshot](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issues-choose-template.png)
### 里程碑
[里程碑][60]是 GitLab 中基于共同目标和具体日期来追踪团队工作的最佳工具。
不同情况下的目标各有不同,但是概况是相同的:为了达到特定的目标,你有一个工单集合以及正在编写代码的合并请求。
这个目标基本上可以是任何东西——把团队的工作按一个截止日期组织起来。例如,发布一个新版本、推出一个新产品、在某个日期前完成某些事情,或者按季度汇总一些项目。
![milestone dashboard](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-milestone.png)
### 高级要点
### 工单和MR
* 工单和MR的描述中:
* 输入`#`来触发一个关于现存工单的下拉列表
* 输入`!` 来触发一个关于现存MR的下拉列表
* 输入 `/` 来触发[slash 命令][4]
* 输入 `:` 来触发 emoji 表情(也支持行内评论)
* 添加图片(jpg, png, gif) 和视频到行中评论,通过按钮 **Attach a file**
* [自动应用标签][27] 通过 [GitLab Webhooks][26]
* [构成引用][24]: 使用语法 `>>>` 来开始或者结束一个引用
```
`>>>
Quoted text
Another paragraph
>>>`
```
* 创建[任务列表][23]:
```
`- [ ] Task 1
- [ ] Task 2
- [ ] Task 3`
```
#### 订阅
你是否发现了一个想要追踪的工单或 MR?在右侧展开的导航中点击[订阅][59],你就可以在有新评论时随时收到更新。要是想一次订阅多个工单和 MR,请使用[批量订阅][58]。😃
#### 添加待办
除了一直留意工单和 MR,如果你想对某个工单或 MR 采取后续行动,或者任何时候想把它加入你的 GitLab 待办列表,打开右侧导航,并[点击 **添加待办**][57]。
#### 寻找你的工单和MR
要寻找一个由你很久以前开启的工单或 MR——它们可能数以十计、百计甚至千计——是很难的。打开左侧导航,并点击**工单**或者**合并请求**,你就会看到那些分配给你的。同时,在那里或者任何工单追踪器里,你可以通过作者、分配者、里程碑、标签以及权重来过滤工单,也可以搜索各种状态(打开的、已合并的、已关闭的等等)的工单。
### 移动工单
一个工单开在了错误的项目中?不用担心,点击 **Edit**,然后[移动工单][56]到正确的项目。
### 代码片段
你是否会在不同的项目以及文件中,使用一些相同的代码段和模版?创建一个代码片段,并使它在你需要的时候可用。打开左侧导航栏,点击 **[Snippets][25]**。所有你的片段都会在那里。你可以把它们设置成公开的、内部的(仅对 GitLab 注册用户可见)或者私有的。
![Snippets - screenshot](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-code-snippet.png)
### GitLab 工作流用户案例设想
最后,让我们把所有东西串在一起理顺一下。不必担心,这十分简单。
让我们假设:你工作于一个聚焦于软件开发的公司。你创建了一个新的工单,这个工单是为了开发一个新功能,实施于你的一个应用中。
### 标签策略
为了这个应用,你已经创建了几个标签,“讨论”,“后端”,“前端”,“正在进行”,“展示”,“就绪”,“文档”,“营销”以及“产品”。所有都已经在工单看板有他们自己的列表。你的工单已经有标签“讨论”。
在工单追踪器中的讨论达成一致,你的后端团队开始在工单上工作,所以他们把这个工单的标签从“讨论”移动到“后端”。第一个开发者开始写代码,并且把这个工单分配给自己,增加标签“正在进行”。
### 编码 & 提交
在他的第一次提交的信息中他提及了他的工单编号。在工作后他把他的提交推送到一个功能分支并且创建一个新的合并请求在MR描述中包含工单关闭模式。他的团队复审了他的代码并且保证所有的测试和建立都已经通过。
### 使用工单看板
一旦后端团队完成了他们的工作,他们就删除“正在进行”标签,并且把工单从“后端”移动到“前端”看板。所以,前端团队接到通知,这个工单已经为他们准备好了。
### 发布到演示
当一个前端开发者开始为工单工作,他(她)增加一个标签“正在进行”,并且把这个工单重新分配给自己。当工作完成,这个实施将会被发布到一个**演示**环境。标签“正在进行”就会被删除,然后在工单看板里,工单卡被移动到“演示”表中。
### 团队合作
最后,当新功能引入成功,你的团队把它移动到“就绪”列表。
然后,就是你的技术文档编写团队的时间了,他们为新功能书写文档。一旦某个人完成书写,他添加标签“文档”。同时,你的营销团队开始宣传推广这个功能,所以某个人添加了标签“营销”。当技术文档书写完毕,书写者删除标签“文档”。一旦营销团队完成他们的工作,他们将工单从“营销”列表移动到“产品”列表。
### 部署到生产环境
最后,你将会成为那个为新版本发布负责的人,合并这个合并请求,并且将新功能部署到**生产**环境,然后工单的状态转变为**关闭**。
### 反馈
通过[循环分析][55],你和你的团队可以衡量从点子到产品所花费的时间,然后开启另一个工单,来讨论如何将这个过程进一步提升。
### 总结
GitLab 工作流 通过一个平台,帮助你的团队加速从点子到生产的改变:
* 它是**有效的**:因为你可以获取你想要的结果
* 它是**效率高的**:因为你可以用最小的努力和花费达到最大的生产力。
* 它是**高产的**:因为你可以非常有效的计划和行动
* 它是**简单的**因为你不需要安装不同的工具去完成你的目的仅仅需要GitLab
* 它是**快速的**:因为你不需要跳过多个平台来完成你的工作
每月 22 号都会发布一个新的 GitLab 版本,让它在集成化的软件开发上变得越来越好,让团队可以在一个单一的、统一的界面中协同工作。
在GitLab每个人都可以奉献多亏了我们强大的社区我们获得了我们想要的。并且多亏了他们我们才能一直为你提供更好的产品。
还有什么问题和反馈吗?请留言,或者在推特上@我们[@GitLab][54]!🙌
--------------------------------------------------------------------------------
via: https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/
作者:[Marcia Ramos][a]
译者:[svtter](https://github.com/svtter)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twitter.com/XMDRamos
[1]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#search-for-your-issues-and-mrs
[2]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#add-to-do
[3]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#subscribe
[4]:https://docs.gitlab.com/ce/user/project/slash_commands.html
[5]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#code-snippets
[6]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#moving-issues
[7]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#for-both-issues-and-mrs
[8]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
[9]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#issue-and-mr-templates
[10]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#use-cases
[11]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#koding
[12]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#review
[13]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#wip-mr
[14]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#merge-request
[15]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#first-commit
[16]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
[17]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#issue-weight
[18]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#labels
[19]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#assignee
[20]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#due-dates
[21]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#confidential-issues
[22]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#stages-of-software-development
[23]:https://docs.gitlab.com/ee/user/markdown.html#task-lists
[24]:https://about.gitlab.com/2016/07/22/gitlab-8-10-released/#blockquote-fence-syntax
[25]:https://gitlab.com/dashboard/snippets
[26]:https://docs.gitlab.com/ce/web_hooks/web_hooks.html
[27]:https://about.gitlab.com/2016/08/19/applying-gitlab-labels-automatically/
[28]:https://docs.gitlab.com/ce/ci/yaml/README.html#environment
[29]:https://about.gitlab.com/2016/05/23/gitlab-container-registry/
[30]:https://about.gitlab.com/2016/08/11/building-an-elixir-release-into-docker-image-using-gitlab-ci-part-1/
[31]:https://about.gitlab.com/2016/03/10/setting-up-gitlab-ci-for-ios-projects/
[32]:https://docs.gitlab.com/ce/ci/yaml/README.html#environment
[33]:https://about.gitlab.com/2016/08/26/ci-deployment-and-environments/
[34]:https://pages.gitlab.io/
[35]:https://about.gitlab.com/2016/06/17/ssg-overview-gitlab-pages-part-3-examples-ci/
[36]:https://about.gitlab.com/2016/04/07/gitlab-pages-setup/
[37]:https://about.gitlab.com/solutions/cycle-analytics/
[38]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
[39]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
[40]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-code-review
[41]:https://about.gitlab.com/gitlab-ci/
[42]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
[43]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-tracker
[44]:https://about.gitlab.com/2015/08/18/gitlab-loves-mattermost/
[45]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#conclusions
[46]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-workflow-use-case-scenario
[47]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#pro-tips
[48]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#enhance
[49]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
[50]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#build-test-and-deploy
[51]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#code-review-with-gitlab
[52]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-tracker
[53]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-workflow
[54]:https://twitter.com/gitlab
[55]:https://about.gitlab.com/solutions/cycle-analytics/
[56]:https://about.gitlab.com/2016/03/22/gitlab-8-6-released/#move-issues-to-other-projects
[57]:https://about.gitlab.com/2016/06/22/gitlab-8-9-released/#manually-add-todos
[58]:https://about.gitlab.com/2016/07/22/gitlab-8-10-released/#bulk-subscribe-to-issues
[59]:https://about.gitlab.com/2016/03/22/gitlab-8-6-released/#subscribe-to-a-label
[60]:https://about.gitlab.com/2016/08/05/feature-highlight-set-dates-for-issues/#milestones
[61]:https://gitlab.com/gitlab-org/gitlab-ce/issues/new
[62]:https://docs.gitlab.com/ee/user/markdown.html
[63]:https://docs.gitlab.com/ce/user/project/description_templates.html
[64]:https://about.gitlab.com/2016/09/21/cycle-analytics-feature-highlight/
[65]:https://about.gitlab.com/solutions/cycle-analytics/
[66]:https://docs.gitlab.com/ee/ci/examples/README.html
[67]:https://about.gitlab.com/2016/08/22/gitlab-8-11-released/#koding-integration
[68]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
[69]:https://about.gitlab.com/gitlab-ci/
[70]:https://about.gitlab.com/2016/08/22/gitlab-8-11-released/#merge-conflict-resolution
[71]:https://docs.gitlab.com/ce/user/project/slash_commands.html
[72]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#wip-slash-command
[73]:https://gitlab.com/gitlab-examples/review-apps-nginx/
[74]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#ability-to-stop-review-apps
[75]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
[76]:https://docs.gitlab.com/ce/administration/issue_closing_pattern.html
[77]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
[78]:https://docs.gitlab.com/ce/administration/issue_closing_pattern.html
[79]:https://docs.gitlab.com/ee/user/markdown.html
[80]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
[81]:https://about.gitlab.com/2014/09/29/gitlab-flow/
[82]:https://about.gitlab.com/free-trial/
[83]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#multiple-issue-boards-ee
[84]:https://about.gitlab.com/solutions/issueboard
[85]:https://docs.gitlab.com/ee/workflow/issue_weight.html
[86]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#group-labels
[87]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
[88]:https://docs.gitlab.com/ee/user/project/labels.html#prioritize-labels
[89]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
[90]:https://about.gitlab.com/2016/08/05/feature-highlight-set-dates-for-issues/#due-dates-for-issues
[91]:https://docs.gitlab.com/ce/user/permissions.html
[92]:https://about.gitlab.com/2016/03/31/feature-highlihght-confidential-issues/
[93]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#pro-tips
[94]:https://docs.gitlab.com/ee/user/markdown.html
[95]:https://about.gitlab.com/2016/03/03/start-with-an-issue/
[96]:https://about.gitlab.com/2016/09/13/gitlab-master-plan/
[97]:https://about.gitlab.com/2014/09/29/gitlab-flow/

View File

@ -0,0 +1,266 @@
如何在 Ubuntu16.04 中用 Apache 部署 Jenkins 自动化服务器
============================================================
Jenkins 是从 Hudson 项目衍生出来的自动化服务器。Jenkins 是一个基于服务器的应用程序,运行在 Java servlet 容器中,它支持包括 Git、SVN 以及 Mercurial 在内的多种 SCMSource Control Management源码控制工具。Jenkins 提供了上百种插件帮助你的项目实现自动化。Jenkins 由 Kohsuke Kawaguchi 开发,在 2011 年使用 MIT 协议发布了第一个发行版,它是个免费软件。
在这篇指南中,我会向你介绍如何在 Ubuntu 16.04 中安装最新版本的 Jenkins。我们会用自己的域名运行 Jenkins,并且会安装和配置 Apache 作为 Jenkins 的反向代理。
### 前提
* Ubuntu 16.04 服务器 - 64 位
* Root 权限
### 第一步 - 安装 Java OpenJDK 7
Jenkins 基于 Java因此我们需要在服务器上安装 Java OpenJDK 7。在这里我们会从一个 PPA 仓库安装 Java 7首先我们需要添加这个仓库。
默认情况下Ubuntu 16.04 没有安装用于管理 PPA 仓库的 python-software-properties 软件包,因此我们首先需要安装这个软件。使用 apt 命令安装 python-software-properties。
`apt-get install python-software-properties`
下一步,添加 Java PPA 仓库到服务器中。
`add-apt-repository ppa:openjdk-r/ppa`
按下回车键。
用 apt 命令更新 Ubuntu 仓库并安装 Java OpenJDK。
`apt-get update`
`apt-get install openjdk-7-jdk`
输入下面的命令验证安装:
`java -version`
你会看到安装到服务器上的 Java 版本。
[
![在 Ubuntu 16.04 上安装 Java OpenJDK 7](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/1.png)
][9]
### 第二步 - 安装 Jenkins
Jenkins 给软件安装包提供了一个 Ubuntu 仓库,我们会从这个仓库中安装 Jenkins。
用下面的命令添加 Jenkins 密钥和仓库到系统中。
`wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -`
`echo 'deb https://pkg.jenkins.io/debian-stable binary/' | tee -a /etc/apt/sources.list`
更新仓库并安装 Jenkins。
`apt-get update`
`apt-get install jenkins`
安装完成后,用下面的命令启动 Jenkins。
`systemctl start jenkins`
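(可选)如果希望系统重启后 Jenkins 也自动启动,可以顺便启用它的 systemd 服务(这一步并非本文原有步骤,但属于标准做法):
`systemctl enable jenkins`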
通过检查 Jenkins 默认使用的端口(端口 8080验证 Jenkins 正在运行。我会像下面这样用 netstat 命令检测:
`netstat -plntu`
Jenkins 已经安装好了并运行在 8080 端口。
[
![已经将 Jenkins 安装到 8080 端口](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/2.png)
][10]
### 第三步 - 为 Jenkins 安装和配置 Apache 作为反向代理
在这篇指南中,我们会在一个 apache web 服务器中运行 Jenkins我们会为 Jenkins 配置 apache 作为反向代理。首先我会安装 apache 并启用一些需要的模块,然后我会为 Jenkins 用域名 my.jenkins.id 创建虚拟 host 文件。请在这里使用你自己的域名并在所有配置文件中出现的地方替换。
从 Ubuntu 仓库安装 apache2 web 服务器。
`apt-get install apache2`
安装完成后,启用 proxy 和 proxy_http 模块以便将 apache 配置为 Jenkins 的前端服务器/反向代理。
`a2enmod proxy`
`a2enmod proxy_http`
下一步,在 sites-available 目录创建新的虚拟 host 文件。
`cd /etc/apache2/sites-available/`
`vim jenkins.conf`
粘贴下面的虚拟 host 配置。
```
<Virtualhost *:80>
    ServerName        my.jenkins.id
    ProxyRequests     Off
    ProxyPreserveHost On
    AllowEncodedSlashes NoDecode
    <Proxy http://localhost:8080/*>
      Order deny,allow
      Allow from all
    </Proxy>
    ProxyPass         /  http://localhost:8080/ nocanon
    ProxyPassReverse  /  http://localhost:8080/
    ProxyPassReverse  /  http://my.jenkins.id/
</Virtualhost>
```
保存文件。然后用 a2ensite 命令激活 Jenkins 虚拟 host。
`a2ensite jenkins`
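(可选)在重启 Apache 之前,可以先检查配置语法是否有误(`apache2ctl` 由 Ubuntu 的 apache2 软件包自带):
`apache2ctl configtest`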
重启 Apache 和 Jenkins。
`systemctl restart apache2`
`systemctl restart jenkins`
检查 Jenkins 和 Apache 正在使用 80 和 8080 端口。
`netstat -plntu`
[
![检查 Apache 和 Jenkins 是否在运行](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/3.png)
][11]
### 第四步 - 配置 Jenkins
Jenkins 用域名 'my.jenkins.id' 运行。打开你的 web 浏览器然后输入 URL。你会看到要求你输入初始管理员密码的页面。Jenkins 已经生成了一个密码,因此我们只需要显示并把结果复制到密码框。
用 cat 命令显示 Jenkins 初始管理员密码。
`cat /var/lib/jenkins/secrets/initialAdminPassword`
a1789d1561bf413c938122c599cf65c9
[
![获取 Jenkins 管理员密码](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/4.png)
][12]
将结果粘贴到密码框,然后点击 **Continue**。
[
![安装和配置 Jenkins](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/5.png)
][13]
现在,为了后面能比较好地使用,我们需要在 Jenkins 中安装一些插件。选择 **Install Suggested Plugin** 并点击它。
[
![安装 Jenkins 插件](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/6.png)
][14]
Jenkins 插件安装过程
[
![Jenkins 安装完插件](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/7.png)
][15]
安装完插件后,我们需要创建一个新的管理员密码。输入你的管理员用户名、密码、电子邮件等,然后点击 **Save and Finish**。
[
![创建 Jenkins 管理员账户](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/8.png)
][16]
点击 start 开始使用 Jenkins。你会被重定向到 Jenkins 管理员面板。
[
![重定向到管理员面板](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/9.png)
][17]
成功完成 Jenkins 安装和配置。
[
![Jenkins 管理员面板](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/10.png)
][18]
### 第五步 - Jenkins 安全
在 Jenkins 管理员面板,我们需要为 Jenkins 配置标准的安全项,点击 **Manage Jenkins** → **Configure Global Security**。
[
![Jenkins 全局安全设置](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/11.png)
][19]
Jenkins 在 **Access Control** 部分提供了多种认证方法。为了能够控制所有的用户权限,我选择了 **Matrix-based Security**。在 **User/Group** 复选框中启用 admin 用户。通过**勾选所有选项**给 admin 所有权限,给 anonymous 只读权限。现在点击 **Save**。
[
![配置 Jenkins 权限](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/12.png)
][20]
你会被重定向到面板,如果出现了登录选项,只需输入你的管理员账户和密码。
### 第六步 - 测试一个简单的自动化任务
在这一部分,我想为 Jenkins 服务测试一个简单的任务。为了测试 Jenkins 我会创建一个简单的任务,并用 top 命令查看服务器的负载。
在 Jenkins 管理员面板上,点击 **Create New Job**。
[
![在 Jenkins 中创建新的任务](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/13.png)
][21]
输入任务的名称(在这里我用 ‘Checking System’),选择 **Freestyle Project**,然后点击 **OK**。
[
![配置 Jenkins 任务](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/14.png)
][22]
进入 **Build** 标签页。在 **Add build step** 中,选择选项 **Execute shell**。
在输入框输入下面的命令。
`top -b -n 1 | head -n 5`
点击 **Save**。
[
![启动 Jenkins 任务](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/15.png)
][23]
现在你在 ‘Checking System’ 任务的项目页。点击 **Build Now** 来执行这个任务。
任务执行完成后,你会看到 **Build History**,点击第一个任务查看结果。
下面是 Jenkins 任务执行的结果。
[
![构建和运行 Jenkins 任务](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/16.png)
][24]
到这里就介绍完了在 Ubuntu 16.04 中用 Apache web 服务器安装 Jenkins 的内容。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/
作者:[Muhammad Arul ][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/howtoforgecom
[1]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#prerequisite
[2]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-install-java-openjdk-
[3]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-install-jenkins
[4]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-install-and-configure-apache-as-reverse-proxy-for-jenkins
[5]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-configure-jenkins
[6]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-jenkins-security
[7]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#step-testing-a-simple-automation-job
[8]:https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/#reference
[9]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/1.png
[10]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/2.png
[11]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/3.png
[12]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/4.png
[13]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/5.png
[14]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/6.png
[15]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/7.png
[16]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/8.png
[17]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/9.png
[18]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/10.png
[19]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/11.png
[20]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/12.png
[21]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/13.png
[22]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/14.png
[23]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/15.png
[24]:https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/big/16.png

View File

@ -1,37 +1,43 @@
探索传统 JavaScript 基准测试
============================================================
可以很公平地说,[JavaScript][22] 是当下软件工程最重要的技术。对于那些深入接触过编程语言、编译器和虚拟机的人来说,这仍然有点令人惊讶,因为在语言设计者看来,`JavaScript` 不是十分优雅;在编译器工程师看来,它没有多少可优化的地方;而且还没有一个伟大的标准库。取决于你和谁吐槽,`JavaScript` 的缺点你花上数周都枚举不完,不过你总会找到一些你从所未知的神奇的东西。尽管这看起来明显困难重重,不过 `JavaScript` 还是成为当今 web 的核心,并且还成为服务器端/云端的主导技术(通过 [Node.js][23]),甚至还开辟了进军物联网空间的道路。
可以很公平地说[JavaScript][22] 是当下软件工程最重要的技术。对于那些深入接触过编程语言、编译器和虚拟机的人来说,这仍然有点令人惊讶,因为在语言设计者看来JavaScript 不是十分优雅;在编译器工程师看来,它没有多少可优化的地方;而且还没有一个伟大的标准库。取决于你和谁吐槽JavaScript 的缺点你花上数周都枚举不完,不过你总会找到一些你从所未知的神奇的东西。尽管这看起来明显困难重重,不过 JavaScript 还是成为当今 web 的核心,并且还(通过 [Node.js][23]成为服务器端/云端的主导技术,甚至还开辟了进军物联网空间的道路。
问题来了,为什么 `JavaScript` 如此受欢迎/成功?我知道没有一个很好的答案。如今我们有许多使用 `JavaScript` 的好理由,或许最重要的是围绕其构建的庞大的生态系统,以及今天大量可用的资源。但所有这一切实际上是发展到一定程度的后果。为什么 `JavaScript` 变得流行起来了?嗯,你或许会说,这是 web 多年来的通用语了。但是在很长一段时间里,人们极其讨厌 `JavaScript`。回顾过去,似乎第一波 `JavaScript` 浪潮爆发在上个年代的后半段。不出所料,那个时候 `JavaScript` 引擎在不同的负载下实现了巨大的加速,这可能让很多人对 `JavaScript` 刮目相看。
问题来了,为什么 JavaScript 如此受欢迎?或者说如此成功?我知道没有一个很好的答案。如今我们有许多使用 JavaScript 的好理由,或许最重要的是围绕其构建的庞大的生态系统,以及今天大量可用的资源。但所有这一切实际上是发展到一定程度的后果。为什么 JavaScript 变得流行起来了?嗯,你或许会说,这是 web 多年来的通用语了。但是在很长一段时间里,人们极其讨厌 JavaScript。回顾过去似乎第一波 JavaScript 浪潮爆发在上个年代的后半段。那个时候 JavaScript 引擎加速了各种不同的任务的执行,很自然的,这可能让很多人对 JavaScript 刮目相看。
回到过去那些日子,这些加速测试使用了现在所谓的传统 `JavaScript` 基准——从苹果的 [SunSpider 基准][24]JavaScript 微基准之母)到 Mozilla 的 [Kraken 基准][25] 和谷歌的 `V8` 基准。后来,`V8` 基准被 [Octane 基准][26] 取代,而苹果发布了新的 [JetStream 基准][27]。这些传统的 `JavaScript` 基准测试驱动了无数人的努力,使 `JavaScript` 的性能达到了本世纪初没人能预料到的水平。据报道加速达到了 1000 倍,一夜之间在网站使用 `<script>` 标签不再是魔鬼的舞蹈,做客户端不再仅仅是可能的了,甚至是被鼓励的。
回到过去那些日子,这些加速测试使用了现在所谓的传统 JavaScript 基准——从苹果的 [SunSpider 基准][24]JavaScript 微基准之母)到 Mozilla 的 [Kraken 基准][25] 和谷歌的 V8 基准。后来V8 基准被 [Octane 基准][26] 取代,而苹果发布了新的 [JetStream 基准][27]。这些传统的 JavaScript 基准测试驱动了无数人的努力,使 JavaScript 的性能达到了本世纪初没人能预料到的水平。据报道其性能加速达到了 1000 倍,一夜之间在网站使用 `<script>` 标签不再是魔鬼的舞蹈,做客户端不再仅仅是可能的了,甚至是被鼓励的。
[![性能测试JS 基准的简史](http://benediktmeurer.de/images/2016/sethcds-20161216.png)][28]
现在是 2016 年,所有(相关的)`JavaScript` 引擎的性能都达到了一个令人难以置信的水平web 应用可以像端应用(或者本地的应用)一样快。引擎配有复杂的优化编译器,通过收集过去关于类型/形状的反馈来推测某些操作(即属性访问、二进制操作、比较、调用等),生成高度优化的机器代码的短序列。大多数优化是由 `SunSpider``Kraken` 等微基准以及 `Octane``JetStream` 等静态测试套件驱动的。由于有像 [asm.js][29] 和 [Emscripten][30] 这样的 `JavaScript` 技术,我们甚至可以将大型 `C++` 应用程序编译成 `JavaScript`,并在你的浏览器上运行,而无需下载或安装任何东西。例如,现在你可以在 web 上玩 [AngryBots][31],无需沙盒,而过去的 web 游戏需要安装一堆诸如 `Adobe Flash``Chrome PNaCl` 的插件。
(来源: [Advanced JS performance with V8 and Web Assembly](https://www.youtube.com/watch?v=PvZdTZ1Nl5o) Chrome Developer Summit 2016, @s3ththompson。)
这些成就绝大多数都要归功于这些微基准和静态性能测试套件,以及这些传统 `JavaScript` 基准间至关重要的竞争。你可以对 `SunSpider` 表示不满,但很显然,没有 `SunSpider``JavaScript` 的性能可能达不到今天的高度。好吧,赞美到此为止。现在看看另一方面,所有静态性能测试——无论是微基准还是大型应用的宏基准,都注定要随着时间的推移变得不相关!为什么?因为在开始游戏前,基准只能教你这么多。一旦达到某个阔值以上(或以下),那么有益于特定基准的优化的一般适用性将呈指数下降。例如,我们将 `Octane` 作为现实世界中 web 应用性能的代理,并且在相当长的一段时间里,它可能做得很不错,但是现在,`Octane` 与现实场景中的时间分布是截然不同的,因此即使眼下再优化 `Octane` 至超越自身,可能在现实世界中还是得不到任何显著的改进(无论是通用 web 还是 `Node.js` 的工作负载)。
现在是 2016 年所有相关的JavaScript 引擎的性能都达到了一个令人难以置信的水平web 应用像原生应用一样快(或者能够像原生应用一样快)。引擎配有复杂的优化编译器,通过收集之前的关于类型/形状的反馈来推测某些操作(例如属性访问、二进制操作、比较、调用等),生成高度优化的机器代码的短序列。大多数优化是由 SunSpider 或 Kraken 等微基准以及 Octane 和 JetStream 等静态测试套件驱动的。由于有像 [asm.js][29] 和 [Emscripten][30] 这样的 JavaScript 技术,我们甚至可以将大型 C++ 应用程序编译成 JavaScript并在你的浏览器上运行而无需下载或安装任何东西。例如现在你可以在 web 上玩 [AngryBots][31],无需沙盒,而过去的 web 游戏需要安装一堆诸如 Adobe Flash 或 Chrome PNaCl 的插件。
这些成就绝大多数都要归功于这些微基准和静态性能测试套件的出现,以及与这些传统 JavaScript 基准间的竞争的结果。你可以对 SunSpider 表示不满,但很显然,没有 SunSpider,JavaScript 的性能可能达不到今天的高度。好吧,赞美到此为止。现在看看另一方面,所有静态性能测试——无论是微基准还是大型应用的宏基准,都注定要随着时间的推移变成噩梦!为什么?因为在开始摆弄它之前,基准只能教你这么多。一旦达到某个阈值以上(或以下),那么有益于特定基准的优化的一般适用性将呈指数下降。例如,我们将 Octane 作为现实世界中 web 应用性能的代表,并且在相当长的一段时间里,它可能做得很不错,但是现在,Octane 与现实场景中的时间分布是截然不同的,因此即使眼下再优化 Octane 乃至超越自身,可能在现实世界中还是得不到任何显著的改进(无论是通用 web 还是 Node.js 的工作负载)。
[![基准与现实世界的时间分布对比](http://benediktmeurer.de/images/2016/verwaestblinkon-20161216.png)][32]
由于传统 `JavaScript` 基准(包括最新版的 `JetStream``Octane`)可能已经超越其有用性变得越来越明显,我们开始调查新的方法来测量年初现实场景的性能,为 `V8``Chrome` 添加了大量新的性能追踪钩子。我们还特意添加一些机制来查看我们在浏览 web 时的时间开销,即是否是脚本执行、垃圾回收、编译等,并且这些调查的结果非常有趣和令人惊讶。从上面的幻灯片可以看出,运行 `Octane` 花费超过 70% 的时间执行 `JavaScript` 和回收垃圾,而浏览 web 的时候,通常执行 `JavaScript` 花费的时间不到 30%,垃圾回收占用的时间永远不会超过 5%。反而花费大量时间来解析和编译,这不像 `Octane` 的作风。因此,将更多的时间用在优化 `JavaScript` 执行上将提高你的 `Octane` 跑分,但不会对加载 [youtube.com][33] 有任何积极的影响。事实上,花费更多的时间来优化 `JavaScript` 执行甚至可能有损你现实场景的性能因为编译器需要更多的时间或者你需要跟踪更多的反馈最终为编译、IC 和运行时桶开销更多的时间。
(来源:[Real-World JavaScript Performance](https://youtu.be/xCx4uC7mn6Y)BlinkOn 6 conference@tverwaes
由于传统 JavaScript 基准(包括最新版的 JetStream 和 Octane可能已经背离其有用性变得越来越远我们开始在年初寻找新的方法来测量现实场景的性能为 V8 和 Chrome 添加了大量新的性能追踪钩子。我们还特意添加一些机制来查看我们在浏览 web 时的时间开销,例如,是否是脚本执行、垃圾回收、编译等,并且这些调查的结果非常有趣和令人惊讶。从上面的幻灯片可以看出,运行 Octane 花费超过 70% 的时间去执行 JavaScript 和回收垃圾,而浏览 web 的时候,通常执行 JavaScript 花费的时间不到 30%,垃圾回收占用的时间永远不会超过 5%。在 Octane 中并没有体现出它花费了大量时间来解析和编译。因此,将更多的时间用在优化 JavaScript 执行上将提高你的 Octane 跑分,但不会对加载 [youtube.com][33] 有任何积极的影响。事实上,花费更多的时间来优化 JavaScript 执行甚至可能有损你现实场景的性能因为编译器需要更多的时间或者你需要跟踪更多的反馈最终在编译、IC 和运行时桶开销了更多的时间。
[![测速表](http://benediktmeurer.de/images/2016/speedometer-20161216.png)][34]
还有另外一组基准测试用于测量浏览器整体性能(包括 `JavaScript``DOM` 性能),最新推出的是 [Speedometer 基准][35]。基准试图通过运行一个用不同的主流 web 框架实现的简单的 [TodoMVC][36] 应用(现在看来有点过时了,不过新版本正在研发中)以捕获真实性能。各种在 Octane 下的测试Angular、Ember、React、Vanilla、Flight 和 Backbone都罗列在幻灯片中,你可以看到这些测试似乎更好地代表了现在的性能指标。但是请注意,这些数据收集在本文撰写将近 6 个月以前,而且我们优化了更多的现实场景模式(例如我们正在重构 IC 系统以显著地降低开销,并且 [解析器也正在重新设计][37])。还要注意的是,虽然这看起来像是只和浏览器相关,但我们有非常强有力的证据表明传统的峰值性能基准也不是现实场景中 `Node.js` 应用性能的一个好代理
还有另外一组基准测试用于测量浏览器整体性能(包括 JavaScript 和 DOM 性能),最新推出的是 [Speedometer 基准][35]。基准试图通过运行一个用不同的主流 web 框架实现的简单的 [TodoMVC][36] 应用(现在看来有点过时了,不过新版本正在研发中)以捕获真实性能。上述幻灯片中的各种测试 Angular、Ember、React、Vanilla、Flight 和 Backbone挨着放在 Octane 之后,你可以看到这些测试似乎更好地代表了现在的性能指标。但是请注意,这些数据收集在本文撰写将近 6 个月以前,而且我们优化了更多的现实场景模式(例如我们正在重构垃圾回收系统以显著地降低开销,并且 [解析器也正在重新设计][37])。还要注意的是,虽然这看起来像是只和浏览器相关,但我们有非常强有力的证据表明传统的峰值性能基准也不能很好的代表现实场景中 Node.js 应用性能
[![Speedometer 和 Octane 对比](http://benediktmeurer.de/images/2016/verwaestblinkon2-20161216.png)][38]
所有这一切可能已经路人皆知了,因此我将用本文剩下的部分强调一些关于我为什么认为这不仅有用,而且对于 `JavaScript` 社区的健康(必须停止关注某一阔值的静态峰值性能基准测试)很关键的具体案例。让我通过一些例子说明 `JavaScript` 引擎怎样来玩弄基准。
(来源: [Real-World JavaScript Performance](https://youtu.be/xCx4uC7mn6Y) BlinkOn 6 conference, @tverwaes.
所有这一切可能已经路人皆知了,因此我将用本文剩下的部分强调一些具体案例,来说明为什么我认为停止关注超过某一阈值的静态峰值性能基准测试,不仅有用,而且对 JavaScript 社区的健康至关重要。下面让我通过一些例子说明 JavaScript 引擎是怎样玩弄基准的。
### 臭名昭著的 SunSpider 案例
一篇关于传统 `JavaScript` 基准测试的博客如果没有指出 `SunSpider` 明显的问题是不完整的。让我们从性能测试的最佳实践开始,它在现实场景中不是很适用:[`bitops-bitwise-and.js` 性能测试][39]
一篇关于传统 JavaScript 基准测试的博客如果没有指出 SunSpider 明显的问题是不完整的。让我们从性能测试的最佳实践开始,它在现实场景中不是很适用:[`bitops-bitwise-and.js` 性能测试][39]
[![bitops-bitwise-and.js](http://benediktmeurer.de/images/2016/bitops-bitwise-and-20161216.png)][40]
有一些算法需要进行快速的位运算,特别是从 `C/C++` 转译成 `JavaScript` 的地方,所以快速执行按位操作确实有点意义。然而,现实场景中的网页可能不关心引擎是否可以执行位运算,并且能否在循环中比另一个引擎快两倍。但是再盯着这段代码几秒钟,你可能会注意到,在第一次循环迭代之后 `bitwiseAndValue` 将变成 `0`,并且在接下来的 599999 次迭代中将保持为 `0`。所以一旦你在此获得好性能,即在体面的硬件上所有测试均低于 5ms在经过尝试之后意识到只有循环的第一次是必要的而剩余的迭代只是在浪费时间例如 [loop peeling][41] 后面的死代码),现在你可以开始玩弄这个基准了。这需要 JavaScript 中的一些机制来执行这种转换,即你需要检查 `bitwiseAndValue` 是全局对象的常规属性还是在执行脚本之前不存在,全局对象或者它的原型上必须没有拦截器。但如果你真的想要赢得这个基准测试,并且你愿意全力以赴,那么你可以在不到 1ms 的时间内完成这个测试。然而,这种优化将局限于这种特殊情况,并且测试的轻微修改可能不再触发它。
有一些算法需要进行快速的位运算,特别是从 `C/C++` 转译成 JavaScript 的地方,所以快速执行按位操作确实有点意义。然而,现实场景中的网页可能不关心引擎是否可以执行位运算,并且能否在循环中比另一个引擎快两倍。但是再盯着这段代码几秒钟,你可能会注意到,在第一次循环迭代之后 `bitwiseAndValue` 将变成 `0`,并且在接下来的 599999 次迭代中将保持为 `0`。所以一旦你在此获得好性能,即在体面的硬件上所有测试均低于 5ms在经过尝试之后意识到只有循环的第一次是必要的而剩余的迭代只是在浪费时间例如 [loop peeling][41] 后面的死代码),现在你可以开始玩弄这个基准了。这需要 JavaScript 中的一些机制来执行这种转换,即你需要检查 `bitwiseAndValue` 是全局对象的常规属性还是在执行脚本之前不存在,全局对象或者它的原型上必须没有拦截器。但如果你真的想要赢得这个基准测试,并且你愿意全力以赴,那么你可以在不到 1ms 的时间内完成这个测试。然而,这种优化将局限于这种特殊情况,并且测试的轻微修改可能不再触发它。
好吧,那么 [`bitops-bitwise-and.js`][42] 测试彻底肯定是微基准最失败的案例。让我们继续转移到 SunSpider 中更逼真的场景——[`string-tagcloud.js`][43] 测试,它的底层运行着一个较早版本的 `json.js polyfill`。该测试可以说看起来比位运算测试更合理,但是查看基准的配置之后立刻显示:大量的时间浪费在一条 `eval` 表达式(高达 20% 的总执行时间被用于解析和编译,再加上实际执行编译后代码的 10% 的时间)。
@ -77,7 +83,7 @@
])
```
显然,解析这些对象字面量,为其生成本地代码,然后执行该代码的成本很高。将输入的字符串解析为 `JSON` 并生成适当的对象图的开销将更加低廉。所以,加快这个基准测试的一个小把戏就是模拟 `eval`,并尝试总是将数据首先作为 `JSON` 解析,然后再回溯到真实的解析、编译、执行,直到尝试读取 `JSON` 失败(尽管需要一些额外的黑魔法来跳过括号)。早在 2007 年,这甚至不算是一个坏点子,因为没有 [`JSON.parse`][45],不过在 2017 年这只是 `JavaScript` 引擎的技术债,可能会让 `eval` 的合法使用遥遥无期。
显然,解析这些对象字面量,为其生成本地代码,然后执行该代码的成本很高。将输入的字符串解析为 `JSON` 并生成适当的对象图的开销将更加低廉。所以,加快这个基准测试的一个小把戏就是模拟 `eval`,并尝试总是将数据首先作为 `JSON` 解析,然后再回溯到真实的解析、编译、执行,直到尝试读取 `JSON` 失败(尽管需要一些额外的黑魔法来跳过括号)。早在 2007 年,这甚至不算是一个坏点子,因为没有 [`JSON.parse`][45],不过在 2017 年这只是 JavaScript 引擎的技术债,可能会让 `eval` 的合法使用遥遥无期。
```
--- string-tagcloud.js.ORIG 2016-12-14 09:00:52.869887104 +0100
@ -93,7 +99,7 @@
}
```
事实上,将基准测试更新到现代 `JavaScript` 会立刻提升性能,正如今天的 `V8 LKGR` 从 36ms 降到了 26ms性能足足提升了 30%
事实上,将基准测试更新到现代 JavaScript 会立刻提升性能,正如今天的 `V8 LKGR` 从 36ms 降到了 26ms性能足足提升了 30%
```
$ node string-tagcloud.js.ORIG
@ -121,7 +127,7 @@ $
* 0.05235987755982989
* 0.08726646259971647
显然,你可以在这里做的一件事情就是通过缓存以前的计算值来避免重复计算相同的正弦值和余弦值。事实上,这是 `V8` 以前的做法,而其它引擎例如 `SpiderMonkey` 仍然这样做。我们从 `V8` 中删除了所谓的超载缓存,因为缓存的开销在现实中的工作负载是不可忽视的,你不可能总是在一行代码中计算相同的值,这在其它地方倒不稀奇。当我们在 2013 和 2014 年移除这个特定的基准优化时,我们对 `SunSpider` 基准产生了强烈的冲击,但我们完全相信,优化基准并没有任何意义,同时以这种方式批判现实场景中的使用案例。
显然,你可以在这里做的一件事情,就是通过缓存以前的计算值来避免重复计算相同的正弦值和余弦值。事实上,这是 V8 以前的做法,而其它引擎例如 `SpiderMonkey` 仍然这样做。我们从 V8 中删除了所谓的超越函数缓存(transcendental cache),因为缓存的开销在现实的工作负载中是不可忽视的,你不可能总是在一行代码中计算相同的值,这在其它地方倒不稀奇。当我们在 2013 和 2014 年移除这个特定的基准优化时,我们对 SunSpider 基准产生了强烈的冲击,但我们完全相信,为基准而优化并没有任何意义,反而会以这种方式损害现实场景中的使用案例。
[![3d-cube 基准](http://benediktmeurer.de/images/2016/3d-cube-awfy-20161216.png)][52]
@ -129,27 +135,27 @@ $
### 垃圾回收是有害的
除了这些非常具体的测试问题,`SunSpider` 还有一个根本的问题:总体执行时间。目前 `V8` 在体面的英特尔硬件上运行整个基准测试大概只需要 200ms使用默认配置。次要的 `GC` 在 1ms 到 25ms 之间(取决于新空间中的活对象和旧空间的碎片),而主 `GC` 暂停可以浪费掉 30ms甚至不考虑增量标记的开销这超过了 `SunSpider` 套件总体执行时间的 10%!因此,任何不想因 `GC` 循环而造成减速 10-20% 的引擎,必须用某种方式确保它在运行 `SunSpider` 时不会触发 `GC`
除了这些非常具体的测试问题SunSpider 还有一个根本的问题:总体执行时间。目前 V8 在体面的英特尔硬件上运行整个基准测试大概只需要 200ms使用默认配置。次要的 `GC` 在 1ms 到 25ms 之间(取决于新空间中的活对象和旧空间的碎片),而主 `GC` 暂停可以浪费掉 30ms甚至不考虑增量标记的开销这超过了 SunSpider 套件总体执行时间的 10%!因此,任何不想因 `GC` 循环而造成减速 10-20% 的引擎,必须用某种方式确保它在运行 SunSpider 时不会触发 `GC`
[![driver-TEMPLATE.html](http://benediktmeurer.de/images/2016/sunspider-driver-20161216.png)][54]
就实现而言,有不同的方案,不过就我所知,没有一个在现实场景中产生了任何积极的影响。`V8` 使用了一个相当简单的技巧:由于每个 `SunSpider` 套件都运行在一个新的 `<iframe>` 中,这对应于 `V8` 中一个新的本地上下文,我们只需检测快速的 `<iframe>` 创建和处理(所有的 `SunSpider` 测试花费的时间小于 50ms在这种情况下在处理和创建之间执行垃圾回收以确保我们在实际运行测试的时候不会触发 `GC`。这个技巧很好99.9% 的案例没有与实际用途冲突;除了每时每刻,无论你在做什么,都让你看起来像是 `V8``SunSpider` 测试驱动程序,那么你可能会遇到困难,或许你可以通过强制 `GC` 来解决,不过这对你的应用可能会有负面影响。所以紧记一点:**不要让你的应用看起来像 SunSpider**
就实现而言有不同的方案不过就我所知没有一个在现实场景中产生了任何积极的影响。V8 使用了一个相当简单的技巧:由于每个 SunSpider 套件都运行在一个新的 `<iframe>` 中,这对应于 V8 中一个新的本地上下文,我们只需检测快速的 `<iframe>` 创建和处理(所有的 SunSpider 测试花费的时间小于 50ms在这种情况下在处理和创建之间执行垃圾回收以确保我们在实际运行测试的时候不会触发 `GC`。这个技巧很好99.9% 的案例没有与实际用途冲突;除了每时每刻,无论你在做什么,都让你看起来像是 V8 的 SunSpider 测试驱动程序,那么你可能会遇到困难,或许你可以通过强制 `GC` 来解决,不过这对你的应用可能会有负面影响。所以紧记一点:**不要让你的应用看起来像 SunSpider**
我可以继续展示更多 `SunSpider` 示例,但我不认为这非常有用。到目前为止,应该清楚的是,`SunSpider` 为刷新业绩而做的进一步优化在现实场景中没有带来任何好处。事实上,世界可能会因为没有 `SunSpider` 而更美好,因为引擎可以放弃只是用于 `SunSpider` 的奇淫技巧甚至可以伤害到现实中的用例。不幸的是SunSpider 仍然被(科技)媒体大量地用来比较他们眼中的浏览器性能,或者甚至用来比较手机!所以手机制造商和安卓制造商对于让 `SunSpider`(以及其它现在毫无意义的基准 `FWIW` 上的 `Chrome` 看起来比较体面自然有一定的兴趣。手机制造商通过销售手机来赚钱,所以获得良好的评价对于电话部门甚至整间公司的成功至关重要。其中一些部门甚至在其手机中配置在 `SunSpider` 中得分较高的旧版 `V8`,向他们的用户展示各种未修复的安全漏洞(在新版中早已被修复),并保护用户免受最新版本的 `V8` 的任何现实场景的性能优势!
我可以继续展示更多 SunSpider 示例,但我不认为这非常有用。到目前为止,应该清楚的是,SunSpider 为刷新业绩而做的进一步优化在现实场景中没有带来任何好处。事实上,世界可能会因为没有 SunSpider 而更美好,因为引擎可以放弃只用于 SunSpider 的奇淫技巧——那些技巧甚至可以伤害到现实中的用例。不幸的是,SunSpider 仍然被(科技)媒体大量地用来比较他们眼中的浏览器性能,甚至用来比较手机!所以手机制造商和安卓制造商对于让 SunSpider(以及其它现在毫无意义的基准)上的 Chrome 看起来比较体面,自然有一定的兴趣。手机制造商通过销售手机来赚钱,所以获得良好的评价对于手机部门甚至整间公司的成功至关重要。其中一些部门甚至在其手机中配置在 SunSpider 中得分较高的旧版 V8,向他们的用户展示各种未修复的安全漏洞(在新版中早已被修复),并“保护”用户免受最新版本的 V8 带来的任何现实场景的性能优势!
[![Galaxy S7 和 S7 Edge 的评价:三星的高光表现](http://benediktmeurer.de/images/2016/engadget-20161216.png)][55]
作为 `JavaScript` 社区的一员,如果我们真的想认真对待 `JavaScript` 领域现实场景的性能,我们需要让各大技术媒体停止使用传统的 `JavaScript` 基准来比较浏览器或手机。我看到的一个好处是能够在每个浏览器中运行一个基准测试,并比较它的得分,但是请使用一个与当今世界相关的基准,例如真实的 `web` 页面;如果你觉得需要通过浏览器基准来比较两部手机,请至少考虑使用 [Speedometer][56]。
作为 JavaScript 社区的一员,如果我们真的想认真对待 JavaScript 领域现实场景的性能,我们需要让各大技术媒体停止使用传统的 JavaScript 基准来比较浏览器或手机。我看到的一个好处是能够在每个浏览器中运行一个基准测试,并比较它的得分,但是请使用一个与当今世界相关的基准,例如真实的 `web` 页面;如果你觉得需要通过浏览器基准来比较两部手机,请至少考虑使用 [Speedometer][56]。
### 轻松一刻
![](http://images-cdn.9gag.com/photo/avZd9NX_700b.jpg)
我一直很喜欢这个 [Myles Borins][57] 谈话,所以我不得不无耻地向他偷师。现在我们从 `SunSpider` 的谴责中回过头来,让我们继续检查其它经典基准。
我一直很喜欢这个 [Myles Borins][57] 谈话,所以我不得不无耻地向他偷师。现在我们从 SunSpider 的谴责中回过头来,让我们继续检查其它经典基准。
### 不是那么详细的 Kraken 案例
`Kraken` 基准是 [Mozilla 于 2010 年 9 月 发布的][58],据说它包含了现实场景应用的片段/内核,并且与 `SunSpider` 相比少了一个微基准。我不想花太多时间在 `Kraken` 上,因为我认为它不像 `SunSpider``Octane` 一样对 `JavaScript` 性能有着深远的影响,所以我将强调一个特别的案例——[`audio-oscillator.js`][59] 测试。
Kraken 基准是 [Mozilla 于 2010 年 9 月发布的][58],据说它包含了现实场景应用的片段/内核,与 SunSpider 相比,它更不像一个微基准。我不想花太多时间在 Kraken 上,因为我认为它不像 SunSpider 和 Octane 一样对 JavaScript 性能有着深远的影响,所以我将强调一个特别的案例——[`audio-oscillator.js`][59] 测试。
[![audio-oscillator.js](http://benediktmeurer.de/images/2016/audio-oscillator-20161216.png)][60]
@ -163,7 +169,7 @@ $
如果我们知道整数模数运算的右边是 2 的幂,我们可以生成[更好的代码][64],显然完全避免了英特尔上的 `idiv` 指令。所以我们需要获取一种信息使 `this.waveTableLength``Oscillator` 构造器到 `Oscillator.prototype.generate` 中的模运算都是 2048。一个显而易见的方法是尝试依赖于将所有内容内嵌到 `calcOsc` 函数,并让 `load/store` 消除为我们进行的常量传播,但这对于在 `calcOsc` 函数之外分配的正弦振荡器无效。
因此,我们所做的就是添加支持跟踪某些常数值作为模运算符的右侧反馈。这在 `V8` 中是有意义的,因为我们为诸如 `+`、`*` 和 `%` 的二进制操作跟踪类型反馈,这意味着操作者跟踪输入的类型和产生的输出类型(参见最近圆桌讨论关于[动态语言的快速运算][65]的幻灯片)。当然,用 `fullcodegen``Crankshaft` 挂接起来也是相当容易的,`MOD` 的 `BinaryOpIC` 也可以跟踪两个右边的已知权。
因此,我们所做的就是添加支持,把某些常数值作为模运算符右侧的反馈来跟踪。这在 V8 中是有意义的,因为我们为诸如 `+`、`*` 和 `%` 的二进制操作跟踪类型反馈,这意味着操作者跟踪输入的类型和产生的输出类型(参见最近圆桌讨论关于[动态语言的快速运算][65]的幻灯片)。当然,用 `fullcodegen` 和 `Crankshaft` 挂接起来也是相当容易的,`MOD` 的 `BinaryOpIC` 也可以跟踪右侧已知的 2 的幂。
```
$ ~/Projects/v8/out/Release/d8 --trace-ic audio-oscillator.js
@ -173,7 +179,7 @@ $ ~/Projects/v8/out/Release/d8 --trace-ic audio-oscillator.js
$
```
显示表明 `BinaryOpIC` 正在为模数的右侧拾取适当的恒定反馈,并正确跟踪左侧始终是一个小整数(`V8``Smi` 说),我们也总是产生一个小的整数结果 。 使用 `--print-opt-code -code-comments` 查看生成的代码,很快就显示出,`Crankshaft` 利用反馈在 `Oscillator.prototype.generate` 中为整数模数生成一个有效的代码序列:
显示表明,`BinaryOpIC` 正在为模数的右侧拾取适当的常数反馈,并正确跟踪左侧始终是一个小整数(V8 术语中的 `Smi`),我们也总是产生一个小整数结果。使用 `--print-opt-code --code-comments` 查看生成的代码,很快就显示出,`Crankshaft` 利用反馈在 `Oscillator.prototype.generate` 中为整数模数生成一个有效的代码序列:
```
[...SNIP...]
@ -230,7 +236,7 @@ $
这是一个非常可怕的性能悬崖的例子:假设开发人员为库编写代码,并使用某些样本输入值进行仔细的调整和优化,性能是体面的。现在,用户开始使用该库读取性能日志,但不知何故从性能悬崖下降,因为她/他正在以一种稍微不同的方式使用库,即以某种方式污染某种 `BinaryOpIC` 的类型反馈,并且遭受 20% 的减速(与该库作者的测量相比),该库的作者和用户都无法解释,这似乎是随机的。
现在这在 `JavaScript` 领域并不少见,不幸的是,这些悬崖中有几个是不可避免的,因为它们是由于 `JavaScript` 的性能是基于乐观的假设和猜测的事实。我们已经花了 **大量** 时间和精力来试图找到避免这些性能悬崖的方法,不过依旧提供(几乎)相同的性能。事实证明,尽可能避免 `idiv` 是很有意义的,即使你不一定知道右边总是一个 2 的幂(通过动态反馈),所以为什么 `TurboFan` 的做法有异于 `Crankshaft` 的做法,因为它总是在运行时检查输入是否是 2 的幂,所以一般情况下,对于有符整数模数,优化两个右手侧的(未知)权看起来像这样(在伪代码中):
现在这在 JavaScript 领域并不少见,不幸的是,这些悬崖中有几个是不可避免的,因为它们源于 JavaScript 的性能是基于乐观的假设和猜测的这一事实。我们已经花了 **大量** 时间和精力来试图找到避免这些性能悬崖的方法,同时仍提供(几乎)相同的性能。事实证明,尽可能避免 `idiv` 是很有意义的,即使你不一定知道右侧总是一个 2 的幂(通过动态反馈),所以 `TurboFan` 的做法有异于 `Crankshaft`:它总是在运行时检查输入是否是 2 的幂,所以一般情况下,对于有符号整数模数、右侧为未知的 2 的幂的优化看起来像这样(伪代码):
```
if 0 < rhs then
@ -259,21 +265,21 @@ Time (audio-oscillator-once): 69 ms.
$
```
基准和过度专业化的问题在于基准可以给你提示在哪里可以看看以及该怎么做,但它不告诉你你应该走多远,不能保护优化。例如,所有 `JavaScript` 引擎都使用基准来防止性能下降,但是运行 `Kraken` 不能保护我们在 `TurboFan` 中的一般方法,即我们可以将 `TurboFan` 中的模优化降级到过度专业版本的 `Crankshaft`,而基准不会告诉我们却在倒退的事实,因为从基准的角度来看这很好!现在你可以扩展基准,也许以上面我们做的相同的方式,并试图用基准覆盖一切,这是引擎实现者在一定程度上做的事情,但这种方法不会任意缩放。即使基准测试方便,易于用来沟通和竞争,以常识所见你还是需要留下空间,否则过度专业化将支配一切,你会有一个真正的、可接受的、巨大的性能悬崖线。
基准和过度专业化的问题在于,基准可以给你提示该看看哪里以及该怎么做,但它不会告诉你应该走多远,也不能保护合理的优化。例如,所有 JavaScript 引擎都使用基准来防止性能下降,但是运行 Kraken 不能保护我们在 `TurboFan` 中的一般方法,即我们可以将 `TurboFan` 中的模优化降级到过度专业化的 `Crankshaft` 版本,而基准不会告诉我们这其实是在倒退,因为从基准的角度来看这很好!现在你可以扩展基准,也许以上面我们做的相同的方式,并试图用基准覆盖一切,这是引擎实现者在一定程度上做的事情,但这种方法无法任意扩展。即使基准测试方便,易于用来沟通和竞争,你还是需要为常识留下空间,否则过度专业化将支配一切,你会碰到一条真实存在的、人人默许的、巨大的性能悬崖线。
Kraken 测试还有许多其它的问题,不过现在让我们继续讨论过去五年中最有影响力的 JavaScript 基准测试—— Octane 测试。
### 深入接触 Octane
[Octane][66] 基准是 V8 基准的继承者,最初由[谷歌于 2012 年中期发布][67],目前的版本 Octane 2.0 [于 2013 年年底发布][68]。这个版本包含 15 个独立测试,其中对于 `Splay` 和 `Mandreel`,我们用来测试吞吐量和延迟。这些测试范围从 [微软 TypeScript 编译器][69] 编译自身,到 `zlib` 测试测量原生的 [asm.js][70] 性能,再到 `RegExp` 引擎的性能测试、光线追踪器、2D 物理引擎等。有关各个基准测试项的详细概述,请参阅[说明书][71]。所有这些测试项目都经过仔细的筛选,以反映 JavaScript 性能的方方面面,我们认为这在 2012 年非常重要,或许预计在不久的将来会变得更加重要。
在很大程度上 Octane 在实现其将 JavaScript 性能提高到更高水平的目标方面无比的成功,它在 2012 年和 2013 年引导了良性的竞争Octane 创造了巨大的业绩和成就。但是现在将近 2017 年了,世界看起来与 2012 年真的迥然不同了。除了通常和经常被引用的批评Octane 中的大多数项目基本上已经过时(例如,老版本的 `TypeScript``zlib` 通过老版本的 [Emscripten][72] 编译而成,`Mandreel` 甚至不再可用等等),某种更重要的方式影响了 Octane 的用途:
我们看到大型 web 框架赢得了 web 上的竞赛,尤其是像 [Ember][73] 和 [AngularJS][74] 这样的重型框架,它们的 JavaScript 执行模式根本没有被 Octane 反映出来,并且经常受到(我们)针对 Octane 的具体优化的损害。我们还看到 JavaScript 在服务器和工具领域获胜,这意味着有大规模的 JavaScript 应用现在通常运行数星期甚至数年,而这些场景不会被 Octane 捕获。正如开篇所述,我们有确切的数据表明 Octane 的执行剖面和内存剖面与我们每天在 web 上看到的截然不同。
让我们来看看今天玩弄 Octane 基准的一些具体例子其中的优化已不再反映现实场景。请注意即使这听起来可能有点负面但绝无此意正如我已经说过好几遍Octane 是 JavaScript 性能故事中的重要一章,它发挥了至关重要的作用。过去由 Octane 驱动的 JavaScript 引擎中的所有优化都是善意添加的,因为 Octane 曾是现实场景性能的好代理!每个年代都有它的基准,而对于每一个基准,都有一个你必须放手的时刻!
话虽如此,让我们言归正传,首先看看 `Box2D` 测试,它基于 [Box2DWeb][75](一个最初由 Erin Catto 编写、后被移植到 JavaScript 的流行 2D 物理引擎)。总的来说,它有大量的浮点数学运算,驱动了很多 JavaScript 引擎做出不错的优化,但事实证明它包含一个可以肆意玩弄基准的漏洞(怪我,是我发现了这个漏洞,并在这个案例中加以利用)。基准中有一个函数 `D.prototype.UpdatePairs`,看起来像这样:
```
D.prototype.UpdatePairs = function(b) {
[...SNIP...]
x.proxyA = t < m ? t : m;
x.proxyB = t >= m ? t : m;
```
所以这两行看似无辜的代码要为 99% 的时间开销负责!这是怎么回事?好吧,与 JavaScript 中的许多东西一样,[抽象关系比较][79] 的直观用法不一定是正确的。在这个函数中,`t` 和 `m` 都是 `L` 的实例,`L` 是这个应用的一个核心类,但它没有覆盖 `Symbol.toPrimitive`、`"toString"`、`"valueOf"` 或 `Symbol.toStringTag` 这些与抽象关系比较相关的属性。所以如果你写 `t < m` 会发生什么呢?
1. 调用 [ToPrimitive][12](`t`, `hint Number`)。
2. 运行 [OrdinaryToPrimitive][13](`t`, `"number"`),因为这里没有 `Symbol.toPrimitive`
```
[...SNIP...]
Score (Box2D): 55359
$
```
那么我们是怎么做的呢?事实证明,我们已经有一种跟踪被比较对象形状的机制——比较发生于 `CompareIC`,即所谓的已知接收器映射跟踪其中的映射是 V8 的对象形状+原型),不过它仅限于抽象相等和严格相等比较。我可以很容易地扩展这个跟踪,为抽象关系比较也收集反馈:
```
$ ~/Projects/v8/out/Release/d8 --trace-ic octane-box2d.js
[...SNIP...]
$
```
这里基准代码中使用的 `CompareIC` 告诉我们,对于我们正在查看的函数中的 `LT`(小于)和 `GTE`(大于或等于)比较,到目前为止只见过 `RECEIVERs`接收器V8 中的 JavaScript 对象),并且所有这些接收器具有相同的映射 `0x1d5a860493a1`,其对应于 `L` 实例的映射。因此,在优化的代码中,只要我们知道比较的两侧映射的结果都为 `0x1d5a860493a1`,并且没人混淆 `L` 的原型链(即 `Symbol.toPrimitive`、`"valueOf"` 和 `"toString"` 这些方法都是默认的,并且没人安装过 `Symbol.toStringTag` 访问器),我们就可以将这些操作分别常量折叠为 `false` 和 `true`。剩下的故事都是关于 `Crankshaft` 的黑魔法,其中很大一部分在于初始化时忘记正确地检查 `Symbol.toStringTag` 属性:
[![Hydrogen 黑魔法](http://benediktmeurer.de/images/2016/hydrogen-compare-20161216.png)][80]
![Box2D 加速](http://benediktmeurer.de/images/2016/awfy-box2d-20161216.png)
我要声明一下,当时我并不相信这种特定的行为总是指向源代码中的漏洞,所以我甚至期望外部代码经常会遇到这种情况,同时也因为我假设 JavaScript 开发人员不会总是关心这类潜在的错误。但是,我大错特错了,在此我马上悔改!我不得不承认,这个特殊的优化纯粹是为了基准测试,并不会有助于任何真实代码(除非代码是为了从这个优化中获益而写,不过那样的话你可以直接在代码中写入 `true` 或 `false`,而不用总是使用常量关系比较)。你可能想知道我们为什么在打上补丁后又很快回滚了它:这是我们整个团队投入到 ES2015 实施的非常时期,这才是真正的恶魔之舞,我们需要在没有严格回归测试的情况下将所有新特性ES2015 就是个怪兽)纳入传统基准。
关于 `Box2D` 就点到为止吧,让我们看看 `Mandreel` 基准。Mandreel 是一个将 C/C++ 代码编译成 JavaScript 的编译器,它没有使用新一代 [Emscripten][82] 编译器所采用的 JavaScript 子集 [asm.js][81]并且已经被弃用或多或少已经从互联网上消失大约三年了。然而Octane 仍然有一个通过 [Mandreel][84] 编译的[子弹物理引擎][83]。`MandreelLatency` 测试十分有趣,它给 `Mandreel` 基准配上了频繁的时间测量检测点。有一种说法是,由于 `Mandreel` 会给虚拟机的编译器带来压力,此测试可以反映出由编译器引入的延迟,测量检测点之间的长时间停顿会降低最终得分。这听起来似乎合情合理,确实有一定的意义。然而,像往常一样,供应商找到了在这个基准上作弊的方法。
[![Mozilla 1162272 漏洞](http://benediktmeurer.de/images/2016/bugzilla-mandreel-20161216.png)][85]
`Mandreel` 自带一个重型初始化函数 `global_init`,光是解析这个函数并为其生成基线代码就要花费不可思议的时间。因为引擎通常会多次解析脚本中的各个函数,先用一个所谓的预解析步骤来发现脚本内的函数,然后当函数第一次被调用时再完整解析它以生成基线代码(或者说字节码)。这在 V8 中被称为[懒解析][86]。V8 有一些启发式规则来检测函数是否会被立刻调用、预解析是否纯属浪费时间,不过对于 `Mandreel` 基准的 `global_init` 函数来说这并不明确,于是我们就会在这个大家伙身上经历“预解析+解析+编译”的长时间停顿。所以我们[添加了一个额外的启发式规则][87],以避免对 `global_init` 函数进行预解析。
[![MandreelLatency 基准](http://benediktmeurer.de/images/2016/awfy-mandreel-20161216.png)][88]
[![splay.js](http://benediktmeurer.de/images/2016/splay-insertnode-20161216.png)][90]
这是伸展树splay tree结构的核心尽管你可能想看看完整的基准不过这或多或少是 `SplayLatency` 得分的重要来源。怎么回事?实际上,该基准测试所做的就是建立巨大的伸展树,并尽可能保留所有节点存活。使用像 V8 这样的分代垃圾回收器,如果程序违反了[分代假设][91],就会导致极端的时间停顿,因为从本质上看,把所有东西从新生代撤到老生代的开销是非常昂贵的。在旧配置中运行 V8 可以清楚地展示这个问题:
```
$ out/Release/d8 --trace-gc --noallocation_site_pretenuring octane-splay.js
[...SNIP...]
$
```
喘口气。
好吧,我想这足以强调我的观点了。我可以继续指出更多的例子,其中 Octane 驱动的改进后来变成了一个坏主意,也许改天我会接着写下去。但是今天就到此为止了吧。
### 结论
[![2016 年 10 月浏览器基准之战: Chrome、Firefox 和 Edge 的决战](http://benediktmeurer.de/images/2016/venturebeat-20161216.png)][99]
没人害怕竞争,但是玩弄可能已经坏掉的基准不像是在合理使用工程时间。我们可以尽更大的努力,并把 JavaScript 提高到更高的水平。让我们开展有意义的性能测试,以便为最终用户和开发者带来有意思的领域竞争。此外,让我们再对服务器和运行在 Node.js还有 V8 和 `ChakraCore`)中的工具代码做一些有意义的改进!
![](http://benediktmeurer.de/images/2016/measure-20161216.jpg)
结束语:不要用传统的 JavaScript 基准来比较手机。这是真正最没用的事情,因为 JavaScript 的性能通常取决于软件,而不一定是硬件,并且 Chrome 每 6 周发布一个新版本,所以你在三月的测试结果到了四月就已经毫不相关了。如果确实无法避免用浏览器跑分来给手机排座次,那么至少请使用一个现代健全的浏览器基准,一个至少知道人们会用浏览器干什么的基准,比如 [Speedometer 基准][100]。
感谢你花时间阅读!
2016 年度开源创作工具
============================================================
### 无论你是想修改图片、编辑音频,还是创作故事,这里的免费开源工具都能帮你做到。
![2016 年度 36 个开源创作工具](https://opensource.com/sites/default/files/styles/image-full-size/public/u23316/art-yearbook-paint-draw-create-creative.png?itok=KgEF_IN_ "Top 34 open source creative tools in 2016 ")
>图片来源 : opensource.com
几年前我在红帽峰会Red Hat Summit上做了一个简短的演讲给与会者展示了 [2012 年度开源创作工具][12]。开源软件在过去几年里发展迅速,现在我们来看看 2016 年的相关软件。
### 核心应用
这六款应用是开源设计软件中的最强王者。它们做得很棒,拥有完善的功能集、稳定的发行版以及活跃的开发者社区,是很成熟的项目。这六款应用都是跨平台的,每一款都能在 Linux、OS X 和 Windows 上使用,不过大多数情况下 Linux 版本一般都是最先更新的。这些应用广为人知,我把最新版本中重要的新特性写了进来,如果你没有密切关注它们的开发情况,有可能会错过这些特性。
如果你想对这些软件做更深层次的了解或许还想帮助测试其中四款软件——GIMP、Inkscape、Scribus 以及 MyPaint——的最新版本在 Linux 机器上你可以用 [Flatpak][13] 轻松地安装它们。按照“日更绘图应用Nightly Graphics Apps”的[指令][14],每款应用的每夜构建版都可以通过 Flatpak 获取。有一件事要注意:如果你要给每款应用的 Flatpak 版本安装笔刷或者其它扩展,放置扩展的目录位于 **~/.var/app** 下相应应用的目录中。
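举个例子,用 Flatpak 安装并运行 GIMP 大致是下面这个流程(仓库地址与应用 ID 以当时的 GNOME 仓库为例,实际请以 flatpak.org 上的指令为准):

```
# 添加仓库、安装并运行应用(仓库地址仅为示例)
flatpak remote-add --from gnome https://sdk.gnome.org/gnome.flatpakrepo
flatpak install gnome org.gimp.GIMP
flatpak run org.gimp.GIMP
```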
#### GIMP
[GIMP][15] [在 2015 年迎来了它的 20 周岁][16]使得它成为这里资历最久的开源创造型应用之一。GIMP 是一款强大的应用,可以处理图片,创作简单的绘画,以及插图。你可以通过简单的任务来尝试 GIMP比如裁剪、缩放图片然后循序渐进使用它的其它功能。GIMP 可以在 LinuxMac OS X 以及 Windows 上使用,是一款跨平台的应用,而且能够打开、导出一系列格式的文件,包括在与之相似的软件 Photoshop 上广为应用的那些格式。
GIMP 开发团队正在忙于 2.10 发行版的工作;[2.8.18][17] 是最新的稳定版本。更振奋人心的是非稳定版 [2.9.4][18]:它拥有全新的用户界面,旨在节省空间的符号化图标和黑色主题,改进了颜色管理,更多基于 GEGL、支持分离预览的滤镜支持 MyPaint 笔刷(如下图所示),还有对称绘图以及命令行批处理。想了解更多信息,请参阅[完整的发行说明][19]。
![GIMP 截图](https://opensource.com/sites/default/files/gimp_520.png "GIMP 截图")
#### Inkscape
[Inkscape][20] 是一款富有特色的矢量绘图设计软件。可以用它来创作简单的图形,图表,设计或者图标。
最新的稳定版是 [0.91][21] 版本;与 GIMP 相似,更多有趣的东西出现在先行版 0.92pre3 中,它发布于 2016 年 11 月。最新先行版的突出特点是[渐变网格特性gradient mesh feature][22]如下图所示0.91 发行版里引入的新特性包括:[力度笔画power stroke][23],用于完全可配置的书法笔画(下图 “opensource.com” 中的 “open” 用的就是力度笔画),画布上的测量工具,以及[全新的符号对话框][24](如下图右侧所示)。(很多符号库可以从 GitHub 上获得;[Xaviju's inkscape-open-symbols set][25] 就很不错。_对象_对话框是开发版或每日构建中可用的新特性,它列出文档中的所有对象,并提供管理这些对象的工具。
![Inkscape 截图](https://opensource.com/sites/default/files/inkscape_520.png "Inkscape 截图")
#### Scribus
[Scribus][26] 是一款强大的桌面出版和页面设计工具。Scribus 让你能够创造精致美丽的作品包括信封、书籍、杂志以及其它印刷品。Scribus 的颜色管理工具可以处理和输出 CMYK 格式,还能让文件在印刷店得到可靠的重现。
[1.4.6][27] 是 Scribus 的最新稳定版本;[1.5.x][28] 系列的发行版更令人期待,因为它们是即将到来的 1.6.0 发行版的预览。1.5.3 版本包含了 Krita 文件(*.KRA导入工具 1.5.x 系列中其它的改进包括了 _表格_ 工具,文本框对齐,脚注,导出可选 PDF 格式,改进的字典,可驻留的颜色板,符号工具,扩展的文件格式支持。
![Scribus 截图](https://opensource.com/sites/default/files/scribus_520.png "Scribus 截图")
#### MyPaint
[MyPaint][29] 是一款以数位板为中心、富于表现力的绘图和插画工具。它很轻巧,界面虽小,但快捷键丰富,因此你能够不用放下笔,专心于绘图。
[MyPaint 1.2.0][30] 是最新的稳定版本,包含了一些新特性,诸如用来在铅笔稿上描线的[直观上墨工具][31],新的填充工具,笔刷和颜色的历史面板,用户界面的改进(包括暗色主题和一些符号化图标),以及可编辑的矢量层。想要尝试 MyPaint 里的最新改进,我建议安装日更的 Flatpak 构建,尽管 1.2.0 版本之后还没有添加重要的新特性。
![MyPaint 截图](https://opensource.com/sites/default/files/mypaint_520.png "MyPaint 截图")
#### Blender
[Blender][32] 最初发布于 1995 年 1 月,像 GIMP 一样,已经有 20 多年的历史了。Blender 是一款功能强大的开源 3D 制作套件,包含建模、雕刻、渲染、真实感材质、骨骼绑定、动画、影像合成、视频编辑、游戏创作以及模拟。
Blender 最新的稳定版是 [2.78a][33]。2.78 版本很庞大,包含的特性有:改进的 2D _蜡笔Grease Pencil_ 动画工具;针对球面立体图片的 VR 渲染支持;以及新的手绘曲线的绘图工具。
![Blender 截图](https://opensource.com/sites/default/files/blender_520.png "Blender 截图")
要尝试最新的 Blender 开发工具,有很多种选择,包括:
* Blender 基金会在官方网站上提供 [不稳定的每日构建版][2]。
* 如果你在寻找包含特定的开发中特性的版本,[graphicall.org][3] 是一个由社区运营的网站,提供特殊版本的 Blender偶尔还有其它的创新型开源应用让艺术家能够尝试最新的代码和实验品。
* Mathieu Bridon 通过 Flatpak 制作了 Blender 的开发版本。详情请查看他的博客:[Flatpak 上日更的 BlenderBlender nightly in Flatpak][4]。
#### Krita
[Krita][34] 是一款功能丰富的数字绘图应用。这款应用贴合插画师、概念画师以及漫画家的需求,有很多附件,比如笔刷、颜色板、图案以及模版。
最新的稳定版是 [Krita 3.0.1][35],于 2016 年 9 月发布。3.0.x 系列的新特性包括 2D 逐帧动画;改进的层管理器和功能;扩充的常用快捷键;改进的网格、参考线和吸附;还有软打样。
![Krita 截图](https://opensource.com/sites/default/files/krita_520.png "Krita 截图")
### 视频处理工具
关于开源的视频编辑工具则有很多很多。这些工具之中,[Flowblade][36] 是新推出的,而 Kdenlive 则是构建完善、对新手友好、功能最全的竞争者。帮你排除某些选项的主要标准是它们所支持的平台,其中一些只支持 Linux 平台。它们的软件上游都很活跃,最新的稳定版都于近期发布,发布时间相差不到一周。
#### Kdenlive
[Kdenlive][37],最初于 2002 年发布,是一款强大的非线性视频编辑器,有 Linux 和 OS X 版本(但是 OS X 版本已经过时了。Kdenlive 有用户友好的、基于拖拽的用户界面,适合初学者,又有专业人员需要的深层次功能。
可以看看 Seth Kenlon 写的 [Kdenlive 系列教程multi-part Kdenlive tutorial series][38],了解如何使用 Kdenlive。
* 最新稳定版: 16.08.2 (2016 年 10 月)
![](https://opensource.com/sites/default/files/images/life-uploads/kdenlive_6_leader.png)
#### Flowblade
2012 年发布的 [Flowblade][39] 是一款只有 Linux 版本的视频编辑器,是个相当不错的后起之秀。
* 最新稳定版: 1.8 (2016 年 9 月)
#### Pitivi
[Pitivi][40] 是用户友好型的免费开源视频编辑器。Pitivi 是用 [Python][41] 编写的“Pitivi” 中的 “Pi” 即来源于此),使用了 [GStreamer][42] 多媒体框架,社区活跃。
* 最新稳定版: 0.97 (2016 年 8 月)
* 通过 Flatpak 获取 [最新版本][5]
#### Shotcut
[Shotcut][43] 是一款免费开源跨平台的视频编辑器,[早在 2004 年][44]就发布了,之后由现在的主要开发者 [Dan Dennedy][45] 重写。
* 最新稳定版: 16.11 (2016 年 11 月)
* 支持 4K 分辨率
* 以 tarball 打包的二进制文件形式发布
#### OpenShot Video Editor
始于 2008 年,[OpenShot Video Editor][46] 是一款免费、开源、易于使用、跨平台的视频编辑器。
* 最新稳定版: [2.1][6] (2016 年 8 月)
### 其它工具
#### SwatchBooker
[SwatchBooker][47] 是一款很方便的工具尽管它近几年都没有更新了但它还是很有用。SwatchBooker 能帮助用户从各大制造商那里合法地获取颜色样本,你可以用其它免费开源的工具处理它导出的格式,包括 Scribus。
#### GNOME Color Manager
[GNOME Color Manager][48] 是 GNOME 桌面环境内建的颜色管理器,而 GNOME 是某些 Linux 发行版的默认桌面。这个工具让你能够用色度计为自己的显示设备创建色彩配置文件还可以为这些设备加载/管理 ICC 颜色配置文件。
#### GNOME Wacom Control
[The GNOME Wacom controls][49] 允许你在 GNOME 桌面环境中配置自己的手写板;你可以修改手写板交互的很多选项,包括自定义手写板灵敏度,以及手写板映射到哪块屏幕上。
#### Xournal
[Xournal][50] 是一款简单但可靠的应用你能够用手写板进行手写或者在笔记上涂鸦。Xournal 是一款有用的签名工具,也可以用来注解 PDF 文档。
#### PDF Mod
[PDF Mod][51] 是一款编辑 PDF 文件很方便的工具。PDF Mod 让用户可以移除页面,添加页面,将多个 PDF 文档合并成一个单独的 PDF 文件,重新排列页面,旋转页面等。
#### SparkleShare
[SparkleShare][52] 是一款基于 git 的文件分享工具,艺术家用它来协作和分享资源。把它挂接到 GitLab 仓库上你就能得到一套精妙的开源资源管理架构。SparkleShare 的前端通过在 git 之上提供一个类似 Dropbox 的界面,消除了使用 git 的不可预测性。
### 摄影
#### Darktable
[Darktable][53] 是一款能让你处理数码相机 RAW 文件的应用它提供一系列工具可以管理工作流、无损编辑图片。Darktable 支持许多流行的相机和镜头。
![改变颜色平衡度的图片](https://opensource.com/sites/default/files/dt_colour.jpg "改变颜色平衡度的图片")
#### Entangle
[Entangle][54] 允许你将数字相机连接到电脑上,让你能从电脑上完全控制相机。
#### Hugin
[Hugin][55] 是一款工具,让你可以拼接照片,从而制作全景照片。
### 2D 动画
#### Synfig Studio
[Synfig Studio][56] 是基于矢量的二维动画套件,支持位图原图,在平板上用起来方便。
#### Blender Grease Pencil
我在前面讲过了 Blender但值得注意的是最近的发行版里的 [重构的蜡笔特性a refactored grease pencil feature][57],添加了创作二维动画的功能。
#### Krita
[Krita][58] 现在同样提供了二维动画功能。
### 音频编辑
#### Audacity
[Audacity][59] 在编辑音频文件,记录声音方面很有名,是用户友好型的工具。
#### Ardour
[Ardour][60] 是一款数字音频工作站软件以录音、编辑和混音工作流为中心。使用上它比 Audacity 稍微难一点但它支持自动化而且更为高端。有 Linux、Mac OS X 和 Windows 版本)
#### Hydrogen
[Hydrogen][61] 是一款开源的电子鼓,界面直观。它可以用合成的乐器创作、整理各种乐谱。
#### Mixxx
[Mixxx][62] 是四轨four-deckDJ 套件,让你能够以强大的控制能力进行 DJ 和混音,包含节拍循环、变速、变调,还可以借助对 DJ 硬件控制器的支持直播你的混音。
#### Rosegarden
[Rosegarden][63] 是一款作曲软件集乐谱编写与音乐创作、编辑于一体提供音频和 MIDI 音序器。译注MIDI 即 Musical Instrument Digital Interface乐器数字接口
#### MuseScore
[MuseScore][64] 是乐谱创作,记谱和编辑的软件,它还有个乐谱贡献者社区。
### 其它具有创造力的工具
#### MakeHuman
[MakeHuman][65] 是一款三维建模工具,可以创建逼真的人形模型。
<iframe allowfullscreen="" frameborder="0" height="293" src="https://www.youtube.com/embed/WiEDGbRnXdE?rel=0" width="520"></iframe>
#### Natron
[Natron][66] 是基于节点的合成工具用于视频后期制作、动态图形motion graphics和特效设计。
#### FontForge
[FontForge][67] 是创作和编辑字体的工具。你可以编辑字体中的字形,也能够用这些设计生成字体文件。
#### Valentina
[Valentina][68] 是用来制作缝纫纸样(服装打版)的应用。
#### Calligra Flow
[Calligra Flow][69] 是一款流程图和图表绘制工具,类似 Visio有 Linux、Mac OS X 和 Windows 版本)。
#### 相关资源
这里有很多小玩意和彩蛋值得尝试。需要一点灵感来探索?这些网站和论坛有很多教程和精美的成品能够激发你开始创作:
1. [pixls.us][7]: 摄影师 Pat David 管理的博客,他专注于专业摄影师使用的免费开源的软件和工作流。
2. [David Revoy's Blog][8] David Revoy 的博客,热爱免费开源,非常有天赋的插画师,概念派画师和开源倡议者,对 Blender 基金会电影有很大贡献。
3. [The Open Source Creative Podcast][9]: 由 Opensource.com 社区版主和专栏作家 [Jason van Gumster][10] 主持,他是 Blender 和 GIMP 的专家,也是 [《Blender for Dummies》][1] 的作者。这个播客正好面向我们这些热爱开源创作工具及其周边文化的人。
4. [Libre Graphics Meeting][11]: 免费开源创作软件的开发者和使用这些软件的创作者的年度会议。这是个好地方,你可以通过它找到你喜爱的开源创作软件将会推出哪些有意思的特性,还可以了解到这些软件的用户用它们在做什么。
--------------------------------------------------------------------------------
作者简介:
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-343-8e0fb148b105b450634e30acd8f5b22b.png?itok=oxzTm70z)
Máirín Duffy - Máirín 是 Red Hat 的首席交互设计师。她热衷于自由免费软件和开源工具,尤其是在创作领域:她最喜欢的应用是 [Inkscape](http://inkscape.org)。
--------------------------------------------------------------------------------
via: https://opensource.com/article/16/12/yearbook-top-open-source-creative-tools-2016
作者:[Máirín Duffy][a]
译者:[GitFuture](https://github.com/GitFuture)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mairin
[1]:http://www.blenderbasics.com/
[2]:https://builder.blender.org/download/
[3]:http://graphicall.org/
[4]:https://mathieu.daitauha.fr/blog/2016/09/23/blender-nightly-in-flatpak/
[5]:https://pitivi.wordpress.com/2016/07/18/get-pitivi-directly-from-us-with-flatpak/
[6]:http://www.openshotvideo.com/2016/08/openshot-21-released.html
[7]:http://pixls.us/
[8]:http://davidrevoy.com/
[9]:http://monsterjavaguns.com/podcast/
[10]:https://opensource.com/users/jason-van-gumster
[11]:http://libregraphicsmeeting.org/2016/
[12]:https://opensource.com/life/12/9/tour-through-open-source-creative-tools
[13]:https://opensource.com/business/16/8/flatpak
[14]:http://flatpak.org/apps.html
[15]:https://opensource.com/tags/gimp
[16]:https://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/
[17]:https://www.gimp.org/news/2016/07/14/gimp-2-8-18-released/
[18]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
[19]:https://www.gimp.org/news/2016/07/13/gimp-2-9-4-released/
[20]:https://opensource.com/tags/inkscape
[21]:http://wiki.inkscape.org/wiki/index.php/Release_notes/0.91
[22]:http://wiki.inkscape.org/wiki/index.php/Mesh_Gradients
[23]:https://www.youtube.com/watch?v=IztyV-Dy4CE
[24]:https://inkscape.org/cs/~doctormo/%E2%98%85symbols-dialog
[25]:https://github.com/Xaviju/inkscape-open-symbols
[26]:https://opensource.com/tags/scribus
[27]:https://www.scribus.net/scribus-1-4-6-released/
[28]:https://www.scribus.net/scribus-1-5-2-released/
[29]:http://mypaint.org/
[30]:http://mypaint.org/blog/2016/01/15/mypaint-1.2.0-released/
[31]:https://github.com/mypaint/mypaint/wiki/v1.2-Inking-Tool
[32]:https://opensource.com/tags/blender
[33]:http://www.blender.org/features/2-78/
[34]:https://opensource.com/tags/krita
[35]:https://krita.org/en/item/krita-3-0-1-update-brings-numerous-fixes/
[36]:https://opensource.com/life/16/9/10-reasons-flowblade-linux-video-editor
[37]:https://opensource.com/tags/kdenlive
[38]:https://opensource.com/life/11/11/introduction-kdenlive
[39]:http://jliljebl.github.io/flowblade/
[40]:http://pitivi.org/
[41]:http://wiki.pitivi.org/wiki/Why_Python%3F
[42]:https://gstreamer.freedesktop.org/
[43]:http://shotcut.org/
[44]:http://permalink.gmane.org/gmane.comp.lib.fltk.general/2397
[45]:http://www.dennedy.org/
[46]:http://openshot.org/
[47]:http://www.selapa.net/swatchbooker/
[48]:https://help.gnome.org/users/gnome-help/stable/color.html.en
[49]:https://help.gnome.org/users/gnome-help/stable/wacom.html.en
[50]:http://xournal.sourceforge.net/
[51]:https://wiki.gnome.org/Apps/PdfMod
[52]:https://www.sparkleshare.org/
[53]:https://opensource.com/life/16/4/how-use-darktable-digital-darkroom
[54]:https://entangle-photo.org/
[55]:http://hugin.sourceforge.net/
[56]:https://opensource.com/article/16/12/synfig-studio-animation-software-tutorial
[57]:https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.78/GPencil
[58]:https://opensource.com/tags/krita
[59]:https://opensource.com/tags/audacity
[60]:https://ardour.org/
[61]:http://www.hydrogen-music.org/
[62]:http://mixxx.org/
[63]:http://www.rosegardenmusic.com/
[64]:https://opensource.com/life/16/03/musescore-tutorial
[65]:http://makehuman.org/
[66]:https://natron.fr/
[67]:http://fontforge.github.io/en-US/
[68]:http://valentina-project.org/
[69]:https://www.calligra.org/flow/
安卓IoT能否像在移动终端一样成功
============================================================
![](https://cdn-images-1.medium.com/max/1000/1*GF6e6Vd-22PViWT8EDpLNA.jpeg)
Android Things让IoT如虎添翼
### 我在 Android Things 上的最初 24 小时
正当我在开发一个基于 Android、运行在树莓派 3 上的物联网商业项目时,一些令人惊喜的事情发生了:谷歌发布了 [Android Things][1] 的第一个预览版本,即他们专门针对最初 3 个 SBC单板计算机——树莓派 3、英特尔 Edison 和恩智浦 Pico——的 SDK。说我此前一直在挣扎似乎有些轻描淡写——没有成功的树莓派安卓移植版可以参照我们只能在理想丰满、实践漏洞百出的内测版本上叫苦不迭。其中一个不可原谅的问题是它不支持触摸屏甚至连 [Element14][2] 官方销售的触摸屏也不支持。此前曾有安卓将支持树莓派的传闻,更早时候[谷歌向 AOSP 项目的一次提交][3]中提到过 Pi曾让所有人兴奋不已。所以当 2016 年 12 月 12 日谷歌发布 Android Things 及其 SDK 的时候,我马上闭门谢客,全身心地去研究了……
### 问题?
先前为安卓做扩展的工作和在 Pi 上做过的一些项目(包括前面提到的正在开发中的 Pi 项目),使我对谷歌的安卓产生了许多疑问。未来我会尝试解答它们,但是最重要的问题现在就有了答案——它有完整的 Android Studio 支持Pi 成为列表上的又一个常规的、可通过 ADB 寻址的设备。好极了。Android Studio 强大、便利、纯粹易用的功能包括布局预览、调试系统、源码检查器、自动化测试等可以真正地应用在 IoT 硬件上。这些好处怎么说都不过分。到目前为止,我在 Pi 上的大部分工作都是用 Python 完成的,通过 SSH 运行 Pi 上的编辑器MC如果你真的想知道的话。这是有效的毫无疑问硬核的 Pi/Python 玩家可以指出更好的工作方式,但这种开发模式像极了 80 年代的码农。我的项目还涉及在控制 Pi 的手机上编写安卓软件,这简直是在伤口上撒盐——我得用 Android Studio 做“真正的”安卓工作,用 SSH 做剩下的。但是有了 Android Things 之后,这一切都结束了。
所有的示例代码都适用于3个SBCPi 只是其中之一。 Build.DEVICE常量在运行时确定所以你会看到很多如下代码
```
public static String getGPIOForButton() {
switch (Build.DEVICE) {
case DEVICE_EDISON_ARDUINO:
return "IO12";
case DEVICE_EDISON:
return "GP44";
case DEVICE_RPI3:
return "BCM21";
case DEVICE_NXP:
return "GPIO4_IO20";
default:
throw new IllegalStateException(“Unknown Build.DEVICE “ + Build.DEVICE);
}
}
```
我对 GPIO 处理有浓厚的兴趣。由于我只熟悉 Pi我只能假定其它 SBC 的工作方式相同GPIO 只是一组引脚,可以定义为输入/输出,是连接物理外部世界的主要接口。基于 Linux 的 Pi 操作系统发行版通过 Python 的读写方法提供了完整和便捷的支持,但对于安卓,你必须使用 NDK 编写 C++ 驱动程序,并通过 JNI 在 Java 中与这些驱动程序对接。不是那么困难但需要在你的构建链中维护额外的一些东西。Pi 还为 I2C 指定了 2 个引脚时钟和数据因此需要额外的工作来处理它们。I2C 是真正酷的总线寻址系统它通过串行化将许多独立的数据引脚转换成一个。而这里的优势是——Android Things 已经帮你完成了所有这一切。你只需要对你需要的任何 GPIO 引脚进行读read和写writeI2C 同样容易:
```
public class HomeActivity extends Activity {
// I2C Device Name
private static final String I2C_DEVICE_NAME = ...;
// I2C Slave Address
private static final int I2C_ADDRESS = ...;
private I2cDevice mDevice;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
// Attempt to access the I2C device
try {
PeripheralManagerService manager = new PeripheralManagerService();
mDevice = manager.openI2cDevice(I2C_DEVICE_NAME, I2C_ADDRESS)
} catch (IOException e) {
Log.w(TAG, "Unable to access I2C device", e);
}
}
@Override
protected void onDestroy() {
super.onDestroy();
if (mDevice != null) {
try {
mDevice.close();
mDevice = null;
} catch (IOException e) {
Log.w(TAG, "Unable to close I2C device", e);
}
}
}
}
```
### Android Things 基于 Android 的哪个版本?
看起来是Android 7.0这样很好因为我们可以继承Android以前的所有版本的文档优化安全加固等。它也提出了一个有趣的问题 - 与应用程序必须单独管理不同,未来的平台应如何更新升级?请记住,这些设备可能无法连接到互联网。我们可能不在蜂窝/ WiFi连接的舒适空间虽然之前这些连接至少可用即使有时不那么可靠。
另一个担心是Android Things 会不会只是一个换了名字的 Android 分支,就像 Arduino 曾发布的那个与其说是操作系统、不如说是市场营销手段的产品一样,只是挑拣了一些共同特性。实际上,查看[示例][4]可以发现,一些我以为不会用到的功能也出现了——比如最近的一项 Android 创新:使用 SVG 图形作为资源,而不是传统的基于位图的图形,当然 Android Things 也可以轻松处理。
不可避免地,人们会拿 Android Things 与普通的 Android 比较并提出问题。例如权限问题。因为 Android Things 为固定硬件设计,用户通常不会在这种设备上安装 App所以在一定程度上减轻了这个问题。另外在没有图形界面的设备上请求权限通常不是问题我们可以在安装时把所有权限开放给 App。通常这些设备只有一个应用程序该应用程序从设备上电的那一刻就开始运行。
![](https://cdn-images-1.medium.com/max/800/1*pi7HyLT-BVwHQ_Rw3TDSWQ.png)
### Brillo怎么了
Brillo是谷歌以前的IoT操作系统的代号听起来很像Android的前身。 实际上现在你仍然能看到很多Brillo引用特别是在GitHub Android Things源码的例子中。 然而,它已经不复存在了。新王已经登基!
### UI指南
Google 针对 Android 智能手机和平板电脑应用发布了大量指南,例如屏幕按钮间距等。当然,你最好在可行的情况下遵循这些,但这已经不是本文应该考虑的范畴了。缺省情况下什么也没有——应用程序作者决定一切,这包括顶部状态栏、底部导航栏,绝对是一切。多年来谷歌一直叮嘱 Android 应用程序作者不要在屏幕上渲染返回按钮,因为平台会提供一个;但对于 Android Things[可能根本就没有 UI][5]
### 可以期待多少智能手机上的服务?
有些但不是所有。第一个预览版本没有蓝牙支持。没有NFC两者都对物联网革命有重大贡献。 SBC支持他们所以我们应该不会等待太久。由于没有通知栏因此不支持任何通知。没有地图。缺省没有软键盘你必须自己安装一个。由于没有Play商店你只能屈尊通过 ADB做这个和许多其他操作。
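举个例子,在没有 Play 商店的设备上,安装和启动应用的典型流程大致如下(设备 IP、APK 文件名与包名均为假设):

```
# 通过网络连接 Android Things 设备,然后安装并启动应用
adb connect 192.168.1.50
adb install -r app-debug.apk
adb shell am start -n com.example.iot/.HomeActivity
```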
当开发 Android Things 时,我试图和 Pi 使用同一个 APK。这引发了一个错误阻止它安装在除 Android Things 设备之外的任何设备上:库 “_com.google.android.things_” 不存在。这也说得通,因为只有 Android Things 设备需要这个库,但这似乎有局限,因为不仅智能手机或平板电脑上装不了,任何模拟器上也装不了。似乎只能在物理的 Android Things 设备上运行和测试你的 Android Things 应用程序……直到 Google 在 [G+ 谷歌物联网开发者社区][6]中回答了我的问题,并提供了规避方案。但是,躲过初一,躲不过十五。
### 我如何看待 Android Things 生态的演进?
我期望看到移植更多传统的基于Linux服务器的应用程序这对Android只有智能手机和平板电脑没有意义。例如Web服务器突然变得非常有用。一些已经存在但没有像重量级的Apache或Nginx。物联网设备可能没有本地UI但通过浏览器管理它们当然是可行的因此需要用这种方式呈现Web面板。类似的那些如雷贯耳的通讯应用程序 - 它需要的仅是一个麦克风和扬声器在理论上对任何视频通话应用程序如DuoSkypeFB等都可行。这个演变能走多远目前只能猜测。会有Play商店吗他们会展示广告吗我们可以确定他们不会窥探我们或让黑客控制他们从消费者的角度来看物联网应该是具有触摸屏的网络连接设备因为每个人都已经习惯于通过智能手机工作。
我还期望看到硬件的迅速发展 - 特别是更多的SBC并且拥有更低的成本。看看惊人的5美元 树莓派0不幸的是由于其有限的CPU和RAM几乎肯定不能运行Android Things。多久之后像这样的设备才能运行Android Things这是很明显的标杆已经设定任何自重的SBC制造商将瞄准Android Things的兼容性规模经济也将波及到外围设备如23美元的触摸屏。没人购买不会播放YouTube的微波炉你的洗碗机会在eBay上购买更多的粉末商品因为它注意到你很少使用它……
然而我不认为我们会失去掌控力。了解一点Android架构有助于将其视为一个包罗万象的物联网操作系统。它仍然使用Java并几乎被其所有的垃圾回收机制导致的时序问题锤击致死。这仅仅是问题最少的部分。真正的实时操作系统依赖于可预测准确和坚如磐石的时序或者它不能被描述为“mission critical”。想想医疗应用程序安全监视器工业控制器等。使用Android如果主机操作系统认为它需要理论上可以在任何时候杀死您的活动/服务。在手机上不是那么糟糕 - 用户可以重新启动应用程序杀死其他应用程序或重新启动手机。心脏监视器完全是另一码事。如果前台Activity / Service正在监视一个GPIO引脚并且信号没有被准确地处理我们已经失败了。必须要做一些相当根本的改变让Android来支持这一点到目前为止还没有迹象表明它已经在计划之中了。
### 这 24 小时
所以,回到我的项目。我想我会尽可能沿用已经完成的工作,等遇到不可避免的路障时再向 G+ 社区寻求帮助。除了一些关于如何在非 Android Things 硬件上运行程序的疑问之外,没有遇到其他问题。它运行得很好!这个项目还用到了一些不寻常的东西:自定义字体、高精度定时器——所有这些都在 Android Studio 中完美地展现。对我而言可以打满分——最后我终于可以拿出实际原型,而不只是视频和截图了。
### 蓝图
今天的物联网操作系统环境看起来非常零碎。显然没有市场领导者尽管炒作之声沸反盈天物联网仍然处在草创阶段。谷歌的 Android Things 能否像 Android 在移动端那样取得成功现在Android 在移动端的主导地位非常接近 90%)?我相信果真如此的话Android Things 的推出正是重要的一步。
记住所有的关于开放和封闭软件的战争,它们主要发生在从不授权的苹果和一直担心免费还不够充分的谷歌之间? 那个老梗又来了因为让苹果推出一个免费的物联网操作系统的构想就像让他们免费赠送下一代iPhone一样遥不可及。
物联网操作系统游戏是开放的,大家机遇共享,不过这个时候,封闭派甚至不会公布它们的开发工具箱……
转到[Developer Preview] [7]网站立即获取Android Things SDK的副本。
--------------------------------------------------------------------------------
via: https://medium.com/@carl.whalley/will-android-do-for-iot-what-it-did-for-mobile-c9ac79d06c#.hxva5aqi2
作者:[Carl Whalley][a]
译者:[firstadream](https://github.com/firstadream)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.com/@carl.whalley
[1]:https://developer.android.com/things/index.html
[2]:https://www.element14.com/community/docs/DOC-78156/l/raspberry-pi-7-touchscreen-display
[3]:http://www.androidpolice.com/2016/05/24/google-is-preparing-to-add-the-raspberry-pi-3-to-aosp-it-will-apparently-become-an-officially-supported-device/
[4]:https://github.com/androidthings/sample-simpleui/blob/master/app/src/main/res/drawable/pinout_board_vert.xml
[5]:https://developer.android.com/things/sdk/index.html
[6]:https://plus.google.com/+CarlWhalley/posts/4tF76pWEs1D
[7]:https://developer.android.com/things/preview/index.html
为何我们需要一个开放模型来设计评估公共政策
============================================================
### 想象一个可以让市民“试驾”所提议政策的 app。
![Why we need an open model to design and evaluate public policy](https://opensource.com/sites/default/files/styles/image-full-size/public/images/government/GOV_citizen_participation.jpg?itok=eeLWQgev "Why we need an open model to design and evaluate public policy")
图片提供opensource.com
在政治选举之前的几个月中,公众辩论会加剧,并且公民面临大量的政策选择信息。在数据驱动的社会中,新的见解一直在为决策提供信息,对这些信息的深入了解从未如此重要,但公众仍然没有意识到公共政策建模的全部潜力。
在“开放政府”的概念不断演变以跟上新技术进步的时代,政府的政策模型和分析可能是新一代的开放知识。
政府开源模型 GOSM 是指政府开发的模型,其目的是设计和评估政策,免费提供给所有人使用、分发、不受限制地修改。社区可以提高政策建模的质量、可靠性和准确性,创造有利于公众的新的数据驱动程序。
今天的这代与技术相互作用,就像它的第二大本质,它默认吸收了大量的信息。如果我们可以在使用 GOSM 的虚拟、沉浸式环境中与不同的公共政策进行互动那会如何?
想象一下有一个允许公民测试推动政策来确定他们想要生活的未来的程序。他们会本能地学习关键的驱动因素和所需要的东西。不久之后,公众将更深入地了解公共政策的影响,并更加精明地引导有争议的公众辩论。
为什么我们以前没有更好的使用这些模型?原因在于公共政策建模的神秘面纱。
在一个像我们所生活的这样复杂的社会中,量化政策影响是一项艰巨的任务,曾被描述为一种“精妙的艺术”。此外,大多数政府政策模型都是基于行政数据和其它私人持有的数据。然而,政策分析师在以分析指导政策设计的道路上仍奋力前行,一次次经受政治斗争的冲击。
数字是很有说服力的。它们构建可信度并常常被用作引入新政策的理由。公共政策模型的发展赋予政治家和官僚权力这些政治家和官僚们可能不愿意破坏现状。要放弃这一点可能并不容易但GOSM 为前所未有的公共政策改革提供了机会。
GOSM 将所有人的竞争环境均衡化:政治家、媒体、游说团体、利益相关者和公众。通过向社区开放政策评估的大门, 政府可以利用新的和未发现的能力用来创造、创新在公共领域的效率。但在公共政策设计中,利益相关者和政府之间战略互动有哪些实际影响?
GOSM 是独一无二的,因为它们主要是设计公共政策的工具,而不一定需要重新分配私人收益。利益相关者和游说团体可能会将 GOSM 与其私人信息一起使用,以获得对经济参与者私人利益的政策环境运作的新见解。
GOSM 可以成为利益相关者在公共辩论中保持权力平衡的武器,并为战略争取最佳利益么?
作为一个可变的公共资源GOSM 在概念上由纳税人资助,并属于国家。私有实体在不向社会带来利益的情况下从 GOSM 中获得资源是合乎道德的吗?与可能用于更有效的服务提供的程序不同,替代政策建议更有可能由咨询机构使用,并有助于公众辩论。
开源社区经常使用“copyleft 许可证”来确保代码和根据此许可证的任何衍生作品对所有人都开放。当产品价值在于代码本身、需要通过再分发来获得最大利益时这种许可证很有效。但是如果代码即 GOSM的再分发只是主要产品的附带品而主要产品是对现有政策环境的新战略洞察那会怎样呢
在私人收集的数据变得越来越多的时候GOSM 背后的真正价值可能是底层数据,它可以用来改进模型本身。最终,政府是唯一有权实施政策的消费者,利益相关者可以选择在谈判中分享修改后的 GOSM。
政府在公开发布政策模型时面临的巨大挑战是提高透明度的同时保护隐私。理想情况下,发布 GOSM 将需要以保护建模关键特征的方式保护封闭数据。
公开发布 GOSM 通过促进市民对民主的更多了解和参与,使公民获得权力,从而改善政策成果和提高公众满意度。在开放的政府乌托邦中,开放的公共政策发展将是政府和社区之间的合作性努力,这里知识、数据和分析可供大家免费使用。
_在霍巴特举行的 linux.conf.au 2017[lca2017][1])了解更多 Audrey Lobo-Pulo 的讲话:[公开发布的政府模型][2]。_
_声明本文中提出的观点属于 Audrey Lobo-Pulo不一定是澳大利亚政府的观点。_
--------------------------------------------------------------------------------
作者简介:
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/1-_mg_2552.jpg?itok=-RflZ4Wv)
Audrey Lobo-Pulo - Audrey Lobo-Pulo 博士是 Phoensight 的联合创始人并且开放政府以及政府建模开源软件的倡导者。一位物理学家在加入澳大利亚公共服务部后她转而从事经济政策建模工作。Audrey 参与了各种经济政策选择的建模,目前对政府开放数据和开放式政策建模感兴趣。 Audrey 对政府的愿景是将数据科学纳入公共政策分析。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/government-open-source-models
作者:[Audrey Lobo-Pulo ][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/audrey-lobo-pulo
[1]:https://twitter.com/search?q=%23lca2017&src=typd
[2]:https://linux.conf.au/schedule/presentation/31/
分区备份
============
通常你可能会把数据放在一个分区上尤其是通用串行总线USB设备上。有时候可能需要对该设备或者上面的一个分区进行备份。树莓派用户为了可引导 SD 卡当然有这个需求,其它小型电脑的用户也会发现这非常有用。有时候设备看起来要出现故障,这时最好快速做个备份。
进行本文中的实验你需要一个叫 dcfldd 的工具。
**dcfldd 工具**
该工具是 'coreutils' 软件包中 dd 工具的增强版。dcfldd 是 Nicholas Harbour 在国防部计算机取证实验室DCFL)工作期间研发的。该工具的名字也基于他工作的地方 - dcfldd
对于仍然在使用 CoreUtils 8.23 或更低版本的系统,无法轻松查看正在创建副本的进度。有时候看起来就像什么都没有发生然后你就想取消掉备份。
**注意:**如果你使用 8.24 或更新版本的 dd 工具,你就不需要使用 dcfldd只需要用 dd 替换 dcfldd 即可。所有其它参数仍然适用。
在 Debian 系统上你只需要在 Package Manager 中搜索 dcfldd。你也可以打开一个终端然后输入下面的命令
_sudo apt-get install dcfldd_
对于 Red Hat 系统,可以用下面的命令:
_cd /tmp
wget dl.fedoraproject.org/pub/epel/6/i386/dcfldd-1.3.4.1-4.el6.i686.rpm
sudo yum install dcfldd-1.3.4.1-4.el6.i686.rpm
dcfldd --version_
**注意:** 上面的命令安装的是 32 位版本。对于 64 位版本,使用下面的命令:
_cd /tmp
wget dl.fedoraproject.org/pub/epel/6/x86_64/dcfldd-1.3.4.1-4.el6.x86_64.rpm
sudo yum install dcfldd-1.3.4.1-4.el6.x86_64.rpm
dcfldd --version_
每组命令中的最后一个语句会列出 dcfldd 的版本并显示该命令文件已经被加载。
**注意:**确保你以 root 用户执行 dd 或者 dcfldd 命令。
安装完该工具后你就可以继续使用它备份和恢复分区。
**备份分区**
备份设备的时候可以备份整个设备也可以只是其中的一个分区。如果设备有多个分区,我们可以分别备份每个分区。
在进行备份之前,先让我们来看一下设备和分区的区别。假设我们有一个已经被格式化为一大块设备 SD 卡。SD 卡只有一个分区。如果空间被切分使得 SD 卡看起来是两个设备,那么它就有两个分区。如果用类似 GParted 的程序打开 SD 卡,如图 1 所示,你可以看到它有两个分区。
**图 1**
设备 /dev/sdc 有 /dev/sdc1 和 /dev/sdc2 两个分区。
假设我们有一个树莓派中的 SD 卡。SD 卡容量为 8 GB有两个分区如图 1 所示)。第一个分区存放 BerryBoot 启动引导器。第二个分区存放 Kali译者注Kali Linux 是一个 Debian 派生的 Linux 发行版)。现在已经没有可用的空间用来安装第二个操作系统。我们使用大小为 16 GB 的第二个 SD 卡,但拷贝到第二个 SD 卡之前第一个 SD 卡必须先备份。
要备份第一个 SD 卡我们需要备份设备 /dev/sdc。进行备份的命令如下所示
_dcfldd if=/dev/sdc of=/tmp/SD-Card-Backup.img_
备份包括输入文件if以及被设置为 '/tmp' 目录下名为 'SD-Card-Backup.img' 的输出文件of
dd 和 dcfldd 都是按块读写文件的。通过上述命令,它一次读写的默认块大小为 512 字节。记住,该复制是一个精准的拷贝——逐位逐字节。
默认的 512 字节可以通过块大小参数 bs= 更改。例如,要每次读写 1 兆字节,参数为 bs=1M。各个大小后缀的含义如下
* b 512 字节
* KB 1000 字节
* K 1024 字节
* MB 1000x1000 字节
* M 1024x1024 字节
* GB 1000x1000x1000 字节
* G 1024x1024x1024 字节
你也可以单独指定读和写的块大小。要指定读块的大小,使用 ibs=;要指定写块的大小,使用 obs=。
我使用三个不同的块大小做了一个 120 MB 分区的备份测试。第一次使用默认的 512 字节,用了 7 秒钟。第二次块大小为 1024 K用时 2 秒。第三次块大小是 2048 K用时 3 秒。用时会随系统以及其它硬件的不同而变化,但通常来说更大的块大小会比默认值稍微快一点。
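例如,用 1 MB 的块大小执行上面那个备份,完整命令如下(设备名和输出路径请按实际情况修改):

_dcfldd if=/dev/sdc of=/tmp/SD-Card-Backup.img bs=1M_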
一旦你完成了一次备份,你还需要知道如何把数据恢复到设备中。
**恢复分区**
现在我们已经有了一个备份点,假设数据可能被损毁了或者由于某些原因需要进行恢复。
命令和备份时相同,只是源和目标相反。对于上面的例子,命令会变为:
_dcfldd of=/dev/sdc if=/tmp/SD-Card-Backup.img_
这里镜像文件被用作输入文件if而设备sdc被用作输出文件of
**注意:** 要记住输出设备会被重写,它上面的所有数据都会丢失。通常来说在恢复数据之前最好用 GParted 删除 SD 卡上的所有分区。
假如你在使用多个 SD 卡,例如多个树莓派主板,你可以一次性写多块 SD 卡。为了做到这点你需要知道系统中卡的 ID。例如假设我们想把镜像 BerryBoot.img 拷贝到两个 SD 卡。SD 卡分别是 /dev/sdc 和 /dev/sdd。下面的命令在显示进度时每次读写 1 MB 的块。命令如下:
_dcfldd if=BerryBoot.img bs=1M status=progress | tee >(dcfldd of=/dev/sdc) | dcfldd of=/dev/sdd_
在这个命令中,第一个 dcfldd 指定输入文件,并把块大小设置为 1 MB。status 被设置为显示进度。然后输入通过管道(|)传输给命令 teetee 用于将输入分发到多个地方。第一个输出是到命令 (dcfldd of=/dev/sdc)’,命令被放到小括号内,作为一个命令执行。我们还需要最后一个管道(|),否则命令 tee 会把信息发送到 stdout屏幕。因此最后的输出被发送到命令 _dcfldd of=/dev/sdd_。如果你有第三个 SD 卡,甚至更多,只需要添加另外的重定向和命令,类似 _>(dcfldd of=/dev/sde)_。
**注意:**记住最后一个命令必须在管道(|)后面。
必须验证写的数据确保数据是正确的。
**验证数据**
一旦创建了一个镜像或者恢复了一个备份,你可以验证写入的数据。要验证数据,你会使用另一个名为 _diff_ 的程序。
使用 diff 你需要指定镜像文件的位置以及系统中拷贝自或写入的物理媒介。你可以在创建备份或者恢复了一个镜像之后使用 _diff_ 命令。
该命令有两个参数。第一个是物理媒介,第二个是镜像文件名称。
对于例子 _dcfldd of=/dev/sdc if=/tmp/SD-Card-Backup.img_对应的 _diff_ 命令是:
_diff /dev/sdc /tmp/SD-Card-Backup.img_
如果镜像和物理设备有任何的不同,你会被告知。如果没有显示任何信息,那么数据就验证为完全相同。
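另一种校验思路是比较两者的校验和(假设镜像是对整个设备的完整拷贝,两个值应当一致):

_md5sum /dev/sdc /tmp/SD-Card-Backup.img_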
确保数据完全一致是验证备份和恢复完整性的关键。进行备份时需要注意的一个主要问题是镜像大小。
**分割镜像**
假设你想要备份一个 16GB 的 SD 卡。镜像文件大小会大概相同。如果你只能把它备份到 FAT32 分区会怎样呢FAT32 最大文件大小限制是 4 GB。
你必须把文件切分为不超过 4 GB 的分片。通过管道(|)将数据传输给 _split_ 命令,可以在写镜像文件的同时进行切分。
创建备份的方法相同,但命令会包括管道和切分命令。对于命令为 _dcfldd if=/dev/sdc of=/tmp/SD-Card-Backup.img_ 的事例备份,切分文件的新命令如下:
_dcfldd if=/dev/sdc | split -b 4000MB - /tmp/SD-Card-Backup.img_
**注意:** 大小后缀的意义和 _dd_、_dcfldd_ 命令中相同。_split_ 命令中的破折号(-)表示输入来自标准输入,即通过管道从 _dcfldd_ 命令传来的数据。
文件会被保存为 _SD-Card-Backup.imgaa__SD-Card-Backup.imgab_如此类推。如果你担心文件大小太接近 4 GB 的限制,可以试着用 3500MB。
将文件恢复到设备也很简单。你使用 _cat_ 命令将它们连接起来然后像下面这样用 _dcfldd_ 写输出:
_cat /tmp/SD-Card-Backup.img* | dcfldd of=/dev/sdc_
你可以在 “_dcfldd_” 命令中包含任何需要的参数。
我希望你了解并能执行任何需要的数据备份和恢复,正如 SD 卡和类似设备所需的那样。
--------------------------------------------------------------------------------
via: https://www.linuxforum.com/threads/partition-backup.3638/
作者:[Jarret][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxforum.com/members/jarret.268/
如何在 CentOS 7 上安装 Elastic Stack
============================================================
### 本页
1. [步骤1 - 准备操作系统][1]
2. [步骤2 - 安装 Java][2]
3. [步骤3 - 安装和配置 Elasticsearch][3]
4. [步骤4 - 安装和配置 Kibana 和 Nginx][4]
5. [步骤5 - 安装和配置 Logstash][5]
6. [步骤6 - 在 CentOS 客户端上安装并配置 Filebeat][6]
7. [步骤7 - 在 Ubuntu 客户端上安装并配置 Filebeat][7]
8. [步骤8 - 测试][8]
9. [参考][9]
**Elasticsearch** 是基于 Lucene、由 Java 开发的开源搜索引擎。它提供了一个分布式、多租户(译者注:多租户技术是一种软件架构技术,用来探讨与实现如何在多用户的环境下共用相同的系统或程序组件,并且仍可确保各用户间数据的隔离性)的全文搜索引擎,并带有 HTTP Web 界面Kibana 仪表盘。数据以 JSON 文档的形式被 Elasticsearch 存储和检索。Elasticsearch 是一个可扩展的搜索引擎可用于搜索所有类型的文本文档包括日志文件。Elasticsearch 是 Elastic Stack 的核心Elastic Stack 也被称为 ELK Stack。
**Logstash** 是用于管理事件和日志的开源工具。它为数据收集提供实时传递途径。 Logstash将收集您的日志数据将数据转换为JSON文档并将其存储在Elasticsearch中。
**Kibana** 是Elasticsearch的开源数据可视化工具。Kibana提供了一个漂亮的仪表盘Web界面。 你可以用它来管理和可视化来自Elasticsearch的数据。 它不仅美丽,而且强大。
在本教程中我将向您展示如何在CentOS 7服务器上安装和配置 Elastic Stack以监视服务器日志。 然后,我将向您展示如何在操作系统为 CentOS 7和Ubuntu 16的客户端上安装“Elastic beats”。
**前提条件**
* 64位的CentOS 74GB 内存 - elk 主控机
* 64位的CentOS 7 1 GB 内存 - 客户端1
* 64位的Ubuntu 16 1GB 内存 - 客户端2
### 步骤1 - 准备操作系统
在本教程中我们将禁用CentOS 7服务器上的SELinux。 编辑SELinux配置文件。
```
vim /etc/sysconfig/selinux
```
将 SELINUX 的值从 enforcing 改成 disabled 。
```
SELINUX=disabled
```
然后重启服务器。
```
reboot
```
再次登录服务器并检查SELinux状态。
```
getenforce
```
确保结果是disabled。
### 步骤2 - 安装 Java
部署Elastic stack依赖于JavaElasticsearch 需要Java 8 版本推荐使用Oracle JDK 1.8 。我将从官方的Oracle rpm包安装Java 8。
使用wget命令下载Java 8 的JDK。
```
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"
```
然后使用rpm命令安装
```
rpm -ivh jdk-8u77-linux-x64.rpm
```
最后检查java JDK版本确保它正常工作。
```
java -version
```
您将看到服务器的Java版本。
### 步骤3 - 安装和配置 Elasticsearch
在此步骤中我们将安装和配置Elasticsearch。 从elastic.co网站提供的rpm包安装Elasticsearch并将其配置在本地主机上运行确保安装程序安全而且不能从外部访问
在安装 Elasticsearch 之前,先将 elastic.co 的 GPG 密钥添加到服务器。
```
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
接下来使用wget下载Elasticsearch 5.1,然后安装它。
```
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
rpm -ivh elasticsearch-5.1.1.rpm
```
Elasticsearch 已经安装好了。 现在进入配置目录编辑elasticsaerch.yml 配置文件。
```
cd /etc/elasticsearch/
vim elasticsearch.yml
```
去掉第40行的注释启用Elasticsearch 的内存锁。
```
bootstrap.memory_lock: true
```
在“Network”块中取消注释network.host和http.port行。
```
network.host: localhost
http.port: 9200
```
保存文件并退出编辑器。
现在编辑elasticsearch.service文件获取内存锁配置。
```
vim /usr/lib/systemd/system/elasticsearch.service
```
去掉第60行的注释确保该值为“unlimited”。
```
MAX_LOCKED_MEMORY=unlimited
```
保存并退出。
Elasticsearch 配置到此结束。Elasticsearch 将在本机的9200端口运行我们通过在 CentOS 服务器上启用mlockall来禁用内存交换。重新加载systemd将 Elasticsearch 置为启动,然后启动服务。
```
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
```
等待 Elasticsearch 启动成功然后检查服务器上打开的端口确保9200端口的状态是“LISTEN”
```
netstat -plntu
```
![Check elasticsearch running on port 9200][10]
然后检查内存锁以确保启用mlockall并使用以下命令检查Elasticsearch是否正在运行。
```
curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
curl -XGET 'localhost:9200/?pretty'
```
会看到如下结果。
![Check memory lock elasticsearch and check status][11]
### 步骤4 - 安装和配置 Kibana 和 Nginx
在此步骤中,我们将安装和配置 Kibana并使用 Nginx 作为 Web 服务器。Kibana 监听在本地 IP 地址上,而 Nginx 作为 Kibana 应用的反向代理。
下载Kibana 5.1与wget然后使用rpm命令安装
```
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
rpm -ivh kibana-5.1.1-x86_64.rpm
```
编辑 Kibana 配置文件。
```
vim /etc/kibana/kibana.yml
```
去掉配置文件中 server.port, server.host 和 elasticsearch.url 这三行的注释。
```
server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
```
保存并退出。
将 Kibana 设为开机启动并且启动Kibana 。
```
sudo systemctl enable kibana
sudo systemctl start kibana
```
Kibana将作为节点应用程序运行在端口5601上。
```
netstat -plntu
```
![Kibana running as node application on port 5601][12]
Kibana 安装到此结束。 现在我们需要安装Nginx并将其配置为反向代理以便能够从公共IP地址访问Kibana。
Nginx在Epel资源库中可以找到用yum安装epel-release。
```
yum -y install epel-release
```
然后安装 Nginx 和 httpd-tools 这两个包。
```
yum -y install nginx httpd-tools
```
httpd-tools软件包包含Web服务器的工具可以为Kibana添加htpasswd基础认证。
编辑Nginx配置文件并删除'server {}'块,这样我们可以添加一个新的虚拟主机配置。
```
cd /etc/nginx/
vim nginx.conf
```
删除server { }块。
![Remove Server Block on Nginx configuration][13]
保存并退出。
现在我们需要在conf.d目录中创建一个新的虚拟主机配置文件。 用vim创建新文件'kibana.conf'。
```
vim /etc/nginx/conf.d/kibana.conf
```
复制下面的配置。
```
server {
    listen 80;
    server_name elk-stack.co;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```
保存并退出。
然后使用htpasswd命令创建一个新的基本认证文件。
```
sudo htpasswd -c /etc/nginx/.kibana-user admin
TYPE YOUR PASSWORD
```
测试Nginx配置确保没有错误。 然后设定Nginx开机启动并启动Nginx。
```
nginx -t
systemctl enable nginx
systemctl start nginx
```
![Add nginx virtual host configuration for Kibana Application][14]
### 步骤5 - 安装和配置 Logstash
在此步骤中,我们将安装 Logstash并将其配置为集中来自配置了 filebeat 的客户端的服务器日志然后过滤和转换Syslog 数据并将其移入存储中心Elasticsearch中。
下载Logstash并使用rpm进行安装。
```
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
rpm -ivh logstash-5.1.1.rpm
```
生成新的SSL证书文件以便客户端可以识别 elastic 服务端。
进入tls目录并编辑openssl.cnf文件。
```
cd /etc/pki/tls
vim openssl.cnf
```
在“[v3_ca]”部分添加新行,以获取服务器标识。
```
[ v3_ca ]
# Server IP Address
subjectAltName = IP: 10.0.15.10
```
保存并退出。
使用openssl命令生成证书文件。
```
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
```
证书文件可以在'/etc/pki/tls/certs/'和'/etc/pki/tls/private/' 目录中找到。
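可以顺手核对一下生成的证书里是否带上了服务器 IP一个可选的小检查

```
openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'
```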
接下来我们会为Logstash创建新的配置文件。创建一个新的“filebeat-input.conf”文件来配置filebeat的日志源然后创建一个“syslog-filter.conf”配置文件来处理syslog再创建一个“output-elasticsearch.conf”文件来定义输出日志数据到Elasticsearch。
转到logstash配置目录并在”conf.d“子目录中创建新的配置文件。
```
cd /etc/logstash/
vim conf.d/filebeat-input.conf
```
输入配置:粘贴以下配置。
```
input {
  beats {
    port => 5443
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```
保存并退出。
创建 syslog-filter.conf 文件。
```
vim conf.d/syslog-filter.conf
```
粘贴以下配置
```
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
```
我们使用名为“grok”的过滤器插件来解析syslog文件。
保存并退出。
创建输出配置文件 “output-elasticsearch.conf“。
```
vim conf.d/output-elasticsearch.conf
```
粘贴以下配置。
```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```
保存并退出。
最后将logstash设定为开机启动并且启动服务。
```
sudo systemctl enable logstash
sudo systemctl start logstash
```
![Logstash started on port 5443 with SSL Connection][15]
### 步骤6 - 在 CentOS 客户端上安装并配置 Filebeat
Beat 作为数据发送者的角色,是一种可以安装在客户端节点上的轻量级代理,将大量数据从客户机发送到 Logstash 或 Elasticsearch 服务器。有 4 种 beat“Filebeat” 用于发送“日志文件”“Metricbeat” 用于发送“指标”“Packetbeat” 用于发送“网络数据”“Winlogbeat” 用于发送 Windows 客户端的“事件日志”。
在本教程中我将向您展示如何安装和配置“Filebeat”通过SSL连接将数据日志文件传输到Logstash服务器。
登录到客户端1的服务器上。 然后将证书文件从elastic 服务器复制到客户端1的服务器上。
```
ssh root@client1IP
```
使用scp命令拷贝证书文件。
```
scp root@elk-serverIP:~/logstash-forwarder.crt .
TYPE elk-server password
```
创建一个新的目录,将证书移动到这个目录中。
```
sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
```
接下来在客户端1服务器上导入 elastic 密钥。
```
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
下载 Filebeat 并且用rpm命令安装。
```
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
rpm -ivh filebeat-5.1.1-x86_64.rpm
```
Filebeat已经安装好了请转到配置目录并编辑“filebeat.yml”文件。
```
cd /etc/filebeat/
vim filebeat.yml
```
在第 21 行的路径部分添加新的日志文件。我们将添加两个文件“/var/log/secure” 用于 ssh 活动日志,“/var/log/messages” 用于服务器日志。
```
paths:
- /var/log/secure
- /var/log/messages
```
在第26行添加一个新配置来定义syslog类型的文件。
```
document_type: syslog
```
Filebeat默认使用Elasticsearch作为输出目标。 在本教程中我们将其更改为Logshtash。 在83行和85行添加注释来禁用 Elasticsearch 输出。
禁用 Elasticsearch 输出。
```
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
```
现在添加新的logstash输出配置。 去掉logstash输出配置的注释并将所有值更改为下面配置中的值。
```
output.logstash:
# The Logstash hosts
hosts: ["10.0.15.10:5443"]
bulk_max_size: 1024
ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
template.name: "filebeat"
template.path: "filebeat.template.json"
template.overwrite: false
```
保存文件并退出vim。
将 Filebeat 设定为开机启动并启动。
```
sudo systemctl enable filebeat
sudo systemctl start filebeat
```
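如果想确认客户端能通过 SSL 连上 Logstash 的 5443 端口,可以用 openssl 做一次握手测试IP 与证书路径沿用上文配置):

```
openssl s_client -connect 10.0.15.10:5443 -CAfile /etc/pki/tls/certs/logstash-forwarder.crt </dev/null
```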
### 步骤7 - 在 Ubuntu 客户端上安装并配置 Filebeat
使用ssh连接到服务器。
```
ssh root@ubuntu-clientIP
```
使用scp命令拷贝证书文件。
```
scp root@elk-serverIP:~/logstash-forwarder.crt .
```
创建一个新的目录,将证书移动到这个目录中。
```
sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
```
在服务器上导入 elastic 密钥。
```
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
```
下载 Filebeat .deb 包并且使用dpkg命令进行安装。
```
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
dpkg -i filebeat-5.1.1-amd64.deb
```
转到配置目录并编辑“filebeat.yml”文件。
```
cd /etc/filebeat/
vim filebeat.yml
```
在路径配置部分添加新的日志文件路径。
```
paths:
- /var/log/auth.log
- /var/log/syslog
```
将 document_type 配置设定为 syslog。
```
document_type: syslog
```
将下列几行注释掉,禁用输出到 Elasticsearch。
```
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
```
启用logstash输出去掉以下配置的注释并且按照如下所示更改值。
```
output.logstash:
# The Logstash hosts
hosts: ["10.0.15.10:5443"]
bulk_max_size: 1024
ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
template.name: "filebeat"
template.path: "filebeat.template.json"
template.overwrite: false
```
保存并退出vim。
将 Filebeat 设定为开机启动并启动。
```
sudo systemctl enable filebeat
sudo systemctl start filebeat
```
检查服务状态。
```
systemctl status filebeat
```
![Filebeat is running on the client Ubuntu][16]
### 步骤8 - 测试
打开您的网络浏览器并访问您在Nginx中配置的elastic stack域我的是“elk-stack.co”。 使用管理员密码登录然后按Enter键登录Kibana仪表盘。
![Login to the Kibana Dashboard with Basic Auth][17]
创建一个新的默认索引”filebeat- *“,然后点击'创建'按钮。
![Create First index filebeat for Kibana][18]
默认索引已创建。 如果elastic stack上有多个beat您可以在“星形”按钮上点击一下即可配置默认beat。
![Filebeat index as default index on Kibana Dashboard][19]
转到 “**Discover**” 菜单您就可以看到elk-client1和elk-client2服务器上的所有日志文件。
![Discover all Log Files from the Servers][20]
来自elk-client1服务器日志中的无效ssh登录的JSON输出示例。
![JSON output for Failed SSH Login][21]
使用其他的选项你可以使用Kibana仪表盘做更多的事情。
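你也可以在 elk 服务器本机上用 Elasticsearch 的 API 确认 Filebeat 的索引确实已经建立(输出中应能看到若干 filebeat-* 索引):

```
curl -XGET 'localhost:9200/_cat/indices?v'
```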
Elastic Stack已安装在CentOS 7服务器上。 Filebeat已安装在CentOS 7和Ubuntu客户端上。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
作者:[Muhammad Arul][a]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
[1]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-nbspprepare-the-operating-system
[2]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-java
[3]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-elasticsearch
[4]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-kibana-with-nginx
[5]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-logstash
[6]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-centos-client
[7]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-install-and-configure-filebeat-on-the-ubuntu-client
[8]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#step-testing
[9]: https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/#reference
[10]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/1.png
[11]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/2.png
[12]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/3.png
[13]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/4.png
[14]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/5.png
[15]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/6.png
[16]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/12.png
[17]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/7.png
[18]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/8.png
[19]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/9.png
[20]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/10.png
[21]: https://www.howtoforge.com/images/how-to-install-elastic-stack-on-centos-7/big/11.png
如何让黑客远离你的 Linux 第三部分:问题回答
============================================================
![Computer security](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/keep-hackers-out.jpg?itok=lqgHDxDu "computer security")
Mike Guthrie 最近在 Linux 基金会的网络研讨会上回答了一些安全相关的问题。随时观看免费的研讨会。[Creative Commons Zero][1]
这个系列的[第一篇][6]和[第二篇][7]文章覆盖了 5 个最简单的方法来让你的 Linux 远离黑客,并且知道他们是否已经进入。这一次,我将回答一些我最近在 Linux 基金会网络研讨会上收到的很好的安全性问题。[随时观看免费网络研讨会][8]。
**如果系统自动使用私钥认证,如何存储密钥密码?**
这个很难。这是我们一直在斗争的事情特别是在我们做“红队Red Team”的时候因为我们有需要自动回连的东西。我使用 Expect但我倾向于在这上面用老方法。如果你要编写脚本是的在系统上存储密码总是很棘手当你这么做时你需要加密它。
我的 Expect 脚本加密了存储的密码,然后解密,发送密码,并在完成后重新加密。我意识到这有一些缺陷,但它比使用无密码的密钥更好。
如果你有一个无密码的密钥,并且你确实需要使用它。我建议你最大化地限制需要它的用户。例如,如果你正在进行一些自动日志传输或自动化软件安装,则只给那些需要执行这些功能的程序权限。
你可以通过 SSH 运行命令,所以不要给它们一个 shell使它只能运行那个命令这样就能防止某人窃取了这个密钥并做其他事情。
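一个常见做法是在 `~/.ssh/authorized_keys` 中用 `command=` 等选项把某把密钥限制为只能执行单个命令(脚本路径与密钥内容仅为示意):

```
# 该密钥登录时只会执行指定脚本,并禁用终端和各类转发
command="/usr/local/bin/pull-logs.sh",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA...密钥内容... backup@client
```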
**你对密码管理器如 KeePass2 怎么看?**
对我而言密码管理器是一个非常好的目标。随着 GPU 破解的出现和 EC2 的一些破解能力,这些很容易就变成过去。我一直在窃取密码库。
现在,我们在破解这些库的成功率是一个不同的故事。我们仍然有 10% 左右的破解成功率。如果人们不能为他们的密码库保留一个安全密码,那么我们就会进入并会获得大量的成功。它没有什么好,但是你仍需要保护好这些资产。如你保护其他密码一样保护好密码库。
**你认为从安全的角度来看,除了创建具有更高密钥长度的主机密钥之外,创建一个新的 “Diffie-Hellman” moduli 并限制 2048 位或更高值得么?**
值得的。以前 SSH 产品中存在过弱点,能让人解密数据包流,进而拉取各种数据。人们不假思索地使用这种加密机制来传输文件和密码,所以使用健壮的加密并定期更换密钥是很重要的。我会轮换我的 SSH 密钥——不像我的密码那么频繁,但我每年会轮换一次。是的,这是一个麻烦,但它让我安心。我建议尽可能地使你的加密技术健壮。
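作为参考,下面是一种常见做法的示意:过滤掉 `/etc/ssh/moduli` 中较短的 Diffie-Hellman 条目(该文件第 5 列是位长减 1例如 2047 对应 2048 位,操作前请先备份原文件),并生成一把新密钥用于轮换:

```
awk '$5 >= 2047' /etc/ssh/moduli > /etc/ssh/moduli.strong
mv /etc/ssh/moduli.strong /etc/ssh/moduli
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_new
```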
**使用完全随机的英语单词(大概 10 万个)作为密码合适么?**
当然。我的密码实际上是一个完整的短语。它是带标点符号和大小写一句话。我不再使用其他任何东西。
我非常支持使用那种你能记住、而不必写下来或放进密码库的密码。一个你能记住、不必写下来的密码,比你需要写下来的密码更安全。
使用短语或使用你可以记住的四个随机单词比那些需要经过几次转换的一串数字和字符的字符串更安全。我目前的密码长度大约是 200 个字符。这是我可以快速打出来并且记住的。
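如果想随机起步,可以从系统词典里随机抽几个单词作为短语的底子,再自己加上大小写和标点(词典路径随发行版而异):

```
shuf -n 4 /usr/share/dict/words | tr '\n' ' '; echo
```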
**在物联网情景下对保护基于 Linux 的嵌入式系统有什么建议么?**
物联网是一个新的空间,它是系统和安全的前沿。它每一天都是不同的。现在,我尽量都保持离线。我不喜欢人们把我的灯光和冰箱搞乱。我故意没有购买已经联网的冰箱,因为我有朋友是黑客,我知道我每天早上醒来都会看到不适当的图片。封住它,锁住它,隔离它。
目前物联网设备的恶意软件取决于默认密码和后门,所以只需要对你所使用的设备进行一些研究,并确保没有其他人可以默认访问。然后确保这些设备的管理接口受到防火墙或其他此类设备的良好保护。
**你可以提一个可以在 SMB 和大型环境中使用的防火墙/UTMOS 或应用程序)么?**
我使用 pfSense它是 BSD 的衍生产品。我很喜欢它。它有很多模块,现在还有了商业支持,这对小企业来说非常棒。对于更大的设备、更大的环境,这取决于你有哪些管理员。
我一直都是 CheckPoint 管理员,但是 Palo Alto 也越来越受欢迎了。这些类型的安装与小型企业或家庭使用很不同。我在任何小型网络中都使用 pfSense。
**云服务有什么内在问题么?**
并没有什么云,只有其他人的电脑。云服务存在内在的问题。要清楚谁能访问你的数据,以及你往上面放了什么。要知道当你向 Amazon、Google 或 Microsoft 上传某些东西时,你就不再能完全控制它,该数据的隐私是存疑的。
**要获得 OSCP 你建议需要准备些什么?**
我现在正准备考这个认证,我的整个团队也是。阅读他们的材料。要记住 OSCP 是以进攻性安全offensive security为基准的。你将全程使用 Kali。如果不这样做——如果你决定不使用 Kali——请确保安装了所有工具来模拟 Kali 实例。
这将是一个基于工具的重要认证。这是一个很好的方法论。看看一些名为“渗透测试框架”的内容,因为这将为你提供一个很好的测试流程,他们的实验室似乎是很棒的。这与我家里的实验室非常相似。
_[随时免费观看完整的网络研讨会][3]。查看这个系列的[第一篇][4]和[第二篇][5]文章获得 5 个简单的贴士来让你的 Linux 机器安全。_
_Mike Guthrie 为能源部工作,负责 “Red Team” 的工作和渗透测试。_
--------------------------------------------------------------------------------
via: https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-3-your-questions-answered
作者:[MIKE GUTHRIE][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/anch
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/keep-hackers-outjpg
[3]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco
[4]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-1-top-two-security-tips
[5]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-2-three-more-easy-security-tips
[6]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-1-top-two-security-tips
[7]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-2-three-more-easy-security-tips
[8]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco
在 Linux 上使用 Meld 比较文件夹
============================================================
### 本文导航
1. [用 Meld 比较文件夹][1]
2. [总结][2]
我们已经从一个新手的角度了解了 Meld包括 Meld 的安装),也提及了一些 Meld 中级用户常用的小技巧。如果你有印象,在新手教程中,我们说过 Meld 可以比较文件和文件夹。比较文件已经讨论过了,今天,我们来看看 Meld 怎么比较文件夹。
本教程中的所有命令和例子都是在 Ubuntu 14.04 上测试的,使用的 Meld 是 3.14.2 版。
### 用 Meld 比较文件夹
打开 Meld 工具然后选择_比较文件夹_选项来比较两个文件夹。
[
![Compare directories using Meld](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-dir-comp-1.png)
][5]
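顺带一提,你也可以直接在终端里把两个目录作为参数启动 Meld 来比较它们(目录路径仅为示例):

```
meld ~/projects/version-1 ~/projects/version-2 &
```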
选择你要比较的文件夹:
[
![select the directories](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-sel-dir-2.png)
][6]
然后单击_比较_按钮你会看到 Meld 像图中这样分成两栏显示。
[
![Compare directories visually](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-dircomp-begins-3.png)
][7]
分栏会树形显示这些文件/文件夹。你可以在上图中看到明显的区别——不论文件是新建的还是被修改过的——都会以不同的颜色高亮显示。
根据 Meld 的官方文档可以知道在窗口中看到的每个不同的文件或文件夹都会被突出显示。这样就很容易看出这个文件/文件夹与另一个分栏中对应位置的文件/文件夹的区别。
下表是 Meld 网站上列出的在比较文件夹时突出显示的不同字体大小/颜色/背景等代表的含义。
|**状态** | **显示效果** | **含义** |
| --- | --- | --- |
| 相同 | 普通字体 | 该文件/文件夹在所有被比较的文件夹中都相同。 |
| 过滤后相同 | 斜体 | 这些文件在各文件夹之间不同,但应用文本过滤器之后就变得相同了。 |
| 修改过 | 蓝色粗体 | 这些文件在被比较的文件夹之间存在差异。 |
| 新建 | 绿色粗体 | 该文件/文件夹存在于这个文件夹中,但其它文件夹里没有。 |
| 缺失 | 置灰并带删除线的文本 | 该文件/文件夹不存在于这个文件夹中,但存在于其它某个文件夹里。 |
| 错误 | 黄色背景的红色粗体 | 比较该文件时出错。最常见的错误原因是文件权限(例如 Meld 无权打开该文件)和文件名编码错误。 |
Meld 默认会列出文件夹中的所有内容即使这些内容没有任何不同。当然你也可以在工具栏中单击_同样的_按钮设置 Meld 不显示这些相同的文件/文件夹——单击这个按钮使其不可用。
[
![same button](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-same-button.png)
][3]
[
![Meld compare buttons](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-same-disabled.png)
][8]
下面是单击_同样的_按钮使其不可用的截图
[
![Directory Comparison without same files](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-only-diff.png)
][9]
这样你会看到只显示了两个文件夹中不同的文件新建的和修改过的。同样如果你单击_新建的_按钮使其不可用那么 Meld 就只会列出修改过的文件。所以,在比较文件夹时可以通过这些按钮自定义要显示的内容。
你可以使用上下箭头键在这些文件之间切换选择,然后双击文件,或者单击箭头旁边的_比较_按钮,就可以分栏比较这两个文件。
[
![meld compare arrow keys](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-compare-arrows.png)
][10]
**提示 1**:如果你仔细观察,就会看到 Meld 窗口的左边和右边有一些小进度块。这些进度块就像是“用颜色区分的包含不同文件/文件夹的数个区段”。每个区段都由很多的小进度块组成,而一个个小小的有颜色的进度块就表示此处有不同的文件/文件夹。你可以单击每一个这样的小小进度块跳到它对应的文件/文件夹。
**提示 2**:尽管你经常分栏比较文件然后以你的方式合并不同的文件,假如你想要合并所有不同的文件/文件夹(就是说你想要把一个文件夹中特有的文件/文件夹添加到另一个文件夹中那么你可以用_复制到左边_和_复制到右边_按钮
[
![meld copy right part](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-copy-right-left.png)
][11]
比如你可以在左边的分栏中选择一个文件或文件夹然后单击_复制到右边_按钮在右边的文件夹中对应的位置新建完全一样的文件或文件夹。
现在在窗口的下栏菜单中找到_过滤_按钮它就在_同样的_、_新建的_和_修改过的_ 这三个按钮下面。这里你可以选择或取消文件的类型来让 Meld 在比较文件夹时决定是否显示这种类型的文件/文件夹。官方文档解释说菜单中的这个条目表示“被匹配到的文件不会显示。”
这个条目包括备份文件,操作系统元数据,版本控制文件、二进制文件和多媒体文件。
[
![Meld filters](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-filters.png)
][12]
前面提到的条目也可以通过这样的方式找到_浏览->文件过滤_。你可以通过 _编辑->首选项->文件过滤_ 为这个条目增加新元素(也可以删除已经存在的元素)。
[
![Meld preferences](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-edit-filters-menu.png)
][13]
要新建一个过滤条件,你需要使用一组 shell 符号,下表列出了 Meld 支持的 shell 符号:
| **通配符** | **匹配内容** |
| --- | --- |
| * | 任意内容(即零个或多个字符) |
| ? | 恰好一个字符 |
| [abc] | 所列字符中的任意一个 |
| [!abc] | 除所列字符之外的任意一个字符 |
| {cat,dog} | “cat” 或 “dog” 二者之一 |
最重要的一点是 Meld 的文件名默认大小写敏感。也就是说Meld 认为 readme 和 ReadMe 与 README 是不一样的文件。
幸运的是,你可以关掉 Meld 的大小写敏感。只需要打开_浏览_菜单然后选择_忽略文件名大小写_选项。
[
![Meld ignore filename case](https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/meld-ignore-case.png)
][14]
### 结论
你是否觉得使用 Meld 比较文件夹很容易呢——事实上,我认为它相当容易。只有新建一个过滤器会花点时间,但是这不意味着你没必要学习创建过滤器。显然,这取决于你要过滤的内容。
真的很棒,你甚至可以用 Meld 比较三个文件夹。想要比较三个文件夹时,可以勾选 _3 个比较_ 复选框。今天,我们不介绍怎么比较三个文件夹,但它肯定会出现在后续的教程中。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/
作者:[Ansh][a]
译者:[vim-kakali](https://github.com/vim-kakali)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/
[1]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/#compare-directories-using-meld
[2]:https://www.howtoforge.com/tutorial/how-to-perform-directory-comparison-using-meld/#conclusion
[3]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-same-button.png
[4]:https://www.howtoforge.com/tutorial/beginners-guide-to-visual-merge-tool-meld-on-linux/
[5]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-dir-comp-1.png
[6]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-sel-dir-2.png
[7]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-dircomp-begins-3.png
[8]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-same-disabled.png
[9]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-only-diff.png
[10]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-compare-arrows.png
[11]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-copy-right-left.png
[12]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-filters.png
[13]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-edit-filters-menu.png
[14]:https://www.howtoforge.com/images/beginners-guide-to-visual-merge-tool-meld-on-linux-part-3/big/meld-ignore-case.png

View File

@ -0,0 +1,97 @@
使用 Cozy 搭建个人云
============================================================
![使用 Cozy 搭建个人云](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_tree_clouds.png?itok=dSV0oTDS "Building your own personal cloud with Cozy")
>Image by : [Pixabay][2]. Modified by Opensource.com. [CC BY-SA 4.0][3]
我认识的大部分人为了他们的日历、电子邮件、文件存储等,都会使用一些基于 Web 的应用。但是,如果像我这样,对隐私感到担忧、或者只是希望将你自己的数字生活简单化为一个你所控制的地方呢? [Cozy][4] 就是一个朝着健壮的自主云平台方向发展的项目。你可以从 [GitHub][5] 上获取 Cozy 的源代码,它采用 AGPL 3.0 协议。
### 安装
安装 Cozy 非常快捷简单,这里有多种平台的 [简单易懂安装指令][6]。在我的测试中,我使用 64 位的 Debian 8 系统。安装需要几分钟时间,然后你只需要到服务器的 IP 地址注册一个账号,就会加载并准备好默认的应用程序集。
要注意的一点 - 安装假设没有正在运行任何其它 Web 服务,而且它会尝试安装 [Nginx web 服务器][7]。如果你的服务器已经有网站正在运行,配置可能就比较麻烦。我是在一个全新的 VPSVirtual Private Server虚拟个人服务器上安装因此比较简单。运行安装程序、启动 Nginx然后你就可以访问云了。
Cozy 还有一个 [应用商店][8],你可以从中下载额外的应用程序。有一些看起来非常有趣,例如 [Ghost 博客平台][9] 以及开源维基 [TiddlyWiki][10]。其目标显然是让这个平台能够集成很多其它优秀的应用程序。我认为,看到很多其它流行的开源应用程序提供集成功能只是时间问题。此刻,已经支持 [Node.js][11],如果以后也支持其它应用层,你就可以看到更多优秀的应用程序。
还有一个可能的功能是通过免费的安卓应用程序从安卓设备上访问你的信息。当前还没有 iOS 应用,但有计划要解决这个问题。
现在Cozy 已经有很多核心的应用程序。
![主要 Cozy 界面](https://opensource.com/sites/default/files/main_cozy_interface.jpg "Main Cozy Interface")
主要 Cozy 界面
### 文件
和很多人一样,我使用 [Dropbox][12] 进行文件存储。事实上,由于我有很多东西需要存储,我需要花钱买 DropBox Pro。对我来说如果 Cozy 有我想要的功能,那么把我的文件移动到那里能为我节省很多开销。
我很高兴地说,它确实做到了。Cozy 应用程序内建的基于 web 的文件上传和文件管理工具让我感到惊喜。拖拽功能正如你期望的那样,界面也很干净整洁。我在上传示例文件和目录、随处跳转、移动、删除以及重命名文件时都没有遇到问题。
如果你想要的就是基于 web 的云文件存储,那么 Cozy 已经能满足你。对我来说,它缺失的是 DropBox 具有的选择性文件目录同步功能。在 DropBox 中,如果你拖拽一个文件到目录中,它就会被拷贝到云,几分钟后该文件在你所有同步设备中都可以看到。实际上,[Cozy 正在研发该功能][13],但此时它还处于 beta 版,而且只支持 Linux 客户端。另外,我有一个称为 [Download to Dropbox][15] 的 [Chrome][14] 扩展,我时不时用它抓取图片和其它内容,但当前 Cozy 中还没有类似的工具。
![文件管理界面](https://opensource.com/sites/default/files/cozy_2.jpg "文件管理界面")
文件管理界面
### 从 Google 导入数据
如果你正在使用 Google 日历和联系人,使用 Cozy 安装的应用程序很轻易的就可以导入它们。当你授权访问 Google 时,会给你一个 API 密钥,把它粘贴到 Cozy它就会迅速高效地进行复制。两种情况下内容都会被打上“从 Google 导入”的标签。对于我混乱的联系人这可能是件好事情因为它使得我有机会重新整理把它们重新标记为更有意义的类别。“Google Calendar” 中所有的事件都导入了,但是我注意到其中一些事件的时间不对,可能是由于两端时区设置的影响。
### 联系人
联系人功能正如你期望的那样工作,界面也很像 Google 联系人。尽管如此,还是有一些不足之处。与(例如)智能手机的同步通过 [CardDAV][16] 完成,这是用于共享联系人数据的标准协议,但安卓手机并不原生支持该技术。为了把你的联系人同步到安卓设备,你需要在手机上安装一个应用。这对我来说是个很大的障碍,因为我的手机上已经有太多这样古怪的旧应用了(例如 work mail、Gmail 以及其它 mail我的天我并不想再安装一个不能和智能手机原生联系人应用同步的软件。如果你正在使用 iPhone那么你可以直接使用 CardDAV。
### 日历
对于日历用户来说,好消息就是安卓设备支持这种类型数据的交换格式 [CalDAV][17]。正如我在导入数据时提到的,我的一些日历事件的时间不对。在这之前我在和其它日历系统进行迁移时也遇到过这个问题,因此这对我并没有产生太大困扰。界面允许你创建和管理多个日历,就像 Google 那样,但是你不能订阅这个 Cozy 实例之外的其它日历。该应用程序另一个怪异的地方就是它的一周从周一开始,而且你不能更改。通常来说,我的一周从周日开始,因此能更改周起始日的功能对我来说会非常有用。设置对话框里其实并没有任何设置;它只是给出如何通过 CalDAV 连接的指令。再次说明,这个应用程序接近我想要的,但 Cozy 做得还不够好。
### 照片
照片应用让我印象深刻,它从文件应用程序借鉴了很多东西。你甚至可以把一个其它应用程序的照片文件添加到相册,或者直接通过拖拽上传。不幸的是,一旦上传后,我没有找到任何重新排序和编辑照片的方法。你只能把它们从相册中删除。应用有一个通过令牌链接进行分享的工具,而且你可以指定一个或多个联系人。系统会给这些联系人发送邀请他们查看相册的电子邮件。当然还有很多比这个功能更丰富的相册应用,但在 Cozy 平台中这算是一个好的起点。
![Photos 界面](https://opensource.com/sites/default/files/cozy_3_0.jpg "Photos Interface")
Photos 界面
### 总结
Cozy 目标远大。他们尝试搭建一个可以部署任意你想要的基于云的服务的平台。它已经成熟了吗?我并不这么认为。对于一些重度用户来说,我之前提到的一些问题很严重,而且还没有 iOS 应用,这可能阻碍用户使用它。不管怎样,继续关注吧 - 随着研发的继续Cozy 有朝一日有替代很多应用程序的潜能。
--------------------------------------------------------------------------------
译者简介:
D Ruth Bavousett - D Ruth Bavousett 作为系统管理员和软件开发者已经很长时间了,她的职业生涯是从一台 VAX 11/780 上开始的。到目前为止,她花了很多时间服务于图书馆的技术需求,她从 2008 年就开始成为 Koha 开源图书馆自动化套件的贡献者。 Ruth 现在是 Houston cPanel 公司的 Perl 开发人员,同时还是两个孩子的母亲。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/2/cozy-personal-cloud
作者:[D Ruth Bavousett][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://opensource.com/article/17/2/cozy-personal-cloud?rate=FEMc3av4LgYK-jeEscdiqPhSgHZkYNsNCINhOoVR9N8
[2]:https://pixabay.com/en/tree-field-cornfield-nature-247122/
[3]:https://creativecommons.org/licenses/by-sa/4.0/
[4]:https://cozy.io/
[5]:https://github.com/cozy/cozy
[6]:https://docs.cozy.io/en/host/install/
[7]:https://www.nginx.com/
[8]:https://cozy.io/en/apps/
[9]:https://ghost.org/
[10]:http://tiddlywiki.com/
[11]:http://nodejs.org/
[12]:https://www.dropbox.com/
[13]:https://github.com/cozy-labs/cozy-desktop
[14]:https://www.google.com/chrome/
[15]:https://github.com/pwnall/dropship-chrome
[16]:https://en.wikipedia.org/wiki/CardDAV
[17]:https://en.wikipedia.org/wiki/CalDAV
[18]:https://opensource.com/user/36051/feed
[19]:https://opensource.com/article/17/2/cozy-personal-cloud#comments
[20]:https://opensource.com/users/druthb

View File

@ -0,0 +1,54 @@
OpenContrail一个 OpenStack 生态中的重要工具
============================================================
![OpenContrail](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/contrails-cloud.jpg?itok=aoNIH-ar "OpenContrail")
OpenContrail 是 OpenStack 云计算平台使用的 SDN 平台,它正在成为管理员需要发展的技能的重要工具。 [Creative Commons Zero][1] Pixabay
整个 2016 年软件定义网络SDN迅速发展开源和云计算领域的众多参与者正帮助其获得增长。结合这一趋势OpenStack 云计算平台使用的受欢迎的 SDN 平台 [OpenContrail][3] 正成为许多管理员需要发展的技能的重要工具。
正如管理员和开发人员在 OpenStack 生态系统中围绕着诸如 Ceph 等重要工具提升技能一样,他们将需要拥抱 OpenContrail它是由 Apache 软件基金会全面开源并管理的软件。
考虑到这些OpenStack 领域中最活跃的公司之一 Mirantis 已经[宣布][4]对 OpenContrail 的商业支持和贡献。该公司提到:“添加了 OpenContrail 后Mirantis 将会为与 OpenStack 一起使用的开源技术包括用于存储的 Ceph、用于计算的 OpenStack/KVM、用于 SDN 的 OpenContrail 或 Neutron 提供一站式的支持。”
根据 Mirantis 公告“OpenContrail 是一个使用基于标准协议构建的 Apache 2.0 许可项目,为网络虚拟化提供了所有必要的组件 - SDN 控制器、虚拟路由器、分析引擎和已发布的上层 API它有一个可扩展 REST API 用于配置以及收集操作和分析数据。为了规模化OpenContrail 可以作为云基础设施的基础网络平台。”
有消息称 Mirantis [收购了 TCP Cloud][5],这是一家专门从事 OpenStack、OpenContrail 和 Kubernetes 管理服务的公司。Mirantis 将使用 TCP Cloud 的技术来持续交付云基础设施来管理将在 Docker 容器中运行的 OpenContrail 控制面板。作为这项工作的一部分Mirantis 也会一直致力于 OpenContrail。
OpenContrail 的许多贡献者正在与 Mirantis 紧密合作,他们特别注意了 Mirantis 将提供的支持计划。
Mirantis 的工程师总监和 OpenContrail 咨询委员会主任 Jakub Pavlik 说:“OpenContrail 是 OpenStack 社区中一个重要的项目,而 Mirantis 很好地将它容器化并提供商业支持。我们团队正在做的工作使 OpenContrail 能轻松地扩展并更新,并与 Mirantis OpenStack 的其余部分进行无缝滚动升级。” 他表示:“商业支持也将使 Mirantis 能够使该项目与各种交换机兼容,从而为客户提供更多的硬件和软件选择。”
除了对 OpenContrail 的商业支持外,我们很可能还会看到 Mirantis 为那些想要学习如何利用它的云管理员和开发人员提供教育服务。Mirantis 已经以其 [OpenStack 培训][6]课程而闻名,并已将 Ceph 纳入了培训课程中。
在 2016 年SDN 种类快速演变,并且对许多部署 OpenStack 的组织也有意义。IDC 最近发布了 SDN 市场的[一项研究][7],预计从 2014 年到 2020 年 SDN 市场的年均复合增长率为 53.9%,届时市场价值将达到 125 亿美元。此外“Technology Trends 2016” 报告将 SDN 列为组织最佳的技术投资之一。
IDC 网络基础设施总裁 [Rohit Mehra][8] 说:“云计算和第三个平台推动了 SDN 的需求,它将在 2020 年代表一个价值超过 125 亿美元的市场。丝毫不用奇怪的是 SDN 的价值将越来越多地渗透到网络虚拟化软件和 SDN 应用中,包括虚拟化网络和安全服务。大型企业在数据中心中实现 SDN 的价值,但它们最终将会认识到其在分支机构和校园网络中的广泛应用。”
同时Linux 基金会最近[宣布][9]发布了其 2016 年度报告[“开放云指导:当前趋势和开源项目”][10]。这份第三版年度报告全面介绍了开放云计算,并包含一个关于 SDN 的章节。
Linux 基金会还提供了[软件定义网络基础知识][11]LFS265这是一个自定进度的 SDN 在线课程;另外,它也是 [OpenDaylight][12] 项目的领导者,该项目是另一个正在迅速成长的重要开源 SDN 平台。
--------------------------------------------------------------------------------
via: https://www.linux.com/news/event/open-networking-summit/2017/2/opencontrail-essential-tool-openstack-ecosystem
作者:[SAM DEAN][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/sam-dean
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/contrails-cloudjpg
[3]:https://www.globenewswire.com/Tracker?data=brZ3aJVRyVHeFOyzJ1Dl4DMY3CsSV7XcYkwRyOcrw4rDHplSItUqHxXtWfs18mLsa8_bPzeN2EgZXWcQU8vchg==
[4]:http://www.econotimes.com/Mirantis-Becomes-First-Vendor-to-Offer-Support-and-Managed-Services-for-OpenContrail-SDN-486228
[5]:https://www.globenewswire.com/Tracker?data=Lv6LkvREFzGWgujrf1n6r_qmjSdu67-zdRAYt2itKQ6Fytomhfphuk5EbDNjNYtfgAsbnqI8H1dn_5kB5uOSmmSYY9XP2ibkrPw_wKi5JtnAyV43AjuR_epMmOUkZZ8QtFdkR33lTGDmN6O5B4xkwv7fENcDpm30nI2Og_YrYf0b4th8Yy4S47lKgITa7dz2bJpwpbCIzd7muk0BZ17vsEp0S3j4kQJnmYYYk5udOMA=
[6]:https://training.mirantis.com/
[7]:https://www.idc.com/getdoc.jsp?containerId=prUS41005016
[8]:http://www.idc.com/getdoc.jsp?containerId=PRF003513
[9]:https://www.linux.com/blog/linux-foundation-issues-2016-guide-open-source-cloud-projects
[10]:http://ctt.marketwire.com/?release=11G120876-001&id=10172077&type=0&url=http%3A%2F%2Fgo.linuxfoundation.org%2Frd-open-cloud-report-2016-pr
[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/software-defined-networking-fundamentals
[12]:https://www.opendaylight.org/

View File

@ -0,0 +1,195 @@
lnav - Linux 下一个基于控制台的高级日志文件查看器
============================================================
[LNAV][3]Log file Navigator是 Linux 下一个基于控制台的高级日志文件查看器。它和其它文件查看器,例如 cat、more、tail 等,完成相同的任务,但有很多普通文件查看器没有的增强功能(尤其是它自带很多颜色和易于阅读的格式)。
它能在解压所有压缩日志文件zip、gzip、bzip的同时把它们合并到一起进行导航。基于消息的时间戳lnav 能把多个日志文件合并到一个视图Single Log View从而避免打开多个窗口。左边的颜色栏帮助显示消息所属的文件。
警告和错误的数目会被(黄色和红色)高亮显示,因此我们能够很轻易地看到问题出现在哪里。它会自动加载新的日志行。
它按照消息时间戳排序显示所有文件的日志消息。顶部和底部的状态栏会告诉你在哪个日志文件。如果你想查找特定的模式,只需要在搜索弹窗中输入就会即时显示。
内建的日志消息解析器会自动从每一行中发现和提取详细信息。
服务器日志是一个由服务器创建并经常更新、用于抓取特定服务和应用的所有活动信息的日志文件。当你的应用或者服务出现问题时这个文件就会非常有用。从日志文件中你可以获取所有关于问题的信息,例如基于警告或者错误信息它什么时候开始表现不正常。
当你用一个普通文件查看器打开一个日志文件时,它会用纯文本格式显示所有信息(说得更直白些:就是单调的纯白文字),这样很难发现和理解哪里有警告或错误信息。为了克服这种情况、快速找到警告和错误信息来解决问题lnav 是一个随手可用的更好的解决方案。
大部分常见的 Linux 日志文件都存放在 `/var/log/` 目录下。
**lnav 自动检测以下日志格式**
* Common Web Access Log format普通 web 访问日志格式)
* CUPS page_log
* Syslog
* Glog
* VMware ESXi/vCenter Logs
* dpkg.log
* uwsgi
* “Generic” 以时间戳开始的消息
* Strace
* sudo
* gzip & bzip
**lnav 高级功能**
* 单一日志视图 - 基于消息时间戳,所有日志文件内容都会被合并到一个单一视图。
* 自动日志格式检测 - lnav 支持大部分日志格式
* 过滤器 - 能进行基于正则表达式的过滤(用法见本列表后的示例)
* 时间线视图
* Pretty-Print 视图
* 使用 SQL 查询日志
* 自动数据抽取
* 实时操作
* 语法高亮
* Tab 补全
* 当你查看相同文件集时自动保存和恢复会话信息。
* Headless 模式
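针对上面列出的过滤器、SQL 查询和 Headless 模式,这里给出几个辅助示例(基于 lnav 0.8.x 的常见用法,具体语法请以 lnav 的内置帮助为准):

```
# 在 lnav 界面中按 ":" 进入命令模式,可以用正则表达式过滤:
:filter-in error              # 只显示匹配 error 的行
:filter-out debug             # 隐藏匹配 debug 的行
# 按 ";" 进入 SQL 模式(示例使用 syslog 格式对应的 syslog_log 表):
;SELECT count(*) FROM syslog_log
# Headless 模式:不进入交互界面,处理后直接输出到标准输出:
$ lnav -n /var/log/syslog
```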
#### 如何在 Linux 中安装 lnav
大部分发行版Debian、Ubuntu、Mint、Fedora、suse、openSUSE、Arch Linux、Manjaro、Mageia 等等)默认都有 lnav 软件包,在软件包管理器的帮助下,我们可以很轻易地从发行版官方仓库中安装它。对于 CentOS/RHEL我们需要启用 **[EPEL 仓库][1]**。
```
[在 Debian/Ubuntu/LinuxMint 上安装 lnav]
$ sudo apt-get install lnav
[在 RHEL/CentOS 上安装 lnav]
$ sudo yum install lnav
[在 Fedora 上安装 lnav]
$ sudo dnf install lnav
[在 openSUSE 上安装 lnav]
$ sudo zypper install lnav
[在 Mageia 上安装 lnav]
$ sudo urpmi lnav
[在基于 Arch Linux 的系统上安装 lnav]
$ yaourt -S lnav
```
如果你的发行版没有 lnav 软件包,别担心,开发者提供了 `.rpm` 和 `.deb` 安装包,因此我们可以毫无问题地轻易安装。确保你从[开发者 github 页面][4]下载最新版本的安装包。
```
[在 Debian/Ubuntu/LinuxMint 上安装 lnav]
$ sudo wget https://github.com/tstack/lnav/releases/download/v0.8.1/lnav_0.8.1_amd64.deb
$ sudo dpkg -i lnav_0.8.1_amd64.deb
[在 RHEL/CentOS 上安装 lnav]
$ sudo yum install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
[在 Fedora 上安装 lnav]
$ sudo dnf install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
[在 openSUSE 上安装 lnav]
$ sudo zypper install https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
[在 Mageia 上安装 lnav]
$ sudo rpm -ivh https://github.com/tstack/lnav/releases/download/v0.8.1/lnav-0.8.1-1.x86_64.rpm
```
#### 不带参数运行 lnav
默认情况下你不带参数运行 lnav 时它会打开 `syslog` 文件。
```
# lnav
```
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png)
][5]
#### 使用 lnav 查看特定日志文件
要用 lnav 查看特定的日志文件,在 lnav 命令后面添加日志文件路径。例如我们想看 `/var/log/dpkg.log` 日志文件。
```
# lnav /var/log/dpkg.log
```
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png)
][6]
#### 用 lnav 查看多个日志文件
要用 lnav 查看多个日志文件,在 lnav 命令后面逐个添加日志文件路径,用一个空格隔开。例如我们想查看 `/var/log/dpkg.log` 和 `/var/log/kern.log` 日志文件。
左边的颜色栏帮助显示消息所属的文件。另外顶部状态栏还会显示当前日志文件的名称。为了显示多个日志文件,大部分应用习惯打开多个窗口、或者在窗口中水平或竖直切分,但 lnav 使用不同的方式(它基于日期组合在同一个窗口显示多个日志文件)。
```
# lnav /var/log/dpkg.log /var/log/kern.log
```
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png)
][7]
#### 使用 lnav 查看压缩的日志文件
要查看并同时解压被压缩的日志文件zip、gzip、bzip在 lnav 命令后面添加 `-r` 选项。
```
# lnav -r /var/log/Xorg.0.log.old.gz
```
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png)
][8]
#### 直方图视图
首先运行 `lnav` 然后按 `i` 键切换到/出直方图视图。
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png)
][9]
#### 查看日志解析器结果
首先运行 `lnav` 然后按 `p` 键打开显示日志解析器结果。
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png)
][10]
#### 语法高亮
你可以搜索任何给定的字符串,它会在屏幕上高亮显示。首先运行 `lnav` 然后按 `/` 键并输入你想查找的字符串。为了测试,我搜索字符串 `Default`,看下面的截图。
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png)
][11]
#### Tab 补全
命令窗口支持大部分操作的 tab 补全。例如,在进行搜索时,你可以使用 tab 补全屏幕上显示的单词,而不需要复制粘贴。为了测试,我搜索字符串 `/var/log/Xorg`,看下面的截图。
[
![](http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png)
][12]
--------------------------------------------------------------------------------
via: http://www.2daygeek.com/install-and-use-advanced-log-file-viewer-navigator-lnav-in-linux/
作者:[Magesh Maruthamuthu][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.2daygeek.com/author/magesh/
[1]:http://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
[2]:http://www.2daygeek.com/author/magesh/
[3]:http://lnav.org/
[4]:https://github.com/tstack/lnav/releases
[5]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-1.png
[6]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-2.png
[7]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-3.png
[8]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-6.png
[9]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-4.png
[10]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-5.png
[11]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-7.png
[12]:http://www.2daygeek.com/wp-content/uploads/2017/01/lnav-advanced-log-file-viewer-8.png

View File

@ -1,41 +1,39 @@
Windows Trojan hacks into embedded devices to install Mirai
Windows 木马黑进嵌入式设备来安装 Mirai
============================================================
> The Trojan tries to authenticate over different protocols with factory default credentials and, if successful, deploys the Mirai bot
> 木马尝试使用出厂默认凭证对不同协议进行身份验证,如果成功则会部署 Mirai。
![Windows Trojan uses brute-force attacks against IoT devices.](http://images.techhive.com/images/idgnsImport/2015/08/id-2956907-matrix-434036-100606417-large.jpg)
Attackers have started to use Windows and Android malware to hack into embedded devices, dispelling the widely held belief that if such devices are not directly exposed to the Internet they're less vulnerable.
攻击者已经开始使用 Windows 和 Android 恶意软件入侵嵌入式设备,这打破了人们普遍持有的看法:如果这类设备不直接暴露在互联网上,它们就不那么容易受到攻击。
Researchers from Russian antivirus vendor Doctor Web have recently [come across a Windows Trojan program][21] that was designed to gain access to embedded devices using brute-force methods and to install the Mirai malware on them.
来自俄罗斯防病毒供应商 Doctor Web 的研究人员最近[遇到了一个 Windows 木马程序][21],它使用暴力方法访问嵌入式设备,并安装 Mirai 恶意软件。
Mirai is a malware program for Linux-based internet-of-things devices, such as routers, IP cameras, digital video recorders and others. It's used primarily to launch distributed denial-of-service (DDoS) attacks and spreads over Telnet by using factory device credentials.
Mirai 是一种针对基于 Linux 的物联网设备例如路由器、IP 摄像机、数字录像机等的恶意程序。它主要被用来发动分布式拒绝服务DDoS攻击并通过使用出厂默认凭据经由 Telnet 传播。
The Mirai botnet has been used to launch some of the largest DDoS attacks over the past six months. After [its source code was leaked][22], the malware was used to infect more than 500,000 devices.
Mirai 的僵尸网络在过去六个月里一直被用来发起最大型的 DDoS 攻击。[它的源代码泄漏][22]之后,恶意软件被用来感染超过 50 万台设备。
Once installed on a Windows computer, the new Trojan discovered by Doctor Web downloads a configuration file from a command-and-control server. That file contains a range of IP addresses to attempt authentication over several ports including 22 (SSH) and 23 (Telnet).
在一台 Windows 计算机上安装之后Doctor Web 发现的这个新木马会从命令控制服务器下载配置文件。该文件包含一系列 IP 地址,用来通过多个端口(包括 22SSH和 23Telnet尝试进行身份验证。
如果身份验证成功,恶意软件将会根据受害系统的类型执行配置文件中指定的某些命令。对于通过 Telnet 访问的 Linux 系统,木马会下载并执行一个二进制包,然后安装 Mirai 僵尸程序。
If authentication is successful, the malware executes certain commands specified in the configuration file, depending on the type of compromised system. In the case of Linux systems accessed via Telnet, the Trojan downloads and executes a binary package that then installs the Mirai bot.
如果受影响的设备没有被设计成或配置为可以从互联网直接访问,许多物联网供应商就会淡化漏洞的严重性。这种思维方式假定局域网是可信任的、安全的环境。
Many IoT vendors downplay the severity of vulnerabilities if the affected devices are not intended or configured for direct access from the Internet. This way of thinking assumes that LANs are trusted and secure environments.
然而事实从来并非如此,跨站点请求伪造等其他威胁已经流行多年。但 Doctor Web 发现的新木马似乎是第一个专门设计用于劫持嵌入式或物联网设备的 Windows 恶意软件。
This was never really the case, with other threats like cross-site request forgery attacks going around for years. But the new Trojan that Doctor Web discovered appears to be the first Windows malware specifically designed to hijack embedded or IoT devices.
Doctor Web 发现的新木马被称为 [Trojan.Mirai.1][23],表明攻击者还可以使用受害的计算机来攻击不能从互联网直接访问的物联网设备。
This new Trojan found by Doctor Web, dubbed [Trojan.Mirai.1][23], shows that attackers can also use compromised computers to target IoT devices that are not directly accessible from the internet.
Infected smartphones can be used in a similar way. Researchers from Kaspersky Lab have already [found an Android app][24] designed to perform brute-force password guessing attacks against routers over the local network.
受感染的智能手机可以以类似的方式使用。卡巴斯基实验室的研究人员已经[发现了一个 Android 程序][24] 通过本地网络对路由器执行暴力密码猜测攻击。
--------------------------------------------------------------------------------
via: http://www.csoonline.com/article/3168357/security/windows-trojan-hacks-into-embedded-devices-to-install-mirai.html
作者:[ Lucian Constantin][a]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,296 @@
在 Linux 中如何用 bash-support 插件将 Vim 编辑器打造成一个 Bash-IDE
============================================================
IDE([集成开发环境][1])就是一个软件,它为了最大化程序员生产效率,提供了很多编程所需的设施和组件。 IDE 将所有开发集中到一个程序中,使得程序员可以编写、修改、编译、部署以及调试程序。
在这篇文章中,我们会介绍如何通过使用 bash-support vim 插件将 [Vim 编辑器安装和配置][2]为一个 Bash-IDE。
#### 什么是 bash-support.vim 插件?
bash-support 是一个高度定制化的 vim 插件,它允许你插入:文件头、补全语句、注释、函数、以及代码块。它也使你可以进行语法检查、使脚本可执行、通过一次按键启动调试器;完成所有的这些而不需要关闭编辑器。
它使用快捷键(映射),通过有组织、一致的文件内容编写/插入,使得 bash 脚本变得有趣和愉快。
插件当前版本是 4.3,版本 4.0 是对版本 3.12.1 的重写4.0 及之后的版本基于一个全新的、更强大的、和之前版本模板语法不同的模板系统。
### 如何在 Linux 中安装 Bash-support 插件
用下面的命令下载最新版本的 bash-support 插件:
```
$ cd Downloads
$ curl http://www.vim.org/scripts/download_script.php?src_id=24452 >bash-support.zip
```
按照如下步骤安装;在你的主目录创建 `.vim` 目录(如果它不存在的话),进入该目录并提取 bash-support.zip 内容:
```
$ mkdir ~/.vim
$ cd .vim
$ unzip ~/Downloads/bash-support.zip
```
下一步,在 `.vimrc` 文件中激活它:
```
$ vi ~/.vimrc
```
通过插入下面几行:
```
filetype plugin on
set number " 可选:在 vim 中显示行号
```
### 如何在 Vim 编辑器中使用 Bash-support 插件
为了简化使用,通常使用的结构和特定操作可以分别通过键映射来插入/执行。~/.vim/doc/bashsupport.txt、~/.vim/bash-support/doc/bash-hotkeys.pdf 或者 ~/.vim/bash-support/doc/bash-hotkeys.tex 文件中介绍了这些映射。
##### 重要:
1. 所有映射(`(\)+character(s)` 组合)都是针对特定文件类型的:为了避免和其它插件的映射冲突,它们只适用于 sh 文件。
2. 使用键映射的时候打字速度也有影响,引导符 `('\')` 和后面字符的组合只有在特定短时间内输入才能被识别出来(很可能少于 3 秒,这是基于推测的)。
下面我们会介绍和学习使用这个插件一些显著的功能:
#### 如何为新脚本自动生成文件头
看下面的事例文件头,为了要在你所有的新脚本中自动创建该文件头,请按照以下步骤操作。
[
![脚本事例文件头选项](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][3]
脚本事例文件头选项
首先设置你的个人信息(作者名称、作者参考、组织、公司等)。在一个 Bash 缓冲区(像下面这样打开一个测试脚本)中使用映射 `\ntw` 启动模板设置向导。
选中选项1设置个性化文件然后按回车键。
```
$ vi test.sh
```
[
![在脚本文件中设置个性化信息](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][4]
在脚本文件中设置个性化信息
之后再次输入回车键。然后再一次选中选项1设置个性化文件的路径并输入回车。
[
![设置个性化文件路径](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][5]
设置个性化文件路径
设置向导会把目标文件 .vim/bash-support/rc/personal.templates 拷贝到 .vim/templates/personal.templates打开并编辑它在这里你可以输入你的信息。
`i` 键像截图那样在一个单引号中插入合适的值。
[
![在脚本文件头添加信息](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][6]
在脚本文件头添加信息
一旦你设置了正确的值,输入 `:wq` 保存并退出文件。关闭 Bash 测试脚本,打开另一个脚本来测试新的配置。现在文件头中应该有和下面截图类似的你的个人信息:
```
$ vi test2.sh
```
[
![自动添加文件头到脚本](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][7]
自动添加文件头到脚本
#### 使 Bash-support 插件帮助信息可访问
为此,在 Vim 命令行输入下面的命令并按回车键,它会创建 .vim/doc/tags 文件:
```
:helptags $HOME/.vim/doc/
```
[
![在 Vi 编辑器添加插件帮助](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][8]
在 Vi 编辑器添加插件帮助
#### 如何在 Shell 脚本中插入注释
要插入一个块注释,在普通模式下输入 `\cfr`
[
![添加注释到脚本](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][9]
添加注释到脚本
#### 如何在 Shell 脚本中插入语句
下面是一些用于插入语句的键映射(`n`  普通模式, `i`  插入模式):
1. `\sc`  case in … esac (n, I)
2. `\sei`  elif then (n, I)
3. `\sf`  for in do done (n, i, v)
4. `\sfo`  for ((…)) do done (n, i, v)
5. `\si`  if then fi (n, i, v)
6. `\sie`  if then else fi (n, i, v)
7. `\ss`  select in do done (n, i, v)
8. `\su`  until do done (n, i, v)
9. `\sw`  while do done (n, i, v)
10. `\sfu`  function (n, i, v)
11. `\se`  echo -e “…” (n, i, v)
12. `\sp`  printf “…” (n, i, v)
13. `\sa`  数组元素, ${.[.]} (n, i, v) 和其它更多的数组功能。
#### 插入一个函数和函数头
输入 `\sfu` 添加一个新的空函数,然后添加函数名并按回车键创建它。之后,添加你的函数代码。
[
![在脚本中插入新函数](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][10]
在脚本中插入新函数
为了给上面的函数创建函数头,输入 `\cfu`,输入函数名称,按回车键并填入合适的值(名称、介绍、参数、返回值):
[
![在脚本中创建函数头](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][11]
在脚本中创建函数头
#### 更多关于添加 Bash 语句的例子
下面是一个使用 `\si` 插入一条 if 语句的例子:
[
![在脚本中插入语句](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][12]
在脚本中插入语句
下面的例子显示使用 `\se` 添加一条 echo 语句:
[
![在脚本中添加 echo 语句](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][13]
在脚本中添加 echo 语句
#### 如何在 Vi 编辑器中使用运行操作
下面是一些运行操作键映射的列表:
1. `\rr`  更新文件,运行脚本 (n, I)
2. `\ra`  设置脚本命令行参数 (n, I)
3. `\rc`  更新文件,检查语法 (n, I)
4. `\rco`  语法检查选项 (n, I)
5. `\rd`  启动调试器 (n, I)
6. `\re`  使脚本可/不可执行(*) (in)
#### 使脚本可执行
编写完脚本后,保存它然后输入 `\re` 和回车键使它可执行。
[
![使脚本可执行](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][14]
使脚本可执行
#### 如何在 Bash 脚本中使用预定义代码片段
预定义代码片段是为了特定目的包含了已写好代码的文件。为了添加代码段,输入 `\nr` 和 `\nw` 读/写预定义代码段。输入下面的命令列出默认的代码段:
```
$ ls ~/.vim/bash-support/codesnippets/
```
[
![代码段列表](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][15]
代码段列表
为了使用代码段,例如 free-software-comment输入 `\nr` 并使用自动补全功能选择它的名称,然后输入回车键:
[
![添加代码段到脚本](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][16]
添加代码段到脚本
#### 创建自定义预定义代码段
可以在  ~/.vim/bash-support/codesnippets/ 目录下编写你自己的代码段。另外,你还可以从你正常的脚本代码中创建你自己的代码段:
1. 选择你想作为代码段的部分代码,然后输入  `\nw` 并给它一个相近的文件名。
2. 要读入它,只需要输入 `\nr` 然后使用该文件名,就可以添加你自定义的代码段(见下面的示例)。
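举个例子(纯属示例,文件名和内容都可以自己定义),你可以把常用的 Bash 严格模式样板保存为一个代码段文件,之后在任何脚本中用 `\nr` 读入:

```
$ cat ~/.vim/bash-support/codesnippets/strict-mode
set -o errexit    # 任何命令失败时立即退出
set -o nounset    # 使用未定义变量时报错
set -o pipefail   # 管道中任一命令失败即视为整体失败
```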
#### 在当前光标处查看内建和命令帮助
要显示帮助,在普通模式下输入:
1. `\hh`  内建帮助
2. `\hm`  命令帮助
[
![查看内建命令帮助](http://www.tecmint.com/wp-content/uploads/2017/02/View-Built-in-Command-Help.png)
][17]
查看内建命令帮助
更多参考资料,可以查看文件:
```
~/.vim/doc/bashsupport.txt # 在线文档的副本
~/.vim/doc/tags
```
访问 Bash-support 插件 GitHub 仓库:[https://github.com/WolfgangMehner/bash-support][18]
在 Vim 网站访问 Bash-support 插件:[http://www.vim.org/scripts/script.php?script_id=365][19]
就是这些啦,在这篇文章中,我们介绍了在 Linux 中使用 Bash-support 插件安装和配置 Vim 为一个 Bash-IDE 的步骤。快去发现这个插件其它令人兴奋的功能吧,一定要在评论中和我们分享哦。
--------------------------------------------------------------------------------
作者简介:
Aaron Kili 是一个 Linux 和 F.O.S.S 爱好者、Linux 系统管理员、网络开发人员,现在也是 TecMint 的内容创作者,他喜欢和电脑一起工作,坚信共享知识。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/use-vim-as-bash-ide-using-bash-support-in-linux/
作者:[Aaron Kili][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/best-linux-ide-editors-source-code-editors/
[2]:http://www.tecmint.com/vi-editor-usage/
[3]:http://www.tecmint.com/wp-content/uploads/2017/02/Script-Header-Options.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/02/Set-Personalization-in-Scripts.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/02/Set-Personalization-File-Location.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-Info-in-Script-Header.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/02/Auto-Adds-Header-to-Script.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-Plugin-Help-in-Vi-Editor.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-Comments-to-Scripts.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/02/Insert-New-Function-in-Script.png
[11]:http://www.tecmint.com/wp-content/uploads/2017/02/Create-Header-Function-in-Script.png
[12]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-Insert-Statement-to-Script.png
[13]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-echo-Statement-to-Script.png
[14]:http://www.tecmint.com/wp-content/uploads/2017/02/make-script-executable.png
[15]:http://www.tecmint.com/wp-content/uploads/2017/02/list-of-code-snippets.png
[16]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-Code-Snippet-to-Script.png
[17]:http://www.tecmint.com/wp-content/uploads/2017/02/View-Built-in-Command-Help.png
[18]:https://github.com/WolfgangMehner/bash-support
[19]:http://www.vim.org/scripts/script.php?script_id=365

View File

@ -0,0 +1,112 @@
# 从损坏的 Linux EFI 安装中恢复
在过去的十多年里Linux 发行版在安装前、安装过程中、以及安装后偶尔会失败,但我总是有办法恢复系统并继续正常工作。然而,[Solus][1] 损坏了我的笔记本。
GRUB 恢复?不行。重装?还是不行。Ubuntu 拒绝安装,报错说目标设备不是这个就是那个。哇!我之前还没有遇到过像这样的事情。我的测试机已变成无用的砖块。我们该失望吗?不,绝对不。让我来告诉你怎样可以修复它吧。
### 问题详情
所有事情都从 Solus 尝试安装它自己的启动引导器 goofiboot 开始。不知道什么原因,它没有成功完成安装,留给我的就是一个无法启动的系统。过了 BIOS 之后,我看到的是一个 GRUB 恢复终端。
![安装失败](http://www.dedoimedo.com/images/computers-years/2016-2/solus-installation-failed.png)
我尝试在终端中手动修复,使用类似和我在我的扩展 [GRUB2 指南][2]中介绍的这个或那个命令。但还是不行。然后我尝试按照我在[GRUB2 和 EFI 指南][3]中的建议从 Live CD译者注Live CD 是一个完整的计算机可引导安装媒介,它包括在计算机内存中运行的操作系统,而不是从硬盘驱动器加载; CD 本身是只读的。 它允许用户为任何目的运行操作系统,而无需安装它或对计算机的配置进行任何更改)中恢复。我用 efibootmgr 工具创建了一个条目,确保标记它为有效。正如我们之前在指南中做的那样,之前这些是能正常工作的。哎,现在这个方法也不起作用。
我尝试进行一次完整的 Ubuntu 安装,把它安装到 Solus 所在的分区,希望安装程序能给我一些有用的信息。但是 Ubuntu 无法完成安装。它报错failed to install into /target。又回到开始的地方了。怎么办
### 手动清除 EFI 分区
显然,我们的 EFI 分区出现了严重问题。简单回顾一下,如果你使用的是 UEFI那么你需要一个单独的 FAT-32 格式化分区。该分区用于存储 EFI 引导镜像。例如,当你安装 Fedora 时Fedora 引导镜像会被拷贝到 EFI 子目录。每个操作系统都会被存储到它自己的目录,一般是 /boot/efi/EFI/<操作系统版本>/。
![EFI 分区内容](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-efi-partition-contents.png)
在我的 [G50][4] 机器上这里有很多各种发行版测试条目包括centos、debian、fedora、mx-15、suse、Ubuntu、zorin 以及其它。这里也有一个 goofiboot 目录。但是 efibootmgr 并没有在它的菜单中显示 goofiboot 条目。显然这里出现了一些问题。
```
sudo efibootmgr -d /dev/sda
BootCurrent: 0001
Timeout: 0 seconds
BootOrder: 0001,0005,2003,0000,2001,2002
Boot0000* Lenovo Recovery System
Boot0001* ubuntu
Boot0003* EFI Network 0 for IPv4 (68-F7-28-4D-D1-A1)
Boot0004* EFI Network 0 for IPv6 (68-F7-28-4D-D1-A1)
Boot0005* Windows Boot Manager
Boot0006* fedora
Boot0007* suse
Boot0008* debian
Boot0009* mx-15
Boot2001* EFI USB Device
Boot2002* EFI DVD/CDROM
Boot2003* EFI Network
...
```
P.S. 上面的输出是在 LIVE 会话中运行命令生成的!
我决定清除所有非默认的以及非微软的条目然后重新开始。显然,有些东西被损坏了,妨碍了新的发行版设置它们自己的启动引导程序。因此我删除了 /boot/efi/EFI 目录下面除 Boot 和 Windows 外的所有目录。同时,我也通过删除所有额外的条目更新了启动管理器。
```
efibootmgr -b <hex> -B    # -b 指定条目编号,-B 删除该条目
```
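把上面两步合起来,一个假设的完整清理流程大致如下(其中的分区名、目录名和条目编号都只是示例,请先确认你自己机器上的实际值):

```
$ sudo mount /dev/sda1 /boot/efi                   # 如果 ESP 尚未挂载(示例分区)
$ cd /boot/efi/EFI
$ sudo rm -r fedora suse debian mx-15 goofiboot    # 删除除 Boot 和 Windows 外的目录(示例)
$ sudo efibootmgr                                  # 查看剩余条目的编号
$ sudo efibootmgr -b 0006 -B                       # 按编号逐个删除无效条目(示例编号)
```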
最后,我重新安装了 Ubuntu并仔细监控 GRUB 安装和配置的过程。这次,成功完成啦。正如预期的那样,几个无效条目出现了一些错误,但整个安装过程完成就好了。
![安装错误](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-errors.jpg)
![安装成功](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-successful.jpg)
### 额外阅读
如果你不喜欢这种手动修复,你可以阅读:
* [Boot-Info][5] 手册,里面有帮助你恢复系统的自动化工具
* [Boot-repair-cd][6] 自动恢复工具下载页面
### 总结
如果你遇到由于 EFI 分区破坏而导致系统严重瘫痪的情况,那么你可能需要遵循本指南中的建议。 删除所有非默认条目。 如果你使用 Windows 进行多重引导,请确保不要修改任何和 Microsoft 相关的东西。 然后相应地更新引导菜单,以便删除损坏的条目。 重新运行所需发行版的安装设置,或者尝试用之前介绍的比较不严格的修复方法。
我希望这篇小文章能帮你节省一些时间。Solus 对我系统的更改使我很懊恼。这些事情本不应该发生,恢复过程也应该更简单。不管怎样,虽然事情似乎很可怕,修复并不是很难。你只需要删除损害的文件然后重新开始。你的数据应该不会受到影响,你也应该能够顺利进入到运行中的系统并继续工作。开始吧。
加油。
--------------------------------------------------------------------------------
作者简介:
我叫 Igor Ljubuncic。38 岁,已婚,但还没有小孩。我现在是一家云技术公司的首席工程师,这是一个大胆的新领域。在 2015 年年初之前,我在世界上最大的 IT 公司之一的工程计算团队担任操作系统架构师,开发新的基于 Linux 的解决方案、优化内核、在 Linux 上实现一些好的想法。在这之前,我是一个为高性能计算环境设计创新解决方案团队的技术主管。其它一些头衔包括系统专家、系统开发员或者类似的。所有这些都是我的爱好,但从 2008 年开始,就是有报酬的工作。还有什么比这更令人满意的呢?
从 2004 到 2008 年,我通过在医疗图像行业担任物理专家养活自己。我的工作主要关注解决问题和开发算法。为此,我广泛使用 Matlab主要用于信号和图像处理。另外我已通过几个主要工程方法的认证包括 MEDIC Six Sigma Green Belt、实验设计以及统计工程。
有时候我也会写书,包括 Linux 创新及技术工作。
往下滚动你可以查看我开源项目的完整列表、发表文章以及专利。
有关我奖项、提名以及 IT 相关认证的完整列表,稍后也会有。
-------------
via: http://www.dedoimedo.com/computers/grub2-efi-corrupt-part-recovery.html
作者:[Igor Ljubuncic][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.dedoimedo.com/computers/solus-1-2-review.html
[2]:http://www.dedoimedo.com/computers/grub-2.html
[3]:http://www.dedoimedo.com/computers/grub2-efi-recovery.html
[4]:http://www.dedoimedo.com/computers/lenovo-g50-distros-second-round.html
[5]:https://help.ubuntu.com/community/Boot-Info
[6]:https://sourceforge.net/projects/boot-repair-cd/

View File

@ -0,0 +1,234 @@
如何在 CentOS 7 中使用 SSL/TLS 加固 FTP 服务器进行安全文件传输
============================================================
在一开始的设计中FTP文件传输协议是不安全的意味着它不会加密两台机器之间传输的数据以及用户的凭据。这使得数据和服务器安全面临很大威胁。
在这篇文章中,我们会介绍在 CentOS/RHEL 7 以及 Fedora 中如何在 FTP 服务器中手动启用数据加密服务;我们会介绍使用 SSL/TLS 证书保护 VSFTPDVery Secure FTP Daemon服务的各个步骤。
#### 前提条件:
1. 你必须已经[在 CentOS 7 中安装和配置 FTP 服务][1]
在我们开始之前,要注意本文中所有命令都以 root 用户运行,否则,如果现在你不是使用 root 用户控制服务器,你可以使用 [sudo 命令][2] 去获取 root 权限。
### 第一步:生成 SSL/TLS 证书和密钥
1. 我们首先要在 `/etc/ssl` 目录下创建用于保存 SSL/TLS 证书和密钥文件的子目录:
```
# mkdir /etc/ssl/private
```
2. 然后运行下面的命令,为 vsftpd 创建证书和密钥并保存到同一个文件中,下面会解释所使用的每个选项。
1. req - 是 X.509 Certificate Signing Request CSR证书签名请求管理的一个命令。
2. x509 - X.509 证书数据管理。
3. days - 定义证书的有效天数。
4. newkey - 指定证书密钥处理器。
5. rsa:2048 - RSA 密钥处理器,会生成一个 2048 位的密钥。
6. keyout - 设置密钥存储文件。
7. out - 设置证书存储文件,注意证书和密钥都保存在一个相同的文件:/etc/ssl/private/vsftpd.pem。
```
# openssl req -x509 -nodes -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem -days 365 -newkey rsa:2048
```
上面的命令会让你回答以下的问题,记住使用你自己情况的值。
```
Country Name (2 letter code) [XX]:IN
State or Province Name (full name) []:Lower Parel
Locality Name (eg, city) [Default City]:Mumbai
Organization Name (eg, company) [Default Company Ltd]:TecMint.com
Organizational Unit Name (eg, section) []:Linux and Open Source
Common Name (eg, your name or your server's hostname) []:tecmint
Email Address []:admin@tecmint.com
```
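生成之后,你可以用下面的命令快速确认证书的主题和有效期(这是一个补充示例,并非原教程的步骤):

```
# openssl x509 -in /etc/ssl/private/vsftpd.pem -noout -subject -dates
```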
### 第二步:配置 VSFTPD 使用 SSL/TLS
3. 在我们进行任何 VSFTPD 配置之前,首先开放 990 和 40000-50000 端口,以便在 VSFTPD 配置文件中分别定义 TLS 连接的端口和被动端口的端口范围:
```
# firewall-cmd --zone=public --permanent --add-port=990/tcp
# firewall-cmd --zone=public --permanent --add-port=40000-50000/tcp
# firewall-cmd --reload
```
4. 现在,打开 VSFTPD 配置文件并在文件中指定 SSL 的详细信息:
```
# vi /etc/vsftpd/vsftpd.conf
```
找到 `ssl_enable` 选项,把它的值设置为 `YES` 以激活 SSL另外由于 TLS 比 SSL 更安全,我们会使用 `ssl_tlsv1_2` 选项让 VSFTPD 使用更严格的 TLS
```
ssl_enable=YES
ssl_tlsv1_2=YES
ssl_sslv2=NO
ssl_sslv3=NO
```
5. 然后,添加下面的行定义 SSL 证书和密钥文件的位置:
```
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
```
6. 下面,我们要阻止匿名用户使用 SSL然后强制所有非匿名用户登录使用安全的 SSL 连接进行数据传输和登录过程中的密码发送:
```
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
```
7. 另外,我们还可以添加下面的选项增强 FTP 服务器的安全性。当选项 `require_ssl_reuse` 被设置为 `YES` 时,所有 SSL 数据连接都被要求体现 SSL 会话重用,以证明它们知道与控制通道相同的主密钥。
因此,我们需要把它关闭。
```
require_ssl_reuse=NO
```
另外,我们还要用 `ssl_ciphers` 选项选择 VSFTPD 允许用于加密 SSL 连接的 SSL 密码。这可以大大限制尝试使用在漏洞中发现的特定密码的攻击者:
```
ssl_ciphers=HIGH
```
8. 现在,设置被动端口的端口范围(最小和最大端口)。
```
pasv_min_port=40000
pasv_max_port=50000
```
9. 选择性启用 `debug_ssl` 选项以允许 SSL 调试,意味着 OpenSSL 连接诊断会被记录到 VSFTPD 日志文件:
```
debug_ssl=YES
```
保存所有更改并关闭文件。然后让我们重启 VSFTPD 服务:
```
# systemctl restart vsftpd
```
### 第三步:用 SSL/TLS 连接测试 FTP 服务器
10. 完成上面的所有配置之后,像下面这样通过在命令行中尝试使用 FTP 测试 VSFTPD 是否使用 SSL/TLS 连接:
```
# ftp 192.168.56.10
Connected to 192.168.56.10 (192.168.56.10).
220 Welcome to TecMint.com FTP service.
Name (192.168.56.10:root) : ravi
530 Non-anonymous sessions must use encryption.
Login failed.
421 Service not available, remote server has closed connection
ftp>
```
[
![验证 FTP SSL 安全连接](http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-Secure-Connection.png)
][3]
验证 FTP SSL 安全连接
从上面的截图中,我们可以看到这里有个错误提示我们 VSFTPD 只允许用户从支持加密服务的客户端登录。
命令行 FTP 客户端并不支持加密,因此产生了这个错误。所以,为了安全地连接到服务器,我们需要一个支持 SSL/TLS 连接的 FTP 客户端,例如 FileZilla。
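另外,如果想先从命令行验证服务器的 TLS 配置是否生效,可以借助 openssl 自带的 s_client 工具(这只是一个辅助示例IP 地址请换成你自己的服务器):

```
# 通过显式 FTPSAUTH TLS测试 21 端口的 TLS 握手,成功时会打印证书链和协商出的加密套件
# openssl s_client -connect 192.168.56.10:21 -starttls ftp
```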
### 第四步:安装 FileZilla 以便安全地连接到 FTP 服务器
11. FileZilla 是一个现代、流行且重要的跨平台 FTP 客户端,它默认支持 SSL/TLS 连接。
要在 Linux 上安装 FileZilla可以运行下面的命令
```
--------- On CentOS/RHEL/Fedora ---------
# yum install epel-release filezilla
--------- On Debian/Ubuntu ---------
$ sudo apt-get install filezilla
```
12. 当安装完成后(或者你已经安装了该软件),打开它,选择 File => Site Manager 或者按 `Ctrl + S` 打开 Site Manager 界面。
点击 New Site 按钮添加一个新的站点/主机连接详细信息。
[
![在 FileZilla 中添加新 FTP 站点](http://www.tecmint.com/wp-content/uploads/2017/02/Add-New-FTP-Site-in-Filezilla.png)
][4]
在 FileZilla 中添加新 FTP 站点
13. 下一步,像下面这样设置主机/站点名称、添加 IP 地址、定义使用的协议、加密和登录类型(使用你自己情况的值):
```
Host: 192.168.56.10
Protocol: FTP File Transfer Protocol
Encryption: Require explicit FTP over TLS #recommended
Logon Type: Ask for password #recommended
User: username
```
[
![在 Filezilla 中添加 FTP 服务器详细信息](http://www.tecmint.com/wp-content/uploads/2017/02/Add-FTP-Server-Details-in-Filezilla.png)
][5]
在 Filezilla 中添加 FTP 服务器详细信息
14. 然后点击 Connect再次输入密码然后验证用于 SSL/TLS 连接的证书,再一次点击 `OK` 连接到 FTP 服务器:
[
![验证 FTP SSL 证书](http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-SSL-Certificate.png)
][6]
验证 FTP SSL 证书
到了这里,我们应该使用 TLS 连接成功地登录到了 FTP 服务器,在下面的界面中检查连接状态部分获取更多信息。
[
![通过 TLS/SSL 连接到 FTP 服务器](http://www.tecmint.com/wp-content/uploads/2017/02/connected-to-ftp-server-with-tls.png)
][7]
通过 TLS/SSL 连接到 FTP 服务器
15. 最后,在文件目录尝试 [从本地传输文件到 FTP 服务器][8],看 FileZilla 界面后面的部分查看文件传输相关的报告。
[
![使用 FTP 安全地传输文件](http://www.tecmint.com/wp-content/uploads/2017/02/Transfer-Files-Securely-Using-FTP.png)
][9]
使用 FTP 安全地传输文件
就是这些。记住 FTP 默认是不安全的,除非我们像上面介绍的那样配置它使用 SSL/TLS 连接。在下面的评论框中和我们分享你关于这篇文章/主题的想法吧。
--------------------------------------------------------------------------------
作者简介:
Aaron Kili 是一个 Linux 和 F.O.S.S 的爱好者Linux 系统管理员,网络开发员,目前也是 TecMint 的内容创作者,他喜欢和电脑一起工作,并且坚信共享知识。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/secure-vsftpd-using-ssl-tls-on-centos/
作者:[Aaron Kili][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-ftp-server-in-centos-7/
[2]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
[3]:http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-Secure-Connection.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-New-FTP-Site-in-Filezilla.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-FTP-Server-Details-in-Filezilla.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-SSL-Certificate.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/02/connected-to-ftp-server-with-tls.png
[8]:http://www.tecmint.com/sftp-command-examples/
[9]:http://www.tecmint.com/wp-content/uploads/2017/02/Transfer-Files-Securely-Using-FTP.png

View File

@ -1,14 +1,14 @@
翻译中 [ChrisLeeGit](https://github.com/chrisleegit)
Assign Read/Write Access to a User on Specific Directory in Linux
给用户赋予指定目录的读写权限
============================================================
In a previous article, we showed you how to [create a shared directory in Linux][3]. Here, we will describe how to give read/write access to a user on a specific directory in Linux.
在上篇文章中我们向您展示了如何在Linux上[创建一个共享目录][3]。这次我们会为您介绍如何将Linux上指定目录的读写权限赋予用户。
There are two possible methods of doing this: the first is [using ACLs (Access Control Lists)][4] and the second is [creating user groups to manage file permissions][5], as explained below.
For the purpose of this tutorial, we will use following setup.
有两种方法可以实现这个目标:第一种是 [使用 ACLs (访问控制列表)][4] ,第二种是[创建用户组来管理文件权限][5],下面会一一介绍。
为了完成这个教程,我们将使用以下设置。
```
Operating system: CentOS 7
@ -17,34 +17,34 @@ Test user: tecmint
Filesystem type: Ext4
```
Make sure all commands are executed as root user or use the the [sudo command][6] with equivalent privileges.
请确认所有的命令都是以 root 用户执行的,或者借助 [sudo 命令][6] 来获取同等权限。
Lets start by creating the directory called `reports` using the mkdir command:
让我们开始吧!下面,先使用 mkdir 命令来创建一个名为 `reports` 的目录。
```
# mkdir -p /shares/project1/reports
```
### Using ACL to Give Read/Write Access to User on Directory
### 使用ACL来为用户赋予目录的读写权限
Important: To use this method, ensure that your Linux filesystem type (such as Ext3 and Ext4, NTFS, BTRFS) support ACLs.
重要提示打算使用此方法的话您需要确认您的Linux文件系统类型如 Ext3 and Ext4, NTFS, BTRFS支持 ACLs.
1. First, [check the current file system type][7] on your system, and also whether the kernel supports ACL as follows:
1. 首先, 依照以下命令在您的系统中[检查当前文件系统类型][7]并且查看内核是否支持ACL
```
# df -T | awk '{print $1,$2,$NF}' | grep "^/dev"
# grep -i acl /boot/config*
```
From the screenshot below, the filesystem type is Ext4 and the kernel supports POSIX ACLs as indicated by the CONFIG_EXT4_FS_POSIX_ACL=y option.
从下方的截屏可以看到,文件系统类型是 Ext4并且从 CONFIG_EXT4_FS_POSIX_ACL=y 选项可以发现内核是支持 POSIX ACLs 的。
[
![Check Filesystem Type and Kernel ACL Support](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Filesystem-Type-and-Kernel-ACL-Support.png)
][8]
Check Filesystem Type and Kernel ACL Support
查看文件系统类型和内核的ACL支持。
2. Next, check if the file system (partition) is mounted with ACL option or not:
2. 接下来查看文件系统分区挂载时是否使用了ACL选项。
```
# tune2fs -l /dev/sda1 | grep acl
@ -53,16 +53,16 @@ Check Filesystem Type and Kernel ACL Support
![Check Partition ACL Support](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Partition-ACL-Support.png)
][9]
Check Partition ACL Support
查看分区是否支持ACL
From the above output, we can see that default mount option already has support for ACL. If in case its not enabled, you can enable it for the particular partition (/dev/sda3 for this case):
通过上边的输出可以发现,默认的挂载选项中已经支持 ACL。如果发现结果不如所愿你可以通过以下命令对指定分区此例中使用 /dev/sda3开启 ACL 支持。
```
# mount -o remount,acl /
# tune2fs -o acl /dev/sda3
```
3. Now, its time to assign a read/write access to a user `tecmint` to a specific directory called `reports`by running the following commands.
3. 现在是时候指定目录 `reports` 的读写权限分配给名为 `tecmint` 的用户了,依照以下命令执行即可。
```
# getfacl /shares/project1/reports # Check the default ACL settings for the directory
@ -73,66 +73,67 @@ From the above output, we can see that default mount option already has support
![Give Read/Write Access to Directory Using ACL](http://www.tecmint.com/wp-content/uploads/2017/03/Give-Read-Write-Access-to-Directory-Using-ACL.png)
][10]
Give Read/Write Access to Directory Using ACL
通过ACL对指定目录赋予读写权限
In the screenshot above, the user `tecmint` now has read/write (rw) permissions on directory /shares/project1/reports as seen from the output of the second getfacl command.
在上方的截屏中,从第二条 getfacl 命令的输出可以发现,用户 `tecmint` 现在已经拥有了 /shares/project1/reports 目录的读写rw权限。
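作为参考,下面是一个最小示例(假设文件系统已按前文启用 ACL演示赋权和验证这两步的核心命令

```
# setfacl -m u:tecmint:rw /shares/project1/reports    # 赋予用户 tecmint 读写权限
# getfacl /shares/project1/reports                    # 确认 user:tecmint:rw- 条目已生效
```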
For more information about ACL lists, do check out our following guides.
如果想要获取 ACL 的更多信息,可以在下方查看我们的其他指南。
1. [How to Use ACLs (Access Control Lists) to Setup Disk Quotas for Users/Groups][1]
2. [How to Use ACLs (Access Control Lists) to Mount Network Shares][2]
Now lets see the second method of assigning read/write access to a directory.
现在我们来看看如何使用第二种方法来为目录赋予读写权限。
### Using Groups to Give Read/Write Access to User on Directory
### 使用用户组来为用户赋予指定目录的读写权限
1. If the user already has a default user group (normally with same name as username), simply change the group owner of the directory.
1. 如果用户已经拥有了默认的用户组(通常组名与用户名相同),就可以简单地通过变更文件夹的所属用户组来完成。
```
# chgrp tecmint /shares/project1/reports
```
Alternatively, create a new group for multiple users (who will be given read/write permissions on a specific directory), as follows. However, this will c[reate a shared directory][11]:
另外,我们也可以通过以下方法为多个用户(需要赋予指定目录读写权限的)新建一个用户组。如此一来,也就[创建了一个共享目录][11]
```
# groupadd projects
```
2. Then add the user `tecmint` to the group `projects` as follows:
2. 接下来将用户 `tecmint` 添加到 `projects` 组中:
```
# usermod -aG projects tecmint # add user to projects
# groups tecmint # check users groups
```
3. Change the group owner of the directory to projects:
3. 将目录的所属用户组变更为 projects
```
# chgrp projects /shares/project1/reports
```
4. Now set read/write access for the group members:
4. 现在,给组成员设置读写权限。
```
# chmod -R 0760 /shares/project1/reports
# ls -l /shares/project1/            # 查看新权限
```
Thats it! In this tutorial, we showed you how to give read/write access to a user on a specific directory in Linux. If any issues, do ask via the comment section below.
好了这篇教程中我们向您展示了如何在Linux中将指定目录的读写权限赋予用户。若有疑问请在留言区中提问。
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
Aaron Kili 是 Linux 和 F.O.S.S 爱好者,未来的 Linux 系统管理员和网络开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并坚信分享知识。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/give-read-write-access-to-directory-in-linux/
作者:[Aaron Kili][a]
译者:[ChrisLeeGit](https://github.com/chrisleegit)
译者:[Mr-Ping](http://www.mr-ping.com)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,309 @@
如何用树莓派搭建一个自己的 web 服务器
============================================================
![How to set up a personal web server with a Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/lightbulb_computer_person_general_.png?itok=ZY3UuQQa "How to set up a personal web server with a Raspberry Pi")
>图片来源 : opensource.com
个人网络服务器即 “云”,只是由你自己拥有和控制它,而不是托管在一家大公司那里。
拥有一个自己的云有很多好处,包括:定制、免费存储、免费的互联网服务、通向开源软件之路、高品质的安全性、完全控制你的内容、快速做出更改的能力、一个实验代码的地方,等等。这些好处大部分是无法估量的,但在财务上,它们每个月可以为你节省超过 100 美元。
![Building your own web server with Raspberry Pi](https://opensource.com/sites/default/files/1-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Building your own web server with Raspberry Pi")
图片来自 Mitchell McLaughlin, CC BY-SA 4.0
我本可以选择 AWS ,但我更喜欢完全自由且安全性可控,并且我可以学一下这些东西是如何搭建的。
* 私有主机: 不使用 BlueHost 或 DreamHost
* 云存储:不使用 Dropbox, Box, Google Drive, Microsoft Azure, iCloud, 或是 AWS
* 确保内部安全
* HTTPSLets Encrypt
* 分析: Google
* OpenVPN不再需要 Private Internet Access 服务(该服务每月约需 $7
我所使用的物品清单:
* 树莓派 3 代 Model B
* MicroSD 卡 (推荐使用 32GB, [兼容树莓派的 SD 卡][1])
* USB microSD 卡读卡器
* 以太网络线
* 连接上 Wi-Fi 的路由器
* 树莓派盒子
* 亚马逊倍思的 MicroUSB 数据线
* 苹果的充电器
* USB 鼠标
* USB 键盘
* HDMI 线材
* 显示器 (支持接入 HDMI)
* MacBook Pro
### 步骤 1: 启动树莓派
下载最新发布的 Raspbian树莓派的操作系统。[Raspbian Jessie][6] 的 ZIP 包就可以用。解压缩或提取下载的文件,然后把它拷贝到 SD 卡里。使用 [Pi Filler][7] 可以让这个过程变得更简单。[下载 Pi Filler 1.3][8] 或最新的版本。解压或提取下载文件之后打开它,你应该会看到这样的提示:
![Pi Filler prompt](https://opensource.com/sites/default/files/2-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Pi Filler prompt")
确保 USB 读卡器这时还没有插上。如果已经插上了,那就先弹出。点 Continue 继续下一步。你会看到一个让你选择文件的界面,选择你之前解压缩后的树莓派系统文件。然后你会看到另一个提示,如图所示:
![USB card reader prompt](https://opensource.com/sites/default/files/3-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "USB card reader")
把 MicroSD 卡 (推荐 32GB ,至少 16GB) 插入到 USB MicroSD 卡读卡器里。然后把 USB 读卡器接入到你的电脑里。你可以把你的 SD 卡重命名为 "Raspberry" 以区别其他设备。然后点 continue。请先确保你的 SD 卡是空的,因为 Pi Filler 也会在运行时 _擦除_ 所有事先存在 SD 卡里的内容。如果你要备份卡里的内容,那你最好就马上备份。当你点 continue 的时候Raspbian OS 就会被写入到 SD 卡里。这个过程大概会花费一到三分钟左右。当写入完成后,弹出 USB 读卡器,把 SD 卡拔出来插入到树莓派的 SD 卡槽里。把电源线接上,给树莓派提供电源。这时树莓派就会自己启动。树莓派的默认登录账户信息是:
**用户名: pi
密码: raspberry**
当树莓派首次启动完成时,会跳出一个标题为 "Setup Options" 的配置界面,就像下面的图片一样 [2]:
![Raspberry Pi software configuration setup](https://opensource.com/sites/default/files/4-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Raspberry Pi software configuration setup")
选择 "Expand Filesystem" 这一选项并回车 [3]. 同时,我还推荐选择第二个选项 "Change User Password" 。这对保证安全性来说尤为重要。它还能个性化你的树莓派.
在选项列表中选择第三项 "Enable Boot To Desktop/Scratch" 并回车。这时会跳到另一个标题为 "Choose boot option" 的界面,就像下面这张图这样。
![Choose boot option](https://opensource.com/sites/default/files/5-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Choose boot option")
在 "Choose boot option" 这个界面选择第二个选项 "Desktop log in as user 'pi' at the graphical desktop" 并回车 [4]。完成这个操作之后会回到之前的 "Setup Options" 界面。如果没有回到之前的界面的话就选择当前界面底部的 "OK" 按钮并回车。
当这些操作都完成之后,选择当前界面底部的 "Finish" 按钮并回车,这时它就会自动重启。如果没有自动重启的话,就在终端里使用如下命令来重启。
**$ sudo reboot**
接上一步的重启,如果所有步骤都顺利进行的话,你会进入到类似下面这样桌面环境中。
![Raspberry Pi desktop](https://opensource.com/sites/default/files/6-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Raspberry Pi desktop")
当你进入了桌面之后,在终端中执行如下命令来更新树莓派的固件。
```
$ sudo apt-get update
$ sudo apt-get upgrade -y
$ sudo apt-get dist-upgrade -y
$ sudo rpi-update
```
这些操作可能会花费几分钟时间。完成之后,现在运行着的树莓派就是最新的了。
### 步骤 2: 配置树莓派
SSH 指的是 Secure Shell是一种加密网络协议可让你在计算机和树莓派之间安全地传输数据。 你可以从 Mac 的命令行控制你的树莓派,而无需显示器或键盘。
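注意:在较新的 Raspbian 版本中SSH 服务可能默认是关闭的(是否如此取决于你的系统版本,这里仅作假设性提醒),可以在树莓派的终端里手动启用:

```
$ sudo systemctl enable ssh    # 设置 SSH 服务开机自启
$ sudo systemctl start ssh     # 立即启动 SSH 服务
```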
要使用 SSH首先需要你的树莓派的 IP 地址。 打开终端并输入:
```
$ sudo ifconfig
```
如果你在使用以太网,看 "eth0" 这一块。如果你在使用 Wi-Fi, 看 "wlan0" 这一块。
查找 “inet addr” 后面跟着的 IP 地址(如 192.168.1.115这是本篇文章中使用的默认 IP。
有了这个地址,在终端中输入 :
```
$ ssh pi@192.168.1.115
```
对于PC上的SSH请参见脚注[5]。
出现提示时输入默认密码“raspberry”除非你之前更改过密码。
现在你已经通过 SSH 登录成功。
### 远程桌面
使用 GUI图形用户界面有时比命令行更容易。在树莓派的命令行中通过 SSH键入
```
$ sudo apt-get install xrdp
```
Xrdp 支持 Mac 和 PC 的 Microsoft Remote Desktop 客户端。
在 Mac 上,在 App Store 中搜索 “Microsoft Remote Desktop” 并下载它。对于 PC请参见脚注 [6]。
安装完成之后,在你的 Mac 中搜索一个叫 "Microsoft Remote Desktop" 的应用并打开它,你会看到 :
![Microsoft Remote Desktop](https://opensource.com/sites/default/files/7-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Microsoft Remote Desktop")
图片来自 Mitchell McLaughlin, CC BY-SA 4.0
点击 "New" 新建一个远程连接,在空白处填写如下配置。
![Setting up a remote connection](https://opensource.com/sites/default/files/8-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Setting up a remote connection")
图片来自 Mitchell McLaughlin, CC BY-SA 4.0
关闭 “New” 窗口就会自动保存。
你现在应该看到 “My Desktop” 下列出的远程连接。 双击它。
简单加载后,你应该在屏幕上的窗口中看到你的树莓派桌面,如下所示:
![Raspberry Pi desktop](https://opensource.com/sites/default/files/6-image_by_mitchell_mclaughlin_cc_by-sa_4.0_0.png "Raspberry Pi desktop")
好了,现在你不需要额外的鼠标、键盘或显示器就能控制你的树莓派。这是一个更为轻量级的配置。
### 静态本地 IP 地址
有时候你的本地 IP 地址 192.168.1.115 会发生改变。我们需要让这个 IP 地址静态化。输入:
```
$ sudo ifconfig
```
从 “eth0” 部分或 “wlan0” 部分记下 “inet addr”树莓派当前 IP、“bcast”广播 IP 范围)和 “mask”子网掩码地址。然后输入
```
$ netstat -nr
```
记下 "destination" 和 "gateway/network."
![Setting up a local IP address](https://opensource.com/sites/default/files/setting_up_local_ip_address.png "Setting up a local IP address")
汇总起来的记录应该大概是这样子的:
```
net address 192.168.1.115
bcast 192.168.1.255
mask 255.255.255.0
gateway 192.168.1.1
network 192.168.1.1
destination 192.168.1.0
```
有了这些信息,你可以很简单地设置一个静态 IP。输入:
```
$ sudo nano /etc/dhcpcd.conf
```
不要设置 **/etc/network/interfaces**
剩下要做的就是把这些内容追加到这个文件的底部,把 IP 换成你想要的 IP 地址。
```
interface eth0
static ip_address=192.168.1.115
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```
一旦你设置了静态内部 IP 地址,这时需要通过如下命令重启你的树莓派 :
```
$ sudo reboot
```
重启完成之后,在终端中输入 :
```
$ sudo ifconfig
```
这时你就可以看到你的树莓派上的新的静态配置了。
### 静态全局 IP 地址
如果您的 ISP互联网服务提供商已经给您一个静态外部 IP 地址,您可以跳过端口转发部分。 如果没有,请继续阅读。
你已经设置了 SSH、远程桌面和静态内部 IP 地址,因此现在本地网络中的计算机知道在哪里可以找到你的树莓派。但是你仍然无法从本地 Wi-Fi 网络外部访问它。你需要让树莓派可以从互联网上的任何地方公开访问,这需要静态外部 IP 地址 [7]。
致电你的 ISP请求一个静态外部有时称为静态全局IP 地址,这可能会是一个非常敏感的过程。ISP 拥有决策权,所以我会非常小心地处理。他们可能拒绝你的静态外部 IP 地址请求。如果他们拒绝了你的请求,也不要怪罪于他们,因为这种类型的请求有法律和操作风险。他们特别不希望客户运行中型或大型互联网服务。他们可能会明确地询问为什么需要一个静态外部 IP 地址。最好说实话,告诉他们你打算托管一个低流量的个人网站或类似的小型非营利互联网服务。如果一切顺利,他们应该会开一张工单,并在一两个月内给你打电话。
### 端口转发
这个新获得的 ISP 分配的静态全局 IP 地址是用于访问路由器。 树莓派现在仍然无法访问。 你需要设置端口转发才能访问树莓派。
端口是信息在互联网上传播的虚拟途径。你有时需要转发端口,以使位于网络路由器后面的计算机(例如树莓派)可以访问互联网。VollmilchTV 专栏在 YouTube 上的一个视频[什么是 TCP/IP 端口、路由、内网、防火墙、互联网][9]帮助我更好地了解了端口。
端口转发可用于像树莓派 Web 服务器、VoIP 或点对点下载这样的应用程序。有 [65,000 多个端口][10]可供选择,因此你可以为你构建的每个互联网应用程序分配一个不同的端口。
设置端口转发的方式取决于你的路由器。如果你用的是 LinksysGabriel Ramirez 在 YouTube 上有一个标题叫 [How to go online with your Apache Ubuntu server][2] 的视频,解释了如何设置。如果您没有 Linksys请阅读路由器附带的文档以便自定义和定义要转发的端口。
你将需要转发 SSH 以及远程桌面端口。
如果你认为你已经过配置端口转发了,输入下面的命令以查看它是否正在通过 SSH 工作:
```
$ ssh pi@your_global_ip_address
```
它应该会提示你输入密码。
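在进行远程桌面测试之前,你也可以先用 netcat 快速检查这两个端口是否已经对外开放(假设你的电脑上装有 nc 命令):

```
$ nc -zv your_global_ip_address 22      # 检查 SSH 端口
$ nc -zv your_global_ip_address 3389    # 检查 xrdp 使用的远程桌面端口
```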
检查端口转发是否适用于远程桌面。打开 Microsoft Remote Desktop。你之前的远程连接设置应该已经保存了但需要将 “PC name” 字段更新为静态外部 IP 地址例如195.198.227.116而不是静态内部地址例如192.168.1.115)。
现在,尝试通过远程桌面连接。 它应该简单地加载并到达树莓派的桌面。
![Raspberry Pi desktop](https://opensource.com/sites/default/files/6-image_by_mitchell_mclaughlin_cc_by-sa_4.0_1.png "Raspberry Pi desktop")
好了, 树莓派现在可以从互联网上访问了,并且已经准备好进行高级项目了。
作为一个奖励选项你可以保持两个到你的树莓派的远程连接一个通过互联网另一个通过局域网LAN。设置起来很容易。在 Microsoft Remote Desktop 中,保留一个称为 “Pi Internet” 的远程连接,另一个称为 “Pi Local”。将 Pi Internet 的 “PC name” 配置为静态外部 IP 地址例如195.198.227.116。将 Pi Local 的 “PC name” 配置为静态内部 IP 地址例如192.168.1.115)。现在,你可以选择在全球或本地连接。
如果你还没有看过 Gabriel Ramirez 发布的 [如何使用您的 Apache Ubuntu 服务器上线][3],那么你可以去看一下,作为过渡到第二个项目的教程。它将向你展示项目背后的技术架构。在我们的例子中,你使用的是树莓派而不是 Ubuntu 服务器。动态 DNS 位于域名公司和你的路由器之间,这是 Ramirez 省略的部分。除了这个细微差别外,视频在整体上解释了系统的工作原理。你可能会注意到,本教程涵盖了树莓派设置和端口转发,这是服务器端或后端。更高级的项目(涵盖域名、动态 DNS、Jekyll静态 HTML 生成器)和 Apache网络托管即客户端或前端请查看原始来源。
### 脚注
[1] 我不建议从 NOOBS 操作系统开始。 我更喜欢从功能齐全的 Raspbian Jessie 操作系统开始。
[2] 如果没有弹出 “Setup Options”可以通过打开终端并执行该命令来始终找到它
```
$ sudo raspi-config
```
[3] 我们这样做是为了将 SD 卡上存在的所有空间用作一个完整的分区。 所有这一切都是扩大操作系统以适应 SD 卡上的整个空间,然后可以将其用作树莓派的存储内存。
[4] 我们这样做是因为我们想启动进入熟悉的桌面环境。 如果我们不做这个步骤,树莓派每次会进入到终端而不是 GUI 中。
[5]
![PuTTY configuration](https://opensource.com/sites/default/files/putty_configuration.png "PuTTY configuration")
[下载并运行 PuTTY][11] 或 Windows 下的另一个 SSH 客户端。在该字段中输入你的 IP 地址(如上图所示)。将默认端口保留为 22。按回车PuTTY 将打开一个终端窗口,提示你输入用户名和密码。填写后,就可以开始在树莓派上进行远程工作了。
[6] 如果尚未安装,请下载 [Microsoft Remote Desktop][12]。在你的计算机上搜索 Microsoft Remote Desktop并运行它提示时输入 IP 地址。接下来会弹出一个 xrdp 窗口,提示你输入用户名和密码。
[7]路由器具有动态分配的外部 IP 地址所以在理论上它可以从互联网上暂时访问但是您需要ISP的帮助才能使其永久访问。 如果不是这样,你需要在每次使用时重新配置远程连接。
_原文出自 [Mitchell McLaughlin's Full-Stack Computer Projects][4]._
--------------------------------------------------------------------------------
作者简介:
Mitchell McLaughlin - 我是一名开放网络的贡献者和开发者。 我感兴趣的领域很广泛,但我特别喜欢开源软件/硬件,比特币和编程。 我住在旧金山 我有过一些简短的 GoPro 和 Oracle 工作经验。
-------------
via: https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3
作者:[Mitchell McLaughlin ][a]
译者:[chenxinlong](https://github.com/chenxinlong)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mitchm
[1]:http://elinux.org/RPi_SD_cards
[2]:https://www.youtube.com/watch?v=i1vB7JnPvuE#t=07m08s
[3]:https://www.youtube.com/watch?v=i1vB7JnPvuE#t=07m08s
[4]:https://mitchellmclaughlin.com/server.html
[5]:https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3?rate=Zdmkgx8mzy9tFYdVcQZSWDMSy4uDugnbCKG4mFsVyaI
[6]:https://www.raspberrypi.org/downloads/raspbian/
[7]:http://ivanx.com/raspberrypi/
[8]:http://ivanx.com/raspberrypi/files/PiFiller.zip
[9]:https://www.youtube.com/watch?v=iskxw6T1Wb8
[10]:https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
[11]:http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
[12]:https://www.microsoft.com/en-us/store/apps/microsoft-remote-desktop/9wzdncrfj3ps
[13]:https://opensource.com/user/41906/feed
[14]:https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3#comments
[15]:https://opensource.com/users/mitchm

View File

@ -0,0 +1,82 @@
使用 LXDE 的 8 个理由
============================================================
### 考虑使用轻量级桌面环境 LXDE 作为你 Linux 桌面的理由
![使用 LXDE 的 8 个理由](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/rh_003499_01_linux31x_cc.png?itok=1HXbvw2E "8 reasons to use LXDE")
>图片来源opensource.com
去年年底,升级到 Fedora 25 之后,新版本的 [KDE][7] Plasma 出现了严重问题,使我难以完成任何工作。出于两个原因,我决定尝试其它 Linux 桌面环境。第一,我需要完成我的工作。第二,多年来一直只用 KDE我认为是时候尝试一些不同的桌面了。
我尝试了几周的第一个替代桌面是 [Cinnamon][8],我在 1 月份介绍过它。这次我已经使用了 LXDE轻量级 X11 桌面环境)大概 6 周,我发现它有很多我喜欢的东西。这是我使用 LXDE 的 8 个理由。
**1\. LXDE 支持多个面板。**和 KDE 以及 Cinnamon 一样LXDE 的面板上有系统菜单、应用启动器以及显示正在运行应用图标的任务栏。我第一次登录到 LXDE 时面板配置看起来异常熟悉。LXDE 看起来已经按我喜欢的顶部和底部面板适配了 KDE 的配置,还包括系统托盘设置。顶部面板上的应用程序启动器看起来像是来自 Cinnamon 的配置。面板上的东西使得启动和管理程序变得容易。默认情况下,只在桌面底部有一个面板。
![打开了 Openbox Configuration Manager 的 LXDE 桌面。](https://opensource.com/sites/default/files/lxde-openboxconfigurationmanager.png "打开了 Openbox Configuration Manager 的 LXDE 桌面。")
打开了 Openbox Configuration Manager 的 LXDE 桌面。这个桌面还没有更改过,因此它使用了默认的颜色和图标主题。
**2\. Openbox configuration manager 提供了一个简单工具用于管理和体验桌面外观。**它为主题、窗口修饰、多个显示器的窗口行为、移动和调整窗口大小、鼠标控制、多桌面等提供了选项。虽然这看起来似乎很多,但它远不如配置 KDE 桌面那么复杂,尽管如此 Openbox 仍然提供了很多的控制选项。
**3\. LXDE 有一个强大的菜单工具。**在 Desktop Preference 菜单 Advanced 标签页有个有趣的选项。这个选项的名称是 “Show menus provided by window managers when desktop is clicked点击桌面时显示窗口管理器提供的菜单”。选中这个复选框当你右击桌面时会显示 Openbox 桌面菜单,而不是标准的 LXDE 桌面菜单。
Openbox 桌面菜单包括了几乎每个你可能想要的菜单选项,所有都可从桌面便捷访问。它包括了所有的应用程序菜单、系统管理、以及首选项。它甚至有一个菜单包括了所有已安装终端模拟器应用程序的列表,因此系统管理员可以轻易地启动他们喜欢的终端。
**4\. 设计上LXDE 桌面干净简单。**它没有任何会妨碍你完成工作的东西。尽管你可以添加一些文件、目录、应用程序的链接到桌面,但是没有可以添加到桌面的小部件。在我的 KDE 和 Cinnamon 桌面上我确实喜欢一些小部件,但它们很容易被窗口遮挡,这时我就需要移动或者最小化窗口,或者使用 “Show Desktop” 按钮清空整个桌面才能看到它们。 LXDE 确实有一个 “Iconify all windows” 按钮,但我很少需要使用它,除非我想看我的壁纸。
**5\. LXDE 有一个强大的文件管理器。**LXDE 默认的文件管理器是 PCManFM因此在我使用 LXDE 的时候它成为了我的文件管理器。PCManFM 非常灵活、可以配置为适用于大部分人和情况。它看起来没有我常用的文件管理器 Krusader 那么可配置,但我确实喜欢 Krusader 没有的 PCManFM 的侧边栏。
PCManFM 允许多个标签页可以通过右击侧边栏的任何条目或者单击图标栏的新标签图标打开。PCManFM 窗口左边的 Places 面板显示了应用程序菜单,你可以从 PCManFM 启动应用程序。Places 面板上面也显示了一个设备图标可以用于查看你的物理存储设备一系列带图标的可移除设备允许你挂载和卸载它们还有可以便捷访问的主目录、桌面、回收站。Places 面板的底部包括一些默认目录的快捷方式,例如 Documents、Music、Pictures、Videos 以及 Downloads。你也可以拖拽其它目录到 Places 面板的快捷方式部分。Places 面板可以换为正常的目录树。
**6\. 如果新窗口在现有窗口后面打开,它的标题栏会闪烁。**这是一个在大量现有窗口中定位新窗口的好方法。
**7\. 大部分现代桌面环境允许多个桌面LXDE 也不例外。**我喜欢使用一个桌面用于我的开发、测试以及编辑工作另一个桌面用于普通任务例如电子邮件和网页浏览。LXDE 默认提供两个桌面,但你可以配置为只有一个或者多个。右击 Desktop Pager 配置它。
通过一些有害但不是破坏性的测试,我发现最大允许桌面数目是 100。我还发现当我把桌面数目减少到低于我实际使用的 3 个时,不活动桌面上的窗口会被移动到桌面 1。多么有趣的发现
**8\. Xfce 电源管理器是一个小巧但强大的应用程序,它允许你配置电源管理如何工作。**它提供了一个标签页用于通用配置,以及用于系统、显示和设备的标签页。设备标签页显示了我系统上已有设备的表格,例如电池供电的鼠标、键盘,甚至我的 UPSuninterruptible power supply不间断电源。它显示了每个设备的详细信息包括厂商和系列号如果可用的话还有电池充电状态。当我写这篇博客的时候我 UPS 的电量是 100%,而我罗技鼠标的电量是 75%。 Xfce 电源管理器还在系统托盘显示了一个图标,因此你可以从那里快速了解你设备的电池状态。
关于 LXDE 桌面还有很多喜欢的东西,但这些就是抓住了我的注意力、或者对我使用现代图形用户界面工作非常重要、不可或缺的东西。
我注意到奇怪的一点是我一直没有弄明白桌面Openbox菜单的 “Reconfigure” 选项是干什么的。我点击了几次,从没有注意到有任何类型的任何活动表明该选项实际起了作用。
我发现 LXDE 是一个简单但强大的桌面。我享受使用它写这篇文章的几周时间。通过允许我访问我想要的应用程序和文件同时在其余时间保持不明显LXDE 使我得以高效地工作。我也没有遇到任何妨碍我完成工作的问题。当然,除了我用于探索这个好桌面所花的时间。我非常推荐 LXDE 桌面。
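如果你也想亲自试试 LXDE下面是一个假设的安装示例以我使用的 Fedora 为例,软件包组的名称可能因发行版和版本而异):

```
$ dnf group list -v | grep -i lxde    # 先确认你的版本中 LXDE 组的确切名称
$ sudo dnf install @lxde-desktop      # 安装 LXDE 桌面环境组(示例组名)
```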
我现在正在使用 GNOME 3 和 GNOME Shell并将在下一期中报告。
--------------------------------------------------------------------------------
作者简介:
David Both - David Both 是一个 Linux 和开源倡导者,居住在北卡罗莱纳州的 Raleigh。他在 IT 行业已经超过 40 年,曾在 IBM 工作了 20 多年并教授 OS/2他在 1981 年为最早的 IBM PC 编写了第一个培训课程。他教过 Red Hat 的 RHCE 课程,曾在 MCI Worldcom、Cisco 和北卡罗莱纳州政府工作过。他使用 Linux 和开源软件已近 20 年。
--------------------------------------
via: https://opensource.com/article/17/3/8-reasons-use-lxde
作者:[David Both ][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
[3]:https://opensource.com/article/16/11/managing-devices-linux?src=linux_resource_menu
[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
[5]:https://opensource.com/tags/linux?src=linux_resource_menu
[6]:https://opensource.com/article/17/3/8-reasons-use-lxde?rate=QigvkBy_9zLvktdsL-QaIWedjIqjtlwwJIVFQDQzsSY
[7]:https://opensource.com/life/15/4/9-reasons-to-use-kde
[8]:https://opensource.com/article/17/1/cinnamon-desktop-environment
[9]:https://opensource.com/user/14106/feed
[10]:https://opensource.com/article/17/3/8-reasons-use-lxde#comments
[11]:https://opensource.com/users/dboth

Some files were not shown because too many files have changed in this diff