diff --git a/published/20140607 Five things that make Go fast.md b/published/20140607 Five things that make Go fast.md new file mode 100644 index 0000000000..63c0e0d18a --- /dev/null +++ b/published/20140607 Five things that make Go fast.md @@ -0,0 +1,489 @@ +五种加速 Go 的特性 +======== + +_Anthony Starks 使用他出色的 Deck 演示工具重构了我原来的基于 Google Slides 的幻灯片。你可以在他的博客上查看他重构后的幻灯片, +[mindchunk.blogspot.com.au/2014/06/remixing-with-deck][5]。_ + +我最近被邀请在 Gocon 发表演讲,这是一个每半年在日本东京举行的 Go 的精彩大会。[Gocon 2014][6] 是一个完全由社区驱动的为期一天的活动,由培训和一整个下午的围绕着生产环境中的 Go 这个主题的演讲组成.(LCTT 译注:本文发表于 2014 年) + +以下是我的讲义。原文的结构能让我缓慢而清晰的演讲,因此我已经编辑了它使其更可读。 + +我要感谢 [Bill Kennedy][7] 和 Minux Ma,特别是 [Josh Bleecher Snyder][8],感谢他们在我准备这次演讲中的帮助。 + +* * * + +大家下午好。 + +我叫 David. + +我很高兴今天能来到 Gocon。我想参加这个会议已经两年了,我很感谢主办方能提供给我向你们演讲的机会。 + +[![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg)][9] + +我想以一个问题开始我的演讲。 + +为什么选择 Go? + +当大家讨论学习或在生产环境中使用 Go 的原因时,答案不一而足,但因为以下三个原因的最多。 + +[![Gocon 2014 ](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-2.jpg)][10] + +这就是 TOP3 的原因。 + +第一,并发。 + +Go 的 并发原语Concurrency Primitives 对于来自 Nodejs,Ruby 或 Python 等单线程脚本语言的程序员,或者来自 C++ 或 Java 等重量级线程模型的语言都很有吸引力。 + +易于部署。 + +我们今天从经验丰富的 Gophers 那里听说过,他们非常欣赏部署 Go 应用的简单性。 + +[![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg)][11] + +然后是性能。 + +我相信人们选择 Go 的一个重要原因是它 _快_。 + +[![Gocon 2014 (4)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg)][12] + +在今天的演讲中,我想讨论五个有助于提高 Go 性能的特性。 + +我还将与大家分享 Go 如何实现这些特性的细节。 + +[![Gocon 2014 (5)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg)][13] + +我要谈的第一个特性是 Go 对于值的高效处理和存储。 + +[![Gocon 2014 (6)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg)][14] + +这是 Go 中一个值的例子。编译时,`gocon` 正好消耗四个字节的内存。 + +让我们将 Go 与其他一些语言进行比较 + +[![Gocon 2014 (7)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg)][15] + +由于 Python 表示变量的方式的开销,使用 Python 存储相同的值会消耗六倍的内存。 + +Python 使用额外的内存来跟踪类型信息,进行 
引用计数Reference Counting 等。 + +让我们看另一个例子: + +[![Gocon 2014 (8)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-8.jpg)][16] + +与 Go 类似,Java 消耗 4 个字节的内存来存储 `int` 型。 + +但是,要在像 `List` 或 `Map` 这样的集合中使用此值,编译器必须将其转换为 `Integer` 对象。 + +[![Gocon 2014 (9)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-9.jpg)][17] + +因此,Java 中的整数通常消耗 16 到 24 个字节的内存。 + +为什么这很重要?内存便宜且充足,为什么这个开销很重要? + +[![Gocon 2014 (10)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-10.jpg)][18] + +这是一张显示 CPU 时钟速度与内存总线速度的图表。 + +请注意 CPU 时钟速度和内存总线速度之间的差距如何继续扩大。 + +两者之间的差异实际上是 CPU 花费多少时间等待内存。 + +[![Gocon 2014 (11)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-11.jpg)][19] + +自 1960 年代后期以来,CPU 设计师已经意识到了这个问题。 + +他们的解决方案是缓存,即一个介于 CPU 和主存之间的更小、更快的内存区域。 + +[![Gocon 2014 (12)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-12.jpg)][20] + +这是一个 `Location` 类型,它保存物体在三维空间中的位置。它是用 Go 编写的,因此每个 `Location` 只消耗 24 个字节的存储空间。 + +我们可以使用这种类型来构造一个容纳 1000 个 `Location` 的数组类型,它只消耗 24000 字节的内存。 + +在数组内部,`Location` 结构体是顺序存储的,而不是存储 1000 个指向随机存放的 `Location` 结构体的指针。 + +这很重要,因为现在所有 1000 个 `Location` 结构体都按顺序放在缓存中,紧密排列在一起。 + +[![Gocon 2014 (13)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-13.jpg)][21] + +Go 允许您创建紧凑的数据结构,避免不必要的填充字节。 + +紧凑的数据结构能更好地利用缓存。 + +更好的缓存利用率可带来更好的性能。 + +[![Gocon 2014 (14)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-14.jpg)][22] + +函数调用不是无开销的。 + +[![Gocon 2014 (15)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-15.jpg)][23] + +调用函数时会发生三件事。 + +创建一个新的 栈帧Stack Frame,并记录调用者的详细信息。 + +在函数调用期间可能被覆盖的任何寄存器都将保存到栈中。 + +处理器计算函数的地址并执行到该新地址的分支。 + +[![Gocon 2014 (16)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-16.jpg)][24] + +由于函数调用是非常常见的操作,因此 CPU 设计师一直在努力优化此过程,但他们无法消除开销。 + +函数调用的固有开销,或重于泰山,或轻于鸿毛,这取决于函数做了什么。 + +减少函数调用开销的解决方案是 内联Inlining。 + +[![Gocon 2014 (17)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg)][25] + +Go 编译器通过将函数体视为调用者的一部分来内联函数。 + +内联也有成本,它增加了二进制文件大小。 + 
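
作为一个极简的示意(示例中的函数名与数值均为假设,并非幻灯片中的原始代码),像下面这样的小函数通常会被 Go 编译器自动内联到调用处:

```go
package main

import "fmt"

// add 足够简单,Go 编译器通常会把它内联到调用处,
// 从而省去一次函数调用的栈帧创建和跳转开销。
func add(a, b int) int {
	return a + b
}

func main() {
	// 内联之后,这一行大致等价于直接写 fmt.Println(1 + 2)。
	fmt.Println(add(1, 2))
}
```

用 `go build -gcflags=-m` 编译这个文件时,一般可以在诊断输出中看到 `add` 被标记为可内联(具体输出格式随编译器版本而异)。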
+只有当函数调用的开销相对于函数所做的工作很大时,内联才有意义,因此只有简单的函数才适合内联。 + +复杂的函数通常不受调用它们的开销所支配,因此不会内联。 + +[![Gocon 2014 (18)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg)][26] + +这个例子显示函数 `Double` 调用 `util.Max`。 + +为了减少调用 `util.Max` 的开销,编译器可以将 `util.Max` 内联到 `Double` 中,就像这样 + +[![Gocon 2014 (19)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg)][27] + +内联后不再调用 `util.Max`,但是 `Double` 的行为没有改变。 + +内联并不是 Go 独有的。几乎每种编译或即时编译的语言都执行此优化。但是 Go 的内联是如何实现的? + +Go 实现非常简单。编译包时,会标记任何适合内联的小函数,然后照常编译。 + +然后函数的源代码和编译后版本都会被存储。 + +[![Gocon 2014 (20)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg)][28] + +此幻灯片显示了 `util.a` 的内容。源代码已经过一些转换,以便编译器更容易快速处理。 + +当编译器编译 `Double` 时,它看到 `util.Max` 是可内联的,并且 `util.Max` 的源代码是可用的。 + +于是它会直接替换为该函数的源代码,而不是插入对 `util.Max` 的编译版本的调用。 + +拥有该函数的源代码可以实现其他优化。 + +[![Gocon 2014 (21)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg)][29] + +在这个例子中,尽管函数 `Test` 总是返回 `false`,但 `Expensive` 在不执行它的情况下无法知道结果。 + +当 `Test` 被内联时,我们得到这样的东西。 + +[![Gocon 2014 (22)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg)][30] + +编译器现在知道 `Expensive` 的代码无法访问。 + +这不仅节省了调用 `Test` 的成本,还节省了编译或运行任何现在无法访问的 `Expensive` 代码。 + +Go 编译器可以跨文件甚至跨包自动内联函数。这还包括从标准库调用的可内联函数的代码。 + +[![Gocon 2014 (23)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg)][31] + +强制垃圾回收Mandatory Garbage Collection 使 Go 成为一种更简单,更安全的语言。 + +这并不意味着垃圾回收会使 Go 变慢,或者垃圾回收是程序速度的瓶颈。 + +这意味着在堆上分配的内存是有代价的。每次 GC 运行时都会花费 CPU 时间,直到释放内存为止。 + +[![Gocon 2014 (24)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg)][32] + +然而,还有另一个分配内存的地方,那就是栈。 + +与 C 不同(C 强制您选择是通过 `malloc` 将值存储在堆上,还是通过在函数范围内声明将其存储在栈上),Go 实现了一个名为 逃逸分析Escape Analysis 的优化。 + +[![Gocon 2014 (25)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg)][33] + +逃逸分析决定了对一个值的任何引用是否会从被声明的函数中逃逸。 + +如果没有引用逃逸,则该值可以安全地存储在栈中。 + +存储在栈中的值不需要分配或释放。 + +让我们看一些例子 + +[![Gocon 2014 (26)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg)][34] + +`Sum` 返回 
1 到 100 的整数的和。这是一种相当不寻常的做法,但它说明了逃逸分析的工作原理。 + +因为切片 `numbers` 仅在 `Sum` 内引用,所以编译器会把这 100 个整数安排在栈上存储,而不是堆上。 + +没有必要回收 `numbers`,它会在 `Sum` 返回时自动释放。 + +[![Gocon 2014 (27)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg)][35] + +第二个例子也有点牵强。在 `CenterCursor` 中,我们创建一个新的 `Cursor` 对象并在 `c` 中存储指向它的指针。 + +然后我们将 `c` 传递给 `Center()` 函数,它将 `Cursor` 移动到屏幕的中心。 + +最后我们打印出那个 `Cursor` 的 X 和 Y 坐标。 + +即使 `c` 被 `new` 函数分配了空间,它也不会存储在堆上,因为没有对 `c` 的引用逃逸出 `CenterCursor` 函数。 + +[![Gocon 2014 (28)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg)][36] + +默认情况下,Go 的优化始终处于启用状态。可以使用 `-gcflags=-m` 开关查看编译器的逃逸分析和内联决策。 + +因为逃逸分析是在编译时执行的,而不是运行时,所以无论垃圾回收的效率如何,栈分配总是比堆分配快。 + +我将在本演讲的其余部分详细讨论栈。 + +[![Gocon 2014 (30)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg)][37] + +Go 有 goroutine,这是 Go 并发的基石。 + +我想退一步,探索 goroutine 的历史。 + +最初,计算机一次运行一个进程。在 60 年代,多进程或 分时Time Sharing 的想法变得流行起来。 + +在分时系统中,操作系统必须通过保护当前进程的现场,然后恢复另一个进程的现场,不断地在这些进程之间切换 CPU 的注意力。 + +这称为 _进程切换_。 + +[![Gocon 2014 (29)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg)][38] + +进程切换有三个主要开销。 + +首先,内核需要保护该进程的所有 CPU 寄存器的现场,然后恢复另一个进程的现场。 + +内核还需要将 CPU 的映射从虚拟内存刷新到物理内存,因为这些映射仅对当前进程有效。 + +最后是操作系统 上下文切换Context Switch 的成本,以及 调度函数Scheduler Function 选择占用 CPU 的下一个进程的开销。 + +[![Gocon 2014 (31)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg)][39] + +现代处理器中有数量惊人的寄存器。我很难在一张幻灯片上排开它们,这可以让你知道保护和恢复它们需要多少时间。 + +由于进程切换可以在进程执行的任何时刻发生,因此操作系统需要存储所有寄存器的内容,因为它不知道当前正在使用哪些寄存器。 + +[![Gocon 2014 (32)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg)][40] + +这导致了线程的诞生,线程在概念上与进程相同,但共享相同的内存空间。 + +由于线程共享地址空间,因此它们比进程更轻,因此创建速度更快,切换速度更快。 + +[![Gocon 2014 (33)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg)][41] + +Goroutine 升华了线程的思想。 + +Goroutine 是 协作式调度Cooperative Scheduled +的,而不是依靠内核来调度。 + +当对 Go 运行时调度器Runtime Scheduler 进行显式调用时,goroutine 之间的切换仅发生在明确定义的点上。 + +编译器知道正在使用的寄存器并自动保存它们。 + +[![Gocon 2014 
(34)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg)][42] + +虽然 goroutine 是协作式调度的,但运行时会为你处理调度工作。 + +Goroutine 可能会禅让给其他 goroutine 的时刻包括: + +* 阻塞式的通道发送和接收。 +* 执行 `go` 语句时,虽然不能保证新的 goroutine 会被立即调度。 +* 文件和网络操作这类阻塞式的系统调用。 +* 在被垃圾回收循环停止后。 + +[![Gocon 2014 (35)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg)][43] + +这个例子说明了上一张幻灯片中描述的一些调度点。 + +箭头所示的线程从左侧的 `ReadFile` 函数开始。遇到 `os.Open`,它在等待文件操作完成时阻塞线程,因此调度器将线程切换到右侧的 goroutine。 + +继续执行直到从通道 `c` 中读取数据,并且此时 `os.Open` 调用已完成,因此调度器将线程切换回左侧并继续执行 `file.Read` 函数,然后又被文件 IO 阻塞。 + +调度器将线程切换回右侧以进行另一个通道操作,该操作在左侧运行期间已解除阻塞,但在通道发送时再次阻塞。 + +最后,当 `Read` 操作完成并且数据可用时,线程切换回左侧。 + +[![Gocon 2014 (36)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg)][44] + +这张幻灯片显示了用低级语言描述的 `runtime.Syscall` 函数,它是 `os` 包中所有函数的基础。 + +只要你的代码调用操作系统,就会通过此函数。 + +对 `entersyscall` 的调用通知运行时该线程即将阻塞。 + +这允许运行时启动一个新线程,该线程将在当前线程被阻塞时为其他 goroutine 提供服务。 + +这导致每个 Go 进程的操作系统线程相对较少,Go 运行时负责将可运行的 goroutine 分配给空闲的操作系统线程。 + +[![Gocon 2014 (37)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg)][45] + +在上一节中,我讨论了 goroutine 如何减少管理许多(有时是数十万个)并发执行线程的开销。 + +Goroutine 故事还有另一面,那就是栈管理,它引导我进入我的最后一个话题。 + +[![Gocon 2014 (39)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg)][46] + +这是一个进程的内存布局图。我们感兴趣的关键是堆和栈的位置。 + +传统上,在进程的地址空间内,堆位于内存的底部,位于程序(代码)的上方并向上增长。 + +栈位于虚拟地址空间的顶部,并向下增长。 + +[![Gocon 2014 (40)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg)][47] + +因为堆和栈相互覆盖的结果会是灾难性的,操作系统通常会安排在栈和堆之间放置一个不可写内存区域,以确保如果它们发生碰撞,程序将中止。 + +这称为保护页,它有效地限制了进程的栈大小,通常大约为几兆字节。 + +[![Gocon 2014 (41)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg)][48] + +我们已经讨论过线程共享相同的地址空间,因此每个线程必须有自己的栈。 + +由于很难预测特定线程的栈需求,因此为每个线程的栈和保护页保留了大量内存。 + +希望是这些区域永远不被使用,而且保护页永远不会被触发。 + +缺点是随着程序中线程数的增加,可用地址空间的数量会减少。 + +[![Gocon 2014 (42)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg)][49] + +我们已经看到 Go 运行时将大量的 goroutine 调度到少量线程上,但那些 goroutine 的栈需求呢? 
+ +Go 编译器不使用保护页,而是在每个函数调用时插入一个检查,以检查是否有足够的栈来运行该函数。如果没有,运行时可以分配更多的栈空间。 + +由于这种检查,goroutines 初始栈可以做得更小,这反过来允许 Go 程序员将 goroutines 视为廉价资源。 + +[![Gocon 2014 (43)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg)][50] + +这是一张显示了 Go 1.2 如何管理栈的幻灯片。 + +当 `G` 调用 `H` 时,没有足够的空间让 `H` 运行,所以运行时从堆中分配一个新的栈帧,然后在新的栈段上运行 `H`。当 `H` 返回时,栈区域返回到堆,然后返回到 `G`。 + +[![Gocon 2014 (44)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg)][51] + +这种管理栈的方法通常很好用,但对于某些类型的代码,通常是递归代码,它可能导致程序的内部循环跨越这些栈边界之一。 + +例如,在程序的内部循环中,函数 `G` 可以在循环中多次调用 `H`, + +每次都会导致栈拆分。 这被称为 热分裂Hot Split 问题。 + +[![Gocon 2014 (45)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg)][52] + +为了解决热分裂问题,Go 1.3 采用了一种新的栈管理方法。 + +如果 goroutine 的栈太小,则不会添加和删除其他栈段,而是分配新的更大的栈。 + +旧栈的内容被复制到新栈,然后 goroutine 使用新的更大的栈继续运行。 + +在第一次调用 `H` 之后,栈将足够大,对可用栈空间的检查将始终成功。 + +这解决了热分裂问题。 + +[![Gocon 2014 (46)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg)][53] + +值,内联,逃逸分析,Goroutines 和分段/复制栈。 + +这些是我今天选择谈论的五个特性,但它们绝不是使 Go 成为快速的语言的唯一因素,就像人们引用他们学习 Go 的理由的三个原因一样。 + +这五个特性一样强大,它们不是孤立存在的。 + +例如,运行时将 goroutine 复用到线程上的方式在没有可扩展栈的情况下几乎没有效率。 + +内联通过将较小的函数组合成较大的函数来降低栈大小检查的成本。 + +逃逸分析通过自动将从实例从堆移动到栈来减少垃圾回收器的压力。 + +逃逸分析还提供了更好的 缓存局部性Cache Locality。 + +如果没有可增长的栈,逃逸分析可能会对栈施加太大的压力。 + +[![Gocon 2014 (47)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg)][54] + +* 感谢 Gocon 主办方允许我今天发言 +* twitter / web / email details +* 感谢 @offbymany,@billkennedy_go 和 Minux 在准备这个演讲的过程中所提供的帮助。 + +### 相关文章: + +1. [听我在 OSCON 上关于 Go 性能的演讲][1] +2. [为什么 Goroutine 的栈是无限大的?][2] +3. [Go 的运行时环境变量的旋风之旅][3] +4. 
[没有事件循环的性能][4] + +-------------------------------------------------------------------------------- + +作者简介: + +David 是来自澳大利亚悉尼的程序员和作者。 + +自 2011 年 2 月起成为 Go 的 contributor,自 2012 年 4 月起成为 committer。 + +联系信息 + +* dave@cheney.net +* twitter: @davecheney + +---------------------- + +via: https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast + +作者:[Dave Cheney][a] +译者:[houbaron](https://github.com/houbaron) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://dave.cheney.net/ +[1]:https://dave.cheney.net/2015/05/31/hear-me-speak-about-go-performance-at-oscon +[2]:https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite +[3]:https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables +[4]:https://dave.cheney.net/2015/08/08/performance-without-the-event-loop +[5]:http://mindchunk.blogspot.com.au/2014/06/remixing-with-deck.html +[6]:http://ymotongpoo.hatenablog.com/entry/2014/06/01/124350 +[7]:http://www.goinggo.net/ +[8]:https://twitter.com/offbymany +[9]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg +[10]:https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast/gocon-2014-2 +[11]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg +[12]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg +[13]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg +[14]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg +[15]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg +[16]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-8.jpg +[17]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-9.jpg +[18]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-10.jpg +[19]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-11.jpg 
+[20]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-12.jpg +[21]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-13.jpg +[22]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-14.jpg +[23]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-15.jpg +[24]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-16.jpg +[25]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg +[26]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg +[27]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg +[28]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg +[29]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg +[30]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg +[31]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg +[32]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg +[33]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg +[34]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg +[35]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg +[36]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg +[37]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg +[38]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg +[39]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg +[40]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg +[41]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg +[42]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg +[43]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg +[44]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg +[45]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg 
+[46]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg +[47]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg +[48]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg +[49]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg +[50]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg +[51]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg +[52]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg +[53]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg +[54]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg diff --git a/published/20170810 How we built our first full-stack JavaScript web app in three weeks.md b/published/20170810 How we built our first full-stack JavaScript web app in three weeks.md new file mode 100644 index 0000000000..e74aa85bda --- /dev/null +++ b/published/20170810 How we built our first full-stack JavaScript web app in three weeks.md @@ -0,0 +1,181 @@ +三周内构建 JavaScript 全栈 web 应用 +=========== + +![](https://cdn-images-1.medium.com/max/2000/1*PgKBpQHRUgqpXcxtyehPZg.png) + +*应用 Align 中,用户主页的控制面板* + +### 从构思到部署应用程序的简单分步指南 + +我在 Grace Hopper Program 为期三个月的编码训练营即将结束,实际上这篇文章的标题有些纰漏 —— 现在我已经构建了 _三个_ 全栈应用:[从零开始的电子商店][3]、我个人的 [私人黑客马拉松项目][4],还有这个“三周的结业项目”。这个项目是迄今为止强度最大的 —— 我和另外两名队友共同花费三周的时光 —— 而它也是我在训练营中最引以为豪的成就。这是我目前所构建和涉及的第一款稳定且复杂的应用。 + +如大多数开发者所知,即使你“知道怎么编写代码”,但真正要制作第一款全栈的应用却是非常困难的。JavaScript 生态系统出奇的大:有包管理器、模块、构建工具、转译器、数据库、库文件,还要对上述所有东西进行选择,难怪如此多的编程新手除了 Codecademy 的教程外,做不了任何东西。这就是为什么我想让你体验这个决策的分布教程,跟着我们队伍的脚印,构建可用的应用。 + +* * * + +首先,简单的说两句。Align 是一个 web 应用,它使用直观的时间线界面帮助用户管理时间、设定长期目标。我们的技术栈有:用于后端服务的 Firebase 和用于前端的 React。我和我的队友在这个短视频中解释的更详细: + +[video](https://youtu.be/YacM6uYP2Jo) + +展示 Align @ Demo Day Live // 2017 年 7 月 10 日 + +从第 1 天(我们组建团队的那天)开始,直到最终应用的完成,我们是如何做的?这里是我们采取的步骤纲要: + +* * * + +### 第 1 步:构思 + +第一步是弄清楚我们到底要构建什么东西。过去我在 IBM 
中当咨询师的时候,我和合作组长一同带领着构思工作组。从那之后,我一直建议小组使用经典的头脑风暴策略,在会议中我们能够提出尽可能多的想法 —— 即使是 “愚蠢的想法” —— 这样每个人的大脑都在思考,没有人因顾虑而不敢发表意见。 + +![](https://cdn-images-1.medium.com/max/800/1*-M4xa9_HJylManvLoraqaQ.jpeg) + +在产生了好几个关于应用的想法时,我们把这些想法分类记录下来,以便更好的理解我们大家都感兴趣的主题。在我们这个小组中,我们看到实现想法的清晰趋势,需要自我改进、设定目标、情怀,还有个人发展。我们最后从中决定了具体的想法:做一个用于设置和管理长期目标的控制面板,有保存记忆的元素,可以根据时间将数据可视化。 + +从此,我们创作出了一系列用户故事(从一个终端用户的视角,对我们想要拥有的功能进行描述),阐明我们到底想要应用实现什么功能。 + +### 第 2 步:UX/UI 示意图 + +接下来,在一块白板上,我们画出了想象中应用的基本视图。结合了用户故事,以便理解在应用基本框架中这些视图将会如何工作。 + +![](https://cdn-images-1.medium.com/max/400/1*r5FBoa8JsYOoJihDgrpzhg.jpeg) + +![](https://cdn-images-1.medium.com/max/400/1*0O8ZWiyUgWm0b8wEiHhuPw.jpeg) + +![](https://cdn-images-1.medium.com/max/400/1*y9Q5v-sF0PWmkhthcW338g.jpeg) + +这些骨架确保我们意见统一,提供了可预见的蓝图,让我们向着计划的方向努力。 + +### 第 3 步:选好数据结构和数据库类型 + +到了设计数据结构的时候。基于我们的示意图和用户故事,我们在 Google doc 中制作了一个清单,它包含我们将会需要的模型和每个模型应该包含的属性。我们知道需要 “目标(goal)” 模型、“用户(user)”模型、“里程碑(milestone)”模型、“记录(checkin)”模型还有最后的“资源(resource)”模型和“上传(upload)”模型, + +![](https://cdn-images-1.medium.com/max/800/1*oA3mzyixVzsvnN_egw1xwg.png) + +*最初的数据模型结构* + +在正式确定好这些模型后,我们需要选择某种 _类型_ 的数据库:“关系型的”还是“非关系型的”(也就是“SQL”还是“NoSQL”)。由于基于表的 SQL 数据库需要预定义的格式,而基于文档的 NoSQL 数据库却可以用动态格式描述非结构化数据。 + +对于我们这个情况,用 SQL 型还是 No-SQL 型的数据库没多大影响,由于下列原因,我们最终选择了 Google 的 NoSQL 云数据库 Firebase: + +1. 它能够把用户上传的图片保存在云端并存储起来 +2. 它包含 WebSocket 功能,能够实时更新 +3. 
它能够处理用户验证,并且提供简单的 OAuth 功能。 + +我们确定了数据库后,就要理解数据模型之间的关系了。由于 Firebase 是 NoSQL 类型,我们无法创建联合表或者设置像 _“记录 (Checkins)属于目标(Goals)”_ 的从属关系。因此我们需要弄清楚 JSON 树是什么样的,对象是怎样嵌套的(或者不是嵌套的关系)。最终,我们构建了像这样的模型: + +![](https://cdn-images-1.medium.com/max/800/1*py0hQy-XHZWmwff3PM6F1g.png) + +*我们最终为目标(Goal)对象确定的 Firebase 数据格式。注意里程碑(Milestones)和记录(Checkins)对象嵌套在 Goals 中。* + +_(注意: 出于性能考虑,Firebase 更倾向于简单、常规的数据结构, 但对于我们这种情况,需要在数据中进行嵌套,因为我们不会从数据库中获取目标(Goal)却不获取相应的子对象里程碑(Milestones)和记录(Checkins)。)_ + +### 第 4 步:设置好 Github 和敏捷开发工作流 + +我们知道,从一开始就保持井然有序、执行敏捷开发对我们有极大好处。我们设置好 Github 上的仓库,我们无法直接将代码合并到主(master)分支,这迫使我们互相审阅代码。 + +![](https://cdn-images-1.medium.com/max/800/1*5kDNcvJpr2GyZ0YqLauCoQ.png) + +我们还在 [Waffle.io][5] 网站上创建了敏捷开发的面板,它是免费的,很容易集成到 Github。我们在 Waffle 面板上罗列出所有用户故事以及需要我们去修复的 bug。之后当我们开始编码时,我们每个人会为自己正在研究的每一个用户故事创建一个 git 分支,在完成工作后合并这一条条的分支。 + +![](https://cdn-images-1.medium.com/max/800/1*gnWqGwQsdGtpt3WOwe0s_A.gif) + +我们还开始保持晨会的习惯,讨论前一天的工作和每一个人遇到的阻碍。会议常常决定了当天的流程 —— 哪些人要结对编程,哪些人要独自处理问题。 + +我认为这种类型的工作流程非常好,因为它让我们能够清楚地找到自己的定位,不用顾虑人际矛盾地高效执行工作。 + +### 第 5 步: 选择、下载样板文件 + +由于 JavaScript 的生态系统过于复杂,我们不打算从最底层开始构建应用。把宝贵的时间花在连通 Webpack 构建脚本和加载器,把符号链接指向项目工程这些事情上感觉很没必要。我的团队选择了 [Firebones][6] 框架,因为它恰好适用于我们这个情况,当然还有很多可供选择的开源框架。 + +### 第 6 步:编写后端 API 路由(或者 Firebase 监听器) + +如果我们没有用基于云的数据库,这时就应该开始编写执行数据库查询的后端高速路由了。但是由于我们用的是 Firebase,它本身就是云端的,可以用不同的方式进行代码交互,因此我们只需要设置好一个可用的数据库监听器。 + +为了确保监听器在工作,我们用代码做出了用于创建目标(Goal)的基本用户表格,实际上当我们完成表格时,就看到数据库执行可更新。数据库就成功连接了! 
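
举一个简化的示意(仅为说明,并非我们项目的真实代码;`goals` 路径与字段名均为假设):Firebase 的 `on('value', …)` 监听器会在数据变动时把整棵子树推送给回调,把处理数据的逻辑拆成纯函数,就可以在没有真实数据库连接的情况下先验证它:

```javascript
// 纯函数:把 goals 节点的 JSON 转成要渲染的目标名称列表。
// Firebase 返回的是以随机 key 为键的对象,而不是数组。
function goalNames(goalsById) {
  return Object.keys(goalsById || {}).map(id => goalsById[id].name);
}

// 真实环境下的用法示意(需要先用 firebase.initializeApp(config) 初始化):
// firebase.database().ref('goals').on('value', snapshot => {
//   render(goalNames(snapshot.val()));
// });

// 本地演示:模拟一次监听器回调收到的数据。
console.log(goalNames({ "-K1": { name: "学习日语" }, "-K2": { name: "跑马拉松" } }));
```

这种“纯函数 + 薄薄一层监听器”的拆分也让后续写测试容易得多。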
+ +### 第 7 步:构建 “概念证明” + +接下来是为应用创建 “概念证明”,也可以说是实现起来最复杂的基本功能的原型,证明我们的应用 _可以_ 实现。对我们而言,这意味着要找个前端库来实现时间线的渲染,成功连接到 Firebase,显示数据库中的一些种子数据。 + +![](https://cdn-images-1.medium.com/max/800/1*d5Wu3fOlX8Xdqix1RPZWSA.png) + +*Victory.JS 绘制的简单时间线* + +我们找到了基于 D3 构建的响应式库 Victory.JS,花了一天时间阅读文档,用 _VictoryLine_ 和 _VictoryScatter_ 组件实现了非常基础的示例,能够可视化地显示数据库中的数据。实际上,这很有用!我们可以开始构建了。 + +### 第 8 步:用代码实现功能 + +最后,是时候构建出应用中那些令人期待的功能了。取决于你要构建的应用,这一重要步骤会有些明显差异。我们根据所用的框架,编码出不同的用户故事并保存在 Waffle 上。常常需要同时接触前端和后端代码(比如,创建一个前端表格同时要连接到数据库)。我们实现了包含以下这些大大小小的功能: + +* 能够创建新目标、里程碑和记录 +* 能够删除目标,里程碑和记录 +* 能够更改时间线的名称,颜色和详细内容 +* 能够缩放时间线 +* 能够为资源添加链接 +* 能够上传视频 +* 在达到相关目标的里程碑和记录时弹出资源和视频 +* 集成富文本编辑器 +* 用户注册、验证、OAuth 验证 +* 弹出查看时间线选项 +* 加载画面 + +有各种原因,这一步花了我们很多时间 —— 这一阶段是产生最多优质代码的阶段,每当我们实现了一个功能,就会有更多的事情要完善。 + +### 第 9 步: 选择并实现设计方案 + +当我们使用 MVP 架构实现了想要的功能,就可以开始清理,对它进行美化了。像表单,菜单和登陆栏等组件,我的团队用的是 Material-UI,不需要很多深层次的设计知识,它也能确保每个组件看上去都很圆润光滑。 + +![](https://cdn-images-1.medium.com/max/800/1*PCRFAbsPBNPYhz6cBgWRCw.gif) + +*这是我制作的最喜爱功能之一了。它美得令人心旷神怡。* + +我们花了一点时间来选择颜色方案和编写 CSS ,这让我们在编程中休息了一段美妙的时间。期间我们还设计了 logo 图标,还上传了网站图标。 + +### 第 10 步: 找出并减少 bug + +我们一开始就应该使用测试驱动开发的模式,但时间有限,我们那点时间只够用来实现功能。这意味着最后的两天时间我们花在了模拟我们能够想到的每一种用户流,并从应用中找出 bug。 + +![](https://cdn-images-1.medium.com/max/800/1*X8JUwTeCAkIcvhKofcbIDA.png) + +这一步是最不具系统性的,但是我们发现了一堆够我们忙乎的 bug,其中一个是在某些情况下加载动画不会结束的 bug,还有一个是资源组件会完全停止运行的 bug。修复 bug 是件令人恼火的事情,但当软件可以运行时,又特别令人满足。 + +### 第 11 步:应用上线 + +最后一步是上线应用,这样才可以让用户使用它!由于我们使用 Firebase 存储数据,因此我们使用了 Firebase Hosting,它很直观也很简单。如果你要选择其它的数据库,你可以使用 Heroku 或者 DigitalOcean。一般来讲,可以在主机网站中查看使用说明。 + +我们还在 Namecheap.com 上购买了一个便宜的域名,这让我们的应用更加完善,很容易被找到。 + +![](https://cdn-images-1.medium.com/max/800/1*gAuM_vWpv_U53xcV3tQINg.png) + +* * * + +好了,这就是全部的过程 —— 我们都是这款实用的全栈应用的合作开发者。如果要继续讲,那么第 12 步将会是对用户进行 A/B 测试,这样我们才能更好地理解:实际用户与这款应用交互的方式和他们想在 V2 版本中看到的新功能。 + +但是,现在我们感到非常开心,不仅是因为成品,还因为我们从这个过程中获得了难以估量的知识和理解。点击 [这里][7] 查看 Align 应用! 
+ +![](https://cdn-images-1.medium.com/max/800/1*KbqmSW-PMjgfWYWS_vGIqg.jpeg) + +*Align 团队:Sara Kladky(左),Melanie Mohn(中),还有我自己。* + +-------------------------------------------------------------------------------- + +via: https://medium.com/ladies-storm-hackathons/how-we-built-our-first-full-stack-javascript-web-app-in-three-weeks-8a4668dbd67c?imm_mid=0f581a&cmp=em-web-na-na-newsltr_20170816 + +作者:[Sophia Ciocca][a] +译者:[BriFuture](https://github.com/BriFuture) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://medium.com/@sophiaciocca?source=post_header_lockup +[1]:https://medium.com/@sophiaciocca?source=post_header_lockup +[2]:https://medium.com/@sophiaciocca?source=post_header_lockup +[3]:https://github.com/limitless-leggings/limitless-leggings +[4]:https://www.youtube.com/watch?v=qyLoInHNjoc +[5]:http://www.waffle.io/ +[6]:https://github.com/FullstackAcademy/firebones +[7]:https://align.fun/ +[8]:https://github.com/align-capstone/align +[9]:https://github.com/sophiaciocca +[10]:https://github.com/Kladky +[11]:https://github.com/melaniemohn diff --git a/published/20170926 Managing users on Linux systems.md b/published/20170926 Managing users on Linux systems.md new file mode 100644 index 0000000000..bc39e44fed --- /dev/null +++ b/published/20170926 Managing users on Linux systems.md @@ -0,0 +1,224 @@ +管理 Linux 系统中的用户 +====== + +![](https://images.idgesg.net/images/article/2017/09/charging-bull-100735753-large.jpg) + +也许你的 Linux 用户并不是愤怒的公牛,但是当涉及管理他们的账户的时候,能让他们一直满意也是一种挑战。你需要监控他们的访问权限,跟进他们遇到问题时的解决方案,并且把他们在使用系统时出现的重要变动记录下来。这里有一些方法和工具可以让这个工作轻松一点。 + +### 配置账户 + +添加和删除账户是管理用户中比较简单的一项,但是这里面仍然有很多需要考虑的方面。无论你是用桌面工具或是命令行选项,这都是一个非常自动化的过程。你可以使用 `adduser jdoe` 命令添加一个新用户,同时会触发一系列的反应。在创建 John 这个账户时会自动使用下一个可用的 UID,并有很多自动生成的文件来完成这个工作。当你运行 `adduser` 后跟一个参数时(要创建的用户名),它会提示一些额外的信息,同时解释这是在干什么。 + +``` +$ sudo adduser jdoe +Adding user 'jdoe' ... +Adding new group `jdoe' (1001) ... 
+Adding new user `jdoe' (1001) with group `jdoe' ... +Creating home directory `/home/jdoe' ... +Copying files from `/etc/skel' … +Enter new UNIX password: +Retype new UNIX password: +passwd: password updated successfully +Changing the user information for jdoe +Enter the new value, or press ENTER for the default + Full Name []: John Doe + Room Number []: + Work Phone []: + Home Phone []: + Other []: +Is the information correct? [Y/n] Y +``` + +如你所见,`adduser` 会添加用户的信息(到 `/etc/passwd` 和 `/etc/shadow` 文件中),创建新的家目录home directory,并用 `/etc/skel` 里设置的文件填充家目录,提示你分配初始密码和认证信息,然后确认这些信息都是正确的,如果你在最后的提示 “Is the information correct?” 处的回答是 “n”,它会回溯你之前所有的回答,允许修改任何你想要修改的地方。 + +创建好一个用户后,你可能会想要确认一下它是否是你期望的样子,更好的方法是确保在添加第一个帐户**之前**,“自动”选择与你想要查看的内容是否匹配。默认有默认的好处,它对于你想知道他们定义在哪里很有用,以便你想做出一些变动 —— 例如,你不想让用户的家目录在 `/home` 里,你不想让用户 UID 从 1000 开始,或是你不想让家目录下的文件被系统中的**每个人**都可读。 + +`adduser` 的一些配置细节设置在 `/etc/adduser.conf` 文件里。这个文件包含的一些配置项决定了一个新的账户如何配置,以及它之后的样子。注意,注释和空白行将会在输出中被忽略,因此我们更关注配置项。 + +``` +$ cat /etc/adduser.conf | grep -v "^#" | grep -v "^$" +DSHELL=/bin/bash +DHOME=/home +GROUPHOMES=no +LETTERHOMES=no +SKEL=/etc/skel +FIRST_SYSTEM_UID=100 +LAST_SYSTEM_UID=999 +FIRST_SYSTEM_GID=100 +LAST_SYSTEM_GID=999 +FIRST_UID=1000 +LAST_UID=29999 +FIRST_GID=1000 +LAST_GID=29999 +USERGROUPS=yes +USERS_GID=100 +DIR_MODE=0755 +SETGID_HOME=no +QUOTAUSER="" +SKEL_IGNORE_REGEX="dpkg-(old|new|dist|save)" +``` + +可以看到,我们有了一个默认的 shell(`DSHELL`),UID(`FIRST_UID`)的起始值,家目录(`DHOME`)的位置,以及启动文件(`SKEL`)的来源位置。这个文件也会指定分配给家目录(`DIR_HOME`)的权限。 + +其中 `DIR_HOME` 是最重要的设置,它决定了每个家目录被使用的权限。这个设置分配给用户创建的目录权限是 755,家目录的权限将会设置为 `rwxr-xr-x`。用户可以读其他用户的文件,但是不能修改和移除它们。如果你想要更多的限制,你可以更改这个设置为 750(用户组外的任何人都不可访问)甚至是 700(除用户自己外的人都不可访问)。 + +任何用户账号在创建之前都可以进行手动修改。例如,你可以编辑 `/etc/passwd` 或者修改家目录的权限,开始在新服务器上添加用户之前配置 `/etc/adduser.conf` 可以确保一定的一致性,从长远来看可以节省时间和避免一些麻烦。 + +`/etc/adduser.conf` 的修改将会在之后创建的用户上生效。如果你想以不同的方式设置某个特定账户,除了用户名之外,你还可以选择使用 `adduser` 命令提供账户配置选项。或许你想为某些账户分配不同的 shell,分配特殊的 UID,或完全禁用该账户登录。`adduser` 的帮助页将会为你显示一些配置个人账户的选择。 + +``` 
+adduser [options] [--home DIR] [--shell SHELL] [--no-create-home] +[--uid ID] [--firstuid ID] [--lastuid ID] [--ingroup GROUP | --gid ID] +[--disabled-password] [--disabled-login] [--gecos GECOS] +[--add_extra_groups] [--encrypt-home] user +``` + +每个 Linux 系统现在都会默认把每个用户放入对应的组中。作为一个管理员,你可能会选择以不同的方式。你也许会发现把用户放在一个共享组中更适合你的站点,你就可以选择使用 `adduser` 的 `--gid` 选项指定一个特定的组。当然,用户总是许多组的成员,因此也有一些选项来管理主要和次要的组。 + +### 处理用户密码 + +一直以来,知道其他人的密码都不是一件好事,在设置账户时,管理员通常使用一个临时密码,然后在用户第一次登录时运行一条命令强制他修改密码。这里是一个例子: + +``` +$ sudo chage -d 0 jdoe +``` + +当用户第一次登录时,会看到类似下面的提示: + +``` +WARNING: Your password has expired. +You must change your password now and login again! +Changing password for jdoe. +(current) UNIX password: +``` + +### 添加用户到副组 + +添加用户到副组中,你可能会用如下所示的 `usermod` 命令添加用户到组中并确认已经做出变动。 + +``` +$ sudo usermod -a -G sudo jdoe +$ sudo grep sudo /etc/group +sudo:x:27:shs,jdoe +``` + +记住在一些组意味着特别的权限,如 sudo 或者 wheel 组,一定要特别注意这一点。 + +### 移除用户,添加组等 + +Linux 系统也提供了移除账户,添加新的组,移除组等一些命令。例如,`deluser` 命令,将会从 `/etc/passwd` 和 `/etc/shadow` 中移除用户记录,但是会完整保留其家目录,除非你添加了 `--remove-home` 或者 `--remove-all-files` 选项。`addgroup` 命令会添加一个组,默认按目前组的次序分配下一个 id(在用户组范围内),除非你使用 `--gid` 选项指定 id。 + +``` +$ sudo addgroup testgroup --gid=131 +Adding group `testgroup' (GID 131) ... +Done. 
+``` + +### 管理特权账户 + +一些 Linux 系统中有一个 wheel 组,它给组中成员赋予了像 root 一样运行命令的权限。在这种情况下,`/etc/sudoers` 将会引用该组。在 Debian 系统中,这个组被叫做 sudo,但是原理是相同的,你在 `/etc/sudoers` 中可以看到像这样的信息: + +``` +%sudo ALL=(ALL:ALL) ALL +``` + +这行基本的配置意味着任何在 wheel 或者 sudo 组中的成员只要在他们运行的命令之前添加 `sudo`,就可以以 root 的权限去运行命令。 + +你可以向 sudoers 文件中添加更多有限的权限 —— 也许给特定用户几个能以 root 运行的命令。如果你是这样做的,你应该定期查看 `/etc/sudoers` 文件以评估用户拥有的权限,以及仍然需要提供的权限。 + +在下面显示的命令中,我们过滤了 `/etc/sudoers` 中有效的配置行。其中最有意思的是,它包含了能使用 `sudo` 运行命令的路径设置,以及两个允许通过 `sudo` 运行命令的组。像刚才提到的那样,单个用户可以通过包含在 sudoers 文件中来获得权限,但是更有实际意义的方法是通过组成员来定义各自的权限。 + +``` +# cat /etc/sudoers | grep -v "^#" | grep -v "^$" +Defaults env_reset +Defaults mail_badpass +Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin" +root ALL=(ALL:ALL) ALL +%admin ALL=(ALL) ALL <== admin group +%sudo ALL=(ALL:ALL) ALL <== sudo group +``` + +### 登录检查 + +你可以通过以下命令查看用户的上一次登录: + +``` +# last jdoe +jdoe pts/18 192.168.0.11 Thu Sep 14 08:44 - 11:48 (00:04) +jdoe pts/18 192.168.0.11 Thu Sep 14 13:43 - 18:44 (00:00) +jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:00) +``` + +如果你想查看每一个用户上一次的登录情况,你可以通过一个像这样的循环来运行 `last` 命令: + +``` +$ for user in `ls /home`; do last $user | head -1; done + +jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:03) + +rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 (00:00) +shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged in +``` + +此命令仅显示自当前 wtmp 文件登录过的用户。空白行表示用户自那以后从未登录过,但没有将他们显示出来。一个更好的命令可以明确地显示这期间从未登录过的用户: + +``` +$ for user in `ls /home`; do echo -n "$user"; last $user | head -1 | awk '{print substr($0,40)}'; done +dhayes +jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 +peanut pts/19 192.168.0.29 Mon Sep 11 09:15 - 17:11 +rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 +shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged +tsmith +``` + +这个命令要打很多字,但是可以通过一个脚本使它更加清晰易用。 + +``` +#!/bin/bash + +for user in `ls /home` +do + echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}' +done +``` + 
+有时这些信息可以提醒你用户角色的变动,表明他们可能不再需要相关帐户了。 + +### 与用户沟通 + +Linux 提供了许多和用户沟通的方法。你可以向 `/etc/motd` 文件中添加信息,当用户从终端登录到服务器时,将会显示这些信息。你也可以通过例如 `write`(通知单个用户)或者 `wall`(write 给所有已登录的用户)命令发送通知。 + +``` +$ wall System will go down in one hour + +Broadcast message from shs@stinkbug (pts/17) (Thu Sep 14 14:04:16 2017): + +System will go down in one hour +``` + +重要的通知应该通过多个渠道传达,因为很难预测用户实际会注意到什么。mesage-of-the-day(motd),`wall` 和 email 通知可以吸引用户大部分的注意力。 + +### 注意日志文件 + +多注意日志文件也可以帮你理解用户的活动情况。尤其 `/var/log/auth.log` 文件将会显示用户的登录和注销活动,组的创建记录等。`/var/log/message` 或者 `/var/log/syslog` 文件将会告诉你更多有关系统活动的日志。 + +### 追踪问题和需求 + +无论你是否在 Linux 系统上安装了事件跟踪系统,跟踪用户遇到的问题以及他们提出的需求都非常重要。如果需求的一部分久久不见回应,用户必然不会高兴。即使是记录在纸上也是有用的,或者最好有个电子表格,这可以让你注意到哪些问题仍然悬而未决,以及问题的根本原因是什么。确认问题和需求非常重要,记录还可以帮助你记住你必须采取的措施来解决几个月甚至几年后重新出现的问题。 + +### 总结 + +在繁忙的服务器上管理用户帐号,部分取决于配置良好的默认值,部分取决于监控用户活动和遇到的问题。如果用户觉得你对他们的顾虑有所回应并且知道在需要系统升级时会发生什么,他们可能会很高兴。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3225109/linux/managing-users-on-linux-systems.html + +作者:[Sandra Henry-Stocker][a] +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[wxy](https://github.com/wxy)、[pityonline](https://github.com/pityonline) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ diff --git a/translated/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md b/published/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md similarity index 69% rename from translated/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md rename to published/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md index eec0d29397..808da9a3d3 100644 --- a/translated/tech/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md +++ b/published/20171022 Review- Algorithms to Live By by 
Brian Christian - Tom Griffiths.md @@ -1,18 +1,19 @@ -书评|算法之美 +书评:《算法之美( Algorithms to Live By )》 ====== + ![](https://www.eyrie.org/~eagle/reviews/covers/1-62779-037-3.jpg) 又一次为了工作图书俱乐部而读书。除了其它我亲自推荐的书,这是我至今最喜爱的书。 -作为计算机科学基础之一的研究领域是算法:我们如何高效地用计算机程序解决问题?这基本上属于数学领域,但是这很少关于理想的或理论上的解决方案,而是更在于最高效地利用有限的资源获得一个充足(如果不能完美)的答案。其中许多问题要么是日常的生活问题,要么与人们密切相关。毕竟,计算机科学的目的是为了用计算机解决实际问题。《算法之美》提出的问题是:“我们可以反过来吗”--我们可以通过学习计算机科学解决问题的方式来帮助我们做出日常决定吗? +作为计算机科学基础之一的研究领域是算法:我们如何高效地用计算机程序解决问题?这基本上属于数学领域,但是这很少关于理想的或理论上的解决方案,而是更在于最高效地利用有限的资源获得一个充分(如果不能完美)的答案。其中许多问题要么是日常的生活问题,要么与人们密切相关。毕竟,计算机科学的目的是为了用计算机解决实际问题。《算法之美Algorithms to Live By》提出的问题是:“我们可以反过来吗”——我们可以通过学习计算机科学解决问题的方式来帮助我们做出日常决定吗? 本书的十一个章节有很多有趣的内容,但也有一个有趣的主题:人类早已擅长这一点。很多章节以一个算法研究和对问题的数学分析作为开始,接着深入到探讨如何利用这些结果做出更好的决策,然后讨论关于人类真正会做出的决定的研究,之后,考虑到典型生活情境的限制,会发现人类早就在应用我们提出的最佳算法的特殊版本了。这往往会破坏本书的既定目标,值得庆幸的是,它决不会破坏对一般问题的有趣讨论,即计算机科学如何解决它们,以及我们对这些问题的数学和技术形态的了解。我认为这本书的自助效用比作者打算的少一些,但有很多可供思考的东西。 -(也就是说,值得考虑这种一致性是否太少了,因为人类已经擅长这方面了,更因为我们的算法是根据人类直觉设计的。可能我们的最佳算法只是反映了人类的思想。在某些情况下,我们发现我们的方案和数学上的典范不一样, 但是在另一些情况下,它们仍然是我们当下最好的猜想。) +(也就是说,值得考虑这种一致性是否太少了,因为人类已经擅长这方面了,更因为我们的算法是根据人类直觉设计的。可能我们的最佳算法只是反映了人类的思想。在某些情况下,我们发现我们的方案和数学上的典范不一样,但是在另一些情况下,它们仍然是我们当下最好的猜想。) -这是那种章节列表是书评里重要部分的书。这里讨论的算法领域有最优停止、探索和利用决策(什么时候带着你发现的最好东西走以及什么时候寻觅更好的东西),以及排序、缓存、调度、贝叶斯定理(一般还有预测)、创建模型时的过拟合、放松(解决容易的问题而不是你的实际问题)、随机算法、一系列网络算法,最后还有游戏理论。其中每一项都有有用的见解和发人深省的讨论--这些有时显得十分理论化的概念令人吃惊地很好地映射到了日常生活。这本书以一段关于“可计算的善意”的讨论结束:鼓励减少你自己和你交往的人所需的计算和复杂性惩罚。 +这是那种章节列表是书评里重要部分的书。这里讨论的算法领域有最优停止、探索和利用决策(什么时候带着你发现的最好东西走,以及什么时候寻觅更好的东西),以及排序、缓存、调度、贝叶斯定理(一般还有预测)、创建模型时的过拟合、放松(解决容易的问题而不是你的实际问题)、随机算法、一系列网络算法,最后还有游戏理论。其中每一项都有有用的见解和发人深省的讨论——这些有时显得十分理论化的概念令人吃惊地很好地映射到了日常生活。这本书以一段关于“可计算的善意”的讨论结束:鼓励减少你自己和你交往的人所需的计算和复杂性惩罚。 -如果你有计算机科学背景(就像我一样),其中许多都是熟悉的概念,而且你因为被普及了很多新东西或许会有疑惑。然而,请给这本书一个机会,类比法没你担忧的那么令人紧张。作者既小心又聪明地应用了这些原则。这本书令人惊喜地通过了一个重要的合理性检查:涉及到我知道或反复思考过的主题的章节很少有或没有明显的错误,而且能讲出有用和重要的事情。比如,调度的那一章节毫不令人吃惊地和时间管理有关,通过直接跳到时间管理问题的核心而胜过了半数时间管理类书籍:如果你要做一个清单上的所有事情,你做这些事情的顺序很少要紧,所以最难的调度问题是决定不做哪些事情而不是做这些事情的顺序。 
+如果你有计算机科学背景(就像我一样),其中许多都是熟悉的概念,而且你因为被普及了很多新东西或许会有疑惑。然而,请给这本书一个机会,类比法没你担忧的那么令人紧张。作者既小心又聪明地应用了这些原则。这本书令人惊喜地通过了一个重要的合理性检查:涉及到我知道或反复思考过的主题的章节很少有或没有明显的错误,而且能讲出有用和重要的事情。比如,调度的那一章节毫不令人吃惊地和时间管理有关,通过直接跳到时间管理问题的核心而胜过了半数的时间管理类书籍:如果你要做一个清单上的所有事情,你做这些事情的顺序很少要紧,所以最难的调度问题是决定不做哪些事情而不是做这些事情的顺序。 作者在贝叶斯定理这一章节中的观点完全赢得了我的心。本章的许多内容都是关于贝叶斯先验的,以及一个人对过去事件的了解为什么对分析未来的概率很重要。作者接着讨论了著名的棉花糖实验。即给了儿童一个棉花糖以后,儿童被研究者告知如果他们能够克制自己不吃这个棉花糖,等到研究者回来时,会给他们两个棉花糖。克制自己不吃棉花糖(在心理学文献中叫作“延迟满足”)被发现与未来几年更好的生活有关。这个实验多年来一直被引用和滥用于各种各样的宣传,关于选择未来的收益放弃即时的快乐从而拥有成功的生活,以及生活中的失败是因为无法延迟满足。更多的邪恶分析(当然)将这种能力与种族联系在一起,带有可想而知的种族主义结论。 @@ -20,7 +21,7 @@ 《算法之美》是我读过的唯一提到了棉花糖实验并应用了我认为更有说服力的分析的书。这不是一个关于儿童天赋的实验,这是一个关于他们的贝叶斯先验的实验。什么时候立即吃棉花糖而不是等待奖励是完全合理的?当他们过去的经历告诉他们成年人不可靠,不可信任,会在不可预测的时间内消失并且撒谎的时候。而且,更好的是,作者用我之前没有听说过的后续研究和观察支持了这一分析,观察到的内容是,一些孩子会等待一段时间然后“放弃”。如果他们下意识地使用具有较差先验的贝叶斯模型,这就完全合情合理。 -这是一本很好的书。它可能在某些地方的尝试有点太勉强(数学上最优停止对于日常生活的适用性比我认为作者想要表现的更加偶然和牵强附会),如果你学过算法,其中一些内容会感到熟悉,但是它的行文思路清晰,简洁,而且编辑得非常好。这本书没有哪一部分对不起它所受的欢迎,书中的讨论贯穿始终。如果你发现自己“已经知道了这一切”,你可能还会在接下来几页中遇到一个新的概念或一个简洁的解释。有时作者会做一些我从没想到但是回想起来正确的联系,比如将网络协议中的指数退避和司法系统中的选择惩罚联系起来。还有意识到我们的现代通信世界并不是一直联系的,它是不断缓冲的,我们中的许多人正深受缓冲膨胀这一独特现象的苦恼。 +这是一本很好的书。它可能在某些地方的尝试有点太勉强(数学上最优停止对于日常生活的适用性比我认为作者想要表现的更加偶然和牵强附会),如果你学过算法,其中一些内容会感到熟悉,但是它的行文思路清晰,简洁,而且编辑得非常好。这本书没有哪一部分对不起它所受到的欢迎,书中的讨论贯穿始终。如果你发现自己“已经知道了这一切”,你可能还会在接下来几页中遇到一个新的概念或一个简洁的解释。有时作者会做一些我从没想到但是回想起来正确的联系,比如将网络协议中的指数退避和司法系统中的选择惩罚联系起来。还有意识到我们的现代通信世界并不是一直联系的,它是不断缓冲的,我们中的许多人正深受缓冲膨胀这一独特现象的苦恼。 我认为你并不必须是计算机科学专业或者精通数学才能读这本书。如果你想深入,每章的结尾都有许多数学上的细节,但是正文总是易读而清晰,至少就我所知是这样(作为一个以计算机科学为专业并学到了很多数学知识的人,你至少可以有保留地相信我)。即使你已经钻研了多年的算法,这本书仍然可以提供很多东西。 @@ -36,7 +37,7 @@ via: https://www.eyrie.org/~eagle/reviews/books/1-62779-037-3.html 作者:[Brian Christian;Tom Griffiths][a] 译者:[GraveAccent](https://github.com/GraveAccent) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 
16.04_17.10.md b/published/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md similarity index 82% rename from translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md rename to published/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md index 0bcbe0d3e5..c482cd05e5 100644 --- a/translated/tech/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md +++ b/published/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md @@ -1,27 +1,20 @@ -在 Debian 9 / Ubuntu 16.04 / 17.10 中如何安装并使用 Wireshark +如何安装并使用 Wireshark ====== -作者 [Pradeep Kumar][1],首发于 2017 年 11 月 29 日,更新于 2017 年 11 月 29 日 - [![wireshark-Debian-9-Ubuntu 16.04 -17.10](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)][2] -Wireshark 是免费的,开源的,跨平台的基于 GUI 的网络数据包分析器,可用于 Linux, Windows, MacOS, Solaris 等。它可以实时捕获网络数据包,并以人性化的格式呈现。Wireshark 允许我们监控网络数据包上升到微观层面。Wireshark 还有一个名为 `tshark` 的命令行实用程序,它与 Wireshark 执行相同的功能,但它是通过终端而不是 GUI。 +Wireshark 是自由开源的、跨平台的基于 GUI 的网络数据包分析器,可用于 Linux、Windows、MacOS、Solaris 等。它可以实时捕获网络数据包,并以人性化的格式呈现。Wireshark 允许我们监控网络数据包直到其微观层面。Wireshark 还有一个名为 `tshark` 的命令行实用程序,它与 Wireshark 执行相同的功能,但它是通过终端而不是 GUI。 -Wireshark 可用于网络故障排除,分析,软件和通信协议开发以及用于教育目的。Wireshark 使用 `pcap` 库来捕获网络数据包。 +Wireshark 可用于网络故障排除、分析、软件和通信协议开发以及用于教育目的。Wireshark 使用 `pcap` 库来捕获网络数据包。 Wireshark 具有许多功能: * 支持数百项协议检查 - * 能够实时捕获数据包并保存,以便以后进行离线分析 - * 许多用于分析数据的过滤器 - -* 捕获的数据可以被压缩和解压缩(to 校正:on the fly 什么意思?) 
- -* 支持各种文件格式的数据分析,输出也可以保存为 XML, CSV 和纯文本格式 - -* 数据可以从以太网,wifi,蓝牙,USB,帧中继,令牌环等多个接口中捕获 +* 捕获的数据可以即时压缩和解压缩 +* 支持各种文件格式的数据分析,输出也可以保存为 XML、CSV 和纯文本格式 +* 数据可以从以太网、wifi、蓝牙、USB、帧中继、令牌环等多个接口中捕获 在本文中,我们将讨论如何在 Ubuntu/Debian 上安装 Wireshark,并将学习如何使用 Wireshark 捕获网络数据包。 @@ -102,7 +95,7 @@ linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig ``` -在安装后,它将创建一个单独的 Wireshark 组,我们现在将我们的用户添加到组中,以便它可以与 Wireshark 一起使用,否则在启动 wireshark 时可能会出现 `permission denied(权限被拒绝)`错误。 +在安装后,它将创建一个单独的 Wireshark 组,我们现在将我们的用户添加到组中,以便它可以与 Wireshark 一起使用,否则在启动 wireshark 时可能会出现 “permission denied(权限被拒绝)”错误。 要将用户添加到 wireshark 组,执行以下命令: @@ -120,7 +113,7 @@ linuxtechi@nixhome:~$ wireshark [![Access-wireshark-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9-1024x664.jpg)][4] -点击 Wireshark 图标 +点击 Wireshark 图标。 [![Wireshark-window-debian9](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9-1024x664.jpg)][5] @@ -128,7 +121,7 @@ linuxtechi@nixhome:~$ wireshark [![Access-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu-1024x664.jpg)][6] -点击 Wireshark 图标 +点击 Wireshark 图标。 [![Wireshark-window-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu-1024x664.jpg)][7] @@ -138,7 +131,7 @@ linuxtechi@nixhome:~$ wireshark [![wireshark-Linux-system](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)][8] -所有这些都是我们可以捕获网络数据包的接口。根据你系统上的界面,此屏幕可能与你的不同。 +所有这些都是我们可以捕获网络数据包的接口。根据你系统上的接口,此屏幕可能与你的不同。 我们选择 `enp0s3` 来捕获该接口的网络流量。选择接口后,在我们网络上所有设备的网络数据包开始填充(参考下面的屏幕截图): @@ -146,11 +139,11 @@ linuxtechi@nixhome:~$ wireshark 第一次看到这个屏幕,我们可能会被这个屏幕上显示的数据所淹没,并且可能已经想过如何整理这些数据,但不用担心,Wireshark 的最佳功能之一就是它的过滤器。 -我们可以根据 IP 地址,端口号,也可以使用来源和目标过滤器,数据包大小等对数据进行排序和过滤,也可以将两个或多个过滤器组合在一起以创建更全面的搜索。我们也可以在 `Apply a Display Filter(应用显示过滤器)`选项卡中编写过滤规则,也可以选择已创建的规则。要选择之前构建的过滤器,请单击 `Apply a Display Filter(应用显示过滤器)`选项卡旁边的 `flag` 图标。 +我们可以根据 
IP 地址、端口号,也可以使用来源和目标过滤器、数据包大小等对数据进行排序和过滤,也可以将两个或多个过滤器组合在一起以创建更全面的搜索。我们也可以在 “Apply a Display Filter(应用显示过滤器)”选项卡中编写过滤规则,也可以选择已创建的规则。要选择之前构建的过滤器,请单击 “Apply a Display Filter(应用显示过滤器)”选项卡旁边的旗帜图标。 [![Filter-in-wireshark-Ubuntu](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu-1024x727.jpg)][10] -我们还可以根据颜色编码过滤数据,默认情况下,浅紫色是 TCP 流量,浅蓝色是 UDP 流量,黑色标识有错误的数据包,看看这些编码是什么意思,点击 `View -> Coloring Rules`,我们也可以改变这些编码。 +我们还可以根据颜色编码过滤数据,默认情况下,浅紫色是 TCP 流量,浅蓝色是 UDP 流量,黑色标识有错误的数据包,看看这些编码是什么意思,点击 “View -> Coloring Rules”,我们也可以改变这些编码。 [![Packet-Colouring-Wireshark](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark-1024x682.jpg)][11] @@ -161,11 +154,11 @@ Wireshark 是一个非常强大的工具,需要一些时间来习惯并对其 -------------------------------------------------------------------------------- -via: https://www.linuxtechi.com +via: https://www.linuxtechi.com/install-use-wireshark-debian-9-ubuntu/ 作者:[Pradeep Kumar][a] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180117 How to get into DevOps.md b/published/20180117 How to get into DevOps.md new file mode 100644 index 0000000000..f55824538f --- /dev/null +++ b/published/20180117 How to get into DevOps.md @@ -0,0 +1,137 @@ +DevOps 实践指南 +====== +> 这些技巧或许对那些想要践行 DevOps 的系统运维和开发者能有所帮助。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E) + +在去年大概一年的时间里,我注意到对“Devops 实践”感兴趣的开发人员和系统管理员突然有了明显的增加。这样的变化也合理:现在开发者只要花很少的钱,调用一些 API,就能单枪匹马地在一整套分布式基础设施上运行自己的应用,在这个时代,开发和运维的紧密程度前所未有。我看过许多博客和文章介绍很酷的 DevOps 工具和相关思想,但是给那些希望践行 DevOps 的人以指导和建议的内容,我却很少看到。 + +这篇文章的目的就是描述一下如何去实践。我的想法基于 Reddit 上 [devops][1] 的一些访谈、聊天和深夜讨论,还有一些随机谈话,一般都发生在享受啤酒和美食的时候。如果你已经开始这样实践,我对你的反馈很感兴趣,请通过[我的博客][2]或者 [Twitter][3] 联系我,也可以直接在下面评论。我很乐意听到你们的想法和故事。 + +### 古代的 IT + 
+了解历史是搞清楚未来的关键,DevOps 也不例外。想搞清楚 DevOps 运动的普及和流行,去了解一下上世纪 90 年代后期和 21 世纪前十年 IT 的情况会有帮助。这是我的经验。 + +我的第一份工作是在一家大型跨国金融服务公司做 Windows 系统管理员。当时给计算资源扩容需要给 Dell 打电话(或者像我们公司那样打给 CDW),并下一个价值数十万美元的订单,包含服务器、网络设备、电缆和软件,所有这些都要运到生产或线下的数据中心去。虽然 VMware 仍在尝试说服企业使用虚拟机运行他们的“性能敏感”型程序是更划算的,但是包括我们在内的很多公司都还是愿意使用他们的物理机运行应用。 + +在我们技术部门,有一个专门做数据中心工程和运营的团队,他们的工作包括价格谈判,让荒唐的月租能够降一点点,还包括保证我们的系统能够正常冷却(如果设备太多,这个事情的难度会呈指数增长)。如果这个团队足够幸运足够有钱,境外数据中心的工作人员对我们所有的服务器型号又都有足够的了解,就能避免在盘后交易中不小心搞错东西。那时候亚马逊 AWS 和 Rackspace 逐渐开始加速扩张,但还远远没到临界规模。 + +当时我们还有专门的团队来保证硬件上运行着的操作系统和软件能够按照预期工作。这些工程师负责设计可靠的架构以方便给系统打补丁、监控和报警,还要定义基础镜像gold image的内容。这些大都是通过很多手工实验完成的,很多手工实验是为了编写一个运行说明书runbook来描述要做的事情,并确保按照它执行后的结果确实在预期内。在我们这么大的组织里,这样做很重要,因为一线和二线的技术支持都是境外的,而他们的培训内容只覆盖到了这些运行说明而已。 + +(这是我职业生涯前三年的世界。我那时候的梦想是成为制定最高标准的人!) + +软件发布则完全是另外一头怪兽。无可否认,我在这方面并没有积累太多经验。但是,从我收集的故事(和最近的经历)来看,当时大部分软件开发的日常大概是这样: + +* 开发人员按照技术和功能需求来编写代码,这些需求来自于业务分析人员的会议,但是会议并没有邀请开发人员参加。 +* 开发人员可以选择为他们的代码编写单元测试,以确保在代码里没有任何明显的疯狂行为,比如除以 0 但不抛出异常。 +* 然后开发者会把他们的代码标记为 “Ready for QA”(准备好了接受测试),质量保障的成员会把这个版本的代码发布到他们自己的环境中,这个环境和生产环境可能相似,也可能不,甚至和开发环境相比也不一定相似。 +* 故障会在几天或者几个星期内反馈到开发人员那里,这个时长取决于其它业务活动和优先事项。 + +虽然系统管理员和开发人员经常有不一致的意见,但是对“变更管理”却一致痛恨。变更管理由高度规范的(就我当时的雇主而言)和非常必要的规则和程序组成,用来管理一家公司应该什么时候做技术变更,以及如何做。很多公司都按照 [ITIL][4] 来操作,简单的说,ITIL 问了很多和事情发生的原因、时间、地点和方式相关的问题,而且提供了一个过程,对产生最终答案的决定做审计跟踪。 + +你可能从我的简短历史课上了解到,当时 IT 的很多很多事情都是手工完成的。这导致了很多错误。错误又导致了很多财产损失。变更管理的工作就是尽量减少这些损失,它常常以这样的形式出现:不管变更的影响和规模大小,每两周才能发布部署一次。周五下午 4 点到周一早上 5 点 59 分这段时间,需要排队等候发布窗口。(讽刺的是,这种流程导致了更多错误,通常还是更严重的那种错误) + +### DevOps 不是专家团 + +你可能在想 “Carlos 你在讲啥啊,什么时候才能说到 Ansible playbooks?”,我喜欢 Ansible,但是请稍等 —— 下面这些很重要。 + +你有没有过被分配到需要跟 DevOps 小组打交道的项目?你有没有依赖过“配置管理”或者“持续集成/持续交付”小组来保证业务流水线设置正确?你有没有在代码开发完的数周之后才参加发布部署的会议? 
+ +如果有过,那么你就是在重温历史,这个历史是由上面所有这些导致的。 + +出于本能,我们喜欢和像自己的人一起工作,这会导致[壁垒][5]的形成。很自然,这种人类特质也会在工作场所表现出来是不足为奇的。我甚至在曾经工作过的一个 250 人的创业公司里见到过这样的现象。刚开始的时候,开发人员都在聚在一起工作,彼此深度协作。随着代码变得复杂,开发相同功能的人自然就坐到了一起,解决他们自己的复杂问题。然后按功能划分的小组很快就正式形成了。 + +在我工作过的很多公司里,系统管理员和开发人员不仅像这样形成了天然的壁垒,而且彼此还有激烈的对抗。开发人员的环境出问题了或者他们的权限太小了,就会对系统管理员很恼火。系统管理员怪开发人员无时无刻地在用各种方式破坏他们的环境,怪开发人员申请的计算资源严重超过他们的需要。双方都不理解对方,更糟糕的是,双方都不愿意去理解对方。 + +大部分开发人员对操作系统,内核或计算机硬件都不感兴趣。同样,大部分系统管理员,即使是 Linux 的系统管理员,也都不愿意学习编写代码,他们在大学期间学过一些 C 语言,然后就痛恨它,并且永远都不想再碰 IDE。所以,开发人员把运行环境的问题甩给围墙外的系统管理员,系统管理员把这些问题和甩过来的其它上百个问题放在一起安排优先级。每个人都忙于怨恨对方。DevOps 的目的就是解决这种矛盾。 + +DevOps 不是一个团队,CI/CD 也不是 JIRA 系统的一个用户组。DevOps 是一种思考方式。根据这个运动来看,在理想的世界里,开发人员、系统管理员和业务相关人将作为一个团队工作。虽然他们可能不完全了解彼此的世界,可能没有足够的知识去了解彼此的积压任务,但他们在大多数情况下能有一致的看法。 + +把所有基础设施和业务逻辑都代码化,再串到一个发布部署流水线里,就像是运行在这之上的应用一样。这个理念的基础就是 DevOps。因为大家都理解彼此,所以人人都是赢家。聊天机器人和易用的监控工具、可视化工具的兴起,背后的基础也是 DevOps。 + +[Adam Jacob][6] 说的最好:“DevOps 就是企业往软件导向型过渡时我们用来描述操作的词。” + +### 要实践 DevOps 我需要知道些什么 + +我经常被问到这个问题,它的答案和同属于开放式的其它大部分问题一样:视情况而定。 + +现在“DevOps 工程师”在不同的公司有不同的含义。在软件开发人员比较多但是很少有人懂基础设施的小公司,他们很可能是在找有更多系统管理经验的人。而其他公司,通常是大公司或老公司,已经有一个稳固的系统管理团队了,他们在向类似于谷歌 [SRE][7] 的方向做优化,也就是“设计运维功能的软件工程师”。但是,这并不是金科玉律,就像其它技术类工作一样,这个决定很大程度上取决于他的招聘经理。 + +也就是说,我们一般是在找对深入学习以下内容感兴趣的工程师: + +* 如何管理和设计安全、可扩展的云平台(通常是在 AWS 上,不过微软的 Azure、Google Cloud Platform,还有 DigitalOcean 和 Heroku 这样的 PaaS 提供商,也都很流行)。 +* 如何用流行的 [CI/CD][8] 工具,比如 Jenkins、GoCD,还有基于云的 Travis CI 或者 CircleCI,来构造一条优化的发布部署流水线和发布部署策略。 +* 如何在你的系统中使用基于时间序列的工具,比如 Kibana、Grafana、Splunk、Loggly 或者 Logstash 来监控、记录,并在变化的时候报警。 +* 如何使用配置管理工具,例如 Chef、Puppet 或者 Ansible 做到“基础设施即代码”,以及如何使用像 Terraform 或 CloudFormation 的工具发布这些基础设施。 + +容器也变得越来越受欢迎。尽管有人对大规模使用 Docker 的现状[表示不满][9],但容器正迅速地成为一种很好的方式来实现在更少的操作系统上运行超高密度的服务和应用,同时提高它们的可靠性。(像 Kubernetes 或者 Mesos 这样的容器编排工具,能在宿主机故障的时候,几秒钟之内重新启动新的容器。)考虑到这些,掌握 Docker 或者 rkt 以及容器编排平台的知识会对你大有帮助。 + +如果你是希望做 DevOps 实践的系统管理员,你还需要知道如何写代码。Python 和 Ruby 是 DevOps 领域的流行语言,因为它们是可移植的(也就是说可以在任何操作系统上运行)、快速的,而且易读易学。它们还支撑着这个行业最流行的配置管理工具(Ansible 是使用 Python 写的,Chef 和 Puppet 是使用 Ruby 写的)以及云平台的 
API 客户端(亚马逊 AWS、微软 Azure、Google Cloud Platform 的客户端通常会提供 Python 和 Ruby 语言的版本)。 + +如果你是开发人员,也希望做 DevOps 的实践,我强烈建议你去学习 Unix、Windows 操作系统以及网络基础知识。虽然云计算把很多系统管理的难题抽象化了,但是对应用的性能做调试的时候,如果你知道操作系统如何工作的就会有很大的帮助。下文包含了一些这个主题的图书。 + +如果你觉得这些东西听起来内容太多,没关系,大家都是这么想的。幸运的是,有很多小项目可以让你开始探索。其中一个项目是 Gary Stafford 的[选举服务](https://github.com/garystafford/voter-service),一个基于 Java 的简单投票平台。我们要求面试候选人通过一个流水线将该服务从 GitHub 部署到生产环境基础设施上。你可以把这个服务与 Rob Mile 写的了不起的 DevOps [入门教程](https://github.com/maxamg/cd-office-hours)结合起来学习。 + +还有一个熟悉这些工具的好方法,找一个流行的服务,然后只使用 AWS 和配置管理工具来搭建这个服务所需要的基础设施。第一次先手动搭建,了解清楚要做的事情,然后只用 CloudFormation(或者 Terraform)和 Ansible 重写刚才的手动操作。令人惊讶的是,这就是我们基础设施开发人员为客户所做的大部分日常工作,我们的客户认为这样的工作非常有意义! + +### 需要读的书 + +如果你在找 DevOps 的其它资源,下面这些理论和技术书籍值得一读。 + +#### 理论书籍 + +* Gene Kim 写的 《[凤凰项目][10]The Phoenix Project》。这是一本很不错的书,内容涵盖了我上文解释过的历史(写的更生动形象),描述了一个运行在敏捷和 DevOps 之上的公司向精益前进的过程。 +* Terrance Ryan 写的 《[布道之道][11]Driving Technical Change》。非常好的一小本书,讲了大多数技术型组织内的常见性格特点以及如何和他们打交道。这本书对我的帮助比我想象的更多。 +* Tom DeMarco 和 Tim Lister 合著的 《[人件][12]Peopleware》。管理工程师团队的经典图书,有一点过时,但仍然很有价值。 +* Tom Limoncelli 写的 《[时间管理:给系统管理员][13]Time Management for System Administrators》。这本书主要面向系统管理员,它对很多大型组织内的系统管理员生活做了深入的展示。如果你想了解更多系统管理员和开发人员之间的冲突,这本书可能解释了更多。 +* Eric Ries 写的 《[精益创业][14]The Lean Startup》。描述了 Eric 自己的 3D 虚拟形象公司,IMVU,发现了如何精益工作,快速失败和更快盈利。 +* Jez Humble 和他的朋友写的 《[精益企业][15]Lean Enterprise》。这本书是对精益创业做的改编,以更适应企业,两本书都很棒,都很好地解释了 DevOps 背后的商业动机。 +* Kief Morris 写的 《[基础设施即代码][16]Infrastructure As Code》。关于“基础设施即代码”的非常好的入门读物!很好的解释了为什么所有公司都有必要采纳这种做法。 +* Betsy Beyer、Chris Jones、Jennifer Petoff 和 Niall Richard Murphy 合著的 《[站点可靠性工程师][17]Site Reliability Engineering》。一本解释谷歌 SRE 实践的书,也因为是“DevOps 诞生之前的 DevOps”被人熟知。在如何处理运行时间、时延和保持工程师快乐方面提供了有意思的看法。 + +#### 技术书籍 + +如果你想找的是让你直接跟代码打交道的书,看这里就对了。 + +* W. 
Richard Stevens 的 《[TCP/IP 详解][18]TCP/IP Illustrated》。这是一套经典的(也可以说是最全面的)讲解网络协议基础的巨著,重点介绍了 TCP/IP 协议族。如果你听说过 1、2、3、4 层网络,而且对深入学习它们感兴趣,那么你需要这本书。 +* Evi Nemeth、Trent Hein 和 Ben Whaley 合著的 《[UNIX/Linux 系统管理员手册][19]UNIX and Linux System Administration Handbook》。一本很好的入门书,介绍 Linux/Unix 如何工作以及如何使用。 +* Don Jones 和 Jeffrey Hicks 合著的 《[Windows PowerShell 实战指南][20]Learn Windows Powershell In A Month of Lunches》。如果你在 Windows 系统下做自动化任务,你需要学习怎么使用 Powershell。这本书能够帮助你。Don Jones 是这方面著名的 MVP。 +* 几乎所有 [James Turnbull][21] 写的东西,针对流行的 DevOps 工具,他发表了很好的技术入门读物。 + +不管是在那些把所有应用都直接部署在物理机上的公司,(现在很多公司仍然有充分的理由这样做)还是在那些把所有应用都做成 serverless 的先驱公司,DevOps 都很可能会持续下去。这部分工作很有趣,产出也很有影响力,而且最重要的是,它搭起桥梁衔接了技术和业务之间的缺口。DevOps 是一个值得期待的美好事物。 + +首次发表在 [Neurons Firing on a Keyboard][22]。使用 CC-BY-SA 协议。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/getting-devops + +作者:[Carlos Nunez][a] +译者:[belitex](https://github.com/belitex) +校对:[pityonline](https://github.com/pityonline) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/carlosonunez +[1]: https://www.reddit.com/r/devops/ +[2]: https://carlosonunez.wordpress.com/ +[3]: https://twitter.com/easiestnameever +[4]: https://en.wikipedia.org/wiki/ITIL +[5]: https://www.psychologytoday.com/blog/time-out/201401/getting-out-your-silo +[6]: https://twitter.com/adamhjk/status/572832185461428224 +[7]: https://landing.google.com/sre/interview/ben-treynor.html +[8]: https://en.wikipedia.org/wiki/CI/CD +[9]: https://thehftguy.com/2016/11/01/docker-in-production-an-history-of-failure/ +[10]: https://itrevolution.com/book/the-phoenix-project/ +[11]: https://pragprog.com/book/trevan/driving-technical-change +[12]: https://en.wikipedia.org/wiki/Peopleware:_Productive_Projects_and_Teams +[13]: http://shop.oreilly.com/product/9780596007836.do +[14]: http://theleanstartup.com/ +[15]: 
https://info.thoughtworks.com/lean-enterprise-book.html +[16]: http://infrastructure-as-code.com/book/ +[17]: https://landing.google.com/sre/book.html +[18]: https://en.wikipedia.org/wiki/TCP/IP_Illustrated +[19]: http://www.admin.com/ +[20]: https://www.manning.com/books/learn-windows-powershell-in-a-month-of-lunches-third-edition +[21]: https://jamesturnbull.net/ +[22]: https://carlosonunez.wordpress.com/2017/03/02/getting-into-devops/ diff --git a/published/20180123 Moving to Linux from dated Windows machines.md b/published/20180123 Moving to Linux from dated Windows machines.md new file mode 100644 index 0000000000..a9e187ecc3 --- /dev/null +++ b/published/20180123 Moving to Linux from dated Windows machines.md @@ -0,0 +1,49 @@ +从过时的 Windows 机器迁移到 Linux +====== +> 这是一个当老旧的 Windows 机器退役时,决定迁移到 Linux 的故事。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/1980s-computer-yearbook.png?itok=eGOYEKK-) + +我在 ONLYOFFICE 的市场部门工作的每一天,我都能看到 Linux 用户在网上讨论我们的办公软件。我们的产品在 Linux 用户中很受欢迎,这使得我对使用 Linux 作为日常工具的体验非常好奇。我的老旧的 Windows XP 机器在性能上非常差,因此我决定了解 Linux 系统(特别是 Ubuntu)并且决定去尝试使用它。我的两个同事也加入了我的计划。 + +### 为何选择 Linux ? 
+ +我们必须做出改变,首先,我们的老系统在性能方面不够用:我们经历过频繁的崩溃,每当运行超过两个应用时,机器就会负载过度,关闭机器时有一半的几率冻结等等。这很容易让我们从工作中分心,意味着我们没有我们应有的工作效率了。 + +升级到 Windows 的新版本也是一种选择,但这样可能会带来额外的开销,而且我们的软件本身也是要与 Microsoft 的办公软件竞争。因此我们在这方面也存在意识形态的问题。 + +其次,就像我之前提过的, ONLYOFFICE 产品在 Linux 社区内非常受欢迎。通过阅读 Linux 用户在使用我们的软件时的体验,我们也对加入他们很感兴趣。 + +在我们要求转换到 Linux 系统一周后,我们拿到了崭新的装好了 [Kubuntu][1] 的机器。我们选择了 16.04 版本,因为这个版本支持 KDE Plasma 5.5 和包括 Dolphin 在内的很多 KDE 应用,同时也包括 LibreOffice 5.1 和 Firefox 45 。 + +### Linux 让人喜欢的地方 + +我相信 Linux 最大的优势是它的运行速度,比如,从按下机器的电源按钮到开始工作只需要几秒钟时间。从一开始,一切看起来都超乎寻常地快:总体的响应速度,图形界面,甚至包括系统更新的速度。 + +另一个使我惊奇的事情是跟 Windows 相比, Linux 几乎能让你配置任何东西,包括整个桌面的外观。在设置里面,我发现了如何修改各种栏目、按钮和字体的颜色和形状,也可以重新布置任意桌面组件的位置,组合桌面小工具(甚至包括漫画和颜色选择器)。我相信我还仅仅只是了解了基本的选项,之后还需要探索这个系统更多著名的定制化选项。 + +Linux 发行版通常是一个非常安全的环境。人们很少在 Linux 系统中使用防病毒的软件,因为很少有人会写病毒程序来攻击 Linux 系统。因此你可以拥有很好的系统速度,并且节省了时间和金钱。 + +总之, Linux 已经改变了我们的日常生活,用一系列的新选项和功能大大震惊了我们。仅仅通过短时间的使用,我们已经可以给它总结出以下特性: + + * 操作很快很顺畅 + * 高度可定制 + * 对新手很友好 + * 了解基本组件很有挑战性,但回报丰厚 + * 安全可靠 + * 对所有想改变工作场所的人来说都是一次绝佳的体验 + +你已经从 Windows 或 MacOS 系统换到 Kubuntu 或其他 Linux 变种了么?或者你是否正在考虑做出改变?请分享你想要采用 Linux 系统的原因,连同你对开源的印象一起写在评论中。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/move-to-linux-old-windows + +作者:[Michael Korotaev][a] +译者:[bookug](https://github.com/bookug) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/michaelk +[1]:https://kubuntu.org/ diff --git a/translated/tech/20180201 Conditional Rendering in React using Ternaries and.md b/published/20180201 Conditional Rendering in React using Ternaries and.md similarity index 61% rename from translated/tech/20180201 Conditional Rendering in React using Ternaries and.md rename to published/20180201 Conditional Rendering in React using Ternaries and.md index aa7ba0017e..92b2ae79ff 100644 --- a/translated/tech/20180201 Conditional Rendering in React using Ternaries 
and.md +++ b/published/20180201 Conditional Rendering in React using Ternaries and.md @@ -1,16 +1,15 @@ 在 React 条件渲染中使用三元表达式和 “&&” -============================================================ +======= ![](https://cdn-images-1.medium.com/max/2000/1*eASRJrCIVgsy5VbNMAzD9w.jpeg) -Photo by [Brendan Church][1] on [Unsplash][2] -React 组件可以通过多种方式决定渲染内容。你可以使用传统的 if 语句或 switch 语句。在本文中,我们将探讨一些替代方案。但要注意,如果你不小心,有些方案会带来自己的陷阱。 +React 组件可以通过多种方式决定渲染内容。你可以使用传统的 `if` 语句或 `switch` 语句。在本文中,我们将探讨一些替代方案。但要注意,如果你不小心,有些方案会带来自己的陷阱。 ### 三元表达式 vs if/else -假设我们有一个组件被传进来一个 `name` prop。 如果这个字符串非空,我们会显示一个问候语。否则,我们会告诉用户他们需要登录。 +假设我们有一个组件被传进来一个 `name` 属性。 如果这个字符串非空,我们会显示一个问候语。否则,我们会告诉用户他们需要登录。 -这是一个只实现了如上功能的无状态函数式组件。 +这是一个只实现了如上功能的无状态函数式组件(SFC)。 ``` const MyComponent = ({ name }) => { @@ -29,7 +28,7 @@ const MyComponent = ({ name }) => { }; ``` -这个很简单但是我们可以做得更好。这是使用三元运算符编写的相同组件。 +这个很简单但是我们可以做得更好。这是使用三元运算符conditional ternary operator编写的相同组件。 ``` const MyComponent = ({ name }) => ( @@ -41,86 +40,85 @@ const MyComponent = ({ name }) => ( 请注意这段代码与上面的例子相比是多么简洁。 -有几点需要注意。因为我们使用了箭头函数的单语句形式,所以隐含了return语句。另外,使用三元运算符允许我们省略掉重复的 `
` 标记。🎉 +有几点需要注意。因为我们使用了箭头函数的单语句形式,所以隐含了`return` 语句。另外,使用三元运算符允许我们省略掉重复的 `
` 标记。 ### 三元表达式 vs && -正如您所看到的,三元表达式用于表达 if/else 条件式非常好。但是对于简单的 if 条件式怎么样呢? +正如您所看到的,三元表达式用于表达 `if`/`else` 条件式非常好。但是对于简单的 `if` 条件式怎么样呢? -让我们看另一个例子。如果 isPro(一个布尔值)为真,我们将显示一个奖杯表情符号。我们也要渲染星星的数量(如果不是0)。我们可以这样写。 +让我们看另一个例子。如果 `isPro`(一个布尔值)为真,我们将显示一个奖杯表情符号。我们也要渲染星星的数量(如果不是 0)。我们可以这样写。 ``` const MyComponent = ({ name, isPro, stars}) => (
Hello {name} - {isPro ? '🏆' : null} + {isPro ? '♨' : null}
{stars ? (
- Stars:{'⭐️'.repeat(stars)} + Stars:{'☆'.repeat(stars)}
) : null}
); ``` -请注意 “else” 条件返回 null 。 这是因为三元表达式要有"否则"条件。 +请注意 `else` 条件返回 `null` 。 这是因为三元表达式要有“否则”条件。 -对于简单的 “if” 条件式,我们可以使用更合适的东西:&& 运算符。这是使用 “&&” 编写的相同代码。 +对于简单的 `if` 条件式,我们可以使用更合适的东西:`&&` 运算符。这是使用 `&&` 编写的相同代码。 ``` const MyComponent = ({ name, isPro, stars}) => (
Hello {name} - {isPro && '🏆'} + {isPro && '♨'}
{stars && (
- Stars:{'⭐️'.repeat(stars)} + Stars:{'☆'.repeat(stars)}
)}
); ``` -没有太多区别,但是注意我们消除了每个三元表达式最后面的 `: null` (else 条件式)。一切都应该像以前一样渲染。 +没有太多区别,但是注意我们消除了每个三元表达式最后面的 `: null` (`else` 条件式)。一切都应该像以前一样渲染。 +嘿!约翰得到了什么?当什么都不应该渲染时,只有一个 `0`。这就是我上面提到的陷阱。这里有解释为什么: -嘿!约翰得到了什么?当什么都不应该渲染时,只有一个0。这就是我上面提到的陷阱。这里有解释为什么。 - -[根据 MDN][3],一个逻辑运算符“和”(也就是`&&`): +[根据 MDN][3],一个逻辑运算符“和”(也就是 `&&`): > `expr1 && expr2` -> 如果 `expr1` 可以被转换成 `false` ,返回 `expr1`;否则返回 `expr2`。 如此,当与布尔值一起使用时,如果两个操作数都是 true,`&&` 返回 `true` ;否则,返回 `false`。 +> 如果 `expr1` 可以被转换成 `false` ,返回 `expr1`;否则返回 `expr2`。 如此,当与布尔值一起使用时,如果两个操作数都是 `true`,`&&` 返回 `true` ;否则,返回 `false`。 好的,在你开始拔头发之前,让我为你解释它。 -在我们这个例子里, `expr1` 是变量 `stars`,它的值是 `0`,因为0是 falsey 的值, `0` 会被返回和渲染。看,这还不算太坏。 +在我们这个例子里, `expr1` 是变量 `stars`,它的值是 `0`,因为 0 是假值,`0` 会被返回和渲染。看,这还不算太坏。 我会简单地这么写。 -> 如果 `expr1` 是 falsey,返回 `expr1` ,否则返回 `expr2` +> 如果 `expr1` 是假值,返回 `expr1` ,否则返回 `expr2`。 -所以,当对非布尔值使用 “&&” 时,我们必须让 falsy 的值返回 React 无法渲染的东西,比如说,`false` 这个值。 +所以,当对非布尔值使用 `&&` 时,我们必须让这个假值返回 React 无法渲染的东西,比如说,`false` 这个值。 我们可以通过几种方式实现这一目标。让我们试试吧。 ``` {!!stars && (
- {'⭐️'.repeat(stars)} + {'☆'.repeat(stars)}
)} ``` -注意 `stars` 前的双感叹操作符( `!!`)(呃,其实没有双感叹操作符。我们只是用了感叹操作符两次)。 +注意 `stars` 前的双感叹操作符(`!!`)(呃,其实没有双感叹操作符。我们只是用了感叹操作符两次)。 -第一个感叹操作符会强迫 `stars` 的值变成布尔值并且进行一次“非”操作。如果 `stars` 是 `0` ,那么 `!stars` 会 是 `true`。 +第一个感叹操作符会强迫 `stars` 的值变成布尔值并且进行一次“非”操作。如果 `stars` 是 `0` ,那么 `!stars` 会是 `true`。 -然后我们执行第二个`非`操作,所以如果 `stars` 是0,`!!stars` 会是 `false`。正好是我们想要的。 +然后我们执行第二个`非`操作,所以如果 `stars` 是 `0`,`!!stars` 会是 `false`。正好是我们想要的。 如果你不喜欢 `!!`,那么你也可以强制转换出一个布尔数比如这样(这种方式我觉得有点冗长)。 @@ -136,11 +134,11 @@ const MyComponent = ({ name, isPro, stars}) => ( #### 关于字符串 -空字符串与数字有一样的毛病。但是因为渲染后的空字符串是不可见的,所以这不是那种你很可能会去处理的难题,甚至可能不会注意到它。然而,如果你是完美主义者并且不希望DOM上有空字符串,你应采取我们上面对数字采取的预防措施。 +空字符串与数字有一样的毛病。但是因为渲染后的空字符串是不可见的,所以这不是那种你很可能会去处理的难题,甚至可能不会注意到它。然而,如果你是完美主义者并且不希望 DOM 上有空字符串,你应采取我们上面对数字采取的预防措施。 ### 其它解决方案 -一种可能的将来可扩展到其他变量的解决方案,是创建一个单独的 `shouldRenderStars` 变量。然后你用“&&”处理布尔值。 +一种可能的将来可扩展到其他变量的解决方案,是创建一个单独的 `shouldRenderStars` 变量。然后你用 `&&` 处理布尔值。 ``` const shouldRenderStars = stars > 0; @@ -151,7 +149,7 @@ return (
{shouldRenderStars && (
- {'⭐️'.repeat(stars)} + {'☆'.repeat(stars)}
)}
@@ -170,7 +168,7 @@ return (
{shouldRenderStars && (
- {'⭐️'.repeat(stars)} + {'☆'.repeat(stars)}
)}
@@ -181,7 +179,7 @@ return ( 我认为你应该充分利用这种语言。对于 JavaScript,这意味着为 `if/else` 条件式使用三元表达式,以及为 `if` 条件式使用 `&&` 操作符。 -我们可以回到每处都使用三元运算符的舒适区,但你现在消化了这些知识和力量,可以继续前进 && 取得成功了。 +我们可以回到每处都使用三元运算符的舒适区,但你现在消化了这些知识和力量,可以继续前进 `&&` 取得成功了。 -------------------------------------------------------------------------------- @@ -195,7 +193,7 @@ via: https://medium.freecodecamp.org/conditional-rendering-in-react-using-ternar 作者:[Donavon West][a] 译者:[GraveAccent](https://github.com/GraveAccent) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180412 A Desktop GUI Application For NPM.md b/published/20180412 A Desktop GUI Application For NPM.md new file mode 100644 index 0000000000..ef72a39fe0 --- /dev/null +++ b/published/20180412 A Desktop GUI Application For NPM.md @@ -0,0 +1,140 @@ +ndm:NPM 的桌面 GUI 程序 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/04/ndm-3-720x340.png) + +NPM 是 **N**ode **P**ackage **M**anager (node 包管理器)的缩写,它是用于安装 NodeJS 软件包或模块的命令行软件包管理器。我们发布过一个指南描述了如何[使用 NPM 管理 NodeJS 包][1]。你可能已经注意到,使用 Npm 管理 NodeJS 包或模块并不是什么大问题。但是,如果你不习惯用 CLI 的方式,这有一个名为 **NDM** 的桌面 GUI 程序,它可用于管理 NodeJS 程序/模块。 NDM,代表 **N**PM **D**esktop **M**anager (npm 桌面管理器),是 NPM 的自由开源图形前端,它允许我们通过简单图形桌面安装、更新、删除 NodeJS 包。 + +在这个简短的教程中,我们将了解 Linux 中的 Ndm。 + +### 安装 NDM + +NDM 在 AUR 中可用,因此你可以在 Arch Linux 及其衍生版(如 Antergos 和 Manjaro Linux)上使用任何 AUR 助手程序安装。 + +使用 [Pacaur][2]: + +``` +$ pacaur -S ndm +``` + +使用 [Packer][3]: + +``` +$ packer -S ndm +``` + +使用 [Trizen][4]: + +``` +$ trizen -S ndm +``` + +使用 [Yay][5]: + +``` +$ yay -S ndm +``` + +使用 [Yaourt][6]: + +``` +$ yaourt -S ndm +``` + +在基于 RHEL 的系统(如 CentOS)上,运行以下命令以安装 NDM。 + +``` +$ echo "[fury] name=ndm repository baseurl=https://repo.fury.io/720kb/ enabled=1 gpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update && +``` + +在 Debian、Ubuntu、Linux Mint: + +``` +$ echo "deb [trusted=yes] 
https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm +``` + +也可以使用 **Linuxbrew** 安装 NDM。首先,按照以下链接中的说明安装 Linuxbrew。 + +安装 Linuxbrew 后,可以使用以下命令安装 NDM: + +``` +$ brew update +$ brew install ndm +``` + +在其他 Linux 发行版上,进入 [NDM 发布页面][7],下载最新版本,自行编译和安装。 + +### NDM 使用 + +从菜单或使用应用启动器启动 NDM。这就是 NDM 的默认界面。 + +![][9] + +在这里你可以本地或全局安装 NodeJS 包/模块。 + +#### 本地安装 NodeJS 包 + +要在本地安装软件包,首先通过单击主屏幕上的 “Add projects” 按钮选择项目目录,然后选择要保留项目文件的目录。例如,我选择了一个名为 “demo” 的目录作为我的项目目录。 + +单击项目目录(即 demo),然后单击 “Add packages” 按钮。 + +![][10] + +输入要安装的软件包名称,然后单击 “Install” 按钮。 + +![][11] + +安装后,软件包将列在项目目录下。只需单击该目录即可在本地查看已安装软件包的列表。 + +![][12] + +同样,你可以创建单独的项目目录并在其中安装 NodeJS 模块。要查看项目中已安装模块的列表,请单击项目目录,右侧将显示软件包。 + +#### 全局安装 NodeJS 包 + +要全局安装 NodeJS 包,请单击主界面左侧的 “Globals” 按钮。然后,单击 “Add packages” 按钮,输入包的名称并单击 “Install” 按钮。 + +#### 管理包 + +单击任何已安装的包,你将在顶部看到各种选项,例如: + +1. 版本(查看已安装的版本), +2. 最新(安装最新版本), +3. 更新(更新当前选定的包), +4. 卸载(删除所选包)等。 + +![][13] + +NDM 还有两个选项,即 “Update npm” 用于将 node 包管理器更新成最新可用版本,而 “Doctor” 会运行一组检查以确保你的 npm 安装具备管理你的包/模块所需的功能。 + +### 总结 + +NDM 使安装、更新、删除 NodeJS 包的过程更加容易!你无需记住执行这些任务的命令。NDM 让我们在简单的图形界面中点击几下鼠标即可完成所有操作。对于那些懒得输入命令的人来说,NDM 是管理 NodeJS 包的完美伴侣。 + +干杯!
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/ +[2]:https://www.ostechnix.com/install-pacaur-arch-linux/ +[3]:https://www.ostechnix.com/install-packer-arch-linux-2/ +[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/ +[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[6]:https://www.ostechnix.com/install-yaourt-arch-linux/ +[7]:https://github.com/720kb/ndm/releases +[8]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-1.png +[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-5-1.png +[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-6.png +[12]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-7.png +[13]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-8.png diff --git a/sources/tech/20180413 The df Command Tutorial With Examples For Beginners.md b/published/20180413 The df Command Tutorial With Examples For Beginners.md similarity index 51% rename from sources/tech/20180413 The df Command Tutorial With Examples For Beginners.md rename to published/20180413 The df Command Tutorial With Examples For Beginners.md index e72be14659..7a46f07032 100644 --- a/sources/tech/20180413 The df Command Tutorial With Examples For Beginners.md +++ b/published/20180413 The df Command Tutorial With Examples For Beginners.md @@ -1,21 +1,22 @@ -The df Command Tutorial With Examples For Beginners +df 命令新手教程 ====== 
![](https://www.ostechnix.com/wp-content/uploads/2018/04/df-command-1-720x340.png) -In this guide, we are going to learn to use **df** command. The df command, stands for **D** isk **F** ree, reports file system disk space usage. It displays the amount of disk space available on the file system in a Linux system. The df command is not to be confused with **du** command. Both serves different purposes. The df command reports **how much disk space we have** (i.e free space) whereas the du command reports **how much disk space is being consumed** by the files and folders. Hope I made myself clear. Let us go ahead and see some practical examples of df command, so you can understand it better. +在本指南中,我们将学习如何使用 `df` 命令。df 命令是 “Disk Free” 的首字母组合,它报告文件系统磁盘空间的使用情况。它显示一个 Linux 系统中文件系统上可用磁盘空间的数量。`df` 命令很容易与 `du` 命令混淆。它们的用途不同。`df` 命令报告我们拥有多少磁盘空间(空闲磁盘空间),而 `du` 命令报告被文件和目录占用了多少磁盘空间。希望我这样的解释你能更清楚。在继续之前,我们来看一些 `df` 命令的实例,以便于你更好地理解它。 -### The df Command Tutorial With Examples +### df 命令使用举例 -**1\. View entire file system disk space usage** +#### 1、查看整个文件系统磁盘空间使用情况 + +无需任何参数来运行 `df` 命令,以显示整个文件系统磁盘空间使用情况。 -Run df command without any arguments to display the entire file system disk space. ``` $ df - ``` -**Sample output:** +示例输出: + ``` Filesystem 1K-blocks Used Available Use% Mounted on dev 4033216 0 4033216 0% /dev @@ -27,25 +28,23 @@ tmpfs 4038880 11636 4027244 1% /tmp /dev/loop0 84096 84096 0 100% /var/lib/snapd/snap/core/4327 /dev/sda1 95054 55724 32162 64% /boot tmpfs 807776 28 807748 1% /run/user/1000 - ``` ![][2] -As you can see, the result is divided into six columns. Let us see what each column means. +正如你所见,输出结果分为六列。我们来看一下每一列的含义。 - * **Filesystem** – the filesystem on the system. - * **1K-blocks** – the size of the filesystem, measured in 1K blocks. - * **Used** – the amount of space used in 1K blocks. - * **Available** – the amount of available space in 1K blocks. - * **Use%** – the percentage that the filesystem is in use. 
- * **Mounted on** – the mount point where the filesystem is mounted. + * `Filesystem` – Linux 系统中的文件系统 + * `1K-blocks` – 文件系统的大小,用 1K 大小的块来表示。 + * `Used` – 以 1K 大小的块所表示的已使用数量。 + * `Available` – 以 1K 大小的块所表示的可用空间的数量。 + * `Use%` – 文件系统中已使用的百分比。 + * `Mounted on` – 已挂载的文件系统的挂载点。 +#### 2、以人类友好格式显示文件系统硬盘空间使用情况 +在上面的示例中你可能已经注意到了,它使用 1K 大小的块为单位来表示使用情况,如果你以人类友好格式来显示它们,可以使用 `-h` 标志。 -**2\. Display file system disk usage in human readable format** - -As you may noticed in the above examples, the usage is showed in 1k blocks. If you want to display them in human readable format, use **-h** flag. ``` $ df -h Filesystem Size Used Avail Use% Mounted on @@ -61,11 +60,12 @@ tmpfs 789M 28K 789M 1% /run/user/1000 ``` -Now look at the **Size** and **Avail** columns, the usage is shown in GB and MB. +现在,在 `Size` 列和 `Avail` 列,使用情况是以 GB 和 MB 为单位来显示的。 -**3\. Display disk space usage only in MB** +#### 3、仅以 MB 为单位来显示文件系统磁盘空间使用情况 + +如果仅以 MB 为单位来显示文件系统磁盘空间使用情况,使用 `-m` 标志。 -To view file system disk space usage only in Megabytes, use **-m** flag. ``` $ df -m Filesystem 1M-blocks Used Available Use% Mounted on @@ -78,12 +78,12 @@ tmpfs 3945 12 3933 1% /tmp /dev/loop0 83 83 0 100% /var/lib/snapd/snap/core/4327 /dev/sda1 93 55 32 64% /boot tmpfs 789 1 789 1% /run/user/1000 - ``` -**4\. List inode information instead of block usage** +#### 4、列出节点而不是块的使用情况 + +如下所示,我们可以通过使用 `-i` 标记来列出节点而不是块的使用情况。 -We can list inode information instead of block usage by using **-i** flag as shown below. ``` $ df -i Filesystem Inodes IUsed IFree IUse% Mounted on @@ -96,12 +96,12 @@ tmpfs 1009720 3008 1006712 1% /tmp /dev/loop0 12829 12829 0 100% /var/lib/snapd/snap/core/4327 /dev/sda1 25688 390 25298 2% /boot tmpfs 1009720 29 1009691 1% /run/user/1000 - ``` -**5\. Display the file system type** +#### 5、显示文件系统类型 + +使用 `-T` 标志显示文件系统类型。 -To display the file system type, use **-T** flag. 
``` $ df -T Filesystem Type 1K-blocks Used Available Use% Mounted on @@ -114,27 +114,27 @@ tmpfs tmpfs 4038880 11984 4026896 1% /tmp /dev/loop0 squashfs 84096 84096 0 100% /var/lib/snapd/snap/core/4327 /dev/sda1 ext4 95054 55724 32162 64% /boot tmpfs tmpfs 807776 28 807748 1% /run/user/1000 - ``` -As you see, there is an extra column (second from left) that shows the file system type. +正如你所见,现在出现了显示文件系统类型的额外的列(从左数的第二列)。 -**6\. Display only the specific file system type** +#### 6、仅显示指定类型的文件系统 + +我们可以限制仅列出某些文件系统。比如,只列出 ext4 文件系统。我们使用 `-t` 标志。 -We can limit the listing to a certain file systems. for example **ext4**. To do so, we use **-t** flag. ``` $ df -t ext4 Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda2 478425016 428790896 25308436 95% / /dev/sda1 95054 55724 32162 64% /boot - ``` -See? This command shows only the ext4 file system disk space usage. +看到了吗?这个命令仅显示了 ext4 文件系统的磁盘空间使用情况。 -**7\. Exclude specific file system type** +#### 7、不列出指定类型的文件系统 + +有时,我们可能需要从结果中去排除指定类型的文件系统。我们可以使用 `-x` 标记达到我们的目的。 -Some times, you may want to exclude a specific file system from the result. This can be achieved by using **-x** flag. ``` $ df -x ext4 Filesystem 1K-blocks Used Available Use% Mounted on @@ -145,34 +145,32 @@ tmpfs 4038880 0 4038880 0% /sys/fs/cgroup tmpfs 4038880 11984 4026896 1% /tmp /dev/loop0 84096 84096 0 100% /var/lib/snapd/snap/core/4327 tmpfs 807776 28 807748 1% /run/user/1000 - ``` -The above command will display all file systems usage, except **ext4**. +上面的命令列出了除 ext4 类型以外的全部文件系统。 -**8\. Display usage for a folder** +#### 8、显示一个目录的磁盘使用情况 + +去显示某个目录的硬盘空间使用情况以及它的挂载点,例如 `/home/sk/` 目录,可以使用如下的命令: -To display the disk space available and where it is mounted for a folder, for example **/home/sk/** , use this command: ``` $ df -hT /home/sk/ Filesystem Type Size Used Avail Use% Mounted on /dev/sda2 ext4 457G 409G 25G 95% / - ``` -This command shows the file system type, used and available space in human readable form and where it is mounted. 
If you don’t to display the file system type, just ignore the **-t** flag. +这个命令显示文件系统类型、以人类友好格式显示已使用和可用磁盘空间、以及它的挂载点。如果你不想去显示文件系统类型,只需要忽略 `-t` 标志即可。 + +更详细的使用情况,请参阅 man 手册页。 -For more details, refer the man pages. ``` $ man df - ``` -**Recommended read:** -And, that’s all for today! I hope this was useful. More good stuffs to come. Stay tuned! +今天就到此这止!我希望对你有用。还有更多更好玩的东西即将奉上。请继续关注! -Cheers! +再见! @@ -181,12 +179,13 @@ Cheers! via: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/ 作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) 选题:[lujun9972](https://github.com/lujun9972) +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.ostechnix.com/author/sk/ [1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 [2]:http://www.ostechnix.com/wp-content/uploads/2018/04/df-command.png + diff --git a/translated/tech/20180522 Free Resources for Securing Your Open Source Code.md b/published/20180522 Free Resources for Securing Your Open Source Code.md similarity index 53% rename from translated/tech/20180522 Free Resources for Securing Your Open Source Code.md rename to published/20180522 Free Resources for Securing Your Open Source Code.md index 4e63a64e43..285a49c6a4 100644 --- a/translated/tech/20180522 Free Resources for Securing Your Open Source Code.md +++ b/published/20180522 Free Resources for Securing Your Open Source Code.md @@ -1,53 +1,43 @@ -一些提高你开源源码安全性的工具 +一些提高开源代码安全性的工具 ====== +> 开源软件的迅速普及带来了对健全安全实践的需求。 + ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-security.jpg?itok=R3M5LDrb) -虽然目前开源依然发展势头较好,并被广大的厂商所采用,然而最近由 Black Duck 和 Synopsys 发布的[2018开源安全与风险评估报告][1]指出了一些存在的风险并重点阐述了对于健全安全措施的需求。这份报告的分析资料素材来自经过脱敏后的 1100 个商业代码库,这些代码所涉及:自动化、大数据、企业级软件、金融服务业、健康医疗、物联网、制造业等多个领域。 +虽然目前开源依然发展势头较好,并被广大的厂商所采用,然而最近由 Black Duck 和 Synopsys 发布的 
[2018 开源安全与风险评估报告][1]指出了一些存在的风险,并重点阐述了对于健全安全措施的需求。这份报告的分析资料素材来自经过脱敏后的 1100 个商业代码库,这些代码所涉及:自动化、大数据、企业级软件、金融服务业、健康医疗、物联网、制造业等多个领域。 -这份报告强调开源软件正在被大量的使用,扫描结果中有 96% 的应用都使用了开源组件。然而,报告还指出许多其中存在很多漏洞。具体在 [这里][2]: +这份报告强调开源软件正在被大量的使用,扫描结果中有 96% 的应用都使用了开源组件。然而,报告还指出许多其中存在很多漏洞。具体在 [这里][2]: * 令人担心的是扫描的所有结果中,有 78% 的代码库存在至少一个开源的漏洞,平均每个代码库有 64 个漏洞。 - * 在经过代码审计过后代码库中,发现超过 54% 的漏洞经验证是高危漏洞。 - * 17% 的代码库包括一种已经早已公开的漏洞,包括:Heartbleed、Logjam、Freak、Drown、Poddle。 +Synopsys 旗下 Black Duck 的技术负责人 Tim Mackey 称,“这份报告清楚的阐述了:随着开源软件正在被企业广泛的使用,企业与组织也应当使用一些工具来检测可能出现在这些开源软件中的漏洞,以及管理其所使用的开源软件的方式是否符合相应的许可证规则。” +确实,随着越来越具有影响力的安全威胁出现,历史上从未有过我们目前对安全工具和实践的需求。大多数的组织已经意识到网络与系统管理员需要具有相应的较强的安全技能和安全证书。[在一篇文章中][3],我们给出一些具有较大影响力的工具、认证和实践。 +Linux 基金会已经在安全方面提供了许多关于安全的信息与教育资源。比如,Linux 社区提供了许多针对特定平台的免费资源,其中 [Linux 工作站安全检查清单][4] 其中提到了很多有用的基础信息。线上的一些发表刊物也可以提升用户针对某些平台对于漏洞的保护,如:[Fedora 安全指南][5]、[Debian 安全手册][6]。 -Tim Mackey,Synopsys 旗下 Black Duck 的技术负责人称,"这份报告清楚的阐述了:随着开源软件正在被企业广泛的使用,企业与组织也应当使用一些工具来检测可能出现在这些开源软件中的漏洞,并且管理其所使用的开源软件的方式是否符合相应的许可证规则" +目前被广泛使用的私有云平台 OpenStack 也加强了关于基于云的智能安全需求。根据 Linux 基金会发布的 [公有云指南][7]:“据 Gartner 的调研结果,尽管公有云的服务商在安全审查和提升透明度方面做的都还不错,安全问题仍然是企业考虑向公有云转移的最重要的考量之一。” -确实,随着越来越具有影响力的安全威胁出现,历史上从未有过我们目前对安全工具和实践的需求。大多数的组织已经意识到网络与系统管理员需要具有相应的较强的安全技能和安全证书。[在这篇文章中,][3] 我们给出一些具有较大影响力的工具、认证和实践。 +无论是对于组织还是个人,千里之堤毁于蚁穴,这些“蚁穴”无论是来自路由器、防火墙、VPN 或虚拟机都可能导致灾难性的后果。以下是一些免费的工具可能对于检测这些漏洞提供帮助: -Linux 基金会已经在安全方面提供了许多关于安全的信息与教育资源。比如,Linux 社区提供许多免费的用来针对一些平台的工具,其中[Linux 服务器安全检查表][4] 其中提到了很多有用的基础信息。线上的一些发表刊物也可以提升用户针对某些平台对于漏洞的保护,如:[Fedora 安全指南][5],[Debian 安全手册][6]。 + * [Wireshark][8],流量包分析工具 + * [KeePass Password Safe][9],自由开源的密码管理器 + * [Malwarebytes][10],免费的反病毒和勒索软件工具 + * [NMAP][11],安全扫描器 + * [NIKTO][12],开源的 web 服务器扫描器 + * [Ansible][13],自动化的配置运维工具,可以辅助做安全基线 + * [Metasploit][14],渗透测试工具,可辅助理解攻击向量 -目前被广泛使用的私有云平台 OpenStack 也加强了关于基于云的智能安全需求。根据 Linux 基金会发布的 [公有云指南][7]:“据 Gartner 的调研结果,尽管公有云的服务商在安全和审查方面做的都还不错,安全问题是企业考虑向公有云转移的最重要的考量之一” +这里有一些对上面工具讲解的视频。比如 [Metasploit 教学][15]、[Wireshark 教学][16]。还有一些传授安全技能的免费电子书,比如:由 Ibrahim Haddad 博士和 
Linux 基金会共同出版的[并购过程中的开源审计][17],里面阐述了多条在技术平台合并过程中,因没有较好的进行开源审计,从而引发的安全问题。当然,书中也记录了如何在这一过程中进行代码合规检查、准备以及文档编写。 -无论是对于组织还是个人,千里之堤毁于蚁穴,这些“蚁穴”无论是来自路由器、防火墙、VPNs或虚拟机都可能导致灾难性的后果。以下是一些免费的工具可能对于检测这些漏洞提供帮助: - - * [Wireshark][8], 流量包分析工具 - - * [KeePass Password Safe][9], 免费开源的密码管理器 - - * [Malwarebytes][10], 免费的反病毒和勒索软件工具 - - * [NMAP][11], 安全扫描器 - - * [NIKTO][12], 开源 web 扫描器 - - * [Ansible][13], 自动化的配置运维工具,可以辅助做安全基线 - - * [Metasploit][14], 渗透测试工具,可辅助理解攻击向量 - - - -这里有一些对上面工具讲解的视频。比如[Metasploit 教学][15]、[Wireshark 教学][16]。还有一些传授安全技能的免费电子书,比如:由 Ibrahim Haddad 博士和 Linux 基金会共同出版的[并购过程中的开源审计][17],里面阐述了多条在技术平台合并过程中,因没有较好的进行开源审计,从而引发的安全问题。当然,书中也记录了如何在这一过程中进行代码合规检查、准备以及文档编写。 - -同时,我们 [之前提到的一个免费的电子书][18], 由来自[The New Stack][19] 编写的“Docker与容器中的网络、安全和存储”,里面也提到了关于加强容器网络安全的最新技术,以及Docker本身可提供的关于,提升其网络的安全与效率的最佳实践。这本电子书还记录了关于如何构建安全容器集群的最佳实践。 +同时,我们 [之前提到的一个免费的电子书][18], 由来自 [The New Stack][19] 编写的“Docker 与容器中的网络、安全和存储”,里面也提到了关于加强容器网络安全的最新技术,以及 Docker 本身可提供的关于提升其网络的安全与效率的最佳实践。这本电子书还记录了关于如何构建安全容器集群的最佳实践。 所有这些工具和资源,可以在很大的程度上预防安全问题,正如人们所说的未雨绸缪,考虑到一直存在的安全问题,现在就应该开始学习这些安全合规资料与工具。 -想要了解更多的安全、合规以及开源项目问题,点击[这里][20] + +想要了解更多的安全、合规以及开源项目问题,点击[这里][20]。 -------------------------------------------------------------------------------- @@ -55,8 +45,8 @@ via: https://www.linux.com/blog/2018/5/free-resources-securing-your-open-source- 作者:[Sam Dean][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/sd886393) -校对:[校对者ID](https://github.com/校对者ID) +译者:[sd886393](https://github.com/sd886393) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -64,7 +54,7 @@ via: https://www.linux.com/blog/2018/5/free-resources-securing-your-open-source- [1]:https://www.blackducksoftware.com/open-source-security-risk-analysis-2018 [2]:https://www.prnewswire.com/news-releases/synopsys-report-finds-majority-of-software-plagued-by-known-vulnerabilities-and-license-conflicts-as-open-source-adoption-soars-300648367.html 
[3]:https://www.linux.com/blog/sysadmin-ebook/2017/8/future-proof-your-sysadmin-career-locking-down-security -[4]:http://go.linuxfoundation.org/ebook_workstation_security +[4]:https://linux.cn/article-6753-1.html [5]:https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html [6]:https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html [7]:https://www.linux.com/publications/2016-guide-open-cloud diff --git a/published/20180531 How to create shortcuts in vi.md b/published/20180531 How to create shortcuts in vi.md new file mode 100644 index 0000000000..ec51ab53f7 --- /dev/null +++ b/published/20180531 How to create shortcuts in vi.md @@ -0,0 +1,114 @@ +如何在 vi 中创建快捷键 +====== + +> 那些常见编辑任务的快捷键可以使 Vi 编辑器更容易使用,更有效率。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documentation-type-keys-yearbook.png?itok=Q-ELM2rn) + +学习使用 [vi 文本编辑器][1] 确实得花点功夫,不过 vi 的老手们都知道,经过一小会儿的锻炼,就可以将基本的 vi 操作融汇贯通。我们都知道“肌肉记忆”,那么学习 vi 的过程可以称之为“手指记忆”。 + +当你抓住了基础的操作窍门之后,你就可以定制化地配置 vi 的快捷键,从而让其处理的功能更为强大、流畅。我希望下面描述的技术可以加速您的协作、编程和数据操作。 + +在开始之前,我想先感谢下 Chris Hermansen(是他雇佣我写了这篇文章)仔细地检查了我的另一篇关于使用 vi 增强版本 [Vim][2] 的文章。当然还有他那些我未采纳的建议。 + +首先,我们来说明下面几个惯例设定。我会使用符号 `` 来代表按下回车,`` 代表按下空格键,`CTRL-x` 表示一起按下 `Control` 键和 `x` 键(`x` 可以是需要的某个键)。 + +使用 `map` 命令来进行按键的映射。第一个例子是 `write` 命令,通常你之前保存使用这样的命令: + +``` +:w +``` + +虽然这里只有三个键,不过考虑到我用这个命令实在是太频繁了,我更想“一键”搞定它。在这里我选择逗号键,它不是标准的 vi 命令集的一部分。这样设置: + +``` +:map , :wCTRL-v +``` + +这里的 `CTRL-v` 事实上是对 `` 做了转义的操作,如果不加这个的话,默认 `` 会作为这条映射指令的结束信号,而非映射中的一个操作。 `CTRL-v` 后面所跟的操作会翻译为用户的实际操作,而非该按键平常的操作。 + +在上面的映射中,右边的部分会在屏幕中显示为 `:w^M`,其中 `^` 字符就是指代 `control`,完整的意思就是 `CTRL-m`,表示就是系统中一行的结尾。 + +目前来说,就很不错了。如果我编辑、创建了十二次文件,这个键位映射就可以省掉了 2*12 次按键。不过这里没有计算你建立这个键位映射所花费的 11 次按键(计算 `CTRL-v` 和 `:` 均为一次按键)。虽然这样已经省了很多次,但是每次打开 vi 都要重新建立这个映射也会觉得非常麻烦。 + +幸运的是,这里可以将这些键位映射放到 vi 的启动配置文件中,让其在每次启动的时候自动读取:文件为 `.exrc`,对于 vim 是 `.vimrc`。只需要将这些文件放在你的用户根目录中即可,并在文件中每行写入一个键位映射,之后就会在每次启动 vi 生效直到你删除对应的配置。 + +在继续说明 `map` 
其他用法以及其他的缩写机制之前,这里在列举几个我常用提高文本处理效率的 map 设置: + +| 映射 | 显示为 | +|------|-------| +| `:map X :xCTRL-v` | `:x^M` | +| `:map X ,:qCTRL-v` | `,:q^M` | + +上面的 `map` 指令的意思是写入并关闭当前的编辑文件。其中 `:x` 是 vi 原本的命令,而下面的版本说明之前的 `map` 配置可以继续用作第二个 `map` 键位映射。 + +| 映射 | 显示为 | +|------|-------| +| `:map v :e` | `:e` | + +上面的指令意思是在 vi 编辑器内部切换文件,使用这个时候,只需要按 `v` 并跟着输入文件名,之后按 `` 键。 + +| 映射 | 显示为 | +|------|-------| +| `:map CTRL-vCTRL-e :e#CTRL-v` | `:e #^M` | + +`#` 在这里是 vi 中标准的符号,意思是最后使用的文件名。所以切换当前与上一个文件的方法就使用上面的映射。 + +| 映射 | 显示为 | +|------|-------| +| `map CTRL-vCTRL-r :!spell %>err &CTRL-v` | `:!spell %>err&^M` | + +(注意:在两个例子中出现的第一个 `CRTL-v` 在某些 vi 的版本中是不需要的)其中,`:!` 用来运行一个外部的(非 vi 内部的)命令。在这个拼写检查的例子中,`%` 是 vi 中的符号用来指代目前的文件, `>` 用来重定向拼写检查中的输出到 `err` 文件中,之后跟上 `&` 说明该命令是一个后台运行的任务,这样可以保证在拼写检查的同时还可以进行编辑文件的工作。这里我可以键入 `verr`(使用我之前定义的快捷键 `v` 跟上 `err`),进入 `spell` 输出结果的文件,之后再输入 `CTRL-e` 来回到刚才编辑的文件中。这样我就可以在拼写检查之后,使用 `CTRL-r` 来查看检查的错误,再通过 `CTRL-e` 返回刚才编辑的文件。 + +还用很多字符串输入的缩写,也使用了各种 `map` 命令,比如: + +``` +:map! CTRL-o \fI +:map! CTRL-k \fP +``` + +这个映射允许你使用 `CTRL-o` 作为 `groff` 命令的缩写,从而让让接下来书写的单词有斜体的效果,并使用 `CTRL-k` 进行恢复。 + +还有两个类似的映射: + +``` +:map! rh rhinoceros +:map! hi hippopotamus +``` + +上面的也可以使用 `ab` 命令来替换,就像下面这样(如果想这么用的话,需要首先按顺序运行: 1、 `unmap! rh`,2、`umap! hi`): + +``` +:ab rh rhinoceros +:ab hi hippopotamus +``` + +在上面 `map!` 的命令中,缩写会马上的展开成原有的单词,而在 `ab` 命令中,单词展开的操作会在输入了空格和标点之后才展开(不过在 Vim 和我的 vi 中,展开的形式与 `map!` 类似)。 + +想要取消刚才设定的按键映射,可以对应的输入 `:unmap`、 `unmap!` 或 `:unab`。 + +在我使用的 vi 版本中,比较好用的候选映射按键包括 `g`、`K`、`q`、 `v`、 `V`、 `Z`,控制字符包括:`CTRL-a`、`CTRL-c`、 `CTRL-k`、`CTRL-n`、`CTRL-p`、`CTRL-x`;还有一些其他的字符如 `#`、 `*`,当然你也可以使用那些已经在 vi 中有过定义但不经常使用的字符,比如本文选择 `X` 和 `I`,其中 `X` 表示删除左边的字符,并立刻左移当前字符。 + +最后,下面的命令 + +``` +:map +:map! +:ab +``` + +将会显示,目前所有的缩写和键位映射。 + +希望上面的技巧能够更好地更高效地帮助你使用 vi。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/5/shortcuts-vi-text-editor + +作者:[Dan Sonnenschein][a] 
选题:[lujun9972](https://github.com/lujun9972) 
译者:[sd886393](https://github.com/sd886393) 
校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/dannyman +[1]:http://ex-vi.sourceforge.net/ +[2]:https://www.vim.org/ diff --git a/published/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md b/published/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md new file mode 100644 index 0000000000..84c805506a --- /dev/null +++ b/published/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md @@ -0,0 +1,313 @@ +在 Ubuntu 18.04 LTS 无头服务器上安装 Oracle VirtualBox +====== + +![](https://www.ostechnix.com/wp-content/uploads/2016/07/Install-Oracle-VirtualBox-On-Ubuntu-18.04-720x340.png) + +本教程将指导你在 Ubuntu 18.04 LTS 无头服务器上,一步一步地安装 **Oracle VirtualBox**。同时,本教程也将介绍如何使用 **phpVirtualBox** 去管理安装在无头服务器上的 **VirtualBox** 实例。**phpVirtualBox** 是 VirtualBox 的一个基于 Web 的前端工具。这个教程也可以工作在 Debian 和其它 Ubuntu 衍生版本上,如 Linux Mint。现在,我们开始。 + +### 前提条件 + +在安装 Oracle VirtualBox 之前,我们的 Ubuntu 18.04 LTS 服务器上需要满足如下的前提条件。 + +首先,逐个运行如下的命令来更新 Ubuntu 服务器。 + +``` +$ sudo apt update +$ sudo apt upgrade +$ sudo apt dist-upgrade +``` + +接下来,安装如下的必需的包: + +``` +$ sudo apt install build-essential dkms unzip wget +``` + +安装完成所有的更新和必需的包之后,重启动 Ubuntu 服务器。 + +``` +$ sudo reboot +``` + +### 在 Ubuntu 18.04 LTS 服务器上安装 VirtualBox + +添加 Oracle VirtualBox 官方仓库。为此你需要去编辑 `/etc/apt/sources.list` 文件: + +``` +$ sudo nano /etc/apt/sources.list +``` + +添加下列的行。 + +在这里,我将使用 Ubuntu 18.04 LTS,因此我添加下列的仓库。 + +``` +deb http://download.virtualbox.org/virtualbox/debian bionic contrib +``` + +![][2] + +用你的 Ubuntu 发行版的代码名字替换关键字 ‘bionic’,比如,‘xenial’、‘vivid’、‘utopic’、‘trusty’、‘raring’、‘quantal’、‘precise’、‘lucid’、‘jessie’、‘wheezy’、或 ‘squeeze‘。 + +然后,运行下列的命令去添加 Oracle 公钥: + +``` +$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add - +``` + +对于 VirtualBox 的老版本,添加如下的公钥: + +``` +$ wget -q 
https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add - +``` + +接下来,使用如下的命令去更新软件源: + +``` +$ sudo apt update +``` + +最后,使用如下的命令去安装最新版本的 Oracle VirtualBox: + +``` +$ sudo apt install virtualbox-5.2 +``` + +### 添加用户到 VirtualBox 组 + +我们需要去创建并添加我们的系统用户到 `vboxusers` 组中。你也可以单独创建用户,然后将它分配到 `vboxusers` 组中,也可以使用已有的用户。我不想去创建新用户,因此,我添加已存在的用户到这个组中。请注意,如果你为 virtualbox 使用一个单独的用户,那么你必须注销当前用户,并使用那个特定的用户去登入,来完成剩余的步骤。 + +我使用的是我的用户名 `sk`,因此,我运行如下的命令将它添加到 `vboxusers` 组中。 + +``` +$ sudo usermod -aG vboxusers sk +``` + +现在,运行如下的命令去检查 virtualbox 内核模块是否已加载。 + +``` +$ sudo systemctl status vboxdrv +``` + +![][3] + +正如你在上面的截屏中所看到的,vboxdrv 模块已加载,并且是已运行的状态! + +对于老的 Ubuntu 版本,运行: + +``` +$ sudo /etc/init.d/vboxdrv status +``` + +如果 virtualbox 模块没有启动,运行如下的命令去启动它。 + +``` +$ sudo /etc/init.d/vboxdrv setup +``` + +很好!我们已经成功安装了 VirtualBox 并启动了 virtualbox 模块。现在,我们继续来安装 Oracle VirtualBox 的扩展包。 + +### 安装 VirtualBox 扩展包 + +VirtualBox 扩展包为 VirtualBox 访客系统提供了如下的功能。 + + * 虚拟的 USB 2.0 (EHCI) 驱动 + * VirtualBox 远程桌面协议(VRDP)支持 + * 宿主机网络摄像头直通 + * Intel PXE 引导 ROM + * 对 Linux 宿主机上的 PCI 直通提供支持 + +从[这里][4]为 VirtualBox 5.2.x 下载最新版的扩展包。 + +``` +$ wget https://download.virtualbox.org/virtualbox/5.2.14/Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack +``` + +使用如下的命令去安装扩展包: + +``` +$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack +``` + +恭喜!我们已经成功地在 Ubuntu 18.04 LTS 服务器上安装了 Oracle VirtualBox 的扩展包。现在已经可以去部署虚拟机了。参考 [virtualbox 官方指南][5],在命令行中开始创建和管理虚拟机。 + +然而,并不是每个人都擅长使用命令行。有些人可能希望在图形界面中去创建和使用虚拟机。不用担心!下面我们为你带来非常好用的 **phpVirtualBox** 工具! 
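在无头服务器上,也可以先用 `VBoxManage` 在命令行里直接创建并启动一台虚拟机,感受一下整个流程。下面是一个简单的示例(其中虚拟机名 `testvm`、操作系统类型、内存大小和磁盘文件名都是假设的演示值,完整参数请以官方指南为准):

```
$ VBoxManage createvm --name "testvm" --ostype Ubuntu_64 --register
$ VBoxManage modifyvm "testvm" --memory 1024 --nic1 nat
$ VBoxManage createhd --filename "testvm.vdi" --size 10240
$ VBoxManage storagectl "testvm" --name "SATA" --add sata
$ VBoxManage storageattach "testvm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "testvm.vdi"
$ VBoxHeadless --startvm "testvm" &
```

最后一条命令以无头模式在后台启动虚拟机,之后就可以通过 VRDP 或下文介绍的 phpVirtualBox 去访问它。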
+ +### 关于 phpVirtualBox + +**phpVirtualBox** 是一个免费的、基于 web 的 Oracle VirtualBox 后端。它是使用 PHP 开发的。用 phpVirtualBox 我们可以通过 web 浏览器从网络上的任意一个系统上,很轻松地创建、删除、管理、和执行虚拟机。 + +### 在 Ubuntu 18.04 LTS 上安装 phpVirtualBox + +由于它是基于 web 的工具,我们需要安装 Apache web 服务器、PHP 和一些 php 模块。 + +为此,运行如下命令: + +``` +$ sudo apt install apache2 php php-mysql libapache2-mod-php php-soap php-xml +``` + +然后,从 [下载页面][6] 上下载 phpVirtualBox 5.2.x 版。请注意,由于我们已经安装了 VirtualBox 5.2 版,因此,同样的我们必须去安装 phpVirtualBox 的 5.2 版本。 + +运行如下的命令去下载它: + +``` +$ wget https://github.com/phpvirtualbox/phpvirtualbox/archive/5.2-0.zip +``` + +使用如下命令解压下载的安装包: + +``` +$ unzip 5.2-0.zip +``` + +这个命令将解压 5.2.0.zip 文件的内容到一个名为 `phpvirtualbox-5.2-0` 的文件夹中。现在,复制或移动这个文件夹的内容到你的 apache web 服务器的根文件夹中。 + +``` +$ sudo mv phpvirtualbox-5.2-0/ /var/www/html/phpvirtualbox +``` + +给 phpvirtualbox 文件夹分配适当的权限。 + +``` +$ sudo chmod 777 /var/www/html/phpvirtualbox/ +``` + +接下来,我们开始配置 phpVirtualBox。 + +像下面这样复制示例配置文件。 + +``` +$ sudo cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php +``` + +编辑 phpVirtualBox 的 `config.php` 文件: + +``` +$ sudo nano /var/www/html/phpvirtualbox/config.php +``` + +找到下列行,并且用你的系统用户名和密码去替换它(就是前面的“添加用户到 VirtualBox 组中”节中使用的用户名)。 + +在我的案例中,我的 Ubuntu 系统用户名是 `sk` ,它的密码是 `ubuntu`。 + +``` +var $username = 'sk'; +var $password = 'ubuntu'; +``` + +![][7] + +保存并关闭这个文件。 + +接下来,创建一个名为 `/etc/default/virtualbox` 的新文件: + +``` +$ sudo nano /etc/default/virtualbox +``` + +添加下列行。用你自己的系统用户替换 `sk`。 + +``` +VBOXWEB_USER=sk +``` + +最后,重引导你的系统或重启下列服务去完成整个配置工作。 + +``` +$ sudo systemctl restart vboxweb-service +$ sudo systemctl restart vboxdrv +$ sudo systemctl restart apache2 +``` + +### 调整防火墙允许连接 Apache web 服务器 + +如果你在 Ubuntu 18.04 LTS 上启用了 UFW,那么在默认情况下,apache web 服务器是不能被任何远程系统访问的。你必须通过下列的步骤让 http 和 https 流量允许通过 UFW。 + +首先,我们使用如下的命令来查看在策略中已经安装了哪些应用: + +``` +$ sudo ufw app list +Available applications: +Apache +Apache Full +Apache Secure +OpenSSH +``` + +正如你所见,Apache 和 OpenSSH 应该已经在 UFW 的策略文件中安装了。 + +如果你在策略中看到的是 `Apache 
Full`,说明它允许流量到达 80 和 443 端口: + +``` +$ sudo ufw app info "Apache Full" +Profile: Apache Full +Title: Web Server (HTTP,HTTPS) +Description: Apache v2 is the next generation of the omnipresent Apache web +server. + +Ports: +80,443/tcp +``` + +现在,运行如下的命令去启用这个策略中的 HTTP 和 HTTPS 的入站流量: + +``` +$ sudo ufw allow in "Apache Full" +Rules updated +Rules updated (v6) +``` + +如果你希望允许 https 流量,但是仅是 http (80) 的流量,运行如下的命令: + +``` +$ sudo ufw app info "Apache" +``` + +### 访问 phpVirtualBox 的 Web 控制台 + +现在,用任意一台远程系统的 web 浏览器来访问。 + +在地址栏中,输入:`http://IP-address-of-virtualbox-headless-server/phpvirtualbox`。 + +在我的案例中,我导航到这个链接 – `http://192.168.225.22/phpvirtualbox`。 + +你将看到如下的屏幕输出。输入 phpVirtualBox 管理员用户凭据。 + +phpVirtualBox 的默认管理员用户名和密码是 `admin` / `admin`。 + +![][8] + +恭喜!你现在已经进入了 phpVirtualBox 管理面板了。 + +![][9] + +现在,你可以从 phpvirtualbox 的管理面板上,开始去创建你的 VM 了。正如我在前面提到的,你可以从同一网络上的任意一台系统上访问 phpVirtualBox 了,而所需要的仅仅是一个 web 浏览器和 phpVirtualBox 的用户名和密码。 + +如果在你的宿主机系统(不是访客机)的 BIOS 中没有启用虚拟化支持,phpVirtualBox 将只允许你去创建 32 位的访客系统。要安装 64 位的访客系统,你必须在你的宿主机的 BIOS 中启用虚拟化支持。在你的宿主机的 BIOS 中你可以找到一些类似于 “virtualization” 或 “hypervisor” 字眼的选项,然后确保它是启用的。 + +本文到此结束了,希望能帮到你。如果你找到了更有用的指南,共享出来吧。 + +还有一大波更好玩的东西即将到来,请继续关注! 
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:http://www.ostechnix.com/wp-content/uploads/2016/07/Add-VirtualBox-repository.png +[3]:http://www.ostechnix.com/wp-content/uploads/2016/07/vboxdrv-service.png +[4]:https://www.virtualbox.org/wiki/Downloads +[5]:http://www.virtualbox.org/manual/ch08.html +[6]:https://github.com/phpvirtualbox/phpvirtualbox/releases +[7]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-config.png +[8]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-1.png +[9]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-2.png diff --git a/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md b/published/20180709 How To Configure SSH Key-based Authentication In Linux.md similarity index 53% rename from translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md rename to published/20180709 How To Configure SSH Key-based Authentication In Linux.md index 5c69d6a92b..8fb89b943d 100644 --- a/translated/tech/20180709 How To Configure SSH Key-based Authentication In Linux.md +++ b/published/20180709 How To Configure SSH Key-based Authentication In Linux.md @@ -1,33 +1,35 @@ -如何在 Linux 中配置基于密钥认证的 SSH +如何在 Linux 中配置基于密钥认证的 SSH ====== ![](https://www.ostechnix.com/wp-content/uploads/2017/01/Configure-SSH-Key-based-Authentication-In-Linux-720x340.png) -### 什么是基于 SSH密钥的认证? +### 什么是基于 SSH 密钥的认证? 
-众所周知,**Secure Shell**,又称 **SSH**,是允许你通过无安全网络(例如 Internet)和远程系统之间安全访问/通信的加密网络协议。无论何时使用 SSH 在无安全网络上发送数据,它都会在源系统上自动地被加密,并且在目的系统上解密。SSH 提供了四种加密方式,**基于密码认证**,**基于密钥认证**,**基于主机认证**和**键盘认证**。最常用的认证方式是基于密码认证和基于密钥认证。 +众所周知,**Secure Shell**,又称 **SSH**,是允许你通过无安全网络(例如 Internet)和远程系统之间安全访问/通信的加密网络协议。无论何时使用 SSH 在无安全网络上发送数据,它都会在源系统上自动地被加密,并且在目的系统上解密。SSH 提供了四种加密方式,**基于密码认证**,**基于密钥认证**,**基于主机认证**和**键盘认证**。最常用的认证方式是基于密码认证和基于密钥认证。 -在基于密码认证中,你需要的仅仅是远程系统上用户的密码。如果你知道远程用户的密码,你可以使用**“ssh[[email protected]][1]”**访问各自的系统。另一方面,在基于密钥认证中,为了通过 SSH 通信,你需要生成 SSH 密钥对,并且为远程系统上传 SSH 公钥。每个 SSH 密钥对由私钥与公钥组成。私钥应该保存在客户系统上,公钥应该上传给远程系统。你不应该将私钥透露给任何人。希望你已经对 SSH 和它的认证方式有了基本的概念。 +在基于密码认证中,你需要的仅仅是远程系统上用户的密码。如果你知道远程用户的密码,你可以使用 `ssh user@remote-system-name` 访问各自的系统。另一方面,在基于密钥认证中,为了通过 SSH 通信,你需要生成 SSH 密钥对,并且为远程系统上传 SSH 公钥。每个 SSH 密钥对由私钥与公钥组成。私钥应该保存在客户系统上,公钥应该上传给远程系统。你不应该将私钥透露给任何人。希望你已经对 SSH 和它的认证方式有了基本的概念。 -这篇教程,我们将讨论如何在 linux 上配置基于密钥认证的 SSH。 +这篇教程,我们将讨论如何在 Linux 上配置基于密钥认证的 SSH。 -### 在 Linux 上配置基于密钥认证的SSH +### 在 Linux 上配置基于密钥认证的 SSH -为本篇教程起见,我将使用 Arch Linux 为本地系统,Ubuntu 18.04 LTS 为远程系统。 +为方便演示,我将使用 Arch Linux 为本地系统,Ubuntu 18.04 LTS 为远程系统。 本地系统详情: - * **OS** : Arch Linux Desktop - * **IP address** : 192.168.225.37 /24 + +* OS: Arch Linux Desktop +* IP address: 192.168.225.37/24 远程系统详情: - * **OS** : Ubuntu 18.04 LTS Server - * **IP address** : 192.168.225.22/24 + +* OS: Ubuntu 18.04 LTS Server +* IP address: 192.168.225.22/24 ### 本地系统配置 -就像我之前所说,在基于密钥认证的方法中,想要通过 SSH 访问远程系统,就应该将公钥上传给它。公钥通常会被保存在远程系统的一个文件**~/.ssh/authorized_keys** 中。 +就像我之前所说,在基于密钥认证的方法中,想要通过 SSH 访问远程系统,需要将公钥上传到远程系统。公钥通常会被保存在远程系统的一个 `~/.ssh/authorized_keys` 文件中。 -**注意事项:**不要使用**root** 用户生成密钥对,这样只有 root 用户才可以使用。使用普通用户创建密钥对。 +**注意事项**:不要使用 **root** 用户生成密钥对,这样只有 root 用户才可以使用。使用普通用户创建密钥对。 现在,让我们在本地系统上创建一个 SSH 密钥对。只需要在客户端系统上运行下面的命令。 @@ -35,9 +37,9 @@ $ ssh-keygen ``` -上面的命令将会创建一个 2048 位的 RSA 密钥对。输入两次密码。更重要的是,记住你的密码。后面将会用到它。 +上面的命令将会创建一个 2048 位的 RSA 密钥对。你需要输入两次密码。更重要的是,记住你的密码。后面将会用到它。 -**样例输出** +**样例输出**: ``` Generating public/private rsa key pair. 
@@ -62,22 +64,22 @@ The key's randomart image is: +----[SHA256]-----+ ``` -如果你已经创建了密钥对,你将看到以下信息。输入 ‘y’ 就会覆盖已存在的密钥。 +如果你已经创建了密钥对,你将看到以下信息。输入 `y` 就会覆盖已存在的密钥。 ``` /home/username/.ssh/id_rsa already exists. Overwrite (y/n)? ``` -请注意**密码是可选的**。如果你输入了密码,那么每次通过 SSH 访问远程系统时都要求输入密码,除非你使用了 SSH 代理保存了密码。如果你不想要密码(虽然不安全),简单地输入两次 ENTER。不过,我们建议你使用密码。从安全的角度来看,使用无密码的 ssh 密钥对大体上不是一个很好的主意。 这种方式应该限定在特殊的情况下使用,例如,没有用户介入的服务访问远程系统。(例如,用 rsync 远程备份...) +请注意**密码是可选的**。如果你输入了密码,那么每次通过 SSH 访问远程系统时都要求输入密码,除非你使用了 SSH 代理保存了密码。如果你不想要密码(虽然不安全),简单地敲两次回车。不过,我建议你使用密码。从安全的角度来看,使用无密码的 ssh 密钥对不是什么好主意。这种方式应该限定在特殊的情况下使用,例如,没有用户介入的服务访问远程系统。(例如,用 `rsync` 远程备份……) -如果你已经在个人文件 **~/.ssh/id_rsa** 中有了无密码的密钥对,但想要更新为带密码的密钥。使用下面的命令: +如果你已经在个人文件 `~/.ssh/id_rsa` 中有了无密码的密钥,但想要更新为带密码的密钥。使用下面的命令: ``` $ ssh-keygen -p -f ~/.ssh/id_rsa ``` -样例输出: +**样例输出**: ``` Enter new passphrase (empty for no passphrase): @@ -91,40 +93,40 @@ Your identification has been saved with the new passphrase. $ ssh-copy-id sk@192.168.225.22 ``` -在这,我把本地(Arch Linux)系统上的公钥拷贝到了远程系统(Ubuntu 18.04 LTS)上。从技术上讲,上面的命令会把本地系统 **~/.ssh/id_rsa.pub key** 文件中的内容拷贝到远程系统**~/.ssh/authorized_keys** 中。明白了吗?非常棒。 +在这里,我把本地(Arch Linux)系统上的公钥拷贝到了远程系统(Ubuntu 18.04 LTS)上。从技术上讲,上面的命令会把本地系统 `~/.ssh/id_rsa.pub` 文件中的内容拷贝到远程系统 `~/.ssh/authorized_keys` 中。明白了吗?非常棒。 -输入 **yes** 来继续连接你的远程 SSH 服务端。接着,输入远程系统 root 用户的密码。 +输入 `yes` 来继续连接你的远程 SSH 服务端。接着,输入远程系统用户 `sk` 的密码。 ``` /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys -[email protected]2.168.225.22's password: +sk@192.168.225.22's password: Number of key(s) added: 1 -Now try logging into the machine, with: "ssh '[email protected]'" +Now try logging into the machine, with: "ssh 'sk@192.168.225.22'" and check to make sure that only the key(s) you wanted were added. 
``` -如果你已经拷贝了密钥,但想要替换为新的密码,使用 **-f** 选项覆盖已有的密钥。 +如果你已经拷贝了密钥,但想要替换为新的密码,使用 `-f` 选项覆盖已有的密钥。 ``` $ ssh-copy-id -f sk@192.168.225.22 ``` -我们现在已经成功地将本地系统的 SSH 公钥添加进了远程系统。现在,让我们在远程系统上完全禁用掉基于密码认证的方式。因为,我们已经配置了密钥认证,因此我们不再需要密码认证了。 +我们现在已经成功地将本地系统的 SSH 公钥添加进了远程系统。现在,让我们在远程系统上完全禁用掉基于密码认证的方式。因为我们已经配置了密钥认证,因此不再需要密码认证了。 ### 在远程系统上禁用基于密码认证的 SSH -你需要在 root 或者 sudo 用户下执行下面的命令。 +你需要在 root 用户或者 `sudo` 执行下面的命令。 -为了禁用基于密码的认证,你需要在远程系统的控制台上编辑 **/etc/ssh/sshd_config** 配置文件: +禁用基于密码的认证,你需要在远程系统的终端里编辑 `/etc/ssh/sshd_config` 配置文件: ``` $ sudo vi /etc/ssh/sshd_config ``` -找到下面这一行,去掉注释然后将值设为 **no** +找到下面这一行,去掉注释然后将值设为 `no`: ``` PasswordAuthentication no @@ -146,19 +148,19 @@ $ ssh sk@192.168.225.22 输入密码。 -**样例输出:** +**样例输出**: ``` Enter passphrase for key '/home/sk/.ssh/id_rsa': Last login: Mon Jul 9 09:59:51 2018 from 192.168.225.37 -[email protected]:~$ +sk@ubuntuserver:~$ ``` -现在,你就能 SSH 你的远程系统了。如你所见,我们已经使用之前 **ssh-keygen** 创建的密码登录进了远程系统的账户,而不是使用账户实际的密码。 +现在,你就能 SSH 你的远程系统了。如你所见,我们已经使用之前 `ssh-keygen` 创建的密码登录进了远程系统的账户,而不是使用当前账户实际的密码。 -如果你试图从其他客户端系统 ssh (远程系统),你将会得到这条错误信息。比如,我试图通过命令从 CentOS SSH 访问 Ubuntu 系统: +如果你试图从其它客户端系统 ssh(远程系统),你将会得到这条错误信息。比如,我试图通过命令从 CentOS SSH 访问 Ubuntu 系统: -**样例输出:** +**样例输出**: ``` The authenticity of host '192.168.225.22 (192.168.225.22)' can't be established. @@ -168,7 +170,7 @@ Warning: Permanently added '192.168.225.22' (ECDSA) to the list of known hosts. Permission denied (publickey). ``` -如你所见,除了 CentOS (译注:根据上文,这里应该是 Arch) 系统外,我不能通过其他任何系统 SSH 访问我的远程系统 Ubuntu 18.04。 +如你所见,除了 CentOS(LCTT 译注:根据上文,这里应该是 Arch)系统外,我不能通过其它任何系统 SSH 访问我的远程系统 Ubuntu 18.04。 ### 为 SSH 服务端添加更多客户端系统的密钥 @@ -180,21 +182,21 @@ Permission denied (publickey). 
$ ssh-keygen ``` -输入两次密码。现在, ssh 密钥对已经生成了。你需要手动把公钥(不是私钥)拷贝到远程服务端上。 +输入两次密码。现在,ssh 密钥对已经生成了。你需要手动把公钥(不是私钥)拷贝到远程服务端上。 -使用命令查看公钥: +使用以下命令查看公钥: ``` $ cat ~/.ssh/id_rsa.pub ``` -应该会输出如下信息: +应该会输出类似下面的信息: ``` ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt3a9tIeK5rPx9p74/KjEVXa6/OODyRp0QLS/sLp8W6iTxFL+UgALZlupVNgFjvRR5luJ9dLHWwc+d4umavAWz708e6Na9ftEPQtC28rTFsHwmyLKvLkzcGkC5+A0NdbiDZLaK3K3wgq1jzYYKT5k+IaNS6vtrx5LDObcPNPEBDt4vTixQ7GZHrDUUk5586IKeFfwMCWguHveTN7ykmo2EyL2rV7TmYq+eY2ZqqcsoK0fzXMK7iifGXVmuqTkAmZLGZK8a3bPb6VZd7KFum3Ezbu4BXZGp7FVhnOMgau2kYeOH/ItKPzpCAn+dg3NAAziCCxnII9b4nSSGz3mMY4Y7 ostechnix@centosserver ``` -拷贝所有内容(通过 USB 驱动器或者其它任何介质),然后去你的远程服务端的控制台。像下面那样,在 home 下创建文件夹叫做 **ssh**。你需要以 root 身份执行命令。 +拷贝所有内容(通过 USB 驱动器或者其它任何介质),然后去你的远程服务端的终端,像下面那样,在 `$HOME` 下创建文件夹叫做 `.ssh`。你需要以 root 身份执行命令(注:不一定需要 root)。 ``` $ mkdir -p ~/.ssh @@ -208,15 +210,16 @@ echo {Your_public_key_contents_here} >> ~/.ssh/authorized_keys 在远程系统上重启 ssh 服务。现在,你可以在新的客户端上 SSH 远程服务端了。 -如果觉得手动添加 ssh 公钥有些困难,在远程系统上暂时性启用密码认证,使用 “ssh-copy-id“ 命令从本地系统上拷贝密钥,最后关闭密码认证。 +如果觉得手动添加 ssh 公钥有些困难,在远程系统上暂时性启用密码认证,使用 `ssh-copy-id` 命令从本地系统上拷贝密钥,最后禁用密码认证。 **推荐阅读:** -(译者注:在原文中此处有超链接) +* [SSLH – Share A Same Port For HTTPS And SSH][1] +* [ScanSSH – Fast SSH Server And Open Proxy Scanner][2] 好了,到此为止。基于密钥认证的 SSH 提供了一层防止暴力破解的额外保护。如你所见,配置密钥认证一点也不困难。这是一个非常好的方法让你的 Linux 服务端安全可靠。 -不久我就会带来另一篇有用的文章。到那时,继续关注 OSTechNix。 +不久我会带来另一篇有用的文章。请继续关注 OSTechNix。 干杯! 
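补充一点:上面手动追加公钥的几个步骤也可以串成一个小脚本来演练。下面的示例在临时目录中模拟这一过程(路径和文件名均为演示用的假设值),并顺带收紧了权限 —— sshd 在默认的 `StrictModes yes` 配置下,要求 `~/.ssh` 目录权限为 700、`authorized_keys` 文件权限为 600,否则会拒绝密钥认证:

```
#!/bin/sh
# 在临时目录生成一对演示密钥(-N "" 表示无密码,仅作演示用)
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t rsa -b 2048 -N "" -f /tmp/demo_key

# 模拟远程端:创建 .ssh 目录、追加公钥,并把权限收紧为 700 / 600
mkdir -p /tmp/demo_ssh
chmod 700 /tmp/demo_ssh
rm -f /tmp/demo_ssh/authorized_keys
cat /tmp/demo_key.pub >> /tmp/demo_ssh/authorized_keys
chmod 600 /tmp/demo_ssh/authorized_keys
```

在真实环境中,只需把 `/tmp/demo_ssh` 换成远程系统上用户的 `~/.ssh` 目录即可。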
@@ -227,9 +230,10 @@ via: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/ 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[LuuMing](https://github.com/LuuMing) -校对:[校对者ID](https://github.com/校对者ID) +校对:[pityonline](https://github.com/pityonline) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/cdn-cgi/l/email-protection +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/sslh-share-port-https-ssh/ +[2]: https://www.ostechnix.com/scanssh-fast-ssh-server-open-proxy-scanner/ diff --git a/published/20180724 75 Most Used Essential Linux Applications of 2018.md b/published/20180724 75 Most Used Essential Linux Applications of 2018.md new file mode 100644 index 0000000000..7d0b586129 --- /dev/null +++ b/published/20180724 75 Most Used Essential Linux Applications of 2018.md @@ -0,0 +1,987 @@ +75 个最常用的 Linux 应用程序(2018 年) +====== + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Most-Used-Ubuntu-Applications.png) + +对于许多应用程序来说,2018 年是非常好的一年,尤其是自由开源的应用程序。尽管各种 Linux 发行版都自带了很多默认的应用程序,但用户也可以自由地选择使用它们或者其它任何免费或付费替代方案。 + +下面汇总了[一系列的 Linux 应用程序][3],这些应用程序都能够在 Linux 系统上安装,尽管还有很多其它选择。以下汇总中的任何应用程序都属于其类别中最常用的应用程序,如果你还没有用过,欢迎试用一下! + +### 备份工具 + +#### Rsync + +[Rsync][4] 是一个开源的、节约带宽的工具,它用于执行快速的增量文件传输,而且它也是一个免费工具。 + +``` +$ rsync [OPTION...] SRC... 
[DEST] +``` + +想要了解更多示例和用法,可以参考《[10 个使用 Rsync 命令的实际例子][5]》。 + +#### Timeshift + +[Timeshift][6] 能够通过增量快照来保护用户的系统数据,而且可以按照日期恢复指定的快照,类似于 Mac OS 中的 Time Machine 功能和 Windows 中的系统还原功能。 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Timeshift-Create-Linux-Mint-Snapshot.png) + +### BT(BitTorrent) 客户端 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Torrent-Clients.png) + +#### Deluge + +[Deluge][7] 是一个漂亮的跨平台 BT 客户端,旨在优化 μTorrent 体验,并向用户免费提供服务。 + +使用以下命令在 Ubuntu 和 Debian 安装 Deluge。 + +``` +$ sudo add-apt-repository ppa:deluge-team/ppa +$ sudo apt-get update +$ sudo apt-get install deluge +``` + +#### qBittorent + +[qBittorent][8] 是一个开源的 BT 客户端,旨在提供类似 μTorrent 的免费替代方案。 + +使用以下命令在 Ubuntu 和 Debian 安装 qBittorent。 + +``` +$ sudo add-apt-repository ppa:qbittorrent-team/qbittorrent-stable +$ sudo apt-get update +$ sudo apt-get install qbittorrent +``` + +#### Transmission + +[Transmission][9] 是一个强大的 BT 客户端,它主要关注速度和易用性,一般在很多 Linux 发行版上都有预装。 + +使用以下命令在 Ubuntu 和 Debian 安装 Transmission。 + +``` +$ sudo add-apt-repository ppa:transmissionbt/ppa +$ sudo apt-get update +$ sudo apt-get install transmission-gtk transmission-cli transmission-common transmission-daemon +``` + +### 云存储 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Cloud-Storage.png) + +#### Dropbox + +[Dropbox][10] 团队在今年早些时候给他们的云服务换了一个名字,也为客户提供了更好的性能和集成了更多应用程序。Dropbox 会向用户免费提供 2 GB 存储空间。 + +使用以下命令在 Ubuntu 和 Debian 安装 Dropbox。 + +``` +$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86" | tar xzf - [On 32-Bit] +$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf - [On 64-Bit] +$ ~/.dropbox-dist/dropboxd +``` + +#### Google Drive + +[Google Drive][11] 是 Google 提供的云服务解决方案,这已经是一个广为人知的服务了。与 Dropbox 一样,可以通过它在所有联网的设备上同步文件。它免费提供了 15 GB 存储空间,包括Gmail、Google 图片、Google 地图等服务。 + +参考阅读:[5 个适用于 Linux 的 Google Drive 客户端][12] + +#### Mega + +[Mega][13] 也是一个出色的云存储解决方案,它的亮点除了高度的安全性之外,还有为用户免费提供高达 50 GB 
的免费存储空间。它使用端到端加密,以确保用户的数据安全,所以如果忘记了恢复密钥,用户自己也无法访问到存储的数据。 + +参考阅读:[在 Ubuntu 下载 Mega 云存储客户端][14] + +### 命令行编辑器 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Commandline-Editors.png) + +#### Vim + +[Vim][15] 是 vi 文本编辑器的开源克隆版本,它的主要目的是可以高度定制化并能够处理任何类型的文本。 + +使用以下命令在 Ubuntu 和 Debian 安装 Vim。 + +``` +$ sudo add-apt-repository ppa:jonathonf/vim +$ sudo apt update +$ sudo apt install vim +``` + +#### Emacs + +[Emacs][16] 是一个高度可配置的文本编辑器,最流行的一个分支 GNU Emacs 是用 Lisp 和 C 编写的,它的最大特点是可以自文档化、可扩展和可自定义。 + +使用以下命令在 Ubuntu 和 Debian 安装 Emacs。 + +``` +$ sudo add-apt-repository ppa:kelleyk/emacs +$ sudo apt update +$ sudo apt install emacs25 +``` + +#### Nano + +[Nano][17] 是一款功能丰富的命令行文本编辑器,比较适合高级用户。它可以通过多个终端进行不同功能的操作。 + +使用以下命令在 Ubuntu 和 Debian 安装 Nano。 + +``` +$ sudo add-apt-repository ppa:n-muench/programs-ppa +$ sudo apt-get update +$ sudo apt-get install nano +``` + +### 下载器 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Download-Managers.png) + +#### Aria2 + +[Aria2][18] 是一个开源的、轻量级的、多软件源和多协议的命令行下载器,它支持 Metalink、torrent、HTTP/HTTPS、SFTP 等多种协议。 + +使用以下命令在 Ubuntu 和 Debian 安装 Aria2。 + +``` +$ sudo apt-get install aria2 +``` + +#### uGet + +[uGet][19] 已经成为 Linux 各种发行版中排名第一的开源下载器,它可以处理任何下载任务,包括多连接、队列、类目等。 + +使用以下命令在 Ubuntu 和 Debian 安装 uGet。 + +``` +$ sudo add-apt-repository ppa:plushuang-tw/uget-stable +$ sudo apt update +$ sudo apt install uget +``` + +#### XDM + +[XDM][20](Xtreme Download Manager)是一个使用 Java 编写的开源下载软件。和其它下载器一样,它可以结合队列、种子、浏览器使用,而且还带有视频采集器和智能调度器。 + +使用以下命令在 Ubuntu 和 Debian 安装 XDM。 + +``` +$ sudo add-apt-repository ppa:noobslab/apps +$ sudo apt-get update +$ sudo apt-get install xdman +``` + +### 电子邮件客户端 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Email-Clients.png) + +#### Thunderbird + +[Thunderbird][21] 是最受欢迎的电子邮件客户端之一。它的优点包括免费、开源、可定制、功能丰富,而且最重要的是安装过程也很简便。 + +使用以下命令在 Ubuntu 和 Debian 安装 Thunderbird。 + +``` +$ sudo add-apt-repository ppa:ubuntu-mozilla-security/ppa +$ sudo apt-get update +$ sudo apt-get install thunderbird 
+``` + +#### Geary + +[Geary][22] 是一个基于 WebKitGTK+ 的开源电子邮件客户端。它是一个免费开源的功能丰富的软件,并被 GNOME 项目收录。 + +使用以下命令在 Ubuntu 和 Debian 安装 Geary。 + +``` +$ sudo add-apt-repository ppa:geary-team/releases +$ sudo apt-get update +$ sudo apt-get install geary +``` + +#### Evolution + +[Evolution][23] 是一个免费开源的电子邮件客户端,可以用于电子邮件、会议日程、备忘录和联系人的管理。 + +使用以下命令在 Ubuntu 和 Debian 安装 Evolution。 + +``` +$ sudo add-apt-repository ppa:gnome3-team/gnome3-staging +$ sudo apt-get update +$ sudo apt-get install evolution +``` + +### 财务软件 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Accounting-Software.png) + +#### GnuCash + +[GnuCash][24] 是一款免费的跨平台开源软件,它适用于个人和中小型企业的财务任务。 + +使用以下命令在 Ubuntu 和 Debian 安装 GnuCash。 + +``` +$ sudo sh -c 'echo "deb http://archive.getdeb.net/ubuntu $(lsb_release -sc)-getdeb apps" >> /etc/apt/sources.list.d/getdeb.list' +$ sudo apt-get update +$ sudo apt-get install gnucash +``` + +#### KMyMoney + +[KMyMoney][25] 是一个财务管理软件,它可以提供商用或个人理财所需的大部分主要功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 KmyMoney。 + +``` +$ sudo add-apt-repository ppa:claydoh/kmymoney2-kde4 +$ sudo apt-get update +$ sudo apt-get install kmymoney +``` + +### IDE + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-IDE-Editors.png) + +#### Eclipse IDE + +[Eclipse][26] 是最广为使用的 Java IDE,它包括一个基本工作空间和一个用于自定义编程环境的强大的的插件配置系统。 + +关于 Eclipse IDE 的安装,可以参考 [如何在 Debian 和 Ubuntu 上安装 Eclipse IDE][27] 这一篇文章。 + +#### Netbeans IDE + +[Netbeans][28] 是一个相当受用户欢迎的 IDE,它支持使用 Java、PHP、HTML 5、JavaScript、C/C++ 或其他语言编写移动应用,桌面软件和 web 应用。 + +关于 Netbeans IDE 的安装,可以参考 [如何在 Debian 和 Ubuntu 上安装 Netbeans IDE][29] 这一篇文章。 + +#### Brackets + +[Brackets][30] 是由 Adobe 开发的高级文本编辑器,它带有可视化工具,支持预处理程序,以及用于 web 开发的以设计为中心的用户流程。对于熟悉它的用户,它可以发挥 IDE 的作用。 + +使用以下命令在 Ubuntu 和 Debian 安装 Brackets。 + +``` +$ sudo add-apt-repository ppa:webupd8team/brackets +$ sudo apt-get update +$ sudo apt-get install brackets +``` + +#### Atom IDE + +[Atom IDE][31] 是一个加强版的 Atom 编辑器,它添加了大量扩展和库以提高性能和增加功能。总之,它是各方面都变得更强大了的 Atom 。 + +使用以下命令在 Ubuntu 和 Debian 安装 Atom。 
+ +``` +$ sudo apt-get install snapd +$ sudo snap install atom --classic +``` + +#### Light Table + +[Light Table][32] 号称下一代的 IDE,它提供了数据流量统计和协作编程等的强大功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 Light Table。 + +``` +$ sudo add-apt-repository ppa:dr-akulavich/lighttable +$ sudo apt-get update +$ sudo apt-get install lighttable-installer +``` + +#### Visual Studio Code + +[Visual Studio Code][33] 是由微软开发的代码编辑器,它包含了文本编辑器所需要的最先进的功能,包括语法高亮、自动完成、代码调试、性能统计和图表显示等功能。 + +参考阅读:[在Ubuntu 下载 Visual Studio Code][34] + +### 即时通信工具 + +![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-IM-Clients.png) + +#### Pidgin + +[Pidgin][35] 是一个开源的即时通信工具,它几乎支持所有聊天平台,还支持额外扩展功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 Pidgin。 + +``` +$ sudo add-apt-repository ppa:jonathonf/backports +$ sudo apt-get update +$ sudo apt-get install pidgin +``` + +#### Skype + +[Skype][36] 也是一个广为人知的软件了,任何感兴趣的用户都可以在 Linux 上使用。 + +使用以下命令在 Ubuntu 和 Debian 安装 Skype。 + +``` +$ sudo apt install snapd +$ sudo snap install skype --classic +``` + +#### Empathy + +[Empathy][37] 是一个支持多协议语音、视频聊天、文本和文件传输的即时通信工具。它还允许用户添加多个服务的帐户,并用其与所有服务的帐户进行交互。 + +使用以下命令在 Ubuntu 和 Debian 安装 Empathy。 + +``` +$ sudo apt-get install empathy +``` + +### Linux 防病毒工具 + +#### ClamAV/ClamTk + +[ClamAV][38] 是一个开源的跨平台命令行防病毒工具,用于检测木马、病毒和其他恶意代码。而 [ClamTk][39] 则是它的前端 GUI。 + +使用以下命令在 Ubuntu 和 Debian 安装 ClamAV 和 ClamTk。 + +``` +$ sudo apt-get install clamav +$ sudo apt-get install clamtk +``` + +### Linux 桌面环境 + +#### Cinnamon + +[Cinnamon][40] 是 GNOME 3 的自由开源衍生产品,它遵循传统的 桌面比拟desktop metaphor 约定。 + +使用以下命令在 Ubuntu 和 Debian 安装 Cinnamon。 + +``` +$ sudo add-apt-repository ppa:embrosyn/cinnamon +$ sudo apt update +$ sudo apt install cinnamon-desktop-environment lightdm +``` + +#### Mate + +[Mate][41] 桌面环境是 GNOME 2 的衍生和延续,目的是在 Linux 上通过使用传统的桌面比拟提供有一个吸引力的 UI。 + +使用以下命令在 Ubuntu 和 Debian 安装 Mate。 + +``` +$ sudo apt install tasksel +$ sudo apt update +$ sudo tasksel install ubuntu-mate-desktop +``` + +#### GNOME + +[GNOME][42] 是由一些免费和开源应用程序组成的桌面环境,它可以运行在任何 Linux 发行版和大多数 BSD 衍生版本上。 
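上面介绍了几种桌面环境。在安装一个新的桌面环境之前,可以先确认当前会话正在运行的是哪一个。下面是一个简单的示例(仅作演示,假设系统遵循 freedesktop.org 规范,登录会话设置了 `XDG_CURRENT_DESKTOP` 环境变量):

```shell
# 输出当前会话的桌面环境名称,例如 GNOME、X-Cinnamon 或 KDE
# 若变量未设置(例如在纯 tty 下),则输出 unknown
echo "${XDG_CURRENT_DESKTOP:-unknown}"
```

安装多个桌面环境后,一般可以在登录管理器(如 LightDM 或 GDM)的会话菜单中自由切换。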
+ +使用以下命令在 Ubuntu 和 Debian 安装 Gnome。 + +``` +$ sudo apt install tasksel +$ sudo apt update +$ sudo tasksel install ubuntu-desktop +``` + +#### KDE + +[KDE][43] 由 KDE 社区开发,它为用户提供图形解决方案以控制操作系统并执行不同的计算任务。 + +使用以下命令在 Ubuntu 和 Debian 安装 KDE。 + +``` +$ sudo apt install tasksel +$ sudo apt update +$ sudo tasksel install kubuntu-desktop +``` + +### Linux 维护工具 + +#### GNOME Tweak Tool + +[GNOME Tweak Tool][44] 是用于自定义和调整 GNOME 3 和 GNOME Shell 设置的流行工具。 + +使用以下命令在 Ubuntu 和 Debian 安装 GNOME Tweak Tool。 + +``` +$ sudo apt install gnome-tweak-tool +``` + +#### Stacer + +[Stacer][45] 是一款用于监控和优化 Linux 系统的免费开源应用程序。 + +使用以下命令在 Ubuntu 和 Debian 安装 Stacer。 + +``` +$ sudo add-apt-repository ppa:oguzhaninan/stacer +$ sudo apt-get update +$ sudo apt-get install stacer +``` + +#### BleachBit + +[BleachBit][46] 是一个免费的磁盘空间清理器,它也可用作隐私管理器和系统优化器。 + +参考阅读:[在 Ubuntu 下载 BleachBit][47] + +### Linux 终端工具 + +#### GNOME 终端 + +[GNOME 终端][48] 是 GNOME 的默认终端模拟器。 + +使用以下命令在 Ubuntu 和 Debian 安装 Gnome 终端。 + +``` +$ sudo apt-get install gnome-terminal +``` + +#### Konsole + +[Konsole][49] 是 KDE 的一个终端模拟器。 + +使用以下命令在 Ubuntu 和 Debian 安装 Konsole。 + +``` +$ sudo apt-get install konsole +``` + +#### Terminator + +[Terminator][50] 是一个功能丰富的终端程序,它基于 GNOME 终端,并且专注于整理终端功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 Terminator。 + +``` +$ sudo apt-get install terminator +``` + +#### Guake + +[Guake][51] 是 GNOME 桌面环境下一个轻量级的可下拉式终端。 + +使用以下命令在 Ubuntu 和 Debian 安装 Guake。 + +``` +$ sudo apt-get install guake +``` + +### 多媒体编辑工具 + +#### Ardour + +[Ardour][52] 是一款漂亮的的数字音频工作站Digital Audio Workstation,可以完成专业的录制、编辑和混音工作。 + +使用以下命令在 Ubuntu 和 Debian 安装 Ardour。 + +``` +$ sudo add-apt-repository ppa:dobey/audiotools +$ sudo apt-get update +$ sudo apt-get install ardour +``` + +#### Audacity + +[Audacity][53] 是最著名的音频编辑软件之一,它是一款跨平台的开源多轨音频编辑器。 + +使用以下命令在 Ubuntu 和 Debian 安装 Audacity。 + +``` +$ sudo add-apt-repository ppa:ubuntuhandbook1/audacity +$ sudo apt-get update +$ sudo apt-get install audacity +``` + +#### GIMP + +[GIMP][54] 是 Photoshop 
的开源替代品中最受欢迎的。这是因为它有多种可自定义的选项、第三方插件以及活跃的用户社区。 + +使用以下命令在 Ubuntu 和 Debian 安装 Gimp。 + +``` +$ sudo add-apt-repository ppa:otto-kesselgulasch/gimp +$ sudo apt update +$ sudo apt install gimp +``` + +#### Krita + +[Krita][55] 是一款开源的绘画程序,它具有美观的 UI 和可靠的性能,也可以用作图像处理工具。 + +使用以下命令在 Ubuntu 和 Debian 安装 Krita。 + +``` +$ sudo add-apt-repository ppa:kritalime/ppa +$ sudo apt update +$ sudo apt install krita +``` + +#### Lightworks + +[Lightworks][56] 是一款功能强大、灵活美观的专业视频编辑工具。它拥有上百种配套的视觉效果功能,可以处理任何编辑任务,毕竟这个软件已经有长达 25 年的视频处理经验。 + +参考阅读:[在 Ubuntu 下载 Lightworks][57] + +#### OpenShot + +[OpenShot][58] 是一款屡获殊荣的免费开源视频编辑器,这主要得益于其出色的性能和强大的功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 `Openshot。 + +``` +$ sudo add-apt-repository ppa:openshot.developers/ppa +$ sudo apt update +$ sudo apt install openshot-qt +``` + +#### PiTiV + +[Pitivi][59] 也是一个美观的视频编辑器,它有优美的代码库、优质的社区,还支持优秀的协作编辑功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 PiTiV。 + +``` +$ flatpak install --user https://flathub.org/repo/appstream/org.pitivi.Pitivi.flatpakref +$ flatpak install --user http://flatpak.pitivi.org/pitivi.flatpakref +$ flatpak run org.pitivi.Pitivi//stable +``` + +### 音乐播放器 + +#### Rhythmbox + +[Rhythmbox][60] 支持海量种类的音乐,目前被认为是最可靠的音乐播放器,并由 Ubuntu 自带。 + +使用以下命令在 Ubuntu 和 Debian 安装 Rhythmbox。 + +``` +$ sudo add-apt-repository ppa:fossfreedom/rhythmbox +$ sudo apt-get update +$ sudo apt-get install rhythmbox +``` + +#### Lollypop + +[Lollypop][61] 是一款较为年轻的开源音乐播放器,它有很多高级选项,包括网络电台,滑动播放和派对模式。尽管功能繁多,它仍然尽量做到简单易管理。 + +使用以下命令在 Ubuntu 和 Debian 安装 Lollypop。 + +``` +$ sudo add-apt-repository ppa:gnumdk/lollypop +$ sudo apt-get update +$ sudo apt-get install lollypop +``` + +#### Amarok + +[Amarok][62] 是一款功能强大的音乐播放器,它有一个直观的 UI 和大量的高级功能,而且允许用户根据自己的偏好去发现新音乐。 + +使用以下命令在 Ubuntu 和 Debian 安装 Amarok。 + +``` +$ sudo apt-get update +$ sudo apt-get install amarok + +``` + +#### Clementine + +[Clementine][63] 是一款 Amarok 风格的音乐播放器,因此和 Amarok 相似,也有直观的用户界面、先进的控制模块,以及让用户搜索和发现新音乐的功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 Clementine。 + +``` +$ sudo add-apt-repository 
ppa:me-davidsansome/clementine +$ sudo apt-get update +$ sudo apt-get install clementine +``` + +#### Cmus + +[Cmus][64] 可以说是最高效的的命令行界面音乐播放器了,它具有快速可靠的特点,也支持使用扩展。 + +使用以下命令在 Ubuntu 和 Debian 安装 Cmus。 + +``` +$ sudo add-apt-repository ppa:jmuc/cmus +$ sudo apt-get update +$ sudo apt-get install cmus +``` + +### 办公软件 + +#### Calligra 套件 + +[Calligra 套件][65]为用户提供了一套总共 8 个应用程序,涵盖办公、管理、图表等各个范畴。 + +使用以下命令在 Ubuntu 和 Debian 安装 Calligra 套件。 + +``` +$ sudo apt-get install calligra +``` + +#### LibreOffice + +[LibreOffice][66] 是开源社区中开发过程最活跃的办公套件,它以可靠性著称,也可以通过扩展来添加功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 LibreOffice。 + +``` +$ sudo add-apt-repository ppa:libreoffice/ppa +$ sudo apt update +$ sudo apt install libreoffice +``` + +#### WPS Office + +[WPS Office][67] 是一款漂亮的办公套件,它有一个很具现代感的 UI。 + +参考阅读:[在 Ubuntu 安装 WPS Office][68] + +### 屏幕截图工具 + +#### Shutter + +[Shutter][69] 允许用户截取桌面的屏幕截图,然后使用一些效果进行编辑,还支持上传和在线共享。 + +使用以下命令在 Ubuntu 和 Debian 安装 Shutter。 + +``` +$ sudo add-apt-repository -y ppa:shutter/ppa +$ sudo apt update +$ sudo apt install shutter +``` + +#### Kazam + +[Kazam][70] 可以用于捕获屏幕截图,它的输出对于任何支持 VP8/WebM 和 PulseAudio 视频播放器都可用。 + +使用以下命令在 Ubuntu 和 Debian 安装 Kazam。 + +``` +$ sudo add-apt-repository ppa:kazam-team/unstable-series +$ sudo apt update +$ sudo apt install kazam python3-cairo python3-xlib +``` + +#### Gnome Screenshot + +[Gnome Screenshot][71] 过去曾经和 Gnome 一起捆绑,但现在已经独立出来。它以易于共享的格式进行截屏。 + +使用以下命令在 Ubuntu 和 Debian 安装 Gnome Screenshot。 + +``` +$ sudo apt-get update +$ sudo apt-get install gnome-screenshot +``` + +### 录屏工具 + +#### SimpleScreenRecorder + +[SimpleScreenRecorder][72] 面世时已经是录屏工具中的佼佼者,现在已成为 Linux 各个发行版中最有效、最易用的录屏工具之一。 + +使用以下命令在 Ubuntu 和 Debian 安装 SimpleScreenRecorder。 + +``` +$ sudo add-apt-repository ppa:maarten-baert/simplescreenrecorder +$ sudo apt-get update +$ sudo apt-get install simplescreenrecorder +``` + +#### recordMyDesktop + +[recordMyDesktop][73] 是一个开源的会话记录器,它也能记录桌面会话的音频。 + +使用以下命令在 Ubuntu 和 Debian 安装 recordMyDesktop。 + +``` +$ sudo apt-get update +$ 
sudo apt-get install gtk-recordmydesktop +``` + +### 文本编辑器 + +#### Atom + +[Atom][74] 是由 GitHub 开发和维护的可定制文本编辑器。它是开箱即用的,但也可以使用扩展和主题自定义 UI 来增强其功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 Atom。 + +``` +$ sudo apt-get install snapd +$ sudo snap install atom --classic +``` + +#### Sublime Text + +[Sublime Text][75] 已经成为目前最棒的文本编辑器。它可定制、轻量灵活(即使打开了大量数据文件和加入了大量扩展),最重要的是可以永久免费使用。 + +使用以下命令在 Ubuntu 和 Debian 安装 Sublime Text。 + +``` +$ sudo apt-get install snapd +$ sudo snap install sublime-text +``` + +#### Geany + +[Geany][76] 是一个内存友好的文本编辑器,它具有基本的IDE功能,可以显示加载时间、扩展库函数等。 + +使用以下命令在 Ubuntu 和 Debian 安装 Geany。 + +``` +$ sudo apt-get update +$ sudo apt-get install geany +``` + +#### Gedit + +[Gedit][77] 以其简单著称,在很多 Linux 发行版都有预装,它具有文本编辑器都具有的优秀的功能。 + +使用以下命令在 Ubuntu 和 Debian 安装 Gedit。 + +``` +$ sudo apt-get update +$ sudo apt-get install gedit +``` + +### 备忘录软件 + +#### Evernote + +[Evernote][78] 是一款云上的笔记程序,它带有待办列表和提醒功能,能够与不同类型的笔记完美配合。 + +Evernote 在 Linux 上没有官方提供的软件,但可以参考 [Linux 上的 6 个 Evernote 替代客户端][79] 这篇文章使用其它第三方工具。 + +#### Everdo + +[Everdo][78] 是一款美观,安全,易兼容的备忘软件,可以用于处理待办事项和其它笔记。如果你认为 Evernote 有所不足,相信 Everdo 会是一个好的替代。 + +参考阅读:[在 Ubuntu 下载 Everdo][80] + +#### Taskwarrior + +[Taskwarrior][81] 是一个用于管理个人任务的开源跨平台命令行应用,它的速度和无干扰的环境是它的两大特点。 + +使用以下命令在 Ubuntu 和 Debian 安装 Taskwarrior。 + +``` +$ sudo apt-get update +$ sudo apt-get install taskwarrior +``` + +### 视频播放器 + +#### Banshee + +[Banshee][82] 是一个开源的支持多格式的媒体播放器,于 2005 年开始开发并逐渐成长。 + +使用以下命令在 Ubuntu 和 Debian 安装 Banshee。 + +``` +$ sudo add-apt-repository ppa:banshee-team/ppa +$ sudo apt-get update +$ sudo apt-get install banshee +``` + +#### VLC + +[VLC][83] 是我最喜欢的视频播放器,它几乎可以播放任何格式的音频和视频,它还可以播放网络电台、录制桌面会话以及在线播放电影。 + +使用以下命令在 Ubuntu 和 Debian 安装 VLC。 + +``` +$ sudo add-apt-repository ppa:videolan/stable-daily +$ sudo apt-get update +$ sudo apt-get install vlc +``` + +#### Kodi + +[Kodi][84] 是世界上最着名的媒体播放器之一,它有一个成熟的媒体中心,可以播放本地和远程的多媒体文件。 + +使用以下命令在 Ubuntu 和 Debian 安装 Kodi。 + +``` +$ sudo apt-get install software-properties-common +$ sudo 
add-apt-repository ppa:team-xbmc/ppa
+$ sudo apt-get update
+$ sudo apt-get install kodi
+```
+
+#### SMPlayer
+
+[SMPlayer][85] 是 MPlayer 的 GUI 版本,它能够处理所有流行的媒体格式,并且还支持从 YouTube 下载字幕以及投射到 Chromecast 播放。
+
+使用以下命令在 Ubuntu 和 Debian 安装 SMPlayer。
+
+```
+$ sudo add-apt-repository ppa:rvm/smplayer
+$ sudo apt-get update
+$ sudo apt-get install smplayer
+```
+
+### 虚拟化工具
+
+#### VirtualBox
+
+[VirtualBox][86] 是一个用于操作系统虚拟化的开源应用程序,在服务器、台式机和嵌入式系统上都可以运行。
+
+使用以下命令在 Ubuntu 和 Debian 安装 VirtualBox。
+
+```
+$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
+$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
+$ sudo apt-get update
+$ sudo apt-get install virtualbox-5.2
+$ virtualbox
+```
+
+#### VMWare
+
+[VMware][87] 是一个为客户提供平台虚拟化和云计算服务的数字工作区,也是第一个成功将 x86 架构系统虚拟化的厂商。VMware 的工作站(Workstation)产品允许用户在同一台物理机上同时运行多个操作系统。
+
+参阅 [在 Ubuntu 上安装 VMWare Workstation Pro][88] 可以了解 VMWare 的安装。
+
+### 浏览器
+
+#### Chrome
+
+[Google Chrome][89] 无疑是最受欢迎的浏览器。Chrome 以其速度、简洁、安全、美观而受人喜爱,它遵循了 Google 的界面设计风格,是 web 开发人员不可缺少的浏览器,同时它也是免费使用的。
+
+使用以下命令在 Ubuntu 和 Debian 安装 Google Chrome。
+
+```
+$ wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
+$ sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
+$ sudo apt-get update
+$ sudo apt-get install google-chrome-stable
+```
+
+#### Firefox
+
+[Firefox Quantum][90] 是一款漂亮、快速、完善并且可自定义的浏览器。它也是自由开源的,包含有开发人员所需要的工具,对于初学者也没有任何使用门槛。
+
+使用以下命令在 Ubuntu 和 Debian 安装 Firefox Quantum。
+
+```
+$ sudo add-apt-repository ppa:mozillateam/firefox-next
+$ sudo apt update && sudo apt upgrade
+$ sudo apt install firefox
+```
+
+#### Vivaldi
+
+[Vivaldi][91] 是一个基于 Chrome 的自由开源项目,旨在通过添加扩展来使 Chrome 的功能更加完善。色彩丰富的界面、性能良好、灵活性强是它的几大特点。
+
+参考阅读:[在 Ubuntu 下载 Vivaldi][91]
+
+以上就是我的推荐,你还有更好的软件向大家分享吗?欢迎评论。
+
+--------------------------------------------------------------------------------
+
+via: 
https://www.fossmint.com/most-used-linux-applications/ + +作者:[Martins D. Okoi][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.fossmint.com/author/dillivine/ +[1]:https://plus.google.com/share?url=https://www.fossmint.com/most-used-linux-applications/ "Share on Google+" +[2]:https://www.linkedin.com/shareArticle?mini=true&url=https://www.fossmint.com/most-used-linux-applications/ "Share on LinkedIn" +[3]:https://www.fossmint.com/awesome-linux-software/ +[4]:https://rsync.samba.org/ +[5]:https://www.tecmint.com/rsync-local-remote-file-synchronization-commands/ +[6]:https://github.com/teejee2008/timeshift +[7]:https://deluge-torrent.org/ +[8]:https://www.qbittorrent.org/ +[9]:https://transmissionbt.com/ +[10]:https://www.dropbox.com/ +[11]:https://www.google.com/drive/ +[12]:https://www.fossmint.com/best-google-drive-clients-for-linux/ +[13]:https://mega.nz/ +[14]:https://mega.nz/sync!linux +[15]:https://www.vim.org/ +[16]:https://www.gnu.org/s/emacs/ +[17]:https://www.nano-editor.org/ +[18]:https://aria2.github.io/ +[19]:http://ugetdm.com/ +[20]:http://xdman.sourceforge.net/ +[21]:https://www.thunderbird.net/ +[22]:https://github.com/GNOME/geary +[23]:https://github.com/GNOME/evolution +[24]:https://www.gnucash.org/ +[25]:https://kmymoney.org/ +[26]:https://www.eclipse.org/ide/ +[27]:https://www.tecmint.com/install-eclipse-oxygen-ide-in-ubuntu-debian/ +[28]:https://netbeans.org/ +[29]:https://www.tecmint.com/install-netbeans-ide-in-ubuntu-debian-linux-mint/ +[30]:http://brackets.io/ +[31]:https://ide.atom.io/ +[32]:http://lighttable.com/ +[33]:https://code.visualstudio.com/ +[34]:https://code.visualstudio.com/download +[35]:https://www.pidgin.im/ +[36]:https://www.skype.com/ +[37]:https://wiki.gnome.org/Apps/Empathy +[38]:https://www.clamav.net/ 
+[39]:https://dave-theunsub.github.io/clamtk/ +[40]:https://github.com/linuxmint/cinnamon-desktop +[41]:https://mate-desktop.org/ +[42]:https://www.gnome.org/ +[43]:https://www.kde.org/plasma-desktop +[44]:https://github.com/nzjrs/gnome-tweak-tool +[45]:https://github.com/oguzhaninan/Stacer +[46]:https://www.bleachbit.org/ +[47]:https://www.bleachbit.org/download +[48]:https://github.com/GNOME/gnome-terminal +[49]:https://konsole.kde.org/ +[50]:https://gnometerminator.blogspot.com/p/introduction.html +[51]:http://guake-project.org/ +[52]:https://ardour.org/ +[53]:https://www.audacityteam.org/ +[54]:https://www.gimp.org/ +[55]:https://krita.org/en/ +[56]:https://www.lwks.com/ +[57]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206 +[58]:https://www.openshot.org/ +[59]:http://www.pitivi.org/ +[60]:https://wiki.gnome.org/Apps/Rhythmbox +[61]:https://gnumdk.github.io/lollypop-web/ +[62]:https://amarok.kde.org/en +[63]:https://www.clementine-player.org/ +[64]:https://cmus.github.io/ +[65]:https://www.calligra.org/tour/calligra-suite/ +[66]:https://www.libreoffice.org/ +[67]:https://www.wps.com/ +[68]:http://wps-community.org/downloads +[69]:http://shutter-project.org/ +[70]:https://launchpad.net/kazam +[71]:https://gitlab.gnome.org/GNOME/gnome-screenshot +[72]:http://www.maartenbaert.be/simplescreenrecorder/ +[73]:http://recordmydesktop.sourceforge.net/about.php +[74]:https://atom.io/ +[75]:https://www.sublimetext.com/ +[76]:https://www.geany.org/ +[77]:https://wiki.gnome.org/Apps/Gedit +[78]:https://everdo.net/ +[79]:https://www.fossmint.com/evernote-alternatives-for-linux/ +[80]:https://everdo.net/linux/ +[81]:https://taskwarrior.org/ +[82]:http://banshee.fm/ +[83]:https://www.videolan.org/ +[84]:https://kodi.tv/ +[85]:https://www.smplayer.info/ +[86]:https://www.virtualbox.org/wiki/VirtualBox +[87]:https://www.vmware.com/ +[88]:https://www.tecmint.com/install-vmware-workstation-in-linux/ +[89]:https://www.google.com/chrome/ 
+[90]:https://www.mozilla.org/en-US/firefox/ +[91]:https://vivaldi.com/ + diff --git a/published/20180724 Building a network attached storage device with a Raspberry Pi.md b/published/20180724 Building a network attached storage device with a Raspberry Pi.md new file mode 100644 index 0000000000..8cf0a1802e --- /dev/null +++ b/published/20180724 Building a network attached storage device with a Raspberry Pi.md @@ -0,0 +1,211 @@ +树莓派自建 NAS 云盘之——树莓派搭建网络存储盘 +====== +> 跟随这些逐步指导构建你自己的基于树莓派的 NAS 系统。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl) + +我将在接下来的这三篇文章中讲述如何搭建一个简便、实用的 NAS 云盘系统。我在这个中心化的存储系统中存储数据,并且让它每晚都会自动的备份增量数据。本系列文章将利用 NFS 文件系统将磁盘挂载到同一网络下的不同设备上,使用 [Nextcloud][1] 来离线访问数据、分享数据。 + +本文主要讲述将数据盘挂载到远程设备上的软硬件步骤。本系列第二篇文章将讨论数据备份策略、如何添加定时备份数据任务。最后一篇文章中我们将会安装 Nextcloud 软件,用户通过 Nextcloud 提供的 web 界面可以方便的离线或在线访问数据。本系列教程最终搭建的 NAS 云盘支持多用户操作、文件共享等功能,所以你可以通过它方便的分享数据,比如说你可以发送一个加密链接,跟朋友分享你的照片等等。 + +最终的系统架构如下图所示: + + +![](https://opensource.com/sites/default/files/uploads/nas_part1.png) + +### 硬件 + +首先需要准备硬件。本文所列方案只是其中一种示例,你也可以按不同的硬件方案进行采购。 + +最主要的就是[树莓派 3][2],它带有四核 CPU、1G RAM,以及(比较)快速的网络接口。数据将存储在两个 USB 磁盘驱动器上(这里使用 1TB 磁盘);其中一个磁盘用于每天数据存储,另一个用于数据备份。请务必使用有源 USB 磁盘驱动器或者带附加电源的 USB 集线器,因为树莓派无法为两个 USB 磁盘驱动器供电。 + +### 软件 + +在该社区中最活跃的操作系统当属 [Raspbian][3],便于定制个性化项目。已经有很多 [操作指南][4] 讲述如何在树莓派中安装 Raspbian 系统,所以这里不再赘述。在撰写本文时,最新的官方支持版本是 [Raspbian Stretch][5],它对我来说很好使用。 + +到此,我将假设你已经配置好了基本的 Raspbian 系统并且可以通过 `ssh` 访问到你的树莓派。 + +### 准备 USB 磁盘驱动器 + +为了更好地读写数据,我建议使用 ext4 文件系统去格式化磁盘。首先,你必须先找到连接到树莓派的磁盘。你可以在 `/dev/sd/` 中找到磁盘设备。使用命令 `fdisk -l`,你可以找到刚刚连接的两块 USB 磁盘驱动器。请注意,操作下面的步骤将会清除 USB 磁盘驱动器上的所有数据,请做好备份。 + +``` +pi@raspberrypi:~ $ sudo fdisk -l + +<...> + +Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors +Units: sectors of 1 * 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +Disklabel type: dos +Disk identifier: 0xe8900690 + +Device Boot 
Start End Sectors Size Id Type +/dev/sda1 2048 1953525167 1953523120 931.5G 83 Linux + + +Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors +Units: sectors of 1 * 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +Disklabel type: dos +Disk identifier: 0x6aa4f598 + +Device Boot Start End Sectors Size Id Type +/dev/sdb1 * 2048 1953521663 1953519616 931.5G 83 Linux + +``` + +由于这些设备是连接到树莓派的唯一的 1TB 的磁盘,所以我们可以很容易的辨别出 `/dev/sda` 和 `/dev/sdb` 就是那两个 USB 磁盘驱动器。每个磁盘末尾的分区表提示了在执行以下的步骤后如何查看,这些步骤将会格式化磁盘并创建分区表。为每个 USB 磁盘驱动器按以下步骤进行操作(假设你的磁盘也是 `/dev/sda` 和 `/dev/sdb`,第二次操作你只要替换命令中的 `sda` 为 `sdb` 即可)。 + +首先,删除磁盘分区表,创建一个新的并且只包含一个分区的新分区表。在 `fdisk` 中,你可以使用交互单字母命令来告诉程序你想要执行的操作。只需要在提示符 `Command(m for help):` 后输入相应的字母即可(可以使用 `m` 命令获得更多详细信息): + +``` +pi@raspberrypi:~ $ sudo fdisk /dev/sda + +Welcome to fdisk (util-linux 2.29.2). +Changes will remain in memory only, until you decide to write them. +Be careful before using the write command. + + +Command (m for help): o +Created a new DOS disklabel with disk identifier 0x9c310964. + +Command (m for help): n +Partition type + p primary (0 primary, 0 extended, 4 free) + e extended (container for logical partitions) +Select (default p): p +Partition number (1-4, default 1): +First sector (2048-1953525167, default 2048): +Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167): + +Created a new partition 1 of type 'Linux' and of size 931.5 GiB. + +Command (m for help): p + +Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors +Units: sectors of 1 * 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +Disklabel type: dos +Disk identifier: 0x9c310964 + +Device Boot Start End Sectors Size Id Type +/dev/sda1 2048 1953525167 1953523120 931.5G 83 Linux + +Command (m for help): w +The partition table has been altered. +Syncing disks. 
+```
+
+现在,我们将用 ext4 文件系统格式化新创建的分区 `/dev/sda1`:
+
+```
+pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1
+mke2fs 1.43.4 (31-Jan-2017)
+Discarding device blocks: done
+
+<...>
+
+Allocating group tables: done
+Writing inode tables: done
+Creating journal (1024 blocks): done
+Writing superblocks and filesystem accounting information: done
+```
+
+重复以上步骤后,让我们根据用途来对它们建立标签:
+
+```
+pi@raspberrypi:~ $ sudo e2label /dev/sda1 data
+pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup
+```
+
+现在,让我们挂载这些磁盘并存储一些数据。以我运营该系统超过一年的经验来看,当树莓派启动时(例如在断电后),USB 磁盘驱动器并不是总被挂载,因此我建议使用 autofs 在需要的时候进行挂载。
+
+首先,安装 autofs 并创建挂载点:
+
+```
+pi@raspberrypi:~ $ sudo apt install autofs
+pi@raspberrypi:~ $ sudo mkdir /nas
+```
+
+然后在 `/etc/auto.master` 中添加下面这行来挂载设备:
+
+```
+/nas    /etc/auto.usb
+```
+
+如果 `/etc/auto.usb` 文件不存在,则创建该文件并写入以下内容,然后重新启动 autofs 服务:
+
+```
+data -fstype=ext4,rw :/dev/disk/by-label/data
+backup -fstype=ext4,rw :/dev/disk/by-label/backup
+pi@raspberrypi3:~ $ sudo service autofs restart
+```
+
+现在你应该可以分别访问 `/nas/data` 以及 `/nas/backup` 磁盘了。显然,到此还不会令人太兴奋,因为你只是擦除了磁盘中的数据。不过,你可以执行以下命令来确认设备是否已经挂载成功:
+
+```
+pi@raspberrypi3:~ $ cd /nas/data
+pi@raspberrypi3:/nas/data $ cd /nas/backup
+pi@raspberrypi3:/nas/backup $ mount
+<...>
+/etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect)
+<...>
+/dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered)
+/dev/sdb1 on /nas/backup type ext4 (rw,relatime,data=ordered)
+```
+
+首先进入对应目录以确保 autofs 能够挂载设备。autofs 会跟踪文件系统的访问记录,并随时挂载所需要的设备。然后 `mount` 命令会显示这两个 USB 磁盘驱动器已经挂载到我们想要的位置了。
+
+设置 autofs 的过程容易出错,如果第一次尝试失败,请不要沮丧。你可以上网搜索有关教程。
+
+### 挂载网络存储
+
+现在你已经设置了基本的网络存储,我们希望将它挂载到远程 Linux 机器上。这里使用 NFS 文件系统,首先在树莓派上安装 NFS 服务器:
+
+```
+pi@raspberrypi:~ $ sudo apt install nfs-kernel-server
+```
+
+然后,需要告诉 NFS 服务器公开 `/nas/data` 目录,这是从树莓派外部可以访问的唯一设备(另一个用于备份)。编辑 `/etc/exports` 添加如下内容以允许所有可以访问 NAS 云盘的设备挂载存储:
+
+```
+/nas/data *(rw,sync,no_subtree_check)
+```
+
+更多有关限制挂载到单个设备的详细信息,请参阅 `man exports`。经过上面的配置,任何人都可以访问数据,只要他们可以访问 NFS 
所需的端口:`111` 和 `2049`。在我的配置中,路由器防火墙只允许外部通过 22 和 443 端口访问我的家庭网络,这样,只有在家庭网络中的设备才能访问 NFS 服务器。
+
+如果要在 Linux 计算机上挂载存储,运行以下命令:
+
+```
+you@desktop:~ $ sudo mkdir /nas/data
+you@desktop:~ $ sudo mount -t nfs <raspberry-pi-hostname-or-ip>:/nas/data /nas/data
+```
+
+同样,我建议使用 autofs 来挂载该网络设备。如果需要其他帮助,请参看 [如何使用 Autofs 来挂载 NFS 共享][6]。
+
+现在你可以在远程设备上通过 NFS 系统访问位于你树莓派 NAS 云盘上的数据了。在后面一篇文章中,我将介绍如何使用 `rsync` 自动将数据备份到第二个 USB 磁盘驱动器。你将会学到如何使用 `rsync` 创建增量备份,在进行日常备份的同时还能节省设备空间。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi
+
+作者:[Manuel Dewald][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[jrg](https://github.com/jrglinux)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ntlx
+[1]: https://nextcloud.com/
+[2]: https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
+[3]: https://www.raspbian.org/
+[4]: https://www.raspberrypi.org/documentation/installation/installing-images/
+[5]: https://www.raspberrypi.org/blog/raspbian-stretch/
+[6]: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares
+
+
diff --git a/published/20180803 5 Essential Tools for Linux Development.md b/published/20180803 5 Essential Tools for Linux Development.md
new file mode 100644
index 0000000000..0f2f26c18a
--- /dev/null
+++ b/published/20180803 5 Essential Tools for Linux Development.md
@@ -0,0 +1,125 @@
+Linux 开发的五大必备工具
+======
+> Linux 上的开发工具如此之多,以至于会担心找不到恰好适合你的。
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev-tools.png?itok=kkDNylRg)
+
+Linux 已经成为工作、娱乐和个人生活等多个领域的支柱,人们已经越来越离不开它。在 Linux 的帮助下,技术的变革速度超出了人们的想象,Linux 开发的速度也以指数规模增长。因此,越来越多的开发者也不断地加入开源和学习 Linux 开发的潮流当中。在这个过程之中,合适的工具是必不可少的,可喜的是,随着 Linux 的发展,大量适用于 Linux 的开发工具也不断成熟。甚至可以说,这样的工具已经多得有点惊人。
+
+为了选择更适合自己的开发工具,缩小选择范围是很必要的。但是这篇文章并不会要求你必须使用某个工具,而只是缩小到五个工具类别,然后对每个类别提供一个例子。然而,对于大多数类别,都会有不止一种选择。下面我们来看一下。
+ 
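在逐一了解下面各类工具之前,可以先检查系统中已经安装了哪些。下面是一个简单的示例脚本(仅作演示,工具列表可按需修改,假设在 POSIX shell 环境下运行):

```shell
# 逐个检查本文涉及的几个工具是否已经安装
# command -v 在找到可执行文件时输出其路径并返回 0
for tool in docker git bluefish geany meld; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: 已安装 ($(command -v "$tool"))"
  else
    echo "$tool: 未安装"
  fi
done
```

对于未安装的工具,再按照下文对应小节中的命令安装即可。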
+### 容器
+
+放眼于现实,现在已经是容器的时代了。容器既极其容易部署,又可以方便地构建开发环境。如果你针对的是特定平台的开发,将开发流程所需要的各种工具都创建到容器映像中是一种很好的方法,只要使用这一个容器映像,就能够快速启动大量运行所需服务的实例。
+
+一个使用容器的最佳范例是使用 [Docker][1],使用容器(或 Docker)有这些好处:
+
+ * 开发环境保持一致
+ * 部署后即可运行
+ * 易于跨平台部署
+ * Docker 映像适用于多种开发环境和语言
+ * 部署单个容器或容器集群都并不繁琐
+
+通过 [Docker Hub][2],几乎可以找到适用于任何平台、任何开发环境、任何服务器、任何服务的映像,几乎可以满足任何一种需求。使用 Docker Hub 中的映像,就相当于免除了搭建开发环境的步骤,可以直接开始开发应用程序、服务器、API 或服务。
+
+Docker 在所有 Linux 平台上都很容易安装,例如可以通过终端输入以下命令在 Ubuntu 上安装 Docker:
+
+```
+sudo apt-get install docker.io
+```
+
+Docker 安装完毕后,就可以从 Docker 仓库中拉取映像,然后开始开发和部署了(如下图)。
+
+![Docker images][4]
+
+*图 1: Docker 映像准备部署*
+
+### 版本控制工具
+
+如果你正在开发一个大型项目,又或者参与团队开发,版本控制工具是必不可少的,它可以用于记录代码变更、提交代码以及合并代码。如果没有这样的工具,项目几乎无法妥善管理。在 Linux 系统上,[Git][6] 和 [GitHub][7] 的易用性和流行程度是其它版本控制工具无法比拟的。如果你对 Git 和 GitHub 还不太熟悉,可以简单理解为 Git 是在本地计算机上安装的版本控制系统,而 GitHub 则是用于上传和管理项目的远程存储库。Git 可以安装在大多数的 Linux 发行版上。例如在基于 Debian 的系统上,只需要通过以下这一条简单的命令就可以安装:
+
+```
+sudo apt-get install git
+```
+
+安装完毕后,就可以使用 Git 来实施版本控制了(如下图)。
+
+![Git installed][9]
+
+*图 2:Git 已经安装,可以用于很多重要任务*
+
+Github 会要求用户创建一个帐户。用户可以免费使用 GitHub 来管理非商用项目,当然也可以使用 GitHub 的付费模式(更多相关信息,可以参阅[价格矩阵][10])。
+
+### 文本编辑器
+
+如果没有文本编辑器,在 Linux 上开发将会变得异常艰难。当然,文本编辑器之间孰优孰劣,具体还是要取决于开发者的需求。对于文本编辑器,有人可能会使用 vim、emacs 或 nano,也有人会使用带有 GUI 的编辑器。但由于重点在于开发,我们需要的是一种能够满足开发人员需求的工具。不过我首先要说,vim 对于开发人员来说确实是一个利器,但前提是要对 vim 非常熟悉,在这种前提下,vim 能够满足你的所有需求,甚至还能给你更好的体验。然而,对于一些开发者(尤其是刚开始接触 Linux 的新手)来说,这不仅难以帮助他们快速达成需求,甚至还会是一个需要逾越的障碍。考虑到这篇文章的目标是帮助 Linux 的新手(而不仅仅是为各种编辑器的死忠粉宣传他们拥护的编辑器),我更倾向于使用 GUI 编辑器。
+
+就文本编辑器而论,选择 [Bluefish][11] 一般不会有错。Bluefish 可以从大部分软件库中安装,它支持项目管理、远程文件多线程操作、搜索和替换、递归打开文件、侧边栏、集成 make/lint/weblint/xmllint、无限制撤销/重做、在线拼写检查、自动恢复、全屏编辑、语法高亮(如下图)、多种语言等等。
+
+![Bluefish][13]
+
+*图 3:运行在 Ubuntu 18.04 上的 Bluefish*
+
+### IDE
+
+集成开发环境Integrated Development Environment(IDE)是包含一整套全面的工具、可以实现一站式功能的开发环境。开发者除了可以使用 IDE 编写代码,还可以编写文档和构建软件。在 Linux 上也有很多适用的 IDE,其中 [Geany][14] 就包含在标准软件库中,它对用户非常友好,功能也相当强大。Geany 具有语法高亮、代码折叠、自动完成、构建代码片段、自动关闭 XML 和 HTML 
标签、调用提示、支持多种文件类型、符号列表、代码导航、构建编译,简单的项目管理和内置的插件系统等强大功能。 + +Geany 也能在系统上轻松安装,例如执行以下命令在基于 Debian 的 Linux 发行版上安装 Geany: + +``` +sudo apt-get install geany +``` + +安装完毕后,就可以快速上手这个易用且强大的 IDE 了(如下图)。 + +![Geany][16] + +*图 4:Geany 可以作为你的 IDE* + +### 文本比较工具 + +有时候会需要比较两个文件的内容来找到它们之间的不同之处,它们可能是同一文件的两个不同副本(有一个经过编译,而另一个没有)。这种情况下,你肯定不想要凭借肉眼来找出差异,而是想要使用像 [Meld][17] 这样的工具。 Meld 是针对开发者的文本比较和合并工具,可以使用 Meld 来发现两个文件之间的差异。虽然你可以使用命令行中的文本比较工具,但就效率而论,Meld 无疑更为优秀。 + +Meld 可以打开两个文件进行比较,并突出显示文件之间的差异之处。 Meld 还允许用户从两个文件的其中一方合并差异(下图显示了 Meld 同时打开两个文件)。 + +![Comparing two files][19] + +*图 5: 以简单差异的模式比较两个文件* + +Meld 也可以通过大多数标准的软件库安装,在基于 Debian 的系统上,执行以下命令就可以安装: + +``` +sudo apt-get install meld +``` + +### 高效地工作 + +以上提到的五个工具除了帮助你完成工作,而且有助于提高效率。尽管适用于 Linux 开发者的工具有很多,但对于以上几个类别,你最好分别使用一个对应的工具。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/8/5-essential-tools-linux-development + +作者:[Jack Wallen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/jlwallen +[1]:https://www.docker.com/ +[2]:https://hub.docker.com/ +[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_1.jpg?itok=V1Bsbkg9 "Docker images" +[6]:https://git-scm.com/ +[7]:https://github.com/ +[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_2.jpg?itok=YJjhe4O6 "Git installed" +[10]:https://github.com/pricing +[11]:http://bluefish.openoffice.nl/index.html +[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_3.jpg?itok=66A7Svme "Bluefish" +[14]:https://www.geany.org/ +[16]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_4.jpg?itok=jRcA-0ue "Geany" +[17]:http://meldmerge.org/ 
+[19]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_5.jpg?itok=eLkfM9oZ "Comparing two files" +[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux + diff --git a/translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md b/published/20180813 5 of the Best Linux Educational Software and Games for Kids.md similarity index 79% rename from translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md rename to published/20180813 5 of the Best Linux Educational Software and Games for Kids.md index 3a1981f0bc..029c70b675 100644 --- a/translated/tech/20180813 5 of the Best Linux Educational Software and Games for Kids.md +++ b/published/20180813 5 of the Best Linux Educational Software and Games for Kids.md @@ -1,4 +1,5 @@ -# 5 个给孩子的非常好的 Linux 教育软件和游戏 +5 个给孩子的非常好的 Linux 游戏和教育软件 +================= ![](https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-programs-for-kids-featured.jpg) @@ -8,39 +9,39 @@ Linux 是一个非常强大的操作系统,因此因特网上的大多数服 **相关阅读**:[使用一个 Linux 发行版的新手指南][1] -### 1. GCompris +### 1、GCompris -如果你正在为你的孩子寻找一款最佳的教育软件,[GCompris][2] 将是你的最好的开端。这款软件专门为 2 到 10 岁的孩子所设计。作为所有的 Linux 教育软件套装的巅峰之作,GCompris 为孩子们提供了大约 100 项活动。它囊括了你期望你的孩子学习的所有内容,从阅读材料到科学、地理、绘画、代数、测验、等等。 +如果你正在为你的孩子寻找一款最佳的教育软件,[GCompris][2] 将是你的最好的开端。这款软件专门为 2 到 10 岁的孩子所设计。作为所有的 Linux 教育软件套装的巅峰之作,GCompris 为孩子们提供了大约 100 项活动。它囊括了你期望你的孩子学习的所有内容,从阅读材料到科学、地理、绘画、代数、测验等等。 ![Linux educational software and games][3] -GCompris 甚至有一项活动可以帮你的孩子学习计算机的相关知识。如果你的孩子还很小,你希望他去学习字母、颜色、和形状,GCompris 也有这方面的相关内容。更重要的是,它也为孩子们准备了一些益智类游戏,比如国际象棋、井字棋、好记性、以及猜词游戏。GCompris 并不是一个仅在 Linux 上可运行的游戏。它也可以运行在 Windows 和 Android 上。 +GCompris 甚至有一项活动可以帮你的孩子学习计算机的相关知识。如果你的孩子还很小,你希望他去学习字母、颜色和形状,GCompris 也有这方面的相关内容。更重要的是,它也为孩子们准备了一些益智类游戏,比如国际象棋、井字棋、好记性、以及猜词游戏。GCompris 并不是一个仅在 Linux 上可运行的游戏。它也可以运行在 Windows 和 Android 上。 -### 2. 
TuxMath +### 2、TuxMath -很多学生认为数学是们非常难的课程。你可以通过 Linux 教育软件如 [TuxMath][4] 来让你的孩子了解数学技能,从而来改变这种看法。TuxMath 是为孩子开发的顶级的数学教育辅助游戏。在这个游戏中,你的角色是在如雨点般下降的数学问题中帮助 Linux 企鹅 Tux 来保护它的星球。 +很多学生认为数学是门非常难的课程。你可以通过 Linux 教育软件如 [TuxMath][4] 来让你的孩子了解数学技能,从而来改变这种看法。TuxMath 是为孩子开发的顶级的数学教育辅助游戏。在这个游戏中,你的角色是在如雨点般下降的数学问题中帮助 Linux 企鹅 Tux 来保护它的星球。 ![linux-educational-software-tuxmath-1][5] 在它们落下来毁坏 Tux 的星球之前,找到问题的答案,就可以使用你的激光去帮助 Tux 拯救它的星球。数字问题的难度每过一关就会提升一点。这个游戏非常适合孩子,因为它可以让孩子们去开动脑筋解决问题。而且还有助他们学好数学,以及帮助他们开发智力。 -### 3. Sugar on a Stick +### 3、Sugar on a Stick -[Sugar on a Stick][6] 是献给孩子们的学习程序 —— 一个广受好评的全新教学法。这个程序为你的孩子提供一个成熟的教学平台,在那里,他们可以收获创造、探索、发现和思考方面的技能。和 GCompris 一样,Sugar on a Stick 为孩子们带来了包括游戏和谜题在内的大量学习资源。 +[Sugar on a Stick][6] 是献给孩子们的学习程序 —— 一个广受好评的全新教学法。这个程序为你的孩子提供一个成熟的教学平台,在那里,他们可以收获创造、探索、发现和思考方面的技能。和 GCompris 一样,Sugar on a Stick 为孩子们带来了包括游戏和谜题在内的大量学习资源。 ![linux-educational-software-sugar-on-a-stick][7] 关于 Sugar on a Stick 最大的一个好处是你可以将它配置在一个 U 盘上。你只要有一台 X86 的 PC,插入那个 U 盘,然后就可以从 U 盘引导这个发行版。Sugar on a Stick 是由 Sugar 实验室提供的一个项目 —— 这个实验室是一个由志愿者运作的非盈利组织。 -### 4. KDE Edu Suite +### 4、KDE Edu Suite -[KDE Edu Suite][8] 是一个用途与众不同的软件包。带来了大量不同领域的应用程序,KDE 社区已经证实,它不仅是一系列成年人授权的问题;它还关心年青一代如何适应他们周围的一切。它囊括了一系列孩子们使用的应用程序,从科学到数学、地理等等。 +[KDE Edu Suite][8] 是一个用途与众不同的软件包。带来了大量不同领域的应用程序,KDE 社区已经证实,它不仅可以给成年人授权;它还关心年青一代如何适应他们周围的一切。它囊括了一系列孩子们使用的应用程序,从科学到数学、地理等等。 ![linux-educational-software-kde-1][9] KDE Edu 套件根据长大后所必需的知识为基础,既能够用作学校的教学软件,也能够作为孩子们的学习 APP。它提供了大量的可免费下载的软件包。KDE Edu 套件在主流的 GNU/Linux 发行版都能安装。 -### 5. 
Tux Paint +### 5、Tux Paint ![linux-educational-software-tux-paint-2][10] @@ -61,20 +62,20 @@ via: https://www.maketecheasier.com/5-best-linux-software-packages-for-kids/ 作者:[Kenneth Kimari][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://www.maketecheasier.com/author/kennkimari/ -[1]: https://www.maketecheasier.com/beginner-guide-to-using-linux-distro/ "The Beginner’s Guide to Using a Linux Distro" +[1]: https://www.maketecheasier.com/beginner-guide-to-using-linux-distro/ [2]: http://www.gcompris.net/downloads-en.html -[3]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-gcompris.jpg "Linux educational software and games" +[3]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-gcompris.jpg [4]: https://tuxmath.en.uptodown.com/ubuntu -[5]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tuxmath-1.jpg "linux-educational-software-tuxmath-1" +[5]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tuxmath-1.jpg [6]: http://wiki.sugarlabs.org/go/Sugar_on_a_Stick/Downloads -[7]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-sugar-on-a-stick.png "linux-educational-software-sugar-on-a-stick" +[7]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-sugar-on-a-stick.png [8]: https://edu.kde.org/ -[9]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-kde-1.jpg "linux-educational-software-kde-1" -[10]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tux-paint-2.jpg "linux-educational-software-tux-paint-2" +[9]: https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-kde-1.jpg +[10]: 
https://www.maketecheasier.com/assets/uploads/2018/07/Linux-educational-software-tux-paint-2.jpg [11]: http://www.tuxpaint.org/ -[12]: http://edubuntu.org/ \ No newline at end of file +[12]: http://edubuntu.org/ diff --git a/translated/tech/20180814 Automating backups on a Raspberry Pi NAS.md b/published/20180814 Automating backups on a Raspberry Pi NAS.md similarity index 70% rename from translated/tech/20180814 Automating backups on a Raspberry Pi NAS.md rename to published/20180814 Automating backups on a Raspberry Pi NAS.md index 111b508245..cbb508ae8f 100644 --- a/translated/tech/20180814 Automating backups on a Raspberry Pi NAS.md +++ b/published/20180814 Automating backups on a Raspberry Pi NAS.md @@ -1,19 +1,16 @@ -Part-II 树莓派自建 NAS 云盘之数据自动备份 +树莓派自建 NAS 云盘之——数据自动备份 ====== +> 把你的树莓派变成数据的安全之所。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) -在《树莓派自建 NAS 云盘》系列的 [第一篇][1] 文章中,我们讨论了建立 NAS 的一些基本步骤,添加了两块 1TB 的存储硬盘驱动(一个用于数据存储,一个用于数据备份),并且通过 网络文件系统(NFS)将数据存储盘挂载到远程终端上。本文是此系列的第二篇文章,我们将探讨数据自动备份。数据自动备份保证了数据的安全,为硬件损坏后的数据恢复提供便利以及减少了文件误操作带来的不必要的麻烦。 - - +在《树莓派自建 NAS 云盘》系列的 [第一篇][1] 文章中,我们讨论了建立 NAS 的一些基本步骤,添加了两块 1TB 的存储硬盘驱动(一个用于数据存储,一个用于数据备份),并且通过网络文件系统(NFS)将数据存储盘挂载到远程终端上。本文是此系列的第二篇文章,我们将探讨数据自动备份。数据自动备份保证了数据的安全,为硬件损坏后的数据恢复提供便利以及减少了文件误操作带来的不必要的麻烦。 ![](https://opensource.com/sites/default/files/uploads/nas_part2.png) - - ### 备份策略 -我们就从为小型 NAS 构想一个备份策略着手开始吧。我建议每天有时间节点有计划的去备份数据,以防止干扰到我们正常的访问 NAS,比如备份时间点避开正在访问 NAS 并写入文件的时间点。举个例子,你可以每天凌晨 2 点去进行数据备份。 +我们就从为小型 NAS 构想一个备份策略着手开始吧。我建议每天有时间节点、有计划的去备份数据,以防止干扰到我们正常的访问 NAS,比如备份时间点避开正在访问 NAS 并写入文件的时间点。举个例子,你可以每天凌晨 2 点去进行数据备份。 另外,你还得决定每天的备份需要被保留的时间长短,因为如果没有时间限制,存储空间很快就会被用完。一般每天的备份保留一周便可以,如果数据出了问题,你便可以很方便的从备份中恢复出来原数据。但是如果需要恢复数据到更久之前怎么办?可以将每周一的备份文件保留一个月、每个月的备份保留更长时间。让我们把每月的备份保留一年时间,每一年的备份保留更长时间、例如五年。 @@ -24,27 +21,24 @@ Part-II 树莓派自建 NAS 云盘之数据自动备份 * 每年 12 个月备份 * 每五年 5 个年备份 - 你应该还记得,我们搭建的备份盘和数据盘大小相同(每个 1 TB)。如何将不止 10 个 1TB 数据的备份从数据盘存放到只有 1TB 
大小的备份盘呢?如果你创建的是完整备份,这显然不可能。因此,你需要创建增量备份,它是每一份备份都基于上一份备份数据而创建的。增量备份方式不会每隔一天就成倍的去占用存储空间,它每天只会增加一点占用空间。 以下是我的情况:我的 NAS 自 2016 年 8 月开始运行,备份盘上有 20 个备份。目前,我在数据盘上存储了 406GB 的文件。我的备份盘用了 726GB。当然,备份盘空间使用率在很大程度上取决于数据的更改频率,但正如你所看到的,增量备份不会占用 20 个完整备份所需的空间。然而,随着时间的推移,1TB 空间也可能不足以进行备份。一旦数据增长接近 1TB 限制(或任何备份盘容量),应该选择更大的备份盘空间并将数据移动转移过去。 ### 利用 rsync 进行数据备份 -利用 rsync 命令行工具可以生成完整备份。 +利用 `rsync` 命令行工具可以生成完整备份。 ``` pi@raspberrypi:~ $ rsync -a /nas/data/ /nas/backup/2018-08-01 - ``` -这段命令将挂载在 /nas/data/ 目录下的数据盘中的数据进行了完整的复制备份。备份文件保存在 /nas/backup/2018-08-01 目录下。`-a` 参数是以归档模式进行备份,这将会备份所有的元数据,例如文件的修改日期、权限、拥有者以及软连接文件。 +这段命令将挂载在 `/nas/data/` 目录下的数据盘中的数据进行了完整的复制备份。备份文件保存在 `/nas/backup/2018-08-01` 目录下。`-a` 参数是以归档模式进行备份,这将会备份所有的元数据,例如文件的修改日期、权限、拥有者以及软连接文件。 现在,你已经在 8 月 1 日创建了完整的初始备份,你将在 8 月 2 日创建第一个增量备份。 ``` pi@raspberrypi:~ $ rsync -a --link-dest /nas/backup/2018-08-01/ /nas/data/ /nas/backup/2018-08-02 - ``` 上面这行代码又创建了一个关于 `/nas/data` 目录中数据的备份。备份路径是 `/nas/backup/2018-08-02`。这里的参数 `--link-dest` 指定了一个备份文件所在的路径。这样,这次备份会与 `/nas/backup/2018-08-01` 的备份进行比对,只备份已经修改过的文件,未做修改的文件将不会被复制,而是创建一个到上一个备份文件中它们的硬链接。 @@ -53,142 +47,81 @@ pi@raspberrypi:~ $ rsync -a --link-dest /nas/backup/2018-08-01/ /nas/data/ /nas/ ![](https://opensource.com/sites/default/files/uploads/backup_flow.png) -左侧框是在进行了第二次备份后的原数据状态。中间的盒子是昨天的备份。昨天的备份中只有图片 `file1.jpg` 并没有 `file2.txt` 。右侧的框反映了今天的增量备份。增量备份命令创建昨天不存在的 `file2.txt`。由于 `file1.jpg` 自昨天以来没有被修改,所以今天创建了一个硬链接,它不会额外占用磁盘上的空间。 +左侧框是在进行了第二次备份后的原数据状态。中间的方块是昨天的备份。昨天的备份中只有图片 `file1.jpg` 并没有 `file2.txt` 。右侧的框反映了今天的增量备份。增量备份命令创建昨天不存在的 `file2.txt`。由于 `file1.jpg` 自昨天以来没有被修改,所以今天创建了一个硬链接,它不会额外占用磁盘上的空间。 ### 自动化备份 -你肯定也不想每天凌晨去输入命令进行数据备份吧。你可以创建一个任务定时去调用下面的脚本让它自动化备份 +你肯定也不想每天凌晨去输入命令进行数据备份吧。你可以创建一个任务定时去调用下面的脚本让它自动化备份。 ``` #!/bin/bash - - TODAY=$(date +%Y-%m-%d) - DATADIR=/nas/data/ - BACKUPDIR=/nas/backup/ - SCRIPTDIR=/nas/data/backup_scripts - LASTDAYPATH=${BACKUPDIR}/$(ls ${BACKUPDIR} | tail -n 1) - TODAYPATH=${BACKUPDIR}/${TODAY} - if [[ ! 
-e ${TODAYPATH} ]]; then - -        mkdir -p ${TODAYPATH} - + mkdir -p ${TODAYPATH} fi - - rsync -a --link-dest ${LASTDAYPATH} ${DATADIR} ${TODAYPATH} $@ - - ${SCRIPTDIR}/deleteOldBackups.sh - ``` -第一段代码指定了数据路径、备份路劲、脚本路径以及昨天和今天的备份路径。第二段代码调用 rsync 命令。最后一段代码执行 `deleteOldBackups.sh` 脚本,它会清除一些过期的没有必要的备份数据。如果不想频繁的调用 `deleteOldBackups.sh`,你也可以手动去执行它。 +第一段代码指定了数据路径、备份路径、脚本路径以及昨天和今天的备份路径。第二段代码调用 `rsync` 命令。最后一段代码执行 `deleteOldBackups.sh` 脚本,它会清除一些过期的没有必要的备份数据。如果不想频繁的调用 `deleteOldBackups.sh`,你也可以手动去执行它。 下面是今天讨论的备份策略的一个简单完整的示例脚本。 ``` #!/bin/bash - BACKUPDIR=/nas/backup/ - - function listYearlyBackups() { - -        for i in 0 1 2 3 4 5 - -                do ls ${BACKUPDIR} | egrep "$(date +%Y -d "${i} year ago")-[0-9]{2}-[0-9]{2}" | sort -u | head -n 1 - -        done - + for i in 0 1 2 3 4 5 + do ls ${BACKUPDIR} | egrep "$(date +%Y -d "${i} year ago")-[0-9]{2}-[0-9]{2}" | sort -u | head -n 1 + done } - - function listMonthlyBackups() { - -        for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 - -                do ls ${BACKUPDIR} | egrep "$(date +%Y-%m -d "${i} month ago")-[0-9]{2}" | sort -u | head -n 1 - -        done - + for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 + do ls ${BACKUPDIR} | egrep "$(date +%Y-%m -d "${i} month ago")-[0-9]{2}" | sort -u | head -n 1 + done } - - function listWeeklyBackups() { - -        for i in 0 1 2 3 4 - -                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "last monday -${i} weeks")" - -        done - + for i in 0 1 2 3 4 + do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "last monday -${i} weeks")" + done } - - function listDailyBackups() { - -        for i in 0 1 2 3 4 5 6 - -                do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "-${i} day")" - -        done - + for i in 0 1 2 3 4 5 6 + do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "-${i} day")" + done } - - function getAllBackups() { - -        listYearlyBackups - -        listMonthlyBackups - -        listWeeklyBackups - -        listDailyBackups - + listYearlyBackups + 
listMonthlyBackups + listWeeklyBackups + listDailyBackups } - - function listUniqueBackups() { - -        getAllBackups | sort -u - + getAllBackups | sort -u } - - function listBackupsToDelete() { - -        ls ${BACKUPDIR} | grep -v -e "$(echo -n $(listUniqueBackups) |sed "s/ /\\\|/g")" - + ls ${BACKUPDIR} | grep -v -e "$(echo -n $(listUniqueBackups) |sed "s/ /\\\|/g")" } - - cd ${BACKUPDIR} - listBackupsToDelete | while read file_to_delete; do - -        rm -rf ${file_to_delete} - + rm -rf ${file_to_delete} done - ``` 这段脚本会首先根据你的备份策略列出所有需要保存的备份文件,然后它会删除那些再也不需要了的备份目录。 @@ -197,7 +130,6 @@ done ``` 0 2 * * * /nas/data/backup_scripts/daily.sh - ``` 有关创建定时任务请参考 [cron 创建定时任务][2]。 @@ -218,12 +150,12 @@ via: https://opensource.com/article/18/8/automate-backups-raspberry-pi 作者:[Manuel Dewald][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[jrg](https://github.com/jrglinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://opensource.com/users/ntlx -[1]: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi +[1]: https://linux.cn/article-10104-1.html [2]: https://opensource.com/article/17/11/how-use-cron-linux [3]: https://nextcloud.com/ diff --git a/published/20180815 How to Create M3U Playlists in Linux [Quick Tip].md b/published/20180815 How to Create M3U Playlists in Linux [Quick Tip].md new file mode 100644 index 0000000000..1ce5ebde67 --- /dev/null +++ b/published/20180815 How to Create M3U Playlists in Linux [Quick Tip].md @@ -0,0 +1,84 @@ +Linux 下如何创建 M3U 播放列表 +====== + +> 简介:关于如何在Linux终端中根据乱序文件创建M3U播放列表实现循序播放的小建议。 + +![Create M3U playlists in Linux Terminal][1] + +我是外国电视连续剧的粉丝,这些连续剧不太容易从 DVD 或像 [Netflix][2] 这样的流媒体上获得。好在,您可以在 YouTube 上找到一些内容并[从 YouTube 下载][3]。 + +现在出现了一个问题。你的文件可能不是按顺序存储的。在 GNU/Linux中,文件不是按数字顺序自然排序的,因此我必须创建 .m3u 播放列表,以便 [MPV 视频播放器][4]可以按顺序播放视频而不是乱顺进行播放。 + +同样的,有时候表示第几集的数字是在文件名中间或结尾的,像这样 “My Web Series 
S01E01.mkv”。这里的剧集信息位于文件名的中间,“S01E01”告诉我们人类这是第一集,后面还有其它剧集。
+
+因此我要做的事情就是在视频目录中创建一个 .m3u 播放列表,并告诉 MPV 播放这个 .m3u 播放列表,MPV 自然会按顺序播放这些视频。
+
+### 什么是 M3U 文件?
+
+[M3U][5] 基本上就是个按特定顺序包含文件名的文本文件。当类似 MPV 或 VLC 这样的播放器打开 M3U 文件时,它会尝试按给定的顺序播放指定文件。
+
+### 创建 M3U 来按顺序播放音频/视频文件
+
+就我而言,我使用了下面命令:
+
+```
+$/home/shirish/Videos/web-series-video/$ ls -1v |grep .mkv > /tmp/1.m3u && mv /tmp/1.m3u .
+```
+
+让我们拆分一下看看每个部分表示什么意思:
+
+`ls -1v` = 这就是用普通的 `ls` 来列出目录中的内容。其中 `-1` 表示每行显示一个文件。而 `-v` 表示根据文本中的数字(版本)进行自然排序。
+
+`| grep .mkv` = 基本上就是从 `ls` 的输出中筛选出文件名带有 `.mkv` 的那些文件。它也可以是 `.mp4` 或其他任何你想要的媒体文件格式。
+
+通过在控制台上运行命令来进行试运行通常是个好主意:
+
+```
+ls -1v |grep .mkv
+My Web Series S01E01 [Episode 1 Name] Multi 480p WEBRip x264 - xRG.mkv
+My Web Series S01E02 [Episode 2 Name] Multi 480p WEBRip x264 - xRG.mkv
+My Web Series S01E03 [Episode 3 Name] Multi 480p WEBRip x264 - xRG.mkv
+My Web Series S01E04 [Episode 4 Name] Multi 480p WEBRip x264 - xRG.mkv
+My Web Series S01E05 [Episode 5 Name] Multi 480p WEBRip x264 - xRG.mkv
+My Web Series S01E06 [Episode 6 Name] Multi 480p WEBRip x264 - xRG.mkv
+My Web Series S01E07 [Episode 7 Name] Multi 480p WEBRip x264 - xRG.mkv
+My Web Series S01E08 [Episode 8 Name] Multi 480p WEBRip x264 - xRG.mkv
+```
+
+结果显示我要做的是正确的。现在下一步就是把这份输出保存为 `.m3u` 播放列表:
+
+```
+ls -1v |grep .mkv > /tmp/web_playlist.m3u && mv /tmp/web_playlist.m3u .
+``` + +这就在当前目录中创建了 .m3u 文件。这个 .m3u 播放列表只不过是一个 .txt 文件,其内容与上面相同,扩展名为 .m3u 而已。 你也可以手动编辑它,并按照想要的顺序添加确切的文件名。 + +之后你只需要这样做: + +``` +mpv web_playlist.m3u +``` + +一般来说,MPV 和播放列表的好处在于你不需要一次性全部看完。 您可以一次看任意长时间,然后在下一次查看其余部分。 + +我希望写一些有关 MPV 的文章,以及如何制作在媒体文件中嵌入字幕的 mkv 文件,但这是将来的事情了。 + +注意: 这是开源软件,不鼓励盗版。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/create-m3u-playlist-linux/ + +作者:[Shirsh][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[lujun9972](https://github.com/lujun9972) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/shirish/ +[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/Create-M3U-Playlists.jpeg +[2]:https://itsfoss.com/netflix-open-source-ai/ +[3]:https://itsfoss.com/download-youtube-linux/ +[4]:https://itsfoss.com/mpv-video-player/ +[5]:https://en.wikipedia.org/wiki/M3U diff --git a/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md b/published/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md similarity index 57% rename from translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md rename to published/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md index a72b4cdd8d..b3d262d94c 100644 --- a/translated/tech/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md +++ b/published/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md @@ -1,17 +1,18 @@ -使用 browser-mpris2(Chrome 扩展)将 YouTube 播放器控件添加到 Linux 桌面 +使用 Chrome 扩展将 YouTube 播放器控件添加到 Linux 桌面 ====== -一个我怀念的 Unity 功能(虽然只使用了一小段时间)是在 Web 浏览器中访问 YouTube 等网站时自动获取 Ubuntu 
声音指示器中的播放器控件,因此你可以直接从顶部栏暂停或停止视频,以及浏览视频/歌曲信息和预览。 -这个 Unity 功能已经消失很久了,但我正在为 Gnome Shell 寻找类似的东西,然后我遇到了 **[browser-mpris2][1],这是一个为 Google Chrome/Chromium 实现 MPRIS v2 接口的扩展,目前只支持 YouTube**,我想可能会有一些 Linux Uprising 的读者会喜欢这个。 +一个我怀念的 Unity 功能(虽然只使用了一小段时间)是在 Web 浏览器中访问 YouTube 等网站时在 Ubuntu 声音指示器中自动出现播放器控件,因此你可以直接从顶部栏暂停或停止视频,以及浏览视频/歌曲信息和预览。 -**该扩展还适用于 Opera 和 Vivaldi 等基于 Chromium 的 Web 浏览器。** -** -** **browser-mpris2 也支持 Firefox,但因为通过 about:debugging 加载扩展是临时的,而这是 browser-mpris2 所需要的,因此本文不包括 Firefox 的指导。开发人员[打算][2]将来将扩展提交到 Firefox 插件网站上。** +这个 Unity 功能已经消失很久了,但我正在为 Gnome Shell 寻找类似的东西,然后我遇到了 [browser-mpris2][1],这是一个为 Google Chrome/Chromium 实现 MPRIS v2 接口的扩展,目前只支持 YouTube,我想可能会有一些读者会喜欢这个。 -**使用此 Chrome 扩展,你可以在支持 MPRIS2 的 applets 中获得 YouTube 媒体播放器控件(播放、暂停、停止和查找 -)**。例如,如果你使用 Gnome Shell,你可将 YouTube 媒体播放器控件作为永久通知,或者你可以使用 Media Player Indicator 之类的扩展来实现此目的。在 Cinnamon /Linux Mint with Cinnamon 中,它出现在声音 Applet 中。 +该扩展还适用于 Opera 和 Vivaldi 等基于 Chromium 的 Web 浏览器。 -**我无法在 Unity 上用它**,我不知道为什么。我没有在不同桌面环境(KDE、Xfce、MATE 等)中使用其他支持 MPRIS2 的 applet 尝试此扩展。如果你尝试过,请告诉我们它是否适用于你的桌面环境/支持 MPRIS2 的 applet。 +browser-mpris2 也支持 Firefox,但因为通过 `about:debugging` 加载扩展是临时的,而这是 browser-mpris2 所需要的,因此本文不包括 Firefox 的指导。开发人员[打算][2]将来将扩展提交到 Firefox 插件网站上。 + +使用此 Chrome 扩展,你可以在支持 MPRIS2 的 applets 中获得 YouTube 媒体播放器控件(播放、暂停、停止和查找 +)。例如,如果你使用 Gnome Shell,你可将 YouTube 媒体播放器控件作为永久显示的控件,或者你可以使用 Media Player Indicator 之类的扩展来实现此目的。在 Cinnamon /Linux Mint with Cinnamon 中,它出现在声音 Applet 中。 + +我无法在 Unity 上用它,我不知道为什么。我没有在不同桌面环境(KDE、Xfce、MATE 等)中使用其他支持 MPRIS2 的 applet 尝试此扩展。如果你尝试过,请告诉我们它是否适用于你的桌面环境/支持 MPRIS2 的 applet。 以下是在使用 Gnome Shell 的 Ubuntu 18.04 并装有 Chromium 浏览器的[媒体播放器指示器][3]的截图,其中显示了有关当前正在播放的 YouTube 视频的信息及其控件(播放/暂停,停止和查找): @@ -19,42 +20,41 @@ 在 Linux Mint 19 Cinnamon 中使用其默认声音 applet 和 Chromium 浏览器的截图: - ![](https://2.bp.blogspot.com/-I2DuYetv7eQ/W3VtUUcg26I/AAAAAAAABXc/Tv-RemkyO60k6CC_mYUxewG-KfVgpFefACLcBGAs/s1600/browser-mpris2-cinnamon-linux-mint.png) ### 如何为 Google Chrom/Chromium安装 browser-mpris2 -**1\. 
如果你还没有安装 Git 就安装它** +1、 如果你还没有安装 Git 就安装它 在 Debian/Ubuntu/Linux Mint 中,使用此命令安装 git: + ``` sudo apt install git - ``` -**2\. 下载并安装 [browser-mpris2][1] 所需文件。** +2、 下载并安装 [browser-mpris2][1] 所需文件。 + +下面的命令克隆了 browser-mpris2 的 Git 仓库并将 chrome-mpris2 安装到 `/usr/local/bin/`(在一个你可以保存 browser-mpris2 文件夹的地方运行 `git clone ...` 命令,由于它会被 Chrome/Chromium 使用,你不能删除它): -下面的命令克隆了 browser-mpris2 的 Git 仓库并将 chrome-mpris2 安装到 `/usr/local/bin/`(在一个你可以保存 browser-mpris2 文件夹的地方运行 “git clone ...” 命令,由于它会被 Chrome/Chromium 使用,你不能删除它): ``` git clone https://github.com/otommod/browser-mpris2 sudo install browser-mpris2/native/chrome-mpris2 /usr/local/bin/ - ``` -**3\. 在基于 Chrome/Chromium 的 Web 浏览器中加载此扩展。** +3、 在基于 Chrome/Chromium 的 Web 浏览器中加载此扩展。 ![](https://3.bp.blogspot.com/-yEoNFj2wAXM/W3Vvewa979I/AAAAAAAABXo/dmltlNZk3J4sVa5jQenFFrT28ecklY92QCLcBGAs/s640/browser-mpris2-chrome-developer-load-unpacked.png) -打开 Goog​​le Chrome、Chromium、Opera 或 Vivaldi 浏览器,进入 Extensions 页面(在 URL 栏中输入 `chrome://extensions`),在屏幕右上角切换到`开发者模式`。然后选择 `Load Unpacked` 并选择 chrome-mpris2 目录(确保没有选择子文件夹)。 +打开 Goog​​le Chrome、Chromium、Opera 或 Vivaldi 浏览器,进入 Extensions 页面(在 URL 栏中输入 `chrome://extensions`),在屏幕右上角切换到“开发者模式”。然后选择 “Load Unpacked” 并选择 chrome-mpris2 目录(确保没有选择子文件夹)。 复制扩展 ID 并保存它,因为你以后需要它(它类似于这样:`emngjajgcmeiligomkgpngljimglhhii`,但它会与你的不一样,因此确保使用你计算机中的 ID!)。 -**4\. 
运行 **`install-chrome.py`**(在 `browser-mpris2/native` 文件夹中),指定扩展 id 和 chrome-mpris2 路径。 +4、 运行 `install-chrome.py`(在 `browser-mpris2/native` 文件夹中),指定扩展 id 和 chrome-mpris2 路径。 在终端中使用此命令(将 `REPLACE-THIS-WITH-EXTENSION-ID` 替换为上一步中 `chrome://extensions` 下显示的 browser-mpris2 扩展 ID)安装此扩展: + ``` browser-mpris2/native/install-chrome.py REPLACE-THIS-WITH-EXTENSION-ID /usr/local/bin/chrome-mpris2 - ``` 你只需要运行此命令一次,无需将其添加到启动或其他类似的地方。你在 Google Chrome 或 Chromium 浏览器中播放的任何 YouTube 视频都应显示在你正在使用的任何 MPRISv2 applet 中。你无需重启 Web 浏览器。 @@ -66,7 +66,7 @@ via: https://www.linuxuprising.com/2018/08/add-youtube-player-controls-to-your.h 作者:[Logix][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180821 A checklist for submitting your first Linux kernel patch.md b/published/20180821 A checklist for submitting your first Linux kernel patch.md new file mode 100644 index 0000000000..92f24808c7 --- /dev/null +++ b/published/20180821 A checklist for submitting your first Linux kernel patch.md @@ -0,0 +1,153 @@ +如何提交你的第一个 Linux 内核补丁 +====== +> 学习如何做出你的首个 Linux 内核贡献,以及在开始之前你应该知道什么。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22) + +Linux 内核是最大且变动最快的开源项目之一,它由大约 53,600 个文件和近 2,000 万行代码组成。在全世界范围内超过 15,600 位程序员为它贡献代码,Linux 内核项目的维护者使用了如下的协作模型。 + +![](https://opensource.com/sites/default/files/karnik_figure1.png) + +本文中,为了便于在 Linux 内核中提交你的第一个贡献,我将为你提供一个必需的快速检查列表,以告诉你在提交补丁时,应该去查看和了解的内容。对于你贡献的第一个补丁的提交流程方面的更多内容,请阅读 [KernelNewbies 的第一个内核补丁教程][1]。 + +### 为内核作贡献 + +**第 1 步:准备你的系统。** + +本文开始之前,假设你的系统已经具备了如下的工具: + ++ 文本编辑器 ++ Email 客户端 ++ 版本控制系统(例如:git) + +**第 2 步:下载 Linux 内核代码仓库。** + +``` +git clone -b staging-testing +git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git +``` + +复制你的当前配置: + +``` 
+cp /boot/config-`uname -r`* .config +``` + +**第 3 步:构建/安装你的内核。** + +``` +make -jX +sudo make modules_install install +``` + +**第 4 步:创建一个分支并切换到该分支。** + +``` +git checkout -b first-patch +``` + +**第 5 步:更新你的内核并指向到最新的代码。** + +``` +git fetch origin +git rebase origin/staging-testing +``` + +**第 6 步:在最新的代码库上产生一个变更。** + +使用 `make` 命令重新编译,确保你的变更没有错误。 + +**第 7 步:提交你的变更并创建一个补丁。** + +``` +git add +git commit -s -v +git format-patch -o /tmp/ HEAD^ +``` + +![](https://opensource.com/sites/default/files/karnik_figure2.png) + +主题是由冒号分隔的文件名组成,跟着是使用祈使语态来描述补丁做了什么。空行之后是强制的 `signed off` 标记,最后是你的补丁的 `diff` 信息。 + +下面是另外一个简单补丁的示例: + +![](https://opensource.com/sites/default/files/karnik_figure3.png) + +接下来,[从命令行使用邮件][2](在本例子中使用的是 Mutt)发送这个补丁: + +``` +mutt -H /tmp/0001- +``` + +使用 [get_maintainer.pl 脚本][11],去了解你的补丁应该发送给哪位维护者的列表。 + +### 提交你的第一个补丁之前,你应该知道的事情 + +* [Greg Kroah-Hartman](3) 的 [staging tree][4] 是提交你的 [第一个补丁][1] 的最好的地方,因为他更容易接受新贡献者的补丁。在你熟悉了补丁发送流程以后,你就可以去发送复杂度更高的子系统专用的补丁。 +* 你也可以从纠正代码中的编码风格开始。想学习更多关于这方面的内容,请阅读 [Linux 内核编码风格文档][5]。 +* [checkpatch.pl][6] 脚本可以帮你检测编码风格方面的错误。例如,运行如下的命令:`perl scripts/checkpatch.pl -f drivers/staging/android/* | less` +* 你可以去补全开发者留下的 TODO 注释中未完成的内容:`find drivers/staging -name TODO` +* [Coccinelle][7] 是一个模式匹配的有用工具。 +* 阅读 [归档的内核邮件][8]。 +* 为找到灵感,你可以去遍历 [linux.git 日志][9]去查看以前的作者的提交内容。 +* 注意:不要与你的补丁的审核者在邮件顶部交流!下面就是一个这样的例子: + + **错误的方式:** + + ``` + Chris, + Yes let’s schedule the meeting tomorrow, on the second floor. + + > On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote: + > Hey John, I had some questions: + > 1. Do you want to schedule the meeting tomorrow? + > 2. On which floor in the office? + > 3. What time is suitable to you? +``` + (注意那最后一个问题,在回复中无意中落下了。) + + **正确的方式:** + + ``` + Chris, + See my answers below... + + > On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote: + > Hey John, I had some questions: + > 1. Do you want to schedule the meeting tomorrow? + Yes tomorrow is fine. + > 2. On which floor in the office? + Let's keep it on the second floor. + > 3. 
What time is suitable to you? + 09:00 am would be alright. +``` + (所有问题全部回复,并且这种方式还保存了阅读的时间。) +* [Eudyptula challenge][10] 是学习内核基础知识的非常好的方式。 + +想学习更多内容,阅读 [KernelNewbies 的第一个内核补丁教程][1]。之后如果你还有任何问题,可以在 [kernelnewbies 邮件列表][12] 或者 [#kernelnewbies IRC channel][13] 中提问。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/8/first-linux-kernel-patch + +作者:[Sayli Karnik][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/sayli +[1]:https://kernelnewbies.org/FirstKernelPatch +[2]:https://opensource.com/life/15/8/top-4-open-source-command-line-email-clients +[3]:https://twitter.com/gregkh +[4]:https://www.kernel.org/doc/html/v4.15/process/2.Process.html +[5]:https://www.kernel.org/doc/html/v4.10/process/coding-style.html +[6]:https://github.com/torvalds/linux/blob/master/scripts/checkpatch.pl +[7]:http://coccinelle.lip6.fr/ +[8]:linux-kernel@vger.kernel.org +[9]:https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/ +[10]:http://eudyptula-challenge.org/ +[11]:https://github.com/torvalds/linux/blob/master/scripts/get_maintainer.pl +[12]:https://kernelnewbies.org/MailingList +[13]:https://kernelnewbies.org/IRC diff --git a/published/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md b/published/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md new file mode 100644 index 0000000000..84c37055bb --- /dev/null +++ b/published/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md @@ -0,0 +1,122 @@ +在 Linux 中安全且轻松地管理 Cron 定时任务 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/Crontab-UI-720x340.jpg) + +在 Linux 中遇到计划任务的时候,你首先会想到的大概就是 Cron 定时任务了。Cron 定时任务能帮助你在类 Unix 操作系统中计划性地执行命令或者任务。也可以参考一下我们之前的一篇《[关于 Cron 定时任务的新手指导][1]》。对于有一定 Linux 经验的人来说,设置 
Cron 定时任务不是什么难事,但对于新手来说就不一定了,他们在编辑 crontab 文件的时候不知不觉中犯的一些小错误,也有可能把整个 Cron 定时任务搞挂了。如果你在处理 Cron 定时任务的时候为了以防万一,可以尝试使用 **Crontab UI**,它是一个可以在类 Unix 操作系统上安全轻松管理 Cron 定时任务的 Web 页面工具。 + +Crontab UI 是使用 NodeJS 编写的自由开源软件。有了 Crontab UI,你在创建、删除和修改 Cron 定时任务的时候就不需要手工编辑 Crontab 文件了,只需要打开浏览器稍微操作一下,就能完成上面这些工作。你可以用 Crontab UI 轻松创建、编辑、暂停、删除、备份 Cron 定时任务,甚至还可以简单地做到导入、导出、部署其它机器上的 Cron 定时任务,它还支持错误日志、邮件发送和钩子。 + +### 安装 Crontab UI + +只需要一条命令就可以安装好 Crontab UI,但前提是已经安装好 NPM。如果还没有安装 NPM,可以参考《[如何在 Linux 上安装 NodeJS][2]》这篇文章。 + +执行这一条命令来安装 Crontab UI。 + +``` +$ npm install -g crontab-ui +``` + +就是这么简单,下面继续来看看在 Crontab UI 上如何管理 Cron 定时任务。 + +### 在 Linux 上安全轻松管理 Cron 定时任务 + +执行这一条命令启动 Crontab UI: + +``` +$ crontab-ui +``` + +你会看到这样的输出: + +``` +Node version: 10.8.0 +Crontab UI is running at http://127.0.0.1:8000 +``` + +首先在你的防火墙和路由器上放开 8000 端口,然后打开浏览器访问 ``。 + +注意,默认只有在本地才能访问到 Crontab UI 的控制台页面。但如果你想让 Crontab UI 使用系统的 IP 地址和自定义端口,也就是想让其它机器也访问到本地的 Crontab UI,你需要使用以下这个命令: + +``` +$ HOST=0.0.0.0 PORT=9000 crontab-ui +Node version: 10.8.0 +Crontab UI is running at http://0.0.0.0:9000 +``` + +Crontab UI 就能够通过 `:9000` 这样的 URL 被远程机器访问到了。 + +Crontab UI 的控制台页面长这样: + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard.png) + +从上面的截图就可以看到,Crontab UI 的界面非常简洁,所有选项的含义都能不言自明。 + +在终端输入 `Ctrl + C` 就可以关闭 Crontab UI。 + +#### 创建、编辑、运行、停止、删除 Cron 定时任务 + +点击 “New”,输入 Cron 定时任务的信息并点击 “Save” 保存,就可以创建一个新的 Cron 定时任务了。 + + 1. 为 Cron 定时任务命名,这是可选的; + 2. 你想要执行的完整命令; + 3. 设定计划执行的时间。你可以按照启动、每时、每日、每周、每月、每年这些指标快速指定计划任务,也可以明确指定任务执行的具体时间。指定好计划时间后,“Jobs” 区域就会显示 Cron 定时任务的句式。 + 4. 
选择是否为某个 Cron 定时任务记录错误日志。 + +这是我的一个 Cron 定时任务样例。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/create-new-cron-job.png) + +如你所见,我设置了一个每月清理 `pacman` 缓存的 Cron 定时任务。你也可以设置多个 Cron 定时任务,都能在控制台页面看到。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard-1.png) + +如果你需要更改 Cron 定时任务中的某些参数,只需要点击 “Edit” 按钮并按照你的需求更改对应的参数。点击 “Run” 按钮可以立即执行 Cron 定时任务,点击 “Stop” 则可以立即停止 Cron 定时任务。如果想要查看某个 Cron 定时任务的详细日志,可以点击 “Log” 按钮。对于不再需要的 Cron 定时任务,就可以按 “Delete” 按钮删除。 + +#### 备份 Cron 定时任务 + +点击控制台页面的 “Backup” 按钮并确认,就可以备份所有 Cron 定时任务。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/backup-cron-jobs.png) + +备份之后,一旦 Crontab 文件出现了错误,就可以使用备份来恢复了。 + +#### 导入/导出其它机器上的 Cron 定时任务 + +Crontab UI 还有一个令人注目的功能,就是导入、导出、部署其它机器上的 Cron 定时任务。如果同一个网络里的多台机器都需要执行同样的 Cron 定时任务,只需要点击 “Export” 按钮并选择文件的保存路径,所有的 Cron 定时任务都会导出到 `crontab.db` 文件中。 + +以下是 `crontab.db` 文件的内容: + +``` +$ cat Downloads/crontab.db +{"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman","schedule":"@monthly","stopped":false,"timestamp":"Thu Aug 23 2018 10:34:19 GMT+0000 (Coordinated Universal Time)","logging":"true","mailing":{},"created":1535020459093,"_id":"lcVc1nSdaceqS1ut"} +``` + +导出成文件以后,你就可以把这个 `crontab.db` 文件放置到其它机器上并导入成 Cron 定时任务,而不需要在每一台主机上手动设置 Cron 定时任务。总之,在一台机器上设置完,导出,再导入到其他机器,就完事了。 + +#### 在 Crontab 文件获取/保存 Cron 定时任务 + +你可能在使用 Crontab UI 之前就已经使用 `crontab` 命令创建过 Cron 定时任务。如果是这样,你可以点击控制台页面上的 “Get from crontab” 按钮来获取已有的 Cron 定时任务。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/08/get-from-crontab.png) + +同样地,你也可以使用 Crontab UI 来将新的 Cron 定时任务保存到 Crontab 文件中,只需要点击 “Save to crontab” 按钮就可以了。 + +管理 Cron 定时任务并没有想象中那么难,即使是新手使用 Crontab UI 也能轻松管理 Cron 定时任务。赶快开始尝试并发表一下你的看法吧。 + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) 
+校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/ +[2]:https://www.ostechnix.com/install-node-js-linux/ + diff --git a/translated/tech/20180824 5 cool music player apps.md b/published/20180824 5 cool music player apps.md similarity index 63% rename from translated/tech/20180824 5 cool music player apps.md rename to published/20180824 5 cool music player apps.md index fb301ed4dd..76223f18ec 100644 --- a/translated/tech/20180824 5 cool music player apps.md +++ b/published/20180824 5 cool music player apps.md @@ -2,20 +2,21 @@ ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/08/5-cool-music-apps-816x345.jpg) -你喜欢音乐吗?那么 Fedora 中可能有你正在寻找的东西。本文介绍在 Fedora 上运行的不同音乐播放器。无论你有大量的音乐库,还是小型音乐库,或者根本没有音乐库,你都会被覆盖到。这里有四个图形程序和一个基于终端的音乐播放器,可以让你挑选。 + +你喜欢音乐吗?那么 Fedora 中可能有你正在寻找的东西。本文介绍在 Fedora 上运行的各种音乐播放器。无论你有庞大的音乐库,还是小一些的,抑或根本没有,你都可以用到音乐播放器。这里有四个图形程序和一个基于终端的音乐播放器,可以让你挑选。 ### Quod Libet -Quod Libet 是你的大型音频库的管理员。如果你有一个大量的音频库,你不想只听,但也要管理,Quod Libet 可能是一个很好的选择。 +Quod Libet 是一个完备的大型音频库管理器。如果你有一个庞大的音频库,你不想只是听,也想要管理,Quod Libet 可能是一个很好的选择。 ![][1] -Quod Libet 可以从磁盘上的多个位置导入音乐,并允许你编辑音频文件的标签 - 因此一切都在你的控制之下。额外地,它还有各种插件可用,从简单的均衡器到 [last.fm][2] 同步。你也可以直接从 [Soundcloud][3] 搜索和播放音乐。 +Quod Libet 可以从磁盘上的多个位置导入音乐,并允许你编辑音频文件的标签 —— 因此一切都在你的控制之下。此外,它还有各种插件可用,从简单的均衡器到 [last.fm][2] 同步。你也可以直接从 [Soundcloud][3] 搜索和播放音乐。 + +Quod Libet 在 HiDPI 屏幕上工作得很好,它有 Fedora 的 RPM 包,如果你运行 [Silverblue][5],它在 [Flathub][4] 中也有。使用 Gnome Software 或命令行安装它: -Quod Libet 在 HiDPI 屏幕上工作得很好,它有 Fedora 的 RPM 包,如果你运行[Silverblue][5],它在 [Flathub][4] 中也有。使用 Gnome Software 或命令行安装它: ``` $ sudo dnf install quodlibet - ``` ### Audacious @@ -24,14 +25,14 @@ $ sudo dnf install quodlibet ![][6] -Audacious 可能不会立即管理你的所有音乐,但你如果想将音乐组织为文件,它能做得很好。你还可以导出和导入播放列表,而无需重新组织音乐文件本身。 +Audacious 可能不直接管理你的所有音乐,但你如果想将音乐按文件组织起来,它能做得很好。你还可以导出和导入播放列表,而无需重新组织音乐文件本身。 
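
顺带一提,导出的播放列表本身只是普通文本,因此把它搬到另一台机器上时,并不需要重排音乐文件,用命令行批量改写列表里的路径即可。下面是一个最小的示意(其中 `/home/alice/Music` 与 `/home/bob/Music` 两个路径均为假设的例子):

```
# 生成一个假设的导出列表,再用 sed 把旧路径批量替换为新机器上的路径
printf '%s\n' '/home/alice/Music/song1.mp3' '/home/alice/Music/song2.mp3' > /tmp/old.m3u
sed 's|^/home/alice/Music|/home/bob/Music|' /tmp/old.m3u > /tmp/new.m3u
cat /tmp/new.m3u   # 输出两行以 /home/bob/Music 开头的路径
```

这样得到的 new.m3u 就可以直接导入 Audacious 之类的播放器使用了。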
-额外地,你可以让它看起来像 Winamp。要让它与上面的截图相同,请进入 “Settings/Appearance,”,选择顶部的 “Winamp Classic Interface”,然后选择右下方的 “Refugee” 皮肤。而鲍勃是你的叔叔!这就完成了。 +此外,你可以让它看起来像 Winamp。要让它与上面的截图相同,请进入 “Settings/Appearance”,选择顶部的 “Winamp Classic Interface”,然后选择右下方的 “Refugee” 皮肤。就这么简单。 Audacious 在 Fedora 中作为 RPM 提供,可以使用 Gnome Software 或在终端运行以下命令安装: + ``` $ sudo dnf install audacious - ``` ### Lollypop @@ -40,25 +41,25 @@ Lollypop 是一个音乐播放器,它与 GNOME 集成良好。如果你喜欢 ![][7] -除了与 GNOME Shell 的良好视觉集成之外,它还可以很好地用于 HiDPI 屏幕,并支持黑暗主题。 +除了与 GNOME Shell 的良好视觉集成之外,它还可以很好地用于 HiDPI 屏幕,并支持暗色主题。 额外地,Lollypop 有一个集成的封面下载器和一个所谓的派对模式(右上角的音符按钮),它可以自动选择和播放音乐。它还集成了 [last.fm][2] 或 [libre.fm][8] 等在线服务。 它有 Fedora 的 RPM 也有用于 [Silverblue][5] 工作站的 [Flathub][4],使用 Gnome Software 或终端进行安装: + ``` $ sudo dnf install lollypop - ``` ### Gradio -如果你没有任何音乐但仍喜欢听怎么办?或者你只是喜欢收音机?Gradio 就是为你准备的。 +如果你没有任何音乐但仍想听怎么办?或者你只是喜欢收音机?Gradio 就是为你准备的。 ![][9] Gradio 是一个简单的收音机,它允许你搜索和播放网络电台。你可以按国家、语言或直接搜索找到它们。额外地,它可视化地集成到了 GNOME Shell 中,可以与 HiDPI 屏幕配合使用,并且可以选择黑暗主题。 -可以在 [Flathub][4] 中找到 Gradio,它同时可以运行在 Fedora Workstation 和 [Silverblue][5] 中。使用 Gnome Software 安装它 +可以在 [Flathub][4] 中找到 Gradio,它同时可以运行在 Fedora Workstation 和 [Silverblue][5] 中。使用 Gnome Software 安装它。 ### sox @@ -67,19 +68,19 @@ Gradio 是一个简单的收音机,它允许你搜索和播放网络电台。 ![][10] sox 是一个非常简单的基于终端的音乐播放器。你需要做的就是运行如下命令: + ``` $ play file.mp3 - ``` 接着 sox 就会为你播放。除了单独的音频文件外,sox 还支持 m3u 格式的播放列表。 -额外地,因为 sox 是基于终端的程序,你可以在 ssh 中运行它。你有一个带扬声器的家用服务器吗?或者你想从另一台电脑上播放音乐吗?尝试将它与 [tmux][11] 一起使用,这样即使会话关闭也可以继续听。 +此外,因为 sox 是基于终端的程序,你可以通过 ssh 运行它。你有一个带扬声器的家用服务器吗?或者你想从另一台电脑上播放音乐吗?尝试将它与 [tmux][11] 一起使用,这样即使会话关闭也可以继续听。 sox 在 Fedora 中以 RPM 提供。运行下面的命令安装: + ``` $ sudo dnf install sox - ``` @@ -90,19 +91,19 @@ via: https://fedoramagazine.org/5-cool-music-player-apps/ 作者:[Adam Šamalík][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 
[a]:https://fedoramagazine.org/author/asamalik/ -[1]:https://fedoramagazine.org/wp-content/uploads/2018/08/qodlibet-300x217.png +[1]:https://fedoramagazine.org/wp-content/uploads/2018/08/qodlibet-768x555.png [2]:https://last.fm [3]:https://soundcloud.com/ [4]:https://flathub.org/home [5]:https://teamsilverblue.org/ -[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/audacious-300x136.png -[7]:https://fedoramagazine.org/wp-content/uploads/2018/08/lollypop-300x172.png +[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/audacious-768x348.png +[7]:https://fedoramagazine.org/wp-content/uploads/2018/08/lollypop-768x439.png [8]:https://libre.fm -[9]:https://fedoramagazine.org/wp-content/uploads/2018/08/gradio.png -[10]:https://fedoramagazine.org/wp-content/uploads/2018/08/sox-300x179.png +[9]:https://fedoramagazine.org/wp-content/uploads/2018/08/gradio-768x499.png +[10]:https://fedoramagazine.org/wp-content/uploads/2018/08/sox-768x457.png [11]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/ diff --git a/published/20180824 What Stable Kernel Should I Use.md b/published/20180824 What Stable Kernel Should I Use.md new file mode 100644 index 0000000000..a993ddcf20 --- /dev/null +++ b/published/20180824 What Stable Kernel Should I Use.md @@ -0,0 +1,133 @@ +我应该使用哪些稳定版内核? 
+====== +> 本文作者 Greg Kroah-Hartman 是 Linux 稳定版内核的维护负责人。 + +很多人都问我这样的问题,在他们的产品/设备/笔记本/服务器等上面应该使用什么样的稳定版内核。一直以来,尤其是那些现在已经延长支持时间的内核,都是由我和其他人提供支持,因此,给出这个问题的答案并不是件容易的事情。在这篇文章我将尝试去给出我在这个问题上的看法。当然,你可以任意选用任何一个你想去使用的内核版本,这里只是我的建议。 + +和以前一样,在这里给出的这些看法只代表我个人的意见。 + +### 可选择的内核有哪些 + +下面列出了我建议你应该去使用的内核的列表,从最好的到最差的都有。我在下面将详细介绍,但是如果你只想得到一个结论,它就是你想要的: + +建议你使用的内核的分级,从最佳的方案到最差的方案如下: + + * 你最喜欢的 Linux 发行版支持的内核 + * 最新的稳定版 + * 最新的 LTS (长期支持)版本 + * 仍然处于维护状态的老的 LTS 版本 + +绝对不要去使用的内核: + + * 不再维护的内核版本 + +给上面的列表给出具体的数字,今天是 2018 年 8 月 24 日,kernel.org 页面上可以看到是这样: + +![][1] + +因此,基于上面的列表,那它应该是: + + * 4.18.5 是最新的稳定版 + * 4.14.67 是最新的 LTS 版本 + * 4.9.124、4.4.152、以及 3.16.57 是仍然处于维护状态的老的 LTS 版本 + * 4.17.19 和 3.18.119 是过去 60 天内有过发布的 “生命周期终止” 的内核版本,它们仍然保留在 kernel.org 站点上,是为了仍然想去使用它们的那些人。 + +非常容易,对吗? + +Ok,现在我给出这样选择的一些理由: + +### Linux 发行版内核 + +对于大多数 Linux 用户来说,最好的方案就是使用你喜欢的 Linux 发行版的内核。就我本人而言,我比较喜欢基于社区的、内核不断滚动升级的用最新内核的 Linux 发行版,并且它也是由开发者社区来支持的。这种类型的发行版有 Fedora、openSUSE、Arch、Gentoo、CoreOS,以及其它的。 + +所有这些发行版都使用了上游的最新的稳定版内核,并且确保定期打了需要的 bug 修复补丁。当它拥有了最新的修复之后([记住所有的修复都是安全修复][2]),这就是你可以使用的最安全、最好的内核之一。 + +有些社区的 Linux 发行版需要很长的时间才发行一个新内核版本,但是最终发行的版本和所支持的内核都是非常好的。这些也都非常好用,Debian 和 Ubuntu 就是这样的例子。 + +如果我没有在这里列出你所喜欢的发行版,并不是意味着它们的内核不够好。查看这些发行版的网站,确保它们的内核包是不断应用最新的安全补丁进行升级过的,那么它就应该是很好的。 + +许多人好像喜欢旧式、“传统” 模式的发行版,使用 RHEL、SLES、CentOS 或者 “LTS” Ubuntu 发行版。这些发行版挑选一个特定的内核版本,然后使用好几年,甚至几十年。他们反向移植了最新的 bug 修复,有时也有一些内核的新特性,所有的只是追求堂吉诃德式的保持版本号不变而已,尽管他们已经在那个旧的内核版本上做了成千上万的变更。这项工作是一项真正吃力不讨好的工作,分配到这些任务的开发人员做了一些精彩的工作才能实现这些目标。所以如果你希望永远不看到你的内核版本号发生过变化,那么就使用这些发行版。他们通常会为使用而付出一些钱,当发生错误时能够从这些公司得到一些支持,那就是值得的。 + +所以,你能使用的最好的内核是你可以求助于别人,而别人可以为你提供支持的内核。使用那些支持,你通常都已经为它支付过费用了(对于企业发行版),而这些公司也知道他们职责是什么。 + +但是,如果你不希望去依赖别人,而是希望你自己管理你的内核,或者你有发行版不支持的硬件,那么你应该去使用最新的稳定版: + +### 最新的稳定版 + +最新的稳定版内核是 Linux 内核开发者社区宣布为“稳定版”的最新的一个内核。大约每三个月,社区发行一个包含了对所有新硬件支持的、新的稳定版内核,最新版的内核不但改善内核性能,同时还包含内核各部分的 bug 修复。接下来的三个月之后,进入到下一个内核版本的 bug 修复将被反向移植进入这个稳定版内核中,因此,使用这个内核版本的用户将确保立即得到这些修复。 + +最新的稳定版内核通常也是主流社区发行版所使用的内核,因此你可以确保它是经过测试和拥有大量用户使用的内核。另外,内核社区(全部开发者超过 4000 
人)也将帮助这个发行版提供对用户的支持,因为这是他们做的最新的一个内核。 + +三个月之后,将发行一个新的稳定版内核,你应该去更新到它以确保你的内核始终是最新的稳定版,因为当最新的稳定版内核发布之后,对你的当前稳定版内核的支持通常会落后几周时间。 + +如果你在上一个 LTS (长期支持)版本发布之后购买了最新的硬件,为了能够支持最新的硬件,你几乎是绝对需要去运行这个最新的稳定版内核。对于台式机或新的服务器,最新的稳定版内核通常是推荐运行的内核。 + +### 最新的 LTS 版本 + +如果你的硬件为了保证正常运行(像大多数的嵌入式设备),需要依赖供应商的源码树外out-of-tree的补丁,那么对你来说,最好的内核版本是最新的 LTS 版本。这个版本拥有所有进入稳定版内核的最新 bug 修复,以及大量的用户测试和使用。 + +请注意,这个最新的 LTS 版本没有新特性,并且也几乎不会增加对新硬件的支持,因此,如果你需要使用一个新设备,那你的最佳选择就是最新的稳定版内核,而不是最新的 LTS 版内核。 + +另外,对于这个 LTS 版本的用户来说,他也不用担心每三个月一次的“重大”升级。因此,他们将一直坚持使用这个 LTS 版本,并每年升级一次,这是一个很好的实践。 + +使用这个 LTS 版本的不利方面是,你没法得到在最新版本内核上实现的内核性能提升,除非在未来的一年中,你升级到下一个 LTS 版内核。 + +另外,如果你使用的这个内核版本有问题,你所做的第一件事情就是向任意一位内核开发者报告发生的问题,并向他们询问,“最新的稳定版内核中是否也存在这个问题?”并且,你需要意识到,对它的支持不会像使用最新的稳定版内核那样容易得到。 + +现在,如果你坚持使用一个有大量的补丁集的内核,并且不希望升级到每年一次的新 LTS 版内核上,那么,或许你应该去使用老的 LTS 版内核: + +### 老的 LTS 版本 + +传统上,这些版本都由社区提供 2 年时间的支持,有时候当一个重要的 Linux 发行版(像 Debian 或 SLES)依赖它时,这个支持时间会更长。然而在过去一年里,感谢 Google、Linaro、Linaro 成员公司、[kernelci.org][3]、以及其它公司在测试和基础设施上的大量投入,使得这些老的 LTS 版内核得到更长时间的支持。 + +最新的 LTS 版本以及它们将被支持多长时间,这是 2018 年 8 月 24 日显示在 [kernel.org/category/releases.html][4] 上的信息: + +![][5] + +Google 和其它公司希望这些内核使用的时间更长的原因是,由于现在几乎所有的 SoC 芯片的疯狂的(也有人说是打破常规)开发模型。这些设备在芯片发行前几年就启动了他们的开发周期,而那些代码从来不会合并到上游,最终结果是新打造的芯片是基于一个 2 年以前的老内核发布的。这些 SoC 的代码树通常增加了超过 200 万行的代码,这使得它们成为我们前面称之为“类 Linux 内核“的东西。 + +如果在 2 年后,这个 LTS 版本停止支持,那么来自社区的支持将立即停止,并且没有人对它再进行 bug 修复。这导致了在全球各地数以百万计的非常不安全的设备仍然在使用中,这对任何生态系统来说都不是什么好事情。 + +由于这种依赖,这些公司现在要求新设备不断更新到最新的 LTS 版本——这些为它们特定发布的版本(例如现在的每个 4.9.y 版本)。其中一个这样的例子就是新 Android 设备对内核版本的要求,这些新设备所带的 “Andrid O” 版本(和现在的 “Android P” 版本)指定了最低允许使用的内核版本,并且 Andoird 安全更新版本也开始越来越频繁在设备上要求使用这些 “.y” 版本。 + +我注意到一些生产商现在已经在做这些事情。Sony 是其中一个非常好的例子,在他们的大多数新手机上,通过他们每季度的安全更新版本,将设备更新到最新的 4.4.y 发行版上。另一个很好的例子是一家小型公司 Essential,据我所知,他们持续跟踪 4.4.y 版本的速度比其它公司都快。 + +当使用这种老的内核时有个重大警告。反向移植到这种内核中的安全修复不如最新版本的 LTS 内核多,因为这些使用老的 LTS 内核的设备的传统模式是一个更加简化的用户模式。这些内核不能用于任何“通用计算”模式中,在这里用的是不可信用户untrusted user或虚拟机,极大地削弱了对老的内核做像最近的 Spectre 这样的修复的能力,如果在一些分支中存在这样的 bug 的话。 + +因此,仅在你能够完全控制的设备,或者限定在一个非常强大的安全模型(像 Android 
一样强制使用 SELinux 和应用程序隔离)时使用老的 LTS 版本。绝对不要在有不可信用户/程序,或虚拟机的服务器上使用这些老的 LTS 版内核。 + +此外,相比正常的 LTS 版内核,社区对这些老的 LTS 版内核的支持(如果有的话)要少得多。如果你使用这些内核,那么你只能是一个人在战斗,你需要有能力去独自支持这些内核,或者依赖你的 SoC 供应商为你提供支持(需要注意的是,几乎没有供应商会为你提供支持,因此,你要特别注意 ……)。 + +### 不再维护的内核发行版 + +更让人感到惊讶的事情是,许多公司只是随便选一个内核发行版,然后将它封装到它们的产品里,并毫不犹豫地随数十万台设备一起售出。其中一个这样的糟糕例子是 Lego Mindstorm 系统,不知道是什么原因在它们的设备上随意选取了一个 -rc 的内核发行版。-rc 的发行版是开发中的版本,根本没有 Linux 内核开发者认为它适合任何人使用,更不用说是数百万的用户了。 + +当然,如果你愿意,你可以随意地使用它,但是需要注意的是,可能真的就只有你一个人在使用它。社区不会为你提供支持,因为他们不可能关注所有内核版本的特定问题,因此如果出现错误,你只能独自去解决它。对于一些公司和系统来说,这么做可能还行,但是如果没有事先为此做好规划,那么要当心因此而产生的“隐性”成本。 + +### 总结 + +基于以上原因,下面是我针对不同类型设备推荐使用的内核的简短列表: + + * 笔记本 / 台式机:最新的稳定版内核 + * 服务器:最新的稳定版内核或最新的 LTS 版内核 + * 嵌入式设备:最新的 LTS 版内核或老的 LTS 版内核(如果使用的安全模型非常强大和严格) + +至于我,在我的机器上运行什么样的内核?我的笔记本运行的是最新的开发版内核(即 Linus 的开发树)再加上我正在做修改的内核,我的服务器上运行的是最新的稳定版内核。因此,尽管我负责 LTS 发行版的支持工作,但我自己并不使用 LTS 版内核,除了在测试系统上。我依赖于开发版和最新的稳定版内核,以确保我的机器运行的是目前我们所知道的最快的也是最安全的内核版本。 + +-------------------------------------------------------------------------------- + +via: http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/ + +作者:[Greg Kroah-Hartman][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://kroah.com +[1]:https://s3.amazonaws.com/kroah.com/images/kernel.org_2018_08_24.png +[2]:http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/ +[3]:https://kernelci.org/ +[4]:https://www.kernel.org/category/releases.html +[5]:https://s3.amazonaws.com/kroah.com/images/kernel.org_releases_2018_08_24.png diff --git a/published/20180827 4 tips for better tmux sessions.md b/published/20180827 4 tips for better tmux sessions.md new file mode 100644 index 0000000000..979568a171 --- /dev/null +++ b/published/20180827 4 tips for better tmux sessions.md @@ -0,0 +1,88 @@ +更好利用 tmux 会话的 4 个技巧 +====== + 
+![](https://fedoramagazine.org/wp-content/uploads/2018/08/tmux-4-tips-816x345.jpg) + +tmux 是一个终端多路复用工具,它可以让你系统上的终端支持多面板。你可以排列面板位置,在每个面板运行不同进程,这通常可以更好地利用你的屏幕。我们在 [这篇早期的文章][1] 中向读者介绍过这一强力工具。如果你已经开始使用 tmux 了,那么这里有一些技巧可以帮你更好地使用它。 + +本文假设你当前的前缀键是 `Ctrl+b`。如果你已重新映射该前缀,只需在相应位置替换为你定义的前缀即可。 + +### 设置终端为自动使用 tmux + +使用 tmux 的一个最大好处就是可以随意地从会话中断开和重连。这使得远程登录会话功能更加强大。你有没有遇到过丢失了与远程系统的连接,然后多希望能够恢复在远程系统上做过的那些工作的情况?tmux 能够解决这一问题。 + +然而,有时在远程系统上工作时,你可能会忘记开启会话。避免出现这一情况的一个方法就是每次通过交互式 shell 登录系统时都让 tmux 启动或附加上一个会话。 + +在你远程系统上的 `~/.bash_profile` 文件中加入下面内容: + +``` +if [ -z "$TMUX" ]; then + tmux attach -t default || tmux new -s default +fi +``` + +然后注销远程系统,并使用 SSH 重新登录。你会发现你处在一个名为 `default` 的 tmux 会话中了。如果退出该会话,则下次登录时还会重新生成此会话。但更重要的是,若你正常地从会话中分离,那么下次登录时你会发现之前的工作并没有丢失 - 这在连接中断时非常有用。 + +你当然也可以将这段配置加入本地系统中。需要注意的是,大多数 GUI 界面的终端并不会自动使用这个 `default` 会话,因为它们并不是登录 shell。虽然你可以修改这一行为,但它可能会导致终端嵌套执行附加到 tmux 会话这一动作,从而导致会话不太可用,因此当进行此操作时请一定小心。 + +### 使用缩放功能使注意力专注于单个进程 + +虽然 tmux 的目的就是在单个会话中提供多窗口、多面板和多进程的能力,但有时候你需要专注。如果你正在与一个进程进行交互并且需要更多空间,或需要专注于某个任务,则可以使用缩放命令。该命令会将当前面板扩展,占据整个当前窗口的空间。 + +缩放在其他情况下也很有用。比如,想象你在图形桌面上运行一个终端窗口。面板会使得从 tmux 会话中拷贝和粘贴多行内容变得相对困难。但若你缩放了面板,就可以很容易地对多行数据进行拷贝/粘贴。 + +要对当前面板进行缩放,按下 `Ctrl+b, z`。需要恢复的话,按下相同按键组合来恢复面板。 + +### 绑定一些有用的命令 + +tmux 默认有大量的命令可用。但将一些更常用的操作绑定到容易记忆的快捷键会很有用。下面一些例子可以让会话变得更好用,你可以添加到 `~/.tmux.conf` 文件中: + +``` +bind r source-file ~/.tmux.conf \; display "Reloaded config" +``` + +该命令重新读取你配置文件中的命令和键绑定。添加该条绑定后,退出任意一个 tmux 会话然后重启一个会话。现在你做了任何更改后,只需要简单地按下 `Ctrl+b, r` 就能将修改的内容应用到现有的会话中了。 + +``` +bind V split-window -h +bind H split-window +``` + +这些命令可以很方便地对窗口进行横向切分(按下 `Shift+V`)和纵向切分(`Shift+H`)。 + +若你想查看所有绑定的快捷键,按下 `Ctrl+b, ?` 可以看到一个列表。你首先看到的应该是复制模式下的快捷键绑定,表示的是当你在 tmux 中进行复制粘贴时对应的快捷键。你添加的那两个键绑定会在前缀模式prefix mode中看到。请随意把玩吧! 
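上面几条键绑定也可以用一个小脚本一次性写入配置文件。下面是一个简单的示意脚本(仅为假设性示例,文件路径与绑定内容均可按需调整),它只在配置文件中尚未出现同样内容时才追加对应行,避免重复绑定:

```shell
#!/bin/sh
# 示意脚本:将上文介绍的几条键绑定追加到 tmux 配置文件中,
# 已存在的行会被跳过,因此重复运行也是安全的。
CONF="${1:-$HOME/.tmux.conf}"

add_binding() {
    # 仅当该行尚未原样出现在配置文件中时才追加(-x 整行匹配,-F 按字面量匹配)
    if ! grep -qxF "$1" "$CONF" 2>/dev/null; then
        printf '%s\n' "$1" >> "$CONF"
    fi
}

add_binding 'bind r source-file ~/.tmux.conf \; display "Reloaded config"'
add_binding 'bind V split-window -h'
add_binding 'bind H split-window'

echo "已写入 $CONF"
```

运行脚本后,在已有会话中按 `Ctrl+b, r` 重新加载配置,或重启会话,绑定即可生效。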
+ +### 使用 powerline 更清晰 + +[如前文所示][2],powerline 工具是对 shell 的绝佳补充。而且它在 tmux 中也同样适用。由于 tmux 接管了整个终端空间,powerline 窗口能提供的可不仅仅是更好的 shell 提示那么简单。 + +[![Screenshot of tmux powerline in git folder](https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53-1024x690.png)][3] + +如果你还没有这么做,按照 [这篇文章][4] 中的指示来安装该工具。然后[使用 sudo][5] 来安装附加组件: + +``` +sudo dnf install tmux-powerline +``` + +接着重启会话,就会在底部看到一个漂亮的新状态栏。根据终端的宽度,默认的状态栏会显示你当前会话 ID、打开的窗口、系统信息、日期和时间,以及主机名。若你进入了使用 git 进行版本控制的项目目录,还能看到分支名和用色彩标注的版本库状态。 + +当然,这个状态栏具有很好的可配置性。享受你新增强的 tmux 会话吧,玩得开心点。 + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/4-tips-better-tmux-sessions/ + +作者:[Paul W. Frields][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[lujun9972](https://github.com/lujun9972) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fedoramagazine.org/author/pfrields/ +[1]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/ +[2]:https://fedoramagazine.org/add-power-terminal-powerline/ +[3]:https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53.png +[4]:https://fedoramagazine.org/add-power-terminal-powerline/ +[5]:https://fedoramagazine.org/howto-use-sudo/ diff --git a/published/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md b/published/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md new file mode 100644 index 0000000000..7764b5186e --- /dev/null +++ b/published/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md @@ -0,0 +1,52 @@ +解决 Arch Linux 中出现的 “error:failed to commit transaction (conflicting files)” +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/06/arch_linux_wallpaper-720x340.png) + +自从我上次更新 Arch Linux 桌面以来已经有一个月了。今天我试着更新我的 Arch Linux 系统,然后遇到一个错误 
“error:failed to commit transaction (conflicting files) stfl:/usr/lib/libstfl.so.0 exists in filesystem”。看起来是 pacman 无法更新一个已经存在于文件系统上的库 (/usr/lib/libstfl.so.0)。如果你也遇到了同样的问题,下面是一个快速解决方案。 + +### 解决 Arch Linux 中出现的 “error:failed to commit transaction (conflicting files)” + +有三种方法。 + +1、只需在升级时忽略导致问题的 stfl 库并尝试再次更新系统。请参阅此指南以了解 [如何在更新时忽略软件包][1]。 + +2、使用命令覆盖这个包: + +``` +$ sudo pacman -Syu --overwrite /usr/lib/libstfl.so.0 +``` + +3、手工删掉 stfl 库然后再次升级系统。请确保目标包不被其他任何重要的包所依赖。可以到 archlinux.org 上查看该包的依赖情况。 + +``` +$ sudo rm /usr/lib/libstfl.so.0 +``` + +现在,尝试更新系统: + +``` +$ sudo pacman -Syu +``` + +我选择第三种方法,直接删除该文件然后升级 Arch Linux 系统。很有效! + +希望本文对你有所帮助。还有更多好东西,敬请期待! + +干杯! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-solve-error-failed-to-commit-transaction-conflicting-files-in-arch-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[lujun9972](https://github.com/lujun9972) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/safely-ignore-package-upgraded-arch-linux/ diff --git a/published/20140805 How to Install Cinnamon Desktop on Ubuntu.md b/published/201809/20140805 How to Install Cinnamon Desktop on Ubuntu.md similarity index 100% rename from published/20140805 How to Install Cinnamon Desktop on Ubuntu.md rename to published/201809/20140805 How to Install Cinnamon Desktop on Ubuntu.md diff --git a/published/20160503 Cloud Commander - A Web File Manager With Console And Editor.md b/published/201809/20160503 Cloud Commander - A Web File Manager With Console And Editor.md similarity index 100% rename from published/20160503 Cloud Commander - A Web File Manager With Console And Editor.md rename to published/201809/20160503 Cloud Commander - A Web File Manager With Console And Editor.md diff --git a/published/20170706 Docker 
Guide Dockerizing Python Django Application.md b/published/201809/20170706 Docker Guide Dockerizing Python Django Application.md similarity index 100% rename from published/20170706 Docker Guide Dockerizing Python Django Application.md rename to published/201809/20170706 Docker Guide Dockerizing Python Django Application.md diff --git a/published/20170709 The Extensive Guide to Creating Streams in RxJS.md b/published/201809/20170709 The Extensive Guide to Creating Streams in RxJS.md similarity index 100% rename from published/20170709 The Extensive Guide to Creating Streams in RxJS.md rename to published/201809/20170709 The Extensive Guide to Creating Streams in RxJS.md diff --git a/published/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md b/published/201809/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md similarity index 100% rename from published/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md rename to published/201809/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md diff --git a/published/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md b/published/201809/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md similarity index 100% rename from published/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md rename to published/201809/20171003 Trash-Cli - A Command Line Interface For Trashcan On Linux.md diff --git a/published/20171010 Operating a Kubernetes network.md b/published/201809/20171010 Operating a Kubernetes network.md similarity index 100% rename from published/20171010 Operating a Kubernetes network.md rename to published/201809/20171010 Operating a Kubernetes network.md diff --git a/published/20171124 How do groups work on Linux.md b/published/201809/20171124 How do groups work on Linux.md similarity index 100% rename from published/20171124 How do groups work on Linux.md rename to 
published/201809/20171124 How do groups work on Linux.md diff --git a/published/20171202 Scrot Linux command-line screen grabs made simple.md b/published/201809/20171202 Scrot Linux command-line screen grabs made simple.md similarity index 100% rename from published/20171202 Scrot Linux command-line screen grabs made simple.md rename to published/201809/20171202 Scrot Linux command-line screen grabs made simple.md diff --git a/published/20180102 Top 7 open source project management tools for agile teams.md b/published/201809/20180102 Top 7 open source project management tools for agile teams.md similarity index 100% rename from published/20180102 Top 7 open source project management tools for agile teams.md rename to published/201809/20180102 Top 7 open source project management tools for agile teams.md diff --git a/published/20180131 What I Learned from Programming Interviews.md b/published/201809/20180131 What I Learned from Programming Interviews.md similarity index 100% rename from published/20180131 What I Learned from Programming Interviews.md rename to published/201809/20180131 What I Learned from Programming Interviews.md diff --git a/published/20180201 Here are some amazing advantages of Go that you dont hear much about.md b/published/201809/20180201 Here are some amazing advantages of Go that you dont hear much about.md similarity index 100% rename from published/20180201 Here are some amazing advantages of Go that you dont hear much about.md rename to published/201809/20180201 Here are some amazing advantages of Go that you dont hear much about.md diff --git a/published/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md b/published/201809/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md similarity index 100% rename from published/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md rename to published/201809/20180203 API Star- Python 3 API Framework - Polyglot.Ninja().md diff --git a/published/20180226 Linux Virtual 
Machines vs Linux Live Images.md b/published/201809/20180226 Linux Virtual Machines vs Linux Live Images.md similarity index 100% rename from published/20180226 Linux Virtual Machines vs Linux Live Images.md rename to published/201809/20180226 Linux Virtual Machines vs Linux Live Images.md diff --git a/published/20180308 What is open source programming.md b/published/201809/20180308 What is open source programming.md similarity index 100% rename from published/20180308 What is open source programming.md rename to published/201809/20180308 What is open source programming.md diff --git a/published/20180316 How to Encrypt Files From Within a File Manager.md b/published/201809/20180316 How to Encrypt Files From Within a File Manager.md similarity index 100% rename from published/20180316 How to Encrypt Files From Within a File Manager.md rename to published/201809/20180316 How to Encrypt Files From Within a File Manager.md diff --git a/published/20180324 How To Compress And Decompress Files In Linux.md b/published/201809/20180324 How To Compress And Decompress Files In Linux.md similarity index 100% rename from published/20180324 How To Compress And Decompress Files In Linux.md rename to published/201809/20180324 How To Compress And Decompress Files In Linux.md diff --git a/published/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md b/published/201809/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md similarity index 100% rename from published/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md rename to published/201809/20180326 Start a blog in 30 minutes with Hugo, a static site generator written in Go.md diff --git a/published/20180402 Understanding Linux filesystems- ext4 and beyond.md b/published/201809/20180402 Understanding Linux filesystems- ext4 and beyond.md similarity index 100% rename from published/20180402 Understanding Linux filesystems- ext4 
and beyond.md rename to published/201809/20180402 Understanding Linux filesystems- ext4 and beyond.md diff --git a/published/20180424 A gentle introduction to FreeDOS.md b/published/201809/20180424 A gentle introduction to FreeDOS.md similarity index 100% rename from published/20180424 A gentle introduction to FreeDOS.md rename to published/201809/20180424 A gentle introduction to FreeDOS.md diff --git a/published/20180425 Understanding metrics and monitoring with Python - Opensource.com.md b/published/201809/20180425 Understanding metrics and monitoring with Python - Opensource.com.md similarity index 100% rename from published/20180425 Understanding metrics and monitoring with Python - Opensource.com.md rename to published/201809/20180425 Understanding metrics and monitoring with Python - Opensource.com.md diff --git a/published/20180427 An Official Introduction to the Go Compiler.md b/published/201809/20180427 An Official Introduction to the Go Compiler.md similarity index 100% rename from published/20180427 An Official Introduction to the Go Compiler.md rename to published/201809/20180427 An Official Introduction to the Go Compiler.md diff --git a/published/20180516 How Graphics Cards Work.md b/published/201809/20180516 How Graphics Cards Work.md similarity index 100% rename from published/20180516 How Graphics Cards Work.md rename to published/201809/20180516 How Graphics Cards Work.md diff --git a/published/20180516 Manipulating Directories in Linux.md b/published/201809/20180516 Manipulating Directories in Linux.md similarity index 100% rename from published/20180516 Manipulating Directories in Linux.md rename to published/201809/20180516 Manipulating Directories in Linux.md diff --git a/published/20180518 Mastering CI-CD at OpenDev.md b/published/201809/20180518 Mastering CI-CD at OpenDev.md similarity index 100% rename from published/20180518 Mastering CI-CD at OpenDev.md rename to published/201809/20180518 Mastering CI-CD at OpenDev.md diff --git 
a/published/20180525 Getting started with the Python debugger.md b/published/201809/20180525 Getting started with the Python debugger.md similarity index 100% rename from published/20180525 Getting started with the Python debugger.md rename to published/201809/20180525 Getting started with the Python debugger.md diff --git a/published/20180529 How To Add Additional IP (Secondary IP) In Ubuntu System.md b/published/201809/20180529 How To Add Additional IP (Secondary IP) In Ubuntu System.md similarity index 100% rename from published/20180529 How To Add Additional IP (Secondary IP) In Ubuntu System.md rename to published/201809/20180529 How To Add Additional IP (Secondary IP) In Ubuntu System.md diff --git a/published/20180618 Twitter Sentiment Analysis using NodeJS.md b/published/201809/20180618 Twitter Sentiment Analysis using NodeJS.md similarity index 100% rename from published/20180618 Twitter Sentiment Analysis using NodeJS.md rename to published/201809/20180618 Twitter Sentiment Analysis using NodeJS.md diff --git a/published/20180626 How to build a professional network when you work in a bazaar.md b/published/201809/20180626 How to build a professional network when you work in a bazaar.md similarity index 100% rename from published/20180626 How to build a professional network when you work in a bazaar.md rename to published/201809/20180626 How to build a professional network when you work in a bazaar.md diff --git a/published/20180702 View The Contents Of An Archive Or Compressed File Without Extracting It.md b/published/201809/20180702 View The Contents Of An Archive Or Compressed File Without Extracting It.md similarity index 100% rename from published/20180702 View The Contents Of An Archive Or Compressed File Without Extracting It.md rename to published/201809/20180702 View The Contents Of An Archive Or Compressed File Without Extracting It.md diff --git a/published/20180703 Understanding Python Dataclasses — Part 1.md b/published/201809/20180703 
Understanding Python Dataclasses — Part 1.md similarity index 100% rename from published/20180703 Understanding Python Dataclasses — Part 1.md rename to published/201809/20180703 Understanding Python Dataclasses — Part 1.md diff --git a/published/20180706 Anatomy of a Linux DNS Lookup - Part III.md b/published/201809/20180706 Anatomy of a Linux DNS Lookup - Part III.md similarity index 100% rename from published/20180706 Anatomy of a Linux DNS Lookup - Part III.md rename to published/201809/20180706 Anatomy of a Linux DNS Lookup - Part III.md diff --git a/published/20180710 How To View Detailed Information About A Package In Linux.md b/published/201809/20180710 How To View Detailed Information About A Package In Linux.md similarity index 100% rename from published/20180710 How To View Detailed Information About A Package In Linux.md rename to published/201809/20180710 How To View Detailed Information About A Package In Linux.md diff --git a/published/20180717 Getting started with Etcher.io.md b/published/201809/20180717 Getting started with Etcher.io.md similarity index 100% rename from published/20180717 Getting started with Etcher.io.md rename to published/201809/20180717 Getting started with Etcher.io.md diff --git a/published/20180720 An Introduction to Using Git.md b/published/201809/20180720 An Introduction to Using Git.md similarity index 100% rename from published/20180720 An Introduction to Using Git.md rename to published/201809/20180720 An Introduction to Using Git.md diff --git a/published/20180720 How to Install 2048 Game in Ubuntu and Other Linux Distributions.md b/published/201809/20180720 How to Install 2048 Game in Ubuntu and Other Linux Distributions.md similarity index 100% rename from published/20180720 How to Install 2048 Game in Ubuntu and Other Linux Distributions.md rename to published/201809/20180720 How to Install 2048 Game in Ubuntu and Other Linux Distributions.md diff --git a/published/20180720 How to build a URL shortener with 
Apache.md b/published/201809/20180720 How to build a URL shortener with Apache.md similarity index 100% rename from published/20180720 How to build a URL shortener with Apache.md rename to published/201809/20180720 How to build a URL shortener with Apache.md diff --git a/published/20180725 How do private keys work in PKI and cryptography.md b/published/201809/20180725 How do private keys work in PKI and cryptography.md similarity index 100% rename from published/20180725 How do private keys work in PKI and cryptography.md rename to published/201809/20180725 How do private keys work in PKI and cryptography.md diff --git a/published/20180730 7 Python libraries for more maintainable code.md b/published/201809/20180730 7 Python libraries for more maintainable code.md similarity index 100% rename from published/20180730 7 Python libraries for more maintainable code.md rename to published/201809/20180730 7 Python libraries for more maintainable code.md diff --git a/published/20180730 How to use VS Code for your Python projects.md b/published/201809/20180730 How to use VS Code for your Python projects.md similarity index 100% rename from published/20180730 How to use VS Code for your Python projects.md rename to published/201809/20180730 How to use VS Code for your Python projects.md diff --git a/published/20180802 Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution.md b/published/201809/20180802 Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution.md similarity index 100% rename from published/20180802 Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution.md rename to published/201809/20180802 Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution.md diff --git a/published/20180803 10 Popular Windows Apps That Are Also Available on Linux.md b/published/201809/20180803 10 Popular Windows Apps That Are Also Available on Linux.md similarity index 100% rename from published/20180803 10 
Popular Windows Apps That Are Also Available on Linux.md rename to published/201809/20180803 10 Popular Windows Apps That Are Also Available on Linux.md diff --git a/published/20180804 Installing Andriod on VirtualBox.md b/published/201809/20180804 Installing Andriod on VirtualBox.md similarity index 100% rename from published/20180804 Installing Andriod on VirtualBox.md rename to published/201809/20180804 Installing Andriod on VirtualBox.md diff --git a/published/20180806 Anatomy of a Linux DNS Lookup - Part IV.md b/published/201809/20180806 Anatomy of a Linux DNS Lookup - Part IV.md similarity index 100% rename from published/20180806 Anatomy of a Linux DNS Lookup - Part IV.md rename to published/201809/20180806 Anatomy of a Linux DNS Lookup - Part IV.md diff --git a/published/20180806 Installing and using Git and GitHub on Ubuntu Linux- A beginner-s guide.md b/published/201809/20180806 Installing and using Git and GitHub on Ubuntu Linux- A beginner-s guide.md similarity index 100% rename from published/20180806 Installing and using Git and GitHub on Ubuntu Linux- A beginner-s guide.md rename to published/201809/20180806 Installing and using Git and GitHub on Ubuntu Linux- A beginner-s guide.md diff --git a/published/20180808 5 applications to manage your to-do list on Fedora.md b/published/201809/20180808 5 applications to manage your to-do list on Fedora.md similarity index 100% rename from published/20180808 5 applications to manage your to-do list on Fedora.md rename to published/201809/20180808 5 applications to manage your to-do list on Fedora.md diff --git a/published/20180808 5 open source role-playing games for Linux.md b/published/201809/20180808 5 open source role-playing games for Linux.md similarity index 100% rename from published/20180808 5 open source role-playing games for Linux.md rename to published/201809/20180808 5 open source role-playing games for Linux.md diff --git a/published/20180810 6 Reasons Why Linux Users Switch to BSD.md 
b/published/201809/20180810 6 Reasons Why Linux Users Switch to BSD.md similarity index 100% rename from published/20180810 6 Reasons Why Linux Users Switch to BSD.md rename to published/201809/20180810 6 Reasons Why Linux Users Switch to BSD.md diff --git a/published/20180810 Automatically Switch To Light - Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely.md b/published/201809/20180810 Automatically Switch To Light - Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely.md similarity index 100% rename from published/20180810 Automatically Switch To Light - Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely.md rename to published/201809/20180810 Automatically Switch To Light - Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely.md diff --git a/published/20180813 MPV Player- A Minimalist Video Player for Linux.md b/published/201809/20180813 MPV Player- A Minimalist Video Player for Linux.md similarity index 100% rename from published/20180813 MPV Player- A Minimalist Video Player for Linux.md rename to published/201809/20180813 MPV Player- A Minimalist Video Player for Linux.md diff --git a/published/20180815 How To Enable Hardware Accelerated Video Decoding In Chromium On Ubuntu Or Linux Mint.md b/published/201809/20180815 How To Enable Hardware Accelerated Video Decoding In Chromium On Ubuntu Or Linux Mint.md similarity index 100% rename from published/20180815 How To Enable Hardware Accelerated Video Decoding In Chromium On Ubuntu Or Linux Mint.md rename to published/201809/20180815 How To Enable Hardware Accelerated Video Decoding In Chromium On Ubuntu Or Linux Mint.md diff --git a/published/20180822 How To Switch Between TTYs Without Using Function Keys In Linux.md b/published/201809/20180822 How To Switch Between TTYs Without Using Function Keys In Linux.md similarity index 100% rename from published/20180822 How To Switch Between TTYs Without Using Function Keys In Linux.md rename to 
published/201809/20180822 How To Switch Between TTYs Without Using Function Keys In Linux.md diff --git a/published/20180822 What is a Makefile and how does it work.md b/published/201809/20180822 What is a Makefile and how does it work.md similarity index 100% rename from published/20180822 What is a Makefile and how does it work.md rename to published/201809/20180822 What is a Makefile and how does it work.md diff --git a/published/20180823 An introduction to pipes and named pipes in Linux.md b/published/201809/20180823 An introduction to pipes and named pipes in Linux.md similarity index 100% rename from published/20180823 An introduction to pipes and named pipes in Linux.md rename to published/201809/20180823 An introduction to pipes and named pipes in Linux.md diff --git a/published/20180823 How to publish a WordPress blog to a static GitLab Pages site.md b/published/201809/20180823 How to publish a WordPress blog to a static GitLab Pages site.md similarity index 100% rename from published/20180823 How to publish a WordPress blog to a static GitLab Pages site.md rename to published/201809/20180823 How to publish a WordPress blog to a static GitLab Pages site.md diff --git a/published/20180824 How to install software from the Linux command line.md b/published/201809/20180824 How to install software from the Linux command line.md similarity index 100% rename from published/20180824 How to install software from the Linux command line.md rename to published/201809/20180824 How to install software from the Linux command line.md diff --git a/published/20180824 Steam Makes it Easier to Play Windows Games on Linux.md b/published/201809/20180824 Steam Makes it Easier to Play Windows Games on Linux.md similarity index 100% rename from published/20180824 Steam Makes it Easier to Play Windows Games on Linux.md rename to published/201809/20180824 Steam Makes it Easier to Play Windows Games on Linux.md diff --git a/published/20180824 [Solved] -sub process usr bin dpkg 
returned an error code 1- Error in Ubuntu.md b/published/201809/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md similarity index 100% rename from published/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md rename to published/201809/20180824 [Solved] -sub process usr bin dpkg returned an error code 1- Error in Ubuntu.md diff --git a/published/20180826 How to capture and analyze packets with tcpdump command on Linux.md b/published/201809/20180826 How to capture and analyze packets with tcpdump command on Linux.md similarity index 100% rename from published/20180826 How to capture and analyze packets with tcpdump command on Linux.md rename to published/201809/20180826 How to capture and analyze packets with tcpdump command on Linux.md diff --git a/published/20180827 An introduction to diffs and patches.md b/published/201809/20180827 An introduction to diffs and patches.md similarity index 100% rename from published/20180827 An introduction to diffs and patches.md rename to published/201809/20180827 An introduction to diffs and patches.md diff --git a/published/20180828 15 command-line aliases to save you time.md b/published/201809/20180828 15 command-line aliases to save you time.md similarity index 100% rename from published/20180828 15 command-line aliases to save you time.md rename to published/201809/20180828 15 command-line aliases to save you time.md diff --git a/published/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md b/published/201809/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md similarity index 100% rename from published/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md rename to published/201809/20180828 A Cat Clone With Syntax Highlighting And Git Integration.md diff --git a/published/20180828 How to Play Windows-only Games on Linux with Steam Play.md b/published/201809/20180828 How to Play Windows-only Games on 
Linux with Steam Play.md similarity index 100% rename from published/20180828 How to Play Windows-only Games on Linux with Steam Play.md rename to published/201809/20180828 How to Play Windows-only Games on Linux with Steam Play.md diff --git a/published/20180829 Add GUIs to your programs and scripts easily with PySimpleGUI.md b/published/201809/20180829 Add GUIs to your programs and scripts easily with PySimpleGUI.md similarity index 100% rename from published/20180829 Add GUIs to your programs and scripts easily with PySimpleGUI.md rename to published/201809/20180829 Add GUIs to your programs and scripts easily with PySimpleGUI.md diff --git a/published/20180830 How To Reset MySQL Or MariaDB Root Password.md b/published/201809/20180830 How To Reset MySQL Or MariaDB Root Password.md similarity index 100% rename from published/20180830 How To Reset MySQL Or MariaDB Root Password.md rename to published/201809/20180830 How To Reset MySQL Or MariaDB Root Password.md diff --git a/published/20180830 How to Update Firmware on Ubuntu 18.04.md b/published/201809/20180830 How to Update Firmware on Ubuntu 18.04.md similarity index 100% rename from published/20180830 How to Update Firmware on Ubuntu 18.04.md rename to published/201809/20180830 How to Update Firmware on Ubuntu 18.04.md diff --git a/published/20180831 6 open source tools for making your own VPN.md b/published/201809/20180831 6 open source tools for making your own VPN.md similarity index 100% rename from published/20180831 6 open source tools for making your own VPN.md rename to published/201809/20180831 6 open source tools for making your own VPN.md diff --git a/published/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md b/published/201809/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md similarity index 100% rename from published/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md 
rename to published/201809/20180831 How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions.md diff --git a/published/20180903 Turn your vi editor into a productivity powerhouse.md b/published/201809/20180903 Turn your vi editor into a productivity powerhouse.md similarity index 100% rename from published/20180903 Turn your vi editor into a productivity powerhouse.md rename to published/201809/20180903 Turn your vi editor into a productivity powerhouse.md diff --git a/published/20180904 8 Linux commands for effective process management.md b/published/201809/20180904 8 Linux commands for effective process management.md similarity index 100% rename from published/20180904 8 Linux commands for effective process management.md rename to published/201809/20180904 8 Linux commands for effective process management.md diff --git a/published/20180904 Why I love Xonsh.md b/published/201809/20180904 Why I love Xonsh.md similarity index 100% rename from published/20180904 Why I love Xonsh.md rename to published/201809/20180904 Why I love Xonsh.md diff --git a/published/20180905 5 tips to improve productivity with zsh.md b/published/201809/20180905 5 tips to improve productivity with zsh.md similarity index 100% rename from published/20180905 5 tips to improve productivity with zsh.md rename to published/201809/20180905 5 tips to improve productivity with zsh.md diff --git a/published/20180905 8 great Python libraries for side projects.md b/published/201809/20180905 8 great Python libraries for side projects.md similarity index 100% rename from published/20180905 8 great Python libraries for side projects.md rename to published/201809/20180905 8 great Python libraries for side projects.md diff --git a/published/20180905 Find your systems easily on a LAN with mDNS.md b/published/201809/20180905 Find your systems easily on a LAN with mDNS.md similarity index 100% rename from published/20180905 Find your systems easily on a LAN with mDNS.md rename to 
published/201809/20180905 Find your systems easily on a LAN with mDNS.md diff --git a/published/20180906 3 top open source JavaScript chart libraries.md b/published/201809/20180906 3 top open source JavaScript chart libraries.md similarity index 100% rename from published/20180906 3 top open source JavaScript chart libraries.md rename to published/201809/20180906 3 top open source JavaScript chart libraries.md diff --git a/published/20180906 Two open source alternatives to Flash Player.md b/published/201809/20180906 Two open source alternatives to Flash Player.md similarity index 100% rename from published/20180906 Two open source alternatives to Flash Player.md rename to published/201809/20180906 Two open source alternatives to Flash Player.md diff --git a/published/20180907 Autotrash - A CLI Tool To Automatically Purge Old Trashed Files.md b/published/201809/20180907 Autotrash - A CLI Tool To Automatically Purge Old Trashed Files.md similarity index 100% rename from published/20180907 Autotrash - A CLI Tool To Automatically Purge Old Trashed Files.md rename to published/201809/20180907 Autotrash - A CLI Tool To Automatically Purge Old Trashed Files.md diff --git a/published/20180907 What do open source and cooking have in common.md b/published/201809/20180907 What do open source and cooking have in common.md similarity index 100% rename from published/20180907 What do open source and cooking have in common.md rename to published/201809/20180907 What do open source and cooking have in common.md diff --git a/published/20180909 What is ZFS- Why People Use ZFS- [Explained for Beginners].md b/published/201809/20180909 What is ZFS- Why People Use ZFS- [Explained for Beginners].md similarity index 100% rename from published/20180909 What is ZFS- Why People Use ZFS- [Explained for Beginners].md rename to published/201809/20180909 What is ZFS- Why People Use ZFS- [Explained for Beginners].md diff --git a/published/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User 
Should Know.md b/published/201809/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md similarity index 100% rename from published/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md rename to published/201809/20180910 13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know.md diff --git a/published/20180910 3 open source log aggregation tools.md b/published/201809/20180910 3 open source log aggregation tools.md similarity index 100% rename from published/20180910 3 open source log aggregation tools.md rename to published/201809/20180910 3 open source log aggregation tools.md diff --git a/published/20180910 Randomize your MAC address using NetworkManager.md b/published/201809/20180910 Randomize your MAC address using NetworkManager.md similarity index 100% rename from published/20180910 Randomize your MAC address using NetworkManager.md rename to published/201809/20180910 Randomize your MAC address using NetworkManager.md diff --git a/published/20180911 Visualize Disk Usage On Your Linux System.md b/published/201809/20180911 Visualize Disk Usage On Your Linux System.md similarity index 100% rename from published/20180911 Visualize Disk Usage On Your Linux System.md rename to published/201809/20180911 Visualize Disk Usage On Your Linux System.md diff --git a/published/20180912 How To Configure Mouse Support For Linux Virtual Consoles.md b/published/201809/20180912 How To Configure Mouse Support For Linux Virtual Consoles.md similarity index 100% rename from published/20180912 How To Configure Mouse Support For Linux Virtual Consoles.md rename to published/201809/20180912 How To Configure Mouse Support For Linux Virtual Consoles.md diff --git a/published/20180917 Linux tricks that can save you time and trouble.md b/published/201809/20180917 Linux tricks that can save you time and trouble.md similarity index 100% rename from published/20180917 Linux tricks that can save you time and trouble.md rename to published/201809/20180917 Linux 
tricks that can save you time and trouble.md diff --git a/published/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md b/published/201809/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md similarity index 100% rename from published/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md rename to published/201809/20180918 How To Force APT Package Manager To Use IPv4 In Ubuntu 16.04.md diff --git a/published/20180919 Understand Fedora memory usage with top.md b/published/201809/20180919 Understand Fedora memory usage with top.md similarity index 100% rename from published/20180919 Understand Fedora memory usage with top.md rename to published/201809/20180919 Understand Fedora memory usage with top.md diff --git a/translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md b/published/201809/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md similarity index 70% rename from translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md rename to published/201809/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md index efca96da23..6267fad2e8 100644 --- a/translated/tech/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md +++ b/published/201809/20180924 Make The Output Of Ping Command Prettier And Easier To Read.md @@ -3,21 +3,19 @@ ![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-720x340.png) -众所周知,`ping` 命令可以用来检查目标主机是否可达。使用 `ping` 命令的时候,会发送一个 ICMP Echo 请求,通过目标主机的响应与否来确定目标主机的状态。如果你经常使用 `ping` 命令,你可以尝试一下 `prettyping`。Prettyping 只是将一个标准的 ping 工具增加了一层封装,在运行标准 ping 命令的同时添加了颜色和 unicode 字符解析输出,所以它的输出更漂亮紧凑、清晰易读。它是用 `bash` 和 `awk` 编写的免费开源工具,支持大部分类 Unix 操作系统,包括 GNU/Linux、FreeBSD 和 Mac OS X。Prettyping 除了美化 ping 命令的输出,还有很多值得注意的功能。 +众所周知,`ping` 命令可以用来检查目标主机是否可达。使用 `ping` 命令的时候,会发送一个 ICMP Echo 请求,通过目标主机的响应与否来确定目标主机的状态。如果你经常使用 `ping` 命令,你可以尝试一下 `prettyping`。Prettyping 只是将一个标准的 ping 
工具增加了一层封装,在运行标准 `ping` 命令的同时添加了颜色和 unicode 字符解析输出,所以它的输出更漂亮紧凑、清晰易读。它是用 `bash` 和 `awk` 编写的自由开源工具,支持大部分类 Unix 操作系统,包括 GNU/Linux、FreeBSD 和 Mac OS X。Prettyping 除了美化 `ping` 命令的输出,还有很多值得注意的功能。 * 检测丢失的数据包并在输出中标记出来。 - * 显示实时数据。每次收到响应后,都会更新统计数据,而对于普通 ping 命令,只会在执行结束后统计。 - * 能够在输出结果不混乱的前提下灵活处理“未知信息”(例如错误信息)。 + * 显示实时数据。每次收到响应后,都会更新统计数据,而对于普通 `ping` 命令,只会在执行结束后统计。 + * 可以灵活处理“未知信息”(例如错误信息),而不搞乱输出结果。 * 能够避免输出重复的信息。 - * 兼容常用的 ping 工具命令参数。 + * 兼容常用的 `ping` 工具命令参数。 * 能够由普通用户执行。 * 可以将输出重定向到文件中。 * 不需要安装,只需要下载二进制文件,赋予可执行权限即可执行。 * 快速且轻巧。 * 输出结果清晰直观。 - - ### 安装 Prettyping 如上所述,Prettyping 是一个绿色软件,不需要任何安装,只要使用以下命令下载 Prettyping 二进制文件: @@ -52,9 +50,9 @@ $ prettyping ostechnix.com ![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-in-action.gif) -如果你不带任何参数执行 `prettyping`,它就会一直运行直到被 ctrl + c 中断。 +如果你不带任何参数执行 `prettyping`,它就会一直运行直到被 `ctrl + c` 中断。 -由于 Prettyping 只是一个对普通 ping 命令的封装,所以常用的 ping 参数也是有效的。例如使用 `-c 5` 来指定 ping 一台主机的 5 次: +由于 Prettyping 只是一个对普通 `ping` 命令的封装,所以常用的 ping 参数也是有效的。例如使用 `-c 5` 来指定 ping 一台主机的 5 次: ``` $ prettyping -c 5 ostechnix.com @@ -76,7 +74,7 @@ $ prettyping --nomulticolor ostechnix.com ![](https://www.ostechnix.com/wp-content/uploads/2018/09/prettyping-without-unicode-support.png) -如果你的终端不支持 **UTF-8**,或者无法修复系统中的 unicode 字体,只需要加上 `--nounicode` 参数就能轻松解决。 +如果你的终端不支持 UTF-8,或者无法修复系统中的 unicode 字体,只需要加上 `--nounicode` 参数就能轻松解决。 Prettyping 支持将输出的内容重定向到文件中,例如执行以下这个命令会将 `prettyping ostechnix.com` 的输出重定向到 `ostechnix.txt` 中: @@ -89,10 +87,9 @@ Prettyping 还有很多选项帮助你完成各种任务,例如: * 启用/禁用延时图例(默认启用) * 强制按照终端的格式输出(默认自动) * 在统计数据中统计最后的 n 次 ping(默认 60 次) - * 覆盖对终端尺寸的检测 - * 覆盖 awk 解释器(默认不覆盖) - * 覆盖 ping 工具(默认不覆盖) - + * 覆盖对终端尺寸的自动检测 + * 指定 awk 解释器路径(默认:`awk`) + * 指定 ping 工具路径(默认:`ping`) 查看帮助文档可以了解更多: @@ -101,18 +98,14 @@ Prettyping 还有很多选项帮助你完成各种任务,例如: $ prettyping --help ``` -尽管 prettyping 没有添加任何额外功能,但我个人喜欢它的这些优点: +尽管 Prettyping 没有添加任何额外功能,但我个人喜欢它的这些优点: - * 实时统计 - 可以随时查看所有实时统计信息,标准 `ping` 命令只会在命令执行结束后才显示统计信息。 - * 紧凑的显示 - 可以在终端看到更长的时间跨度。 + * 实时统计 —— 可以随时查看所有实时统计信息,标准 `ping` 
命令只会在命令执行结束后才显示统计信息。 + * 紧凑的显示 —— 可以在终端看到更长的时间跨度。 * 检测丢失的数据包并显示出来。 - - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/prettyping-make-the-output-of-ping-command-prettier-and-easier-to-read/ @@ -120,7 +113,7 @@ via: https://www.ostechnix.com/prettyping-make-the-output-of-ping-command-pretti 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180929 Getting started with the i3 window manager on Linux.md b/published/201809/20180929 Getting started with the i3 window manager on Linux.md similarity index 100% rename from published/20180929 Getting started with the i3 window manager on Linux.md rename to published/201809/20180929 Getting started with the i3 window manager on Linux.md diff --git a/translated/tech/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md b/published/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md similarity index 68% rename from translated/tech/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md rename to published/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md index b8872981fe..c6618b9a52 100644 --- a/translated/tech/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md +++ b/published/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md @@ -1,6 +1,7 @@ -5 种在 Linux 图形界面或命令行界面截图的方法 +在 Linux 下截屏并编辑的最佳工具 ====== -下面介绍几种获取屏幕截图并对其编辑的方法,而且其中的屏幕截图工具在 Ubuntu 和其它主流 Linux 发行版中都能够使用。 + +> 有几种获取屏幕截图并对其进行添加文字、箭头等编辑的方法,这里提及的屏幕截图工具在 Ubuntu 和其它主流 Linux 发行版中都能够使用。 ![在 Ubuntu Linux 中如何获取屏幕截图][1] @@ -8,26 +9,26 @@ 本文将会介绍在不使用第三方工具的情况下,如何通过系统自带的方法和工具获取屏幕截图,另外还会介绍一些可用于 Linux 的最佳截图工具。 -### 方法 1: 在 Linux 中截图的默认方式 +### 方法 1:在
Linux 中截图的默认方式 -你是否需要截取整个屏幕?屏幕中的某个区域?某个特定的窗口? +你想要截取整个屏幕?屏幕中的某个区域?某个特定的窗口? 如果只需要获取一张屏幕截图,不对其进行编辑的话,那么键盘的默认快捷键就可以满足要求了。而且不仅仅是 Ubuntu ,绝大部分的 Linux 发行版和桌面环境都支持以下这些快捷键: -**PrtSc** – 获取整个屏幕的截图并保存到 Pictures 目录。 -**Shift + PrtSc** – 获取屏幕的某个区域截图并保存到 Pictures 目录。 -**Alt + PrtSc** –获取当前窗口的截图并保存到 Pictures 目录。 -**Ctrl + PrtSc** – 获取整个屏幕的截图并存放到剪贴板。 -**Shift + Ctrl + PrtSc** – 获取屏幕的某个区域截图并存放到剪贴板。 -**Ctrl + Alt + PrtSc** – 获取当前窗口的 截图并存放到剪贴板。 +- `PrtSc` – 获取整个屏幕的截图并保存到 Pictures 目录。 +- `Shift + PrtSc` – 获取屏幕的某个区域截图并保存到 Pictures 目录。 +- `Alt + PrtSc` –获取当前窗口的截图并保存到 Pictures 目录。 +- `Ctrl + PrtSc` – 获取整个屏幕的截图并存放到剪贴板。 +- `Shift + Ctrl + PrtSc` – 获取屏幕的某个区域截图并存放到剪贴板。 +- `Ctrl + Alt + PrtSc` – 获取当前窗口的 截图并存放到剪贴板。 如上所述,在 Linux 中使用默认的快捷键获取屏幕截图是相当简单的。但如果要在不把屏幕截图导入到其它应用程序的情况下对屏幕截图进行编辑,还是使用屏幕截图工具比较方便。 -#### **方法 2: 在 Linux 中使用 Flameshot 获取屏幕截图并编辑** +### 方法 2:在 Linux 中使用 Flameshot 获取屏幕截图并编辑 ![flameshot][2] -功能概述 +功能概述: * 注释 (高亮、标示、添加文本、框选) * 图片模糊 @@ -35,66 +36,63 @@ * 上传到 Imgur * 用另一个应用打开截图 +Flameshot 在去年发布到 [GitHub][3],并成为一个引人注目的工具。 - -Flameshot 在去年发布到 [GitHub][3],并成为一个引人注目的工具。如果你需要的是一个能够用于标注、模糊、上传到 imgur 的新式截图工具,那么 Flameshot 是一个好的选择。 +如果你需要的是一个能够用于标注、模糊、上传到 imgur 的新式截图工具,那么 Flameshot 是一个好的选择。 下面将会介绍如何安装 Flameshot 并根据你的偏好进行配置。 如果你用的是 Ubuntu,那么只需要在 Ubuntu 软件中心上搜索,就可以找到 Flameshot 进而完成安装了。要是你想使用终端来安装,可以执行以下命令: + ``` sudo apt install flameshot - ``` -如果你在安装过程中遇到问题,可以按照[官方的安装说明][4]进行操作。安装完成后,你还需要进行配置。尽管可以通过搜索来随时启动 Flameshot,但如果想使用 PrtSc 键触发启动,则需要指定对应的键盘快捷键。以下是相关配置步骤: +如果你在安装过程中遇到问题,可以按照[官方的安装说明][4]进行操作。安装完成后,你还需要进行配置。尽管可以通过搜索来随时启动 Flameshot,但如果想使用 `PrtSc` 键触发启动,则需要指定对应的键盘快捷键。以下是相关配置步骤: - * 进入系统设置中的键盘设置 - * 页面中会列出所有现有的键盘快捷键,拉到底部就会看见一个 **+** 按钮 + * 进入系统设置中的“键盘设置” + * 页面中会列出所有现有的键盘快捷键,拉到底部就会看见一个 “+” 按钮 * 点击 “+” 按钮添加自定义快捷键并输入以下两个字段: -**名称:** 任意名称均可 -**命令:** /usr/bin/flameshot gui - * 最后将这个快捷操作绑定到 **PrtSc** 键上,可能会提示与系统的截图功能相冲突,但可以忽略掉这个警告。 - - + * “名称”: 任意名称均可。 + * “命令”: `/usr/bin/flameshot gui` + * 最后将这个快捷操作绑定到 `PrtSc` 键上,可能会提示与系统的截图功能相冲突,但可以忽略掉这个警告。 配置之后,你的自定义快捷键页面大概会是以下这样: ![][5] -将键盘快捷键映射到 Flameshot 
-### **方法 3: 在 Linux 中使用 Shutter 获取屏幕截图并编辑** +*将键盘快捷键映射到 Flameshot* + +### 方法 3:在 Linux 中使用 Shutter 获取屏幕截图并编辑 ![][6] -功能概述: +功能概述: * 注释 (高亮、标示、添加文本、框选) * 图片模糊 * 图片裁剪 * 上传到图片网站 - - [Shutter][7] 是一个对所有主流 Linux 发行版都适用的屏幕截图工具。尽管最近已经不太更新了,但仍然是操作屏幕截图的一个优秀工具。 -在使用过程中可能会遇到这个工具的一些缺陷。Shutter 在任何一款最新的 Linux 发行版上最常见的问题就是由于缺少了任务栏上的程序图标,导致默认禁用了编辑屏幕截图的功能。 对于这个缺陷,还是有解决方案的。下面介绍一下如何[在 Shutter 中重新打开这个功能并将程序图标在任务栏上显示出来][8]。问题修复后,就可以使用 Shutter 来快速编辑屏幕截图了。 +在使用过程中可能会遇到这个工具的一些缺陷。Shutter 在任何一款最新的 Linux 发行版上最常见的问题就是由于缺少了任务栏上的程序图标,导致默认禁用了编辑屏幕截图的功能。 对于这个缺陷,还是有解决方案的。你只需要跟随我们的教程[在 Shutter 中修复这个禁止编辑选项并将程序图标在任务栏上显示出来][8]。问题修复后,就可以使用 Shutter 来快速编辑屏幕截图了。 同样地,在软件中心搜索也可以找到进而安装 Shutter,也可以在基于 Ubuntu 的发行版中执行以下命令使用命令行安装: + ``` sudo apt install shutter - ``` -类似 Flameshot,你可以通过搜索 Shutter 手动启动它,也可以按照相似的方式设置自定义快捷方式以 **PrtSc** 键唤起 Shutter。 +类似 Flameshot,你可以通过搜索 Shutter 手动启动它,也可以按照相似的方式设置自定义快捷方式以 `PrtSc` 键唤起 Shutter。 如果要指定自定义键盘快捷键,只需要执行以下命令: + ``` shutter -f - ``` -### 方法 4: 在 Linux 中使用 GIMP 获取屏幕截图 +### 方法 4:在 Linux 中使用 GIMP 获取屏幕截图 ![][9] @@ -103,83 +101,79 @@ shutter -f * 高级图像编辑功能(缩放、添加滤镜、颜色校正、添加图层、裁剪等) * 截取某一区域的屏幕截图 - - 如果需要对屏幕截图进行一些预先编辑,GIMP 是一个不错的选择。 通过软件中心可以安装 GIMP。如果在安装时遇到问题,可以参考其[官方网站的安装说明][10]。 -要使用 GIMP 获取屏幕截图,需要先启动程序,然后通过 **File-> Create-> Screenshot** 导航。 +要使用 GIMP 获取屏幕截图,需要先启动程序,然后通过 “File-> Create-> Screenshot” 导航。 -打开 Screenshot 选项后,会看到几个控制点来控制屏幕截图范围。点击 **Snap** 截取屏幕截图,图像将自动显示在 GIMP 中可供编辑。 +打开 Screenshot 选项后,会看到几个控制点来控制屏幕截图范围。点击 “Snap” 截取屏幕截图,图像将自动显示在 GIMP 中可供编辑。 -### 方法 5: 在 Linux 中使用命令行工具获取屏幕截图 +### 方法 5:在 Linux 中使用命令行工具获取屏幕截图 -这一节内容仅适用于终端爱好者。如果你也喜欢使用终端,可以使用 **GNOME 截图工具**或 **ImageMagick** 或 **Deepin Scrot**,大部分流行的 Linux 发行版中都自带这些工具。 +这一节内容仅适用于终端爱好者。如果你也喜欢使用终端,可以使用 “GNOME 截图工具”或 “ImageMagick” 或 “Deepin Scrot”,大部分流行的 Linux 发行版中都自带这些工具。 要立即获取屏幕截图,可以执行以下命令: -#### GNOME Screenshot(可用于 GNOME 桌面) +#### GNOME 截图工具(可用于 GNOME 桌面) + ``` gnome-screenshot - ``` -GNOME Screenshot 是使用 GNOME 桌面的 Linux 发行版中都自带的一个默认工具。如果需要延时获取屏幕截图,可以执行以下命令(这里的 **5** 是需要延迟的秒数): +GNOME 截图工具是使用 GNOME 桌面的 Linux 
发行版中都自带的一个默认工具。如果需要延时获取屏幕截图,可以执行以下命令(这里的 `5` 是需要延迟的秒数): ``` gnome-screenshot -d -5 - ``` #### ImageMagick 如果你的操作系统是 Ubuntu、Mint 或其它流行的 Linux 发行版,一般会自带 [ImageMagick][11] 这个工具。如果没有这个工具,也可以按照[官方安装说明][12]使用安装源来安装。你也可以在终端中执行这个命令: + ``` sudo apt-get install imagemagick - ``` 安装完成后,执行下面的命令就可以获取到屏幕截图(截取整个屏幕): ``` import -window root image.png - ``` -这里的“image.png”就是屏幕截图文件保存的名称。 +这里的 “image.png” 就是屏幕截图文件保存的名称。 要获取屏幕一个区域的截图,可以执行以下命令: + ``` import image.png - ``` #### Deepin Scrot Deepin Scrot 是基于终端的一个较新的截图工具。和前面两个工具类似,一般自带于 Linux 发行版中。如果需要自行安装,可以执行以下命令: + ``` sudo apt-get install scrot - ``` 安装完成后,使用下面这些命令可以获取屏幕截图。 获取整个屏幕的截图: + ``` scrot myimage.png - ``` 获取屏幕某一区域的截图: + ``` scrot -s myimage.png - ``` ### 总结 -以上是一些在 Linux 上的优秀截图工具。当然还有很多截图工具没有提及(例如 [Spectacle][13] for KDE-distros),但相比起来还是上面几个工具更为好用。 +以上是一些在 Linux 上的优秀截图工具。当然还有很多截图工具没有提及(例如用于 KDE 发行版的 [Spectacle][13]),但相比起来还是上面几个工具更为好用。 如果你有比文章中提到的更好的截图工具,欢迎讨论! @@ -189,8 +183,8 @@ via: https://itsfoss.com/take-screenshot-linux/ 作者:[Ankush Das][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md b/published/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md new file mode 100644 index 0000000000..046777e1be --- /dev/null +++ b/published/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md @@ -0,0 +1,171 @@ +在 Linux 中使用 Wondershaper 限制网络带宽 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/Wondershaper-1-720x340.jpg) + +以下内容将向你介绍如何轻松对网络带宽做出限制,并在类 Unix 操作系统中对网络流量进行优化。通过限制网络带宽,可以节省应用程序不必要的带宽消耗,包括软件包管理器(pacman、yum、apt)、web 浏览器、torrent 客户端、下载管理器等,并防止单个或多个用户滥用网络带宽。在本文当中,将会介绍 Wondershaper 这一个实用的命令行程序,这是我认为限制 Linux 系统 Internet 或本地网络带宽的最简单、最快捷的方式之一。 + 
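Wondershaper 的各个操作都以网卡名作为参数。如果想在脚本里自动取得默认路由所用的网卡名,可以参考下面这个假设性的 shell 片段;其中的示例输出和网卡名 `enp0s8` 都只是演示用的假设值,解析函数本身只做文本处理,不需要 root 权限:

```shell
# 从 `ip route show default` 的输出中解析默认网卡名(演示用的假设性函数)
get_default_iface() {
    # $1 为 `ip route show default` 的输出文本
    printf '%s\n' "$1" | awk '/^default/ {for (i = 1; i <= NF; i++) if ($i == "dev") {print $(i+1); exit}}'
}

# 用一行示例输出演示解析逻辑(enp0s8 仅为假设的网卡名)
sample='default via 192.168.1.1 dev enp0s8 proto dhcp metric 100'
get_default_iface "$sample"    # 输出:enp0s8

# 实际使用时(需要 root 权限,此处仅作示意):
#   iface=$(get_default_iface "$(ip route show default)")
#   sudo wondershaper -a "$iface" -d 1024 -u 512
```

实际使用时,把 `ip route show default` 的真实输出传给该函数,再把结果作为 `wondershaper -a` 的参数即可。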
+请注意,Wondershaper 只能限制本地网络接口的传入和传出流量,而不能限制路由器或调制解调器的接口。换句话说,Wondershaper 只会限制本地系统本身的网络带宽,而不会限制网络中的其它系统。因此 Wondershaper 主要用于限制本地系统中一个或多个网卡的带宽。 + +下面来看一下 Wondershaper 是如何优化网络流量的。 + +### 在 Linux 中使用 Wondershaper 限制网络带宽 + +`wondershaper` 是用于限制系统网卡网络带宽的简单脚本。它使用了 iproute 的 `tc` 命令,但大大简化了操作过程。 + +#### 安装 Wondershaper + +使用 `git clone` 克隆 Wondershaper 的版本库就可以安装最新版本: + +``` +$ git clone https://github.com/magnific0/wondershaper.git +``` + +按照以下命令进入 `wondershaper` 目录并安装: + +``` +$ cd wondershaper +$ sudo make install +``` + +然后执行以下命令,可以让 `wondershaper` 在每次系统启动时都自动开始服务: + +``` +$ sudo systemctl enable wondershaper.service +$ sudo systemctl start wondershaper.service +``` + +如果你不强求安装最新版本,也可以使用软件包管理器(官方和非官方均可)来进行安装。 + +`wondershaper` 在 [Arch 用户软件仓库][1](Arch User Repository,AUR)中可用,所以可以使用类似 [yay][2] 这些 AUR 辅助软件在基于 Arch 的系统中安装 `wondershaper`。 + +``` +$ yay -S wondershaper-git +``` + +对于 Debian、Ubuntu 和 Linux Mint 可以使用以下命令安装: + +``` +$ sudo apt-get install wondershaper +``` + +对于 Fedora 可以使用以下命令安装: + +``` +$ sudo dnf install wondershaper +``` + +对于 RHEL、CentOS,只需要启用 EPEL 仓库,就可以使用以下命令安装: + +``` +$ sudo yum install epel-release +$ sudo yum install wondershaper +``` + +同样,执行以下命令可以让 `wondershaper` 服务在每次系统启动时自动启动。 + +``` +$ sudo systemctl enable wondershaper.service +$ sudo systemctl start wondershaper.service +``` + +#### 用法 + +首先需要找到网络接口的名称,通过以下几个命令都可以查询到网卡的详细信息: + +``` +$ ip addr +$ route +$ ifconfig +``` + +在确定网卡名称以后,就可以按照以下的命令限制网络带宽: + +``` +$ sudo wondershaper -a <网卡名> -d <下行速率> -u <上行速率> +``` + +例如,如果网卡名称是 `enp0s8`,并且需要把下行、上行速率分别限制为 1024 Kbps 和 512 Kbps,就可以执行以下命令: + +``` +$ sudo wondershaper -a enp0s8 -d 1024 -u 512 +``` + +其中参数的含义是: + + * `-a`:网卡名称 + * `-d`:下行带宽 + * `-u`:上行带宽 + +如果要对网卡解除网络带宽的限制,只需要执行: + +``` +$ sudo wondershaper -c -a enp0s8 +``` + +或者: + +``` +$ sudo wondershaper -c enp0s8 +``` + +如果系统中有多个网卡,为确保稳妥,需要按照上面的方法手动设置每个网卡的上行、下行速率。 + +如果你是通过 `git clone` 克隆 GitHub 版本库的方式安装 Wondershaper,那么在 `/etc/conf.d/` 目录中会存在一个名为 `wondershaper.conf` 的配置文件,修改这个配置文件中的相应值(包括网卡名称、上行速率、下行速率),同样可以实现限速。 + +``` +$
sudo nano /etc/conf.d/wondershaper.conf + +[wondershaper] +# Adapter +# +IFACE="eth0" + +# Download rate in Kbps +# +DSPEED="2048" + +# Upload rate in Kbps +# +USPEED="512" +``` + +Wondershaper 使用前: + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/wondershaper-1.png) + +Wondershaper 使用后: + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/wondershaper-2.png) + +可以看到,使用 Wondershaper 限制网络带宽之后,下行速率与限制之前相比已经大幅下降。 + +执行以下命令可以查看更多相关信息。 + +``` +$ wondershaper -h +``` + +也可以查看 Wondershaper 的用户手册: + +``` +$ man wondershaper +``` + +根据测试,Wondershaper 按照上面的方式可以有很好的效果。你可以试用一下,然后发表你的看法。 + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-limit-network-bandwidth-in-linux-using-wondershaper/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://aur.archlinux.org/packages/wondershaper-git/ +[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ + diff --git a/published/20180907 How to Use the Netplan Network Configuration Tool on Linux.md b/published/20180907 How to Use the Netplan Network Configuration Tool on Linux.md new file mode 100644 index 0000000000..c4691e9651 --- /dev/null +++ b/published/20180907 How to Use the Netplan Network Configuration Tool on Linux.md @@ -0,0 +1,181 @@ +如何在 Linux 上使用网络配置工具 Netplan +====== +> netplan 是一个命令行工具,用于在某些 Linux 发行版上配置网络。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan.jpg?itok=Gu_ZfNGa) + +多年以来 Linux 管理员和用户们以相同的方式配置他们的网络接口。例如,如果你是 Ubuntu 用户,你能够用桌面 GUI 配置网络连接,也可以在 `/etc/network/interfaces` 文件里配置。配置相当简单且可以奏效。在文件中配置看起来就像这样: + +``` +auto enp10s0 +iface enp10s0 inet static +address 192.168.1.162 +netmask 255.255.255.0 +gateway 192.168.1.100 +dns-nameservers 
1.0.0.1,1.1.1.1 +``` + +保存并关闭文件。使用命令重启网络: + +``` +sudo systemctl restart networking +``` + +或者,如果你使用不带 systemd 的发行版,你可以通过老办法来重启网络: + +``` +sudo /etc/init.d/networking restart +``` + +你的网络将会重新启动,新的配置将会生效。 + +这就是多年以来的做法。但是现在,在某些发行版上(例如 Ubuntu Linux 18.04),网络的配置与控制发生了很大的变化。不需要那个 `interfaces` 文件和 `/etc/init.d/networking` 脚本,我们现在转向使用 [Netplan][1]。Netplan 是一个在某些 Linux 发行版上配置网络连接的命令行工具。Netplan 使用 YAML 描述文件来配置网络接口,然后,通过这些描述为任何给定的呈现工具生成必要的配置选项。 + +我将向你展示如何在 Linux 上使用 Netplan 配置静态 IP 地址和 DHCP 地址。我会在 Ubuntu Server 18.04 上演示。有句忠告,你创建的 .yaml 文件中的缩进必须保持一致,否则将会失败。你不用为每行使用特定的缩进间距,只需保持一致就行了。 + +### 新的配置文件 + +打开终端窗口(或者通过 SSH 登录进 Ubuntu 服务器)。你会在 `/etc/netplan` 文件夹下发现 Netplan 的新配置文件。使用 `cd /etc/netplan` 命令进入到那个文件夹下。一旦进到了那个文件夹,也许你就能够看到一个文件: + +``` +01-netcfg.yaml +``` + +你可以创建一个新的文件或者是编辑默认文件。如果你打算修改默认文件,我建议你先做一个备份: + +``` +sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak +``` + +备份好后,就可以开始配置了。 + +### 网络设备名称 + +在你开始配置静态 IP 之前,你需要知道设备名称。要做到这一点,你可以使用命令 `ip a`,然后找出哪一个设备将会被用到(图 1)。 + +![netplan][3] + +*图 1:使用 ip a 命令找出设备名称* + +我将为 ens5 配置一个静态的 IP。 + +### 配置静态 IP 地址 + +使用命令打开原来的 .yaml 文件: + +``` +sudo nano /etc/netplan/01-netcfg.yaml +``` + +文件的布局看起来就像这样: + +``` +network: + Version: 2 + Renderer: networkd + ethernets: + DEVICE_NAME: + Dhcp4: yes/no + Addresses: [IP/NETMASK] + Gateway: GATEWAY + Nameservers: + Addresses: [NAMESERVER, NAMESERVER] +``` + +其中: + + * `DEVICE_NAME` 是需要配置设备的实际名称。 + * `yes`/`no` 代表是否启用 dhcp4。 + * `IP` 是设备的 IP 地址。 + * `NETMASK` 是 IP 地址的掩码。 + * `GATEWAY` 是网关的地址。 + * `NAMESERVER` 是由逗号分开的 DNS 服务器列表。 + +这是一份 .yaml 文件的样例: + +``` +network: + version: 2 + renderer: networkd + ethernets: + ens5: + dhcp4: no + addresses: [192.168.1.230/24] + gateway4: 192.168.1.254 + nameservers: + addresses: [8.8.4.4,8.8.8.8] +``` + +编辑上面的文件以达到你想要的效果。保存并关闭文件。 + +注意,掩码已经不用再配置为 `255.255.255.0` 这种形式。取而代之的是,掩码已被添加进了 IP 地址中。 + +### 测试配置 + +在应用改变之前,让我们测试一下配置。为此,使用命令: + +``` +sudo netplan try +``` + +上面的命令会在应用配置之前验证其是否有效。如果成功,你就会看到配置被接受。换句话说,Netplan 
会尝试将新的配置应用到运行的系统上。如果新的配置失败了,Netplan 会自动地恢复到之前使用的配置。成功后,新的配置就会被使用。 + +### 应用新的配置 + +如果你确信配置文件没有问题,你就可以跳过测试环节并且直接使用新的配置。它的命令是: + +``` +sudo netplan apply +``` + +此时,你可以使用 ip a 看看新的地址是否正确。 + +### 配置 DHCP + +虽然你可能不会配置 DHCP 服务,但通常还是知道比较好。例如,你也许不知道网络上当前可用的静态 IP 地址是多少。你可以为设备配置 DHCP,获取到 IP 地址,然后将那个地址重新配置为静态地址。 + +在 Netplan 上使用 DHCP,配置文件看起来就像这样: + +``` +network: + version: 2 + renderer: networkd + ethernets: + ens5: + Addresses: [] + dhcp4: true + optional: true +``` + +保存并退出。用下面命令来测试文件: + +``` +sudo netplan try +``` + +Netplan 应该会成功配置 DHCP 服务。这时你可以使用 `ip a` 命令得到动态分配的地址,然后重新配置静态地址。或者,你可以直接使用 DHCP 分配的地址(但看看这是一个服务器,你可能不想这样做)。 + +也许你有不只一个的网络接口,你可以命名第二个 .yaml 文件为 `02-netcfg.yaml` 。Netplan 会按照数字顺序应用配置文件,因此 01 会在 02 之前使用。根据你的需要创建多个配置文件。 + +### 就是这些了 + +不管怎样,那些就是所有关于使用 Netplan 的东西了。虽然它对于我们习惯性的配置网络地址来说是一个相当大的改变,但并不是所有人都用的惯。但这种配置方式值得一提……因此你会适应的。 + +在 Linux Foundation 和 edX 上通过 [“Introduction to Linux”][5] 课程学习更多关于 Linux 的内容。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/9/how-use-netplan-network-configuration-tool-linux + +作者:[Jack Wallen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[LuuMing](https://github.com/LuuMing) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[1]: https://netplan.io/ +[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan_1.jpg?itok=XuIsXWbV (netplan) +[4]: /licenses/category/used-permission +[5]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20180913 ScreenCloud- The Screenshot-- App.md b/published/20180913 ScreenCloud- The Screenshot-- App.md similarity index 63% rename from translated/tech/20180913 ScreenCloud- The Screenshot-- App.md rename to published/20180913 ScreenCloud- The Screenshot-- App.md index 
a7002183c3..54a36dd377 100644 --- a/translated/tech/20180913 ScreenCloud- The Screenshot-- App.md +++ b/published/20180913 ScreenCloud- The Screenshot-- App.md @@ -1,43 +1,46 @@ -ScreenCloud:一个截屏程序 +ScreenCloud:一个增强的截屏程序 ====== -[ScreenCloud][1]是一个很棒的小程序,你甚至不知道你需要它。桌面 Linux 的默认屏幕截图流程很好(Prt Scr 按钮),我们甚至有一些[强大的截图工具][2],如 [Shutter][3]。但是,ScreenCloud 有一个非常简单但非常方便的功能,让我爱上了它。在我们深入它之前,让我们先看一个背景故事。 -我截取了很多截图。远远超过平均水平。收据、注册详细信息、开发工作、文章中程序的截图等等。我接下来要做的就是打开浏览器,浏览我最喜欢的云存储并将重要的内容转储到那里,以便我可以在手机上以及 PC 上的多个操作系统上访问它们。这也让我可以轻松与我的团队分享我正在使用的程序的截图。 +[ScreenCloud][1]是一个很棒的小程序,你甚至不知道你需要它。桌面 Linux 的默认屏幕截图流程很好(`PrtScr` 按钮),我们甚至有一些[强大的截图工具][2],如 [Shutter][3]。但是,ScreenCloud 有一个非常简单但非常方便的功能,让我爱上了它。在我们深入它之前,让我们先看一个背景故事。 + +我截取了很多截图,远超常人。收据、注册详细信息、开发工作、文章中程序的截图等等。我接下来要做的就是打开浏览器,浏览我最喜欢的云存储并将重要的内容转储到那里,以便我可以在手机上以及 PC 上的多个操作系统上访问它们。这也让我可以轻松与我的团队分享我正在使用的程序的截图。 我对这个标准的截图流程没有抱怨,打开浏览器并登录我的云,然后手动上传屏幕截图,直到我遇到 ScreenCloud。 ### ScreenCloud -ScreenCloud 是跨平台的程序,它提供简单的屏幕截图和灵活的[云备份选项][4]管理。这包括使用你自己的[ FTP 服务器][5]。 +ScreenCloud 是跨平台的程序,它提供轻松的屏幕截图功能和灵活的[云备份选项][4]管理。这包括使用你自己的 [FTP 服务器][5]。 ![][6] -ScreenCloud 很精简,投入了大量的注意力给小的东西。它为你提供了非常容易记住的热键来捕获全屏、活动窗口或捕获用鼠标选择的区域。 +ScreenCloud 很顺滑,在细节上投入了大量的精力。它为你提供了非常容易记住的热键来捕获全屏、活动窗口或鼠标选择区域。 -![][7]ScreenCloud 的默认键盘快捷键 +![][7] + +*ScreenCloud 的默认键盘快捷键* 截取屏幕截图后,你可以设置 ScreenCloud 如何处理图像或直接将其上传到你选择的云服务。它甚至支持 SFTP。截图上传后(通常在几秒钟内),图像链接就会被自动复制到剪贴板,这让你可以轻松共享。 ![][8] -你还可以使用 ScreenCloud 进行一些基本编辑。为此,你需要将 “Save to” 设置为 “Ask me”。此设置在下拉框中有并且通常是默认设置。当使用它时,当你截取屏幕截图时,你会看到编辑文件的选项。在这里,你可以在屏幕截图中添加箭头、文本和数字。 +你还可以使用 ScreenCloud 进行一些基本编辑。为此,你需要将 “Save to” 设置为 “Ask me”。此设置在应用图标菜单中有并且通常是默认设置。当使用它时,当你截取屏幕截图时,你会看到编辑文件的选项。在这里,你可以在屏幕截图中添加箭头、文本和数字。 -![Editing screenshots with ScreenCloud][9]Editing screenshots with ScreenCloud +![Editing screenshots with ScreenCloud][9] + +*用 ScreenCloud 编辑截屏* ### 在 Linux 上安装 ScreenCloud -ScreenCloud 可在[ Snap 商店][10]中找到。因此,你可以通过访问[ Snap 商店][12]或运行以下命令,轻松地将其安装在 Ubuntu 和其他[启用 Snap ][11]的发行版上。 +ScreenCloud 可在 [Snap 商店][10]中找到。因此,你可以通过访问 [Snap 商店][12]或运行以下命令,轻松地将其安装在 Ubuntu 
和其他[启用 Snap][11] 的发行版上。 ``` sudo snap install screencloud - ``` 对于无法通过 Snap 安装程序的 Linux 发行版,你可以[在这里][1]下载 AppImage。进入下载文件夹,右键单击并在那里打开终端。然后运行以下命令。 ``` sudo chmod +x ScreenCloud-v1.4.0-x86_64.AppImage - ``` 然后,你可以通过双击下载的文件来启动程序。 @@ -57,7 +60,7 @@ via: https://itsfoss.com/screencloud-app/ 作者:[Aquil Roshan][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md b/published/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md similarity index 61% rename from translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md rename to published/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md index 1b21607ee9..b5a74c0ea9 100644 --- a/translated/tech/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md +++ b/published/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md @@ -1,23 +1,21 @@ -备份安装包并在全新安装的 Ubuntu 上恢复它们 +备份安装的包并在全新安装的 Ubuntu 上恢复它们 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/09/apt-clone-720x340.png) -在多个 Ubuntu 系统上安装同一组软件包是一项耗时且无聊的任务。你不会想花时间在多个系统上反复安装相同的软件包。在类似架构的 Ubuntu 系统上安装软件包时,有许多方法可以使这项任务更容易。你可以方便地通过 [**Aptik**][1] 并点击几次鼠标将以前的 Ubuntu 系统的应用程序、设置和数据迁移到新安装的系统中。或者,你可以使用软件包管理器(例如 APT)获取[**备份的已安装软件包的完整列表**][2],然后在新安装的系统上安装它们。今天,我了解到还有另一个专用工具可以完成这项工作。来看一下 **apt-clone**,这是一个简单的工具,可以让你为 Debian/Ubuntu 系统创建一个已安装的软件包列表,这些软件包可以在新安装的系统或容器上或目录中恢复。 +在多个 Ubuntu 系统上安装同一组软件包是一项耗时且无聊的任务。你不会想花时间在多个系统上反复安装相同的软件包。在类似架构的 Ubuntu 系统上安装软件包时,有许多方法可以使这项任务更容易。你可以方便地通过 [Aptik][1] 并点击几次鼠标将以前的 Ubuntu 系统的应用程序、设置和数据迁移到新安装的系统中。或者,你可以使用软件包管理器(例如 APT)获取[备份的已安装软件包的完整列表][2],然后在新安装的系统上安装它们。今天,我了解到还有另一个专用工具可以完成这项工作。来看一下 
`apt-clone`,这是一个简单的工具,可以让你为 Debian/Ubuntu 系统创建一个已安装的软件包列表,这些软件包可以在新安装的系统或容器上或目录中恢复。 -Apt-clone 会帮助你处理你想要的情况, +`apt-clone` 会帮助你处理你想要的情况, - * 在运行类似 Ubuntu(及衍生版)的多个系统上安装一致的应用程序。 -  * 经常在多个系统上安装相同的软件包。 -  * 备份已安装的应用程序的完整列表,并在需要时随时随地恢复它们。 +* 在运行类似 Ubuntu(及衍生版)的多个系统上安装一致的应用程序。 +* 经常在多个系统上安装相同的软件包。 +* 备份已安装的应用程序的完整列表,并在需要时随时随地恢复它们。 - - -在本简要指南中,我们将讨论如何在基于 Debian 的系统上安装和使用 Apt-clone。我在 Ubuntu 18.04 LTS 上测试了这个程序,但它应该适用于所有基于 Debian 和 Ubuntu 的系统。 +在本简要指南中,我们将讨论如何在基于 Debian 的系统上安装和使用 `apt-clone`。我在 Ubuntu 18.04 LTS 上测试了这个程序,但它应该适用于所有基于 Debian 和 Ubuntu 的系统。 ### 备份已安装的软件包并在新安装的 Ubuntu 上恢复它们 -Apt-clone 在默认仓库中有。要安装它,只需在终端输入以下命令: +`apt-clone` 在默认仓库中有。要安装它,只需在终端输入以下命令: ``` $ sudo apt install apt-clone @@ -27,11 +25,10 @@ $ sudo apt install apt-clone ``` $ mkdir ~/mypackages - $ sudo apt-clone clone ~/mypackages ``` -上面的命令将我的 Ubuntu 中所有已安装的软件包保存在 **~/mypackages** 目录下名为 **apt-clone-state-ubuntuserver.tar.gz** 的文件中。 +上面的命令将我的 Ubuntu 中所有已安装的软件包保存在 `~/mypackages` 目录下名为 `apt-clone-state-ubuntuserver.tar.gz` 的文件中。 要查看备份文件的详细信息,请运行: @@ -53,7 +50,7 @@ Date: Sat Sep 15 10:23:05 2018 $ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz ``` -请注意,此命令将覆盖你现有的 **/etc/apt/sources.list** 并将安装/删除软件包。警告过你了!此外,只需确保目标系统是相同的架构和操作系统。例如,如果源系统是 18.04 LTS 64位,那么目标系统必须也是相同的。 +请注意,此命令将覆盖你现有的 `/etc/apt/sources.list` 并将安装/删除软件包。警告过你了!此外,只需确保目标系统是相同的 CPU 架构和操作系统。例如,如果源系统是 18.04 LTS 64 位,那么目标系统必须也是相同的。 如果你不想在系统上恢复软件包,可以使用 `--destination /some/location` 选项将克隆复制到这个文件夹中。 @@ -61,7 +58,7 @@ $ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz $ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz --destination ~/oldubuntu ``` -在此例中,上面的命令将软件包恢复到 **~/oldubuntu** 中。 +在此例中,上面的命令将软件包恢复到 `~/oldubuntu` 中。 有关详细信息,请参阅帮助部分: @@ -75,7 +72,7 @@ $ apt-clone -h $ man apt-clone ``` -**建议阅读:** +建议阅读: + [Systemback - 将 Ubuntu 桌面版和服务器版恢复到以前的状态][3] + [Cronopete - Linux 下的苹果时间机器][4] @@ -94,7 +91,7 @@ via: https://www.ostechnix.com/backup-installed-packages-and-restore-them-on-fre 作者:[SK][a] 
选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180917 4 scanning tools for the Linux desktop.md b/published/20180917 4 scanning tools for the Linux desktop.md new file mode 100644 index 0000000000..b376fab108 --- /dev/null +++ b/published/20180917 4 scanning tools for the Linux desktop.md @@ -0,0 +1,73 @@ +用于 Linux 桌面的 4 个扫描工具 +====== + +> 使用这些开源软件之一驱动你的扫描仪来实现无纸化办公。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-photo-camera-blue.png?itok=AsIMZ9ga) + +尽管无纸化世界还没有到来,但越来越多的人通过扫描文件和照片来摆脱纸张的束缚。不过,仅仅拥有一台扫描仪还不足够。你还需要软件来驱动扫描仪。 + +然而问题是许多扫描仪制造商没有与他们的设备适配在一起的软件的 Linux 版本。不过在大多数情况下,即使没有也没多大关系。因为在 Linux 桌面上已经有很好的扫描软件了。它们能够与许多扫描仪配合很好的完成工作。 + +现在就让我们看看四个简单又灵活的开源 Linux 扫描工具。我已经使用过了下面这些工具(甚至[早在 2014 年][1]写过关于其中三个工具的文章)并且觉得它们非常有用。希望你也会这样认为。 + +### Simple Scan + +这是我最喜欢的一个软件之一,[Simple Scan][2] 小巧、快捷、高效且易用。如果你以前见过它,那是因为 Simple Scan 是 GNOME 桌面上的默认扫描应用程序,也是许多 Linux 发行版的默认扫描程序。 + +你只需单击一下就能扫描文档或照片。扫描过某些内容后,你可以旋转或裁剪它并将其另存为图像(仅限 JPEG 或 PNG 格式)或 PDF 格式。也就是说 Simple Scan 可能会很慢,即使你用较低分辨率来扫描文档。最重要的是,Simple Scan 在扫描时会使用一组全局的默认值,例如 150dpi 用于文本,300dpi 用于照片。你需要进入 Simple Scan 的首选项才能更改这些设置。 + +如果你扫描的内容超过了几页,还可以在保存之前重新排序页面。如果有必要的话 —— 假如你正在提交已签名的表格 —— 你可以使用 Simple Scan 来发送电子邮件。 + +### Skanlite + +从很多方面来看,[Skanlite][3] 是 Simple Scan 在 KDE 世界中的表兄弟。虽然 Skanlite 功能很少,但它可以出色的完成工作。 + +你可以自己配置这个软件的选项,包括自动保存扫描文件、设置扫描质量以及确定扫描保存位置。 Skanlite 可以保存为以下图像格式:JPEG、PNG、BMP、PPM、XBM 和 XPM。 + +其中一个很棒的功能是 Skanlite 能够将你扫描的部分内容保存到单独的文件中。当你想要从照片中删除某人或某物时,这就派上用场了。 + +### Gscan2pdf + +这是我另一个最爱的老软件,[gscan2pdf][4] 可能会显得很老旧了,但它仍然包含一些比这里提到的其他软件更好的功能。即便如此,gscan2pdf 仍然显得很轻便。 + +除了以各种图像格式(JPEG、PNG 和 TIFF)保存扫描外,gscan2pdf 还可以将它们保存为 PDF 或 [DjVu][5] 文件。你可以在单击“扫描”按钮之前设置扫描的分辨率,无论是黑白、彩色还是纸张大小,每当你想要更改任何这些设置时,都可以进入 gscan2pdf 的首选项。你还可以旋转、裁剪和删除页面。 + 
+虽然这些都不是真正的杀手级功能,但它们会给你带来更多的灵活性。 + +### GIMP + +你大概会知道 [GIMP][6] 是一个图像编辑工具。但是你恐怕不知道可以用它来驱动你的扫描仪吧。 + +你需要安装 [XSane][7] 扫描软件和 GIMP XSane 插件。这两个应该都可以从你的 Linux 发行版的包管理器中获得。在软件里,选择“文件>创建>扫描仪/相机”。单击“扫描仪”,然后单击“扫描”按钮即可进行扫描。 + +如果这不是你想要的,或者它不起作用,你可以将 GIMP 和一个叫作 [QuiteInsane][8] 的插件结合起来。使用任一插件,都能使 GIMP 成为一个功能强大的扫描软件,它可以让你设置许多选项,如是否扫描彩色或黑白、扫描的分辨率,以及是否压缩结果等。你还可以使用 GIMP 的工具来修改或应用扫描后的效果。这使得它非常适合扫描照片和艺术品。 + +### 它们真的能够工作吗? + +所有的这些软件在大多数时候都能够在各种硬件上运行良好。我将它们与我过去几年来拥有的多台多功能打印机一起使用 —— 无论是使用 USB 线连接还是通过无线连接。 + +你可能已经注意到我在前一段中写过“大多数时候运行良好”。这是因为我确实遇到过一个例外:一个便宜的 canon 多功能打印机。我使用的软件都没有检测到它。最后我不得不下载并安装 canon 的 Linux 扫描仪软件才使它工作。 + +你最喜欢的 Linux 开源扫描工具是什么?发表评论,分享你的选择。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/linux-scanner-tools + +作者:[Scott Nesbitt][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[way-ww](https://github.com/way-ww) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[1]: https://opensource.com/life/14/8/3-tools-scanners-linux-desktop +[2]: https://gitlab.gnome.org/GNOME/simple-scan +[3]: https://www.kde.org/applications/graphics/skanlite/ +[4]: http://gscan2pdf.sourceforge.net/ +[5]: http://en.wikipedia.org/wiki/DjVu +[6]: http://www.gimp.org/ +[7]: https://en.wikipedia.org/wiki/Scanner_Access_Now_Easy#XSane +[8]: http://sourceforge.net/projects/quiteinsane/ diff --git a/published/20180917 Getting started with openmediavault- A home NAS solution.md b/published/20180917 Getting started with openmediavault- A home NAS solution.md new file mode 100644 index 0000000000..0d5d00ca74 --- /dev/null +++ b/published/20180917 Getting started with openmediavault- A home NAS solution.md @@ -0,0 +1,75 @@ +openmediavault 入门:一个家庭 NAS 解决方案 +====== + +> 这个网络附属文件服务提供了一系列可靠的功能,并且易于安装和配置。 + 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS)
+
+面对许多可供选择的云存储方案,一些人可能会质疑一个家庭 NAS(网络附属存储network-attached storage)服务器的价值。毕竟,当你所有的文件都存储在云上,你就不需要为自己云服务的维护、更新和安全担忧。
+
+但是,这不完全对,是不是?你有一个家庭网络,所以你已经要负责维护网络的健康和安全。假定你已经维护一个家庭网络,那么[一个家庭 NAS][1]并不会增加额外负担。反而你能从少量的工作中得到许多的好处。
+
+你可以为你家里所有的计算机进行备份(你也可以备份到其它地方)。搭建一个存储电影、音乐和照片的媒体服务器,无需担心互联网连接是否连通。在家里的多台计算机上处理大型文件,不需要等待从互联网某个其它计算机传输这些文件过来。另外,可以让 NAS 与其他服务配合工作,如托管本地邮件或者家庭 Wiki。也许最重要的是,搭建家庭 NAS,数据完全是你的,它始终处于你的控制之下,随时可访问。
+
+接下来的问题是如何选择 NAS 方案。当然,你可以购买预先搭建好的商品,并在一天内搞定,但是这会有什么乐趣呢?实际上,尽管拥有一个能为你搞定一切的设备很棒,但是有一台可以自己动手修理和升级的机器更棒。这就是我近期的需求,于是我选择安装和配置 [openmediavault][2]。
+
+### 为什么选择 openmediavault?
+
+市面上有不少开源的 NAS 解决方案,其中有些肯定比 openmediavault 流行。当我询问周遭时,像 [FreeNAS][3] 这样的方案最常被推荐给我。那么为什么我不采纳他们的建议呢?毕竟,用它的人更多。[基于 FreeNAS 官网的一份对比数据][4],它包含了很多的功能,并且提供许多支持选项。这当然都对。但是 openmediavault 也不差。它实际上是基于 FreeNAS 早期版本的,虽然它在下载量和功能方面较少,但是对于我的需求而言,它已经相当足够了。
+
+另外一个因素是它让我感到很舒适。openmediavault 的底层操作系统是 [Debian][5],然而 FreeNAS 是 [FreeBSD][6]。由于我个人对 FreeBSD 不是很熟悉,因此如果我的 NAS 出现故障,必定难于在 FreeBSD 上修复故障。同样的,也会让我觉得难于优化或添加一些服务到这个机器上。当然,我可以学习 FreeBSD 以更熟悉它,但是我已经在家里搭建了这个 NAS;我发现,如果完成它只需要较少的“学习机会”,那么搭建 NAS 往往会更成功。
+
+当然,每个人情况都不同,所以你要自己调研,然后作出最适合自己的决定。FreeNAS 对于许多人似乎都是不错的解决方案。openmediavault 正是适合我的解决方案。
+
+### 安装与配置
+
+在 [openmediavault 文档][7]里详细记录了安装步骤,所以我不在这里重述了。如果你曾经安装过任何一个 Linux 发行版,大部分安装步骤都是很类似的(虽然是在相对丑陋的 [Ncurses][8] 界面中,而不像你或许在现代发行版里见到的那样)。我按照 [专用的驱动器][9] 的说明来安装它。这些说明不但很好,而且相当精炼。当你搞定这些步骤,就安装好了一个基本的系统,但是你还需要做更多才能真正构建好 NAS 来存储各种文件。例如,专用驱动器方式需要在硬盘驱动器上安装 openmediavault,但那是指你的操作系统的驱动器,而不是和网络上其他计算机共享的驱动器。你需要自己把后者建立起来并且配置好。
+
+你要做的第一件事是加载用来管理的网页界面,并修改默认密码。这个密码和之前你在安装过程中设置的 root 密码是不同的。这是网页界面的管理员账号,默认的账户和密码分别是 `admin` 和 `openmediavault`,当你登入后要马上修改。
+
+#### 设置你的驱动器
+
+一旦你安装好 openmediavault,你需要它为你做一些工作。逻辑上的第一个步骤是设置好你即将用来作为存储的驱动器。在这里,我假定你已经把它们物理上安装好了,所以接下来你要做的就是让 openmediavault 识别和配置它们。第一步是确保这些磁盘是可见的。侧边栏菜单有很多选项,而且被精心地归类了。选择“Storage -> Disks”。一旦你点击该菜单,你应该能够看到所有你已经安装到该服务器的驱动器,包括那个你已经用来安装 openmediavault
的驱动器。如果你没有在那里看到所有驱动器,点击“Scan”按钮看看是否能够挂载它们。通常,这不会是一个问题。
+
+你可以独立地挂载和设置这些驱动器用于文件共享,但是对于一个文件服务器,你会想要一些冗余。你想要能够把很多驱动器当作一个单一卷,并能够在某一个驱动器出现故障时恢复你的数据,或者在空间不足时安装新驱动器。这意味着你将需要一个 [RAID][10]。你需要哪种特定类型的 RAID,这个主题本身是一个大坑,值得另写一篇文章专门来讲述它(而且已经有很多关于该主题的文章了),但简而言之,你将需要不止一个驱动器,而且最好所有驱动器的容量一致。
+
+openmediavault 支持所有标准的 RAID 级别,所以这里很简单。可以在“Storage -> RAID Management”里配置你的 RAID。配置是相当简单的:点击“Create”按钮,选择你想要纳入 RAID 阵列的磁盘和你想要使用的 RAID 级别,并给这个阵列一个名字。openmediavault 为你处理剩下的工作。这里没有复杂的命令行,也不需要试图记住 `mdadm` 命令的一些选项参数。在我的例子中,我有六个 2TB 驱动器,设置成了 RAID 10。
+
+当你的 RAID 构建好了,基本上你已经有一个地方可以存储东西了。你仅仅需要设置一个文件系统。正如你的桌面系统一样,一个硬盘驱动器在没有格式化的情况下是没什么用处的。所以接下来你要去的地方是 openmediavault 控制面板里的“Storage -> File Systems”。和配置你的 RAID 一样,点击“Create”按钮,然后跟着提示操作。如果你的服务器上只有一个 RAID,你应该可以看到一个像 `md0` 的东西。你也需要选择文件系统的类别。如果你不能确定,选择标准的 ext4 类型即可。
+
+#### 定义你的共享
+
+好极了!你有个地方可以存储文件了。现在你只需要让它在你的家庭网络中可见。可以在 openmediavault 控制面板的“Services”部分配置。当谈到在网络上设置文件共享,主要有两个选择:NFS 或者 SMB/CIFS。根据以往经验,如果你网络上的所有计算机都是 Linux 系统,那么你使用 NFS 会更好。然而,当你的家庭网络是一个混合环境,是一个包含 Linux、Windows、苹果系统和嵌入式设备的组合,那么 SMB/CIFS 可能会是你合适的选择。
+
+这些选项不是互斥的。实际上,你可以在服务器上运行这两个服务,同时拥有两者的好处。或者,如果你有特定的设备做特定的任务,也可以混合使用。不管你的使用场景是怎样,配置这些服务都相当简单。点击你想要的服务,在它的配置中启用它,并设定你想要在网络中可见的共享文件夹。在基于 SMB/CIFS 共享的情况下,相对于 NFS 多了一些可用的配置,但是一般用默认配置就挺好的,接着可以在默认基础上修改配置。最酷的事情是它很容易配置,同时也很容易在需要的时候修改配置。
+
+#### 用户配置
+
+基本上已经完成了。你已经在 RAID 中配置了你的驱动器。你已经用一种文件系统格式化了 RAID,并且你已经在格式化的 RAID 上设定了共享文件夹。剩下来的一件事情是配置哪些人可以访问这些共享,以及可以访问多少内容。这个可以在“Access Rights Management”配置里设置。使用“User”和“Group”选项来设定可以连接到你共享文件夹的用户,并设定这些共享文件的访问权限。
+
+一旦你完成用户配置,就几乎准备好了。你还需要从不同的客户端机器访问你的共享,但是这是另外一个可以单独写篇文章的话题了。
+
+玩得开心!
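+
+附带一提,如果你好奇“定义你的共享”这一步在幕后发生了什么:启用 SMB/CIFS 服务后,openmediavault 会根据你在网页界面中的设置生成相应的 Samba 配置。下面是一个假设性的示意片段(共享名、路径和用户都只是举例,并非某台真实机器生成的内容):
+
+```
+[media]
+   path = /srv/dev-disk-by-label-data/media
+   read only = no
+   browseable = yes
+   guest ok = no
+   valid users = alice
+```
+
+你并不需要手工编辑这类文件,这正是 openmediavault 替你处理的部分;但了解它的大致样子,有助于在共享出问题时对照排查。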
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/9/openmediavault
+
+作者:[Jason van Gumster][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[jamelouis](https://github.com/jamelouis)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mairin
+[1]: https://opensource.com/article/18/8/automate-backups-raspberry-pi
+[2]: https://openmediavault.org
+[3]: https://freenas.org
+[4]: http://www.freenas.org/freenas-vs-openmediavault/
+[5]: https://www.debian.org/
+[6]: https://www.freebsd.org/
+[7]: https://openmediavault.readthedocs.io/en/latest/installation/index.html
+[8]: https://invisible-island.net/ncurses/
+[9]: https://openmediavault.readthedocs.io/en/latest/installation/via_iso.html
+[10]: https://en.wikipedia.org/wiki/RAID
diff --git a/published/20180918 Linux firewalls- What you need to know about iptables and firewalld.md b/published/20180918 Linux firewalls- What you need to know about iptables and firewalld.md
new file mode 100644
index 0000000000..2e52cabba0
--- /dev/null
+++ b/published/20180918 Linux firewalls- What you need to know about iptables and firewalld.md
@@ -0,0 +1,170 @@
+Linux 防火墙:关于 iptables 和 firewalld 的那些事
+======
+
+> 以下是如何使用 iptables 和 firewalld 工具来管理 Linux 防火墙规则。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab)
+
+这篇文章摘自我的书《[Linux in Action][1]》,以及曼宁(Manning)尚未发布的第二个出版项目。
+
+### 防火墙
+
+防火墙是一组规则。当数据包进出受保护的网络区域时,进出内容(特别是关于其来源、目标和使用的协议等信息)会根据防火墙规则进行检测,以确定是否允许其通过。下面是一个简单的例子:
+
+![防火墙过滤请求][3]
+
+*防火墙可以根据协议或基于目标的规则过滤请求。*
+
+一方面,[iptables][4] 是 Linux 机器上管理防火墙规则的工具。
+
+另一方面,[firewalld][5] 也是 Linux 机器上管理防火墙规则的工具。
+
+你有什么问题吗?如果我告诉你还有另外一种工具,叫做 [nftables][6],这会不会糟蹋你的美好一天呢?
+
+好吧,我承认整件事确实有点好笑,所以让我来解释一下。这一切都从 Netfilter 开始,它在 Linux 内核模块级别控制对网络栈的访问。几十年来,管理 Netfilter 钩子的主要命令行工具是 iptables 规则集。
+
+因为调用这些规则所需的语法看起来有点晦涩难懂,所以各种用户友好的实现方式,如 [ufw][7] 和 firewalld 被引入,作为更高级别的 Netfilter 解释器。然而,ufw 和 firewalld 主要是为解决单独的计算机所面临的各种问题而设计的。构建全方位的网络解决方案通常需要 iptables,或者从 2014 年起,它的替代品 nftables(`nft` 命令行工具)。
+
+iptables 没有消失,仍然被广泛使用着。事实上,在未来的许多年里,作为一名管理员,你应该会使用 iptables 来保护你的网络。但是 nftables 通过操作经典的 Netfilter 工具集带来了一些重要的崭新的功能。
+
+从现在开始,我将通过示例展示 firewalld 和 iptables 如何解决简单的连接问题。
+
+### 使用 firewalld 配置 HTTP 访问
+
+正如你能从它的名字中猜到的,firewalld 是 [systemd][8] 家族的一部分。firewalld 可以安装在 Debian/Ubuntu 机器上,不过,它默认安装在 RedHat 和 CentOS 上。如果您的计算机上运行着像 Apache 这样的 web 服务器,您可以通过浏览服务器的 web 根目录来确认防火墙是否正在工作。如果网站不可访问,那么 firewalld 正在工作。
+
+你可以使用 `firewall-cmd` 工具从命令行管理 firewalld 设置。添加 `--state` 参数将返回当前防火墙的状态:
+
+```
+# firewall-cmd --state
+running
+```
+
+默认情况下,firewalld 处于运行状态,并拒绝所有传入流量,但有几个例外,如 SSH。这意味着你的网站不会有太多的访问者,这无疑会为你节省大量的数据传输成本。然而,这不是你想从 web 服务器得到的效果,你希望打开 HTTP 和 HTTPS 端口,按照惯例,这两个端口分别被指定为 80 和 443。firewalld 提供了两种方法来实现这个功能。一个是通过 `--add-port` 参数,该参数直接引用端口号及其将使用的网络协议(在本例中为 TCP)。另外一个是通过 `--permanent` 参数,它告诉 firewalld 在每次服务器启动时加载此规则:
+
+```
+# firewall-cmd --permanent --add-port=80/tcp
+# firewall-cmd --permanent --add-port=443/tcp
+```
+
+`--reload` 参数将这些规则应用于当前会话:
+
+```
+# firewall-cmd --reload
+```
+
+要查看当前防火墙上的设置,运行 `--list-services`:
+
+```
+# firewall-cmd --list-services
+dhcpv6-client http https ssh
+```
+
+假设您已经如前所述添加了浏览器访问,那么 HTTP、HTTPS 和 SSH 端口现在都应该是和 `dhcpv6-client` 一样开放的 —— 它允许 Linux 从本地 DHCP 服务器请求 IPv6 IP 地址。
+
+### 使用 iptables 配置锁定的客户信息亭
+
+我相信你已经见过信息亭——它们是放在机场、图书馆和商务场所的外壳里的平板电脑、触摸屏和 ATM 类电脑,邀请顾客和路人浏览内容。大多数信息亭的问题是,你通常不希望用户把它们当成自己的设备,像在自己家一样随意使用。它们通常不是用来浏览网页、观看 YouTube 视频或对五角大楼发起拒绝服务攻击的。因此,为了确保它们没有被滥用,你需要锁定它们。
+
+一种方法是应用某种信息亭模式,无论是通过巧妙使用 Linux 显示管理器还是在浏览器级别进行控制。但是为了确保你已经堵塞了所有的漏洞,你可能还想通过防火墙添加一些硬性的网络控制。在下一节中,我将讲解如何使用 iptables 来完成。
+
+关于使用 iptables,有两件重要的事情需要记住:你给出的规则的顺序非常关键;iptables 规则本身在重新启动后将无法保持。我会一个一个地解释这些。
+
+### 信息亭项目
+
+为了说明这一切,让我们想象一下,我们为一家名为 BigMart
的大型连锁商店工作。它们已经存在了几十年;事实上,我们想象中的祖父母可能就是在那里购物并长大的。但是如今,BigMart 公司总部的人可能只是在数着亚马逊将他们永远赶下去的日子。
+
+尽管如此,BigMart 的 IT 部门正在尽他们最大努力提供解决方案,他们向你发放了一些具有 WiFi 功能的信息亭设备,让你把这些设备安放在整个商店的战略位置。其想法是,让它们登录到 BigMart.com 产品页面,以便查找商品特征、过道位置和库存水平。信息亭还允许进入 bigmart-data.com,那里储存着许多图像和视频媒体信息。
+
+除此之外,您还需要允许下载软件包更新。最后,您还希望只允许从本地工作站访问 SSH,并阻止其他人登录。下图说明了它将如何工作:
+
+![信息亭流量 IP 表][10]
+
+*信息亭业务流由 iptables 控制。*
+
+### 脚本
+
+以下是 Bash 脚本内容:
+
+```
+#!/bin/bash
+iptables -A OUTPUT -p tcp -d bigmart.com -j ACCEPT
+iptables -A OUTPUT -p tcp -d bigmart-data.com -j ACCEPT
+iptables -A OUTPUT -p tcp -d ubuntu.com -j ACCEPT
+iptables -A OUTPUT -p tcp -d ca.archive.ubuntu.com -j ACCEPT
+iptables -A OUTPUT -p tcp --dport 80 -j DROP
+iptables -A OUTPUT -p tcp --dport 443 -j DROP
+iptables -A INPUT -p tcp -s 10.0.3.1 --dport 22 -j ACCEPT
+iptables -A INPUT -p tcp -s 0.0.0.0/0 --dport 22 -j DROP
+```
+
+我们从基本规则 `-A` 开始分析,它告诉 iptables 我们要添加一条规则。`OUTPUT` 意味着这条规则应该成为输出链的一部分。`-p` 表示该规则仅适用于 TCP 协议的数据包,正如 `-d` 告诉我们的,目的地址是 [bigmart.com][11]。`-j` 参数的作用是指定当数据包符合规则时要采取的操作,这里是 `ACCEPT`。第一条规则表示允许(或接受)请求,而往下你能看到丢弃(或拒绝)请求的规则。
+
+规则顺序是很重要的。因为 iptables 会对一个请求遍历每个规则,直到遇到匹配的规则。一个向外发出的浏览器请求,比如访问 bigmart.com 是会通过的,因为这个请求匹配第一条规则,但是当它到达 `dport 80` 或 `dport 443` 规则时——取决于是 HTTP 还是 HTTPS 请求——它将被丢弃。当遇到匹配时,iptables 不再继续往下检查了。(LCTT 译注:此处原文有误,径改。)
+
+另一方面,向 ubuntu.com 发出软件升级的系统请求,只要符合其适当的规则,就会通过。显然,我们在这里做的是,只允许向我们的 BigMart 或 Ubuntu 发送 HTTP 或 HTTPS 请求,而不允许向其他目的地发送。
+
+最后两条规则将处理 SSH 请求。因为它使用的不是 80 或 443 端口,而是 22 端口,所以之前的两个丢弃规则不会拒绝它。在这种情况下,来自我的工作站的登录请求将被接受,但是来自其他任何地方的请求将被拒绝。这一点很重要:确保用于端口 22 规则的 IP 地址与您用来登录的机器的地址相匹配——如果不这样做,将立即被锁定。当然,这没什么大不了的,因为按照目前的配置方式,只需重启服务器,iptables 规则就会全部丢失。如果使用 LXC 容器作为服务器并从 LXC 主机登录,则使用主机 IP 地址连接容器,而不是其公共地址。
+
+如果机器的 IP 发生变化,请记住更新这个规则;否则,你会被拒绝访问。
+
+在家跟着做(最好是在某种一次性虚拟机上)?太好了。创建自己的脚本。现在我可以保存脚本,使用 `chmod` 使其可执行,并以 `sudo` 的形式运行它。不要担心“bigmart-data.com 没找到”之类的错误 —— 当然没找到;它不存在。
+
+```
+chmod +x scriptname.sh
+sudo ./scriptname.sh
+```
+
+你可以使用 `cURL` 命令行测试防火墙。请求 ubuntu.com 奏效,但请求 [manning.com][13] 是失败的。
+
+```
+curl ubuntu.com
+curl manning.com
+```
+
+### 配置 iptables 以在系统启动时加载
+
+现在,我如何让这些规则在每次信息亭启动时自动加载?第一步是将当前规则保存。使用 `iptables-save` 工具保存规则文件,这将在 root 的目录中创建一个包含规则列表的文件。管道后面跟着 `tee` 命令,是为了将我的 `sudo` 权限应用于命令的第二部分:将文件实际保存到原本受限的目录中。
+
+然后我可以告诉系统每次启动时运行一个相关的工具,叫做 `iptables-restore`。我们在上一章节(LCTT 译注:指作者的书)中看到的常规 cron 任务并不适用,因为它们在设定的时间运行,但是我们不知道什么时候我们的计算机可能会决定崩溃和重启。
+
+有许多方法来处理这个问题。这里有一个:
+
+在我的 Linux 机器上,我将安装一个名为 [anacron][14] 的程序,该程序将在 `/etc/` 目录中为我们提供一个名为 `anacrontab` 的文件。我将编辑该文件并添加这个 `iptables-restore` 命令,告诉它加载那个规则文件的当前内容。引导后,规则会每天(必要时)在 01:01 时加载到 iptables 中(LCTT 译注:anacron 会补充执行由于机器没有运行而错过的 cron 任务,因此,即便 01:01 时机器没有启动,也会在机器启动后尽快执行该任务)。我会给该任务一个标识符(`iptables-restore`),然后添加命令本身。如果你在家跟着我一起做,你应该通过重启系统来测试一下。
+
+```
+sudo iptables-save | sudo tee /root/my.active.firewall.rules
+sudo apt install anacron
+sudo nano /etc/anacrontab
+1 1 iptables-restore iptables-restore < /root/my.active.firewall.rules
+```
+
+我希望这些实际例子已经说明了如何使用 iptables 和 firewalld 来管理基于 Linux 的防火墙上的连接问题。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/9/linux-iptables-firewalld
+
+作者:[David Clinton][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[heguangzhi](https://github.com/heguangzhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/remyd
+[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
+[2]: /file/409116
+[3]: https://opensource.com/sites/default/files/uploads/iptables1.jpg (firewall filtering request)
+[4]: https://en.wikipedia.org/wiki/Iptables
+[5]: https://firewalld.org/
+[6]: https://wiki.nftables.org/wiki-nftables/index.php/Main_Page
+[7]: https://en.wikipedia.org/wiki/Uncomplicated_Firewall
+[8]: https://en.wikipedia.org/wiki/Systemd
+[9]: /file/409121
+[10]: https://opensource.com/sites/default/files/uploads/iptables2.jpg (kiosk traffic flow ip tables)
+[11]: 
http://bigmart.com/ +[12]: http://youtube.com/ +[13]: http://manning.com/ +[14]: https://sourceforge.net/projects/anacron/ diff --git a/translated/tech/20180918 Top 3 Python libraries for data science.md b/published/20180918 Top 3 Python libraries for data science.md similarity index 93% rename from translated/tech/20180918 Top 3 Python libraries for data science.md rename to published/20180918 Top 3 Python libraries for data science.md index 4026b751d5..c6156e575a 100644 --- a/translated/tech/20180918 Top 3 Python libraries for data science.md +++ b/published/20180918 Top 3 Python libraries for data science.md @@ -1,7 +1,7 @@ 3 个用于数据科学的顶级 Python 库 ====== ->使用这些库把 Python 变成一个科学数据分析和建模工具。 +> 使用这些库把 Python 变成一个科学数据分析和建模工具。 ![][7] @@ -49,7 +49,6 @@ matrix_two = np.arange(1,10).reshape(3,3) matrix_two ``` -Here is the output: 输出如下: ``` @@ -62,9 +61,7 @@ array([[1, 2, 3], ``` matrix_multiply = np.dot(matrix_one, matrix_two) - matrix_multiply - ``` 相乘后的输出如下: @@ -96,17 +93,15 @@ matrix_multiply ### Pandas -[Pandas][3] 是另一个可以提高你的 Python 数据科学技能的优秀库。就和 NumPy 一样,它属于 SciPy 开源软件家族,可以在 BSD 免费许可证许可下使用。 +[Pandas][3] 是另一个可以提高你的 Python 数据科学技能的优秀库。就和 NumPy 一样,它属于 SciPy 开源软件家族,可以在 BSD 自由许可证许可下使用。 -Pandas 提供了多功能并且很强大的工具用于管理数据结构和执行大量数据分析。该库能够很好的处理不完整、非结构化和无序的真实世界数据,并且提供了用于整形、聚合、分析和可视化数据集的工具 +Pandas 提供了多能而强大的工具,用于管理数据结构和执行大量数据分析。该库能够很好的处理不完整、非结构化和无序的真实世界数据,并且提供了用于整形、聚合、分析和可视化数据集的工具 Pandas 中有三种类型的数据结构: - * Series: 一维、相同数据类型的数组 - * DataFrame: 二维异型矩阵 - * Panel: 三维大小可变数组 - - + * Series:一维、相同数据类型的数组 + * DataFrame:二维异型矩阵 + * Panel:三维大小可变数组 例如,我们来看一下如何使用 Panda 库(缩写成 `pd`)来执行一些描述性统计计算。 @@ -232,7 +227,7 @@ via: https://opensource.com/article/18/9/top-3-python-libraries-data-science 作者:[Dr.Michael J.Garbade][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[ucasFL](https://github.com/ucasFL) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20180920 8 
Python packages that will simplify your life with Django.md b/published/20180920 8 Python packages that will simplify your life with Django.md similarity index 52% rename from translated/tech/20180920 8 Python packages that will simplify your life with Django.md rename to published/20180920 8 Python packages that will simplify your life with Django.md index f242007433..8f914f87e0 100644 --- a/translated/tech/20180920 8 Python packages that will simplify your life with Django.md +++ b/published/20180920 8 Python packages that will simplify your life with Django.md @@ -1,7 +1,7 @@ 简化 Django 开发的八个 Python 包 ====== -这个月的 Python 专栏将介绍一些 Django 包,它们有益于你的工作,以及你的个人或业余项目。 +> 这个月的 Python 专栏将介绍一些 Django 包,它们有益于你的工作,以及你的个人或业余项目。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/water-stone-balance-eight-8.png?itok=1aht_V5V) @@ -11,32 +11,31 @@ Django 开发者们,在这个月的 Python 专栏中,我们会介绍一些 ### 有用又省时的工具集合:django-extensions -[Django-extensions][4] 这个 Django 包非常受欢迎,全是有用的工具,比如下面这些管理命令: +[django-extensions][4] 这个 Django 包非常受欢迎,全是有用的工具,比如下面这些管理命令: - * **shell_plus** 打开 Django 的管理 shell,这个 shell 已经自动导入了所有的数据库模型。在测试复杂的数据关系时,就不需要再从几个不同的应用里做 import 的操作了。 - * **clean_pyc** 删除项目目录下所有位置的 .pyc 文件 - * **create_template_tags** 在指定的应用下,创建模板标签的目录结构。 - * **describe_form** 输出模型的表单定义,可以粘贴到 forms.py 文件中。(需要注意的是,这种方法创建的是普通 Django 表单,而不是模型表单。) - * **notes** 输出你项目里所有带 TODO,FIXME 等标记的注释。 + * `shell_plus` 打开 Django 的管理 shell,这个 shell 已经自动导入了所有的数据库模型。在测试复杂的数据关系时,就不需要再从几个不同的应用里做导入操作了。 + * `clean_pyc` 删除项目目录下所有位置的 .pyc 文件 + * `create_template_tags` 在指定的应用下,创建模板标签的目录结构。 + * `describe_form` 输出模型的表单定义,可以粘贴到 `forms.py` 文件中。(需要注意的是,这种方法创建的是普通 Django 表单,而不是模型表单。) + * `notes` 输出你项目里所有带 TODO、FIXME 等标记的注释。 Django-extensions 还包括几个有用的抽象基类,在定义模型时,它们能满足常见的模式。当你需要以下模型时,可以继承这些基类: + * `TimeStampedModel`:这个模型的基类包含了 `created` 字段和 `modified` 字段,还有一个 `save()` 方法,在适当的场景下,该方法自动更新 `created` 和 `modified` 字段的值。 + * `ActivatorModel`:如果你的模型需要像 `status`、`activate_date` 和 `deactivate_date` 
这样的字段,可以使用这个基类。它还自带了一个启用 `.active()` 和 `.inactive()` 查询集的 manager。 + * `TitleDescriptionModel` 和 `TitleSlugDescriptionModel`:这两个模型包括了 `title` 和 `description` 字段,其中 `description` 字段还包括 `slug`,它根据 `title` 字段自动产生。 - * **TimeStampedModel** : 这个模型的基类包含了 **created** 字段和 **modified** 字段,还有一个 **save()** 方法,在适当的场景下,该方法自动更新 created 和 modified 字段的值。 - * **ActivatorModel** : 如果你的模型需要像 **status**,**activate_date** 和 **deactivate_date** 这样的字段,可以使用这个基类。它还自带了一个启用 **.active()** 和 **.inactive()** 查询集的 manager。 - * **TitleDescriptionModel** 和 **TitleSlugDescriptionModel** : 这两个模型包括了 **title** 和 **description** 字段,其中 description 字段还包括 **slug**,它根据 **title** 字段自动产生。 - -Django-extensions 还有其他更多的功能,也许对你的项目有帮助,所以,去浏览一下它的[文档][5]吧! +django-extensions 还有其他更多的功能,也许对你的项目有帮助,所以,去浏览一下它的[文档][5]吧! ### 12 因子应用的配置:django-environ -在 Django 项目的配置方面,[Django-environ][6] 提供了符合 [12 因子应用][7] 方法论的管理方法。它是其他一些库的集合,包括 [envparse][8] 和 [honcho][9] 等。安装了 django-environ 之后,在项目的根目录创建一个 .env 文件,用这个文件去定义那些随环境不同而不同的变量,或者需要保密的变量。(比如 API keys,是否启用 debug,数据库的 URLs 等) +在 Django 项目的配置方面,[django-environ][6] 提供了符合 [12 因子应用][7] 方法论的管理方法。它是另外一些库的集合,包括 [envparse][8] 和 [honcho][9] 等。安装了 django-environ 之后,在项目的根目录创建一个 `.env` 文件,用这个文件去定义那些随环境不同而不同的变量,或者需要保密的变量。(比如 API 密钥,是否启用调试,数据库的 URL 等) -然后,在项目的 settings.py 中引入 **environ**,并参考[官方文档的例子][10]设置好 **environ.PATH()** 和 **environ.Env()**。就可以通过 **env('VARIABLE_NAME')** 来获取 .env 文件中定义的变量值了。 +然后,在项目的 `settings.py` 中引入 `environ`,并参考[官方文档的例子][10]设置好 `environ.PATH()` 和 `environ.Env()`。就可以通过 `env('VARIABLE_NAME')` 来获取 `.env` 文件中定义的变量值了。 ### 创建出色的管理命令:django-click -[Django-click][11] 是基于 [Click][12] 的, ( 我们[之前推荐过][13]… [两次][14] Click),它对编写 Django 管理命令很有帮助。这个库没有很多文档,但是代码仓库中有个存放[测试命令][15]的目录,非常有参考价值。 Django-click 基本的 Hello World 命令是这样写的: +[django-click][11] 是基于 [Click][12] 的,(我们[之前推荐过][13]… [两次][14] Click),它对编写 Django 管理命令很有帮助。这个库没有很多文档,但是代码仓库中有个存放[测试命令][15]的目录,非常有参考价值。 django-click 基本的 Hello World 命令是这样写的: ``` # app_name.management.commands.hello.py @@ -57,31 +56,31 @@ Hello, Lacey ### 
处理有限状态机:django-fsm -[Django-fsm][16] 给 Django 的模型添加了有限状态机的支持。如果你管理一个新闻网站,想用类似于“写作中”,“编辑中”,“已发布”来流转文章的状态,django-fsm 能帮你定义这些状态,还能管理状态变化的规则与限制。 +[django-fsm][16] 给 Django 的模型添加了有限状态机的支持。如果你管理一个新闻网站,想用类似于“写作中”、“编辑中”、“已发布”来流转文章的状态,django-fsm 能帮你定义这些状态,还能管理状态变化的规则与限制。 -Django-fsm 为模型提供了 FSMField 字段,用来定义模型实例的状态。用 django-fsm 的 **@transition** 修饰符,可以定义状态变化的方法,并处理状态变化的任何副作用。 +Django-fsm 为模型提供了 FSMField 字段,用来定义模型实例的状态。用 django-fsm 的 `@transition` 修饰符,可以定义状态变化的方法,并处理状态变化的任何副作用。 -虽然 django-fsm 文档很轻量,不过 [Django 中的工作流(状态)][17] 这篇 GitHubGist 对有限状态机和 django-fsm 做了非常好的介绍。 +虽然 django-fsm 文档很轻量,不过 [Django 中的工作流(状态)][17] 这篇 GitHub Gist 对有限状态机和 django-fsm 做了非常好的介绍。 ### 联系人表单:#django-contact-form -联系人表单可以说是网站的标配。但是不要自己去写全部的样板代码,用 [django-contact-form][18] 在几分钟内就可以搞定。它带有一个可选的能过滤垃圾邮件的表单类(也有不过滤的普通表单类)和一个 **ContactFormView** 基类,基类的方法可以覆盖或自定义修改。而且它还能引导你完成模板的创建,好让表单正常工作。 +联系人表单可以说是网站的标配。但是不要自己去写全部的样板代码,用 [django-contact-form][18] 在几分钟内就可以搞定。它带有一个可选的能过滤垃圾邮件的表单类(也有不过滤的普通表单类)和一个 `ContactFormView` 基类,基类的方法可以覆盖或自定义修改。而且它还能引导你完成模板的创建,好让表单正常工作。 ### 用户注册和认证:django-allauth -[Django-allauth][19] 是一个 Django 应用,它为用户注册,登录注销,密码重置,还有第三方用户认证(比如 GitHub 或 Twitter)提供了视图,表单和 URLs,支持邮件地址作为用户名的认证方式,而且有大量的文档记录。第一次用的时候,它的配置可能会让人有点晕头转向;请仔细阅读[安装说明][20],在[自定义你的配置][21]时要专注,确保启用某个功能的所有配置都用对了。 +[django-allauth][19] 是一个 Django 应用,它为用户注册、登录/注销、密码重置,还有第三方用户认证(比如 GitHub 或 Twitter)提供了视图、表单和 URL,支持邮件地址作为用户名的认证方式,而且有大量的文档记录。第一次用的时候,它的配置可能会让人有点晕头转向;请仔细阅读[安装说明][20],在[自定义你的配置][21]时要专注,确保启用某个功能的所有配置都用对了。 ### 处理 Django REST 框架的用户认证:django-rest-auth -如果 Django 开发中涉及到对外提供 API,你很可能用到了 [Django REST Framework][22] (DRF)。如果你在用 DRF,那么你应该试试 django-rest-auth,它提供了用户注册,登录/注销,密码重置和社交媒体认证的 endpoints (是通过添加 django-allauth 的支持来实现的,这两个包协作得很好)。 +如果 Django 开发中涉及到对外提供 API,你很可能用到了 [Django REST Framework][22](DRF)。如果你在用 DRF,那么你应该试试 django-rest-auth,它提供了用户注册、登录/注销,密码重置和社交媒体认证的端点(是通过添加 django-allauth 的支持来实现的,这两个包协作得很好)。 ### Django REST 框架的 API 可视化:django-rest-swagger -[Django REST Swagger][24] 提供了一个功能丰富的用户界面,用来和 Django REST 框架的 API 交互。你只需要安装 Django REST 
Swagger,把它添加到 Django 项目的 installed apps 中,然后在 urls.py 中添加 Swagger 的视图和 URL 模式就可以了,剩下的事情交给 API 的 docstring 处理。 +[Django REST Swagger][24] 提供了一个功能丰富的用户界面,用来和 Django REST 框架的 API 交互。你只需要安装 Django REST Swagger,把它添加到 Django 项目的已安装应用中,然后在 `urls.py` 中添加 Swagger 的视图和 URL 模式就可以了,剩下的事情交给 API 的 docstring 处理。 ![](https://opensource.com/sites/default/files/uploads/swagger-ui.png) -API 的用户界面按照 app 的维度展示了所有 endpoints 和可用方法,并列出了这些 endpoints 的可用操作,而且它提供了和 API 交互的功能(比如添加/删除/获取记录)。django-rest-swagger 从 API 视图中的 docstrings 生成每个 endpoint 的文档,通过这种方法,为你的项目创建了一份 API 文档,这对你,对前端开发人员和用户都很有用。 +API 的用户界面按照 app 的维度展示了所有端点和可用方法,并列出了这些端点的可用操作,而且它提供了和 API 交互的功能(比如添加/删除/获取记录)。django-rest-swagger 从 API 视图中的 docstrings 生成每个端点的文档,通过这种方法,为你的项目创建了一份 API 文档,这对你,对前端开发人员和用户都很有用。 -------------------------------------------------------------------------------- @@ -90,7 +89,7 @@ via: https://opensource.com/article/18/9/django-packages 作者:[Jeff Triplett][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[belitex](https://github.com/belitex) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -118,4 +117,4 @@ via: https://opensource.com/article/18/9/django-packages [21]: https://django-allauth.readthedocs.io/en/latest/configuration.html [22]: http://www.django-rest-framework.org/ [23]: https://django-rest-auth.readthedocs.io/ -[24]: https://django-rest-swagger.readthedocs.io/en/latest/ \ No newline at end of file +[24]: https://django-rest-swagger.readthedocs.io/en/latest/ diff --git a/published/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md b/published/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md new file mode 100644 index 0000000000..02bf0bdf9e --- /dev/null +++ b/published/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md @@ -0,0 +1,115 @@ +WinWorld:大型的废弃操作系统、软件、游戏的博物馆 +===== + 
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/WinWorld-720x340.jpeg) + +有一天,我正在测试 Dosbox -- 这是一个[在 Linux 平台上运行 MS-DOS 游戏与程序的软件][1]。当我在搜索一些常用的软件,例如 Turbo C++ 时,我意外留意到了一个叫做 [WinWorld][2] 的网站。我查看了这个网站上的某些内容,并且着实被惊艳到了。WinWorld 收集了非常多经典的,但已经被它们的开发者所抛弃许久的操作系统、软件、应用、开发工具、游戏以及各式各样的工具。它是一个以保存和分享古老的、已经被废弃的或者预发布版本程序为目的的线上博物馆,由社区成员和志愿者运营。 + +WinWorld 于 2013 年开始运营。它的创始者声称是被 Yahoo birefcases 激发了灵感并以此构建了这个网站。这个网站原目标是保存并且分享老旧软件。多年来,许多志愿者以不同方式提供了帮助,WinWorld 收集的老旧软件增长迅速。整个 WinWorld 仓库都是自由开源的,所有人都可以使用。 + +### WinWorld 保存了大量的废弃操作系统、软件、系统应用以及游戏 + +就像我刚才说的那样, WinWorld 存储了大量的被抛弃并且不再被开发的软件。 + +**Linux 与 Unix:** + +这里我给出了完整的 UNIX 和 LINUX 操作系统的列表,以及它们各自的简要介绍、首次发行的年代。 + +* **A/UX** - 于 1988 年推出,移植到苹果的 68k Macintosh 平台的 Unix 系统。 +* **AIX** - 于 1986 年推出,IBM 移植的 Unix 系统。 +* **AT &T System V Unix** - 于 1983 年推出,最早的商业版 Unix 之一。 +* **Banyan VINES** - 于 1984 年推出,专为 Unix 设计的网络操作系统。 +* **Corel Linux** - 于 1999 年推出,商业 Linux 发行版。 +* **DEC OSF-1** - 于 1991 年推出,由 DEC 公司开发的 Unix 版本。 +* **Digital UNIX** - 由 DEC 于 1995 年推出,**OSF-1** 的重命名版本。 +* **FreeBSD 1.0** - 于 1993 年推出,FreeBSD 的首个发行版。这个系统是基于 4.3BSD 开发的。 +* **Gentus Linux** - 由 ABIT 于 2000 年推出,未遵守 GPL 协议的 Linux 发行版。 +* **HP-UX** - 于 1992 年推出,UNIX 的变种系统。 +* **IRIX** - 由硅谷图形公司(SGI)于 1988 年推出的操作系统。 +* **Lindows** - 于 2002 年推出,与 Corel Linux 类似的商业操作系统。 +* **Linux Kernel** - 0.01 版本于 90 年代早期推出,Linux 源代码的副本。 +* **Mandrake Linux** - 于 1999 年推出。基于 Red Hat Linux 的 Linux 发行版,稍后被重新命名为 Mandriva。 +* **NEWS-OS** - 由 Sony 于 1989 年推出,BSD 的变种。 +* **NeXTStep** - 由史蒂夫·乔布斯创立的 NeXT 公司于 1987 年推出,基于 Unix 的操作系统。 +* **PC/IX** - 于 1984 年推出,为 IBM 个人电脑服务的基于 Unix 的操作系统。 +* **Red Hat Linux 5.0** - 由 Red Hat 推出,商业 Linux 发行版。 +* **Sun Solaris** - 由 Sun Microsystem 于 1992 年推出,基于 Unix 的操作系统。 +* **SunOS** - 由 Sun Microsystem 于 1982 年推出,衍生自 BSD 基于 Unix 的操作系统。 +* **Tru64 UNIX** - 由 DEC 开发,旧称 OSF/1。 +* **Ubuntu 4.10** - 基于 Debian 的知名操作系统。这是早期的 beta 预发布版本,比第一个 Ubuntu 正式发行版更早推出。 +* **Ultrix** - 由 DEC 开发, UNIX 克隆。 +* **UnixWare** - 由 Novell 推出, UNIX 变种。 +* **Xandros Linux** - 首个版本于 
2003 年推出。基于 Corel Linux 的专有 Linux 发行版。 +* **Xenix** - 最初由微软于 1984 推出,UNIX 变种操作系统。 + +不仅仅是 Linux/Unix,你还能找到例如 DOS、Windows、Apple/Mac、OS 2、Novell netware 等其他的操作系统与 shell。 + +**DOS & CP/M:** + +* 86-DOS +* Concurrent CPM-86 & Concurrent DOS +* CP/M 86 & CP/M-80 +* DOS Plus +* DR-DOS +* GEM +* MP/M +* MS-DOS +* 多任务的 MS-DOS 4.00 +* 多用户 DOS +* PC-DOS +* PC-MOS +* PTS-DOS +* Real/32 +* Tandy Deskmate +* Wendin DOS + +**Windows:** + +* BackOffice Server +* Windows 1.0/2.x/3.0/3.1/95/98/2000/ME/NT 3.X/NT 4.0 +* Windows Whistler +* WinFrame + +**Apple/Mac:** + +* Mac OS 7/8/9 +* Mac OS X +* System Software (0-6) + +**OS/2:** + +* Citrix Multiuser +* OS/2 1.x +* OS/2 2.0 +* OS/2 3.x +* OS/2 Warp 4 + +于此同时,WinWorld 也收集了大量的旧软件、系统应用、开发工具和游戏。你也可以一起看看它们。 + +说实话,这个网站列出的绝大部分东西,我甚至都不知道它们存在过。其中列出的某些工具发布于我出生之前。 + +如果您需要或者打算去测试一个经典的程序(例如游戏、软件、操作系统),并且在其他地方找不到它们,那么来 WinWorld 资源库看看,下载它们然后开始你的探险吧。祝您好运! + +![WinWorld – A Collection Of Defunct OSs, Software, Applications And Games](https://www.ostechnix.com/wp-content/uploads/2018/09/winworld.png) + +**免责声明:** + +OSTechNix 并非隶属于 WinWorld。我们 OSTechNix 并不确保 WinWorld 站点存储数据的真实性与可靠性。而且在你所在的地区,或许从第三方站点下载软件是违法行为。本篇文章作者和 OSTechNix 都不会承担任何责任,使用此服务意味着您将自行承担风险。(LCTT 译注:本站和译者亦同样申明。) + +本篇文章到此为止。希望这对您有用,更多的好文章即将发布,敬请期待! + +谢谢各位的阅读! 
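+
+另外,如果你从 WinWorld 下载了 DOS 时代的程序或游戏,文章开头提到的 Dosbox 就可以用来运行它们。下面是一个假设性的最小示例(假设你已把下载的文件解压到主机的 `~/dos` 目录):
+
+```
+$ dosbox
+Z:\> mount c ~/dos
+Z:\> c:
+C:\> dir
+```
+
+挂载之后,像在真正的 DOS 里一样切换到 C 盘并运行相应的可执行文件即可。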
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/winworld-a-large-collection-of-defunct-oss-software-and-games/
+
+作者:[SK][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[thecyanbird](https://github.com/thecyanbird)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[1]: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/
+[2]: https://winworldpc.com/library/
diff --git a/published/20180921 Clinews - Read News And Latest Headlines From Commandline.md b/published/20180921 Clinews - Read News And Latest Headlines From Commandline.md
new file mode 100644
index 0000000000..3dc74ff355
--- /dev/null
+++ b/published/20180921 Clinews - Read News And Latest Headlines From Commandline.md
@@ -0,0 +1,129 @@
+Clinews:从命令行阅读新闻和最新头条
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-720x340.jpeg)
+
+不久前,我们介绍过一个名为 [InstantNews][1] 的命令行新闻客户端,它可以帮助你立即在命令行阅读新闻和最新头条。今天,我偶然发现了一个名为 **Clinews** 的类似应用,其功能与此相同 —— 在终端阅读来自热门网站和博客的新闻与最新头条。你无需安装 GUI 应用或移动应用,可以直接从终端了解世界上正在发生的事情。它是使用 **NodeJS** 编写的自由开源程序。
+
+### 安装 Clinews
+
+由于 Clinews 是使用 NodeJS 编写的,因此你可以使用 NPM 包管理器安装。如果尚未安装 NodeJS,请按照以下链接中的说明进行安装。
+
+安装 node 后,运行以下命令安装 Clinews:
+
+```
+$ npm i -g clinews
+```
+
+你也可以使用 **Yarn** 安装 Clinews:
+
+```
+$ yarn global add clinews
+```
+
+Yarn 本身可以使用 npm 安装:
+
+```
+$ npm i -g yarn
+```
+
+### 配置 News API
+
+Clinews 从 [News API][2] 中检索所有新闻标题。News API 是一个简单易用的 API,它返回当前在一系列新闻源和博客上发布的头条的 JSON 元数据。它目前提供来自 70 个热门源的实时头条,包括 Ars Technica、BBC、Bloomberg、CNN、每日邮报、Engadget、ESPN、金融时报、谷歌新闻、Hacker News、IGN、Mashable、国家地理、Reddit r/all、路透社、Spiegel Online、Techcrunch、The Guardian、The Hindu、赫芬顿邮报、纽约时报、The Next Web、华尔街日报、今日美国[等等][3]。
+
+首先,你需要 News API 的 API 密钥。进入 [https://newsapi.org/register][4] 并注册一个免费帐户来获取 API 密钥。
+
+从 News API 获得 API 密钥后,编辑 `.bashrc`:
+
+```
+$ vi ~/.bashrc
+```
+
+在最后添加 newsapi API 密钥,如下所示:
+
+```
+export IN_API_KEY="Paste-API-key-here"
+```
+
+请注意,你需要将密钥粘贴在双引号内。保存并关闭文件。
+
+运行以下命令以使更改生效:
+
+```
+$ source ~/.bashrc
+```
+
+完成。现在可以从新闻源获取最新的头条新闻了。
+
+### 在命令行阅读新闻和最新头条
+
+要阅读特定新闻源的新闻和最新头条,例如 **The Hindu**,请运行:
+
+```
+$ news fetch the-hindu
+```
+
+这里,`the-hindu` 是新闻源的源 ID(即“获取 ID”)。
+
+上述命令将从 The Hindu 新闻站获取最新的 10 个头条,并将其显示在终端中。此外,它还显示新闻的简要描述、发布的日期和时间,以及指向源的实际链接。
+
+**示例输出:**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-1.png)
+
+要在浏览器中阅读新闻,请按住 Ctrl 键并单击 URL,它将在你的默认 Web 浏览器中打开。
+
+要查看所有的新闻源,请运行:
+
+```
+$ news sources
+```
+
+**示例输出:**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-2.png)
+
+正如你在上面的截图中看到的,Clinews 列出了所有新闻源,包括新闻源的名称、获取 ID、网站描述、网站 URL 以及它所在的国家/地区。在撰写本指南时,Clinews 支持 70 多个新闻源。
+
+Clinews 还可以在所有新闻源中搜索符合搜索条件/术语的新闻报道。例如,要列出包含单词 “Tamilnadu” 的所有新闻报道,请使用以下命令:
+
+```
+$ news search "Tamilnadu"
+```
+
+此命令将会筛选出所有新闻源中含有 “Tamilnadu” 的报道。
+
+Clinews 还有一些其它选项可以帮助你:
+
+  * 限制你想看的新闻报道的数量;
+  * 对新闻报道排序(热门、最新);
+  * 按分类显示新闻报道(例如商业、娱乐、游戏、大众、音乐、政治、科学和自然、体育、技术)。
+
+更多详细信息,请参阅帮助部分:
+
+```
+$ clinews -h
+```
+
+就是这些了。希望这篇文章对你有用。还有更多好东西,敬请关注!
+
+干杯!
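+
+顺带一提,如果你有几个常看的新闻源,可以在设置 API 密钥的同一个 `~/.bashrc` 文件里加上几个别名,省去每次输入完整命令。下面是一个示意片段(其中的源 ID 仅为举例,请先用 `news sources` 确认实际可用的 ID):
+
+```
+# News API 密钥(必需)
+export IN_API_KEY="Paste-API-key-here"
+
+# 常用新闻源的别名(示例)
+alias headlines='news fetch the-hindu'
+alias hn='news fetch hacker-news'
+```
+
+重新打开终端或执行 `source ~/.bashrc` 后,输入 `hn` 即可直接获取 Hacker News 的头条。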
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/clinews-read-news-and-latest-headlines-from-commandline/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/get-news-instantly-commandline-linux/ +[2]: https://newsapi.org/ +[3]: https://newsapi.org/sources +[4]: https://newsapi.org/register diff --git a/translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md b/published/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md similarity index 87% rename from translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md rename to published/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md index ed3402e0fa..a77ee1ad62 100644 --- a/translated/tech/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md +++ b/published/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md @@ -1,22 +1,22 @@ 如何在 Linux 中查看进程占用的端口号 ====== + 对于 Linux 系统管理员来说,清楚某个服务是否正确地绑定或监听某个端口,是至关重要的。如果你需要处理端口相关的问题,这篇文章可能会对你有用。 端口是 Linux 系统上特定进程之间逻辑连接的标识,包括物理端口和软件端口。由于 Linux 操作系统是一个软件,因此本文只讨论软件端口。软件端口始终与主机的 IP 地址和相关的通信协议相关联,因此端口常用于区分应用程序。大部分涉及到网络的服务都必须打开一个套接字来监听传入的网络请求,而每个服务都使用一个独立的套接字。 **推荐阅读:** -**(#)** [在 Linux 上查看进程 ID 的 4 种方法][1] -**(#)** [在 Linux 上终止进程的 3 种方法][2] -套接字是和 IP 地址,软件端口和协议结合起来使用的,而端口号对传输控制协议(Transmission Control Protocol, TCP)和 用户数据报协议(User Datagram Protocol, UDP)协议都适用,TCP 和 UDP 都可以使用0到65535之间的端口号进行通信。 +- [在 Linux 上查看进程 ID 的 4 种方法][1] +- [在 Linux 上终止进程的 3 种方法][2] + +套接字是和 IP 地址、软件端口和协议结合起来使用的,而端口号对传输控制协议(TCP)和用户数据报协议(UDP)协议都适用,TCP 和 UDP 都可以使用 0 到 65535 之间的端口号进行通信。 以下是端口分配类别: - * `0-1023:` 常用端口和系统端口 - * `1024-49151:` 软件的注册端口 - * 
`49152-65535:` 动态端口或私有端口 - - + * 0 - 1023: 常用端口和系统端口 + * 1024 - 49151: 软件的注册端口 + * 49152 - 65535: 动态端口或私有端口 在 Linux 上的 `/etc/services` 文件可以查看到更多关于保留端口的信息。 @@ -74,29 +74,25 @@ telnet 23/udp # 24 - private mail system lmtp 24/tcp # LMTP Mail Delivery lmtp 24/udp # LMTP Mail Delivery - ``` 可以使用以下六种方法查看端口信息。 - * `ss:` ss 可以用于转储套接字统计信息。 - * `netstat:` netstat 可以显示打开的套接字列表。 - * `lsof:` lsof 可以列出打开的文件。 - * `fuser:` fuser 可以列出那些打开了文件的进程的进程 ID。 - * `nmap:` nmap 是网络检测工具和端口扫描程序。 - * `systemctl:` systemctl 是 systemd 系统的控制管理器和服务管理器。 - - + * `ss`:可以用于转储套接字统计信息。 + * `netstat`:可以显示打开的套接字列表。 + * `lsof`:可以列出打开的文件。 + * `fuser`:可以列出那些打开了文件的进程的进程 ID。 + * `nmap`:是网络检测工具和端口扫描程序。 + * `systemctl`:是 systemd 系统的控制管理器和服务管理器。 以下我们将找出 `sshd` 守护进程所使用的端口号。 -### 方法1:使用 ss 命令 +### 方法 1:使用 ss 命令 `ss` 一般用于转储套接字统计信息。它能够输出类似于 `netstat` 输出的信息,但它可以比其它工具显示更多的 TCP 信息和状态信息。 它还可以显示所有类型的套接字统计信息,包括 PACKET、TCP、UDP、DCCP、RAW、Unix 域等。 - ``` # ss -tnlp | grep ssh LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3)) @@ -111,7 +107,7 @@ LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3)) LISTEN 0 128 :::22 :::* users:(("sshd",pid=997,fd=4)) ``` -### 方法2:使用 netstat 命令 +### 方法 2:使用 netstat 命令 `netstat` 能够显示网络连接、路由表、接口统计信息、伪装连接以及多播成员。 @@ -131,7 +127,7 @@ tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1208/sshd tcp6 0 0 :::22 :::* LISTEN 1208/sshd ``` -### 方法3:使用 lsof 命令 +### 方法 3:使用 lsof 命令 `lsof` 能够列出打开的文件,并列出系统上被进程打开的文件的相关信息。 @@ -153,7 +149,7 @@ sshd 1208 root 4u IPv6 20921 0t0 TCP *:ssh (LISTEN) sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 (ESTABLISHED) ``` -### 方法4:使用 fuser 命令 +### 方法 4:使用 fuser 命令 `fuser` 工具会将本地系统上打开了文件的进程的进程 ID 显示在标准输出中。 @@ -165,7 +161,7 @@ sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 root 49339 F.... sshd ``` -### 方法5:使用 nmap 命令 +### 方法 5:使用 nmap 命令 `nmap`(“Network Mapper”)是一款用于网络检测和安全审计的开源工具。它最初用于对大型网络进行快速扫描,但它对于单个主机的扫描也有很好的表现。 @@ -185,13 +181,14 @@ Service detection performed. Please report any incorrect results at http://nmap. 
Nmap done: 1 IP address (1 host up) scanned in 0.44 seconds ``` -### 方法6:使用 systemctl 命令 +### 方法 6:使用 systemctl 命令 -`systemctl` 是 systemd 系统的控制管理器和服务管理器。它取代了旧的 SysV init 系统管理,目前大多数现代 Linux 操作系统都采用了 systemd。 +`systemctl` 是 systemd 系统的控制管理器和服务管理器。它取代了旧的 SysV 初始化系统管理,目前大多数现代 Linux 操作系统都采用了 systemd。 **推荐阅读:** -**(#)** [chkservice – Linux 终端上的 systemd 单元管理工具][3] -**(#)** [如何查看 Linux 系统上正在运行的服务][4] + +- [chkservice – Linux 终端上的 systemd 单元管理工具][3] +- [如何查看 Linux 系统上正在运行的服务][4] ``` # systemctl status sshd @@ -258,7 +255,7 @@ via: https://www.2daygeek.com/how-to-find-out-which-port-number-a-process-is-usi 作者:[Prakash Subramanian][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180926 An introduction to swap space on Linux systems.md b/published/20180926 An introduction to swap space on Linux systems.md new file mode 100644 index 0000000000..da5bd3b8db --- /dev/null +++ b/published/20180926 An introduction to swap space on Linux systems.md @@ -0,0 +1,281 @@ +Linux 系统上交换空间的介绍 +====== + +> 学习如何修改你的系统上的交换空间的容量,以及你到底需要多大的交换空间。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh) + +当今无论什么操作系统交换Swap空间是非常常见的。Linux 使用交换空间来增加主机可用的虚拟内存。它可以在常规文件或逻辑卷上使用一个或多个专用交换分区或交换文件。 + +典型计算机中有两种基本类型的内存。第一种类型,随机存取存储器 (RAM),用于存储计算机使用的数据和程序。只有程序和数据存储在 RAM 中,计算机才能使用它们。随机存储器是易失性存储器;也就是说,如果计算机关闭了,存储在 RAM 中的数据就会丢失。 + +硬盘是用于长期存储数据和程序的磁性介质。该磁介质可以很好的保存数据;即使计算机断电,存储在磁盘上的数据也会保留下来。CPU(中央处理器)不能直接访问硬盘上的程序和数据;它们必须首先复制到 RAM 中,RAM 是 CPU 访问代码指令和操作数据的地方。在引导过程中,计算机将特定的操作系统程序(如内核、init 或 systemd)以及硬盘上的数据复制到 RAM 中,在 RAM 中,计算机的处理器 CPU 可以直接访问这些数据。 + +### 交换空间 + +交换空间是现代 Linux 系统中的第二种内存类型。交换空间的主要功能是当全部的 RAM 被占用并且需要更多内存时,用磁盘空间代替 RAM 内存。 + +例如,假设你有一个 8GB RAM 的计算机。如果你启动的程序没有填满 
RAM,一切都好,不需要交换。假设你在处理电子表格,当添加更多的行时,你电子表格会增长,加上所有正在运行的程序,将会占用全部的 RAM 。如果这时没有可用的交换空间,你将不得不停止处理电子表格,直到关闭一些其他程序来释放一些 RAM 。 + +内核使用一个内存管理程序来检测最近没有使用的内存块(内存页)。内存管理程序将这些相对不经常使用的内存页交换到硬盘上专门指定用于“分页”或交换的特殊分区。这会释放 RAM,为输入电子表格更多数据腾出了空间。那些换出到硬盘的内存页面被内核的内存管理代码跟踪,如果需要,可以被分页回 RAM。 + +Linux 计算机中的内存总量是 RAM + 交换分区,交换分区被称为虚拟内存. + +### Linux 交换分区类型 + +Linux 提供了两种类型的交换空间。默认情况下,大多数 Linux 在安装时都会创建一个交换分区,但是也可以使用一个特殊配置的文件作为交换文件。交换分区顾名思义就是一个标准磁盘分区,由 `mkswap` 命令指定交换空间。 + +如果没有可用磁盘空间来创建新的交换分区,或者卷组中没有空间为交换空间创建逻辑卷,则可以使用交换文件。这只是一个创建好并预分配指定大小的常规文件。然后运行 `mkswap` 命令将其配置为交换空间。除非绝对必要,否则我不建议使用文件来做交换空间。(LCTT 译注:Ubuntu 近来的版本采用了交换文件而非交换空间,所以我对于这种说法保留看法) + +### 频繁交换 + +当总虚拟内存(RAM 和交换空间)变得快满时,可能会发生频繁交换。系统花了太多时间在交换空间和 RAM 之间做内存块的页面切换,以至于几乎没有时间用于实际工作。这种情况的典型症状是:系统变得缓慢或完全无反应,硬盘指示灯几乎持续亮起。 + +使用 `free` 的命令来显示 CPU 负载和内存使用情况,你会发现 CPU 负载非常高,可能达到系统中 CPU 内核数量的 30 到 40 倍。另一个情况是 RAM 和交换空间几乎完全被分配了。 + +事实上,查看 SAR(系统活动报告)数据也可以显示这些内容。在我的每个系统上都安装 SAR ,并将这些用于数据分析。 + +### 交换空间的正确大小是多少? + +许多年前,硬盘上分配给交换空间大小是计算机上的 RAM 的两倍(当然,这是大多数计算机的 RAM 以 KB 或 MB 为单位的时候)。因此,如果一台计算机有 64KB 的 RAM,应该分配 128KB 的交换分区。该规则考虑到了这样的事实情况,即 RAM 大小在当时非常小,分配超过 2 倍的 RAM 用于交换空间并不能提高性能。使用超过两倍的 RAM 进行交换,比实际执行有用的工作的时候,大多数系统将花费更多的时间。 + +RAM 现在已经很便宜了,如今大多数计算机的 RAM 都达到了几十亿字节。我的大多数新电脑至少有 8GB 内存,一台有 32GB 内存,我的主工作站有 64GB 内存。我的旧电脑有 4 到 8GB 的内存。 + +当操作具有大量 RAM 的计算机时,交换空间的限制性能系数远低于 2 倍。[Fedora 28 在线安装指南][1] 定义了当前关于交换空间分配的方法。下面内容是我提出的建议。 + +下表根据系统中的 RAM 大小以及是否有足够的内存让系统休眠,提供了交换分区的推荐大小。建议的交换分区大小是在安装过程中自动建立的。但是,为了满足系统休眠,您需要在自定义分区阶段编辑交换空间。 + +_表 1: Fedora 28 文档中推荐的系统交换空间_ + +| **系统内存大小** | **推荐的交换空间** | **推荐的交换空间大小(支持休眠模式)** | +|--------------------------|-----------------------------|---------------------------------------| +| 小于 2 GB | 2 倍 RAM | 3 倍 RAM | +| 2 GB - 8 GB | 等于 RAM 大小 | 2 倍 RAM | +| 8 GB - 64 GB | 0.5 倍 RAM | 1.5 倍 RAM | +| 大于 64 GB | 工作量相关 | 不建议休眠模式 | + +在上面列出的每个范围之间的边界(例如,具有 2GB、8GB 或 64GB 的系统 RAM),请根据所选交换空间和支持休眠功能请谨慎使用。如果你的系统资源允许,增加交换空间可能会带来更好的性能。 + +当然,大多数 Linux 管理员对多大的交换空间量有自己的想法。下面的表2 包含了基于我在多种环境中的个人经历所做出的建议。这些可能不适合你,但是和表 1 一样,它们可能对你有所帮助。 + + +_表 2: 
作者推荐的系统交换空间_ + +| RAM 大小 | 推荐的交换空间 | +|---------------|------------------------| +| ≤ 2GB | 2X RAM | +| 2GB – 8GB | = RAM | +| >8GB | 8GB | + + +这两个表中共同点,随着 RAM 数量的增加,超过某一点增加更多交换空间只会导致在交换空间几乎被全部使用之前就发生频繁交换。根据以上建议,则应尽可能添加更多 RAM,而不是增加更多交换空间。如类似影响系统性能的情况一样,请使用最适合你的建议。根据 Linux 环境中的条件进行测试和更改是需要时间和精力的。 + +### 向非 LVM 磁盘环境添加更多交换空间 + +面对已安装 Linux 的主机并对交换空间的需求不断变化,有时有必要修改系统定义的交换空间的大小。此过程可用于需要增加交换空间大小的任何情况。它假设有足够的可用磁盘空间。此过程还假设磁盘分区为 “原始的” EXT4 和交换分区,而不是使用逻辑卷管理(LVM)。 + +基本步骤很简单: + + 1. 关闭现有的交换空间。 + 2. 创建所需大小的新交换分区。 + 3. 重读分区表。 + 4. 将分区配置为交换空间。 + 5. 添加新分区到 `/etc/fstab`。 + 6. 打开交换空间。 + +应该不需要重新启动机器。 + +为了安全起见,在关闭交换空间前,至少你应该确保没有应用程序在运行,也没有交换空间在使用。`free` 或 `top` 命令可以告诉你交换空间是否在使用中。为了更安全,您可以恢复到运行级别 1 或单用户模式。 + +使用关闭所有交换空间的命令关闭交换分区: + +``` +swapoff -a +``` + +现在查看硬盘上的现有分区。 + +``` +fdisk -l +``` + +这将显示每个驱动器上的分区表。按编号标识当前的交换分区。 + +使用以下命令在交互模式下启动 `fdisk`: + +``` +fdisk /dev/ +``` + +例如: + +``` +fdisk /dev/sda +``` + +此时,`fdisk` 是交互方式的,只在指定的磁盘驱动器上进行操作。 + +使用 `fdisk` 的 `p` 子命令验证磁盘上是否有足够的可用空间来创建新的交换分区。硬盘上的空间以 512 字节的块以及起始和结束柱面编号的形式显示,因此您可能需要做一些计算来确定分配分区之间和末尾的可用空间。 + +使用 `n` 子命令创建新的交换分区。`fdisk` 会问你开始柱面。默认情况下,它选择编号最低的可用柱面。如果你想改变这一点,输入开始柱面的编号。 + +`fdisk` 命令允许你以多种格式输入分区的大小,包括最后一个柱面号或字节、KB 或 MB 的大小。例如,键入 4000M ,这将在新分区上提供大约 4GB 的空间,然后按回车键。 + +使用 `p` 子命令来验证分区是否按照指定的方式创建的。请注意,除非使用结束柱面编号,否则分区可能与你指定的不完全相同。`fdisk` 命令只能在整个柱面上增量的分配磁盘空间,因此你的分区可能比你指定的稍小或稍大。如果分区不是您想要的,你可以删除它并重新创建它。 + +现在指定新分区是交换分区了 。子命令 `t` 允许你指定定分区的类型。所以输入 `t`,指定分区号,当它要求十六进制分区类型时,输入 `82`,这是 Linux 交换分区类型,然后按回车键。 + +当你对创建的分区感到满意时,使用 `w` 子命令将新的分区表写入磁盘。`fdisk` 程序将退出,并在完成修改后的分区表的编写后返回命令提示符。当 `fdisk` 完成写入新分区表时,会收到以下消息: + +``` +The partition table has been altered! +Calling ioctl() to re-read partition table. +WARNING: Re-reading the partition table failed with error 16: Device or resource busy. +The kernel still uses the old table. +The new table will be used at the next reboot. +Syncing disks. 
+``` + +此时,你使用 `partprobe` 命令强制内核重新读取分区表,这样就不需要执行重新启动机器。 + +``` +partprobe +``` + +使用命令 `fdisk -l` 列出分区,新交换分区应该在列出的分区中。确保新的分区类型是 “Linux swap”。 + +修改 `/etc/fstab` 文件以指向新的交换分区。如下所示: + +``` +LABEL=SWAP-sdaX   swap        swap    defaults        0 0 +``` + +其中 `X` 是分区号。根据新交换分区的位置,添加以下内容: + +``` +/dev/sdaY         swap        swap    defaults        0 0 +``` + +请确保使用正确的分区号。现在,可以执行创建交换分区的最后一步。使用 `mkswap` 命令将分区定义为交换分区。 + +``` +mkswap /dev/sdaY +``` + +最后一步是使用以下命令启用交换空间: + +``` +swapon -a +``` + +你的新交换分区现在与以前存在的交换分区一起在线。您可以使用 `free` 或`top` 命令来验证这一点。 + +#### 在 LVM 磁盘环境中添加交换空间 + +如果你的磁盘使用 LVM ,更改交换空间将相当容易。同样,假设当前交换卷所在的卷组中有可用空间。默认情况下,LVM 环境中的 Fedora Linux 在安装过程将交换分区创建为逻辑卷。您可以非常简单地增加交换卷的大小。 + +以下是在 LVM 环境中增加交换空间大小的步骤: + + 1. 关闭所有交换空间。 + 2. 增加指定用于交换空间的逻辑卷的大小。 + 3. 为交换空间调整大小的卷配置。 + 4. 启用交换空间。 + +首先,让我们使用 `lvs` 命令(列出逻辑卷)来验证交换空间是否存在以及交换空间是否是逻辑卷。 + +``` +[root@studentvm1 ~]# lvs + LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert + home fedora_studentvm1 -wi-ao---- 2.00g + pool00 fedora_studentvm1 twi-aotz-- 2.00g 8.17 2.93 + root fedora_studentvm1 Vwi-aotz-- 2.00g pool00 8.17 + swap fedora_studentvm1 -wi-ao---- 8.00g + tmp fedora_studentvm1 -wi-ao---- 5.00g + usr fedora_studentvm1 -wi-ao---- 15.00g + var fedora_studentvm1 -wi-ao---- 10.00g +[root@studentvm1 ~]# +``` + +你可以看到当前的交换空间大小为 8GB。在这种情况下,我们希望将 2GB 添加到此交换卷中。首先,停止现有的交换空间。如果交换空间正在使用,终止正在运行的程序。 + +``` +swapoff -a +``` + +现在增加逻辑卷的大小。 + +``` +[root@studentvm1 ~]# lvextend -L +2G /dev/mapper/fedora_studentvm1-swap +  Size of logical volume fedora_studentvm1/swap changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents). +  Logical volume fedora_studentvm1/swap successfully resized. +[root@studentvm1 ~]# +``` + +运行 `mkswap` 命令将整个 10GB 分区变成交换空间。 + +``` +[root@studentvm1 ~]# mkswap /dev/mapper/fedora_studentvm1-swap +mkswap: /dev/mapper/fedora_studentvm1-swap: warning: wiping old swap signature. 
+Setting up swapspace version 1, size = 10 GiB (10737414144 bytes) +no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a +[root@studentvm1 ~]# +``` + +重新启用交换空间。 + +``` +[root@studentvm1 ~]# swapon -a +[root@studentvm1 ~]# +``` + +现在,使用 `lsblk ` 命令验证新交换空间是否存在。同样,不需要重新启动机器。 + +``` +[root@studentvm1 ~]# lsblk +NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT +sda                                    8:0    0   60G  0 disk +|-sda1                                 8:1    0    1G  0 part /boot +`-sda2                                 8:2    0   59G  0 part +  |-fedora_studentvm1-pool00_tmeta   253:0    0    4M  0 lvm   +  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   +  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / +  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   +  |-fedora_studentvm1-pool00_tdata   253:1    0    2G  0 lvm   +  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   +  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / +  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   +  |-fedora_studentvm1-swap           253:4    0   10G  0 lvm  [SWAP] +  |-fedora_studentvm1-usr            253:5    0   15G  0 lvm  /usr +  |-fedora_studentvm1-home           253:7    0    2G  0 lvm  /home +  |-fedora_studentvm1-var            253:8    0   10G  0 lvm  /var +  `-fedora_studentvm1-tmp            253:9    0    5G  0 lvm  /tmp +sr0                                   11:0    1 1024M  0 rom   +[root@studentvm1 ~]# +``` + +您也可以使用 `swapon -s` 命令或 `top`、`free` 或其他几个命令来验证这一点。 + +``` +[root@studentvm1 ~]# free +              total        used        free      shared  buff/cache   available +Mem:        4038808      382404     2754072        4152      902332     3404184 +Swap:      10485756           0    10485756 +[root@studentvm1 ~]# +``` + +请注意,不同的命令以不同的形式显示或要求输入设备文件。在 `/dev` 目录中访问特定设备有多种方式。在我的文章 [在 Linux 中管理设备][2] 中有更多关于 `/dev` 目录及其内容说明。 + 
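顺便把上述 LVM 环境中的四个步骤整理成一个小脚本草稿。注意:其中的卷路径 `/dev/mapper/fedora_studentvm1-swap` 和增量 `+2G` 只是沿用正文示例的假设值,请按你的实际环境修改;脚本默认只打印将要执行的命令,确认无误后再以 root 身份设置 `DRY_RUN=0` 实际执行:

```shell
#!/bin/sh
# 草稿:按上文步骤扩展 LVM 交换空间。
# SWAP_LV 与 GROW 为沿用正文示例的假设值,务必改成你自己的环境。
SWAP_LV=${SWAP_LV:-/dev/mapper/fedora_studentvm1-swap}
GROW=${GROW:-+2G}
DRY_RUN=${DRY_RUN:-1}   # 默认只打印命令,不做任何修改

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run swapoff -a                        # 1. 关闭所有交换空间
run lvextend -L "$GROW" "$SWAP_LV"    # 2. 增加交换逻辑卷的大小
run mkswap "$SWAP_LV"                 # 3. 重新生成交换签名
run swapon -a                         # 4. 重新启用交换空间
run lsblk                             # 验证新的交换空间
```

由于这些命令会改动逻辑卷和交换配置,建议先在默认的 `DRY_RUN` 模式下核对输出,再实际运行。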
+-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/swap-space-linux-systems + +作者:[David Both][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[heguangzhi](https://github.com/heguangzhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/ +[2]: https://linux.cn/article-8099-1.html diff --git a/published/20180926 How to use the Scikit-learn Python library for data science projects.md b/published/20180926 How to use the Scikit-learn Python library for data science projects.md new file mode 100644 index 0000000000..b7ebe9a6bd --- /dev/null +++ b/published/20180926 How to use the Scikit-learn Python library for data science projects.md @@ -0,0 +1,239 @@ +如何将 Scikit-learn Python 库用于数据科学项目 +====== + +> 灵活多样的 Python 库为数据分析和数据挖掘提供了强力的机器学习工具。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) + +Scikit-learn Python 库最初于 2007 年发布,通常用于解决各种方面的机器学习和数据科学问题。这个多种功能的库提供了整洁、一致、高效的 API 和全面的在线文档。 + +### 什么是 Scikit-learn? 
+ +[Scikit-learn][1] 是一个开源 Python 库,拥有强大的数据分析和数据挖掘工具。 在 BSD 许可下可用,并建立在以下机器学习库上: + +- `NumPy`,一个用于操作多维数组和矩阵的库。它还具有广泛的数学函数汇集,可用于执行各种计算。 +- `SciPy`,一个由各种库组成的生态系统,用于完成技术计算任务。 +- `Matplotlib`,一个用于绘制各种图表和图形的库。 + +Scikit-learn 提供了广泛的内置算法,可以充分用于数据科学项目。 + +以下是使用 Scikit-learn 库的主要方法。 + +#### 1、分类 + +[分类][2]工具识别与提供的数据相关联的类别。例如,它们可用于将电子邮件分类为垃圾邮件或非垃圾邮件。 + +Scikit-learn 中的分类算法包括: + +- 支持向量机Support vector machines(SVM) +- 最邻近Nearest neighbors +- 随机森林Random forest + +#### 2、回归 + +回归涉及到创建一个模型去试图理解输入和输出数据之间的关系。例如,回归工具可用于理解股票价格的行为。 + +回归算法包括: + +- 支持向量机Support vector machines(SVM) +- 岭回归Ridge regression +- Lasso(LCTT 译注:Lasso 即 least absolute shrinkage and selection operator,又译为最小绝对值收敛和选择算子、套索算法) + +#### 3、聚类 + +Scikit-learn 聚类工具用于自动将具有相同特征的数据分组。 例如,可以根据客户数据的地点对客户数据进行细分。 + +聚类算法包括: + +- K-means +- 谱聚类Spectral clustering +- Mean-shift + +#### 4、降维 + +降维降低了用于分析的随机变量的数量。例如,为了提高可视化效率,可能不会考虑外围数据。 + +降维算法包括: + +- 主成分分析Principal component analysis(PCA) +- 功能选择Feature selection +- 非负矩阵分解Non-negative matrix factorization + +#### 5、模型选择 + +模型选择算法提供了用于比较、验证和选择要在数据科学项目中使用的最佳参数和模型的工具。 + +通过参数调整能够增强精度的模型选择模块包括: + +- 网格搜索Grid search +- 交叉验证Cross-validation +- 指标Metrics + +#### 6、预处理 + +Scikit-learn 预处理工具在数据分析期间的特征提取和规范化中非常重要。 例如,您可以使用这些工具转换输入数据(如文本)并在分析中应用其特征。 + +预处理模块包括: + +- 预处理 +- 特征提取 + +### Scikit-learn 库示例 + +让我们用一个简单的例子来说明如何在数据科学项目中使用 Scikit-learn 库。 + +我们将使用[鸢尾花花卉数据集][3],该数据集包含在 Scikit-learn 库中。 鸢尾花数据集包含有关三种花种的 150 个细节,三种花种分别为: + +- Setosa:标记为 0 +- Versicolor:标记为 1 +- Virginica:标记为 2 + +数据集包括每种花种的以下特征(以厘米为单位): + +- 萼片长度 +- 萼片宽度 +- 花瓣长度 +- 花瓣宽度 + +#### 第 1 步:导入库 + +由于鸢尾花花卉数据集包含在 Scikit-learn 数据科学库中,我们可以将其加载到我们的工作区中,如下所示: + +``` +from sklearn import datasets +iris = datasets.load_iris() +``` + +这些命令从 `sklearn` 导入数据集 `datasets` 模块,然后使用 `datasets` 中的 `load_iris()` 方法将数据包含在工作空间中。 + +#### 第 2 步:获取数据集特征 + +数据集 `datasets` 模块包含几种方法,使您更容易熟悉处理数据。 + +在 Scikit-learn 中,数据集指的是类似字典的对象,其中包含有关数据的所有详细信息。 使用 `.data` 键存储数据,该数据列是一个数组列表。 + +例如,我们可以利用 `iris.data` 输出有关鸢尾花花卉数据集的信息。 + +``` +print(iris.data) +``` + 
+这是输出(结果已被截断): + +``` +[[5.1 3.5 1.4 0.2] + [4.9 3.  1.4 0.2] + [4.7 3.2 1.3 0.2] + [4.6 3.1 1.5 0.2] + [5.  3.6 1.4 0.2] + [5.4 3.9 1.7 0.4] + [4.6 3.4 1.4 0.3] + [5.  3.4 1.5 0.2] + [4.4 2.9 1.4 0.2] + [4.9 3.1 1.5 0.1] + [5.4 3.7 1.5 0.2] + [4.8 3.4 1.6 0.2] + [4.8 3.  1.4 0.1] + [4.3 3.  1.1 0.1] + [5.8 4.  1.2 0.2] + [5.7 4.4 1.5 0.4] + [5.4 3.9 1.3 0.4] + [5.1 3.5 1.4 0.3] +``` + +我们还使用 `iris.target` 向我们提供有关花朵不同标签的信息。 + +``` +print(iris.target) +``` + +这是输出: + +``` +[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 + 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 + 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 + 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 + 2 2] +``` + +如果我们使用 `iris.target_names`,我们将输出数据集中找到的标签名称的数组。 + +``` +print(iris.target_names) +``` + +以下是运行 Python 代码后的结果: + +``` +['setosa' 'versicolor' 'virginica'] +``` + +#### 第 3 步:可视化数据集 + +我们可以使用[箱形图][4]来生成鸢尾花数据集的视觉描绘。 箱形图说明了数据如何通过四分位数在平面上分布的。 + +以下是如何实现这一目标: + +``` +import seaborn as sns +box_data = iris.data # 表示数据数组的变量 +box_target = iris.target # 表示标签数组的变量 +sns.boxplot(data = box_data,width=0.5,fliersize=5) +sns.set(rc={'figure.figsize':(2,15)}) +``` + +让我们看看结果: + +![](https://opensource.com/sites/default/files/uploads/scikit_boxplot.png) + +在横轴上: + + * 0 是萼片长度 + * 1 是萼片宽度 + * 2 是花瓣长度 + * 3 是花瓣宽度 + +垂直轴的尺寸以厘米为单位。 + +### 总结 + +以下是这个简单的 Scikit-learn 数据科学教程的完整代码。 + +``` +from sklearn import datasets +iris = datasets.load_iris() +print(iris.data) +print(iris.target) +print(iris.target_names) +import seaborn as sns +box_data = iris.data # 表示数据数组的变量 +box_target = iris.target # 表示标签数组的变量 +sns.boxplot(data = box_data,width=0.5,fliersize=5) +sns.set(rc={'figure.figsize':(2,15)}) +``` + +Scikit-learn 是一个多功能的 Python 库,可用于高效完成数据科学项目。 + +如果您想了解更多信息,请查看 [LiveEdu][5] 上的教程,例如 Andrey Bulezyuk 关于使用 Scikit-learn 库创建[机器学习应用程序][6]的视频。 + +有什么评价或者疑问吗? 
欢迎在下面分享。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/how-use-scikit-learn-data-science-projects + +作者:[Dr.Michael J.Garbade][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/drmjg +[1]: http://scikit-learn.org/stable/index.html +[2]: https://blog.liveedu.tv/regression-versus-classification-machine-learning-whats-the-difference/ +[3]: https://en.wikipedia.org/wiki/Iris_flower_data_set +[4]: https://en.wikipedia.org/wiki/Box_plot +[5]: https://www.liveedu.tv/guides/data-science/ +[6]: https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/ diff --git a/translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md b/published/20180927 How to Use RAR files in Ubuntu Linux.md similarity index 75% rename from translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md rename to published/20180927 How to Use RAR files in Ubuntu Linux.md index 3521b21a8a..0a087de8be 100644 --- a/translated/tech/20180927 How to Use RAR files in Ubuntu Linux.md +++ b/published/20180927 How to Use RAR files in Ubuntu Linux.md @@ -1,40 +1,39 @@ 如何在 Ubuntu Linux 中使用 RAR 文件 ====== + [RAR][1] 是一种非常好的归档文件格式。但相比之下 7-zip 能提供了更好的压缩率,并且默认情况下还可以在多个平台上轻松支持 Zip 文件。不过 RAR 仍然是最流行的归档格式之一。然而 [Ubuntu][2] 自带的归档管理器却不支持提取 RAR 文件,也不允许创建 RAR 文件。 -方法总比问题多。只要安装 `unrar` 这款由 [RARLAB][3] 提供的免费软件,就能在 Ubuntu 上支持提取RAR文件了。你也可以试安装 `rar` 来创建和管理 RAR 文件。 +办法总比问题多。只要安装 `unrar` 这款由 [RARLAB][3] 提供的免费软件,就能在 Ubuntu 上支持提取 RAR 文件了。你也可以安装 `rar` 试用版来创建和管理 RAR 文件。 ![RAR files in Ubuntu Linux][4] ### 提取 RAR 文件 -在未安装 unrar 的情况下,提取 RAR 文件会报出“未能提取”错误,就像下面这样(以 [Ubuntu 18.04][5] 为例): +在未安装 `unrar` 的情况下,提取 RAR 文件会报出“未能提取”错误,就像下面这样(以 [Ubuntu 18.04][5] 为例): ![Error in RAR 
extraction in Ubuntu][6] -如果要解决这个错误并提取 RAR 文件,请按照以下步骤安装 unrar: +如果要解决这个错误并提取 RAR 文件,请按照以下步骤安装 `unrar`: 打开终端并输入: ``` - sudo apt-get install unrar - +sudo apt-get install unrar ``` -安装 unrar 后,直接输入 `unrar` 就可以看到它的用法以及如何使用这个工具处理 RAR 文件。 +安装 `unrar` 后,直接输入 `unrar` 就可以看到它的用法以及如何使用这个工具处理 RAR 文件。 最常用到的功能是提取 RAR 文件。因此,可以**通过右键单击 RAR 文件并执行提取**,也可以借助此以下命令通过终端执行操作: ``` unrar x FileName.rar - ``` 结果类似以下这样: ![Using unrar in Ubuntu][7] -如果家目录中不存在对应的文件,就必须使用 `cd` 命令移动到目标目录下。例如 RAR 文件如果在 `Music` 目录下,只需要使用 `cd Music` 就可以移动到相应的目录,然后提取 RAR 文件。 +如果压缩文件没放在家目录中,就必须使用 `cd` 命令移动到目标目录下。例如 RAR 文件如果在 `Music` 目录下,只需要使用 `cd Music` 就可以移动到相应的目录,然后提取 RAR 文件。 ### 创建和管理 RAR 文件 @@ -42,18 +41,16 @@ unrar x FileName.rar `unrar` 不允许创建 RAR 文件。因此还需要安装 `rar` 命令行工具才能创建 RAR 文件。 -要创建 RAR 文件,首先需要通过以下命令安装 rar: +要创建 RAR 文件,首先需要通过以下命令安装 `rar`: ``` sudo apt-get install rar - ``` 按照下面的命令语法创建 RAR 文件: ``` rar a ArchiveName File_1 File_2 Dir_1 Dir_2 - ``` 按照这个格式输入命令时,它会将目录中的每个文件添加到 RAR 文件中。如果需要某一个特定的文件,就要指定文件确切的名称或路径。 @@ -64,7 +61,6 @@ rar a ArchiveName File_1 File_2 Dir_1 Dir_2 ``` rar u ArchiveName Filename - ``` 在终端输入 `rar` 就可以列出 RAR 工具的相关命令。 @@ -82,7 +78,7 @@ via: https://itsfoss.com/use-rar-ubuntu-linux/ 作者:[Ankush Das][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180928 10 handy Bash aliases for Linux.md b/published/20180928 10 handy Bash aliases for Linux.md new file mode 100644 index 0000000000..fe1c5098a1 --- /dev/null +++ b/published/20180928 10 handy Bash aliases for Linux.md @@ -0,0 +1,85 @@ +10 个 Linux 中方便的 Bash 别名 +====== +> 对 Bash 长命令使用压缩的版本来更有效率。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U) + +你有多少次在命令行上输入一个长命令,并希望有一种方法可以保存它以供日后使用?这就是 Bash 
别名派上用场的地方。它们允许你将长而神秘的命令压缩为易于记忆和使用的东西。需要一些例子来帮助你入门吗?没问题! + +要使用你创建的 Bash 别名,你需要将其添加到 `.bash_profile` 中,该文件位于你的家目录中。请注意,此文件是隐藏的,并只能从命令行访问。编辑此文件的最简单方法是使用 Vi 或 Nano 之类的东西。 + +### 10 个方便的 Bash 别名 + +1、 你有几次遇到需要解压 .tar 文件但无法记住所需的确切参数?别名可以帮助你!只需将以下内容添加到 `.bash_profile` 中,然后使用 `untar FileName` 解压缩任何 .tar 文件。 + +``` +alias untar='tar -zxvf ' +``` +2、 想要下载的东西,但如果出现问题可以恢复吗? + +``` +alias wget='wget -c ' +``` + +3、 是否需要为新的网络帐户生成随机的 20 个字符的密码?没问题。 + +``` +alias getpass="openssl rand -base64 20" +``` + +4、 下载文件并需要测试校验和?我们也可做到。 + +``` +alias sha='shasum -a 256 ' +``` + +5、 普通的 `ping` 将永远持续下去。我们不希望这样。相反,让我们将其限制在五个 `ping`。 + +``` +alias ping='ping -c 5' +``` + +6、 在任何你想要的文件夹中启动 Web 服务器。 + +``` +alias www='python -m SimpleHTTPServer 8000' +``` + +7、 想知道你的网络有多快?只需下载 Speedtest-cli 并使用此别名即可。你可以使用 `speedtest-cli --list` 命令选择离你所在位置更近的服务器。 + +``` +alias speed='speedtest-cli --server 2406 --simple' +``` + +8、 你有多少次需要知道你的外部 IP 地址,但是不知道如何获取?我也是。 + +``` +alias ipe='curl ipinfo.io/ip' +``` + +9、 需要知道你的本地 IP 地址? + +``` +alias ipi='ipconfig getifaddr en0' +``` + +10、 最后,让我们清空屏幕。 + +``` +alias c='clear' +``` + +如你所见,Bash 别名是一种在命令行上简化生活的超级简便方法。想了解更多信息?我建议你 Google 搜索“Bash 别名”或在 Github 中看下。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/handy-bash-aliases + +作者:[Patrick H.Mullins][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/pmullins diff --git a/published/20180928 A Free And Secure Online PDF Conversion Suite.md b/published/20180928 A Free And Secure Online PDF Conversion Suite.md new file mode 100644 index 0000000000..c9e219b7b7 --- /dev/null +++ b/published/20180928 A Free And Secure Online PDF Conversion Suite.md @@ -0,0 +1,86 @@ +一款免费且安全的在线 PDF 转换软件 +====== + 
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-720x340.jpg)
+
+我们总在寻找一个更好用且更高效的解决方案,让我们的生活更加方便。比方说,在处理 PDF 文档时,你肯定会想拥有一款工具,它能够在任何情形下都显得快速可靠。在这里,我们想向你推荐 **EasyPDF** —— 一款可以胜任所有场合的在线 PDF 软件。通过大量的测试,我们可以保证:这款工具能够让你的 PDF 文档管理更加容易。
+
+不过,关于 EasyPDF 有一些十分重要的事情,你必须知道。
+
+* EasyPDF 是免费的、匿名的在线 PDF 转换软件。
+* 能够将 PDF 文档转换成 Word、Excel、PowerPoint、AutoCAD、JPG、GIF 和文本等格式的文档。
+* 能够从 Word、Excel、PowerPoint 等其他格式的文件创建 PDF 文件。
+* 能够进行 PDF 文档的合并、分割和压缩。
+* 能够识别扫描的 PDF 和图片中的内容。
+* 可以从你的设备或者云存储(Google Drive 和 DropBox)中上传文档。
+* 可以在 Windows、Linux、Mac 和智能手机上通过浏览器来操作。
+* 支持多种语言。
+
+### EasyPDF 的用户界面
+
+![](http://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-interface.png)
+
+EasyPDF 最吸引你眼球的就是平滑的用户界面,营造出一种整洁的环境,这会让使用者感觉更加舒服。由于网站完全没有一点广告,EasyPDF 的整体使用体验相比同类网站会好很多。
+
+每种不同类型的转换都有专门的菜单,只需要简单地向其中添加文件,并不需要掌握太多知识就能进行操作。
+
+许多类似网站没有做好相关的优化,在手机上的使用体验并不太友好。然而,EasyPDF 突破了这个瓶颈。在智能手机上,EasyPDF 几乎可以秒开,并且可以顺畅地操作。你也可以通过 Chrome 的“三点菜单”把 EasyPDF 添加到手机的主屏幕上。
+
+![](http://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-fs8.png)
+
+### 特性
+
+除了好看的界面,EasyPDF 还非常易于使用。为了使用它,你 **不需要注册一个账号** 或者 **留下一个邮箱**,它是完全匿名的。另外,EasyPDF 也不会对要转换的文件进行数量或者大小的限制,完全不需要安装!酷极了,不是吗?
+
+首先,你需要选择一种想要进行的格式转换,比如,将 PDF 转换成 Word。然后,选择你想要转换的 PDF 文件。你可以通过两种方式来上传文件:直接拖放,或者从设备上的文件夹进行选择。还可以选择从 [Google Drive][1] 或 [Dropbox][2] 来上传文件。
+
+选择要进行格式转换的文件后,点击 Convert 按钮开始转换过程。转换过程会在一分钟内完成,你并不需要等待太长时间。如果你还要对其他文件进行格式转换,在接着转换前,不要忘了将前面已经转换完成的文件下载保存。不然的话,你将会丢失前面的文件。
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF1.png)
+
+要进行其他类型的格式转换,直接返回到主页。
+
+目前支持的几种格式转换类型如下:
+
+ * **PDF 转换成 Word** – 将 PDF 文档转换成 Word 文档
+ * **PDF 转换成 PowerPoint** – 将 PDF 文档转换成 PowerPoint 演示讲稿
+ * **PDF 转换成 Excel** – 将 PDF 文档转换成 Excel 文档
+ * **PDF 创建** – 从一些其他类型的文件(如文本、doc、odt)来创建 PDF 文档
+ * **Word 转换成 PDF** – 将 Word 文档转换成 PDF 文档
+ * **JPG 转换成 PDF** – 将 JPG 图片转换成 PDF 文档
+ * **PDF 转换成 AutoCAD** – 将 PDF 文档转换成 .dwg 格式(DWG 是 CAD 文件的原生格式)
+ * **PDF 转换成 Text** – 将 PDF 文档转换成文本文档
+ * **PDF 分割** – 把 PDF 文件分割成多个部分
+ * **PDF 合并** – 把多个 PDF 文件合并成一个文件
+ * **PDF 压缩** – 将 PDF 文档进行压缩
+ * **PDF 转换成 JPG** – 将 PDF 文档转换成 JPG 图片
+ * **PDF 转换成 PNG** – 将 PDF 文档转换成 PNG 图片
+ * **PDF 转换成 GIF** – 将 PDF 文档转换成 GIF 文件
+ * **在线文字内容识别** – 将扫描的纸质文档转换成能够编辑的文件(如 Word、Excel、文本)
+
+想试一试吗?好极了!点击下面的链接,然后开始格式转换吧!
+
+[![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-online-pdf.png)](https://easypdf.com/)
+
+### 总结
+
+EasyPDF 名副其实,能够让 PDF 管理更加容易。就我测试过的 EasyPDF 服务而言,它提供了 **完全免费** 的简单易用的转换功能。它十分快速、安全和可靠。你会对它的服务质量感到非常满意,因为它不用支付任何费用,也不用留下像邮箱这样的个人信息。值得一试,也许你会找到你自己更喜欢的 PDF 工具。
+
+好吧,我就说这些。更多的好东西还在后面,请继续关注!
+
+加油!
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/easypdf-a-free-and-secure-online-pdf-conversion-suite/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[zhousiyu325](https://github.com/zhousiyu325) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/ +[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ diff --git a/published/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md b/published/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md new file mode 100644 index 0000000000..72763c754b --- /dev/null +++ b/published/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md @@ -0,0 +1,228 @@ +如何在 Ubuntu 18.04 上安装 Popcorn Time +====== + +> 简要:这篇教程展示给你如何在 Ubuntu 和其他 Linux 发行版上安装 Popcorn Time,也会讨论一些 Popcorn Time 的便捷操作。 + +[Popcorn Time][1] 是一个受 [Netflix][2] 启发的开源的 [torrent][3] 流媒体应用,可以在 Linux、Mac、Windows 上运行。 + +传统的 torrent,在你看影片之前必须等待它下载完成。 + +[Popcorn Time][4] 有所不同。它的使用基于 torrent,但是允许你(几乎)立即开始观看影片。它跟你在 Youtube 或者 Netflix 等流媒体网页上看影片一样,无需等待它下载完成。 + +![Popcorn Time in Ubuntu Linux][5] + +*Popcorn Time* + +如果你不想在看在线电影时被突如其来的广告吓倒的话,Popcorn Time 是一个不错的选择。不过要记得,它的播放质量依赖于当前网络中可用的种子seed数。 + +Popcorn Time 还提供了一个不错的用户界面,让你能够浏览可用的电影、电视剧和其他视频内容。如果你曾经[在 Linux 上使用过 Netflix][6],你会发现两者有一些相似之处。 + +有些国家严格打击盗版,所以使用 torrent 下载电影是违法行为。在类似美国、英国和西欧等一些国家,你或许曾经收到过法律声明。也就是说,是否使用取决于你。已经警告过你了。 + +Popcorn Time 一些主要的特点: + + * 使用 Torrent 在线观看电影和电视剧 + * 有一个时尚的用户界面让你浏览可用的电影和电视剧资源 + * 调整流媒体的质量 + * 标记为稍后观看 + * 下载为离线观看 + * 可以默认开启字幕,改变字母尺寸等 + * 使用键盘快捷键浏览 + + +### 如何在 Ubuntu 和其它 Linux 发行版上安装 Popcorn Time + +这篇教程以 Ubuntu 18.04 为例,但是你可以使用类似的说明,在例如 Linux Mint、Debian、Manjaro、Deepin 等 Linux 
发行版上安装。 + +Popcorn Time 在 Deepin Linux 的软件中心中也可用。Manjaro 和 Arch 用户也可以轻松地使用 AUR 来安装 Popcorn Time。 + +接下来我们看该如何在 Linux 上安装 Popcorn Time。事实上,这个过程非常简单。只需要按照说明操作复制粘贴我提到的这些命令即可。 + +#### 第一步:下载 Popcorn Time + +你可以从它的官网上安装 Popcorn Time。下载链接在它的主页上。 + +- [下载 Popcorn Time](https://popcorntime.sh/) + +#### 第二步:安装 Popcorn Time + +下载完成之后,就该使用它了。下载下来的是一个 tar 文件,在这些文件里面包含有一个可执行文件。你可以把 tar 文件提取在任何位置,[Linux 常把附加软件安装在][8] [/opt 目录][8]。 + +在 `/opt` 下创建一个新的目录: + +``` +sudo mkdir /opt/popcorntime +``` + +现在进入你下载文件的文件夹中,比如我把 Popcorn Time 下载到了主目录的 Downloads 目录下。 + +``` +cd ~/Downloads +``` + +提取下载好的 Popcorn Time 文件到新创建的 `/opt/popcorntime` 目录下: + +``` +sudo tar Jxf Popcorn-Time-* -C /opt/popcorntime +``` + +#### 第三步:让所有用户可以使用 Popcorn Time + +如果你想要系统中所有的用户无需经过 `sudo` 就可以运行 Popcorn Time。你需要在 `/usr/bin` 目录下创建一个[符号链接(软链接)][9]指向这个可执行文件。 + +``` +ln -sf /opt/popcorntime/Popcorn-Time /usr/bin/Popcorn-Time +``` + +#### 第四步:为 Popcorn Time 创建桌面启动器 + +到目前为止,一切顺利,但是你也许想要在应用菜单里看到 Popcorn Time,又或是想把它添加到最喜欢的应用列表里等。 + +为此,你需要创建一个桌面入口。 + +打开一个终端窗口,在 `/usr/share/applications` 目录下创建一个名为 `popcorntime.desktop` 的文件。 + +你可以使用任何[基于命令行的文本编辑器][10]。Ubuntu 默认安装了 [Nano][11],所以你可以直接使用这个。 + +``` +sudo nano /usr/share/applications/popcorntime.desktop +``` + +在里面插入以下内容: + +``` +[Desktop Entry] +Version = 1.0 +Type = Application +Terminal = false +Name = Popcorn-Time +Exec = /usr/bin/Popcorn-Time +Icon = /opt/popcorntime/popcorn.png +Categories = Application; +``` + +如果你使用的是 Nano 编辑器,使用 `Ctrl+X` 保存输入的内容,当询问是否保存时,输入 `Y`,然后按回车保存并退出。 + +就快要完成了。最后一件事就是为 Popcorn Time 设置一个正确的图标。你可以下载一个 Popcorn Time 图标到 `/opt/popcorntime` 目录下,并命名为 `popcorn.png`。 + +你可以使用以下命令: + +``` +sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia/commons/d/df/Pctlogo.png +``` + +这样就 OK 了。现在你可以搜索 Popcorn Time 然后点击启动它了。 + +![Popcorn Time installed on Ubuntu][12] + +*在菜单里搜索 Popcorn Time* + +第一次启动时,你必须接受这些条款和条件。 + +![Popcorn Time in Ubuntu][13] + +*接受这些服务条款* + +一旦你完成这些,你就可以享受你的电影和电视节目了。 + +![Watch movies on Popcorn Time][14] + 
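上面第四步中手工编辑的 `popcorntime.desktop` 文件,也可以用一个小脚本生成。下面是一个草稿,桌面入口的内容和路径均沿用正文中的假设(`/usr/bin/Popcorn-Time`、`/opt/popcorntime/popcorn.png`),写入系统目录前请自行核对:

```shell
#!/bin/sh
# 草稿:生成正文中给出的 popcorntime.desktop 内容。
# 其中的可执行文件和图标路径沿用上文步骤中的假设值。
desktop_entry() {
    cat <<'EOF'
[Desktop Entry]
Version = 1.0
Type = Application
Terminal = false
Name = Popcorn-Time
Exec = /usr/bin/Popcorn-Time
Icon = /opt/popcorntime/popcorn.png
Categories = Application;
EOF
}

# 先预览内容;确认无误后再写入:
# desktop_entry | sudo tee /usr/share/applications/popcorntime.desktop
desktop_entry
```

这样可以避免手工编辑时的拼写错误,写入后的效果与正文中用 Nano 编辑的结果一致。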
+好了,这就是所有你在 Ubuntu 或者其他 Linux 发行版上安装 Popcorn Time 所需要的了。你可以直接开始看你最喜欢的影视节目了。 + +### 高效使用 Popcorn Time 的七个小贴士 + +现在你已经安装好了 Popcorn Time 了,我接下来将要告诉你一些有用的 Popcorn Time 技巧。我保证它会增强你使用 Popcorn Time 的体验。 + +#### 1、 使用高级设置 + +始终启用高级设置。它给了你更多的选项去调整 Popcorn Time 点击右上角的齿轮标记。查看其中的高级设置。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tricks.jpeg) + +#### 2、 在 VLC 或者其他播放器里观看影片 + +你知道你可以选择自己喜欢的播放器而不是 Popcorn Time 默认的播放器观看一个视频吗?当然,这个播放器必须已经安装在你的系统上了。 + +现在你可能会问为什么要使用其他的播放器。我的回答是:其他播放器可以弥补 Popcorn Time 默认播放器上的一些不足。 + +例如,如果一个文件的声音非常小,你可以使用 VLC 将音频声音增强 400%,你还可以[使用 VLC 同步不连贯的字幕][18]。你可以在播放文件之前在不同的媒体播放器之间进行切换。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks_1.png) + +#### 3、 将影片标记为稍后观看 + +只是浏览电影和电视节目,但是却没有时间和精力去看?这不是问题。你可以添加这些影片到书签里面,稍后可以在 Faveriate 标签里面访问这些影片。这可以让你创建一个你想要稍后观看的列表。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks2.png) + +#### 4、 检查 torrent 的信息和种子信息 + +像我之前提到的,你在 Popcorn Time 的观看体验依赖于 torrent 的速度。好消息是 Popcorn Time 显示了 torrent 的信息,因此你可以知道流媒体的速度。 + +你可以在文件上看到一个绿色/黄色/红色的点。绿色意味着有足够的种子,文件很容易播放。黄色意味着有中等数量的种子,应该可以播放。红色意味着只有非常少可用的种子,播放的速度会很慢甚至无法观看。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks3.jpg) + +#### 5、 添加自定义字幕 + +如果你需要字幕而且它没有你想要的语言,你可以从外部网站下载自定义字幕。得到 .src 文件,然后就可以在 Popcorn Time 中使用它: + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocporn_Time_Tricks5.png) + +你可以[用 VLC 自动下载字幕][19]。 + +#### 6、 保存文件离线观看 + +用 Popcorn Time 播放内容时,它会下载并暂时存储这些内容。当你关闭 APP 时,缓存会被清理干净。你可以更改这个操作,使得下载的文件可以保存下来供你未来使用。 + +在高级设置里面,向下滚动一点。找到缓存目录,你可以把它更改到其他像是 Downloads 目录,这下你即便关闭了 Popcorn Time,这些文件依旧可以观看。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tips.jpg) + +#### 7、 拖放外部 torrent 文件立即播放 + +我猜你不知道这个操作。如果你没有在 Popcorn Time 发现某些影片,从你最喜欢的 torrent 网站下载 torrent 文件,打开 Popcorn Time,然后拖放这个 torrent 文件到 Popcorn Time 
里面。它将会立即播放文件,当然这个取决于种子。这次你不需要在观看前下载整个文件了。 + +当你拖放文件到 Popcorn Time 后,它将会给你对应的选项,选择它应该播放的。如果里面有字幕,它会自动播放,否则你需要添加外部字幕。 + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks4.png) + +在 Popcorn Time 里面有很多的功能,但是我决定就此打住,剩下的就由你自己来探索吧。我希望你能发现更多 Popcorn Time 有用的功能和技巧。 + +我再提醒一遍,使用 Torrents 在很多国家是违法的。 + +----------------------------------- + +via: https://itsfoss.com/popcorn-time-ubuntu-linux/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://popcorntime.sh/ +[2]: https://netflix.com/ +[3]: https://en.wikipedia.org/wiki/Torrent_file +[4]: https://en.wikipedia.org/wiki/Popcorn_Time +[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-linux.jpeg +[6]: https://itsfoss.com/netflix-firefox-linux/ +[7]: https://billing.ivacy.com/page/23628 +[8]: http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/opt.html +[9]: https://en.wikipedia.org/wiki/Symbolic_link +[10]: https://itsfoss.com/command-line-text-editors-linux/ +[11]: https://itsfoss.com/nano-3-release/ +[12]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-menu.jpg +[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-license.jpeg +[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-watch-movies.jpeg +[15]: https://ivacy.postaffiliatepro.com/accounts/default1/vdegzkxbw/7f82d531.png +[16]: https://billing.ivacy.com/page/23628/7f82d531 +[17]: http://ivacy.postaffiliatepro.com/scripts/vdegzkxiw?aff=23628&a_bid=7f82d531 +[18]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/ +[19]: https://itsfoss.com/download-subtitles-automatically-vlc-media-player-ubuntu/ 
+[20]: https://protonvpn.net/?aid=chmod777 +[21]: https://itsfoss.com/protonmail/ +[22]: https://shop.itsfoss.com/search?utf8=%E2%9C%93&query=vpn +[23]: https://itsfoss.com/affiliate-policy/ diff --git a/published/20181001 How to Install Pip on Ubuntu.md b/published/20181001 How to Install Pip on Ubuntu.md new file mode 100644 index 0000000000..c873c79960 --- /dev/null +++ b/published/20181001 How to Install Pip on Ubuntu.md @@ -0,0 +1,168 @@ +如何在 Ubuntu 上安装 pip +====== + +**`pip` 是一个命令行工具,允许你安装 Python 编写的软件包。 学习如何在 Ubuntu 上安装 `pip` 以及如何使用它来安装 Python 应用程序。** + +有许多方法可以[在 Ubuntu 上安装软件][1]。 你可以从软件中心安装应用程序,也可以从下载的 DEB 文件、PPA(LCTT 译注:PPA 即 Personal Package Archives,个人软件包集)、[Snap 软件包][2],也可以使用 [Flatpak][3]、使用 [AppImage][4],甚至用旧的源代码安装方式。 + +还有一种方法可以在 [Ubuntu][5] 中安装软件包。 它被称为 `pip`,你可以使用它来安装基于 Python 的应用程序。 + +### 什么是 pip + +[pip][6] 代表 “pip Installs Packages”。 [pip][7] 是一个基于命令行的包管理系统。 用于安装和管理 [Python 语言][8]编写的软件。 + +你可以使用 `pip` 来安装 Python 包索引([PyPI][9])中列出的包。 + +作为软件开发人员,你可以使用 `pip` 为你自己的 Python 项目安装各种 Python 模块和包。 + +作为最终用户,你可能需要使用 `pip` 来安装一些 Python 开发的并且可以使用 `pip` 轻松安装的应用程序。 一个这样的例子是 [Stress Terminal][10] 应用程序,你可以使用 `pip` 轻松安装。 + +让我们看看如何在 Ubuntu 和其他基于 Ubuntu 的发行版上安装 `pip`。 + +### 如何在 Ubuntu 上安装 pip + +![Install pip on Ubuntu Linux][11] + +默认情况下,`pip` 未安装在 Ubuntu 上。 你必须首先安装它才能使用。 在 Ubuntu 上安装 `pip` 非常简单。 我马上展示给你。 + +Ubuntu 18.04 默认安装了 Python 2 和 Python 3。 因此,你应该为两个 Python 版本安装 `pip`。 + +`pip`,默认情况下是指 Python 2。`pip3` 代表 Python 3 中的 pip。 + +注意:我在本教程中使用的是 Ubuntu 18.04。 但是这里的教程应该适用于其他版本,如Ubuntu 16.04、18.10 等。你也可以在基于 Ubuntu 的其他 Linux 发行版上使用相同的命令,如 Linux Mint、Linux Lite、Xubuntu、Kubuntu 等。 + +#### 为 Python 2 安装 pip + +首先,确保已经安装了 Python 2。 在 Ubuntu 上,可以使用以下命令进行验证。 + +``` +python2 --version +``` + +如果没有错误并且显示了 Python 版本的有效输出,则说明安装了 Python 2。 所以现在你可以使用这个命令为 Python 2 安装 `pip`: + +``` +sudo apt install python-pip +``` + +这将安装 `pip` 和它的许多其他依赖项。 安装完成后,请确认你已正确安装了 `pip`。 + +``` +pip --version +``` + +它应该显示一个版本号,如下所示: + +``` +pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 
2.7) +``` + +这意味着你已经成功在 Ubuntu 上安装了 `pip`。 + +#### 为 Python 3 安装 pip + +你必须确保在 Ubuntu 上安装了 Python 3。 可以使用以下命令检查一下: + +``` +python3 --version +``` + +如果显示了像 Python 3.6.6 这样的数字,则说明 Python 3 在你的 Linux 系统上安装好了。 + +现在,你可以使用以下命令安装 `pip3`: + +``` +sudo apt install python3-pip +``` + +你应该使用以下命令验证 `pip3` 是否已正确安装: + +``` +pip3 --version +``` + +它应该显示一个这样的数字: + +``` +pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6) +``` + +这意味着 `pip3` 已成功安装在你的系统上。 + +### 如何使用 pip 命令 + +现在你已经安装了 `pip`,让我们快速看一些基本的 `pip` 命令。 这些命令将帮助你使用 `pip` 命令来搜索、安装和删除 Python 包。 + +要从 Python 包索引 PyPI 中搜索包,可以使用以下 `pip` 命令: + +``` +pip search +``` + +例如,如果你搜索“stress”这个词,将会显示名称或描述中包含字符串“stress”的所有包。 + +``` +pip search stress +stress (1.0.0) - A trivial utility for consuming system resources. +s-tui (0.8.2) - Stress Terminal UI stress test and monitoring tool +stressypy (0.0.12) - A simple program for calling stress and/or stress-ng from python +fuzzing (0.3.2) - Tools for stress testing applications. +stressant (0.4.1) - Simple stress-test tool +stressberry (0.1.7) - Stress tests for the Raspberry Pi +mobbage (0.2) - A HTTP stress test and benchmark tool +stresser (0.2.1) - A large-scale stress testing framework. +cyanide (1.3.0) - Celery stress testing and integration test support. +pysle (1.5.7) - An interface to ISLEX, a pronunciation dictionary with stress markings. +ggf (0.3.2) - global geometric factors and corresponding stresses of the optical stretcher +pathod (0.17) - A pathological HTTP/S daemon for testing and stressing clients. +MatPy (1.0) - A toolbox for intelligent material design, and automatic yield stress determination +netblow (0.1.2) - Vendor agnostic network testing framework to stress network failures +russtress (0.1.3) - Package that helps you to put lexical stress in russian text +switchy (0.1.0a1) - A fast FreeSWITCH control library purpose-built on traffic theory and stress testing. 
+nx4_selenium_test (0.1) - Provides a Python class and apps which monitor and/or stress-test the NoMachine NX4 web interface
+physical_dualism (1.0.0) - Python library that approximates the natural frequency from stress via physical dualism, and vice versa.
+fsm_effective_stress (1.0.0) - Python library that uses the rheological-dynamical analogy (RDA) to compute damage and effective buckling stress in prismatic shell structures.
+processpathway (0.3.11) - A nifty little toolkit to create stress-free, frustrationless image processing pathways from your webcam for computer vision experiments. Or observing your cat.
+```
+
+如果要使用 `pip` 安装应用程序,可以按以下方式使用它:
+
+```
+pip install <package_name>
+```
+
+`pip` 不支持使用 tab 键补全包名,因此包名称需要准确指定。 它将下载所有必需的文件并安装该软件包。
+
+如果要删除通过 `pip` 安装的 Python 包,可以使用 `pip` 中的 `uninstall` 选项。
+
+```
+pip uninstall <installed_package_name>
+```
+
+你可以在上面的命令中使用 `pip3` 代替 `pip`。
+
+我希望这个快速提示可以帮助你在 Ubuntu 上安装 `pip`。 如果你有任何问题或建议,请在下面的评论部分告诉我。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-pip-ubuntu/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[Flowsnow](https://github.com/Flowsnow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[1]: https://itsfoss.com/how-to-add-remove-programs-in-ubuntu/
+[2]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/
+[3]: https://itsfoss.com/flatpak-guide/
+[4]: https://itsfoss.com/use-appimage-linux/
+[5]: https://www.ubuntu.com/
+[6]: https://en.wikipedia.org/wiki/pip_(package_manager)
+[7]: https://pypi.org/project/pip/
+[8]: https://www.python.org/
+[9]: https://pypi.org/
+[10]: https://itsfoss.com/stress-terminal-ui/
+[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/install-pip-ubuntu.png
diff --git a/published/20181011 A Front-end For 
Popular Package Managers.md new file mode 100644 index 0000000000..db839d6475 --- /dev/null +++ b/published/20181011 A Front-end For Popular Package Managers.md @@ -0,0 +1,167 @@ +Sysget:给主流的包管理器加个前端 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/sysget-720x340.png) + +你是一个喜欢每隔几天尝试 Linux 操作系统的新发行版的发行版收割机吗?如果是这样,我有一些东西对你有用。 尝试 Sysget,这是一个类 Unix 操作系统中的流行软件包管理器的前端。 你不需要学习每个包管理器来执行基本的操作,例如安装、更新、升级和删除包。 你只需要对每个运行在类 Unix 操作系统上的包管理器记住一种语法即可。 Sysget 是包管理器的包装脚本,它是用 C++ 编写的。 源代码可在 GitHub 上免费获得。 + +使用 Sysget,你可以执行各种基本的包管理操作,包括: + +- 安装包, +- 更新包, +- 升级包, +- 搜索包, +- 删除包, +- 删除弃用包, +- 更新数据库, +- 升级系统, +- 清除包管理器缓存。 + +**给 Linux 学习者的一个重要提示:** + +Sysget 不会取代软件包管理器,绝对不适合所有人。如果你是经常切换到新 Linux 操作系统的新手,Sysget 可能会有所帮助。当在不同的 Linux 发行版中使用不同的软件包管理器时,就必须学习安装、更新、升级、搜索和删除软件包的新命令,这时 Sysget 就是帮助发行版收割机distro hopper(或新 Linux 用户)的包装脚本。 + +如果你是 Linux 管理员或想要学习 Linux 深层的爱好者,你应该坚持使用你的发行版的软件包管理器并学习如何使用它。 + +### 安装 Sysget + +安装 Sysget 很简单。 转到[发布页面][1]并下载最新的 Sysget 二进制文件并按如下所示进行安装。 在编写本指南时,Sysget 最新版本为1.2。 + +``` +$ sudo wget -O /usr/local/bin/sysget https://github.com/emilengler/sysget/releases/download/v1.2/sysget +$ sudo mkdir -p /usr/local/share/sysget +$ sudo chmod a+x /usr/local/bin/sysget +``` + +### 用法 + +Sysget 命令与 APT 包管理器大致相同,因此它应该适合新手使用。 + +当你第一次运行 Sysget 时,系统会要求你选择要使用的包管理器。 由于我在 Ubuntu,我选择了 apt-get。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/sysget-1.png) + +你必须根据正在运行的发行版选择正确的包管理器。 例如,如果你使用的是 Arch Linux,请选择 pacman。 对于 CentOS,请选择 yum。 对于 FreeBSD,请选择 pkg。 当前支持的包管理器列表是: + +1. apt-get (Debian) +2. xbps (Void) +3. dnf (Fedora) +4. yum (Enterprise Linux/Legacy Fedora) +5. zypper (OpenSUSE) +6. eopkg (Solus) +7. pacman (Arch) +8. emerge (Gentoo) +9. pkg (FreeBSD) +10. chromebrew (ChromeOS) +11. homebrew (Mac OS) +12. nix (Nix OS) +13. snap (Independent) +14. 
npm (Javascript, Global)
+
+如果你分配了错误的包管理器,则可以使用以下命令设置新的包管理器:
+
+```
+$ sudo sysget set yum
+Package manager changed to yum
+```
+
+只需确保你选择了本机包管理器。
+
+现在,你可以像使用本机包管理器一样执行包管理操作。
+
+要安装软件包,例如 Emacs,只需运行:
+
+```
+$ sudo sysget install emacs
+```
+
+上面的命令将调用本机包管理器(在我的例子中是 “apt-get”)并安装给定的包。
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Install-package-using-Sysget.png)
+
+同样,要删除包,只需运行:
+
+```
+$ sudo sysget remove emacs
+```
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Remove-package-using-Sysget.png)
+
+更新软件仓库(数据库):
+
+```
+$ sudo sysget update
+```
+
+搜索特定包:
+
+```
+$ sudo sysget search emacs
+```
+
+升级单个包:
+
+```
+$ sudo sysget upgrade emacs
+```
+
+升级所有包:
+
+```
+$ sudo sysget upgrade
+```
+
+移除废弃的包:
+
+```
+$ sudo sysget autoremove
+```
+
+清理包管理器的缓存:
+
+```
+$ sudo sysget clean
+```
+
+有关更多详细信息,请参阅帮助部分:
+
+```
+$ sysget help
+Help of sysget
+sysget [OPTION] [ARGUMENT]
+
+search [query] search for a package in the resporitories
+install [package] install a package from the repos
+remove [package] removes a package
+autoremove removes not needed packages (orphans)
+update update the database
+upgrade do a system upgrade
+upgrade [package] upgrade a specific package
+clean clean the download cache
+set [NEW MANAGER] set a new package manager
+```
+
+请记住,不同 Linux 发行版中的所有包管理器的 Sysget 语法都是相同的。 你不需要记住每个包管理器的命令。
+
+同样,我必须告诉你 Sysget 不是包管理器的替代品。 它只是类 Unix 系统中流行的包管理器的包装器,它只执行基本的包管理操作。
+
+Sysget 对于不想去学习不同包管理器的新命令的新手和发行版收割机用户可能有些用处。 如果你有兴趣,试一试,看看它是否有帮助。
+
+而且,这就是本次所有的内容了。 更多干货即将到来。 敬请关注!
+
+祝快乐!
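
Sysget 这类前端的核心思路,其实就是把一套统一的子命令,按所选后端映射为相应的本机包管理器命令。下面用一个极简的 shell 函数做个示意(注意:`pkgwrap` 这个函数名和映射表均为本文虚构的演示,并非 Sysget 的实际源码;为安全起见,这里只打印将要执行的命令,并不真正运行):

```shell
#!/bin/sh
# pkgwrap:一个 sysget 风格映射的极简示意(虚构示例,并非 sysget 实际实现)
# 用法:pkgwrap <后端包管理器> <统一子命令> [包名]
pkgwrap() {
    pm="$1"; cmd="$2"; pkg="$3"
    # 以“后端:子命令”为键,翻译成对应的本机命令
    case "$pm:$cmd" in
        apt-get:install) echo "apt-get install $pkg" ;;
        apt-get:remove)  echo "apt-get remove $pkg" ;;
        apt-get:search)  echo "apt-cache search $pkg" ;;
        pacman:install)  echo "pacman -S $pkg" ;;
        pacman:remove)   echo "pacman -R $pkg" ;;
        pacman:search)   echo "pacman -Ss $pkg" ;;
        *) echo "unsupported: $pm $cmd" >&2; return 1 ;;
    esac
}

# 同一条“统一语法”的命令,在不同后端下映射为不同的本机命令
pkgwrap apt-get install emacs   # 输出:apt-get install emacs
pkgwrap pacman search emacs     # 输出:pacman -Ss emacs
```

可以看到,用户始终只需要记住 `install`、`search` 这一套子命令,由包装函数负责翻译成各发行版的本机语法,这也正是 Sysget 只需记一种语法的原因。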
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/sysget-a-front-end-for-popular-package-managers/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/emilengler/sysget/releases diff --git a/sources/talk/20180117 How technology changes the rules for doing agile.md b/sources/talk/20180117 How technology changes the rules for doing agile.md index 1b67935509..c212d5cf87 100644 --- a/sources/talk/20180117 How technology changes the rules for doing agile.md +++ b/sources/talk/20180117 How technology changes the rules for doing agile.md @@ -1,3 +1,4 @@ +Translating by ranchong How technology changes the rules for doing agile ====== diff --git a/sources/talk/20180123 Moving to Linux from dated Windows machines.md b/sources/talk/20180123 Moving to Linux from dated Windows machines.md deleted file mode 100644 index 6acd6e53f2..0000000000 --- a/sources/talk/20180123 Moving to Linux from dated Windows machines.md +++ /dev/null @@ -1,50 +0,0 @@ -Moving to Linux from dated Windows machines -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/1980s-computer-yearbook.png?itok=eGOYEKK-) - -Every day, while working in the marketing department at ONLYOFFICE, I see Linux users discussing our office productivity software on the internet. Our products are popular among Linux users, which made me curious about using Linux as an everyday work tool. My old Windows XP-powered computer was an obstacle to performance, so I started reading about Linux systems (particularly Ubuntu) and decided to try it out as an experiment. Two of my colleagues joined me. - -### Why Linux? 
- -We needed to make a change, first, because our old systems were not enough in terms of performance: we experienced regular crashes, an overload every time more than two apps were active, a 50% chance of freezing when a machine was shut down, and so forth. This was rather distracting to our work, which meant we were considerably less efficient than we could be. - -Upgrading to newer versions of Windows was an option, too, but that is an additional expense, plus our software competes against Microsoft's office suite. So that was an ideological question, too. - -Second, as I mentioned earlier, ONLYOFFICE products are rather popular within the Linux community. By reading about Linux users' experience with our software, we became interested in joining them. - -A week after we asked to change to Linux, we got our shiny new computer cases with [Kubuntu][1] inside. We chose version 16.04, which features KDE Plasma 5.5 and many KDE apps including Dolphin, as well as LibreOffice 5.1 and Firefox 45. - -### What we like about Linux - -Linux's biggest advantage, I believe, is its speed; for instance, it takes just seconds from pushing the machine's On button to starting your work. Everything seemed amazingly rapid from the very beginning: the overall responsiveness, the graphics, and even system updates. - -One other thing that surprised me compared to Windows is that Linux allows you to configure nearly everything, including the entire look of your desktop. In Settings, I found how to change the color and shape of bars, buttons, and fonts; relocate any desktop element; and build a composition of widgets, even including comics and Color Picker. I believe I've barely scratched the surface of the available options and have yet to explore most of the customization opportunities that this system is well known for. - -Linux distributions are generally a very safe environment. People rarely use antivirus apps in Linux, simply because there are so few viruses written for it. 
You save system speed, time, and, sure enough, money. - -In general, Linux has refreshed our everyday work lives, surprising us with a number of new options and opportunities. Even in the short time we've been using it, we'd characterize it as: - - * Fast and smooth to operate - * Highly customizable - * Relatively newcomer-friendly - * Challenging with basic components, however very rewarding in return - * Safe and secure - * An exciting experience for everyone who seeks to refresh their workplace - - - -Have you switched from Windows or MacOS to Kubuntu or another Linux variant? Or are you considering making the change? Please share your reasons for wanting to adopt Linux, as well as your impressions of going open source, in the comments. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/move-to-linux-old-windows - -作者:[Michael Korotaev][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/michaelk -[1]:https://kubuntu.org/ diff --git a/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md b/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md index 293841714d..637a54ee91 100644 --- a/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md +++ b/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md @@ -1,3 +1,4 @@ +LuuMing translating 9 ways to improve collaboration between developers and designers ====== diff --git a/sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md b/sources/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md deleted file mode 100644 index 42f842a803..0000000000 --- a/sources/talk/20180915 Linux vs Mac- 7 Reasons 
Why Linux is a Better Choice than Mac.md +++ /dev/null @@ -1,133 +0,0 @@ -Translating by Ryze-Borgia - -Linux vs Mac: 7 Reasons Why Linux is a Better Choice than Mac -====== -Recently, we highlighted a few points about [why Linux is better than Windows][1]. Unquestionably, Linux is a superior platform. But, like other operating systems it has its drawbacks as well. For a very particular set of tasks (such as Gaming), Windows OS might prove to be better. And, likewise, for another set of tasks (such as video editing), a Mac-powered system might come in handy. It all trickles down to your preference and what you would like to do with your system. So, in this article, we will highlight a number of reasons why Linux is better than Mac. - -If you’re already using a Mac or planning to get one, we recommend you to thoroughly analyze the reasons and decide whether you want to switch/keep using Linux or continue using Mac. - -### 7 Reasons Why Linux is Better Than Mac - -![Linux vs Mac: Why Linux is a Better Choice][2] - -Both Linux and macOS are Unix-like OS and give access to Unix commands, BASH and other shells. Both of them have fewer applications and games than Windows. But the similarity ends here. - -Graphic designers and video editors swear by macOS whereas Linux is a favorite of developers, sysadmins and devops. - -So the question is should you use Linux over Mac? If yes, why? Let me give you some practical and some ideological reasons why Linux is better than Mac. - -#### 1\. Price - -![Linux vs Mac: Why Linux is a Better Choice][3] - -Let’s suppose, you use the system only to browse stuff, watch movies, download photos, write a document, create a spreadsheet, and other similar stuff. And, in addition to those activities, you want to have a secure operating system. - -In that case, you could choose to spend a couple of hundred bucks for a system to get things done. Or do you think spending more for a MacBook is a good idea? Well, you are the judge. 
- -So, it really depends on what you prefer. Whether you want to spend on a Mac-powered system or get a budget laptop/PC and install any Linux distro for free. Personally, I’ll be happy with a Linux system except for editing videos and music production. In that case, Final Cut Pro (for video editing) and Logic Pro X (for music production) will be my preference. - -#### 2\. Hardware Choices - -![Linux vs Mac: Why Linux is a Better Choice][4] - -Linux is free. You can install it on computers with any configuration. No matter how powerful/old your system is, Linux will work. [Even if you have an 8-year old PC laying around, you can have Linux installed and expect it to run smoothly by selecting the right distro][5]. - -But, Mac is as an Apple-exclusive. If you want to assemble a PC or get a budget laptop (with DOS) and expect to install Mac OS, it’s almost impossible. Mac comes baked in with the system Apple manufactures. - -There are [ways to install macOS on non Apple devices][6]. However, the kind of expertise and troubles it requires, it makes you question whether it’s worth the effort. - -You will have a wide range of hardware choices when you go with Linux but a minimal set of configurations when it comes to Mac OS. - -#### 3\. Security - -![Linux vs Mac: Why Linux is a Better Choice][7] - -A lot of people are all praises for iOS and Mac for being a secure platform. Well, yes, it is secure in a way (maybe more secure than Windows OS), but probably not as secure as Linux. - -I am not bluffing. There are malware and adware targeting macOS and the [number is growing every day][8]. I have seen not-so-techie users struggling with their slow mac. A quick investigation revealed that a [browser hijacking malware][9] was the culprit. - -There are no 100% secure operating systems and Linux is not an exception. There are vulnerabilities in the Linux world as well but they are duly patched by the timely updates provided by Linux distributions. 
- -Thankfully, we don’t have auto-running viruses or browser hijacking malwares in Linux world so far. And that’s one more reason why you should use Linux instead of a Mac. - -#### 4\. Customization & Flexibility - -![Linux vs Mac: Why Linux is a Better Choice][10] - -You don’t like something? Customize it or remove it. End of the story. - -For example, if you do not like the [Gnome desktop environment][11] on Ubuntu 18.04.1, you might as well change it to [KDE Plasma][11]. You can also try some of the [Gnome extensions][12] to enhance your desktop experience. You won’t find this level of freedom and customization on Mac OS. - -Besides, you can even modify the source code of your OS to add/remove something (which requires necessary technical knowledge) and create your own custom OS. Can you do that on Mac OS? - -Moreover, you get an array of Linux distributions to choose from as per your needs. For instance, if you need to mimic the workflow on Mac OS, [Elementary OS][13] would help. Do you want to have a lightweight Linux distribution installed on your old PC? We’ve got you covered in our list of [lightweight Linux distros][5]. Mac OS lacks this kind of flexibility. - -#### 5\. Using Linux helps your professional career [For IT/Tech students] - -![Linux vs Mac: Why Linux is a Better Choice][14] - -This is kind of controversial and applicable to students and job seekers in the IT field. Using Linux doesn’t make you a super-intelligent being and could possibly get you any IT related job. - -However, as you start using Linux and exploring it, you gain experience. As a techie, sooner or later you dive into the terminal, learning your way to move around the file system, installing applications via command line. You won’t even realize that you have learned the skills that newcomers in IT companies get trained on. - -In addition to that, Linux has enormous scope in the job market. There are so many Linux related technologies (Cloud, Kubernetes, Sysadmin etc.) 
you can learn, earn certifications and get a nice paying job. And to learn these, you have to use Linux. - -#### 6\. Reliability - -![Linux vs Mac: Why Linux is a Better Choice][15] - -Ever wondered why Linux is the best OS to run on any server? Because it is more reliable! - -But, why is that? Why is Linux more reliable than Mac OS? - -The answer is simple – more control to the user while providing better security. Mac OS does not provide you with the full control of its platform. It does that to make things easier for you simultaneously enhancing your user experience. With Linux, you can do whatever you want – which may result in poor user experience (for some) – but it does make it more reliable. - -#### 7\. Open Source - -![Linux vs Mac: Why Linux is a Better Choice][16] - -Open Source is something not everyone cares about. But to me, the most important aspect of Linux being a superior choice is its Open Source nature. And, most of the points discussed below are the direct advantages of an Open Source software. - -To briefly explain, you get to see/modify the source code yourself if it is an open source software. But, for Mac, Apple gets an exclusive control. Even if you have the required technical knowledge, you will not be able to independently take a look at the source code of Mac OS. - -In other words, a Mac-powered system enables you to get a car for yourself but the downside is you cannot open up the hood to see what’s inside. That’s bad! - -If you want to dive in deeper to know about the benefits of an open source software, you should go through [Ben Balter’s article][17] on OpenSource.com. - -### Wrapping Up - -Now that you’ve known why Linux is better than Mac OS. What do you think about it? Are these reasons enough for you to choose Linux over Mac OS? If not, then what do you prefer and why? - -Let us know your thoughts in the comments below. - -Note: The artwork here is based on Club Penguins. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/linux-vs-mac/ - -作者:[Ankush Das][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[1]: https://itsfoss.com/linux-better-than-windows/ -[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Linux-vs-mac-featured.png -[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-1.jpeg -[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-4.jpeg -[5]: https://itsfoss.com/lightweight-linux-beginners/ -[6]: https://hackintosh.com/ -[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-2.jpeg -[8]: https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html -[9]: https://www.imore.com/how-to-remove-browser-hijack -[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-3.jpeg -[11]: https://www.gnome.org/ -[12]: https://itsfoss.com/best-gnome-extensions/ -[13]: https://elementary.io/ -[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-5.jpeg -[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-6.jpeg -[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-7.jpeg -[17]: https://opensource.com/life/15/12/why-open-source diff --git a/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md b/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md index a862920028..324d3c8700 100644 --- a/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md +++ b/sources/talk/20180919 How Writing Can 
Expand Your Skills and Grow Your Career.md @@ -1,3 +1,4 @@ +translating by belitex How Writing Can Expand Your Skills and Grow Your Career ====== diff --git a/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md b/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md index e161ec4eec..971a91f94f 100644 --- a/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md +++ b/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md @@ -1,3 +1,5 @@ +thecyanbird translating + Linux Has a Code of Conduct and Not Everyone is Happy With it ====== **Linux kernel has a new code of conduct (CoC). Linus Torvalds took a break from Linux kernel development just 30 minutes after signing this code of conduct. And since **the writer of this code of conduct has had a controversial past,** it has now become a point of heated discussion. With all the politics involved, not many people are happy with this new CoC.** diff --git a/sources/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md b/sources/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md deleted file mode 100644 index 93c84ae43c..0000000000 --- a/sources/talk/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md +++ /dev/null @@ -1,126 +0,0 @@ -WinWorld – A Large Collection Of Defunct OSs, Software And Games -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/WinWorld-720x340.jpeg) - -The other day, I was testing **Dosbox** which is used to [**run MS-DOS games and programs in Linux**][1]. While searching for some classic programs like Turbo C++, I stumbled upon a website named **WinWorld**. I went through a few links in this site and quite surprised. 
WinWorld has a plenty of good-old and classic OSs, software, applications, development tools, games and a lot of other miscellaneous utilities which are abandoned by the developers a long time ago. It is an online museum run by community members, volunteers and is dedicated to the preservation and sharing of vintage, abandoned, and pre-release software. - -WinWorld was started back in 2003 and its founder claims that the idea to start this site inspired by Yahoo briefcases. The primary purpose of this site is to preserve and share old software. Over the years, many people volunteered to improve this site in numerous ways and the collection of old software in WinWorld has grown exponentially. The entire WinWorld library is free, open and available to everyone. - -### WinWorld Hosts A Huge Collection Of Defunct OSs, Software, System Applications And Games - -Like I already said, WinWorld hosts a huge collection of abandonware which are no-longer in development. - -**Linux and Unix:** - -Here, I have given the complete list of UNIX and LINUX OSs with brief summary of the each OS and the release year of first version. - - * **A/UX** – An early port of Unix to Apple’s 68k based Macintosh platform, released in 1988. - * **AIX** – A Unix port originally developed by IBM, released in 1986. - * **AT &T System V Unix** – One of the first commercial versions of the Unix OS, released in 1983. - * **Banyan VINES** – A network operating system originally designed for Unix, released in 1984. - * **Corel Linux** – A commercial Linux distro, released in 1999. - * **DEC OSF-1** – A version of UNIX developed by Digital Equipment Corporation (DEC), released in 1991. - * **Digital UNIX** – A renamed version of **OSF-1** , released by DEC in 1995.** -** - * **FreeBSD** **1.0** – The first release of FreeBSD, released in 1993. It is based on 4.3BSD. - * **Gentus Linux** – A distribution that failed to comply with GPL. Developed by ABIT and released in 2000. 
- * **HP-UX** – A UNIX variant, released in 1992. - * **IRIX** – An a operating system developed by Silicon Graphics Inc (SGI ) and it is released in 1988. - * **Lindows** – Similar to Corel Linux. It is developed for commercial purpose and released in 2002. - * **Linux Kernel** – A copy of the Linux Sourcecode, version 0.01. Released in the early 90’s. - * **Mandrake Linux** – A Linux distribution based on Red Hat Linux. It was later renamed to Mandriva. Released in 1999. - * **NEWS-OS** – A variant of BSD, developed by Sony and released in 1989. - * **NeXTStep** – A Unix based OS from NeXT computers headed by **Steve Jobs**. It is released in 1987. - * **PC/IX** – A UNIX variant created for IBM PCs. Released in 1984. - * **Red Hat Linux 5.0** – A commercial Linux distribution by Red Hat. - * **Sun Solaris** – A Unix based OS by Sun Microsystems. Released in 1992. - * **SunOS** – A Unix-based OS derived from BSD by Sun Microsystems, released in 1982. - * **Tru64 UNIX** – A formerly known OSF/1 by DEC. - * **Ubuntu 4.10** – The well-known OS based on Debian.This was a beta pre-release, prior to the very first official Ubuntu release. - * **Ultrix** – A UNIX clone developed by DEC. - * **UnixWare** – A UNIX variant from Novell. - * **Xandros Linux** – A proprietary variant of Linux. It is based on Corel Linux. The first version is released in 2003. - * **Xenix** – A UNIX variant originally published by Microsoft released in 1984. - - - -Not just Linux/Unix, you can find other operating systems including DOS, Windows, Apple/Mac, OS 2, Novell netware and other OSs and shells. 
- -**DOS & CP/M:** - - * 86-DOS - * Concurrent CPM-86 & Concurrent DOS - * CP/M 86 & CP/M-80 - * DOS Plus - * DR-DOS - * GEM - * MP/M - * MS-DOS - * Multitasking MS-DOS 4.00 - * Multiuser DOS - * PC-DOS - * PC-MOS - * PTS-DOS - * Real/32 - * Tandy Deskmate - * Wendin DOS - - - -**Windows:** - - * BackOffice Server - * Windows 1.0/2.x/3.0/3.1/95/98/2000/ME/NT 3.X/NT 4.0 - * Windows Whistler - * WinFrame - - - -**Apple/Mac:** - - * Mac OS 7/8/9 - * Mac OS X - * System Software (0-6) - - - -**OS/2:** - - * Citrix Multiuser - * OS/2 1.x - * OS/2 2.0 - * OS/2 3.x - * OS/2 Warp 4 - - - -Also, WinWorld hosts a huge collection of old software, system applications, development tools and games. Go and check them out as well. - -To be honest, I don’t even know the existence of most of the stuffs listed in this site. Some of the tools listed here were released years before I was born. - -Just in case, If you ever in need of or wanted to test a classic stuff (be it a game, software, OS), look nowhere, just head over to WinWorld library and download them that you want to explore. Good luck! - -**Disclaimer:** - -OSTechNix is not affiliated with WinWorld site in any way. We, at OSTechNix, don’t know the authenticity and integrity of the stuffs hosted in this site. Also, downloading software from third-party sites is not safe or may be illegal in your region. Neither the author nor OSTechNix is responsible for any kind of damage. Use this service at your own risk. - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/winworld-a-large-collection-of-defunct-oss-software-and-games/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/ diff --git a/sources/talk/20180921 IssueHunt- A New Bounty Hunting Platform for Open Source Software.md b/sources/talk/20180921 IssueHunt- A New Bounty Hunting Platform for Open Source Software.md new file mode 100644 index 0000000000..2deeb75547 --- /dev/null +++ b/sources/talk/20180921 IssueHunt- A New Bounty Hunting Platform for Open Source Software.md @@ -0,0 +1,66 @@ +IssueHunt: A New Bounty Hunting Platform for Open Source Software +====== +One of the issues that many open-source developers and companies struggle with is funding. There is an assumption, an expectation even, among the community that Free and Open Source Software must be provided free of cost. But even FOSS needs funding for continued development. How can we keep expecting better quality software if we don’t create systems that enable continued development? + +We already wrote an article about [open source funding platforms][1] out there that try to tackle this shortcoming, as of this July there is a new contender in the market that aims to help fill this gap: [IssueHunt][2]. + +### IssueHunt: A Bounty Hunting platform for Open Source Software + +![IssueHunt website][3] + +IssueHunt offers a service that pays freelance developers for contributing to open-source code. It does so through what are called bounties: financial rewards granted to whoever solves a given problem. 
The funding for these bounties comes from anyone who is willing to donate to have any given bug fixed or feature added.
+
+If there is a problem with a piece of open-source software that you want fixed, you can offer up a reward amount of your choosing to whoever fixes it.
+
+Do you want your own product snapped? Offer a bounty on IssueHunt to whoever snaps it. It’s as simple as that.
+
+And if you are a programmer, you can browse through open issues. Fix the issue (if you can), submit a pull request on the GitHub repository and if your pull request is merged, you get the money.
+
+#### IssueHunt was originally an internal project for Boostnote
+
+![IssueHunt][4]
+
+The product came to be when the developers behind the note-taking app [Boostnote][5] reached out to the community for contributions to their own product.
+
+In the first two years of utilizing IssueHunt, Boostnote received over 8,400 GitHub stars through hundreds of contributors and overwhelming donations.
+
+The product was so successful that the team decided to open it up to the rest of the community.
+
+Today, [a list of projects utilize this service][6], offering thousands of dollars in bounties among them.
+
+Boostnote boasts [$2,800 in total bounties][7], while Settings Sync, previously known as Visual Studio Code Settings Sync, offers [more than $1,600 in bounties.][8]
+
+There are other services that provide something similar to what IssueHunt is offering here. Perhaps the most notable is [Bountysource][9], which offers a similar bounty service to IssueHunt, while also offering subscription payment processing similar to [Liberapay][10].
+
+#### What do you think of IssueHunt?
+
+At the time of writing this article, IssueHunt is in its infancy, but I am incredibly excited to see where this project ends up in the coming years.
+
+I don’t know about you, but I am more than happy paying for FOSS.
If the product is high quality and adds value to my life, then I will happily pay the developer the product. Especially since FOSS developers are creating products that respect my freedom in the process. + +That being said, I will definitely keep my eye on IssueHunt moving forward for ways I can support the community either with my own money or by spreading the word where contribution is needed. + +But what do you think? Do you agree with me, or do you think software should be Gratis free, and that contributions should be made on a volunteer basis? Let us know what you think in the comments below. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/issuehunt/ + +作者:[Phillip Prado][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/phillip/ +[1]: https://itsfoss.com/open-source-funding-platforms/ +[2]: https://issuehunt.io +[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/issuehunt-website.png +[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/issuehunt.jpg +[5]: https://itsfoss.com/boostnote-linux-review/ +[6]: https://issuehunt.io/repos +[7]: https://issuehunt.io/repos/53266139 +[8]: https://issuehunt.io/repos/47984369 +[9]: https://www.bountysource.com/ +[10]: https://liberapay.com/ diff --git a/sources/talk/20180925 Troubleshooting Node.js Issues with llnode.md b/sources/talk/20180925 Troubleshooting Node.js Issues with llnode.md new file mode 100644 index 0000000000..3565b0270d --- /dev/null +++ b/sources/talk/20180925 Troubleshooting Node.js Issues with llnode.md @@ -0,0 +1,75 @@ +Troubleshooting Node.js Issues with llnode +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/node_1920.jpg?itok=Cwd2YtPd) + +The llnode plugin 
lets you inspect Node.js processes and core dumps; it adds the ability to inspect JavaScript stack frames, objects, source code and more. At [Node+JS Interactive][1], Matheus Marchini, Node.js Collaborator and Lead Software Engineer at Sthima, will host a [workshop][2] on how to use llnode to find and fix issues quickly and reliably, without bloating your application with logs or compromising performance. He explains more in this interview. + +**Linux.com: What are some common issues that happen with a Node.js application in production?** + +**Matheus Marchini:** One of the most common issues Node.js developers might experience -- either in production or during development -- is unhandled exceptions. They happen when your code throws an error, and this error is not properly handled. There's a variation of this issue with Promises, although in this case, the problem is worse: if a Promise is rejected but there's no handler for that rejection, the application might enter into an undefined state and it can start to misbehave. + +The application might also crash when it's using too much memory. This usually happens when there's a memory leak in the application, although we usually don't have classic memory leaks in Node.js. Instead of unreferenced objects, we might have objects that are not used anymore but are still retained by another object, leading the Garbage Collector to ignore them. If this happens with several objects, we can quickly exhaust our available memory. + +Memory is not the only resource that might get exhausted. Given the asynchronous nature of Node.js and how it scales for a large number of requests, the application might start to run out of other resources, such as open file descriptors or the number of concurrent connections to a database. + +Infinite loops are not that common because we usually catch those during development, but every once in a while one manages to slip through our tests and get into our production servers.
These are pretty catastrophic because they will block the main thread, rendering the entire application unresponsive. + +The last issues I'd like to point out are performance issues. Those can happen for a variety of reasons, ranging from unoptimized functions to I/O latency. + +**Linux.com: Are there any quick tests you can do to determine what might be happening with your Node.js application?** + +**Marchini:** Node.js and V8 have several tools and features built in which developers can use to find issues faster. For example, if you're facing performance issues, you might want to use the built-in [V8 CpuProfiler][3]. Memory issues can be tracked down with the [V8 Sampling Heap Profiler][4]. All of these options are interesting because you can open their results in Chrome DevTools and get some nice graphical visualizations by default. + +If you are using native modules on your project, V8 built-in tools might not give you enough insights, since they focus only on JavaScript metrics. As an alternative to V8 CpuProfiler, you can use system profiler tools, such as [perf for Linux][5] and DTrace for FreeBSD / OS X. You can grab the results from these tools and turn them into flamegraphs, making it easier to find which functions are taking more time to process. + +You can use third-party tools as well: [node-report][6] is an amazing first-failure data capture tool which doesn't introduce a significant overhead. When your application crashes, it will generate a report with detailed information about the state of the system, including environment variables, flags used, operating system details, etc. You can also generate this report on demand, and it is extremely useful when asking for help in forums, for example. The best part is that, after installing it through npm, you can enable it with a flag -- no need to make changes in your code! + +But one of the tools I'm most amazed by is [llnode][7].
+ +**Linux.com: When would you want to use something like llnode; and what exactly is it?** + +**Marchini:** llnode is useful when debugging infinite loops, uncaught exceptions or out-of-memory issues, since it allows you to inspect the state of your application when it crashed. How does llnode do this? You can tell Node.js and your operating system to take a core dump of your application when it crashes and load it into llnode. llnode will analyze this core dump and give you useful information such as how many objects were allocated in the heap, the complete stack trace for the process (including native calls and V8 internals), pending requests and handlers in the event loop queue, etc. + +The most impressive feature llnode has is its ability to inspect objects and functions: you can see which variables are available for a given function, look at the function's code and inspect which properties your objects have with their respective values. For example, you can look up which variables are available for your HTTP handler function and which parameters it received. You can also look at headers and the payload of a given request. + +llnode is a plugin for [lldb][8], and it uses lldb features alongside hints provided by V8 and Node.js to recreate the process heap. It uses a few heuristics, too, so results might not be entirely correct sometimes. But most of the time the results are good enough -- and way better than not using any tool. + +This technique -- which is called post-mortem debugging -- is not something new, though, and it has been part of the Node.js project since 2012. This is a common technique used by C and C++ developers, but not many dynamic runtimes support it. I'm happy we can say Node.js is one of those runtimes. + +**Linux.com: What are some key items folks should know before adding llnode to their environment?** + +**Marchini:** To install and use llnode you'll need to have lldb installed on your system.
If you're on OS X, lldb is installed as part of Xcode. On Linux, you can install it from your distribution's repository. We recommend using LLDB 3.9 or later. + +You'll also have to set up your environment to generate core dumps. First, remember to set the flag `--abort-on-uncaught-exception` when running a Node.js application; otherwise, Node.js won't generate a core dump when an uncaught exception happens. You'll also need to tell your operating system to generate core dumps when an application crashes. The most common way to do that is by running `ulimit -c unlimited`, but this will only apply to your current shell session. If you're using a process manager such as systemd, I suggest looking at the process manager docs. You can also generate on-demand core dumps of a running process with tools such as `gcore`. + +**Linux.com: What can we expect from llnode in the future?** + +**Marchini:** llnode collaborators are working on several features and improvements to make the project more accessible for developers less familiar with native debugging tools. To accomplish that, we're improving the overall user experience as well as the project's documentation and installation process. Future versions will include colorized output, more reliable output for some commands, and a simplified mode focused on JavaScript information. We are also working on a JavaScript API which can be used to automate some analysis, create graphical user interfaces, etc. + +If this project sounds interesting to you, and you would like to get involved, feel free to join the conversation in [our issues tracker][9] or contact me on social media [@mmarkini][10]. I would love to help you get started! + +Learn more at [Node+JS Interactive][1], coming up October 10-12, 2018 in Vancouver, Canada.
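Put together, the core dump setup described in this answer is only a few commands. A minimal sketch for a Linux shell; `app.js` stands in for your own crashing application, and the `llnode` invocation follows the project's README-style usage, so treat both as assumptions rather than exact commands:

```shell
# Allow core dumps in the current shell session (a process manager such as
# systemd configures this limit differently; check its docs).
ulimit -c unlimited
ulimit -c   # verify the new limit; prints "unlimited"

# The steps below assume Node.js, llnode (npm install -g llnode), and a
# crashing app.js, so they are left commented out here:
# node --abort-on-uncaught-exception app.js   # an uncaught throw now dumps core
# llnode node -c ./core                       # then try: v8 bt, v8 findjsobjects
```

The two `v8` subcommands shown in the comments print the complete (JavaScript plus native) backtrace and the objects allocated on the heap, which is the inspection workflow Marchini describes above.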
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/9/troubleshooting-nodejs-issues-llnode + +作者:[The Linux Foundation][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/ericstephenbrown +[1]: https://events.linuxfoundation.org/events/node-js-interactive-2018/?utm_source=Linux.com&utm_medium=article&utm_campaign=jsint18 +[2]: http://sched.co/G285 +[3]: https://nodejs.org/api/inspector.html#inspector_cpu_profiler +[4]: https://github.com/v8/sampling-heap-profiler +[5]: http://www.brendangregg.com/blog/2014-09-17/node-flame-graphs-on-linux.html +[6]: https://github.com/nodejs/node-report +[7]: https://github.com/nodejs/llnode +[8]: https://lldb.llvm.org/ +[9]: https://github.com/nodejs/llnode/issues +[10]: https://twitter.com/mmarkini diff --git a/sources/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md b/sources/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md new file mode 100644 index 0000000000..df8804cbac --- /dev/null +++ b/sources/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md @@ -0,0 +1,44 @@ +Creator of the World Wide Web is Creating a New Decentralized Web +====== +**Creator of the world wide web, Tim Berners-Lee has unveiled his plans to create a new decentralized web where the data will be controlled by the users.** + +[Tim Berners-Lee][1] is known for creating the world wide web, i.e., the internet you know today. More than two decades later, Tim is working to free the internet from the clutches of corporate giants and give the power back to the people via a decentralized web. 
+ +Berners-Lee was unhappy with the way ‘powerful forces’ of the internet handle users’ data for their own agendas. So he [started working on his own open source project][2] Solid “to restore the power and agency of individuals on the web.” + +> Solid changes the current model where users have to hand over personal data to digital giants in exchange for perceived value. As we’ve all discovered, this hasn’t been in our best interests. Solid is how we evolve the web in order to restore balance — by giving every one of us complete control over data, personal or not, in a revolutionary way. + +![Tim Berners-Lee is creating a decentralized web with open source project Solid][3] + +Basically, [Solid][4] is a platform built using the existing web where you create your own ‘pods’ (personal data stores). You decide where this pod will be hosted, who will access which data element and how the data will be shared through this pod. + +Berners-Lee believes that Solid “will empower individuals, developers and businesses with entirely new ways to conceive, build and find innovative, trusted and beneficial applications and services.” + +Developers need to integrate Solid into their apps and sites. Solid is still in the early stages, so there are no apps for now, but the project website claims that “the first wave of Solid apps are being created now.” + +Berners-Lee has created a startup called [Inrupt][5] and has taken a sabbatical from MIT to work full-time on Solid and to take it “from the vision of a few to the reality of many.” + +If you are interested in Solid, [learn how to create apps][6] or [contribute to the project][7] in your own way. Of course, it will take a lot of effort to build and drive the broad adoption of Solid, so every bit of contribution will count to the success of a decentralized web. + +Do you think a [decentralized web][8] will be a reality? What do you think of the decentralized web in general and project Solid in particular?
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/solid-decentralized-web/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://en.wikipedia.org/wiki/Tim_Berners-Lee +[2]: https://medium.com/@timberners_lee/one-small-step-for-the-web-87f92217d085 +[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/tim-berners-lee-solid-project.jpeg +[4]: https://solid.inrupt.com/ +[5]: https://www.inrupt.com/ +[6]: https://solid.inrupt.com/docs/getting-started +[7]: https://solid.inrupt.com/community +[8]: https://tech.co/decentralized-internet-guide-2018-02 diff --git a/sources/talk/20181003 13 tools to measure DevOps success.md b/sources/talk/20181003 13 tools to measure DevOps success.md new file mode 100644 index 0000000000..26abb21f05 --- /dev/null +++ b/sources/talk/20181003 13 tools to measure DevOps success.md @@ -0,0 +1,84 @@ +13 tools to measure DevOps success +====== +How's your DevOps initiative really going? Find out with open source tools +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI-) + +In today's enterprise, business disruption is all about agility with quality. Traditional processes and methods of developing software are challenged to keep up with the complexities that come with these new environments. Modern DevOps initiatives aim to help organizations use collaborations among different IT teams to increase agility and accelerate software application deployment. + +How is the DevOps initiative going in your organization? Whether or not it's going as well as you expected, you need to do assessments to verify your impressions. 
Measuring DevOps success is very important because these initiatives target the very processes that determine how IT works. DevOps also values measuring behavior, although those measurements are more about your business processes and less about your development and IT systems. + +A metrics-oriented mindset is critical to ensuring DevOps initiatives deliver the intended results. Data-driven decisions and focused improvement activities lead to increased quality and efficiency. Also, the use of feedback to accelerate delivery is one reason DevOps creates a successful IT culture. + +With DevOps, as with any IT initiative, knowing what to measure is always the first step. Let's examine how to use continuous delivery improvement and open source tools to assess your DevOps program on three key metrics: team efficiency, business agility, and security. These will also help you identify what challenges your organization has and what problems you are trying to solve with DevOps. + +### 3 tools for measuring team efficiency + +Team efficiency—in terms of how the DevOps initiative fits into your organization and how well it works for cultural innovation—is the hardest area to measure. The key metrics that enable the DevOps team to work more effectively on culture and organization are all about agile software development, such as knowledge sharing, prioritizing tasks, resource utilization, issue tracking, cross-functional teams, and collaboration. The following open source tools can help you improve and measure team efficiency: + + * [FunRetro][1] is a simple, intuitive tool that helps you collaborate across teams and improve what you do. + * [Kanboard][2] is a [kanban][3] board that helps you visualize your work in progress to focus on your goal. + * [Bugzilla][4] is a popular development tool with issue-tracking capabilities. + +### 6 tools for measuring business agility + +Speed is all that matters for accelerating business agility.
Because DevOps gives organizations capabilities to deliver software faster with fewer failures, it's fast gaining acceptance. The key metrics are deployment time, change lead time, release frequency, and failover time. Puppet's [2017 State of DevOps Report][5] shows that high-performing DevOps practitioners deploy code updates 46x more frequently, and high performers experience change lead times of under an hour, or 440x faster than average. Following are some open source tools to help you measure business agility: + + * [Kubernetes][6] is a container-orchestration system for automating deployment, scaling, and management of containerized applications. (Read more about [Kubernetes][7] on Opensource.com.) + * [CRI-O][8] is a lightweight container runtime for Kubernetes, used to manage and launch containerized workloads without relying on a traditional container engine. + * [Ansible][9] is a popular automation engine used to automate apps and IT infrastructure and run tasks, including installing and configuring applications. + * [Jenkins][10] is an automation tool used to automate the software development process with continuous integration. It facilitates the technical aspects of continuous delivery. + * [Spinnaker][11] is a multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. It combines a powerful and flexible pipeline management system with integrations to the major cloud providers. + * [Istio][12] is a service mesh that helps reduce the complexity of deployments and eases the strain on your development teams. + +### 4 tools for measuring security + +Security is always the last phase of measuring your DevOps initiative's success. Enterprises that have combined development and operations teams under a DevOps model are generally successful in releasing code at a much faster rate.
But this has increased the need for integrating security in the DevOps process (this is known as DevSecOps), because the faster you release code, the faster you release any vulnerabilities in it. + +Measuring security vulnerabilities early ensures that builds are stable before they pass to the next stage in the release pipeline. In addition, measuring security can help overcome resistance to DevOps adoption. You need tools that can help your dev and ops teams identify and prioritize vulnerabilities as they are using software, and teams must ensure they don't introduce vulnerabilities when making changes. These open source tools can help you measure security: + + * [Gauntlt][13] is a ruggedization framework that enables security testing by devs, ops, and security. + * [Vault][14] securely manages secrets and encrypts data in transit, including storing credentials and API keys and encrypting passwords for user signups. + * [Clair][15] is a project for static analysis of vulnerabilities in appc and Docker containers. + * [SonarQube][16] is a platform for continuous inspection of code quality. It performs automatic reviews with static analysis of code to detect bugs, code smells, and security vulnerabilities. + + + +**[See our related security article,[7 open source tools for rugged DevOps][17].]** + +Many DevOps initiatives start small. DevOps requires a commitment to a new culture and process rather than new technologies. That's why organizations looking to implement DevOps will likely need to adopt open source tools for collecting data and using it to optimize business success. 
In that case, highly visible, useful measurements will become an essential part of every DevOps initiative's success. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/devops-measurement-tools + +作者:[Daniel Oh][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/daniel-oh +[1]: https://funretro.io/ +[2]: http://kanboard.net/ +[3]: https://en.wikipedia.org/wiki/Kanban +[4]: https://www.bugzilla.org/ +[5]: https://puppet.com/resources/whitepaper/state-of-devops-report +[6]: https://kubernetes.io/ +[7]: https://opensource.com/resources/what-is-kubernetes +[8]: https://github.com/kubernetes-incubator/cri-o +[9]: https://github.com/ansible +[10]: https://jenkins.io/ +[11]: https://www.spinnaker.io/ +[12]: https://istio.io/ +[13]: http://gauntlt.org/ +[14]: https://www.hashicorp.com/blog/vault.html +[15]: https://github.com/coreos/clair +[16]: https://www.sonarqube.org/ +[17]: https://opensource.com/article/18/9/open-source-tools-rugged-devops diff --git a/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md b/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md new file mode 100644 index 0000000000..5a0c1aabdd --- /dev/null +++ b/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md @@ -0,0 +1,97 @@ +Interview With Peter Ganten, CEO of Univention GmbH +====== +I have been asking the Univention team to share the behind-the-scenes story of [**Univention**][1] for a couple of months. Finally, today we have an interview with **Mr. Peter H. Ganten**, CEO of Univention GmbH.
Despite his busy schedule, in this interview, he shares what he thinks of the Univention project and its impact on the open source ecosystem, what open source developers and companies will need to do to keep thriving, and what the biggest challenges are for open source projects. + +**OSTechNix: What’s your background and why did you found Univention?** + +**Peter Ganten:** I studied physics and psychology. In psychology I was a research assistant and coded evaluation software. I realized how important it is that results have to be disclosed in order to verify or falsify them. The same goes for the code that leads to the results. This brought me into contact with Open Source Software (OSS) and Linux. + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/peter-ganten-interview.jpg) + +I was a kind of technical lab manager and I had the opportunity to try out a lot, which led to my book about Debian. That was still in the New Economy era where the first business models emerged on how to make money with Open Source. When the bubble burst, I had the plan to make OSS a solid business model without venture capital but with Hanseatic business style – seriously, steadily, no bling bling. + +**What were the biggest challenges at the beginning?** + +When I came from the university, the biggest challenge clearly was to gain entrepreneurial and business management knowledge. I quickly learned that it’s not about Open Source software as an end in itself but always about customer value, and the benefits OSS offers its customers. We all had to learn a lot. + +In the beginning, we expected that Linux on the desktop would become established in a similar way as Linux on the server. However, this has not yet been proven true. The replacement has happened with Android and the iPhone. Our conclusion then was to change our offerings towards ID management and enterprise servers. + +**Why does UCS matter?
And for whom does it make sense to use it?** + +There is cool OSS in all areas, but many organizations are not capable of combining it all and making it manageable. For the basic infrastructure (Windows desktops, users, user rights, roles, ID management, apps) we need a central instance to which groupware, CRM etc. is connected. Without Univention this would have to be laboriously assembled and maintained manually. This is possible for very large companies, but far too complex for many other organizations. + +[**UCS**][2] can be used out of the box and is scalable. That’s why it’s becoming more and more popular – more than 10,000 organizations are using UCS already today. + +**Who are your users and most important clients? What do they love most about UCS?** + +The Core Edition is free of charge and used by organizations from all sectors and industries such as associations, micro-enterprises, universities or large organizations with thousands of users. In the enterprise environment, where Long Term Servicing (LTS) and professional support are particularly important, we have organizations ranging in size from 30-50 users to several thousand users. One of the target groups is the education system in Germany. In many large cities and within their school administrations UCS is used, for example, in Cologne, Hannover, Bremen, Kassel and in several federal states. They are looking for manageable IT and apps for schools. That’s what we offer, because we can guarantee these authorities full control over their users’ identities. + +Also, more and more cloud service providers and MSPs want to use UCS to deliver a selection of cloud-based app solutions. + +**Is UCS 100% Open Source? If so, how can you run a profitable business selling it?** + +Yes, UCS is 100% Open Source, every line, the whole code is OSS.
You can download and use UCS Core Edition for **FREE!** + +We know that in large, complex organizations, vendor support and liability are needed for LTS, SLAs, and we offer that with our Enterprise subscriptions and consulting services. We don’t offer these in the Core Edition. + +**And what are you giving back to the OS community?** + +A lot. We are involved in the Debian team and co-finance the LTS maintenance for Debian. For important OS components in UCS like [**OpenLDAP**][3], Samba or KVM we co-finance the development or have co-developed them ourselves. We make it all freely available. + +We are also involved on the political level in ensuring that OSS is used. We are engaged, for example, in the [**Free Software Foundation Europe (FSFE)**][4] and the [**German Open Source Business Alliance**][5], of which I am the chairman. We are working hard to make OSS more successful. + +**How can I get started with UCS?** + +It’s easy to get started with the Core Edition, which, like the Enterprise Edition, has an App Center and can be easily installed on your own hardware or as an appliance in a virtual machine. Just [**download the Univention ISO**][6] and install it as described at the link below. + +Alternatively, you can try the [**UCS Online Demo**][7] to get a first impression of Univention Corporate Server without actually installing it on your system. + +**What do you think are the biggest challenges for Open Source?** + +There is a certain attitude you can see over and over again even in bigger projects: OSS alone is viewed as an almost mandatory prerequisite for a good, sustainable, secure and trustworthy IT solution – but just having decided to use OSS is no guarantee for success. You have to carry out projects professionally and cooperate with the manufacturers. A danger is that in complex projects people think: “Oh, OSS is free, I just put it all together by myself”.
But normally you do not have the know-how to successfully implement complex software solutions. You would never proceed like this with Closed Source. There people think: “Oh, the software costs $3 million, so it’s okay if I have to spend another $300,000 on consultants.” + +With OSS this is different. If such projects fail and leave burnt ground behind, we have to explain again and again that the failure of such projects is not due to the nature of OSS but to its poor implementation and organization in a specific project: You have to conclude reasonable contracts and involve partners as in the proprietary world, but you’ll gain a better solution. + +Another challenge: We must stay innovative, move forward, attract new people who are enthusiastic about working on projects. That’s sometimes a challenge. For example, there are a number of proprietary cloud services that are good but lead to extremely high dependency. There are approaches to alternatives in OSS, but no suitable business models yet. So it’s hard to find and fund developers. For example, I can think of Evernote and OneNote for which there is no reasonable OSS alternative. + +**And what will the future bring for Univention?** + +I don’t have a crystal ball, but we are extremely optimistic. We see a very high growth potential in the education market. More OSS is being made in the public sector, because we have repeatedly experienced the dead ends that can be reached if we solely rely on Closed Source. + +Overall, we will continue our organic growth at double-digit rates year after year. + +UCS and its core functionalities of identity management, infrastructure management and app center will increasingly be offered and used from the cloud as a managed service. We will support our technology in this direction, e.g., through containers, so that a hypervisor or bare metal is not always necessary for operation. + +**You have been the CEO of Univention for a long time.
What keeps you motivated?** + +I have been the CEO of Univention for more than 16 years now. My biggest motivation is to realize that something is moving. That we offer the better way for IT. That the people who go this way with us are excited to work with us. I go home satisfied in the evening (of course not every evening). It’s totally cool to work with the team I have. It motivates and pushes me every time I need it. + +I’m a techie and nerd at heart, I enjoy dealing with technology. So I’m totally happy at this place and I’m grateful to the world that I can do whatever I want every day. Not everyone can say that. + +**Who gives you inspiration?** + +My employees, the customers and the Open Source projects. The exchange with other people. + +The motivation behind everything is that we want to make sure that mankind will be able to influence and change the IT that surrounds us today and in the future just the way we want it and think is good. We want to make a contribution to this. That is why Univention is there. That is important to us every day.
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/interview-with-peter-ganten-ceo-of-univention-gmbh/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/introduction-univention-corporate-server/ +[2]: https://www.univention.com/products/ucs/ +[3]: https://www.ostechnix.com/redhat-and-suse-announced-to-withdraw-support-for-openldap/ +[4]: https://fsfe.org/ +[5]: https://osb-alliance.de/ +[6]: https://www.univention.com/downloads/download-ucs/ +[7]: https://www.univention.com/downloads/ucs-online-demo/ diff --git a/sources/talk/20181008 3 areas to drive DevOps change.md b/sources/talk/20181008 3 areas to drive DevOps change.md new file mode 100644 index 0000000000..733158a81b --- /dev/null +++ b/sources/talk/20181008 3 areas to drive DevOps change.md @@ -0,0 +1,108 @@ +3 areas to drive DevOps change +====== +Driving large-scale organizational change is painful, but when it comes to DevOps, the payoff is worth the pain. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-inclusion-transformation-change_20180927.png?itok=2E-g10hJ) + +Pain avoidance is a powerful motivator. Some studies hint that even [plants experience a type of pain][1] and take steps to defend themselves. Yet we have plenty of examples of humans enduring pain on purpose—exercise often hurts, but we still do it. When we believe the payoff is worth the pain, we'll endure almost anything. + +The truth is that driving large-scale organizational change is painful. It hurts for those having to change their values and behaviors, it hurts for leadership, and it hurts for the people just trying to do their jobs. 
In the case of DevOps, though, I can tell you the pain is worth it. + +I've seen firsthand how teams learn they must spend time improving their technical processes, take ownership of their automation pipelines, and become masters of their fate. They gain the tools they need to be successful. + +![Improvements after DevOps transformation][3] + +Image by Lee Eason. CC BY-SA 4.0 + +This chart shows the value of that change. In a company where I directed a DevOps transformation, its 60+ teams submitted more than 900 requests per month to release management. If you add up the time those tickets stayed open, it came to more than 350 days per month. What could your company do with an extra 350 person-days per month? In addition to the improvements seen above, they went from 100 to 9,000 deployments per month, a 24% decrease in high-severity bugs, happier engineers, and improved net promoter scores (NPS). The biggest NPS improvements link to the teams furthest along on their DevOps journey, as the [Puppet State of DevOps][4] report predicted. The bottom line is that investments into technical process improvement translate into better business outcomes. + +DevOps leaders must focus on three main areas to drive this change: executives, culture, and team health. + +### Executives + +The larger your organization, the greater the distance (and opportunities for misunderstanding) between business leadership and the individuals delivering services to your customers. To make things worse, the landscape of tools and practices in technology is changing at an accelerating rate. This makes it practically impossible for business leaders to understand on their own how transformations like DevOps or agile work.
+
+DevOps leaders must help executives come along for the ride. Educating leaders gives them options when they're making decisions and makes it more likely they'll choose paths that help your company.
+
+For example, let's say your executives believe DevOps is going to improve how you deploy your products into production, but they don't understand how. You've been working with a software team to help automate their deployment. When an executive hears about a deploy failure (and there will be failures), they will want to understand how it occurred. When they learn the software team did the deployment rather than the release management team, they may try to protect the business by decreeing all production releases must go through traditional change controls. You will lose credibility, and teams will be far less likely to trust you and accept further changes.
+
+It takes longer to rebuild trust with executives and get their support after an incident than it would have taken to educate them in the first place. Put the time in upfront to build alignment, and it will pay off as you implement tactical changes.
+
+Two pieces of advice when building that alignment:
+
+  * First, **don't ignore any constraints** they raise. If they have worries about contracts or security, make the heads of legal and security your new best friends. By partnering with them, you'll build their trust and avoid making costly mistakes.
+  * Second, **use metrics to build a bridge** between what your delivery teams are doing and your executives' concerns.
If the business has a goal to reduce customer churn, and you know from research that many customers leave because of unplanned downtime, reinforce that your teams are committed to tracking and improving Mean Time To Detection and Resolution (MTTD and MTTR). You can use those key metrics to show meaningful progress that teams and executives understand and get behind.
+
+
+
+### Culture
+
+DevOps is a culture of continuous improvement focused on code, build, deploy, and operational processes. Culture describes the organization's values and behaviors. Essentially, we're talking about changing how people behave, which is never easy.
+
+I recommend reading [The Wolf in CIO's Clothing][5]. Spend time thinking about psychology and motivation. Read [Drive][6] or at least watch Daniel Pink's excellent [TED Talk][7]. Read [The Hero with a Thousand Faces][8] and learn to identify the different journeys everyone is on. If none of these things sound interesting, you are not the right person to drive change in your company. Otherwise, read on!
+
+Most rational people behave according to their values. Most organizations don't have explicit values everyone understands and lives by. Therefore, you'll need to identify the organization's values that have led to the behaviors that have led to the current state. You also need to make sure you can tell the story about how those values came to be and how they led to where you are. When you tell that story, be careful not to demonize those values—they aren't immoral or evil. People did the best they could at the time, given what they knew and what resources they had.
+
+Explain that the company and its organizational goals are changing, and the team must alter its values. It's helpful to express this in terms of contrast. For example, your company may have historically valued cost savings above all else. That value is there for a reason—the company was cash-strapped. To get new products out, the infrastructure group had to tightly couple services by sharing database clusters or servers. Over time, those practices created a real mess that became hard to maintain. Simple changes started breaking things in unexpected ways. This led to tight change-control processes that were painful for delivery teams, so they stopped changing things.
+
+Play that movie for five years, and you end up with little to no innovation, legacy technology, attraction and retention problems, and poor-quality products. You've grown the company, but you've hit a ceiling, and you can't continue to grow with those same values and behaviors. Now you must put engineering efficiency above cost savings. If one option will help teams maintain their service more easily, but the other option is cheaper in the short term, you go with the first option.
+
+You must tell this story again and again. Then you must celebrate any time a team expresses the new value through their behavior—even if they make a mistake. When a team has a deploy failure, congratulate them for taking the risk and encourage them to keep learning. Explain how their behavior is leading to the right outcome and support them. Over time, teams will see the message is real, and they'll feel safe altering their behavior.
+
+### Team health
+
+Have you ever been in a planning meeting and heard something like this: "We can't really estimate that story until John gets back from vacation. He's the only one who knows that area of the code well enough." Or: "We can't get this task done because it's got a cross-team dependency on network engineering, and the guy that set up the firewall is out sick." Or: "John knows that system best; if he estimated the story at a 3, then let's just go with that." When the team works on that story, who will most likely do the work? That's right, John will, and the cycle will continue.
+
+For a long time, we've accepted that this is just the nature of software development. If we don't solve for it, we perpetuate the cycle.
+
+Entropy will always drive teams naturally towards disorder and bad health. Our job as team members and leaders is to intentionally manage against that entropy and keep our teams healthy. Transformations like DevOps, agile, moving to the cloud, or refactoring a legacy application all amplify and accelerate that entropy. That's because transformations add new skills and expertise needed for the team to take on that new type of work.
+
+Let's look at an example of a product team refactoring its legacy monolith. As usual, they build those new services in AWS. The legacy monolith was deployed to the data center, monitored, and backed up by IT. IT made sure the application's infosec requirements were met at the infrastructure layer. They conducted disaster recovery tests, patched the servers, and installed and configured required intrusion detection and antivirus agents. And they kept change control records, required for the annual audit process, of everything that was done to the application's infrastructure.
+
+I often see product teams make the fatal mistake of thinking IT is all cost and bottleneck. They're hungry to shed the skin of IT and use the public cloud, but they never stop to appreciate the critical services IT provides.
Moving to the cloud means you implement these things differently; they don't go away. AWS is still a data center, and any team utilizing it accepts the related responsibilities.
+
+In practice, this means product teams must learn how to do those IT services when they move to the cloud. So, when our fictional product team starts refactoring its legacy application and putting new services in the cloud, it will need a vastly expanded skillset to be successful. Those skills don't magically appear—they're learned or hired—and team leaders and managers must actively manage the process.
+
+I built [Tekata.io][9] because I couldn't find any tools to support me as I helped my teams evolve. Tekata is free and easy to use, but the tool is not as important as the people and process. Make sure you build continuous learning into your cadence and keep track of your team's weak spots. Those weak spots affect your ability to deliver, and filling them usually involves learning new things, so there's a wonderful synergy here. In fact, 76% of millennials think professional development opportunities are [one of the most important elements][10] of company culture.
+
+### Proof is in the payoff
+
+DevOps transformations involve altering the behavior, and therefore the culture, of your teams. That must be done with executive support and understanding. At the same time, those behavior changes mean learning new skills, and that process must also be managed carefully. But the payoff for pulling this off is more productive teams, happier and more engaged team members, higher-quality products, and happier customers.
+
+Lee Eason will present [Tales From A DevOps Transformation][11] at [All Things Open][12], October 21-23 in Raleigh, N.C.
+
+Disclaimer: All opinions and statements in this article are exclusively those of Lee Eason and are not representative of Ipreo or IHS Markit.
+ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/tales-devops-transformation + +作者:[Lee Eason][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/leeeason +[b]: https://github.com/lujun9972 +[1]: https://link.springer.com/article/10.1007%2Fs00442-014-2995-6 +[2]: /file/411061 +[3]: https://opensource.com/sites/default/files/uploads/devops-delays.png (Improvements after DevOps transformation) +[4]: https://puppet.com/resources/whitepaper/state-of-devops-report +[5]: https://www.gartner.com/en/publications/wolf-cio +[6]: https://en.wikipedia.org/wiki/Drive:_The_Surprising_Truth_About_What_Motivates_Us +[7]: https://www.ted.com/talks/dan_pink_on_motivation?language=en#t-2094 +[8]: https://en.wikipedia.org/wiki/The_Hero_with_a_Thousand_Faces +[9]: https://tekata.io/ +[10]: https://www.execu-search.com/~/media/Resources/pdf/2017_Hiring_Outlook_eBook +[11]: https://allthingsopen.org/talk/tales-from-a-devops-transformation/ +[12]: https://allthingsopen.org/ diff --git a/sources/talk/20181009 4 best practices for giving open source code feedback.md b/sources/talk/20181009 4 best practices for giving open source code feedback.md new file mode 100644 index 0000000000..4cfb806525 --- /dev/null +++ b/sources/talk/20181009 4 best practices for giving open source code feedback.md @@ -0,0 +1,47 @@ +4 best practices for giving open source code feedback +====== +A few simple guidelines can help you provide better feedback. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_mail_box_envelope_send_blue.jpg?itok=6Epj47H6) + +In the previous article I gave you tips for [how to receive feedback][1], especially in the context of your first free and open source project contribution. 
Now it's time to talk about the other side of that same coin: providing feedback.
+
+If I tell you that something you did in your contribution is "stupid" or "naive," how would you feel? You'd probably be angry, hurt, or both, and rightfully so. These are mean-spirited words that, when directed at people, can cut like knives. Words matter, and they matter a great deal. Therefore, put as much thought into the words you use when leaving feedback for a contribution as you do into any other form of contribution you give to the project. As you compose your feedback, think to yourself, "How would I feel if someone said this to me? Is there some way someone might take this another way, a less helpful way?" If the answer to that last question has even the chance of being a yes, backtrack and rewrite your feedback. It's better to spend a little time rewriting now than to spend a lot of time apologizing later.
+
+When someone does make a mistake that seems like it should have been obvious, remember that we all have different experiences and knowledge. What's obvious to you may not be to someone else. And, if you recall, there once was a time when that thing was not obvious to you. We all make mistakes. We all typo. We all forget commas, semicolons, and closing brackets. Save yourself a lot of time and effort: Point out the mistake, but leave out the judgement. Stick to the facts. After all, if the mistake is that obvious, then no critique will be necessary, right?
+
+  1. **Avoid ad hominem comments.** Remember to review only the contribution and not the person who contributed it. That is to say, point out, "the contribution could be more efficient here in this way…" rather than, "you did this inefficiently." The latter is ad hominem feedback. Ad hominem is a Latin phrase meaning "to the person," which is where your feedback is being directed: to the person who contributed it rather than to the contribution itself.
By providing feedback on the person, you make that feedback personal, and the contributor is justified in taking it personally. Be careful when crafting your feedback to make sure you're addressing only the contents of the contribution and not accidentally criticizing the person who submitted it for review.
+
+  2. **Include positive comments.** Not all of your feedback has to (or should) be critical. As you review the contribution and you see something that you like, provide feedback on that as well. Several academic studies—including an important one by [Baumeister, Braslavsky, Finkenauer, and Vohs][2]—show that humans focus more on negative feedback than positive. When your feedback is solely negative, it can be very disheartening for contributors. Including positive reinforcement and feedback is motivating to people and helps them feel good about their contribution and the time they spent on it, which all adds up to them feeling more inclined to provide another contribution in the future. It doesn't have to be some gushing paragraph of flowery praise, but a quick, "Huh, that's a really smart way to handle that. It makes everything flow really well," can go a long way toward encouraging someone to keep contributing.
+
+  3. **Questions are feedback, too.** Praise is one less common but valuable type of review feedback. Questions are another. If you're looking at a contribution and can't tell why the submitter did things the way they did, or if the contribution just doesn't make a lot of sense to you, asking for more information acts as feedback. It tells the submitter that something they contributed isn't as clear as they thought and that it may need some work to make the approach more obvious, or if it's a code contribution, a comment to explain what's going on and why. A simple, "I don't understand this part here.
Could you please tell me what it's doing and why you chose that way?" can start a dialogue that leads to a contribution that's much easier for future contributors to understand and maintain. + + 4. **Expect a negotiation.** Using questions as a form of feedback implies that there will be answers to those questions, or perhaps other questions in response. Whether your feedback is in question or statement format, you should expect to generate some sort of dialogue throughout the process. An alternative is to see your feedback as incontrovertible, your word as law. Although this is definitely one approach you can take, it's rarely a good one. When providing feedback on a contribution, it's best to collaborate rather than dictate. As these dialogues arise, embracing them as opportunities for conversation and learning on both sides is important. Be willing to discuss their approach and your feedback, and to take the time to understand their perspective. + + + + +The bottom line is: Don't be a jerk. If you're not sure whether the feedback you're planning to leave makes you sound like a jerk, pause to have someone else review it before you click Send. Have empathy for the person at the receiving end of that feedback. While the maxim is thousands of years old, it still rings true today that you should try to do unto others as you would have them do unto you. Put yourself in their shoes and aim to be helpful and supportive rather than simply being right. + +_Adapted from[Forge Your Future with Open Source][3] by VM (Vicky) Brasseur, Copyright © 2018 The Pragmatic Programmers LLC. 
Reproduced with the permission of the publisher._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/best-practices-giving-open-source-code-feedback
+
+作者:[VM(Vicky) Brasseur][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/vmbrasseur
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/article/18/10/6-tips-receiving-feedback
+[2]: https://www.msudenver.edu/media/content/sri-taskforce/documents/Baumeister-2001.pdf
+[3]: http://www.pragprog.com/titles/vbopens
diff --git a/sources/talk/20181009 GCC- Optimizing Linux, the Internet, and Everything.md b/sources/talk/20181009 GCC- Optimizing Linux, the Internet, and Everything.md
new file mode 100644
index 0000000000..e7ac8d6c39
--- /dev/null
+++ b/sources/talk/20181009 GCC- Optimizing Linux, the Internet, and Everything.md
@@ -0,0 +1,82 @@
+GCC: Optimizing Linux, the Internet, and Everything
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gcc-paper.jpg?itok=QFNUZWsV)
+
+Software is useless if computers can't run it. Even the most talented developer is at the mercy of the compiler when it comes to run-time performance: if you don't have a reliable compiler toolchain, you can't build anything serious. The GNU Compiler Collection (GCC) provides a robust, mature, and high-performance partner to help you get the most out of your software. With decades of development by thousands of people, GCC is one of the most respected compilers in the world. If you are building applications and not using GCC, you are missing out on the best possible solution.
+
+GCC is the "de facto-standard open source compiler today" [1] according to LLVM.org and the foundation used to build complete systems - from the kernel upwards.
GCC supports over 60 hardware platforms, including ARM, Intel, AMD, IBM POWER, SPARC, HP PA-RISC, and IBM Z, as well as a variety of operating environments, including GNU, Linux, Windows, macOS, FreeBSD, NetBSD, OpenBSD, DragonFly BSD, Solaris, AIX, HP-UX, and RTEMS. It offers highly compliant C/C++ compilers and support for popular C libraries, such as the GNU C Library (glibc), Newlib, musl, and the C libraries included with various BSD operating systems, as well as front ends for the Fortran, Ada, and Go languages. GCC also functions as a cross compiler, creating executable code for a platform other than the one on which the compiler is running. GCC is the core component of the tightly integrated GNU toolchain, produced by the GNU Project, that includes glibc, Binutils, and the GNU Debugger (GDB).
+
+"My all-time favorite GNU tool is GCC, the GNU Compiler Collection. At a time when developer tools were expensive, GCC was the second GNU tool and the one that enabled a community to write and build all the others. This tool single-handedly changed the industry and led to the creation of the free software movement, since a good, free compiler is a prerequisite to a community creating software." —Dave Neary, Open Source and Standards team at Red Hat. [2]
+
+### Optimizing Linux
+
+As the default compiler for the Linux kernel source, GCC delivers trusted, stable performance along with the additional extensions needed to correctly build the kernel. GCC is a standard component of popular Linux distributions, such as Arch Linux, CentOS, Debian, Fedora, openSUSE, and Ubuntu, where it routinely compiles supporting system components. This includes the default libraries used by Linux (such as libc, libm, libintl, libssh, libssl, libcrypto, libexpat, libpthread, and ncurses), which depend on GCC to provide correctness and performance and are used by applications and system utilities to access Linux kernel features.
Many of the application packages included with a distribution are also built with GCC, such as Python, Perl, Ruby, nginx, Apache HTTP Server, OpenStack, Docker, and OpenShift. This combination of kernel, libraries, and application software translates into a large volume of code built with GCC for each Linux distribution. For the openSUSE distribution, nearly 100% of native code is built by GCC, including 6,135 source packages producing 5,705 shared libraries and 38,927 executables. This amounts to about 24,540 source packages compiled weekly. [3]
+
+The base version of GCC included in Linux distributions is used to create the kernel and libraries that define the system Application Binary Interface (ABI). User space developers have the option of downloading the latest stable version of GCC to gain access to advanced features, performance optimizations, and improvements in usability. Linux distributions offer installation instructions or prebuilt toolchains for deploying the latest version of GCC along with other GNU tools that help to enhance developer productivity and improve deployment time.
+
+### Optimizing the Internet
+
+GCC is one of the most widely adopted core compilers for embedded systems, enabling the development of software for the growing world of IoT devices. GCC offers a number of extensions that make it well suited for embedded systems software development, including fine-grained control using compiler built-ins, #pragmas, inline assembly, and application-focused command-line options. GCC supports a broad base of embedded architectures, including ARM, AMCC, AVR, Blackfin, MIPS, RISC-V, Renesas Electronics V850, and NXP and Freescale Power-based processors, producing efficient, high-quality code. The cross-compilation capability offered by GCC is critical to this community, and prebuilt cross-compilation toolchains [4] are a major requirement.
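As a small, hypothetical illustration of the extensions just mentioned (the function names and the `likely` macro are invented for this sketch, not taken from any particular project), the following C snippet combines a compiler built-in, a #pragma, and a portable piece of inline assembly; it builds with any recent GCC:

```c
#include <stdint.h>

/* Branch-prediction hint built from a GCC built-in. */
#define likely(x) __builtin_expect(!!(x), 1)

/* A pragma requesting a specific optimization level for the functions below. */
#pragma GCC optimize ("O2")

/* __builtin_popcount typically maps to a single instruction. */
static int count_set_bits(uint32_t v)
{
    return __builtin_popcount(v);
}

/* A compiler barrier written as inline assembly: it emits no
 * instructions but stops the compiler from reordering memory
 * accesses across it, a common idiom in embedded code. */
static void compiler_barrier(void)
{
    __asm__ volatile ("" ::: "memory");
}

int process_flags(uint32_t flags)
{
    if (likely(flags != 0)) {
        compiler_barrier();
        return count_set_bits(flags);
    }
    return 0;
}
```

The same file can be cross-compiled for an embedded target simply by invoking the target toolchain's driver, for example `arm-none-eabi-gcc -mcpu=cortex-m4 -O2 -c` with the GNU ARM Embedded toolchain installed.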
For example, the GNU ARM Embedded toolchains are integrated and validated packages featuring the Arm Embedded GCC compiler, libraries, and other tools necessary for bare-metal software development. These toolchains are available for cross-compilation on Windows, Linux, and macOS host operating systems and target the popular ARM Cortex-R and Cortex-M processors, which have shipped in tens of billions of internet-capable devices. [5]
+
+GCC empowers cloud computing, providing a reliable development platform for software that needs to directly manage computing resources, like database and web serving engines and backup and security software. GCC is fully compliant with C++11 and C++14 and offers experimental support for C++17 and C++2a [6], creating performant object code with solid debugging information. Some examples of applications that utilize GCC include: MySQL Database Management System, which requires GCC for Linux [7]; the Apache HTTP Server, which recommends using GCC [8]; and Bacula, an enterprise-ready network backup tool that requires GCC. [9]
+
+### Optimizing Everything
+
+For the research and development of the scientific codes used in High Performance Computing (HPC), GCC offers mature C, C++, and Fortran front ends as well as support for OpenMP and OpenACC APIs for directive-based parallel programming. Because GCC offers portability across computing environments, it enables code to be more easily targeted and tested across a variety of new and legacy client and server platforms. GCC offers full support for OpenMP 4.0 for C, C++ and Fortran compilers and full support for OpenMP 4.5 for C and C++ compilers. For OpenACC, GCC supports most of the 2.5 specification and performance optimizations and is the only non-commercial, nonacademic compiler to provide [OpenACC][1] support.
+
+Code performance is an important parameter to this community, and GCC offers a solid performance base. A Nov.
2017 paper published by Colfax Research evaluates C++ compilers for the speed of compiled code parallelized with OpenMP 4.x directives and for compilation time. Figure 1 plots the relative performance of the computational kernels when compiled by the different compilers and run with a single thread. The performance values are normalized so that the performance of G++ is equal to 1.0.
+
+![performance][3]
+
+Figure 1. Relative performance of each kernel as compiled by the different compilers (single-threaded; higher is better).
+
+[Used with permission][4]
+
+The paper summarizes: "the GNU compiler also does very well in our tests. G++ produces the second fastest code in three out of six cases and is amongst the fastest compiler in terms of compile time." [10]
+
+### Who Is Using GCC?
+
+In JetBrains' 2018 State of Developer Ecosystem Survey of 6,000 developers, GCC is regularly used by 66% of C++ programmers and 73% of C programmers. [11] Here is a quick summary of the benefits of GCC that make it so popular with the developer community.
+
+  * For developers required to write code for a variety of new and legacy computing platforms and operating environments, GCC delivers support for the broadest range of hardware and operating environments. Compilers offered by hardware vendors focus mainly on support for their products, and other open source compilers are much more limited in the hardware and operating systems supported. [12]
+
+  * There is a wide variety of GCC-based prebuilt toolchains, which has particular appeal to embedded systems developers. This includes the GNU ARM Embedded toolchains and 138 pre-compiled cross compiler toolchains available on the Bootlin web site. [13] While other open source compilers, such as Clang/LLVM, can replace GCC in existing cross compiling toolchains, these would need to be completely rebuilt by the developer.
[14]
+
+  * GCC delivers to application developers trusted, stable performance from a mature compiler platform. The GCC 8/9 vs. LLVM Clang 6/7 Compiler Benchmarks On AMD EPYC article provides results of 49 benchmarks run across the four tested compilers at three optimization levels. Coming in first 34% of the time was GCC 8.2 RC1 using the "-O3 -march=native" level, while at the same optimization level LLVM Clang 6.0 came in second, winning 20% of the time. [15]
+
+  * GCC delivers improved diagnostics for compile-time debugging [16] and accurate and useful information for runtime debugging. GCC is tightly integrated with GDB, a mature, feature-complete tool that offers ‘non-stop’ debugging that can stop a single thread at a breakpoint.
+
+  * GCC is a well-supported platform with an active, committed community that supports the current and two previous releases. With releases scheduled yearly, this provides two years of support for a version.
+
+
+
+
+### GCC: Continuing to Optimize Linux, the Internet, and Everything
+
+GCC continues to move forward as a world-class compiler. The most current version of GCC is 8.2, which was released in July 2018 and added hardware support for upcoming Intel CPUs, more ARM CPUs, and improved performance for AMD’s ZEN CPU. Initial C17 support has been added along with initial work towards C++2A. Diagnostics have continued to be enhanced, including better emitted diagnostics with improved locations, location ranges, and fix-it hints, particularly in the C++ front end. A blog written by David Malcolm of Red Hat in March 2018 provides an overview of usability improvements in GCC 8. [17]
+
+New hardware platforms continue to rely on the GCC toolchain for software development, such as RISC-V, a free and open ISA that is of interest to machine learning, Artificial Intelligence (AI), and IoT market segments. GCC continues to be a critical component in the continuing development of Linux systems.
The Clear Linux Project for Intel Architecture, an emerging distribution built for cloud, client, and IoT use cases, provides a good example of how GCC compiler technology is being used and improved to boost the performance and security of a Linux-based system. GCC is also being used for application development for Microsoft's Azure Sphere, a Linux-based operating system for IoT applications that initially supports the ARM-based MediaTek MT3620 processor. In terms of developing the next generation of programmers, GCC is also a core component of the Windows toolchain for Raspberry Pi, the low-cost embedded board running Debian-based GNU/Linux that is used to promote the teaching of basic computer science in schools and developing countries.
+
+GCC was first released on March 22, 1987, by Richard Stallman, the founder of the GNU Project, and was considered a significant breakthrough since it was the first portable ANSI C optimizing compiler released as free software. GCC is maintained by a community of programmers from all over the world under the direction of a steering committee that ensures broad, representative oversight of the project. GCC’s community approach is one of its strengths, resulting in a large and diverse community of developers and users that contribute to and provide support for the project. According to Open Hub, GCC "is one of the largest open-source teams in the world, and is in the top 2% of all project teams on Open Hub." [18]
+
+There has been a lot of discussion about the licensing of GCC, most of which confuses rather than enlightens. GCC is distributed under the GNU General Public License version 3 or later with the Runtime Library Exception. This is a copyleft license, which means that derivative works can only be distributed under the same license terms. GPLv3 is intended to protect GCC from being made proprietary and requires that changes to GCC code are made available freely and openly.
To the ‘end user’ the compiler is just the same as any other; using GCC makes no difference to any licensing choices you might make for your own code. [19] + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/10/gcc-optimizing-linux-internet-and-everything + +作者:[Margaret Lewis][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/margaret-lewis +[b]: https://github.com/lujun9972 +[1]: https://www.openacc.org/tools +[2]: /files/images/gccjpg-0 +[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gcc_0.jpg?itok=HbGnRqWX (performance) +[4]: https://www.linux.com/licenses/category/used-permission diff --git a/sources/talk/20181010 Talk over text- Conversational interface design and usability.md b/sources/talk/20181010 Talk over text- Conversational interface design and usability.md new file mode 100644 index 0000000000..e9d76f9ef4 --- /dev/null +++ b/sources/talk/20181010 Talk over text- Conversational interface design and usability.md @@ -0,0 +1,105 @@ +Talk over text: Conversational interface design and usability +====== +To make conversational interfaces more human-centered, we must free our thinking from the trappings of web and mobile design. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q) + +Conversational interfaces are unique among the screen-based and physically manipulated user interfaces that characterize the range of digital experiences we encounter on a daily basis. As [Conversational Design][1] author Erika Hall eloquently writes, "Conversation is not a new interface. It's the oldest interface." 
And the conversation, the most human interaction of all, lies at the nexus of the aural and verbal rather than the visual and physical. This makes it particularly challenging for machines to meet the high expectations we tend to have when it comes to typical human conversations. + +How do we design for conversational interfaces, which run the gamut from omnichannel chatbots on our websites and mobile apps to mono-channel voice assistants on physical devices such as the Amazon Echo and Google Home? What recommendations do other experts on conversational design and usability have when it comes to crafting the most robust chatbot or voice interface possible? In this overview, we focus on three areas: information architecture, design, and usability testing. + +### Information architecture: Trees, not sitemaps + +Consider the websites we visit and the visual interfaces we use regularly. Each has a navigational tool, whether it is a list of links or a series of buttons, that helps us gain some understanding of the interface. In a web-optimized information architecture, we can see the entire hierarchy of a website and its contents in the form of such navigation bars and sitemaps. + +On the other hand, in a conversational information architecture—whether articulated in a chatbot or a voice assistant—the structure of our interactions must be provided to us in a simple and straightforward way. For instance, in lieu of a navigation bar that has links to pages like About, Menu, Order, and Locations with further links underneath, we can create a conversational means of describing how to navigate the options we wish to pursue. + +Consider the differences between the two examples of navigation below. 
+

| **Web-based navigation:** Present all options in the navigation bar | **Conversational navigation:** Present only certain top-level options to access deeper options |
|---------------------------------------------------------------------|------------------------------------------------------------------------------------------------|
| • Floss's Pizza | • "Welcome to Floss's Pizza!" |
| • About | • "To learn more about us, say About" |
| ◦ Team | • "To hear our menu, say Menu" |
| ◦ Our story | • "To place an order, say Order" |
| • Menu | • "To find out where we are, say Where" |
| ◦ Pizzas | |
| ◦ Pastas | |
| ◦ Platters | |
| • Order | |
| ◦ Pickup | |
| ◦ Delivery | |
| • Where we are | |
| ◦ Area map | |

In a conversational context, an appropriate information architecture that focuses on decision trees is of paramount importance, because one of the biggest issues many conversational interfaces face is excessive verbosity. By avoiding information overload, prizing structural simplicity, and prescribing one-word directions, your users can traverse conversational interfaces without any additional visual aid.

### Design: Finessing flows and language

![Well-designed language example][3]

An example of well-designed language that encapsulates Hall's conversational key moments.

In her book Conversational Design, Hall emphasizes the need for all conversational interfaces to adhere to conversational maxims outlined by Paul Grice and advanced by Robin Lakoff. These conversational maxims highlight the characteristics every conversational interface should have to succeed: quantity (just enough information but not too much), quality (truthfulness), relation (relevance), manner (concision, orderliness, and lack of ambiguity), and politeness (Lakoff's addition).
+ +In the process, Hall spotlights four key moments that build trust with users of conversational interfaces and give them all of the information they need to interact successfully with the conversational experience, whether it is a chatbot or a voice assistant. + + * **Introduction:** Invite the user's interest and encourage trust with a friendly but brief greeting that welcomes them to an unfamiliar interface. + + * **Orientation:** Offer system options, such as how to exit out of certain interactions, and provide a list of options that help the user achieve their goal. + + * **Action:** After each response from the user, offer a new set of tasks and corresponding controls for the user to proceed with further interaction. + + * **Guidance:** Provide feedback to the user after every response and give clear instructions. + + + + +Taken as a whole, these key moments indicate that good conversational design obligates us to consider how we write machine utterances to be both inviting and informative and to structure our decision flows in such a way that they flow naturally to the user. In other words, rather than visual design chops or an eye for style, conversational design requires us to be good writers and thoughtful architects of decision trees. + +![Decision flow example ][5] + +An example decision flow that adheres to Hall's key moments. + +One metaphor I use on a regular basis to conceive of each point in a conversational interface that presents a choice to the user is the dichotomous key. In tree science, dichotomous keys are used to identify trees in their natural habitat through certain salient characteristics. What makes dichotomous keys special, however, is the fact that each card in a dichotomous key only offers two choices (hence the moniker "dichotomous") with a clearly defined characteristic that cannot be mistaken for another. 
Eventually, after enough dichotomous choices have been made, we can winnow down the available options to the correct genus of tree. + +We should design conversational interfaces in the same way, with particular attention given to disambiguation and decision-making that never verges on too much complexity. Because conversational interfaces require deeply nested hierarchical structures to reach certain outcomes, we can never be too helpful in the instructions and options we offer our users. + +### Usability testing: Dialogues, not dialogs + +Conversational usability is a relatively unexplored and less-understood area because it is frequently based on verbal and aural interactions rather than visual or physical ones. Whereas chatbots can be evaluated for their usability using traditional means such as think-aloud, voice assistants and other voice-driven interfaces have no such luxury. + +For voice interfaces, we are unable to pursue approaches involving eye-tracking or think-aloud, since these interfaces are purely aural and users' utterances outside of responses to interface prompts can introduce bad data. For this reason, when our Acquia Labs team built [Ask GeorgiaGov][6], the first Alexa skill for residents of the state of Georgia, we chose retrospective probing (RP) for our usability tests. + +In retrospective probing, the conversational interaction proceeds until the completion of the task, at which point the user is asked about their impressions of the interface. Retrospective probing is well-positioned for voice interfaces because it allows the conversation to proceed unimpeded by interruptions such as think-aloud feedback. Nonetheless, it does come with the disadvantage of suffering from our notoriously unreliable memories, as it forces us to recollect past interactions rather than ones we completed immediately before recollection. 
+ +### Challenges and opportunities + +Conversational interfaces are here to stay in our rapidly expanding spectrum of digital experiences. Though they enrich the range of ways we have to engage users, they also present unprecedented challenges when it comes to information architecture, design, and usability testing. With the help of previous work such as Grice's conversational maxims and Hall's key moments, we can design and build effective conversational interfaces by focusing on strong writing and well-considered decision flows. + +The fact that conversation is the oldest and most human of interfaces is also edifying when we approach other user interfaces that lack visual or physical manipulation. As Hall writes, "The ideal interface is an interface that's not noticeable at all." Whether or not we will eventually reach the utopian outcome of conversational interfaces that feel completely natural to the human ear, we can make conversational interfaces more human-centered by freeing our thinking from the trappings of web and mobile. + +Preston So will present [Talk Over Text: Conversational Interface Design and Usability][7] at [All Things Open][8], October 21-23 in Raleigh, North Carolina. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/conversational-interface-design-and-usability + +作者:[Preston So][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/prestonso +[b]: https://github.com/lujun9972 +[1]: https://abookapart.com/products/conversational-design +[2]: /file/411001 +[3]: https://opensource.com/sites/default/files/uploads/conversational-interfaces_1.png (Well-designed language example) +[4]: /file/411006 +[5]: https://opensource.com/sites/default/files/uploads/conversational-interfaces_2.png (Decision flow example ) +[6]: https://www.acquia.com/blog/ask-georgiagov-alexa-skill-citizens-georgia-acquia-labs/12/10/2017/3312516 +[7]: https://allthingsopen.org/talk/talk-over-text-conversational-interface-design-and-usability/ +[8]: https://allthingsopen.org/ diff --git a/sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md b/sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md index 81e3acacdb..6a4d1f4828 100644 --- a/sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md +++ b/sources/tech/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md @@ -1,7 +1,3 @@ -# sober-wang 翻译中 - - - 30 Best Sources For Linux / *BSD / Unix Documentation On the We ====== diff --git a/sources/tech/20140607 Five things that make Go fast.md b/sources/tech/20140607 Five things that make Go fast.md deleted file mode 100644 index 88db93011c..0000000000 --- a/sources/tech/20140607 Five things that make Go fast.md +++ /dev/null @@ -1,493 +0,0 @@ -Five things that make Go fast -============================================================ - - _Anthony Starks has remixed my original Google Present 
based slides using his fantastic Deck presentation tool. You can check out his remix over on his blog, [mindchunk.blogspot.com.au/2014/06/remixing-with-deck][5]._
-
-* * *
-
-I was recently invited to give a talk at Gocon, a fantastic Go conference held semi-annually in Tokyo, Japan. [Gocon 2014][6] was an entirely community-run, one-day event combining training and an afternoon of presentations surrounding the theme of Go in production.
-
-The following is the text of my presentation. The original text was structured to force me to speak slowly and clearly, so I have taken the liberty of editing it slightly to be more readable.
-
-I want to thank [Bill Kennedy][7], Minux Ma, and especially [Josh Bleecher Snyder][8], for their assistance in preparing this talk.
-
-* * *
-
-Good afternoon.
-
-My name is David.
-
-I am delighted to be here at Gocon today. I have wanted to come to this conference for two years and I am very grateful to the organisers for extending me the opportunity to present to you today.
-
- [![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg)][9]
-I want to begin my talk with a question.
-
-Why are people choosing to use Go?
-
-When people talk about their decision to learn Go, or use it in their product, they have a variety of answers, but there are always three that are at the top of their list.
-
- [![Gocon 2014 ](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-2.jpg)][10]
-These are the top three.
-
-The first: Concurrency.
-
-Go’s concurrency primitives are attractive to programmers who come from single-threaded scripting languages like Node.js, Ruby, or Python, or from languages like C++ or Java with their heavyweight threading model.
-
-Ease of deployment.
-
-We have heard today from experienced Gophers who appreciate the simplicity of deploying Go applications.
-
- [![Gocon 2014](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg)][11]
-
-This leaves Performance.
-
-I believe an important reason why people choose to use Go is that it is _fast_.
-
- [![Gocon 2014 (4)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg)][12]
-
-For my talk today, I want to discuss five features that contribute to Go’s performance.
-
-I will also share with you the details of how Go implements these features.
-
- [![Gocon 2014 (5)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg)][13]
-
-The first feature I want to talk about is Go’s efficient treatment and storage of values.
-
- [![Gocon 2014 (6)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg)][14]
-
-This is an example of a value in Go. When compiled, `gocon` consumes exactly four bytes of memory.
-
-Let’s compare Go with some other languages.
-
- [![Gocon 2014 (7)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg)][15]
-
-Due to the overhead of the way Python represents variables, storing the same value using Python consumes six times more memory.
-
-This extra memory is used by Python to track type information, do reference counting, etc.
-
-Let’s look at another example:
-
- [![Gocon 2014 (8)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-8.jpg)][16]
-
-Similar to Go, the Java `int` type consumes 4 bytes of memory to store this value.
-
-However, to use this value in a collection like a `List` or `Map`, the compiler must convert it into an `Integer` object.
-
- [![Gocon 2014 (9)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-9.jpg)][17]
-
-So an integer in Java frequently looks more like this and consumes between 16 and 24 bytes of memory.
-
-Why is this important? Memory is cheap and plentiful, so why should this overhead matter?
-
- [![Gocon 2014 (10)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-10.jpg)][18]
-
-This is a graph showing CPU clock speed vs memory bus speed.
-
-Notice how the gap between CPU clock speed and memory bus speed continues to widen.
-
-The difference between the two is effectively how much time the CPU spends waiting for memory.
-
- [![Gocon 2014 (11)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-11.jpg)][19]
-
-Since the late 1960s, CPU designers have understood this problem.
-
-Their solution is a cache, an area of smaller, faster memory which is inserted between the CPU and main memory.
-
- [![Gocon 2014 (12)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-12.jpg)][20]
-
-This is a `Location` type which holds the location of some object in three-dimensional space. It is written in Go, so each `Location` consumes exactly 24 bytes of storage.
-
-We can use this type to construct an array type of 1,000 `Location`s, which consumes exactly 24,000 bytes of memory.
-
-Inside the array, the `Location` structures are stored sequentially, rather than as pointers to 1,000 `Location` structures stored randomly.
-
-This is important because now all 1,000 `Location` structures are in the cache in sequence, packed tightly together.
-
- [![Gocon 2014 (13)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-13.jpg)][21]
-
-Go lets you create compact data structures, avoiding unnecessary indirection.
-
-Compact data structures utilise the cache better.
-
-Better cache utilisation leads to better performance.
-
- [![Gocon 2014 (14)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-14.jpg)][22]
-
-Function calls are not free.
-
- [![Gocon 2014 (15)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-15.jpg)][23]
-
-Three things happen when a function is called.
-
-A new stack frame is created, and the details of the caller are recorded.
-
-Any registers which may be overwritten during the function call are saved to the stack.
-
-The processor computes the address of the function and executes a branch to that new address.
-
- [![Gocon 2014 (16)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-16.jpg)][24]
-
-Because function calls are very common operations, CPU designers have worked hard to optimise this procedure, but they cannot eliminate the overhead.
-
-Depending on what the function does, this overhead may be trivial or significant.
-
-A solution to reducing function call overhead is an optimisation technique called Inlining.
-
- [![Gocon 2014 (17)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg)][25]
-
-The Go compiler inlines a function by treating the body of the function as if it were part of the caller.
-
-Inlining has a cost; it increases binary size.
-
-It only makes sense to inline when the overhead of calling a function is large relative to the work the function does, so only simple functions are candidates for inlining.
-
-Complicated functions are usually not dominated by the overhead of calling them and are therefore not inlined.
-
- [![Gocon 2014 (18)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg)][26]
-
-This example shows the function `Double` calling `util.Max`.
-
-To reduce the overhead of the call to `util.Max`, the compiler can inline `util.Max` into `Double`, resulting in something like this:
-
- [![Gocon 2014 (19)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg)][27]
-
-After inlining, there is no longer a call to `util.Max`, but the behaviour of `Double` is unchanged.
-
-Inlining isn’t exclusive to Go. Almost every compiled or JITed language performs this optimisation. But how does inlining in Go work?
-
-The Go implementation is very simple. When a package is compiled, any small function that is suitable for inlining is marked and then compiled as usual.
-
-Then both the source of the function and the compiled version are stored.
-
- [![Gocon 2014 (20)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg)][28]
-
-This slide shows the contents of `util.a`.
The source has been transformed a little to make it easier for the compiler to process quickly.
-
-When the compiler compiles `Double`, it sees that `util.Max` is inlinable, and the source of `util.Max` is available.
-
-Rather than insert a call to the compiled version of `util.Max`, it can substitute the source of the original function.
-
-Having the source of the function enables other optimizations.
-
- [![Gocon 2014 (21)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg)][29]
-
-In this example, although the function `Test` always returns false, `Expensive` cannot know that without executing it.
-
-When `Test` is inlined, we get something like this:
-
- [![Gocon 2014 (22)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg)][30]
-
-The compiler now knows that the expensive code is unreachable.
-
-Not only does this save the cost of calling `Test`, it saves compiling or running any of the expensive code that is now unreachable.
-
-The Go compiler can automatically inline functions across files and even across packages. This includes code that calls inlinable functions from the standard library.
-
- [![Gocon 2014 (23)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg)][31]
-
-Mandatory garbage collection makes Go a simpler and safer language.
-
-This does not imply that garbage collection makes Go slow, or that garbage collection is the ultimate arbiter of the speed of your program.
-
-What it does mean is that memory allocated on the heap comes at a cost. It is a debt that costs CPU time every time the GC runs until that memory is freed.
-
- [![Gocon 2014 (24)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg)][32]
-
-There is, however, another place to allocate memory, and that is the stack.
-
-Unlike C, which forces you to choose if a value will be stored on the heap, via `malloc`, or on the stack, by declaring it inside the scope of the function, Go implements an optimisation called _escape analysis_.
-
- [![Gocon 2014 (25)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg)][33]
-
-Escape analysis determines whether any references to a value escape the function in which the value is declared.
-
-If no references escape, the value may be safely stored on the stack.
-
-Values stored on the stack do not need to be allocated or freed.
-
-Let’s look at some examples.
-
- [![Gocon 2014 (26)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg)][34]
-
-`Sum` adds the numbers between 1 and 100 and returns the result. This is a rather unusual way to do this, but it illustrates how escape analysis works.
-
-Because the `numbers` slice is only referenced inside `Sum`, the compiler will arrange to store the 100 integers for that slice on the stack, rather than the heap.
-
-There is no need to garbage collect `numbers`; it is automatically freed when `Sum` returns.
-
- [![Gocon 2014 (27)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg)][35]
-
-This second example is also a little contrived. In `CenterCursor` we create a new `Cursor` and store a pointer to it in `c`.
-
-Then we pass `c` to the `Center()` function, which moves the `Cursor` to the center of the screen.
-
-Then finally we print the X and Y locations of that `Cursor`.
-
-Even though `c` was allocated with the `new` function, it will not be stored on the heap, because no reference to `c` escapes the `CenterCursor` function.
-
- [![Gocon 2014 (28)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg)][36]
-
-Go’s optimisations are always enabled by default. You can see the compiler’s escape analysis and inlining decisions with the `-gcflags=-m` switch.
-
-Because escape analysis is performed at compile time, not run time, stack allocation will always be faster than heap allocation, no matter how efficient your garbage collector is.
-
-I will talk more about the stack in the remaining sections of this talk.
-
- [![Gocon 2014 (30)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg)][37]
-
-Go has goroutines. These are the foundations for concurrency in Go.
-
-I want to step back for a moment and explore the history that leads us to goroutines.
-
-In the beginning, computers ran one process at a time. Then in the 1960s, the idea of multiprocessing, or time sharing, became popular.
-
-In a time-sharing system, the operating system must constantly switch the attention of the CPU between these processes by recording the state of the current process, then restoring the state of another.
-
-This is called _process switching_.
-
- [![Gocon 2014 (29)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg)][38]
-
-There are three main costs of a process switch.
-
-First, the kernel needs to store the contents of all the CPU registers for that process, then restore the values for another process.
-
-The kernel also needs to flush the CPU’s mappings from virtual memory to physical memory, as these are only valid for the current process.
-
-Finally, there is the cost of the operating system context switch, and the overhead of the scheduler function to choose the next process to occupy the CPU.
-
- [![Gocon 2014 (31)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg)][39]
-
-There are a surprising number of registers in a modern processor. I have difficulty fitting them on one slide, which should give you a clue how much time it takes to save and restore them.
-
-Because a process switch can occur at any point in a process’ execution, the operating system needs to store the contents of all of these registers because it does not know which are currently in use.
-
- [![Gocon 2014 (32)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg)][40]
-
-This led to the development of threads, which are conceptually the same as processes, but share the same memory space.
-
-As threads share address space, they are lighter than processes, so they are faster to create and faster to switch between.
-
- [![Gocon 2014 (33)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg)][41]
-
-Goroutines take the idea of threads a step further.
-
-Goroutines are cooperatively scheduled, rather than relying on the kernel to manage their time sharing.
-
-The switch between goroutines only happens at well-defined points, when an explicit call is made to the Go runtime scheduler.
-
-The compiler knows the registers which are in use and saves them automatically.
-
- [![Gocon 2014 (34)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg)][42]
-
-While goroutines are cooperatively scheduled, this scheduling is handled for you by the runtime.
-
-Places where goroutines may yield to others are:
-
-* Channel send and receive operations, if those operations would block.
-
-* The `go` statement, although there is no guarantee that the new goroutine will be scheduled immediately.
-
-* Blocking syscalls like file and network operations.
-
-* After being stopped for a garbage collection cycle.
-
- [![Gocon 2014 (35)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg)][43]
-
-This is an example to illustrate some of the scheduling points described in the previous slide.
-
-The thread, depicted by the arrow, starts on the left in the `ReadFile` function. It encounters `os.Open`, which blocks the thread while waiting for the file operation to complete, so the scheduler switches the thread to the goroutine on the right hand side.
-
-Execution continues until the read from the `c` chan blocks, and by this time the `os.Open` call has completed, so the scheduler switches the thread back to the left hand side and continues to the `file.Read` function, which again blocks on file IO.
-
-The scheduler switches the thread back to the right hand side for another channel operation, which has unblocked during the time the left hand side was running, but it blocks again on the channel send.
-
-Finally, the thread switches back to the left hand side as the `Read` operation has completed and data is available.
-
- [![Gocon 2014 (36)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg)][44]
-
-This slide shows the low-level `runtime.Syscall` function, which is the base for all functions in the `os` package.
-
-Any time your code results in a call to the operating system, it will go through this function.
-
-The call to `entersyscall` informs the runtime that this thread is about to block.
-
-This allows the runtime to spin up a new thread which will service other goroutines while this current thread is blocked.
-
-This results in relatively few operating system threads per Go process, with the Go runtime taking care of assigning a runnable goroutine to a free operating system thread.
-
- [![Gocon 2014 (37)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg)][45]
-
-In the previous section, I discussed how goroutines reduce the overhead of managing many, sometimes hundreds of thousands of concurrent threads of execution.
-
-There is another side to the goroutine story, and that is stack management, which leads me to my final topic.
-
- [![Gocon 2014 (39)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg)][46]
-
-This is a diagram of the memory layout of a process. The key thing we are interested in is the location of the heap and the stack.
-
-Traditionally, inside the address space of a process, the heap is at the bottom of memory, just above the program (text), and grows upwards.
-
-The stack is located at the top of the virtual address space, and grows downwards.
-
- [![Gocon 2014 (40)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg)][47]
-
-Because the heap and stack overwriting each other would be catastrophic, the operating system usually arranges to place an area of unwritable memory between the stack and the heap to ensure that if they do collide, the program will abort.
-
-This is called a guard page, and effectively limits the stack size of a process, usually on the order of several megabytes.
-
- [![Gocon 2014 (41)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg)][48]
-
-We’ve discussed that threads share the same address space, so each thread must have its own stack.
-
-Because it is hard to predict the stack requirements of a particular thread, a large amount of memory is reserved for each thread’s stack along with a guard page.
-
-The hope is that this is more than will ever be needed and the guard page will never be hit.
-
-The downside is that as the number of threads in your program increases, the amount of available address space is reduced.
-
- [![Gocon 2014 (42)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg)][49]
-
-We’ve seen that the Go runtime schedules a large number of goroutines onto a small number of threads, but what about the stack requirements of those goroutines?
-
-Instead of using guard pages, the Go compiler inserts a check as part of every function call to verify there is sufficient stack for the function to run. If there is not, the runtime can allocate more stack space.
-
-Because of this check, a goroutine’s initial stack can be made much smaller, which in turn permits Go programmers to treat goroutines as cheap resources.
-
- [![Gocon 2014 (43)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg)][50]
-
-This is a slide that shows how stacks are managed in Go 1.2.
-
-When `G` calls `H`, there is not enough space for `H` to run, so the runtime allocates a new stack frame from the heap, then runs `H` on that new stack segment. When `H` returns, the stack area is returned to the heap before returning to `G`.
-
- [![Gocon 2014 (44)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg)][51]
-
-This method of managing the stack works well in general, but for certain types of code, usually recursive code, it can cause the inner loop of your program to straddle one of these stack boundaries.
-
-For example, in the inner loop of your program, function `G` may call `H` many times in a loop.
-
-Each time, this will cause a stack split. This is known as the hot split problem.
-
- [![Gocon 2014 (45)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg)][52]
-
-To solve hot splits, Go 1.3 has adopted a new stack management method.
-
-Instead of adding and removing additional stack segments, if the stack of a goroutine is too small, a new, larger stack will be allocated.
-
-The old stack’s contents are copied to the new stack, then the goroutine continues with its new, larger stack.
-
-After the first call to `H`, the stack will be large enough that the check for available stack space will always succeed.
-
-This resolves the hot split problem.
-
- [![Gocon 2014 (46)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg)][53]
-
-Values, Inlining, Escape Analysis, Goroutines, and segmented/copying stacks.
-
-These are the five features that I chose to speak about today, but they are by no means the only things that make Go a fast programming language, just as there are more than three reasons that people cite for learning Go.
-
-As powerful as these five features are individually, they do not exist in isolation.
-
-For example, the way the runtime multiplexes goroutines onto threads would not be nearly as efficient without growable stacks.
-
-Inlining reduces the cost of the stack size check by combining smaller functions into larger ones.
-
-Escape analysis reduces the pressure on the garbage collector by automatically moving allocations from the heap to the stack.
-
-Escape analysis also provides better cache locality.
-
-Without growable stacks, escape analysis might place too much pressure on the stack.
-
- [![Gocon 2014 (47)](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg)][54]
-
-* Thank you to the Gocon organisers for permitting me to speak today
-* twitter / web / email details
-* thanks to @offbymany, @billkennedy_go, and Minux for their assistance in preparing this talk.
-
-### Related Posts:
-
-1. [Hear me speak about Go performance at OSCON][1]
-
-2. [Why is a Goroutine’s stack infinite ?][2]
-
-3. [A whirlwind tour of Go’s runtime environment variables][3]
-
-4. [Performance without the event loop][4]
-
--------------------------------------------------------------------------------
-
-作者简介:
-
-David is a programmer and author from Sydney, Australia.
-
-Go contributor since February 2011, committer since April 2012.
- -Contact information - -* dave@cheney.net -* twitter: @davecheney - ----------------------- - -via: https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast - -作者:[Dave Cheney ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://dave.cheney.net/ -[1]:https://dave.cheney.net/2015/05/31/hear-me-speak-about-go-performance-at-oscon -[2]:https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite -[3]:https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables -[4]:https://dave.cheney.net/2015/08/08/performance-without-the-event-loop -[5]:http://mindchunk.blogspot.com.au/2014/06/remixing-with-deck.html -[6]:http://ymotongpoo.hatenablog.com/entry/2014/06/01/124350 -[7]:http://www.goinggo.net/ -[8]:https://twitter.com/offbymany -[9]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg -[10]:https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast/gocon-2014-2 -[11]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg -[12]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg -[13]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg -[14]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg -[15]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg -[16]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-8.jpg -[17]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-9.jpg -[18]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-10.jpg -[19]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-11.jpg -[20]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-12.jpg -[21]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-13.jpg -[22]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-14.jpg 
-[23]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-15.jpg -[24]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-16.jpg -[25]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg -[26]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg -[27]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg -[28]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg -[29]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg -[30]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg -[31]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg -[32]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg -[33]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg -[34]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg -[35]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg -[36]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg -[37]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg -[38]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg -[39]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg -[40]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg -[41]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg -[42]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg -[43]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg -[44]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg -[45]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg -[46]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg -[47]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg -[48]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg 
-[49]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg -[50]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg -[51]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg -[52]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg -[53]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg -[54]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg diff --git a/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md b/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md new file mode 100644 index 0000000000..41d66c744e --- /dev/null +++ b/sources/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md @@ -0,0 +1,639 @@ +BriFuture is translating this article + +# Compiling Lisp to JavaScript From Scratch in 350 LOC + +In this article we will look at a from-scratch implementation of a compiler from a simple LISP-like calculator language to JavaScript. The complete source code can be found [here][7]. + +We will: + +1. Define our language and write a simple program in it + +2. Implement a simple parser combinator library + +3. Implement a parser for our language + +4. Implement a pretty printer for our language + +5. Define a subset of JavaScript for our usage + +6. Implement a code translator to the JavaScript subset we defined + +7. Glue it all together + +Let's start! + +### 1\. Defining the language + +The main attraction of lisps is that their syntax already represents a tree, which is why they are so easy to parse. We'll see that soon. But first let's define our language. Here's a BNF description of our language's syntax: + +``` +program ::= expr +expr ::= <integer> | <name> | ([<expr>]) +``` + +Basically, our language lets us define one expression at the top level which it will evaluate. An expression is composed of either an integer, for example `5`, a variable, for example `x`, or a list of expressions, for example `(add x 1)`. 
+ +An integer evaluates to itself, a variable evaluates to whatever it is bound to in the current environment, and a list evaluates to a function call where the first element is the function and the rest are the arguments to the function. + +We have some built-in special forms in our language so we can do more interesting stuff: + +* A let expression lets us introduce new variables in the environment of the body of the let. The syntax is: + +``` +let ::= (let (<letargs>) <body>) +letargs ::= (<name> <expr>) +body ::= <expr> +``` + +* A lambda expression evaluates to an anonymous function definition. The syntax is: + +``` +lambda ::= (lambda ([<name>]) <body>) +``` + +We also have a few built-in functions: `add`, `mul`, `sub`, `div` and `print`. + +Let's see a quick example of a program written in our language: + +``` +(let + ((compose + (lambda (f g) + (lambda (x) (f (g x))))) + (square + (lambda (x) (mul x x))) + (add1 + (lambda (x) (add x 1)))) + (print ((compose square add1) 5))) +``` + +This program defines three functions: `compose`, `square` and `add1`. It then prints the result of the computation: `((compose square add1) 5)` + +I hope this is enough information about the language. Let's start implementing it! + +We can define the language in Haskell like this: + +``` +type Name = String + +data Expr + = ATOM Atom + | LIST [Expr] + deriving (Eq, Read, Show) + +data Atom + = Int Int + | Symbol Name + deriving (Eq, Read, Show) +``` + +We can parse programs in the language we defined to an `Expr`. Also, we are giving the new data types `Eq`, `Read` and `Show` instances to aid in testing and debugging. You'll be able to use those in the REPL, for example, to verify that all this actually works. + +The reason we did not define `lambda`, `let` and the other built-in functions as part of the syntax is that we can get away with it in this case. These functions are just a more specific case of a `LIST`. So I decided to leave this to a later phase. 
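Before moving on, it may help to see a hand-written JavaScript version of the example program above, to make the intended semantics concrete. This is only a sketch for intuition: the subset of JavaScript the compiler will actually emit is defined in section 5, and its output formatting differs from this hand-written style.

```javascript
// Hand-translated version of the example program.
// The outer `let` binds compose, square and add1;
// the body prints ((compose square add1) 5).
const compose = function (f, g) {
  return function (x) {
    return f(g(x));
  };
};
const square = function (x) {
  return x * x;
};
const add1 = function (x) {
  return x + 1;
};

console.log(compose(square, add1)(5)); // prints 36: square(add1(5))
```

Running this with node prints `36`, which is the value the compiled program should also produce.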
+ +Usually, you would like to define these special cases in the abstract syntax, to improve error messages, to enable static analysis and optimizations, and such, but we won't do that here, so this is enough for us. + +Another thing you would usually want to do is add some annotation to the syntax, for example the location: which file did this `Expr` come from, and which row and column in that file. You can use this in later stages to print the location of errors, even if they are not in the parser stage. + +* _Exercise 1_ : Add a `Program` data type to include multiple `Expr` sequentially + +* _Exercise 2_ : Add location annotation to the syntax tree. + +### 2\. Implement a simple parser combinator library + +The first thing we are going to do is define an Embedded Domain Specific Language (or EDSL) which we will use to define our language's parser. This is often referred to as a parser combinator library. The reason we are doing it is strictly for learning purposes; Haskell has great parsing libraries and you should definitely use them when building real software, or even when just experimenting. One such library is [megaparsec][8]. + +First let's talk about the idea behind our parser library implementation. In its essence, our parser is a function that takes some input, might consume some or all of the input, and returns the value it managed to parse and the rest of the input it didn't parse yet, or throws an error if it failed. Let's write that down. + +``` +newtype Parser a + = Parser (ParseString -> Either ParseError (a, ParseString)) + +data ParseString + = ParseString Name (Int, Int) String + +data ParseError + = ParseError ParseString Error + +type Error = String + +``` + +Here we defined three main new types. + +First, `Parser a` is the parsing function we described before. + +Second, `ParseString` is our input or state we carry along. 
It has three significant parts: + +* `Name`: This is the name of the source + +* `(Int, Int)`: This is the current location in the source + +* `String`: This is the remaining string left to parse + +Third, `ParseError` contains the current state of the parser and an error message. + +Now we want our parser to be flexible, so we will define a few instances for common type classes for it. These instances will allow us to combine small parsers to make bigger parsers (hence the name 'parser combinators'). + +The first one is a `Functor` instance. We want a `Functor` instance because we want to be able to define a parser using another parser simply by applying a function on the parsed value. We will see an example of this when we define the parser for our language. + +``` +instance Functor Parser where + fmap f (Parser parser) = + Parser (\str -> first f <$> parser str) +``` + +The second instance is an `Applicative` instance. One common use case for this instance is to lift a pure function over multiple parsers. + +``` +instance Applicative Parser where + pure x = Parser (\str -> Right (x, str)) + (Parser p1) <*> (Parser p2) = + Parser $ + \str -> do + (f, rest) <- p1 str + (x, rest') <- p2 rest + pure (f x, rest') + +``` + +(Note:  _We will also implement a Monad instance so we can use do notation here._ ) + +The third instance is an `Alternative` instance. We want to be able to supply an alternative parser in case one fails. + +``` +instance Alternative Parser where + empty = Parser (`throwErr` "Failed consuming input") + (Parser p1) <|> (Parser p2) = + Parser $ + \pstr -> case p1 pstr of + Right result -> Right result + Left _ -> p2 pstr +``` + +The fourth instance is a `Monad` instance. So we'll be able to chain parsers. 
+ +``` +instance Monad Parser where + (Parser p1) >>= f = + Parser $ + \str -> case p1 str of + Left err -> Left err + Right (rs, rest) -> + case f rs of + Parser parser -> parser rest + +``` + +Next, let's define a way to run a parser and a utility function for failure: + +``` + +runParser :: String -> String -> Parser a -> Either ParseError (a, ParseString) +runParser name str (Parser parser) = parser $ ParseString name (0,0) str + +throwErr :: ParseString -> String -> Either ParseError a +throwErr ps@(ParseString name (row,col) _) errMsg = + Left $ ParseError ps $ unlines + [ "*** " ++ name ++ ": " ++ errMsg + , "* On row " ++ show row ++ ", column " ++ show col ++ "." + ] + +``` + +Now we'll start implementing the combinators, which are the API and heart of the EDSL. + +First, we'll define `oneOf`. `oneOf` will succeed if one of the characters in the list supplied to it is the next character of the input and will fail otherwise. + +``` +oneOf :: [Char] -> Parser Char +oneOf chars = + Parser $ \case + ps@(ParseString name (row, col) str) -> + case str of + [] -> throwErr ps "Cannot read character of empty string" + (c:cs) -> + if c `elem` chars + then Right (c, ParseString name (row, col+1) cs) + else throwErr ps $ unlines ["Unexpected character " ++ [c], "Expecting one of: " ++ show chars] +``` + +`optional` will stop a parser from throwing an error. It will just return `Nothing` on failure. + +``` +optional :: Parser a -> Parser (Maybe a) +optional (Parser parser) = + Parser $ + \pstr -> case parser pstr of + Left _ -> Right (Nothing, pstr) + Right (x, rest) -> Right (Just x, rest) +``` + +`many` will try to run a parser repeatedly until it fails. When it does, it'll return a list of successful parses. `many1` will do the same, but will throw an error if it fails to parse at least once. 
+ +``` +many :: Parser a -> Parser [a] +many parser = go [] + where go cs = (parser >>= \c -> go (c:cs)) <|> pure (reverse cs) + +many1 :: Parser a -> Parser [a] +many1 parser = + (:) <$> parser <*> many parser + +``` + +These next few parsers use the combinators we defined to make more specific parsers: + +``` +char :: Char -> Parser Char +char c = oneOf [c] + +string :: String -> Parser String +string = traverse char + +space :: Parser Char +space = oneOf " \n" + +spaces :: Parser String +spaces = many space + +spaces1 :: Parser String +spaces1 = many1 space + +withSpaces :: Parser a -> Parser a +withSpaces parser = + spaces *> parser <* spaces + +parens :: Parser a -> Parser a +parens parser = + (withSpaces $ char '(') + *> withSpaces parser + <* (spaces *> char ')') + +sepBy :: Parser a -> Parser b -> Parser [b] +sepBy sep parser = do + frst <- optional parser + rest <- many (sep *> parser) + pure $ maybe rest (:rest) frst + +``` + +Now we have everything we need to start defining a parser for our language. + +* _Exercise_ : implement an EOF (end of file/input) parser combinator. + +### 3\. Implementing a parser for our language + +To define our parser, we'll use the top-down method. + +``` +parseExpr :: Parser Expr +parseExpr = fmap ATOM parseAtom <|> fmap LIST parseList + +parseList :: Parser [Expr] +parseList = parens $ sepBy spaces1 parseExpr + +parseAtom :: Parser Atom +parseAtom = parseSymbol <|> parseInt + +parseSymbol :: Parser Atom +parseSymbol = fmap Symbol parseName + +``` + +Notice that these four functions are a very high-level description of our language. This demonstrates why Haskell is so nice for parsing. After defining the high-level parts, we still need to define the lower-level `parseName` and `parseInt`. + +What characters can we use as names in our language? Let's decide to use lowercase letters, digits and underscores, where the first character must be a letter. 
+ +``` +parseName :: Parser Name +parseName = do + c <- oneOf ['a'..'z'] + cs <- many $ oneOf $ ['a'..'z'] ++ "0123456789" ++ "_" + pure (c:cs) +``` + +For integers, we want a sequence of digits optionally preceded by '-': + +``` +parseInt :: Parser Atom +parseInt = do + sign <- optional $ char '-' + num <- many1 $ oneOf "0123456789" + let result = read $ maybe num (:num) sign + pure $ Int result +``` + +Lastly, we'll define a function to run a parser and get back an `Expr` or an error message. + +``` +runExprParser :: Name -> String -> Either String Expr +runExprParser name str = + case runParser name str (withSpaces parseExpr) of + Left (ParseError _ errMsg) -> Left errMsg + Right (result, _) -> Right result +``` + +* _Exercise 1_ : Write a parser for the `Program` type you defined in the first section + +* _Exercise 2_ : Rewrite `parseName` in Applicative style + +* _Exercise 3_ : Find a way to handle the overflow case in `parseInt` instead of using `read`. + +### 4\. Implement a pretty printer for our language + +One more thing we'd like to do is be able to print our programs as source code. This is useful for better error messages. + +``` +printExpr :: Expr -> String +printExpr = printExpr' False 0 + +printAtom :: Atom -> String +printAtom = \case + Symbol s -> s + Int i -> show i + +printExpr' :: Bool -> Int -> Expr -> String +printExpr' doindent level = \case + ATOM a -> indent (bool 0 level doindent) (printAtom a) + LIST (e:es) -> + indent (bool 0 level doindent) $ + concat + [ "(" + , printExpr' False (level + 1) e + , bool "\n" "" (null es) + , intercalate "\n" $ map (printExpr' True (level + 1)) es + , ")" + ] + +indent :: Int -> String -> String +indent tabs e = concat (replicate tabs " ") ++ e +``` + +* _Exercise_ : Write a pretty printer for the `Program` type you defined in the first section + +Okay, we wrote around 200 lines so far of what's typically called the front-end of the compiler. 
We have around 150 more lines to go and three more tasks: We need to define a subset of JS for our usage, define the translator from our language to that subset, and glue the whole thing together. Let's go! + +### 5\. Define a subset of JavaScript for our usage + +First, we'll define the subset of JavaScript we are going to use: + +``` +data JSExpr + = JSInt Int + | JSSymbol Name + | JSBinOp JSBinOp JSExpr JSExpr + | JSLambda [Name] JSExpr + | JSFunCall JSExpr [JSExpr] + | JSReturn JSExpr + deriving (Eq, Show, Read) + +type JSBinOp = String +``` + +This data type represents a JavaScript expression. We have two atoms, `JSInt` and `JSSymbol`, to which we'll translate our language's `Atom`; `JSBinOp` to represent a binary operation such as `+` or `*`; `JSLambda` for anonymous functions, matching our lambda expressions; `JSFunCall`, which we'll use both for calling functions and for introducing new names as in `let`; and `JSReturn` to return values from functions, since JavaScript requires it. + +This `JSExpr` type is an **abstract representation** of a JavaScript expression. We will translate our own `Expr`, which is an abstract representation of our language's expressions, to `JSExpr`, and from there to JavaScript. But in order to do that we need to take this `JSExpr` and produce JavaScript code from it. We'll do that by pattern matching on `JSExpr` recursively and emitting JS code as a `String`. This is basically the same thing we did in `printExpr`. We'll also track the nesting level so we can indent the generated code in a nice way. 
+ +``` +printJSOp :: JSBinOp -> String +printJSOp op = op + +printJSExpr :: Bool -> Int -> JSExpr -> String +printJSExpr doindent tabs = \case + JSInt i -> show i + JSSymbol name -> name + JSLambda vars expr -> (if doindent then indent tabs else id) $ unlines + ["function(" ++ intercalate ", " vars ++ ") {" + ,indent (tabs+1) $ printJSExpr False (tabs+1) expr + ] ++ indent tabs "}" + JSBinOp op e1 e2 -> "(" ++ printJSExpr False tabs e1 ++ " " ++ printJSOp op ++ " " ++ printJSExpr False tabs e2 ++ ")" + JSFunCall f exprs -> "(" ++ printJSExpr False tabs f ++ ")(" ++ intercalate ", " (fmap (printJSExpr False tabs) exprs) ++ ")" + JSReturn expr -> (if doindent then indent tabs else id) $ "return " ++ printJSExpr False tabs expr ++ ";" +``` + +* _Exercise 1_ : Add a `JSProgram` type that will hold multiple `JSExpr` and create a function `printJSExprProgram` to generate code for it. + +* _Exercise 2_ : Add a new type of `JSExpr` - `JSIf`, and generate code for it. + +### 6\. Implement a code translator to the JavaScript subset we defined + +We are almost there. In this section we'll create a function to translate `Expr` to `JSExpr`. + +The basic idea is simple: we'll translate `ATOM` to `JSSymbol` or `JSInt`, and `LIST` to either a function call or a special case we'll translate later. + +``` +type TransError = String + +translateToJS :: Expr -> Either TransError JSExpr +translateToJS = \case + ATOM (Symbol s) -> pure $ JSSymbol s + ATOM (Int i) -> pure $ JSInt i + LIST xs -> translateList xs + +translateList :: [Expr] -> Either TransError JSExpr +translateList = \case + [] -> Left "translating empty list" + ATOM (Symbol s):xs + | Just f <- lookup s builtins -> + f xs + f:xs -> + JSFunCall <$> translateToJS f <*> traverse translateToJS xs + +``` + +`builtins` is a list of special cases to translate, like `lambda` and `let`. Each case gets its list of arguments, verifies that it is syntactically valid, and translates it to the equivalent `JSExpr`. 
+ +``` +type Builtin = [Expr] -> Either TransError JSExpr +type Builtins = [(Name, Builtin)] + +builtins :: Builtins +builtins = + [("lambda", transLambda) + ,("let", transLet) + ,("add", transBinOp "add" "+") + ,("mul", transBinOp "mul" "*") + ,("sub", transBinOp "sub" "-") + ,("div", transBinOp "div" "/") + ,("print", transPrint) + ] + +``` + +In our case, we treat built-in special forms as special and not first class, so we will not be able to use them as first-class functions and such. + +We'll translate a lambda to an anonymous function: + +``` +transLambda :: [Expr] -> Either TransError JSExpr +transLambda = \case + [LIST vars, body] -> do + vars' <- traverse fromSymbol vars + JSLambda vars' <$> (JSReturn <$> translateToJS body) + + vars -> + Left $ unlines + ["Syntax error: unexpected arguments for lambda." + ,"expecting 2 arguments, the first is the list of vars and the second is the body of the lambda." + ,"In expression: " ++ show (LIST $ ATOM (Symbol "lambda") : vars) + ] + +fromSymbol :: Expr -> Either String Name +fromSymbol (ATOM (Symbol s)) = Right s +fromSymbol e = Left $ "cannot bind value to non symbol type: " ++ show e + +``` + +We'll translate `let` to the definition of a function with the relevant named arguments, called immediately with the bound values, thus introducing the variables into that scope: + +``` +transLet :: [Expr] -> Either TransError JSExpr +transLet = \case + [LIST binds, body] -> do + (vars, vals) <- letParams binds + vars' <- traverse fromSymbol vars + JSFunCall . JSLambda vars' <$> (JSReturn <$> translateToJS body) <*> traverse translateToJS vals + where + letParams :: [Expr] -> Either Error ([Expr],[Expr]) + letParams = \case + [] -> pure ([],[]) + LIST [x,y] : rest -> ((x:) *** (y:)) <$> letParams rest + x : _ -> Left ("Unexpected argument in let list in expression:\n" ++ printExpr x) + + vars -> + Left $ unlines + ["Syntax error: unexpected arguments for let."
+ ,"expecting 2 arguments, the first is the list of var/val pairs and the second is the let body." + ,"In expression:\n" ++ printExpr (LIST $ ATOM (Symbol "let") : vars) + ] +``` + +We'll translate an operation that can work on multiple arguments to a chain of binary operations. For example: `(add 1 2 3)` will become `((1 + 2) + 3)`. + +``` +transBinOp :: Name -> Name -> [Expr] -> Either TransError JSExpr +transBinOp f _ [] = Left $ "Syntax error: '" ++ f ++ "' expected at least 1 argument, got: 0" +transBinOp _ _ [x] = translateToJS x +transBinOp _ f list = foldl1 (JSBinOp f) <$> traverse translateToJS list +``` + +And we'll translate `print` as a call to `console.log`: + +``` +transPrint :: [Expr] -> Either TransError JSExpr +transPrint [expr] = JSFunCall (JSSymbol "console.log") . (:[]) <$> translateToJS expr +transPrint xs = Left $ "Syntax error. print expected 1 argument, got: " ++ show (length xs) + +``` + +Notice that we could have skipped verifying the syntax if we'd parsed those as special cases of `Expr`. + +* _Exercise 1_ : Translate `Program` to `JSProgram` + +* _Exercise 2_ : add a special case for `if Expr Expr Expr` and translate it to the `JSIf` case you implemented in the last exercise + +### 7\. Glue it all together + +Finally, we are going to glue this all together. We'll: + +1. Read a file + +2. Parse it to `Expr` + +3. Translate it to `JSExpr` + +4. Emit JavaScript code to the standard output + +We'll also enable a few flags for testing: + +* `--e` will parse and print the abstract representation of the expression (`Expr`) + +* `--pp` will parse and pretty print + +* `--jse` will parse, translate and print the abstract representation of the resulting JS (`JSExpr`) + +* `--ppc` will parse, pretty print and compile + +``` +main :: IO () +main = getArgs >>= \case + [file] -> + printCompile =<< readFile file + ["--e",file] -> + either putStrLn print . runExprParser "--e" =<< readFile file + ["--pp",file] -> + either putStrLn (putStrLn . printExpr) . 
runExprParser "--pp" =<< readFile file + ["--jse",file] -> + either print (either putStrLn print . translateToJS) . runExprParser "--jse" =<< readFile file + ["--ppc",file] -> + either putStrLn (either putStrLn putStrLn) . fmap (compile . printExpr) . runExprParser "--ppc" =<< readFile file + _ -> + putStrLn $ unlines + ["Usage: runghc Main.hs [ --e, --pp, --jse, --ppc ] " + ,"--e print the Expr" + ,"--pp pretty print Expr" + ,"--jse print the JSExpr" + ,"--ppc pretty print Expr and then compile" + ] + +printCompile :: String -> IO () +printCompile = either putStrLn putStrLn . compile + +compile :: String -> Either Error String +compile str = printJSExpr False 0 <$> (translateToJS =<< runExprParser "compile" str) + +``` + +That's it. We have a compiler from our language to JS. Again, you can view the full source file [here][9]. + +Running our compiler with the example from the first section yields this JavaScript code: + +``` +$ runhaskell Lisp.hs example.lsp +(function(compose, square, add1) { + return (console.log)(((compose)(square, add1))(5)); +})(function(f, g) { + return function(x) { + return (f)((g)(x)); + }; +}, function(x) { + return (x * x); +}, function(x) { + return (x + 1); +}) +``` + +If you have node.js installed on your computer, you can run this code by running: + +``` +$ runhaskell Lisp.hs example.lsp | node -p +36 +undefined +``` + +* _Final exercise_ : instead of compiling an expression, compile a program of multiple expressions. 
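Two of the translation rules are worth seeing in isolation: `transLet` turns a `let` form into an anonymous function applied immediately to the bound values, and `transBinOp` folds a variadic arithmetic call into nested binary operations. Hand-expanding `(let ((x 1) (y 2)) (add x y 10))` in the style of the compiler's output gives something like the following (this expansion is mine, not output captured from the compiler, so treat the exact parenthesization as an assumption):

```javascript
// (let ((x 1) (y 2)) (add x y 10)) compiles to a lambda over
// x and y, called immediately with 1 and 2; the variadic add
// becomes a left-nested chain of binary +.
const result = (function(x, y) {
  return ((x + y) + 10);
})(1, 2);

console.log(result); // 13
```

Evaluating this in node prints `13`, matching what the interpreter semantics of section 1 would give.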
+ +-------------------------------------------------------------------------------- + +via: https://gilmi.me/blog/post/2016/10/14/lisp-to-js + +作者:[ Gil Mizrahi ][a] +选题:[oska874][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://gilmi.me/home +[b]:https://github.com/oska874 +[1]:https://gilmi.me/blog/authors/Gil +[2]:https://gilmi.me/blog/tags/compilers +[3]:https://gilmi.me/blog/tags/fp +[4]:https://gilmi.me/blog/tags/haskell +[5]:https://gilmi.me/blog/tags/lisp +[6]:https://gilmi.me/blog/tags/parsing +[7]:https://gist.github.com/soupi/d4ff0727ccb739045fad6cdf533ca7dd +[8]:https://mrkkrp.github.io/megaparsec/ +[9]:https://gist.github.com/soupi/d4ff0727ccb739045fad6cdf533ca7dd +[10]:https://gilmi.me/blog/post/2016/10/14/lisp-to-js diff --git a/sources/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md b/sources/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md deleted file mode 100644 index e423386d85..0000000000 --- a/sources/tech/20170810 How we built our first full-stack JavaScript web app in three weeks.md +++ /dev/null @@ -1,200 +0,0 @@ -The user’s home dashboard in our app, Align -How we built our first full-stack JavaScript web app in three weeks -============================================================ - -![](https://cdn-images-1.medium.com/max/2000/1*PgKBpQHRUgqpXcxtyehPZg.png) - -### A simple step-by-step guide to go from idea to deployed app - -My three months of coding bootcamp at the Grace Hopper Program have come to a close, and the title of this article is actually not quite true — I’ve now built  _three_  full-stack apps: [an e-commerce store from scratch][3], a [personal hackathon project][4] of my choice, and finally, a three-week capstone project. 
That capstone project was by far the most intensive, a three-week journey with two teammates, and it is my proudest achievement from bootcamp. It is the first robust, complex app I have ever fully built and designed. - -As most developers know, even when you “know how to code”, it can be really overwhelming to embark on the creation of your first full-stack app. The JavaScript ecosystem is incredibly vast: with package managers, modules, build tools, transpilers, databases, libraries, and decisions to be made about all of them, it’s no wonder that so many budding coders never build anything beyond Codecademy tutorials. That’s why I want to walk you through a step-by-step guide of the decisions and steps my team took to create our live app, Align. - -* * * - -First, some context. Align is a web app that uses an intuitive timeline interface to help users set long-term goals and manage them over time. Our stack includes Firebase for back-end services and React on the front end. My teammates and I explain more in this short video: - -[video](https://youtu.be/YacM6uYP2Jo) - -Demoing Align @ Demo Day Live // July 10, 2017 - -So how did we go from Day 1, when we were assigned our teams, to the final live app? Here’s a rundown of the steps we took: - -* * * - -### Step 1: Ideate - -The first step was to figure out what exactly we wanted to build. In my past life as a consultant at IBM, I led ideation workshops with corporate leaders. Pulling from that, I suggested to my group the classic post-it brainstorming strategy, in which we all scribble out as many ideas as we can — even ‘stupid ones’ — so that people’s brains keep moving and no one avoids voicing ideas out of fear. - -![](https://cdn-images-1.medium.com/max/800/1*-M4xa9_HJylManvLoraqaQ.jpeg) - -After generating a few dozen app ideas, we sorted them into categories to gain a better understanding of what themes we were collectively excited about. 
In our group, we saw a clear trend towards ideas surrounding self-improvement, goal-setting, nostalgia, and personal development. From that, we eventually honed in on a specific idea: a personal dashboard for setting and managing long-term goals, with elements of memory-keeping and data visualization over time. - -From there, we created a set of user stories — descriptions of features we wanted to have, from an end-user perspective — to elucidate what exactly we wanted our app to do. - -### Step 2: Wireframe UX/UI - -Next, on a white board, we drew out the basic views we envisioned in our app. We incorporated our set of user stories to understand how these views would work in a skeletal app framework. - - -![](https://cdn-images-1.medium.com/max/400/1*r5FBoa8JsYOoJihDgrpzhg.jpeg) - - - -![](https://cdn-images-1.medium.com/max/400/1*0O8ZWiyUgWm0b8wEiHhuPw.jpeg) - - - -![](https://cdn-images-1.medium.com/max/400/1*y9Q5v-sF0PWmkhthcW338g.jpeg) - -These sketches ensured we were all on the same page, and provided a visual blueprint going forward of what exactly we were all working towards. - -### Step 3: Choose a data structure and type of database - -It was now time to design our data structure. Based on our wireframes and user stories, we created a list in a Google doc of the models we would need and what attributes each should include. We knew we needed a ‘goal’ model, a ‘user’ model, a ‘milestone’ model, and a ‘checkin’ model, as well as eventually a ‘resource’ model, and an ‘upload’ model. - - -![](https://cdn-images-1.medium.com/max/800/1*oA3mzyixVzsvnN_egw1xwg.png) -Our initial sketch of our data models - -After informally sketching the models out, we needed to choose a  _type _ of database: ‘relational’ vs. ‘non-relational’ (a.k.a. ‘SQL’ vs. ‘NoSQL’). Whereas SQL databases are table-based and need predefined schema, NoSQL databases are document-based and have dynamic schema for unstructured data. 
- -For our use case, it didn’t matter much whether we used a SQL or a NoSQL database, so we ultimately chose Google’s cloud NoSQL database Firebase for other reasons: - -1. It could hold user image uploads in its cloud storage - -2. It included WebSocket integration for real-time updating - -3. It could handle our user authentication and offer easy OAuth integration - -Once we chose a database, it was time to understand the relations between our data models. Since Firebase is NoSQL, we couldn’t create join tables or set up formal relations like  _“Checkins belongTo Goals”_ . Instead, we needed to figure out what the JSON tree would look like, and how the objects would be nested (or not). Ultimately, we structured our model like this: - -![](https://cdn-images-1.medium.com/max/800/1*py0hQy-XHZWmwff3PM6F1g.png) -Our final Firebase data scheme for the Goal object. Note that Milestones & Checkins are nested under Goals. - - _(Note: Firebase prefers shallow, normalized data structures for efficiency, but for our use case, it made most sense to nest it, since we would never be pulling a Goal from the database without its child Milestones and Checkins.)_ - -### Step 4: Set up Github and an agile workflow - -We knew from the start that staying organized and practicing agile development would serve us well. We set up a Github repo, on which we prevented merging to master to force ourselves to review each other’s code. - -![](https://cdn-images-1.medium.com/max/800/1*5kDNcvJpr2GyZ0YqLauCoQ.png) - -We also created an agile board on [Waffle.io][5], which is free and has easy integration with Github. On the Waffle board, we listed our user stories as well as bugs we knew we needed to fix. Later, when we started coding, we would each create git branches for the user story we were currently working on, moving it from swim lane to swim lane as we made progress. 
- - -![](https://cdn-images-1.medium.com/max/800/1*gnWqGwQsdGtpt3WOwe0s_A.gif) - -We also began holding “stand-up” meetings each morning to discuss the previous day’s progress and any blockers each of us was encountering. This meeting often decided the day’s flow — who would be pair programming, and who would work on an issue solo. - -I highly recommend some sort of structured workflow like this, as it allowed us to clearly define our priorities and make efficient progress without any interpersonal conflict. - -### Step 5: Choose & download a boilerplate - -Because the JavaScript ecosystem is so complicated, we opted not to build our app from absolute ground zero. It felt unnecessary to spend valuable time wiring up our Webpack build scripts and loaders, and our symlink that pointed to our project directory. My team chose the [Firebones][6] skeleton because it fit our use case, but there are many open-source skeleton options available to choose from. - -### Step 6: Write back-end API routes (or Firebase listeners) - -If we weren’t using a cloud-based database, this would have been the time to start writing our back-end Express routes to make requests to our database. But since we were using Firebase, which is already in the cloud and has a different way of communicating with code, we just worked to set up our first successful database listener. - -To ensure our listener was working, we coded out a basic user form for creating a Goal, and saw that, indeed, when we filled out the form, our database was live-updating. We were connected! - -### Step 7: Build a “Proof Of Concept” - -Our next step was to create a “proof of concept” for our app, or a prototype of the most difficult fundamental features to implement, demonstrating that our app  _could_  eventually exist. For us, this meant finding a front-end library to satisfactorily render timelines, and connecting it to Firebase successfully to display some seed data in our database.
- - -![](https://cdn-images-1.medium.com/max/800/1*d5Wu3fOlX8Xdqix1RPZWSA.png) -Basic Victory.JS timelines - -We found Victory.JS, a React library built on D3, and spent a day reading the documentation and putting together a very basic example of a  _VictoryLine_  component and a  _VictoryScatter_  component to visually display data from the database. Indeed, it worked! We were ready to build. - -### Step 8: Code out the features - -Finally, it was time to build out all the exciting functionality of our app. This is a giant step that will obviously vary widely depending on the app you’re personally building. We looked at our wireframes and started coding out the individual user stories in our Waffle. This often included touching both front-end and back-end code (for example, creating a front-end form and also connecting it to the database). Our features ranged from major to minor, and included things like: - -* ability to create new goals, milestones, and checkins - -* ability to delete goals, milestones, and checkins - -* ability to change a timeline’s name, color, and details - -* ability to zoom in on timelines - -* ability to add links to resources - -* ability to upload media - -* ability to bubble up resources and media from milestones and checkins to their associated goals - -* rich text editor integration - -* user signup / authentication / OAuth - -* popover to view timeline options - -* loading screens - -For obvious reasons, this step took up the bulk of our time — this phase is where most of the meaty code happened, and each time we finished a feature, there were always more to build out! - -### Step 9: Choose and code the design scheme - -Once we had an MVP of the functionality we desired in our app, it was time to clean it up and make it pretty. My team used Material-UI for components like form fields, menus, and login tabs, which ensured everything looked sleek, polished, and coherent without much in-depth design knowledge. 
- - -![](https://cdn-images-1.medium.com/max/800/1*PCRFAbsPBNPYhz6cBgWRCw.gif) -This was one of my favorite features to code out. Its beauty is so satisfying! - -We spent a while choosing a color scheme and editing the CSS, which provided us a nice break from in-the-trenches coding. We also designed a logo and uploaded a favicon. - -### Step 10: Find and squash bugs - -While we should have been using test-driven development from the beginning, time constraints left us with precious little time for anything but features. This meant that we spent the final two days simulating every user flow we could think of and hunting our app for bugs. - - -![](https://cdn-images-1.medium.com/max/800/1*X8JUwTeCAkIcvhKofcbIDA.png) - -This process was not the most systematic, but we found plenty of bugs to keep us busy, including a bug in which the loading screen would last indefinitely in certain situations, and one in which the resource component had stopped working entirely. Fixing bugs can be annoying, but when it finally works, it’s extremely satisfying. - -### Step 11: Deploy the live app - -The final step was to deploy our app so it would be available live! Because we were using Firebase to store our data, we deployed to Firebase Hosting, which was intuitive and simple. If your back end uses a different database, you can use Heroku or DigitalOcean. Generally, deployment directions are readily available on the hosting site. - -We also bought a cheap domain name on Namecheap.com to make our app more polished and easy to find. - -![](https://cdn-images-1.medium.com/max/800/1*gAuM_vWpv_U53xcV3tQINg.png) - -* * * - -And that was it — we were suddenly the co-creators of a real live full-stack app that someone could use! If we had a longer runway, Step 12 would have been to run A/B testing on users, so we could better understand how actual users interact with our app and what they’d like to see in a V2.
- -For now, however, we’re happy with the final product, and with the immeasurable knowledge and understanding we gained throughout this process. Check out Align [here][7]! - - -![](https://cdn-images-1.medium.com/max/800/1*KbqmSW-PMjgfWYWS_vGIqg.jpeg) -Team Align: Sara Kladky (left), Melanie Mohn (center), and myself. - --------------------------------------------------------------------------------- - -via: https://medium.com/ladies-storm-hackathons/how-we-built-our-first-full-stack-javascript-web-app-in-three-weeks-8a4668dbd67c?imm_mid=0f581a&cmp=em-web-na-na-newsltr_20170816 - -作者:[Sophia Ciocca ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://medium.com/@sophiaciocca?source=post_header_lockup -[1]:https://medium.com/@sophiaciocca?source=post_header_lockup -[2]:https://medium.com/@sophiaciocca?source=post_header_lockup -[3]:https://github.com/limitless-leggings/limitless-leggings -[4]:https://www.youtube.com/watch?v=qyLoInHNjoc -[5]:http://www.waffle.io/ -[6]:https://github.com/FullstackAcademy/firebones -[7]:https://align.fun/ -[8]:https://github.com/align-capstone/align -[9]:https://github.com/sophiaciocca -[10]:https://github.com/Kladky -[11]:https://github.com/melaniemohn diff --git a/sources/tech/20170926 Managing users on Linux systems.md b/sources/tech/20170926 Managing users on Linux systems.md deleted file mode 100644 index e47fc572df..0000000000 --- a/sources/tech/20170926 Managing users on Linux systems.md +++ /dev/null @@ -1,223 +0,0 @@ -Managing users on Linux systems -====== -Your Linux users may not be raging bulls, but keeping them happy is always a challenge as it involves managing their accounts, monitoring their access rights, tracking down the solutions to problems they run into, and keeping them informed about important changes on the systems they use. 
Here are some of the tasks and tools that make the job a little easier. - -### Configuring accounts - -Adding and removing accounts is the easier part of managing users, but there are still a lot of options to consider. Whether you use a desktop tool or go with command line options, the process is largely automated. You can set up a new user with a command as simple as **adduser jdoe** and a number of things will happen. John's account will be created using the next available UID and likely populated with a number of files that help to configure his account. When you run the adduser command with a single argument (the new username), it will prompt for some additional information and explain what it is doing. -``` -$ sudo adduser jdoe -Adding user `jdoe' ... -Adding new group `jdoe' (1001) ... -Adding new user `jdoe' (1001) with group `jdoe' ... -Creating home directory `/home/jdoe' ... -Copying files from `/etc/skel' … -Enter new UNIX password: -Retype new UNIX password: -passwd: password updated successfully -Changing the user information for jdoe -Enter the new value, or press ENTER for the default - Full Name []: John Doe - Room Number []: - Work Phone []: - Home Phone []: - Other []: -Is the information correct? [Y/n] Y - -``` - -As you can see, adduser adds the user's information (to the /etc/passwd and /etc/shadow files), creates the new home directory and populates it with some files from /etc/skel, prompts for you to assign the initial password and identifying information, and then verifies that it's got everything right. If you answer "n" for no at the final "Is the information correct?" prompt, it will run back through all of your previous answers, allowing you to change any that you might want to change. - -Once an account is set up, you might want to verify that it looks as you'd expect. However, a better strategy is to ensure that the choices being made "automagically" match what you want to see _before_ you add your first account.
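As a quick sanity check on the accounts that have been created so far, you can list the regular (non-system) users straight out of /etc/passwd. This is just a sketch, and it assumes the stock adduser UID range of 1000-29999; adjust the bounds if your site uses different values:

```shell
# List regular (non-system) accounts: username, UID, and home directory.
# Assumes the default adduser UID range of 1000-29999.
awk -F: '$3 >= 1000 && $3 <= 29999 {print $1, $3, $6}' /etc/passwd
```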
The defaults are defaults for good reason, but it's useful to know where they're defined in case you want some to be different - for example, if you don't want home directories in /home, you don't want user UIDs to start with 1000, or you don't want the files in home directories to be readable by _everyone_ on the system. - -Some of the details of how the adduser command works are configured in the /etc/adduser.conf file. This file contains a lot of settings that determine how new accounts are configured and will look something like this. Note that the comments and blank lines are omitted in the output below so that we can focus more easily on just the settings. -``` -$ cat /etc/adduser.conf | grep -v "^#" | grep -v "^$" -DSHELL=/bin/bash -DHOME=/home -GROUPHOMES=no -LETTERHOMES=no -SKEL=/etc/skel -FIRST_SYSTEM_UID=100 -LAST_SYSTEM_UID=999 -FIRST_SYSTEM_GID=100 -LAST_SYSTEM_GID=999 -FIRST_UID=1000 -LAST_UID=29999 -FIRST_GID=1000 -LAST_GID=29999 -USERGROUPS=yes -USERS_GID=100 -DIR_MODE=0755 -SETGID_HOME=no -QUOTAUSER="" -SKEL_IGNORE_REGEX="dpkg-(old|new|dist|save)" - -``` - -As you can see, we've got a default shell (DSHELL), the starting value for UIDs (FIRST_UID), the location for home directories (DHOME) and the source location for startup files (SKEL) that will be added to each account as it is set up - along with a number of additional settings. This file also specifies the permissions to be assigned to home directories (DIR_MODE). - -One of the more important settings is DIR_MODE, which determines the permissions that are used for each user's home directory. With DIR_MODE set to 0755, home directories will be set up with rwxr-xr-x permissions, so users will be able to read other users' files, but not modify or remove them.
If you want to be more restrictive, you can change this setting to 750 (no access by anyone outside the user's group) or even 700 (no access for anyone but the user). - -Any user account settings can be manually changed after the accounts are set up. For example, you can edit the /etc/passwd file or chmod a home directory, but configuring the /etc/adduser.conf file _before_ you start adding accounts on a new server will ensure some consistency and save you some time and trouble over the long run. - -Changes to the /etc/adduser.conf file will affect all accounts that are set up subsequent to those changes. If you want to set up some specific account differently, you've also got the option of providing account configuration options as arguments with the adduser command in addition to the username. Maybe you want to assign a different shell for some user, request a specific UID, or disable logins altogether. The man page for the adduser command will display some of your choices for configuring an individual account. -``` -adduser [options] [--home DIR] [--shell SHELL] [--no-create-home] -[--uid ID] [--firstuid ID] [--lastuid ID] [--ingroup GROUP | --gid ID] -[--disabled-password] [--disabled-login] [--gecos GECOS] -[--add_extra_groups] [--encrypt-home] user - -``` - -These days probably every Linux system is, by default, going to put each user into his or her own group. As an admin, you might elect to do things differently. You might find that putting users in shared groups works better for your site, electing to use adduser's --gid option to select a specific group. Users can, of course, always be members of multiple groups, so you have some options on how to manage groups -- both primary and secondary. - -### Dealing with user passwords - -Since it's always a bad idea to know someone else's password, admins will generally use a temporary password when they set up an account and then run a command that will force the user to change his password on his first login.
Here's an example: -``` -$ sudo chage -d 0 jdoe - -``` - -When the user logs in, he will see something like this: -``` -WARNING: Your password has expired. -You must change your password now and login again! -Changing password for jdoe. -(current) UNIX password: - -``` - -### Adding users to secondary groups - -To add a user to a secondary group, you might use the usermod command as shown below -- to add the user to the group and then verify that the change was made. -``` -$ sudo usermod -a -G sudo jdoe -$ sudo grep sudo /etc/group -sudo:x:27:shs,jdoe - -``` - -Keep in mind that some groups -- like the sudo or wheel group -- imply certain privileges. More on this in a moment. - -### Removing accounts, adding groups, etc. - -Linux systems also provide commands to remove accounts, add new groups, remove groups, etc. The **deluser** command, for example, will remove the user login entries from the /etc/passwd and /etc/shadow files but leave her home directory intact unless you add the --remove-home or --remove-all-files option. The **addgroup** command adds a group, but will give it the next group id in the sequence (i.e., likely in the user group range) unless you use the --gid option. -``` -$ sudo addgroup testgroup --gid=131 -Adding group `testgroup' (GID 131) ... -Done. - -``` - -### Managing privileged accounts - -Some Linux systems have a wheel group that gives members the ability to run commands as root. In this case, the /etc/sudoers file references this group. On Debian systems, this group is called sudo, but it works the same way and you'll see a reference like this in the /etc/sudoers file: -``` -%sudo ALL=(ALL:ALL) ALL - -``` - -This setting basically means that anyone in the wheel or sudo group can run all commands with the power of root once they preface them with the sudo command. - -You can also add more limited privileges to the sudoers file -- maybe to give particular users the ability to run one or two commands as root. 
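As a hypothetical illustration (the username and command path here are examples, not a recommendation for your site), a single line like the one below would let one user restart one service as root and nothing more. Always make such edits with visudo, which checks the syntax before saving and keeps a broken file from locking you out:
```
jdoe    ALL=(root) /usr/sbin/service apache2 restart

```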
If you do, you should also periodically review the /etc/sudoers file to gauge how much privilege users have and verify that the privileges provided are still required. - -In the command shown below, we're looking at the active lines in the /etc/sudoers file. The most interesting lines in this file include the path set for commands that can be run using the sudo command and the two groups that are allowed to run commands via sudo. As was just mentioned, individuals can be given permissions by being directly included in the sudoers file, but it is generally better practice to define privileges through group memberships. -``` -# cat /etc/sudoers | grep -v "^#" | grep -v "^$" -Defaults env_reset -Defaults mail_badpass -Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin" -root ALL=(ALL:ALL) ALL -%admin ALL=(ALL) ALL <== admin group -%sudo ALL=(ALL:ALL) ALL <== sudo group - -``` - -### Checking on logins - -To see when a user last logged in, you can use a command like this one: -``` -# last jdoe -jdoe pts/18 192.168.0.11 Thu Sep 14 08:44 - 11:48 (00:04) -jdoe pts/18 192.168.0.11 Thu Sep 14 13:43 - 18:44 (00:00) -jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:00) - -``` - -If you want to see when each of your users last logged in, you can run the last command through a loop like this one: -``` -$ for user in `ls /home`; do last $user | head -1; done - -jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:03) - -rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 (00:00) -shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged in - - -``` - -This command will only show you users who have logged on since the current wtmp file became active. The blank lines indicate that some users have never logged in since that time, but the output doesn't call them out.
A better command would be this one that clearly displays the users who have not logged in at all in this time period: -``` -$ for user in `ls /home`; do echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}'; done -dhayes -jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 -peanut pts/19 192.168.0.29 Mon Sep 11 09:15 - 17:11 -rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 -shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged -tsmith - -``` - -That command is a lot to type, but could be turned into a script to make it a lot easier to use. -``` -#!/bin/bash - -for user in `ls /home` -do - echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}' -done - -``` - -Sometimes this kind of information can alert you to changes in users' roles that suggest they may no longer need the accounts in question. - -### Communicating with users - -Linux systems provide a number of ways to communicate with your users. You can add messages to the /etc/motd file that will be displayed when a user logs into a server using a terminal connection. You can also message users with commands such as write (message to a single user) or wall (write to all logged-in users). -``` -$ wall System will go down in one hour - -Broadcast message from shs@stinkbug (pts/17) (Thu Sep 14 14:04:16 2017): - -System will go down in one hour - -``` - -Important messages should probably be delivered through multiple channels as it's difficult to predict what users will actually notice. Together, message-of-the-day (motd), wall, and email notifications might stand a chance of getting most of your users' attention. - -### Paying attention to log files - -Paying attention to log files can also help you understand user activity. In particular, the /var/log/auth.log file will show you user login and logout activity, creation of new groups, etc. The /var/log/messages or /var/log/syslog files will tell you more about system activity.
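As a sketch of putting that log to work, the pipeline below counts PAM "session opened" events per user. The path assumes a Debian-style /var/log/auth.log and the standard pam_unix log format; on Red Hat-family systems the equivalent file is /var/log/secure:

```shell
# Count logins (PAM session openings) per user from the auth log.
grep 'session opened for user' /var/log/auth.log |
    sed 's/.*session opened for user \([^ (]*\).*/\1/' |
    sort | uniq -c | sort -rn
```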
- -### Tracking problems and requests - -Whether or not you install a ticketing application on your Linux system, it's important to track the problems that your users run into and the requests that they make. Your users won't be happy if some portion of their requests fall through the proverbial cracks. Even a paper log could be helpful or, better yet, a spreadsheet that allows you to notice what issues are still outstanding and what the root cause of the problems turned out to be. Ensuring that problems and requests are addressed is important and logs can also help you remember what you had to do to address a problem that re-emerges many months or even years later. - -### Wrap-up - -Managing user accounts on a busy server depends in part on starting out with well configured defaults and in part on monitoring user activities and problems encountered. Users are likely to be happy if they feel you are responsive to their concerns and know what to expect when system upgrades are needed. - - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3225109/linux/managing-users-on-linux-systems.html - -作者:[Sandra Henry-Stocker][a] -译者:[runningwater](https://github.com/runningwater) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ diff --git a/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md b/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md index 9c2a636b09..12d15af78f 100644 --- a/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md +++ b/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md @@ -1,4 +1,3 @@ -translating by ljgibbslf How to use a here documents to write data to a file in bash 
script ====== diff --git a/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md b/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md index 195b51423a..3469c62569 100644 --- a/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md +++ b/sources/tech/20171130 Excellent Business Software Alternatives For Linux.md @@ -1,4 +1,3 @@ -Yoliver istranslating. Excellent Business Software Alternatives For Linux ------- diff --git a/sources/tech/20171214 Peeking into your Linux packages.md b/sources/tech/20171214 Peeking into your Linux packages.md index d354d79b1b..cd1f354250 100644 --- a/sources/tech/20171214 Peeking into your Linux packages.md +++ b/sources/tech/20171214 Peeking into your Linux packages.md @@ -1,3 +1,5 @@ +translating by Flowsnow + Peeking into your Linux packages ====== Do you ever wonder how many _thousands_ of packages are installed on your Linux system? And, yes, I said "thousands." Even a fairly modest Linux system is likely to have well over a thousand packages installed. And there are many ways to get details on what they are. 
diff --git a/sources/tech/20171222 10 keys to quick game development.md b/sources/tech/20171222 10 keys to quick game development.md index 02f4388044..4fe6f514a5 100644 --- a/sources/tech/20171222 10 keys to quick game development.md +++ b/sources/tech/20171222 10 keys to quick game development.md @@ -1,3 +1,6 @@ +**translating by [ivo-wang](https://github.com/ivo-wang)** + + 10 keys to quick game development ====== ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb) diff --git a/sources/tech/20180105 The Best Linux Distributions for 2018.md b/sources/tech/20180105 The Best Linux Distributions for 2018.md deleted file mode 100644 index 3be92638c5..0000000000 --- a/sources/tech/20180105 The Best Linux Distributions for 2018.md +++ /dev/null @@ -1,140 +0,0 @@ -The Best Linux Distributions for 2018 -============================================================ - -![Linux distros 2018](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-distros-2018.jpg?itok=Z8sdx4Zu "Linux distros 2018") -Jack Wallen shares his picks for the best Linux distributions for 2018.[Creative Commons Zero][6]Pixabay - -It’s a new year and the landscape of possibility is limitless for Linux. Whereas 2017 brought about some big changes to a number of Linux distributions, I believe 2018 will bring serious stability and market share growth—for both the server and the desktop. - -For those who might be looking to migrate to the open source platform (or those looking to switch it up), what are the best choices for the coming year? If you hop over to [Distrowatch][14], you’ll find a dizzying array of possibilities, some of which are on the rise, and some that are seeing quite the opposite effect. - -So, which Linux distributions will 2018 favor? I have my thoughts. In fact, I’m going to share them with you now. 
- -Similar to what I did for [last year’s list][15], I’m going to make this task easier and break down the list, as follows: sysadmin, lightweight distribution, desktop, distro with more to prove, IoT, and server. These categories should cover the needs of any type of Linux user. - -With that said, let’s get to the list of best Linux distributions for 2018. - -### Best distribution for sysadmins - -[Debian][16] isn’t often seen on “best of” lists. It should be. Why? If you consider that Debian is the foundation for Ubuntu (which is, in turn, the foundation for so many distributions), it’s pretty easy to understand why this distribution should find its way on many a list. But why for administrators? I’ve considered this for two very important reasons: - -* Ease of use - -* Extreme stability - -Because Debian uses the dpkg and apt package managers, it makes for an incredibly easy-to-use environment. And because Debian offers one of the most stable Linux platforms, it makes for an ideal environment for so many things: Desktops, servers, testing, development. Although Debian may not include the plethora of applications found in last year’s winner (for this category), [Parrot Linux][17], it is very easy to add any/all the necessary applications you need to get the job done. And because Debian can be installed with your choice of desktop (Cinnamon, GNOME, KDE, LXDE, Mate, or Xfce), you can be sure the interface will meet your needs. - - -![debian](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/debian.jpg?itok=XkHHG692 "debian") -Figure 1: The GNOME desktop running on top of Debian 9.3.[Used with permission][1] - -At the moment, Debian is listed at #2 on Distrowatch. Download it, install it, and then make it serve a specific purpose. It may not be flashy, but Debian is a sysadmin dream come true. - -### Best lightweight distribution - -Lightweight distributions serve a very specific purpose—giving new life to older, lesser-powered machines.
But that doesn’t mean these particular distributions should only be considered for your older hardware. If speed is your ultimate need, you might want to see just how fast this category of distribution will run on your modern machine. - -Topping the list of lightweight distributions for 2018 is [Lubuntu][18]. Although there are plenty of options in this category, few come even close to the next-to-zero learning curve found on this distribution. And although Lubuntu’s footprint isn’t quite as small as Puppy Linux, thanks to it being a member of the Ubuntu family, the ease of use gained with this distribution makes up for it. But fear not, Lubuntu won’t bog down your older hardware. The requirements are: - -* CPU: Pentium 4 or Pentium M or AMD K8 - -* For local applications, Lubuntu can function with 512MB of RAM. For online usage (Youtube, Google+, Google Drive, and Facebook),  1GB of RAM is recommended. - -Lubuntu makes use of the LXDE desktop (Figure 2), which means users new to Linux won’t have the slightest problem working with this distribution. The short list of included apps (such as Abiword, Gnumeric, and Firefox) are all lightning fast and user-friendly. - -### [lubuntu.jpg][8] - -![Lubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/lubuntu_2.jpg?itok=BkTnh7hU "Lubuntu") -Figure 2: The Lubuntu LXDE desktop in action.[Used with permission][2] - -Lubuntu can make short and easy work of breathing life into hardware that is up to ten years old. - -### Best desktop distribution - -For the second year in a row, [Elementary OS][19] tops my list of best Desktop distribution. For many, the leader on the Desktop is [Linux Mint][20] (which is a very fine flavor). However, for my money, it’s hard to beat the ease of use and stability of Elementary OS. Case in point, I was certain the release of [Ubuntu][21] 17.10 would have me migrating back to Canonical’s distribution.
Very soon after migrating to the new GNOME-Friendly Ubuntu, I found myself missing the look, feel, and reliability of Elementary OS (Figure 3). After two weeks with Ubuntu, I was back to Elementary OS. - -### [elementaros.jpg][9] - -![Elementary OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaros.jpg?itok=SRZC2vkg "Elementary OS") -Figure 3: The Pantheon desktop is a work of art as a desktop.[Used with permission][3] - -Anyone that has given Elementary OS a go immediately feels right at home. The Pantheon desktop is a perfect combination of slickness and user-friendliness. And with each update, it only gets better. - -Although Elementary OS stands at #6 on the Distrowatch page hit ranking, I predict it will find itself climbing to at least the third spot by the end of 2018. The Elementary developers are very much in tune with what users want. They listen and they evolve. However, the current state of this distribution is so good, it seems all they could do to better it is a bit of polish here and there. For anyone looking for a desktop that offers a unified look and feel throughout the UI, Elementary OS is hard to beat. If you need a desktop that offers an outstanding ratio of reliability and ease of use, Elementary OS is your distribution. - -### Best distro for those with something to prove - -For the longest time [Gentoo][22] sat on top of the “show us your skills” distribution list. However, I think it’s time Gentoo took a backseat to the true leader of “something to prove”: [Linux From Scratch][23]. You may not think this fair, as LFS isn’t actually a distribution, but a project that helps users create their own Linux distribution. But, seriously, if you want to go a very long way to proving your Linux knowledge, what better way than to create your own distribution? From the LFS project, you can build a custom Linux system, from the ground up... entirely from source code.
So, if you really have something to prove, download the [Linux From Scratch Book][24] and start building. - -### Best distribution for IoT - -For the second year in a row [Ubuntu Core][25] wins, hands down. Ubuntu Core is a tiny, transactional version of Ubuntu, built specifically for embedded and IoT devices. What makes Ubuntu Core so perfect for IoT is that it places the focus on snap packages—universal packages that can be installed onto a platform, without interfering with the base system. These snap packages contain everything they need to run (including dependencies), so there is no worry the installation will break the operating system (or any other installed software). Also, snaps are very easy to upgrade and run in an isolated sandbox, making them a great solution for IoT. - -Another area of security built into Ubuntu Core is the login mechanism. Ubuntu Core works with Ubuntu One ssh keys, such that the only way to log into the system is via uploaded ssh keys to a [Ubuntu One account][26] (Figure 4). This makes for heightened security for your IoT devices. - -### [ubuntucore.jpg][10] - -![ Ubuntu Core](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntucore.jpg?itok=Ydfq8NKH " Ubuntu Core") -Figure 4:The Ubuntu Core screen indicating a remote access enabled via Ubuntu One user.[Used with permission][4] - -### Best server distribution - -This is where things get a bit confusing. The primary reason is support. If you need commercial support your best choice might be, at first blush, [Red Hat Enterprise Linux][27]. Red Hat has proved itself, year after year, to not only be one of the strongest enterprise server platforms on the planet, but also the single most profitable open source business (with over $2 billion in annual revenue). - -However, Red Hat isn’t far and away the only server distribution. In fact, Red Hat doesn’t even dominate every aspect of Enterprise server computing.
If you look at cloud statistics on Amazon’s Elastic Compute Cloud alone, Ubuntu blows away Red Hat Enterprise Linux. According to [The Cloud Market][28], EC2 statistics show RHEL at under 100k deployments, whereas Ubuntu has over 200k deployments. That’s significant. - -The end result is that Ubuntu has pretty much taken over as the leader in the cloud. And if you combine that with Ubuntu’s ease of working with and managing containers, it becomes clear that Ubuntu Server is the winner of the server category. And, if you need commercial support, Canonical has you covered, with [Ubuntu Advantage][29]. - -The one caveat to Ubuntu Server is that it defaults to a text-only interface (Figure 5). You can install a GUI, if needed, but working with the Ubuntu Server command line is pretty straightforward (and something every Linux administrator should know). - -### [ubuntuserver.jpg][11] - -![Ubuntu server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntuserver_1.jpg?itok=qtFSUlee "Ubuntu server") -Figure 5: The Ubuntu server login, informing of available updates.[Used with permission][5] - -### The choice is yours - -As I said before, these choices are all very subjective … but if you’re looking for a great place to start, give these distributions a try. Each one can serve a very specific purpose and do it better than most. Although you may not agree with my particular picks, chances are you’ll agree that Linux offers amazing possibilities on every front. And, stay tuned for more “best distro” picks next week.
- - _Learn more about Linux through the free ["Introduction to Linux" ][13]course from The Linux Foundation and edX._ - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/best-linux-distributions-2018 - -作者:[JACK WALLEN ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/jlwallen -[1]:https://www.linux.com/licenses/category/used-permission -[2]:https://www.linux.com/licenses/category/used-permission -[3]:https://www.linux.com/licenses/category/used-permission -[4]:https://www.linux.com/licenses/category/used-permission -[5]:https://www.linux.com/licenses/category/used-permission -[6]:https://www.linux.com/licenses/category/creative-commons-zero -[7]:https://www.linux.com/files/images/debianjpg -[8]:https://www.linux.com/files/images/lubuntujpg-2 -[9]:https://www.linux.com/files/images/elementarosjpg -[10]:https://www.linux.com/files/images/ubuntucorejpg -[11]:https://www.linux.com/files/images/ubuntuserverjpg-1 -[12]:https://www.linux.com/files/images/linux-distros-2018jpg -[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux -[14]:https://distrowatch.com/ -[15]:https://www.linux.com/news/learn/sysadmin/best-linux-distributions-2017 -[16]:https://www.debian.org/ -[17]:https://www.parrotsec.org/ -[18]:http://lubuntu.me/ -[19]:https://elementary.io/ -[20]:https://linuxmint.com/ -[21]:https://www.ubuntu.com/ -[22]:https://www.gentoo.org/ -[23]:http://www.linuxfromscratch.org/ -[24]:http://www.linuxfromscratch.org/lfs/download.html -[25]:https://www.ubuntu.com/core -[26]:https://login.ubuntu.com/ -[27]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux -[28]:http://thecloudmarket.com/stats#/by_platform_definition 
-[29]:https://buy.ubuntu.com/?_ga=2.177313893.113132429.1514825043-1939188204.1510782993 diff --git a/sources/tech/20180130 Trying Other Go Versions.md b/sources/tech/20180130 Trying Other Go Versions.md index 1ab1b4f948..731747d19a 100644 --- a/sources/tech/20180130 Trying Other Go Versions.md +++ b/sources/tech/20180130 Trying Other Go Versions.md @@ -1,4 +1,3 @@ -imquanquan Translating Trying Other Go Versions ============================================================ @@ -110,4 +109,4 @@ via: https://pocketgophers.com/trying-other-versions/ [8]:https://pocketgophers.com/trying-other-versions/#trying-a-specific-release [9]:https://pocketgophers.com/guide-to-json/ [10]:https://pocketgophers.com/trying-other-versions/#trying-any-release -[11]:https://pocketgophers.com/trying-other-versions/#trying-a-source-build-e-g-tip +[11]:https://pocketgophers.com/trying-other-versions/#trying-a-source-build-e-g-tip \ No newline at end of file diff --git a/sources/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md b/sources/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md deleted file mode 100644 index b3252dfb75..0000000000 --- a/sources/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md +++ /dev/null @@ -1,292 +0,0 @@ -Rock Solid React.js Foundations: A Beginner’s Guide -============================================================ - ** 此处有Canvas,请手动处理 ** - -![](https://cdn-images-1.medium.com/max/1000/1*wj5ujzj5wPQIKb0mIWLgNQ.png) -React.js crash course - -I’ve been working with React and React-Native for the last couple of months. I have already released two apps in production, [Kiven Aa][1] (React) and [Pollen Chat][2] (React Native). When I started learning React, I was searching for something (a blog, a video, a course, whatever) that didn’t only teach me how to write apps in React. I also wanted it to prepare me for interviews. - -Most of the material I found, concentrated on one or the other. 
So, this post is aimed towards the audience who is looking for a perfect mix of theory and hands-on. I will give you a little bit of theory so that you understand what is happening under the hood and then I will show you how to write some React.js code. - -If you prefer video, I have this entire course up on YouTube as well. Please check that out. - - -Let’s dive in… - -> React.js is a JavaScript library for building user interfaces - -You can build all sorts of single page applications. For example, chat messengers and e-commerce portals where you want to show changes on the user interface in real-time. - -### Everything’s a component - -A React app is comprised of components,  _a lot of them_ , nested into one another.  _But what are components, you may ask?_ - -A component is a reusable piece of code, which defines how certain features should look and behave on the UI. For example, a button is a component. - -Let’s look at the following calculator, which you see on Google when you try to calculate something like 2+2 = 4 –1 = 3 (quick maths!) - - -![](https://cdn-images-1.medium.com/max/1000/1*NS9DykYDyYG7__UXJdysTA.png) -Red markers denote components - -As you can see in the image above, the calculator has many areas — like the  _result display window_  and the  _numpad_ . All of these can be separate components or one giant component. It depends on how comfortable one is in breaking down and abstracting away things in React - -You write code for all such components separately. Then combine those under one container, which in turn is a React component itself. This way you can create reusable components and your final app will be a collection of separate components working together. - -The following is one such way you can write the calculator, shown above, in React. - -``` - - - - - - . - . - . - - - - -``` - -Yes! It looks like HTML code, but it isn’t. We will explore more about it in the later sections. 
- -### Setting up our Playground - -This tutorial focuses on React’s fundamentals. It is not primarily geared towards React for Web or [React Native][3] (for building mobile apps). So, we will use an online editor so as to avoid web or native specific configurations before even learning what React can do. - -I’ve already set up an environment for you on [codepen.io][4]. Just follow the link and read all the comments in HTML and JavaScript (JS) tabs. - -### Controlling Components - -We’ve learned that a React app is a collection of various components, structured as a nested tree. Thus, we require some sort of mechanism to pass data from one component to other. - -#### Enter “props” - -We can pass arbitrary data to our component using a `props` object. Every component in React gets this `props` object. - -Before learning how to use this `props` object, let’s learn about functional components. - -#### a) Functional component - -A functional component in React consumes arbitrary data that you pass to it using `props` object. It returns an object which describes what UI React should render. Functional components are also known as Stateless components. - -Let’s write our first functional component. - -``` -function Hello(props) { - return
<div>{props.name}</div>
-} -``` - -It’s that simple. We just passed `props` as an argument to a plain JavaScript function and returned,  _umm, well, what was that? That _ `_
<div>{props.name}</div>
_` _thing!_  It’s JSX (JavaScript Extended). We will learn more about it in a later section. - -This above function will render the following HTML in the browser. - -``` - -
-<div>
- rajat
-</div>
-``` - - -> Read the section below about JSX, where I have explained how did we get this HTML from our JSX code. - -How can you use this functional component in your React app? Glad you asked! It’s as simple as the following. - -``` - -``` - -The attribute `name` in the above code becomes `props.name` inside our `Hello`component. The attribute `age` becomes `props.age` and so on. - -> Remember! You can nest one React component inside other React components. - -Let’s use this `Hello` component in our codepen playground. Replace the `div`inside `ReactDOM.render()` with our `Hello` component, as follows, and see the changes in the bottom window. - -``` -function Hello(props) { - return
<div>{props.name}</div>
-} - -ReactDOM.render(<Hello name="rajat" />, document.getElementById('root')); -``` - - -> But what if your component has some internal state? For instance, like the following counter component, which has an internal count variable, which changes on + and — key presses. - -A React component with an internal state - -#### b) Class-based component - -The class-based component has an additional property `state` , which you can use to hold a component’s private data. We can rewrite our `Hello` component using class notation as follows. Since these components have a state, these are also known as Stateful components. - -``` -class Counter extends React.Component { - // this method should be present in your component - render() { - return ( -
-<div>
- {this.props.name}
-</div>
- ); - } -} -``` - -We extend `React.Component` class of React library to make class-based components in React. Learn more about JavaScript classes [here][5]. - -The `render()` method must be present in your class as React looks for this method in order to know what UI it should render on screen. - -To use this sort of internal state, we first have to initialize the `state` object in the constructor of the component class, in the following way. - -``` -class Counter extends React.Component { - constructor() { - super(); - - // define the internal state of the component - this.state = {name: 'rajat'} - } - - render() { - return ( -
-<div>
- {this.state.name}
-</div>
- ); - } -} - -// Usage: -// In your react app: -``` - -Similarly, the `props` can be accessed inside our class-based component using `this.props` object. - -To set the state, you use `React.Component`'s `setState()`. We will see an example of this, in the last part of this tutorial. - -> Tip: Never call `setState()` inside `render()` function, as `setState()` causes component to re-render and this will result in endless loop. - - -![](https://cdn-images-1.medium.com/max/1000/1*rPUhERO1Bnr5XdyzEwNOwg.png) -A class-based component has an optional property “state”. - - _Apart from _ `_state_` _, a class-based component has some life-cycle methods like _ `_componentWillMount()._` _ These you can use to do stuff, like initializing the _ `_state_` _and all but that is out of the scope of this post._ - -### JSX - -JSX is a short form of  _JavaScript Extended_  and it is a way to write `React`components. Using JSX, you get the full power of JavaScript inside XML like tags. - -You put JavaScript expressions inside `{}`. The following are some valid JSX examples. - - ``` - - - ; - -
- - ``` - -The way it works is you write JSX to describe what your UI should look like. A [transpiler][6] like `Babel` converts that code into a bunch of `React.createElement()` calls. The React library then uses those `React.createElement()` calls to construct a tree-like structure of DOM elements. In case of React for Web or Native views in case of React Native. It keeps it in the memory. - -React then calculates how it can effectively mimic this tree in the memory of the UI displayed to the user. This process is known as [reconciliation][7]. After that calculation is done, React makes the changes to the actual UI on the screen. - - ** 此处有Canvas,请手动处理 ** - -![](https://cdn-images-1.medium.com/max/1000/1*ighKXxBnnSdDlaOr5-ZOPg.png) -How React converts your JSX into a tree which describes your app’s UI - -You can use [Babel’s online REPL][8] to see what React actually outputs when you write some JSX. - - -![](https://cdn-images-1.medium.com/max/1000/1*NRuBKgzNh1nHwXn0JKHafg.png) -Use Babel REPL to transform JSX into plain JavaScript - -> Since JSX is just a syntactic sugar over plain `React.createElement()` calls, React can be used without JSX. - -Now we have every concept in place, so we are well positioned to write a `counter` component that we saw earlier as a GIF. - -The code is as follows and I hope that you already know how to render that in our playground. - -``` -class Counter extends React.Component { - constructor(props) { - super(props); - - this.state = {count: this.props.start || 0} - - // the following bindings are necessary to make `this` work in the callback - this.inc = this.inc.bind(this); - this.dec = this.dec.bind(this); - } - - inc() { - this.setState({ - count: this.state.count + 1 - }); - } - - dec() { - this.setState({ - count: this.state.count - 1 - }); - } - - render() { - return ( -
-<div>
- <button onClick={this.inc}>+</button>
- <button onClick={this.dec}>-</button>
- <div>{this.state.count}</div>
-</div>
- ); - } -} -``` - -The following are some salient points about the above code. - -1. JSX uses `camelCasing` hence `button`'s attribute is `onClick`, not `onclick`, as we use in HTML. - -2. Binding is necessary for `this` to work on callbacks. See line #8 and 9 in the code above. - -The final interactive code is located [here][9]. - -With that, we’ve reached the conclusion of our React crash course. I hope I have shed some light on how React works and how you can use React to build bigger apps, using smaller and reusable components. - -* * * - -If you have any queries or doubts, hit me up on Twitter [@rajat1saxena][10] or write to me at [rajat@raynstudios.com][11]. - -* * * - -#### Please recommend this post, if you liked it and share it with your network. Follow me for more tech related posts and consider subscribing to my channel [Rayn Studios][12] on YouTube. Thanks a lot. - --------------------------------------------------------------------------------- - -via: https://medium.freecodecamp.org/rock-solid-react-js-foundations-a-beginners-guide-c45c93f5a923 - -作者:[Rajat Saxena ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://medium.freecodecamp.org/@rajat1saxena -[1]:https://kivenaa.com/ -[2]:https://play.google.com/store/apps/details?id=com.pollenchat.android -[3]:https://facebook.github.io/react-native/ -[4]:https://codepen.io/raynesax/pen/MrNmBM -[5]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes -[6]:https://en.wikipedia.org/wiki/Source-to-source_compiler -[7]:https://reactjs.org/docs/reconciliation.html -[8]:https://babeljs.io/repl -[9]:https://codepen.io/raynesax/pen/QaROqK -[10]:https://twitter.com/rajat1saxena -[11]:mailto:rajat@raynstudios.com -[12]:https://www.youtube.com/channel/UCUmQhjjF9bsIaVDJUHSIIKw \ No newline at end of file diff --git a/sources/tech/20180205 Writing eBPF tracing 
tools in Rust.md b/sources/tech/20180205 Writing eBPF tracing tools in Rust.md index 093d3de215..18b8eb5742 100644 --- a/sources/tech/20180205 Writing eBPF tracing tools in Rust.md +++ b/sources/tech/20180205 Writing eBPF tracing tools in Rust.md @@ -1,4 +1,3 @@ -Zafiry translating... Writing eBPF tracing tools in Rust ============================================================ diff --git a/sources/tech/20180215 Build a bikesharing app with Redis and Python.md b/sources/tech/20180215 Build a bikesharing app with Redis and Python.md index 06e4c6949a..d3232a0b4c 100644 --- a/sources/tech/20180215 Build a bikesharing app with Redis and Python.md +++ b/sources/tech/20180215 Build a bikesharing app with Redis and Python.md @@ -1,3 +1,5 @@ +translating by Flowsnow + Build a bikesharing app with Redis and Python ====== diff --git a/sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md b/sources/tech/20180329 Python ChatOps libraries- Opsdroid and Errbot.md similarity index 99% rename from sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md rename to sources/tech/20180329 Python ChatOps libraries- Opsdroid and Errbot.md index d7ef058106..5f409956f7 100644 --- a/sources/tech/29180329 Python ChatOps libraries- Opsdroid and Errbot.md +++ b/sources/tech/20180329 Python ChatOps libraries- Opsdroid and Errbot.md @@ -1,5 +1,3 @@ -Translating by shipsw - Python ChatOps libraries: Opsdroid and Errbot ====== diff --git a/sources/tech/20180412 A Desktop GUI Application For NPM.md b/sources/tech/20180412 A Desktop GUI Application For NPM.md deleted file mode 100644 index 4eabc40672..0000000000 --- a/sources/tech/20180412 A Desktop GUI Application For NPM.md +++ /dev/null @@ -1,147 +0,0 @@ -A Desktop GUI Application For NPM -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/04/ndm-3-720x340.png) - -NPM, short for **N** ode **P** ackage **M** anager, is a command line package manager for installing NodeJS packages, or modules. 
We have already published a guide that describes how to [**manage NodeJS packages using NPM**][1]. As you may have noticed, managing NodeJS packages or modules using NPM is not a big deal. However, if you’re not comfortable with the CLI, there is a desktop GUI application named **NDM** which can be used for managing NodeJS applications/modules. NDM, which stands for **N** PM **D** esktop **M** anager, is a free, open source graphical front-end for NPM that allows us to install, update, and remove NodeJS packages via a simple graphical window. - -In this brief tutorial, we are going to learn about NDM in Linux. - -### Install NDM - -NDM is available in AUR, so you can install it using any AUR helper on Arch Linux and its derivatives like Antergos and Manjaro Linux. - -Using [**Pacaur**][2]: -``` -$ pacaur -S ndm - -``` - -Using [**Packer**][3]: -``` -$ packer -S ndm - -``` - -Using [**Trizen**][4]: -``` -$ trizen -S ndm - -``` - -Using [**Yay**][5]: -``` -$ yay -S ndm - -``` - -Using [**Yaourt**][6]: -``` -$ yaourt -S ndm - -``` - -On RHEL based systems like CentOS, run the following command to install NDM. -``` -$ echo "[fury] name=ndm repository baseurl=https://repo.fury.io/720kb/ enabled=1 gpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update && - -``` - -On Debian, Ubuntu, Linux Mint: -``` -$ echo "deb [trusted=yes] https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm - -``` - -NDM can also be installed using **Linuxbrew**. First, install Linuxbrew as described in the following link. - -After installing Linuxbrew, you can install NDM using the following commands: -``` -$ brew update - -$ brew install ndm - -``` - -On other Linux distributions, go to the [**NDM releases page**][7], download the latest version, compile and install it yourself. - -### NDM Usage - -Launch NDM either from the menu or using the application launcher. This is what NDM’s default interface looks like.
- -![][9] - -From here, you can install NodeJS packages/modules either locally or globally. - -**Install NodeJS packages locally** - -To install a package locally, first choose a project directory by clicking on the **“Add projects”** button from the Home screen and select the directory where you want to keep your project files. For example, I have chosen a directory named **“demo”** as my project directory. - -Click on the project directory (i.e., **demo**) and then click the **Add packages** button. - -![][10] - -Type the name of the package you want to install and hit the **Install** button. - -![][11] - -Once installed, the packages will be listed under the project’s directory. Simply click on the directory to view the list of locally installed packages. - -![][12] - -Similarly, you can create separate project directories and install NodeJS modules in them. To view the list of installed modules on a project, click on the project directory, and you will see the packages on the right side. - -**Install NodeJS packages globally** - -To install NodeJS packages globally, click on the **Globals** button on the left of the main interface. Then, click the “Add packages” button, type the name of the package, and hit the “Install” button. - -**Manage packages** - -Click on any installed package and you will see various options on the top, such as - - 1. Version (to view the installed version), - 2. Latest (to install the latest available version), - 3. Update (to update the currently selected package), - 4. Uninstall (to remove the selected package) etc. - - - -![][13] - -NDM has two more options, namely **“Update npm”**, which is used to update the node package manager to the latest available version, and **Doctor**, which runs a set of checks to ensure that your npm installation has what it needs to manage your packages/modules. - -### Conclusion - -NDM makes the process of installing, updating, and removing NodeJS packages easier! You don’t need to memorize the commands to perform those tasks.
NDM lets us do them all with a few mouse clicks via a simple graphical window. For those who would rather not type commands, NDM is the perfect companion for managing NodeJS packages. - -Cheers! - -**Resource:** - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/ -[2]:https://www.ostechnix.com/install-pacaur-arch-linux/ -[3]:https://www.ostechnix.com/install-packer-arch-linux-2/ -[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/ -[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ -[6]:https://www.ostechnix.com/install-yaourt-arch-linux/ -[7]:https://github.com/720kb/ndm/releases -[8]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-1.png -[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-5-1.png -[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-6.png -[12]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-7.png -[13]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-8.png diff --git a/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md b/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md index 761138908d..50d68ad445 100644 --- a/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md +++ b/sources/tech/20180522 How to Enable Click to Minimize On Ubuntu.md @@ -1,5 +1,3 @@ -translated by cyleft - How to Enable Click to Minimize On Ubuntu ============================================================ diff --git
a/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md b/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md index e548213483..d2c50b6029 100644 --- a/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md +++ b/sources/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md @@ -1,3 +1,4 @@ +Translating by qhwdw Complete Sed Command Guide [Explained with Practical Examples] ====== In a previous article, I showed the [basic usage of Sed][1], the stream editor, on a practical use case. Today, be prepared to gain more insight about Sed as we will take an in-depth tour of the sed execution model. This will be also an opportunity to make an exhaustive review of all Sed commands and to dive into their details and subtleties. So, if you are ready, launch a terminal, [download the test files][2] and sit comfortably before your keyboard: we will start our exploration right now! 
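As a quick warm-up before the tour, the two most common Sed operations can be tried on a throwaway file. The file name and contents below are invented purely for illustration; they are not the article's test files:

```shell
# Create a small stand-in input file (hypothetical data).
printf 'alpha\nbeta\ngamma\n' > /tmp/sed-warmup.txt

# -n suppresses automatic printing; the p command prints matching lines.
sed -n '/^b/p' /tmp/sed-warmup.txt      # prints: beta

# The s command substitutes on every line where the pattern matches.
sed 's/beta/BETA/' /tmp/sed-warmup.txt  # prints: alpha, BETA, gamma
```

Both commands read the input and write to standard output; the original file is left untouched unless you explicitly ask for in-place editing.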
diff --git a/sources/tech/20180615 How To Rename Multiple Files At Once In Linux.md b/sources/tech/20180615 How To Rename Multiple Files At Once In Linux.md index d03dd4527b..f5c36573be 100644 --- a/sources/tech/20180615 How To Rename Multiple Files At Once In Linux.md +++ b/sources/tech/20180615 How To Rename Multiple Files At Once In Linux.md @@ -1,3 +1,5 @@ +translating by Flowsnow + How To Rename Multiple Files At Once In Linux ====== diff --git a/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md b/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md deleted file mode 100644 index dd8c3cdb13..0000000000 --- a/sources/tech/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md +++ /dev/null @@ -1,320 +0,0 @@ -Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server -====== - -![](https://www.ostechnix.com/wp-content/uploads/2016/07/Install-Oracle-VirtualBox-On-Ubuntu-18.04-720x340.png) - -This step-by-step tutorial walks you through how to install **Oracle VirtualBox** on an Ubuntu 18.04 LTS headless server. And, this guide also describes how to manage the VirtualBox headless instances using **phpVirtualBox** , a web-based front-end tool for VirtualBox. The steps described below might also work on Debian, and other Ubuntu derivatives such as Linux Mint. Let us get started. - -### Prerequisites - -Before installing Oracle VirtualBox, we need to complete the following prerequisites on our Ubuntu 18.04 LTS server. - -First of all, update the Ubuntu server by running the following commands one by one. -``` -$ sudo apt update - -$ sudo apt upgrade - -$ sudo apt dist-upgrade - -``` - -Next, install the following necessary packages: -``` -$ sudo apt install build-essential dkms unzip wget - -``` - -After installing all updates and necessary prerequisites, restart the Ubuntu server.
-``` -$ sudo reboot - -``` - -### Install Oracle VirtualBox on Ubuntu 18.04 LTS server - -Add the official Oracle VirtualBox repository. To do so, edit the **/etc/apt/sources.list** file: -``` -$ sudo nano /etc/apt/sources.list - -``` - -Add the following line. - -Here, I will be using Ubuntu 18.04 LTS, so I have added the following repository. -``` -deb http://download.virtualbox.org/virtualbox/debian bionic contrib - -``` - -![][2] - -Replace the word **‘bionic’** with your Ubuntu distribution’s code name, such as ‘xenial’, ‘vivid’, ‘utopic’, ‘trusty’, ‘raring’, ‘quantal’, ‘precise’, ‘lucid’, ‘jessie’, ‘wheezy’, or ‘squeeze**‘.** - -Then, run the following command to add the Oracle public key: -``` -$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add - - -``` - -For older VirtualBox versions, add the following key: -``` -$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add - - -``` - -Next, update the software sources using the command: -``` -$ sudo apt update - -``` - -Finally, install the latest Oracle VirtualBox version using the command: -``` -$ sudo apt install virtualbox-5.2 - -``` - -### Adding users to VirtualBox group - -We need to create and add our system user to the **vboxusers** group. You can either create a separate user and assign it to the vboxusers group or use an existing user. I don’t want to create a new user, so I added my existing user to this group. Please note that if you use a separate user for virtualbox, you must log out and log in to that particular user and do the rest of the steps. - -I am going to use my username, **sk**, so I ran the following command to add it to the vboxusers group. -``` -$ sudo usermod -aG vboxusers sk - -``` - -Now, run the following command to check if the virtualbox kernel modules are loaded or not. -``` -$ sudo systemctl status vboxdrv - -``` - -![][3] - -As you can see in the above screenshot, the vboxdrv module is loaded and running!
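As an aside, the same check can be scripted rather than read off the `systemctl` output by eye. The sketch below is an illustrative addition, not a step from the original guide: it inspects `/proc/modules` directly, and on a machine without VirtualBox it simply reports the module as not loaded.

```shell
# Check whether the vboxdrv kernel module is currently loaded.
# /proc/modules lists one loaded module per line, name first.
if grep -q '^vboxdrv ' /proc/modules 2>/dev/null; then
    echo "vboxdrv: loaded"
else
    echo "vboxdrv: not loaded"
fi
```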
- -For older Ubuntu versions, run: -``` -$ sudo /etc/init.d/vboxdrv status - -``` - -If the virtualbox module doesn’t start, run the following command to start it. -``` -$ sudo /etc/init.d/vboxdrv setup - -``` - -Great! We have successfully installed VirtualBox and started the virtualbox module. Now, let us go ahead and install the Oracle VirtualBox extension pack. - -### Install VirtualBox Extension pack - -The VirtualBox Extension pack provides the following functionalities to the VirtualBox guests. - - * The virtual USB 2.0 (EHCI) device - * VirtualBox Remote Desktop Protocol (VRDP) support - * Host webcam passthrough - * Intel PXE boot ROM - * Experimental support for PCI passthrough on Linux hosts - - - -Download the latest Extension pack for VirtualBox 5.2.x from [**here**][4]. -``` -$ wget https://download.virtualbox.org/virtualbox/5.2.14/Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack - -``` - -Install the Extension pack using the command: -``` -$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack - -``` - -Congratulations! We have successfully installed Oracle VirtualBox with the extension pack on an Ubuntu 18.04 LTS server. It is time to deploy virtual machines. Refer to the [**virtualbox official guide**][5] to start creating and managing virtual machines on the command line. - -Not everyone is a command line expert. Some of you might want to create and use virtual machines graphically. No worries! Here is where **phpVirtualBox** comes in handy!! - -### About phpVirtualBox - -**phpVirtualBox** is a free, web-based front-end to Oracle VirtualBox. It is written in PHP. Using phpVirtualBox, we can easily create, delete, manage and administer virtual machines via a web browser from any remote system on the network. - -### Install phpVirtualBox in Ubuntu 18.04 LTS - -Since it is a web-based tool, we need to install the Apache web server, PHP and some php modules.
- -To do so, run: -``` -$ sudo apt install apache2 php php-mysql libapache2-mod-php php-soap php-xml - -``` - -Then, download the phpVirtualBox 5.2.x version from the [**releases page**][6]. Please note that we have installed VirtualBox 5.2, so we must install phpVirtualBox version 5.2 as well. - -To download it, run: -``` -$ wget https://github.com/phpvirtualbox/phpvirtualbox/archive/5.2-0.zip - -``` - -Extract the downloaded archive with the command: -``` -$ unzip 5.2-0.zip - -``` - -This command will extract the contents of the 5.2-0.zip file into a folder named “phpvirtualbox-5.2-0”. Now, copy or move the contents of this folder to your apache web server root folder. -``` -$ sudo mv phpvirtualbox-5.2-0/ /var/www/html/phpvirtualbox - -``` - -Assign the proper permissions to the phpvirtualbox folder. -``` -$ sudo chmod 777 /var/www/html/phpvirtualbox/ - -``` - -Next, let us configure phpVirtualBox. - -Copy the sample config file as shown below. -``` -$ sudo cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php - -``` - -Edit the phpVirtualBox **config.php** file: -``` -$ sudo nano /var/www/html/phpvirtualbox/config.php - -``` - -Find the following lines and replace the username and password with your system user (the same username that we used in the “Adding users to VirtualBox group” section). - -In my case, my Ubuntu system username is **sk** , and its password is **ubuntu**. -``` -var $username = 'sk'; -var $password = 'ubuntu'; - -``` - -![][7] - -Save and close the file. - -Next, create a new file called **/etc/default/virtualbox** : -``` -$ sudo nano /etc/default/virtualbox - -``` - -Add the following line. Replace ‘sk’ with your own username. -``` -VBOXWEB_USER=sk - -``` - -Finally, reboot your system or simply restart the following services to complete the configuration.
-``` -$ sudo systemctl restart vboxweb-service - -$ sudo systemctl restart vboxdrv - -$ sudo systemctl restart apache2 - -``` - -### Adjust firewall to allow Apache web server - -By default, the Apache web server can’t be accessed from remote systems if you have enabled the UFW firewall in Ubuntu 18.04 LTS. You must allow the http and https traffic via UFW by following the steps below. - -First, let us view which applications have installed a profile using the command: -``` -$ sudo ufw app list -Available applications: -Apache -Apache Full -Apache Secure -OpenSSH - -``` - -As you can see, the Apache and OpenSSH applications have installed UFW profiles. - -If you look into the **“Apache Full”** profile, you will see that it enables traffic to the ports **80** and **443** : -``` -$ sudo ufw app info "Apache Full" -Profile: Apache Full -Title: Web Server (HTTP,HTTPS) -Description: Apache v2 is the next generation of the omnipresent Apache web -server. - -Ports: -80,443/tcp - -``` - -Now, run the following command to allow incoming HTTP and HTTPS traffic for this profile: -``` -$ sudo ufw allow in "Apache Full" -Rules updated -Rules updated (v6) - -``` - -If you want to allow only http (80) traffic, not https, allow the plain “Apache” profile instead: -``` -$ sudo ufw allow in "Apache" - -``` - -### Access phpVirtualBox Web console - -Now, go to any remote system that has a graphical web browser. - -In the address bar, type the address of your VirtualBox host followed by the phpvirtualbox folder, i.e. **http://IP-address-of-your-server/phpvirtualbox**. - -You should see the following screen. Enter the phpVirtualBox administrative user credentials. - -The default username and password of phpVirtualBox are **admin** / **admin**. - -![][8] - -Congratulations! You will now be greeted with the phpVirtualBox dashboard. - -![][9] - -Now, start creating your VMs and manage them from the phpVirtualBox dashboard. As I mentioned earlier, you can access phpVirtualBox from any system in the same network. All you need is a web browser and the username and password of phpVirtualBox.
- -If you haven’t enabled virtualization support in the BIOS of the host system (not the guest), phpVirtualBox allows you to create 32-bit guests only. To install 64-bit guest systems, you must enable virtualization in your host system’s BIOS. Look for an option that is something like “virtualization” or “hypervisor” in your BIOS and make sure it is enabled. - -That’s it. Hope this helps. If you find this guide useful, please share it on your social networks and support us. - -More good stuff to come. Stay tuned! - - -------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2016/07/Add-VirtualBox-repository.png -[3]:http://www.ostechnix.com/wp-content/uploads/2016/07/vboxdrv-service.png -[4]:https://www.virtualbox.org/wiki/Downloads -[5]:http://www.virtualbox.org/manual/ch08.html -[6]:https://github.com/phpvirtualbox/phpvirtualbox/releases -[7]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-config.png -[8]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-1.png -[9]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-2.png diff --git a/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md b/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md deleted file mode 100644 index a85a637830..0000000000 --- a/sources/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md +++ /dev/null @@ -1,332 +0,0 @@ -Setup
Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS -====== - -![](https://www.ostechnix.com/wp-content/uploads/2016/11/kvm-720x340.jpg) - -We already have covered [**setting up Oracle VirtualBox on Ubuntu 18.04**][1] headless server. In this tutorial, we will be discussing how to set up a headless virtualization server using **KVM** and how to manage the guest machines from a remote client. As you may know already, KVM ( **K** ernel-based **v** irtual **m** achine) is an open source, full virtualization solution for Linux. Using KVM, we can easily turn any Linux server into a complete virtualization environment in minutes and deploy different kinds of VMs such as GNU/Linux, *BSD, Windows, etc. - -### Setup Headless Virtualization Server Using KVM - -I tested this guide on Ubuntu 18.04 LTS server; however, this tutorial will work on other Linux distributions such as Debian, CentOS, RHEL and Scientific Linux. This method will be perfectly suitable for those who want to set up a simple virtualization environment on a Linux server that doesn’t have any graphical environment. - -For the purpose of this guide, I will be using two systems. - -**KVM virtualization server:** - - * **Host OS** – Ubuntu 18.04 LTS minimal server (No GUI) - * **IP Address of Host OS** : 192.168.225.22/24 - * **Guest OS** (Which we are going to host on Ubuntu 18.04) : Ubuntu 16.04 LTS server - - - -**Remote desktop client :** - - * **OS** – Arch Linux - - - -### Install KVM - -First, let us check if our system supports hardware virtualization. To do so, run the following command from the Terminal: -``` -$ egrep -c '(vmx|svm)' /proc/cpuinfo - -``` - -If the result is **zero (0)** , the system doesn’t support hardware virtualization or virtualization is disabled in the BIOS. Go to your BIOS, check for the virtualization option, and enable it. - -If the result is **1** or **more** , the system will support hardware virtualization.
However, you still need to enable the virtualization option in the BIOS before running the above commands. - -Alternatively, you can use the following command to verify it. You need to install KVM first, as described below, in order to use this command. -``` -$ kvm-ok - -``` - -**Sample output:** -``` -INFO: /dev/kvm exists -KVM acceleration can be used - -``` - -If you got the following error instead, you can still run guest machines in KVM, but the performance will be very poor. -``` -INFO: Your CPU does not support KVM extensions -INFO: For more detailed results, you should run this as root -HINT: sudo /usr/sbin/kvm-ok - -``` - -Also, there are other ways to find out if your CPU supports Virtualization or not. Refer to the following guide for more details. - -Next, install KVM and the other required packages to set up a virtualization environment in Linux. - -On Ubuntu and other DEB based systems, run: -``` -$ sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker - -``` - -Once KVM is installed, start the libvirtd service (If it is not started already): -``` -$ sudo systemctl enable libvirtd - -$ sudo systemctl start libvirtd - -``` - -### Create Virtual machines - -All virtual machine files and other related files will be stored under **/var/lib/libvirt/**. The default path of ISO images is **/var/lib/libvirt/boot/**. - -First, let us see if there are any virtual machines. To view the list of available virtual machines, run: -``` -$ sudo virsh list --all - -``` - -**Sample output:** -``` -Id Name State ----------------------------------------------------- - -``` - -![][3] - -As you see above, there is no virtual machine available right now. - -Now, let us create one. - -For example, let us create an Ubuntu 16.04 virtual machine with 512 MB RAM, 1 CPU core and an 8 GB hard disk.
-``` -$ sudo virt-install --name Ubuntu-16.04 --ram=512 --vcpus=1 --cpu host --hvm --disk path=/var/lib/libvirt/images/ubuntu-16.04-vm1,size=8 --cdrom /var/lib/libvirt/boot/ubuntu-16.04-server-amd64.iso --graphics vnc - -``` - -Please make sure you have the Ubuntu 16.04 ISO image in the path **/var/lib/libvirt/boot/** or any other path you have given in the above command. - -**Sample output:** -``` -WARNING Graphics requested but DISPLAY is not set. Not running virt-viewer. -WARNING No console to launch for the guest, defaulting to --wait -1 - -Starting install... -Creating domain... | 0 B 00:00:01 -Domain installation still in progress. Waiting for installation to complete. -Domain has shutdown. Continuing. -Domain creation completed. -Restarting guest. - -``` - -![][4] - -Let us break down the above command and see what each option does. - - * **--name** : This option defines the name of the virtual machine. In our case, the name of the VM is **Ubuntu-16.04**. - * **--ram=512** : Allocates 512MB RAM to the VM. - * **--vcpus=1** : Indicates the number of CPU cores in the VM. - * **--cpu host** : Optimizes the CPU properties for the VM by exposing the host’s CPU configuration to the guest. - * **--hvm** : Requests full hardware virtualization. - * **--disk path** : The location to save the VM’s hard disk and its size. In our example, I have allocated an 8 GB hard disk. - * **--cdrom** : The location of the installer ISO image. Please note that you must have the actual ISO image in this location. - * **--graphics vnc** : Allows VNC access to the VM from a remote client. - - - -### Access Virtual machines using VNC client - -Now, go to the remote desktop system and SSH to the Ubuntu server (virtualization server), for example `ssh sk@192.168.225.22`. - -Here, **sk** is my Ubuntu server’s user name and **192.168.225.22** is its IP address. - -Run the following command to find out the VNC port number. We need this to access the VM from a remote system.
-``` -$ sudo virsh dumpxml Ubuntu-16.04 | grep vnc - -``` - -**Sample output:** -``` -<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'> - -``` - -![][5] - -Note down the port number **5900**. Install any VNC client application. For this guide, I will be using TigerVNC. TigerVNC is available in the Arch Linux default repositories. To install it on Arch based systems, run: -``` -$ sudo pacman -S tigervnc - -``` - -From your remote client system that has the VNC client application installed, set up SSH port forwarding with a command such as `ssh sk@192.168.225.22 -L 5900:127.0.0.1:5900`. - -Again, **192.168.225.22** is my Ubuntu server’s (virtualization server) IP address. - -Then, open the VNC client from your Arch Linux (client). - -Type **localhost:5900** in the VNC server field and click the **Connect** button. - -![][6] - -Then start installing the Ubuntu VM just as you would on a physical system. - -![][7] - -![][8] - -Similarly, you can set up as many virtual machines as your server hardware specifications allow. - -Alternatively, you can use the **virt-viewer** utility to install an operating system in the guest machines. virt-viewer is available in most Linux distributions’ default repositories. After installing virt-viewer, run the following command to establish VNC access to the VM. -``` -$ sudo virt-viewer --connect=qemu+ssh://192.168.225.22/system --name Ubuntu-16.04 - -``` -
So, in order to start it, just specify its Id like below. -``` -$ sudo virsh start 2 - -``` - -To restart a VM, run: -``` -$ sudo virsh reboot Ubuntu-16.04 - -``` - -**Sample output:** -``` -Domain Ubuntu-16.04 is being rebooted - -``` - -![][11] - -To pause a running VM, run: -``` -$ sudo virsh suspend Ubuntu-16.04 - -``` - -**Sample output:** -``` -Domain Ubuntu-16.04 suspended - -``` - -To resume the suspended VM, run: -``` -$ sudo virsh resume Ubuntu-16.04 - -``` - -**Sample output:** -``` -Domain Ubuntu-16.04 resumed - -``` - -To shut down a VM, run: -``` -$ sudo virsh shutdown Ubuntu-16.04 - -``` - -**Sample output:** -``` -Domain Ubuntu-16.04 is being shutdown - -``` - -To completely remove a VM, run: -``` -$ sudo virsh undefine Ubuntu-16.04 - -$ sudo virsh destroy Ubuntu-16.04 - -``` - -**Sample output:** -``` -Domain Ubuntu-16.04 destroyed - -``` - -![][12] - -For more options, I recommend you look into the man pages. -``` -$ man virsh - -``` - -That’s all for now folks. Start playing with your new virtualization environment. KVM virtualization is well suited for research & development and testing purposes, but is not limited to them. If you have sufficient hardware, you can use it for large production environments. Have fun and don’t forget to leave your valuable comments in the comment section below. - -Cheers!
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/setup-headless-virtualization-server-using-kvm-ubuntu/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/ -[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_001.png -[4]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_008-1.png -[5]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_002.png -[6]:http://www.ostechnix.com/wp-content/uploads/2016/11/VNC-Viewer-Connection-Details_005.png -[7]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_006.png -[8]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_007.png -[9]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-1.png -[10]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-2.png -[11]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_011-1.png -[12]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_012.png diff --git a/sources/tech/20180715 Why is Python so slow.md b/sources/tech/20180715 Why is Python so slow.md new file mode 100644 index 0000000000..5c39a528a1 --- /dev/null +++ b/sources/tech/20180715 Why is Python so slow.md @@ -0,0 +1,207 @@ +HankChow translating + +Why is Python so slow? +============================================================ + +Python is booming in popularity. It is used in DevOps, Data Science, Web Development and Security. + +It does not, however, win any medals for speed. 
+ + +![](https://cdn-images-1.medium.com/max/1200/0*M2qZQsVnDS-4i5zc.jpg) + +> How does Java compare in terms of speed to C or C++ or C# or Python? The answer depends greatly on the type of application you’re running. No benchmark is perfect, but The Computer Language Benchmarks Game is [a good starting point][5]. + +I’ve been referring to the Computer Language Benchmarks Game for over a decade; compared with other languages like Java, C#, Go, JavaScript, C++, Python is [one of the slowest][6]. This includes [JIT][7] (C#, Java) and [AOT][8] (C, C++) compilers, as well as interpreted languages like JavaScript. + + _NB: When I say “Python”, I’m talking about the reference implementation of the language, CPython. I will refer to other runtimes in this article._ + +> I want to answer this question: When Python completes a comparable application 2–10x slower than another language,  _why is it slow_  and can’t we  _make it faster_ ? + +Here are the top theories: + +* “ _It’s the GIL (Global Interpreter Lock)_ ” + +* “ _It’s because it’s interpreted and not compiled_ ” + +* “ _It’s because it’s a dynamically typed language_ ” + +Which one of these reasons has the biggest impact on performance? + +### “It’s the GIL” + +Modern computers come with CPUs that have multiple cores, and sometimes multiple processors. In order to utilise all this extra processing power, the Operating System defines a low-level structure called a thread, where a process (e.g. Chrome Browser) can spawn multiple threads, each holding its own stream of instructions for the system to execute. That way if one process is particularly CPU-intensive, that load can be shared across the cores and this effectively makes most applications complete tasks faster. + +My Chrome Browser, as I’m writing this article, has 44 threads open. Keep in mind that the structure and API of threading are different between POSIX-based (e.g. Mac OS and Linux) and Windows OS. The operating system also handles the scheduling of threads.
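To make this concrete, here is a minimal sketch (the thread count and workload are arbitrary) of spawning threads from Python with the standard `threading` module:

```python
import threading

# A process can spawn several threads; the OS schedules them,
# but in CPython the GIL lets only one execute bytecode at a time.
results = []
lock = threading.Lock()

def work(n):
    total = sum(range(n))
    with lock:              # protect the shared list
        results.append(total)

threads = [threading.Thread(target=work, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 4 -- one result per thread
```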
+ +If you haven’t done multi-threaded programming before, a concept you’ll need to quickly become familiar with is locking. Unlike a single-threaded process, you need to ensure that when changing variables in memory, multiple threads don’t try and access/change the same memory address at the same time. + +When CPython creates variables, it allocates the memory and then counts how many references to that variable exist; this is a concept known as reference counting. If the number of references is 0, then it frees that piece of memory from the system. This is why creating a “temporary” variable within, say, the scope of a for loop, doesn’t blow up the memory consumption of your application. + +The challenge then becomes how CPython protects the reference count when variables are shared between multiple threads. There is a “global interpreter lock” that carefully controls thread execution. The interpreter can only execute one operation at a time, regardless of how many threads it has. + +#### What does this mean for the performance of a Python application? + +If you have a single-threaded, single-interpreter application, the GIL will make no difference to the speed. Removing the GIL would have no impact on the performance of your code. + +If you wanted to implement concurrency within a single interpreter (Python process) by using threading, and your threads were IO intensive (e.g. Network IO or Disk IO), you would see the consequences of GIL-contention. + +![](https://cdn-images-1.medium.com/max/1600/0*S_iSksY5oM5H1Qf_.png) +From David Beazley’s GIL visualised post [http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html][1] + +If you have a web-application (e.g. Django) and you’re using WSGI, then each request to your web-app is a separate Python interpreter, so there is only 1 lock  _per_  request.
Because the Python interpreter is slow to start, some WSGI implementations have a “Daemon Mode” [which keeps Python process(es) on the go for you.][9] + +#### What about other Python runtimes? + +[PyPy has a GIL][10] and it is typically >3x faster than CPython. + +[Jython does not have a GIL][11] because a Python thread in Jython is represented by a Java thread and benefits from the JVM memory-management system. + +#### How does JavaScript do this? + +Well, firstly, all JavaScript engines [use mark-and-sweep Garbage Collection][12]. As stated, the primary need for the GIL is CPython’s memory-management algorithm. + +JavaScript does not have a GIL, but it’s also single-threaded so it doesn’t require one. JavaScript’s event-loop and Promise/Callback pattern are how asynchronous-programming is achieved in place of concurrency. Python has a similar thing with the asyncio event-loop. + +### “It’s because it’s an interpreted language” + +I hear this a lot and I find it a gross simplification of the way CPython actually works. If at a terminal you wrote `python myscript.py` then CPython would start a long sequence of reading, lexing, parsing, compiling, interpreting and executing that code. + +If you’re interested in how that process works, I’ve written about it before: + +[Modifying the Python language in 6 minutes +This week I raised my first pull-request to the CPython core project, which was declined :-( but as to not completely…hackernoon.com][13][][14] + +An important point in that process is the creation of a `.pyc` file: at the compiler stage, the bytecode sequence is written to a file inside `__pycache__/` on Python 3 or in the same directory in Python 2. This doesn’t just apply to your script, but all of the code you imported, including 3rd party modules. + +So most of the time (unless you write code which you only ever run once?), Python is interpreting bytecode and executing it locally.
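You can watch this yourself: the standard `dis` module prints the bytecode the interpreter loops over (the exact opcodes vary between CPython versions):

```python
import dis

def add(a, b):
    return a + b

# Show the bytecode CPython compiled for add().
dis.dis(add)
# On CPython 3.6 this prints something like:
#   LOAD_FAST    0 (a)
#   LOAD_FAST    1 (b)
#   BINARY_ADD
#   RETURN_VALUE
```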
Compare that with Java and C#.NET: + +> Java compiles to an “Intermediate Language” and the Java Virtual Machine reads the bytecode and just-in-time compiles it to machine code. The .NET CIL is the same; the .NET Common-Language-Runtime, CLR, uses just-in-time compilation to machine code. + +So, why is Python so much slower than both Java and C# in the benchmarks if they all use a virtual machine and some sort of Bytecode? Firstly, .NET and Java are JIT-Compiled. + +JIT or Just-in-time compilation requires an intermediate language to allow the code to be split into chunks (or frames). Ahead of time (AOT) compilers are designed to ensure that the CPU can understand every line in the code before any interaction takes place. + +The JIT itself does not make the execution any faster, because it is still executing the same bytecode sequences. However, JIT enables optimizations to be made at runtime. A good JIT optimizer will see which parts of the application are being executed a lot, calling these “hot spots”. It will then make optimizations to those bits of code, by replacing them with more efficient versions. + +This means that when your application does the same thing again and again, it can be significantly faster. Also, keep in mind that Java and C# are strongly-typed languages so the optimiser can make many more assumptions about the code. + +PyPy has a JIT and, as mentioned in the previous section, is significantly faster than CPython. This performance benchmark article goes into more detail — + +[Which is the fastest version of Python? +Of course, “it depends”, but what does it depend on and how can you assess which is the fastest version of Python for…hackernoon.com][15][][16]
The .NET CLR gets around this by starting at system-startup, but the developers of the CLR also develop the Operating System on which the CLR runs. + +If you have a single Python process running for a long time, with code that can be optimized because it contains “hot spots”, then a JIT makes a lot of sense. + +However, CPython is a general-purpose implementation. So if you were developing command-line applications using Python, having to wait for a JIT to start every time the CLI was called would be horribly slow. + +CPython has to try and serve as many use cases as possible. There was the possibility of [plugging a JIT into CPython][17] but this project has largely stalled. + +> If you want the benefits of a JIT and you have a workload that suits it, use PyPy. + +### “It’s because it’s a dynamically typed language” + +In a “Statically-Typed” language, you have to specify the type of a variable when it is declared. Those would include C, C++, Java, C#, Go. + +In a dynamically-typed language, there is still the concept of types, but the type of a variable is dynamic. + +``` +a = 1 +a = "foo" +``` + +In this toy example, Python creates a second variable with the same name and a type of `str` and deallocates the memory created for the first instance of `a`. + +Statically-typed languages aren’t designed as such to make your life hard; they are designed that way because of the way the CPU operates. If everything eventually needs to equate to a simple binary operation, you have to convert objects and types down to a low-level data structure. + +Python does this for you; you just never see it, nor do you need to care. + +Not having to declare the type isn’t what makes Python slow; the design of the Python language enables you to make almost anything dynamic. You can replace the methods on objects at runtime, you can monkey-patch low-level system calls to a value declared at runtime. Almost anything is possible.
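A tiny sketch of that dynamism, replacing a method on a class while the program runs:

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
print(g.greet())   # hello

# Monkey-patch the class at runtime; existing instances
# pick up the new method immediately.
Greeter.greet = lambda self: "bonjour"
print(g.greet())   # bonjour
```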
+ +It’s this design that makes it incredibly hard to optimise Python. + +To illustrate my point, I’m going to use a syscall tracing tool that works in Mac OS called DTrace. CPython distributions do not come with DTrace built in, so you have to recompile CPython. I’m using 3.6.6 for my demo. + +``` +wget https://github.com/python/cpython/archive/v3.6.6.zip +unzip v3.6.6.zip +cd cpython-3.6.6 +./configure --with-dtrace +make +``` + +Now `python.exe` will have DTrace tracers throughout the code. [Paul Ross wrote an awesome Lightning Talk on DTrace][19]. You can [download DTrace starter files][20] for Python to measure function calls, execution time, CPU time, syscalls, all sorts of fun. e.g. + +`sudo dtrace -s toolkit/.d -c ‘../cpython/python.exe script.py’` + +The `py_callflow` tracer shows all the function calls in your application. + + +![](https://cdn-images-1.medium.com/max/1600/1*Lz4UdUi4EwknJ0IcpSJ52g.gif) + +So, does Python’s dynamic typing make it slow? + +* Comparing and converting types is costly; every time a variable is read, written to, or referenced, the type is checked + +* It is hard to optimise a language that is so dynamic. The reason many alternatives to Python are so much faster is that they make compromises to flexibility in the name of performance + +* Looking at [Cython][2], which combines C-Static Types and Python to optimise code where the types are known, [can provide][3] an 84x performance improvement. + +### Conclusion + +> Python is primarily slow because of its dynamic nature and versatility. It can be used as a tool for all sorts of problems, where more optimised and faster alternatives are probably available. + +There are, however, ways of optimising your Python applications by leveraging async, understanding the profiling tools, and considering multiple interpreters. + +For applications where startup time is unimportant and the code would benefit from a JIT, consider PyPy.
+ +For parts of your code where performance is critical and you have more statically-typed variables, consider using [Cython][4]. + +#### Further reading + +Jake VDP’s excellent article (although slightly dated) [https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/][21] + +Dave Beazley’s talk on the GIL [http://www.dabeaz.com/python/GIL.pdf][22] + +All about JIT compilers [https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/][23] + +-------------------------------------------------------------------------------- + +via: https://hackernoon.com/why-is-python-so-slow-e5074b6fe55b + +作者:[Anthony Shaw][a] +选题:[oska874][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://hackernoon.com/@anthonypjshaw?source=post_header_lockup +[b]:https://github.com/oska874 +[1]:http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html +[2]:http://cython.org/ +[3]:http://notes-on-cython.readthedocs.io/en/latest/std_dev.html +[4]:http://cython.org/ +[5]:http://algs4.cs.princeton.edu/faq/ +[6]:https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/python.html +[7]:https://en.wikipedia.org/wiki/Just-in-time_compilation +[8]:https://en.wikipedia.org/wiki/Ahead-of-time_compilation +[9]:https://www.slideshare.net/GrahamDumpleton/secrets-of-a-wsgi-master +[10]:http://doc.pypy.org/en/latest/faq.html#does-pypy-have-a-gil-why +[11]:http://www.jython.org/jythonbook/en/1.0/Concurrency.html#no-global-interpreter-lock +[12]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management +[13]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14 +[14]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14 +[15]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b 
+[16]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b +[17]:https://www.slideshare.net/AnthonyShaw5/pyjion-a-jit-extension-system-for-cpython +[18]:https://github.com/python/cpython/archive/v3.6.6.zip +[19]:https://github.com/paulross/dtrace-py#the-lightning-talk +[20]:https://github.com/paulross/dtrace-py/tree/master/toolkit +[21]:https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/ +[22]:http://www.dabeaz.com/python/GIL.pdf +[23]:https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/ diff --git a/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md b/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md deleted file mode 100644 index 919182ba1f..0000000000 --- a/sources/tech/20180724 75 Most Used Essential Linux Applications of 2018.md +++ /dev/null @@ -1,988 +0,0 @@ -75 Most Used Essential Linux Applications of 2018 -====== - -**2018** has been an awesome year for a lot of applications, especially those that are both free and open source. And while various Linux distributions come with a number of default apps, users are free to take them out and use any of the free or paid alternatives of their choice. - -Today, we bring you a [list of Linux applications][3] that have been able to make it to users’ Linux installations almost all the time despite the butt-load of other alternatives. - -Simply put, any app on this list is among the most used in its category, and if you haven’t already tried it out, you are probably missing out. Enjoy! - -### Backup Tools - -#### Rsync - -[Rsync][4] is an open source bandwidth-friendly utility tool for performing swift incremental file transfers and it is available for free. -``` -$ rsync [OPTION...] SRC... [DEST] - -``` - -For more examples and usage, read our article “[10 Practical Examples of Rsync Command][5]”.
- -#### Timeshift - -[Timeshift][6] provides users with the ability to protect their system by taking incremental snapshots which can be reverted to at a different date – similar to the function of Time Machine in Mac OS and System Restore in Windows. - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Timeshift-Create-Linux-Mint-Snapshot.png) - -### BitTorrent Client - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Torrent-Clients.png) - -#### Deluge - -[Deluge][7] is a beautiful cross-platform BitTorrent client that aims to perfect the **μTorrent** experience and make it available to users for free. - -Install **Deluge** on **Ubuntu** and **Debian** , using the following commands. -``` -$ sudo add-apt-repository ppa:deluge-team/ppa -$ sudo apt-get update -$ sudo apt-get install deluge - -``` - -#### qBittorrent - -[qBittorrent][8] is an open source BitTorrent protocol client that aims to provide a free alternative to torrent apps like μTorrent. - -Install **qBittorrent** on **Ubuntu** and **Debian** , using the following commands. -``` -$ sudo add-apt-repository ppa:qbittorrent-team/qbittorrent-stable -$ sudo apt-get update -$ sudo apt-get install qbittorrent - -``` - -#### Transmission - -[Transmission][9] is also a BitTorrent client with awesome functionalities and a major focus on speed and ease of use. It comes preinstalled with many Linux distros. - -Install **Transmission** on **Ubuntu** and **Debian** , using the following commands. -``` -$ sudo add-apt-repository ppa:transmissionbt/ppa -$ sudo apt-get update -$ sudo apt-get install transmission-gtk transmission-cli transmission-common transmission-daemon - -``` - -### Cloud Storage - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Cloud-Storage.png) - -#### Dropbox - -The [Dropbox][10] team rebranded their cloud service earlier this year to provide even better performance and app integration for their clients. It starts with 2GB of storage for free.
- -Install **Dropbox** on **Ubuntu** and **Debian** , using following commands. -``` -$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86" | tar xzf - [On 32-Bit] -$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf - [On 64-Bit] -$ ~/.dropbox-dist/dropboxd - -``` - -#### Google Drive - -[Google Drive][11] is Google’s cloud service solution and my guess is that it needs no introduction. Just like with **Dropbox** , you can sync files across all your connected devices. It starts with 15GB of storage for free and this includes Gmail, Google photos, Maps, etc. - -Check out: [5 Google Drive Clients for Linux][12] - -#### Mega - -[Mega][13] stands out from the rest because apart from being extremely security-conscious, it gives free users 50GB to do as they wish! Its end-to-end encryption ensures that they can’t access your data, and if you forget your recovery key, you too wouldn’t be able to. - -[**Download MEGA Cloud Storage for Ubuntu][14] - -### Commandline Editors - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Commandline-Editors.png) - -#### Vim - -[Vim][15] is an open source clone of vi text editor developed to be customizable and able to work with any type of text. - -Install **Vim** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:jonathonf/vim -$ sudo apt update -$ sudo apt install vim - -``` - -#### Emacs - -[Emacs][16] refers to a set of highly configurable text editors. The most popular variant, GNU Emacs, is written in Lisp and C to be self-documenting, extensible, and customizable. - -Install **Emacs** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:kelleyk/emacs -$ sudo apt update -$ sudo apt install emacs25 - -``` - -#### Nano - -[Nano][17] is a feature-rich CLI text editor for power users and it has the ability to work with different terminals, among other functionalities. 
- -Install **Nano** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:n-muench/programs-ppa -$ sudo apt-get update -$ sudo apt-get install nano - -``` - -### Download Manager - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Download-Managers.png) - -#### Aria2 - -[Aria2][18] is an open source lightweight multi-source and multi-protocol command line-based downloader with support for Metalinks, torrents, HTTP/HTTPS, SFTP, etc. - -Install **Aria2** on **Ubuntu** and **Debian** , using following command. -``` -$ sudo apt-get install aria2 - -``` - -#### uGet - -[uGet][19] has earned its title as the **#1** open source download manager for Linux distros and it features the ability to handle any downloading task you can throw at it including using multiple connections, using queues, categories, etc. - -Install **uGet** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:plushuang-tw/uget-stable -$ sudo apt update -$ sudo apt install uget - -``` - -#### XDM - -[XDM][20], **Xtreme Download Manager** is an open source downloader written in Java. Like any good download manager, it can work with queues, torrents, browsers, and it also includes a video grabber and a smart scheduler. - -Install **XDM** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:noobslab/apps -$ sudo apt-get update -$ sudo apt-get install xdman - -``` - -### Email Clients - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Email-Clients.png) - -#### Thunderbird - -[Thunderbird][21] is among the most popular email applications. It is free, open source, customizable, feature-rich, and above all, easy to install. - -Install **Thunderbird** on **Ubuntu** and **Debian** , using following commands. 
-``` -$ sudo add-apt-repository ppa:ubuntu-mozilla-security/ppa -$ sudo apt-get update -$ sudo apt-get install thunderbird - -``` - -#### Geary - -[Geary][22] is an open source email client based on WebKitGTK+. It is free, open-source, feature-rich, and adopted by the GNOME project. - -Install **Geary** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:geary-team/releases -$ sudo apt-get update -$ sudo apt-get install geary - -``` - -#### Evolution - -[Evolution][23] is a free and open source email client for managing emails, meeting schedules, reminders, and contacts. - -Install **Evolution** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:gnome3-team/gnome3-staging -$ sudo apt-get update -$ sudo apt-get install evolution - -``` - -### Finance Software - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-Accounting-Software.png) - -#### GnuCash - -[GnuCash][24] is a free, cross-platform, and open source software for financial accounting tasks for personal and small to mid-size businesses. - -Install **GnuCash** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo sh -c 'echo "deb http://archive.getdeb.net/ubuntu $(lsb_release -sc)-getdeb apps" >> /etc/apt/sources.list.d/getdeb.list' -$ sudo apt-get update -$ sudo apt-get install gnucash - -``` - -#### KMyMoney - -[KMyMoney][25] is a finance manager software that provides all important features found in the commercially-available, personal finance managers. - -Install **KMyMoney** on **Ubuntu** and **Debian** , using following commands. 
-``` -$ sudo add-apt-repository ppa:claydoh/kmymoney2-kde4 -$ sudo apt-get update -$ sudo apt-get install kmymoney - -``` - -### IDE Editors - -![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-IDE-Editors.png) - -#### Eclipse IDE - -[Eclipse][26] is the most widely used Java IDE containing a base workspace and an impossible-to-overemphasize configurable plug-in system for personalizing its coding environment. - -For installation, read our article “[How to Install Eclipse Oxygen IDE in Debian and Ubuntu][27]” - -#### Netbeans IDE - -A fan-favourite, [Netbeans][28] enables users to easily build applications for mobile, desktop, and web platforms using Java, PHP, HTML5, JavaScript, and C/C++, among other languages. - -For installation, read our article “[How to Install Netbeans Oxygen IDE in Debian and Ubuntu][29]” - -#### Brackets - -[Brackets][30] is an advanced text editor developed by Adobe to feature visual tools, preprocessor support, and a design-focused user flow for web development. In the hands of an expert, it can serve as an IDE in its own right. - -Install **Brackets** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:webupd8team/brackets -$ sudo apt-get update -$ sudo apt-get install brackets - -``` - -#### Atom IDE - -[Atom IDE][31] is a more robust version of Atom text editor achieved by adding a number of extensions and libraries to boost its performance and functionalities. It is, in a sense, Atom on steroids. - -Install **Atom** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install snapd -$ sudo snap install atom --classic - -``` - -#### Light Table - -[Light Table][32] is a self-proclaimed next-generation IDE developed to offer awesome features like data value flow stats and coding collaboration. - -Install **Light Table** on **Ubuntu** and **Debian** , using following commands. 
-```
-$ sudo add-apt-repository ppa:dr-akulavich/lighttable
-$ sudo apt-get update
-$ sudo apt-get install lighttable-installer
-
-```
-
-#### Visual Studio Code
-
-[Visual Studio Code][33] is a source code editor created by Microsoft to offer users the most advanced features in a text editor, including syntax highlighting, code completion, debugging, performance statistics and graphs, etc.
-
-[**Download Visual Studio Code for Ubuntu][34]
-
-### Instant Messaging
-
-![](https://www.fossmint.com/wp-content/uploads/2018/07/Linux-IM-Clients.png)
-
-#### Pidgin
-
-[Pidgin][35] is an open source instant messaging app that supports virtually all chatting platforms and can have its abilities extended using plugins.
-
-Install **Pidgin** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:jonathonf/backports
-$ sudo apt-get update
-$ sudo apt-get install pidgin
-
-```
-
-#### Skype
-
-[Skype][36] needs no introduction and its awesomeness is available for any interested Linux user.
-
-Install **Skype** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt install snapd
-$ sudo snap install skype --classic
-
-```
-
-#### Empathy
-
-[Empathy][37] is a messaging app with support for voice, video chat, text, and file transfers over several protocols. It also allows you to add other service accounts to it and interface with all of them through it.
-
-Install **Empathy** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install empathy
-
-```
-
-### Linux Antivirus
-
-#### ClamAV/ClamTk
-
-[ClamAV][38] is an open source and cross-platform command line antivirus app for detecting Trojans, viruses, and other malicious code. [ClamTk][39] is its GUI front-end.
-
-Install **ClamAV/ClamTk** on **Ubuntu** and **Debian** , using following commands.
-``` -$ sudo apt-get install clamav -$ sudo apt-get install clamtk - -``` - -### Linux Desktop Environments - -#### Cinnamon - -[Cinnamon][40] is a free and open-source derivative of **GNOME3** and it follows the traditional desktop metaphor conventions. - -Install **Cinnamon** desktop on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:embrosyn/cinnamon -$ sudo apt update -$ sudo apt install cinnamon-desktop-environment lightdm - -``` - -#### Mate - -The [Mate][41] Desktop Environment is a derivative and continuation of **GNOME2** developed to offer an attractive UI on Linux using traditional metaphors. - -Install **Mate** desktop on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt install tasksel -$ sudo apt update -$ sudo tasksel install ubuntu-mate-desktop - -``` - -#### GNOME - -[GNOME][42] is a Desktop Environment comprised of several free and open-source applications and can run on any Linux distro and on most BSD derivatives. - -Install **Gnome** desktop on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt install tasksel -$ sudo apt update -$ sudo tasksel install ubuntu-desktop - -``` - -#### KDE - -[KDE][43] is developed by the KDE community to provide users with a graphical solution to interfacing with their system and performing several computing tasks. - -Install **KDE** desktop on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt install tasksel -$ sudo apt update -$ sudo tasksel install kubuntu-desktop - -``` - -### Linux Maintenance Tools - -#### GNOME Tweak Tool - -The [GNOME Tweak Tool][44] is the most popular tool for customizing and tweaking GNOME3 and GNOME Shell settings. - -Install **GNOME Tweak Tool** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt install gnome-tweak-tool - -``` - -#### Stacer - -[Stacer][45] is a free, open-source app for monitoring and optimizing Linux systems. 
- -Install **Stacer** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:oguzhaninan/stacer -$ sudo apt-get update -$ sudo apt-get install stacer - -``` - -#### BleachBit - -[BleachBit][46] is a free disk space cleaner that also works as a privacy manager and system optimizer. - -[**Download BleachBit for Ubuntu][47] - -### Linux Terminals - -#### GNOME Terminal - -[GNOME Terminal][48] is GNOME’s default terminal emulator. - -Install **Gnome Terminal** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install gnome-terminal - -``` - -#### Konsole - -[Konsole][49] is a terminal emulator for KDE. - -Install **Konsole** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install konsole - -``` - -#### Terminator - -[Terminator][50] is a feature-rich GNOME Terminal-based terminal app built with a focus on arranging terminals, among other functions. - -Install **Terminator** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install terminator - -``` - -#### Guake - -[Guake][51] is a lightweight drop-down terminal for the GNOME Desktop Environment. - -Install **Guake** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo apt-get install guake - -``` - -### Multimedia Editors - -#### Ardour - -[Ardour][52] is a beautiful Digital Audio Workstation (DAW) for recording, editing, and mixing audio professionally. - -Install **Ardour** on **Ubuntu** and **Debian** , using following commands. -``` -$ sudo add-apt-repository ppa:dobey/audiotools -$ sudo apt-get update -$ sudo apt-get install ardour - -``` - -#### Audacity - -[Audacity][53] is an easy-to-use cross-platform and open source multi-track audio editor and recorder; arguably the most famous of them all. - -Install **Audacity** on **Ubuntu** and **Debian** , using following commands. 
-```
-$ sudo add-apt-repository ppa:ubuntuhandbook1/audacity
-$ sudo apt-get update
-$ sudo apt-get install audacity
-
-```
-
-#### GIMP
-
-[GIMP][54] is the most popular open source Photoshop alternative, and for good reason. It features various customization options, 3rd-party plugins, and a helpful user community.
-
-Install **Gimp** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:otto-kesselgulasch/gimp
-$ sudo apt update
-$ sudo apt install gimp
-
-```
-
-#### Krita
-
-[Krita][55] is an open source painting app that can also serve as an image manipulating tool, and it features a beautiful UI and reliable performance.
-
-Install **Krita** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:kritalime/ppa
-$ sudo apt update
-$ sudo apt install krita
-
-```
-
-#### Lightworks
-
-[Lightworks][56] is a powerful, flexible, and beautiful tool for editing videos professionally. It comes feature-packed with hundreds of amazing effects and presets that allow it to handle any editing task that you throw at it, and it has 25 years of experience to back up its claims.
-
-[**Download Lightworks for Ubuntu][57]
-
-#### OpenShot
-
-[OpenShot][58] is an award-winning free and open source video editor known for its excellent performance and powerful capabilities.
-
-Install **Openshot** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:openshot.developers/ppa
-$ sudo apt update
-$ sudo apt install openshot-qt
-
-```
-
-#### Pitivi
-
-[Pitivi][59] is a beautiful video editor that features a clean code base and an awesome community, is easy to use, and allows for hassle-free collaboration.
-
-Install **Pitivi** on **Ubuntu** and **Debian** , using following commands.
-```
-$ flatpak install --user https://flathub.org/repo/appstream/org.pitivi.Pitivi.flatpakref
-$ flatpak install --user http://flatpak.pitivi.org/pitivi.flatpakref
-$ flatpak run org.pitivi.Pitivi//stable
-
-```
-
-### Music Players
-
-#### Rhythmbox
-
-[Rhythmbox][60] possesses the ability to perform all music tasks you throw at it and has so far proved reliable enough to ship with Ubuntu by default.
-
-Install **Rhythmbox** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:fossfreedom/rhythmbox
-$ sudo apt-get update
-$ sudo apt-get install rhythmbox
-
-```
-
-#### Lollypop
-
-[Lollypop][61] is a beautiful, relatively new, open source music player featuring a number of advanced options like online radio, scrubbing support and party mode. Yet, it manages to keep everything simple and easy to manage.
-
-Install **Lollypop** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:gnumdk/lollypop
-$ sudo apt-get update
-$ sudo apt-get install lollypop
-
-```
-
-#### Amarok
-
-[Amarok][62] is a robust music player with an intuitive UI and tons of advanced features bundled into a single unit. It also allows users to discover new music based on their genre preferences.
-
-Install **Amarok** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install amarok
-
-```
-
-#### Clementine
-
-[Clementine][63] is an Amarok-inspired music player that also features a straightforward UI, advanced control features, and the ability to let users search for and discover new music.
-
-Install **Clementine** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:me-davidsansome/clementine
-$ sudo apt-get update
-$ sudo apt-get install clementine
-
-```
-
-#### Cmus
-
-[Cmus][64] is arguably the most efficient CLI music player; it is fast and reliable, and its functionality can be increased using extensions.
-
-Install **Cmus** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:jmuc/cmus
-$ sudo apt-get update
-$ sudo apt-get install cmus
-
-```
-
-### Office Suites
-
-#### Calligra Suite
-
-The [Calligra Suite][65] provides users with a set of 8 applications which cover office, management, and graphics tasks.
-
-Install **Calligra Suite** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install calligra
-
-```
-
-#### LibreOffice
-
-[LibreOffice][66] is the most actively developed office suite in the open source community; it is known for its reliability, and its functions can be increased using extensions.
-
-Install **LibreOffice** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:libreoffice/ppa
-$ sudo apt update
-$ sudo apt install libreoffice
-
-```
-
-#### WPS Office
-
-[WPS Office][67] is a beautiful office suite alternative with a more modern UI.
-
-[**Download WPS Office for Ubuntu][68]
-
-### Screenshot Tools
-
-#### Shutter
-
-[Shutter][69] allows users to take screenshots of their desktop and then edit them using filters and other effects, coupled with the option to upload and share them online.
-
-Install **Shutter** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository -y ppa:shutter/ppa
-$ sudo apt update
-$ sudo apt install shutter
-
-```
-
-#### Kazam
-
-[Kazam][70] is a screencaster that captures screen content and outputs a video and audio file playable by any video player with VP8/WebM and PulseAudio support.
-
-Install **Kazam** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:kazam-team/unstable-series
-$ sudo apt update
-$ sudo apt install kazam python3-cairo python3-xlib
-
-```
-
-#### Gnome Screenshot
-
-[Gnome Screenshot][71] was once bundled with Gnome utilities but is now a standalone app.
It can be used to take screenshots in a format that is easily shareable.
-
-Install **Gnome Screenshot** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install gnome-screenshot
-
-```
-
-### Screen Recorders
-
-#### SimpleScreenRecorder
-
-[SimpleScreenRecorder][72] was created to outdo the screen-recording apps available at the time, and it has since become one of the most efficient and easy-to-use screen recorders for Linux distros.
-
-Install **SimpleScreenRecorder** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:maarten-baert/simplescreenrecorder
-$ sudo apt-get update
-$ sudo apt-get install simplescreenrecorder
-
-```
-
-#### recordMyDesktop
-
-[recordMyDesktop][73] is an open source session recorder that is also capable of recording desktop session audio.
-
-Install **recordMyDesktop** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install gtk-recordmydesktop
-
-```
-
-### Text Editors
-
-#### Atom
-
-[Atom][74] is a modern and customizable text editor created and maintained by GitHub. It is ready for use right out of the box and can have its functionality enhanced and its UI customized using extensions and themes.
-
-Install **Atom** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install snapd
-$ sudo snap install atom --classic
-
-```
-
-#### Sublime Text
-
-[Sublime Text][75] is easily among the most awesome text editors to date. It is customizable, lightweight (even when bulldozed with a lot of data files and extensions), flexible, and remains free to use forever.
-
-Install **Sublime Text** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install snapd
-$ sudo snap install sublime-text
-
-```
-
-#### Geany
-
-[Geany][76] is a memory-friendly text editor with basic IDE features, designed to exhibit short load times and with functionality extensible using libraries.
-
-Install **Geany** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install geany
-
-```
-
-#### Gedit
-
-[Gedit][77] is famous for its simplicity and it comes preinstalled with many Linux distros because of its function as an excellent general purpose text editor.
-
-Install **Gedit** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install gedit
-
-```
-
-### To-Do List Apps
-
-#### Evernote
-
-[Evernote][78] is a cloud-based note-taking productivity app designed to work perfectly with different types of notes including to-do lists and reminders.
-
-There is no official Evernote app for Linux, so check out these third-party [6 Evernote Alternative Clients for Linux][79].
-
-#### Everdo
-
-[Everdo][78] is a beautiful, security-conscious, low-friction Getting-Things-Done productivity app for handling to-dos and other note types. If Evernote comes off to you in an unpleasant way, Everdo is a perfect alternative.
-
-[**Download Everdo for Ubuntu][80]
-
-#### Taskwarrior
-
-[Taskwarrior][81] is an open source and cross-platform command line app for managing tasks. It is famous for its speed and distraction-free environment.
-
-Install **Taskwarrior** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get update
-$ sudo apt-get install taskwarrior
-
-```
-
-### Video Players
-
-#### Banshee
-
-[Banshee][82] is an open source multi-format-supporting media player that was first developed in 2005 and has only been getting better since.
-
-Install **Banshee** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:banshee-team/ppa
-$ sudo apt-get update
-$ sudo apt-get install banshee
-
-```
-
-#### VLC
-
-[VLC][83] is my favourite video player and it’s so awesome that it can play almost any audio and video format you throw at it. You can also use it to play internet radio, record desktop sessions, and stream movies online.
-
-Install **VLC** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:videolan/stable-daily
-$ sudo apt-get update
-$ sudo apt-get install vlc
-
-```
-
-#### Kodi
-
-[Kodi][84] is among the world’s most famous media players and it comes as a full-fledged media centre app for playing all things media whether locally or remotely.
-
-Install **Kodi** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo apt-get install software-properties-common
-$ sudo add-apt-repository ppa:team-xbmc/ppa
-$ sudo apt-get update
-$ sudo apt-get install kodi
-
-```
-
-#### SMPlayer
-
-[SMPlayer][85] is a GUI for the award-winning **MPlayer** and it is capable of handling all popular media formats, coupled with the ability to stream from YouTube, cast to Chromecast, and download subtitles.
-
-Install **SMPlayer** on **Ubuntu** and **Debian** , using following commands.
-```
-$ sudo add-apt-repository ppa:rvm/smplayer
-$ sudo apt-get update
-$ sudo apt-get install smplayer
-
-```
-
-### Virtualization Tools
-
-#### VirtualBox
-
-[VirtualBox][86] is an open source app created for general-purpose OS virtualization and it can be run on servers, desktops, and embedded systems.
-
-Install **VirtualBox** on **Ubuntu** and **Debian** , using following commands.
-```
-$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
-$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
-$ sudo apt-get update
-$ sudo apt-get install virtualbox-5.2
-$ virtualbox
-
-```
-
-#### VMWare
-
-[VMware][87] is a digital workspace that provides platform virtualization and cloud computing services to customers and is reportedly the first to have successfully virtualized the x86 architecture. One of its products, VMware Workstation, allows users to run multiple OSes on a single machine.
-
-For installation, read our article “[How to Install VMware Workstation Pro on Ubuntu][88]”.
-
-### Web Browsers
-
-#### Chrome
-
-[Google Chrome][89] is undoubtedly the most popular browser. Known for its speed, simplicity, security, and beauty following Google’s Material Design trend, Chrome is a browser that web developers cannot do without. It is also free to use, and its underlying Chromium project is open source.
-
-Install **Google Chrome** on **Ubuntu** and **Debian** , using following commands.
-```
-$ wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
-$ sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
-$ sudo apt-get update
-$ sudo apt-get install google-chrome-stable
-
-```
-
-#### Firefox
-
-[Firefox Quantum][90] is a beautiful, speedy, task-ready, and customizable browser capable of any browsing task that you throw at it. It is also free, open source, and packed with developer-friendly tools that are easy for even beginners to get up and running with.
-
-Install **Firefox Quantum** on **Ubuntu** and **Debian** , using following commands.
-``` -$ sudo add-apt-repository ppa:mozillateam/firefox-next -$ sudo apt update && sudo apt upgrade -$ sudo apt install firefox - -``` - -#### Vivaldi - -[Vivaldi][91] is a free and open source Chrome-based project that aims to perfect Chrome’s features with a couple of more feature additions. It is known for its colourful panels, memory-friendly performance, and flexibility. - -[**Download Vivaldi for Ubuntu][91] - -That concludes our list for today. Did I skip a famous title? Tell me about it in the comments section below. - -Don’t forget to share this post and to subscribe to our newsletter to get the latest publications from FossMint. - - --------------------------------------------------------------------------------- - -via: https://www.fossmint.com/most-used-linux-applications/ - -作者:[Martins D. Okoi][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.fossmint.com/author/dillivine/ -[1]:https://plus.google.com/share?url=https://www.fossmint.com/most-used-linux-applications/ (Share on Google+) -[2]:https://www.linkedin.com/shareArticle?mini=true&url=https://www.fossmint.com/most-used-linux-applications/ (Share on LinkedIn) -[3]:https://www.fossmint.com/awesome-linux-software/ -[4]:https://rsync.samba.org/ -[5]:https://www.tecmint.com/rsync-local-remote-file-synchronization-commands/ -[6]:https://github.com/teejee2008/timeshift -[7]:https://deluge-torrent.org/ -[8]:https://www.qbittorrent.org/ -[9]:https://transmissionbt.com/ -[10]:https://www.dropbox.com/ -[11]:https://www.google.com/drive/ -[12]:https://www.fossmint.com/best-google-drive-clients-for-linux/ -[13]:https://mega.nz/ -[14]:https://mega.nz/sync!linux -[15]:https://www.vim.org/ -[16]:https://www.gnu.org/s/emacs/ -[17]:https://www.nano-editor.org/ -[18]:https://aria2.github.io/ -[19]:http://ugetdm.com/ 
-[20]:http://xdman.sourceforge.net/ -[21]:https://www.thunderbird.net/ -[22]:https://github.com/GNOME/geary -[23]:https://github.com/GNOME/evolution -[24]:https://www.gnucash.org/ -[25]:https://kmymoney.org/ -[26]:https://www.eclipse.org/ide/ -[27]:https://www.tecmint.com/install-eclipse-oxygen-ide-in-ubuntu-debian/ -[28]:https://netbeans.org/ -[29]:https://www.tecmint.com/install-netbeans-ide-in-ubuntu-debian-linux-mint/ -[30]:http://brackets.io/ -[31]:https://ide.atom.io/ -[32]:http://lighttable.com/ -[33]:https://code.visualstudio.com/ -[34]:https://code.visualstudio.com/download -[35]:https://www.pidgin.im/ -[36]:https://www.skype.com/ -[37]:https://wiki.gnome.org/Apps/Empathy -[38]:https://www.clamav.net/ -[39]:https://dave-theunsub.github.io/clamtk/ -[40]:https://github.com/linuxmint/cinnamon-desktop -[41]:https://mate-desktop.org/ -[42]:https://www.gnome.org/ -[43]:https://www.kde.org/plasma-desktop -[44]:https://github.com/nzjrs/gnome-tweak-tool -[45]:https://github.com/oguzhaninan/Stacer -[46]:https://www.bleachbit.org/ -[47]:https://www.bleachbit.org/download -[48]:https://github.com/GNOME/gnome-terminal -[49]:https://konsole.kde.org/ -[50]:https://gnometerminator.blogspot.com/p/introduction.html -[51]:http://guake-project.org/ -[52]:https://ardour.org/ -[53]:https://www.audacityteam.org/ -[54]:https://www.gimp.org/ -[55]:https://krita.org/en/ -[56]:https://www.lwks.com/ -[57]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206 -[58]:https://www.openshot.org/ -[59]:http://www.pitivi.org/ -[60]:https://wiki.gnome.org/Apps/Rhythmbox -[61]:https://gnumdk.github.io/lollypop-web/ -[62]:https://amarok.kde.org/en -[63]:https://www.clementine-player.org/ -[64]:https://cmus.github.io/ -[65]:https://www.calligra.org/tour/calligra-suite/ -[66]:https://www.libreoffice.org/ -[67]:https://www.wps.com/ -[68]:http://wps-community.org/downloads -[69]:http://shutter-project.org/ -[70]:https://launchpad.net/kazam 
-[71]:https://gitlab.gnome.org/GNOME/gnome-screenshot -[72]:http://www.maartenbaert.be/simplescreenrecorder/ -[73]:http://recordmydesktop.sourceforge.net/about.php -[74]:https://atom.io/ -[75]:https://www.sublimetext.com/ -[76]:https://www.geany.org/ -[77]:https://wiki.gnome.org/Apps/Gedit -[78]:https://everdo.net/ -[79]:https://www.fossmint.com/evernote-alternatives-for-linux/ -[80]:https://everdo.net/linux/ -[81]:https://taskwarrior.org/ -[82]:http://banshee.fm/ -[83]:https://www.videolan.org/ -[84]:https://kodi.tv/ -[85]:https://www.smplayer.info/ -[86]:https://www.virtualbox.org/wiki/VirtualBox -[87]:https://www.vmware.com/ -[88]:https://www.tecmint.com/install-vmware-workstation-in-linux/ -[89]:https://www.google.com/chrome/ -[90]:https://www.mozilla.org/en-US/firefox/ -[91]:https://vivaldi.com/ diff --git a/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md b/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md deleted file mode 100644 index 3144efd4ee..0000000000 --- a/sources/tech/20180724 Building a network attached storage device with a Raspberry Pi.md +++ /dev/null @@ -1,284 +0,0 @@ -Building a network attached storage device with a Raspberry Pi -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl) - -In this three-part series, I'll explain how to set up a simple, useful NAS (network attached storage) system. I use this kind of setup to store my files on a central system, creating incremental backups automatically every night. To mount the disk on devices that are located in the same network, NFS is installed. To access files offline and share them with friends, I use [Nextcloud][1]. - -This article will cover the basic setup of software and hardware to mount the data disk on a remote device. In the second article, I will discuss a backup strategy and set up a cron job to create daily backups. 
In the third and last article, we will install Nextcloud, a tool for easy file access to devices synced offline as well as online using a web interface. It supports multiple users and public file-sharing so you can share pictures with friends, for example, by sending a password-protected link. - -The target architecture of our system looks like this: -![](https://opensource.com/sites/default/files/uploads/nas_part1.png) - -### Hardware - -Let's get started with the hardware you need. You might come up with a different shopping list, so consider this one an example. - -The computing power is delivered by a [Raspberry Pi 3][2], which comes with a quad-core CPU, a gigabyte of RAM, and (somewhat) fast ethernet. Data will be stored on two USB hard drives (I use 1-TB disks); one is used for the everyday traffic, the other is used to store backups. Be sure to use either active USB hard drives or a USB hub with an additional power supply, as the Raspberry Pi will not be able to power two USB drives. - -### Software - -The operating system with the highest visibility in the community is [Raspbian][3] , which is excellent for custom projects. There are plenty of [guides][4] that explain how to install Raspbian on a Raspberry Pi, so I won't go into details here. The latest official supported version at the time of this writing is [Raspbian Stretch][5] , which worked fine for me. - -At this point, I will assume you have configured your basic Raspbian and are able to connect to the Raspberry Pi by `ssh`. - -### Prepare the USB drives - -To achieve good performance reading from and writing to the USB hard drives, I recommend formatting them with ext4. To do so, you must first find out which disks are attached to the Raspberry Pi. You can find the disk devices in `/dev/sd/`. Using the command `fdisk -l`, you can find out which two USB drives you just attached. Please note that all data on the USB drives will be lost as soon as you follow these steps. 
-``` -pi@raspberrypi:~ $ sudo fdisk -l - - - -<...> - - - -Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors - -Units: sectors of 1 * 512 = 512 bytes - -Sector size (logical/physical): 512 bytes / 512 bytes - -I/O size (minimum/optimal): 512 bytes / 512 bytes - -Disklabel type: dos - -Disk identifier: 0xe8900690 - - - -Device     Boot Start        End    Sectors   Size Id Type - -/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux - - - - - -Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors - -Units: sectors of 1 * 512 = 512 bytes - -Sector size (logical/physical): 512 bytes / 512 bytes - -I/O size (minimum/optimal): 512 bytes / 512 bytes - -Disklabel type: dos - -Disk identifier: 0x6aa4f598 - - - -Device     Boot Start        End    Sectors   Size Id Type - -/dev/sdb1  *     2048 1953521663 1953519616 931.5G  83 Linux - -``` - -As those devices are the only 1TB disks attached to the Raspberry Pi, we can easily see that `/dev/sda` and `/dev/sdb` are the two USB drives. The partition table at the end of each disk shows how it should look after the following steps, which create the partition table and format the disks. To do this, repeat the following steps for each of the two devices by replacing `sda` with `sdb` the second time (assuming your devices are also listed as `/dev/sda` and `/dev/sdb` in `fdisk`). - -First, delete the partition table of the disk and create a new one containing only one partition. In `fdisk`, you can use interactive one-letter commands to tell the program what to do. Simply insert them after the prompt `Command (m for help):` as follows (you can also use the `m` command anytime to get more information): -``` -pi@raspberrypi:~ $ sudo fdisk /dev/sda - - - -Welcome to fdisk (util-linux 2.29.2). - -Changes will remain in memory only, until you decide to write them. - -Be careful before using the write command. 
- - - - - -Command (m for help): o - -Created a new DOS disklabel with disk identifier 0x9c310964. - - - -Command (m for help): n - -Partition type - -   p   primary (0 primary, 0 extended, 4 free) - -   e   extended (container for logical partitions) - -Select (default p): p - -Partition number (1-4, default 1): - -First sector (2048-1953525167, default 2048): - -Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167): - - - -Created a new partition 1 of type 'Linux' and of size 931.5 GiB. - - - -Command (m for help): p - - - -Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors - -Units: sectors of 1 * 512 = 512 bytes - -Sector size (logical/physical): 512 bytes / 512 bytes - -I/O size (minimum/optimal): 512 bytes / 512 bytes - -Disklabel type: dos - -Disk identifier: 0x9c310964 - - - -Device     Boot Start        End    Sectors   Size Id Type - -/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux - - - -Command (m for help): w - -The partition table has been altered. - -Syncing disks. - -``` - -Now we will format the newly created partition `/dev/sda1` using the ext4 filesystem: -``` -pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1 - -mke2fs 1.43.4 (31-Jan-2017) - -Discarding device blocks: done - - - -<...> - - - -Allocating group tables: done - -Writing inode tables: done - -Creating journal (1024 blocks): done - -Writing superblocks and filesystem accounting information: done - -``` - -After repeating the above steps, let's label the new partitions according to their usage in your system: -``` -pi@raspberrypi:~ $ sudo e2label /dev/sda1 data - -pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup - -``` - -Now let's get those disks mounted to store some data. My experience, based on running this setup for over a year now, is that USB drives are not always available to get mounted when the Raspberry Pi boots up (for example, after a power outage), so I recommend using autofs to mount them when needed. 
-
-First, install autofs and create the mount point for the storage:
-```
-pi@raspberrypi:~ $ sudo apt install autofs
-
-pi@raspberrypi:~ $ sudo mkdir /nas
-
-```
-
-Then mount the devices by adding the following line to `/etc/auto.master`:
-```
-/nas    /etc/auto.usb
-
-```
-
-Create the file `/etc/auto.usb` if it does not exist, with the following content, and restart the autofs service:
-```
-data -fstype=ext4,rw :/dev/disk/by-label/data
-
-backup -fstype=ext4,rw :/dev/disk/by-label/backup
-
-pi@raspberrypi3:~ $ sudo service autofs restart
-
-```
-
-Now you should be able to access the disks at `/nas/data` and `/nas/backup`, respectively. Clearly, the content will not be too thrilling, as you just erased all the data from the disks. Nevertheless, you should be able to verify the devices are mounted by executing the following commands:
-```
-pi@raspberrypi3:~ $ cd /nas/data
-
-pi@raspberrypi3:/nas/data $ cd /nas/backup
-
-pi@raspberrypi3:/nas/backup $ mount
-
-<...>
-
-/etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect)
-
-<...>
-
-/dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered)
-
-/dev/sdb1 on /nas/backup type ext4 (rw,relatime,data=ordered)
-
-```
-
-First move into the directories to make sure autofs mounts the devices. Autofs tracks access to the filesystems and mounts the needed devices on the go. Then the `mount` command shows that the two devices are actually mounted where we wanted them.
-
-Setting up autofs is a bit error-prone, so do not get frustrated if mounting doesn't work on the first try. Give it another chance, search for more detailed resources (there is plenty of documentation online), or leave a comment.
-
-### Mount network storage
-
-Now that you have set up the basic network storage, we want it to be mounted on a remote Linux machine. We will use the network file system (NFS) for this.
First, install the NFS server on the Raspberry Pi:
-```
-pi@raspberrypi:~ $ sudo apt install nfs-kernel-server
-
-```
-
-Next we need to tell the NFS server to expose the `/nas/data` directory, which will be the only device accessible from outside the Raspberry Pi (the other one will be used for backups only). To export the directory, edit the file `/etc/exports` and add the following line to allow all devices with access to the NAS to mount your storage:
-```
-/nas/data *(rw,sync,no_subtree_check)
-
-```
-
-For more information about restricting the mount to single devices and so on, refer to `man exports`. In the configuration above, anyone will be able to mount your data as long as they have access to the ports needed by NFS: `111` and `2049`. I use the configuration above and only allow access to my home network on ports 22 and 443 using the router's firewall. That way, only devices in the home network can reach the NFS server.
-
-To mount the storage on a Linux computer, run the commands:
-```
-you@desktop:~ $ sudo mkdir /nas/data
-
-you@desktop:~ $ sudo mount -t nfs <raspberry-pi-IP-or-hostname>:/nas/data /nas/data
-
-```
-
-Again, I recommend using autofs to mount this network device. For extra help, check out [How to use autofs to mount NFS shares][6].
-
-Now you are able to access files stored on your own RaspberryPi-powered NAS from remote devices using the NFS mount. In the next part of this series, I will cover how to automatically back up your data to the second hard drive using `rsync`. To save space on the device while still doing daily backups, you will learn how to create incremental backups with `rsync`.
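As a side note, the client-side NFS mount described above can also be handled by autofs, mirroring the on-demand setup used for the USB drives on the server. A hypothetical pair of map entries might look like this (the hostname `raspberrypi` and the `soft` option are assumptions; adjust them to your own setup):

```
# /etc/auto.master on the client machine: mount NFS shares on demand under /nas
/nas    /etc/auto.nfs

# /etc/auto.nfs: map the key "data" to the export on the Raspberry Pi.
# "raspberrypi" is a placeholder; use your server's hostname or IP.
data    -fstype=nfs,rw,soft    raspberrypi:/nas/data
```

With this in place, simply entering `/nas/data` on the client triggers the NFS mount, just as it does for the local disks on the server.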
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi - -作者:[Manuel Dewald][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/ntlx -[1]:https://nextcloud.com/ -[2]:https://www.raspberrypi.org/products/raspberry-pi-3-model-b/ -[3]:https://www.raspbian.org/ -[4]:https://www.raspberrypi.org/documentation/installation/installing-images/ -[5]:https://www.raspberrypi.org/blog/raspbian-stretch/ -[6]:https://opensource.com/article/18/6/using-autofs-mount-nfs-shares diff --git a/sources/tech/20180727 How to analyze your system with perf and Python.md b/sources/tech/20180727 How to analyze your system with perf and Python.md index ccc66b04a7..c1be98cc0e 100644 --- a/sources/tech/20180727 How to analyze your system with perf and Python.md +++ b/sources/tech/20180727 How to analyze your system with perf and Python.md @@ -1,5 +1,3 @@ -pinewall translating - How to analyze your system with perf and Python ====== diff --git a/sources/tech/20180803 5 Essential Tools for Linux Development.md b/sources/tech/20180803 5 Essential Tools for Linux Development.md deleted file mode 100644 index 006372ca82..0000000000 --- a/sources/tech/20180803 5 Essential Tools for Linux Development.md +++ /dev/null @@ -1,148 +0,0 @@ -5 Essential Tools for Linux Development -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev-tools.png?itok=kkDNylRg) - -Linux has become a mainstay for many sectors of work, play, and personal life. We depend upon it. With Linux, technology is expanding and evolving faster than anyone could have imagined. That means Linux development is also happening at an exponential rate. 
Because of this, more and more developers will be hopping on board the open source and Linux dev train in the immediate, near, and far-off future. For that, people will need tools. Fortunately, there are a ton of dev tools available for Linux; so many, in fact, that it can be a bit intimidating to figure out precisely what you need (especially if you’re coming from another platform). - -To make that easier, I thought I’d help narrow down the selection a bit for you. But instead of saying you should use Tool X and Tool Y, I’m going to narrow it down to five categories and then offer up an example for each. Just remember, for most categories, there are several available options. And, with that said, let’s get started. - -### Containers - -Let’s face it, in this day and age you need to be working with containers. Not only are they incredibly easy to deploy, they make for great development environments. If you regularly develop for a specific platform, why not do so by creating a container image that includes all of the tools you need to make the process quick and easy. With that image available, you can then develop and roll out numerous instances of whatever software or service you need. - -Using containers for development couldn’t be easier than it is with [Docker][1]. The advantages of using containers (and Docker) are: - - * Consistent development environment. - - * You can trust it will “just work” upon deployment. - - * Makes it easy to build across platforms. - - * Docker images available for all types of development environments and languages. - - * Deploying single containers or container clusters is simple. - - - - -Thanks to [Docker Hub][2], you’ll find images for nearly any platform, development environment, server, service… just about anything you need. Using images from Docker Hub means you can skip over the creation of the development environment and go straight to work on developing your app, server, API, or service. 
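To make the idea of a development image concrete, here is a hypothetical Dockerfile for a small C development container. Everything in it (the base image, the package list, the working directory) is an example, not a prescription; swap in whatever your project actually needs:

```
# Hypothetical development image: a Debian base plus a C toolchain.
# Build it once, and every developer gets the same environment.
FROM debian:stable-slim

# Install the build tools the project needs
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential gdb make git && \
    rm -rf /var/lib/apt/lists/*

# Work out of /src, which is bind-mounted from the host at run time
WORKDIR /src
CMD ["bash"]
```

You would build it with something like `docker build -t my-dev-env .` and drop into a shell with `docker run -it -v "$PWD":/src my-dev-env`, so your source tree lives on the host while the toolchain lives in the container.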
-
-Docker is easily installable on almost every Linux platform. For example, to install Docker on Ubuntu, you only have to open a terminal window and issue the command:
-```
-sudo apt-get install docker.io
-
-```
-
-With Docker installed, you’re ready to start pulling down specific images, developing, and deploying (Figure 1).
-
-![Docker images][4]
-
-Figure 1: Docker images ready to deploy.
-
-[Used with permission][5]
-
-### Version control system
-
-If you’re working on a large project or with a team on a project, you’re going to need a version control system. Why? Because you need to keep track of your code, where your code is, and have an easy means of making commits and merging code from others. Without such a tool, your projects would be nearly impossible to manage. For Linux users, you cannot beat the ease of use and widespread deployment of [Git][6] and [GitHub][7]. If you’re new to their worlds, Git is the version control system that you install on your local machine and GitHub is the remote repository you use to upload (and then manage) your projects. Git can be installed on most Linux distributions. For example, on a Debian-based system, the install is as simple as:
-```
-sudo apt-get install git
-
-```
-
-Once installed, you are ready to start your journey with version control (Figure 2).
-
-![Git installed][9]
-
-Figure 2: Git is installed and available for many important tasks.
-
-[Used with permission][5]
-
-GitHub requires you to create an account. You can use it for free for non-commercial projects, or you can pay for commercial project hosting (for more information, check out the price matrix [here][10]).
-
-### Text editor
-
-Let’s face it, developing on Linux would be a bit of a challenge without a text editor. Of course, what counts as a text editor varies, depending upon who you ask. One person might say vim, emacs, or nano, whereas another might go full-on GUI with their editor.
But since we’re talking development, we need a tool that can meet the needs of the modern-day developer. And before I mention a couple of text editors, I will say this: Yes, I know that vim is a serious workhorse for serious developers and, if you know it well, vim will meet and exceed all of your needs. However, getting up to speed enough that it won’t be in your way can be a bit of a hurdle for some developers (especially those new to Linux). Considering my goal is to always help win over new users (and not just preach to an already devout choir), I’m taking the GUI route here.
-
-As far as text editors are concerned, you cannot go wrong with the likes of [Bluefish][11]. Bluefish can be found in most standard repositories and features project support, multi-threaded support for remote files, search and replace, recursive opening of files, a snippets sidebar, integration with make, lint, weblint, and xmllint, unlimited undo/redo, an in-line spell checker, auto-recovery, full-screen editing, syntax highlighting (Figure 3), support for numerous languages, and much more.
-
-![Bluefish][13]
-
-Figure 3: Bluefish running on Ubuntu Linux 18.04.
-
-[Used with permission][5]
-
-### IDE
-
-An Integrated Development Environment (IDE) is a piece of software that includes a comprehensive set of tools that enable a one-stop-shop environment for developing. IDEs not only enable you to code your software, but to document and build it as well. There are a number of IDEs for Linux, but one in particular is not only included in the standard repositories, it is also very user-friendly and powerful. That tool in question is [Geany][14]. Geany features syntax highlighting, code folding, symbol name auto-completion, construct completion/snippets, auto-closing of XML and HTML tags, call tips, many supported filetypes, symbol lists, code navigation, a build system to compile and execute your code, simple project management, and a built-in plugin system.
-
-Geany can be easily installed on your system.
For example, on a Debian-based distribution, issue the command:
-```
-sudo apt-get install geany
-
-```
-
-Once installed, you’re ready to start using this very powerful tool that includes a user-friendly interface (Figure 4) with next to no learning curve.
-
-![Geany][16]
-
-Figure 4: Geany is ready to serve as your IDE.
-
-[Used with permission][5]
-
-### Diff tool
-
-There will be times when you have to compare two files to find where they differ. This could be two different copies of what was the same file (where only one compiles and the other doesn’t). When that happens, you don’t want to have to do it manually. Instead, you want to employ the power of a tool like [Meld][17]. Meld is a visual diff and merge tool targeted at developers. With Meld you can make short work of discovering the differences between two files. Although you can use a command line diff tool, when efficiency is the name of the game, you can’t beat Meld.
-
-Meld allows you to open a comparison between two files and it will highlight the differences between them. Meld also allows you to merge comparisons either from the right or the left (as the files are opened side by side - Figure 5).
-
-![Comparing two files][19]
-
-Figure 5: Comparing two files with a simple difference.
-
-[Used with permission][5]
-
-Meld can be installed from most standard repositories. On a Debian-based system, the installation command is:
-```
-sudo apt-get install meld
-
-```
-
-### Working with efficiency
-
-These five tools not only enable you to get your work done, they help to make it quite a bit more efficient. Although there are a ton of developer tools available for Linux, you’re going to want to make sure you have one for each of the above categories (maybe even starting with the suggestions I’ve made).
-
-Learn more about Linux through the free ["Introduction to Linux"][20] course from The Linux Foundation and edX.
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/8/5-essential-tools-linux-development - -作者:[Jack Wallen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/jlwallen -[1]:https://www.docker.com/ -[2]:https://hub.docker.com/ -[3]:/files/images/5devtools1jpg -[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_1.jpg?itok=V1Bsbkg9 (Docker images) -[5]:/licenses/category/used-permission -[6]:https://git-scm.com/ -[7]:https://github.com/ -[8]:/files/images/5devtools2jpg -[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_2.jpg?itok=YJjhe4O6 (Git installed) -[10]:https://github.com/pricing -[11]:http://bluefish.openoffice.nl/index.html -[12]:/files/images/5devtools3jpg -[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_3.jpg?itok=66A7Svme (Bluefish) -[14]:https://www.geany.org/ -[15]:/files/images/5devtools4jpg -[16]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_4.jpg?itok=jRcA-0ue (Geany) -[17]:http://meldmerge.org/ -[18]:/files/images/5devtools5jpg -[19]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/5devtools_5.jpg?itok=eLkfM9oZ (Comparing two files) -[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md b/sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md deleted file mode 100644 index 3c0b63d63b..0000000000 --- a/sources/tech/20180815 How to Create M3U Playlists in Linux [Quick Tip].md +++ /dev/null @@ -1,84 +0,0 @@ -translating by lujun9972 -How to Create M3U Playlists in 
Linux [Quick Tip]
-======
-**Brief: A quick tip on how to create M3U playlists in the Linux terminal from unordered files, to play them in sequence.**
-
-![Create M3U playlists in Linux Terminal][1]
-
-I am a fan of foreign TV series and it’s not always easy to get them on DVD or on streaming services like [Netflix][2]. Thankfully, you can find some of them on YouTube and [download them from YouTube][3].
-
-Now there comes a problem. Your files might not be sorted in a particular order. In GNU/Linux, files are not naturally sorted in numerical order, so I had to make a .m3u playlist so [MPV video player][4] would play the videos in sequence and not out of sequence.
-
-Also, sometimes the numbers are in the middle or at the end, like ‘My Web Series S01E01.mkv’, for example. The episode information here is in the middle of the filename, the ‘S01E01’, which tells us humans which is the first episode and which needs to come next.
-
-So what I did was to generate an m3u playlist in the video directory and tell MPV to play the .m3u playlist, letting it take care of playing the files in sequence.
-
-### What is an M3U file?
-
-[M3U][5] is basically a text file that contains filenames in a specific order. When a player like MPV or VLC opens an M3U file, it tries to play the specified files in the given sequence.
-
-### Creating an M3U to play audio/video files in a sequence
-
-In my case, I used the following command:
-```
-$/home/shirish/Videos/web-series-video/$ ls -1v |grep .mkv > /tmp/1.m3u && mv /tmp/1.m3u .
-
-```
-
-Let’s break it down a bit and see what each bit means:
-
-**ls -1v** = This is the plain `ls`, listing the entries in the directory. The `-1` means list one file per line, while `-v` does a natural sort of (version) numbers within the text.
-
-**| grep .mkv** = It’s basically telling `ls` to look for files which end in .mkv. It could be .mp4 or any other media file format that you want.
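If you want to convince yourself of what `-v` changes, here is a small self-contained sketch you can run anywhere. All the file names below are dummies created just for the demo:

```shell
#!/bin/sh
# Demonstrate natural (version) sort when building an .m3u playlist.
# The episode files here are empty dummies created only for this demo.
set -e
demo=$(mktemp -d)
cd "$demo"

# Episode 10 sorts BEFORE episode 2 in plain string order
touch "Episode 1.mkv" "Episode 2.mkv" "Episode 10.mkv"

# Plain listing: string order puts episode 10 between 1 and 2
ls -1 | grep .mkv

# Natural sort: episodes come out as 1, 2, 10, ready for a playlist
ls -1v | grep .mkv > playlist.m3u
cat playlist.m3u

cd /
rm -rf "$demo"
```

The first listing interleaves episode 10 with episodes 1 and 2, while the `-v` listing orders them the way a human (and MPV) expects.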
-
-It’s usually a good idea to do a dry run by running the command on the console:
-```
-ls -1v |grep .mkv
-My Web Series S01E01 [Episode 1 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E02 [Episode 2 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E03 [Episode 3 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E04 [Episode 4 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E05 [Episode 5 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E06 [Episode 6 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E07 [Episode 7 Name] Multi 480p WEBRip x264 - xRG.mkv
-My Web Series S01E08 [Episode 8 Name] Multi 480p WEBRip x264 - xRG.mkv
-
-```
-
-This tells me that what I’m trying to do is correct. Now I just have to make sure the output is saved in the form of a .m3u playlist, which is the next part.
-```
-ls -1v |grep .mkv > /tmp/web_playlist.m3u && mv /tmp/web_playlist.m3u .
-
-```
-
-This generates the .m3u playlist in the current directory. The .m3u playlist is nothing but a .txt file with the same contents as above, with the .m3u extension. You can edit it manually as well and add the exact filenames in the order you desire.
-
-After that you just have to do something like this:
-```
-mpv web_playlist.m3u
-
-```
-
-The great thing about MPV and playlists in general is that you don’t have to binge-watch. You can watch as much as you want in one sitting and see the rest in the next session or the session after that.
-
-I hope to do articles featuring MPV, as well as how to make mkv files by embedding subtitles in a media file, but that’s in the future.
-
-Note: It’s FOSS doesn’t encourage piracy.
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/create-m3u-playlist-linux/ - -作者:[Shirsh][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/shirish/ -[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/Create-M3U-Playlists.jpeg -[2]:https://itsfoss.com/netflix-open-source-ai/ -[3]:https://itsfoss.com/download-youtube-linux/ -[4]:https://itsfoss.com/mpv-video-player/ -[5]:https://en.wikipedia.org/wiki/M3U diff --git a/sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md b/sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md index d3c729f1d0..d671a35457 100644 --- a/sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md +++ b/sources/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md @@ -1,3 +1,4 @@ +FSSlc Translating How To Lock The Keyboard And Mouse, But Not The Screen In Linux ====== diff --git a/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md b/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md deleted file mode 100644 index 1fc4677491..0000000000 --- a/sources/tech/20180821 A checklist for submitting your first Linux kernel patch.md +++ /dev/null @@ -1,170 +0,0 @@ -A checklist for submitting your first Linux kernel patch -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22) - -One of the biggest—and the fastest moving—open source projects, the Linux kernel, is composed of about 53,600 files and nearly 20-million lines of code. 
With more than 15,600 programmers contributing to the project worldwide, the Linux kernel follows a maintainer model for collaboration.
-
-![](https://opensource.com/sites/default/files/karnik_figure1.png)
-
-In this article, I'll provide a quick checklist of steps involved with making your first kernel contribution, and look at what you should know before submitting a patch. For a more in-depth look at the submission process for contributing your first patch, read the [KernelNewbies First Kernel Patch tutorial][1].
-
-### Contributing to the kernel
-
-#### Step 1: Prepare your system.
-
-Steps in this article assume you have the following tools on your system:
-
-+ Text editor
-+ Email client
-+ Version control system (e.g., git)
-
-#### Step 2: Download the Linux kernel code repository:
-```
-git clone -b staging-testing git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
-
-```
-
-Copy your current config:
-```
-cp /boot/config-`uname -r`* .config
-
-```
-
-#### Step 3: Build/install your kernel.
-```
-make -jX
-
-sudo make modules_install install
-
-```
-
-#### Step 4: Make a branch and switch to it.
-```
-git checkout -b first-patch
-
-```
-
-#### Step 5: Update your kernel to point to the latest code base.
-```
-git fetch origin
-
-git rebase origin/staging-testing
-
-```
-
-#### Step 6: Make a change to the code base.
-
-Recompile using the `make` command to ensure that your change does not produce errors.
-
-#### Step 7: Commit your changes and create a patch.
-```
-git add <file>
-
-git commit -s -v
-
-git format-patch -o /tmp/ HEAD^
-
-```
-
-![](https://opensource.com/sites/default/files/karnik_figure2.png)
-
-The subject consists of the path to the file name separated by colons, followed by what the patch does in the imperative tense. After a blank line comes the description of the patch and the mandatory Signed-off-by tag and, lastly, a diff of your patch.
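Before touching the kernel tree, the branch/commit/format-patch steps above can be rehearsed end to end in a throwaway repository. This is only a sketch; the file name, commit messages, and identity are invented for the demo:

```shell
#!/bin/sh
# Rehearse the commit / format-patch workflow in a scratch repository.
# File names, messages, and the user identity are made up for this demo.
set -e
repo=$(mktemp -d)
cd "$repo"

git init -q .
git config user.name  "Demo User"
git config user.email "demo@example.com"

# Baseline commit on the default branch
echo "hello" > driver.c
git add driver.c
git commit -q -m "demo: add driver.c"

# Step 4: make a branch and switch to it
git checkout -q -b first-patch

# Steps 6-7: change the code, then commit with a Signed-off-by line (-s)
echo "world" >> driver.c
git add driver.c
git commit -q -s -m "demo: extend driver.c"

# Generate the patch file; git prints the path of the file it wrote
patch_file=$(git format-patch -o /tmp/ HEAD^)
echo "$patch_file"
```

The generated file is what you would later mail to the maintainers; opening it shows the subject, description, Signed-off-by line, and diff in exactly the layout described above.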
- -Here is another example of a simple patch: - -![](https://opensource.com/sites/default/files/karnik_figure3.png) - -Next, send the patch [using email from the command line][2] (in this case, Mutt): `` -``` -mutt -H /tmp/0001- - -``` - -To know the list of maintainers to whom to send the patch, use the [get_maintainer.pl script][11]. - - -### What to know before submitting your first patch - - * [Greg Kroah-Hartman][3]'s [staging tree][4] is a good place to submit your [first patch][1] as he accepts easy patches from new contributors. When you get familiar with the patch-sending process, you could send subsystem-specific patches with increased complexity. - - * You also could start with correcting coding style issues in the code. To learn more, read the [Linux kernel coding style documentation][5]. - - * The script [checkpatch.pl][6] detects coding style errors for you. For example, run: - ``` - perl scripts/checkpatch.pl -f drivers/staging/android/* | less - - ``` - - * You could complete TODOs left incomplete by developers: - ``` - find drivers/staging -name TODO - ``` - - * [Coccinelle][7] is a helpful tool for pattern matching. - - * Read the [kernel mailing archives][8]. - - * Go through the [linux.git log][9] to see commits by previous authors for inspiration. - - * Note: Do not top-post to communicate with the reviewer of your patch! Here's an example: - -**Wrong way:** - -Chris, -_Yes let’s schedule the meeting tomorrow, on the second floor._ -> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote: -> Hey John, I had some questions: -> 1\. Do you want to schedule the meeting tomorrow? -> 2\. On which floor in the office? -> 3\. What time is suitable to you? - -(Notice that the last question was unintentionally left unanswered in the reply.) - -**Correct way:** - -Chris, -See my answers below... -> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote: -> Hey John, I had some questions: -> 1\. Do you want to schedule the meeting tomorrow? -_Yes tomorrow is fine._ -> 2\. 
On which floor in the office? -_Let's keep it on the second floor._ -> 3\. What time is suitable to you? -_09:00 am would be alright._ - -(All questions were answered, and this way saves reading time.) - - * The [Eudyptula challenge][10] is a great way to learn kernel basics. - - -To learn more, read the [KernelNewbies First Kernel Patch tutorial][1]. After that, if you still have any questions, ask on the [kernelnewbies mailing list][12] or in the [#kernelnewbies IRC channel][13]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/first-linux-kernel-patch - -作者:[Sayli Karnik][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/sayli -[1]:https://kernelnewbies.org/FirstKernelPatch -[2]:https://opensource.com/life/15/8/top-4-open-source-command-line-email-clients -[3]:https://twitter.com/gregkh -[4]:https://www.kernel.org/doc/html/v4.15/process/2.Process.html -[5]:https://www.kernel.org/doc/html/v4.10/process/coding-style.html -[6]:https://github.com/torvalds/linux/blob/master/scripts/checkpatch.pl -[7]:http://coccinelle.lip6.fr/ -[8]:linux-kernel@vger.kernel.org -[9]:https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/ -[10]:http://eudyptula-challenge.org/ -[11]:https://github.com/torvalds/linux/blob/master/scripts/get_maintainer.pl -[12]:https://kernelnewbies.org/MailingList -[13]:https://kernelnewbies.org/IRC diff --git a/sources/tech/20180823 CLI- improved.md b/sources/tech/20180823 CLI- improved.md index d06bb1b2aa..52edaa28c8 100644 --- a/sources/tech/20180823 CLI- improved.md +++ b/sources/tech/20180823 CLI- improved.md @@ -1,3 +1,5 @@ +Translating by DavidChenLiang + CLI: improved ====== I'm not sure many web developers can get away without visiting the command 
line. As for me, I've been using the command line since 1997, first at university when I felt both super cool l33t-hacker and simultaneously utterly out of my depth.
diff --git a/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md b/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md
deleted file mode 100644
index aa4ec0a655..0000000000
--- a/sources/tech/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md
+++ /dev/null
@@ -1,131 +0,0 @@
-How To Easily And Safely Manage Cron Jobs In Linux
-======
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/08/Crontab-UI-720x340.jpg)
-
-When it comes to scheduling tasks in Linux, which utility comes to your mind first? Yeah, you guessed it right. **Cron!** The cron utility helps you schedule commands/tasks at specific times in Unix-like operating systems. We already published a [**beginner’s guide to Cron jobs**][1]. I have a few years of experience in Linux, so setting up cron jobs is no big deal for me. But it is not a piece of cake for newbies. Newbies may unknowingly make small mistakes while editing the plain text crontab and bring down all their cron jobs. Just in case you think you might mess up your cron jobs, there is a good alternative way. Say hello to **Crontab UI**, a web-based tool to easily and safely manage cron jobs in Unix-like operating systems.
-
-You don’t need to manually edit the crontab file to create, delete and manage cron jobs. Everything can be done via a web browser with a couple of mouse clicks. Crontab UI allows you to easily create, edit, pause, delete and back up cron jobs, and even import, export and deploy jobs on other machines without much hassle. Error logs, mailing and hooks are also supported. It is free, open source and written using NodeJS.
-
-### Installing Crontab UI
-
-Installing Crontab UI is just a one-liner command. Make sure you have installed NPM. If you haven’t installed npm yet, refer to the following link.
-
-Next, run the following command to install Crontab UI.
-```
-$ npm install -g crontab-ui
-
-```
-
-It’s that simple. Let us go ahead and see how to manage cron jobs using Crontab UI.
-
-### Easily And Safely Manage Cron Jobs In Linux
-
-To launch Crontab UI, simply run:
-```
-$ crontab-ui
-
-```
-
-You will see the following output:
-```
-Node version: 10.8.0
-Crontab UI is running at http://127.0.0.1:8000
-
-```
-
-Now, open your web browser and navigate to ****. Make sure port number 8000 is allowed in your firewall/router.
-
-Please note that you can only access the Crontab UI web dashboard from within the local system itself.
-
-If you want to run Crontab UI with your system’s IP and a custom port (so you can access it from any remote system in the network), use the following command instead:
-```
-$ HOST=0.0.0.0 PORT=9000 crontab-ui
-Node version: 10.8.0
-Crontab UI is running at http://0.0.0.0:9000
-
-```
-
-Now, Crontab UI can be accessed from any system in the network using the URL – **http:// :9000**.
-
-This is how the Crontab UI dashboard looks.
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard.png)
-
-As you can see in the above screenshot, the Crontab UI dashboard is very simple. All options are self-explanatory.
-
-To exit Crontab UI, press **CTRL+C**.
-
-**Create, edit, run, stop, delete a cron job**
-
-To create a new cron job, click the “New” button. Enter your cron job details and click Save.
-
- 1. Name the cron job. This is optional.
- 2. Enter the full command you want to run.
- 3. Choose the schedule. You can either pick a quick schedule (such as Startup, Hourly, Daily, Weekly, Monthly, Yearly) or set the exact time to run the command. After you choose the schedule, the syntax of the cron job will be shown in the **Jobs** field.
- 4. Choose whether you want to enable error logging for that particular job.
-
-
-
-Here is my sample cron job.
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/08/create-new-cron-job.png)
-
-As you can see, I have set up a cron job to clear the pacman cache every month.
-
-Similarly, you can create as many jobs as you want. You will see all cron jobs in the dashboard.
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard-1.png)
-
-If you want to change any parameter of a cron job, just click the **Edit** button below the job and modify the parameters as you wish. To run a job immediately, click the button that says **Run**. To stop a job, click the **Stop** button. You can view the log details of any job by clicking the **Log** button. If a job is no longer required, simply press the **Delete** button.
-
-**Backup cron jobs**
-
-To back up all cron jobs, press the **Backup** button on the main dashboard and choose OK to confirm the backup.
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/08/backup-cron-jobs.png)
-
-You can use this backup in case you mess up the contents of the crontab file.
-
-**Import/Export cron jobs to other systems**
-
-Another notable feature of Crontab UI is that you can import, export and deploy cron jobs to other systems. If you have multiple systems on your network that require the same cron jobs, just press the **Export** button and choose the location to save the file. The entire contents of the crontab file will be saved in a file named **crontab.db**.
-
-Here are the contents of the crontab.db file.
-```
-$ cat Downloads/crontab.db
-{"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman","schedule":"@monthly","stopped":false,"timestamp":"Thu Aug 23 2018 10:34:19 GMT+0000 (Coordinated Universal Time)","logging":"true","mailing":{},"created":1535020459093,"_id":"lcVc1nSdaceqS1ut"}
-
-```
-
-Then you can transfer the entire crontab.db file to another system and import it there. You don’t need to manually create cron jobs on every system.
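Since each line of crontab.db is a self-contained JSON document, you can sanity-check an export with standard shell tools before moving it anywhere. A minimal sketch (it recreates a trimmed-down version of the sample export above, so it is safe to try anywhere):

```shell
# Recreate a trimmed-down sample export (one JSON document per line).
cat > crontab.db <<'EOF'
{"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman","schedule":"@monthly","stopped":false}
EOF

# List the job names stored in the export without importing anything.
grep -o '"name":"[^"]*"' crontab.db | cut -d'"' -f4
# → Remove Pacman Cache
```

After a quick check like this, copy the file to the target machine (for example with scp) and use the dashboard’s Import button there.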
Just create them on one system, then export and import them to every system on the network.
-
-**Get the contents from or save to an existing crontab file**
-
-Chances are you might have already created some cron jobs using the **crontab** command. If so, you can retrieve the contents of the existing crontab file by clicking the **“Get from crontab”** button on the main dashboard.
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/08/get-from-crontab.png)
-
-Similarly, you can save the jobs newly created with the Crontab UI utility to the existing crontab file on your system. To do so, just click the **Save to crontab** option in the dashboard.
-
-See? Managing cron jobs is not that complicated. Any newbie user can easily maintain any number of jobs without much hassle using Crontab UI. Give it a try and let us know what you think about this tool. I am all ears!
-
-And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
-
-Cheers!
-
-
-
--------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/
diff --git a/sources/tech/20180824 What Stable Kernel Should I Use.md b/sources/tech/20180824 What Stable Kernel Should I Use.md
deleted file mode 100644
index bfd64a2ec2..0000000000
--- a/sources/tech/20180824 What Stable Kernel Should I Use.md
+++ /dev/null
@@ -1,139 +0,0 @@
-What Stable Kernel Should I Use?
-======
-I get a lot of questions from people asking me what stable kernel they should be using for their product/device/laptop/server/etc.
Especially given the now-extended length of time that some kernels are being supported by me and others, this isn’t always a very obvious thing to determine. So this post is an attempt to write down my opinions on the matter. Of course, you are free to use whatever kernel version you want, but here’s what I recommend.
-
-As always, the opinions written here are my own, I speak for no one but myself.
-
-### What kernel to pick
-
-Here’s my short list of which kernel you should use, ranked from best to worst options. I’ll go into the details of all of these below, but if you just want the summary of all of this, here it is:
-
-Hierarchy of what kernel to use, from best solution to worst:
-
- * Supported kernel from your favorite Linux distribution
- * Latest stable release
- * Latest LTS release
- * Older LTS release that is still being maintained
-
-
-
-What kernel to never use:
-
- * Unmaintained kernel release
-
-
-
-To give numbers to the above, today, as of August 24, 2018, the front page of kernel.org looks like this:
-
-![][1]
-
-So, based on the above list that would mean that:
-
- * 4.18.5 is the latest stable release
- * 4.14.67 is the latest LTS release
- * 4.9.124, 4.4.152, and 3.16.57 are the older LTS releases that are still being maintained
- * 4.17.19 and 3.18.119 are “End of Life” kernels that have had a release in the past 60 days, and as such stick around on the kernel.org site for those who still might want to use them.
-
-
-
-Quite easy, right?
-
-Ok, now for some justification for all of this:
-
-### Distribution kernels
-
-The best solution for almost all Linux users is to just use the kernel from your favorite Linux distribution. Personally, I prefer the community-based Linux distributions that constantly roll along with the latest updated kernel and are supported by that developer community. Distributions in this category are Fedora, openSUSE, Arch, Gentoo, CoreOS, and others.
-
-All of these distributions use the latest stable upstream kernel release and make sure that any needed bugfixes are applied on a regular basis. That makes them some of the most solid and best kernels you can use when it comes to having the latest fixes ([remember all fixes are security fixes][2]).
-
-There are some community distributions that take a bit longer to move to a new kernel release, but eventually get there and support the kernel they currently have quite well. Those are also great to use, and examples of these are Debian and Ubuntu.
-
-Just because I did not list your favorite distro here does not mean its kernel is not good. Look on the web site for the distro and make sure that the kernel package is constantly updated with the latest security patches, and all should be well.
-
-Lots of people seem to like the old, “traditional” model of a distribution and use RHEL, SLES, CentOS or the “LTS” Ubuntu release. Those distros pick a specific kernel version and then camp out on it for years, if not decades. They do loads of work backporting the latest bugfixes and sometimes new features to these kernels, all in a quixotic quest to keep the version number from ever changing, despite having many thousands of changes on top of that older kernel version. This work is a truly thankless job, and the developers assigned to these tasks do some wonderful work in order to achieve these goals. If you like never seeing your kernel version number change, then use these distributions. They usually cost some money to use, but the support you get from these companies is worth it when something goes wrong.
-
-So again, the best kernel you can use is one that someone else supports and that you can turn to for help. Use that support; usually you are already paying for it (for the enterprise distributions), and those companies know what they are doing.
- -But, if you do not want to trust someone else to manage your kernel for you, or you have hardware that a distribution does not support, then you want to run the Latest stable release: - -### Latest stable release - -This kernel is the latest one from the Linux kernel developer community that they declare as “stable”. About every three months, the community releases a new stable kernel that contains all of the newest hardware support, the latest performance improvements, as well as the latest bugfixes for all parts of the kernel. Over the next 3 months, bugfixes that go into the next kernel release to be made are backported into this stable release, so that any users of this kernel are sure to get them as soon as possible. - -This is usually the kernel that most community distributions use as well, so you can be sure it is tested and has a large audience of users. Also, the kernel community (all 4000+ developers) are willing to help support users of this release, as it is the latest one that they made. - -After 3 months, a new kernel is released and you should move to it to ensure that you stay up to date, as support for this kernel is usually dropped a few weeks after the newer release happens. - -If you have new hardware that is purchased after the last LTS release came out, you almost are guaranteed to have to run this kernel in order to have it supported. So for desktops or new servers, this is usually the recommended kernel to be running. - -### Latest LTS release - -If your hardware relies on a vendors out-of-tree patch in order to make it work properly (like almost all embedded devices these days), then the next best kernel to be using is the latest LTS release. That release gets all of the latest kernel fixes that goes into the stable releases where applicable, and lots of users test and use it. 
-
-Note that no new features and almost no new hardware support are ever added to these kernels, so if you need to use a new device, it is better to use the latest stable release, not this release.
-
-Also, this release is common for users who do not like to worry about “major” upgrades happening every 3 months. So they stick to this release and upgrade every year instead, which is a fine practice to follow.
-
-The downside of using this release is that you do not get the performance improvements that happen in newer kernels, except when you update to the next LTS kernel, potentially a year in the future. That could be significant for some workloads, so be very aware of this.
-
-Also, if you have problems with this kernel release, the first thing that any developer you report the issue to is going to ask is, “does the latest stable release have this problem?” So you will need to be aware that support might not be as easy to get as with the latest stable releases.
-
-Now if you are stuck with a large patchset and can not update to a new LTS kernel once a year, perhaps you want the older LTS releases:
-
-### Older LTS release
-
-These releases have traditionally been supported by the community for 2 years, sometimes longer when a major distribution relies on them (like Debian or SLES). However, in the past year, thanks to a lot of support and investment in testing and infrastructure from Google, Linaro, Linaro member companies, [kernelci.org][3], and others, these kernels are starting to be supported for much longer.
-
-Here are the latest LTS releases and how long they will be supported, as shown at [kernel.org/category/releases.html][4] on August 24, 2018:
-
-![][5]
-
-The reason that Google and other companies want to have these kernels live longer is the crazy (some will say broken) development model of almost all SoC chips these days.
Those devices start their development lifecycle a few years before the chip is released, however that code is never merged upstream, resulting in a brand new chip being released based on a 2 year old kernel. These SoC trees usually have over 2 million lines added to them, making them something that I have started calling “Linux-like” kernels. - -If the LTS releases stop happening after 2 years, then support from the community instantly stops, and no one ends up doing bugfixes for them. This results in millions of very insecure devices floating around in the world, not something that is good for any ecosystem. - -Because of this dependency, these companies now require new devices to constantly update to the latest LTS releases as they happen for their specific release version (i.e. every 4.9.y release that happens). An example of this is the Android kernel requirements for new devices shipping for the “O” and now “P” releases specified the minimum kernel version allowed, and Android security releases might start to require those “.y” releases to happen more frequently on devices. - -I will note that some manufacturers are already doing this today. Sony is one great example of this, updating to the latest 4.4.y release on many of their new phones for their quarterly security release. Another good example is the small company Essential which has been tracking the 4.4.y releases faster than anyone that I know of. - -There is one huge caveat when using a kernel like this. The number of security fixes that get backported are not as great as with the latest LTS release, because the traditional model of the devices that use these older LTS kernels is a much more reduced user model. These kernels are not to be used in any type of “general computing” model where you have untrusted users or virtual machines, as the ability to do some of the recent Spectre-type fixes for older releases is greatly reduced, if present at all in some branches. 
-
-So again, only use older LTS releases in a device that you fully control, or lock down with a very strong security model (like Android enforces using SELinux and application isolation). Never use these releases on a server with untrusted users, programs, or virtual machines.
-
-Also, support from the community for these older LTS releases is greatly reduced even from the normal LTS releases, if available at all. If you use these kernels, you really are on your own, and need to be able to support the kernel yourself, or rely on your SoC vendor to provide that support for you (note that almost none of them do provide that support, so beware…)
-
-### Unmaintained kernel release
-
-Surprisingly, many companies do just grab a random kernel release, slap it into their product and proceed to ship it in hundreds of thousands of units without a second thought. One crazy example of this would be the Lego Mindstorm systems that shipped a random -rc release of a kernel in their device for some unknown reason. A -rc release is a development release that not even the Linux kernel developers feel is ready for everyone to use just yet, let alone millions of users.
-
-You are of course free to do this if you want, but note that you really are on your own here. The community can not support you, as no one is watching all kernel versions for specific issues, so you will have to rely on in-house support for everything that could go wrong. For some companies and systems that could be just fine, but be aware of the “hidden” cost this might cause if you do not plan for it up front.
-
-### Summary
-
-So, here’s a short list of different types of devices, and what I would recommend for their kernels:
-
- * Laptop / Desktop: Latest stable release
- * Server: Latest stable release or latest LTS release
- * Embedded device: Latest LTS release, or an older LTS release if the security model used is very strong and tight.
-
-
-
-And as for me, what do I run on my machines?
My laptops run the latest development kernel (i.e. Linus’s development tree) plus whatever kernel changes I am currently working on and my servers run the latest stable release. So despite being in charge of the LTS releases, I don’t run them myself, except in testing systems. I rely on the development and latest stable releases to ensure that my machines are running the fastest and most secure releases that we know how to create at this point in time. - --------------------------------------------------------------------------------- - -via: http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/ - -作者:[Greg Kroah-Hartman][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://kroah.com -[1]:https://s3.amazonaws.com/kroah.com/images/kernel.org_2018_08_24.png -[2]:http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/ -[3]:https://kernelci.org/ -[4]:https://www.kernel.org/category/releases.html -[5]:https://s3.amazonaws.com/kroah.com/images/kernel.org_releases_2018_08_24.png diff --git a/sources/tech/20180827 4 tips for better tmux sessions.md b/sources/tech/20180827 4 tips for better tmux sessions.md deleted file mode 100644 index b6d6a3e4fe..0000000000 --- a/sources/tech/20180827 4 tips for better tmux sessions.md +++ /dev/null @@ -1,89 +0,0 @@ -translating by lujun9972 -4 tips for better tmux sessions -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/08/tmux-4-tips-816x345.jpg) - -The tmux utility, a terminal multiplexer, lets you treat your terminal as a multi-paned window into your system. You can arrange the configuration, run different processes in each, and generally make better use of your screen. We introduced some readers to this powerful tool [in this earlier article][1]. 
Here are some tips that will help you get more out of tmux if you’re just getting started.
-
-This article assumes your current prefix key is Ctrl+b. If you’ve remapped that prefix, simply substitute your prefix in its place.
-
-### Set your terminal to automatically use tmux
-
-One of the biggest benefits of tmux is being able to disconnect and reconnect to sessions at will. This makes remote login sessions more powerful. Have you ever lost a connection and wished you could get back the work you were doing on the remote system? With tmux this problem is solved.
-
-However, you may sometimes find yourself doing work on a remote system, and realize you didn’t start a session. One way to avoid this is to have tmux start or attach every time you login to a system with an interactive shell.
-
-Add this to your remote system’s ~/.bash_profile file:
-
-```
-if [ -z "$TMUX" ]; then
-    tmux attach -t default || tmux new -s default
-fi
-```
-
-Then log out of the remote system, and log back in with SSH. You’ll find you’re in a tmux session named default. This session will be regenerated at next login if you exit it. But more importantly, if you detach from it as normal, your work is waiting for you next time you login — especially useful if your connection is interrupted.
-
-Of course you can add this to your local system as well. Note that terminals inside most GUIs won’t use the default session automatically, because they aren’t login shells. While you can change that behavior, it may result in nesting that makes the session less usable, so proceed with caution.
-
-### Use zoom to focus on a single process
-
-While the point of tmux is to offer multiple windows, panes, and processes in a single session, sometimes you need to focus. If you’re in a process and need more space, or want to focus on a single task, the zoom command works well. It expands the current pane to take up the entire current window space.
-
-Zoom can be useful in other situations too.
For instance, imagine you’re using a terminal window in a graphical desktop. Panes can make it harder to copy and paste multiple lines from inside your tmux session. If you zoom the pane, you can do a clean copy/paste of multiple lines of data with ease. - -To zoom into the current pane, hit Ctrl+b, z. When you’re finished with the zoom function, hit the same key combo to unzoom the pane. - -### Bind some useful commands - -By default tmux has numerous commands available. But it’s helpful to have some of the more common operations bound to keys you can easily remember. Here are some examples you can add to your ~/.tmux.conf file to make sessions more enjoyable: - -``` -bind r source-file ~/.tmux.conf \; display "Reloaded config" -``` - -This command rereads the commands and bindings in your config file. Once you add this binding, exit any tmux sessions and then restart one. Now after you make any other future changes, simply run Ctrl+b, r and the changes will be part of your existing session. - -``` -bind V split-window -h -bind H split-window -``` - -These commands make it easier to split the current window across a vertical axis (note that’s Shift+V) or across a horizontal axis (Shift+H). - -If you want to see how all keys are bound, use Ctrl+B, ? to see a list. You may see keys bound in copy-mode first, for when you’re working with copy and paste inside tmux. The prefix mode bindings are where you’ll see ones you’ve added above. Feel free to experiment with your own! - -### Use powerline for great justice - -[As reported in a previous Fedora Magazine article][2], the powerline utility is a fantastic addition to your shell. But it also has capabilities when used with tmux. Because tmux takes over the entire terminal space, the powerline window can provide more than just a better shell prompt. 
- - [![Screenshot of tmux powerline in git folder](https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53-1024x690.png)][3] - -If you haven’t already, follow the instructions in the [Magazine’s powerline article][4] to install that utility. Then, install the addon [using sudo][5]: - -``` -sudo dnf install tmux-powerline -``` - -Now restart your session, and you’ll see a spiffy new status line at the bottom. Depending on the terminal width, the default status line now shows your current session ID, open windows, system information, date and time, and hostname. If you change directory into a git-controlled project, you’ll see the branch and color-coded status as well. - -Of course, this status bar is highly configurable as well. Enjoy your new supercharged tmux session, and have fun experimenting with it. - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/4-tips-better-tmux-sessions/ - -作者:[Paul W. 
Frields][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://fedoramagazine.org/author/pfrields/
-[1]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/
-[2]:https://fedoramagazine.org/add-power-terminal-powerline/
-[3]:https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53.png
-[4]:https://fedoramagazine.org/add-power-terminal-powerline/
-[5]:https://fedoramagazine.org/howto-use-sudo/
diff --git a/sources/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md b/sources/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md
deleted file mode 100644
index bb0479e7fe..0000000000
--- a/sources/tech/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md
+++ /dev/null
@@ -1,50 +0,0 @@
-translating by lujun9972
-Solve "error: failed to commit transaction (conflicting files)" In Arch Linux
-======
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/06/arch_linux_wallpaper-720x340.png)
-
-It’s been a month since I upgraded my Arch Linux desktop. Today, I tried to update my Arch Linux system, and ran into an error that said **“error: failed to commit transaction (conflicting files) stfl: /usr/lib/libstfl.so.0 exists in filesystem”**. It looks like a library (/usr/lib/libstfl.so.0) already existed on my filesystem and pacman couldn’t upgrade it. If you run into the same error, here is a quick fix to resolve it.
-
-### Solve “error: failed to commit transaction (conflicting files)” In Arch Linux
-
-You have three options.
-
-1. Simply exclude the problematic **stfl** library from being upgraded and try to update the system again. Refer to this guide to learn [**how to ignore a package from being upgraded**][1].
-
-2.
Overwrite the package using the command:
-```
-$ sudo pacman -Syu --overwrite /usr/lib/libstfl.so.0
-```
-
-3. Remove the stfl library file manually and try to upgrade the system again. Please make sure the intended package is not a dependency of any important package, and check archlinux.org for any mention of this conflict.
-```
-$ sudo rm /usr/lib/libstfl.so.0
-```
-
-Now, try to update the system:
-```
-$ sudo pacman -Syu
-```
-
-I chose the third option: I just deleted the file and upgraded my Arch Linux system. It works now!
-
-Hope this helps. More good stuff to come. Stay tuned!
-
-Cheers!
-
-
-
--------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-solve-error-failed-to-commit-transaction-conflicting-files-in-arch-linux/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.ostechnix.com/safely-ignore-package-upgraded-arch-linux/
diff --git a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md
index c25239b7ba..769f9ba420 100644
--- a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md
+++ b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md
@@ -1,3 +1,4 @@
+Translating by z52527
 Publishing Markdown to HTML with MDwiki
 ======
diff --git a/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md b/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md
deleted file mode 100644
index 11d266e163..0000000000
--- a/sources/tech/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md
+++ /dev/null
@@ -1,196 +0,0 @@
-How To Limit Network Bandwidth In Linux Using Wondershaper
-======
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/09/Wondershaper-1-720x340.jpg)
-
-This tutorial will help you easily limit network bandwidth and shape your network traffic in Unix-like operating systems. By limiting the network bandwidth usage, you can prevent unnecessary bandwidth consumption by applications such as package managers (pacman, yum, apt), web browsers, torrent clients and download managers, and prevent bandwidth abuse by one or more users on the network. For the purpose of this tutorial, we will be using a command line utility named **Wondershaper**. Trust me, it is not as hard as you may think. It is one of the easiest and quickest ways I have come across to limit the Internet or local network bandwidth usage on your own Linux system. Read on.
-
-Please be mindful that the aforementioned utility can only limit the incoming and outgoing traffic of your local network interfaces, not the interfaces of your router or modem. In other words, Wondershaper will only limit the network bandwidth on your local system itself, not on any other systems in the network. This utility is mainly designed for limiting the bandwidth of one or more network adapters in your local system. Hope you got my point.
-
-Let us see how to use Wondershaper to shape the network traffic.
-
-### Limit Network Bandwidth In Linux Using Wondershaper
-
-**Wondershaper** is a simple script used to limit the bandwidth of your system’s network adapter(s). It limits the bandwidth using iproute’s tc command, but greatly simplifies its operation.
-
-**Installing Wondershaper**
-
-To install the latest version, git clone the wondershaper repository:
-
-```
-$ git clone https://github.com/magnific0/wondershaper.git
-
-```
-
-Go to the wondershaper directory and install it as shown below.
-
-```
-$ cd wondershaper
-
-$ sudo make install
-
-```
-
-Then, run the following command to start the wondershaper service automatically on every reboot.
-
-```
-$ sudo systemctl enable wondershaper.service
-
-$ sudo systemctl start wondershaper.service
-
-```
-
-You can also install it using your distribution’s package manager (official or non-official) if you don’t need the very latest version.
-
-Wondershaper is available in the [**AUR**][1], so you can install it on Arch-based systems using AUR helper programs such as [**Yay**][2].
-
-```
-$ yay -S wondershaper-git
-
-```
-
-On Debian, Ubuntu, Linux Mint:
-
-```
-$ sudo apt-get install wondershaper
-
-```
-
-On Fedora:
-
-```
-$ sudo dnf install wondershaper
-
-```
-
-On RHEL, CentOS, enable the EPEL repository and install wondershaper as shown below.
-
-```
-$ sudo yum install epel-release
-
-$ sudo yum install wondershaper
-
-```
-
-Finally, start the wondershaper service automatically on every reboot.
-
-```
-$ sudo systemctl enable wondershaper.service
-
-$ sudo systemctl start wondershaper.service
-
-```
-
-**Usage**
-
-First, find the name of your network interface. Here are some common ways to find the details of a network card.
-
-```
-$ ip addr
-
-$ route
-
-$ ifconfig
-
-```
-
-Once you find the network card name, you can limit the bandwidth rate as shown below.
-
-```
-$ sudo wondershaper -a -d -u
-
-```
-
-For instance, if your network card name is **enp0s8** and you want to limit the bandwidth to **1024 Kbps** for **downloads** and **512 Kbps** for **uploads**, the command would be:
-
-```
-$ sudo wondershaper -a enp0s8 -d 1024 -u 512
-
-```
-
-Where,
-
- * **-a** : network card name
- * **-d** : download rate
- * **-u** : upload rate
-
-
-
-To clear the limits from a network adapter, simply run:
-
-```
-$ sudo wondershaper -c -a enp0s8
-
-```
-
-Or
-
-```
-$ sudo wondershaper -c enp0s8
-
-```
-
-If there is more than one network card in your system, you need to manually set the download/upload rates for each network interface card as described above.
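If several adapters need the same limits, a small loop saves repetition. A hedged sketch: the interface names and rates below are only examples, and the leading `echo` makes it a dry run (remove it to actually apply the limits):

```shell
# Dry run: print the wondershaper invocation for each NIC.
# Remove the "echo" to actually apply the limits.
for iface in enp0s8 enp0s9; do
    echo sudo wondershaper -a "$iface" -d 1024 -u 512
done
```

Running it as-is just prints the two commands, which is a cheap way to check the loop before touching any live interface.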
-
-If you have installed Wondershaper by cloning its GitHub repository, there is a configuration file named **wondershaper.conf** in **/etc/conf.d/**. Make sure you have set the download and upload rates by modifying the appropriate values (network card name, download/upload rate) in this file.
-
-```
-$ sudo nano /etc/conf.d/wondershaper.conf
-
-[wondershaper]
-# Adapter
-#
-IFACE="eth0"
-
-# Download rate in Kbps
-#
-DSPEED="2048"
-
-# Upload rate in Kbps
-#
-USPEED="512"
-
-```
-
-Here is the sample before Wondershaper:
-
-After enabling Wondershaper:
-
-As you can see, the download rate has been tremendously reduced after limiting the bandwidth using Wondershaper on my Ubuntu 18.04 LTS server.
-
-For more details, view the help section by running the following command:
-
-```
-$ wondershaper -h
-
-```
-
-Or, refer to the man pages.
-
-```
-$ man wondershaper
-
-```
-
-As far as I have tested, Wondershaper worked just fine as described above. Give it a try and let us know what you think about this utility.
-
-And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned.
-
-Cheers!
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-limit-network-bandwidth-in-linux-using-wondershaper/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://aur.archlinux.org/packages/wondershaper-git/ -[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ diff --git a/sources/tech/20180907 6.828 lab tools guide.md b/sources/tech/20180907 6.828 lab tools guide.md new file mode 100644 index 0000000000..e9061a3097 --- /dev/null +++ b/sources/tech/20180907 6.828 lab tools guide.md @@ -0,0 +1,201 @@ +6.828 lab tools guide +====== +### 6.828 lab tools guide + +Familiarity with your environment is crucial for productive development and debugging. This page gives a brief overview of the JOS environment and useful GDB and QEMU commands. Don't take our word for it, though. Read the GDB and QEMU manuals. These are powerful tools that are worth knowing how to use. + +#### Debugging tips + +##### Kernel + +GDB is your friend. Use the qemu-gdb target (or its `qemu-gdb-nox` variant) to make QEMU wait for GDB to attach. See the GDB reference below for some commands that are useful when debugging kernels. + +If you're getting unexpected interrupts, exceptions, or triple faults, you can ask QEMU to generate a detailed log of interrupts using the -d argument. + +To debug virtual memory issues, try the QEMU monitor commands info mem (for a high-level overview) or info pg (for lots of detail). Note that these commands only display the _current_ page table. + +(Lab 4+) To debug multiple CPUs, use GDB's thread-related commands like thread and info threads. 
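
Once you have a `qemu.log` produced with `-d int`, pulling individual fields out of the log lines is simple shell work. Here is a hedged sketch that uses a hard-coded sample line (in the format shown in the reference section of this guide) so it runs without QEMU; the field extraction is the part you would reuse against a real log:

```shell
# Sample line in the format QEMU's '-d int' log uses (hard-coded here so the
# sketch runs without QEMU); extract the interrupt vector field.
line='     4: v=30 e=0000 i=1 cpl=3 IP=001b:00800e2e pc=00800e2e SP=0023:eebfdf28 EAX=00000005'
vector=$(printf '%s\n' "$line" | sed -n 's/.*v=\([0-9a-f]*\).*/\1/p')
echo "vector 0x$vector"   # prints: vector 0x30
```

Against a real log you would replace the hard-coded `line` with, say, `grep '^ *[0-9]*: v=' qemu.log`.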
+ +##### User environments (lab 3+) + +GDB also lets you debug user environments, but there are a few things you need to watch out for, since GDB doesn't know that there's a distinction between multiple user environments, or between user and kernel. + +You can start JOS with a specific user environment using make run- _name_ (or you can edit `kern/init.c` directly). To make QEMU wait for GDB to attach, use the run- _name_ -gdb variant. + +You can symbolically debug user code, just like you can kernel code, but you have to tell GDB which symbol table to use with the symbol-file command, since it can only use one symbol table at a time. The provided `.gdbinit` loads the kernel symbol table, `obj/kern/kernel`. The symbol table for a user environment is in its ELF binary, so you can load it using symbol-file obj/user/ _name_. _Don't_ load symbols from any `.o` files, as those haven't been relocated by the linker (libraries are statically linked into JOS user binaries, so those symbols are already included in each user binary). Make sure you get the _right_ user binary; library functions will be linked at different EIPs in different binaries and GDB won't know any better! + +(Lab 4+) Since GDB is attached to the virtual machine as a whole, it sees clock interrupts as just another control transfer. This makes it basically impossible to step through user code because a clock interrupt is virtually guaranteed the moment you let the VM run again. The stepi command works because it suppresses interrupts, but it only steps one assembly instruction. Breakpoints generally work, but watch out because you can hit the same EIP in a different environment (indeed, a different binary altogether!). + +#### Reference + +##### JOS makefile + +The JOS GNUmakefile includes a number of phony targets for running JOS in various ways. All of these targets configure QEMU to listen for GDB connections (the `*-gdb` targets also wait for this connection). 
To start once QEMU is running, simply run gdb from your lab directory. We provide a `.gdbinit` file that automatically points GDB at QEMU, loads the kernel symbol file, and switches between 16-bit and 32-bit mode. Exiting GDB will shut down QEMU.
+
+  * make qemu
+Build everything and start QEMU with the VGA console in a new window and the serial console in your terminal. To exit, either close the VGA window or press `Ctrl-c` or `Ctrl-a x` in your terminal.
+  * make qemu-nox
+Like `make qemu`, but run with only the serial console. To exit, press `Ctrl-a x`. This is particularly useful over SSH connections to Athena dialups because the VGA window consumes a lot of bandwidth.
+  * make qemu-gdb
+Like `make qemu`, but rather than passively accepting GDB connections at any time, this pauses at the first machine instruction and waits for a GDB connection.
+  * make qemu-nox-gdb
+A combination of the `qemu-nox` and `qemu-gdb` targets.
+  * make run- _name_
+(Lab 3+) Run user program _name_. For example, `make run-hello` runs `user/hello.c`.
+  * make run- _name_ -nox, run- _name_ -gdb, run- _name_ -gdb-nox
+(Lab 3+) Variants of `run-name` that correspond to the variants of the `qemu` target.
+
+
+
+The makefile also accepts a few useful variables:
+
+  * make V=1 ...
+Verbose mode. Print out every command being executed, including arguments.
+  * make V=1 grade
+Stop after any failed grade test and leave the QEMU output in `jos.out` for inspection.
+  * make QEMUEXTRA=' _args_ ' ...
+Specify additional arguments to pass to QEMU.
+
+
+
+##### JOS obj/
+
+When building JOS, the makefile also produces some additional output files that may prove useful while debugging:
+
+  * `obj/boot/boot.asm`, `obj/kern/kernel.asm`, `obj/user/hello.asm`, etc.
+Assembly code listings for the bootloader, kernel, and user programs.
+  * `obj/kern/kernel.sym`, `obj/user/hello.sym`, etc.
+Symbol tables for the kernel and user programs.
+  * `obj/boot/boot.out`, `obj/kern/kernel`, `obj/user/hello`, etc.
+Linked ELF images of the kernel and user programs. These contain symbol information that can be used by GDB.
+
+
+
+##### GDB
+
+See the [GDB manual][1] for a full guide to GDB commands. Here are some particularly useful commands for 6.828, some of which don't typically come up outside of OS development.
+
+  * Ctrl-c
+Halt the machine and break in to GDB at the current instruction. If QEMU has multiple virtual CPUs, this halts all of them.
+  * c (or continue)
+Continue execution until the next breakpoint or `Ctrl-c`.
+  * si (or stepi)
+Execute one machine instruction.
+  * b function or b file:line (or break)
+Set a breakpoint at the given function or line.
+  * b * _addr_ (or break)
+Set a breakpoint at the EIP _addr_.
+  * set print pretty
+Enable pretty-printing of arrays and structs.
+  * info registers
+Print the general purpose registers, `eip`, `eflags`, and the segment selectors. For a much more thorough dump of the machine register state, see QEMU's own `info registers` command.
+  * x/ _N_ x _addr_
+Display a hex dump of _N_ words starting at virtual address _addr_. If _N_ is omitted, it defaults to 1. _addr_ can be any expression.
+  * x/ _N_ i _addr_
+Display the _N_ assembly instructions starting at _addr_. Using `$eip` as _addr_ will display the instructions at the current instruction pointer.
+  * symbol-file _file_
+(Lab 3+) Switch to symbol file _file_.
When GDB attaches to QEMU, it has no notion of the process boundaries within the virtual machine, so we have to tell it which symbols to use. By default, we configure GDB to use the kernel symbol file, `obj/kern/kernel`. If the machine is running user code, say `hello.c`, you can switch to the hello symbol file using `symbol-file obj/user/hello`. + + + +QEMU represents each virtual CPU as a thread in GDB, so you can use all of GDB's thread-related commands to view or manipulate QEMU's virtual CPUs. + + * thread _n_ +GDB focuses on one thread (i.e., CPU) at a time. This command switches that focus to thread _n_ , numbered from zero. + * info threads +List all threads (i.e., CPUs), including their state (active or halted) and what function they're in. + + + +##### QEMU + +QEMU includes a built-in monitor that can inspect and modify the machine state in useful ways. To enter the monitor, press Ctrl-a c in the terminal running QEMU. Press Ctrl-a c again to switch back to the serial console. + +For a complete reference to the monitor commands, see the [QEMU manual][2]. Here are some particularly useful commands: + + * xp/ _N_ x _paddr_ +Display a hex dump of _N_ words starting at _physical_ address _paddr_. If _N_ is omitted, it defaults to 1. This is the physical memory analogue of GDB's `x` command. + + * info registers +Display a full dump of the machine's internal register state. In particular, this includes the machine's _hidden_ segment state for the segment selectors and the local, global, and interrupt descriptor tables, plus the task register. This hidden state is the information the virtual CPU read from the GDT/LDT when the segment selector was loaded. Here's the CS when running in the JOS kernel in lab 1 and the meaning of each field: +``` + CS =0008 10000000 ffffffff 10cf9a00 DPL=0 CS32 [-R-] +``` + + * `CS =0008` +The visible part of the code selector. We're using segment 0x8. 
This also tells us we're referring to the global descriptor table (0x8 &4=0), and our CPL (current privilege level) is 0x8&3=0. + * `10000000` +The base of this segment. Linear address = logical address + 0x10000000. + * `ffffffff` +The limit of this segment. Linear addresses above 0xffffffff will result in segment violation exceptions. + * `10cf9a00` +The raw flags of this segment, which QEMU helpfully decodes for us in the next few fields. + * `DPL=0` +The privilege level of this segment. Only code running with privilege level 0 can load this segment. + * `CS32` +This is a 32-bit code segment. Other values include `DS` for data segments (not to be confused with the DS register), and `LDT` for local descriptor tables. + * `[-R-]` +This segment is read-only. + * info mem +(Lab 2+) Display mapped virtual memory and permissions. For example, +``` + ef7c0000-ef800000 00040000 urw + efbf8000-efc00000 00008000 -rw + +``` + +tells us that the 0x00040000 bytes of memory from 0xef7c0000 to 0xef800000 are mapped read/write and user-accessible, while the memory from 0xefbf8000 to 0xefc00000 is mapped read/write, but only kernel-accessible. + + * info pg +(Lab 2+) Display the current page table structure. The output is similar to `info mem`, but distinguishes page directory entries and page table entries and gives the permissions for each separately. Repeated PTE's and entire page tables are folded up into a single line. For example, +``` + VPN range Entry Flags Physical page + [00000-003ff] PDE[000] -------UWP + [00200-00233] PTE[200-233] -------U-P 00380 0037e 0037d 0037c 0037b 0037a .. + [00800-00bff] PDE[002] ----A--UWP + [00800-00801] PTE[000-001] ----A--U-P 0034b 00349 + [00802-00802] PTE[002] -------U-P 00348 + +``` + +This shows two page directory entries, spanning virtual addresses 0x00000000 to 0x003fffff and 0x00800000 to 0x00bfffff, respectively. Both PDE's are present, writable, and user and the second PDE is also accessed. 
The second of these page tables maps three pages, spanning virtual addresses 0x00800000 through 0x00802fff, of which the first two are present, user, and accessed, and the third is only present and user. The first of these PTE's maps physical page 0x34b.
+
+
+
+
+QEMU also takes some useful command line arguments, which can be passed into the JOS makefile using the `QEMUEXTRA` variable:
+
+  * make QEMUEXTRA='-d int' ...
+Log all interrupts, along with a full register dump, to `qemu.log`. You can ignore the first two log entries, "SMM: enter" and "SMM: after RSM", as these are generated before entering the boot loader. After this, log entries look like
+```
+     4: v=30 e=0000 i=1 cpl=3 IP=001b:00800e2e pc=00800e2e SP=0023:eebfdf28 EAX=00000005
+    EAX=00000005 EBX=00001002 ECX=00200000 EDX=00000000
+    ESI=00000805 EDI=00200000 EBP=eebfdf60 ESP=eebfdf28
+    ...
+
+```
+
+The first line describes the interrupt. The `4:` is just a log record counter. `v` gives the vector number in hex. `e` gives the error code. `i=1` indicates that this was produced by an `int` instruction (versus a hardware interrupt). The rest of the line should be self-explanatory. See info registers for a description of the register dump that follows.
+
+Note: If you're running a pre-0.15 version of QEMU, the log will be written to `/tmp` instead of the current directory.
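
The bit-level decodings above — selector index/TI/RPL and PTE physical page plus flags — are easy to reproduce yourself. Here is a hedged shell sketch; the selector value 0x0008 matches the example above, while the PTE value is made up purely for illustration:

```shell
# Decode a segment selector: index = bits 3+, TI = bit 2, RPL = bits 0-1.
SEL=$(( 0x0008 ))
printf 'selector: index=%d TI=%d RPL=%d\n' \
    $(( SEL >> 3 )) $(( (SEL >> 2) & 1 )) $(( SEL & 3 ))
# prints: selector: index=1 TI=0 RPL=0

# Decode an x86 PTE: physical page number in bits 12+, flags in the low bits.
PTE=$(( 0x0034b067 ))   # hypothetical entry: page 0x34b, flags P|W|U|A set
printf 'pte: page=0x%05x P=%d W=%d U=%d A=%d\n' \
    $(( PTE >> 12 )) $(( PTE & 1 )) $(( (PTE >> 1) & 1 )) \
    $(( (PTE >> 2) & 1 )) $(( (PTE >> 5) & 1 ))
# prints: pte: page=0x0034b P=1 W=1 U=1 A=1
```

The same arithmetic is what `info mem` and `info pg` are doing for you behind the scenes.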
+ + + + +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/labguide.html + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://pdos.csail.mit.edu +[b]: https://github.com/lujun9972 +[1]: http://sourceware.org/gdb/current/onlinedocs/gdb/ +[2]: http://wiki.qemu.org/download/qemu-doc.html#pcsys_005fmonitor diff --git a/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md b/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md deleted file mode 100644 index a9d3eb0895..0000000000 --- a/sources/tech/20180907 How to Use the Netplan Network Configuration Tool on Linux.md +++ /dev/null @@ -1,230 +0,0 @@ -LuuMing translating -How to Use the Netplan Network Configuration Tool on Linux -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan.jpg?itok=Gu_ZfNGa) - -For years Linux admins and users have configured their network interfaces in the same way. For instance, if you’re a Ubuntu user, you could either configure the network connection via the desktop GUI or from within the /etc/network/interfaces file. The configuration was incredibly easy and never failed to work. The configuration within that file looked something like this: - -``` -auto enp10s0 - -iface enp10s0 inet static - -address 192.168.1.162 - -netmask 255.255.255.0 - -gateway 192.168.1.100 - -dns-nameservers 1.0.0.1,1.1.1.1 - -``` - -Save and close that file. 
Restart networking with the command:
-
-```
-sudo systemctl restart networking
-
-```
-
-Or, if you're using a non-systemd distribution, you could restart networking the old-fashioned way like so:
-
-```
-sudo /etc/init.d/networking restart
-
-```
-
-Your network will restart and the newly configured interface is good to go.
-
-That's how it's been done for years. Until now. With certain distributions (such as Ubuntu Linux 18.04), the configuration and control of networking has changed considerably. Instead of that interfaces file and the /etc/init.d/networking script, we now turn to [Netplan][1]. Netplan is a command line utility for the configuration of networking on certain Linux distributions. Netplan uses YAML description files to configure network interfaces and, from those descriptions, will generate the necessary configuration options for any given renderer tool.
-
-I want to show you how to use Netplan on Linux, to configure a static IP address and a DHCP address. I'll be demonstrating on Ubuntu Server 18.04. One word of warning: the .yaml files you create for Netplan must be consistent in spacing, otherwise they'll fail to work. You don't have to use a specific spacing for each line; it just has to remain consistent.
-
-### The new configuration files
-
-Open a terminal window (or log into your Ubuntu Server via SSH). You will find the new configuration files for Netplan in the /etc/netplan directory. Change into that directory with the command cd /etc/netplan. Once in that directory, you will probably only see a single file:
-
-```
-01-netcfg.yaml
-
-```
-
-You can create a new file or edit the default. If you opt to edit the default, I suggest making a copy with the command:
-
-```
-sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak
-
-```
-
-With your backup in place, you're ready to configure.
-
-### Network Device Name
-
-Before you configure your static IP address, you'll need to know the name of the device to be configured. To do that, you can issue the command ip a and find out which device is to be used (Figure 1).
-
-![netplan][3]
-
-Figure 1: Finding our device name with the ip a command.
-
-[Used with permission][4]
-
-I'll be configuring ens5 for a static IP address.
-
-### Configuring a Static IP Address
-
-Open the original .yaml file for editing with the command:
-
-```
-sudo nano /etc/netplan/01-netcfg.yaml
-
-```
-
-The layout of the file looks like this (YAML is case-sensitive, so note the lowercase keys):
-
-```
-network:
-  version: 2
-  renderer: networkd
-  ethernets:
-    DEVICE_NAME:
-      dhcp4: yes/no
-      addresses: [IP/NETMASK]
-      gateway4: GATEWAY
-      nameservers:
-        addresses: [NAMESERVER, NAMESERVER]
-
-```
-
-Where:
-
-  * DEVICE_NAME is the actual device name to be configured.
-
-  * yes/no is an option to enable or disable dhcp4.
-
-  * IP is the IP address for the device.
-
-  * NETMASK is the netmask for the IP address.
-
-  * GATEWAY is the address for your gateway.
-
-  * NAMESERVER is the comma-separated list of DNS nameservers.
-
-
-
-
-Here's a sample .yaml file:
-
-```
-network:
-  version: 2
-  renderer: networkd
-  ethernets:
-    ens5:
-      dhcp4: no
-      addresses: [192.168.1.230/24]
-      gateway4: 192.168.1.254
-      nameservers:
-        addresses: [8.8.4.4,8.8.8.8]
-
-```
-
-Edit the above to fit your networking needs. Save and close that file.
-
-Notice the netmask is no longer configured in the form 255.255.255.0. Instead, the netmask is added to the IP address.
-
-### Testing the Configuration
-
-Before we apply the change, let's test the configuration. To do that, issue the command:
-
-```
-sudo netplan try
-
-```
-
-The above command will validate the configuration before applying it. If it succeeds, you will see Configuration accepted. In other words, Netplan will attempt to apply the new settings to a running system.
Should the new configuration file fail, Netplan will automatically revert to the previous working configuration. Should the new configuration work, it will be applied.
-
-### Applying the New Configuration
-
-If you are certain of your configuration file, you can skip the try option and go directly to applying the new options. The command for this is:
-
-```
-sudo netplan apply
-
-```
-
-At this point, you can issue the command ip a to see that your new address configurations are in place.
-
-### Configuring DHCP
-
-Although you probably won't be configuring your server for DHCP, it's always good to know how to do this. For example, you might not know what static IP addresses are currently available on your network. You could configure the device for DHCP, get an IP address, and then reconfigure that address as static.
-
-To use DHCP with Netplan, the configuration file would look something like this:
-
-```
-network:
-  version: 2
-  renderer: networkd
-  ethernets:
-    ens5:
-      addresses: []
-      dhcp4: true
-      optional: true
-
-```
-
-Save and close that file. Test the file with:
-
-```
-sudo netplan try
-
-```
-
-Netplan should succeed and apply the DHCP configuration. You could then issue the ip a command, get the dynamically assigned address, and then reconfigure a static address. Or, you could leave it set to use DHCP (but seeing as how this is a server, you probably won't want to do that).
-
-Should you have more than one interface, you could name the second .yaml configuration file 02-netcfg.yaml. Netplan will apply the configuration files in numerical order, so 01 will be applied before 02. Create as many configuration files as needed for your server.
-
-### That's All There Is
-
-Believe it or not, that's all there is to using Netplan. Although it is a significant change to how we're accustomed to configuring network addresses, it's not all that hard to get used to. But this style of configuration is here to stay… so you will need to get used to it.
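
One small habit worth forming during the transition: converting old dotted netmasks into the prefix-length form that Netplan appends to the address (255.255.255.0 becomes /24). A hedged shell sketch — the helper name is mine, and it assumes a valid contiguous netmask:

```shell
# mask_to_prefix: convert a dotted-quad netmask (e.g. 255.255.255.0) to the
# CIDR prefix length Netplan uses (e.g. 24) by counting the set bits.
mask_to_prefix() {
    prefix=0
    old_ifs=$IFS
    IFS=.
    set -- $1
    IFS=$old_ifs
    for octet in "$@"; do
        while [ "$octet" -gt 0 ]; do
            prefix=$(( prefix + (octet & 1) ))
            octet=$(( octet >> 1 ))
        done
    done
    echo "$prefix"
}

mask_to_prefix 255.255.255.0   # prints 24, so 192.168.1.230 becomes 192.168.1.230/24
```

Counting set bits only gives a meaningful answer for proper netmasks (a contiguous run of 1 bits), but that covers anything you would have had in an interfaces file.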
- -Learn more about Linux through the free ["Introduction to Linux" ][5]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/9/how-use-netplan-network-configuration-tool-linux - -作者:[Jack Wallen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/jlwallen -[1]: https://netplan.io/ -[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan_1.jpg?itok=XuIsXWbV (netplan) -[4]: /licenses/category/used-permission -[5]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180911 Tools Used in 6.828.md b/sources/tech/20180911 Tools Used in 6.828.md new file mode 100644 index 0000000000..c9afeae4ea --- /dev/null +++ b/sources/tech/20180911 Tools Used in 6.828.md @@ -0,0 +1,247 @@ +Tools Used in 6.828 +====== +### Tools Used in 6.828 + +You'll use two sets of tools in this class: an x86 emulator, QEMU, for running your kernel; and a compiler toolchain, including assembler, linker, C compiler, and debugger, for compiling and testing your kernel. This page has the information you'll need to download and install your own copies. This class assumes familiarity with Unix commands throughout. + +We highly recommend using a Debathena machine, such as athena.dialup.mit.edu, to work on the labs. If you use the MIT Athena machines that run Linux, then all the software tools you will need for this course are located in the 6.828 locker: just type 'add -f 6.828' to get access to them. + +If you don't have access to a Debathena machine, we recommend you use a virtual machine with Linux. 
If you really want to, you can build and install the tools on your own machine. We have instructions below for Linux and MacOS computers.
+
+It should be possible to get this development environment running under Windows with the help of [Cygwin][1]. Install Cygwin, and be sure to install the flex and bison packages (they are under the development header).
+
+For an overview of useful commands in the tools used in 6.828, see the [lab tools guide][2].
+
+#### Compiler Toolchain
+
+A "compiler toolchain" is the set of programs, including a C compiler, assemblers, and linkers, that turn code into executable binaries. You'll need a compiler toolchain that generates code for 32-bit Intel architectures ("x86" architectures) in the ELF binary format.
+
+##### Test Your Compiler Toolchain
+
+Modern Linux and BSD UNIX distributions already provide a toolchain suitable for 6.828. To test your distribution, try the following commands:
+
+```
+% objdump -i
+
+```
+
+The second line should say `elf32-i386`.
+
+```
+% gcc -m32 -print-libgcc-file-name
+
+```
+
+The command should print something like `/usr/lib/gcc/i486-linux-gnu/version/libgcc.a` or `/usr/lib/gcc/x86_64-linux-gnu/version/32/libgcc.a`.
+
+If both these commands succeed, you're all set, and don't need to compile your own toolchain.
+
+If the gcc command fails, you may need to install a development environment. On Ubuntu Linux, try this:
+
+```
+% sudo apt-get install -y build-essential gdb
+
+```
+
+On 64-bit machines, you may need to install a 32-bit support library. The symptom is that linking fails with error messages like "`__udivdi3` not found" and "`__muldi3` not found". On Ubuntu Linux, try this to fix the problem:
+
+```
+% sudo apt-get install gcc-multilib
+
+```
+
+##### Using a Virtual Machine
+
+Otherwise, the easiest way to get a compatible toolchain is to install a modern Linux distribution on your computer. With platform virtualization, Linux can cohabitate with your normal computing environment.
Installing a Linux virtual machine is a two step process. First, you download the virtualization platform. + + * [**VirtualBox**][3] (free for Mac, Linux, Windows) — [Download page][3] + * [VMware Player][4] (free for Linux and Windows, registration required) + * [VMware Fusion][5] (Downloadable from IS&T for free). + + + +VirtualBox is a little slower and less flexible, but free! + +Once the virtualization platform is installed, download a boot disk image for the Linux distribution of your choice. + + * [Ubuntu Desktop][6] is what we use. + + + +This will download a file named something like `ubuntu-10.04.1-desktop-i386.iso`. Start up your virtualization platform and create a new (32-bit) virtual machine. Use the downloaded Ubuntu image as a boot disk; the procedure differs among VMs but is pretty simple. Type `objdump -i`, as above, to verify that your toolchain is now set up. You will do your work inside the VM. + +##### Building Your Own Compiler Toolchain + +This will take longer to set up, but give slightly better performance than a virtual machine, and lets you work in your own familiar environment (Unix/MacOS). Fast-forward to the end for MacOS instructions. + +###### Linux + +You can use your own tool chain by adding the following line to `conf/env.mk`: + +``` +GCCPREFIX= + +``` + +We assume that you are installing the toolchain into `/usr/local`. You will need a fair amount of disk space to compile the tools (around 1GiB). If you don't have that much space, delete each directory after its `make install` step. + +Download the following packages: + ++ ftp://ftp.gmplib.org/pub/gmp-5.0.2/gmp-5.0.2.tar.bz2 ++ https://www.mpfr.org/mpfr-3.1.2/mpfr-3.1.2.tar.bz2 ++ http://www.multiprecision.org/downloads/mpc-0.9.tar.gz ++ http://ftpmirror.gnu.org/binutils/binutils-2.21.1.tar.bz2 ++ http://ftpmirror.gnu.org/gcc/gcc-4.6.4/gcc-core-4.6.4.tar.bz2 ++ http://ftpmirror.gnu.org/gdb/gdb-7.3.1.tar.bz2 + +(You may also use newer versions of these packages.) 
Unpack and build the packages. The commands below install into `/usr/local`, which is what we recommend; to install into a different directory, change the value of `$PFX` on the first line below. If you have problems, see below.
+
+```
+PFX=/usr/local   # install prefix; change this if you lack root access
+export PATH=$PFX/bin:$PATH
+export LD_LIBRARY_PATH=$PFX/lib:$LD_LIBRARY_PATH
+
+tar xjf gmp-5.0.2.tar.bz2
+cd gmp-5.0.2
+./configure --prefix=$PFX
+make
+make install # This step may require privilege (sudo make install)
+cd ..
+
+tar xjf mpfr-3.1.2.tar.bz2
+cd mpfr-3.1.2
+./configure --prefix=$PFX --with-gmp=$PFX
+make
+make install # This step may require privilege (sudo make install)
+cd ..
+
+tar xzf mpc-0.9.tar.gz
+cd mpc-0.9
+./configure --prefix=$PFX --with-gmp=$PFX --with-mpfr=$PFX
+make
+make install # This step may require privilege (sudo make install)
+cd ..
+
+
+tar xjf binutils-2.21.1.tar.bz2
+cd binutils-2.21.1
+./configure --prefix=$PFX --target=i386-jos-elf --disable-werror
+make
+make install # This step may require privilege (sudo make install)
+cd ..
+
+i386-jos-elf-objdump -i
+# Should produce output like:
+# BFD header file version (GNU Binutils) 2.21.1
+# elf32-i386
+# (header little endian, data little endian)
+# i386...
+
+
+tar xjf gcc-core-4.6.4.tar.bz2
+cd gcc-4.6.4
+mkdir build # GCC will not compile correctly unless you build in a separate directory
+cd build
+../configure --prefix=$PFX --with-gmp=$PFX --with-mpfr=$PFX --with-mpc=$PFX \
+  --target=i386-jos-elf --disable-werror \
+  --disable-libssp --disable-libmudflap --with-newlib \
+  --without-headers --enable-languages=c MAKEINFO=missing
+make all-gcc
+make install-gcc # This step may require privilege (sudo make install-gcc)
+make all-target-libgcc
+make install-target-libgcc # This step may require privilege (sudo make install-target-libgcc)
+cd ../..
+
+i386-jos-elf-gcc -v
+# Should produce output like:
+# Using built-in specs.
+# COLLECT_GCC=i386-jos-elf-gcc
+# COLLECT_LTO_WRAPPER=/usr/local/libexec/gcc/i386-jos-elf/4.6.4/lto-wrapper
+# Target: i386-jos-elf
+
+
+tar xjf gdb-7.3.1.tar.bz2
+cd gdb-7.3.1
+./configure --prefix=$PFX --target=i386-jos-elf --program-prefix=i386-jos-elf- \
+  --disable-werror
+make all
+make install # This step may require privilege (sudo make install)
+cd ..
+
+```
+
+###### Linux troubleshooting
+
+  * Q. I can't run `make install` because I don't have root permission on this machine.
+A. Our instructions assume you are installing into the `/usr/local` directory. However, this may not be allowed in your environment. If you can only install code into your home directory, that's OK. In the instructions above, replace `--prefix=/usr/local` with `--prefix=$HOME`. You will also need to change your `PATH` and `LD_LIBRARY_PATH` environment variables, to inform your shell where to find the tools. For example:
+```
+    export PATH=$HOME/bin:$PATH
+    export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH
+```
+
+Enter these lines in your `~/.bashrc` file so you don't need to type them every time you log in.
+
+
+
+  * Q. My build fails with an inscrutable message about "library not found".
+A. You need to set your `LD_LIBRARY_PATH`. The environment variable must include the `PREFIX/lib` directory (for instance, `/usr/local/lib`).
+
+
+
+###### MacOS
+
+First begin by installing the developer tools on Mac OS X:
+
+`xcode-select --install`
+
+You can install the qemu dependencies from Homebrew; however, do not install qemu itself, as you will need the 6.828 patched version.
+
+`brew install $(brew deps qemu)`
+
+The gettext utility does not add installed binaries to the path, so you will need to run
+
+`PATH=${PATH}:/usr/local/opt/gettext/bin make install`
+
+when installing qemu below.
+
+### QEMU Emulator
+
+[QEMU][8] is a modern and fast PC emulator.
QEMU version 2.3.0 is set up on Athena for x86 machines in the 6.828 locker (`add -f 6.828`) + +Unfortunately, QEMU's debugging facilities, while powerful, are somewhat immature, so we highly recommend you use our patched version of QEMU instead of the stock version that may come with your distribution. The version installed on Athena is already patched. To build your own patched version of QEMU: + + 1. Clone the IAP 6.828 QEMU git repository `git clone https://github.com/mit-pdos/6.828-qemu.git qemu` + 2. On Linux, you may need to install several libraries. We have successfully built 6.828 QEMU on Debian/Ubuntu 16.04 after installing the following packages: libsdl1.2-dev, libtool-bin, libglib2.0-dev, libz-dev, and libpixman-1-dev. + 3. Configure the source code (optional arguments are shown in square brackets; replace PFX with a path of your choice) + 1. Linux: `./configure --disable-kvm --disable-werror [--prefix=PFX] [--target-list="i386-softmmu x86_64-softmmu"]` + 2. OS X: `./configure --disable-kvm --disable-werror --disable-sdl [--prefix=PFX] [--target-list="i386-softmmu x86_64-softmmu"]` The `prefix` argument specifies where to install QEMU; without it QEMU will install to `/usr/local` by default. The `target-list` argument simply slims down the architectures QEMU will build support for. + 4. 
Run `make && make install` + + + + +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/tools.html + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://pdos.csail.mit.edu +[b]: https://github.com/lujun9972 +[1]: http://www.cygwin.com +[2]: labguide.html +[3]: http://www.oracle.com/us/technologies/virtualization/oraclevm/ +[4]: http://www.vmware.com/products/player/ +[5]: http://www.vmware.com/products/fusion/ +[6]: http://www.ubuntu.com/download/desktop +[7]: +[8]: http://www.nongnu.org/qemu/ +[9]: mailto:6828-staff@lists.csail.mit.edu +[10]: https://i.creativecommons.org/l/by/3.0/us/88x31.png +[11]: https://creativecommons.org/licenses/by/3.0/us/ +[12]: https://pdos.csail.mit.edu/6.828/2018/index.html diff --git a/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md b/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md new file mode 100644 index 0000000000..365b5eb5f8 --- /dev/null +++ b/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md @@ -0,0 +1,616 @@ +Lab 1: PC Bootstrap and GCC Calling Conventions +====== +### Lab 1: Booting a PC + +#### Introduction + +This lab is split into three parts. The first part concentrates on getting familiarized with x86 assembly language, the QEMU x86 emulator, and the PC's power-on bootstrap procedure. The second part examines the boot loader for our 6.828 kernel, which resides in the `boot` directory of the `lab` tree. Finally, the third part delves into the initial template for our 6.828 kernel itself, named JOS, which resides in the `kernel` directory. + +##### Software Setup + +The files you will need for this and subsequent lab assignments in this course are distributed using the [Git][1] version control system. 
To learn more about Git, take a look at the [Git user's manual][2], or, if you are already familiar with other version control systems, you may find this [CS-oriented overview of Git][3] useful. + +The URL for the course Git repository is . To install the files in your Athena account, you need to _clone_ the course repository, by running the commands below. You must use an x86 Athena machine; that is, `uname -a` should mention `i386 GNU/Linux` or `i686 GNU/Linux` or `x86_64 GNU/Linux`. You can log into a public Athena host with `ssh -X athena.dialup.mit.edu`. + +``` +athena% mkdir ~/6.828 +athena% cd ~/6.828 +athena% add git +athena% git clone https://pdos.csail.mit.edu/6.828/2018/jos.git lab +Cloning into lab... +athena% cd lab +athena% + +``` + +Git allows you to keep track of the changes you make to the code. For example, if you are finished with one of the exercises, and want to checkpoint your progress, you can _commit_ your changes by running: + +``` +athena% git commit -am 'my solution for lab1 exercise 9' +Created commit 60d2135: my solution for lab1 exercise 9 + 1 files changed, 1 insertions(+), 0 deletions(-) +athena% + +``` + +You can keep track of your changes by using the git diff command. Running git diff will display the changes to your code since your last commit, and git diff origin/lab1 will display the changes relative to the initial code supplied for this lab. Here, `origin/lab1` is the name of the git branch with the initial code you downloaded from our server for this assignment. + +We have set up the appropriate compilers and simulators for you on Athena. To use them, run add -f 6.828. You must run this command every time you log in (or add it to your `~/.environment` file). If you get obscure errors while compiling or running `qemu`, double check that you added the course locker. + +If you are working on a non-Athena machine, you'll need to install `qemu` and possibly `gcc` following the directions on the [tools page][4]. 
We've made several useful debugging changes to `qemu` and some of the later labs depend on these patches, so you must build your own. If your machine uses a native ELF toolchain (such as Linux and most BSD's, but notably _not_ OS X), you can simply install `gcc` from your package manager. Otherwise, follow the directions on the tools page.
+
+##### Hand-In Procedure
+
+You will turn in your assignments using the [submission website][5]. You need to request an API key from the submission website before you can turn in any assignments or labs.
+
+The lab code comes with GNU Make rules to make submission easier. After committing your final changes to the lab, type make handin to submit your lab.
+
+```
+athena% git commit -am "ready to submit my lab"
+[lab1 c2e3c8b] ready to submit my lab
+ 2 files changed, 18 insertions(+), 2 deletions(-)
+
+athena% make handin
+git archive --prefix=lab1/ --format=tar HEAD | gzip > lab1-handin.tar.gz
+Get an API key for yourself by visiting https://6828.scripts.mit.edu/2018/handin.py/
+Please enter your API key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+100 50199 100 241 100 49958 414 85824 --:--:-- --:--:-- --:--:-- 85986
+athena%
+
+```
+
+make handin will store your API key in _myapi.key_. If you need to change your API key, just remove this file and let make handin generate it again ( _myapi.key_ must not include newline characters).
+
+If you use make handin and you have either uncommitted changes or untracked files, you will see output similar to the following:
+
+```
+ M hello.c
+?? bar.c
+?? foo.pyc
+Untracked files will not be handed in. Continue? [y/N]
+
+```
+
+Inspect the above lines and make sure all files that your lab solution needs are tracked, i.e., not listed in a line that begins with ??.
+
+In the case that make handin does not work properly, try fixing the problem with the curl or Git commands. Or you can run make tarball.
This will make a tar file for you, which you can then upload via our [web interface][5].
+
+You can run make grade to test your solutions with the grading program. The [web interface][5] uses the same grading program to assign your lab submission a grade. You should check the output of the grader (it may take a few minutes since the grader runs periodically) and ensure that you received the grade which you expected. If the grades don't match, your lab submission probably has a bug -- check the output of the grader (resp-lab*.txt) to see which particular test failed.
+
+For Lab 1, you do not need to turn in answers to any of the questions below. (Do answer them for yourself though! They will help with the rest of the lab.)
+
+#### Part 1: PC Bootstrap
+
+The purpose of the first exercise is to introduce you to x86 assembly language and the PC bootstrap process, and to get you started with QEMU and QEMU/GDB debugging. You will not have to write any code for this part of the lab, but you should go through it anyway for your own understanding and be prepared to answer the questions posed below.
+
+##### Getting Started with x86 assembly
+
+If you are not already familiar with x86 assembly language, you will quickly become familiar with it during this course! The [PC Assembly Language Book][6] is an excellent place to start. Hopefully, the book contains a mixture of new and old material for you.
+
+_Warning:_ Unfortunately the examples in the book are written for the NASM assembler, whereas we will be using the GNU assembler. NASM uses the so-called _Intel_ syntax while GNU uses the _AT&T_ syntax. While semantically equivalent, an assembly file will differ quite a lot, at least superficially, depending on which syntax is used. Luckily the conversion between the two is pretty simple, and is covered in [Brennan's Guide to Inline Assembly][7].
+
+Exercise 1. Familiarize yourself with the assembly language materials available on [the 6.828 reference page][8].
You don't have to read them now, but you'll almost certainly want to refer to some of this material when reading and writing x86 assembly. + +We do recommend reading the section "The Syntax" in [Brennan's Guide to Inline Assembly][7]. It gives a good (and quite brief) description of the AT&T assembly syntax we'll be using with the GNU assembler in JOS. + +Certainly the definitive reference for x86 assembly language programming is Intel's instruction set architecture reference, which you can find on [the 6.828 reference page][8] in two flavors: an HTML edition of the old [80386 Programmer's Reference Manual][9], which is much shorter and easier to navigate than more recent manuals but describes all of the x86 processor features that we will make use of in 6.828; and the full, latest and greatest [IA-32 Intel Architecture Software Developer's Manuals][10] from Intel, covering all the features of the most recent processors that we won't need in class but you may be interested in learning about. An equivalent (and often friendlier) set of manuals is [available from AMD][11]. Save the Intel/AMD architecture manuals for later or use them for reference when you want to look up the definitive explanation of a particular processor feature or instruction. + +##### Simulating the x86 + +Instead of developing the operating system on a real, physical personal computer (PC), we use a program that faithfully emulates a complete PC: the code you write for the emulator will boot on a real PC too. Using an emulator simplifies debugging; you can, for example, set break points inside of the emulated x86, which is difficult to do with the silicon version of an x86. + +In 6.828 we will use the [QEMU Emulator][12], a modern and relatively fast emulator. While QEMU's built-in monitor provides only limited debugging support, QEMU can act as a remote debugging target for the [GNU debugger][13] (GDB), which we'll use in this lab to step through the early boot process. 
+ +To get started, extract the Lab 1 files into your own directory on Athena as described above in "Software Setup", then type make (or gmake on BSD systems) in the `lab` directory to build the minimal 6.828 boot loader and kernel you will start with. (It's a little generous to call the code we're running here a "kernel," but we'll flesh it out throughout the semester.) + +``` +athena% cd lab +athena% make ++ as kern/entry.S ++ cc kern/entrypgdir.c ++ cc kern/init.c ++ cc kern/console.c ++ cc kern/monitor.c ++ cc kern/printf.c ++ cc kern/kdebug.c ++ cc lib/printfmt.c ++ cc lib/readline.c ++ cc lib/string.c ++ ld obj/kern/kernel ++ as boot/boot.S ++ cc -Os boot/main.c ++ ld boot/boot +boot block is 380 bytes (max 510) ++ mk obj/kern/kernel.img + +``` + +(If you get errors like "undefined reference to `__udivdi3'", you probably don't have the 32-bit gcc multilib. If you're running Debian or Ubuntu, try installing the gcc-multilib package.) + +Now you're ready to run QEMU, supplying the file `obj/kern/kernel.img`, created above, as the contents of the emulated PC's "virtual hard disk." This hard disk image contains both our boot loader (`obj/boot/boot`) and our kernel (`obj/kernel`). + +``` +athena% make qemu + +``` + +or + +``` +athena% make qemu-nox + +``` + +This executes QEMU with the options required to set the hard disk and direct serial port output to the terminal. Some text should appear in the QEMU window: + +``` +Booting from Hard Disk... +6828 decimal is XXX octal! +entering test_backtrace 5 +entering test_backtrace 4 +entering test_backtrace 3 +entering test_backtrace 2 +entering test_backtrace 1 +entering test_backtrace 0 +leaving test_backtrace 0 +leaving test_backtrace 1 +leaving test_backtrace 2 +leaving test_backtrace 3 +leaving test_backtrace 4 +leaving test_backtrace 5 +Welcome to the JOS kernel monitor! +Type 'help' for a list of commands. 
+K> + +``` + +Everything after '`Booting from Hard Disk...`' was printed by our skeletal JOS kernel; the `K>` is the prompt printed by the small _monitor_ , or interactive control program, that we've included in the kernel. If you used make qemu, these lines printed by the kernel will appear in both the regular shell window from which you ran QEMU and the QEMU display window. This is because for testing and lab grading purposes we have set up the JOS kernel to write its console output not only to the virtual VGA display (as seen in the QEMU window), but also to the simulated PC's virtual serial port, which QEMU in turn outputs to its own standard output. Likewise, the JOS kernel will take input from both the keyboard and the serial port, so you can give it commands in either the VGA display window or the terminal running QEMU. Alternatively, you can use the serial console without the virtual VGA by running make qemu-nox. This may be convenient if you are SSH'd into an Athena dialup. To quit qemu, type Ctrl+a x. + +There are only two commands you can give to the kernel monitor, `help` and `kerninfo`. + +``` +K> help +help - display this list of commands +kerninfo - display information about the kernel +K> kerninfo +Special kernel symbols: + entry f010000c (virt) 0010000c (phys) + etext f0101a75 (virt) 00101a75 (phys) + edata f0112300 (virt) 00112300 (phys) + end f0112960 (virt) 00112960 (phys) +Kernel executable memory footprint: 75KB +K> + +``` + +The `help` command is obvious, and we will shortly discuss the meaning of what the `kerninfo` command prints. Although simple, it's important to note that this kernel monitor is running "directly" on the "raw (virtual) hardware" of the simulated PC. This means that you should be able to copy the contents of `obj/kern/kernel.img` onto the first few sectors of a _real_ hard disk, insert that hard disk into a real PC, turn it on, and see exactly the same thing on the PC's real screen as you did above in the QEMU window. 
(We don't recommend you do this on a real machine with useful information on its hard disk, though, because copying `kernel.img` onto the beginning of its hard disk will trash the master boot record and the beginning of the first partition, effectively causing everything previously on the hard disk to be lost!) + +##### The PC's Physical Address Space + +We will now dive into a bit more detail about how a PC starts up. A PC's physical address space is hard-wired to have the following general layout: + +``` ++------------------+ <- 0xFFFFFFFF (4GB) +| 32-bit | +| memory mapped | +| devices | +| | +/\/\/\/\/\/\/\/\/\/\ + +/\/\/\/\/\/\/\/\/\/\ +| | +| Unused | +| | ++------------------+ <- depends on amount of RAM +| | +| | +| Extended Memory | +| | +| | ++------------------+ <- 0x00100000 (1MB) +| BIOS ROM | ++------------------+ <- 0x000F0000 (960KB) +| 16-bit devices, | +| expansion ROMs | ++------------------+ <- 0x000C0000 (768KB) +| VGA Display | ++------------------+ <- 0x000A0000 (640KB) +| | +| Low Memory | +| | ++------------------+ <- 0x00000000 + +``` + +The first PCs, which were based on the 16-bit Intel 8088 processor, were only capable of addressing 1MB of physical memory. The physical address space of an early PC would therefore start at 0x00000000 but end at 0x000FFFFF instead of 0xFFFFFFFF. The 640KB area marked "Low Memory" was the _only_ random-access memory (RAM) that an early PC could use; in fact the very earliest PCs only could be configured with 16KB, 32KB, or 64KB of RAM! + +The 384KB area from 0x000A0000 through 0x000FFFFF was reserved by the hardware for special uses such as video display buffers and firmware held in non-volatile memory. The most important part of this reserved area is the Basic Input/Output System (BIOS), which occupies the 64KB region from 0x000F0000 through 0x000FFFFF. In early PCs the BIOS was held in true read-only memory (ROM), but current PCs store the BIOS in updateable flash memory. 
The BIOS is responsible for performing basic system initialization such as activating the video card and checking the amount of memory installed. After performing this initialization, the BIOS loads the operating system from some appropriate location such as floppy disk, hard disk, CD-ROM, or the network, and passes control of the machine to the operating system. + +When Intel finally "broke the one megabyte barrier" with the 80286 and 80386 processors, which supported 16MB and 4GB physical address spaces respectively, the PC architects nevertheless preserved the original layout for the low 1MB of physical address space in order to ensure backward compatibility with existing software. Modern PCs therefore have a "hole" in physical memory from 0x000A0000 to 0x00100000, dividing RAM into "low" or "conventional memory" (the first 640KB) and "extended memory" (everything else). In addition, some space at the very top of the PC's 32-bit physical address space, above all physical RAM, is now commonly reserved by the BIOS for use by 32-bit PCI devices. + +Recent x86 processors can support _more_ than 4GB of physical RAM, so RAM can extend further above 0xFFFFFFFF. In this case the BIOS must arrange to leave a _second_ hole in the system's RAM at the top of the 32-bit addressable region, to leave room for these 32-bit devices to be mapped. Because of design limitations JOS will use only the first 256MB of a PC's physical memory anyway, so for now we will pretend that all PCs have "only" a 32-bit physical address space. But dealing with complicated physical address spaces and other aspects of hardware organization that evolved over many years is one of the important practical challenges of OS development. + +##### The ROM BIOS + +In this portion of the lab, you'll use QEMU's debugging facilities to investigate how an IA-32 compatible computer boots. + +Open two terminal windows and cd both shells into your lab directory. In one, enter make qemu-gdb (or make qemu-nox-gdb). 
This starts up QEMU, but QEMU stops just before the processor executes the first instruction and waits for a debugging connection from GDB. In the second terminal, from the same directory you ran `make`, run make gdb. You should see something like this, + +``` +athena% make gdb +GNU gdb (GDB) 6.8-debian +Copyright (C) 2008 Free Software Foundation, Inc. +License GPLv3+: GNU GPL version 3 or later +This is free software: you are free to change and redistribute it. +There is NO WARRANTY, to the extent permitted by law. Type "show copying" +and "show warranty" for details. +This GDB was configured as "i486-linux-gnu". ++ target remote localhost:26000 +The target architecture is assumed to be i8086 +[f000:fff0] 0xffff0: ljmp $0xf000,$0xe05b +0x0000fff0 in ?? () ++ symbol-file obj/kern/kernel +(gdb) + +``` + +We provided a `.gdbinit` file that set up GDB to debug the 16-bit code used during early boot and directed it to attach to the listening QEMU. (If it doesn't work, you may have to add an `add-auto-load-safe-path` in your `.gdbinit` in your home directory to convince `gdb` to process the `.gdbinit` we provided. `gdb` will tell you if you have to do this.) + +The following line: + +``` +[f000:fff0] 0xffff0: ljmp $0xf000,$0xe05b + +``` + +is GDB's disassembly of the first instruction to be executed. From this output you can conclude a few things: + + * The IBM PC starts executing at physical address 0x000ffff0, which is at the very top of the 64KB area reserved for the ROM BIOS. + * The PC starts executing with `CS = 0xf000` and `IP = 0xfff0`. + * The first instruction to be executed is a `jmp` instruction, which jumps to the segmented address `CS = 0xf000` and `IP = 0xe05b`. + + + +Why does QEMU start like this? This is how Intel designed the 8088 processor, which IBM used in their original PC. 
Because the BIOS in a PC is "hard-wired" to the physical address range 0x000f0000-0x000fffff, this design ensures that the BIOS always gets control of the machine first after power-up or any system restart - which is crucial because on power-up there _is_ no other software anywhere in the machine's RAM that the processor could execute. The QEMU emulator comes with its own BIOS, which it places at this location in the processor's simulated physical address space. On processor reset, the (simulated) processor enters real mode and sets CS to 0xf000 and the IP to 0xfff0, so that execution begins at that (CS:IP) segment address. How does the segmented address 0xf000:fff0 turn into a physical address?
+
+To answer that we need to know a bit about real mode addressing. In real mode (the mode that the PC starts off in), address translation works according to the formula: _physical address_ = 16 * _segment_ \+ _offset_. So, when the PC sets CS to 0xf000 and IP to 0xfff0, the physical address referenced is:
+
+```
+ 16 * 0xf000 + 0xfff0 # in hex multiplication by 16 is
+ = 0xf0000 + 0xfff0 # easy--just append a 0.
+ = 0xffff0
+
+```
+
+`0xffff0` is 16 bytes before the end of the BIOS (`0x100000`). Therefore we shouldn't be surprised that the first thing that the BIOS does is `jmp` backwards to an earlier location in the BIOS; after all how much could it accomplish in just 16 bytes?
+
+Exercise 2. Use GDB's si (Step Instruction) command to trace into the ROM BIOS for a few more instructions, and try to guess what it might be doing. You might want to look at [Phil Storrs I/O Ports Description][14], as well as other materials on the [6.828 reference materials page][8]. No need to figure out all the details - just the general idea of what the BIOS is doing first.
+ +When the BIOS runs, it sets up an interrupt descriptor table and initializes various devices such as the VGA display. This is where the "`Starting SeaBIOS`" message you see in the QEMU window comes from. + +After initializing the PCI bus and all the important devices the BIOS knows about, it searches for a bootable device such as a floppy, hard drive, or CD-ROM. Eventually, when it finds a bootable disk, the BIOS reads the _boot loader_ from the disk and transfers control to it. + +#### Part 2: The Boot Loader + +Floppy and hard disks for PCs are divided into 512 byte regions called _sectors_. A sector is the disk's minimum transfer granularity: each read or write operation must be one or more sectors in size and aligned on a sector boundary. If the disk is bootable, the first sector is called the _boot sector_ , since this is where the boot loader code resides. When the BIOS finds a bootable floppy or hard disk, it loads the 512-byte boot sector into memory at physical addresses 0x7c00 through 0x7dff, and then uses a `jmp` instruction to set the CS:IP to `0000:7c00`, passing control to the boot loader. Like the BIOS load address, these addresses are fairly arbitrary - but they are fixed and standardized for PCs. + +The ability to boot from a CD-ROM came much later during the evolution of the PC, and as a result the PC architects took the opportunity to rethink the boot process slightly. As a result, the way a modern BIOS boots from a CD-ROM is a bit more complicated (and more powerful). CD-ROMs use a sector size of 2048 bytes instead of 512, and the BIOS can load a much larger boot image from the disk into memory (not just one sector) before transferring control to it. For more information, see the ["El Torito" Bootable CD-ROM Format Specification][15]. + +For 6.828, however, we will use the conventional hard drive boot mechanism, which means that our boot loader must fit into a measly 512 bytes. 
The boot loader consists of one assembly language source file, `boot/boot.S`, and one C source file, `boot/main.c`. Look through these source files carefully and make sure you understand what's going on. The boot loader must perform two main functions:
+
+ 1. First, the boot loader switches the processor from real mode to _32-bit protected mode_ , because it is only in this mode that software can access all the memory above 1MB in the processor's physical address space. Protected mode is described briefly in sections 1.2.7 and 1.2.8 of [PC Assembly Language][6], and in great detail in the Intel architecture manuals. At this point you only have to understand that translation of segmented addresses (segment:offset pairs) into physical addresses happens differently in protected mode, and that after the transition offsets are 32 bits instead of 16.
+ 2. Second, the boot loader reads the kernel from the hard disk by directly accessing the IDE disk device registers via the x86's special I/O instructions. If you would like to understand better what the particular I/O instructions here mean, check out the "IDE hard drive controller" section on [the 6.828 reference page][8]. You will not need to learn much about programming specific devices in this class: writing device drivers is in practice a very important part of OS development, but from a conceptual or architectural viewpoint it is also one of the least interesting.
+
+
+
+After you understand the boot loader source code, look at the file `obj/boot/boot.asm`. This file is a disassembly of the boot loader that our GNUmakefile creates _after_ compiling the boot loader. This disassembly file makes it easy to see exactly where in physical memory all of the boot loader's code resides, and makes it easier to track what's happening while stepping through the boot loader in GDB. Likewise, `obj/kern/kernel.asm` contains a disassembly of the JOS kernel, which can often be useful for debugging.
+
+You can set address breakpoints in GDB with the `b` command. For example, b *0x7c00 sets a breakpoint at address 0x7C00. Once at a breakpoint, you can continue execution using the c and si commands: c causes QEMU to continue execution until the next breakpoint (or until you press Ctrl-C in GDB), and si _N_ steps through the instructions _`N`_ at a time.
+
+To examine instructions in memory (besides the immediate next one to be executed, which GDB prints automatically), you use the x/i command. This command has the syntax x/ _N_ i _ADDR_ , where _N_ is the number of consecutive instructions to disassemble and _ADDR_ is the memory address at which to start disassembling.
+
+Exercise 3. Take a look at the [lab tools guide][16], especially the section on GDB commands. Even if you're familiar with GDB, this includes some esoteric GDB commands that are useful for OS work.
+
+Set a breakpoint at address 0x7c00, which is where the boot sector will be loaded. Continue execution until that breakpoint. Trace through the code in `boot/boot.S`, using the source code and the disassembly file `obj/boot/boot.asm` to keep track of where you are. Also use the `x/i` command in GDB to disassemble sequences of instructions in the boot loader, and compare the original boot loader source code with both the disassembly in `obj/boot/boot.asm` and GDB.
+
+Trace into `bootmain()` in `boot/main.c`, and then into `readsect()`. Identify the exact assembly instructions that correspond to each of the statements in `readsect()`. Trace through the rest of `readsect()` and back out into `bootmain()`, and identify the beginning and end of the `for` loop that reads the remaining sectors of the kernel from the disk. Find out what code will run when the loop is finished, set a breakpoint there, and continue to that breakpoint. Then step through the remainder of the boot loader.
+
+Be able to answer the following questions:
+
+ * At what point does the processor start executing 32-bit code?
What exactly causes the switch from 16- to 32-bit mode?
+ * What is the _last_ instruction of the boot loader executed, and what is the _first_ instruction of the kernel it just loaded?
+ * _Where_ is the first instruction of the kernel?
+ * How does the boot loader decide how many sectors it must read in order to fetch the entire kernel from disk? Where does it find this information?
+
+
+
+##### Loading the Kernel
+
+We will now look in further detail at the C language portion of the boot loader, in `boot/main.c`. But before doing so, this is a good time to stop and review some of the basics of C programming.
+
+Exercise 4. Read about programming with pointers in C. The best reference for the C language is _The C Programming Language_ by Brian Kernighan and Dennis Ritchie (known as 'K&R'). We recommend that students purchase this book (here is an [Amazon Link][17]) or find one of [MIT's 7 copies][18].
+
+Read 5.1 (Pointers and Addresses) through 5.5 (Character Pointers and Functions) in K&R. Then download the code for [pointers.c][19], run it, and make sure you understand where all of the printed values come from. In particular, make sure you understand where the pointer addresses in printed lines 1 and 6 come from, how all the values in printed lines 2 through 4 get there, and why the values printed in line 5 are seemingly corrupted.
+
+There are other references on pointers in C (e.g., [A tutorial by Ted Jensen][20] that cites K&R heavily), though not as strongly recommended.
+
+_Warning:_ Unless you are already thoroughly versed in C, do not skip or even skim this reading exercise. If you do not really understand pointers in C, you will suffer untold pain and misery in subsequent labs, and then eventually come to understand them the hard way. Trust us; you don't want to find out what "the hard way" is.
+
+To make sense out of `boot/main.c` you'll need to know what an ELF binary is.
When you compile and link a C program such as the JOS kernel, the compiler transforms each C source ('`.c`') file into an _object_ ('`.o`') file containing assembly language instructions encoded in the binary format expected by the hardware. The linker then combines all of the compiled object files into a single _binary image_ such as `obj/kern/kernel`, which in this case is a binary in the ELF format, which stands for "Executable and Linkable Format". + +Full information about this format is available in [the ELF specification][21] on [our reference page][8], but you will not need to delve very deeply into the details of this format in this class. Although as a whole the format is quite powerful and complex, most of the complex parts are for supporting dynamic loading of shared libraries, which we will not do in this class. The [Wikipedia page][22] has a short description. + +For purposes of 6.828, you can consider an ELF executable to be a header with loading information, followed by several _program sections_ , each of which is a contiguous chunk of code or data intended to be loaded into memory at a specified address. The boot loader does not modify the code or data; it loads it into memory and starts executing it. + +An ELF binary starts with a fixed-length _ELF header_ , followed by a variable-length _program header_ listing each of the program sections to be loaded. The C definitions for these ELF headers are in `inc/elf.h`. The program sections we're interested in are: + + * `.text`: The program's executable instructions. + * `.rodata`: Read-only data, such as ASCII string constants produced by the C compiler. (We will not bother setting up the hardware to prohibit writing, however.) + * `.data`: The data section holds the program's initialized data, such as global variables declared with initializers like `int x = 5;`. 
+ + + +When the linker computes the memory layout of a program, it reserves space for _uninitialized_ global variables, such as `int x;`, in a section called `.bss` that immediately follows `.data` in memory. C requires that "uninitialized" global variables start with a value of zero. Thus there is no need to store contents for `.bss` in the ELF binary; instead, the linker records just the address and size of the `.bss` section. The loader or the program itself must arrange to zero the `.bss` section. + +Examine the full list of the names, sizes, and link addresses of all the sections in the kernel executable by typing: + +``` +athena% objdump -h obj/kern/kernel + +(If you compiled your own toolchain, you may need to use i386-jos-elf-objdump) + +``` + +You will see many more sections than the ones we listed above, but the others are not important for our purposes. Most of the others are to hold debugging information, which is typically included in the program's executable file but not loaded into memory by the program loader. + +Take particular note of the "VMA" (or _link address_ ) and the "LMA" (or _load address_ ) of the `.text` section. The load address of a section is the memory address at which that section should be loaded into memory. + +The link address of a section is the memory address from which the section expects to execute. The linker encodes the link address in the binary in various ways, such as when the code needs the address of a global variable, with the result that a binary usually won't work if it is executing from an address that it is not linked for. (It is possible to generate _position-independent_ code that does not contain any such absolute addresses. This is used extensively by modern shared libraries, but it has performance and complexity costs, so we won't be using it in 6.828.) + +Typically, the link and load addresses are the same. 
For example, look at the `.text` section of the boot loader: + +``` +athena% objdump -h obj/boot/boot.out + +``` + +The boot loader uses the ELF _program headers_ to decide how to load the sections. The program headers specify which parts of the ELF object to load into memory and the destination address each should occupy. You can inspect the program headers by typing: + +``` +athena% objdump -x obj/kern/kernel + +``` + +The program headers are then listed under "Program Headers" in the output of objdump. The areas of the ELF object that need to be loaded into memory are those that are marked as "LOAD". Other information for each program header is given, such as the virtual address ("vaddr"), the physical address ("paddr"), and the size of the loaded area ("memsz" and "filesz"). + +Back in boot/main.c, the `ph->p_pa` field of each program header contains the segment's destination physical address (in this case, it really is a physical address, though the ELF specification is vague on the actual meaning of this field). + +The BIOS loads the boot sector into memory starting at address 0x7c00, so this is the boot sector's load address. This is also where the boot sector executes from, so this is also its link address. We set the link address by passing `-Ttext 0x7C00` to the linker in `boot/Makefrag`, so the linker will produce the correct memory addresses in the generated code. + +Exercise 5. Trace through the first few instructions of the boot loader again and identify the first instruction that would "break" or otherwise do the wrong thing if you were to get the boot loader's link address wrong. Then change the link address in `boot/Makefrag` to something wrong, run make clean, recompile the lab with make, and trace into the boot loader again to see what happens. Don't forget to change the link address back and make clean again afterward! + +Look back at the load and link addresses for the kernel. 
Unlike the boot loader, these two addresses aren't the same: the kernel is telling the boot loader to load it into memory at a low address (1 megabyte), but it expects to execute from a high address. We'll dig in to how we make this work in the next section. + +Besides the section information, there is one more field in the ELF header that is important to us, named `e_entry`. This field holds the link address of the _entry point_ in the program: the memory address in the program's text section at which the program should begin executing. You can see the entry point: + +``` +athena% objdump -f obj/kern/kernel + +``` + +You should now be able to understand the minimal ELF loader in `boot/main.c`. It reads each section of the kernel from disk into memory at the section's load address and then jumps to the kernel's entry point. + +Exercise 6. We can examine memory using GDB's x command. The [GDB manual][23] has full details, but for now, it is enough to know that the command x/ _N_ x _ADDR_ prints _`N`_ words of memory at _`ADDR`_. (Note that both '`x`'s in the command are lowercase.) _Warning_ : The size of a word is not a universal standard. In GNU assembly, a word is two bytes (the 'w' in xorw, which stands for word, means 2 bytes). + +Reset the machine (exit QEMU/GDB and start them again). Examine the 8 words of memory at 0x00100000 at the point the BIOS enters the boot loader, and then again at the point the boot loader enters the kernel. Why are they different? What is there at the second breakpoint? (You do not really need to use QEMU to answer this question. Just think.) + +#### Part 3: The Kernel + +We will now start to examine the minimal JOS kernel in a bit more detail. (And you will finally get to write some code!). Like the boot loader, the kernel begins with some assembly language code that sets things up so that C language code can execute properly. 

##### Using virtual memory to work around position dependence

When you inspected the boot loader's link and load addresses above, they matched perfectly, but there was a (rather large) disparity between the _kernel's_ link address (as printed by objdump) and its load address. Go back and check both and make sure you can see what we're talking about. (Linking the kernel is more complicated than linking the boot loader, so the link and load addresses are at the top of `kern/kernel.ld`.)

Operating system kernels often like to be linked and run at a very high _virtual address_, such as 0xf0100000, in order to leave the lower part of the processor's virtual address space for user programs to use. The reason for this arrangement will become clearer in the next lab.

Many machines don't have any physical memory at address 0xf0100000, so we can't count on being able to store the kernel there. Instead, we will use the processor's memory management hardware to map virtual address 0xf0100000 (the link address at which the kernel code _expects_ to run) to physical address 0x00100000 (where the boot loader loaded the kernel into physical memory). This way, although the kernel's virtual address is high enough to leave plenty of address space for user processes, it will be loaded in physical memory at the 1MB point in the PC's RAM, just above the BIOS ROM. This approach requires that the PC have at least a few megabytes of physical memory (so that physical address 0x00100000 works), but this is likely to be true of any PC built after about 1990.

In fact, in the next lab, we will map the _entire_ bottom 256MB of the PC's physical address space, from physical addresses 0x00000000 through 0x0fffffff, to virtual addresses 0xf0000000 through 0xffffffff respectively. You should now see why JOS can only use the first 256MB of physical memory.

For now, we'll just map the first 4MB of physical memory, which will be enough to get us up and running.
We do this using the hand-written, statically-initialized page directory and page table in `kern/entrypgdir.c`. For now, you don't have to understand the details of how this works, just the effect that it accomplishes. Up until `kern/entry.S` sets the `CR0_PG` flag, memory references are treated as physical addresses (strictly speaking, they're linear addresses, but boot/boot.S set up an identity mapping from linear addresses to physical addresses and we're never going to change that). Once `CR0_PG` is set, memory references are virtual addresses that get translated by the virtual memory hardware to physical addresses. `entry_pgdir` translates virtual addresses in the range 0xf0000000 through 0xf0400000 to physical addresses 0x00000000 through 0x00400000, as well as virtual addresses 0x00000000 through 0x00400000 to physical addresses 0x00000000 through 0x00400000. Any virtual address that is not in one of these two ranges will cause a hardware exception which, since we haven't set up interrupt handling yet, will cause QEMU to dump the machine state and exit (or endlessly reboot if you aren't using the 6.828-patched version of QEMU). + +Exercise 7. Use QEMU and GDB to trace into the JOS kernel and stop at the `movl %eax, %cr0`. Examine memory at 0x00100000 and at 0xf0100000. Now, single step over that instruction using the stepi GDB command. Again, examine memory at 0x00100000 and at 0xf0100000. Make sure you understand what just happened. + +What is the first instruction _after_ the new mapping is established that would fail to work properly if the mapping weren't in place? Comment out the `movl %eax, %cr0` in `kern/entry.S`, trace into it, and see if you were right. + +##### Formatted Printing to the Console + +Most people take functions like `printf()` for granted, sometimes even thinking of them as "primitives" of the C language. But in an OS kernel, we have to implement all I/O ourselves. 

Read through `kern/printf.c`, `lib/printfmt.c`, and `kern/console.c`, and make sure you understand their relationship. It will become clear in later labs why `printfmt.c` is located in the separate `lib` directory.

Exercise 8. We have omitted a small fragment of code - the code necessary to print octal numbers using patterns of the form "%o". Find and fill in this code fragment.

Be able to answer the following questions:

 1. Explain the interface between `printf.c` and `console.c`. Specifically, what function does `console.c` export? How is this function used by `printf.c`?

 2. Explain the following from `console.c`:
```
 if (crt_pos >= CRT_SIZE) {
 	int i;
 	memmove(crt_buf, crt_buf + CRT_COLS, (CRT_SIZE - CRT_COLS) * sizeof(uint16_t));
 	for (i = CRT_SIZE - CRT_COLS; i < CRT_SIZE; i++)
 		crt_buf[i] = 0x0700 | ' ';
 	crt_pos -= CRT_COLS;
 }

```

 3. For the following questions you might wish to consult the notes for Lecture 2. These notes cover GCC's calling convention on the x86.

Trace the execution of the following code step-by-step:
```
 int x = 1, y = 3, z = 4;
 cprintf("x %d, y %x, z %d\n", x, y, z);

```

 * In the call to `cprintf()`, to what does `fmt` point? To what does `ap` point?
 * List (in order of execution) each call to `cons_putc`, `va_arg`, and `vcprintf`. For `cons_putc`, list its argument as well. For `va_arg`, list what `ap` points to before and after the call. For `vcprintf` list the values of its two arguments.
 4. Run the following code.
```
 unsigned int i = 0x00646c72;
 cprintf("H%x Wo%s", 57616, &i);

```

What is the output? Explain how this output is arrived at in the step-by-step manner of the previous exercise. [Here's an ASCII table][24] that maps bytes to characters.

The output depends on the fact that the x86 is little-endian. If the x86 were instead big-endian, what would you set `i` to in order to yield the same output? Would you need to change `57616` to a different value?

[Here's a description of little- and big-endian][25] and [a more whimsical description][26].

 5. In the following code, what is going to be printed after `'y='`? (note: the answer is not a specific value.) Why does this happen?
```
 cprintf("x=%d y=%d", 3);

```

 6. Let's say that GCC changed its calling convention so that it pushed arguments on the stack in declaration order, so that the last argument is pushed last. How would you have to change `cprintf` or its interface so that it would still be possible to pass it a variable number of arguments?

Challenge Enhance the console to allow text to be printed in different colors. The traditional way to do this is to make it interpret [ANSI escape sequences][27] embedded in the text strings printed to the console, but you may use any mechanism you like. There is plenty of information on [the 6.828 reference page][8] and elsewhere on the web on programming the VGA display hardware. If you're feeling really adventurous, you could try switching the VGA hardware into a graphics mode and making the console draw text onto the graphical frame buffer.

##### The Stack

In the final exercise of this lab, we will explore in more detail the way the C language uses the stack on the x86, and in the process write a useful new kernel monitor function that prints a _backtrace_ of the stack: a list of the saved Instruction Pointer (IP) values from the nested `call` instructions that led to the current point of execution.

Exercise 9. Determine where the kernel initializes its stack, and exactly where in memory its stack is located. How does the kernel reserve space for its stack? And at which "end" of this reserved area is the stack pointer initialized to point to?
+ +The x86 stack pointer (`esp` register) points to the lowest location on the stack that is currently in use. Everything _below_ that location in the region reserved for the stack is free. Pushing a value onto the stack involves decreasing the stack pointer and then writing the value to the place the stack pointer points to. Popping a value from the stack involves reading the value the stack pointer points to and then increasing the stack pointer. In 32-bit mode, the stack can only hold 32-bit values, and esp is always divisible by four. Various x86 instructions, such as `call`, are "hard-wired" to use the stack pointer register. + +The `ebp` (base pointer) register, in contrast, is associated with the stack primarily by software convention. On entry to a C function, the function's _prologue_ code normally saves the previous function's base pointer by pushing it onto the stack, and then copies the current `esp` value into `ebp` for the duration of the function. If all the functions in a program obey this convention, then at any given point during the program's execution, it is possible to trace back through the stack by following the chain of saved `ebp` pointers and determining exactly what nested sequence of function calls caused this particular point in the program to be reached. This capability can be particularly useful, for example, when a particular function causes an `assert` failure or `panic` because bad arguments were passed to it, but you aren't sure _who_ passed the bad arguments. A stack backtrace lets you find the offending function. + +Exercise 10. To become familiar with the C calling conventions on the x86, find the address of the `test_backtrace` function in `obj/kern/kernel.asm`, set a breakpoint there, and examine what happens each time it gets called after the kernel starts. How many 32-bit words does each recursive nesting level of `test_backtrace` push on the stack, and what are those words? 
+ +Note that, for this exercise to work properly, you should be using the patched version of QEMU available on the [tools][4] page or on Athena. Otherwise, you'll have to manually translate all breakpoint and memory addresses to linear addresses. + +The above exercise should give you the information you need to implement a stack backtrace function, which you should call `mon_backtrace()`. A prototype for this function is already waiting for you in `kern/monitor.c`. You can do it entirely in C, but you may find the `read_ebp()` function in `inc/x86.h` useful. You'll also have to hook this new function into the kernel monitor's command list so that it can be invoked interactively by the user. + +The backtrace function should display a listing of function call frames in the following format: + +``` +Stack backtrace: + ebp f0109e58 eip f0100a62 args 00000001 f0109e80 f0109e98 f0100ed2 00000031 + ebp f0109ed8 eip f01000d6 args 00000000 00000000 f0100058 f0109f28 00000061 + ... + +``` + +Each line contains an `ebp`, `eip`, and `args`. The `ebp` value indicates the base pointer into the stack used by that function: i.e., the position of the stack pointer just after the function was entered and the function prologue code set up the base pointer. The listed `eip` value is the function's _return instruction pointer_ : the instruction address to which control will return when the function returns. The return instruction pointer typically points to the instruction after the `call` instruction (why?). Finally, the five hex values listed after `args` are the first five arguments to the function in question, which would have been pushed on the stack just before the function was called. If the function was called with fewer than five arguments, of course, then not all five of these values will be useful. (Why can't the backtrace code detect how many arguments there actually are? How could this limitation be fixed?) 
+ +The first line printed reflects the _currently executing_ function, namely `mon_backtrace` itself, the second line reflects the function that called `mon_backtrace`, the third line reflects the function that called that one, and so on. You should print _all_ the outstanding stack frames. By studying `kern/entry.S` you'll find that there is an easy way to tell when to stop. + +Here are a few specific points you read about in K&R Chapter 5 that are worth remembering for the following exercise and for future labs. + + * If `int *p = (int*)100`, then `(int)p + 1` and `(int)(p + 1)` are different numbers: the first is `101` but the second is `104`. When adding an integer to a pointer, as in the second case, the integer is implicitly multiplied by the size of the object the pointer points to. + * `p[i]` is defined to be the same as `*(p+i)`, referring to the i'th object in the memory pointed to by p. The above rule for addition helps this definition work when the objects are larger than one byte. + * `&p[i]` is the same as `(p+i)`, yielding the address of the i'th object in the memory pointed to by p. + + + +Although most C programs never need to cast between pointers and integers, operating systems frequently do. Whenever you see an addition involving a memory address, ask yourself whether it is an integer addition or pointer addition and make sure the value being added is appropriately multiplied or not. + +Exercise 11. Implement the backtrace function as specified above. Use the same format as in the example, since otherwise the grading script will be confused. When you think you have it working right, run make grade to see if its output conforms to what our grading script expects, and fix it if it doesn't. _After_ you have handed in your Lab 1 code, you are welcome to change the output format of the backtrace function any way you like. 
+ +If you use `read_ebp()`, note that GCC may generate "optimized" code that calls `read_ebp()` _before_ `mon_backtrace()`'s function prologue, which results in an incomplete stack trace (the stack frame of the most recent function call is missing). While we have tried to disable optimizations that cause this reordering, you may want to examine the assembly of `mon_backtrace()` and make sure the call to `read_ebp()` is happening after the function prologue. + +At this point, your backtrace function should give you the addresses of the function callers on the stack that lead to `mon_backtrace()` being executed. However, in practice you often want to know the function names corresponding to those addresses. For instance, you may want to know which functions could contain a bug that's causing your kernel to crash. + +To help you implement this functionality, we have provided the function `debuginfo_eip()`, which looks up `eip` in the symbol table and returns the debugging information for that address. This function is defined in `kern/kdebug.c`. + +Exercise 12. Modify your stack backtrace function to display, for each `eip`, the function name, source file name, and line number corresponding to that `eip`. + +In `debuginfo_eip`, where do `__STAB_*` come from? This question has a long answer; to help you to discover the answer, here are some things you might want to do: + + * look in the file `kern/kernel.ld` for `__STAB_*` + * run objdump -h obj/kern/kernel + * run objdump -G obj/kern/kernel + * run gcc -pipe -nostdinc -O2 -fno-builtin -I. -MD -Wall -Wno-format -DJOS_KERNEL -gstabs -c -S kern/init.c, and look at init.s. + * see if the bootloader loads the symbol table in memory as part of loading the kernel binary + + + +Complete the implementation of `debuginfo_eip` by inserting the call to `stab_binsearch` to find the line number for an address. 
+ +Add a `backtrace` command to the kernel monitor, and extend your implementation of `mon_backtrace` to call `debuginfo_eip` and print a line for each stack frame of the form: + +``` +K> backtrace +Stack backtrace: + ebp f010ff78 eip f01008ae args 00000001 f010ff8c 00000000 f0110580 00000000 + kern/monitor.c:143: monitor+106 + ebp f010ffd8 eip f0100193 args 00000000 00001aac 00000660 00000000 00000000 + kern/init.c:49: i386_init+59 + ebp f010fff8 eip f010003d args 00000000 00000000 0000ffff 10cf9a00 0000ffff + kern/entry.S:70: +0 +K> + +``` + +Each line gives the file name and line within that file of the stack frame's `eip`, followed by the name of the function and the offset of the `eip` from the first instruction of the function (e.g., `monitor+106` means the return `eip` is 106 bytes past the beginning of `monitor`). + +Be sure to print the file and function names on a separate line, to avoid confusing the grading script. + +Tip: printf format strings provide an easy, albeit obscure, way to print non-null-terminated strings like those in STABS tables. `printf("%.*s", length, string)` prints at most `length` characters of `string`. Take a look at the printf man page to find out why this works. + +You may find that some functions are missing from the backtrace. For example, you will probably see a call to `monitor()` but not to `runcmd()`. This is because the compiler in-lines some function calls. Other optimizations may cause you to see unexpected line numbers. If you get rid of the `-O2` from `GNUMakefile`, the backtraces may make more sense (but your kernel will run more slowly). + +**This completes the lab.** In the `lab` directory, commit your changes with git commit and type make handin to submit your code. 
+ +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/labs/lab1/ + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: http://www.git-scm.com/ +[2]: http://www.kernel.org/pub/software/scm/git/docs/user-manual.html +[3]: http://eagain.net/articles/git-for-computer-scientists/ +[4]: https://pdos.csail.mit.edu/6.828/2018/tools.html +[5]: https://6828.scripts.mit.edu/2018/handin.py/ +[6]: https://pdos.csail.mit.edu/6.828/2018/readings/pcasm-book.pdf +[7]: http://www.delorie.com/djgpp/doc/brennan/brennan_att_inline_djgpp.html +[8]: https://pdos.csail.mit.edu/6.828/2018/reference.html +[9]: https://pdos.csail.mit.edu/6.828/2018/readings/i386/toc.htm +[10]: http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html +[11]: http://developer.amd.com/resources/developer-guides-manuals/ +[12]: http://www.qemu.org/ +[13]: http://www.gnu.org/software/gdb/ +[14]: http://web.archive.org/web/20040404164813/members.iweb.net.au/~pstorr/pcbook/book2/book2.htm +[15]: https://pdos.csail.mit.edu/6.828/2018/readings/boot-cdrom.pdf +[16]: https://pdos.csail.mit.edu/6.828/2018/labguide.html +[17]: http://www.amazon.com/C-Programming-Language-2nd/dp/0131103628/sr=8-1/qid=1157812738/ref=pd_bbs_1/104-1502762-1803102?ie=UTF8&s=books +[18]: http://library.mit.edu/F/AI9Y4SJ2L5ELEE2TAQUAAR44XV5RTTQHE47P9MKP5GQDLR9A8X-10422?func=item-global&doc_library=MIT01&doc_number=000355242&year=&volume=&sub_library= +[19]: https://pdos.csail.mit.edu/6.828/2018/labs/lab1/pointers.c +[20]: https://pdos.csail.mit.edu/6.828/2018/readings/pointers.pdf +[21]: https://pdos.csail.mit.edu/6.828/2018/readings/elf.pdf +[22]: http://en.wikipedia.org/wiki/Executable_and_Linkable_Format +[23]: 
https://sourceware.org/gdb/current/onlinedocs/gdb/Memory.html +[24]: http://web.cs.mun.ca/~michael/c/ascii-table.html +[25]: http://www.webopedia.com/TERM/b/big_endian.html +[26]: http://www.networksorcery.com/enp/ien/ien137.txt +[27]: http://rrbrandt.dee.ufcg.edu.br/en/docs/ansi/ diff --git a/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md b/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md deleted file mode 100644 index b7082ea141..0000000000 --- a/sources/tech/20180921 Clinews - Read News And Latest Headlines From Commandline.md +++ /dev/null @@ -1,138 +0,0 @@ -translating----geekpi - -Clinews – Read News And Latest Headlines From Commandline -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-720x340.jpeg) - -A while ago, we have written about a CLI news client named [**InstantNews**][1] that helps you to read news and latest headlines from commandline instantly. Today, I stumbled upon a similar utility named **Clinews** which serves the same purpose – reading news and latest headlines from popular websites, blogs from Terminal. You don’t need to install GUI applications or mobile apps. You can read what’s happening in the world right from your Terminal. It is free, open source utility written using **NodeJS**. - -### Installing Clinews - -Since Clinews is written using NodeJS, you can install it using NPM package manager. If you haven’t install NodeJS, install it as described in the following link. - -Once node installed, run the following command to install Clinews: - -``` -$ npm i -g clinews -``` - -You can also install Clinews using **Yarn** : - -``` -$ yarn global add clinews -``` - -Yarn itself can installed using npm - -``` -$ npm -i yarn -``` - -### Configure News API - -Clinews retrieves all news headlines from [**News API**][2]. 
News API is a simple and easy-to-use API that returns JSON metadata for the headlines currently published on a range of news sources and blogs. It currently provides live headlines from 70 popular sources, including Ars Technica, BBC, Blooberg, CNN, Daily Mail, Engadget, ESPN, Financial Times, Google News, hacker News, IGN, Mashable, National Geographic, Reddit r/all, Reuters, Speigel Online, Techcrunch, The Guardian, The Hindu, The Huffington Post, The Newyork Times, The Next Web, The Wall street Journal, USA today and [**more**][3]. - -First, you need an API key from News API. Go to [**https://newsapi.org/register**][4] URL and register a free account to get the API key. - -Once you got the API key from News API site, edit your **.bashrc** file: - -``` -$ vi ~/.bashrc - -``` - -Add newsapi API key at the end like below: - -``` -export IN_API_KEY="Paste-API-key-here" - -``` - -Please note that you need to paste the key inside the double quotes. Save and close the file. - -Run the following command to update the changes. - -``` -$ source ~/.bashrc - -``` - -Done. Now let us go ahead and fetch the latest headlines from new sources. - -### Read News And Latest Headlines From Commandline - -To read news and latest headlines from specific new source, for example **The Hindu** , run: - -``` -$ news fetch the-hindu - -``` - -Here, **“the-hindu”** is the new source id (fetch id). - -The above command will fetch latest 10 headlines from The Hindu news portel and display them in the Terminal. Also, it displays a brief description of the news, the published date and time, and the actual link to the source. - -**Sample output:** - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-1.png) - -To read a news in your browser, hold Ctrl key and click on the URL. It will open in your default web browser. 
- -To view all the sources you can get news from, run: - -``` -$ news sources - -``` - -**Sample output:** - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/clinews-2.png) - -As you see in the above screenshot, Clinews lists all news sources including the name of the news source, fetch id, description of the site, website URL and the country where it is located. As of writing this guide, Clinews currently supports 70+ news sources. - -Clinews can also able to search for news stories across all sources matching search criteria/term. Say for example, to list all news stories with titles containing the words **“Tamilnadu”** , use the following command: - -``` -$ news search "Tamilnadu" -``` - -This command will scrap all news sources for stories that match term **Tamilnadu**. - -Clinews has some extra flags that helps you to - - * limit the amount of news stories you want to see, - * sort news stories (top, latest, popular), - * display news stories category wise (E.g. business, entertainment, gaming, general, music, politics, science-and-nature, sport, technology) - - - -For more details, see the help section: - -``` -$ clinews -h -``` - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/clinews-read-news-and-latest-headlines-from-commandline/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://www.ostechnix.com/get-news-instantly-commandline-linux/ -[2]: https://newsapi.org/ -[3]: https://newsapi.org/sources -[4]: https://newsapi.org/register diff --git a/sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md b/sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md index 32be152b4c..97aa36801b 100644 --- a/sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md +++ b/sources/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md @@ -1,3 +1,5 @@ +translating by ypingcn + Control your data with Syncthing: An open source synchronization tool ====== Decide how to store and share your personal information. diff --git a/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md b/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md deleted file mode 100644 index 628a805144..0000000000 --- a/sources/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md +++ /dev/null @@ -1,114 +0,0 @@ -translating by Flowsnow - -A Simple, Beautiful And Cross-platform Podcast App -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/cpod-720x340.png) - -Podcasts have become very popular in the last few years. Podcasts are what’s called “infotainment”, they are generally light-hearted, but they generally give you valuable information. 
Podcasts have blown up in the last few years, and if you like something, chances are there is a podcast about it. There are a lot of podcast players out there for the Linux desktop, but if you want something that is visually beautiful, has slick animations, and works on every platform, there aren’t a lot of alternatives to **CPod**. CPod (formerly known as **Cumulonimbus** ) is an open source and slickest podcast app that works on Linux, MacOS and Windows. - -CPod runs on something called **Electron** – a tool that allows developers to build cross-platform (E.g Windows, MacOs and Linux) desktop GUI applications. In this brief guide, we will be discussing – how to install and use CPod podcast app in Linux. - -### Installing CPod - -Go to the [**releases page**][1] of CPod. Download and Install the binary for your platform of choice. If you use Ubuntu/Debian, you can just download and install the .deb file from the releases page as shown below. - -``` -$ wget https://github.com/z-------------/CPod/releases/download/v1.25.7/CPod_1.25.7_amd64.deb - -$ sudo apt update - -$ sudo apt install gdebi - -$ sudo gdebi CPod_1.25.7_amd64.deb -``` - -If you use any other distribution, you probably should use the **AppImage** in the releases page. - -Download the AppImage file from the releases page. - -Open your terminal, and go to the directory where the AppImage file has been stored. Change the permissions to allow execution: - -``` -$ chmod +x CPod-1.25.7-x86_64.AppImage -``` - -Execute the AppImage File: - -``` -$ ./CPod-1.25.7-x86_64.AppImage -``` - -You’ll be presented a dialog asking whether to integrate the app with the system. Click **Yes** if you want to do so. - -### Features - -**Explore Tab** - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-features-tab.png) - -CPod uses the Apple iTunes database to find podcasts. This is good, because the iTunes database is the biggest one out there. If there is a podcast out there, chances are it’s on iTunes. 
To find podcasts, just use the top search bar in the Explore section. The Explore Section also shows a few popular podcasts. - -**Home Tab** - -![](http://www.ostechnix.com/wp-content/uploads/2018/09/CPod-home-tab.png) - -The Home Tab is the tab that opens by default when you open the app. The Home Tab shows a chronological list of all the episodes of all the podcasts that you have subscribed to. - -From the home tab, you can: - - 1. Mark episodes read. - 2. Download them for offline playing - 3. Add them to the queue. - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/The-podcasts-queue.png) - -**Subscriptions Tab** - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-subscriptions-tab.png) - -You can of course, subscribe to podcasts that you like. A few other things you can do in the Subscriptions Tab is: - - 1. Refresh Podcast Artwork - 2. Export and Import Subscriptions to/from an .OPML file. - - - -**The Player** - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-Podcast-Player.png) - -The player is perhaps the most beautiful part of CPod. The app changes the overall look and feel according to the banner of the podcast. There’s a sound visualiser at the bottom. To the right, you can see and search for other episodes of this podcast. - -**Cons/Missing Features** - -While I love this app, there are a few features and disadvantages that CPod does have: - - 1. Poor MPRIS Integration – You can play/pause the podcast from the media player dialog of your desktop environment, but not much more. The name of the podcast is not shown, and you can go to the next/previous episode. - 2. No support for chapters. - 3. No auto-downloading – you have to manually download episodes. - 4. CPU usage during use is pretty high (even for an Electron app). - - - -### Verdict - -While it does have its cons, CPod is clearly the most aesthetically pleasing podcast player app out there, and it has most basic features down. 
If you love using visually beautiful apps, and don’t need the advanced features, this is the perfect app for you. I know for a fact that I’m going to use it. - -Do you like CPod? Please put your opinions on the comments below! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/cpod-a-simple-beautiful-and-cross-platform-podcast-app/ - -作者:[EDITOR][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/editor/ -[1]: https://github.com/z-------------/CPod/releases diff --git a/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md deleted file mode 100644 index a75c1f3e9a..0000000000 --- a/sources/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md +++ /dev/null @@ -1,80 +0,0 @@ -translating---geekpi - -Hegemon – A Modular System Monitor Application Written In Rust -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/hegemon-720x340.png) - -When it comes to monitor running processes in Unix-like systems, the most commonly used applications are **top** and **htop** , which is an enhanced version of top. My personal favorite is htop. However, the developers are releasing few alternatives to these applications every now and then. One such alternative to top and htop utilities is **Hegemon**. It is a modular system monitor application written using **Rust** programming language. - -Concerning about the features of Hegemon, we can list the following: - - * Hegemon will monitor the usage of CPU, memory and Swap. - * It monitors the system’s temperature and fan speed. - * The update interval time can be adjustable. 
The default value is 3 seconds. - * We can reveal more detailed graphs and additional information by expanding the data streams. - * Unit tests - * Clean interface - * Free and open source. - - - -### Installing Hegemon - -Make sure you have installed **Rust 1.26** or a later version. To install Rust in your Linux distribution, refer to the following guide: - -[Install Rust Programming Language In Linux][2] - -Also, install the [libsensors][1] library. It is available in the default repositories of most Linux distributions. For example, you can install it on RPM-based systems such as Fedora using the following command: - -``` -$ sudo dnf install lm_sensors-devel -``` - -On Debian-based systems like Ubuntu and Linux Mint, it can be installed using the command: - -``` -$ sudo apt-get install libsensors4-dev -``` - -Once you have installed Rust and libsensors, install Hegemon using the command: - -``` -$ cargo install hegemon -``` - -Once Hegemon is installed, start monitoring the running processes in your Linux system using the command: - -``` -$ hegemon -``` - -Here is sample output from my Arch Linux desktop. - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Hegemon-in-action.gif) - -To exit, press **Q**. - - -Please be mindful that Hegemon is still in its early stage of development, and it is not a complete replacement for the **top** command. There may be bugs and missing features. If you come across any bugs, report them on the project’s GitHub page. The developer is planning to add more features in upcoming versions. So, keep an eye on this project. - -And, that’s all for now. Hope this helps. More good stuff to come. Stay tuned! - -Cheers!
- - - -------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-written-in-rust/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://github.com/lm-sensors/lm-sensors -[2]: https://www.ostechnix.com/install-rust-programming-language-in-linux/ diff --git a/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md b/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md deleted file mode 100644 index ff33e7c175..0000000000 --- a/sources/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md +++ /dev/null @@ -1,88 +0,0 @@ -How to Boot Ubuntu 18.04 / Debian 9 Server in Rescue (Single User mode) / Emergency Mode -====== -Booting a Linux server into single user mode, or **rescue mode**, is one of the important troubleshooting steps that a Linux admin usually follows when recovering the server from critical conditions. In Ubuntu 18.04 and Debian 9, single user mode is known as rescue mode. - -Apart from rescue mode, Linux servers can be booted in **emergency mode**. The main difference between them is that emergency mode loads a minimal environment with a read-only root file system and does not enable any network or other services, while rescue mode tries to mount all the local file systems and to start some important services, including networking. - -In this article we will discuss how to boot an Ubuntu 18.04 LTS / Debian 9 server in rescue mode and emergency mode.
- -#### Booting Ubuntu 18.04 LTS Server in Single User / Rescue Mode: - -Reboot your server, go to the boot loader (GRUB) screen and select “ **Ubuntu** ”; the bootloader screen will look like the one below, - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Bootloader-Screen-Ubuntu18-04-Server.jpg) - -Press “ **e** ”, then go to the end of the line which starts with the word “ **linux** ” and append “ **systemd.unit=rescue.target** ”. Remove the word “ **$vt_handoff** ” if it exists. - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-target-ubuntu18-04.jpg) - -Now press Ctrl-x or F10 to boot, - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-mode-ubuntu18-04.jpg) - -Now press Enter and you will get a shell where all file systems are mounted in read-write mode, and you can do your troubleshooting there. Once you are done with troubleshooting, you can reboot your server using the “ **reboot** ” command. - -#### Booting Ubuntu 18.04 LTS Server in emergency mode - -Reboot the server, go to the boot loader screen and select “ **Ubuntu** ”, then press “ **e** ”, go to the end of the line which starts with the word linux, and append “ **systemd.unit=emergency.target** ” - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergecny-target-ubuntu18-04-server.jpg) - -Now press Ctrl-x or F10 to boot in emergency mode; you will get a shell and can do your troubleshooting from there. As discussed earlier, in emergency mode the file systems are mounted in read-only mode and there is no networking, - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg) - -Use the command below to mount the root file system in read-write mode, - -``` -# mount -o remount,rw / - -``` - -Similarly, you can remount the rest of the file systems in read-write mode.
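
Since emergency mode leaves file systems read-only, it can help to see exactly which mounts still need remounting. A minimal sketch, safe to run on any Linux system since it only reads `/proc/mounts` — the `awk` filter prints the mount points whose options include `ro`:

```shell
# Print mount points that are currently mounted read-only.
# Fields in /proc/mounts: device, mount point, fstype, options, ...
awk '$4 ~ /(^|,)ro(,|$)/ { print $2 }' /proc/mounts
```

Each printed mount point can then be remounted with `mount -o remount,rw <mountpoint>`, the same way it is done for `/` above.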
- -#### Booting Debian 9 into Rescue & Emergency Mode - -Reboot your Debian 9.x server, go to the GRUB screen and select “ **Debian GNU/Linux** ” - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Debian9-Grub-Screen.jpg) - -Press “ **e** ” and go to the end of the line which starts with the word linux. To boot the system in rescue mode, append “ **systemd.unit=rescue.target** ”; to boot in emergency mode, append “ **systemd.unit=emergency.target** ” - -#### Rescue mode: - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-mode-Debian9.jpg) - -Now press Ctrl-x or F10 to boot in rescue mode - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-Mode-Shell-Debian9.jpg) - -Press Enter to get the shell, and from there you can start troubleshooting. - -#### Emergency Mode: - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-target-grub-debian9.jpg) - -Now press Ctrl-x or F10 to boot your system in emergency mode - -![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg) - -Press Enter to get the shell and use the “ **mount -o remount,rw /** ” command to mount the root file system in read-write mode. - -**Note:** If a root password is already set on your Ubuntu 18.04 or Debian 9 server, you must enter it to get a shell in rescue and emergency mode. - -That’s all from this article; please do share your feedback and comments if you liked it.
- - --------------------------------------------------------------------------------- - -via: https://www.linuxtechi.com/boot-ubuntu-18-04-debian-9-rescue-emergency-mode/ - -作者:[Pradeep Kumar][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.linuxtechi.com/author/pradeep/ diff --git a/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md b/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md deleted file mode 100644 index ab9fa8acc3..0000000000 --- a/sources/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md +++ /dev/null @@ -1,160 +0,0 @@ -How to Replace one Linux Distro With Another in Dual Boot [Guide] -====== -**If you have a Linux distribution installed, you can replace it with another distribution in the dual boot. You can also keep your personal documents while switching the distribution.** - -![How to Replace One Linux Distribution With Another From Dual Boot][1] - -Suppose you managed to [successfully dual boot Ubuntu and Windows][2]. But after reading the [Linux Mint versus Ubuntu discussion][3], you realized that [Linux Mint][4] is more suited for your needs. What would you do now? How would you [remove Ubuntu][5] and [install Mint in dual boot][6]? - -You might think that you need to uninstall [Ubuntu][7] from dual boot first and then repeat the dual booting steps with Linux Mint. Let me tell you something. You don’t need to do all of that. - -If you already have a Linux distribution installed in dual boot, you can easily replace it with another. You don’t have to uninstall the existing Linux distribution. You simply delete its partition and install the new distribution on the disk space vacated by the previous distribution. 
- -More good news: you may be able to keep your home directory, with all your documents and pictures, while switching Linux distributions. - -Let me show you how to switch Linux distributions. - -### Replace one Linux with another from dual boot - - - -Let me describe the scenario I am going to use here. I have Linux Mint 19 installed on my system in dual boot mode with Windows 10. I am going to replace it with elementary OS 5. I’ll also keep my personal files (music, pictures, videos, documents from my home directory) while switching distributions. - -Let’s first take a look at the requirements: - - * A system with Linux and Windows dual boot - * Live USB of the Linux distribution you want to install - * Backup of your important files in Windows and in Linux on an external disk (optional yet recommended) - - - -#### Things to keep in mind for keeping your home directory while changing Linux distribution - -If you want to keep the files from your existing Linux install as they are, you must have separate root and home partitions. You might have noticed that in my [dual boot tutorials][8], I always go for the ‘Something Else’ option and then manually create root and home partitions instead of choosing the ‘Install alongside Windows’ option. This is where all the trouble of manually creating a separate home partition pays off. - -Keeping home on a separate partition is helpful in situations when you want to replace your existing Linux install with another without losing your files. - -Note: You must remember the exact username and password of your existing Linux install in order to use the same home directory as-is in the new distribution. - -If you don’t have a separate home partition, you may create it later as well, BUT I won’t recommend that. That process is slightly complicated, and I don’t want you to mess up your system. - -With that much background information, it’s time to see how to replace a Linux distribution with another.
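
Before touching any partitions, it is worth actually doing the backup mentioned in the requirements. Here is a hedged sketch using `tar`, demonstrated with throwaway directories created by `mktemp` so the commands are safe to run as-is; in practice, point `SRC` at your home directory and `DEST` at wherever your external disk is mounted:

```shell
# Demo directories -- replace with your real home directory and
# the mount point of your external backup disk.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
echo "my important notes" > "$SRC/notes.txt"

# Archive everything under SRC into a compressed tarball on DEST.
tar -C "$SRC" -czf "$DEST/home-backup.tar.gz" .

# Verify the archive by listing its contents (notes.txt should appear).
tar -tzf "$DEST/home-backup.tar.gz"
```

Restoring later is the reverse: `tar -C /home/yourname -xzf home-backup.tar.gz`.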
- -#### Step 1: Create a live USB of the new Linux distribution - -Alright! I already mentioned it in the requirements, but I still included it in the main steps to avoid confusion. - -You can create a live USB using a startup disk creator like [Etcher][9] in Windows or Linux. The process is simple, so I am not going to list the steps here. - -#### Step 2: Boot into the live USB and proceed to installing Linux - -Since you have already dual booted before, you probably know the drill. Plug in the live USB, restart your system and, at boot time, press F10 or F12 repeatedly to enter the BIOS settings. - -In here, choose to boot from the USB. You’ll then see the option to try the live environment or to install it immediately. - -You should start the installation procedure. When you reach the ‘Installation type’ screen, choose the ‘Something else’ option. - -![Replacing one Linux with another from dual boot][10] -Select ‘Something else’ here - -#### Step 3: Prepare the partition - -You’ll see the partitioning screen now. Look closely and you’ll see your Linux installation with the Ext4 file system type. - -![Identifying Linux partition in dual boot][11] -Identify where your Linux is installed - -In the above picture, the Ext4 partition labeled as Linux Mint 19 is the root partition. The second Ext4 partition of 82691 MB is the home partition. I [haven’t used any swap space][12] here. - -Now, if you have just one Ext4 partition, it means that your home directory is on the same partition as root. In this case, you won’t be able to keep your home directory. I suggest that you copy the important files to an external disk, else you’ll lose them forever. - -It’s time to delete the root partition. Select the root partition and click the – sign. This will create some free space. - -![Delete root partition of your existing Linux install][13] -Delete root partition - -When you have the free space, click on the + sign.
- -![Create root partition for the new Linux][14] -Create a new root partition - -Now you should create a new partition out of this free space. If you had just one root partition in your previous Linux install, you should create root and home partitions here. You can also create a swap partition if you want to. - -If you had separate root and home partitions, just create a root partition from the deleted root partition. - -![Create root partition for the new Linux][15] -Creating root partition - -You may ask why I used delete and add instead of using the ‘change’ option. It’s because, a few years ago, using change didn’t work for me. So I prefer to do a – and +. Is it superstition? Maybe. - -One important thing to do here is to mark the newly created partition for formatting. If you don’t change the size of the partition, it won’t be formatted unless you explicitly ask for it to be formatted. And if the partition is not formatted, you’ll have issues. - -![][16] -It’s important to format the root partition - -Now, if you already had a separate home partition on your existing Linux install, you should select it and click on change. - -![Recreate home partition][17] -Retouch the already existing home partition (if any) - -You just have to specify that you are mounting it as the home partition. - -![Specify the home mount point][18] -Specify the home mount point - -If you had a swap partition, you can repeat the same steps as for the home partition. This time, specify that you want to use the space as swap. - -At this stage, you should have a root partition (with the format option selected) and a home partition (and a swap partition, if you want one). Hit the Install Now button to start the installation. - -![Verify partitions while replacing one Linux with another][19] -Verify the partitions - -The next few screens will be familiar to you. What matters is the screen where you are asked to create a user and password.
- -If you had a separate home partition previously and you want to use the same home directory, you MUST use the same username and password that you had before. Computer name doesn’t matter. - -![To keep the home partition intact, use the previous user and password][20] -To keep the home partition intact, use the previous user and password - -Your struggle is almost over. You don’t have to do anything else other than waiting for the installation to finish. - -![Wait for installation to finish][21] -Wait for installation to finish - -Once the installation is over, restart your system. You’ll have a new Linux distribution or version. - -In my case, I had the entire home directory of Linux Mint 19 as it is in the elementary OS. All the videos, pictures I had remained as it is. Isn’t that nice? - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/replace-linux-from-dual-boot/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Replace-Linux-Distro-from-dual-boot.png -[2]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/ -[3]: https://itsfoss.com/linux-mint-vs-ubuntu/ -[4]: https://www.linuxmint.com/ -[5]: https://itsfoss.com/uninstall-ubuntu-linux-windows-dual-boot/ -[6]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/ -[7]: https://www.ubuntu.com/ -[8]: https://itsfoss.com/guide-install-elementary-os-luna/ -[9]: https://etcher.io/ -[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-1.jpg -[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-2.jpg -[12]: 
https://itsfoss.com/swap-size/ -[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-3.jpg -[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-4.jpg -[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-5.jpg -[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-6.jpg -[17]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-7.jpg -[18]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-8.jpg -[19]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-9.jpg -[20]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-10.jpg -[21]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-11.jpg diff --git a/sources/tech/20180926 3 open source distributed tracing tools.md b/sources/tech/20180926 3 open source distributed tracing tools.md deleted file mode 100644 index 9879302d38..0000000000 --- a/sources/tech/20180926 3 open source distributed tracing tools.md +++ /dev/null @@ -1,90 +0,0 @@ -translating by belitex - -3 open source distributed tracing tools -====== - -Find performance issues quickly with these tools, which provide a graphical view of what's happening across complex software systems. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8) - -Distributed tracing systems enable users to track a request through a software system that is distributed across multiple applications, services, and databases as well as intermediaries like proxies. This allows for a deeper understanding of what is happening within the software system. 
These systems produce graphical representations that show how much time the request took on each step and list each known step. - -A user reviewing this content can determine where the system is experiencing latencies or blockages. Instead of testing the system like a binary search tree when requests start failing, operators and developers can see exactly where the issues begin. This can also reveal where performance changes might be occurring from deployment to deployment. It’s always better to catch regressions automatically by alerting to the anomalous behavior than to have your customers tell you. - -How does this tracing thing work? Well, each request gets a special ID that’s usually injected into the headers. This ID uniquely identifies that transaction. This transaction is normally called a trace. The trace is the overall abstract idea of the entire transaction. Each trace is made up of spans. These spans are the actual work being performed, like a service call or a database request. Each span also has a unique ID. Spans can create subsequent spans called child spans, and child spans can have multiple parents. - -Once a transaction (or trace) has run its course, it can be searched in a presentation layer. There are several tools in this space that we’ll discuss later, but the picture below shows [Jaeger][1] from my [Istio walkthrough][2]. It shows multiple spans of a single trace. The power of this is immediately clear as you can better understand the transaction’s story at a glance. - -![](https://opensource.com/sites/default/files/uploads/monitoring_guide_jaeger_istio_0.png) - -This demo uses Istio’s built-in OpenTracing implementation, so I can get tracing without even modifying my application. It also uses Jaeger, which is OpenTracing-compatible. - -So what is OpenTracing? Let’s find out. - -### OpenTracing API - -[OpenTracing][3] is a spec that grew out of [Zipkin][4] to provide cross-platform compatibility. 
It offers a vendor-neutral API for adding tracing to applications and delivering that data into distributed tracing systems. A library written for the OpenTracing spec can be used with any system that is OpenTracing-compliant. Zipkin, Jaeger, and Appdash are examples of open source tools that have adopted the open standard, but even proprietary tools like [Datadog][5] and [Instana][6] are adopting it. This is expected to continue as OpenTracing reaches ubiquitous status. - -### OpenCensus - -Okay, we have OpenTracing, but what is this [OpenCensus][7] thing that keeps popping up in my searches? Is it a competing standard, something completely different, or something complementary? - -The answer depends on who you ask. I will do my best to explain the difference (as I understand it): OpenCensus takes a more holistic or all-inclusive approach. OpenTracing is focused on establishing an open API and spec and not on open implementations for each language and tracing system. OpenCensus provides not only the specification but also the language implementations and wire protocol. It also goes beyond tracing by including additional metrics that are normally outside the scope of distributed tracing systems. - -OpenCensus allows viewing data on the host where the application is running, but it also has a pluggable exporter system for exporting data to central aggregators. The current exporters produced by the OpenCensus team include Zipkin, Prometheus, Jaeger, Stackdriver, Datadog, and SignalFx, but anyone can create an exporter. - -From my perspective, there’s a lot of overlap. One isn’t necessarily better than the other, but it’s important to know what each does and doesn’t do. OpenTracing is primarily a spec, with others doing the implementation and opinionation. OpenCensus provides a holistic approach for the local component with more opinionation but still requires other systems for remote aggregation. 
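
Whichever API you choose, the mechanics on the wire are the ones described earlier: a request gets unique IDs injected into its headers. As a rough sketch, here is what that propagation looks like done by hand, using Zipkin's B3 header names; the service URL is hypothetical, and a real client library would also manage parent span IDs, sampling decisions, and reporting for you:

```shell
# Mint B3-style identifiers: a 128-bit trace ID and a 64-bit span ID,
# hex-encoded from /dev/urandom.
TRACE_ID=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')
SPAN_ID=$(od -An -N8 -tx1 /dev/urandom | tr -d ' \n')

# Attach them to an outgoing request. Every service that forwards
# these headers contributes spans to the same trace. The URL is a
# placeholder -- substitute one of your own instrumented services.
curl -s --max-time 2 \
  -H "X-B3-TraceId: $TRACE_ID" \
  -H "X-B3-SpanId: $SPAN_ID" \
  -H "X-B3-Sampled: 1" \
  http://orders.example.com/api/checkout || true
```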
- -### Tool options - -#### Zipkin - -Zipkin was one of the first systems of this kind. It was developed by Twitter based on the [Google Dapper paper][8] about the internal system Google uses. Zipkin was written using Java, and it can use Cassandra or ElasticSearch as a scalable backend. Most companies should be satisfied with one of those options. The lowest supported Java version is Java 6. It also uses the [Thrift][9] binary communication protocol, which is popular in the Twitter stack and is hosted as an Apache project. - -The system consists of reporters (clients), collectors, a query service, and a web UI. Zipkin is meant to be safe in production by transmitting only a trace ID within the context of a transaction to inform receivers that a trace is in process. The data collected in each reporter is then transmitted asynchronously to the collectors. The collectors store these spans in the database, and the web UI presents this data to the end user in a consumable format. The delivery of data to the collectors can occur in three different methods: HTTP, Kafka, and Scribe. - -The [Zipkin community][10] has also created [Brave][11], a Java client implementation compatible with Zipkin. It has no dependencies, so it won’t drag your projects down or clutter them with libraries that are incompatible with your corporate standards. There are many other implementations, and Zipkin is compatible with the OpenTracing standard, so these implementations should also work with other distributed tracing systems. The popular Spring framework has a component called [Spring Cloud Sleuth][12] that is compatible with Zipkin. - -#### Jaeger - -[Jaeger][1] is a newer project from Uber Technologies that the [CNCF][13] has since adopted as an Incubating project. It is written in Golang, so you don’t have to worry about having dependencies installed on the host or any overhead of interpreters or language virtual machines. 
Similar to Zipkin, Jaeger also supports Cassandra and ElasticSearch as scalable storage backends. Jaeger is also fully compatible with the OpenTracing standard. - -Jaeger’s architecture is similar to Zipkin, with clients (reporters), collectors, a query service, and a web UI, but it also has an agent on each host that locally aggregates the data. The agent receives data over a UDP connection, which it batches and sends to a collector. The collector receives that data in the form of the [Thrift][14] protocol and stores that data in Cassandra or ElasticSearch. The query service can access the data store directly and provide that information to the web UI. - -By default, a user won’t get all the traces from the Jaeger clients. The system samples 0.1% (1 in 1,000) of traces that pass through each client. Keeping and transmitting all traces would be a bit overwhelming to most systems. However, this can be increased or decreased by configuring the agents, which the client consults with for its configuration. This sampling isn’t completely random, though, and it’s getting better. Jaeger uses probabilistic sampling, which tries to make an educated guess at whether a new trace should be sampled or not. [Adaptive sampling is on its roadmap][15], which will improve the sampling algorithm by adding additional context for making decisions. - -#### Appdash - -[Appdash][16] is a distributed tracing system written in Golang, like Jaeger. It was created by [Sourcegraph][17] based on Google’s Dapper and Twitter’s Zipkin. Similar to Jaeger and Zipkin, Appdash supports the OpenTracing standard; this was a later addition and requires a component that is different from the default component. This adds risk and complexity. - -At a high level, Appdash’s architecture consists mostly of three components: a client, a local collector, and a remote collector. There’s not a lot of documentation, so this description comes from testing the system and reviewing the code. 
The client in Appdash gets added to your code. Appdash provides Python, Golang, and Ruby implementations, but OpenTracing libraries can be used with Appdash’s OpenTracing implementation. The client collects the spans and sends them to the local collector. The local collector then sends the data to a centralized Appdash server running its own local collector, which is the remote collector for all other nodes in the system. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/distributed-tracing-tools - -作者:[Dan Barker][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/barkerd427 -[1]: https://www.jaegertracing.io/ -[2]: https://www.youtube.com/watch?v=T8BbeqZ0Rls -[3]: http://opentracing.io/ -[4]: https://zipkin.io/ -[5]: https://www.datadoghq.com/ -[6]: https://www.instana.com/ -[7]: https://opencensus.io/ -[8]: https://research.google.com/archive/papers/dapper-2010-1.pdf -[9]: https://thrift.apache.org/ -[10]: https://zipkin.io/pages/community.html -[11]: https://github.com/openzipkin/brave -[12]: https://cloud.spring.io/spring-cloud-sleuth/ -[13]: https://www.cncf.io/ -[14]: https://en.wikipedia.org/wiki/Apache_Thrift -[15]: https://www.jaegertracing.io/docs/roadmap/#adaptive-sampling -[16]: https://github.com/sourcegraph/appdash -[17]: https://about.sourcegraph.com/ diff --git a/sources/tech/20180926 An introduction to swap space on Linux systems.md b/sources/tech/20180926 An introduction to swap space on Linux systems.md deleted file mode 100644 index da50208533..0000000000 --- a/sources/tech/20180926 An introduction to swap space on Linux systems.md +++ /dev/null @@ -1,302 +0,0 @@ -heguangzhi Translating - -An introduction to swap space on Linux systems -====== - 
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh) - -Swap space is a common aspect of computing today, regardless of operating system. Linux uses swap space to increase the amount of virtual memory available to a host. It can use one or more dedicated swap partitions or a swap file on a regular filesystem or logical volume. - -There are two basic types of memory in a typical computer. The first type, random access memory (RAM), is used to store data and programs while they are being actively used by the computer. Programs and data cannot be used by the computer unless they are stored in RAM. RAM is volatile memory; that is, the data stored in RAM is lost if the computer is turned off. - -Hard drives are magnetic media used for long-term storage of data and programs. Magnetic media is nonvolatile; the data stored on a disk remains even when power is removed from the computer. The CPU (central processing unit) cannot directly access the programs and data on the hard drive; it must be copied into RAM first, and that is where the CPU can access its programming instructions and the data to be operated on by those instructions. During the boot process, a computer copies specific operating system programs, such as the kernel and init or systemd, and data from the hard drive into RAM, where it is accessed directly by the computer’s processor, the CPU. - -### Swap space - -Swap space is the second type of memory in modern Linux systems. The primary function of swap space is to substitute disk space for RAM memory when real RAM fills up and more space is needed. - -For example, assume you have a computer system with 8GB of RAM. If you start up programs that don’t fill that RAM, everything is fine and no swapping is required. But suppose the spreadsheet you are working on grows when you add more rows, and that, plus everything else that's running, now fills all of RAM. 
Without swap space available, you would have to stop working on the spreadsheet until you could free up some of your limited RAM by closing down some other programs. - -The kernel uses a memory management program that detects blocks, aka pages, of memory in which the contents have not been used recently. The memory management program swaps enough of these relatively infrequently used pages of memory out to a special partition on the hard drive specifically designated for “paging,” or swapping. This frees up RAM and makes room for more data to be entered into your spreadsheet. Those pages of memory swapped out to the hard drive are tracked by the kernel’s memory management code and can be paged back into RAM if they are needed. - -The total amount of memory in a Linux computer is the RAM plus swap space and is referred to as virtual memory. - -### Types of Linux swap - -Linux provides for two types of swap space. By default, most Linux installations create a swap partition, but it is also possible to use a specially configured file as a swap file. A swap partition is just what its name implies—a standard disk partition that is designated as swap space by the `mkswap` command. - -A swap file can be used if there is no free disk space in which to create a new swap partition or space in a volume group where a logical volume can be created for swap space. This is just a regular file that is created and preallocated to a specified size. Then the `mkswap` command is run to configure it as swap space. I don’t recommend using a file for swap space unless absolutely necessary. - -### Thrashing - -Thrashing can occur when total virtual memory, both RAM and swap space, become nearly full. The system spends so much time paging blocks of memory between swap space and RAM and back that little time is left for real work. The typical symptoms of this are obvious: The system becomes slow or completely unresponsive, and the hard drive activity light is on almost constantly. 
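
A quick way to check for this state from a shell (assuming you can still get one) is to compare the load average against the core count and look at how much swap is allocated. This sketch reads the kernel's `/proc` files directly — the same sources that tools like `free` and `top` report from:

```shell
# Number of CPU cores and the 1-minute load average. A load many
# times the core count, combined with nearly full swap, suggests
# the system is thrashing.
cores=$(nproc)
load=$(cut -d ' ' -f1 /proc/loadavg)
echo "1-minute load average: $load on $cores core(s)"

# RAM and swap allocation straight from the kernel.
grep -E '^(MemTotal|MemAvailable|SwapTotal|SwapFree):' /proc/meminfo
```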
-
-If you can manage to issue a command like `top` that shows CPU load and memory usage, you will see that the CPU load is very high, perhaps as much as 30 to 40 times the number of CPU cores in the system. Another symptom is that both RAM and swap space are almost completely allocated.
-
-After the fact, looking at SAR (system activity report) data can also show these symptoms. I install SAR on every system I work on and use it for post-repair forensic analysis.
-
-### What is the right amount of swap space?
-
-Many years ago, the rule of thumb for the amount of swap space that should be allocated on the hard drive was 2X the amount of RAM installed in the computer (of course, that was when most computers' RAM was measured in KB or MB). So if a computer had 64KB of RAM, a swap partition of 128KB would be an optimum size. This rule took into account the facts that RAM sizes were typically quite small at that time and that allocating more than 2X RAM for swap space did not improve performance. With more than twice RAM for swap, most systems spent more time thrashing than actually performing useful work.
-
-RAM has become an inexpensive commodity and most computers these days have amounts of RAM that extend into tens of gigabytes. Most of my newer computers have at least 8GB of RAM, one has 32GB, and my main workstation has 64GB. My older computers have from 4 to 8 GB of RAM.
-
-When dealing with computers having huge amounts of RAM, the limiting performance factor for swap space is far lower than the 2X multiplier. The Fedora 28 online Installation Guide, found at [Fedora Installation Guide][1], defines current thinking about swap space allocation. I have included below some discussion and the table of recommendations from that document.
-
-The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. The recommended swap partition size is established automatically during installation. To allow for hibernation, however, you will need to edit the swap space in the custom partitioning stage.
-
-_Table 1: Recommended system swap space in Fedora 28 documentation_
-
-| **Amount of system RAM** | **Recommended swap space** | **Recommended swap with hibernation** |
-|--------------------------|-----------------------------|---------------------------------------|
-| less than 2 GB           | 2 times the amount of RAM   | 3 times the amount of RAM             |
-| 2 GB - 8 GB              | Equal to the amount of RAM  | 2 times the amount of RAM             |
-| 8 GB - 64 GB             | 0.5 times the amount of RAM | 1.5 times the amount of RAM           |
-| more than 64 GB          | workload dependent          | hibernation not recommended           |
-
-At the border between each range listed above (for example, a system with 2 GB, 8 GB, or 64 GB of system RAM), use discretion with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space may lead to better performance.
-
-Of course, most Linux administrators have their own ideas about the appropriate amount of swap space—as well as pretty much everything else. Table 2, below, contains my recommendations based on my personal experiences in multiple environments. These may not work for you, but as with Table 1, they may help you get started.
-
-_Table 2: Recommended system swap space per the author_
-
-| Amount of RAM | Recommended swap space |
-|---------------|------------------------|
-| ≤ 2GB         | 2X RAM                 |
-| 2GB – 8GB     | = RAM                  |
-| >8GB          | 8GB                    |
-
-One consideration in both tables is that as the amount of RAM increases, beyond a certain point adding more swap space simply leads to thrashing well before the swap space even comes close to being filled. If you have too little virtual memory while following these recommendations, you should add more RAM, if possible, rather than more swap space. As with all recommendations that affect system performance, use what works best for your specific environment. It will take time and effort to experiment and to make changes based on the conditions in your Linux environment.
-
-#### Adding more swap space to a non-LVM disk environment
-
-Due to changing requirements for swap space on hosts with Linux already installed, it may become necessary to modify the amount of swap space defined for the system. This procedure can be used for any general case where the amount of swap space needs to be increased. It assumes sufficient disk space is available. This procedure also assumes that the disks are partitioned in “raw” EXT4 and swap partitions and do not use logical volume management (LVM).
-
-The basic steps to take are simple:
-
-  1. Turn off the existing swap space.
-
-  2. Create a new swap partition of the desired size.
-
-  3. Reread the partition table.
-
-  4. Configure the partition as swap space.
-
-  5. Add the new partition to /etc/fstab.
-
-  6. Turn on swap.
-
-
-
-A reboot should not be necessary.
-
-For safety's sake, before turning off swap, at the very least you should ensure that no applications are running and that no swap space is in use. The `free` or `top` commands can tell you whether swap space is in use. To be even safer, you could revert to run level 1 or single-user mode.
-
-Turn off all swap space with the command:
-
-```
-swapoff -a
-
-```
-
-Now display the existing partitions on the hard drive.
-
-```
-fdisk -l
-
-```
-
-This displays the current partition tables on each drive. Identify the current swap partition by number.
-
-Start `fdisk` in interactive mode with the command:
-
-```
-fdisk /dev/
-
-```
-
-For example:
-
-```
-fdisk /dev/sda
-
-```
-
-At this point, `fdisk` is now interactive and will operate only on the specified disk drive.
-
-Use the fdisk `p` sub-command to verify that there is enough free space on the disk to create the new swap partition. The space on the hard drive is shown in terms of 512-byte blocks and starting and ending cylinder numbers, so you may have to do some math to determine the available space between and at the end of allocated partitions.
-
-Use the `n` sub-command to create a new swap partition. fdisk will ask you for the starting cylinder. By default, it chooses the lowest-numbered available cylinder. If you wish to change that, type in the number of the starting cylinder.
-
-The `fdisk` command now allows you to enter the size of the partition in a number of formats, including the last cylinder number or the size in bytes, KB or MB. Type in 4000M, which will give about 4GB of space on the new partition (for example), and press Enter.
-
-Use the `p` sub-command to verify that the partition was created as you specified it. Note that the partition will probably not be exactly what you specified unless you used the ending cylinder number. The `fdisk` command can only allocate disk space in increments of whole cylinders, so your partition may be a little smaller or larger than you specified. If the partition is not what you want, you can delete it and create it again.
-
-Now it is necessary to specify that the new partition is to be a swap partition. The sub-command `t` allows you to specify the type of partition. So enter `t`, specify the partition number, and when it asks for the hex code partition type, type 82, which is the Linux swap partition type, and press Enter.
-
-When you are satisfied with the partition you have created, use the `w` sub-command to write the new partition table to the disk. The `fdisk` program will exit and return you to the command prompt after it completes writing the revised partition table. You will probably receive the following message as `fdisk` completes writing the new partition table:
-
-```
-The partition table has been altered!
-Calling ioctl() to re-read partition table.
-WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
-The kernel still uses the old table.
-The new table will be used at the next reboot.
-Syncing disks.
-```
-
-At this point, you use the `partprobe` command to force the kernel to re-read the partition table so that it is not necessary to perform a reboot.
-
-```
-partprobe
-```
-
-Now use the command `fdisk -l` to list the partitions, and the new swap partition should be among those listed. Be sure that the new partition type is “Linux swap”.
-
-It will be necessary to modify the /etc/fstab file to point to the new swap partition. The existing line may look like this:
-
-```
-LABEL=SWAP-sdaX   swap        swap    defaults        0 0
-
-```
-
-where `X` is the partition number. Add a new line that looks similar to this, depending upon the location of your new swap partition:
-
-```
-/dev/sdaY         swap        swap    defaults        0 0
-
-```
-
-Be sure to use the correct partition number. Now you can perform the next step in creating the swap partition. Use the `mkswap` command to define the partition as a swap partition.
-
-```
-mkswap /dev/sdaY
-
-```
-
-The final step is to turn swap on using the command:
-
-```
-swapon -a
-
-```
-
-Your new swap partition is now online along with the previously existing swap partition. You can use the `free` or `top` commands to verify this.
-
-#### Adding swap to an LVM disk environment
-
-If your disk setup uses LVM, changing swap space will be fairly easy. Again, this assumes that space is available in the volume group in which the current swap volume is located. By default, the installation procedures for Fedora Linux in an LVM environment create the swap partition as a logical volume. This makes it easy because you can simply increase the size of the swap volume.
-
-Here are the steps required to increase the amount of swap space in an LVM environment:
-
-  1. Turn off all swap.
-
-  2. Increase the size of the logical volume designated for swap.
-
-  3. Configure the resized volume as swap space.
-
-  4. Turn on swap.
-
-
-
-First, let’s verify that swap exists and is a logical volume using the `lvs` command (list logical volumes).
-
-```
-[root@studentvm1 ~]# lvs
-  LV     VG                Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
-  home   fedora_studentvm1 -wi-ao----  2.00g
-  pool00 fedora_studentvm1 twi-aotz--  2.00g               8.17   2.93
-  root   fedora_studentvm1 Vwi-aotz--  2.00g pool00        8.17
-  swap   fedora_studentvm1 -wi-ao----  8.00g
-  tmp    fedora_studentvm1 -wi-ao----  5.00g
-  usr    fedora_studentvm1 -wi-ao---- 15.00g
-  var    fedora_studentvm1 -wi-ao---- 10.00g
-[root@studentvm1 ~]#
-```
-
-You can see that the current swap size is 8GB. In this case, we want to add 2GB to this swap volume. First, stop existing swap. You may have to terminate running programs if swap space is in use.
-
-```
-swapoff -a
-
-```
-
-Now increase the size of the logical volume.
-
-```
-[root@studentvm1 ~]# lvextend -L +2G /dev/mapper/fedora_studentvm1-swap
-  Size of logical volume fedora_studentvm1/swap changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents).
-  Logical volume fedora_studentvm1/swap successfully resized.
-[root@studentvm1 ~]#
-```
-
-Run the `mkswap` command to make this entire 10GB volume into swap space.
-
-```
-[root@studentvm1 ~]# mkswap /dev/mapper/fedora_studentvm1-swap
-mkswap: /dev/mapper/fedora_studentvm1-swap: warning: wiping old swap signature.
-Setting up swapspace version 1, size = 10 GiB (10737414144 bytes) -no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a -[root@studentvm1 ~]# -``` - -Turn swap back on. - -``` -[root@studentvm1 ~]# swapon -a -[root@studentvm1 ~]# -``` - -Now verify the new swap space is present with the list block devices command. Again, a reboot is not required. - -``` -[root@studentvm1 ~]# lsblk -NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT -sda                                    8:0    0   60G  0 disk -|-sda1                                 8:1    0    1G  0 part /boot -`-sda2                                 8:2    0   59G  0 part -  |-fedora_studentvm1-pool00_tmeta   253:0    0    4M  0 lvm   -  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   -  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / -  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   -  |-fedora_studentvm1-pool00_tdata   253:1    0    2G  0 lvm   -  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   -  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / -  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   -  |-fedora_studentvm1-swap           253:4    0   10G  0 lvm  [SWAP] -  |-fedora_studentvm1-usr            253:5    0   15G  0 lvm  /usr -  |-fedora_studentvm1-home           253:7    0    2G  0 lvm  /home -  |-fedora_studentvm1-var            253:8    0   10G  0 lvm  /var -  `-fedora_studentvm1-tmp            253:9    0    5G  0 lvm  /tmp -sr0                                   11:0    1 1024M  0 rom   -[root@studentvm1 ~]# -``` - -You can also use the `swapon -s` command, or `top`, `free`, or any of several other commands to verify this. 
- -``` -[root@studentvm1 ~]# free -              total        used        free      shared  buff/cache   available -Mem:        4038808      382404     2754072        4152      902332     3404184 -Swap:      10485756           0    10485756 -[root@studentvm1 ~]# -``` - -Note that the different commands display or require as input the device special file in different forms. There are a number of ways in which specific devices are accessed in the /dev directory. My article, [Managing Devices in Linux][2], includes more information about the /dev directory and its contents. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/swap-space-linux-systems - -作者:[David Both][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/dboth -[1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/ -[2]: https://opensource.com/article/16/11/managing-devices-linux diff --git a/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md b/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md deleted file mode 100644 index e8b108720e..0000000000 --- a/sources/tech/20180926 How to use the Scikit-learn Python library for data science projects.md +++ /dev/null @@ -1,260 +0,0 @@ -translating by Flowsnow - -How to use the Scikit-learn Python library for data science projects -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) - -The Scikit-learn Python library, initially released in 2007, is commonly used in solving machine learning and data science problems—from the beginning to the end. 
The versatile library offers an uncluttered, consistent, and efficient API and thorough online documentation.
-
-### What is Scikit-learn?
-
-[Scikit-learn][1] is an open source Python library that has powerful tools for data analysis and data mining. It's available under the BSD license and is built on the following scientific computing libraries:
-
-  * **NumPy**, a library for manipulating multi-dimensional arrays and matrices. It also has an extensive compilation of mathematical functions for performing various calculations.
-  * **SciPy**, an ecosystem consisting of various libraries for completing technical computing tasks.
-  * **Matplotlib**, a library for plotting various charts and graphs.
-
-
-
-Scikit-learn offers an extensive range of built-in algorithms that make the most of data science projects.
-
-Here are the main ways the Scikit-learn library is used.
-
-#### 1. Classification
-
-The [classification][2] tools identify the category associated with provided data. For example, they can be used to categorize email messages as either spam or not.
-
-Classification algorithms in Scikit-learn include:
-
-  * Support vector machines (SVMs)
-  * Nearest neighbors
-  * Random forest
-
-
-
-#### 2. Regression
-
-Regression involves creating a model that tries to comprehend the relationship between input and output data. For example, regression tools can be used to understand the behavior of stock prices.
-
-Regression algorithms include:
-
-  * SVMs
-  * Ridge regression
-  * Lasso
-
-
-
-#### 3. Clustering
-
-The Scikit-learn clustering tools are used to automatically group data with the same characteristics into sets. For example, customer data can be segmented based on their localities.
-
-Clustering algorithms include:
-
-  * K-means
-  * Spectral clustering
-  * Mean-shift
-
-
-
-#### 4. Dimensionality reduction
-
-Dimensionality reduction lowers the number of random variables for analysis. For example, to increase the efficiency of visualizations, outlying data may not be considered.
-
-Dimensionality reduction algorithms include:
-
-  * Principal component analysis (PCA)
-  * Feature selection
-  * Non-negative matrix factorization
-
-
-
-#### 5. Model selection
-
-Model selection algorithms offer tools to compare, validate, and select the best parameters and models to use in your data science projects.
-
-Model selection modules that can deliver enhanced accuracy through parameter tuning include:
-
-  * Grid search
-  * Cross-validation
-  * Metrics
-
-
-
-#### 6. Preprocessing
-
-The Scikit-learn preprocessing tools are important in feature extraction and normalization during data analysis. For example, you can use these tools to transform input data—such as text—and apply their features in your analysis.
-
-Preprocessing modules include:
-
-  * Preprocessing
-  * Feature extraction
-
-
-
-### A Scikit-learn library example
-
-Let's use a simple example to illustrate how you can use the Scikit-learn library in your data science projects.
-
-We'll use the [Iris flower dataset][3], which is incorporated in the Scikit-learn library. The Iris flower dataset contains 150 samples of three flower species:
-
-  * Setosa—labeled 0
-  * Versicolor—labeled 1
-  * Virginica—labeled 2
-
-
-
-The dataset includes the following characteristics of each flower species (in centimeters):
-
-  * Sepal length
-  * Sepal width
-  * Petal length
-  * Petal width
-
-
-
-#### Step 1: Importing the library
-
-Since the Iris dataset is included in the Scikit-learn data science library, we can load it into our workspace as follows:
-
-```
-from sklearn import datasets
-iris = datasets.load_iris()
-```
-
-These commands import the **datasets** module from **sklearn**, then use the **load_iris()** method from **datasets** to include the data in the workspace.
- -#### Step 2: Getting dataset characteristics - -The **datasets** module contains several methods that make it easier to get acquainted with handling data. - -In Scikit-learn, a dataset refers to a dictionary-like object that has all the details about the data. The data is stored using the **.data** key, which is an array list. - -For instance, we can utilize **iris.data** to output information about the Iris flower dataset. - -``` -print(iris.data) -``` - -Here is the output (the results have been truncated): - -``` -[[5.1 3.5 1.4 0.2] - [4.9 3.  1.4 0.2] - [4.7 3.2 1.3 0.2] - [4.6 3.1 1.5 0.2] - [5.  3.6 1.4 0.2] - [5.4 3.9 1.7 0.4] - [4.6 3.4 1.4 0.3] - [5.  3.4 1.5 0.2] - [4.4 2.9 1.4 0.2] - [4.9 3.1 1.5 0.1] - [5.4 3.7 1.5 0.2] - [4.8 3.4 1.6 0.2] - [4.8 3.  1.4 0.1] - [4.3 3.  1.1 0.1] - [5.8 4.  1.2 0.2] - [5.7 4.4 1.5 0.4] - [5.4 3.9 1.3 0.4] - [5.1 3.5 1.4 0.3] -``` - -Let's also use **iris.target** to give us information about the different labels of the flowers. - -``` -print(iris.target) -``` - -Here is the output: - -``` -[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 - 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 - 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 - 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 - 2 2] - -``` - -If we use **iris.target_names** , we'll output an array of the names of the labels found in the dataset. - -``` -print(iris.target_names) -``` - -Here is the result after running the Python code: - -``` -['setosa' 'versicolor' 'virginica'] -``` - -#### Step 3: Visualizing the dataset - -We can use the [box plot][4] to produce a visual depiction of the Iris flower dataset. The box plot illustrates how the data is distributed over the plane through their quartiles. 
- -Here's how to achieve this: - -``` -import seaborn as sns -box_data = iris.data #variable representing the data array -box_target = iris.target #variable representing the labels array -sns.boxplot(data = box_data,width=0.5,fliersize=5) -sns.set(rc={'figure.figsize':(2,15)}) -``` - -Let's see the result: - -![](https://opensource.com/sites/default/files/uploads/scikit_boxplot.png) - -On the horizontal axis: - - * 0 is sepal length - * 1 is sepal width - * 2 is petal length - * 3 is petal width - - - -The vertical axis is dimensions in centimeters. - -### Wrapping up - -Here is the entire code for this simple Scikit-learn data science tutorial. - -``` -from sklearn import datasets -iris = datasets.load_iris() -print(iris.data) -print(iris.target) -print(iris.target_names) -import seaborn as sns -box_data = iris.data #variable representing the data array -box_target = iris.target #variable representing the labels array -sns.boxplot(data = box_data,width=0.5,fliersize=5) -sns.set(rc={'figure.figsize':(2,15)}) -``` - -Scikit-learn is a versatile Python library you can use to efficiently complete data science projects. - -If you want to learn more, check out the tutorials on [LiveEdu][5], such as Andrey Bulezyuk's video on using the Scikit-learn library to create a [machine learning application][6]. - -Do you have any questions or comments? Feel free to share them below. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/how-use-scikit-learn-data-science-projects - -作者:[Dr.Michael J.Garbade][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/drmjg -[1]: http://scikit-learn.org/stable/index.html -[2]: https://blog.liveedu.tv/regression-versus-classification-machine-learning-whats-the-difference/ -[3]: https://en.wikipedia.org/wiki/Iris_flower_data_set -[4]: https://en.wikipedia.org/wiki/Box_plot -[5]: https://www.liveedu.tv/guides/data-science/ -[6]: https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/ diff --git a/sources/tech/20180927 5 cool tiling window managers.md b/sources/tech/20180927 5 cool tiling window managers.md new file mode 100644 index 0000000000..f687918c65 --- /dev/null +++ b/sources/tech/20180927 5 cool tiling window managers.md @@ -0,0 +1,87 @@ +5 cool tiling window managers +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/09/tilingwindowmanagers-816x345.jpg) +The Linux desktop ecosystem offers multiple window managers (WMs). Some are developed as part of a desktop environment. Others are meant to be used as standalone application. This is the case of tiling WMs, which offer a more lightweight, customized environment. This article presents five such tiling WMs for you to try out. + +### i3 + +[i3][1] is one of the most popular tiling window managers. Like most other such WMs, i3 focuses on low resource consumption and customizability by the user. + +You can refer to [this previous article in the Magazine][2] to get started with i3 installation details and how to configure it. + +### sway + +[sway][3] is a tiling Wayland compositor. 
It has the advantage of compatibility with an existing i3 configuration, so you can use it to replace i3 and use Wayland as the display protocol.
+
+You can use dnf to install sway from the Fedora repository:
+
+```
+$ sudo dnf install sway
+```
+
+If you want to migrate from i3 to sway, there’s a small [migration guide][4] available.
+
+### Qtile
+
+[Qtile][5] is another tiling window manager that also happens to be written in Python. By default, you configure Qtile in a Python script located under ~/.config/qtile/config.py. When this script is not available, Qtile uses a default [configuration][6].
+
+One of the benefits of Qtile being written in Python is that you can write scripts to control the WM. For example, the following script prints the screen details:
+
+```
+> from libqtile.command import Client
+> c = Client()
+> print(c.screen.info)
+{'index': 0, 'width': 1920, 'height': 1006, 'x': 0, 'y': 0}
+```
+
+To install Qtile on Fedora, use the following command:
+
+```
+$ sudo dnf install qtile
+```
+
+### dwm
+
+The [dwm][7] window manager focuses more on being lightweight. One goal of the project is to keep dwm minimal and small. For example, the entire code base never exceeded 2000 lines of code. On the other hand, dwm isn’t as easy to customize and configure. Indeed, the only way to change dwm’s default configuration is to [edit the source code and recompile the application][8].
+
+If you want to try the default configuration, you can install dwm in Fedora using dnf:
+
+```
+$ sudo dnf install dwm
+```
+
+For those who want to change their dwm configuration, the dwm-user package is available in Fedora. This package automatically recompiles dwm using the configuration stored in the user home directory at ~/.dwm/config.h.
+
+### awesome
+
+[awesome][9] originally started as a fork of dwm, to provide configuration of the WM using an external configuration file. The configuration is done via Lua scripts, which allow you to write scripts to automate tasks or create widgets.
+ +You can check out awesome on Fedora by installing it like this: + +``` +$ sudo dnf install awesome +``` + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/5-cool-tiling-window-managers/ + +作者:[Clément Verna][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org +[1]: https://i3wm.org/ +[2]: https://fedoramagazine.org/getting-started-i3-window-manager/ +[3]: https://swaywm.org/ +[4]: https://github.com/swaywm/sway/wiki/i3-Migration-Guide +[5]: http://www.qtile.org/ +[6]: https://github.com/qtile/qtile/blob/develop/libqtile/resources/default_config.py +[7]: https://dwm.suckless.org/ +[8]: https://dwm.suckless.org/customisation/ +[9]: https://awesomewm.org/ diff --git a/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md b/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md deleted file mode 100644 index e3a0a9d561..0000000000 --- a/sources/tech/20180927 How To Find And Delete Duplicate Files In Linux.md +++ /dev/null @@ -1,441 +0,0 @@ -How To Find And Delete Duplicate Files In Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Find-And-Delete-Duplicate-Files-720x340.png) - -I always backup the configuration files or any old files to somewhere in my hard disk before edit or modify them, so I can restore them from the backup if I accidentally did something wrong. But the problem is I forgot to clean up those files and my hard disk is filled with a lot of duplicate files after a certain period of time. I feel either too lazy to clean the old files or afraid that I may delete an important files. 
If you’re anything like me and overwhelmed with multiple copies of the same files in different backup directories, you can find and delete duplicate files using the tools given below in Unix-like operating systems.
-
-**A word of caution:**
-
-Please be careful while deleting duplicate files. If you’re not careful, it will lead you to [**accidental data loss**][1]. I advise you to pay extra attention while using these tools.
-
-### Find And Delete Duplicate Files In Linux
-
-For the purpose of this guide, I am going to discuss three utilities, namely:
-
-  1. Rdfind,
-  2. Fdupes,
-  3. FSlint.
-
-
-
-These three utilities are free, open source, and work on most Unix-like operating systems.
-
-##### 1. Rdfind
-
-**Rdfind**, which stands for **r**edundant **d**ata **find**, is a free and open source utility to find duplicate files across and/or within directories and sub-directories. It compares files based on their content, not on their file names. Rdfind uses a **ranking** algorithm to classify original and duplicate files. If you have two or more identical files, Rdfind is smart enough to find which is the original file, and considers the rest of the files as duplicates. Once it finds the duplicates, it reports them to you. You can decide to either delete them or replace them with [**hard links** or **symbolic (soft) links**][2].
-
-**Installing Rdfind**
-
-Rdfind is available in [**AUR**][3]. So, you can install it in Arch-based systems using any AUR helper program like [**Yay**][4] as shown below.
-
-```
-$ yay -S rdfind
-
-```
-
-On Debian, Ubuntu, Linux Mint:
-
-```
-$ sudo apt-get install rdfind
-
-```
-
-On Fedora:
-
-```
-$ sudo dnf install rdfind
-
-```
-
-On RHEL, CentOS:
-
-```
-$ sudo yum install epel-release
-
-$ sudo yum install rdfind
-
-```
-
-**Usage**
-
-Once installed, simply run the rdfind command along with the directory path to scan for the duplicate files.
-
-```
-$ rdfind ~/Downloads
-
-```
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/09/rdfind-1.png)
-
-As you see in the above screenshot, the rdfind command will scan the ~/Downloads directory and save the results in a file named **results.txt** in the current working directory. You can view the names of the possible duplicate files in the results.txt file.
-
-```
-$ cat results.txt
-# Automatically generated
-# duptype id depth size device inode priority name
-DUPTYPE_FIRST_OCCURRENCE 1469 8 9 2050 15864884 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test5.regex
-DUPTYPE_WITHIN_SAME_TREE -1469 8 9 2050 15864886 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test6.regex
-[...]
-DUPTYPE_FIRST_OCCURRENCE 13 0 403635 2050 15740257 1 /home/sk/Downloads/Hyperledger(1).pdf
-DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperledger.pdf
-# end of file
-
-```
-
-By reviewing the results.txt file, you can easily find the duplicates. You can remove the duplicates manually if you want to.
-
-Also, you can use the **-dryrun** option to find all duplicates in a given directory without changing anything and output the summary in your Terminal:
-
-```
-$ rdfind -dryrun true ~/Downloads
-
-```
-
-Once you have found the duplicates, you can replace them with either hardlinks or symlinks.
-
-To replace all duplicates with hardlinks, run:
-
-```
-$ rdfind -makehardlinks true ~/Downloads
-
-```
-
-To replace all duplicates with symlinks/soft links, run:
-
-```
-$ rdfind -makesymlinks true ~/Downloads
-
-```
-
-You may have some empty files in a directory and want to ignore them. If so, use the **-ignoreempty** option like below.
-
-```
-$ rdfind -ignoreempty true ~/Downloads
-
-```
-
-If you don’t want the old files anymore, just delete duplicate files instead of replacing them with hard or soft links.
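One way to convince yourself that the hard-link replacement is safe is to check inode numbers: two paths that are hard links point at a single inode on disk. The following self-contained sketch demonstrates that check with throwaway files; it does not invoke rdfind itself, and the file names are made up for the example.

```shell
# Demonstrate the inode check with two throwaway files.
# This mimics the result of replacing a duplicate with a hard link:
# one inode on disk, reachable through two names.
tmpdir=$(mktemp -d)
echo "same content" > "$tmpdir/original.txt"
ln "$tmpdir/original.txt" "$tmpdir/duplicate.txt"

# ls -i prints the inode number before each file name.
inode1=$(ls -i "$tmpdir/original.txt" | awk '{ print $1 }')
inode2=$(ls -i "$tmpdir/duplicate.txt" | awk '{ print $1 }')

if [ "$inode1" = "$inode2" ]; then
    echo "hard linked: both names share inode $inode1"
else
    echo "separate files"
fi

rm -r "$tmpdir"
```

Either name can then be deleted without losing the data; the content is only freed when the last remaining link to the inode is removed.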
- -To delete all duplicates, simply run: - -``` -$ rdfind -deleteduplicates true ~/Downloads - -``` - -If you do not want to ignore empty files and want to delete them along with all duplicates, run: - -``` -$ rdfind -deleteduplicates true -ignoreempty false ~/Downloads - -``` - -For more details, refer to the help section: - -``` -$ rdfind --help - -``` - -And the manual pages: - -``` -$ man rdfind - -``` - -##### 2. Fdupes - -**Fdupes** is yet another command line utility to identify and remove duplicate files within specified directories and their sub-directories. It is a free, open source utility written in the **C** programming language. Fdupes identifies duplicates by comparing file sizes, partial MD5 signatures, and full MD5 signatures, and finally performing a byte-by-byte comparison for verification. - -Similar to the Rdfind utility, Fdupes comes with quite a few handy options, such as: - - * Recursively search for duplicate files in directories and sub-directories - * Exclude empty files and hidden files from consideration - * Show the size of the duplicates - * Delete duplicates immediately as they are encountered - * Exclude files with different owner/group or permission bits as duplicates - * And a lot more. - - - -**Installing Fdupes** - -Fdupes is available in the default repositories of most Linux distributions. - -On Arch Linux and its variants like Antergos and Manjaro Linux, install it using Pacman like below. - -``` -$ sudo pacman -S fdupes - -``` - -On Debian, Ubuntu, Linux Mint: - -``` -$ sudo apt-get install fdupes - -``` - -On Fedora: - -``` -$ sudo dnf install fdupes - -``` - -On RHEL, CentOS: - -``` -$ sudo yum install epel-release - -$ sudo yum install fdupes - -``` - -**Usage** - -Fdupes usage is pretty simple. Just run the following command to find the duplicate files in a directory, for example **~/Downloads**.
- -``` -$ fdupes ~/Downloads - -``` - -Sample output from my system: - -``` -/home/sk/Downloads/Hyperledger.pdf -/home/sk/Downloads/Hyperledger(1).pdf - -``` - -As you can see, I have a duplicate file in the **/home/sk/Downloads/** directory. It shows the duplicates from the parent directory only. To view the duplicates from sub-directories as well, use the **-r** option like below. - -``` -$ fdupes -r ~/Downloads - -``` - -Now you will see the duplicates from the **/home/sk/Downloads/** directory and its sub-directories as well. - -Fdupes can also find duplicates in multiple directories at once. - -``` -$ fdupes ~/Downloads ~/Documents/ostechnix - -``` - -You can even search multiple directories, one of them recursively, like below: - -``` -$ fdupes ~/Downloads -r ~/Documents/ostechnix - -``` - -The above command searches for duplicates in the “~/Downloads” directory and in the “~/Documents/ostechnix” directory and its sub-directories. - -Sometimes, you might want to know the size of the duplicates in a directory. If so, use the **-S** option like below. - -``` -$ fdupes -S ~/Downloads -403635 bytes each: -/home/sk/Downloads/Hyperledger.pdf -/home/sk/Downloads/Hyperledger(1).pdf - -``` - -Similarly, to view the size of the duplicates in parent and child directories, use the **-Sr** option. - -We can exclude empty and hidden files from consideration using the **-n** and **-A** options respectively. - -``` -$ fdupes -n ~/Downloads - -$ fdupes -A ~/Downloads - -``` - -The first command will exclude zero-length files from consideration and the latter will exclude hidden files from consideration while searching for duplicates in the specified directory. - -To summarize duplicate file information, use the **-m** option. - -``` -$ fdupes -m ~/Downloads -1 duplicate files (in 1 sets), occupying 403.6 kilobytes - -``` - -To delete all duplicates, use the **-d** option.
- -``` -$ fdupes -d ~/Downloads - -``` - -Sample output: - -``` -[1] /home/sk/Downloads/Hyperledger Fabric Installation.pdf -[2] /home/sk/Downloads/Hyperledger Fabric Installation(1).pdf - -Set 1 of 1, preserve files [1 - 2, all]: - -``` - -This command will prompt you for files to preserve and delete all other duplicates. Just enter any number to preserve the corresponding file and delete the remaining files. Pay extra attention while using this option; you might delete the original files if you’re not careful. - -If you want to preserve the first file in each set of duplicates and delete the others without prompting each time, use the **-dN** option (not recommended). - -``` -$ fdupes -dN ~/Downloads - -``` - -To delete duplicates as they are encountered, use the **-I** flag. - -``` -$ fdupes -I ~/Downloads - -``` - -For more details about Fdupes, view the help section and man pages. - -``` -$ fdupes --help - -$ man fdupes - -``` - -##### 3. FSlint - -**FSlint** is yet another duplicate file finder utility that I use from time to time to get rid of unnecessary duplicate files and free up disk space on my Linux system. Unlike the other two utilities, FSlint has both GUI and CLI modes, so it is a more user-friendly tool for newbies. FSlint finds not just duplicates, but also bad symlinks, bad names, temp files, bad IDs, empty directories, non-stripped binaries, and more. - -**Installing FSlint** - -FSlint is available in the [**AUR**][5], so you can install it using any AUR helper. - -``` -$ yay -S fslint - -``` - -On Debian, Ubuntu, Linux Mint: - -``` -$ sudo apt-get install fslint - -``` - -On Fedora: - -``` -$ sudo dnf install fslint - -``` - -On RHEL, CentOS: - -``` -$ sudo yum install epel-release - -$ sudo yum install fslint - -``` - -Once it is installed, launch it from the menu or application launcher. - -This is what the FSlint GUI looks like.
- -![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-1.png) - -As you can see, the interface of FSlint is user-friendly and self-explanatory. In the **Search path** tab, add the path of the directory you want to scan and click the **Find** button in the lower left corner to find the duplicates. Check the recurse option to recursively search for duplicates in directories and sub-directories. FSlint will quickly scan the given directory and list the duplicates. - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/fslint-2.png) - -From the list, choose the duplicates you want to clean and apply any one of the given actions, like Save, Delete, Merge, or Symlink. - -In the **Advanced search parameters** tab, you can specify the paths to exclude while searching for duplicates. - -![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-3.png) - -**FSlint command line options** - -FSlint provides a collection of the following CLI utilities to find duplicates in your filesystem: - - * **findup** — find DUPlicate files - * **findnl** — find Name Lint (problems with filenames) - * **findu8** — find filenames with invalid utf8 encoding - * **findbl** — find Bad Links (various problems with symlinks) - * **findsn** — find Same Name (problems with clashing names) - * **finded** — find Empty Directories - * **findid** — find files with dead user IDs - * **findns** — find Non Stripped executables - * **findrs** — find Redundant Whitespace in files - * **findtf** — find Temporary Files - * **findul** — find possibly Unused Libraries - * **zipdir** — Reclaim wasted space in ext2 directory entries - - - -All of these utilities are available under the **/usr/share/fslint/fslint/** directory.
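All three of these tools rely on the same underlying idea: comparing files by content rather than by name, using file sizes and checksums to avoid reading every file in full. The sketch below is a minimal, illustrative Python version of that size-then-checksum strategy (it is not how Rdfind, Fdupes, or FSlint is actually implemented, just the general technique):

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Return groups of paths under root whose contents are identical."""
    by_size = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            by_size[os.path.getsize(path)].append(path)

    groups = []
    for paths in by_size.values():
        if len(paths) < 2:
            continue  # a file with a unique size cannot have a duplicate
        by_digest = defaultdict(list)
        for path in paths:
            with open(path, "rb") as f:
                by_digest[hashlib.sha256(f.read()).hexdigest()].append(path)
        groups.extend(g for g in by_digest.values() if len(g) > 1)
    return groups
```

Like the tools above, this never looks at file names — two files with identical bytes are duplicates no matter what they are called. A production tool would also hash large files in chunks and, as Fdupes does, finish with a byte-by-byte comparison for verification.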
- -For example, to find duplicates in a given directory, do: - -``` -$ /usr/share/fslint/fslint/findup ~/Downloads/ - -``` - -Similarly, to find empty directories, the command would be: - -``` -$ /usr/share/fslint/fslint/finded ~/Downloads/ - -``` - -To get more details on each utility, for example **findup**, run: - -``` -$ /usr/share/fslint/fslint/findup --help - -``` - -For more details about FSlint, refer to the help section and man pages. - -``` -$ /usr/share/fslint/fslint/fslint --help - -$ man fslint - -``` - -##### Conclusion - -You now know about three tools to find and delete unwanted duplicate files in Linux. Among these three tools, I often use Rdfind. That doesn’t mean the other two utilities are not efficient; I am just happy with Rdfind so far. Well, it’s your turn. Which is your favorite tool and why? Let us know in the comment section below. - -And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned! - -Cheers! - - - -------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-find-and-delete-duplicate-files-in-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/ -[2]: https://www.ostechnix.com/explaining-soft-link-and-hard-link-in-linux-with-examples/ -[3]: https://aur.archlinux.org/packages/rdfind/ -[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ -[5]: https://aur.archlinux.org/packages/fslint/ diff --git a/sources/tech/20180927 Lab 2- Memory Management.md b/sources/tech/20180927 Lab 2- Memory Management.md new file mode 100644 index 0000000000..386bf6ceaf --- /dev/null +++ b/sources/tech/20180927 Lab 2- 
Memory Management.md @@ -0,0 +1,272 @@ +Lab 2: Memory Management +====== +### Lab 2: Memory Management + +#### Introduction + +In this lab, you will write the memory management code for your operating system. Memory management has two components. + +The first component is a physical memory allocator for the kernel, so that the kernel can allocate memory and later free it. Your allocator will operate in units of 4096 bytes, called _pages_. Your task will be to maintain data structures that record which physical pages are free and which are allocated, and how many processes are sharing each allocated page. You will also write the routines to allocate and free pages of memory. + +The second component of memory management is _virtual memory_ , which maps the virtual addresses used by kernel and user software to addresses in physical memory. The x86 hardware's memory management unit (MMU) performs the mapping when instructions use memory, consulting a set of page tables. You will modify JOS to set up the MMU's page tables according to a specification we provide. + +##### Getting started + +In this and future labs you will progressively build up your kernel. We will also provide you with some additional source. To fetch that source, use Git to commit changes you've made since handing in lab 1 (if any), fetch the latest version of the course repository, and then create a local branch called `lab2` based on our lab2 branch, `origin/lab2`: + +``` + athena% cd ~/6.828/lab + athena% add git + athena% git pull + Already up-to-date. + athena% git checkout -b lab2 origin/lab2 + Branch lab2 set up to track remote branch refs/remotes/origin/lab2. + Switched to a new branch "lab2" + athena% +``` + +The git checkout -b command shown above actually does two things: it first creates a local branch `lab2` that is based on the `origin/lab2` branch provided by the course staff, and second, it changes the contents of your `lab` directory to reflect the files stored on the `lab2` branch. 
Git allows switching between existing branches using git checkout _branch-name_ , though you should commit any outstanding changes on one branch before switching to a different one. + +You will now need to merge the changes you made in your `lab1` branch into the `lab2` branch, as follows: + +``` + athena% git merge lab1 + Merge made by recursive. + kern/kdebug.c | 11 +++++++++-- + kern/monitor.c | 19 +++++++++++++++++++ + lib/printfmt.c | 7 +++---- + 3 files changed, 31 insertions(+), 6 deletions(-) + athena% +``` + +In some cases, Git may not be able to figure out how to merge your changes with the new lab assignment (e.g. if you modified some of the code that is changed in the second lab assignment). In that case, the git merge command will tell you which files are _conflicted_ , and you should first resolve the conflict (by editing the relevant files) and then commit the resulting files with git commit -a. + +Lab 2 contains the following new source files, which you should browse through: + + * `inc/memlayout.h` + * `kern/pmap.c` + * `kern/pmap.h` + * `kern/kclock.h` + * `kern/kclock.c` + + + +`memlayout.h` describes the layout of the virtual address space that you must implement by modifying `pmap.c`. `memlayout.h` and `pmap.h` define the `PageInfo` structure that you'll use to keep track of which pages of physical memory are free. `kclock.c` and `kclock.h` manipulate the PC's battery-backed clock and CMOS RAM hardware, in which the BIOS records the amount of physical memory the PC contains, among other things. The code in `pmap.c` needs to read this device hardware in order to figure out how much physical memory there is, but that part of the code is done for you: you do not need to know the details of how the CMOS hardware works. + +Pay particular attention to `memlayout.h` and `pmap.h`, since this lab requires you to use and understand many of the definitions they contain. 
You may want to review `inc/mmu.h`, too, as it also contains a number of definitions that will be useful for this lab. + +Before beginning the lab, don't forget to add -f 6.828 to get the 6.828 version of QEMU. + +##### Lab Requirements + +In this lab and subsequent labs, do all of the regular exercises described in the lab and _at least one_ challenge problem. (Some challenge problems are more challenging than others, of course!) Additionally, write up brief answers to the questions posed in the lab and a short (e.g., one or two paragraph) description of what you did to solve your chosen challenge problem. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab2.txt` in the top level of your `lab` directory before handing in your work. + +##### Hand-In Procedure + +When you are ready to hand in your lab code and write-up, add your `answers-lab2.txt` to the Git repository, commit your changes, and then run make handin. + +``` + athena% git add answers-lab2.txt + athena% git commit -am "my answer to lab2" + [lab2 a823de9] my answer to lab2 + 4 files changed, 87 insertions(+), 10 deletions(-) + athena% make handin +``` + +As before, we will be grading your solutions with a grading program. You can run make grade in the `lab` directory to test your kernel with the grading program. You may change any of the kernel source and header files you need to in order to complete the lab, but needless to say you must not change or otherwise subvert the grading code. + +#### Part 1: Physical Page Management + +The operating system must keep track of which parts of physical RAM are free and which are currently in use. JOS manages the PC's physical memory with _page granularity_ so that it can use the MMU to map and protect each piece of allocated memory. + +You'll now write the physical page allocator. 
It keeps track of which pages are free with a linked list of `struct PageInfo` objects (which, unlike xv6, are not embedded in the free pages themselves), each corresponding to a physical page. You need to write the physical page allocator before you can write the rest of the virtual memory implementation, because your page table management code will need to allocate physical memory in which to store page tables. + +Exercise 1. In the file `kern/pmap.c`, you must implement code for the following functions (probably in the order given). + +`boot_alloc()` +`mem_init()` (only up to the call to `check_page_free_list(1)`) +`page_init()` +`page_alloc()` +`page_free()` + +`check_page_free_list()` and `check_page_alloc()` test your physical page allocator. You should boot JOS and see whether `check_page_alloc()` reports success. Fix your code so that it passes. You may find it helpful to add your own `assert()`s to verify that your assumptions are correct. + +This lab, and all the 6.828 labs, will require you to do a bit of detective work to figure out exactly what you need to do. This assignment does not describe all the details of the code you'll have to add to JOS. Look for comments in the parts of the JOS source that you have to modify; those comments often contain specifications and hints. You will also need to look at related parts of JOS, at the Intel manuals, and perhaps at your 6.004 or 6.033 notes. + +#### Part 2: Virtual Memory + +Before doing anything else, familiarize yourself with the x86's protected-mode memory management architecture: namely _segmentation_ and _page translation_. + +Exercise 2. Look at chapters 5 and 6 of the [Intel 80386 Reference Manual][1], if you haven't done so already. Read the sections about page translation and page-based protection closely (5.2 and 6.4). 
We recommend that you also skim the sections about segmentation; while JOS uses the paging hardware for virtual memory and protection, segment translation and segment-based protection cannot be disabled on the x86, so you will need a basic understanding of it. + +##### Virtual, Linear, and Physical Addresses + +In x86 terminology, a _virtual address_ consists of a segment selector and an offset within the segment. A _linear address_ is what you get after segment translation but before page translation. A _physical address_ is what you finally get after both segment and page translation and what ultimately goes out on the hardware bus to your RAM. + +``` + Selector +--------------+ +-----------+ + ---------->| | | | + | Segmentation | | Paging | +Software | |-------->| |----------> RAM + Offset | Mechanism | | Mechanism | + ---------->| | | | + +--------------+ +-----------+ + Virtual Linear Physical + +``` + +A C pointer is the "offset" component of the virtual address. In `boot/boot.S`, we installed a Global Descriptor Table (GDT) that effectively disabled segment translation by setting all segment base addresses to 0 and limits to `0xffffffff`. Hence the "selector" has no effect and the linear address always equals the offset of the virtual address. In lab 3, we'll have to interact a little more with segmentation to set up privilege levels, but as for memory translation, we can ignore segmentation throughout the JOS labs and focus solely on page translation. + +Recall that in part 3 of lab 1, we installed a simple page table so that the kernel could run at its link address of 0xf0100000, even though it is actually loaded in physical memory just above the ROM BIOS at 0x00100000. This page table mapped only 4MB of memory. In the virtual address space layout you are going to set up for JOS in this lab, we'll expand this to map the first 256MB of physical memory starting at virtual address 0xf0000000 and to map a number of other regions of the virtual address space. 
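Concretely, the page translation step splits each 32-bit linear address into three fields: a 10-bit page directory index, a 10-bit page table index, and a 12-bit offset within the page (JOS wraps this arithmetic in the `PDX`, `PTX`, and `PGOFF` macros in `inc/mmu.h`). Here is that split modeled in a few lines of Python rather than the C macros, using the kernel's link address from lab 1 as an example:

```python
PDXSHIFT = 22  # page directory index lives in bits 31..22
PTXSHIFT = 12  # page table index lives in bits 21..12

def pdx(la):
    """Page directory index of a 32-bit linear address."""
    return (la >> PDXSHIFT) & 0x3FF

def ptx(la):
    """Page table index of a 32-bit linear address."""
    return (la >> PTXSHIFT) & 0x3FF

def pgoff(la):
    """Offset within the 4096-byte page."""
    return la & 0xFFF

la = 0xF0100000  # the kernel's link address
print(hex(pdx(la)), hex(ptx(la)), hex(pgoff(la)))  # 0x3c0 0x100 0x0
```

Reassembling the three fields reproduces the original address, which is a handy sanity check once you start walking page tables in `pgdir_walk()`.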
+ +Exercise 3. While GDB can only access QEMU's memory by virtual address, it's often useful to be able to inspect physical memory while setting up virtual memory. Review the QEMU [monitor commands][2] from the lab tools guide, especially the `xp` command, which lets you inspect physical memory. To access the QEMU monitor, press Ctrl-a c in the terminal (the same binding returns to the serial console). + +Use the xp command in the QEMU monitor and the x command in GDB to inspect memory at corresponding physical and virtual addresses and make sure you see the same data. + +Our patched version of QEMU provides an info pg command that may also prove useful: it shows a compact but detailed representation of the current page tables, including all mapped memory ranges, permissions, and flags. Stock QEMU also provides an info mem command that shows an overview of which ranges of virtual addresses are mapped and with what permissions. + +From code executing on the CPU, once we're in protected mode (which we entered first thing in `boot/boot.S`), there's no way to directly use a linear or physical address. _All_ memory references are interpreted as virtual addresses and translated by the MMU, which means all pointers in C are virtual addresses. + +The JOS kernel often needs to manipulate addresses as opaque values or as integers, without dereferencing them, for example in the physical memory allocator. Sometimes these are virtual addresses, and sometimes they are physical addresses. To help document the code, the JOS source distinguishes the two cases: the type `uintptr_t` represents opaque virtual addresses, and `physaddr_t` represents physical addresses. Both these types are really just synonyms for 32-bit integers (`uint32_t`), so the compiler won't stop you from assigning one type to another! Since they are integer types (not pointers), the compiler _will_ complain if you try to dereference them. 
+ +The JOS kernel can dereference a `uintptr_t` by first casting it to a pointer type. In contrast, the kernel can't sensibly dereference a physical address, since the MMU translates all memory references. If you cast a `physaddr_t` to a pointer and dereference it, you may be able to load and store to the resulting address (the hardware will interpret it as a virtual address), but you probably won't get the memory location you intended. + +To summarize: + +| C type | Address type | +|--------------|--------------| +| `T*` | Virtual | +| `uintptr_t` | Virtual | +| `physaddr_t` | Physical | + +Question + + 1. Assuming that the following JOS kernel code is correct, what type should variable `x` have, `uintptr_t` or `physaddr_t`? + +``` + mystery_t x; + char* value = return_a_pointer(); + *value = 10; + x = (mystery_t) value; + +``` + + + + +The JOS kernel sometimes needs to read or modify memory for which it knows only the physical address. For example, adding a mapping to a page table may require allocating physical memory to store a page directory and then initializing that memory. However, the kernel cannot bypass virtual address translation and thus cannot directly load and store to physical addresses. One reason JOS remaps all of physical memory starting from physical address 0 at virtual address 0xf0000000 is to help the kernel read and write memory for which it knows just the physical address. In order to translate a physical address into a virtual address that the kernel can actually read and write, the kernel must add 0xf0000000 to the physical address to find its corresponding virtual address in the remapped region. You should use `KADDR(pa)` to do that addition. + +The JOS kernel also sometimes needs to be able to find a physical address given the virtual address of the memory in which a kernel data structure is stored. Kernel global variables and memory allocated by `boot_alloc()` are in the region where the kernel was loaded, starting at 0xf0000000, the very region where we mapped all of physical memory.
Thus, to turn a virtual address in this region into a physical address, the kernel can simply subtract 0xf0000000. You should use `PADDR(va)` to do that subtraction. + +##### Reference counting + +In future labs you will often have the same physical page mapped at multiple virtual addresses simultaneously (or in the address spaces of multiple environments). You will keep a count of the number of references to each physical page in the `pp_ref` field of the `struct PageInfo` corresponding to the physical page. When this count goes to zero for a physical page, that page can be freed because it is no longer used. In general, this count should be equal to the number of times the physical page appears below `UTOP` in all page tables (the mappings above `UTOP` are mostly set up at boot time by the kernel and should never be freed, so there's no need to reference count them). We'll also use it to keep track of the number of pointers we keep to the page directory pages and, in turn, of the number of references the page directories have to page table pages. + +Be careful when using `page_alloc`. The page it returns will always have a reference count of 0, so `pp_ref` should be incremented as soon as you've done something with the returned page (like inserting it into a page table). Sometimes this is handled by other functions (for example, `page_insert`) and sometimes the function calling `page_alloc` must do it directly. + +##### Page Table Management + +Now you'll write a set of routines to manage page tables: to insert and remove linear-to-physical mappings, and to create page table pages when needed. + +Exercise 4. In the file `kern/pmap.c`, you must implement code for the following functions. + +``` + + pgdir_walk() + boot_map_region() + page_lookup() + page_remove() + page_insert() + + +``` + +`check_page()`, called from `mem_init()`, tests your page table management routines. You should make sure it reports success before proceeding. 
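The reference-counting rules from the section above are easy to get subtly wrong, so it can help to see them as a toy model first. The following Python sketch is illustrative only — the real code is C in `kern/pmap.c`, and the real `page_insert` also handles re-mapping and TLB invalidation — but it captures the core invariant: `page_alloc` hands back a page with `pp_ref == 0`, each mapping inserted bumps the count, and the page returns to the free list only when the last mapping is removed:

```python
class PageInfo:
    def __init__(self):
        self.pp_ref = 0   # number of page-table mappings below UTOP

free_pages = [PageInfo() for _ in range(4)]

def page_alloc():
    """Return a free page; the caller is responsible for pp_ref."""
    return free_pages.pop()

def page_free(pp):
    assert pp.pp_ref == 0, "freeing a page that is still mapped"
    free_pages.append(pp)

def page_insert(page_table, va, pp):
    page_table[va] = pp
    pp.pp_ref += 1        # the new mapping references the page

def page_remove(page_table, va):
    pp = page_table.pop(va)
    pp.pp_ref -= 1
    if pp.pp_ref == 0:    # last mapping gone: the page can be freed
        page_free(pp)
```

Mapping the same physical page at two virtual addresses gives it `pp_ref == 2`, and it is only returned to the free list when the second mapping is removed — exactly the invariant the lab asks you to maintain.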
+ +#### Part 3: Kernel Address Space + +JOS divides the processor's 32-bit linear address space into two parts. User environments (processes), which we will begin loading and running in lab 3, will have control over the layout and contents of the lower part, while the kernel always maintains complete control over the upper part. The dividing line is defined somewhat arbitrarily by the symbol `ULIM` in `inc/memlayout.h`, reserving approximately 256MB of virtual address space for the kernel. This explains why we needed to give the kernel such a high link address in lab 1: otherwise there would not be enough room in the kernel's virtual address space to map in a user environment below it at the same time. + +You'll find it helpful to refer to the JOS memory layout diagram in `inc/memlayout.h` both for this part and for later labs. + +##### Permissions and Fault Isolation + +Since kernel and user memory are both present in each environment's address space, we will have to use permission bits in our x86 page tables to allow user code access only to the user part of the address space. Otherwise bugs in user code might overwrite kernel data, causing a crash or more subtle malfunction; user code might also be able to steal other environments' private data. Note that the writable permission bit (`PTE_W`) affects both user and kernel code! + +The user environment will have no permission to any of the memory above `ULIM`, while the kernel will be able to read and write this memory. For the address range `[UTOP,ULIM)`, both the kernel and the user environment have the same permission: they can read but not write this address range. This range of addresses is used to expose certain kernel data structures read-only to the user environment. Lastly, the address space below `UTOP` is for the user environment to use; the user environment will set permissions for accessing this memory.
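The behavior described above comes down to a few bits in each page table entry. The sketch below models the MMU's permission check in Python (the real `PTE_P`, `PTE_W`, and `PTE_U` bit definitions live in `inc/mmu.h`; the values used here are the standard x86 ones):

```python
PTE_P = 0x001  # Present: the mapping exists
PTE_W = 0x002  # Writeable (gates kernel writes too, as noted above)
PTE_U = 0x004  # User: accessible from user mode

def access_allowed(pte, user, write):
    """Model of the MMU's check for a single page table entry."""
    if not pte & PTE_P:
        return False  # unmapped: any access faults
    if user and not pte & PTE_U:
        return False  # kernel-only page, e.g. memory above ULIM
    if write and not pte & PTE_W:
        return False  # read-only page
    return True

# The [UTOP, ULIM) region: present and user-readable, but not writable.
readonly_shared = PTE_P | PTE_U
```

With these settings, both user and kernel code can read `readonly_shared` pages but neither can write them — exactly the arrangement used to expose certain kernel data structures read-only to user environments.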
+ +##### Initializing the Kernel Address Space + +Now you'll set up the address space above `UTOP`: the kernel part of the address space. `inc/memlayout.h` shows the layout you should use. You'll use the functions you just wrote to set up the appropriate linear to physical mappings. + +Exercise 5. Fill in the missing code in `mem_init()` after the call to `check_page()`. + +Your code should now pass the `check_kern_pgdir()` and `check_page_installed_pgdir()` checks. + +Question + + 2. What entries (rows) in the page directory have been filled in at this point? What addresses do they map and where do they point? In other words, fill out this table as much as possible: + | Entry | Base Virtual Address | Points to (logically): | + |-------|----------------------|---------------------------------------| + | 1023 | ? | Page table for top 4MB of phys memory | + | 1022 | ? | ? | + | . | ? | ? | + | . | ? | ? | + | . | ? | ? | + | 2 | 0x00800000 | ? | + | 1 | 0x00400000 | ? | + | 0 | 0x00000000 | [see next question] | + 3. We have placed the kernel and user environment in the same address space. Why will user programs not be able to read or write the kernel's memory? What specific mechanisms protect the kernel memory? + 4. What is the maximum amount of physical memory that this operating system can support? Why? + 5. How much space overhead is there for managing memory, if we actually had the maximum amount of physical memory? How is this overhead broken down? + 6. Revisit the page table setup in `kern/entry.S` and `kern/entrypgdir.c`. Immediately after we turn on paging, EIP is still a low number (a little over 1MB). At what point do we transition to running at an EIP above KERNBASE? What makes it possible for us to continue executing at a low EIP between when we enable paging and when we begin running at an EIP above KERNBASE? Why is this transition necessary? + + +``` +Challenge! We consumed many physical pages to hold the page tables for the KERNBASE mapping. 
Do a more space-efficient job using the PTE_PS ("Page Size") bit in the page directory entries. This bit was _not_ supported in the original 80386, but is supported on more recent x86 processors. You will therefore have to refer to [Volume 3 of the current Intel manuals][3]. Make sure you design the kernel to use this optimization only on processors that support it! +``` + +``` +Challenge! Extend the JOS kernel monitor with commands to: + + * Display in a useful and easy-to-read format all of the physical page mappings (or lack thereof) that apply to a particular range of virtual/linear addresses in the currently active address space. For example, you might enter `'showmappings 0x3000 0x5000'` to display the physical page mappings and corresponding permission bits that apply to the pages at virtual addresses 0x3000, 0x4000, and 0x5000. + * Explicitly set, clear, or change the permissions of any mapping in the current address space. + * Dump the contents of a range of memory given either a virtual or physical address range. Be sure the dump code behaves correctly when the range extends across page boundaries! + * Do anything else that you think might be useful later for debugging the kernel. (There's a good chance it will be!) +``` + + +##### Address Space Layout Alternatives + +The address space layout we use in JOS is not the only one possible. An operating system might map the kernel at low linear addresses while leaving the _upper_ part of the linear address space for user processes. x86 kernels generally do not take this approach, however, because one of the x86's backward-compatibility modes, known as _virtual 8086 mode_ , is "hard-wired" in the processor to use the bottom part of the linear address space, and thus cannot be used at all if the kernel is mapped there. 
+ +It is even possible, though much more difficult, to design the kernel so as not to have to reserve _any_ fixed portion of the processor's linear or virtual address space for itself, but instead effectively to allow user-level processes unrestricted use of the _entire_ 4GB of virtual address space - while still fully protecting the kernel from these processes and protecting different processes from each other! + +``` +Challenge! Each user-level environment maps the kernel. Change JOS so that the kernel has its own page table and so that a user-level environment runs with a minimal number of kernel pages mapped. That is, each user-level environment maps just enough pages so that the user-level environment can enter and leave the kernel correctly. You also have to come up with a plan for the kernel to read/write arguments to system calls. +``` + +``` +Challenge! Write up an outline of how a kernel could be designed to allow user environments unrestricted use of the full 4GB virtual and linear address space. Hint: do the previous challenge exercise first, which reduces the kernel to a few mappings in a user environment. Hint: the technique is sometimes known as " _follow the bouncing kernel_. " In your design, be sure to address exactly what has to happen when the processor transitions between kernel and user modes, and how the kernel would accomplish such transitions. Also describe how the kernel would access physical memory and I/O devices in this scheme, and how the kernel would access a user environment's virtual address space during system calls and the like. Finally, think about and describe the advantages and disadvantages of such a scheme in terms of flexibility, performance, kernel complexity, and other factors you can think of. +``` + +``` +Challenge! 
Since our JOS kernel's memory management system only allocates and frees memory on page granularity, we do not have anything comparable to a general-purpose `malloc`/`free` facility that we can use within the kernel. This could be a problem if we want to support certain types of I/O devices that require _physically contiguous_ buffers larger than 4KB in size, or if we want user-level environments, and not just the kernel, to be able to allocate and map 4MB _superpages_ for maximum processor efficiency. (See the earlier challenge problem about PTE_PS.) + +Generalize the kernel's memory allocation system to support pages of a variety of power-of-two allocation unit sizes from 4KB up to some reasonable maximum of your choice. Be sure you have some way to divide larger allocation units into smaller ones on demand, and to coalesce multiple small allocation units back into larger units when possible. Think about the issues that might arise in such a system. +``` + +**This completes the lab.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab2.txt`. Commit your changes (including adding `answers-lab2.txt`) and type make handin in the `lab` directory to hand in your lab. 
+ +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/labs/lab2/ + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://pdos.csail.mit.edu +[b]: https://github.com/lujun9972 +[1]: https://pdos.csail.mit.edu/6.828/2018/readings/i386/toc.htm +[2]: https://pdos.csail.mit.edu/6.828/2018/labguide.html#qemu +[3]: https://pdos.csail.mit.edu/6.828/2018/readings/ia32/IA32-3A.pdf diff --git a/sources/tech/20180928 10 handy Bash aliases for Linux.md b/sources/tech/20180928 10 handy Bash aliases for Linux.md deleted file mode 100644 index 7ae1070997..0000000000 --- a/sources/tech/20180928 10 handy Bash aliases for Linux.md +++ /dev/null @@ -1,118 +0,0 @@ -translating---geekpi - -10 handy Bash aliases for Linux -====== -Get more efficient by using condensed versions of long Bash commands. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U) - -How many times have you repeatedly typed out a long command on the command line and wished there was a way to save it for later? This is where Bash aliases come in handy. They allow you to condense long, cryptic commands down to something easy to remember and use. Need some examples to get you started? No problem! - -To use a Bash alias you've created, you need to add it to your .bash_profile file, which is located in your home folder. Note that this file is hidden and accessible only from the command line. The easiest way to work with this file is to use something like Vi or Nano. - -### 10 handy Bash aliases - - 1. How many times have you needed to unpack a .tar file and couldn't remember the exact arguments needed? Aliases to the rescue! 
Just add the following to your .bash_profile file and then use **untar FileName** to unpack any .tar file. - - - -``` -alias untar='tar -zxvf ' - -``` - - 2. Want to download something but be able to resume if something goes wrong? - - - -``` -alias wget='wget -c ' - -``` - - 3. Need to generate a random, 20-character password for a new online account? No problem. - - - -``` -alias getpass="openssl rand -base64 20" - -``` - - 4. Downloaded a file and need to test the checksum? We've got that covered too. - - - -``` -alias sha='shasum -a 256 ' - -``` - - 5. A normal ping will go on forever. We don't want that. Instead, let's limit that to just five pings. - - - -``` -alias ping='ping -c 5' - -``` - - 6. Start a web server in any folder you'd like. - - - -``` -alias www='python -m SimpleHTTPServer 8000' - -``` - - 7. Want to know how fast your network is? Just download Speedtest-cli and use this alias. You can choose a server closer to your location by using the **speedtest-cli --list** command. - - - -``` -alias speed='speedtest-cli --server 2406 --simple' - -``` - - 8. How many times have you needed to know your external IP address and had no idea how to get that info? Yeah, me too. - - - -``` -alias ipe='curl ipinfo.io/ip' - -``` - - 9. Need to know your local IP address? - - - -``` -alias ipi='ipconfig getifaddr en0' - -``` - - 10. Finally, let's clear the screen. - - - -``` -alias c='clear' - -``` - -As you can see, Bash aliases are a super-easy way to simplify your life on the command line. Want more info? I recommend a quick Google search for "Bash aliases" or a trip to GitHub. 
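As a final, hypothetical sketch of putting this into practice — the temporary file and the subset of aliases below are purely illustrative — here is how a few of the aliases above can be collected into one file and loaded even from a non-interactive script:

```shell
#!/usr/bin/env bash
# Illustrative sketch: collect a few of the aliases above in one file and
# load them. A temporary file keeps the sketch side-effect free; in
# practice you would append these lines to ~/.bash_profile instead.

alias_file="$(mktemp)"

cat > "$alias_file" <<'EOF'
alias untar='tar -zxvf '
alias ping='ping -c 5'
alias c='clear'
EOF

# Bash expands aliases only in interactive shells by default, so a
# script must opt in before sourcing the file.
shopt -s expand_aliases
source "$alias_file"

# Print the definitions to confirm they are active.
alias untar ping c
```

After editing your real .bash_profile, run **source ~/.bash_profile** so the new aliases take effect in your current session.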
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/handy-bash-aliases - -作者:[Patrick H.Mullins][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/pmullins diff --git a/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md b/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md deleted file mode 100644 index afb66e43ee..0000000000 --- a/sources/tech/20180928 A Free And Secure Online PDF Conversion Suite.md +++ /dev/null @@ -1,111 +0,0 @@ -A Free And Secure Online PDF Conversion Suite -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-720x340.jpg) - -We are always in search for a better and more efficient solution that can make our lives more convenient. That is why when you are working with PDF documents you need a fast and reliable tool that you can use in every situation. Therefore, we wanted to introduce you to **EasyPDF** Online PDF Suite for every occasion. The promise behind this tool is that it can make your PDF management easier and we tested it to check that claim. - -But first, here are the most important things you need to know about EasyPDF: - - * EasyPDF is free and anonymous online PDF Conversion Suite. - * Convert PDF to Word, Excel, PowerPoint, AutoCAD, JPG, GIF and Text. - * Create PDF from Word, PowerPoint, JPG, Excel files and many other formats. - * Manipulate PDFs with PDF Merge, Split and Compress. - * OCR conversion of scanned PDFs and images. - * Upload files from your device or the Cloud (Google Drive and DropBox). - * Available on Windows, Linux, Mac, and smartphones via any browser. - * Multiple languages supported. 
- - - -### EasyPDF User Interface - -![](http://www.ostechnix.com/wp-content/uploads/2018/09/easypdf-interface.png) - -One of the first things that catches your eye is the sleek user interface which gives the tool clean and functional environment in where you can work comfortably. The whole experience is even better because there are no ads on a website at all. - -All different types of conversions have their dedicated menu with a simple box to add files, so you don’t have to wonder about what you need to do. - -Most websites aren’t optimized to work well and run smoothly on mobile phones, but EasyPDF is an exception from that rule. It opens almost instantly on smartphone and is easy to navigate. You can also add it as the shortcut on your home screen from the **three dots menu** on the Chrome app. - -![](http://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-fs8.png) - -### Functionality - -Apart from looking nice, EasyPDF is pretty straightforward to use. You **don’t need to register** or leave an **email** to use the tool. It is completely anonymous. Additionally, it doesn’t put any limitations to the number or size of files for conversion. No installation required either! Cool, yeah? - -You choose a desired conversion format, for example, PDF to Word. Select the PDF file you want to convert. You can upload a file from the device by either drag & drop or selecting the file from the folder. There is also an option to upload a document from [**Google Drive**][1] or [**Dropbox**][2]. - -After you choose the file, press the Convert button to start the conversion process. You won’t wait for a long time to get your file because conversion will finish in a minute. If you have some more files to convert, remember to download the file before you proceed further. If you don’t download the document first, you will lose it. - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF1.png) - -For a different type of conversion, return to the homepage. 
- -The currently available types of conversions are: - - * **PDF to Word** – Convert PDF documents to Word documents - - * **PDF to PowerPoint** – Convert PDF documents to PowerPoint Presentations - - * **PDF to Excel** – Convert PDF documents to Excel documents - - * **PDF Creation** – Create PDF documents from any type of file (E.g text, doc, odt) - - * **Word to PDF** – Convert Word documents to PDF documents - - * **JPG to PDF** – Convert JPG images to PDF documents - - * **PDF to AutoCAD** – Convert PDF documents to .dwg format (DWG is native format for CAD packages) - - * **PDF to Text** – Convert PDF documents to Text documents - - * **PDF Split** – Split PDF files into multiple parts - - * **PDF Merge** – Merge multiple PDF files into one - - * **PDF Compress** – Compress PDF documents - - * **PDF to JPG** – Convert PDF documents to JPG images - - * **PDF to PNG** – Convert PDF documents to PNG images - - * **PDF to GIF** – Convert PDF documents to GIF files - - * **OCR Online** – - -Convert scanned paper documents - -to editable files (E.g Word, Excel, Text) - - - - -Want to give it a try? Great! Click the following link and start converting! - -[![](https://www.ostechnix.com/wp-content/uploads/2018/09/EasyPDF-online-pdf.png)][https://easypdf.com/] - -### Conclusion - -EasyPDF lives up to its name and enables easier PDF management. As far as I tested EasyPDF service, It offers out of the box conversion feature completely **FREE!** It is fast, secure and reliable. You will find the quality of services most satisfying without having to pay anything or leaving your personal data like email address. Give it a try and who knows maybe you will find your new favorite PDF tool. - -And, that’s all for now. More good stuffs to come. Stay tuned! - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/easypdf-a-free-and-secure-online-pdf-conversion-suite/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/ -[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ diff --git a/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md b/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md deleted file mode 100644 index 578624aba4..0000000000 --- a/sources/tech/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md +++ /dev/null @@ -1,233 +0,0 @@ -Translating by dianbanjiu How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions -====== -**Brief: This tutorial shows you how to install Popcorn Time on Ubuntu and other Linux distributions. Some handy Popcorn Time tips have also been discussed.** - -[Popcorn Time][1] is an open source [Netflix][2] inspired [torrent][3] streaming application for Linux, Mac and Windows. - -With the regular torrents, you have to wait for the download to finish before you could watch the videos. - -[Popcorn Time][4] is different. It uses torrent underneath but allows you to start watching the videos (almost) immediately. It’s like you are watching videos on streaming websites like YouTube or Netflix. You don’t have to wait for the download to finish here. - -![Popcorn Time in Ubuntu Linux][5] -Popcorn Time - -If you want to watch movies online without those creepy ads, Popcorn Time is a good alternative. 
Keep in mind that the streaming quality depends on the number of available seeds. - -Popcorn Time also provides a nice user interface where you can browse through available movies, tv-series and other contents. If you ever used [Netflix on Linux][6], you will find it’s somewhat a similar experience. - -Using torrent to download movies is illegal in several countries where there are strict laws against piracy. In countries like the USA, UK and West European you may even get legal notices. That said, it’s up to you to decide if you want to use it or not. You have been warned. -(If you still want to take the risk and use Popcorn Time, you should use a VPN service like [Ivacy][7] that has been specifically designed for using Torrents and protecting your identity. Even then it’s not always easy to avoid the snooping authorities.) - -Some of the main features of Popcorn Time are: - - * Watch movies and TV Series online using Torrent - * A sleek user interface lets you browse the available movies and TV series - * Change streaming quality - * Bookmark content for watching later - * Download content for offline viewing - * Ability to enable subtitles by default, change the subtitles size etc - * Keyboard shortcuts to navigate through Popcorn Time - - - -### How to install Popcorn Time on Ubuntu and other Linux Distributions - -I am using Ubuntu 18.04 in this tutorial but you can use the same instructions for other Linux distributions such as Linux Mint, Debian, Manjaro, Deepin etc. - -Let’s see how to install Popcorn time on Linux. It’s really easy actually. Simply follow the instructions and copy paste the commands I have mentioned. - -#### Step 1: Download Popcorn Time - -You can download Popcorn Time from its official website. The download link is present on the homepage itself. - -[Get Popcorn Time](https://popcorntime.sh/) - -#### Step 2: Install Popcorn Time - -Once you have downloaded Popcorn Time, it’s time to use it. 
The downloaded file is a tar file that consists of an executable among other files. While you can extract this tar file anywhere, the [Linux convention is to install additional software in][8] /[opt directory.][8] - -Create a new directory in /opt: - -``` -sudo mkdir /opt/popcorntime -``` - -Now go to the Downloads directory. - -``` -cd ~/Downloads -``` - -Extract the downloaded Popcorn Time files into the newly created /opt/popcorntime directory. - -``` -sudo tar Jxf Popcorn-Time-* -C /opt/popcorntime -``` - -#### Step 3: Make Popcorn Time accessible for everyone - -You would want every user on your system to be able to run Popcorn Time without sudo access, right? To do that, you need to create a [symbolic link][9] to the executable in /usr/bin directory. - -``` -ln -sf /opt/popcorntime/Popcorn-Time /usr/bin/Popcorn-Time -``` - -#### Step 4: Create desktop launcher for Popcorn Time - -So far so good. But you would also like to see Popcorn Time in the application menu, add it to your favorite application list etc. - -For that, you need to create a desktop entry. - -Open a terminal and create a new file named popcorntime.desktop in /usr/share/applications. - -You can use any [command line based text editor][10]. Ubuntu has [Nano][11] installed by default so you can use that. - -``` -sudo nano /usr/share/applications/popcorntime.desktop -``` - -Insert the following lines here: - -``` -[Desktop Entry] -Version = 1.0 -Type = Application -Terminal = false -Name = Popcorn Time -Exec = /usr/bin/Popcorn-Time -Icon = /opt/popcorntime/popcorn.png -Categories = Application; -``` - -If you used Nano editor, save it using shortcut Ctrl+X. When asked for saving, enter Y and then press enter again to save and exit. - -We are almost there. One last thing to do here is to have the correct icon for Popcorn Time. For that, you can download a Popcorn Time icon and save it as popcorn.png in /opt/popcorntime directory. 
- -You can do that using the command below: - -``` -sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia/commons/d/df/Pctlogo.png - -``` - -That's it. Now you can search for Popcorn Time and click on it to launch it. - -![Popcorn Time installed on Ubuntu][12] -Search for Popcorn Time in Menu - -On the first launch, you'll have to accept the terms and conditions. - -![Popcorn Time in Ubuntu Linux][13] -Accept the Terms of Service - -Once you do that, you can enjoy the movies and TV shows. - -![Watch movies on Popcorn Time][14] - -Well, that's all you needed to install Popcorn Time on Ubuntu or any other Linux distribution. You can start watching your favorite movies straightaway. - -However, if you are interested, I would suggest reading these Popcorn Time tips to get more out of it. - -[![][15]][16] -![][17] - -### 7 Tips for using Popcorn Time effectively - -Now that you have installed Popcorn Time, I am going to tell you some nifty Popcorn Time tricks. I assure you that they will greatly enhance your experience with Popcorn Time. - -#### 1\. Use advanced settings - -Always have the advanced settings enabled. It gives you more options to tweak Popcorn Time. Go to the top right corner and click on the gear symbol, then check advanced settings on the next screen. - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tricks.jpeg) - -#### 2\. Watch the movies in VLC or other players - -Did you know that you can choose to watch a file in your preferred media player instead of the default Popcorn Time player? Of course, that media player should have been installed in the system. - -Now you may ask why one would want to use another player. My answer is that other players like VLC have hidden features which you might not find in the Popcorn Time player. - -For example, if a file has very low volume, you can use VLC to enhance the audio by 400 percent. 
You can also [synchronize incoherent subtitles with VLC][18]. You can switch between media players before you start to play a file: - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks_1.png) - -#### 3\. Bookmark movies and watch it later - -Just browsing through movies and TV series but don’t have time or mood to watch those? No issues. You can add the movies to the bookmark and can access these bookmarked videos from the Favorites tab. This enables you to create a list of movies you would watch later. - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks2.png) - -#### 4\. Check torrent health and seed information - -As I had mentioned earlier, your viewing experience in Popcorn Times depends on torrent speed. Good thing is that Popcorn time shows the health of the torrent file so that you can be aware of the streaming speed. - -You will see a green/yellow/red dot on the file. Green means there are plenty of seeds and the file will stream easily. Yellow means a medium number of seeds, streaming should be okay. Red means there are very few seeds available and the streaming will be poor or won’t work at all. - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks3.jpg) - -#### 5\. Add custom subtitles - -If you need subtitles and it is not available in your preferred language, you can add custom subtitles downloaded from external websites. Get the .srt files and use it inside Popcorn Time: - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocporn_Time_Tricks5.png) - -This is where VLC comes handy as you can [download subtitles automatically with VLC][19]. - - -#### 6\. Save the files for offline viewing - -When Popcorn Times stream a content, it downloads it and store temporarily. When you close the app, it’s cleaned out. You can change this behavior so that the downloaded file remains there for your future use. 
- -In the advanced settings, scroll down a bit. Look for Cache directory. You can change this to some other directory like Downloads. This way, even if you close Popcorn Time, the file will be available for viewing. - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Popcorn_Time_Tips.jpg) - -#### 7\. Drag and drop external torrent files to play immediately - -I bet you did not know about this one. If you don’t find a certain movie on Popcorn Time, download the torrent file from your favorite torrent website. Open Popcorn Time and just drag and drop the torrent file in Popcorn Time. It will start playing the file, depending upon seeds. This way, you don’t need to download the entire file before watching it. - -When you drag and drop the torrent file in Popcorn Time, it will give you the option to choose which video file should it play. If there are subtitles in it, it will play automatically or else, you can add external subtitles. - -![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2015/01/Pocorn_Time_Tricks4.png) - -There are plenty of other features in Popcorn Time. But I’ll stop with my list here and let you explore Popcorn Time on Ubuntu Linux. I hope you find these Popcorn Time tips and tricks useful. - -I am repeating again. Using Torrents is illegal in many countries. If you do that, take precaution and use a VPN service. If you are looking for my recommendation, you can go for [Swiss-based privacy company ProtonVPN][20] (of [ProtonMail][21] fame). Singapore based [Ivacy][7] is another good option. If you think these are expensive, you can look for [cheap VPN deals on It’s FOSS Shop][22]. - -Note: This article contains affiliate links. Please read our [affiliate policy][23]. 
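For reference, installation steps 1 through 3 of this tutorial can be collected into one small function and rehearsed end to end. This is only a sketch: the `install_popcorn_time` name, the dummy archive, and the throwaway directories are stand-ins so the steps can be exercised without sudo or a real Popcorn Time download, while the `/opt/popcorntime` and `/usr/bin` defaults match the tutorial. `tar -xf` auto-detects compression, so it handles the `.tar.xz` release archives as well as the gzip dummy used here.

```shell
#!/usr/bin/env bash
# Sketch of install steps 1-3 above (directory, extraction, symlink).
# Defaults mirror the tutorial; the dry run swaps in temporary paths
# and a dummy archive so no root access or download is required.
set -euo pipefail

install_popcorn_time() {
    local tarball="$1"
    local prefix="${2:-/opt/popcorntime}"
    local bindir="${3:-/usr/bin}"

    mkdir -p "$prefix"                                    # step 1: create the /opt directory
    tar -xf "$tarball" -C "$prefix"                       # step 2: extract the archive
    ln -sf "$prefix/Popcorn-Time" "$bindir/Popcorn-Time"  # step 3: symlink onto PATH
}

# Dry run against a dummy archive in a throwaway directory:
work="$(mktemp -d)"
mkdir -p "$work/stage" "$work/bin"
cat > "$work/stage/Popcorn-Time" <<'EOF'
#!/bin/sh
echo "Popcorn Time placeholder"
EOF
chmod +x "$work/stage/Popcorn-Time"
tar -C "$work/stage" -czf "$work/Popcorn-Time-test.tar.gz" Popcorn-Time

install_popcorn_time "$work/Popcorn-Time-test.tar.gz" "$work/opt" "$work/bin"
"$work/bin/Popcorn-Time"
```

On a real system you would call the function with sudo on the downloaded `Popcorn-Time-*` archive and then add the desktop entry exactly as shown in step 4.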
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/popcorn-time-ubuntu-linux/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]: https://popcorntime.sh/ -[2]: https://netflix.com/ -[3]: https://en.wikipedia.org/wiki/Torrent_file -[4]: https://en.wikipedia.org/wiki/Popcorn_Time -[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-linux.jpeg -[6]: https://itsfoss.com/netflix-firefox-linux/ -[7]: https://billing.ivacy.com/page/23628 -[8]: http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/opt.html -[9]: https://en.wikipedia.org/wiki/Symbolic_link -[10]: https://itsfoss.com/command-line-text-editors-linux/ -[11]: https://itsfoss.com/nano-3-release/ -[12]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-menu.jpg -[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-ubuntu-license.jpeg -[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/popcorn-time-watch-movies.jpeg -[15]: https://ivacy.postaffiliatepro.com/accounts/default1/vdegzkxbw/7f82d531.png -[16]: https://billing.ivacy.com/page/23628/7f82d531 -[17]: http://ivacy.postaffiliatepro.com/scripts/vdegzkxiw?aff=23628&a_bid=7f82d531 -[18]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/ -[19]: https://itsfoss.com/download-subtitles-automatically-vlc-media-player-ubuntu/ -[20]: https://protonvpn.net/?aid=chmod777 -[21]: https://itsfoss.com/protonmail/ -[22]: https://shop.itsfoss.com/search?utf8=%E2%9C%93&query=vpn -[23]: https://itsfoss.com/affiliate-policy/ diff --git a/sources/tech/20180928 What containers can teach us about DevOps.md b/sources/tech/20180928 
What containers can teach us about DevOps.md deleted file mode 100644 index 33f83fb0f7..0000000000 --- a/sources/tech/20180928 What containers can teach us about DevOps.md +++ /dev/null @@ -1,100 +0,0 @@ -认领:by sd886393 -What containers can teach us about DevOps -====== - -The use of containers supports the three pillars of DevOps practices: flow, feedback, and continual experimentation and learning. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf) - -One can argue that containers and DevOps were made for one another. Certainly, the container ecosystem benefits from the skyrocketing popularity of DevOps practices, both in design choices and in DevOps’ use by teams developing container technologies. Because of this parallel evolution, the use of containers in production can teach teams the fundamentals of DevOps and its three pillars: [The Three Ways][1]. - -### Principles of flow - -**Container flow** - -A container can be seen as a silo, and from inside, it is easy to forget the rest of the system: the host node, the cluster, the underlying infrastructure. Inside the container, it might appear that everything is functioning in an acceptable manner. From the outside perspective, though, the application inside the container is a part of a larger ecosystem of applications that make up a service: the web API, the web app user interface, the database, the workers, and caching services and garbage collectors. Teams put constraints on the container to limit performance impact on infrastructure, and much has been done to provide metrics for measuring container performance because overloaded or slow container workloads have downstream impact on other services or customers. - -**Real-world flow** - -This lesson can be applied to teams functioning in a silo as well. 
Every process (be it code release, infrastructure creation or even, say, manufacturing of [Spacely’s Sprockets][2]), follows a linear path from conception to realization. In technology, this progress flows from development to testing to operations and release. If a team working alone becomes a bottleneck or introduces a problem, the impact is felt all along the entire pipeline. A defect passed down the line destroys productivity downstream. While the broken process within the scope of the team itself may seem perfectly correct, it has a negative impact on the environment as a whole. - -**DevOps and flow** - -The first way of DevOps, principles of flow, is about approaching the process as a whole, striving to comprehend how the system works together and understanding the impact of issues on the entire process. To increase the efficiency of the process, pain points and waste are identified and removed. This is an ongoing process; teams must continually strive to increase visibility into the process and find and fix trouble spots and waste. - -> “The outcomes of putting the First Way into practice include never passing a known defect to downstream work centers, never allowing local optimization to create global degradation, always seeking to increase flow, and always seeking to achieve a profound understanding of the system (as per Deming).” - -–Gene Kim, [The Three Ways: The Principles Underpinning DevOps][3], IT Revolution, 25 Apr. 2017 - -### Principles of feedback - -**Container feedback** - -In addition to limiting containers to prevent impact elsewhere, many products have been created to monitor and trend container metrics in an effort to understand what they are doing and notify when they are misbehaving. [Prometheus][4], for example, is [all the rage][5] for collecting metrics from containers and clusters. 
Containers are excellent at separating applications and providing a way to ship an environment together with the code, sometimes at the cost of opacity, so much is done to try to provide rapid feedback so issues can be addressed promptly within the silo. - -**Real-world feedback** - -The same is necessary for the flow of the system. From inception to realization, an efficient process quickly provides relevant feedback to identify when there is an issue. The key words here are “quick” and “relevant.” Burying teams in thousands of irrelevant notifications make it difficult or even impossible to notice important events that need immediate action, and receiving even relevant information too late may allow small, easily solved issues to move downstream and become bigger problems. Imagine [if Lucy and Ethel][6] had provided immediate feedback that the conveyor belt was too fast—there would have been no problem with the chocolate production (though that would not have been nearly as funny). - -**DevOps and feedback** - -The Second Way of DevOps, principles of feedback, is all about getting relevant information quickly. With immediate, useful feedback, problems can be identified as they happen and addressed before impact is felt elsewhere in the development process. DevOps teams strive to “optimize for downstream” and immediately move to fix problems that might impact other teams that come after them. As with flow, feedback is a continual process to identify ways to quickly get important data and act on problems as they occur. 
- -> “Creating fast feedback is critical to achieving quality, reliability, and safety in the technology value stream.” - -–Gene Kim, et al., The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations, IT Revolution Press, 2016 - -### Principles of continual experimentation and learning - -**Container continual experimentation and learning** - -It is a bit more challenging applying operational learning to the Third Way of DevOps:continual experimentation and learning. Trying to salvage what we can grasp of the very edges of the metaphor, containers make development easy, allowing developers and operations teams to test new code or configurations locally and safely outside of production and incorporate discovered benefits into production in a way that was difficult in the past. Changes can be radical and still version-controlled, documented, and shared quickly and easily. - -**Real-world continual experimentation and learning** - -For example, consider this anecdote from my own experience: Years ago, as a young, inexperienced sysadmin (just three weeks into the job), I was asked to make changes to an Apache virtual host running the website of the central IT department for a university. Without an easy-to-use test environment, I made a configuration change to the production site that I thought would accomplish the task and pushed it out. Within a few minutes, I overheard coworkers in the next cube: - -“Wait, is the website down?” - -“Hrm, yeah, it looks like it. What the heck?” - -There was much eye-rolling involved. - -Mortified (the shame is real, folks), I sunk down as far as I could into my seat and furiously tried to back out the changes I’d introduced. Later that same afternoon, the director of the department—the boss of my boss’s boss—appeared in my cube to talk about what had happened. “Don’t worry,” she told me. “We’re not mad at you. 
It was a mistake and now you have learned.” - -In the world of containers, this could have been easily changed and tested on my own laptop and the broken configuration identified by more skilled team members long before it ever made it into production. - -**DevOps continual experimentation and learning** - -A real culture of experimentation promotes the individual’s ability to find where a change in the process may be beneficial, and to test that assumption without the fear of retaliation if they fail. For DevOps teams, failure becomes an educational tool that adds to the knowledge of the individual and organization, rather than something to be feared or punished. Individuals in the DevOps team dedicate themselves to continuous learning, which in turn benefits the team and wider organization as that knowledge is shared. - -As the metaphor completely falls apart, focus needs to be given to a specific point: The other two principles may appear at first glance to focus entirely on process, but continual learning is a human task—important for the future of the project, the person, the team, and the organization. It has an impact on the process, but it also has an impact on the individual and other people. - -> “Experimentation and risk-taking are what enable us to relentlessly improve our system of work, which often requires us to do things very differently than how we’ve done it for decades.” - -–Gene Kim, et al., [The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win][7], IT Revolution Press, 2013 - -### Containers can teach us DevOps - -Learning to work effectively with containers can help teach DevOps and the Three Ways: principles of flow, principles of feedback, and principles of continuous experimentation and learning. 
Looking holistically at the application and infrastructure rather than putting on blinders to everything outside the container teaches us to take all parts of the system and understand their upstream and downstream impacts, break out of silos, and work as a team to increase global performance and deep understanding of the entire system. Working to provide timely and accurate feedback teaches us to create effective feedback patterns within our organizations to identify problems before their impact grows. Finally, providing a safe environment to try new ideas and learn from them teaches us to create a culture where failure represents a positive addition to our knowledge and the ability to take big chances with educated guesses can result in new, elegant solutions to complex problems. - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/containers-can-teach-us-devops - -作者:[Chris Hermansen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen -[1]: https://itrevolution.com/the-three-ways-principles-underpinning-devops/ -[2]: https://en.wikipedia.org/wiki/The_Jetsons -[3]: http://itrevolution.com/the-three-ways-principles-underpinning-devops -[4]: https://prometheus.io/ -[5]: https://opensource.com/article/18/9/prometheus-operational-advantage -[6]: https://www.youtube.com/watch?v=8NPzLBSBzPI -[7]: https://itrevolution.com/book/the-phoenix-project/ diff --git a/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md b/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md new file mode 100644 index 0000000000..9e07971c81 --- /dev/null +++ b/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md @@ -0,0 +1,261 @@ +16 iptables tips and tricks for 
sysadmins +====== +Iptables provides powerful capabilities to control traffic coming in and out of your system. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg) + +Modern Linux kernels come with a packet-filtering framework named [Netfilter][1]. Netfilter enables you to allow, drop, and modify traffic coming in and going out of a system. The **iptables** userspace command-line tool builds upon this functionality to provide a powerful firewall, which you can configure by adding rules to form a firewall policy. [iptables][2] can be very daunting with its rich set of capabilities and baroque command syntax. Let's explore some of them and develop a set of iptables tips and tricks for many situations a system administrator might encounter. + +### Avoid locking yourself out + +Scenario: You are going to make changes to the iptables policy rules on your company's primary server. You want to avoid locking yourself—and potentially everybody else—out. (This costs time and money and causes your phone to ring off the wall.) + +#### Tip #1: Take a backup of your iptables configuration before you start working on it. + +Back up your configuration with the command: + +``` +/sbin/iptables-save > /root/iptables-works + +``` +#### Tip #2: Even better, include a timestamp in the filename. + +Add the timestamp with the command: + +``` +/sbin/iptables-save > /root/iptables-works-`date +%F` + +``` + +You get a file with a name like: + +``` +/root/iptables-works-2018-09-11 + +``` + +If you do something that prevents your system from working, you can quickly restore it: + +``` +/sbin/iptables-restore < /root/iptables-works-2018-09-11 + +``` + +#### Tip #3: Every time you create a backup copy of the iptables policy, create a link to the file with 'latest' in the name. 
+
+```
+ln -s /root/iptables-works-`date +%F` /root/iptables-works-latest
+
+```
+
+#### Tip #4: Put specific rules at the top of the policy and generic rules at the bottom.
+
+Avoid generic rules like this at the top of the policy rules:
+
+```
+iptables -A INPUT -p tcp --dport 22 -j DROP
+
+```
+
+The more criteria you specify in the rule, the less chance you will have of locking yourself out. Instead of the very generic rule above, use something like this:
+
+```
+iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -d 192.168.100.101 -j DROP
+
+```
+
+This rule appends ( **-A** ) to the **INPUT** chain a rule that will **DROP** any packets originating from the CIDR block **10.0.0.0/8** on TCP ( **-p tcp** ) port 22 ( **\--dport 22** ) destined for IP address 192.168.100.101 ( **-d 192.168.100.101** ).
+
+There are plenty of ways you can be more specific. For example, using **-i eth0** will limit the processing to a single NIC in your server. This way, the filtering actions will not apply the rule to **eth1**.
+
+#### Tip #5: Whitelist your IP address at the top of your policy rules.
+
+This is a very effective method of not locking yourself out. Everybody else, not so much.
+
+```
+iptables -I INPUT -s <your-ip-address> -j ACCEPT
+
+```
+
+You need to put this as the first rule for it to work properly. Remember, **-I** inserts it as the first rule; **-A** appends it to the end of the list.
+
+#### Tip #6: Know and understand all the rules in your current policy.
+
+Not making a mistake in the first place is half the battle. If you understand the inner workings behind your iptables policy, it will make your life easier. Draw a flowchart if you must. Also remember: What the policy does and what it is supposed to do can be two different things.
+
+### Set up a workstation firewall policy
+
+Scenario: You want to set up a workstation with a restrictive firewall policy.
+
+#### Tip #1: Set the default policy as DROP.
+
+```
+# Set a default policy of DROP
+*filter
+:INPUT DROP [0:0]
+:FORWARD DROP [0:0]
+:OUTPUT DROP [0:0]
+COMMIT
+```
+
+#### Tip #2: Allow users the minimum number of services needed to get their work done.
+
+The iptables rules need to allow the workstation to get an IP address, netmask, and other important information via DHCP ( **-p udp --dport 67:68 --sport 67:68** ). For remote management, the rules need to allow inbound SSH ( **\--dport 22** ), outbound mail ( **\--dport 25** ), DNS ( **\--dport 53** ), outbound ping ( **-p icmp** ), Network Time Protocol ( **\--dport 123 --sport 123** ), and outbound HTTP ( **\--dport 80** ) and HTTPS ( **\--dport 443** ).
+
+```
+# Set a default policy of DROP
+*filter
+:INPUT DROP [0:0]
+:FORWARD DROP [0:0]
+:OUTPUT DROP [0:0]
+
+# Accept any related or established connections
+-I INPUT  1 -m state --state RELATED,ESTABLISHED -j ACCEPT
+-I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
+
+# Allow all traffic on the loopback interface
+-A INPUT -i lo -j ACCEPT
+-A OUTPUT -o lo -j ACCEPT
+
+# Allow outbound DHCP request
+-A OUTPUT -o eth0 -p udp --dport 67:68 --sport 67:68 -j ACCEPT
+
+# Allow inbound SSH
+-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW  -j ACCEPT
+
+# Allow outbound email
+-A OUTPUT -o eth0 -p tcp -m tcp --dport 25 -m state --state NEW  -j ACCEPT
+
+# Outbound DNS lookups
+-A OUTPUT -o eth0 -p udp -m udp --dport 53 -j ACCEPT
+
+# Outbound PING requests
+-A OUTPUT -o eth0 -p icmp -j ACCEPT
+
+# Outbound Network Time Protocol (NTP) requests
+-A OUTPUT -o eth0 -p udp --dport 123 --sport 123 -j ACCEPT
+
+# Outbound HTTP
+-A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
+-A OUTPUT -o eth0 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
+
+COMMIT
+```
+
+### Restrict an IP address range
+
+Scenario: The CEO of your company thinks the employees are spending too much time on Facebook and not getting any work done.
The CEO tells the CIO to do something about the employees wasting time on Facebook. The CIO tells the CISO to do something about employees wasting time on Facebook. Eventually, you are told the employees are wasting too much time on Facebook, and you have to do something about it. You decide to block all access to Facebook. First, find out Facebook's IP address by using the **host** and **whois** commands.
+
+```
+host -t a www.facebook.com
+www.facebook.com is an alias for star.c10r.facebook.com.
+star.c10r.facebook.com has address 31.13.65.17
+whois 31.13.65.17 | grep inetnum
+inetnum:        31.13.64.0 - 31.13.127.255
+```
+
+Then convert that range to CIDR notation by using the [CIDR to IPv4 Conversion][3] page. You get **31.13.64.0/18**. To prevent outgoing access to [www.facebook.com][4], enter:
+
+```
+iptables -A OUTPUT -p tcp -o eth1 -d 31.13.64.0/18 -j DROP
+```
+
+### Regulate by time
+
+Scenario: The backlash from the company's employees over denying access to Facebook causes the CEO to relent a little (that and his administrative assistant's reminding him that she keeps HIS Facebook page up-to-date). The CEO decides to allow access to Facebook.com only at lunchtime (12PM to 1PM). Assuming the default policy is DROP, use iptables' time features to open up access.
+
+```
+iptables -A OUTPUT -p tcp -m multiport --dport http,https -o eth1 -m time --timestart 12:00 --timestop 13:00 -d 31.13.64.0/18 -j ACCEPT
+```
+
+This command sets the policy to allow ( **-j ACCEPT** ) http and https ( **-m multiport --dport http,https** ) between noon ( **\--timestart 12:00** ) and 1PM ( **\--timestop 13:00** ) to Facebook.com ( **-d [31.13.64.0/18][5]** ).
+
+### Regulate by time—Take 2
+
+Scenario: During planned downtime for system maintenance, you need to deny all TCP and UDP traffic between the hours of 2AM and 3AM so maintenance tasks won't be disrupted by incoming traffic.
This will take two iptables rules:
+
+```
+iptables -A INPUT -p tcp -m time --timestart 02:00 --timestop 03:00 -j DROP
+iptables -A INPUT -p udp -m time --timestart 02:00 --timestop 03:00 -j DROP
+```
+
+With these rules, TCP and UDP traffic ( **-p tcp** and **-p udp** ) are denied ( **-j DROP** ) between the hours of 2AM ( **\--timestart 02:00** ) and 3AM ( **\--timestop 03:00** ) on input ( **-A INPUT** ).
+
+### Limit connections with iptables
+
+Scenario: Your internet-connected web servers are under attack by bad actors from around the world attempting to DoS (Denial of Service) them. To mitigate these attacks, you restrict the number of connections a single IP address can have to your web server:
+
+```
+iptables -A INPUT -p tcp --syn -m multiport --dport http,https -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
+```
+
+Let's look at what this rule does. If a host already has more than 20 connections open ( **\--connlimit-above 20** ) and attempts a new connection ( **-p tcp --syn** ) to the web servers ( **\--dport http,https** ), reject the new connection ( **-j REJECT** ) and tell the connecting host you are rejecting the connection ( **\--reject-with tcp-reset** ).
+
+### Monitor iptables rules
+
+Scenario: Since iptables operates on a "first match wins" basis as packets traverse the rules in a chain, frequently matched rules should be near the top of the policy and less frequently matched rules should be near the bottom. How do you know which rules are traversed the most or the least so they can be ordered nearer the top or the bottom?
+
+#### Tip #1: See how many times each rule has been hit.
+
+Use this command:
+
+```
+iptables -L -v -n --line-numbers
+```
+
+The command will list all the rules in the chain ( **-L** ).
Since no chain was specified, all the chains will be listed with verbose output ( **-v** ) showing packet and byte counters in numeric format ( **-n** ) with line numbers at the beginning of each rule corresponding to that rule's position in the chain.
+
+Using the packet and byte counts, you can order the most frequently traversed rules toward the top and the least frequently traversed rules toward the bottom.
+
+#### Tip #2: Remove unnecessary rules.
+
+Which rules aren't getting any matches at all? These would be good candidates for removal from the policy. You can find that out with this command:
+
+```
+iptables -nvL | grep -v "0     0"
+```
+
+Note: that's not a tab between the zeros; there are five spaces between the zeros.
+
+#### Tip #3: Monitor what's going on.
+
+You would like to monitor what's going on with iptables in real time, like with **top**. Use this command to monitor iptables activity dynamically and show only the rules that are actively being traversed:
+
+```
+watch --interval=5 'iptables -nvL | grep -v "0     0"'
+```
+
+**watch** runs **'iptables -nvL | grep -v "0     0"'** every five seconds and displays the first screen of its output. This allows you to watch the packet and byte counts change over time.
+
+### Report on iptables
+
+Scenario: Your manager thinks this iptables firewall stuff is just great, but a daily activity report would be even better. Sometimes it's more important to write a report than to do the work.
+
+Use the packet filter/firewall/IDS log analyzer [FWLogwatch][6] to create reports based on the iptables firewall logs. FWLogwatch supports many log formats and offers many analysis options. It generates daily and monthly summaries of the log files, allowing the security administrator to free up substantial time, maintain better control over network security, and reduce unnoticed attacks.
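A daily report like the one the scenario asks for is usually scheduled with cron. FWLogwatch's options vary between versions, so treat the flags and paths below as assumptions to verify against your installed version's man page:

```
# /etc/cron.d/fwlogwatch -- hypothetical daily HTML summary at 06:00
# -w: HTML output, -o: output file (check both against `man fwlogwatch`);
# the log path also differs by distribution
0 6 * * * root fwlogwatch -w -o /var/www/html/fw-report.html /var/log/messages
```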
+ +Here is sample output from FWLogwatch: + +![](https://opensource.com/sites/default/files/uploads/fwlogwatch.png) + +### More than just ACCEPT and DROP + +We've covered many facets of iptables, all the way from making sure you don't lock yourself out when working with iptables to monitoring iptables to visualizing the activity of an iptables firewall. These will get you started down the path to realizing even more iptables tips and tricks. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/iptables-tips-and-tricks + +作者:[Gary Smith][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/greptile +[1]: https://en.wikipedia.org/wiki/Netfilter +[2]: https://en.wikipedia.org/wiki/Iptables +[3]: http://www.ipaddressguide.com/cidr +[4]: http://www.facebook.com +[5]: http://31.13.64.0/18 +[6]: http://fwlogwatch.inside-security.de/ diff --git a/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md b/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md new file mode 100644 index 0000000000..bd79cb3c04 --- /dev/null +++ b/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md @@ -0,0 +1,263 @@ +Turn your book into a website and an ePub using Pandoc +====== +Write once, publish twice using Markdown and Pandoc. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) + +Pandoc is a command-line tool for converting files from one markup language to another. In my [introduction to Pandoc][1], I explained how to convert text written in Markdown into a website, a slideshow, and a PDF. 
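As a quick refresher before diving in, the command-line pattern is the same for every output format; the file names below are placeholders:

```
# Pandoc infers the output format from the extension of the -o file
pandoc -s chapter.md -o chapter.html   # standalone web page
pandoc -s chapter.md -o chapter.epub   # ePub (cover and metadata come later)
```

The **-s** flag produces a standalone document (a complete HTML page rather than a fragment), which is what both targets in this article rely on.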
+
+In this follow-up article, I'll dive deeper into [Pandoc][2], showing how to produce a website and an ePub book from the same Markdown source file. I'll use my upcoming e-book, [GRASP Principles for the Object-Oriented Mind][3], which I created using this process, as an example.
+
+First I will explain the file structure used for the book, then how to use Pandoc to generate a website and deploy it on GitHub. Finally, I'll demonstrate how to generate its companion ePub book.
+
+You can find the code in my [Programming Fight Club][4] GitHub repository.
+
+### Setting up the writing structure
+
+I do all of my writing in Markdown syntax. You can also use HTML, but the more HTML you introduce, the higher the risk that problems will arise when Pandoc converts Markdown to an ePub document. My books follow the one-chapter-per-file pattern. Declare chapters using the Markdown heading H1 ( **#** ). You can put more than one chapter in each file, but putting them in separate files makes it easier to find content and do updates later.
+
+The meta-information follows a similar pattern: each output format has its own meta-information file. Meta-information files define information about your documents, such as text to add to your HTML or the license of your ePub. I store all of my Markdown documents in a folder named parts (this is important for the Makefile that generates the website and ePub). As an example, let's take the table of contents, the preface, and the about chapters (divided into the files toc.md, preface.md, and about.md) and, for clarity, we will leave out the remaining chapters.
+
+My about file might begin like:
+
+```
+# About this book {-}
+
+## Who should read this book {-}
+
+Before creating a complex software system one needs to create a solid foundation.
+General Responsibility Assignment Software Principles (GRASP) are guidelines to assign
+responsibilities to software classes in object-oriented programming.
+```
+
+Once the chapters are finished, the next step is to add meta-information to set up the format for the website and the ePub.
+
+### Generating the website
+
+#### Create the HTML meta-information file
+
+The meta-information file (web-metadata.yaml) for my website is a simple YAML file that contains information about the author, title, rights, content for the **`<head>`** tag, and content for the beginning and end of the HTML file.
+
+I recommend (at minimum) including the following fields in the web-metadata.yaml file:
+
+```
+---
+title: GRASP principles for the Object-oriented mind
+author: Kiko Fernandez-Reyes
+rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
+header-includes:
+- |
+  \```{=html}
+  <!-- raw HTML for the <head> tag (e.g., stylesheet links or analytics) -->
+  \```
+include-before:
+- |
+  \```{=html}
+  <!-- raw HTML shown before the content; mine asks readers to consider
+       spreading the word or buying me a coffee -->
+  \```
+include-after:
+- |
+  \```{=html}
+  <!-- raw HTML appended after the content; mine shows the book's license -->
+  \```
+---
+```
+
+Some variables to note:
+
+ * The **header-includes** variable contains HTML that will be embedded inside the **`<head>`** tag.
+ * The line after calling a variable must be **\- |**. The next line must begin with triple backquotes that are aligned with the **|** or Pandoc will reject it. **{=html}** tells Pandoc that this is raw text and should not be processed as Markdown. (For this to work, you need to check that the **raw_attribute** extension in Pandoc is enabled. To check, type **pandoc --list-extensions | grep raw** and make sure the returned list contains an item named **+raw_html** ; the plus sign indicates it is enabled.)
+ * The variable **include-before** adds some HTML at the beginning of your website, and I ask readers to consider spreading the word or buying me a coffee.
+ * The **include-after** variable appends raw HTML at the end of the website and shows my book's license.
+
+These are only some of the fields available; take a look at the template variables in HTML (my article [introduction to Pandoc][1] covered this for LaTeX but the process is the same for HTML) to learn about others.
+
+#### Split the website into chapters
+
+The website can be generated as a whole, resulting in a long page with all the content, or split into chapters, which I think is easier to read. I'll explain how to divide the website into chapters so the reader doesn't get intimidated by a long website.
+
+To make the website easy to deploy on GitHub Pages, we need to create a root folder called docs (which is the root folder that GitHub Pages uses by default to render a website). Then we need to create folders for each chapter under docs, place the HTML chapters in their own folders, and the file content in a file named index.html.
+
+For example, the about.md file is converted to a file named index.html that is placed in a folder named about (about/index.html).
This way, when users type **http:// /about/**, the index.html file from the folder about will be displayed in their browser.
+
+The following Makefile does all of this:
+
+```
+# Your book files
+DEPENDENCIES= toc preface about
+
+# Placement of your HTML files
+DOCS=docs
+
+all: web
+
+web: setup $(DEPENDENCIES)
+        @cp $(DOCS)/toc/index.html $(DOCS)
+
+
+# Creation and copy of stylesheet and images into
+# the assets folder. This is important to deploy the
+# website to Github Pages.
+setup:
+        @mkdir -p $(DOCS)
+        @cp -r assets $(DOCS)
+
+
+# Creation of folder and index.html file on a
+# per-chapter basis
+
+$(DEPENDENCIES):
+        @mkdir -p $(DOCS)/$@
+        @pandoc -s --toc web-metadata.yaml parts/$@.md \
+        -c /assets/pandoc.css -o $(DOCS)/$@/index.html
+
+clean:
+        @rm -rf $(DOCS)
+
+.PHONY: all clean web setup
+```
+
+The option **-c /assets/pandoc.css** declares which CSS stylesheet to use; it will be fetched from **/assets/pandoc.css**. In other words, inside the **`<head>`** HTML tag, Pandoc adds the following line:
+
+```
+<link rel="stylesheet" href="/assets/pandoc.css" />
+```
+
+To generate the website, type:
+
+```
+make
+```
+
+The root folder should now contain the following structure and files:
+
+```
+.---parts
+|    |--- toc.md
+|    |--- preface.md
+|    |--- about.md
+|
+|---docs
+    |--- assets/
+    |--- index.html
+    |--- toc
+    |     |--- index.html
+    |
+    |--- preface
+    |     |--- index.html
+    |
+    |--- about
+          |--- index.html
+```
+
+#### Deploy the website
+
+To deploy the website on GitHub, follow these steps:
+
+ 1. Create a new repository
+ 2. Push your content to the repository
+ 3. Go to the GitHub Pages section in the repository's Settings and select the option for GitHub to use the content from the Master branch
+
+You can get more details on the [GitHub Pages][5] site.
+
+Check out [my book's website][6], generated using this process, to see the result.
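If Make is unfamiliar, the same flow can be sketched as a plain shell loop, under the same assumptions as the Makefile (pandoc installed, a parts/ folder, web-metadata.yaml, and an assets/ directory):

```
#!/bin/sh
# Shell equivalent of the Makefile's web target:
# one folder and one index.html per chapter
DOCS=docs
mkdir -p "$DOCS"
cp -r assets "$DOCS"
for chapter in toc preface about; do
  mkdir -p "$DOCS/$chapter"
  pandoc -s --toc web-metadata.yaml "parts/$chapter.md" \
    -c /assets/pandoc.css -o "$DOCS/$chapter/index.html"
done
# The table of contents doubles as the site's landing page
cp "$DOCS/toc/index.html" "$DOCS/index.html"
```

The Makefile buys you incremental rebuilds and a clean target on top of this; the loop is just the control flow made explicit.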
+
+### Generating the ePub book
+
+#### Create the ePub meta-information file
+
+The ePub meta-information file, epub-meta.yaml, is similar to the HTML meta-information file. The main difference is that ePub offers other template variables, such as **publisher** and **cover-image**. Your ePub book's stylesheet will probably differ from your website's; mine uses one named epub.css.
+
+```
+---
+title: 'GRASP principles for the Object-oriented Mind'
+publisher: 'Programming Language Fight Club'
+author: Kiko Fernandez-Reyes
+rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
+cover-image: assets/cover.png
+stylesheet: assets/epub.css
+...
+```
+
+Add the following content to the previous Makefile:
+
+```
+epub:
+        @pandoc -s --toc epub-meta.yaml \
+        $(addprefix parts/, $(DEPENDENCIES:=.md)) -o $(DOCS)/assets/book.epub
+```
+
+The command for the ePub target takes all the dependencies from the HTML version (your chapter names), appends the Markdown extension to them, and prepends the path to the parts folder so Pandoc knows how to process them. For example, if **$(DEPENDENCIES)** was only **preface about** , then the Makefile would call:
+
+```
+@pandoc -s --toc epub-meta.yaml \
+parts/preface.md parts/about.md -o $(DOCS)/assets/book.epub
+```
+
+Pandoc would take these two chapters, combine them, generate an ePub, and place the book under the assets folder.
+
+Here's an [example][7] of an ePub created using this process.
+
+### Summarizing the process
+
+The process to create a website and an ePub from a Markdown file isn't difficult, but there are a lot of details. The following outline may make it easier for you to follow.
+ + * HTML book: + * Write chapters in Markdown + * Add metadata + * Create a Makefile to glue pieces together + * Set up GitHub Pages + * Deploy + * ePub book: + * Reuse chapters from previous work + * Add new metadata file + * Create a Makefile to glue pieces together + * Set up GitHub Pages + * Deploy + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/book-to-website-epub-using-pandoc + +作者:[Kiko Fernandez-Reyes][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/kikofernandez +[1]: https://opensource.com/article/18/9/intro-pandoc +[2]: https://pandoc.org/ +[3]: https://www.programmingfightclub.com/ +[4]: https://github.com/kikofernandez/programmingfightclub +[5]: https://pages.github.com/ +[6]: https://www.programmingfightclub.com/grasp-principles/ +[7]: https://github.com/kikofernandez/programmingfightclub/raw/master/docs/web_assets/demo.epub diff --git a/sources/tech/20181002 4 open source invoicing tools for small businesses.md b/sources/tech/20181002 4 open source invoicing tools for small businesses.md new file mode 100644 index 0000000000..29589a6ad1 --- /dev/null +++ b/sources/tech/20181002 4 open source invoicing tools for small businesses.md @@ -0,0 +1,76 @@ +4 open source invoicing tools for small businesses +====== +Manage your billing and get paid with easy-to-use, web-based invoicing software. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory2.png?itok=AvneLxFp) + +No matter what your reasons for starting a small business, the key to keeping that business going is getting paid. Getting paid usually means sending a client an invoice. 
+
+It's easy enough to whip up an invoice using LibreOffice Writer or LibreOffice Calc, but sometimes you need a bit more. A more professional look. A way of keeping track of your invoices. Reminders about when to follow up on the invoices you've sent.
+
+There's a wide range of commercial and closed-source invoicing tools out there. But the offerings on the open source side of the fence are just as good as their closed source counterparts, and maybe even more flexible.
+
+Let's take a look at four web-based open source invoicing tools that are great choices for freelancers and small businesses on a tight budget. I reviewed two of them in 2014, in an [earlier version][1] of this article. These four picks are easy to use, and you can use them on just about any device.
+
+### Invoice Ninja
+
+I've never been a fan of the term ninja. Despite that, I like [Invoice Ninja][2]. A lot. It melds a simple interface with a set of features that let you create, manage, and send invoices to clients and customers.
+
+You can easily configure multiple clients, track payments and outstanding invoices, generate quotes, and email invoices. What sets Invoice Ninja apart from its competitors is its [integration with][3] over 40 popular online payment gateways, including PayPal, Stripe, WePay, and Apple Pay.
+
+[Download][4] a version that you can install on your own server or get an account with the [hosted version][5] of Invoice Ninja. There's a free version and a paid tier that will set you back US$ 8 a month.
+
+### InvoicePlane
+
+Once upon a time, there was a nifty open source invoicing tool called FusionInvoice. One day, its creators took the latest version of the code proprietary. That didn't end happily, as FusionInvoice's doors were shut for good in 2018. But that wasn't the end of the application. An old version of the code stayed open source and morphed into [InvoicePlane][6], which packs all of FusionInvoice's goodness.
+ +Creating an invoice takes just a couple of clicks. You can make them as minimal or detailed as you need. When you're ready, you can email your invoices or output them as PDFs. You can also create recurring invoices for clients or customers you regularly bill. + +InvoicePlane does more than generate and track invoices. You can also create quotes for jobs or goods, track products you sell, view and enter payments, and run reports on your invoices. + +[Grab the code][7] and install it on your web server. Or, if you're not quite ready to do that, [take the demo][8] for a spin. + +### OpenSourceBilling + +Described by its developer as "beautifully simple billing software," [OpenSourceBilling][9] lives up to the description. It has one of the cleanest interfaces I've seen, which makes configuring and using the tool a breeze. + +OpenSourceBilling stands out because of its dashboard, which tracks your current and past invoices, as well as any outstanding amounts. Your information is broken up into graphs and tables, which makes it easy to follow. + +You do much of the configuration on the invoice itself. You can add items, tax rates, clients, and even payment terms with a click and a few keystrokes. OpenSourceBilling saves that information across all of your invoices, both new and old. + +As with some of the other tools we've looked at, OpenSourceBilling has a [demo][10] you can try. + +### BambooInvoice + +When I was a full-time freelance writer and consultant, I used [BambooInvoice][11] to bill my clients. When its original developer stopped working on the software, I was a bit disappointed. But BambooInvoice is back, and it's as good as ever. + +What attracted me to BambooInvoice is its simplicity. It does one thing and does it well. You can create and edit invoices, and BambooInvoice keeps track of them by client and by the invoice numbers you assign to them. It also lets you know which invoices are open or overdue. 
You can email the invoices from within the application or generate PDFs. You can also run reports to keep tabs on your income. + +To [install][12] and use BambooInvoice, you'll need a web server running PHP 5 or newer as well as a MySQL database. Chances are you already have access to one, so you're good to go. + +Do you have a favorite open source invoicing tool? Feel free to share it by leaving a comment. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/open-source-invoicing-tools + +作者:[Scott Nesbitt][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[1]: https://opensource.com/business/14/9/4-open-source-invoice-tools +[2]: https://www.invoiceninja.org/ +[3]: https://www.invoiceninja.com/integrations/ +[4]: https://github.com/invoiceninja/invoiceninja +[5]: https://www.invoiceninja.com/invoicing-pricing-plans/ +[6]: https://invoiceplane.com/ +[7]: https://wiki.invoiceplane.com/en/1.5/getting-started/installation +[8]: https://demo.invoiceplane.com/ +[9]: http://www.opensourcebilling.org/ +[10]: http://demo.opensourcebilling.org/ +[11]: https://www.bambooinvoice.net/ +[12]: https://sourceforge.net/projects/bambooinvoice/ diff --git a/sources/tech/20181003 Introducing Swift on Fedora.md b/sources/tech/20181003 Introducing Swift on Fedora.md new file mode 100644 index 0000000000..186117cd7c --- /dev/null +++ b/sources/tech/20181003 Introducing Swift on Fedora.md @@ -0,0 +1,72 @@ +translating---geekpi + +Introducing Swift on Fedora +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/09/swift-816x345.jpg) + +Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. 
It aims to be the best language for a variety of programming projects, ranging from systems programming to desktop applications and scaling up to cloud services. Read more about it and how to try it out in Fedora.
+
+### Safe, Fast, Expressive
+
+Like many modern programming languages, Swift was designed to be safer than C-based languages. For example, variables are always initialized before they can be used. Arrays and integers are checked for overflow. Memory is automatically managed.
+
+Swift puts intent right in the syntax. To declare a variable, use the var keyword. To declare a constant, use let.
+
+Swift also guarantees that objects can never be nil; in fact, trying to use an object known to be nil will cause a compile-time error. When using a nil value is appropriate, it supports a mechanism called **optionals**. An optional may contain nil, but is safely unwrapped using the **?** operator.
+
+Some additional features include:
+
+  * Closures unified with function pointers
+  * Tuples and multiple return values
+  * Generics
+  * Fast and concise iteration over a range or collection
+  * Structs that support methods, extensions, and protocols
+  * Functional programming patterns, e.g., map and filter
+  * Powerful error handling built-in
+  * Advanced control flow with do, guard, defer, and repeat keywords
+
+### Try Swift out
+
+Swift is available in Fedora 28 under the package name **swift-lang**. Once installed, run swift and the REPL console starts up.
+
+```
+$ swift
+Welcome to Swift version 4.2 (swift-4.2-RELEASE). Type :help for assistance.
+  1> let greeting="Hello world!"
+greeting: String = "Hello world!"
+  2> print(greeting)
+Hello world!
+  3> greeting = "Hello universe!"
+error: repl.swift:3:10: error: cannot assign to value: 'greeting' is a 'let' constant
+greeting = "Hello universe!"
+~~~~~~~~ ^ + + + 3> + +``` + +Swift has a growing community, and in particular, a [work group][1] dedicated to making it an efficient and effective server-side programming language. Be sure to visit [its home page][2] for more ways to get involved. + +Photo by [Uillian Vargas][3] on [Unsplash][4]. + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/introducing-swift-fedora/ + +作者:[Link Dupont][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/linkdupont/ +[1]: https://swift.org/server/ +[2]: http://swift.org +[3]: https://unsplash.com/photos/7oJpVR1inGk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[4]: https://unsplash.com/search/photos/fast?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText diff --git a/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md b/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md new file mode 100644 index 0000000000..e45d96470f --- /dev/null +++ b/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md @@ -0,0 +1,128 @@ +Oomox – Customize And Create Your Own GTK2, GTK3 Themes +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-720x340.png) + +Theming and Visual customization is one of the main advantages of Linux. Since all the code is open, you can change how your Linux system looks and behaves to a greater degree than you ever could with Windows/Mac OS. GTK theming is perhaps the most popular way in which people customize their Linux desktops. The GTK toolkit is used by a wide variety of desktop environments like Gnome, Cinnamon, Unity, XFCE, and budgie. 
This means that a single theme made for GTK can be applied to any of these desktop environments with only minor changes.
+
+There are a lot of very high quality popular GTK themes out there, such as **Arc**, **Numix**, and **Adapta**. But if you want to customize these themes and create your own visual design, you can use **Oomox**.
+
+Oomox is a graphical app for customizing and creating your own GTK theme, complete with your own colour, icon, and terminal style. It comes with several presets, which you can apply on a Numix, Arc, or Materia style theme to create your own GTK theme.
+
+### Installing Oomox
+
+On Arch Linux and its variants:
+
+Oomox is available on [**AUR**][1], so you can install it using any AUR helper program, such as [**Yay**][2].
+
+```
+$ yay -S oomox
+
+```
+
+On Debian/Ubuntu/Linux Mint, download the `oomox.deb` package from [**here**][3] and install it as shown below. As of this writing, the latest version was **oomox_1.7.0.5.deb**.
+
+```
+$ sudo dpkg -i oomox_1.7.0.5.deb
+$ sudo apt install -f
+
+```
+
+On Fedora, Oomox is available in a third-party **COPR** repository.
+
+```
+$ sudo dnf copr enable tcg/themes
+$ sudo dnf install oomox
+
+```
+
+Oomox is also available as a [**Flatpak app**][4]. Make sure you have installed Flatpak as described in [**this guide**][5]. Then, install and run Oomox using the following commands:
+
+```
+$ flatpak install flathub com.github.themix_project.Oomox
+
+$ flatpak run com.github.themix_project.Oomox
+
+```
+
+For other Linux distributions, go to the Oomox project page on GitHub (the link is given at the end of this guide) and compile and install it manually from source.
+
+### Customize And Create Your Own GTK2, GTK3 Themes
+
+**Theme Customization**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-1-1.png)
+
+You can change the colour of practically every UI element, like:
+
+  1. Headers
+  2. Buttons
+  3. Buttons inside Headers
+  4. Menus
+  5.
Selected Text
+
+To the left, there are a number of presets, like the Cars theme, modern themes like Materia and Numix, and retro themes. Then, at the top of the main window, there's an option called **Theme Style**, which lets you set the overall visual style of the theme. You can choose between Numix, Arc, and Materia.
+
+With certain styles like Numix, you can even change things like the Header Gradient, Outline Width, and Panel Opacity. You can also add a Dark Mode for your theme that will be automatically created from the default theme.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-2.png)
+
+**Iconset Customization**
+
+You can customize the iconset that will be used for the theme icons. There are two options – Gnome Colors and Archdroid. You can change the base and stroke colours of the iconset.
+
+**Terminal Customization**
+
+You can also customize the terminal colours. The app has several presets for this, but you can customize the exact colour code for each colour value like red, green, black, and so on. You can also auto-swap the foreground and background colours.
+
+**Spotify Theme**
+
+A unique feature of this app is that you can theme the Spotify app to your liking. You can change the foreground, background, and accent colour of the Spotify app to match the overall GTK theme.
+
+Then, just press the **Apply Spotify Theme** button, and you'll get this window:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-3.png)
+
+Just hit apply, and you're done.
+
+**Exporting your Theme**
+
+Once you're done customizing the theme to your liking, you can rename it by clicking the rename button at the top left:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-4.png)
+
+And then, just hit **Export Theme** to export the theme to your system.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-5.png)
+
+You can also export just the iconset or the terminal theme.
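Alternatively, on GNOME-based desktops, selecting the exported theme can also be scripted. The snippet below is only a sketch: the names `my-oomox-theme` and `my-oomox-icons` are placeholders for whatever names you chose at export time.

```shell
# Apply an exported Oomox GTK theme and icon set on a GNOME-based desktop.
# "my-oomox-theme" and "my-oomox-icons" are placeholder names --
# substitute the names you entered when exporting.
gsettings set org.gnome.desktop.interface gtk-theme "my-oomox-theme"
gsettings set org.gnome.desktop.interface icon-theme "my-oomox-icons"

# Revert to the defaults if something looks wrong:
gsettings reset org.gnome.desktop.interface gtk-theme
gsettings reset org.gnome.desktop.interface icon-theme
```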
+
+After this, you can open any visual customization app for your desktop environment, like Tweaks for GNOME-based DEs, or the **XFCE Appearance Settings**, and select your exported GTK and shell theme.
+
+### Verdict
+
+If you are a Linux theme junkie and you know exactly how each button and each header in your system should look, Oomox is worth a look. For extreme customizers, it lets you change virtually everything about how your system looks. For people who just want to tweak an existing theme a little bit, it has many, many presets, so you can get what you want without a lot of effort.
+
+Have you tried it? What are your thoughts on Oomox? Put them in the comments below!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/oomox-customize-and-create-your-own-gtk2-gtk3-themes/
+
+作者:[EDITOR][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/editor/
+[1]: https://aur.archlinux.org/packages/oomox/
+[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
+[3]: https://github.com/themix-project/oomox/releases
+[4]: https://flathub.org/apps/details/com.github.themix_project.Oomox
+[5]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/
diff --git a/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md b/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md
new file mode 100644
index 0000000000..fda48f1622
--- /dev/null
+++ b/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md
@@ -0,0 +1,75 @@
+translating---geekpi
+
+Tips for listing files with ls at the Linux command line
+======
+Learn some of the Linux 'ls' command's most useful variations.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)
+
+One of the first commands I learned in Linux was `ls`. Knowing what's in the directory where a file on your system resides is important. Being able to see and modify not just some but all of the files is also important.
+
+My first Linux cheat sheet was the [One Page Linux Manual][1], which was released in 1999 and became my go-to reference. I taped it over my desk and referred to it often as I began to explore Linux. Listing files with `ls -l` is introduced on the first page, at the bottom of the first column.
+
+Later, I would learn other iterations of this most basic command. Through the `ls` command, I began to learn about the complexity of Linux file permissions and what was mine and what required root or sudo permission to change. I became very comfortable on the command line over time, and while I still use `ls -l` to find files in the directory, I frequently use `ls -al` so I can see hidden files that might need to be changed, like configuration files.
+
+According to an article by Eric Fischer about the `ls` command in the [Linux Documentation Project][2], the command's roots go back to the `listf` command on MIT's Compatible Time Sharing System in 1961. When CTSS was replaced by [Multics][3], the command became `list`, with switches like `list -all`. According to [Wikipedia][4], `ls` appeared in the original version of AT&T Unix. The `ls` command we use today on Linux systems comes from the [GNU Core Utilities][5].
+
+Most of the time, I use only a couple of iterations of the command. Looking inside a directory with `ls` or `ls -al` is how I generally use the command, but there are many other options that you should be familiar with.
+
+`$ ls -l` provides a simple list of the directory:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_1_0.png)
+
+Using the man pages of my Fedora 28 system, I find that there are many other options to `ls`, all of which provide interesting and useful information about the Linux file system. By entering `man ls` at the command prompt, we can begin to explore some of the other options:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_2_0.png)
+
+To sort the directory by file size, use `ls -lS`:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_3_0.png)
+
+To list the contents in reverse order, use `ls -lr`:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_4.png)
+
+To list the contents in columns, use `ls -C` (note the capital letter; lowercase `-c` sorts by the time of the last status change instead):
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_5.png)
+
+`ls -al` provides a list of all the files in the directory, including the hidden ones:
+
+![](https://opensource.com/sites/default/files/uploads/linux_ls_6.png)
+
+Here are some additional options that I find useful and interesting:
+
+  * List only the .txt files in the directory: `ls *.txt`
+  * Show the allocated size of each file: `ls -s`
+  * Sort by modification time, newest first: `ls -t`
+  * Sort by extension: `ls -X`
+  * Sort by file size, largest first: `ls -S`
+  * Long format with allocated file size: `ls -ls`
+
+
+
+To generate a directory list in the specified format and send it to a file for later viewing, enter `ls -al > mydirectorylist`. Finally, one of the more exotic commands I found is `ls -R`, which provides a recursive list of all the directories on your computer and their contents.
+
+For a complete list of all the variations of the `ls` command, refer to the [GNU Core Utilities][6].
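The hidden-file behavior described above is easy to try safely in a scratch directory. A quick sketch (the `/tmp/ls-demo` path and the file names are arbitrary):

```shell
# Create a scratch directory with one visible and one hidden file.
mkdir -p /tmp/ls-demo
cd /tmp/ls-demo
touch visible.txt .hidden.conf

ls        # shows only visible.txt
ls -a     # also shows .hidden.conf (plus . and ..)

# Save the long listing to a file for later viewing, as described above.
ls -al > mydirectorylist
```

The hidden file appears only in the `-a` listing, and the saved `mydirectorylist` file records the long format for later reference.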
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/ls-command + +作者:[Don Watkins][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[1]: http://hackerspace.cs.rutgers.edu/library/General/One_Page_Linux_Manual.pdf +[2]: http://www.tldp.org/LDP/LG/issue48/fischer.html +[3]: https://en.wikipedia.org/wiki/Multics +[4]: https://en.wikipedia.org/wiki/Ls +[5]: http://www.gnu.org/s/coreutils/ +[6]: https://www.gnu.org/software/coreutils/manual/html_node/ls-invocation.html#ls-invocation diff --git a/sources/tech/20181004 Archiving web sites.md b/sources/tech/20181004 Archiving web sites.md new file mode 100644 index 0000000000..558c057913 --- /dev/null +++ b/sources/tech/20181004 Archiving web sites.md @@ -0,0 +1,119 @@ +Archiving web sites +====== + +I recently took a deep dive into web site archival for friends who were worried about losing control over the hosting of their work online in the face of poor system administration or hostile removal. This makes web site archival an essential instrument in the toolbox of any system administrator. As it turns out, some sites are much harder to archive than others. This article goes through the process of archiving traditional web sites and shows how it falls short when confronted with the latest fashions in the single-page applications that are bloating the modern web. + +### Converting simple sites + +The days of handcrafted HTML web sites are long gone. Now web sites are dynamic and built on the fly using the latest JavaScript, PHP, or Python framework. As a result, the sites are more fragile: a database crash, spurious upgrade, or unpatched vulnerability might lose data. 
In my previous life as a web developer, I had to come to terms with the idea that customers expect web sites to basically work forever. This expectation matches poorly with the "move fast and break things" attitude of web development. Working with the [Drupal][2] content-management system (CMS) was particularly challenging in that regard, as major upgrades deliberately break compatibility with third-party modules, which implies a costly upgrade process that clients could seldom afford. The solution was to archive those sites: take a living, dynamic web site and turn it into plain HTML files that any web server can serve forever. This process is useful for your own dynamic sites, but also for third-party sites that are outside of your control and that you might want to safeguard.
+
+For simple or static sites, the venerable [Wget][3] program works well. The incantation to mirror a full web site, however, is byzantine:
+
+```
+ $ nice wget --mirror --execute robots=off --no-verbose --convert-links \
+ --backup-converted --page-requisites --adjust-extension \
+ --base=./ --directory-prefix=./ --span-hosts \
+ --domains=www.example.com,example.com http://www.example.com/
+
+```
+
+The above downloads the content of the web page, but also crawls everything within the specified domains. Before you run this against your favorite site, consider the impact such a crawl might have on the site. The above command line deliberately ignores `robots.txt` rules, as is now [common practice for archivists][4], and hammers the website as fast as it can. Most crawlers have options to pause between hits and limit bandwidth usage to avoid overwhelming the target site.
+
+The above command will also fetch "page requisites" like style sheets (CSS), images, and scripts. The downloaded page contents are modified so that links point to the local copy as well. Any web server can host the resulting file set, which results in a static copy of the original web site.
+
+That is, when things go well.
Anyone who has ever worked with a computer knows that things seldom go according to plan; all sorts of things can make the procedure derail in interesting ways. For example, it was trendy for a while to have calendar blocks in web sites. A CMS would generate those on the fly and make crawlers go into an infinite loop trying to retrieve all of the pages. Crafty archivers can resort to regular expressions (e.g. Wget has a `--reject-regex` option) to ignore problematic resources. Another option, if the administration interface for the web site is accessible, is to disable calendars, login forms, comment forms, and other dynamic areas. Once the site becomes static, those will stop working anyway, so it makes sense to remove such clutter from the original site as well. + +### JavaScript doom + +Unfortunately, some web sites are built with much more than pure HTML. In single-page sites, for example, the web browser builds the content itself by executing a small JavaScript program. A simple user agent like Wget will struggle to reconstruct a meaningful static copy of those sites as it does not support JavaScript at all. In theory, web sites should be using [progressive enhancement][5] to have content and functionality available without JavaScript but those directives are rarely followed, as anyone using plugins like [NoScript][6] or [uMatrix][7] will confirm. + +Traditional archival methods sometimes fail in the dumbest way. When trying to build an offsite backup of a local newspaper ([pamplemousse.ca][8]), I found that WordPress adds query strings (e.g. `?ver=1.12.4`) at the end of JavaScript includes. This confuses content-type detection in the web servers that serve the archive, which rely on the file extension to send the right `Content-Type` header. When such an archive is loaded in a web browser, it fails to load scripts, which breaks dynamic websites. 
+ +As the web moves toward using the browser as a virtual machine to run arbitrary code, archival methods relying on pure HTML parsing need to adapt. The solution for such problems is to record (and replay) the HTTP headers delivered by the server during the crawl and indeed professional archivists use just such an approach. + +### Creating and displaying WARC files + +At the [Internet Archive][9], Brewster Kahle and Mike Burner designed the [ARC][10] (for "ARChive") file format in 1996 to provide a way to aggregate the millions of small files produced by their archival efforts. The format was eventually standardized as the WARC ("Web ARChive") [specification][11] that was released as an ISO standard in 2009 and revised in 2017. The standardization effort was led by the [International Internet Preservation Consortium][12] (IIPC), which is an "international organization of libraries and other organizations established to coordinate efforts to preserve internet content for the future", according to Wikipedia; it includes members such as the US Library of Congress and the Internet Archive. The latter uses the WARC format internally in its Java-based [Heritrix crawler][13]. + +A WARC file aggregates multiple resources like HTTP headers, file contents, and other metadata in a single compressed archive. Conveniently, Wget actually supports the file format with the `--warc` parameter. Unfortunately, web browsers cannot render WARC files directly, so a viewer or some conversion is necessary to access the archive. The simplest such viewer I have found is [pywb][14], a Python package that runs a simple webserver to offer a Wayback-Machine-like interface to browse the contents of WARC files. 
The following set of commands will render a WARC file on `http://localhost:8080/`: + +``` + $ pip install pywb + $ wb-manager init example + $ wb-manager add example crawl.warc.gz + $ wayback + +``` + +This tool was, incidentally, built by the folks behind the [Webrecorder][15] service, which can use a web browser to save dynamic page contents. + +Unfortunately, pywb has trouble loading WARC files generated by Wget because it [followed][16] an [inconsistency in the 1.0 specification][17], which was [fixed in the 1.1 specification][18]. Until Wget or pywb fix those problems, WARC files produced by Wget are not reliable enough for my uses, so I have looked at other alternatives. A crawler that got my attention is simply called [crawl][19]. Here is how it is invoked: + +``` + $ crawl https://example.com/ + +``` + +(It does say "very simple" in the README.) The program does support some command-line options, but most of its defaults are sane: it will fetch page requirements from other domains (unless the `-exclude-related` flag is used), but does not recurse out of the domain. By default, it fires up ten parallel connections to the remote site, a setting that can be changed with the `-c` flag. But, best of all, the resulting WARC files load perfectly in pywb. + +### Future work and alternatives + +There are plenty more [resources][20] for using WARC files. In particular, there's a Wget drop-in replacement called [Wpull][21] that is specifically designed for archiving web sites. It has experimental support for [PhantomJS][22] and [youtube-dl][23] integration that should allow downloading more complex JavaScript sites and streaming multimedia, respectively. The software is the basis for an elaborate archival tool called [ArchiveBot][24], which is used by the "loose collective of rogue archivists, programmers, writers and loudmouths" at [ArchiveTeam][25] in its struggle to "save the history before it's lost forever". 
It seems that PhantomJS integration does not work as well as the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools to mirror more complex sites. For example, [snscrape][26] will crawl a social media profile to generate a list of pages to send into ArchiveBot. Another tool the team employs is [crocoite][27], which uses the Chrome browser in headless mode to archive JavaScript-heavy sites. + +This article would also not be complete without a nod to the [HTTrack][28] project, the "website copier". Working similarly to Wget, HTTrack creates local copies of remote web sites but unfortunately does not support WARC output. Its interactive aspects might be of more interest to novice users unfamiliar with the command line. + +In the same vein, during my research I found a full rewrite of Wget called [Wget2][29] that has support for multi-threaded operation, which might make it faster than its predecessor. It is [missing some features][30] from Wget, however, most notably reject patterns, WARC output, and FTP support but adds RSS, DNS caching, and improved TLS support. + +Finally, my personal dream for these kinds of tools would be to have them integrated with my existing bookmark system. I currently keep interesting links in [Wallabag][31], a self-hosted "read it later" service designed as a free-software alternative to [Pocket][32] (now owned by Mozilla). But Wallabag, by design, creates only a "readable" version of the article instead of a full copy. In some cases, the "readable version" is actually [unreadable][33] and Wallabag sometimes [fails to parse the article][34]. Instead, other tools like [bookmark-archiver][35] or [reminiscence][36] save a screenshot of the page along with full HTML but, unfortunately, no WARC file that would allow an even more faithful replay. + +The sad truth of my experiences with mirrors and archival is that data dies. Fortunately, amateur archivists have tools at their disposal to keep interesting content alive online. 
For those who do not want to go through that trouble, the Internet Archive seems to be here to stay and Archive Team is obviously [working on a backup of the Internet Archive itself][37]. + +-------------------------------------------------------------------------------- + +via: https://anarc.at/blog/2018-10-04-archiving-web-sites/ + +作者:[Anarcat][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://anarc.at +[1]: https://anarc.at/blog +[2]: https://drupal.org +[3]: https://www.gnu.org/software/wget/ +[4]: https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/ +[5]: https://en.wikipedia.org/wiki/Progressive_enhancement +[6]: https://noscript.net/ +[7]: https://github.com/gorhill/uMatrix +[8]: https://pamplemousse.ca/ +[9]: https://archive.org +[10]: http://www.archive.org/web/researcher/ArcFileFormat.php +[11]: https://iipc.github.io/warc-specifications/ +[12]: https://en.wikipedia.org/wiki/International_Internet_Preservation_Consortium +[13]: https://github.com/internetarchive/heritrix3/wiki +[14]: https://github.com/webrecorder/pywb +[15]: https://webrecorder.io/ +[16]: https://github.com/webrecorder/pywb/issues/294 +[17]: https://github.com/iipc/warc-specifications/issues/23 +[18]: https://github.com/iipc/warc-specifications/pull/24 +[19]: https://git.autistici.org/ale/crawl/ +[20]: https://archiveteam.org/index.php?title=The_WARC_Ecosystem +[21]: https://github.com/chfoo/wpull +[22]: http://phantomjs.org/ +[23]: http://rg3.github.io/youtube-dl/ +[24]: https://www.archiveteam.org/index.php?title=ArchiveBot +[25]: https://archiveteam.org/ +[26]: https://github.com/JustAnotherArchivist/snscrape +[27]: https://github.com/PromyLOPh/crocoite +[28]: http://www.httrack.com/ +[29]: https://gitlab.com/gnuwget/wget2 +[30]: 
https://gitlab.com/gnuwget/wget2/wikis/home +[31]: https://wallabag.org/ +[32]: https://getpocket.com/ +[33]: https://github.com/wallabag/wallabag/issues/2825 +[34]: https://github.com/wallabag/wallabag/issues/2914 +[35]: https://pirate.github.io/bookmark-archiver/ +[36]: https://github.com/kanishka-linux/reminiscence +[37]: http://iabak.archiveteam.org diff --git a/sources/tech/20181004 Functional programming in Python- Immutable data structures.md b/sources/tech/20181004 Functional programming in Python- Immutable data structures.md new file mode 100644 index 0000000000..e6050d52f9 --- /dev/null +++ b/sources/tech/20181004 Functional programming in Python- Immutable data structures.md @@ -0,0 +1,191 @@ +Translating by Ryze-Borgia +Functional programming in Python: Immutable data structures +====== +Immutability can help us better understand our code. Here's how to achieve it without sacrificing performance. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D) + +In this two-part series, I will discuss how to import ideas from the functional programming methodology into Python in order to have the best of both worlds. + +This first post will explore how immutable data structures can help. The second part will explore higher-level functional programming concepts in Python using the **toolz** library. + +Why functional programming? Because mutation is hard to reason about. If you are already convinced that mutation is problematic, great. If you're not convinced, you will be by the end of this post. + +Let's begin by considering squares and rectangles. If we think in terms of interfaces, neglecting implementation details, are squares a subtype of rectangles? + +The definition of a subtype rests on the [Liskov substitution principle][1]. In order to be a subtype, it must be able to do everything the supertype does. + +How would we define an interface for a rectangle? 
+ +``` +from zope.interface import Interface + +class IRectangle(Interface): +    def get_length(self): +        """Squares can do that""" +    def get_width(self): +        """Squares can do that""" +    def set_dimensions(self, length, width): +        """Uh oh""" +``` + +If this is the definition, then squares cannot be a subtype of rectangles; they cannot respond to a `set_dimensions` method if the length and width are different. + +A different approach is to choose to make rectangles immutable. + +``` +class IRectangle(Interface): +    def get_length(self): +        """Squares can do that""" +    def get_width(self): +        """Squares can do that""" +    def with_dimensions(self, length, width): +        """Returns a new rectangle""" +``` + +Now, a square can be a rectangle. It can return a new rectangle (which would not usually be a square) when `with_dimensions` is called, but it would not stop being a square. + +This might seem like an academic problem—until we consider that squares and rectangles are, in a sense, a container for their sides. After we understand this example, the more realistic case this comes into play with is more traditional containers. For example, consider random-access arrays. + +We have `ISquare` and `IRectangle`, and `ISquare` is a subtype of `IRectangle`. + +We want to put rectangles in a random-access array: + +``` +class IArrayOfRectangles(Interface): +    def get_element(self, i): +        """Returns Rectangle""" +    def set_element(self, i, rectangle): +        """'rectangle' can be any IRectangle""" +``` + +We want to put squares in a random-access array too: + +``` +class IArrayOfSquare(Interface): +    def get_element(self, i): +        """Returns Square""" +    def set_element(self, i, square): +        """'square' can be any ISquare""" +``` + +Even though `ISquare` is a subtype of `IRectangle`, no array can implement both `IArrayOfSquare` and `IArrayOfRectangle`. + +Why not? Assume `bucket` implements both. 
+
+```
+>>> rectangle = make_rectangle(3, 4)
+>>> bucket.set_element(0, rectangle) # This is allowed by IArrayOfRectangle
+>>> thing = bucket.get_element(0) # That has to be a square by IArrayOfSquare
+>>> assert thing.height == thing.width
+Traceback (most recent call last):
+  File "<stdin>", line 1, in <module>
+AssertionError
+```
+
+Being unable to implement both means that neither is a subtype of the other, even though `ISquare` is a subtype of `IRectangle`. The problem is the `set_element` method: If we had a read-only array, `IArrayOfSquare` would be a subtype of `IArrayOfRectangle`.
+
+Mutability, in both the mutable `IRectangle` interface and the mutable `IArrayOf*` interfaces, has made thinking about types and subtypes much more difficult; giving up on the ability to mutate means that the intuitive relationships we expect to have between the types actually hold.
+
+Mutation can also have non-local effects. This happens when a shared object between two places is mutated by one. The classic example is one thread mutating a shared object with another thread, but even in a single-threaded program, sharing between places that are far apart is easy. Consider that in Python, most objects are reachable from many places: as a module global, or in a stack trace, or as a class attribute.
+
+If we cannot constrain the sharing, we might think about constraining the mutability.
+
+Here is an immutable rectangle, taking advantage of the [attrs][2] library:
+
+```
+@attr.s(frozen=True)
+class Rectangle(object):
+    length = attr.ib()
+    width = attr.ib()
+    @classmethod
+    def with_dimensions(cls, length, width):
+        return cls(length, width)
+```
+
+Here is a square:
+
+```
+@attr.s(frozen=True)
+class Square(object):
+    side = attr.ib()
+    @classmethod
+    def with_dimensions(cls, length, width):
+        return Rectangle(length, width)
+```
+
+Using the `frozen` argument, we can easily have `attrs`-created classes be immutable.
All the hard work of writing `__setattr__` correctly has been done by others and is completely invisible to us.
+
+It is still easy to modify objects; it's just nigh impossible to mutate them.
+
+```
+too_long = Rectangle(100, 4)
+reasonable = attr.evolve(too_long, length=10)
+```
+
+The [Pyrsistent][3] package allows us to have immutable containers.
+
+```
+# Vector of integers
+a = pyrsistent.v(1, 2, 3)
+# Not a vector of integers
+b = a.set(1, "hello")
+```
+
+While `b` is not a vector of integers, nothing will ever stop `a` from being one.
+
+What if `a` was a million elements long? Is `b` going to copy 999,999 of them? Pyrsistent comes with "big O" performance guarantees: All operations take `O(log n)` time. It also comes with an optional C extension to improve performance beyond the big O.
+
+For modifying nested objects, it comes with a concept of "transformers:"
+
+```
+blog = pyrsistent.m(
+    title="My blog",
+    links=pyrsistent.v("github", "twitter"),
+    posts=pyrsistent.v(
+        pyrsistent.m(title="no updates",
+                     content="I'm busy"),
+        pyrsistent.m(title="still no updates",
+                     content="still busy")))
+new_blog = blog.transform(["posts", 1, "content"],
+                          "pretty busy")
+```
+
+`new_blog` will now be the immutable equivalent of
+
+```
+{'links': ['github', 'twitter'],
+ 'posts': [{'content': "I'm busy",
+            'title': 'no updates'},
+           {'content': 'pretty busy',
+            'title': 'still no updates'}],
+ 'title': 'My blog'}
+```
+
+But `blog` is still the same. This means anyone who had a reference to the old object has not been affected: The transformation had only local effects.
+
+This is useful when sharing is rampant.
For example, consider default arguments:
+
+```
+def silly_sum(a, b, extra=pyrsistent.v(1, 2)):
+    extra = extra.extend([a, b])
+    return sum(extra)
+```
+
+Because the default value is immutable, `extra.extend` returns a new vector instead of modifying the default in place, so the classic "mutable default argument" bug cannot happen here: every call starts from the same pristine `v(1, 2)`.
+
+In this post, we have learned why immutability can be useful for thinking about our code, and how to achieve it without an extravagant performance price. Next time, we will learn how immutable objects allow us to use powerful programming constructs.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures
+
+作者:[Moshe Zadka][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[1]: https://en.wikipedia.org/wiki/Liskov_substitution_principle
+[2]: https://www.attrs.org/en/stable/
+[3]: https://pyrsistent.readthedocs.io/en/latest/
diff --git a/sources/tech/20181004 Lab 3- User Environments.md b/sources/tech/20181004 Lab 3- User Environments.md
new file mode 100644
index 0000000000..2dc1522b69
--- /dev/null
+++ b/sources/tech/20181004 Lab 3- User Environments.md
@@ -0,0 +1,524 @@
+Lab 3: User Environments
+======
+
+#### Introduction
+
+In this lab you will implement the basic kernel facilities required to get a protected user-mode environment (i.e., "process") running. You will enhance the JOS kernel to set up the data structures to keep track of user environments, create a single user environment, load a program image into it, and start it running. You will also make the JOS kernel capable of handling any system calls the user environment makes and handling any other exceptions it causes.
+
+**Note:** In this lab, the terms _environment_ and _process_ are interchangeable - both refer to an abstraction that allows you to run a program. 
We introduce the term "environment" instead of the traditional term "process" in order to stress the point that JOS environments and UNIX processes provide different interfaces, and do not provide the same semantics. + +##### Getting Started + +Use Git to commit your changes after your Lab 2 submission (if any), fetch the latest version of the course repository, and then create a local branch called `lab3` based on our lab3 branch, `origin/lab3`: + +``` + athena% cd ~/6.828/lab + athena% add git + athena% git commit -am 'changes to lab2 after handin' + Created commit 734fab7: changes to lab2 after handin + 4 files changed, 42 insertions(+), 9 deletions(-) + athena% git pull + Already up-to-date. + athena% git checkout -b lab3 origin/lab3 + Branch lab3 set up to track remote branch refs/remotes/origin/lab3. + Switched to a new branch "lab3" + athena% git merge lab2 + Merge made by recursive. + kern/pmap.c | 42 +++++++++++++++++++ + 1 files changed, 42 insertions(+), 0 deletions(-) + athena% +``` + +Lab 3 contains a number of new source files, which you should browse: + +``` +inc/ env.h Public definitions for user-mode environments + trap.h Public definitions for trap handling + syscall.h Public definitions for system calls from user environments to the kernel + lib.h Public definitions for the user-mode support library +kern/ env.h Kernel-private definitions for user-mode environments + env.c Kernel code implementing user-mode environments + trap.h Kernel-private trap handling definitions + trap.c Trap handling code + trapentry.S Assembly-language trap handler entry-points + syscall.h Kernel-private definitions for system call handling + syscall.c System call implementation code +lib/ Makefrag Makefile fragment to build user-mode library, obj/lib/libjos.a + entry.S Assembly-language entry-point for user environments + libmain.c User-mode library setup code called from entry.S + syscall.c User-mode system call stub functions + console.c User-mode implementations of 
putchar and getchar, providing console I/O + exit.c User-mode implementation of exit + panic.c User-mode implementation of panic +user/ * Various test programs to check kernel lab 3 code +``` + +In addition, a number of the source files we handed out for lab2 are modified in lab3. To see the differences, you can type: + +``` + $ git diff lab2 + +``` + +You may also want to take another look at the [lab tools guide][1], as it includes information on debugging user code that becomes relevant in this lab. + +##### Lab Requirements + +This lab is divided into two parts, A and B. Part A is due a week after this lab was assigned; you should commit your changes and make handin your lab before the Part A deadline, making sure your code passes all of the Part A tests (it is okay if your code does not pass the Part B tests yet). You only need to have the Part B tests passing by the Part B deadline at the end of the second week. + +As in lab 2, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem (for the entire lab, not for each part). Write up brief answers to the questions posed in the lab and a one or two paragraph description of what you did to solve your chosen challenge problem in a file called `answers-lab3.txt` in the top level of your `lab` directory. (If you implement more than one challenge problem, you only need to describe one of them in the write-up.) Do not forget to include the answer file in your submission with git add answers-lab3.txt. + +##### Inline Assembly + +In this lab you may find GCC's inline assembly language feature useful, although it is also possible to complete the lab without using it. At the very least, you will need to be able to understand the fragments of inline assembly language ("`asm`" statements) that already exist in the source code we gave you. You can find several sources of information on GCC inline assembly language on the class [reference materials][2] page. 
+ +#### Part A: User Environments and Exception Handling + +The new include file `inc/env.h` contains basic definitions for user environments in JOS. Read it now. The kernel uses the `Env` data structure to keep track of each user environment. In this lab you will initially create just one environment, but you will need to design the JOS kernel to support multiple environments; lab 4 will take advantage of this feature by allowing a user environment to `fork` other environments. + +As you can see in `kern/env.c`, the kernel maintains three main global variables pertaining to environments: + +``` + struct Env *envs = NULL; // All environments + struct Env *curenv = NULL; // The current env + static struct Env *env_free_list; // Free environment list + +``` + +Once JOS gets up and running, the `envs` pointer points to an array of `Env` structures representing all the environments in the system. In our design, the JOS kernel will support a maximum of `NENV` simultaneously active environments, although there will typically be far fewer running environments at any given time. (`NENV` is a constant `#define`'d in `inc/env.h`.) Once it is allocated, the `envs` array will contain a single instance of the `Env` data structure for each of the `NENV` possible environments. + +The JOS kernel keeps all of the inactive `Env` structures on the `env_free_list`. This design allows easy allocation and deallocation of environments, as they merely have to be added to or removed from the free list. + +The kernel uses the `curenv` symbol to keep track of the _currently executing_ environment at any given time. During boot up, before the first environment is run, `curenv` is initially set to `NULL`. 
+
+##### Environment State
+
+The `Env` structure is defined in `inc/env.h` as follows (although more fields will be added in future labs):
+
+```
+	struct Env {
+		struct Trapframe env_tf;	// Saved registers
+		struct Env *env_link;		// Next free Env
+		envid_t env_id;			// Unique environment identifier
+		envid_t env_parent_id;		// env_id of this env's parent
+		enum EnvType env_type;		// Indicates special system environments
+		unsigned env_status;		// Status of the environment
+		uint32_t env_runs;		// Number of times environment has run
+
+		// Address space
+		pde_t *env_pgdir;		// Kernel virtual address of page dir
+	};
+```
+
+Here's what the `Env` fields are for:
+
+ * **env_tf** :
+This structure, defined in `inc/trap.h`, holds the saved register values for the environment while that environment is _not_ running: i.e., when the kernel or a different environment is running. The kernel saves these when switching from user to kernel mode, so that the environment can later be resumed where it left off.
+ * **env_link** :
+This is a link to the next `Env` on the `env_free_list`. `env_free_list` points to the first free environment on the list.
+ * **env_id** :
+The kernel stores here a value that uniquely identifies the environment currently using this `Env` structure (i.e., using this particular slot in the `envs` array). After a user environment terminates, the kernel may re-allocate the same `Env` structure to a different environment - but the new environment will have a different `env_id` from the old one even though the new environment is re-using the same slot in the `envs` array.
+ * **env_parent_id** :
+The kernel stores here the `env_id` of the environment that created this environment. In this way the environments can form a “family tree,” which will be useful for making security decisions about which environments are allowed to do what to whom.
+ * **env_type** :
+This is used to distinguish special environments. For most environments, it will be `ENV_TYPE_USER`. 
We'll introduce a few more types for special system service environments in later labs. + * **env_status** : +This variable holds one of the following values: + * `ENV_FREE`: +Indicates that the `Env` structure is inactive, and therefore on the `env_free_list`. + * `ENV_RUNNABLE`: +Indicates that the `Env` structure represents an environment that is waiting to run on the processor. + * `ENV_RUNNING`: +Indicates that the `Env` structure represents the currently running environment. + * `ENV_NOT_RUNNABLE`: +Indicates that the `Env` structure represents a currently active environment, but it is not currently ready to run: for example, because it is waiting for an interprocess communication (IPC) from another environment. + * `ENV_DYING`: +Indicates that the `Env` structure represents a zombie environment. A zombie environment will be freed the next time it traps to the kernel. We will not use this flag until Lab 4. + * **env_pgdir** : +This variable holds the kernel _virtual address_ of this environment's page directory. + + + +Like a Unix process, a JOS environment couples the concepts of "thread" and "address space". The thread is defined primarily by the saved registers (the `env_tf` field), and the address space is defined by the page directory and page tables pointed to by `env_pgdir`. To run an environment, the kernel must set up the CPU with _both_ the saved registers and the appropriate address space. + +Our `struct Env` is analogous to `struct proc` in xv6. Both structures hold the environment's (i.e., process's) user-mode register state in a `Trapframe` structure. In JOS, individual environments do not have their own kernel stacks as processes do in xv6. There can be only one JOS environment active in the kernel at a time, so JOS needs only a _single_ kernel stack. 
+
+##### Allocating the Environments Array
+
+In lab 2, you allocated memory in `mem_init()` for the `pages[]` array, which is a table the kernel uses to keep track of which pages are free and which are not. You will now need to modify `mem_init()` further to allocate a similar array of `Env` structures, called `envs`.
+
+```
+Exercise 1. Modify `mem_init()` in `kern/pmap.c` to allocate and map the `envs` array. This array consists of exactly `NENV` instances of the `Env` structure allocated much like how you allocated the `pages` array. Also like the `pages` array, the memory backing `envs` should also be mapped user read-only at `UENVS` (defined in `inc/memlayout.h`) so user processes can read from this array.
+```
+
+You should run your code and make sure `check_kern_pgdir()` succeeds.
+
+##### Creating and Running Environments
+
+You will now write the code in `kern/env.c` necessary to run a user environment. Because we do not yet have a filesystem, we will set up the kernel to load a static binary image that is _embedded within the kernel itself_. JOS embeds this binary in the kernel as an ELF executable image.
+
+The Lab 3 `GNUmakefile` generates a number of binary images in the `obj/user/` directory. If you look at `kern/Makefrag`, you will notice some magic that "links" these binaries directly into the kernel executable as if they were `.o` files. The `-b binary` option on the linker command line causes these files to be linked in as "raw" uninterpreted binary files rather than as regular `.o` files produced by the compiler. (As far as the linker is concerned, these files do not have to be ELF images at all - they could be anything, such as text files or pictures!) If you look at `obj/kern/kernel.sym` after building the kernel, you will notice that the linker has "magically" produced a number of funny symbols with obscure names like `_binary_obj_user_hello_start`, `_binary_obj_user_hello_end`, and `_binary_obj_user_hello_size`. 
The linker generates these symbol names by mangling the file names of the binary files; the symbols provide the regular kernel code with a way to reference the embedded binary files. + +In `i386_init()` in `kern/init.c` you'll see code to run one of these binary images in an environment. However, the critical functions to set up user environments are not complete; you will need to fill them in. + +``` +Exercise 2. In the file `env.c`, finish coding the following functions: + + * `env_init()` +Initialize all of the `Env` structures in the `envs` array and add them to the `env_free_list`. Also calls `env_init_percpu`, which configures the segmentation hardware with separate segments for privilege level 0 (kernel) and privilege level 3 (user). + * `env_setup_vm()` +Allocate a page directory for a new environment and initialize the kernel portion of the new environment's address space. + * `region_alloc()` +Allocates and maps physical memory for an environment + * `load_icode()` +You will need to parse an ELF binary image, much like the boot loader already does, and load its contents into the user address space of a new environment. + * `env_create()` +Allocate an environment with `env_alloc` and call `load_icode` to load an ELF binary into it. + * `env_run()` +Start a given environment running in user mode. + + + +As you write these functions, you might find the new cprintf verb `%e` useful -- it prints a description corresponding to an error code. For example, + + r = -E_NO_MEM; + panic("env_alloc: %e", r); + +will panic with the message "env_alloc: out of memory". +``` + +Below is a call graph of the code up to the point where the user code is invoked. Make sure you understand the purpose of each step. 
+ + * `start` (`kern/entry.S`) + * `i386_init` (`kern/init.c`) + * `cons_init` + * `mem_init` + * `env_init` + * `trap_init` (still incomplete at this point) + * `env_create` + * `env_run` + * `env_pop_tf` + + + +Once you are done you should compile your kernel and run it under QEMU. If all goes well, your system should enter user space and execute the `hello` binary until it makes a system call with the `int` instruction. At that point there will be trouble, since JOS has not set up the hardware to allow any kind of transition from user space into the kernel. When the CPU discovers that it is not set up to handle this system call interrupt, it will generate a general protection exception, find that it can't handle that, generate a double fault exception, find that it can't handle that either, and finally give up with what's known as a "triple fault". Usually, you would then see the CPU reset and the system reboot. While this is important for legacy applications (see [this blog post][3] for an explanation of why), it's a pain for kernel development, so with the 6.828 patched QEMU you'll instead see a register dump and a "Triple fault." message. + +We'll address this problem shortly, but for now we can use the debugger to check that we're entering user mode. Use make qemu-gdb and set a GDB breakpoint at `env_pop_tf`, which should be the last function you hit before actually entering user mode. Single step through this function using si; the processor should enter user mode after the `iret` instruction. You should then see the first instruction in the user environment's executable, which is the `cmpl` instruction at the label `start` in `lib/entry.S`. Now use b *0x... to set a breakpoint at the `int $0x30` in `sys_cputs()` in `hello` (see `obj/user/hello.asm` for the user-space address). This `int` is the system call to display a character to the console. 
If you cannot execute as far as the `int`, then something is wrong with your address space setup or program loading code; go back and fix it before continuing. + +##### Handling Interrupts and Exceptions + +At this point, the first `int $0x30` system call instruction in user space is a dead end: once the processor gets into user mode, there is no way to get back out. You will now need to implement basic exception and system call handling, so that it is possible for the kernel to recover control of the processor from user-mode code. The first thing you should do is thoroughly familiarize yourself with the x86 interrupt and exception mechanism. + +``` +Exercise 3. Read Chapter 9, Exceptions and Interrupts in the 80386 Programmer's Manual (or Chapter 5 of the IA-32 Developer's Manual), if you haven't already. +``` + +In this lab we generally follow Intel's terminology for interrupts, exceptions, and the like. However, terms such as exception, trap, interrupt, fault and abort have no standard meaning across architectures or operating systems, and are often used without regard to the subtle distinctions between them on a particular architecture such as the x86. When you see these terms outside of this lab, the meanings might be slightly different. + +##### Basics of Protected Control Transfer + +Exceptions and interrupts are both "protected control transfers," which cause the processor to switch from user to kernel mode (CPL=0) without giving the user-mode code any opportunity to interfere with the functioning of the kernel or other environments. In Intel's terminology, an _interrupt_ is a protected control transfer that is caused by an asynchronous event usually external to the processor, such as notification of external device I/O activity. An _exception_ , in contrast, is a protected control transfer caused synchronously by the currently running code, for example due to a divide by zero or an invalid memory access. 
+ +In order to ensure that these protected control transfers are actually _protected_ , the processor's interrupt/exception mechanism is designed so that the code currently running when the interrupt or exception occurs _does not get to choose arbitrarily where the kernel is entered or how_. Instead, the processor ensures that the kernel can be entered only under carefully controlled conditions. On the x86, two mechanisms work together to provide this protection: + + 1. **The Interrupt Descriptor Table.** The processor ensures that interrupts and exceptions can only cause the kernel to be entered at a few specific, well-defined entry-points _determined by the kernel itself_ , and not by the code running when the interrupt or exception is taken. + +The x86 allows up to 256 different interrupt or exception entry points into the kernel, each with a different _interrupt vector_. A vector is a number between 0 and 255. An interrupt's vector is determined by the source of the interrupt: different devices, error conditions, and application requests to the kernel generate interrupts with different vectors. The CPU uses the vector as an index into the processor's _interrupt descriptor table_ (IDT), which the kernel sets up in kernel-private memory, much like the GDT. From the appropriate entry in this table the processor loads: + + * the value to load into the instruction pointer (`EIP`) register, pointing to the kernel code designated to handle that type of exception. + * the value to load into the code segment (`CS`) register, which includes in bits 0-1 the privilege level at which the exception handler is to run. (In JOS, all exceptions are handled in kernel mode, privilege level 0.) + 2. 
**The Task State Segment.** The processor needs a place to save the _old_ processor state before the interrupt or exception occurred, such as the original values of `EIP` and `CS` before the processor invoked the exception handler, so that the exception handler can later restore that old state and resume the interrupted code from where it left off. But this save area for the old processor state must in turn be protected from unprivileged user-mode code; otherwise buggy or malicious user code could compromise the kernel. + +For this reason, when an x86 processor takes an interrupt or trap that causes a privilege level change from user to kernel mode, it also switches to a stack in the kernel's memory. A structure called the _task state segment_ (TSS) specifies the segment selector and address where this stack lives. The processor pushes (on this new stack) `SS`, `ESP`, `EFLAGS`, `CS`, `EIP`, and an optional error code. Then it loads the `CS` and `EIP` from the interrupt descriptor, and sets the `ESP` and `SS` to refer to the new stack. + +Although the TSS is large and can potentially serve a variety of purposes, JOS only uses it to define the kernel stack that the processor should switch to when it transfers from user to kernel mode. Since "kernel mode" in JOS is privilege level 0 on the x86, the processor uses the `ESP0` and `SS0` fields of the TSS to define the kernel stack when entering kernel mode. JOS doesn't use any other TSS fields. + + + + +##### Types of Exceptions and Interrupts + +All of the synchronous exceptions that the x86 processor can generate internally use interrupt vectors between 0 and 31, and therefore map to IDT entries 0-31. For example, a page fault always causes an exception through vector 14. Interrupt vectors greater than 31 are only used by _software interrupts_ , which can be generated by the `int` instruction, or asynchronous _hardware interrupts_ , caused by external devices when they need attention. 
+ +In this section we will extend JOS to handle the internally generated x86 exceptions in vectors 0-31. In the next section we will make JOS handle software interrupt vector 48 (0x30), which JOS (fairly arbitrarily) uses as its system call interrupt vector. In Lab 4 we will extend JOS to handle externally generated hardware interrupts such as the clock interrupt. + +##### An Example + +Let's put these pieces together and trace through an example. Let's say the processor is executing code in a user environment and encounters a divide instruction that attempts to divide by zero. + + 1. The processor switches to the stack defined by the `SS0` and `ESP0` fields of the TSS, which in JOS will hold the values `GD_KD` and `KSTACKTOP`, respectively. + + 2. The processor pushes the exception parameters on the kernel stack, starting at address `KSTACKTOP`: + +``` + +--------------------+ KSTACKTOP + | 0x00000 | old SS | " - 4 + | old ESP | " - 8 + | old EFLAGS | " - 12 + | 0x00000 | old CS | " - 16 + | old EIP | " - 20 <---- ESP + +--------------------+ + +``` + + 3. Because we're handling a divide error, which is interrupt vector 0 on the x86, the processor reads IDT entry 0 and sets `CS:EIP` to point to the handler function described by the entry. + + 4. The handler function takes control and handles the exception, for example by terminating the user environment. + + + + +For certain types of x86 exceptions, in addition to the "standard" five words above, the processor pushes onto the stack another word containing an _error code_. The page fault exception, number 14, is an important example. See the 80386 manual to determine for which exception numbers the processor pushes an error code, and what the error code means in that case. 
When the processor pushes an error code, the stack would look as follows at the beginning of the exception handler when coming in from user mode: + +``` + +--------------------+ KSTACKTOP + | 0x00000 | old SS | " - 4 + | old ESP | " - 8 + | old EFLAGS | " - 12 + | 0x00000 | old CS | " - 16 + | old EIP | " - 20 + | error code | " - 24 <---- ESP + +--------------------+ +``` + +##### Nested Exceptions and Interrupts + +The processor can take exceptions and interrupts both from kernel and user mode. It is only when entering the kernel from user mode, however, that the x86 processor automatically switches stacks before pushing its old register state onto the stack and invoking the appropriate exception handler through the IDT. If the processor is _already_ in kernel mode when the interrupt or exception occurs (the low 2 bits of the `CS` register are already zero), then the CPU just pushes more values on the same kernel stack. In this way, the kernel can gracefully handle _nested exceptions_ caused by code within the kernel itself. This capability is an important tool in implementing protection, as we will see later in the section on system calls. + +If the processor is already in kernel mode and takes a nested exception, since it does not need to switch stacks, it does not save the old `SS` or `ESP` registers. For exception types that do not push an error code, the kernel stack therefore looks like the following on entry to the exception handler: + +``` + +--------------------+ <---- old ESP + | old EFLAGS | " - 4 + | 0x00000 | old CS | " - 8 + | old EIP | " - 12 + +--------------------+ +``` + +For exception types that push an error code, the processor pushes the error code immediately after the old `EIP`, as before. + +There is one important caveat to the processor's nested exception capability. 
If the processor takes an exception while already in kernel mode, and _cannot push its old state onto the kernel stack_ for any reason such as lack of stack space, then there is nothing the processor can do to recover, so it simply resets itself. Needless to say, the kernel should be designed so that this can't happen. + +##### Setting Up the IDT + +You should now have the basic information you need in order to set up the IDT and handle exceptions in JOS. For now, you will set up the IDT to handle interrupt vectors 0-31 (the processor exceptions). We'll handle system call interrupts later in this lab and add interrupts 32-47 (the device IRQs) in a later lab. + +The header files `inc/trap.h` and `kern/trap.h` contain important definitions related to interrupts and exceptions that you will need to become familiar with. The file `kern/trap.h` contains definitions that are strictly private to the kernel, while `inc/trap.h` contains definitions that may also be useful to user-level programs and libraries. + +Note: Some of the exceptions in the range 0-31 are defined by Intel to be reserved. Since they will never be generated by the processor, it doesn't really matter how you handle them. Do whatever you think is cleanest. + +The overall flow of control that you should achieve is depicted below: + +``` + IDT trapentry.S trap.c + ++----------------+ +| &handler1 |---------> handler1: trap (struct Trapframe *tf) +| | // do stuff { +| | call trap // handle the exception/interrupt +| | // ... } ++----------------+ +| &handler2 |--------> handler2: +| | // do stuff +| | call trap +| | // ... ++----------------+ + . + . + . ++----------------+ +| &handlerX |--------> handlerX: +| | // do stuff +| | call trap +| | // ... ++----------------+ +``` + +Each exception or interrupt should have its own handler in `trapentry.S` and `trap_init()` should initialize the IDT with the addresses of these handlers. 
Each of the handlers should build a `struct Trapframe` (see `inc/trap.h`) on the stack and call `trap()` (in `trap.c`) with a pointer to the Trapframe. `trap()` then handles the exception/interrupt or dispatches to a specific handler function. + +``` +Exercise 4. Edit `trapentry.S` and `trap.c` and implement the features described above. The macros `TRAPHANDLER` and `TRAPHANDLER_NOEC` in `trapentry.S` should help you, as well as the T_* defines in `inc/trap.h`. You will need to add an entry point in `trapentry.S` (using those macros) for each trap defined in `inc/trap.h`, and you'll have to provide `_alltraps` which the `TRAPHANDLER` macros refer to. You will also need to modify `trap_init()` to initialize the `idt` to point to each of these entry points defined in `trapentry.S`; the `SETGATE` macro will be helpful here. + +Your `_alltraps` should: + + 1. push values to make the stack look like a struct Trapframe + 2. load `GD_KD` into `%ds` and `%es` + 3. `pushl %esp` to pass a pointer to the Trapframe as an argument to trap() + 4. `call trap` (can `trap` ever return?) + + + +Consider using the `pushal` instruction; it fits nicely with the layout of the `struct Trapframe`. + +Test your trap handling code using some of the test programs in the `user` directory that cause exceptions before making any system calls, such as `user/divzero`. You should be able to get make grade to succeed on the `divzero`, `softint`, and `badsegment` tests at this point. +``` + +``` +Challenge! You probably have a lot of very similar code right now, between the lists of `TRAPHANDLER` in `trapentry.S` and their installations in `trap.c`. Clean this up. Change the macros in `trapentry.S` to automatically generate a table for `trap.c` to use. Note that you can switch between laying down code and data in the assembler by using the directives `.text` and `.data`. +``` + +``` +Questions + +Answer the following questions in your `answers-lab3.txt`: + + 1. 
What is the purpose of having an individual handler function for each exception/interrupt? (i.e., if all exceptions/interrupts were delivered to the same handler, what feature that exists in the current implementation could not be provided?)
+ 2. Did you have to do anything to make the `user/softint` program behave correctly? The grade script expects it to produce a general protection fault (trap 13), but `softint`'s code says `int $14`. _Why_ should this produce interrupt vector 13? What happens if the kernel actually allows `softint`'s `int $14` instruction to invoke the kernel's page fault handler (which is interrupt vector 14)?
+```
+
+
+This concludes part A of the lab. Don't forget to add `answers-lab3.txt`, commit your changes, and run make handin before the part A deadline.
+
+#### Part B: Page Faults, Breakpoint Exceptions, and System Calls
+
+Now that your kernel has basic exception handling capabilities, you will refine it to provide important operating system primitives that depend on exception handling.
+
+##### Handling Page Faults
+
+The page fault exception, interrupt vector 14 (`T_PGFLT`), is a particularly important one that we will exercise heavily throughout this lab and the next. When the processor takes a page fault, it stores the linear (i.e., virtual) address that caused the fault in a special processor control register, `CR2`. In `trap.c` we have provided the beginnings of a special function, `page_fault_handler()`, to handle page fault exceptions.
+
+```
+Exercise 5. Modify `trap_dispatch()` to dispatch page fault exceptions to `page_fault_handler()`. You should now be able to get make grade to succeed on the `faultread`, `faultreadkernel`, `faultwrite`, and `faultwritekernel` tests. If any of them don't work, figure out why and fix them. Remember that you can boot JOS into a particular user program using make run- _x_ or make run- _x_ -nox. For instance, make run-hello-nox runs the _hello_ user program. 
+``` + +You will further refine the kernel's page fault handling below, as you implement system calls. + +##### The Breakpoint Exception + +The breakpoint exception, interrupt vector 3 (`T_BRKPT`), is normally used to allow debuggers to insert breakpoints in a program's code by temporarily replacing the relevant program instruction with the special 1-byte `int3` software interrupt instruction. In JOS we will abuse this exception slightly by turning it into a primitive pseudo-system call that any user environment can use to invoke the JOS kernel monitor. This usage is actually somewhat appropriate if we think of the JOS kernel monitor as a primitive debugger. The user-mode implementation of `panic()` in `lib/panic.c`, for example, performs an `int3` after displaying its panic message. + +``` +Exercise 6. Modify `trap_dispatch()` to make breakpoint exceptions invoke the kernel monitor. You should now be able to get make grade to succeed on the `breakpoint` test. +``` + +``` +Challenge! Modify the JOS kernel monitor so that you can 'continue' execution from the current location (e.g., after the `int3`, if the kernel monitor was invoked via the breakpoint exception), and so that you can single-step one instruction at a time. You will need to understand certain bits of the `EFLAGS` register in order to implement single-stepping. + +Optional: If you're feeling really adventurous, find some x86 disassembler source code - e.g., by ripping it out of QEMU, or out of GNU binutils, or just write it yourself - and extend the JOS kernel monitor to be able to disassemble and display instructions as you are stepping through them. Combined with the symbol table loading from lab 1, this is the stuff of which real kernel debuggers are made. +``` + +``` +Questions + + 3. The break point test case will either generate a break point exception or a general protection fault depending on how you initialized the break point entry in the IDT (i.e., your call to `SETGATE` from `trap_init`). 
Why? How do you need to set it up in order to get the breakpoint exception to work as specified above and what incorrect setup would cause it to trigger a general protection fault? + 4. What do you think is the point of these mechanisms, particularly in light of what the `user/softint` test program does? +``` + + +##### System calls + +User processes ask the kernel to do things for them by invoking system calls. When the user process invokes a system call, the processor enters kernel mode, the processor and the kernel cooperate to save the user process's state, the kernel executes appropriate code in order to carry out the system call, and then resumes the user process. The exact details of how the user process gets the kernel's attention and how it specifies which call it wants to execute vary from system to system. + +In the JOS kernel, we will use the `int` instruction, which causes a processor interrupt. In particular, we will use `int $0x30` as the system call interrupt. We have defined the constant `T_SYSCALL` to 48 (0x30) for you. You will have to set up the interrupt descriptor to allow user processes to cause that interrupt. Note that interrupt 0x30 cannot be generated by hardware, so there is no ambiguity caused by allowing user code to generate it. + +The application will pass the system call number and the system call arguments in registers. This way, the kernel won't need to grub around in the user environment's stack or instruction stream. The system call number will go in `%eax`, and the arguments (up to five of them) will go in `%edx`, `%ecx`, `%ebx`, `%edi`, and `%esi`, respectively. The kernel passes the return value back in `%eax`. The assembly code to invoke a system call has been written for you, in `syscall()` in `lib/syscall.c`. You should read through it and make sure you understand what is going on. + +``` +Exercise 7. Add a handler in the kernel for interrupt vector `T_SYSCALL`. 
You will have to edit `kern/trapentry.S` and `kern/trap.c`'s `trap_init()`. You also need to change `trap_dispatch()` to handle the system call interrupt by calling `syscall()` (defined in `kern/syscall.c`) with the appropriate arguments, and then arranging for the return value to be passed back to the user process in `%eax`. Finally, you need to implement `syscall()` in `kern/syscall.c`. Make sure `syscall()` returns `-E_INVAL` if the system call number is invalid. You should read and understand `lib/syscall.c` (especially the inline assembly routine) in order to confirm your understanding of the system call interface. Handle all the system calls listed in `inc/syscall.h` by invoking the corresponding kernel function for each call. + +Run the `user/hello` program under your kernel (make run-hello). It should print "`hello, world`" on the console and then cause a page fault in user mode. If this does not happen, it probably means your system call handler isn't quite right. You should also now be able to get make grade to succeed on the `testbss` test. +``` + +``` +Challenge! Implement system calls using the `sysenter` and `sysexit` instructions instead of using `int 0x30` and `iret`. + +The `sysenter/sysexit` instructions were designed by Intel to be faster than `int/iret`. They do this by using registers instead of the stack and by making assumptions about how the segmentation registers are used. The exact details of these instructions can be found in Volume 2B of the Intel reference manuals. + +The easiest way to add support for these instructions in JOS is to add a `sysenter_handler` in `kern/trapentry.S` that saves enough information about the user environment to return to it, sets up the kernel environment, pushes the arguments to `syscall()` and calls `syscall()` directly. Once `syscall()` returns, set everything up for and execute the `sysexit` instruction. 
You will also need to add code to `kern/init.c` to set up the necessary model specific registers (MSRs). Section 6.1.2 in Volume 2 of the AMD Architecture Programmer's Manual and the reference on SYSENTER in Volume 2B of the Intel reference manuals give good descriptions of the relevant MSRs. You can find an implementation of `wrmsr` to add to `inc/x86.h` for writing to these MSRs [here][4]. + +Finally, `lib/syscall.c` must be changed to support making a system call with `sysenter`. Here is a possible register layout for the `sysenter` instruction: + + eax - syscall number + edx, ecx, ebx, edi - arg1, arg2, arg3, arg4 + esi - return pc + ebp - return esp + esp - trashed by sysenter + +GCC's inline assembler will automatically save registers that you tell it to load values directly into. Don't forget to either save (push) and restore (pop) other registers that you clobber, or tell the inline assembler that you're clobbering them. The inline assembler doesn't support saving `%ebp`, so you will need to add code to save and restore it yourself. The return address can be put into `%esi` by using an instruction like `leal after_sysenter_label, %%esi`. + +Note that this only supports 4 arguments, so you will need to leave the old method of doing system calls around to support 5 argument system calls. Furthermore, because this fast path doesn't update the current environment's trap frame, it won't be suitable for some of the system calls we add in later labs. + +You may have to revisit your code once we enable asynchronous interrupts in the next lab. Specifically, you'll need to enable interrupts when returning to the user process, which `sysexit` doesn't do for you. +``` + +##### User-mode startup + +A user program starts running at the top of `lib/entry.S`. After some setup, this code calls `libmain()`, in `lib/libmain.c`. You should modify `libmain()` to initialize the global pointer `thisenv` to point at this environment's `struct Env` in the `envs[]` array. 
(Note that `lib/entry.S` has already defined `envs` to point at the `UENVS` mapping you set up in Part A.) Hint: look in `inc/env.h` and use `sys_getenvid`. + +`libmain()` then calls `umain`, which, in the case of the hello program, is in `user/hello.c`. Note that after printing "`hello, world`", it tries to access `thisenv->env_id`. This is why it faulted earlier. Now that you've initialized `thisenv` properly, it should not fault. If it still faults, you probably haven't mapped the `UENVS` area user-readable (back in Part A in `pmap.c`; this is the first time we've actually used the `UENVS` area). + +``` +Exercise 8. Add the required code to the user library, then boot your kernel. You should see `user/hello` print "`hello, world`" and then print "`i am environment 00001000`". `user/hello` then attempts to "exit" by calling `sys_env_destroy()` (see `lib/libmain.c` and `lib/exit.c`). Since the kernel currently only supports one user environment, it should report that it has destroyed the only environment and then drop into the kernel monitor. You should be able to get make grade to succeed on the `hello` test. +``` + +##### Page faults and memory protection + +Memory protection is a crucial feature of an operating system, ensuring that bugs in one program cannot corrupt other programs or corrupt the operating system itself. + +Operating systems usually rely on hardware support to implement memory protection. The OS keeps the hardware informed about which virtual addresses are valid and which are not. When a program tries to access an invalid address or one for which it has no permissions, the processor stops the program at the instruction causing the fault and then traps into the kernel with information about the attempted operation. If the fault is fixable, the kernel can fix it and let the program continue running. If the fault is not fixable, then the program cannot continue, since it will never get past the instruction causing the fault. 
+ +As an example of a fixable fault, consider an automatically extended stack. In many systems the kernel initially allocates a single stack page, and then if a program faults accessing pages further down the stack, the kernel will allocate those pages automatically and let the program continue. By doing this, the kernel only allocates as much stack memory as the program needs, but the program can work under the illusion that it has an arbitrarily large stack. + +System calls present an interesting problem for memory protection. Most system call interfaces let user programs pass pointers to the kernel. These pointers point at user buffers to be read or written. The kernel then dereferences these pointers while carrying out the system call. There are two problems with this: + + 1. A page fault in the kernel is potentially a lot more serious than a page fault in a user program. If the kernel page-faults while manipulating its own data structures, that's a kernel bug, and the fault handler should panic the kernel (and hence the whole system). But when the kernel is dereferencing pointers given to it by the user program, it needs a way to remember that any page faults these dereferences cause are actually on behalf of the user program. + 2. The kernel typically has more memory permissions than the user program. The user program might pass a pointer to a system call that points to memory that the kernel can read or write but that the program cannot. The kernel must be careful not to be tricked into dereferencing such a pointer, since that might reveal private information or destroy the integrity of the kernel. + + + +For both of these reasons the kernel must be extremely careful when handling pointers presented by user programs. + +You will now solve these two problems with a single mechanism that scrutinizes all pointers passed from userspace into the kernel. 
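The address-range half of that mechanism can be sketched in a few lines. The sketch below is illustrative Python rather than the lab's C: the `ULIM` constant is the top of JOS user space as assumed from the lab's memory layout, the masking emulates 32-bit unsigned arithmetic so an enormous length cannot wrap the range past the limit, and the real `user_mem_check` must additionally walk the page table and verify the permission bits.

```python
# Illustrative sketch (Python, not the lab's C) of the address-range part
# of a user_mem_check-style test. ULIM is an assumption taken from the JOS
# memory layout; the real kernel routine must also walk the page table and
# check the PTE_U/PTE_W permission bits for every page in the range.

ULIM = 0xEF800000
MASK32 = 0xFFFFFFFF  # emulate 32-bit unsigned arithmetic


def user_range_ok(va, length):
    """Return True iff [va, va+length) lies entirely below ULIM.

    The wrap-around check mirrors what C code must do with unsigned
    overflow: a huge `length` must not let the range wrap past 2**32
    and appear to end at a small address.
    """
    end = (va + length) & MASK32
    if end < va:          # range wrapped around the 32-bit address space
        return False
    return end <= ULIM    # entirely within user-accessible addresses


# A pointer into user space passes; a kernel address or a wrapping
# length must be rejected (and the offending environment destroyed).
print(user_range_ok(0x00800000, 4096))        # user buffer
print(user_range_ok(0xF0100000, 4))           # kernel address
print(user_range_ok(0x00001000, 0xFFFFFFFF))  # length wraps around
```
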
When a program passes the kernel a pointer, the kernel will check that the address is in the user part of the address space, and that the page table would allow the memory operation. + +Thus, the kernel will never suffer a page fault due to dereferencing a user-supplied pointer. If the kernel does page fault, it should panic and terminate. + +``` +Exercise 9. Change `kern/trap.c` to panic if a page fault happens in kernel mode. + +Hint: to determine whether a fault happened in user mode or in kernel mode, check the low bits of the `tf_cs`. + +Read `user_mem_assert` in `kern/pmap.c` and implement `user_mem_check` in that same file. + +Change `kern/syscall.c` to sanity check arguments to system calls. + +Boot your kernel, running `user/buggyhello`. The environment should be destroyed, and the kernel should _not_ panic. You should see: + + [00001000] user_mem_check assertion failure for va 00000001 + [00001000] free env 00001000 + Destroyed the only environment - nothing more to do! +Finally, change `debuginfo_eip` in `kern/kdebug.c` to call `user_mem_check` on `usd`, `stabs`, and `stabstr`. If you now run `user/breakpoint`, you should be able to run backtrace from the kernel monitor and see the backtrace traverse into `lib/libmain.c` before the kernel panics with a page fault. What causes this page fault? You don't need to fix it, but you should understand why it happens. +``` + +Note that the same mechanism you just implemented also works for malicious user applications (such as `user/evilhello`). + +``` +Exercise 10. Boot your kernel, running `user/evilhello`. The environment should be destroyed, and the kernel should not panic. You should see: + + [00000000] new env 00001000 + ... 
+ [00001000] user_mem_check assertion failure for va f010000c + [00001000] free env 00001000 +``` + +**This completes the lab.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab3.txt`. Commit your changes and type make handin in the `lab` directory to submit your work. + +Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab3.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 3', then make handin and follow the directions. + +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/labs/lab3/ + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://pdos.csail.mit.edu +[b]: https://github.com/lujun9972 +[1]: https://pdos.csail.mit.edu/6.828/2018/labs/labguide.html +[2]: https://pdos.csail.mit.edu/6.828/2018/labs/reference.html +[3]: http://blogs.msdn.com/larryosterman/archive/2005/02/08/369243.aspx +[4]: http://ftp.kh.edu.tw/Linux/SuSE/people/garloff/linux/k6mod.c diff --git a/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md b/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md new file mode 100644 index 0000000000..6418db9444 --- /dev/null +++ b/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md @@ -0,0 +1,181 @@ +PyTorch 1.0 Preview Release: Facebook’s newest Open Source AI +====== +Facebook already uses its own Open Source AI, PyTorch quite extensively in its own artificial intelligence projects. Recently, they have gone a league ahead by releasing a pre-release preview version 1.0. 
+ +For those who are not familiar, [PyTorch][1] is a Python-based library for Scientific Computing. + +PyTorch harnesses the [superior computational power of Graphical Processing Units (GPUs)][2] for carrying out complex [Tensor][3] computations and implementing [deep neural networks][4]. So, it is used widely across the world by numerous researchers and developers. + +This new ready-to-use [Preview Release][5] was announced at the [PyTorch Developer Conference][6] at [The Midway][7], San Francisco, CA on Tuesday, October 2, 2018. + +### Highlights of PyTorch 1.0 Release Candidate + +![PyTorhc is Python based open source AI framework from Facebook][8] + +Some of the main new features in the release candidate are: + +#### 1\. JIT + +JIT is a set of compiler tools to bring research close to production. It includes a Python-based language called Torch Script and also ways to make existing code compatible with itself. + +#### 2\. New torch.distributed library: “C10D” + +“C10D” enables asynchronous operation on different backends with performance improvements on slower networks and more. + +#### 3\. C++ frontend (experimental) + +Though it has been specifically mentioned as an unstable API (expected in a pre-release), this is a pure C++ interface to the PyTorch backend that follows the API and architecture of the established Python frontend to enable research in high performance, low latency and C++ applications installed directly on hardware. + +To know more, you can take a look at the complete [update notes][9] on GitHub. + +The first stable version PyTorch 1.0 will be released in summer. + +### Installing PyTorch on Linux + +To install PyTorch v1.0rc0, the developers recommend using [conda][10] while there also other ways to do that as shown on their [local installation page][11] where they have documented everything necessary in detail. 
+ +#### Prerequisites + + * Linux + * Pip + * Python + * [CUDA][12] (For Nvidia GPU owners) + + + +As we recently showed you [how to install and use Pip][13], let’s get to know how we can install PyTorch with it. + +Note that PyTorch has GPU and CPU-only variants. You should install the one that suits your hardware. + +#### Installing old and stable version of PyTorch + +If you want the stable release (version 0.4) for your GPU, use: + +``` +pip install torch torchvision + +``` + +Use these two commands in succession for a CPU-only stable release: + +``` +pip install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp27-cp27mu-linux_x86_64.whl +pip install torchvision + +``` + +#### Installing PyTorch 1.0 Release Candidate + +You install PyTorch 1.0 RC GPU version with this command: + +``` +pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu92/torch_nightly.html + +``` + +If you do not have a GPU and would prefer a CPU-only version, use: + +``` +pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html + +``` + +#### Verifying your PyTorch installation + +Startup the python console on a terminal with the following simple command: + +``` +python + +``` + +Now enter the following sample code line by line to verify your installation: + +``` +from __future__ import print_function +import torch +x = torch.rand(5, 3) +print(x) + +``` + +You should get an output like: + +``` +tensor([[0.3380, 0.3845, 0.3217], + [0.8337, 0.9050, 0.2650], + [0.2979, 0.7141, 0.9069], + [0.1449, 0.1132, 0.1375], + [0.4675, 0.3947, 0.1426]]) + +``` + +To check whether you can use PyTorch’s GPU capabilities, use the following sample code: + +``` +import torch +torch.cuda.is_available() + +``` + +The resulting output should be: + +``` +True + +``` + +Support for AMD GPUs for PyTorch is still under development, so complete test coverage is not yet provided as reported [here][14], suggesting this [resource][15] in case you have an AMD GPU. 
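Building on the two verification snippets above, a common pattern is to probe for a GPU once and fall back to the CPU when none is usable. The helper below is plain Python so the selection logic can be followed (and exercised) on any machine; the PyTorch calls at the end are a hypothetical usage sketch and only run if the library is installed.

```python
def pick_device_name(cuda_available):
    """Return the device string PyTorch expects: 'cuda' when a usable
    GPU was detected, otherwise fall back to 'cpu'."""
    return "cuda" if cuda_available else "cpu"


if __name__ == "__main__":
    try:
        import torch
        # Same probe as in the verification step above, reused here to
        # decide where tensors should live.
        device = torch.device(pick_device_name(torch.cuda.is_available()))
        x = torch.rand(5, 3, device=device)
        print(x.device)
    except ImportError:
        print("PyTorch is not installed; install it as shown above.")
```
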
+ +Lets now look into some research projects that extensively use PyTorch: + +### Ongoing Research Projects based on PyTorch + + * [Detectron][16]: Facebook AI Research’s software system to intelligently detect and classify objects. It is based on Caffe2. Earlier this year, Caffe2 and PyTorch [joined forces][17] to create a Research + Production enabled PyTorch 1.0 we talk about. + * [Unsupervised Sentiment Discovery][18]: Such methods are extensively used with social media algorithms. + * [vid2vid][19]: Photorealistic video-to-video translation + * [DeepRecommender][20] (We covered how such systems work on our past [Netflix AI article][21]) + + + +Nvidia, leading GPU manufacturer covered more on this with their own [update][22] on this recent development where you can also read about ongoing collaborative research endeavours. + +### How should we react to such PyTorch capabilities? + +To think Facebook applies such amazingly innovative projects and more in its social media algorithms, should we appreciate all this or get alarmed? This is almost [Skynet][23]! This newly improved production-ready pre-release of PyTorch will certainly push things further ahead! Feel free to share your thoughts with us in the comments below! 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/pytorch-open-source-ai-framework/ + +作者:[Avimanyu Bandyopadhyay][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/avimanyu/ +[1]: https://pytorch.org/ +[2]: https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units +[3]: https://en.wikipedia.org/wiki/Tensor +[4]: https://www.techopedia.com/definition/32902/deep-neural-network +[5]: https://code.fb.com/ai-research/facebook-accelerates-ai-development-with-new-partners-and-production-capabilities-for-pytorch-1-0 +[6]: https://pytorch.fbreg.com/ +[7]: https://www.themidwaysf.com/ +[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/pytorch.jpeg +[9]: https://github.com/pytorch/pytorch/releases/tag/v1.0rc0 +[10]: https://conda.io/ +[11]: https://pytorch.org/get-started/locally/ +[12]: https://www.pugetsystems.com/labs/hpc/How-to-install-CUDA-9-2-on-Ubuntu-18-04-1184/ +[13]: https://itsfoss.com/install-pip-ubuntu/ +[14]: https://github.com/pytorch/pytorch/issues/10657#issuecomment-415067478 +[15]: https://rocm.github.io/install.html#installing-from-amd-rocm-repositories +[16]: https://github.com/facebookresearch/Detectron +[17]: https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html +[18]: https://github.com/NVIDIA/sentiment-discovery +[19]: https://github.com/NVIDIA/vid2vid +[20]: https://github.com/NVIDIA/DeepRecommender/ +[21]: https://itsfoss.com/netflix-open-source-ai/ +[22]: https://news.developer.nvidia.com/pytorch-1-0-accelerated-on-nvidia-gpus/ +[23]: https://en.wikipedia.org/wiki/Skynet_(Terminator) diff --git a/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md b/sources/tech/20181005 Dbxfs - Mount Dropbox 
Folder Locally As Virtual File System In Linux.md new file mode 100644 index 0000000000..691600a4cc --- /dev/null +++ b/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md @@ -0,0 +1,133 @@ +Dbxfs – Mount Dropbox Folder Locally As Virtual File System In Linux +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/dbxfs-720x340.png) + +A while ago, we summarized all the possible ways to **[mount Google drive locally][1]** as a virtual file system and access the files stored in the google drive from your Linux operating system. Today, we are going to learn to mount Dropbox folder in your local file system using **dbxfs** utility. The dbxfs is used to mount your Dropbox folder locally as a virtual filesystem in Unix-like operating systems. While it is easy to [**install Dropbox client**][2] in Linux, this approach slightly differs from the official method. It is a command line dropbox client and requires no disk space for access. The dbxfs application is free, open source and written for Python 3.5+. + +### Installing dbxfs + +The dbxfs officially supports Linux and Mac OS. However, it should work on any POSIX system that provides a **FUSE-compatible library** or has the ability to mount **SMB** shares. Since it is written for Python 3.5, it can installed using **pip3** package manager. Refer the following guide if you haven’t installed PIP yet. + +And, install FUSE library as well. + +On Debian-based systems, run the following command to install FUSE: + +``` +$ sudo apt install libfuse2 + +``` + +On Fedora: + +``` +$ sudo dnf install fuse + +``` + +Once you installed all required dependencies, run the following command to install dbxfs utility: + +``` +$ pip3 install dbxfs + +``` + +### Mount Dropbox folder locally + +Create a mount point to mount your dropbox folder in your local file system. 
+ +``` +$ mkdir ~/mydropbox + +``` + +Then, mount the dropbox folder locally using dbxfs utility as shown below: + +``` +$ dbxfs ~/mydropbox + +``` + +You will be asked to generate an access token: + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Generate-access-token-1.png) + +To generate an access token, just navigate to the URL given in the above output from your web browser and click **Allow** to authenticate Dropbox access. You need to log in to your dropbox account to complete authorization process. + +A new authorization code will be generated in the next screen. Copy the code and head back to your Terminal and paste it into cli-dbxfs prompt to finish the process. + +You will be then asked to save the credentials for future access. Type **Y** or **N** whether you want to save or decline. And then, you need to enter a passphrase twice for the new access token. + +Finally, click **Y** to accept **“/home/username/mydropbox”** as the default mount point. If you want to set different path, type **N** and enter the location of your choice. + +[![Generate access token 2][3]][4] + +All done! From now on, you can see your Dropbox folder is locally mounted in your filesystem. + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Dropbox-in-file-manager.png) + +### Change Access Token Storage Path + +By default, the dbxfs application will store your Dropbox access token in the system keyring or an encrypted file. However, you might want to store it in a **gpg** encrypted file or something else. If so, get an access token by creating a personal app on the [Dropbox developers app console][5]. + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/access-token.png) + +Once the app is created, click **Generate** button in the next button. This access token can be used to access your Dropbox account via the API. Don’t share your access token with anyone. 
+ +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-a-new-app.png) + +Once you created an access token, encrypt it using any encryption tools of your choice, such as [**Cryptomater**][6], [**Cryptkeeper**][7], [**CryptGo**][8], [**Cryptr**][9], [**Tomb**][10], [**Toplip**][11] and [**GnuPG**][12] etc., and store it in your preferred location. + +Next edit the dbxfs configuration file and add the following line in it: + +``` +"access_token_command": ["gpg", "--decrypt", "/path/to/access/token/file.gpg"] + +``` + +You can find the dbxfs configuration file by running the following command: + +``` +$ dbxfs --print-default-config-file + +``` + +For more details, refer dbxfs help section: + +``` +$ dbxfs -h + +``` + +As you can see, mounting Dropfox folder locally in your file system using Dbxfs utility is no big deal. As far tested, dbxfs just works fine as expected. Give it a try if you’re interested to see how it works and let us know about your experience in the comment section below. + +And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned! + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/dbxfs-mount-dropbox-folder-locally-as-virtual-file-system-in-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/ +[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ +[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/Generate-access-token-2.png +[5]: https://dropbox.com/developers/apps +[6]: https://www.ostechnix.com/cryptomator-open-source-client-side-encryption-tool-cloud/ +[7]: https://www.ostechnix.com/how-to-encrypt-your-personal-foldersdirectories-in-linux-mint-ubuntu-distros/ +[8]: https://www.ostechnix.com/cryptogo-easy-way-encrypt-password-protect-files/ +[9]: https://www.ostechnix.com/cryptr-simple-cli-utility-encrypt-decrypt-files/ +[10]: https://www.ostechnix.com/tomb-file-encryption-tool-protect-secret-files-linux/ +[11]: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/ +[12]: https://www.ostechnix.com/an-easy-way-to-encrypt-and-decrypt-files-from-commandline-in-linux/ diff --git a/sources/tech/20181005 How to use Kolibri to access educational material offline.md b/sources/tech/20181005 How to use Kolibri to access educational material offline.md new file mode 100644 index 0000000000..f856a497cd --- /dev/null +++ b/sources/tech/20181005 How to use Kolibri to access educational material offline.md @@ -0,0 +1,107 @@ +How to use Kolibri to access educational material offline +====== +Kolibri makes digital educational materials available to students without internet 
access. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OSDC_BYU_520x292_FINAL.png?itok=NVY7vR8o) + +While the internet has thoroughly transformed the availability of educational content for much of the world, many people still live in places where online access is poor or even nonexistent. [Kolibri][1] is a great solution for these communities. It's an app that creates an offline server to deliver high-quality educational resources to learners. You can set up Kolibri on a wide range of [hardware][2], including low-cost Windows, MacOS, and Linux (including Raspberry Pi) computers. A version for Android tablets is in the works. + +Because it's open source, free to use, works without broadband access (after initial setup), and includes a wide range of educational content, it gives students in rural schools, refugee camps, orphanages, informal schools, prisons, and other places without reliable internet service access to many of the same resources used by students all over the world. + +In addition to being simple to install, it's easy to customize Kolibri for various educational missions and needs, including literacy building, general reference materials, and life skills training. Kolibri includes content from sources including [OpenStax,][3] [CK-12][4], [Khan Academy][5], and [EngageNY][6]; once these packages are "seeded" by connecting the Kolibri serving device to a robust internet connection, they are immediately available for offline access on client devices through a compatible browser. + +### Installation and setup + +I installed Kolibri on an Intel i3-based laptop running Fedora 28. I chose the **pip install** method, which is very easy. Here's how to do it. + +Open a terminal and enter: + +``` +$ sudo pip install kolibri + +``` + +Start Kolibri by entering **$** **kolibri** **start** in the terminal. + +Find your Kolibri installation's URL in the terminal. 
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_url.png)
+
+Open your browser and point it to that URL, being sure to append port **8080**.
+
+Select the default language—options include English, Spanish, French, Arabic, Portuguese, Hindi, Farsi, Burmese, and Bengali. (I chose English.)
+
+Name your facility, e.g., your classroom, library, or home. (I named mine Test.)
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_name.png)
+
+Tell Kolibri what type of facility you're setting up—self-managed, admin-managed, or informal. (I chose self-managed.)
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_facility-type.png)
+
+Create an admin account.
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_admin.png)
+
+### Add content
+
+You can add Kolibri-curated content channels while you are connected to broadband service. Explore and add content from the menu at the top-left of the browser.
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_menu.png)
+
+Choose Device and Import.
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_import.png)
+
+Selecting English as the default language provides access to 29 content channels, including Touchable Earth, Global Digital Library, Khan Academy, OpenStax, CK-12, EngageNY, Blockly games, and more.
+
+Select a channel you're interested in. You have the option to download the entire channel (which might take a long time) or to select the specific content you want to download.
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_select-content.png)
+
+To access your content, return to the top-left menu and select Learn.
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_content.png)
+
+### Add users
+
+User accounts can be set up as learners, coaches, or admins. Users can access the Kolibri server from most web browsers on any Linux, MacOS, Windows, Android, or iOS device on the same network, even if the network isn't connected to the internet. Admins can set up classes on the device, assign coaches and learners to classes, and see every user's interaction and how much time they spend with the content.
+
+If your Kolibri server is set up as self-managed, users can create their own accounts by entering the Kolibri URL in their browser and following the prompts. For information on setting up users on an admin-managed server, check out Kolibri's [documentation][7].
+
+![](https://opensource.com/sites/default/files/uploads/kolibri_user-account.png)
+
+After logging in, the user can access content right away to begin learning.
+
+### Learn more
+
+Kolibri is a very powerful learning resource, especially for people who don't have a robust connection to the internet. Its [documentation][8] is very complete, and a [demo][9] site maintained by the project allows you to try it out.
+
+Kolibri is open source under the [MIT License][10]. The project, which is managed by the nonprofit organization Learning Equality, is looking for developers—if you would like to get involved, be sure to check them out on [GitHub][11]. To learn more, follow Learning Equality and Kolibri on their [blog][12], [Twitter][13], and [Facebook][14] pages.
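If the server doesn't come up after a pip-based installation, a quick first check is whether the kolibri package actually landed in the Python environment that pip used. This is a hedged troubleshooting sketch, not part of the official instructions; adjust `python3` to whichever interpreter your pip targets:

```shell
# Report whether the kolibri package is importable, and suggest the next step.
if python3 -c 'import importlib.util, sys; sys.exit(0 if importlib.util.find_spec("kolibri") else 1)' 2>/dev/null; then
  echo "kolibri is installed; start the server with: kolibri start"
else
  echo "kolibri is not installed; install it with: sudo pip install kolibri"
fi
```

Either way, the message tells you the next command to run.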
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/getting-started-kolibri
+
+作者:[Don Watkins][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[1]: https://learningequality.org/kolibri/
+[2]: https://drive.google.com/file/d/0B9ZzDms8cSNgVWRKdUlPc2lkTkk/view
+[3]: https://openstax.org/
+[4]: https://www.ck12.org/
+[5]: https://www.khanacademy.org/
+[6]: https://www.engageny.org/
+[7]: https://kolibri.readthedocs.io/en/latest/manage.html#create-a-new-user-account
+[8]: https://learningequality.org/documentation/
+[9]: http://kolibridemo.learningequality.org/learn/#/topics
+[10]: https://github.com/learningequality/kolibri/blob/develop/LICENSE
+[11]: https://github.com/learningequality/
+[12]: https://blog.learningequality.org/
+[13]: https://twitter.com/LearnEQ/
+[14]: https://www.facebook.com/learningequality
diff --git a/sources/tech/20181005 Open Source Logging Tools for Linux.md b/sources/tech/20181005 Open Source Logging Tools for Linux.md
new file mode 100644
index 0000000000..723488008a
--- /dev/null
+++ b/sources/tech/20181005 Open Source Logging Tools for Linux.md
@@ -0,0 +1,188 @@
+Open Source Logging Tools for Linux
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs-main.jpg?itok=voNrSz4H)
+
+If you're a Linux systems administrator, one of the first tools you will turn to for troubleshooting is log files. These files hold crucial information that can go a long way to help you solve problems affecting your desktops and servers. For many sysadmins (especially those of an old-school sort), nothing beats the command line for checking log files. But for those who'd rather have a more efficient (and possibly modern) approach to troubleshooting, there are plenty of options.
+
+In this article, I'll highlight a few such tools available for the Linux platform. I won't be getting into logging tools that might be specific to a certain service (such as Kubernetes or Apache), and instead will focus on tools that work to mine the depths of all that magical information written into /var/log.
+
+Speaking of which…
+
+### What is /var/log?
+
+If you're new to Linux, you might not know what the /var/log directory contains. However, the name is very telling. Within this directory are housed all of the log files from the system and any major service (such as Apache, MySQL, MariaDB, etc.) installed on the operating system. Open a terminal window and issue the command cd /var/log. Follow that with the command ls and you'll see all of the various systems that have log files you can view (Figure 1).
+
+![/var/log/][2]
+
+Figure 1: Our ls command reveals the logs available in /var/log/.
+
+[Used with permission][3]
+
+Say, for instance, you want to view the syslog log file. Issue the command less syslog and you can scroll through all of the gory details of that particular log. But what if the standard terminal isn't for you? What options do you have? Plenty. Let's take a look at a few such options.
+
+### Logs
+
+If you use the GNOME desktop (or others, as Logs can be installed on more than just GNOME), you have at your fingertips a log viewer that mainly just adds the slightest bit of GUI goodness over the log files to create something as simple as it is effective. Once installed (from the standard repositories), open Logs from the desktop menu, and you'll be treated to an interface (Figure 2) that allows you to select from various types of logs (Important, All, System, Security, and Hardware), as well as select a boot period (from the top center drop-down), and even search through all of the available logs. 
+
+![Logs tool][5]
+
+Figure 2: The GNOME Logs tool is one of the easiest GUI log viewers you'll find for Linux.
+
+[Used with permission][3]
+
+Logs is a great tool, especially if you're not looking for too many bells and whistles getting in the way of viewing crucial log entries, so you can troubleshoot your systems.
+
+### KSystemLog
+
+KSystemLog is to KDE what Logs is to GNOME, but with a few more features to add into the mix. Although both make it incredibly simple to view your system log files, only KSystemLog includes colorized log lines, tabbed viewing, copying log lines to the desktop clipboard, built-in capability for sending log messages directly to the system, reading detailed information for each log line, and more. KSystemLog views all the same logs found in GNOME Logs, only with a different layout.
+
+From the main window (Figure 3), you can view any of the different logs (System Log, Authentication Log, X.org Log, or Journald Log), search the logs, filter by Date, Host, Process, or Message, and select log priorities.
+
+![KSystemLog][7]
+
+Figure 3: The KSystemLog main window.
+
+[Used with permission][3]
+
+If you click on the Window menu, you can open a new tab, where you can select a different log/filter combination to view. From that same menu, you can even duplicate the current tab. If you want to manually add an entry to a log file, do the following:
+
+  1. Open KSystemLog.
+
+  2. Click File > Add Log Entry.
+
+  3. Create your log entry (Figure 4).
+
+  4. Click OK.
+
+
+![log entry][9]
+
+Figure 4: Creating a manual log entry with KSystemLog.
+
+[Used with permission][3]
+
+KSystemLog makes viewing logs in KDE an incredibly easy task.
+
+### Logwatch
+
+Logwatch isn't a fancy GUI tool. Instead, logwatch allows you to set up a logging system that will email you important alerts. You can have those alerts emailed via an SMTP server or you can simply view them on the local machine. Logwatch can be found in the standard repositories for almost every distribution, so installation can be done with a single command, like so:
+
+```
+sudo apt-get install logwatch
+```
+
+Or:
+
+```
+sudo dnf install logwatch
+```
+
+During the installation, you will be required to select the delivery method for alerts (Figure 5). If you opt for local mail delivery only, you'll need to install the mailutils app (so you can view mail locally, via the mail command).
+
+![ Logwatch][11]
+
+Figure 5: Configuring Logwatch alert sending method.
+
+[Used with permission][3]
+
+All Logwatch configurations are handled in a single file. To edit that file, issue the command sudo nano /usr/share/logwatch/default.conf/logwatch.conf. You'll want to edit the MailTo = option. If you're viewing this locally, set that to the Linux username you want the logs sent to (such as MailTo = jack). If you are sending these logs to an external email address, you'll also need to change the MailFrom = option to a legitimate email address. From within that same configuration file, you can also set the detail level and the range of logs to send. Save and close that file.
+
+Once configured, you can send your first mail with a command like:
+
+```
+logwatch --detail Med --mailto ADDRESS --service all --range today
+
+```
+
+Where ADDRESS is either the local user or an email address.
+
+For more information on using Logwatch, issue the command man logwatch. Read through the manual page to see the different options that can be used with the tool.
+
+### Rsyslog
+
+Rsyslog is a convenient way to send remote client logs to a centralized server. Say you have one Linux server you want to use to collect the logs from other Linux servers in your data center. With Rsyslog, this is easily done. Rsyslog has to be installed on all clients and the centralized server (by issuing a command like sudo apt-get install rsyslog). 
Once installed, create the /etc/rsyslog.d/server.conf file on the centralized server, with the contents: + +``` +# Provide UDP syslog reception +$ModLoad imudp +$UDPServerRun 514 + +# Provide TCP syslog reception +$ModLoad imtcp +$InputTCPServerRun 514 + +# Use custom filenaming scheme +$template FILENAME,"/var/log/remote/%HOSTNAME%.log" +*.* ?FILENAME + +$PreserveFQDN on + +``` + +Save and close that file. Now, on every client machine, create the file /etc/rsyslog.d/client.conf with the contents: + +``` +$PreserveFQDN on +$ActionQueueType LinkedList +$ActionQueueFileName srvrfwd +$ActionResumeRetryCount -1 +$ActionQueueSaveOnShutdown on +*.* @@SERVER_IP:514 + +``` + +Where SERVER_IP is the IP address of your centralized server. Save and close that file. Restart rsyslog on all machines with the command: + +``` +sudo systemctl restart rsyslog + +``` + +You can now view the centralized log files with the command (run on the centralized server): + +``` +tail -f /var/log/remote/*.log + +``` + +The tail command allows you to view those files as they are written to, in real time. You should see log entries appear that include the client hostname (Figure 6). + +![Rsyslog][13] + +Figure 6: Rsyslog showing entries for a connected client. + +[Used with permission][3] + +Rsyslog is a great tool for creating a single point of entry for viewing the logs of all of your Linux servers. + +### More where that came from + +This article only scratched the surface of the logging tools to be found on the Linux platform. And each of the above tools is capable of more than what is outlined here. However, this overview should give you a place to start your long day's journey into the Linux log file. + +Learn more about Linux through the free ["Introduction to Linux" ][14]course from The Linux Foundation and edX. 
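One last practical aside: for the times when only a raw shell is available, the command-line filters mentioned at the top go a long way. The sketch below generates its own sample file so it can run anywhere; on a real system you would point the same filters at a file such as /var/log/syslog:

```shell
# Build a small stand-in for a system log file.
printf '%s\n' \
  'Oct  5 10:01:12 host sshd[912]: Failed password for invalid user admin' \
  'Oct  5 10:01:30 host CRON[940]: (root) CMD (run-parts /etc/cron.hourly)' \
  'Oct  5 10:02:02 host sshd[951]: Accepted password for jack' > sample.log

# Show only the sshd lines, the way you would with: grep sshd /var/log/syslog
grep 'sshd' sample.log

# Count failed login attempts.
grep -c 'Failed password' sample.log
```

Those two filters, pointed at real logs, cover a surprising share of day-to-day troubleshooting.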
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2018/10/open-source-logging-tools-linux
+
+作者:[JACK WALLEN][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/jlwallen
+[1]: /files/images/logs1jpg
+[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_1.jpg?itok=8yO2q1rW (/var/log/)
+[3]: /licenses/category/used-permission
+[4]: /files/images/logs2jpg
+[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_2.jpg?itok=kF6V46ZB (Logs tool)
+[6]: /files/images/logs3jpg
+[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_3.jpg?itok=PhrIzI1N (KSystemLog)
+[8]: /files/images/logs4jpg
+[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_4.jpg?itok=OxsGJ-TJ (log entry)
+[10]: /files/images/logs5jpg
+[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_5.jpg?itok=GeAR551e (Logwatch)
+[12]: /files/images/logs6jpg
+[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_6.jpg?itok=ira8UZOr (Rsyslog)
+[14]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md b/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md
new file mode 100644
index 0000000000..26d1941cc1
--- /dev/null
+++ b/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md
@@ -0,0 +1,171 @@
+Terminalizer – A Tool To Record Your Terminal And Generate Animated Gif Images
+======
+This is a well-known topic for most of us, and I don't want to go into too much detail about it here. We have also written many articles on this topic.
+
+The script command is one of the standard ways to record Linux terminal sessions. Today we are going to discuss a similar kind of tool, called Terminalizer.
+
+This tool helps us record a user's terminal activity, and it also helps us identify other useful information from the output.
+
+### What Is Terminalizer
+
+Terminalizer allows users to record their terminal activity and generate animated gif images from it. It's a highly customizable CLI tool; users can share a recording file through a link to an online web player.
+
+**Suggested Read:**
+**(#)** [Script – A Simple Command To Record Your Terminal Session Activity][1]
+**(#)** [Automatically Record/Capture All Users Terminal Sessions Activity In Linux][2]
+**(#)** [Teleconsole – A Tool To Share Your Terminal Session Instantly To Anyone In Seconds][3]
+**(#)** [tmate – Instantly Share Your Terminal Session To Anyone In Seconds][4]
+**(#)** [Peek – Create a Animated GIF Recorder in Linux][5]
+**(#)** [Kgif – A Simple Shell Script to Create a Gif File from Active Window][6]
+**(#)** [Gifine – Quickly Create An Animated GIF Video In Ubuntu/Debian][7]
+
+There is no official distribution package for this utility, but we can easily install it using Node.js.
+
+### How To Install Node.js in Linux
+
+Node.js can be installed in multiple ways. Here, we are going to teach you the standard method.
+
+For Ubuntu/LinuxMint, use [APT-GET Command][8] or [APT Command][9] to install Node.js:
+
+```
+$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
+$ sudo apt-get install -y nodejs
+
+```
+
+For Debian, use [APT-GET Command][8] or [APT Command][9] to install Node.js:
+
+```
+# curl -sL https://deb.nodesource.com/setup_8.x | bash -
+# apt-get install -y nodejs
+
+```
+
+For **`RHEL/CentOS`**, use [YUM Command][10] to install Node.js:
+
+```
+$ sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
+$ sudo yum install epel-release
+$ sudo yum -y install nodejs

+```
+
+For **`Fedora`**, use [DNF Command][11] to install Node.js:
+
+```
+$ sudo dnf install nodejs
+
+```
+
+For **`Arch Linux`**, use [Pacman Command][12] to install Node.js:
+
+```
+$ sudo pacman -S nodejs npm
+
+```
+
+For **`openSUSE`**, use [Zypper Command][13] to install Node.js:
+
+```
+$ sudo zypper in nodejs6
+
+```
+
+### How to Install Terminalizer
+
+As you have already installed the prerequisite package, Node.js, it's time to install Terminalizer on your system. Simply run the npm command below to install Terminalizer.
+
+```
+$ sudo npm install -g terminalizer
+
+```
+
+### How to Use Terminalizer
+
+To record your session activity using Terminalizer, just run the following Terminalizer command. Once you have started the recording, play around in the terminal, and finally hit `CTRL+D` to exit and save the recording.
+
+```
+# terminalizer record 2g-session
+
+defaultConfigPath
+The recording session is started
+Press CTRL+D to exit and save the recording
+
+```
+
+This will save your recording session as a YAML file; in this case, the file is named 2g-session.yml.
+![][15]
+
+Just type a few commands to verify the recording, and finally hit `CTRL+D` to exit the current capture. When you hit `CTRL+D` in the terminal, you will get the following output.
+
+```
+# logout
+Successfully Recorded
+The recording data is saved into the file:
+/home/daygeek/2g-session.yml
+You can edit the file and even change the configurations.

+```
+
+![][16]
+
+### How to Play the Recorded File
+
+Use the following command format to play your recorded YAML file. Be sure to substitute your own recording file name for ours.
+
+```
+# terminalizer play 2g-session
+
+```
+
+Render a recording file as an animated gif image:
+
+```
+# terminalizer render 2g-session
+
+```
+
+`Note:` The two commands below are not yet implemented in the current version and will be available in the next version.
+
+If you would like to share your recording with others, upload the recording file to get a link for an online player, and then share that link.
+
+```
+terminalizer share 2g-session
+
+```
+
+Generate a web player for a recording file:
+
+```
+# terminalizer generate 2g-session
+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/terminalizer-a-tool-to-record-your-terminal-and-generate-animated-gif-images/
+
+作者:[Prakash Subramanian][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/prakash/
+[1]: https://www.2daygeek.com/script-command-record-save-your-terminal-session-activity-linux/
+[2]: https://www.2daygeek.com/automatically-record-all-users-terminal-sessions-activity-linux-script-command/
+[3]: https://www.2daygeek.com/teleconsole-share-terminal-session-instantly-to-anyone-in-seconds/
+[4]: https://www.2daygeek.com/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds/
+[5]: https://www.2daygeek.com/peek-create-animated-gif-screen-recorder-capture-arch-linux-mint-fedora-ubuntu/
+[6]: https://www.2daygeek.com/kgif-create-animated-gif-file-active-window-screen-recorder-capture-arch-linux-mint-fedora-ubuntu-debian-opensuse-centos/
+[7]: https://www.2daygeek.com/gifine-create-animated-gif-vedio-recorder-linux-mint-debian-ubuntu/
+[8]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[9]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[11]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[13]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+[14]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[15]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-record-2g-session-1.gif
+[16]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-play-2g-session.gif
diff --git a/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md b/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md
new file mode 100644
index 0000000000..a9b20ac54d
--- /dev/null
+++ b/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md
@@ -0,0 +1,110 @@
+KeeWeb – An Open Source, Cross Platform Password Manager
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/keeweb-720x340.png)
+
+If you've been using the internet for any amount of time, chances are, you have a lot of accounts on a lot of websites. All of those accounts must have passwords, and you have to remember all those passwords. Either that, or write them down somewhere. Writing down passwords on paper may not be secure, and remembering them won't be practically possible if you have more than a few passwords. This is why password managers have exploded in popularity in the last few years. A password manager is like a central repository where you store all your passwords for all your accounts, and you lock it with a master password. With this approach, the only thing you need to remember is the master password.
+
+**KeePass** is one such open source password manager. KeePass has an official client, but it's pretty barebones. But there are a lot of other apps, both for your computer and for your phone, that are compatible with the KeePass file format for storing encrypted passwords. One such app is **KeeWeb**.
+
+KeeWeb is an open source, cross platform password manager with features like cloud sync, keyboard shortcuts, and plugin support. KeeWeb uses Electron, which means it runs on Windows, Linux, and Mac OS.
+
+### Using KeeWeb Password Manager
+
+When it comes to using KeeWeb, you actually have two options. You can either use the KeeWeb webapp on the fly, without installing anything on your system, or install the KeeWeb client on your local system.
+
+**Using the KeeWeb webapp**
+
+If you don't want to bother installing a desktop app, you can just go to [**https://app.keeweb.info/**][1] and use it as a password manager.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-webapp.png)
+
+It has all the features of the desktop app. Obviously, this requires you to be online when using the app.
+
+**Installing KeeWeb on your Desktop**
+
+If you like the comfort and offline availability of using a desktop app, you can also install it on your desktop.
+
+If you use Ubuntu/Debian, you can just go to the [**releases page**][2] and download the latest KeeWeb **.deb** file, which you can install via this command:
+
+```
+$ sudo dpkg -i KeeWeb-1.6.3.linux.x64.deb
+
+```
+
+If you're on Arch, it is available in the [**AUR**][3], so you can install it using an AUR helper program like [**Yay**][4]:
+
+```
+$ yay -S keeweb
+
+```
+
+Once installed, launch it from the menu or application launcher. This is what the default KeeWeb interface looks like:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-desktop-client.png)
+
+### General Layout
+
+KeeWeb basically shows a list of all your passwords, along with all your tags, to the left. Clicking on a tag will filter the list to only passwords with that tag. To the right, all the fields for the selected account are shown. You can set username, password, website, or just add a custom note. You can even create your own fields and mark them as secure fields, which is great when storing things like credit card information. You can copy passwords by just clicking on them. KeeWeb also shows the date when an account was created and modified. Deleted passwords are kept in the trash, where they can be restored or permanently deleted.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-general-layout.png)
+
+### KeeWeb Features
+
+**Cloud Sync**
+
+One of the main features of KeeWeb is the support for a wide variety of remote locations and cloud services.
+Other than loading local files, you can open files from:
+
+  1. WebDAV Servers
+  2. Google Drive
+  3. Dropbox
+  4. OneDrive
+
+
+
+This means that if you use multiple computers, you can synchronize the password files between them, so you don't have to worry about not having all the passwords available on all devices.
+
+**Password Generator**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-password-generator.png)
+
+Along with encrypting your passwords, it's also important to create new, strong passwords for every single account. This means that if one of your accounts gets hacked, the attacker won't be able to get into your other accounts using the same password.
+
+To achieve this, KeeWeb has a built-in password generator that lets you generate a custom password of a specific length, including specific types of characters.
+
+**Plugins**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-plugins.png)
+
+You can extend KeeWeb's functionality with plugins. Some of these plugins are translations for other languages, while others add new functionality, like checking for exposed passwords.
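In essence, a generator like the one described above draws random bytes and keeps only characters from an allowed set until the requested length is reached. The one-liner below is purely an illustration of that idea, not how KeeWeb itself does it:

```shell
# Generate a 16-character password from a fixed, shell-safe character set.
# head -c 512 grabs plenty of random bytes; tr -dc discards anything outside the set.
pw=$(head -c 512 /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9@%+=' | head -c 16)
echo "$pw"
```

Adjusting the character set given to `tr` and the final `head -c` count mirrors the length and character-type options in the GUI.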
+ +**Local Backups** + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-backup.png) + +Regardless of where your password file is stored, you should probably keep local backups of the file on your computer. Luckily, KeeWeb has this feature built-in. You can backup to a specific path, and set it to backup periodically, or just whenever the file is changed. + + +### Verdict + +I have actually been using KeeWeb for several years now. It completely changed the way I store my passwords. The cloud sync is basically the feature that makes it a done deal for me. I don’t have to worry about keeping multiple unsynchronized files on multiple devices. If you want a great looking password manager that has cloud sync, KeeWeb is something you should look at. + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/ + +作者:[EDITOR][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[1]: https://app.keeweb.info/ +[2]: https://github.com/keeweb/keeweb/releases/latest +[3]: https://aur.archlinux.org/packages/keeweb/ +[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ diff --git a/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md b/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md new file mode 100644 index 0000000000..16930083fd --- /dev/null +++ b/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md @@ -0,0 +1,105 @@ +translating by hopefully2333 + +Play Windows games on Fedora with Steam Play and Proton +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/09/steam-proton-816x345.jpg) + +Some weeks ago, Steam 
[announced][1] a new addition to Steam Play with Linux support for Windows games using Proton, a fork of WINE. This capability is still in beta, and not all games work. Here are some more details about Steam and Proton.
+
+According to the Steam website, there are new features in the beta release:
+
+  * Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support.
+  * DirectX 11 and 12 implementations are now based on Vulkan, which improves game compatibility and reduces performance impact.
+  * Fullscreen support has been improved. Fullscreen games seamlessly stretch to the desired display without interfering with the native monitor resolution or requiring the use of a virtual desktop.
+  * Improved game controller support. Games automatically recognize all controllers supported by Steam. Expect more out-of-the-box controller compatibility than even the original version of the game.
+  * Performance for multi-threaded games has been greatly improved compared to vanilla WINE.
+
+
+
+### Installation
+
+If you're interested in trying out Steam with Proton, just follow these easy steps. (Note that you can ignore the first steps to enable the Steam Beta if you have the [latest updated version of Steam installed][2]. In that case, you no longer need Steam Beta to use Proton.)
+
+Open up Steam and log in to your account. This example screenshot shows support for only 22 games before enabling Proton.
+
+![][3]
+
+Now click on the Steam option at the top of the client. This displays a drop-down menu. Then select Settings.
+
+![][4]
+
+Now the settings window pops up. Select the Account option, and next to Beta participation, click on Change.
+
+![][5]
+
+Now change None to Steam Beta Update.
+
+![][6]
+
+Click on OK and a prompt asks you to restart.
+
+![][7]
+
+Let Steam download the update. This can take a while depending on your internet speed and computer resources. 
+ +![][8] + +After restarting, go back to the Settings window. This time you’ll see a new option. Make sure the check boxes for Enable Steam Play for supported titles, Enable Steam Play for all titles and Use this tool instead of game-specific selections from Steam are enabled. The compatibility tool should be Proton. + +![][9] + +The Steam client asks you to restart. Do so, and once you log back into your Steam account, your game library for Linux should be extended. + +![][10] + +### Installing a Windows game using Steam Play + +Now that you have Proton enabled, install a game. Select the title you want and you’ll find the process is similar to installing a normal game on Steam, as shown in these screenshots. + +![][11] + +![][12] + +![][13] + +![][14] + +After the game is done downloading and installing, you can play it. + +![][15] + +![][16] + +Some games may be affected by the beta nature of Proton. The game in this example, Chantelise, had no audio and a low frame rate. Keep in mind this capability is still in beta and Fedora is not responsible for results. If you’d like to read further, the community has created a [Google doc][17] with a list of games that have been tested. + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/play-windows-games-steam-play-proton/ + +作者:[Francisco J. 
Vergara Torres][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/patxi/ +[1]: https://steamcommunity.com/games/221410/announcements/detail/1696055855739350561 +[2]: https://fedoramagazine.org/third-party-repositories-fedora/ +[3]: https://fedoramagazine.org/wp-content/uploads/2018/09/listOfGamesLinux-300x197.png +[4]: https://fedoramagazine.org/wp-content/uploads/2018/09/1-300x169.png +[5]: https://fedoramagazine.org/wp-content/uploads/2018/09/2-300x196.png +[6]: https://fedoramagazine.org/wp-content/uploads/2018/09/4-300x272.png +[7]: https://fedoramagazine.org/wp-content/uploads/2018/09/6-300x237.png +[8]: https://fedoramagazine.org/wp-content/uploads/2018/09/7-300x126.png +[9]: https://fedoramagazine.org/wp-content/uploads/2018/09/10-300x237.png +[10]: https://fedoramagazine.org/wp-content/uploads/2018/09/12-300x196.png +[11]: https://fedoramagazine.org/wp-content/uploads/2018/09/13-300x196.png +[12]: https://fedoramagazine.org/wp-content/uploads/2018/09/14-300x195.png +[13]: https://fedoramagazine.org/wp-content/uploads/2018/09/15-300x196.png +[14]: https://fedoramagazine.org/wp-content/uploads/2018/09/16-300x195.png +[15]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-14-59-300x169.png +[16]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-19-34-300x169.png +[17]: https://docs.google.com/spreadsheets/d/1DcZZQ4HL_Ol969UbXJmFG8TzOHNnHoj8Q1f8DIFe8-8/edit#gid=1003113831 diff --git a/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md new file mode 100644 index 0000000000..27616a9f6e --- /dev/null +++ b/sources/tech/20181008 Taking notes with Laverna, a web-based 
information organizer.md @@ -0,0 +1,128 @@ +Taking notes with Laverna, a web-based information organizer +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/notebook-writing-pen.jpg?itok=uA3dCfu_) + +I don’t know anyone who doesn’t take notes. Most of the people I know use an online note-taking application like Evernote, Simplenote, or Google Keep. + +All of those are good tools, but they’re proprietary. And you have to wonder about the privacy of your information—especially in light of [Evernote’s great privacy flip-flop of 2016][1]. If you want more control over your notes and your data, you need to turn to an open source tool—preferably one that you can host yourself. + +And there are a number of good [open source alternatives to Evernote][2]. One of these is Laverna. Let’s take a look at it. + +### Getting Laverna + +You can [host Laverna yourself][3] or use the [web version][4] + +Since I have nowhere to host the application, I’ll focus here on using the web version of Laverna. Aside from the installation and setting up storage (more on that below), I’m told that the experience with a self-hosted version of Laverna is the same. + +### Setting up Laverna + +To start using Laverna right away, click the **Start using now** button on the front page of [Laverna.cc][5]. + +On the welcome screen, click **Next**. You’ll be asked to enter an encryption password to secure your notes and get to them when you need to. You’ll also be asked to choose a way to synchronize your notes. I’ll discuss synchronization in a moment, so just enter a password and click **Next**. + +![](https://opensource.com/sites/default/files/uploads/laverna-set-password.png) + +When you log in, you'll see a blank canvas: + +![](https://opensource.com/sites/default/files/uploads/laverna-main-window.png) + +### Storing your notes + +Before diving into how to use Laverna, let’s walk through how to store your notes. 
+ +Out of the box, Laverna stores your notes in your browser’s cache. The problem with that is that when you clear the cache, you lose your notes. You can also store your notes using: + + * Dropbox, a popular and proprietary web-based file syncing and storage service + * [remoteStorage][6], which offers a way for web applications to store information in the cloud. + + + +Using Dropbox is convenient, but it’s proprietary. There are also concerns about [privacy and surveillance][7]. Laverna encrypts your notes before saving them, but not all encryption is foolproof. Even if you don’t have anything illegal or sensitive in your notes, they’re no one’s business but your own. + +remoteStorage, on the other hand, is kind of techie to set up. There are a few hosted storage services out there. I use [5apps][8]. + +To change how Laverna stores your notes, click the hamburger menu in the top-left corner. Click **Settings** and then **Sync**. + +![](https://opensource.com/sites/default/files/uploads/laverna-sync.png) + +Select the service you want to use, then click **Save**. After that, click the left arrow in the top-left corner. You’ll be asked to authorize Laverna with the service you chose. + +### Using Laverna + +With that out of the way, let’s get down to using Laverna. Create a new note by clicking the **New Note** icon, which opens the note editor: + +![](https://opensource.com/sites/default/files/uploads/laverna-new-note.png) + +Type a title for your note, then start typing the note in the left pane of the editor. The right pane displays a preview of your note: + +![](https://opensource.com/sites/default/files/uploads/laverna-writing-note.png) + +You can format your notes using Markdown; add formatting using your keyboard or the toolbar at the top of the window. + +You can also embed an image or file from your computer into a note, or link to one on the web. When you embed an image, it’s stored with your note. + +When you’re done, click **Save**.
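Since Laverna notes are plain Markdown, a minimal note might look like the following (an illustrative sketch, not an example from Laverna's documentation):

```
# Meeting notes: 2018-10-08

Follow-ups from today:

* Send the draft post to the **editor** by Friday
* Read up on [remoteStorage](https://remotestorage.io/)
```

As you type something like this in the left pane, the right pane shows the rendered preview.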
+ +### Organizing your notes + +Like some other note-taking tools, Laverna lists the last note that you created or edited at the top. If you have a lot of notes, it can take a bit of work to find the one you're looking for. + +To better organize your notes, you can group them into notebooks, so you can quickly filter them by topic or grouping. + +When you’re creating or editing a note, you can select a notebook from the **Select notebook** list in the top-left corner of the window. If you don’t have any notebooks, select **Add a new notebook** from the list and type the notebook’s name. + +You can also make that notebook a child of another notebook. Let’s say, for example, you maintain three blogs. You can create a notebook called **Blog Post Notes** and create a child notebook for each blog. + +To filter your notes by notebook, click the hamburger menu, followed by the name of a notebook. Only the notes in the notebook you choose will appear in the list. + +![](https://opensource.com/sites/default/files/uploads/laverna-notebook.png) + +### Using Laverna across devices + +I use Laverna on my laptop and on an eight-inch tablet running [LineageOS][9]. Getting the two devices to use the same storage and display the same notes takes a little work. + +First, you’ll need to export your settings. Log into wherever you’re using Laverna and click the hamburger menu. Click **Settings**, then **Import & Export**. Under **Settings**, click **Export settings**. Laverna saves a file named laverna-settings.json to your device. + +Copy that file to the other device or devices on which you want to use Laverna. You can do that by emailing it to yourself or by syncing the file across devices using an application like [ownCloud][10] or [Nextcloud][11]. + +On the other device, click **Import** on the splash screen. Otherwise, click the hamburger menu and then **Settings > Import & Export**. Click **Import settings**.
Find the JSON file with your settings, click **Open** and then **Save**. + +Laverna will ask you to: + + * Log back in using your password. + * Register with the storage service you’re using. + + + +Repeat this process for each device that you want to use. It’s cumbersome, I know. I’ve done it. You should only need to do it once per device, though. + +### Final thoughts + +Once you set up Laverna, it’s easy to use and has just the right features for what I need to do. I’m hoping that the developers can expand the storage and syncing options to include open source applications like Nextcloud and ownCloud. + +While Laverna doesn’t have all the bells and whistles of a note-taking application like Evernote, it does a great job of letting you take and organize your notes. Laverna’s open source license and Markdown support are two additional great reasons to use it. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/taking-notes-laverna + +作者:[Scott Nesbitt][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[1]: https://blog.evernote.com/blog/2016/12/15/evernote-revisits-privacy-policy/ +[2]: https://opensource.com/life/16/8/open-source-alternatives-evernote +[3]: https://github.com/Laverna/laverna +[4]: https://laverna.cc/ +[5]: http://laverna.cc/ +[6]: https://remotestorage.io/ +[7]: https://www.zdnet.com/article/dropbox-faces-questions-over-claims-of-improper-data-sharing/ +[8]: https://5apps.com/storage/beta +[9]: https://lineageos.org/ +[10]: https://owncloud.com/ +[11]: https://nextcloud.com/ diff --git a/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md b/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md new file mode 100644 index 0000000000..c119f69ebf --- /dev/null +++ b/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux
System From Terminal.md @@ -0,0 +1,331 @@ +translating---cyleft +==== + +6 Commands To Shutdown And Reboot The Linux System From Terminal +====== +Linux administrators perform many tasks in their routine work, and shutting down and rebooting the system is one of them. + +It’s one of the riskier tasks, because sometimes the system won’t come back up and the administrator then has to spend extra time troubleshooting it. + +These tasks can be performed through the CLI in Linux. Most of the time, Linux administrators prefer to perform these kinds of tasks via the CLI because that is what they are familiar with. + +A few commands are available in Linux to perform these tasks, and the user needs to choose the appropriate command based on the requirement. + +Each of these commands has its own features for the Linux admin to use. + +**Suggested Read :** +**(#)** [11 Methods To Find System/Server Uptime In Linux][1] +**(#)** [Tuptime – A Tool To Report The Historical And Statistical Running Time Of Linux System][2] + +When a shutdown or reboot is initiated, all logged-in users and processes are notified. The system also refuses any new logins if the time argument is used. + +I suggest you double-check before performing this action, because there are a few prerequisites to follow to make sure everything goes smoothly. + +Those steps are listed below. + + * Make sure you have console access to troubleshoot further in case any issues arise: VMware access for VMs and IPMI/iLO/iDRAC access for physical servers.
+ * Create a ticket as per your company procedure (either an Incident or a Change ticket) and get approval + * Back up the important configuration files and copy them to another server for safety + * Verify the log files (perform the pre-check) + * Communicate your activity to dependent teams such as DBA, Application, etc. + * Ask them to bring down their database or application services and get a confirmation from them. + * Validate the same from your end using the appropriate command to double-confirm it. + * Finally, reboot the system + * Verify the log files (perform the post-check). If everything is good, move to the next step; if you find something wrong, troubleshoot accordingly. + * Once it’s back up and running, ask the dependent teams to bring up their applications. + * Monitor for some time, and communicate back to them that everything is working as expected. + + + +This task can be performed using the following commands. + + * **`shutdown Command:`** The shutdown command is used to halt, power off, or reboot the machine. + * **`halt Command:`** The halt command is used to halt, power off, or reboot the machine. + * **`poweroff Command:`** The poweroff command is used to halt, power off, or reboot the machine. + * **`reboot Command:`** The reboot command is used to halt, power off, or reboot the machine. + * **`init Command:`** init (short for initialization) is the first process started during booting of the computer system. + * **`systemctl Command:`** systemd is a system and service manager for Linux operating systems. + + + +### Method-1: How To Shutdown And Reboot The Linux System Using Shutdown Command + +The shutdown command is used to power off or reboot a remote Linux machine or the local host. It offers multiple options to perform this task effectively. If the time argument is used, the /run/nologin file is created 5 minutes before the system goes down to ensure that no further logins are allowed.
+ +The general syntax is + +``` +# shutdown [OPTION] [TIME] [MESSAGE] + +``` + +Run the below command to shut down a Linux machine immediately. It will kill all processes right away and shut down the system. + +``` +# shutdown -h now + +``` + + * **`-h:`** Equivalent to --poweroff, unless --halt is specified. + + + +Alternatively, we can use the shutdown command with the `halt` option to bring down the machine immediately. + +``` +# shutdown --halt now +or +# shutdown -H now + +``` + + * **`-H, --halt:`** Halt the machine. + + + +Alternatively, we can use the shutdown command with the `poweroff` option to bring down the machine immediately. + +``` +# shutdown --poweroff now +or +# shutdown -P now + +``` + + * **`-P, --poweroff:`** Power off the machine (the default). + + + +If you run the below commands without a time parameter, the action is scheduled for one minute later. + +``` +# shutdown -h +Shutdown scheduled for Mon 2018-10-08 06:42:31 EDT, use 'shutdown -c' to cancel. + +[email protected]# +Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT): + +The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT! + +``` + +All other logged-in users can see a broadcast message in their terminal like below. + +``` +[[email protected] ~]$ +Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT): + +The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT! + +``` + +For the halt option: + +``` +# shutdown -H +Shutdown scheduled for Mon 2018-10-08 06:37:53 EDT, use 'shutdown -c' to cancel. + +[email protected]# +Broadcast message from [email protected] (Mon 2018-10-08 06:36:53 EDT): + +The system is going down for system halt at Mon 2018-10-08 06:37:53 EDT!
+ +``` + +For the poweroff option: + +``` +# shutdown -P +Shutdown scheduled for Mon 2018-10-08 06:40:07 EDT, use 'shutdown -c' to cancel. + +[email protected]# +Broadcast message from [email protected] (Mon 2018-10-08 06:39:07 EDT): + +The system is going down for power-off at Mon 2018-10-08 06:40:07 EDT! + +``` + +This can be cancelled by running `shutdown -c` in your terminal. + +``` +# shutdown -c + +Broadcast message from [email protected] (Mon 2018-10-08 06:39:09 EDT): + +The system shutdown has been cancelled at Mon 2018-10-08 06:40:09 EDT! + +``` + +All other logged-in users can see a broadcast message in their terminal like below. + +``` +[[email protected] ~]$ +Broadcast message from [email protected] (Mon 2018-10-08 06:41:35 EDT): + +The system shutdown has been cancelled at Mon 2018-10-08 06:42:35 EDT! + +``` + +Add a time parameter if you want to perform the shutdown or reboot in `N` minutes. You can also broadcast a custom message to logged-in users. In this example, we are rebooting the machine in 5 minutes. + +``` +# shutdown -r +5 "To activate the latest Kernel" +Shutdown scheduled for Mon 2018-10-08 07:13:16 EDT, use 'shutdown -c' to cancel. + +[[email protected] ~]# +Broadcast message from [email protected] (Mon 2018-10-08 07:08:16 EDT): + +To activate the latest Kernel +The system is going down for reboot at Mon 2018-10-08 07:13:16 EDT! + +``` + +Run the below command to reboot a Linux machine immediately. It will kill all processes right away and reboot the system. + +``` +# shutdown -r now + +``` + + * **`-r, --reboot:`** Reboot the machine. + + + +### Method-2: How To Shutdown And Reboot The Linux System Using reboot Command + +The reboot command is used to power off or reboot a remote Linux machine or the local host. It comes with two useful options. + +It performs a graceful shutdown and restart of the machine (similar to the restart option available in your system menu).
+ +Run the “reboot” command without any option to reboot the Linux machine. + +``` +# reboot + +``` + +Run the “reboot” command with the `-p` option to power off or shut down the Linux machine. + +``` +# reboot -p + +``` + + * **`-p, --poweroff:`** Power off the machine; either the halt or the poweroff command is invoked. + + + +Run the “reboot” command with the `-f` option to forcefully reboot the Linux machine (similar to pressing the machine’s power button). + +``` +# reboot -f + +``` + + * **`-f, --force:`** Force immediate halt, power-off, or reboot. + + + +### Method-3: How To Shutdown And Reboot The Linux System Using init Command + +init (short for initialization) is the first process started during booting of the computer system. + +It checks the /etc/inittab file to decide the Linux runlevel, and it also allows users to shut down and reboot the Linux machine. There are seven runlevels, from zero to six. + +**Suggested Read :** +**(#)** [How To Check All Running Services In Linux][3] + +Run the below init command to shut down the system. + +``` +# init 0 + +``` + + * **`0:`** Halt – shuts down the system. + + + +Run the below init command to reboot the system. + +``` +# init 6 + +``` + + * **`6:`** Reboot – reboots the system. + + + +### Method-4: How To Shutdown The Linux System Using halt Command + +The halt command is used to power off or shut down a remote Linux machine or the local host. +It terminates all processes and shuts down the CPU. + +``` +# halt + +``` + +### Method-5: How To Shutdown The Linux System Using poweroff Command + +The poweroff command is used to power off or shut down a remote Linux machine or the local host. poweroff is exactly like halt, but it also turns off the unit itself (lights and everything on a PC). It sends an ACPI command to the board, and then to the PSU, to cut the power.
+ +``` +# poweroff + +``` + +### Method-6: How To Shutdown And Reboot The Linux System Using systemctl Command + +systemd is a new init system and system manager that has been adopted by all the major Linux distributions in place of the traditional SysV init system. + +systemd is compatible with SysV and LSB init scripts. It can work as a drop-in replacement for the sysvinit system. systemd is the first process started by the kernel and holds PID 1. + +**Suggested Read :** +**(#)** [chkservice – A Tool For Managing Systemd Units From Linux Terminal][4] + +It is the parent process of everything else, and Fedora 15 was the first distribution to adopt systemd instead of Upstart. + +systemctl is a command-line utility and the primary tool for managing systemd daemons/services (start, restart, stop, enable, disable, reload & status). + +systemd uses .service files instead of the bash scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups, and you can see the system hierarchy by exploring the /sys/fs/cgroup/systemd directory.
+ +``` +# systemctl halt +# systemctl poweroff +# systemctl reboot +# systemctl suspend +# systemctl hibernate + +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/6-commands-to-shutdown-halt-poweroff-reboot-the-linux-system/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/ +[2]: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/ +[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/ +[4]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/ diff --git a/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md b/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md new file mode 100644 index 0000000000..8e9abf4b52 --- /dev/null +++ b/sources/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md @@ -0,0 +1,72 @@ +translating---geekpi + +Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool +====== +**Mathpix is a nifty little tool that allows you to take screenshots of complex mathematical equations and instantly converts them into editable LaTeX text.** + +![Mathpix converts math equations images into LaTeX][1] + +[LaTeX editors][2] are excellent when it comes to writing academic and scientific documentation. + +There is a steep learning curve involved, of course. And this learning curve becomes steeper if you have to write complex mathematical equations.
+ +[Mathpix][3] is a nifty little tool that helps you in this regard. + +Suppose you are reading a document that has mathematical equations. If you want to use those equations in your [LaTeX document][4], you need to use your ninja LaTeX skills and plenty of time. + +But Mathpix solves this problem for you. With Mathpix, you take a screenshot of a mathematical equation, and it will instantly give you the LaTeX code. You can then use this code in your [favorite LaTeX editor][2]. + +See Mathpix in action in the video below: + + + +[Video credit][5]: Reddit User [kaitlinmcunningham][6] + +Isn’t it super-cool? I guess the hardest part of writing LaTeX documents is those complicated equations. For lazy bums like me, Mathpix is a godsend. + +### Getting Mathpix + +Mathpix is available for Linux, macOS, Windows and iOS. There is no Android app for the moment. + +Note: Mathpix is a free-to-use tool, but it’s not open source. + +On Linux, [Mathpix is available as a Snap package][7], which means that [if you have Snap support enabled on your Linux distribution][8], you can install Mathpix with this simple command: + +``` +sudo snap install mathpix-snipping-tool + +``` + +Using Mathpix is simple. Once installed, open the tool. You’ll find it in the top panel. You can start taking the screenshot with Mathpix using the keyboard shortcut Ctrl+Alt+M. + +It will instantly translate the image of the equation into LaTeX code. The code will be copied to the clipboard, and you can then paste it into a LaTeX editor. + +Mathpix’s optical character recognition technology is [being used][9] by a number of companies like [WolframAlpha][10], Microsoft, Google, etc. to improve their tools’ image recognition capability while dealing with math symbols. + +Altogether, it’s an awesome tool for students and academics. It’s free to use, and I really wish it were an open source tool. We cannot get everything in life, can we?
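To give a sense of what the tool saves you from typing, here is the kind of LaTeX source a screenshot of the quadratic formula corresponds to (an illustrative hand-typed example, not actual Mathpix output):

```
% The quadratic formula, as you would have to type it by hand:
x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}
```

Typing that once is easy enough; transcribing a page full of integrals and matrices is exactly the tedium Mathpix removes.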
+ +Do you use Mathpix or some other similar tool while dealing with mathematical symbols in LaTeX? What do you think of Mathpix? Share your views with us in the comment section. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/mathpix/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/mathpix-converts-equations-into-latex.jpeg +[2]: https://itsfoss.com/latex-editors-linux/ +[3]: https://mathpix.com/ +[4]: https://www.latex-project.org/ +[5]: https://g.redditmedia.com/b-GL1rQwNezQjGvdlov9U_6vDwb1A7kEwGHYcQ1Ogtg.gif?fm=mp4&mp4-fragmented=false&s=39fd1816b43e2b544986d629f75a7a8e +[6]: https://www.reddit.com/user/kaitlinmcunningham +[7]: https://snapcraft.io/mathpix-snipping-tool +[8]: https://itsfoss.com/install-snap-linux/ +[9]: https://mathpix.com/api.html +[10]: https://www.wolframalpha.com/ diff --git a/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md b/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md new file mode 100644 index 0000000000..cb93af4b92 --- /dev/null +++ b/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md @@ -0,0 +1,199 @@ +Translating by way-ww +How To Create And Maintain Your Own Man Pages +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Um-pages-1-720x340.png) + +We have already discussed a few [**good alternatives to Man pages**][1]. Those alternatives are mainly used for learning concise Linux command examples without having to go through the comprehensive man pages.
If you’re looking for a quick and dirty way to learn a Linux command, those alternatives are worth trying. Now, you might be thinking – how can I create my own man-like help pages for a Linux command? This is where **“Um”** comes in handy. Um is a command line utility used to easily create and maintain your own man pages that contain only what you’ve learned about a command so far. + +By creating your own alternative to man pages, you can avoid lots of unnecessary, comprehensive details in a man page and include only what is necessary to keep in mind. If you ever wanted to create your own set of man-like pages, Um will definitely help. In this brief tutorial, we will see how to install the “Um” command line utility and how to create our own man pages. + +### Installing Um + +Um is available for Linux and Mac OS. At present, it can only be installed using the **Linuxbrew** package manager on Linux systems. Refer to the following link if you haven’t installed Linuxbrew yet. + +Once Linuxbrew is installed, run the following command to install the Um utility. + +``` +$ brew install sinclairtarget/wst/um + +``` + +If you see output something like the below, congratulations! Um has been installed and is ready to use. + +``` +[...]
+==> Installing sinclairtarget/wst/um +==> Downloading https://github.com/sinclairtarget/um/archive/4.0.0.tar.gz +==> Downloading from https://codeload.github.com/sinclairtarget/um/tar.gz/4.0.0 +-=#=# # # +==> Downloading https://rubygems.org/gems/kramdown-1.17.0.gem +######################################################################## 100.0% +==> gem install /home/sk/.cache/Homebrew/downloads/d0a5d978120a791d9c5965fc103866815189a4e3939 +==> Caveats +Bash completion has been installed to: +/home/linuxbrew/.linuxbrew/etc/bash_completion.d +==> Summary +🍺 /home/linuxbrew/.linuxbrew/Cellar/um/4.0.0: 714 files, 1.3MB, built in 35 seconds +==> Caveats +==> openssl +A CA file has been bootstrapped using certificates from the SystemRoots +keychain. To add additional certificates (e.g. the certificates added in +the System keychain), place .pem files in +/home/linuxbrew/.linuxbrew/etc/openssl/certs + +and run +/home/linuxbrew/.linuxbrew/opt/openssl/bin/c_rehash +==> ruby +Emacs Lisp files have been installed to: +/home/linuxbrew/.linuxbrew/share/emacs/site-lisp/ruby +==> um +Bash completion has been installed to: +/home/linuxbrew/.linuxbrew/etc/bash_completion.d + +``` + +Before you start making your own man pages, you need to enable bash completion for Um. + +To do so, open your **~/.bash_profile** file: + +``` +$ nano ~/.bash_profile + +``` + +Then, add the following lines to it: + +``` +if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then + . $(brew --prefix)/etc/bash_completion.d/um-completion.sh +fi + +``` + +Save and close the file. Run the following command to apply the changes. + +``` +$ source ~/.bash_profile + +``` + +All done. Let us go ahead and create our first man page. + +### Create And Maintain Your Own Man Pages + +Let us say you want to create your own man page for the “dpkg” command.
To do so, run: + +``` +$ um edit dpkg + +``` + +The above command will open a markdown template in your default editor: + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-dpkg-man-page.png) + +My default editor is Vi, so the above command opens the template in the Vi editor. Now, start adding everything you want to remember about the “dpkg” command in this template. + +Here is a sample: + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Edit-dpkg-man-page.png) + +As you see in the above output, I have added a synopsis, a description and two options for the dpkg command. You can add as many sections as you want in the man page. Make sure you give each section a proper, easily understandable title. Once done, save and quit the file (if you use the Vi editor, press the **ESC** key and type **:wq**). + +Finally, view your newly created man page using the command: + +``` +$ um dpkg + +``` + +![](http://www.ostechnix.com/wp-content/uploads/2018/10/View-dpkg-man-page.png) + +As you can see, the dpkg man page looks exactly like the official man pages. If you want to edit and/or add more details to a man page, run the same command again and add the details. + +``` +$ um edit dpkg + +``` + +To view the list of newly created man pages using Um, run: + +``` +$ um list + +``` + +All man pages will be saved under a directory named **`.um`** in your home directory. + +If you don’t want a particular page, simply delete it as shown below. + +``` +$ um rm dpkg + +``` + +To view the help section and all available general options, run: + +``` +$ um --help +usage: um + um [ARGS...] + +The first form is equivalent to `um read `. + +Subcommands: + um (l)ist List the available pages for the current topic. + um (r)ead Read the given page under the current topic. + um (e)dit Create or edit the given page under the current topic. + um rm Remove the given page. + um (t)opic [topic] Get or set the current topic. + um topics List all topics.
+ um (c)onfig [config key] Display configuration environment. + um (h)elp [sub-command] Display this help message, or the help message for a sub-command. + +``` + +### Configure Um + +To view the current configuration, run: + +``` +$ um config +Options prefixed by '*' are set in /home/sk/.um/umconfig. +editor = vi +pager = less +pages_directory = /home/sk/.um/pages +default_topic = shell +pages_ext = .md + +``` + +In this file, you can edit and change the values of the **pager**, **editor**, **default_topic**, **pages_directory**, and **pages_ext** options as you wish. For example, if you want to save the newly created Um pages in your **[Dropbox][2]** folder, simply change the value of the **pages_directory** directive and point it to the Dropbox folder in the **~/.um/umconfig** file. + +``` +pages_directory = /Users/myusername/Dropbox/um + +``` + +And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned! + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/ +[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ diff --git a/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md b/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md new file mode 100644 index 0000000000..f933449461 --- /dev/null +++ b/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md @@ -0,0 +1,163 @@ +5 alerting and visualization tools for sysadmins +====== +These open source tools help users understand
system behavior and output, and provide alerts for potential problems. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI-) + +You probably know (or can guess) what alerting and visualization tools are used for. Why would we discuss them as observability tools, especially since some systems include visualization as a feature? + +Observability comes from control theory and describes our ability to understand a system based on its inputs and outputs. This article focuses on the output component of observability. + +Alerting and visualization tools analyze the outputs of other systems and provide structured representations of these outputs. Alerts are basically a synthesized understanding of negative system outputs, and visualizations are disambiguated structured representations that facilitate user comprehension. + +### Common types of alerts and visualizations + +#### Alerts + +Let’s first cover what alerts are _not_. Alerts should not be sent if the human responder can’t do anything about the problem. This includes alerts that are sent to multiple individuals with only a few who can respond, or situations where every anomaly in the system triggers an alert. This leads to alert fatigue and receivers ignoring all alerts within a specific medium until the system escalates to a medium that isn’t already saturated. + +For example, if an operator receives hundreds of emails a day from the alerting system, that operator will soon ignore all emails from the alerting system. The operator will respond to a real incident only when he or she is experiencing the problem, emailed by a customer, or called by the boss. In this case, alerts have lost their meaning and usefulness. + +Alerts are not a constant stream of information or a status update. 
They are meant to convey a problem from which the system can’t automatically recover, and they are sent only to the individual most likely to be able to recover the system. Everything that falls outside this definition isn’t an alert and will only damage your employees and company culture. + +Everyone has a different set of alert types, so I won't discuss things like priority levels (P1-P5) or models that use words like "Informational," "Warning," and "Critical." Instead, I’ll describe the generic categories emergent in complex systems’ incident response. + +You might have noticed I mentioned an “Informational” alert type right after I wrote that alerts shouldn’t be informational. Well, not everyone agrees, but I don’t consider something an alert if it isn’t sent to anyone. It is a data point that many systems refer to as an alert. It represents some event that should be known but not responded to. It is generally part of the visualization system of the alerting tool and not an event that triggers actual notifications. Mike Julian covers this and other aspects of alerting in his book [Practical Monitoring][1]. It's a must read for work in this area. + +Non-informational alerts consist of types that can be responded to or require action. I group these into two categories: internal outage and external outage. (Most companies have more than two levels for prioritizing their response efforts.) Degraded system performance is considered an outage in this model, as the impact to each user is usually unknown. + +Internal outages are a lower priority than external outages, but they still need to be responded to quickly. They often include internal systems that company employees use or components of applications that are visible only to company employees. + +External outages consist of any system outage that would immediately impact a customer. These don’t include a system outage that prevents releasing updates to the system. 
They do include customer-facing application failures, database outages, and networking partitions that hurt availability or consistency if either can impact a user. They also include outages of tools that may not have a direct impact on users, as the application continues to run but this transparent dependency impacts performance. This is common when the system uses some external service or data source that isn’t necessary for full functionality but may cause delays as the application performs retries or handles errors from this external dependency. + +### Visualizations + +There are many visualization types, and I won’t cover them all here. It’s a fascinating area of research. On the data analytics side of my career, learning and applying that knowledge is a constant challenge. We need to provide simple representations of complex system outputs for the widest dissemination of information. [Google Charts][2] and [Tableau][3] have a wide selection of visualization types. We’ll cover the most common visualizations and some innovative solutions for quickly understanding systems. + +#### Line chart + +The line chart is probably the most common visualization. It does a pretty good job of producing an understanding of a system over time. A line chart in a metrics system would have a line for each unique metric or some aggregation of metrics. This can get confusing when there are a lot of metrics in the same dashboard (as shown below), but most systems can select specific metrics to view rather than having all of them visible. Also, anomalous behavior is easy to spot if it’s significant enough to escape the noise of normal operations. Below we can see purple, yellow, and light blue lines that might indicate anomalous behavior. + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart.png) + +Another feature of a line chart is that you can often stack them to show relationships. 
For example, you might want to look at requests on each server individually, but also in aggregate. This allows you to understand the overall system as well as each instance in the same graph. + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart_aggregate.png) + +#### Heatmaps + +Another common visualization is the heatmap. It is useful when looking at histograms. This type of visualization is similar to a bar chart but can show gradients within the bars representing the different percentiles of the overall metric. For example, suppose you’re looking at request latencies and you want to quickly understand the overall trend as well as the distribution of all requests. A heatmap is great for this, and it can use color to disambiguate the quantity of each section with a quick glance. + +The heatmap below shows the higher concentration around the centerline of the graph with an easy-to-understand visualization of the distribution vertically for each time bucket. We might want to review a couple of points in time where the distribution gets wide while the others are fairly tight like at 14:00. This distribution might be a negative performance indicator. + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_histogram.png) + +#### Gauges + +The last common visualization I’ll cover here is the gauge, which helps users understand a single metric quickly. Gauges can represent a single metric, like your speedometer represents your driving speed or your gas gauge represents the amount of gas in your car. Similar to the gas gauge, most monitoring gauges clearly indicate what is good and what isn’t. Often (as is shown below), good is represented by green, getting worse by orange, and “everything is breaking” by red. The middle row below shows traditional gauges. + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_gauges.png) + +This image shows more than just traditional gauges. 
The other gauges are single stat representations that are similar to the function of the classic gauge. They all use the same color scheme to quickly indicate system health with just a glance. Arguably, the bottom row is probably the best example of a gauge that allows you to glance at a dashboard and know that everything is healthy (or not). This type of visualization is usually what I put on a top-level dashboard. It offers a full, high-level understanding of system health in seconds. + +#### Flame graphs + +A less common visualization is the flame graph, introduced by [Netflix’s Brendan Gregg][4] in 2011. It’s not ideal for dashboarding or quickly observing high-level system concerns; it’s normally seen when trying to understand a specific application problem. This visualization focuses on CPU and memory and the associated frames. The X-axis lists the frames alphabetically, and the Y-axis shows stack depth. Each rectangle is a stack frame and includes the function being called. The wider the rectangle, the more it appears in the stack. This method is invaluable when trying to diagnose system performance at the application level and I urge everyone to give it a try. + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_flame_graph_0.png) + +### Tool options + +There are several commercial options for alerting, but since this is Opensource.com, I’ll cover only systems that are being used at scale by real companies that you can use at no cost. Hopefully, you’ll be able to contribute new and innovative features to make these systems even better. + +### Alerting tools + +#### Bosun + +If you’ve ever done anything with computers and gotten stuck, the help you received was probably thanks to a Stack Exchange system. Stack Exchange runs many different websites around a crowdsourced question-and-answer model. [Stack Overflow][5] is very popular with developers, and [Super User][6] is popular with operations. 
However, there are now hundreds of sites ranging from parenting to sci-fi and philosophy to bicycles. + +Stack Exchange open-sourced its alert management system, [Bosun][7], around the same time Prometheus and its [AlertManager][8] system were released. There were many similarities in the two systems, and that’s a really good thing. Like Prometheus, Bosun is written in Golang. Bosun’s scope is more extensive than Prometheus’ as it can interact with systems beyond metrics aggregation. It can also ingest data from log and event aggregation systems. It supports Graphite, InfluxDB, OpenTSDB, and Elasticsearch. + +Bosun’s architecture consists of a single server binary, a backend like OpenTSDB, Redis, and [scollector agents][9]. The scollector agents automatically detect services on a host and report metrics for those processes and other system resources. This data is sent to a metrics backend. The Bosun server binary then queries the backends to determine if any alerts need to be fired. Bosun can also be used by tools like [Grafana][10] to query the underlying backends through one common interface. Redis is used to store state and metadata for Bosun. + +A really neat feature of Bosun is that it lets you test your alerts against historical data. This was something I missed in Prometheus several years ago, when I had data for an issue I wanted alerts on but no easy way to test it. To make sure my alerts were working, I had to create and insert dummy data. This system alleviates that very time-consuming process. + +Bosun also has the usual features like showing simple graphs and creating alerts. It has a powerful expression language for writing alerting rules. However, it only has email and HTTP notification configurations, which means connecting to Slack and other tools requires a bit more customization ([which its documentation covers][11]). Similar to Prometheus, Bosun can use templates for these notifications, which means they can look as awesome as you want them to. 
You can use all your HTML and CSS skills to create the baddest email alert anyone has ever seen. + +#### Cabot + +[Cabot][12] was created by a company called [Arachnys][13]. You may not know who Arachnys is or what it does, but you have probably felt its impact: It built the leading cloud-based solution for fighting financial crimes. That sounds pretty cool, right? At a previous company, I was involved in similar functions around [“know your customer"][14] laws. Most companies would consider it a very bad thing to be linked to a terrorist group, for example, funneling money through their systems. These solutions also help defend against less-atrocious offenders like fraudsters who could also pose a risk to the institution. + +So why did Arachnys create Cabot? Well, it is kind of a Christmas present to everyone, as it was a Christmas project built because its developers couldn’t wrap their heads around [Nagios][15]. And really, who can blame them? Cabot was written with Django and Bootstrap, so it should be easy for most to contribute to the project. (Another interesting factoid: The name comes from the creator’s dog.) + +The Cabot architecture is similar to Bosun in that it doesn’t collect any data. Instead, it accesses data through the APIs of the tools it is alerting for. Therefore, Cabot uses a pull (rather than a push) model for alerting. It reaches out into each system’s API and retrieves the information it needs to make a decision based on a specific check. Cabot stores the alerting data in a Postgres database and also has a cache using Redis. + +Cabot natively supports [Graphite][16], but it also supports [Jenkins][17], which is rare in this area. [Arachnys][13] uses Jenkins like a centralized cron, but I like this idea of treating build failures like outages. Obviously, a build failure isn’t as critical as a production outage, but it could still alert the team and escalate if the failure isn’t resolved. 
Who actually checks Jenkins every time an email comes in about a build failure? Yeah, me too! + +Another interesting feature is that Cabot can integrate with Google Calendar for on-call rotations. Cabot calls this feature Rota, which is a British term for a roster or rotation. This makes a lot of sense, and I wish other systems would take this idea further. Cabot doesn’t support anything more complex than primary and backup personnel, but there is certainly room for additional features. The docs say if you want something more advanced, you should look at a commercial option. + +#### StatsAgg + +[StatsAgg][18]? How did that make the list? Well, it’s not every day you come across a publishing company that has created an alerting platform. I think that deserves recognition. Of course, [Pearson][19] isn’t just a publishing company anymore; it has several web presences and a joint venture with [O’Reilly Media][20]. However, I still think of it as the company that published my schoolbooks and tests. + +StatsAgg isn’t just an alerting platform; it’s also a metrics aggregation platform. And it’s kind of like a proxy for other systems. It supports Graphite, StatsD, InfluxDB, and OpenTSDB as inputs, but it can also forward those metrics to their respective platforms. This is an interesting concept, but potentially risky as loads increase on a central service. However, if the StatsAgg infrastructure is robust enough, it can still produce alerts even when a backend storage platform has an outage. + +StatsAgg is written in Java and consists only of the main server and UI, which keeps complexity to a minimum. It can send alerts based on regular expression matching and is focused on alerting by service rather than host or instance. Its goal is to fill a void in the open source observability stack, and I think it does that quite well. + +### Visualization tools + +#### Grafana + +Almost everyone knows about [Grafana][10], and many have used it. 
I have used it for years whenever I need a simple dashboard. The tool I used before was deprecated, and I was fairly distraught about that until Grafana made it okay. Grafana was gifted to us by Torkel Ödegaard. Like Cabot, Grafana was also created around Christmastime, and released in January 2014. It has come a long way in just a few years. It started life as a Kibana dashboarding system, and Torkel forked it into what became Grafana. + +Grafana’s sole focus is presenting monitoring data in a more usable and pleasing way. It can natively gather data from Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB. There’s an Enterprise version that uses plugins for more data sources, but there’s no reason those other data source plugins couldn’t be created as open source, as the Grafana plugin ecosystem already offers many other data sources. + +What does Grafana do for me? It provides a central location for understanding my system. It is web-based, so anyone can access the information, although it can be restricted using different authentication methods. Grafana can provide knowledge at a glance using many different types of visualizations. However, it has started integrating alerting and other features that aren’t traditionally combined with visualizations. + +Now you can set alerts visually. That means you can look at a graph, maybe even one showing where an alert should have triggered due to some degradation of the system, click on the graph where you want the alert to trigger, and then tell Grafana where to send the alert. That’s a pretty powerful addition that won’t necessarily replace an alerting platform, but it can certainly help augment it by providing a different perspective on alerting criteria. + +Grafana has also introduced more collaboration features. 
Users have been able to share dashboards for a long time, meaning you don’t have to create your own dashboard for your [Kubernetes][21] cluster because there are several already available—with some maintained by Kubernetes developers and others by Grafana developers. + +The most significant addition around collaboration is annotations. Annotations allow a user to add context to part of a graph. Other users can then use this context to understand the system better. This is an invaluable tool when a team is in the middle of an incident and communication and common understanding are critical. Having all the information right where you’re already looking makes it much more likely that knowledge will be shared across the team quickly. It’s also a nice feature to use during blameless postmortems when the team is trying to understand how the failure occurred and learn more about their system. + +#### Vizceral + +Netflix created [Vizceral][22] to understand its traffic patterns better when performing a traffic failover. Unlike Grafana, which is a more general tool, Vizceral serves a very specific use case. Netflix no longer uses this tool internally and says it is no longer actively maintained, but it still updates the tool periodically. I highlight it here primarily to point out an interesting visualization mechanism and how it can help solve a problem. It’s worth running it in a demo environment just to better grasp the concepts and witness what’s possible with these systems. 
+ +### What to read next + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/alerting-and-visualization-tools-sysadmins + +作者:[Dan Barker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/barkerd427 +[b]: https://github.com/lujun9972 +[1]: https://www.practicalmonitoring.com/ +[2]: https://developers.google.com/chart/interactive/docs/gallery +[3]: https://libguides.libraries.claremont.edu/c.php?g=474417&p=3286401 +[4]: http://www.brendangregg.com/flamegraphs.html +[5]: https://stackoverflow.com/ +[6]: https://superuser.com/ +[7]: http://bosun.org/ +[8]: https://prometheus.io/docs/alerting/alertmanager/ +[9]: https://bosun.org/scollector/ +[10]: https://grafana.com/ +[11]: https://bosun.org/notifications +[12]: https://cabotapp.com/ +[13]: https://www.arachnys.com/ +[14]: https://en.wikipedia.org/wiki/Know_your_customer +[15]: https://www.nagios.org/ +[16]: https://graphiteapp.org/ +[17]: https://jenkins.io/ +[18]: https://github.com/PearsonEducation/StatsAgg +[19]: https://www.pearson.com/us/ +[20]: https://www.oreilly.com/ +[21]: https://opensource.com/resources/what-is-kubernetes +[22]: https://github.com/Netflix/vizceral diff --git a/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md b/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md new file mode 100644 index 0000000000..6998661f23 --- /dev/null +++ b/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md @@ -0,0 +1,457 @@ +An introduction to using tcpdump at the Linux command line +====== + +This flexible, powerful command-line tool helps ease the pain of troubleshooting network issues. 
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE)
+
+In my experience as a sysadmin, I have often found network connectivity issues challenging to troubleshoot. For those situations, tcpdump is a great ally.
+
+Tcpdump is a command line utility that allows you to capture and analyze network traffic going through your system. It is often used to help troubleshoot network issues, and it can also serve as a security tool.
+
+A powerful and versatile tool that includes many options and filters, tcpdump can be used in a variety of cases. Since it's a command line tool, it is ideal to run on remote servers or devices for which a GUI is not available, to collect data that can be analyzed later. It can also be launched in the background or as a scheduled job using tools like cron.
+
+In this article, we'll look at some of tcpdump's most common features.
+
+### 1\. Installation on Linux
+
+Tcpdump is included with several Linux distributions, so chances are, you already have it installed. Check if tcpdump is installed on your system with the following command:
+
+```
+$ which tcpdump
+/usr/sbin/tcpdump
+```
+
+If tcpdump is not installed, you can install it using your distribution's package manager. For example, on CentOS or Red Hat Enterprise Linux, like this:
+
+```
+$ sudo yum install -y tcpdump
+```
+
+Tcpdump requires `libpcap`, which is a library for network packet capture. If it's not installed, it will be automatically added as a dependency.
+
+You're ready to start capturing some packets.
+
+### 2\. Capturing packets with tcpdump
+
+To capture packets for troubleshooting or analysis, tcpdump requires elevated permissions, so in the following examples most commands are prefixed with `sudo`.
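The `yum` example above is specific to the Red Hat family; other major distributions ship tcpdump under the same package name. As a small sketch (the package-manager detection below is an assumption about common defaults, not something from the tcpdump documentation), this prints the matching install command without running anything privileged:

```shell
# Print, without executing, a tcpdump install command matching whichever
# supported package manager is present. apt-get, dnf, and yum all accept
# "install -y <package>".
cmd="(no supported package manager detected)"
for pm in apt-get dnf yum; do
  if command -v "$pm" >/dev/null 2>&1; then
    cmd="sudo $pm install -y tcpdump"
    break
  fi
done
echo "$cmd"
```

On a CentOS machine this prints the same `sudo yum install -y tcpdump` shown above; on Debian or Ubuntu it prints the `apt-get` equivalent.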
+
+To begin, use the command `tcpdump -D` to see which interfaces are available for capture:
+
+```
+$ sudo tcpdump -D
+1.eth0
+2.virbr0
+3.eth1
+4.any (Pseudo-device that captures on all interfaces)
+5.lo [Loopback]
+```
+
+In the example above, you can see all the interfaces available on my machine. The special interface `any` allows capturing on any active interface.
+
+Let's use it to start capturing some packets. Capture all packets on all interfaces by running this command:
+
+```
+$ sudo tcpdump -i any
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+09:56:18.293641 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 3770820720:3770820916, ack 3503648727, win 309, options [nop,nop,TS val 76577898 ecr 510770929], length 196
+09:56:18.293794 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 196, win 391, options [nop,nop,TS val 510771017 ecr 76577898], length 0
+09:56:18.295058 IP rhel75.59883 > gateway.domain: 2486+ PTR? 1.64.168.192.in-addr.arpa. (43)
+09:56:18.310225 IP gateway.domain > rhel75.59883: 2486 NXDomain* 0/1/0 (102)
+09:56:18.312482 IP rhel75.49685 > gateway.domain: 34242+ PTR? 28.64.168.192.in-addr.arpa. (44)
+09:56:18.322425 IP gateway.domain > rhel75.49685: 34242 NXDomain* 0/1/0 (103)
+09:56:18.323164 IP rhel75.56631 > gateway.domain: 29904+ PTR? 1.122.168.192.in-addr.arpa. (44)
+09:56:18.323342 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 196:584, ack 1, win 309, options [nop,nop,TS val 76577928 ecr 510771017], length 388
+09:56:18.323563 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 584, win 411, options [nop,nop,TS val 510771047 ecr 76577928], length 0
+09:56:18.335569 IP gateway.domain > rhel75.56631: 29904 NXDomain* 0/1/0 (103)
+09:56:18.336429 IP rhel75.44007 > gateway.domain: 61677+ PTR? 98.122.168.192.in-addr.arpa. (45)
+09:56:18.336655 IP gateway.domain > rhel75.44007: 61677* 1/0/0 PTR rhel75. (65)
+09:56:18.337177 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 584:1644, ack 1, win 309, options [nop,nop,TS val 76577942 ecr 510771047], length 1060
+
+---- SKIPPING LONG OUTPUT -----
+
+09:56:19.342939 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 1752016, win 1444, options [nop,nop,TS val 510772067 ecr 76578948], length 0
+^C
+9003 packets captured
+9010 packets received by filter
+7 packets dropped by kernel
+$
+```
+
+Tcpdump continues to capture packets until it receives an interrupt signal. You can interrupt capturing by pressing `Ctrl+C`. As you can see in this example, `tcpdump` captured more than 9,000 packets. In this case, since I am connected to this server using `ssh`, tcpdump captured all these packets. To limit the number of packets captured and stop `tcpdump`, use the `-c` option:
+
+```
+$ sudo tcpdump -i any -c 5
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+11:21:30.242740 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 3772575680:3772575876, ack 3503651743, win 309, options [nop,nop,TS val 81689848 ecr 515883153], length 196
+11:21:30.242906 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 196, win 1443, options [nop,nop,TS val 515883235 ecr 81689848], length 0
+11:21:30.244442 IP rhel75.43634 > gateway.domain: 57680+ PTR? 1.64.168.192.in-addr.arpa. (43)
+11:21:30.244829 IP gateway.domain > rhel75.43634: 57680 NXDomain 0/0/0 (43)
+11:21:30.247048 IP rhel75.33696 > gateway.domain: 37429+ PTR? 28.64.168.192.in-addr.arpa. (44)
+5 packets captured
+12 packets received by filter
+0 packets dropped by kernel
+$
+```
+
+In this case, `tcpdump` stopped capturing automatically after capturing five packets. This is useful in different scenarios—for instance, if you're troubleshooting connectivity and capturing a few initial packets is enough. This is even more useful when we apply filters to capture specific packets (shown below).
+
+By default, tcpdump resolves IP addresses and ports into names, as shown in the previous example. When troubleshooting network issues, it is often easier to use the IP addresses and port numbers; disable name resolution by using the option `-n` and port resolution with `-nn`:
+
+```
+$ sudo tcpdump -i any -c5 -nn
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+23:56:24.292206 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 166198580:166198776, ack 2414541257, win 309, options [nop,nop,TS val 615664 ecr 540031155], length 196
+23:56:24.292357 IP 192.168.64.1.35110 > 192.168.64.28.22: Flags [.], ack 196, win 1377, options [nop,nop,TS val 540031229 ecr 615664], length 0
+23:56:24.292570 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 615664 ecr 540031229], length 372
+23:56:24.292655 IP 192.168.64.1.35110 > 192.168.64.28.22: Flags [.], ack 568, win 1400, options [nop,nop,TS val 540031229 ecr 615664], length 0
+23:56:24.292752 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 568:908, ack 1, win 309, options [nop,nop,TS val 615664 ecr 540031229], length 340
+5 packets captured
+6 packets received by filter
+0 packets dropped by kernel
+```
+
+As shown above, the capture output now displays the IP addresses and port numbers. This also prevents tcpdump from issuing DNS lookups, which helps to lower network traffic while troubleshooting network issues.
+
+Now that you're able to capture network packets, let's explore what this output means.
+
+### 3\. Understanding the output format
+
+Tcpdump is capable of capturing and decoding many different protocols, such as TCP, UDP, ICMP, and many more. While we can't cover all of them here, to help you get started, let's explore the TCP packet. You can find more details about the different protocol formats in tcpdump's [manual pages][1]. A typical TCP packet captured by tcpdump looks like this:
+
+```
+08:41:13.729687 IP 192.168.64.28.22 > 192.168.64.1.41916: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 117964079 ecr 816509256], length 372
+```
+
+The fields may vary depending on the type of packet being sent, but this is the general format.
+
+The first field, `08:41:13.729687`, represents the timestamp of the received packet as per the local clock.
+
+Next, `IP` represents the network layer protocol—in this case, `IPv4`. For `IPv6` packets, the value is `IP6`.
+
+The next field, `192.168.64.28.22`, is the source IP address and port. This is followed by the destination IP address and port, represented by `192.168.64.1.41916`.
+
+After the source and destination, you can find the TCP Flags `Flags [P.]`. Typical values for this field include:
+
+| Value | Flag Type | Description |
+|-------| --------- | ----------------- |
+| S | SYN | Connection Start |
+| F | FIN | Connection Finish |
+| P | PUSH | Data push |
+| R | RST | Connection reset |
+| . | ACK | Acknowledgment |
+
+This field can also be a combination of these values, such as `[S.]` for a `SYN-ACK` packet.
+
+Next is the sequence number of the data contained in the packet. For the first packet captured, this is an absolute number. Subsequent packets use a relative number to make it easier to follow. In this example, the sequence is `seq 196:568`, which means this packet contains bytes 196 to 568 of this flow.
+
+This is followed by the Ack Number: `ack 1`. In this case, it is 1 since this is the side sending data. For the side receiving data, this field represents the next expected byte (data) on this flow. For example, the Ack number for the next packet in this flow would be 568.
+
+The next field is the window size `win 309`, which represents the number of bytes available in the receiving buffer, followed by TCP options such as the MSS (Maximum Segment Size) or Window Scale. For details about TCP protocol options, consult [Transmission Control Protocol (TCP) Parameters][2].
+
+Finally, we have the packet length, `length 372`, which represents the length, in bytes, of the payload data. The length is the difference between the last and first bytes in the sequence number.
+
+Now let's learn how to filter packets to narrow down results and make it easier to troubleshoot specific issues.
+
+### 4\. Filtering packets
+
+As mentioned above, tcpdump can capture too many packets, some of which are not even related to the issue you're troubleshooting. For example, if you're troubleshooting a connectivity issue with a web server, you're not interested in the SSH traffic, so removing the SSH packets from the output makes it easier to work on the real issue.
+
+One of tcpdump's most powerful features is its ability to filter the captured packets using a variety of parameters, such as source and destination IP addresses, ports, protocols, etc. Let's look at some of the most common ones.
+
+#### Protocol
+
+To filter packets based on protocol, specify the protocol in the command line. For example, capture ICMP packets only by using this command:
+
+```
+$ sudo tcpdump -i any -c5 icmp
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+```
+
+In a different terminal, try to ping another machine:
+
+```
+$ ping opensource.com
+PING opensource.com (54.204.39.132) 56(84) bytes of data.
+64 bytes from ec2-54-204-39-132.compute-1.amazonaws.com (54.204.39.132): icmp_seq=1 ttl=47 time=39.6 ms +``` + +Back in the tcpdump capture, notice that tcpdump captures and displays only the ICMP-related packets. In this case, tcpdump is not displaying name resolution packets that were generated when resolving the name `opensource.com`: + +``` +09:34:20.136766 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 1, length 64 +09:34:20.176402 IP ec2-54-204-39-132.compute-1.amazonaws.com > rhel75: ICMP echo reply, id 20361, seq 1, length 64 +09:34:21.140230 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 2, length 64 +09:34:21.180020 IP ec2-54-204-39-132.compute-1.amazonaws.com > rhel75: ICMP echo reply, id 20361, seq 2, length 64 +09:34:22.141777 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 3, length 64 +5 packets captured +5 packets received by filter +0 packets dropped by kernel +``` + +#### Host + +Limit capture to only packets related to a specific host by using the `host` filter: + +``` +$ sudo tcpdump -i any -c5 -nn host 54.204.39.132 +tcpdump: verbose output suppressed, use -v or -vv for full protocol decode +listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes +09:54:20.042023 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [S], seq 1375157070, win 29200, options [mss 1460,sackOK,TS val 122350391 ecr 0,nop,wscale 7], length 0 +09:54:20.088127 IP 54.204.39.132.80 > 192.168.122.98.39326: Flags [S.], seq 1935542841, ack 1375157071, win 28960, options [mss 1460,sackOK,TS val 522713542 ecr 122350391,nop,wscale 9], length 0 +09:54:20.088204 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 122350437 ecr 522713542], length 0 +09:54:20.088734 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 122350438 ecr 522713542], 
length 112: HTTP: GET / HTTP/1.1 +09:54:20.129733 IP 54.204.39.132.80 > 192.168.122.98.39326: Flags [.], ack 113, win 57, options [nop,nop,TS val 522713552 ecr 122350438], length 0 +5 packets captured +5 packets received by filter +0 packets dropped by kernel +``` + +In this example, tcpdump captures and displays only packets to and from host `54.204.39.132`. + +#### Port + +To filter packets based on the desired service or port, use the `port` filter. For example, capture packets related to a web (HTTP) service by using this command: + +``` +$ sudo tcpdump -i any -c5 -nn port 80 +tcpdump: verbose output suppressed, use -v or -vv for full protocol decode +listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes +09:58:28.790548 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [S], seq 1745665159, win 29200, options [mss 1460,sackOK,TS val 122599140 ecr 0,nop,wscale 7], length 0 +09:58:28.834026 IP 54.204.39.132.80 > 192.168.122.98.39330: Flags [S.], seq 4063583040, ack 1745665160, win 28960, options [mss 1460,sackOK,TS val 522775728 ecr 122599140,nop,wscale 9], length 0 +09:58:28.834093 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 122599183 ecr 522775728], length 0 +09:58:28.834588 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 122599184 ecr 522775728], length 112: HTTP: GET / HTTP/1.1 +09:58:28.878445 IP 54.204.39.132.80 > 192.168.122.98.39330: Flags [.], ack 113, win 57, options [nop,nop,TS val 522775739 ecr 122599184], length 0 +5 packets captured +5 packets received by filter +0 packets dropped by kernel +``` + +#### Source IP/hostname + +You can also filter packets based on the source or destination IP Address or hostname. 
For example, to capture packets from host `192.168.122.98`:
+
+```
+$ sudo tcpdump -i any -c5 -nn src 192.168.122.98
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:02:15.220824 IP 192.168.122.98.39436 > 192.168.122.1.53: 59332+ A? opensource.com. (32)
+10:02:15.220862 IP 192.168.122.98.39436 > 192.168.122.1.53: 20749+ AAAA? opensource.com. (32)
+10:02:15.364062 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [S], seq 1108640533, win 29200, options [mss 1460,sackOK,TS val 122825713 ecr 0,nop,wscale 7], length 0
+10:02:15.409229 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [.], ack 669337581, win 229, options [nop,nop,TS val 122825758 ecr 522832372], length 0
+10:02:15.409667 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [P.], seq 0:112, ack 1, win 229, options [nop,nop,TS val 122825759 ecr 522832372], length 112: HTTP: GET / HTTP/1.1
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+Notice that tcpdump captured packets with source IP address `192.168.122.98` for multiple services such as name resolution (port 53) and HTTP (port 80). The response packets are not displayed since their source IP is different.
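The effect of the `src` qualifier can be pictured as a simple predicate on the source field of each packet. The sketch below is illustrative only — the addresses are the sample ones from this article, and the `(source, destination)` tuples are hypothetical stand-ins for parsed packets, not tcpdump internals:

```python
# Illustration only: a `src <addr>` filter keeps a packet only when its
# *source* address matches, which is why replies from the remote hosts
# disappear from the capture above.
packets = [
    ("192.168.122.98", "192.168.122.1"),   # DNS query    -> kept
    ("192.168.122.1", "192.168.122.98"),   # DNS reply    -> dropped
    ("192.168.122.98", "54.204.39.132"),   # HTTP request -> kept
    ("54.204.39.132", "192.168.122.98"),   # HTTP reply   -> dropped
]

def src(addr, pkts):
    """Mimic the `src` primitive: match on the source field only."""
    return [p for p in pkts if p[0] == addr]

for source, dest in src("192.168.122.98", packets):
    print(source, "->", dest)
```

Only the two outbound packets survive; the replies fail the source check, exactly as in the capture output above.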
+ +Conversely, you can use the `dst` filter to filter by destination IP/hostname: + +``` +$ sudo tcpdump -i any -c5 -nn dst 192.168.122.98 +tcpdump: verbose output suppressed, use -v or -vv for full protocol decode +listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes +10:05:03.572931 IP 192.168.122.1.53 > 192.168.122.98.47049: 2248 1/0/0 A 54.204.39.132 (48) +10:05:03.572944 IP 192.168.122.1.53 > 192.168.122.98.47049: 33770 0/0/0 (32) +10:05:03.621833 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [S.], seq 3474204576, ack 3256851264, win 28960, options [mss 1460,sackOK,TS val 522874425 ecr 122993922,nop,wscale 9], length 0 +10:05:03.667767 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [.], ack 113, win 57, options [nop,nop,TS val 522874436 ecr 122993972], length 0 +10:05:03.672221 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 522874437 ecr 122993972], length 642: HTTP: HTTP/1.1 302 Found +5 packets captured +5 packets received by filter +0 packets dropped by kernel +``` + +#### Complex expressions + +You can also combine filters by using the logical operators `and` and `or` to create more complex expressions. 
For example, to filter packets from source IP address `192.168.122.98` and service HTTP only, use this command:
+
+```
+$ sudo tcpdump -i any -c5 -nn src 192.168.122.98 and port 80
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:08:00.472696 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [S], seq 2712685325, win 29200, options [mss 1460,sackOK,TS val 123170822 ecr 0,nop,wscale 7], length 0
+10:08:00.516118 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [.], ack 268723504, win 229, options [nop,nop,TS val 123170865 ecr 522918648], length 0
+10:08:00.516583 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [P.], seq 0:112, ack 1, win 229, options [nop,nop,TS val 123170866 ecr 522918648], length 112: HTTP: GET / HTTP/1.1
+10:08:00.567044 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 123170916 ecr 522918661], length 0
+10:08:00.788153 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [F.], seq 112, ack 643, win 239, options [nop,nop,TS val 123171137 ecr 522918661], length 0
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+You can create more complex expressions by grouping filters with parentheses.
In this case, enclose the entire filter expression with quotation marks to prevent the shell from confusing them with shell expressions: + +``` +$ sudo tcpdump -i any -c5 -nn "port 80 and (src 192.168.122.98 or src 54.204.39.132)" +tcpdump: verbose output suppressed, use -v or -vv for full protocol decode +listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes +10:10:37.602214 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [S], seq 871108679, win 29200, options [mss 1460,sackOK,TS val 123327951 ecr 0,nop,wscale 7], length 0 +10:10:37.650651 IP 54.204.39.132.80 > 192.168.122.98.39346: Flags [S.], seq 854753193, ack 871108680, win 28960, options [mss 1460,sackOK,TS val 522957932 ecr 123327951,nop,wscale 9], length 0 +10:10:37.650708 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 123328000 ecr 522957932], length 0 +10:10:37.651097 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 123328000 ecr 522957932], length 112: HTTP: GET / HTTP/1.1 +10:10:37.692900 IP 54.204.39.132.80 > 192.168.122.98.39346: Flags [.], ack 113, win 57, options [nop,nop,TS val 522957942 ecr 123328000], length 0 +5 packets captured +5 packets received by filter +0 packets dropped by kernel +``` + +In this example, we're filtering packets for HTTP service only (port 80) and source IP addresses `192.168.122.98` or `54.204.39.132`. This is a quick way of examining both sides of the same flow. + +### 5\. Checking packet content + +In the previous examples, we're checking only the packets' headers for information such as source, destinations, ports, etc. Sometimes this is all we need to troubleshoot network connectivity issues. Sometimes, however, we need to inspect the content of the packet to ensure that the message we're sending contains what we need or that we received the expected response. 
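Packet payloads are just raw bytes. As a rough illustration of what a hex-plus-ASCII rendering involves (a standalone sketch, not tcpdump's own code), a few lines of Python can mimic the format:

```python
def hex_ascii_dump(data: bytes, width: int = 16) -> str:
    """Render bytes as offset, hex pairs, and printable ASCII,
    similar in spirit to tcpdump's content output."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        # Non-printable bytes are shown as dots, like in tcpdump output.
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"0x{off:04x}:  {hexpart:<{width * 3 - 1}}  {text}")
    return "\n".join(lines)

print(hex_ascii_dump(b"GET / HTTP/1.1\r\nHost: opensource.com\r\n"))
```

This is why the dumps below show readable HTTP headers interleaved with dots: the dots are the non-printable bytes of the TCP/IP headers and line endings.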
To see the packet content, tcpdump provides two additional flags: `-X` to print the content in hex and ASCII, or `-A` to print the content in ASCII.
+
+For example, inspect the HTTP content of a web request like this:
+
+```
+$ sudo tcpdump -i any -c10 -nn -A port 80
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+13:02:14.871803 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [S], seq 2546602048, win 29200, options [mss 1460,sackOK,TS val 133625221 ecr 0,nop,wscale 7], length 0
+E..<..@.@.....zb6.'....P...@......r............
+............................
+13:02:14.910734 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [S.], seq 1877348646, ack 2546602049, win 28960, options [mss 1460,sackOK,TS val 525532247 ecr 133625221,nop,wscale 9], length 0
+E..<..@./..a6.'...zb.P..o..&...A..q a..........
+.R.W.......     ................
+13:02:14.910832 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 133625260 ecr 525532247], length 0
+E..4..@.@.....zb6.'....P...Ao..'...........
+.....R.W................
+13:02:14.911808 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 133625261 ecr 525532247], length 112: HTTP: GET / HTTP/1.1
+E.....@.@..1..zb6.'....P...Ao..'...........
+.....R.WGET / HTTP/1.1
+User-Agent: Wget/1.14 (linux-gnu)
+Accept: */*
+Host: opensource.com
+Connection: Keep-Alive
+
+................
+13:02:14.951199 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [.], ack 113, win 57, options [nop,nop,TS val 525532257 ecr 133625261], length 0
+E..4.F@./.."6.'...zb.P..o..'.......9.2.....
+.R.a....................
+13:02:14.955030 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 525532258 ecr 133625261], length 642: HTTP: HTTP/1.1 302 Found
+E....G@./...6.'...zb.P..o..'.......9.......
+.R.b....HTTP/1.1 302 Found
+Server: nginx
+Date: Sun, 23 Sep 2018 17:02:14 GMT
+Content-Type: text/html; charset=iso-8859-1
+Content-Length: 207
+X-Content-Type-Options: nosniff
+Location: https://opensource.com/
+Cache-Control: max-age=1209600
+Expires: Sun, 07 Oct 2018 17:02:14 GMT
+X-Request-ID: v-6baa3acc-bf52-11e8-9195-22000ab8cf2d
+X-Varnish: 632951979
+Age: 0
+Via: 1.1 varnish (Varnish/5.2)
+X-Cache: MISS
+Connection: keep-alive
+
+<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
+<html><head>
+<title>302 Found</title>
+</head><body>
+<h1>Found</h1>
+<p>The document has moved <a href="https://opensource.com/">here</a>.</p>
+</body></html>
+ +................ +13:02:14.955083 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 133625304 ecr 525532258], length 0 +E..4..@.@.....zb6.'....P....o.............. +.....R.b................ +13:02:15.195524 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [F.], seq 113, ack 643, win 239, options [nop,nop,TS val 133625545 ecr 525532258], length 0 +E..4..@.@.....zb6.'....P....o.............. +.....R.b................ +13:02:15.236592 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 525532329 ecr 133625545], length 0 +E..4.H@./.. 6.'...zb.P..o..........9.I..... +.R...................... +13:02:15.236656 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 644, win 239, options [nop,nop,TS val 133625586 ecr 525532329], length 0 +E..4..@.@.....zb6.'....P....o.............. +.....R.................. +10 packets captured +10 packets received by filter +0 packets dropped by kernel +``` + +This is helpful for troubleshooting issues with API calls, assuming the calls are using plain HTTP. For encrypted connections, this output is less useful. + +### 6\. Saving captures to a file + +Another useful feature provided by tcpdump is the ability to save the capture to a file so you can analyze the results later. This allows you to capture packets in batch mode overnight, for example, and verify the results in the morning. It also helps when there are too many packets to analyze since real-time capture can occur too fast. + +To save packets to a file instead of displaying them on screen, use the option `-w`: + +``` +$ sudo tcpdump -i any -c10 -nn -w webserver.pcap port 80 +[sudo] password for ricardo: +tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes +10 packets captured +10 packets received by filter +0 packets dropped by kernel +``` + +This command saves the output in a file named `webserver.pcap`. 
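The file written by `-w` uses the libpcap capture format: a fixed 24-byte global header followed by one record per packet. As a rough sketch (normally you would read such files with tcpdump or a pcap library rather than by hand), the global header can be decoded with Python's `struct` module; the header bytes below are synthetic, built for the demonstration:

```python
import struct

# Layout of the classic little-endian pcap global header (24 bytes):
# magic, version_major, version_minor, thiszone, sigfigs, snaplen, linktype
PCAP_GLOBAL_HEADER = struct.Struct("<IHHiIII")

def read_pcap_header(raw: bytes) -> dict:
    """Decode the 24-byte pcap global header of a capture file."""
    magic, vmaj, vmin, _tz, _sf, snaplen, linktype = PCAP_GLOBAL_HEADER.unpack(
        raw[:PCAP_GLOBAL_HEADER.size])
    if magic != 0xA1B2C3D4:
        raise ValueError("not a little-endian pcap file")
    return {"version": f"{vmaj}.{vmin}", "snaplen": snaplen, "linktype": linktype}

# Synthetic header: format version 2.4, snaplen 262144, linktype 113
# (LINUX_SLL, the "Linux cooked" capture shown in the outputs above).
header = PCAP_GLOBAL_HEADER.pack(0xA1B2C3D4, 2, 4, 0, 0, 262144, 113)
print(read_pcap_header(header))
```

The `snaplen` and link-type values match the "capture size 262144 bytes" and "link-type LINUX_SLL (Linux cooked)" banner tcpdump prints when it starts.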
The `.pcap` extension stands for "packet capture" and is the convention for this file format. + +As shown in this example, nothing gets displayed on-screen, and the capture finishes after capturing 10 packets, as per the option `-c10`. If you want some feedback to ensure packets are being captured, use the option `-v`. + +Tcpdump creates a file in binary format so you cannot simply open it with a text editor. To read the contents of the file, execute tcpdump with the `-r` option: + +``` +$ tcpdump -nn -r webserver.pcap +reading from file webserver.pcap, link-type LINUX_SLL (Linux cooked) +13:36:57.679494 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [S], seq 3709732619, win 29200, options [mss 1460,sackOK,TS val 135708029 ecr 0,nop,wscale 7], length 0 +13:36:57.718932 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [S.], seq 1999298316, ack 3709732620, win 28960, options [mss 1460,sackOK,TS val 526052949 ecr 135708029,nop,wscale 9], length 0 +13:36:57.719005 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 135708068 ecr 526052949], length 0 +13:36:57.719186 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 135708068 ecr 526052949], length 112: HTTP: GET / HTTP/1.1 +13:36:57.756979 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [.], ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 0 +13:36:57.760122 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 642: HTTP: HTTP/1.1 302 Found +13:36:57.760182 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 135708109 ecr 526052959], length 0 +13:36:57.977602 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [F.], seq 113, ack 643, win 239, options [nop,nop,TS val 135708327 ecr 526052959], length 0 +13:36:58.022089 IP 54.204.39.132.80 > 192.168.122.98.39378: 
Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 526053025 ecr 135708327], length 0 +13:36:58.022132 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 644, win 239, options [nop,nop,TS val 135708371 ecr 526053025], length 0 +$ +``` + +Since you're no longer capturing the packets directly from the network interface, `sudo` is not required to read the file. + +You can also use any of the filters we've discussed to filter the content from the file, just as you would with real-time data. For example, inspect the packets in the capture file from source IP address `54.204.39.132` by executing this command: + +``` +$ tcpdump -nn -r webserver.pcap src 54.204.39.132 +reading from file webserver.pcap, link-type LINUX_SLL (Linux cooked) +13:36:57.718932 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [S.], seq 1999298316, ack 3709732620, win 28960, options [mss 1460,sackOK,TS val 526052949 ecr 135708029,nop,wscale 9], length 0 +13:36:57.756979 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [.], ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 0 +13:36:57.760122 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 642: HTTP: HTTP/1.1 302 Found +13:36:58.022089 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 526053025 ecr 135708327], length 0 +``` + +### What's next? + +These basic features of tcpdump will help you get started with this powerful and versatile tool. To learn more, consult the [tcpdump website][3] and [man pages][4]. + +The tcpdump command line interface provides great flexibility for capturing and analyzing network traffic. If you need a graphical tool to understand more complex flows, look at [Wireshark][5]. + +One benefit of Wireshark is that it can read `.pcap` files captured by tcpdump. 
You can use tcpdump to capture packets on a remote machine that does not have a GUI and analyze the resulting file with Wireshark, but that is a topic for another day.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/introduction-tcpdump
+
+作者:[Ricardo Gerardi][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/rgerardi
+[b]: https://github.com/lujun9972
+[1]: http://www.tcpdump.org/manpages/tcpdump.1.html#lbAG
+[2]: https://www.iana.org/assignments/tcp-parameters/tcp-parameters.xhtml
+[3]: http://www.tcpdump.org/#
+[4]: http://www.tcpdump.org/manpages/tcpdump.1.html
+[5]: https://www.wireshark.org/
diff --git a/sources/tech/20181010 How To List The Enabled-Active Repositories In Linux.md b/sources/tech/20181010 How To List The Enabled-Active Repositories In Linux.md
new file mode 100644
index 0000000000..b4ff872202
--- /dev/null
+++ b/sources/tech/20181010 How To List The Enabled-Active Repositories In Linux.md
@@ -0,0 +1,289 @@
+How To List The Enabled/Active Repositories In Linux
+======
+There are many ways to list enabled repositories in Linux.
+
+Here we are going to show you a few easy methods to list active repositories.
+
+It will help you to know which repositories are enabled on your system.
+
+Once you have this information handy, you can add any repository you want if it's not already enabled.
+
+Say, for example, you would like to enable the `epel` repository; then you need to check whether it is already enabled or not. In this case, this tutorial will help you.
+
+### What Is A Repository?
+
+A software repository is a central place that stores the software packages for a distribution.
+
+All Linux distributions maintain their own repositories, which allow users to retrieve and install packages on their machines.
+
+Each vendor offers its own package management tool to manage its repositories, with operations such as search, install, update, upgrade, and remove.
+
+Most Linux distributions come free of charge, except RHEL and SUSE; to access their repositories, you need to buy a subscription.
+
+**Suggested Read :**
+**(#)** [How To Add, Enable And Disable A Repository By Using The DNF/YUM Config Manager Command On Linux][1]
+**(#)** [How To List Installed Packages By Size (Largest) On Linux][2]
+**(#)** [How To View/List The Available Packages Updates In Linux][3]
+**(#)** [How To View A Particular Package Installed/Updated/Upgraded/Removed/Erased Date On Linux][4]
+**(#)** [How To View Detailed Information About A Package In Linux][5]
+**(#)** [How To Search If A Package Is Available On Your Linux Distribution Or Not][6]
+**(#)** [How To List An Available Package Groups In Linux][7]
+**(#)** [Newbies corner – A Graphical frontend tool for Linux Package Manager][8]
+**(#)** [Linux Expert should knows, list of Command line Package Manager & Usage][9]
+
+### How To List The Enabled Repositories on RHEL/CentOS
+
+RHEL and CentOS systems use RPM packages, hence we can use the `Yum Package Manager` to get this information.
+
+YUM stands for Yellowdog Updater, Modified; it is an open-source command-line front-end package-management utility for RPM-based systems such as Red Hat Enterprise Linux (RHEL) and CentOS.
+
+Yum is the primary tool for getting, installing, deleting, querying, and managing RPM packages from distribution repositories, as well as other third-party repositories.
+
+**Suggested Read :** [YUM Command To Manage Packages on RHEL/CentOS Systems][10]
+
+RHEL-based systems mainly offer the below three major repositories. These repositories are enabled by default.
+
+ * **`base:`** It contains all the core and base packages.
+ * **`extras:`** It provides additional functionality to CentOS without breaking upstream compatibility or updating base components. It is an upstream repository, as well as additional CentOS packages.
+ * **`updates:`** It offers bug-fix, security, and enhancement packages.
+
+
+
+```
+# yum repolist
+or
+# yum repolist enabled
+
+Loaded plugins: fastestmirror
+Determining fastest mirrors
+ * epel: ewr.edge.kernel.org
+repo id repo name status
+!base/7/x86_64 CentOS-7 - Base 9,911
+!epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 12,687
+!extras/7/x86_64 CentOS-7 - Extras 403
+!updates/7/x86_64 CentOS-7 - Updates 1,348
+repolist: 24,349
+
+```
+
+### How To List The Enabled Repositories on Fedora
+
+DNF stands for Dandified yum. It is the next-generation yum package manager (a fork of Yum) that uses the hawkey/libsolv library as its backend. Aleš Kozumplík started working on DNF in Fedora 18, and it was finally implemented/launched in Fedora 22.
+
+The dnf command is used to install, update, search, and remove packages on Fedora 22 and later systems. It automatically resolves dependencies, making package installation smooth and trouble-free.
+
+Yum was replaced by DNF because of several long-standing problems in Yum that were never solved. Why not just patch the Yum issues? Aleš Kozumplík explains that patching was technically hard, the YUM team would not accept the changes immediately, and, most critically, YUM is 56K lines of code while DNF is only 29K. So there was no option for further development, except to fork.
+
+**Suggested Read :** [DNF (Fork of YUM) Command To Manage Packages on Fedora System][11]
+
+Fedora systems mainly offer the below two major repositories. These repositories are enabled by default.
+
+ * **`fedora:`** It contains all the core and base packages.
+ * **`updates:`** It offers bug-fix, security, and enhancement packages from the stable release branch.
+
+
+
+```
+# dnf repolist
+or
+# dnf repolist enabled

+Last metadata expiration check: 0:02:56 ago on Wed 10 Oct 2018 06:12:22 PM IST.
+repo id repo name status
+docker-ce-stable Docker CE Stable - x86_64 6
+*fedora Fedora 26 - x86_64 53,912
+home_mhogomchungu mhogomchungu's Home Project (Fedora_25) 19
+home_moritzmolch_gencfsm Gnome Encfs Manager (Fedora_25) 5
+mystro256-gnome-redshift Copr repo for gnome-redshift owned by mystro256 6
+nodesource Node.js Packages for Fedora Linux 26 - x86_64 83
+rabiny-albert Copr repo for albert owned by rabiny 3
+*rpmfusion-free RPM Fusion for Fedora 26 - Free 536
+*rpmfusion-free-updates RPM Fusion for Fedora 26 - Free - Updates 278
+*rpmfusion-nonfree RPM Fusion for Fedora 26 - Nonfree 202
+*rpmfusion-nonfree-updates RPM Fusion for Fedora 26 - Nonfree - Updates 95
+*updates Fedora 26 - x86_64 - Updates 14,595
+
+```
+
+### How To List The Enabled Repositories on Debian/Ubuntu
+
+Debian-based systems use the APT/APT-GET package manager, hence we can use the `APT/APT-GET Package Manager` to get this information.
+
+APT stands for Advanced Packaging Tool; it is the replacement for apt-get, much as DNF came into the picture in place of YUM. It is a feature-rich command-line tool that includes all the functionality in one command (APT), such as apt-cache, apt-search, dpkg, apt-cdrom, apt-config, apt-key, etc., along with several other unique features. For example, we can easily install .deb packages through APT, something we cannot do through apt-get; more such features are included in the APT command. APT superseded apt-get because of features missing from apt-get that were never added.
+
+apt-get is part of the Advanced Packaging Tool (APT) suite.
It is a powerful command-line tool used to automatically download and install new software packages, upgrade existing software packages, update the package list index, and upgrade an entire Debian-based system.
+
+```
+# apt-cache policy
+Package files:
+ 100 /var/lib/dpkg/status
+ release a=now
+ 500 http://ppa.launchpad.net/peek-developers/stable/ubuntu artful/main amd64 Packages
+ release v=17.10,o=LP-PPA-peek-developers-stable,a=artful,n=artful,l=Peek stable releases,c=main,b=amd64
+ origin ppa.launchpad.net
+ 500 http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful/main amd64 Packages
+ release v=17.10,o=LP-PPA-notepadqq-team-notepadqq,a=artful,n=artful,l=Notepadqq,c=main,b=amd64
+ origin ppa.launchpad.net
+ 500 http://dl.google.com/linux/chrome/deb stable/main amd64 Packages
+ release v=1.0,o=Google, Inc.,a=stable,n=stable,l=Google,c=main,b=amd64
+ origin dl.google.com
+ 500 https://download.docker.com/linux/ubuntu artful/stable amd64 Packages
+ release o=Docker,a=artful,l=Docker CE,c=stable,b=amd64
+ origin download.docker.com
+ 500 http://security.ubuntu.com/ubuntu artful-security/multiverse amd64 Packages
+ release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=multiverse,b=amd64
+ origin security.ubuntu.com
+ 500 http://security.ubuntu.com/ubuntu artful-security/universe amd64 Packages
+ release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=universe,b=amd64
+ origin security.ubuntu.com
+ 500 http://security.ubuntu.com/ubuntu artful-security/restricted i386 Packages
+ release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=restricted,b=i386
+ origin security.ubuntu.com
+.
+.
+
+ origin in.archive.ubuntu.com
+ 500 http://in.archive.ubuntu.com/ubuntu artful/restricted amd64 Packages
+ release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=restricted,b=amd64
+ origin in.archive.ubuntu.com
+ 500 http://in.archive.ubuntu.com/ubuntu artful/main i386 Packages
+ release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=main,b=i386
+ origin in.archive.ubuntu.com
+ 500 http://in.archive.ubuntu.com/ubuntu artful/main amd64 Packages
+ release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=main,b=amd64
+ origin in.archive.ubuntu.com
+Pinned packages:
+
+```
+
+### How To List The Enabled Repositories on openSUSE
+
+openSUSE systems use the zypper package manager, hence we can use it to get this information.
+
+Zypper is a command-line package manager for SUSE and openSUSE distributions. It's used to install, update, search, and remove packages, manage repositories, perform various queries, and more. Zypper is the command-line interface to the ZYpp system management library (libzypp).
+
+**Suggested Read :** [Zypper Command To Manage Packages On openSUSE & suse Systems][12]
+
+```
+# zypper repos
+
+# | Alias | Name | Enabled | GPG Check | Refresh
+--+-----------------------+-----------------------------------------------------+---------+-----------+--------
+1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes
+2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes
+3 | home_lazka0_ql-stable | Stable Quod Libet / Ex Falso Builds (openSUSE_42.1) | Yes | (r ) Yes | No
+4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | Yes
+5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes
+6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes
+7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes
+
+```
+
+List repositories with their URI:
+ +``` +# zypper lr -u + +# | Alias | Name | Enabled | GPG Check | Refresh | URI +--+-----------------------+-----------------------------------------------------+---------+-----------+---------+--------------------------------------------------------------------------------- +1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes | http://ftp.gwdg.de/pub/linux/packman/suse/openSUSE_Leap_42.1/ +2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes | http://dl.google.com/linux/chrome/rpm/stable/x86_64 +3 | home_lazka0_ql-stable | Stable Quod Libet / Ex Falso Builds (openSUSE_42.1) | Yes | (r ) Yes | No | http://download.opensuse.org/repositories/home:/lazka0:/ql-stable/openSUSE_42.1/ +4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/distribution/leap/42.1/repo/non-oss/ +5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/distribution/leap/42.1/repo/oss/ +6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes | http://download.opensuse.org/update/leap/42.1/oss/ +7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/update/leap/42.1/non-oss/ + +``` + +List Repositories by priority. 
+
+```
+# zypper lr -p
+
+# | Alias | Name | Enabled | GPG Check | Refresh | Priority
+--+-----------------------+-----------------------------------------------------+---------+-----------+---------+---------
+1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes | 99
+2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes | 99
+3 | home_lazka0_ql-stable | Stable Quod Libet / Ex Falso Builds (openSUSE_42.1) | Yes | (r ) Yes | No | 99
+4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | Yes | 99
+5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes | 99
+6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes | 99
+7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes | 99
+
+```
+
+### How To List The Enabled Repositories on ArchLinux
+
+Arch Linux-based systems use the pacman package manager, hence we can use it to get this information.
+
+pacman stands for package manager utility. It is a command-line utility to install, build, remove, and manage Arch Linux packages. pacman uses libalpm (the Arch Linux Package Management (ALPM) library) as a back end to perform all these actions.
+
+**Suggested Read :** [Pacman Command To Manage Packages On Arch Linux Based Systems][13]
+
+```
+# pacman -Syy
+:: Synchronizing package databases...
+ core 132.6 KiB 1524K/s 00:00 [############################################] 100%
+ extra 1859.0 KiB 750K/s 00:02 [############################################] 100%
+ community 3.5 MiB 149K/s 00:24 [############################################] 100%
+ multilib 182.7 KiB 1363K/s 00:00 [############################################] 100%
+
+```
+
+### How To List The Enabled Repositories on Linux using INXI Utility
+
+inxi is a nifty tool to check hardware information on Linux; it offers a wide range of options to get all the hardware information on a Linux system, which I have not found in any other utility available for Linux.
It was forked from the ancient and mindbendingly perverse yet ingenious infobash, by locsmif.
+
+inxi is a script that quickly shows system hardware, CPU, drivers, Xorg, Desktop, Kernel, GCC version(s), Processes, RAM usage, and a wide variety of other useful information; it is also used as a forum technical-support and debugging tool.
+
+Additionally, this utility displays repository data for distributions such as RHEL, CentOS, Fedora, Debian, Ubuntu, Linux Mint, Arch Linux, openSUSE, Manjaro, etc.
+
+**Suggested Read :** [inxi – A Great Tool to Check Hardware Information on Linux][14]
+
+```
+# inxi -r
+Repos: Active apt sources in file: /etc/apt/sources.list
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety main restricted
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates main restricted
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety universe
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates universe
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety multiverse
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates multiverse
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-backports main restricted universe multiverse
+ deb http://security.ubuntu.com/ubuntu yakkety-security main restricted
+ deb http://security.ubuntu.com/ubuntu yakkety-security universe
+ deb http://security.ubuntu.com/ubuntu yakkety-security multiverse
+ Active apt sources in file: /etc/apt/sources.list.d/arc-theme.list
+ deb http://download.opensuse.org/repositories/home:/Horst3180/xUbuntu_16.04/ /
+ Active apt sources in file: /etc/apt/sources.list.d/snwh-ubuntu-pulp-yakkety.list
+ deb http://ppa.launchpad.net/snwh/pulp/ubuntu yakkety main
+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-list-the-enabled-active-repositories-in-linux/
+
+作者:[Prakash Subramanian][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/how-to-add-enable-disable-a-repository-dnf-yum-config-manager-on-linux/ +[2]: https://www.2daygeek.com/how-to-list-installed-packages-by-size-largest-on-linux/ +[3]: https://www.2daygeek.com/how-to-view-list-the-available-packages-updates-in-linux/ +[4]: https://www.2daygeek.com/how-to-view-a-particular-package-installed-updated-upgraded-removed-erased-date-on-linux/ +[5]: https://www.2daygeek.com/how-to-view-detailed-information-about-a-package-in-linux/ +[6]: https://www.2daygeek.com/how-to-search-if-a-package-is-available-on-your-linux-distribution-or-not/ +[7]: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/ +[8]: https://www.2daygeek.com/list-of-graphical-frontend-tool-for-linux-package-manager/ +[9]: https://www.2daygeek.com/list-of-command-line-package-manager-for-linux/ +[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[11]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[12]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[13]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[14]: https://www.2daygeek.com/inxi-system-hardware-information-on-linux/ diff --git a/sources/tech/20181011 The First Beta of Haiku is Released After 16 Years of Development.md b/sources/tech/20181011 The First Beta of Haiku is Released After 16 Years of Development.md new file mode 100644 index 0000000000..b6daaef053 --- /dev/null +++ b/sources/tech/20181011 The First Beta of Haiku is Released After 16 Years of Development.md @@ -0,0 +1,87 @@ +The First Beta of Haiku is Released After 16 Years of Development +====== +There are a number of small operating systems out there that are designed to replicate the past. 
Haiku is one of those. We will look at where Haiku came from and what the new release has to offer.
+
+![Haiku OS desktop screenshot][1]Haiku desktop
+
+### What is Haiku?
+
+Haiku’s history begins with the now defunct [Be Inc][2]. Be Inc was founded by former Apple executive [Jean-Louis Gassée][3] after he was ousted by CEO [John Sculley][4]. Gassée wanted to create a new operating system from the ground up. BeOS was created with digital media work in mind and was designed to take advantage of the most modern hardware of the time. Originally, Be Inc attempted to create their own platform encompassing both hardware and software. The result was called the [BeBox][5]. After the BeBox failed to sell well, Be turned its attention to BeOS.
+
+In the 1990s, Apple was looking for a new operating system to replace the aging Classic Mac OS. The two contenders were Gassée’s BeOS and Steve Jobs’ NeXTSTEP. In the end, Apple went with NeXTSTEP. Be tried to license BeOS to hardware makers, but [in at least one case][6] Microsoft threatened to revoke a manufacturer’s Windows license if it sold BeOS machines. Eventually, Be Inc was sold to Palm in 2001 for $11 million, and BeOS was subsequently discontinued.
+
+Following the news of Palm’s purchase, a number of loyal fans decided they wanted to keep the operating system alive. The original name of the project was OpenBeOS, but it was changed to Haiku to avoid infringing on Palm’s trademarks. The name is a reference to the [haikus][7] used as error messages by many of the applications. Haiku is written completely from scratch and is compatible with BeOS.
+
+### Why Haiku?
+
+According to the project’s website, [Haiku][8] “is a fast, efficient, simple to use, easy to learn, and yet very powerful system for computer users of all levels”. Haiku comes with a kernel that has been customized for performance.
Like FreeBSD, there is a “single team writing everything from the kernel, drivers, userland services, toolkit, and graphics stack to the included desktop applications and preflets”.
+
+### New Features in Haiku Beta Release
+
+A number of new features have been introduced since the release of Alpha 4.1. (Please note that Haiku is a passion project and all the devs are part-time, so they can’t spend as much time working on Haiku as they would like.)
+
+![Haiku OS software][9]
+HaikuDepot, Haiku’s package manager
+
+One of the biggest features is the inclusion of a complete package management system. HaikuDepot allows you to sort through many applications. Many are built specifically for Haiku, but a number have been ported to the platform, such as [LibreOffice][10], [Otter Browser][11], and [Calligra][12]. Interestingly, each Haiku package is [“a special type of compressed filesystem image, which is ‘mounted’ upon installation”][13]. There is also a command line interface for package management named `pkgman`.
+
+Another big feature is an upgraded browser. Haiku was able to hire a developer to work full-time for a year to improve the performance of WebPositive, the built-in browser. This included an update to a newer version of WebKit. WebPositive will now play YouTube videos properly.
+
+![Haiku OS WebPositive browser][14]
+WebPositive, Haiku’s built-in browser
+
+Other features include:
+
+  * A completely rewritten network preflet
+  * User interface cleanup
+  * Media subsystem improvements, including better streaming support, HDA driver improvements, and FFmpeg decoder plugin improvements
+  * Improved native RemoteDesktop
+  * Added EFI bootloader and GPT support
+  * Updated Ethernet & WiFi drivers
+  * Updated filesystem drivers
+  * General system stabilization
+  * Experimental Bluetooth stack
+
+
+
+### Thoughts on Haiku OS
+
+I have been following Haiku for many years.
I’ve installed and played with the nightly builds a dozen times over the last couple of years. I even took some time to start learning one of its programming languages, so that I could write apps. But I got busy with other things. + +I’m very conflicted about it. I like Haiku because it is a neat non-Linux project, but it is only just getting features that everyone else takes for granted, like a package manager. + +If you’ve got a couple of minutes, download the [ISO][15] and install it on the virtual machine of your choice. You just might like it. + +Have you ever used Haiku or BeOS? If so, what are your favorite features? Let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][16]. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/haiku-os-release/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/haiku.jpg +[2]: https://en.wikipedia.org/wiki/Be_Inc. 
+[3]: https://en.wikipedia.org/wiki/Jean-Louis_Gass%C3%A9e +[4]: https://en.wikipedia.org/wiki/John_Sculley +[5]: https://en.wikipedia.org/wiki/BeBox +[6]: https://birdhouse.org/beos/byte/30-bootloader/ +[7]: https://en.wikipedia.org/wiki/Haiku +[8]: https://www.haiku-os.org/about/ +[9]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/haiku-depot.png +[10]: https://www.libreoffice.org/ +[11]: https://itsfoss.com/otter-browser-review/ +[12]: https://www.calligra.org/ +[13]: https://www.haiku-os.org/get-haiku/release-notes/ +[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/webpositive.jpg +[15]: https://www.haiku-os.org/get-haiku +[16]: http://reddit.com/r/linuxusersgroup diff --git a/translated/talk/20180117 How to get into DevOps.md b/translated/talk/20180117 How to get into DevOps.md deleted file mode 100644 index ec169be76f..0000000000 --- a/translated/talk/20180117 How to get into DevOps.md +++ /dev/null @@ -1,145 +0,0 @@ - -DevOps 实践指南 -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E) - -在去年大概一年的时间里,我注意到对“Devops 实践”感兴趣的开发人员和系统管理员突然有了明显的增加。这样的变化也合理:现在开发者只要花很少的钱,调用一些 API, 就能单枪匹马地在一整套分布式基础设施上运行自己的应用, 在这个时代,开发和运维的紧密程度前所未有。我看过许多博客和文章介绍很酷的 DevOps 工具和相关思想,但是给那些希望践行 DevOps 的人以指导和建议的内容,我却很少看到。 - -这篇文章的目的就是描述一下如何去实践。我的想法基于 Reddit 上 [devops][1] 的一些访谈、聊天和深夜讨论,还有一些随机谈话,一般都发生在享受啤酒和美食的时候。如果你已经开始这样实践,我对你的反馈很感兴趣,请通过 [我的博客][2] 或者 [Twitter][3] 联系我,也可以直接在下面评论。我很乐意听到你们的想法和故事。 - -### 古代的 IT - -了解历史是搞清楚未来的关键,DevOps 也不例外。想搞清楚 DevOps 运动的普及和流行,去了解一下上世纪 90 年代后期和 21 世纪前十年 IT 的情况会有帮助。这是我的经验。 - -我的第一份工作是在一家大型跨国金融服务公司做 Windows 系统管理员。当时给计算资源扩容需要给 Dell 打电话 (或者像我们公司那样打给 CDW ),并下一个价值数十万美元的订单,包含服务器、网络设备、电缆和软件,所有这些都要运到在线或离线的数据中心去。虽然 VMware 仍在尝试说服企业使用虚拟机运行他们的“性能敏感”型程序是更划算的,但是包括我们在内的很多公司都还忠于使用他们的物理机运行应用。 - 
-在我们技术部门,有一个专门做数据中心工程和操作的完整团队,他们的工作包括价格谈判,让荒唐的租赁月费能够下降一点点,还包括保证我们的系统能够正常冷却(如果设备太多,这个事情的难度会呈指数增长)。如果这个团队足够幸运足够有钱,境外数据中心的工作人员对我们所有的服务器型号又都有足够的了解,就能避免在盘后交易中不小心扯错东西。那时候亚马逊 AWS 和 Rackspace 逐渐开始加速扩张,但还远远没到临界规模。 - -当时我们还有专门的团队来保证硬件上运行着的操作系统和软件能够按照预期工作。这些工程师负责设计可靠的架构以方便给系统打补丁,监控和报警,还要定义基础镜像 (gold image) 的内容。这些大都是通过很多手工实验完成的,很多手工实验是为了编写一个运行说明书 (runbook) 来描述要做的事情,并确保按照它执行后的结果确实在预期内。在我们这么大的组织里,这样做很重要,因为一线和二线的技术支持都是境外的,而他们的培训内容只覆盖到了这些运行说明而已。 - -(这是我职业生涯前三年的世界。我那时候的梦想是成为制定金本位制的人!) - -软件发布则完全是另外一头怪兽。无可否认,我在这方面并没有积累太多经验。但是,从我收集的故事(和最近的经历)来看,当时大部分软件开发的日常大概是这样: - - * 开发人员按照技术和功能需求来编写代码,这些需求来自于业务分析人员的会议,但是会议并没有邀请开发人员参加。 - * 开发人员可以选择为他们的代码编写单元测试,以确保在代码里没有任何明显的疯狂行为,比如除以 0 但不抛出异常。 - * 然后开发者会把他们的代码标记为 "Ready for QA."(准备好了接受测试),质量保障的成员会把这个版本的代码发布到他们自己的环境中,这个环境和生产环境可能相似,也可能不相似,甚至和开发环境相比也不一定相似。 - * 故障会在几天或者几个星期内反馈到开发人员那里,这个时长取决于其他业务活动和优先事项。 - - - -虽然系统管理员和开发人员经常有不一致的意见,但是对“变更管理”的痛恨却是一致的。变更管理由高度规范的(就我当时的雇主而言)和非常有必要的规则和程序组成,用来管理一家公司应该什么时候做技术变更,以及如何做。很多公司都按照 [ITIL][4] 来操作, 简单的说,ITIL 问了很多和事情发生的原因、时间、地点和方式相关的问题,而且提供了一个过程,对产生最终答案的决定做审计跟踪。 - -你可能从我的简短历史课上了解到,当时 IT 的很多很多事情都是手工完成的。这导致了很多错误。错误又导致了很多财产损失。变更管理的工作就是尽量减少这些损失,它常常以这样的形式出现:不管变更的影响和规模大小,每两周才能发布部署一次。周五下午 4 点到周一早上 5 点 59 分这段时间,需要排队等候发布窗口。(讽刺的是,这种流程导致了更多错误,通常还是更严重的那种错误) - -### DevOps 不是专家团 - -你可能在想 "Carlos 你在讲啥啊,什么时候才能说到 Ansible playbooks? ",我热爱 Ansible, 但是请再等一会;下面这些很重要。 - -你有没有过被分配到过需要跟"DevOps"小组打交道的项目?你有没有依赖过“配置管理”或者“持续集成/持续交付”小组来保证业务流水线设置正确?你有没有在代码开发完的数周之后才参加发布部署的会议? - -如果有过,那么你就是在重温历史,这个历史是由上面所有这些导致的。 - -出于本能,我们喜欢和像自己的人一起工作,这会导致[筒仓][5]的行成。很自然,这种人类特质也会在工作场所表现出来是不足为奇的。我甚至在一个 250 人的创业公司里见到过这样的现象,当时我在那里工作。刚开始的时候,开发人员都在聚在一起工作,彼此深度协作。随着代码变得复杂,开发相同功能的人自然就坐到了一起,解决他们自己的复杂问题。然后按功能划分的小组很快就正式形成了。 - -在我工作过的很多公司里,系统管理员和开发人员不仅像这样形成了天然的筒仓,而且彼此还有激烈的对抗。开发人员的环境出问题了或者他们的权限太小了,就会对系统管理员很恼火。系统管理员怪开发者无时不刻的不在用各种方式破坏他们的环境,怪开发人员申请的计算资源严重超过他们的需要。双方都不理解对方,更糟糕的是,双方都不愿意去理解对方。 - -大部分开发人员对操作系统,内核或计算机硬件都不感兴趣。同样的,大部分系统管理员,即使是 Linux 的系统管理员,也都不愿意学习编写代码,他们在大学期间学过一些 C 语言,然后就痛恨它,并且永远都不想再碰 IDE. 
所以,开发人员把运行环境的问题甩给围墙外的系统管理员,系统管理员把这些问题和甩过来的其他上百个问题放在一起,做一个优先级安排。每个人都很忙,心怀怨恨的等待着。DevOps 的目的就是解决这种矛盾。 - -DevOps 不是一个团队,CI/CD 也不是 Jira 系统的一个用户组。DevOps 是一种思考方式。根据这个运动来看,在理想的世界里,开发人员、系统管理员和业务相关人将作为一个团队工作。虽然他们可能不完全了解彼此的世界,可能没有足够的知识去了解彼此的积压任务,但他们在大多数情况下能有一致的看法。 - -把所有基础设施和业务逻辑都代码化,再串到一个发布部署流水线里,就像是运行在这之上的应用一样。这个理念的基础就是 DevOps. 因为大家都理解彼此,所以人人都是赢家。聊天机器人和易用的监控工具、可视化工具的兴起,背后的基础也是 DevOps. - -[Adam Jacob][6] 说的最好:"DevOps 就是企业往软件导向型过渡时我们用来描述操作的词" - -### 要实践 DevOps 我需要知道些什么 - -我经常被问到这个问题,它的答案,和同属于开放式的其他大部分问题一样:视情况而定。 - -现在“DevOps 工程师”在不同的公司有不同的含义。在软件开发人员比较多但是很少有人懂基础设施的小公司,他们很可能是在找有更多系统管理经验的人。而其他公司,通常是大公司或老公司或又大又老的公司,已经有一个稳固的系统管理团队了,他们在向类似于谷歌 [SRE][7] 的方向做优化,也就是“设计操作功能的软件工程师”。但是,这并不是金科玉律,就像其他技术类工作一样,这个决定很大程度上取决于他的招聘经理。 - -也就是说,我们一般是在找对深入学习以下内容感兴趣的工程师: - - * 如何管理和设计安全、可扩展的云上的平台(通常是在 AWS 上,不过微软的 Azure, 谷歌的 Cloud Platform,还有 DigitalOcean 和 Heroku 这样的 PaaS 提供商,也都很流行) - * 如何用流行的 [CI/CD][8] 工具,比如 Jenkins,Gocd,还有基于云的 Travis CI 或者 CircleCI,来构造一条优化的发布部署流水线,和发布部署策略。 - * 如何在你的系统中使用基于时间序列的工具,比如 Kibana,Grafana,Splunk,Loggly 或者 Logstash,来监控,记录,并在变化的时候报警,还有 - * 如何使用配置管理工具,例如 Chef,Puppet 或者 Ansible 做到“基础设施即代码”,以及如何使用像 Terraform 或 CloudFormation 的工具发布这些基础设施。 - - - -容器也变得越来越受欢迎。尽管有人对大规模使用 Docker 的现状[表示不满][9],但容器正迅速地成为一种很好的方式来实现在更少的操作系统上运行超高密度的服务和应用,同时提高它们的可靠性。(像 Kubernetes 或者 Mesos 这样的容器编排工具,能在宿主机故障的时候,几秒钟之内重新启动新的容器。)考虑到这些,掌握 Docker 或者 rkt 以及容器编排平台的知识会对你大有帮助。 - -如果你是希望做 DevOps 实践的系统管理员,你还需要知道如何写代码。Python 和 Ruby 是 DevOps 领域的流行语言,因为他们是可移植的(也就是说可以在任何操作系统上运行),快速的,而且易读易学。它们还支撑着这个行业最流行的配置管理工具(Ansible 是使用 Python 写的,Chef 和 Puppet 是使用 Ruby 写的)以及云平台的 API 客户端(亚马逊 AWS, 微软 Azure, 谷歌 Cloud Platform 的客户端通常会提供 Python 和 Ruby 语言的版本)。 - -如果你是开发人员,也希望做 DevOps 的实践,我强烈建议你去学习 Unix,Windows 操作系统以及网络基础知识。虽然云计算把很多系统管理的难题抽象化了,但是对慢应用的性能做 debug 的时候,你知道操作系统如何工作的就会有很大的帮助。下文包含了一些这个主题的图书。 - -如果你觉得这些东西听起来内容太多,大家都是这么想的。幸运的是,有很多小项目可以让你开始探索。其中一个启动项目是 Gary Stafford 的[选举服务](https://github.com/garystafford/voter-service), 一个基于 Java 的简单投票平台。我们要求面试候选人通过一个流水线将该服务从 GitHub 部署到生产环境基础设施上。你可以把这个服务与 Rob Mile 写的了不起的 DevOps 
[入门教程](https://github.com/maxamg/cd-office-hours)结合起来,学习如何编写流水线。 - -还有一个熟悉这些工具的好方法,找一个流行的服务,然后只使用 AWS 和配置管理工具来搭建这个服务所需要的基础设施。第一次先手动搭建,了解清楚要做的事情,然后只用 CloudFormation (或者 Terraform) 和 Ansible 重写刚才的手动操作。令人惊讶的是,这就是我们基础设施开发人员为客户所做的大部分日常工作,我们的客户认为这样的工作非常有意义! - -### 需要读的书 - -如果你在找 DevOps 的其他资源,下面这些理论和技术书籍值得一读。 - -#### 理论书籍 - - * Gene Kim 写的 [The Phoenix Project (凤凰项目)][10]。这是一本很不错的书,内容涵盖了我上文解释过的历史(写的更生动形象),描述了一个运行在敏捷和 DevOps 之上的公司向精益前进的过程。 - * Terrance Ryan 写的 [Driving Technical Change (布道之道)][11]。非常好的一小本书,讲了大多数技术型组织内的常见性格特点以及如何和他们打交道。这本书对我的帮助比我想象的更多。 - * Tom DeMarco 和 Tim Lister 合著的 [Peopleware (人件)][12]。管理工程师团队的经典图书,有一点过时,但仍然很有价值。 - * Tom Limoncelli 写的 [Time Management for System Administrators (时间管理: 给系统管理员)][13]。这本书主要面向系统管理员,它对很多大型组织内的系统管理员生活做了深入的展示。如果你想了解更多系统管理员和开发人员之间的冲突,这本书可能解释了更多。 - * Eric Ries 写的 [The Lean Startup (精益创业)][14]。描述了 Eric 自己的 3D 虚拟形象公司,IMVU, 发现了如何精益工作,快速失败和更快盈利。 - * Jez Humble 和他的朋友写的[Lean Enterprise (精益企业)][15]。这本书是对精益创业做的改编,以更适应企业,两本书都很棒,都很好的解释了 DevOps 背后的商业动机。 - * Kief Morris 写的 [Infrastructure As Code (基础设施即代码)][16]。关于 "基础设施即代码" 的非常好的入门读物!很好的解释了为什么所有公司都有必要采纳这种做法。 - * Betsy Beyer, Chris Jones, Jennifer Petoff 和 Niall Richard Murphy 合著的 [Site Reliability Engineering (站点可靠性工程师)][17]。一本解释谷歌 SRE 实践的书,也因为是 "DevOps 诞生之前的 DevOps" 被人熟知。在如何处理运行时间、时延和保持工程师快乐方面提供了有趣的看法。 - - - -#### 技术书籍 - -如果你想找的是让你直接跟代码打交道的书,看这里就对了。 - - * W. Richard Stevens 的 [TCP/IP Illustrated (TCP/IP 详解)][18]。这是一套经典的(也可以说是最全面的)讲解基本网络协议的巨著,重点介绍了 TCP/IP 协议族。如果你听说过 1,2, 3,4 层网络,而且对深入学习他们感兴趣,那么你需要这本书。 - * Evi Nemeth, Trent Hein 和 Ben Whaley 合著的 [UNIX and Linux System Administration Handbook (UNIX/Linux 系统管理员手册)][19]。一本很好的入门书,介绍 Linux/Unix 如何工作以及如何使用。 - * Don Jones 和 Jeffrey Hicks 合著的 [Learn Windows Powershell In A Month of Lunches (Windows PowerShell实战指南)][20]. 
如果你在 Windows 系统下做自动化任务,你需要学习怎么使用 Powershell。这本书能够帮助你。Don Jones 是这方面著名的 MVP。 - * 几乎所有 [James Turnbull][21] 写的东西,针对流行的 DevOps 工具,他发表了很好的技术入门读物。 - - - -不管是在那些把所有应用都直接部署在物理机上的公司,(现在很多公司仍然有充分的理由这样做)还是在那些把所有应用都做成 serverless 的先驱公司,DevOps 都很可能会持续下去。这部分工作很有趣,产出也很有影响力,而且最重要的是,它搭起桥梁衔接了技术和业务之间的缺口。DevOps 是一个值得期待的美好事物。 - -首次发表在 [Neurons Firing on a Keyboard][22]。使用 CC-BY-SA 协议。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/getting-devops - -作者:[Carlos Nunez][a] -译者:[belitex](https://github.com/belitex) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/carlosonunez -[1]:https://www.reddit.com/r/devops/ -[2]:https://carlosonunez.wordpress.com/ -[3]:https://twitter.com/easiestnameever -[4]:https://en.wikipedia.org/wiki/ITIL -[5]:https://www.psychologytoday.com/blog/time-out/201401/getting-out-your-silo -[6]:https://twitter.com/adamhjk/status/572832185461428224 -[7]:https://landing.google.com/sre/interview/ben-treynor.html -[8]:https://en.wikipedia.org/wiki/CI/CD -[9]:https://thehftguy.com/2016/11/01/docker-in-production-an-history-of-failure/ -[10]:https://itrevolution.com/book/the-phoenix-project/ -[11]:https://pragprog.com/book/trevan/driving-technical-change -[12]:https://en.wikipedia.org/wiki/Peopleware:_Productive_Projects_and_Teams -[13]:http://shop.oreilly.com/product/9780596007836.do -[14]:http://theleanstartup.com/ -[15]:https://info.thoughtworks.com/lean-enterprise-book.html -[16]:http://infrastructure-as-code.com/book/ -[17]:https://landing.google.com/sre/book.html -[18]:https://en.wikipedia.org/wiki/TCP/IP_Illustrated -[19]:http://www.admin.com/ -[20]:https://www.manning.com/books/learn-windows-powershell-in-a-month-of-lunches-third-edition -[21]:https://jamesturnbull.net/ -[22]:https://carlosonunez.wordpress.com/2017/03/02/getting-into-devops/ diff --git 
a/translated/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md b/translated/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md new file mode 100644 index 0000000000..a9ece78ef7 --- /dev/null +++ b/translated/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md @@ -0,0 +1,131 @@ +Linux vs Mac: Linux 比 Mac 好的七个原因 +====== +最近我们谈论了一些[为什么 Linux 比 Windows 好][1]的原因。毫无疑问, Linux 是个非常优秀的平台。但是它和其他的操作系统一样也会有缺点。对于某些专门的领域,像是游戏, Windows 当然更好。 而对于视频编辑等任务, Mac 系统可能更为方便。这一切都取决于你的爱好,以及你想用你的系统做些什么。在这篇文章中,我们将会介绍一些 Linux 相对于 Mac 更好的一些地方。 + +如果你已经在用 Mac 或者打算买一台 Mac 电脑,我们建议你仔细考虑一下,看看是改为使用 Linux 还是继续使用 Mac 。 + +### Linux 比 Mac 好的 7 个原因 + +![Linux vs Mac: 为什么 Linux 更好][2] + +Linux 和 macOS 都是类 Unix 操作系统,并且都支持 Unix 命令行、 bash 和其他一些命令行工具,相比于 Windows ,他们所支持的应用和游戏比较少。但缺点也仅仅如此。 + +平面设计师和视频剪辑师更加倾向于使用 Mac 系统,而 Linux 更加适合做开发、系统管理、运维的工程师。 + +那要不要使用 Linux 呢,为什么要选择 Linux 呢?下面是根据实际经验和理性分析给出的一些建议。 + +#### 1\. 价格 + +![Linux vs Mac: 为什么 Linux 更好][3] + +假设你只是需要浏览文件、看电影、下载图片、写文档、制作报表或者做一些类似的工作,并且你想要一个更加安全的系统。 + +那在这种情况下,你觉得花费几百块买个系统完成这项工作,或者花费更多直接买个 Macbook 划算吗?当然,最终的决定权还是在你。 + +买个装好 Mac 系统的电脑还是买个便宜的电脑,然后自己装上免费的 Linux 系统,这个要看你自己的偏好。就我个人而言,除了音视频剪辑创作之外,Linux 都非常好地用,而对于音视频方面,我更倾向于使用 Final Cut Pro (专业的视频编辑软件) 和 Logic Pro X (专业的音乐制作软件)(这两款软件都是苹果公司推出的)。 + +#### 2\. 硬件支持 + +![Linux vs Mac: 为什么 Linux 更好][4] + +Linux 支持多种平台. 无论你的电脑配置如何,你都可以在上面安装 Linux,无论性能好或者差,Linux 都可以运行。[即使你的电脑已经使用很久了, 你仍然可以通过选择安装合适的发行版让 Linux 在你的电脑上流畅的运行][5]. + +而 Mac 不同,它是苹果机专用系统。如果你希望买个便宜的电脑,然后自己装上 Mac 系统,这几乎是不可能的。一般来说 Mac 都是和苹果设备连在一起的。 + +这是[在非苹果系统上安装 Mac OS 的教程][6]. 这里面需要用到的专业技术以及可能遇到的一些问题将会花费你许多时间,你需要想好这样做是否值得。 + +总之,Linux 所支持的硬件平台很广泛,而 MacOS 相对而言则非常少。 + +#### 3\. 安全性 + +![Linux vs Mac: 为什么 Linux 更好][7] + +很多人都说 ios 和 Mac 是非常安全的平台。的确,相比于 Windows ,它确实比较安全,可并不一定有 Linux 安全。 + +我不是在危言耸听。 Mac 系统上也有不少恶意软件和广告,并且[数量与日俱增][8]。我认识一些不太懂技术的用户 使用着非常缓慢的 Mac 电脑并且为此苦苦挣扎。一项快速调查显示[浏览器恶意劫持软件][9]是罪魁祸首. 
+ +从来没有绝对安全的操作系统,Linux 也不例外。 Linux 也有漏洞,但是 Linux 发行版提供的及时更新弥补了这些漏洞。另外,到目前为止在 Linux 上还没有自动运行的病毒或浏览器劫持恶意软件的案例发生。 + +这可能也是一个你应该选择 Linux 而不是 Mac 的原因。 + +#### 4\. 可定制性与灵活性 + +![Linux vs Mac: 为什么 Linux 更好][10] + +如果你有不喜欢的东西,自己定制或者修改它都行。 + +举个例子,如果你不喜欢 Ubuntu 18.04.1 的 [Gnome 桌面环境][11],你可以换成 [KDE Plasma][11]。 你也可以尝试一些 [Gnome 扩展][12]丰富你的桌面选择。这种灵活性和可定制性在 Mac OS 是不可能有的。 + +除此之外你还可以根据需要修改一些操作系统的代码(但是可能需要一些专业知识)以创造出适合你的系统。这个在 Mac OS 上可以做吗? + +另外你可以根据需要从一系列的 Linux 发行版进行选择。比如说,如果你想喜欢 Mac OS上的工作流, [Elementary OS][13] 可能是个不错的选择。你想在你的旧电脑上装上一个轻量级的 Linux 发行版系统吗?这里是一个[轻量级 Linux 发行版列表][5]。相比较而言, Mac OS 缺乏这种灵活性。 + +#### 5\. 使用 Linux 有助于你的职业生涯 [针对 IT 行业和科学领域的学生] + +![Linux vs Mac: 为什么 Linux 更好][14] + +对于 IT 领域的学生和求职者而言,这是有争议的但是也是有一定的帮助的。使用 Linux 并不会让你成为一个优秀的人,也不一定能让你得到任何与 IT 相关的工作。 + +但是当你开始使用 Linux 并且开始探索如何使用的时候,你将会获得非常多的经验。作为一名技术人员,你迟早会接触终端,学习通过命令行实现文件系统管理以及应用程序安装。你可能不会知道这些都是一些 IT 公司的新职员需要培训的内容。 + +除此之外,Linux 在就业市场上还有很大的发展空间。 Linux 相关的技术有很多( Cloud 、 Kubernetes 、Sysadmin 等),您可以学习,获得证书并获得一份相关的高薪的工作。要学习这些,你必须使用 Linux 。 + +#### 6\. 可靠 + +![Linux vs Mac: 为什么 Linux 更好][15] + +想想为什么服务器上用的都是 Linux 系统,当然是因为它可靠。 + +但是它为什么可靠呢,相比于 Mac OS ,它的可靠体现在什么方面呢? + +答案很简单——给用户更多的控制权,同时提供更好的安全性。在 Mac OS 上,你并不能完全控制它,这样做是为了让操作变得更容易,同时提高你的用户体验。使用 Linux ,你可以做任何你想做的事情——这可能会导致(对某些人来说)糟糕的用户体验——但它确实使其更可靠。 + +#### 7\. 开源 + +![Linux vs Mac: 为什么 Linux 更好][16] + +开源并不是每个人都关心的。但对我来说,Linux 最重要的优势在于它的开源特性。下面讨论的大多数观点都是开源软件的直接优势。 + +简单解释一下,如果是开源软件,你可以自己查看或者修改它。但对 Mac 来说,苹果拥有独家控制权。即使你有足够的技术知识,也无法查看 Mac OS 的源代码。 + +形象点说,Mac 驱动的系统可以让你得到一辆车,但缺点是你不能打开引擎盖看里面是什么。那可能非常糟糕! + +如果你想深入了解开源软件的优势,可以在 OpenSource.com 上浏览一下 [Ben Balter 的文章][17]。 + +### 总结 + +现在你应该知道为什么 Linux 比 Mac 好了吧,你觉得呢?上面的这些原因可以说服你选择 Linux 吗?如果不行的话那又是为什么呢? 
+ +在下方评论让我们知道你的想法。 + +Note: 这里的图片是以企鹅俱乐部为原型的。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/linux-vs-mac/ + +作者:[Ankush Das][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Ryze-Borgia](https://github.com/Ryze-Borgia) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[1]: https://itsfoss.com/linux-better-than-windows/ +[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Linux-vs-mac-featured.png +[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-1.jpeg +[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-4.jpeg +[5]: https://itsfoss.com/lightweight-linux-beginners/ +[6]: https://hackintosh.com/ +[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-2.jpeg +[8]: https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html +[9]: https://www.imore.com/how-to-remove-browser-hijack +[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-3.jpeg +[11]: https://www.gnome.org/ +[12]: https://itsfoss.com/best-gnome-extensions/ +[13]: https://elementary.io/ +[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-5.jpeg +[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-6.jpeg +[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-7.jpeg +[17]: https://opensource.com/life/15/12/why-open-source diff --git a/translated/tech/20180105 The Best Linux Distributions for 2018.md b/translated/tech/20180105 The Best Linux Distributions for 2018.md new file mode 100644 index 0000000000..ed373a6f6e --- /dev/null +++ b/translated/tech/20180105 The Best Linux Distributions for 2018.md 
@@ -0,0 +1,134 @@ +# 2018 年最好的 Linux 发行版 + +![Linux distros 2018](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-distros-2018.jpg?itok=Z8sdx4Zu "Linux distros 2018") +Jack Wallen 分享他挑选的 2018 年最好的 Linux 发行版。 + +这是新的一年,Linux仍有无限可能。而且许多 Linux 在 2017 年都带来了许多重大的改变,我相信在 2018 年它在服务器和桌面上将会带来更加稳定的系统和市场份额的增长。 + +对于那些期待迁移到开源平台(或是那些想要切换到)的人对于即将到来的一年,什么是最好的选择?如果你去 [Distrowatch][14] 找一下,你可能会因为众多的发行版而感到头晕,其中一些的排名在上升,而还有一些则恰恰相反。 + +因此,哪个 Linux 发行版将在 2018 年得到偏爱?我有我的看法。事实上,我现在就要和你们分享它。 + +跟我做的 [去年清单][15] 相似,我将会打破那张清单,使任务更加轻松。普通的 Linux 用户,至少包含以下几个类别:系统管理员,轻量级发行版,桌面,为物联网和服务器发行的版本。 + +根据这些,让我们开始 2018 年最好的 Linux 发行版清单吧。 + +### 对系统管理员最好的发行版 + +[Debian][16] 不常出现在“最好的”列表中。但他应该出现,为什么呢?如果了解到 Ubuntu 是基于 Debian 构建的(其实有很多的发行版都基于 Debian),你就很容易理解为什么这个发行版应该在许多“最好”清单中。但为什么是对管理员最好的呢?我想这是由于两个非常重要的原因: + +* 容易使用 +* 非常稳定 + +因为 Debain 使用 dpkg 和 apt 包管理,它使得使用环境非常简单。而且因为 Debian 提供了最稳定的 Linux 平台之一,它为许多事物提供了理想的环境:桌面,服务器,测试,开发。虽然 Debian 可能不包括去年获奖者发现的大量应用程序,但添加完成任务所需的任何/所有必要应用程序都非常容易。而且因为 Debian 可以根据你的选择安装桌面(Cinnamon, GNOME, KDE, LXDE, Mate, 或者 Xfce),你可以确定满足你需要的桌面。 + +![debian](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/debian.jpg?itok=XkHHG692 "debian") +图1:在 Debian 9.3 上运行的 GNOME 桌面。[使用][1] + +同时,Debain 在 Distrowatch 上名列第二。下载,安装,然后让它为你的工作而服务吧。Debain 尽管不那么华丽,但是对于管理员的工作来说十分有用。 + +### 最轻量级的发行版 + +轻量级的发行版对于一些老旧或是性能底下的机器有很好的支持。但是这不意味着这些发行版仅仅只为了老旧的硬件机器而生。如果你想要的是运行速度,你可能会想知道在你的现代机器上,这类发行版的运行速度。 + +在 2018 年上榜的最轻量级的发行版是 [Lubuntu][18]。尽管在这个类别里还有很多选择,而且尽管 Lubuntu 的大小与 Puppy Linux 相接近,但得益于它是 Ubuntu 家庭的一员,这弥补了它在易用性上的一些不足。但是不要担心,Lubuntu 对于硬件的要求并不高: + ++ CPU:奔腾 4 或者 奔腾 M 或者 AMD K8 以上 ++ 对于本地应用,512 MB 的内存就可以了,对于网络使用(Youtube,Google+,Google Drive, Facebook),建议 1 GB 以上。 + +Lubuntu 使用的是 LXDE 桌面,这意味着用户在初次使用这个 Linux 发行版时不会有任何问题。这份短清单中包含的应用(例如:Abiword, Gnumeric, 和 Firefox)都是非常轻量,且对用户友好的。 + +### [lubuntu,jpg][8] +![Lubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/lubuntu_2.jpg?itok=BkTnh7hU "Lubuntu") +图2:LXDE桌面。[使用][2] + +Lubuntu 能让十年以上的电脑如获新生。 + +### 
最好的桌面发行版 + +[Elementary OS][19] 连续两年都是我清单中最好的桌面发行版。对于许多人,[Linux Mint][20] 都是桌面发行版的领导。但是,与我来说,它在易用性和稳定性上很难打败 Elementary OS。例如,我确信 [Ubuntu][21] 17.10 的发布会让我迁移回 Canonical 的发行版。不久之后我会迁移到 新的使用 GNOME 桌面的 Ubuntu,但是我发现我少了 Elementary OS 外观,可用性和感觉。在使用 Ubuntu 两周以后,我又换回了 Elementary OS。 + +### [elementaros.jpg][9] + +![Elementary OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaros.jpg?itok=SRZC2vkg "Elementary OS") +图3:Pantheon 桌面是一件像艺术品一样的桌面。[使用][3] + +任何使用 Elementary OS 的感觉很好。Pantheon 桌面是缺省和用户友好做的最完美的桌面。每次更新,它都会变得更好。 + +尽管 Elementary OS 在 Distrowatch 中排名第六,但我预计到 2018 年第,它将至少上升至第三名。Elementary 开发人员非常关注用户的需求。他们倾听并且改进,他们目前的状态是如此之好,似乎所有他们都可以做的更好。 如果您需要一个具有出色可靠性和易用性的桌面,Elementary OS 就是你的发行版。 + +### 能够证明自己的最好的发行版 + +很长一段时间内,[Gentoo][22]都稳坐“展现你技能”的发行版的首座。但是,我认为现在 Gentoo 是时候让出“证明自己”的宝座给 [Linux From Svratch][23]。你可能认为这不公平,因为 LFS 实际上不是一个发行版,而是一个帮助用户创建自己的 Linux 发行版的项目。但是,有什么能比你自己创建一个自己的发行版更能证明自己所学的 Linux 知识的呢?在 LFS 项目中,你可以从头开始构建自定义的 Linux 系统。 所以,如果你真的有需要证明的东西,请下载 [Linux From Scratch Book][24] 并开始构建。 + +### 对于物联网最好的发行版 + +[Ubuntu Core][25] 已经是第二年赢得了该项的冠军。Ubuntu Core 是 Ubuntu 的一个小型版本,专为嵌入式和物联网设备而构建。使Ubuntu Core 如此完美的物联网的原因在于它将重点放在快照包 - 通用包上,可以安装到平台上,而不会干扰基本系统。这些快照包包含它们运行所需的所有内容(包括依赖项),因此不必担心安装会破坏操作系统(或任何其他已安装的软件)。 此外,快照非常容易升级并在隔离的沙箱中运行,这使它们成为物联网的理想解决方案。 + +Ubuntu Core 内置的另一个安全领域是登录机制。Ubuntu Core使用Ubuntu One ssh密钥,这样登录系统的唯一方法是通过上传的ssh密钥到[Ubuntu One帐户][26]。这为你的物联网设备提供了更高的安全性。 + +### [ubuntucore.jpg][10] +![ Ubuntu Core](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntucore.jpg?itok=Ydfq8NKH " Ubuntu Core") +图4:Ubuntu Core屏幕指示通过Ubuntu One用户启用远程访问。[使用][3] + +### 最好的服务器发行版 + +这让事情变得有些混乱。 主要原因是支持。 如果你需要商业支持,乍一看,你最好的选择可能是 [Red Hat Enterprise Linux][27]。红帽年复一年地证明了自己不仅是全球最强大的企业服务器平台之一,而且是单一最赚钱的开源业务(年收入超过20亿美元)。 + +但是,Red Hat 并不是唯一的服务器发行版。 实际上,Red Hat 甚至不支持企业服务器计算的各个方面。如果你关注亚马逊 Elastic Compute Cloud 上的云统计数据,Ubuntu 就会打败红帽企业Linux。根据[云市场][28],EC2 统计数据显示 RHEL 的部署率低于 10 万,而 Ubuntu 的部署量超过 20 万。 + +最终的结果是,Ubuntu 几乎已经成为云计算的领导者。如果你将它与 Ubuntu 
易于使用和管理容器结合起来,就会发现 Ubuntu Server 是服务器类别的明显赢家。而且,如果你需要商业支持,Canonical 将为你提供 [Ubuntu Advantage][29]。 + +对使用 Ubuntu Server 的一个警告是它默认为纯文本界面。如果需要,你可以安装 GUI,但使用Ubuntu Server 命令行非常简单(每个Linux管理员都应该知道)。 + +### [ubuntuserver.jpg][11] + +![Ubuntu server](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntuserver_1.jpg?itok=qtFSUlee "Ubuntu server") +图5:Ubuntu 服务器登录,通知更新。[使用][3] + +### 你最好的选择 + +正如我之前所说,这些选择都非常主观,但如果你正在寻找一个好的开始,那就试试这些发行版。每一个都可以用于非常特定的目的,并且比大多数做得更好。虽然你可能不同意我的特定选择,但你可能会同意 Linux 在每个方面都提供了惊人的可能性。并且,请继续关注下周更多“最佳发行版”选秀。 + +通过 Linux 基金会和 edX 的免费[“Linux 简介”][13]课程了解有关Linux的更多信息。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/best-linux-distributions-2018 + +作者:[JACK WALLEN ][a] +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/jlwallen +[1]:https://www.linux.com/licenses/category/used-permission +[2]:https://www.linux.com/licenses/category/used-permission +[3]:https://www.linux.com/licenses/category/used-permission +[4]:https://www.linux.com/licenses/category/used-permission +[5]:https://www.linux.com/licenses/category/used-permission +[6]:https://www.linux.com/licenses/category/creative-commons-zero +[7]:https://www.linux.com/files/images/debianjpg +[8]:https://www.linux.com/files/images/lubuntujpg-2 +[9]:https://www.linux.com/files/images/elementarosjpg +[10]:https://www.linux.com/files/images/ubuntucorejpg +[11]:https://www.linux.com/files/images/ubuntuserverjpg-1 +[12]:https://www.linux.com/files/images/linux-distros-2018jpg +[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux +[14]:https://distrowatch.com/ +[15]:https://www.linux.com/news/learn/sysadmin/best-linux-distributions-2017 +[16]:https://www.debian.org/ 
+[17]:https://www.parrotsec.org/ +[18]:http://lubuntu.me/ +[19]:https://elementary.io/ +[20]:https://linuxmint.com/ +[21]:https://www.ubuntu.com/ +[22]:https://www.gentoo.org/ +[23]:http://www.linuxfromscratch.org/ +[24]:http://www.linuxfromscratch.org/lfs/download.html +[25]:https://www.ubuntu.com/core +[26]:https://login.ubuntu.com/ +[27]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux +[28]:http://thecloudmarket.com/stats#/by_platform_definition +[29]:https://buy.ubuntu.com/?_ga=2.177313893.113132429.1514825043-1939188204.1510782993 diff --git a/translated/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md b/translated/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md new file mode 100644 index 0000000000..bdb2abca36 --- /dev/null +++ b/translated/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md @@ -0,0 +1,281 @@ +坚实的 React 基础:初学者指南 +============================================================ +![](https://cdn-images-1.medium.com/max/1000/1*wj5ujzj5wPQIKb0mIWLgNQ.png) +React.js crash course + +在过去的几个月里,我一直在使用 React 和 React-Native。我已经发布了两个作为产品的应用, [Kiven Aa][1](React)和 [Pollen Chat][2](React Native)。当我开始学习 React 时,我找了一些不仅仅是教我如何用 React 写应用的东西(一个博客,一个视频,一个课程,等等),我也想让它帮我做好面试准备。 + +我发现的大部分资料都集中在某一单一方面上。所以,这篇文章针对的是那些希望理论与实践完美结合的观众。我会告诉你一些理论,以便你了解幕后发生的事情,然后我会向你展示如何编写一些 React.js 代码。 + +如果你更喜欢视频形式,我在YouTube上传了整个课程,请去看看。 + + +让我们开始...... 
+ +> React.js 是一个用于构建用户界面的 JavaScript 库 + +你可以构建各种单页应用程序。例如,你希望在用户界面上实时显示更改的聊天软件和电子商务门户。 + +### 一切都是组件 + +React 应用由组件组成,数量多且互相嵌套。你或许会问:”可什么是组件呢?“ + +组件是可重用的代码段,它定义了某些功能在 UI 上的外观和行为。 比如,按钮就是一个组件。 + +让我们看看下面的计算器,当你尝试计算2 + 2 = 4 -1 = 3(简单的数学题)时,你会在Google上看到这个计算器。 + +![](https://cdn-images-1.medium.com/max/1000/1*NS9DykYDyYG7__UXJdysTA.png) +红色标记表示组件 + + + +如上图所示,这个计算器有很多区域,比如展示窗口和数字键盘。所有这些都可以是许多单独的组件或一个巨大的组件。这取决于在 React 中分解和抽象出事物的程度。你为所有这些组件分别编写代码,然后合并这些组件到一个容器中,而这个容器又是一个 React 组件。这样你就可以创建可重用的组件,最终的应用将是一组协同工作的单独组件。 + + + +以下是一个你践行了以上原则并可以用 React 编写计算器的方法。 + +``` + + + + + + . + . + . + + + + +``` + +没错!它看起来像HTML代码,然而并不是。我们将在后面的部分中详细探讨它。 + +### 设置我们的 Playground + +这篇教程专注于 React 的基础部分。它没有偏向 Web 或 React Native(开发移动应用)。所以,我们会用一个在线编辑器,这样可以在学习 React 能做什么之前避免 web 或 native 的具体配置。 + +我已经为读者在 [codepen.io][4] 设置好了开发环境。只需点开这个链接并且阅读所有 HTML 和 JavaScript 注释。 + +### 控制组件 + +我们已经了解到 React 应用是各种组件的集合,结构为嵌套树。因此,我们需要某种机制来将数据从一个组件传递到另一个组件。 + +#### 进入 “props” + +我们可以使用 `props` 对象将任意数据传递给我们的组件。 React 中的每个组件都会获取 `props` 对象。在学习如何使用 `props` 之前,让我们学习函数式组件。 + +#### a) 函数式组件 + +在 React 中,一个函数式组件通过 `props` 对象使用你传递给它的任意数据。它返回一个对象,该对象描述了 React 应渲染的 UI。函数式组件也称为无状态组件。 + + + +让我们编写第一个函数式组件。 + +``` +function Hello(props) { + return
<div>{props.name}</div>
+} +``` + + + +就这么简单。我们只是将 `props` 作为参数传递给了一个普通的 JavaScript 函数并且有返回值。嗯?返回了什么?那个 `
<div>{props.name}</div>
`。它是 JSX(JavaScript Extended)。我们将在后面的部分中详细了解它。 + +上面这个函数将在浏览器中渲染出以下HTML。 + +``` + +
+<div>
+  rajat
+</div>
+``` + + +> 阅读以下有关 JSX 的部分,这一部分解释了如何从我们的 JSX 代码中得到这段 HTML 。 + +如何在 React 应用中使用这个函数式组件? 很高兴你问了! 它就像下面这么简单。 + +``` + +``` + +属性 `name` 在上面的代码中变成了 `Hello` 组件里的 `props.name` ,属性 `age` 变成了 `props.age` 。 + +> 记住! 你可以将一个React组件嵌套在其他React组件中。 + +让我们在 codepen playground 使用 `Hello` 组件。用我们的 `Hello` 组件替换 `ReactDOM.render()` 中的 `div`,并在底部窗口中查看更改。 + +``` +function Hello(props) { + return
<div>{props.name}</div>
+}
+
+ReactDOM.render(<Hello name="rajat" />, document.getElementById('root'));
+```
+
+
+> 但是如果你的组件有一些内部状态怎么办?例如,像下面的计数器组件一样,它有一个内部计数变量,它在 + 和 - 键按下时发生变化。
+
+具有内部状态的 React 组件
+
+#### b) 基于类的组件
+
+基于类的组件有一个额外属性 `state` ,你可以用它存放组件的私有数据。我们可以用 class 表示法重写我们的 `Hello` 。由于这些组件具有状态,因此这些组件也称为有状态组件。
+
+```
+class Counter extends React.Component {
+  // this method should be present in your component
+  render() {
+    return (
+
+      <div>
+        {this.props.name}
+      </div>
+    );
+  }
+}
+```
+
+我们继承了 React 库的 React.Component 类以在 React 中创建基于类的组件。在[这里][5]了解更多有关 JavaScript 类的东西。
+
+`render()` 方法必须存在于你的类中,因为 React 会查找此方法,用以了解它应在屏幕上渲染的 UI。
+
+要使用这种内部状态,我们首先必须按以下方式初始化组件类的构造函数中的状态对象。
+
+```
+class Counter extends React.Component {
+  constructor() {
+    super();
+
+    // define the internal state of the component
+    this.state = {name: 'rajat'}
+  }
+
+  render() {
+    return (
+
+      <div>
+        {this.state.name}
+      </div>
+ ); + } +} + +// Usage: +// In your react app: +``` + +类似地,可以使用 this.props 对象在我们基于类的组件内访问 props。 + +要设置 state,请使用 `React.Component` 的 `setState()`。 在本教程的最后一部分中,我们将看到一个这样的例子。 + +> 提示:永远不要在 `render()` 函数中调用 `setState()`,因为 `setState` 会导致组件重新渲染,这将导致无限循环。 + +![](https://cdn-images-1.medium.com/max/1000/1*rPUhERO1Bnr5XdyzEwNOwg.png) +基于类的组件具有可选属性 “state”。 + +除了 `state` 以外,基于类的组件有一些声明周期方法比如 `componentWillMount()`。你可以利用这些去做初始化 `state`这样的事, 可是那将超出这篇文章的范畴。 + +### JSX + +JSX 是 JavaScript Extended 的一种简短形式,它是一种编写 React components 的方法。使用 JSX,你可以在类 XML 标签中获得 JavaScript 的全部力量。 + +你把 JavaScript 表达式放在`{}`里。下面是一些有效的 JSX 例子。 + + ``` + + + ; + +
+ + ``` + +它的工作方式是你编写 JSX 来描述你的 UI 应该是什么样子。像 Babel 这样的转码器将这些代码转换为一堆 `React.createElement()`调用。然后,React 库使用这些 `React.createElement()`调用来构造 DOM 元素的树状结构。对于 React 的网页视图或 React Native 的 Native 视图,它将保存在内存中。 + +React 接着会计算它如何在存储展示给用户的 UI 的内存中有效地模仿这个树。此过程称为 [reconciliation][7]。完成计算后,React会对屏幕上的真正 UI 进行更改。 + +![](https://cdn-images-1.medium.com/max/1000/1*ighKXxBnnSdDlaOr5-ZOPg.png) +React 如何将你的 JSX 转化为描述应用 UI 的树。 + +你可以使用 [Babel 的在线 REPL][8] 查看当你写一些 JSX 的时候,React 的真正输出。 + +![](https://cdn-images-1.medium.com/max/1000/1*NRuBKgzNh1nHwXn0JKHafg.png) +使用Babel REPL 转换 JSX 为普通 JavaScript + +> 由于 JSX 只是 `React.createElement()` 调用的语法糖,因此可以在没有 JSX 的情况下使用 React。 + +现在我们了解了所有的概念,所以我们已经准备好编写我们之前看到的作为GIF图的计数器组件。 + +代码如下,我希望你已经知道了如何在我们的 playground 上渲染它。 + +``` +class Counter extends React.Component { + constructor(props) { + super(props); + + this.state = {count: this.props.start || 0} + + // the following bindings are necessary to make `this` work in the callback + this.inc = this.inc.bind(this); + this.dec = this.dec.bind(this); + } + + inc() { + this.setState({ + count: this.state.count + 1 + }); + } + + dec() { + this.setState({ + count: this.state.count - 1 + }); + } + + render() { + return ( +
+      <div>
+        <button onClick={this.inc}>+</button>
+        <button onClick={this.dec}>-</button>
+        <span>{this.state.count}</span>
+      </div>
+ ); + } +} +``` + +以下是关于上述代码的一些重点。 + +1. JSX 使用 `驼峰命名` ,所以 `button` 的 属性是 `onClick`,不是我们在HTML中用的 `onclick`。 + +2. 绑定 `this` 是必要的,以便在回调时工作。 请参阅上面代码中的第8行和第9行。 + +最终的交互式代码位于[此处][9]。 + +有了这个,我们已经到了 React 速成课程的结束。我希望我已经阐明了 React 如何工作以及如何使用 React 来构建更大的应用程序,使用更小和可重用的组件。 + +-------------------------------------------------------------------------------- + +via: https://medium.freecodecamp.org/rock-solid-react-js-foundations-a-beginners-guide-c45c93f5a923 + +作者:[Rajat Saxena ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://medium.freecodecamp.org/@rajat1saxena +[1]:https://kivenaa.com/ +[2]:https://play.google.com/store/apps/details?id=com.pollenchat.android +[3]:https://facebook.github.io/react-native/ +[4]:https://codepen.io/raynesax/pen/MrNmBM +[5]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes +[6]:https://en.wikipedia.org/wiki/Source-to-source_compiler +[7]:https://reactjs.org/docs/reconciliation.html +[8]:https://babeljs.io/repl +[9]:https://codepen.io/raynesax/pen/QaROqK +[10]:https://twitter.com/rajat1saxena +[11]:mailto:rajat@raynstudios.com +[12]:https://www.youtube.com/channel/UCUmQhjjF9bsIaVDJUHSIIKw diff --git a/translated/tech/20180531 How to create shortcuts in vi.md b/translated/tech/20180531 How to create shortcuts in vi.md deleted file mode 100644 index 8616013e96..0000000000 --- a/translated/tech/20180531 How to create shortcuts in vi.md +++ /dev/null @@ -1,134 +0,0 @@ -如何在 vi 中创建快捷键 -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documentation-type-keys-yearbook.png?itok=Q-ELM2rn) - -学习使用 [vi 文本编辑器][1] 确实得花点功夫,不过 vi 的老手们都知道,经过一小会的锻炼,就可以将基本的 vi 操作融汇贯通。我们都知道“肌肉记忆”,那么学习 vi 的过程可以称之为“手指记忆”。 - -当你抓住了基础的操作窍门之后,你就可以定制化地配置 vi 的快捷键,从而让其处理的功能更为强大、流畅。 - -在开始之前,我想先感谢下 Chris Hermansen(他雇佣我写了这篇文章)仔细地检查了我的另一篇关于使用 vi 增强版本[Vim][2]的文章。当然还有他那些我未采纳的建议。 - 
-首先,我们来说明下面几个惯例设定。我会使用符号来代表按下 RETURN 或者 ENTER 键, 代表按下空格键,CTRL-x 表示一起按下 Control 键和 x 键 - -使用 `map` 命令来进行按键的映射。第一个例子是 `write` 命令,通常你之前保存使用这样的命令: - -``` -:w - -``` - -虽然这里只有三个键,不过考虑到我用这个命令实在是太频繁了,我更想“一键”搞定它。在这里我选择逗号键,比如这样: -``` -:map , :wCTRL-v - -``` - -这里的 CTRL-v 事实上是对 做了转义的操作,如果不加这个的话,默认 会作为这条映射指令的结束信号,而非映射中的一个操作。 CTRL-v 后面所跟的操作会翻译为用户的实际操作,而非该按键平常的操作。 - -在上面的映射中,右边的部分会在屏幕中显示为 `:w^M`,其中 `^` 字符就是指代 `control`,完整的意思就是 CTRL-m,表示就是系统中一行的结尾 - - -目前来说,就很不错了。如果我编辑、创建了十二次文件,这个键位映射就可以省掉了 2*12 次按键。不过这里没有计算你建立这个键位映射所花费的 11次按键(计算CTRL-v 和 冒号均为一次按键)。虽然这样已经省了很多次,但是每次打开 vi 都要重新建立这个映射也会觉得非常麻烦。 - -幸运的是,这里可以将这些键位映射放到 vi 的启动配置文件中,让其在每次启动的时候自动读取:文件为 `.exrc`,对于 vim 是 `.vimrc`。只需要将这些文件放在你的用户根目录中即可,并在文件中每行写入一个键位映射,之后就会在每次启动 vi 生效直到你删除对应的配置。 - -在继续说明 `map` 其他用法以及其他的缩写机制之前,这里在列举几个我常用提高文本处理效率的 map 设置: -``` -                                        Displays as - - - -:map X :xCTRL-v                    :x^M - - - -or - - - -:map X ,:qCTRL-v                   ,:q^M - -``` - -上面的 map 指令的意思是写入并关闭当前的编辑文件。其中 `:x` 是 vi 原本的命令,而下面的版本说明之前的 map 配置可以继续用作第二个 map 键位映射。 -``` -:map v :e                   :e - -``` - -上面的指令意思是在 vi 编辑器内部 切换文件,使用这个时候,只需要按 `v` 并跟着输入文件名,之后按 `` 键。 -``` -:map CTRL-vCTRL-e :e#CTRL-v    :e #^M - -``` - -`#` 在这里是 vi 中标准的符号,意思是最后使用的文件名。所以切换当前与上一个文件的方法就使用上面的映射。 -``` -map CTRL-vCTRL-r :!spell %>err &CTRL-v     :!spell %>err&^M - -``` - -(注意:在两个例子中出现的第一个 CRTL-v 在某些 vi 的版本中是不需要的)其中,`:!` 用来运行一个外部的(非 vi 内部的)命令。在这个拼写检查的例子中,`%` 是 vi 中的符号用来只带目前的文件, `>` 用来重定向拼写检查中的输出到 `err` 文件中,之后跟上 `&` 说明该命令是一个后台运行的任务,这样可以保证在拼写检查的同时还可以进行编辑文件的工作。这里我可以键入 `verr`(使用我之前定义的快捷键 `v` 跟上 `err`),进入 `spell` 输出结果的文件,之后再输入 `CTRL-e` 来回到刚才编辑的文件中。这样我就可以在拼写检查之后,使用 CTRL-r 来查看检查的错误,再通过 CTRL-e 返回刚才编辑的文件。 - -还用很多字符串输入的缩写,也使用了各种 map 命令,比如: -``` -:map! CTRL-o \fI - -:map! CTRL-k \fP - -``` - -这个映射允许你使用 CTRL-o 作为 `groff` 命令的缩写,从而让让接下来书写的单词有斜体的效果,并使用 CTRL-k 进行恢复 - -还有两个类似的映射: -``` -:map! rh rhinoceros - -:map! hi hippopotamus - -``` - -上面的也可以使用 `ab` 命令来替换,就像下面这样(如果想这么用的话,需要首先按顺序运行 1. `unmap! rh` 2. `umap! 
hi`): -``` -:ab rh rhinoceros - -:ab hi hippopotamus - -``` - -在上面 `map!` 的命令中,缩写会马上的展开成原有的单词,而在 `ab` 命令中,单词展开的操作会在输入了空格和标点之后才展开(不过在Vim 和 本机使用的 vi中,展开的形式与 `map!` 类似) - -想要取消刚才设定的按键映射,可以对应的输入 `:unmap`, `unmap!`, `:unab` - -在我使用的 vi 版本中,比较好用的候选映射按键包括 `g, K, q, v, V, Z`,控制字符包括:`CTRL-a, CTRL-c, CTRL-k, CTRL-n, CTRL-p, CTRL-x`;还有一些其他的字符如`#, *`,当然你也可以使用那些已经在 vi 中有过定义但不经常使用的字符,比如本文选择`X`和`I`,其中`X`表示删除左边的字符,并立刻左移当前字符。 - -最后,下面的命令 -``` -:map - -:map! - -:ab - -``` - -将会显示,目前所有的缩写和键位映射。 -will show all the currently defined mappings and abbreviations. - -希望上面的技巧能够更好地更高效地帮助你使用 vi。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/shortcuts-vi-text-editor - -作者:[Dan Sonnenschein][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/sd886393) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/dannyman -[1]:http://ex-vi.sourceforge.net/ -[2]:https://www.vim.org/ diff --git a/translated/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md b/translated/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md new file mode 100644 index 0000000000..c65e756ff4 --- /dev/null +++ b/translated/tech/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md @@ -0,0 +1,346 @@ +在 Ubuntu 18.04 LTS 上使用 KVM 配置无头虚拟化服务器 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2016/11/kvm-720x340.jpg) + +我们已经讲解了 [在 Ubuntu 18.04 上配置 Oracle VirtualBox][1] 无头服务器。在本教程中,我们将讨论如何使用 **KVM** 去配置无头虚拟化服务器,以及如何从一个远程客户端去管理访客系统。正如你所知道的,KVM(**K** ernel-based **v** irtual **m** achine)是开源的,是对 Linux 的完全虚拟化。使用 KVM,我们可以在几分钟之内,很轻松地将任意 Linux 服务器转换到一个完全的虚拟化环境中,以及部署不同种类的虚拟机,比如 GNU/Linux、*BSD、Windows 等等。 + +### 使用 KVM 配置无头虚拟化服务器 + +我在 Ubuntu 18.04 LTS 服务器上测试了本指南,但是它在其它的 Linux 发行版上也可以使用,比如,Debian、CentOS、RHEL 以及 Scientific 
Linux。这个方法完全适合哪些希望在没有任何图形环境的 Linux 服务器上,去配置一个简单的虚拟化环境。 + +基于本指南的目的,我将使用两个系统。 + +**KVM 虚拟化服务器:** + + * **宿主机操作系统** – 最小化安装的 Ubuntu 18.04 LTS(没有 GUI) + * **宿主机操作系统的 IP 地址**:192.168.225.22/24 + * **访客操作系统**(它将运行在 Ubuntu 18.04 的宿主机上):Ubuntu 16.04 LTS server + + + +**远程桌面客户端:** + + * **操作系统** – Arch Linux + + + +### 安装 KVM + +首先,我们先检查一下我们的系统是否支持硬件虚拟化。为此,需要在终端中运行如下的命令: +``` +$ egrep -c '(vmx|svm)' /proc/cpuinfo + +``` + +假如结果是 **zero (0)**,说明系统不支持硬件虚拟化,或者在 BIOS 中禁用了虚拟化。进入你的系统 BIOS 并检查虚拟化选项,然后启用它。 + +假如结果是 **1** 或者 **更大的数**,说明系统将支持硬件虚拟化。然而,在你运行上面的命令之前,你需要始终保持 BIOS 中的虚拟化选项是启用的。 + +或者,你也可以使用如下的命令去验证它。但是为了使用这个命令你需要先安装 KVM。 +``` +$ kvm-ok + +``` + +**示例输出:** + +``` +INFO: /dev/kvm exists +KVM acceleration can be used + +``` + +如果输出的是如下这样的错误,你仍然可以在 KVM 中运行访客虚拟机,但是它的性能将非常差。 +``` +INFO: Your CPU does not support KVM extensions +INFO: For more detailed results, you should run this as root +HINT: sudo /usr/sbin/kvm-ok + +``` + +当然,还有其它的方法来检查你的 CPU 是否支持虚拟化。更多信息参考接下来的指南。 + +接下来,安装 KVM 和在 Linux 中配置虚拟化环境所需要的其它包。 + +在 Ubuntu 和其它基于 DEB 的系统上,运行如下命令: +``` +$ sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker + +``` + +KVM 安装完成后,启动 libvertd 服务(如果它没有启动的话): +``` +$ sudo systemctl enable libvirtd + +$ sudo systemctl start libvirtd + +``` + +### 创建虚拟机 + +所有的虚拟机文件和其它的相关文件都保存在 **/var/lib/libvirt/** 下。ISO 镜像的默认路径是 **/var/lib/libvirt/boot/**。 + +首先,我们先检查一下是否有虚拟机。查看可用的虚拟机列表,运行如下的命令: +``` +$ sudo virsh list --all + +``` + +**示例输出:** + +``` +Id Name State +---------------------------------------------------- + +``` + +![][3] + +正如上面的截屏,现在没有可用的虚拟机。 + +现在,我们来创建一个。 + +例如,我们来创建一个有 512 MB 内存、1 个 CPU 核心、8 GB 硬盘的 Ubuntu 16.04 虚拟机。 +``` +$ sudo virt-install --name Ubuntu-16.04 --ram=512 --vcpus=1 --cpu host --hvm --disk path=/var/lib/libvirt/images/ubuntu-16.04-vm1,size=8 --cdrom /var/lib/libvirt/boot/ubuntu-16.04-server-amd64.iso --graphics vnc + +``` + +请确保在路径 **/var/lib/libvirt/boot/** 中有一个 Ubuntu 16.04 的 ISO 镜像文件,或者在上面命令中给定的其它路径中有相应的镜像文件。 + +**示例输出:** + +``` +WARNING Graphics 
requested but DISPLAY is not set. Not running virt-viewer. +WARNING No console to launch for the guest, defaulting to --wait -1 + +Starting install... +Creating domain... | 0 B 00:00:01 +Domain installation still in progress. Waiting for installation to complete. +Domain has shutdown. Continuing. +Domain creation completed. +Restarting guest. + +``` + +![][4] + +我们来分别讲解以上的命令和看到的每个选项的作用。 + + * **–name** : 这个选项定义虚拟机名字。在我们的案例中,这个虚拟机的名字是 **Ubuntu-16.04**。 + * **–ram=512** : 给虚拟机分配 512MB 内存。 + * **–vcpus=1** : 指明虚拟机中 CPU 核心的数量。 + * **–cpu host** : 通过暴露宿主机 CPU 的配置给访客系统来优化 CPU 属性。 + * **–hvm** : 要求完整的硬件虚拟化。 + * **–disk path** : 虚拟机硬盘的位置和大小。在我们的示例中,我分配了 8GB 的硬盘。 + * **–cdrom** : 安装 ISO 镜像的位置。请注意你必须在这个位置真的有一个 ISO 镜像。 + * **–graphics vnc** : 允许 VNC 从远程客户端访问虚拟机。 + + + +### 使用 VNC 客户端访问虚拟机 + +现在,我们在远程桌面系统上使用 SSH 登入到 Ubuntu 服务器上(虚拟化服务器),如下所示。 + +在这里,**sk** 是我的 Ubuntu 服务器的用户名,而 **192.168.225.22** 是它的 IP 地址。 + +运行如下的命令找出 VNC 的端口号。我们从一个远程系统上访问虚拟机需要它。 +``` +$ sudo virsh dumpxml Ubuntu-16.04 | grep vnc + +``` + +**示例输出:** + +``` + + +``` + +![][5] + +记下那个端口号 **5900**。安装任意的 VNC 客户端应用程序。在本指南中,我们将使用 TigerVnc。TigerVNC 是 Arch Linux 默认仓库中可用的客户端。在 Arch 上安装它,运行如下命令: +``` +$ sudo pacman -S tigervnc + +``` + +在安装有 VNC 客户端的远程客户端系统上输入如下的 SSH 端口转发命令。 + +``` +$ ssh sk@192.168.225.22 -L 5900:127.0.0.1:5900 +``` + +再强调一次,**192.168.225.22** 是我的 Ubuntu 服务器(虚拟化服务器)的 IP 地址。 + +然后,从你的 Arch Linux(客户端)打开 VNC 客户端。 + +在 VNC 服务器框中输入 **localhost:5900**,然后点击 **Connect** 按钮。 + +![][6] + +然后就像你在物理机上安装系统一样的方法开始安装 Ubuntu 虚拟机。 + +![][7] + +![][8] + +同样的,你可以根据你的服务器的硬件情况配置多个虚拟机。 + +或者,你可以使用 **virt-viewer** 实用程序在访客机器中安装操作系统。virt-viewer 在大多数 Linux 发行版的默认仓库中都可以找到。安装完 virt-viewer 之后,运行下列的命令去建立到虚拟机的访问连接。 +``` +$ sudo virt-viewer --connect=qemu+ssh://192.168.225.22/system --name Ubuntu-16.04 + +``` + +### 管理虚拟机 + +使用管理用户接口 virsh 从命令行去管理虚拟机是非常有趣的。命令非常容易记。我们来看一些例子。 + +查看运行的虚拟机,运行如下命令: +``` +$ sudo virsh list + +``` + +或者, +``` +$ sudo virsh list --all + +``` + +**示例输出:** + +``` + Id Name State 
+---------------------------------------------------- + 2 Ubuntu-16.04 running + +``` + +![][9] + +启动一个虚拟机,运行如下命令: +``` +$ sudo virsh start Ubuntu-16.04 + +``` + +或者,也可以使用虚拟机 id 去启动它。 + +![][10] + +正如在上面的截图所看到的,Ubuntu 16.04 虚拟机的 Id 是 2。因此,启动它时,你也可以像下面一样只指定它的 ID。 +``` +$ sudo virsh start 2 + +``` + +重启动一个虚拟机,运行如下命令: +``` +$ sudo virsh reboot Ubuntu-16.04 + +``` + +**示例输出:** + +``` +Domain Ubuntu-16.04 is being rebooted + +``` + +![][11] + +暂停一个运行中的虚拟机,运行如下命令: +``` +$ sudo virsh suspend Ubuntu-16.04 + +``` + +**示例输出:** + +``` +Domain Ubuntu-16.04 suspended + +``` + +让一个暂停的虚拟机重新运行,运行如下命令: +``` +$ sudo virsh resume Ubuntu-16.04 + +``` + +**示例输出:** + +``` +Domain Ubuntu-16.04 resumed + +``` + +关闭一个虚拟机,运行如下命令: +``` +$ sudo virsh shutdown Ubuntu-16.04 + +``` + +**示例输出:** + +``` +Domain Ubuntu-16.04 is being shutdown + +``` + +完全移除一个虚拟机,运行如下的命令: +``` +$ sudo virsh undefine Ubuntu-16.04 + +$ sudo virsh destroy Ubuntu-16.04 + +``` + +**示例输出:** + +``` +Domain Ubuntu-16.04 destroyed + +``` + +![][12] + +关于它的更多选项,建议你去查看 man 手册页: +``` +$ man virsh + +``` + +今天就到这里吧。开始在你的新的虚拟化环境中玩吧。对于研究和开发者、以及测试目的,KVM 虚拟化将是很好的选择,但它能做的远不止这些。如果你有充足的硬件资源,你可以将它用于大型的生产环境中。如果你还有其它好玩的发现,不要忘记在下面的评论区留下你的高见。 + +谢谢! 
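顺带一提,上面这些 `virsh` 子命令也很容易组合进脚本里。下面是一个小示意(其中 `virsh list --all` 的输出是用变量模拟的假设样例,并非真实运行结果),演示如何用 awk 从列表输出中筛出处于 running 状态的虚拟机名称:

```shell
#!/usr/bin/env bash
# 用变量模拟 `sudo virsh list --all` 的输出(假设的样例数据),
# 实际使用时可替换为:sample_output=$(sudo virsh list --all)
sample_output=' Id    Name           State
----------------------------------------------------
 2     Ubuntu-16.04   running
 -     Debian-9       shut off'

# 跳过表头两行,只打印第三列为 running 的虚拟机名称
echo "$sample_output" | awk 'NR > 2 && $3 == "running" { print $2 }'
# 输出:Ubuntu-16.04
```

实际环境中,字段位置若因 virsh 版本不同而略有变化,需要相应调整列号。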
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/setup-headless-virtualization-server-using-kvm-ubuntu/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/ +[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_001.png +[4]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_008-1.png +[5]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_002.png +[6]:http://www.ostechnix.com/wp-content/uploads/2016/11/VNC-Viewer-Connection-Details_005.png +[7]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_006.png +[8]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_007.png +[9]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-1.png +[10]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-2.png +[11]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_011-1.png +[12]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_012.png diff --git a/sources/tech/20180910 How To List An Available Package Groups In Linux.md b/translated/tech/20180910 How To List An Available Package Groups In Linux.md similarity index 69% rename from sources/tech/20180910 How To List An Available Package Groups In Linux.md rename to translated/tech/20180910 How To List An Available Package Groups In Linux.md index 754c2d0c3a..b192e6c5f0 100644 --- a/sources/tech/20180910 How To List An Available Package Groups In Linux.md +++ b/translated/tech/20180910 How 
To List An Available Package Groups In Linux.md @@ -1,43 +1,33 @@ -How To List An Available Package Groups In Linux +如何在 Linux 中列出可用的软件包组 ====== -As we know, if we want to install any packages in Linux we need to use the distribution package manager to get it done. +我们知道,如果想要在 Linux 中安装软件包,可以使用软件包管理器来进行安装。由于系统管理员需要频繁用到软件包管理器,所以它是 Linux 当中的一个重要工具。 -Package manager is playing major role in Linux as this used most of the time by admin. +但是如果想一次性安装一个软件包组,在 Linux 中有可能吗?又如何通过命令去实现呢? -If you would like to install group of package in one shot what would be the possible option. +在 Linux 中确实可以用软件包管理器来达到这样的目的。很多软件包管理器都有这样的选项来实现这个功能,但就我所知,`apt` 或 `apt-get` 软件包管理器却并没有这个选项。因此对基于 Debian 的系统,需要使用的命令是 `tasksel`,而不是 `apt`或 `apt-get` 这样的官方软件包管理器。 -Is it possible in Linux? if so, what would be the command for it. +在 Linux 中安装软件包组有很多好处。对于 LAMP 来说,安装过程会包含多个软件包,但如果安装软件包组命令来安装,只安装一个包就可以了。 -Yes, this can be done in Linux by using the package manager. Each package manager has their own option to perform this task, as i know apt or apt-get package manager doesn’t has this option. +当你的团队需要安装 LAMP,但不知道其中具体包含哪些软件包,这个时候软件包组就派上用场了。软件包组是 Linux 系统上一个很方便的工具,它能让你轻松地完成一组软件包的安装。 -For Debian based system we need to use tasksel command instead of official package managers called apt or apt-get. +软件包组是一组用于公共功能的软件包,包括系统工具、声音和视频。 安装软件包组的过程中,会获取到一系列的依赖包,从而大大节省了时间。 -What is the benefit if we install group of package in Linux? Yes, there is lot of benefit is available in Linux when we install group of package because if you want to install LAMP separately we need to include so many packages but that can be done using single package when we use group of package command. 
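软件包组的思路可以用一个极简的脚本来示意(下面的组名与成员列表均为虚构的示例,并非真实仓库数据):安装一个组,本质上就是把组名展开为一组成员包,再逐个安装。

```shell
#!/usr/bin/env bash
# 假设的“组名 -> 成员包”映射,仅用于演示软件包组的概念,并非真实仓库数据
group_members() {
  case "$1" in
    lamp-server) echo "apache2 mysql-server php libapache2-mod-php" ;;
    *)           echo "" ;;
  esac
}

# “安装”一个组,相当于把它展开成成员包后逐个安装
pkgs=$(group_members lamp-server)
count=$(echo $pkgs | wc -w)
echo "lamp-server 组共包含 $((count)) 个软件包:$pkgs"
```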
+**推荐阅读:** +**(#)** [如何在 Linux 上按照大小列出已安装的软件包][1] +**(#)** [如何在 Linux 上查看/列出可用的软件包更新][2] +**(#)** [如何在 Linux 上查看软件包的安装/更新/升级/移除/卸载时间][3] +**(#)** [如何在 Linux 上查看一个软件包的详细信息][4] +**(#)** [如何查看一个软件包是否在你的 Linux 发行版上可用][5] +**(#)** [萌新指导:一个可视化的 Linux 包管理工具][6] +**(#)** [老手必会:命令行软件包管理器的用法][7] -Say for example, as you get a request from Application team to install LAMP but you don’t know what are the packages needs to be installed, this is where group of package comes into picture. +### 如何在 CentOS/RHEL 系统上列出可用的软件包组 -Group option is a handy tool for Linux systems which will install Group of Software in a single click on your system without headache. +RHEL 和 CentOS 系统使用的是 RPM 软件包,因此可以使用 `yum` 软件包管理器来获取相关的软件包信息。 -A package group is a collection of packages that serve a common purpose, for instance System Tools or Sound and Video. Installing a package group pulls a set of dependent packages, saving time considerably. +`yum` 是 Yellowdog Updater, Modified 的缩写,它是一个用于基于 RPM 系统(例如 RHEL 和 CentOS)的,开源的命令行软件包管理工具。它是从分发库或其它第三方库中获取、安装、删除、查询和管理 RPM 包的主要工具。 -**Suggested Read :** -**(#)** [How To List Installed Packages By Size (Largest) On Linux][1] -**(#)** [How To View/List The Available Packages Updates In Linux][2] -**(#)** [How To View A Particular Package Installed/Updated/Upgraded/Removed/Erased Date On Linux][3] -**(#)** [How To View Detailed Information About A Package In Linux][4] -**(#)** [How To Search If A Package Is Available On Your Linux Distribution Or Not][5] -**(#)** [Newbies corner – A Graphical frontend tool for Linux Package Manager][6] -**(#)** [Linux Expert should knows, list of Command line Package Manager & Usage][7] - -### How To List An Available Package Groups In CentOS/RHEL Systems - -RHEL & CentOS systems are using RPM packages hence we can use the `Yum Package Manager` to get this information. 
- -YUM stands for Yellowdog Updater, Modified is an open-source command-line front-end package-management utility for RPM based systems such as Red Hat Enterprise Linux (RHEL) and CentOS. - -Yum is the primary tool for getting, installing, deleting, querying, and managing RPM packages from distribution repositories, as well as other third-party repositories. - -**Suggested Read :** [YUM Command To Manage Packages on RHEL/CentOS Systems][8] +**推荐阅读:** [使用 yum 命令在 RHEL/CentOS 系统上管理软件包][8] ``` # yum grouplist @@ -82,7 +72,7 @@ Done ``` -If you would like to list what are the packages is associated on it, run the below command. In this example we are going to list what are the packages is associated with “Performance Tools” group. +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 Performance Tools 组相关联的软件包。 ``` # yum groupinfo "Performance Tools" @@ -116,17 +106,17 @@ Group: Performance Tools ``` -### How To List An Available Package Groups In Fedora +### 如何在 Fedora 系统上列出可用的软件包组 -Fedora system uses DNF package manager hence we can use the Dnf Package Manager to get this information. +Fedora 系统使用的是 DNF 软件包管理器,因此可以通过 DNF 软件包管理器来获取相关的信息。 -DNF stands for Dandified yum. We can tell DNF, the next generation of yum package manager (Fork of Yum) using hawkey/libsolv library for backend. Aleš Kozumplík started working on DNF since Fedora 18 and its implemented/launched in Fedora 22 finally. +DNF 的含义是 Dandified yum。、DNF 软件包管理器是 YUM 软件包管理器的一个分支,它使用 hawkey/libsolv 库作为后端。从 Fedora 18 开始,Aleš Kozumplík 开始着手 DNF 的开发,直到在Fedora 22 开始加入到系统中。 -Dnf command is used to install, update, search & remove packages on Fedora 22 and later system. It automatically resolve dependencies and make it smooth package installation without any trouble. +`dnf` 命令可以在 Fedora 22 及更高版本上安装、更新、搜索和删除软件包, 它可以自动解决软件包的依赖关系并其顺利安装,不会产生问题。 -Yum replaced by DNF due to several long-term problems in Yum which was not solved. Asked why ? he did not patches the Yum issues. 
Aleš Kozumplík explains that patching was technically hard and YUM team wont accept the changes immediately and other major critical, YUM is 56K lines but DNF is 29K lies. So, there is no option for further development, except to fork. +由于一些长期未被解决的问题的存在,YUM 被 DNF 逐渐取代了。而 Aleš Kozumplík 的 DNF 却并未对 yum 的这些问题作出修补,他认为这是技术上的难题,YUM 团队也从不接受这些更改。而且 YUM 的代码量有 5.6 万行,而 DNF 只有 2.9 万行。因此已经不需要沿着 YUM 的方向继续开发了,重新开一个分支才是更好的选择。 -**Suggested Read :** [DNF (Fork of YUM) Command To Manage Packages on Fedora System][9] +**推荐阅读:** [在 Fedora 系统上使用 DNF 命令管理软件包][9] ``` # dnf grouplist @@ -180,7 +170,7 @@ Available Groups: ``` -If you would like to list what are the packages is associated on it, run the below command. In this example we are going to list what are the packages is associated with “Editor” group. +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 Editor 组相关联的软件包。 ``` @@ -215,13 +205,13 @@ Group: Editors zile ``` -### How To List An Available Package Groups In openSUSE System +### 如何在 openSUSE 系统上列出可用的软件包组 -openSUSE system uses zypper package manager hence we can use the zypper Package Manager to get this information. +openSUSE 系统使用的是 zypper 软件包管理器,因此可以通过 zypper 软件包管理器来获取相关的信息。 -Zypper is a command line package manager for suse & openSUSE distributions. It’s used to install, update, search & remove packages & manage repositories, perform various queries, and more. Zypper command-line interface to ZYpp system management library (libzypp). +Zypper 是 suse 和 openSUSE 发行版的命令行包管理器。它可以用于安装、更新、搜索和删除软件包,还有管理存储库,执行各种查询等功能。 Zypper 命令行界面用到了 ZYpp 系统管理库(libzypp)。 -**Suggested Read :** [Zypper Command To Manage Packages On openSUSE & suse Systems][10] +**推荐阅读:** [在 openSUSE 和 suse 系统使用 zypper 命令管理软件包][10] ``` # zypper patterns @@ -277,8 +267,7 @@ i | yast2_basis | 20150918-25.1 | @System | | yast2_install_wf | 20150918-25.1 | Main Repository (OSS) | ``` -If you would like to list what are the packages is associated on it, run the below command. 
In this example we are going to list what are the packages is associated with “file_server” group. -Additionally zypper command allows a user to perform the same action with different options. +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 file_server 组相关联的软件包。另外 `zypper` 还允许用户使用不同的选项执行相同的操作。 ``` # zypper info file_server @@ -317,7 +306,7 @@ Contents : | yast2-tftp-server | package | Recommended ``` -If you would like to list what are the packages is associated on it, run the below command. +如果需要列出相关联的软件包,可以执行以下这个命令。 ``` # zypper pattern-info file_server @@ -357,7 +346,7 @@ Contents : | yast2-tftp-server | package | Recommended ``` -If you would like to list what are the packages is associated on it, run the below command. +如果需要列出相关联的软件包,可以执行以下这个命令。 ``` # zypper info pattern file_server @@ -396,7 +385,7 @@ Contents : | yast2-tftp-server | package | Recommended ``` -If you would like to list what are the packages is associated on it, run the below command. +如果需要列出相关联的软件包,可以执行以下这个命令。 ``` # zypper info -t pattern file_server @@ -436,17 +425,17 @@ Contents : | yast2-tftp-server | package | Recommended ``` -### How To List An Available Package Groups In Debian/Ubuntu Systems +### 如何在 Debian/Ubuntu 系统上列出可用的软件包组 -Since APT or APT-GET package manager doesn’t offer this option for Debian/Ubuntu based systems hence, we are using tasksel command to get this information. +由于 APT 或 APT-GET 软件包管理器没有为基于 Debian/Ubuntu 的系统提供这样的选项,因此需要使用 `tasksel` 命令来获取相关信息。 -[Tasksel][11] is a handy tool for Debian/Ubuntu systems which will install Group of Software in a single click on your system. Tasks are defined in `.desc` files and located at `/usr/share/tasksel`. +[tasksel][11] 是 Debian/Ubuntu 系统上一个很方便的工具,只需要很少的操作就可以用它来安装好一组软件包。可以在 `/usr/share/tasksel` 目录下的 `.desc` 文件中安排软件包的安装任务。 -By default, tasksel tool installed on Debian system as part of Debian installer but it’s not installed on Ubuntu desktop editions. This functionality is similar to that of meta-packages, like how package managers have. 
+默认情况下,`tasksel` 工具是作为 Debian 系统的一部分安装的,但桌面版 Ubuntu 则没有自带 `tasksel`,类似软件包管理器中的元包(meta-packages)。 -Tasksel tool offer a simple user interface based on zenity (popup Graphical dialog box in command line). +`tasksel` 工具带有一个基于 zenity 的简单用户界面,例如命令行中的弹出图形对话框。 -**Suggested Read :** [Tasksel – Install Group of Software in A Single Click on Debian/Ubuntu][12] +**推荐阅读:** [使用 tasksel 在 Debian/Ubuntu 系统上快速安装软件包组][12] ``` # tasksel --list-task @@ -494,20 +483,20 @@ u openssh-server OpenSSH server u server Basic Ubuntu server ``` -If you would like to list what are the packages is associated on it, run the below command. In this example we are going to list what are the packages is associated with “file_server” group. +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 lamp-server 组相关联的软件包。 ``` # tasksel --task-desc "lamp-server" Selects a ready-made Linux/Apache/MySQL/PHP server. ``` -### How To List An Available Package Groups In Arch Linux based Systems +### 如何在基于 Arch Linux 的系统上列出可用的软件包组 -Arch Linux based systems are using pacman package manager hence we can use the pacman Package Manager to get this information. +基于 Arch Linux 的系统使用的是 pacman 软件包管理器,因此可以通过 pacman 软件包管理器来获取相关的信息。 -pacman stands for package manager utility (pacman). pacman is a command-line utility to install, build, remove and manage Arch Linux packages. pacman uses libalpm (Arch Linux Package Management (ALPM) library) as a back-end to perform all the actions. +pacman 是 package manager 的缩写。`pacman` 可以用于安装、构建、删除和管理 Arch Linux 软件包。`pacman` 使用 libalpm(Arch Linux Package Management 库,ALPM)作为后端来执行所有操作。 -**Suggested Read :** [Pacman Command To Manage Packages On Arch Linux Based Systems][13] +**推荐阅读:** [使用 pacman 在基于 Arch Linux 的系统上管理软件包][13] ``` # pacman -Sg @@ -550,7 +539,7 @@ vim-plugins ``` -If you would like to list what are the packages is associated on it, run the below command. In this example we are going to list what are the packages is associated with “gnome” group. 
+如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 gnome 组相关联的软件包。 ``` # pacman -Sg gnome @@ -589,7 +578,7 @@ gnome simple-scan ``` -Alternatively we can check the same by running following command. +也可以执行以下这个命令实现同样的效果。 ``` # pacman -S gnome @@ -609,7 +598,7 @@ Interrupt signal received ``` -To know exactly how many packages is associated on it, run the following command. +可以执行以下命令检查相关软件包的数量。 ``` # pacman -Sg gnome | wc -l @@ -623,7 +612,7 @@ via: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/ 作者:[Prakash Subramanian][a] 选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) +译者:[HankChow](https://github.com/HankChow) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -642,3 +631,4 @@ via: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/ [11]: https://wiki.debian.org/tasksel [12]: https://www.2daygeek.com/tasksel-install-group-of-software-in-a-single-click-or-single-command-on-debian-ubuntu/ [13]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ + diff --git a/translated/tech/20180917 4 scanning tools for the Linux desktop.md b/translated/tech/20180917 4 scanning tools for the Linux desktop.md deleted file mode 100644 index 89aaad3a89..0000000000 --- a/translated/tech/20180917 4 scanning tools for the Linux desktop.md +++ /dev/null @@ -1,72 +0,0 @@ -用于Linux桌面的4个扫描工具 -====== -使用其中一个开源软件驱动扫描仪来实现无纸化办公。 - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-photo-camera-blue.png?itok=AsIMZ9ga) - -尽管无纸化世界还没有到来,但越来越多的人通过扫描文件和照片来摆脱纸张的束缚。不过,仅仅拥有一台扫描仪还不足够。你还需要软件来驱动扫描仪。 - -然而问题是许多扫描仪制造商没有将Linux版本的软件与他们的设备适配在一起。不过在大多数情况下,即使没有也没多大关系。因为在linux桌面上已经有很好的扫描软件了。它们能够与许多扫描仪配合很好的完成工作。 - -现在就让我们看看四个简单又灵活的开源Linux扫描工具。我已经使用过了下面这些工具(甚至[早在2014年][1]写过关于其中三个工具的文章)并且觉得它们非常有用。希望你也会这样认为。 - -### Simple Scan - -这是我最喜欢的一个软件之一,[Simple Scan][2]小巧,迅速,高效,且易于使用。如果你以前见过它,那是因为Simple 
Scan是GNOME桌面上的默认扫描程序应用程序,也是许多Linux发行版的默认扫描程序。 - -你只需单击一下就能扫描文档或照片。扫描过某些内容后,你可以旋转或裁剪它并将其另存为图像(仅限JPEG或PNG格式)或PDF格式。也就是说Simple Scan可能会很慢,即使你用较低分辨率来扫描文档。最重要的是,Simple Scan在扫描时会使用一组全局的默认值,例如150dpi用于文本,300dpi用于照片。你需要进入Simple Scan的首选项才能更改这些设置。 - -如果你扫描的内容超过了几页,还可以在保存之前重新排序页面。如果有必要的话 - 假如你正在提交已签名的表格 - 你可以使用Simple Scan来发送电子邮件。 - -### Skanlite - -从很多方面来看,[Skanlite][3]是Simple Scan在KDE世界中的表兄弟。虽然Skanlite功能很少,但它可以出色的完成工作。 - -你可以自己配置这个软件的选项,包括自动保存扫描文件,设置扫描质量以及确定扫描保存位置。 Skanlite可以保存为以下图像格式:JPEG,PNG,BMP,PPM,XBM和XPM。 - -其中一个很棒的功能是Skanlite能够将你扫描的部分内容保存到单独的文件中。当你想要从照片中删除某人或某物时,这就派上用场了。 - -### Gscan2pdf - -这是我另一个最爱的老软件,[gscan2pdf][4]可能会显得很老旧了,但它仍然包含一些比这里提到的其他软件更好的功能。即便如此,gscan2pdf仍然显得很轻便。 - -除了以各种图像格式(JPEG,PNG和TIFF)保存扫描外,gscan2pdf还将它们保存为PDF或[DjVu][5]文件。你可以在单击“扫描”按钮之前设置扫描的分辨率,无论是黑白,彩色还是纸张大小,每当你想要更改任何这些设置时,这都会进入gscan2pdf的首选项。你还可以旋转,裁剪和删除页面。 - -虽然这些都不是真正的杀手级功能,但它们会给你带来更多的灵活性。 - -### GIMP - -你大概会知道[GIMP][6]是一个图像编辑工具。但是你恐怕不知道可以用它来驱动你的扫描仪吧。 - -你需要安装[XSane][7]扫描软件和GIMP XSane插件。这两个应该都可以从你的Linux发行版的包管理器中获得。在软件里,选择文件>创建>扫描仪/相机。单击扫描仪,然后单击扫描按钮即可进行扫描。 - -如果这不是你想要的,或者它不起作用,你可以将GIMP和一个叫作[QuiteInsane][8]的插件结合起来。使用任一插件,都能使GIMP成为一个功能强大的扫描软件,它可以让你设置许多选项,如是否扫描彩色或黑白,扫描的分辨率,以及是否压缩结果等。你还可以使用GIMP的工具来修改或应用扫描后的效果。这使得它非常适合扫描照片和艺术品。 - -### 它们真的能够工作吗? 
- -所有的这些软件在大多数时候都能够在各种硬件上运行良好。我将它们与我过去几年来拥有的多台多功能打印机一起使用 - 无论是使用USB线连接还是通过无线连接。 - -你可能已经注意到我在前一段中写过“大多数时候运行良好”。这是因为我确实遇到过一个例外:一个便宜的canon多功能打印机。我使用的软件都没有检测到它。最后我不得不下载并安装canon的Linux扫描仪软件才使它工作。 - -你最喜欢的Linux开源扫描工具是什么?发表评论,分享你的选择。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/linux-scanner-tools - -作者:[Scott Nesbitt][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[way-ww](https://github.com/way-ww) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/scottnesbitt -[1]: https://opensource.com/life/14/8/3-tools-scanners-linux-desktop -[2]: https://gitlab.gnome.org/GNOME/simple-scan -[3]: https://www.kde.org/applications/graphics/skanlite/ -[4]: http://gscan2pdf.sourceforge.net/ -[5]: http://en.wikipedia.org/wiki/DjVu -[6]: http://www.gimp.org/ -[7]: https://en.wikipedia.org/wiki/Scanner_Access_Now_Easy#XSane -[8]: http://sourceforge.net/projects/quiteinsane/ diff --git a/translated/tech/20180917 Getting started with openmediavault- A home NAS solution.md b/translated/tech/20180917 Getting started with openmediavault- A home NAS solution.md deleted file mode 100644 index 833180811a..0000000000 --- a/translated/tech/20180917 Getting started with openmediavault- A home NAS solution.md +++ /dev/null @@ -1,74 +0,0 @@ -openmediavault入门:一个家庭NAS解决方案 -====== -这个网络附加文件服务提供了一序列功能,并且易于安装和配置。 - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS) - -面对许多可供选择的云存储方案,一些人可能会质疑一个家庭网络附加存储服务的价值。毕竟,当所有你的文件存储在云上,你不需要为你自己云服务的维护,更新,和安全担忧。 - -但是,这不完全对,是不是?你有一个家庭网络,所以你不得不负责维护网络的健康和安全。假定你已经维护一个家庭网络,那么[一个家庭NAS][1]并不会增加额外负担。反而你能从少量的工作中得到许多的好处。 - -你可以为你家里所有的计算机备份(你也可以备份离线网站).构架一个存储电影,音乐和照片的媒体服务器,无需担心网络连接是否连接。在家里的多台计算机处理大型文件,不需要等待从网络其他随机的计算机传输这些文件过来。另外,可以让NAS与其他服务一起进行双重任务,如托管本地邮件或者家庭Wiki。也许最重要的是,构架家庭NAS,数据完全是你的,始终在控制下和随时可访问的。 - 
-接下来的问题是如何选择NAS方案。当然,你可以购买预先建立的解决方案,并在某一天打电话购买,但是这会有什么乐趣呢?实际上,尽管拥有一个能处理一切的设备很棒,但最好还是有一个可以修复和升级的钻机。这是一个我近期发现的解决方案。我选择安装和配置[openmediavault][2]。 - -### 为什么选择openmediavault? - -市面上有不少开源的NAS解决方案,其中有些无可争议的比openmediavault流行。当我询问周遭,例如,[freeNAS][3]最常被推荐给我。那么为什么我不采纳他们的建议呢?毕竟,它被大范围的使用,包含很多的功能,并且提供许多支持选项,[基于FreeNAS官网的一份对比数据][4]。当然这些全部是对的。但是openmediavault也不差。它是基于FreeNAS早期版本,虽然它在下载和功能方面的数量较低,但是对于我的需求而言,它已经相当足够了。 - -另外一个因素是它让我感到很舒适。openmediavault的底层操作系统是[Debian][5],然而FreeNAS是[FreeBSD][6]。由于我个人对FressBSD不是很熟悉,因此如果我的NAS出现故障,必定会很难在FreeBSD上修复故障。同样的,也会让我觉得很难微调配置或添加服务到机器上。当然,我可以学习FreeBSD和更熟悉它,但是我已经在家里构架了这个NAS;我发现,如果限制给定自己完成构建NAS的“学习机会”的数量,构建NAS往往会更成功。 - -当然,每个情况都不同,所以你要自己调研,然后作出最适合自己方案的决定。FreeNAS对于许多人似乎都是不错的解决方案。Openmediavault正是适合我的解决方案。 - -### 安装与配置 - -在[openmediavault文档]里详细记录了安装步骤,所以我不在这里重述了。如果你曾经安装过任何一个linux版本,大部分安装步骤都是很类似的(虽然在相对丑陋的[Ucurses][9]界面,不像你可能在现代版本的相对美观的安装界面)。我通过使用[专用驱动器][9]指令来安装它。然而,这些指令不但很好,而且相当精炼的。当你搞定这些指令,你安装了一个基本的系统,但是你还需要做很多才能真正构建好NAS来存储任何文件。例如,专用驱动器指令在硬盘驱动上安装openmediavault,但那是操作系统的驱动,而不是和网络上其他计算机共享空间的那个驱动。你需要自己把这些建立起来并且配置好。 - -你要做的第一件事是加载用来管理的网页界面和修改默认密码。这个密码和之前你安装过程设置的根密码是不同的。这是网页洁面的管理员账号,和默认的账户和密码分别是 `admin` 和 `openmediavault`,当你登入后自然而然地会修改这些配置属性。 - -#### 设置你的驱动 - -一旦你安装好openmediavault,你需要它为你做一些工作。逻辑上的第一个步骤是设置好你即将用来作为存储的驱动。在这里,我假定你已经物理上安装好它们了,所以接下来你要做的就是让openmediavault识别和配置它们。第一步是确保这些磁盘是可见的。侧边栏菜单有很多选项,而且被精心的归类了。选择**存储 - > 磁盘**。一旦你点击该菜单,你应该能够看到所有你已经安装到该服务器的驱动,包括那个你已经用来安装openmediavault的驱动。如果你没有在那里看到所有驱动,点击扫描按钮去看它能够接载它们。通常,这不会是一个问题。 - -当你的文件共享时,你可以独立的挂载和设置这些驱动,但是对于一个文件服务器,你将想要一些冗余驱动。你想要能够把很多驱动当作一个单一卷和能够在某一个驱动出现故障或者空间不足下安装新驱动的情况下恢复你的数据。这意味你将需要一个[RAID][10]。你想要的什么特定类型的RAID的主题是一个深深的兔子洞,是一个值得另写一片文章专门来讲述它(而且已经有很多关于该主题的文章了),但是简而言之是你将需要不仅仅一个驱动和最好的情况下,你的所有驱动都存储一样数量的数据。 - -openmedia支持所有标准的RAID级别,所以多了解RAID对你很有好处的。可以在**存储 - > RAID管理**配置你的RAID。配置是相当简单:点击创建按钮,在你的RAID阵列里选择你想要的磁盘和你想要使用的RAID级别,和给这个阵列一个名字。openmediavault为你处理剩下的工作。没有混乱的命令行,试图记住‘mdadm'命令的一些标志参数。在我特别的例子,我有六个2TB驱动,并被设置为RAID 10. 
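顺便一提,RAID 级别的选择直接决定可用容量。以上文的 6 块 2TB 硬盘为例,下面的小脚本粗略算出几种常见 RAID 级别对应的可用空间(仅为示意性的估算,不考虑文件系统与元数据开销):

```shell
#!/usr/bin/env bash
# 以 6 块 2TB 硬盘为例,估算常见 RAID 级别的可用容量(单位:TB)
drives=6
size_tb=2

raid0=$(( drives * size_tb ))        # RAID 0:纯条带化,无冗余,空间全部可用
raid10=$(( drives * size_tb / 2 ))   # RAID 10:两两镜像,可用空间减半
raid5=$(( (drives - 1) * size_tb ))  # RAID 5:损失一块盘的容量用于奇偶校验
echo "RAID0=${raid0}TB RAID10=${raid10}TB RAID5=${raid5}TB"
# 输出:RAID0=12TB RAID10=6TB RAID5=10TB
```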
- -当你的RAID构建好了,基本上你已经有一个地方可以存储东西了。你仅仅需要设置一个文件系统。正如你的桌面系统,一个硬盘驱动在没有格式化情况下是没什么用处的。所以下一个你要去的地方的是位于openmediavault控制面板里的 **存储 - > 文件系统**。和配置你的RAID一样,点击创建按钮,然后跟着提示操作。如果你只有一个RAID在你的服务器上,你应该可以看到一个像 `md0`的东西。你也需要选择文件系统的类别。如果你不能确定,选择标准的ext4类型即可。 - -#### 定义你的共享 - -亲爱的!你有个地方可以存储文件了。现在你只需要让它在你的家庭网络中可见。可以从在openmediavault控制面板上的**服务**部分上配置。当谈到在网络上设置文件共享,有两个主要的选择:NFS或者SMB/CIFS. 根据以往经验,如果你网络上的所有计算机都是Linux系统,那么你使用NFS会更好。然而,当你家庭网络是一个混合环境,是一个包含Linux,Windows,苹果系统和嵌入式设备的组合,那么SMB/CIF可能会是你合适的选择。 - -这些选项不是互斥的。实际上,你可以在服务器上运行这些服务和同时拥有这些服务的好处。或者你可以混合起来,如果你有一个特定的设备做特定的任务。不管你的使用场景是怎样,配置这些服务是相当简单。点击你想要的服务,从它配置中激活它,和在网络中设定你想要的共享文件夹为可见。在基于SMB/CIFS共享的情况下,相对于NFS多了一些可用的配置,但是一般用默认配置就挺好的,接着可以在默认基础上修改配置。最酷的事情是它很容易配置,同时也很容易在需要的时候修改配置。 - -#### 用户配置 - -基本上已将完成了。你已经在RAID配置你的驱动。你已经用一种文件系统格式化了RAID。和你已经在格式化的RAID上设定了共享文件夹。剩下来的一件事情是配置那些人可以访问这些共享和可以访问多少。这个可以在 **访问权限管理** 配置区设置。使用 **用户** 和 **群组** 选项来设定可以连接到你共享文件加的用户和设定这些共享文件的访问权限。 - -一旦你完成用户配置,你几乎准备好了。你需要从不同客户端机器访问你的共享,但是这是另外一个可以单独写个文章的话题了。 - -玩得开心! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/openmediavault - -作者:[Jason van Gumster][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[jamelouis](https://github.com/jamelouis) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mairin -[1]: https://opensource.com/article/18/8/automate-backups-raspberry-pi -[2]: https://openmediavault.org -[3]: https://freenas.org -[4]: http://www.freenas.org/freenas-vs-openmediavault/ -[5]: https://www.debian.org/ -[6]: https://www.freebsd.org/ -[7]: https://openmediavault.readthedocs.io/en/latest/installation/index.html -[8]: https://invisible-island.net/ncurses/ -[9]: https://openmediavault.readthedocs.io/en/latest/installation/via_iso.html -[10]: https://en.wikipedia.org/wiki/RAID diff --git a/translated/tech/20180918 Linux firewalls- What you need to know about iptables and 
firewalld.md b/translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md
deleted file mode 100644
index c3ecb7b1d3..0000000000
--- a/translated/tech/20180918 Linux firewalls- What you need to know about iptables and firewalld.md
+++ /dev/null
@@ -1,178 +0,0 @@
-Linux 防火墙:关于 iptables 和 firewalld,你需要知道些什么
-======
-
-以下是如何使用 iptables 和 firewalld 工具来管理 Linux 防火墙规则。
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab)
-
-这篇文章摘自我的书 [Linux in Action][1],以及 Manning 出版社另一个尚未发布的项目。
-
-### 防火墙
-
-防火墙是一组规则。当数据包进出受保护的网络时,其内容(特别是关于来源、目标和所用协议等信息)会根据防火墙规则进行检测,以确定是否放行。下面是一个简单的例子:
-
-![防火墙过滤请求][3]
-
-防火墙可以根据协议或基于目标的规则过滤请求。
-
-一方面,[iptables][4] 是 Linux 机器上管理防火墙规则的工具。
-
-另一方面,[firewalld][5] 也是 Linux 机器上管理防火墙规则的工具。
-
-有点晕了吗?如果我告诉你还有另外一种工具,叫做 [nftables][6],这会不会糟蹋你的一天呢?
-
-好吧,我承认这整件事确实有点好笑,所以让我来解释一下。一切都要从 Netfilter 说起,它在 Linux 内核模块层面控制着对网络栈的访问。几十年来,管理 Netfilter 钩子的主要命令行工具一直是 iptables 规则集。
-
-因为调用这些规则所需的语法看起来有点晦涩难懂,各种用户友好的实现,如 [ufw][7] 和 firewalld,被作为更高级别的 Netfilter 解释器引入进来。然而,ufw 和 firewalld 主要是为解决单台独立计算机面临的各种问题而设计的。构建完整的网络解决方案通常还是需要 iptables,或者它自 2014 年起的替代品 nftables(nft 命令行工具)。
-
-iptables 没有消失,仍然被广泛使用着。事实上,在未来的许多年里,作为一名管理员,你都应该料到会遇到用 iptables 保护的网络。但是 nftables 在经典 Netfilter 工具集的基础上,带来了一些重要的新功能。
-
-从现在开始,我将通过示例展示 firewalld 和 iptables 如何解决简单的连接问题。
-
-### 使用 firewalld 配置 HTTP 访问
-
-正如你能从它的名字中猜到的,firewalld 是 [systemd][8] 家族的一部分。firewalld 可以安装在 Debian/Ubuntu 机器上,而在 RedHat 和 CentOS 上它是默认安装的。如果你的计算机上运行着像 Apache 这样的 web 服务器,你可以用浏览器访问服务器的 web 根目录来确认防火墙是否正在工作:如果网站打不开,那么 firewalld 正在工作。
-
-你可以使用 `firewall-cmd` 工具从命令行管理 firewalld 设置。添加 `--state` 参数将返回当前防火墙的状态:
-
-```
-# firewall-cmd --state
-running
-```
-
-默认情况下,firewalld 将处于运行状态,并将拒绝所有传入流量,只有 SSH 等少数例外。这意味着你的网站不会有太多访问者,这无疑会为你节省大量的数据传输成本。然而,这显然不是你架设 web 服务器的目的,你需要打开 HTTP 和 HTTPS 端口,按照惯例,这两个端口分别是 80 和 443。firewalld 提供了实现这一点的办法:通过 `--add-port` 参数直接指定端口号及其使用的网络协议(在本例中为 TCP);`--permanent` 参数则告诉
firewalld 在每次服务器启动时加载此规则:
-
-```
-# firewall-cmd --permanent --add-port=80/tcp
-# firewall-cmd --permanent --add-port=443/tcp
-```
-
-`--reload` 参数将这些规则应用于当前会话:
-
-```
-# firewall-cmd --reload
-```
-
-要查看当前防火墙上的设置,运行 `--list-services`:
-
-```
-# firewall-cmd --list-services
-dhcpv6-client http https ssh
-```
-
-假设你已经如前所述添加了浏览器访问,那么 HTTP、HTTPS 和 SSH 端口现在都应该是开放的——还有 `dhcpv6-client`,它允许 Linux 向本地 DHCP 服务器请求 IPv6 地址。
-
-### 使用 iptables 配置锁定的客户信息亭
-
-我相信你见过信息亭——就是摆放在机场、图书馆和商业场所的亭子里,邀请顾客和路人浏览内容的平板电脑、触摸屏和 ATM 式电脑。大多数信息亭的问题在于,你通常不希望用户把它们当成自己的设备来用:它们不是用来随意浏览网页、观看 YouTube 视频,或者对五角大楼发起拒绝服务攻击的。因此,为了确保它们不被滥用,你需要把它们锁定起来。
-
-一种方法是应用某种信息亭模式,无论是巧妙地利用 Linux 显示管理器,还是在浏览器层面实现。但为了确保堵上所有的漏洞,你可能还想通过防火墙加上一些硬性的网络控制。在下一节中,我将讲解如何用 iptables 来实现。
-
-关于使用 iptables,有两件重要的事情需要记住:规则的排列顺序非常关键;iptables 规则本身在重启后不会保留。这两点我会在下文中逐一说明。
-
-### 信息亭项目
-
-为了说明这一切,让我们想象一下,我们为一家名为 BigMart 的大型连锁商店工作。它们已经存在了几十年;事实上,我们想象中的祖父母可能就是在那里购物并长大的。但如今,BigMart 公司总部的人大概只能数着日子,等亚马逊把他们彻底挤垮。
-
-尽管如此,BigMart 的 IT 部门还在尽最大努力想办法,他们给你发来了一批带 WiFi 功能的信息亭设备,要部署在商店里的各个关键位置。设想是这样的:设备展示 BigMart.com 的产品页面,供顾客查询商品特性、货架位置和库存水平。信息亭还要能访问 bigmart-data.com,那里存放着许多图像和视频媒体。
-
-除此之外,你还需要允许下载软件包更新。最后,你还希望只允许从本地工作站进行 SSH 访问,并阻止其他任何人登录。下图说明了它将如何工作:
-
-![信息亭流量 iptables][10]
-
-信息亭业务流由 iptables 控制。
-
-### 脚本
-
-以下是 Bash 脚本内容:
-
-```
-#!/bin/bash
-iptables -A OUTPUT -p tcp -d bigmart.com -j ACCEPT
-iptables -A OUTPUT -p tcp -d bigmart-data.com -j ACCEPT
-iptables -A OUTPUT -p tcp -d ubuntu.com -j ACCEPT
-iptables -A OUTPUT -p tcp -d ca.archive.ubuntu.com -j ACCEPT
-iptables -A OUTPUT -p tcp --dport 80 -j DROP
-iptables -A OUTPUT -p tcp --dport 443 -j DROP
-iptables -A INPUT -p tcp -s 10.0.3.1 --dport 22 -j ACCEPT
-iptables -A INPUT -p tcp -s 0.0.0.0/0 --dport 22 -j DROP
-```
-
-我们从基本规则 `-A` 开始分析:它告诉 iptables 我们要添加一条规则。`OUTPUT` 表示这条规则属于 OUTPUT(出站)链。`-p` 表示该规则只匹配 TCP 协议的数据包,而 `-d` 指明目的地址是 [bigmart.com][11]。`-j` 参数指定数据包匹配规则时要执行的动作,这里是 `ACCEPT`(接受)。前面几条规则的动作是允许/接受请求,而后面的规则的动作则是丢弃/拒绝请求。
-
-规则的顺序很重要:iptables 会按顺序逐条匹配,仅让命中了放行规则的请求通过。一个向外发出的浏览器请求,比如访问 [youtube.com][12]
是会通过前面四条规则的(它不匹配其中任何一条),但当它到达 `--dport 80` 或 `--dport 443` 规则时——取决于它是 HTTP 还是 HTTPS 请求——就会被丢弃。一旦命中了某条规则,iptables 就不会再往下检查了。
-
-另一方面,系统向 ubuntu.com 发出的软件升级请求,会命中为它准备的相应规则并顺利通过。显然,我们在这里做的,就是只允许向我们的 BigMart 站点或 Ubuntu 发送 HTTP/HTTPS 请求,而不允许发往其他任何目的地。
-
-最后两条规则处理 SSH 请求。SSH 用的不是 80 或 443 端口,而是 22 端口,所以前面那两条丢弃规则拦不住它。这样一来,来自我的工作站的登录请求会被接受,而来自其他任何地方的请求都会被拒绝。这一点很重要:一定要确保 22 端口规则中的 IP 地址与你实际用来登录的机器的地址一致——否则你会立刻把自己锁在门外。当然,这也没什么大不了的,因为按照目前的配置方式,只要重启服务器,iptables 规则就会全部丢失。另外,如果你把 LXC 容器用作服务器并从 LXC 主机登录,那么规则里应该用主机连接容器所用的 IP 地址,而不是容器的公共地址。
-
-如果机器的 IP 发生变化,记得更新这条规则,否则你会被拒之门外。
-
-在家跟着做吗(最好是在某种一次性的虚拟机上)?太好了,创建你自己的脚本吧。现在我可以保存脚本,用 `chmod` 使其可执行,并以 `sudo` 运行它。不用担心出现 `bigmart-data.com 没找到` 的错误——当然找不到,它并不存在。
-
-```
-chmod +x scriptname.sh
-sudo ./scriptname.sh
-```
-
-你可以在命令行中用 `curl` 测试防火墙:请求 ubuntu.com 会成功,而请求 [manning.com][13] 会失败。
-
-```
-curl ubuntu.com
-curl manning.com
-```
-
-### 配置 iptables 以在系统启动时加载
-
-现在,我该如何让这些规则在信息亭每次开机时自动加载呢?第一步是用 `iptables-save` 工具保存当前规则,这会在 root 的家目录中创建一个包含规则列表的文件。管道后面接上 `tee` 命令,是为了把我的 `sudo` 权限应用到命令的第二部分:把文件真正写入到权限受限的 root 目录中。
-
-然后,我可以告诉系统在每次启动时运行一个配套的工具 `iptables-restore`。像我们在上一模块中看到的那种常规 cron 作业帮不上忙,因为它们只在设定的时间运行,而我们无法预知计算机什么时候会崩溃重启。
-
-有许多方法来处理这个问题。这里有一个:
-
-在我的 Linux 机器上,我会安装一个名为 [anacron][14] 的程序,它会在 /etc/ 目录中提供一个名为 anacrontab 的文件。我会编辑该文件,加入这条 `iptables-restore` 命令,告诉它每天(必要时)在系统引导后一分钟,把保存的规则加载进 iptables。我会给这个作业一个标识符(`iptables-restore`),然后加上命令本身。如果你在家跟着我一起做,应该重启系统来测试一下。
-
-```
-sudo iptables-save | sudo tee /root/my.active.firewall.rules
-sudo apt install anacron
-sudo nano /etc/anacrontab
-1 1 iptables-restore iptables-restore < /root/my.active.firewall.rules
-
-```
-
-我希望这些实际例子已经说明了如何使用 iptables 和 firewalld 来管理基于 Linux 的防火墙的连接问题。
-
--------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/9/linux-iptables-firewalld
-
-作者:[David Clinton][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[heguangzhi](https://github.com/heguangzhi)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject)
原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/remyd
-[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
-[2]: /file/409116
-[3]: https://opensource.com/sites/default/files/uploads/iptables1.jpg (firewall filtering request)
-[4]: https://en.wikipedia.org/wiki/Iptables
-[5]: https://firewalld.org/
-[6]: https://wiki.nftables.org/wiki-nftables/index.php/Main_Page
-[7]: https://en.wikipedia.org/wiki/Uncomplicated_Firewall
-[8]: https://en.wikipedia.org/wiki/Systemd
-[9]: /file/409121
-[10]: https://opensource.com/sites/default/files/uploads/iptables2.jpg (kiosk traffic flow ip tables)
-[11]: http://bigmart.com/
-[12]: http://youtube.com/
-[13]: http://manning.com/
-[14]: https://sourceforge.net/projects/anacron/
diff --git a/translated/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md b/translated/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md
new file mode 100644
index 0000000000..1fdba14a5f
--- /dev/null
+++ b/translated/tech/20180924 A Simple, Beautiful And Cross-platform Podcast App.md
@@ -0,0 +1,108 @@
+一个简单、美观且跨平台的播客应用程序
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/cpod-720x340.png)
+
+播客在过去几年中变得非常流行。播客就是所谓的“信息娱乐”:通常轻松随意,却往往能带给你有价值的信息。只要你对某样东西感兴趣,很可能就有一个相关的播客。Linux 桌面上有不少播客播放器,但如果你想要一款视觉上美观、动画流畅、并且能在每个平台上运行的应用,可选的就不多了,**CPod** 正是其中之一。CPod(以前称为 **Cumulonimbus**)是一个开源的、极简的播客应用程序,适用于 Linux、MacOS 和 Windows。
+
+CPod 运行在 **Electron** 之上——这个工具允许开发人员构建跨平台(例如 Windows、MacOS 和 Linux)的桌面图形应用程序。在这篇简短的指南中,我们将讨论如何在 Linux 中安装和使用 CPod 播客应用程序。
+
+### 安装 CPod
+
+转到 CPod 的[**发布页面**][1],下载并安装对应平台的二进制文件。如果你使用 Ubuntu/Debian,只需从发布页面下载并安装 .deb 文件,如下所示。
+
+```
+$ wget https://github.com/z-------------/CPod/releases/download/v1.25.7/CPod_1.25.7_amd64.deb
+
+$ sudo apt update
+
+$ sudo apt install gdebi
+
+$ sudo gdebi CPod_1.25.7_amd64.deb
+```
+
+如果你使用其他发行版,则需要使用发布页面上的 **AppImage** 文件。
+
+从发布页面下载 AppImage 文件。
+
+打开终端,然后转到存放 AppImage 文件的目录。修改权限以允许执行:
+
+```
+$ chmod +x CPod-1.25.7-x86_64.AppImage
+```
+
+执行 AppImage 文件:
+
+```
+$ ./CPod-1.25.7-x86_64.AppImage
+```
+
+你将看到一个对话框,询问是否将此应用程序与系统集成。如果想集成,点击**是**即可。
+
+### 特性
+
+**探索标签页**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-features-tab.png)
+
+CPod 使用 Apple iTunes 数据库来查找播客。这很好,因为 iTunes 的播客数据库是最大的:如果某个播客存在,多半就能在 iTunes 上找到。要查找播客,只需使用探索部分顶部的搜索栏即可。探索部分还展示了一些热门播客。
+
+**主标签页**
+
+![](http://www.ostechnix.com/wp-content/uploads/2018/09/CPod-home-tab.png)
+
+主标签页是打开应用程序时的默认页面。它按时间顺序列出你订阅的所有播客的所有剧集。
+
+在主标签页中,你可以:
+
+  1. 将剧集标记为已读。
+  2. 下载剧集以便离线播放。
+  3. 将剧集添加到播放队列中。
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/The-podcasts-queue.png)
+
+**订阅标签页**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-subscriptions-tab.png)
+
+你当然可以订阅自己喜欢的播客。在订阅标签页中,你还可以:
+
+  1. 刷新播客的封面图。
+  2. 将订阅导出为 .OPML 文件,或从 .OPML 文件导入订阅。
+
+**播放器**
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-Podcast-Player.png)
+
+播放器可能是 CPod 最美观的部分。应用程序会根据播客的横幅图改变整体观感。底部有一个声音可视化器。在右侧,你可以查看和搜索该播客的其他剧集。
+
+**缺点/缺失的功能**
+
+虽然我很喜欢这个应用,但 CPod 确实有一些缺点和缺失的功能:
+
+  1. 糟糕的 MPRIS 集成:你可以在桌面环境的媒体播放器对话框中播放或暂停播客,但也仅此而已。播客的名称不会显示,也无法跳到下一集或上一集。
+  2. 不支持章节。
+  3. 没有自动下载功能,你必须手动下载剧集。
+  4. 使用过程中 CPU 占用率非常高(即使对 Electron 应用程序来说也是如此)。
+
+### 结论
+
+虽然有这些缺点,CPod 显然是视觉上最美观的播客播放器应用,并且具备了最基本的功能。如果你喜欢使用美观的应用,又不需要那些高级功能,它就是适合你的完美应用。我自己肯定会继续用下去。
+
+你喜欢 CPod 吗?
请将你的意见发表在下面的评论中。 + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/cpod-a-simple-beautiful-and-cross-platform-podcast-app/ + +作者:[EDITOR][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[1]: https://github.com/z-------------/CPod/releases \ No newline at end of file diff --git a/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md new file mode 100644 index 0000000000..71aace4ce4 --- /dev/null +++ b/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md @@ -0,0 +1,78 @@ +Hegemon - 使用 Rust 编写的模块化系统监视程序 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/hegemon-720x340.png) + +在类 Unix 系统中监视运行进程时,最常用的程序是 **top** 和 top 的增强版 **htop**。我个人最喜欢的是 htop。但是,开发人员不时会发布这些程序的替代品。top 和 htop 工具的一个替代品是 **Hegemon**。它是使用 **Rust** 语言编写的模块化系统监视程序。 + +关于 Hegemon 的功能,我们可以列出以下这些: + + * Hegemon 会监控 CPU、内存和交换页的使用情况。 +  * 它监控系统的温度和风扇速度。 +  * 更新间隔时间可以调整。默认值为 3 秒。 +  * 我们可以通过扩展数据流来展示更详细的图表和其他信息。 +  * 单元测试 +  * 干净的界面 +  * 免费且开源。 + + + +### 安装 Hegemon + +确保已安装 **Rust 1.26** 或更高版本。要在 Linux 发行版中安装 Rust,请参阅以下指南: + +[Install Rust Programming Language In Linux][2] + +另外要安装 [libsensors][1] 库。它在大多数 Linux 发行版的默认仓库中都有。例如,你可以使用以下命令将其安装在基于 RPM 的系统(如 Fedora)中: + +``` +$ sudo dnf install lm_sensors-devel +``` + +在像 Ubuntu、Linux Mint 这样的基于 Debian 的系统上,可以使用这个命令安装它: + +``` +$ sudo apt-get install libsensors4-dev +``` + +在安装 Rust 和 libsensors 后,使用命令安装 Hegemon: + +``` +$ cargo install hegemon +``` + +安装 hegemon 后,使用以下命令开始监视 Linux 系统中正在运行的进程: + +``` +$ hegemon +``` + +以下是 Arch Linux 桌面的示例输出。 + 
+![](https://www.ostechnix.com/wp-content/uploads/2018/09/Hegemon-in-action.gif)
+
+要退出,请按 **Q**。
+
+请注意,hegemon 仍处于早期开发阶段,还不能完全取代 **top** 命令。它可能存在 bug,功能也不完整。如果你遇到任何 bug,请在项目的 GitHub 页面中报告。开发人员计划在后续版本中引入更多功能。所以,请关注这个项目。
+
+就是这些了。希望这篇文章对你有用。还有更多的好东西即将到来,敬请关注!
+
+干杯!
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-written-in-rust/
+
+作者:[SK][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://www.ostechnix.com/author/sk/
+[1]: https://github.com/lm-sensors/lm-sensors
+[2]: https://www.ostechnix.com/install-rust-programming-language-in-linux/
diff --git a/translated/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md b/translated/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md
new file mode 100644
index 0000000000..b1e566f1a9
--- /dev/null
+++ b/translated/tech/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md
@@ -0,0 +1,88 @@
+如何在救援(单用户模式)/紧急模式下启动 Ubuntu 18.04/Debian 9 服务器
+======
+将 Linux 服务器引导到单用户模式或**救援模式**,是 Linux 管理员在关键时刻恢复服务器时常用的重要故障排除方法之一。在 Ubuntu 18.04 和 Debian 9 中,单用户模式被称为救援模式。
+
+除了救援模式外,Linux 服务器还可以在**紧急模式**下启动。两者的主要区别在于:紧急模式只加载一个带有只读根文件系统的最小环境,不会启用任何网络或其他服务;而救援模式会尝试挂载所有本地文件系统,并尝试启动一些重要的服务,包括网络。
+
+在本文中,我们将讨论如何在救援模式和紧急模式下启动 Ubuntu 18.04 LTS/Debian 9 服务器。
+
+#### 在单用户/救援模式下启动 Ubuntu 18.04 LTS 服务器:
+
+重启服务器,进入启动加载程序(Grub)屏幕并选择 “**Ubuntu**”,启动加载器页面如下所示:
+
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Bootloader-Screen-Ubuntu18-04-Server.jpg)
+
+按下 “**e**”,然后移动到以 “**linux**” 开头的行尾,并添加 “**systemd.unit=rescue.target**”。如果存在单词 “**$vt_handoff**” 就删除它。
+![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-target-ubuntu18-04.jpg) + +现在按 Ctrl-x 或 F10 启动, + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-mode-ubuntu18-04.jpg) + +现在按回车键,然后你将得到所有文件系统都以读写模式挂载的 shell 并进行故障排除。完成故障排除后,可以使用 “**reboot**” 命令重新启动服务器。 + +#### 在紧急模式下启动 Ubuntu 18.04 LTS 服务器 + +重启服务器并进入启动加载程序页面并选择 “**Ubuntu**”,然后按 “**e**” 并移动到以 linux 开头的行尾,并添加 “**systemd.unit=emergency.target**“。 + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergecny-target-ubuntu18-04-server.jpg) + +现在按 Ctlr-x 或 F10 以紧急模式启动,你将获得一个 shell 并从那里进行故障排除。正如我们已经讨论过的那样,在紧急模式下,文件系统将以只读模式挂载,并且在这种模式下也不会有网络, + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg) + +使用以下命令将根文件系统挂载到读写模式, + +``` +# mount -o remount,rw / + +``` + +同样,你可以在读写模式下重新挂载其余文件系统。 + +#### 将 Debian 9 引导到救援和紧急模式 + +重启 Debian 9.x 服务器并进入 grub页面选择 “**Debian GNU/Linux**”。 + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Debian9-Grub-Screen.jpg) + +按下 “**e**” 并移动到 linux 开头的行尾并添加 “**systemd.unit=rescue.target**” 以在救援模式下启动系统, 要在紧急模式下启动,那就添加 “**systemd.unit=emergency.target**“ + +#### 救援模式: + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-mode-Debian9.jpg) + +现在按 Ctrl-x 或 F10 以救援模式启动 + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-Mode-Shell-Debian9.jpg) + +按下回车键以获取 shell,然后从这里开始故障排除。 + +#### 紧急模式: + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-target-grub-debian9.jpg) + +现在按下 ctrl-x 或 F10 以紧急模式启动系统 + +![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg) + +按下回车获取 shell 并使用 “**mount -o remount,rw /**” 命令以读写模式挂载根文件系统。 + +**注意:**如果已经在 Ubuntu 18.04 和 Debian 9 Server 中设置了 root 密码,那么你必须输入 root 密码才能在救援和紧急模式下获得 shell + +就是这些了,如果您喜欢这篇文章,请分享你的反馈和评论。 + + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/boot-ubuntu-18-04-debian-9-rescue-emergency-mode/ + +作者:[Pradeep Kumar][a] 
+选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.linuxtechi.com/author/pradeep/ diff --git a/translated/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md b/translated/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md new file mode 100644 index 0000000000..b5e2f97c0b --- /dev/null +++ b/translated/tech/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md @@ -0,0 +1,155 @@ +如何在双系统引导下替换 Linux 发行版 +====== +在双系统引导的状态下,你可以将已安装的 Linux 发行版替换为另一个发行版,同时还可以保留原本的个人数据。 + +![How to Replace One Linux Distribution With Another From Dual Boot][1] + +假设你的电脑上已经[以双系统的形式安装了 Ubuntu 和 Windows][2],但经过[将 Linux Mint 与 Ubuntu 比较][3]之后,你又觉得 [Linux Mint][4] 会更适合自己的时候,你会怎样做?又该如何在[删除 Ubuntu][5] 的同时[在双系统中安装 Mint][6] 呢? + +你或许觉得应该首先从在双系统中卸载 [Ubuntu][7],然后使用 Linux Mint 重新安装成双系统。但实际上并不需要这么麻烦。 + +如果你已经在双系统引导中安装了一种 Linux 发行版,就可以轻松替换成另一个发行版了,而且也不必卸载已有的 Linux 发行版,只需要删除其所在的分区,然后在腾出的磁盘空间上安装另一个 Linux 发行版就可以了。 + +与此同时,更换 Linux 发行版后,仍然会保留原本 home 目录中包含所有文件。 + +下面就来详细介绍一下。 + +### 在双系统引导中替换 Linux 发行版 + + + +这是我的演示范例。我使用双系统引导同时安装了 Windows 10 和 Linux Mint 19,然后我会把 Linux Mint 19 替换成 Elementary OS 5,同时在替换后保留我的个人文件(包括音乐、图片、视频和 home 目录中的文件)。 + +你需要做好以下这些准备: + + * 使用 Linux 和 Windows 双系统引导 + * 需要安装的 Linux 发行版的 USB live 版 + * 在外部磁盘备份 Windows 和 Linux 中的重要文件(并非必要,但建议备份一下) + + + +#### 在替换 Linux 发行版时要记住保留你的 home 目录 + +如果想让个人文件在安装新 Linux 系统的过程中不受影响,原有的 Linux 系统必须具有单独的 root 目录和 home 目录。你可能会发现我的[双系统引导教程][8]在安装过程中不选择“与 Windows 一起安装”选项,而选择“其它”选项,然后手动创建 root 和 home 分区。所以,手动创建单独的 home 分区也算是一个磨刀不误砍柴工的操作。因为如果要在不丢失文件的情况下,将现有的 Linux 发行版替换为另一个发行版,需要将 home 目录存放在一个单独的分区上。 + +不过,你必须记住现有 Linux 系统的用户名和密码才能使用与新系统中相同的 home 目录。 + +如果你没有单独的 home 分区,也可以后续再进行创建。但这并不是推荐做法,因为这个过程会比较复杂,有可能会把你的系统搞乱。 + +下面来看看如何替换到另一个 Linux 发行版。 + +#### 步骤 1:为新的 Linux 发行版创建一个 USB live 版 + 
+尽管上文中已经提到了它,但我还是要重复一次以免忽略。 + +你可以使用 Windows 或 Linux 中的启动盘创建器(例如 [Etcher][9])来创建 USB live 版,这个过程比较简单,这里不再详细叙述。 + +#### 步骤 2:启动 USB live 版并安装 Linux + +你应该已经使用过双系统启动,对这个过程不会陌生。使用 USB live 版重新启动系统,在启动时反复按 F10 或 F12 进入 BIOS 设置。选择从 USB 启动,就可以看到进入 live 环境或立即安装的选项。 + +在安装过程中,进入“安装类型”界面时,选择“其它”选项。 + +![Replacing one Linux with another from dual boot][10] +(在这里选择“其它”选项) + +#### 步骤 3:准备分区操作 + +下图是分区界面。你会看到使用 Ext4 文件系统类型来安装 Linux。 + +![Identifying Linux partition in dual boot][11] +(确定 Linux 的安装位置) + +在上图中,标记为 Linux Mint 19 的 Ext4 分区是 root 分区,大小为 82691 MB 的第二个 Ext4 分区是 home 分区。在这里我这里没有使用[交换空间][12]。 + +如果你只有一个 Ext4 分区,就意味着你的 home 目录与 root 目录位于同一分区。在这种情况下,你就无法保留 home 目录中的文件了,这个时候我建议将重要文件复制到外部磁盘,否则这些文件将不会保留。 + +然后是删除 root 分区。选择 root 分区,然后点击 - 号,这个操作释放了一些磁盘空间。 + +![Delete root partition of your existing Linux install][13] +(删除 root 分区) + +磁盘空间释放出来后,点击 + 号。 + +![Create root partition for the new Linux][14] +(创建新的 root 分区) + +现在已经在可用空间中创建一个新分区。如果你之前的 Linux 系统中只有一个 root 分区,就应该在这里创建 root 分区和 home 分区。如果需要,还可以创建交换分区。 + +如果你之前已经有 root 分区和 home 分区,那么只需要从已删除的 root 分区创建 root 分区就可以了。 + +![Create root partition for the new Linux][15] +(创建 root 分区) + +你可能有疑问,为什么要经过“删除”和“添加”两个过程,而不使用“更改”选项。这是因为以前使用“更改”选项好像没有效果,所以我更喜欢用 - 和 +。这是迷信吗?也许是吧。 + +这里有一个重要的步骤,对新创建的 root 分区进行格式化。在没有更改分区大小的情况下,默认是不会对分区进行格式化的。如果分区没有被格式化,之后可能会出现问题。 + +![][16] +(格式化 root 分区很重要) + +如果你在新的 Linux 系统上已经划分了单独的 home 分区,选中它并点击更改。 + +![Recreate home partition][17] +(修改已有的 home 分区) + +然后指定将其作为 home 分区挂载即可。 + +![Specify the home mount point][18] +(指定 home 分区的挂载点) + +如果你还有交换分区,可以重复与 home 分区相同的步骤,唯一不同的是要指定将空间用作交换空间。 + +现在的状态应该是有一个 root 分区(将被格式化)和一个 home 分区(如果需要,还可以使用交换分区)。点击“立即安装”可以开始安装。 + +![Verify partitions while replacing one Linux with another][19] +(检查分区情况) + +接下来的几个界面就很熟悉了,要重点注意的是创建用户和密码的步骤。如果你之前有一个单独的 home 分区,并且还想使用相同的 home 目录,那你必须使用和之前相同的用户名和密码,至于设备名称则可以任意指定。 + +![To keep the home partition intact, use the previous user and password][20] +(要保持 home 分区不变,请使用之前的用户名和密码) + +接下来只要静待安装完成,不需执行任何操作。 + +![Wait for installation to 
finish][21] +(等待安装完成) + +安装完成后重新启动系统,你就能使用新的 Linux 发行版。 + +在以上的例子中,我可以在新的 Linux Mint 19 中使用原有的 Elementary OS 中的整个 home 目录,并且其中所有视频和图片都原封不动。岂不美哉? + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/replace-linux-from-dual-boot/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Replace-Linux-Distro-from-dual-boot.png +[2]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/ +[3]: https://itsfoss.com/linux-mint-vs-ubuntu/ +[4]: https://www.linuxmint.com/ +[5]: https://itsfoss.com/uninstall-ubuntu-linux-windows-dual-boot/ +[6]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/ +[7]: https://www.ubuntu.com/ +[8]: https://itsfoss.com/guide-install-elementary-os-luna/ +[9]: https://etcher.io/ +[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-1.jpg +[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-2.jpg +[12]: https://itsfoss.com/swap-size/ +[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-3.jpg +[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-4.jpg +[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-5.jpg +[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-6.jpg +[17]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-7.jpg +[18]: 
https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-8.jpg +[19]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-9.jpg +[20]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-10.jpg +[21]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/replace-linux-with-another-11.jpg + diff --git a/translated/tech/20180926 3 open source distributed tracing tools.md b/translated/tech/20180926 3 open source distributed tracing tools.md new file mode 100644 index 0000000000..773ff3f940 --- /dev/null +++ b/translated/tech/20180926 3 open source distributed tracing tools.md @@ -0,0 +1,88 @@ +三个开源的分布式追踪工具 +====== + +这几个工具对复杂软件系统中的实时事件做了可视化,能帮助你快速发现性能问题。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8) + +分布式追踪系统能够从头到尾地追踪分布式系统中的请求,跨越多个应用、服务、数据库以及像代理这样的中间件。它能帮助你更深入地理解系统中到底发生了什么。追踪系统以图形化的方式,展示出每个已知步骤以及某个请求在每个步骤上的耗时。 + +用户可以通过这些展示来判断系统的哪个环节有延迟或阻塞,当请求失败时,运维和开发人员可以看到准确的问题源头,而不需要去测试整个系统,比如用二叉查找树的方法去定位问题。在开发迭代的过程中,追踪系统还能够展示出可能引起性能变化的环节。通过异常行为的警告自动地感知到性能在退化,总是比客户告诉你要好。 + +追踪是怎么工作的呢?给每个请求分配一个特殊 ID,这个 ID 通常会插入到请求头部中。它唯一标识了对应的事务。一般把事务叫做 trace,trace 是抽象整个事务的概念。每一个 trace 由 span 组成,span 代表着一次请求中真正执行的操作,比如一次服务调用,一次数据库请求等。每一个 span 也有自己唯一的 ID。span 之下也可以创建子 span,子 span 可以有多个父 span。 + +当一次事务(或者说 trace)运行过之后,就可以在追踪系统的表示层上搜索了。有几个工具可以用作表示层,我们下文会讨论,不过,我们先看下面的图,它是我在 [Istio walkthrough][2] 视频教程中提到的 [Jaeger][1] 界面,展示了单个 trace 中的多个 span。很明显,这个图能让你一目了然地对事务有更深的了解。 + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_jaeger_istio_0.png) + +这个 demo 使用了 Istio 内置的 OpenTracing 实现,所以我甚至不需要修改自己的应用代码就可以获得追踪数据。我也用到了 Jaeger,它是兼容 OpenTracing 的。 + +那么 OpenTracing 到底是什么呢?我们来看看。 + +### OpenTracing API + +[OpenTracing][3] 是源自 [Zipkin][4] 的规范,以提供跨平台兼容性。它提供了对厂商中立的 API,用来向应用程序添加追踪功能并将追踪数据发送到分布式的追踪系统。按照 OpenTracing 规范编写的库,可以被任何兼容 OpenTracing 的系统使用。采用这个开放标准的开源工具有 
Zipkin,Jaeger,和 Appdash 等。甚至像 [Datadog][5] 和 [Instana][6] 这种付费工具也在采用。因为现在 OpenTracing 已经无处不在,这样的趋势有望继续发展下去。 + +### OpenCensus + +OpenTracing 已经说过了,可 [OpenCensus][7] 又是什么呢?它在搜索结果中老是出现。它是一个和 OpenTracing 完全不同或者互补的竞争标准吗? + +这个问题的答案取决于你的提问对象。我先尽我所能地解释一下他们的不同(按照我的理解):OpenCensus 更加全面或者说它包罗万象。OpenTracing 专注于建立开放的 API 和规范,而不是为每一种开发语言和追踪系统都提供开放的实现。OpenCensus 不仅提供规范,还提供开发语言的实现,和连接协议,而且它不仅只做追踪,还引入了额外的度量指标,这些一般不在分布式追踪系统的职责范围。 + +使用 OpenCensus,我们能够在运行着应用程序的主机上查看追踪数据,但它也有个可插拔的导出器系统,用于导出数据到中心聚合器。目前 OpenCensus 团队提供的导出器包括 Zipkin,Prometheus,Jaeger,Stackdriver,Datadog 和 SignalFx,不过任何人都可以创建一个导出器。 + +依我看这两者有很多重叠的部分,没有哪个一定比另外一个好,但是重要的是,要知道它们做什么事情和不做什么事情。OpenTracing 主要是一个规范,具体的实现和独断的设计由其他人来做。OpenCensus 更加独断地为本地组件提供了全面的解决方案,但是仍然需要其他系统做远程的聚合。 + +### 可选工具 + +#### Zipkin + +Zipkin 是最早出现的这类工具之一。 谷歌在 2010 年发表了介绍其内部追踪系统 Dapper 的[论文][8],Twitter 以此为基础开发了 Zipkin。Zipkin 的开发语言 Java,用 Cassandra 或 ElasticSearch 作为可扩展的存储后端,这些选择能满足大部分公司的需求。Zipkin 支持的最低 Java 版本是 Java 6,它也使用了 [Thrift][9] 的二进制通信协议,Thrift 在 Twitter 的系统中很流行,现在作为 Apache 项目在托管。 + +这个系统包括上报器(客户端),数据收集器,查询服务和一个 web 界面。Zipkin 只传输一个带事务上下文的 trace ID 来告知接收者追踪的进行,所以说在生产环境中是安全的。每一个客户端收集到的数据,会异步地传输到数据收集器。收集器把这些 span 的数据存到数据库,web 界面负责用可消费的格式展示这些数据给用户。客户端传输数据到收集器有三种方式:HTTP,Kafka 和 Scribe。 + +[Zipkin 社区][10] 还提供了 [Brave][11],一个跟 Zipkin 兼容的 Java 客户端的实现。由于 Brave 没有任何依赖,所以它不会拖累你的项目,也不会使用跟你们公司标准不兼容的库来搞乱你的项目。除 Brave 之外,还有很多其他的 Zipkin 客户端实现,因为 Zipkin 和 OpenTracing 标准是兼容的,所以这些实现也能用到其他的分布式追踪系统中。流行的 Spring 框架中一个叫 [Spring Cloud Sleuth][12] 的分布式追踪组件,它和 Zipkin 是兼容的。 + +#### Jaeger + +[Jaeger][1] 来自 Uber,是一个比较新的项目,[CNCF][13] (云原生计算基金会)已经把 Jaeger 托管为孵化项目。Jaeger 使用 Golang 开发,因此你不用担心在服务器上安装依赖的问题,也不用担心开发语言的解释器或虚拟机的开销。和 Zipkin 类似,Jaeger 也支持用 Cassandra 和 ElasticSearch 做可扩展的存储后端。Jaeger 也完全兼容 OpenTracing 标准。 + +Jaeger 的架构跟 Zipkin 很像,有客户端(上报器),数据收集器,查询服务和一个 web 界面,不过它还有一个在各个服务器上运行着的代理,负责在服务器本地做数据聚合。代理通过一个 UDP 连接接收数据,然后分批处理,发送到数据收集器。收集器接收到的数据是 [Thrift][14] 协议的格式,它把数据存到 Cassandra 或者 ElasticSearch 中。查询服务能直接访问数据库,并给 web 界面提供所需的信息。 + +默认情况下,Jaeger 客户端不会采集所有的追踪数据,只抽样了 0.1% 的( 1000 个采 
1 个)追踪数据。对大多数系统来说,保留所有的追踪数据并传输的话就太多了。不过,通过配置代理可以调整这个值,客户端会从代理获取自己的配置。这个抽样并不是完全随机的,并且正在变得越来越好。Jaeger 使用概率抽样,试图对是否应该对新踪迹进行抽样进行有根据的猜测。 自适应采样已经在[路线图][15],它将通过添加额外的,能够帮助做决策的上下文,来改进采样算法。 + +#### Appdash + +[Appdash][16] 也是一个用 Golang 写的分布式追踪系统,和 Jaeger 一样。Appdash 是 [Sourcegraph][17] 公司基于谷歌的 Dapper 和 Twitter 的 Zipkin 开发的。同样的,它也支持 Opentracing 标准,不过这是后来添加的功能,依赖了一个与默认组件不同的组件,因此增加了风险和复杂度。 + +从高层次来看,Appdash 的架构主要有三个部分:客户端,本地收集器和远程收集器。因为没有很多文档,所以这个架构描述是基于对系统的测试以及查看源码。写代码时需要把 Appdash 的客户端添加进来。 Appdash 提供了 Python,Golang 和 Ruby 的实现,不过 OpenTracing 库可以与 Appdash 的 OpenTracing 实现一起使用。 客户端收集 span 数据,并将它们发送到本地收集器。然后,本地收集器将数据发送到中心的 Appdash 服务器,这个服务器上运行着自己的本地收集器,它的本地收集器是其他所有节点的远程收集器。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/distributed-tracing-tools + +作者:[Dan Barker][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[belitex](https://github.com/belitex) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/barkerd427 +[1]: https://www.jaegertracing.io/ +[2]: https://www.youtube.com/watch?v=T8BbeqZ0Rls +[3]: http://opentracing.io/ +[4]: https://zipkin.io/ +[5]: https://www.datadoghq.com/ +[6]: https://www.instana.com/ +[7]: https://opencensus.io/ +[8]: https://research.google.com/archive/papers/dapper-2010-1.pdf +[9]: https://thrift.apache.org/ +[10]: https://zipkin.io/pages/community.html +[11]: https://github.com/openzipkin/brave +[12]: https://cloud.spring.io/spring-cloud-sleuth/ +[13]: https://www.cncf.io/ +[14]: https://en.wikipedia.org/wiki/Apache_Thrift +[15]: https://www.jaegertracing.io/docs/roadmap/#adaptive-sampling +[16]: https://github.com/sourcegraph/appdash +[17]: https://about.sourcegraph.com/ diff --git a/translated/tech/20180927 How To Find And Delete Duplicate Files In Linux.md b/translated/tech/20180927 How To Find And Delete Duplicate Files In Linux.md new file mode 
100644 index 0000000000..c1b637bf2f --- /dev/null +++ b/translated/tech/20180927 How To Find And Delete Duplicate Files In Linux.md @@ -0,0 +1,439 @@ +如何在 Linux 中找到并删除重复文件 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/Find-And-Delete-Duplicate-Files-720x340.png) + +在编辑或修改配置文件或旧文件前,我经常会把它们备份到硬盘的某个地方,因此我如果意外地改错了这些文件,我可以从备份中恢复它们。但问题是如果我忘记清理备份文件,一段时间之后,我的磁盘会被这些大量重复文件填满。我觉得要么是懒得清理这些旧文件,要么是担心可能会删掉重要文件。如果你们像我一样,在类 Unix 操作系统中,大量多版本的相同文件放在不同的备份目录,你可以使用下面的工具找到并删除重复文件。 + +**提醒一句:** + +在删除重复文件的时请尽量小心。如果你不小心,也许会导致[**意外丢失数据**][1]。我建议你在使用这些工具的时候要特别注意。 + +### 在 Linux 中找到并删除重复文件 + + +出于本指南的目的,我将讨论下面的三个工具: + + 1. Rdfind + 2. Fdupes + 3. FSlint + + + +这三个工具是免费的、开源的,且运行在大多数类 Unix 系统中。 + +##### 1. Rdfind + +**Rdfind** 代表找到找到冗余数据,是一个通过访问目录和子目录来找出重复文件的免费、开源的工具。它是基于文件内容而不是文件名来比较。Rdfind 使用**排序**算法来区分原始文件和重复文件。如果你有两个或者更多的相同文件,Rdfind 会很智能的找到原始文件并认定剩下的文件为重复文件。一旦找到副本文件,它会向你报告。你可以决定是删除还是使用[**硬链接**或者**符号(软)链接**][2]代替它们。 + +**安装 Rdfind** + +Rdfind 存在于 [**AUR**][3] 中。因此,在基于 Arch 的系统中,你可以像下面一样使用任一如 [**Yay**][4] AUR 程序助手安装它。 + +``` +$ yay -S rdfind + +``` + +在 Debian、Ubuntu、Linux Mint 上: + +``` +$ sudo apt-get install rdfind + +``` + +在 Fedora 上: + +``` +$ sudo dnf install rdfind + +``` + +在 RHEL、CentOS 上: + +``` +$ sudo yum install epel-release + +$ sudo yum install rdfind + +``` + +**用法** + +一旦安装完成,仅带上目录路径运行 Rdfind 命令就可以扫描重复文件。 + +``` +$ rdfind ~/Downloads + +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/rdfind-1.png) + +正如你看到上面的截屏,Rdfind 命令将扫描 ~/Downloads 目录,并将结果存储到当前工作目录下一个名为 **results.txt** 的文件中。你可以在 results.txt 文件中看到可能是重复文件的名字。 + +``` +$ cat results.txt +# Automatically generated +# duptype id depth size device inode priority name +DUPTYPE_FIRST_OCCURRENCE 1469 8 9 2050 15864884 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test5.regex +DUPTYPE_WITHIN_SAME_TREE -1469 8 9 2050 15864886 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test6.regex +[...] 
+DUPTYPE_FIRST_OCCURRENCE 13 0 403635 2050 15740257 1 /home/sk/Downloads/Hyperledger(1).pdf +DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperledger.pdf +# end of file + +``` + +通过检查 results.txt 文件,你可以很容易的找到那些重复文件。如果愿意你可以手动的删除它们。 + +此外,你可在不修改其他事情情况下使用 **-dryrun** 选项找出所有重复文件,并在终端上输出汇总信息。 + +``` +$ rdfind -dryrun true ~/Downloads + +``` + +一旦找到重复文件,你可以使用硬链接或符号链接代替他们。 + +使用硬链接代替所有重复文件,运行: + +``` +$ rdfind -makehardlinks true ~/Downloads + +``` + +使用符号链接/软链接代替所有重复文件,运行: + +``` +$ rdfind -makesymlinks true ~/Downloads + +``` + +目录中有一些空文件,也许你想忽略他们,你可以像下面一样使用 **-ignoreempty** 选项: + +``` +$ rdfind -ignoreempty true ~/Downloads + +``` + +如果你不再想要这些旧文件,删除重复文件,而不是使用硬链接或软链接代替它们。 + +删除重复文件,就运行: + +``` +$ rdfind -deleteduplicates true ~/Downloads + +``` + +如果你不想忽略空文件,并且和所哟重复文件一起删除。运行: + +``` +$ rdfind -deleteduplicates true -ignoreempty false ~/Downloads + +``` + +更多细节,参照帮助部分: + +``` +$ rdfind --help + +``` + +手册页: + +``` +$ man rdfind + +``` + +##### 2. Fdupes + +**Fdupes** 是另一个在指定目录以及子目录中识别和移除重复文件的命令行工具。这是一个使用 **C** 语言编写的免费、开源工具。Fdupes 通过对比文件大小、部分 MD5 签名、全部 MD5 签名,最后执行逐个字节对比校验来识别重复文件。 + +与 Rdfind 工具类似,Fdupes 附带非常少的选项来执行操作,如: + + * 在目录和子目录中递归的搜索重复文件 + * 从计算中排除空文件和隐藏文件 + * 显示重复文件大小 + * 出现重复文件时立即删除 + * 使用不同的拥有者/组或权限位来排除重复文件 + * 更多 + + + +**安装 Fdupes** + +Fdupes 存在于大多数 Linux 发行版的默认仓库中。 + +在 Arch Linux 和它的变种如 Antergos、Manjaro Linux 上,如下使用 Pacman 安装它。 + +``` +$ sudo pacman -S fdupes + +``` + +在 Debian、Ubuntu、Linux Mint 上: + +``` +$ sudo apt-get install fdupes + +``` + +在 Fedora 上: + +``` +$ sudo dnf install fdupes + +``` + +在 RHEL、CentOS 上: + +``` +$ sudo yum install epel-release + +$ sudo yum install fdupes + +``` + +**用法** + +Fdupes 用法非常简单。仅运行下面的命令就可以在目录中找到重复文件,如:**~/Downloads**. 
+ +``` +$ fdupes ~/Downloads + +``` + +我系统中的样例输出: + +``` +/home/sk/Downloads/Hyperledger.pdf +/home/sk/Downloads/Hyperledger(1).pdf + +``` +你可以看到,在 **/home/sk/Downloads/** 目录下有一个重复文件。它仅显示了父级目录中的重复文件。如何显示子目录中的重复文件?像下面一样,使用 **-r** 选项。 + +``` +$ fdupes -r ~/Downloads + +``` + +现在你将看到 **/home/sk/Downloads/** 目录以及子目录中的重复文件。 + +Fdupes 也可用来从多个目录中迅速查找重复文件。 + +``` +$ fdupes ~/Downloads ~/Documents/ostechnix + +``` + +你甚至可以搜索多个目录,递归搜索其中一个目录,如下: + +``` +$ fdupes ~/Downloads -r ~/Documents/ostechnix + +``` + +上面的命令将搜索 “~/Downloads” 目录,“~/Documents/ostechnix” 目录和它的子目录中的重复文件。 + +有时,你可能想要知道一个目录中重复文件的大小。你可以使用 **-S** 选项,如下: + +``` +$ fdupes -S ~/Downloads +403635 bytes each: +/home/sk/Downloads/Hyperledger.pdf +/home/sk/Downloads/Hyperledger(1).pdf + +``` + +类似的,为了显示父目录和子目录中重复文件的大小,使用 **-Sr** 选项。 + +我们可以在计算时分别使用 **-n** 和 **-A** 选项排除空白文件以及排除隐藏文件。 + +``` +$ fdupes -n ~/Downloads + +$ fdupes -A ~/Downloads + +``` + +在搜索指定目录的重复文件时,第一个命令将排除零长度文件,后面的命令将排除隐藏文件。 + +汇总重复文件信息,使用 **-m** 选项。 + +``` +$ fdupes -m ~/Downloads +1 duplicate files (in 1 sets), occupying 403.6 kilobytes + +``` + +删除所有重复文件,使用 **-d** 选项。 + +``` +$ fdupes -d ~/Downloads + +``` + +样例输出: + +``` +[1] /home/sk/Downloads/Hyperledger Fabric Installation.pdf +[2] /home/sk/Downloads/Hyperledger Fabric Installation(1).pdf + +Set 1 of 1, preserve files [1 - 2, all]: + +``` + +这个命令将提示你保留还是删除所有其他重复文件。输入任一号码保留相应的文件,并删除剩下的文件。当使用这个选项的时候需要更加注意。如果不小心,你可能会删除原文件。 + +如果你想要每次保留每个重复文件集合的第一个文件,且无提示的删除其他文件,使用 **-dN** 选项(不推荐)。 + +``` +$ fdupes -dN ~/Downloads + +``` + +当遇到重复文件时删除它们,使用 **-I** 标志。 + +``` +$ fdupes -I ~/Downloads + +``` + +关于 Fdupes 的更多细节,查看帮助部分和 man 页面。 + +``` +$ fdupes --help + +$ man fdupes + +``` + +##### 3. 
FSlint + +**FSlint** 是另外一个查找重复文件的工具,有时我用它去掉 Linux 系统中不需要的重复文件并释放磁盘空间。不像另外两个工具,FSlint 有 GUI 和 CLI 两种模式。因此对于新手来说它更友好。FSlint 不仅仅找出重复文件,也找出坏符号链接、坏名字文件、临时文件、坏 IDS、空目录和非剥离二进制文件等等。 + +**安装 FSlint** + +FSlint 存在于 [**AUR**][5],因此你可以使用任一 AUR 助手安装它。 + +``` +$ yay -S fslint + +``` + +在 Debian、Ubuntu、Linux Mint 上: + +``` +$ sudo apt-get install fslint + +``` + +在 Fedora 上: + +``` +$ sudo dnf install fslint + +``` + +在 RHEL,CentOS 上: + +``` +$ sudo yum install epel-release +$ sudo yum install fslint + +``` + +一旦安装完成,从菜单或者应用程序启动器启动它。 + +FSlint GUI 展示如下: + +![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-1.png) + +如你所见,FSlint 接口友好、一目了然。在 **Search path** 栏,添加你要扫描的目录路径,点击左下角 **Find** 按钮查找重复文件。验证递归选项可以在目录和子目录中递归的搜索重复文件。FSlint 将快速的扫描给定的目录并列出重复文件。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/fslint-2.png) + +从列表中选择那些要清理的重复文件,也可以选择 Save、Delete、Merge 和 Symlink 操作他们。 + +在 **Advanced search parameters** 栏,你可以在搜索重复文件的时候指定排除的路径。 + +![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-3.png) + +**FSlint 命令行选项** + +FSlint 提供下面的 CLI 工具集在你的文件系统中查找重复文件。 + + * **findup** — 查找重复文件 + * **findnl** — 查找 Lint 名称文件(有问题的文件名) + * **findu8** — 查找非法的 utf8 编码文件 + * **findbl** — 查找坏链接(有问题的符号链接) + * **findsn** — 查找同名文件(可能有冲突的文件名) + * **finded** — 查找空目录 + * **findid** — 查找死用户的文件 + * **findns** — 查找非剥离的可执行文件 + * **findrs** — 查找文件中多于的空白 + * **findtf** — 查找临时文件 + * **findul** — 查找可能未使用的库 + * **zipdir** — 回收 ext2 目录实体下浪费的空间 + + + +所有这些工具位于 **/usr/share/fslint/fslint/fslint** 下面。 + + +例如,在给定的目录中查找重复文件,运行: + +``` +$ /usr/share/fslint/fslint/findup ~/Downloads/ + +``` + +类似的,找出空目录命令是: + +``` +$ /usr/share/fslint/fslint/finded ~/Downloads/ + +``` + +获取每个工具更多细节,例如:**findup**,运行: + +``` +$ /usr/share/fslint/fslint/findup --help + +``` + +关于 FSlint 的更多细节,参照帮助部分和 man 页。 + +``` +$ /usr/share/fslint/fslint/fslint --help + +$ man fslint + +``` + +##### 总结 + +现在你知道在 Linux 中,使用三个工具来查找和删除不需要的重复文件。这三个工具中,我经常使用 Rdfind。这并不意味着其他的两个工具效率低下,因为到目前为止我更喜欢 
Rdfind。好了,到你了。你的最喜欢哪一个工具呢?为什么?在下面的评论区留言让我们知道吧。 + +就到这里吧。希望这篇文章对你有帮助。更多的好东西就要来了,敬请期待。 + +谢谢! + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-find-and-delete-duplicate-files-in-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[pygmalion666](https://github.com/pygmalion666) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/ +[2]: https://www.ostechnix.com/explaining-soft-link-and-hard-link-in-linux-with-examples/ +[3]: https://aur.archlinux.org/packages/rdfind/ +[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[5]: https://aur.archlinux.org/packages/fslint/ diff --git a/translated/tech/20180928 What containers can teach us about DevOps.md b/translated/tech/20180928 What containers can teach us about DevOps.md new file mode 100644 index 0000000000..d514d8ba0b --- /dev/null +++ b/translated/tech/20180928 What containers can teach us about DevOps.md @@ -0,0 +1,105 @@ +容器技术对指导我们 DevOps 的一些启发 +====== + +容器技术的使用支撑了目前 DevOps 三大主要实践:流水线,及时反馈,持续实验与学习以改进。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf) + +容器技术与 DevOps 二者在发展的过程中是互相促进的关系。得益于 DevOps 的设计理念愈发先进,容器生态系统在设计上与组件选择上也有相应发展。同时,由于容器技术在生产环境中的使用,反过来也促进了 DevOps 三大主要实践:[支撑DevOps的三个实践][1]. 
+ + +### 工作流 + +**容器中的工作流** + +每个容器都可以看成一个独立的封闭仓库,当你置身其中,不需要管外部的系统环境、集群环境、以及其他基础设施,不管你在里面如何折腾,只要对外提供正常的功能就好。一般来说,容器内运行的应用,一般作为整个应用系统架构的一部分:比如 web API,数据库,任务执行,缓存系统,垃圾回收器等。运维团队一般会限制容器的资源使用,并在此基础上建立完善的容器性能监控服务,从而降低其对基础设施或者下游其他用户的影响。 + +**现实中的工作流** + +那些跟“容器”一样独立工作的团队,也可以借鉴这种限制容器占用资源的策略。因为无论是在现实生活中的工作流(代码发布、构建基础设施,甚至制造[Spacely’s Sprockets][2]等),还是技术中的工作流(开发、测试、试运行、发布)都使用了这样的线性工作流,一旦某个独立的环节或者工作团队出现了问题,那么整个下游都会受到影响,虽然使用我们这种线性的工作流有效降低了工作耦合性。 + +**DevOps 中的工作流** + +DevOps 中的第一条原则,就是掌控整个执行链路的情况,努力理解系统如何协同工作,并理解其中出现的问题如何对整个过程产生影响。为了提高流程的效率,团队需要持续不断的找到系统中可能存在的性能浪费以及忽视的点,并最终修复它们。 + + +> “践行这样的工作流后,可以避免传递一个已知的缺陷到工作流的下游,避免产生一个可能会导致全局性能退化的局部优化,持续优化工作流的性能,持续加深对于系统的理解” + +–Gene Kim, [支撑DevOps的三个实践][3], IT 革命, 2017.4.25 + +### 反馈 + +**容器中的反馈** + +除了限制容器的资源,很多产品还提供了监控和通知容器性能指标的功能,从而了解当容器工作不正常时,容器内部处于什么样的工作状态。比如 目前[流行的][5][Prometheus][4],可以用来从容器和容器集群中收集相应的性能指标数据。容器本身特别适用于分隔应用系统,以及打包代码和其运行环境,但也同时带来不透明的特性,这时从中快速的收集信息,从而解决发生在其内部出现的问题,就显得尤为重要了。 + +**现实中的反馈** + +在现实中,从始至终同样也需要反馈。一个高效的处理流程中,及时的反馈能够快速的定位事情发生的时间。反馈的关键词是“快速”和“相关”。当一个团队处理大量不相关的事件时,那些真正需要快速反馈的重要信息,很容易就被忽视掉,并向下游传递形成更严重的问题。想象下[如果露西和埃塞尔][6]能够很快的意识到:传送带太快了,那么制作出的巧克力可能就没什么问题了(尽管这样就不太有趣了)。 + +**DevOps and feedback** + +DevOps 中的第二条原则,就是快速收集所有的相关有用信息,这样在出现的问题影响到其他开发进程之前,就可以被识别出。DevOps 团队应该努力去“优化下游“,以及快速解决那些可能会影响到之后团队的问题。同工作流一样,反馈也是一个持续的过程,目标是快速的获得重要的信息以及当问题出现后能够及时的响应。 + +> "快速的反馈对于提高技术的质量、可用性、安全性至关重要。" + +–Gene Kim, et al., DevOps 手册:如何在技​​术组织中创造世界级的敏捷性,可靠性和安全性, IT 革命, 2016 + +### 持续实验与学习 + +**容器中的持续实验与学习** + +如何让”持续的实验与学习“更具操作性是一个不小的挑战。容器让我们的开发工程师和运营团队,在不需要掌握太多边缘或难以理解的东西情况下,依然可以安全地进行本地和生产环境的测试,这在之前是难以做到的。即便是一些激进的实验,容器技术仍然让我们轻松地进行版本控制、记录、分享。 + +**现实中的持续实验与学习** + +举个我自己的例子:多年前,作为一个年轻、初出茅庐的系统管理员(仅仅工作三周),我被要求对一个运行某个大学核心IT部门网站的Apache虚拟主机进行更改。由于没有易于使用的测试环境,我直接在生产的站点上进行了配置修改,当时觉得配置没问题就发布了,几分钟后,我隔壁无意中听到了同事说: + +”等会,网站挂了?“ + +“没错,怎么回事?” + +很多人蒙圈了…… + +在被嘲讽之后(真实的嘲讽),我一头扎在工作台上,赶紧撤销我之前的更改。当天下午晚些时候,部门主管 - 我老板的老板的老板来到我的工位上,问发生了什么事。 +“别担心,”她告诉我。“我们不会生你的气,这是一个错误,现在你已经学会了。“ + +而在容器中,这种情形很容易的进行测试,并且也很容易在部署生产环境之前,被那些经验老道的团队成员发现。 + +**DevOps 中的持续实验与学习** 
+
+做实验的初衷,是我们每个人都希望通过一些改变来提高某些东西,并勇敢地通过实验来验证我们的想法。对于 DevOps 团队来说,失败无论对团队还是个人来说都是经验,所以不要担心失败。团队中的每个成员不断学习、共享,也会不断提升其所在团队与组织的水平。
+
+随着系统变得越来越复杂,我们更需要将注意力放在关键的点上:上面提到的两条原则主要关注的是流程的目前全貌,而持续的学习关注的则是整个项目、人员、团队、组织的未来。它不仅对流程产生了影响,还对流程中的每个人产生影响。
+
+> "无风险的实验让我们能够不懈地改进我们的工作,但也要求我们采用之前没有用过的工作方式"
+
+–Gene Kim, et al., [凤凰计划:让你了解 IT、DevOps 以及如何取得商业成功][7], IT 革命, 2013
+
+### 容器技术给我们 DevOps 上的启迪
+
+学习如何有效地使用容器,其实就是在学习 DevOps 的三条原则:工作流、反馈以及持续实验和学习。从整体上看待应用程序和基础设施,而不是对容器外的东西置若罔闻,这教会我们考虑系统的所有部分,了解其上游和下游影响,打破孤岛,并作为一个团队工作,以提高全局性能、深入理解整个系统。通过努力提供及时准确的反馈,我们可以在组织内部建立有效的反馈模式,以便在问题产生影响之前发现它。最后,提供一个安全的环境来尝试新的想法并从中学习,教会我们创造一种文化:在这种文化中,失败一方面促进了我们知识的增长,另一方面通过有根据的猜测,可以为复杂的问题带来新的、优雅的解决方案。
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/9/containers-can-teach-us-devops
+
+作者:[Chris Hermansen][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/littleji)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clhermansen
+[1]: https://itrevolution.com/the-three-ways-principles-underpinning-devops/
+[2]: https://en.wikipedia.org/wiki/The_Jetsons
+[3]: http://itrevolution.com/the-three-ways-principles-underpinning-devops
+[4]: https://prometheus.io/
+[5]: https://opensource.com/article/18/9/prometheus-operational-advantage
+[6]: https://www.youtube.com/watch?v=8NPzLBSBzPI
+[7]: https://itrevolution.com/book/the-phoenix-project/
diff --git a/translated/tech/20181002 How use SSH and SFTP protocols on your home network.md b/translated/tech/20181002 How use SSH and SFTP protocols on your home network.md
new file mode 100644
index 0000000000..db202a3043
--- /dev/null
+++ b/translated/tech/20181002 How use SSH and SFTP protocols on your home network.md
@@ -0,0 +1,74 @@
+如何在家中使用 SSH 和 SFTP 协议
+======
+
+通过 SSH 和 SFTP 协议,我们能够访问其他设备,高效而安全地传输文件,以及完成更多的事情。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab)
+
+多年前,我决定配置一台额外的电脑,以便我在工作时能够访问它来传输所需要的文件。最基本的一步,是要求你的网络服务提供商(ISP)提供一个固定的 IP 地址。
+
+确保你的系统可以被安全地访问,这是很重要的一步。在我的这种情况下,我计划只在工作的时候访问它,所以我可以限制允许访问的 IP 地址。即便如此,你依然要尽可能多地采取安全措施。一旦你把它搭建起来,全世界的人马上就都能访问你的系统,这既令人惊奇,也令人恐慌。你可以通过日志文件发现这一点。我推测有探测机器人在竭尽所能地搜寻那些没有安全措施的系统。
+
+在我搭建好系统不久后,我觉得这种访问方式更像一个玩具,而不是我真正想要的,因此我把它关掉了,好让自己不再为它担心。尽管如此,SSH 和 SFTP 在家庭网络中还有其他的用途,而这样的家庭网络至少是现成的。
+
+有一个必备条件:你家的另一台电脑必须已经开机,至于它是不是很老旧则没有影响。你还需要知道那台电脑的 IP 地址。有两个方法可以做到:一个是通过网页访问你的路由器,其地址通常类似于 **192.168.1.254**。通过一些搜索,要找出当前开机并连接到 eth0 或者 WiFi 的系统是足够简单的,但找出哪一台才是你感兴趣的电脑,就是个挑战了。
+
+在那台电脑上直接查询则更简单:打开 shell,输入:
+
+```
+ifconfig
+
+```
+
+命令会输出一些信息,你所需要的信息在 `inet` 后面,看起来类似于 **192.168.1.234**。找到它之后,回到你的客户端电脑,在命令行中输入:
+
+```
+ssh gregp@192.168.1.234
+
+```
+
+要让上面的命令正常执行,**gregp** 必须是主机系统中正确的用户名,还需要输入该用户的密码。如果你键入的密码和用户名都是正确的,你就通过 shell 环境连接上了那台电脑。坦白说,我并不经常使用 SSH,只是偶尔用它运行 `dnf` 来升级另一台电脑。通常,我用的是 SFTP:
+
+```
+sftp gregp@192.168.1.234
+
+```
+
+我非常需要一种更简单的方法,把文件从一台电脑传输到另一台电脑。相对于闪存盘和额外的设备,它更方便,耗时也更少。
+
+连接建立成功后,SFTP 有两个基本的命令:`get`,从主机接收文件;`put`,向主机发送文件。在连接之前,我通常会在客户端先切换到我想接收或发送文件的文件夹下。连接之后,你将位于顶层目录 **/home/gregp**。你可以像在客户端一样使用 `cd` 在主机上改变工作路径,并用 `ls` 来确认自己的位置。
+
+在客户端一侧,如果你想改变工作路径,用 `lcd` 命令(即 **local change directory**)。类似的,用 `lls` 来显示客户端工作目录的内容。
+
+如果主机上没有你想用的目录名,该怎么办?用 `mkdir` 在主机上创建一个新的文件夹,或者把整个文件夹拷贝到主机:
+
+```
+put -r thisDir/
+
+```
+
+在主机上创建文件夹以及传输文件和子文件夹都非常快,能达到硬件的上限,在网络传输的过程中不会遇到瓶颈。想查看 SFTP 的全部功能,请看:
+
+```
+man sftp
+
+```
+
+我甚至可以在电脑上的 Windows 虚拟机中使用 SFTP,这也是配置虚拟机而不是双系统的另一个优势。这让我能够向系统的 Linux 部分传入或传出文件。到目前为止,我只用过 Windows 下的客户端。
+
+你能够访问任何通过有线或者 WiFi 连接到你路由器的设备。我暂时使用一个叫做 [SSHDroid][1] 的应用,它能够在被动模式下运行 SSH。换句话来说,你能够用你的电脑访问作为主机的 Android 设备。近来我还发现了另外一个应用 [Admin Hands][2],不管你的客户端是桌面还是手机,都能使用 SSH 或者 SFTP 操作
。这个应用对于备份和分享手机照片是极好的。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/ssh-sftp-home-network

作者:[Greg Pittman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[singledo](https://github.com/singledo)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/greg-p
[1]: https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid
[2]: https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US
diff --git a/translated/tech/20181008 Python at the pump- A script for filling your gas tank.md b/translated/tech/20181008 Python at the pump- A script for filling your gas tank.md
new file mode 100644
index 0000000000..396ed17291
--- /dev/null
+++ b/translated/tech/20181008 Python at the pump- A script for filling your gas tank.md
@@ -0,0 +1,101 @@
+使用 Python 为你的油箱加油
+====== 
+我来介绍一下我是如何使用 Python 来节省加油成本的。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB)
+
+我最近在开一辆烧 93 号汽油的车子。根据汽车制造商的说法,它只需要加 91 号汽油就可以了。然而,在美国只能买到 87 号、89 号、93 号汽油。而我家附近的油价是每升高一个号数,每加仑就要多付 30 美分,因此如果加 93 号汽油,每加仑就要多花 60 美分。为什么不省下这笔钱呢?
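顺便可以先粗略算一笔账,看看这点差价累积起来有多少。下面是一个简单的 Python 示意(其中油箱容量和每年加油次数都是我为演示而假设的数字,并非文中给出):

```python
price_step = 0.30    # 每升高一个号数,每加仑多付的美元数(文中给出)
tank_gallons = 15.0  # 油箱容量(加仑),演示用的假设值
fills_per_year = 40  # 每年加油次数,演示用的假设值

# 一箱 93 号比一箱 89 号多花的钱(二者相差一个号数,即每加仑 0.30 美元)
extra_per_tank = price_step * tank_gallons
extra_per_year = extra_per_tank * fills_per_year
print(extra_per_tank)  # 4.5
print(extra_per_year)  # 180.0
```

在这些假设下,一年下来的差价已经相当可观了,所以值得琢磨一下怎么省。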
+ +一开始很简单,只需要先加满 93 号汽油,然后在油量表显示油箱半满的时候,用 89 号汽油加满,就得到一整箱 91 号汽油了。但接下来就麻烦了,剩下半箱 91 号汽油加上半箱 93 号汽油,只会变成一箱 92 号汽油,再接下来呢?如果继续算下去,只会越来越混乱。这个时候 Python 就派上用场了。 + +我的方案是,可以根据汽油的实时状态,不断向油箱中加入 93 号汽油或者 89 号汽油,而最终目标是使油箱内汽油的号数不低于 91。我需要做的是只是通过一些算法来判断新旧汽油混合之后的号数。使用多项式方程或许也可以解决这个问题,但如果使用 Python,好像只需要进行循环就可以了。 + +``` +#!/usr/bin/env python +# octane.py + +o = 93.0 +newgas = 93.0 # 这个变量记录上一次加入的汽油号数 +i = 1 +while i < 21: # 20 次迭代 (加油次数) + if newgas == 89.0: # 如果上一次加的是 89 号汽油,改加 93 号汽油 + newgas = 93.0 + o = newgas/2 + o/2 # 当油箱半满的时候就加油 + else: # 如果上一次加的是 93 号汽油,则改加 89 号汽油 + newgas = 89.0 + o = newgas/2 + o/2 # 当油箱半满的时候就加油 + print str(i) + ': '+ str(o) + i += 1 +``` + +在代码中,我首先将变量 `o`(油箱中的当前混合汽油号数)和变量 `newgas`(上一次加入的汽油号数)的初始值都设为 93,然后循环 20 次,也就是分别加入 89 号汽油和 93 号汽油一共 20 次,以保持混合汽油号数稳定。 + +``` +1: 91.0 +2: 92.0 +3: 90.5 +4: 91.75 +5: 90.375 +6: 91.6875 +7: 90.34375 +8: 91.671875 +9: 90.3359375 +10: 91.66796875 +11: 90.333984375 +12: 91.6669921875 +13: 90.3334960938 +14: 91.6667480469 +15: 90.3333740234 +16: 91.6666870117 +17: 90.3333435059 +18: 91.6666717529 +19: 90.3333358765 +20: 91.6666679382 +``` + +从以上数据来看,只需要 10 到 15 次循环,汽油号数就比较稳定了,也相当接近 91 号汽油的目标。这种交替混合直到稳定的现象看起来很有趣,每次交替加入同等量的不同号数汽油,都会趋于稳定。实际上,即使加入的 89 号汽油和 93 号汽油的量不同,也会趋于稳定。 + +因此,我尝试了不同的比例,我认为加入的 93 号汽油需要比 89 号汽油更多一点。在尽量少补充新汽油的情况下,我最终计算到的结果是 89 号汽油要在油箱大约 7/12 满的时候加进去,而 93 号汽油则要在油箱 1/4 满的时候才加进去。 + +我的循环将会更改成这样: + +``` + if newgas == 89.0: + + newgas = 93.0 + o = 3*newgas/4 + o/4 + else: + newgas = 89.0 + o = 5*newgas/12 + 7*o/12 +``` + +以下是从第十次加油开始的混合汽油号数: + +``` +10: 92.5122272978 +11: 91.0487992571 +12: 92.5121998143 +13: 91.048783225 +14: 92.5121958062 +15: 91.048780887 +``` + +如你所见,这个调整会令混合汽油号数始终略高于 91。当然,我的油量表并没有 1/12 的刻度,但是 7/12 略小于 5/8,我可以近似地计算。 + +一个更简单地方案是每次都首先加满 93 号汽油,然后在油箱半满时加入 89 号汽油直到耗尽,这可能会是我的常规方案。但就我个人而言,这种方法并不太好,有时甚至会产生一些麻烦。但对于长途旅行来说,这种方案会相对简便一些。有时我也会因为油价突然下跌而购买一些汽油,所以,这个方案是我可以考虑的一系列选项之一。 + +当然最重要的是:开车不写码,写码不开车! 
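顺带一提,上面循环逼近的稳定值其实可以直接解出来:每一轮加油都是一个线性映射 o → c + k·o,把交替的两次加油合成一个映射后求不动点即可。下面是一个示意性的 Python 片段(`fixed_point` 这个函数是我为演示而写的,并非文中脚本的一部分):

```python
def fixed_point(oct_a, frac_a, oct_b, frac_b):
    """交替加入两种号数的汽油后,混合号数收敛到的两个稳定值。

    oct_a / oct_b 是两种汽油的号数,frac_a / frac_b 是各自加油时
    新汽油占整箱的比例。返回(加完 a 之后, 加完 b 之后)的稳定值。
    """
    # 先加 b 再加 a,合成映射 o -> c + k*o,其不动点为 c / (1 - k)
    k = (1 - frac_a) * (1 - frac_b)
    c = oct_a * frac_a + oct_b * frac_b * (1 - frac_a)
    after_a = c / (1 - k)                              # 加完 a 之后的稳定值
    after_b = oct_b * frac_b + after_a * (1 - frac_b)  # 加完 b 之后的稳定值
    return after_a, after_b

# 半箱方案:两种汽油每次都占半箱,结果与文中迭代出的 91.67 / 90.33 一致
print(fixed_point(93.0, 0.5, 89.0, 0.5))
# 调整后的方案:93 号在 1/4 满时加(占 3/4),89 号在 7/12 满时加(占 5/12),
# 结果与文中的 92.5122 / 91.0488 一致
print(fixed_point(93.0, 3 / 4, 89.0, 5 / 12))
```

这也解释了为什么无论比例如何,迭代总会趋于稳定:只要 k < 1,这个线性映射就必然收敛到它的不动点。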
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/python-gas-pump + +作者:[Greg Pittman][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[HankChow](https://github.com/HankChow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/greg-p + diff --git a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md new file mode 100644 index 0000000000..d90663cd76 --- /dev/null +++ b/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @@ -0,0 +1,177 @@ +cloc –– 计算不同编程语言源代码的行数 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-720x340.png) + +作为一个开发人员,你可能需要不时地向你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。这时,你就需要用到一些代码统计的工具,我知道其中一个是 [**Ohcount**][1]。今天,我遇到了另一个程序,**cloc**。你可以用 cloc 很容易地统计多种语言的源代码行数。它还可以计算空行数、代码行数、实际代码的行数,并通过整齐的表格进行结果输出。cloc 是免费的、开源的跨平台程序,使用 **Perl** 进行开发。 + +### 特点 + +cloc 有很多优势: + +* 安装方便而且易用,不需要额外的依赖项 +* 可移植 +* 支持多种的结果格式导出,包括:纯文本、SQL、JSON、XML、YAML、CSV +* 可以计算 git 的提交数 +* 可递归计算文件夹内的代码行数 +* 可计算压缩后的文件,如:tar、zip、Java 的 .ear 等类型 +* 开源,跨平台 + +### 安装 + +cloc 的安装包在大多数的类 Unix 操作系统的默认软件库内,所以你只需要使用默认的包管理器安装即可。 + +Arch Linux: + +``` +$ sudo pacman -S cloc +``` + +Debian/Ubuntu: + +``` +$ sudo apt-get install cloc +``` + +CentOS/Red Hat/Scientific Linux: + +``` +$ sudo yum install cloc +``` + +Fedora: + +``` +$ sudo dnf install cloc +``` + +FreeBSD: + +``` +$ sudo pkg install cloc +``` + +当然你也可以使用第三方的包管理器,比如 [**NPM**][2]。 + +``` +$ npm install -g cloc +``` + +### 统计多种语言代码数据的使用举例 + +首先来几个简单的例子,比如下面在我目前工作目录中的的 C 代码。 + +``` +$ cat hello.c +#include +int main() +{ + // printf() displays the string inside quotation + printf("Hello, World!"); + return 0; +} +``` + +想要计算行数,只需要简单运行: + 
+``` +$ cloc hello.c +``` + +输出: + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Hello-World-Program.png) + +第一列是被分析文件的编程语言,上面我们可以看到这个文件是用 C 语言编写的。 + +第二列显示的是该种语言有多少文件,图中说明只有一个。 + +第三列显示空行的数量,图中显示是 0 行。 + +第四列显示注释的行数。 + +第五列显示该文件中实际的代码总行数。 + +这是一个有只有 6 行代码的源文件,我们看到统计的还算准确,那么如果用来统计一个行数较多的源文件呢? + +``` +$ cloc file.tar.gz +``` + +输出: + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-1.png) + +上述输出结果如果手动统计准确的代码行数非常困难,但是 cloc 只需要几秒,而且以易读的表格格式显示结果。你还可以在最后查看每个部分的总计,这在分析程序的源代码时非常方便。 + +除了源代码文件,cloc 还能递归计算各个目录及其子目录下的文件、压缩包、甚至 git commit 数目等。 + +文件夹中使用的例子: + +``` +$ cloc dir/ +``` + +![][3] + +子文件夹中使用的例子*: + +``` +$ cloc dir/cloc/tests +``` + +![][4] + +计算一个压缩包中源代码的行数: + +``` +$ cloc archive.zip +``` + +![][5] + +你还可以计算一个 git 项目,也可以像下面这样针对某次提交时的状态统计: + +``` +$ git clone https://github.com/AlDanial/cloc.git + +$ cd cloc + +$ cloc 157d706 +``` + +![][6] + +cloc 可以自动识别一些语言,使用下面的命令查看 cloc 支持的语言: + +``` +$ cloc --show-lang +``` + +更新信息请查阅 cloc 的使用帮助。 + +``` +$ cloc --help +``` + +开始使用吧! 
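cloc 的核心工作,其实就是把源代码的每一行归类为空行、注释或代码。下面用 Python 写一个极简的示意(只识别 C 风格的 `//` 单行注释;`count_lines` 是演示用的函数,远不能覆盖 cloc 支持的各种语言和注释形式):

```python
def count_lines(source: str):
    """把源代码按空行/注释/代码粗略分类(仅识别 // 单行注释)。"""
    blank = comment = code = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            blank += 1
        elif stripped.startswith("//"):
            comment += 1
        else:
            code += 1
    return {"blank": blank, "comment": comment, "code": code}

# 用文中的 hello.c 验证一下
hello_c = """#include <stdio.h>
int main()
{
    // printf() displays the string inside quotation
    printf("Hello, World!");
    return 0;
}"""
print(count_lines(hello_c))  # {'blank': 0, 'comment': 1, 'code': 6}
```

统计结果与文中 cloc 对 hello.c 的输出一致:0 个空行、1 行注释、6 行代码。当然,真正的 cloc 还要处理块注释、字符串里的注释符号等各种边界情况。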
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/cloc-count-the-lines-of-source-code-in-many-programming-languages/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[littleji](https://github.com/littleji) +校对:[pityonline](https://github.com/pityonline) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/ohcount-the-source-code-line-counter-and-analyzer/ +[2]: https://www.ostechnix.com/install-node-js-linux/ +[3]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-2-1.png +[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-4.png +[5]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-3.png +[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-5.png diff --git a/translated/tech/20181010 Design faster web pages, part 1- Image compression.md b/translated/tech/20181010 Design faster web pages, part 1- Image compression.md new file mode 100644 index 0000000000..a34af65920 --- /dev/null +++ b/translated/tech/20181010 Design faster web pages, part 1- Image compression.md @@ -0,0 +1,183 @@ +设计更快的网页——第一部分:图片压缩 +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/02/fasterwebsites1-816x345.jpg) + +很多 Web 开发者都希望做出加载速度很快的网页。在移动设备浏览占比越来越大的背景下,使用响应式设计使得网站在小屏幕下看起来更漂亮只是其中一个方面。Browser Calories 可以展示网页的加载时间——这不单单关系到用户,还会影响到通过加载速度来进行评级的搜索引擎。这个系列的文章介绍了如何使用 Fedora 提供的工具来给网页“瘦身”。 + +### 准备工作 + +在你开始缩减网页之前,你需要明确核心问题所在。为此,你可以使用 [Browserdiet][1]. 这是一个浏览器插件,适用于 Firefox, Opera, Chrome 和其它浏览器。它会对打开的网页进行性能分析,这样你就可以知道应该从哪里入手来缩减网页。 + +然后,你需要一些用来处理的页面。下面的例子是针对 [getferoda.org][2] 的测试截图。一开始,它看起来非常简单,也符合响应式设计。 + +![Browser Diet - getfedora.org 的评分][3] + +然而,BroserDiet 的网页分析表明,这个网页需要加载 1.8MB 的文件。所以,我们现在有活干了! + +### Web 优化 + +网页中包含 281 KB 的 JavaScript 文件,203 KB 的 CSS 文件,还有 1.2 MB 的图片。我们先从最严重的问题——图片开始入手。为了解决问题,你需要的工具集有 GIMP, ImageMagick 和 optipng. 
你可以使用如下命令轻松安装它们: + +``` +sudo dnf install gimp imagemagick optipng + +``` + +比如,我们先拿到这个 6.4 KB 的[文件][4]: + +![][4] + +首先,使用 file 命令来获取这张图片的一些基本信息: + +``` +$ file cinnamon.png +cinnamon.png: PNG image data, 60 x 60, 8-bit/color RGBA, non-interlaced + +``` + +这张只由白色和灰色构成的图片使用 8 位 / RGBA 模式来存储。这种方式并没有那么高效。 + +使用 GIMP,你可以为这张图片设置一个更合适的颜色模式。在 GIMP 中打开 cinnamon.png. 然后,在“图片 > 模式”菜单中将其设置为“灰度模式”。将这张图片以 PNG 格式导出。导出时使用压缩因子 9,导出对话框中的其它配置均使用默认选项。 + +``` +$ file cinnamon.png +cinnamon.png: PNG image data, 60 x 60, 8-bit gray+alpha, non-interlaced + +``` + +输出显示,现在这个文件现在处于 8 位 / 灰阶+aplha 模式。文件大小从 6.4 KB 缩小到了 2.8 KB. 这已经是原来大小的 43.75% 了。但是,我们能做的还有很多! + +你可以使用 ImageMagick 工具来查看这张图片的更多信息。 + +``` +$ identify cinnamon2.png +cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2831B 0.000u 0:00.000 + +``` + +它告诉你,这个文件的大小为 2831 字节。我们回到 GIMP,重新导出文件。在导出对话框中,取消存储时间戳和 alpha 通道色值,来让文件更小一点。现在文件输出显示: + +``` +$ identify cinnamon.png +cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2798B 0.000u 0:00.000 + +``` + +下面,用 optipng 来无损优化你的 PNG 图片。具有相似功能的工具有很多,包括 **advdef**(这是 advancecomp 的一部分),**pngquant** 和 **pngcrush**。 + +对你的文件运行 optipng. 
注意,这个操作会覆盖你的原文件: + +``` +$ optipng -o7 cinnamon.png +** Processing: cinnamon.png +60x60 pixels, 2x8 bits/pixel, grayscale+alpha +Reducing image to 8 bits/pixel, grayscale +Input IDAT size = 2720 bytes +Input file size = 2812 bytes + +Trying: + zc = 9 zm = 8 zs = 0 f = 0 IDAT size = 1922 + zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920 + +Selecting parameters: + zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920 + +Output IDAT size = 1920 bytes (800 bytes decrease) +Output file size = 2012 bytes (800 bytes = 28.45% decrease) + +``` + +-o7 选项处理起来最慢,但最终效果最好。于是你又将文件缩小了 800 字节,现在它只有 2012 字节了。 + +要压缩文件夹下的所有 PNG,可以使用这个命令: + +``` +$ optipng -o7 -dir= *.png + +``` + +-dir 选项用来指定输出文件夹。如果不加这个选项,optipng 会覆盖原文件。 + +### 选择正确的文件格式 + +当涉及到在互联网中使用的图片时,你可以选择: + + ++ [JPG 或 JPEG][9] ++ [GIF][10] ++ [PNG][11] ++ [aPNG][12] ++ [JPG-LS][13] ++ [JPG 2000 或 JP2][14] ++ [SVG][15] + + +JPG-LS 和 JPG 2000 没有得到广泛使用。只有一部分数码相机支持这些格式,所以我们可以忽略它们。aPNG 是动态的 PNG 格式,也没有广泛使用。 + +可以通过更改压缩率或者使用其它文件格式来节省下更多字节。我们无法在 GIMP 中应用第一种方法,因为现在的图片已经使用了最高的压缩率了。因为我们的图片中不再包含 [aplha 通道][5],你可以使用 JPG 类型来替代 PNG. 现在,使用默认值:90% 质量——你可以将它减小至 85%,但这样会导致可见的叠影。这样又省下一些字节: + +``` +$ identify cinnamon.jpg +cinnamon.jpg JPEG 60x60 60x60+0+0 8-bit sRGB 2676B 0.000u 0:00.000 + +``` + +只将这张图转成正确的色域,并使用 JPG 作为文件格式,就可以将它从 23 KB 缩小到 12.3 KB,减少了近 50%. + + +#### PNG vs JPG: 质量和压缩率 + +那么,剩下的文件我们要怎么办呢?除了 Fedora “风味”图标和四个特性图标之外,此方法适用于所有其他图片。我们能够处理的图片都有一个白色的背景。 + +PNG 和 JPG 的一个主要区别在于,JPG 没有 alpha 通道。所以,它没有透明度选项。如果你使用 JPG 并为它添加白色背景,你可以将文件从 40.7 KB 缩小至 28.3 KB. + +现在又有了四个可以处理的图片:背景图。对于灰色背景,你可以再次使用灰阶模式。对更大的图片,我们就可以节省下更多的空间。它从 216.2 KB 缩小到了 51 KB——基本上只有原图的 25% 了。整体下来,你把这些图片从 481.1 KB 缩小到了 191.5 KB——只有一开始的 39.8%. + +#### 质量 vs 大小 + +PNG 和 JPG 的另外一个区别在于质量。PNG 是一种无损压缩光栅图形格式。但是 JPG 虽然使用压缩来缩小体积,可是这会影响到质量。不过,这并不意味着你不应该使用 JPG,只是你需要在文件大小和质量中找到一个平衡。 + +### 成就 + +这就是第一部分的结尾了。在使用上述技术后,得到的结果如下: + +![][6] + +你将一开始 1.2 MB 的图片体积缩小到了 488.9 KB. 只需通过 optipng 进行优化,就可以达到之前体积的三分之一。这可能使得页面更快地加载。不过,要是使用蜗牛到超音速来对比,这个速度还没到达赛车的速度呢! 
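文中几处压缩前后的比例,可以用一小段 Python 核对(`ratio` 是演示用的小函数):

```python
def ratio(after_kb, before_kb):
    """压缩后体积占原体积的百分比,保留两位小数。"""
    return round(after_kb / before_kb * 100, 2)

print(ratio(2.8, 6.4))      # 43.75:单个图标从 6.4 KB 缩到 2.8 KB
print(ratio(191.5, 481.1))  # 39.8:几张图片合计从 481.1 KB 缩到 191.5 KB
```

结果与文中给出的 43.75% 和 39.8% 一致。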
+ +最后,你可以在 [Google Insights][7] 中查看结果,例如: + +![][8] + +在移动端部分,这个页面的得分提升了 10 分,但它依然处于“中等”水平。对于桌面端,结果看起来完全不同,从 62/100 分提升至了 91/100 分,等级也达到了“好”的水平。如我们之前所说的,这个测试并不意味着我们的工作就做完了。通过参考这些分数可以让你朝着正确的方向前进。请记住,你正在为用户体验来进行优化,而不是搜索引擎。 + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/design-faster-web-pages-part-1-image-compression/ + +作者:[Sirko Kemter][a] +选题:[lujun9972][b] +译者:[StdioA](https://github.com/StdioA) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/gnokii/ +[b]: https://github.com/lujun9972 +[1]: https://browserdiet.com/calories/ +[2]: http://getfedora.org +[3]: https://fedoramagazine.org/wp-content/uploads/2018/02/ff-addon-diet.jpg +[4]: https://getfedora.org/static/images/cinnamon.png +[5]: https://www.webopedia.com/TERM/A/alpha_channel.html +[6]: https://fedoramagazine.org/wp-content/uploads/2018/02/ff-addon-diet-i.jpg +[7]: https://developers.google.com/speed/pagespeed/insights/?url=getfedora.org&tab=mobile +[8]: https://fedoramagazine.org/wp-content/uploads/2018/02/PageSpeed_Insights.png +[9]: https://en.wikipedia.org/wiki/JPEG +[10]: https://en.wikipedia.org/wiki/GIF +[11]: https://en.wikipedia.org/wiki/Portable_Network_Graphics +[12]: https://en.wikipedia.org/wiki/APNG +[13]: https://en.wikipedia.org/wiki/JPEG_2000 +[14]: https://en.wikipedia.org/wiki/JPEG_2000 +[15]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics diff --git a/translated/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md b/translated/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md new file mode 100644 index 0000000000..84f1187a32 --- /dev/null +++ b/translated/tech/20181011 Getting started with Minikube- Kubernetes on your laptop.md @@ -0,0 +1,123 @@ +Minikube入门:笔记本上的Kubernetes +====== +运行Minikube的分步指南。 + 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ) + +在[Hello Minikube][1]教程页面上Minikube被宣传为基于Docker运行Kubernetes的一种简单方法。 虽然该文档非常有用,但它主要是为MacOS编写的。 你可以深入挖掘在Windows或某个Linux发行版上的使用说明,但它们不是很清楚。 许多文档都是针对Debian / Ubuntu用户的,比如[安装Minikube的驱动程序][2]。 + +### 先决条件 + +1. 你已经[安装了Docker][3]。 +2. 你的计算机是一个RHEL / CentOS / 基于Fedora的工作站。 +3. 你已经[安装了正常运行的KVM2虚拟机管理程序][4]。 +4. 你有一个运行的**docker-machine-driver-kvm2**。 以下命令将安装驱动程序: + +``` +curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \ +chmod +x docker-machine-driver-kvm2 \ +&& sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \ +&& rm docker-machine-driver-kvm2 +``` + +### 下载,安装和启动Minikube + + 1. 为你要即将下载的两个文件创建一个目录,两个文件分别是:[minikube][5]和[kubectl][6]。 + + + 2. 打开终端窗口并运行以下命令来安装minikube。 + +``` +curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 +``` + +请注意,minikube版本(例如,minikube-linux-amd64)可能因计算机的规格而有所不同。 + + 3. **chmod**加执行权限。 + +``` +chmod +x minikube +``` + + 4. 将文件移动到**/usr/local/bin**路径下,以便你能将其作为命令运行。 + +``` +mv minikube /usr/local/bin +``` + + 5. 使用以下命令安装kubectl(类似于minikube的安装过程)。 + +``` +curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl +``` + +使用**curl**命令确定最新版本的Kubernetes。 + + 6. **chmod**给kubectl加执行权限。 + +``` +chmod +x kubectl +``` + + 7. 将kubectl移动到**/usr/local/bin**路径下作为命令运行。 + +``` +mv kubectl /usr/local/bin +``` + + 8. 运行**minikube start**命令。 为此,你需要有虚拟机管理程序。 我使用过KVM2,你也可以使用Virtualbox。 确保是用户而不是root身份运行以下命令,以便为用户而不是root存储配置。 + +``` +minikube start --vm-driver=kvm2 +``` + +这可能需要一段时间,等一会。 + + 9. Minikube应该下载并开始。 使用以下命令确保成功。 + +``` +cat ~/.kube/config +``` + + 10. 
执行以下命令以运行Minikube作为上下文。 上下文决定了kubectl与哪个集群交互。 你可以在~/.kube/config文件中查看所有可用的上下文。 + +``` +kubectl config use-context minikube +``` + + 11. 再次查看**config** 文件以检查Minikube是否存在上下文。 + +``` +cat ~/.kube/config +``` + + 12. 最后,运行以下命令打开浏览器查看Kubernetes仪表板。 + +``` +minikube dashboard +``` + +本指南旨在使RHEL / Fedora / CentOS操作系统用户操作更轻松。 + +现在Minikube已启动并运行,请阅读[通过Minikube在本地运行Kubernetes][7]这篇官网教程开始使用它。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/getting-started-minikube + +作者:[Bryant Son][a] +选题:[lujun9972][b] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: https://kubernetes.io/docs/tutorials/hello-minikube +[2]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md +[3]: https://docs.docker.com/install +[4]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver +[5]: https://github.com/kubernetes/minikube/releases +[6]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl +[7]: https://kubernetes.io/docs/setup/minikube