diff --git a/published/20150708 Choosing a Linux Tracer (2015).md b/published/20150708 Choosing a Linux Tracer (2015).md new file mode 100644 index 0000000000..2d04d8594f --- /dev/null +++ b/published/20150708 Choosing a Linux Tracer (2015).md @@ -0,0 +1,189 @@ +Linux 跟踪器之选 +====== + +[![][1]][2] + +> Linux 跟踪很神奇! + +跟踪器tracer是一个高级的性能分析和调试工具,如果你使用过 `strace(1)` 或者 `tcpdump(8)`,你不应该被它吓到 ... 你使用的就是跟踪器。系统跟踪器能让你看到很多的东西,而不仅是系统调用或者数据包,因为常见的跟踪器都可以跟踪内核或者应用程序的任何东西。 + +有大量的 Linux 跟踪器可供你选择。由于它们中的每个都有一个官方的(或者非官方的)的吉祥物,我们有足够多的选择给孩子们展示。 + +你喜欢使用哪一个呢? + +我从两类读者的角度来回答这个问题:大多数人和性能/内核工程师。当然,随着时间的推移,这也可能会发生变化,因此,我需要及时去更新本文内容,或许是每年一次,或者更频繁。(LCTT 译注:本文最后更新于 2015 年) + +### 对于大多数人 + +大多数人(开发者、系统管理员、运维人员、网络可靠性工程师(SRE)…)是不需要去学习系统跟踪器的底层细节的。以下是你需要去了解和做的事情: + +#### 1. 使用 perf_events 进行 CPU 剖析 + +可以使用 perf_events 进行 CPU 剖析profiling。它可以用一个 [火焰图][3] 来形象地表示。比如: + +``` +git clone --depth 1 https://github.com/brendangregg/FlameGraph +perf record -F 99 -a -g -- sleep 30 +perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > perf.svg +``` + +![](http://www.brendangregg.com/blog/images/2015/cpu-bash-flamegraph-500.png) + +Linux 的 perf_events(即 `perf`,后者是它的命令)是官方为 Linux 用户准备的跟踪器/分析器。它位于内核源码中,并且维护的非常好(而且现在它的功能还在快速变强)。它一般是通过 linux-tools-common 这个包来添加的。 + +`perf` 可以做的事情很多,但是,如果我只能建议你学习其中的一个功能的话,那就是 CPU 剖析。虽然从技术角度来说,这并不是事件“跟踪”,而是采样sampling。最难的部分是获得完整的栈和符号,这部分在我的 [Linux Profiling at Netflix][4] 中针对 Java 和 Node.js 讨论过。 + +#### 2. 知道它能干什么 + +正如一位朋友所说的:“你不需要知道 X 光机是如何工作的,但你需要明白的是,如果你吞下了一个硬币,X 光机是你的一个选择!”你需要知道使用跟踪器能够做什么,因此,如果你在业务上确实需要它,你可以以后再去学习它,或者请会使用它的人来做。 + +简单地说:几乎任何事情都可以通过跟踪来了解它。内部文件系统、TCP/IP 处理过程、设备驱动、应用程序内部情况。阅读我在 lwn.net 上的 [ftrace][5] 的文章,也可以去浏览 [perf_events 页面][6],那里有一些跟踪(和剖析)能力的示例。 + +#### 3. 需要一个前端工具 + +如果你要购买一个性能分析工具(有许多公司销售这类产品),并要求支持 Linux 跟踪。想要一个直观的“点击”界面去探查内核的内部,以及包含一个在不同堆栈位置的延迟热力图。就像我在 [Monitorama 演讲][7] 中描述的那样。 + +我创建并开源了我自己的一些前端工具,虽然它是基于 CLI 的(不是图形界面的)。这样可以使其它人使用跟踪器更快更容易。比如,我的 [perf-tools][8],跟踪新进程是这样的: + +``` +# ./execsnoop +Tracing exec()s. Ctrl-C to end. + PID PPID ARGS + 22898 22004 man ls + 22905 22898 preconv -e UTF-8 + 22908 22898 pager -s + 22907 22898 nroff -mandoc -rLL=164n -rLT=164n -Tutf8 +[...] +``` + +在 Netflix 公司,我正在开发 [Vector][9],它是一个实例分析工具,实际上它也是一个 Linux 跟踪器的前端。 + +### 对于性能或者内核工程师 + +一般来说,我们的工作都非常难,因为大多数人或许要求我们去搞清楚如何去跟踪某个事件,以及因此需要选择使用哪个跟踪器。为完全理解一个跟踪器,你通常需要花至少一百多个小时去使用它。理解所有的 Linux 跟踪器并能在它们之间做出正确的选择是件很难的事情。(我或许是唯一接近完成这件事的人) + +在这里我建议选择如下,要么: + +A)选择一个全能的跟踪器,并以它为标准。这需要在一个测试环境中花大量的时间来搞清楚它的细微差别和安全性。我现在的建议是 SystemTap 的最新版本(例如,从 [源代码][10] 构建)。我知道有的公司选择的是 LTTng ,尽管它并不是很强大(但是它很安全),但他们也用的很好。如果在 `sysdig` 中添加了跟踪点或者是 kprobes,它也是另外的一个候选者。 + +B)按我的 [Velocity 教程中][11] 的流程图。这意味着尽可能使用 ftrace 或者 perf_events,eBPF 已经集成到内核中了,然后用其它的跟踪器,如 SystemTap/LTTng 作为对 eBPF 的补充。我目前在 Netflix 的工作中就是这么做的。 + +![](http://www.brendangregg.com/blog/images/2015/choosing_a_tracer.png) + +以下是我对各个跟踪器的评价: + +#### 1. ftrace + +我爱 [ftrace][12],它是内核黑客最好的朋友。它被构建进内核中,它能够利用跟踪点、kprobes、以及 uprobes,以提供一些功能:使用可选的过滤器和参数进行事件跟踪;事件计数和计时,内核概览;函数流步进function-flow walking。关于它的示例可以查看内核源代码树中的 [ftrace.txt][13]。它通过 `/sys` 来管理,是面向单一的 root 用户的(虽然你可以使用缓冲实例以让其支持多用户),它的界面有时很繁琐,但是它比较容易调校hackable,并且有个前端:ftrace 的主要创建者 Steven Rostedt 设计了一个 trace-cmd,而且我也创建了 perf-tools 集合。我最诟病的就是它不是可编程的programmable,因此,举个例子说,你不能保存和获取时间戳、计算延迟,以及将其保存为直方图。你需要转储事件到用户级以便于进行后期处理,这需要花费一些成本。它也许可以通过 eBPF 实现可编程。 + +#### 2. 
perf_events + +[perf_events][14] 是 Linux 用户的主要跟踪工具,它的源代码位于 Linux 内核中,一般是通过 linux-tools-common 包来添加的。它又称为 `perf`,后者指的是它的前端,它相当高效(动态缓存),一般用于跟踪并转储到一个文件中(perf.data),然后可以在之后进行后期处理。它可以做大部分 ftrace 能做的事情。它不能进行函数流步进,并且不太容易调校(而它的安全/错误检查做的更好一些)。但它可以做剖析(采样)、CPU 性能计数、用户级的栈转换、以及使用本地变量利用调试信息debuginfo进行行级跟踪line tracing。它也支持多个并发用户。与 ftrace 一样,它也不是内核可编程的,除非 eBPF 支持(补丁已经在计划中)。如果只学习一个跟踪器,我建议大家去学习 perf,它可以解决大量的问题,并且它也相当安全。 + +#### 3. eBPF + +扩展的伯克利包过滤器extended Berkeley Packet Filter(eBPF)是一个内核内in-kernel的虚拟机,可以在事件上运行程序,它非常高效(JIT)。它可能最终为 ftrace 和 perf_events 提供内核内编程in-kernel programming,并可以去增强其它跟踪器。它现在是由 Alexei Starovoitov 开发的,还没有实现完全的整合,但是对于一些令人印象深刻的工具,有些内核版本(比如,4.1)已经支持了:比如,块设备 I/O 的延迟热力图latency heat map。更多参考资料,请查阅 Alexei 的 [BPF 演示][15],和它的 [eBPF 示例][16]。 + +#### 4. SystemTap + +[SystemTap][17] 是一个非常强大的跟踪器。它可以做任何事情:剖析、跟踪点、kprobes、uprobes(它就来自 SystemTap)、USDT、内核内编程等等。它将程序编译成内核模块并加载它们 —— 这是一种很难保证安全的方法。它开发是在内核代码树之外进行的,并且在过去出现过很多问题(内核崩溃或冻结)。许多并不是 SystemTap 的过错 —— 它通常是首次对内核使用某些跟踪功能,并率先遇到 bug。最新版本的 SystemTap 是非常好的(你需要从它的源代码编译),但是,许多人仍然没有从早期版本的问题阴影中走出来。如果你想去使用它,花一些时间去测试环境,然后,在 irc.freenode.net 的 #systemtap 频道与开发者进行讨论。(Netflix 有一个容错架构,我们使用了 SystemTap,但是我们或许比起你来说,更少担心它的安全性)我最诟病的事情是,它似乎假设你有办法得到内核调试信息,而我并没有这些信息。没有它我实际上可以做很多事情,但是缺少相关的文档和示例(我现在自己开始帮着做这些了)。 + +#### 5. LTTng + +[LTTng][18] 对事件收集进行了优化,性能要好于其它的跟踪器,也支持许多的事件类型,包括 USDT。它的开发是在内核代码树之外进行的。它的核心部分非常简单:通过一个很小的固定指令集写入事件到跟踪缓冲区。这样让它既安全又快速。缺点是做内核内编程不太容易。我觉得那不是个大问题,由于它优化的很好,可以充分的扩展,尽管需要后期处理。它也探索了一种不同的分析技术。很多的“黑匣子”记录了所有感兴趣的事件,以便可以在 GUI 中以后分析它。我担心该记录会错失之前没有预料的事件,我真的需要花一些时间去看看它在实践中是如何工作的。这个跟踪器上我花的时间最少(没有特别的原因)。 + +#### 6. ktap + +[ktap][19] 是一个很有前途的跟踪器,它在内核中使用了一个 lua 虚拟机,不需要调试信息和在嵌入时设备上可以工作的很好。这使得它进入了人们的视野,在某个时候似乎要成为 Linux 上最好的跟踪器。然而,由于 eBPF 开始集成到了内核,而 ktap 的集成工作被推迟了,直到它能够使用 eBPF 而不是它自己的虚拟机。由于 eBPF 在几个月过去之后仍然在集成过程中,ktap 的开发者已经等待了很长的时间。我希望在今年的晚些时间它能够重启开发。 + +#### 7. dtrace4linux + +[dtrace4linux][20] 主要由一个人(Paul Fox)利用业务时间将 Sun DTrace 移植到 Linux 中的。它令人印象深刻,一些供应器provider可以工作,还不是很完美,它最多应该算是实验性的工具(不安全)。我认为对于许可证的担心,使人们对它保持谨慎:它可能永远也进入不了 Linux 内核,因为 Sun 是基于 CDDL 许可证发布的 DTrace;Paul 的方法是将它作为一个插件。我非常希望看到 Linux 上的 DTrace,并且希望这个项目能够完成,我想我加入 Netflix 时将花一些时间来帮它完成。但是,我一直在使用内置的跟踪器 ftrace 和 perf_events。 + +#### 8. OL DTrace + +[Oracle Linux DTrace][21] 是将 DTrace 移植到 Linux (尤其是 Oracle Linux)的重大努力。过去这些年的许多发布版本都一直稳定的进步,开发者甚至谈到了改善 DTrace 测试套件,这显示出这个项目很有前途。许多有用的功能已经完成:系统调用、剖析、sdt、proc、sched、以及 USDT。我一直在等待着 fbt(函数边界跟踪,对内核的动态跟踪),它将成为 Linux 内核上非常强大的功能。它最终能否成功取决于能否吸引足够多的人去使用 Oracle Linux(并为支持付费)。另一个羁绊是它并非完全开源的:内核组件是开源的,但用户级代码我没有看到。 + +#### 9. 
sysdig + +[sysdig][22] 是一个很新的跟踪器,它可以使用类似 `tcpdump` 的语法来处理系统调用syscall事件,并用 lua 做后期处理。它也是令人印象深刻的,并且很高兴能看到在系统跟踪领域的创新。它的局限性是,它的系统调用只能是在当时,并且,它转储所有事件到用户级进行后期处理。你可以使用系统调用来做许多事情,虽然我希望能看到它去支持跟踪点、kprobes、以及 uprobes。我也希望看到它支持 eBPF 以查看内核内概览。sysdig 的开发者现在正在增加对容器的支持。可以关注它的进一步发展。 + +### 深入阅读 + +我自己的工作中使用到的跟踪器包括: + +- **ftrace** : 我的 [perf-tools][8] 集合(查看示例目录);我的 lwn.net 的 [ftrace 跟踪器的文章][5]; 一个 [LISA14][8] 演讲;以及帖子: [函数计数][23]、 [iosnoop][24]、 [opensnoop][25]、 [execsnoop][26]、 [TCP retransmits][27]、 [uprobes][28] 和 [USDT][29]。 +- **perf_events** : 我的 [perf_events 示例][6] 页面;在 SCALE 的一个 [Linux Profiling at Netflix][4] 演讲;和帖子:[CPU 采样][30]、[静态跟踪点][31]、[热力图][32]、[计数][33]、[内核行级跟踪][34]、[off-CPU 时间火焰图][35]。 +- **eBPF** : 帖子 [eBPF:一个小的进步][36],和一些 [BPF-tools][37] (我需要发布更多)。 +- **SystemTap** : 很久以前,我写了一篇 [使用 SystemTap][38] 的文章,它有点过时了。最近我发布了一些 [systemtap-lwtools][39],展示了在没有内核调试信息的情况下,SystemTap 是如何使用的。 +- **LTTng** : 我使用它的时间很短,不足以发布什么文章。 +- **ktap** : 我的 [ktap 示例][40] 页面包括一行程序和脚本,虽然它是早期的版本。 +- **dtrace4linux** : 在我的 [系统性能][41] 书中包含了一些示例,并且在过去我为了某些事情开发了一些小的修补,比如, [timestamps][42]。 +- **OL DTrace** : 因为它是对 DTrace 的直接移植,我早期 DTrace 的工作大多与之相关(链接太多了,可以去 [我的主页][43] 上搜索)。一旦它更加完美,我可以开发很多专用工具。 +- **sysdig** : 我贡献了 [fileslower][44] 和 [subsecond offset spectrogram][45] 的 chisel。 +- **其它** : 关于 [strace][46],我写了一些告诫文章。 + +不好意思,没有更多的跟踪器了! … 如果你想知道为什么 Linux 中的跟踪器不止一个,或者关于 DTrace 的内容,在我的 [从 DTrace 到 Linux][47] 的演讲中有答案,从 [第 28 张幻灯片][48] 开始。 + +感谢 [Deirdre Straughan][49] 的编辑,以及跟踪小马的创建(General Zoi 是小马的创建者)。 + +-------------------------------------------------------------------------------- + +via: http://www.brendangregg.com/blog/2015-07-08/choosing-a-linux-tracer.html + +作者:[Brendan Gregg][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.brendangregg.com +[1]:http://www.brendangregg.com/blog/images/2015/tracing_ponies.png +[2]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools/105 +[3]:http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html +[4]:http://www.brendangregg.com/blog/2015-02-27/linux-profiling-at-netflix.html +[5]:http://lwn.net/Articles/608497/ +[6]:http://www.brendangregg.com/perf.html +[7]:http://www.brendangregg.com/blog/2015-06-23/netflix-instance-analysis-requirements.html +[8]:http://www.brendangregg.com/blog/2015-03-17/linux-performance-analysis-perf-tools.html +[9]:http://techblog.netflix.com/2015/04/introducing-vector-netflixs-on-host.html +[10]:https://sourceware.org/git/?p=systemtap.git;a=blob_plain;f=README;hb=HEAD +[11]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools +[12]:http://lwn.net/Articles/370423/ +[13]:https://www.kernel.org/doc/Documentation/trace/ftrace.txt +[14]:https://perf.wiki.kernel.org/index.php/Main_Page +[15]:http://www.phoronix.com/scan.php?page=news_item&px=BPF-Understanding-Kernel-VM +[16]:https://github.com/torvalds/linux/tree/master/samples/bpf +[17]:https://sourceware.org/systemtap/wiki +[18]:http://lttng.org/ +[19]:http://ktap.org/ +[20]:https://github.com/dtrace4linux/linux +[21]:http://docs.oracle.com/cd/E37670_01/E38608/html/index.html +[22]:http://www.sysdig.org/ +[23]:http://www.brendangregg.com/blog/2014-07-13/linux-ftrace-function-counting.html +[24]:http://www.brendangregg.com/blog/2014-07-16/iosnoop-for-linux.html +[25]:http://www.brendangregg.com/blog/2014-07-25/opensnoop-for-linux.html +[26]:http://www.brendangregg.com/blog/2014-07-28/execsnoop-for-linux.html 
+[27]:http://www.brendangregg.com/blog/2014-09-06/linux-ftrace-tcp-retransmit-tracing.html +[28]:http://www.brendangregg.com/blog/2015-06-28/linux-ftrace-uprobe.html +[29]:http://www.brendangregg.com/blog/2015-07-03/hacking-linux-usdt-ftrace.html +[30]:http://www.brendangregg.com/blog/2014-06-22/perf-cpu-sample.html +[31]:http://www.brendangregg.com/blog/2014-06-29/perf-static-tracepoints.html +[32]:http://www.brendangregg.com/blog/2014-07-01/perf-heat-maps.html +[33]:http://www.brendangregg.com/blog/2014-07-03/perf-counting.html +[34]:http://www.brendangregg.com/blog/2014-09-11/perf-kernel-line-tracing.html +[35]:http://www.brendangregg.com/blog/2015-02-26/linux-perf-off-cpu-flame-graph.html +[36]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html +[37]:https://github.com/brendangregg/BPF-tools +[38]:http://dtrace.org/blogs/brendan/2011/10/15/using-systemtap/ +[39]:https://github.com/brendangregg/systemtap-lwtools +[40]:http://www.brendangregg.com/ktap.html +[41]:http://www.brendangregg.com/sysperfbook.html +[42]:https://github.com/dtrace4linux/linux/issues/55 +[43]:http://www.brendangregg.com +[44]:https://github.com/brendangregg/sysdig/commit/d0eeac1a32d6749dab24d1dc3fffb2ef0f9d7151 +[45]:https://github.com/brendangregg/sysdig/commit/2f21604dce0b561407accb9dba869aa19c365952 +[46]:http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-syscall.html +[47]:http://www.brendangregg.com/blog/2015-02-28/from-dtrace-to-linux.html +[48]:http://www.slideshare.net/brendangregg/from-dtrace-to-linux/28 +[49]:http://www.beginningwithi.com/ diff --git a/published/20170310 9 Lightweight Linux Applications to Speed Up Your System.md b/published/20170310 9 Lightweight Linux Applications to Speed Up Your System.md new file mode 100644 index 0000000000..bf2e2d972a --- /dev/null +++ b/published/20170310 9 Lightweight Linux Applications to Speed Up Your System.md @@ -0,0 +1,210 @@ +9 个提高系统运行速度的轻量级 Linux 应用 +====== + +**简介:** [加速 Ubuntu 系统][1]有很多方法,办法之一是使用轻量级应用来替代一些常用应用程序。我们之前之前发布过一篇 [Linux 必备的应用程序][2],如今将分享这些应用程序在 Ubuntu 或其他 Linux 发行版的轻量级替代方案。 + +![在 ubunt 使用轻量级应用程序替代方案][4] + +### 9 个常用 Linux 应用程序的轻量级替代方案 + +你的 Linux 系统很慢吗?应用程序是不是很久才能打开?你最好的选择是使用[轻量级的 Linux 系统][5]。但是重装系统并非总是可行,不是吗? + +所以如果你想坚持使用你现在用的 Linux 发行版,但是想要提高性能,你应该使用更轻量级应用来替代你一些常用的应用。这篇文章会列出各种 Linux 应用程序的轻量级替代方案。 + +由于我使用的是 Ubuntu,因此我只提供了基于 Ubuntu 的 Linux 发行版的安装说明。但是这些应用程序可以用于几乎所有其他 Linux 发行版。你只需去找这些轻量级应用在你的 Linux 发行版中的安装方法就可以了。 + +### 1. Midori: Web 浏览器 + +[Midori][8] 是与现代互联网环境具有良好兼容性的最轻量级网页浏览器之一。它是开源的,使用与 Google Chrome 最初所基于的相同的渲染引擎 —— WebKit。并且超快速,最小化但高度可定制。 + +![Midori Browser][6] + +Midori 浏览器有很多可以定制的扩展和选项。如果你有最高权限,使用这个浏览器也是一个不错的选择。如果在浏览网页的时候遇到了某些问题,请查看其网站上[常见问题][7]部分 -- 这包含了你可能遇到的常见问题及其解决方案。 + + +#### 在基于 Ubuntu 的发行版上安装 Midori + +在 Ubuntu 上,可通过官方源找到 Midori 。运行以下指令即可安装它: + +``` +sudo apt install midori +``` + +### 2. Trojita:电子邮件客户端 + +[Trojita][11] 是一款开源强大的 IMAP 电子邮件客户端。它速度快,资源利用率高。我可以肯定地称它是 [Linux 最好的电子邮件客户端之一][9]。如果你只需电子邮件客户端提供 IMAP 支持,那么也许你不用再进一步考虑了。 + +![Trojitá][10] + +Trojita 使用各种技术 —— 按需电子邮件加载、离线缓存、带宽节省模式等 —— 以实现其令人印象深刻的性能。 + +#### 在基于 Ubuntu 的发行版上安装 Trojita + +Trojita 目前没有针对 Ubuntu 的官方 PPA 。但这应该不成问题。您可以使用以下命令轻松安装它: + +``` +sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/jkt-gentoo:/trojita/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/trojita.list" +wget http://download.opensuse.org/repositories/home:jkt-gentoo:trojita/xUbuntu_16.04/Release.key +sudo apt-key add - < Release.key +sudo apt update +sudo apt install trojita +``` + +### 3. 
GDebi:包安装程序 + +有时您需要快速安装 DEB 软件包。Ubuntu 软件中心是一个消耗资源严重的应用程序,仅用于安装 .deb 文件并不明智。 + +Gdebi 无疑是一款可以完成同样目的的漂亮工具,而它只有个极简的图形界面。 + +![GDebi][12] + +GDebi 是完全轻量级的,完美无缺地完成了它的工作。你甚至应该[让 Gdebi 成为 DEB 文件的默认安装程序][13]。 + +#### 在基于 Ubuntu 的发行版上安装 GDebi + +只需一行指令,你便可以在 Ubuntu 上安装 GDebi: + +``` +sudo apt install gdebi +``` + +### 4. App Grid:软件中心 + +如果您经常在 Ubuntu 上使用软件中心搜索、安装和管理应用程序,则 [App Grid][15] 是必备的应用程序。它是默认的 Ubuntu 软件中心最具视觉吸引力且速度最快的替代方案。 + +![App Grid][14] + +App Grid 支持应用程序的评分、评论和屏幕截图。 + +#### 在基于 Ubuntu 的发行版上安装 App Grid + +App Grid 拥有 Ubuntu 的官方 PPA。使用以下指令安装 App Grid: + +``` +sudo add-apt-repository ppa:appgrid/stable +sudo apt update +sudo apt install appgrid +``` + +### 5. Yarock:音乐播放器 + +[Yarock][17] 是一个优雅的音乐播放器,拥有现代而最轻量级的用户界面。尽管在设计上是轻量级的,但 Yarock 有一个全面的高级功能列表。 + +![Yarock][16] + +Yarock 的主要功能包括多种音乐收藏、评级、智能播放列表、多种后端选项、桌面通知、音乐剪辑、上下文获取等。 + +### 在基于 Ubuntu 的发行版上安装 Yarock + +您得通过 PPA 使用以下指令在 Ubuntu 上安装 Yarock: + +``` +sudo add-apt-repository ppa:nilarimogard/webupd8 +sudo apt update +sudo apt install yarock +``` + +### 6. VLC:视频播放器 + +谁不需要视频播放器?谁还从未听说过 [VLC][19]?我想并不需要对它做任何介绍。 + +![VLC][18] + +VLC 能满足你在 Ubuntu 上播放各种媒体文件的全部需求,而且它非常轻便。它甚至可以在非常旧的 PC 上完美运行。 + +#### 在基于 Ubuntu 的发行版上安装 VLC + +VLC 为 Ubuntu 提供官方 PPA。可以输入以下命令来安装它: + +``` +sudo apt install vlc +``` + +### 7. PCManFM:文件管理器 + +PCManFM 是 LXDE 的标准文件管理器。与 LXDE 的其他应用程序一样,它也是轻量级的。如果您正在为文件管理器寻找更轻量级的替代品,可以尝试使用这个应用。 + +![PCManFM][20] + +尽管来自 LXDE,PCManFM 也同样适用于其他桌面环境。 + +#### 在基于 Ubuntu 的发行版上安装 PCManFM + +在 Ubuntu 上安装 PCManFM 只需要一条简单的指令: + +``` +sudo apt install pcmanfm +``` + +### 8. Mousepad:文本编辑器 + +在轻量级方面,没有什么可以击败像 nano、vim 等命令行文本编辑器。但是,如果你想要一个图形界面,你可以尝试一下 Mousepad -- 一个最轻量级的文本编辑器。它非常轻巧,速度非常快。带有简单的可定制的用户界面和多个主题。 + +![Mousepad][21] + +Mousepad 支持语法高亮显示。所以,你也可以使用它作为基础的代码编辑器。 + +#### 在基于 Ubuntu 的发行版上安装 Mousepad + +想要安装 Mousepad ,可以使用以下指令: + +``` +sudo apt install mousepad +``` + +### 9. 
GNOME Office:办公软件 + +许多人需要经常使用办公应用程序。通常,大多数办公应用程序体积庞大且很耗资源。Gnome Office 在这方面非常轻便。Gnome Office 在技术上不是一个完整的办公套件。它由不同的独立应用程序组成,在这之中 AbiWord&Gnumeric 脱颖而出。 + +**AbiWord** 是文字处理器。它比其他替代品轻巧并且快得多。但是这样做是有代价的 —— 你可能会失去宏、语法检查等一些功能。AdiWord 并不完美,但它可以满足你基本的需求。 + +![AbiWord][22] + +**Gnumeric** 是电子表格编辑器。就像 AbiWord 一样,Gnumeric 也非常快速,提供了精确的计算功能。如果你正在寻找一个简单轻便的电子表格编辑器,Gnumeric 已经能满足你的需求了。 + +![Gnumeric][23] + +在 [Gnome Office][24] 下面还有一些其它应用程序。你可以在官方页面找到它们。 + +#### 在基于 Ubuntu 的发行版上安装 AbiWord&Gnumeric + +要安装 AbiWord&Gnumeric,只需在终端中输入以下指令: + +``` +sudo apt install abiword gnumeric +``` + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/lightweight-alternative-applications-ubuntu/ + +作者:[Munif Tanjim][a] +译者:[imquanquan](https://github.com/imquanquan) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/munif/ +[1]:https://itsfoss.com/speed-up-ubuntu-1310/ +[2]:https://itsfoss.com/essential-linux-applications/ +[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Lightweight-alternative-applications-for-Linux-800x450.jpg +[5]:https://itsfoss.com/lightweight-linux-beginners/ +[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Midori-800x497.png +[7]:http://midori-browser.org/faqs/ +[8]:http://midori-browser.org/ +[9]:https://itsfoss.com/best-email-clients-linux/ +[10]:http://trojita.flaska.net/img/2016-03-22-trojita-home.png +[11]:http://trojita.flaska.net/ +[12]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/GDebi.png +[13]:https://itsfoss.com/gdebi-default-ubuntu-software-center/ +[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/AppGrid-800x553.png +[15]:http://www.appgrid.org/ +[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Yarock-800x529.png +[17]:https://seb-apps.github.io/yarock/ +[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/VLC-800x526.png +[19]:http://www.videolan.org/index.html +[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/PCManFM.png +[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Mousepad.png +[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/AbiWord-800x626.png +[23]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Gnumeric-800x470.png +[24]:https://gnome.org/gnome-office/ diff --git a/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md b/published/20171002 Reset Linux Desktop To Default Settings With A Single Command.md similarity index 51% rename from translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md rename to published/20171002 Reset Linux Desktop To Default Settings With A Single Command.md index d486a777de..cfeade8a8b 100644 --- a/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md +++ b/published/20171002 Reset Linux Desktop To Default Settings With A Single Command.md @@ -1,18 +1,20 @@ -使用一个命令重置 Linux 桌面到默认设置 +使用一个命令重置 Linux 桌面为默认设置 ====== + ![](https://www.ostechnix.com/wp-content/uploads/2017/10/Reset-Linux-Desktop-To-Default-Settings-720x340.jpg) -前段时间,我们分享了一篇关于 [**Resetter**][1] 的文章 - 这是一个有用的软件,可以在几分钟内将 Ubuntu 重置为出厂默认设置。使用 Resetter,任何人都可以轻松地将 Ubuntu 重置为第一次安装时的状态。今天,我偶然发现了一个类似的东西。不,它不是一个应用程序,而是一个单行的命令来重置你的 Linux 桌面设置、调整和定制到默认状态。 +前段时间,我们分享了一篇关于 [Resetter][1] 
的文章 - 这是一个有用的软件,可以在几分钟内将 Ubuntu 重置为出厂默认设置。使用 Resetter,任何人都可以轻松地将 Ubuntu 重置为第一次安装时的状态。今天,我偶然发现了一个类似的东西。不,它不是一个应用程序,而是一个单行的命令来重置你的 Linux 桌面设置、调整和定制到默认状态。 ### 将 Linux 桌面重置为默认设置 -这个命令会将 Ubuntu Unity、Gnome 和 MATE 桌面重置为默认状态。我在我的 **Arch Linux MATE** 和 **Ubuntu 16.04 Unity** 上测试了这个命令。它可以在两个系统上工作。我希望它也能在其他桌面上运行。在写这篇文章的时候,我还没有安装 GNOME 的 Linux 桌面,因此我无法确认。但是,我相信它也可以在 Gnome 桌面环境中使用。 +这个命令会将 Ubuntu Unity、Gnome 和 MATE 桌面重置为默认状态。我在我的 Arch Linux MATE 和 Ubuntu 16.04 Unity 上测试了这个命令。它可以在两个系统上工作。我希望它也能在其他桌面上运行。在写这篇文章的时候,我还没有安装 GNOME 的 Linux 桌面,因此我无法确认。但是,我相信它也可以在 Gnome 桌面环境中使用。 -**一句忠告:**请注意,此命令将重置你在系统中所做的所有定制和调整,包括 Unity 启动器或 Dock 中的固定应用程序、桌面小程序、桌面指示器、系统字体、GTK主题、图标主题、显示器分辨率、键盘快捷键、窗口按钮位置、菜单和启动器行为等。 +**一句忠告:**请注意,此命令将重置你在系统中所做的所有定制和调整,包括 Unity 启动器或 Dock 中固定的应用程序、桌面小程序、桌面指示器、系统字体、GTK主题、图标主题、显示器分辨率、键盘快捷键、窗口按钮位置、菜单和启动器行为等。 -好的是它只会重置桌面设置。它不会影响其他不使用 dconf 的程序。此外,它不会删除你的个人资料。 +好的是它只会重置桌面设置。它不会影响其他不使用 `dconf` 的程序。此外,它不会删除你的个人资料。 现在,让我们开始。要将 Ubuntu Unity 或其他带有 GNOME/MATE 环境的 Linux 桌面重置,运行下面的命令: + ``` dconf reset -f / ``` @@ -29,12 +31,13 @@ dconf reset -f / 看见了么?现在,我的 Ubuntu 桌面已经回到了出厂设置。 -有关 “dconf” 命令的更多详细信息,请参阅手册页。 +有关 `dconf` 命令的更多详细信息,请参阅手册页。 + ``` man dconf ``` -在重置桌面上我个人更喜欢 “Resetter” 而不是 “dconf” 命令。因为,Resetter 给用户提供了更多的选择。用户可以决定删除哪些应用程序、保留哪些应用程序、是保留现有用户帐户还是创建新用户等等。如果你懒得安装 Resetter,你可以使用这个 “dconf” 命令在几分钟内将你的 Linux 系统重置为默认设置。 +在重置桌面上我个人更喜欢 “Resetter” 而不是 `dconf` 命令。因为,Resetter 给用户提供了更多的选择。用户可以决定删除哪些应用程序、保留哪些应用程序、是保留现有用户帐户还是创建新用户等等。如果你懒得安装 Resetter,你可以使用这个 `dconf` 命令在几分钟内将你的 Linux 系统重置为默认设置。 就是这样了。希望这个有帮助。我将很快发布另一篇有用的指导。敬请关注! @@ -48,12 +51,12 @@ via: https://www.ostechnix.com/reset-linux-desktop-default-settings-single-comma 作者:[Edwin Arteaga][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.ostechnix.com -[1]:https://www.ostechnix.com/reset-ubuntu-factory-defaults/ +[1]:https://linux.cn/article-9217-1.html [2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Before-resetting-Ubuntu-to-default-1.png () -[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/After-resetting-Ubuntu-to-default-1.png () +[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Before-resetting-Ubuntu-to-default-1.png +[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/After-resetting-Ubuntu-to-default-1.png diff --git a/published/20171009 10 layers of Linux container security - Opensource.com.md b/published/20171009 10 layers of Linux container security - Opensource.com.md new file mode 100644 index 0000000000..26188dd1ec --- /dev/null +++ b/published/20171009 10 layers of Linux container security - Opensource.com.md @@ -0,0 +1,129 @@ +Linux 容器安全的 10 个层面 +====== + +> 应用这些策略来保护容器解决方案的各个层面和容器生命周期的各个阶段的安全。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA) + +容器提供了打包应用程序的一种简单方法,它实现了从开发到测试到投入生产系统的无缝传递。它也有助于确保跨不同环境的连贯性,包括物理服务器、虚拟机、以及公有云或私有云。这些好处使得一些组织为了更方便地部署和管理为他们提升业务价值的应用程序,而快速地采用了容器技术。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/what-are-containers.png?itok=dxQfkbF-) + +企业需要高度安全,在容器中运行核心服务的任何人都会问,“容器安全吗?”以及“我们能信任运行在容器中的应用程序吗?” + +对容器进行安全保护就像是对运行中的进程进行安全保护一样。在你部署和运行你的容器之前,你需要去考虑整个解决方案各个层面的安全。你也需要去考虑贯穿了应用程序和容器整个生命周期的安全。 + +请尝试从这十个关键的因素去确保容器解决方案栈不同层面、以及容器生命周期的不同阶段的安全。 + +### 1. 
容器宿主机操作系统和多租户环境 + +由于容器将应用程序和它的依赖作为一个单元来处理,使得开发者构建和升级应用程序变得更加容易,并且,容器可以启用多租户技术将许多应用程序和服务部署到一台共享主机上。在一台单独的主机上以容器方式部署多个应用程序、按需启动和关闭单个容器都是很容易的。为完全实现这种打包和部署技术的优势,运营团队需要运行容器的合适环境。运营者需要一个安全的操作系统,它能够在边界上保护容器安全、从容器中保护主机内核,以及保护容器彼此之间的安全。 + +容器是隔离而资源受限的 Linux 进程,允许你在一个共享的宿主机内核上运行沙盒化的应用程序。保护容器的方法与保护你的 Linux 中运行的任何进程的方法是一样的。降低权限是非常重要的,也是保护容器安全的最佳实践。最好使用尽可能小的权限去创建容器。容器应该以一个普通用户的权限来运行,而不是 root 权限的用户。在 Linux 中可以使用多个层面的安全加固手段,Linux 命名空间、安全强化 Linux([SELinux][1])、[cgroups][2] 、capabilities(LCTT 译注:Linux 内核的一个安全特性,它打破了传统的普通用户与 root 用户的概念,在进程级提供更好的安全控制)、以及安全计算模式( [seccomp][3] ),这五种 Linux 的安全特性可以用于保护容器的安全。 + +### 2. 容器内容(使用可信来源) + +在谈到安全时,首先要考虑你的容器里面有什么?例如 ,有些时候,应用程序和基础设施是由很多可用组件所构成的。它们中的一些是开源的软件包,比如,Linux 操作系统、Apache Web 服务器、Red Hat JBoss 企业应用平台、PostgreSQL,以及 Node.js。这些软件包的容器化版本已经可以使用了,因此,你没有必要自己去构建它们。但是,对于你从一些外部来源下载的任何代码,你需要知道这些软件包的原始来源,是谁构建的它,以及这些包里面是否包含恶意代码。 + +### 3. 容器注册(安全访问容器镜像) + +你的团队的容器构建于下载的公共容器镜像,因此,访问和升级这些下载的容器镜像以及内部构建镜像,与管理和下载其它类型的二进制文件的方式是相同的,这一点至关重要。许多私有的注册库支持容器镜像的存储。选择一个私有的注册库,可以帮你将存储在它的注册中的容器镜像实现策略自动化。 + +### 4. 安全性与构建过程 + +在一个容器化环境中,软件构建过程是软件生命周期的一个阶段,它将所需的运行时库和应用程序代码集成到一起。管理这个构建过程对于保护软件栈安全来说是很关键的。遵守“一次构建,到处部署”的原则,可以确保构建过程的结果正是生产系统中需要的。保持容器的恒定不变也很重要 — 换句话说就是,不要对正在运行的容器打补丁,而是,重新构建和部署它们。 + +不论是因为你处于一个高强度监管的行业中,还是只希望简单地优化你的团队的成果,设计你的容器镜像管理以及构建过程,可以使用容器层的优势来实现控制分离,因此,你应该去这么做: + + * 运营团队管理基础镜像 + * 架构师管理中间件、运行时、数据库,以及其它解决方案 + * 开发者专注于应用程序层面,并且只写代码 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/separation-of-control.png?itok=x2O39kqB) + +最后,标记好你的定制构建容器,这样可以确保在构建和部署时不会搞混乱。 + +### 5. 控制好在同一个集群内部署应用 + +如果是在构建过程中出现的任何问题,或者在镜像被部署之后发现的任何漏洞,那么,请在基于策略的、自动化工具上添加另外的安全层。 + +我们来看一下,一个应用程序的构建使用了三个容器镜像层:内核、中间件,以及应用程序。如果在内核镜像中发现了问题,那么只能重新构建镜像。一旦构建完成,镜像就会被发布到容器平台注册库中。这个平台可以自动检测到发生变化的镜像。对于基于这个镜像的其它构建将被触发一个预定义的动作,平台将自己重新构建应用镜像,合并该修复的库。 + +一旦构建完成,镜像将被发布到容器平台的内部注册库中。在它的内部注册库中,会立即检测到镜像发生变化,应用程序在这里将会被触发一个预定义的动作,自动部署更新镜像,确保运行在生产系统中的代码总是使用更新后的最新的镜像。所有的这些功能协同工作,将安全功能集成到你的持续集成和持续部署(CI/CD)过程和管道中。 + +### 6. 容器编配:保护容器平台安全 + +当然了,应用程序很少会以单一容器分发。甚至,简单的应用程序一般情况下都会有一个前端、一个后端、以及一个数据库。而在容器中以微服务模式部署的应用程序,意味着应用程序将部署在多个容器中,有时它们在同一台宿主机上,有时它们是分布在多个宿主机或者节点上,如下面的图所示: + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/replace-affected-deployments.png?itok=vWneAxPm) + +在大规模的容器部署时,你应该考虑: + + * 哪个容器应该被部署在哪个宿主机上? + * 那个宿主机应该有什么样的性能? + * 哪个容器需要访问其它容器?它们之间如何发现彼此? + * 你如何控制和管理对共享资源的访问,像网络和存储? + * 如何监视容器健康状况? + * 如何去自动扩展性能以满足应用程序的需要? + * 如何在满足安全需求的同时启用开发者的自助服务? + +考虑到开发者和运营者的能力,提供基于角色的访问控制是容器平台的关键要素。例如,编配管理服务器是中心访问点,应该接受最高级别的安全检查。API 是规模化的自动容器平台管理的关键,可以用于为 pod、服务,以及复制控制器验证和配置数据;在入站请求上执行项目验证;以及调用其它主要系统组件上的触发器。 + +### 7. 网络隔离 + +在容器中部署现代微服务应用,经常意味着跨多个节点在多个容器上部署。考虑到网络防御,你需要一种在一个集群中的应用之间的相互隔离的方法。一个典型的公有云容器服务,像 Google 容器引擎(GKE)、Azure 容器服务,或者 Amazon Web 服务(AWS)容器服务,是单租户服务。他们让你在你初始化建立的虚拟机集群上运行你的容器。对于多租户容器的安全,你需要容器平台为你启用一个单一集群,并且分割流量以隔离不同的用户、团队、应用、以及在这个集群中的环境。 + +使用网络命名空间,容器内的每个集合(即大家熟知的 “pod”)都会得到它自己的 IP 和绑定的端口范围,以此来从一个节点上隔离每个 pod 网络。除使用下面所述的方式之外,默认情况下,来自不同命名空间(项目)的 pod 并不能发送或者接收其它 pod 上的包和不同项目的服务。你可以使用这些特性在同一个集群内隔离开发者环境、测试环境,以及生产环境。但是,这样会导致 IP 地址和端口数量的激增,使得网络管理更加复杂。另外,容器是被设计为反复使用的,你应该在处理这种复杂性的工具上进行投入。在容器平台上比较受欢迎的工具是使用 [软件定义网络][4] (SDN) 提供一个定义的网络集群,它允许跨不同集群的容器进行通讯。 + +### 8. 存储 + +容器即可被用于无状态应用,也可被用于有状态应用。保护外加的存储是保护有状态服务的一个关键要素。容器平台对多种受欢迎的存储提供了插件,包括网络文件系统(NFS)、AWS 弹性块存储(EBS)、GCE 持久磁盘、GlusterFS、iSCSI、 RADOS(Ceph)、Cinder 等等。 + +一个持久卷(PV)可以通过资源提供者支持的任何方式装载到一个主机上。提供者有不同的性能,而每个 PV 的访问模式被设置为特定的卷支持的特定模式。例如,NFS 能够支持多路客户端同时读/写,但是,一个特定的 NFS 的 PV 可以在服务器上被发布为只读模式。每个 PV 有它自己的一组反应特定 PV 性能的访问模式的描述,比如,ReadWriteOnce、ReadOnlyMany、以及 ReadWriteMany。 + +### 9. 
API 管理、终端安全、以及单点登录(SSO) + +保护你的应用安全,包括管理应用、以及 API 的认证和授权。 + +Web SSO 能力是现代应用程序的一个关键部分。在构建它们的应用时,容器平台带来了开发者可以使用的多种容器化服务。 + +API 是微服务构成的应用程序的关键所在。这些应用程序有多个独立的 API 服务,这导致了终端服务数量的激增,它就需要额外的管理工具。推荐使用 API 管理工具。所有的 API 平台应该提供多种 API 认证和安全所需要的标准选项,这些选项既可以单独使用,也可以组合使用,以用于发布证书或者控制访问。 + +这些选项包括标准的 API key、应用 ID 和密钥对,以及 OAuth 2.0。 + +### 10. 在一个联合集群中的角色和访问管理 + +在 2016 年 7 月份,Kubernetes 1.3 引入了 [Kubernetes 联合集群][5]。这是一个令人兴奋的新特性之一,它是在 Kubernetes 上游、当前的 Kubernetes 1.6 beta 中引用的。联合是用于部署和访问跨多集群运行在公有云或企业数据中心的应用程序服务的。多个集群能够用于去实现应用程序的高可用性,应用程序可以跨多个可用区域,或者去启用部署公共管理,或者跨不同的供应商进行迁移,比如,AWS、Google Cloud、以及 Azure。 + +当管理联合集群时,你必须确保你的编配工具能够提供你所需要的跨不同部署平台的实例的安全性。一般来说,认证和授权是很关键的 —— 不论你的应用程序运行在什么地方,将数据安全可靠地传递给它们,以及管理跨集群的多租户应用程序。Kubernetes 扩展了联合集群,包括对联合的秘密数据、联合的命名空间、以及 Ingress objects 的支持。 + +### 选择一个容器平台 + +当然,它并不仅关乎安全。你需要提供一个你的开发者团队和运营团队有相关经验的容器平台。他们需要一个安全的、企业级的基于容器的应用平台,它能够同时满足开发者和运营者的需要,而且还能够提高操作效率和基础设施利用率。 + +想从 Daniel 在 [欧盟开源峰会][7] 上的 [容器安全的十个层面][6] 的演讲中学习更多知识吗?这个峰会已于 10 月 23 - 26 日在 Prague 举行。 + +### 关于作者 + +Daniel Oh;Microservives;Agile;Devops;Java Ee;Container;Openshift;Jboss;Evangelism + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/10/10-layers-container-security + +作者:[Daniel Oh][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/daniel-oh +[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux +[2]:https://en.wikipedia.org/wiki/Cgroups +[3]:https://en.wikipedia.org/wiki/Seccomp +[4]:https://en.wikipedia.org/wiki/Software-defined_networking +[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/ +[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223 +[7]:http://events.linuxfoundation.org/events/open-source-summit-europe diff --git a/translated/tech/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md b/published/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md similarity index 55% rename from translated/tech/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md rename to published/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md index 6d01bec236..3d4478ece2 100644 --- a/translated/tech/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md +++ b/published/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md @@ -1,45 +1,44 @@ -# 让 “rm” 命令将文件移动到“垃圾桶”,而不是完全删除它们 +给 “rm” 命令添加个“垃圾桶” +============ -人类犯错误是因为我们不是一个可编程设备,所以,在使用 `rm` 命令时要额外注意,不要在任何时候使用 `rm -rf * `。当你使用 rm 命令时,它会永久删除文件,不会像文件管理器那样将这些文件移动到 `垃圾箱`。 +人类犯错误是因为我们不是一个可编程设备,所以,在使用 `rm` 命令时要额外注意,不要在任何时候使用 `rm -rf *`。当你使用 `rm` 命令时,它会永久删除文件,不会像文件管理器那样将这些文件移动到 “垃圾箱”。 -有时我们会将不应该删除的文件删除掉,所以当错误的删除文件时该怎么办? 你必须看看恢复工具(Linux 中有很多数据恢复工具),但我们不知道是否能将它百分之百恢复,所以要如何解决这个问题? +有时我们会将不应该删除的文件删除掉,所以当错误地删除了文件时该怎么办? 你必须看看恢复工具(Linux 中有很多数据恢复工具),但我们不知道是否能将它百分之百恢复,所以要如何解决这个问题? 
我们最近发表了一篇关于 [Trash-Cli][1] 的文章,在评论部分,我们从用户 Eemil Lgz 那里获得了一个关于 [saferm.sh][2] 脚本的更新,它可以帮助我们将文件移动到“垃圾箱”而不是永久删除它们。 -将文件移动到“垃圾桶”是一个好主意,当你无意中运行 rm 命令时,可以节省你的时间,但是很少有人会说这是一个坏习惯,如果你不注意“垃圾桶”,它可能会在一定的时间内被文件和文件夹堆积起来。在这种情况下,我建议你按照你的意愿去做一个定时任务。 +将文件移动到“垃圾桶”是一个好主意,当你无意中运行 `rm` 命令时,可以拯救你;但是很少有人会说这是一个坏习惯,如果你不注意“垃圾桶”,它可能会在一定的时间内被文件和文件夹堆积起来。在这种情况下,我建议你按照你的意愿去做一个定时任务。 -这适用于服务器和桌面两种环境。 如果脚本检测到 **GNOME 、KDE、Unity 或 LXDE** 桌面环境(DE),则它将文件或文件夹安全地移动到默认垃圾箱 **\$HOME/.local/share/Trash/files**,否则会在您的主目录中创建垃圾箱文件夹 **$HOME/Trash**。 +这适用于服务器和桌面两种环境。 如果脚本检测到 GNOME 、KDE、Unity 或 LXDE 桌面环境(DE),则它将文件或文件夹安全地移动到默认垃圾箱 `$HOME/.local/share/Trash/files`,否则会在您的主目录中创建垃圾箱文件夹 `$HOME/Trash`。 + +`saferm.sh` 脚本托管在 Github 中,可以从仓库中克隆,也可以创建一个名为 `saferm.sh` 的文件并复制其上的代码。 -saferm.sh 脚本托管在 Github 中,可以从 repository 中克隆,也可以创建一个名为 saferm.sh 的文件并复制其上的代码。 ``` $ git clone https://github.com/lagerspetz/linux-stuff $ sudo mv linux-stuff/scripts/saferm.sh /bin $ rm -Rf linux-stuff - ``` -在 `bashrc` 文件中设置别名, +在 `.bashrc` 文件中设置别名, ``` alias rm=saferm.sh - ``` 执行下面的命令使其生效, ``` $ source ~/.bashrc - ``` -一切就绪,现在你可以执行 rm 命令,自动将文件移动到”垃圾桶”,而不是永久删除它们。 +一切就绪,现在你可以执行 `rm` 命令,自动将文件移动到”垃圾桶”,而不是永久删除它们。 + +测试一下,我们将删除一个名为 `magi.txt` 的文件,命令行明确的提醒了 `Moving magi.txt to $HOME/.local/share/Trash/file`。 -测试一下,我们将删除一个名为 `magi.txt` 的文件,命令行显式的说明了 `Moving magi.txt to $HOME/.local/share/Trash/file` ``` $ rm -rf magi.txt Moving magi.txt to /home/magi/.local/share/Trash/files - ``` 也可以通过 `ls` 命令或 `trash-cli` 进行验证。 @@ -47,47 +46,16 @@ Moving magi.txt to /home/magi/.local/share/Trash/files ``` $ ls -lh /home/magi/.local/share/Trash/files Permissions Size User Date Modified Name -.rw-r--r-- 32 magi 11 Oct 16:24 magi.txt - +.rw-r--r-- 32 magi 11 Oct 16:24 magi.txt ``` 或者我们可以通过文件管理器界面中查看相同的内容。 ![![][3]][4] -创建一个定时任务,每天清理一次“垃圾桶”,( LCTT 注:原文为每周一次,但根据下面的代码,应该是每天一次) +(LCTT 译注:原文此处混淆了部分 trash-cli 的内容,考虑到文章衔接和逻辑,此处略。) -``` -$ 1 1 * * * trash-empty - -``` - -`注意` 对于服务器环境,我们需要使用 rm 命令手动删除。 - -``` -$ rm -rf /root/Trash/ -/root/Trash/magi1.txt is on . Unsafe delete (y/n)? y -Deleting /root/Trash/magi1.txt - -``` - -对于桌面环境,trash-put 命令也可以做到这一点。 - -在 `bashrc` 文件中创建别名, - -``` -alias rm=trash-put - -``` - -执行下面的命令使其生效。 - -``` -$ source ~/.bashrc - -``` - -要了解 saferm.sh 的其他选项,请查看帮助。 +要了解 `saferm.sh` 的其他选项,请查看帮助。 ``` $ saferm.sh -h @@ -112,7 +80,7 @@ via: https://www.2daygeek.com/rm-command-to-move-files-to-trash-can-rm-alias/ 作者:[2DAYGEEK][a] 译者:[amwps290](https://github.com/amwps290) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20171017 What Are the Hidden Files in my Linux Home Directory For.md b/published/20171017 What Are the Hidden Files in my Linux Home Directory For.md new file mode 100644 index 0000000000..c221094e63 --- /dev/null +++ b/published/20171017 What Are the Hidden Files in my Linux Home Directory For.md @@ -0,0 +1,59 @@ +我的 Linux 主目录中的隐藏文件是干什么用的? +====== + +![](https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-linux-hero.png) + +在 Linux 系统中,你可能会在主目录中存储了大量文件和文件夹。但在这些文件之外,你知道你的主目录还附带了很多隐藏的文件和文件夹吗?如果你在主目录中运行 `ls -a`,你会发现一堆带有点前缀的隐藏文件和目录。这些隐藏的文件到底做了什么? + +### 在主目录中隐藏的文件是干什么用的? + +![hidden-files-liunux-2][1] + +通常,主目录中的隐藏文件和目录包含该用户程序访问的设置或数据。它们不打算让用户编辑,只需要应用程序进行编辑。这就是为什么它们被隐藏在用户的正常视图之外。 + +通常,删除和修改自己主目录中的文件不会损坏操作系统。然而,依赖这些隐藏文件的应用程序可能不那么灵活。从主目录中删除隐藏文件时,通常会丢失与其关联的应用程序的设置。 + +依赖该隐藏文件的程序通常会重新创建它。 但是,你将从“开箱即用”设置开始,如全新用户一般。如果你在使用应用程序时遇到问题,那实际上可能是一个巨大的帮助。它可以让你删除可能造成麻烦的自定义设置。但如果你不这样做,这意味着你需要把所有的东西都设置成原来的样子。 + +### 主目录中某些隐藏文件的特定用途是什么? 
+ +![hidden-files-linux-3][2] + +每个人在他们的主目录中都会有不同的隐藏文件。每个人都有一些。但是,无论应用程序如何,这些文件都有类似的用途。 + +#### 系统设置 + +系统设置包括桌面环境和 shell 的配置。 + +* shell 和命令行程序的**配置文件**:根据你使用的特定 shell 和类似命令的应用程序,特定的文件名称会变化。你会看到 `.bashrc`、`.vimrc` 和 `.zshrc`。这些文件包含你已经更改的有关 shell 的操作环境的任何设置,或者对 `vim` 等命令行实用工具的设置进行的调整。删除这些文件将使关联的应用程序返回到其默认状态。考虑到许多 Linux 用户多年来建立了一系列微妙的调整和设置,删除这个文件可能是一个非常头疼的问题。 +* **用户配置文件**:像上面的配置文件一样,这些文件(通常是 `.profile` 或 `.bash_profile`)保存 shell 的用户设置。该文件通常包含你的 `PATH` 环境变量。它还包含你设置的[别名][3]。用户也可以在 `.bashrc` 或其他位置放置别名。`PATH` 环境变量控制着 shell 寻找可执行命令的位置。通过添加或修改 `PATH`,可以更改 shell 的命令查找位置。别名更改了原有命令的名称。例如:一个别名可能将 `ls -l` 设置为 `ll`。这为经常使用的命令提供基于文本的快捷方式。如果删除 `.profile` 文件,通常可以在 `/etc/skel` 目录中找到默认版本。 +* **桌面环境设置**:这里保存你的桌面环境的任何定制。其中包括桌面背景、屏幕保护程序、快捷键、菜单栏和任务栏图标以及用户针对其桌面环境设置的其他任何内容。当你删除这个文件时,用户的环境会在下一次登录时恢复到新的用户环境。 + +#### 应用配置文件 + +你会在 Ubuntu 的 `.config` 文件夹中找到它们。 这些是针对特定应用程序的设置。 它们将包含喜好列表和设置等内容。 + +* **应用程序的配置文件**:这包括应用程序首选项菜单中的设置、工作区配置等。 你在这里找到的具体取决于应用程序。 +* **Web 浏览器数据**:这可能包括书签和浏览历史记录等内容。这些文件大部分是缓存。这是 Web 浏览器临时存储下载文件(如图片)的地方。删除这些内容可能会降低你首次访问某些媒体网站的速度。 +* **缓存**:如果用户应用程序缓存仅与该用户相关的数据(如 [Spotify 应用程序存储播放列表的缓存][4]),则主目录是存储该目录的默认地点。 这些缓存可能包含大量数据或仅包含几行代码:这取决于应用程序需要什么。 如果你删除这些文件,则应用程序会根据需要重新创建它们。 +* **日志**:一些用户应用程序也可能在这里存储日志。根据开发人员设置应用程序的方式,你可能会发现存储在你的主目录中的日志文件。然而,这不是一个常见的选择。 + +### 结论 + +在大多数情况下,你的 Linux 主目录中的隐藏文件用于存储用户设置。 这包括命令行程序以及基于 GUI 的应用程序的设置。删除它们将删除用户设置。 通常情况下,它不会导致程序被破坏。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/hidden-files-linux-home-directory/ + +作者:[Alexander Fox][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/alexfox/ +[1]:https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-liunux-2.png (hidden-files-liunux-2) +[2]:https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-linux-3.png (hidden-files-linux-3) +[3]:https://www.maketecheasier.com/making-the-linux-command-line-a-little-friendlier/#aliases +[4]:https://www.maketecheasier.com/clear-spotify-cache/ diff --git a/published/20171102 What is huge pages in Linux.md b/published/20171102 What is huge pages in Linux.md new file mode 100644 index 0000000000..1f1d0b50a0 --- /dev/null +++ b/published/20171102 What is huge pages in Linux.md @@ -0,0 +1,140 @@ +Linux 中的“大内存页”(hugepage)是个什么? +====== + +> 学习 Linux 中的大内存页hugepage。理解什么是“大内存页”,如何进行配置,如何查看当前状态以及如何禁用它。 + +![Huge Pages in Linux][1] + +本文中我们会详细介绍大内存页huge page,让你能够回答:Linux 中的“大内存页”是什么?在 RHEL6、RHEL7、Ubuntu 等 Linux 中,如何启用/禁用“大内存页”?如何查看“大内存页”的当前值? + +首先让我们从“大内存页”的基础知识开始讲起。 + +### Linux 中的“大内存页”是个什么玩意? + +“大内存页”有助于 Linux 系统进行虚拟内存管理。顾名思义,除了标准的 4KB 大小的页面外,它们还能帮助管理内存中的巨大的页面。使用“大内存页”,你最大可以定义 1GB 的页面大小。 + +在系统启动期间,你能用“大内存页”为应用程序预留一部分内存。这部分内存,即被“大内存页”占用的这些存储器永远不会被交换出内存。它会一直保留其中,除非你修改了配置。这会极大地提高像 Oracle 数据库这样的需要海量内存的应用程序的性能。 + +### 为什么使用“大内存页”? + +在虚拟内存管理中,内核维护一个将虚拟内存地址映射到物理地址的表,对于每个页面操作,内核都需要加载相关的映射。如果你的内存页很小,那么你需要加载的页就会很多,导致内核会加载更多的映射表。而这会降低性能。 + +使用“大内存页”,意味着所需要的页变少了。从而大大减少由内核加载的映射表的数量。这提高了内核级别的性能最终有利于应用程序的性能。 + +简而言之,通过启用“大内存页”,系统具只需要处理较少的页面映射表,从而减少访问/维护它们的开销! + +### 如何配置“大内存页”? 
+ +运行下面命令来查看当前“大内存页”的详细内容。 + +``` +root@kerneltalks # grep Huge /proc/meminfo +AnonHugePages: 0 kB +HugePages_Total: 0 +HugePages_Free: 0 +HugePages_Rsvd: 0 +HugePages_Surp: 0 +Hugepagesize: 2048 kB +``` + +从上面输出可以看到,每个页的大小为 2MB(`Hugepagesize`),并且系统中目前有 `0` 个“大内存页”(`HugePages_Total`)。这里“大内存页”的大小可以从 `2MB` 增加到 `1GB`。 + +运行下面的脚本可以知道系统当前需要多少个巨大页。该脚本取之于 Oracle。 + +``` +#!/bin/bash +# +# hugepages_settings.sh +# +# Linux bash script to compute values for the +# recommended HugePages/HugeTLB configuration +# +# Note: This script does calculation for all shared memory +# segments available when the script is run, no matter it +# is an Oracle RDBMS shared memory segment or not. +# Check for the kernel version +KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'` +# Find out the HugePage size +HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}` +# Start from 1 pages to be on the safe side and guarantee 1 free HugePage +NUM_PG=1 +# Cumulative number of pages required to handle the running shared memory segments +for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"` +do + MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q` + if [ $MIN_PG -gt 0 ]; then + NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q` + fi +done +# Finish with results +case $KERN in + '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`; + echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;; + '2.6' | '3.8' | '3.10' | '4.1' ) echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;; + *) echo "Unrecognized kernel version $KERN. Exiting." ;; +esac +# End +``` + +将它以 `hugepages_settings.sh` 为名保存到 `/tmp` 中,然后运行之: + +``` +root@kerneltalks # sh /tmp/hugepages_settings.sh +Recommended setting: vm.nr_hugepages = 124 +``` + +你的输出类似如上结果,只是数字会有一些出入。 + +这意味着,你系统需要 124 个每个 2MB 的“大内存页”!若你设置页面大小为 4MB,则结果就变成了 62。你明白了吧? + +### 配置内核中的“大内存页” + +本文最后一部分内容是配置上面提到的 [内核参数 ][2] ,然后重新加载。将下面内容添加到 `/etc/sysctl.conf` 中,然后输入 `sysctl -p` 命令重新加载配置。 + +``` +vm.nr_hugepages=126 +``` + +注意我们这里多加了两个额外的页,因为我们希望在实际需要的页面数量之外多一些额外的空闲页。 + +现在,内核已经配置好了,但是要让应用能够使用这些“大内存页”还需要提高内存的使用阀值。新的内存阀值应该为 126 个页 x 每个页 2 MB = 252 MB,也就是 258048 KB。 + +你需要编辑 `/etc/security/limits.conf` 中的如下配置: + +``` +soft memlock 258048 +hard memlock 258048 +``` + +某些情况下,这些设置是在指定应用的文件中配置的,比如 Oracle DB 就是在 `/etc/security/limits.d/99-grid-oracle-limits.conf` 中配置的。 + +这就完成了!你可能还需要重启应用来让应用来使用这些新的巨大页。 + +### 如何禁用“大内存页”? 
+ +“大内存页”默认是开启的。使用下面命令来查看“大内存页”的当前状态。 + +``` +root@kerneltalks# cat /sys/kernel/mm/transparent_hugepage/enabled +[always] madvise never +``` + +输出中的 `[always]` 标志说明系统启用了“大内存页”。 + +若使用的是基于 RedHat 的系统,则应该要查看的文件路径为 `/sys/kernel/mm/redhat_transparent_hugepage/enabled`。 + +若想禁用“大内存页”,则在 `/etc/grub.conf` 中的 `kernel` 行后面加上 `transparent_hugepage=never`,然后重启系统。 + +-------------------------------------------------------------------------------- + +via: https://kerneltalks.com/services/what-is-huge-pages-in-linux/ + +作者:[Shrikant Lavhate][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://kerneltalks.com +[1]:https://a1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png +[2]:https://kerneltalks.com/linux/how-to-tune-kernel-parameters-in-linux/ diff --git a/published/20171110 How to configure login banners in Linux (RedHat, Ubuntu, CentOS, Fedora).md b/published/20171110 How to configure login banners in Linux (RedHat, Ubuntu, CentOS, Fedora).md new file mode 100644 index 0000000000..bd0959ea64 --- /dev/null +++ b/published/20171110 How to configure login banners in Linux (RedHat, Ubuntu, CentOS, Fedora).md @@ -0,0 +1,94 @@ +如何在 Linux 中配置 ssh 登录导语 +====== + +> 了解如何在 Linux 中创建登录导语,来向要登录或登录后的用户显示不同的警告或消息。 + +![Login banners in Linux][1] + +无论何时登录公司的某些生产系统,你都会看到一些登录消息、警告或关于你将登录或已登录的服务器的信息,如下所示。这些是登录导语login banner。 + +![Login welcome messages in Linux][2] + +在本文中,我们将引导你配置它们。 + +你可以配置两种类型的导语。 + +1. 用户登录前显示的导语信息(在你选择的文件中配置,例如 `/etc/login.warn`) +2. 用户成功登录后显示的导语信息(在 `/etc/motd` 中配置) + +### 如何在用户登录前连接系统时显示消息 + +当用户连接到服务器并且在登录之前,这个消息将被显示给他。意味着当他输入用户名时,该消息将在密码提示之前显示。 + +你可以使用任何文件名并在其中输入信息。在这里我们使用 `/etc/login.warn` 并且把我们的消息放在里面。 + +``` +# cat /etc/login.warn + !!!! Welcome to KernelTalks test server !!!! +This server is meant for testing Linux commands and tools. If you are +not associated with kerneltalks.com and not authorized please dis-connect +immediately. +``` + +现在,需要将此文件和路径告诉 `sshd` 守护进程,以便它可以为每个用户登录请求获取此标语。对于此,打开 `/etc/sshd/sshd_config` 文件并搜索 `#Banner none`。 + +这里你需要编辑该配置文件,并写下你的文件名并删除注释标记(`#`)。它应该看起来像:`Banner /etc/login.warn`。 + +保存文件并重启 `sshd` 守护进程。为避免断开现有的连接用户,请使用 HUP 信号重启 sshd。 + +``` +root@kerneltalks # ps -ef | grep -i sshd +root 14255 1 0 18:42 ? 00:00:00 /usr/sbin/sshd -D +root 19074 14255 0 18:46 ? 00:00:00 sshd: ec2-user [priv] +root 19177 19127 0 18:54 pts/0 00:00:00 grep -i sshd + +root@kerneltalks # kill -HUP 14255 +``` + +就是这样了!打开新的会话并尝试登录。你将看待你在上述步骤中配置的消息。 + +![Login banner in Linux][3] + +你可以在用户输入密码登录系统之前看到此消息。 + +### 如何在用户登录后显示消息 + +消息用户在成功登录系统后看到的当天消息Message Of The Day(MOTD)由 `/etc/motd` 控制。编辑这个文件并输入当成功登录后欢迎用户的消息。 + +``` +root@kerneltalks # cat /etc/motd + W E L C O M E +Welcome to the testing environment of kerneltalks. +Feel free to use this system for testing your Linux +skills. In case of any issues reach out to admin at +info@kerneltalks.com. Thank you. 
+ +``` + +你不需要重启 `sshd` 守护进程来使更改生效。只要保存该文件,`sshd` 守护进程就会下一次登录请求时读取和显示。 + +![motd in linux][4] + +你可以在上面的截图中看到:黄色框是由 `/etc/motd` 控制的 MOTD,绿色框就是我们之前看到的登录导语。 + +你可以使用 [cowsay][5]、[banner][6]、[figlet][7]、[lolcat][8] 等工具创建出色的引人注目的登录消息。此方法适用于几乎所有 Linux 发行版,如 RedHat、CentOs、Ubuntu、Fedora 等。 + +-------------------------------------------------------------------------------- + +via: https://kerneltalks.com/tips-tricks/how-to-configure-login-banners-in-linux/ + +作者:[kerneltalks][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://kerneltalks.com +[1]:https://a3.kerneltalks.com/wp-content/uploads/2017/11/login-banner-message-in-linux.png +[2]:https://a3.kerneltalks.com/wp-content/uploads/2017/11/Login-message-in-linux.png +[3]:https://a1.kerneltalks.com/wp-content/uploads/2017/11/login-banner.png +[4]:https://a3.kerneltalks.com/wp-content/uploads/2017/11/motd-message-in-linux.png +[5]:https://kerneltalks.com/tips-tricks/cowsay-fun-in-linux-terminal/ +[6]:https://kerneltalks.com/howto/create-nice-text-banner-hpux/ +[7]:https://kerneltalks.com/tips-tricks/create-beautiful-ascii-text-banners-linux/ +[8]:https://kerneltalks.com/linux/lolcat-tool-to-rainbow-color-linux-terminal/ diff --git a/published/20171115 How to create better documentation with a kanban board.md b/published/20171115 How to create better documentation with a kanban board.md new file mode 100644 index 0000000000..fa92553ea2 --- /dev/null +++ b/published/20171115 How to create better documentation with a kanban board.md @@ -0,0 +1,46 @@ +如何使用看板(kanban)创建更好的文档 +====== +> 通过卡片分类和看板来给用户提供他们想要的信息。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open%20source_collaboration.png?itok=68kU6BHy) + +如果你正在处理文档、网站或其他面向用户的内容,那么了解用户希望找到的内容(包括他们想要的信息以及信息的组织和结构)很有帮助。毕竟,如果人们无法找到他们想要的东西,那么再出色的内容也没有用。 + +卡片分类是一种简单而有效的方式,可以从用户那里收集有关菜单界面和页面的内容。最简单的实现方式是在计划在网站或文档中的部分分类标注一些索引卡,并要求用户按照查找信息的方式对卡片进行分类。一个变体是让人们编写自己的菜单标题或内容元素。 + +我们的目标是了解用户的期望以及他们希望在哪里找到它,而不是自己弄清楚菜单和布局。当与用户处于相同的物理位置时,这是相对简单的,但当尝试从多个位置的人员获得反馈时,这会更具挑战性。 + +我发现[看板kanban][1]对于这些情况是一个很好的工具。它允许人们轻松拖动虚拟卡片进行分类和排名,而且与专门卡片分类软件不同,它们是多用途的。 + +我经常使用 Trello 进行卡片分类,但有几种你可能想尝试的[开源替代品][2]。 + +### 怎么运行的 + +我最成功的看板体验是在写 [Gluster][3] 文档的时候 —— 这是一个自由开源的可扩展的网络存储文件系统。我需要携带大量随着时间而增长的文档,并将其分成若干类别以创建导航系统。由于我没有必要的技术知识来分类,我向 Gluster 团队和开发人员社区寻求指导。 + +首先,我创建了一个共享看板。我列出了一些通用名称,这些名称可以为我计划在文档中涵盖的所有主题排序和创建卡片。我标记了一些不同颜色的卡片,以表明某个主题缺失并需要创建,或者它存在并需要删除。然后,我把所有卡片放入“未排序”一列,并要求人们将它们拖到他们认为这些卡片应该组织到的地方,然后给我一个他们认为是理想状态的截图。 + +处理所有截图是最棘手的部分。我希望有一个合并或共识功能可以帮助我汇总每个人的数据,而不必检查一堆截图。幸运的是,在第一个人对卡片进行分类之后,人们或多或少地对该结构达成一致,而只做了很小的修改。当对某个主题的位置有不同意见时,我发起一个快速会议,让人们可以解释他们的想法,并且可以排除分歧。 + +### 使用数据 + +在这里,很容易将捕捉到的信息转换为菜单并对其进行优化。如果用户认为项目应该成为子菜单,他们通常会在评论中或在电话聊天时告诉我。对菜单组织的看法因人们的工作任务而异,所以从来没有完全达成一致意见,但用户进行测试意味着你不会对人们使用什么以及在哪里查找有很多盲点。 + +将卡片分类与分析功能配对,可以让你更深入地了解人们在寻找什么。有一次,当我对一些我正在写的培训文档进行分析时,我惊讶地发现搜索量最大的页面是关于资本的。所以我在顶层菜单层面上显示了该页面,即使我的“逻辑”设置将它放在了子菜单中。 + +我发现看板卡片分类是一种很好的方式,可以帮助我创建用户想要查看的内容,并将其放在希望被找到的位置。你是否发现了另一种对用户友好的组织内容的方法?或者看板的另一种有趣用途是什么?如果有的话,请在评论中分享你的想法。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/11/kanban-boards-card-sorting + +作者:[Heidi Waterhouse][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/hwaterhouse 
+[1]:https://en.wikipedia.org/wiki/Kanban +[2]:https://opensource.com/alternatives/trello +[3]:https://www.gluster.org/ diff --git a/published/20171116 Record and Share Terminal Session with Showterm.md b/published/20171116 Record and Share Terminal Session with Showterm.md new file mode 100644 index 0000000000..a82efb3744 --- /dev/null +++ b/published/20171116 Record and Share Terminal Session with Showterm.md @@ -0,0 +1,78 @@ +使用 Showterm 录制和分享终端会话 +====== + +![](https://www.maketecheasier.com/assets/uploads/2017/11/record-terminal-session.jpg) + +你可以使用几乎所有的屏幕录制程序轻松录制终端会话。但是,你很可能会得到超大的视频文件。Linux 中有几种终端录制程序,每种录制程序都有自己的优点和缺点。Showterm 是一个可以非常容易地记录终端会话、上传、分享,并将它们嵌入到任何网页中的工具。一个优点是,你不会有巨大的文件来处理。 + +Showterm 是开源的,该项目可以在这个 [GitHub 页面][1]上找到。 + +**相关**:[2 个简单的将你的终端会话录制为视频的 Linux 程序][2] + +### 在 Linux 中安装 Showterm + +Showterm 要求你在计算机上安装了 Ruby。以下是如何安装该程序。 + +``` +gem install showterm +``` + +如果你没有在 Linux 上安装 Ruby,可以这样: + +``` +sudo curl showterm.io/showterm > ~/bin/showterm +sudo chmod +x ~/bin/showterm +``` + +如果你只是想运行程序而不是安装: + +``` +bash <(curl record.showterm.io) +``` + +你可以在终端输入 `showterm --help` 得到帮助页面。如果没有出现帮助页面,那么可能是未安装 `showterm`。现在你已安装了 Showterm(或正在运行独立版本),让我们开始使用该工具进行录制。 + +**相关**:[如何在 Ubuntu 中录制终端会话][3] + +### 录制终端会话 + +![showterm terminal][4] + +录制终端会话非常简单。从命令行运行 `showterm`。这会在后台启动终端录制。所有从命令行输入的命令都由 Showterm 记录。完成录制后,请按 `Ctrl + D` 或在命令行中输入`exit` 停止录制。 + +Showterm 会上传你的视频并输出一个看起来像 `http://showterm.io/<一长串字符>` 的链接的视频。不幸的是,终端会话会立即上传,而没有任何提示。请不要惊慌!你可以通过输入 `showterm --delete ` 删除任何已上传的视频。在上传视频之前,你可以通过在 `showterm` 命令中添加 `-e` 选项来改变计时。如果视频无法上传,你可以使用 `showterm --retry - - - - - - -``` - -### 7.2 - CSS - -Add a new file, `/public/styles.css`, with some custom UI styling. - -``` -body { font-family: 'EB Garamond', serif; } - -.mui-textfield > input, .mui-btn, .mui--text-subhead, .mui-panel > .mui--text-headline { - font-family: 'Open Sans', sans-serif; -} - -.all-caps { text-transform: uppercase; } -.app-container { padding: 16px; } -.search-results em { font-weight: bold; } -.book-modal > button { width: 100%; } -.search-results .mui-divider { margin: 14px 0; } - -.search-results { - display: flex; - flex-direction: row; - flex-wrap: wrap; - justify-content: space-around; -} - -.search-results > div { - flex-basis: 45%; - box-sizing: border-box; - cursor: pointer; -} - -@media (max-width: 600px) { - .search-results > div { flex-basis: 100%; } -} - -.paragraphs-container { - max-width: 800px; - margin: 0 auto; - margin-bottom: 48px; -} - -.paragraphs-container .mui--text-body1, .paragraphs-container .mui--text-body2 { - font-size: 1.8rem; - line-height: 35px; -} - -.book-modal { - width: 100%; - height: 100%; - padding: 40px 10%; - box-sizing: border-box; - margin: 0 auto; - background-color: white; - overflow-y: scroll; - position: fixed; - top: 0; - left: 0; -} - -.pagination-panel { - display: flex; - justify-content: space-between; -} - -.title-row { - display: flex; - justify-content: space-between; - align-items: flex-end; -} - -@media (max-width: 600px) { - .title-row{ - flex-direction: column; - text-align: center; - align-items: center - } -} - -.locations-label { - text-align: center; - margin: 8px; -} - -.modal-footer { - position: fixed; - bottom: 0; - left: 0; - width: 100%; - display: flex; - justify-content: space-around; - background: white; -} - -``` - -### 7.3 - Try it out - -Open `localhost:8080` in your web browser, you should see a simple search interface with paginated results. Try typing in the top search bar to find matches from different terms. 
- -![preview webapp](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_4_0.png) - -> You  _do not_  have to re-run the `docker-compose up` command for the changes to take effect. The local `public` directory is mounted to our Nginx fileserver container, so frontend changes on the local system will be automatically reflected in the containerized app. - -If you try clicking on any result, nothing happens - we still have one more feature to add to the app. - -### 8 - PAGE PREVIEWS - -It would be nice to be able to click on each search result and view it in the context of the book that it's from. - -### 8.0 - Add Elasticsearch Query - -First, we'll need to define a simple query to get a range of paragraphs from a given book. - -Add the following function to the `module.exports` block in `server/search.js`. - -``` -/** Get the specified range of paragraphs from a book */ -getParagraphs (bookTitle, startLocation, endLocation) { - const filter = [ - { term: { title: bookTitle } }, - { range: { location: { gte: startLocation, lte: endLocation } } } - ] - - const body = { - size: endLocation - startLocation, - sort: { location: 'asc' }, - query: { bool: { filter } } - } - - return client.search({ index, type, body }) -} - -``` - -This new function will return an ordered array of paragraphs between the start and end locations of a given book. - -### 8.1 - Add API Endpoint - -Now, let's link this function to an API endpoint. - -Add the following to `server/app.js`, below the original `/search` endpoint. - -``` -/** - * GET /paragraphs - * Get a range of paragraphs from the specified book - * Query Params - - * bookTitle: string under 256 characters - * start: positive integer - * end: positive integer greater than start - */ -router.get('/paragraphs', - validate({ - query: { - bookTitle: joi.string().max(256).required(), - start: joi.number().integer().min(0).default(0), - end: joi.number().integer().greater(joi.ref('start')).default(10) - } - }), - async (ctx, next) => { - const { bookTitle, start, end } = ctx.request.query - ctx.body = await search.getParagraphs(bookTitle, start, end) - } -) - -``` - -### 8.2 - Add UI functionality - -Now that our new endpoint is in place, let's add some frontend functionality to query and display full pages from the book. - -Add the following functions to the `methods` block of `/public/app.js`. 
- -``` - /** Call the API to get current page of paragraphs */ - async getParagraphs (bookTitle, offset) { - try { - this.bookOffset = offset - const start = this.bookOffset - const end = this.bookOffset + 10 - const response = await axios.get(`${this.baseUrl}/paragraphs`, { params: { bookTitle, start, end } }) - return response.data.hits.hits - } catch (err) { - console.error(err) - } - }, - /** Get next page (next 10 paragraphs) of selected book */ - async nextBookPage () { - this.$refs.bookModal.scrollTop = 0 - this.paragraphs = await this.getParagraphs(this.selectedParagraph._source.title, this.bookOffset + 10) - }, - /** Get previous page (previous 10 paragraphs) of selected book */ - async prevBookPage () { - this.$refs.bookModal.scrollTop = 0 - this.paragraphs = await this.getParagraphs(this.selectedParagraph._source.title, this.bookOffset - 10) - }, - /** Display paragraphs from selected book in modal window */ - async showBookModal (searchHit) { - try { - document.body.style.overflow = 'hidden' - this.selectedParagraph = searchHit - this.paragraphs = await this.getParagraphs(searchHit._source.title, searchHit._source.location - 5) - } catch (err) { - console.error(err) - } - }, - /** Close the book detail modal */ - closeBookModal () { - document.body.style.overflow = 'auto' - this.selectedParagraph = null - } - -``` - -These five functions provide the logic for downloading and paginating through pages (ten paragraphs each) in a book. - -Now we just need to add a UI to display the book pages. Add this markup below the `` comment in `/public/index.html`. - -``` - -
-
- -
-
{{ selectedParagraph._source.title }}
-
{{ selectedParagraph._source.author }}
-
-
-
-
Locations {{ bookOffset - 5 }} to {{ bookOffset + 5 }}
-
-
- - -
-
- {{ paragraph._source.text }} -
-
- {{ paragraph._source.text }} -
-
-
-
- - - -
- -``` - -Restart the app server (`docker-compose up -d --build`) again and open up `localhost:8080`. When you click on a search result, you are now able to view the surrounding paragraphs. You can now even read the rest of the book to completion if you're entertained by what you find. - -![preview webapp book page](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_5_0.png) - -Congrats, you've completed the tutorial application! - -Feel free to compare your local result against the completed sample hosted here - [https://search.patricktriest.com/][37] - -### 9 - DISADVANTAGES OF ELASTICSEARCH - -### 9.0 - Resource Hog - -Elasticsearch is computationally demanding. The [official recommendation][38] is to run ES on a machine with 64 GB of RAM, and they strongly discourage running it on anything with under 8 GB of RAM. Elasticsearch is an  _in-memory_  datastore, which allows it to return results extremely quickly, but also results in a very significant system memory footprint. In production, [it is strongly recommended to run multiple Elasticsearch nodes in a cluster][39] to allow for high server availability, automatic sharding, and data redundancy in case of a node failure. - -I've got our tutorial application running on a $15/month GCP compute instance (at [search.patricktriest.com][40]) with 1.7 GB of RAM, and it  _just barely_  is able to run the Elasticsearch node; sometimes the entire machine freezes up during the initial data-loading step. Elasticsearch is, in my experience, much more of a resource hog than more traditional databases such as PostgreSQL and MongoDB, and can be significantly more expensive to host as a result. - -### 9.1 - Syncing with Databases - -In most applications, storing all of the data in Elasticsearch is not an ideal option. It is possible to use ES as the primary transactional database for an app, but this is generally not recommended due to the lack of ACID compliance in Elasticsearch, which can lead to lost write operations when ingesting data at scale. In many cases, ES serves a more specialized role, such as powering the text searching features of the app. This specialized use requires that some of the data from the primary database is replicated to the Elasticsearch instance. - -For instance, let's imagine that we're storing our users in a PostgreSQL table, but using Elasticsearch to power our user-search functionality. If a user, "Albert", decides to change his name to "Al", we'll need this change to be reflected in both our primary PostgreSQL database and in our auxiliary Elasticsearch cluster. - -This can be a tricky integration to get right, and the best answer will depend on your existing stack. There are a multitude of open-source options available, from [a process to watch a MongoDB operation log][41] and automatically sync detected changes to ES, to a [PostgresSQL plugin][42] to create a custom PSQL-based index that communicates automatically with Elasticsearch. - -If none of the available pre-built options work, you could always just add some hooks into your server code to update the Elasticsearch index manually based on database changes. I would consider this final option to be a last resort, since keeping ES in sync using custom business logic can be complex, and is likely to introduce numerous bugs to the application. 
- -The need to sync Elasticsearch with a primary database is more of an architectural complexity than it is a specific weakness of ES, but it's certainly worth keeping in mind when considering the tradeoffs of adding a dedicated search engine to your app. - -### CONCLUSION - -Full-text search is one of the most important features in many modern applications - and is one of the most difficult to implement well. Elasticsearch is a fantastic option for adding fast and customizable text search to your application, but there are alternatives. [Apache Solr][43] is a similar open source search platform that is built on Apache Lucene - the same library at the core of Elasticsearch. [Algolia][44] is a search-as-a-service web platform which is growing quickly in popularity and is likely to be easier to get started with for beginners (but as a tradeoff is less customizable and can get quite expensive). - -"Search-bar" style features are far from the only use-case for Elasticsearch. ES is also a very common tool for log storage and analysis, commonly used in an ELK (Elasticsearch, Logstash, Kibana) stack configuration. The flexible full-text search allowed by Elasticsearch can also be very useful for a wide variety of data science tasks - such as correcting/standardizing the spellings of entities within a dataset or searching a large text dataset for similar phrases. - -Here are some ideas for your own projects. - -* Add more of your favorite books to our tutorial app and create your own private library search engine. - -* Create an academic plagiarism detection engine by indexing papers from [Google Scholar][2]. - -* Build a spell checking application by indexing every word in the dictionary to Elasticsearch. - -* Build your own Google-competitor internet search engine by loading the [Common Crawl Corpus][3] into Elasticsearch (caution - with over 5 billion pages, this can be a very expensive dataset play with). - -* Use Elasticsearch for journalism: search for specific names and terms in recent large-scale document leaks such as the [Panama Papers][4] and [Paradise Papers][5]. - -The source code for this tutorial application is 100% open-source and can be found at the GitHub repository here - [https://github.com/triestpa/guttenberg-search][45] - -I hope you enjoyed the tutorial! Please feel free to post any thoughts, questions, or criticisms in the comments below. - - --------------------------------------------------------------------------------- - -作者简介: - -Full-stack engineer, data enthusiast, insatiable learner, obsessive builder. You can find me wandering on a mountain trail, pretending not to be lost. 
- -------------- - - -via: https://blog.patricktriest.com/text-search-docker-elasticsearch/ - -作者:[Patrick Triest][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://blog.patricktriest.com/author/patrick/ -[1]:https://blog.patricktriest.com/you-should-learn-regex/ -[2]:https://scholar.google.com/ -[3]:https://aws.amazon.com/public-datasets/common-crawl/ -[4]:https://en.wikipedia.org/wiki/Panama_Papers -[5]:https://en.wikipedia.org/wiki/Paradise_Papers -[6]:https://search.patricktriest.com/ -[7]:https://github.com/triestpa/guttenberg-search -[8]:https://www.postgresql.org/ -[9]:https://www.mongodb.com/ -[10]:https://www.elastic.co/ -[11]:https://www.docker.com/ -[12]:https://www.uber.com/ -[13]:https://www.spotify.com/us/ -[14]:https://www.adp.com/ -[15]:https://www.paypal.com/us/home -[16]:https://nodejs.org/en/ -[17]:http://koajs.com/ -[18]:https://vuejs.org/ -[19]:https://www.elastic.co/ -[20]:https://lucene.apache.org/core/ -[21]:https://www.elastic.co/guide/en/elasticsearch/guide/2.x/getting-started.html -[22]:https://en.wikipedia.org/wiki/B-tree -[23]:https://www.docker.com/ -[24]:https://www.docker.com/ -[25]:https://docs.docker.com/compose/ -[26]:https://docs.docker.com/engine/installation/ -[27]:https://docs.docker.com/compose/install/ -[28]:https://www.gutenberg.org/ -[29]:https://cdn.patricktriest.com/data/books.zip -[30]:https://www.gnu.org/software/wget/ -[31]:https://theunarchiver.com/command-line -[32]:https://www.elastic.co/guide/en/elasticsearch/reference/current/full-text-queries.html -[33]:http://koajs.com/ -[34]:https://github.com/hapijs/joi -[35]:https://github.com/triestpa/koa-joi-validate -[36]:https://vuejs.org/v2/guide/ -[37]:https://search.patricktriest.com/ -[38]:https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html -[39]:https://www.elastic.co/guide/en/elasticsearch/guide/2.x/distributed-cluster.html -[40]:https://search.patricktriest.com/ -[41]:https://github.com/mongodb-labs/mongo-connector -[42]:https://github.com/zombodb/zombodb -[43]:https://lucene.apache.org/solr/ -[44]:https://www.algolia.com/ -[45]:https://github.com/triestpa/guttenberg-search -[46]:https://blog.patricktriest.com/tag/guides/ -[47]:https://blog.patricktriest.com/tag/javascript/ -[48]:https://blog.patricktriest.com/tag/nodejs/ -[49]:https://blog.patricktriest.com/tag/web-development/ -[50]:https://blog.patricktriest.com/tag/devops/ \ No newline at end of file diff --git a/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md b/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md index 817931c2a4..5895275c62 100644 --- a/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md +++ b/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md @@ -1,3 +1,4 @@ +Translating by qhwdw Keep Accurate Time on Linux with NTP ====== diff --git a/sources/tech/20180126 Running a Python application on Kubernetes.md b/sources/tech/20180126 Running a Python application on Kubernetes.md index 4ce9f38726..d540eb0133 100644 --- a/sources/tech/20180126 Running a Python application on Kubernetes.md +++ b/sources/tech/20180126 Running a Python application on Kubernetes.md @@ -1,3 +1,4 @@ +@qhh0205 翻译中 Running a Python application on Kubernetes ============================================================ @@ -277,4 +278,4 @@ via: https://opensource.com/article/18/1/running-python-application-kubernetes 
[14]:https://opensource.com/users/nanjekyejoannah
[15]:https://opensource.com/users/nanjekyejoannah
[16]:https://opensource.com/tags/python
-[17]:https://opensource.com/tags/kubernetes
\ No newline at end of file
+[17]:https://opensource.com/tags/kubernetes
diff --git a/sources/tech/20180127 Your instant Kubernetes cluster.md b/sources/tech/20180127 Your instant Kubernetes cluster.md
index b17619762a..d804986aac 100644
--- a/sources/tech/20180127 Your instant Kubernetes cluster.md
+++ b/sources/tech/20180127 Your instant Kubernetes cluster.md
@@ -1,3 +1,4 @@
+Translating by qhwdw
Your instant Kubernetes cluster
============================================================
@@ -168,4 +169,4 @@ via: https://blog.alexellis.io/your-instant-kubernetes-cluster/
[12]:https://weave.works/
[13]:https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
[14]:https://blog.alexellis.io/docker-for-mac-with-kubernetes/
-[15]:https://blog.alexellis.io/your-instant-kubernetes-cluster/#
\ No newline at end of file
+[15]:https://blog.alexellis.io/your-instant-kubernetes-cluster/#
diff --git a/sources/tech/20180129 Parsing HTML with Python.md b/sources/tech/20180129 Parsing HTML with Python.md
deleted file mode 100644
index bc6e4ff2e6..0000000000
--- a/sources/tech/20180129 Parsing HTML with Python.md
+++ /dev/null
@@ -1,214 +0,0 @@
-translating by Flowsnow
-
-Parsing HTML with Python
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_html_code.png?itok=VjUmGsnl)
-
-Image by : Jason Baker for Opensource.com.
-
-As a long-time member of the documentation team at Scribus, I keep up-to-date with the latest updates of the source so I can help make updates and additions to the documentation. When I recently did a "checkout" using Subversion on a computer I had just upgraded to Fedora 27, I was amazed at how long it took to download the documentation, which consists of HTML pages and associated images. I became concerned that the project's documentation seemed much larger than it should be and suspected that some of the content was "zombie" documentation--HTML files that aren't used anymore and images that have lost all references in the currently used HTML.
-
-I decided to create a project for myself to figure this out. One way to do this is to search for existing image files that aren't used. If I could scan through all the HTML files for image references, then compare that list to the actual image files, chances are I would see a mismatch.
-
-Here is a typical image tag:
-```
-<img src="images/edit-draw-shapes.png" alt="Edit examples" />
-```
-
-I'm interested in the part between the first set of quotation marks, after `src=`. After some searching for a solution, I found a Python module called [BeautifulSoup][1]. The tasty part of the script I wrote looks like this:
-```
-soup = BeautifulSoup(all_text, 'html.parser')
-match = soup.findAll("img")
-if len(match) > 0:
-    for m in match:
-        imagelist.append(str(m))
-```
-
-We can use this `findAll` method to pluck out the image tags. Here is a tiny piece of the output:
-```
-<img src="images/gsview-options.png" alt="GSview - Advanced Options Panel" />
-<img src="images/scribus-external-tools.png" alt="Scribus External Tools Preferences" />
-```
-
-So far, so good. I thought that the next step might be to just carve this down, but when I tried some string methods in the script, it returned errors about these being tags and not strings. I saved the output to a file and went through the process of editing in [KWrite][2].
One nice thing about KWrite is that you can do a "find & replace" using regular expressions (regex), so I could replace `<img` with `\n<img` to put each image tag on its own line. In the end, though, hand-editing was more trouble than it was worth, and it was easier to let Python's `re` module find the tags for me:
-```
-match = re.findall(r'<img .*?>', all_text)
-if len(match)>0:
-    for m in match:
-        imagelist.append(m)
-```
-
-And a tiny piece of its output looks like this:
-```
-<img src="images/cmcanvas.png" title="Context Menu for the document canvas" alt="Context Menu for the document canvas" />
-``` - -I decided to home in on the `src=` piece. One way would be to wait for an occurrence of `s`, then see if the next character is `r`, the next `c`, and the next `=`. If so, bingo! Then what follows between two sets of double quotation marks is what I need. The problem with this is the structure it takes to hang onto these. One way of looking at a string of characters representing a line of HTML text would be: -``` -for c in all_text: -``` - -But the logic was just too messy to hang onto the previous `c`, and the one before that, the one before that, and the one before that. - -In the end, I decided to focus on the `=` and to use an indexing method whereby I could easily reference any prior or future character in the string. Here is the searching part: -``` - index = 3 - while index < linelength: - if (all_text[index] == '='): - if (all_text[index-3] == 's') and (all_text[index-2] == 'r') and (all_text[index-1] == 'c'): - imagefound(all_text, imagelist, index) - index += 1 - else: - index += 1 - else: - index += 1 -``` - -I start the search with the fourth character (indexing starts at 0), so I don't get an indexing error down below, and realistically, there will not be an equal sign before the fourth character of a line. The first test is to see if we find `=` as we're marching through the string, and if not, we march on. If we do see one, then we ask if the three previous characters were `s`, `r`, and `c`, in that order. If that happens, we call the function `imagefound`: -``` -def imagefound(all_text, imagelist, index): - end = 0 - index += 2 - newimage = '' - while end == 0: - if (all_text[index] != '"'): - newimage = newimage + all_text[index] - index += 1 - else: - newimage = newimage + '\n' - imagelist.append(newimage) - end = 1 - return -``` - -We're sending the function the current index, which represents the `=`. We know the next character will be `"`, so we jump two characters and begin adding characters to a holding string named `newimage`, until we reach the following `"`, at which point we're done. We add the string plus a `newline` character to our list `imagelist` and `return`, keeping in mind there may be more image tags in this remaining string of HTML, so we're right back in the middle of our searching loop. - -Here's what our output looks like now: -``` -images/text-frame-link.png -images/text-frame-unlink.png -images/gimpoptions1.png -images/gimpoptions3.png -images/gimpoptions2.png -images/fontpref3.png -images/font-subst.png -images/fontpref2.png -images/fontpref1.png -images/dtp-studio.png -``` - -Ahhh, much cleaner, and this only took a few seconds to run. I could have jumped seven more index spots to cut out the `images/` part, but I like having it there to make sure I haven't chopped off the first letter of the image filename, and this is so easy to edit out with KWrite--you don't even need regex. After doing that and saving the file, the next step was to run another script I wrote called `sortlist.py`: -``` -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# sortlist.py - -import os - -imagelist = [] -for line in open('/tmp/imagelist_parse4.txt').xreadlines(): - imagelist.append(line) - -imagelist.sort() - -outfile = open('/tmp/imagelist_parse4_sorted.txt', 'w') -outfile.writelines(imagelist) -outfile.close() -``` - -This pulls in the file contents as a list, sorts it, then saves it as another file. 
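-
-As an aside, the visual comparison mentioned below could be replaced with a small shell pipeline. This is a hypothetical addition (the file names come from this article, and the directory listing is produced by the `ls` command shown next): strip the directory paths from both lists so only bare file names remain, then let `comm` report the images that exist on disk but never appear in the HTML:
-
-```
-# Column 2 of comm = lines unique to the second input: image files
-# present on disk but absent from the HTML reference list.
-comm -13 <(sed 's|.*/||' /tmp/imagelist_parse4_sorted.txt | sort) \
-         <(sed 's|.*/||' /tmp/actual_images.txt | sort)
-```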
After that I could just do the following:
-```
-ls /home/gregp/development/Scribus15x/doc/en/images/*.png > '/tmp/actual_images.txt'
-```
-
-Then I need to run `sortlist.py` on that file too, since the method `ls` uses to sort is different from Python. I could have run a comparison script on these files, but I preferred to do this visually. In the end, I ended up with 42 images that had no HTML reference from the documentation.
-
-Here is my parsing script in its entirety:
-```
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-# parseimg4.py
-
-import os
-
-def imagefound(all_text, imagelist, index):
-    end = 0
-    index += 2
-    newimage = ''
-    while end == 0:
-        if (all_text[index] != '"'):
-            newimage = newimage + all_text[index]
-            index += 1
-        else:
-            newimage = newimage + '\n'
-            imagelist.append(newimage)
-            end = 1
-    return
-
-htmlnames = []
-imagelist = []
-tempstring = ''
-filenames = os.listdir('/home/gregp/development/Scribus15x/doc/en/')
-for name in filenames:
-    if name.endswith('.html'):
-        htmlnames.append(name)
-#print htmlnames
-for htmlfile in htmlnames:
-    all_text = open('/home/gregp/development/Scribus15x/doc/en/' + htmlfile).read()
-    linelength = len(all_text)
-    index = 3
-    while index < linelength:
-        if (all_text[index] == '='):
-            if (all_text[index-3] == 's') and (all_text[index-2] == 'r') and (all_text[index-1] == 'c'):
-                imagefound(all_text, imagelist, index)
-                index += 1
-            else:
-                index += 1
-        else:
-            index += 1
-
-outfile = open('/tmp/imagelist_parse4.txt', 'w')
-outfile.writelines(imagelist)
-outfile.close()
-imageno = len(imagelist)
-print str(imageno) + " images were found and saved"
-```
-
-Its name, `parseimg4.py`, doesn't really reflect the number of scripts I wrote along the way, with both minor and major rewrites, plus discards and starting over. Notice that I've hardcoded these directory and filenames, but it would be easy enough to generalize, asking for user input for these pieces of information. Also, as they were working scripts, I sent the output to `/tmp`, so they disappear once I reboot my system.
-
-This wasn't the end of the story, since the next question was: What about zombie HTML files? Any of these files that are not used might reference images not picked up by the previous method. We have a `menu.xml` file that serves as the table of contents for the online manual, but I also needed to consider that some files listed in the TOC might reference files not in the TOC, and yes, I did find some.
-
-I'll conclude by saying that finding the zombie HTML files was a simpler task than this image search, and it was greatly helped by the processes I had already developed.
-
-
-### About the author
-
- [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/20150529_gregp.jpg?itok=nv02g6PV)][7] Greg Pittman - Greg is a retired neurologist in Louisville, Kentucky, with a long-standing interest in computers and programming, beginning with Fortran IV in the 1960s. When Linux and open source software came along, it kindled a commitment to learning more, and eventually contributing.
He is a member of the Scribus Team. [More about me][8]
-
--------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/1/parsing-html-python
-
-作者:[Greg Pittman][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/greg-p
-[1]:https://www.crummy.com/software/BeautifulSoup/
-[2]:https://www.kde.org/applications/utilities/kwrite/
-[7]:https://opensource.com/users/greg-p
-[8]:https://opensource.com/users/greg-p
diff --git a/sources/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md b/sources/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md
deleted file mode 100644
index 03e15878b9..0000000000
--- a/sources/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md
+++ /dev/null
@@ -1,95 +0,0 @@
-Install AWFFull web server log analysis application on ubuntu 17.10
-======
-
-
-AWFFull is a web server log analysis program based on "The Webalizer". AWFFull produces usage statistics in HTML format for viewing with a browser. The results are presented in both columnar and graphical format, which facilitates interpretation. Yearly, monthly, daily and hourly usage statistics are presented, along with the ability to display usage by site, URL, referrer, user agent (browser), user name, search strings, entry/exit pages, and country (some information may not be available if not present in the log file being processed).
-
-AWFFull supports CLF (common log format) log files, as well as Combined log formats as defined by NCSA and others, and variations of these which it attempts to handle intelligently. In addition, AWFFull also supports wu-ftpd xferlog formatted log files, allowing analysis of ftp servers, and squid proxy logs. Logs may also be compressed, via gzip.
-
-If a compressed log file is detected, it will be automatically uncompressed while it is read. Compressed logs must have the standard gzip extension of .gz.
-
-### Changes from Webalizer
-
-AWFFull is based on the Webalizer code and has a number of large and small changes. These include:
-
-o Beyond the raw statistics: Making use of published formulae to provide additional insights into site usage.
-
-o GeoIP IP Address look-ups for more accurate country detection.
-
-o Resizable graphs.
-
-o Integration with GNU gettext allowing for ease of translations. Currently, 32 languages are supported.
-
-o Display more than 12 months of the site history on the front page.
-
-o Additional page count tracking and sort by same.
-
-o Some minor visual tweaks, including Geolizer's use of Kb, Mb etc for Volumes.
-
-o Additional Pie Charts for URL counts, Entry and Exit Pages, and Sites.
-
-o Horizontal lines on graphs that are more sensible and easier to read.
-
-o User Agent and Referral tracking is now calculated via PAGES not HITS.
-
-o GNU style long command line options are now supported (eg --help).
-
-o Can choose what is a page by excluding "what isn't" vs the original "what is" method.
-
-o Requests to the site being analysed are displayed with the matching referring URL.
-
-o A Table of 404 Errors, and the referring URL can be generated.
-
-o An external CSS file can be used with the generated html.
-
-o Manual performance optimisation of the config file is now easier with a post-analysis summary output.
-
-o Specified IPs & addresses can be assigned to a given country.
-
-o Additional Dump options for detailed analysis with other tools.
-
-o Lotus Domino v6 logs are now detected and processed.
-
-**Install awffull on ubuntu 17.10**
-
-> sudo apt-get install awffull
-
-### Configuring AWFFULL
-
-You have to edit the awffull config file at /etc/awffull/awffull.conf. If you have multiple virtual websites running on the same machine, you can make several copies of the default config file.
-
-> sudo vi /etc/awffull/awffull.conf
-
-Make sure the following lines are there:
-
-> LogFile /var/log/apache2/access.log.1
-> OutputDir /var/www/html/awffull
-
-Save and exit the file.
-
-You can run awffull against your config using the following command:
-
-> awffull -c [your config file name]
-
-This will create all the required files under the /var/www/html/awffull directory, so you can access your webserver stats using http://serverip/awffull/
-
-You should see something similar to the following screen.
-
-If you have more sites, you can automate the process using a shell script and a cron job.
-
-
--------------------------------------------------------------------------------
-
-via: http://www.ubuntugeek.com/install-awffull-web-server-log-analysis-application-on-ubuntu-17-10.html
-
-作者:[ruchi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.ubuntugeek.com/author/ubuntufix
diff --git a/sources/tech/20180131 How to access-view Python help when using vim.md b/sources/tech/20180131 How to access-view Python help when using vim.md
deleted file mode 100644
index f37a56908b..0000000000
--- a/sources/tech/20180131 How to access-view Python help when using vim.md
+++ /dev/null
@@ -1,88 +0,0 @@
-translating---geekpi
-
-How to access/view Python help when using vim
-======
-
-I am a new Vim text editor user. I am writing Python code. Is there a way to see Python documentation within vim and without visiting the Internet? Say my cursor is under the print Python keyword, and I press F1. I want to look at the help for the print keyword. How do I show python help() inside vim? How do I call pydoc3/pydoc to seek help without leaving vim?
-
-The pydoc or pydoc3 command shows text documentation on the name of a Python keyword, topic, function, module, or package, or a dotted reference to a class or function within a module or module in a package. You can call pydoc from vim itself. Let us see how to access Python documentation using pydoc within the vim text editor.
-
-### Access python help using pydoc
-
-The syntax is:
-```
-pydoc keyword
-pydoc3 keyword
-pydoc len
-pydoc print
-```
-Edit your ~/.vimrc:
-`$ vim ~/.vimrc`
-Append the following configuration for pydoc3 (python v3.x docs). Create a mapping for the H key that works in normal mode:
-```
-nnoremap <buffer> H :<C-u>execute "!pydoc3 " . expand("<cword>")<CR>
-```
-
-
-Save and close the file. Open the vim text editor:
-`$ vim file.py`
-Write some code:
-```
-#!/usr/bin/python3
-x=5
-y=10
-z=x+y
-print(z)
-print("Hello world")
-```
-
-Position the cursor under the print Python keyword and press Shift followed by H. You will see output as follows:
-
-[![Access Python Help Within Vim][1]][1]
-Gif.01: Press H to view help for the print Python keyword
-
-### How to view python help when using vim
-
-[jedi-vim][2] is a VIM binding to the autocompletion library Jedi. It can do many things, including displaying help for the keyword under the cursor when you press Shift followed by K, i.e. capital K.
-
-#### How to install jedi-vim on Linux or Unix-like system
-
-Use [pathogen][3], [vim-plug][4] or [Vundle][5] to install jedi-vim. I am using Vim-Plug. Add the following line in ~/.vimrc:
-`Plug 'davidhalter/jedi-vim'`
-Save and close the file. Start vim and type:
-`:PlugInstall`
-On Arch Linux, you can also install jedi-vim from official repositories as vim-jedi using pacman command:
-`$ sudo pacman -S vim-jedi`
-It is also available on Debian (≥8) and Ubuntu (≥14.04) as vim-python-jedi using [apt command][6]/[apt-get command][7]:
-`$ sudo apt install vim-python-jedi`
-On Fedora Linux, it is available as vim-jedi using dnf command:
-`$ sudo dnf install vim-jedi`
-Jedi is automatically initialized by default, so no further configuration is needed on your part. To see Documentation/Pydoc, press K. It shows a popup with assignments:
-[![How to view python help when using vim][8]][8]
-
-### About the author
-
-The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][9], [Facebook][10], [Google+][11].
-
--------------------------------------------------------------------------------
-
-via: https://www.cyberciti.biz/faq/how-to-access-view-python-help-when-using-vim/
-
-作者:[Vivek Gite][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.cyberciti.biz
-[1]:https://www.cyberciti.biz/media/new/faq/2018/01/Access-Python-Help-Within-Vim.gif
-[2]:https://github.com/davidhalter/jedi-vim
-[3]:https://github.com/tpope/vim-pathogen
-[4]:https://www.cyberciti.biz/programming/vim-plug-a-beautiful-and-minimalist-vim-plugin-manager-for-unix-and-linux-users/
-[5]:https://github.com/gmarik/vundle
-[6]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
-[7]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
-[8]:https://www.cyberciti.biz/media/new/faq/2018/01/How-to-view-Python-Documentation-using-pydoc-within-vim-on-Linux-Unix.jpg
-[9]:https://twitter.com/nixcraft
-[10]:https://facebook.com/nixcraft
-[11]:https://plus.google.com/+CybercitiBiz
diff --git a/sources/tech/20180131 Microservices vs. monolith How to choose.md b/sources/tech/20180131 Microservices vs. monolith How to choose.md
index 35056b1ee0..a337f3c85f 100644
--- a/sources/tech/20180131 Microservices vs. monolith How to choose.md
+++ b/sources/tech/20180131 Microservices vs. monolith How to choose.md
@@ -1,3 +1,4 @@
+Translating by qhwdw
Microservices vs. monolith: How to choose
============================================================
@@ -173,4 +174,4 @@ via: https://opensource.com/article/18/1/how-choose-between-monolith-microservices
[19]:https://opensource.com/users/jakelumetta
[20]:https://opensource.com/users/jakelumetta
[21]:https://opensource.com/tags/microservices
-[22]:https://opensource.com/tags/devops
\ No newline at end of file
+[22]:https://opensource.com/tags/devops
diff --git a/sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md b/sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md
index 752d06bc6a..4824b0370b 100644
--- a/sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md
+++ b/sources/tech/20180201 How to Run Your Own Public Time Server on Linux.md
@@ -1,3 +1,4 @@
+Translating by qhwdw
How to Run Your Own Public Time Server on Linux
======
diff --git a/sources/tech/20180205 A File Transfer Utility To Download Only The New Parts Of A File.md b/sources/tech/20180205 A File Transfer Utility To Download Only The New Parts Of A File.md
deleted file mode 100644
index 63f87b86cc..0000000000
--- a/sources/tech/20180205 A File Transfer Utility To Download Only The New Parts Of A File.md
+++ /dev/null
@@ -1,98 +0,0 @@
-A File Transfer Utility To Download Only The New Parts Of A File
-======
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/02/Linux-1-720x340.png)
-
-Just because Internet plans are getting cheaper every day doesn't mean you should waste your data by repeatedly downloading the same stuff over and over. One fine example is downloading development versions of Ubuntu or other Linux images. As you may know, Ubuntu developers release daily builds, and alpha and beta ISO images every few months, for testing. In the past, I used to download those images whenever they became available, to test and review each edition. Not anymore! Thanks to the **Zsync** file transfer program. Now it is possible to download only the new parts of the ISO image. This will save you a lot of time and Internet bandwidth. Not just time and bandwidth, it will save resources on both the server side and the client side.
-
-Zsync uses the same algorithm as **Rsync** , but it only downloads the new parts of a file, provided you already have a copy of an older version of the file on your computer. Rsync is mainly for synchronizing data between computers, whereas Zsync is for distributing data. To put this simply, one file in a central location can be distributed to thousands of downloaders using Zsync. It is completely free and open source released under the Artistic License V2.
-
-### Installing Zsync
-
-Zsync is available in the default repositories of most Linux distributions.
-
-On **Arch Linux** and derivatives, install it using command:
-```
-$ sudo pacman -S zsync

-```
-
-On **Fedora** :
-
-Enable Zsync repository:
-```
-$ sudo dnf copr enable ngompa/zsync

-```
-
-And install it using command:
-```
-$ sudo dnf install zsync

-```
-
-On **Debian, Ubuntu, Linux Mint** :
-```
-$ sudo apt-get install zsync

-```
-
-For other distributions, you can download the binary from the [**Zsync download page**][1] and manually compile and install it as shown below.
-```
-$ wget http://zsync.moria.org.uk/download/zsync-0.6.2.tar.bz2
-$ tar xjf zsync-0.6.2.tar.bz2
-$ cd zsync-0.6.2/
-$ ./configure
-$ make
-$ sudo make install

-```
-
-### Usage
-
-Please be mindful that **zsync is only useful if people offer zsync downloads**. Currently, Debian and Ubuntu (all flavours) ISO images are available as .zsync downloads. For example, visit the following link.
-
-As you may have noticed, the Ubuntu 18.04 LTS daily build is available as a direct ISO and as a .zsync file. If you download the .ISO file, you have to download the full ISO whenever the ISO gets new updates. But, if you download the .zsync file, Zsync will download only the new changes in the future. You don't need to download the whole ISO image each time.
-
-A .zsync file contains the metadata needed by the zsync program. This file contains the pre-calculated checksums for the rsync algorithm; it is generated on the server, once, and is then used by any number of downloaders. To download a .zsync file using the Zsync client program, all you have to do is:
-```
-$ zsync <.zsync-file-URL>

-```
-
-Example:
-```
-$ zsync http://cdimage.ubuntu.com/ubuntu/daily-live/current/bionic-desktop-amd64.iso.zsync

-```
-
-If you already have the old image file on your system, Zsync will calculate the difference between the old and new file on the remote server and download only the new parts. You will see the calculation process as a series of dots or stars on your Terminal.
-
-If an old version of the file you're downloading is available in the current working directory, Zsync will download only the new parts. Once the download is finished, you will get two images, the one you just downloaded and the old image with a **.iso.zs-old** extension on its filename.
-
-If no relevant local data is found, Zsync will download the whole file.
-
-![](http://www.ostechnix.com/wp-content/uploads/2018/02/Zsync-1.png)
-
-You can cancel the download process at any time by pressing **CTRL-C**.
-
-Just imagine: if you use the direct .ISO file or a torrent, you will spend around 1.4 GB of bandwidth whenever you download a new image. So, instead of downloading entire alpha, beta and daily build images, Zsync downloads just the parts of the ISO file that differ from the older copy already on your system.
-
-And, that's all for today. Hope this helps. I will be back soon with another useful guide. Until then stay tuned with OSTechNix!
-
-Cheers!
-
-
--------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/zsync-file-transfer-utility-download-new-parts-file/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:http://zsync.moria.org.uk/downloads
diff --git a/sources/tech/20180206 Manage printers and printing.md b/sources/tech/20180206 Manage printers and printing.md
deleted file mode 100644
index 13898c2e38..0000000000
--- a/sources/tech/20180206 Manage printers and printing.md
+++ /dev/null
@@ -1,535 +0,0 @@
-Manage printers and printing
-======
-
-
-### Printing in Linux
-
-Although much of our communication today is electronic and paperless, we still have considerable need to print material from our computers. Bank statements, utility bills, financial and other reports, and benefits statements are just some of the items that we still print. This tutorial introduces you to printing in Linux using CUPS.
-
-CUPS, formerly an acronym for Common UNIX Printing System, is the printer and print job manager for Linux. Early computer printers typically printed lines of text in a particular character set and font size. Today's graphical printers are capable of printing both graphics and text in a variety of sizes and fonts. Nevertheless, some of the commands you use today have their history in the older line printer daemon (LPD) technology.
-
-This tutorial helps you prepare for Objective 108.4 in Topic 108 of the Linux Server Professional (LPIC-1) exam 102. The objective has a weight of 2.
-
-#### Prerequisites
-
-To get the most from the tutorials in this series, you need a basic knowledge of Linux and a working Linux system on which you can practice the commands covered in this tutorial. You should be familiar with GNU and UNIX® commands. Sometimes different versions of a program format output differently, so your results might not always look exactly like the listings shown here.
-
-In this tutorial, I use Fedora 27 for examples.
-
-### Some printing history
-
-This small history is not part of the LPI objectives but may help you with context for this objective.
-
-Early computers mostly used line printers. These were impact printers that printed a line of text at a time using fixed-pitch characters and a single font. To speed up overall system performance, early mainframe computers interleaved work for slow peripherals such as card readers, card punches, and line printers with other work. Thus was born Simultaneous Peripheral Operation On Line or spooling, a term that is still commonly used when talking about computer printing.
-
-In UNIX and Linux systems, printing initially used the Berkeley Software Distribution (BSD) printing subsystem, consisting of a line printer daemon (lpd) running as a server, and client commands such as `lpr` to submit jobs for printing. This protocol was later standardized by the IETF as RFC 1179, **Line Printer Daemon Protocol**.
-
-System V also had a printing daemon. It was functionally similar to the Berkeley LPD, but had a different command set. You will frequently see two commands with different options that accomplish the same task. For example, `lpr` from the Berkeley implementation and `lp` from the System V implementation each print files.
-
-Advances in printer technology made it possible to mix different fonts on a page and to print images as well as words. Variable pitch fonts, and more advanced printing techniques such as kerning and ligatures, are now standard. Several improvements to the basic lpd/lpr approach to printing were devised, such as LPRng, the next generation LPR, and CUPS.
-
-Many printers capable of graphical printing initially used the Adobe PostScript language. A PostScript printer has an engine that interprets the commands in a print job and produces finished pages from these commands. PostScript is often used as an intermediate form between an original file, such as a text or an image file, and a final form suitable for a particular printer that does not have PostScript capability. Conversion of a print job, such as an ASCII text file or a JPEG image to PostScript, and conversion from PostScript to the final raster form required for a non-PostScript printer is done using filters.
-
-Today, Portable Document Format (PDF), which is based on PostScript, has largely replaced raw PostScript. PDF is designed to be independent of hardware and software and to encapsulate a full description of the pages to be printed. You can view PDF files as well as print them.
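-
-As a quick orientation before the commands in the next section: `lpstat -t` combines most of the individual status reports described below into a single display. A first look at a system like the one used in this tutorial might begin as follows (output abridged and illustrative; see the lpstat man page for details on `-t`):
-
-```
-[ian@atticf27 ~]$ lpstat -t
-scheduler is running
-system default destination: HL-2280DW
-...
-```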
-
-### Manage print queues
-
-Users direct print jobs to a logical entity called a print queue. In single-user systems, a print queue and a printer are usually equivalent. However, CUPS allows a system without an attached printer to queue print jobs for eventual printing on a remote system, and, through the use of classes, it allows a print job directed to a class to be printed on the first available printer of that class.
-
-You can inspect and manipulate print queues. Some of the commands to do so are new for CUPS. Others are compatibility commands that have their roots in LPD commands, although the current options are usually a limited subset of the original LPD printing system options.
-
-You can check the queues known to the system using the CUPS `lpstat` command. Some common options are shown in Table 1.
-
-###### Table 1. Options for lpstat
-| Option | Purpose |
-| -a | Display accepting status of printers. |
-| -c | Display print classes. |
-| -p | Display print status: enabled or disabled. |
-| -s | Display default printer, printers, and classes. Equivalent to -d -c -v. Note that multiple options must be given separately, since values can be specified for many of them. |
-| -v | Display printers and their devices. |
-
-
-You may also use the LPD `lpc` command, found in /usr/sbin, with the `status` option. If you do not specify a printer name, all queues are listed. Listing 1 shows some examples of both commands.
-
-###### Listing 1. Displaying available print queues
-```
-[ian@atticf27 ~]$ lpstat -d
-system default destination: HL-2280DW
-[ian@atticf27 ~]$ lpstat -v HL-2280DW
-device for HL-2280DW: dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/
-[ian@atticf27 ~]$ lpstat -s
-system default destination: HL-2280DW
-members of class anyprint:
-	HL-2280DW
-	XP-610
-device for anyprint: ///dev/null
-device for HL-2280DW: dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/
-device for XP-610: dnssd://EPSON%20XP-610%20Series._ipp._tcp.local/?uuid=cfe92100-67c4-11d4-a45f-ac18266c48aa
-[ian@atticf27 ~]$ lpstat -a XP-610
-XP-610 accepting requests since Thu 27 Apr 2017 05:53:59 PM EDT
-[ian@atticf27 ~]$ /usr/sbin/lpc status HL-2280DW
-HL-2280DW:
-	printer is on device 'dnssd' speed -1
-	queuing is disabled
-	printing is enabled
-	no entries
-	daemon present

-```
-
-This example shows two printers, HL-2280DW and XP-610, and a class, `anyprint`, which allows print jobs to be directed to the first available of these two printers.
-
-In this example, queuing of print jobs to HL-2280DW is currently disabled, although printing is enabled, as might be done in order to drain the queue before taking the printer offline for maintenance. Whether queuing is enabled or disabled is controlled by the `cupsaccept` and `cupsreject` commands. Formerly, these were `accept` and `reject`, but you will probably find these commands in /usr/sbin are now just links to the newer commands. Similarly, whether printing is enabled or disabled is controlled by the `cupsenable` and `cupsdisable` commands. In earlier versions of CUPS, these were called `enable` and `disable`, which invited confusion with the bash shell builtin `enable`. Listing 2 shows how to enable queuing on printer HL-2280DW while disabling printing. Several of the CUPS commands support a `-r` option to give a reason for the action. This reason is displayed when you use `lpstat`, but not if you use `lpc`.
-
-###### Listing 2. Enabling queuing and disabling printing
-```
-[ian@atticf27 ~]$ lpstat -a -p HL-2280DW
-anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST
-HL-2280DW not accepting requests since Thu 27 Apr 2017 05:52:27 PM EDT -
-	Maintenance scheduled
-XP-610 accepting requests since Thu 27 Apr 2017 05:53:59 PM EDT
-printer HL-2280DW is idle. enabled since Thu 27 Apr 2017 05:52:27 PM EDT
-	Maintenance scheduled
-[ian@atticf27 ~]$ accept HL-2280DW
-[ian@atticf27 ~]$ cupsdisable -r "waiting for toner delivery" HL-2280DW
-[ian@atticf27 ~]$ lpstat -p -a
-printer anyprint is idle. enabled since Mon 29 Jan 2018 01:17:09 PM EST
-printer HL-2280DW disabled since Mon 29 Jan 2018 04:03:50 PM EST -
-	waiting for toner delivery
-printer XP-610 is idle. enabled since Thu 27 Apr 2017 05:53:59 PM EDT
-anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST
-HL-2280DW accepting requests since Mon 29 Jan 2018 04:03:50 PM EST
-XP-610 accepting requests since Thu 27 Apr 2017 05:53:59 PM EDT

-```
-
-Note that an authorized user must perform these tasks. This may be root or another authorized user. See the SystemGroup entry in /etc/cups/cups-files.conf and the man page for cups-files.conf for more information on authorizing user groups.
-
-### Manage user print jobs
-
-Now that you have seen a little of how to check on print queues and classes, I will show you how to manage jobs on printer queues. The first thing you might want to do is find out whether any jobs are queued for a particular printer or for all printers. You do this with the `lpq` command. If no option is specified, `lpq` displays the queue for the default printer. Use the `-P` option with a printer name to specify a particular printer or the `-a` option to specify all printers, as shown in Listing 3.
-
-###### Listing 3. Checking print queues with lpq
-```
-[pat@atticf27 ~]$ # As user pat (non-administrator)
-[pat@atticf27 ~]$ lpq
-HL-2280DW is not ready
-Rank    Owner   Job     File(s)                         Total Size
-1st     unknown 4       unknown                         6144 bytes
-2nd     pat     6       bitlib.h                        6144 bytes
-3rd     pat     7       bitlib.C                        6144 bytes
-4th     unknown 8       unknown                         1024 bytes
-5th     unknown 9       unknown                         1024 bytes
-
-[ian@atticf27 ~]$ # As user ian (administrator)
-[ian@atticf27 ~]$ lpq -P xp-610
-xp-610 is ready
-no entries
-[ian@atticf27 ~]$ lpq -a
-Rank    Owner   Job     File(s)                         Total Size
-1st     ian     4       permutation.C                   6144 bytes
-2nd     pat     6       bitlib.h                        6144 bytes
-3rd     pat     7       bitlib.C                        6144 bytes
-4th     ian     8       .bashrc                         1024 bytes
-5th     ian     9       .bashrc                         1024 bytes

-```
-
-In this example, five jobs, 4, 6, 7, 8, and 9, are queued for the printer named HL-2280DW and none for XP-610. Using the `-P` option in this case simply shows that the printer is ready but has no queued jobs. Note that CUPS printer names are not case-sensitive. Note also that user ian submitted a job twice, a common user action when a job does not print the first time.
-
-In general, you can view or manipulate your own print jobs, but root or another authorized user is usually required to manipulate the jobs of others. Most CUPS commands also support a `-E` option for encrypted communication between the CUPS client command and the CUPS server.
-
-Use the `lprm` command to remove one of the .bashrc jobs from the queue. With no options, the current job is removed. With the `-` option, all jobs are removed. Otherwise, specify a list of jobs to be removed as shown in Listing 4.
-
-###### Listing 4. Deleting print jobs with lprm
-```
-[pat@atticf27 ~]$ # As user pat (non-administrator)
-[pat@atticf27 ~]$ lprm
-lprm: Forbidden
-
-[ian@atticf27 ~]$ # As user ian (administrator)
-[ian@atticf27 ~]$ lprm 8
-[ian@atticf27 ~]$ lpq
-HL-2280DW is not ready
-Rank    Owner   Job     File(s)                         Total Size
-1st     ian     4       permutation.C                   6144 bytes
-2nd     pat     6       bitlib.h                        6144 bytes
-3rd     pat     7       bitlib.C                        6144 bytes
-4th     ian     9       .bashrc                         1024 bytes

-```
-
-Note that user pat was not able to remove the first job on the queue, because it was for user ian. However, ian was able to remove his own job number 8.
-
-Another command that will help you manipulate jobs on print queues is the `lp` command. Use it to alter attributes of jobs, such as priority or number of copies. Let us assume user ian wants his job 9 to print before those of user pat, and he really did want two copies of it. The job priority ranges from a lowest priority of 1 to a highest priority of 100 with a default of 50. User ian could use the `-i`, `-n`, and `-q` options to specify a job to alter and a new number of copies and priority as shown in Listing 5. Note the use of the `-l` option of the `lpq` command, which provides more verbose output.
-
-###### Listing 5. Changing the number of copies and priority with lp
-```
-[ian@atticf27 ~]$ lpq
-HL-2280DW is not ready
-Rank    Owner   Job     File(s)                         Total Size
-1st     ian     4       permutation.C                   6144 bytes
-2nd     pat     6       bitlib.h                        6144 bytes
-3rd     pat     7       bitlib.C                        6144 bytes
-4th     ian     9       .bashrc                         1024 bytes
-[ian@atticf27 ~]$ lp -i 9 -q 60 -n 2
-[ian@atticf27 ~]$ lpq
-HL-2280DW is not ready
-Rank    Owner   Job     File(s)                         Total Size
-1st     ian     9       .bashrc                         1024 bytes
-2nd     ian     4       permutation.C                   6144 bytes
-3rd     pat     6       bitlib.h                        6144 bytes
-4th     pat     7       bitlib.C                        6144 bytes

-```
-
-Finally, the `lpmove` command allows jobs to be moved from one queue to another. For example, we might want to do this because printer HL-2280DW is not currently printing. You can specify just a job number, such as 9, or you can qualify it with the queue name and a hyphen, such as HL-2280DW-9. The `lpmove` command requires an authorized user. Listing 6 shows how to move these jobs to another queue, specifying first by printer and job ID, then all jobs for a given printer. By the time we check the queues again, one of the jobs is already printing.
-
-###### Listing 6. Moving jobs to another print queue with lpmove
-```
-[ian@atticf27 ~]$ lpmove HL-2280DW-9 anyprint
-[ian@atticf27 ~]$ lpmove HL-2280DW xp-610
-[ian@atticf27 ~]$ lpq -a
-Rank    Owner   Job     File(s)                         Total Size
-active  ian     9       .bashrc                         1024 bytes
-1st     ian     4       permutation.C                   6144 bytes
-2nd     pat     6       bitlib.h                        6144 bytes
-3rd     pat     7       bitlib.C                        6144 bytes
-[ian@atticf27 ~]$ # A few minutes later
-[ian@atticf27 ~]$ lpq -a
-Rank    Owner   Job     File(s)                         Total Size
-active  pat     6       bitlib.h                        6144 bytes
-1st     pat     7       bitlib.C                        6144 bytes

-```
-
-If you happen to use a print server that is not CUPS, such as LPD or LPRng, many of the queue administration functions are handled as subcommands of the `lpc` command. For example, you might use `lpc topq` to move a job to the top of a queue. Other `lpc` subcommands include `disable`, `down`, `enable`, `hold`, `move`, `redirect`, `release`, and `start`. These subcommands are not implemented in the CUPS `lpc` compatibility command.
-
-#### Printing files
-
-How are print jobs created? Many graphical programs provide a method of printing, usually under the **File** menu option.
These programs provide graphical tools for choosing a printer, margin sizes, color or black-and-white printing, number of copies, selecting 2-up printing (which is 2 pages per sheet, often used for handouts), and so on. Here I show you the command-line tools for controlling such features, and then a graphical implementation for comparison. - -The simplest way to print any file is to use the `lpr` command and provide the file name. This prints the file on the default printer. The `lp` command can print files as well as modify print jobs. Listing 7 shows a simple example using both commands. Note that `lpr` quietly spools the job, but `lp` displays the job number of the spooled job. - -###### Listing 7. Printing with lpr and lp -``` -[ian@atticf27 ~]$ echo "Print this text" > printexample.txt -[ian@atticf27 ~]$ lpr printexample.txt -[ian@atticf27 ~]$ lp printexample.txt -request id is HL-2280DW-12 (1 file(s)) - -``` - -Table 2 shows some options that you may use with `lpr`. Note that `lp` has similar options to `lpr`, but names may differ; for example, `-#` on `lpr` is equivalent to `-n` on `lp`. Check the man pages for more information. - -###### Table 2. Options for lpr - -| Option | Purpose | -| -C, -J, or -T | Set a job name. | -| -P | Select a particular printer. | -| -# | Specify number of copies. Note this is different from the -n option you saw with the lp command. | -| -m | Send email upon job completion. | -| -l | Indicate that the print file is already formatted for printing. Equivalent to -o raw. | -| -o | Set a job option. | -| -p | Format a text file with a shaded header. Equivalent to -o prettyprint. | -| -q | Hold (or queue) the job for later printing. | -| -r | Remove the file after it has been spooled for printing. | - -Listing 8 shows some of these options in action. I request an email confirmation after printing, that the job be held and that the file be deleted after printing. - -###### Listing 8. Printing with lpr -``` -[ian@atticf27 ~]$ lpr -P HL-2280DW -J "Ian's text file" -#2 -m -p -q -r printexample.txt -[[ian@atticf27 ~]$ lpq -l -HL-2280DW is ready - - -ian: 1st [job 13 localhost] - 2 copies of Ian's text file 1024 bytes -[ian@atticf27 ~]$ ls printexample.txt -ls: cannot access 'printexample.txt': No such file or directory - -``` - -I now have a held job in the HL-2280DW print queue. What to do? The `lp` command has options to hold and release jobs, using various values with the `-H` option. Listing 9 shows how to release the held job. Check the `lp` man page for information on other options. - -###### Listing 9. Resuming printing of a held print job -``` -[ian@atticf27 ~]$ lp -i 13 -H resume - -``` - -Not all of the vast array of available printers support the same set of options. Use the `lpoptions` command to see the general options that are set for a printer. Add the `-l` option to display printer-specific options. Listing 10 shows two examples. Many common options relate to portrait/landscape printing, page dimensions, and placement of the output on the pages. See the man pages for details. - -###### Listing 10. 
Checking printer options -``` -[ian@atticf27 ~]$ lpoptions -p HL-2280DW -copies=1 device-uri=dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/ -finishings=3 job-cancel-after=10800 job-hold-until=no-hold job-priority=50 -job-sheets=none,none marker-change-time=1517325288 marker-colors=#000000,#000000 -marker-levels=-1,92 marker-names='Black\ Toner\ Cartridge,Drum\ Unit' -marker-types=toner,opc number-up=1 printer-commands=none -printer-info='Brother HL-2280DW' printer-is-accepting-jobs=true -printer-is-shared=true printer-is-temporary=false printer-location -printer-make-and-model='Brother HL-2250DN - CUPS+Gutenprint v5.2.13 Simplified' -printer-state=3 printer-state-change-time=1517325288 printer-state-reasons=none -printer-type=135188 printer-uri-supported=ipp://localhost/printers/HL-2280DW -sides=one-sided - -[ian@atticf27 ~]$ lpoptions -l -p xp-610 -PageSize/Media Size: *Letter Legal Executive Statement A4 -ColorModel/Color Model: *Gray Black -InputSlot/Media Source: *Standard ManualAdj Manual MultiPurposeAdj MultiPurpose -UpperAdj Upper LowerAdj Lower LargeCapacityAdj LargeCapacity -StpQuality/Print Quality: None Draft *Standard High -Resolution/Resolution: *301x300dpi 150dpi 300dpi 600dpi -Duplex/2-Sided Printing: *None DuplexNoTumble DuplexTumble -StpiShrinkOutput/Shrink Page If Necessary to Fit Borders: *Shrink Crop Expand -StpColorCorrection/Color Correction: *None Accurate Bright Hue Uncorrected -Desaturated Threshold Density Raw Predithered -StpBrightness/Brightness: 0 100 200 300 400 500 600 700 800 900 *None 1100 -1200 1300 1400 1500 1600 1700 1800 1900 2000 Custom.REAL -StpContrast/Contrast: 0 100 200 300 400 500 600 700 800 900 *None 1100 1200 -1300 1400 1500 1600 1700 1800 1900 2000 2100 2200 2300 2400 2500 2600 2700 -2800 2900 3000 3100 3200 3300 3400 3500 3600 3700 3800 3900 4000 Custom.REAL -StpImageType/Image Type: None Text Graphics *TextGraphics Photo LineArt - -``` - -Most GUI applications have a print dialog, often using the **File >Print** menu choice. Figure 1 shows an example in GIMP, an image manipulation program. - -###### Figure 1. Printing from the GIMP - -![Printing from the GIMP][3] - -So far, all our commands have been implicitly directed to the local CUPS print server. You can also direct most commands to the server on another system, by specifying the `-h` option along with a port number if it is not the CUPS default of 631. - -### CUPS and the CUPS server - -At the heart of the CUPS printing system is the `cupsd` print server which runs as a daemon process. The CUPS configuration file is normally located in /etc/cups/cupsd.conf. The /etc/cups directory also contains other configuration files related to CUPS. CUPS is usually started during system initialization, but may be controlled by the CUPS script located in /etc/rc.d/init.d or /etc/init.d, according to your distribution. For newer systems using systemd initialization, the CUPS service script is likely in /usr/lib/systemd/system/cups.service. As with most such scripts, you can stop, start, or restart the daemon. See our tutorial [Learn Linux, 101: Runlevels, boot targets, shutdown, and reboot][4] for more information on using initialization scripts. - -The configuration file, /etc/cups/cupsd.conf, contains parameters that control things such as access to the printing system, whether remote printing is allowed, the location of spool files, and so on. On some systems, a second part describes individual print queues and is usually generated automatically by configuration tools. 
Listing 11 shows some entries for a default cupsd.conf file. Note that comments start with a # character. Defaults are usually shown as comments and entries that are changed from the default have the leading # character removed.
-
-###### Listing 11. Parts of a default /etc/cups/cupsd.conf file
-```
-# Only listen for connections from the local machine.
-Listen localhost:631
-Listen /var/run/cups/cups.sock
-
-# Show shared printers on the local network.
-Browsing On
-BrowseLocalProtocols dnssd
-
-# Default authentication type, when authentication is required...
-DefaultAuthType Basic
-
-# Web interface setting...
-WebInterface Yes
-
-# Set the default printer/job policies...
-<Policy default>
-  # Job/subscription privacy...
-  JobPrivateAccess default
-  JobPrivateValues default
-  SubscriptionPrivateAccess default
-  SubscriptionPrivateValues default
-
-  # Job-related operations must be done by the owner or an administrator...
-  <Limit Create-Job Print-Job Print-URI Validate-Job>
-    Order deny,allow
-  </Limit>
-</Policy>

-```
-
-File, directory, and user configuration directives that used to be allowed in cupsd.conf are now stored in cups-files.conf instead. This is to prevent certain types of privilege escalation attacks. Listing 12 shows some entries from cups-files.conf. Note that spool files are stored by default in the /var/spool file system as you would expect from the Filesystem Hierarchy Standard (FHS). See the man pages for cupsd.conf and cups-files.conf for more details on these configuration files.
-
-###### Listing 12. Parts of a default /etc/cups/cups-files.conf
-```
-# Location of the file listing all of the local printers...
-#Printcap /etc/printcap
-
-# Format of the Printcap file...
-#PrintcapFormat bsd
-#PrintcapFormat plist
-#PrintcapFormat solaris
-
-# Location of all spool files...
-#RequestRoot /var/spool/cups
-
-# Location of helper programs...
-#ServerBin /usr/lib/cups
-
-# SSL/TLS keychain for the scheduler...
-#ServerKeychain ssl
-
-# Location of other configuration files...
-#ServerRoot /etc/cups

-```
-
-Listing 12 refers to the /etc/printcap file. This was the name of the configuration file for LPD print servers, and some applications still use it to determine available printers and their properties. It is usually generated automatically in a CUPS system, so you will probably not modify it yourself. However, you may need to check it if you are diagnosing user printing problems. Listing 13 shows an example.
-
-###### Listing 13. Automatically generated /etc/printcap
-```
-# This file was automatically generated by cupsd(8) from the
-# /etc/cups/printers.conf file.  All changes to this file
-# will be lost.
-HL-2280DW|Brother HL-2280DW:rm=atticf27:rp=HL-2280DW:
-anyprint|Any available printer:rm=atticf27:rp=anyprint:
-XP-610|EPSON XP-610 Series:rm=atticf27:rp=XP-610:

-```
-
-Each line here has a printer name and printer description as well as the name of the remote machine (rm) and remote printer (rp) on that machine. The older /etc/printcap file also described the printer capabilities.
-
-#### File conversion filters
-
-You can print many types of files using CUPS, including plain text, PDF, PostScript, and a variety of image formats without needing to tell the `lpr` or `lp` command anything more than the file name. This magic feat is accomplished through the use of filters. Indeed, a popular filter for many years was named magicfilter.
-
-CUPS uses Multipurpose Internet Mail Extensions (MIME) types to determine the appropriate conversion filter when printing a file. Other printing packages might use the magic number mechanism as used by the `file` command. See the man pages for `file` or `magic` for more details.
-
-Input files are converted to an intermediate raster or PostScript format using filters. Job information such as number of copies is added. The data is finally sent through a backend to the destination printer. There are some filters (such as `a2ps` or `dvips`) that you can use to manually filter input. You might do this to obtain special formatting results, or to handle a file format that CUPS does not support natively.
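-
-If you want to watch the filter chain work without sending anything to a printer, the `cupsfilter` command runs a file through the same conversion filters and writes the result to standard output. A quick experiment might look like this (the file name is hypothetical; on current CUPS versions the default output format is PDF, and the `-m` option requests a different output MIME type):
-
-```
-[ian@atticf27 ~]$ echo "Hello, filters" > testfile.txt
-[ian@atticf27 ~]$ cupsfilter testfile.txt > testfile.pdf
-```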
-
-#### Adding printers
-
-CUPS supports a variety of printers, including:
-
-  * Locally attached parallel and USB printers
-  * Internet Printing Protocol (IPP) printers
-  * Remote LPD printers
-  * Microsoft® Windows® printers using SAMBA
-  * Novell printers using NCP
-  * HP Jetdirect attached printers
-
-
-
-Most systems today attempt to autodetect and autoconfigure local hardware when the system starts or when the device is attached. Similarly, many network printers can be autodetected. Use the CUPS web administration tool (http://localhost:631 or https://localhost:631) to search for or add printers. Many distributions include their own configuration tools, for example YaST on SUSE systems. Figure 2 shows the CUPS interface using localhost:631 and Figure 3 shows the GNOME printer settings dialog on Fedora 27.
-
-###### Figure 2. Using the CUPS web interface
-
-
-![Using the CUPS web interface][5]
-
-###### Figure 3. Using printer settings on Fedora 27
-
-
-![Using printer settings on Fedora 27][6]
-
-You can also configure printers from a command line. Before you configure a printer, you need some basic information about the printer and about how it is connected. If a remote system needs a user ID or password, you will also need that information.
-
-You need to know what driver to use for your printer. Not all printers are fully supported on Linux and some may not work at all, or only with limitations. Check at OpenPrinting.org (see Related topics) to see if there is a driver for your particular printer. The `lpinfo` command can also help you identify the available device types and drivers. Use the `-v` option to list supported devices and the `-m` option to list drivers, as shown in Listing 14.
-
-###### Listing 14. Available printer drivers
-```
-[ian@atticf27 ~]$ lpinfo -m | grep -i xp-610
-lsb/usr/Epson/epson-inkjet-printer-escpr/Epson-XP-610_Series-epson-escpr-en.ppd.gz
-EPSON XP-610 Series, Epson Inkjet Printer Driver (ESC/P-R) for Linux
-[ian@atticf27 ~]$ locate "Epson-XP-610_Series-epson-escpr-en.ppd.gz"
-/usr/share/ppd/Epson/epson-inkjet-printer-escpr/Epson-XP-610_Series-epson-escpr-en.ppd.gz
-[ian@atticf27 ~]$ lpinfo -v
-network socket
-network ipps
-network lpd
-network beh
-network ipp
-network http
-network https
-direct hp
-serial serial:/dev/ttyS0?baud=115200
-direct parallel:/dev/lp0
-network smb
-direct hpfax
-network dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/
-network dnssd://EPSON%20XP-610%20Series._ipp._tcp.local/?uuid=cfe92100-67c4-11d4-a45f-ac18266c48aa
-network lpd://BRN001BA98A1891/BINARY_P1
-network lpd://192.168.1.38:515/PASSTHRU

-```
-
-The Epson-XP-610_Series-epson-escpr-en.ppd.gz driver is located in the /usr/share/ppd/Epson/epson-inkjet-printer-escpr/ directory on my system.
-
-If you don't find a driver, check the printer manufacturer's website in case a proprietary driver is available.
For example, at the time of writing Brother has a driver for my HL-2280DW printer, but this driver is not listed at OpenPrinting.org.
-
-Once you have the basic information, you can configure a printer using the `lpadmin` command as shown in Listing 15. For this purpose, I will create another instance of my HL-2280DW printer for duplex printing.
-
-###### Listing 15. Configuring a printer
-```
-[ian@atticf27 ~]$ lpinfo -m | grep -i "hl.*2280"
-HL2280DW.ppd Brother HL2280DW for CUPS
-lsb/usr/HL2280DW.ppd Brother HL2280DW for CUPS
-[ian@atticf27 ~]$ lpadmin -p HL-2280DW-duplex -E -m HL2280DW.ppd \
-> -v dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/ \
-> -D "Brother 1" -o sides=two-sided-long-edge
-[ian@atticf27 ~]$ lpstat -a
-anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST
-HL-2280DW accepting requests since Tue 30 Jan 2018 10:56:10 AM EST
-HL-2280DW-duplex accepting requests since Wed 31 Jan 2018 11:41:16 AM EST
-XP-610 accepting requests since Mon 29 Jan 2018 10:34:49 PM EST
-
-```
-
-Rather than creating a copy of the printer for duplex printing, you can just create a new class for duplex printing using `lpadmin` with the `-c` option.
-
-If you need to remove a printer, use `lpadmin` with the `-x` option.
-
-Listing 16 shows how to remove the printer and create a class instead.
-
-###### Listing 16. Removing a printer and creating a class
-```
-[ian@atticf27 ~]$ lpadmin -x HL-2280DW-duplex
-[ian@atticf27 ~]$ lpadmin -p HL-2280DW -c duplex -E -D "Duplex printing" -o sides=two-sided-long-edge
-[ian@atticf27 ~]$ cupsenable duplex
-[ian@atticf27 ~]$ cupsaccept duplex
-[ian@atticf27 ~]$ lpstat -a
-anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST
-duplex accepting requests since Wed 31 Jan 2018 12:12:05 PM EST
-HL-2280DW accepting requests since Wed 31 Jan 2018 11:51:16 AM EST
-XP-610 accepting requests since Mon 29 Jan 2018 10:34:49 PM EST
-
-```
-
-You can also set various printer options using the `lpadmin` or `lpoptions` commands. See the man pages for more details.
-
-### Troubleshooting
-
-If you are having trouble printing, try these tips:
-
- * Ensure that the CUPS server is running. You can use the `lpstat` command, which will report an error if it is unable to connect to the cupsd daemon. Alternatively, you might use the `ps -ef` command and check for cupsd in the output.
- * If you try to queue a job for printing and get an error message indicating that the printer is not accepting jobs, use `lpstat -a` or `lpc status` to check that the printer is accepting jobs.
- * If a queued job does not print, use `lpstat -p` or `lpc status` to check that the printer is enabled. You may need to move the job to another printer as discussed earlier.
- * If the printer is remote, check that it still exists on the remote system and that it is operational.
- * Check the configuration file to ensure that a particular user or remote system is allowed to print on the printer.
- * Ensure that your firewall allows remote printing requests, either from another system to your system, or from your system to another, as appropriate.
- * Verify that you have the right driver.
-
-
-
-As you can see, printing involves the correct functioning of several components of your system and possibly network. In a tutorial of this length, we can only give you starting points for diagnosis. Most CUPS systems also have a graphical interface to the command-line functions that we discuss here.
Generally, this interface is accessible from the local host using a browser pointed to port 631 (http://localhost:631 or https://localhost:631), as shown earlier in Figure 2.
-
-You can debug CUPS by running it in the foreground rather than as a daemon process. You can also test alternate configuration files if necessary. Run `cupsd -h` for more information, or see the man pages.
-
-CUPS also maintains an access log and an error log. You can change the level of logging using the LogLevel statement in cupsd.conf. By default, logs are stored in the /var/log/cups directory. They may be viewed from the **Administration** tab on the browser interface. Use the `cupsctl` command without any options to display logging options. Either edit cupsd.conf, or use `cupsctl` to adjust various logging parameters. See the `cupsctl` man page for more details.
-
-The Ubuntu Wiki also has a good page on [Debugging Printing Problems][7].
-
-This concludes your introduction to printing and CUPS.
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ibm.com/developerworks/library/l-lpic1-108-4/index.html
-
-作者:[Ian Shields][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ibm.com
-[1]:http://www.lpi.org
-[2]:https://www.ibm.com/developerworks/library/l-lpic1-map/
-[3]:https://www.ibm.com/developerworks/library/l-lpic1-108-4/gimp-print.jpg
-[4]:https://www.ibm.com/developerworks/library/l-lpic1-101-3/
-[5]:https://www.ibm.com/developerworks/library/l-lpic1-108-4/fig-cups-web.jpg
-[6]:https://www.ibm.com/developerworks/library/l-lpic1-108-4/fig-settings.jpg
-[7]:https://wiki.ubuntu.com/DebuggingPrintingProblems
diff --git a/sources/tech/20180213 How to clone, modify, add, and delete files in Git.md b/sources/tech/20180213 How to clone, modify, add, and delete files in Git.md
new file mode 100644
index 0000000000..fa6648cee0
--- /dev/null
+++ b/sources/tech/20180213 How to clone, modify, add, and delete files in Git.md
@@ -0,0 +1,203 @@
+How to clone, modify, add, and delete files in Git
+======
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_cat.png?itok=ta54QTAf)
+
+In the [first article in this series][1] on getting started with Git, we created a simple Git repo and added a file to it by connecting it with our computer. In this article, we will learn a handful of other things about Git, namely how to clone (download), modify, add, and delete files in a Git repo.
+
+### Let's make some clones
+
+Say you already have a Git repo on GitHub and you want to get your files from it—maybe you lost the local copy on your computer or you're working on a different computer and want access to the files in your repository. What should you do? Download your files from GitHub? Exactly! We call this "cloning" in Git terminology. (You could also download the repo as a ZIP file, but we'll explore the clone method in this article.)
+
+Let's clone the repo, called Demo, we created in the last article. (If you have not yet created a Demo repo, jump back to that article and do those steps before you proceed here.) To clone your file, just open your browser and navigate to `https://github.com/<your_username>/Demo` (where `<your_username>` is your GitHub username. For example, my repo is `https://github.com/kedark3/Demo`).
Once you navigate to that URL, click the "Clone or download" button, and your browser should look something like this:
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide11.png?itok=wJYqZyBX)
+
+As you can see above, the "Clone with HTTPS" option is open. Copy your repo's URL from that dropdown box (`https://github.com/<your_username>/Demo.git`). Open the terminal and type the following command to clone your GitHub repo to your computer:
+```
+git clone https://github.com/<your_username>/Demo.git
+
+```
+
+Then, to see the list of files in the `Demo` directory, enter the command:
+```
+ls Demo/
+
+```
+
+Your terminal should look like this:
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide12.png?itok=E7ZG9t-8)
+
+### Modify files
+
+Now that we have cloned the repo, let's modify the files and update them on GitHub. To begin, enter the commands below, one by one, to change the directory to `Demo/`, check the contents of `README.md`, echo new (additional) content to `README.md`, and check the status with `git status`:
+```
+cd Demo/
+
+ls
+
+cat README.md
+
+echo "Added another line to README.md" >> README.md
+
+cat README.md
+
+git status
+
+```
+
+This is how it will look in the terminal if you run these commands one by one:
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide12.5.png?itok=jhb-EPH1)
+
+Let's look at the output of `git status` and walk through what it means. Don't worry about the part that says:
+```
+On branch master
+
+Your branch is up-to-date with 'origin/master'.
+
+```
+
+because we haven't covered branches yet. The next line says: `Changes not staged for commit`; this is telling you that the files listed below it aren't marked ready ("staged") to be committed. If you run `git add`, Git takes those files and marks them as `Ready for commit`; in other (Git) words, `Changes staged for commit`. Before we do that, let's check what we are adding to Git with the `git diff` command, then run `git add`.
+
+Here is your terminal output:
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide13.png?itok=983p_vNw)
+
+Let's break this down:
+
+ * `diff --git a/README.md b/README.md` is what Git is comparing (i.e., `README.md` in this example).
+ * `--- a/README.md` would show anything removed from the file.
+ * `+++ b/README.md` would show anything added to your file.
+ * Anything added to the file is printed in green text with a + at the beginning of the line.
+ * If we had removed anything, it would be printed in red text with a - sign at the beginning.
+ * Git status now says `Changes to be committed:` and lists the filename (i.e., `README.md`) and what happened to that file (i.e., it has been `modified` and is ready to be committed).
+
+
+
+Tip: If you have already run `git add`, and now you want to see what's different, the usual `git diff` won't yield anything because you already added the file. Instead, you must use `git diff --cached`. It will show you the difference between the current version and previous version of files that Git was told to add. Your terminal output would look like this:
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide14.png?itok=bva9fHJj)
+
+### Upload a file to your repo
+
+We have modified the `README.md` file with some new content and it's time to upload it to GitHub.
+ +Let's commit the changes and push those to GitHub. Run: +``` +git commit -m "Updated Readme file" + +``` + +This tells Git that you are "committing" to changes that you have "added" to it. You may recall from the first part of this series that it's important to add a message to explain what you did in your commit so you know its purpose when you look back at your Git log later. (We will look more at this topic in the next article.) `Updated Readme file` is the message for this commit—if you don't think this is the most logical way to explain what you did, feel free to write your commit message differently. + +Run `git push -u origin master`. This will prompt you for your username and password, then upload the file to your GitHub repo. Refresh your GitHub page, and you should see the changes you just made to `README.md`. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide15.png?itok=Qa3spy13) + +The bottom-right corner of the terminal shows that I committed the changes, checked the Git status, and pushed the changes to GitHub. Git status says: +``` +Your branch is ahead of 'origin/master' by 1 commit + +  (use "git push" to publish your local commits) + +``` + +The first line indicates there is one commit in the local repo but not present in origin/master (i.e., on GitHub). The next line directs us to push those changes to origin/master, and that is what we did. (To refresh your memory on what "origin" means in this case, refer to the first article in this series. I will explain what "master" means in the next article, when we discuss branching.) + +### Add a new file to Git + +Now that we have modified a file and updated it on GitHub, let's create a new file, add it to Git, and upload it to GitHub. Run: +``` +echo "This is a new file" >> file.txt + +``` + +This will create a new file named `file.txt`. + +If you `cat` it out: +``` +cat file.txt + +``` + +You should see the contents of the file. Now run: +``` +git status + +``` + +Git reports that you have an untracked file (named `file.txt`) in your repository. This is Git's way of telling you that there is a new file in the repo directory on your computer that you haven't told Git about, and Git is not tracking that file for any changes you make. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide16.png?itok=UZpSKL13) + +We need to tell Git to track this file so we can commit it and upload it to our repo. Here's the command to do that: +``` +git add file.txt + +git status + +``` + +Your terminal output is: + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide17.png?itok=quV-75Na) + +Git status is telling you there are changes to `file.txt` to be committed, and that it is a `new file` to Git, which it was not aware of before this. Now that we have added `file.txt` to Git, we can commit the changes and push it to origin/master. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide18.png?itok=e0D7-eol) + +Git has now uploaded this new file to GitHub; if you refresh your GitHub page, you should see the new file, `file.txt`, in your Git repo on GitHub. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide19.png?itok=FcuSsHQ6) + +With these steps, you can create as many files as you like, add them to Git, and commit and push them up to GitHub. 
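+
+For quick reference, the whole modify-and-upload cycle from this section condenses into a few commands. This is only a recap of the steps already shown above, using the same example file name from this article:
+```
+# stage the change, whether the file is new or modified
+git add file.txt
+
+# record the change locally with a descriptive message
+git commit -m "Describe what you changed"
+
+# upload the commit to origin/master on GitHub
+git push -u origin master
+
+```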
+
+### Delete a file from Git
+
+What if we discover we made an error and need to delete `file.txt` from our repo? One way is to remove the file from our local copy of the repo with this command:
+```
+rm file.txt
+
+```
+
+If you do `git status` now, Git says there is a file that is `not staged for commit` and it has been `deleted` from the local copy of the repo. If we now run:
+```
+git add file.txt
+
+git status
+
+```
+
+I know we are deleting the file, but we still run `git add` because we need to tell Git about the **change** we are making. `git add` can be used when we are adding a new file to Git, modifying the contents of an existing file and adding it to Git, or deleting a file from a Git repo. Effectively, `git add` takes all the changes into account and stages those changes for commit. If in doubt, carefully look at the output of each command in the terminal screenshot below.
+
+Git will tell us the deleted file is staged for commit. As soon as you commit this change and push it to GitHub, the file will be removed from the repo on GitHub as well. Do this by running:
+```
+git commit -m "Delete file.txt"
+
+git push -u origin master
+
+```
+
+Now your terminal looks like this:
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide20.png?itok=SrJMqNXC)
+
+And your GitHub looks like this:
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/git_guide21.png?itok=RhXM4Gua)
+
+Now you know how to clone, add, modify, and delete Git files from your repo. The next article in this series will examine Git branching.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/2/how-clone-modify-add-delete-git-files
+
+作者:[Kedar Vijay Kulkarni][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://opensource.com/users/kkulkarn
+[1]:https://opensource.com/article/18/1/step-step-guide-git
diff --git a/sources/tech/20180215 Build a bikesharing app with Redis and Python.md b/sources/tech/20180215 Build a bikesharing app with Redis and Python.md
new file mode 100644
index 0000000000..06e4c6949a
--- /dev/null
+++ b/sources/tech/20180215 Build a bikesharing app with Redis and Python.md
@@ -0,0 +1,256 @@
+Build a bikesharing app with Redis and Python
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/google-bikes-yearbook.png?itok=BnmInwea)
+
+I travel a lot on business. I'm not much of a car guy, so when I have some free time, I prefer to walk or bike around a city. Many of the cities I've visited on business have bikeshare systems, which let you rent a bike for a few hours. Most of these systems have an app to help users locate and rent their bikes, but it would be more helpful for users like me to have a single place to get information on all the bikes in a city that are available to rent.
+
+To solve this problem and demonstrate the power of open source to add location-aware features to a web application, I combined publicly available bikeshare data, the [Python][1] programming language, and the open source [Redis][2] in-memory data structure server to index and query geospatial data.
+
+The resulting bikeshare application incorporates data from many different sharing systems, including the [Citi Bike][3] bikeshare in New York City.
It takes advantage of the General Bikeshare Feed provided by the Citi Bike system and uses its data to demonstrate some of the features that can be built using Redis to index geospatial data. The Citi Bike data is provided under the [Citi Bike data license agreement][4]. + +### General Bikeshare Feed Specification + +The General Bikeshare Feed Specification (GBFS) is an [open data specification][5] developed by the [North American Bikeshare Association][6] to make it easier for map and transportation applications to add bikeshare systems into their platforms. The specification is currently in use by over 60 different sharing systems in the world. + +The feed consists of several simple [JSON][7] data files containing information about the state of the system. The feed starts with a top-level JSON file referencing the URLs of the sub-feed data: +``` +{ + +    "data": { + +        "en": { + +            "feeds": [ + +                { + +                    "name": "system_information", + +                    "url": "https://gbfs.citibikenyc.com/gbfs/en/system_information.json" + +                }, + +                { + +                    "name": "station_information", + +                    "url": "https://gbfs.citibikenyc.com/gbfs/en/station_information.json" + +                }, + +                . . . + +            ] + +        } + +    }, + +    "last_updated": 1506370010, + +    "ttl": 10 + +} + +``` + +The first step is loading information about the bikesharing stations into Redis using data from the `system_information` and `station_information` feeds. + +The `system_information` feed provides the system ID, which is a short code that can be used to create namespaces for Redis keys. The GBFS spec doesn't specify the format of the system ID, but does guarantee it is globally unique. Many of the bikeshare feeds use short names like coast_bike_share, boise_greenbike, or topeka_metro_bikes for system IDs. Others use familiar geographic abbreviations such as NYC or BA, and one uses a universally unique identifier (UUID). The bikesharing application uses the identifier as a prefix to construct unique keys for the given system. + +The `station_information` feed provides static information about the sharing stations that comprise the system. Stations are represented by JSON objects with several fields. There are several mandatory fields in the station object that provide the ID, name, and location of the physical bike stations. There are also several optional fields that provide helpful information such as the nearest cross street or accepted payment methods. This is the primary source of information for this part of the bikesharing application. + +### Building the database + +I've written a sample application, [load_station_data.py][8], that mimics what would happen in a backend process for loading data from external sources. + +### Finding the bikeshare stations + +Loading the bikeshare data starts with the [systems.csv][9] file from the [GBFS repository on GitHub][5]. + +The repository's [systems.csv][9] file provides the discovery URL for registered bikeshare systems with an available GBFS feed. The discovery URL is the starting point for processing bikeshare information. + +The `load_station_data` application takes each discovery URL found in the systems file and uses it to find the URL for two sub-feeds: system information and station information. The system information feed provides a key piece of information: the unique ID of the system. 
(Note: the system ID is also provided in the systems.csv file, but some of the identifiers in that file do not match the identifiers in the feeds, so I always fetch the identifier from the feed.) Details on the system, like bikeshare URLs, phone numbers, and emails, could be added in future versions of the application, so the data is stored in a Redis hash using the key `${system_id}:system_info`.
+
+### Loading the station data
+
+The station information provides data about every station in the system, including the station's location. The `load_station_data` application iterates over every station in the station feed and stores the data about each into a Redis hash using a key of the form `${system_id}:station:${station_id}`. The location of each station is added to a geospatial index for the bikeshare using the `GEOADD` command.
+
+### Updating data
+
+On subsequent runs, I don't want the code to remove all the feed data from Redis and reload it into an empty Redis database, so I carefully considered how to handle in-place updates of the data.
+
+The code starts by loading the dataset with information on all the bikesharing stations for the system being processed into memory. When information is loaded for a station, the station (by key) is removed from the in-memory set of stations. Once all station data is loaded, we're left with a set containing all the station data that must be removed for that system.
+
+The application iterates over this set of stations and creates a transaction to delete the station information, remove the station key from the geospatial indexes, and remove the station from the list of stations for the system.
+
+### Notes on the code
+
+There are a few interesting things to note in [the sample code][8]. First, items are added to the geospatial indexes using the `GEOADD` command but removed with the `ZREM` command. As the underlying implementation of the geospatial type uses sorted sets, items are removed using `ZREM`. A word of caution: For simplicity, the sample code demonstrates working with a single Redis node; the transaction blocks would need to be restructured to run in a cluster environment.
+
+If you are using Redis 4.0 (or later), you have some alternatives to the `DEL` and `HMSET` commands in the code. Redis 4.0 provides the [`UNLINK`][10] command as an asynchronous alternative to the `DEL` command. `UNLINK` will remove the key from the keyspace, but it reclaims the memory in a separate thread. The [`HMSET`][11] command is [deprecated in Redis 4.0 and the `HSET` command is now variadic][12] (that is, it accepts an indefinite number of arguments).
+
+### Notifying clients
+
+At the end of the process, a notification is sent to the clients relying on our data. Using the Redis pub/sub mechanism, the notification goes out over the `geobike:station_changed` channel with the ID of the system.
+
+### Data model
+
+When structuring data in Redis, the most important thing to think about is how you will query the information. The two main queries the bikeshare application needs to support are:
+
+ * Find stations near us
+ * Display information about stations
+
+
+
+Redis provides two main data types that will be useful for storing our data: hashes and sorted sets. The [hash type][13] maps well to the JSON objects that represent stations; since Redis hashes don't enforce a schema, they can be used to store the variable station information.
+ +Of course, finding stations geographically requires a geospatial index to search for stations relative to some coordinates. Redis provides [several commands][14] to build up a geospatial index using the [sorted set][15] data structure. + +We construct keys using the format `${system_id}:station:${station_id}` for the hashes containing information about the stations and keys using the format `${system_id}:stations:location` for the geospatial index used to find stations. + +### Getting the user's location + +The next step in building out the application is to determine the user's current location. Most applications accomplish this through built-in services provided by the operating system. The OS can provide applications with a location based on GPS hardware built into the device or approximated from the device's available WiFi networks. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/rediscli_map.png?itok=icqk5543) + +### Finding stations + +After the user's location is found, the next step is locating nearby bikesharing stations. Redis' geospatial functions can return information on stations within a given distance of the user's current coordinates. Here's an example of this using the Redis command-line interface. + +Imagine I'm at the Apple Store on Fifth Avenue in New York City, and I want to head downtown to Mood on West 37th to catch up with my buddy [Swatch][16]. I could take a taxi or the subway, but I'd rather bike. Are there any nearby sharing stations where I could get a bike for my trip? + +The Apple store is located at 40.76384, -73.97297. According to the map, two bikeshare stations—Grand Army Plaza & Central Park South and East 58th St. & Madison—fall within a 500-foot radius (in blue on the map above) of the store. + +I can use Redis' `GEORADIUS` command to query the NYC system index for stations within a 500-foot radius: +``` +127.0.0.1:6379> GEORADIUS NYC:stations:location -73.97297 40.76384 500 ft + +1) "NYC:station:3457" + +2) "NYC:station:281" + +``` + +Redis returns the two bikeshare locations found within that radius, using the elements in our geospatial index as the keys for the metadata about a particular station. The next step is looking up the names for the two stations: +``` +127.0.0.1:6379> hget NYC:station:281 name + +"Grand Army Plaza & Central Park S" + +  + +127.0.0.1:6379> hget NYC:station:3457 name + +"E 58 St & Madison Ave" + +``` + +Those keys correspond to the stations identified on the map above. If I want, I can add more flags to the `GEORADIUS` command to get a list of elements, their coordinates, and their distance from our current point: +``` +127.0.0.1:6379> GEORADIUS NYC:stations:location -73.97297 40.76384 500 ft WITHDIST WITHCOORD ASC + +1) 1) "NYC:station:281" + +   2) "289.1995" + +   3) 1) "-73.97371262311935425" + +      2) "40.76439830559216659" + +2) 1) "NYC:station:3457" + +   2) "383.1782" + +   3) 1) "-73.97209256887435913" + +      2) "40.76302702144496237" + +``` + +Looking up the names associated with those keys generates an ordered list of stations I can choose from. Redis doesn't provide directions or routing capability, so I use the routing features of my device's OS to plot a course from my current location to the selected bike station. + +The `GEORADIUS` function can be easily implemented inside an API in your favorite development framework to add location functionality to an app. 
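+
+To make this concrete, here is a minimal sketch of that kind of lookup in Python with the redis-py client. The helper name `stations_near` and the hard-coded `NYC` default are illustrative assumptions, not part of the sample application; only the key layout (`${system_id}:stations:location` and `${system_id}:station:${station_id}`) comes from the sections above, and a recent redis-py with geo command support is assumed:
+```
+import redis
+
+# Assumes a local Redis instance already populated by load_station_data.py,
+# with geo members named after the station hash keys described above.
+r = redis.Redis(host='localhost', port=6379, decode_responses=True)
+
+def stations_near(lon, lat, radius_ft, system_id='NYC'):
+    """Return (station name, distance in feet) pairs, nearest first."""
+    index_key = '%s:stations:location' % system_id
+    # GEORADIUS ... WITHDIST ASC, via redis-py
+    hits = r.georadius(index_key, lon, lat, radius_ft, unit='ft',
+                       withdist=True, sort='ASC')
+    # Each geo member doubles as the key of the hash holding the metadata
+    return [(r.hget(member, 'name'), dist) for member, dist in hits]
+
+# The Apple Store example from above: stations within 500 feet
+for name, dist in stations_near(-73.97297, 40.76384, 500):
+    print('%s (%.1f ft)' % (name, dist))
+
+```
+
+A web API route would simply wrap a call like this and serialize the resulting list as JSON.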
+
+### Other query commands
+
+In addition to the `GEORADIUS` command, Redis provides three other commands for querying data from the index: `GEOPOS`, `GEODIST`, and `GEORADIUSBYMEMBER`.
+
+The `GEOPOS` command can provide the coordinates for a given element from the geohash. For example, if I know there is a bikesharing station at West 38th and 8th and its ID is 523, then the element name for that station is NYC:station:523. Using Redis, I can find the station's longitude and latitude:
+```
+127.0.0.1:6379> geopos NYC:stations:location NYC:station:523
+
+1) 1) "-73.99138301610946655"
+
+   2) "40.75466497634030105"
+
+```
+
+The `GEODIST` command provides the distance between two elements of the index. If I wanted to find the distance between the station at Grand Army Plaza & Central Park South and the station at East 58th St. & Madison, I would issue the following command:
+```
+127.0.0.1:6379> GEODIST NYC:stations:location NYC:station:281 NYC:station:3457 ft
+
+"671.4900"
+
+```
+
+Finally, the `GEORADIUSBYMEMBER` command is similar to the `GEORADIUS` command, but instead of taking a set of coordinates, the command takes the name of another member of the index and returns all the members within a given radius centered on that member. To find all the stations within 1,000 feet of the Grand Army Plaza & Central Park South station, enter the following:
+```
+127.0.0.1:6379> GEORADIUSBYMEMBER NYC:stations:location NYC:station:281 1000 ft WITHDIST
+
+1) 1) "NYC:station:281"
+
+   2) "0.0000"
+
+2) 1) "NYC:station:3132"
+
+   2) "793.4223"
+
+3) 1) "NYC:station:2006"
+
+   2) "911.9752"
+
+4) 1) "NYC:station:3136"
+
+   2) "940.3399"
+
+5) 1) "NYC:station:3457"
+
+   2) "671.4900"
+
+```
+
+While this example focused on using Python and Redis to parse data and build an index of bikesharing system locations, it can easily be generalized to locate restaurants, public transit, or any other type of place developers want to help users find.
+
+This article is based on [my presentation][17] at Open Source 101 in Raleigh this year.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/2/building-bikesharing-application-open-source-tools
+
+作者:[Tague Griffith][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://opensource.com/users/tague
+[1]:https://www.python.org/
+[2]:https://redis.io/
+[3]:https://www.citibikenyc.com/
+[4]:https://www.citibikenyc.com/data-sharing-policy
+[5]:https://github.com/NABSA/gbfs
+[6]:http://nabsa.net/
+[7]:https://www.json.org/
+[8]:https://gist.github.com/tague/5a82d96bcb09ce2a79943ad4c87f6e15
+[9]:https://github.com/NABSA/gbfs/blob/master/systems.csv
+[10]:https://redis.io/commands/unlink
+[11]:https://redis.io/commands/hmset
+[12]:https://raw.githubusercontent.com/antirez/redis/4.0/00-RELEASENOTES
+[13]:https://redis.io/topics/data-types#Hashes
+[14]:https://redis.io/commands#geo
+[15]:https://redis.io/topics/data-types-intro#redis-sorted-sets
+[16]:https://twitter.com/swatchthedog
+[17]:http://opensource101.com/raleigh/talks/building-location-aware-apps-open-source-tools/
diff --git a/sources/tech/20180215 Check Linux Distribution Name and Version.md b/sources/tech/20180215 Check Linux Distribution Name and Version.md
new file mode 100644
index 0000000000..5336c31c4e
--- /dev/null
+++ b/sources/tech/20180215 Check Linux Distribution Name and Version.md
@@ -0,0 +1,264 @@
+Check Linux Distribution Name and Version
+======
+Say you have joined a new company and need to install some software requested by the DevApp team, and you also need to restart a few services after the installation. What should you do?
+
+In this situation, you should at least know which distribution and version the system is running. Knowing that will help you perform the activity without any issues.
+
+Gathering basic information about a system before doing any work on it should be an administrator's first task.
+
+There are many ways to find the Linux distribution name and version. You might ask, why do I need to know such basic things?
+
+There are four major distribution families: RHEL, Debian, openSUSE, and Arch Linux. Each distribution comes with its own package manager, which is what you use to install packages on the system.
+
+If you don't know the distribution name, you won't know which package manager to use.
+
+You also won't be able to run the proper commands to bounce services, because most distributions have moved to systemd instead of SysVinit scripts.
+
+Having these basic commands at hand will help you in many ways.
+
+Use the following methods to check your Linux distribution name and version.
+
+### List of methods
+
+ * lsb_release command
+ * /etc/*-release file
+ * uname command
+ * /proc/version file
+ * dmesg Command
+ * YUM or DNF Command
+ * RPM command
+ * APT-GET command
+
+
+
+### Method-1: lsb_release Command
+
+LSB stands for Linux Standard Base. The lsb_release command prints distribution-specific information such as the distribution name, release version, and codename.
+```
+# lsb_release -a
+No LSB modules are available.
+Distributor ID: Ubuntu
+Description: Ubuntu 16.04.3 LTS
+Release: 16.04
+Codename: xenial
+
+```
+
+### Method-2: /etc/arch-release /etc/os-release File
+
+Release files are typically known as operating system identification files. The `/etc` directory contains many files that hold various information about the distribution. Each distribution has its own set of files that display this information.
+ +The below set of files are present on Ubuntu/Debian system. +``` +# cat /etc/issue +Ubuntu 16.04.3 LTS \n \l + +# cat /etc/issue.net +Ubuntu 16.04.3 LTS + +# cat /etc/lsb-release +DISTRIB_ID=Ubuntu +DISTRIB_RELEASE=16.04 +DISTRIB_CODENAME=xenial +DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS" + +# cat /etc/os-release +NAME="Ubuntu" +VERSION="16.04.3 LTS (Xenial Xerus)" +ID=ubuntu +ID_LIKE=debian +PRETTY_NAME="Ubuntu 16.04.3 LTS" +VERSION_ID="16.04" +HOME_URL="http://www.ubuntu.com/" +SUPPORT_URL="http://help.ubuntu.com/" +BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" +VERSION_CODENAME=xenial +UBUNTU_CODENAME=xenial + +# cat /etc/debian_version +9.3 + +``` + +The below set of files are present on RHEL/CentOS/Fedora system. The `/etc/redhat-release` & `/etc/system-release` files symlinks with `/etc/[distro]-release` file. +``` +# cat /etc/centos-release +CentOS release 6.9 (Final) + +# cat /etc/fedora-release +Fedora release 27 (Twenty Seven) + +# cat /etc/os-release +NAME=Fedora +VERSION="27 (Twenty Seven)" +ID=fedora +VERSION_ID=27 +PRETTY_NAME="Fedora 27 (Twenty Seven)" +ANSI_COLOR="0;34" +CPE_NAME="cpe:/o:fedoraproject:fedora:27" +HOME_URL="https://fedoraproject.org/" +SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help" +BUG_REPORT_URL="https://bugzilla.redhat.com/" +REDHAT_BUGZILLA_PRODUCT="Fedora" +REDHAT_BUGZILLA_PRODUCT_VERSION=27 +REDHAT_SUPPORT_PRODUCT="Fedora" +REDHAT_SUPPORT_PRODUCT_VERSION=27 +PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy" + +# cat /etc/redhat-release +Fedora release 27 (Twenty Seven) + +# cat /etc/system-release +Fedora release 27 (Twenty Seven) + +``` + +### Method-3: uname Command + +uname (stands for unix name) is an utility that prints the system information like kernel name, version and other details about the system and the operating system running on it. + +**Suggested Read :** [6 Methods To Check The Running Linux Kernel Version On System][1] +``` +# uname -a +Linux localhost.localdomain 4.12.14-300.fc26.x86_64 #1 SMP Wed Sep 20 16:28:07 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux + +``` + +The above colored words describe the version of operating system as Fedora Core 26. + +### Method-4: /proc/version File + +This file specifies the version of the Linux kernel, the version of gcc used to compile the kernel, and the time of kernel compilation. It also contains the kernel compiler’s user name (in parentheses). +``` +# cat /proc/version +Linux version 4.12.14-300.fc26.x86_64 ([email protected]) (gcc version 7.2.1 20170915 (Red Hat 7.2.1-2) (GCC) ) #1 SMP Wed Sep 20 16:28:07 UTC 2017 + +``` + +### Method-5: dmesg Command + +dmesg (stands for display message or driver message) is a command on most Unix-like operating systems that prints the message buffer of the kernel. +``` +# dmesg | grep "Linux" +[ 0.000000] Linux version 4.12.14-300.fc26.x86_64 ([email protected]) (gcc version 7.2.1 20170915 (Red Hat 7.2.1-2) (GCC) ) #1 SMP Wed Sep 20 16:28:07 UTC 2017 +[ 0.001000] SELinux: Initializing. +[ 0.001000] SELinux: Starting in permissive mode +[ 0.470288] SELinux: Registering netfilter hooks +[ 0.616351] Linux agpgart interface v0.103 +[ 0.630063] usb usb1: Manufacturer: Linux 4.12.14-300.fc26.x86_64 ehci_hcd +[ 0.688949] usb usb2: Manufacturer: Linux 4.12.14-300.fc26.x86_64 ohci_hcd +[ 2.564554] SELinux: Disabled at runtime. 
+[ 2.564584] SELinux: Unregistering netfilter hooks
+
+```
+
+### Method-6: Yum/Dnf Command
+
+Yum (Yellowdog Updater, Modified) is a package management utility for Linux. The yum command is used to install, update, search for, and remove packages on RedHat-based Linux distributions.
+
+**Suggested Read :** [YUM Command To Manage Packages on RHEL/CentOS Systems][2]
+```
+# yum info nano
+Loaded plugins: fastestmirror, ovl
+Loading mirror speeds from cached hostfile
+ * base: centos.zswap.net
+ * extras: mirror2.evolution-host.com
+ * updates: centos.zswap.net
+Available Packages
+Name : nano
+Arch : x86_64
+Version : 2.3.1
+Release : 10.el7
+Size : 440 k
+Repo : base/7/x86_64
+Summary : A small text editor
+URL : http://www.nano-editor.org
+License : GPLv3+
+Description : GNU nano is a small and friendly text editor.

+```

+The yum repolist command below shows that the Base, Extras, and Updates repositories come from the CentOS 7 repository.
+```
+# yum repolist
+Loaded plugins: fastestmirror, ovl
+Loading mirror speeds from cached hostfile
+ * base: centos.zswap.net
+ * extras: mirror2.evolution-host.com
+ * updates: centos.zswap.net
+repo id repo name status
+base/7/x86_64 CentOS-7 - Base 9591
+extras/7/x86_64 CentOS-7 - Extras 388
+updates/7/x86_64 CentOS-7 - Updates 1929
+repolist: 11908

+```

+We can also use the dnf command to check the distribution name and version.

+**Suggested Read :** [DNF (Fork of YUM) Command To Manage Packages on Fedora System][3]
+```
+# dnf info nano
+Last metadata expiration check: 0:01:25 ago on Thu Feb 15 01:59:31 2018.
+Installed Packages
+Name : nano
+Version : 2.8.7
+Release : 1.fc27
+Arch : x86_64
+Size : 2.1 M
+Source : nano-2.8.7-1.fc27.src.rpm
+Repo : @System
+From repo : fedora
+Summary : A small text editor
+URL : https://www.nano-editor.org
+License : GPLv3+
+Description : GNU nano is a small and friendly text editor.

+```

+### Method-7: RPM Command

+RPM (RedHat Package Manager) is a powerful command-line package management utility for Red Hat-based systems such as CentOS, Oracle Linux, and Fedora. It also helps us identify the version of the running system.

+**Suggested Read :** [RPM commands to manage packages on RHEL based systems][4]
+```
+# rpm -q nano
+nano-2.8.7-1.fc27.x86_64

+```

+### Method-8: APT-GET Command

+APT stands for Advanced Packaging Tool. apt-get is a powerful command-line tool that is used to automatically download and install new software packages, upgrade existing software packages, update the package list index, and upgrade entire Debian-based systems.
+ +**Suggested Read :** [Apt-Get & Apt-Cache commands to manage packages on Debian Based Systems][5] +``` +# apt-cache policy nano +nano: + Installed: 2.5.3-2ubuntu2 + Candidate: 2.5.3-2ubuntu2 + Version table: + * 2.5.3-2ubuntu2 500 + 500 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages + 100 /var/lib/dpkg/status + 2.5.3-2 500 + 500 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 Packages + +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/check-find-linux-distribution-name-and-version/ + +作者:[Magesh Maruthamuthu][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.2daygeek.com/author/magesh/ +[1]:https://www.2daygeek.com/check-find-determine-running-installed-linux-kernel-version/ +[2]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[3]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[4]:https://www.2daygeek.com/rpm-command-examples/ +[5]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ diff --git a/sources/tech/20180215 What is a Linux -oops.md b/sources/tech/20180215 What is a Linux -oops.md new file mode 100644 index 0000000000..3238ca34de --- /dev/null +++ b/sources/tech/20180215 What is a Linux -oops.md @@ -0,0 +1,70 @@ +translating----geekpi + +What is a Linux 'oops'? +====== +If you check the processes running on your Linux systems, you might be curious about one called "kerneloops." And that’s “kernel oops,” not “kerne loops” just in case you didn’t parse that correctly. + +Put very bluntly, an “oops” is a deviation from correct behavior on the part of the Linux kernel. Did you do something wrong? Probably not. But something did. And the process that did something wrong has probably at least just been summarily knocked off the CPU. At worst, the kernel may have panicked and abruptly shut the system down. + +For the record, “oops” is NOT an acronym. It doesn’t stand for something like “object-oriented programming and systems” or “out of procedural specs”; it actually means “oops” like you just dropped your glass of wine or stepped on your cat. Oops! The plural of "oops" is "oopses." + +An oops means that something running on the system has violated the kernel’s rules about proper behavior. Maybe the code tried to take a code path that was not allowed or use an invalid pointer. Whatever it was, the kernel — always on the lookout for process misbehavior — most likely will have stopped the particular process in its tracks and written some messages about what it did to the console, to /var/log/dmesg or the /var/log/kern.log file. + +An oops can be caused by the kernel itself or by some process that tries to get the kernel to violate its rules about how things are allowed to run on the system and what they're allowed to do. + +An oops will generate a crash signature that can help kernel developers figure out what went wrong and improve the quality of their code. + +The kerneloops process running on your system will probably look like this: +``` +kernoops 881 1 0 Feb11 ? 00:00:01 /usr/sbin/kerneloops + +``` + +You might notice that the process isn't run by root, but by a user named "kernoops" and that it's accumulated extremely little run time. 
In fact, the only task assigned to this particular user is running kerneloops. +``` +$ sudo grep kernoops /etc/passwd +kernoops:x:113:65534:Kernel Oops Tracking Daemon,,,:/:/bin/false + +``` + +If your Linux system isn't one that ships with kerneloops (like Debian), you might consider adding it. Check out this [Debian page][1] for more information. + +### When should you be concerned about an oops? + +An oops is not a big deal, except when it is. It depends in part on the role that the particular process was playing. It also depends on the class of oops. + +Some oopses are so severe that they result in system panics. Technically speaking, a panic is a subset of the oops (i.e., the more serious of the oopses). A panic occurs when a problem detected by the kernel is bad enough that the kernel decides that it (the kernel) must stop running immediately to prevent data loss or other damage to the system. So, the system then needs to be halted and rebooted to keep any inconsistencies from making it unusable or unreliable. So a system that panics is actually trying to protect itself from irrevocable damage. + +In short, all panics are oops, but not all oops are panics. + +The /var/log/kern.log and related rotated logs (/var/log/kern.log.1, /var/log/kern.log.2 etc.) contain the logs produced by the kernel and handled by syslog. + +The kerneloops program collects and by default submits information on the problems it runs into where it can be analyzed and presented to kernel developers. Configuration details for this process are specified in the /etc/kerneloops.conf file. You can look at the settings easily with the command shown below: +``` +$ sudo cat /etc/kerneloops.conf | grep -v ^# | grep -v ^$ +[sudo] password for shs: +allow-submit = ask +allow-pass-on = yes +submit-url = http://oops.kernel.org/submitoops.php +log-file = /var/log/kern.log +submit-pipe = /usr/share/apport/kernel_oops + +``` + +In the above (default) settings, information on kernel problems can be submitted, but the user is asked for permission. If set to allow-submit = always, the user will not be asked. + +Debugging kernel problems is one of the finer arts of working with Linux systems. Fortunately, most Linux users seldom or never experience oops or panics. Still, it's nice to know what processes like kerneloops are doing on your system and to understand what might be reported and where when your system runs into a serious kernel violation. + + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3254778/linux/what-is-a-linux-oops.html + +作者:[Sandra Henry-Stocker][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[1]:https://packages.debian.org/stretch/kerneloops diff --git a/sources/tech/20180216 Q4OS Makes Linux Easy for Everyone.md b/sources/tech/20180216 Q4OS Makes Linux Easy for Everyone.md new file mode 100644 index 0000000000..a868ed28d5 --- /dev/null +++ b/sources/tech/20180216 Q4OS Makes Linux Easy for Everyone.md @@ -0,0 +1,140 @@ +Q4OS Makes Linux Easy for Everyone +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os-main.png?itok=WDatcV-a) + +Modern Linux distributions tend to target a variety of users. Some claim to offer a flavor of the open source platform that anyone can use. 
And, I've seen some such claims succeed with aplomb, while others fall flat. [Q4OS][1] is one of those odd distributions that doesn't bother to make such a claim but pulls off the feat anyway.
+
+So, who is the primary market for Q4OS? According to its website, the distribution is a:
+
+"fast and powerful operating system based on the latest technologies while offering highly productive desktop environment. We focus on security, reliability, long-term stability and conservative integration of verified new features. System is distinguished by speed and very low hardware requirements, runs great on brand new machines as well as legacy computers. It is also very applicable for virtualization and cloud computing."
+
+What's very interesting here is that the Q4OS developers offer commercial support for the desktop. Said support can cover the likes of system customization (including core level API programming) as well as user interface modifications.
+
+Once you understand this (and have installed Q4OS), the target audience becomes quite obvious: business users looking for a Windows XP/7 replacement. But that should not prevent home users from giving Q4OS a try. It's a Linux distribution that has a few unique tools that come together to make a solid desktop distribution.
+
+Let's take a look at Q4OS and see if it's a version of Linux that might work for you.
+
+### What Q4OS is all about
+
+Q4OS does an admirable job of being the open source equivalent of Windows XP/7. Out of the box, it pulls this off with the help of the [Trinity Desktop][2] (a fork of KDE). With a few tricks up its sleeve, Q4OS turns the Trinity Desktop into a desktop remarkably similar to Windows XP/7 (Figure 1).
+
+![default desktop][4]
+
+Figure 1: The Q4OS default desktop.
+
+[Used with permission][5]
+
+When you fire up the desktop, you will be greeted by a Welcome screen that makes it very easy for new users to start setting up their desktop with just a few clicks. From this window, you can:
+
+ * Run the Desktop Profiler (which allows you to select which desktop environment to use as well as between a full-featured desktop, a basic desktop, or a minimal desktop—Figure 2).
+
+ * Install applications (which opens the Synaptic Package Manager).
+
+ * Install proprietary codecs (which installs all the necessary media codecs for playing audio and video).
+
+ * Turn on Desktop effects (if you want more eye candy, turn this on).
+
+ * Switch to Kickoff start menu (switches from the default start menu to the newer kickoff menu).
+
+ * Set Autologin (allows you to set login such that it won't require your password upon boot).
+
+
+
+
+![Desktop Profiler][7]
+
+Figure 2: The Desktop Profiler allows you to further customize your desktop experience.
+
+[Used with permission][5]
+
+If you want to install a different desktop environment, open up the Desktop Profiler and then click the Desktop environments drop-down, in the upper left corner of the window. A new window will appear, where you can select your desktop of choice from the drop-down (Figure 3). Once back at the main Profiler Window, select which type of desktop profile you want, and then click Install.
+
+![Desktop Profiler][9]
+
+Figure 3: Installing a different desktop is quite simple from within the Desktop Profiler.
+
+[Used with permission][5]
+
+Note that installing a different desktop will not wipe the default desktop. Instead, it will allow you to select between the two desktops (at the login screen).
+ +### Installed software + +After selecting full-featured desktop, from the Desktop Profiler, I found the following user applications ready to go: + + * LibreOffice 5.2.7.2 + + * VLC 2.2.7 + + * Google Chrome 64.0.3282 + + * Thunderbird 52.6.0 (Includes Lightning addon) + + * Synaptic 0.84.2 + + * Konqueror 14.0.5 + + * Firefox 52.6.0 + + * Shotwell 0.24.5 + + + + +Obviously some of those applications are well out of date. Since this distribution is based on Debian, we can run and update/upgrade with the commands: +``` +sudo apt update + +sudo apt upgrade + +``` + +However, after running both commands, it seems everything is up to date. This particular release (2.4) is an LTS release (supported until 2022). Because of this, expect software to be a bit behind. If you want to test out the bleeding edge version (based on Debian “Buster”), you can download the testing image [here][10]. + +### Security oddity + +There is one rather disturbing “feature” found in Q4OS. In the developer’s quest to make the distribution closely resemble Windows, they’ve made it such that installing software (from the command line) doesn’t require a password! You read that correctly. If you open the Synaptic package manager, you’re asked for a password. However (and this is a big however), open up a terminal window and issue a command like sudo apt-get install gimp. At this point, the software will install… without requiring the user to type a sudo password. + +Did you cringe at that? You should. + +I get it, the developers want to ease away the burden of Linux and make a platform the masses could easily adapt to. They’ve done a splendid job of doing just that. However, in the process of doing so, they’ve bypassed a crucial means of security. Is having as near an XP/7 clone as you can find on Linux worth that lack of security? I would say that if it enables more people to use Linux, then yes. But the fact that they’ve required a password for Synaptic (the GUI tool most Windows users would default to for software installation) and not for the command-line tool makes no sense. On top of that, bypassing passwords for the apt and dpkg commands could make for a significant security issue. + +Fear not, there is a fix. For those that prefer to require passwords for the command line installation of software, you can open up the file /etc/sudoers.d/30_q4os_apt and comment out the following three lines: +``` +%sudo ALL = NOPASSWD: /usr/bin/apt-get * + +%sudo ALL = NOPASSWD: /usr/bin/apt-key * + +%sudo ALL = NOPASSWD: /usr/bin/dpkg * + +``` + +Once commented out, save and close the file, and reboot the system. At this point, users will now be prompted for a password, should they run the apt-get, apt-key, or dpkg commands. + +### A worthy contender + +Setting aside the security curiosity, Q4OS is one of the best attempts at recreating Windows XP/7 I’ve come across in a while. If you have users who fear change, and you want to migrate them away from Windows, this distribution might be exactly what you need. I would, however, highly recommend you re-enable passwords for the apt-get, apt-key, and dpkg commands… just to be on the safe side. + +In any case, the addition of the Desktop Profiler, and the ability to easily install alternative desktops, makes Q4OS a distribution that just about anyone could use. + +Learn more about Linux through the free ["Introduction to Linux" ][11]course from The Linux Foundation and edX. 
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2018/2/q4os-makes-linux-easy-everyone
+
+作者:[JACK WALLEN][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.linux.com/users/jlwallen
+[1]:https://q4os.org
+[2]:https://www.trinitydesktop.org/
+[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_1.jpg?itok=dalJk9Xf (default desktop)
+[5]:/licenses/category/used-permission
+[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_2.jpg?itok=GlouIm73 (Desktop Profiler)
+[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_3.jpg?itok=riSTP_1z (Desktop Profiler)
+[10]:https://q4os.org/downloads2.html
+[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20180217 Louis-Philippe Véronneau .md b/sources/tech/20180217 Louis-Philippe Véronneau .md
new file mode 100644
index 0000000000..bab2f4c169
--- /dev/null
+++ b/sources/tech/20180217 Louis-Philippe Véronneau .md
@@ -0,0 +1,59 @@
+Louis-Philippe Véronneau -
+======
+I've been watching [Critical Role][1]1 for a while now and since I started my master's degree I haven't had much time to sit down and watch the show on YouTube as I used to do.
+
+I thus started listening to the podcasts instead; that way, I can listen to the show while I'm doing other productive tasks. Pretty quickly, I grew tired of manually downloading every episode each time I finished the last one. To make things worse, the podcast is hosted on PodBean and they won't let you download episodes on a mobile device without their app. Grrr.
+
+After the 10th time opening the terminal on my phone to download the podcast using some `wget` magic I decided enough was enough: I was going to write a dumb script to download them all in one batch.
+
+I'm a little ashamed to say it took me more time than I had intended... The PodBean website uses semi-randomized URLs, so I could not figure out a way to guess the paths to the hosted audio files. I considered using `youtube-dl` to get the DASH version of the show on YouTube, but Google has been heavily throttling DASH streams recently. Not cool, Google.
+
+I then had the idea to use iTunes' RSS feed to get the audio files. Surely they would somehow be included there? Of course Apple doesn't give you a simple RSS feed link on the iTunes podcast page, so I had to rummage around and eventually found out this is the link you have to use:
+```
+https://itunes.apple.com/lookup?id=1243705452&entity=podcast
+
+```
+
+Surprise surprise, from the json file this link points to, I found out the main Critical Role podcast page [has a proper RSS feed][2]. In my defense, the RSS button on the main podcast page brings you to some PodBean crap page.
+
+Anyway, once you have the RSS feed, it's only a matter of using `grep` and `sed` until you get what you want.
+
+Around 20 minutes later, I had downloaded all the episodes, for a total of 22 GB! Victory dance!
+
+Video clip loop of the Critical Role cast doing a victory dance.
+
+### Script
+
+Here's the bash script I wrote. You will need `recode` to run it, as the RSS feed includes some HTML entities.
+```
+# Get the whole RSS feed
+wget -qO /tmp/criticalrole.rss http://criticalrolepodcast.geekandsundry.com/feed/
+
+# Extract the URLS and the episode titles
+mp3s=( $(grep -o "http.\+mp3" /tmp/criticalrole.rss) )
+titles=( $(tail -n +45 /tmp/criticalrole.rss | grep -o "<title>.\+</title>" \
+ | sed -r 's@</?title>@@g; s@ @\\@g' | recode html..utf8) )
+
+# Download all the episodes under their titles
+for i in ${!titles[*]}
+do
+ wget -qO "$(sed -e "s@\\\@\\ @g" <<< "${titles[$i]}").mp3" ${mp3s[$i]}
+done
+
+```
+
+1 - For those of you not familiar with Critical Role, it's a web series where a group of voice actresses and actors from LA play Dungeons & Dragons. It's so good even people like me who never played D&D can enjoy it.
+
+--------------------------------------------------------------------------------
+
+via: https://veronneau.org/downloading-all-the-critical-role-podcasts-in-one-batch.html
+
+作者:[Louis-Philippe Véronneau][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://veronneau.org/
+[1]:https://en.wikipedia.org/wiki/Critical_Role
+[2]:http://criticalrolepodcast.geekandsundry.com/feed/
diff --git a/sources/tech/20180217 The List Of Useful Bash Keyboard Shortcuts.md b/sources/tech/20180217 The List Of Useful Bash Keyboard Shortcuts.md
deleted file mode 100644
index be44c1b034..0000000000
--- a/sources/tech/20180217 The List Of Useful Bash Keyboard Shortcuts.md
+++ /dev/null
@@ -1,163 +0,0 @@
-The List Of Useful Bash Keyboard Shortcuts
-======
-translating by heart4lor
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/02/Bash-720x340.jpg)
-
-Nowadays, I spend more time in the Terminal, trying to accomplish more in the CLI than the GUI. I have learned many BASH tricks over time. Here is a list of useful BASH shortcuts that every Linux user should know to get things done faster in their BASH shell. I won't claim that this list is complete, but it contains enough to help you move around your BASH shell faster than before. Learning how to navigate faster in the BASH shell not only saves time, but also makes you proud of yourself for learning something worthwhile. Well, let's get started.
-
-### List Of Useful Bash Keyboard Shortcuts
-
-#### ALT key shortcuts
-
-1\. **ALT+A** – Go to the beginning of a line.
-
-2\. **ALT+B** – Move one character before the cursor.
-
-3\. **ALT+C** – Suspends the running command/process. Same as CTRL+C.
-
-4\. **ALT+D** – Closes the empty Terminal (i.e. it closes the Terminal when there is nothing typed). Also deletes all characters after the cursor.
-
-5\. **ALT+F** – Move forward one character.
-
-6\. **ALT+T** – Swaps the last two words.
-
-7\. **ALT+U** – Capitalize all characters in a word after the cursor.
-
-8\. **ALT+L** – Lowercase all characters in a word after the cursor.
-
-9\. **ALT+R** – Undo any changes to a command that you have brought from the history if you've edited it.
-
-As you see in the above output, I pulled a command using reverse search, changed the last characters in that command, and then reverted the changes using **ALT+R**.
-
-10\. **ALT+.** (note the dot at the end) – Use the last word of the previous command.
-
-If you want to use the same options for multiple commands, you can use this shortcut to bring back the last word of the previous command. For instance, I need to list the contents of a directory in reverse order using the “ls -r” command. Also, I want to view my kernel version using “uname -r”.
-
-#### CTRL key shortcuts
-
-1\. **CTRL+A** – Quickly move to the beginning of the line.
-
-Let us say you're typing a command something like below. While you're at the N'th character, you notice there is a typo in the first character.
-```
-$ gind . -mtime -1 -type
-
-```
-
-Did you notice? I typed "gind" instead of "find" in the above command. You can correct this error by pressing the left arrow all the way to the first letter and replacing "g" with "f". Alternatively, just hit the **CTRL+A** or **Home** key to instantly go to the beginning of the line and replace the misspelled character. This will save you a few seconds.
-
-2\. **CTRL+B** – To move backward one character.
-
-This shortcut key moves the cursor backward one character, i.e. one character before the cursor. Alternatively, you can use the LEFT arrow to move backward one character.
-
-3\. **CTRL+C** – Stop the currently running command.
-
-If a command takes too long to complete or if you mistakenly ran it, you can forcibly stop or quit the command by using **CTRL+C**.
-
-4\. **CTRL+D** – Delete the character under the cursor.
-
-If you have a system where the DELETE key isn't working, you can use **CTRL+D** instead. On an empty line, this shortcut also logs you out of the current session, similar to exit.
-
-5\. **CTRL+E** – Move to the end of the line.
-
-After you have corrected any misspelled word at the start of a command or line, just hit **CTRL+E** to quickly move to the end of the line. Alternatively, you can use the END key on your keyboard.
-
-6\. **CTRL+F** – Move forward one character.
-
-If you want to move the cursor forward one character after another, just press **CTRL+F** instead of the RIGHT arrow key.
-
-7\. **CTRL+G** – Leave the history searching mode without running the command.
-
-As you see in the above screenshot, I did the reverse search, but didn't execute the command and left the history searching mode.
-
-8\. **CTRL+H** – Delete the character before the cursor, same as BACKSPACE.
-
-9\. **CTRL+J** – Same as the ENTER/RETURN key.
-
-ENTER key not working? No problem! **CTRL+J** or **CTRL+M** can be used as an alternative to the ENTER key.
-
-10\. **CTRL+K** – Delete all characters after the cursor.
-
-You don't have to keep hitting the DELETE key to delete the characters after the cursor. Just press **CTRL+K** to delete all characters after the cursor.
-
-11\. **CTRL+L** – Clears the screen and redisplays the line.
-
-Don't type "clear" to clear the screen. Just press CTRL+L to clear the screen and redisplay the currently typed line.
-
-12\. **CTRL+M** – Same as CTRL+J or RETURN.
-
-13\. **CTRL+N** – Display the next line in command history.
-
-You can also use the DOWN arrow.
-
-14\. **CTRL+O** – Run the command that you found using reverse search, i.e. CTRL+R.
-
-15\. **CTRL+P** – Display the previous line in command history.
-
-You can also use the UP arrow.
-
-16\. **CTRL+R** – Searches the history backward (reverse search).
-
-17\. **CTRL+S** – Searches the history forward.
-
-18\. **CTRL+T** – Swaps the last two characters.
-
-This is one of my favorite shortcuts. Let us say you typed "sl" instead of "ls". No problem! This shortcut will transpose the characters as in the below screenshot.
-
-![][2]
-
-19\. **CTRL+U** – Delete all characters before the cursor (kills backward from point to the beginning of the line).
-
-This shortcut will delete all typed characters backward at once.
-
-20\. **CTRL+V** – Makes the next character typed verbatim.
-
-21\. **CTRL+W** – Delete the word before the cursor.
-
-Don't confuse it with CTRL+U. CTRL+W won't delete everything behind the cursor, just a single word.
-
-![][3]
-
-22\. **CTRL+X** – Lists the possible filename completions of the current word.
-
-23\. **CTRL+XX** – Move between the start of the command line and the current cursor position (and back again).
-
-24\. **CTRL+Y** – Retrieves the last item that you deleted or cut.
-
-Remember, we deleted the word "-al" using CTRL+W in the 21st shortcut. You can retrieve that word instantly using CTRL+Y.
-
-![][4]
-
-See? I didn't type "-al". Instead, I pressed CTRL+Y to retrieve it.
-
-25\. **CTRL+Z** – Suspends the current command.
-
-You may very well know this shortcut. It suspends the currently running command. You can resume it with **fg** in the foreground or **bg** in the background.
-
-26\. **CTRL+[** – Equivalent to the ESC key.
-
-#### Miscellaneous
-
-1\. **!!** – Repeats the last command.
-
-2\. **ESC+t** – Swaps the last two words.
-
-That's all I have in mind now. I will keep adding more if I come across any new Bash shortcut keys in the future. If you think there is a mistake in this article, please do notify me in the comments section below. I will update it asap.
-
-Cheers!
-
-
--------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/list-useful-bash-keyboard-shortcuts/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[2]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLT-1.gif
-[3]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLW-1.gif
-[4]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLY-1.gif
diff --git a/sources/tech/20180219 Learn to code with Thonny - a Python IDE for beginners.md b/sources/tech/20180219 Learn to code with Thonny - a Python IDE for beginners.md
new file mode 100644
index 0000000000..4ee603aa6d
--- /dev/null
+++ b/sources/tech/20180219 Learn to code with Thonny - a Python IDE for beginners.md
@@ -0,0 +1,119 @@
+Learn to code with Thonny — a Python IDE for beginners
+======
+
+![](https://fedoramagazine.org/wp-content/uploads/2018/02/thonny.png-945x400.jpg)
+Learning to program is hard. Even when you finally get your colons and parentheses right, there is still a big chance that the program doesn't do what you intended. Commonly, this means you overlooked something or misunderstood a language construct, and you need to locate the place in the code where your expectations and reality diverge.
+
+Programmers usually tackle this situation with a tool called a debugger, which allows running their program step-by-step. Unfortunately, most debuggers are optimized for professional usage and assume the user already knows the semantics of language constructs (e.g. function call) very well.
+
+Thonny is a beginner-friendly Python IDE, developed at the [University of Tartu][1], Estonia, which takes a different approach, as its debugger is designed specifically for learning and teaching programming.
+
+Although Thonny is suitable for even total beginners, this post is meant for readers who have at least some experience with Python or another imperative language.
+
+### Getting started
+
+Thonny has been included in the Fedora repositories since version 27.
Install it with sudo dnf install thonny or with a graphical tool of your choice (such as Software).
+
+When first launching Thonny, it does some preparations and then presents an empty editor and the Python shell. Copy the following program text into the editor and save it into a file (Ctrl+S).
+```
+n = 1
+while n < 5:
+    print(n * "*")
+    n = n + 1
+
+```
+
+Let's first run the program in one go. To do this, press F5 on the keyboard. You should see a triangle made of asterisks appear in the shell pane.
+
+![A simple program in Thonny][2]
+
+Did Python just analyze your code and understand that you wanted to print a triangle? Let's find out!
+
+Start by selecting "Variables" from the "View" menu. This opens a table which will show us how Python manages a program's variables. Now run the program in debug mode by pressing Ctrl+F5 (or Ctrl+Shift+F5 in XFCE). In this mode Thonny makes Python pause before each step it takes. You should see the first line of the program getting surrounded with a box. We'll call this the focus, and it indicates the part of the code Python is going to execute next.
+
+![Thonny debugger focus][3]
+
+The piece of code you see in the focus box is called an assignment statement. For this kind of statement, Python is supposed to evaluate the expression on the right and store the value under the name shown on the left. Press F7 to take the next step. You will see that Python focused on the right part of the statement. In this case the expression is really simple, but for generality Thonny presents the expression evaluation box, which allows turning expressions into values. Press F7 again to turn the literal 1 into the value 1. Now Python is ready to do the actual assignment — press F7 again and you should see the variable n with value 1 appear in the variables table.
+
+![Thonny with variables table][4]
+
+Continue pressing F7 and observe how Python moves forward in really small steps. Does it look like something which understands the purpose of your code, or more like a dumb machine following simple rules?
+
+### Function calls
+
+A function call is a programming concept which often causes a great deal of confusion to beginners. On the surface there is nothing complicated — you give a name to a piece of code and refer to it (call it) somewhere else in the code. Traditional debuggers show us that when you step into the call, the focus jumps into the function definition (and later magically back to the original location). Is that the whole story? Do we need to care?
+
+It turns out the "jump model" is sufficient only for the simplest functions. Understanding parameter passing, local variables, returning, and recursion all benefit from the notion of a stack frame. Luckily, Thonny can explain this concept intuitively without sweeping important details under the carpet.
+
+Copy the following recursive program into Thonny and run it in debug mode (Ctrl+F5 or Ctrl+Shift+F5).
+```
+def factorial(n):
+    if n == 0:
+        return 1
+    else:
+        return factorial(n-1) * n
+
+print(factorial(4))
+
+```
+
+Press F7 repeatedly until you see the expression factorial(4) in the focus box. When you take the next step, you see that Thonny opens a new window containing the function code, another variables table, and another focus box (move the window to see that the old focus box is still there).
+
+![Thonny stepping through a recursive function][5]
+
+This window represents a stack frame, the working area for resolving a function call. Several such windows stacked on top of each other form the call stack. Notice the relationship between the argument 4 at the call site and the entry n in the local variables table. Continue stepping with F7 and observe how a new window gets created on each call and destroyed when the function code completes, and how the call site gets replaced by the return value.
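+
+To make the frame choreography concrete, here is one way to sketch what happens for factorial(4). The comments trace the frames opening and closing; they are an illustration, not actual Thonny output:
+```
+>>> factorial(4)   # frame 1 opens: n = 4, waits on factorial(3)
+                   # frame 2 opens: n = 3, waits on factorial(2)
+                   # frame 3 opens: n = 2, waits on factorial(1)
+                   # frame 4 opens: n = 1, waits on factorial(0)
+                   # frame 5 opens: n = 0, returns 1 immediately
+                   # frames close in reverse: 1*1=1, then 1*2=2, 2*3=6, 6*4=24
+24
+
+```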
+
+### Values vs. references
+
+Now let's run an experiment inside the Python shell. Start by typing in the statements shown in the screenshot below:
+
+![Thonny shell showing list mutation][6]
+
+As you see, we appended to list b, but list a also got updated. You may know why this happened, but what's the best way to explain it to a beginner?
+
+When teaching lists to my students I tell them that I have been lying about the Python memory model. It is actually not as simple as the variables table suggests. I tell them to restart the interpreter (the red button on the toolbar), select "Heap" from the "View" menu, and make the same experiment again. If you do this, then you see that the variables table doesn't contain the values anymore — they actually live in another table called "Heap". The role of the variables table is actually to map the variable names to addresses (or ID-s) which refer to the rows in the heap table. As assignment changes only the variables table, the statement b = a only copied the reference to the list, not the list itself. This explains why we see the change via both variables. (A plain-shell version of this experiment appears at the end of this post.)
+
+![Thonny in heap mode][7]
+
+(Why do I postpone telling the truth about the memory model until the topic of lists? Does Python store lists differently compared to floats or strings? Go ahead and use Thonny's heap mode to find this out! Tell me in the comments what you think!)
+
+If you want to understand the reference system more deeply, copy the following program into Thonny and small-step (F7) through it with the heap table open.
+```
+def do_something(lst, x):
+    lst.append(x)
+
+a = [1,2,3]
+n = 4
+do_something(a, n)
+print(a)
+
+```
+
+Even if the "heap mode" shows us an authentic picture, it is rather inconvenient to use. For this reason, I recommend you now switch back to normal mode (unselect "Heap" in the View menu), but remember that the real model includes variables, references, and values.
+
+### Conclusion
+
+The features I touched on in this post were the main reason for creating Thonny. It's easy to form misconceptions about both function calls and references, but traditional debuggers don't really help in reducing the confusion.
+
+Besides these distinguishing features, Thonny offers several other beginner-friendly tools. Please look around at [Thonny's homepage][8] to learn more!
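+
+As promised above, here is the aliasing experiment as a plain Python session that works in any shell, not just Thonny. The id() calls expose the object identities that Thonny's heap table visualizes; the variable names are only for illustration:
+```
+>>> a = [1, 2, 3]
+>>> b = a            # copies the reference, not the list
+>>> id(a) == id(b)   # both names refer to the same heap object
+True
+>>> b.append(4)
+>>> a                # the change is visible through both names
+[1, 2, 3, 4]
+>>> c = a[:]         # a shallow copy creates a new object
+>>> id(a) == id(c)
+False
+
+```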
+
+
+--------------------------------------------------------------------------------
+
via: https://fedoramagazine.org/learn-code-thonny-python-ide-beginners/

作者:[Aivar Annamaa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/
[1]:https://www.ut.ee/en
[2]:https://fedoramagazine.org/wp-content/uploads/2017/12/scr1.png
[3]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr2.png
[4]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr3.png
[5]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr4.png
[6]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr5.png
[7]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr6.png
[8]:http://thonny.org
diff --git a/sources/tech/20180221 Protecting Code Integrity with PGP - Part 2- Generating Your Master Key.md b/sources/tech/20180221 Protecting Code Integrity with PGP - Part 2- Generating Your Master Key.md
deleted file mode 100644
index d78f1daafd..0000000000
--- a/sources/tech/20180221 Protecting Code Integrity with PGP - Part 2- Generating Your Master Key.md
+++ /dev/null
@@ -1,177 +0,0 @@
-translating by kimii
-Protecting Code Integrity with PGP — Part 2: Generating Your Master Key
-======
-
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/binary-1538717_1920.png?itok=kv_sxSnf)
-
-In this article series, we're taking an in-depth look at using PGP and providing practical guidelines for developers working on free software projects. In the previous article, we provided an introduction to [basic tools and concepts][1]. In this installment, we show how to generate and protect your master PGP key.
-
-### Checklist
-
- 1. Generate a 4096-bit RSA master key (ESSENTIAL)
-
- 2. Back up the master key using paperkey (ESSENTIAL)
-
- 3. Add all relevant identities (ESSENTIAL)
-
-
-
-### Considerations
-
-#### Understanding the "Master" (Certify) key
-
-In this and the next section we'll talk about the "master key" and "subkeys." It is important to understand the following:
-
- 1. There are no technical differences between the "master key" and "subkeys."
-
- 2. At creation time, we assign functional limitations to each key by giving it specific capabilities.
-
- 3. A PGP key can have four capabilities.
-
-   * [S] key can be used for signing
-
-   * [E] key can be used for encryption
-
-   * [A] key can be used for authentication
-
-   * [C] key can be used for certifying other keys
-
- 4. A single key may have multiple capabilities.
-
-
-
-The key carrying the [C] (certify) capability is considered the "master" key because it is the only key that can be used to indicate a relationship with other keys. Only the [C] key can be used to:
-
- * Add or revoke other keys (subkeys) with S/E/A capabilities
-
- * Add, change or revoke identities (uids) associated with the key
-
- * Add or change the expiration date on itself or any subkey
-
- * Sign other people's keys for web of trust purposes
-
-
-
-In the Free Software world, the [C] key is your digital identity. Once you create that key, you should take extra care to protect it and prevent it from falling into malicious hands.
-
-#### Before you create the master key
-
-Before you create your master key you need to pick your primary identity and your master passphrase.
-
-##### Primary identity
-
-Identities are strings using the same format as the "From" field in emails:
-```
-Alice Engineer <alice@example.org>
-
-```
-
-You can create new identities, revoke old ones, and change which identity is your "primary" one at any time. Since the primary identity is shown in all GnuPG operations, you should pick a name and address that are both professional and the most likely ones to be used for PGP-protected communication, such as your work address or the address you use for signing off on project commits.
-
-##### Passphrase
-
-The passphrase is used exclusively for encrypting the private key with a symmetric algorithm while it is stored on disk. If the contents of your .gnupg directory ever get leaked, a good passphrase is the last line of defense between the thief and them being able to impersonate you online, which is why it is important to set up a good passphrase.
-
-A good guideline for a strong passphrase is 3-4 words from a rich or mixed dictionary that are not quotes from popular sources (songs, books, slogans). You'll be using this passphrase fairly frequently, so it should be both easy to type and easy to remember.
-
-##### Algorithm and key strength
-
-Even though GnuPG has had support for Elliptic Curve crypto for a while now, we'll be sticking to RSA keys, at least for a little while longer. While it is possible to start using ED25519 keys right now, it is likely that you will come across tools and hardware devices that will not be able to handle them correctly.
-
-You may also wonder why the master key is 4096-bit, if later in the guide we state that 2048-bit keys should be good enough for the lifetime of RSA public key cryptography. The reasons are mostly social and not technical: master keys happen to be the most visible ones on the keychain, and some of the developers you interact with will inevitably judge you negatively if your master key has fewer bits than theirs.
-
-#### Generate the master key
-
-To generate your new master key, issue the following command, putting in the right values instead of "Alice Engineer":
-```
-$ gpg --quick-generate-key 'Alice Engineer <alice@example.org>' rsa4096 cert
-
-```
-
-A dialog will pop up asking you to enter the passphrase. Then, you may need to move your mouse around or type on some keys to generate enough entropy until the command completes.
-
-Review the output of the command; it will be something like this:
-```
-pub rsa4096 2017-12-06 [C] [expires: 2019-12-06]
-      111122223333444455556666AAAABBBBCCCCDDDD
-uid Alice Engineer <alice@example.org>
-
-```
-
-Note the long string on the second line -- that is the full fingerprint of your newly generated key. Key IDs can be represented in three different forms:
-
- * Fingerprint, a full 40-character key identifier
-
- * Long, the last 16 characters of the fingerprint (AAAABBBBCCCCDDDD)
-
- * Short, the last 8 characters of the fingerprint (CCCCDDDD)
-
-
-
-You should avoid using 8-character "short key IDs" as they are not sufficiently unique.
-
-At this point, I suggest you open a text editor, copy the fingerprint of your new key and paste it there. You'll need to use it for the next few steps, so having it close by will be handy.
-
-#### Back up your master key
-
-For disaster recovery purposes -- and especially if you intend to use the Web of Trust and collect key signatures from other project developers -- you should create a hardcopy backup of your private key. This is supposed to be the "last resort" measure in case all other backup mechanisms have failed.
-
-The best way to create a printable hardcopy of your private key is by using the paperkey software written for this very purpose. Paperkey is available on all Linux distros, and is also installable via brew install paperkey on Macs.
-
-Run the following command, replacing [fpr] with the full fingerprint of your key:
-```
-$ gpg --export-secret-key [fpr] | paperkey -o /tmp/key-backup.txt
-
-```
-
-The output will be in a format that is easy to OCR or input by hand, should you ever need to recover it (a sketch of the recovery procedure appears at the end of this article). Print out that file, then take a pen and write the key passphrase on the margin of the paper. This is a required step because the key printout is still encrypted with the passphrase, and if you ever change the passphrase on your key, you will not remember what it used to be when you had first created it -- guaranteed.
-
-Put the resulting printout and the hand-written passphrase into an envelope and store them in a secure and well-protected place, preferably away from your home, such as your bank vault.
-
-**Note on printers:** Long gone are the days when printers were dumb devices connected to your computer's parallel port. These days they have full operating systems, hard drives, and cloud integration. Since the key content we send to the printer will be encrypted with the passphrase, this is a fairly safe operation, but use your best paranoid judgement.
-
-#### Add relevant identities
-
-If you have multiple relevant email addresses (personal, work, open-source project, etc), you should add them to your master key. You don't need to do this for any addresses that you don't expect to use with PGP (e.g., probably not your school alumni address).
-
-The command is (put the full key fingerprint instead of [fpr]):
-```
-$ gpg --quick-add-uid [fpr] 'Alice Engineer <allie@example.net>'
-
-```
-
-You can review the UIDs you've already added using:
-```
-$ gpg --list-key [fpr] | grep ^uid
-
-```
-
-##### Pick the primary UID
-
-GnuPG will make the latest UID you add your primary UID, so if that is different from what you want, you should fix it back:
-```
-$ gpg --quick-set-primary-uid [fpr] 'Alice Engineer <alice@example.org>'
-
-```
-
-Next time, we'll look at generating PGP subkeys, which are the keys you'll actually be using for day-to-day work.
-
-Learn more about Linux through the free ["Introduction to Linux"][2] course from The Linux Foundation and edX.
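-
-As promised in the backup section above, here is a sketch of how the paperkey printout would be restored. Paperkey stores only the secret portions of the key, so reconstruction also needs a copy of your public key; the file names below are placeholders:
-```
-# Export the public key, then combine it with the typed-in (or OCRed) backup text
-$ gpg --export [fpr] > alice-public.gpg
-$ paperkey --pubring alice-public.gpg --secrets key-backup.txt --output alice-secret.gpg
-$ gpg --import alice-secret.gpg
-
-```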
-
--------------------------------------------------------------------------------
-
via: https://www.linux.com/blog/learn/PGP/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key

作者:[KONSTANTIN RYABITSEV][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/mricon
[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20180227 How to block local spoofed addresses using the Linux firewall.md b/sources/tech/20180227 How to block local spoofed addresses using the Linux firewall.md
index dcacec8e89..2ea760e636 100644
--- a/sources/tech/20180227 How to block local spoofed addresses using the Linux firewall.md
+++ b/sources/tech/20180227 How to block local spoofed addresses using the Linux firewall.md
@@ -1,3 +1,4 @@
+leemeans translating
 How to block local spoofed addresses using the Linux firewall
 ======
 
diff --git a/sources/tech/20180302 10 Quick Tips About sudo command for Linux systems.md b/sources/tech/20180302 10 Quick Tips About sudo command for Linux systems.md
new file mode 100644
index 0000000000..bcfad89a12
--- /dev/null
+++ b/sources/tech/20180302 10 Quick Tips About sudo command for Linux systems.md
@@ -0,0 +1,215 @@
+10 Quick Tips About sudo command for Linux systems
+======
+
+![Linux-sudo-command-tips][1]
+
+### Overview
+
+**sudo** stands for **superuser do**. It allows authorized users to execute commands as another user. That other user can be a regular user or the superuser, but most of the time we use sudo to execute commands with elevated privileges.
+
+The sudo command works in conjunction with security policies; the default security policy is sudoers, and it is configurable via the **/etc/sudoers** file. Its security policies are highly extendable: one can develop and distribute their own policies as plugins.
+
+#### How it's different than su
+
+In GNU/Linux there are two ways to run a command with elevated privileges:
+
+ * Using the **su** command
+ * Using the **sudo** command
+
+
+
+**su** stands for **switch user**. Using su, we can switch to the root user and execute commands. But there are a few drawbacks to this approach.
+
+ * We need to share the root password with another user.
+ * We cannot give controlled access, as the root user is the superuser.
+ * We cannot audit what the user is doing.
+
+
+
+sudo addresses these problems in a unique way.
+
+ 1. First of all, we don't need to share the root user's password. A regular user uses their own password to execute commands with elevated privileges.
+ 2. We can control a sudo user's access, meaning we can restrict the user to executing only certain commands.
+ 3. In addition, all of a sudo user's activities are logged, so we can always audit what actions were done. On Debian-based GNU/Linux, all activities are logged in the **/var/log/auth.log** file (a sample entry is shown below).
+
+
+
+Later sections of this tutorial shed light on these points.
+
+#### Hands on with sudo
+
+Now that we have a fair understanding of sudo, let us get our hands dirty with some practice. For demonstration, I am using Ubuntu. However, behavior with other distributions should be identical.
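+
+Before moving on, here is roughly what the audit trail mentioned above looks like. This is an illustrative entry rather than output captured from a real system; the host name, user, and command are examples:
+```
+Mar  2 10:15:01 ubuntu-host sudo: linuxtechi : TTY=pts/0 ; PWD=/home/linuxtechi ; USER=root ; COMMAND=/bin/cat /etc/passwd
+
+```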
+
+#### Allow sudo access
+
+Let us add a regular user as a sudo user. In my case the user's name is linuxtechi.
+
+1) Edit the /etc/sudoers file as follows:
+```
+$ sudo visudo
+
+```
+
+2) Add the line below to allow sudo access for the user linuxtechi:
+```
+linuxtechi ALL=(ALL) ALL
+
+```
+
+In the above line:
+
+ * linuxtechi indicates the user name
+ * The first ALL permits sudo access from any terminal/machine
+ * The second (ALL) allows sudo commands to be executed as any user
+ * The third ALL indicates all commands can be executed as root
+
+
+
+#### Execute a command with elevated privileges
+
+To execute a command with elevated privileges, just prepend the word sudo to the command as follows:
+```
+$ sudo cat /etc/passwd
+
+```
+
+When you execute this command, it will ask for linuxtechi's password, not the root user's password.
+
+#### Execute a command as another user
+
+In addition, we can use sudo to execute a command as another user. For instance, in the command below, user linuxtechi executes a command as the user devesh:
+```
+$ sudo -u devesh whoami
+[sudo] password for linuxtechi:
+devesh
+
+```
+
+#### Built-in command behavior
+
+One limitation of sudo is that a shell's built-in commands don't work with it. For instance, history is a built-in command; if you try to execute it with sudo, a command not found error will be reported as follows:
+```
+$ sudo history
+[sudo] password for linuxtechi:
+sudo: history: command not found
+
+```
+
+**Access root shell**
+
+To overcome the above problem, we can get access to a root shell and execute any command from there, including the shell's built-ins.
+
+To access a root shell, execute the command below:
+```
+$ sudo bash
+
+```
+
+After executing this command, you will observe that the prompt changes to the pound (#) character.
+
+### Recipes
+
+In this section we'll discuss some useful recipes that will help you improve productivity. Most of these commands can be used to complete day-to-day tasks.
+
+#### Execute a previous command as a sudo user
+
+Suppose you want to execute a previous command with elevated privileges; the trick below will be useful:
+```
+$ sudo !4
+
+```
+
+The above command will execute the 4th command from history with elevated privileges.
+
+#### sudo command with Vim
+
+Many times we edit a system configuration file and, while saving, realize that we need root access to do so, and that we may lose our changes because of it. There is no need to panic; we can use the command below in Vim to rescue the situation:
+```
+:w !sudo tee %
+
+```
+
+In the above command:
+
+ * The colon (:) indicates we are in Vim's ex mode
+ * The exclamation mark (!) indicates that we are running a shell command
+ * sudo and tee are the shell commands
+ * The percent sign (%) stands for the current file name
+
+
+
+#### Execute multiple commands using sudo
+
+So far we have executed only a single command with sudo, but we can execute multiple commands with it. Just separate the commands with semicolons (;) as follows (sample output is shown below):
+```
+$ sudo -- bash -c 'pwd; hostname; whoami'
+
+```
+
+In the above command:
+
+ * The double hyphen (--) stops processing of command-line switches
+ * bash indicates the shell to be used for execution
+ * The commands to be executed follow the -c option
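+
+For what it's worth, a run of that compound command looks roughly like the following; the host name and home directory here are illustrative:
+```
+$ sudo -- bash -c 'pwd; hostname; whoami'
+[sudo] password for linuxtechi:
+/home/linuxtechi
+ubuntu-host
+root
+
+```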
+
+#### Run a sudo command without a password
+
+When a sudo command is executed for the first time, it will prompt for a password, and by default the password will be cached for the next 15 minutes. However, we can override this behavior and disable password authentication using the NOPASSWD keyword as follows:
+```
+linuxtechi ALL=(ALL) NOPASSWD: ALL
+
+```
+
+#### Restrict a user to executing certain commands
+
+To provide controlled access, we can restrict a sudo user to executing only certain commands. For instance, the line below allows execution of the echo and ls commands only:
+```
+linuxtechi ALL=(ALL) NOPASSWD: /bin/echo, /bin/ls
+
+```
+
+#### Insights about sudo
+
+Let us dig into the sudo command a little more to get some insights about it.
+```
+$ ls -l /usr/bin/sudo
+-rwsr-xr-x 1 root root 145040 Jun 13  2017 /usr/bin/sudo
+
+```
+
+If you observe the file permissions carefully, the **setuid** bit is enabled on sudo. When any user runs this binary, it runs with the privileges of the user that owns the file; in this case, that is the root user.
+
+To demonstrate this, we can use the id command as follows:
+```
+$ id
+uid=1002(linuxtechi) gid=1002(linuxtechi) groups=1002(linuxtechi)
+
+```
+
+When we execute the id command without sudo, the id of the user linuxtechi is displayed.
+```
+$ sudo id
+uid=0(root) gid=0(root) groups=0(root)
+
+```
+
+But if we execute the id command with sudo, the id of the root user is displayed.
+
+### Conclusion
+
+The takeaway from this article is that sudo provides controlled access for regular users. Using these techniques, multiple users can interact with GNU/Linux in a secure manner.
+
+--------------------------------------------------------------------------------
+
via: https://www.linuxtechi.com/quick-tips-sudo-command-linux-systems/

作者:[Pradeep Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linuxtechi.com/author/pradeep/
[1]:https://www.linuxtechi.com/wp-content/uploads/2018/03/Linux-sudo-command-tips.jpg
diff --git a/sources/tech/20180302 5 open source software tools for supply chain management.md b/sources/tech/20180302 5 open source software tools for supply chain management.md
new file mode 100644
index 0000000000..20f0c8d554
--- /dev/null
+++ b/sources/tech/20180302 5 open source software tools for supply chain management.md
@@ -0,0 +1,81 @@
+5 open source software tools for supply chain management
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_Maze2.png?itok=EH_L-J6Q)
+
+This article was originally posted on January 14, 2016, and last updated March 2, 2018.
+
+If you manage a business that deals with physical goods, [supply chain management][1] is an important part of your business process. Whether you're running a tiny Etsy store with just a few customers, or a Fortune 500 manufacturer or retailer with thousands of products and millions of customers worldwide, it's important to have a close understanding of your inventory and the parts and raw materials you need to make your products.
+
+Keeping track of physical items, suppliers, customers, and all the many moving parts associated with each can greatly benefit from, and in some cases be totally dependent on, specialized software to help manage these workflows. In this article, we'll take a look at some free and open source software options for supply chain management and some of the features of each.
+
+Supply chain management goes a little further than just inventory management. It can help you keep track of the flow of goods to reduce costs and plan for scenarios in which the supply chain could change.
It can help you keep track of compliance issues, whether these fall under the umbrella of legal requirements, quality minimums, or social and environmental responsibility. It can help you plan the minimum supply to keep on hand and enable you to make smart decisions about order quantities and delivery times. + +Because of its nature, a lot of supply chain management software is bundled with similar software, such as [customer relationship management][2] (CRM) and [enterprise resource planning][3] (ERP) tools. So, when making a decision about which tool is best for your organization, you may wish to consider integration with other tools as a part of your decision-making criteria. + +### Apache OFBiz + +[Apache OFBiz][4] is a suite of related tools for helping you manage a variety of business processes. While it can manage a variety of related issues like catalogs, e-commerce sites, accounting, and point of sale, its primary supply chain functions focus on warehouse management, fulfillment, order, and manufacturing management. It is very customizable, but the flip side of that is that it requires a good deal of careful planning to set up and integrate with your existing processes. That's one reason it is probably the best fit for a midsize to large operation. The project's functionality is built across three layers: presentation, business, and data, making it a scalable solution, but again, a complex one. + +The source code of Apache OFBiz can be found in the [project's repository][5]. Apache OFBiz is written in Java and is licensed under an [Apache 2.0 license][6]. + +If this looks interesting, you might also want to check out [opentaps][7], which is built on top of OFBiz. Opentaps enhances OFBiz's user interface and adds core ERP and CRM features, including warehouse management, purchasing, and planning. It's licensed under [AGPL 3.0][8], with a commercial license available for organizations that don't want to be bound by the open source license. + +### OpenBoxes + +[OpenBoxes][9] is a supply chain management and inventory control project, primarily and originally designed for keeping track of pharmaceuticals in a healthcare environment, but it can be modified to track any type of stock and the flows associated with it. It has tools for demand forecasting based on historical order quantities, tracking stock, supporting multiple facilities, expiration date tracking, kiosk support, and many other features that make it ideal for healthcare situations, but could also be useful for other industries. + +Available under an [Eclipse Public License][10], OpenBoxes is written primarily in Groovy and its source code can be browsed on [GitHub][11]. + +### OpenLMIS + +Like OpenBoxes, [OpenLMIS][12] is a supply chain management tool for the healthcare sector, but it was specifically designed for use in low-resource areas in Africa to ensure medications and medical supplies get to patients in need. Its API-driven approach enables users to customize and extend OpenLMIS while maintaining a connection to the common codebase. It was developed with funding from the Rockefeller Foundation, and other contributors include the UN, USAID, and the Bill & Melinda Gates Foundation. + +OpenLMIS is written in Java and JavaScript with AngularJS. It is available under an [AGPL 3.0 license][13], and its source code is accessible on [GitHub][13]. + +### Odoo + +You might recognize [Odoo][14] from our previous top [ERP projects][3] article. In fact, a full ERP may be a good fit for you, depending on your needs. 
Odoo's supply chain management tools mostly revolve around inventory and purchase management, as well as connectivity with e-commerce and point of sale, but it can also connect to other tools like [frePPLe][15] for open source production planning. + +Odoo is available both as a software-as-a-service solution and an open source community edition. The open source edition is released under [LGPL][16] version 3, and the source is available on [GitHub][17]. Odoo is primarily written in Python. + +### xTuple + +Billing itself as "supply chain management software for growing businesses," [xTuple][18] focuses on businesses that have outgrown their conventional small business ERP and CRM solutions. Its open source version, called Postbooks, adds some inventory, distribution, purchasing, and vendor reporting features to its core accounting, CRM, and ERP capabilities, and a commercial version expands the [features][19] for manufacturers and distributors. + +xTuple is available under the Common Public Attribution License ([CPAL][20]), and the project welcomes developers to fork it to create other business software for inventory-based manufacturers. Its web app core is written in JavaScript, and its source code can be found on [GitHub][21]. + +There are, of course, other open source tools that can help with supply chain management. Know of a good one that we left off? Let us know in the comments below. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/tools/supply-chain-management + +作者:[Jason Baker][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jason-baker +[1]:https://en.wikipedia.org/wiki/Supply_chain_management +[2]:https://opensource.com/business/14/7/top-5-open-source-crm-tools +[3]:https://opensource.com/resources/top-4-open-source-erp-systems +[4]:http://ofbiz.apache.org/ +[5]:http://ofbiz.apache.org/source-repositories.html +[6]:http://www.apache.org/licenses/LICENSE-2.0 +[7]:http://www.opentaps.org/ +[8]:http://www.fsf.org/licensing/licenses/agpl-3.0.html +[9]:http://openboxes.com/ +[10]:http://opensource.org/licenses/eclipse-1.0.php +[11]:https://github.com/openboxes/openboxes +[12]:http://openlmis.org/ +[13]:https://github.com/OpenLMIS/openlmis-ref-distro/blob/master/LICENSE +[14]:https://www.odoo.com/ +[15]:https://frepple.com/ +[16]:https://github.com/odoo/odoo/blob/9.0/LICENSE +[17]:https://github.com/odoo/odoo +[18]:https://xtuple.com/ +[19]:https://xtuple.com/comparison-chart +[20]:https://xtuple.com/products/license-options#cpal +[21]:http://xtuple.github.io/ diff --git a/sources/tech/20180302 How to manage your workstation configuration with Ansible.md b/sources/tech/20180302 How to manage your workstation configuration with Ansible.md new file mode 100644 index 0000000000..fd24cd48ed --- /dev/null +++ b/sources/tech/20180302 How to manage your workstation configuration with Ansible.md @@ -0,0 +1,170 @@ +How to manage your workstation configuration with Ansible +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb) + +Configuration management is a very important aspect of both server administration and DevOps. 
The "infrastructure as code" methodology makes it easy to deploy servers in various configurations and dynamically scale an organization's resources to keep up with user demands. But less attention is paid to individual administrators who want to automate the setup of their own laptops and desktops (workstations). + +In this series, I'll show you how to automate your workstation setup via [Ansible][1] , which will allow you to easily restore your entire configuration if you want or need to reload your machine. In addition, if you have multiple workstations, you can use this same approach to make the configuration identical on each. In this first article, we'll set up basic configuration management for our personal or work computers and set the foundation for the rest of the series. By the end of this article, you'll have a working setup to benefit from right away. Each article will automate more things and grow in complexity. + +### Why Ansible? + +Many configuration management solutions are available, including Salt Stack, Chef, and Puppet. I prefer Ansible because it's lighter in terms of resource utilization, its syntax is easier to read, and when harnessed properly it can revolutionize your configuration management. Ansible's lightweight nature is especially relevant to the topic at hand, because we may not want to run an entire server just to automate the setup of our laptops and desktops. Ideally, we want something fast; something we can use to get up and running quickly should we need to restore our workstations or synchronize our configuration between multiple machines. My specific method for Ansible (which I'll demonstrate in this article) is perfect for this—there's no server to maintain. You just download your configuration and run it. + +### My approach + +Typically, Ansible is run from a central server. It utilizes an inventory file, which is a text file that contains a list of all the hosts and their IP addresses or domain names we want Ansible to manage. This is great for static environments, but it is not ideal for workstations. The reason being we really don't know what the status of our workstations will be at any one moment. Perhaps I powered down my desktop or my laptop may be suspended and stowed in my bag. In either case, the Ansible server would complain, as it can't reach my machines if they are offline. We need something that's more of an on-demand approach, and the way we'll accomplish that is by utilizing `ansible-pull`. The `ansible-pull` command, which is part of Ansible, allows you to download your configuration from a Git repository and apply it immediately. You won't need to maintain a server or an inventory list; you simply run the `ansible-pull` command, feed it a Git repository URL, and it will do the rest for you. + +### Getting started + +First, install Ansible on the computer you want it to manage. One problem is that a lot of distributions ship with an older version. I can tell you from experience you'll definitely want the latest version available. New features are introduced into Ansible quite frequently, and if you're running an older version, example syntax you find online may not be functional because it's using features that aren't implemented in the version you have installed. Even point releases have quite a few new features. One example of this is the `dconf` module, which is new to Ansible as of 2.4. If you try to utilize syntax that makes use of this module, unless you have 2.4 or newer it will fail. 
In Ubuntu and its derivatives, we can easily install the latest version of Ansible with the official personal package archive ([PPA][2]). The following commands will do the trick:
+```
+sudo apt-get install software-properties-common
+
+sudo apt-add-repository ppa:ansible/ansible
+
+sudo apt-get update
+
+sudo apt-get install ansible
+
+```
+
+If you're not using Ubuntu, [consult Ansible's documentation][3] on how to obtain it for your platform.
+
+Next, we'll need a Git repository to hold our configuration. The easiest way to satisfy this requirement is to create an empty repository on GitHub, or you can utilize your own Git server if you have one. To keep things simple, I'll assume you're using GitHub, so adjust the commands if you're using something else. Create a repository in GitHub; you'll end up with a repository URL that will be similar to this:
+```
+git@github.com:<github_user>/ansible.git
+
+```
+
+Clone that repository to your local working directory (ignore any message that complains that the repository is empty):
+```
+git clone git@github.com:<github_user>/ansible.git
+
+```
+
+Now we have an empty repository we can work with. Change your working directory to be inside the repository (`cd ./ansible` for example) and create a file named `local.yml` in your favorite text editor. Place the following configuration in that file:
+```
+- hosts: localhost
+
+  become: true
+
+  tasks:
+
+  - name: Install htop
+
+    apt: name=htop
+
+```
+
+The file you just created is known as a **playbook**, and the instruction to install `htop` (a package I arbitrarily picked to serve as an example) is known as a **play**. The playbook itself is a file in the YAML format, which is a simple-to-read markup language. A full walkthrough of YAML is beyond the scope of this article, but you don't need to have an expert understanding of it to be proficient with Ansible. The configuration is easy to read; by simply looking at this file, you can easily glean that we're installing the `htop` package. Pay special attention to the `apt` module on the last line, which will only work on Debian-based systems. You can change this to `yum` instead of `apt` if you're using a Red Hat platform or change it to `dnf` if you're using Fedora. The `name` line simply gives information regarding our task and will be shown in the output. Therefore, you'll want to make sure the name is descriptive so it's easy to find if you need to troubleshoot multiple plays.
+
+Next, let's commit our new file to our repository:
+```
+git add local.yml
+
+git commit -m "initial commit"
+
+git push origin master
+
+```
+
+Now our new playbook should be present in our repository on GitHub. We can apply the playbook we created with the following command:
+```
+sudo ansible-pull -U https://github.com/<github_user>/ansible.git
+
+```
+
+If executed properly, the `htop` package should be installed on your system. You might've seen some warnings near the beginning that complain about the lack of an inventory file. This is fine, as we're not using an inventory file (nor do we need to for this use). At the end of the output, it will give you an overview of what it did. If `htop` was installed properly, you should see `changed=1` on the last line of the output.
+
+How did this work? The `ansible-pull` command uses the `-U` option, which expects a repository URL. I gave it the `https` version of the repository URL for security purposes because I don't want any hosts to have write access back to the repository (`https` is read-only by default). The `local.yml` playbook name is assumed, so we didn't need to provide a filename for the playbook—it will automatically run a playbook named `local.yml` if it finds it in the repository's root. Next, we used `sudo` in front of the command since we are modifying the system.
+
+Let's go ahead and add additional packages to our playbook. I'll add two additional packages so that it looks like this:
+```
+- hosts: localhost
+
+  become: true
+
+  tasks:
+
+  - name: Install htop
+
+    apt: name=htop
+
+
+
+  - name: Install mc
+
+    apt: name=mc
+
+
+
+  - name: Install tmux
+
+    apt: name=tmux
+
+```
+
+I added additional plays (tasks) for installing two other packages, `mc` and `tmux`. It doesn't matter what packages you choose to have this playbook install; I just picked these arbitrarily. You should install whichever packages you want all your systems to have. The only caveat is that you have to know that the packages exist in the repository for your distribution ahead of time.
+
+Before we commit and apply this updated playbook, we should clean it up. It will work fine as it is, but (to be honest) it looks kind of messy. Let's try installing all three packages in just one play. Replace the contents of your `local.yml` with this:
+```
+- hosts: localhost
+
+  become: true
+
+  tasks:
+
+  - name: Install packages
+
+    apt: name={{item}}
+
+    with_items:
+
+      - htop
+
+      - mc
+
+      - tmux
+
+```
+
+Now that looks cleaner and more efficient. We used `with_items` to consolidate our package list into one play. If we want to add additional packages, we simply add another line with a hyphen and a package name. Consider `with_items` to be similar to a `for` loop. Every package we list will be installed.
+
+Commit our new changes back to the repository:
+```
+git add local.yml
+
+git commit -m "added additional packages, cleaned up formatting"
+
+git push origin master
+
+```
+
+Now we can run our playbook to benefit from the new configuration:
+```
+sudo ansible-pull -U https://github.com/<github_user>/ansible.git
+
+```
+
+Admittedly, this example doesn't do much yet; all it does is install a few packages. You could've installed these packages much faster just using your package manager. However, as this series continues, these examples will become more complex and we'll automate more things. By the end, the Ansible configuration you'll create will automate more and more tasks. For example, the one I use automates the installation of hundreds of packages, sets up `cron` jobs, handles desktop configuration, and more.
+
+From what we've accomplished so far, you can probably already see the big picture. All we had to do was create a repository, put a playbook in that repository, then utilize the `ansible-pull` command to pull down that repository and apply it to our machine. We didn't need to set up a server. In the future, if we want to change our config, we can pull down the repo, update it, then push it back to our repository and apply it. If we're setting up a new machine, we only need to install Ansible and apply the configuration.
+
+In the next article, we'll automate this even further via `cron` and some additional items. In the meantime, I've copied the code for this article into [my GitHub repository][4] so you can check your syntax against mine. I'll update the code as we go along.
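+
+Since the next article will wire this up with `cron`, here is a preview sketch of what that might look like. The repository URL, schedule, and log path below are placeholders you would adjust:
+```
+# In root's crontab (sudo crontab -e): pull and apply the config every 30 minutes.
+# The -o flag makes ansible-pull apply the playbook only if the repository changed.
+*/30 * * * * /usr/bin/ansible-pull -o -U https://github.com/<github_user>/ansible.git >> /var/log/ansible-pull.log 2>&1
+
+```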
+
+--------------------------------------------------------------------------------
+
via: https://opensource.com/article/18/3/manage-workstation-ansible

作者:[Jay LaCroix][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jlacroix
[1]:https://www.ansible.com/
[2]:https://launchpad.net/ubuntu/+ppas
[3]:http://docs.ansible.com/ansible/latest/intro_installation.html
[4]:https://github.com/jlacroix82/ansible_article
diff --git a/sources/tech/20180305 Getting started with Python for data science.md b/sources/tech/20180305 Getting started with Python for data science.md
new file mode 100644
index 0000000000..920befc58b
--- /dev/null
+++ b/sources/tech/20180305 Getting started with Python for data science.md
@@ -0,0 +1,135 @@
+Getting started with Python for data science
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_open_data_520x292.jpg?itok=R8rBrlk7)
+
+Whether you're a budding data science enthusiast with a math or computer science background or an expert in an unrelated field, the possibilities data science offers are within your reach. And you don't need expensive, highly specialized enterprise software—the open source tools discussed in this article are all you need to get started.
+
+[Python][1], its machine-learning and data science libraries ([pandas][2], [Keras][3], [TensorFlow][4], [scikit-learn][5], [SciPy][6], [NumPy][7], etc.), and its extensive list of visualization libraries ([Matplotlib][8], [pyplot][9], [Plotly][10], etc.) are excellent FOSS tools for beginners and experts alike. Easy to learn, popular enough to offer community support, and armed with the latest emerging techniques and algorithms developed for data science, these comprise one of the best toolsets you can acquire when starting out.
+
+Many of these Python libraries are built on top of each other (known as dependencies), and the basis is the [NumPy][7] library. Designed specifically for data science, NumPy is often used to store relevant portions of datasets in its ndarray datatype, which is a convenient datatype for storing records from relational tables as `csv` files or in any other format, and vice-versa. It is particularly convenient when scikit functions are applied to multidimensional arrays. SQL is great for querying databases, but to perform complex and resource-intensive data science operations, storing data in ndarray boosts efficiency and speed (but make sure you have ample RAM when dealing with large datasets). When you get to using pandas for knowledge extraction and analysis, the almost seamless conversion between the DataFrame datatype in pandas and ndarray in NumPy creates a powerful combination for extraction and compute-intensive operations, respectively.
+
+For a quick demonstration, let's fire up the Python shell and load an open dataset on crime statistics from the city of Baltimore into a pandas DataFrame variable, and view a portion of the loaded frame:
+```
+>>> import pandas as pd
+
+>>> crime_stats = pd.read_csv('BPD_Arrests.csv')
+
+>>> crime_stats.head()
+
+```
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/crime_stats_chart.jpg?itok=_rPXJYHz)
+
+We can now perform most of the queries on this pandas DataFrame that we can with SQL in databases.
+For instance, to get all the unique values of the "Description" attribute, the SQL query is:
+```
+$ SELECT DISTINCT "Description" FROM crime_stats;
+
+```
+
+This same query written for a pandas DataFrame looks like this:
+```
+>>> crime_stats['Description'].unique()
+
+['COMMON ASSAULT' 'LARCENY' 'ROBBERY - STREET' 'AGG. ASSAULT'
+
+'LARCENY FROM AUTO' 'HOMICIDE' 'BURGLARY' 'AUTO THEFT'
+
+'ROBBERY - RESIDENCE' 'ROBBERY - COMMERCIAL' 'ROBBERY - CARJACKING'
+
+'ASSAULT BY THREAT' 'SHOOTING' 'RAPE' 'ARSON']
+
+```
+
+which returns a NumPy array (ndarray):
+```
+>>> type(crime_stats['Description'].unique())
+
+<class 'numpy.ndarray'>
+
+```
+
+Next let's feed this data into a neural network to see how accurately it can predict the type of weapon used, given data such as the time the crime was committed, the type of crime, and the neighborhood in which it happened:
+```
+>>> from sklearn.neural_network import MLPClassifier
+
+>>> import numpy as np
+
+>>>
+
+>>> prediction = crime_stats[['Weapon']]
+
+>>> predictors = crime_stats[['CrimeTime', 'CrimeCode', 'Neighborhood']]
+
+>>>
+
+>>> nn_model = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
+
+>>>
+
+>>> predict_weapon = nn_model.fit(predictors, prediction)
+
+```
+
+Now that the learning model is ready, we can perform several tests to determine its quality and reliability. For starters, let's feed it the training set data (the portion of the original dataset used to train the model):
+```
+>>> predict_weapon.predict(training_set_weapons)
+
+array([4, 4, 4, ..., 0, 4, 4])
+
+```
+
+As you can see, it returns a list, with each number predicting the weapon for each of the records in the training set. We see numbers rather than weapon names, as most classification algorithms are optimized with numerical data. For categorical data, there are techniques that can reliably convert attributes into numerical representations. In this case, the technique used is Label Encoding, using the LabelEncoder function in the sklearn preprocessing library: `preprocessing.LabelEncoder()`. It has functions to transform data to and from numerical representations (a self-contained round-trip sketch appears at the end of this section). In this example, we can use the `inverse_transform` function of LabelEncoder() to see what Weapons 0 and 4 are:
+```
+>>> preprocessing.LabelEncoder().inverse_transform(encoded_weapons)
+
+array(['HANDS', 'FIREARM', 'HANDS', ..., 'FIREARM', 'FIREARM', 'FIREARM'])
+
+```
+
+This is fun to see, but to get an idea of how accurate this model is, let's calculate several scores as percentages:
+```
+>>> nn_model.score(X, y)
+
+0.81999999999999995
+
+```
+
+This shows that our neural network model is ~82% accurate. That result seems impressive, but it is important to check its effectiveness when used on a different crime dataset. There are other tests, like correlations, confusion matrices, etc., to do this. Although our model has high accuracy, it is not very useful for general crime datasets, as this particular dataset has a disproportionate number of rows that list 'FIREARM' as the weapon used. Unless it is re-trained, our classifier is most likely to predict 'FIREARM', even if the input dataset has a different distribution.
+
+It is important to clean the data and remove outliers and aberrations before we classify it. The better the preprocessing, the better the accuracy of our insights. Also, overtuning the model to chase very high accuracy on the data at hand (generally over ~90%) is a bad idea, because the model then looks accurate but is not useful, due to [overfitting][11].
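+
+Since the encoding code above assumes preprocessing has already been imported, here is a small self-contained sketch of the LabelEncoder round trip. The weapon labels are made-up examples:
+```
+>>> from sklearn import preprocessing
+>>> le = preprocessing.LabelEncoder()
+>>> encoded = le.fit_transform(['HANDS', 'FIREARM', 'KNIFE', 'FIREARM'])
+>>> encoded   # classes are numbered alphabetically: FIREARM=0, HANDS=1, KNIFE=2
+array([1, 0, 2, 0])
+>>> le.inverse_transform(encoded)
+array(['HANDS', 'FIREARM', 'KNIFE', 'FIREARM'], dtype='<U7')
+
+```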
+Also, tuning the model or classifier purely to push the accuracy number higher (generally over ~90%) is a bad idea, because the result looks accurate but is not useful, due to [overfitting][11].
+
+[Jupyter notebooks][12] are a great interactive alternative to the command line. While the CLI is fine for most things, Jupyter shines when you want to run snippets on the go to generate visualizations. It also formats data better than the terminal.
+
+[This article][13] has a list of some of the best free resources for machine learning, but plenty of additional guidance and tutorials are available. You will also find many open datasets available to use, based on your interests and inclinations. As a starting point, the datasets maintained by [Kaggle][14], and those available at state government websites, are excellent resources.
+
+Payal Singh will be presenting at SCaLE16x this year, March 8-11 in Pasadena, California. To attend and get 50% off your ticket, [register][15] using promo code **OSDC**
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/3/getting-started-data-science
+
+作者:[Payal Singh][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://opensource.com/users/payalsingh
+[1]:https://www.python.org/
+[2]:https://pandas.pydata.org/
+[3]:https://keras.io/
+[4]:https://www.tensorflow.org/
+[5]:http://scikit-learn.org/stable/
+[6]:https://www.scipy.org/
+[7]:http://www.numpy.org/
+[8]:https://matplotlib.org/
+[9]:https://matplotlib.org/api/pyplot_api.html
+[10]:https://plot.ly/
+[11]:https://www.kdnuggets.com/2014/06/cardinal-sin-data-mining-data-science.html
+[12]:http://jupyter.org/
+[13]:https://machinelearningmastery.com/best-machine-learning-resources-for-getting-started/
+[14]:https://www.kaggle.com/
+[15]:https://register.socallinuxexpo.org/reg6/
diff --git a/sources/tech/20180306 Exploring free and open web fonts.md b/sources/tech/20180306 Exploring free and open web fonts.md
new file mode 100644
index 0000000000..533286ca2c
--- /dev/null
+++ b/sources/tech/20180306 Exploring free and open web fonts.md
@@ -0,0 +1,70 @@
+Exploring free and open web fonts
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-docdish-yellow-typewriter-keys.png?itok=0sPgIdMG)
+
+There is no question that the face of the web has been transformed in recent years by open source fonts. Prior to 2010, the only typefaces you were likely to see in a web browser were the generic "web safe" [core fonts][1] from Microsoft. But that year saw the start of several revolutions: the introduction of the Web Open Font Format ([WOFF][2]), which offered an open standard for efficiently delivering font files over HTTP, and the launch of web-font services like [Google Fonts][3] and the [Open Font Library][4]—both of which offered web publishers access to a large collection of fonts, for free, available under open licenses.
+
+It is hard to overstate the positive impact of these events on web typography. But it can be all too easy to equate the successes of open web fonts with open source typography as a whole and conclude that the challenges are behind us, the puzzles solved. That is not the case, so if you care about type, the good news is there are a lot of opportunities to get involved in improvement.
+ +For starters, it's critical to understand that Google Fonts and Open Font Library offer a specialized service—delivering fonts in web pages—and they don't implement solutions for other use cases. That is not a shortcoming on the services' side; it simply means that we have to develop other solutions. + +There are a number of problems to solve. Probably the most obvious example is the awkwardness of installing fonts on a desktop Linux machine for use in other applications. You can download any of the web fonts offered by either service, but all you will get is a generic ZIP file with some TTF or OTF binaries inside and a plaintext license file. What happens next is up to you to guess. + +Most users learn quickly that the "right" step is to manually copy those font binaries into any one of a handful of special directories on their hard drive. But that just makes the files visible to the operating system; it doesn't offer much in the way of a user experience. Again, this is not a flaw with the web-font service; rather it's evidence of the point where the service stops and more work needs to be done on the other side. + +A big improvement from the user's perspective would be for the OS or the desktop environment to be smarter at this "just downloaded" stage. Not only would it install the font files to the right location but, more importantly, it could add important metadata that the user will want to access when selecting a font to use in a project. + +What this additional information consists of and how it is presented to the user is tied to another challenge: Managing a font collection on Linux is noticeably less pleasant than on other operating systems. Periodically, font manager applications appear (see [GTK+ Font Manager][5] for one of the most recent examples), but they rarely catch on. I've been thinking a lot about where I think they come up short; one core factor is they have limited themselves to displaying only the information embedded in the font binary: basic character-set coverage, weight/width/slope settings, embedded license and copyright statements, etc. + +But a lot of decisions go into the process of selecting a font for a job besides what's in this embedded data. Serious font users—like information designers, journal article authors, or book designers—make their font-selection decisions in the context of each document's requirements and needs. That includes license information, naturally, but it includes much more, like information about the designer and the foundry, stylistic trends, or details about how the font works in use. + +For example, if your document includes both English and Arabic text, you probably want a font where the Latin and Arabic glyphs were designed together by someone experienced with the two scripts. Otherwise, you'll waste a ton of time making tiny adjustments to the font sizes and line spacing trying to get the two languages to mix well. You may have learned from experience that certain designers or vendors are better at multi-script design than others. Or it might be relevant to your project that today's fashion magazines almost exclusively use "[Didone][6]"-style typefaces, a name that refers to super-high-contrast styles pioneered by [Firmin Didot][7] and [Giambattista Bodoni][8] around 200 years ago. It just happens to be the trend. 
+ +But none of those terms (Didone, Didot, or Bodoni) are likely to show up in the binary's embedded data, nor is easy to tell whether the Latin and Arabic fit together or anything else about the typeface's back history. That information might appear in supplementary material like a type specimen or font documentation—if any exists. + +A specimen is a designed document (often a PDF) that shows the font in use and includes background information; it frequently serves a dual role as a marketing piece and a sample to look at when choosing a font. The considered design of a specimen showcases how the font functions in practice and in a manner that an automatically generated character table simply cannot. Documentation may include some other vital information, like how to activate the font's OpenType features, what mathematical or archaic forms it provides, or how it varies stylistically across supported languages. Making this sort of material available to the user in the font-management application would go a long way towards helping users find the fonts that fit their projects' needs. + +Of course, if we're going to consider a font manager that can handle documentation and specimens, we also have to take a hard look at what comes with the font packages provided by distributions. Linux users start with a few fonts automatically installed, and repository-provided packages are the only font source most users have besides downloading the generic ZIP archive. Those packages tend to be pretty bare-bones. Commercial fonts generally include specimens, documentation, and other support items, whereas open source fonts usually do not. + +There are some excellent examples of open fonts that do provide quality specimens and documentation (see [SIL Gentium][9] and [Bungee][10] for two distinctly different but valid approaches), but they rarely (if ever) make their way into the downstream packaging chain. We plainly can do better. + +There are some technical obstacles to offering a richer user experience for interacting with the fonts on your system. For one thing, the [AppStream][11] metadata standard defines a few [parameters][12] specific to font files, but so far includes nothing that would cover specimens, designer and foundry information, and other relevant details. For another, the [SPDX][13] (Software Package Data Exchange) format does not cover many of the software licenses (and license variants) used to distribute fonts. + +Finally, as any audiophile will tell you, a music player that does not let you edit and augment the ID3 tags in your MP3 collection is going to get frustrating quickly. You want to fix errors in the tags, you want to add things like notes and album art—essentially, you want to polish your library. You would want to do the same to keep your local font library in a pleasant-to-use state. + +But editing the embedded data in a font file has been taboo because fonts tend to get embedded and attached to other documents. If you monkey with the fields in a font binary, then redistribute it with your presentation slides, anyone who downloads those slides can end up with bad metadata through no fault of their own. So anyone making improvements to the font-management experience will have to figure out how to strategically wrangle repeated changes to the embedded and external font metadata. + +In addition to the technical angle, enriching the font-management experience is also a design challenge. As I said above, good specimens and well-written documentation exist for several open fonts. 
But there are many more packages missing both, and there are a lot of older font packages that are no longer being maintained. That probably means the only way most open font packages are going to get specimens or documentation is for the community to create them.
+
+Perhaps that's a tall order. But the open source design community is bigger than it has ever been, and it is a highly motivated segment of the overall free and open source software movement. So who knows; maybe this time next year, finding, downloading, and using fonts on a desktop Linux system will be an entirely different experience.
+
+One train of thought on the typography challenges of modern Linux users includes packaging, document design, and maybe even a few new software components for desktop environments. There are other trains to consider, too. The commonality is that where the web-font service ends, matters get more difficult.
+
+The best news, from my perspective, is that there are more people interested in this topic than ever before. For that, I think we have the higher profile that open fonts have received from big web-font services like Google Fonts and Open Font Library to thank.
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/3/webfonts
+
+作者:[Nathan Willis][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/n8willis
+[1]:https://en.wikipedia.org/wiki/Core_fonts_for_the_Web
+[2]:https://en.wikipedia.org/wiki/Web_Open_Font_Format
+[3]:https://fonts.google.com/
+[4]:https://fontlibrary.org/
+[5]:https://fontmanager.github.io/
+[6]:https://en.wikipedia.org/wiki/Didone_(typography)
+[7]:https://en.wikipedia.org/wiki/Firmin_Didot
+[8]:https://en.wikipedia.org/wiki/Giambattista_Bodoni
+[9]:https://software.sil.org/gentium/
+[10]:https://djr.com/bungee/
+[11]:https://www.freedesktop.org/wiki/Distributions/AppStream/
+[12]:https://www.freedesktop.org/software/appstream/docs/sect-Metadata-Fonts.html
+[13]:https://spdx.org/
diff --git a/sources/tech/20180306 How To Check All Running Services In Linux.md b/sources/tech/20180306 How To Check All Running Services In Linux.md
new file mode 100644
index 0000000000..376d9fde5c
--- /dev/null
+++ b/sources/tech/20180306 How To Check All Running Services In Linux.md
@@ -0,0 +1,518 @@
+translating by Flowsnow
+
+How To Check All Running Services In Linux
+======
+
+There are many ways and tools to check and list all running services in Linux. Most administrators use `service service-name status` or `/etc/init.d/service-name status` on sysVinit systems and `systemctl status service-name` on systemd systems.
+
+The above commands clearly show whether the given service is running on the server or not. They are very simple, basic commands that every Linux administrator should know.
+
+But what if you are new to your environment and don't know which services are running on the system? How do you check?
+
+Yes, you can check this. Doing so helps us understand which services are running on the system and whether each of them is necessary or should be disabled.
+
+### What Is SysVinit
+
+init (short for initialization) is the first process started during booting of the computer system. Init is a daemon process that continues running until the system is shut down.
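+
+Before picking one of the methods below, it helps to know which init system your machine is actually running. PID 1 is always the init process, so asking about it is a quick, generic check (the output varies by distribution):
+```
+# ps -p 1 -o comm=
+systemd
+
+```
+
+If this prints `systemd`, use the systemd instructions; if it prints `init`, you are most likely on a sysVinit (or compatible) system.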
+
+SysVinit is the old, traditional init system and system manager. Most of the latest distributions have moved to systemd because of several long-standing issues with sysVinit.
+
+### What Is systemd
+
+systemd is a new init system and system manager which has become very popular and is now the widely adopted standard init system in most Linux distributions. systemctl is the systemd utility that helps us manage a systemd system.
+
+### Method-1: How To Check Running Services In sysVinit System
+
+The below command helps us to check and list all running services in a sysVinit system.
+
+If you have a large number of services, I would advise piping the output through a pager such as `less` or `more` for a clearer view.
+```
+# service --status-all
+or
+# service --status-all | more
+or
+# service --status-all | less
+
+abrt-ccpp hook is installed
+abrtd (pid 2131) is running...
+abrt-dump-oops is stopped
+acpid (pid 1958) is running...
+atd (pid 2164) is running...
+auditd (pid 1731) is running...
+Frequency scaling enabled using ondemand governor
+crond (pid 2153) is running...
+hald (pid 1967) is running...
+htcacheclean is stopped
+httpd is stopped
+Table: filter
+Chain INPUT (policy ACCEPT)
+num target prot opt source destination
+1 ACCEPT all ::/0 ::/0 state RELATED,ESTABLISHED
+2 ACCEPT icmpv6 ::/0 ::/0
+3 ACCEPT all ::/0 ::/0
+4 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:80
+5 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:21
+6 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:22
+7 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:25
+8 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:2082
+9 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:2086
+10 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:2083
+11 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:2087
+12 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:10000
+13 REJECT all ::/0 ::/0 reject-with icmp6-adm-prohibited
+
+Chain FORWARD (policy ACCEPT)
+num target prot opt source destination
+1 REJECT all ::/0 ::/0 reject-with icmp6-adm-prohibited
+
+Chain OUTPUT (policy ACCEPT)
+num target prot opt source destination
+
+iptables: Firewall is not running.
+irqbalance (pid 1826) is running...
+Kdump is operational
+lvmetad is stopped
+mdmonitor is stopped
+messagebus (pid 1929) is running...
+ SUCCESS! MySQL running (24376)
+rndc: neither /etc/rndc.conf nor /etc/rndc.key was found
+named is stopped
+netconsole module not loaded
+Usage: startup.sh { start | stop }
+Configured devices:
+lo eth0 eth1
+Currently active devices:
+lo eth0
+ntpd is stopped
+portreserve (pid 1749) is running...
+master (pid 2107) is running...
+Process accounting is disabled.
+quota_nld is stopped
+rdisc is stopped
+rngd is stopped
+rpcbind (pid 1840) is running...
+rsyslogd (pid 1756) is running...
+sandbox is stopped
+saslauthd is stopped
+smartd is stopped
+openssh-daemon (pid 9859) is running...
+svnserve is stopped
+vsftpd (pid 4008) is running...
+xinetd (pid 2031) is running...
+zabbix_agentd (pid 2150 2149 2148 2147 2146 2140) is running...

+```
+
+Run the following command to view only running services in the system.
+```
+# service --status-all | grep running
+
+crond (pid 535) is running...
+httpd (pid 627) is running...
+mysqld (pid 911) is running...
+rndc: neither /etc/rndc.conf nor /etc/rndc.key was found
+rsyslogd (pid 449) is running...
+saslauthd (pid 492) is running...
+sendmail (pid 509) is running...
+sm-client (pid 519) is running...
+openssh-daemon (pid 478) is running...
+xinetd (pid 485) is running...
+
+```
+
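+A note for scripting: parsing this human-readable output is fragile. LSB init scripts are supposed to report their state through the exit code of the status action (0 means running, 3 means stopped), so in a script it is usually safer to test that instead. A small sketch, assuming an LSB-compliant init script:
+```
+# service httpd status >/dev/null 2>&1; echo $?
+0
+
+```
+
+Here 0 indicates the service is running; a well-behaved script returns 3 when it is stopped.
+
+Run the following command to view the status of a particular service.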
+``` +# service --status-all | grep httpd +httpd (pid 627) is running... + +``` + +Alternatively use the following command to view the particular service status. +``` +# service httpd status + +httpd (pid 627) is running... + +``` + +Use the following command to view the list of running services enabled in boot. +``` +# chkconfig --list +crond 0:off 1:off 2:on 3:on 4:on 5:on 6:off +htcacheclean 0:off 1:off 2:off 3:off 4:off 5:off 6:off +httpd 0:off 1:off 2:off 3:on 4:off 5:off 6:off +ip6tables 0:off 1:off 2:on 3:off 4:on 5:on 6:off +iptables 0:off 1:off 2:on 3:on 4:on 5:on 6:off +modules_dep 0:off 1:off 2:on 3:on 4:on 5:on 6:off +mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off +named 0:off 1:off 2:off 3:off 4:off 5:off 6:off +netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off +netfs 0:off 1:off 2:off 3:off 4:on 5:on 6:off +network 0:off 1:off 2:on 3:on 4:on 5:on 6:off +nmb 0:off 1:off 2:off 3:off 4:off 5:off 6:off +nscd 0:off 1:off 2:off 3:off 4:off 5:off 6:off +portreserve 0:off 1:off 2:on 3:off 4:on 5:on 6:off +quota_nld 0:off 1:off 2:off 3:off 4:off 5:off 6:off +rdisc 0:off 1:off 2:off 3:off 4:off 5:off 6:off +restorecond 0:off 1:off 2:off 3:off 4:off 5:off 6:off +rpcbind 0:off 1:off 2:on 3:off 4:on 5:on 6:off +rsyslog 0:off 1:off 2:on 3:on 4:on 5:on 6:off +saslauthd 0:off 1:off 2:off 3:on 4:off 5:off 6:off +sendmail 0:off 1:off 2:on 3:on 4:on 5:on 6:off +smb 0:off 1:off 2:off 3:off 4:off 5:off 6:off +snmpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off +snmptrapd 0:off 1:off 2:off 3:off 4:off 5:off 6:off +sshd 0:off 1:off 2:on 3:on 4:on 5:on 6:off +udev-post 0:off 1:on 2:on 3:off 4:on 5:on 6:off +winbind 0:off 1:off 2:off 3:off 4:off 5:off 6:off +xinetd 0:off 1:off 2:off 3:on 4:on 5:on 6:off + +xinetd based services: + chargen-dgram: off + chargen-stream: off + daytime-dgram: off + daytime-stream: off + discard-dgram: off + discard-stream: off + echo-dgram: off + echo-stream: off + finger: off + ntalk: off + rsync: off + talk: off + tcpmux-server: off + time-dgram: off + time-stream: off + +``` + +### Method-2: How To Check Running Services In systemd System + +The below command helps us to check and list all running services in “systemd” system. 
+``` +# systemctl + + UNIT LOAD ACTIVE SUB DESCRIPTION + sys-devices-virtual-block-loop0.device loaded active plugged /sys/devices/virtual/block/loop0 + sys-devices-virtual-block-loop1.device loaded active plugged /sys/devices/virtual/block/loop1 + sys-devices-virtual-block-loop2.device loaded active plugged /sys/devices/virtual/block/loop2 + sys-devices-virtual-block-loop3.device loaded active plugged /sys/devices/virtual/block/loop3 + sys-devices-virtual-block-loop4.device loaded active plugged /sys/devices/virtual/block/loop4 + sys-devices-virtual-misc-rfkill.device loaded active plugged /sys/devices/virtual/misc/rfkill + sys-devices-virtual-tty-ttyprintk.device loaded active plugged /sys/devices/virtual/tty/ttyprintk + sys-module-fuse.device loaded active plugged /sys/module/fuse + sys-subsystem-net-devices-enp0s3.device loaded active plugged 82540EM Gigabit Ethernet Controller (PRO/1000 MT Desktop Adapter) + -.mount loaded active mounted Root Mount + dev-hugepages.mount loaded active mounted Huge Pages File System + dev-mqueue.mount loaded active mounted POSIX Message Queue File System + run-user-1000-gvfs.mount loaded active mounted /run/user/1000/gvfs + run-user-1000.mount loaded active mounted /run/user/1000 + snap-core-3887.mount loaded active mounted Mount unit for core + snap-core-4017.mount loaded active mounted Mount unit for core + snap-core-4110.mount loaded active mounted Mount unit for core + snap-gping-13.mount loaded active mounted Mount unit for gping + snap-termius\x2dapp-8.mount loaded active mounted Mount unit for termius-app + sys-fs-fuse-connections.mount loaded active mounted FUSE Control File System + sys-kernel-debug.mount loaded active mounted Debug File System + acpid.path loaded active running ACPI Events Check + cups.path loaded active running CUPS Scheduler + systemd-ask-password-plymouth.path loaded active waiting Forward Password Requests to Plymouth Directory Watch + systemd-ask-password-wall.path loaded active waiting Forward Password Requests to Wall Directory Watch + init.scope loaded active running System and Service Manager + session-c2.scope loaded active running Session c2 of user magi + accounts-daemon.service loaded active running Accounts Service + acpid.service loaded active running ACPI event daemon + anacron.service loaded active running Run anacron jobs + apache2.service loaded active running The Apache HTTP Server + apparmor.service loaded active exited AppArmor initialization + apport.service loaded active exited LSB: automatic crash report generation + aptik-battery-monitor.service loaded active running LSB: start/stop the aptik battery monitor daemon + atop.service loaded active running Atop advanced performance monitor + atopacct.service loaded active running Atop process accounting daemon + avahi-daemon.service loaded active running Avahi mDNS/DNS-SD Stack + colord.service loaded active running Manage, Install and Generate Color Profiles + console-setup.service loaded active exited Set console font and keymap + cron.service loaded active running Regular background program processing daemon + cups-browsed.service loaded active running Make remote CUPS printers available locally + cups.service loaded active running CUPS Scheduler + dbus.service loaded active running D-Bus System Message Bus + postfix.service loaded active exited Postfix Mail Transport Agent + +``` + + * **`UNIT`** Unit describe about the corresponding systemd unit name. + * **`LOAD`** This describes whether the corresponding unit currently loaded in memory or not. 
+ * **`ACTIVE`** Indicates whether the unit is active.
+ * **`SUB`** Indicates the low-level state of the unit, e.g. whether it is running.
+ * **`DESCRIPTION`** A short description of the unit.
+
+
+
+The below option helps you to list units of a given type.
+```
+# systemctl list-units --type service
+ UNIT LOAD ACTIVE SUB DESCRIPTION
+ accounts-daemon.service loaded active running Accounts Service
+ acpid.service loaded active running ACPI event daemon
+ anacron.service loaded active running Run anacron jobs
+ apache2.service loaded active running The Apache HTTP Server
+ apparmor.service loaded active exited AppArmor initialization
+ apport.service loaded active exited LSB: automatic crash report generation
+ aptik-battery-monitor.service loaded active running LSB: start/stop the aptik battery monitor daemon
+ atop.service loaded active running Atop advanced performance monitor
+ atopacct.service loaded active running Atop process accounting daemon
+ avahi-daemon.service loaded active running Avahi mDNS/DNS-SD Stack
+ colord.service loaded active running Manage, Install and Generate Color Profiles
+ console-setup.service loaded active exited Set console font and keymap
+ cron.service loaded active running Regular background program processing daemon
+ cups-browsed.service loaded active running Make remote CUPS printers available locally
+ cups.service loaded active running CUPS Scheduler
+ dbus.service loaded active running D-Bus System Message Bus
+ fwupd.service loaded active running Firmware update daemon
+ getty@tty1.service loaded active running Getty on tty1
+ grub-common.service loaded active exited LSB: Record successful boot for GRUB
+ irqbalance.service loaded active running LSB: daemon to balance interrupts for SMP systems
+ keyboard-setup.service loaded active exited Set the console keyboard layout
+ kmod-static-nodes.service loaded active exited Create list of required static device nodes for the current kernel
+
+```
+
+The below option helps you to list units based on their state. It is similar to the above output, but more straightforward.
+```
+# systemctl list-unit-files --type service
+
+UNIT FILE STATE
+accounts-daemon.service enabled
+acpid.service disabled
+alsa-restore.service static
+alsa-state.service static
+alsa-utils.service masked
+anacron-resume.service enabled
+anacron.service enabled
+apache-htcacheclean.service disabled
+apache-htcacheclean@.service disabled
+apache2.service enabled
+apache2@.service disabled
+apparmor.service enabled
+apport-forward@.service static
+apport.service generated
+apt-daily-upgrade.service static
+apt-daily.service static
+aptik-battery-monitor.service generated
+atop.service enabled
+atopacct.service enabled
+autovt@.service enabled
+avahi-daemon.service enabled
+bluetooth.service enabled
+
+```
+
+Run the following command to view the status of a particular service.
+```
+# systemctl | grep apache2
+ apache2.service loaded active running The Apache HTTP Server
+
+```
+
+Alternatively, use the following command to view the status of a particular service.
+``` +# systemctl status apache2 +● apache2.service - The Apache HTTP Server + Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled) + Drop-In: /lib/systemd/system/apache2.service.d + └─apache2-systemd.conf + Active: active (running) since Tue 2018-03-06 12:34:09 IST; 8min ago + Process: 2786 ExecReload=/usr/sbin/apachectl graceful (code=exited, status=0/SUCCESS) + Main PID: 1171 (apache2) + Tasks: 55 (limit: 4915) + CGroup: /system.slice/apache2.service + ├─1171 /usr/sbin/apache2 -k start + ├─2790 /usr/sbin/apache2 -k start + └─2791 /usr/sbin/apache2 -k start + +Mar 06 12:34:08 magi-VirtualBox systemd[1]: Starting The Apache HTTP Server... +Mar 06 12:34:09 magi-VirtualBox apachectl[1089]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.0.2.15. Set the 'ServerName' directive globally to suppre +Mar 06 12:34:09 magi-VirtualBox systemd[1]: Started The Apache HTTP Server. +Mar 06 12:39:10 magi-VirtualBox systemd[1]: Reloading The Apache HTTP Server. +Mar 06 12:39:10 magi-VirtualBox apachectl[2786]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using fe80::7929:4ed1:279f:4d65. Set the 'ServerName' directive gl +Mar 06 12:39:10 magi-VirtualBox systemd[1]: Reloaded The Apache HTTP Server. + +``` + +Run the following command to view only running services in the system. +``` +# systemctl | grep running + acpid.path loaded active running ACPI Events Check + cups.path loaded active running CUPS Scheduler + init.scope loaded active running System and Service Manager + session-c2.scope loaded active running Session c2 of user magi + accounts-daemon.service loaded active running Accounts Service + acpid.service loaded active running ACPI event daemon + apache2.service loaded active running The Apache HTTP Server + aptik-battery-monitor.service loaded active running LSB: start/stop the aptik battery monitor daemon + atop.service loaded active running Atop advanced performance monitor + atopacct.service loaded active running Atop process accounting daemon + avahi-daemon.service loaded active running Avahi mDNS/DNS-SD Stack + colord.service loaded active running Manage, Install and Generate Color Profiles + cron.service loaded active running Regular background program processing daemon + cups-browsed.service loaded active running Make remote CUPS printers available locally + cups.service loaded active running CUPS Scheduler + dbus.service loaded active running D-Bus System Message Bus + fwupd.service loaded active running Firmware update daemon + [email protected] loaded active running Getty on tty1 + irqbalance.service loaded active running LSB: daemon to balance interrupts for SMP systems + lightdm.service loaded active running Light Display Manager + ModemManager.service loaded active running Modem Manager + NetworkManager.service loaded active running Network Manager + polkit.service loaded active running Authorization Manager + +``` + +Use the following command to view the list of running services enabled in boot. 
+``` +# systemctl list-unit-files | grep enabled +acpid.path enabled +cups.path enabled +accounts-daemon.service enabled +anacron-resume.service enabled +anacron.service enabled +apache2.service enabled +apparmor.service enabled +atop.service enabled +atopacct.service enabled +[email protected] enabled +avahi-daemon.service enabled +bluetooth.service enabled +console-setup.service enabled +cron.service enabled +cups-browsed.service enabled +cups.service enabled +display-manager.service enabled +dns-clean.service enabled +friendly-recovery.service enabled +[email protected] enabled +gpu-manager.service enabled +keyboard-setup.service enabled +lightdm.service enabled +ModemManager.service enabled +network-manager.service enabled +networking.service enabled +NetworkManager-dispatcher.service enabled +NetworkManager-wait-online.service enabled +NetworkManager.service enabled + +``` + +systemd-cgtop show top control groups by their resource usage such as tasks, CPU, Memory, Input, and Output. +``` +# systemd-cgtop + +Control Group Tasks %CPU Memory Input/s Output/s +/ - - 1.5G - - +/init.scope 1 - - - - +/system.slice 153 - - - - +/system.slice/ModemManager.service 3 - - - - +/system.slice/NetworkManager.service 4 - - - - +/system.slice/accounts-daemon.service 3 - - - - +/system.slice/acpid.service 1 - - - - +/system.slice/apache2.service 55 - - - - +/system.slice/aptik-battery-monitor.service 1 - - - - +/system.slice/atop.service 1 - - - - +/system.slice/atopacct.service 1 - - - - +/system.slice/avahi-daemon.service 2 - - - - +/system.slice/colord.service 3 - - - - +/system.slice/cron.service 1 - - - - +/system.slice/cups-browsed.service 3 - - - - +/system.slice/cups.service 2 - - - - +/system.slice/dbus.service 6 - - - - +/system.slice/fwupd.service 5 - - - - +/system.slice/irqbalance.service 1 - - - - +/system.slice/lightdm.service 7 - - - - +/system.slice/polkit.service 3 - - - - +/system.slice/repowerd.service 14 - - - - +/system.slice/rsyslog.service 4 - - - - +/system.slice/rtkit-daemon.service 3 - - - - +/system.slice/snapd.service 8 - - - - +/system.slice/system-getty.slice 1 - - - - + +``` + +Also we can check the running services using pstree command (Output from SysVinit system). +``` +# pstree +init-|-crond + |-httpd---2*[httpd] + |-kthreadd/99149---khelper/99149 + |-2*[mingetty] + |-mysqld_safe---mysqld---9*[{mysqld}] + |-rsyslogd---3*[{rsyslogd}] + |-saslauthd---saslauthd + |-2*[sendmail] + |-sshd---sshd---bash---pstree + |-udevd + `-xinetd + +``` + +Also we can check the running services using pstree command (Output from systemd system). +``` +# pstree +systemd─┬─ModemManager─┬─{gdbus} + │ └─{gmain} + ├─NetworkManager─┬─dhclient + │ ├─{gdbus} + │ └─{gmain} + ├─accounts-daemon─┬─{gdbus} + │ └─{gmain} + ├─acpid + ├─agetty + ├─anacron + ├─apache2───2*[apache2───26*[{apache2}]] + ├─aptd───{gmain} + ├─aptik-battery-m + ├─atop + ├─atopacctd + ├─avahi-daemon───avahi-daemon + ├─colord─┬─{gdbus} + │ └─{gmain} + ├─cron + ├─cups-browsed─┬─{gdbus} + │ └─{gmain} + ├─cupsd + ├─dbus-daemon + ├─fwupd─┬─{GUsbEventThread} + │ ├─{fwupd} + │ ├─{gdbus} + │ └─{gmain} + ├─gnome-keyring-d─┬─{gdbus} + │ ├─{gmain} + │ └─{timer} + +``` + +### Method-3: How To Check Running Services In systemd System using chkservice + +chkservice is a new tool for managing systemd units in terminal. It requires super user privileges to manage the units. +``` +# chkservice + +``` + +![][1] + +To view help page, hit `?` button. This will shows you available options to manage the systemd services. 
+![][2] + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/ + +作者:[Magesh Maruthamuthu][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.2daygeek.com/author/magesh/ +[1]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-1.png +[2]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-2.png diff --git a/sources/tech/20180307 3 open source tools for scientific publishing.md b/sources/tech/20180307 3 open source tools for scientific publishing.md new file mode 100644 index 0000000000..0bbc3578e9 --- /dev/null +++ b/sources/tech/20180307 3 open source tools for scientific publishing.md @@ -0,0 +1,78 @@ +3 open source tools for scientific publishing +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_science.png?itok=WDKARWGV) +One industry that lags behind others in the adoption of digital or open source tools is the competitive and lucrative world of scientific publishing. Worth over £19B ($26B) annually, according to figures published by Stephen Buranyi in [The Guardian][1] last year, the system for selecting, publishing, and sharing even the most important scientific research today still bears many of the constraints of print media. New digital-era technologies present a huge opportunity to accelerate discovery, make science collaborative instead of competitive, and redirect investments from infrastructure development into research that benefits society. + +The non-profit [eLife initiative][2] was established by the funders of research, in part to encourage the use of these technologies to this end. In addition to publishing an open-access journal for important advances in life science and biomedical research, eLife has made itself into a platform for experimentation and showcasing innovation in research communication—with most of this experimentation based around the open source ethos. + +Working on open publishing infrastructure projects gives us the opportunity to accelerate the reach and adoption of the types of technology and user experience (UX) best practices that we consider important to the advancement of the academic publishing industry. Speaking very generally, the UX of open source products is often left undeveloped, which can in some cases dissuade people from using it. As part of our investment in OSS development, we place a strong emphasis on UX in order to encourage users to adopt these products. + +All of our code is open source, and we actively encourage community involvement in our projects, which to us means faster iteration, more experimentation, greater transparency, and increased reach for our work. + +The projects that we are involved in, such as the development of Libero (formerly known as [eLife Continuum][3]) and the [Reproducible Document Stack][4], along with our recent collaboration with [Hypothesis][5], show how OSS can be used to bring about positive changes in the assessment, publication, and communication of new discoveries. + +### Libero + +Libero is a suite of services and applications available to publishers that includes a post-production publishing system, a full front-end user interface pattern suite, Libero's Lens Reader, an open API, and search and recommendation engines. 
+ +Last year, we took a user-driven approach to redesigning the front end of Libero, resulting in less distracting site “furniture” and a greater focus on research articles. We tested and iterated all the key functional areas of the site with members of the eLife community to ensure the best possible reading experience for everyone. The site’s new API also provides simpler access to content for machine readability, including text mining, machine learning, and online application development. + +The content on our website and the patterns that drive the new design are all open source to encourage future product development for both eLife and other publishers that wish to use it. + +### The Reproducible Document Stack + +In collaboration with [Substance][6] and [Stencila][7], eLife is also engaged in a project to create a Reproducible Document Stack (RDS)—an open stack of tools for authoring, compiling, and publishing computationally reproducible manuscripts online. + +Today, an increasing number of researchers are able to document their computational experiments through languages such as [R Markdown][8] and [Python][9]. These can serve as important parts of the experimental record, and while they can be shared independently from or alongside the resulting research article, traditional publishing workflows tend to relegate these assets as a secondary class of content. To publish papers, researchers using these languages often have little option but to submit their computational results as “flattened” outputs in the form of figures, losing much of the value and reusability of the code and data references used in the computation. And while electronic notebook solutions such as [Jupyter][10] can enable researchers to publish their code in an easily reusable and executable form, that’s still in addition to, rather than as an integral part of, the published manuscript. + +The [Reproducible Document Stack][11] project aims to address these challenges through development and publication of a working prototype of a reproducible manuscript that treats code and data as integral parts of the document, demonstrating a complete end-to-end technology stack from authoring through to publication. It will ultimately allow authors to submit their manuscripts in a format that includes embedded code blocks and computed outputs (statistical results, tables, or graphs), and have those assets remain both visible and executable throughout the publication process. Publishers will then be able to preserve these assets directly as integral parts of the published online article. + +### Open annotation with Hypothesis + +Most recently, we introduced open annotation in collaboration with [Hypothesis][12] to enable users of our website to make comments, highlight important sections of articles, and engage with the reading public online. + +Through this collaboration, the open source Hypothesis software was customized with new moderation features, single sign-on authentication, and user-interface customization options, giving publishers more control over its implementation on their sites. These enhancements are already driving higher-quality discussions around published scholarly content. + +The tool can be integrated seamlessly into publishers’ websites, with the scholarly publishing platform [PubFactory][13] and content solutions provider [Ingenta][14] already taking advantage of its improved feature set. [HighWire][15] and [Silverchair][16] are also offering their publishers the opportunity to implement the service. 
+ +### Other industries and open source + +Over time, we hope to see more publishers adopt Hypothesis, Libero, and other projects to help them foster the discovery and reuse of important scientific research. But the opportunities for innovation eLife has been able to leverage because of these and other OSS technologies are also prevalent in other industries. + +The world of data science would be nowhere without the high-quality, well-supported open source software and the communities built around it; [TensorFlow][17] is a leading example of this. Thanks to OSS and its communities, all areas of AI and machine learning have seen rapid acceleration and advancement compared to other areas of computing. Similar is the explosion in usage of Linux as a cloud web host, followed by containerization with Docker, and now the growth of Kubernetes, one of the most popular open source projects on GitHub. + +All of these technologies enable organizations to do more with less and focus on innovation instead of reinventing the wheel. And in the end, that’s the real benefit of OSS: It lets us all learn from each other’s failures while building on each other's successes. + +We are always on the lookout for opportunities to engage with the best emerging talent and ideas at the interface of research and technology. Find out more about some of these engagements on [eLife Labs][18], or contact [innovation@elifesciences.org][19] for more information. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/3/scientific-publishing-software + +作者:[Paul Shanno][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/pshannon +[1]:https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science +[2]:https://elifesciences.org/about +[3]:https://elifesciences.org/inside-elife/33e4127f/elife-introduces-continuum-a-new-open-source-tool-for-publishing +[4]:https://elifesciences.org/for-the-press/e6038800/elife-supports-development-of-open-technology-stack-for-publishing-reproducible-manuscripts-online +[5]:https://elifesciences.org/for-the-press/81d42f7d/elife-enhances-open-annotation-with-hypothesis-to-promote-scientific-discussion-online +[6]:https://github.com/substance +[7]:https://github.com/stencila/stencila +[8]:https://rmarkdown.rstudio.com/ +[9]:https://www.python.org/ +[10]:http://jupyter.org/ +[11]:https://elifesciences.org/labs/7dbeb390/reproducible-document-stack-supporting-the-next-generation-research-article +[12]:https://github.com/hypothesis +[13]:http://www.pubfactory.com/ +[14]:http://www.ingenta.com/ +[15]:https://github.com/highwire +[16]:https://www.silverchair.com/community/silverchair-universe/hypothesis/ +[17]:https://www.tensorflow.org/ +[18]:https://elifesciences.org/labs +[19]:mailto:innovation@elifesciences.org diff --git a/sources/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md b/sources/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md new file mode 100644 index 0000000000..8c0db2716a --- /dev/null +++ b/sources/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md @@ -0,0 +1,92 @@ +translating---geekpi + +An Open Source Desktop YouTube Player For Privacy-minded People +====== + 
+![](https://www.ostechnix.com/wp-content/uploads/2018/03/Freetube-720x340.png)
+
+You already know that we need a Google account to subscribe to channels and download videos from YouTube. If you don't want Google to track what you're doing on YouTube, well, there is an open source YouTube player named **“FreeTube”**. It allows you to watch, search and download YouTube videos and subscribe to your favorite channels without an account, which prevents Google from collecting your information. It gives you a completely ad-free experience. Another notable advantage is its built-in basic HTML5 player for watching videos. Since we're not using the built-in YouTube player, Google can't track the "views" or the video analytics either. FreeTube only exposes your IP details, and this, too, can be overcome by using a VPN. It is completely free, open source, and available for GNU/Linux, Mac OS X, and Windows.
+
+### Features
+
+* Watch videos without ads.
+* Prevent Google from tracking what you watch using cookies or JavaScript.
+* Subscribe to channels without an account.
+* Store subscriptions, history, and saved videos locally.
+* Import / Backup subscriptions.
+* Mini Player.
+* Light / Dark Theme.
+* Free, Open Source.
+* Cross-platform.
+
+
+
+### Installing FreeTube
+
+Go to the [**releases page**][1] and grab the version for the OS you use. For the purpose of this guide, I will be using the **.tar.xz** file.
+```
+$ wget https://github.com/FreeTubeApp/FreeTube/releases/download/v0.1.3-beta/FreeTube-linux-x64.tar.xz
+
+```
+
+Extract the downloaded archive:
+```
+$ tar xf FreeTube-linux-x64.tar.xz
+
+```
+
+Go to the FreeTube folder:
+```
+$ cd FreeTube-linux-x64/
+
+```
+
+Launch FreeTube using the command:
+```
+$ ./FreeTube
+
+```
+
+This is how the FreeTube default interface looks.
+
+![][3]
+
+### Usage
+
+FreeTube currently uses the **YouTube API** to search for videos. It then uses the **youtube-dl HTTP API** to grab the raw video files and play them in a basic HTML5 video player. Since subscriptions, history, and saved videos are stored locally on your system, your details will not be sent to Google or anyone else.
+
+Enter the video name in the search box and hit the ENTER key. FreeTube will list the results based on your search query.
+
+![][4]
+
+You can click on any video to play it.
+
+![][5]
+
+If you want to change the theme or default API, or import/export subscriptions, go to the **Settings** section.
+
+![][6]
+
+Please note that FreeTube is still in the **beta** stage, so there will be bugs. If you hit any, please report them on the GitHub page given at the end of this guide.
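+
+One small convenience, since the tarball does not install a menu entry: you can create a desktop launcher yourself so FreeTube shows up in your application menu. Below is a minimal sketch of such an entry; the Exec path is an assumption, so point it at wherever you extracted the archive:
+```
+$ cat ~/.local/share/applications/freetube.desktop
+[Desktop Entry]
+Type=Application
+Name=FreeTube
+Comment=Privacy-friendly YouTube player
+Exec=/home/user/FreeTube-linux-x64/FreeTube
+Terminal=false
+Categories=Network;AudioVideo;
+
+```
+
+Cheers!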
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/freetube-an-open-source-desktop-youtube-player-for-privacy-minded-people/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://github.com/FreeTubeApp/FreeTube/releases
+[3]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-1.png
+[4]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-3.png
+[5]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-5-1.png
+[6]:http://www.ostechnix.com/wp-content/uploads/2018/03/FreeTube-2.png
diff --git a/sources/tech/20180307 Protecting Code Integrity with PGP - Part 4- Moving Your Master Key to Offline Storage.md b/sources/tech/20180307 Protecting Code Integrity with PGP - Part 4- Moving Your Master Key to Offline Storage.md
new file mode 100644
index 0000000000..df00e7e05e
--- /dev/null
+++ b/sources/tech/20180307 Protecting Code Integrity with PGP - Part 4- Moving Your Master Key to Offline Storage.md
@@ -0,0 +1,167 @@
+Protecting Code Integrity with PGP — Part 4: Moving Your Master Key to Offline Storage
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/industry-1920.jpg?itok=gI3QraS8)
+In this tutorial series, we're providing practical guidelines for using PGP. You can catch up on previous articles here:
+
+[Part 1: Basic Concepts and Tools][1]
+
+[Part 2: Generating Your Master Key][2]
+
+[Part 3: Generating PGP Subkeys][3]
+
+Here in part 4, we continue the series with a look at how and why to move your master key from your home directory to offline storage. Let's get started.
+
+### Checklist
+
+ * Prepare encrypted detachable storage (ESSENTIAL)
+
+ * Back up your GnuPG directory (ESSENTIAL)
+
+ * Remove the master key from your home directory (NICE)
+
+ * Remove the revocation certificate from your home directory (NICE)
+
+
+
+
+#### Considerations
+
+Why would you want to remove your master [C] key from your home directory? This is generally done to prevent your master key from being stolen or accidentally leaked. Private keys are tasty targets for malicious actors -- we know this from several successful malware attacks that scanned users' home directories and uploaded any private key content found there.
+
+It would be very damaging for any developer to have their PGP keys stolen -- in the Free Software world, this is often tantamount to identity theft. Removing private keys from your home directory helps protect you from such events.
+
+##### Back up your GnuPG directory
+
+**!!!Do not skip this step!!!**
+
+It is important to have a readily available backup of your PGP keys should you need to recover them (this is different from the disaster-level preparedness we did with paperkey).
+
+##### Prepare detachable encrypted storage
+
+Start by getting a small USB "thumb" drive (preferably two!) that you will use for backup purposes. You will first need to encrypt them.
+
+For the encryption passphrase, you can use the same one as on your master key.
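+
+The article leaves the choice of encryption method to you, and nothing below depends on it. If you have not encrypted removable media before, one common approach on Linux is LUKS via cryptsetup. The sketch below is illustrative only; /dev/sdX is a placeholder for your USB device, and you should double-check it, because luksFormat destroys the drive's contents. Most desktop environments also offer a graphical equivalent, such as GNOME Disks.
+```
+# cryptsetup luksFormat /dev/sdX
+# cryptsetup luksOpen /dev/sdX crypt-backup
+# mkfs.ext4 /dev/mapper/crypt-backup
+# cryptsetup luksClose crypt-backup
+
+```
+
+##### Back up your GnuPG directory
+
+Once the encryption process is over, re-insert the USB drive and make sure it gets properly mounted.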
Find out the full mount point of the device, for example by running the mount command (under Linux, external media usually gets mounted under /media/disk, under Mac it's /Volumes). + +Once you know the full mount path, copy your entire GnuPG directory there: +``` +$ cp -rp ~/.gnupg [/media/disk/name]/gnupg-backup + +``` + +(Note: If you get any Operation not supported on socket errors, those are benign and you can ignore them.) + +You should now test to make sure everything still works: +``` +$ gpg --homedir=[/media/disk/name]/gnupg-backup --list-key [fpr] + +``` + +If you don't get any errors, then you should be good to go. Unmount the USB drive and distinctly label it, so you don't blow it away next time you need to use a random USB drive. Then, put in a safe place -- but not too far away, because you'll need to use it every now and again for things like editing identities, adding or revoking subkeys, or signing other people's keys. + +##### Remove the master key + +The files in our home directory are not as well protected as we like to think. They can be leaked or stolen via many different means: + + * By accident when making quick homedir copies to set up a new workstation + + * By systems administrator negligence or malice + + * Via poorly secured backups + + * Via malware in desktop apps (browsers, pdf viewers, etc) + + * Via coercion when crossing international borders + + + + +Protecting your key with a good passphrase greatly helps reduce the risk of any of the above, but passphrases can be discovered via keyloggers, shoulder-surfing, or any number of other means. For this reason, the recommended setup is to remove your master key from your home directory and store it on offline storage. + +###### Removing your master key + +Please see the previous section and make sure you have backed up your GnuPG directory in its entirety. What we are about to do will render your key useless if you do not have a usable backup! + +First, identify the keygrip of your master key: +``` +$ gpg --with-keygrip --list-key [fpr] + +``` + +The output will be something like this: +``` +pub rsa4096 2017-12-06 [C] [expires: 2019-12-06] + 111122223333444455556666AAAABBBBCCCCDDDD + Keygrip = AAAA999988887777666655554444333322221111 +uid [ultimate] Alice Engineer +uid [ultimate] Alice Engineer +sub rsa2048 2017-12-06 [E] + Keygrip = BBBB999988887777666655554444333322221111 +sub rsa2048 2017-12-06 [S] + Keygrip = CCCC999988887777666655554444333322221111 + +``` + +Find the keygrip entry that is beneath the pub line (right under the master key fingerprint). 
This will correspond directly to a file in your home .gnupg directory:
+```
+$ cd ~/.gnupg/private-keys-v1.d
+$ ls
+AAAA999988887777666655554444333322221111.key
+BBBB999988887777666655554444333322221111.key
+CCCC999988887777666655554444333322221111.key
+
+```
+
+All you have to do is simply remove the .key file that corresponds to the master keygrip:
+```
+$ cd ~/.gnupg/private-keys-v1.d
+$ rm AAAA999988887777666655554444333322221111.key
+
+```
+
+Now, if you issue the --list-secret-keys command, it will show that the master key is missing (the # indicates it is not available):
+```
+$ gpg --list-secret-keys
+sec# rsa4096 2017-12-06 [C] [expires: 2019-12-06]
+ 111122223333444455556666AAAABBBBCCCCDDDD
+uid [ultimate] Alice Engineer
+uid [ultimate] Alice Engineer
+ssb rsa2048 2017-12-06 [E]
+ssb rsa2048 2017-12-06 [S]
+
+```
+
+##### Remove the revocation certificate
+
+Another file you should remove (but keep in backups) is the revocation certificate that was automatically created with your master key. A revocation certificate allows someone to permanently mark your key as revoked, meaning it can no longer be used or trusted for any purpose. You would normally use it to revoke a key that, for some reason, you can no longer control -- for example, if you had lost the key passphrase.
+
+Just as with the master key, if a revocation certificate leaks into malicious hands, it can be used to destroy your developer digital identity, so it's better to remove it from your home directory.
+```
+cd ~/.gnupg/openpgp-revocs.d
+rm [fpr].rev
+
+```
+
+Next time, you'll learn how to secure your subkeys as well. Stay tuned.
+
+Learn more about Linux through the free ["Introduction to Linux"][4] course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-4-moving-your-master-key-offline-storage
+
+作者:[Konstantin Ryabitsev][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/mricon
+[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools
+[2]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key
+[3]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-3-generating-pgp-subkeys
+[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20180307 What Is sosreport- How To Create sosreport.md b/sources/tech/20180307 What Is sosreport- How To Create sosreport.md
new file mode 100644
index 0000000000..eb3f77cdb8
--- /dev/null
+++ b/sources/tech/20180307 What Is sosreport- How To Create sosreport.md
@@ -0,0 +1,195 @@
+What Is sosreport? How To Create sosreport
+======
+### What Is sosreport
+
+The sosreport command is a tool that collects a bunch of configuration details, system information, and diagnostic information from a running system (especially RHEL and OEL systems).
+
+It helps the technical support engineer analyze the system in many aspects.
+
+This report contains a bunch of information about the system, such as boot information, the filesystem, memory, hostname, installed RPMs, the system IP, networking details, the OS version, the installed kernel, loaded kernel modules, the list of open files, the list of PCI devices, mount points and their details, running process information, the process tree output, system routing, all the configuration files located under /etc, and all the log files located under /var.
+
+Generating a report takes a while, and the time depends on your system's installation and configuration.
+
+Once completed, sosreport will generate a compressed archive file under the /tmp directory.
+
+We have to provide the sosreport to the RHEL (Red Hat Enterprise Linux) or OEL (Oracle Enterprise Linux) technical support engineer whenever we raise a case with them, for initial analysis. This helps the support engineer verify whether anything is wrong on the system.
+
+### How To Install sosreport
+
+sosreport installation is not a big deal; just run the following command to install it.
+```
+# yum install sos
+
+```
+
+### How To Generate sosreport
+
+Generating a sosreport is not a big deal either, so just run the sosreport command without any options.
+
+By default it doesn't show much information while generating the sosreport and only displays how many reports were generated. If you want to see detailed information, just add the `-v` option while generating the sosreport.
+
+It will ask you to enter your name and the support case information.
+```
+# sosreport
+
+sosreport (version 3.2)
+
+This command will collect diagnostic and configuration information from this Oracle Linux system and installed applications.
+
+An archive containing the collected information will be generated in /tmp/sos.3pt1yJ and may be provided to a Oracle USA support representative.
+
+Any information provided to Oracle USA will be treated in accordance with the published support policies at:
+
+ http://linux.oracle.com/
+
+The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party.
+
+No changes will be made to system configuration.
+
+Press ENTER to continue, or CTRL-C to quit.
+
+Please enter your first initial and last name [oracle.2daygeek.com]: 2daygeek
+Please enter the case id that you are generating this report for []: 3-16619296812
+
+Setting up archive ...
+Setting up plugins ...
+dbname must be supplied to dump a database.
+Running plugins. Please wait ...
+
+ Running 86/86: yum...
+[plugin:kvm] could not unmount /sys/kernel/debug
+Creating compressed archive...
+
+Your sosreport has been generated and saved in:
+
+ /tmp/sosreport-2daygeek.3-16619296812-20180307124921.tar.xz
+
+The checksum is: 4e80226ae175bm185c0o2d7u2yoac52o
+
+Please send this file to your support representative.
+
+```
+
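+One tip before we dig into the archive: if you generate these reports regularly (for example, from automation), the interactive prompts get in the way. Judging from the options shown in the help output later in this article, a non-interactive run looks something like the sketch below, reusing the example name and case ID from above:
+```
+# sosreport --batch --name 2daygeek --ticket-number 3-16619296812
+
+```
+
+### What Are The Details There In The Archive File
+
+I was just curious about what kind of details are in the archive file. To understand this, I am going to extract the archive file on my system.
+
+Run the following command to extract the archive file:
+```
+# tar -xf /tmp/sosreport-2daygeek.3-16619296812-20180307124921.tar.xz
+
+```
+
+To see what information sosreport captured, go to the extracted directory: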
Here is what the extracted directory looks like:
```
# ls -lh sosreport-2daygeek.3-16619296812-20180307124921

total 60K
dr-xr-xr-x 4 root root 4.0K Sep 30 10:56 boot
lrwxrwxrwx 1 root root 37 Oct 20 07:25 chkconfig -> sos_commands/startup/chkconfig_--list
lrwxrwxrwx 1 root root 25 Oct 20 07:25 date -> sos_commands/general/date
lrwxrwxrwx 1 root root 27 Oct 20 07:25 df -> sos_commands/filesys/df_-al
lrwxrwxrwx 1 root root 31 Oct 20 07:25 dmidecode -> sos_commands/hardware/dmidecode
drwxr-xr-x 43 root root 4.0K Oct 20 07:21 etc
lrwxrwxrwx 1 root root 24 Oct 20 07:25 free -> sos_commands/memory/free
lrwxrwxrwx 1 root root 29 Oct 20 07:25 hostname -> sos_commands/general/hostname
lrwxrwxrwx 1 root root 130 Oct 20 07:25 installed-rpms -> sos_commands/rpm/sh_-c_rpm_--nodigest_-qa_--qf_NAME_-_VERSION_-_RELEASE_._ARCH_INSTALLTIME_date_awk_-F_printf_-59s_s_n_1_2_sort_-f
lrwxrwxrwx 1 root root 34 Oct 20 07:25 ip_addr -> sos_commands/networking/ip_-o_addr
lrwxrwxrwx 1 root root 45 Oct 20 07:25 java -> sos_commands/java/alternatives_--display_java
drwxr-xr-x 4 root root 4.0K Sep 30 10:56 lib
lrwxrwxrwx 1 root root 35 Oct 20 07:25 lsb-release -> sos_commands/lsbrelease/lsb_release
lrwxrwxrwx 1 root root 25 Oct 20 07:25 lsmod -> sos_commands/kernel/lsmod
lrwxrwxrwx 1 root root 36 Oct 20 07:25 lsof -> sos_commands/process/lsof_-b_M_-n_-l
lrwxrwxrwx 1 root root 22 Oct 20 07:25 lspci -> sos_commands/pci/lspci
lrwxrwxrwx 1 root root 29 Oct 20 07:25 mount -> sos_commands/filesys/mount_-l
lrwxrwxrwx 1 root root 38 Oct 20 07:25 netstat -> sos_commands/networking/netstat_-neopa
drwxr-xr-x 3 root root 4.0K Oct 19 16:16 opt
dr-xr-xr-x 10 root root 4.0K Jun 23 2017 proc
lrwxrwxrwx 1 root root 30 Oct 20 07:25 ps -> sos_commands/process/ps_auxwww
lrwxrwxrwx 1 root root 27 Oct 20 07:25 pstree -> sos_commands/process/pstree
dr-xr-x--- 2 root root 4.0K Oct 17 12:09 root
lrwxrwxrwx 1 root root 32 Oct 20 07:25 route -> sos_commands/networking/route_-n
dr-xr-xr-x 2 root root 4.0K Sep 30 10:55 sbin
drwx------ 54 root root 4.0K Oct 20 07:21 sos_commands
drwx------ 2 root root 4.0K Oct 20 07:21 sos_logs
drwx------ 2 root root 4.0K Oct 20 07:21 sos_reports
dr-xr-xr-x 6 root root 4.0K Jun 23 2017 sys
lrwxrwxrwx 1 root root 28 Oct 20 07:25 uname -> sos_commands/kernel/uname_-a
lrwxrwxrwx 1 root root 27 Oct 20 07:25 uptime -> sos_commands/general/uptime
drwxr-xr-x 6 root root 4.0K Sep 25 2014 var
-rw------- 1 root root 1.7K Oct 20 07:21 version.txt
lrwxrwxrwx 1 root root 62 Oct 20 07:25 vgdisplay -> sos_commands/lvm2/vgdisplay_-vv_--config_global_locking_type_0

```

To double-check what exactly sosreport captured, let's look at the uname output file:
```
# more uname_-a
Linux oracle.2daygeek.com 2.6.32-042stab127.2 #1 SMP Thu Jan 4 16:41:44 MSK 2018 x86_64 x86_64 x86_64 GNU/Linux

```

### Additional Options

Visit the help page to view all available options for sosreport:
```
# sosreport --help
Usage: sosreport [options]

Options:
  -h, --help            show this help message and exit
  -l, --list-plugins    list plugins and available plugin options
  -n NOPLUGINS, --skip-plugins=NOPLUGINS
                        disable these plugins
  -e ENABLEPLUGINS, --enable-plugins=ENABLEPLUGINS
                        enable these plugins
  -o ONLYPLUGINS, --only-plugins=ONLYPLUGINS
                        enable these plugins only
  -k PLUGOPTS, --plugin-option=PLUGOPTS
                        plugin options in plugname.option=value format (see
                        -l)
  --log-size=LOG_SIZE   set a limit on the size of collected logs
  -a, --alloptions      enable all options for loaded plugins
  --all-logs            collect all available logs regardless of size
  --batch               batch mode - do not prompt interactively
  --build               preserve the temporary directory and do not package
                        results
  -v, --verbose         increase verbosity
  --verify              perform data verification during collection
  --quiet               only print fatal errors
  --debug               enable interactive debugging using the python debugger
  --ticket-number=CASE_ID
                        specify ticket number
  --case-id=CASE_ID     specify case identifier
  -p PROFILES, --profile=PROFILES
                        enable plugins selected by the given profiles
  --list-profiles
  --name=CUSTOMER_NAME  specify report name
  --config-file=CONFIG_FILE
                        specify alternate configuration file
  --tmp-dir=TMP_DIR     specify alternate temporary directory
  --no-report           Disable HTML/XML reporting
  -z COMPRESSION_TYPE, --compression-type=COMPRESSION_TYPE
                        compression technology to use [auto, gzip, bzip2, xz]
                        (default=auto)

Some examples:

 enable cluster plugin only and collect dlm lockdumps:
   # sosreport -o cluster -k cluster.lockdump

 disable memory and samba plugins, turn off rpm -Va collection:
   # sosreport -n memory,samba -k rpm.rpmva=off

```
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-create-collect-sosreport-in-linux/

作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/magesh/

diff --git a/sources/tech/20180308 Test Your BASH Skills By Playing Command Line Games.md b/sources/tech/20180308 Test Your BASH Skills By Playing Command Line Games.md
new file mode 100644
index 0000000000..e85c2b18e7
--- /dev/null
+++ b/sources/tech/20180308 Test Your BASH Skills By Playing Command Line Games.md
@@ -0,0 +1,124 @@
Test Your BASH Skills By Playing Command Line Games
======

![](https://www.ostechnix.com/wp-content/uploads/2018/03/Test-your-bash-skills-1-720x340.png)

We tend to learn and remember Linux commands more effectively if we use them regularly in a live scenario. You may forget Linux commands over time unless you use them often. Whether you're a newbie or an intermediate user, there are always some exciting methods to test your BASH skills. In this tutorial, I am going to explain how to test your BASH skills by playing command line games. Well, technically these are not actual games like Super TuxKart, NFS, or Counter-Strike; they are just gamified versions of Linux command training lessons. You will be given a task to complete by following certain instructions in the game itself.

Now, we will look at a few games that will help you learn and practice Linux commands in real time. These are not time-passing or mind-boggling games; they will help you get hands-on experience with terminal commands. Read on.
### Test BASH Skills with "Wargames"

This is an online game, so you must have an active Internet connection. These games help you learn and practice Linux commands in the form of fun-filled games. Wargames is a collection of shell games, and each game has many levels. You can access the next level only by solving the previous one. Not to worry! Each game provides clear and concise instructions about how to access the next levels.

To play the Wargames, go to the following link:

![][2]

As you can see, there are many shell games listed on the left side. Each shell game has its own SSH port, so you will have to connect to the game via SSH from your local system. You can find the information about how to connect to each game using SSH in the top left corner of the Wargames website.

For instance, let us play the **Bandit** game. To do so, click on the Bandit link on the Wargames homepage. In the top left corner, you will see the SSH information for the Bandit game.

![][3]

As you can see in the above screenshot, there are many levels. To go to a level, click on the respective link in the left column. There are also instructions for beginners on the right side. Read them if you have any questions about how to play this game.

Now, let us go to level 0 by clicking on it. On the next screen, you will see the SSH information for this level.

![][4]

As you can see in the above screenshot, the host you need to connect to is **bandit.labs.overthewire.org**, on port 2220, via SSH. The username is **bandit0** and the password is **bandit0**.

Let us connect to Bandit game level 0 (for example, with `ssh -p 2220 bandit0@bandit.labs.overthewire.org`).

Enter the password, i.e., **bandit0**.

The sample output will be:

![][5]

Once logged in, type the **ls** command to see what's in there, or go to the **Level 1 page** to find out how to beat Level 1, and so on. A list of suggested commands is provided for every level, so you can pick and use any suitable command to solve each level.

I must admit that the Wargames are addictive and really fun to solve level by level. However, some levels are really challenging, so you may need to google for hints. Give it a try; you will really like it.

### Test BASH Skills with the "Terminus" game

This is yet another browser-based online CLI game which can be used to improve or test your Linux command skills. To play this game, open up your web browser and navigate to the following URL.

Once you enter the game, you will see the instructions that teach you how to play it. Unlike Wargames, you don't need to connect to a game server; Terminus has a built-in CLI where you can find the instructions about how to play it.

You can look at your surroundings with the command **"ls"**, move to a new location with the command **"cd LOCATION"**, go back with the command **"cd .."**, interact with things in the world with the command **"less ITEM"**, and so on. To know your current location, just type **"pwd"**.

![][6]

### Test BASH Skills with the "clmystery" game

Unlike the above games, you can play this one locally. You don't need to be connected to any remote system; it is a completely offline game.

Trust me, folks, this is an interesting game. You are going to play the role of a detective and solve a mystery case by following the given instructions.

First, clone the repository:
```
$ git clone https://github.com/veltman/clmystery.git

```

Or, download it as a zip file from [**here**][7]. Extract it and go to the location where you have the files, as shown below.
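For example, the zip file extracts to a clmystery-master directory (note that a `git clone` creates a directory named clmystery instead):
```
$ cd clmystery-master

```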
Finally, solve the mystery case by reading the "instructions" file.
```
[sk@sk]: clmystery-master>$ ls
cheatsheet.md cheatsheet.pdf encoded hint1 hint2 hint3 hint4 hint5 hint6 hint7 hint8 instructions LICENSE.md mystery README.md solution

```

Here are the instructions for playing this game:

There's been a murder in Terminal City, and TCPD needs your help. You need to help them figure out who committed the crime.

To find out who did it, go to the **'mystery'** subdirectory and start working from there. You might need to look into all the clues at the crime scene (the **'crimescene'** file). The officers on the scene are pretty meticulous, so they've written down EVERYTHING in their officer reports. Fortunately, the sergeant went through and marked the real clues with the word "CLUE" in all caps.

If you get stuck anywhere, open one of the hint files, such as hint1, hint2, etc. You can open the hint files using the cat command, like below:
```
$ cat hint1

$ cat hint2

```

To check your answer or find out the solution, open the file 'solution' in the clmystery directory:
```
$ cat solution

```

To get started on how to use the command line, refer to **cheatsheet.md** or **cheatsheet.pdf** (from the command line, you can type 'nano cheatsheet.md'). Don't use a text editor to view any files except these instructions, the cheatsheet, and the hints.

For more details, refer to the [**clmystery GitHub**][8] page.

And that's all I can remember now. I will keep adding more games if I come across any in the future. Bookmark this link and visit it from time to time. If you know of any other similar games, please let me know in the comment section below. I will test them and update this guide.

More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/test-your-bash-skills-by-playing-command-line-games/

作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2018/03/Wargames-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/03/Bandit-game.png
[4]:http://www.ostechnix.com/wp-content/uploads/2018/03/Bandit-level-0.png
[5]:http://www.ostechnix.com/wp-content/uploads/2018/03/Bandit-level-0-ssh-1.png
[6]:http://www.ostechnix.com/wp-content/uploads/2018/03/Terminus.png
[7]:https://github.com/veltman/clmystery/archive/master.zip
[8]:https://github.com/veltman/clmystery

diff --git a/sources/tech/20180309 A Comparison of Three Linux -App Stores.md b/sources/tech/20180309 A Comparison of Three Linux -App Stores.md
new file mode 100644
index 0000000000..3095f99d3d
--- /dev/null
+++ b/sources/tech/20180309 A Comparison of Three Linux -App Stores.md
@@ -0,0 +1,128 @@
A Comparison of Three Linux 'App Stores'
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kde_discover-main.jpg?itok=0Zc84eSO)

I remember, long, long ago, when installing apps in Linux required downloading and compiling source packages. If you were really lucky, some developer might have packaged the source code into a form that was more easily installable. Without those developers, installing packages could become a dependency nightmare.
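For anyone who never lived through that era, the ritual usually went something like this (a generic sketch only; the package name here is made up, and the exact steps varied from project to project):
```
$ tar xzf some-app-1.0.tar.gz
$ cd some-app-1.0
$ ./configure --prefix=/usr/local
$ make
$ sudo make install

```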
But then, package managers like rpm and dpkg began to rise in popularity, followed quickly by the likes of yum and apt. This was an absolute boon to anyone looking to make Linux their operating system of choice. Although dependencies could still be an issue, they weren't nearly as bad as they once were. In fact, many of these package managers made short work of picking up all the dependencies required for installation.

And the Linux world rejoiced! Hooray!

But with those package managers came a continued reliance on the command line. That, of course, is all fine and good for old-hat Linux users. However, there's a new breed of Linux users who don't necessarily want to work with the command line. For that user base, the Linux "app store" was created.

This all started with the [Synaptic Package Manager][1]. This graphical front end for apt was first released in 2001 and was a breath of fresh air. Synaptic enabled users to easily search for a piece of software and install it with a few quick clicks. Dependencies would be picked up and everything worked. Even when something didn't work, Synaptic included the means to fix broken packages—all from a drop-down menu.

Since then, a number of similar tools have arrived on the market, all of which improve on the usability of Synaptic. Although Synaptic is still around (and works quite well), new users demand more modern tools that are even easier to use. And Linux delivered.

I want to highlight three of the more popular "app stores" to be found on various Linux distributions. In the end, you'll see that installing applications on Linux, regardless of your distribution, doesn't have to be a nightmare.

### GNOME Software

GNOME's take on the graphical package manager, [Software][2], hit the scene just in time for the Ubuntu Software Center to finally fade into the sunset (which was fortuitous, considering Canonical's shift from Unity to GNOME). Any distribution that uses GNOME will include GNOME Software. Unlike the now-defunct Ubuntu Software Center, GNOME Software allows users to both install and update apps from within the same interface (Figure 1).

![GNOME Software][4]

Figure 1: The GNOME Software main window.

[Used with permission][5]

To find a piece of software to install, click the Search button (top left, magnifying glass icon), type the name of the software you want to install, and wait for the results. When you find a title you want to install, click the Install button (Figure 2) and, when prompted, type your user (sudo) password.

![GNOME Software][7]

Figure 2: Installing Slack from GNOME Software.

[Used with permission][5]

GNOME Software also includes easy-to-navigate categories, Editor's Picks, and GNOME add-ons. As a bonus feature, GNOME Software also supports both snaps and flatpaks. Out of the box, GNOME Software on Ubuntu (and derivatives) supports snaps. If you're adventurous, you can add support for flatpak by opening a terminal window and issuing the command `sudo apt install gnome-software-plugin-flatpak`.

GNOME Software makes it so easy to install software on Linux that any user (regardless of experience level) can install and update apps with zero learning curve.

### KDE Discover

[Discover][8] is KDE's answer to GNOME Software. Although the layout (Figure 3) is slightly different, Discover should feel immediately familiar.

![KDE Discover][10]

Figure 3: The KDE Discover main window is equally user friendly.
[Used with permission][5]

One of the primary differences between Discover and Software is that Discover differentiates between Plasma (the KDE desktop) add-ons and application add-ons. Say, for example, you want to find an "extension" for the Kate text editor; click on Application Addons and search "kate" to see all available addons for the application.

The Plasma Addons feature makes it easy for users to search through the available desktop widgets and easily install them.

The one downfall of KDE Discover is that applications are listed in reverse alphabetical order. Click on one of the given categories from the main page, and you'll be given a listing of available apps to scroll through, from Z to A (Figure 4).

![KDE Discover][12]

Figure 4: The KDE Discover app listing.

[Used with permission][5]

You will also notice no apparent app rating system. With GNOME Software, it's not only easy to rate a software title, it's also easy to decide whether you want to pass on an app (based on a given rating). With KDE Discover, there is no rating system to be found.

One bonus that Discover adds is the ability to quickly configure repositories. From the main window, click on Settings, and you can enable/disable any of the included sources (Figure 5). Click the drop-down in the upper right corner, and you can even add new sources.

![KDE Discover][14]

Figure 5: Enable, disable, and add sources, all from within Discover.

[Used with permission][5]

### Pamac

If you're hoping to soon count yourself among the growing list of Arch Linux users, you'll be glad to know that the Linux distribution often considered to be for the more "elite" also includes a graphical package manager. [Pamac][15] does an outstanding job of making installing applications on Arch easy. Although Pamac isn't quite on the design level of either GNOME Software or KDE Discover, it still does a great job of simplifying the installing and updating of applications. From the Pamac main window (Figure 6), you can either click on the search button or click a Category or Group to find the software you're looking to install.

![Pamac][17]

Figure 6: The Pamac main window.

[Used with permission][5]

If you can't find the software you're looking for, you might need to enable one of the many repositories. Click on the Repository button and then search through the categories (Figure 7) to locate the repository to be added.

![Pamac][19]

Figure 7: Adding new repositories in Pamac.

[Used with permission][5]

Updates are smoothly handled with Pamac. Click on the Updates button (in the left navigation) and then, in the resulting window (Figure 8), click Apply. All of your Arch updates will be installed.

![Pamac][21]

Figure 8: Updating Arch via Pamac.

[Used with permission][5]

### More where that came from

I've only listed three graphical package managers. That is not to say these three are the only options to be found. Other distributions have their own takes on the package manager GUI. However, these three do an outstanding job of representing just how far installing software on Linux has come since those early days of only being able to install via source.

Learn more about Linux through the free ["Introduction to Linux"][22] course from The Linux Foundation and edX.
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/3/comparison-three-linux-app-stores + +作者:[JACK WALLEN][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/jlwallen +[1]:https://code.launchpad.net/synaptic +[2]:https://wiki.gnome.org/Apps/Software +[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gnome_software.jpg?itok=MvRQRX3- (GNOME Software) +[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gnome_software_2.jpg?itok=5nzpUQa7 (GNOME Software) +[8]:https://userbase.kde.org/Discover +[10]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kde_discover.jpg?itok=LDTmkkMV (KDE Discover) +[12]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kde_discover_2.jpg?itok=f5P7elG_ (KDE Discover) +[14]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kde_discover_3.jpg?itok=JvS3s6FB (KDE Discover) +[15]:https://github.com/manjaro/pamac +[17]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/pamac.jpg?itok=gZ9X-Z05 (Pamac) +[19]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/pamac_1.jpg?itok=Ygt5_U8A (Pamac) +[21]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/pamac_2.jpg?itok=cIjKM51m (Pamac) +[22]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/talk/20180213 Linux ldd Command Explained with Examples.md b/translated/talk/20180213 Linux ldd Command Explained with Examples.md deleted file mode 100644 index 450a18b879..0000000000 --- a/translated/talk/20180213 Linux ldd Command Explained with Examples.md +++ /dev/null @@ -1,84 +0,0 @@ -简单介绍 ldd Linux 命令 -========================================= - -如果您的工作涉及到 Linux 中的可执行文件和共享库的知识,则需要了解几种命令行工具。其中之一是 ldd ,您可以使用它来访问共享对象依赖关系。在本教程中,我们将使用一些易于理解的示例来讨论此实用程序的基础知识。 - -请注意,这里提到的所有示例都已在 Ubuntu 16.04 LTS 上进行了测试。 - - -### Linux ldd 命令 - -正如开头已经提到的,ldd 命令打印共享对象依赖关系。以下是该命令的语法: - -`ldd [option]... file...` - -下面是该工具的手册页对它作出的解释: - -``` -ldd prints the shared objects (shared libraries) required by each program or shared object - -specified on the command line. - -``` - -以下使用问答的方式让您更好地了解ldd的工作原理。 - -### 问题一. 如何使用 ldd 命令? - -ldd 的基本用法非常简单,只需运行 'ldd' 命令以及可执行文件或共享对象文件名称作为输入。 - -`ldd [object-name]` - -例如: - -`ldd test` - -[![How to use ldd](https://www.howtoforge.com/images/command-tutorial/ldd-basic.png)](https://www.howtoforge.com/images/command-tutorial/big/ldd-basic.png) - -所以你可以看到所有的共享库依赖已经在输出中产生了。 - -### Q2. 如何使 ldd 在输出中生成详细的信息? - -如果您想要 ldd 生成详细信息,包括符号版本控制数据,则可以使用 **-v** 命令行选项。例如,该命令 - -`ldd -v test` - -当使用-v命令行选项时,在输出中产生以下内容: - -[![How to make ldd produce detailed information in output](https://www.howtoforge.com/images/command-tutorial/ldd-v-option.png)](https://www.howtoforge.com/images/command-tutorial/big/ldd-v-option.png) - -### Q3. 如何使 ldd 产生未使用的直接依赖关系? - -对于这个信息,使用 **-u** 命令行选项。这是一个例子: - -`ldd -u test` - -[![How to make ldd produce unused direct dependencies](https://www.howtoforge.com/images/command-tutorial/ldd-u-test.png)](https://www.howtoforge.com/images/command-tutorial/big/ldd-u-test.png) - -### Q4. 如何让 ldd 执行重定位? - -您可以在这里使用几个命令行选项:**-d** 和 **-r**。 前者告诉 ldd 执行数据重定位,后者则使 ldd 为数据对象和函数执行重定位。在这两种情况下,该工具都会报告丢失的 ELF 对象(如果有的话)。 - -`ldd -d` - -`ldd -r` - -### Q5. 如何获得关于ldd的帮助? 
--help 命令行选项使 ldd 为该工具生成有用的用法相关信息。

`ldd --help`

[![How get help on ldd](https://www.howtoforge.com/images/command-tutorial/ldd-help-option.png)](https://www.howtoforge.com/images/command-tutorial/big/ldd-help-option.png)

### 总结

ldd 不像 cd、rm 和 mkdir 这样的工具类别,这是因为它是为特定目的而构建的。该实用程序提供了有限的命令行选项,我们在这里介绍了其中的大部分。要了解更多信息,请前往 ldd 的[手册页](https://linux.die.net/man/1/ldd)。

* * *

via: [https://www.howtoforge.com/linux-ldd-command/](https://www.howtoforge.com/linux-ldd-command/)

作者: [Himanshu Arora](https://www.howtoforge.com/) 选题者: [@lujun9972](https://github.com/lujun9972) 译者: [MonkeyDEcho](https://github.com/MonkeyDEcho) 校对: [校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
\ No newline at end of file

diff --git a/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md b/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md
new file mode 100644
index 0000000000..b17db43664
--- /dev/null
+++ b/translated/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md
@@ -0,0 +1,141 @@
translating by shipsw

如何使用 yum-cron 自动更新 RHEL/CentOS Linux
======
yum 命令是 RHEL / CentOS Linux 系统中用来安装和更新软件包的一个工具。我知道如何使用 [yum 命令行][1] 更新系统,但是我希望软件包能够通过 cron 自动更新。该如何配置才能使得 yum 使用 [cron 自动更新][2]系统补丁或更新呢?

首先需要安装 yum-cron 软件包。该软件包提供了以 cron 任务运行 yum 更新所需的文件。安装了这个软件包之后,yum 就会通过 cron 任务每晚自动更新。

### CentOS/RHEL 6.x/7.x 上安装 yum cron

输入以下 [yum 命令][3]:
`$ sudo yum install yum-cron`
![](https://www.cyberciti.biz/media/new/faq/2009/05/How-to-install-yum-cron-on-CentOS-RHEL-server.jpg)

在 **CentOS/RHEL 7.x** 上使用 systemctl 启动服务:
```
$ sudo systemctl enable yum-cron.service
$ sudo systemctl start yum-cron.service
$ sudo systemctl status yum-cron.service
```
在 **CentOS/RHEL 6.x** 系统中,运行:
```
$ sudo chkconfig yum-cron on
$ sudo service yum-cron start
```
![](https://www.cyberciti.biz/media/new/faq/2009/05/How-to-turn-on-yum-cron-service-on-CentOS-or-RHEL-server.jpg)

yum-cron 是 yum 的一个调用接口,它使得通过 cron 调用 yum 变得非常方便。该软件提供了元数据更新、更新检查、下载和安装等功能。yum-cron 的各项功能可以通过配置文件进行配置,而不需要输入一堆复杂的命令行参数。

### 配置 yum-cron 自动更新 RHEL/CentOS Linux

使用 vi 等编辑器编辑文件 /etc/yum/yum-cron.conf 和 /etc/yum/yum-cron-hourly.conf:
`$ sudo vi /etc/yum/yum-cron.conf`
确保在更新可用时自动应用更新:
`apply_updates = yes`
可以设置通知 email 的发件地址。注意:localhost 将会被系统名称代替。
`email_from = root@localhost`
设置接收 email 通知的地址列表:
`email_to = your-it-support@some-domain-name`
设置发送 email 信息的主机名:
`email_host = localhost`
在 [CentOS/RHEL 7.x][4] 上如果不想更新内核的话,添加以下内容:
`exclude=kernel*`
在 RHEL/CentOS 6.x 下,[添加以下内容来禁用内核更新][5]:
`YUM_PARAMETER=kernel*`
[保存并关闭文件][6]。如果想每小时更新系统,就修改文件 /etc/yum/yum-cron-hourly.conf;否则使用文件 /etc/yum/yum-cron.conf,它将每天运行一次。可以用 [cat 命令][7] 查看每日任务的内容:
`$ cat /etc/cron.daily/0yum-daily.cron`
示例输出:
```
#!/bin/bash

# Only run if this flag is set. The flag is created by the yum-cron init
# script when the service is started -- this allows one to use chkconfig and
# the standard "service stop|start" commands to enable or disable yum-cron.
if [[ ! -f /var/lock/subsys/yum-cron ]]; then
  exit 0
fi

# Action!
exec /usr/sbin/yum-cron /etc/yum/yum-cron-hourly.conf
[root@centos7-box yum]# cat /etc/cron.daily/0yum-daily.cron
#!/bin/bash

# Only run if this flag is set. The flag is created by the yum-cron init
# script when the service is started -- this allows one to use chkconfig and
# the standard "service stop|start" commands to enable or disable yum-cron.
if [[ ! -f /var/lock/subsys/yum-cron ]]; then
  exit 0
fi

# Action!
exec /usr/sbin/yum-cron
```

完成配置。现在你的系统将每天自动更新一次。更多细节请参照 yum-cron 的说明手册。
`$ man yum-cron`

### 方法二 – 使用 shell 脚本

**警告**:以下命令已经过时,不要在 RHEL/CentOS 6.x/7.x 系统中使用。写在这里仅仅是出于历史原因,这些命令只适合在 CentOS/RHEL 4.x/5.x 上运行。

让我们看看如何在 CentOS/RHEL 上配置 yum,实现安全更新包的检索和安装。你可以使用 CentOS/RHEL 提供的 yum-updatesd 服务,然而,这个服务的开销有点大。你也可以使用以下的 shell 脚本来配置每天或每周的系统更新。

  * **/etc/cron.daily/yumupdate.sh** 每天更新
  * **/etc/cron.weekly/yumupdate.sh** 每周更新

#### 系统更新的示例脚本

以下脚本使用 [cron][8] 定时安装更新:
```
#!/bin/bash
YUM=/usr/bin/yum
$YUM -y -R 120 -d 0 -e 0 update yum
$YUM -y -R 10 -e 0 -d 0 update
```

(Code listing -01: /etc/cron.daily/yumupdate.sh)

其中:

  1. 第一条命令先更新 yum 自身。
  2. **-R 120**:设置 yum 执行命令前的最长等待时间(120 分钟)。
  3. **-e 0**:设置错误级别为 0(范围 0-10),0 意味着只显示关键性错误。
  4. **-d 0**:设置 debug 级别为 0,用于增加或减少打印日志的量(范围 0-10)。
  5. **-y**:默认同意;对任何提示问题都自动回答 yes。

设置脚本的执行权限:
`# chmod +x /etc/cron.daily/yumupdate.sh`

### 关于作者

作者是 nixCraft 的创始人,一位经验丰富的系统管理员和 Linux/Unix 脚本培训师。他曾与全球客户合作,领域涉及 IT、教育、国防和空间研究以及非营利部门等多个行业。请在 [Twitter][9]、[Facebook][10]、[Google+][11] 上关注他。**获取更多有关系统管理、Linux/Unix 和开源话题的内容,请关注[我的 RSS/XML 地址][12]**。

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/fedora-automatic-update-retrieval-installation-with-cron/

作者:[Vivek Gite][a]
译者:[shipsw](https://github.com/shipsw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
[2]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses
[3]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
[4]:https://www.cyberciti.biz/faq/yum-update-except-kernel-package-command/
[5]:https://www.cyberciti.biz/faq/redhat-centos-linux-yum-update-exclude-packages/
[6]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/
[7]:https://www.cyberciti.biz/faq/linux-unix-appleosx-bsd-cat-command-examples/ (See Linux/Unix cat command examples for more info)
[8]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses
[9]:https://twitter.com/nixcraft
[10]:https://facebook.com/nixcraft
[11]:https://plus.google.com/+CybercitiBiz
[12]:https://www.cyberciti.biz/atom/atom.xml

diff --git a/translated/tech/20150708 Choosing a Linux Tracer (2015).md b/translated/tech/20150708 Choosing a Linux Tracer (2015).md
deleted file mode 100644
index 0756f8f331..0000000000
--- a/translated/tech/20150708 Choosing a Linux Tracer (2015).md
+++ /dev/null
@@ -1,192 +0,0 @@
选择一个 Linux 跟踪器(2015)
======
[![][1]][2]
_Linux 跟踪很神奇!_

跟踪器是高级的性能分析和调试工具,如果你使用过 strace(1) 或者 tcpdump(8),你不应该被它吓到 ... 你使用的就是跟踪器。系统跟踪器能让你看到很多的东西,而不仅是系统调用或者包,因为常见的跟踪器都可以跟踪内核或者应用程序的任何东西。

有大量的 Linux 跟踪器可供你选择。由于它们中的每个都有一个官方的(或者非官方的)的吉祥物,我们有足够多的选择给孩子们展示。

你喜欢使用哪一个呢?

我从两类读者的角度来回答这个问题:大多数人和性能/内核工程师。当然,随着时间的推移,这也可能会发生变化,因此,我需要及时去更新本文内容,或许是每年一次,或者更频繁。

## 对于大多数人

大多数人(开发者、系统管理员、运维人员、网络可靠性工程师(SRE)…)是不需要去学习系统跟踪器的详细内容的。以下是你需要去了解和做的事情:

### 1.
使用 perf_events 了解 CPU 概要信息 - -使用 perf_events 去了解 CPU 的基本情况。它的概要信息可以用一个 [火焰图][3] 来形象地表示。比如: -``` -git clone --depth 1 https://github.com/brendangregg/FlameGraph -perf record -F 99 -a -g -- sleep 30 -perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > perf.svg - -``` - -Linux 的 perf_events(又称为 "perf",后面用它来表示命令)是官方为 Linux 用户准备的跟踪器/分析器。它在内核源码中,并且维护的非常好(而且现在它的功能还是快速加强)。它一般是通过 linux-tools-common 这个包来添加的。 - -perf 可以做的事情很多,但是,如果我建议你只学习其中的一个功能,那就是查看 CPU 概要信息。虽然从技术角度来说,这并不是事件“跟踪”,主要是它很简单。较难的部分是去获得工作的完整栈和符号,这部分的功能在我的 [Linux Profiling at Netflix][4] 中讨论过。 - -### 2. 知道它能干什么 - -正如一位朋友所说的:“你不需要知道 X 光机是如何工作的,但你需要明白的是,如果你吞下了一个硬币,X 光机是你的一个选择!”你需要知道使用跟踪器能够做什么,因此,如果你在业务上需要它,你可以以后再去学习它,或者请会使用它的人来做。 - -简单地说:几乎任何事情都可以通过跟踪来了解它。内部文件系统、TCP/IP 处理过程、设备驱动、应用程序内部情况。阅读我在 lwn.net 上的 [ftrace][5] 的文章,也可以去浏览 [perf_events 页面][6],那里有一些跟踪能力的示例。 - -### 3. 请求一个前端 - -如果你把它作为一个性能分析工具(有许多公司销售这类产品),并要求支持 Linux 跟踪。希望通过一个“点击”界面去探查内核的内部,包含一个在栈不同位置的延迟的热力图。就像我在 [Monitorama 演讲][7] 中描述的那样。 - -我创建并开源了我自己的一些前端,虽然它是基于 CLI 的(不是图形界面的)。这样将使其它人使用跟踪器更快更容易。比如,我的 [perf-tools][8],跟踪新进程是这样的: -``` -# ./execsnoop -Tracing exec()s. Ctrl-C to end. - PID PPID ARGS - 22898 22004 man ls - 22905 22898 preconv -e UTF-8 - 22908 22898 pager -s - 22907 22898 nroff -mandoc -rLL=164n -rLT=164n -Tutf8 -[...] - -``` - -在 Netflix 上,我创建了一个 [Vector][9],它是一个实例分析工具,实际上它是一个 Linux 跟踪器的前端。 - -## 对于性能或者内核工程师 - -一般来说,我们的工作都非常难,因为大多数人或许要求我们去搞清楚如何去跟踪某个事件,以及因此需要选择使用其中一个跟踪器。为完全理解一个跟踪器,你通常需要花至少一百多个小时去使用它。理解所有的 Linux 跟踪器并能在它们之间做出正确的选择是件很难的事情。(我或许是唯一接近完成这件事的人) - -在这里我建议选择如下之一: - -A) 选择一个全能的跟踪器,并以它为标准。这需要在一个测试环境中,花大量的时间来搞清楚它的细微差别和安全性。我现在的建议是 SystemTap 的最新版本(即从这个 [源][10] 构建的)。我知道有的公司选择的是 LTTng ,尽管它并不是很强大(但是它很安全),但他们也用的很好。如果在 sysdig 中添加了跟踪点或者是 kprobes,它也是另外的一个候选者。 - -B) 按我的 [Velocity 教程中][11] 的流程图。这意味着可能是使用 ftrace 或者 perf_events,因为 eBPF 是集成在内核中的,然后用其它的跟踪器,如 SystemTap/LTTng 作为对 eBPF 的补充。我目前在 Netflix 的工作中就是这么做的。 - -以下是我对各个跟踪器的评价: - -### 1. ftrace - -我爱 [Ftrace][12],它是内核黑客最好的朋友。它被构建进内核中,它能够消费跟踪点、kprobes、以及 uprobes,并且提供一些功能:使用可选的过滤器和参数进行事件跟踪;事件计数和计时,内核概览;函数流步进。关于它的示例可以查看内核源树中的 [ftrace.txt][13]。它通过 /sys 来管理,是面向单 root 用户的(虽然你可以使用缓冲实例来破解它以支持多用户),它的界面有时很繁琐,但是它比较容易破解,并且有前端:Steven Rostedt,ftrace 的主要创建者,他设计了 trace-cmd,并且我已经创建了 perf-tools 集合。我最讨厌的就是它不可编程,因此,你也不能,比如,去保存和获取时间戳,计算延迟,以及保存它的历史。你不需要花成本转储事件到用户级以便于进行后期处理。它通过 eBPF 可以实现可编程。 - -### 2. perf_events - -[perf_events][14] 是 Linux 用户的主要跟踪工具,它来源于 Linux 内核,一般是通过 linux-tools-common 包来添加。又称为 "perf",后面的 perf 指的是它的前端,它非常高效(动态缓存),一般用于跟踪并转储到一个文件中(perf.data),然后可以在以后的某个时间进行后期处理。它可以做大部分 ftrace 能做的事情。它实现不了函数流步进,并且不太容易破解(因为它的安全/错误检查做的非常好)。但它可以做概览(采样)、CPU 性能计数、用户级的栈转换、以及消费对行使用本地变量进行跟踪的调试信息。它也支持多个并发用户。与 ftrace 一样,它也是内核不可编程的,或者 eBPF 支持(已经计划了补丁)。如果只学习一个跟踪器,我建议大家去学习 perf,它可以解决大量的问题,并且它也很安全。 - -### 3. eBPF - -扩展的伯克利包过滤器(eBPF)是一个内核虚拟机,可以在事件上运行程序,它非常高效(JIT)。它可能最终为 ftrace 和 perf_events 提供内核可编程,并可以去增强其它跟踪器。它现在是由 Alexei Starovoitov 开发,还没有实现全整合,但是对于一些令人印象深刻的工具,有些内核版本(比如,4.1)已经支持了:比如,块设备 I/O 延迟热力图。更多参考资料,请查阅 Alexei 的 [BPF 演示][15],和它的 [eBPF 示例][16]。 - -### 4. SystemTap - -[SystemTap][17] 是一个非常强大的跟踪器。它可以做任何事情:概览、跟踪点、kprobes、uprobes(它就来自 SystemTap)、USDT、内核编程等等。它将程序编译成内核模块并加载它们 —— 这是一种很难保证安全的方法。它开发的很怪诞,并且在过去的一段时间内出现了很多问题(恐慌或冻结)。许多并不是 SystemTap 的过错 —— 它通常被内核首先用于某些功能跟踪,并首先遇到运行 bug。最新版本的 SystemTap 是非常好的(你需要从它的源代码编译),但是,许多人仍然没有从早期版本的问题阴影中走出来。如果你想去使用它,花一些时间去测试环境,然后,在 irc.freenode.net 的 #systemtap 频道与开发者进行讨论。(Netflix 有一个容错架构,我们使用了 SystemTap,但是我们或许比起你来说,很少担心它的安全性)我最讨厌的事情是,它假设你有办法得到内核调试信息,而我并没有这些信息。没有它我确实可以做一些事情,但是缺少相关的文档和示例(我现在自己开始帮着做这些了)。 - -### 5. 
LTTng - -[LTTng][18] 对事件收集进行了优化,性能要好于其它的跟踪器,也支持许多的事件类型,包括 USDT。它开发的很怪诞。它的核心部分非常简单:通过一个很小的且很固定的指令集写入事件到跟踪缓冲区。这样让它既安全又快速。缺点是做内核编程不太容易。我觉得那不是个大问题,由于它优化的很好,尽管在需要后期处理的情况下,仍然可以充分的扩展。它也探索了一种不同的分析技术。很多的“黑匣子”记录了全部有趣的事件,可以在以后的 GUI 下学习它。我担心意外的记录丢失事件,我真的需要花一些时间去看看它在实践中是如何工作的。这个跟踪器上我花的时间最少(原因是没有实践过它)。 - -### 6. ktap - -[ktap][19] 是一个很有前途的跟踪器,它在内核中使用了一个 lua 虚拟机,它不需要调试信息和嵌入式设备就可以工作的很好。这使得它进入了人们的视野,在某个时候似乎要成为 Linux 上最好的跟踪器。然而,eBPF 开始集成到了内核,而 ktap 的集成工作被推迟了,直到它能够使用 eBPF 而不是它自己的虚拟机。由于 eBPF 在几个月后仍然在集成过程中,使得 ktap 的开发者等待了很长的时间。我希望在今年的晚些时间它能够重启开发。 - -### 7. dtrace4linux - -[dtrace4linux][20] 主要由一个人 (Paul Fox) 利用业务时间将 Sun DTrace 移植到 Linux 中的。它令人印象深刻,而一些贡献者的工作,还不是很完美,它最多应该算是实验性的工具(不安全)。我认为对于许可证(license)的担心,使人们对它保持谨慎:它可能永远也进入不了 Linux 内核,因为 Sun 是基于 CDDL 许可证发布的 DTrace;Paul 的方法是将它作为一个插件。我非常希望看到 Linux 上的 DTrace,并且希望这个项目能够完成,我想我加入 Netflix 时将花一些时间来帮它完成。但是,我一直在使用内置的跟踪器 ftrace 和 perf_events。 - -### 8. OL DTrace - -[Oracle Linux DTrace][21] 是将 DTrace 移植到 Linux 的一系列努力之一,尤其是 Oracle Linux。过去这些年的许多发行版都一直稳定的进步,开发者甚至谈到了改善 DTrace 测试套件,这显示了这个项目很有前途。许多有用的功能已经完成:系统调用、概览、sdt、proc、sched、以及 USDT。我一直在等待着 fbt(函数边界跟踪,对内核的动态跟踪),它将成为 Linux 内核上非常强大的功能。它最终能否成功取决于能否吸引足够多的人去使用 Oracle Linux(并为支持付费)。另一个羁绊是它并非完全开源的:内核组件是开源的,但用户级代码我没有看到。 - -### 9. sysdig - -[sysdig][22] 是一个很新的跟踪器,它可以使用类似 tcpdump 的语法来处理系统调用事件,并用 lua 做后期处理。它也是令人印象深刻的,并且很高兴能看到在系统跟踪空间的创新。它的局限性是,它的系统调用只能是在当时,并且,它不能转储事件到用户级进行后期处理。虽然我希望能看到它去支持跟踪点、kprobes、以及 uprobes,但是你还是可以使用系统调用来做一些事情。我也希望在内核概览方面看到它支持 eBPF。sysdig 的开发者现在增加了对容器的支持。可以关注它的进一步发展。 - -## 深入阅读 - -我自己的工作中使用到的跟踪器包括: - -**ftrace** : 我的 [perf-tools][8] 集合(查看示例目录);我的 lwn.net 的 [ftrace 跟踪器的文章][5]; 一个 [LISA14][8] 演讲;和文章: [function counting][23], [iosnoop][24], [opensnoop][25], [execsnoop][26], [TCP retransmits][27], [uprobes][28], 和 [USDT][29]。 - -**perf_events** : 我的 [perf_events 示例][6] 页面:对于 SCALE 的一个 [Linux Profiling at Netflix][4] 演讲;和文章:[CPU 采样][30],[静态跟踪点][31],[势力图][32],[计数][33],[内核行跟踪][34],[off-CPU 时间火焰图][35]。 - -**eBPF** : 文章 [eBPF:一个小的进步][36],和一些 [BPF-tools][37] (我需要发布更多)。 - -**SystemTap** : 很久以前,我写了一篇 [使用 SystemTap][38] 的文章,它有点时间了。最近我发布了一些 [systemtap-lwtools][39],展示了在没有内核调试信息的情况下,SystemTap 是如何使用的。 - -**LTTng** : 我使用它的时间很短,也没有发布什么文章。 - -**ktap** : 我的 [ktap 示例][40] 页面包括一行程序和脚本,虽然它是早期的版本。 - -**dtrace4linux** : 在我的 [系统性能][41] 书中包含了一些示例,并且在过去的时间中我为了某些事情开发了一些小的修补,比如, [timestamps][42]。 - -**OL DTrace** : 因为它是对 DTrace 的简单移植,我早期 DTrace 的大部分工作都 应该是与它相关的(链接太多了,可以去 [我的主页][43] 上搜索)。一旦它更加完美,我可以开发很多专用工具。 - -**sysdig** : 我贡献了 [fileslower][44] 和 [subsecond offset spectrogram][45] chisels。 - -**others** : 关于 [strace][46],我写了一些告诫文章。 - -不好意思,没有更多的跟踪器了! 
… 如果你想知道为什么 Linux 中的跟踪器不止一个,或者关于 DTrace 的内容,在我的 [从 DTrace 到 Linux][47] 的演讲中有答案,从 [第 28 张幻灯片][48] 开始。 - -感谢 [Deirdre Straughan][49] 的编辑,以及创建了跟踪的小马(General Zoi 是小马的创建者)。 - --------------------------------------------------------------------------------- - -via: http://www.brendangregg.com/blog/2015-07-08/choosing-a-linux-tracer.html - -作者:[Brendan Gregg.][a] -译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.brendangregg.com -[1]:http://www.brendangregg.com/blog/images/2015/tracing_ponies.png -[2]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools/105 -[3]:http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html -[4]:http://www.brendangregg.com/blog/2015-02-27/linux-profiling-at-netflix.html -[5]:http://lwn.net/Articles/608497/ -[6]:http://www.brendangregg.com/perf.html -[7]:http://www.brendangregg.com/blog/2015-06-23/netflix-instance-analysis-requirements.html -[8]:http://www.brendangregg.com/blog/2015-03-17/linux-performance-analysis-perf-tools.html -[9]:http://techblog.netflix.com/2015/04/introducing-vector-netflixs-on-host.html -[10]:https://sourceware.org/git/?p=systemtap.git;a=blob_plain;f=README;hb=HEAD -[11]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools -[12]:http://lwn.net/Articles/370423/ -[13]:https://www.kernel.org/doc/Documentation/trace/ftrace.txt -[14]:https://perf.wiki.kernel.org/index.php/Main_Page -[15]:http://www.phoronix.com/scan.php?page=news_item&px=BPF-Understanding-Kernel-VM -[16]:https://github.com/torvalds/linux/tree/master/samples/bpf -[17]:https://sourceware.org/systemtap/wiki -[18]:http://lttng.org/ -[19]:http://ktap.org/ -[20]:https://github.com/dtrace4linux/linux -[21]:http://docs.oracle.com/cd/E37670_01/E38608/html/index.html -[22]:http://www.sysdig.org/ -[23]:http://www.brendangregg.com/blog/2014-07-13/linux-ftrace-function-counting.html -[24]:http://www.brendangregg.com/blog/2014-07-16/iosnoop-for-linux.html -[25]:http://www.brendangregg.com/blog/2014-07-25/opensnoop-for-linux.html -[26]:http://www.brendangregg.com/blog/2014-07-28/execsnoop-for-linux.html -[27]:http://www.brendangregg.com/blog/2014-09-06/linux-ftrace-tcp-retransmit-tracing.html -[28]:http://www.brendangregg.com/blog/2015-06-28/linux-ftrace-uprobe.html -[29]:http://www.brendangregg.com/blog/2015-07-03/hacking-linux-usdt-ftrace.html -[30]:http://www.brendangregg.com/blog/2014-06-22/perf-cpu-sample.html -[31]:http://www.brendangregg.com/blog/2014-06-29/perf-static-tracepoints.html -[32]:http://www.brendangregg.com/blog/2014-07-01/perf-heat-maps.html -[33]:http://www.brendangregg.com/blog/2014-07-03/perf-counting.html -[34]:http://www.brendangregg.com/blog/2014-09-11/perf-kernel-line-tracing.html -[35]:http://www.brendangregg.com/blog/2015-02-26/linux-perf-off-cpu-flame-graph.html -[36]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html -[37]:https://github.com/brendangregg/BPF-tools -[38]:http://dtrace.org/blogs/brendan/2011/10/15/using-systemtap/ -[39]:https://github.com/brendangregg/systemtap-lwtools -[40]:http://www.brendangregg.com/ktap.html -[41]:http://www.brendangregg.com/sysperfbook.html -[42]:https://github.com/dtrace4linux/linux/issues/55 -[43]:http://www.brendangregg.com -[44]:https://github.com/brendangregg/sysdig/commit/d0eeac1a32d6749dab24d1dc3fffb2ef0f9d7151 -[45]:https://github.com/brendangregg/sysdig/commit/2f21604dce0b561407accb9dba869aa19c365952 
[46]:http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-syscall.html
[47]:http://www.brendangregg.com/blog/2015-02-28/from-dtrace-to-linux.html
[48]:http://www.slideshare.net/brendangregg/from-dtrace-to-linux/28
[49]:http://www.beginningwithi.com/

diff --git a/translated/tech/20160810 How does gdb work.md b/translated/tech/20160810 How does gdb work.md
new file mode 100644
index 0000000000..e45b988d3d
--- /dev/null
+++ b/translated/tech/20160810 How does gdb work.md
@@ -0,0 +1,219 @@
gdb 如何工作?
============================================================

大家好!今天,我开始进行我的 [ruby 堆栈跟踪项目][1],我意识到,我现在了解了一些关于 gdb 内部如何工作的内容。

最近,我使用 gdb 来查看我的 Ruby 程序,所以,我们将对一个 Ruby 程序运行 gdb。它实际上就是一个 Ruby 解释器。首先,我们需要打印出一个全局变量的地址:`ruby_current_thread`。

### 获取全局变量

下面展示了如何获取全局变量 `ruby_current_thread` 的地址:

```
$ sudo gdb -p 2983
(gdb) p & ruby_current_thread
$2 = (rb_thread_t **) 0x5598a9a8f7f0

```

变量能够位于的地方有堆、栈或者程序的文本段。全局变量也是程序的一部分。某种程度上,你可以把它们想象成是在编译的时候分配的。因此,我们可以很容易地找出全局变量的地址。让我们来看看,gdb 是如何找出 `0x5598a9a8f7f0` 这个地址的。

我们可以通过查看位于 /proc 目录下一个叫做 `/proc/$pid/maps` 的文件,来找到这个变量所位于的大致区域:

```
$ sudo cat /proc/2983/maps | grep bin/ruby
5598a9605000-5598a9886000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
5598a9a86000-5598a9a8b000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
5598a9a8b000-5598a9a8d000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby

```

所以,我们看到,起始地址 `5598a9605000` 和 `0x5598a9a8f7f0` 很像,但并不一样。哪里不一样呢?我们把两个数相减,看看结果是多少:

```
(gdb) p/x 0x5598a9a8f7f0 - 0x5598a9605000
$4 = 0x48a7f0

```

你可能会问,这个数是什么?让我们使用 `nm` 来查看一下程序的符号表:

```
sudo nm /proc/2983/exe | grep ruby_current_thread
000000000048a7f0 b ruby_current_thread

```

我们看到了什么?能够看到 `0x48a7f0` 吗?是的,没错。所以,如果我们想找到程序中一个全局变量的地址,那么只需在符号表中查找变量的名字,然后再加上它在 /proc/whatever/maps 中的起始地址,就得到了。

所以现在,我们知道 gdb 做了什么。但是,gdb 实际做的事情还有更多,让我们直接跳到下一步…

### 解引用指针

```
(gdb) p ruby_current_thread
$1 = (rb_thread_t *) 0x5598ab3235b0

```

我们要做的下一件事就是解引用 `ruby_current_thread` 这一指针。我们想看一下它所指向的地址中有什么。为了完成这件事,gdb 会运行大量系统调用,比如:

```
ptrace(PTRACE_PEEKTEXT, 2983, 0x5598a9a8f7f0, [0x5598ab3235b0]) = 0

```

你是否还记得 `0x5598a9a8f7f0` 这个地址?gdb 在问:"嘿,在这个地址中的实际内容是什么?"。`2983` 是我们正在用 gdb 查看的那个进程的 ID。gdb 使用 `ptrace` 这一系统调用来完成这件事。

好极了!因此,我们可以解引用内存并找出内存地址中存储的内容。一些有用的 gdb 命令是 `x/40w 变量` 和 `x/40b 变量`,它们分别会展示给定地址处的 40 个字或 40 个字节。

### 描述结构

一个内存地址中的内容可能看起来像下面这样。大量的字节!
```
(gdb) x/40b ruby_current_thread
0x5598ab3235b0: 16 -90 55 -85 -104 85 0 0
0x5598ab3235b8: 32 47 50 -85 -104 85 0 0
0x5598ab3235c0: 16 -64 -55 115 -97 127 0 0
0x5598ab3235c8: 0 0 2 0 0 0 0 0
0x5598ab3235d0: -96 -83 -39 115 -97 127 0 0

```

这很有用,但也不是非常有用!如果你是一个像我一样的人类,并且想知道它代表什么,那么你需要更多内容,比如像这样:

```
(gdb) p *(ruby_current_thread)
$8 = {self = 94114195940880, vm = 0x5598ab322f20, stack = 0x7f9f73c9c010,
  stack_size = 131072, cfp = 0x7f9f73d9ada0, safe_level = 0, raised_flag = 0,
  last_status = 8, state = 0, waiting_fd = -1, passed_block = 0x0,
  passed_bmethod_me = 0x0, passed_ci = 0x0, top_self = 94114195612680,
  top_wrapper = 0, base_block = 0x0, root_lep = 0x0, root_svar = 8, thread_id =
  140322820187904,

```

太好了,现在就更加有用了。gdb 是如何知道所有这些字段的,比如 `stack_size`?这就轮到 DWARF 登场了。DWARF 是存储额外程序调试数据的一种方式,有了它,gdb 这样的调试器才能更好地工作。它通常作为二进制文件的一部分存储。如果我对我的 Ruby 二进制文件运行 `dwarfdump` 命令,那么我将会得到下面的输出:

(我已经重新编排,使得它更容易理解)

```
DW_AT_name "rb_thread_struct"
DW_AT_byte_size 0x000003e8
DW_TAG_member
  DW_AT_name "self"
  DW_AT_type <0x00000579>
  DW_AT_data_member_location DW_OP_plus_uconst 0
DW_TAG_member
  DW_AT_name "vm"
  DW_AT_type <0x0000270c>
  DW_AT_data_member_location DW_OP_plus_uconst 8
DW_TAG_member
  DW_AT_name "stack"
  DW_AT_type <0x000006b3>
  DW_AT_data_member_location DW_OP_plus_uconst 16
DW_TAG_member
  DW_AT_name "stack_size"
  DW_AT_type <0x00000031>
  DW_AT_data_member_location DW_OP_plus_uconst 24
DW_TAG_member
  DW_AT_name "cfp"
  DW_AT_type <0x00002712>
  DW_AT_data_member_location DW_OP_plus_uconst 32
DW_TAG_member
  DW_AT_name "safe_level"
  DW_AT_type <0x00000066>

```

所以,`ruby_current_thread` 指向的类型名为 `rb_thread_struct`,它的大小为 0x3e8(即 1000 字节),它有许多成员,`stack_size` 是其中之一,位于偏移 24 处,类型编号为 `31`。`31` 是什么?不用担心,我们也可以在 DWARF 信息中查到:

```
< 1><0x00000031> DW_TAG_typedef
  DW_AT_name "size_t"
  DW_AT_type <0x0000003c>
< 1><0x0000003c> DW_TAG_base_type
  DW_AT_byte_size 0x00000008
  DW_AT_encoding DW_ATE_unsigned
  DW_AT_name "long unsigned int"

```

所以,`stack_size` 的类型是 `size_t`,即 `long unsigned int`,大小为 8 字节。这意味着我们可以把栈大小读出来。

有了 DWARF 调试数据,读取 `stack_size` 的步骤可以分解为(下面还给出了一段示意代码):

1. 查看 `ruby_current_thread` 所指向的内存区域

2. 加上 24 字节,得到 `stack_size` 的地址

3. 读 8 字节(以小端的格式,因为是在 x86 上)

4. 得到答案!
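下面是手工执行这四个步骤的一个最小示意(只是草图,并不是 gdb 的实际实现;其中的 PID 和指针地址沿用上文的例子,并且假设你有读取 `/proc/<pid>/mem` 所需的 root/ptrace 权限):

```
# 0x5598ab3235b0 是 ruby_current_thread 指向的地址,24 是 stack_size 的偏移
$ sudo dd if=/proc/2983/mem bs=1 skip=$((0x5598ab3235b0 + 24)) count=8 2>/dev/null \
    | od -An -t u8    # od 按本机字节序解码(x86 上即小端的 8 字节无符号整数)
131072

```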
在上面这个例子中,它是 131072,即 128 KB。

对我来说,这使得调试信息的用途更加明显:如果没有描述这些变量含义的额外元数据,我们根本无法知道 `0x5598ab3235b0` 这一地址处的字节代表什么。

这就是为什么你可以独立于程序本身单独安装它的调试信息,因为 gdb 并不关心从何处获取额外的调试信息。

### DWARF 很迷惑

我最近阅读了大量的 DWARF 信息。现在,我使用的是 libdwarf,使用体验不是很好:这个 API 很令人迷惑,你要以一种奇怪的方式初始化所有东西,而且它真的很慢(需要花费 0.3 秒的时间来读取我的 Ruby 程序的所有调试信息,这真是可笑)。有人告诉我,elfutils 中的 libdw 要比 libdwarf 好一些。

同样,你可以查看 `DW_AT_data_member_location` 来获得结构成员的偏移。我在 Stack Overflow 上查找了如何完成这件事,并且得到[这个答案][2]。基本上,以下面这样一个检查开始:

```
dwarf_whatform(attrs[i], &form, &error);
if (form == DW_FORM_data1 || form == DW_FORM_data2 ||
    form == DW_FORM_data2 || form == DW_FORM_data4 ||
    form == DW_FORM_data8 || form == DW_FORM_udata) {

```

然后继续往下。为什么会有 800 万种不同的 `DW_FORM_data` 需要检查?发生了什么?我没有头绪。

不管怎么说,我的印象是,DWARF 是一个庞大而复杂的标准(而人们用来生成 DWARF 的库之间也可能不完全一致),但这是我们所拥有的,所以我们只能用它来工作。

我能够编写代码查看 DWARF,并且我的代码实际上大多数时候都能工作,这就很酷了,除了程序崩溃的时候。我就是这样工作的。

### 展开栈路径

在这篇文章的早期版本中,我说过 gdb 使用 libunwind 来展开栈路径,这样说并不完全对。

有一位对 gdb 有深入研究的人发了大量邮件告诉我,他们花费了大量时间来研究如何展开栈路径,以便做得比 libunwind 更好。这意味着,如果你在程序的一个奇怪的中间位置停了下来,而你能够获取的调试信息又很少,gdb 也会尝试对栈做一些奇怪的事情,来找出你位于何处。

### gdb 能做的其他事

我在这儿所描述的一些事情(查看内存,理解 DWARF 所描述的结构)并不是 gdb 能够做的全部事情。阅读 Brendan Gregg 前一天发布的 [gdb 例子][3],我们可以知道,gdb 也能够完成下面这些事情:

* 反汇编

* 查看寄存器内容

在操作程序方面,它可以:

* 设置断点,单步运行程序

* 修改内存(这是一个危险行为)

了解 gdb 如何工作,使得我在使用它的时候更加自信。我过去经常感到迷惑,因为 gdb 的用法有点像 C:输入 `ruby_current_thread->cfp->iseq` 时,就好像是在写 C 代码,但你并不是在写 C 代码。过去我很容易撞上 gdb 的限制,却不知道为什么。

知道 gdb 是使用 DWARF 来找出结构体内容的,这给了我一个更好的心理模型和更加合理的预期!这真是极好的!

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2016/08/10/how-does-gdb-work/

作者:[Julia Evans][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://jvns.ca/
[1]:http://jvns.ca/blog/2016/06/12/a-weird-system-call-process-vm-readv/
[2]:https://stackoverflow.com/questions/25047329/how-to-get-struct-member-offset-from-dwarf-info
[3]:http://www.brendangregg.com/blog/2016-08-09/gdb-example-ncurses.html

diff --git a/translated/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md b/translated/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md
new file mode 100644
index 0000000000..ef8cf715e6
--- /dev/null
+++ b/translated/tech/20171007 How to use GNU Stow to manage programs installed from source and dotfiles.md
@@ -0,0 +1,131 @@
如何使用 GNU Stow 来管理从源代码安装的程序和 dotfiles
=====

(LCTT 译注:dotfiles 指以 **.** 开头的文件,在 *nix 下默认为隐藏文件)

### 目的

使用 GNU Stow 轻松管理从源代码安装的程序和 dotfiles

### 要求

* root 权限

### 难度

简单

### 约定

* **#** \- 要求直接以 root 用户身份或使用 `sudo` 命令以 root 权限执行给定的命令
* **$** \- 给定的命令将作为普通的非特权用户来执行

### 介绍

有时候我们必须从源代码安装程序,因为它们也许不能通过标准渠道获得,或者我们可能需要特定版本的软件。GNU Stow 是一个非常不错的“符号链接工厂(symlinks factory)”程序,它可以帮助我们保持文件的整洁,易于维护。

### 获得 stow

你的 Linux 发行版很可能已经包含 `stow`,例如在 Fedora 中,你安装它只需要:
```
# dnf install stow
```

在 Ubuntu/Debian 中,安装 stow 需要执行:
```
# apt install stow
```

在某些 Linux 发行版中,stow 在标准软件仓库中是不可用的,但是可以通过一些额外的软件源(例如 RHEL 和 CentOS 7 中的 EPEL)轻松获得,或者,作为最后的手段,你可以从源代码编译它。它只需要很少的依赖。

### 从源代码编译

最新的可用 stow 版本是 `2.2.2`。源码包可以在这里下载:`https://ftp.gnu.org/gnu/stow/`。

一旦你下载了源码包,你就必须解压它。切换到你下载软件包的目录,然后运行:
```
$ tar -xvpzf stow-2.2.2.tar.gz
```

解压源文件后,切换到 stow-2.2.2 目录中,然后编译该程序,只需运行:
```
$ ./configure
$ make

```

最后,安装软件包:
```
# make install
```

默认情况下,软件包将安装在 /usr/local/ 目录中,但是我们可以改变它,通过配置脚本的 `--prefix` 选项指定目录,或者在运行 `make install` 时添加
`prefix="/your/dir"`。

此时,如果所有工作都按预期进行,我们的系统上应该已经安装好了 `stow`。

### stow 是如何工作的?

stow 背后的主要概念在程序手册中有很好的解释:
```
Stow 使用的方法是将每个软件包安装到它自己的目录树中,然后使用符号链接使这些文件看起来就像安装在公共的目录树中一样。

```

为了更好地理解这个软件的运作,我们来分析一下它的几个关键概念:

#### stow 目录

stow 目录是包含所有 `stow 包` 的根目录,每个包都有自己的子目录。典型的 stow 目录是 `/usr/local/stow`:在其中,每个子目录代表一个包(`package`)。

#### stow 包

如上所述,stow 目录包含多个“包”,每个包都位于自己单独的子目录中,通常以程序本身命名。包不过是与特定软件相关的、作为一个实体来管理的文件和目录列表。

#### stow 目标目录

stow 目标目录是一个非常简单的概念:它是包文件必须安装到的目录。默认情况下,stow 的目标目录是调用 stow 时所在目录的上一级目录。这种行为可以通过使用 `-t` 选项(`--target` 的简写)轻松改变,这使我们可以指定一个替代目录。

### 一个实际的例子

我相信一个好的例子胜过千言万语,所以让我来展示 stow 是如何工作的。假设我们想编译并安装 `libx264`,首先我们克隆包含其源代码的仓库:
```
$ git clone git://git.videolan.org/x264.git
```

运行该命令几秒钟后,将创建 "x264" 目录,其中包含准备编译的源代码。我们切换到 "x264" 目录中并运行 `configure` 脚本,将 `--prefix` 指定为 /usr/local/stow/libx264 目录:
```
$ cd x264 && ./configure --prefix=/usr/local/stow/libx264
```

然后我们构建该程序并安装它:
```
$ make
# make install
```

libx264 目录现在应该已经创建在 stow 目录内:它包含了所有通常会直接安装到系统中的东西。现在,我们所要做的就是调用 stow。我们必须从 stow 目录内运行这个命令,或者用 `-d` 选项手动指定 stow 目录的路径(默认为当前目录),也可以像前面所说的那样用 `-t` 指定目标目录。我们还应该把要 stow 的包的名称作为参数提供。在这种情况下,我们从 stow 目录运行该程序,所以我们需要输入的内容是:
```
# stow libx264
```

libx264 软件包中包含的所有文件和目录,现在都已经被符号链接到调用 stow 的目录的父目录(/usr/local)中。例如,`/usr/local/stow/libx264/bin` 中的 libx264 二进制文件现在被符号链接到了 `/usr/local/bin`,`/usr/local/stow/libx264/etc` 中的文件现在被符号链接到了 `/usr/local/etc`,等等。通过这种方式,系统看起来就像正常安装了这些文件一样,而我们可以容易地跟踪我们编译和安装的每个程序。要撤销该操作,我们只需使用 `-D` 选项:
```
# stow -D libx264
```

完成了!符号链接不再存在:我们只是“卸载”了一个 stow 包,使我们的系统保持在一个干净且一致的状态。在这一点上,我们应该清楚为什么 stow 还可以用于管理 dotfiles:通常的做法是把用户所有的配置文件放在一个 git 仓库中,以便轻松管理它们并使它们在任何地方都可用,然后使用 stow 将它们放到适当的位置,比如链接到用户主目录中。

Stow 还会阻止你错误地覆盖文件:如果目标文件已经存在,并且不是指向 Stow 目录中某个包的符号链接,它将拒绝创建符号链接。这种情况在 Stow 术语中称为冲突。

就是这样!有关选项的完整列表,请参阅 stow 帮助页,并且不要忘记在评论中告诉我们你对此的看法。

--------------------------------------------------------------------------------

via: https://linuxconfig.org/how-to-use-gnu-stow-to-manage-programs-installed-from-source-and-dotfiles

作者:[Egidio Docile][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://linuxconfig.org

diff --git a/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md b/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md
deleted file mode 100644
index 1c3425d008..0000000000
--- a/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md
+++ /dev/null
@@ -1,131 +0,0 @@
Linux 容器安全的 10 个层面
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA)

容器提供了打包应用程序的一种简单方法,它实现了从开发到测试到投入生产系统的无缝传递。它也有助于确保跨不同环境的连贯性,包括物理服务器、虚拟机、以及公有云或私有云。这些好处使得一些组织为了更方便地部署和管理为他们提升业务价值的应用程序,而快速部署容器。

企业要求存储安全,在容器中运行基础服务的任何人都会问,“容器安全吗?”以及“怎么相信运行在容器中的我的应用程序是安全的?”

安全的容器就像是许多安全运行的进程。在你部署和运行你的容器之前,你需要去考虑整个解决方案栈~~(致校对,容器是由不同的层堆叠而成,英文原文中使用的stack,可以直译为“解决方案栈”,但是似乎没有这一习惯说法,也可以翻译为解决方案的不同层级,哪个更合适?)~~各个层面的安全。你也需要去考虑应用程序和容器整个生命周期的安全。

尝试从这十个关键的因素去确保容器解决方案栈不同层面、以及容器生命周期的不同阶段的安全。

### 1. 容器宿主机操作系统和多租户环境

由于容器将应用程序和它的依赖作为一个单元来处理,使得开发者构建和升级应用程序变得更加容易,并且,容器可以启用多租户技术将许多应用程序和服务部署到一台共享主机上。在一台单独的主机上以容器方式部署多个应用程序、按需启动和关闭单个容器都是很容易的。为完全实现这种打包和部署技术的优势,运营团队需要运行容器的合适环境。运营者需要一个安全的操作系统,它能够在边界上保护容器安全、从容器中保护主机内核,以及保护容器彼此之间的安全。

### 2.
容器内容(使用可信来源) - -容器是隔离的 Linux 进程,并且在一个共享主机的内核中,容器内使用的资源被限制在仅允许你运行着应用程序的沙箱中。保护容器的方法与保护你的 Linux 中运行的任何进程的方法是一样的。降低权限是非常重要的,也是保护容器安全的最佳实践。甚至是使用尽可能小的权限去创建容器。容器应该以一个普通用户的权限来运行,而不是 root 权限的用户。在 Linux 中可以使用多级安全,Linux 命名空间、安全强化 Linux( [SELinux][1])、[cgroups][2] 、capabilities(译者注:Linux 内核的一个安全特性,它打破了传统的普通用户与 root 用户的概念,在进程级提供更好的安全控制)、以及安全计算模式( [seccomp][3] ),Linux 的这五种安全特性可以用于保护容器的安全。 - -在谈到安全时,首先要考虑你的容器里面有什么?例如 ,有些时候,应用程序和基础设施是由很多可用的组件所构成。它们中的一些是开源的包,比如,Linux 操作系统、Apache Web 服务器、Red Hat JBoss 企业应用平台、PostgreSQL、以及Node.js。这些包的容器化版本已经可以使用了,因此,你没有必要自己去构建它们。但是,对于你从一些外部来源下载的任何代码,你需要知道这些包的原始来源,是谁构建的它,以及这些包里面是否包含恶意代码。 - -### 3. 容器注册(安全访问容器镜像) - -你的团队所构建的容器的最顶层的内容是下载的公共容器镜像,因此,管理和下载容器镜像以及内部构建镜像,与管理和下载其它类型的二进制文件的方式是相同的,这一点至关重要。许多私有的注册者支持容器镜像的保存。选择一个私有的注册者,它可以帮你将存储在它的注册中的容器镜像实现策略自动化。 - -### 4. 安全性与构建过程 - -在一个容器化环境中,构建过程是软件生命周期的一个阶段,它将所需的运行时库和应用程序代码集成到一起。管理这个构建过程对于软件栈安全来说是很关键的。遵守“一次构建,到处部署”的原则,可以确保构建过程的结果正是生产系统中需要的。保持容器的恒定不变也很重要 — 换句话说就是,不要对正在运行的容器打补丁,而是,重新构建和部署它们。 - -不论是因为你处于一个高强度监管的行业中,还是只希望简单地优化你的团队的成果,去设计你的容器镜像管理以及构建过程,可以使用容器层的优势来实现控制分离,因此,你应该去这么做: - - * 运营团队管理基础镜像 - * 设计者管理中间件、运行时、数据库、以及其它解决方案 - * 开发者专注于应用程序层面,并且只写代码 - - - -最后,标记好你的定制构建容器,这样可以确保在构建和部署时不会搞混乱。 - -### 5. 控制好在同一个集群内部署应用 - -如果是在构建过程中出现的任何问题,或者在镜像被部署之后发现的任何漏洞,那么,请在基于策略的、自动化工具上添加另外的安全层。 - -我们来看一下,一个应用程序的构建使用了三个容器镜像层:内核、中间件、以及应用程序。如果在内核镜像中发现了问题,那么只能重新构建镜像。一旦构建完成,镜像就会被发布到容器平台注册中。这个平台可以自动检测到发生变化的镜像。对于基于这个镜像的其它构建将被触发一个预定义的动作,平台将自己重新构建应用镜像,合并进修复库。 - -在基于策略的、自动化工具上添加另外的安全层。 - -一旦构建完成,镜像将被发布到容器平台的内部注册中。在它的内部注册中,会立即检测到镜像发生变化,应用程序在这里将会被触发一个预定义的动作,自动部署更新镜像,确保运行在生产系统中的代码总是使用更新后的最新的镜像。所有的这些功能协同工作,将安全功能集成到你的持续集成和持续部署(CI/CD)过程和管道中。 - -### 6. 容器编配:保护容器平台 - -一旦构建完成,镜像被发布到容器平台的内部注册中。内部注册会立即检测到镜像的变化,应用程序在这里会被触发一个预定义的动作,自己部署更新,确保运行在生产系统中的代码总是使用更新后的最新的镜像。所有的功能协同工作,将安全功能集成到你的持续集成和持续部署(CI/CD)过程和管道中。~~(致校对:这一段和上一段是重复的,请确认,应该是选题工具造成的重复!!)~~ - -当然了,应用程序很少会部署在单一的容器中。甚至,单个应用程序一般情况下都有一个前端、一个后端、以及一个数据库。而在容器中以微服务模式部署的应用程序,意味着应用程序将部署在多个容器中,有时它们在同一台宿主机上,有时它们是分布在多个宿主机或者节点上,如下面的图所示:~~(致校对:图去哪里了???应该是选题问题的问题!)~~ - -在大规模的容器部署时,你应该考虑: - - * 哪个容器应该被部署在哪个宿主机上? - * 那个宿主机应该有什么样的性能? - * 哪个容器需要访问其它容器?它们之间如何发现彼此? - * 你如何控制和管理对共享资源的访问,像网络和存储? - * 如何监视容器健康状况? - * 如何去自动扩展性能以满足应用程序的需要? - * 如何在满足安全需求的同时启用开发者的自助服务? - - - -考虑到开发者和运营者的能力,提供基于角色的访问控制是容器平台的关键要素。例如,编配管理服务器是中心访问点,应该接受最高级别的安全检查。APIs 是规模化的自动容器平台管理的关键,可以用于为 pods、服务、以及复制控制器去验证和配置数据;在入站请求上执行项目验证;以及调用其它主要系统组件上的触发器。 - -### 7. 网络隔离 - -在容器中部署现代微服务应用,经常意味着跨多个节点在多个容器上部署。考虑到网络防御,你需要一种在一个集群中的应用之间的相互隔离的方法。一个典型的公有云容器服务,像 Google 容器引擎(GKE)、Azure 容器服务、或者 Amazon Web 服务(AWS)容器服务,是单租户服务。他们让你在你加入的虚拟机集群上运行你的容器。对于多租户容器的安全,你需要容器平台为你启用一个单一集群,并且分割通讯以隔离不同的用户、团队、应用、以及在这个集群中的环境。 - -使用网络命名空间,容器内的每个集合(即大家熟知的“pod”)得到它自己的 IP 和绑定的端口范围,以此来从一个节点上隔离每个 pod 网络。除使用下文所述的选项之外,~~(选项在哪里???,请查看原文,是否是选题丢失???)~~默认情况下,来自不同命名空间(项目)的Pods 并不能发送或者接收其它 Pods 上的包和不同项目的服务。你可以使用这些特性在同一个集群内,去隔离开发者环境、测试环境、以及生产环境。但是,这样会导致 IP 地址和端口数量的激增,使得网络管理更加复杂。另外,容器是被反复设计的,你应该在处理这种复杂性的工具上进行投入。在容器平台上比较受欢迎的工具是使用 [软件定义网络][4] (SDN) 去提供一个定义的网络集群,它允许跨不同集群的容器进行通讯。 - -### 8. 存储 - -容器即可被用于无状态应用,也可被用于有状态应用。保护附加存储是保护有状态服务的一个关键要素。容器平台对多个受欢迎的存储提供了插件,包括网络文件系统(NFS)、AWS 弹性块存储(EBS)、GCE 持久磁盘、GlusterFS、iSCSI、 RADOS(Ceph)、Cinder、等等。 - -一个持久卷(PV)可以通过资源提供者支持的任何方式装载到一个主机上。提供者有不同的性能,而每个 PV 的访问模式是设置为被特定的卷支持的特定模式。例如,NFS 能够支持多路客户端同时读/写,但是,一个特定的 NFS 的 PV 可以在服务器上被发布为只读模式。每个 PV 得到它自己的一组反应特定 PV 性能的访问模式的描述,比如,ReadWriteOnce、ReadOnlyMany、以及 ReadWriteMany。 - -### 9. 
API 管理、终端安全、以及单点登陆(SSO) - -保护你的应用包括管理应用、以及 API 的认证和授权。 - -Web SSO 能力是现代应用程序的一个关键部分。在构建它们的应用时,容器平台带来了开发者可以使用的多种容器化服务。 - -APIs 是微服务构成的应用程序的关键所在。这些应用程序有多个独立的 API 服务,这导致了终端服务数量的激增,它就需要额外的管理工具。推荐使用 API 管理工具。所有的 API 平台应该提供多种 API 认证和安全所需要的标准选项,这些选项既可以单独使用,也可以组合使用,以用于发布证书或者控制访问。 - -保护你的应用包括管理应用以及 API 的认证和授权。~~(致校对:这一句话和本节的第一句话重复)~~ - -这些选项包括标准的 API keys、应用 ID 和密钥对、 以及 OAuth 2.0。 - -### 10. 在一个联合集群中的角色和访问管理 - -这些选项包括标准的 API keys、应用 ID 和密钥对、 以及 OAuth 2.0。~~(致校对:这一句和上一节最后一句重复)~~ - -在 2016 年 7 月份,Kubernetes 1.3 引入了 [Kubernetes 联合集群][5]。这是一个令人兴奋的新特性之一,它是在 Kubernetes 上游、当前的 Kubernetes 1.6 beta 中引用的。联合是用于部署和访问跨多集群运行在公有云或企业数据中心的应用程序服务的。多个集群能够用于去实现应用程序的高可用性,应用程序可以跨多个可用区域、或者去启用部署公共管理、或者跨不同的供应商进行迁移,比如,AWS、Google Cloud、以及 Azure。 - -当管理联合集群时,你必须确保你的编配工具能够提供,你所需要的跨不同部署平台的实例的安全性。一般来说,认证和授权是很关键的 — 不论你的应用程序运行在什么地方,将数据安全可靠地传递给它们,以及管理跨集群的多租户应用程序。Kubernetes 扩展了联合集群,包括对联合的秘密数据、联合的命名空间、以及 Ingress objects 的支持。 - -### 选择一个容器平台 - -当然,它并不仅关乎安全。你需要提供一个你的开发者团队和运营团队有相关经验的容器平台。他们需要一个安全的、企业级的基于容器的应用平台,它能够同时满足开发者和运营者的需要,而且还能够提高操作效率和基础设施利用率。 - -想从 Daniel 在 [欧盟开源峰会][7] 上的 [容器安全的十个层面][6] 的演讲中学习更多知识吗?这个峰会将于10 月 23 - 26 日在 Prague 举行。 - -### 关于作者 -Daniel Oh;Microservives;Agile;Devops;Java Ee;Container;Openshift;Jboss;Evangelism - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/10/10-layers-container-security - -作者:[Daniel Oh][a] -译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/daniel-oh -[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux -[2]:https://en.wikipedia.org/wiki/Cgroups -[3]:https://en.wikipedia.org/wiki/Seccomp -[4]:https://en.wikipedia.org/wiki/Software-defined_networking -[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/ -[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223 -[7]:http://events.linuxfoundation.org/events/open-source-summit-europe diff --git a/translated/tech/20171017 What Are the Hidden Files in my Linux Home Directory For.md b/translated/tech/20171017 What Are the Hidden Files in my Linux Home Directory For.md deleted file mode 100644 index 681dfe94a8..0000000000 --- a/translated/tech/20171017 What Are the Hidden Files in my Linux Home Directory For.md +++ /dev/null @@ -1,64 +0,0 @@ -我的 Linux 主目录中的隐藏文件是干什么用的? -====== - -![](https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-linux-hero.png) - -在你的 Linux 系统中,你可能会在主目录中存储大量文件和文件夹。但在这些文件下面,你知道你的主目录还附带了很多隐藏的文件和文件夹吗?如果你在主目录中运行 `ls -a`,你会发现一堆带有点前缀的隐藏文件和目录。这些隐藏的文件到底做了什么? - -### 在主目录中隐藏的文件是干什么用的? - -![hidden-files-liunux-2][1] - -通常,主目录中的隐藏文件和目录包含该用户程序访问的设置或数据。它们不打算由用户编辑,只需要应用程序进行编辑。这就是为什么它们被隐藏在用户的正常视图中。 - -通常,可以在不损坏操作系统的情况下删除和修改自己主目录中的文件。然而,依赖这些隐藏文件的应用程序可能不那么灵活。从主目录中删除隐藏文件时,通常会丢失与其关联的应用程序的设置。 - -依赖该隐藏文件的程序通常会重新创建它。 但是,你将从“开箱即用”设置开始,如全新用户。如果你在使用应用程序时遇到问题,那实际上可能是一个巨大的帮助。它可以让你删除可能造成麻烦的自定义设置。但如果你不这样做,这意味着你需要把所有的东西都设置成原来的样子。 - -### 主目录中某些隐藏文件的特定用途是什么? 
- -![hidden-files-linux-3][2] - -每个人在他们的主目录中都会有不同的隐藏文件。每个人都有一些。但是,无论父应用程序如何,这些文件都有类似的用途。 - -### 系统设置 - -系统设置包括桌面环境和 shell 的配置。 - -* 你的 shell 和命令行程序的**配置文件:**根据你使用的特定 shell 和类似命令的应用程序,特定的文件名称会变化。你会看到 ".bashrc"、".vimrc" 和 ".zshrc"。这些文件包含你已经更改的有关 shell 的操作环境的任何设置,或者对 `vim` 等命令行实用工具的设置进行了调整。删除这些文件将使关联的应用程序返回到其默认状态。考虑到许多 Linux 用户多年来建立了一系列微妙的调整和设置,删除这个文件可能是一个非常头疼的问题。 - -* **用户配置文件:**像上面的配置文件一样,这些文件(通常是 ".profile" 或 ".bash_profile")保存 shell 的用户设置。该文件通常包含你的 PATH。(译注: PATH 是环境变量)它还包含你设置的[别名][3]。用户也可以在 `.bashrc` 或其他位置放置别名。PATH 控制着 shell 寻找可执行命令的位置。通过添加或修改 PATH,可以更改 shell 的命令查找位置。别名更改了原有命令的名称。例如:一个别名可能将 `ls -l` 设置为 `ll`。这为经常使用的命令提供基于文本的快捷方式。如果删除 `.profile` 文件,通常可以在 "/etc/skel" 目录中找到默认版本。 - -* **桌面环境设置:**这里保存你的桌面环境的任何定制。其中包括桌面背景,屏幕保护程序,快捷键,菜单栏和任务栏图标以及用户针对其桌面环境设置的其他任何内容。当你删除这个文件时,用户的环境会在下一次登录时恢复到新的用户环境。 - -### 应用配置文件 - -你会在 Ubuntu 的 ".config" 文件夹中找到它们。 这些是针对特定应用程序的设置。 它们将包含喜好列表和设置等内容。 - -* **应用程序的配置文件:**这包括应用程序首选项菜单中的设置,工作区配置等。 你在这里找到的具体取决于父应用程序。 - -* **Web浏览器数据:**这可能包括书签和浏览历史记录等内容。大部分文件构成缓存。这是 Web 浏览器临时存储下载文件(如图片)的地方。删除这些内容可能会降低你首次访问某些媒体网站的速度。 - -* **缓存:**如果用户应用程序缓存仅与该用户相关的数据(如 [Spotify 应用程序存储播放列表的缓存][4]),则主目录是存储该目录的默认地点。 这些缓存可能包含大量数据或仅包含几行代码:这取决于父应用程序需要什么。 如果你删除这些文件,则应用程序会根据需要重新创建它们。 - -* **日志:**一些用户应用程序也可能在这里存储日志。根据开发人员设置应用程序的方式,你可能会发现存储在你的主目录中的日志文件。然而,这不是一个常见的选择。 - -### 结论 - -在大多数情况下,你的 Linux 主目录中的隐藏文件用于存储用户设置。 这包括命令行程序以及基于 GUI 的应用程序的设置。删除它们将删除用户设置。 通常情况下,它不会导致程序中断。 - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/hidden-files-linux-home-directory/ - -作者:[Alexander Fox][a] -译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/alexfox/ -[1]:https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-liunux-2.png (hidden-files-liunux-2) -[2]:https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-linux-3.png (hidden-files-linux-3) -[3]:https://www.maketecheasier.com/making-the-linux-command-line-a-little-friendlier/#aliases -[4]:https://www.maketecheasier.com/clear-spotify-cache/ diff --git a/translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md b/translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md new file mode 100644 index 0000000000..45a696511f --- /dev/null +++ b/translated/tech/20171102 Testing IPv6 Networking in KVM- Part 1.md @@ -0,0 +1,82 @@ +在 KVM 中测试 IPv6 网络(第 1 部分) +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ipv6-networking.png?itok=swQPV8Ey) + +要理解 IPv6 地址是如何工作的,没有比亲自动手去实践更好的方法了,在 KVM 中配置一个小的测试实验室非常容易 —— 也很有趣。这个系列的文章共有两个部分,我们将学习关于 IPv6 私有地址的知识,以及如何在 KVM 中配置测试网络。 + +### QEMU/KVM/虚拟机管理器 + +我们先来了解什么是 KVM。在这里,我将使用 KVM 来表示 QEMU、KVM、以及虚拟机管理器的一个组合,虚拟机管理器在 Linux 发行版中一般内置了。简单解释就是,QEMU 模拟硬件,而 KVM 是一个内核模块,它在你的 CPU 上创建一个 “访客领地”,并去管理它们对内存和 CPU 的访问。虚拟机管理器是一个涵盖虚拟化和管理程序的图形工具。 + +但是你不能被图形界面下 “点击” 操作的方式 "缠住" ,因为,它们也有命令行工具可以使用 —— 比如 virsh 和 virt-install。 + +如果你在使用 KVM 方面没有什么经验,你可以从 [在 KVM 中创建虚拟机:第 1 部分][1] 和 [在 KVM 中创建虚拟机:第 2 部分 - 网络][2] 开始学起。 + +### IPv6 唯一本地地址 + +在 KVM 中配置 IPv6 网络与配置 IPv4 网络很类似。它们的主要不同在于这些怪异的长地址。[上一次][3],我们讨论了 IPv6 地址的不同类型。其中有一个 IPv6 单播地址类,fc00::/7(详细情况请查阅 [RFC 4193][4]),它类似于 IPv4 中的私有地址 —— 10.0.0.0/8、172.16.0.0/12、和 192.168.0.0/16。 + +下图解释了这个唯一本地地址空间的结构。前 48 位定义了前缀和全局 ID,随后的 16 位是子网,剩余的 64 位是接口 ID: +``` +| 7 bits |1| 40 bits | 16 bits | 64 bits | ++--------|-+------------|-----------|----------------------------+ +| Prefix |L| Global ID | Subnet ID | Interface ID | 
++--------|-+------------|-----------|----------------------------+ + +``` + +下面是另外一种表示方法,它可能更有助于你理解这些地址是如何管理的: +``` +| Prefix | Global ID | Subnet ID | Interface ID | ++--------|--------------|-------------|----------------------+ +| fd | 00:0000:0000 | 0000 | 0000:0000:0000:0000 | ++--------|--------------|-------------|----------------------+ + +``` + +fc00::/7 共分成两个 /8 地址块,fc00::/8 和 fd00::/8。fc00::/8 是为以后使用保留的。因此,唯一本地地址通常都是以 fd 开头的,而剩余部分是由你使用的。L 位,也就是第八位,它总是设置为 1,这样它可以表示为 fd00::/8。设置为 0 时,它就表示为 fc00::/8。你可以使用 `subnetcalc` 来看到这些东西: +``` +$ subnetcalc fd00::/8 -n +Address = fd00:: + fd00 = 11111101 00000000 + +$ subnetcalc fc00::/8 -n +Address = fc00:: + fc00 = 11111100 00000000 + +``` + +RFC 4193 要求地址必须随机产生。你可以用你选择的任何方法来造出个地址,只要它们以 `fd` 打头就可以,因为 IPv6 范围非常大,它不会因为地址耗尽而无法使用。当然,最佳实践还是按 RFCs 的要求来做。地址不能按顺序分配或者使用众所周知的数字。RFC 4193 包含一个构建伪随机地址生成器的算法,或者你可以在线找到任何生成器产生的数字。 + +唯一本地地址不像全局单播地址(它由你的因特网服务提供商分配)那样进行中心化管理,即使如此,发生地址冲突的可能性也是非常低的。当你需要去合并一些本地网络或者想去在不相关的私有网络之间路由时,这是一个非常好的优势。 + +在同一个子网中,你可以混用唯一本地地址和全局单播地址。唯一本地地址是可路由的,并且它并不会因此要求对路由器做任何调整。但是,你应该在你的边界路由器和防火墙上配置为不允许它们离开你的网络,除非是在不同位置的两个私有网络之间。 + +RFC4193 建议,不要混用全局单播地址的 AAAA 和 PTR 记录,因为虽然它们重复的机率非常低,但是并不能保证它们就是独一无二的。就像我们使用的 IPv4 地址一样,要保持你本地的私有名称服务和公共名称服务的独立。将本地名称服务使用的 Dnsmasq 和公共名称服务使用的 BIND 组合起来,是一个在 IPv4 网络上经过实战检验的可靠组合,这个组合也同样适用于 IPv6 网络。 + +### 伪随机地址生成器 + +在线地址生成器的一个示例是 [本地 IPv6 地址生成器][5]。你可以在线找到许多这样很酷的工具。你可以使用它来为你创建一个新地址,或者使用它在你的现有全局 ID 下为你创建子网。 + +下周我们将讲解如何在 KVM 中配置这些 IPv6 的地址,并现场测试它们。 + +通过来自 Linux 基金会和 edX 的免费在线课程 ["Linux 入门" ][6] 学习更多的 Linux 知识。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-1 + +作者:[Carla Schroder][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder +[1]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-1 +[2]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-2-networking +[3]:https://www.linux.com/learn/intro-to-linux/2017/10/calculating-ipv6-subnets-linux +[4]:https://tools.ietf.org/html/rfc4193 +[5]:https://www.ultratools.com/tools/rangeGenerator +[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20171102 What is huge pages in Linux.md b/translated/tech/20171102 What is huge pages in Linux.md deleted file mode 100644 index ee261956ad..0000000000 --- a/translated/tech/20171102 What is huge pages in Linux.md +++ /dev/null @@ -1,137 +0,0 @@ -Linux 中的 huge pages 是个什么玩意? -====== -学习 Linux 中的 huge pages( 巨大页)。理解什么是 hugepages,如何进行配置,如何查看当前状态以及如何禁用它。 - -![Huge Pages in Linux][1] - -本文,我们会详细介绍 huge page,让你能够回答:Linux 中的 huge page 是什么玩意?在 RHEL6,RHEL7,Ubuntu 等 Linux 中,如何启用/禁用 huge pages?如何查看 huge page 的当前值? - -首先让我们从 Huge page 的基础知识开始讲起。 - -### Linux 中的 Huge page 是个什么玩意? - -Huge pages 有助于 Linux 系统进行虚拟内存管理。顾名思义,除了标准的 4KB 大小的页面外,他们还能帮助管理内存中的巨大页面。使用 huge pages,你最大可以定义 1GB 的页面大小。 - -在系统启动期间,huge pages 会为应用程序预留一部分内存。这部分内存,即被 huge pages 占用的这些存储器永远不会被交换出内存。它会一直保留其中除非你修改了配置。这会极大地提高像 Orcle 数据库这样的需要海量内存的应用程序的性能。 - -### 为什么使用巨大的页? - -在虚拟内存管理中,内核维护一个将虚拟内存地址映射到物理地址的表,对于每个页面操作,内核都需要加载相关的映射标。如果你的内存页很小,那么你需要加载的页就会很多,导致内核加载更多的映射表。而这会降低性能。 - -使用巨大的页,意味着所需要的页变少了。从而大大减少由内核加载的映射表的数量。这提高了内核级别的性能最终有利于应用程序的性能。 - -简而言之,通过启用 huge pages,系统具只需要处理较少的页面映射表,从而减少访问/维护它们的开销! - -### 如何配置 huge pages? 
- -运行下面命令来查看当前 huge pages 的详细内容。 - -``` -root@kerneltalks # grep Huge /proc/meminfo -AnonHugePages: 0 kB -HugePages_Total: 0 -HugePages_Free: 0 -HugePages_Rsvd: 0 -HugePages_Surp: 0 -Hugepagesize: 2048 kB -``` - -从上面输出可以看到,每个页的大小为 2MB(`Hugepagesize`) 并且系统中目前有 0 个页 (`HugePages_Total`)。这里巨大页的大小可以从 2MB 增加到 1GB。 - -运行下面的脚本可以获取系统当前需要多少个巨大页。该脚本取之于 Oracle。 - -``` -#!/bin/bash -# -# hugepages_settings.sh -# -# Linux bash script to compute values for the -# recommended HugePages/HugeTLB configuration -# -# Note: This script does calculation for all shared memory -# segments available when the script is run, no matter it -# is an Oracle RDBMS shared memory segment or not. -# Check for the kernel version -KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'` -# Find out the HugePage size -HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}` -# Start from 1 pages to be on the safe side and guarantee 1 free HugePage -NUM_PG=1 -# Cumulative number of pages required to handle the running shared memory segments -for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"` -do - MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q` - if [ $MIN_PG -gt 0 ]; then - NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q` - fi -done -# Finish with results -case $KERN in - '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`; - echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;; - '2.6' | '3.8' | '3.10' | '4.1' ) echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;; - *) echo "Unrecognized kernel version $KERN. Exiting." ;; -esac -# End -``` -将它以 `hugepages_settings.sh` 为名保存到 `/tmp` 中,然后运行之: -``` -root@kerneltalks # sh /tmp/hugepages_settings.sh -Recommended setting: vm.nr_hugepages = 124 -``` - -输出如上结果,只是数字会有一些出入。 - -这意味着,你系统需要 124 个每个 2MB 的巨大页!若你设置页面大小为 4MB,则结果就变成了 62。你明白了吧? - -### 配置内核中的 hugepages - -本文最后一部分内容是配置上面提到的 [内核参数 ][2] 然后重新加载。将下面内容添加到 `/etc/sysctl.conf` 中,然后输入 `sysctl -p` 命令重新加载配置。 - -``` -vm .nr_hugepages=126 -``` - -注意我们这里多加了两个额外的页,因为我们希望在实际需要的页面数量外多一些额外的空闲页。 - -现在,内核已经配置好了,但是要让应用能够使用这些巨大页还需要提高内存的使用阀值。新的内存阀值应该为 126 个页 x 每个页 2 MB = 252 MB,也就是 258048 KB。 - -你需要编辑 `/etc/security/limits.conf` 中的如下配置 - -``` -soft memlock 258048 -hard memlock 258048 -``` - -某些情况下,这些设置是在指定应用的文件中配置的,比如 Oracle DB 就是在 `/etc/security/limits.d/99-grid-oracle-limits.conf` 中配置的。 - -这就完成了!你可能还需要重启应用来让应用来使用这些新的巨大页。 - -### 如何禁用 hugepages? 
- -HugePages 默认是开启的。使用下面命令来查看 hugepages 的当前状态。 - -``` -root@kerneltalks# cat /sys/kernel/mm/transparent_hugepage/enabled -[always] madvise never -``` - -输出中的 `[always]` 标志说明系统启用了 hugepages。 - -若使用的是基于 RedHat 的系统,则应该要查看的文件路径为 `/sys/kernel/mm/redhat_transparent_hugepage/enabled`。 - -若想禁用巨大页,则在 `/etc/grub.conf` 中的 `kernel` 行后面加上 `transparent_hugepage=never`,然后重启系统。 - --------------------------------------------------------------------------------- - -via: https://kerneltalks.com/services/what-is-huge-pages-in-linux/ - -作者:[Shrikant Lavhate][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://kerneltalks.com -[1]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png -[2]:https://kerneltalks.com/linux/how-to-tune-kernel-parameters-in-linux/ diff --git a/translated/tech/20171108 How to Use GNOME Shell Extensions [Complete Guide].md b/translated/tech/20171108 How to Use GNOME Shell Extensions [Complete Guide].md new file mode 100644 index 0000000000..8f3c10cfa7 --- /dev/null +++ b/translated/tech/20171108 How to Use GNOME Shell Extensions [Complete Guide].md @@ -0,0 +1,198 @@ +如何使用 GNOME Shell 扩展[完整指南] +===== + +**简介:这是一份详细指南,我将会向你展示如何手动或通过浏览器轻松安装 GNOME Shell 扩展。** + +在讨论 [如何在 Ubuntu 17.10 上安装主题][1] 一文时,我简要地提到了 GNOME Shell 扩展,它用来安装用户主题。今天,我们将详细介绍 Ubuntu 17.10 中的 GNOME Shell 扩展。 + +我可能会使用术语 GNOME 扩展而不是 GNOME Shell 扩展,但是这两者在这里具有相同的含义。 + +什么是 GNOME Shell 扩展?如何安装 GNOME Shell 扩展,以及如何管理和删除 GNOME Shell 扩展?我会一一解释所有的问题。 + +在此之前,如果你喜欢视频,我已经在 [FOSS 的 YouTube 频道][2] 上展示了所有的这些操作。我强烈建议你订阅它来获得更多有关 Linux 的视频。 + +## 什么是 GNOME Shell 扩展? + +[GNOME Shell 扩展][3] 根本上来说是增强 GNOME 桌面功能的一小段代码。 + +把它看作是你浏览器的一个附加组件。例如,你可以在浏览器中安装附加组件来禁用广告。这个附加组件是由第三方开发者开发的。虽然你的 Web 浏览器默认不提供此项功能,但安装此附加组件可增强你 Web 浏览器的功能。 + +同样, GNOME Shell 扩展就像那些可以安装在 GNOME 之上的第三方附加组件和插件。这些扩展程序是为执行特定任务而创建的,例如显示天气状况,网速等。大多数情况下,你可以在顶部面板中访问它们。 + +![GNOME Shell 扩展 in action][5] + +还有一些 GNOME 扩展在顶部面板上不可见,但它们仍然可以调整 GNOME 的行为。例如,鼠标中轴可以使用扩展来关闭一个应用程序。 + +## 安装 GNOME Shell 扩展 + +现在你知道了什么是 GNOME Shell 扩展,那么让我们来看看如何安装它吧。有三种方式可以使用 GNOME 扩展: + +* 使用来自 Ubuntu 的最小扩展集(或你的 Linux 发行版) +* 在 Web 浏览器种查找并安装扩展程序 +* 下载并手动安装扩展 + +在你学习如何使用 GNOME Shell 扩展之前,你应该安装 GNOME Tweak Tool。你可以在软件中心找到它,或者你可以使用以下命令: +``` +sudo apt install gnome-tweak-tool +``` + +有时候,你需要知道你正在使用的 GNOME Shell 的版本,这有助于你确定扩展是否与系统兼容。你可以使用下面的命令来找到它: +``` +gnome-shell --version +``` + +### 1. 使用 gnome-shell-extensions 包 [最简单最安全的方式] + +Ubuntu (以及其他几个 Linux 发行版,如 Fedora )提供了一个包,这个包有最小的 GNOME 扩展。由于 Linux 发行版经过测试,所以你不必担心兼容性问题。 + +如果你想要一个简单易懂的程序,你只需获得这个包,你就可以安装 8-10 个 GNOME 扩展。 +``` +sudo apt install gnome-shell-extensions +``` + +你将不得不重新启动系统(或者重新启动 GNOME Shell,我具体忘了是哪个)。之后,启动 GNOME Tweaks,你会发现一些扩展自动安装了,你只需切换按钮即可开始使用已安装的扩展程序。 + +![Change GNOME Shell theme in Ubuntu 17.1][6] + +### 2. 
从 Web 浏览器安装 GNOME Shell 扩展 + +GNOME 项目有一个专门用于扩展的网站。不是这个,你可以找到并安装它,从而管理你的扩展程序,甚至不需要 GNOME Tweaks Tool。 + +[GNOME Shell Extensions Website][3] + +但是为了安装 Web 浏览器扩展,你需要两件东西:浏览器附加组件和本地主机连接器。 + +#### 步骤 1: 安装 浏览器附加组件 + +当你访问 GNOME Shell 扩展网站时,你会看到如下消息: +> "要使用此站点控制 GNOME Shell 扩展,你必须安装由两部分组成的 GNOME Shell 集成:浏览器扩展和本地主机消息应用。" + +![Installing GNOME Shell Extensions][7] + +你只需在你的 Web 浏览器上点击建议的附加组件即可。你也可以从下面的链接安装它们: + +#### 步骤 2: 安装本地连接器 + +仅仅安装浏览器附加组件并没有帮助。你仍然会看到如下错误: + +> "尽管 GNOME Shell 集成扩展正在运行,但未检测到本地主机连接器。请参阅文档以获取有关安装连接器的信息。" + +![How to install GNOME Shell Extensions][8] + +这是因为你尚未安装主机连接器。要做到这一点,请使用以下命令: +``` +sudo apt install chrome-gnome-shell +``` + +不要担心包名中的 'chrome' 前缀,它与 Chrome 无关,你无需再次安装 Firefox 或 Opera 的单独软件包。 + +#### 步骤 3: 在 Web 浏览器中安装 GNOME Shell 扩展 + +一旦你完成了这两个要求,你就可以开始了。现在,你将看不到任何错误消息。 + +![GNOME Shell Extension][9] + +一件好事情是它会按照 GNOME Shell 版本对扩展进行排序,但这不是强制性的。这里发生的事情是开发人员为当前的 GNOME 版本创建扩展。在一年之内,还会有两个 GNOME 发行版本。但开发人员没有时间测试或更新他/她的扩展。 + +因此,你不知道该扩展是否与你的系统兼容。尽管扩展已经存在很长一段时间了,但是有可能在最新的 GNOME Shell 版本中,它也能正常工作。同样它也有可能不工作。 + +你也可以去搜索扩展程序。假设你想要安装有关天气的扩展,只要搜索它并选择一个搜索结果即可。 + +当你访问扩展页面是,你会看到一个切换按钮。 + +![Installing GNOME Shell Extension ][10] + +点击它,你会被提示是否要安装这个扩展: + +![Install GNOME Shell Extensions via web browser][11] + +很明显,直接安装。安装完成后,你会看到切换按钮已打开,旁边有一个设置选项。你也可以使用设置选项配置扩展,也可以禁用扩展。 + +![Configuring installed GNOME Shell Extensions][12] + +你还可以在 GNOME Tweaks Tool 中配置 Web 浏览器中安装的扩展: + +![GNOME Tweaks to handle GNOME Shell Extensions][13] + +你可以在 GNOME 网站中 [安装的扩展部分][14] 下查看所有已安装的扩展。 + +![Manage your installed GNOME Shell Extensions][15] + +使用 GNOME 扩展网站的一个主要优点是你可以查看是否有可用于扩展的更新,你不会在 GNOME Tweaks 或系统更新中获得它。 + +### 3. 手动安装 GNOME Shell 扩展 + +你不需要始终在线才能安装 GNOME Shell 扩展,你可以下载文件并稍后安装,这样就不必使用互联网了。 + +去 GNOME 扩展网站下载最新版本的扩展。 + +![Download GNOME Shell Extension][16] + +解压下载的文件,将该文件夹复制到 **~/.local/share/gnome-shell/extensions** 目录。到主目录下并按 Crl+H 显示隐藏的文件夹,在这里找到 .local 文件夹,你可以找到你的路径,直至扩展目录。 + +一旦你将文件复制到正确的目录后,进入它并打开 metadata.json 文件,寻找 uuid 的值。 + +确保扩展文件夹名称与 metadata.json 中的 uuid 值相同。如果不相同,请将目录重命名为 uuid 的值。 + +![Manually install GNOME Shell extension][17] + +差不多了!现在重新启动 GNOME Shell。 按 Alt+F2 并输入 r 重新启动 GNOME Shell。 + +![Restart GNOME Shell][18] + +同样重新启动 GNOME Tweaks Tool。你现在应该可以在 Tweaks Tool 中看到手动安装 GNOME 扩展,你可以在此处配置或启用新安装的扩展。 + +这就是安装 GNOME Shell 扩展你需要知道的所有内容。 + +## 移除 GNOME Shell 扩展 + +你可能想要删除一个已安装的 GNOME Shell 扩展,这是完全可以理解的。 + +如果你是通过 Web 浏览器安装的,你可以到 [GNOME 网站的以安装的扩展部分][14] 那移除它(如前面的图片所示)。 + +如果你是手动安装的,可以从 ~/.local/share/gnome-shell/extensions 目录中删除扩展文件来删除它。 + +## 特别提示:获得 GNOME Shell 扩展更新的通知 + +到目前为止,你已经意识到除了访问 GNOME 扩展网站之外,无法知道更新是否可用于 GNOME Shell 扩展。 + +幸运的是,有一个 GNOME Shell 扩展可以通知你是否有可用于已安装扩展的更新。你可以从下面的链接中获得它: + +[Extension Update Notifier][19] + +### 你如何管理 GNOME Shell 扩展? + +我觉得很奇怪你不能通过系统更新来更新扩展,就好像 GNOME Shell 扩展不是系统的一部分。 + +如果你正在寻找一些建议,请阅读这篇文章: [关于最佳 GNOME 扩展][20]。同时,你可以分享有关 GNOME Shell 扩展的经验。你经常使用它们吗?如果是,哪些是你最喜欢的? 
+ +-------------------------------------------------------------------------------- + +via: [https://itsfoss.com/gnome-shell-extensions/](https://itsfoss.com/gnome-shell-extensions/) + +作者:[Abhishek Prakash][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/abhishek/ +[1]:https://itsfoss.com/install-themes-ubuntu/ +[2]:https://www.youtube.com/c/itsfoss?sub_confirmation=1 +[3]:https://extensions.gnome.org/ +[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/gnome-shell-extension-weather.jpeg +[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/enableuser-themes-extension-gnome.jpeg +[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/gnome-shell-extension-installation-1.jpeg +[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/gnome-shell-extension-installation-2.jpeg +[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/gnome-shell-extension-installation-3.jpeg +[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/gnome-shell-extension-installation-4.jpeg +[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/gnome-shell-extension-installation-5.jpeg +[12]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/gnome-shell-extension-installation-6.jpeg +[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/gnome-shell-extension-installation-7-800x572.jpeg +[14]:https://extensions.gnome.org/local/ +[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/gnome-shell-extension-installation-8.jpeg +[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/gnome-shell-extension-installation-9-800x456.jpeg +[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/gnome-shell-extension-installation-10-800x450.jpg +[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/restart-gnome-shell-800x299.jpeg +[19]:https://extensions.gnome.org/extension/1166/extension-update-notifier/ +[20]:https://itsfoss.com/best-gnome-extensions/ diff --git a/translated/tech/20171113 My Adventure Migrating Back To Windows.md b/translated/tech/20171113 My Adventure Migrating Back To Windows.md new file mode 100644 index 0000000000..1b2941036f --- /dev/null +++ b/translated/tech/20171113 My Adventure Migrating Back To Windows.md @@ -0,0 +1,119 @@ +我的冒险之旅之迁移回 Windows +====== + +我已经使用 Linux 作为我的主要操作系统大约 10 年了,并且主要使用 Ubuntu。但在最新发布的版本中,我决定重新回到我通常不喜欢的操作系统: Windows 10。 + +![Ubuntu On Windows][1] + +我一直是 Linux 的粉丝,我最喜欢的两个发行版是 debian 和 ubuntu。现今作为一个服务器操作系统,linux 是完美无暇的,但在桌面上一直存在不同程度的问题。 + +最近一系列的问题让我意识到,我不需要使用 linux 作为我的桌面操作系统,我仍然是一个粉丝。所以基于我安装 Ubuntu 17.10 的经验,我已经决定回到 windows。 + +### 什么使我选择了回归 + +问题是,当 Ubuntu 17.10 出来后,我像往常一样进行全新安装,但遇到了一些非常奇怪的新问题。 +* Dell D3100 Dock 不再工作(包括 Work Arounds) +* Ubuntu 意外死机(随机) +* 双击桌面上的图标没反应 +* 使用 HUD 搜索诸如“调整”之类的程序将尝试安装 META 版本 +* GUI 比标准的 GNOME 感觉更糟糕 + +现在我确实考虑回到使用 Ubuntu 16.04 或另一个发行版本,但是我觉得 Unity 7 是最精致的桌面环境,另外唯一一个优雅且稳定的是 Windows 10。 + +除此之外,在 Windows 上使用 Linux 也有特定的支持,如: + +* 大多数装有商用软件不可用,E.G Maya, PhotoShop, Microsoft Office(大多数情况下,替代品并不相同) +* 大多数游戏都没有移植到 Linux 上,包括来自 EA, Rockstar Ect. 
等主要工作室的游戏。 +* 对于大多数硬件来说,驱动程序是选择 Linux 的第二个考虑因素。‘ + +在决定使用 Windows 之前,我确实看过其他发行版和操作系统。 + +与此同时,我看到更多的是“微软热爱 Linux ”的活动,并且了解了 WSL。他们新开发者的聚焦角度对我来说很有意思,于是我试了一下。 + +### 我在 Windows 找到了什么 + +我使用计算机主要是为了编程,我也使用虚拟机,git 和 ssh,并且大部分工作依赖于 bash。我偶尔也会玩游戏,观看 netflix 和一些轻松的办公室工作。 + +总之,我期待在 Ubuntu 中保留当前的工作流程并将其移植到 Windows 上。我也想利用 Windows 的优点。 + +* 所有 PC 游戏支持 Windows +* 大多数程序受本地支持 +* 微软办公软件 + +现在,使用 Windows 有很多警告,但是我打算正确对待它,所以我不担心一般的 windows 故障,例如病毒和恶意软件。 + +### Windows 的子系统 Linux(Windows 上存在 Ubuntu 的 Bash) + +微软与 Canonical 密切合作将 Ubuntu 带到了 Windows 上。在快速设置和启动程序之后,你将拥有非常熟悉的 bash 界面。 + +现在我一直在研究这个问题的局限性,但是在写这篇文章时我碰到的唯一真正的限制是它从硬件中抽象了出来。例如,lsblk 不会显示你有什么分区,因为子系统 Ubuntu 没有提供这些信息。 + +但是除了访问底层工具之外,我发现这种体验非常熟悉,也很棒。 + +我在下面的工作流程中使用了它。 + +* 生成 SSH 密钥对 +* 使用 Git 和 Github 来管理我的仓库 +* SSH 到几个服务器,包括没有密码的 +* 为本地数据库运行 MySQL +* 监视系统资源 +* 使用 VIM 配置文件 +* 运行 Bash 脚本 +* 运行本地 Web 服务器 +* 运行 PHP, NodeJS + +到目前为止,它已经被证明是非常强大的工具,除了在 Windows 10 UI 中。我的工作流程感觉和我在 Ubuntu 上几乎一样。尽管我的多数工作可以在 WSL 中处理,但我仍然打算让虚拟机进行更深入的工作,这可能超出了 WSL 的范围。 + +### 没有 Wine + +我遇到的另一个主要问题是兼容性问题。现在,我很少使用 WINE(译注: wine 是可以使 Linux 上运行 Windows 下的软件)来使用 Windows 软件。尽管通常不是很好,但是有时它是必需的。 + +### HeidiSQL + +我安装的第一个程序之一是 HeidiSQL,它是我最喜欢的数据库客户端之一。它在 wine 下工作,但是感觉很可怕,所以我放弃了 MySQL Workbench。让它回到 Windows 中,就像回来了一个可靠的老朋友。 + +### 游戏平台 / Steam + +没有游戏的 Windows 电脑无法想象。我从 steam 的网站上安装了它,并且使用我的 Linux 目录,以及我的 Windows 目录,这个目录是 Linux 目录的 5 倍大,并且包括 GTA V (译注: GTA V 是一款名叫侠盗飞车的游戏) 等 AAA 级配置。这些是我只能在 Ubuntu 中梦想的东西。 + +现在,我对 SteamOS 有很大的期望,并且一直会持续如此。但是我认为在不久的将来它不会在任何地方的游戏市场中崭露头角。所以如果你想在 PC 上玩游戏,你确实需要 Windows。 + +还有一点需要注意的是, ny nvidia 显卡的驱动程序有很好的支持,这使得像 TF2 (译注: 这是一款名叫军团要塞2的游戏) 这样的一些 Linux 本机游戏运行的稍好一些。 + +** Windows 在游戏方面总是优越的,所以这并不令人感到意外。** + +### 从 USB 硬盘运行,为什么 + +我在我的主要 sss 驱动器上运行 Linux,但在过去,我从 usb 密钥和 usb 硬盘运行(译注:这句话不知道怎么翻译,希望校正者注意。)。我习惯了 Linux 的这种持久性,这让我可以在不丢失主要操作系统的情况下长期尝试多个版本。现在我尝试将 Windows 安装到 USB 连接的硬盘上时,它无法工作并且是无法实现的。所以当我将 Windows HDD 的克隆作为备份时,我很惊讶我可以通过 USB 启动它。 + +这对我来说已经成为一个方便的选择,因为我打算将我的工作笔记本电脑迁移回 Windows,但如果不想冒险,那就把它扔在那里吧。 + +所以我在过去的几天里,我使用 USB 来运行它,除了一些错误的消息外,我没有通过 USB 运行发现它真正的缺点。 + +这样做的显著问题是: +* 较慢的启动速度 +* 恼人的信息:不要拔掉你的 USB +* 无法激活它 + +**我可能会写一篇关于 USB 驱动器上的 Windows 的文章,这样我们可以有更详细的了解。** + +### 那么结论是什么? + +我使用 Windows 10 大约两周了,并没有注意到它对我的工作流程有任何的负面影响。尽管过程会有一些小问题,但我需要的所以工具都在手边,并且操作系统一般都在运行。 + +## 我会留在 Windows吗? 
+ +虽然现在还为时尚早,但我想在可见的未来我会坚持使用 Windows。 + +-------------------------------------------------------------------------------- + +via: [https://www.chris-shaw.com/blog/my-adventure-migrating-back-to-windows](https://www.chris-shaw.com/blog/my-adventure-migrating-back-to-windows) + +作者:[Christopher Shaw][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.chris-shaw.com +[1]:https://winaero.com/blog/wp-content/uploads/2016/07/Ubutntu-on-Windows-10-logo-banner.jpg diff --git a/translated/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md b/translated/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md new file mode 100644 index 0000000000..20e604f357 --- /dev/null +++ b/translated/tech/20171204 5 Tips to Improve Technical Writing for an International Audience.md @@ -0,0 +1,117 @@ + + +提升针对国际读者技术性写作的5个技巧 +============================================================ + + +![documentation](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/typewriter-801921_1920.jpg?itok=faTXFNoE "documentation") + + + +针对国际读者用英语写作很不容易,下面这些小窍门可以记下来。[知识共享许可][2] + + +针对国际读者用英语写文档不需要特别考虑英语母语的人。相反,更应该注意文档的语言可能不是读者的第一语言。我们看看下面这些简单的句子:“加密密码用‘xxx’命令(Encrypt the password using the 'foo bar' command.)” + +从语法上讲,这个句子是正确的。ing形式(动名词)经常在英语中使用,很多英语母语的人应该大概对这样造句没有疑惑。然而,仔细观察,这个句子有歧义:单词“using”可以针对“the password”,也可以针对动词“加密”。因此,这个句子能够有两种不同的理解方式。 + +* Encrypt the password that uses the 'foo bar' command( 加密这个使用xxx命令的密码). +* Encrypt the password by using the 'foo bar' command(使用xxx命令加密密码). + + +关于这个话题(密码加密或者xxx命令),只要你有这方面的知识,你就不会理解错误,而且正确的选择第二种方式才是句子要表达的含义。但是如果你对这个知识点没有概念呢?如果你仅仅是一个翻译者,只有关于文章主题的一般知识,而不是一个技术专家?或者英语不是你的母语而且你不熟悉英语的高级语法形式呢? + +甚至连英语母语的人都可能需要一些训练才能写出简洁明了的技术文档。所以提升对文章适用性和潜在问题的认识是第一步。 + + +这篇文章,基于我在[欧盟开放源码峰会][5]上的演讲,提供了几种有用的技巧。大多数技巧不仅仅针对技术文档,也可以用于日程信函的书写,如邮件或者报告之类的。 + + +**1. 转换视角** + +转换视角,从你的读者出发。首先要了解你潜在的读者。如果你是作为开发人员针对用户写,则从用户的视角来看待你的产品。用户画像(Persona)技术能够帮助你专注于目标受众,而且提供关于你的受众适当的细节信息。 + +**2. 遵守KISS(Keep it short and simple)原则** + +这个原则可以用于几个层次,如语法,句式或者单词。看下面的例子: + +_单词:_ + +罕见的和长的单词会降低阅读速度而且可能会是非母语读者的障碍。使用简单点的单词,如: +“utilize” → “use” +“indicate” → “show”, “tell”, “say” +“prerequisite” → “requirement” + + + + +_语法:_ + +最好使用最简单的时态。举个例子,当提到一个动作的结果时使用一般现在时。如“Click '_OK_' . The _Printer Options_ dialog appears(单击'_ok_'.就弹出_打印选项_对话框了)” + + +_句式:_ + +一般说来,一个句子就表达一个意思。然而在我看来,把句子的长度限制到一定数量的单词是没有用的。短句子不是想着那么容易理解的(特别是一组名词的时候)。有时候把句子剪短到一定单词数量会造成歧义,相应的还会使句子更加难以理解。 + + + + +**3. 当心歧义** + + +作为作者,我们通常没有注意到句子中的歧义。让别人审阅你的文章有助于发现这些问题。如果无法这么做,就尝试从这些不同的视角审视每个句子: +对于没有相关知识背景的读者也能看懂吗?对于语言能力有限的读者也可以吗?所有句子成分间的语法关系清晰吗?如果某个句子没有达到这些要求,重新措辞来解决歧义。 + +**4. 格式统一** + +这个适用于对单词,拼写和标点符号的选择,也是适用于短语和结构的选择。对于列表,使用平行的语法造句。如: + +Why white space is important(为什么空格很重要): + +* It focuses attention(让读者注意力集中). + +* It visually separates sections(让文章章节分割更直观). + +* It splits content into chunks(让文章内容分割为不同块). + +**5. 
清除冗余** + +对目标读者仅保留明确的信息。在句子层面,避免填充(如basically, easily)和没必要的修饰。如: + +"already existing" → "existing" + +"completely new" → "new" + + +##总结 + +你现在应该猜到了,写作就是改。好的文章需要付出和练习。但是如果你仅是偶尔写写,则可以通过专注目标读者和运用基本写作技巧来显著地提升你的文章。 + +文章易读性越高,理解起来越容易,即使针对于不同语言级别的读者也是一样。尤其在本地化翻译方面,高质量的原文是非常重要的,因为“错进错出(原文:Garbage in, garbage out)"。如果原文有不足,翻译时间会更长,导致更高的成本。甚至,这种不足会在翻译过程中成倍的放大而且后面需要在多种语言版本中改正。 + + +![Tanja Roth](https://www.linux.com/sites/lcom/files/styles/floated_images/public/tanja-roth.jpg?itok=eta0fvZC "Tanja Roth") + +Tanja Roth, SUSE Linux公司-技术文档专家 [使用许可][1] + + +_在对语言和技术两方面兴趣的驱动下,Tanja作为一名技术文章的写作者,在机械工程,医学技术和IT领域工作了很多年。她在2005年加入SUSE组织并且贡献了各种各样产品和项目的文章,包括高可用性和云的相关话题。_ + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience?sf175396579=1 + +作者:[TANJA ROTH ][a] +译者:[yizhuoyan](https://github.com/yizhuoyan) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/tanja-roth +[1]:https://www.linux.com/licenses/category/used-permission +[2]:https://www.linux.com/licenses/category/creative-commons-zero +[3]:https://www.linux.com/files/images/tanja-rothjpg +[4]:https://www.linux.com/files/images/typewriter-8019211920jpg +[5]:https://osseu17.sched.com/event/ByIW +[6]:https://en.wikipedia.org/wiki/Persona_(user_experience) diff --git a/translated/tech/20171213 Creating a blog with pelican and Github pages.md b/translated/tech/20171213 Creating a blog with pelican and Github pages.md deleted file mode 100644 index bf3b31857f..0000000000 --- a/translated/tech/20171213 Creating a blog with pelican and Github pages.md +++ /dev/null @@ -1,158 +0,0 @@ -使用 pelican 和 Github pages 来搭建博客 -=============================== - -今天我将谈一下这个博客是如何搭建的。在我们开始之前,我希望你熟悉使用 Github 并且可以搭建一个 Python 虚拟环境来进行开发。如果你不能做到这些,我推荐你去学习一下 [Django Girls 教程][2],它包含以上和更多的内容。 -这是一篇帮助你发布由 Github 来托管个人博客的教程。为此,你需要一个正常的 Github 用户账户 (不是一个工程账户)。 -你要做的第一件事是创建一个放置代码的 Github 仓库。如果你想要你的博客仅仅指向你的用户名 (比如 rsip22.github.io) 而不是一个子文件夹 (比如 rsip22.github.io/blog),你必须创建一个带有全名的仓库。 - -![][3] -*Github 截图,打开了创建新仓库的菜单,正在以'rsip22.github.io'名字创建一个新的仓库* - -我推荐你使用 README,Python 版的 .gitignore 和 [一个免费的软件 license][4] 初始化你的仓库。如果你使用一个免费的软件 license,你仍然拥有代码,但是你要确保他人将从中受益,允许他们学习和复用,并且更重要的是允许他们享有代码。 -既然仓库已经创建好了,那我们就克隆到本机中将用来保存代码的文件夹下: -``` -$ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git -``` -并且切换到新的目录: -``` - $ cd YOUR_USERNAME.github.io -``` -因为 Github Pages 偏好的运行的方式是从 master 分支提供文件,你必须将你的源代码放到新的分支,保护为输出 Pelican 产生的静态文件的"master"分支。为此,你必须创建一个名为"source"的分支。 -``` -$ git checkout -b source -``` -在你的系统中创建一个带有 Pyhton 3 版本的虚拟环境。 -在 GNU/Linux 系统中,命令可能如下: -``` - $ python3 -m venv venv -``` -或者像这样: -``` -$ virtualenv --python=python3.5 venv -``` -并且激活它: -``` - $ source venv/bin/activate -``` -在虚拟环境里,你需要安装 pelican 和它的依赖包。你也应该安装 ghp-import (来帮助我们发布到 Github 上) 和 Markdown (为了使用 markdown 语法来写文章)。它运行如下: -``` -(venv)$ pip install pelican markdown ghp-import -``` -一旦这些完成,你就可以使用 pelican-quickstart 开始创建你的博客了: -``` -(venv)$ pelican-quickstart -``` -这将会提示我们一系列的问题。在回答它们之前,请看一下如下我的答案: -``` - > Where do you want to create your new web site? [.] ./ - > What will be the title of this web site? Renata's blog - > Who will be the author of this web site? Renata - > What will be the default language of this web site? [pt] en - > Do you want to specify a URL prefix? e.g., http://example.com (Y/n) n - > Do you want to enable article pagination? 
(Y/n) y - > How many articles per page do you want? [10] 10 - > What is your time zone? [Europe/Paris] America/Sao_Paulo - > Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!** - > Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n - > Do you want to upload your website using FTP? (y/N) n - > Do you want to upload your website using SSH? (y/N) n - > Do you want to upload your website using Dropbox? (y/N) n - > Do you want to upload your website using S3? (y/N) n - > Do you want to upload your website using Rackspace Cloud Files? (y/N) n - > Do you want to upload your website using GitHub Pages? (y/N) y - > Is this your personal page (username.github.io)? (y/N) y - Done. Your new project is available at /home/username/YOUR_USERNAME.github.io -``` -关于时区,应该指定为 TZ 时区 (这里是全部列表: [tz 数据库时区列表][5])。 -现在,继续往下走并开始创建你的第一篇博文!你可能想在你喜爱的代码编辑器里打开工程目录并且找到里面的"content"文件夹。然后创建一个新文件,它可以被命名为 my-first-post.md (别担心,这只是为了测试,以后你可以改变它)。内容应该以元数据开始,这些元数据标识题目,日期,目录和更多主题之前的文章内容,像下面这样: -``` - .lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes - Title: My first post - Date: 2017-11-26 10:01 - Modified: 2017-11-27 12:30 - Category: misc - Tags: first , misc - Slug: My-first-post - Authors: Your name - Summary: What does your post talk about ? Write here. - - This is the *first post* from my Pelican blog. ** YAY !** -``` -让我们看看它长什么样? -进入终端,产生静态文件并且启动服务器。要这么做,使用下面命令: -``` -(venv)$ make html && make serve -``` -当这条命令正在运行,你应该可以在你喜爱的 web 浏览器地址栏中键入 localhost:8000 来访问它。 - -![][6] -*博客主页的截图。它有一个带有 Renata's blog 标题的头部,第一篇博文在左边,文章的信息在右边,链接和社交在底部* - -相当简洁,对吧? -现在,如果你想在文章中放一张图片,该怎么做呢?好,首先你在放置文章的内容目录里创建一个目录。为了引用简单,我们将这个目录命名为'image'。现在你必须让 Pelican 使用它。找到 pelicanconf.py 文件,这个文件是你配置系统的地方,并且添加一个包含你的图片目录的变量: -``` - .lang="python" # DON'T COPY this line, it exists just for highlighting purposes - STATIC_PATHS = ['images'] -``` -保存它。打开文章并且以如下方式添加图片: -``` - .lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes - ![Write here a good description for people who can ' t see the image]({filename}/images/IMAGE_NAME.jpg) -``` -你可以在终端中随时按下 CTRL+C 来中断服务器。但是你应该再次启动它并检查图片是否正确。你能记住怎么样做吗? -``` -(venv)$ make html && make serve -``` -在你代码完工之前的最后一步:你应该确保任何人都可以使用 ATOM 或 RSS feeds 来读你的文章。找到 pelicanconf.py 文件,这个文件是你配置系统的地方,并且编辑关于 feed 产生的部分: -``` - .lang="python" # DON'T COPY this line, it exists just for highlighting purposes - FEED_ALL_ATOM = 'feeds/all.atom.xml' - FEED_ALL_RSS = 'feeds/all.rss.xml' - AUTHOR_FEED_RSS = 'feeds/%s.rss.xml' - RSS_FEED_SUMMARY_ONLY = False -``` -保存所有,这样你才可以将代码上传到 Github 上。你可以通过添加所有文件,使用一个信息 ('first commit') 来提交它,并且使用 git push。你将会被问起你的 Github 登录名和密码。 -``` - $ git add -A && git commit -a -m 'first commit' && git push --all -``` -And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? 
Now it's time for you to generate them: -还有...记住在最开始的时候,我给你说的怎样保护为输出 Pelican 产生的静态文件的 master 分支。现在对你来说是时候产生它们了: -``` -$ make github -``` -你将会被再次问及 Github 登录名和密码。好了!你的新博客应该创建在 `https://YOUR_USERNAME.github.io`。 - -如果你在过程中任何一步遇到一个错误,请重新读一下这篇手册,尝试并看看你是否能发现错误发生的部分,因为这是调试的第一步。有时甚至一些简单的东西比如一个错字或者 Python 中错误的缩进都可以给我们带来麻烦。说出来并向网上或你的团队求助。 - -对于如何使用 Markdown 来写文章,你可以读一下 [Daring Fireball Markdown 指南][7]。 - -为了获取其它主题,我建议你访问 [Pelican 主题][8]。 - -这篇文章改编自 [Adrien Leger 的使用一个 Bottstrap3 主题来搭建由 Github 托管的 Pelican 博客][9]。 - ------------------------------------------------------------ - -via: https://rsip22.github.io/blog/create-a-blog-with-pelican-and-github-pages.html - -作者:[rsip22][a] -译者:[liuxinyu123](https://github.com/liuxinyu123) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://rsip22.github.io -[1]:https://rsip22.github.io/blog/category/blog.html -[2]:https://tutorial.djangogirls.org -[3]:https://rsip22.github.io/blog/img/create_github_repository.png -[4]:https://www.gnu.org/licenses/license-list.html -[5]:https://en.wikipedia.org/wiki/List_of_tz_database_time_zones -[6]:https://rsip22.github.io/blog/img/blog_screenshot.png -[7]:https://daringfireball.net/projects/markdown/syntax -[8]:http://www.pelicanthemes.com/ -[9]:https://a-slide.github.io/blog/github-pelican - - - - - - diff --git a/translated/tech/20171214 IPv6 Auto-Configuration in Linux.md b/translated/tech/20171214 IPv6 Auto-Configuration in Linux.md deleted file mode 100644 index bc0a2e84f3..0000000000 --- a/translated/tech/20171214 IPv6 Auto-Configuration in Linux.md +++ /dev/null @@ -1,109 +0,0 @@ -在 Linux 中自动配置 IPv6 地址 -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_5.png?itok=3kN83IjL) - -在 [ KVM 中测试 IPv6 网络:第 1 部分][1] 一文中,我们学习了关于唯一本地地址(ULAs)的相关内容。在本文中,我们将学习如何为 ULAs 自动配置 IP 地址。 - -### 何时使用唯一本地地址 - -唯一本地地址使用 fd00::/8 地址块,它类似于我们常用的 IPv4 的私有地址:10.0.0.0/8、172.16.0.0/12、以及 192.168.0.0/16。但它们并不能直接替换。IPv4 的私有地址分类和网络地址转换(NAT)功能是为了缓解 IPv4 地址短缺的问题,这是个明智的解决方案,它延缓了本该被替换的 IPv4 的生命周期。IPv6 也支持 NAT,但是我想不出使用它的理由。IPv6 的地址数量远远大于 IPv4;它是不一样的,因此需要做不一样的事情。 - -那么,ULAs 存在的意义是什么呢?尤其是在我们已经有了本地链路地址(fe80::/10)时,到底需不需要我们去配置它们呢?它们之间(译者注:指的是唯一本地地址和本地链路地址)有两个重要的区别。一是,本地链路地址是不可路由的,因此,你不能跨子网使用它。二是,ULAs 是你自己管理的;你可以自己选择它用于子网的地址范围,并且它们是可路由的。 - -使用 ULAs 的另一个好处是,如果你只是在局域网中“混日子”的话,你不需要为它们分配全局单播 IPv6 地址。当然了,如果你的 ISP 已经为你分配了 IPv6 的全局单播地址,就不需要使用 ULAs 了。你也可以在同一个网络中混合使用全局单播地址和 ULAs,但是,我想不出这样使用的一个好理由,并且要一定确保你不使用网络地址转换以使 ULAs 可公共访问。在我看来,这是很愚蠢的行为。 - -ULAs 是仅为私有网络使用的,并且它会阻塞所有流出你的网络的数据包,不允许进入因特网。这很简单,在你的边界设备上只要阻止整个 fd00::/8 范围的 IPv6 地址即可实现。 - -### 地址自动配置 - -ULAs 不像本地链路地址那样自动配置的,但是使用 radvd 设置自动配置是非常容易的,radva 是路由器公告守护程序。在你开始之前,运行 `ifconfig` 或者 `ip addr show` 去查看你现有的 IP 地址。 - -在生产系统上使用时,你应该将 radvd 安装在一台单独的路由器上,如果只是测试使用,你可以将它安装在你的网络中的任意 Linux PC 上。在我的小型 KVM 测试实验室中,我使用 `apt-get install radvd` 命令把它安装在 Ubuntu 上。安装完成之后,我先不启动它,因为它还没有配置文件: -``` -$ sudo systemctl status radvd -● radvd.service - LSB: Router Advertising Daemon - Loaded: loaded (/etc/init.d/radvd; bad; vendor preset: enabled) - Active: active (exited) since Mon 2017-12-11 20:08:25 PST; 4min 59s ago - Docs: man:systemd-sysv-generator(8) - -Dec 11 20:08:25 ubunut1 systemd[1]: Starting LSB: Router Advertising Daemon... -Dec 11 20:08:25 ubunut1 radvd[3541]: Starting radvd: -Dec 11 20:08:25 ubunut1 radvd[3541]: * /etc/radvd.conf does not exist or is empty. 
-Dec 11 20:08:25 ubunut1 radvd[3541]: * See /usr/share/doc/radvd/README.Debian -Dec 11 20:08:25 ubunut1 radvd[3541]: * radvd will *not* be started. -Dec 11 20:08:25 ubunut1 systemd[1]: Started LSB: Router Advertising Daemon. - -``` - -这些所有的消息有点让人困惑,实际上 radvd 并没有运行,你可以使用经典命令 `ps|grep radvd` 来验证这一点。因此,我们现在需要去创建 `/etc/radvd.conf` 文件。拷贝这个示例,将第一行的网络接口名替换成你自己的接口名字: -``` -interface ens7 { - AdvSendAdvert on; - MinRtrAdvInterval 3; - MaxRtrAdvInterval 10; - prefix fd7d:844d:3e17:f3ae::/64 - { - AdvOnLink on; - AdvAutonomous on; - }; - -}; - -``` - -前缀定义了你的网络地址,它是地址的前 64 位。前两个字符必须是 `fd`,前缀接下来的剩余部分你自己定义它,最后的 64 位留空,因为 radvd 将去分配最后的 64 位。前缀后面的 16 位用来定义子网,剩余的地址定义为主机地址。你的子网必须总是 /64。RFC 4193 要求地址必须随机生成;查看 [在 KVM 中测试 IPv6 Networking:第 1 部分][1] 学习创建和管理 ULAs 的更多知识。 - -### IPv6 转发 - -IPv6 转发必须要启用。下面的命令去启用它,重启后生效: -``` -$ sudo sysctl -w net.ipv6.conf.all.forwarding=1 - -``` - -取消注释或者添加如下的行到 `/etc/sysctl.conf` 文件中,以使它永久生效: -``` -net.ipv6.conf.all.forwarding = 1 -``` - -启动 radvd 守护程序: -``` -$ sudo systemctl stop radvd -$ sudo systemctl start radvd - -``` - -这个示例在我的 Ubuntu 测试系统中遇到了一个怪事;radvd 总是停止,我查看它的状态却没有任何问题,做任何改变之后都需要重新启动 radvd。 - -启动成功后没有任何输出,并且失败也是如此,因此,需要运行 `sudo systemctl radvd status` 去查看它的运行状态。如果有错误,systemctl 会告诉你。一般常见的错误都是 `/etc/radvd.conf` 中的语法错误。 - -在 Twitter 上抱怨了上述问题之后,我学到了一件很酷的技巧:当你运行 ` journalctl -xe --no-pager` 去调试 systemctl 错误时,你的输出将被封装打包,然后,你就可以看到错误信息。 - -现在检查你的主机,查看它们自动分配的新地址: -``` -$ ifconfig -ens7 Link encap:Ethernet HWaddr 52:54:00:57:71:50 - [...] - inet6 addr: fd7d:844d:3e17:f3ae:9808:98d5:bea9:14d9/64 Scope:Global - [...] - -``` - -本文到此为止,下周继续学习如何为 ULAs 管理 DNS,这样你就可以使用一个合适的主机名来代替这些长长的 IPv6 地址。 - -通过来自 Linux 基金会和 edX 的 ["Linux 入门" ][2] 免费课程学习更多 Linux 的知识。 - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2017/12/ipv6-auto-configuration-linux - -作者:[Carla Schroder][a] -译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/cschroder -[1]:https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-1 -[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20180104 How does gdb call functions.md b/translated/tech/20180104 How does gdb call functions.md new file mode 100644 index 0000000000..575563ad3d --- /dev/null +++ b/translated/tech/20180104 How does gdb call functions.md @@ -0,0 +1,254 @@ +gdb 如何调用函数? +============================================================ + +(之前的 gdb 系列文章:[gdb 如何工作(2016)][4] 和[通过 gdb 你能够做的三件事(2014)][5]) + +在这个周,我发现,我可以从 gdb 上调用 C 函数。这看起来很酷,因为在过去我认为 gdb 最多只是一个只读调试工具。 + +我对 gdb 能够调用函数感到很吃惊。正如往常所做的那样,我在 [Twitter][6] 上询问这是如何工作的。我得到了大量的有用答案。我最喜欢的答案是 [Evan Klitzke 的示例 C 代码][7],它展示了 gdb 如何调用函数。代码能够运行,这很令人激动! + +我相信(通过一些跟踪和实验)那个示例 C 代码和 gdb 实际上如何调用函数不同。因此,在这篇文章中,我将会阐述 gdb 是如何调用函数的,以及我是如何知道的。 + +关于 gdb 如何调用函数,还有许多我不知道的事情,并且,在这儿我写的内容有可能是错误的。 + +### 从 gdb 中调用 C 函数意味着什么? 
+ +在开始讲解这是如何工作之前,我先快速的谈论一下我是如何发现这件令人惊讶的事情的。 + +所以,你已经在运行一个 C 程序(目标程序)。你可以运行程序中的一个函数,只需要像下面这样做: + +* 暂停程序(因为它已经在运行中) + +* 找到你想调用的函数的地址(使用符号表) + +* 使程序(目标程序)跳转到那个地址 + +* 当函数返回时,恢复之前的指令指针和寄存器 + +通过符号表来找到想要调用的函数的地址非常容易。下面是一段非常简单但能够工作的代码,我在 Linux 上使用这段代码作为例子来讲解如何找到地址。这段代码使用 [elf crate][8]。如果我想找到 PID 为 2345 的进程中的 foo 函数的地址,那么我可以运行 `elf_symbol_value("/proc/2345/exe", "foo")`。 + +``` +fn elf_symbol_value(file_name: &str, symbol_name: &str) -> Result> { + // 打开 ELF 文件 + let file = elf::File::open_path(file_name).ok().ok_or("parse error")?; + // 在所有的段 & 符号中循环,直到找到正确的那个 + let sections = &file.sections; + for s in sections { + for sym in file.get_symbols(&s).ok().ok_or("parse error")? { + if sym.name == symbol_name { + return Ok(sym.value); + } + } + } + None.ok_or("No symbol found")? +} + +``` + +这并不能够真的发挥作用,你还需要找到文件的内存映射,并将符号偏移量加到文件映射的起始位置。找到内存映射并不困难,它位于 `/proc/PID/maps` 中。 + +总之,找到想要调用的函数地址对我来说很直接,但是其余部分(改变指令指针,恢复寄存器等)看起来就不这么明显了。 + +### 你不能仅仅进行跳转 + +我已经说过,你不能够仅仅找到你想要运行的那个函数地址,然后跳转到那儿。我在 gdb 中尝试过那样做(`jump foo`),然后程序出现了段错误。毫无意义。 + +### 如何从 gdb 中调用 C 函数 + +首先,这是可能的。我写了一个非常简洁的 C 程序,它所做的事只有 sleep 1000 秒,把这个文件命名为 `test.c` : + +``` +#include + +int foo() { + return 3; +} +int main() { + sleep(1000); +} + +``` + +接下来,编译并运行它: + +``` +$ gcc -o test test.c +$ ./test + +``` + +最后,我们使用 gdb 来跟踪 `test` 这一程序: + +``` +$ sudo gdb -p $(pgrep -f test) +(gdb) p foo() +$1 = 3 +(gdb) quit + +``` + +我运行 `p foo()` 然后它运行了这个函数!这非常有趣。 + +### 为什么这是有用的? + +下面是一些可能的用途: + +* 它使得你可以把 gdb 当成一个 C 应答式程序,这很有趣,我想对开发也会有用 + +* 在 gdb 中进行调试的时候展示/浏览复杂数据结构的功能函数(感谢 [@invalidop][1]) + +* [在进程运行时设置一个任意的名字空间][2](我的同事 [nelhage][3] 对此非常惊讶) + +* 可能还有许多我所不知道的用途 + +### 它是如何工作的 + +当我在 Twitter 上询问从 gdb 中调用函数是如何工作的时,我得到了大量有用的回答。许多答案是”你从符号表中得到了函数的地址“,但这并不是完整的答案。 + +有个人告诉了我两篇关于 gdb 如何工作的系列文章:[和本地人一起调试-第一部分][9],[和本地人一起调试-第二部分][10]。第一部分讲述了 gdb 是如何调用函数的(指出了 gdb 实际上完成这件事并不简单,但是我将会尽力)。 + +步骤列举如下: + +1. 停止进程 + +2. 创建一个新的栈框(远离真实栈) + +3. 保存所有寄存器 + +4. 设置你想要调用的函数的寄存器参数 + +5. 设置栈指针指向新的栈框 + +6. 在内存中某个位置放置一条陷阱指令 + +7. 为陷阱指令设置返回地址 + +8. 设置指令寄存器的值为你想要调用的函数地址 + +9. 再次运行进程! + +(LCTT 译注:如果将这个调用的函数看成一个单独的线程,gdb 实际上所做的事情就是一个简单的线程上下文切换) + +我不知道 gdb 是如何完成这些所有事情的,但是今天晚上,我学到了这些所有事情中的其中几件。 + +**创建一个栈框** + +如果你想要运行一个 C 函数,那么你需要一个栈来存储变量。你肯定不想继续使用当前的栈。准确来说,在 gdb 调用函数之前(通过设置函数指针并跳转),它需要设置栈指针到某个地方。 + +这儿是 Twitter 上一些关于它如何工作的猜测: + +> 我认为它在当前栈的栈顶上构造了一个新的栈框来进行调用! + +以及 + +> 你确定是这样吗?它应该是分配一个伪栈,然后临时将 sp (栈指针寄存器)的值改为那个栈的地址。你可以试一试,你可以在那儿设置一个断点,然后看一看栈指针寄存器的值,它是否和当前程序寄存器的值相近? + +我通过 gdb 做了一个试验: + +``` +(gdb) p $rsp +$7 = (void *) 0x7ffea3d0bca8 +(gdb) break foo +Breakpoint 1 at 0x40052a +(gdb) p foo() +Breakpoint 1, 0x000000000040052a in foo () +(gdb) p $rsp +$8 = (void *) 0x7ffea3d0bc00 + +``` + +这看起来符合”gdb 在当前栈的栈顶构造了一个新的栈框“这一理论。因为栈指针(`$rsp`)从 `0x7ffea3d0bca8` 变成了 `0x7ffea3d0bc00` - 栈指针从高地址往低地址长。所以 `0x7ffea3d0bca8` 在 `0x7ffea3d0bc00` 的后面。真是有趣! + +所以,看起来 gdb 只是在当前栈所在位置创建了一个新的栈框。这令我很惊讶! + +**改变指令指针** + +让我们来看一看 gdb 是如何改变指令指针的! 
+ +``` +(gdb) p $rip +$1 = (void (*)()) 0x7fae7d29a2f0 <__nanosleep_nocancel+7> +(gdb) b foo +Breakpoint 1 at 0x40052a +(gdb) p foo() +Breakpoint 1, 0x000000000040052a in foo () +(gdb) p $rip +$3 = (void (*)()) 0x40052a + +``` + +的确是!指令指针从 `0x7fae7d29a2f0` 变为了 `0x40052a`(`foo` 函数的地址)。 + +我盯着输出看了很久,但仍然不理解它是如何改变指令指针的,但这并不影响什么。 + +**如何设置断点** + +上面我写到 `break foo` 。我跟踪 gdb 运行程序的过程,但是没有任何发现。 + +下面是 gdb 用来设置断点的一些系统调用。它们非常简单。它把一条指令用 `cc` 代替了(这告诉我们 `int3` 意味着 `send SIGTRAP` [https://defuse.ca/online-x86-assembler.html][11]),并且一旦程序被打断了,它就把指令恢复为原先的样子。 + +我在函数 `foo` 那儿设置了一个断点,地址为 `0x400528` 。 + +`PTRACE_POKEDATA` 展示了 gdb 如何改变正在运行的程序。 + +``` +// 改变 0x400528 处的指令 +25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003b8e589]) = 0 +25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003cce589) = 0 +// 开始运行程序 +25622 ptrace(PTRACE_CONT, 25618, 0x1, SIG_0) = 0 +// 当到达断点时获取一个信号 +25622 ptrace(PTRACE_GETSIGINFO, 25618, NULL, {si_signo=SIGTRAP, si_code=SI_KERNEL, si_value={int=-1447215360, ptr=0x7ffda9bd3f00}}) = 0 +// 将 0x400528 处的指令更改为之前的样子 +25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003cce589]) = 0 +25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003b8e589) = 0 + +``` + +**在某处放置一条陷阱指令** + +当 gdb 运行一个函数的时候,它也会在某个地方放置一条陷阱指令。这是其中一条。它基本上是用 `cc` 来替换一条指令(`int3`)。 + +``` +5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0 +5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0 +5908 ptrace(PTRACE_POKEDATA, 5810, 0x7f6fa7c0b260, 0x48f389fd894853cc) = 0 + +``` + +`0x7f6fa7c0b260` 是什么?我查看了进程的内存映射,发现它位于 `/lib/x86_64-linux-gnu/libc-2.23.so` 中的某个位置。这很奇怪,为什么 gdb 将陷阱指令放在 libc 中? + +让我们看一看里面的函数是什么,它是 `__libc_siglongjmp` 。其他 gdb 放置陷阱指令的地方的函数是 `__longjmp` 、`___longjmp_chk` 、`dl_main` 和 `_dl_close_worker` 。 + +为什么?我不知道!也许出于某种原因,当函数 `foo()` 返回时,它调用 `longjmp` ,从而 gdb 能够进行返回控制。我不确定。 + +### gdb 如何调用函数是很复杂的! + +我将要在这儿停止了(现在已经凌晨 1 点),但是我知道的多一些了! 
+ +看起来”gdb 如何调用函数“这一问题的答案并不简单。我发现这很有趣并且努力找出其中一些答案,希望你也能够找到。 + +我依旧有很多未回答的问题,关于 gdb 是如何完成这些所有事的,但是可以了。我不需要真的知道关于 gdb 是如何工作的所有细节,但是我很开心,我有了一些进一步的理解。 + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/ + +作者:[Julia Evans][a] +译者:[ucasFL](https://github.com/ucasFL) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/ +[1]:https://twitter.com/invalidop/status/949161146526781440 +[2]:https://github.com/baloo/setns/blob/master/setns.c +[3]:https://github.com/nelhage +[4]:https://jvns.ca/blog/2016/08/10/how-does-gdb-work/ +[5]:https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/ +[6]:https://twitter.com/b0rk/status/948060808243765248 +[7]:https://github.com/eklitzke/ptrace-call-userspace/blob/master/call_fprintf.c +[8]:https://cole14.github.io/rust-elf +[9]:https://www.cl.cam.ac.uk/~srk31/blog/2016/02/25/#native-debugging-part-1 +[10]:https://www.cl.cam.ac.uk/~srk31/blog/2017/01/30/#native-debugging-part-2 +[11]:https://defuse.ca/online-x86-assembler.htm diff --git a/translated/tech/20180110 Best Linux Screenshot and Screencasting Tools.md b/translated/tech/20180110 Best Linux Screenshot and Screencasting Tools.md deleted file mode 100644 index 277ded9f69..0000000000 --- a/translated/tech/20180110 Best Linux Screenshot and Screencasting Tools.md +++ /dev/null @@ -1,140 +0,0 @@ -Linux 最好的图片截取和视频截录工具 -====== -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/best-linux-screenshot-and-screencasting-tools_orig.jpg) - -这里可能有一个困扰你多时的问题,当你想要获取一张屏幕截图向开发者反馈问题,或是在 _Stack Overflow_ 寻求帮助时,你可能缺乏一个可靠的屏幕截图工具去保存和发送集截图。GNOME 有一些形如程序和 shell 拓展的工具。不必担心,这里有 Linux 最好的屏幕截图工具,供你截取图片或截录视频。 - -## Linux 最好的图片截取和视频截录工具 - -### 1. Shutter - - [![shutter Linux 截图工具](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg)][2] - -[Shutter][3] 可以截取任意你想截取的屏幕,是 Linux 最好的截屏工具之一。得到截屏之后,它还可以在保存截屏之前预览图片。GNOME 面板顶部有一个 Shutter 拓展菜单,使得用户进入软件变得更人性化。 - -你可以选择性的截取窗口、桌面、光标下的面板、自由内容、菜单、提示框或网页。Shutter 允许用户直接上传屏幕截图到设置内首选的云服务器中。它同样允许用户在保存截图之前编辑器图片;同样提供可自由添加或移除的插件。 - -终端内键入下列命令安装此工具: - -``` -sudo add-apt-repository -y ppa:shutter/ppa -sudo apt-get update && sudo apt-get install shutter -``` - -### 2. Vokoscreen - - [![vokoscreen Linux 屏幕录制工具](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg)][4] - - -[Vokoscreen][5] 是一款允许记录和叙述屏幕活动的一款软件。它有一个简洁的界面,界面的顶端包含有一个简明的菜单栏,方便用户开始录制视频。 - -你可以选择记录整个屏幕,或是记录一个窗口,抑或是记录一个自由区域,并且自定义保存类型;你甚至可以将屏幕录制记录保存为 gif 文件。当然,你也可以使用网络摄像头记录自己的情况,将自己转换成学习者。一旦你这么做了,你就可以在应用程序中回放视频记录。 - - [![vokoscreen preferences](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg)][6] - -你可以安装自己仓库的 Vocoscreen 发行版,或者你也可以在 [pkgs.org][7] 选择下载你需要的发行版。 - -``` -sudo dpkg -i vokoscreen_2.5.0-1_amd64.deb -``` - -### 3. 
OBS - - [![obs Linux 视频截录](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg)][8] - -[OBS][9] 可以用来录制自己的屏幕亦可用来录制互联网上的数据流。它允许你看到自己所录制的内容或者当你叙述时的屏幕录制。它允许你根据喜好选择录制视频的品质;它也允许你选择文件的保存类型。除了视频录制功能之外,你还可以切换到 Studio 模式,不借助其他软件编辑视频。要在你的 Linux 系统中安装 OBS,你必须确保你的电脑已安装 FFmpeg。ubuntu 14.04 或更早的版本安装 FFmpeg 可以使用如下命令: - -``` -sudo add-apt-repository ppa:kirillshkrogalev/ffmpeg-next - -sudo apt-get update && sudo apt-get install ffmpeg -``` - -ubuntu 15.04 以及之后的版本,你可以在终端中键入如下命令安装 FFmpeg: - -``` -sudo apt-get install ffmpeg -``` - -​如果 GGmpeg 安装完成,在终端中键入如下安装 OBS: - -``` -sudo add-apt-repository ppa:obsproject/obs-studio - -sudo apt-get update - -sudo apt-get install obs-studio -``` - -### 4. Green Recorder - - [![屏幕录制工具](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg)][10] - -[Green recorder][11] 是一款基于接口的简单程序,它可以让你记录屏幕。你可以选择包括视频和单纯的音频在内的录制内容,也可以显示鼠标指针,甚至可以跟随鼠标录制视频。同样,你可以选择记录窗口或是自由区域,以便于在自己的记录中保留需要的内容;你还可以自定义保存视频的帧数。如果你想要延迟录制,它提供给你一个选项可以设置出你想要的延迟时间。它还提供一个录制结束的命令运行选项,这样,就可以在视频录制结束后立即运行。​ - -在终端中键入如下命令来安装 green recorder: - -``` -sudo add-apt-repository ppa:fossproject/ppa - -sudo apt update && sudo apt install green-recorder -``` - -### 5. Kazam - - [![kazam screencasting tool for linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg)][12] - -[Kazam][13] 在几乎所有使用截图工具的 Linux 用户中,都十分流行。这是一款简单直观的软件,它可以让你做一个屏幕截图或是视频录制也同样允许在屏幕截图或屏幕录制之前设置延时。它可以让你选择录制区域,窗口或是你想要抓取的整个屏幕。Kazam 的界面接口部署的非常好,和其他软件相比毫无复杂感。它的特点,就是让你优雅的截图。Kazam 在系统托盘和菜单中都有图标,无需打开应用本身,你就可以开始屏幕截图。​​ - -终端中键入如下命令来安装 Kazam: - -``` -sudo apt-get install kazam -``` - -​如果没有找到 PPA,你需要使用下面的命令安装它: - -``` -sudo add-apt-repository ppa:kazam-team/stable-series - -sudo apt-get update && sudo apt-get install kazam -``` - -### 6. 
GNOME 拓展截屏工具 - - [![gnome screenshot extension](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg)][1] - -GNOME 的一个拓展软件就叫做 screenshot tool,它常驻系统面板,如果你没有设置禁用它。由于它是常驻系统面板的软件,所以它会一直等待你的调用,获取截图,方便和容易获取是它最主要的特点,除非你在系统工具禁用,否则它将一直在你的系统面板中。这个工具也有用来设置首选项的选项窗口。在 extensions.gnome.org 中搜索“_Screenshot Tool_”,在你的 GNOME 中安装它。 - -你需要安装 gnome 拓展,chrome 拓展和 GNOME 调整工具才能使用这个工具。 - - [![gnome screenshot 拓展选项](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg)][14] - -当你碰到一个问题,不知道怎么处理,想要在 [the Linux community][15] 或者其他开发社区分享、寻求帮助的的时候, **Linux 截图工具** 尤其合适。学习开发、程序或者其他任何事物都会发现这些工具在分享截图的时候真的很实用。Youtube 用户和教程制作爱好者会发现视频截录工具真的很适合录制可以发表的教程。 - --------------------------------------------------------------------------------- - -via: http://www.linuxandubuntu.com/home/best-linux-screenshot-screencasting-tools - -作者:[linuxandubuntu][a] -译者:[CYLeft](https://github.com/CYLeft) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxandubuntu.com -[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg -[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg -[3]:http://shutter-project.org/ -[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg -[5]:https://github.com/vkohaupt/vokoscreen -[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg -[7]:https://pkgs.org/download/vokoscreen -[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg -[9]:https://obsproject.com/ -[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg -[11]:https://github.com/foss-project/green-recorder -[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg -[13]:https://launchpad.net/kazam -[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg -[15]:http://www.linuxandubuntu.com/home/top-10-communities-to-help-you-learn-linux diff --git a/translated/tech/20180116 Analyzing the Linux boot process.md b/translated/tech/20180116 Analyzing the Linux boot process.md deleted file mode 100644 index 35e6201b55..0000000000 --- a/translated/tech/20180116 Analyzing the Linux boot process.md +++ /dev/null @@ -1,260 +0,0 @@ -Linux 启动过程分析 -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_boot.png?itok=FUesnJQp) - -图片由企鹅和靴子“赞助”,由 Opensource.com 修改。CC BY-SA 4.0。 - -关于开源软件最古老的笑话是:“代码是自文档化的(self-documenting)”。经验表明,阅读源代码就像听天气预报一样:明智的人依然出门会看看室外的天气。本文讲述了如何运用调试工具来观察和分析 Linux 系统的启动。分析一个正常的系统启动过程,有助于用户和开发人员应对不可避免的故障。 - -从某些方面看,启动过程非常简单。内核在单核上启动单线程和同步,似乎可以理解。但内核本身是如何启动的呢?[initrd(initial ramdisk)][1]和引导程序(bootloaders)具有哪些功能?还有,为什么以太网端口上的 LED 灯是常亮的呢? 
- -请继续阅读寻找答案。GitHub 也提供了 [介绍演示和练习的代码][2]。 - -### 启动的开始:OFF 状态 - -#### 局域网唤醒(Wake-on-LAN) - -OFF 状态表示系统没有上电,没错吧?表面简单,其实不然。例如,如果系统启用连局域网唤醒机制(WOL),以太网指示灯将亮起。通过以下命令来检查是否是这种情况: - -``` - $# sudo ethtool -``` - -其中 `` 是网络接口的名字,比如 `eth0`。(`ethtool` 可以在同名的 Linux 软件包中找到。)如果输出中的 “Wake-on” 显示 “g”,则远程主机可以通过发送 [魔法数据包(MagicPacket)][3] 来启动系统。如果您无意远程唤醒系统,也不希望其他人这样做,请在系统 BIOS 菜单中将 WOL 关闭,或者用以下方式: - -``` -$# sudo ethtool -s wol d -``` - -响应魔法数据包的处理器可能是网络接口的一部分,也可能是 [底板管理控制器(Baseboard Management Controller,BMC)][4]。 - -#### 英特尔管理引擎、平台路径控制器和 Minix - -BMC 不是唯一的在系统关闭时仍在监听的微控制器(MCU)。x86_64 系统还包含了用于远程管理系统的英特尔管理引擎(IME)软件套件。从服务器到笔记本电脑,各种各样的设备都包含了这项技术,开启了如 KVM 远程控制和英特尔功能许可服务等 [功能][5]。根据 [Intel 自己的检测工具][7],[IME 存在尚未修补的漏洞][6]。坏消息是,要禁用 IME 很难。Trammell Hudson 发起了一个 [me_cleaner 项目][8],它可以清除一些相对恶劣的 IME 组件,比如嵌入式 Web 服务器,但也可能会影响运行它的系统。 - -IME 固件和系统管理模式(SMM)软件是 [基于 Minix 操作系统][9] 的,并运行在单独的平台路径控制器上,而不是主 CPU 上。然后,SMM 启动位于主处理器上的通用可扩展固件接口(UEFI)软件,相关内容 [已被提及很多][10]。Google 的 Coreboot 小组已经启动了一个雄心勃勃的 [非扩展性缩减版固件][11](NERF)项目,其目的不仅是要取代 UEFI,还要取代早期的 Linux 用户空间组件,如 systemd。在我们等待这些新成果的同时,Linux 用户现在就可以从 Purism、System76 或 Dell 等处购买 [禁用了 IME][12] 的笔记本电脑,另外 [带有 ARM 64 位处理器笔记本电脑][13] 还是值得期待的。 - -#### -#### 引导程序 - -除了启动问题不断的间谍软件外,早期的引导固件还有什么功能呢?引导程序的作用是为新上电的处理器提供运行像 Linux 之类的通用操作系统所需的资源。在开机时,不但没有虚拟内存,在控制器启动之前连 DRAM 也没有。然后,引导程序打开电源,并扫描总线和接口,以定位到内核镜像和根文件系统的位置。U-Boot 和 GRUB 等常见的引导程序支持 USB、PCI 和 NFS 等接口,以及更多的嵌入式专用设备,如 NOR 和 NAND 闪存。引导程序还与 [可信平台模块][14](TPMs)等硬件安全设备进行交互,在启动最开始建立信任链。 - -![Running the U-boot bootloader][16] - -在构建主机上的沙盒中运行 U-boot 引导程序。 - -包括树莓派、任天堂设备、汽车板和 Chromebook 在内的系统都支持广泛使用的开源引导程序 [U-Boot][17]。它没有系统日志,当发生问题时,甚至没有任何控制台输出。为了便于调试,U-Boot 团队提供了一个沙盒,可以在构建主机甚至是夜间的持续整合(Continuous Integration)系统上测试补丁程序。如果系统上安装了 Git 和 GNU Compiler Collection(GCC)等通用的开发工具,使用 U-Boot 沙盒会相对简单: - -``` - - -$# git clone git://git.denx.de/u-boot; cd u-boot - -$# make ARCH=sandbox defconfig - -$# make; ./u-boot - -=> printenv - -=> help -``` - -在 x86_64 上运行 U-Boot,可以测试一些棘手的功能,如 [模拟存储设备][2] 重新分区、基于 TPM 的密钥操作以及 USB 设备热插拔等。U-Boot 沙盒甚至可以在 GDB 调试器下单步执行。使用沙盒进行开发的速度比将引导程序刷新到电路板上的测试快 10 倍,并且可以使用 Ctrl + C 恢复一个“变砖”的沙盒。 - -### 启动内核 - -#### 配置引导内核 - -完成任务后,引导程序将跳转到已加载到主内存中的内核代码,并开始执行,传递用户指定的任何命令行选项。内核是什么样的程序呢?用命令 `file /boot/vmlinuz` 可以看到它是一个“bzImage”,意思是一个大的压缩的镜像。Linux 源代码树包含了一个可以解压缩这个文件的工具—— [extract-vmlinux][18]: - -``` - - -$# scripts/extract-vmlinux /boot/vmlinuz-$(uname -r) > vmlinux - -$# file vmlinux - -vmlinux: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically - -linked, stripped -``` - -内核是一个 [可执行与可链接格式][19](ELF)的二进制文件,就像 Linux 的用户空间程序一样。这意味着我们可以使用 `binutils` 包中的命令,如 `readelf` 来检查它。比较一下输出,例如: - -``` - - -$# readelf -S /bin/date - -$# readelf -S vmlinux -``` - -这两个文件中的段内容大致相同。 - -所以内核必须像其他的 Linux ELF 文件一样启动,但用户空间程序是如何启动的呢?在 `main()` 函数中?并不确切。 - -在 `main()` 函数运行之前,程序需要一个执行上下文,包括堆栈内存以及 `stdio`、`stdout` 和 `stderr` 的文件描述符。用户空间程序从标准库(多数 Linux 系统在用“glibc”)中获取这些资源。参照以下输出: - -``` - - -$# file /bin/date - -/bin/date: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically - -linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, - -BuildID[sha1]=14e8563676febeb06d701dbee35d225c5a8e565a, - -stripped -``` - -ELF 二进制文件有一个解释器,就像 Bash 和 Python 脚本一样,但是解释器不需要像脚本那样用 `#!` 指定,因为 ELF 是 Linux 的原生格式。ELF 解释器通过调用 `_start()` 函数来用所需资源 [配置一个二进制文件][20],这个函数可以从 glibc 源代码包中找到,可以 [用 GDB 查看][21]。内核显然没有解释器,必须自我配置,这是怎么做到的呢? 
- -用 GDB 检查内核的启动给出了答案。首先安装内核的调试软件包,内核中包含一个未剥离的(unstripped)vmlinux,例如 `apt-get install linux-image-amd64-dbg`,或者从源代码编译和安装你自己的内核,可以参照 [Debian Kernel Handbook][22] 中的指令。`gdb vmlinux` 后加 `info files` 可显示 ELF 段 `init.text`。在 `init.text` 中用 `l *(address)` 列出程序执行的开头,其中 `address` 是 `init.text` 的十六进制开头。用 GDB 可以看到 x86_64 内核从内核文件 [arch/x86/kernel/head_64.S][23] 开始启动,在这个文件中我们找到了汇编函数 `start_cpu0()`,以及一段明确的代码显示在调用 `x86_64 start_kernel()` 函数之前创建了堆栈并解压了 zImage。ARM 32 位内核也有类似的文件 [arch/arm/kernel/head.S][24]。`start_kernel()` 不针对特定的体系结构,所以这个函数驻留在内核的 [init/main.c][25] 中。`start_kernel()` 可以说是 Linux 真正的 `main()` 函数。 - -### 从 start_kernel() 到 PID 1 - -#### 内核的硬件清单:设备树和 ACPI 表 - -在引导时,内核需要硬件信息,不仅仅是已编译过的处理器类型。代码中的指令通过单独存储的配置数据进行扩充。有两种主要的数据存储方法:[设备树][26] 和 [高级配置和电源接口(ACPI)表][27]。内核通过读取这些文件了解每次启动时需要运行的硬件。 - -对于嵌入式设备,设备树是已安装硬件的清单。设备树只是一个与内核源代码同时编译的文件,通常与 `vmlinux` 一样位于 `/boot` 目录中。要查看 ARM 设备上的设备树的内容,只需对名称与 `/boot/*.dtb` 匹配的文件执行 `binutils` 包中的 `strings` 命令即可,`dtb` 是指一个设备树二进制文件。显然,只需编辑构成它的类 JSON 文件并重新运行随内核源代码提供的特殊 `dtc` 编译器即可修改设备树。虽然设备树是一个静态文件,其文件路径通常由命令行引导程序传递给内核,但近年来增加了一个 [设备树覆盖][28] 的功能,内核在启动后可以动态加载热插拔的附加设备。 - -x86 系列和许多企业级的 ARM64 设备使用 [ACPI][27] 机制。与设备树不同的是,ACPI 信息存储在内核在启动时通过访问板载 ROM 而创建的 `/sys/firmware/acpi/tables` 虚拟文件系统中。读取 ACPI 表的简单方法是使用 `acpica-tools` 包中的 `acpidump` 命令。例如: - -![ACPI tables on Lenovo laptops][30] - - -联想笔记本电脑的 ACPI 表都是为 Windows 2001 设置的。 - -是的,你的 Linux 系统已经准备好用于 Windows 2001 了,你要考虑安装吗?与设备树不同,ACPI 具有方法和数据,而设备树更多地是一种硬件描述语言。ACPI 方法在启动后仍处于活动状态。例如,运行 `acpi_listen` 命令(在 `apcid` 包中),然后打开和关闭笔记本机盖会发现 ACPI 功能一直在运行。暂时地和动态地 [覆盖 ACPI 表][31] 是可能的,而永久地改变它需要在引导时与 BIOS 菜单交互或刷新 ROM。如果你遇到那么多麻烦,也许你应该 [安装 coreboot][32],这是开源固件的替代品。 - -#### 从 start_kernel() 到用户空间 - -[init/main.c][25] 中的代码竟然是可读的,而且有趣的是,它仍然在使用 1991 - 1992 年的 Linus Torvalds 的原始版权。在一个刚启动的系统上运行 `dmesg | head`,其输出主要来源于此文件。第一个 CPU 注册到系统中,全局数据结构被初始化,并且调度程序、中断处理程序(IRQ)、定时器和控制台按照严格的顺序逐一启动。在 `timekeeping_init()` 函数运行之前,所有的时间戳都是零。内核初始化的这部分是同步的,也就是说执行只发生在一个线程中,在最后一个完成并返回之前,没有任何函数会被执行。因此,即使在两个系统之间,`dmesg` 的输出也是完全可重复的,只要它们具有相同的设备树或 ACPI 表。Linux 的行为就像在 MCU 上运行的 RTOS(实时操作系统)一样,如 QNX 或 VxWorks。这种情况持续存在于函数 `rest_init()` 中,该函数在终止时由 `start_kernel()` 调用。 - -![Summary of early kernel boot process.][34] - -早期的内核启动流程 - -函数 `rest_init()` 产生了一个新进程以运行 `kernel_init()`,并调用了 `do_initcalls()`。用户可以通过将 `initcall_debug` 附加到内核命令行来监控 `initcalls`,这样每运行一次 `initcall` 函数就会产生 `dmesg` 条目。`initcalls` 会历经七个连续的级别:early、core、postcore、arch、subsys、fs、device 和 late。`initcalls` 最为用户可见的部分是所有处理器外围设备的探测和设置:总线、网络、存储和显示器等等,同时加载其内核模块。`rest_init()` 也会在引导处理器上产生第二个线程,它首先运行 `cpu_idle()`,然后等待调度器分配工作。 - -`kernel_init()` 也可以 [设置对称多处理(SMP)结构][35]。在较新的内核中,如果 `dmesg` 的输出中出现“启动第二个 CPU...”等字样,系统便使用了 SMP。SMP 通过“热插拔”CPU 来进行,这意味着它用状态机来管理其生命周期,这种状态机在概念上类似于热插拔的 U 盘一样。内核的电源管理系统经常会使某个核(core)离线,然后根据需要将其唤醒,以便在不忙的机器上反复调用同一段的 CPU 热插拔代码。观察电源管理系统调用 CPU 热插拔代码的 [BCC 工具][36] 称为 `offcputime.py`。 - -请注意,`init/main.c` 中的代码在 `smp_init()` 运行时几乎已执行完毕:引导处理器已经完成了大部分其他核无需重复的一次性初始化操作。尽管如此,跨 CPU 的线程仍然要在每个核上生成,以管理每个核的中断(IRQ)、工作队列、定时器和电源事件。例如,通过 `ps -o psr` 命令可以查看服务 softirqs 和 workqueues 在每个 CPU 上的线程。 - -``` - - -$\# ps -o pid,psr,comm $(pgrep ksoftirqd) - - PID PSR COMMAND - - 7 0 ksoftirqd/0 - - 16 1 ksoftirqd/1 - - 22 2 ksoftirqd/2 - - 28 3 ksoftirqd/3 - - - -$\# ps -o pid,psr,comm $(pgrep kworker) - -PID PSR COMMAND - - 4 0 kworker/0:0H - - 18 1 kworker/1:0H - - 24 2 kworker/2:0H - - 30 3 kworker/3:0H - -[ . . . ] -``` - -其中,PSR 字段代表“处理器”。每个核还必须拥有自己的定时器和 `cpuhp` 热插拔处理程序。 - -那么用户空间是如何启动的呢?在最后,`kernel_init()` 寻找可以代表它执行 `init` 进程的 `initrd`。如果没有找到,内核直接执行 `init` 本身。那么为什么需要 `initrd` 呢? - -#### 早期的用户空间:谁规定要用 initrd? 
- -除了设备树之外,在启动时可以提供给内核的另一个文件路径是 `initrd` 的路径。`initrd` 通常位于 `/boot` 目录中,与 x86 系统中的 bzImage 文件 vmlinuz 一样,或是与 ARM 系统中的 uImage 和设备树相同。用 `initramfs-tools-core` 软件包中的 `lsinitramfs` 工具可以列出 `initrd` 的内容。发行版的 `initrd` 方案包含了最小化的 `/bin`、`/sbin` 和 `/etc` 目录以及内核模块,还有 `/scripts` 中的一些文件。所有这些看起来都很熟悉,因为 `initrd` 大致上是一个简单的最小化 Linux 根文件系统。看似相似,其实不然,因为位于虚拟内存盘中的 `/bin` 和 `/sbin` 目录下的所有可执行文件几乎都是指向 [BusyBox binary][38] 的符号链接,由此导致 `/bin` 和 `/sbin` 目录比 glibc 的小 10 倍。 - -如果要做的只是加载一些模块,然后在普通的根文件系统上启动 `init`,为什么还要创建一个 `initrd` 呢?想想一个加密的根文件系统,解密可能依赖于加载一个位于根文件系统 `/lib/modules` 的内核模块,当然还有 `initrd` 中的。加密模块可能被静态地编译到内核中,而不是从文件加载,但有多种原因不希望这样做。例如,用模块静态编译内核可能会使其太大而不能适应存储空间,或者静态编译可能会违反软件许可条款。不出所料,存储、网络和人类输入设备(HID)驱动程序也可能存在于 `initrd` 中。`initrd` 基本上包含了任何挂载根文件系统所必需的非内核代码。`initrd` 也是用户存放 [自定义ACPI][38] 表代码的地方。 - -![Rescue shell and a custom initrd.][40] - -救援模式的 shell 和自定义的 `initrd` 还是很有意思的。 - -`initrd` 对测试文件系统和数据存储设备也很有用。将这些测试工具存放在 `initrd` 中,并从内存中运行测试,而不是从被测对象中运行。 - -最后,当 `init` 开始运行时,系统就启动啦!由于辅助处理器正在运行,机器已经成为我们所熟知和喜爱的异步、可抢占、不可预测和高性能的生物。的确,`ps -o pid,psr,comm -p 1` 很容易显示已不在引导处理器上运行的用户空间的 `init` 进程。 - -### Summary -### 总结 - -Linux 引导过程听起来或许令人生畏,即使考虑到简单嵌入式设备上的软件数量。换个角度来看,启动过程相当简单,因为启动中没有抢占、RCU 和竞争条件等扑朔迷离的复杂功能。只关注内核和 PID 1 会忽略了引导程序和辅助处理器为运行内核执行的大量准备工作。虽然内核在 Linux 程序中是独一无二的,但通过一些检查 ELF 文件的工具也可以了解其结构。学习一个正常的启动过程,可以帮助运维人员处理启动的故障。 - -要了解更多信息,请参阅 Alison Chaiken 的演讲——[Linux: The first second][41],将在 1 月 22 日至 26 日在悉尼举行。参见 [linux.conf.au][42]。 - -感谢 [Akkana Peck][43] 的提议和指正。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/analyzing-linux-boot-process - -作者:[Alison Chaiken][a] -译者:[jessie-pang](https://github.com/jessie-pang) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/don-watkins -[1]:https://en.wikipedia.org/wiki/Initial_ramdisk -[2]:https://github.com/chaiken/LCA2018-Demo-Code -[3]:https://en.wikipedia.org/wiki/Wake-on-LAN -[4]:https://lwn.net/Articles/630778/ -[5]:https://www.youtube.com/watch?v=iffTJ1vPCSo&amp;amp;amp;amp;amp;index=65&amp;amp;amp;amp;amp;list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk -[6]:https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00086&amp;amp;amp;amp;amp;languageid=en-fr -[7]:https://www.intel.com/content/www/us/en/support/articles/000025619/software.html -[8]:https://github.com/corna/me_cleaner -[9]:https://lwn.net/Articles/738649/ -[10]:https://lwn.net/Articles/699551/ -[11]:https://trmm.net/NERF -[12]:https://www.extremetech.com/computing/259879-dell-now-shipping-laptops-intels-management-engine-disabled -[13]:https://lwn.net/Articles/733837/ -[14]:https://linuxplumbersconf.org/2017/ocw/events/LPC2017/tracks/639 -[15]:/file/383501 -[16]:https://opensource.com/sites/default/files/u128651/linuxboot_1.png "Running the U-boot bootloader" -[17]:http://www.denx.de/wiki/DULG/Manual -[18]:https://github.com/torvalds/linux/blob/master/scripts/extract-vmlinux -[19]:http://man7.org/linux/man-pages/man5/elf.5.html -[20]:https://0xax.gitbooks.io/linux-insides/content/Misc/program_startup.html -[21]:https://github.com/chaiken/LCA2018-Demo-Code/commit/e543d9812058f2dd65f6aed45b09dda886c5fd4e -[22]:http://kernel-handbook.alioth.debian.org/ -[23]:https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/head_64.S -[24]:https://github.com/torvalds/linux/blob/master/arch/arm/boot/compressed/head.S -[25]:https://github.com/torvalds/linux/blob/master/init/main.c 
-[26]:https://www.youtube.com/watch?v=m_NyYEBxfn8 -[27]:http://events.linuxfoundation.org/sites/events/files/slides/x86-platform.pdf -[28]:http://lwn.net/Articles/616859/ -[29]:/file/383506 -[30]:https://opensource.com/sites/default/files/u128651/linuxboot_2.png "ACPI tables on Lenovo laptops" -[31]:https://www.mjmwired.net/kernel/Documentation/acpi/method-customizing.txt -[32]:https://www.coreboot.org/Supported_Motherboards -[33]:/file/383511 -[34]:https://opensource.com/sites/default/files/u128651/linuxboot_3.png "Summary of early kernel boot process." -[35]:http://free-electrons.com/pub/conferences/2014/elc/clement-smp-bring-up-on-arm-soc -[36]:http://www.brendangregg.com/ebpf.html -[37]:https://www.busybox.net/ -[38]:https://www.mjmwired.net/kernel/Documentation/acpi/initrd_table_override.txt -[39]:/file/383516 -[40]:https://opensource.com/sites/default/files/u128651/linuxboot_4.png "Rescue shell and a custom initrd." -[41]:https://rego.linux.conf.au/schedule/presentation/16/ -[42]:https://linux.conf.au/index.html -[43]:http://shallowsky.com/ \ No newline at end of file diff --git a/translated/tech/20180123 Migrating to Linux- The Command Line.md b/translated/tech/20180123 Migrating to Linux- The Command Line.md new file mode 100644 index 0000000000..4d9d98f0f9 --- /dev/null +++ b/translated/tech/20180123 Migrating to Linux- The Command Line.md @@ -0,0 +1,189 @@ +迁徙到 Linux:命令行环境 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/migrate.jpg?itok=2PBkvV7s) + +这是关于迁徙到 Linux 系列的第四篇文章了。如果您错过了之前的内容,可以回顾我们之前谈到的内容 [新手之 Linux][1]、[文件和文件系统][2]、和 [图形环境][3]。Linux 无处不在,它可以运行在大部分的网络服务器,如 web、email 和其他服务器;它同样可以在您的手机、汽车控制台和其他很多设备上使用。现在,您可能会开始好奇 Linux 系统,并对学习 Linux 的工作原理萌发兴趣。 + +在 Linux 下,命令行非常实用。Linux 的桌面系统中,尽管命令行只是可选操作,但是您依旧能看见很多朋友开着一个命令行窗口和其他应用窗口并肩作战。在运行 Linux 系统的网络服务器中,命令行通常是唯一能直接与操作系统交互的工具。因此,命令行是有必要了解的,至少应当涉猎一些基础命令。 + +在命令行(通常称之为 Linux shell)中,所有操作都是通过键入命令完成。您可以执行查看文件列表、移动文件位置、显示文件内容、编辑文件内容等一系列操作,通过命令行,您甚至可以查看网页中的内容。 + +如果您在 Windows(CMD 或者 PowerShell) 上已经熟悉关于命令行的使用,您是否想跳转到了解 Windows 命令行的章节上去?先了阅读这些内容吧。 + +### 导语 + +在命令行中,这里有一个当前工作目录(文件夹和目录是同义词,在 Linux 中它们通常都被称为目录)的概念。如果没有特别指定目录,许多命令的执行会在当前目录下生效。比如,键入 ls 列出文件目录,当前工作目录的文件将会被列举出来。看一个例子: +``` +$ ls +Desktop Documents Downloads Music Pictures README.txt Videos +``` + +`ls Documents` 这条命令将会列出 `Documents` 目录下的文件: +``` +$ ls Documents +report.txt todo.txt EmailHowTo.pdf +``` + +通过 `pwd` 命令可以显示当前您的工作目录。比如: +``` +$ pwd +/home/student +``` + +您可以通过 `cd` 命令改变当前目录并切换到您想要抵达的目录。比如: +``` +$ pwd +/home/student +$ cd Downloads +$ pwd +/home/student/Downloads +``` + +路径中的目录由 `/`(左斜杠)字符分隔。路径中有一个隐含的层次关系,比如 `/home/student` 目录中,home 是顶层目录,而 student 是 home 的子目录。 + +路径要么是绝对路径,要么是相对路径。绝对路径由一个 `/` 字符打头。 + +相对路径由 `.` 或者 `..` 开始。在一个路径中,一个 `.` 意味着当前目录,`..` 意味着当前目录的上级目录。比如,`ls ../Documents` 意味着在此寻找当前目录的上级名为 `Documets` 的目录: +``` +$ pwd +/home/student +$ ls +Desktop Documents Downloads Music Pictures README.txt Videos +$ cd Downloads +$ pwd +/home/student/Downloads +$ ls ../Documents +report.txt todo.txt EmailHowTo.pdf +``` + +当您第一次打开命令行窗口时,您当前的工作目录被设置为您的家目录,通常为 `/home/<您的登录名>`。家目录专用于登陆之后存储您的专属文件。 + +设置环境变量 `$HOME` 到您的家目录,比如: +``` +$ echo $HOME +/home/student +``` + +下表显示了用于目录导航和管理简单的文本文件的一些命令摘要。 + +![table](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/table-1_0.png?itok=j4Sgv6Vy) + +### 搜索 + +有时我们会遗忘文件的位置,或者忘记了我要寻找的文件名。Linux 命令行有几个命令可以帮助您搜索到文件。 + +第一个命令是 `find`。您可以使用 `find` 命令通过文件名或其他属性搜索文件和目录。举个例子,当您遗忘了 todo.txt 文件的位置,我们可以执行下面的代码: +``` +$ find $HOME -name todo.txt +/home/student/Documents/todo.txt +``` + +`find` 
程序有很多功能和选项。一个简单的例子: +``` +find <要寻找的目录> -name <文件名> +``` + +如果这里有 `todo.txt` 文件且不止一个,它将向我们列出拥有这个名字的所有文件的所有所在位置。`find` 命令有很多便于搜索的选项比如类型(文件或是目录等等)、时间、大小和其他一些选项。更多内容您可以同通过:`man find` 获取关于如何使用 `find` 命令的帮助。 + +您还可以使用 `grep` 命令搜索文件的特殊内容,比如: +``` +grep "01/02/2018" todo.txt +``` +这将为您展示 `todo` 文件中 `01/02/2018` 所在行。 + +### 获取帮助 + +Linux 有很多命令,这里,我们没有办法一一列举。授人以鱼不如授人以渔,所以下一步我们将向您介绍帮助命令。 + +`apropos` 命令可以帮助您查找需要使用的命令。也许您想要查找能够操作目录或是获得文件列表的所有命令,但是您并不希望让这些命令执行。您可以这样尝试: +``` +apropos directory +``` + +要在帮助文档中,得到一个于 `directiory` 关键字的相关命令列表,您可以这样操作: +``` +apropos "list open files" +``` + +这将提供一个 `lsof` 命令给您,帮助您打开文件列表。 + +当您明确您要使用的命令,但是不确定应该使用什么选项完成预期工作,您可以使用 man 命令,它是 manual 的缩写。您可以这样使用: +``` +man ls +``` + +您可以在自己的设备上尝试这个命令。它会提供给您关于使用这个命令的完整信息。 + +通常,很多命令都会有能够给 `help` 选项(比如说,`ls --help`),列出命令使用的提示。`man` 页面的内容通常太繁琐,`--help` 选项可能更适合快速浏览。 + +### 脚本 + +Linux 命令行中最贴心的功能是能够运行脚本文件,并且能重复运行。Linux 命令可以存储在文本文件中,您可以在文件的开头写入 `#!/bin/sh`,之后追加命令。之后,一旦文件被存储为可执行文件,您就可以像执行命令一样运行脚本文件,比如, +``` +--- contents of get_todays_todos.sh --- +#!/bin/sh +todays_date=`date +"%m/%d/%y"` +grep $todays_date $HOME/todos.txt +``` + +在一个确定的工作中脚本可以帮助自动化重复执行命令。如果需要的话,脚本也可以很复杂,能够使用循环、判断语句等。限于篇幅,这里不细述,但是您可以在网上查询到相关信息。 + +您是否已经熟悉了 Windows 命令行? + +如果您对 Windows CMD 或者 PowerShell 程序很熟悉,在命令行输入命令应该是轻车熟路的。然而,它们之间有很多差异,如果您没有理解它们之间的差异可能会为之困扰。 + +首先,在 Linux 下的 PATH 环境于 Windows 不同。在 Windows 中,当前目录被认为是路径中的第一个文件夹,尽管该目录没有在环境变量中列出。而在 Linux 下,当前目录不会在路径中显示表示。Linux 下设置环境变量会被认为是风险操作。在 Linux 的当前目录执行程序,您需要使用 ./(代表当前目录的相对目录表示方式) 前缀。这可能会干扰很多 CMD 用户。比如: +``` +./my_program +``` + +而不是 +``` +my_program +``` + +另外,在 Windows 环境变量的路径中是以 `;`(分号) 分割的。在 Linux 中,由 `:` 分割环境变量。同样,在 Linux 中路径由 `/` 字符分隔,而在 Windows 目录中路径由 `\` 字符分割。因此 Windows 中典型的环境变量会像这样: +``` +PATH="C:\Program Files;C:\Program Files\Firefox;" +while on Linux it might look like: +PATH="/usr/bin:/opt/mozilla/firefox" +``` + +还要注意,在 Linux 中环境变量由 `$` 拓展,而在 Windows 中您需要使用百分号(就是这样: %PATH%)。 + +在 Linux 中,通过 `-` 使用命令选项,而在 Windows 中,使用选项要通过 `/` 字符。所以,在 Linux 中您应该: +``` +a_prog -h +``` + +而不是 +``` +a_prog /h +``` + +在 Linux 下,文件拓展名并没有意义。例如,将 `myscript` 重命名为 `myscript.bat` 并不会因此而可执行,需要设置文件的执行权限。文件执行权限会在下次的内容中覆盖到。 + +在 Linux 中,如果文件或者目录名以 `.` 字符开头,意味着它们是隐藏文件。比如,如果您申请编辑 `.bashrc` 文件,您不能在 `home` 目录中找到它,但是它可能真的存在,只不过它是隐藏文件。在命令行中,您可以通过 `ls` 命令的 `-a` 选项查看隐藏文件,比如: +``` +ls -a +``` + +在 Linux 中,普通的命令与 Windows 的命令不尽相同。下面的表格显示了常用命令中 CMD 命令和 Linux 命令行的差异。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/table-2_0.png?itok=NNc8TZFZ) + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2018/1/migrating-linux-command-line + +作者:[John Bonesio][a] +译者:[CYLeft](https://github.com/CYLeft) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/johnbonesio +[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction +[2]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems +[3]:https://www.linux.com/blog/learn/2017/12/migrating-linux-graphical-environments diff --git a/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md b/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md new file mode 100644 index 0000000000..3210c7d284 --- /dev/null +++ b/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md @@ -0,0 +1,1381 @@ +使用 DOCKER 和 ELASTICSEARCH 
构建一个全文搜索应用程序 +============================================================ + + _如何在超过 500 万篇文章的 Wikipedia 上找到与你研究相关的文章?_ + + _如何在超过 20 亿用户的 Facebook 中找到你的朋友(并且还拼错了名字)?_ + + _谷歌如何在整个因特网上搜索你的模糊的、充满拼写错误的查询?_ + +在本教程中,我们将带你探索如何配置我们自己的全文探索应用程序(与上述问题中的系统相比,它的复杂度要小很多)。我们的示例应用程序将提供一个 UI 和 API 去从 100 部经典文学(比如,_Peter Pan_ ,  _Frankenstein_ , 和  _Treasure Island_ )中搜索完整的文本。 + +你可以在这里([https://search.patricktriest.com][6])预览教程中应用程序的完整版本。 + +![preview webapp](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_4_0.png) + +这个应用程序的源代码是 100% 公开的,可以在 GitHub 仓库上找到它们 —— [https://github.com/triestpa/guttenberg-search][7] + +在应用程序中添加一个快速灵活的全文搜索可能是个挑战。大多数的主流数据库,比如,[PostgreSQL][8] 和 [MongoDB][9],在它们的查询和索引结构中都提供一个有限的、基础的、文本搜索的功能。为实现高质量的全文搜索,通常的最佳选择是单独数据存储。[Elasticsearch][10] 是一个开源数据存储的领导者,它专门为执行灵活而快速的全文搜索进行了优化。 + +我们将使用 [Docker][11] 去配置我们自己的项目环境和依赖。Docker 是一个容器化引擎,它被 [Uber][12]、[Spotify][13]、[ADP][14]、以及 [Paypal][15] 使用。构建容器化应用的一个主要优势是,项目的设置在 Windows、macOS、以及 Linux 上都是相同的 —— 这使我写这个教程快速又简单。如果你还没有使用过 Docker,不用担心,我们接下来将经历完整的项目配置。 + +我也会使用 [Node.js][16] (使用 [Koa][17] 框架)、和 [Vue.js][18],用它们分别去构建我们自己的搜索 API 和前端 Web 应用程序。 + +### 1 - ELASTICSEARCH 是什么? + +全文搜索在现代应用程序中是一个有大量需求的特性。搜索也可能是最难的一项特性 —— 许多流行的网站的搜索功能都不合格,要么返回结果太慢,要么找不到精确的结果。通常,这种情况是被底层的数据库所局限:大多数标准的关系型数据库在基本的 `CONTAINS` 或 `LIKE` SQL 查询上有局限性,它仅提供大多数基本的字符串匹配功能。 + +我们的搜索应用程序将具备: + +1. **快速** - 搜索结果将快速返回,为用户提供一个良好的体验。 + +2. **灵活** - 我们希望能够去修改搜索如何执行,这是为了便于在不同的数据库和用户场景下进行优化。 + +3. **容错** - 如果搜索内容有拼写错误,我们将仍然会返回相关的结果,而这个结果可能正是用户希望去搜索的结果。 + +4. **全文** - 我们不想限制我们的搜索只能与指定的关键字或者标签相匹配 —— 我们希望它可以搜索在我们的数据存储中的任何东西(包括大的文本域)。 + +![Elastic Search Logo](https://storage.googleapis.com/cdn.patricktriest.com/blog/images/posts/elastic-library/Elasticsearch-Logo.png) + +为了构建一个功能强大的搜索功能,通常最理想的方法是使用一个为全文搜索任务优化过的用户数据存储。在这里我们使用 [Elasticsearch][19],Elasticsearch 是一个开源的内存中的数据存储,它是用 Java 写的,最初是在 [Apache Lucene][20] 库上构建的。 + +这里有一些来自 [Elastic 官方网站][21] 上的 Elasticsearch 真实使用案例。 + +* Wikipedia 使用 Elasticsearch 去提供带高亮搜索片断的全文搜索功能,并且提供按类型搜索和 “did-you-mean” 建议。 + +* Guardian 使用 Elasticsearch 把社交网络数据和访客日志相结合,为编辑去提供大家对新文章的实时的反馈。 + +* Stack Overflow 将全文搜索和地理查询相结合,并使用 “类似” 的方法去找到相关的查询和回答。 + +* GitHub 使用 Elasticsearch 对 1300 亿行代码进行查询。 + +### 与 “普通的” 数据库相比,Elasticsearch 有什么不一样的地方? + +Elasticsearch 之所以能够提供快速灵活的全文搜索,秘密在于它使用 _反转索引_ 。 + +“索引” 是数据库中的一种数据结构,它能够以超快的速度进行数据查询和检索操作。数据库通过存储与表中行相关联的字段来生成索引。在一种可搜索的数据结构(一般是 [B树][22])中排序索引,在优化过的查询中,数据库能够达到接近线速的时间(比如,“使用 ID=5 查找行)。 + +![Relational Index](https://cdn.patricktriest.com/blog/images/posts/elastic-library/db_index.png) + +我们可以将数据库索引想像成一个图书馆中老式的卡片式目录 —— 只要你知道书的作者和书名,它就会告诉你书的准确位置。为加速特定字段上的查询速度,数据库表一般有多个索引(比如,在 `name` 列上的索引可以加速指定名字的查询)。 + +反转索引本质上是不一样的。每行(或文档)的内容是分开的,并且每个独立的条目(在本案例中是单词)反向指向到包含它的任何文档上。 + +![Inverted Index](https://cdn.patricktriest.com/blog/images/posts/elastic-library/invertedIndex.jpg) + +这种反转索引数据结构可以使我们非常快地查询到,所有出现 ”football" 的文档。通过使用大量优化过的内存中的反转索引,Elasticsearch 可以让我们在存储的数据上,执行一些非常强大的和自定义的全文搜索。 + +### 2 - 项目设置 + +### 2.0 - Docker + +我们在这个项目上使用 [Docker][23] 管理环境和依赖。Docker 是个容器引擎,它允许应用程序运行在一个独立的环境中,不会受到来自主机操作系统和本地开发环境的影响。现在,许多公司将它们的大规模 Web 应用程序主要运行在容器架构上。这样将提升灵活性和容器化应用程序组件的可组构性。 + +![Docker Logo](https://storage.googleapis.com/cdn.patricktriest.com/blog/images/posts/elastic-library/docker.png) + +对我们来说,使用 Docker 的优势是,它对本教程非常友好,它的本地环境设置量最小,并且跨 Windows、macOS、和 Linux 系统的一致性很好。我们只需要在 Docker 配置文件中定义这些依赖关系,而不是按安装说明分别去安装 Node.js、Elasticsearch、和 Nginx,然后,就可以使用这个配置文件在任何其它地方运行我们的应用程序。而且,因为每个应用程序组件都运行在它自己的独立容器中,它们受本地机器上的其它 “垃圾” 干扰的可能性非常小,因此,在调试问题时,像 "But it works on my machine!" 
这类的问题将非常少。 + +### 2.1 - 安装 Docker & Docker-Compose + +这个项目只依赖 [Docker][24] 和 [docker-compose][25],docker-compose 是 Docker 官方支持的一个工具,它用来将定义的多个容器配置 _组装_  成单一的应用程序栈。 + +安装 Docker - [https://docs.docker.com/engine/installation/][26] +安装 Docker Compose - [https://docs.docker.com/compose/install/][27] + +### 2.2 - 设置项目主目录 + +为项目创建一个主目录(名为 `guttenberg_search`)。我们的项目将工作在主目录的以下两个子目录中。 + +* `/public` - 保存前端 Vue.js Web 应用程序。 + +* `/server` - 服务器端 Node.js 源代码。 + +### 2.3 - 添加 Docker-Compose 配置 + +接下来,我们将创建一个 `docker-compose.yml` 文件来定义我们的应用程序栈中的每个容器。 + +1. `gs-api` - 后端应用程序逻辑使用的 Node.js 容器 + +2. `gs-frontend` - 前端 Web 应用程序使用的 Ngnix 容器。 + +3. `gs-search` - 保存和搜索数据的 Elasticsearch 容器。 + +``` +version: '3' + +services: + api: # Node.js App + container_name: gs-api + build: . + ports: + - "3000:3000" # Expose API port + - "9229:9229" # Expose Node process debug port (disable in production) + environment: # Set ENV vars + - NODE_ENV=local + - ES_HOST=elasticsearch + - PORT=3000 + volumes: # Attach local book data directory + - ./books:/usr/src/app/books + + frontend: # Nginx Server For Frontend App + container_name: gs-frontend + image: nginx + volumes: # Serve local "public" dir + - ./public:/usr/share/nginx/html + ports: + - "8080:80" # Forward site to localhost:8080 + + elasticsearch: # Elasticsearch Instance + container_name: gs-search + image: docker.elastic.co/elasticsearch/elasticsearch:6.1.1 + volumes: # Persist ES data in seperate "esdata" volume + - esdata:/usr/share/elasticsearch/data + environment: + - bootstrap.memory_lock=true + - "ES_JAVA_OPTS=-Xms512m -Xmx512m" + - discovery.type=single-node + ports: # Expose Elasticsearch ports + - "9300:9300" + - "9200:9200" + +volumes: # Define seperate volume for Elasticsearch data + esdata: + +``` + +这个文件定义了我们全部的应用程序栈 —— 不需要在你的本地系统上安装 Elasticsearch、Node、和 Nginx。每个容器都将端口转发到宿主机系统(`localhost`)上,以便于我们在宿主机上去访问和调试 Node API、Elasticsearch instance、和前端 Web 应用程序。 + +### 2.4 - 添加 Dockerfile + +对于 Nginx 和 Elasticsearch,我们使用了官方预构建的镜像,而 Node.js 应用程序需要我们自己去构建。 + +在应用程序的根目录下定义一个简单的 `Dockerfile` 配置文件。 + +``` +# Use Node v8.9.0 LTS +FROM node:carbon + +# Setup app working directory +WORKDIR /usr/src/app + +# Copy package.json and package-lock.json +COPY package*.json ./ + +# Install app dependencies +RUN npm install + +# Copy sourcecode +COPY . . 
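+
+# Note: package*.json is copied and "npm install" is run *before* the rest of
+# the source is copied, so Docker can cache the dependency layer. A rebuild
+# after a source-only change will reuse that cached layer instead of
+# re-running npm install.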
+ +# Start app +CMD [ "npm", "start" ] + +``` + +这个 Docker 配置扩展了官方的 Node.js 镜像、拷贝我们的应用程序源代码、以及在容器内安装 NPM 依赖。 + +我们也增加了一个 `.dockerignore` 文件,以防止我们不需要的文件拷贝到容器中。 + +``` +node_modules/ +npm-debug.log +books/ +public/ + +``` + +> 请注意:我们之所以不拷贝 `node_modules` 目录到我们的容器中 —— 是因为我们要在容器中运行 `npm install` 来构建这个进程。从宿主机系统拷贝 `node_modules` 可能会引起错误,因为一些包需要在某些操作系统上专门构建。比如说,在 macOS 上安装 `bcrypt` 包,然后尝试将这个模块直接拷贝到一个 Ubuntu 容器上将不能工作,因为 `bcyrpt` 需要为每个操作系统构建一个特定的二进制文件。 + +### 2.5 - 添加基本文件 + +为了测试我们的配置,我们需要添加一些占位符文件到应用程序目录中。 + +在 `public/index.html` 文件中添加如下内容。 + +``` +Hello World From The Frontend Container + +``` + +接下来,在 `server/app.js` 中添加 Node.js 占位符文件。 + +``` +const Koa = require('koa') +const app = new Koa() + +app.use(async (ctx, next) => { + ctx.body = 'Hello World From the Backend Container' +}) + +const port = process.env.PORT || 3000 + +app.listen(port, err => { + if (err) console.error(err) + console.log(`App Listening on Port ${port}`) +}) + +``` + +最后,添加我们的 `package.json` 节点应用配置。 + +``` +{ + "name": "guttenberg-search", + "version": "0.0.1", + "description": "Source code for Elasticsearch tutorial using 100 classic open source books.", + "scripts": { + "start": "node --inspect=0.0.0.0:9229 server/app.js" + }, + "repository": { + "type": "git", + "url": "git+https://github.com/triestpa/guttenberg-search.git" + }, + "author": "patrick.triest@gmail.com", + "license": "MIT", + "bugs": { + "url": "https://github.com/triestpa/guttenberg-search/issues" + }, + "homepage": "https://github.com/triestpa/guttenberg-search#readme", + "dependencies": { + "elasticsearch": "13.3.1", + "joi": "13.0.1", + "koa": "2.4.1", + "koa-joi-validate": "0.5.1", + "koa-router": "7.2.1" + } +} + +``` + +这个文件定义了应用程序启动命令和 Node.js 包依赖。 + +> 注意:不要运行 `npm install` —— 当它构建时,这个依赖将在容器内安装。 + +### 2.6 - 测试它的输出 + +现在一切新绪,我们来测试应用程序的每个组件的输出。从应用程序的主目录运行 `docker-compose build`,它将构建我们的 Node.js 应用程序容器。 + +![docker build output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_0_3.png) + +接下来,运行 `docker-compose up` 去启动整个应用程序栈。 + +![docker compose output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_0_2.png) + +> 这一步可能需要几分钟时间,因为 Docker 要为每个容器去下载基础镜像,接着再去运行,启动应用程序非常快,因为所需要的镜像已经下载完成了。 + +在你的浏览器中尝试访问 `localhost:8080` —— 你将看到简单的 “Hello World" Web 页面。 + +![frontend sample output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_0_0.png) + +访问 `localhost:3000` 去验证我们的 Node 服务器,它将返回 "Hello World" 信息。 + +![backend sample output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_0_1.png) + +最后,访问 `localhost:9200` 去检查 Elasticsearch 运行状态。它将返回类似如下的内容。 + +``` +{ + "name" : "SLTcfpI", + "cluster_name" : "docker-cluster", + "cluster_uuid" : "iId8e0ZeS_mgh9ALlWQ7-w", + "version" : { + "number" : "6.1.1", + "build_hash" : "bd92e7f", + "build_date" : "2017-12-17T20:23:25.338Z", + "build_snapshot" : false, + "lucene_version" : "7.1.0", + "minimum_wire_compatibility_version" : "5.6.0", + "minimum_index_compatibility_version" : "5.0.0" + }, + "tagline" : "You Know, for Search" +} + +``` + +如果三个 URLs 都显示成功,祝贺你!整个容器栈已经正常运行了,接下来我们进入最有趣的部分。 + +### 3 - 连接到 ELASTICSEARCH + +我们要做的第一件事情是,让我们的应用程序连接到我们本地的 Elasticsearch 实例上。 + +### 3.0 - 添加 ES 连接模块 + +在新文件 `server/connection.js` 中添加如下的 Elasticsearch 初始化代码。 + +``` +const elasticsearch = require('elasticsearch') + +// Core ES variables for this project +const index = 'library' +const type = 'novel' +const port = 9200 +const host = process.env.ES_HOST || 'localhost' +const client = new elasticsearch.Client({ host: { host, port } }) + +/** Check 
the ES connection status */ +async function checkConnection () { + let isConnected = false + while (!isConnected) { + console.log('Connecting to ES') + try { + const health = await client.cluster.health({}) + console.log(health) + isConnected = true + } catch (err) { + console.log('Connection Failed, Retrying...', err) + } + } +} + +checkConnection() + +``` + +现在,我们重新构建我们的 Node 应用程序,我们将使用 `docker-compose build` 来做一些改变。接下来,运行 `docker-compose up -d` 去启动应用程序栈,它将以守护进程的方式在后台运行。 + +应用程序启动之后,在命令行中运行 `docker exec gs-api "node" "server/connection.js"`,以便于在容器内运行我们的脚本。你将看到类似如下的系统输出信息。 + +``` +{ cluster_name: 'docker-cluster', + status: 'yellow', + timed_out: false, + number_of_nodes: 1, + number_of_data_nodes: 1, + active_primary_shards: 1, + active_shards: 1, + relocating_shards: 0, + initializing_shards: 0, + unassigned_shards: 1, + delayed_unassigned_shards: 0, + number_of_pending_tasks: 0, + number_of_in_flight_fetch: 0, + task_max_waiting_in_queue_millis: 0, + active_shards_percent_as_number: 50 } + +``` + +继续之前,我们先删除最下面的 `checkConnection()` 调用,因为,我们最终的应用程序将调用外部的连接模块。 + +### 3.1 - 添加函数去重置索引 + +在 `server/connection.js` 中的 `checkConnection` 下面添加如下的函数,以便于重置 Elasticsearch 索引。 + +``` +/** Clear the index, recreate it, and add mappings */ +async function resetIndex (index) { + if (await client.indices.exists({ index })) { + await client.indices.delete({ index }) + } + + await client.indices.create({ index }) + await putBookMapping() +} + +``` + +### 3.2 - 添加图书模式 + +接下来,我们将为图书的数据模式添加一个 "mapping"。在 `server/connection.js` 中的 `resetIndex` 函数下面添加如下的函数。 + +``` +/** Add book section schema mapping to ES */ +async function putBookMapping () { + const schema = { + title: { type: 'keyword' }, + author: { type: 'keyword' }, + location: { type: 'integer' }, + text: { type: 'text' } + } + + return client.indices.putMapping({ index, type, body: { properties: schema } }) +} + +``` + +这是为 `book` 索引定义了一个 mapping。一个 Elasticsearch 的 `index` 大概类似于 SQL 的 `table` 或者 MongoDB 的  `collection`。我们通过添加 mapping 来为存储的文档指定每个字段和它的数据类型。Elasticsearch 是无模式的,因此,从技术角度来看,我们是不需要添加 mapping 的,但是,这样做,我们可以更好地控制如何处理数据。 + +比如,我们给 "title" 和 ”author" 字段分配 `keyword` 类型,给 “text" 字段分配 `text` 类型。之所以这样做的原因是,搜索引擎可以区别处理这些字符串字段 —— 在搜索的时候,搜索引擎将在 `text` 字段中搜索可能的匹配项,而对于 `keyword` 类型字段,将对它们进行全文匹配。这看上去差别很小,但是它们对在不同的搜索上的速度和行为的影响非常大。 + +在文件的底部,导出对外发布的属性和函数,这样我们的应用程序中的其它模块就可以访问它们了。 + +``` +module.exports = { + client, index, type, checkConnection, resetIndex +} + +``` + +### 4 - 加载原始数据 + +我们将使用来自 [Gutenberg 项目][28] 的数据 ——  它致力于为公共提供免费的线上电子书。在这个项目中,我们将使用 100 本经典图书来充实我们的图书馆,包括_《The Adventures of Sherlock Holmes》_、_《Treasure Island》_、_《The Count of Monte Cristo》_、_《Around the World in 80 Days》_、_《Romeo and Juliet》_ 、和_《The Odyssey》_。 + +![Book Covers](https://storage.googleapis.com/cdn.patricktriest.com/blog/images/posts/elastic-library/books.jpg) + +### 4.1 - 下载图书文件 + +我将这 100 本书打包成一个文件,你可以从这里下载它 —— +[https://cdn.patricktriest.com/data/books.zip][29] + +将这个文件解压到你的项目的 `books/` 目录中。 + +你可以使用以下的命令来完成(需要在命令行下使用 [wget][30] 和 ["The Unarchiver"][31])。 + +``` +wget https://cdn.patricktriest.com/data/books.zip +unar books.zip + +``` + +### 4.2 - 预览一本书 + +尝试打开其中的一本书的文件,假设打开的是 `219-0.txt`。你将注意到它开头是一个公开访问的协议,接下来是一些标识这本书的书名、作者、发行日期、语言和字符编码的行。 + +``` +Title: Heart of Darkness + +Author: Joseph Conrad + +Release Date: February 1995 [EBook #219] +Last Updated: September 7, 2016 + +Language: English + +Character set encoding: UTF-8 + +``` + +在 `*** START OF THIS PROJECT GUTENBERG EBOOK HEART OF DARKNESS ***` 这些行后面,是这本书的正式内容。 + +如果你滚动到本书的底部,你将看到类似 `*** END OF THIS PROJECT GUTENBERG 
EBOOK HEART OF DARKNESS ***` 信息,接下来是本书更详细的协议版本。 + +下一步,我们将使用程序从文件头部来解析书的元数据,提取 `*** START OF` 和 `***END OF` 之间的内容。 + +### 4.3 - 读取数据目录 + +我们将写一个脚本来读取每本书的内容,并将这些数据添加到 Elasticsearch。我们将定义一个新的 Javascript 文件 `server/load_data.js` 来执行这些操作。 + +首先,我们将从 `books/` 目录中获取每个文件的列表。 + +在 `server/load_data.js` 中添加下列内容。 + +``` +const fs = require('fs') +const path = require('path') +const esConnection = require('./connection') + +/** Clear ES index, parse and index all files from the books directory */ +async function readAndInsertBooks () { + try { + // Clear previous ES index + await esConnection.resetIndex() + + // Read books directory + let files = fs.readdirSync('./books').filter(file => file.slice(-4) === '.txt') + console.log(`Found ${files.length} Files`) + + // Read each book file, and index each paragraph in elasticsearch + for (let file of files) { + console.log(`Reading File - ${file}`) + const filePath = path.join('./books', file) + const { title, author, paragraphs } = parseBookFile(filePath) + await insertBookData(title, author, paragraphs) + } + } catch (err) { + console.error(err) + } +} + +readAndInsertBooks() + +``` + +我们将使用一个快捷命令来重构我们的 Node.js 应用程序,并更新运行的容器。 + +运行 `docker-compose up -d --build` 去更新应用程序。这是运行  `docker-compose build` 和 `docker-compose up -d` 的快捷命令。 + +![docker build output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_1_0.png) + +为了在容器中运行我们的 `load_data` 脚本,我们运行 `docker exec gs-api "node" "server/load_data.js"` 。你将看到 Elasticsearch 的状态输出 `Found 100 Books`。 + +这之后,脚本发生了错误退出,原因是我们调用了一个没有定义的辅助函数(`parseBookFile`)。 + +![docker exec output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_1_1.png) + +### 4.4 - 读取数据文件 + +接下来,我们读取元数据和每本书的内容。 + +在 `server/load_data.js` 中定义新函数。 + +``` +/** Read an individual book text file, and extract the title, author, and paragraphs */ +function parseBookFile (filePath) { + // Read text file + const book = fs.readFileSync(filePath, 'utf8') + + // Find book title and author + const title = book.match(/^Title:\s(.+)$/m)[1] + const authorMatch = book.match(/^Author:\s(.+)$/m) + const author = (!authorMatch || authorMatch[1].trim() === '') ? 'Unknown Author' : authorMatch[1] + + console.log(`Reading Book - ${title} By ${author}`) + + // Find Guttenberg metadata header and footer + const startOfBookMatch = book.match(/^\*{3}\s*START OF (THIS|THE) PROJECT GUTENBERG EBOOK.+\*{3}$/m) + const startOfBookIndex = startOfBookMatch.index + startOfBookMatch[0].length + const endOfBookIndex = book.match(/^\*{3}\s*END OF (THIS|THE) PROJECT GUTENBERG EBOOK.+\*{3}$/m).index + + // Clean book text and split into array of paragraphs + const paragraphs = book + .slice(startOfBookIndex, endOfBookIndex) // Remove Guttenberg header and footer + .split(/\n\s+\n/g) // Split each paragraph into it's own array entry + .map(line => line.replace(/\r\n/g, ' ').trim()) // Remove paragraph line breaks and whitespace + .map(line => line.replace(/_/g, '')) // Guttenberg uses "_" to signify italics. We'll remove it, since it makes the raw text look messy. + .filter((line) => (line && line.length !== '')) // Remove empty lines + + console.log(`Parsed ${paragraphs.length} Paragraphs\n`) + return { title, author, paragraphs } +} + +``` + +这个函数执行几个重要的任务。 + +1. 从文件系统中读取书的文本。 + +2. 使用正则表达式(关于正则表达式,请参阅 [这篇文章][1] )解析书名和作者。 + +3. 通过匹配 ”Guttenberg 项目“ 头部和尾部,识别书的正文内容。 + +4. 提取书的内容文本。 + +5. 分割每个段落到它的数组中。 + +6. 
清理文本并删除空白行。 + +它的返回值,我们将构建一个对象,这个对象包含书名、作者、以及书中各段落的数据。 + +再次运行 `docker-compose up -d --build` 和 `docker exec gs-api "node" "server/load_data.js"`,你将看到如下的输出,在输出的末尾有三个额外的行。 + +![docker exec output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_2_0.png) + +成功!我们的脚本从文本文件中成功解析出了书名和作者。脚本再次以错误结束,因为到现在为止,我们还没有定义辅助函数。 + +### 4.5 - 在 ES 中索引数据文件 + +最后一步,我们将批量上传每个段落的数组到 Elasticsearch 索引中。 + +在 `load_data.js` 中添加新的 `insertBookData` 函数。 + +``` +/** Bulk index the book data in Elasticsearch */ +async function insertBookData (title, author, paragraphs) { + let bulkOps = [] // Array to store bulk operations + + // Add an index operation for each section in the book + for (let i = 0; i < paragraphs.length; i++) { + // Describe action + bulkOps.push({ index: { _index: esConnection.index, _type: esConnection.type } }) + + // Add document + bulkOps.push({ + author, + title, + location: i, + text: paragraphs[i] + }) + + if (i > 0 && i % 500 === 0) { // Do bulk insert in 500 paragraph batches + await esConnection.client.bulk({ body: bulkOps }) + bulkOps = [] + console.log(`Indexed Paragraphs ${i - 499} - ${i}`) + } + } + + // Insert remainder of bulk ops array + await esConnection.client.bulk({ body: bulkOps }) + console.log(`Indexed Paragraphs ${paragraphs.length - (bulkOps.length / 2)} - ${paragraphs.length}\n\n\n`) +} + +``` + +这个函数将使用书名、作者、和附加元数据的段落位置来索引书中的每个段落。我们通过批量操作来插入段落,它比逐个段落插入要快的多。 + +> 我们分批索引段落,而不是一次性插入全部,是为运行这个应用程序的、内存稍有点小(1.7 GB)的服务器  `search.patricktriest.com` 上做的一个重要优化。如果你的机器内存还行(4 GB 以上),你或许不用分批上传。 + +运行 `docker-compose up -d --build` 和 `docker exec gs-api "node" "server/load_data.js"` 一次或多次 —— 现在你将看到前面解析的 100 本书的完整输出,并插入到了 Elasticsearch。这可能需要几分钟时间,甚至更长。 + +![data loading output](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_3_0.png) + +### 5 - 搜索 + +现在,Elasticsearch 中已经有了 100 本书了(大约有 230000 个段落),现在我们尝试搜索查询。 + +### 5.0 - 简单的 HTTP 查询 + +首先,我们使用 Elasticsearch 的 HTTP API 对它进行直接查询。 + +在你的浏览器上访问这个 URL - `http://localhost:9200/library/_search?q=text:Java&pretty` + +在这里,我们将执行一个极简的全文搜索,在我们的图书馆的书中查找 ”Java" 这个词。 + +你将看到类似于下面的一个 JSON 格式的响应。 + +``` +{ + "took" : 11, + "timed_out" : false, + "_shards" : { + "total" : 5, + "successful" : 5, + "skipped" : 0, + "failed" : 0 + }, + "hits" : { + "total" : 13, + "max_score" : 14.259304, + "hits" : [ + { + "_index" : "library", + "_type" : "novel", + "_id" : "p_GwFWEBaZvLlaAUdQgV", + "_score" : 14.259304, + "_source" : { + "author" : "Charles Darwin", + "title" : "On the Origin of Species", + "location" : 1080, + "text" : "Java, plants of, 375." + } + }, + { + "_index" : "library", + "_type" : "novel", + "_id" : "wfKwFWEBaZvLlaAUkjfk", + "_score" : 10.186235, + "_source" : { + "author" : "Edgar Allan Poe", + "title" : "The Works of Edgar Allan Poe", + "location" : 827, + "text" : "After many years spent in foreign travel, I sailed in the year 18-- , from the port of Batavia, in the rich and populous island of Java, on a voyage to the Archipelago of the Sunda islands. I went as passenger--having no other inducement than a kind of nervous restlessness which haunted me as a fiend." + } + }, + ... 
+ ] + } +} + +``` + +用 Elasticseach 的 HTTP 接口可以测试我们插入的数据是否成功,但是如果直接将这个 API 暴露给 Web 应用程序将有极大的风险。这个 API 将会暴露管理功能(比如直接添加和删除文档),最理想的情况是完全不要对外暴露它。而是写一个简单的 Node.js API 去接收来自客户端的请求,然后(在我们的本地网络中)生成一个正确的查询发送给 Elasticsearch。 + +### 5.1 - 查询脚本 + +我们现在尝试从我们写的 Node.js 脚本中查询 Elasticsearch。 + +创建一个新文件,`server/search.js`。 + +``` +const { client, index, type } = require('./connection') + +module.exports = { + /** Query ES index for the provided term */ + queryTerm (term, offset = 0) { + const body = { + from: offset, + query: { match: { + text: { + query: term, + operator: 'and', + fuzziness: 'auto' + } } }, + highlight: { fields: { text: {} } } + } + + return client.search({ index, type, body }) + } +} + +``` + +我们的搜索模块定义一个简单的 `search` 函数,它将使用输入的词 `match` 查询。 + +这是查询的字段分解 - + +* `from` - 允许我们分页查询结果。默认每个查询返回 10 个结果,因此,指定 `from: 10` 将允许我们取回 10-20 的结果。 + +* `query` - 这里我们指定要查询的词。 + +* `operator` - 我们可以修改搜索行为;在本案例中,我们使用 "and" 操作去对查询中包含所有 tokens(要查询的词)的结果来确定优先顺序。 + +* `fuzziness` - 对拼写错误的容错调整,`auto` 的默认为 `fuzziness: 2`。模糊值越高,结果越需要更多校正。比如,`fuzziness: 1` 将允许以 `Patricc` 为关键字的查询中返回与 `Patrick` 匹配的结果。 + +* `highlights` - 为结果返回一个额外的字段,这个字段包含 HTML,以显示精确的文本字集和查询中匹配的关键词。 + +你可以去浏览 [Elastic Full-Text Query DSL][32],学习如何随意调整这些参数,以进一步自定义搜索查询。 + +### 6 - API + +为了能够从前端应用程序中访问我们的搜索功能,我们来写一个快速的 HTTP API。 + +### 6.0 - API 服务器 + +用以下的内容替换现有的 `server/app.js` 文件。 + +``` +const Koa = require('koa') +const Router = require('koa-router') +const joi = require('joi') +const validate = require('koa-joi-validate') +const search = require('./search') + +const app = new Koa() +const router = new Router() + +// Log each request to the console +app.use(async (ctx, next) => { + const start = Date.now() + await next() + const ms = Date.now() - start + console.log(`${ctx.method} ${ctx.url} - ${ms}`) +}) + +// Log percolated errors to the console +app.on('error', err => { + console.error('Server Error', err) +}) + +// Set permissive CORS header +app.use(async (ctx, next) => { + ctx.set('Access-Control-Allow-Origin', '*') + return next() +}) + +// ADD ENDPOINTS HERE + +const port = process.env.PORT || 3000 + +app + .use(router.routes()) + .use(router.allowedMethods()) + .listen(port, err => { + if (err) throw err + console.log(`App Listening on Port ${port}`) + }) + +``` + +这些代码将为 [Koa.js][33] Node API 服务器导入服务器依赖,设置简单的日志,以及错误处理。 + +### 6.1 - 使用查询连接端点 + +接下来,我们将在服务器上添加一个端点,以便于发布我们的 Elasticsearch 查询功能。 + +在  `server/app.js` 文件的 `// ADD ENDPOINTS HERE`  下面插入下列的代码。 + +``` +/** + * GET /search + * Search for a term in the library + */ +router.get('/search', async (ctx, next) => { + const { term, offset } = ctx.request.query + ctx.body = await search.queryTerm(term, offset) + } +) + +``` + +使用 `docker-compose up -d --build` 重启动应用程序。之后在你的浏览器中尝试调用这个搜索端点。比如,`http://localhost:3000/search?term=java` 这个请求将搜索整个图书馆中提到 “Jave" 的内容。 + +结果与前面直接调用 Elasticsearch HTTP 界面的结果非常类似。 + +``` +{ + "took": 242, + "timed_out": false, + "_shards": { + "total": 5, + "successful": 5, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": 93, + "max_score": 13.356944, + "hits": [{ + "_index": "library", + "_type": "novel", + "_id": "eHYHJmEBpQg9B4622421", + "_score": 13.356944, + "_source": { + "author": "Charles Darwin", + "title": "On the Origin of Species", + "location": 1080, + "text": "Java, plants of, 375." 
+ }, + "highlight": { + "text": ["Java, plants of, 375."] + } + }, { + "_index": "library", + "_type": "novel", + "_id": "2HUHJmEBpQg9B462xdNg", + "_score": 9.030668, + "_source": { + "author": "Unknown Author", + "title": "The King James Bible", + "location": 186, + "text": "10:4 And the sons of Javan; Elishah, and Tarshish, Kittim, and Dodanim." + }, + "highlight": { + "text": ["10:4 And the sons of Javan; Elishah, and Tarshish, Kittim, and Dodanim."] + } + } + ... + ] + } +} + +``` + +### 6.2 - 输入校验 + +这个端点现在还很脆弱 —— 我们没有对请求参数做任何的校验,因此,如果是无效的或者错误的值将使服务器出错。 + +我们将添加一些使用 [Joi][34] 和 [Koa-Joi-Validate][35] 库的中间件,以对输入做校验。 + +``` +/** + * GET /search + * Search for a term in the library + * Query Params - + * term: string under 60 characters + * offset: positive integer + */ +router.get('/search', + validate({ + query: { + term: joi.string().max(60).required(), + offset: joi.number().integer().min(0).default(0) + } + }), + async (ctx, next) => { + const { term, offset } = ctx.request.query + ctx.body = await search.queryTerm(term, offset) + } +) + +``` + +现在,重启服务器,如果你使用一个没有搜索关键字的请求(`http://localhost:3000/search`),你将返回一个带相关消息的 HTTP 400 错误,比如像 `Invalid URL Query - child "term" fails because ["term" is required]`。 + +如果想从 Node 应用程序中查看实时日志,你可以运行 `docker-compose logs -f api`。 + +### 7 - 前端应用程序 + +现在我们的 `/search` 端点已经就绪,我们来连接到一个简单的 Web 应用程序来测试这个 API。 + +### 7.0 - Vue.js 应用程序 + +我们将使用 Vue.js 去协调我们的前端。 + +添加一个新文件 `/public/app.js`,去控制我们的 Vue.js 应用程序代码。 + +``` +const vm = new Vue ({ + el: '#vue-instance', + data () { + return { + baseUrl: 'http://localhost:3000', // API url + searchTerm: 'Hello World', // Default search term + searchDebounce: null, // Timeout for search bar debounce + searchResults: [], // Displayed search results + numHits: null, // Total search results found + searchOffset: 0, // Search result pagination offset + + selectedParagraph: null, // Selected paragraph object + bookOffset: 0, // Offset for book paragraphs being displayed + paragraphs: [] // Paragraphs being displayed in book preview window + } + }, + async created () { + this.searchResults = await this.search() // Search for default term + }, + methods: { + /** Debounce search input by 100 ms */ + onSearchInput () { + clearTimeout(this.searchDebounce) + this.searchDebounce = setTimeout(async () => { + this.searchOffset = 0 + this.searchResults = await this.search() + }, 100) + }, + /** Call API to search for inputted term */ + async search () { + const response = await axios.get(`${this.baseUrl}/search`, { params: { term: this.searchTerm, offset: this.searchOffset } }) + this.numHits = response.data.hits.total + return response.data.hits.hits + }, + /** Get next page of search results */ + async nextResultsPage () { + if (this.numHits > 10) { + this.searchOffset += 10 + if (this.searchOffset + 10 > this.numHits) { this.searchOffset = this.numHits - 10} + this.searchResults = await this.search() + document.documentElement.scrollTop = 0 + } + }, + /** Get previous page of search results */ + async prevResultsPage () { + this.searchOffset -= 10 + if (this.searchOffset < 0) { this.searchOffset = 0 } + this.searchResults = await this.search() + document.documentElement.scrollTop = 0 + } + } +}) + +``` + +这个应用程序非常简单 —— 我们只定义了一些共享的数据属性,以及添加了检索和分页搜索结果的方法。为防止每按键一次都调用 API,搜索输入有一个 100 毫秒的除颤功能。 + +解释 Vue.js 是如何工作的已经超出了本教程的范围,如果你使用过 Angular 或者 React,其实一些也不可怕。如果你完全不熟悉 Vue,想快速了解它的功能,我建议你从官方的快速指南入手 —— [https://vuejs.org/v2/guide/][36] + +### 7.1 - HTML + +使用以下的内容替换 `/public/index.html` 文件中的占位符,以便于加载我们的 Vue.js 应用程序和设计一个基本的搜索界面。 + +``` + + + 
+<html lang="en">
+<head>
+  <title>Elastic Library</title>
+  <meta name="viewport" content="width=device-width, initial-scale=1">
+  <!-- 译注:原文此处的 HTML 标签在转换中丢失,以下标记按原文 GitHub 仓库(triestpa/guttenberg-search)还原,外部资源的具体版本号以该仓库为准 -->
+  <link href="https://cdn.muicss.com/mui-0.9.20/css/mui.min.css" rel="stylesheet" type="text/css" />
+  <link href="https://fonts.googleapis.com/css?family=EB+Garamond:400,700|Open+Sans" rel="stylesheet">
+  <link rel="stylesheet" type="text/css" href="styles.css" />
+</head>
+<body>
+<div class="app-container" id="vue-instance">
+    <!-- Search Bar Header -->
+    <div class="mui-panel">
+      <div class="mui-textfield">
+        <input v-model="searchTerm" type="text" v-on:keyup="onSearchInput()">
+        <label>Search</label>
+      </div>
+    </div>
+
+    <!-- Search Metadata Card -->
+    <div class="mui-panel">
+      <div class="mui--text-headline">{{ numHits }} Hits</div>
+      <div class="mui--text-subhead">Displaying Results {{ searchOffset }} - {{ searchOffset + 9 }}</div>
+    </div>
+
+    <!-- Top Pagination Card -->
+    <div class="mui-panel pagination-panel">
+        <button class="mui-btn mui-btn--flat" v-on:click="prevResultsPage()">Prev Page</button>
+        <button class="mui-btn mui-btn--flat" v-on:click="nextResultsPage()">Next Page</button>
+    </div>
+
+    <!-- Search Results Card List -->
+    <div class="search-results" ref="searchResults">
+      <div class="mui-panel" v-for="hit in searchResults" v-on:click="showBookModal(hit)">
+        <div class="mui--text-title" v-html="hit.highlight.text[0]"></div>
+        <div class="mui-divider"></div>
+        <div class="mui--text-subhead">{{ hit._source.title }} - {{ hit._source.author }}</div>
+        <div class="mui--text-body2">Location {{ hit._source.location }}</div>
+      </div>
+    </div>
+
+    <!-- Bottom Pagination Card -->
+    <div class="mui-panel pagination-panel">
+        <button class="mui-btn mui-btn--flat" v-on:click="prevResultsPage()">Prev Page</button>
+        <button class="mui-btn mui-btn--flat" v-on:click="nextResultsPage()">Next Page</button>
+    </div>
+
+    <!-- INSERT BOOK MODAL HERE -->
+</div>
+<script src="https://cdn.muicss.com/mui-0.9.20/js/mui.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.3/vue.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/axios/0.17.1/axios.min.js"></script>
+<script src="app.js"></script>
+</body>
+</html>
+ + + + + + + +``` + +### 7.2 - CSS + +添加一个新文件 `/public/styles.css`,使用一些自定义的 UI 样式。 + +``` +body { font-family: 'EB Garamond', serif; } + +.mui-textfield > input, .mui-btn, .mui--text-subhead, .mui-panel > .mui--text-headline { + font-family: 'Open Sans', sans-serif; +} + +.all-caps { text-transform: uppercase; } +.app-container { padding: 16px; } +.search-results em { font-weight: bold; } +.book-modal > button { width: 100%; } +.search-results .mui-divider { margin: 14px 0; } + +.search-results { + display: flex; + flex-direction: row; + flex-wrap: wrap; + justify-content: space-around; +} + +.search-results > div { + flex-basis: 45%; + box-sizing: border-box; + cursor: pointer; +} + +@media (max-width: 600px) { + .search-results > div { flex-basis: 100%; } +} + +.paragraphs-container { + max-width: 800px; + margin: 0 auto; + margin-bottom: 48px; +} + +.paragraphs-container .mui--text-body1, .paragraphs-container .mui--text-body2 { + font-size: 1.8rem; + line-height: 35px; +} + +.book-modal { + width: 100%; + height: 100%; + padding: 40px 10%; + box-sizing: border-box; + margin: 0 auto; + background-color: white; + overflow-y: scroll; + position: fixed; + top: 0; + left: 0; +} + +.pagination-panel { + display: flex; + justify-content: space-between; +} + +.title-row { + display: flex; + justify-content: space-between; + align-items: flex-end; +} + +@media (max-width: 600px) { + .title-row{ + flex-direction: column; + text-align: center; + align-items: center + } +} + +.locations-label { + text-align: center; + margin: 8px; +} + +.modal-footer { + position: fixed; + bottom: 0; + left: 0; + width: 100%; + display: flex; + justify-content: space-around; + background: white; +} + +``` + +### 7.3 - 尝试输出 + +在你的浏览器中打开 `localhost:8080`,你将看到一个简单的带结果分页功能的搜索界面。在顶部的搜索框中尝试输入不同的关键字来查看它们的搜索情况。 + +![preview webapp](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_4_0.png) + +> 你没有必要重新运行 `docker-compose up` 命令以使更改生效。本地的 `public` 目录是装载在我们的 Nginx 文件服务器容器中,因此,在本地系统中前端的变化将在容器化应用程序中自动反映出来。 + +如果你尝试点击任何搜索结果,什么反应也没有 —— 因为我们还没有为这个应用程序添加进一步的相关功能。 + +### 8 - 分页预览 + +如果点击每个搜索结果,然后查看到来自书中的内容,那将是非常棒的体验。 + +### 8.0 - 添加 Elasticsearch 查询 + +首先,我们需要定义一个简单的查询去从给定的书中获取段落范围。 + +在 `server/search.js` 文件中添加如下的函数到 `module.exports` 块中。 + +``` +/** Get the specified range of paragraphs from a book */ +getParagraphs (bookTitle, startLocation, endLocation) { + const filter = [ + { term: { title: bookTitle } }, + { range: { location: { gte: startLocation, lte: endLocation } } } + ] + + const body = { + size: endLocation - startLocation, + sort: { location: 'asc' }, + query: { bool: { filter } } + } + + return client.search({ index, type, body }) +} + +``` + +这个新函数将返回给定的书的开始位置和结束位置之间的一个排序后的段落数组。 + +### 8.1 - 添加 API 端点 + +现在,我们将这个函数链接到 API 端点。 + +添加下列内容到 `server/app.js` 文件中最初的 `/search` 端点下面。 + +``` +/** + * GET /paragraphs + * Get a range of paragraphs from the specified book + * Query Params - + * bookTitle: string under 256 characters + * start: positive integer + * end: positive integer greater than start + */ +router.get('/paragraphs', + validate({ + query: { + bookTitle: joi.string().max(256).required(), + start: joi.number().integer().min(0).default(0), + end: joi.number().integer().greater(joi.ref('start')).default(10) + } + }), + async (ctx, next) => { + const { bookTitle, start, end } = ctx.request.query + ctx.body = await search.getParagraphs(bookTitle, start, end) + } +) + +``` + +### 8.2 - 添加 UI 功能 + +现在,我们的新端点已经就绪,我们为应用程序添加一些从书中查询和显示全部页面的前端功能。 + +在 `/public/app.js` 文件的 `methods` 块中添加如下的函数。 + +``` + 
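+// 译注:以下五个方法应加入前文 app.js 中 Vue 实例的 methods 块,
+// 与已有的 onSearchInput()、search() 等方法并列,方法之间以逗号分隔。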
/** Call the API to get current page of paragraphs */ + async getParagraphs (bookTitle, offset) { + try { + this.bookOffset = offset + const start = this.bookOffset + const end = this.bookOffset + 10 + const response = await axios.get(`${this.baseUrl}/paragraphs`, { params: { bookTitle, start, end } }) + return response.data.hits.hits + } catch (err) { + console.error(err) + } + }, + /** Get next page (next 10 paragraphs) of selected book */ + async nextBookPage () { + this.$refs.bookModal.scrollTop = 0 + this.paragraphs = await this.getParagraphs(this.selectedParagraph._source.title, this.bookOffset + 10) + }, + /** Get previous page (previous 10 paragraphs) of selected book */ + async prevBookPage () { + this.$refs.bookModal.scrollTop = 0 + this.paragraphs = await this.getParagraphs(this.selectedParagraph._source.title, this.bookOffset - 10) + }, + /** Display paragraphs from selected book in modal window */ + async showBookModal (searchHit) { + try { + document.body.style.overflow = 'hidden' + this.selectedParagraph = searchHit + this.paragraphs = await this.getParagraphs(searchHit._source.title, searchHit._source.location - 5) + } catch (err) { + console.error(err) + } + }, + /** Close the book detail modal */ + closeBookModal () { + document.body.style.overflow = 'auto' + this.selectedParagraph = null + } + +``` + +这五个函数了提供了通过页码从书中下载和分页(每次十个段落)的逻辑。 + +现在,我们需要添加一个 UI 去显示书的页面。在 `/public/index.html` 的 `` 注释下面添加如下的内容。 + +``` + +
+<!-- Book Modal Window -->
+<!-- 译注:原文此处的 HTML 标签在转换中丢失,以下标记按原文 GitHub 仓库(triestpa/guttenberg-search)还原 -->
+<div v-if="selectedParagraph" ref="bookModal" class="book-modal">
+  <div class="paragraphs-container">
+    <!-- Book Section Metadata -->
+    <div class="title-row">
+      <div class="mui--text-display2 all-caps">{{ selectedParagraph._source.title }}</div>
+      <div class="mui--text-display1">{{ selectedParagraph._source.author }}</div>
+    </div>
+    <br>
+    <div class="mui-divider"></div>
+    <div class="mui--text-subhead locations-label">Locations {{ bookOffset - 5 }} to {{ bookOffset + 5 }}</div>
+    <div class="mui-divider"></div>
+    <br>
+
+    <!-- Book Paragraphs -->
+    <div v-for="paragraph in paragraphs">
+      <div v-if="paragraph._source.location === selectedParagraph._source.location" class="mui--text-body2">
+        <strong>{{ paragraph._source.text }}</strong>
+      </div>
+      <div v-else class="mui--text-body1">
+        {{ paragraph._source.text }}
+      </div>
+    </div>
+  </div>
+
+  <!-- Book Pagination Footer -->
+  <div class="modal-footer">
+    <button class="mui-btn mui-btn--flat" v-on:click="prevBookPage()">Prev Page</button>
+    <button class="mui-btn mui-btn--flat" v-on:click="closeBookModal()">Close</button>
+    <button class="mui-btn mui-btn--flat" v-on:click="nextBookPage()">Next Page</button>
+  </div>
+</div>
+ +``` + +再次重启应用程序服务器(`docker-compose up -d --build`),然后打开 `localhost:8080`。当你再次点击搜索结果时,你将能看到关键字附近的段落。如果你感兴趣,你现在甚至可以看这本书的剩余部分。 + +![preview webapp book page](https://cdn.patricktriest.com/blog/images/posts/elastic-library/sample_5_0.png) + +祝贺你!你现在已经完成了本教程的应用程序。 + +你可以去比较你的本地结果与托管在这里的完整示例 —— [https://search.patricktriest.com/][37] + +### 9 - ELASTICSEARCH 的缺点 + +### 9.0 - 耗费资源 + +Elasticsearch 是计算密集型的。[官方建议][38] 运行 ES 的机器最好有 64 GB 的内存,强烈反对在低于 8 GB 内存的机器上运行它。Elasticsearch 是一个 _内存中_ 数据库,这样使它的查询速度非常快,但这也非常占用系统内存。在生产系统中使用时,[他们强烈建议在一个集群中运行多个 Elasticsearch 节点][39],以实现高可用、自动分区、和一个节点失败时的数据冗余。 + +我们的这个教程中的应用程序运行在一个 $15/月 的 GCP 计算实例中( [search.patricktriest.com][40]),它只有 1.7 GB 的内存,它勉强能运行这个 Elasticsearch 节点;有时候在进行初始的数据加载过程中,整个机器就 ”假死机“ 了。在我的经验中,Elasticsearch 比传统的那些数据库,比如,PostgreSQL 和 MongoDB 耗费的资源要多很多,这样会使托管主机的成本增加很多。 + +### 9.1 - 与数据库的同步 + +在大多数应用程序,将数据全部保存在 Elasticsearch 并不是个好的选择。最好是使用 ES 作为应用程序的主要事务数据库,但是一般不推荐这样做,因为在 Elasticsearch 中缺少 ACID,如果在处理数据的时候发生伸缩行为,它将丢失写操作。在许多案例中,ES 服务器更多是一个特定的角色,比如做应用程序中的一个文本搜索功能。这种特定的用途,要求它从主数据库中复制数据到 Elasticsearch 实例中。 + +比如,假设我们将用户信息保存在一个 PostgreSQL 表中,但是用 Elasticsearch 去驱动我们的用户搜索功能。如果一个用户,比如,"Albert",决定将他的名字改成 "Al",我们将需要把这个变化同时反映到我们主要的 PostgreSQL 数据库和辅助的 Elasticsearch 集群中。 + +正确地集成它们可能比较棘手,最好的答案将取决于你现有的应用程序栈。这有多种开源方案可选,从 [用一个进程去关注 MongoDB 操作日志][41] 并自动同步检测到的变化到 ES,到使用一个 [PostgresSQL 插件][42] 去创建一个定制的、基于 PSQL 的索引来与 Elasticsearch 进行自动沟通。 + +如果没有有效的预构建选项可用,你可能需要在你的服务器代码中增加一些钩子,这样可以基于数据库的变化来手动更新 Elasticsearch 索引。最后一招,我认为是一个最后的选择,因为,使用定制的业务逻辑去保持 ES 的同步可能很复杂,这将会给应用程序引入很多的 bugs。 + +让 Elasticsearch 与一个主数据库同步,将使它的架构更加复杂,其复杂性已经超越了 ES 的相关缺点,但是当在你的应用程序中考虑添加一个专用的搜索引擎的利弊得失时,这个问题是值的好好考虑的。 + +### 总结 + +在很多现在流行的应用程序中,全文搜索是一个非常重要的功能 —— 而且是很难实现的一个功能。对于在你的应用程序中添加一个快速而又可定制的文本搜索,Elasticsearch 是一个非常好的选择,但是,在这里也有一个替代者。[Apache Solr][43] 是一个类似的开源搜索平台,它是基于 Apache Lucene 构建的,它与 Elasticsearch 的核心库是相同的。[Algolia][44] 是一个搜索即服务的 Web 平台,它已经很快流行了起来,并且它对新手非常友好,很易于上手(但是作为折衷,它的可定制性较小,并且使用成本较高)。 + +“搜索” 特性并不是 Elasticsearch 唯一功能。ES 也是日志存储和分析的常用工具,在一个 ELK(Elasticsearch、Logstash、Kibana)栈配置中通常会使用它。灵活的全文搜索功能使得 Elasticsearch 在数据量非常大的科学任务中用处很大 —— 比如,在一个数据集中正确的/标准化的条目拼写,或者为了类似的词组搜索一个文本数据集。 + +对于你自己的项目,这里有一些创意。 + +* 添加更多你喜欢的书到教程的应用程序中,然后创建你自己的私人图书馆搜索引擎。 + +* 利用来自 [Google Scholar][2] 的论文索引,创建一个学术抄袭检测引擎。 + +* 通过将字典中的每个词索引到 Elasticsearch,创建一个拼写检查应用程序。 + +* 通过将 [Common Crawl Corpus][3] 加载到 Elasticsearch 中,构建你自己的与谷歌竞争的因特网搜索引擎(注意,它可能会超过 50 亿个页面,这是一个成本极高的数据集)。 + +* 在 journalism 上使用 Elasticsearch:在最近的大规模泄露的文档中搜索特定的名字和关键词,比如, [Panama Papers][4] 和 [Paradise Papers][5]。 + +本教程中应用程序的源代码是 100% 公开的,你可以在 GitHub 仓库上找到它们 —— [https://github.com/triestpa/guttenberg-search][45] + +我希望你喜欢这个教程!你可以在下面的评论区,发表任何你的想法、问题、或者评论。 + +-------------------------------------------------------------------------------- + +作者简介: + +全栈工程师,数据爱好者,学霸,“构建强迫症患者”,探险爱好者。 + +------------- + + +via: https://blog.patricktriest.com/text-search-docker-elasticsearch/ + +作者:[Patrick Triest][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.patricktriest.com/author/patrick/ +[1]:https://blog.patricktriest.com/you-should-learn-regex/ +[2]:https://scholar.google.com/ +[3]:https://aws.amazon.com/public-datasets/common-crawl/ +[4]:https://en.wikipedia.org/wiki/Panama_Papers +[5]:https://en.wikipedia.org/wiki/Paradise_Papers +[6]:https://search.patricktriest.com/ +[7]:https://github.com/triestpa/guttenberg-search +[8]:https://www.postgresql.org/ +[9]:https://www.mongodb.com/ +[10]:https://www.elastic.co/ 
+[11]:https://www.docker.com/ +[12]:https://www.uber.com/ +[13]:https://www.spotify.com/us/ +[14]:https://www.adp.com/ +[15]:https://www.paypal.com/us/home +[16]:https://nodejs.org/en/ +[17]:http://koajs.com/ +[18]:https://vuejs.org/ +[19]:https://www.elastic.co/ +[20]:https://lucene.apache.org/core/ +[21]:https://www.elastic.co/guide/en/elasticsearch/guide/2.x/getting-started.html +[22]:https://en.wikipedia.org/wiki/B-tree +[23]:https://www.docker.com/ +[24]:https://www.docker.com/ +[25]:https://docs.docker.com/compose/ +[26]:https://docs.docker.com/engine/installation/ +[27]:https://docs.docker.com/compose/install/ +[28]:https://www.gutenberg.org/ +[29]:https://cdn.patricktriest.com/data/books.zip +[30]:https://www.gnu.org/software/wget/ +[31]:https://theunarchiver.com/command-line +[32]:https://www.elastic.co/guide/en/elasticsearch/reference/current/full-text-queries.html +[33]:http://koajs.com/ +[34]:https://github.com/hapijs/joi +[35]:https://github.com/triestpa/koa-joi-validate +[36]:https://vuejs.org/v2/guide/ +[37]:https://search.patricktriest.com/ +[38]:https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html +[39]:https://www.elastic.co/guide/en/elasticsearch/guide/2.x/distributed-cluster.html +[40]:https://search.patricktriest.com/ +[41]:https://github.com/mongodb-labs/mongo-connector +[42]:https://github.com/zombodb/zombodb +[43]:https://lucene.apache.org/solr/ +[44]:https://www.algolia.com/ +[45]:https://github.com/triestpa/guttenberg-search +[46]:https://blog.patricktriest.com/tag/guides/ +[47]:https://blog.patricktriest.com/tag/javascript/ +[48]:https://blog.patricktriest.com/tag/nodejs/ +[49]:https://blog.patricktriest.com/tag/web-development/ +[50]:https://blog.patricktriest.com/tag/devops/ diff --git a/sources/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md similarity index 55% rename from sources/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md rename to translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md index 6dce30d6dc..233daa72b2 100644 --- a/sources/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md +++ b/translated/tech/20180127 How to install KVM on CentOS 7 - RHEL 7 Headless Server.md @@ -1,56 +1,56 @@ -How to install KVM on CentOS 7 / RHEL 7 Headless Server +如何在 CentOS 7 / RHEL 7 终端服务器上安装 KVM ====== +如何在 CnetOS 7 或 RHEL 7( Red Hat 企业版 Linux) 服务器上安装和配置 KVM(基于内核的虚拟机)?如何在 CnetOS 7 上设置 KMV 并使用云镜像/ cloud-init 来安装客户虚拟机? -How do I install and configure KVM (Kernel-based Virtual Machine) on a CentOS 7 or RHEL (Red Hat Enterprise Linux) 7 server? How can I setup KMV on a CentOS 7 and use cloud images/cloud-init for installing guest VM? - - -Kernel-based Virtual Machine (KVM) is virtualization software for CentOS or RHEL 7. KVM turn your server into a hypervisor. This page shows how to setup and manage a virtualized environment with KVM in CentOS 7 or RHEL 7. It also described how to install and administer Virtual Machines (VMs) on a physical server using the CLI. Make sure that **Virtualization Technology (VT)** is enabled in your server 's BIOS. 
You can also run the following command [to test if CPU Support Intel VT and AMD-V Virtualization tech][1] +基于内核的虚拟机(KVM)是 CentOS 或 RHEL 7 的虚拟化软件。KVM 将你的服务器变成虚拟机管理程序。本文介绍如何在 CentOS 7 或 RHEL 7 中使用 KVM 设置和管理虚拟化环境。还介绍了如何使用 CLI 在物理服务器上安装和管理虚拟机(VM)。确保在服务器的 BIOS 中启用了**虚拟化技术(vt)**。你也可以运行以下命令[测试 CPU 是否支持 Intel VT 和 AMD_V 虚拟化技术][1]。 ``` $ lscpu | grep Virtualization Virtualization: VT-x ``` +### 按照 CentOS 7/RHEL 7 终端服务器上的 KVM 安装步骤进行操作 +#### 步骤 1: 安装 kvm -### Follow installation steps of KVM on CentOS 7/RHEL 7 headless sever - -#### Step 1: Install kvm - -Type the following [yum command][2]: +输入以下 [yum 命令][2]: `# yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install` + [![How to install KVM on CentOS 7 RHEL 7 Headless Server][3]][3] -Start the libvirtd service: + +启动 libvirtd 服务: ``` # systemctl enable libvirtd # systemctl start libvirtd ``` -#### Step 2: Verify kvm installation +#### 步骤 2: 确认 kvm 安装 -Make sure KVM module loaded using lsmod command and [grep command][4]: +确保使用 lsmod 命令和 [grep命令][4] 加载 KVM 模块: `# lsmod | grep -i kvm` -#### Step 3: Configure bridged networking +#### 步骤 3: 配置桥接网络 -By default dhcpd based network bridge configured by libvirtd. You can verify that with the following commands: +默认情况下,由 libvirtd 配置的基于 dhcpd 的网桥。你可以使用以下命令验证: ``` # brctl show # virsh net-list ``` [![KVM default networking][5]][5] -All VMs (guest machine) only have network access to other VMs on the same server. A private network 192.168.122.0/24 created for you. Verify it: + +所有虚拟机(客户机器)只能在同一台服务器上对其他虚拟机进行网络访问。为你创建的私有网络是 192.168.122.0/24。验证: `# virsh net-dumpxml default` -If you want your VMs avilable to other servers on your LAN, setup a a network bridge on the server that connected to the your LAN. Update your nic config file such as ifcfg-enp3s0 or em1: + +如果你希望你的虚拟机可用于 LAN 上的其他服务器,请在连接到你的 LAN 的服务器上设置一个网桥。更新你的网卡配置文件,如 ifcfg-enp3s0 或 em1: `# vi /etc/sysconfig/network-scripts/enp3s0 ` -Add line: +添加一行: ``` BRIDGE=br0 ``` -[Save and close the file in vi][6]. Edit /etc/sysconfig/network-scripts/ifcfg-br0 and add: +[使用 vi 保存并关闭文件][6]。编辑 /etc/sysconfig/network-scripts/ifcfg-br0 : `# vi /etc/sysconfig/network-scripts/ifcfg-br0` -Append the following: +添加以下东西: ``` DEVICE="br0" # I am getting ip from DHCP server # @@ -62,27 +62,29 @@ TYPE="Bridge" DELAY="0" ``` -Restart the networking service (warning ssh command will disconnect, it is better to reboot the box): +重新启动网络服务(警告:ssh命令将断开连接,最好重新启动该设备): `# systemctl restart NetworkManager` -Verify it with brctl command: + +用 brctl 命令验证它: `# brctl show` -#### Step 4: Create your first virtual machine +#### 步骤 4: 创建你的第一个虚拟机 -I am going to create a CentOS 7.x VM. 
First, grab CentOS 7.x latest ISO image using the wget command: +我将会创建一个 CentOS 7.x 虚拟机。首先,使用 wget 命令获取 CentOS 7.x 最新的 ISO 镜像: ``` # cd /var/lib/libvirt/boot/ # wget https://mirrors.kernel.org/centos/7.4.1708/isos/x86_64/CentOS-7-x86_64-Minimal-1708.iso ``` -Verify ISO images: + +验证 ISO 镜像: ``` # wget https://mirrors.kernel.org/centos/7.4.1708/isos/x86_64/sha256sum.txt # sha256sum -c sha256sum.txt ``` -##### Create CentOS 7.x VM +##### 创建 CentOS 7.x 虚拟机 -In this example, I'm creating CentOS 7.x VM with 2GB RAM, 2 CPU core, 1 nics and 40GB disk space, enter: +在这个例子中,我创建了 2GB RAM,2 个 CPU 核心,1 个网卡和 40 GB 磁盘空间的 CentOS 7.x 虚拟机,输入: ``` # virt-install \ --virt-type=kvm \ @@ -95,31 +97,36 @@ In this example, I'm creating CentOS 7.x VM with 2GB RAM, 2 CPU core, 1 nics and --graphics vnc \ --disk path=/var/lib/libvirt/images/centos7.qcow2,size=40,bus=virtio,format=qcow2 ``` -To configure vnc login from another terminal over ssh and type: + +从另一个终端通过 ssh 和 type 配置 vnc 登录: ``` -# virsh dumpxml centos7 | grep vnc +# virsh dumpxml centos7 | grep v nc ``` -Please note down the port value (i.e. 5901). You need to use an SSH client to setup tunnel and a VNC client to access the remote vnc server. Type the following SSH port forwarding command from your client/desktop/macbook pro system: + +请记录下端口值(即 5901)。你需要使用 SSH 客户端来建立隧道和 VNC 客户端才能访问远程 vnc 服务区。在客户端/桌面/ macbook pro 系统中输入以下 SSH 端口转化命令: `$ ssh vivek@server1.cyberciti.biz -L 5901:127.0.0.1:5901` -Once you have ssh tunnel established, you can point your VNC client at your own 127.0.0.1 (localhost) address and port 5901 as follows: + +一旦你建立了 ssh 隧道,你可以将你的 VNC 客户端指向你自己的 127.0.0.1 (localhost) 地址和端口 5901,如下所示: [![][7]][7] -You should see CentOS Linux 7 guest installation screen as follows: + +你应该看到 CentOS Linux 7 客户虚拟机安装屏幕如下: [![][8]][8] -Now just follow on screen instructions and install CentOS 7. Once installed, go ahead and click the reboot button. The remote server closed the connection to our VNC client. You can reconnect via KVM client to configure the rest of the server including SSH based session or firewall. -#### Step 5: Using cloud images +现在只需按照屏幕说明进行操作并安装CentOS 7。一旦安装完成后,请继续并单击重启按钮。 远程服务器关闭了我们的 VNC 客户端的连接。 你可以通过 KVM 客户端重新连接,以配置服务器的其余部分,包括基于 SSH 的会话或防火墙。 -The above installation method is okay for learning purpose or a single VM. Do you need to deploy lots of VMs? Try cloud images. You can modify pre built cloud images as per your needs. For example, add users, ssh keys, setup time zone, and more using [Cloud-init][9] which is the defacto multi-distribution package that handles early initialization of a cloud instance. Let us see how to create CentOS 7 vm with 1024MB ram, 20GB disk space, and 1 vCPU. +#### 步骤 5: 使用云镜像 -##### Grab CentOS 7 cloud image +以上安装方法对于学习目的或单个虚拟机而言是可行的。你需要部署大量的虚拟机吗? 尝试云镜像。你可以根据需要修改预先构建的云图像。例如,使用 [Cloud-init][9] 添加用户,ssh 密钥,设置时区等等,这是处理云实例的早期初始化的事实上的多分发包。让我们看看如何创建带有 1024MB RAM,20GB 磁盘空间和 1 个 vCPU 的 CentOS 7 虚拟机。(译注: vCPU 即电脑中的虚拟处理器) + +##### 获取 CentOS 7 云镜像 ``` # cd /var/lib/libvirt/boot # wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 ``` -##### Create required directories +##### 创建所需的目录 ``` # D=/var/lib/libvirt/images @@ -128,29 +135,31 @@ The above installation method is okay for learning purpose or a single VM. 
Do yo mkdir: created directory '/var/lib/libvirt/images/centos7-vm1' ``` -##### Create meta-data file +##### 创建元数据文件 ``` # cd $D/$VM # vi meta-data ``` -Append the following: + +添加以下东西: ``` instance-id: centos7-vm1 local-hostname: centos7-vm1 ``` -##### Crete user-data file +##### 创建用户数据文件 -I am going to login into VM using ssh keys. So make sure you have ssh-keys in place: +我将使用 ssh 密钥登录到虚拟机。所以确保你有 ssh-keys: `# ssh-keygen -t ed25519 -C "VM Login ssh key"` [![ssh-keygen command][10]][11] -See "[How To Setup SSH Keys on a Linux / Unix System][12]" for more info. Edit user-data as follows: + +请参阅 "[如何在 Linux/Unix 系统上设置 SSH 密钥][12]" 来获取更多信息。编辑用户数据如下: ``` # cd $D/$VM # vi user-data ``` -Add as follows (replace hostname, users, ssh-authorized-keys as per your setup): +添加如下(根据你的设置替换主机名,用户,ssh-authorized-keys): ``` #cloud-config @@ -190,14 +199,14 @@ runcmd: - yum -y remove cloud-init ``` -##### Copy cloud image +##### 复制云镜像 ``` # cd $D/$VM # cp /var/lib/libvirt/boot/CentOS-7-x86_64-GenericCloud.qcow2 $VM.qcow2 ``` -##### Create 20GB disk image +##### 创建 20GB 磁盘映像 ``` # cd $D/$VM @@ -206,25 +215,25 @@ runcmd: # virt-resize --quiet --expand /dev/sda1 $VM.qcow2 $VM.new.image ``` [![Set VM image disk size][13]][13] -Overwrite it resized image: +覆盖它的缩放图片: ``` # cd $D/$VM # mv $VM.new.image $VM.qcow2 ``` -##### Creating a cloud-init ISO +##### 创建一个 cloud-init ISO `# mkisofs -o $VM-cidata.iso -V cidata -J -r user-data meta-data` [![Creating a cloud-init ISO][14]][14] -##### Creating a pool +##### 创建一个 pool ``` # virsh pool-create-as --name $VM --type dir --target $D/$VM Pool centos7-vm1 created ``` -##### Installing a CentOS 7 VM +##### 安装 CentOS 7 虚拟机 ``` # cd $D/$VM @@ -238,58 +247,59 @@ Pool centos7-vm1 created --graphics spice \ --noautoconsole ``` -Delete unwanted files: +删除不需要的文件: ``` # cd $D/$VM # virsh change-media $VM hda --eject --config # rm meta-data user-data centos7-vm1-cidata.iso ``` -##### Find out IP address of VM +##### 查找虚拟机的 IP 地址 `# virsh net-dhcp-leases default` + [![CentOS7-VM1- Created][15]][15] -##### Log in to your VM +##### 登录到你的虚拟机 -Use ssh command: +使用 ssh 命令: `# ssh vivek@192.168.122.85` [![Sample VM session][16]][16] -### Useful commands +### 有用的命令 -Let us see some useful commands for managing VMs. +让我们看看管理虚拟机的一些有用的命令。 -#### List all VMs +#### 列出所有虚拟机 `# virsh list --all` -#### Get VM info +#### 获取虚拟机信息 ``` # virsh dominfo vmName # virsh dominfo centos7-vm1 ``` -#### Stop/shutdown a VM +#### 停止/关闭虚拟机 `# virsh shutdown centos7-vm1` -#### Start VM +#### 开启虚拟机 `# virsh start centos7-vm1` -#### Mark VM for autostart at boot time +#### 将虚拟机标记为在引导时自动启动 `# virsh autostart centos7-vm1` -#### Reboot (soft & safe reboot) VM +#### 重新启动(软安全重启)虚拟机 `# virsh reboot centos7-vm1` -Reset (hard reset/not safe) VM +重置(硬重置/不安全)虚拟机 `# virsh reset centos7-vm1` -#### Delete VM +#### 删除虚拟机 ``` # virsh shutdown centos7-vm1 @@ -299,23 +309,22 @@ Reset (hard reset/not safe) VM # VM=centos7-vm1 # rm -ri $D/$VM ``` -To see a complete list of virsh command type +查看 virsh 命令类型的完整列表 ``` # virsh help | less # virsh help | grep reboot ``` +### 关于作者 -### About the author - -The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][17], [Facebook][18], [Google+][19]. 
+作者是 nixCraft 的创建者,也是经验丰富的系统管理员和 Linux 操作系统/ Unix shell 脚本的培训师。 他曾与全球客户以及 IT,教育,国防和空间研究以及非营利部门等多个行业合作。 在 [Twitter][17],[Facebook][18],[Google +][19] 上关注他。 -------------------------------------------------------------------------------- -via: https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/ +via: [https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/](https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-7-rhel-7-headless-server/) 作者:[Vivek Gite][a] -译者:[译者ID](https://github.com/译者ID) +译者:[MjSeven](https://github.com/MjSeven) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20180129 Create your own personal Cloud- Install OwnCloud.md b/translated/tech/20180129 Create your own personal Cloud- Install OwnCloud.md deleted file mode 100644 index ef45a39387..0000000000 --- a/translated/tech/20180129 Create your own personal Cloud- Install OwnCloud.md +++ /dev/null @@ -1,115 +0,0 @@ -搭建私有云:OwnCloud -====== - -所有人都在讨论云。尽管市面上有很多为我们提供云存储和其他云服务的主要的服务商,但是我们还是可以为自己搭建一个私有云。 - -在本教程中,我们将讨论如何利用 OwnCloud 搭建私有云。OwnCloude 是一个可以安装在我们 Linux 设备上的 web 应用程序,能够存储和服务我们的数据。OwnCloude 可以分享日历、联系人和书签,共享音/视频流等等。 - -本教程中,我们使用的是 CentOS 7 系统,但是本教程同样适用于其他 Linux 发行版中安装 OwnClude。让我们开始安装 OwnCloude 并且做一些准备工作, - -**(推荐阅读:[如何在 CentOS & RHEL 上使用 Apache 作为反向代理服务器][1])** - - **(同时推荐:[实时 Linux 服务器监测和 GLANCES 监测工具][2])** - -### 预备知识 - - * 我们需要在机器上配置 LAMP。参照阅读我们的文章‘[CentOS/RHEL 上配置 LAMP 服务器最简单的教程][3] & [在 Ubuntu 搭建 LAMP stack][4]’。 - - * 我们需要在自己的设备里安装这些包,‘ php-mysql php-json php-xml php-mbstring php-zip php-gd curl php-curl php-pdo’。使用包管理器安装它们。 - -``` -$ sudo yum install php-mysql php-json php-xml php-mbstring php-zip php-gd curl php-curl php-pdo -``` - -### 安装 - -安装 owncloud,我们现在需要在服务器上下载 ownCloud 安装包。使用下面的命令从官方网站下载最新的安装包(10.0.4-1), - -``` - $ wget https://download.owncloud.org/community/owncloud-10.0.4.tar.bz2 -``` - -使用下面的命令解压, - -``` - $ tar -xvf owncloud-10.0.4.tar.bz2 -``` - -现在,将所有解压后的文件移动至‘/var/www/html’ - -``` - $ mv owncloud/* /var/www/html -``` - -下一步,我们需要在 apache 上配置 ‘httpd.conf’文件 - -``` - $ sudo vim /etc/httpd/conf/httpd.com -``` - -同时更改下面的选项, - -``` - AllowOverride All -``` - -在 owncloud 文件夹下保存和修改文件权限, - -``` - $ sudo chown -R apache:apache /var/www/html/ - $ sudo chmod 777 /var/www/html/config/ -``` - -然后重启 apache 服务器执行修改, - -``` - $ sudo systemctl restart httpd -``` - -现在,我们需要在 MariaDB 上创建一个数据库,保存来自 owncould 的数据。使用下面的命令创建数据库和数据库用户, - -``` - $ mysql -u root -p - MariaDB [(none)] > create database owncloud; - MariaDB [(none)] > GRANT ALL ON owncloud.* TO ocuser@localhost IDENTIFIED BY 'owncloud'; - MariaDB [(none)] > flush privileges; - MariaDB [(none)] > exit -``` - -服务器配置部分完成后,现在我们可以在网页浏览器上访问 owncloud。打开浏览器,输入您的服务器 IP 地址,我这边的服务器是 10.20.30.100, - -![安装 owncloud][7] - -一旦 URL 加载完毕,我们将呈现上述页面。这里,我们将创建管理员用户同时提供数据库信息。当所有信息提供完毕,点击‘Finish setup’。 - -我们将被重定向到登陆页面,在这里,我们需要输入先前创建的凭据, - -![安装 owncloud][9] - -认证成功之后,我们将进入 owncloud 面板, - -![安装 owncloud][11] - -我们可以使用移动设备应用程序,同样也可以使用网页界面更新我们的数据。现在,我们已经有自己的私有云了,同时,关于如何安装 owncloud 创建私有云的教程也进入尾声。请在评论区留下自己的问题或建议。 - --------------------------------------------------------------------------------- - -via: http://linuxtechlab.com/create-personal-cloud-install-owncloud/ - -作者:[SHUSAIN][a] -译者:[CYLeft](https://github.com/CYLeft) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linuxtechlab.com/author/shsuain/ 
-[1]:http://linuxtechlab.com/apache-as-reverse-proxy-centos-rhel/ -[2]:http://linuxtechlab.com/linux-server-glances-monitoring-tool/ -[3]:http://linuxtechlab.com/easiest-guide-creating-lamp-server/ -[4]:http://linuxtechlab.com/install-lamp-stack-on-ubuntu/ -[6]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=400%2C647 -[7]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud1-compressor.jpg?resize=400%2C647 -[8]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=876%2C541 -[9]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud2-compressor1.jpg?resize=876%2C541 -[10]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=981%2C474 -[11]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud3-compressor1.jpg?resize=981%2C474 diff --git a/translated/tech/20180129 Parsing HTML with Python.md b/translated/tech/20180129 Parsing HTML with Python.md new file mode 100644 index 0000000000..5701b92aee --- /dev/null +++ b/translated/tech/20180129 Parsing HTML with Python.md @@ -0,0 +1,219 @@ +用Python解析HTML +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_html_code.png?itok=VjUmGsnl) + +图片由Jason Baker为Opensource.com所作。 + +作为Scribus文档团队的长期成员,我随时了解最新的源代码更新,以便对文档进行更新和补充。 我最近在刚升级到Fedora 27系统的计算机上使用Subversion进行“checkout”操作时,对于文档下载所需要的时间我感到很惊讶,文档由HTML页面和相关图像组成。 我担心该项目的文档看起来比项目本身大得多,并且怀疑其中的一些内容是“僵尸”文档——不会再使用的HTML文件以及HTML中无法访问到的图像。 + +我决定为自己创建一个项目来解决这个问题。 一种方法是搜索未使用的现有图像文件。 如果我可以扫描所有HTML文件中的图像引用,然后将该列表与实际图像文件进行比较,那么我可能会看到不匹配的文件。 + +这是一个典型的图像标签: +``` +Edit examples +``` + +我对第一组引号之间的部分很感兴趣,在src =之后。 寻找解决方案后,我找到一个名为[BeautifulSoup][1]的Python模块。 脚本的核心部分如下所示: + +``` +soup = BeautifulSoup(all_text, 'html.parser') +match = soup.findAll("img") +if len(match) > 0: + for m in match: + imagelist.append(str(m)) +``` + +我们可以使用这个`findAll` 方法来挖出图片标签。 这是一小部分输出: + +``` +GSview - Advanced Options PanelScribus External Tools Preferences +``` + +到现在为止还挺好。我原以为下一步就可以搞定了,但是当我在脚本中尝试了一些字符串方法时,它返回了有关标记的错误而不是字符串的错误。 我将输出保存到一个文件中,并在[KWrite][2]中进行编辑。 KWrite的一个好处是你可以使用正则表达式(regex)来做“查找和替换”操作,所以我可以用`\n', all_text) +if len(match)>0: + for m in match: + imagelist.append(m) +``` + +它的一小部分输出如下所示: +``` +images/cmcanvas.png" title="Context Menu for the document canvas" alt="Context Menu for the document canvas" />
+``` + +我决定回到`src=`这一块。 一种方法是等待`s`出现,然后看下一个字符是否是`r`,下一个是`c`,下一个是否`=`。 如果是这样,那就匹配上了! 那么两个双引号之间的内容就是我所需要的。 这种方法的问题在于需要连续识别上面这样的结构。 一种查看代表一行HTML文本的字符串的方法是: + +``` +for c in all_text: +``` + +但是这个逻辑太乱了,以至于不能持续匹配到前面的`c`,还有之前的字符,更之前的字符,更更之前的字符。 + +最后,我决定专注于`=`并使用索引方法,以便我可以轻松地引用字符串中的任何先前或将来的字符。 这里是搜索部分: + +``` + index = 3 + while index < linelength: + if (all_text[index] == '='): + if (all_text[index-3] == 's') and (all_text[index-2] == 'r') and (all_text[index-1] == 'c'): + imagefound(all_text, imagelist, index) + index += 1 + else: + index += 1 + else: + index += 1 +``` + +我用第四个字符开始搜索(索引从0开始),所以我在下面没有出现索引错误,并且实际上,在每一行的第四个字符之前不会有等号。 第一个测试是看字符串中是否出现了`=`,如果没有,我们就会前进。 如果我们确实看到一个等号,那么我们会看前三个字符是否是`s`,`r`和`c`。 如果全都匹配了,就调用函数`imagefound`: + +``` +def imagefound(all_text, imagelist, index): + end = 0 + index += 2 + newimage = '' + while end == 0: + if (all_text[index] != '"'): + newimage = newimage + all_text[index] + index += 1 + else: + newimage = newimage + '\n' + imagelist.append(newimage) + end = 1 + return +``` + +我们正在给函数发送当前索引,它代表着`=`。 我们知道下一个字符将会是`"`,所以我们跳过两个字符,并开始向名为`newimage`的控制字符串添加字符,直到我们发现下一个`"`,此时我们完成了一次匹配。 我们将字符串加一个`换行`符添加到列表`imagelist`中并`返回`,请记住,在剩余的这个HTML字符串中可能会有更多图片标签,所以我们马上回到搜索循环的中间。 + +以下是我们的输出现在的样子: +``` +images/text-frame-link.png +images/text-frame-unlink.png +images/gimpoptions1.png +images/gimpoptions3.png +images/gimpoptions2.png +images/fontpref3.png +images/font-subst.png +images/fontpref2.png +images/fontpref1.png +images/dtp-studio.png +``` + +啊,干净多了,而这只花费几秒钟的时间。 我本可以将索引前移7步来剪切`images/`部分,但我更愿意把这个部分保存下来,确保我没有切片掉图像文件名的第一个字母,这很容易用KWrite编辑成功- - 你甚至不需要正则表达式。 做完这些并保存文件后,下一步就是运行我编写的另一个脚本`sortlist.py`: + +``` +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# sortlist.py + +import os + +imagelist = [] +for line in open('/tmp/imagelist_parse4.txt').xreadlines(): + imagelist.append(line) + +imagelist.sort() + +outfile = open('/tmp/imagelist_parse4_sorted.txt', 'w') +outfile.writelines(imagelist) +outfile.close() +``` + +这会读取文件内容,并存储为列表,对其排序,然后另存为另一个文件。 之后,我可以做到以下几点: + +``` +ls /home/gregp/development/Scribus15x/doc/en/images/*.png > '/tmp/actual_images.txt' +``` + +然后我需要在该文件上运行`sortlist.py`,因为`ls`方法的排序与Python不同。 我原本可以在这些文件上运行比较脚本,但我更愿意以可视方式进行操作。 最后,我成功找到了42个图像,这些图像没有来自文档的HTML引用。 + +这是我的完整解析脚本: +``` +#!/usr/bin/env python +# -*- coding: utf-8 -*- +# parseimg4.py + +import os + +def imagefound(all_text, imagelist, index): + end = 0 + index += 2 + newimage = '' + while end == 0: + if (all_text[index] != '"'): + newimage = newimage + all_text[index] + index += 1 + else: + newimage = newimage + '\n' + imagelist.append(newimage) + end = 1 + return + +htmlnames = [] +imagelist = [] +tempstring = '' +filenames = os.listdir('/home/gregp/development/Scribus15x/doc/en/') +for name in filenames: + if name.endswith('.html'): + htmlnames.append(name) +#print htmlnames +for htmlfile in htmlnames: + all_text = open('/home/gregp/development/Scribus15x/doc/en/' + htmlfile).read() + linelength = len(all_text) + index = 3 + while index < linelength: + if (all_text[index] == '='): + if (all_text[index-3] == 's') and (all_text[index-2] == 'r') and +(all_text[index-1] == 'c'): + imagefound(all_text, imagelist, index) + index += 1 + else: + index += 1 + else: + index += 1 + +outfile = open('/tmp/imagelist_parse4.txt', 'w') +outfile.writelines(imagelist) +outfile.close() +imageno = len(imagelist) +print str(imageno) + " images were found and saved" +``` + +脚本名称为`parseimg4.py`,这并不能真实反映我陆续编写的脚本数量,包括微调的和大改的以及丢弃并重新开始写的。 请注意,我已经对这些目录和文件名进行了硬编码,但是总结起来很容易,要求用户输入这些信息。 同样,因为它们是工作脚本,所以我将输出发送到 
`/tmp`目录,所以一旦重新启动系统,它们就会消失。 + +这不是故事的结尾,因为下一个问题是:僵尸HTML文件怎么办? 任何未使用的文件都可能会引用到前面比对方法没有提取到的图像。 我们有一个`menu.xml`文件作为联机手册的目录,但我还需要考虑TOC(译者注:TOC是table of contents的缩写)中列出的某些文件可能引用了不在TOC中的文件,是的,我确实找到了一些这样的文件。 + +最后我可以说,这是一个比图像搜索更简单的任务,而且开发的过程对我有很大的帮助。 + + +### About the author + + [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/20150529_gregp.jpg?itok=nv02g6PV)][7] Greg Pittman - Greg是Kentucky州Louisville市的一名退休的神经学家,从二十世纪六十年代的Fortran IV语言开始长期以来对计算机和编程有着浓厚的兴趣。 当Linux和开源软件出现的时候,Greg深受启发,去学习更多只是,并实现最终贡献的承诺。 他是Scribus团队的成员。[更多关于我][8] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/parsing-html-python + +作者:[Greg Pittman][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/greg-p +[1]:https://www.crummy.com/software/BeautifulSoup/ +[2]:https://www.kde.org/applications/utilities/kwrite/ +[7]:https://opensource.com/users/greg-p +[8]:https://opensource.com/users/greg-p diff --git a/translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md b/translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md new file mode 100644 index 0000000000..50f2f00451 --- /dev/null +++ b/translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md @@ -0,0 +1,89 @@ +在 Ubuntu 17.10 上安装 AWFFull Web 服务器日志分析应用程序 +====== + + +AWFFull 是基于 “Webalizer” 的 Web 服务器日志分析程序。AWFFull 以 HTML 格式生成使用统计信息以便用浏览器查看。结果以柱状和图形两种格式显示,这有利于解释。它提供每年、每月、每日和每小时使用统计数据,并显示网站、URL、referrer、user agent(浏览器)、用户名、搜索字符串、进入/退出页面和国家(如果一些信息不存在于处理后日志中那么就没有)。AWFFull 支持 CLF(通用日志格式)日志文件,以及由 NCSA 和其他人定义的组合日志格式,它还能只能地处理这些格式的变体。另外,AWFFull 还支持 wu-ftpd xferlog 格式的日志文件,它能够分析 ftp 服务器和 squid 代理日志。日志也可以通过 gzip 压缩。 + +如果检测到压缩日志文件,它将在读取时自动解压缩。压缩日志必须是 .gz 扩展名的标准 gzip 压缩。 + +### 对于 Webalizer 的修改 + +AWFFull 基于 Webalizer 的代码,并有许多大的和小的变化。包括: + +o 不止原始统计数据:利用已发布的公式,提供额外的网站使用情况。 + +o GeoIP IP 地址能更准确地检测国家。 + +o 可缩放的图形 + +o 与 GNU gettext 集成,能够轻松翻译。目前支持 32 种语言。 + +o 在首页显示超过 12 个月的网站历史记录。 + +o 额外的页面计数跟踪和排序。 + +o 一些小的可视化调整,包括 Geolizer 使用在卷中使用 Kb、Mb。 + +o 额外的用于 URL 计数、进入和退出页面、站点的饼图 + +o 图形上的水平线更有意义,更易于阅读。 + +o User Agent 和 Referral 跟踪现在通过 PAGES 而非 HITS 进行计算。 + +o 现在支持 GNU 风格的长命令行选项(例如 --help)。 + +o 可以通过排除“什么不是”以及原始的“什么是”来选择页面。 + +o 对被分析站点的请求以匹配的引用 URL 显示。 + +o 404 错误表,并且可以生成引用 URL。 + +o 外部 CSS 文件可以与生成的 html 一起使用。 + +o POST 分析总结使得手动优化配置文件性能更简单。 + +o 指定的 IP 和地址可以分配给指定的国家。 + +o 便于使用其他工具详细分析的转储选项。 + +o 支持检测并处理 Lotus Domino v6 日志。 + +**在 Ubuntu 17.10 上安装 awffull** + +> sudo apt-get install awffull + +### 配置 AWFFULL + +你必须在 /etc/awffull/awffull.conf 中编辑 awffull 配置文件。如果你在同一台计算机上运行多个虚拟站点,​​则可以制作多个默认配置文件的副本。 + +> sudo vi /etc/awffull/awffull.conf + +确保有下面这几行 + +> LogFile /var/log/apache2/access.log.1 +> OutputDir /var/www/html/awffull + +保存并退出文件 + +你可以使用以下命令运行 awffull + +> awffull -c [your config file name] + +这将在 /var/www/html/awffull 目录下创建所有必需的文件,以便你可以使用 http://serverip/awffull/ + +你应该看到类似于下面的页面 + +如果你有更多站点,你可以使用 shell 和计划任务自动化这个过程。 + + +-------------------------------------------------------------------------------- + +via: http://www.ubuntugeek.com/install-awffull-web-server-log-analysis-application-on-ubuntu-17-10.html + +作者:[ruchi][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + 
+[a]:http://www.ubuntugeek.com/author/ubuntufix diff --git a/translated/tech/20180205 A File Transfer Utility To Download Only The New Parts Of A File.md b/translated/tech/20180205 A File Transfer Utility To Download Only The New Parts Of A File.md new file mode 100644 index 0000000000..1055aeacf4 --- /dev/null +++ b/translated/tech/20180205 A File Transfer Utility To Download Only The New Parts Of A File.md @@ -0,0 +1,98 @@ +一个仅下载文件新的部分的传输工具 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/02/Linux-1-720x340.png) + +仅仅因为网费每天变得越来越便宜,你也不应该重复下载相同的东西来浪费你的流量。一个很好的例子就是下载 Ubuntu 或任何 Linux 镜像的开发版本。如你所知,Ubuntu 开发人员每隔几个月就会发布一次日常构建、alpha、beta 版 ISO 镜像以供测试。在过去,一旦发布我就会下载这些镜像,并审查每个版本。现在不用了!感谢 **Zsync** 文件传输程序。现在可以仅下载 ISO 镜像新的部分。这将为你节省大量时间和 Internet 带宽。不仅时间和带宽,它将为你节省服务端和客户端的资源。 + +Zsync 使用与 **Rsync** 相同​​的算法,但它只下载文件的新部分,你会得到一份已有文件旧版本的副本。 Rsync 主要用于在计算机之间同步数据,而 Zsync 则用于分发数据。简单地说,可以使用 Zsync 将中心的一个文件分发给数千个下载者。它在 Artistic License V2 许可证下发布,完全免费且开源。 + +### 安装 Zsync + +Zsync 在大多数 Linux 发行版的默认仓库中有。 + +在 **Arch Linux** 及其衍生版上,使用命令安装它: +``` +$ sudo pacman -S zsync + +``` + +在 **Fedora** 上: + +启用 Zsync 仓库: +``` +$ sudo dnf copr enable ngompa/zsync + +``` + +并使用命令安装它: +``` +$ sudo dnf install zsync + +``` + +在 **Debian、Ubuntu、Linux Mint** 上: +``` +$ sudo apt-get install zsync + +``` + +对于其他发行版,你可以从 [**Zsync 下载页面**][1]下载二进制文件,并手动编译安装它,如下所示。 +``` +$ wget http://zsync.moria.org.uk/download/zsync-0.6.2.tar.bz2 +$ tar xjf zsync-0.6.2.tar.bz2 +$ cd zsync-0.6.2/ +$ configure +$ make +$ sudo make install + +``` + +### 用法 + +请注意,**只有当人们提供 zsync 下载时,zsync 才有用**。目前,Debian、Ubuntu(所有版本)的 ISO 镜像都可以用 .zsync 下载。例如,请访问以下链接。 + +你可能注意到,Ubuntu 18.04 LTS 每日构建版有直接的 ISO 和 .zsync 文件。如果你下载 .ISO 文件,则必须在 ISO 更新时下载完整的 ISO 文件。但是,如果你下载的是 .zsync 文件,那么 Zsync 将在未来仅下载新的更改。你不需要每次都下载整个 ISO 映像。 + +.zsync 文件包含 zsync 程序所需的元数据。该文件包含 rsync 算法的预先计算的校验和。它在服务器上生成一次,然后由任意数量的下载器使用。要使用 Zsync 客户端程序下载 .zsync 文件,你只需执行以下操作: +``` +$ zsync <.zsync-file-URL> + +``` + +例如: +``` +$ zsync http://cdimage.ubuntu.com/ubuntu/daily-live/current/bionic-desktop-amd64.iso.zsync + +``` + +如果你的系统中已有以前的镜像文件,那么 Zsync 将计算远程服务器中旧文件和新文件之间的差异,并仅下载新的部分。你将在终端看见计算过程一系列的点或星星。 + +如果你下载的文件的旧版本存在于当前工作目录,那么 Zsync 将只下载新的部分。下载完成后,你将看到两个镜像,一个你刚下载的镜像和以 **.iso.zs-old** 为扩展名的旧镜像。 + +如果没有找到相关的本地数据,Zsync 会下载整个文件。 + +![](http://www.ostechnix.com/wp-content/uploads/2018/02/Zsync-1.png) + +你可以随时按 **CTRL-C** 取消下载过程。 + +试想一下,如果你直接下载 .ISO 文件或使用 torrent,每当你下载新镜像时,你将损失约 1.4GB 流量。因此,Zsync 不会下载整个 Alpha、beta 和日常构建映像,而只是在你的系统上下载了 ISO 文件的新部分,并在系统中有一个旧版本的拷贝。 + +今天就到这里。希望对你有帮助。我将很快另外写一篇有用的指南。在此之前,请继续关注 OSTechNix! + +干杯! 
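+
+附带一提:zsync 之所以能实现增量下载,靠的是服务器端预先生成的 .zsync 元数据文件。如果你也想把自己的文件用这种方式发布出去,可以用 zsync 软件包自带的 `zsyncmake` 工具来生成它。下面是一个最小示例,其中的目录、文件名和 URL 都只是示意,请换成你自己的:
+```
+$ cd /var/www/html/isos/
+# 为要发布的文件生成元数据,-u 指定客户端以后下载原文件所用的 URL
+$ zsyncmake -u http://example.com/isos/myfile.iso myfile.iso
+# 默认会在当前目录生成 myfile.iso.zsync
+
+```
+
+把生成的 .zsync 文件和原文件放在同一个 Web 目录下,其他人就可以像前面演示的那样,用 `zsync http://example.com/isos/myfile.iso.zsync` 来增量下载了。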
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/zsync-file-transfer-utility-download-new-parts-file/ + +作者:[SK][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:http://zsync.moria.org.uk/downloads diff --git a/translated/tech/20180206 Manage printers and printing.md b/translated/tech/20180206 Manage printers and printing.md new file mode 100644 index 0000000000..4ba3ace0ec --- /dev/null +++ b/translated/tech/20180206 Manage printers and printing.md @@ -0,0 +1,534 @@ +Linux 中如何打印和管理打印机 +====== + + +### Linux 中的打印 + +虽然现在大量的沟通都是电子化和无纸化的,但是在我们的公司中还有大量的材料需要打印。银行结算单、公用事业帐单、财务和其它报告、以及收益结算单等一些东西还是需要打印的。本教程将介绍在 Linux 中如何使用 CUPS 去打印。 + +CUPS,是通用 Unix 打印系统(Common UNIX Printing System)的首字母缩写,它是 Linux 中的打印机和打印任务的管理者。早期计算机上的打印机一般是在特定的字符集和字体大小下打印文本文件行。现在的图形打印机可以打印各种字体和大小的文本和图形。尽管如此,现在你所使用的一些命令,在它们以前的历史上仍旧使用的是古老的行打印守护进程(LPD)技术。 + +本教程将帮你了解 Linux 服务器专业考试(LPIC-1)的第 108 号主题的 108.4 目标。这个目标的权重为 2。 + +#### 前提条件 + +为了更好地学习本系列教程,你需要具备基本的 Linux 知识,和使用 Linux 系统实践本教程中的命令的能力,你应该熟悉 GNU 和 UNIX® 命令的使用。有时不同版本的程序输出可能会不同,因此,你的结果可能与本教程中的示例有所不同。 + +本教程中的示例使用的是 Fedora 27 的系统。 + +### 有关打印的一些历史 + +这一小部分历史并不是 LPI 目标的,但它有助于你理解这个目标的相关环境。 + +早期的计算机大都使用行打印机。这些都是击打式打印机,一段时间以来,它们使用固定间距的字符和单一的字体来打印文本行。为提升整个系统性能,早期的主机都对慢速的外围设备如读卡器、卡片穿孔机、和运行其它工作的行打印进行交叉工作。因此就产生了在行上或者假脱机上的同步外围操作,这一术语目前在谈到计算机打印时仍然在使用。 + +在 UNIX 和 Linux 系统上,打印初始化使用的是 BSD(Berkeley Software Distribution)打印子系统,它是由一个作为服务器运行的行打印守护程序(LPD)组成,而客户端命令如 `lpr` 是用于提交打印作业。这个协议后来被 IETF 标准化为 RFC 1179 —— **行打印机守护协议**。 + +系统也有一个打印守护程序。它的功能与BSD 的 LPD 守护程序类似,但是它们的命令集不一样。你在后面会经常看到完成相同的任务使用不同选项的两个命令。例如,对于打印文件的命令,`lpr` 是伯克利实现的,而 `lp` 是 System V 实现的。 + +随着打印机技术的进步,在一个页面上混合出现不同字体成为可能,并且可以将图片像文字一样打印。可变间距字体,以及更多先进的打印技术,比如 kerning 和 ligatures,现在都已经标准化。它们对基本的 lpd/lpr 方法进行了改进设计,比如 LPRng,下一代的 LPR、以及 CUPS。 + +许多可以打印图形的打印机,使用 Adobe PostScript 语言进行初始化。一个 PostScript 打印机有一个解释器引擎,它可以解释打印任务中的命令并从这些命令中生成最终的页面。PostScript 经常被用做原始文件和打印机之间的中间层,比如一个文本文件或者一个图像文件,以及没有适合 PostScript 功能的特定打印机的最终格式。转换这些特定的打印任务,比如一个 ASCII 文本文件或者一个 JPEG 图像转换为 PostScript,然后再使用过滤器转换 PostScript 到非 PostScript 打印机所需要的最终光栅格式。 + +现在的便携式文档格式(PDF),它就是基于 PostScript 的,已经替换了传统的原始 PostScript。PDF 设计为与硬件和软件无关,它封装了要打印的页面的完整描述。你可以查看 PDF 文件,同时也可以打印它们。 + +### 管理打印队列 + +用户直接打印作业到一个名为打印队列的逻辑实体。在单用户系统中,一个打印队列和一个打印机通常是几乎相同的意思。但是,CUPS 允许系统对最终在一个远程系统上的打印,并不附属打印机到一个队列打印作业上,而是通过使用类,允许将打印作业重定向到该类第一个可用的打印机上。 + +你可以检查和管理打印队列。对于 CUPS 来说,其中一些命令还是很新的。另外的一些是源于 LPD 的兼容命令,不过现在的一些选项通常是原始 LPD 打印系统选项的有限子集。 + +你可以使用 CUPS 的 `lpstat` 命令去检查队列,以了解打印系统。一些常见命令如下表 1。 + +###### 表 1. lpstat 命令的选项 +| 选项 | 作用 | +| -a | 显示打印机状态 | +| -c | 显示打印类 | +| -p | 显示打印状态:enabled 或者 disabled. | +| -s | 显示默认打印机、打印机和类。相当于 -d -c -v。**注意:为指定多个选项,这些选项必须像值一样分隔开。**| +| -s | 显示打印机和它们的设备。 | + + +你也可以使用 LPD 的 `lpc` 命令,它可以在 /usr/sbin 中找到,使用它的 `status` 选项。如果你不想指定打印机名字,将列出所有的队列。列表 1 展示了命令的一些示例。 + +###### 列表 1. 
显示可用打印队列 +``` +[ian@atticf27 ~]$ lpstat -d +system default destination: HL-2280DW +[ian@atticf27 ~]$ lpstat -v HL-2280DW +device for HL-2280DW: dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/ +[ian@atticf27 ~]$ lpstat -s +system default destination: HL-2280DW +members of class anyprint: + HL-2280DW + XP-610 +device for anyprint: ///dev/null +device for HL-2280DW: dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/ +device for XP-610: dnssd://EPSON%20XP-610%20Series._ipp._tcp.local/?uuid=cfe92100-67c4-11d4-a45f-ac18266c48aa +[ian@atticf27 ~]$ lpstat -a XP-610 +XP-610 accepting requests since Thu 27 Apr 2017 05:53:59 PM EDT +[ian@atticf27 ~]$ /usr/sbin/lpc status HL-2280DW +HL-2280DW: + printer is on device 'dnssd' speed -1 + queuing is disabled + printing is enabled + no entries + daemon present + +``` + +这个示例展示了两台打印机 —— HL-2280DW 和 XP-610,和一个类 `anyprint`,它允许打印作业定向到这两台打印机中的第一个可用打印机。 + +在这个示例中,已经禁用了打印到 HL-2280DW 队列,但是打印是启用的,这样便于打印机脱机维护之前可以完成打印队列中的任务。无论队列是启用还是禁用,都可以使用 `cupsaccept` 和 `cupsreject` 命令来管理它们。你或许可能在 /usr/sbin 中找到这些命令,它们现在都是链接到新的命令上。同样,无论打印是启用还是禁用,你都可以使用 `cupsenable` 和 `cupsdisable` 命令来管理它们。在早期版本的 CUPS 中,这些被称为 `enable` 和 `disable`,它也许会与 bash shell 内置的 `enable` 混淆。列表 2 展示了如何去启用打印机 HL-2280DW 上的队列,不过它的打印还是禁止的。CUPS 的几个命令支持使用一个 `-r` 选项去提供一个动作的理由。这个理由会在你使用 `lpstat` 时显示,但是如果你使用的是 `lpc` 命令则不会显示它。 + +###### 列表 2. 启用队列和禁用打印 +``` +[ian@atticf27 ~]$ lpstat -a -p HL-2280DW +anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST +HL-2280DW not accepting requests since Thu 27 Apr 2017 05:52:27 PM EDT - + Maintenance scheduled +XP-610 accepting requests since Thu 27 Apr 2017 05:53:59 PM EDT +printer HL-2280DW is idle. enabled since Thu 27 Apr 2017 05:52:27 PM EDT + Maintenance scheduled +[ian@atticf27 ~]$ accept HL-2280DW +[ian@atticf27 ~]$ cupsdisable -r "waiting for toner delivery" HL-2280DW +[ian@atticf27 ~]$ lpstat -p -a +printer anyprint is idle. enabled since Mon 29 Jan 2018 01:17:09 PM EST +printer HL-2280DW disabled since Mon 29 Jan 2018 04:03:50 PM EST - + waiting for toner delivery +printer XP-610 is idle. enabled since Thu 27 Apr 2017 05:53:59 PM EDT +anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST +HL-2280DW accepting requests since Mon 29 Jan 2018 04:03:50 PM EST +XP-610 accepting requests since Thu 27 Apr 2017 05:53:59 PM EDT + +``` + +注意:用户执行这些任务必须经过授权。它可能要求是 root 用户或者其它的授权用户。在 /etc/cups/cups-files.conf 中可以看到 SystemGroup 的条目,cups-files.conf 的 man 页面有更多授权用户组的信息。 + +### 管理用户打印作业 + +现在,你已经知道了一些如何去检查打印队列和类的方法,我将给你展示如何管理打印队列上的作业。你要做的第一件事是,如何找到一个特定打印机或者全部打印机上排队的任意作业。完成上述工作要使用 `lpq` 命令。如果没有指定任何选项,`lpq` 将显示默认打印机上的队列。使用 `-P` 选项和一个打印机名字将指定打印机,或者使用 `-a` 选项去指定所有的打印机,如下面的列表 3 所示。 + +###### 列表 3. 
使用 lpq 检查打印队列 +``` +[pat@atticf27 ~]$ # As user pat (non-administrator) +[pat@atticf27 ~]$ lpq +HL-2280DW is not ready +Rank Owner Job File(s) Total Size +1st unknown 4 unknown 6144 bytes +2nd pat 6 bitlib.h 6144 bytes +3rd pat 7 bitlib.C 6144 bytes +4th unknown 8 unknown 1024 bytes +5th unknown 9 unknown 1024 bytes + +[ian@atticf27 ~]$ # As user ian (administrator) +[ian@atticf27 ~]$ lpq -P xp-610 +xp-610 is ready +no entries +[ian@atticf27 ~]$ lpq -a +Rank Owner Job File(s) Total Size +1st ian 4 permutation.C 6144 bytes +2nd pat 6 bitlib.h 6144 bytes +3rd pat 7 bitlib.C 6144 bytes +4th ian 8 .bashrc 1024 bytes +5th ian 9 .bashrc 1024 bytes + +``` + +在这个示例中,共有五个作业,它们是 4、6、7、8、和 9,并且它是名为 HL-2280DW 的打印机的队列,而不是 XP-610 的。在这个示例中使用 `-P` 选项,可简单地显示那个打印机已经准备好,但是没有队列任务。注意,CUPS 的打印机命名,大小写是不敏感的。还要注意的是,用户 ian 提交了同样的作业两次,当一个作业第一次没有打印时,经常能看到用户的这种动作。 + +一般情况下,你可能查看或者维护你自己的打印作业,但是,root 用户或者其它授权的用户通常会去管理其它打印作业。大多数 CUPS 命令都可以使用一个 `-E` 选项,对 CUPS 服务器与客户端之间的通讯进行加密。 + +使用 `lprm` 命令从队列中去删除 .bashrc 作业。如果不使用选项,将删除当前的作业。使用 `-` 选项,将删除全部的作业。要么就如列表 4 那样,指定一个要删除的作业列表。 + +###### 列表 4. 使用 lprm 删除打印作业 +``` +[[pat@atticf27 ~]$ # As user pat (non-administrator) +[pat@atticf27 ~]$ lprm +lprm: Forbidden + +[ian@atticf27 ~]$ # As user ian (administrator) +[ian@atticf27 ~]$ lprm 8 +[ian@atticf27 ~]$ lpq +HL-2280DW is not ready +Rank Owner Job File(s) Total Size +1st ian 4 permutation.C 6144 bytes +2nd pat 6 bitlib.h 6144 bytes +3rd pat 7 bitlib.C 6144 bytes +4th ian 9 .bashrc 1024 bytes + +``` + +注意,用户 pat 不能删除队列中的第一个作业,因为它是用户 ian 的。但是,ian 可以删除他自己的 8 号作业。 + +另外的可以帮你操作打印队列中的作业的命令是 `lp`。使用它可以去修改作业属性,比如打印数量或者优先级。我们假设用户 ian 希望他的作业 9 在用户 pat 的作业之前打印,并且希望打印两份。作业优先级的默认值是 50,它的优先级范围从最低的 1 到最高的 100 之间。用户 ian 可以使用 `-i`、`-n`、以及 `-q` 选项去指定一个要修改的作业,而新的打印数量和优先级可以如下面的列表 5 所示的那样去修改。注意,使用 `-l` 选项的 `lpq` 命令可以提供更详细的输出。 + +###### 列表 5. 使用 lp 去改变打印数量和优先级 +``` +[ian@atticf27 ~]$ lpq +HL-2280DW is not ready +Rank Owner Job File(s) Total Size +1st ian 4 permutation.C 6144 bytes +2nd pat 6 bitlib.h 6144 bytes +3rd pat 7 bitlib.C 6144 bytes +4th ian 9 .bashrc 1024 bytes +[ian@atticf27 ~]$ lp -i 9 -q 60 -n 2 +[ian@atticf27 ~]$ lpq +HL-2280DW is not ready +Rank Owner Job File(s) Total Size +1st ian 9 .bashrc 1024 bytes +2nd ian 4 permutation.C 6144 bytes +3rd pat 6 bitlib.h 6144 bytes +4th pat 7 bitlib.C 6144 bytes + +``` + +最后,`lpmove` 命令可以允许一个作业从一个队列移动到另一个队列。例如,我们可能因为打印机 HL-2280DW 现在不能使用,而想去移动一个作业到另外的队列上。你可以指定一个作业编号,比如 9,或者你可以用一个队列名加一个连字符去限定它,比如,HL-2280DW-0。`lpmove` 命令的操作要求一个授权用户。列表 6 展示了如何去从一个队列移动作业到另外的队列,通过打印机和作业 ID 指定第一个,然后指定打印机的所有作业都移动到第二个队列。稍后我们可以去再次检查队列,其中一个作业已经在打印中了。 + +###### 列表 6. 
使用 lpmove 移动作业到另外一个打印队列 +``` +[ian@atticf27 ~]$ lpmove HL-2280DW-9 anyprint +[ian@atticf27 ~]$ lpmove HL-2280DW xp-610 +[ian@atticf27 ~]$ lpq -a +Rank Owner Job File(s) Total Size +active ian 9 .bashrc 1024 bytes +1st ian 4 permutation.C 6144 bytes +2nd pat 6 bitlib.h 6144 bytes +3rd pat 7 bitlib.C 6144 bytes +[ian@atticf27 ~]$ # A few minutes later +[ian@atticf27 ~]$ lpq -a +Rank Owner Job File(s) Total Size +active pat 6 bitlib.h 6144 bytes +1st pat 7 bitlib.C 6144 bytes + +``` + +如果你使用的是打印服务器而不是 CUPS,比如 LPD 或者 LPRng,大多数的队列管理功能是由 `lpc` 命令的子命令来处理的。例如,你可以使用 `lpc topq` 去移动一个作业到队列的顶端。其它的 `lpc` 子命令包括 `disable`、`down`、`enable`、`hold`、`move`、`redirect`、`release`、和 `start`。这些子命令在 CUPS 的兼容命令中没有实现。 + +#### 打印文件 + +如何去打印创建的作业?大多数图形界面程序都提供了一个打印方法,通常是 **文件** 菜单下面的选项。这些程序为选择打印机、设置页边距、彩色或者黑白打印、打印数量、选择每张纸打印的页面数(每张纸打印两个页面,通常用于讲义)等等,都提供了图形化的工具。现在,我将为你展示如何使用命令行工具去管理这些功能,然后和图形化实现进行比较。 + +打印文件最简单的方法是使用 `lpr` 命令,然后提供一个文件名字。这将在默认打印机上打印这个文件。`lp` 命令不仅可以打印文件,也可以修改打印作业。列表 7 展示了使用这个命令的一个简单示例。注意,`lpr` 会静默处理这个作业,但是 `lp` 会显示处理后的作业的 ID。 + +###### 列表 7. 使用 lpr 和 lp 打印 +``` +[ian@atticf27 ~]$ echo "Print this text" > printexample.txt +[ian@atticf27 ~]$ lpr printexample.txt +[ian@atticf27 ~]$ lp printexample.txt +request id is HL-2280DW-12 (1 file(s)) + +``` + +表 2 展示了 `lpr` 上你可以使用的一些选项。注意, `lp` 的选项和 `lpr` 的很类似,但是名字可能不一样;例如,`-#` 在 `lpr` 上是相当于 `lp` 的 `-n` 选项。查看 man 页面了解更多的信息。 + +###### 表 2. lpr 的选项 + +| 选项 | 作用 | +| -C, -J, or -T | 设置一个作业名字。 | +| -P | 选择一个指定的打印机。 | +| -# | 指定打印数量。注意它与 lp 命令的 -n 有点差别。| +| -m | 在作业完成时发送电子邮件。 | +| -l | 表示打印文件已经为打印做好格式准备。相当于 -o raw。 | +| -o | 设置一个作业选项。 | +| -p | 格式化一个带有阴影标题的文本文件。相关于 -o prettyprint。 | +| -q | 暂缓(或排队)最后的打印作业。 | +| -r | 在文件进入打印池之后,删除文件。 | + +列表 8 展示了一些选项。我要求打印之后给我发确认电子邮件,那个作业被暂缓执行,并且在打印之后删除文件。 + +###### 列表 8. 使用 lpr 打印 +``` +[ian@atticf27 ~]$ lpr -P HL-2280DW -J "Ian's text file" -#2 -m -p -q -r printexample.txt +[[ian@atticf27 ~]$ lpq -l +HL-2280DW is ready + + +ian: 1st [job 13 localhost] + 2 copies of Ian's text file 1024 bytes +[ian@atticf27 ~]$ ls printexample.txt +ls: cannot access 'printexample.txt': No such file or directory + +``` + +我现在有一个在 HL-2280DW 打印队列上暂缓执行的作业。怎么做到这样?`lp` 命令有一个选项可以暂缓或者投放作业,使用 `-H` 选项是使用各种值。列表 9 展示了如何投放被暂缓的作业。检查 `lp` 命令的 man 页面了解其它选项的信息。 + +###### 列表 9. 重启一个暂缓的打印作业 +``` +[ian@atticf27 ~]$ lp -i 13 -H resume + +``` + +并不是所有的可用打印机都支持相同的选项集。使用 `lpoptions` 命令去查看一个打印机的常用选项。添加 `-l` 选项去显示打印机专用的选项。列表 10 展示了两个示例。许多常见的选项涉及到人像/风景打印、页面大小和输出在纸张上的布局。详细信息查看 man 页面。 + +###### 列表 10. 
检查打印机选项 +``` +[ian@atticf27 ~]$ lpoptions -p HL-2280DW +copies=1 device-uri=dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/ +finishings=3 job-cancel-after=10800 job-hold-until=no-hold job-priority=50 +job-sheets=none,none marker-change-time=1517325288 marker-colors=#000000,#000000 +marker-levels=-1,92 marker-names='Black\ Toner\ Cartridge,Drum\ Unit' +marker-types=toner,opc number-up=1 printer-commands=none +printer-info='Brother HL-2280DW' printer-is-accepting-jobs=true +printer-is-shared=true printer-is-temporary=false printer-location +printer-make-and-model='Brother HL-2250DN - CUPS+Gutenprint v5.2.13 Simplified' +printer-state=3 printer-state-change-time=1517325288 printer-state-reasons=none +printer-type=135188 printer-uri-supported=ipp://localhost/printers/HL-2280DW +sides=one-sided + +[ian@atticf27 ~]$ lpoptions -l -p xp-610 +PageSize/Media Size: *Letter Legal Executive Statement A4 +ColorModel/Color Model: *Gray Black +InputSlot/Media Source: *Standard ManualAdj Manual MultiPurposeAdj MultiPurpose +UpperAdj Upper LowerAdj Lower LargeCapacityAdj LargeCapacity +StpQuality/Print Quality: None Draft *Standard High +Resolution/Resolution: *301x300dpi 150dpi 300dpi 600dpi +Duplex/2-Sided Printing: *None DuplexNoTumble DuplexTumble +StpiShrinkOutput/Shrink Page If Necessary to Fit Borders: *Shrink Crop Expand +StpColorCorrection/Color Correction: *None Accurate Bright Hue Uncorrected +Desaturated Threshold Density Raw Predithered +StpBrightness/Brightness: 0 100 200 300 400 500 600 700 800 900 *None 1100 +1200 1300 1400 1500 1600 1700 1800 1900 2000 Custom.REAL +StpContrast/Contrast: 0 100 200 300 400 500 600 700 800 900 *None 1100 1200 +1300 1400 1500 1600 1700 1800 1900 2000 2100 2200 2300 2400 2500 2600 2700 +2800 2900 3000 3100 3200 3300 3400 3500 3600 3700 3800 3900 4000 Custom.REAL +StpImageType/Image Type: None Text Graphics *TextGraphics Photo LineArt + +``` + +大多数的 GUI 应用程序有一个打印对话框,通常你可以使用 **文件 >打印** 菜单去选择它。图 1 展示了在 GIMP 中的一个示例,GIMP 是一个图像处理程序。 + +###### 图 1. 在 GIMP 中打印 + +![Printing from the GIMP][3] + +到目前为止,我们所有的命令都是隐式指向到本地的 CUPS 打印服务器上。你也可以通过指定 `-h` 选项和一个端口号(如果不是 CUPS 的默认端口号 631的话)将打印转向到另外一个系统上的服务器。 + +### CUPS 和 CUPS 服务器 + +CUPS 打印系统的核心是 `cupsd` 打印服务器,它是一个运行的守护进程。CUPS 配置文件一般位于 /etc/cups/cupsd.conf。/etc/cups 目录也有与 CUPS 相关的其它的配置文件。CUPS 一般在系统初始化期间启动,根据你的发行版不同,它也可能通过位于 /etc/rc.d/init.d 或者 /etc/init.d 目录中的 CUPS 脚本来控制。对于 最新使用 systemd 来初始化的系统,CUPS 服务脚本可能在 /usr/lib/systemd/system/cups.service 中。和大多数使用脚本的服务一样,你可以停止、启动、或者重启守护程序。查看我们的教程:[学习 Linux,101:运行级别、引导目标、关闭、和重启动][4],了解使用初始化脚本的更多信息。 + +配置文件 /etc/cups/cupsd.conf 包含管理一些事情的参数,比如访问打印系统、是否允许远程打印、本地打印池文件等等。在一些系统上,一个辅助的部分单独描述打印队列,它一般是由配置工具自动生成的。列表 11 展示了一个默认的 cupsd.conf 文件中的一些条目。注意,注释是以 # 字符开头的。默认值通常以注释的方式显示,并且可以通过删除前面的 # 字符去改变默认值。 + +###### Listing 11. 默认的 /etc/cups/cupsd.conf 文件的部分内容 +``` +# Only listen for connections from the local machine. +Listen localhost:631 +Listen /var/run/cups/cups.sock + +# Show shared printers on the local network. +Browsing On +BrowseLocalProtocols dnssd + +# Default authentication type, when authentication is required... +DefaultAuthType Basic + +# Web interface setting... +WebInterface Yes + +# Set the default printer/job policies... + + # Job/subscription privacy... + JobPrivateAccess default + JobPrivateValues default + SubscriptionPrivateAccess default + SubscriptionPrivateValues default + + # Job-related operations must be done by the owner or an administrator... 
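+  # (LCTT 译注:在原始的 cupsd.conf 中,下面的 Order 指令外面还有成对的
+  # <Limit 操作列表> 和 </Limit> 行,用来把这些作业相关操作限定给作业属主
+  # @OWNER 和管理员组 @SYSTEM;以尖括号开头的行在网页排版中丢失了,
+  # 具体内容请以你系统上实际安装的文件为准)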
+ + Order deny,allow + + +``` + +能够允许在 cupsd.conf 中使用的文件、目录、和用户配置命令,现在都存储在作为替代的 cups-files.conf 中。这是为了防范某些类型的提权攻击。列表 12 展示了 cups-files.conf 文件中的一些条目。注意,正如在文件层次结构标准(FHS)中所期望的那样,打印池文件默认保存在文件系统的 /var/spool 目录中。查看 man 页面了解 cupsd.conf 和 cups-files.conf 配置文件的更多信息。 + +###### 列表 12. 默认的 /etc/cups/cups-files.conf 配置文件的部分内容 +``` +# Location of the file listing all of the local printers... +#Printcap /etc/printcap + +# Format of the Printcap file... +#PrintcapFormat bsd +#PrintcapFormat plist +#PrintcapFormat solaris + +# Location of all spool files... +#RequestRoot /var/spool/cups + +# Location of helper programs... +#ServerBin /usr/lib/cups + +# SSL/TLS keychain for the scheduler... +#ServerKeychain ssl + +# Location of other configuration files... +#ServerRoot /etc/cups + +``` + +列表 12 引用了 /etc/printcap 文件。这是 LPD 打印服务器的配置文件的名字,并且一些应用程序仍然使用它去确定可用的打印机和它们的属性。它通常是在 CUPS 系统上自动生成的,因此,你可能没有必要去修改它。但是,如果你在诊断用户打印问题,你可能需要去检查它。列表 13 展示了一个示例。 + +###### 列表 13. 自动生成的 /etc/printcap +``` +# This file was automatically generated by cupsd(8) from the +# /etc/cups/printers.conf file. All changes to this file +# will be lost. +HL-2280DW|Brother HL-2280DW:rm=atticf27:rp=HL-2280DW: +anyprint|Any available printer:rm=atticf27:rp=anyprint: +XP-610|EPSON XP-610 Series:rm=atticf27:rp=XP-610: + +``` + +这个文件中的每一行都有一个打印机名字、打印机描述,远程机器(rm)的名字、以及那个远程机器上的远程打印机(rp)。老的 /etc/printcap 文件也描述了打印机的能力。 + +#### 文件转换过滤器 + +你可以使用 CUPS 打印许多类型的文件,包括明文的文本文件、PDF、PostScript、和各种格式的图像文件,你只需要提供要打印的文件名,除此之外你再无需向 `lpr` 或 `lp` 命令提供更多的信息。这个神奇的壮举是通过使用过滤器来实现的。实际上,这些年来最流行的过滤器就命名为 magicfilter。 + +当打印一个文件时,CUPS 使用多用途因特网邮件扩展(MIME)类型去决定合适的转换过滤器。其它的打印包可能使用由 `file` 命令使用的神奇数字机制。关于 `file` 或者 `magic` 的更多信息可以查看它们的 man 页面。 + +输入文件被过滤器转换成中间层的光栅格式或者 PostScript 格式。一些作业信息,比如打印数量也会被添加进去。数据最终通过一个 bechend 发送到目标打印机。还有一些可以用手动过滤的输入文件的过滤器。你可以通过这些过滤器获得特殊格式的结果,或者去处理一些 CUPS 原生并不支持的文件格式。 + +#### 添加打印机 + +CUPS 支持多种打印机,包括: + + * 本地连接的并行口和 USB 口打印机 + * 因特网打印协议(IPP)打印机 + * 远程 LPD 打印机 + * 使用 SAMBA 的 Microsoft® Windows® 打印机 + * 使用 NCP 的 Novell 打印机 + * HP Jetdirect 打印机 + + + +当系统启动或者设备连接时,现在的大多数系统都会尝试自动检测和自动配置本地硬件。同样,许多网络打印机也可以被自动检测到。使用 CUPS 的 web 管理工具( 或者 )去搜索或添加打印机。许多发行版都包含它们自己的配置工具,比如,在 SUSE 系统上的 YaST。图 2 展示了使用 localhost:631 的 CUPS 界面,图 3 展示了 Fedora 27 上的 GNOME 打印机设置对话框。 + +###### 图 2. 使用 CUPS 的 web 界面 + + +![Using the CUPS web interface][5] + +###### 图 3. Fedora 27 上的打印机设置 + + +![Using printer settings on Fedora 27][6] + +你也可以从命令行配置打印机。在配置打印机之前,你需要一些关于打印机和它的连接方式的基本信息。如果是一个远程系统,你还需要一个用户 ID 和密码。 + +你需要去知道你的打印机使用什么样的驱动程序。不是所有的打印机都支持 Linux,有些打印机在 Linux 上压根就不能使用,或者功能受限。你可以去 OpenPrinting.org(查看相关主题)去查看是否有你的特定的打印机的驱动程序。`lpinfo` 命令也可以帮你识别有效的设备类型和驱动程序。使用 `-v` 选项去列出支持的设备,使用 `-m` 选项去列出驱动程序,如列表 14 所示。 + +###### 列表 14. 
可用的打印机驱动程序
+```
+[ian@atticf27 ~]$ lpinfo -m | grep -i xp-610
+lsb/usr/Epson/epson-inkjet-printer-escpr/Epson-XP-610_Series-epson-escpr-en.ppd.gz
+EPSON XP-610 Series, Epson Inkjet Printer Driver (ESC/P-R) for Linux
+[ian@atticf27 ~]$ locate "Epson-XP-610_Series-epson-escpr-en.ppd.gz"
+/usr/share/ppd/Epson/epson-inkjet-printer-escpr/Epson-XP-610_Series-epson-escpr-en.ppd.gz
+[ian@atticf27 ~]$ lpinfo -v
+network socket
+network ipps
+network lpd
+network beh
+network ipp
+network http
+network https
+direct hp
+serial serial:/dev/ttyS0?baud=115200
+direct parallel:/dev/lp0
+network smb
+direct hpfax
+network dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/
+network dnssd://EPSON%20XP-610%20Series._ipp._tcp.local/?uuid=cfe92100-67c4-11d4-a45f-ac18266c48aa
+network lpd://BRN001BA98A1891/BINARY_P1
+network lpd://192.168.1.38:515/PASSTHRU
+
+```
+
+Epson-XP-610_Series-epson-escpr-en.ppd.gz 驱动程序在我的系统上位于 /usr/share/ppd/Epson/epson-inkjet-printer-escpr/ 目录中。
+
+如果你找不到驱动程序,你可以到打印机生产商的网站看看,说不定会有专用的驱动程序。例如,在写这篇文章的时候,Brother 官网就提供了我的 HL-2280DW 打印机的驱动程序,但这个驱动程序在 OpenPrinting.org 上还没有列出来。
+
+如果你收集齐了基本信息,你就可以如列表 15 所示的那样,使用 `lpadmin` 命令去配置打印机。为此,我将为我的 HL-2280DW 打印机创建另外一个实例,以便于双面打印。
+
+###### 列表 15. 配置一台打印机
+```
+[ian@atticf27 ~]$ lpinfo -m | grep -i "hl.*2280"
+HL2280DW.ppd Brother HL2280DW for CUPS
+lsb/usr/HL2280DW.ppd Brother HL2280DW for CUPS
+[ian@atticf27 ~]$ lpadmin -p HL-2280DW-duplex -E -m HL2280DW.ppd \
+> -v dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/ \
+> -D "Brother 1" -o sides=two-sided-long-edge
+[ian@atticf27 ~]$ lpstat -a
+anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST
+HL-2280DW accepting requests since Tue 30 Jan 2018 10:56:10 AM EST
+HL-2280DW-duplex accepting requests since Wed 31 Jan 2018 11:41:16 AM EST
+XP-610 accepting requests since Mon 29 Jan 2018 10:34:49 PM EST
+
+```
+
+你可以使用带 `-c` 选项的 `lpadmin` 命令去创建一个仅用于双面打印的新类,而不用为了双面打印去创建一个打印机的副本。
+
+如果你需要删除一台打印机,使用带 `-x` 选项的 `lpadmin` 命令。
+
+列表 16 展示了如何去删除打印机和创建一个替代类。
+
+###### 列表 16.
删除一个打印机和创建一个类 +``` +[ian@atticf27 ~]$ lpadmin -x HL-2280DW-duplex +[ian@atticf27 ~]$ lpadmin -p HL-2280DW -c duplex -E -D "Duplex printing" -o sides=two-sided-long-edge +[ian@atticf27 ~]$ cupsenable duplex +[ian@atticf27 ~]$ cupsaccept duplex +[ian@atticf27 ~]$ lpstat -a +anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST +duplex accepting requests since Wed 31 Jan 2018 12:12:05 PM EST +HL-2280DW accepting requests since Wed 31 Jan 2018 11:51:16 AM EST +XP-610 accepting requests since Mon 29 Jan 2018 10:34:49 PM EST + +``` + +你也可以使用 `lpadmin` 或者 `lpoptions` 命令去设置各种打印机选项。详细信息请查看 man 页面。 + +### 排错 + +如果你有一个打印问题,尝试下列的提示: + + * 确保 CUPS 服务器正在运行。你可以使用 `lpstat` 命令,如果它不能连接到 cupsd 守护程序,它将会报告一个错误。或者,你可以使用 `ps -ef` 命令在输出中去检查是否有 cupsd。 + * 如果你尝试为打印去排队一个作业,而得到一个错误信息,指示打印机不接受这个作业,你可以使用 `lpstat -a` 或者 `lpc status` 去检查那个打印机可接受的作业。 + * 如果一个队列中的作业没有打印,使用 `lpstat -p` 或 `lpc status` 去检查那个打印机是否接受作业。如前面所讨论的那样,你可能需要将这个作业移动到其它的打印机。 + * 如果这个打印机是远程的,检查它在远程系统上是否存在,并且是可操作的。 + * 检查配置文件,确保特定的用户或者远程系统允许在这个打印机上打印。 + * 确保防火墙允许远程打印请求,是否允许从其它系统到你的系统,或者从你的系统到其它系统的数据包通讯。 + * 验证是否有正确的驱动程序。 + + + +正如你所见,打印涉及到你的系统中的几个组件,甚至还有网络。在本教程中,基于篇幅的考虑,我们仅为诊断给你提供了几个着手点。大多数的 CUPS 系统也有实现我们所讨论的命令行功能的图形界面。一般情况下,这个界面是从本地主机使用浏览器指向 631 端口()来访问的,如前面的图 2 所示。 + +你可以通过将 CUPS 运行在前台而不是做为一个守护进程来诊断它的问题。如果有需要,你也可以通过这种方式去测试替代的配置文件。运行 `cupsd -h` 获得更多信息,或者查看 man 页面。 + +CUPS 也管理一个访问日志和错误日志。你可以在 cupsd.conf 中使用 LogLevel 语句来改变日志级别。默认情况下,日志是保存在 /var/log/cups 目录。它们可以在浏览器界面()下,从 **Administration** 选项卡中查看。使用不带任何选项的 `cupsctl` 命令可以显示日志选项。也可以编辑 cupsd.conf 或者使用 `cupsctl` 去调整各种日志参数。查看 `cupsctl` 命令的 man 页面了解更多信息。 + +在 Ubuntu 的 Wiki 页面上的 [调试打印问题][7] 页面也是一个非常好的学习的地方。 + +它包含了打印和 CUPS 的介绍。 + +-------------------------------------------------------------------------------- + +via: https://www.ibm.com/developerworks/library/l-lpic1-108-4/index.html + +作者:[Ian Shields][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ibm.com +[1]:http://www.lpi.org +[2]:https://www.ibm.com/developerworks/library/l-lpic1-map/ +[3]:https://www.ibm.com/developerworks/library/l-lpic1-108-4/gimp-print.jpg +[4]:https://www.ibm.com/developerworks/library/l-lpic1-101-3/ +[5]:https://www.ibm.com/developerworks/library/l-lpic1-108-4/fig-cups-web.jpg +[6]:https://www.ibm.com/developerworks/library/l-lpic1-108-4/fig-settings.jpg +[7]:https://wiki.ubuntu.com/DebuggingPrintingProblems diff --git a/translated/tech/20180213 Getting started with the RStudio IDE.md b/translated/tech/20180213 Getting started with the RStudio IDE.md new file mode 100644 index 0000000000..762165fa13 --- /dev/null +++ b/translated/tech/20180213 Getting started with the RStudio IDE.md @@ -0,0 +1,118 @@ +开始使用 RStudio IDE +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_development_programming_screen.png?itok=BgcSm5Pl) + +从我记事起,我就一直在与数字玩耍。作为 20 世纪 70 年代后期的本科生,我开始上统计学的课程,学习如何检查和分析数据以揭示某些意义。 + +那时候,我有一部科学计算器,它让统计计算变得比以前容易很多。在 90 年代早期,作为一名从事 t 检验,相关性以及 [ANOVA][1] 研究的教育心理学研究生,我开始通过精心编写输入 IBM 主机的文本文件来进行计算。这个主机是对我的手持计算器的一个改进,但是一个小的间距错误会使得整个过程无效,而且这个过程仍然有点乏味。 + +撰写论文时,尤其是我的毕业论文,我需要一种方法能够根据我的数据来创建图表并将它们嵌入到文字处理文档中。我着迷于 Microsoft Excel 及其数字运算能力以及可以用计算结果创建出的大量图表。但每一步都有成本。在 20 世纪 90 年代,除了 Excel,还有其他专有软件包,比如 SAS 和 SPSS+,但对于我那已经满满的研究生时间表来说,学习曲线是一项艰巨的任务。 + +### 快速回到现在 + +最近,由于我对数据科学的兴趣浓厚,加上对 Linux 和开源软件的浓厚兴趣,我阅读了大量的数据科学文章,并在 Linux 会议上听了许多数据科学演讲者谈论他们的工作。因此,我开始对编程语言 R(一种开源的统计计算软件)非常感兴趣。 + +起初,这只是一个火花。当我和我的朋友 Michael J. 
Gallagher 博士谈论他如何在他的 [博士论文][2] 研究中使用 R 时,这个火花便增大了。最后,我访问了 [R project][3] 的网站,并了解到我可以轻松地安装 [R for Linux][4]。游戏开始! + +### 安装 R + +根据你的操作系统和分布情况,安装 R 会稍有不同。请参阅 [Comprehensive R Archive Network][5] (CRAN) 网站上的安装指南。CRAN 提供了在 [各种 Linux 发行版][6],[Fedora,RHEL,及其衍生版][7],[MacOS][8] 和 [Windows][9] 上的安装指示。 + +我在使用 Ubuntu,则按照 CRAN 的指示,将以下行加入到我的 `/etc/apt/sources.list` 文件中: + +``` +deb https:///bin/linux/ubuntu artful/ + +``` + +接着我在终端运行下面命令: + +``` +$ sudo apt-get update + +$ sudo apt-get install r-base + +``` + +根据 CRAN,“需要从源码编译 R 的用户【如包的维护者,或者任何通过 `install.packages()` 安装包的用户】也应该安装 `r-base-dev` 的包。” + +### 使用 R 和 Rstudio + +安装好了 R,我就准备了解更多关于使用这个强大的工具的信息。Gallagher 博士推荐了 [DataCamp][10] 上的 “Start learning R”,并且我也找到了适用于 R 新手的免费课程。两门课程都帮助我学习 R 的命令和语法。我还参加了 [Udemy][12] 上的 R 在线编程课程,并从 [No Starch Press][14] 上购买了 [Book of R][13]。 + +在阅读更多内容并观看 YouTube 视频后,我意识到我还应该安装 [RStudio][15]。Rstudio 是 R 的开源 IDE,易于在 [Debian, Ubuntu, Fedora, 和 RHEL][16] 上安装。它也可以安装在 MacOS 和 Windows 上。 + +根据 Rstudio 网站的说明,可以根据你的偏好对 IDE 进行自定义,具体方法是选择工具菜单,然后从中选择全局选项。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_global-options.png?itok=un6-SvS-) + +R 提供了一些很棒的演示例子,可以通过在提示符处输入 `demo()` 从控制台访问。`demo(plotmath)` 和 `demo(perspective)` 选项为 R 强大的功能提供了很好的例证。我尝试过一些简单的 [vectors][17] 并在 R 控制台的命令行中绘制,如下所示。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_plotting-vectors.png?itok=9T7UV8p2) + +你可能想要开始学习如何将 R 和一些样本数据结合起来使用,然后将这些知识应用到自己的数据上得到描述性统计。我自己没有丰富的数据来分析,但我搜索了可以使用的数据集 [datasets][18];这样一个数据集(我并没有用这个例子)是由圣路易斯联邦储备银行提供的 [经济研究数据][19]。我对一个题为“美国商业航空公司的乘客里程(1937-1960)”很感兴趣,因此我将它导入 RStudio 以测试 IDE 的功能。Rstudio 可以接受各种格式的数据,包括 CSV,Excel,SPSS 和 SAS。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/rstudio-import.png?itok=1yJKQei1) + +数据导入后,我使用 `summary(AirPassengers)` 命令获取数据的一些初始描述性统计信息。按回车键后,我得到了 1949-1960 年的每月航空公司旅客的摘要以及其他数据,包括飞机乘客数量的最小值,最大值,第一四分位数,第三四分位数。中位数以及平均数。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_air-passengers.png?itok=RCJMLIb3) + +我从摘要统计信息中知道航空乘客样本的均值为 280.3。在命令行中输入 `sd(AirPassengers)` 会得到标准偏差,在 RStudio 控制台中可以看到: + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_sd-air-passengers.png?itok=d-25fQoz) + +接下来,我生成了一个数据直方图,通过输入 `hist(AirPassengers);` 得到,这以图形的方式显示此数据集;Rstudio 可以将数据导出为 PNG,PDF,JPEG,TIFF,SVG,EPS 或 BMP。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_histogram-air-passengers.png?itok=0HWsseQE) + +除了生成统计数据和图形数据外,R 还记录了我所有的历史操作。这使得我能够返回先前的操作,并且我可以保存此历史记录以供将来参考。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_history.png?itok=50jaFPU4) + +在 RStudio 的脚本编辑器中,我可以编写我发出的所有命令的脚本,然后保存该脚本以便在我的数据更改后能再次运行,或者想重新访问它。 + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/r_script-editor.png?itok=eiE1_bnX) + +### 获得帮助 + +在 R 提示符下输入 `help()` 可以很容易找到帮助信息。输入你正在寻找的信息的特定主题可以找到具体的帮助信息,例如 `help(sd)` 可以获得有关标准差的帮助。通过在提示符处输入 `contributors()` 可以获得有关 R 项目贡献者的信息。您可以通过在提示符处输入 `citation()` 来了解如何引用 R。通过在提示符出输入 `license()` 可以很容易地获得 R 的许可证信息。 + +R 是在 GNU General Public License(1991 年 6 月的版本 2,或者 2007 年 6 月的版本 3)的条款下发布的。有关 R 许可证的更多信息,请参考 [R Project website][20]。 + +另外,RStudio 在 GUI 中提供了完美的帮助菜单。该区域包括 RStudio 备忘单(可作为 PDF 下载),[RStudio][21]的在线学习,RStudio 文档,支持和 [许可证信息][22]。 + +-------------------------------------------------------------------------------- + +via: 
https://opensource.com/article/18/2/getting-started-RStudio-IDE + +作者:[Don Watkins][a] +译者:[szcf-weiya](https://github.com/szcf-weiya) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/don-watkins +[1]:https://en.wikipedia.org/wiki/Analysis_of_variance +[2]:https://www.michael-j-gallagher.com/high-performance-computing +[3]:https://www.r-project.org/ +[4]:https://cran.r-project.org/index.html +[5]:https://cran.r-project.org/ +[6]:https://cran.r-project.org/bin/linux/ +[7]:https://cran.r-project.org/bin/linux/redhat/README +[8]:https://cran.r-project.org/bin/macosx/ +[9]:https://cran.r-project.org/bin/windows/ +[10]:https://www.datacamp.com/onboarding/learn?from=home&technology=r +[11]:http://tryr.codeschool.com/levels/1/challenges/1 +[12]:https://www.udemy.com/r-programming +[13]:https://nostarch.com/bookofr +[14]:https://opensource.com/article/17/10/no-starch +[15]:https://www.rstudio.com/ +[16]:https://www.rstudio.com/products/rstudio/download/ +[17]:http://www.r-tutor.com/r-introduction/vector +[18]:https://vincentarelbundock.github.io/Rdatasets/datasets.html +[19]:https://fred.stlouisfed.org/ +[20]:https://www.r-project.org/Licenses/ +[21]:https://www.rstudio.com/online-learning/#R +[22]:https://support.rstudio.com/hc/en-us/articles/217801078-What-license-is-RStudio-available-under- diff --git a/translated/tech/20180221 Protecting Code Integrity with PGP - Part 2- Generating Your Master Key.md b/translated/tech/20180221 Protecting Code Integrity with PGP - Part 2- Generating Your Master Key.md new file mode 100644 index 0000000000..e6e53b1f63 --- /dev/null +++ b/translated/tech/20180221 Protecting Code Integrity with PGP - Part 2- Generating Your Master Key.md @@ -0,0 +1,176 @@ +用 PGP 保护代码完整性 - 第二部分:生成你的主密钥 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/binary-1538717_1920.png?itok=kv_sxSnf) + +在本系列文章中,我们将深度探讨如何使用 PGP 以及为工作于自由软件项目的开发者提供实用指南。在前一篇文章中,我们介绍了[基本工具和概念][1]。在本文中,我们将展示如何生成和保护你的 PGP 主密钥。 + +### 清单 + + 1. 生成一个 4096 位的 RSA 主密钥 (ESSENTIAL) + + 2. 使用 paperkey 备份你的 RSA 主密钥 (ESSENTIAL) + + 3. 添加所有相关的身份 (ESSENTIAL) + + + + +### 考虑事项 + +#### 理解“主”(认证)密钥 + +在本节和下一节中,我们将讨论“主密钥”和“子密钥”。理解以下内容很重要: + + 1. 在“主密钥”和“子密钥”之间没有技术上的区别。 + + 2. 在创建时,我们赋予每个密钥特定的能力来分配功能限制 + + 3. 一个 PGP 密钥有四项能力 + + * [S] 密钥可以用于签名 + + * [E] 密钥可以用于加密 + + * [A] 密钥可以用于身份认证 + + * [C] 密钥可以用于认证其他密钥 + + 4. 
一个密钥可能有多种能力 + + + + +带有[C] (认证)能力的密钥被认为是“主”密钥,因为它是唯一可以用来表明与其他密钥关系的密钥。只有[C]密钥可以被用于: + + * 添加或撤销其他密钥(子密钥)的 S/E/A 能力 + + * 添加,更改或撤销密钥关联的身份(uids) + + * 添加或更改本身或其他子密钥的到期时间 + + * 为了网络信任目的为其它密钥签名 + + + + +在自由软件的世界里,[C]密钥就是你的数字身份。一旦你创建该密钥,你应该格外小心地保护它并且防止它落入坏人的手中。 + +#### 在你创建主密钥前 + +在你创建的你的主密钥前,你需要选择你的主要身份和主密码。 + +##### 主要身份 + +身份使用邮件中发件人一栏相同格式的字符串: +``` +Alice Engineer + +``` + +你可以在任何时候创建新的身份,取消旧的,并且更改你的“主要”身份。由于主要身份在所有 GnuPG 操作中都展示,你应该选择正式的和最有可能用于 PGP 保护通信的名字和邮件地址,比如你的工作地址或者用于在项目提交(commit)时签名的地址。 + +##### 密码 + +密码(passphrase)专用于在存储在磁盘上时使用对称加密算法对私钥进行加密。如果你的 .gnupg 目录的内容被泄露,那么一个好的密码就是小偷能够在线模拟你的最后一道防线,这就是为什么设置一个好的密码很重要的原因。 + +一个强密码的好的指导是用丰富或混合的词典的 3-4 个词,而不引用自流行来源(歌曲,书籍,口号)。由于你将相当频繁地使用该密码,所以它应当易于 输入和记忆。 + +##### 算法和密钥强度 + +尽管现在 GnuPG 已经支持椭圆曲线加密一段时间了,我们仍坚持使用 RSA 密钥,至少稍长一段时间。虽然现在就可以开始使用 ED25519 密钥,但你可能会碰到无法正确处理它们的工具和硬件设备。 + +如果后续的指南中我们说 2048 位的密钥对 RSA 公钥加密的生命周期已经足够,你可能也会好奇主密钥为什么是 4096 位。 原因很大程度是由于社会因素而非技术上的:主密钥在密钥链上恰好是最明显的,同时如果你的主密钥位数比一些你交互的开发者的少,他们将不可避免地负面评价你。 + +#### 生成主密钥 + +为了生成你的主密钥,请使用以下命令,并且将“Alice Engineer:”替换为正确值 +``` +$ gpg --quick-generate-key 'Alice Engineer ' rsa4096 cert + +``` + +一个要求输入密码的对话框将弹出。然后,你可能需要移动鼠标或输入一些密钥才能生成足够的熵,直到命令完成。 + +查看命令输出,它就像这样: +``` +pub rsa4096 2017-12-06 [C] [expires: 2019-12-06] + 111122223333444455556666AAAABBBBCCCCDDDD +uid Alice Engineer + +``` + +注意第二行的长字符串 -- 它是你新生成的密钥的完整指纹。密钥 ID(key IDs)可以用以下三种不同形式表达: + + * Fingerprint,一个完整的 40 个字符的密钥标识符 + + * Long,指纹的最后 16 个字符(AAAABBBBCCCCDDDD) + + * Short,指纹的最后 8 个字符(CCCCDDDD) + + + + +你应该避免使用 8 个字符的短密钥 ID(short key IDs),因为它们不足够唯一。 + +这里,我建议你打开一个文本编辑器,复制你新密钥的指纹并粘贴。你需要在接下来几步中用到它,所以将它放在旁边会很方便。 + +#### 备份你的主密钥 + +出于灾后恢复的目的 -- 同时特别的如果你试图使用 Web of Trust 并且收集来自其他项目开发者的密钥签名 -- 你应该创建你的私钥的 硬拷贝备份。万一所有其它的备份机制都失败了,这应当是最后的补救措施。 + +创建一个你的私钥的可打印的硬拷贝的最好方法是使用为此而写的软件 paperkey。Paperkey 在所有 Linux 发行版上可用,在 Mac 上也可以通过 brew 安装 paperkey。 + +运行以下命令,用你密钥的完整指纹替换[fpr]: +``` +$ gpg --export-secret-key [fpr] | paperkey -o /tmp/key-backup.txt + +``` + +输出将采用易于 OCR 或手动输入的格式,以防如果你需要恢复它的话。打印出该文件,然后拿支笔,并在纸的边缘写下密钥的密码。这是必要的一步,因为密钥输出仍然使用密码加密,并且如果你更改了密钥的密码,你不会记得第一次创建的密钥是什么 -- 我保证。 + +将打印结果和手写密码放入信封中,并存放在一个安全且保护好的地方,最好远离你家,例如银行保险库。 + +**打印机注意事项** 打印机连接到计算机的并行端口的时代已经过去了。现在他们拥有完整的操作系统,硬盘驱动器和云集成。由于我们发送给打印机的关键内容将使用密码进行加密,因此这是一项相当安全的操作,但请使用您最好的偏执判断。 + +#### 添加相关身份 + +如果你有多个相关的邮件地址(个人,工作,开源项目等),你应该将其添加到主密钥中。你不需要为任何你不希望用于 PGP 的地址(例如,可能不是你的校友地址)这样做。 + +该命令是(用你完整的密钥指纹替换[fpr]): +``` +$ gpg --quick-add-uid [fpr] 'Alice Engineer ' + +``` + +你可以查看你已经使用的 UIDs: +``` +$ gpg --list-key [fpr] | grep ^uid + +``` + +##### 选择主 UID + +GnuPG 将会把你最近添加的 UID 作为你的主 UID,如果这与你想的不同,你应该改回来: +``` +$ gpg --quick-set-primary-uid [fpr] 'Alice Engineer ' + +``` + +下次,我们将介绍如何生成 PGP 子密钥,它是你实际用于日常工作的密钥。 + +通过 Linux 基金会和 edX 的免费[“Introduction to Linux” ][2]课程了解关于 Linux 的更多信息。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/PGP/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key + +作者:[KONSTANTIN RYABITSEV][a] +译者:[kimii](https://github.com/kimii) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/mricon +[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools +[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20180222 Linux LAN Routing for Beginners- Part 1.md b/translated/tech/20180222 Linux LAN Routing for Beginners- Part 1.md new 
file mode 100644 index 0000000000..a53252ad6f --- /dev/null +++ b/translated/tech/20180222 Linux LAN Routing for Beginners- Part 1.md @@ -0,0 +1,103 @@ +Linux 局域网路由新手指南:第 1 部分 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/traffic_warder.jpeg?itok=hZxS_PB4) +前面我们学习了 [IPv6 路由][1]。现在我们继续深入学习 Linux 中的 IPv4 路由的基础知识。我们从硬件概述、操作系统和 IPv4 地址的基础知识开始,下周我们将继续学习它们如何配置,以及测试路由。 + +### 局域网路由器硬件 + +Linux 实际上是一个网络操作系统,一直都是,从一开始它就有内置的网络功能。为将你的局域网连入因特网,构建一个局域网路由器比起构建网关路由器要简单的多。你不要太过于执念安全或者防火墙规则,对于处理 NAT 它还是比较复杂的,网络地址转换是 IPv4 的一个痛点。我们为什么不放弃 IPv4 去转到 IPv6 呢?这样将使网络管理员的工作更加简单。 + +有点跑题了。从理论上讲,你的 Linux 路由器是一个至少有两个网络接口的小型机器。Linux Gizmos 是一个单片机的综合体:[98 个开放规格的目录,黑客友好的 SBCs][2]。你能够使用一个很老的笔记本电脑或者台式计算机。你也可以使用一个精简版计算机,像 ZaReason Zini 或者 System76 Meerkat 一样,虽然这些有点贵,差不多要 $600。但是它们又结实又可靠,并且你不用在 Windows 许可证上浪费钱。 + +如果对路由器的要求不高,使用树莓派 3 Model B 作为路由器是一个非常好的选择。它有一个 10/100 以太网端口,板载 2.4GHz 的 802.11n 无线网卡,并且它还有四个 USB 端口,因此你可以插入多个 USB 网卡。USB 2.0 和低速板载网卡可能会让树莓派变成你的网络上的瓶颈,但是,你不能对它期望太高(毕竟它只有 $35,既没有存储也没有电源)。它支持很多种风格的 Linux,因此你可以选择使用你喜欢的版本。基于 Debian 的树莓派是我的最爱。 + +### 操作系统 + +你可以在你选择的硬件上安装将你喜欢的 Linux 的简化版,因为定制的路由器操作系统,比如 OpenWRT、 Tomato、DD-WRT、Smoothwall、Pfsense 等等,都有它们自己的非标准界面。我的观点是,没有必要这么麻烦,它们对你并没有什么帮助。尽量使用标准的 Linux 工具,因为你只需要学习它们一次就够了。 + +Debian 的网络安装镜像大约有 300MB 大小,并且支持多种架构,包括 ARM、i386、amd64、和 armhf。Ubuntu 的服务器网络安装镜像也小于 50MB,这样你就可以控制你要安装哪些包。Fedora、Mageia、和 openSUSE 都提供精简的网络安装镜像。如果你需要创意,你可以浏览 [Distrowatch][3]。 + +### 路由器能做什么 + +我们需要网络路由器做什么?一个路由器连接不同的网络。如果没有路由,那么每个网络都是相互隔离的,所有的悲伤和孤独都没有人与你分享,所有节点只能孤独终老。假设你有一个 192.168.1.0/24 和一个 192.168.2.0/24 网络。如果没有路由器,你的两个网络之间不能相互沟通。这些都是 C 类的私有地址,它们每个都有 254 个可用网络地址。使用 ipcalc 可以非常容易地得到它们的这些信息: +``` +$ ipcalc 192.168.1.0/24 +Address: 192.168.1.0 11000000.10101000.00000001. 00000000 +Netmask: 255.255.255.0 = 24 11111111.11111111.11111111. 00000000 +Wildcard: 0.0.0.255 00000000.00000000.00000000. 11111111 +=> +Network: 192.168.1.0/24 11000000.10101000.00000001. 00000000 +HostMin: 192.168.1.1 11000000.10101000.00000001. 00000001 +HostMax: 192.168.1.254 11000000.10101000.00000001. 11111110 +Broadcast: 192.168.1.255 11000000.10101000.00000001. 11111111 +Hosts/Net: 254 Class C, Private Internet + +``` + +我喜欢 ipcalc 的二进制输出信息,它更加可视地表示了掩码是如何工作的。前三个八位组表示了网络地址,第四个八位组是主机地址,因此,当你分配主机地址时,你将 “掩盖” 掉网络地址部分,只使用剩余的主机部分。你的两个网络有不同的网络地址,而这就是如果两个网络之间没有路由器它们就不能互相通讯的原因。 + +每个八位组一共有 256 字节,但是它们并不能提供 256 个主机地址,因为第一个和最后一个值 ,也就是 0 和 255,是被保留的。0 是网络标识,而 255 是广播地址,因此,只有 254 个主机地址。ipcalc 可以帮助你很容易地计算出这些。 + +当然,这并不意味着你不能有一个结尾是 0 或者 255 的主机地址。假设你有一个 16 位的前缀: +``` +$ ipcalc 192.168.0.0/16 +Address: 192.168.0.0 11000000.10101000. 00000000.00000000 +Netmask: 255.255.0.0 = 16 11111111.11111111. 00000000.00000000 +Wildcard: 0.0.255.255 00000000.00000000. 11111111.11111111 +=> +Network: 192.168.0.0/16 11000000.10101000. 00000000.00000000 +HostMin: 192.168.0.1 11000000.10101000. 00000000.00000001 +HostMax: 192.168.255.254 11000000.10101000. 11111111.11111110 +Broadcast: 192.168.255.255 11000000.10101000. 
11111111.11111111 +Hosts/Net: 65534 Class C, Private Internet + +``` + +ipcalc 列出了你的第一个和最后一个主机地址,它们是 192.168.0.1 和 192.168.255.254。你是可以有以 0 或者 255 结尾的主机地址的,例如,192.168.1.0 和 192.168.0.255,因为它们都在最小主机地址和最大主机地址之间。 + +不论你的地址块是私有的还是公共的,这个原则同样都是适用的。不要羞于使用 ipcalc 来帮你计算地址。 + +### CIDR + +CIDR(无类域间路由)就是通过可变长度的子网掩码来扩展 IPv4 的。CIDR 允许对网络空间进行更精细地分割。我们使用 ipcalc 来演示一下: +``` +$ ipcalc 192.168.1.0/22 +Address: 192.168.1.0 11000000.10101000.000000 01.00000000 +Netmask: 255.255.252.0 = 22 11111111.11111111.111111 00.00000000 +Wildcard: 0.0.3.255 00000000.00000000.000000 11.11111111 +=> +Network: 192.168.0.0/22 11000000.10101000.000000 00.00000000 +HostMin: 192.168.0.1 11000000.10101000.000000 00.00000001 +HostMax: 192.168.3.254 11000000.10101000.000000 11.11111110 +Broadcast: 192.168.3.255 11000000.10101000.000000 11.11111111 +Hosts/Net: 1022 Class C, Private Internet + +``` + +网络掩码并不局限于整个八位组,它可以跨越第三和第四个八位组,并且子网部分的范围可以是从 0 到 3,而不是非得从 0 到 255。可用主机地址的数量并不一定是 8 的倍数,因为它是由整个八位组定义的。 + +给你留一个家庭作业,复习 CIDR 和 IPv4 地址空间是如何在公共、私有和保留块之间分配的,这个作业有助你更好地理解路由。一旦你掌握了地址的相关知识,配置路由器将不再是件复杂的事情了。 + +从 [理解 IP 地址和 CIDR 图表][4]、[IPv4 私有地址空间和过滤][5]、以及 [IANA IPv4 地址空间注册][6] 开始。接下来的我们将学习如何创建和管理路由器。 + +通过来自 Linux 基金会和 edX 的免费课程 ["Linux 入门" ][7]学习更多 Linux 知识。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/2/linux-lan-routing-beginners-part-1 + +作者:[Carla Schroder][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder +[1]:https://www.linux.com/learn/intro-to-linux/2017/7/practical-networking-linux-admins-ipv6-routing +[2]:http://linuxgizmos.com/catalog-of-98-open-spec-hacker-friendly-sbcs/#catalog +[3]:http://distrowatch.org/ +[4]:https://www.ripe.net/about-us/press-centre/understanding-ip-addressing +[5]:https://www.arin.net/knowledge/address_filters.html +[6]:https://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xhtml +[7]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md b/translated/tech/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md new file mode 100644 index 0000000000..cd9c2c237d --- /dev/null +++ b/translated/tech/20180228 Protecting Code Integrity with PGP - Part 3- Generating PGP Subkeys.md @@ -0,0 +1,110 @@ +使用 PGP 保护代码完整性 - 第 3 部分:生成 PGP 子密钥 +====== +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/binary.jpg?itok=h62HujOC) + +在本系列教程中,我们提供了使用 PGP 的实用指南。在此之前,我们介绍了[基本工具和概念][1],并介绍了如何[生成并保护您的主 PGP 密钥][2]。在第三篇文章中,我们将解释如何生成 PGP 子密钥,以及它们在日常工作中使用。 + +### 清单 + + 1. 生成 2048 位加密子密钥(必要) + +  2. 生成 2048 位签名子密钥(必要) + +  3. 生成一个 2048 位验证子密钥(可选) + +  4. 将你的公钥上传到 PGP 密钥服务器(必要) + +  5. 
设置一个刷新的定时任务(必要) + + + +#### 注意事项 + +现在我们已经创建了主密钥,让我们创建用于日常工作的密钥。我们创建了 2048 位密钥,因为很多专用硬件(我们稍后会讨论这个)不能处理更长的密钥,但同样也是出于实用的原因。如果我们发现自己处于一个 2048 位 RSA 密钥也不够好的世界,那将是由于计算或数学的基本突破,因此更长的 4096 位密钥不会产生太大的差别。 + +##### 创建子密钥 + +要创建子密钥,请运行: +``` +$ gpg --quick-add-key [fpr] rsa2048 encr +$ gpg --quick-add-key [fpr] rsa2048 sign + +``` + +你也可以创建验证密钥,这能让你使用你的 PGP 密钥来使用 ssh: +``` +$ gpg --quick-add-key [fpr] rsa2048 auth + +``` + +你可以使用 gpg --list-key [fpr] 来查看你的密钥信息: +``` +pub rsa4096 2017-12-06 [C] [expires: 2019-12-06] + 111122223333444455556666AAAABBBBCCCCDDDD +uid [ultimate] Alice Engineer +uid [ultimate] Alice Engineer +sub rsa2048 2017-12-06 [E] +sub rsa2048 2017-12-06 [S] + +``` + +##### 上传你的公钥到密钥服务器 + +你的密钥创建已完成,因此现在需要你将其上传到一个公共密钥服务器,使其他人能更容易找到密钥。 (如果你不打算实际使用你创建的密钥,请跳过这一步,因为这只会在密钥服务器上留下垃圾数据。) +``` +$ gpg --send-key [fpr] + +``` + +如果此命令不成功,你可以尝试指定一台密钥服务器以及端口,这很有可能成功: +``` +$ gpg --keyserver hkp://pgp.mit.edu:80 --send-key [fpr] + +``` + +大多数密钥服务器彼此进行通信,因此你的密钥信息最终将与所有其他密钥信息同步。 + +**关于隐私的注意事项:**密钥服务器是完全公开的,因此在设计上会泄露有关你的潜在敏感信息,例如你的全名、昵称以及个人或工作邮箱地址。如果你签名了其他人的钥匙或某人签名你的钥匙,那么密钥服务器还会成为你的社交网络的泄密者。一旦这些个人信息发送给密钥服务器,就不可能编辑或删除。即使你撤销签名或身份,它也不会将你的密钥记录删除,它只会将其标记为已撤消 - 这甚至会显得更突出。 + +也就是说,如果你参与公共项目的软件开发,以上所有信息都是公开记录,因此通过密钥服务器另外让这些信息可见,不会导致隐私的净损失。 + +###### 上传你的公钥到 GitHub + +如果你在开发中使用 GitHub(谁不是呢?),则应按照他们提供的说明上传密钥: + +要生成适合粘贴的公钥输出,只需运行: +``` +$ gpg --export --armor [fpr] + +``` + +##### 设置一个刷新定时任务 + +你需要定期刷新你的 keyring,以获取其他人公钥的最新更改。你可以设置一个定时任务来做到这一点: +``` +$ crontab -e + +``` + +在新行中添加以下内容: +``` +@daily /usr/bin/gpg2 --refresh >/dev/null 2>&1 + +``` + +**注意:**检查你的 gpg 或 gpg2 命令的完整路径,如果你的 gpg 是旧式的 GnuPG v.1,请使用 gpg2。 + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-3-generating-pgp-subkeys + +作者:[Konstantin Ryabitsev][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/mricon +[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools +[2]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key diff --git a/translated/tech/20180301 Linux LAN Routing for Beginners- Part 2.md b/translated/tech/20180301 Linux LAN Routing for Beginners- Part 2.md new file mode 100644 index 0000000000..d1adc33134 --- /dev/null +++ b/translated/tech/20180301 Linux LAN Routing for Beginners- Part 2.md @@ -0,0 +1,118 @@ +Linux 局域网路由新手指南:第 2 部分 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dortmund-hbf-1259559_1920.jpg?itok=mdkNQRkS) + +上周 [我们学习了 IPv4 地址][1] 和如何使用管理员不可或缺的工具 —— ipcalc,今天我们继续学习更精彩的内容:局域网路由器。 + +VirtualBox 和 KVM 是测试路由的好工具,在本文中的所有示例都是在 KVM 中执行的。如果你喜欢使用物理硬件去做测试,那么你需要三台计算机:一台用作路由器,另外两台用于表示两个不同的网络。你也需要两台以太网交换机和相应的线缆。 + +我们假设示例是一个有线以太局域网,为了更符合真实使用场景,我们将假设有一些桥接的无线接入点,当然我并不会使用这些无线接入点做任何事情。(我也不会去尝试所有的无线路由器,以及使用一个移动宽带设备连接到以太网的局域网口进行混合组网,因为它们需要进一步的安装和设置) + +### 网段 + +最简单的网段是两台计算机连接在同一个交换机上的相同地址空间中。这样两台计算机不需要路由器就可以相互通讯。这就是我们常说的术语 —— “广播域”,它表示所有在相同的网络中的一组主机。它们可能连接到一台单个的以太网交换机上,也可能是连接到多台交换机上。一个广播域可以包括通过以太网桥连接的两个不同的网络,通过网桥可以让两个网络像一个单个网络一样运转。无线访问点一般是桥接到有线以太网上。 + +一个广播域仅当在它们通过一台网络路由器连接的情况下,才可以与不同的广播域进行通讯。 + +### 简单的网络 + +以下示例的命令并不是永久生效的,重启之后你所做的改变将会消失。 + +一个广播域需要一台路由器才可以与其它广播域通讯。我们使用两台计算机和 `ip` 命令来解释这些。我们的两台计算机是 192.168.110.125 和 192.168.110.126,它们都插入到同一台以太网交换机上。在 VirtualBox 或 KVM 
中,当你配置一个新网络的时候会自动创建一个虚拟交换机,因此,当你分配一个网络到虚拟虚拟机上时,就像是插入一个交换机一样。使用 `ip addr show` 去查看你的地址和网络接口名字。现在,这两台主机可以互 ping 成功。 + +现在,给其中一台主机添加一个不同网络的地址: +``` +# ip addr add 192.168.120.125/24 dev ens3 + +``` + +你可以指定一个网络接口名字,在示例中它的名字是 ens3。这不需要去添加一个网络前缀,在本案例中,它是 /24,但是显式地添加它并没有什么坏处。你可以使用 `ip` 命令去检查你的配置。下面的示例输出为了清晰其见进行了删减: +``` +$ ip addr show +ens3: + inet 192.168.110.125/24 brd 192.168.110.255 scope global dynamic ens3 + valid_lft 875sec preferred_lft 875sec + inet 192.168.120.125/24 scope global ens3 + valid_lft forever preferred_lft forever + +``` + +主机在 192.168.120.125 上可以 ping 它自己(`ping 192.168.120.125`),这是对你的配置是否正确的一个基本校验,这个时候第二台计算机就已经不能 ping 通那个地址了。 + +现在我们需要做一些网络变更。添加第三台主机作为路由器。它需要两个虚拟网络接口并添加第二个虚拟网络。在现实中,你的路由器必须使用一个静态 IP 地址,但是现在,我们可以让 KVM 的 DHCP 服务器去为它分配地址,所以,你仅需要两个虚拟网络: + + * 第一个网络:192.168.110.0/24 + * 第二个网络:192.168.120.0/24 + + + +接下来你的路由器必须配置去转发数据包。数据包转发默认是禁用的,你可以使用 `sysctl` 命令去检查它的配置: +``` +$ sysctl net.ipv4.ip_forward +net.ipv4.ip_forward = 0 + +``` + +0 意味着禁用,使用如下的命令去启用它: +``` +# echo 1 > /proc/sys/net/ipv4/ip_forward + +``` + +接下来配置你的另一台主机做为第二个网络的一部分,你可以通过将原来在 192.168.110.0/24 的网络中的一台主机分配到 192.168.120.0/24 虚拟网络中,然后重新启动两个 “网络” 主机,注意不是路由器。(或者重启动网络;我年龄大了还有点懒,我记不住那些重启服务的奇怪命令,还不如重启网络来得干脆。)重启后各台机器的地址应该如下所示: + + * 主机 1: 192.168.110.125 + * 主机 2: 192.168.120.135 + * 路由器: 192.168.110.126 and 192.168.120.136 + + + +现在可以去随意 ping 它们,可以从任何一台计算机上 ping 到任何一台其它计算机上。使用虚拟机和各种 Linux 发行版做这些事时,可能会产生一些意想不到的问题,因此,有时候 ping 的通,有时候 ping 不通。不成功也是一件好事,这意味着你需要动手去创建一条静态路由。首先,查看已经存在的路由表。主机 1 和主机 2 的路由表如下所示: +``` +$ ip route show +default via 192.168.110.1 dev ens3 proto static metric 100 +192.168.110.0/24 dev ens3 proto kernel scope link src 192.168.110.164 metric 100 + +$ ip route show +default via 192.168.110.1 dev ens3 proto static metric 100 +default via 192.168.120.1 dev ens3 proto static metric 101 +169.254.0.0/16 dev ens3 scope link metric 1000 +192.168.110.0/24 dev ens3 proto kernel scope link + src 192.168.110.126 metric 100 +192.168.120.0/24 dev ens9 proto kernel scope link + src 192.168.120.136 metric 100 + +``` + +这显示了我们使用的由 KVM 分配的缺省路由。169.* 地址是自动链接的本地地址,我们不去管它。接下来我们看两条路由,这两条路由指向到我们的路由器。你可以有多条路由,在这个示例中我们将展示如何在主机 1 上添加一个非默认路由: +``` +# ip route add 192.168.120.0/24 via 192.168.110.126 dev ens3 + +``` + +这意味着主机1 可以通过路由器接口 192.168.110.126 去访问 192.168.110.0/24 网络。看一下它们是如何工作的?主机1 和路由器需要连接到相同的地址空间,然后路由器转发到其它的网络。 + +以下的命令去删除一条路由: +``` +# ip route del 192.168.120.0/24 + +``` + +在真实的案例中,你不需要像这样手动配置一台路由器,而是使用一个路由器守护程序,并通过 DHCP 做路由器通告,但是理解基本原理很重要。接下来我们将学习如何去配置一个易于使用的路由器守护程序来为你做这些事情。 + +通过来自 Linux 基金会和 edX 的免费课程 ["Linux 入门" ][2] 来学习更多 Linux 的知识。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/3/linux-lan-routing-beginners-part-2 + +作者:[CARLA SCHRODER][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder +[1]:https://www.linux.com/learn/intro-to-linux/2018/2/linux-lan-routing-beginners-part-1 +[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20180306 Most Useful Linux Commands You Can Run in Windows 10.md b/translated/tech/20180306 Most Useful Linux Commands You Can Run in Windows 10.md new file mode 100644 index 0000000000..f0dc91b294 --- /dev/null +++ b/translated/tech/20180306 Most Useful Linux Commands You Can Run in Windows 10.md @@ -0,0 +1,144 @@ + +运行在 Windows 10 系统中绝对实用的 
在真实的案例中,你不需要像这样手动配置一台路由器,而是使用一个路由器守护程序,并通过 DHCP 做路由器通告,但是理解基本原理很重要。接下来我们将学习如何配置一个易于使用的路由器守护程序来为你做这些事情。

通过来自 Linux 基金会和 edX 的免费课程 [“Linux 入门”][2] 来学习更多 Linux 的知识。

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/3/linux-lan-routing-beginners-part-2

作者:[CARLA SCHRODER][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/learn/intro-to-linux/2018/2/linux-lan-routing-beginners-part-1
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

diff --git a/translated/tech/20180306 Most Useful Linux Commands You Can Run in Windows 10.md b/translated/tech/20180306 Most Useful Linux Commands You Can Run in Windows 10.md
new file mode 100644
index 0000000000..f0dc91b294
--- /dev/null
+++ b/translated/tech/20180306 Most Useful Linux Commands You Can Run in Windows 10.md
@@ -0,0 +1,144 @@

运行在 Windows 10 系统中绝对实用的 Linux 命令
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/wsl-commands.png?itok=91oEXdO8)

在本系列早先的文章中,我们讨论了如何在 [Windows 10 上开启 WSL 之旅][1]。作为本系列的最后一篇文章,我们准备探讨一些可以在 Windows 10 上广泛使用的 Linux 命令。

在深入话题之前,请先让我们明确本教程所适用的人群:本文面向使用 Windows 10 系统的初级开发者,帮助他们了解运行在 Azure、AWS 或私有云上的 Linux 平台。换句话说,就是为了帮助初次接触 Linux 系统的 Windows 10 用户。

您需要哪些命令取决于您自己的工作,我的需求可能和您的不一样。本文旨在帮助您在 Windows 10 上舒服地使用 Linux。不过请牢记,WSL 并不提供访问声卡、GPU 等硬件的功能,至少官方是这么描述的。但这可能并不能阻止 Linux 用户的折腾精神:很多用户不仅实现了硬件访问,甚至已经在 Windows 10 上安装了 Linux 桌面程序。本文不会涉及这些内容,我们也许以后会讨论它们,但不是现在。

下面是我们需要着手的任务。

### 如何让您的 Linux 系统保持最新

因为 Linux 是运行在 Windows 系统之中的,所以您将失去 Linux 系统本身提供的各种安全防护。另外,如果不及时给 Linux 系统打补丁,您的 Windows 设备也会暴露在这些威胁之中,所以还请保持您的 Linux 系统最新。

WSL 官方支持 openSUSE、SUSE Linux Enterprise 和 Ubuntu。您也可以安装其他发行版,但是我只需要它们当中的两个就可以完成我的所有工作,毕竟,我只需要访问一些 Linux 基础程序。

**更新 openSUSE Leap:**

```
sudo zypper up
```

如果您想将系统升级到新的大版本,可以运行下面的命令:

```
sudo zypper dup
```

**更新 Ubuntu:**

```
sudo apt-get update
sudo apt-get dist-upgrade
```

由于 Linux 系统的更新是渐进式的,更新系统成为了我的日常。Windows 10 的更新通常需要重启系统,而 Linux 不同,一般更新只有 KB 或 MB 级,也无需重启。

### 管理文件目录

除了保持系统更新之外,还有一些琐碎但同样重要的任务。

其中第二重要的就是使用 Linux 管理本地和远程文件。我承认我更青睐图形界面程序,但是终端能提供更可靠、更有价值的服务。试试用 Explorer 移动 1 TB 的文件?祝你好运。对于大量数据,我通常使用 rsync 命令来移动,它的一大好处是:即使任务中断,rsync 也可以从上次停止的地方继续工作。

虽然您可能更习惯使用 cp 或 mv 命令复制、移动文件,但是我还是喜欢灵活的 rsync 命令,了解 rsync 对远程文件传输也有帮助。我使用 rsync 大半是为了完成下面三个任务:

**使用 rsync 复制整个目录:**

```
rsync -avzP /source-directory /destination-directory
```

**使用 rsync 移动文件:**

```
rsync --remove-source-files -avzP /source-directory /destination-directory
```

此命令会在文件成功复制到目标目录之后,删除源目录中的文件。

**使用 rsync 同步文件:**

我的文件在多处都有存储副本,但我只会在主要位置中增加或删除文件。如果不借助专门的软件,在多个位置之间同步文件会是一个挑战,而 rsync 刚好可以简化这个过程。下面这个命令可以让两个目录的内容保持一致,留个印象,也许用得上。

```
rsync --delete -avzP /source-directory /destination-directory
```

如果某个文件在源目录中不存在,上述命令会把它从目标目录中删除,从而让目标目录成为源目录的一个镜像。

### 文件自动备份

备份文件是一项例行工作。为了让我的设备保持完全同步,我运行了一个 cron 作业,在夜间同步我的所有目录。我还保留了一个外部驱动器,每周手动同步一次。因为 `--delete` 选项可能删掉我不想删除的文件,所以这里我没有使用它。

**创建 cron 作业,打开 crontab:**

```
crontab -e
```

移动大文件时,我会选择在系统空闲的深夜执行该命令。下面这个作业将在每天凌晨 1 点运行,您大概可以这样写:

```
0 1 * * * rsync -avzP /source-directory /destination-directory
```

(注意,crontab 中以 `#` 开头的行是注释,不会被执行,所以真正要运行的作业行开头不要加 `#`。)这是 crontab 定时作业的字段结构:

```
# m h dom mon dow command
```

在此,m = 分钟,h = 小时,dom = 本月的某天,mon = 月,dow = 本周的某天。

我们将在每天凌晨 1 点运行这条命令。您也可以按 dow 或 dom(比如,每月 5 号)等来安排。您可以在 [这里][2] 阅读更多相关内容。

### 管理远程服务器

在 Windows 系统上使用 WSL 的优势之一,就是能用原生的 Linux 工具方便地管理远程 Linux 服务器。首先,您需要使用 ssh 命令登录远程 Linux 服务器。

比如,我的服务器 IP 是 192.168.0.112,端口为 2018(不是默认的 22 端口),Linux 用户名是 swapnil,密码是 `就不告诉你`。

```
ssh -p 2018 swapnil@192.168.0.112
```

它会向您请求用户密码,然后您就可以登录到 Linux 服务器了。现在您可以执行任何您想执行的操作,并且不需要额外的 PuTTY 程序。

使用 rsync,您可以很轻易地在本地和远程设备之间传输文件。根据您是上传文件到服务器,还是下载文件到本地,把 `username@IP` 写在源路径或目标路径的位置即可。

如果我想把本地的一些文件复制到服务器的 home 目录下,命令如下(rsync 通过 `-e` 选项指定带自定义端口的 ssh 作为远程 shell):

```
rsync -avzP -e 'ssh -p 2018' /source-directory-on-local-machine swapnil@192.168.0.112:/home/swapnil/Documents/
```

这会把这些文件复制到远程服务器的 `Documents` 目录中。
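再补充一个小技巧(这不是原文内容,其中的主机别名 `myserver` 是假设的):如果不想每次都输入端口号和用户名,可以把它们写进 `~/.ssh/config`,ssh 和 rsync 都会自动读取这些设置:

```
# ~/.ssh/config(主机别名 myserver 是假设的)
Host myserver
    HostName 192.168.0.112
    Port 2018
    User swapnil
```

之后,登录和传输命令都可以简写成:

```
ssh myserver
rsync -avzP /source-directory-on-local-machine myserver:/home/swapnil/Documents/
```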
### 总结

本教程主要是为了证明:您可以通过 Windows 10 的 WSL 完成很大一部分 Linux 系统上的任务,而且通常来说,它能提高生产效率。现在,Linux 的世界已经向 Windows 10 系统敞开怀抱了,尽情探索吧。如果您有任何疑问,或是想了解 WSL 涉及的其他层面,欢迎在下方的评论区分享您的想法。

在 [Administering Linux on Azure (LFS205)][4] 课程中了解更多,可以在 [这里][5] 注册。

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/2018/3/most-useful-linux-commands-you-can-run-windows-10

作者:[SWAPNIL BHARTIYA][a]
译者:[CYLeft](https://github.com/CYLeft)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[2]:http://www.adminschoice.com/crontab-quick-reference
[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/administering-linux-on-azure
[5]:http://bit.ly/2FpFtPg

diff --git a/translated/tech/20180307 Host your own email with projectx-os and a Raspberry Pi.md b/translated/tech/20180307 Host your own email with projectx-os and a Raspberry Pi.md
new file mode 100644
index 0000000000..ef93bc51c7
--- /dev/null
+++ b/translated/tech/20180307 Host your own email with projectx-os and a Raspberry Pi.md
@@ -0,0 +1,56 @@

使用一个树莓派和 projectx/os 托管你自己的电子邮件
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2)

不把数据存储和服务运行委托给第三方公司,理由有很多:隐私、所有权,以及防范任何人拿你的数据去“赚钱”。但对大多数人来说,自己运行服务器既费时间,又需要太多专业知识。不得已,我们只能妥协:抛开这些顾虑,使用某些公司的云服务,随之而来的就是广告、数据挖掘和倒卖,以及其它种种可能。

[projectx/os][1] 项目就是要去除这种顾虑,它可以让你在家里毫不费力地托管服务,并且可以很容易地创建一个类似于 Gmail 的帐户。实现上述目标,你只需一个 $35 的树莓派 3 和一个基于 Debian 的操作系统镜像,并且不需要很多专业知识。仅需四步就可以实现:

  1. 解压缩一个 ZIP 文件到 SD 存储卡中。
  2. 编辑 SD 卡上的一个文本文件,以便让它连接你的 WiFi(如果你不使用有线网络的话)。
  3. 将这个 SD 卡插到树莓派 3 中。
  4. 使用你的智能手机在树莓派 3 上安装“电子邮件服务器”应用并选择一个子域名。

服务器应用程序(比如电子邮件服务器)被分解到多个容器中,每个容器都只能以指定的方式与外界通讯,这种管理粒度非常细的隔离措施提高了安全性。例如,入站 SMTP、[SpamAssassin][2](反垃圾邮件平台)、[Dovecot][3](安全 IMAP 服务器)以及 webmail 都使用独立的容器,它们之间互相看不到对方的数据,因此单个守护进程出现问题不会波及其它进程。

另外,像 SpamAssassin 和入站 SMTP 这样的容器都是无状态的,每处理完一封电子邮件,它们就会被拆除并重建。因此,即便有人找到了漏洞并加以利用,他们也不能访问以前的电子邮件或者之后的电子邮件,只能访问他们自己挖掘出漏洞的那封。幸运的是,大多数对外暴露、最容易受到攻击的服务都是隔离且无状态的。

所有存储的数据都使用 [dm-crypt][4] 进行加密。非公开的服务,比如 Dovecot(IMAP)或者 webmail,都只在内部监听,并使用 [ZeroTier One][5] 加密整个网络,因此只有你的设备(智能手机、笔记本电脑、平板等等)才能访问它们。
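作为补充,下面示意一下客户端设备加入这种 ZeroTier 私有网络时大概的操作(这不是 projectx/os 的安装步骤,只是 ZeroTier 官方客户端的常见用法,其中的 16 位网络 ID 是假设的占位符):

```
# 在笔记本电脑等客户端设备上加入 ZeroTier 网络(网络 ID 是假设的)
sudo zerotier-cli join 1234567890abcdef

# 查看已加入的网络以及分配到的虚拟网络地址
sudo zerotier-cli listnetworks
```

设备加入网络并在网络控制器上获得授权之后,才能访问到内部监听的 Dovecot、webmail 这类服务。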
虽然电子邮件并不是端到端加密的(除非你使用 [PGP][6]),但电子邮件绝不会以明文形式在网络上传输,也绝不会以明文形式存储在磁盘上。如今明文的电子邮件只存在于收发双方的私有邮件服务器上:它们放在各自家中,受到很好的保护,并且只能通过他们自己的客户端(智能手机、笔记本电脑、平板等等)访问。

另一个好处是,个人设备只需要用一个密码保护(而不是指纹或者其它生物识别技术),而且放在你家中的设备受到美国 [宪法第四修正案][7] 的保护,比起公司所有的第三方数据中心,它们受到的法律保护更强。当然,如果你的电子邮件发给的是 Gmail 用户,Google 还是会保存这些邮件的拷贝。

### 展望

电子邮件是我使用 projectx/os 打包的第一个应用程序。想象一下,一个应用程序商店里有全部的服务器软件,它们都被打包好,易于安装和使用。想要一个博客?添加一个 WordPress 应用程序!想要一个安全的 Dropbox 替代品?添加一个 [Seafile][8] 应用程序或者一个 [Syncthing][9] 后端应用程序。[IPFS][10] 节点?[Mastodon][11] 实例?GitLab 服务器?各种家庭自动化/物联网后端服务?这里有大量优秀的开源服务器软件,它们都非常易于安装,并且可以用来替换那些专有的云服务。

Nolan Leake 的演讲 [在每个家庭中都有一个云:0 系统管理员技能就可以在家里托管服务器][12] 将于 3 月 8 日至 11 日在 Southern California Linux Expo 上进行。[注册][13] 时使用折扣代码 **OSDC** 可以半价购票。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/host-your-own-email

作者:[Nolan Leake][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/nolan
[1]:https://git.sigbus.net/projectx/os
[2]:http://spamassassin.apache.org/
[3]:https://www.dovecot.org/
[4]:https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt
[5]:https://www.zerotier.com/download.shtml
[6]:https://en.wikipedia.org/wiki/Pretty_Good_Privacy
[7]:https://simple.wikipedia.org/wiki/Fourth_Amendment_to_the_United_States_Constitution
[8]:https://www.seafile.com/en/home/
[9]:https://syncthing.net/
[10]:https://ipfs.io/
[11]:https://github.com/tootsuite/mastodon
[12]:https://www.socallinuxexpo.org/scale/16x/presentations/cloud-every-home-host-servers-home-0-sysadmin-skills
[13]:https://register.socallinuxexpo.org/reg6/

diff --git a/translated/tech/20180308 How to set up a print server on a Raspberry Pi.md b/translated/tech/20180308 How to set up a print server on a Raspberry Pi.md
new file mode 100644
index 0000000000..0b53281d7f
--- /dev/null
+++ b/translated/tech/20180308 How to set up a print server on a Raspberry Pi.md
@@ -0,0 +1,87 @@

如何将树莓派配置为打印服务器
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2)

我喜欢在家做一些小项目,因此,今年我选择使用一个 [树莓派 3 Model B][1],这是一款非常适合像我这样的业余爱好者的设备。借助树莓派 3 Model B 的无线功能,我不需要网线就能把树莓派接入我的家庭网络,这样就可以很方便地把它放到任何需要的地方。

在家里,我和我的妻子都使用笔记本电脑,但是我们只有一台打印机:一台使用得并不频繁的 HP 彩色激光打印机。我们的打印机没有内置无线网卡,不能直接连到无线网络中;一般情况下,使用我的笔记本电脑时,我并不连接打印机,因为我做的大多数工作并不需要打印。虽然这种安排在大多数时间都没有问题,但有时候,我的妻子想在不“麻烦”我的情况下,自己去打印一些东西。

### 基本设置

我觉得我们需要一个把打印机连接到无线网络的解决方案,以便于我们随时随地都能打印。我本想买一个无线打印服务器,把我的 USB 打印机连接到家里的无线网络上;后来,我决定用我的树莓派搭建一个打印服务器,让家里的每个人随时都可以打印。

设置树莓派是非常简单的事。我下载了 [Raspbian][2] 镜像,并将它写入到我的 microSD 卡中。然后,把卡插到接好了 HDMI 显示器、USB 键盘和 USB 鼠标的树莓派上,引导它。之后,我们就可以开始设置了!
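上面提到要把 Raspbian 镜像写入 microSD 卡,原文没有给出具体命令,下面是在 Linux 上常见做法的一个简单示意(镜像文件名 raspbian.img 和设备名 /dev/sdX 都是假设的;写错设备会毁掉上面的数据,请务必先确认):

```
# 先用 lsblk 确认 microSD 卡对应的设备名(这里假设是 /dev/sdX)
lsblk

# 把镜像写入 microSD 卡(镜像文件名是假设的)
sudo dd if=raspbian.img of=/dev/sdX bs=4M status=progress conv=fsync
```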
这个树莓派系统会自动引导到图形桌面,然后我做了一些基本设置:设置键盘语言、连接无线网络、设置普通用户帐户(`pi`)的密码、设置管理员用户(`root`)的密码。

我并不打算让树莓派运行在桌面环境下,我一般是从我的普通 Linux 计算机上远程使用它。因此,我用树莓派的图形化管理工具把它设置为引导到控制台模式,并且不以 `pi` 用户自动登录。

重新启动树莓派之后,我还需要做一些其它的系统小调整,以便把树莓派当作家庭网络中的“服务器”使用。我给它设置了静态 IP 地址;默认情况下,DHCP 客户端可能任选一个可用的网络地址,这样我就不知道该用哪个地址连接树莓派了。我的家庭网络使用一个私有的 A 类地址段,路由器的 IP 地址是 10.0.0.1,全部可用的 IP 地址是 10.0.0.x。在我的案例中,低位的 IP 地址是安全可用的,因此,我在 `/etc/dhcpcd.conf` 中添加如下几行,让它的无线网络接口使用 10.0.0.11 这个静态地址:

```
interface wlan0
static ip_address=10.0.0.11/24
static routers=10.0.0.1
static domain_name_servers=8.8.8.8 8.8.4.4
```

在再次重启之前,我需要确认安全 shell 守护进程(SSHD)已经正常运行(你可以在“偏好”中设置哪些服务在引导时启动)。这样我就可以通过网络,使用 SSH 从普通的 Linux 系统连接到树莓派了。

### 打印设置

现在,我的树莓派已经在网络上正常工作了,我通过 SSH 从我的 Linux 电脑远程连接它,接着做剩余的设置。在继续之前,请确保你的打印机已经连接到树莓派上。

设置打印机很容易。现代的打印服务器软件是 CUPS,即通用 Unix 打印系统(Common Unix Printing System),任何较新的 Unix 系统都可以通过它来打印。要在树莓派上设置 CUPS 打印服务器,你只需要几条命令:安装 CUPS 软件,允许其它系统使用打印服务,然后用新配置重启打印服务器:

```
$ sudo apt-get install cups
$ sudo cupsctl --remote-any
$ sudo /etc/init.d/cups restart
```

在 CUPS 中设置打印机也非常简单,你可以通过一个 Web 界面来完成。CUPS 的监听端口是 631,因此用浏览器访问这个地址:

```
https://10.0.0.11:631/
```

你的 Web 浏览器可能会弹出警告,因为它不认可这个 Web 服务器的 HTTPS 证书;选择“接受”,然后以管理员用户登录系统,你将看到如下的标准 CUPS 面板:

![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-1-home.png?itok=t9OFJgSX)

这时候,导航到管理标签,选择 “Add Printer”。

![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-2-administration.png?itok=MlEINoYC)

如果打印机已经通过 USB 连接,你只需要简单地选择这个打印机和它的型号。不要忘记勾选共享这台打印机的选择框,因为其它人也要使用它。现在,你的打印机已经在 CUPS 中设置好了。

![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-3-printer.png?itok=N5upmhE7)
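如果你更喜欢命令行,也可以用 CUPS 自带的工具来验证打印机状态(以下是补充示例,并非原文步骤;打印机名 HP_LaserJet 是假设的,请替换成 `lpstat` 输出中的实际名字):

```
# 列出已配置的打印机和默认打印机
$ lpstat -p -d

# 往指定打印机发送一个测试打印任务(打印机名是假设的)
$ lp -d HP_LaserJet /etc/hosts
```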
### 客户端设置

从 Linux 中设置一台网络打印机非常简单。我的桌面环境是 GNOME,你可以从 GNOME 的“设置”应用程序中添加网络打印机:只需要进入“设备和打印机”,解锁这个面板,然后点击 “Add” 按钮添加打印机。

在我的系统中,GNOME 的设置会自动发现网络打印机并添加它。如果你的系统没有自动发现,你可以通过树莓派的 IP 地址,手动添加打印机。

![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gnome-settings-printers.png?itok=NOQLTaLs)

设置到此为止!我们现在已经可以通过家中的无线网络来使用这台打印机了。我不再需要物理连接到这台打印机,家里的任何人都可以使用它了!

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/print-server-raspberry-pi

作者:[Jim Hall][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jim-hall
[1]:https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
[2]:https://www.raspberrypi.org/downloads/

diff --git a/translated/tech/20180309 How to check your network connections on Linux.md b/translated/tech/20180309 How to check your network connections on Linux.md
new file mode 100644
index 0000000000..1e6923b56f
--- /dev/null
+++ b/translated/tech/20180309 How to check your network connections on Linux.md
@@ -0,0 +1,98 @@

如何在 Linux 上检查您的网络连接
====== 

![](https://images.idgesg.net/images/article/2018/03/network-connections-100751906-large.jpg)

**ip** 命令可以告诉您很多网络连接配置和状态的信息,但是这些词和数字都意味着什么?让我们深入了解一下,看看输出中的这些值都在试图告诉您什么。

当您使用 `ip a`(或 `ip addr`)命令获取系统上所有网络接口的信息时,您将看到如下所示的内容:

```
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:1e:4f:c8:43:fc brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.24/24 brd 192.168.0.255 scope global dynamic enp0s25
       valid_lft 57295sec preferred_lft 57295sec
    inet6 fe80::2c8e:1de0:a862:14fd/64 scope link
       valid_lft forever preferred_lft forever
```

这个系统上的两个接口,即环回接口(lo)和网络接口(enp0s25),显示了很多统计数据。“lo” 接口显然是环回接口,我们可以在列表中看到环回的 IPv4 地址(127.0.0.1)和环回的 IPv6 地址(**::1**)。正常的网络接口则更有趣。

### 为什么是 enp0s25 而不是 eth0

如果你想知道为什么它在这个系统上被称为 **enp0s25**,而不是可能更熟悉的 **eth0**,那我们可以稍微解释一下。

新的命名方案被称为“可预测的网络接口名称”,它已经在基于 systemd 的 Linux 系统上使用了一段时间。接口名称取决于硬件的物理位置:“**en**” 代表的就是“以太网(ethernet)”,就像 eth0 中的 “eth” 一样;“**p**” 后面是以太网卡的总线编号,“**s**” 后面是插槽编号。所以 “enp0s25” 告诉了我们不少正在使用的硬件的信息。

接口名后面尖括号里的 <BROADCAST,MULTICAST,UP,LOWER_UP> 这串配置告诉我们:

```
BROADCAST   该接口支持广播
MULTICAST   该接口支持多播
UP          网络接口已启用
LOWER_UP    网络电缆已插入,设备已连接至网络
mtu 1500    最大传输单元(数据包大小)为 1,500 字节
```

列出的其他值也告诉了我们很多关于接口的信息,但我们需要知道 “brd” 和 “qlen” 这些词代表什么意思。下面是上面展示的 **ip** 输出的其余部分的含义对照:

```
mtu 1500                             最大传输单元(数据包大小)为 1,500 字节
qdisc pfifo_fast                     用于数据包排队的队列规则
state UP                             网络接口已启用
group default                        接口组
qlen 1000                            传输队列长度
link/ether 00:1e:4f:c8:43:fc         接口的 MAC(硬件)地址
brd ff:ff:ff:ff:ff:ff                广播地址
inet 192.168.0.24/24                 IPv4 地址
brd 192.168.0.255                    广播地址
scope global                         全局有效
dynamic enp0s25                      地址是动态分配的
valid_lft 57295sec                   IPv4 地址的有效使用期限
preferred_lft 57295sec               IPv4 地址的首选生存期
inet6 fe80::2c8e:1de0:a862:14fd/64   IPv6 地址
scope link                           仅在此设备上有效
valid_lft forever                    IPv6 地址的有效使用期限
preferred_lft forever                IPv6 地址的首选生存期
```

您可能已经注意到,ifconfig 命令提供的一些信息未包含在 **ip a** 命令的输出中,例如传输数据包的统计信息。如果您想查看发送和接收的数据包数量以及冲突次数的列表,可以使用以下 ip 命令:

```
$ ip -s link show enp0s25
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:1e:4f:c8:43:fc brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    224258568  418718   0       0       0       84376
    TX: bytes  packets  errors  dropped carrier collsns
    6131373    78152    0       0       0       0
```

另一个 **ip** 命令可以提供有关系统路由表的信息:

```
$ ip route show
default via 192.168.0.1 dev enp0s25 proto static metric 100
169.254.0.0/16 dev enp0s25 scope link metric 1000
192.168.0.0/24 dev enp0s25 proto kernel scope link src 192.168.0.24 metric 100
```

**ip** 命令的用途非常广泛。您可以从 [Red Hat][1] 获得一份关于 **ip** 命令及其选项的实用备忘单。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3262045/linux/checking-your-network-connections-on-linux.html

作者:[Sandra Henry-Stocker][a]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://access.redhat.com/sites/default/files/attachments/rh_ip_command_cheatsheet_1214_jcs_print.pdf