published/20150708 Choosing a Linux Tracer (2015).md
Linux 跟踪器之选
======

[![][1]][2]

> Linux 跟踪很神奇!

<ruby>跟踪器<rt>tracer</rt></ruby>是一种高级的性能分析和调试工具,如果你使用过 `strace(1)` 或者 `tcpdump(8)`,你就不应该被它吓到……因为你使用的就是跟踪器。系统跟踪器能让你看到很多的东西,而不仅是系统调用或者数据包,因为常见的跟踪器都可以跟踪内核或者应用程序的任何东西。

有大量的 Linux 跟踪器可供你选择。由于它们中的每个都有一个官方的(或者非官方的)吉祥物,我们有足够多的选择给孩子们展示。

你喜欢使用哪一个呢?

我从两类读者的角度来回答这个问题:大多数人和性能/内核工程师。当然,随着时间的推移,这也可能会发生变化,因此,我需要及时去更新本文内容,或许是每年一次,或者更频繁。(LCTT 译注:本文最后更新于 2015 年)

### 对于大多数人

大多数人(开发者、系统管理员、运维人员、网站可靠性工程师(SRE)……)是不需要去学习系统跟踪器的底层细节的。以下是你需要去了解和做的事情:

#### 1. 使用 perf_events 进行 CPU 剖析

可以使用 perf_events 进行 CPU <ruby>剖析<rt>profiling</rt></ruby>,并把结果可视化为一张 [火焰图][3]。比如:

```
git clone --depth 1 https://github.com/brendangregg/FlameGraph
perf record -F 99 -a -g -- sleep 30
perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > perf.svg
```



Linux 的 perf_events(即 `perf`,后者是它的命令)是官方为 Linux 用户准备的跟踪器/分析器。它位于内核源码中,并且维护得非常好(而且现在它的功能还在快速增强)。它一般是通过 linux-tools-common 这个包来安装的。

`perf` 可以做的事情很多,但是,如果我只能建议你学习其中的一个功能的话,那就是 CPU 剖析。虽然从技术角度来说,这并不是事件“跟踪”,而是<ruby>采样<rt>sampling</rt></ruby>。最难的部分是获得完整的栈和符号,这部分我在 [Linux Profiling at Netflix][4] 中针对 Java 和 Node.js 讨论过。
#### 2. 知道它能干什么

正如一位朋友所说的:“你不需要知道 X 光机是如何工作的,但你需要明白的是,如果你吞下了一个硬币,X 光机是你的一个选择!”你需要知道使用跟踪器能够做什么,因此,如果你在业务上确实需要它,你可以以后再去学习它,或者请会使用它的人来做。

简单地说:几乎任何事情都可以通过跟踪来了解,比如文件系统内部、TCP/IP 处理过程、设备驱动、应用程序内部情况。可以阅读我在 lwn.net 上的 [ftrace][5] 的文章,也可以去浏览 [perf_events 页面][6],那里有一些跟踪(和剖析)能力的示例。

#### 3. 需要一个前端工具

如果你要购买一个性能分析工具(有许多公司销售这类产品),请要求它支持 Linux 跟踪。你会想要一个直观的“点击”界面去探查内核的内部,以及一个显示不同栈位置延迟的热力图,就像我在 [Monitorama 演讲][7] 中描述的那样。

我创建并开源了我自己的一些前端工具,虽然它们是基于 CLI 的(不是图形界面的),这样可以让其他人更快更容易地使用跟踪器。比如,我的 [perf-tools][8],跟踪新进程是这样的:

```
# ./execsnoop
Tracing exec()s. Ctrl-C to end.
   PID   PPID ARGS
 22898  22004 man ls
 22905  22898 preconv -e UTF-8
 22908  22898 pager -s
 22907  22898 nroff -mandoc -rLL=164n -rLT=164n -Tutf8
[...]
```

在 Netflix 公司,我正在开发 [Vector][9],它是一个实例分析工具,实际上它也是一个 Linux 跟踪器的前端。

### 对于性能或者内核工程师

一般来说,我们的工作都非常难,因为大多数人或许会要求我们去搞清楚如何去跟踪某个事件,并由此选择使用哪个跟踪器。为了完全理解一个跟踪器,你通常需要花至少一百多个小时去使用它。因此,理解所有的 Linux 跟踪器并能在它们之间做出正确的选择是件很难的事情。(我或许是唯一接近完成这件事的人。)

在这里我的建议如下,要么:

A)选择一个全能的跟踪器,并以它为标准。这需要在一个测试环境中花大量的时间来搞清楚它的细微差别和安全性。我现在的建议是 SystemTap 的最新版本(例如,从 [源代码][10] 构建)。我知道有的公司选择的是 LTTng,尽管它的功能没那么强大(但是它很安全),他们也用得很好。如果 `sysdig` 加入了对跟踪点或者 kprobes 的支持,它也可以作为另外一个候选者。

B)按我的 [Velocity 教程][11] 中的流程图来选择。这意味着尽可能先使用 ftrace 或者 perf_events,等 eBPF 集成到内核之后再使用它,然后才是其它的跟踪器,如 SystemTap/LTTng,作为对 eBPF 的补充。我目前在 Netflix 的工作中就是这么做的。



以下是我对各个跟踪器的评价:

#### 1. ftrace

我爱 [ftrace][12],它是内核黑客最好的朋友。它被构建进内核中,能够利用跟踪点、kprobes 以及 uprobes,提供一些功能:使用可选的过滤器和参数进行事件跟踪;事件计数和计时,内核概览;<ruby>函数流步进<rt>function-flow walking</rt></ruby>。关于它的示例可以查看内核源代码树中的 [ftrace.txt][13]。它通过 `/sys` 来管理,是面向单一的 root 用户的(虽然你可以使用缓冲区实例让其支持多用户),它的界面有时很繁琐,但是它比较容易<ruby>调校<rt>hackable</rt></ruby>,并且有前端:ftrace 的主要创建者 Steven Rostedt 设计了 trace-cmd,而我也创建了 perf-tools 集合。我最诟病的就是它不是<ruby>可编程的<rt>programmable</rt></ruby>,因此,举个例子说,你不能保存和获取时间戳、计算延迟,以及将其保存为直方图。你需要转储事件到用户级以便于进行后期处理,这需要花费一些成本。它也许可以通过 eBPF 实现可编程。
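ftrace 的上述能力可以通过它的 tracefs 文件接口直观感受。下面是一个统计内核函数调用次数的小示例(仅为示意,假设以 root 身份运行、内核开启了函数剖析器;在较老的内核上路径可能是 `/sys/kernel/debug/tracing`):

```shell
# 进入 tracefs 挂载点(老内核可能是 /sys/kernel/debug/tracing)
cd /sys/kernel/tracing

# 开启函数剖析器,统计每个内核函数被调用的次数
echo 1 > function_profile_enabled
sleep 10
echo 0 > function_profile_enabled

# 查看 CPU 0 上的统计结果(按开销排序的内核函数)
head trace_stat/function0
```

perf-tools 中的 funccount 等脚本,本质上就是对这类文件操作的封装。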
#### 2. perf_events

[perf_events][14] 是 Linux 用户的主要跟踪工具,它的源代码位于 Linux 内核中,一般是通过 linux-tools-common 包来安装的。它又称为 `perf`,后者指的是它的前端命令。它相当高效(动态缓冲),一般用于跟踪并转储到一个文件中(perf.data),然后在之后进行后期处理。它可以做大部分 ftrace 能做的事情。它不能进行函数流步进,并且不太容易调校(但它的安全/错误检查做得更好一些)。但它可以做剖析(采样)、CPU 性能计数、用户级的栈转换,以及利用<ruby>调试信息<rt>debuginfo</rt></ruby>进行带局部变量的<ruby>行级跟踪<rt>line tracing</rt></ruby>。它也支持多个并发用户。与 ftrace 一样,它也不是内核可编程的,除非有了 eBPF 支持(补丁已经在计划中)。如果只学习一个跟踪器,我建议大家去学习 perf,它可以解决大量的问题,并且它也相当安全。
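作为补充,下面用几条命令示意 perf 最常用的 CPU 剖析与事件计数流程(假设已安装 linux-tools,并有相应权限):

```shell
# 以 99 Hz 的频率在所有 CPU 上采样调用栈 10 秒,结果写入 perf.data
perf record -F 99 -a -g -- sleep 10

# 对 perf.data 做后期处理,输出按开销排序的调用栈报告
perf report --stdio

# 统计 10 秒内全系统的上下文切换与 CPU 迁移次数
perf stat -a -e context-switches,cpu-migrations sleep 10
```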
#### 3. eBPF

<ruby>扩展的伯克利包过滤器<rt>extended Berkeley Packet Filter</rt></ruby>(eBPF)是一个<ruby>内核内<rt>in-kernel</rt></ruby>的虚拟机,可以在事件上运行程序,它非常高效(JIT)。它可能最终会为 ftrace 和 perf_events 提供<ruby>内核内编程<rt>in-kernel programming</rt></ruby>能力,并可以去增强其它跟踪器。它现在是由 Alexei Starovoitov 开发的,还没有完全集成,但是部分内核版本(比如 4.1)已经足以支持一些令人印象深刻的工具:比如,块设备 I/O 的<ruby>延迟热力图<rt>latency heat map</rt></ruby>。更多参考资料,请查阅 Alexei 的 [BPF 演示][15],和他的 [eBPF 示例][16]。

#### 4. SystemTap

[SystemTap][17] 是一个非常强大的跟踪器。它可以做任何事情:剖析、跟踪点、kprobes、uprobes(uprobes 就来自 SystemTap)、USDT、内核内编程等等。它将程序编译成内核模块并加载它们 —— 这是一种很难保证安全的方法。它的开发是在内核代码树之外进行的,并且在过去出现过很多问题(内核崩溃或冻结)。许多问题并不是 SystemTap 的过错 —— 它通常是首个对内核使用某些跟踪功能的工具,从而率先遇到 bug。最新版本的 SystemTap 是非常好的(你需要从它的源代码编译),但是,许多人仍然没有从早期版本的问题阴影中走出来。如果你想去使用它,请先在测试环境中花一些时间,然后在 irc.freenode.net 的 #systemtap 频道与开发者进行讨论。(Netflix 有一个容错架构,我们使用了 SystemTap,但是我们或许比你更少担心它的安全性。)我最诟病的事情是,它似乎假设你有内核的调试信息,而我通常并没有。没有调试信息它实际上也可以做很多事情,但是缺少相关的文档和示例(我现在自己开始帮着做这些了)。
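SystemTap 的内核内可编程能力可以用一个单行程序来示意:按进程名聚合系统调用次数,结束时打印前 10 名(假设已安装 systemtap 以及与运行内核匹配的支持包):

```shell
# 按进程名统计系统调用次数,Ctrl-C 结束后打印前 10 名
stap -e 'global c;
  probe syscall.* { c[execname()] <<< 1 }
  probe end { foreach (n in c- limit 10) printf("%s: %d\n", n, @count(c[n])) }'
```

计数在内核内完成,只有汇总结果被复制到用户空间 —— 这正是 ftrace/perf_events 目前做不到的那类事情。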
#### 5. LTTng

[LTTng][18] 对事件收集进行了优化,性能要好于其它的跟踪器,也支持许多的事件类型,包括 USDT。它的开发是在内核代码树之外进行的。它的核心部分非常简单:通过一个很小的固定指令集写入事件到跟踪缓冲区。这样让它既安全又快速。缺点是做内核内编程不太容易。我听说这不是个大问题,因为它优化得很好,尽管需要后期处理,也可以充分地扩展。它还探索了一种不同的分析技术:以“黑匣子”的方式记录所有感兴趣的事件,以便之后在 GUI 中分析。我担心这种记录方式会错失事先没有预料到的事件,我真的需要花一些时间去看看它在实践中是如何工作的。这是我花时间最少的跟踪器(没有什么特别的原因)。
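LTTng 的典型用法是围绕“会话”组织的,下面是一个内核跟踪会话的流程示意(假设已安装 lttng-tools 和 lttng-modules;会话名是随意取的):

```shell
lttng create demo-session          # 创建跟踪会话
lttng enable-event -k sched_switch # 启用内核的 sched_switch 跟踪点
lttng start                        # 开始写入跟踪缓冲区
sleep 5                            # 记录 5 秒系统活动
lttng stop                         # 停止记录
lttng view | head                  # 查看事件(也可用 babeltrace 做后期处理)
lttng destroy                      # 销毁会话
```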
#### 6. ktap

[ktap][19] 是一个很有前途的跟踪器,它在内核中使用了一个 lua 虚拟机,不需要调试信息,在嵌入式设备上也可以工作得很好。这使得它进入了人们的视野,在某个时候似乎要成为 Linux 上最好的跟踪器。然而,由于 eBPF 开始集成到了内核,ktap 的集成工作被推迟了,要等到它能够使用 eBPF 而不是它自己的虚拟机。由于 eBPF 在几个月之后仍然在集成过程中,ktap 的开发者已经等待了很长的时间。我希望在今年的晚些时候它能够重启开发。

#### 7. dtrace4linux

[dtrace4linux][20] 主要是一个人(Paul Fox)利用业余时间将 Sun DTrace 移植到 Linux 的成果。它令人印象深刻,一些<ruby>供应器<rt>provider</rt></ruby>可以工作,但还不是很完美,它最多应该算是实验性的工具(不安全)。我认为对于许可证的担心,使人们对它保持谨慎:它可能永远也进入不了 Linux 内核,因为 Sun 是基于 CDDL 许可证发布的 DTrace;Paul 的办法是将它作为一个插件。我非常希望看到 Linux 上的 DTrace,并且希望这个项目能够完成,我曾想我加入 Netflix 后会花一些时间来帮它完成。但是,我一直在使用内置的跟踪器 ftrace 和 perf_events。

#### 8. OL DTrace

[Oracle Linux DTrace][21] 是将 DTrace 移植到 Linux(尤其是 Oracle Linux)的一项重大努力。过去这些年的许多发布版本都在稳定地进步,开发者甚至谈到了改善 DTrace 测试套件,这显示出这个项目很有前途。许多有用的功能已经完成:系统调用、剖析、sdt、proc、sched 以及 USDT。我一直在等待 fbt(函数边界跟踪,对内核的动态跟踪),它将成为 Linux 内核上非常强大的功能。它最终能否成功取决于能否吸引足够多的人去使用 Oracle Linux(并为支持付费)。另一个羁绊是它并非完全开源的:内核组件是开源的,但用户级代码我没有看到。

#### 9. sysdig

[sysdig][22] 是一个很新的跟踪器,它可以使用类似 `tcpdump` 的语法和 lua 后期处理来操作<ruby>系统调用<rt>syscall</rt></ruby>事件。它也是令人印象深刻的,并且很高兴能看到系统跟踪领域的创新。它的局限性是,目前它只能跟踪系统调用,并且它要把所有事件转储到用户级进行后期处理。虽然你可以使用系统调用来做许多事情,但我还是希望能看到它支持跟踪点、kprobes 以及 uprobes。我也希望看到它支持 eBPF 以实现内核内的摘要统计。sysdig 的开发者现在正在增加对容器的支持。可以关注它的进一步发展。
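下面用几个单行程序示意 sysdig 的风格(假设已安装 sysdig 并加载了它的内核模块;`nginx` 只是示例进程名):

```shell
# 类似 tcpdump 的过滤语法:只看名为 nginx 的进程产生的系统调用事件
sysdig proc.name=nginx

# 运行内置 chisel:按耗时排序文件 I/O
sysdig -c topfiles_time

# 按进程统计网络带宽占用
sysdig -c topprocs_net
```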
### 深入阅读

我自己在各个跟踪器上的工作包括:

- **ftrace**:我的 [perf-tools][8] 集合(查看其中的示例目录);我在 lwn.net 上的 [ftrace 跟踪器的文章][5];一个 [LISA14][8] 演讲;以及帖子:[函数计数][23]、[iosnoop][24]、[opensnoop][25]、[execsnoop][26]、[TCP retransmits][27]、[uprobes][28] 和 [USDT][29]。
- **perf_events**:我的 [perf_events 示例][6] 页面;在 SCALE 上的 [Linux Profiling at Netflix][4] 演讲;以及帖子:[CPU 采样][30]、[静态跟踪点][31]、[热力图][32]、[计数][33]、[内核行级跟踪][34]、[off-CPU 时间火焰图][35]。
- **eBPF**:帖子 [eBPF:一小步][36],和一些 [BPF-tools][37](我还需要发布更多)。
- **SystemTap**:很久以前,我写了一篇 [使用 SystemTap][38] 的文章,它有点过时了。最近我发布了一些 [systemtap-lwtools][39],展示了在没有内核调试信息的情况下如何使用 SystemTap。
- **LTTng**:我使用它的时间很短,不足以发表什么文章。
- **ktap**:我的 [ktap 示例][40] 页面包括一些单行程序和脚本,虽然它们是针对早期版本的。
- **dtrace4linux**:在我的 [系统性能][41] 书中包含了一些示例,并且过去我为了一些功能开发了一些小的修补,比如 [timestamps][42]。
- **OL DTrace**:因为它是对 DTrace 的直接移植,我早期关于 DTrace 的工作大多与之相关(链接太多了,可以去 [我的主页][43] 上搜索)。一旦它更加完善,我可能会开发一些专用工具。
- **sysdig**:我贡献了 [fileslower][44] 和 [subsecond offset spectrogram][45] 这两个 chisel。
- **其它**:关于 [strace][46],我写了一篇告诫性的文章。

不好意思,没有更多的跟踪器了!……如果你想知道为什么 Linux 中的跟踪器不止一个,或者想了解 DTrace 的内容,在我的 [从 DTrace 到 Linux][47] 的演讲中有答案,从 [第 28 张幻灯片][48] 开始。

感谢 [Deirdre Straughan][49] 的编辑,以及跟踪小马的创作(General Zoi 是小马的创作者)。
--------------------------------------------------------------------------------

via: http://www.brendangregg.com/blog/2015-07-08/choosing-a-linux-tracer.html

作者:[Brendan Gregg][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.brendangregg.com
[1]:http://www.brendangregg.com/blog/images/2015/tracing_ponies.png
[2]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools/105
[3]:http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html
[4]:http://www.brendangregg.com/blog/2015-02-27/linux-profiling-at-netflix.html
[5]:http://lwn.net/Articles/608497/
[6]:http://www.brendangregg.com/perf.html
[7]:http://www.brendangregg.com/blog/2015-06-23/netflix-instance-analysis-requirements.html
[8]:http://www.brendangregg.com/blog/2015-03-17/linux-performance-analysis-perf-tools.html
[9]:http://techblog.netflix.com/2015/04/introducing-vector-netflixs-on-host.html
[10]:https://sourceware.org/git/?p=systemtap.git;a=blob_plain;f=README;hb=HEAD
[11]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools
[12]:http://lwn.net/Articles/370423/
[13]:https://www.kernel.org/doc/Documentation/trace/ftrace.txt
[14]:https://perf.wiki.kernel.org/index.php/Main_Page
[15]:http://www.phoronix.com/scan.php?page=news_item&px=BPF-Understanding-Kernel-VM
[16]:https://github.com/torvalds/linux/tree/master/samples/bpf
[17]:https://sourceware.org/systemtap/wiki
[18]:http://lttng.org/
[19]:http://ktap.org/
[20]:https://github.com/dtrace4linux/linux
[21]:http://docs.oracle.com/cd/E37670_01/E38608/html/index.html
[22]:http://www.sysdig.org/
[23]:http://www.brendangregg.com/blog/2014-07-13/linux-ftrace-function-counting.html
[24]:http://www.brendangregg.com/blog/2014-07-16/iosnoop-for-linux.html
[25]:http://www.brendangregg.com/blog/2014-07-25/opensnoop-for-linux.html
[26]:http://www.brendangregg.com/blog/2014-07-28/execsnoop-for-linux.html
[27]:http://www.brendangregg.com/blog/2014-09-06/linux-ftrace-tcp-retransmit-tracing.html
[28]:http://www.brendangregg.com/blog/2015-06-28/linux-ftrace-uprobe.html
[29]:http://www.brendangregg.com/blog/2015-07-03/hacking-linux-usdt-ftrace.html
[30]:http://www.brendangregg.com/blog/2014-06-22/perf-cpu-sample.html
[31]:http://www.brendangregg.com/blog/2014-06-29/perf-static-tracepoints.html
[32]:http://www.brendangregg.com/blog/2014-07-01/perf-heat-maps.html
[33]:http://www.brendangregg.com/blog/2014-07-03/perf-counting.html
[34]:http://www.brendangregg.com/blog/2014-09-11/perf-kernel-line-tracing.html
[35]:http://www.brendangregg.com/blog/2015-02-26/linux-perf-off-cpu-flame-graph.html
[36]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html
[37]:https://github.com/brendangregg/BPF-tools
[38]:http://dtrace.org/blogs/brendan/2011/10/15/using-systemtap/
[39]:https://github.com/brendangregg/systemtap-lwtools
[40]:http://www.brendangregg.com/ktap.html
[41]:http://www.brendangregg.com/sysperfbook.html
[42]:https://github.com/dtrace4linux/linux/issues/55
[43]:http://www.brendangregg.com
[44]:https://github.com/brendangregg/sysdig/commit/d0eeac1a32d6749dab24d1dc3fffb2ef0f9d7151
[45]:https://github.com/brendangregg/sysdig/commit/2f21604dce0b561407accb9dba869aa19c365952
[46]:http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-syscall.html
[47]:http://www.brendangregg.com/blog/2015-02-28/from-dtrace-to-linux.html
[48]:http://www.slideshare.net/brendangregg/from-dtrace-to-linux/28
[49]:http://www.beginningwithi.com/
9 个提高系统运行速度的轻量级 Linux 应用
======

**简介:** [加速 Ubuntu 系统][1]有很多方法,办法之一是使用轻量级应用来替代一些常用应用程序。我们之前发布过一篇 [Linux 必备的应用程序][2],如今将分享这些应用程序在 Ubuntu 或其他 Linux 发行版上的轻量级替代方案。

![在 Ubuntu 上使用轻量级应用程序替代方案][4]

### 9 个常用 Linux 应用程序的轻量级替代方案

你的 Linux 系统很慢吗?应用程序是不是很久才能打开?你最好的选择是使用[轻量级的 Linux 系统][5]。但是重装系统并非总是可行,不是吗?

所以如果你想坚持使用你现在用的 Linux 发行版,但是想要提高性能,你应该使用更轻量级的应用来替代一些常用的应用。这篇文章会列出各种 Linux 应用程序的轻量级替代方案。

由于我使用的是 Ubuntu,因此我只提供了基于 Ubuntu 的 Linux 发行版的安装说明。但是这些应用程序可以用于几乎所有其他 Linux 发行版,你只需去找这些轻量级应用在你的发行版中的安装方法就可以了。

### 1. Midori: Web 浏览器

[Midori][8] 是与现代互联网环境兼容性良好的最轻量级网页浏览器之一。它是开源的,使用与 Google Chrome 最初所基于的相同的渲染引擎 —— WebKit。它超快速、极简,但高度可定制。

![Midori Browser][6]

Midori 浏览器有很多可以定制的扩展和选项。如果你的需求不高,使用这个浏览器也是一个不错的选择。如果在浏览网页的时候遇到了问题,请查看其网站上的[常见问题][7]部分,其中包含了你可能遇到的常见问题及其解决方案。

#### 在基于 Ubuntu 的发行版上安装 Midori

在 Ubuntu 上,可通过官方源找到 Midori。运行以下指令即可安装它:

```
sudo apt install midori
```

### 2. Trojita:电子邮件客户端

[Trojita][11] 是一款开源的、强大的 IMAP 电子邮件客户端。它速度快,资源利用率高。我可以肯定地称它是 [Linux 最好的电子邮件客户端之一][9]。如果你只需要电子邮件客户端提供 IMAP 支持,那么也许你不用再进一步考虑了。
![Trojitá][10]

Trojita 使用各种技术 —— 按需加载电子邮件、离线缓存、带宽节省模式等 —— 以实现其令人印象深刻的性能。

#### 在基于 Ubuntu 的发行版上安装 Trojita

Trojita 目前没有针对 Ubuntu 的官方 PPA,但这应该不成问题。你可以使用以下命令轻松安装它:

```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/jkt-gentoo:/trojita/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/trojita.list"
wget http://download.opensuse.org/repositories/home:jkt-gentoo:trojita/xUbuntu_16.04/Release.key
sudo apt-key add - < Release.key
sudo apt update
sudo apt install trojita
```

### 3. GDebi:包安装程序

有时你需要快速安装 DEB 软件包。Ubuntu 软件中心是一个消耗资源严重的应用程序,仅用于安装 .deb 文件并不明智。

GDebi 无疑是一款可以完成同样目的的漂亮工具,它只有一个极简的图形界面。

![GDebi][12]

GDebi 是完全轻量级的,完美地完成了它的工作。你甚至可以[让 GDebi 成为 DEB 文件的默认安装程序][13]。

#### 在基于 Ubuntu 的发行版上安装 GDebi

只需一行指令,你便可以在 Ubuntu 上安装 GDebi:

```
sudo apt install gdebi
```

### 4. App Grid:软件中心

如果你经常在 Ubuntu 上使用软件中心搜索、安装和管理应用程序,则 [App Grid][15] 是必备的应用程序。它是默认的 Ubuntu 软件中心最具视觉吸引力且速度最快的替代方案。
![App Grid][14]

App Grid 支持应用程序的评分、评论和屏幕截图。

#### 在基于 Ubuntu 的发行版上安装 App Grid

App Grid 拥有针对 Ubuntu 的官方 PPA。使用以下指令安装 App Grid:

```
sudo add-apt-repository ppa:appgrid/stable
sudo apt update
sudo apt install appgrid
```

### 5. Yarock:音乐播放器

[Yarock][17] 是一个优雅的音乐播放器,拥有现代、极轻量的用户界面。尽管在设计上是轻量级的,但 Yarock 有一个全面的高级功能列表。

![Yarock][16]

Yarock 的主要功能包括多种音乐收藏、评级、智能播放列表、多种后端选项、桌面通知、音乐剪辑、上下文获取等。

#### 在基于 Ubuntu 的发行版上安装 Yarock

你可以通过 PPA 使用以下指令在 Ubuntu 上安装 Yarock:

```
sudo add-apt-repository ppa:nilarimogard/webupd8
sudo apt update
sudo apt install yarock
```

### 6. VLC:视频播放器

谁不需要视频播放器?谁还从未听说过 [VLC][19]?我想并不需要对它做任何介绍。

![VLC][18]

VLC 能满足你在 Ubuntu 上播放各种媒体文件的全部需求,而且它非常轻便,甚至可以在非常旧的 PC 上完美运行。
#### 在基于 Ubuntu 的发行版上安装 VLC

VLC 为 Ubuntu 提供了官方 PPA,不过也可以直接输入以下命令来安装它:

```
sudo apt install vlc
```

### 7. PCManFM:文件管理器

PCManFM 是 LXDE 的标准文件管理器。与 LXDE 的其他应用程序一样,它也是轻量级的。如果你正在为文件管理器寻找更轻量级的替代品,可以尝试使用这个应用。

![PCManFM][20]

尽管来自 LXDE,PCManFM 也同样适用于其他桌面环境。

#### 在基于 Ubuntu 的发行版上安装 PCManFM

在 Ubuntu 上安装 PCManFM 只需要一条简单的指令:

```
sudo apt install pcmanfm
```

### 8. Mousepad:文本编辑器

在轻量级方面,没有什么可以击败 nano、vim 等命令行文本编辑器。但是,如果你想要一个图形界面,你可以尝试一下 Mousepad,一个极轻量的文本编辑器。它非常轻巧,速度非常快,带有简单的可定制的用户界面和多个主题。

![Mousepad][21]

Mousepad 支持语法高亮显示,所以,你也可以把它用作基础的代码编辑器。

#### 在基于 Ubuntu 的发行版上安装 Mousepad

想要安装 Mousepad,可以使用以下指令:

```
sudo apt install mousepad
```

### 9. GNOME Office:办公软件

许多人需要经常使用办公应用程序。通常,大多数办公应用程序体积庞大且很耗资源,而 GNOME Office 在这方面就非常轻便。严格来说,GNOME Office 不是一个完整的办公套件,它由不同的独立应用程序组成,其中 AbiWord 和 Gnumeric 脱颖而出。
**AbiWord** 是文字处理器。它比其他替代品轻巧并且快得多,但是这是有代价的 —— 你可能会失去宏、语法检查等一些功能。AbiWord 并不完美,但它可以满足你的基本需求。

![AbiWord][22]

**Gnumeric** 是电子表格编辑器。就像 AbiWord 一样,Gnumeric 也非常快速,并提供了精确的计算功能。如果你正在寻找一个简单轻便的电子表格编辑器,Gnumeric 已经能满足你的需求了。

![Gnumeric][23]

[GNOME Office][24] 下面还有一些其它应用程序,你可以在其官方页面找到它们。

#### 在基于 Ubuntu 的发行版上安装 AbiWord 和 Gnumeric

要安装 AbiWord 和 Gnumeric,只需在终端中输入以下指令:

```
sudo apt install abiword gnumeric
```
--------------------------------------------------------------------------------

via: https://itsfoss.com/lightweight-alternative-applications-ubuntu/

作者:[Munif Tanjim][a]
译者:[imquanquan](https://github.com/imquanquan)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://itsfoss.com/author/munif/
[1]:https://itsfoss.com/speed-up-ubuntu-1310/
[2]:https://itsfoss.com/essential-linux-applications/
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Lightweight-alternative-applications-for-Linux-800x450.jpg
[5]:https://itsfoss.com/lightweight-linux-beginners/
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Midori-800x497.png
[7]:http://midori-browser.org/faqs/
[8]:http://midori-browser.org/
[9]:https://itsfoss.com/best-email-clients-linux/
[10]:http://trojita.flaska.net/img/2016-03-22-trojita-home.png
[11]:http://trojita.flaska.net/
[12]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/GDebi.png
[13]:https://itsfoss.com/gdebi-default-ubuntu-software-center/
[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/AppGrid-800x553.png
[15]:http://www.appgrid.org/
[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Yarock-800x529.png
[17]:https://seb-apps.github.io/yarock/
[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/VLC-800x526.png
[19]:http://www.videolan.org/index.html
[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/PCManFM.png
[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Mousepad.png
[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/AbiWord-800x626.png
[23]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Gnumeric-800x470.png
[24]:https://gnome.org/gnome-office/
Linux 容器安全的 10 个层面
======

> 应用这些策略来保护容器解决方案的各个层面和容器生命周期的各个阶段的安全。



容器提供了打包应用程序的一种简单方法,它实现了从开发到测试再到投入生产系统的无缝传递。它也有助于确保跨不同环境的一致性,包括物理服务器、虚拟机,以及公有云或私有云。这些好处使得一些组织为了更方便地部署和管理为他们提升业务价值的应用程序,而快速地采用了容器技术。



企业非常注重安全,在容器中运行核心服务的任何人都会问:“容器安全吗?”以及“我们能信任运行在容器中的应用程序吗?”

对容器进行安全保护就像是对运行中的进程进行安全保护一样。在你部署和运行你的容器之前,你需要去考虑整个解决方案各个层面的安全,也需要去考虑贯穿应用程序和容器整个生命周期的安全。

请尝试从这十个关键的因素去确保容器解决方案栈不同层面、以及容器生命周期不同阶段的安全。

### 1. 容器宿主机操作系统和多租户环境

由于容器将应用程序和它的依赖作为一个单元来处理,使得开发者构建和升级应用程序变得更加容易,并且,容器可以启用多租户技术将许多应用程序和服务部署到一台共享主机上。在一台单独的主机上以容器方式部署多个应用程序、按需启动和关闭单个容器都是很容易的。为完全实现这种打包和部署技术的优势,运维团队需要合适的环境来运行容器。运维者需要一个安全的操作系统,它能够在边界上保护容器安全、保护主机内核不受容器影响,以及保护容器彼此之间的安全。

容器是隔离而资源受限的 Linux 进程,允许你在一个共享的宿主机内核上运行沙盒化的应用程序。保护容器的方法与保护你的 Linux 中运行的任何进程的方法是一样的。降低权限是非常重要的,也是保护容器安全的最佳实践:最好使用尽可能小的权限去创建容器,容器应该以一个普通用户的权限来运行,而不是 root 权限的用户。在 Linux 中可以使用多个层面的安全加固手段:Linux 命名空间、安全增强的 Linux([SELinux][1])、[cgroups][2]、capabilities(LCTT 译注:Linux 内核的一个安全特性,它打破了传统的普通用户与 root 用户的概念,在进程级提供更好的安全控制),以及安全计算模式([seccomp][3]),这五种 Linux 的安全特性都可以用于保护容器的安全。
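举例来说,这些降低权限的手段在常见的容器运行时里都有对应的开关。下面以 Docker CLI 做一个示意(镜像名 `myapp:latest` 是假设的,参数应按应用的实际需要裁剪):

```shell
# 不以 root 运行、丢弃全部 capabilities、只加回绑定低端口所需的能力,
# 根文件系统只读,并禁止进程再获取新的特权
docker run --rm \
  --user 1000:1000 \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --read-only \
  --security-opt no-new-privileges \
  myapp:latest
```

SELinux 标签和 seccomp 配置文件同样可以通过 `--security-opt` 指定。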
### 2. 容器内容(使用可信来源)

在谈到安全时,首先要考虑你的容器里面有什么?例如,应用程序和基础设施往往是由很多可用组件所构成的。它们中的一些是开源的软件包,比如,Linux 操作系统、Apache Web 服务器、Red Hat JBoss 企业应用平台、PostgreSQL,以及 Node.js。这些软件包的容器化版本已经可以使用了,因此,你没有必要自己去构建它们。但是,对于你从一些外部来源下载的任何代码,你需要知道这些软件包的原始来源,是谁构建的它,以及这些包里面是否包含恶意代码。

### 3. 容器注册库(安全访问容器镜像)

你的团队构建的容器基于下载的公共容器镜像,因此,像管理和下载其它类型的二进制文件一样去访问和更新这些下载的容器镜像以及内部构建的镜像,是至关重要的。许多私有的注册库支持容器镜像的存储。选择一个私有的注册库,可以帮你对存储在其中的容器镜像实现策略自动化。

### 4. 安全性与构建过程

在一个容器化环境中,软件构建过程是软件生命周期的一个阶段,它将所需的运行时库和应用程序代码集成到一起。管理这个构建过程对于保护软件栈安全来说是很关键的。遵守“一次构建,到处部署”的原则,可以确保构建过程的结果正是生产系统中需要的。保持容器的不可变性也很重要 —— 换句话说就是,不要对正在运行的容器打补丁,而是重新构建和部署它们。

不论是因为你处于一个高度监管的行业中,还是只希望简单地优化你的团队的成果,设计你的容器镜像管理以及构建过程时,都可以利用容器分层的优势来实现控制分离:

* 运维团队管理基础镜像
* 架构师管理中间件、运行时、数据库,以及其它解决方案
* 开发者专注于应用程序层面,并且只写代码



最后,标记好你的定制构建的容器,这样可以确保在构建和部署时不会搞混乱。
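“一次构建,到处部署”与不可变镜像的原则,可以用如下的构建、打标签流程来示意(注册库地址与版本号均为假设):

```shell
# 一次性构建出带确定版本号的应用镜像
docker build -t registry.example.com/myapp:1.4.2 .

# 为同一制品追加环境标签,而不是为每个环境重新构建
docker tag registry.example.com/myapp:1.4.2 registry.example.com/myapp:staging

# 推送到私有注册库,由部署平台按标签拉取同一镜像
docker push registry.example.com/myapp:1.4.2
```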
### 5. 控制好在同一个集群内部署应用

针对构建过程中出现的任何问题,或者在镜像被部署之后发现的任何漏洞,请在基于策略的、自动化的工具上添加另外的安全层。

我们来看一下,一个应用程序的构建使用了三个容器镜像层:内核、中间件,以及应用程序。如果在内核镜像中发现了问题,那么该镜像会被重新构建。一旦构建完成,镜像就会被发布到容器平台注册库中。这个平台可以自动检测到发生变化的镜像,对于基于这个镜像的其它构建,将触发一个预定义的动作,平台会自动重新构建应用镜像,合并该修复的库。

一旦构建完成,镜像将被发布到容器平台的内部注册库中。在它的内部注册库中,会立即检测到镜像发生变化,应用程序在这里将会被触发一个预定义的动作,自动部署更新镜像,确保运行在生产系统中的代码总是使用更新后的最新的镜像。所有的这些功能协同工作,将安全功能集成到你的持续集成和持续部署(CI/CD)过程和管道中。

### 6. 容器编配:保护容器平台安全

当然了,应用程序很少会以单一容器分发。即便是简单的应用程序,一般情况下也会有一个前端、一个后端,以及一个数据库。而在容器中以微服务模式部署的应用程序,意味着应用程序将部署在多个容器中,有时它们在同一台宿主机上,有时它们是分布在多个宿主机或者节点上,如下面的图所示:



在大规模的容器部署时,你应该考虑:

* 哪个容器应该被部署在哪个宿主机上?
* 哪个宿主机应该有什么样的容量?
* 哪个容器需要访问其它容器?它们之间如何发现彼此?
* 你如何控制和管理对共享资源的访问,比如网络和存储?
* 如何监视容器的健康状况?
* 如何自动扩展性能以满足应用程序的需要?
* 如何在满足安全需求的同时启用开发者的自助服务?

考虑到开发者和运维者的能力,提供基于角色的访问控制是容器平台的关键要素。例如,编配管理服务器是中心访问点,应该接受最高级别的安全检查。API 是规模化的自动容器平台管理的关键,可以用于为 pod、服务,以及复制控制器验证和配置数据;在入站请求上执行项目验证;以及调用其它主要系统组件上的触发器。

### 7. 网络隔离

在容器中部署现代微服务应用,经常意味着跨多个节点在多个容器上部署。考虑到网络防御,你需要一种在一个集群中的应用之间相互隔离的方法。一个典型的公有云容器服务,像 Google 容器引擎(GKE)、Azure 容器服务,或者 Amazon Web 服务(AWS)容器服务,是单租户服务。它们让你在你初始化建立的虚拟机集群上运行你的容器。对于多租户容器的安全,你需要容器平台允许你使用一个单一集群,并且分割流量以隔离不同的用户、团队、应用,以及在这个集群中的环境。

使用网络命名空间,容器内的每个集合(即大家熟知的“pod”)都会得到它自己的 IP 和绑定的端口范围,以此来从一个节点上隔离每个 pod 的网络。默认情况下,来自不同命名空间(项目)的 pod 并不能发送或者接收其它 pod 和不同项目的服务上的包,除非使用下面所述的方式。你可以使用这些特性在同一个集群内隔离开发者环境、测试环境,以及生产环境。但是,这样会导致 IP 地址和端口数量的激增,使得网络管理更加复杂。另外,容器是被设计为反复使用的,你应该在能处理这种复杂性的工具上进行投入。在容器平台上比较受欢迎的工具是使用 [软件定义网络][4](SDN)提供一个统一的集群网络,它允许跨不同集群的容器进行通讯。
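在 Kubernetes 一类的平台上,这种集群内的流量隔离通常用网络策略(NetworkPolicy)对象来表达。下面是一个示意(命名空间名是假设的,且需要集群的网络插件支持网络策略):

```shell
# 默认拒绝 dev 命名空间内所有 pod 的入站流量,之后再按需放行
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: dev
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
```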
### 8. 存储

容器既可被用于无状态应用,也可被用于有状态应用。保护外挂的存储是保护有状态服务的一个关键要素。容器平台对多种受欢迎的存储提供了插件,包括网络文件系统(NFS)、AWS 弹性块存储(EBS)、GCE 持久磁盘、GlusterFS、iSCSI、RADOS(Ceph)、Cinder 等等。

一个持久卷(PV)可以通过资源提供者支持的任何方式挂载到一个主机上。不同的提供者有不同的能力,而每个 PV 的访问模式会被设置为其特定的卷所支持的模式。例如,NFS 能够支持多个客户端同时读/写,但是,一个特定的 NFS 的 PV 可以在服务器上被发布为只读模式。每个 PV 有它自己的一组访问模式描述,反映出该 PV 的能力,比如 ReadWriteOnce、ReadOnlyMany 以及 ReadWriteMany。

### 9. API 管理、终端安全,以及单点登录(SSO)

保护你的应用安全,包括管理应用以及 API 的认证和授权。

Web SSO 能力是现代应用程序的一个关键部分。在构建应用时,容器平台可以为开发者带来多种可以使用的容器化服务。

API 是微服务构成的应用程序的关键所在。这类应用程序有多个独立的 API 服务,这导致了服务终端数量的激增,因此需要额外的管理工具,推荐使用 API 管理工具。所有的 API 平台都应该提供多种 API 认证和安全所需要的标准选项,这些选项既可以单独使用,也可以组合使用,以用于发布证书或者控制访问。

这些选项包括标准的 API key、应用 ID 和密钥对,以及 OAuth 2.0。

### 10. 在一个联合集群中的角色和访问管理

在 2016 年 7 月份,Kubernetes 1.3 引入了 [Kubernetes 联合集群][5]。这是一个令人兴奋的新特性,当前仍在 Kubernetes 上游的 Kubernetes 1.6 beta 中演进。联合集群用于部署和访问跨多个集群运行在公有云或企业数据中心的应用程序服务。多个集群能够用于实现应用程序的高可用性,让应用程序跨多个可用区域,或者实现公共的部署管理,或者跨不同的供应商进行迁移,比如 AWS、Google Cloud 以及 Azure。

当管理联合集群时,你必须确保你的编配工具能够提供你所需要的、跨不同部署平台实例的安全性。一般来说,认证和授权是很关键的 —— 不论你的应用程序运行在什么地方,都要能将数据安全可靠地传递给它们,以及管理跨集群的多租户应用程序。Kubernetes 扩展了联合集群,包括对联合的秘密数据(secret)、联合的命名空间,以及 Ingress 对象的支持。

### 选择一个容器平台

当然,这并不仅关乎安全。你需要提供一个你的开发者团队和运维团队有相关经验的容器平台。他们需要一个安全的、企业级的基于容器的应用平台,它能够同时满足开发者和运维者的需要,而且还能够提高运维效率和基础设施利用率。

想从 Daniel 在 [欧盟开源峰会][7] 上的 [容器安全的十个层面][6] 的演讲中学习更多知识吗?该峰会已于 10 月 23 - 26 日在布拉格举行。

### 关于作者

Daniel Oh:专注于微服务、敏捷、DevOps、Java EE、容器、OpenShift、JBoss 等技术的布道者。
--------------------------------------------------------------------------------

via: https://opensource.com/article/17/10/10-layers-container-security

作者:[Daniel Oh][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/daniel-oh
[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux
[2]:https://en.wikipedia.org/wiki/Cgroups
[3]:https://en.wikipedia.org/wiki/Seccomp
[4]:https://en.wikipedia.org/wiki/Software-defined_networking
[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/
[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223
[7]:http://events.linuxfoundation.org/events/open-source-summit-europe
给 “rm” 命令添加个“垃圾桶”
============

人类会犯错误,因为我们不是可编程设备,所以,在使用 `rm` 命令时要额外注意,不要在任何时候使用 `rm -rf *`。当你使用 `rm` 命令时,它会永久删除文件,不会像文件管理器那样将这些文件移动到“垃圾箱”。

有时我们会把不应该删除的文件删除掉,那么当错误地删除了文件时该怎么办?你只能求助于恢复工具(Linux 中有很多数据恢复工具),但我们不知道是否能将它百分之百恢复,所以要如何解决这个问题?

我们最近发表了一篇关于 [Trash-Cli][1] 的文章,在评论部分,我们从用户 Eemil Lgz 那里获得了一个关于 [saferm.sh][2] 脚本的更新,它可以帮助我们将文件移动到“垃圾箱”而不是永久删除它们。

将文件移动到“垃圾桶”是一个好主意,当你无意中运行 `rm` 命令时,它可以拯救你;但是也有人会说这是一个坏习惯,如果你不注意“垃圾桶”,它可能会在一段时间后被文件和文件夹堆满。在这种情况下,我建议你按照自己的意愿创建一个定时任务来清理它。

这适用于服务器和桌面两种环境。如果脚本检测到 GNOME、KDE、Unity 或 LXDE 桌面环境(DE),则它将文件或文件夹安全地移动到默认垃圾箱 `$HOME/.local/share/Trash/files`,否则会在你的主目录中创建垃圾箱文件夹 `$HOME/Trash`。

`saferm.sh` 脚本托管在 GitHub 中,可以从仓库中克隆,也可以创建一个名为 `saferm.sh` 的文件并复制其上的代码。

```
$ git clone https://github.com/lagerspetz/linux-stuff
$ sudo mv linux-stuff/scripts/saferm.sh /bin
$ rm -Rf linux-stuff
```

在 `.bashrc` 文件中设置别名:

```
alias rm=saferm.sh
```

执行下面的命令使其生效:

```
$ source ~/.bashrc
```

一切就绪,现在你可以执行 `rm` 命令,自动将文件移动到“垃圾桶”,而不是永久删除它们。

测试一下,我们将删除一个名为 `magi.txt` 的文件,命令行明确地提示了 `Moving magi.txt to $HOME/.local/share/Trash/files`:

```
$ rm -rf magi.txt
Moving magi.txt to /home/magi/.local/share/Trash/files
```

也可以通过 `ls` 命令或 `trash-cli` 进行验证。
```
$ ls -lh /home/magi/.local/share/Trash/files
Permissions Size User Date Modified Name
.rw-r--r--    32 magi 11 Oct 16:24  magi.txt
```

或者我们可以通过文件管理器界面查看相同的内容。

![![][3]][4]

(LCTT 译注:原文此处混淆了部分 trash-cli 的内容,考虑到文章衔接和逻辑,此处略。)

要了解 `saferm.sh` 的其他选项,请查看帮助。

```
$ saferm.sh -h
```
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/rm-command-to-move-files-to-trash-can-rm-alias/

作者:[2DAYGEEK][a]
译者:[amwps290](https://github.com/amwps290)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
如何在 Linux 中配置 ssh 登录导语
======

> 了解如何在 Linux 中创建登录导语,来向将要登录或已经登录的用户显示不同的警告或消息。

![Login banners in Linux][1]

无论何时登录公司的某些生产系统,你都会看到一些登录消息、警告或关于你将登录或已登录的服务器的信息,如下所示。这些就是<ruby>登录导语<rt>login banner</rt></ruby>。

![Login welcome messages in Linux][2]

在本文中,我们将引导你配置它们。

你可以配置两种类型的导语:

1. 用户登录前显示的导语信息(在你选择的文件中配置,例如 `/etc/login.warn`)
2. 用户成功登录后显示的导语信息(在 `/etc/motd` 中配置)

### 如何在用户登录前显示消息

当用户连接到服务器、还未登录时,这个消息将被显示给他。也就是说,当他输入用户名后,该消息会显示在密码提示之前。

你可以使用任何文件名并在其中写入信息。在这里我们使用 `/etc/login.warn` 并且把我们的消息放在里面。

```
# cat /etc/login.warn
!!!! Welcome to KernelTalks test server !!!!
This server is meant for testing Linux commands and tools. If you are
not associated with kerneltalks.com and not authorized please dis-connect
immediately.
```

现在,需要将此文件和路径告诉 `sshd` 守护进程,以便它可以为每个用户的登录请求获取此导语。对此,打开 `/etc/ssh/sshd_config` 文件并搜索 `#Banner none`。

这里你需要编辑该配置文件,写下你的文件名并删除注释标记(`#`)。它应该看起来像:`Banner /etc/login.warn`。
保存文件并重启 `sshd` 守护进程。为避免断开已连接的用户,请使用 HUP 信号重启 sshd。

```
root@kerneltalks # ps -ef | grep -i sshd
root     14255     1  0 18:42 ?        00:00:00 /usr/sbin/sshd -D
root     19074 14255  0 18:46 ?        00:00:00 sshd: ec2-user [priv]
root     19177 19127  0 18:54 pts/0    00:00:00 grep -i sshd

root@kerneltalks # kill -HUP 14255
```

就是这样了!打开新的会话并尝试登录。你将看到你在上述步骤中配置的消息。

![Login banner in Linux][3]

你可以在用户输入密码登录系统之前看到此消息。

### 如何在用户登录后显示消息

用户在成功登录系统后看到的消息是<ruby>当日消息<rt>Message Of The Day</rt></ruby>(MOTD),它由 `/etc/motd` 控制。编辑这个文件,输入用于在登录成功后欢迎用户的消息。

```
root@kerneltalks # cat /etc/motd
   W E L C O M E
Welcome to the testing environment of kerneltalks.
Feel free to use this system for testing your Linux
skills. In case of any issues reach out to admin at
info@kerneltalks.com. Thank you.
```

你不需要重启 `sshd` 守护进程来使更改生效。只要保存该文件,`sshd` 守护进程就会在下一次登录请求时读取并显示它。

![motd in linux][4]

你可以在上面的截图中看到:黄色框是由 `/etc/motd` 控制的 MOTD,绿色框就是我们之前配置的登录导语。

你可以使用 [cowsay][5]、[banner][6]、[figlet][7]、[lolcat][8] 等工具创建出色的、引人注目的登录消息。此方法适用于几乎所有 Linux 发行版,如 RedHat、CentOS、Ubuntu、Fedora 等。
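例如,可以用这类工具快速生成一个醒目的 MOTD(假设已安装 figlet,并以 root 身份运行):

```shell
# 用 figlet 生成居中的 ASCII 艺术字标题,再追加一行说明文字
figlet -c "WELCOME" > /etc/motd
echo "Test server of kerneltalks. Contact info@kerneltalks.com" >> /etc/motd
cat /etc/motd
```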
--------------------------------------------------------------------------------

via: https://kerneltalks.com/tips-tricks/how-to-configure-login-banners-in-linux/

作者:[kerneltalks][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://kerneltalks.com
[1]:https://a3.kerneltalks.com/wp-content/uploads/2017/11/login-banner-message-in-linux.png
[2]:https://a3.kerneltalks.com/wp-content/uploads/2017/11/Login-message-in-linux.png
[3]:https://a1.kerneltalks.com/wp-content/uploads/2017/11/login-banner.png
[4]:https://a3.kerneltalks.com/wp-content/uploads/2017/11/motd-message-in-linux.png
[5]:https://kerneltalks.com/tips-tricks/cowsay-fun-in-linux-terminal/
[6]:https://kerneltalks.com/howto/create-nice-text-banner-hpux/
[7]:https://kerneltalks.com/tips-tricks/create-beautiful-ascii-text-banners-linux/
[8]:https://kerneltalks.com/linux/lolcat-tool-to-rainbow-color-linux-terminal/
@ -1,32 +1,31 @@
下一次技术面试时要问的 3 个重要问题
======

> 面试可能会有压力,但 58% 的公司告诉 Dice 和 Linux 基金会,他们需要在未来几个月内聘请开源人才。学习如何提出正确的问题。

Dice 和 Linux 基金会的年度[开源工作报告][1]揭示了开源专业人士的前景以及未来一年的招聘活动。在今年的报告中,86% 的科技专业人士表示,了解开源推动了他们的职业生涯。然而,当他们在自己的组织内寻求晋升,或是在别处申请新职位时,这些经历又会起到什么作用呢?

面试新工作绝非易事。除了在准备新职位时还要应付复杂的工作,当面试官问“你有什么问题要问吗?”时,适当的回答更增添了压力。

在 Dice,我们从事职业、建议,并将技术专家与雇主连接起来。但是我们也在公司里雇佣技术人才来开发开源项目。实际上,Dice 平台基于许多 Linux 发行版,我们利用开源数据库作为我们搜索功能的基础。总之,如果没有开源软件,我们就无法运行 Dice,因此聘请了解和热爱开源软件的专业人士至关重要。

多年来,我在面试中了解到提出好问题的重要性。这是一个了解你的潜在新雇主的机会,以及更好地了解他们是否与你的技能相匹配。

这里有三个要问的重要问题,以及其重要的原因:

### 1、 公司对员工在空闲时间致力于开源项目或编写代码的立场是什么?

这个问题的答案会告诉你正在面试的公司的很多信息。一般来说,只要它与你在该公司所从事的工作没有冲突,公司会希望技术专家为网站或项目做出贡献。在公司之外允许这种情况,也会在技术组织中培养出一种创业精神,并教授技术技能,否则在正常的日常工作中你可能无法获得这些技能。

### 2、 项目如何区分优先级?

由于所有的公司都成为了科技公司,所以在面向客户的创新技术项目与改进平台本身之间往往存在着分歧。你会努力保持现有的平台最新么?或者致力于面向公众开发新产品?根据你的兴趣,答案可以决定公司是否适合你。

### 3、 谁主要决定新产品,开发者在决策过程中有多少投入?

这个问题一方面是了解谁负责公司创新(以及你与他/她有多少联系),另一方面是了解你在公司的职业道路。在开发新产品之前,一个好的公司会和开发人员和开源人才交流。这看起来不用多想,但有时会错过这步,意味着在新产品发布之前协作环境的不同或者混乱的过程。

面试可能会有压力,但是 58% 的公司告诉 Dice 和 Linux 基金会,他们需要在未来几个月内聘用开源人才,所以记住,旺盛的需求会让像你这样的专业人士占据主动。以你想要的方向引导你的事业。

@ -38,7 +37,7 @@ via: https://www.linux.com/blog/os-jobs/2017/12/3-essential-questions-ask-your-n

作者:[Brian Hostetter][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,109 @@
并发服务器(五):Redis 案例研究
======

这是我写的并发网络服务器系列文章的第五部分。在前四部分中我们讨论了并发服务器的结构,这篇文章我们将去研究一个在生产系统中大量使用的服务器的案例 —— [Redis][10]。

Redis 是一个非常有魅力的项目,我关注它很久了。它最让我着迷的一点就是它的 C 源代码非常清晰。它也是一个高性能、大并发的内存数据库服务器的非常好的例子,它是研究网络并发服务器的一个非常好的案例,因此,我们不能错过这个好机会。

我们来看看前四部分讨论的概念在真实世界中的应用程序。

本系列的所有文章有:

* [第一节 - 简介][3]
* [第二节 - 线程][4]
* [第三节 - 事件驱动][5]
* [第四节 - libuv][6]
* [第五节 - Redis 案例研究][7]

### 事件处理库

Redis 最初发布于 2009 年,它最牛逼的一件事情大概就是它的速度 —— 它能够处理大量的并发客户端连接。需要特别指出的是,它是用*一个单线程*来完成的,而且还不对保存在内存中的数据使用任何复杂的锁或者同步机制。

Redis 之所以如此牛逼是因为,它在给定的系统上使用了其可用的最快的事件循环,并将它们封装成由它实现的事件循环库(在 Linux 上是 epoll,在 BSD 上是 kqueue,等等)。这个库的名字叫做 [ae][11]。ae 使得编写一个快速服务器变得很容易,只要在它内部没有阻塞即可,而 Redis 则保证 ^注1 了这一点。

在这里,我们的兴趣点主要是它对*文件事件*的支持 —— 当文件描述符(如网络套接字)有一些有趣的未决事情时将调用注册的回调函数。与 libuv 类似,ae 支持多路事件循环(参阅本系列的[第三节][5]和[第四节][6]),下面这个 `aeCreateFileEvent` 的函数签名对你来说应该不会陌生:

```
int aeCreateFileEvent(aeEventLoop *eventLoop, int fd, int mask,
        aeFileProc *proc, void *clientData);
```

它在 `fd` 上使用一个给定的事件循环,为新的文件事件注册一个回调(`proc`)函数。当使用的是 epoll 时,它将调用 `epoll_ctl` 在文件描述符上添加一个事件(可能是 `EPOLLIN`、`EPOLLOUT`、也或许两者都有,取决于 `mask` 参数)。ae 的 `aeProcessEvents` 函数的作用是“运行事件循环并分发回调函数”,它在底层调用了 `epoll_wait`。

### 处理客户端请求

我们通过跟踪 Redis 服务器代码来看一下,ae 如何为客户端事件注册回调函数的。`initServer` 启动时,通过调用 `aeCreateFileEvent` 注册回调函数 `acceptTcpHandler`,用于读取正在监听的套接字上的事件。当新的连接可用时,这个回调函数被调用。它调用 `accept` ^注2 ,接下来是 `acceptCommonHandler`,它转而去调用 `createClient` 以初始化新客户端连接所需要的数据结构。

`createClient` 的工作是去监听来自客户端的入站数据。它将套接字设置为非阻塞模式(一个异步事件循环中的关键因素)并使用 `aeCreateFileEvent` 去注册另外一个文件事件回调函数以读取事件 —— `readQueryFromClient`。每当客户端发送数据,这个函数将被事件循环调用。

`readQueryFromClient` 就如我们期望的那样 —— 解析客户端命令和动作,并通过查询和/或操作数据来回复。因为客户端套接字是非阻塞的,所以这个函数必须能够处理 `EAGAIN`,以及部分数据;从客户端中读取的数据是累积在客户端专用的缓冲区中,而完整的查询可能被分割在回调函数的多个调用当中。

### 将数据发送回客户端

在前面的内容中,我说到了 `readQueryFromClient` 最终会发送回复给客户端。这在逻辑上是正确的,因为 `readQueryFromClient` *准备*要发送回复,但它不真正去做实质的发送 —— 因为这里并不能保证客户端套接字已经准备好写入/发送数据。我们必须为此使用事件循环机制。

Redis 是这样做的,它注册一个 `beforeSleep` 函数,每次事件循环即将进入休眠(等待套接字变得可以读取/写入)时调用它。`beforeSleep` 做的其中一件事情就是调用 `handleClientsWithPendingWrites`。它的作用是通过调用 `writeToClient` 去尝试立即发送所有可用的回复;如果一些套接字不可用时,那么*当*套接字可用时,它将注册一个事件循环去调用 `sendReplyToClient`。这可以被看作为一种优化 —— 如果套接字可用于立即发送数据(一般是 TCP 套接字),这时并不需要注册事件,直接发送数据即可。因为套接字是非阻塞的,它从不会去阻塞循环。

### 为什么 Redis 要实现它自己的事件库?

在 [第四节][14] 中我们讨论了使用 libuv 来构建一个异步并发服务器。需要注意的是,Redis 并没有使用 libuv,或者任何类似的事件库,而是它去实现自己的事件库 —— ae,用 ae 来封装 epoll、kqueue 和 select。事实上,Antirez(Redis 的创建者)恰好在 [2011 年的一篇文章][15] 中回答了这个问题。他的回答的要点是:ae 只有大约 770 行他理解的非常透彻的代码;而 libuv 代码量非常巨大,也没有提供 Redis 所需的额外功能。

现在,ae 的代码大约增长到 1300 多行,比起 libuv 的 26000 行(这是在没有 Windows、测试、示例、文档的情况下的数据)来说那是小巫见大巫了。libuv 是一个非常综合的库,这使它更复杂,并且很难去适应其它项目的特殊需求;另一方面,ae 是专门为 Redis 设计的,与 Redis 共同演进,只包含 Redis 所需要的东西。

这是我 [前些年在一篇文章中][16] 提到的软件项目依赖关系的另一个很好的示例:

> 依赖的优势与在软件项目上花费的工作量成反比。

在某种程度上,Antirez 在他的文章中也提到了这一点。他提到,提供大量附加价值(在我的文章中的“基础”依赖)的依赖比像 libuv 这样的依赖更有意义(他的例子是 jemalloc 和 Lua),因为对于 Redis 的特定需求,libuv 这类库的功能实现起来相当容易。

### Redis 中的多线程

[在 Redis 的绝大多数历史中][17],它都是一个不折不扣的单线程的东西。一些人觉得这太不可思议了,有这种想法完全可以理解。Redis 本质上是受网络束缚的 —— 只要数据库大小合理,对于任何给定的客户端请求,其大部分延时都是浪费在网络等待上,而不是在 Redis 的数据结构上。

然而,现在事情已经不再那么简单了。Redis 现在有几个新功能都用到了线程:

1. “惰性” [内存释放][8]。
2. 在后台线程中使用 fsync 调用写一个 [持久化日志][9]。
3. 运行需要执行一个长周期运行的操作的用户定义模块。

对于前两个特性,Redis 使用它自己的一个简单的 bio(“Background I/O” 的首字母缩写)库。这个库是根据 Redis 的需要进行了硬编码,它不能用到其它的地方 —— 它运行预设数量的线程,每个 Redis 后台作业类型需要一个线程。

而对于第三个特性,[Redis 模块][18] 可以定义新的 Redis 命令,并且遵循与普通 Redis 命令相同的标准,包括不阻塞主线程。如果在模块中自定义的一个 Redis 命令,希望去执行一个长周期运行的操作,这将创建一个线程在后台去运行它。在 Redis 源码树中的 `src/modules/helloblock.c` 提供了这样的一个示例。

有了这些特性,Redis 将事件循环和线程结合了起来,在一般的案例中,Redis 具有了更快的速度和弹性,这有点类似于在本系列文章中 [第四节][19] 讨论的工作队列。

- 注1: Redis 的一个核心部分是:它是一个 _内存中_ 数据库;因此,查询从不会运行太长的时间。当然了,这将会带来各种各样的其它问题。在使用分区的情况下,服务器可能最终路由一个请求到另一个实例上;在这种情况下,将使用异步 I/O 来避免阻塞其它客户端。
- 注2: 使用 `anetAccept`;`anet` 是 Redis 对 TCP 套接字代码的封装。

--------------------------------------------------------------------------------

via: https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/

作者:[Eli Bendersky][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://eli.thegreenplace.net/pages/about
[1]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id1
[2]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id2
[3]:https://linux.cn/article-8993-1.html
[4]:https://linux.cn/article-9002-1.html
[5]:https://linux.cn/article-9117-1.html
[6]:https://linux.cn/article-9397-1.html
[7]:http://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/
[8]:http://antirez.com/news/93
[9]:https://redis.io/topics/persistence
[10]:https://redis.io/
[11]:https://redis.io/topics/internals-rediseventlib
[12]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id4
[13]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id5
[14]:https://linux.cn/article-9397-1.html
[15]:http://oldblog.antirez.com/post/redis-win32-msft-patch.html
[16]:http://eli.thegreenplace.net/2017/benefits-of-dependencies-in-software-projects-as-a-function-of-effort/
[17]:http://antirez.com/news/93
[18]:https://redis.io/topics/modules-intro
[19]:https://linux.cn/article-9397-1.html
@ -1,83 +1,85 @@
如何使用 syslog-ng 从远程 Linux 机器上收集日志
======

![linuxhero.jpg][1]

图片来源:Jack Wallen

如果你的数据中心全是 Linux 服务器,而你就是系统管理员,那么你的其中一项工作内容就是查看服务器的日志文件。但是,如果你在大量的机器上去查看日志文件,那么意味着你需要挨个登入到机器中来阅读日志文件。如果你管理的机器很多,仅这项工作就可以花费你一天的时间。

另外的选择是,你可以配置一台单独的 Linux 机器去收集这些日志。这将使你的每日工作更加高效。要实现这个目的,有很多的不同系统可供你选择,而 syslog-ng 就是其中之一。

syslog-ng 的不足是文档并不容易梳理。但是,我已经解决了这个问题,可以带你马上安装和配置 syslog-ng。下面我将在 Ubuntu Server 16.04 上用两台机器示范:

* UBUNTUSERVERVM 的 IP 地址是 192.168.1.118,将配置为日志收集器
* UBUNTUSERVERVM2 将配置为一个客户端,发送日志文件到收集器

现在我们来开始安装和配置。

### 安装

安装很简单。为了尽可能容易,我将从标准仓库安装。打开一个终端窗口,运行如下命令:

```
sudo apt install syslog-ng
```

你必须在收集器和客户端的机器上都运行上面的命令。安装完成之后,你将开始配置。

### 配置收集器

现在,我们开始日志收集器的配置。它的配置文件是 `/etc/syslog-ng/syslog-ng.conf`。syslog-ng 安装完成时就已经包含了一个配置文件。我们不使用这个默认的配置文件,可以使用 `mv /etc/syslog-ng/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf.BAK` 将这个自带的默认配置文件重命名。现在使用 `sudo nano /etc/syslog-ng/syslog-ng.conf` 命令创建一个新的配置文件。在这个文件中添加如下的行:

```
@version: 3.5
@include "scl.conf"
@include "`scl-root`/system/tty10.conf"
options {
        time-reap(30);
        mark-freq(10);
        keep-hostname(yes);
};
source s_local { system(); internal(); };
source s_network {
        syslog(transport(tcp) port(514));
};
destination d_local {
        file("/var/log/syslog-ng/messages_${HOST}"); };
destination d_logs {
        file(
                "/var/log/syslog-ng/logs.txt"
                owner("root")
                group("root")
                perm(0777)
        ); };
log { source(s_local); source(s_network); destination(d_logs); };
```

需要注意的是,syslog-ng 使用 514 端口,你需要确保在你的网络上它可以被访问。
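顺带一提,上面的配置定义了 `d_local`(按主机名分文件存放日志),但 `log` 语句只把日志送到了 `d_logs`。如果你还希望每台主机的日志各自落到 `messages_<主机名>` 文件中,可以再加一行 `log` 语句。下面是一个可选的配置片段,仅作示意:

```
log { source(s_network); destination(d_local); };
```

加上这一行后,来自网络的每条日志会同时写入汇总文件和对应主机的单独文件。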
保存并关闭这个文件。上面的配置将转存期望的日志文件(由 `system()` 和 `internal()` 指出)到 `/var/log/syslog-ng/logs.txt` 中。因此,你需要使用如下的命令去创建所需的目录和文件:

```
sudo mkdir /var/log/syslog-ng
sudo touch /var/log/syslog-ng/logs.txt
```

使用如下的命令启动和启用 syslog-ng:

```
sudo systemctl start syslog-ng
sudo systemctl enable syslog-ng
```

### 配置客户端

我们将在客户端上做同样的事情(移动默认配置文件并创建新配置文件)。拷贝下列文本到新的客户端配置文件中:

```
@version: 3.5
@include "scl.conf"
@include "`scl-root`/system/tty10.conf"
source s_local { system(); internal(); };
destination d_syslog_tcp {
        syslog("192.168.1.118" transport("tcp") port(514)); };
log { source(s_local);destination(d_syslog_tcp); };
```

@ -87,11 +89,9 @@ log { source(s_local);destination(d_syslog_tcp); };

### 查看日志文件

回到你配置为收集器的服务器上,运行这个命令 `sudo tail -f /var/log/syslog-ng/logs.txt`。你将看到包含了收集器和客户端的日志条目的输出(图 A)。

![图 A][3]

恭喜你!syslog-ng 已经正常工作了。你现在可以登入到你的收集器上查看本地机器和远程客户端的日志了。如果你的数据中心有很多 Linux 服务器,在每台服务器上都安装上 syslog-ng 并配置它们作为客户端发送日志到收集器,这样你就不需要登入到每个机器去查看它们的日志了。

@ -101,7 +101,7 @@ via: https://www.techrepublic.com/article/how-to-use-syslog-ng-to-collect-logs-f

作者:[Jack Wallen][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,138 @@
Linux 下最好的图片截取和视频截录工具
======

可能有一个困扰你多时的问题:当你想要获取一张屏幕截图向开发者反馈问题,或是在 Stack Overflow 寻求帮助时,你可能缺乏一个可靠的屏幕截图工具去保存和发送截图。在 GNOME 中有一些这种类型的程序和 shell 拓展工具。这里介绍的是 Linux 最好的屏幕截图工具,可以供你截取图片或截录视频。

### 1. Shutter

[][2]

[Shutter][3] 可以截取任意你想截取的屏幕,是 Linux 最好的截屏工具之一。得到截屏之后,它还可以在保存截屏之前预览图片。它也有一个扩展菜单,展示在 GNOME 顶部面板,使得用户进入软件变得更人性化,非常方便使用。

你可以截取选区、窗口、桌面、当前光标下的窗口、区域、菜单、提示框或网页。Shutter 允许用户直接上传屏幕截图到设置内首选的云服务商。它同样允许用户在保存截图之前编辑图片;同时提供了一些可自由添加或移除的插件。

终端内键入下列命令安装此工具:

```
sudo add-apt-repository -y ppa:shutter/ppa
sudo apt-get update && sudo apt-get install shutter
```

### 2. Vokoscreen

[][4]

[Vokoscreen][5] 是一款允许你记录和叙述屏幕活动的软件。它易于使用,有一个简洁的界面和顶部面板的菜单,方便用户录制视频。

你可以选择记录整个屏幕,或是记录一个窗口,抑或是记录一个选区。自定义记录可以让你轻松得到所需的保存类型,你甚至可以将屏幕录制记录保存为 gif 文件。当然,你也可以使用网络摄像头记录自己的解说,用于制作吸引学习者的教程。记录完成后,你还可以在该应用程序中回放视频记录,这样就不必到处去找你记录的内容。

[][6]

你可以从你的发行版仓库安装 Vokoscreen,或者你也可以在 [pkgs.org][7] 选择下载你需要的版本。

```
sudo dpkg -i vokoscreen_2.5.0-1_amd64.deb
```

### 3. OBS

[][8]

[OBS][9] 可以用来录制自己的屏幕,亦可用来录制互联网上的流媒体。它允许你看到自己所录制的内容或你叙述的屏幕录制。它允许你根据喜好选择录制视频的品质;它也允许你选择文件的保存类型。除了视频录制功能之外,你还可以切换到 Studio 模式,不借助其他软件进行视频编辑。要在你的 Linux 系统中安装 OBS,你必须确保你的电脑已安装 FFmpeg。Ubuntu 14.04 或更早的版本安装 FFmpeg 可以使用如下命令:

```
sudo add-apt-repository ppa:kirillshkrogalev/ffmpeg-next

sudo apt-get update && sudo apt-get install ffmpeg
```

Ubuntu 15.04 以及之后的版本,你可以在终端中键入如下命令安装 FFmpeg:

```
sudo apt-get install ffmpeg
```

如果 FFmpeg 安装完成,在终端中键入如下命令安装 OBS:

```
sudo add-apt-repository ppa:obsproject/obs-studio

sudo apt-get update

sudo apt-get install obs-studio
```

### 4. Green Recorder

[][10]

[Green Recorder][11] 是一款界面简单的程序,它可以让你记录屏幕。你可以选择包括视频和单纯的音频在内的录制内容,也可以显示鼠标指针,甚至可以跟随鼠标录制视频。同样,你可以选择记录窗口或是屏幕上的选区,以便于只在自己的记录中保留需要的内容;你还可以自定义最终保存的视频的帧数。如果你想要延迟录制,它提供给你一个选项可以设置出你想要的延迟时间。它还提供一个录制结束后的命令运行选项,这样,就可以在视频录制结束后立即运行该命令。

在终端中键入如下命令来安装 Green Recorder:

```
sudo add-apt-repository ppa:fossproject/ppa

sudo apt update && sudo apt install green-recorder
```

### 5. Kazam

[][12]

[Kazam][13] 在几乎所有使用截图工具的 Linux 用户中都十分流行。这是一款简单直观的软件,它可以让你做一个屏幕截图或是视频录制,也同样允许在屏幕截图或屏幕录制之前设置延时。它可以让你选择录制区域、窗口或是你想要抓取的整个屏幕。Kazam 的界面布局非常好,和其它软件相比毫无复杂感。它的特点,就是让你优雅地截图。Kazam 在系统托盘和菜单中都有图标,无需打开应用本身,你就可以开始屏幕截图。

终端中键入如下命令来安装 Kazam:

```
sudo apt-get install kazam
```

如果没有找到该 PPA,你需要使用下面的命令安装它:

```
sudo add-apt-repository ppa:kazam-team/stable-series

sudo apt-get update && sudo apt-get install kazam
```

### 6. GNOME 扩展截屏工具

[][1]

GNOME 的一个扩展软件就叫做 screenshot tool,它常驻系统面板,如果你没有设置禁用它的话。由于它是常驻系统面板的软件,所以它会一直等待你的调用,获取截图。方便和容易获取是它最主要的特点,除非你在调整工具中禁用,否则它将一直在你的系统面板中。这个工具也有用来设置首选项的选项窗口。在 extensions.gnome.org 中搜索 “Screenshot Tool”,在你的 GNOME 中安装它。

你需要安装 GNOME 扩展的 Chrome 扩展组件和 GNOME 调整工具才能使用这个工具。

[][14]

当你碰到一个问题,不知道怎么处理,想要在 [Linux 社区][15] 或者其他开发社区分享、寻求帮助的时候,**Linux 截图工具**尤其合适。学习开发、编程或者其他任何事物时,都会发现这些工具在分享截图的时候真的很实用。YouTube 用户和教程制作爱好者会发现视频截录工具真的很适合录制可以发表的教程。

--------------------------------------------------------------------------------

via: http://www.linuxandubuntu.com/home/best-linux-screenshot-screencasting-tools

作者:[linuxandubuntu][a]
译者:[CYLeft](https://github.com/CYLeft)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxandubuntu.com
[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg
[3]:http://shutter-project.org/
[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg
[5]:https://github.com/vkohaupt/vokoscreen
[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg
[7]:https://pkgs.org/download/vokoscreen
[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg
[9]:https://obsproject.com/
[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg
[11]:https://github.com/foss-project/green-recorder
[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg
[13]:https://launchpad.net/kazam
[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg
[15]:http://www.linuxandubuntu.com/home/top-10-communities-to-help-you-learn-linux
@ -1,21 +1,21 @@
Partclone:多功能的分区和克隆的自由软件
======

[Partclone][1] 是由 Clonezilla 的开发者们开发的用于创建和克隆分区镜像的自由开源软件。实际上,Partclone 是 Clonezilla 所基于的工具之一。

它为用户提供了备份与恢复已用分区的工具,并与多个文件系统高度兼容,这要归功于它能够使用像 e2fslibs 这样的现有库来读取和写入分区,例如 ext2。

它最大的优点是支持各种格式,包括 ext2、ext3、ext4、hfs+、reiserfs、reiser4、btrfs、vmfs3、vmfs5、xfs、jfs、ufs、ntfs、fat(12/16/32)、exfat、f2fs 和 nilfs。

它还有许多的程序,包括 partclone.ext2(ext3&ext4)、partclone.ntfs、partclone.exfat、partclone.hfsp 和 partclone.vmfs(v3 和 v5)等等。

### Partclone 中的功能

* 自由软件:Partclone 免费供所有人下载和使用。
* 开源:Partclone 是在 GNU GPL 许可下发布的,并在 [GitHub][2] 上公开。
* 跨平台:适用于 Linux、Windows、MAC、ESX 文件系统备份/恢复和 FreeBSD。
* 一个在线的[文档页面][3],你可以从中查看帮助文档并跟踪其 GitHub 问题。
* 为初学者和专业人士提供的在线[用户手册][4]。
* 支持救援。
@ -25,55 +25,53 @@ Partclone - 多功能的分区和克隆免费软件
* 支持 raw 克隆。
* 显示传输速率和持续时间。
* 支持管道。
* 支持 crc32 校验。
* 支持 ESX vmware server 的 vmfs 和 FreeBSD 的文件系统 ufs。

Partclone 中还捆绑了更多功能,你可以在[这里][5]查看其余的功能。

- [下载 Linux 中的 Partclone][6]

### 如何安装和使用 Partclone

在 Linux 上安装 Partclone:

```
$ sudo apt install partclone     [On Debian/Ubuntu]
$ sudo yum install partclone     [On CentOS/RHEL/Fedora]
```

克隆分区为镜像:

```
# partclone.ext4 -d -c -s /dev/sda1 -o sda1.img
```

将镜像恢复到分区:

```
# partclone.ext4 -d -r -s sda1.img -o /dev/sda1
```

分区到分区克隆:

```
# partclone.ext4 -d -b -s /dev/sda1 -o /dev/sdb1
```

显示镜像信息:

```
# partclone.info -s sda1.img
```

检查镜像:

```
# partclone.chkimg -s sda1.img
```

你是 Partclone 的用户吗?我最近写了一篇关于 [Deepin Clone][7] 的文章,显然,Partclone 有它擅长处理的任务。你使用其他备份和恢复工具的经验是什么?

请在下面的评论区与我们分享你的想法和建议。

@ -81,13 +79,13 @@ via: https://www.fossmint.com/partclone-linux-backup-clone-tool/

作者:[Martins D. Okoi][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.fossmint.com/author/dillivine/
[1]:https://partclone.org/
[2]:https://github.com/Thomas-Tsai/partclone
[3]:https://partclone.org/help/
@ -1,35 +1,38 @@
如何在 Linux 上使用 Vundle 管理 Vim 插件
======

毋庸置疑,Vim 是一款强大的文本文件处理的通用工具,能够管理系统配置文件和编写代码。通过插件,Vim 可以被拓展出不同层次的功能。通常,所有的插件和附属的配置文件都会存放在 `~/.vim` 目录中。由于所有的插件文件都被存储在同一个目录下,所以当你安装更多插件时,不同的插件文件之间相互混淆。因而,跟踪和管理它们将是一个恐怖的任务。然而,这正是 Vundle 所能处理的。Vundle 是 **V**im **Bundle** 的缩写,它是一款能够管理 Vim 插件的极其实用的工具。

Vundle 为每一个你安装的插件创建一个独立的目录树,并在相应的插件目录中存储附加的配置文件。因此,相互之间没有混淆的文件。简言之,Vundle 允许你安装新的插件、配置已有的插件、更新插件配置、搜索安装的插件和清理不使用的插件。所有的操作都可以在一键交互模式下完成。在这个简易的教程中,让我告诉你如何安装 Vundle,如何在 GNU/Linux 中使用它来管理 Vim 插件。

### Vundle 安装

如果你需要 Vundle,那我就当作你的系统中已经安装好了 Vim。如果没有,请安装 Vim 和 git(以下载 Vundle)。在大部分 GNU/Linux 发行版中的官方仓库中都可以获取到这两个包。比如,在 Debian 系列系统中,你可以使用下面的命令安装这两个包。

```
sudo apt-get install vim git
```

#### 下载 Vundle

复制 Vundle 的 GitHub 仓库地址:

```
git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim
```

#### 配置 Vundle

创建 `~/.vimrc` 文件,以通知 Vim 使用新的插件管理器。安装、更新、配置和移除插件都需要这个文件。

```
vim ~/.vimrc
```

在此文件顶部,加入如下若干行内容:

```
set nocompatible              " be iMproved, required
filetype off                  " required
@ -76,35 +79,39 @@ filetype plugin indent on    " required
" Put your non-Plugin stuff after this line
```

被标记为 “required” 的行是 Vundle 的所需配置。其余行仅是一些例子。如果你不想安装那些特定的插件,可以移除它们。完成后,键入 `:wq` 保存退出。
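如果你只想要一个最小可用的配置,下面是一个骨架,依照 Vundle 官方 README 的通常写法整理;除了让 Vundle 管理它自己之外,没有列出任何其他插件:

```
set nocompatible              " 必需
filetype off                  " 必需
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
Plugin 'VundleVim/Vundle.vim' " 让 Vundle 管理它自己,必需
" 在 begin/end 之间添加你自己的 Plugin 行
call vundle#end()             " 必需
filetype plugin indent on     " 必需
```

之后每新增一个插件,只需在 `begin`/`end` 之间加一行 `Plugin '...'` 并重新执行安装命令即可。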
最后,打开 Vim:

```
vim
```

然后键入下列命令安装插件:

```
:PluginInstall
```

![][2]

将会弹出一个新的分窗口,我们加在 `.vimrc` 文件中的所有插件都会自动安装。

![][3]

安装完毕之后,键入下列命令,可以删除高速缓存区缓存并关闭窗口:

```
:bdelete
```

你也可以在终端上使用下面命令安装插件,而不用打开 Vim:

```
vim +PluginInstall +qall
```

使用 [fish shell][4] 的朋友,添加下面这行到你的 `.vimrc` 文件中。

```
set shell=/bin/bash
@ -112,123 +119,138 @@ set shell=/bin/bash
```

### 使用 Vundle 管理 Vim 插件

#### 添加新的插件

首先,使用下面的命令搜索可以使用的插件:

```
:PluginSearch
```

要从 vimscripts 网站刷新本地的列表,请在命令之后添加 `!`。

```
:PluginSearch!
```

会弹出一个列出可用插件列表的新分窗口:

![][5]

你还可以通过直接指定插件名的方式,缩小搜索范围。

```
:PluginSearch vim
```

这样将会列出包含关键词 “vim” 的插件。

当然你也可以指定确切的插件名,比如:

```
:PluginSearch vim-dasm
```

移动焦点到正确的一行上,按下 `i` 键来安装插件。现在,被选择的插件将会被安装。

![][6]

在你的系统中,所有想要的插件都以类似的方式安装。一旦安装成功,使用下列命令删除 Vundle 缓存:

```
:bdelete
```

现在,插件已经安装完成。为了让插件能正确地自动加载,我们需要在 `.vimrc` 文件中添加安装好的插件名。

这样做:

```
:e ~/.vimrc
```

添加这一行:

```
[...]
Plugin 'vim-dasm'
[...]
```

用自己的插件名替换 vim-dasm。然后,敲击 `ESC`,键入 `:wq` 保存退出。

请注意,`.vimrc` 文件中所有的插件行之后都必须有如下内容:

```
[...]
filetype plugin indent on
```

#### 列出已安装的插件

键入下面命令列出所有已安装的插件:

```
:PluginList
```

![][7]

#### 更新插件

键入下列命令更新插件:

```
:PluginUpdate
```

键入下列命令重新安装所有插件:

```
:PluginInstall!
```

#### 卸载插件

首先,列出所有已安装的插件:

```
:PluginList
```

之后将焦点置于正确的一行上,按下 `SHIFT+d` 组合键。

![][8]

然后编辑你的 `.vimrc` 文件:

```
:e ~/.vimrc
```

删除对应的插件条目。最后,键入 `:wq` 保存退出。

或者,你可以通过移除插件在 `.vimrc` 文件中所在的行,并且执行下列命令,卸载插件:

```
:PluginClean
```

这个命令将会移除所有不在你的 `.vimrc` 文件中但是存在于 bundle 目录中的插件。

你应该已经掌握了 Vundle 管理插件的基本方法了。在 Vim 中使用下列命令,查询帮助文档,获取更多细节。

```
:h vundle
```

现在我已经把所有内容都告诉你了。很快,我就会出下一篇教程。保持关注!

干杯!

### 资源

[Vundle GitHub 仓库][9]

--------------------------------------------------------------------------------

@ -236,16 +258,17 @@ via: https://www.ostechnix.com/manage-vim-plugins-using-vundle-linux/

作者:[SK][a]
译者:[CYLeft](https://github.com/CYLeft)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-2.png
[4]:https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/
[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-3.png
[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-4-2.png
[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-5-1.png
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-6.png
[9]:https://github.com/VundleVim/Vundle.vim
@ -0,0 +1,73 @@
5 个在视觉上最轻松的黑暗主题
======

人们在电脑上选择黑暗主题有几个原因。有些人觉得它对眼睛更轻松,而另一些人则出于健康方面的原因选择黑暗主题。特别地,程序员喜欢黑暗主题,因为可以减少眼睛受到的眩光。

如果你是一位 Linux 用户和黑暗主题爱好者,那么你很幸运。这里有五个最好的 Linux 黑暗主题。去看一下!

### 1. OSX-Arc-Shadow

![OSX-Arc-Shadow Theme][1]

顾名思义,这个主题受 OS X 的启发,它是基于 Arc 的扁平主题。该主题支持 GTK 3 和 GTK 2 桌面环境,因此 Gnome、Cinnamon、Unity、Manjaro、Mate 和 XFCE 用户可以安装和使用该主题。[OSX-Arc-Shadow][2] 是 OSX-Arc 主题集合的一部分。该集合还包括其他几个主题(黑暗和明亮)。你可以下载整个系列并使用黑色主题。

基于 Debian 和 Ubuntu 的发行版用户可以选择使用此[页面][3]中找到的 .deb 文件来安装稳定版本。压缩的源文件也位于同一页面上。Arch Linux 用户,请查看此 [AUR 链接][4]。最后,要手动安装主题,请将 zip 解压到 `~/.themes`,并将其设置为当前主题、控件和窗口边框。
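“解压到 `~/.themes`”这个手动安装步骤对本文介绍的所有主题都是一样的。下面用临时目录模拟这个过程(目录和主题名仅为示意;实际操作时应把真正的主题压缩包解压到 `$HOME/.themes`):

```shell
fake_home=$(mktemp -d)                    # 模拟 $HOME,避免改动真实环境
mkdir -p "$fake_home/.themes"
src=$(mktemp -d)                          # 模拟解压出来的主题目录
mkdir -p "$src/OSX-Arc-Shadow/gtk-3.0"    # 主题目录里通常有 gtk-3.0 等子目录
cp -r "$src/OSX-Arc-Shadow" "$fake_home/.themes/"
installed=$(ls "$fake_home/.themes")      # 确认主题已就位
echo "$installed"                         # 输出:OSX-Arc-Shadow
```

之后在 GNOME 调整工具(或对应桌面的外观设置)里就能选到该主题。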
### 2. Kiss-Kool-Red version 2

![Kiss-Kool-Red version 2][5]

该主题发布不久。与 OSX-Arc-Shadow 相比,它有更黑的外观和红色选择框。对于那些希望电脑屏幕上有更强对比度和更少眩光的人尤其有吸引力。因此,它可以减少在夜间或在光线较暗的地方使用时的注意力分散。它支持 GTK 3 和 GTK 2。

前往 [gnome-looks][6],在“文件”菜单下下载主题。安装过程很简单:将主题解压到 `~/.themes` 中,并将其设置为当前主题、控件和窗口边框。

### 3. Equilux

![Equilux][7]

Equilux 是另一个基于 Materia 主题的简单的黑暗主题。它有一个中性的深色调,并不过分花哨。选择框之间的对比度也很小,并且没有 Kiss-Kool-Red 中红色的锐利。这个主题的确是为减轻眼睛疲劳而做的。

[下载压缩文件][8]并将其解压缩到你的 `~/.themes` 中。然后,你可以将其设置为你的主题。你可以查看[它的 GitHub 页面][9]了解最新的增加内容。

### 4. Deepin Dark

![Deepin Dark][10]

Deepin Dark 是一个完全黑暗的主题。对于那些喜欢更黑暗的人来说,这个主题绝对是值得考虑的。此外,它还可以减少电脑屏幕的眩光量。另外,它支持 Unity。[在这里下载 Deepin Dark][11]。

### 5. Ambiance DS BlueSB12

![Ambiance DS BlueSB12][12]

Ambiance DS BlueSB12 是一个简单的黑暗主题,它使得重要细节突出。它有助于专注,不花哨。它与 Deepin Dark 非常相似。特别是对于 Ubuntu 用户,它与 Ubuntu 17.04 兼容。你可以从[这里][13]下载并尝试。

### 总结

如果你长时间使用电脑,黑暗主题是减轻眼睛疲劳的好方法。即使你不这样做,黑暗主题也可以在其他方面帮助你,例如提高专注度。让我们知道你最喜欢哪一个。

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/best-linux-dark-themes/

作者:[Bruno Edoh][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com
[1]:https://www.maketecheasier.com/assets/uploads/2017/12/osx-arc-shadow.png (OSX-Arc-Shadow Theme)
[2]:https://github.com/LinxGem33/OSX-Arc-Shadow/
[3]:https://github.com/LinxGem33/OSX-Arc-Shadow/releases
[4]:https://aur.archlinux.org/packages/osx-arc-shadow/
[5]:https://www.maketecheasier.com/assets/uploads/2017/12/Kiss-Kool-Red.png (Kiss-Kool-Red version 2)
[6]:https://www.gnome-look.org/p/1207964/
[7]:https://www.maketecheasier.com/assets/uploads/2017/12/equilux.png (Equilux)
[8]:https://www.gnome-look.org/p/1182169/
[9]:https://github.com/ddnexus/equilux-theme
[10]:https://www.maketecheasier.com/assets/uploads/2017/12/deepin-dark.png (Deepin Dark)
[11]:https://www.gnome-look.org/p/1190867/
[12]:https://www.maketecheasier.com/assets/uploads/2017/12/ambience.png (Ambiance DS BlueSB12)
[13]:https://www.gnome-look.org/p/1013664/
@ -1,9 +1,9 @@
|
||||
如何在 Linux 上安装 Spotify
|
||||
如何在 Linux 上使用 snap 安装 Spotify(声破天)
|
||||
======
|
||||
|
||||
如何在 Ubuntu Linux 桌面上安装 Spotify 来在线听音乐?
|
||||
如何在 Ubuntu Linux 桌面上安装 spotify 来在线听音乐?
|
||||
|
||||
Spotify 是一个可让你访问大量歌曲的数字音乐流服务。你可以免费收听或者购买订阅。可以创建播放列表。订阅用户可以免广告收听音乐。你会得到更好的音质。本教程**展示如何使用在 Ubuntu、Mint、Debian、Fedora、Arch 和其他更多发行版**上的 snap 包管理器安装 Spotify。
|
||||
Spotify 是一个可让你访问大量歌曲的数字音乐流服务。你可以免费收听或者购买订阅,可以创建播放列表。订阅用户可以免广告收听音乐,你会得到更好的音质。本教程展示如何使用在 Ubuntu、Mint、Debian、Fedora、Arch 和其他更多发行版上的 snap 包管理器安装 Spotify。
|
||||
|
||||
### 在 Linux 上安装 spotify
|
||||
|
||||
@ -11,33 +11,28 @@ Spotify 是一个可让你访问大量歌曲的数字音乐流服务。你可以
|
||||
|
||||
1. 安装 snapd
|
||||
2. 打开 snapd
|
||||
3. 找到 Spotify snap:
|
||||
```
|
||||
snap find spotify
|
||||
```
|
||||
4. 安装 spotify:
|
||||
```
|
||||
sudo snap install spotify
|
||||
```
|
||||
5. 运行:
|
||||
```
|
||||
spotify &
|
||||
```
|
||||
3. 找到 Spotify snap:`snap find spotify`
|
||||
4. 安装 spotify:`sudo snap install spotify`
|
||||
5. 运行:`spotify &`
|
||||
|
||||
让我们详细看看所有的步骤和例子。
|
||||
|
||||
### 步骤 1 - 安装 Snapd
|
||||
### 步骤 1 - 安装 snapd
|
||||
|
||||
你需要安装 snapd 包。它是一个守护进程(服务),并能在 Linux 系统上启用 snap 包管理。
|
||||
|
||||
#### Debian/Ubuntu/Mint Linux 上的 Snapd
|
||||
#### Debian/Ubuntu/Mint Linux 上的 snapd
|
||||
|
||||
输入以下[ apt 命令][1]/ [apt-get 命令][2]:
|
||||
`$ sudo apt install snapd`
|
||||
输入以下 [apt 命令][1]/ [apt-get 命令][2]:
|
||||
|
||||
```
|
||||
$ sudo apt install snapd
|
||||
```
|
||||
|
||||
#### 在 Arch Linux 上安装 snapd
|
||||
|
||||
snapd 只包含在 Arch User Repository(AUR)中。运行 yaourt 命令(参见[如何在 Archlinux 上安装 yaourt][3]):
|
||||
snapd 只包含在 Arch User Repository(AUR)中。运行 `yaourt` 命令(参见[如何在 Archlinux 上安装 yaourt][3]):
|
||||
|
||||
```
|
||||
$ sudo yaourt -S snapd
|
||||
$ sudo systemctl enable --now snapd.socket
|
||||
@ -45,7 +40,8 @@ $ sudo systemctl enable --now snapd.socket
|
||||
|
||||
#### 在 Fedora 上获取 snapd
|
||||
|
||||
运行 snapd 命令
|
||||
运行如下命令安装 snapd:
|
||||
|
||||
```
|
||||
sudo dnf install snapd
|
||||
sudo ln -s /var/lib/snapd/snap /snap
|
||||
@ -53,26 +49,67 @@ sudo ln -s /var/lib/snapd/snap /snap
|
||||
|
||||
#### OpenSUSE 安装 snapd
|
||||
|
||||
执行如下的 `zypper` 命令:
|
||||
|
||||
```
|
||||
### Tumbleweed version ###
|
||||
$ sudo zypper addrepo http://download.opensuse.org/repositories/system:/snappy/openSUSE_Tumbleweed/ snappy
|
||||
### Leap version ###
|
||||
$ sudo zypper addrepo http://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.3/ snappy
|
||||
```
|
||||
|
||||
安装:
|
||||
|
||||
```
|
||||
$ sudo zypper install snapd
|
||||
$ sudo systemctl enable --now snapd.socket
|
||||
```
|
||||
|
||||
### 步骤 2 - 在 Linux 上使用 snap 安装 spotify
|
||||
|
||||
执行 snap 命令:
|
||||
`$ snap find spotify`
|
||||
|
||||
```
|
||||
$ snap find spotify
|
||||
```
|
||||
|
||||
[![snap search for spotify app command][4]][4]
|
||||
|
||||
安装它:
|
||||
`$ sudo snap install spotify`
|
||||
|
||||
```
|
||||
$ sudo snap install spotify
|
||||
```
|
||||
|
||||
[![How to install Spotify application on Linux using snap command][5]][5]
|
||||
|
||||
### 步骤 3 - 运行 spotify 并享受它(译注:原博客中就是这么直接跳到 step3 的)
|
||||
### 步骤 3 - 运行 spotify 并享受它
|
||||
|
||||
从 GUI 运行它,或者只需输入:
|
||||
`$ spotify`
|
||||
|
||||
```
|
||||
$ spotify
|
||||
```
|
||||
|
||||
在启动时自动登录你的帐户:
|
||||
|
||||
```
|
||||
$ spotify --username vivek@nixcraft.com
|
||||
$ spotify --username vivek@nixcraft.com --password 'myPasswordHere'
|
||||
```
|
||||
|
||||
在初始化时使用给定的 URI 启动 Spotify 客户端:
|
||||
`$ spotify--uri=<uri>`
|
||||
|
||||
```
|
||||
$ spotify --uri=<uri>
|
||||
```
|
||||
|
||||
以指定的网址启动:
|
||||
`$ spotify--url=<url>`
|
||||
|
||||
```
|
||||
$ spotify --url=<url>
|
||||
```
|
||||
|
||||
[![Spotify client app running on my Ubuntu Linux desktop][6]][6]
|
||||
|
||||
### 关于作者
|
||||
@ -85,7 +122,7 @@ via: https://www.cyberciti.biz/faq/how-to-install-spotify-application-on-linux/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,113 @@
|
||||
搭建私有云:OwnCloud
|
||||
======
|
||||
|
||||
所有人都在讨论云。尽管市面上有很多为我们提供云存储和其他云服务的主要服务商,但是我们还是可以为自己搭建一个私有云。
|
||||
|
||||
在本教程中,我们将讨论如何利用 OwnCloud 搭建私有云。OwnCloud 是一个可以安装在我们 Linux 设备上的 web 应用程序,能够存储我们的数据并基于这些数据提供服务。OwnCloud 可以分享日历、联系人和书签,共享音/视频流等等。
|
||||
|
||||
本教程中,我们使用的是 CentOS 7 系统,但本教程同样适用于在其他 Linux 发行版上安装 OwnCloud。让我们先做一些准备工作,然后开始安装 OwnCloud。
|
||||
|
||||
- 推荐阅读:[如何在 CentOS & RHEL 上使用 Apache 作为反向代理服务器][1]
|
||||
- 同时推荐:[实时 Linux 服务器监测和 GLANCES 监测工具][2]
|
||||
|
||||
### 预备
|
||||
|
||||
* 我们需要在机器上配置 LAMP。请参阅我们的文章《[在 CentOS/RHEL 上配置 LAMP 服务器最简单的教程][3]》和《[在 Ubuntu 搭建 LAMP][4]》。
|
||||
* 我们需要在自己的设备里安装这些包:`php-mysql`、`php-json`、`php-xml`、`php-mbstring`、`php-zip`、`php-gd`、`curl`、`php-curl`、`php-pdo`。使用包管理器安装它们。
|
||||
|
||||
```
|
||||
$ sudo yum install php-mysql php-json php-xml php-mbstring php-zip php-gd curl php-curl php-pdo
|
||||
```
|
||||
|
||||
### 安装
|
||||
|
||||
要安装 OwnCloud,我们现在需要在服务器上下载 OwnCloud 安装包。使用下面的命令从官方网站下载最新的安装包(10.0.4-1):
|
||||
|
||||
```
|
||||
$ wget https://download.owncloud.org/community/owncloud-10.0.4.tar.bz2
|
||||
```
|
||||
|
||||
使用下面的命令解压:
|
||||
|
||||
```
|
||||
$ tar -xvf owncloud-10.0.4.tar.bz2
|
||||
```
|
||||
|
||||
现在,将所有解压后的文件移动至 `/var/www/html`:
|
||||
|
||||
```
|
||||
$ mv owncloud/* /var/www/html
|
||||
```
|
||||
|
||||
下一步,我们需要在 Apache 的配置文件 `httpd.conf` 上做些修改:
|
||||
|
||||
```
|
||||
$ sudo vim /etc/httpd/conf/httpd.conf
|
||||
```
|
||||
|
||||
更改下面的选项:
|
||||
|
||||
```
|
||||
AllowOverride All
|
||||
```
|
||||
|
||||
保存该文件,并修改 OwnCloud 文件夹的文件权限:
|
||||
|
||||
```
|
||||
$ sudo chown -R apache:apache /var/www/html/
|
||||
$ sudo chmod 777 /var/www/html/config/
|
||||
```
|
||||
|
||||
然后重启 Apache 服务器使修改生效:
|
||||
|
||||
```
|
||||
$ sudo systemctl restart httpd
|
||||
```
|
||||
|
||||
现在,我们需要在 MariaDB 上创建一个数据库,保存来自 OwnCloud 的数据。使用下面的命令创建数据库和数据库用户:
|
||||
|
||||
```
|
||||
$ mysql -u root -p
|
||||
MariaDB [(none)] > create database owncloud;
|
||||
MariaDB [(none)] > GRANT ALL ON owncloud.* TO ocuser@localhost IDENTIFIED BY 'owncloud';
|
||||
MariaDB [(none)] > flush privileges;
|
||||
MariaDB [(none)] > exit
|
||||
```
|
||||
|
||||
服务器配置部分完成后,现在我们可以在网页浏览器上访问 OwnCloud。打开浏览器,输入您的服务器 IP 地址,我这边的服务器是 10.20.30.100:
|
||||
|
||||
![安装 owncloud][7]
|
||||
|
||||
页面加载完毕后,我们将看到上述页面。这里,我们将创建管理员用户,同时提供数据库信息。当所有信息填写完毕,点击“Finish setup”。
|
||||
|
||||
我们将被重定向到登录页面,在这里,我们需要输入先前创建的凭据:
|
||||
|
||||
![安装 owncloud][9]
|
||||
|
||||
认证成功之后,我们将进入 OwnCloud 面板:
|
||||
|
||||
![安装 owncloud][11]
|
||||
|
||||
我们可以使用手机应用程序,同样也可以使用网页界面更新我们的数据。现在,我们已经有自己的私有云了,同时,关于如何安装 OwnCloud 创建私有云的教程也进入尾声。请在评论区留下自己的问题或建议。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/create-personal-cloud-install-owncloud/
|
||||
|
||||
作者:[SHUSAIN][a]
|
||||
译者:[CYLeft](https://github.com/CYLeft)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/apache-as-reverse-proxy-centos-rhel/
|
||||
[2]:http://linuxtechlab.com/linux-server-glances-monitoring-tool/
|
||||
[3]:http://linuxtechlab.com/easiest-guide-creating-lamp-server/
|
||||
[4]:http://linuxtechlab.com/install-lamp-stack-on-ubuntu/
|
||||
[6]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=400%2C647
|
||||
[7]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud1-compressor.jpg?resize=400%2C647
|
||||
[8]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=876%2C541
|
||||
[9]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud2-compressor1.jpg?resize=876%2C541
|
||||
[10]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=981%2C474
|
||||
[11]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud3-compressor1.jpg?resize=981%2C474
|
@ -3,10 +3,9 @@
|
||||
|
||||
[][8]
|
||||
|
||||
|
||||
HackerRank 最近公布了 2018 年开发者技能报告的结果,其中向程序员询问了他们何时开始编码。
|
||||
|
||||
39,441 名专业和学生开发者于 2016 年 10 月 16 日至 11 月 1 日完成了在线调查,超过 25% 的被调查的开发者在 16 岁前编写了他们的第一段代码。
|
||||
39,441 名专业人员和学生开发者于 2016 年 10 月 16 日至 11 月 1 日完成了在线调查,超过 25% 的被调查的开发者在 16 岁前编写了他们的第一段代码。(LCTT 译注:日期恐有误)
|
||||
|
||||
### 程序员是如何学习的
|
||||
|
||||
@ -16,7 +15,7 @@ HackerRank 最近公布了 2018 年开发者技能报告的结果,其中向程
|
||||
|
||||
开发者平均了解四种语言,但他们想学习更多语言。
|
||||
|
||||
对学习的渴望因人而异 - 18 至 24 岁的开发者计划学习 6 种语言,而 35 岁以上的开发者只计划学习 3 种语言。
|
||||
对学习的渴望因人而异 —— 18 至 24 岁的开发者计划学习 6 种语言,而 35 岁以上的开发者只计划学习 3 种语言。
|
||||
|
||||
[][5]
|
||||
|
||||
@ -46,9 +45,9 @@ HackerRank 说:“在某些方面,我们发现了一个小矛盾。开发人
|
||||
|
||||
via: https://mybroadband.co.za/news/smartphones/246583-how-programmers-learn-to-code.html
|
||||
|
||||
作者:[Staff Writer ][a]
|
||||
作者:[Staff Writer][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -8,23 +8,23 @@
|
||||
### 在 Linux 中纠正拼写错误的 Bash 命令
|
||||
|
||||
你有没有运行过类似于下面的错误输入命令?
|
||||
|
||||
```
|
||||
$ unme -r
|
||||
bash: unme: command not found
|
||||
|
||||
```
|
||||
|
||||
你注意到了吗?上面的命令中有一个错误。我在 “uname” 命令缺少了字母 “a”。
|
||||
你注意到了吗?上面的命令中有一个错误。我在 `uname` 命令中漏掉了字母 `a`。
|
||||
|
||||
我在很多时候犯过这种愚蠢的错误。在我知道这个技巧之前,我习惯按下向上箭头来调出命令并转到命令中拼写错误的单词,纠正拼写错误,然后按回车键再次运行该命令。但相信我。下面的技巧非常易于纠正你刚刚运行的命令中的任何拼写错误。
|
||||
我在很多时候犯过这种愚蠢的错误。在我知道这个技巧之前,我习惯按下向上箭头来调出命令,并转到命令中拼写错误的单词,纠正拼写错误,然后按回车键再次运行该命令。但相信我。下面的技巧非常易于纠正你刚刚运行的命令中的任何拼写错误。
|
||||
|
||||
要轻松更正上述拼写错误的命令,只需运行:
|
||||
|
||||
```
|
||||
$ ^nm^nam^
|
||||
|
||||
```
|
||||
|
||||
这会将 “uname” 命令中将 “nm” 替换为 “nam”。很酷,是吗?它不仅纠正错别字,而且还能运行命令。查看下面的截图。
|
||||
这会将前面输错的命令中的 `nm` 替换为 `nam`,从而得到正确的 `uname` 命令。很酷,是吗?它不仅纠正错别字,而且还能运行命令。查看下面的截图。
|
||||
|
||||
![][2]
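如果想理解 `^nm^nam^` 背后的替换逻辑,可以用 bash 的参数替换来近似模拟(仅作示意:真正的 `^旧^新^` 作用于历史记录中的上一条命令,并会直接运行替换后的命令):

```shell
# 仅作示意:用参数替换模拟 ^nm^nam^ 对上一条命令文本的修改
prev='unme -r'            # 拼写错误的上一条命令
echo "${prev/nm/nam}"     # 把第一处 nm 替换为 nam,输出:uname -r
```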
|
||||
|
||||
@ -32,49 +32,49 @@ $ ^nm^nam^
|
||||
|
||||
**额外提示:**
|
||||
|
||||
你有没有想过在使用 “cd” 命令时如何自动纠正拼写错误?没有么?没关系!下面的技巧将解释如何做到这一点。
|
||||
你有没有想过在使用 `cd` 命令时如何自动纠正拼写错误?没有么?没关系!下面的技巧将解释如何做到这一点。
|
||||
|
||||
这个技巧只能纠正使用 “cd” 命令时的拼写错误。
|
||||
这个技巧只能纠正使用 `cd` 命令时的拼写错误。
|
||||
|
||||
比如说,你想使用命令切换到 `Downloads` 目录:
|
||||
|
||||
比如说,你想使用命令切换到 “Downloads” 目录:
|
||||
```
|
||||
$ cd Donloads
|
||||
bash: cd: Donloads: No such file or directory
|
||||
|
||||
```
|
||||
|
||||
哎呀!没有名称为 “Donloads” 的文件或目录。是的,正确的名称是 “Downloads”。上面的命令中缺少 “w”。
|
||||
哎呀!没有名称为 `Donloads` 的文件或目录。是的,正确的名称是 `Downloads`。上面的命令中缺少 `w`。
|
||||
|
||||
要解决此问题并在使用 `cd` 命令时自动更正错误,请编辑你的 `.bashrc` 文件:
|
||||
|
||||
要解决此问题并在使用 cd 命令时自动更正错误,请编辑你的 **.bashrc** 文件:
|
||||
```
|
||||
$ vi ~/.bashrc
|
||||
|
||||
```
|
||||
|
||||
最后添加以下行。
|
||||
|
||||
```
|
||||
[...]
|
||||
shopt -s cdspell
|
||||
|
||||
```
|
||||
|
||||
输入 **:wq** 保存并退出文件。
|
||||
输入 `:wq` 保存并退出文件。
|
||||
|
||||
最后,运行以下命令更新更改。
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
|
||||
```
|
||||
|
||||
现在,如果在使用 cd 命令时路径中存在任何拼写错误,它将自动更正并进入正确的目录。
|
||||
现在,如果在使用 `cd` 命令时路径中存在任何拼写错误,它将自动更正并进入正确的目录。
|
||||
|
||||
![][3]
|
||||
|
||||
正如你在上面的命令中看到的那样,我故意输错(“Donloads” 而不是 “Downloads”),但 Bash 自动检测到正确的目录名并 cd 进入它。
|
||||
正如你在上面的命令中看到的那样,我故意输错(`Donloads` 而不是 `Downloads`),但 Bash 自动检测到正确的目录名并 `cd` 进入它。
|
||||
|
||||
[**Fish**][4] 和**Zsh** shell 内置的此功能。所以,如果你使用的是它们,那么你不需要这个技巧。
|
||||
[Fish][4] 和 Zsh shell 内置了此功能。所以,如果你使用的是它们,那么你不需要这个技巧。
|
||||
|
||||
然而,这个技巧有一些局限性。它只适用于使用正确的大小写。在上面的例子中,如果你输入的是 “cd donloads” 而不是 “cd Donloads”,它将无法识别正确的路径。另外,如果路径中缺少多个字母,它也不起作用。
|
||||
然而,这个技巧有一些局限性。它只适用于使用正确的大小写。在上面的例子中,如果你输入的是 `cd donloads` 而不是 `cd Donloads`,它将无法识别正确的路径。另外,如果路径中缺少多个字母,它也不起作用。
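可以用下面的小片段验证 `cdspell` 选项是否已经打开(注意:按照 bash 手册,该选项只在交互式 shell 中实际生效,这里只演示如何开启和查询它):

```shell
# 开启并查询 cdspell 选项(仅对交互式 shell 实际生效)
shopt -s cdspell
shopt cdspell   # 应显示该选项为 on
```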
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
@ -83,7 +83,7 @@ via: https://www.ostechnix.com/easily-correct-misspelled-bash-commands-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
149
published/20180217 The List Of Useful Bash Keyboard Shortcuts.md
Normal file
@ -0,0 +1,149 @@
|
||||
有用的 Bash 快捷键清单
|
||||
======
|
||||

|
||||
|
||||
现如今,我在终端上花的时间更多,尝试在命令行完成比在图形界面更多的工作。随着时间推移,我学了许多 BASH 的技巧。这是一份每个 Linux 用户都应该知道的 BASH 快捷键,这样在终端做事就会快很多。我不会说这是一份完全的 BASH 快捷键清单,但是这足够让你的 BASH shell 操作比以前更快了。学习更快地使用 BASH 不仅节省了更多时间,也让你因为学到了有用的知识而感到自豪。那么,让我们开始吧。
|
||||
|
||||
### ALT 快捷键
|
||||
|
||||
1. `ALT+A` – 光标移动到行首。
|
||||
2. `ALT+B` – 光标移动到所在单词词首。
|
||||
3. `ALT+C` – 终止正在运行的命令/进程。与 `CTRL+C` 相同。
|
||||
4. `ALT+D` – 关闭空的终端(也就是它会关闭没有输入的终端)。也删除光标后的全部字符。
|
||||
5. `ALT+F` – 移动到光标所在单词词末。
|
||||
6. `ALT+T` – 交换最后两个单词。
|
||||
7. `ALT+U` – 将单词内光标后的字母转为大写。
|
||||
8. `ALT+L` – 将单词内光标后的字母转为小写。
|
||||
9. `ALT+R` – 撤销对从历史记录中带来的命令的修改。
|
||||
|
||||
正如你在上面的输出中所见,我使用反向搜索调出了一条命令,更改了该命令的最后一个字母,然后使用 `ALT+R` 撤销了更改。
|
||||
10. `ALT+.` (注意末尾的点号) – 使用上一条命令的最后一个单词。
|
||||
|
||||
如果你想要对多个命令进行相同的操作的话,你可以使用这个快捷键来获取前几个指令的最后一个单词。例如,我需要使用 `ls -r` 命令输出以文件名逆序排列的目录内容。同时,我也想使用 `uname -r` 命令来查看我的内核版本。在这两个命令中,相同的单词是 `-r` 。这就是需要 `ALT+.` 的地方。快捷键很顺手。首先运行 `ls -r` 来按文件名逆序输出,然后在其他命令,比如 `uname` 中使用最后一个单词 `-r` 。
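顺带一提,在脚本等非交互场景里,bash 的特殊变量 `$_`(上一条命令的最后一个参数)可以起到与 `ALT+.` 类似的作用,下面是一个小示例:

```shell
# "$_" 展开为上一条命令的最后一个参数,作用类似 ALT+.
mkdir -p /tmp/demo_dir
cd "$_"    # 等价于 cd /tmp/demo_dir
pwd
```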
|
||||
|
||||
### CTRL 快捷键
|
||||
|
||||
1. `CTRL+A` – 快速移动到行首。
|
||||
|
||||
我们假设你输入了像下面这样的命令。当你输入到末尾时,才发现行首的字符有一个输入错误:
|
||||
|
||||
```
|
||||
$ gind . -mtime -1 -type
|
||||
```
|
||||
|
||||
注意到了吗?上面的命令中我输入了 `gind` 而不是 `find` 。你可以通过一直按着左箭头键定位到第一个字母然后用 `g` 替换 `f` 。或者,仅通过 `CTRL+A` 或 `HOME` 键来立刻定位到行首,并替换拼错的单词。这将节省你几秒钟的时间。
|
||||
|
||||
2. `CTRL+B` – 光标向前移动一个字符。
|
||||
|
||||
这个快捷键可以使光标向前移动一个字符,即光标前的一个字符。或者,你可以使用左箭头键来向前移动一个字符。
|
||||
|
||||
3. `CTRL+C` – 停止当前运行的命令。
|
||||
|
||||
如果一个命令运行时间过久,或者你误运行了,你可以通过使用 `CTRL+C` 来强制停止或退出。
|
||||
|
||||
4. `CTRL+D` – 删除光标后的一个字符。
|
||||
|
||||
如果你的系统退格键无法工作的话,你可以使用 `CTRL+D` 来删除光标后的一个字符。这个快捷键也可以让你退出当前会话,和 exit 类似。
|
||||
|
||||
5. `CTRL+E` – 移动到行末。
|
||||
|
||||
当你修正了行首拼写错误的单词,按下 `CTRL+E` 来快速移动到行末。或者,你也可以使用你键盘上的 `END` 键。
|
||||
|
||||
6. `CTRL+F` – 光标向后移动一个字符。
|
||||
|
||||
如果你想将光标向后移动一个字符的话,按 `CTRL+F` 来替代右箭头键。
|
||||
|
||||
7. `CTRL+G` – 退出历史搜索模式,不运行命令。
|
||||
|
||||
正如你在上面的截图看到的,我进行了反向搜索,但没有执行命令,而是退出了历史搜索模式。
|
||||
|
||||
8. `CTRL+H` – 删除光标前的一个字符,和退格键相同。
|
||||
|
||||
9. `CTRL+J` – 和 ENTER/RETURN 键相同。
|
||||
|
||||
回车键不工作?没问题! `CTRL+J` 或 `CTRL+M` 可以用来替换回车键。
|
||||
|
||||
10. `CTRL+K` – 删除光标后的所有字符。
|
||||
|
||||
你不必一直按着删除键来删除光标后的字符。只要按 `CTRL+K` 就能删除光标后的所有字符。
|
||||
|
||||
11. `CTRL+L` – 清空屏幕并重新显示当前行。
|
||||
|
||||
别输入 `clear` 来清空屏幕了。只需按 `CTRL+L` 即可清空屏幕并重新显示当前行。
|
||||
|
||||
12. `CTRL+M` – 和 `CTRL+J` 或 RETURN 键相同。
|
||||
|
||||
13. `CTRL+N` – 在命令历史中显示下一行。
|
||||
|
||||
你也可以使用下箭头键。
|
||||
|
||||
14. `CTRL+O` – 运行你使用反向搜索(即 `CTRL+R`)时找到的命令。
|
||||
|
||||
15. `CTRL+P` – 显示命令历史的上一条命令。
|
||||
|
||||
你也可以使用上箭头键。
|
||||
|
||||
16. `CTRL+R` – 向后搜索历史记录(反向搜索)。
|
||||
|
||||
17. `CTRL+S` – 向前搜索历史记录。
|
||||
|
||||
18. `CTRL+T` – 交换最后两个字符。
|
||||
|
||||
这是我最喜欢的一个快捷键。假设你输入了 `sl` 而不是 `ls` 。没问题!这个快捷键会像下面这张截图一样交换字符。
|
||||
|
||||
![][2]
|
||||
|
||||
19. `CTRL+U` – 删除光标前的所有字符(从光标所在位置一直删除到行首)。
|
||||
|
||||
这个快捷键立刻删除前面的所有字符。
|
||||
|
||||
20. `CTRL+V` – 逐字显示输入的下一个字符。
|
||||
|
||||
21. `CTRL+W` – 删除光标前的一个单词。
|
||||
|
||||
不要和 `CTRL+U` 弄混了。`CTRL+W` 不会删除光标前的所有内容,而只会删除一个单词。
|
||||
|
||||
![][3]
|
||||
|
||||
22. `CTRL+X` – 列出当前单词可能的文件名补全。
|
||||
|
||||
23. `CTRL+XX` – 移动到行首位置(再移动回来)。
|
||||
|
||||
24. `CTRL+Y` – 恢复你上一个删除或剪切的条目。
|
||||
|
||||
记得吗,我们在第 21 条快捷键中用 `CTRL+W` 删除了单词 `-al`。你可以使用 `CTRL+Y` 立刻恢复。
|
||||
|
||||
![][4]
|
||||
|
||||
看见了吧?我没有输入 `-al`。取而代之,我按了 `CTRL+Y` 来恢复它。
|
||||
|
||||
25. `CTRL+Z` – 停止当前的命令。
|
||||
|
||||
你也许很了解这个快捷键。它终止了当前运行的命令。你可以在前台使用 `fg` 或在后台使用 `bg` 来恢复它。
|
||||
|
||||
26. `CTRL+[` – 和 `ESC` 键等同。
|
||||
|
||||
### 杂项
|
||||
|
||||
1. `!!` – 重复上一个命令。
|
||||
|
||||
2. `ESC+t` – 交换最后两个单词。
|
||||
|
||||
这就是我所能想到的了。将来我遇到 Bash 快捷键时我会持续添加的。如果你觉得文章有错的话,请在下方的评论区留言。我会尽快更新。
|
||||
|
||||
Cheers!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/list-useful-bash-keyboard-shortcuts/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[heart4lor](https://github.com/heart4lor)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLT-1.gif
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLW-1.gif
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLY-1.gif
|
@ -1,3 +1,5 @@
|
||||
lontow Translating
|
||||
|
||||
Evolutional Steps of Computer Systems
|
||||
======
|
||||
Throughout the history of the modern computer, there were several evolutional steps related to the way we interact with the system. I tend to categorize those steps as following:
|
||||
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
An old DOS BBS in a Docker container
|
||||
======
|
||||
Awhile back, I wrote about [my Debian Docker base images][1]. I decided to extend this concept a bit further: to running DOS applications in Docker.
|
||||
|
98
sources/talk/20180214 11 awesome vi tips and tricks.md
Normal file
@ -0,0 +1,98 @@
|
||||
11 awesome vi tips and tricks
|
||||
======
|
||||
|
||||

|
||||
|
||||
The [vi editor][1] is one of the most popular text editors on Unix and Unix-like systems, such as Linux. Whether you're new to vi or just looking for a refresher, these 11 tips will enhance how you use it.
|
||||
|
||||
### Editing
|
||||
|
||||
Editing a long script can be tedious, especially when you need to edit a line so far down that it would take hours to scroll to it. Here's a faster way.
|
||||
|
||||
1. The command `:set number` numbers each line down the left side.
|
||||
|
||||

|
||||
|
||||
You can directly reach line number 26 by opening the file and entering this command on the CLI: `vi +26 sample.txt`. To edit line 26 (for example), the command `:26` will take you directly to it.
|
||||
|
||||

|
||||
|
||||
### Fast navigation
|
||||
|
||||
2. `i` changes your mode from "command" to "insert" and starts inserting text at the current cursor position.
|
||||
3. `a` does the same, except it starts just after the current cursor position.
|
||||
4. `o` starts the cursor position from the line below the current cursor position.
|
||||
|
||||
|
||||
|
||||
### Delete
|
||||
|
||||
If you notice an error or typo, being able to make a quick fix is important. Good thing vi has it all figured out.
|
||||
|
||||
Understanding vi's delete function so you don't accidentally press a key and permanently remove a line, paragraph, or more, is critical.
|
||||
|
||||
5. `x` deletes the character under the cursor.
|
||||
6. `dd` deletes the current line. (Yes, the whole line!)
|
||||
|
||||
|
||||
|
||||
Here's the scary part: `30dd` would delete 30 lines starting with the current line! Proceed with caution when using this command.
|
||||
|
||||
### Search
|
||||
|
||||
You can search for keywords from the "command" mode rather than manually navigating and looking for a specific word in a plethora of text.
|
||||
|
||||
7. `:/<keyword>` searches for the word mentioned in the `< >` space and takes your cursor to the first match.
|
||||
8. To navigate to the next instance of that word, type `n`, and keep pressing it until you get to the match you're looking for.
|
||||
|
||||
|
||||
|
||||
For example, in the image below I searched for `ssh`, and vi highlighted the beginning of the first result.
|
||||
|
||||

|
||||
|
||||
After I pressed `n`, vi highlighted the next instance.
|
||||
|
||||

|
||||
|
||||
### Save and exit
|
||||
|
||||
Developers (and others) will probably find this next command useful.
|
||||
|
||||
9. `:x` saves your work and exits vi.
|
||||
|
||||

|
||||
|
||||
10. If you think every nanosecond is worth saving, here's a faster way to shift to terminal mode in vi. Instead of pressing `Shift+:` on the keyboard, you can press `Shift+q` (or Q, in caps) to access [Ex mode][2], but this doesn't really make any difference if you just want to save and quit by typing `x` (as shown above).
|
||||
|
||||
|
||||
|
||||
### Substitution
|
||||
|
||||
Here is a neat trick if you want to substitute every occurrence of one word with another. For example, if you want to substitute "desktop" with "laptop" in a large file, it would be monotonous and waste time to search for each occurrence of "desktop," delete it, and type "laptop."
|
||||
|
||||
11. The command `:%s/desktop/laptop/g` would replace each occurrence of "desktop" with "laptop" throughout the file; it works just like the Linux `sed` command.
|
||||
|
||||
|
||||
|
||||
In this example, I replaced "root" with "user":
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
These tricks should help anyone get started using vi. Are there other neat tips I missed? Share them in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/top-11-vi-tips-and-tricks
|
||||
|
||||
作者:[Archit Modi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/architmodi
|
||||
[1]:http://ex-vi.sourceforge.net/
|
||||
[2]:https://en.wikibooks.org/wiki/Learning_the_vi_Editor/Vim/Modes#Ex-mode
|
@ -1,69 +0,0 @@
|
||||
How Linux became my job translation by ranchong
|
||||
======
|
||||
|
||||

|
||||
|
||||
I've been using open source since what seems like prehistoric times. Back then, there was nothing called social media. There was no Firefox, no Google Chrome (not even a Google), no Amazon, barely an internet. In fact, the hot topic of the day was the new Linux 2.0 kernel. The big technical challenges in those days? Well, the [ELF format][1] was replacing the old [a.out][2] format in binary [Linux][3] distributions, and the upgrade could be tricky on some installs of Linux.
|
||||
|
||||
How I transformed a personal interest in this fledgling young operating system to a [career][4] in open source is an interesting story.
|
||||
|
||||
### Linux for fun, not profit
|
||||
|
||||
I graduated from college in 1994 when computer labs were small networks of UNIX systems; if you were lucky they connected to this new thing called the internet. Hard to believe, I know! The "web" (as we knew it) was mostly handwritten HTML, and the `cgi-bin` directory was a new playground for enabling dynamic web interactions. Many of us were excited about these new technologies, and we taught ourselves shell scripting, [Perl][5], HTML, and all the terse UNIX commands that we had never seen on our parents' Windows 3.1 PCs.
|
||||
|
||||
After graduation, I joined IBM, working on a PC operating system with no access to UNIX systems, and soon my university cut off my remote access to the engineering lab. How was I going to keep using `vi` and `ls` and reading my email via [Pine][6]? I kept hearing about open source Linux, but I hadn't had time to look into it.
|
||||
|
||||
In 1996, I was about to begin a master's degree program at the University of Texas at Austin. I knew it would involve programming and writing papers, and who knows what else, and I didn't want to use proprietary editors or compilers or word processors. I wanted my UNIX experience!
|
||||
|
||||
So I took an old PC, found a Linux distribution—Slackware 3.0—and downloaded it, diskette after diskette, in my IBM office. Let's just say I've never looked back after that first install of Linux. In those early days, I learned a lot about makefiles and the `make` system, about building software, and about patches and source code control. Even though I started working with Linux for fun and personal knowledge, it ended up transforming my career.
|
||||
|
||||
While I was a happy Linux user, I thought open source development was still other people's work; I imagined an online mailing list of mystical [UNIX][7] geeks. I appreciated things like the Linux HOWTO project for helping with the bumps and bruises I acquired trying to add packages, upgrade my Linux distribution, or install device drivers for new hardware or a new PC. But working with source code and making modifications or submitting them upstream … that was for other people, not me.
|
||||
|
||||
### How Linux became my job
|
||||
|
||||
In 1999, I finally had a reason to combine my personal interest in Linux with my day job at IBM. I took on a skunkworks project to port the IBM Java Virtual Machine (JVM) to Linux. To ensure we were legally safe, IBM purchased a shrink-wrapped, boxed copy of Red Hat Linux 6.1 to do this work. Working with the IBM Tokyo Research lab, which wrote our JVM just-in-time (JIT) compiler, and both the AIX JVM source code and the Windows & OS/2 JVM source code reference, we had a working JVM on Linux within a few weeks, beating the announcement of Sun's official Java on Linux port by several months. Now that I had done development on the Linux platform, I was sold on it.
|
||||
|
||||
By 2000, IBM's use of Linux was growing rapidly. Due to the vision and persistence of [Dan Frye][8], IBM made a "[billion dollar bet][9]" on Linux, creating the Linux Technology Center (LTC) in 1999. Inside the LTC were kernel developers, open source contributors, device driver authors for IBM hardware, and all manner of Linux-focused open source work. Instead of remaining tangentially connected to the LTC, I wanted to be part of this exciting new area at IBM.
|
||||
|
||||
From 2003 to 2013 I was deeply involved in IBM's Linux strategy and use of Linux distributions, culminating with having a team that became the clearinghouse for about 60 different product uses of Linux across every division of IBM. I was involved in acquisitions where it was an expectation that every appliance, management system, and virtual or physical appliance-based middleware ran Linux. I became well-versed in the construction of Linux distributions, including packaging, selecting upstream sources, developing distro-maintained patch sets, doing customizations, and offering support through our distro partners.
|
||||
|
||||
Due to our downstream providers, I rarely got to submit patches upstream, but I got to contribute by interacting with [Ulrich Drepper][10] (including getting a small patch into glibc) and working on changes to the [timezone database][11], which Arthur David Olson accepted while he was maintaining it on the NIH FTP site. But I still hadn't worked as a regular contributor on an open source project as part of my work. It was time for that to change.
|
||||
|
||||
In late 2013, I joined IBM's cloud organization in the open source group and was looking for an upstream community in which to get involved. Would it be our work on Cloud Foundry, or would I join IBM's large group of contributors to OpenStack? It was neither, because in 2014 Docker took the world by storm, and IBM asked a few of us to get involved with this hot new technology. I experienced many firsts in the next few months: using GitHub, [learning a lot more about Git][12] than just `git clone`, having pull requests reviewed, writing in Go, and more. Over the next year, I became a maintainer in the Docker engine project, working with Docker on creating the next version of the image specification (to support multiple architectures), and attending and speaking at conferences about container technology.
|
||||
|
||||
### Where I am today
|
||||
|
||||
Fast forward a few years, and I've become a maintainer of open source projects, including the Cloud Native Computing Foundation (CNCF) [containerd][13] project. I've also created projects (such as [manifest-tool][14] and [bucketbench][15]). I've gotten involved in open source governance via the Open Containers Initiative (OCI), where I'm now a member of the Technical Oversight Board, and the Moby Project, where I'm a member of the Technical Steering Committee. And I've had the pleasure of speaking about open source at conferences around the world, to meetup groups, and internally at IBM.
|
||||
|
||||
Open source is now part of the fiber of my career at IBM. The connections I've made to engineers, developers, and leaders across the industry may rival the number of people I know and work with inside IBM. While open source has many of the same challenges as proprietary development teams and vendor partnerships have, in my experience the relationships and connections with people around the globe in open source far outweigh the difficulties. The sharpening that occurs with differing opinions, perspectives, and experiences can generate a culture of learning and improvement for both the software and the people involved.
|
||||
|
||||
This journey—from my first use of Linux to becoming a leader, contributor, and maintainer in today's cloud-native open source world—has been extremely rewarding. I'm looking forward to many more years of open source collaboration and interactions with people around the globe.
|
||||
|
||||

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/my-open-source-story-phil-estes

Author: [Phil Estes][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/estesp
[1]:https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
[2]:https://en.wikipedia.org/wiki/A.out
[3]:https://opensource.com/node/19796
[4]:https://opensource.com/node/25456
[5]:https://opensource.com/node/35141
[6]:https://opensource.com/article/17/10/alpine-email-client
[7]:https://opensource.com/node/22781
[8]:https://www.linkedin.com/in/danieldfrye/
[9]:http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/linux/
[10]:https://www.linkedin.com/in/ulrichdrepper/
[11]:https://en.wikipedia.org/wiki/Tz_database
[12]:https://opensource.com/article/18/1/step-step-guide-git
[13]:https://github.com/containerd/containerd
[14]:https://github.com/estesp/manifest-tool
[15]:https://github.com/estesp/bucketbench
42
sources/talk/20180221 3 warning flags of DevOps metrics.md
Normal file
@ -0,0 +1,42 @@

3 warning flags of DevOps metrics
======



Metrics. Measurements. Data. Monitoring. Alerting. These are all big topics for DevOps and for cloud-native infrastructure and application development more broadly. In fact, ACM Queue, a magazine published by the Association for Computing Machinery, recently devoted an [entire issue][1] to the topic.

I've argued before that we conflate a lot of things under the "metrics" term, from key performance indicators to critical failure alerts to data that may be vaguely useful someday for something or other. But that's a topic for another day. What I want to discuss here is how metrics affect behavior.

In 2008, Dan Ariely published [Predictably Irrational][2], one of a number of books written around that time that introduced behavioral psychology and behavioral economics to the general public. One memorable quote from that book: "Human beings adjust behavior based on the metrics they're held against. Anything you measure will impel a person to optimize his score on that metric. What you measure is what you'll get. Period."

This shouldn't be surprising. It's a finding that's been repeatedly confirmed by research. It should also be familiar to just about anyone with business experience. It's certainly not news to anyone in sales management, for example. Base sales reps' (or their managers'!) bonuses solely on revenue, and they'll discount whatever it takes to maximize revenue, even if it puts margin in the toilet. Conversely, want the sales force to push a new product line—which will probably take extra effort—but skip the [spiffs][3]? Probably not happening.

And lest you think I'm unfairly picking on sales, this behavior is pervasive, all the way up to the CEO, as Ariely describes in [a 2010 Harvard Business Review article][4]. "CEOs care about stock value because that's how we measure them. If we want to change what they care about, we should change what we measure," writes Ariely.

Think developers and operations folks are immune from such behaviors? Think again. Let's consider some problematic measurements. They're not all bad or wrong, but if you rely on them too heavily, warning flags should go up.

### Three warning signs for DevOps metrics

First, there are the quantity metrics. Lines of code or bugs fixed are perhaps self-evidently absurd. But there are also the deployments per week or per month that are so widely quoted to illustrate DevOps velocity relative to more traditional development and deployment practices. Speed is good. It's one of the reasons you're probably doing DevOps—but don't reward people on it excessively relative to quality and other measures.

Second, it's obvious that you want to reward individuals who do their work quickly and well. Yes. But. Whether it's your local pro sports team or some project team you've been on, you can probably name someone who was a real talent but was just so toxic and such a distraction for everyone else that they were a net negative for the team. Moral: Don't provide incentives that solely encourage individual behaviors. You may also want to put in place programs, such as peer rewards, that explicitly value collaboration. [As Red Hat's Jen Krieger told me][5] in a podcast last year: "Having those automated pots of awards, or some sort of system that's tracked for that, can only help teams feel a little more cooperative with one another as in, 'Hey, we're all working together to get something done.'"

The third red flag area is incentives that don't actually incent, because neither the individual nor the team has a meaningful ability to influence the outcome. It's often a good thing when DevOps metrics connect to business goals and outcomes. For example, customer ticket volume relates to perceived shortcomings in applications and infrastructure. And it's also a reasonable proxy for overall customer satisfaction, which certainly should be of interest to the executive suite. But the best reward systems to drive DevOps behaviors should be tied to specific individual and team actions, as opposed to company success generally.

You've probably noticed a common theme. That theme is balance. Velocity is good, but so is quality. Individual achievement is good, but not when it damages the effectiveness of the team. The overall success of the business is certainly important, but the best reward systems also tie back to actions and behaviors within development and operations.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/three-warning-flags-devops-metrics

Author: [Gordon Haff][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/ghaff
[1]:https://queue.acm.org/issuedetail.cfm?issue=3178368
[2]:https://en.wikipedia.org/wiki/Predictably_Irrational
[3]:https://en.wikipedia.org/wiki/Spiff
[4]:https://hbr.org/2010/06/column-you-are-what-you-measure
[5]:http://bitmason.blogspot.com/2015/09/podcast-making-devops-succeed-with-red.html
@ -0,0 +1,53 @@

Beyond metrics: How to operate as a team on today's open source projects
======



How do we traditionally think about community health and vibrancy?

We might quickly zero in on metrics related primarily to code contributions: How many companies are contributing? How many individuals? How many lines of code? Collectively, these speak to both the level of development activity and the breadth of the contributor base. The former speaks to whether the project continues to be enhanced and expanded; the latter to whether it has attracted a diverse group of developers or is controlled primarily by a single organization.

The [Linux Kernel Development Report][1] tracks these kinds of statistics and, unsurprisingly, it appears extremely healthy on all counts.

However, while development cadence and code contributions are still clearly important, other aspects of open source communities are also coming to the forefront. This is in part because, increasingly, open source is about more than a development model. It’s also about making it easier for users and other interested parties to interact in ways that go beyond being passive recipients of code. Of course, there have long been user groups. But open source streamlines the involvement of users, just as it does software development.

This was the topic of my discussion with Diane Mueller, the director of community development for OpenShift.

When OpenShift became a container platform based in part on Kubernetes in version 3, Mueller saw a need to broaden the community beyond the core code contributors. In part, this was because OpenShift was increasingly touching a broad range of open source projects and organizations, such as those associated with the [Open Container Initiative (OCI)][2] and the [Cloud Native Computing Foundation (CNCF)][3]. In addition to users, cloud service providers who were offering managed services also wanted ways to get involved in the project.

“What we tried to do was open up our minds about what the community constituted,” Mueller explained, adding, “We called it the [Commons][4] because Red Hat's near Boston, and I'm from that area. Boston Common is a shared resource, the grass where you bring your cows to graze, and you have your farmer's hipster market or whatever it is today that they do on Boston Common.”

This new model, she said, was really “a new ecosystem that incorporated all of those different parties and different perspectives. We used a lot of virtual tools, a lot of new tools like Slack. We stepped up beyond the mailing list. We do weekly briefings. We went very virtual because, one, I don't scale. The Evangelist and Dev Advocate team didn't scale. We need to be able to get all that word out there, all this new information out there, so we went very virtual. We worked with a lot of people to create online learning stuff, a lot of really good tooling, and we had a lot of community help and support in doing that.”

![diane mueller open shift][6]

Diane Mueller, director of community development at OpenShift, discusses the role of strong user communities in open source software development. (Credit: Gordon Haff, CC BY-SA 4.0)

However, one interesting aspect of the Commons model is that it isn’t just virtual. We see the same pattern elsewhere in many successful open source communities, such as the Linux kernel. Lots of day-to-day activity happens on mailing lists, IRC, and other collaboration tools. But this doesn’t eliminate the benefits of face-to-face time, which allows for richer and more informal discussions and exchanges.

This interview with Mueller took place in London the day after the [OpenShift Commons Gathering][7]. Gatherings are full-day events, held a number of times a year, which are typically attended by a few hundred people. Much of the focus is on users and user stories. In fact, Mueller notes, “Here in London, one of the Commons members, Secnix, was really the major reason we actually hosted the gathering here. Justin Cook did an amazing job organizing the venue and helping us pull this whole thing together in less than 50 days. A lot of the community gatherings and things are driven by the Commons members.”

Mueller wants to focus on users more and more. “The OpenShift Commons gathering at [Red Hat] Summit will be almost entirely case studies,” she noted. “Users talking about what's in their stack. What lessons did they learn? What are the best practices? Sharing those ideas that they've done just like we did here in London.”

Although the Commons model grew out of some specific OpenShift needs at the time it was created, Mueller believes it’s an approach that can be applied more broadly. “I think if you abstract what we've done, you can apply it to any existing open source community,” she said. “The foundations still, in some ways, play a nice role in giving you some structure around governance, and helping incubate stuff, and helping create standards. I really love what OCI is doing to create standards around containers. There's still a role for that in some ways. I think the lesson that we can learn from the experience and we can apply to other projects is to open up the community so that it includes feedback mechanisms and gives the podium away.”

The evolution of the community model through approaches like the OpenShift Commons mirrors the healthy evolution of open source more broadly. Certainly, some users have been involved in the development of open source software for a long time. What’s striking today is how widespread and pervasive direct user participation has become. Sure, open source remains central to much of modern software development. But it’s also becoming increasingly central to how users learn from each other and work together with their partners and developers.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/how-communities-are-evolving

Author: [Gordon Haff][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/ghaff
[1]:https://www.linuxfoundation.org/2017-linux-kernel-report-landing-page/
[2]:https://www.opencontainers.org/
[3]:https://www.cncf.io/
[4]:https://commons.openshift.org/
[5]:/file/388586
[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/39369010275_7df2c3c260_z.jpg?itok=gIhnBl6F (diane mueller open shift)
[7]:https://www.meetup.com/London-OpenShift-User-Group/events/246498196/
75
sources/talk/20180303 4 meetup ideas- Make your data open.md
Normal file
@ -0,0 +1,75 @@

4 meetup ideas: Make your data open
======



[Open Data Day][1] (ODD) is an annual, worldwide celebration of open data and an opportunity to show the importance of open data in improving our communities.

Not many individuals and organizations know about the value of open data or why they might want to liberate their data from the restrictions of copyright, patents, and more. They also don't know how to make their data open—that is, publicly available for anyone to use, share, or republish with modifications.

This year ODD falls on Saturday, March 3, and there are [events planned][2] on every continent except Antarctica. While it might be too late to organize an event for this year, it's never too early to plan for next year. Also, since open data is important every day of the year, there's no reason to wait until ODD 2019 to host an event in your community.

There are many ways to build local awareness of open data. Here are four ideas to help you plan an excellent open data event any time of year.

### 1. Organize an entry-level event

You can host an educational event at a local library, college, or another public venue about how open data can be used and why it matters for all of us. If possible, invite a [local speaker][3] or have someone present remotely. You could also have a roundtable discussion with several knowledgeable people in your community.

Consider offering resources such as the [Open Data Handbook][4], which not only provides a guide to the philosophy and rationale behind adopting open data, but also offers case studies, use cases, how-to guides, and other material to support making data open.

### 2. Organize an advanced-level event

For a deeper experience, organize a hands-on training event for open data newbies. Ideas for good topics include [training teachers on open science][5], [creating audiovisual expressions from open data][6], and using [open government data][7] in meaningful ways.

The options are endless. To choose a topic, think about what is locally relevant, identify issues that open data might be able to address, and find people who can do the training.

### 3. Organize a hackathon

Open data hackathons can be a great way to bring open data advocates, developers, and enthusiasts together under one roof. Hackathons are more than just training sessions, though; the idea is to build prototypes or solve real-life challenges that are tied to open data. In a hackathon, people in various groups can contribute to the entire assembly line in multiple ways, such as identifying issues by working collaboratively through [Etherpad][8] or creating focus groups.

Once the hackathon is over, make sure to upload all the useful data that is produced to the internet with an open license.

### 4. Release or relicense data as open

Open data is about making meaningful data publicly available under open licenses while protecting any data that might put people's private information at risk. (Learn [how to protect private data][9].) Try to find existing, interesting, and useful data that is privately owned by individuals or organizations and negotiate with them to relicense or release the data online under any of the [recommended open data licenses][10]. The widely popular [Creative Commons licenses][11] (particularly the CC0 license and the 4.0 licenses) are quite compatible with relicensing public data. (See this FAQ from Creative Commons for more information on [openly licensing data][12].)

Open data can be published on multiple platforms—your website, [GitHub][13], [GitLab][14], [DataHub.io][15], or anywhere else that supports open standards.

### Tips for event success

No matter what type of event you decide to host, here are some general planning tips to improve your chances of success.

  * Find a venue that's accessible to the people you want to reach, such as a library, a school, or a community center.
  * Create a curriculum that will engage the participants.
  * Invite your target audience—make sure to distribute information through social media, community events calendars, Meetup, and the like.

Have you attended or hosted a successful open data event? If so, please share your ideas in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/celebrate-open-data-day

Author: [Subhashish Panigrahi][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/psubhashish
[1]:http://www.opendataday.org/
[2]:http://opendataday.org/#map
[3]:https://openspeakers.org/
[4]:http://opendatahandbook.org/
[5]:https://docs.google.com/forms/d/1BRsyzlbn8KEMP8OkvjyttGgIKuTSgETZW9NHRtCbT1s/viewform?edit_requested=true
[6]:http://dattack.lv/en/
[7]:https://www.eventbrite.co.nz/e/open-data-open-potential-event-friday-2-march-2018-tickets-42733708673
[8]:http://etherpad.org/
[9]:https://ssd.eff.org/en/module/keeping-your-data-safe
[10]:https://opendatacommons.org/licenses/
[11]:https://creativecommons.org/share-your-work/licensing-types-examples/
[12]:https://wiki.creativecommons.org/wiki/Data#Frequently_asked_questions_about_data_and_CC_licenses
[13]:https://github.com/MartinBriza/MediaWriter
[14]:https://about.gitlab.com/
[15]:https://datahub.io/
@ -0,0 +1,130 @@

What’s next in IT automation: 6 trends to watch
======



We’ve recently covered the [factors fueling IT automation][1], the [current trends][2] to watch as adoption grows, and [helpful tips][3] for those organizations just beginning to automate certain processes.

Oh, and we also shared expert advice on [how to make the case for automation][4] in your company, as well as [keys for long-term success][5].

Now, there’s just one question: What’s next? We asked a range of experts to share a peek into the not-so-distant future of [automation][6]. Here are six trends they advise IT leaders to monitor closely.

### 1. Machine learning matures

For all of the buzz around [machine learning][7] (and the overlapping phrase “self-learning systems”), it’s still very early days for most organizations in terms of actual implementations. Expect that to change, and for machine learning to play a significant role in the next waves of IT automation.

Mehul Amin, director of engineering for [Advanced Systems Concepts, Inc.][8], points to machine learning as one of the next key growth areas for IT automation.

“With the data that is developed, automation software can make decisions that otherwise might be the responsibility of the developer,” Amin says. “For example, the developer builds what needs to be executed, but identifying the best system to execute the processes might be [done] by software using analytics from within the system.”

That extends elsewhere in this same hypothetical system; Amin notes that machine learning can enable automated systems to provision additional resources when necessary to meet timelines or SLAs, and to retire those resources when they’re no longer needed, among other possibilities.

Amin is certainly not alone.

“IT automation is moving towards self-learning,” says Kiran Chitturi, CTO architect at [Sungard Availability Services][9]. “Systems will be able to test and monitor themselves, enhancing business processes and software delivery.”

Chitturi points to automated testing as an example; test scripts are already in widespread adoption, but soon those automated testing processes may be more likely to learn as they go, developing, for example, wider recognition of how new code or code changes will impact production environments.

### 2. Artificial intelligence spawns automation opportunities

The same principles hold true for the related (but separate) field of [artificial intelligence][10]. Depending on your definition of AI, it seems likely that machine learning will have the more significant IT impact in the near term (and we’re likely to see a lot of overlapping definitions and understandings of the two fields). Assume that emerging AI technologies will spawn new automation opportunities, too.

“The integration of artificial intelligence (AI) and machine learning capabilities is widely perceived as critical for business success in the coming years,” says Patrick Hubbard, head geek at [SolarWinds][11].

### 3. That doesn’t mean people are obsolete

Let’s try to calm those among us who are now hyperventilating into a paper bag: The first two trends don’t necessarily mean we’re all going to be out of a job.

It is likely to mean changes to various roles – and the creation of [new roles][12] altogether.

But in the foreseeable future, at least, you don’t need to practice bowing to your robot overlords.

“A machine can only consider the environment variables that it is given – it can’t choose to include new variables; only a human can do this today,” Hubbard explains. “However, for IT professionals this will necessitate the cultivation of AI- and automation-era skills such as programming, coding, a basic understanding of the algorithms that govern AI and machine learning functionality, and a strong security posture in the face of more sophisticated cyberattacks.”

Hubbard shares the example of new tools or capabilities such as AI-enabled security software or machine-learning applications that remotely spot maintenance needs in an oil pipeline. Both might improve efficiency and effectiveness; neither automatically replaces the people necessary for information security or pipeline maintenance.

“Many new functionalities still require human oversight,” Hubbard says. “In order for a machine to determine if something ‘predictive’ could become ‘prescriptive,’ for example, human management is needed.”

The same principle holds true even if you set machine learning and AI aside for a moment and look at IT automation more generally, especially in the software development lifecycle.

Matthew Oswalt, lead architect for automation at [Juniper Networks][13], points out that the fundamental reason IT automation is growing is that it is creating immediate value by reducing the amount of manual effort required to operate infrastructure.

“It also sets the stage for treating their operations workflows as code rather than easily outdated documentation or tribal knowledge,” Oswalt explains. “Operations staff are still required to play an active role in how [automation] tooling responds to events. The next phase of adopting automation is to put in place a system that is able to recognize interesting events that take place across the IT spectrum and respond in an autonomous fashion. Rather than responding to an infrastructure issue at 3 a.m. themselves, operations engineers can use event-driven automation to define their workflows ahead of time, as code. They can rely on this system to respond in the same way they would, at any time.”

### 4. Automation anxiety will decrease

Hubbard of SolarWinds notes that the term “automation” itself tends to spawn a lot of uncertainty and concern, not just in IT but across professional disciplines, and he says that concern is legitimate. But some of the attendant fears may be overblown, and even perpetuated by the tech industry itself. Reality might actually be the calming force on this front: When the actual implementation and practice of automation helps people realize trend #3 on this list, then we’ll see trend #4 occur.

“This year we’ll likely see a decrease in automation anxiety and more organizations begin to embrace AI and machine learning as a way to augment their existing human resources,” Hubbard says. “Automation has historically created room for more jobs by lowering the cost and time required to accomplish smaller tasks and refocusing the workforce on things that cannot be automated and require human labor. The same will be true of AI and machine learning.”

Automation will also decrease some anxiety around the topic most likely to increase an IT leader’s blood pressure: security. As Matt Smith, chief architect at [Red Hat][14], recently [noted][15], automation will increasingly help IT groups reduce the security risks associated with maintenance tasks.

His advice: “Start by documenting and automating the interactions between IT assets during maintenance activities. By relying on automation, not only will you eliminate tasks that historically required much manual effort and surgical skill, you will also be reducing the risks of human error and demonstrating what’s possible when your IT organization embraces change and new methods of work. Ultimately, this will reduce resistance to promptly applying security patches. And it could also help keep your business out of the headlines during the next major security event.”

**[ Read the full article: [12 bad enterprise security habits to break][16]. ]**

### 5. Continued evolution of scripting and automation tools

Many organizations see the first steps toward increasing automation – usually in the form of scripting or automation tools (sometimes referred to as configuration management tools) – as "early days" work.

But views of those tools are evolving as the use of various automation technologies grows.

“There are many processes in the data center environment that are repetitive and subject to human error, and technologies such as [Ansible][17] help to ameliorate those issues,” says Mark Abolafia, chief operating officer at [DataVision][18]. “With Ansible, one can write a specific playbook for a set of actions and input different variables such as addresses, etc., to automate long chains of process that were previously subject to human touch and longer lead times.”
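
As a minimal sketch of the kind of playbook Abolafia describes: the host group `webservers`, the choice of nginx, and the `http_port` variable below are all hypothetical illustrations, not details from his quote.

```yaml
# Hypothetical playbook: bring a group of hosts to a desired web-server state.
# Variables such as http_port can be overridden per run, for example:
#   ansible-playbook site.yml --extra-vars "http_port=8080"
- name: Configure web servers
  hosts: webservers
  become: true
  vars:
    http_port: 80
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present
    - name: Ensure nginx is enabled and running
      service:
        name: nginx
        state: started
        enabled: true
```

Because each task declares a desired state rather than a sequence of keystrokes, the same playbook can be re-run safely across many hosts, which is where the reduction in human touch and lead time comes from.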

**[ Want to learn more about this aspect of Ansible? Read the related article: [Tips for success when getting started with Ansible][19]. ]**

Another factor: The tools themselves will continue to become more advanced.

“With advanced IT automation tools, developers will be able to build and automate workflows in less time, reducing error-prone coding,” says Amin of ASCI. “These tools include pre-built, pre-tested drag-and-drop integrations, API jobs, the rich use of variables, reference functionality, and object revision history.”

### 6. Automation opens new metrics opportunities

As we’ve said previously in this space, automation isn’t IT snake oil. It won’t fix busted processes or otherwise serve as some catch-all elixir for what ails your organization. That’s true on an ongoing basis, too: Automation doesn’t eliminate the need to measure performance.

**[ See our related article [DevOps metrics: Are you measuring what matters?][20] ]**

In fact, automation should open up new opportunities here.

“As more and more development activities – source control, DevOps pipelines, work item tracking – move to API-driven platforms, the opportunity and temptation to stitch these pieces of raw data together to paint the picture of your organization's efficiency increases,” says Josh Collins, VP of architecture at [Janeiro Digital][21].

Collins thinks of this as a possible new “development organization metrics-in-a-box.” But don’t mistake that to mean machines and algorithms can suddenly measure everything IT does.

“Whether measuring individual resources or the team in aggregate, these metrics can be powerful – but should be balanced with a heavy dose of context,” Collins says. “Use this data for high-level trends and to affirm qualitative observations – not to clinically grade your team.”

**Want more wisdom like this, IT leaders? [Sign up for our weekly email newsletter][22].**

--------------------------------------------------------------------------------

via: https://enterprisersproject.com/article/2018/3/what-s-next-it-automation-6-trends-watch

Author: [Kevin Casey][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/article/2017/12/5-factors-fueling-automation-it-now
[2]:https://enterprisersproject.com/article/2017/12/4-trends-watch-it-automation-expands
[3]:https://enterprisersproject.com/article/2018/1/getting-started-automation-6-tips
[4]:https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
[5]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success
[6]:https://enterprisersproject.com/tags/automation
[7]:https://enterprisersproject.com/article/2018/2/how-spot-machine-learning-opportunity
[8]:https://www.advsyscon.com/en-us/
[9]:https://www.sungardas.com/en/
[10]:https://enterprisersproject.com/tags/artificial-intelligence
[11]:https://www.solarwinds.com/
[12]:https://enterprisersproject.com/article/2017/12/8-emerging-ai-jobs-it-pros
[13]:https://www.juniper.net/
[14]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
[15]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break
[16]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break?sc_cid=70160000000h0aXAAQ
[17]:https://opensource.com/tags/ansible
[18]:https://datavision.com/
[19]:https://opensource.com/article/18/2/tips-success-when-getting-started-ansible?intcmp=701f2000000tjyaAAA
[20]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters?sc_cid=70160000000h0aXAAQ
[21]:https://www.janeirodigital.com/
[22]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ
@ -0,0 +1,59 @@
Try, learn, modify: The new IT leader's code
======

Just about every day, new technological developments threaten to destabilize even the most intricate and best-laid business plans. Organizations often find themselves scrambling to adapt to new conditions, and that's created a shift in how they plan for the future.

According to a 2017 [study][1] by CompTIA, only 34% of companies are currently developing IT architecture plans that extend beyond 12 months. One reason for that shift away from a longer-term plan is that business contexts are changing so quickly that planning any further into the future is nearly impossible. "If your company is trying to set a plan that will last five to 10 years down the road," [CIO.com writes][1], "forget it."

I've heard similar statements from countless customers and partners around the world. Technological innovations are occurring at an unprecedented pace.

The result is that long-term planning is dead. We need to be thinking differently about the way we run our organizations if we're going to succeed in this new world.

### How planning died

As I wrote in The Open Organization, traditionally-run organizations are optimized for industrial economies. They embrace hierarchical structures and rigidly prescribed processes as they work to achieve positional competitive advantage. To be successful, they have to define the strategic positions they want to achieve. Then they have to formulate and dictate plans for getting there, and execute on those plans in the most efficient ways possible—by coordinating activities and driving compliance.

Management's role is to optimize this process: plan, prescribe, execute. It consists of saying: Let's think of a competitively advantaged position; let's configure our organization to ultimately get there; and then let's drive execution by making sure all aspects of the organization comply. It's what I'll call "mechanical management," and it's a brilliant solution for a different time.

In today's volatile and uncertain world, our ability to predict and define strategic positions is diminishing—because the pace of change, the rate of introduction of new variables, is accelerating. Classic, long-term, strategic planning and execution isn't as effective as it used to be.

If long-term planning has become so difficult, then prescribing necessary behaviors is even more challenging. And measuring compliance against a plan is next to impossible.

All this dramatically affects the way people work. Unlike workers in the traditionally-run organizations of the past—who prided themselves on being able to act repetitively, with little variation and comfortable certainty—today's workers operate in contexts of abundant ambiguity. Their work requires greater creativity, intuition, and critical judgment—there is a greater demand to deviate from yesterday's "normal" and adjust to today's new conditions.

In today's volatile and uncertain world, our ability to predict and define strategic positions is diminishing—because the pace of change, the rate of introduction of new variables, is accelerating.

Working in this new way has become more critical to value creation. Our management systems must focus on building structures, systems, and processes that help create engaged, motivated workers—people who are enabled to innovate and act with speed and agility.

We need to come up with a different solution for optimizing organizations for a very different economic era, one that works from the bottom up rather than the top down. We need to replace that old three-step formula for success—plan, prescribe, execute—with one much better suited to today's tumultuous climate: try, learn, modify.

### Try, learn, modify

Because conditions can change so rapidly and with so little warning—and because the steps we need to take next are no longer planned in advance—we need to cultivate environments that encourage creative trial and error, not unyielding allegiance to a five-year schedule. Here are just a few implications of beginning to work this way:

  * **Shorter planning cycles (try).** Rather than agonize over long-term strategic directions, managers need to be thinking of short-term experiments they can try quickly. They should be seeking ways to help their teams take calculated risks and leverage the data at their disposal to make best guesses about the most beneficial paths forward. They can do this by lowering overhead and giving teams the freedom to try new approaches quickly.
  * **Higher tolerance for failure (learn).** Greater frequency of experimentation means greater opportunity for failure. Creative and resilient organizations have a [significantly higher tolerance for failure][2] than traditional organizations do. Managers should treat failures as learning opportunities—moments to gather feedback on the tests their teams are running.
  * **More adaptable structures (modify).** An ability to easily modify organizational structures and strategic directions—and the willingness to do it when conditions necessitate—is the key to ensuring that organizations can evolve in line with rapidly changing environmental conditions. Managers can't be wedded to any idea any longer than that idea proves itself to be useful for accomplishing a short-term goal.

If long-term planning is dead, then long live shorter-term experimentation. Try, learn, and modify—that's the best path forward during uncertain times.

[Subscribe to our weekly newsletter][3] to learn more about open organizations.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/18/3/try-learn-modify

作者:[Jim Whitehurst][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/remyd
[1]:https://www.cio.com/article/3246027/enterprise-architecture/the-death-of-long-term-it-planning.html?upd=1515780110970
[2]:https://opensource.com/open-organization/16/12/building-culture-innovation-your-organization
[3]:https://opensource.com/open-organization/resources/newsletter
@ -1,3 +1,4 @@
translating by kimii

How To Safely Generate A Random Number — Quarrelsome
======
### Use urandom
@ -1,215 +0,0 @@
|
||||
translating by imquanquan
|
||||
|
||||
9 Lightweight Linux Applications to Speed Up Your System
|
||||
======
|
||||
**Brief:** One of the many ways to [speed up Ubuntu][1] system is to use lightweight alternatives of the popular applications. We have already seen [must have Linux application][2] earlier. we'll see the lightweight alternative applications for Ubuntu and other Linux distributions.
|
||||
|
||||
![Use these Lightweight alternative applications in Ubuntu Linux][4]
|
||||
|
||||
## 9 Lightweight alternatives of popular Linux applications
|
||||
|
||||
Is your Linux system slow? Are the applications taking a long time to open? The best option you have is to use a [light Linux distro][5]. But it's not always possible to reinstall an operating system, is it?
|
||||
|
||||
So if you want to stick to your present Linux distribution, but want improved performance, you should use lightweight alternatives of the applications you are using. Here, I'm going to put together a small list of lightweight alternatives to various Linux applications.
|
||||
|
||||
Since I am using Ubuntu, I have provided installation instructions for Ubuntu-based Linux distributions. But these applications will work on almost all other Linux distribution. You just have to find a way to install these lightweight Linux software in your distro.
|
||||
|
||||
### 1. Midori: Web Browser
|
||||
|
||||
Midori is one of the most lightweight web browsers that have reasonable compatibility with the modern web. It is open source and uses the same rendering engine that Google Chrome was initially built on -- WebKit. It is super fast and minimal yet highly customizable.
|
||||
|
||||
![Midori Browser][6]
|
||||
|
||||
It has plenty of extensions and options to tinker with. So if you are a power user, it's a great choice for you too. If you face any problems browsing round the web, check the [Frequently Asked Question][7] section of their website -- it contains the common problems you might face along with their solution.
|
||||
|
||||
[Midori][8]
|
||||
|
||||
#### Installing Midori on Ubuntu based distributions
|
||||
|
||||
Midori is available on Ubuntu via the official repository. Just run the following commands for installing it:
|
||||
```
|
||||
sudo apt install midori
|
||||
```
|
||||
|
||||
### 2. Trojita: email client
|
||||
|
||||
Trojita is an open source robust IMAP e-mail client. It is fast and resource efficient. I can certainly call it one of the [best email clients for Linux][9]. If you can live with only IMAP support on your e-mail client, you might not want to look any further.
|
||||
|
||||
![Trojitá][10]
|
||||
|
||||
Trojita uses various techniques -- on-demand e-mail loading, offline caching, bandwidth-saving mode etc. -- for achieving its impressive performance.
|
||||
|
||||
[Trojita][11]
|
||||
|
||||
#### Installing Trojita on Ubuntu based distributions
|
||||
|
||||
Trojita currently doesn't have an official PPA for Ubuntu. But that shouldn't be a problem. You can install it quite easily using the following commands:
|
||||
```
|
||||
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/jkt-gentoo:/trojita/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/trojita.list"
|
||||
wget http://download.opensuse.org/repositories/home:jkt-gentoo:trojita/xUbuntu_16.04/Release.key
|
||||
sudo apt-key add - < Release.key
|
||||
sudo apt update
|
||||
sudo apt install trojita
|
||||
```
|
||||
|
||||
### 3. GDebi: Package Installer
|
||||
|
||||
Sometimes you need to quickly install DEB packages. Ubuntu Software Center is a resource-heavy application and using it just for installing .deb files is not wise.
|
||||
|
||||
Gdebi is certainly a nifty tool for the same purpose, just with a minimal graphical interface.
|
||||
|
||||
![GDebi][12]
|
||||
|
||||
GDebi is totally lightweight and does its job flawlessly. You should even [make Gdebi the default installer for DEB files][13].
|
||||
|
||||
#### Installing GDebi on Ubuntu based distributions
|
||||
|
||||
You can install GDebi on Ubuntu with this simple one-liner:
|
||||
```
|
||||
sudo apt install gdebi
|
||||
```
|
||||
|
||||
### 4. App Grid: Software Center
|
||||
|
||||
If you use software center frequently for searching, installing and managing applications on Ubuntu, App Grid is a must have application. It is the most visually appealing and yet fast alternative to the default Ubuntu Software Center.
|
||||
|
||||
![App Grid][14]
|
||||
|
||||
App Grid supports ratings, reviews and screenshots for applications.
|
||||
|
||||
[App Grid][15]
|
||||
|
||||
#### Installing App Grid on Ubuntu based distributions
|
||||
|
||||
App Grid has its official PPA for Ubuntu. Use the following commands for installing App Grid:
|
||||
```
|
||||
sudo add-apt-repository ppa:appgrid/stable
|
||||
sudo apt update
|
||||
sudo apt install appgrid
|
||||
```
|
||||
|
||||
### 5. Yarock: Music Player
|
||||
|
||||
Yarock is an elegant music player with a modern and minimal user interface. It is lightweight in design and yet it has a comprehensive list of advanced features.
|
||||
|
||||
![Yarock][16]
|
||||
|
||||
The main features of Yarock include multiple music collections, rating, smart playlist, multiple back-end option, desktop notification, scrobbling, context fetching etc.
|
||||
|
||||
[Yarock][17]
|
||||
|
||||
#### Installing Yarock on Ubuntu based distributions
|
||||
|
||||
You will have to install Yarock on Ubuntu via PPA using the following commands:
|
||||
```
|
||||
sudo add-apt-repository ppa:nilarimogard/webupd8
|
||||
sudo apt update
|
||||
sudo apt install yarock
|
||||
```
|
||||
|
||||
### 6. VLC: Video Player
|
||||
|
||||
Who doesn't need a video player? And who has never heard about VLC? It doesn't really need any introduction.
|
||||
|
||||
![VLC][18]
|
||||
|
||||
VLC is all you need to play various media files on Ubuntu and it is quite lightweight too. It works flawlessly on even on very old PCs.
|
||||
|
||||
[VLC][19]
|
||||
|
||||
#### Installing VLC on Ubuntu based distributions
|
||||
|
||||
VLC has official PPA for Ubuntu. Enter the following commands for installing it:
|
||||
```
|
||||
sudo apt install vlc
|
||||
```
|
||||
|
||||
### 7. PCManFM: File Manager
|
||||
|
||||
PCManFM is the standard file manager from LXDE. As with the other applications from LXDE, this one too is lightweight. If you are looking for a lighter alternative for your file manager, try this one.
|
||||
|
||||
![PCManFM][20]
|
||||
|
||||
Although coming from LXDE, PCManFM works with other desktop environments just as well.
|
||||
|
||||
#### Installing PCManFM on Ubuntu based distributions
|
||||
|
||||
Installing PCManFM on Ubuntu will just take one simple command:
|
||||
```
|
||||
sudo apt install pcmanfm
|
||||
```
|
||||
|
||||
### 8. Mousepad: Text Editor
|
||||
|
||||
Nothing can beat command-line text editors like - nano, vim etc. in terms of being lightweight. But if you want a graphical interface, here you go -- Mousepad is a minimal text editor. It's extremely lightweight and blazing fast. It comes with a simple customizable user interface with multiple themes.
|
||||
|
||||
![Mousepad][21]
|
||||
|
||||
Mousepad supports syntax highlighting. So, you can also use it as a basic code editor.
|
||||
|
||||
#### Installing Mousepad on Ubuntu based distributions
|
||||
|
||||
For installing Mousepad use the following command:
|
||||
```
|
||||
sudo apt install mousepad
|
||||
```
|
||||
|
||||
### 9. GNOME Office: Office Suite
|
||||
|
||||
Many of us need to use office applications quite often. Generally, most of the office applications are bulky in size and resource hungry. Gnome Office is quite lightweight in that respect. Gnome Office is technically not a complete office suite. It's composed of different standalone applications and among them, **AbiWord** & **Gnumeric** stands out.
|
||||
|
||||
**AbiWord** is the word processor. It is lightweight and a lot faster than other alternatives. But that came to be at a cost -- you might miss some features like macros, grammar checking etc. It's not perfect but it works.
|
||||
|
||||
![AbiWord][22]
|
||||
|
||||
**Gnumeric** is the spreadsheet editor. Just like AbiWord, Gnumeric is also very fast and it provides accurate calculations. If you are looking for a simple and lightweight spreadsheet editor, Gnumeric has got you covered.
|
||||
|
||||
![Gnumeric][23]
|
||||
|
||||
There are some other applications listed under Gnome Office. You can find them in the official page.
|
||||
|
||||
[Gnome Office][24]
|
||||
|
||||
#### Installing AbiWord & Gnumeric on Ubuntu based distributions
|
||||
|
||||
For installing AbiWord & Gnumeric, simply enter the following command in your terminal:
|
||||
```
|
||||
sudo apt install abiword gnumeric
|
||||
```
|
||||
|
||||
That's all for today. Would you like to add some other **lightweight Linux applications** to this list? Do let us know!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/lightweight-alternative-applications-ubuntu/
|
||||
|
||||
作者:[Munif Tanjim][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/munif/
|
||||
[1]:https://itsfoss.com/speed-up-ubuntu-1310/
|
||||
[2]:https://itsfoss.com/essential-linux-applications/
|
||||
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Lightweight-alternative-applications-for-Linux-800x450.jpg
|
||||
[5]:https://itsfoss.com/lightweight-linux-beginners/
|
||||
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Midori-800x497.png
|
||||
[7]:http://midori-browser.org/faqs/
|
||||
[8]:http://midori-browser.org/
|
||||
[9]:https://itsfoss.com/best-email-clients-linux/
|
||||
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Trojit%C3%A1-800x608.png
|
||||
[11]:http://trojita.flaska.net/
|
||||
[12]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/GDebi.png
|
||||
[13]:https://itsfoss.com/gdebi-default-ubuntu-software-center/
|
||||
[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/AppGrid-800x553.png
|
||||
[15]:http://www.appgrid.org/
|
||||
[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Yarock-800x529.png
|
||||
[17]:https://seb-apps.github.io/yarock/
|
||||
[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/VLC-800x526.png
|
||||
[19]:http://www.videolan.org/index.html
|
||||
[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/PCManFM.png
|
||||
[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Mousepad.png
|
||||
[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/AbiWord-800x626.png
|
||||
[23]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Gnumeric-800x470.png
|
||||
[24]:https://gnome.org/gnome-office/
|
@ -1,97 +0,0 @@
translating---geekpi

How to configure login banners in Linux (RedHat, Ubuntu, CentOS, Fedora)
======
Learn how to create login banners in Linux to display different warning or information messages to a user who is about to log in, or after they have logged in.

![Login banners in Linux][1]

Whenever you log in to a production system of a firm, you get to see login messages, warnings or info about the server you are about to log in to, or have already logged in to, like below. Those are the login banners.

![Login welcome messages in Linux][2]

In this article we will walk you through how to configure them.

There are two types of banners you can configure.

  1. Banner message to display before the user logs in (configured in a file of your choice, e.g. `/etc/login.warn`)
  2. Banner message to display after the user successfully logs in (configured in `/etc/motd`)

### How to display a message before login, when the user connects to the system

This message will be displayed to the user when they connect to the server, before they log in. That is, when they enter the username, this message will be displayed before the password prompt.

You can use any filename and enter your message within. Here we used the `/etc/login.warn` file and put our messages inside.

```
# cat /etc/login.warn
!!!! Welcome to KernelTalks test server !!!!
This server is meant for testing Linux commands and tools. If you are
not associated with kerneltalks.com and not authorized please disconnect
immediately.
```

Now, you need to supply this file and path to the `sshd` daemon so that it can fetch this banner for each user login request. For that, open the `/etc/ssh/sshd_config` file and search for the line `#Banner none`.

Here you have to edit the file: write your filename and remove the hash mark. It should look like: `Banner /etc/login.warn`

Save the file and restart the `sshd` daemon. To avoid disconnecting existing connected users, use the HUP signal to restart sshd.

```
root@kerneltalks # ps -ef | grep -i sshd
root 14255 1 0 18:42 ? 00:00:00 /usr/sbin/sshd -D
root 19074 14255 0 18:46 ? 00:00:00 sshd: ec2-user [priv]
root 19177 19127 0 18:54 pts/0 00:00:00 grep -i sshd

root@kerneltalks # kill -HUP 14255
```

That's it! Open a new session and try to log in. You will be greeted with the message you configured in the above steps.

![Login banner in Linux][3]

You can see the message is displayed before the user enters their password and logs in to the system.

### How to display a message after the user logs in

The message the user sees after logging into the system successfully is the **M**essage **O**f **T**he **D**ay (MOTD) and is controlled by the `/etc/motd` file. Edit this file and enter the message you want to greet the user with once they have successfully logged in.

```
root@kerneltalks # cat /etc/motd
W E L C O M E
Welcome to the testing environment of kerneltalks.
Feel free to use this system for testing your Linux
skills. In case of any issues reach out to admin at
info@kerneltalks.com. Thank you.

```

You don't need to restart the `sshd` daemon for this change to take effect. As soon as you save the file, its content will be read and displayed by the sshd daemon from the very next login request it serves.

![motd in linux][4]

You can see in the above screenshot: the yellow box is the MOTD controlled by `/etc/motd`, and the green box is the login banner we saw earlier.

You can use tools like [cowsay][5], [banner][6], [figlet][7], [lolcat][8] to create fancy, eye-catching messages to display at login. This method works on almost all Linux distros like RedHat, CentOS, Ubuntu, Fedora etc.

--------------------------------------------------------------------------------

via: https://kerneltalks.com/tips-tricks/how-to-configure-login-banners-in-linux/

作者:[kerneltalks][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://kerneltalks.com
[1]:https://c3.kerneltalks.com/wp-content/uploads/2017/11/login-banner-message-in-linux.png
[2]:https://c3.kerneltalks.com/wp-content/uploads/2017/11/Login-message-in-linux.png
[3]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/login-banner.png
[4]:https://c3.kerneltalks.com/wp-content/uploads/2017/11/motd-message-in-linux.png
[5]:https://kerneltalks.com/tips-tricks/cowsay-fun-in-linux-terminal/
[6]:https://kerneltalks.com/howto/create-nice-text-banner-hpux/
[7]:https://kerneltalks.com/tips-tricks/create-beautiful-ascii-text-banners-linux/
[8]:https://kerneltalks.com/linux/lolcat-tool-to-rainbow-color-linux-terminal/
@ -1,3 +1,5 @@
Translating by MjSeven

My Adventure Migrating Back To Windows
======
I have had Linux as my primary OS for about a decade now, and primarily use Ubuntu. But with the latest release I have decided to migrate back to an OS I generally dislike, Windows 10.
@ -1,120 +0,0 @@
Translating by qhwdw
Concurrent Servers: Part 5 - Redis case study
======
This is part 5 in a series of posts on writing concurrent network servers. After discussing techniques for constructing concurrent servers in parts 1-4, this time we're going to do a case study of an existing production-quality server - [Redis][10].

Redis is a fascinating project and I've been following it with interest for a while now. One of the things I admire most about Redis is the clarity of its C source code. It also happens to be a great example of a high-performance concurrent in-memory database server, so the opportunity to use it as a case study for this series was too good to ignore.

Let's see how the ideas discussed in parts 1-4 apply to a real-world application.

All posts in the series:

  * [Part 1 - Introduction][3]
  * [Part 2 - Threads][4]
  * [Part 3 - Event-driven][5]
  * [Part 4 - libuv][6]
  * [Part 5 - Redis case study][7]

### Event-handling library

One of Redis's main claims to fame around the time of its original release in 2009 was its speed - the sheer number of concurrent client connections the server could handle. It was especially notable that Redis did this all in a single thread, without any complex locking and synchronization schemes on the data stored in memory.

This feat was achieved by Redis's own implementation of an event-driven library which wraps the fastest event loop available on a system (epoll for Linux, kqueue for BSD and so on). This library is called [ae][11]. ae makes it possible to write a fast server as long as none of the internals are blocking, which Redis goes to great lengths to guarantee [[1]][12].

What mainly interests us here is ae's support of file events - registering callbacks to be invoked when file descriptors (like network sockets) have something interesting pending. Like libuv, ae supports multiple event loops and - having read parts 3 and 4 in this series - the signature of aeCreateFileEvent shouldn't be surprising:

```
int aeCreateFileEvent(aeEventLoop *eventLoop, int fd, int mask,
        aeFileProc *proc, void *clientData);
```

It registers a callback (proc) for new file events on fd, with the given event loop. When using epoll, it will call epoll_ctl to add an event on the file descriptor (either EPOLLIN, EPOLLOUT or both, depending on the mask parameter). ae's aeProcessEvents is the "run the event loop and dispatch callbacks" function, and it calls epoll_wait under the hood.

### Handling client requests

Let's trace through the Redis server code to see how ae is used to register callbacks for client events. initServer starts it by registering a callback for read events on the socket(s) being listened to, by calling aeCreateFileEvent with the callback acceptTcpHandler. This callback is invoked when new client connections are available. It calls accept [[2]][13] and then acceptCommonHandler, which in turn calls createClient to initialize the data structures required to track a new client connection.

createClient's job is to start listening for data coming in from the client. It sets the socket to non-blocking mode (a key ingredient in an asynchronous event loop) and registers another file event callback with aeCreateFileEvent - for read events - readQueryFromClient. This function will be invoked by the event loop every time the client sends some data.

readQueryFromClient does just what we'd expect - parses the client's command and acts on it by querying and/or manipulating data and sending a reply back. Since the client socket is non-blocking, this function has to be able to handle EAGAIN, as well as partial data; data read from the client is accumulated in a client-specific buffer, and the full query may be split across multiple invocations of the callback.

### Sending data back to clients

In the previous paragraph I said that readQueryFromClient ends up sending replies back to clients. This is logically true, because readQueryFromClient prepares the reply to be sent, but it doesn't actually do the physical sending - since there's no guarantee the client socket is ready for writing/sending data. We have to use the event loop machinery for that.

The way Redis does this is by registering a beforeSleep function to be called every time the event loop is about to go to sleep waiting for sockets to become available for reading/writing. One of the things beforeSleep does is call handleClientsWithPendingWrites. This function tries to send all available replies immediately by calling writeToClient; if some of the sockets are unavailable, it registers an event-loop callback to invoke sendReplyToClient when the socket is ready. This can be seen as a kind of optimization - if the socket is immediately ready for sending (which often is the case for TCP sockets), there's no need to register the event - just send the data. Since sockets are non-blocking, this never actually blocks the loop.

### Why does Redis roll its own event library?

In [part 4][14] we've discussed building asynchronous concurrent servers using libuv. It's interesting to ponder the fact that Redis doesn't use libuv, or any similar event library, and instead implements its own - ae, including wrappers for epoll, kqueue and select. In fact, antirez (Redis's creator) answered precisely this question [in a blog post in 2011][15]. The gist of his answer: ae is ~770 lines of code he intimately understands; libuv is huge, without providing additional functionality Redis needs.

Today, ae has grown to ~1300 lines, which is still trivial compared to libuv's 26K (this is without Windows, tests, samples, docs). libuv is a far more general library, which makes it more complex and more difficult to adapt to the particular needs of another project; ae, on the other hand, was designed for Redis, co-evolved with Redis and contains only what Redis needs.

This is another great example of the dependencies in software projects formula I mentioned [in a post earlier this year][16]:

> The benefit of dependencies is inversely proportional to the amount of effort spent on a software project.

antirez referred to this, to some extent, in his post. He mentioned that dependencies that provide a lot of added value ("foundational" dependencies in my post) make more sense (jemalloc and Lua are his examples) than dependencies like libuv, whose functionality is fairly easy to implement for the particular needs of Redis.

### Multi-threading in Redis

[For the vast majority of its history][17], Redis has been a purely single-threaded affair. Some people find this surprising, but it makes total sense with a bit of thought. Redis is inherently network-bound - as long as the database size is reasonable, for any given client request, much more time is spent waiting on the network than inside Redis's data structures.

These days, however, things are not quite that simple. There are several new capabilities in Redis that use threads:

  1. "Lazy" [freeing of memory][8].
  2. Writing a [persistence journal][9] with fsync calls in a background thread.
  3. Running user-defined modules that need to perform a long-running operation.

For the first two features, Redis uses its own simple bio library (the acronym stands for "Background I/O"). The library is hard-coded for Redis's needs and can't be used outside it - it runs a pre-set number of threads, one per background job type Redis needs.

For the third feature, [Redis modules][18] can define new Redis commands, and thus are held to the same standards as regular Redis commands, including not blocking the main thread. If a custom Redis command defined in a module wants to perform a long-running operation, it has to spin up a thread to run it in the background. src/modules/helloblock.c in the Redis tree provides an example.

With these features, Redis combines an event loop with threading to get both speed in the common case and flexibility in the general case, similarly to the work queue discussion in [part 4][19] of this series.

| [[1]][1] | A core aspect of Redis is its being an _in-memory_ database; therefore, queries should never take too long to execute. There are all kinds of complications, however. In case of partitioning, a server may end up routing the request to another instance; in this case async I/O is used to avoid blocking other clients. |
| [[2]][2] | Through anetAccept; anet is Redis's wrapper for TCP socket code. |

--------------------------------------------------------------------------------

via: https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/

作者:[Eli Bendersky][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://eli.thegreenplace.net/pages/about
[1]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id1
[2]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id2
[3]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/
[4]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/
[5]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
[6]:http://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/
[7]:http://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/
[8]:http://antirez.com/news/93
[9]:https://redis.io/topics/persistence
[10]:https://redis.io/
[11]:https://redis.io/topics/internals-rediseventlib
[12]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id4
[13]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id5
[14]:http://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/
[15]:http://oldblog.antirez.com/post/redis-win32-msft-patch.html
|
||||
[16]:http://eli.thegreenplace.net/2017/benefits-of-dependencies-in-software-projects-as-a-function-of-effort/
|
||||
[17]:http://antirez.com/news/93
|
||||
[18]:https://redis.io/topics/modules-intro
|
||||
[19]:http://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/
|
@ -1,75 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
5 of the Best Linux Dark Themes that Are Easy on the Eyes
|
||||
======
|
||||
|
||||

|
||||
|
||||
There are several reasons people opt for dark themes on their computers. Some find them easy on the eye while others prefer them because of their medical condition. Programmers, especially, like dark themes because they reduce glare on the eyes.
|
||||
|
||||
If you are a Linux user and a dark theme lover, you are in luck. Here are five of the best dark themes for Linux. Check them out!
|
||||
|
||||
### 1. OSX-Arc-Shadow
|
||||
|
||||
![OSX-Arc-Shadow Theme][1]
|
||||
|
||||
As its name implies, this theme is inspired by OS X. It is a flat theme based on Arc. The theme supports GTK 3 and GTK 2, so GNOME, Cinnamon, Unity, MATE, and Xfce users can install and use it. [OSX-Arc-Shadow][2] is part of the OSX-Arc theme collection, which includes several other themes (dark and light). You can download the whole collection and use just the dark variants.
|
||||
|
||||
Debian- and Ubuntu-based distro users have the option of installing the stable release using the .deb files found on this [page][3]. The compressed source files are also on the same page. Arch Linux users, check out this [AUR link][4]. Finally, to install the theme manually, extract the zip content to the "~/.themes" folder and set it as your current theme, controls, and window borders.
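To make the manual route concrete, here is a hedged sketch of those steps; the archive layout is illustrative (real releases ship as .zip or .deb files, so substitute `unzip` accordingly):

```shell
# Sketch of a manual theme install into ~/.themes; paths are illustrative.
THEMES_DIR="${HOME}/.themes"
mkdir -p "$THEMES_DIR"

# Simulate a downloaded release archive so the sketch is self-contained:
workdir=$(mktemp -d)
mkdir -p "$workdir/OSX-Arc-Shadow/gtk-3.0"
echo "/* placeholder stylesheet */" > "$workdir/OSX-Arc-Shadow/gtk-3.0/gtk.css"
tar -C "$workdir" -cf "$workdir/theme.tar" OSX-Arc-Shadow

# Extract into ~/.themes; the theme switcher scans this directory:
tar -C "$THEMES_DIR" -xf "$workdir/theme.tar"
ls "$THEMES_DIR"
```

After extracting, select the theme with your environment's tweak tool (for example, GNOME Tweaks) for the theme, controls, and window borders.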
|
||||
|
||||
### 2. Kiss-Kool-Red version 2
|
||||
|
||||
![Kiss-Kool-Red version 2 ][5]
|
||||
|
||||
The theme is only a few days old. It has a darker look compared to OSX-Arc-Shadow, with red selection outlines. It is especially appealing to those who want more contrast and less glare from the computer screen, and it reduces distraction when used at night or in low-light settings. It supports GTK 3 and GTK 2.
|
||||
|
||||
Head to [gnome-looks][6] to download the theme under the "Files" menu. The installation procedure is simple: extract the theme into the "~/.themes" folder and set it as your current theme, controls, and window borders.
|
||||
|
||||
### 3. Equilux
|
||||
|
||||
![Equilux][7]
|
||||
|
||||
Equilux is another simple dark theme, based on the Materia theme. It has a neutral dark color tone and is not overly fancy. Its selection highlighting is also subtle, without the sharp contrast of the red in Kiss-Kool-Red. The theme is truly made with reducing eye strain in mind.
|
||||
|
||||
[Download the compressed file][8] and unzip it into your "~/.themes" folder. Then, you can set it as your theme. You can check [its GitHub page][9] for the latest additions.
|
||||
|
||||
### 4. Deepin Dark
|
||||
|
||||
![Deepin Dark][10]
|
||||
|
||||
Deepin Dark is a completely dark theme. For those who like a little more darkness, this theme is definitely one to consider. Moreover, it also reduces the amount of glare from the computer screen. Additionally, it supports Unity. [Download Deepin Dark here][11].
|
||||
|
||||
### 5. Ambiance DS BlueSB12
|
||||
|
||||
![Ambiance DS BlueSB12 ][12]
|
||||
|
||||
Ambiance DS BlueSB12 is a simple dark theme that makes the important details stand out. It helps with focus, as it is not unnecessarily fancy. It is very similar to Deepin Dark. Of particular relevance to Ubuntu users, it is compatible with Ubuntu 17.04. You can download and try it from [here][13].
|
||||
|
||||
### Conclusion
|
||||
|
||||
If you use a computer for a very long time, dark themes are a great way to reduce the strain on your eyes. Even if you don't, dark themes can help you in many other ways like improving your focus. Let us know which is your favorite.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/best-linux-dark-themes/
|
||||
|
||||
作者:[Bruno Edoh][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com
|
||||
[1]:https://www.maketecheasier.com/assets/uploads/2017/12/osx-arc-shadow.png (OSX-Arc-Shadow Theme)
|
||||
[2]:https://github.com/LinxGem33/OSX-Arc-Shadow/
|
||||
[3]:https://github.com/LinxGem33/OSX-Arc-Shadow/releases
|
||||
[4]:https://aur.archlinux.org/packages/osx-arc-shadow/
|
||||
[5]:https://www.maketecheasier.com/assets/uploads/2017/12/Kiss-Kool-Red.png (Kiss-Kool-Red version 2 )
|
||||
[6]:https://www.gnome-look.org/p/1207964/
|
||||
[7]:https://www.maketecheasier.com/assets/uploads/2017/12/equilux.png (Equilux)
|
||||
[8]:https://www.gnome-look.org/p/1182169/
|
||||
[9]:https://github.com/ddnexus/equilux-theme
|
||||
[10]:https://www.maketecheasier.com/assets/uploads/2017/12/deepin-dark.png (Deepin Dark )
|
||||
[11]:https://www.gnome-look.org/p/1190867/
|
||||
[12]:https://www.maketecheasier.com/assets/uploads/2017/12/ambience.png (Ambiance DS BlueSB12 )
|
||||
[13]:https://www.gnome-look.org/p/1013664/
|
@ -1,195 +0,0 @@
|
||||
translated by cyleft
|
||||
|
||||
Migrating to Linux: The Command Line
|
||||
======
|
||||
|
||||

|
||||
|
||||
This is the fourth article in our series on migrating to Linux. If you missed the previous installments, we've covered [Linux for new users][1], [files and filesystems][2], and [graphical environments][3]. Linux is everywhere. It's used to run most Internet services like web servers, email servers, and others. It's also used in your cell phone, your car console, and a whole lot more. So, you might be curious to try out Linux and learn more about how it works.
|
||||
|
||||
Under Linux, the command line is very useful. On desktop Linux systems, although the command line is optional, you will often see people have a command line window open alongside other application windows. On Internet servers, and when Linux is running in a device, the command line is often the only way to interact directly with the system. So, it's good to know at least some command line basics.
|
||||
|
||||
In the command line (often called a shell in Linux), everything is done by entering commands. You can list files, move files, display the contents of files, edit files, and more, even display web pages, all from the command line.
|
||||
|
||||
If you are already familiar with using the command line in Windows (either CMD.EXE or PowerShell), you may want to jump down to the section titled Familiar with Windows Command Line? and read that first.
|
||||
|
||||
### Navigating
|
||||
|
||||
In the command line, there is the concept of the current working directory. (Note: A folder and a directory are synonymous; in Linux they're usually called directories.) Many commands will look in this directory by default if no other directory path is specified. For example, typing ls to list files will list the files in this working directory:
|
||||
```
|
||||
$ ls
|
||||
Desktop Documents Downloads Music Pictures README.txt Videos
|
||||
```
|
||||
|
||||
The command, ls Documents, will instead list files in the Documents directory:
|
||||
```
|
||||
$ ls Documents
|
||||
report.txt todo.txt EmailHowTo.pdf
|
||||
```
|
||||
|
||||
You can display the current working directory by typing pwd. For example:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
```
|
||||
|
||||
You can change the current directory by typing cd and then the directory you want to change to. For example:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
$ cd Downloads
|
||||
$ pwd
|
||||
/home/student/Downloads
|
||||
```
|
||||
|
||||
A directory path is a list of directories separated by a / (slash) character. The directories in a path have an implied hierarchy, for example, where the path /home/student expects there to be a directory named home in the top directory, and a directory named student to be in that directory home.
|
||||
|
||||
Directory paths are either absolute or relative. Absolute directory paths start with the / character.
|
||||
|
||||
Relative paths start with either . (dot) or .. (dot dot). In a path, a . (dot) means the current directory, and .. (dot dot) means one directory up from the current one. For example, ls ../Documents means look in the directory up one from the current one and show the contents of the directory named Documents in there:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
$ ls
|
||||
Desktop Documents Downloads Music Pictures README.txt Videos
|
||||
$ cd Downloads
|
||||
$ pwd
|
||||
/home/student/Downloads
|
||||
$ ls ../Documents
|
||||
report.txt todo.txt EmailHowTo.pdf
|
||||
```
|
||||
|
||||
When you first open a command line window on a Linux system, your current working directory is set to your home directory, usually: /home/<your login name here>. Your home directory is dedicated to your login where you can store your own files.
|
||||
|
||||
The environment variable $HOME expands to the directory path to your home directory. For example:
|
||||
```
|
||||
$ echo $HOME
|
||||
/home/student
|
||||
```
|
||||
|
||||
The following table shows a summary of some of the common commands used to navigate directories and manage simple text files.
|
||||
|
||||
### Searching
|
||||
|
||||
Sometimes I forget where a file resides, or I forget the name of the file I am looking for. There are a couple of commands in the Linux command line that you can use to help you find files and search the contents of files.
|
||||
|
||||
The first command is find. You can use find to search for files and directories by name or other attribute. For example, if I forgot where I kept my todo.txt file, I can run the following:
|
||||
```
|
||||
$ find $HOME -name todo.txt
|
||||
/home/student/Documents/todo.txt
|
||||
```
|
||||
|
||||
The find program has a lot of features and options. A simple form of the command is:
|
||||
find <directory to search> -name <filename>
|
||||
|
||||
If there is more than one file named todo.txt from the example above, it will show me all the places where it found a file by that name. The find command has many options to search by type (file, directory, or other), by date, newer than date, by size, and more. You can type:
|
||||
```
|
||||
man find
|
||||
```
|
||||
|
||||
to get help on how to use the find command.
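A few of those filters can be tried against a throwaway directory tree; the file names below are made up for illustration:

```shell
# Exercise a few find filters against a scratch directory.
demo=$(mktemp -d)
mkdir -p "$demo/Documents"
echo "buy milk" > "$demo/Documents/todo.txt"
echo "notes"    > "$demo/readme.txt"

find "$demo" -name todo.txt        # search by name
find "$demo" -type d               # directories only
find "$demo" -type f -size -100c   # regular files smaller than 100 bytes
```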
|
||||
|
||||
You can also use a command called grep to search inside files for specific contents. For example:
|
||||
```
|
||||
grep "01/02/2018" todo.txt
|
||||
```
|
||||
|
||||
will show me all the lines that have the January 2, 2018 date in them.
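A couple of related grep options are worth knowing; here they run against a throwaway todo file:

```shell
# grep variations on a scratch file.
todo=$(mktemp)
printf '01/02/2018 pay bills\n01/03/2018 call plumber\n' > "$todo"

grep "01/02/2018" "$todo"    # print only the matching lines
grep -n "2018" "$todo"       # include line numbers with each match
grep -c "2018" "$todo"       # just count the matching lines
```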
|
||||
|
||||
### Getting Help
|
||||
|
||||
There are a lot of commands in Linux, and it would be too much to describe all of them here. So the next best step is to show how to get help on commands.
|
||||
|
||||
The command apropos helps you find commands that do certain things. Maybe you want to find out all the commands that operate on directories or get a list of open files, but you don't know what command to run. So, you can try:
|
||||
```
|
||||
apropos directory
|
||||
```
|
||||
|
||||
which will give a list of commands that have the word "directory" in their help text. Or, you can do:
|
||||
```
|
||||
apropos "list open files"
|
||||
```
|
||||
|
||||
which will show one command, lsof, that you can use to list open files.
|
||||
|
||||
If you know the command you need to use but aren't sure which options to use to get it to behave the way you want, you can use the command called man, which is short for manual. You would use man <command>, for example:
|
||||
```
|
||||
man ls
|
||||
```
|
||||
|
||||
You can try man ls on your own. It will give several pages of information.
|
||||
|
||||
The man command explains all the options and parameters you can give to a command, and often will even give an example.
|
||||
|
||||
Many commands often also have a help option (e.g., ls --help), which will give information on how to use a command. The man pages are usually more detailed, while the --help option is useful for a quick lookup.
|
||||
|
||||
### Scripts
|
||||
|
||||
One of the best things about the Linux command line is that the commands that are typed in can be scripted, and run over and over again. Commands can be placed as separate lines in a file. You can put #!/bin/sh as the first line in the file, followed by the commands. Then, once the file is marked as executable, you can run the script as if it were its own command. For example,
|
||||
```
|
||||
--- contents of get_todays_todos.sh ---
|
||||
#!/bin/sh
|
||||
todays_date=`date +"%m/%d/%y"`
|
||||
grep $todays_date $HOME/todos.txt
|
||||
```
|
||||
|
||||
Scripts help automate certain tasks in a set of repeatable steps. Scripts can also get very sophisticated if needed, with loops, conditional statements, routines, and more. There's not space here to go into detail, but you can find more information about Linux bash scripting online.
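The snippet above can be made runnable end to end; the paths below are illustrative, and `$HOME_DEMO` stands in for your home directory:

```shell
# Write, mark executable, and run the example script in a scratch "home".
export HOME_DEMO=$(mktemp -d)
printf '%s water the plants\n' "$(date +"%m/%d/%y")" > "$HOME_DEMO/todos.txt"

cat > "$HOME_DEMO/get_todays_todos.sh" <<'EOF'
#!/bin/sh
todays_date=$(date +"%m/%d/%y")
grep "$todays_date" "$HOME_DEMO/todos.txt"
EOF

chmod +x "$HOME_DEMO/get_todays_todos.sh"   # make it executable
"$HOME_DEMO/get_todays_todos.sh"            # run it like any other command
```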
|
||||
|
||||
### Familiar with Windows Command Line?
|
||||
|
||||
If you are familiar with the Windows CMD or PowerShell program, typing commands at a command prompt should feel familiar. However, several things work differently in Linux and if you don't understand those differences, it may be confusing.
|
||||
|
||||
First, under Linux, the PATH environment variable works differently than it does under Windows. In Windows, the current directory is assumed to be the first directory on the path, even though it's not listed in PATH. Under Linux, the current directory is not assumed to be on the path, and it is not explicitly put on the path either; putting . in the PATH environment variable is considered a security risk under Linux. In Linux, to run a program in the current directory, you need to prefix it with ./ (the file's relative path from the current directory). This trips up a lot of CMD users. For example:
|
||||
```
|
||||
./my_program
|
||||
```
|
||||
|
||||
rather than
|
||||
```
|
||||
my_program
|
||||
```
|
||||
|
||||
In addition, in Windows paths are separated by a ; (semicolon) character in the PATH environment variable. On Linux, in PATH, directories are separated by a : (colon) character. Also in Linux, directories in a single path are separated by a / (slash) character while under Windows directories in a single path are separated by a \ (backslash) character. So a typical PATH environment variable in Windows might look like:
|
||||
```
PATH="C:\Program Files;C:\Program Files\Firefox;"
```

while on Linux it might look like:

```
PATH="/usr/bin:/opt/mozilla/firefox"
```
|
||||
|
||||
Also note that environment variables are expanded with a $ on Linux, so $PATH expands to the contents of the PATH environment variable whereas in Windows you need to enclose the variable in percent symbols (e.g., %PATH%).
|
||||
|
||||
In Linux, options are commonly passed to programs using a - (dash) character in front of the option, while under Windows options are passed by preceding options with a / (slash) character. So, under Linux, you would do:
|
||||
```
|
||||
a_prog -h
|
||||
```
|
||||
|
||||
rather than
|
||||
```
|
||||
a_prog /h
|
||||
```
|
||||
|
||||
Under Linux, file extensions generally don't signify anything. For example, renaming myscript to myscript.bat doesn't make it executable. Instead to make a file executable, the file's executable permission flag needs to be set. File permissions are covered in more detail next time.
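A quick demonstration of the difference, in a scratch directory:

```shell
# The executable permission bit, not the file name, makes a file runnable.
scratch=$(mktemp -d)
cd "$scratch"
printf '#!/bin/sh\necho hello\n' > myscript   # no extension needed
chmod +x myscript                             # set the executable bit
./myscript                                    # note the ./ prefix
```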
|
||||
|
||||
Under Linux when file and directory names start with a . (dot) character they are hidden. So, for example, if you're told to edit the file, .bashrc, and you don't see it in your home directory, it probably really is there. It's just hidden. In the command line, you can use option -a on the command ls to see hidden files. For example:
|
||||
```
|
||||
ls -a
|
||||
```
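You can see the difference in a scratch directory:

```shell
# Names starting with a dot are hidden from a plain ls.
scratch=$(mktemp -d)
cd "$scratch"
touch visible.txt .bashrc_demo
ls        # only visible.txt appears
ls -a     # .bashrc_demo shows up, along with . and ..
```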
|
||||
|
||||
Under Linux, common commands are also different from those in the Windows command line. The following table shows a mapping from common items used under CMD to the alternatives used under Linux.
|
||||
|
||||

|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2018/1/migrating-linux-command-line
|
||||
|
||||
作者:[John Bonesio][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/johnbonesio
|
||||
[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
|
||||
[2]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems
|
||||
[3]:https://www.linux.com/blog/learn/2017/12/migrating-linux-graphical-environments
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH
|
||||
============================================================
|
||||
|
||||
@ -1379,4 +1380,4 @@ via: https://blog.patricktriest.com/text-search-docker-elasticsearch/
|
||||
[47]:https://blog.patricktriest.com/tag/javascript/
|
||||
[48]:https://blog.patricktriest.com/tag/nodejs/
|
||||
[49]:https://blog.patricktriest.com/tag/web-development/
|
||||
[50]:https://blog.patricktriest.com/tag/devops/
|
||||
[50]:https://blog.patricktriest.com/tag/devops/
|
||||
|
@ -1,88 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
How to access/view Python help when using vim
|
||||
======
|
||||
|
||||
I am a new Vim text editor user. I am writing Python code. Is there is a way to see Python documentation within vim and without visiting the Internet? Say my cursor is under the print Python keyword, and I press F1. I want to look at the help for the print keyword. How do I show python help() inside vim? How do I call pydoc3/pydoc to seek help without leaving vim?
|
||||
|
||||
The pydoc or pydoc3 command shows text documentation for a Python keyword, topic, function, module, or package, or for a dotted reference to a class or function within a module or package. You can call pydoc from vim itself. Let us see how to access Python documentation using pydoc within the vim text editor.
|
||||
|
||||
### Access python help using pydoc
|
||||
|
||||
The syntax is:
|
||||
```
|
||||
pydoc keyword
|
||||
pydoc3 keyword
|
||||
pydoc len
|
||||
pydoc print
|
||||
```
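If pydoc3 is not on your PATH under that name (distros differ), the same documentation is reachable through the interpreter's pydoc module:

```shell
# Invoke pydoc as a module; this works wherever python3 itself is installed.
python3 -m pydoc len | head -n 5
```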
|
||||
Edit your ~/.vimrc:
|
||||
`$ vim ~/.vimrc`
|
||||
Append the following configuration for pydoc3 (Python 3.x docs). It creates a mapping for the H key that works in normal mode:
|
||||
```
|
||||
nnoremap <buffer> H :<C-u>execute "!pydoc3 " . expand("<cword>")<CR>
|
||||
```
|
||||
|
||||
|
||||
Save and close the file. Open vim text editor:
|
||||
`$ vim file.py`
|
||||
Write some code:
|
||||
```
|
||||
#!/usr/bin/python3
|
||||
x=5
|
||||
y=10
|
||||
z=x+y
|
||||
print(z)
|
||||
print("Hello world")
|
||||
```
|
||||
|
||||
Position cursor under the print Python keyword and press Shift followed by H. You will see output as follows:
|
||||
|
||||
[![Access Python Help Within Vim][1]][1]
|
||||
Gif.01: Press H to view help for the print Python keyword
|
||||
|
||||
### How to view python help when using vim
|
||||
|
||||
[jedi-vim][2] is a vim binding to the Jedi autocompletion library. It can do many things, including displaying help for the keyword under the cursor when you press Shift followed by K, i.e., capital K.
|
||||
|
||||
#### How to install jedi-vim on Linux or Unix-like system
|
||||
|
||||
Use [pathogen][3], [vim-plug][4] or [Vundle][5] to install jedi-vim. I am using Vim-Plug. Add the following line in ~/vimrc:
|
||||
`Plug 'davidhalter/jedi-vim'`
|
||||
Save and close the file. Start vim and type:
|
||||
`PlugInstall`
|
||||
On Arch Linux, you can also install jedi-vim from official repositories as vim-jedi using pacman command:
|
||||
`$ sudo pacman -S vim-jedi`
|
||||
It is also available on Debian (≥ 8) and Ubuntu (≥ 14.04) as vim-python-jedi using [apt command][6]/[apt-get command][7]:
|
||||
`$ sudo apt install vim-python-jedi`
|
||||
On Fedora Linux, it is available as vim-jedi using dnf command:
|
||||
`$ sudo dnf install vim-jedi`
|
||||
Jedi is initialized automatically by default, so no further configuration is needed on your part. To see documentation (pydoc), press K. It shows a popup with assignments:
|
||||
[![How to view python help when using vim][8]][8]
|
||||
|
||||
### about the author
|
||||
|
||||
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][9], [Facebook][10], [Google+][11].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/faq/how-to-access-view-python-help-when-using-vim/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz
|
||||
[1]:https://www.cyberciti.biz/media/new/faq/2018/01/Access-Python-Help-Within-Vim.gif
|
||||
[2]:https://github.com/davidhalter/jedi-vim
|
||||
[3]:https://github.com/tpope/vim-pathogen
|
||||
[4]:https://www.cyberciti.biz/programming/vim-plug-a-beautiful-and-minimalist-vim-plugin-manager-for-unix-and-linux-users/
|
||||
[5]:https://github.com/gmarik/vundle
|
||||
[6]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
|
||||
[7]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
|
||||
[8]:https://www.cyberciti.biz/media/new/faq/2018/01/How-to-view-Python-Documentation-using-pydoc-within-vim-on-Linux-Unix.jpg
|
||||
[9]:https://twitter.com/nixcraft
|
||||
[10]:https://facebook.com/nixcraft
|
||||
[11]:https://plus.google.com/+CybercitiBiz
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
A File Transfer Utility To Download Only The New Parts Of A File
|
||||
======
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Manage printers and printing
|
||||
======
|
||||
|
||||
|
@ -0,0 +1,203 @@
|
||||
How to clone, modify, add, and delete files in Git
|
||||
======
|
||||

|
||||
|
||||
In the [first article in this series][1] on getting started with Git, we created a simple Git repo and added a file to it by connecting it with our computer. In this article, we will learn a handful of other things about Git, namely how to clone (download), modify, add, and delete files in a Git repo.
|
||||
|
||||
### Let's make some clones
|
||||
|
||||
Say you already have a Git repo on GitHub and you want to get your files from it—maybe you lost the local copy on your computer or you're working on a different computer and want access to the files in your repository. What should you do? Download your files from GitHub? Exactly! We call this "cloning" in Git terminology. (You could also download the repo as a ZIP file, but we'll explore the clone method in this article.)
|
||||
|
||||
Let's clone the repo, called Demo, we created in the last article. (If you have not yet created a Demo repo, jump back to that article and do those steps before you proceed here.) To clone your repo, just open your browser and navigate to `https://github.com/<your_username>/Demo` (where `<your_username>` is your GitHub username; for example, my repo is at `https://github.com/kedark3/Demo`). Once you navigate to that URL, click the "Clone or download" button, and your browser should look something like this:
|
||||
|
||||

|
||||
|
||||
As you can see above, the "Clone with HTTPS" option is open. Copy your repo's URL from that dropdown box (`https://github.com/<your_username>/Demo.git`). Open the terminal and type the following command to clone your GitHub repo to your computer:
|
||||
```
|
||||
git clone https://github.com/<your_username>/Demo.git
|
||||
|
||||
```
|
||||
|
||||
Then, to see the list of files in the `Demo` directory, enter the command:
|
||||
```
|
||||
ls Demo/
|
||||
|
||||
```
|
||||
|
||||
Your terminal should look like this:
|
||||
|
||||

|
||||
|
||||
### Modify files
|
||||
|
||||
Now that we have cloned the repo, let's modify the files and update them on GitHub. To begin, enter the commands below, one by one, to change the directory to `Demo/`, check the contents of `README.md`, echo new (additional) content to `README.md`, and check the status with `git status`:
|
||||
```
|
||||
cd Demo/
|
||||
|
||||
ls
|
||||
|
||||
cat README.md
|
||||
|
||||
echo "Added another line to README.md" >> README.md
|
||||
|
||||
cat README.md
|
||||
|
||||
git status
|
||||
|
||||
```
|
||||
|
||||
This is how it will look in the terminal if you run these commands one by one:
|
||||
|
||||

|
||||
|
||||
Let's look at the output of `git status` and walk through what it means. Don't worry about the part that says:
|
||||
```
|
||||
On branch master
|
||||
|
||||
Your branch is up-to-date with 'origin/master'.
|
||||
|
||||
```
|
||||
|
||||
because we haven't learned it yet. The next line says: `Changes not staged for commit`; this is telling you that the files listed below it aren't marked ready ("staged") to be committed. If you run `git add`, Git takes those files and marks them as `Ready for commit`; in other (Git) words, `Changes staged for commit`. Before we do that, let's check what we are adding to Git with the `git diff` command, then run `git add`.
|
||||
|
||||
Here is your terminal output:
|
||||
|
||||

|
||||
|
||||
Let's break this down:
|
||||
|
||||
* `diff --git a/README.md b/README.md` is what Git is comparing (i.e., `README.md` in this example).
|
||||
* `--- a/README.md` would show anything removed from the file.
|
||||
* `+++ b/README.md` would show anything added to your file.
|
||||
* Anything added to the file is printed in green text with a + at the beginning of the line.
|
||||
* If we had removed anything, it would be printed in red text with a - sign at the beginning.
|
||||
* Git status now says `Changes to be committed:` and lists the filename (i.e., `README.md`) and what happened to that file (i.e., it has been `modified` and is ready to be committed).
|
||||
|
||||
|
||||
|
||||
Tip: If you have already run `git add`, and now you want to see what's different, the usual `git diff` won't yield anything because you already added the file. Instead, you must use `git diff --cached`. It will show you the difference between the current version and previous version of files that Git was told to add. Your terminal output would look like this:
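The tip can be verified in a throwaway repo; the identity settings below are placeholders:

```shell
# Show git diff vs. git diff --cached around a staged change.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "demo@example.com"   # placeholder identity
git config user.name  "Demo User"
echo "first line" > README.md
git add README.md && git commit -q -m "init"

echo "second line" >> README.md
git diff              # the unstaged change is shown
git add README.md
git diff --cached     # after staging, the change appears here instead
```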
|
||||
|
||||

|
||||
|
||||
### Upload a file to your repo
|
||||
|
||||
We have modified the `README.md` file with some new content and it's time to upload it to GitHub.
|
||||
|
||||
Let's commit the changes and push those to GitHub. Run:
|
||||
```
|
||||
git commit -m "Updated Readme file"
|
||||
|
||||
```
|
||||
|
||||
This tells Git that you are "committing" to changes that you have "added" to it. You may recall from the first part of this series that it's important to add a message to explain what you did in your commit so you know its purpose when you look back at your Git log later. (We will look more at this topic in the next article.) `Updated Readme file` is the message for this commit—if you don't think this is the most logical way to explain what you did, feel free to write your commit message differently.
|
||||
|
||||
Run `git push -u origin master`. This will prompt you for your username and password, then upload the file to your GitHub repo. Refresh your GitHub page, and you should see the changes you just made to `README.md`.
|
||||
|
||||

|
||||
|
||||
The bottom-right corner of the terminal shows that I committed the changes, checked the Git status, and pushed the changes to GitHub. Git status says:
|
||||
```
|
||||
Your branch is ahead of 'origin/master' by 1 commit
|
||||
|
||||
(use "git push" to publish your local commits)
|
||||
|
||||
```
|
||||
|
||||
The first line indicates there is one commit in the local repo but not present in origin/master (i.e., on GitHub). The next line directs us to push those changes to origin/master, and that is what we did. (To refresh your memory on what "origin" means in this case, refer to the first article in this series. I will explain what "master" means in the next article, when we discuss branching.)
|
||||
|
||||
### Add a new file to Git
|
||||
|
||||
Now that we have modified a file and updated it on GitHub, let's create a new file, add it to Git, and upload it to GitHub. Run:
|
||||
```
|
||||
echo "This is a new file" >> file.txt
|
||||
|
||||
```
|
||||
|
||||
This will create a new file named `file.txt`.
|
||||
|
||||
If you `cat` it out:
|
||||
```
|
||||
cat file.txt
|
||||
|
||||
```
|
||||
|
||||
You should see the contents of the file. Now run:
|
||||
```
|
||||
git status
|
||||
|
||||
```
|
||||
|
||||
Git reports that you have an untracked file (named `file.txt`) in your repository. This is Git's way of telling you that there is a new file in the repo directory on your computer that you haven't told Git about, and Git is not tracking that file for any changes you make.
|
||||
|
||||

|
||||
|
||||
We need to tell Git to track this file so we can commit it and upload it to our repo. Here's the command to do that:
|
||||
```
git add file.txt
git status
```
|
||||
|
||||
Your terminal output is:
|
||||
|
||||

|
||||
|
||||
Git status tells you there are changes to `file.txt` to be committed, and that it is a `new file` that Git was not aware of before. Now that we have added `file.txt` to Git, we can commit the changes and push the file to origin/master.
|
||||
|
||||

|
||||
|
||||
Git has now uploaded this new file to GitHub; if you refresh your GitHub page, you should see the new file, `file.txt`, in your Git repo on GitHub.
|
||||
|
||||

|
||||
|
||||
With these steps, you can create as many files as you like, add them to Git, and commit and push them up to GitHub.
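The whole cycle can be condensed into a short script. Below is an illustrative sketch run in a throwaway directory — the path, file name, and commit message are examples only, and it assumes `git` is installed:

```shell
# Illustrative create -> add -> commit cycle in a scratch repository.
# Assumes git is installed; names and messages are examples only.
set -e
repo=$(mktemp -d)                     # throwaway directory
cd "$repo"
git init -q .
git config user.email "you@example.com"   # identity needed to commit
git config user.name  "Your Name"
echo "This is a new file" > file.txt
git add file.txt                      # stage the new file
git commit -q -m "Add file.txt"       # record it in history
git log --oneline                     # the new commit appears here
```

Pushing to GitHub would then be the same `git push -u origin master` step shown earlier, once a remote named `origin` exists.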
|
||||
|
||||
### Delete a file from Git
|
||||
|
||||
What if we discover we made an error and need to delete `file.txt` from our repo? One way is to remove the file from our local copy of the repo with this command:
|
||||
```
rm file.txt
```
|
||||
|
||||
If you do `git status` now, Git says there is a file that is `not staged for commit` and it has been `deleted` from the local copy of the repo. If we now run:
|
||||
```
git add file.txt
git status
```
|
||||
|
||||
I know we are deleting the file, but we still run `git add` because we need to tell Git about the **change** we are making. `git add` can be used when we are adding a new file to Git, modifying the contents of an existing file, or deleting a file from a Git repo. Effectively, `git add` takes all the changes into account and stages them for commit. If in doubt, look carefully at the output of each command in the terminal screenshot below.
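As a side note, `git rm` performs both steps at once — removing the file and staging the deletion. A minimal sketch in a scratch repository (assuming `git` is installed):

```shell
# git rm = rm + git add for deletions; demonstrated in a scratch repo.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q .
git config user.email "you@example.com"
git config user.name  "Your Name"
echo "hello" > file.txt
git add file.txt
git commit -q -m "Add file.txt"
git rm -q file.txt               # deletes AND stages in one step
git status --short               # first column "D": deletion staged
```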
|
||||
|
||||
Git will tell us the deleted file is staged for commit. As soon as you commit this change and push it to GitHub, the file will be removed from the repo on GitHub as well. Do this by running:
|
||||
```
git commit -m "Delete file.txt"
git push -u origin master
```
|
||||
|
||||
Now your terminal looks like this:
|
||||
|
||||

|
||||
|
||||
And your GitHub looks like this:
|
||||
|
||||

|
||||
|
||||
Now you know how to clone, add, modify, and delete Git files from your repo. The next article in this series will examine Git branching.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/how-clone-modify-add-delete-git-files
|
||||
|
||||
作者:[Kedar Vijay Kulkarni][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/kkulkarn
|
||||
[1]:https://opensource.com/article/18/1/step-step-guide-git
|
@ -0,0 +1,256 @@
|
||||
Build a bikesharing app with Redis and Python
|
||||
======
|
||||
|
||||

|
||||
|
||||
I travel a lot on business. I'm not much of a car guy, so when I have some free time, I prefer to walk or bike around a city. Many of the cities I've visited on business have bikeshare systems, which let you rent a bike for a few hours. Most of these systems have an app to help users locate and rent their bikes, but it would be more helpful for users like me to have a single place to get information on all the bikes in a city that are available to rent.
|
||||
|
||||
To solve this problem and demonstrate the power of open source to add location-aware features to a web application, I combined publicly available bikeshare data, the [Python][1] programming language, and the open source [Redis][2] in-memory data structure server to index and query geospatial data.
|
||||
|
||||
The resulting bikeshare application incorporates data from many different sharing systems, including the [Citi Bike][3] bikeshare in New York City. It takes advantage of the General Bikeshare Feed provided by the Citi Bike system and uses its data to demonstrate some of the features that can be built using Redis to index geospatial data. The Citi Bike data is provided under the [Citi Bike data license agreement][4].
|
||||
|
||||
### General Bikeshare Feed Specification
|
||||
|
||||
The General Bikeshare Feed Specification (GBFS) is an [open data specification][5] developed by the [North American Bikeshare Association][6] to make it easier for map and transportation applications to add bikeshare systems into their platforms. The specification is currently in use by over 60 different sharing systems in the world.
|
||||
|
||||
The feed consists of several simple [JSON][7] data files containing information about the state of the system. The feed starts with a top-level JSON file referencing the URLs of the sub-feed data:
|
||||
```
{
  "data": {
    "en": {
      "feeds": [
        {
          "name": "system_information",
          "url": "https://gbfs.citibikenyc.com/gbfs/en/system_information.json"
        },
        {
          "name": "station_information",
          "url": "https://gbfs.citibikenyc.com/gbfs/en/station_information.json"
        },
        . . .
      ]
    }
  },
  "last_updated": 1506370010,
  "ttl": 10
}
```
|
||||
|
||||
The first step is loading information about the bikesharing stations into Redis using data from the `system_information` and `station_information` feeds.
|
||||
|
||||
The `system_information` feed provides the system ID, which is a short code that can be used to create namespaces for Redis keys. The GBFS spec doesn't specify the format of the system ID, but does guarantee it is globally unique. Many of the bikeshare feeds use short names like coast_bike_share, boise_greenbike, or topeka_metro_bikes for system IDs. Others use familiar geographic abbreviations such as NYC or BA, and one uses a universally unique identifier (UUID). The bikesharing application uses the identifier as a prefix to construct unique keys for the given system.
|
||||
|
||||
The `station_information` feed provides static information about the sharing stations that comprise the system. Stations are represented by JSON objects with several fields. There are several mandatory fields in the station object that provide the ID, name, and location of the physical bike stations. There are also several optional fields that provide helpful information such as the nearest cross street or accepted payment methods. This is the primary source of information for this part of the bikesharing application.
|
||||
|
||||
### Building the database
|
||||
|
||||
I've written a sample application, [load_station_data.py][8], that mimics what would happen in a backend process for loading data from external sources.
|
||||
|
||||
### Finding the bikeshare stations
|
||||
|
||||
Loading the bikeshare data starts with the [systems.csv][9] file from the [GBFS repository on GitHub][5].
|
||||
|
||||
The repository's [systems.csv][9] file provides the discovery URL for registered bikeshare systems with an available GBFS feed. The discovery URL is the starting point for processing bikeshare information.
|
||||
|
||||
The `load_station_data` application takes each discovery URL found in the systems file and uses it to find the URL for two sub-feeds: system information and station information. The system information feed provides a key piece of information: the unique ID of the system. (Note: the system ID is also provided in the systems.csv file, but some of the identifiers in that file do not match the identifiers in the feeds, so I always fetch the identifier from the feed.) Details on the system, like bikeshare URLs, phone numbers, and emails, could be added in future versions of the application, so the data is stored in a Redis hash using the key `${system_id}:system_info`.
|
||||
|
||||
### Loading the station data
|
||||
|
||||
The station information provides data about every station in the system, including the system's location. The `load_station_data` application iterates over every station in the station feed and stores the data about each into a Redis hash using a key of the form `${system_id}:station:${station_id}`. The location of each station is added to a geospatial index for the bikeshare using the `GEOADD` command.
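As a rough sketch of what the loader issues for a single station — the station values below are made up for illustration, and piping the output into `redis-cli` would perform the actual load:

```shell
# Emit the Redis commands a loader would issue for one station.
# All values are illustrative; real data comes from the GBFS feed.
system_id="NYC"; station_id="523"
lon="-73.99138"; lat="40.75466"
key="${system_id}:station:${station_id}"
cmds=$(printf '%s\n%s\n' \
  "HSET $key name 'W 38 St & 8 Ave' capacity 39" \
  "GEOADD ${system_id}:stations:location $lon $lat $key")
echo "$cmds"
```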
|
||||
|
||||
### Updating data
|
||||
|
||||
On subsequent runs, I don't want the code to remove all the feed data from Redis and reload it into an empty Redis database, so I carefully considered how to handle in-place updates of the data.
|
||||
|
||||
The code starts by loading the dataset with information on all the bikesharing stations for the system being processed into memory. When information is loaded for a station, the station (by key) is removed from the in-memory set of stations. Once all station data is loaded, we're left with a set containing all the station data that must be removed for that system.
|
||||
|
||||
The application iterates over this set of stations and creates a transaction to delete the station information, remove the station key from the geospatial indexes, and remove the station from the list of stations for the system.
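The stale-station bookkeeping boils down to a set difference. A hedged shell sketch with made-up keys, where `comm` on sorted key lists stands in for the in-memory set:

```shell
# Keys already in Redis vs. keys present in the fresh feed: anything in
# the first list but not the second is stale and must be deleted.
printf '%s\n' NYC:station:281 NYC:station:523 NYC:station:999 | sort > existing.txt
printf '%s\n' NYC:station:281 NYC:station:523 | sort > feed.txt
comm -23 existing.txt feed.txt    # stale keys -> NYC:station:999
```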
|
||||
|
||||
### Notes on the code
|
||||
|
||||
There are a few interesting things to note in [the sample code][8]. First, items are added to the geospatial indexes using the `GEOADD` command but removed with the `ZREM` command. As the underlying implementation of the geospatial type uses sorted sets, items are removed using `ZREM`. A word of caution: For simplicity, the sample code demonstrates working with a single Redis node; the transaction blocks would need to be restructured to run in a cluster environment.
|
||||
|
||||
If you are using Redis 4.0 (or later), you have some alternatives to the `DELETE` and `HMSET` commands in the code. Redis 4.0 provides the [`UNLINK`][10] command as an asynchronous alternative to the `DELETE` command. `UNLINK` will remove the key from the keyspace, but it reclaims the memory in a separate thread. The [`HMSET`][11] command is [deprecated in Redis 4.0 and the `HSET` command is now variadic][12] (that is, it accepts an indefinite number of arguments).
|
||||
|
||||
### Notifying clients
|
||||
|
||||
At the end of the process, a notification is sent to the clients relying on our data. Using the Redis pub/sub mechanism, the notification goes out over the `geobike:station_changed` channel with the ID of the system.
|
||||
|
||||
### Data model
|
||||
|
||||
When structuring data in Redis, the most important thing to think about is how you will query the information. The two main queries the bikeshare application needs to support are:
|
||||
|
||||
* Find stations near us
|
||||
* Display information about stations
|
||||
|
||||
|
||||
|
||||
Redis provides two main data types that will be useful for storing our data: hashes and sorted sets. The [hash type][13] maps well to the JSON objects that represent stations; since Redis hashes don't enforce a schema, they can be used to store the variable station information.
|
||||
|
||||
Of course, finding stations geographically requires a geospatial index to search for stations relative to some coordinates. Redis provides [several commands][14] to build up a geospatial index using the [sorted set][15] data structure.
|
||||
|
||||
We construct keys using the format `${system_id}:station:${station_id}` for the hashes containing information about the stations and keys using the format `${system_id}:stations:location` for the geospatial index used to find stations.
|
||||
|
||||
### Getting the user's location
|
||||
|
||||
The next step in building out the application is to determine the user's current location. Most applications accomplish this through built-in services provided by the operating system. The OS can provide applications with a location based on GPS hardware built into the device or approximated from the device's available WiFi networks.
|
||||
|
||||

|
||||
|
||||
### Finding stations
|
||||
|
||||
After the user's location is found, the next step is locating nearby bikesharing stations. Redis' geospatial functions can return information on stations within a given distance of the user's current coordinates. Here's an example of this using the Redis command-line interface.
|
||||
|
||||
Imagine I'm at the Apple Store on Fifth Avenue in New York City, and I want to head downtown to Mood on West 37th to catch up with my buddy [Swatch][16]. I could take a taxi or the subway, but I'd rather bike. Are there any nearby sharing stations where I could get a bike for my trip?
|
||||
|
||||
The Apple store is located at 40.76384, -73.97297. According to the map, two bikeshare stations—Grand Army Plaza & Central Park South and East 58th St. & Madison—fall within a 500-foot radius (in blue on the map above) of the store.
|
||||
|
||||
I can use Redis' `GEORADIUS` command to query the NYC system index for stations within a 500-foot radius:
|
||||
```
127.0.0.1:6379> GEORADIUS NYC:stations:location -73.97297 40.76384 500 ft
1) "NYC:station:3457"
2) "NYC:station:281"
```
|
||||
|
||||
Redis returns the two bikeshare locations found within that radius, using the elements in our geospatial index as the keys for the metadata about a particular station. The next step is looking up the names for the two stations:
|
||||
```
127.0.0.1:6379> hget NYC:station:281 name
"Grand Army Plaza & Central Park S"

127.0.0.1:6379> hget NYC:station:3457 name
"E 58 St & Madison Ave"
```
|
||||
|
||||
Those keys correspond to the stations identified on the map above. If I want, I can add more flags to the `GEORADIUS` command to get a list of elements, their coordinates, and their distance from our current point:
|
||||
```
127.0.0.1:6379> GEORADIUS NYC:stations:location -73.97297 40.76384 500 ft WITHDIST WITHCOORD ASC
1) 1) "NYC:station:281"
   2) "289.1995"
   3) 1) "-73.97371262311935425"
      2) "40.76439830559216659"
2) 1) "NYC:station:3457"
   2) "383.1782"
   3) 1) "-73.97209256887435913"
      2) "40.76302702144496237"
```
|
||||
|
||||
Looking up the names associated with those keys generates an ordered list of stations I can choose from. Redis doesn't provide directions or routing capability, so I use the routing features of my device's OS to plot a course from my current location to the selected bike station.
|
||||
|
||||
The `GEORADIUS` function can be easily implemented inside an API in your favorite development framework to add location functionality to an app.
|
||||
|
||||
### Other query commands
|
||||
|
||||
In addition to the `GEORADIUS` command, Redis provides three other commands for querying data from the index: `GEOPOS`, `GEODIST`, and `GEORADIUSBYMEMBER`.
|
||||
|
||||
The `GEOPOS` command can provide the coordinates for a given element from the geohash. For example, if I know there is a bikesharing station at West 38th and 8th and its ID is 523, then the element name for that station is `NYC:station:523`. Using Redis, I can find the station's longitude and latitude:
|
||||
```
127.0.0.1:6379> geopos NYC:stations:location NYC:station:523
1) 1) "-73.99138301610946655"
   2) "40.75466497634030105"
```
|
||||
|
||||
The `GEODIST` command provides the distance between two elements of the index. If I wanted to find the distance between the station at Grand Army Plaza & Central Park South and the station at East 58th St. & Madison, I would issue the following command:
|
||||
```
127.0.0.1:6379> GEODIST NYC:stations:location NYC:station:281 NYC:station:3457 ft
"671.4900"
```
|
||||
|
||||
Finally, the `GEORADIUSBYMEMBER` command is similar to the `GEORADIUS` command, but instead of taking a set of coordinates, the command takes the name of another member of the index and returns all the members within a given radius centered on that member. To find all the stations within 1,000 feet of the Grand Army Plaza & Central Park South, enter the following:
|
||||
```
127.0.0.1:6379> GEORADIUSBYMEMBER NYC:stations:location NYC:station:281 1000 ft WITHDIST
1) 1) "NYC:station:281"
   2) "0.0000"
2) 1) "NYC:station:3132"
   2) "793.4223"
3) 1) "NYC:station:2006"
   2) "911.9752"
4) 1) "NYC:station:3136"
   2) "940.3399"
5) 1) "NYC:station:3457"
   2) "671.4900"
```
|
||||
|
||||
While this example focused on using Python and Redis to parse data and build an index of bikesharing system locations, it can easily be generalized to locate restaurants, public transit, or any other type of place developers want to help users find.
|
||||
|
||||
This article is based on [my presentation][17] at Open Source 101 in Raleigh this year.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/building-bikesharing-application-open-source-tools
|
||||
|
||||
作者:[Tague Griffith][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/tague
|
||||
[1]:https://www.python.org/
|
||||
[2]:https://redis.io/
|
||||
[3]:https://www.citibikenyc.com/
|
||||
[4]:https://www.citibikenyc.com/data-sharing-policy
|
||||
[5]:https://github.com/NABSA/gbfs
|
||||
[6]:http://nabsa.net/
|
||||
[7]:https://www.json.org/
|
||||
[8]:https://gist.github.com/tague/5a82d96bcb09ce2a79943ad4c87f6e15
|
||||
[9]:https://github.com/NABSA/gbfs/blob/master/systems.csv
|
||||
[10]:https://redis.io/commands/unlink
|
||||
[11]:https://redis.io/commands/hmset
|
||||
[12]:https://raw.githubusercontent.com/antirez/redis/4.0/00-RELEASENOTES
|
||||
[13]:https://redis.io/topics/data-types#Hashes
|
||||
[14]:https://redis.io/commands#geo
|
||||
[15]:https://redis.io/topics/data-types-intro#redis-sorted-sets
|
||||
[16]:https://twitter.com/swatchthedog
|
||||
[17]:http://opensource101.com/raleigh/talks/building-location-aware-apps-open-source-tools/
|
@ -0,0 +1,264 @@
|
||||
Check Linux Distribution Name and Version
|
||||
======
|
||||
Say you have joined a new company and need to install some software requested by the DevApp team, and then restart a few services after the installation. What should you do?
|
||||
|
||||
In this situation, you should at least know which distribution and version the system is running. Knowing that will help you perform the task without any issues.
|
||||
|
||||
Gathering some information about a system before doing any activity on it should be an administrator's first task.
|
||||
|
||||
There are many ways to find the Linux distribution name and version. You might ask: why would I want to know such basic things?
|
||||
|
||||
There are four major distribution families: RHEL, Debian, openSUSE, and Arch Linux. Each distribution comes with its own package manager, which helps us install packages on the system.
|
||||
|
||||
If you don't know the distribution name, you won't be able to pick the right package manager for the installation.
|
||||
|
||||
You also won't be able to run the proper commands to restart services, because most distributions have implemented systemd commands in place of SysVinit scripts.
|
||||
|
||||
It's good to know these basic commands; they will help you in many ways.
|
||||
|
||||
Use the following methods to check your Linux distribution name and version.
|
||||
|
||||
### List of methods
|
||||
|
||||
* lsb_release command
|
||||
* /etc/*-release file
|
||||
* uname command
|
||||
* /proc/version file
|
||||
* dmesg Command
|
||||
* YUM or DNF Command
|
||||
* RPM command
|
||||
* APT-GET command
|
||||
|
||||
|
||||
|
||||
### Method-1: lsb_release Command
|
||||
|
||||
LSB stands for Linux Standard Base. The `lsb_release` command prints distribution-specific information such as the distribution name, release version, and codename.
|
||||
```
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:        16.04
Codename:       xenial
```
|
||||
|
||||
### Method-2: /etc/*-release Files
|
||||
|
||||
The release files are commonly known as operating system identification files. The `/etc` directory contains many files holding various information about the distribution. Each distribution has its own set of files that display this information.
|
||||
|
||||
The following files are present on Ubuntu/Debian systems.
|
||||
```
# cat /etc/issue
Ubuntu 16.04.3 LTS \n \l

# cat /etc/issue.net
Ubuntu 16.04.3 LTS

# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"

# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

# cat /etc/debian_version
9.3
```
|
||||
|
||||
The following files are present on RHEL/CentOS/Fedora systems. The `/etc/redhat-release` and `/etc/system-release` files are symlinks to the distribution-specific `/etc/[distro]-release` file.
|
||||
```
# cat /etc/centos-release
CentOS release 6.9 (Final)

# cat /etc/fedora-release
Fedora release 27 (Twenty Seven)

# cat /etc/os-release
NAME=Fedora
VERSION="27 (Twenty Seven)"
ID=fedora
VERSION_ID=27
PRETTY_NAME="Fedora 27 (Twenty Seven)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:fedoraproject:fedora:27"
HOME_URL="https://fedoraproject.org/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=27
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=27
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"

# cat /etc/redhat-release
Fedora release 27 (Twenty Seven)

# cat /etc/system-release
Fedora release 27 (Twenty Seven)
```
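Since `/etc/os-release` exists on nearly every modern distribution, its fields can be extracted with a couple of lines of shell. The sketch below parses a local sample file with the Ubuntu values shown earlier, so the logic is visible without depending on the host system:

```shell
# Extract NAME and VERSION_ID from an os-release style file.
# A local sample file is used so the example runs anywhere.
cat > sample-os-release <<'EOF'
NAME="Ubuntu"
VERSION_ID="16.04"
ID=ubuntu
EOF
name=$(sed -n 's/^NAME=//p' sample-os-release | tr -d '"')
version=$(sed -n 's/^VERSION_ID=//p' sample-os-release | tr -d '"')
echo "$name $version"    # -> Ubuntu 16.04
```

Pointing the same `sed` lines at `/etc/os-release` gives the running system's name and version.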
|
||||
|
||||
### Method-3: uname Command
|
||||
|
||||
uname (short for unix name) is a utility that prints system information such as the kernel name and version, along with other details about the system and the operating system running on it.
|
||||
|
||||
**Suggested Read :** [6 Methods To Check The Running Linux Kernel Version On System][1]
|
||||
```
# uname -a
Linux localhost.localdomain 4.12.14-300.fc26.x86_64 #1 SMP Wed Sep 20 16:28:07 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
```
|
||||
|
||||
The `fc26` portion of the kernel release string above indicates the operating system is Fedora 26.
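That tag can be pulled out of the kernel release string mechanically. A small sketch using the release string from the sample output above:

```shell
# Extract the Fedora tag (e.g. fc26) from a kernel release string.
# The release string is the sample value from the article.
release="4.12.14-300.fc26.x86_64"
tag=$(echo "$release" | grep -o 'fc[0-9]*')
echo "$tag"    # -> fc26
```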
|
||||
|
||||
### Method-4: /proc/version File
|
||||
|
||||
This file specifies the version of the Linux kernel, the version of gcc used to compile the kernel, and the time of kernel compilation. It also contains the kernel compiler’s user name (in parentheses).
|
||||
```
# cat /proc/version
Linux version 4.12.14-300.fc26.x86_64 ([email protected]) (gcc version 7.2.1 20170915 (Red Hat 7.2.1-2) (GCC) ) #1 SMP Wed Sep 20 16:28:07 UTC 2017
```
|
||||
|
||||
### Method-5: dmesg Command
|
||||
|
||||
dmesg (stands for display message or driver message) is a command on most Unix-like operating systems that prints the message buffer of the kernel.
|
||||
```
# dmesg | grep "Linux"
[ 0.000000] Linux version 4.12.14-300.fc26.x86_64 ([email protected]) (gcc version 7.2.1 20170915 (Red Hat 7.2.1-2) (GCC) ) #1 SMP Wed Sep 20 16:28:07 UTC 2017
[ 0.001000] SELinux: Initializing.
[ 0.001000] SELinux: Starting in permissive mode
[ 0.470288] SELinux: Registering netfilter hooks
[ 0.616351] Linux agpgart interface v0.103
[ 0.630063] usb usb1: Manufacturer: Linux 4.12.14-300.fc26.x86_64 ehci_hcd
[ 0.688949] usb usb2: Manufacturer: Linux 4.12.14-300.fc26.x86_64 ohci_hcd
[ 2.564554] SELinux: Disabled at runtime.
[ 2.564584] SELinux: Unregistering netfilter hooks
```
|
||||
|
||||
### Method-6: Yum/Dnf Command
|
||||
|
||||
Yum (Yellowdog Updater, Modified) is a package management utility for Linux. The yum command is used to install, update, search for, and remove packages on Red Hat-based distributions.
|
||||
|
||||
**Suggested Read :** [YUM Command To Manage Packages on RHEL/CentOS Systems][2]
|
||||
```
# yum info nano
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
 * base: centos.zswap.net
 * extras: mirror2.evolution-host.com
 * updates: centos.zswap.net
Available Packages
Name        : nano
Arch        : x86_64
Version     : 2.3.1
Release     : 10.el7
Size        : 440 k
Repo        : base/7/x86_64
Summary     : A small text editor
URL         : http://www.nano-editor.org
License     : GPLv3+
Description : GNU nano is a small and friendly text editor.
```
|
||||
|
||||
The `yum repolist` command below shows that the Base, Extras, and Updates repositories come from the CentOS 7 repositories.
|
||||
```
# yum repolist
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
 * base: centos.zswap.net
 * extras: mirror2.evolution-host.com
 * updates: centos.zswap.net
repo id            repo name            status
base/7/x86_64      CentOS-7 - Base       9591
extras/7/x86_64    CentOS-7 - Extras      388
updates/7/x86_64   CentOS-7 - Updates    1929
repolist: 11908
```
|
||||
|
||||
We can also use the `dnf` command to check the distribution name and version.
|
||||
|
||||
**Suggested Read :** [DNF (Fork of YUM) Command To Manage Packages on Fedora System][3]
|
||||
```
# dnf info nano
Last metadata expiration check: 0:01:25 ago on Thu Feb 15 01:59:31 2018.
Installed Packages
Name        : nano
Version     : 2.8.7
Release     : 1.fc27
Arch        : x86_64
Size        : 2.1 M
Source      : nano-2.8.7-1.fc27.src.rpm
Repo        : @System
From repo   : fedora
Summary     : A small text editor
URL         : https://www.nano-editor.org
License     : GPLv3+
Description : GNU nano is a small and friendly text editor.
```
|
||||
|
||||
### Method-7: RPM Command
|
||||
|
||||
RPM (RedHat Package Manager) is a powerful command-line package management utility for Red Hat-based systems such as CentOS, Oracle Linux, and Fedora. Querying an installed package helps us identify the running system version from its release tag.
|
||||
|
||||
**Suggested Read :** [RPM commands to manage packages on RHEL based systems][4]
|
||||
```
# rpm -q nano
nano-2.8.7-1.fc27.x86_64
```
|
||||
|
||||
### Method-8: APT-GET Command
|
||||
|
||||
APT stands for Advanced Package Tool. apt-get is a powerful command-line tool used to automatically download and install new software packages, upgrade existing packages, update the package list index, and upgrade entire Debian-based systems.
|
||||
|
||||
**Suggested Read :** [Apt-Get & Apt-Cache commands to manage packages on Debian Based Systems][5]
|
||||
```
# apt-cache policy nano
nano:
  Installed: 2.5.3-2ubuntu2
  Candidate: 2.5.3-2ubuntu2
  Version table:
 * 2.5.3-2ubuntu2 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.5.3-2 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/check-find-linux-distribution-name-and-version/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/magesh/
|
||||
[1]:https://www.2daygeek.com/check-find-determine-running-installed-linux-kernel-version/
|
||||
[2]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[3]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[4]:https://www.2daygeek.com/rpm-command-examples/
|
||||
[5]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
68
sources/tech/20180215 What is a Linux -oops.md
Normal file
68
sources/tech/20180215 What is a Linux -oops.md
Normal file
@ -0,0 +1,68 @@
|
||||
What is a Linux 'oops'?
|
||||
======
|
||||
If you check the processes running on your Linux systems, you might be curious about one called "kerneloops." And that’s “kernel oops,” not “kerne loops” just in case you didn’t parse that correctly.
|
||||
|
||||
Put very bluntly, an “oops” is a deviation from correct behavior on the part of the Linux kernel. Did you do something wrong? Probably not. But something did. And the process that did something wrong has probably at least just been summarily knocked off the CPU. At worst, the kernel may have panicked and abruptly shut the system down.
|
||||
|
||||
For the record, “oops” is NOT an acronym. It doesn’t stand for something like “object-oriented programming and systems” or “out of procedural specs”; it actually means “oops” like you just dropped your glass of wine or stepped on your cat. Oops! The plural of "oops" is "oopses."
|
||||
|
||||
An oops means that something running on the system has violated the kernel’s rules about proper behavior. Maybe the code tried to take a code path that was not allowed or use an invalid pointer. Whatever it was, the kernel — always on the lookout for process misbehavior — most likely will have stopped the particular process in its tracks and written some messages about what it did to the console, to /var/log/dmesg or the /var/log/kern.log file.
|
||||
|
||||
An oops can be caused by the kernel itself or by some process that tries to get the kernel to violate its rules about how things are allowed to run on the system and what they're allowed to do.
|
||||
|
||||
An oops will generate a crash signature that can help kernel developers figure out what went wrong and improve the quality of their code.
|
||||
|
||||
The kerneloops process running on your system will probably look like this:
|
||||
```
kernoops    881     1  0 Feb11 ?   00:00:01 /usr/sbin/kerneloops
```
|
||||
|
||||
You might notice that the process isn't run by root, but by a user named "kernoops" and that it's accumulated extremely little run time. In fact, the only task assigned to this particular user is running kerneloops.
|
||||
```
$ sudo grep kernoops /etc/passwd
kernoops:x:113:65534:Kernel Oops Tracking Daemon,,,:/:/bin/false
```
|
||||
|
||||
If your Linux system isn't one that ships with kerneloops (like Debian), you might consider adding it. Check out this [Debian page][1] for more information.
|
||||
|
||||
### When should you be concerned about an oops?
|
||||
|
||||
An oops is not a big deal, except when it is. It depends in part on the role that the particular process was playing. It also depends on the class of oops.
|
||||
|
||||
Some oopses are so severe that they result in system panics. Technically speaking, panics are a subset of oopses (i.e., the more serious of them). A panic occurs when a problem detected by the kernel is bad enough that the kernel decides it must stop running immediately to prevent data loss or other damage to the system. The system then needs to be halted and rebooted to keep any inconsistencies from making it unusable or unreliable. A system that panics is thus actually trying to protect itself from irrevocable damage.
|
||||
|
||||
In short, all panics are oopses, but not all oopses are panics.
|
||||
|
||||
The /var/log/kern.log and related rotated logs (/var/log/kern.log.1, /var/log/kern.log.2 etc.) contain the logs produced by the kernel and handled by syslog.
|
||||
|
||||
The kerneloops program collects and, by default, submits information on the problems it runs into to <http://oops.kernel.org/>, where it can be analyzed and presented to kernel developers. Configuration details for this process are specified in the /etc/kerneloops.conf file. You can look at the settings easily with the command shown below:
|
||||
```
$ sudo cat /etc/kerneloops.conf | grep -v ^# | grep -v ^$
[sudo] password for shs:
allow-submit = ask
allow-pass-on = yes
submit-url = http://oops.kernel.org/submitoops.php
log-file = /var/log/kern.log
submit-pipe = /usr/share/apport/kernel_oops
```
|
||||
|
||||
In the above (default) settings, information on kernel problems can be submitted, but the user is asked for permission. If set to allow-submit = always, the user will not be asked.
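If you do want submissions to happen without prompting, the value can be flipped in place. A hedged sketch, operating on a local copy that recreates just the relevant lines rather than the live /etc/kerneloops.conf:

```shell
# Work on a local copy; edit the real /etc/kerneloops.conf with care
conf=/tmp/kerneloops.conf
printf 'allow-submit = ask\nallow-pass-on = yes\n' > "$conf"

# Flip "ask" to "always" (assumes the simple "key = value" format shown above)
sed -i 's/^allow-submit = ask$/allow-submit = always/' "$conf"
grep '^allow-submit' "$conf"   # prints: allow-submit = always
```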
|
||||
|
||||
Debugging kernel problems is one of the finer arts of working with Linux systems. Fortunately, most Linux users seldom or never experience oopses or panics. Still, it's nice to know what processes like kerneloops are doing on your system and to understand what might be reported, and where, when your system runs into a serious kernel violation.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3254778/linux/what-is-a-linux-oops.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[1]:https://packages.debian.org/stretch/kerneloops
|
140
sources/tech/20180216 Q4OS Makes Linux Easy for Everyone.md
Normal file
@ -0,0 +1,140 @@
|
||||
Q4OS Makes Linux Easy for Everyone
|
||||
======
|
||||
|
||||

|
||||
|
||||
Modern Linux distributions tend to target a variety of users. Some claim to offer a flavor of the open source platform that anyone can use. And, I’ve seen some such claims succeed with aplomb, while others fall flat. [Q4OS][1] is one of those odd distributions that doesn’t bother to make such a claim but pulls off the feat anyway.
|
||||
|
||||
So, who is the primary market for Q4OS? According to its website, the distribution is a:
|
||||
|
||||
“fast and powerful operating system based on the latest technologies while offering highly productive desktop environment. We focus on security, reliability, long-term stability and conservative integration of verified new features. System is distinguished by speed and very low hardware requirements, runs great on brand new machines as well as legacy computers. It is also very applicable for virtualization and cloud computing.”
|
||||
|
||||
What’s very interesting here is that the Q4OS developers offer commercial support for the desktop. Said support can cover the likes of system customization (including core level API programming) as well as user interface modifications.
|
||||
|
||||
Once you understand this (and have installed Q4OS), the target audience becomes quite obvious: business users looking for a Windows XP/7 replacement. But that should not prevent home users from giving Q4OS a try. It's a Linux distribution with a few unique tools that come together to make a solid desktop distribution.
|
||||
|
||||
Let’s take a look at Q4OS and see if it’s a version of Linux that might work for you.
|
||||
|
||||
### What Q4OS is all about
|
||||
|
||||
Q4OS does an admirable job of being the open source equivalent of Windows XP/7. Out of the box, it pulls this off with the help of the [Trinity Desktop][2] (a fork of KDE). With a few tricks up its sleeve, Q4OS turns the Trinity Desktop into a remarkably similar desktop (Figure 1).
|
||||
|
||||
![default desktop][4]
|
||||
|
||||
Figure 1: The Q4OS default desktop.
|
||||
|
||||
[Used with permission][5]
|
||||
|
||||
When you fire up the desktop, you will be greeted by a Welcome screen that makes it very easy for new users to start setting up their desktop with just a few clicks. From this window, you can:
|
||||
|
||||
* Run the Desktop Profiler (which allows you to select which desktop environment to use as well as between a full-featured desktop, a basic desktop, or a minimal desktop—Figure 2).
|
||||
|
||||
* Install applications (which opens the Synaptic Package Manager).
|
||||
|
||||
* Install proprietary codecs (which installs all the necessary media codecs for playing audio and video).
|
||||
|
||||
* Turn on Desktop effects (if you want more eye candy, turn this on).
|
||||
|
||||
* Switch to Kickoff start menu (switches from the default start menu to the newer kickoff menu).
|
||||
|
||||
* Set Autologin (allows you to set login such that it won’t require your password upon boot).
|
||||
|
||||
|
||||
|
||||
|
||||
![Desktop Profiler][7]
|
||||
|
||||
Figure 2: The Desktop Profiler allows you to further customize your desktop experience.
|
||||
|
||||
[Used with permission][5]
|
||||
|
||||
If you want to install a different desktop environment, open up the Desktop Profiler and then click the Desktop environments drop-down, in the upper left corner of the window. A new window will appear, where you can select your desktop of choice from the drop-down (Figure 3). Once back at the main Profiler Window, select which type of desktop profile you want, and then click Install.
|
||||
|
||||
![Desktop Profiler][9]
|
||||
|
||||
Figure 3: Installing a different desktop is quite simple from within the Desktop Profiler.
|
||||
|
||||
[Used with permission][5]
|
||||
|
||||
Note that installing a different desktop will not wipe the default desktop. Instead, it will allow you to select between the two desktops (at the login screen).
|
||||
|
||||
### Installed software
|
||||
|
||||
After selecting full-featured desktop, from the Desktop Profiler, I found the following user applications ready to go:
|
||||
|
||||
* LibreOffice 5.2.7.2
|
||||
|
||||
* VLC 2.2.7
|
||||
|
||||
* Google Chrome 64.0.3282
|
||||
|
||||
* Thunderbird 52.6.0 (Includes Lightning addon)
|
||||
|
||||
* Synaptic 0.84.2
|
||||
|
||||
* Konqueror 14.0.5
|
||||
|
||||
* Firefox 52.6.0
|
||||
|
||||
* Shotwell 0.24.5
|
||||
|
||||
|
||||
|
||||
|
||||
Obviously some of those applications are well out of date. Since this distribution is based on Debian, we can run an update/upgrade with the commands:
|
||||
```
sudo apt update
sudo apt upgrade
```
|
||||
|
||||
However, after running both commands, it seems everything is up to date. This particular release (2.4) is an LTS release (supported until 2022). Because of this, expect software to be a bit behind. If you want to test out the bleeding edge version (based on Debian “Buster”), you can download the testing image [here][10].
|
||||
|
||||
### Security oddity
|
||||
|
||||
There is one rather disturbing “feature” found in Q4OS. In the developer’s quest to make the distribution closely resemble Windows, they’ve made it such that installing software (from the command line) doesn’t require a password! You read that correctly. If you open the Synaptic package manager, you’re asked for a password. However (and this is a big however), open up a terminal window and issue a command like sudo apt-get install gimp. At this point, the software will install… without requiring the user to type a sudo password.
|
||||
|
||||
Did you cringe at that? You should.
|
||||
|
||||
I get it, the developers want to ease away the burden of Linux and make a platform the masses could easily adapt to. They’ve done a splendid job of doing just that. However, in the process of doing so, they’ve bypassed a crucial means of security. Is having as near an XP/7 clone as you can find on Linux worth that lack of security? I would say that if it enables more people to use Linux, then yes. But the fact that they’ve required a password for Synaptic (the GUI tool most Windows users would default to for software installation) and not for the command-line tool makes no sense. On top of that, bypassing passwords for the apt and dpkg commands could make for a significant security issue.
|
||||
|
||||
Fear not, there is a fix. For those that prefer to require passwords for the command line installation of software, you can open up the file /etc/sudoers.d/30_q4os_apt and comment out the following three lines:
|
||||
```
%sudo ALL = NOPASSWD: /usr/bin/apt-get *
%sudo ALL = NOPASSWD: /usr/bin/apt-key *
%sudo ALL = NOPASSWD: /usr/bin/dpkg *
```
|
||||
|
||||
Once commented out, save and close the file, and reboot the system. At this point, users will now be prompted for a password, should they run the apt-get, apt-key, or dpkg commands.
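The edit can also be scripted. The sketch below comments the rules out on a local copy of the file; on the real system you would edit /etc/sudoers.d/30_q4os_apt with `visudo -f` to guard against syntax errors:

```shell
# Recreate the file's contents locally (the three lines shown above)
f=/tmp/30_q4os_apt
cat > "$f" <<'EOF'
%sudo ALL = NOPASSWD: /usr/bin/apt-get *
%sudo ALL = NOPASSWD: /usr/bin/apt-key *
%sudo ALL = NOPASSWD: /usr/bin/dpkg *
EOF

# Prefix each rule with a comment marker
sed -i 's/^%sudo/# %sudo/' "$f"
grep -c '^# %sudo' "$f"   # prints 3
```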
|
||||
|
||||
### A worthy contender
|
||||
|
||||
Setting aside the security curiosity, Q4OS is one of the best attempts at recreating Windows XP/7 I’ve come across in a while. If you have users who fear change, and you want to migrate them away from Windows, this distribution might be exactly what you need. I would, however, highly recommend you re-enable passwords for the apt-get, apt-key, and dpkg commands… just to be on the safe side.
|
||||
|
||||
In any case, the addition of the Desktop Profiler, and the ability to easily install alternative desktops, makes Q4OS a distribution that just about anyone could use.
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux" ][11]course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2018/2/q4os-makes-linux-easy-everyone
|
||||
|
||||
作者:[JACK WALLEN][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/jlwallen
|
||||
[1]:https://q4os.org
|
||||
[2]:https://www.trinitydesktop.org/
|
||||
[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_1.jpg?itok=dalJk9Xf (default desktop)
|
||||
[5]:/licenses/category/used-permission
|
||||
[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_2.jpg?itok=GlouIm73 (Desktop Profiler)
|
||||
[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_3.jpg?itok=riSTP_1z (Desktop Profiler)
|
||||
[10]:https://q4os.org/downloads2.html
|
||||
[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
59
sources/tech/20180217 Louis-Philippe Véronneau .md
Normal file
@ -0,0 +1,59 @@
|
||||
Louis-Philippe Véronneau -
|
||||
======
|
||||
I've been watching [Critical Role][1]1 for a while now and since I've started my master's degree I haven't had much time to sit down and watch the show on YouTube as I used to do.
|
||||
|
||||
I thus started listening to the podcasts instead; that way, I can listen to the show while I'm doing other productive tasks. Pretty quickly, I grew tired of manually downloading every episode each time I finished the last one. To make things worse, the podcast is hosted on PodBean and they won't let you download episodes on a mobile device without their app. Grrr.
|
||||
|
||||
After the 10th time opening the terminal on my phone to download the podcast using some `wget` magic I decided enough was enough: I was going to write a dumb script to download them all in one batch.
|
||||
|
||||
I'm a little ashamed to say it took me more time than I had intended... The PodBean website uses semi-randomized URLs, so I could not figure out a way to guess the paths to the hosted audio files. I considered using `youtube-dl` to get the DASH version of the show on YouTube, but Google has been heavily throttling DASH streams recently. Not cool Google.
|
||||
|
||||
I then had the idea to use iTunes' RSS feed to get the audio files. Surely they would somehow be included there? Of course Apple doesn't give you a simple RSS feed link on the iTunes podcast page, so I had to rummage around and eventually found out this is the link you have to use:
|
||||
```
https://itunes.apple.com/lookup?id=1243705452&entity=podcast
```
|
||||
|
||||
Surprise surprise: from the JSON file this link points to, I found out the main Critical Role podcast page [has a proper RSS feed][2]. In my defense, the RSS button on the main podcast page brings you to some PodBean crap page.
|
||||
|
||||
Anyway, once you have the RSS feed, it's only a matter of using `grep` and `sed` until you get what you want.
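The grep part is simple enough to sketch on a toy feed (the item titles and URLs below are invented stand-ins; the real feed is the one linked above):

```shell
# A two-item stand-in for the real RSS feed
cat > /tmp/feed.rss <<'EOF'
<item><title>Episode 1</title><enclosure url="https://cdn.example.org/ep1.mp3"/></item>
<item><title>Episode 2</title><enclosure url="https://cdn.example.org/ep2.mp3"/></item>
EOF

# Pull out every mp3 URL, one per line
grep -o 'http[^"]*mp3' /tmp/feed.rss
```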
|
||||
|
||||
Around 20 minutes later, I had downloaded all the episodes, for a total of 22 GB! Victory dance!
|
||||
|
||||
Video clip loop of the Critical Role cast doing a victory dance.
|
||||
|
||||
### Script
|
||||
|
||||
Here's the bash script I wrote. You will need `recode` to run it, as the RSS feed includes some HTML entities.
|
||||
```
#!/bin/bash
# Get the whole RSS feed
wget -qO /tmp/criticalrole.rss http://criticalrolepodcast.geekandsundry.com/feed/

# Extract the URLs and the episode titles
mp3s=( $(grep -o "http.\+mp3" /tmp/criticalrole.rss) )
titles=( $(tail -n +45 /tmp/criticalrole.rss | grep -o "<title>.\+</title>" \
         | sed -r 's@</?title>@@g; s@ @\\@g' | recode html..utf8) )

# Download all the episodes under their titles
for i in ${!titles[*]}
do
    wget -qO "$(sed -e "s@\\\@\\ @g" <<< "${titles[$i]}").mp3" ${mp3s[$i]}
done
```
|
||||
|
||||
1 - For those of you not familiar with Critical Role, it's a web series where a group of voice actresses and actors from LA play Dungeons & Dragons. It's so good that even people like me who have never played D&D can enjoy it.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://veronneau.org/downloading-all-the-critical-role-podcasts-in-one-batch.html
|
||||
|
||||
作者:[Louis-Philippe Véronneau][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://veronneau.org/
|
||||
[1]:https://en.wikipedia.org/wiki/Critical_Role
|
||||
[2]:http://criticalrolepodcast.geekandsundry.com/feed/
|
@ -1,163 +0,0 @@
|
||||
The List Of Useful Bash Keyboard Shortcuts
|
||||
======
|
||||
translating by heart4lor
|
||||
|
||||

|
||||
|
||||
Nowadays, I spend more time in the Terminal, trying to accomplish more in the CLI than the GUI. I have learned many Bash tricks over time, and here is a list of useful Bash shortcuts that every Linux user should know to get things done faster in their Bash shell. I won't claim this list is complete, but it contains enough to move around your Bash shell faster than before. Learning to navigate faster in the Bash shell not only saves time, but also makes you proud of yourself for learning something worthwhile. Well, let's get started.
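One note before the list: most of these chords come from the readline library, and bindings can be remapped in ~/.inputrc, so a chord may behave differently on your system. When in doubt, you can dump the active bindings from an interactive shell. A minimal sketch (the exact output varies with your configuration):

```shell
# List readline function bindings from an interactive bash and pick out
# the chord mapped to beginning-of-line (usually "\C-a")
bash -i -c 'bind -p' 2>/dev/null | grep 'beginning-of-line'
```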
|
||||
|
||||
### List Of Useful Bash Keyboard Shortcuts
|
||||
|
||||
#### ALT key shortcuts
|
||||
|
||||
1\. **ALT+A** – Go to the beginning of a line.
|
||||
|
||||
2\. **ALT+B** – Move one character before the cursor.
|
||||
|
||||
3\. **ALT+C** – Interrupts the running command/process. Same as CTRL+C.
|
||||
|
||||
4\. **ALT+D** – Deletes the word after the cursor (from the cursor to the end of the current word).
|
||||
|
||||
5\. **ALT+F** – Move forward one character.
|
||||
|
||||
6\. **ALT+T** – Swaps the last two words.
|
||||
|
||||
7\. **ALT+U** – Uppercase all characters in a word after the cursor.
|
||||
|
||||
8\. **ALT+L** – Lowercase all characters in a word after the cursor.
|
||||
|
||||
9\. **ALT+R** – Undo any changes to a command that you have brought from the history if you’ve edited it.
|
||||
|
||||
As you see in the above output, I pulled a command using reverse search, changed the last characters in that command, and then reverted the changes using **ALT+R**.
|
||||
|
||||
10\. **ALT+.** (note the dot at the end) – Use the last word of the previous command.
|
||||
|
||||
If you want to use the same argument for multiple commands, you can use this shortcut to bring back the last word of the previous command. For instance, I need to sort the contents of a directory in reverse using the “ls -r” command. I also want to view my kernel version using “uname -r”. In both commands, the common word is “-r”. This is where the ALT+. shortcut comes in handy. First run the “ls -r” command to do the reverse sorting, then type “uname ” and press ALT+. to bring in the last word “-r”.
|
||||
|
||||
#### CTRL key shortcuts
|
||||
|
||||
1\. **CTRL+A** – Quickly move to the beginning of line.
|
||||
|
||||
Let us say you’re typing a command something like the one below. When you reach the end of the line, you notice there is a typo in the first character:
|
||||
```
$ gind . -mtime -1 -type
```
|
||||
|
||||
Did you notice? I typed “gind” instead of “find” in the above command. You can correct this error by pressing the left arrow all the way to the first letter and replace “g” with “f”. Alternatively, just hit the **CTRL+A** or **Home** key to instantly go to the beginning of the line and replace the misspelled character. This will save you a few seconds.
|
||||
|
||||
2\. **CTRL+B** – To move backward one character.
|
||||
|
||||
This shortcut key can move the cursor backward one character i.e one character before the cursor. Alternatively, you can use LEFT arrow to move backward one character.
|
||||
|
||||
3\. **CTRL+C** – Stop the currently running command
|
||||
|
||||
If a command takes too long to complete or if you mistakenly run it, you can forcibly stop or quit the command by using **CTRL+C**.
|
||||
|
||||
4\. **CTRL+D** – Delete the character under the cursor.
|
||||
|
||||
If you have a system where the DELETE key isn’t working, you can use **CTRL+D** to delete the character under the cursor. On an empty line, this shortcut also logs you out of the current session, similar to exit.
|
||||
|
||||
5\. **CTRL+E** – Move to the end of line
|
||||
|
||||
After you corrected any misspelled word in the start of a command or line, just hit **CTRL+E** to quickly move to the end of the line. Alternatively, you can use END key in your keyboard.
|
||||
|
||||
6\. **CTRL+F** – Move forward one character
|
||||
|
||||
If you want to move the cursor forward one character after another, just press **CTRL+F** instead of RIGHT arrow key.
|
||||
|
||||
7\. **CTRL+G** – Leave the history searching mode without running the command.
|
||||
|
||||
As you see in the above screenshot, I did the reverse search, but didn’t execute the command and left the history searching mode.
|
||||
|
||||
8\. **CTRL+H** – Delete the character before the cursor, same as BACKSPACE.
|
||||
|
||||
9\. **CTRL+J** – Same as ENTER/RETURN key.
|
||||
|
||||
ENTER key is not working? No problem! **CTRL+J** or **CTRL+M** can be used as an alternative to ENTER key.
|
||||
|
||||
10\. **CTRL+K** – Delete all characters after the cursor.
|
||||
|
||||
You don’t have to keep hitting the DELETE key to delete the characters after the cursor. Just press **CTRL+K** to delete all characters after the cursor.
|
||||
|
||||
11\. **CTRL+L** – Clears the screen and redisplays the line.
|
||||
|
||||
Don’t type “clear” to clear the screen. Just press CTRL+L to clear and redisplay the currently typed line.
|
||||
|
||||
12\. **CTRL+M** – Same as CTRL+J or RETURN.
|
||||
|
||||
13\. **CTRL+N** – Display next line in command history.
|
||||
|
||||
You can also use DOWN arrow.
|
||||
|
||||
14\. **CTRL+O** – Run the command that you found using reverse search i.e CTRL+R.
|
||||
|
||||
15\. **CTRL+P** – Displays the previous line in command history.
|
||||
|
||||
You can also use UP arrow.
|
||||
|
||||
16\. **CTRL+R** – Searches the history backward (Reverse search).
|
||||
|
||||
17\. **CTRL+S** – Searches the history forward.
|
||||
|
||||
18\. **CTRL+T** – Swaps the last two characters.
|
||||
|
||||
This is one of my favorite shortcuts. Let us say you typed “sl” instead of “ls”. No problem! This shortcut will transpose the characters as in the below screenshot.
|
||||
|
||||
![][2]
|
||||
|
||||
19\. **CTRL+U** – Delete all characters before the cursor (Kills backward from point to the beginning of line).
|
||||
|
||||
This shortcut will delete all typed characters backward at once.
|
||||
|
||||
20\. **CTRL+V** – Makes the next character typed verbatim
|
||||
|
||||
21\. **CTRL+W** – Delete the words before the cursor.
|
||||
|
||||
Don’t confuse it with CTRL+U. CTRL+W won’t delete everything behind the cursor, only a single word.
|
||||
|
||||
![][3]
|
||||
|
||||
22\. **CTRL+X** – Lists the possible filename completions of the current word.
|
||||
|
||||
23\. **CTRL+XX** – Move between start of command line and current cursor position (and back again).
|
||||
|
||||
24\. **CTRL+Y** – Retrieves last item that you deleted or cut.
|
||||
|
||||
Remember, we deleted the word “-al” using CTRL+W in shortcut 21. You can retrieve that word instantly using CTRL+Y.
|
||||
|
||||
![][4]
|
||||
|
||||
See? I didn’t type “-al”. Instead, I pressed CTRL+Y to retrieve it.
|
||||
|
||||
25\. **CTRL+Z** – Stops the current command.
|
||||
|
||||
You may very well know this shortcut. It suspends the currently running command. You can resume it with **fg** in the foreground or **bg** in the background.
|
||||
|
||||
26\. **CTRL+[** – Equivalent to ESC key.
|
||||
|
||||
#### Miscellaneous
|
||||
|
||||
1\. **!!** – Repeats the last command.
|
||||
|
||||
2\. **ESC+t** – Swaps the last two words (same as ALT+T).
|
||||
|
||||
That’s all I have in mind now. I will keep adding more if I come across any other Bash shortcut keys in the future. If you think there is a mistake in this article, please notify me in the comments section below. I will update it ASAP.
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/list-useful-bash-keyboard-shortcuts/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLT-1.gif
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLW-1.gif
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLY-1.gif
|
@ -0,0 +1,119 @@
|
||||
Learn to code with Thonny — a Python IDE for beginners
|
||||
======
|
||||
|
||||

|
||||
Learning to program is hard. Even when you finally get your colons and parentheses right, there is still a big chance that the program doesn’t do what you intended. Commonly, this means you overlooked something or misunderstood a language construct, and you need to locate the place in the code where your expectations and reality diverge.
|
||||
|
||||
Programmers usually tackle this situation with a tool called a debugger, which allows running their program step-by-step. Unfortunately, most debuggers are optimized for professional usage and assume the user already knows the semantics of language constructs (e.g. function call) very well.
|
||||
|
||||
Thonny is a beginner-friendly Python IDE, developed at the [University of Tartu][1] in Estonia, which takes a different approach, as its debugger is designed specifically for learning and teaching programming.
|
||||
|
||||
Although Thonny is suitable for even total beginners, this post is meant for readers who have at least some experience with Python or another imperative language.
|
||||
|
||||
### Getting started
|
||||
|
||||
Thonny has been included in the Fedora repositories since version 27. Install it with `sudo dnf install thonny` or with a graphical tool of your choice (such as Software).
|
||||
|
||||
When first launching Thonny, it does some preparations and then presents an empty editor and the Python shell. Copy following program text into the editor and save it into a file (Ctrl+S).
|
||||
```
n = 1
while n < 5:
    print(n * "*")
    n = n + 1
```
|
||||
|
||||
Let’s first run the program in one go. For this press F5 on the keyboard. You should see a triangle made of asterisks appear in the shell pane.
|
||||
|
||||
![A simple program in Thonny][2]
|
||||
|
||||
Did Python just analyze your code and understand that you wanted to print a triangle? Let’s find out!
|
||||
|
||||
Start by selecting “Variables” from the “View” menu. This opens a table which will show us how Python manages the program’s variables. Now run the program in debug mode by pressing Ctrl+F5 (or Ctrl+Shift+F5 in XFCE). In this mode Thonny makes Python pause before each step it takes. You should see the first line of the program getting surrounded with a box. We’ll call this the focus; it indicates the part of the code Python is going to execute next.
|
||||
|
||||
![Thonny debugger focus][3]
|
||||
|
||||
The piece of code you see in the focus box is called assignment statement. For this kind of statement, Python is supposed to evaluate the expression on the right and store the value under the name shown on the left. Press F7 to take the next step. You will see that Python focused on the right part of the statement. In this case the expression is really simple, but for generality Thonny presents the expression evaluation box, which allows turning expressions into values. Press F7 again to turn the literal 1 into value 1. Now Python is ready to do the actual assignment — press F7 again and you should see the variable n with value 1 appear in the variables table.
|
||||
|
||||
![Thonny with variables table][4]
|
||||
|
||||
Continue pressing F7 and observe how Python moves forward with really small steps. Does it look like something which understands the purpose of your code or more like a dumb machine following simple rules?
|
||||
|
||||
### Function calls
|
||||
|
||||
The function call is a programming concept which often causes a great deal of confusion to beginners. On the surface there is nothing complicated — you give a name to a chunk of code and refer to it (call it) somewhere else in the code. Traditional debuggers show us that when you step into the call, the focus jumps into the function definition (and later magically back to the original location). Is that the whole story? Do we need to care?
|
||||
|
||||
It turns out the “jump model” is sufficient only for the simplest functions. Understanding parameter passing, local variables, returning, and recursion all benefit from the notion of the stack frame. Luckily, Thonny can explain this concept intuitively without sweeping important details under the carpet.
|
||||
|
||||
Copy following recursive program into Thonny and run it in debug mode (Ctrl+F5 or Ctrl+Shift+F5).
|
||||
```
def factorial(n):
    if n == 0:
        return 1
    else:
        return factorial(n-1) * n

print(factorial(4))
```
|
||||
|
||||
Press F7 repeatedly until you see the expression factorial(4) in the focus box. When you take the next step, you see that Thonny opens a new window containing function code, another variables table and another focus box (move the window to see that the old focus box is still there).
|
||||
|
||||
![Thonny stepping through a recursive function][5]
|
||||
|
||||
This window represents a stack frame, the working area for resolving a function call. Several such windows on top of each other is called the call stack. Notice the relationship between argument 4 on the call site and entry n in the local variables table. Continue stepping with F7 and observe how new windows get created on each call and destroyed when the function code completes and how the call site gets replaced by the return value.
|
||||
|
||||
### Values vs. references
|
||||
|
||||
Now let’s make an experiment inside the Python shell. Start by typing in the statements shown in the screenshot below:
|
||||
|
||||
![Thonny shell showing list mutation][6]
|
||||
|
||||
As you see, we appended to list b, but list a also got updated. You may know why this happened, but what’s the best way to explain it to a beginner?
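The same experiment runs in any Python interpreter, not just Thonny's shell:

```python
# Assignment copies the reference, not the list, so both names
# end up looking at the same object.
a = [1, 2, 3]
b = a
b.append(4)
print(a)        # [1, 2, 3, 4]  (a changed too)
print(a is b)   # True: one list, two names
```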
|
||||
|
||||
When teaching lists to my students I tell them that I have been lying about the Python memory model. It is actually not as simple as the variables table suggests. I tell them to restart the interpreter (the red button on the toolbar), select “Heap” from the “View” menu, and make the same experiment again. If you do this, you will see that the variables table doesn’t contain the values anymore; they actually live in another table called “Heap”. The role of the variables table is to map the variable names to addresses (or ID-s) which refer to the rows in the heap table. As an assignment changes only the variables table, the statement b = a only copied the reference to the list, not the list itself. This explains why we see the change via both variables.
|
||||
|
||||
![Thonny in heap mode][7]
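Outside Thonny you can peek at the same machinery with id(), which plays roughly the role of the addresses in the heap table:

```python
# Two names bound by assignment share one id; an explicit copy gets a new one.
a = [1, 2, 3]
b = a            # copies the reference
c = list(a)      # builds a new list with the same values
print(id(a) == id(b))   # True
print(id(a) == id(c))   # False
print(a == c)           # True: equal values, distinct objects
```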
|
||||
|
||||
(Why do I postpone telling the truth about the memory model until the topic of lists? Does Python store lists differently compared to floats or strings? Go ahead and use Thonny’s heap mode to find this out! Tell me in the comments what you think!)
|
||||
|
||||
If you want to understand the references system deeper, copy following program to Thonny and small-step (F7) through it with the heap table open.
|
||||
```
def do_something(lst, x):
    lst.append(x)

a = [1,2,3]
n = 4
do_something(a, n)
print(a)
```
|
||||
|
||||
Even though the “heap mode” shows us the authentic picture, it is rather inconvenient to use. For this reason, I recommend you now switch back to normal mode (unselect “Heap” in the View menu), but remember that the real model includes variables, references, and values.
|
||||
|
||||
### Conclusion
|
||||
|
||||
The features I touched in this post were the main reason for creating Thonny. It’s easy to form misconceptions about both function calls and references but traditional debuggers don’t really help in reducing the confusion.
|
||||
|
||||
Besides these distinguishing features, Thonny offers several other beginner friendly tools. Please look around at [Thonny’s homepage][8] to learn more!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/learn-code-thonny-python-ide-beginners/

作者:[Aivar Annamaa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/
[1]:https://www.ut.ee/en
[2]:https://fedoramagazine.org/wp-content/uploads/2017/12/scr1.png
[3]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr2.png
[4]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr3.png
[5]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr4.png
[6]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr5.png
[7]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr6.png
[8]:http://thonny.org
@ -1,177 +0,0 @@
translating by kimii
Protecting Code Integrity with PGP — Part 2: Generating Your Master Key
======

![]()

In this article series, we're taking an in-depth look at using PGP and providing practical guidelines for developers working on free software projects. In the previous article, we provided an introduction to [basic tools and concepts][1]. In this installment, we show how to generate and protect your master PGP key.

### Checklist

1. Generate a 4096-bit RSA master key (ESSENTIAL)

2. Back up the master key using paperkey (ESSENTIAL)

3. Add all relevant identities (ESSENTIAL)

### Considerations

#### Understanding the "Master" (Certify) key

In this and the next section, we'll talk about the "master key" and "subkeys." It is important to understand the following:
1. There are no technical differences between the "master key" and "subkeys."

2. At creation time, we assign functional limitations to each key by giving it specific capabilities.

3. A PGP key can have four capabilities:

  * [S] key can be used for signing
  * [E] key can be used for encryption
  * [A] key can be used for authentication
  * [C] key can be used for certifying other keys

4. A single key may have multiple capabilities.

The key carrying the [C] (certify) capability is considered the "master" key because it is the only key that can be used to indicate relationships with other keys. Only the [C] key can be used to:

  * Add or revoke other keys (subkeys) with S/E/A capabilities
  * Add, change or revoke identities (uids) associated with the key
  * Add or change the expiration date on itself or any subkey
  * Sign other people's keys for web of trust purposes

In the Free Software world, the [C] key is your digital identity. Once you create that key, you should take extra care to protect it and prevent it from falling into malicious hands.

#### Before you create the master key

Before you create your master key, you need to pick your primary identity and your master passphrase.

##### Primary identity

Identities are strings using the same format as the "From" field in emails:
```
Alice Engineer <alice.engineer@example.org>
```

You can create new identities, revoke old ones, and change which identity is your "primary" one at any time. Since the primary identity is shown in all GnuPG operations, you should pick a name and address that are both professional and the most likely ones to be used for PGP-protected communication, such as your work address or the address you use for signing off on project commits.

##### Passphrase

The passphrase is used exclusively for encrypting the private key with a symmetric algorithm while it is stored on disk. If the contents of your .gnupg directory ever get leaked, a good passphrase is the last line of defense between the thief and their ability to impersonate you online, which is why it is important to set up a good passphrase.

A good guideline for a strong passphrase is 3-4 words from a rich or mixed dictionary that are not quotes from popular sources (songs, books, slogans). You'll be using this passphrase fairly frequently, so it should be both easy to type and easy to remember.
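As a rough illustration of the word-based approach (a sketch only, not a substitute for a proper diceware word list), a passphrase can be assembled from randomly chosen words using Python's `secrets` module. The word list below is a tiny hypothetical sample; real lists contain thousands of entries, which is what gives the passphrase its strength:

```python
import secrets

# Hypothetical miniature word list for illustration only; a real
# diceware-style list has thousands of entries.
words = ["orbit", "velvet", "quartz", "meadow", "ember", "lantern",
         "crater", "walnut", "signal", "harbor"]

# Pick 4 words uniformly at random using a CSPRNG.
passphrase = " ".join(secrets.choice(words) for _ in range(4))
print(passphrase)
```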
##### Algorithm and key strength

Even though GnuPG has had support for Elliptic Curve crypto for a while now, we'll be sticking to RSA keys, at least for a little while longer. While it is possible to start using ED25519 keys right now, it is likely that you will come across tools and hardware devices that will not be able to handle them correctly.

You may also wonder why the master key is 4096-bit, if later in the guide we state that 2048-bit keys should be good enough for the lifetime of RSA public key cryptography. The reasons are mostly social and not technical: master keys happen to be the most visible ones on the keychain, and some of the developers you interact with will inevitably judge you negatively if your master key has fewer bits than theirs.

#### Generate the master key

To generate your new master key, issue the following command, putting in the right values instead of "Alice Engineer:"
```
$ gpg --quick-generate-key 'Alice Engineer <alice@example.org>' rsa4096 cert
```

A dialog will pop up asking you to enter the passphrase. Then, you may need to move your mouse around or press some keys to generate enough entropy until the command completes.

Review the output of the command; it will be something like this:
```
pub   rsa4096 2017-12-06 [C] [expires: 2019-12-06]
      111122223333444455556666AAAABBBBCCCCDDDD
uid           Alice Engineer <alice@example.org>
```

Note the long string on the second line -- that is the full fingerprint of your newly generated key. Key IDs can be represented in three different forms:

  * Fingerprint, a full 40-character key identifier
  * Long, the last 16 characters of the fingerprint (AAAABBBBCCCCDDDD)
  * Short, the last 8 characters of the fingerprint (CCCCDDDD)

You should avoid using 8-character "short key IDs," as they are not sufficiently unique.
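The three forms are just suffixes of the same 40-character fingerprint, which a quick sketch makes concrete (using the example fingerprint from the output above):

```python
# The long and short key IDs are simply the trailing characters of
# the full 40-character fingerprint.
fpr = "111122223333444455556666AAAABBBBCCCCDDDD"

long_id = fpr[-16:]
short_id = fpr[-8:]

print(long_id)   # AAAABBBBCCCCDDDD
print(short_id)  # CCCCDDDD
```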
At this point, I suggest you open a text editor, copy the fingerprint of your new key, and paste it there. You'll need to use it for the next few steps, so having it close by will be handy.

#### Back up your master key

For disaster recovery purposes -- and especially if you intend to use the Web of Trust and collect key signatures from other project developers -- you should create a hardcopy backup of your private key. This is supposed to be the "last resort" measure in case all other backup mechanisms have failed.

The best way to create a printable hardcopy of your private key is by using the paperkey software written for this very purpose. Paperkey is available on all Linux distros, and is installable via brew install paperkey on Macs.

Run the following command, replacing [fpr] with the full fingerprint of your key:
```
$ gpg --export-secret-key [fpr] | paperkey -o /tmp/key-backup.txt
```

The output will be in a format that is easy to OCR or input by hand, should you ever need to recover it. Print out that file, then take a pen and write the key passphrase on the margin of the paper. This is a required step because the key printout is still encrypted with the passphrase, and if you ever change the passphrase on your key, you will not remember what it used to be when you first created it -- guaranteed.

Put the resulting printout and the hand-written passphrase into an envelope and store it in a secure and well-protected place, preferably away from your home, such as your bank vault.

**Note on printers:** Long gone are the days when printers were dumb devices connected to your computer's parallel port. These days they have full operating systems, hard drives, and cloud integration. Since the key content we send to the printer will be encrypted with the passphrase, this is a fairly safe operation, but use your best paranoid judgement.
#### Add relevant identities

If you have multiple relevant email addresses (personal, work, open-source project, etc.), you should add them to your master key. You don't need to do this for any addresses that you don't expect to use with PGP (e.g., probably not your school alumni address).

The command is (put the full key fingerprint instead of [fpr]):
```
$ gpg --quick-add-uid [fpr] 'Alice Engineer <allie@example.net>'
```

You can review the UIDs you've already added using:
```
$ gpg --list-key [fpr] | grep ^uid
```

##### Pick the primary UID

GnuPG will make the latest UID you add your primary UID, so if that is different from what you want, you should fix it:
```
$ gpg --quick-set-primary-uid [fpr] 'Alice Engineer <alice@example.org>'
```

Next time, we'll look at generating PGP subkeys, which are the keys you'll actually be using for day-to-day work.
Learn more about Linux through the free ["Introduction to Linux"][2] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/PGP/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key

作者:[KONSTANTIN RYABITSEV][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/mricon
[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
@ -1,3 +1,4 @@
leemeans translating
How to block local spoofed addresses using the Linux firewall
======

@ -1,3 +1,5 @@
translating---geekpi

Protecting Code Integrity with PGP — Part 3: Generating PGP Subkeys
======

@ -1,118 +0,0 @@
Linux LAN Routing for Beginners: Part 2
======



Last week [we reviewed IPv4 addressing][1] and using the network admin's indispensable ipcalc tool. Now we're going to make some nice LAN routers.

VirtualBox and KVM are wonderful for testing routing, and the examples in this article are all performed in KVM. If you prefer to use physical hardware, then you need three computers: one to act as the router, and the other two to represent two different networks. You also need two Ethernet switches and cabling.

The examples assume a wired Ethernet LAN, and we shall pretend there are some bridged wireless access points for a realistic scenario, although we're not going to do anything with them. (I have not yet tried all-WiFi routing and have had mixed success with connecting a mobile broadband device to an Ethernet LAN, so look for those in a future installment.)

### Network Segments

The simplest network segment is two computers in the same address space connected to the same switch. These two computers do not need a router to communicate with each other. A useful term is _broadcast domain_, which describes a group of hosts that are all in the same network. They may all be connected to a single Ethernet switch, or to multiple switches. A broadcast domain may include two different networks connected by an Ethernet bridge, which makes the two networks behave as a single network. Wireless access points are typically bridged to a wired Ethernet network.

A broadcast domain can talk to a different broadcast domain only when they are connected by a network router.
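The "same network" test a host performs before deciding whether a router is needed can be sketched with Python's standard `ipaddress` module. The addresses are the ones used in this article; the /24 prefix length is an assumption:

```python
import ipaddress

def same_network(ip_a, ip_b, prefix=24):
    """Return True when both addresses fall within the same /prefix network."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix}").network
    return net_a == net_b

# Two hosts on the same network can talk directly...
print(same_network("192.168.110.125", "192.168.110.126"))  # True
# ...but reaching a host on a different network requires a router.
print(same_network("192.168.110.125", "192.168.120.135"))  # False
```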
### Simple Network

The following example commands are not persistent, and your changes will vanish with a restart.

A broadcast domain needs a router to talk to other broadcast domains. Let's illustrate this with two computers and the `ip` command. Our two computers are 192.168.110.125 and 192.168.110.126, and they are plugged into the same Ethernet switch. In VirtualBox or KVM, you automatically create a virtual switch when you configure a new network, so when you assign a network to a virtual machine, it's like plugging it into a switch. Use `ip addr show` to see your addresses and network interface names. The two hosts can ping each other.

Now add an address in a different network to one of the hosts:
```
# ip addr add 192.168.120.125/24 dev ens3
```

You have to specify the network interface name, which in the example is ens3. It is not required to add the network prefix, in this case /24, but it never hurts to be explicit. Check your work with `ip`. The example output is trimmed for clarity:
```
$ ip addr show
ens3:
    inet 192.168.110.125/24 brd 192.168.110.255 scope global dynamic ens3
       valid_lft 875sec preferred_lft 875sec
    inet 192.168.120.125/24 scope global ens3
       valid_lft forever preferred_lft forever
```

The host at 192.168.120.125 can ping itself (`ping 192.168.120.125`), and that is a good basic test to verify that your configuration is working correctly, but the second computer can't ping that address.

Now we need to do a bit of network juggling. Start by adding a third host to act as the router. This needs two virtual network interfaces and a second virtual network. In real life you want your router to have static IP addresses, but for now we'll let the KVM DHCP server do the work of assigning addresses, so you only need these two virtual networks:

  * First network: 192.168.110.0/24
  * Second network: 192.168.120.0/24

Then your router must be configured to forward packets. Packet forwarding should be disabled by default, which you can check with `sysctl`:
```
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
```

The zero means it is disabled. Enable it with this command:
```
# echo 1 > /proc/sys/net/ipv4/ip_forward
```
Then configure one of your other hosts to play the part of the second network by assigning the 192.168.120.0/24 virtual network to it in place of the 192.168.110.0/24 network, and then reboot the two "network" hosts, but not the router. (Or restart networking; I'm old and lazy and don't care what weird commands are required to restart services when I can just reboot.) The addressing should look something like this:

  * Host 1: 192.168.110.125
  * Host 2: 192.168.120.135
  * Router: 192.168.110.126 and 192.168.120.136

Now go on a ping frenzy, and ping everyone from everyone. There are some quirks with virtual machines and the various Linux distributions that produce inconsistent results, so some pings will succeed and some will not. Not succeeding is good, because it means you get to practice creating a static route. First, view the existing routing tables. The first example is from Host 1, and the second is from the router:
```
$ ip route show
default via 192.168.110.1 dev ens3 proto static metric 100
192.168.110.0/24 dev ens3 proto kernel scope link src 192.168.110.164 metric 100

$ ip route show
default via 192.168.110.1 dev ens3 proto static metric 100
default via 192.168.120.1 dev ens3 proto static metric 101
169.254.0.0/16 dev ens3 scope link metric 1000
192.168.110.0/24 dev ens3 proto kernel scope link
    src 192.168.110.126 metric 100
192.168.120.0/24 dev ens9 proto kernel scope link
    src 192.168.120.136 metric 100
```

This shows us that the default routes are the ones assigned by KVM. The 169.* address is the automatic link-local address, and we can ignore it. Then we see two more routes, the two that belong to our router. You can have multiple routes, and this example shows how to add a non-default route to Host 1:
```
# ip route add 192.168.120.0/24 via 192.168.110.126 dev ens3
```

This means Host 1 can reach the 192.168.120.0/24 network via the router interface 192.168.110.126. See how it works? Host 1 and the router need to be in the same address space to connect; the router then forwards to the other network.
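The rule behind that route can be sketched with the standard `ipaddress` module: the gateway must sit on a network Host 1 is directly connected to, while the destination does not.

```python
import ipaddress

# Host 1's directly connected network, the gateway from the route we
# just added, and a destination on the far side of the router.
host1_net = ipaddress.ip_network("192.168.110.0/24")
gateway = ipaddress.ip_address("192.168.110.126")
destination = ipaddress.ip_address("192.168.120.135")

# The gateway is directly reachable, so Host 1 can hand packets to it...
print(gateway in host1_net)      # True
# ...while the destination is not, which is why the static route is needed.
print(destination in host1_net)  # False
```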
This command deletes a route:
```
# ip route del 192.168.120.0/24
```

In real life, you're not going to be setting up routes manually like this, but rather using a router daemon and advertising your router via DHCP, but understanding the fundamentals is key. Come back next week to learn how to set up a nice easy router daemon that does the work for you.

Learn more about Linux through the free ["Introduction to Linux"][2] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/3/linux-lan-routing-beginners-part-2

作者:[CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/learn/intro-to-linux/2018/2/linux-lan-routing-beginners-part-1
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
@ -0,0 +1,135 @@
Getting started with Python for data science
======



Whether you're a budding data science enthusiast with a math or computer science background or an expert in an unrelated field, the possibilities data science offers are within your reach. And you don't need expensive, highly specialized enterprise software—the open source tools discussed in this article are all you need to get started.

[Python][1], its machine-learning and data science libraries ([pandas][2], [Keras][3], [TensorFlow][4], [scikit-learn][5], [SciPy][6], [NumPy][7], etc.), and its extensive list of visualization libraries ([Matplotlib][8], [pyplot][9], [Plotly][10], etc.) are excellent FOSS tools for beginners and experts alike. Easy to learn, popular enough to offer community support, and armed with the latest emerging techniques and algorithms developed for data science, these comprise one of the best toolsets you can acquire when starting out.

Many of these Python libraries are built on top of each other (known as dependencies), and the basis is the [NumPy][7] library. Designed specifically for data science, NumPy is often used to store relevant portions of datasets in its ndarray datatype, which is a convenient datatype for storing records from relational tables as CSV files or in any other format, and vice-versa. It is particularly convenient when scikit functions are applied to multidimensional arrays. SQL is great for querying databases, but for complex and resource-intensive data science operations, storing data in an ndarray boosts efficiency and speed (just make sure you have ample RAM when dealing with large datasets). When you get to using pandas for knowledge extraction and analysis, the almost seamless conversion between the DataFrame datatype in pandas and ndarray in NumPy creates a powerful combination for extraction and compute-intensive operations, respectively.
For a quick demonstration, let's fire up the Python shell and load an open dataset on crime statistics from the city of Baltimore into a pandas DataFrame variable, and view a portion of the loaded frame:
```
>>> import pandas as pd
>>> crime_stats = pd.read_csv('BPD_Arrests.csv')
>>> crime_stats.head()
```



We can now perform most of the queries on this pandas DataFrame that we can with SQL in databases. For instance, to get all the unique values of the "Description" attribute, the SQL query is:
```
SELECT DISTINCT "Description" FROM crime_stats;
```

This same query written for a pandas DataFrame looks like this:
```
>>> crime_stats['Description'].unique()
['COMMON ASSAULT' 'LARCENY' 'ROBBERY - STREET' 'AGG. ASSAULT'
 'LARCENY FROM AUTO' 'HOMICIDE' 'BURGLARY' 'AUTO THEFT'
 'ROBBERY - RESIDENCE' 'ROBBERY - COMMERCIAL' 'ROBBERY - CARJACKING'
 'ASSAULT BY THREAT' 'SHOOTING' 'RAPE' 'ARSON']
```

which returns a NumPy array (ndarray):
```
>>> type(crime_stats['Description'].unique())
<class 'numpy.ndarray'>
```
Next let's feed this data into a neural network to see how accurately it can predict the type of weapon used, given data such as the time the crime was committed, the type of crime, and the neighborhood in which it happened:
```
>>> from sklearn.neural_network import MLPClassifier
>>> import numpy as np
>>>
>>> prediction = crime_stats['Weapon']
>>> predictors = crime_stats[['CrimeTime', 'CrimeCode', 'Neighborhood']]
>>>
>>> nn_model = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
>>>
>>> predict_weapon = nn_model.fit(predictors, prediction)
```

Now that the learning model is ready, we can perform several tests to determine its quality and reliability. For starters, let's feed it a set of held-out records (the portion of the original dataset reserved for evaluation and not used in creating the model):
```
>>> predict_weapon.predict(training_set_weapons)
array([4, 4, 4, ..., 0, 4, 4])
```

As you can see, it returns a list, with each number predicting the weapon for each of the records in the set. We see numbers rather than weapon names, as most classification algorithms are optimized for numerical data. For categorical data, there are techniques that can reliably convert attributes into numerical representations. In this case, the technique used is label encoding, using the LabelEncoder class in the sklearn preprocessing library: `preprocessing.LabelEncoder()`. It has functions to transform data to and from its numerical representation. In this example, we can use the `inverse_transform` function of LabelEncoder() to see what Weapons 0 and 4 are:
```
>>> preprocessing.LabelEncoder().inverse_transform(encoded_weapons)
array(['HANDS', 'FIREARM', 'HANDS', ..., 'FIREARM', 'FIREARM', 'FIREARM'])
```

This is fun to see, but to get an idea of how accurate this model is, let's calculate its score as a percentage:
```
>>> nn_model.score(X, y)
0.81999999999999995
```

This shows that our neural network model is ~82% accurate. That result seems impressive, but it is important to check its effectiveness on a different crime dataset. There are other tests, like correlations, confusion matrices, etc., to do this. Although our model has high accuracy, it is not very useful for general crime datasets, as this particular dataset has a disproportionate number of rows that list 'FIREARM' as the weapon used. Unless it is re-trained, our classifier is most likely to predict 'FIREARM', even if the input dataset has a different distribution.
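A quick way to see why class imbalance inflates accuracy is to compare against a majority-class baseline. The label distribution below is hypothetical, chosen only to mirror the ~82% figure:

```python
# A classifier that always predicts the most common class scores as
# well as our model would on a dataset this skewed.
labels = ["FIREARM"] * 82 + ["HANDS"] * 10 + ["KNIFE"] * 8

majority = max(set(labels), key=labels.count)
baseline_accuracy = labels.count(majority) / len(labels)

print(majority)           # FIREARM
print(baseline_accuracy)  # 0.82
```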
It is important to clean the data and remove outliers and aberrations before we classify it. The better the preprocessing, the better the accuracy of our insights. Also, feeding the model/classifier too much data to chase higher accuracy (generally over ~90%) is a bad idea, because the result looks accurate but is not useful due to [overfitting][11].

[Jupyter notebooks][12] are a great interactive alternative to the command line. While the CLI is fine for most things, Jupyter shines when you want to run snippets on the go to generate visualizations. It also formats data better than the terminal.

[This article][13] has a list of some of the best free resources for machine learning, but plenty of additional guidance and tutorials are available. You will also find many open datasets to use, based on your interests and inclinations. As a starting point, the datasets maintained by [Kaggle][14] and those available at state government websites are excellent resources.

Payal Singh will be presenting at SCaLE16x this year, March 8-11 in Pasadena, California. To attend and get 50% off your ticket, [register][15] using promo code **OSDC**.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/getting-started-data-science

作者:[Payal Singh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/payalsingh
[1]:https://www.python.org/
[2]:https://pandas.pydata.org/
[3]:https://keras.io/
[4]:https://www.tensorflow.org/
[5]:http://scikit-learn.org/stable/
[6]:https://www.scipy.org/
[7]:http://www.numpy.org/
[8]:https://matplotlib.org/
[9]:https://matplotlib.org/api/pyplot_api.html
[10]:https://plot.ly/
[11]:https://www.kdnuggets.com/2014/06/cardinal-sin-data-mining-data-science.html
[12]:http://jupyter.org/
[13]:https://machinelearningmastery.com/best-machine-learning-resources-for-getting-started/
[14]:https://www.kaggle.com/
[15]:https://register.socallinuxexpo.org/reg6/
70
sources/tech/20180306 Exploring free and open web fonts.md
Normal file
@ -0,0 +1,70 @@
|
||||
Exploring free and open web fonts
|
||||
======
|
||||
|
||||

|
||||
|
||||
There is no question that the face of the web has been transformed in recent years by open source fonts. Prior to 2010, the only typefaces you were likely to see in a web browser were the generic "web safe" [core fonts][1] from Microsoft. But that year saw the start of several revolutions: the introduction of the Web Open Font Format ([WOFF][2]), which offered an open standard for efficiently delivering font files over HTTP, and the launch of web-font services like [Google Fonts][3] and the [Open Font Library][4]—both of which offered web publishers access to a large collection of fonts, for free, available under open licenses.
|
||||
|
||||
It is hard to overstate the positive impact of these events on web typography. But it can be all too easy to equate the successes of open web fonts with open source typography as a whole and conclude that the challenges are behind us, the puzzles solved. That is not the case, so if you care about type, the good news is there are a lot of opportunities to get involved in improvement.
|
||||
|
||||
For starters, it's critical to understand that Google Fonts and Open Font Library offer a specialized service—delivering fonts in web pages—and they don't implement solutions for other use cases. That is not a shortcoming on the services' side; it simply means that we have to develop other solutions.
|
||||
|
||||
There are a number of problems to solve. Probably the most obvious example is the awkwardness of installing fonts on a desktop Linux machine for use in other applications. You can download any of the web fonts offered by either service, but all you will get is a generic ZIP file with some TTF or OTF binaries inside and a plaintext license file. What happens next is up to you to guess.
|
||||
|
||||
Most users learn quickly that the "right" step is to manually copy those font binaries into any one of a handful of special directories on their hard drive. But that just makes the files visible to the operating system; it doesn't offer much in the way of a user experience. Again, this is not a flaw with the web-font service; rather it's evidence of the point where the service stops and more work needs to be done on the other side.
|
||||
|
||||
A big improvement from the user's perspective would be for the OS or the desktop environment to be smarter at this "just downloaded" stage. Not only would it install the font files to the right location but, more importantly, it could add important metadata that the user will want to access when selecting a font to use in a project.
|
||||
|
||||
What this additional information consists of and how it is presented to the user is tied to another challenge: Managing a font collection on Linux is noticeably less pleasant than on other operating systems. Periodically, font manager applications appear (see [GTK+ Font Manager][5] for one of the most recent examples), but they rarely catch on. I've been thinking a lot about where I think they come up short; one core factor is they have limited themselves to displaying only the information embedded in the font binary: basic character-set coverage, weight/width/slope settings, embedded license and copyright statements, etc.

But a lot of decisions go into the process of selecting a font for a job besides what's in this embedded data. Serious font users—like information designers, journal article authors, or book designers—make their font-selection decisions in the context of each document's requirements and needs. That includes license information, naturally, but it includes much more, like information about the designer and the foundry, stylistic trends, or details about how the font works in use.

For example, if your document includes both English and Arabic text, you probably want a font where the Latin and Arabic glyphs were designed together by someone experienced with the two scripts. Otherwise, you'll waste a ton of time making tiny adjustments to the font sizes and line spacing trying to get the two languages to mix well. You may have learned from experience that certain designers or vendors are better at multi-script design than others. Or it might be relevant to your project that today's fashion magazines almost exclusively use "[Didone][6]"-style typefaces, a name that refers to super-high-contrast styles pioneered by [Firmin Didot][7] and [Giambattista Bodoni][8] around 200 years ago. It just happens to be the trend.

But none of those terms (Didone, Didot, or Bodoni) are likely to show up in the binary's embedded data, nor is it easy to tell whether the Latin and Arabic fit together or anything else about the typeface's history. That information might appear in supplementary material like a type specimen or font documentation—if any exists.

A specimen is a designed document (often a PDF) that shows the font in use and includes background information; it frequently serves a dual role as a marketing piece and a sample to look at when choosing a font. The considered design of a specimen showcases how the font functions in practice and in a manner that an automatically generated character table simply cannot. Documentation may include some other vital information, like how to activate the font's OpenType features, what mathematical or archaic forms it provides, or how it varies stylistically across supported languages. Making this sort of material available to the user in the font-management application would go a long way towards helping users find the fonts that fit their projects' needs.

Of course, if we're going to consider a font manager that can handle documentation and specimens, we also have to take a hard look at what comes with the font packages provided by distributions. Linux users start with a few fonts automatically installed, and repository-provided packages are the only font source most users have besides downloading the generic ZIP archive. Those packages tend to be pretty bare-bones. Commercial fonts generally include specimens, documentation, and other support items, whereas open source fonts usually do not.

There are some excellent examples of open fonts that do provide quality specimens and documentation (see [SIL Gentium][9] and [Bungee][10] for two distinctly different but valid approaches), but they rarely (if ever) make their way into the downstream packaging chain. We plainly can do better.

There are some technical obstacles to offering a richer user experience for interacting with the fonts on your system. For one thing, the [AppStream][11] metadata standard defines a few [parameters][12] specific to font files, but so far includes nothing that would cover specimens, designer and foundry information, and other relevant details. For another, the [SPDX][13] (Software Package Data Exchange) format does not cover many of the software licenses (and license variants) used to distribute fonts.

Finally, as any audiophile will tell you, a music player that does not let you edit and augment the ID3 tags in your MP3 collection is going to get frustrating quickly. You want to fix errors in the tags, you want to add things like notes and album art—essentially, you want to polish your library. You would want to do the same to keep your local font library in a pleasant-to-use state.

But editing the embedded data in a font file has been taboo because fonts tend to get embedded and attached to other documents. If you monkey with the fields in a font binary, then redistribute it with your presentation slides, anyone who downloads those slides can end up with bad metadata through no fault of their own. So anyone making improvements to the font-management experience will have to figure out how to strategically wrangle repeated changes to the embedded and external font metadata.

In addition to the technical angle, enriching the font-management experience is also a design challenge. As I said above, good specimens and well-written documentation exist for several open fonts. But there are many more packages missing both, and there are a lot of older font packages that are no longer being maintained. That probably means the only way that most open font packages are going to get specimens or documentation is for the community to create them.

Perhaps that's a tall order. But the open source design community is bigger than it has ever been, and it is a highly motivated segment of the overall free and open source software movement. So who knows; maybe this time next year finding, downloading, and using fonts on a desktop Linux system will be an entirely different experience.

One train of thought on the typography challenges of modern Linux users includes packaging, document design, and maybe even a few new software components for desktop environments. There are other trains to consider, too. The commonality is that where the web-font service ends, matters get more difficult.

The best news, from my perspective, is that there are more people interested in this topic than ever before. For that, I think we have the higher profile that open fonts have received from big web-font services like Google Fonts and Open Font Library to thank.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/webfonts

作者:[Nathan Willis][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/n8willis
[1]:https://en.wikipedia.org/wiki/Core_fonts_for_the_Web
[2]:https://en.wikipedia.org/wiki/Web_Open_Font_Format
[3]:https://fonts.google.com/
[4]:https://fontlibrary.org/
[5]:https://fontmanager.github.io/
[6]:https://en.wikipedia.org/wiki/Didone_(typography)
[7]:https://en.wikipedia.org/wiki/Firmin_Didot
[8]:https://en.wikipedia.org/wiki/Giambattista_Bodoni
[9]:https://software.sil.org/gentium/
[10]:https://djr.com/bungee/
[11]:https://www.freedesktop.org/wiki/Distributions/AppStream/
[12]:https://www.freedesktop.org/software/appstream/docs/sect-Metadata-Fonts.html
[13]:https://spdx.org/
@ -0,0 +1,515 @@

How To Check All Running Services In Linux
======

There are many ways and tools to check and list all running services in Linux. Most administrators use `service service-name status` or `/etc/init.d/service-name status` on SysVinit systems and `systemctl status service-name` on systemd systems.

These commands clearly show whether the given service is running on the server. They are simple, basic commands that every Linux administrator should know.

But if you are new to an environment and don't know which services are running on a system, how do you find out?

This article shows how. Listing all running services helps you understand what is active on the system and whether any of it is unnecessary and should be disabled.

### What Is SysVinit

init (short for initialization) is the first process started when a computer boots. It is a daemon process that keeps running until the system is shut down.

SysVinit is the old, traditional init system and service manager. Most recent distributions have moved to systemd because of long-standing limitations in SysVinit.

### What Is systemd

systemd is a newer init system and service manager that has become the widely adopted standard on most Linux distributions. systemctl is the systemd utility used to manage a systemd system.

### Method-1: How To Check Running Services In a SysVinit System

The command below checks and lists all running services on a SysVinit system.

If you have many services, use a pager such as `less` or `more` for a clearer view.
```
# service --status-all
or
# service --status-all | more
or
# service --status-all | less

abrt-ccpp hook is installed
abrtd (pid 2131) is running...
abrt-dump-oops is stopped
acpid (pid 1958) is running...
atd (pid 2164) is running...
auditd (pid 1731) is running...
Frequency scaling enabled using ondemand governor
crond (pid 2153) is running...
hald (pid 1967) is running...
htcacheclean is stopped
httpd is stopped
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all ::/0 ::/0 state RELATED,ESTABLISHED
2 ACCEPT icmpv6 ::/0 ::/0
3 ACCEPT all ::/0 ::/0
4 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:80
5 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:21
6 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:22
7 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:25
8 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:2082
9 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:2086
10 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:2083
11 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:2087
12 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:10000
13 REJECT all ::/0 ::/0 reject-with icmp6-adm-prohibited

Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 REJECT all ::/0 ::/0 reject-with icmp6-adm-prohibited

Chain OUTPUT (policy ACCEPT)
num target prot opt source destination

iptables: Firewall is not running.
irqbalance (pid 1826) is running...
Kdump is operational
lvmetad is stopped
mdmonitor is stopped
messagebus (pid 1929) is running...
SUCCESS! MySQL running (24376)
rndc: neither /etc/rndc.conf nor /etc/rndc.key was found
named is stopped
netconsole module not loaded
Usage: startup.sh { start | stop }
Configured devices:
lo eth0 eth1
Currently active devices:
lo eth0
ntpd is stopped
portreserve (pid 1749) is running...
master (pid 2107) is running...
Process accounting is disabled.
quota_nld is stopped
rdisc is stopped
rngd is stopped
rpcbind (pid 1840) is running...
rsyslogd (pid 1756) is running...
sandbox is stopped
saslauthd is stopped
smartd is stopped
openssh-daemon (pid 9859) is running...
svnserve is stopped
vsftpd (pid 4008) is running...
xinetd (pid 2031) is running...
zabbix_agentd (pid 2150 2149 2148 2147 2146 2140) is running...
```

Run the following command to view only the running services on the system:

```
# service --status-all | grep running

crond (pid 535) is running...
httpd (pid 627) is running...
mysqld (pid 911) is running...
rndc: neither /etc/rndc.conf nor /etc/rndc.key was found
rsyslogd (pid 449) is running...
saslauthd (pid 492) is running...
sendmail (pid 509) is running...
sm-client (pid 519) is running...
openssh-daemon (pid 478) is running...
xinetd (pid 485) is running...
```

Run the following command to view the status of a particular service:

```
# service --status-all | grep httpd
httpd (pid 627) is running...
```

Alternatively, use the following command to view the status of a particular service:

```
# service httpd status

httpd (pid 627) is running...
```

Use the following command to view the list of services enabled at boot:

```
# chkconfig --list
crond 0:off 1:off 2:on 3:on 4:on 5:on 6:off
htcacheclean 0:off 1:off 2:off 3:off 4:off 5:off 6:off
httpd 0:off 1:off 2:off 3:on 4:off 5:off 6:off
ip6tables 0:off 1:off 2:on 3:off 4:on 5:on 6:off
iptables 0:off 1:off 2:on 3:on 4:on 5:on 6:off
modules_dep 0:off 1:off 2:on 3:on 4:on 5:on 6:off
mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off
named 0:off 1:off 2:off 3:off 4:off 5:off 6:off
netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off
netfs 0:off 1:off 2:off 3:off 4:on 5:on 6:off
network 0:off 1:off 2:on 3:on 4:on 5:on 6:off
nmb 0:off 1:off 2:off 3:off 4:off 5:off 6:off
nscd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
portreserve 0:off 1:off 2:on 3:off 4:on 5:on 6:off
quota_nld 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rdisc 0:off 1:off 2:off 3:off 4:off 5:off 6:off
restorecond 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rpcbind 0:off 1:off 2:on 3:off 4:on 5:on 6:off
rsyslog 0:off 1:off 2:on 3:on 4:on 5:on 6:off
saslauthd 0:off 1:off 2:off 3:on 4:off 5:off 6:off
sendmail 0:off 1:off 2:on 3:on 4:on 5:on 6:off
smb 0:off 1:off 2:off 3:off 4:off 5:off 6:off
snmpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
snmptrapd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
sshd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
udev-post 0:off 1:on 2:on 3:off 4:on 5:on 6:off
winbind 0:off 1:off 2:off 3:off 4:off 5:off 6:off
xinetd 0:off 1:off 2:off 3:on 4:on 5:on 6:off

xinetd based services:
chargen-dgram: off
chargen-stream: off
daytime-dgram: off
daytime-stream: off
discard-dgram: off
discard-stream: off
echo-dgram: off
echo-stream: off
finger: off
ntalk: off
rsync: off
talk: off
tcpmux-server: off
time-dgram: off
time-stream: off
```
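
The chkconfig listing above is plain columnar text, so it can be filtered with standard tools. Here is a small sketch that prints only the services turned on at runlevel 3; the `sample` variable below is stand-in data, and on a live SysVinit box you would pipe `chkconfig --list` straight into the same awk filter:

```
# Field 5 of each chkconfig line is the runlevel-3 setting ("3:on"/"3:off").
# "sample" is hypothetical stand-in output for `chkconfig --list`.
sample='crond 0:off 1:off 2:on 3:on 4:on 5:on 6:off
httpd 0:off 1:off 2:off 3:on 4:off 5:off 6:off
named 0:off 1:off 2:off 3:off 4:off 5:off 6:off'

printf '%s\n' "$sample" | awk '$5 == "3:on" {print $1}'
# prints: crond
#         httpd
```

On a real system, replace the `printf` with `chkconfig --list | awk '$5 == "3:on" {print $1}'`.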

### Method-2: How To Check Running Services In a systemd System

The command below checks and lists all running services on a systemd system.

```
# systemctl

UNIT LOAD ACTIVE SUB DESCRIPTION
sys-devices-virtual-block-loop0.device loaded active plugged /sys/devices/virtual/block/loop0
sys-devices-virtual-block-loop1.device loaded active plugged /sys/devices/virtual/block/loop1
sys-devices-virtual-block-loop2.device loaded active plugged /sys/devices/virtual/block/loop2
sys-devices-virtual-block-loop3.device loaded active plugged /sys/devices/virtual/block/loop3
sys-devices-virtual-block-loop4.device loaded active plugged /sys/devices/virtual/block/loop4
sys-devices-virtual-misc-rfkill.device loaded active plugged /sys/devices/virtual/misc/rfkill
sys-devices-virtual-tty-ttyprintk.device loaded active plugged /sys/devices/virtual/tty/ttyprintk
sys-module-fuse.device loaded active plugged /sys/module/fuse
sys-subsystem-net-devices-enp0s3.device loaded active plugged 82540EM Gigabit Ethernet Controller (PRO/1000 MT Desktop Adapter)
-.mount loaded active mounted Root Mount
dev-hugepages.mount loaded active mounted Huge Pages File System
dev-mqueue.mount loaded active mounted POSIX Message Queue File System
run-user-1000-gvfs.mount loaded active mounted /run/user/1000/gvfs
run-user-1000.mount loaded active mounted /run/user/1000
snap-core-3887.mount loaded active mounted Mount unit for core
snap-core-4017.mount loaded active mounted Mount unit for core
snap-core-4110.mount loaded active mounted Mount unit for core
snap-gping-13.mount loaded active mounted Mount unit for gping
snap-termius\x2dapp-8.mount loaded active mounted Mount unit for termius-app
sys-fs-fuse-connections.mount loaded active mounted FUSE Control File System
sys-kernel-debug.mount loaded active mounted Debug File System
acpid.path loaded active running ACPI Events Check
cups.path loaded active running CUPS Scheduler
systemd-ask-password-plymouth.path loaded active waiting Forward Password Requests to Plymouth Directory Watch
systemd-ask-password-wall.path loaded active waiting Forward Password Requests to Wall Directory Watch
init.scope loaded active running System and Service Manager
session-c2.scope loaded active running Session c2 of user magi
accounts-daemon.service loaded active running Accounts Service
acpid.service loaded active running ACPI event daemon
anacron.service loaded active running Run anacron jobs
apache2.service loaded active running The Apache HTTP Server
apparmor.service loaded active exited AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
aptik-battery-monitor.service loaded active running LSB: start/stop the aptik battery monitor daemon
atop.service loaded active running Atop advanced performance monitor
atopacct.service loaded active running Atop process accounting daemon
avahi-daemon.service loaded active running Avahi mDNS/DNS-SD Stack
colord.service loaded active running Manage, Install and Generate Color Profiles
console-setup.service loaded active exited Set console font and keymap
cron.service loaded active running Regular background program processing daemon
cups-browsed.service loaded active running Make remote CUPS printers available locally
cups.service loaded active running CUPS Scheduler
dbus.service loaded active running D-Bus System Message Bus
postfix.service loaded active exited Postfix Mail Transport Agent
```

* **`UNIT`** is the name of the corresponding systemd unit.
* **`LOAD`** indicates whether the unit is currently loaded in memory.
* **`ACTIVE`** indicates whether the unit is active.
* **`SUB`** indicates whether the unit is in a running state.
* **`DESCRIPTION`** is a short description of the unit.

The option below lists units of a given type:
```
# systemctl list-units --type service
UNIT LOAD ACTIVE SUB DESCRIPTION
accounts-daemon.service loaded active running Accounts Service
acpid.service loaded active running ACPI event daemon
anacron.service loaded active running Run anacron jobs
apache2.service loaded active running The Apache HTTP Server
apparmor.service loaded active exited AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
aptik-battery-monitor.service loaded active running LSB: start/stop the aptik battery monitor daemon
atop.service loaded active running Atop advanced performance monitor
atopacct.service loaded active running Atop process accounting daemon
avahi-daemon.service loaded active running Avahi mDNS/DNS-SD Stack
colord.service loaded active running Manage, Install and Generate Color Profiles
console-setup.service loaded active exited Set console font and keymap
cron.service loaded active running Regular background program processing daemon
cups-browsed.service loaded active running Make remote CUPS printers available locally
cups.service loaded active running CUPS Scheduler
dbus.service loaded active running D-Bus System Message Bus
fwupd.service loaded active running Firmware update daemon
getty@tty1.service loaded active running Getty on tty1
grub-common.service loaded active exited LSB: Record successful boot for GRUB
irqbalance.service loaded active running LSB: daemon to balance interrupts for SMP systems
keyboard-setup.service loaded active exited Set the console keyboard layout
kmod-static-nodes.service loaded active exited Create list of required static device nodes for the current kernel
```
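
Because the `systemctl` output is tabular, the usual text tools work on it too. A small sketch that pulls out just the names of units whose SUB state is `running`; the `sample` variable is hypothetical stand-in data for `systemctl list-units --type service --no-legend` output:

```
# Field 4 of each line is the SUB state; print field 1 (the unit name)
# whenever SUB is "running". "sample" is stand-in systemctl output.
sample='accounts-daemon.service loaded active running Accounts Service
apparmor.service loaded active exited AppArmor initialization
cron.service loaded active running Regular background program processing daemon'

printf '%s\n' "$sample" | awk '$4 == "running" {print $1}'
# prints: accounts-daemon.service
#         cron.service
```

On a live systemd box, the same filter applies directly: `systemctl list-units --type service --no-legend | awk '$4 == "running" {print $1}'`.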

The option below lists unit files along with their state (enabled, disabled, static, masked, or generated). It's similar to the output above, but more direct:

```
# systemctl list-unit-files --type service

UNIT FILE STATE
accounts-daemon.service enabled
acpid.service disabled
alsa-restore.service static
alsa-state.service static
alsa-utils.service masked
anacron-resume.service enabled
anacron.service enabled
apache-htcacheclean.service disabled
apache-htcacheclean@.service disabled
apache2.service enabled
apache2@.service disabled
apparmor.service enabled
apport-forward@.service static
apport.service generated
apt-daily-upgrade.service static
apt-daily.service static
aptik-battery-monitor.service generated
atop.service enabled
atopacct.service enabled
autovt@.service enabled
avahi-daemon.service enabled
bluetooth.service enabled
```

Run the following command to view the status of a particular service:

```
# systemctl | grep apache2
apache2.service loaded active running The Apache HTTP Server
```

Alternatively, use the following command to view the status of a particular service:

```
# systemctl status apache2
● apache2.service - The Apache HTTP Server
   Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/apache2.service.d
           └─apache2-systemd.conf
   Active: active (running) since Tue 2018-03-06 12:34:09 IST; 8min ago
  Process: 2786 ExecReload=/usr/sbin/apachectl graceful (code=exited, status=0/SUCCESS)
 Main PID: 1171 (apache2)
    Tasks: 55 (limit: 4915)
   CGroup: /system.slice/apache2.service
           ├─1171 /usr/sbin/apache2 -k start
           ├─2790 /usr/sbin/apache2 -k start
           └─2791 /usr/sbin/apache2 -k start

Mar 06 12:34:08 magi-VirtualBox systemd[1]: Starting The Apache HTTP Server...
Mar 06 12:34:09 magi-VirtualBox apachectl[1089]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.0.2.15. Set the 'ServerName' directive globally to suppre
Mar 06 12:34:09 magi-VirtualBox systemd[1]: Started The Apache HTTP Server.
Mar 06 12:39:10 magi-VirtualBox systemd[1]: Reloading The Apache HTTP Server.
Mar 06 12:39:10 magi-VirtualBox apachectl[2786]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using fe80::7929:4ed1:279f:4d65. Set the 'ServerName' directive gl
Mar 06 12:39:10 magi-VirtualBox systemd[1]: Reloaded The Apache HTTP Server.
```

Run the following command to view only the running services on the system:

```
# systemctl | grep running
acpid.path loaded active running ACPI Events Check
cups.path loaded active running CUPS Scheduler
init.scope loaded active running System and Service Manager
session-c2.scope loaded active running Session c2 of user magi
accounts-daemon.service loaded active running Accounts Service
acpid.service loaded active running ACPI event daemon
apache2.service loaded active running The Apache HTTP Server
aptik-battery-monitor.service loaded active running LSB: start/stop the aptik battery monitor daemon
atop.service loaded active running Atop advanced performance monitor
atopacct.service loaded active running Atop process accounting daemon
avahi-daemon.service loaded active running Avahi mDNS/DNS-SD Stack
colord.service loaded active running Manage, Install and Generate Color Profiles
cron.service loaded active running Regular background program processing daemon
cups-browsed.service loaded active running Make remote CUPS printers available locally
cups.service loaded active running CUPS Scheduler
dbus.service loaded active running D-Bus System Message Bus
fwupd.service loaded active running Firmware update daemon
getty@tty1.service loaded active running Getty on tty1
irqbalance.service loaded active running LSB: daemon to balance interrupts for SMP systems
lightdm.service loaded active running Light Display Manager
ModemManager.service loaded active running Modem Manager
NetworkManager.service loaded active running Network Manager
polkit.service loaded active running Authorization Manager
```

Use the following command to view the list of services enabled at boot:

```
# systemctl list-unit-files | grep enabled
acpid.path enabled
cups.path enabled
accounts-daemon.service enabled
anacron-resume.service enabled
anacron.service enabled
apache2.service enabled
apparmor.service enabled
atop.service enabled
atopacct.service enabled
autovt@.service enabled
avahi-daemon.service enabled
bluetooth.service enabled
console-setup.service enabled
cron.service enabled
cups-browsed.service enabled
cups.service enabled
display-manager.service enabled
dns-clean.service enabled
friendly-recovery.service enabled
getty@.service enabled
gpu-manager.service enabled
keyboard-setup.service enabled
lightdm.service enabled
ModemManager.service enabled
network-manager.service enabled
networking.service enabled
NetworkManager-dispatcher.service enabled
NetworkManager-wait-online.service enabled
NetworkManager.service enabled
```

`systemd-cgtop` shows the top control groups by resource usage: tasks, CPU, memory, and disk input/output.

```
# systemd-cgtop

Control Group Tasks %CPU Memory Input/s Output/s
/ - - 1.5G - -
/init.scope 1 - - - -
/system.slice 153 - - - -
/system.slice/ModemManager.service 3 - - - -
/system.slice/NetworkManager.service 4 - - - -
/system.slice/accounts-daemon.service 3 - - - -
/system.slice/acpid.service 1 - - - -
/system.slice/apache2.service 55 - - - -
/system.slice/aptik-battery-monitor.service 1 - - - -
/system.slice/atop.service 1 - - - -
/system.slice/atopacct.service 1 - - - -
/system.slice/avahi-daemon.service 2 - - - -
/system.slice/colord.service 3 - - - -
/system.slice/cron.service 1 - - - -
/system.slice/cups-browsed.service 3 - - - -
/system.slice/cups.service 2 - - - -
/system.slice/dbus.service 6 - - - -
/system.slice/fwupd.service 5 - - - -
/system.slice/irqbalance.service 1 - - - -
/system.slice/lightdm.service 7 - - - -
/system.slice/polkit.service 3 - - - -
/system.slice/repowerd.service 14 - - - -
/system.slice/rsyslog.service 4 - - - -
/system.slice/rtkit-daemon.service 3 - - - -
/system.slice/snapd.service 8 - - - -
/system.slice/system-getty.slice 1 - - - -
```

We can also check the running services using the pstree command (output from a SysVinit system):

```
# pstree
init-|-crond
     |-httpd---2*[httpd]
     |-kthreadd/99149---khelper/99149
     |-2*[mingetty]
     |-mysqld_safe---mysqld---9*[{mysqld}]
     |-rsyslogd---3*[{rsyslogd}]
     |-saslauthd---saslauthd
     |-2*[sendmail]
     |-sshd---sshd---bash---pstree
     |-udevd
     `-xinetd
```

We can also check the running services using the pstree command (output from a systemd system):

```
# pstree
systemd─┬─ModemManager─┬─{gdbus}
        │              └─{gmain}
        ├─NetworkManager─┬─dhclient
        │                ├─{gdbus}
        │                └─{gmain}
        ├─accounts-daemon─┬─{gdbus}
        │                 └─{gmain}
        ├─acpid
        ├─agetty
        ├─anacron
        ├─apache2───2*[apache2───26*[{apache2}]]
        ├─aptd───{gmain}
        ├─aptik-battery-m
        ├─atop
        ├─atopacctd
        ├─avahi-daemon───avahi-daemon
        ├─colord─┬─{gdbus}
        │        └─{gmain}
        ├─cron
        ├─cups-browsed─┬─{gdbus}
        │              └─{gmain}
        ├─cupsd
        ├─dbus-daemon
        ├─fwupd─┬─{GUsbEventThread}
        │       ├─{fwupd}
        │       ├─{gdbus}
        │       └─{gmain}
        ├─gnome-keyring-d─┬─{gdbus}
        │                 ├─{gmain}
        │                 └─{timer}
```

### Method-3: How To Check Running Services In a systemd System using chkservice

chkservice is a new tool for managing systemd units from the terminal. It requires superuser privileges to manage units.

```
# chkservice
```

![][1]

To view the help page, press `?`. It shows the available options for managing systemd services.

![][2]
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/

作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/magesh/
[1]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-1.png
[2]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-2.png
@ -0,0 +1,145 @@

translated by cyleft

Most Useful Linux Commands You Can Run in Windows 10
======


In the previous articles of this series, we talked about [getting started with WSL on Windows 10.][1] In this, the last article of the series, we will talk about some of the widely used Linux commands on Windows 10.

Before we dive further into the topic, let's make it clear who this is for. This article is meant for greenhorn developers who use Windows 10 machines but want to learn about Linux, as it's the dominant platform in the cloud, whether it be Azure, AWS, or private cloud. In a nutshell, it's intended for Windows 10 users who are new to Linux.

Which commands you need will depend on your own workload; your mileage may vary from mine. The goal of the article is to get you comfortable with Linux in Windows 10. Also bear in mind that WSL doesn't provide access to hardware components like sound cards or GPUs, at least officially. But Linux users never take no for an answer. Many users have managed not only to gain access to sound cards and GPUs, but also to run desktop Linux apps on Windows. That's not the scope of this article, though; we may talk about it at some point, but not today.

Here are a few tasks to get started.

### How to keep your Linux system up to date

Since you are running Linux inside of Windows, you are stripped of all the security that Linux systems offer. In addition, if you don't keep your Linux systems patched, you will expose your Windows machines to those threats. Always keep your Linux machines up to date.

WSL officially supports openSUSE, SUSE Linux Enterprise, and Ubuntu. You can install other distributions as well, but I can get all of my work done with any of these, as all I need is access to some basic Linux utilities.

**Update openSUSE Leap:**

```
sudo zypper up
```

If you want a full distribution upgrade, you can do that after running the above command:

```
sudo zypper dup
```

**Update an Ubuntu machine:**

```
sudo apt-get update
sudo apt-get dist-upgrade
```

Now you are safe and secure. Since updates on Linux systems are incremental, I run system updates on a daily basis. It's mostly a few KB or a few MB of updates, without any downtime, unlike Windows 10 updates where you need to reboot your system.
### Managing files and folders

Once your system is updated, we can look at some mundane, or not so mundane, tasks.

The second most important task is to manage your local and remote files using Linux. I must admit that as much as I prefer GUI apps, there are certain tasks where the terminal offers more value and reliability. Try moving 1TB of files using the Explorer app. Good luck. I always use the rsync command to transfer the bulk of files. The good news is that with rsync, if you stop it in the middle, you can resume from where you left off.

Although you can use the cp or mv commands to copy or move files, I prefer rsync as it offers more flexibility than the others, and learning it will also help you transfer files between remote machines. There are three basic tasks that I mostly perform.

**Copy an entire directory using rsync:**

```
rsync -avzP /source-directory /destination-directory
```
|
||||
|
||||
**Move files using rsync:**
|
||||
```
|
||||
rsync --remove-source-files -avzP /source-directory /destination-directory
|
||||
|
||||
```
|
||||
|
||||
This command will delete files from the source directory after successful copying to the destination directory.
|
||||
|
||||
**Sync two directories:**
|
||||
|
||||
I keep a copy of all of my files on more than one location. However, I continue to add and delete files from the primary location. It could become a challenge to keep all other locations synced without using some application dedicated to file sync, rsync simplifies the process. This is the command that you need to keep two directories synced. Keep it mind that it’s a one way sync -- from source to destination.
|
||||
```
|
||||
rsync --delete -avzP /source-directory /destination-directory
|
||||
|
||||
```
|
||||
|
||||
The above commands deletes the file in the destination folder if they are not found in the source folder. In other way it creates a mirror copy of the source directory.
|
||||
|
||||
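Because `--delete` is destructive, it is worth previewing what rsync would remove before running it for real. Here is a minimal, self-contained sketch (the `/tmp/rsync-demo` paths are invented for illustration): the `-n` (`--dry-run`) flag makes rsync list its planned actions, including deletions, without touching any files.

```shell
# Set up a throwaway source and destination (hypothetical demo paths)
mkdir -p /tmp/rsync-demo/src /tmp/rsync-demo/dst
echo "keep" > /tmp/rsync-demo/src/keep.txt
echo "stale" > /tmp/rsync-demo/dst/stale.txt

# -n (--dry-run) previews the transfer; stale.txt is reported as "deleting",
# but nothing on disk is actually modified
rsync --delete -avn /tmp/rsync-demo/src/ /tmp/rsync-demo/dst/

# stale.txt still exists after the dry run
ls /tmp/rsync-demo/dst/
```

Once the preview looks right, drop the `-n` to perform the real sync.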
### Automate file backup

Yes, keeping up with backups is a mundane task. In order to keep my drives fully synced, I add a cron job that runs the rsync command at night to keep all directories synced. I do, however, keep one external drive that is synced manually on a weekly basis. I don't use the --delete flag there, as it may delete some files that I might have wanted; I only use that flag when running the command manually.

**To create a cron job, open crontab:**

```
crontab -e
```

I run this at night when both systems are idle, as moving a huge number of files can slow your system down. The command runs at 1 am every day. You can change it appropriately:

```
0 1 * * * rsync -avzP /source-directory /destination-directory
```

This is the structure for a cron job using crontab:

```
# m h dom mon dow command
```

Here m = minute, h = hour, dom = day of the month, mon = month, and dow = day of the week.

We are running this command at 1 am every day. You could choose to run it on a certain day of the week or day of the month (so it will run on the 5th of every month, for example) and so on. You can read more about crontab [here][2].
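To make the field layout above concrete, here are a few example schedules (a sketch using the same placeholder rsync paths as earlier; adjust them to your own directories):

```
# Every day at 1:00 am (the schedule used above)
0 1 * * *  rsync -avzP /source-directory /destination-directory

# Every Sunday at 1:00 am (dow 0 = Sunday)
0 1 * * 0  rsync -avzP /source-directory /destination-directory

# On the 5th of every month at 1:00 am
0 1 5 * *  rsync -avzP /source-directory /destination-directory
```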
### Managing your remote servers

One of the reasons to run WSL on your system is to manage Linux systems in the cloud, and WSL provides you with native Linux tools for that. The first thing you need is to log into your remote Linux server using the ssh command.

Let's say my server is 192.168.0.112; the dedicated port is 2018 (never use the default port 22); the Linux user of that server is swapnil and the password is i-wont-tell-you.

```
ssh -p 2018 swapnil@192.168.0.112
```

It will ask for the password and, eureka, you are logged into your Linux server. Now you can perform all the tasks that you want to perform, as you are literally inside that Linux machine. No need to use PuTTY.
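If you connect to the same server often, typing the port and user each time gets old. One option is an entry in `~/.ssh/config` (a sketch using the example values from this article; the alias `myserver` is made up), which shortens the command to `ssh myserver`:

```
Host myserver
    HostName 192.168.0.112
    Port 2018
    User swapnil
```

With this in place, `ssh myserver` is equivalent to `ssh -p 2018 swapnil@192.168.0.112`, and tools that ride on ssh, such as rsync and scp, pick up the same alias.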
You can easily transfer files between your local machine and a remote machine using the rsync command. Instead of a local source or destination directory, depending on whether you are uploading files to the server or downloading them to the local machine, you can use username@IP-address-of-server:/path-of-directory.

So if I want to copy some text files to the home directory of my server, here is the command:

```
rsync -avzP -e 'ssh -p 2018' /source-directory-on-local-machine swapnil@192.168.0.112:/home/swapnil/Documents/
```

It will copy all files to the Documents directory of my remote server.

### Conclusion

The idea of this tutorial was to demonstrate that WSL allows you to perform a wide range of Linux-y tasks on your Windows 10 systems. In most cases, it increases productivity and performance. Now, the whole world of Linux is open to you for exploration on your Windows 10 system. Go ahead and explore it. If you have any questions, or if you would like me to cover more areas of WSL, please share your thoughts in the comments below.

Learn more about the [Administering Linux on Azure (LFS205)][4] course and sign up [here][5].

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/2018/3/most-useful-linux-commands-you-can-run-windows-10

作者:[SWAPNIL BHARTIYA][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[2]:http://www.adminschoice.com/crontab-quick-reference
[3]:mailto:username@IP
[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/administering-linux-on-azure
[5]:http://bit.ly/2FpFtPg
@ -0,0 +1,195 @@
What Is sosreport? How To Create sosreport
======

### What Is sosreport

The sosreport command is a tool that collects a bunch of configuration details, system information and diagnostic information from a running system (especially RHEL and OEL systems).

It helps technical support engineers analyze the system in many respects.

The report contains a bunch of information about the system, such as boot information, filesystem, memory, hostname, installed rpms, system IP, networking details, OS version, installed kernel, loaded kernel modules, list of open files, list of PCI devices, mount points and their details, running process information, process tree output, system routing, all the configuration files located in the /etc folder, and all the log files located in the /var folder.

Generating a report takes a while, and the time depends on your system's installation and configuration.

Once completed, sosreport generates a compressed archive file under the /tmp directory.

We have to provide the sosreport to the RHEL (Red Hat Enterprise Linux) & OEL (Oracle Enterprise Linux) technical support engineers for initial analysis whenever we raise a case with them. This helps the support engineers verify whether anything is wrong on the system.

### How To Install sosreport

Installing sosreport is not a big deal; just run the following command to install it.

```
# yum install sos
```
### How To Generate sosreport

Generating a sosreport is not a big deal either; just run the sosreport command without any options.

By default it doesn't show much information while generating the report, only a progress count of how much of the report has been generated. If you want to see detailed information, just add the `-v` option when generating the sosreport.

It will ask you to enter your name and the support case information.

```
# sosreport

sosreport (version 3.2)

This command will collect diagnostic and configuration information from this Oracle Linux system and installed applications.

An archive containing the collected information will be generated in /tmp/sos.3pt1yJ and may be provided to a Oracle USA support representative.

Any information provided to Oracle USA will be treated in accordance with the published support policies at:

http://linux.oracle.com/

The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party.

No changes will be made to system configuration.

Press ENTER to continue, or CTRL-C to quit.

Please enter your first initial and last name [oracle.2daygeek.com]: 2daygeek
Please enter the case id that you are generating this report for []: 3-16619296812

Setting up archive ...
Setting up plugins ...
dbname must be supplied to dump a database.
Running plugins. Please wait ...

Running 86/86: yum...
[plugin:kvm] could not unmount /sys/kernel/debug
Creating compressed archive...

Your sosreport has been generated and saved in:

/tmp/sosreport-2daygeek.3-16619296812-20180307124921.tar.xz

The checksum is: 4e80226ae175bm185c0o2d7u2yoac52o

Please send this file to your support representative.
```
### What Are The Details In The Archive File

I was curious about what kind of details are in the archive file. To find out, I am going to extract the archive file on my system.

Run the following command to extract the archive file.

```
# tar -xf /tmp/sosreport-2daygeek.3-16619296812-20180307124921.tar.xz
```
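If you want to try the extraction flow without a real sosreport, the same `tar` invocation works on any archive. A minimal self-contained sketch (the `/tmp/sos-demo` paths and the file contents are invented for the demo; real sosreport archives are `.tar.xz`, while this one uses gzip for portability):

```shell
# Build a tiny fake "sosreport" tree and archive it (hypothetical demo data)
mkdir -p /tmp/sos-demo/sosreport-demo
echo "Linux demo.example.com 4.18.0 x86_64 GNU/Linux" > /tmp/sos-demo/sosreport-demo/uname

tar -C /tmp/sos-demo -czf /tmp/sos-demo/sosreport-demo.tar.gz sosreport-demo

# tar -xf auto-detects the compression (gzip here, xz for real sosreports)
mkdir -p /tmp/sos-demo/extracted
tar -C /tmp/sos-demo/extracted -xf /tmp/sos-demo/sosreport-demo.tar.gz

cat /tmp/sos-demo/extracted/sosreport-demo/uname
```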
To see what information was captured by sosreport, go to the extracted directory.

```
# ls -lh sosreport-2daygeek.3-16619296812-20180307124921

total 60K
dr-xr-xr-x 4 root root 4.0K Sep 30 10:56 boot
lrwxrwxrwx 1 root root 37 Oct 20 07:25 chkconfig -> sos_commands/startup/chkconfig_--list
lrwxrwxrwx 1 root root 25 Oct 20 07:25 date -> sos_commands/general/date
lrwxrwxrwx 1 root root 27 Oct 20 07:25 df -> sos_commands/filesys/df_-al
lrwxrwxrwx 1 root root 31 Oct 20 07:25 dmidecode -> sos_commands/hardware/dmidecode
drwxr-xr-x 43 root root 4.0K Oct 20 07:21 etc
lrwxrwxrwx 1 root root 24 Oct 20 07:25 free -> sos_commands/memory/free
lrwxrwxrwx 1 root root 29 Oct 20 07:25 hostname -> sos_commands/general/hostname
lrwxrwxrwx 1 root root 130 Oct 20 07:25 installed-rpms -> sos_commands/rpm/sh_-c_rpm_--nodigest_-qa_--qf_NAME_-_VERSION_-_RELEASE_._ARCH_INSTALLTIME_date_awk_-F_printf_-59s_s_n_1_2_sort_-f
lrwxrwxrwx 1 root root 34 Oct 20 07:25 ip_addr -> sos_commands/networking/ip_-o_addr
lrwxrwxrwx 1 root root 45 Oct 20 07:25 java -> sos_commands/java/alternatives_--display_java
drwxr-xr-x 4 root root 4.0K Sep 30 10:56 lib
lrwxrwxrwx 1 root root 35 Oct 20 07:25 lsb-release -> sos_commands/lsbrelease/lsb_release
lrwxrwxrwx 1 root root 25 Oct 20 07:25 lsmod -> sos_commands/kernel/lsmod
lrwxrwxrwx 1 root root 36 Oct 20 07:25 lsof -> sos_commands/process/lsof_-b_M_-n_-l
lrwxrwxrwx 1 root root 22 Oct 20 07:25 lspci -> sos_commands/pci/lspci
lrwxrwxrwx 1 root root 29 Oct 20 07:25 mount -> sos_commands/filesys/mount_-l
lrwxrwxrwx 1 root root 38 Oct 20 07:25 netstat -> sos_commands/networking/netstat_-neopa
drwxr-xr-x 3 root root 4.0K Oct 19 16:16 opt
dr-xr-xr-x 10 root root 4.0K Jun 23 2017 proc
lrwxrwxrwx 1 root root 30 Oct 20 07:25 ps -> sos_commands/process/ps_auxwww
lrwxrwxrwx 1 root root 27 Oct 20 07:25 pstree -> sos_commands/process/pstree
dr-xr-x--- 2 root root 4.0K Oct 17 12:09 root
lrwxrwxrwx 1 root root 32 Oct 20 07:25 route -> sos_commands/networking/route_-n
dr-xr-xr-x 2 root root 4.0K Sep 30 10:55 sbin
drwx------ 54 root root 4.0K Oct 20 07:21 sos_commands
drwx------ 2 root root 4.0K Oct 20 07:21 sos_logs
drwx------ 2 root root 4.0K Oct 20 07:21 sos_reports
dr-xr-xr-x 6 root root 4.0K Jun 23 2017 sys
lrwxrwxrwx 1 root root 28 Oct 20 07:25 uname -> sos_commands/kernel/uname_-a
lrwxrwxrwx 1 root root 27 Oct 20 07:25 uptime -> sos_commands/general/uptime
drwxr-xr-x 6 root root 4.0K Sep 25 2014 var
-rw------- 1 root root 1.7K Oct 20 07:21 version.txt
lrwxrwxrwx 1 root root 62 Oct 20 07:25 vgdisplay -> sos_commands/lvm2/vgdisplay_-vv_--config_global_locking_type_0
```
To double-check what exactly sosreport captured, I am going to look at the uname output file that it collected.

```
# more uname_-a
Linux oracle.2daygeek.com 2.6.32-042stab127.2 #1 SMP Thu Jan 4 16:41:44 MSK 2018 x86_64 x86_64 x86_64 GNU/Linux
```
### Additional Options

Visit the help page to view all available options for sosreport.

```
# sosreport --help
Usage: sosreport [options]

Options:
  -h, --help            show this help message and exit
  -l, --list-plugins    list plugins and available plugin options
  -n NOPLUGINS, --skip-plugins=NOPLUGINS
                        disable these plugins
  -e ENABLEPLUGINS, --enable-plugins=ENABLEPLUGINS
                        enable these plugins
  -o ONLYPLUGINS, --only-plugins=ONLYPLUGINS
                        enable these plugins only
  -k PLUGOPTS, --plugin-option=PLUGOPTS
                        plugin options in plugname.option=value format (see -l)
  --log-size=LOG_SIZE   set a limit on the size of collected logs
  -a, --alloptions      enable all options for loaded plugins
  --all-logs            collect all available logs regardless of size
  --batch               batch mode - do not prompt interactively
  --build               preserve the temporary directory and do not package results
  -v, --verbose         increase verbosity
  --verify              perform data verification during collection
  --quiet               only print fatal errors
  --debug               enable interactive debugging using the python debugger
  --ticket-number=CASE_ID
                        specify ticket number
  --case-id=CASE_ID     specify case identifier
  -p PROFILES, --profile=PROFILES
                        enable plugins selected by the given profiles
  --list-profiles
  --name=CUSTOMER_NAME  specify report name
  --config-file=CONFIG_FILE
                        specify alternate configuration file
  --tmp-dir=TMP_DIR     specify alternate temporary directory
  --no-report           Disable HTML/XML reporting
  -z COMPRESSION_TYPE, --compression-type=COMPRESSION_TYPE
                        compression technology to use [auto, gzip, bzip2, xz]
                        (default=auto)

Some examples:

 enable cluster plugin only and collect dlm lockdumps:
 # sosreport -o cluster -k cluster.lockdump

 disable memory and samba plugins, turn off rpm -Va collection:
 # sosreport -n memory,samba -k rpm.rpmva=off
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-create-collect-sosreport-in-linux/

作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/magesh/
65
translated/talk/20180219 How Linux became my job.md
Normal file
65
translated/talk/20180219 How Linux became my job.md
Normal file
@ -0,0 +1,65 @@
Linux 如何成为我的工作
======



从很早很早以前起,我就一直在使用开源软件。那个时候,还没有所谓的社交媒体:没有搜狐,没有谷歌浏览器(甚至连谷歌也没有),没有亚马逊,几乎连互联网都没有。事实上,那个时候最热门的话题是最新的 Linux 2.0 内核。当时的技术挑战是什么?嗯,是 Linux 发行版中旧的 [a.out][2] 格式被 [ELF][1] 格式所取代,这使得升级一些 Linux 系统可能会有些棘手。

我是如何将自己对这个初出茅庐的年轻操作系统的兴趣,转变为一份开源事业的,这是一个有趣的故事。

### Linux 为乐趣,而非利润

1994 年我大学毕业时,大学的计算机实验室是由 UNIX 系统组成的小型网络;如果你幸运的话,它们还能连接到这个叫做互联网的新事物上。这难以置信,我知道!当时的“网络”(正如我们所知道的)大多是手写的 HTML,`cgi-bin` 目录则是实现动态 Web 交互的一个新平台。我们许多人对这些新技术感到兴奋,还自学了 shell 脚本、Perl、HTML,以及所有我们在父母的 Windows 3.1 电脑上从未见过的简洁的 UNIX 命令,比如用 `vi` 编辑文件、用 `ls` 列出目录,以及阅读我的邮件。

毕业后,我加入了 IBM,从事一个接触不到 UNIX 系统的 PC 操作系统方面的工作,不久,我的大学也切断了我远程访问工程实验室的通道。我该如何继续用 [Pine][6](一个以显示为导向的邮件处理程序)读我的电子邮件?我一直听说有开源的 Linux,但一直没有时间去研究它。
1996 年,我开始在德克萨斯大学奥斯汀分校攻读硕士学位。我知道这将涉及大量编程和论文写作,但我不想使用专有的编辑器、编译器或文字处理器。我想要的是我的 UNIX 体验!

所以我找来一台旧电脑,找到并下载了 Slackware 3.0 这个 Linux 发行版,做好安装磁盘后带到了我的 IBM 办公室。可以说,自从第一次安装 Linux 之后,我就再也没有回过头。在最初的那些日子里,我学到了很多关于 Makefile 和 `make` 系统、关于构建软件、补丁以及源码控制的知识。虽然我开始使用 Linux 只是为了乐趣和个人知识,但它最终改变了我的职业生涯。

虽然我是一个愉快的 Linux 用户,但我认为开源开发仍然是其他人的工作;我想象中那是一群活跃在神秘的 UNIX 极客在线邮件列表上的人。我很感激 Linux HOWTO 项目,它帮助我尝试添加软件包、升级 Linux 版本,或者为新硬件和新 PC 安装设备驱动程序。但是要处理源代码、进行修改或提交到上游……那是别人的事,不是我的。
### Linux 如何成为我的工作

1999 年,我终于有理由把我个人对 Linux 的兴趣与我在 IBM 的日常工作结合起来了。我参与了一个把 Java 虚拟机(JVM)移植到 Linux 上的研究项目。为了确保我们在法律上是安全的,IBM 购买了一份正版盒装的 Red Hat Linux 6.1 来完成这项工作。在 IBM 东京研究实验室(我们的 JVM 即时(JIT)编译器就是他们编写的)的协助下,基于 AIX 的 JVM 源代码以及 Windows 和 OS/2 的 JVM 参考源代码,我们在几周之内就让 JVM 在 Linux 上运行了起来,比 Sun 公司官方宣布的 Java 的 Linux 移植版早了好几个月。既然我已经在 Linux 平台上做了开发,我也就此迷上了它。

到 2000 年,IBM 使用 Linux 的频率迅速增加。由于 [Dan Frye][8] 的远见和坚持,IBM 在 Linux 上下了“[十亿美元的赌注][9]”,并于 1999 年创建了 Linux 技术中心(LTC)。LTC 里有内核开发者、开源贡献者、IBM 硬件设备驱动程序的编写者,以及各种各样以 Linux 为重心的开源工作。比起继续留在与 LTC 关系不大的部门,我更想去的是这个令人兴奋的 IBM 新天地。

从 2003 年到 2013 年,我深度参与了 IBM 的 Linux 战略和 Linux 发行版的使用,最终组建了一个团队,成为 IBM 各个部门大约 60 个产品的 Linux 信息交换中心。我参与了多次收购,期望每个设备、管理系统,以及基于虚拟机或物理设备的中间件都能运行 Linux。我逐渐熟悉了 Linux 发行版的构建,包括打包、选择上游来源、维护发行版专有的补丁集、做定制,并通过我们的发行版合作伙伴提供支持。

由于我们的下游供应商,我很少向上游提交补丁,但我通过配合 [Ulrich Drepper][10] 的工作(包括给 glibc 提交的一个小补丁)以及对[时区数据库][11]的修改(Arthur David Olson 在他维护的 NIH FTP 站点上接受了它)来贡献自己的力量。但到那时为止,我还没有以正式贡献者的身份参与过开源项目。是该改变这种情况的时候了。

2013 年末,我加入了 IBM 云组织的开源小组,正在寻找一个可以参与的上游社区。会是我们在云方面的工作,还是加入 IBM 为 OpenStack 做贡献的大团队?都不是,因为 2014 年 Docker(开源的应用容器引擎)席卷了全球,IBM 让我们几个人去参与这项热门的新技术。在接下来的几个月里,我经历了许多的第一次:使用 GitHub,[不只是 `git clone`,而是学到更多的 Git 知识][12],接受 Pull Request 的评审,用 Go 写代码,等等。在接下来的一年中,我成为了 Docker 引擎项目的维护者之一,为 Docker 创建了支持多种 CPU 架构的下一版镜像规范,并在一个关于容器技术的会议上出席和演讲。
### 现在我在哪里

一晃几年过去,我已经成为了多个开源项目的维护者,包括云原生计算基金会(CNCF)的 [containerd][13] 项目(一个控制 runC 的守护进程)。我还创建了一些项目(如 [manifest-tool][14],以及比较容器运行时性能的 [bucketbench][15])。我通过 OCI(开放容器倡议,一个为创建容器格式和运行时的开放行业标准而设立的开放治理组织)参与了开源治理。我很高兴能在世界各地的会议、聚会(Meetup)以及 IBM 内部发表关于开源的演讲。

开源现在是我在 IBM 职业生涯的一部分。我与工程师、开发人员和行业领袖建立的联系,可能比我在 IBM 内部认识的人还要多。虽然开源与专有开发团队和供应商合作伙伴的协作有许多相同的挑战,但据我的经验,开源带给我的与全球各地的人们的关系和联系,远远超过了这些困难。随着不同的意见、观点和经验的不断磨合,可以形成一种让软件以及参与其中的人不断学习和改进的文化。

这个旅程,从我第一次使用 Linux,到如今成为云原生开源世界的一个导师、贡献者和维护者,让我获益匪浅。我期待着与全球各地的人们进行更多年的开源协作和互动。
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/my-open-source-story-phil-estes

作者:[Phil Estes][a]
译者:[ranchong](https://github.com/ranchong)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/estesp
[1]:https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
[2]:https://en.wikipedia.org/wiki/A.out
[3]:https://opensource.com/node/19796
[4]:https://opensource.com/node/25456
[5]:https://opensource.com/node/35141
[6]:https://opensource.com/article/17/10/alpine-email-client
[7]:https://opensource.com/node/22781
[8]:https://www.linkedin.com/in/danieldfrye/
[9]:http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/linux/
[10]:https://www.linkedin.com/in/ulrichdrepper/
[11]:https://en.wikipedia.org/wiki/Tz_database
[12]:https://opensource.com/article/18/1/step-step-guide-git
[13]:https://github.com/containerd/containerd
[14]:https://github.com/estesp/manifest-tool
[15]:https://github.com/estesp/bucketbench
@ -1,192 +0,0 @@
选择一个 Linux 跟踪器(2015)
======

[![][1]][2]

_Linux 跟踪很神奇!_

跟踪器是一种高级的性能分析和调试工具,如果你使用过 `strace(1)` 或者 `tcpdump(8)`,你不应该被它吓到……你使用的就是跟踪器。系统跟踪器能让你看到很多的东西,而不仅是系统调用或者数据包,因为常见的跟踪器都可以跟踪内核或者应用程序的任何东西。

有大量的 Linux 跟踪器可供你选择。由于它们中的每个都有一个官方的(或者非官方的)吉祥物,我们有足够多的选择给孩子们展示。

你喜欢使用哪一个呢?

我从两类读者的角度来回答这个问题:大多数人和性能/内核工程师。当然,随着时间的推移,这也可能会发生变化,因此,我需要及时去更新本文内容,或许是每年一次,或者更频繁。

## 对于大多数人

大多数人(开发者、系统管理员、运维人员、网络可靠性工程师(SRE)……)是不需要去学习系统跟踪器的底层细节的。以下是你需要去了解和做的事情:

### 1. 使用 perf_events 进行 CPU 剖析

可以使用 perf_events 进行 CPU 剖析(profiling),其结果可以用一个 [火焰图][3] 来形象地表示。比如:

```
git clone --depth 1 https://github.com/brendangregg/FlameGraph
perf record -F 99 -a -g -- sleep 30
perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > perf.svg
```

Linux 的 perf_events(即 `perf`,后者是它的命令)是官方为 Linux 用户准备的跟踪器/分析器。它位于内核源码中,并且维护得非常好(而且现在它的功能还在快速增强)。它一般是通过 linux-tools-common 这个包来安装的。

`perf` 可以做的事情很多,但是,如果我只能建议你学习其中的一个功能的话,那就是 CPU 剖析。虽然从技术角度来说,这并不是事件“跟踪”,而是采样(sampling)。最难的部分是获得完整的栈和符号,这部分在我的 [Linux Profiling at Netflix][4] 中针对 Java 和 Node.js 讨论过。

### 2. 知道它能干什么

正如一位朋友所说的:“你不需要知道 X 光机是如何工作的,但你需要明白的是,如果你吞下了一个硬币,X 光机是你的一个选择!”你需要知道使用跟踪器能够做什么,因此,如果你在业务上确实需要它,你可以以后再去学习它,或者请会使用它的人来做。

简单地说:几乎任何事情都可以通过跟踪来了解。内部文件系统、TCP/IP 处理过程、设备驱动、应用程序内部情况。阅读我在 lwn.net 上的 [ftrace][5] 文章,也可以去浏览 [perf_events 页面][6],那里有一些跟踪(和剖析)能力的示例。

### 3. 需要一个前端工具

如果你要购买一个性能分析工具(有许多公司销售这类产品),并要求支持 Linux 跟踪,那就要求它有一个直观的“点击”界面去探查内核的内部,并包含一个在栈不同位置的延迟热力图,就像我在 [Monitorama 演讲][7] 中描述的那样。

我创建并开源了我自己的一些前端,虽然它们是基于 CLI 的(不是图形界面的)。这些工具可以让其他人更快更容易地使用跟踪器。比如,用我的 [perf-tools][8] 跟踪新进程是这样的:

```
# ./execsnoop
Tracing exec()s. Ctrl-C to end.
PID PPID ARGS
22898 22004 man ls
22905 22898 preconv -e UTF-8
22908 22898 pager -s
22907 22898 nroff -mandoc -rLL=164n -rLT=164n -Tutf8
[...]
```
在 Netflix,我创建了 [Vector][9],它是一个实例分析工具,实际上也是一个 Linux 跟踪器的前端。

## 对于性能或者内核工程师

一般来说,我们的工作都非常难,因为大多数人或许要求我们去搞清楚如何去跟踪某个事件,以及由此需要选择使用哪个跟踪器。为完全理解一个跟踪器,你通常需要花至少一百多个小时去使用它。因此,理解所有的 Linux 跟踪器并能在它们之间做出正确的选择是件很难的事情。(我或许是唯一接近完成这件事的人。)

在这里我建议选择如下之一:

A)选择一个全能的跟踪器,并以它为标准。这需要在一个测试环境中花大量的时间来搞清楚它的细微差别和安全性。我现在的建议是 SystemTap 的最新版本(即从这个 [源][10] 构建的版本)。我知道有的公司选择的是 LTTng,尽管它没那么强大(但是它很安全),他们也用得很好。如果 sysdig 添加了对跟踪点或 kprobes 的支持,它也可以成为另一个候选者。

B)按我的 [Velocity 教程][11] 中的流程图来选择。这意味着尽可能使用 ftrace 或者 perf_events,等 eBPF 集成到内核后再使用 eBPF,之后将其它的跟踪器,如 SystemTap/LTTng,作为对 eBPF 的补充。我目前在 Netflix 的工作中就是这么做的。

以下是我对各个跟踪器的评价:
### 1. ftrace

我爱 [ftrace][12],它是内核黑客最好的朋友。它内置于内核中,能够利用跟踪点、kprobes 以及 uprobes,并且提供一些功能:使用可选的过滤器和参数进行事件跟踪;事件计数和计时,内核概览;以及函数流走查。关于它的示例可以查看内核源码树中的 [ftrace.txt][13]。它通过 /sys 来管理,是面向单个 root 用户的(虽然你可以使用缓冲区实例来破解它以支持多用户),它的界面有时很繁琐,但是它比较容易“黑”进去,并且有前端:ftrace 的主要创建者 Steven Rostedt 设计了 trace-cmd,而我创建了 perf-tools 集合。我最讨厌的地方是它不可编程,因此,你不能,比如说,保存和获取时间戳、计算延迟,然后把结果保存为直方图。你需要把事件转储到用户级,并为此付出开销,然后再进行后期处理。它有望通过 eBPF 实现可编程。

### 2. perf_events

[perf_events][14] 是 Linux 用户的主要跟踪工具,它源自 Linux 内核,一般是通过 linux-tools-common 包来安装。又称为 `perf`,指的是它的前端命令,它非常高效(动态缓冲),一般用于跟踪并转储到一个文件中(perf.data),然后在之后的某个时间进行后期处理。它可以做大部分 ftrace 能做的事情。它做不了函数流走查,并且不太容易“黑”进去(因为它的安全/错误检查做得非常好)。但它可以做剖析(采样)、CPU 性能计数、用户级的栈翻译,以及利用调试信息(debuginfo)进行带局部变量的代码行级跟踪。它也支持多个并发用户。与 ftrace 一样,它也是内核不可编程的,除非 eBPF 的支持落地(已经有计划中的补丁)。如果只学习一个跟踪器,我建议大家去学习 perf,它可以解决大量的问题,并且也相对安全。

### 3. eBPF

扩展的伯克利包过滤器(eBPF)是一个内核内的虚拟机,可以在事件上运行程序,它非常高效(JIT)。它可能最终会为 ftrace 和 perf_events 提供内核内可编程能力,并可以去增强其它跟踪器。它现在由 Alexei Starovoitov 开发,还没有完全集成,但是对于一些令人印象深刻的工具,有些内核版本(比如 4.1)已经足够支持了:比如,块设备 I/O 的延迟热力图。更多参考资料,请查阅 Alexei 的 [BPF 演示][15] 和他的 [eBPF 示例][16]。

### 4. SystemTap

[SystemTap][17] 是功能最强大的跟踪器。它可以做任何事情:剖析、跟踪点、kprobes、uprobes(uprobes 就来自 SystemTap)、USDT、内核内编程等等。它将程序编译成内核模块并加载,这是一种很难保证安全的方法。它是在内核代码树之外开发的,并且在过去的一段时间内出现过很多问题(崩溃或死机)。许多问题并不是 SystemTap 的过错:它常常是第一个使用内核某些跟踪功能的工具,因此也第一个踩到 bug。最新版本的 SystemTap 好了很多(你需要从它的源代码编译),但是,许多人仍然没有从早期版本的问题阴影中走出来。如果你想去使用它,先花一些时间在测试环境中验证,并到 irc.freenode.net 的 #systemtap 频道与开发者进行讨论。(Netflix 有一个容错架构,我们使用过 SystemTap,但是我们或许比你更少担心它的安全性。)我最讨厌的地方是,它似乎假定你拥有内核调试信息(debuginfo),而我通常并没有。没有它其实也能做一些事情,但是缺少相关的文档和示例(我现在自己开始补充一些了)。

### 5. LTTng

[LTTng][18] 对事件收集进行了优化,性能要好于其它的跟踪器,也支持许多的事件类型,包括 USDT。它也是在内核代码树之外开发的。它的核心部分非常简单:通过一个很小的且固定的指令集将事件写入跟踪缓冲区。这样让它既安全又快速。缺点是没有简便的内核内编程方式。我听说这不是个大问题,因为它优化得很好,即便需要大量的后期处理,也可以充分应对。它还探索了一种不同的分析技术:更多地以“黑匣子”方式记录所有感兴趣的事件,以便以后在 GUI 中分析。我担心的是记录时会遗漏所需的事件,我真的需要花一些时间去看看它在实践中是如何工作的。这是我花时间最少的一个跟踪器(没有什么实践的原因)。

### 6. ktap

[ktap][19] 是一个很有前途的跟踪器,它在内核中使用了一个 lua 虚拟机,不需要调试信息就可以工作,也适用于嵌入式设备。这使得它进入了人们的视野,在某个时候似乎要成为 Linux 上最好的跟踪器。然而,随后 eBPF 开始集成到内核,而 ktap 的集成工作被推迟了,等待它改用 eBPF 而不是自己的虚拟机。由于 eBPF 在几个月后仍然处于集成过程中,ktap 的开发者已经等待了很长的时间。我希望在今年的晚些时候它能够重启开发。

### 7. dtrace4linux

[dtrace4linux][20] 主要是一个人(Paul Fox)利用业余时间将 Sun DTrace 移植到 Linux 的成果。它令人印象深刻,但还有一些探针不能工作,它目前最多只能算是实验性的工具(不安全)。我认为对于许可证的担心,使人们对它保持谨慎:它可能永远也进入不了 Linux 内核,因为 Sun 是基于 CDDL 许可证发布的 DTrace;Paul 的解决方法是将它作为一个附加模块。我非常希望看到 Linux 上的 DTrace,并且希望这个项目能够完成,我曾想我加入 Netflix 后会花一些时间来帮它完成。然而,我一直在使用的还是内置的跟踪器 ftrace 和 perf_events。

### 8. OL DTrace

[Oracle Linux DTrace][21] 是将 DTrace 移植到 Linux(尤其是 Oracle Linux)的一个正经的尝试。过去几年的许多发布版本都显示它在稳步发展,开发者甚至谈到了改善 DTrace 测试套件,这显示出这个项目很有前途。许多有用的功能已经完成:系统调用、剖析、sdt、proc、sched 以及 USDT。我一直在等待 fbt(函数边界跟踪,对内核的动态跟踪),它将成为 Linux 内核上强大的功能。它最终能否成功,取决于能否吸引足够多的人去使用 Oracle Linux(并为支持付费)。另一个障碍是它并非完全开源:内核组件是开源的,但用户级代码我还没有看到。

### 9. sysdig

[sysdig][22] 是一个很新的跟踪器,它可以使用类似 tcpdump 的语法来处理系统调用事件,并用 lua 做后期处理。它也令人印象深刻,很高兴能在系统跟踪领域看到创新。它的局限性是,目前它只能处理系统调用,并且它要将所有事件转储到用户级进行后期处理。虽然你可以使用系统调用做很多事情,但我还是希望看到它支持跟踪点、kprobes 以及 uprobes。我也希望看到它支持 eBPF 以便做内核内概览。sysdig 的开发者现在正在增加对容器的支持。可以关注它的进一步发展。

## 深入阅读

我自己的工作中使用到的跟踪器包括:

**ftrace**:我的 [perf-tools][8] 集合(查看其中的示例目录);我在 lwn.net 上的 [ftrace 跟踪器的文章][5];一个 [LISA14][8] 演讲;以及文章:[函数计数][23]、[iosnoop][24]、[opensnoop][25]、[execsnoop][26]、[TCP 重传][27]、[uprobes][28] 和 [USDT][29]。

**perf_events**:我的 [perf_events 示例][6] 页面;在 SCALE 上的一个 [Linux Profiling at Netflix][4] 演讲;以及文章:[CPU 采样][30]、[静态跟踪点][31]、[热力图][32]、[计数][33]、[内核行级跟踪][34]、[off-CPU 时间火焰图][35]。

**eBPF**:文章 [eBPF:迈出一小步][36],和一些 [BPF-tools][37](我还需要发布更多)。

**SystemTap**:很久以前,我写了一篇 [使用 SystemTap][38] 的文章,它有点过时了。最近我发布了一些 [systemtap-lwtools][39],展示了在没有内核调试信息的情况下如何使用 SystemTap。

**LTTng**:我使用它的时间很短,还没有发布过相关文章。

**ktap**:我的 [ktap 示例][40] 页面包括一些单行程序和脚本,虽然它们是针对早期版本的。

**dtrace4linux**:在我的 [系统性能][41] 书中包含了一些示例,并且过去我为了某些功能开发了一些小的修补,比如 [timestamps][42]。

**OL DTrace**:因为它是对 DTrace 的直接移植,我早期关于 DTrace 的大部分工作都应该是相关的(链接太多了,可以去 [我的主页][43] 上搜索)。一旦它更加完善,我可能会开发一些专用工具。

**sysdig**:我贡献了 [fileslower][44] 和 [subsecond offset spectrogram][45] 这两个 chisel。

**其它**:关于 [strace][46],我写过一篇告诫性的文章。

不好意思,没有更多的跟踪器了!……如果你想知道为什么 Linux 中的跟踪器不止一个,或者关于 DTrace 的内容,在我的 [从 DTrace 到 Linux][47] 演讲中有答案,从 [第 28 张幻灯片][48] 开始。

感谢 [Deirdre Straughan][49] 的编辑,以及跟踪小马的创建(General Zoi 是小马的创建者)。
|
||||
|
||||
via: http://www.brendangregg.com/blog/2015-07-08/choosing-a-linux-tracer.html
|
||||
|
||||
作者:[Brendan Gregg.][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.brendangregg.com
|
||||
[1]:http://www.brendangregg.com/blog/images/2015/tracing_ponies.png
|
||||
[2]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools/105
|
||||
[3]:http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html
|
||||
[4]:http://www.brendangregg.com/blog/2015-02-27/linux-profiling-at-netflix.html
|
||||
[5]:http://lwn.net/Articles/608497/
|
||||
[6]:http://www.brendangregg.com/perf.html
|
||||
[7]:http://www.brendangregg.com/blog/2015-06-23/netflix-instance-analysis-requirements.html
|
||||
[8]:http://www.brendangregg.com/blog/2015-03-17/linux-performance-analysis-perf-tools.html
|
||||
[9]:http://techblog.netflix.com/2015/04/introducing-vector-netflixs-on-host.html
|
||||
[10]:https://sourceware.org/git/?p=systemtap.git;a=blob_plain;f=README;hb=HEAD
|
||||
[11]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools
|
||||
[12]:http://lwn.net/Articles/370423/
|
||||
[13]:https://www.kernel.org/doc/Documentation/trace/ftrace.txt
|
||||
[14]:https://perf.wiki.kernel.org/index.php/Main_Page
|
||||
[15]:http://www.phoronix.com/scan.php?page=news_item&px=BPF-Understanding-Kernel-VM
|
||||
[16]:https://github.com/torvalds/linux/tree/master/samples/bpf
|
||||
[17]:https://sourceware.org/systemtap/wiki
|
||||
[18]:http://lttng.org/
|
||||
[19]:http://ktap.org/
|
||||
[20]:https://github.com/dtrace4linux/linux
|
||||
[21]:http://docs.oracle.com/cd/E37670_01/E38608/html/index.html
|
||||
[22]:http://www.sysdig.org/
|
||||
[23]:http://www.brendangregg.com/blog/2014-07-13/linux-ftrace-function-counting.html
|
||||
[24]:http://www.brendangregg.com/blog/2014-07-16/iosnoop-for-linux.html
|
||||
[25]:http://www.brendangregg.com/blog/2014-07-25/opensnoop-for-linux.html
|
||||
[26]:http://www.brendangregg.com/blog/2014-07-28/execsnoop-for-linux.html
|
||||
[27]:http://www.brendangregg.com/blog/2014-09-06/linux-ftrace-tcp-retransmit-tracing.html
|
||||
[28]:http://www.brendangregg.com/blog/2015-06-28/linux-ftrace-uprobe.html
|
||||
[29]:http://www.brendangregg.com/blog/2015-07-03/hacking-linux-usdt-ftrace.html
|
||||
[30]:http://www.brendangregg.com/blog/2014-06-22/perf-cpu-sample.html
|
||||
[31]:http://www.brendangregg.com/blog/2014-06-29/perf-static-tracepoints.html
|
||||
[32]:http://www.brendangregg.com/blog/2014-07-01/perf-heat-maps.html
|
||||
[33]:http://www.brendangregg.com/blog/2014-07-03/perf-counting.html
|
||||
[34]:http://www.brendangregg.com/blog/2014-09-11/perf-kernel-line-tracing.html
|
||||
[35]:http://www.brendangregg.com/blog/2015-02-26/linux-perf-off-cpu-flame-graph.html
|
||||
[36]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html
|
||||
[37]:https://github.com/brendangregg/BPF-tools
|
||||
[38]:http://dtrace.org/blogs/brendan/2011/10/15/using-systemtap/
|
||||
[39]:https://github.com/brendangregg/systemtap-lwtools
|
||||
[40]:http://www.brendangregg.com/ktap.html
|
||||
[41]:http://www.brendangregg.com/sysperfbook.html
|
||||
[42]:https://github.com/dtrace4linux/linux/issues/55
|
||||
[43]:http://www.brendangregg.com
|
||||
[44]:https://github.com/brendangregg/sysdig/commit/d0eeac1a32d6749dab24d1dc3fffb2ef0f9d7151
|
||||
[45]:https://github.com/brendangregg/sysdig/commit/2f21604dce0b561407accb9dba869aa19c365952
|
||||
[46]:http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-syscall.html
|
||||
[47]:http://www.brendangregg.com/blog/2015-02-28/from-dtrace-to-linux.html
|
||||
[48]:http://www.slideshare.net/brendangregg/from-dtrace-to-linux/28
|
||||
[49]:http://www.beginningwithi.com/
|
@ -1,131 +0,0 @@
Linux 容器安全的 10 个层面
======



容器提供了打包应用程序的一种简单方法,它实现了从开发到测试再到投入生产系统的无缝传递。它也有助于确保跨不同环境的一致性,包括物理服务器、虚拟机,以及公有云或私有云。这些好处使得一些组织为了更方便地部署和管理那些能为其提升业务价值的应用程序,而快速采用了容器技术。

企业对安全有很高的要求,任何在容器中运行核心服务的人都会问:“容器安全吗?”以及“我们怎么才能相信运行在容器中的应用程序是安全的?”

安全的容器,就像许多安全运行的进程一样。在你部署和运行你的容器之前,你需要去考虑整个解决方案栈各个层面的安全,也需要考虑应用程序和容器整个生命周期的安全。

请尝试从以下十个关键的因素,去确保容器解决方案栈不同层面以及容器生命周期不同阶段的安全。
|
||||
|
||||
### 1. 容器宿主机操作系统和多租户环境
|
||||
|
||||
由于容器将应用程序和它的依赖作为一个单元来处理,使得开发者构建和升级应用程序变得更加容易,并且,容器可以启用多租户技术将许多应用程序和服务部署到一台共享主机上。在一台单独的主机上以容器方式部署多个应用程序、按需启动和关闭单个容器都是很容易的。为完全实现这种打包和部署技术的优势,运营团队需要运行容器的合适环境。运营者需要一个安全的操作系统,它能够在边界上保护容器安全、从容器中保护主机内核、以及保护容器彼此之间的安全。
|
||||
|
||||
### 2. 容器内容(使用可信来源)
|
||||
|
||||
容器是隔离的 Linux 进程,并且在一个共享主机的内核中,容器内使用的资源被限制在仅允许你运行着应用程序的沙箱中。保护容器的方法与保护你的 Linux 中运行的任何进程的方法是一样的。降低权限是非常重要的,也是保护容器安全的最佳实践。甚至是使用尽可能小的权限去创建容器。容器应该以一个普通用户的权限来运行,而不是 root 权限的用户。在 Linux 中可以使用多级安全,Linux 命名空间、安全强化 Linux( [SELinux][1])、[cgroups][2] 、capabilities(译者注:Linux 内核的一个安全特性,它打破了传统的普通用户与 root 用户的概念,在进程级提供更好的安全控制)、以及安全计算模式( [seccomp][3] ),Linux 的这五种安全特性可以用于保护容器的安全。
|
||||
|
||||
在谈到安全时,首先要考虑你的容器里面有什么?例如 ,有些时候,应用程序和基础设施是由很多可用的组件所构成。它们中的一些是开源的包,比如,Linux 操作系统、Apache Web 服务器、Red Hat JBoss 企业应用平台、PostgreSQL、以及Node.js。这些包的容器化版本已经可以使用了,因此,你没有必要自己去构建它们。但是,对于你从一些外部来源下载的任何代码,你需要知道这些包的原始来源,是谁构建的它,以及这些包里面是否包含恶意代码。
|
||||
|
||||
### 3. 容器镜像仓库(安全访问容器镜像)
|
||||
|
||||
你的团队构建的容器,其最上层内容是从公共位置下载的容器镜像。因此,像管理其它类型的二进制文件一样,去管理容器镜像的下载以及内部构建的镜像,这一点至关重要。许多私有镜像仓库(registry)都支持容器镜像的存储。选择一个私有镜像仓库,它可以帮你对存储在其中的容器镜像实现策略自动化。
|
||||
|
||||
### 4. 安全性与构建过程
|
||||
|
||||
在一个容器化环境中,构建过程是软件生命周期的一个阶段,它将所需的运行时库和应用程序代码集成到一起。管理这个构建过程对于软件栈安全来说是很关键的。遵守“一次构建,到处部署”的原则,可以确保构建过程的结果正是生产系统中需要的。保持容器的恒定不变也很重要 — 换句话说就是,不要对正在运行的容器打补丁,而是,重新构建和部署它们。
|
||||
|
||||
不论是因为你处于一个高强度监管的行业中,还是只希望简单地优化你的团队的成果,去设计你的容器镜像管理以及构建过程,可以使用容器层的优势来实现控制分离,因此,你应该去这么做:
|
||||
|
||||
* 运营团队管理基础镜像
|
||||
* 设计者管理中间件、运行时、数据库、以及其它解决方案
|
||||
* 开发者专注于应用程序层面,并且只写代码
|
||||
|
||||
|
||||
|
||||
最后,标记好你的定制构建容器,这样可以确保在构建和部署时不会搞混乱。
|
||||
|
||||
### 5. 控制好在同一个集群内部署应用
|
||||
|
||||
对于在构建过程中出现的任何问题,或者在镜像部署之后发现的任何漏洞,请在基于策略的自动化工具上添加另外的安全层。
|
||||
|
||||
我们来看一下,一个应用程序的构建使用了三个容器镜像层:内核、中间件、以及应用程序。如果在内核镜像中发现了问题,那么只能重新构建镜像。一旦构建完成,镜像就会被发布到容器平台注册中。这个平台可以自动检测到发生变化的镜像。对于基于这个镜像的其它构建将被触发一个预定义的动作,平台将自己重新构建应用镜像,合并进修复库。
|
||||
|
||||
|
||||
|
||||
一旦构建完成,镜像将被发布到容器平台的内部注册中。在它的内部注册中,会立即检测到镜像发生变化,应用程序在这里将会被触发一个预定义的动作,自动部署更新镜像,确保运行在生产系统中的代码总是使用更新后的最新的镜像。所有的这些功能协同工作,将安全功能集成到你的持续集成和持续部署(CI/CD)过程和管道中。
|
||||
|
||||
### 6. 容器编配:保护容器平台
|
||||
|
||||
|
||||
|
||||
当然了,应用程序很少会部署在单一的容器中。甚至,单个应用程序一般情况下都有一个前端、一个后端、以及一个数据库。而在容器中以微服务模式部署的应用程序,意味着应用程序将部署在多个容器中,有时它们在同一台宿主机上,有时它们分布在多个宿主机或者节点上。
|
||||
|
||||
在大规模的容器部署时,你应该考虑:
|
||||
|
||||
* 哪个容器应该被部署在哪个宿主机上?
|
||||
  * 该宿主机应该有多大的容量?
|
||||
* 哪个容器需要访问其它容器?它们之间如何发现彼此?
|
||||
* 你如何控制和管理对共享资源的访问,像网络和存储?
|
||||
* 如何监视容器健康状况?
|
||||
* 如何去自动扩展性能以满足应用程序的需要?
|
||||
* 如何在满足安全需求的同时启用开发者的自助服务?
|
||||
|
||||
|
||||
|
||||
考虑到开发者和运营者的能力,提供基于角色的访问控制是容器平台的关键要素。例如,编配管理服务器是中心访问点,应该接受最高级别的安全检查。APIs 是规模化的自动容器平台管理的关键,可以用于为 pods、服务、以及复制控制器去验证和配置数据;在入站请求上执行项目验证;以及调用其它主要系统组件上的触发器。
|
||||
|
||||
### 7. 网络隔离
|
||||
|
||||
在容器中部署现代微服务应用,经常意味着跨多个节点在多个容器上部署。考虑到网络防御,你需要一种在一个集群中的应用之间的相互隔离的方法。一个典型的公有云容器服务,像 Google 容器引擎(GKE)、Azure 容器服务、或者 Amazon Web 服务(AWS)容器服务,是单租户服务。他们让你在你加入的虚拟机集群上运行你的容器。对于多租户容器的安全,你需要容器平台为你启用一个单一集群,并且分割通讯以隔离不同的用户、团队、应用、以及在这个集群中的环境。
|
||||
|
||||
使用网络命名空间,容器内的每个集合(即大家熟知的 “pod”)得到它自己的 IP 和绑定的端口范围,以此来在节点上隔离每个 pod 的网络。默认情况下,来自不同命名空间(项目)的 pod 并不能发送或者接收其它项目的 pod 和服务上的包。你可以使用这些特性在同一个集群内,去隔离开发者环境、测试环境、以及生产环境。但是,这样会导致 IP 地址和端口数量的激增,使得网络管理更加复杂。另外,容器本来就是会被反复创建和销毁的,你应该在能处理这种复杂性的工具上进行投入。在容器平台上比较受欢迎的做法是使用 [软件定义网络][4](SDN)提供一个统一定义的集群网络,它允许跨不同集群的容器进行通讯。
|
||||
|
||||
### 8. 存储
|
||||
|
||||
容器既可被用于无状态应用,也可被用于有状态应用。保护附加存储是保护有状态服务的一个关键要素。容器平台对多个受欢迎的存储提供了插件,包括网络文件系统(NFS)、AWS 弹性块存储(EBS)、GCE 持久磁盘、GlusterFS、iSCSI、RADOS(Ceph)、Cinder 等等。
|
||||
|
||||
一个持久卷(PV)可以通过资源提供者支持的任何方式装载到一个主机上。提供者有不同的性能,而每个 PV 的访问模式是设置为被特定的卷支持的特定模式。例如,NFS 能够支持多路客户端同时读/写,但是,一个特定的 NFS 的 PV 可以在服务器上被发布为只读模式。每个 PV 得到它自己的一组反应特定 PV 性能的访问模式的描述,比如,ReadWriteOnce、ReadOnlyMany、以及 ReadWriteMany。
|
||||
|
||||
### 9. API 管理、终端安全、以及单点登陆(SSO)
|
||||
|
||||
保护你的应用包括管理应用、以及 API 的认证和授权。
|
||||
|
||||
Web SSO 能力是现代应用程序的一个关键部分。在构建它们的应用时,容器平台带来了开发者可以使用的多种容器化服务。
|
||||
|
||||
APIs 是微服务构成的应用程序的关键所在。这些应用程序有多个独立的 API 服务,这导致了终端服务数量的激增,它就需要额外的管理工具。推荐使用 API 管理工具。所有的 API 平台应该提供多种 API 认证和安全所需要的标准选项,这些选项既可以单独使用,也可以组合使用,以用于发布证书或者控制访问。
|
||||
|
||||
|
||||
|
||||
这些选项包括标准的 API keys、应用 ID 和密钥对、 以及 OAuth 2.0。
|
||||
|
||||
### 10. 在一个联合集群中的角色和访问管理
|
||||
|
||||
|
||||
|
||||
在 2016 年 7 月份,Kubernetes 1.3 引入了 [Kubernetes 联合集群][5]。这是一个令人兴奋的新特性,并且还在 Kubernetes 上游持续演进,目前处于 Kubernetes 1.6 beta 中。联合是用于部署和访问跨多集群运行在公有云或企业数据中心的应用程序服务的。多个集群能够用于去实现应用程序的高可用性,让应用程序跨多个可用区域,或者启用公共的部署管理,或者跨不同的供应商进行迁移,比如 AWS、Google Cloud 以及 Azure。
|
||||
|
||||
当管理联合集群时,你必须确保你的编配工具能够提供,你所需要的跨不同部署平台的实例的安全性。一般来说,认证和授权是很关键的 — 不论你的应用程序运行在什么地方,将数据安全可靠地传递给它们,以及管理跨集群的多租户应用程序。Kubernetes 扩展了联合集群,包括对联合的秘密数据、联合的命名空间、以及 Ingress objects 的支持。
|
||||
|
||||
### 选择一个容器平台
|
||||
|
||||
当然,它并不仅关乎安全。你需要提供一个你的开发者团队和运营团队有相关经验的容器平台。他们需要一个安全的、企业级的基于容器的应用平台,它能够同时满足开发者和运营者的需要,而且还能够提高操作效率和基础设施利用率。
|
||||
|
||||
想从 Daniel 在 [欧盟开源峰会][7] 上的 [容器安全的十个层面][6] 的演讲中学习更多知识吗?这个峰会将于 10 月 23-26 日在 Prague 举行。
|
||||
|
||||
### 关于作者
|
||||
Daniel Oh,关注领域:微服务(Microservices)、敏捷(Agile)、DevOps、Java EE、容器、OpenShift、JBoss、技术布道。
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/10/10-layers-container-security
|
||||
|
||||
作者:[Daniel Oh][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/daniel-oh
|
||||
[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux
|
||||
[2]:https://en.wikipedia.org/wiki/Cgroups
|
||||
[3]:https://en.wikipedia.org/wiki/Seccomp
|
||||
[4]:https://en.wikipedia.org/wiki/Software-defined_networking
|
||||
[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/
|
||||
[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223
|
||||
[7]:http://events.linuxfoundation.org/events/open-source-summit-europe
|
@ -1,140 +0,0 @@
|
||||
Linux 最好的图片截取和视频截录工具
|
||||
======
|
||||

|
||||
|
||||
这里可能有一个困扰你多时的问题:当你想要获取一张屏幕截图向开发者反馈问题,或是在 _Stack Overflow_ 寻求帮助时,你可能缺乏一个可靠的屏幕截图工具去保存和发送截图。GNOME 有一些形如程序和 shell 拓展的工具。不必担心,这里有 Linux 最好的屏幕截图工具,供你截取图片或截录视频。
|
||||
|
||||
## Linux 最好的图片截取和视频截录工具
|
||||
|
||||
### 1. Shutter
|
||||
|
||||
[][2]
|
||||
|
||||
[Shutter][3] 可以截取任意你想截取的屏幕,是 Linux 最好的截屏工具之一。得到截屏之后,它还可以在保存截屏之前预览图片。GNOME 面板顶部有一个 Shutter 拓展菜单,使得用户进入软件变得更人性化。
|
||||
|
||||
你可以选择性地截取窗口、桌面、光标下的面板、自由区域、菜单、提示框或网页。Shutter 允许用户直接上传屏幕截图到设置内首选的云服务器中。它同样允许用户在保存截图之前编辑图片;同样提供可自由添加或移除的插件。
|
||||
|
||||
终端内键入下列命令安装此工具:
|
||||
|
||||
```
|
||||
sudo add-apt-repository -y ppa:shutter/ppa
|
||||
sudo apt-get update && sudo apt-get install shutter
|
||||
```
|
||||
|
||||
### 2. Vokoscreen
|
||||
|
||||
[][4]
|
||||
|
||||
|
||||
[Vokoscreen][5] 是一款可以记录屏幕活动并配以讲解的软件。它有一个简洁的界面,界面的顶端包含有一个简明的菜单栏,方便用户开始录制视频。
|
||||
|
||||
你可以选择记录整个屏幕,或是记录一个窗口,抑或是记录一个自由区域,并且自定义保存类型;你甚至可以将屏幕录制记录保存为 gif 文件。当然,你也可以使用网络摄像头记录自己的情况,将自己转换成学习者。一旦你这么做了,你就可以在应用程序中回放视频记录。
|
||||
|
||||
[][6]
|
||||
|
||||
你可以从自己发行版的仓库安装 Vokoscreen,或者也可以在 [pkgs.org][7] 选择下载适用于你的发行版的安装包。
|
||||
|
||||
```
|
||||
sudo dpkg -i vokoscreen_2.5.0-1_amd64.deb
|
||||
```
|
||||
|
||||
### 3. OBS
|
||||
|
||||
[][8]
|
||||
|
||||
[OBS][9] 可以用来录制自己的屏幕,亦可用来录制互联网上的数据流。它允许你看到自己所录制的内容,或者边讲解边录制屏幕。它允许你根据喜好选择录制视频的品质;它也允许你选择文件的保存类型。除了视频录制功能之外,你还可以切换到 Studio 模式,不借助其他软件编辑视频。要在你的 Linux 系统中安装 OBS,你必须确保你的电脑已安装 FFmpeg。Ubuntu 14.04 或更早的版本安装 FFmpeg 可以使用如下命令:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:kirillshkrogalev/ffmpeg-next
|
||||
|
||||
sudo apt-get update && sudo apt-get install ffmpeg
|
||||
```
|
||||
|
||||
Ubuntu 15.04 以及之后的版本,你可以在终端中键入如下命令安装 FFmpeg:
|
||||
|
||||
```
|
||||
sudo apt-get install ffmpeg
|
||||
```
|
||||
|
||||
如果 FFmpeg 安装完成,在终端中键入如下命令安装 OBS:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:obsproject/obs-studio
|
||||
|
||||
sudo apt-get update
|
||||
|
||||
sudo apt-get install obs-studio
|
||||
```
|
||||
|
||||
### 4. Green Recorder
|
||||
|
||||
[][10]
|
||||
|
||||
[Green recorder][11] 是一款界面简单的屏幕录制程序。你可以选择录制内容,包括视频和单纯的音频,可以显示鼠标指针,甚至可以跟随鼠标录制视频。同样,你可以选择记录窗口或是自由区域,以便在自己的记录中只保留需要的内容;你还可以自定义保存视频的帧率。如果你想要延迟录制,它提供一个选项可以设置你想要的延迟时间。它还提供一个录制结束后运行命令的选项,这样,命令就可以在视频录制结束后立即运行。
|
||||
|
||||
在终端中键入如下命令来安装 green recorder:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:fossproject/ppa
|
||||
|
||||
sudo apt update && sudo apt install green-recorder
|
||||
```
|
||||
|
||||
### 5. Kazam
|
||||
|
||||
[][12]
|
||||
|
||||
[Kazam][13] 在使用截图工具的 Linux 用户中十分流行。这是一款简单直观的软件,它可以让你进行屏幕截图或视频录制,也同样允许在截图或录制之前设置延时。它可以让你选择录制区域、窗口或是想要抓取的整个屏幕。Kazam 的界面布局很出色,和其他软件相比毫无复杂感。它的特点就是让你优雅地截图。Kazam 在系统托盘和菜单中都有图标,无需打开应用本身,你就可以开始屏幕截图。
|
||||
|
||||
终端中键入如下命令来安装 Kazam:
|
||||
|
||||
```
|
||||
sudo apt-get install kazam
|
||||
```
|
||||
|
||||
如果没有找到 PPA,你需要使用下面的命令安装它:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:kazam-team/stable-series
|
||||
|
||||
sudo apt-get update && sudo apt-get install kazam
|
||||
```
|
||||
|
||||
### 6. GNOME 拓展截屏工具
|
||||
|
||||
[][1]
|
||||
|
||||
GNOME 有一个拓展就叫做 Screenshot Tool,如果你没有禁用它,它会常驻系统面板。由于它常驻系统面板,它会一直等待你的调用,方便和快捷是它最主要的特点。这个拓展也有用来设置首选项的选项窗口。在 extensions.gnome.org 中搜索 “_Screenshot Tool_”,即可在你的 GNOME 中安装它。
|
||||
|
||||
你需要安装 GNOME 拓展支持、浏览器的 GNOME Shell 集成拓展和 GNOME 调整工具(GNOME Tweaks)才能使用这个工具。
|
||||
|
||||
[][14]
|
||||
|
||||
当你碰到一个问题,不知道怎么处理,想要在 [Linux 社区][15] 或者其他开发社区分享、寻求帮助的时候,**Linux 截图工具** 尤其合适。学习开发、编程或者其他任何事物时,都会发现这些工具在分享截图的时候真的很实用。YouTube 用户和教程制作爱好者会发现视频截录工具很适合录制可以发布的教程。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxandubuntu.com/home/best-linux-screenshot-screencasting-tools
|
||||
|
||||
作者:[linuxandubuntu][a]
|
||||
译者:[CYLeft](https://github.com/CYLeft)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxandubuntu.com
|
||||
[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg
|
||||
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg
|
||||
[3]:http://shutter-project.org/
|
||||
[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg
|
||||
[5]:https://github.com/vkohaupt/vokoscreen
|
||||
[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg
|
||||
[7]:https://pkgs.org/download/vokoscreen
|
||||
[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg
|
||||
[9]:https://obsproject.com/
|
||||
[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg
|
||||
[11]:https://github.com/foss-project/green-recorder
|
||||
[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg
|
||||
[13]:https://launchpad.net/kazam
|
||||
[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg
|
||||
[15]:http://www.linuxandubuntu.com/home/top-10-communities-to-help-you-learn-linux
|
189
translated/tech/20180123 Migrating to Linux- The Command Line.md
Normal file
@ -0,0 +1,189 @@
|
||||
迁徙到 Linux:命令行环境
|
||||
======
|
||||
|
||||

|
||||
|
||||
这是关于迁徙到 Linux 系列的第四篇文章了。如果您错过了之前的内容,可以回顾我们之前谈到的内容 [新手之 Linux][1]、[文件和文件系统][2]、和 [图形环境][3]。Linux 无处不在,它可以运行在大部分的网络服务器,如 web、email 和其他服务器;它同样可以在您的手机、汽车控制台和其他很多设备上使用。现在,您可能会开始好奇 Linux 系统,并对学习 Linux 的工作原理萌发兴趣。
|
||||
|
||||
在 Linux 下,命令行非常实用。Linux 的桌面系统中,尽管命令行只是可选操作,但是您依旧能看见很多朋友开着一个命令行窗口和其他应用窗口并肩作战。在运行 Linux 系统的网络服务器中,命令行通常是唯一能直接与操作系统交互的工具。因此,命令行是有必要了解的,至少应当涉猎一些基础命令。
|
||||
|
||||
在命令行(通常称之为 Linux shell)中,所有操作都是通过键入命令完成。您可以执行查看文件列表、移动文件位置、显示文件内容、编辑文件内容等一系列操作,通过命令行,您甚至可以查看网页中的内容。
|
||||
|
||||
如果您已经熟悉在 Windows(CMD 或者 PowerShell)上使用命令行,可以直接跳到下文介绍两者差异的部分,不过还是建议先阅读完本文前面的内容。
|
||||
|
||||
### 导语
|
||||
|
||||
在命令行中,有一个当前工作目录的概念(文件夹和目录是同义词,在 Linux 中它们通常都被称为目录)。如果没有特别指定目录,许多命令会在当前目录下执行。比如,键入 `ls` 列出文件目录,当前工作目录的文件将会被列举出来。看一个例子:
|
||||
```
|
||||
$ ls
|
||||
Desktop Documents Downloads Music Pictures README.txt Videos
|
||||
```
|
||||
|
||||
`ls Documents` 这条命令将会列出 `Documents` 目录下的文件:
|
||||
```
|
||||
$ ls Documents
|
||||
report.txt todo.txt EmailHowTo.pdf
|
||||
```
|
||||
|
||||
通过 `pwd` 命令可以显示当前您的工作目录。比如:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
```
|
||||
|
||||
您可以通过 `cd` 命令改变当前目录并切换到您想要抵达的目录。比如:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
$ cd Downloads
|
||||
$ pwd
|
||||
/home/student/Downloads
|
||||
```
|
||||
|
||||
路径中的目录由 `/`(斜杠)字符分隔。路径中有一个隐含的层次关系,比如 `/home/student` 路径中,`home` 是顶层目录,而 `student` 是 `home` 的子目录。
|
||||
|
||||
路径要么是绝对路径,要么是相对路径。绝对路径由一个 `/` 字符打头。
|
||||
|
||||
相对路径由 `.` 或者 `..` 开始。在一个路径中,`.` 意味着当前目录,`..` 意味着当前目录的上级目录。比如,`ls ../Documents` 意味着在当前目录的上级目录中寻找名为 `Documents` 的目录并列出其内容:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
$ ls
|
||||
Desktop Documents Downloads Music Pictures README.txt Videos
|
||||
$ cd Downloads
|
||||
$ pwd
|
||||
/home/student/Downloads
|
||||
$ ls ../Documents
|
||||
report.txt todo.txt EmailHowTo.pdf
|
||||
```
|
||||
|
||||
当您第一次打开命令行窗口时,您当前的工作目录被设置为您的家目录,通常为 `/home/<您的登录名>`。家目录专用于存储您登录后的专属文件。
|
||||
|
||||
有一个名为 `$HOME` 的环境变量,它被设置为您的家目录,比如:
|
||||
```
|
||||
$ echo $HOME
|
||||
/home/student
|
||||
```
|
||||
|
||||
下表显示了用于目录导航和管理简单的文本文件的一些命令摘要。
|
||||
|
||||

|
||||
|
||||
### 搜索
|
||||
|
||||
有时我们会遗忘文件的位置,或者忘记了我要寻找的文件名。Linux 命令行有几个命令可以帮助您搜索到文件。
|
||||
|
||||
第一个命令是 `find`。您可以使用 `find` 命令通过文件名或其他属性搜索文件和目录。举个例子,当您遗忘了 todo.txt 文件的位置,我们可以执行下面的代码:
|
||||
```
|
||||
$ find $HOME -name todo.txt
|
||||
/home/student/Documents/todo.txt
|
||||
```
|
||||
|
||||
`find` 程序有很多功能和选项。一个简单的例子:
|
||||
```
|
||||
find <要寻找的目录> -name <文件名>
|
||||
```
|
||||
|
||||
如果这里有 `todo.txt` 文件且不止一个,它将向我们列出拥有这个名字的所有文件的所在位置。`find` 命令有很多便于搜索的选项,比如类型(文件或是目录等)、时间、大小等。更多内容您可以通过 `man find` 获取关于如何使用 `find` 命令的帮助。
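下面是几个组合使用这些选项的示例(其中的目录与文件名均为演示用的假设):

```shell
# 在家目录中查找最近 7 天内修改过的普通文件
find "$HOME" -type f -mtime -7

# 只查找名字以 Doc 开头的目录
find "$HOME" -type d -name "Doc*"

# 查找大于 1MB 的普通文件
find "$HOME" -type f -size +1M
```

`-type f` 与 `-type d` 分别把结果限定为文件和目录,多个条件可以自由组合。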
|
||||
|
||||
您还可以使用 `grep` 命令搜索文件的特殊内容,比如:
|
||||
```
|
||||
grep "01/02/2018" todo.txt
|
||||
```
|
||||
这将为您展示 `todo` 文件中 `01/02/2018` 所在行。
|
||||
|
||||
### 获取帮助
|
||||
|
||||
Linux 有很多命令,这里,我们没有办法一一列举。授人以鱼不如授人以渔,所以下一步我们将向您介绍帮助命令。
|
||||
|
||||
`apropos` 命令可以帮助您查找需要使用的命令。也许您想要查找能够操作目录或是获得文件列表的所有命令,但是您并不希望让这些命令执行。您可以这样尝试:
|
||||
```
|
||||
apropos directory
|
||||
```
|
||||
|
||||
这将从帮助文档中得到一个与 `directory` 关键字相关的命令列表。您甚至可以搜索一个完整的短语:
|
||||
```
|
||||
apropos "list open files"
|
||||
```
|
||||
|
||||
这将找到 `lsof` 命令,它可以列出打开的文件。
|
||||
|
||||
当您明确要使用的命令,但是不确定应该使用什么选项完成预期工作时,您可以使用 `man` 命令,它是 manual(手册)的缩写。您可以这样使用:
|
||||
```
|
||||
man ls
|
||||
```
|
||||
|
||||
您可以在自己的设备上尝试这个命令。它会提供给您关于使用这个命令的完整信息。
|
||||
|
||||
通常,很多命令都有 `--help` 选项(比如,`ls --help`),用于列出命令的使用提示。`man` 页面的内容通常太繁琐,`--help` 选项可能更适合快速浏览。
|
||||
|
||||
### 脚本
|
||||
|
||||
Linux 命令行中最贴心的功能是能够运行脚本文件,并且能重复运行。Linux 命令可以存储在文本文件中,您只需在文件的开头写入 `#!/bin/sh`,之后追加命令。之后,一旦文件被设置为可执行,您就可以像执行命令一样运行脚本文件,比如:
|
||||
```
|
||||
--- contents of get_todays_todos.sh ---
|
||||
#!/bin/sh
|
||||
todays_date=`date +"%m/%d/%y"`
|
||||
grep $todays_date $HOME/todos.txt
|
||||
```
|
||||
|
||||
对于固定的工作,脚本可以帮助自动化重复执行的命令。如果需要的话,脚本也可以很复杂,能够使用循环、判断语句等。限于篇幅,这里不细述,但是您可以在网上查询到相关信息。
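作为一个使用循环和判断语句的小例子(文件名是演示用的假设),下面的脚本会备份当前目录下存在的每一个待办文件:

```shell
#!/bin/sh
# 逐个检查文件是否存在,存在则复制一份 .bak 备份
for f in todo.txt todos.txt; do
    if [ -f "$f" ]; then
        cp "$f" "$f.bak"
        echo "backed up $f"
    fi
done
```

把它存为 `backup_todos.sh` 并赋予执行权限后,即可反复运行。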
|
||||
|
||||
### 已经熟悉 Windows 命令行?
|
||||
|
||||
如果您对 Windows CMD 或者 PowerShell 程序很熟悉,在命令行输入命令应该是轻车熟路的。然而,它们之间有很多差异,如果您没有理解它们之间的差异可能会为之困扰。
|
||||
|
||||
首先,Linux 下的 PATH 环境变量与 Windows 不同。在 Windows 中,当前目录被认为是搜索路径中的第一个目录,尽管它没有在环境变量中列出。而在 Linux 下,当前目录不会被隐式地包含在搜索路径中,将当前目录加入 PATH 甚至被认为是有安全风险的做法。因此,要在 Linux 的当前目录执行程序,您需要使用 `./`(表示当前目录的相对路径)前缀。这可能会困扰很多 CMD 用户。比如:
|
||||
```
|
||||
./my_program
|
||||
```
|
||||
|
||||
而不是
|
||||
```
|
||||
my_program
|
||||
```
|
||||
|
||||
另外,在 Windows 中,环境变量中的路径是以 `;`(分号)分隔的;在 Linux 中,则由 `:`(冒号)分隔。同样,在 Linux 中路径内的目录由 `/` 字符分隔,而在 Windows 中由 `\` 字符分隔。因此 Windows 中典型的环境变量会像这样:
|
||||
```
|
||||
PATH="C:\Program Files;C:\Program Files\Firefox;"
|
||||
```

而在 Linux 中,它可能看起来像这样:

```
|
||||
PATH="/usr/bin:/opt/mozilla/firefox"
|
||||
```
|
||||
|
||||
还要注意,在 Linux 中环境变量通过 `$` 来展开,而在 Windows 中您需要使用一对百分号(像这样:`%PATH%`)。
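下面的小示例演示了 `$` 展开以及冒号分隔的路径(要追加的目录 `$HOME/bin` 是演示用的假设):

```shell
# 查看当前搜索路径,Linux 下各目录以冒号分隔
echo "$PATH"

# 把 $HOME/bin 追加到 PATH 末尾,仅对当前 shell 会话生效
export PATH="$PATH:$HOME/bin"

# 把冒号换成换行,逐行查看;最后一行即刚追加的目录
echo "$PATH" | tr ':' '\n' | tail -n 1
```

要让修改永久生效,通常把 `export` 那一行写入 `~/.bashrc` 之类的 shell 启动文件。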
|
||||
|
||||
在 Linux 中,通过 `-` 使用命令选项,而在 Windows 中,使用选项要通过 `/` 字符。所以,在 Linux 中您应该:
|
||||
```
|
||||
a_prog -h
|
||||
```
|
||||
|
||||
而不是
|
||||
```
|
||||
a_prog /h
|
||||
```
|
||||
|
||||
在 Linux 下,文件扩展名并没有特殊意义。例如,将 `myscript` 重命名为 `myscript.bat` 并不会使其变得可执行,而是需要设置文件的执行权限。文件执行权限会在下一篇的内容中介绍。
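作为预览,设置执行权限大致是这样(文件名是演示用的假设):

```shell
# 创建一个小脚本,它起初并不可执行
printf '#!/bin/sh\necho hello\n' > myscript

# 加上可执行权限;注意扩展名在这里无关紧要
chmod +x myscript

# 现在可以用 ./ 前缀运行它,输出 hello
./myscript
```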
|
||||
|
||||
在 Linux 中,如果文件或者目录名以 `.` 字符开头,意味着它们是隐藏文件。比如,如果您要编辑 `.bashrc` 文件,在家目录的普通文件列表中是看不到它的,但它确实存在,只不过是隐藏文件。在命令行中,您可以通过 `ls` 命令的 `-a` 选项查看隐藏文件,比如:
|
||||
```
|
||||
ls -a
|
||||
```
|
||||
|
||||
在 Linux 中,普通的命令与 Windows 的命令不尽相同。下面的表格显示了常用命令中 CMD 命令和 Linux 命令行的差异。
|
||||
|
||||

|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2018/1/migrating-linux-command-line
|
||||
|
||||
作者:[John Bonesio][a]
|
||||
译者:[CYLeft](https://github.com/CYLeft)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/johnbonesio
|
||||
[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
|
||||
[2]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems
|
||||
[3]:https://www.linux.com/blog/learn/2017/12/migrating-linux-graphical-environments
|
@ -1,115 +0,0 @@
|
||||
搭建私有云:OwnCloud
|
||||
======
|
||||
|
||||
所有人都在讨论云。尽管市面上有很多为我们提供云存储和其他云服务的主要的服务商,但是我们还是可以为自己搭建一个私有云。
|
||||
|
||||
在本教程中,我们将讨论如何利用 OwnCloud 搭建私有云。OwnCloud 是一个可以安装在我们 Linux 设备上的 web 应用程序,能够存储和提供我们的数据。OwnCloud 可以分享日历、联系人和书签,共享音/视频流等等。
|
||||
|
||||
本教程中,我们使用的是 CentOS 7 系统,但是本教程同样适用于在其他 Linux 发行版中安装 OwnCloud。让我们开始安装 OwnCloud,并且做一些准备工作。
|
||||
|
||||
**(推荐阅读:[如何在 CentOS & RHEL 上使用 Apache 作为反向代理服务器][1])**
|
||||
|
||||
**(同时推荐:[实时 Linux 服务器监测和 GLANCES 监测工具][2])**
|
||||
|
||||
### 预备知识
|
||||
|
||||
* 我们需要在机器上配置 LAMP。参照阅读我们的文章‘[CentOS/RHEL 上配置 LAMP 服务器最简单的教程][3] & [在 Ubuntu 搭建 LAMP stack][4]’。
|
||||
|
||||
  * 我们需要在自己的设备里安装这些包:`php-mysql php-json php-xml php-mbstring php-zip php-gd curl php-curl php-pdo`。使用包管理器安装它们。
|
||||
|
||||
```
|
||||
$ sudo yum install php-mysql php-json php-xml php-mbstring php-zip php-gd curl php-curl php-pdo
|
||||
```
|
||||
|
||||
### 安装
|
||||
|
||||
要安装 OwnCloud,我们现在需要在服务器上下载 OwnCloud 安装包。使用下面的命令从官方网站下载最新的安装包(10.0.4):
|
||||
|
||||
```
|
||||
$ wget https://download.owncloud.org/community/owncloud-10.0.4.tar.bz2
|
||||
```
|
||||
|
||||
使用下面的命令解压,
|
||||
|
||||
```
|
||||
$ tar -xvf owncloud-10.0.4.tar.bz2
|
||||
```
|
||||
|
||||
现在,将所有解压后的文件移动至‘/var/www/html’
|
||||
|
||||
```
|
||||
$ mv owncloud/* /var/www/html
|
||||
```
|
||||
|
||||
下一步,我们需要配置 apache 的 `httpd.conf` 文件:
|
||||
|
||||
```
|
||||
$ sudo vim /etc/httpd/conf/httpd.conf
|
||||
```
|
||||
|
||||
同时更改下面的选项,
|
||||
|
||||
```
|
||||
AllowOverride All
|
||||
```
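需要注意的是,`AllowOverride` 指令必须位于对应的 `<Directory>` 配置段内才会生效。一个最小化的示例片段如下(假设站点根目录为 `/var/www/html`,具体段落请以你系统中的 httpd.conf 为准):

```apache
<Directory "/var/www/html">
    AllowOverride All
    Require all granted
</Directory>
```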
|
||||
|
||||
保存该文件,并修改 owncloud 文件夹的属主与权限:
|
||||
|
||||
```
|
||||
$ sudo chown -R apache:apache /var/www/html/
|
||||
$ sudo chmod 777 /var/www/html/config/
|
||||
```
|
||||
|
||||
然后重启 apache 服务器执行修改,
|
||||
|
||||
```
|
||||
$ sudo systemctl restart httpd
|
||||
```
|
||||
|
||||
现在,我们需要在 MariaDB 上创建一个数据库,保存来自 OwnCloud 的数据。使用下面的命令创建数据库和数据库用户:
|
||||
|
||||
```
|
||||
$ mysql -u root -p
|
||||
MariaDB [(none)] > create database owncloud;
|
||||
MariaDB [(none)] > GRANT ALL ON owncloud.* TO ocuser@localhost IDENTIFIED BY 'owncloud';
|
||||
MariaDB [(none)] > flush privileges;
|
||||
MariaDB [(none)] > exit
|
||||
```
|
||||
|
||||
服务器配置部分完成后,现在我们可以在网页浏览器上访问 owncloud。打开浏览器,输入您的服务器 IP 地址,我这边的服务器是 10.20.30.100,
|
||||
|
||||
![安装 owncloud][7]
|
||||
|
||||
一旦页面加载完毕,我们将看到上面的页面。这里,我们将创建管理员用户,同时提供数据库信息。当所有信息填写完毕,点击 ‘Finish setup’。
|
||||
|
||||
我们将被重定向到登陆页面,在这里,我们需要输入先前创建的凭据,
|
||||
|
||||
![安装 owncloud][9]
|
||||
|
||||
认证成功之后,我们将进入 owncloud 面板,
|
||||
|
||||
![安装 owncloud][11]
|
||||
|
||||
我们可以使用移动设备应用程序,同样也可以使用网页界面更新我们的数据。现在,我们已经有自己的私有云了,同时,关于如何安装 owncloud 创建私有云的教程也进入尾声。请在评论区留下自己的问题或建议。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/create-personal-cloud-install-owncloud/
|
||||
|
||||
作者:[SHUSAIN][a]
|
||||
译者:[CYLeft](https://github.com/CYLeft)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/apache-as-reverse-proxy-centos-rhel/
|
||||
[2]:http://linuxtechlab.com/linux-server-glances-monitoring-tool/
|
||||
[3]:http://linuxtechlab.com/easiest-guide-creating-lamp-server/
|
||||
[4]:http://linuxtechlab.com/install-lamp-stack-on-ubuntu/
|
||||
[6]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=400%2C647
|
||||
[7]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud1-compressor.jpg?resize=400%2C647
|
||||
[8]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=876%2C541
|
||||
[9]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud2-compressor1.jpg?resize=876%2C541
|
||||
[10]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=981%2C474
|
||||
[11]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud3-compressor1.jpg?resize=981%2C474
|
@ -0,0 +1,86 @@
|
||||
如何在使用 vim 时访问/查看 Python 帮助
|
||||
======
|
||||
|
||||
我是一名 Vim 编辑器的新用户,用它编写 Python 代码。有没有办法在 vim 中查看 Python 文档而无需访问互联网?假设我的光标在 Python 的 print 关键字下,然后按下 F1,我想查看关键字 print 的帮助。如何在 vim 中显示 python 的 help()?如何在不离开 vim 的情况下调用 pydoc3/pydoc 寻求帮助?
|
||||
|
||||
pydoc 或 pydoc3 命令可以根据名称显示 Python 关键字、主题、函数、模块或包的文本文档,也可以显示对模块中(或包内模块中)的类或函数的点式引用的文档。你可以从 vim 中调用 pydoc。让我们看看如何在 vim 编辑器中使用 pydoc 访问 Python 文档。
|
||||
|
||||
### 使用 pydoc 访问 python 帮助
|
||||
|
||||
语法是:
|
||||
```
|
||||
pydoc keyword
|
||||
pydoc3 keyword
|
||||
pydoc len
|
||||
pydoc print
|
||||
```
|
||||
编辑你的 `~/.vimrc`:
|
||||
`$ vim ~/.vimrc`
|
||||
为 pydoc3 添加以下配置(python v3.x 文档)。在正常模式下创建 H 键的映射:
|
||||
```
|
||||
nnoremap <buffer> H :<C-u>execute "!pydoc3 " . expand("<cword>")<CR>
|
||||
```
|
||||
|
||||
|
||||
保存并关闭文件。打开 vim 编辑器:
|
||||
`$ vim file.py`
|
||||
写一些代码:
|
||||
```
|
||||
#!/usr/bin/python3
|
||||
x=5
|
||||
y=10
|
||||
z=x+y
|
||||
print(z)
|
||||
print("Hello world")
|
||||
```
|
||||
|
||||
将光标置于 Python 关键字 print 的下方,然后按下 Shift+H(即大写 H)。你将看到下面的输出:
|
||||
|
||||
[![Access Python Help Within Vim][1]][1]
|
||||
Gif.01:按 H 查看 Python 关键字 print 的帮助
|
||||
|
||||
### 如何在使用 vim 时查看 python 帮助
|
||||
|
||||
[jedi-vim][2] 是一个绑定了自动补全库 Jedi 的 vim 插件。它可以做很多事情,包括当你按下 Shift+K(即大写 K)时显示关键字的帮助。
|
||||
|
||||
#### 如何在 Linux 或类 Unix 系统上安装 jedi-vim
|
||||
|
||||
使用 [pathogen][3]、[vim-plug][4] 或 [Vundle][5] 安装 jedi-vim。我使用的是 vim-plug。在 `~/.vimrc` 中添加以下行:
|
||||
`Plug 'davidhalter/jedi-vim'`
|
||||
保存并关闭文件。启动 vim 并输入:
|
||||
`:PlugInstall`
|
||||
在 Arch Linux 上,你还可以使用 pacman 命令从官方仓库中的 vim-jedi 安装 jedi-vim:
|
||||
`$ sudo pacman -S vim-jedi`
|
||||
它也可以在 Debian(8 及以上)和 Ubuntu(14.04 及以上)上使用 [apt 命令][6]/[apt-get 命令][7] 安装,包名为 vim-python-jedi:
|
||||
`$ sudo apt install vim-python-jedi`
|
||||
在 Fedora Linux 上,它可以用 dnf 安装 vim-jedi:
|
||||
`$ sudo dnf install vim-jedi`
|
||||
Jedi 默认是自动初始化的,所以你不需要进一步的配置。要查看文档/Pydoc,请按 K。它将弹出帮助窗口:
|
||||
[![How to view python help when using vim][8]][8]
|
||||
|
||||
### 关于作者
|
||||
|
||||
作者是 nixCraft 的创建者,也是经验丰富的系统管理员和 Linux 操作系统/Unix shell 脚本的培训师。他曾与全球客户以及 IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 [Twitter][9]、[Facebook][10]、[Google +][11] 上关注他。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/faq/how-to-access-view-python-help-when-using-vim/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz
|
||||
[1]:https://www.cyberciti.biz/media/new/faq/2018/01/Access-Python-Help-Within-Vim.gif
|
||||
[2]:https://github.com/davidhalter/jedi-vim
|
||||
[3]:https://github.com/tpope/vim-pathogen
|
||||
[4]:https://www.cyberciti.biz/programming/vim-plug-a-beautiful-and-minimalist-vim-plugin-manager-for-unix-and-linux-users/
|
||||
[5]:https://github.com/gmarik/vundle
|
||||
[6]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
|
||||
[7]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
|
||||
[8]:https://www.cyberciti.biz/media/new/faq/2018/01/How-to-view-Python-Documentation-using-pydoc-within-vim-on-Linux-Unix.jpg
|
||||
[9]:https://twitter.com/nixcraft
|
||||
[10]:https://facebook.com/nixcraft
|
||||
[11]:https://plus.google.com/+CybercitiBiz
|
118
translated/tech/20180213 Getting started with the RStudio IDE.md
Normal file
@ -0,0 +1,118 @@
|
||||
开始使用 RStudio IDE
|
||||
======
|
||||
|
||||

|
||||
|
||||
从我记事起,我就一直在与数字玩耍。作为 20 世纪 70 年代后期的本科生,我开始上统计学的课程,学习如何检查和分析数据以揭示某些意义。
|
||||
|
||||
那时候,我有一部科学计算器,它让统计计算变得比以前容易很多。在 90 年代早期,作为一名从事 t 检验、相关性以及 [ANOVA][1] 研究的教育心理学研究生,我开始通过精心编写输入 IBM 主机的文本文件来进行计算。这台主机比我的手持计算器进了一步,但是一个微小的间距错误就会使整个分析无效,而且这个过程仍然有点乏味。
|
||||
|
||||
撰写论文时,尤其是我的毕业论文,我需要一种方法能够根据我的数据来创建图表并将它们嵌入到文字处理文档中。我着迷于 Microsoft Excel 及其数字运算能力以及可以用计算结果创建出的大量图表。但每一步都有成本。在 20 世纪 90 年代,除了 Excel,还有其他专有软件包,比如 SAS 和 SPSS+,但对于我那已经满满的研究生时间表来说,学习曲线是一项艰巨的任务。
|
||||
|
||||
### 快速回到现在
|
||||
|
||||
最近,由于我对数据科学的兴趣浓厚,加上对 Linux 和开源软件的浓厚兴趣,我阅读了大量的数据科学文章,并在 Linux 会议上听了许多数据科学演讲者谈论他们的工作。因此,我开始对编程语言 R(一种开源的统计计算软件)非常感兴趣。
|
||||
|
||||
起初,这只是一个火花。当我和我的朋友 Michael J. Gallagher 博士谈论他如何在他的 [博士论文][2] 研究中使用 R 时,这个火花便增大了。最后,我访问了 [R project][3] 的网站,并了解到我可以轻松地安装 [R for Linux][4]。游戏开始!
|
||||
|
||||
### 安装 R
|
||||
|
||||
根据你的操作系统和分布情况,安装 R 会稍有不同。请参阅 [Comprehensive R Archive Network][5] (CRAN) 网站上的安装指南。CRAN 提供了在 [各种 Linux 发行版][6],[Fedora,RHEL,及其衍生版][7],[MacOS][8] 和 [Windows][9] 上的安装指示。
|
||||
|
||||
我在使用 Ubuntu,则按照 CRAN 的指示,将以下行加入到我的 `/etc/apt/sources.list` 文件中:
|
||||
|
||||
```
|
||||
deb https://<my.favorite.cran.mirror>/bin/linux/ubuntu artful/
|
||||
|
||||
```
|
||||
|
||||
接着我在终端运行下面命令:
|
||||
|
||||
```
|
||||
$ sudo apt-get update
|
||||
|
||||
$ sudo apt-get install r-base
|
||||
|
||||
```
|
||||
|
||||
根据 CRAN,“需要从源码编译 R 的用户【如包的维护者,或者任何通过 `install.packages()` 安装包的用户】也应该安装 `r-base-dev` 的包。”
|
||||
|
||||
### 使用 R 和 Rstudio
|
||||
|
||||
安装好了 R,我就准备了解更多关于使用这个强大工具的信息。Gallagher 博士推荐了 [DataCamp][10] 上的 “Start learning R” 课程,并且我也在 [Code School][11] 找到了适用于 R 新手的免费课程。两门课程都帮助我学习了 R 的命令和语法。我还参加了 [Udemy][12] 上的 R 在线编程课程,并从 [No Starch Press][14] 购买了 [Book of R][13]。
|
||||
|
||||
在阅读更多内容并观看 YouTube 视频后,我意识到我还应该安装 [RStudio][15]。RStudio 是 R 的开源 IDE,易于在 [Debian、Ubuntu、Fedora 和 RHEL][16] 上安装。它也可以安装在 MacOS 和 Windows 上。
|
||||
|
||||
根据 Rstudio 网站的说明,可以根据你的偏好对 IDE 进行自定义,具体方法是选择工具菜单,然后从中选择全局选项。
|
||||
|
||||

|
||||
|
||||
R 提供了一些很棒的演示例子,可以在控制台提示符处输入 `demo()` 来访问。`demo(plotmath)` 和 `demo(persp)` 为 R 强大的功能提供了很好的例证。我尝试过一些简单的向量(vector)操作,并在 R 控制台的命令行中绘制,如下所示。
|
||||
|
||||

|
||||
|
||||
你可能想要开始学习如何将 R 和一些样本数据结合起来使用,然后将这些知识应用到自己的数据上,得到描述性统计。我自己没有丰富的数据可以分析,但我搜索了可用的[数据集][18];例如由圣路易斯联邦储备银行提供的[经济研究数据][19]就是这样一个数据集(我并没有用这个例子)。我对一个题为“美国商业航空公司的乘客里程(1937-1960)”的数据集很感兴趣,因此我将它导入 RStudio 以测试 IDE 的功能。RStudio 可以接受各种格式的数据,包括 CSV、Excel、SPSS 和 SAS。
|
||||
|
||||

|
||||
|
||||
数据导入后,我使用 `summary(AirPassengers)` 命令获取数据的一些初始描述性统计信息。按回车键后,我得到了 1949-1960 年的每月航空公司旅客的摘要以及其他数据,包括飞机乘客数量的最小值、最大值、第一四分位数、第三四分位数、中位数以及平均数。
|
||||
|
||||

|
||||
|
||||
我从摘要统计信息中知道航空乘客样本的均值为 280.3。在命令行中输入 `sd(AirPassengers)` 会得到标准差,在 RStudio 控制台中可以看到:
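顺带一提,如果只是想在不打开 R 的情况下核对这类简单的描述性统计,也可以用 shell 工具粗略计算。下面是一个用 awk 计算均值与样本标准差的草图(其中的四个数字只是演示用的假设数据,并非 AirPassengers 数据集):

```shell
# 对标准输入中的一列数字计算均值(mean)与样本标准差(sd)
printf '112\n118\n132\n129\n' | awk '
    { s += $1; ss += $1 * $1; n++ }
    END {
        m = s / n
        printf "mean=%.2f sd=%.2f\n", m, sqrt((ss - n * m * m) / (n - 1))
    }'
```

这里用的是样本标准差公式(分母为 n-1),与 R 的 `sd()` 一致。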
|
||||
|
||||

|
||||
|
||||
接下来,我生成了一个数据直方图,输入 `hist(AirPassengers)` 即可以图形的方式显示此数据集。RStudio 可以将图形导出为 PNG、PDF、JPEG、TIFF、SVG、EPS 或 BMP。
|
||||
|
||||

|
||||
|
||||
除了生成统计数据和图形数据外,R 还记录了我所有的历史操作。这使得我能够返回先前的操作,并且我可以保存此历史记录以供将来参考。
|
||||
|
||||

|
||||
|
||||
在 RStudio 的脚本编辑器中,我可以编写我发出的所有命令的脚本,然后保存该脚本以便在我的数据更改后能再次运行,或者想重新访问它。
|
||||
|
||||

|
||||
|
||||
### 获得帮助
|
||||
|
||||
在 R 提示符下输入 `help()` 可以很容易找到帮助信息。输入你正在寻找的信息的特定主题可以找到具体的帮助信息,例如 `help(sd)` 可以获得有关标准差的帮助。通过在提示符处输入 `contributors()` 可以获得有关 R 项目贡献者的信息。您可以通过在提示符处输入 `citation()` 来了解如何引用 R。通过在提示符处输入 `license()` 可以很容易地获得 R 的许可证信息。
|
||||
|
||||
R 是在 GNU General Public License(1991 年 6 月的版本 2,或者 2007 年 6 月的版本 3)的条款下发布的。有关 R 许可证的更多信息,请参考 [R Project website][20]。
|
||||
|
||||
另外,RStudio 在 GUI 中提供了完善的帮助菜单。该区域包括 RStudio 备忘单(可作为 PDF 下载)、[RStudio][21] 在线学习、RStudio 文档、支持和[许可证信息][22]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/getting-started-RStudio-IDE
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
译者:[szcf-weiya](https://github.com/szcf-weiya)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/don-watkins
|
||||
[1]:https://en.wikipedia.org/wiki/Analysis_of_variance
|
||||
[2]:https://www.michael-j-gallagher.com/high-performance-computing
|
||||
[3]:https://www.r-project.org/
|
||||
[4]:https://cran.r-project.org/index.html
|
||||
[5]:https://cran.r-project.org/
|
||||
[6]:https://cran.r-project.org/bin/linux/
|
||||
[7]:https://cran.r-project.org/bin/linux/redhat/README
|
||||
[8]:https://cran.r-project.org/bin/macosx/
|
||||
[9]:https://cran.r-project.org/bin/windows/
|
||||
[10]:https://www.datacamp.com/onboarding/learn?from=home&technology=r
|
||||
[11]:http://tryr.codeschool.com/levels/1/challenges/1
|
||||
[12]:https://www.udemy.com/r-programming
|
||||
[13]:https://nostarch.com/bookofr
|
||||
[14]:https://opensource.com/article/17/10/no-starch
|
||||
[15]:https://www.rstudio.com/
|
||||
[16]:https://www.rstudio.com/products/rstudio/download/
|
||||
[17]:http://www.r-tutor.com/r-introduction/vector
|
||||
[18]:https://vincentarelbundock.github.io/Rdatasets/datasets.html
|
||||
[19]:https://fred.stlouisfed.org/
|
||||
[20]:https://www.r-project.org/Licenses/
|
||||
[21]:https://www.rstudio.com/online-learning/#R
|
||||
[22]:https://support.rstudio.com/hc/en-us/articles/217801078-What-license-is-RStudio-available-under-
|
@ -0,0 +1,176 @@
|
||||
用 PGP 保护代码完整性 - 第二部分:生成你的主密钥
|
||||
======
|
||||
|
||||

|
||||
|
||||
在本系列文章中,我们将深度探讨如何使用 PGP 以及为工作于自由软件项目的开发者提供实用指南。在前一篇文章中,我们介绍了[基本工具和概念][1]。在本文中,我们将展示如何生成和保护你的 PGP 主密钥。
|
||||
|
||||
### 清单
|
||||
|
||||
1. 生成一个 4096 位的 RSA 主密钥 (ESSENTIAL)
|
||||
|
||||
2. 使用 paperkey 备份你的 RSA 主密钥 (ESSENTIAL)
|
||||
|
||||
3. 添加所有相关的身份 (ESSENTIAL)
|
||||
|
||||
|
||||
|
||||
|
||||
### 考虑事项
|
||||
|
||||
#### 理解“主”(认证)密钥
|
||||
|
||||
在本节和下一节中,我们将讨论“主密钥”和“子密钥”。理解以下内容很重要:
|
||||
|
||||
1. 在“主密钥”和“子密钥”之间没有技术上的区别。
|
||||
|
||||
  2. 我们在创建密钥时赋予它特定的能力,以此来施加功能上的限制
|
||||
|
||||
3. 一个 PGP 密钥有四项能力
|
||||
|
||||
* [S] 密钥可以用于签名
|
||||
|
||||
* [E] 密钥可以用于加密
|
||||
|
||||
* [A] 密钥可以用于身份认证
|
||||
|
||||
* [C] 密钥可以用于认证其他密钥
|
||||
|
||||
4. 一个密钥可能有多种能力
|
||||
|
||||
|
||||
|
||||
|
||||
带有 [C](认证)能力的密钥被认为是“主”密钥,因为它是唯一可以用来表明与其他密钥关系的密钥。只有 [C] 密钥可以被用于:
|
||||
|
||||
* 添加或撤销其他密钥(子密钥)的 S/E/A 能力
|
||||
|
||||
* 添加,更改或撤销密钥关联的身份(uids)
|
||||
|
||||
* 添加或更改本身或其他子密钥的到期时间
|
||||
|
||||
* 为了网络信任目的为其它密钥签名
|
||||
|
||||
|
||||
|
||||
|
||||
在自由软件的世界里,[C]密钥就是你的数字身份。一旦你创建该密钥,你应该格外小心地保护它并且防止它落入坏人的手中。
|
||||
|
||||
#### 在你创建主密钥前
|
||||
|
||||
在你创建的你的主密钥前,你需要选择你的主要身份和主密码。
|
||||
|
||||
##### 主要身份
|
||||
|
||||
身份使用邮件中发件人一栏相同格式的字符串:
|
||||
```
|
||||
Alice Engineer <alice.engineer@example.org>
|
||||
|
||||
```
|
||||
|
||||
你可以在任何时候创建新的身份,取消旧的身份,并且更改你的“主要”身份。由于主要身份在所有 GnuPG 操作中都会展示,你应该选择正式的、最有可能用于 PGP 保护通信的名字和邮件地址,比如你的工作地址或者用于在项目提交(commit)时签名的地址。
|
||||
|
||||
##### 密码
|
||||
|
||||
密码(passphrase)专门用于私钥存储在磁盘上时,使用对称加密算法对其进行加密。如果你的 `.gnupg` 目录的内容被泄露,那么一个好的密码就是防止小偷在线冒充你的最后一道防线,这就是为什么设置一个好的密码很重要的原因。
|
||||
|
||||
设置强密码的一个好方法是从丰富或混合的词典中选取 3-4 个词,而不要引用流行来源(歌曲、书籍、口号)。由于你将相当频繁地使用该密码,所以它应当易于输入和记忆。
|
||||
|
||||
##### 算法和密钥强度
|
||||
|
||||
尽管现在 GnuPG 已经支持椭圆曲线加密一段时间了,我们仍坚持使用 RSA 密钥,至少稍长一段时间。虽然现在就可以开始使用 ED25519 密钥,但你可能会碰到无法正确处理它们的工具和硬件设备。
|
||||
|
||||
如果后续的指南中我们说 2048 位的密钥对 RSA 公钥加密的生命周期已经足够,你可能会好奇主密钥为什么是 4096 位。原因很大程度上是社会因素而非技术上的:主密钥在密钥链上恰好是最显眼的,如果你的主密钥位数比一些与你交互的开发者的少,他们难免会负面评价你。

#### 生成主密钥

要生成你的主密钥,请使用以下命令,并将 “Alice Engineer” 替换为正确的值:

```
$ gpg --quick-generate-key 'Alice Engineer <alice@example.org>' rsa4096 cert
```

随后会弹出一个要求输入密码的对话框。之后,你可能需要移动鼠标或者敲击一些按键,以产生足够的熵,直到命令完成。

查看命令的输出,它类似这样:

```
pub   rsa4096 2017-12-06 [C] [expires: 2019-12-06]
      111122223333444455556666AAAABBBBCCCCDDDD
uid           Alice Engineer <alice@example.org>
```

注意第二行的长字符串,它是你新生成的密钥的完整指纹。密钥 ID(key ID)可以用以下三种不同形式表示:

* 指纹(fingerprint):完整的 40 个字符的密钥标识符
* 长密钥 ID(long):指纹的最后 16 个字符(AAAABBBBCCCCDDDD)
* 短密钥 ID(short):指纹的最后 8 个字符(CCCCDDDD)

你应该避免使用 8 个字符的短密钥 ID,因为它们不够唯一。

这里,我建议你打开一个文本编辑器,把你新密钥的指纹复制粘贴进去。接下来几步你会用到它,把它放在手边会很方便。
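
三种形式之间的关系可以用一小段 shell 来直观地演示(指纹沿用上文的假想示例,仅作说明):

```shell
#!/bin/sh
# 示例指纹(来自上文的假想输出)
fpr="111122223333444455556666AAAABBBBCCCCDDDD"

# 长密钥 ID:指纹的最后 16 个字符
long_id="$(printf '%s' "$fpr" | tail -c 16)"

# 短密钥 ID:指纹的最后 8 个字符
short_id="$(printf '%s' "$fpr" | tail -c 8)"

echo "Fingerprint: $fpr"
echo "Long ID:     $long_id"   # AAAABBBBCCCCDDDD
echo "Short ID:    $short_id"  # CCCCDDDD
```

可以看到,长/短密钥 ID 只是指纹的尾部截取,这也是短 ID 不够唯一的原因:不同的密钥可能恰好有相同的最后 8 个字符。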

#### 备份你的主密钥

出于灾难恢复的目的,同时特别是当你打算参与信任网(Web of Trust)并收集其他项目开发者的密钥签名时,你应该创建私钥的硬拷贝备份。万一所有其它的备份机制都失效了,这就是最后的补救措施。

创建私钥的可打印硬拷贝的最好方法,是使用专为此目的编写的软件 paperkey。paperkey 在所有 Linux 发行版上都可用,在 Mac 上也可以通过 brew 安装。

运行以下命令,用你密钥的完整指纹替换 [fpr]:

```
$ gpg --export-secret-key [fpr] | paperkey -o /tmp/key-backup.txt
```

输出将采用便于 OCR 或手动输入的格式,以备日后恢复之需。打印出该文件,然后拿支笔,在纸的边缘写下该密钥的密码。这一步是必要的,因为密钥输出仍然是用密码加密的;而如果你之后更改了密钥的密码,你肯定不会记得第一次创建时用的是什么密码,我保证。

将打印结果和手写的密码放入信封中,存放在一个安全且有保护的地方,最好远离你家,例如银行保险柜。

**关于打印机的注意事项**:打印机通过并口连接到计算机的时代已经过去了,如今它们拥有完整的操作系统、硬盘和云集成。不过,由于我们发送给打印机的密钥内容是用密码加密过的,这是一项相当安全的操作;但还是请按你自己的“偏执”程度来判断。

#### 添加相关身份

如果你有多个相关的邮件地址(个人、工作、开源项目等),你应该把它们都添加到主密钥中。对于那些你不希望用于 PGP 的地址(例如你的校友地址),就不必这样做了。

该命令是(用你完整的密钥指纹替换 [fpr]):

```
$ gpg --quick-add-uid [fpr] 'Alice Engineer <allie@example.net>'
```

你可以查看已经添加的 UID:

```
$ gpg --list-key [fpr] | grep ^uid
```
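
这里的 `grep ^uid` 只保留以 uid 开头的行。用一段示例文本演示这个筛选效果(下面的密钥列表输出是假设的,仅作说明):

```shell
#!/bin/sh
# 假设的 gpg --list-key 输出(截断),演示 grep ^uid 的筛选效果
listing='pub   rsa4096 2017-12-06 [C] [expires: 2019-12-06]
      111122223333444455556666AAAABBBBCCCCDDDD
uid           Alice Engineer <alice@example.org>
uid           Alice Engineer <allie@example.net>
sub   rsa2048 2017-12-06 [E]'

# 只输出两行 uid 记录,pub/sub 等行都被过滤掉
printf '%s\n' "$listing" | grep '^uid'
```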

##### 选择主 UID

GnuPG 会把你最近添加的 UID 当作你的主 UID,如果这与你想要的不同,你应该把它改回来:

```
$ gpg --quick-set-primary-uid [fpr] 'Alice Engineer <alice@example.org>'
```

下次,我们将介绍如何生成 PGP 子密钥,它们才是你实际用于日常工作的密钥。

通过 Linux 基金会和 edX 的免费课程 [“Introduction to Linux”][2] 了解关于 Linux 的更多信息。
--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/PGP/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key

作者:[KONSTANTIN RYABITSEV][a]
译者:[kimii](https://github.com/kimii)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/mricon
[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

Linux 局域网路由新手指南:第 1 部分
======


前面我们学习了 [IPv6 路由][1]。现在我们继续深入学习 Linux 中 IPv4 路由的基础知识。我们从硬件概述、操作系统和 IPv4 地址的基础知识开始;下周我们将继续学习如何配置它们,以及如何测试路由。

### 局域网路由器硬件

Linux 实际上是一个网络操作系统,一直都是,从一开始它就有内置的网络功能。构建一个将你的局域网连入因特网的网关路由器,要比构建局域网内部的路由器复杂得多:你必须认真考虑安全和防火墙规则,还要处理麻烦的 NAT(网络地址转换),它是 IPv4 的一个痛点。我们为什么不放弃 IPv4 转向 IPv6 呢?那样网络管理员的工作会简单得多。

有点跑题了。从理论上讲,你的 Linux 路由器是一台至少有两个网络接口的小型机器。Linux Gizmos 上有一个很全的单板机目录:[98 个开放规格、黑客友好的 SBC 目录][2]。你可以使用一台很老的笔记本电脑或者台式计算机,也可以使用一台紧凑型计算机,比如 ZaReason Zini 或者 System76 Meerkat,虽然它们有点贵,差不多要 600 美元,但是又结实又可靠,并且你不用在 Windows 许可证上浪费钱。

如果对路由器的要求不高,使用树莓派 3 Model B 作为路由器是一个非常好的选择。它有一个 10/100 以太网端口,板载 2.4GHz 的 802.11n 无线网卡,还有四个 USB 端口,因此你可以插入多个 USB 网卡。USB 2.0 和低速的板载网卡可能会成为你网络上的瓶颈,但是,你不能对它期望太高(毕竟它只要 35 美元,既没有存储也没有电源)。它支持很多种 Linux 风格,因此你可以选择你喜欢的版本;我的最爱是基于 Debian 的版本。

### 操作系统

你可以在选好的硬件上安装你喜欢的 Linux 精简版。定制的路由器操作系统,比如 OpenWRT、Tomato、DD-WRT、Smoothwall、Pfsense 等等,都有它们自己的非标准界面。我的观点是,没必要这么麻烦,它们并没有带来额外的好处。尽量使用标准的 Linux 工具,因为你只需要学习一次。

Debian 的网络安装镜像大约 300MB,支持多种架构,包括 ARM、i386、amd64 和 armhf。Ubuntu 的服务器网络安装镜像还不到 50MB,这样你就可以控制要安装哪些包。Fedora、Mageia 和 openSUSE 都提供精简的网络安装镜像。如果你需要灵感,可以浏览 [Distrowatch][3]。

### 路由器能做什么

我们需要网络路由器做什么?路由器连接不同的网络。如果没有路由,每个网络都是相互隔离的,所有的悲伤和孤独都无人分享,所有节点只能孤独终老。假设你有一个 192.168.1.0/24 网络和一个 192.168.2.0/24 网络。如果没有路由器,你的两个网络之间不能相互沟通。这些都是 C 类的私有地址,每个都有 254 个可用的主机地址。使用 ipcalc 可以非常容易地得到这些信息:

```
$ ipcalc 192.168.1.0/24
Address:   192.168.1.0          11000000.10101000.00000001. 00000000
Netmask:   255.255.255.0 = 24   11111111.11111111.11111111. 00000000
Wildcard:  0.0.0.255            00000000.00000000.00000000. 11111111
=>
Network:   192.168.1.0/24       11000000.10101000.00000001. 00000000
HostMin:   192.168.1.1          11000000.10101000.00000001. 00000001
HostMax:   192.168.1.254        11000000.10101000.00000001. 11111110
Broadcast: 192.168.1.255        11000000.10101000.00000001. 11111111
Hosts/Net: 254                  Class C, Private Internet
```

我喜欢 ipcalc 的二进制输出,它直观地展示了掩码的工作方式。前三个八位组是网络地址,第四个八位组是主机地址;分配主机地址时,网络部分会被“掩盖”掉,只使用剩余的主机部分。你的两个网络有不同的网络地址,这就是没有路由器时它们不能互相通讯的原因。

每个八位组有 256 个值,但是它们并不能提供 256 个主机地址,因为第一个和最后一个值,即 0 和 255,是被保留的:0 是网络标识,255 是广播地址,因此每个网络只有 254 个主机地址。ipcalc 可以帮助你很容易地算清这些。
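
上面说的“掩盖”过程可以直接用位运算来验证。下面是一个最小的 shell 演示(纯属示意,主机地址 192.168.1.77 是假设的):

```shell
#!/bin/sh
# 用位运算演示掩码如何“掩盖”掉主机部分(示例地址为假设值)
ip=192.168.1.77
prefix=24

# 把点分十进制拆成四个八位组
IFS=. read a b c d <<EOF
$ip
EOF

# 拼成 32 位整数,再和掩码做按位与
num=$(( (a << 24) | (b << 16) | (c << 8) | d ))
mask=$(( 0xFFFFFFFF << (32 - prefix) & 0xFFFFFFFF ))
net=$(( num & mask ))

# 还原成点分十进制,得到网络地址
echo "$(( net >> 24 & 255 )).$(( net >> 16 & 255 )).$(( net >> 8 & 255 )).$(( net & 255 ))"
```

输出应为 192.168.1.0,与上面 ipcalc 给出的 Network 一行一致。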

当然,这并不意味着你不能有一个以 0 或者 255 结尾的主机地址。假设你有一个 16 位的前缀:

```
$ ipcalc 192.168.0.0/16
Address:   192.168.0.0          11000000.10101000. 00000000.00000000
Netmask:   255.255.0.0 = 16     11111111.11111111. 00000000.00000000
Wildcard:  0.0.255.255          00000000.00000000. 11111111.11111111
=>
Network:   192.168.0.0/16       11000000.10101000. 00000000.00000000
HostMin:   192.168.0.1          11000000.10101000. 00000000.00000001
HostMax:   192.168.255.254      11000000.10101000. 11111111.11111110
Broadcast: 192.168.255.255      11000000.10101000. 11111111.11111111
Hosts/Net: 65534                Class C, Private Internet
```

ipcalc 列出了第一个和最后一个主机地址:192.168.0.1 和 192.168.255.254。以 0 或者 255 结尾的主机地址是合法的,例如 192.168.1.0 和 192.168.0.255,因为它们都落在最小主机地址和最大主机地址之间。

不论你的地址块是私有的还是公共的,这个原则都同样适用。不要不好意思使用 ipcalc 来帮你计算地址。

### CIDR

CIDR(无类域间路由)通过可变长度的子网掩码扩展了 IPv4。CIDR 允许对网络空间进行更精细的分割。我们用 ipcalc 来演示一下:

```
$ ipcalc 192.168.1.0/22
Address:   192.168.1.0          11000000.10101000.000000 01.00000000
Netmask:   255.255.252.0 = 22   11111111.11111111.111111 00.00000000
Wildcard:  0.0.3.255            00000000.00000000.000000 11.11111111
=>
Network:   192.168.0.0/22       11000000.10101000.000000 00.00000000
HostMin:   192.168.0.1          11000000.10101000.000000 00.00000001
HostMax:   192.168.3.254        11000000.10101000.000000 11.11111110
Broadcast: 192.168.3.255        11000000.10101000.000000 11.11111111
Hosts/Net: 1022                 Class C, Private Internet
```

网络掩码并不局限于整个八位组:它可以跨越第三和第四个八位组,这样子网部分的取值范围就是 0 到 3,而不必是 0 到 255。可用主机地址的数量也不再局限于按整个八位组划分时的那几个值,因为掩码不必以八位组为单位。
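
三个 ipcalc 例子中 Hosts/Net 的数值(254、65534、1022)都可以由前缀长度直接算出来:可用主机数等于 2^(32-前缀长度),再减去网络地址和广播地址这两个保留值。一个最小的 shell 演示(仅作示意):

```shell
#!/bin/sh
# 由前缀长度计算可用主机数:2^(32-prefix) - 2
hosts() {
    prefix=$1
    echo $(( (1 << (32 - prefix)) - 2 ))
}

hosts 24   # 254
hosts 16   # 65534
hosts 22   # 1022
```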

给你留一个家庭作业:复习 CIDR,以及 IPv4 地址空间是如何在公共、私有和保留块之间分配的,这有助于你更好地理解路由。一旦你掌握了地址的相关知识,配置路由器将不再是件复杂的事情。

从[理解 IP 地址和 CIDR 图表][4]、[IPv4 私有地址空间和过滤][5]以及 [IANA IPv4 地址空间注册][6]开始。接下来我们将学习如何创建和管理路由器。

通过来自 Linux 基金会和 edX 的免费课程 [“Linux 入门”][7] 学习更多 Linux 知识。

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/2/linux-lan-routing-beginners-part-1

作者:[Carla Schroder][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/learn/intro-to-linux/2017/7/practical-networking-linux-admins-ipv6-routing
[2]:http://linuxgizmos.com/catalog-of-98-open-spec-hacker-friendly-sbcs/#catalog
[3]:http://distrowatch.org/
[4]:https://www.ripe.net/about-us/press-centre/understanding-ip-addressing
[5]:https://www.arin.net/knowledge/address_filters.html
[6]:https://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xhtml
[7]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

Linux 局域网路由新手指南:第 2 部分
======


上周[我们学习了 IPv4 地址][1],以及如何使用管理员不可或缺的工具 ipcalc。今天我们继续学习更精彩的内容:局域网路由器。

VirtualBox 和 KVM 是测试路由的好工具,本文中的所有示例都是在 KVM 中执行的。如果你喜欢使用物理硬件做测试,那么你需要三台计算机:一台用作路由器,另外两台代表两个不同的网络。你还需要两台以太网交换机和相应的线缆。

我们假设示例是一个有线以太局域网。为了更符合真实使用场景,我们将假设有一些桥接的无线接入点,当然我并不会用这些无线接入点做任何事情。(我也不会去尝试各种无线路由器,以及把移动宽带设备接到以太网局域网口的混合组网,因为它们需要额外的安装和设置。)

### 网段

最简单的网段是两台计算机连接在同一台交换机上、处于相同的地址空间中。这样,两台计算机不需要路由器就可以相互通讯。这就是我们常说的“广播域”,它表示同一个网络中的一组主机。它们可能连接到一台以太网交换机上,也可能连接到多台交换机上。一个广播域可以包括通过以太网桥连接的两个不同的网络,网桥可以让两个网络像单个网络一样运转。无线接入点通常是桥接到有线以太网上的。

不同的广播域只有在通过网络路由器连接的情况下,才能相互通讯。

### 简单的网络

以下示例中的命令不是永久生效的,重启之后你所做的改变将会消失。

一个广播域需要一台路由器才能与其它广播域通讯。我们用两台计算机和 `ip` 命令来演示。两台计算机的地址是 192.168.110.125 和 192.168.110.126,它们都插在同一台以太网交换机上。在 VirtualBox 或 KVM 中,配置新网络时会自动创建一个虚拟交换机,因此,当你把一个网络分配给虚拟机时,就像是把它插入一台交换机。使用 `ip addr show` 查看你的地址和网络接口名字。现在,这两台主机可以互 ping 成功。

现在,给其中一台主机添加一个不同网络的地址:

```
# ip addr add 192.168.120.125/24 dev ens3
```

你必须指定网络接口名字,在本示例中是 ens3。并非必须添加网络前缀(本例中是 /24),但显式地写上它并没有什么坏处。你可以使用 `ip` 命令检查你的配置。下面的示例输出为了清晰起见进行了删减:

```
$ ip addr show
ens3:
    inet 192.168.110.125/24 brd 192.168.110.255 scope global dynamic ens3
       valid_lft 875sec preferred_lft 875sec
    inet 192.168.120.125/24 scope global ens3
       valid_lft forever preferred_lft forever
```

位于 192.168.120.125 的主机可以 ping 它自己(`ping 192.168.120.125`),这是对配置是否正确的一个基本检验;此时第二台计算机还不能 ping 通那个地址。

现在我们需要做一些网络变更。添加第三台主机作为路由器,它需要两个虚拟网络接口,并添加第二个虚拟网络。在现实中,你的路由器必须使用静态 IP 地址,但是现在,我们可以让 KVM 的 DHCP 服务器去为它分配地址,所以你仅需要两个虚拟网络:

* 第一个网络:192.168.110.0/24
* 第二个网络:192.168.120.0/24

接下来你的路由器必须配置为转发数据包。数据包转发默认是禁用的,你可以使用 `sysctl` 命令检查它的设置:

```
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
```

0 意味着禁用,使用如下的命令启用它:

```
# echo 1 > /proc/sys/net/ipv4/ip_forward
```
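
用 `echo` 写 /proc 的改动在重启后会丢失。如果想让转发设置永久生效,常见的做法是把它写进 sysctl 配置文件,下面是一个假设的示例片段(文件名可以自取):

```
# /etc/sysctl.d/99-router.conf:开机自动启用 IPv4 转发(示例)
net.ipv4.ip_forward = 1
```

保存后,在大多数发行版上可以用 `sysctl --system` 重新加载配置,或者直接重启生效。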

接下来,把你的另一台主机配置为第二个网络的一部分:把原来在 192.168.110.0/24 网络中的一台主机分配到 192.168.120.0/24 虚拟网络中,然后重启两台“网络”主机(注意不是路由器)。(或者重启网络服务;我年纪大了还有点懒,记不住那些重启服务的奇怪命令,还不如重启来得干脆。)重启后各台机器的地址应该如下所示:

* 主机 1:192.168.110.125
* 主机 2:192.168.120.135
* 路由器:192.168.110.126 和 192.168.120.136

现在可以随意 ping 它们,可以从任何一台计算机 ping 到任何其它计算机。使用虚拟机和各种 Linux 发行版做这些事时,可能会出现一些意想不到的状况,因此有时候 ping 得通,有时候 ping 不通。ping 不通也是一件好事,这意味着你有机会动手创建一条静态路由。首先,查看已有的路由表。下面分别是主机 1 和路由器的路由表:

```
$ ip route show
default via 192.168.110.1 dev ens3  proto static  metric 100
192.168.110.0/24 dev ens3  proto kernel  scope link  src 192.168.110.164  metric 100

$ ip route show
default via 192.168.110.1 dev ens3  proto static  metric 100
default via 192.168.120.1 dev ens3  proto static  metric 101
169.254.0.0/16 dev ens3  scope link  metric 1000
192.168.110.0/24 dev ens3  proto kernel  scope link  src 192.168.110.126  metric 100
192.168.120.0/24 dev ens9  proto kernel  scope link  src 192.168.120.136  metric 100
```

这里显示了 KVM 分配的默认路由。169.254.* 是链路本地地址,我们不用管它。接下来的两条路由指向我们的路由器。你可以有多条路由,在这个示例中我们将演示如何在主机 1 上添加一条非默认路由:

```
# ip route add 192.168.120.0/24 via 192.168.110.126 dev ens3
```

这意味着主机 1 可以通过路由器接口 192.168.110.126 访问 192.168.120.0/24 网络。看到它们是如何工作的了吗?主机 1 和路由器需要连接在相同的地址空间中,然后由路由器转发到另一个网络。

以下命令可以删除一条路由:

```
# ip route del 192.168.120.0/24
```

在真实的环境中,你不需要像这样手动配置路由,而是使用路由器守护程序并通过 DHCP 进行路由器通告,但是理解基本原理很重要。接下来我们将学习如何配置一个易用的路由器守护程序来为你做这些事情。
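
顺带一提,如果你把这类 `ip route add` 命令写进脚本,重复执行时会因为路由已经存在而报错。一个常见的做法是先检查 `ip route show` 的输出,再决定是否添加。下面用一段示例文本演示这个判断逻辑(routes 变量的内容是假设的,真实脚本中应换成 `ip route show` 的实际输出):

```shell
#!/bin/sh
# 演示“先检查、再添加”的判断逻辑(routes 变量模拟 ip route show 的输出)
routes='default via 192.168.110.1 dev ens3
192.168.110.0/24 dev ens3 proto kernel scope link
192.168.120.0/24 via 192.168.110.126 dev ens3'

if printf '%s\n' "$routes" | grep -q '^192\.168\.120\.0/24 '; then
    echo "route exists"    # 已存在,跳过添加
else
    echo "route missing"   # 不存在,此处才执行 ip route add
fi
```

对上面这份模拟路由表,脚本会输出 route exists,于是跳过添加。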

通过来自 Linux 基金会和 edX 的免费课程 [“Linux 入门”][2] 学习更多 Linux 知识。

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/3/linux-lan-routing-beginners-part-2

作者:[CARLA SCHRODER][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/learn/intro-to-linux/2018/2/linux-lan-routing-beginners-part-1
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

使用一个树莓派和 projectx/os 托管你自己的电子邮件
======


如今有大量的理由,让你不能再将数据存储的任务委以他人之手,也不能在第三方公司运行你的服务:隐私、所有权,以及防范任何人拿你的数据去“赚钱”。但是对于大多数人来说,自己运行一台服务器是件既费时间又需要太多专业知识的事情。不得已,我们只能妥协,抛开这些顾虑,使用某些公司的云服务,随之而来的就是广告、数据挖掘和售卖,以及其它可能的任何东西。

[projectx/os][1] 项目就是要去除这种顾虑,它可以让你在家里毫不费力地托管服务,并且可以很容易地创建一个类似于 Gmail 的帐户。要实现上述目标,你只需要一个 35 美元的树莓派 3 和一个基于 Debian 的操作系统镜像,而且不需要太多的专业知识。仅需四步:

1. 解压缩一个 ZIP 文件到 SD 存储卡中。
2. 编辑 SD 卡上的一个文本文件,以便它能连接你的 WiFi(如果你不使用有线网络的话)。
3. 将这张 SD 卡插到树莓派 3 中。
4. 使用你的智能手机在树莓派 3 上安装 “email 服务器” 应用,并选择一个二级域名。

服务器应用程序(比如电子邮件服务器)被分解到多个容器中,每个容器只能以指定的方式与外界通讯,并使用管理粒度非常细的隔离措施来提高安全性。例如,入站 SMTP、[SpamAssassin][2](反垃圾邮件平台)、[Dovecot][3](安全的 IMAP 服务器)以及 webmail 都使用独立的容器,它们之间互相看不到对方的数据,因此单个守护进程出现问题不会波及其它进程。

另外,像 SpamAssassin 和入站 SMTP 这样的容器都是无状态的,每处理完一封电子邮件,它们都会被拆除并重建。因此,即便有人找到了漏洞并利用了它,他们也无法访问以前或之后的电子邮件,只能访问他们自己挖掘出漏洞的那封电子邮件。幸运的是,大多数对外暴露、最容易受到攻击的服务都是隔离且无状态的。

所有存储的数据都使用 [dm-crypt][4] 进行加密。非公开服务,比如 Dovecot(IMAP)或者 webmail,都只在内部监听,并使用 [ZeroTier One][5] 加密整个网络,因此只有你的设备(智能手机、笔记本电脑、平板等等)才能访问它们。

虽然电子邮件并不是端到端加密的(除非你使用了 [PGP][6]),但是未加密的电子邮件绝不会跨越网络,也不会存储在磁盘上。如此一来,明文的电子邮件只存在于通信双方的私有邮件服务器上:服务器放在各自的家中,受到很好的保护,并且只能通过各自的客户端(智能手机、笔记本电脑、平板等等)访问。

另一个好处就是,个人设备都由密码保护(而不是指纹或者其它生物识别技术),而且放在你家中的设备受到美国[宪法第四修正案][7]的保护,比起公司所有的第三方数据中心,它们受到更强的法律保护。当然,如果你的电子邮件使用的是 Gmail,Google 还保存着你的电子邮件的副本。

### 展望

电子邮件是我使用 projectx/os 打包的第一个应用程序。想像一下,一个应用程序商店里有全部的服务器软件,它们都已打包完毕,易于安装和使用。想要一个博客?添加一个 WordPress 应用程序!想要一个安全的 Dropbox 替代品?添加一个 [Seafile][8] 应用程序或者一个 [Syncthing][9] 后端应用程序。[IPFS][10] 节点?[Mastodon][11] 实例?GitLab 服务器?各种家庭自动化/物联网后端服务?这里有大量非常好的开源服务器软件,它们都非常易于安装,可以用来替换那些专有的云服务。

Nolan Leake 的演讲《[在每个家庭中都有一个云:0 系统管理员技能就可以在家里托管服务器][12]》将于 3 月 8 日至 11 日在 Southern California Linux Expo 上进行。使用折扣码 **OSDC** [注册][13],门票可以享受五折优惠。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/host-your-own-email

作者:[Nolan Leake][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/nolan
[1]:https://git.sigbus.net/projectx/os
[2]:http://spamassassin.apache.org/
[3]:https://www.dovecot.org/
[4]:https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt
[5]:https://www.zerotier.com/download.shtml
[6]:https://en.wikipedia.org/wiki/Pretty_Good_Privacy
[7]:https://simple.wikipedia.org/wiki/Fourth_Amendment_to_the_United_States_Constitution
[8]:https://www.seafile.com/en/home/
[9]:https://syncthing.net/
[10]:https://ipfs.io/
[11]:https://github.com/tootsuite/mastodon
[12]:https://www.socallinuxexpo.org/scale/16x/presentations/cloud-every-home-host-servers-home-0-sysadmin-skills
[13]:https://register.socallinuxexpo.org/reg6/