如何使用 lftp 来加速 Linux/UNIX 上的 ftp/https 下载速度
======

`lftp` 是一个文件传输程序。它可以用于复杂的 FTP、HTTP/HTTPS 和其他连接。如果指定了站点 URL,那么 `lftp` 将连接到该站点,否则会使用 `open` 命令建立连接。它是所有 Linux/Unix 命令行用户的必备工具。我之前写过一些关于 [Linux 下超快命令行下载加速器][1]的文章,比如 Axel 和 prozilla。`lftp` 是另一个能做相同的事,但有更多功能的工具。`lftp` 可以处理八种文件访问方式:

1. ftp
2. ftps
3. http
4. https
5. hftp
6. fish
7. sftp
8. file

### 那么 lftp 的独特之处是什么?

* `lftp` 中的每个操作都是可靠的,即任何非致命错误都会被忽略,并且重复进行操作。所以如果下载中断,它会自动重新启动。即使 FTP 服务器不支持 `REST` 命令,`lftp` 也会尝试从开头检索文件,直到文件传输完成。
* `lftp` 具有类似 shell 的命令语法,允许你在后台并行启动多个命令。
* `lftp` 有一个内置的镜像功能,可以下载或更新整个目录树。还有一个反向镜像功能(`mirror -R`),它可以上传或更新服务器上的目录树。镜像也可以在两个远程服务器之间同步目录,如果可用的话会使用 FXP。参见下面的示例。
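
下面是镜像功能的一个简单示意(示例中的站点地址、用户名和目录名均为演示用的假设值,请替换为你自己的服务器和路径):

```
# 将远程目录 /pub/docs 镜像到本地目录 docs,使用 4 个并行连接
$ lftp -e 'mirror --parallel=4 /pub/docs docs; exit' ftp.example.com

# 反向镜像:把本地的 website 目录上传到服务器上的 /var/www
$ lftp -e 'mirror -R website /var/www; exit' user@example.com
```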

### 如何使用 lftp 作为下载加速器

`lftp` 有 `pget` 命令。它能让你并行下载。语法是:

```
lftp -e 'pget -n NUM -c url; exit'
```

例如,使用 `pget` 分 5 个部分下载 <http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2>:

```
$ cd /tmp
$ lftp -e 'pget -n 5 -c http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2'
```

示例输出:

```
45108964 bytes transferred in 57 seconds (775.3K/s)
lftp :~>quit
```

这里:

1. `pget` - 并行下载文件
2. `-n 5` - 将最大连接数设置为 5
3. `-c` - 如果当前目录存在 `lfile.lftp-pget-status`,则继续中断的传输

### 如何在 Linux/Unix 中使用 lftp 来加速 ftp/https 下载

再尝试添加 `exit` 命令:

```
$ lftp -e 'pget -n 10 -c https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.15.tar.xz; exit'
```

[Linux-lftp-command-demo](https://www.cyberciti.biz/tips/wp-content/uploads/2007/08/Linux-lftp-command-demo.mp4)

### 关于并行下载的说明

请注意,使用下载加速器会增加远程服务器的负载。另请注意,`lftp` 可能无法在不支持多点下载的站点上工作,或者防火墙可能阻止了此类请求。

其它命令提供了更多功能。有关更多信息,请参考 [lftp][2] 的 man 页面:

```
man lftp
```

### 关于作者

作者是 nixCraft 的创建者,经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本的培训师。他曾与全球客户以及 IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 [Twitter][3]、[Facebook][4]、[Google +][5] 上关注他。通过 [RSS/XML 订阅][6]获取最新的系统管理、Linux/Unix 以及开源主题教程。

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/tips/linux-unix-download-accelerator.html

作者:[Vivek Gite][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/tips/download-accelerator-for-linux-command-line-tools.html
[2]:https://lftp.yar.ru/
[3]:https://twitter.com/nixcraft
[4]:https://facebook.com/nixcraft
[5]:https://plus.google.com/+CybercitiBiz
[6]:https://www.cyberciti.biz/atom/atom.xml
内核如何管理内存
============================================================

在学习了进程的 [虚拟地址布局][1] 之后,让我们回到内核,来学习它管理用户内存的机制。这里再次使用 Gonzo:

![Linux kernel mm_struct](http://static.duartes.org/img/blogPosts/mm_struct.png)

Linux 进程在内核中是作为进程描述符 [task_struct][2] (LCTT 译注:它是在 Linux 中描述进程完整信息的一种数据结构)的实例来实现的。task_struct 中的 [mm][3] 域指向**内存描述符**,[mm_struct][4] 是一个程序在内存中的执行摘要。如上图所示,它保存了各内存段的起始和结束地址、进程使用的物理内存页面的 [数量][5](RSS,<ruby>常驻内存大小<rt>Resident Set Size</rt></ruby>)、虚拟地址空间使用的 [总数量][6],以及其它片断。在内存描述符中,我们可以看到它有两种管理内存的方式:**虚拟内存区域**集和**页面表**。Gonzo 的内存区域如下所示:

![Kernel memory descriptor and memory areas](http://static.duartes.org/img/blogPosts/memoryDescriptorAndMemoryAreas.png)
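
顺便一提,内存描述符中统计的这两个数字,在 Linux 上可以通过 `/proc` 文件系统直接观察到。下面以当前 shell 进程为例做一个简单的演示(具体数值因进程和机器而异,这里的输出只是示意):

```
# VmSize 是虚拟地址空间总量,VmRSS 是常驻物理内存大小
$ grep -E 'VmSize|VmRSS' /proc/$$/status
VmSize:     7448 kB
VmRSS:      4004 kB
```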

每个虚拟内存区域(VMA)是一个连续的虚拟地址范围;这些区域绝对不会重叠。一个 [vm_area_struct][7] 的实例完整地描述了一个内存区域,包括它的起始和结束地址、决定访问权限和行为的 [flags][8],以及指定映射到这个区域的文件(如果有的话)的 [vm_file][9] 域。没有映射文件的 VMA 是**匿名**的。上面的每个内存段(比如堆、栈)都对应一个单独的 VMA,内存映射段是例外。这并不是强制要求,但在 x86 机器上通常如此。VMA 并不关心它们处于哪个段中。

一个程序的 VMA 在内存描述符中以两种形式保存:在 [mmap][10] 域中是一个以起始虚拟地址排序的链表,在 [mm_rb][12] 域中是一棵 [红黑树][11] 的根。红黑树可以让内核根据给定的虚拟地址快速搜索出对应的内存区域。在你读取文件 `/proc/pid_of_process/maps` 时,内核只是简单地遍历该进程的 VMA 链表并[显示它们][13]。
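
你可以随手查看任意进程的 VMA 列表。下面用当前 shell 进程做一个简单的演示(输出已截取,地址和文件路径在你的系统上会有所不同):

```
# 每一行就是一个 VMA:地址范围、权限标志,以及映射的文件(如果有)
$ cat /proc/$$/maps
559a4bfa3000-559a4c0a1000 r-xp 00000000 08:01 1835290   /bin/bash
7ffd2c6a1000-7ffd2c6c2000 rw-p 00000000 00:00 0         [stack]
```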

在 Windows 中,[EPROCESS][14] 块大致类似于 task_struct 和 mm_struct 的结合。在 Windows 中与 VMA 类似的是虚拟地址描述符,或称为 [VAD][15];它们保存在一棵 [AVL 树][16] 中。你知道关于 Windows 和 Linux 之间最有趣的事情是什么吗?其实它们只有一点小差别。

4GB 虚拟地址空间被划分为**页面**。在 32 位模式中的 x86 处理器支持 4KB、2MB 以及 4MB 大小的页面。Linux 和 Windows 都使用大小为 4KB 的页面去映射用户的那部分虚拟地址空间。字节 0-4095 在页面 0 中,字节 4096-8191 在页面 1 中,依次类推。VMA 的大小 _必须是页面大小的倍数_ 。下图是使用 4KB 大小页面的总数量为 3GB 的用户空间:

![4KB Pages Virtual User Space](http://static.duartes.org/img/blogPosts/pagedVirtualSpace.png)

处理器通过查看**页面表**将虚拟内存地址转换为真实的物理内存地址。每个进程都有它自己的一组页面表;每当发生进程切换时,用户空间的页面表也同时切换。Linux 在内存描述符的 [pgd][17] 域中保存了一个指向进程页面表的指针。对于每个虚拟页面,页面表中都有一个相应的**页面表条目**(PTE),在常规的 x86 页面表中,它是一个简单的如下所示的大小为 4 字节的记录:

![x86 Page Table Entry (PTE) for 4KB page](http://static.duartes.org/img/blogPosts/x86PageTableEntry4KB.png)

Linux 提供了相应的函数去 [读取][18] 和 [设置][19] PTE 中的每个标志位。标志位 P 告诉处理器这个虚拟页面是否**存在**于物理内存中。如果该位被清除(设置为 0),访问这个页面将触发一个缺页故障。请记住,当这个标志位为 0 时,内核可以在剩余的域上**做任何想做的事**。R/W 标志位是读/写标志;如果被清除,这个页面将变成只读的。U/S 标志位表示用户/超级用户;如果被清除,这个页面将仅能被内核访问。这些标志都用于实现我们在前面看到的只读内存和内核空间保护。

标志位 D 和 A 用于标识页面是否是“**脏的**”或者是已**被访问过**。一个脏页面表示已经被写入,而一个被访问过的页面则表示有过一次写入或者读取。这两个标志位都是粘滞位:处理器只会设置它们,而清除则由内核来完成。最后,PTE 保存了这个页面相应的起始物理地址,按 4KB 对齐。这个看起来不起眼的域是一些痛苦的根源,因为它限制了物理内存最大为 [4 GB][20]。其它的 PTE 域留到下次再讲,因为它涉及物理地址扩展(PAE)的知识。

由于在一个虚拟页面上的所有字节都共享同一个 U/S 和 R/W 标志位,所以内存保护的最小单元是一个虚拟页面。但是,同一块物理内存可能被映射到不同的虚拟页面,这样就有可能出现相同的物理内存带有不同保护标志位的情况。请注意,在 PTE 中是看不到执行权限的。这就是为什么经典的 x86 分页允许栈上的代码被执行,从而更容易利用栈缓冲区溢出漏洞的原因(即使栈不可执行,也仍然可能通过 [return-to-libc][21] 和其它技术加以利用)。PTE 缺少禁止执行标志位说明了一个更普遍的事实:VMA 中的权限标志位可能会、也可能不会完全转换为硬件保护。内核会尽力去做,但是,最终是架构限制了它能做的事情。

虚拟内存不保存任何东西,它只是简单地 _映射_ 一个程序的地址空间到底层的物理内存上。物理内存被当作一个称之为**物理地址空间**的巨大块而由处理器访问。虽然内存的操作[涉及到某些][22]总线,我们在这里先忽略它,并假设物理地址范围从 0 到可用的最大值按字节递增。物理地址空间被内核进一步划分为**页面帧**。处理器并不知道也不关心帧的概念,但它对内核至关重要,因为,**页面帧是物理内存管理的最小单元**。Linux 和 Windows 在 32 位模式下都使用 4KB 大小的页面帧;下图是一个有 2 GB 内存的机器的例子:

![Physical Address Space](http://static.duartes.org/img/blogPosts/physicalAddressSpace.png)

在 Linux 上每个页面帧是被一个 [描述符][23] 和 [几个标志][24] 来跟踪的。通过这些描述符和标志,实现了对机器上整个物理内存的跟踪;每个页面帧的具体状态是公开的。物理内存是通过使用 [Buddy 内存分配][25] (LCTT 译注:一种内存分配算法)技术来管理的,因此,如果一个页面帧可以通过 Buddy 系统分配,那么它是**空闲的**(free)。一个被分配的页面帧可能是**匿名的**、持有程序数据,或者它可能处于页面缓存中、持有保存在一个文件或者块设备中的数据。还有其它特殊用途的页面帧,但这里暂且不谈。Windows 有一个类似的页面帧号(Page Frame Number,PFN)数据库去跟踪物理内存。

我们把虚拟内存区域(VMA)、页面表条目(PTE),以及页面帧放在一起来理解它们是如何工作的。下面是一个用户堆的示例:

![Physical Address Space](http://static.duartes.org/img/blogPosts/heapMapped.png)

蓝色的矩形框表示在 VMA 范围内的页面,而箭头表示页面表条目将页面映射到页面帧。一些缺少箭头的虚拟页面,表示它们对应的 PTE 的存在(Present)标志位被清除(置为 0)。这可能是因为这些页面从来没有被使用过,或者是它们的内容已经被交换出去了。在这两种情况下,即便这些页面在 VMA 中,访问它们也将导致产生一个缺页故障。VMA 和页面表的这种不一致看上去似乎很奇怪,但这种情况却经常发生。

一个 VMA 像是你的程序和内核之间的一份合约。你请求它做一些事情(分配内存、文件映射等等),内核会回应“收到”,然后去创建或者更新相应的 VMA。但是,它 _并不会立刻_ 去“兑现”对你的承诺,而是会等待到发生一个缺页故障时才去 _真正_ 做这个工作。内核是个“懒惰的家伙”、“不诚实的人渣”;这就是虚拟内存的基本原理。它适用于大多数的情况,有一些常见的例外,也有一些意外的情况,但规则是:VMA 记录 _约定的_ 内容,而 PTE 反映这个“懒惰的内核” _真正做了什么_。这两种数据结构共同管理程序的内存;它们共同完成解决缺页故障、释放内存、从内存中交换出数据等等工作。下图是内存分配的一个简单案例:

![Example of demand paging and memory allocation](http://static.duartes.org/img/blogPosts/heapAllocation.png)

当程序通过 [brk()][26] 系统调用来请求一些内存时,内核只是简单地 [更新][27] 堆的 VMA 并给程序回复“已搞定”。而在这个时候并没有真正地分配页面帧,新的页面也没有映射到物理内存上。一旦程序尝试去访问这些页面,处理器将产生缺页故障,然后调用 [do_page_fault()][28]。这个函数将使用 [find_vma()][30] 去 [搜索][29] 包含发生故障的虚拟地址的 VMA。如果找到了,则根据 VMA 上的访问权限来检查这次访问尝试(读取或者写入)。如果没有合适的 VMA,也就是说进程访问的内存没有“合约”约定,进程就会收到段错误。

当[找到][31]了一个合适的 VMA,内核必须通过查看 PTE 的内容和 VMA 的类型来[处理][32]这个故障。在我们的案例中,PTE 显示这个页面 [不存在][33]。事实上,我们的 PTE 是全部空白的(全部都是 0),在 Linux 中这表示这个虚拟页面还没有被映射。由于这是一个匿名 VMA,这是一个纯粹的 RAM 事务,必须由 [do_anonymous_page()][34] 来处理,它分配一个页面帧,并创建一个 PTE,将发生故障的虚拟页面映射到这个新分配的帧上。

有时候,事情可能会有所不同。例如,对于被交换出内存的页面,它的 PTE 的存在(Present)标志位是 0,但并不是空白的,而是记录着页面内容在交换系统中的位置。这些内容必须从磁盘上读取出来,并通过 [do_swap_page()][35] 加载到一个页面帧中,这就是所谓的<ruby>主缺页<rt>major fault</rt></ruby>([major fault][36])。

通过探查内核的用户内存管理,我们得出了前半部分的结论。在下一篇文章中,我们将把文件加载到内存中,来构建一个完整的内存框架图,并了解其对性能的影响。

--------------------------------------------------------------------------------

via: http://duartes.org/gustavo/blog/post/how-the-kernel-manages-your-memory/

作者:[Gustavo Duarte][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://duartes.org/gustavo/blog/about/
[1]:https://linux.cn/article-9255-1.html
[2]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/sched.h#L1075
[3]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/sched.h#L1129
[4]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L173
[5]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L197
[6]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L206
[7]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L99
[8]:http://lxr.linux.no/linux+v2.6.28/include/linux/mm.h#L76
[9]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L150
[10]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L174
[11]:http://en.wikipedia.org/wiki/Red_black_tree
[12]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L175
[13]:http://lxr.linux.no/linux+v2.6.28.1/fs/proc/task_mmu.c#L201
[14]:http://www.nirsoft.net/kernel_struct/vista/EPROCESS.html
[15]:http://www.nirsoft.net/kernel_struct/vista/MMVAD.html
[16]:http://en.wikipedia.org/wiki/AVL_tree
[17]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L185
[18]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/include/asm/pgtable.h#L173
[19]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/include/asm/pgtable.h#L230
[20]:http://www.google.com/search?hl=en&q=2^20+*+2^12+bytes+in+GB
[21]:http://en.wikipedia.org/wiki/Return-to-libc_attack
[22]:http://duartes.org/gustavo/blog/post/getting-physical-with-memory
[23]:http://lxr.linux.no/linux+v2.6.28/include/linux/mm_types.h#L32
[24]:http://lxr.linux.no/linux+v2.6.28/include/linux/page-flags.h#L14
[25]:http://en.wikipedia.org/wiki/Buddy_memory_allocation
[26]:http://www.kernel.org/doc/man-pages/online/pages/man2/brk.2.html
[27]:http://lxr.linux.no/linux+v2.6.28.1/mm/mmap.c#L2050
[28]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L583
[29]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L692
[30]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1466
[31]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L711
[32]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2653
[33]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2674
[34]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2681
[35]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2280
[36]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2316

20 个 OpenSSH 安全实践
======

![OpenSSH 安全提示][1]

OpenSSH 是 SSH 协议的一个实现。一般通过 `scp` 或 `sftp` 用于远程登录、备份、远程文件传输等功能。SSH 能够完美保障两个网络或系统间数据传输的保密性和完整性。尽管如此,它最大的优势是使用公匙加密来进行服务器验证。时不时会出现关于 OpenSSH 零日漏洞的[传言][2]。本文将描述如何设置你的 Linux 或类 Unix 系统以提高 sshd 的安全性。

### OpenSSH 默认设置

* TCP 端口 - 22
* OpenSSH 服务配置文件 - `sshd_config` (位于 `/etc/ssh/`)

### 1、 基于公匙的登录

OpenSSH 服务支持各种验证方式。推荐使用公匙加密验证。首先,使用以下 `ssh-keygen` 命令在本地电脑上创建密匙对:

> 1024 位或低于它的 DSA 和 RSA 加密是很弱的,请不要使用。当考虑 ssh 客户端向后兼容性的时候,请使用 RSA 密匙代替 ECDSA 密匙。所有的 ssh 密钥要么使用 ED25519 ,要么使用 RSA,不要使用其它类型。

```
$ ssh-keygen -t key_type -b bits -C "comment"
```

示例:

```
$ ssh-keygen -t ed25519 -C "Login to production cluster at xyz corp"
或
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_aws_$(date +%Y-%m-%d) -C "AWS key for abc corp clients"
```

下一步,使用 `ssh-copy-id` 命令安装公匙:

```
$ ssh-copy-id -i /path/to/public-key-file user@host
或
$ ssh-copy-id user@remote-server-ip-or-dns-name
```

示例:

```
$ ssh-copy-id vivek@rhel7-aws-server
```

提示输入用户名和密码的时候,确认基于 ssh 公匙的登录是否工作:

```
$ ssh vivek@rhel7-aws-server
```

[![OpenSSH 服务安全最佳实践][3]][3]

更多有关 ssh 公匙的信息,参照以下文章:

* [为备份脚本设置无密码安全登录][48]
* [sshpass:使用脚本密码登录 SSH 服务器][49]
* [如何为一个 Linux/类 Unix 系统设置 SSH 登录密匙][50]
* [如何使用 Ansible 工具上传 ssh 登录授权公匙][51]

### 2、 禁用 root 用户登录

禁用 root 用户登录前,确认普通用户可以以 root 身份登录。例如,允许用户 vivek 使用 `sudo` 命令以 root 身份登录。

#### 在 Debian/Ubuntu 系统中如何将用户 vivek 添加到 sudo 组中

允许 sudo 组中的用户执行任何命令。 [将用户 vivek 添加到 sudo 组中][4]:

```
$ sudo adduser vivek sudo
```

使用 [id 命令][5] 验证用户组。

```
$ id vivek
```

#### 在 CentOS/RHEL 系统中如何将用户 vivek 添加到 sudo 组中

在 CentOS/RHEL 和 Fedora 系统中允许 wheel 组中的用户执行所有的命令。使用 `usermod` 命令将用户 vivek 添加到 wheel 组中:

```
$ sudo usermod -aG wheel vivek
$ id vivek
```

#### 测试 sudo 权限并禁用 ssh root 登录

测试并确保用户 vivek 可以以 root 身份登录执行以下命令:

```
$ sudo -i
$ sudo /etc/init.d/sshd status
$ sudo systemctl status httpd
```

添加以下内容到 `sshd_config` 文件中来禁用 root 登录:

```
PermitRootLogin no
ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no
```

更多信息参见“[如何通过禁用 Linux 的 ssh 密码登录来增强系统安全][6]” 。

### 3、 禁用密码登录

所有的密码登录都应该禁用,仅留下公匙登录。添加以下内容到 `sshd_config` 文件中:

```
AuthenticationMethods publickey
PubkeyAuthentication yes
```

CentOS 6.x/RHEL 6.x 系统中老版本的 sshd 用户可以使用以下设置:

```
PubkeyAuthentication yes
```
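
修改配置并重启 sshd 之后,可以从客户端验证密码登录确实已被拒绝。下面是一个简单的验证方法(`vivek@rhel7-aws-server` 沿用上文的示例,请替换为你自己的用户和主机):

```
# 强制只尝试密码验证、禁用公匙验证,预期会得到 Permission denied
$ ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no vivek@rhel7-aws-server
Permission denied (publickey).
```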

### 4、 限制用户的 ssh 访问

默认状态下,所有的系统用户都可以使用密码或公匙登录。但是有些时候需要为 FTP 或者 email 服务创建 UNIX/Linux 用户。然而,这些用户也可以使用 ssh 登录系统。他们将获得访问系统工具的完整权限,包括编译器和诸如 Perl、Python(可以打开网络端口干很多疯狂的事情)等的脚本语言。通过添加以下内容到 `sshd_config` 文件中来仅允许用户 vivek 和 jerry 通过 SSH 登录系统:

```
AllowUsers vivek jerry
```

当然,你也可以添加以下内容到 `sshd_config` 文件中来达到仅拒绝一部分用户通过 SSH 登录系统的效果。

```
DenyUsers root saroj anjali foo
```

你也可以通过[配置 Linux PAM][7] 来禁用或允许用户通过 sshd 登录。也可以允许或禁止一个[用户组列表][8]通过 ssh 登录系统。
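
基于组的限制用起来往往更省事。下面是一个示意性的例子:先创建一个组,再在 `sshd_config` 中用 `AllowGroups` 指令只放行该组成员(组名 `sshusers` 是为演示而假设的):

```
# 创建组并把允许登录的用户加入其中
$ sudo groupadd sshusers
$ sudo usermod -aG sshusers vivek

# 然后在 sshd_config 中加入:
AllowGroups sshusers
```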

### 5、 禁用空密码

你需要明确禁止空密码账户远程登录系统,更新 `sshd_config` 文件的以下内容:

```
PermitEmptyPasswords no
```

### 6、 为 ssh 用户或者密匙使用强密码

为密匙使用强密码和短语的重要性再怎么强调都不过分。暴力破解之所以能起作用,就是因为用户使用了基于字典的密码。你可以强制用户避开[字典密码][9]并使用[约翰的开膛手工具][10]来检测弱密码。以下是一个随机密码生成器(放到你的 `~/.bashrc` 下):

```
genpasswd() {
    local l=$1
    [ "$l" == "" ] && l=20
    tr -dc A-Za-z0-9_ < /dev/urandom | head -c ${l} | xargs
}
```

运行:

```
genpasswd 16
```

输出:

```
uw8CnDVMwC6vOKgW
```

* [使用 mkpasswd / makepasswd / pwgen 生成随机密码][52]
* [Linux / UNIX: 生成密码][53]
* [Linux 随机密码生成命令][54]

### 7、 为 SSH 的 22 端口配置防火墙

你需要更新 `iptables`/`ufw`/`firewall-cmd` 或 pf 防火墙配置来为 ssh 的 TCP 端口 22 配置防火墙。一般来说,OpenSSH 服务应该仅允许本地或者其他的远端地址访问。

#### Netfilter(Iptables) 配置

更新 [/etc/sysconfig/iptables (Redhat 和其派生系统特有文件) ][11] 实现仅接受来自于 192.168.1.0/24 和 202.54.1.5/29 的连接,输入:

```
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -s 202.54.1.5/29 -m state --state NEW -p tcp --dport 22 -j ACCEPT
```

如果同时使用 IPv6 的话,可以编辑 `/etc/sysconfig/ip6tables` (Redhat 和其派生系统特有文件),输入:

```
-A RH-Firewall-1-INPUT -s ipv6network::/ipv6mask -m tcp -p tcp --dport 22 -j ACCEPT
```

将 `ipv6network::/ipv6mask` 替换为实际的 IPv6 网段。

#### Debian/Ubuntu Linux 下的 UFW

[UFW 是 Uncomplicated FireWall 的首字母缩写,主要用来管理 Linux 防火墙][12],目的是提供一种用户友好的界面。输入[以下命令使得系统仅允许网段 202.54.1.5/29 接入端口 22][13]:

```
$ sudo ufw allow from 202.54.1.5/29 to any port 22
```

更多信息请参见 “[Linux:菜鸟管理员的 25 个 Iptables Netfilter 命令][14]”。

#### *BSD PF 防火墙配置

如果使用 PF 防火墙,在 [/etc/pf.conf][15] 中配置如下:

```
pass in on $ext_if inet proto tcp from {192.168.1.0/24, 202.54.1.5/29} to $ssh_server_ip port ssh flags S/SA synproxy state
```
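
对于使用 firewalld 的系统(CentOS 7+/Fedora),前面提到的 `firewall-cmd` 也可以做同样的事情。下面是一个示意性的富规则(rich rule)例子,沿用上文的 202.54.1.5/29 网段:

```
# 只允许指定网段访问 22 端口,并使规则持久化
$ sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="202.54.1.5/29" port port="22" protocol="tcp" accept'
$ sudo firewall-cmd --reload
```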

### 8、 修改 SSH 端口和绑定 IP

ssh 默认监听系统中所有可用的网卡。修改并绑定 ssh 端口有助于避免暴力脚本的连接(许多暴力脚本只尝试端口 22)。更新文件 `sshd_config` 的以下内容来绑定端口 300 到 IP 192.168.1.5 和 202.54.1.5:

```
Port 300
ListenAddress 192.168.1.5
ListenAddress 202.54.1.5
```

当需要接受动态广域网地址的连接时,使用主动脚本是个不错的选择,比如 fail2ban 或 denyhosts。

### 9、 使用 TCP wrappers (可选的)

TCP wrapper 是一个基于主机的访问控制系统,用来过滤来自互联网的网络访问。OpenSSH 支持 TCP wrappers。只需要更新文件 `/etc/hosts.allow` 中的以下内容就可以使得 SSH 只接受来自于 192.168.1.2 和 172.16.23.12 的连接:

```
sshd : 192.168.1.2 172.16.23.12
```

在 Linux/Mac OS X 和类 UNIX 系统中参见 [TCP wrappers 设置和使用的常见问题][16]。

### 10、 阻止 SSH 破解或暴力攻击

暴力破解是一种在单一或者分布式网络中使用大量(用户名和密码的)组合来尝试连接一个加密系统的方法。可以使用以下软件来应对暴力攻击:

* [DenyHosts][17] 是一个基于 Python SSH 安全工具。该工具通过监控授权日志中的非法登录日志并封禁原始 IP 的方式来应对暴力攻击。
* RHEL / Fedora 和 CentOS Linux 下如何设置 [DenyHosts][18]。
* [Fail2ban][19] 是另一个类似的用来预防针对 SSH 攻击的工具。
* [sshguard][20] 是一个使用 pf 来预防针对 SSH 和其他服务攻击的工具。
* [security/sshblock][21] 阻止滥用 SSH 尝试登录。
* [IPQ BDB filter][22] 可以看做是 fail2ban 的一个简化版。

### 11、 限制 TCP 端口 22 的传入速率(可选的)

netfilter 和 pf 都提供速率限制选项,可以对端口 22 的传入速率进行简单的限制。

#### Iptables 示例

以下脚本将会阻止 60 秒内尝试登录 5 次以上的客户端的连入。

```
#!/bin/bash
inet_if=eth1
ssh_port=22
$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --set
$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --update --seconds 60 --hitcount 5
```

在你的 iptables 脚本中调用以上脚本。其他配置选项:

```
$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state NEW -m limit --limit 3/min --limit-burst 3 -j ACCEPT
$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT
$IPT -A OUTPUT -o ${inet_if} -p tcp --sport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT
# another one line example
# $IPT -A INPUT -i ${inet_if} -m state --state NEW,ESTABLISHED,RELATED -p tcp --dport 22 -m limit --limit 5/minute --limit-burst 5 -j ACCEPT
```

其他细节参见 iptables 用户手册。

#### *BSD PF 示例

以下脚本将限制每个客户端的连入数量为 20,并且 5 秒内的连接不超过 15 个。如果客户端触发此规则,则将其加入 abusive_ips 表并限制该客户端连入。最后 flush 关键词杀死所有触发规则的客户端的连接。

```
sshd_server_ip = "202.54.1.5"
table <abusive_ips> persist
block in quick from <abusive_ips>
pass in on $ext_if proto tcp to $sshd_server_ip port ssh flags S/SA keep state (max-src-conn 20, max-src-conn-rate 15/5, overload <abusive_ips> flush)
```

### 12、 使用端口敲门(可选的)

[端口敲门][23]是通过在一组预先指定的封闭端口上生成连接尝试,以便从外部打开防火墙上的端口的方法。一旦指定的端口连接顺序被触发,防火墙规则就被动态修改以允许发送连接的主机连入指定的端口。以下是一个使用 iptables 实现的端口敲门的示例(节选):

```
$IPT -N stage1
$IPT -A stage1 -m recent --remove --name knock
$IPT -A INPUT -p tcp --syn -j door
```

更多信息请参见:

[Debian / Ubuntu: 使用 Knockd and Iptables 设置端口敲门][55]

### 13、 配置空闲超时注销时长

用户可以通过 ssh 连入服务器,可以配置一个超时时间间隔来避免无人值守的 ssh 会话。 打开 `sshd_config` 并确保配置以下值:

```
ClientAliveInterval 300
ClientAliveCountMax 0
```

以秒为单位设置一个空闲超时时间(300 秒 = 5 分钟)。一旦空闲时间超过这个值,空闲用户就会被踢出会话。更多细节参见[如何自动注销空闲超时的 BASH / TCSH / SSH 用户][24]。

### 14、 为 ssh 用户启用警示标语

更新 `sshd_config` 文件的如下行来设置用户的警示标语:

```
Banner /etc/issue
```

`/etc/issue` 示例文件(节选):

```
----------------------------------------------------------------------------------------------
You are accessing a XYZ Government (XYZG) Information System (IS) that is provided for authorized use only.
or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work
product are private and confidential. See User Agreement for details.
----------------------------------------------------------------------------------------------
```

以上是一个标准的示例,更多的用户协议和法律细节请咨询你的律师团队。

### 15、 禁用 .rhosts 文件(需核实)

禁止读取用户的 `~/.rhosts` 和 `~/.shosts` 文件。更新 `sshd_config` 文件中的以下内容:

```
IgnoreRhosts yes
```

SSH 可以模拟过时的 rsh 命令,所以应该禁用不安全的 RSH 连接。

### 16、 禁用基于主机的授权(需核实)

禁用基于主机的授权,更新 `sshd_config` 文件的以下选项:

```
HostbasedAuthentication no
```

### 17、 为 OpenSSH 和操作系统打补丁

推荐你使用类似 [yum][25]、[apt-get][26] 和 [freebsd-update][27] 等工具保持系统安装了最新的安全补丁。

### 18、 Chroot OpenSSH (将用户锁定在主目录)

默认设置下用户可以浏览诸如 `/etc`、`/bin` 等目录。可以使用 chroot 或者其他专有工具如 [rssh][28] 来保护 ssh 连接。从版本 4.8p1 或 4.9p1 起,OpenSSH 不再需要依赖诸如 rssh 或复杂的 chroot(1) 等第三方工具来将用户锁定在主目录中。可以使用新的 `ChrootDirectory` 指令将用户锁定在其主目录,参见[这篇博文][29]。

### 19、 禁用客户端的 OpenSSH 服务

工作站和笔记本不需要 OpenSSH 服务。如果不需要提供 ssh 远程登录和文件传输功能的话,可以禁用 sshd 服务。CentOS / RHEL 用户可以使用 [yum 命令][30] 禁用或删除 openssh-server:

```
$ sudo yum erase openssh-server
```

Debian / Ubuntu 用户可以使用 [apt 命令][31]/[apt-get 命令][32] 删除 openssh-server:

```
$ sudo apt-get remove openssh-server
```

有可能需要更新 iptables 脚本来移除 ssh 的例外规则。CentOS / RHEL / Fedora 系统可以编辑文件 `/etc/sysconfig/iptables` 和 `/etc/sysconfig/ip6tables`。最后[重启 iptables][33] 服务:

```
# service iptables restart
# service ip6tables restart
```

### 20、 来自 Mozilla 的额外提示

如果使用 6.7+ 版本的 OpenSSH,可以尝试下[以下设置][34](节选):

```
#################[ WARNING ]########################
# Do not use any setting blindly. Read sshd_config #
LogLevel VERBOSE

# Log sftp level file access (read/write/etc.) that would not be easily logged otherwise.
Subsystem sftp /usr/lib/ssh/sftp-server -f AUTHPRIV -l INFO
```

使用以下命令获取 OpenSSH 支持的加密方法:

```
$ ssh -Q cipher
$ ssh -Q cipher-auth
$ ssh -Q mac
$ ssh -Q kex
$ ssh -Q key
```

[![OpenSSH安全教程查询密码和算法选择][35]][35]

### 如何测试 sshd_config 文件并重启/重新加载 SSH 服务?

在重启 sshd 前检查配置文件的有效性和密匙的完整性,运行:

```
$ sudo sshd -t
```

扩展测试模式:

```
$ sudo sshd -T
```

最后,根据系统的版本[重启 Linux 或类 Unix 系统中的 sshd 服务][37]:

```
$ [sudo systemctl start ssh][38] ## Debian/Ubuntu Linux##
$ [sudo systemctl restart sshd.service][39] ## CentOS/RHEL/Fedora Linux##
$ doas /etc/rc.d/sshd restart ## OpenBSD##
$ sudo service sshd restart ## FreeBSD##
```

### 其他建议

1. [使用 2FA 加强 SSH 的安全性][40] - 可以使用 [OATH Toolkit][41] 或 [DuoSecurity][42] 启用多重身份验证。
2. [基于密匙链的身份验证][43] - 密匙链是一个 bash 脚本,可以使得基于密匙的验证非常的灵活方便。相对于无密码密匙,它提供更好的安全性。

### 更多信息:

* [OpenSSH 官方][44] 项目。
* 用户手册: sshd(8)、ssh(1)、ssh-add(1)、ssh-agent(1)。

如果知道这里没有提及的方便的软件或者技术,请在下面的评论中分享,以帮助读者保持 OpenSSH 的安全。

### 关于作者

作者是 nixCraft 的创始人,一个经验丰富的系统管理员和 Linux/Unix 脚本培训师。他曾与全球客户合作,领域涉及 IT、教育、国防和空间研究以及非营利部门等多个行业。请在 [Twitter][45]、[Facebook][46]、[Google+][47] 上关注他。

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/tips/linux-unix-bsd-openssh-server-best-practices.html

作者:[Vivek Gite][a]
译者:[shipsw](https://github.com/shipsw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[46]:https://facebook.com/nixcraft
[47]:https://plus.google.com/+CybercitiBiz
[48]:https://www.cyberciti.biz/faq/ssh-passwordless-login-with-keychain-for-scripts/
[49]:https://linux.cn/article-8086-1.html
[50]:https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/
[51]:https://www.cyberciti.biz/faq/how-to-upload-ssh-public-key-to-as-authorized_key-using-ansible/
[52]:https://www.cyberciti.biz/faq/generating-random-password/

6 个 Linux 平台下最好的替代 Microsoft Office 的开源办公软件
===========

> 概要:还在 Linux 中寻找 Microsoft Office 吗? 这里有一些最好的在 Linux 平台下替代 Microsoft Office 的开源软件。

办公套件是任何操作系统的必备品。很难想象没有 Office 软件的桌面操作系统。虽然 Windows 有 MS Office 套件,Mac OS X 也有它自己的 iWork,但其他很多办公套件都是专门针对这些操作系统的,Linux 也有自己的办公套件。

在本文中,我会列举一些在 Linux 平台替代 Microsoft Office 的办公软件。

### Linux 最好的 MS Office 开源替代软件

![Best Microsoft office alternatives for Linux][1]

一个办公套件通常由以下工具组成:

* 文字处理器
* 电子表格
* 演示功能

我知道 Microsoft Office 提供了比上述三种工具更多的工具,但事实上,您主要使用这三个工具。开源办公套件并不限于只有这三种产品。其中有一些套件提供了一些额外的工具,但我们的重点将放在上述工具上。

让我们看看在 Linux 上有什么办公套件:

#### 6. Apache OpenOffice

![OpenOffice Logo][2]

[Apache OpenOffice][3] 或简单的称为 OpenOffice 有一段名称/所有者变更的历史。 它于 1999 年由 Sun Microsystems 公司开发,后来改名为 OpenOffice,将它作为一个与 MS Office 对抗的自由开源的替代软件。 当 Oracle 在 2010 年收购 Sun 公司后,一年之后便停止开发 OpenOffice。 最后是 Apache 支持它,现在被称为 Apache OpenOffice。

Apache OpenOffice 可用于多种平台,包括 Linux、Windows、Mac OS X、Unix、BSD。 除了 OpenDocument 格式外,它还支持 MS Office 文件。 办公套件包含以下应用程序:Writer、Calc、Impress、Base、Draw、Math。

安装 OpenOffice 是一件痛苦的事,因为它没有提供一个友好的安装程序。另外,有传言说 OpenOffice 开发可能已经停滞。 这是我不推荐的两个主要原因。 出于历史目的,我在这里列出它。

#### 5. Feng Office

![Feng Office logo][6]

[Feng Office][7] 以前被称为 OpenGoo。 这不是一个常规的办公套件。 它完全专注于在线办公,如 Google 文档一样。 换句话说,这是一个开源[协作平台][8]。

Feng Office 不支持桌面使用,因此如果您想在单个 Linux 桌面上使用它,这个可能无法实现。 另一方面,如果你有一个小企业、一个机构或其他组织,你可以尝试将其部署在本地服务器上。

#### 4. Siag Office

![SIAG Office logo][9]

[Siag][10] 是一个非常轻量级的办公套件,适用于类 Unix 系统,可以在 16MB 的系统上运行。 由于它非常轻便,因此缺少标准办公套件中的许多功能。 但小即是美,不是吗? 它具有办公套件的所有必要功能,可以在[轻量级 Linux 发行版][11]上“正常工作”。它是 [Damn Small Linux][12] 默认安装软件。(LCTT 译注:根据官网,现已不是默认安装软件)

#### 3. Calligra Suite

![Calligra free and Open Source office logo][13]

[Calligra][14],以前被称为 KOffice,是 KDE 中默认的 Office 套件。 它支持 Mac OS X、Windows、Linux、FreeBSD 系统。 它也曾经推出 Android 版本。 但不幸的是,后续没有继续支持 Android。 它拥有办公套件所需的必要应用程序以及一些额外的应用程序,如用于绘制流程图的 Flow 和用于项目管理的 Plane。

Calligra 最近的发展产生了相当大的影响,很有可能成为 [LibreOffice 的替代品][16]。

#### 2. ONLYOFFICE

![ONLYOFFICE is Linux alternative to Microsoft Office][17]

[ONLYOFFICE][18] 是办公套件市场上的新玩家,它更专注于协作部分。 企业(甚至个人)可以将其部署到自己的服务器上,以获得类似 Google Docs 之类的协作办公套件。

别担心,您不是必须将其安装在服务器上。有一个免费的开源[桌面版本][19] ONLYOFFICE。 您甚至可以获取 .deb 和 .rpm 二进制文件,以便将其安装在 Linux 桌面系统上。

#### 1. LibreOffice

![LibreOffice logo][20]

当 Oracle 决定停止 OpenOffice 的开发时,是[文档基金会][21]将其复制分发,这就是我们所熟知的 [Libre-Office][22]。从那时起,许多 Linux 发行版都将 OpenOffice 替换为 LibreOffice 作为它们的默认办公应用程序。

它适用于 Linux、Windows 和 Mac OS X,这使得在跨平台环境中易于使用。 和 Apache OpenOffice 一样,这也包括了除了 OpenDocument 格式以外的对 MS Office 文件的支持。 它还包含与 Apache OpenOffice 相同的应用程序。

您还可以使用 LibreOffice 作为 [Collabora Online][23] 的协作平台。 基本上,LibreOffice 是一个完整的软件包,无疑是 Linux、Windows 和 MacOS 的**最佳 Microsoft Office 替代品**。

### 你认为呢?

我希望 Microsoft Office 的这些开源替代软件可以节省您的资金。 您会使用哪种开源生产力办公套件?

--------------------------------------------------------------------------------

via: https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/

作者:[Abhishek Prakash][a]
译者:[amwps290](https://github.com/amwps290)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

学习用工具来驾驭 Git 历史
============================================================

在你的日常工作中,不可能每天都从头开始去开发一个新的应用程序。而真实的情况是,在日常工作中,我们大多数时候所面对的都是遗留下来的一个代码库,去修改一些特性的内容或者现存的一些代码行,这是我们在日常工作中很重要的一部分。而这也就是分布式版本控制系统 `git` 的价值所在。现在,我们来深入了解怎么去使用 `git` 的历史以及如何很轻松地去浏览它的历史。

### Git 历史

首先和最重要的事是,什么是 `git` 历史?正如其名字一样,它是一个 `git` 仓库的提交历史。它包含一堆提交信息,其中有它们的作者的名字、该提交的哈希值以及提交日期。查看一个 `git` 仓库历史的方法很简单,就是一个 `git log` 命令。

> _旁注:为便于本文的演示,我们使用 Ruby on Rails 的仓库的 `master` 分支。之所以选择它的理由是因为,Rails 有良好的 `git` 历史,漂亮的提交信息、引用以及对每个变更的解释。如果考虑到代码库的大小、维护者的年龄和数量,Rails 肯定是我见过的最好的仓库。当然了,我并不是说其它的 `git` 仓库做的不好,它只是我见过的比较好的一个仓库。_

那么,回到 Rails 仓库。如果你在 Rails 仓库上运行 `git log`,你将看到如下所示的输出(节选):

```
commit 66ebbc4952f6cfb37d719f63036441ef98149418
Date: Thu Jun 2 21:26:53 2016 -0500

    [skip ci] Make header bullets consistent in engines.md
```

正如你所见,`git log` 展示了提交的哈希、作者及其 email 以及该提交创建的日期。当然,`git` 输出的可定制性很强大,它允许你去定制 `git log` 命令的输出格式。比如说,我们只想看提交信息的第一行,我们可以运行 `git log --oneline`,它将输出一个更紧凑的日志:

```
66ebbc4 Dont re-define class SQLite3Adapter on test
e98caf8 [skip ci] Make header bullets consistent in engines.md
```

如果你想看 `git log` 的全部选项,我建议你去查阅 `git log` 的 man 页面,你可以在一个终端中输入 `man git-log` 或者 `git help log` 来获得。

> _小提示:如果你觉得 `git log` 看起来太恐怖或者过于复杂,或者你觉得看它太无聊了,我建议你去寻找一些 `git` 的 GUI 或命令行工具。在之前,我使用过 [GitX][1] ,我觉得它很不错,但是,由于我看命令行更“亲切”一些,在我尝试了 [tig][2] 之后,就再也没有去用过它。_

### 寻找尼莫

现在,我们已经知道了关于 `git log` 命令的一些很基础的知识之后,我们来看一下,在我们的日常工作中如何使用它更加高效地浏览历史。

假如,我们怀疑在 `String#classify` 方法中有一个预期之外的行为,我们希望能够找出原因,并且定位出实现它的代码行。

为达到上述目的,你可以使用的第一个命令是 `git grep`,通过它可以找到这个方法定义在什么地方。简单来说,这个命令输出了匹配特定模式的那些行。现在,我们来找出定义它的方法,它非常简单 —— 我们对 `def classify` 运行 grep,然后看到的输出如下:

```
➜ git grep -n 'def classify'
activesupport/lib/active_support/core_ext/string/inflections.rb:205: def classify
activesupport/lib/active_support/inflector/methods.rb:186: def classify(table_name)
tools/profile:112: def classify
```

更好看了,是吧?考虑到上下文,我们可以很轻松地找到,这个方法在 `activesupport/lib/active_support/core_ext/string/inflections.rb` 的第 205 行的 `classify` 方法,它看起来像这样,是不是很容易?

```
# Creates a class name from a plural table name like Rails does for table names to models.
def classify
  ...
end
```

尽管我们找到的这个方法是在 `String` 上的一个常见的调用,它调用了 `ActiveSupport::Inflector` 上的另一个同名的方法。根据之前的 `git grep` 的结果,我们可以很轻松地发现结果的第二行, `activesupport/lib/active_support/inflector/methods.rb` 在 186 行上。我们正在寻找的方法是这样的:

```
# Creates a class name from a plural table name like Rails does for table
def classify(table_name)
  ...
end
```

酷!考虑到 Rails 仓库的大小,我们借助 `git grep` 找到它,用时都没有超过 30 秒。

### 那么,最后的变更是什么?

现在,我们已经找到了所要找的方法,现在,我们需要搞清楚这个文件所经历的变更。由于我们已经知道了正确的文件名和行数,我们可以使用 `git blame`。这个命令展示了一个文件中每一行的最后修订者和修订的内容。我们来看一下这个文件最后的修订都做了什么:

```
git blame activesupport/lib/active_support/inflector/methods.rb
```

虽然我们得到了这个文件每一行的最后的变更,但是,我们更感兴趣的是对特定方法(176 到 189 行)的最后变更。让我们在 `git blame` 命令上增加一个选项,让它只显示那些行的变化。此外,我们将在命令上增加一个 `-s` (忽略)选项,去跳过那一行变更时的作者名字和修订(提交)的时间戳:

```
git blame -L 176,189 -s activesupport/lib/active_support/inflector/methods.rb
```

找到相应的提交之后,可以用 `git show` 查看它的完整内容:

```
git show 5bb1d4d2
```

你亲自做实验了吗?如果没有做,我直接告诉你结果,这个令人惊叹的 [提交][3] 是由 [Schneems][4] 完成的,他通过使用 frozen 字符串做了一个非常有趣的性能优化,这在我们当前的场景中是非常有意义的。但是,由于我们在这个假设的调试会话中,这样做并不能告诉我们当前问题所在。因此,我们怎么样才能够通过研究来发现,我们选定的方法经过了哪些变更?

### 搜索日志

现在,我们回到 `git` 日志,现在的问题是,怎么能够看到 `classify` 方法经历了哪些修订?

`git log` 命令非常强大,因此它提供了非常多的列表选项。我们尝试使用 `-p` 选项去看一下保存了这个文件的 `git` 日志内容,这个选项的意思是在 `git` 日志中显示这个文件的完整补丁:

```
git log -p activesupport/lib/active_support/inflector/methods.rb
```

这将显示出一个很长的补丁列表,内容太多,我们根本看不过来。我们需要把关注点放在我们感兴趣的那些行上:

```
git log -L 176,189:activesupport/lib/active_support/inflector/methods.rb
```

`git log` 命令接受 `-L` 选项,它用一个行的范围和文件名做为参数。它的格式可能有点奇怪,格式解释如下:

```
git log -L <start-line>,<end-line>:<path-to-file>
```
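
顺带一提,较新版本的 `git` 还支持用函数名代替行号范围的写法:`git log -L :<funcname>:<path-to-file>`。比如下面这个假设的用法,`git` 会尝试自动识别出 `classify` 函数的范围:

```
git log -L :classify:activesupport/lib/active_support/inflector/methods.rb
```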

当我们运行这个命令之后,我们可以看到对这些行的一个修订列表,它将带我们找到创建这个方法的第一个修订(节选):

```
commit 51xd6bb829c418c5fbf75de1dfbb177233b1b154
```

现在,我们再来看一下 —— 它是在 2011 年提交的。`git` 可以让我们重回到这个时间。这是一个很好的例子,它充分说明了足够的提交信息对于重新了解当时的上下文环境是多么的重要,因为从这个提交信息中,我们并不能获得足够的信息来重新理解当时的创建这个方法的上下文环境,但是,话说回来,你**不应该**对此感到恼怒,因为,你看到的这些项目,它们的作者都是无偿提供他们的工作时间和精力来做开源工作的。(向开源项目贡献者致敬!)

回到我们的正题,我们并不能确认 `classify` 方法最初实现是怎么回事,考虑到这个第一次的提交只是一个重构。现在,如果你认为,“或许、有可能、这个方法不在 176 行到 189 行的范围之内,那么就你应该在这个文件中扩大搜索范围”,这样想是对的。我们看到在它的修订提交的信息中提到了“重构”这个词,它意味着这个方法可能在那个文件中是真实存在的,而且是在重构之后它才存在于那个行的范围内。

但是,我们如何去确认这一点呢?不管你信不信,`git` 可以再次帮助你。`git log` 命令有一个 `-S` 选项,它可以传递一个特定的字符串作为参数,然后去查找代码变更(添加或者删除)。也就是说,如果我们执行 `git log -S classify` 这样的命令,我们可以看到所有包含 `classify` 字符串的变更行的提交。

如果你在 Rails 仓库上运行上述命令,首先你会发现这个命令运行有点慢。但是,你应该会发现 `git` 实际上解析了在那个仓库中的所有修订来匹配这个字符串,其实它的运行速度是非常快的。在你的指尖下 `git` 再次展示了它的强大之处。因此,如果去找关于 `classify` 方法的第一个修订,我们可以运行如下的命令(输出节选):

```
git log -S 'def classify'

commit db045dbbf60b53dbe013ef25554fd013baf88134
Date: Wed Nov 24 01:04:44 2004 +0000

    git-svn-id: http://svn-commit.rubyonrails.org/rails/trunk@4 5ecf4fe2-1ee6-0310-87b1-e25e094e27de
```

很酷!是吧?它初次被提交到 Rails,是由 DHH 在一个 `svn` 仓库上做的!这意味着 `classify` 大概在一开始就被提交到了 Rails 仓库。现在,我们去看一下这个提交的所有变更信息,我们运行如下的命令:

```
git show db045dbbf60b53dbe013ef25554fd013baf88134
```

### 下次见

当然,我们并没有真的去修改任何 bug,因为我们只是去尝试使用一些 `git` 命令,来演示如何查看 `classify` 方法的演变历史。但是不管怎样,`git` 是一个非常强大的工具,我们必须学好它、用好它。我希望这篇文章可以帮助你掌握更多的关于如何使用 `git` 的知识。

你喜欢这些内容吗?

--------------------------------------------------------------------------------

via: https://ieftimov.com/learn-your-tools-navigating-git-history

作者:[Ilija Eftimov][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
关于处理器你所需要知道的一切
============

[![][b]][b]

我们的手机、主机以及笔记本电脑这样的数字设备已经变得如此成熟,以至于它们进化成为我们的一部分,而不只是一种设备。

在应用和软件的帮助下,处理器执行许多任务。我们是否曾经想过是什么给了这些软件这样的能力?它们是如何执行它们的逻辑的?它们的大脑在哪?

我们知道 CPU (或称处理器)是那些需要处理数据和执行逻辑任务的设备的大脑。

[![cpu image][1]][1]

在处理器的深处有哪些不一样的概念呢?它们是如何演化的?一些处理器是如何做到比其它处理器更快的?让我们来看看关于处理器的主要术语,以及它们是如何影响处理器速度的。

### 架构

处理器有不同的架构,你一定遇到过不同类型的程序说它们是 64 位或 32 位的,这其中的意思就是程序支持特定的处理器架构。

如果一颗处理器是 32 位的架构,这意味着这颗处理器能够在一个处理周期内处理一个 32 位的数据。

同理可得,64 位的处理器能够在一个周期内处理一个 64 位的数据。

同时,你可以使用的内存大小取决于处理器的架构,你可以使用的内存总量为 2 的处理器架构位数次幂(如:`2^64` 字节)。

16 位架构的处理器,仅仅能使用 64 KB 的内存。32 位架构的处理器,最大可使用的 RAM 是 4 GB,64 位架构的处理器的可用内存是 16 EB。
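
在 Linux 系统上,可以用下面的命令快速确认系统和处理器的位数(这只是一个简单的演示,输出因机器而异):

```
# 查看当前系统的字长(32 或 64)
$ getconf LONG_BIT
64

# 查看处理器架构及其支持的运行模式
$ lscpu | grep -E 'Architecture|CPU op-mode'
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
```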

### 核心

在电脑上,核心是基本的处理单元。核心接收指令并且执行它。越多的核心带来越快的速度。把核心比作工厂里的工人,越多的工人使工作能够越快地完成。另一方面,工人越多,你所付出的薪水也就越多,工厂也会越拥挤;相对于核心来说,越多的核心消耗更多的能量,比核心少的 CPU 更容易发热。

### 时钟速度

[![CPU CLOCK SPEED][2]][2]

GHz 是 GigaHertz 的简写,Giga 意思是 10 亿,Hertz(赫兹)意思是一秒有几个周期,2 GHz 的处理器意味着处理器一秒能够执行 20 亿个周期。

它也以“频率”或者“时钟速度”而为人熟知。这项数值越高,CPU 的性能越好。

### CPU 缓存

CPU 缓存是处理器内部的一块小的存储单元,用来暂存数据。无论何时,当我们需要执行一些任务时,数据都要从内存传递到 CPU。CPU 的工作速度远快于内存,CPU 在大多数时间是在等待从内存传递过来的数据,而此时 CPU 是处于空闲状态的。为了解决这个问题,内存持续地向 CPU 缓存发送数据。

一般的处理器会有 2 ~ 3 MB 的 CPU 缓存。高端的处理器会有 6 MB 的 CPU 缓存,缓存越大,意味着处理器越好。
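
这些参数同样可以在 Linux 下查看。下面是一个示意性的例子(字段名和数值以你机器上 `lscpu` 的实际输出为准):

```
# 查看核心数、主频和各级缓存大小
$ lscpu | grep -E 'CPU\(s\)|MHz|cache'
CPU(s):              4
CPU MHz:             2400.000
L1d cache:           32K
L2 cache:            256K
L3 cache:            6144K
```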

### 光刻工艺

光刻尺寸指的是处理器上晶体管的制造尺寸,通常以纳米为单位,尺寸越小意味着芯片越紧凑。这可以让你在更小的面积上放入更多的核心,能量消耗也更小。

最新的 Intel 处理器采用 14 nm 的光刻工艺。

### 热功耗设计(TDP)

热功耗设计代表着平均功耗,单位是瓦特,是在全核心激活以基础频率来处理 Intel 定义的高复杂度的负载时,处理器所散失的功耗。

所以,热功耗设计越低对你越好。更低的热功耗设计不仅可以更好地利用能量,而且产生的热量更少。

[![battery][3]][3]

桌面版的处理器通常消耗更多的能量,它的热功耗常常在 40 瓦以上,相对应的移动版本的功耗只有不到桌面版本的 1/3。

### 内存支持

我们已经提到了处理器的架构是如何影响到我们能够使用的内存总量的,但这只是理论上而已。在实际的应用中,我们所能够使用的内存总量通常是由处理器的规格具体规定的。

[![RAM][4]][4]

处理器规格中还指出了所支持内存的 DDR 版本号。

### 超频

前面我们讲过时钟频率,超频就是强迫 CPU 执行比额定值更多的周期。游戏玩家经常会使他们的处理器超频,以此来获得更好的性能。这样确实会增加速度,但也会增加消耗的能量,产生更多的热量。

一些高端的处理器允许超频,如果我们想让一个不支持超频的处理器超频,我们需要在主板上安装一个新的 BIOS。这样通常会成功,但这种做法是不安全的,也是不建议的。

### 超线程(HT)

如果不能添加核心以满足特定的处理需要,那么超线程是建立一个虚拟核心的方式。

如果一个双核处理器有超线程,那么这个双核处理器就有两个物理核心和两个虚拟核心,从技术上讲,一个双核处理器拥有四个核心。

### 结论

处理器有许多相关的参数,它们是数字设备最重要的部分。我们在选择设备时,应该仔细地核对处理器的上面提到的这些参数。

时钟速度、核心数、CPU 缓存以及架构是最重要的参数。光刻尺寸以及热功耗设计的重要性则差一些。

仍然有疑惑? 欢迎评论,我会尽快回复的。

--------------------------------------------------------------------------------

via: http://www.theitstuff.com/processors-everything-need-know

作者:[Rishabh Kandari][a]
译者:[singledo](https://github.com/singledo)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.theitstuff.com/author/reevkandari
[b]:http://www.theitstuff.com/wp-content/uploads/2017/10/processors-all-you-need-to-know.jpg
[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/download.jpg
[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/download-1.jpg
[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/download-2.jpg
[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/images.jpg
[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/processors-all-you-need-to-know.jpg

Torrents(种子):你需要知道的一切事情
======

![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/torrenting-how-torrent-works_orig.jpg)

**Torrents(种子)** — 每次听到这个词时,在我的脑海里想到的唯一的事情就是免费的电影、游戏、和被破解的软件。但是我们并不知道它们是如何工作的,在“种子”中涉及到各种概念。因此,通过这篇文章我们从技术的角度来了解**种子下载**是什么。

### “种子”是什么?

“种子”是一个到因特网上文件位置的链接。它们不是一个文件,它们仅仅是动态指向到你想去下载的原始文件上。

例如:如果你点击 [Google Chrome][1],你可以从谷歌的服务器上下载 Google Chrome 浏览器。

如果你明天、或者下周、或者下个月再去点击那个链接,这个文件仍然可以从谷歌服务器上去下载。

但是当我们使用“种子”下载时,它并没有固定的服务器。文件是从以前使用“种子”下载的其它人的个人电脑上下载的。

### Torrents 是如何工作的?

[![Peer to peer network](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/torrent_orig.png)][2]

假设 ‘A’ 上有一些视频,他希望以“种子”方式去分享。因此,他创建了一个“种子”,并将这个链接发送给 ‘B’,这个链接包含了那个视频在因特网上的准确 IP 地址的信息。因此,当 ‘B’ 开始下载那个文件的时候,‘B’ 连接到 ‘A’ 的计算机。在 ‘B’ 下载完成这个视频之后,‘B’ 将开始做为种子,也就是 ‘B’ 将允许其它的 ‘C’ 或者 ‘D’ 从 ‘B’ 的计算机上下载它。

因此每个人先下载文件然后会上传,下载的人越多,下载的速度也越快。并且在任何情况下,如果想停止上传,也没有问题,随时可以。这样做并不会成为什么问题,除非很多的人下载而上传的人很少。

[![qbittorrent software for linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/peers_orig.png)][4]

所有的“种子”文件都独立分割成固定大小的数据包,因此,它们可以以非线性顺序和随机顺序下载。每个块都有唯一的标识,因此,一旦所有的块下载完成之后,它们会被拼接出原始文件。

正是因为这种机制,如果你正在从某人处下载一个文件,假如这个时候因某些原因他停止了上传,你可以继续从其它的播种者处继续下载,而不需要从头开始重新下载。

### 最佳实践

当你下载一个“种子”时,总是选择播种者最多的那一个。这就是最佳经验。

这里并没有最小的标准,但是只要确保你选择的是播种者最多的那一个就可以了。
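
在命令行下也可以体验“种子”下载。下面以 aria2 为例(假设系统中已经安装了 `aria2`,文件名仅为演示用的假设值):

```
# 下载种子文件所描述的内容,完成后继续做种,直到分享率达到 1.0
$ aria2c --seed-ratio=1.0 ubuntu-22.04-desktop-amd64.iso.torrent
```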

### “种子”相关的法律

[![piracy is illegal](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/torrent-laws_orig.png)][5]

“种子”相关的法律和其它的法律并没有什么区别,对受版权保护的其它任何东西一样,侵权行为会受到法律的制裁。大多数的政府都拦截“种子”站点和协议,但是“种子”下载本身并不是有害的东西。

“种子”对快速分享文件是非常有用的,并且它们被用来共享开源社区的软件,因为它们能节约大量的服务器资源。但是,许多人却因为盗版而使用它们。

### 结束语

Torrenting 是降低服务器上负载的一个非常完美的技术。“种子”下载可以使我们将下载速度提升到网卡的极限,这是非常好的。但是,在这种非中心化的服务器上,盗版成为一种必然发生的事。限制我们分享的内容,从不去下载盗版的东西,这是我们的道德责任。

请在下面的评论中分享你使用“种子”的心得,分享你喜欢的、法律许可下载的“种子”网站。

--------------------------------------------------------------------------------

via: http://www.linuxandubuntu.com/home/torrents-everything-you-need-to-know

作者:[LINUXANDUBUNTU][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
Opensource.com 的 2017 年最佳开源教程
|
||||
======
|
||||
|
||||
2017 年,Opensource.com 发布了一系列用于帮助从初学者到专家的教程。让我们看看哪些最好。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G)
|
||||
|
||||
精心编写的教程对于任何软件的官方文档来说都是一个很好的补充。如果官方文件写得不好,不完整或根本没有,那么这些教程也可以是个有效的替代品。
|
||||
|
||||
2017 年,Opensource.com 发布一些有关各种主题的优秀教程。这些教程不只是针对专家们的,它们是针对各种技能水平和经验的用户的。
|
||||
|
||||
让我们来看看其中最好的教程。
|
||||
|
||||
### 关于代码
|
||||
|
||||
对许多人来说,他们第一次涉足开源是为一个项目或另一个项目贡献代码。你在哪里学习编码或编程的?以下两篇文章是很好的起点。
|
||||
|
||||
严格来说,VM Brasseur 的[如何开始学习编程][1]是新手程序员的一个很好的起点,而不是一个教程。它不仅指出了一些有助于你开始学习的优秀资源,而且还提供了了解你的学习方式和如何选择语言的重要建议。
|
||||
|
||||
如果您已经在一个 [IDE][2] 或文本编辑器中敲击了几个小时,那么您可能需要学习更多关于编码的不同方法。Fraser Tweedale 的[函数式编程简介][3]很好地介绍了可以应用到许多广泛使用的编程语言的范式。
|
||||
|
||||
### 踏足 Linux
|
||||
|
||||
Linux 是开源的典范。它运行了大量的 Web 站点,为世界顶级的超级计算机提供了动力。它让任何人都可以替代台式机上的专有操作系统。
|
||||
|
||||
如果你有兴趣深入 Linux,这里有三个教程供你参考。
|
||||
|
||||
Jason Baker 告诉你[设置 Linux $PATH 变量][4]。他引导你掌握这一“任何 Linux 初学者的重要技巧”,使您能够告知系统包含了程序和脚本的目录。
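下面给出一个最小的示意(假设你想把自己的 `~/bin` 目录加入命令搜索路径;具体做法请以原教程为准):

```
export PATH="$PATH:$HOME/bin"
```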
感谢 David Both 的《[建立一个 DNS 域名服务器][5]》指南。他详细地记录了如何设置和运行服务器,包括要编辑哪些配置文件以及如何编辑它们。

想在你的电脑上更复古一点吗?Jim Hall 会告诉你如何使用 [FreeDOS][7] 和 [QEMU][8] [在 Linux 下运行 DOS 程序][6]。Hall 的文章着重于运行 DOS 生产力工具,但也并不全是严肃的内容——他还谈到了运行他最喜欢的 DOS 游戏。

### 三“片”树莓派

廉价的单板计算机让硬件再次变得有趣,这已经不是什么秘密了。不仅如此,它们也让更多的人更容易上手,无论他们的年龄或技术水平如何。

其中,[树莓派][9]可能是使用最广泛的单板计算机。Ben Nuttall 带我们一起[在树莓派上安装和设置 Postgres 数据库][10]。这样,你就可以在任何想要的项目中使用它了。

如果你的品味同时涵盖文学和技术,你可能会对 Don Watkins 的《[如何将树莓派变成电子书服务器][11]》感兴趣。稍微付出一点努力,再加上一份 [Calibre 电子书管理软件][12]的副本,你就可以随时随地读到你最喜欢的电子书。

树莓派并不是唯一的选择。还有 [Orange Pi PC Plus][13],一种开源的单板计算机。David Egts 会告诉你[如何开始使用这个可编程的迷你电脑][14]。

### 日常的计算机使用

开源并不只是面向技术专家,更多的普通人用它来做日常工作,而且更有效率。这里有三篇文章,可以让我们这些笨手笨脚的人(你可能不是)优雅地做任何事情。

当你想到微博客的时候,你可能会想到 Twitter。但是 Twitter 的问题很多。[Mastodon][15] 是 Twitter 的开放的替代方案,它在 2016 年首次亮相。从那时起,Mastodon 已经获得了相当大的用户群。Seth Kenlon 会说明[如何加入和使用 Mastodon][16],甚至告诉你如何在 Mastodon 和 Twitter 间交替使用。

你需要一点帮助来管理开支吗?你所需要的只是一个电子表格和正确的模板。我的文章《[要控制你的财务状况][17]》向你展示了如何用 [LibreOffice Calc][18](或任何其他电子表格编辑器)创建一个简单而有吸引力的财务跟踪表。

ImageMagick 是一个强大的图形处理工具,但是很多人并不经常使用它,这意味着他们在最需要的时候会忘记那些命令。如果你也是这样,Greg Pittman 的《[ImageMagick 入门教程][19]》能在你需要帮助的时候派上用场。

你有最喜欢的 2017 年 Opensource.com 教程吗?请随意留言,与社区分享。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/best-tutorials

作者:[Scott Nesbitt][a]
译者:[zjon](https://github.com/zjon)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/scottnesbitt
[1]:https://linux.cn/article-8694-1.html
[2]:https://en.wikipedia.org/wiki/Integrated_development_environment
[3]:https://linux.cn/article-8869-1.html
[4]:https://opensource.com/article/17/6/set-path-linux
[5]:https://opensource.com/article/17/4/build-your-own-name-server
[6]:https://linux.cn/article-9014-1.html
[7]:http://www.freedos.org/
[8]:https://www.qemu.org
[9]:https://en.wikipedia.org/wiki/Raspberry_Pi
[10]:https://linux.cn/article-9056-1.html
[11]:https://linux.cn/article-8684-1.html
[12]:https://calibre-ebook.com/
[13]:http://www.orangepi.org/
[14]:https://linux.cn/article-8308-1.html
[15]:https://joinmastodon.org/
[16]:https://opensource.com/article/17/4/guide-to-mastodon
[17]:https://linux.cn/article-8831-1.html
[18]:https://www.libreoffice.org/discover/calc/
[19]:https://linux.cn/article-8851-1.html
如何使用 cloud-init 来预配置 LXD 容器
======

当你在创建 LXD 容器的时候,你会希望它们能被预先配置好。例如在容器一启动就自动执行 `apt update` 来安装一些软件包,或者运行一些命令。

这篇文章将讲述如何用 [cloud-init][1] 来对 [LXD 容器进行早期初始化][2]。

接下来,我们将创建一个包含 cloud-init 指令的 LXD profile,然后启动一个新的容器来使用这个 profile。

### 如何创建一个新的 LXD profile

查看已经存在的 profile:

```shell
$ lxc profile list
+---------|---------+
```

我们把名叫 `default` 的 profile 复制一份,然后在其中添加新的指令:

```shell
$ lxc profile copy default devprofile
+------------|---------+
```

我们就得到了一个新的 profile:`devprofile`。下面是它的详情:

```yaml
$ lxc profile show devprofile
name: devprofile
used_by: []
```

注意这几个部分:`config:`、`description:`、`devices:`、`name:` 和 `used_by:`。当你修改这些内容的时候,注意不要搞错缩进。(LCTT 译注:因为这些内容是 YAML 格式的,缩进是语法的一部分。)

### 如何把 cloud-init 添加到 LXD profile 里

[cloud-init][1] 指令可以添加到 LXD profile 的 `config` 里。当这些指令被传递给容器后,会在容器第一次启动的时候执行。

下面是用在示例中的指令:

```yaml
  - [touch, /tmp/simos_was_here]
```

`package_upgrade: true` 是指当容器第一次被启动时,我们想要 `cloud-init` 运行 `sudo apt upgrade`。`packages:` 列出了我们想要自动安装的软件。然后我们设置了 `locale` 和 `timezone`。在 Ubuntu 容器的镜像里,root 用户默认的 locale 是 `C.UTF-8`,而 `ubuntu` 用户则是 `en_US.UTF-8`。此外,我们把时区设置为 `Etc/UTC`。最后,我们展示了[如何使用 runcmd 来运行一个 Unix 命令][3]。

我们需要关注的是如何将 cloud-init 指令插入 LXD profile。

我首选的方法是:
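下面是一种常见做法的示意(假设使用 `lxc profile edit`,它会在默认的文本编辑器中打开该 profile,供你把上面的 cloud-init 指令粘贴进 `config` 小节;具体步骤以原文为准):

```shell
$ lxc profile edit devprofile
```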
### 如何使用 LXD profile 启动一个容器

使用 profile `devprofile` 来启动一个新容器:

```
$ lxc launch --profile devprofile ubuntu:x mydev
```

然后访问该容器来查看我们的指令是否生效:

```shell
$ lxc exec mydev bash
root@mydev:~# ps ax
root@mydev:~#
```

如果我们连接得够快,通过 `ps ax` 将能够看到系统正在更新软件。我们可以从 `/var/log/cloud-init-output.log` 看到完整的日志:

```
Generating locales (this might take a while)...
Generation complete.
```

以上可以看出 locale 已经被更改了。root 用户还是保持默认的 `C.UTF-8`,只有非 root 用户 `ubuntu` 使用了新的 locale 设置。

```
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
```

以上是安装软件包之前执行的 `apt update`。

```
The following packages will be upgraded:
4 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 211 kB of archives.
```

以上是在执行 `package_upgrade: true` 和安装软件包。

```
The following NEW packages will be installed:
  binutils build-essential cpp cpp-5 dpkg-dev fakeroot g++ g++-5 gcc gcc-5
  libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl
```

以上是我们安装 `build-essential` 软件包的指令。

`runcmd` 执行的结果如何?

```
root@mydev:~# ls -l /tmp/
```

### 结论

当我们启动 LXD 容器的时候,我们常常需要默认启用一些配置,并且希望能够避免重复工作。通常解决这个问题的方法是创建 LXD profile,然后把需要的配置添加进去。最后,当我们启动新的容器时,只需要应用该 LXD profile 即可。
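顺带一提,profile 并不只能在 `lxc launch` 时使用。下面的示意(假设已有一个名为 `mydev2` 的容器)展示了如何把这个 profile 附加到现有容器上;注意其中的 cloud-init 指令只会在容器第一次启动时生效:

```shell
$ lxc profile add mydev2 devprofile
```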
--------------------------------------------------------------------------------

via: https://blog.simos.info/how-to-preconfigure-lxd-containers-with-cloud-init/

作者:[Simos Xenitellis][a]
译者:[kaneg](https://github.com/kaneg)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
Intel 设计缺陷背后的原因是什么?
============================================================

> 我们知道有问题,但是并不知道问题的详细情况。

![](https://cdn.arstechnica.net/wp-content/uploads/2015/06/intel-48-core-larrabee-probably-640x427.jpg)

(本文发表于 2018 年 1 月)最近 Windows 和 Linux 都推送了重大安全更新,为防范这个尚未完全公开的问题,在最坏的情况下,它可能会导致性能下降多达一半。

在过去的几周,Linux 内核陆续打了几个补丁。Microsoft [自 11 月份开始也内部测试了 Windows 更新][3],并且它预计在下周二的例行补丁中将这个改进推送到主流 Windows 构建版中。Microsoft 的 Azure 也在下周的维护窗口中做好了安排,而 Amazon 的 AWS 也安排在周五对相关的设施进行维护。

自从 Linux 的第一个补丁(参见 [KPTI:内核页表隔离的当前的发展][4])出现之后,这个缺陷的轮廓就开始逐渐清晰起来。虽然 Linux 和 Windows 基于不同的考虑,对此持有不同的看法,但是这两个操作系统——当然还有其它的 x86 操作系统,比如 FreeBSD 和 [macOS][5]——对系统内存的处理采用了相同的方式,因为操作系统的这一部分特性与底层的处理器是高度耦合的。

### 保持地址跟踪

系统中的每个内存字节都由一个编号隐含地索引着,这些编号就是每个字节的地址。早期的操作系统使用物理内存地址,但是,物理内存地址由于各种原因并不很合适。例如,地址中经常会有空隙,并且(尤其是在 32 位的系统上)物理地址很难操作,可能需要 36 位甚至更多位的数字。

因此,现代操作系统完全依赖一个叫虚拟内存的概念。虚拟内存系统允许程序和内核在一个简单、清晰、统一的环境中各自去操作。每个程序和内核自身都使用虚拟地址去访问内存,而不是使用带有空隙和其它奇怪之处的物理内存。这些虚拟地址是连续的——不用担心有空隙——并且大小也更便于操作。32 位的程序仅可以看到 32 位的地址,而不用管物理地址是 36 位还是更多位。

虽然虚拟地址对每个软件几乎是透明的,但是,处理器最终还是需要知道虚拟地址引用的物理地址是哪个。因此,有一个虚拟地址到物理地址的映射,它保存在一个被称为页表的数据结构中。操作系统按照处理器决定的布局构建页表,处理器和操作系统在虚拟地址和物理地址之间进行转换时都需要用到页表。

这个映射过程是非常重要的,它也是现代操作系统和处理器的重要基础。处理器有专用的缓存——<ruby>转换后备缓冲器<rt>Translation Lookaside Buffer</rt></ruby>(简称 TLB)——它保存了一定数量的虚拟地址到物理地址的映射,这样就不需要每次都完整地查询页表。

虚拟内存的使用为我们提供了许多除简单寻址之外的有用特性。其中最主要的是,每个程序都有自己独立的一组虚拟地址,以及它自己的一组虚拟地址到物理地址的映射。这就是用于提供“内存保护”的关键技术:一个程序不能破坏或者篡改其它程序使用的内存,因为其它程序的内存并不在它的地址映射范围之内。

由于每个进程使用一个单独的映射,因此每个程序也就有一个额外的页表,这就使得 TLB 缓存很拥挤。TLB 并不大——一般情况下总共可以容纳几百个映射——而系统使用的页表越多,TLB 能够包含的任何特定的虚拟地址到物理地址的映射就越少。

### 一半一半

为了更好地使用 TLB,每个主流的操作系统都将虚拟地址范围一分为二。一半用于程序;另一半用于内核。当进程切换时,仅有一半的页表条目发生变化——仅属于程序的那一半。内核的那一半是每个程序公用的(因为只有一个内核),因此它可以为每个进程使用相同的页表映射。这对 TLB 的帮助非常大;虽然它仍然会丢弃属于进程的那一半内存地址映射,但是它还保持着另一半属于内核的映射。

这种设计并不是一成不变的。在 Linux 上曾有一项工作,使它可以为一个 32 位的进程提供整个地址范围,而不用在内核页表和每个进程之间共享。虽然这样为程序提供了更多的地址空间,但这是以牺牲性能为代价的,因为每次内核代码需要运行时,TLB 都要重新加载内核的页表条目。因此,这种方法并没有广泛应用到 x86 的系统上。

在内核和每个程序之间分割虚拟地址的这种做法的一个负面影响是,内存保护被削弱了。如果内核有它自己的一组页表和虚拟地址,它将在不同的程序之间得到同样的保护:内核内存将简单地不可见。但是使用地址分割之后,用户程序和内核使用了相同的地址范围,并且从原理上来说,一个用户程序有可能去读写内核内存。

为避免这种明显不好的情况,处理器和虚拟地址系统有一个 “Ring” 或者“模式”的概念。x86 处理器有许多 Ring,但是对于这个问题,仅有两个是相关的:“user”(Ring 3)和 “supervisor”(Ring 0)。当运行普通的用户程序时,处理器将置为用户模式(Ring 3)。当运行内核代码时,处理器将处于 Ring 0 —— supervisor 模式,也称为内核模式。

这些 Ring 也用于保护内核内存不被用户程序访问。页表并不仅仅有虚拟地址到物理地址的映射;它也包含关于这些地址的元数据,包括哪个 Ring 可以访问哪个地址的信息。内核页表条目被标记为仅有 Ring 0 可以访问;程序的条目被标记为任何 Ring 都可以访问。如果一个处于 Ring 3 中的进程尝试访问标记为 Ring 0 的内存,处理器将阻止这个访问并产生一个异常。运行在 Ring 3 中的用户程序不能得到内核以及运行在 Ring 0 的内存中的任何东西。

至少理论上是这样的。大量的补丁和更新表明,这个地方已经被突破了。这就是最大的谜团所在。

### Ring 间迁移

这就是我们所知道的。每个现代处理器都执行一定数量的推测执行。例如,给出一些指令,让两个数相加,然后将结果保存在内存中,在查明内存中的目标位置是否可访问和可写入之前,处理器可能已经推测性地做了加法。在常见的情况下,即目标位置是可写入的,处理器就节省了一些时间,因为它以并行的方式计算出了内存中的目标是什么。如果它发现目标位置不可写入——例如,一个程序尝试写入到一个没有映射的地址或压根就不存在的物理位置——那么它将产生一个异常,而推测执行就白做了。

具体来说,Intel 的处理器([而 AMD 的并不会][6])允许 Ring 3 的代码对写入 Ring 0 内存的操作进行推测执行。处理器并不会完全放行这种写入,但是推测执行会轻微扰乱处理器状态,因为,为了查明目标位置是否可写入,某些数据已经被加载到缓存和 TLB 中。这又意味着一些操作可能快几个周期,或者慢几个周期,这取决于它们所需要的数据是否仍然在缓存中。除此之外,Intel 的处理器还有一些特殊的功能,比如,在 Skylake 处理器上引入的软件保护扩展(SGX)指令,它稍微改变了访问内存的方式。同样的,处理器仍然保护 Ring 0 的内存不被来自 Ring 3 的程序所访问,但是同样的,它的缓存和其它内部状态已经发生了变化,产生了可测量的差异。

我们至今仍然不知道具体的情况:到底有多少内核的内存信息泄露给了用户程序,或者信息泄露的情况有多容易发生?以及有哪些 Intel 处理器会受到影响?这些也还不完全清楚,但是,有迹象表明每个使用了推测执行的 Intel 芯片(自 1995 年 Pentium Pro 以来的所有主流处理器?)都可能会因此而泄露信息。

这个问题第一次被披露是由来自[奥地利的格拉茨技术大学][7]的研究者。他们披露的信息表明这个问题已经足以破坏内核模式地址空间布局随机化(内核 ASLR,或称 KASLR)。ASLR 是防范[缓冲区溢出][8]漏洞利用的最后一道防线。启用 ASLR 之后,程序和它们的数据被置于随机的内存地址中,这将使一些安全漏洞的利用更加困难。KASLR 将这种随机化应用到内核中,这样就使内核的数据(包括页表)和代码也随机化分布。

格拉茨的研究者开发了 [KAISER][9],一组防范这个问题的 Linux 内核补丁。

如果这个问题只是使 ASLR 的随机化被破坏了,这或许还不是一个巨大的灾难。ASLR 是一个非常强大的保护措施,但是人们知道它并不完美,这意味着它对于黑客来说只是一个很大的障碍,而不是一个无法逾越的障碍。整个行业对此的反应——Windows 和 Linux 都做出了一个非常重要的、秘密开发的变更——表明不仅是 ASLR 被破坏了,而且从内核泄露信息的更普遍的技术也被开发出来了。确实是这样的,研究者已经[在 Twitter 上发布信息][10],说他们已经可以随意泄露和读取内核数据了。另一种可能是,漏洞可能被用于从虚拟机中“越狱”,并可能会危及虚拟机管理程序(hypervisor)。

Windows 和 Linux 选择的解决方案是非常相似的,在功能上也与 KAISER 类似:将内核页表的条目分开,不再由每个进程共享。在 Linux 中,这被称为<ruby>内核页表隔离<rt>Kernel Page Table Isolation</rt></ruby>(KPTI)。
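打上补丁之后,可以粗略地检查一下内核是否启用了这种隔离。下面是一个示意(假设使用的是已经包含 KPTI 及相应报告接口的较新内核;具体输出因系统而异):

```
# 内核启动日志中会有一行提到页表隔离
$ dmesg | grep -i 'page table'
[    0.000000] Kernel/User page tables isolation: enabled

# 较新的内核还会通过 sysfs 报告 Meltdown 的缓解状态
$ cat /sys/devices/system/cpu/vulnerabilities/meltdown
Mitigation: PTI
```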
应用补丁后,内存地址仍然被一分为二:只是内核的那一半几乎是空的。当然它并不是真正的空,因为不论处理器运行在 Ring 3 还是 Ring 0 中,一些内核片断都需要被永久映射,但它几乎是空的。这意味着如果恶意用户程序尝试去探测内核内存以及泄露信息,它将会失败——因为那里几乎没有信息。而真正的内核页表只有在内核自身运行的时刻才会被使用。

这样做就破坏了最初将地址空间分割的理由。现在,每次切换到用户程序时,TLB 需要实时清除与内核页表相关的所有条目,这样就失去了启用分割带来的性能提升。

影响的具体大小取决于工作负载。每当一个程序调用内核——从磁盘读入、发送数据到网络、打开一个文件等等——这种调用的成本都会增加一点点,因为它强制 TLB 清除了缓存并实时加载内核页表。不使用内核的程序可能会观测到 2 - 3 个百分点的性能影响——这里仍然有一些开销,因为内核仍然偶尔会运行,去处理比如多任务之类的事情。

但是大量调用内核的工作负载将会观测到很大的性能损失。在一个基准测试中,一个除了调用内核之外什么都不做的程序,观察到[它的性能下降大约为 50%][11];换句话说就是,打补丁后每次对内核的调用的时间要比不打补丁时增加一倍。使用了 Linux 网络回环(loopback)的基准测试也观测到了很大的影响,比如,在 Postgres 的基准测试中大约是 [17%][12]。使用真实网络的真实数据库负载观测到的影响可能要低一些,因为在真实网络中,内核调用的开销相对网络本身的开销来说只占一小部分。

虽然对 Intel 系统的影响是众所周知的,但是它们可能并不是唯一受影响的。其它的一些平台,比如 SPARC 和 IBM 的 S390,是不受这个问题影响的,因为它们的处理器的内存管理并不需要分割地址空间和共享内核页表;在这些平台上的操作系统一直就是将它们的内核页表与用户模式隔离开的。但是其它的,比如 ARM,可能就没有这么幸运了;[适用于 ARM Linux 的类似补丁][13]正在开发中。

---

[PETER BRIGHT][14] 是 Ars 的一位技术编辑。他报道微软、编程及软件开发、Web 技术和浏览器以及安全方面的内容。他居住在纽约的布鲁克林。

--------------------------------------------------------------------------------

via: https://arstechnica.com/gadgets/2018/01/whats-behind-the-intel-design-flaw-forcing-numerous-patches/

作者:[PETER BRIGHT][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
五个值得现在安装的火狐插件
======

> 合适的插件能大大增强你浏览器的功能,但仔细挑选插件很重要。本文有五个值得一看的插件。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/firefox_blue_lead.jpg)

对于很多用户来说,网页浏览器已经成为电脑使用体验的重要环节。现代浏览器已经发展成强大、可拓展的平台。作为平台的一部分,_插件_能添加或修改浏览器的功能。火狐插件的构建使用了 WebExtensions API,这是一个跨浏览器的开发系统。

你应该安装哪一个插件?一般而言,这个问题的答案取决于你如何使用你的浏览器、你对于隐私的看法、你信任插件开发者多少以及其他个人喜好。

首先,我想指出浏览器插件通常需要读取和(或者)修改你浏览的网页上的每项内容。你应该**非常**仔细地考虑这件事的后果。如果一个插件有修改所有你访问过的网页的权限,那么它可能记录你的按键、拦截信用卡信息、在线跟踪你、插入广告,以及进行其他各种各样邪恶的行为。

并不是每个插件都偷偷摸摸地做这些事,但是在你安装任何插件之前,你要慎重考虑插件安装来源、涉及的权限、你的风险数据和其他因素。记住,你可以从个人数据的角度来管理一个插件如何影响你的攻击面(LCTT 译注:攻击面是指入侵者能尝试获取或提取数据的途径总和)——例如使用特定的配置、不使用插件来完成例如网上银行的操作。

考虑到这一点,这里有五个你或许想要考虑的火狐插件。

### uBlock Origin

![ublock origin ad blocker screenshot][2]

*uBlock Origin 可以拦截广告和恶意网页,还允许用户定义自己的内容过滤器。*

[uBlock Origin][3] 是一款快速、内存占用低、适用范围广的拦截器,它不仅能屏蔽广告,还能让你执行你自己定制的内容过滤。uBlock Origin 默认使用多份预定义好的过滤名单来拦截广告、跟踪器和恶意网页。它允许你任意地添加列表和规则,甚至锁定在一个默认拒绝的模式。这个插件不仅强大,而且已被证明是高效、高性能的。

### Privacy Badger

![privacy badger ad blocker][5]

*Privacy Badger 运用了算法来无缝地屏蔽侵犯用户准则的广告和跟踪器。*

正如它名字所表明的,[Privacy Badger][6] 是一款专注于隐私的插件,它屏蔽广告和第三方跟踪器。EFF(LCTT 译注:EFF 全称是<ruby>电子前哨基金会<rt>Electronic Frontier Foundation</rt></ruby>,旨在宣传互联网版权和监督执法机构)说:“我们想要推荐一款能自动分析并屏蔽任何侵犯用户准则的跟踪器和广告的插件,而 Privacy Badger 正是为此而诞生;它不需要任何设置、知识或者用户的配置,就能运行得很好;它是由一个明显为用户服务而不是为广告主服务的组织出品的;它使用算法来确定正在跟踪什么,而没有跟踪什么。”

为什么 Privacy Badger 会出现在这个列表上,它和 uBlock Origin 如此相似?其中一个原因是 Privacy Badger 的工作原理和 uBlock Origin 根本不同。另一个原因是,纵深防御是一种值得遵循的合理策略。

### LastPass

![lastpass password manager screenshot][8]

*LastPass 是一款用户友好的密码管理插件,支持双因子认证。*

这个插件对于很多人来说是个有争议的补充。你是否应该使用密码管理器——如果你用了,你是否应该选择一个浏览器插件——这都是热议的话题,而答案取决于你的风险情况。我想说,大部分不关心安全的电脑用户都应该用一个,因为这比常见的做法——在每一处使用相同的弱密码——好太多了。

[LastPass][9] 对于用户很友好,支持双因子认证,而且相当安全。这家公司过去出过安全事故,但是都处理得当,而且资金充足。记住,使用密码管理器不是一个非此即彼的命题。很多用户选择使用密码管理器管理绝大部分密码,但同时保留一点复杂性,为例如银行这样重要的网站采用精心设计的密码和多因子认证。

### Xmarks Sync

### Awesome Screenshot Plus

[Awesome Screenshot Plus][11] 允许你很容易地捕获任意网页的全部或部分区域,也能添加注释、评论、模糊敏感信息等。你还能用一个可选的在线服务来分享图片。我发现这个工具在调试网页时截图、讨论设计和分享信息上很棒。你会发现自己使用它的次数比预期的要多。

![Awesome Screenshot Plus screenshot][13]

*Awesome Screenshot Plus 允许你容易地截下任何网页的部分或全部内容。*

我发现这五款插件很有用,于是把它们推荐给其他人。话虽如此,浏览器插件还有很多。我很想知道社区的各位用户正在使用并推荐哪些插件,请在评论中让我知道。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/top-5-firefox-extensions

作者:[Jeremy Garcia][a]
译者:[ypingcn](https://github.com/ypingcn)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jeremy-garcia
[2]: https://opensource.com/sites/default/files/ublock.png
[3]: https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/
[5]: https://opensource.com/sites/default/files/images/life-uploads/privacy_badger_1.0.1.png
[6]: https://www.eff.org/privacybadger
[8]: https://opensource.com/sites/default/files/images/life-uploads/lastpass4.jpg
[9]: https://addons.mozilla.org/en-US/firefox/addon/lastpass-password-manager/
[10]: https://addons.mozilla.org/en-US/firefox/addon/xmarks-sync/
[11]: https://addons.mozilla.org/en-US/firefox/addon/screenshot-capture-annotate/
[13]: https://opensource.com/sites/default/files/screenshot_from_2018-01-04_17-11-32.png
为初学者介绍 Linux whereis 命令(5 个例子)
======

有时,在使用命令行的时候,我们需要快速找到某一个命令的二进制文件所在的位置。这种情况下可以选择 [find][1] 命令,但使用它会耗费时间,也可能会出现意料之外的情况。有一个专门为这种情况设计的命令:`whereis`。

在这篇文章里,我们会通过一些便于理解的例子来解释这一命令的基础用法。但在这之前,值得说明的一点是,下面出现的所有例子都在 Ubuntu 16.04 LTS 下测试过。

### Linux whereis 命令

`whereis` 命令可以帮助用户寻找某一命令的二进制文件、源码以及帮助页面。下面是它的格式:

```
whereis [options] [-BMS directory... -f] name...
```

这是这一命令的 man 页面给出的解释:

> `whereis` 可以查找指定命令的二进制文件、源文件和帮助文件。被找到的文件在显示时,会去掉主路径名,然后再去掉文件的(单个)尾部扩展名(如:`.c`),来源于源代码控制的 `s.` 前缀也会被去掉。接下来,`whereis` 会尝试在标准的 Linux 位置里寻找具体程序,也会在由 `$PATH` 和 `$MANPATH` 指定的路径中寻找。

下面这些以 Q&A 形式出现的例子,可以给你一个关于如何使用 whereis 命令的直观感受。

### Q1. 如何用 whereis 命令寻找二进制文件所在位置?

假设你想找,比如说,`whereis` 命令自己所在的位置。下面是你具体的操作:

```
whereis whereis
```

[![How to find location of binary file using whereis][2]][3]

需要注意的是,输出的第一个路径才是你想要的结果。使用 `whereis` 命令,同时也会显示帮助页面和源码所在的路径(如果能找到的话会显示,但是在这一例中没有找到),所以你在输出中看见的第二个路径就是帮助页面文件所在的位置。

### Q2. 怎么在搜索时规定只搜索二进制文件、帮助页面,还是源代码呢?

如果你想只搜索,假设说,二进制文件,你可以使用 `-b` 这一命令行选项。例如:

```
whereis -b cp
```

[![How to specifically search for binaries, manuals, or source code][4]][5]

类似的,`-m` 和 `-s` 这两个选项分别对应帮助页面和源码。

### Q3. 如何限制 whereis 命令的搜索位置?

默认情况下,`whereis` 是从由通配符所定义的硬编码路径来寻找文件的。但如果你想的话,你可以用命令行选项来限制搜索范围。例如,如果你只想在 `/usr/bin` 里寻找二进制文件,你可以用 `-B` 这一选项来实现。

```
whereis -B /usr/bin/ -f cp
```

注意:使用这种方式时可以给出多个路径。使用 `-f` 这一选项来明确分隔目录列表和要搜索的文件名。

类似的,如果你想只搜索帮助文件或源码,你可以对应使用 `-M` 和 `-S` 这两个选项。
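下面是一个把 `-M` 和 `-f` 组合起来的示意(假设 `cp` 的手册页位于常见的 `/usr/share/man/man1` 目录下):

```
whereis -M /usr/share/man/man1/ -f cp
```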
### Q4. 如何查看 whereis 的搜索路径?

与此相对应的也有一个选项。只要在 `whereis` 后加上 `-l`。

```
whereis -l
```

这是例子的部分输出结果:

[![How to see paths that whereis uses for search][6]][7]

### Q5. 如何找到一个有异常条目的命令?

对于 `whereis` 命令来说,如果一个命令对每个显式的请求类型都不止一项,则该命令被视为异常。例如,没有可用文档的命令,或者对应文档分散在各处的命令,都可以算作异常命令。当使用 `-u` 这一选项时,`whereis` 就会显示那些有异常条目的命令。

例如,下面这一命令就会显示当前目录中没有对应文档或有多个文档的命令。

```
whereis -m -u *
```

### 总结

我觉得,`whereis` 不是那种你需要经常使用的命令行工具。但在遇到某些特殊情况时,它绝对会让你的生活变得轻松。我们已经涉及了这一工具提供的一些重要命令行选项,所以要注意练习。想了解更多信息,直接去看它的 [man][8] 页面吧。

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/linux-whereis-command/

作者:[Himanshu Arora][a]
译者:[wenwensnow](https://github.com/wenwensnow)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/tutorial/linux-find-command/
[2]:https://www.howtoforge.com/images/command-tutorial/whereis-basic-usage.png
[3]:https://www.howtoforge.com/images/command-tutorial/big/whereis-basic-usage.png
[4]:https://www.howtoforge.com/images/command-tutorial/whereis-b-option.png
[5]:https://www.howtoforge.com/images/command-tutorial/big/whereis-b-option.png
[6]:https://www.howtoforge.com/images/command-tutorial/whereis-l.png
[7]:https://www.howtoforge.com/images/command-tutorial/big/whereis-l.png
[8]:https://linux.die.net/man/1/whereis
通过 ncurses 在终端创建一个冒险游戏
======

> 怎样使用 curses 函数读取键盘并操作屏幕。

我[之前的文章][1]介绍了 ncurses 库,并提供了一个简单的程序,展示了一些将文本放到屏幕上的 curses 函数。在接下来的文章中,我将介绍如何使用其它的 curses 函数。

### 探险

当我逐渐长大,家里有了一台苹果 II 电脑。我和我兄弟正是在这台电脑上自学了如何用 AppleSoft BASIC 写程序。我在写了一些数学智力游戏之后,继续创造游戏。作为 80 年代的人,我已经是龙与地下城桌游的粉丝,在游戏中角色扮演一个追求打败怪物并在陌生土地上抢掠的战士或者男巫,所以我创建一个基本的冒险游戏也在情理之中。

AppleSoft BASIC 支持一种简洁的特性:在标准分辨率图形模式(GR 模式)下,你可以检测屏幕上特定点的颜色。这为创建一个冒险游戏提供了捷径。比起创建并更新周期性传送到屏幕的内存地图,我可以依赖 GR 模式为我维护地图,我的程序还可以在玩家的角色(LCTT 译注:此处 character 双关,既是代表玩家的角色,也是一个字符)在屏幕四处移动的时候查询屏幕。通过这种方式,我让电脑完成了大部分艰难的工作。因此,我的自顶向下的冒险游戏使用了块状的 GR 模式图形来展示我的游戏地图。

我的冒险游戏使用了一张简单的地图,上面有一大片绿地,一条山脉从中间蔓延向下,还有一个在左上方的大湖。我为桌游战役粗略地绘制了这个地图,其中包含一个允许玩家穿过、通往远处的狭窄通道。

![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-map.jpg)

*图 1. 一个有湖和山的简单桌游地图*

你可以用 curses 绘制这个地图,并用字符代表草地、山脉和水。接下来,我将描述怎样使用 curses 做到这些,以及如何在 Linux 终端创建并玩一个类似的冒险游戏。

### 构建程序

在我的上一篇文章中,我提到了大多数 curses 程序以相同的一组指令获取终端类型和设置 curses 环境:

```
initscr();
cbreak();
noecho();
```

在这个程序中,我添加了另外的语句:

```
keypad(stdscr, TRUE);
```

这里的 `TRUE` 标志允许 curses 从用户终端读取小键盘和功能键。如果你想要在你的程序中使用上下左右方向键,你需要使用这里的 `keypad(stdscr, TRUE)`。

这样做了之后,你现在可以开始在终端屏幕上绘图了。curses 函数包括了一系列在屏幕上绘制文本的方法。在我之前的文章中,我展示了 `addch()` 和 `addstr()` 函数,以及在添加文本之前先移动到指定屏幕位置的对应函数 `mvaddch()` 和 `mvaddstr()`。为了在终端上创建这个冒险游戏的地图,你可以使用另外一组函数:`vline()` 和 `hline()`,以及它们对应的函数 `mvvline()` 和 `mvhline()`。这些 mv 函数接受屏幕坐标、一个要绘制的字符和要重复此字符的次数等参数。例如,`mvhline(1, 2, '-', 20)` 将会绘制一条开始于第一行第二列、由 20 个横线组成的线段。

为了以编程方式绘制地图到终端屏幕上,让我们先定义这个 `draw_map()` 函数:

```
#define GRASS    ' '
#define EMPTY    '.'
#define WATER    '~'
#define MOUNTAIN '^'
#define PLAYER   '*'

void draw_map(void)
{
    int y, x;

    /* 绘制探索地图 */

    /* 背景 */

    for (y = 0; y < LINES; y++) {
        mvhline(y, 0, GRASS, COLS);
    }

    /* 山和山道 */

    for (x = COLS / 2; x < COLS * 3 / 4; x++) {
        mvvline(0, x, MOUNTAIN, LINES);
    }

    mvhline(LINES / 4, 0, GRASS, COLS);

    /* 湖 */

    for (y = 1; y < LINES / 2; y++) {
        mvhline(y, 1, WATER, COLS / 3);
    }
}
```

在绘制这副地图时,记住填充大块字符到屏幕所使用的 `mvvline()` 和 `mvhline()` 函数。我绘制从 0 列开始的字符水平线(`mvhline`)以创建草地区域,直到占满整个屏幕的高度和宽度。我在此之上绘制从 0 行开始的多条垂直线(`mvvline`)添加了山脉,并用一条单行水平线(`mvhline`)添加了一条山道。另外,我通过绘制一系列短水平线(`mvhline`)创建了湖。这种绘制重叠方块的方式看起来似乎并没有效率,但是记住,在我们调用 `refresh()` 函数之前 curses 并不会真正更新屏幕。

绘制完地图,创建游戏就还剩下进入循环、让程序等待用户按下上下左右方向键中的一个,然后让玩家图标正确移动了。如果玩家想要移动的地方是空的,就应该允许玩家到那里。

你可以把 curses 当做捷径使用。比起在程序中维护一份地图的副本并复制到屏幕这么复杂,你可以让屏幕为你跟踪所有东西。`inch()` 函数和相关联的 `mvinch()` 函数允许你探测屏幕的内容。这让你可以查询 curses,以了解玩家想要移动到的位置是否被水填满或者被山阻挡。这样做你需要一个之后会用到的帮助函数:

```
int is_move_okay(int y, int x)
{
    int testch;

    /* 如果要进入的位置可以进入,返回 true */

    testch = mvinch(y, x);
    return ((testch == GRASS) || (testch == EMPTY));
}
```

如你所见,这个函数探测第 y 行、第 x 列,并在该位置未被占据的时候返回 true,否则返回 false。

这样我们写移动循环就很容易了:从键盘获取一个键值,然后根据是上下左右键移动用户字符。这里是一个这种循环的简单版本:

```
do {
    ch = getch();

    /* 测试输入的值并获取方向 */

    switch (ch) {
    case KEY_UP:
        if ((y > 0) && is_move_okay(y - 1, x)) {
            y = y - 1;
        }
        break;
    case KEY_DOWN:
        if ((y < LINES - 1) && is_move_okay(y + 1, x)) {
            y = y + 1;
        }
        break;
    case KEY_LEFT:
        if ((x > 0) && is_move_okay(y, x - 1)) {
            x = x - 1;
        }
        break;
    case KEY_RIGHT:
        if ((x < COLS - 1) && is_move_okay(y, x + 1)) {
            x = x + 1;
        }
        break;
    }
}
while (1);
```

为了在游戏中使用这个循环,你需要在循环里添加一些代码来启用其它的键(例如传统的移动键 WASD),以提供让用户退出游戏和在屏幕上四处移动的方法。这里是完整的程序:

```
/* quest.c */

#include <curses.h>
#include <stdlib.h>

#define GRASS    ' '
#define EMPTY    '.'
#define WATER    '~'
#define MOUNTAIN '^'
#define PLAYER   '*'

int is_move_okay(int y, int x);
void draw_map(void);

int main(void)
{
    int y, x;
    int ch;

    /* 初始化 curses */

    initscr();
    keypad(stdscr, TRUE);
    cbreak();
    noecho();

    clear();

    /* 初始化探索地图 */

    draw_map();

    /* 在左下角初始化玩家 */

    y = LINES - 1;
    x = 0;

    do {
        /* 默认获得一个闪烁的光标——表示玩家字符 */

        mvaddch(y, x, PLAYER);
        move(y, x);
        refresh();

        ch = getch();

        /* 测试输入的键并获取方向 */

        switch (ch) {
        case KEY_UP:
        case 'w':
        case 'W':
            if ((y > 0) && is_move_okay(y - 1, x)) {
                mvaddch(y, x, EMPTY);
                y = y - 1;
            }
            break;
        case KEY_DOWN:
        case 's':
        case 'S':
            if ((y < LINES - 1) && is_move_okay(y + 1, x)) {
                mvaddch(y, x, EMPTY);
                y = y + 1;
            }
            break;
        case KEY_LEFT:
        case 'a':
        case 'A':
            if ((x > 0) && is_move_okay(y, x - 1)) {
                mvaddch(y, x, EMPTY);
                x = x - 1;
            }
            break;
        case KEY_RIGHT:
        case 'd':
        case 'D':
            if ((x < COLS - 1) && is_move_okay(y, x + 1)) {
                mvaddch(y, x, EMPTY);
                x = x + 1;
            }
            break;
        }
    }
    while ((ch != 'q') && (ch != 'Q'));

    endwin();

    exit(0);
}

int is_move_okay(int y, int x)
{
    int testch;

    /* 当空间可以进入时返回 true */

    testch = mvinch(y, x);
    return ((testch == GRASS) || (testch == EMPTY));
}

void draw_map(void)
{
    int y, x;

    /* 绘制探索地图 */

    /* 背景 */

    for (y = 0; y < LINES; y++) {
        mvhline(y, 0, GRASS, COLS);
    }

    /* 山脉和山道 */

    for (x = COLS / 2; x < COLS * 3 / 4; x++) {
        mvvline(0, x, MOUNTAIN, LINES);
    }

    mvhline(LINES / 4, 0, GRASS, COLS);

    /* 湖 */

    for (y = 1; y < LINES / 2; y++) {
        mvhline(y, 1, WATER, COLS / 3);
    }
}
```
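要编译这个程序,需要链接 ncurses 库。下面是一个示意(假设源文件保存为 `quest.c`,并且系统中已安装 ncurses 的开发包;在个别系统上库名可能是 `-lcurses`):

```
gcc quest.c -o quest -lncurses
./quest
```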
在完整的程序清单中,你可以看见使用 curses 函数创建游戏的完整布局:

1. 初始化 curses 环境。
2. 绘制地图。
3. 初始化玩家坐标(左下角)。
4. 循环:
   * 绘制玩家的角色。
   * 从键盘获取键值。
   * 对应地上下左右调整玩家坐标。
   * 重复。
5. 完成时关闭 curses 环境并退出。

### 开始玩

当你运行游戏时,玩家的字符在左下角初始化。当玩家在游戏区域四处移动的时候,程序会留下“一串”点。这样可以展示玩家经过了的点,让玩家避免经过不必要的路径。

![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-start.png)

*图 2. 初始化在左下角的玩家*

![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-1.png)

*图 3. 玩家可以在游戏区域四处移动,例如湖周围和山的通道*

为了创建上面这样的完整冒险游戏,你可能需要在角色于游戏区域四处移动的时候随机放置不同的怪物。你也可以创建玩家在打败敌人后可以掠夺的特殊道具,这些道具应能提高玩家的能力。

但是作为起点,这是一个展示如何使用 curses 函数读取键盘和操纵屏幕的好程序。

### 下一步

这是一个如何使用 curses 函数更新和读取屏幕和键盘的简单例子。按照你的程序需要做的事情,curses 可以做得更多。在下一篇文章中,我计划展示如何更新这个简单程序以使用颜色。同时,如果你想要学习更多 curses 的用法,我鼓励你去读 Linux 文档计划中由 Pradeep Padala 写的《[如何使用 NCURSES 编程][2]》。

--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/creating-adventure-game-terminal-ncurses

作者:[Jim Hall][a]
译者:[Leemeans](https://github.com/leemeans)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxjournal.com/users/jim-hall
[1]:https://linux.cn/article-9348-1.html
[2]:http://tldp.org/HOWTO/NCURSES-Programming-HOWTO
published/20180131 Fastest way to unzip a zip file in Python.md

Python 中最快解压 zip 文件的方法
======

假设现在的上下文(LCTT 译注:context,计算机术语,此处意为业务情景)是这样的:一个 zip 文件被上传到一个 [Web 服务][1]中,然后 Python 需要解压这个 zip 文件,并分析和处理其中的每个文件。这个特殊的应用查看每个文件各自的名称和大小,并和已经上传到 AWS S3 上的文件进行比较,如果文件(和 AWS S3 上的相比)有所不同或者文件本身更新,那么就将它上传到 AWS S3。

[![Uploads today][2]][3]

挑战在于这些 zip 文件太大了。它们的平均大小是 560MB,但是其中一些大于 1GB。这些文件中大多数是文本文件,但是其中同样也有一些巨大的二进制文件。不同寻常的是,每个 zip 文件包含 100 个文件,但是其中 1-3 个文件却占据了多达 95% 的 zip 文件大小。

最开始我尝试在内存中解压文件,并且每次只处理一个文件。在各种内存爆炸和 EC2 耗尽内存的情况下,这个方法壮烈失败了。我认为原因是这样的:最开始你有 1GB 文件在内存中,然后你解压每个文件,在内存中大约就要占用 2-3GB。所以,在很多次测试之后,解决方案是将这些 zip 文件复制到磁盘上(在临时目录 `/tmp` 中),然后遍历这些文件。这次情况好多了,但是我仍然注意到整个解压过程花费了巨量的时间。**是否有方法可以优化呢?**

### 原始函数

首先是下面这些模拟对 zip 文件中文件实际操作的普通函数:

```
def _count_file(fn):
    with open(fn, 'rb') as f:
        return _count_file_object(f)

def _count_file_object(f):
    # Note that this iterates on 'f'.
    # You *could* do 'return len(f.read())'
    # which would be faster but potentially memory
    # inefficient and unrealistic in terms of this
    # benchmark experiment.
    total = 0
    for line in f:
        total += len(line)
    return total
```

这里是可能最简单的另一个函数:

```
def f1(fn, dest):
    with open(fn, 'rb') as f:
        zf = zipfile.ZipFile(f)
        zf.extractall(dest)

    total = 0
    for root, dirs, files in os.walk(dest):
        for file_ in files:
            fn = os.path.join(root, file_)
            total += _count_file(fn)
    return total
```

如果我更仔细地分析一下,我会发现这个函数有 40% 的时间花费在运行 `extractall` 上,60% 的时间花费在遍历各个文件并读取其长度上。
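这种剖析可以用标准库的 cProfile 来完成。下面是一个示意(假设上面的 `f1` 已在当前模块中定义,并且手头有文中用到的那个 zip 文件):

```
import cProfile

# 按累计耗时排序,查看时间都花在了哪些调用上
cProfile.run("f1('symbols-2017-11-27T14_15_30.zip', '/tmp/dest')", sort='cumtime')
```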
### 第一步尝试

我的第一步尝试是使用线程。先创建一个 `zipfile.ZipFile` 的实例,展开其中的每个文件名,然后为每一个文件启动一个线程。每个线程都给它一个函数来做“实质工作”(在这个基准测试中,就是遍历每个文件然后获取它的名称)。实际业务中的函数进行的工作是复杂的 S3、Redis 和 PostgreSQL 操作,但是在我的基准测试中我只需要制作一个可以找出文件长度的函数就好了。线程池函数:

```
def f2(fn, dest):

    def unzip_member(zf, member, dest):
        zf.extract(member, dest)
        fn = os.path.join(dest, member.filename)
        return _count_file(fn)

    with open(fn, 'rb') as f:
        zf = zipfile.ZipFile(f)
        futures = []
        with concurrent.futures.ThreadPoolExecutor() as executor:
            for member in zf.infolist():
                futures.append(
                    executor.submit(
                        unzip_member,
                        zf,
                        member,
                        dest,
                    )
                )
            total = 0
            for future in concurrent.futures.as_completed(futures):
                total += future.result()
    return total
```

**结果:加速 ~10%**

### 第二步尝试

所以可能是 GIL(LCTT 译注:Global Interpreter Lock,一种全局锁,CPython 中的一个概念)阻碍了我。最自然的想法是尝试使用多进程在多个 CPU 上分配工作。但是这样做有缺点,那就是你不能传递一个不可 pickle 序列化的对象(LCTT 译注:意为只有可 pickle 序列化的对象可以被传递),所以你只能发送文件名到之后的函数中:

```
def unzip_member_f3(zip_filepath, filename, dest):
    with open(zip_filepath, 'rb') as f:
        zf = zipfile.ZipFile(f)
        zf.extract(filename, dest)
    fn = os.path.join(dest, filename)
    return _count_file(fn)


def f3(fn, dest):
    with open(fn, 'rb') as f:
        zf = zipfile.ZipFile(f)
        futures = []
        with concurrent.futures.ProcessPoolExecutor() as executor:
            for member in zf.infolist():
                futures.append(
                    executor.submit(
                        unzip_member_f3,
                        fn,
                        member.filename,
                        dest,
                    )
                )
            total = 0
            for future in concurrent.futures.as_completed(futures):
                total += future.result()
    return total
```

**结果:加速 ~300%**

### 这是作弊

使用进程池的问题是,这样需要原始的 `.zip` 文件存储在磁盘上。所以为了在我的 web 服务器上使用这个解决方案,我首先得要将内存中的 zip 文件保存到磁盘,然后再调用这个函数。这样做的代价我不是很清楚,但是应该不低。

好吧,姑且一试也没有什么损失。可能解压过程的加速足以弥补这样做的损失了吧。

但是一定记住!这个优化取决于使用所有可用的 CPU。如果一些其它的 CPU 需要执行在 `gunicorn` 中的其它事务呢?这时,这些其它进程必须等待,直到有 CPU 可用。由于在这个服务器上有其他的事务正在进行,我不是很确定我想要在进程中接管所有其他 CPU。
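如果不想让进程池占满所有 CPU 核心,`concurrent.futures` 允许显式限制工作进程的数量。下面是一个独立的小示意(其中的 `work` 是一个假设的占位函数,用来代替真实的解压任务):

```
import concurrent.futures

def work(s):
    # 占位任务:代替真实的“解压并统计文件”工作
    return len(s)

if __name__ == '__main__':
    # 只使用 2 个工作进程,把其余核心留给服务器上的其它事务
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as executor:
        print(list(executor.map(work, ['a', 'bb', 'ccc'])))  # [1, 2, 3]
```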
### 结论

一步一步地完成这个任务的过程感觉挺好的。你被限制在一个 CPU 上,但是表现仍然特别好。同样地,一定要看看 `f1` 和 `f2` 两段代码之间的不同之处!利用 `concurrent.futures` 池类,你可以获取到允许使用的 CPU 的个数,但是这样做同样给人感觉不是很好。如果你在虚拟环境中获取的个数是错的呢?或者可用的个数太低,以致无法从负载分配中获益,而你现在只是在为了分发负载而白白支付额外的开销呢?

我将会继续使用 `zipfile.ZipFile(file_buffer).extractall(temp_dir)`。这样做已经足够好了。

### 想试试手吗?

我使用一个 `c5.4xlarge` EC2 服务器来进行我的基准测试。文件可以从此处下载:

```
wget https://www.peterbe.com/unzip-in-parallel/hack.unzip-in-parallel.py
wget https://www.peterbe.com/unzip-in-parallel/symbols-2017-11-27T14_15_30.zip
```

这里的 `.zip` 文件有 34MB,比起服务器上的文件已经小了很多。

`hack.unzip-in-parallel.py` 文件里是一团糟。它包含了大量可怕的修正和丑陋的代码,但是这只是一个开始。

--------------------------------------------------------------------------------

via: https://www.peterbe.com/plog/fastest-way-to-unzip-a-zip-file-in-python

作者:[Peterbe][a]
译者:[Leemeans](https://github.com/leemeans)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.peterbe.com/
[1]:https://symbols.mozilla.org
[2]:https://cdn-2916.kxcdn.com/cache/b7/bb/b7bbcf60347a5fa91420f71bbeed6d37.png
[3]:https://cdn-2916.kxcdn.com/cache/e6/dc/e6dc20acd37d94239edbbc0727721e4a.png
如何在 Linux/Unix 中不重启 Vim 而重新加载 .vimrc 文件
======

我是一位新的 Vim 编辑器用户。我通常使用 `:vs ~/.vimrc` 来打开 `~/.vimrc` 配置文件。而当我编辑 `.vimrc` 时,我需要在不重启 Vim 会话的情况下重新加载它。在 Linux 或者类 Unix 系统中,如何在编辑 `.vimrc` 之后,不重启 Vim 就重新加载它呢?

Vim 是自由开源并且向上兼容 Vi 的编辑器。它可以用来编辑各种文本。它在编辑用 C/Perl/Python 编写的程序时特别有用,也可以用它来编辑 Linux/Unix 配置文件。`~/.vimrc` 是你个人的 Vim 初始化和自定义文件。

### 如何在不重启 Vim 会话的情况下重新加载 .vimrc

在 Vim 中不重启而重新加载 `.vimrc` 的流程:

1. 输入 `vim filename` 启动 vim
2. 按下 `Esc` 接着输入 `:vs ~/.vimrc` 来打开 vim 配置文件
3. 像这样添加自定义配置:

```
filetype indent plugin on
set number
syntax on
```

4. 使用 `:wq` 保存文件,并从 `~/.vimrc` 窗口退出
5. 输入下面任一命令重载 `~/.vimrc`:`:so $MYVIMRC` 或者 `:source ~/.vimrc`

[![How to reload .vimrc file without restarting vim][1]][1]

*图 1:编辑 ~/.vimrc 并在需要时重载它而不用退出 vim,这样你就可以继续编辑程序了*

`:so[urce]! {file}` 这个 vim 命令会从给定的文件(比如 `~/.vimrc`)读取配置指令。这些命令就像你亲手输入的一样,是在普通模式下执行的。当你在 `:global`、`:argdo`、`:windo`、`:bufdo` 之后、在循环中或者跟在另一个命令后使用它时,屏幕显示不会在执行命令期间更新。

### 如何设置按键来编辑并重载 ~/.vimrc

在你的 `~/.vimrc` 后面加上这些:

```
" Edit vimr configuration file
nnoremap confe :e $MYVIMRC<CR>
" Reload vims configuration file
nnoremap confr :source $MYVIMRC<CR>
```

现在只要按下 `Esc` 接着输入 `confe` 就可以编辑 `~/.vimrc`。按下 `Esc`,接着输入 `confr` 则可以重新加载。一些人喜欢在 `.vimrc` 中使用 `<Leader>` 键。因此上面的映射变成:

```
" Edit vimr configuration file
nnoremap <Leader>ve :e $MYVIMRC<CR>
" Reload vimr configuration file
nnoremap <Leader>vr :source $MYVIMRC<CR>
```

`<Leader>` 键默认映射成 `\` 键。因此只要输入 `\` 接着 `ve` 就能编辑文件。按下 `\` 接着 `vr` 就能重载 `~/.vimrc`。
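顺带一提,`<Leader>` 键本身也可以改成别的键。下面是一个示意(假设你想把它换成逗号;这一行要放在所有使用 `<Leader>` 的映射定义之前):

```
let mapleader = ","
```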
这就完成了,你可以不用再重启 Vim 就能重新加载 `.vimrc` 了。

### 关于作者

作者是 nixCraft 的创建者,经验丰富的系统管理员,也是 Linux/Unix shell 脚本的培训师。他曾与全球客户以及 IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 [Twitter][2]、[Facebook][3]、[Google+][4] 上关注他。通过 [RSS/XML 订阅][5]获取最新的系统管理、Linux/Unix 以及开源主题教程。

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/how-to-reload-vimrc-file-without-restarting-vim-on-linux-unix/

作者:[Vivek Gite][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/media/new/faq/2018/02/How-to-reload-.vimrc-file-without-restarting-vim.jpg
[2]:https://twitter.com/nixcraft
[3]:https://facebook.com/nixcraft
[4]:https://plus.google.com/+CybercitiBiz
[5]:https://www.cyberciti.biz/atom/atom.xml
如何在 Ubuntu 安装 Go 语言编写的 Git 服务器 Gogs
======

Gogs 是由 Go 语言编写的、自由开源的 Git 服务。Gogs 是一款无痛式自托管的 Git 服务器,能在尽可能小的硬件资源开销上搭建并运行您的私有 Git 服务器。Gogs 的网页界面和 GitHub 十分相近,且提供 MySQL、PostgreSQL 和 SQLite 数据库支持。

在本教程中,我们将按步骤指导您在 Ubuntu 16.04 上使用 Gogs 安装和配置您的私有 Git 服务器。这篇教程涵盖了如何在 Ubuntu 上安装 Go 语言和 PostgreSQL,以及如何安装并配置 Nginx 网页服务器作为 Go 应用的反向代理等细节内容。

### 搭建环境

1. 更新和升级系统
2. 安装和配置 PostgreSQL
3. 安装 Go 和 Git
4. 使用 Gogs 安装 Git 服务
5. 配置 Gogs Go Git 服务器
6. 运行 Gogs 服务器
7. 安装和配置 Nginx 反向代理
8. 测试

### 步骤 1 - 更新和升级系统

继续之前,更新 Ubuntu 所有的软件库,升级所有软件包。

运行下面的 `apt` 命令:

```
sudo apt update
sudo apt upgrade
```

### 步骤 2 - 安装和配置 PostgreSQL

Gogs 提供 MySQL、PostgreSQL、SQLite 和 TiDB 数据库系统支持。

此步骤中,我们将使用 PostgreSQL 作为 Gogs 程序的数据库。

使用下面的 `apt` 命令安装 PostgreSQL。

```
sudo apt install -y postgresql postgresql-client libpq-dev
```

安装完成之后,启动 PostgreSQL 服务并设置为开机启动。

```
systemctl start postgresql
systemctl enable postgresql
```

之后,我们需要为 Gogs 创建数据库和用户。

使用 `postgres` 用户登录并运行 `psql` 命令以访问 PostgreSQL 操作界面。

```
su - postgres
psql
```

创建一个名为 `git` 的新用户,给予此用户 `CREATEDB` 权限。

```
CREATE USER git CREATEDB;
\password git
```

创建名为 `gogs_production` 的数据库,设置 `git` 用户作为其所有者。

```
CREATE DATABASE gogs_production OWNER git;
```

[![创建 Gogs 数据库][1]][2]

用于 Gogs 的 `gogs_production` PostgreSQL 数据库和 `git` 用户已经创建完毕。

### 步骤 3 - 安装 Go 和 Git

使用下面的 `apt` 命令从软件库中安装 Git。

```
sudo apt install git
```

此时,为系统创建名为 `git` 的新用户。

```
sudo adduser --disabled-login --gecos 'Gogs' git
```

登录 `git` 账户并且创建名为 `local` 的目录。

```
su - git
mkdir -p /home/git/local
```

切换到 `local` 目录,依照下方所展示的内容,使用 `wget` 命令下载 Go(最新版)。

```
cd ~/local
wget https://dl.google.com/go/go1.9.2.linux-amd64.tar.gz
```

[![安装 Go 和 Git][3]][4]

解压并且删除 Go 的压缩文件。

```
tar -xf go1.9.2.linux-amd64.tar.gz
rm -f go1.9.2.linux-amd64.tar.gz
```

Go 二进制文件已经被下载到 `~/local/go` 目录。此时我们需要设置环境变量——把 `GOROOT` 和 `GOPATH` 目录加入系统环境,这样,我们就可以在 `git` 用户下执行 `go` 命令。

执行下方的命令。

```
cd ~/
echo 'export GOROOT=$HOME/local/go' >> $HOME/.bashrc
echo 'export GOPATH=$HOME/go' >> $HOME/.bashrc
echo 'export PATH=$PATH:$GOROOT/bin:$GOPATH/bin' >> $HOME/.bashrc
```

之后通过运行 `source ~/.bashrc` 重载 Bash,如下:

```
source ~/.bashrc
```

[![安装 Go 编程语言][5]][6]

现在运行 `go` 的版本查看命令。

```
go version
```

[![检查 go 版本][7]][8]

现在,Go 已经安装在系统的 `git` 用户下了。

### 步骤 4 - 使用 Gogs 安装 Git 服务

使用 `git` 用户登录并且使用 `go` 命令从 GitHub 下载 Gogs。

```
su - git
go get -u github.com/gogits/gogs
```

此命令将在 `GOPATH/src` 目录下载 Gogs 的所有源代码。

切换至 `$GOPATH/src/github.com/gogits/gogs` 目录,并且使用下列命令构建 Gogs。

```
cd $GOPATH/src/github.com/gogits/gogs
go build
```

确保您没有遇到错误。

现在使用下面的命令运行 Gogs Go Git 服务器。

```
./gogs web
```

[![安装 Gogs Go Git 服务][9]][10]

打开网页浏览器,键入您的 IP 地址和端口号,我的是 http://192.168.33.10:3000/ 。

您应该会得到与下方一致的反馈。

[![Gogs 网页服务器][11]][12]

Gogs 已经在您的 Ubuntu 系统上安装完毕。现在返回到您的终端,并且键入 `Ctrl + C` 中止服务。

### 步骤 5 - 配置 Gogs Go Git 服务器

本步骤中,我们将为 Gogs 创建自定义配置。

进入 Gogs 安装目录并新建 `custom/conf` 目录。

```
cd $GOPATH/src/github.com/gogits/gogs
mkdir -p custom/conf/
```

复制默认的配置文件到 `custom` 目录,并使用 [vim][13] 修改。

```
cp conf/app.ini custom/conf/app.ini
vim custom/conf/app.ini
```

在 `[server]` 小节中,修改 `HTTP_ADDR` 为 `127.0.0.1`。

```
[server]
PROTOCOL = http
ROOT_URL = %(PROTOCOL)s://%(DOMAIN)s:%(HTTP_PORT)s/
HTTP_ADDR = 127.0.0.1
HTTP_PORT = 3000
```

在 `[database]` 小节中,按照您的数据库信息修改。

```
[database]
DB_TYPE = postgres
HOST = 127.0.0.1:5432
NAME = gogs_production
USER = git
PASSWD = aqwe123@#
```

保存并退出。

运行下面的命令验证配置项。

```
./gogs web
```

[![配置服务器][14]][15]

Gogs 现在已经按照自定义配置运行在 `localhost` 的 3000 端口上了。

### 步骤 6 - 运行 Gogs 服务器

这一步,我们将在 Ubuntu 系统上把 Gogs 配置为系统服务。我们会在 `/etc/systemd/system` 目录下创建一个新的服务配置文件 `gogs.service`。

切换到 `/etc/systemd/system` 目录,使用 [vim][13] 创建服务配置文件 `gogs.service`。

```
cd /etc/systemd/system
vim gogs.service
```

粘贴下面的代码到 Gogs 服务配置文件中。

```
[Unit]
Description=Gogs
After=syslog.target
After=network.target
After=mariadb.service mysqld.service postgresql.service memcached.service redis.service

[Service]
# Modify these two values and uncomment them if you have
# repos with lots of files and get an HTTP error 500 because
# of that
###
#LimitMEMLOCK=infinity
#LimitNOFILE=65535
Type=simple
User=git
Group=git
WorkingDirectory=/home/git/go/src/github.com/gogits/gogs
ExecStart=/home/git/go/src/github.com/gogits/gogs/gogs web
Restart=always
Environment=USER=git HOME=/home/git

[Install]
WantedBy=multi-user.target
```

之后保存并且退出。

现在重载 systemd 服务。

```
systemctl daemon-reload
```

使用下面的命令启动 Gogs 服务器并设置为开机启动。

```
systemctl start gogs
systemctl enable gogs
```

Gogs 服务器现在已经运行在 Ubuntu 系统上了。

使用下面的命令检测:

```
netstat -plntu
systemctl status gogs
```

### 步骤 7 - 安装和配置 Nginx 反向代理

在本步中,我们将为 Gogs 安装和配置 Nginx 反向代理。我们会从 Nginx 自己的软件库中安装 Nginx 包。

使用下面的命令添加 Nginx 软件库。

```
sudo add-apt-repository -y ppa:nginx/stable
```

此时更新所有的软件库并且使用下面的命令安装 Nginx。

```
sudo apt update
sudo apt install nginx -y
```

之后,进入 `/etc/nginx/sites-available` 目录并且创建虚拟主机文件 `gogs`。

```
cd /etc/nginx/sites-available
vim gogs
```

粘贴下面的代码到配置文件。

```
server {
    listen 80;
    server_name git.hakase-labs.co;

    location / {
        proxy_pass http://localhost:3000;
    }
}
```

保存并退出。

**注意:** 请使用您自己的域名修改 `server_name` 项。

现在激活虚拟主机并且测试 Nginx 配置。

```
ln -s /etc/nginx/sites-available/gogs /etc/nginx/sites-enabled/
nginx -t
```

确保没有遇到错误,然后重启 Nginx 服务器。

```
systemctl restart nginx
```

### 步骤 8 - 测试

打开您的网页浏览器并且输入您的 Gogs URL,我的是 http://git.hakase-labs.co

现在您将进入安装界面。在页面的顶部,输入您所有的 PostgreSQL 数据库信息。

[![Gogs 安装][22]][23]

之后,滚动到底部,点击 “Admin account settings” 下拉选项。

输入您的管理员用户名和邮箱。

[![键入 gogs 安装设置][24]][25]

之后点击 “Install Gogs” 按钮。

然后您将会被重定向到下图显示的 Gogs 用户面板。

[![Gogs 面板][26]][27]

下面是 Gogs 的 “Admin Dashboard(管理员面板)”。

[![浏览 Gogs 面板][28]][29]

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-install-gogs-go-git-service-on-u

作者:[Muhammad Arul][a]
译者:[CYLeft](https://github.com/CYLeft)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
112
sources/talk/20170210 Evolutional Steps of Computer Systems.md
Normal file
112
sources/talk/20170210 Evolutional Steps of Computer Systems.md
Normal file
@ -0,0 +1,112 @@

Evolutional Steps of Computer Systems
======

Throughout the history of the modern computer, there were several evolutional steps related to the way we interact with the system. I tend to categorize those steps as follows:

1. Numeric Systems
2. Application-Specific Systems
3. Application-Centric Systems
4. Information-Centric Systems
5. Application-Less Systems

The following sections describe how I see those categories.

### Numeric Systems

[Early computers][1] were designed with numbers in mind. They could add, subtract, multiply, divide. Some of them were able to perform more complex mathematical operations such as differentiation or integration.

If you map characters to numbers, they were able to «compute» [strings][2] as well, but this is somewhat of a «creative use of numbers» rather than meaningful processing of arbitrary information.

### Application-Specific Systems

For higher-level problems, pure numeric systems are not sufficient. Application-specific systems were developed to do one single task. They were very similar to numeric systems. However, with sufficiently complex number calculations, systems were able to accomplish very well-defined higher-level tasks such as calculations related to scheduling problems or other optimization problems.

Systems of this category were built for one single purpose, one distinct problem they solved.

### Application-Centric Systems

Systems that are application-centric are the first real general-purpose systems. Their main usage style is still mostly application-specific, but with multiple applications working either time-sliced (one app after another) or in multi-tasking mode (multiple apps at the same time).

Early personal computers [from the 70s][3] of the previous century were the first application-centric systems that became popular for a wide group of people.

Even modern operating systems - Windows, macOS, most GNU/Linux desktop environments - still follow the same principles.

Of course, there are sub-categories as well:

1. Strict Application-Centric Systems
2. Loose Application-Centric Systems

Strict application-centric systems such as [Windows 3.1][4] (Program Manager and File Manager) or even the initial version of [Windows 95][5] had no pre-defined folder hierarchy. Users started text-processing software like [WinWord][6] and saved their files in the program folder of WinWord. When working with a spreadsheet program, its files were saved in the application folder of the spreadsheet tool. And so on. Users did not create their own hierarchy of folders, mostly because of convenience, laziness, or because they did not see any necessity. The number of files per user was still within dozens up to a few hundred.

For accessing information, the user typically opened an application and, within the application, the files containing the generated data were retrieved using file/open.

It was [Windows 95][5] SP2 that introduced «[My Documents][7]» for the Windows platform. With this file hierarchy template, application designers began switching to «My Documents» as a default file save/open location instead of using the software product installation path. This made the users embrace this pattern and start to maintain folder hierarchies on their own.

This resulted in loose application-centric systems: typical file retrieval is done via a file manager. When a file is opened, the associated application is started by the operating system. It is a small, subtle, but very important usage shift. Application-centric systems are still the dominant usage pattern for personal computers.

Nevertheless, this pattern comes with many disadvantages. For example, in order to prevent data retrieval problems, there is the need to maintain a strict hierarchy of folders that contains all related files of a given project. Unfortunately, nature does not fit well into a strict hierarchy of folders. Furthermore, [this does not scale well][8]. Desktop search engines and advanced data-organizing tools like [tagstore][9] are able to smooth the edges a bit. As studies show, only a minority of users are using such advanced retrieval tools. Most users still navigate through the file system without using any alternative or supplemental retrieval techniques.

### Information-Centric Systems

One possible way of dealing with the issue that a certain topic needs to have a folder that holds all related files is to switch from an application-centric system to an information-centric system.

Instead of opening a spreadsheet application to work with the project budget, opening a word processor application to write the project report, and opening another tool to work with image files, an information-centric system combines all the information on the project in one place, in one application.

The calculations for the previous month are right beneath notes from a client meeting, which are right beneath a photograph of the whiteboard notes, which is right beneath some todo tasks, without any application or file borders in between.

Early attempts to create such an environment were IBM [OS/2][10], Microsoft [OLE][11], and [NeXT][12]. None of them was a major success, for a variety of reasons. A very interesting information-centric environment is [Acme][13] from [Plan 9][14]. It combines [a wide variety of applications][15] within one application, but it never reached a notable distribution, even with its ports to Windows and GNU/Linux.

Modern approaches to an information-centric system are advanced [personal wikis][16] like [TheBrain][17] or [Microsoft OneNote][18].

My personal tool of choice is the [GNU/Emacs][19] platform with its [Org-mode][19] extension. I hardly leave Org-mode when I work with my computer. For accessing external data sources, I created [Memacs][20], which brings a broad variety of data into Org-mode for me. I love to do spreadsheet calculations right beneath scheduled tasks, in-line images, internal and external links, and so forth. It is truly an information-centric system where the user doesn't have to deal with application borders or strictly hierarchical file-system folders. Multi-classification is possible using simple or advanced tagging. All kinds of views can be derived with a single command. One of those views is my calendar, the agenda. Another derived view is the list of borrowed things. And so on. There are no limits for Org-mode users. If you can think of it, it is most likely possible within Org-mode.

Is this the end of the evolution? Certainly not.

### Application-Less Systems

I can think of a class of systems which I refer to as application-less systems. As the next logical step, there is no need to have single-domain applications, even when they are as capable as Org-mode. The computer offers a nice-to-use interface to information and features, not files and applications. Even a classical operating system is not accessible.

Application-less systems might as well be combined with [artificial intelligence][21]. Think of it as some kind of [HAL 9000][22] from [2001: A Space Odyssey][23]. Or [LCARS][24] from Star Trek.

It is hard to believe that there is a transition between our application-based, vendor-based software culture and application-less systems. Maybe the open source movement, with its slow but constant development, will be able to form a truly application-less environment to which all kinds of organizations and people contribute.

Information, and features to retrieve and manipulate information: this is all it takes. This is all we need. Everything else is just limiting distraction.

--------------------------------------------------------------------------------

via: http://karl-voit.at/2017/02/10/evolution-of-systems/

作者:[Karl Voit][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://karl-voit.at
[1]:https://en.wikipedia.org/wiki/History_of_computing_hardware
[2]:https://en.wikipedia.org/wiki/String_%28computer_science%29
[3]:https://en.wikipedia.org/wiki/Xerox_Alto
[4]:https://en.wikipedia.org/wiki/Windows_3.1x
[5]:https://en.wikipedia.org/wiki/Windows_95
[6]:https://en.wikipedia.org/wiki/Microsoft_Word
[7]:https://en.wikipedia.org/wiki/My_Documents
[8]:http://karl-voit.at/tagstore/downloads/Voit2012b.pdf
[9]:http://karl-voit.at/tagstore/
[10]:https://en.wikipedia.org/wiki/OS/2
[11]:https://en.wikipedia.org/wiki/Object_Linking_and_Embedding
[12]:https://en.wikipedia.org/wiki/NeXT
[13]:https://en.wikipedia.org/wiki/Acme_%28text_editor%29
[14]:https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs
[15]:https://en.wikipedia.org/wiki/List_of_Plan_9_applications
[16]:https://en.wikipedia.org/wiki/Personal_wiki
[17]:https://en.wikipedia.org/wiki/TheBrain
[18]:https://en.wikipedia.org/wiki/Microsoft_OneNote
[19]:../../../../tags/emacs
[20]:https://github.com/novoid/Memacs
[21]:https://en.wikipedia.org/wiki/Artificial_intelligence
[22]:https://en.wikipedia.org/wiki/HAL_9000
[23]:https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey
[24]:https://en.wikipedia.org/wiki/LCARS
@ -1,55 +0,0 @@

Translating by MjSeven

How slowing down made me a better leader
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_leadership_brand.png?itok=YW1Syk4S)

Early in my career, I thought the most important thing I could do was act. If my boss said jump, my reply was "how high?"

But as I've grown as a leader and manager, I've realized that the most important traits I can offer are [patience][1] and listening. This patience and listening means I'm focusing on what's really important. I'm decisive, so I do not hesitate to act. Yet I've learned that my actions are more impactful when I consider input from multiple sources and offer advice on what we should be doing—not simply reacting to an immediate request.

Practicing open leadership involves cultivating the patience and listening skills I need to collaborate on the [best plan of action, not just the quickest one][2]. It also gives me the tools I need to explain [why I'm saying "no"][3] (or, perhaps, "not now") to someone, so I can lead with transparency and confidence.

If you're in software development and practice scrum, then the following argument might resonate with you: The patience and listening a manager displays are as important as her skills in sprint planning and running the sprint demo. Forget about them, and you'll lessen the impact you're able to have.

### A focus on patience

Focus and patience do not always come easily. Often, I find myself sitting in meetings and filling my notebook with action items. My default action can be to think: "We can simply do x and y will improve!" Then I remember that things are not so linear.

I need to think about the other factors that can influence a situation. Pausing to take in data from multiple people and resources helps me flesh out a strategy that our organization needs for long-term success. It also helps me identify those shorter-term milestones that should lead us to deliver the business results I'm responsible for producing.

Here's a great example from a time when patience wasn't something I valued as I should have—and how that hurt my performance. When I was based in North Carolina, I worked with someone based in Arizona. We didn't use video conferencing technologies, so I didn't get to observe her body language when we talked. While I was responsible for delivering the results for the project I led, she was one of the two people tasked with making sure I had adequate support.

For whatever reason, when I talked with this person, when she asked me to do something, I did it. She would be providing input on my performance evaluation, so I wanted to make sure she was happy. At the time, I didn't possess the maturity to know I didn't need to make her happy; my focus should have been on other performance indicators. I should have spent more time listening and collaborating with her instead of picking up the first "action item" and working on it while she was still talking.

After six months on the job, this person gave me some tough feedback. I was angry and sad. Didn't I do everything she'd asked? I had worked long hours, nearly seven days a week for six months. How dare she criticize my performance?

Then, after I had my moment of anger followed by sadness, I thought about what she said. Her feedback was on point.

The patience and listening a manager displays are as important as her skills in sprint planning and running the sprint demo.

She had concerns about the project, and she held me accountable because I was responsible. We worked through the issues, and I learned that vital lesson about how to lead: Leadership does not mean "get it done right now." Leadership means putting together a strategy, then communicating and implementing plans in support of the strategy. It also means making mistakes and learning from these hiccups.

### Lesson learned

In hindsight, I realize I could have asked more questions to better understand the intent of her feedback. I also could have pushed back if the guidance from her did not align with other input I was receiving. By having the patience to listen to the various sources giving me input about the project, synthesizing what I learned, and creating a coherent plan for action, I would have been a better leader. I also would have had more purpose driving the work I was doing. Instead of reacting to a single data point, I would have been implementing a strategic plan. I also would have had a better performance evaluation.

I eventually had some feedback for her. Next time we worked together, I didn't want to hear the feedback after six months. I wanted to hear the feedback earlier and more often so I could learn from the mistakes sooner. An ongoing discussion about the work is what should happen on any team.

As I mature as a manager and leader, I hold myself to the same standards I ask my team to meet: Plan, work the plan, and reflect. Repeat. Don't let a fire drill created by an external force distract you from the plan you need to implement. Breaking work into small increments builds in space for reflections and adjustments to the plan. As Daniel Goleman writes, "Directing attention toward where it needs to go is a primal task of leadership." Don't be afraid of meeting this challenge.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/18/2/open-leadership-patience-listening

作者:[Angela Robertson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/arobertson98
[1]:https://opensource.com/open-organization/16/3/my-most-difficult-leadership-lesson
[2]:https://opensource.com/open-organization/16/3/fastest-result-isnt-always-best-result
[3]:https://opensource.com/open-organization/17/5/saying-no-open-organization
@ -0,0 +1,91 @@

Why culture is the most important issue in a DevOps transformation
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community2.png?itok=1blC7-NY)

You've been appointed the DevOps champion in your organisation: congratulations. So, what's the most important issue that you need to address?

It's the technology—tools and the toolchain—right? Everybody knows that unless you get the right tools for the job, you're never going to make things work. You need integration with your existing stack (though whether you go with tight or loose integration will be an interesting question), a support plan (vendor, third party, or internal), and a bug-tracking system to go with your source code management system. And that's just the start.

No! Don't be ridiculous: It's clearly the process that's most important. If the team doesn't agree on how stand-ups are run, who participates, the frequency and length of the meetings, and how many people are required for a quorum, then you'll never be able to institute a consistent, repeatable working pattern.

In fact, although both the technology and the process are important, there's a third component that is equally important, but typically even harder to get right: culture. Yup, it's that touchy-feely thing we techies tend to struggle with.1

### Culture

I was visiting a midsized government institution a few months ago (not in the UK, as it happens), and we arrived a little early to meet the CEO and CTO. We were ushered into the CEO's office and waited for a while as the two of them finished participating in the daily stand-up. They apologised for being a minute or two late, but far from being offended, I was impressed. Here was an organisation where the culture of participation was clearly infused all the way up to the top.

Not that culture can be imposed from the top—nor can you rely on it percolating up from the bottom3—but these two C-level execs were not only modelling the behaviour they expected from the rest of their team, but also seemed, from the brief discussion we had about the process afterwards, to be truly invested in it. If you can get management to buy into the process—and be seen buying in—you are at least less likely to have problems with other groups finding plausible excuses to keep their distance and get away with it.

So let's assume management believes you should give DevOps a go. Where do you start?

Developers may well be your easiest target group. They are often keen to try new things and find ways to move things along faster, so they are often the group that can be expected to adopt new technologies and methodologies. DevOps arguably has been driven mainly by the development community.

But you shouldn't assume all developers will be keen to embrace this change. For some, the way things have always been done—your Rick Parfitts of dev, if you will7—is fine. Finding ways to help them work efficiently in the new world is part of your job, not just theirs. If you have superstar developers who aren't happy with change, you risk alienating and losing them if you try to force them into your brave new world. What's worse, if they dig their heels in, you risk the adoption of your DevSecOps vision being compromised when they explain to their managers that things aren't going to change if it makes their lives more difficult and reduces their productivity.

Maybe you're not going to be able to move all the systems and people to DevOps immediately. Maybe you're going to need to choose which apps to start with and who will be your first DevOps champions. Maybe it's time to move slowly.

### Not maybe: definitely

No—I lied. You're definitely going to need to move slowly. Trying to change everything at once is a recipe for disaster.

This goes for all elements of the change—which people to choose, which technologies to choose, which applications to choose, which user base to choose, which use cases to choose—bar one. For those elements, if you try to move everything in one go, you will fail. You'll fail for a number of reasons. You'll fail for reasons I can't imagine and, more importantly, for reasons you can't imagine. But some of the reasons will include:

  * People—most people—don't like change.
  * Technologies don't like change (you can't just switch and expect everything to still work).
  * Applications don't like change (things worked before, or at least failed in known ways). You want to change everything in one go? Well, they'll all fail in new and exciting9 ways.
  * Users don't like change.
  * Use cases don't like change.

### The one exception

You noticed I wrote "bar one" when discussing which elements you shouldn't choose to change all in one go? Well done.

What's that exception? It's the initial team. When you choose your initial application to change and you're thinking about choosing the team to make that change, select the members carefully and select a complete set. This is important. If you choose just developers, just test folks, just security folks, just ops folks, or just management—if you leave out one functional group from your list—you won't have proved anything at all. Well, you might have proved to a small section of your community that it kind of works, but you'll have missed out on a trick. And that trick is: If you choose keen people from across your functional groups, it's much harder to fail.

Say your first attempt goes brilliantly. How are you going to convince other people to replicate your success and adopt DevOps? Well, the company newsletter, of course. And that will convince how many people, exactly? Yes, that number.12 If, on the other hand, you have team members from across the functional parts of the organisation, when you succeed, they'll tell their colleagues and you'll get more buy-in next time.

If it fails, if you've chosen your team wisely—if they're all enthusiastic and know that "fail often, fail fast" is good—they'll be ready to go again.

Therefore, you need to choose enthusiasts from across your functional groups. They can work on the technologies and the process, and once that's working, it's the people who will create that cultural change. You can just sit back and enjoy. Until the next crisis, of course.

1\. OK, you're right. It should be "with which we techies tend to struggle."2

2\. You thought I was going to qualify that bit about techies struggling with touchy-feely stuff, didn't you? Read it again: I put "tend to." That's the best you're getting.

3\. Is percolating a bottom-up process? I don't drink coffee,4 so I wouldn't know.

4\. Do people even use percolators to make coffee anymore? Feel free to let me know in the comments. I may pretend interest if you're lucky.

5\. For U.S. readers (and some other countries, maybe?), please substitute "check" for "tick" here.6

6\. For U.S. techie readers, feel free to perform `s/tick/check/;`.

7\. This is a Status Quo8 reference for which I'm extremely sorry.

8\. For millennial readers, please consult your favourite online reference engine or just roll your eyes and move on.

9\. For people who say, "but I love excitement," try being on call at 2 a.m. on a Sunday at the end of the quarter when your chief financial officer calls you up to ask why all of last month's sales figures have been corrupted with the letters "DEADBEEF."10

10\. For people not in the know, this is a string often used by techies as test data because a) it's non-numerical; b) it's numerical (in hexadecimal); c) it's easy to search for in debug files; and d) it's funny.11

11\. Though see 9.

12\. It's a low number, is all I'm saying.

This article originally appeared on [Alice, Eve, and Bob – a security blog][1] and is republished with permission.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/most-important-issue-devops-transformation

作者:[Mike Bursell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/mikecamel
[1]:https://aliceevebob.com/2018/02/06/moving-to-devops-whats-most-important/

85
sources/talk/20180226 5 keys to building open hardware.md
Normal file
@ -0,0 +1,85 @@

5 keys to building open hardware
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openhardwaretools.png?itok=DC1RC_1f)

The science community is increasingly embracing free and open source hardware ([FOSH][1]). Researchers have been busy [hacking their own equipment][2] and creating hundreds of devices based on the distributed digital manufacturing model to advance their scientific experiments.

A major reason for all this interest in distributed digital manufacturing of scientific FOSH is money: Research indicates that FOSH [slashes costs by 90% to 99%][3] compared to proprietary tools. Commercializing scientific FOSH with [open hardware business models][4] has supported the rapid growth of an engineering subfield to develop FOSH for science, which comes together annually at the [Gathering for Open Science Hardware][5].

Remarkably, not one, but [two new academic journals][6] are devoted to the topic: the [Journal of Open Hardware][7] (from Ubiquity Press, a new open access publisher that also publishes the [Journal of Open Research Software][8]) and [HardwareX][9] (an [open access journal][10] from Elsevier, one of the world's largest academic publishers).

Because of the academic community's support, scientific FOSH developers can get academic credit while having fun designing open hardware and pushing science forward faster.

### 5 steps for scientific FOSH

Shane Oberloier and I co-authored a new [article][11] published in Designs, an open access engineering design journal, about the principles of designing FOSH scientific equipment. We used the example of a slide dryer, fabricated for under $20, which costs up to 300 times less than proprietary equivalents. [Scientific][1] and [medical][12] equipment tends to be complex, with huge payoffs for developing FOSH alternatives.

I've summarized the five steps (including six design principles) that Shane and I detail in our Designs article. These design principles can be generalized to non-scientific devices, although the more complex the design or equipment, the larger the potential savings.

If you are interested in designing open hardware for scientific projects, these steps will maximize your project's impact.

1. Evaluate similar existing tools for their functions, but base your FOSH design on replicating their physical effects, not pre-existing designs. If necessary, evaluate a proof of concept.

2. Use the following design principles:

   * Use only free and open source software toolchains (e.g., open source CAD packages such as [OpenSCAD][13], [FreeCAD][14], or [Blender][15]) and open hardware for device fabrication.
   * Attempt to minimize the number and type of parts and the complexity of the tools.
   * Minimize the amount of material and the cost of production.
   * Maximize the use of components that can be distributed or digitally manufactured by using widespread and accessible tools such as the open source [RepRap 3D printer][16].
   * Create [parametric designs][17] with predesigned components, which enable others to customize your design. By making parametric designs rather than solving a specific case, all future cases can also be solved while enabling future users to alter the core variables to make the device useful for them.
   * All components that are not easily and economically fabricated with existing open hardware equipment in a distributed fashion should be chosen from off-the-shelf parts that are readily available throughout the world.

3. Validate the design for the targeted function(s).

4. Meticulously document the design, manufacture, assembly, calibration, and operation of the device. This should include the raw source of the design, not just the files used for production. The Open Source Hardware Association has extensive [guidelines][18] for properly documenting and releasing open source designs, which can be summarized as follows:

   * Share design files in a universal type.
   * Include a fully detailed bill of materials, including prices and sourcing information.
   * If software is involved, make sure the code is clear and understandable to the general public.
   * Include many photos so that nothing is obscured, and they can be used as a reference while manufacturing.
   * In the methods section, the entire manufacturing process must be detailed to act as instructions for users to replicate the design.
   * Share online and specify a license. This gives users information on what constitutes fair use of the design.

5. Share aggressively! For FOSH to proliferate, designs must be shared widely, frequently, and noticeably to raise awareness of their existence. All documentation should be published in the open access literature and shared with appropriate communities. One nice universal repository to consider is the [Open Science Framework][19], hosted by the Center for Open Science, which is set up to take any type of file and handle large datasets.

This article was supported by [Fulbright Finland][20], which is sponsoring Joshua Pearce's research in open source scientific hardware in Finland as the Fulbright-Aalto University Distinguished Chair.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/5-steps-creating-successful-open-hardware

作者:[Joshua Pearce][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jmpearce
[1]:https://opensource.com/business/16/4/how-calculate-open-source-hardware-return-investment
[2]:https://opensource.com/node/16840
[3]:http://www.appropedia.org/Open-source_Lab
[4]:https://www.academia.edu/32004903/Emerging_Business_Models_for_Open_Source_Hardware
[5]:http://openhardware.science/
[6]:https://opensource.com/life/16/7/hardwarex-open-access-journal
[7]:https://openhardware.metajnl.com/
[8]:https://openresearchsoftware.metajnl.com/
[9]:https://www.journals.elsevier.com/hardwarex
[10]:https://opensource.com/node/30041
[11]:https://www.academia.edu/35603319/General_Design_Procedure_for_Free_and_Open-Source_Hardware_for_Scientific_Equipment
[12]:https://www.academia.edu/35382852/Maximizing_Returns_for_Public_Funding_of_Medical_Research_with_Open_source_Hardware
[13]:http://www.openscad.org/
[14]:https://www.freecadweb.org/
[15]:https://www.blender.org/
[16]:http://reprap.org/
[17]:https://en.wikipedia.org/wiki/Parametric_design
[18]:https://www.oshwa.org/sharing-best-practices/
[19]:https://osf.io/
[20]:http://www.fulbright.fi/en
@ -0,0 +1,73 @@

Emacs #1: Ditching a bunch of stuff and moving to Emacs and org-mode
======

I'll admit it. After over a decade of vim, I'm hooked on [Emacs][1].

I've long had this frustration over how to organize things. I've followed approaches like [GTD][2] and [ZTD][3], but things like email or large files are really hard to organize.

I had been using Asana for tasks, Evernote for notes, Thunderbird for email, a combination of ikiwiki and some other items for a personal knowledge base, and various files in an archive directory on my PC. When my new job added Slack to the mix, that was finally the last straw.

A lot of todo-management tools integrate with email — poorly. When you want to do something like “remind me to reply to this in a week”, a lot of times that's impossible because the tool doesn't store the email in a fashion you can easily reply to. And that problem is even worse with Slack.

It was right around then that I stumbled onto [Carsten Dominik's Google Talk on org-mode][4]. Carsten was the author of org-mode, and although the talk is 10 years old, it is still highly relevant.

I'd stumbled across [org-mode][5] before, but each time I didn't really dig in because I had the reaction of “an outliner? But I need a todo list.” Turns out I was missing out. org-mode is all that.

### Just what IS Emacs? And org-mode?

Emacs grew up as a text editor. It still is, and that heritage is definitely present throughout. But to say Emacs is an editor would be rather unfair.

Emacs is something more like a platform or a toolkit. Not only do you have source code to it, but the very configuration is a program, and there are hooks all over the place. It's as if it was super easy to write a Firefox plugin. A couple of lines, and boom, behavior changed.

org-mode is very similar. Yes, it's an outliner, but that's not really what it is. It's an information organization platform. Its website says “Your life in plain text: Org mode is for keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system.”

### Capturing

If you've ever read productivity guides based on GTD, one of the things they stress is effortless capture of items. The idea is that when something pops into your head, get it down into a trusted system quickly so you can get on with what you were doing. org-mode has a capture system for just this. I can press `C-c c` from anywhere in Emacs, and up pops a spot to type my note. But, critically, automatically embedded in that note is a link back to what I was doing when I pressed `C-c c`. If I was editing a file, it'll have a link back to that file and the line I was on. If I was viewing an email, it'll link back to that email (by Message-Id, no less, so it finds it in any folder). Same for participating in a chat, or even viewing another org-mode entry.

So I can make a note that will remind me in a week to reply to a certain email, and when I click the link in that note, it'll bring up the email in my mail reader — even if I subsequently archived it out of my inbox.

YES, this is what I was looking for!

### The tool suite

Once you're using org-mode, pretty soon you want to integrate everything with it. There are browser plugins for capturing things from the web. Multiple Emacs mail or news readers integrate with it. ERC (IRC client) does as well. So I found myself switching from Thunderbird and mairix+mutt (for the mail archives) to mu4e, and from xchat+slack to ERC.

And wouldn't you know it, I liked each of those Emacs-based tools **better** than the standalone they replaced.

A small side tidbit: I'm using OfflineIMAP again! I even used it with GNUS way back when.

### One Emacs process to rule them

I used to use Emacs extensively, way back. Back then, Emacs was a “large” program. (Now my battery status applet literally uses more RAM than Emacs). There was this problem of startup time back then, so there was a way to connect to a running Emacs process.

I like to spawn programs with Mod-p (an xmonad shortcut to a dzen menubar, but Alt-F2 in more traditional DEs would do the trick). It's convenient to not run several emacsen with this setup, so you don't run into issues with trying to capture to a file that's open in another one. The solution is very simple: I created a script, named it `em`, and put it on my path. All it does is this:

```
#!/bin/bash
exec emacsclient -c -a "" "$@"
```

It creates a new emacs process if one doesn't already exist; otherwise, it uses what you've got. A bonus here: parameters such as `-nw` work just fine, so it really acts just as if you'd typed `emacs` at the shell prompt. It's a suitable setting for `EDITOR`.
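
As a hedged sketch of how that wrapper might be wired in (the `em` name is from this setup, and the org file path below is purely a hypothetical example), the shell side could look like:

```
# Make git, crontab, and friends open files in the shared Emacs
# process, in the terminal rather than in a new graphical frame
export EDITOR="em -nw"

# Open a file in the running Emacs, starting one if needed
em -nw ~/org/inbox.org
```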

### Up next…

I'll be talking about my use of, and showing off configurations for:

  * org-mode, including syncing between computers, capturing, agenda and todos, files, linking, keywords and tags, various exporting (slideshows), etc.
  * mu4e for email, including multiple accounts, bbdb integration
  * ERC for IRC and IM

--------------------------------------------------------------------------------

via: http://changelog.complete.org/archives/9861-emacs-1-ditching-a-bunch-of-stuff-and-moving-to-emacs-and-org-mode

作者:[John Goerzen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://changelog.complete.org/archives/author/jgoerzen
[1]:https://www.gnu.org/software/emacs/
[2]:https://gettingthingsdone.com/
[3]:https://zenhabits.net/zen-to-done-the-simple-productivity-e-book/
[4]:https://www.youtube.com/watch?v=oJTwQvgfgMM
[5]:https://orgmode.org/
@ -1,3 +1,4 @@

Translating by qhwdw

How To Create sar Graphs With kSar To Identifying Linux Bottlenecks
======

The sar command collects, reports, or saves UNIX / Linux system activity information. It will save selected counters in the operating system to the /var/log/sa/sadd file. From the collected data, you get lots of information about your server:
@ -1,193 +0,0 @@

Translating by qhwdw

Choosing a Linux Tracer (2015)
======

[![][1]][2]

_Linux Tracing is Magic!_

A tracer is an advanced performance analysis and troubleshooting tool, but don't let that intimidate you... If you've used strace(1) or tcpdump(8) - you've used a tracer. System tracers can see much more than just syscalls or packets, as they can typically trace any kernel or application software.

There are so many Linux tracers that the choice is overwhelming. As each has an official (or unofficial) pony-corn mascot, we have enough for a kids' show.

Which tracer should you use?

I've answered this question for two audiences: for most people, and for performance/kernel engineers. This will also change over time, so I'll need to post follow-ups, maybe once a year or so.

## For Most People

Most people (developers, sysadmins, devops, SREs, ...) are not going to learn a system tracer in gory detail. Here's what you most likely need to know and do about tracers:

### 1. Use perf_events for CPU profiling

Use perf_events to do CPU profiling. The profile can be visualized as a [flame graph][3]. Eg:

```
git clone --depth 1 https://github.com/brendangregg/FlameGraph
perf record -F 99 -a -g -- sleep 30
perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > perf.svg
```

Linux perf_events (aka "perf", after its command) is the official tracer/profiler for Linux users. It is in the kernel source, and is well maintained (and currently rapidly being enhanced). It's usually added via a linux-tools-common package.

perf can do many things, but if I had to recommend you learn just one, it would be CPU profiling, even though this is not technically "tracing" of events, as it's sampling. The hardest part is getting full stacks and symbols to work, which I covered in my [Linux Profiling at Netflix][4] talk for Java and Node.js.

### 2. Know what else is possible

As a friend once said: "You don't need to know how to operate an X-ray machine, but you _do_ need to know that if you swallow a penny, an X-ray is an option!" You need to know what is possible with tracers, so that if your business really needs it, you can either learn how to do it later, or hire someone who does.

In a nutshell: performance of virtually anything can be understood with tracing. File system internals, TCP/IP processing, device drivers, application internals. Read my lwn.net [article on ftrace][5], and browse my [perf_events page][6], as examples of some tracing (and profiling) capabilities.

### 3. Ask for front ends

If you are paying for performance analysis tools (and there are many companies that sell them), ask for Linux tracing support. Imagine an intuitive point-and-click interface that can expose kernel internals, including latency heatmaps at different stack locations. I described such an interface in my [Monitorama talk][7].

I've created and open sourced some front ends myself, although for the CLI (not GUIs). These also allow people to benefit from the tracers more quickly and easily. Eg, from my [perf-tools][8], tracing new processes:

```
# ./execsnoop
Tracing exec()s. Ctrl-C to end.
PID PPID ARGS
22898 22004 man ls
22905 22898 preconv -e UTF-8
22908 22898 pager -s
22907 22898 nroff -mandoc -rLL=164n -rLT=164n -Tutf8
[...]
```

At Netflix, we're creating [Vector][9], an instance analysis tool that should also eventually front Linux tracers.

## For Performance or Kernel Engineers

Our job is much harder, since most people may be asking us to figure out how to trace something, and therefore which tracer to use. To properly understand a tracer, you usually need to spend at least one hundred hours with it. Understanding all the Linux tracers to make a rational decision between them is a huge undertaking. (I may be the only person who has come close to doing this.)

Here's what I'd recommend. Either:

A) Pick one all-powerful tracer, and standardize on that. This will involve a lot of time figuring out its nuances and safety in a test environment. I'd currently recommend the latest version of SystemTap (ie, build from [source][10]). I know of companies that have picked LTTng, and are happy with it, although it's not quite as powerful (although, it is safer). If sysdig adds tracepoints or kprobes, it could be another candidate.

B) Follow the above flow chart from my [Velocity tutorial][11]. It will mean using ftrace or perf_events as much as possible, eBPF as it gets integrated, and then other tracers like SystemTap/LTTng to fill in the gaps. This is what I do in my current job at Netflix.

Comments by tracer:

### 1. ftrace

I love [Ftrace][12], it's a kernel hacker's best friend. It's built into the kernel, and can consume tracepoints, kprobes, and uprobes, and provides a few capabilities: event tracing, with optional filters and arguments; event counting and timing, summarized in-kernel; and function-flow walking. See [ftrace.txt][13] from the kernel source for examples. It's controlled via /sys, and is intended for a single root user (although you could hack multi-user support using buffer instances). Its interface can be fiddly at times, but it's quite hackable, and there are front ends: Steven Rostedt, the main ftrace author, has created trace-cmd, and I've created the perf-tools collection. My biggest gripe is that it isn't programmable, so you can't, for example, save and fetch timestamps, calculate latency, and then store it as a histogram. You'll need to dump events to user-level, and post-process, at some cost. It may become programmable via eBPF.
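
To make the /sys control interface described above concrete, here is a minimal, hedged sketch of driving the ftrace function tracer by hand (run as root; the tracing mount point and the traced function name vary by distribution and kernel version):

```
# Trace calls to do_sys_open for one second, then print a sample
cd /sys/kernel/debug/tracing
echo do_sys_open > set_ftrace_filter   # limit tracing to one function
echo function > current_tracer         # enable the function tracer
sleep 1
head -20 trace                         # show the first buffered events
echo nop > current_tracer              # switch tracing back off
```
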
### 2. perf_events

[perf_events][14] is the main tracing tool for Linux users, its source is in the Linux kernel, and is usually added via a linux-tools-common package. Aka "perf", after its front end, which is typically used to trace & dump to a file (perf.data), which it does relatively efficiently (dynamic buffering), and then post-processes that later. It can do most of what ftrace can. It can't do function-flow walking, and is a bit less hackable (as it has better safety/error checking). But it can do profiling (sampling), CPU performance counters, user-level stack translation, and can consume debuginfo for line tracing with local variables. It also supports multiple concurrent users. As with ftrace, it isn't kernel programmable yet, until perhaps eBPF support (patches have been proposed). If there's one tracer I'd recommend people learn, it'd be perf, as it can solve a ton of issues, and is relatively safe.
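
As a small illustrative sketch of the trace-and-count workflow described above (standard perf invocations, though the exact tracepoint names available depend on your kernel):

```
# Count block I/O issue events system-wide for 10 seconds
perf stat -e block:block_rq_issue -a sleep 10

# Or record them with stacks for later post-processing
perf record -e block:block_rq_issue -a -g -- sleep 10
perf report
```
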
### 3. eBPF

The extended Berkeley Packet Filter is an in-kernel virtual machine that can run programs on events, efficiently (JIT). It's likely to eventually provide in-kernel programming for ftrace and perf_events, and to enhance other tracers. It's currently being developed by Alexei Starovoitov, and isn't fully integrated yet, but there's enough in-kernel (as of 4.1) for some impressive tools: eg, latency heat maps of block device I/O. For reference, see the [BPF slides][15] from Alexei, and his [eBPF samples][16].

### 4. SystemTap

[SystemTap][17] is the most powerful tracer. It can do everything: profiling, tracepoints, kprobes, uprobes (which came from SystemTap), USDT, in-kernel programming, etc. It compiles programs into kernel modules and loads them - an approach which is tricky to get safe. It is also developed out of tree, and has had issues in the past (panics or freezes). Many are not SystemTap's fault - it's often the first to use certain tracing capabilities with the kernel, and the first to run into bugs. The latest version of SystemTap is much better (you must compile from source), but many people are still spooked from earlier versions. If you want to use it, spend time in a test environment, and chat to the developers in #systemtap on irc.freenode.net. (Netflix has a fault-tolerant architecture, and we have used SystemTap, but we may be less concerned about safety than you.) My biggest gripe is that it seems to assume you'll have kernel debuginfo, which I don't usually have. It actually can do a lot without it, but documentation and examples are lacking (I've begun to help with that myself).

### 5. LTTng

[LTTng][18] has optimized event collection, which outperforms other tracers, and also supports numerous event types, including USDT. It is developed out of tree. The core of it is very simple: write events to a tracing buffer, via a small and fixed set of instructions. This helps make it safe and fast. The downside is that there's no easy way to do in-kernel programming. I keep hearing that this is not a big problem, since it's so optimized that it can scale sufficiently despite needing post processing. It also has been pioneering a different analysis technique, more of a black box recording of all interesting events that can be studied in GUIs later. I'm concerned about such a recording missing events I didn't have the foresight to record, but I really need to spend more time with it to see how well it works in practice. It's the tracer I've spent the least time with (no particular reason).

### 6. ktap

[ktap][19] was a really promising tracer, which used an in-kernel lua virtual machine for processing, and worked fine without debuginfo and on embedded devices. It made it into staging, and for a moment looked like it would win the trace race on Linux. Then eBPF began kernel integration, and ktap integration was postponed until it could use eBPF instead of its own VM. Since eBPF is still integrating many months later, the ktap developers have been waiting a long time. I hope it restarts development later this year.

### 7. dtrace4linux

[dtrace4linux][20] is mostly one man's part-time effort (Paul Fox) to port Sun DTrace to Linux. It's impressive, and some providers work, but it's some ways from complete, and is more of an experimental tool (unsafe). I think concern over licensing has left people wary of contributing: it will likely never make it into the Linux kernel, as Sun released DTrace under the CDDL license; Paul's approach to this is to make it an add-on. I'd love to see DTrace on Linux and this project finished, and thought I'd spend time helping it finish when I joined Netflix. However, I've been spending time using the built-in tracers, ftrace and perf_events, instead.

### 8. OL DTrace

[Oracle Linux DTrace][21] is a serious effort to bring DTrace to Linux, specifically Oracle Linux. Various releases over the years have shown steady progress. The developers have even spoken about improving the DTrace test suite, which shows a promising attitude to the project. Many useful providers have already been completed: syscall, profile, sdt, proc, sched, and USDT. I'm still waiting for fbt (function boundary tracing, for kernel dynamic tracing), which will be awesome on the Linux kernel. Its ultimate success will hinge on whether it's enough to tempt people to run Oracle Linux (and pay for support). Another catch is that it may not be entirely open source: the kernel components are, but I've yet to see the user-level code.

### 9. sysdig

[sysdig][22] is a new tracer that can operate on syscall events with tcpdump-like syntax, and lua post processing. It's impressive, and it's great to see innovation in the system tracing space. Its limitations are that it is syscalls only at the moment, and, that it dumps all events to user-level for post processing. You can do a lot with syscalls, although I'd like to see it support tracepoints, kprobes, and uprobes. I'd also like to see it support eBPF, for in-kernel summaries. The sysdig developers are currently adding container support. Watch this space.

## Further Reading

My own work with the tracers includes:

**ftrace** : My [perf-tools][8] collection (see the examples directory); my lwn.net [article on ftrace][5]; a [LISA14][8] talk; and the posts: [function counting][23], [iosnoop][24], [opensnoop][25], [execsnoop][26], [TCP retransmits][27], [uprobes][28], and [USDT][29].

**perf_events** : My [perf_events Examples][6] page; a [Linux Profiling at Netflix][4] talk for SCALE; and the posts [CPU Sampling][30], [Static Tracepoints][31], [Heat Maps][32], [Counting][33], [Kernel Line Tracing][34], [off-CPU Time Flame Graphs][35].

**eBPF** : The post [eBPF: One Small Step][36], and some [BPF-tools][37] (I need to publish more).

**SystemTap** : I wrote a [Using SystemTap][38] post a long time ago, which is somewhat out of date. More recently I published some [systemtap-lwtools][39], showing how SystemTap can be used without kernel debuginfo.

**LTTng** : I've used it a little, but not enough yet to publish anything.

**ktap** : My [ktap Examples][40] page includes one-liners and scripts, although these were for an earlier version.

**dtrace4linux** : I included some examples in my [Systems Performance book][41], and I've developed some small fixes for things in the past, eg, [timestamps][42].

**OL DTrace** : As this is a straight port of DTrace, much of my earlier DTrace work should be relevant (too many links to list here; search on [my homepage][43]). I may develop some specific tools once this is more complete.

**sysdig** : I contributed the [fileslower][44] and [subsecond offset spectrogram][45] chisels.

**others** : I did write a warning post about [strace][46].

Please, no more tracers! ... If you're wondering why Linux doesn't just have one, or DTrace itself, I answered these in my [From DTrace to Linux][47] talk, starting on [slide 28][48].

Thanks to [Deirdre Straughan][49] for edits, and for creating the tracing ponies (with General Zoi's pony creator).

--------------------------------------------------------------------------------

via: http://www.brendangregg.com/blog/2015-07-08/choosing-a-linux-tracer.html

作者:[Brendan Gregg][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.brendangregg.com
[1]:http://www.brendangregg.com/blog/images/2015/tracing_ponies.png
[2]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools/105
[3]:http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html
[4]:http://www.brendangregg.com/blog/2015-02-27/linux-profiling-at-netflix.html
[5]:http://lwn.net/Articles/608497/
[6]:http://www.brendangregg.com/perf.html
[7]:http://www.brendangregg.com/blog/2015-06-23/netflix-instance-analysis-requirements.html
[8]:http://www.brendangregg.com/blog/2015-03-17/linux-performance-analysis-perf-tools.html
[9]:http://techblog.netflix.com/2015/04/introducing-vector-netflixs-on-host.html
[10]:https://sourceware.org/git/?p=systemtap.git;a=blob_plain;f=README;hb=HEAD
[11]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools
[12]:http://lwn.net/Articles/370423/
[13]:https://www.kernel.org/doc/Documentation/trace/ftrace.txt
[14]:https://perf.wiki.kernel.org/index.php/Main_Page
[15]:http://www.phoronix.com/scan.php?page=news_item&px=BPF-Understanding-Kernel-VM
[16]:https://github.com/torvalds/linux/tree/master/samples/bpf
[17]:https://sourceware.org/systemtap/wiki
[18]:http://lttng.org/
[19]:http://ktap.org/
[20]:https://github.com/dtrace4linux/linux
[21]:http://docs.oracle.com/cd/E37670_01/E38608/html/index.html
[22]:http://www.sysdig.org/
[23]:http://www.brendangregg.com/blog/2014-07-13/linux-ftrace-function-counting.html
[24]:http://www.brendangregg.com/blog/2014-07-16/iosnoop-for-linux.html
[25]:http://www.brendangregg.com/blog/2014-07-25/opensnoop-for-linux.html
[26]:http://www.brendangregg.com/blog/2014-07-28/execsnoop-for-linux.html
[27]:http://www.brendangregg.com/blog/2014-09-06/linux-ftrace-tcp-retransmit-tracing.html
[28]:http://www.brendangregg.com/blog/2015-06-28/linux-ftrace-uprobe.html
[29]:http://www.brendangregg.com/blog/2015-07-03/hacking-linux-usdt-ftrace.html
[30]:http://www.brendangregg.com/blog/2014-06-22/perf-cpu-sample.html
[31]:http://www.brendangregg.com/blog/2014-06-29/perf-static-tracepoints.html
[32]:http://www.brendangregg.com/blog/2014-07-01/perf-heat-maps.html
[33]:http://www.brendangregg.com/blog/2014-07-03/perf-counting.html
[34]:http://www.brendangregg.com/blog/2014-09-11/perf-kernel-line-tracing.html
[35]:http://www.brendangregg.com/blog/2015-02-26/linux-perf-off-cpu-flame-graph.html
[36]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html
[37]:https://github.com/brendangregg/BPF-tools
[38]:http://dtrace.org/blogs/brendan/2011/10/15/using-systemtap/
[39]:https://github.com/brendangregg/systemtap-lwtools
[40]:http://www.brendangregg.com/ktap.html
[41]:http://www.brendangregg.com/sysperfbook.html
[42]:https://github.com/dtrace4linux/linux/issues/55
[43]:http://www.brendangregg.com
[44]:https://github.com/brendangregg/sysdig/commit/d0eeac1a32d6749dab24d1dc3fffb2ef0f9d7151
[45]:https://github.com/brendangregg/sysdig/commit/2f21604dce0b561407accb9dba869aa19c365952
[46]:http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-syscall.html
[47]:http://www.brendangregg.com/blog/2015-02-28/from-dtrace-to-linux.html
[48]:http://www.slideshare.net/brendangregg/from-dtrace-to-linux/28
[49]:http://www.beginningwithi.com/
@ -1,54 +0,0 @@
Process Monitoring
======

Since forking the Mon project to [etbemon][1] I've been spending a lot of time working on the monitor scripts. Actually monitoring something is usually quite easy; deciding what to monitor tends to be the hard part. The process monitoring script ps.monitor is the one I'm about to redesign.

Here are some of my ideas for monitoring processes. Please comment if you have any suggestions for how to do things better.

For people who don't use mon, the monitor scripts return 0 if everything is OK and 1 if there's a problem, along with using stdout to display an error message. While I'm not aware of anyone hooking mon scripts into a different monitoring system, that would be easy to do. One thing I plan to work on in the future is interoperability between mon and other systems such as Nagios.

### Basic Monitoring
```
ps.monitor tor:1-1 master:1-2 auditd:1-1 cron:1-5 rsyslogd:1-1 dbus-daemon:1- sshd:1- watchdog:1-2
```

I'm currently planning some sort of rewrite of the process monitoring script. The current functionality is to have a list of process names on the command line with minimum and maximum numbers for the instances of the process in question. The above is a sample of the configuration of the monitor. There are some limitations to this: the "master" process in this instance refers to the main process of Postfix, but other daemons use the same process name (it's one of those names that's wrong because it's so obvious). One obvious solution to this is to give the option of specifying the full path so that /usr/lib/postfix/sbin/master can be differentiated from all the other programs named master.
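
One way that could look is sketched below; the full-path form is hypothetical syntax for a redesigned ps.monitor, not something the current script accepts:
```
# current form: match by process name (1 to 2 instances allowed)
ps.monitor master:1-2
# hypothetical future form: match by full path to disambiguate Postfix's master
ps.monitor /usr/lib/postfix/sbin/master:1-2
```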

The next issue is processes that may run on behalf of multiple users. With sshd there is a single process to accept new connections running as root, and a process running under the UID of each logged-in user. So the number of sshd processes running as root will be one greater than the number of root login sessions. This means that if a sysadmin logs in directly as root via ssh (which is controversial and not the topic of this post - merely something that people do which I have to support) and the master process then crashes (or the sysadmin stops it either accidentally or deliberately), there won't be an alert about the missing process. Of course the correct thing to do is to have a monitor talk to port 22 and look for the string "SSH-2.0-OpenSSH_". Sometimes there are multiple instances of a daemon running under different UIDs that need to be monitored separately, so obviously we need the ability to monitor processes by UID.

In many cases process monitoring can be replaced by monitoring of service ports. So if something is listening on port 25 then it probably means that the Postfix "master" process is running, regardless of what other "master" processes there are. But for my use I find it handy to have multiple monitors: if I get a Jabber message about being unable to send mail to a server immediately followed by a Jabber message from that server saying that "master" isn't running, I don't need to fully wake up to know where the problem is.

### SE Linux

One feature that I want is monitoring SE Linux contexts of processes in the same way as monitoring UIDs. While I'm not interested in writing tests for other security systems, I would be happy to include code that other people write. So whatever I do, I want to make it flexible enough to work with multiple security systems.

### Transient Processes

Most daemons have a second process of the same name running during the startup process. This means if you monitor for exactly 1 instance of a process you may get an alert about 2 processes running when "logrotate" or something similar restarts the daemon. Also you may get an alert about 0 instances if the check happens to run at exactly the wrong time during the restart. My current way of dealing with this on my servers is to not alert until the second failure event, with the "alertafter 2" directive. The "failure_interval" directive allows specifying the time between checks when the monitor is in a failed state; setting that to a low value means that waiting for a second failure result doesn't delay the notification much.
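
For readers who don't run mon, a rough sketch of where those two directives sit in a mon configuration; the watch and service names, intervals, and alert target are placeholders, and exact directive placement may differ between mon versions:
```
# excerpt from a mon configuration (sketch; names and values are placeholders)
watch servers
    service processes
        interval 2m
        monitor ps.monitor tor:1-1 master:1-2
        # time between checks while the monitor is in a failed state
        failure_interval 30s
        period wd {Sun-Sat}
            # only alert on the second consecutive failure
            alertafter 2
            alert mail.alert admin@example.com
```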

To deal with this I've been thinking of making the ps.monitor script automatically check again after a specified delay. I think that solving the problem with a single parameter to the monitor script is better than using 2 configuration directives to mon to work around it.

### CPU Use

Mon currently has a loadavg.monitor script that checks the load average. But that won't catch the case of a single process using too much CPU time but not enough to raise the system load average. Also it won't catch the case of a CPU-hungry process going quiet (e.g. when the SETI@home server goes down) while another process goes into an infinite loop. One way of addressing this would be to have the ps.monitor script take yet another configuration option to monitor CPU use, but this might get confusing. Another option would be to have a separate script that alerts on any process that uses more than a specified percentage of CPU time over its lifetime or over the last few seconds, unless it's in a whitelist of processes and users who are exempt from such checks. Probably every regular user would be exempt from such checks, because you never know when they will run a file compression program. Also there is a short list of daemons that are excluded (like BOINC) and system processes (like gzip, which is run from several cron jobs).
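
As a starting point, ps already reports lifetime CPU use as a percentage, so a one-off check is easy to sketch (the 50% threshold is just an example):
```
# list processes whose lifetime CPU usage exceeds 50% (threshold is an example)
ps -eo pcpu,user,pid,comm --sort=-pcpu | awk 'NR > 1 && $1 > 50'
```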

### Monitoring for Exclusion

A common programming mistake is to call setuid() before setgid(), which means that the program doesn't have permission to call setgid(). If return codes aren't checked (and people who make such rookie mistakes tend not to check return codes) then the process keeps elevated permissions. Checking for processes running as GID 0 but not UID 0 would be handy. As an aside, a quick examination of a Debian/Testing workstation didn't show any obvious way that a process with GID 0 could gain elevated privileges, but that could change with one chmod 770 command.
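
Such a check is easy to prototype with ps; a minimal sketch:
```
# flag processes running with group root (GID 0) but not as root (UID 0)
ps -eo uid,gid,pid,comm | awk 'NR > 1 && $1 != 0 && $2 == 0'
```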

On a SE Linux system there should be only one process running with the domain init_t. Currently that doesn't happen on Stretch systems running daemons such as mysqld and tor, due to policy not matching the recent functionality of systemd as requested by daemon service files. Such issues will keep occurring, so we need automated tests for them.

Automated tests for configuration errors that might impact system security are a bigger issue; I'll probably write a separate blog post about that.

--------------------------------------------------------------------------------

via: https://etbe.coker.com.au/2017/09/28/process-monitoring/

作者:[Andrew][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://etbe.coker.com.au
[1]:https://doc.coker.com.au/projects/etbe-mon/
@ -1,88 +0,0 @@
What’s next in DevOps: 5 trends to watch
======

![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Magnifying%20Glass%20Code.png?itok=IqZsJCEH)

The term "DevOps" is typically credited [to this 2008 presentation][1] on agile infrastructure and operations. Now ubiquitous in IT vocabulary, the mashup word is less than 10 years old: We're still figuring out this modern way of working in IT.

Sure, people who have been "doing DevOps" for years have accrued plenty of wisdom along the way. But most DevOps environments - and the mix of people and [culture][2], process and methodology, and tools and technology - are far from mature.

More change is coming. That's kind of the whole point. "DevOps is a process, an algorithm," says Robert Reeves, CTO at [Datical][3]. "Its entire purpose is to change and evolve over time."

What should we expect next? Here are some key trends to watch, according to DevOps experts.

### 1. Expect increasing interdependence between DevOps, containers, and microservices

The forces driving the proliferation of DevOps culture themselves may evolve. Sure, DevOps will still fundamentally knock down traditional IT silos and bottlenecks, but the reasons for doing so may become more urgent. Exhibits A & B: Growing interest in and [adoption of containers and microservices][4]. The technologies pack a powerful, scalable one-two punch, best paired with planned, [ongoing management][5].

"One of the major factors impacting DevOps is the shift towards microservices," says Arvind Soni, VP of product at [Netsil][6], adding that containers and orchestration are enabling developers to package and deliver services at an ever-increasing pace. DevOps teams will likely be tasked with helping to fuel that pace and to manage the ongoing complexity of a scalable microservices architecture.

### 2. Expect fewer safety nets

DevOps enables teams to build software with greater speed and agility, deploying faster and more frequently, while improving quality and stability. But good IT leaders don't typically ignore risk management, so plenty of early DevOps iterations began with safeguards and fallback positions in place. To get to the next level of speed and agility, more teams will take off their training wheels.

"As teams mature, they may decide that some of the guard rails that were added early on may not be required anymore," says Nic Grange, CTO of [Retriever Communications][7]. Grange gives the example of a staging server: As DevOps teams mature, they may decide it's no longer necessary, especially if they're rarely catching issues in that pre-production environment. (Grange points out that this move isn't advisable for inexperienced teams.)

"The team may be at a point where it is confident enough with its monitoring and ability to identify and resolve issues in production," Grange says. "The process of deploying and testing in staging may just be slowing them down without any demonstrable value."

### 3. Expect DevOps to spread elsewhere

DevOps brings two traditional IT groups, development and operations, into much closer alignment. As more companies see the benefits in the trenches, the culture is likely to spread. It's already happening in some organizations, evident in the increasing appearance of the term "DevSecOps," which reflects the intentional and much earlier inclusion of security in the software development lifecycle.

"DevSecOps is not only tools, it is integrating a security mindset into development practices early on," says Derek Weeks, VP and DevOps advocate at [Sonatype][8].

Doing that isn't a technology challenge, it's a cultural challenge, says [Red Hat][9] security strategist Kirsten Newcomer.

"Security teams have historically been isolated from development teams - and each team has developed deep expertise in different areas of IT," Newcomer says. "It doesn't need to be this way. Enterprises that care deeply about security and also care deeply about their ability to quickly deliver business value through software are finding ways to move security left in their application development lifecycles. They're adopting DevSecOps by integrating security practices, tooling, and automation throughout the CI/CD pipeline. To do this well, they're integrating their teams - security professionals are embedded with application development teams from inception (design) through to production deployment. Both sides are seeing the value - each team expands their skill sets and knowledge base, making them more valuable technologists. DevOps done right - or DevSecOps - improves IT security."

Beyond security, look for DevOps expansion into areas such as database teams, QA, and even potentially outside of IT altogether.

"This is a very DevOps thing to do: Identify areas of friction and resolve them," Datical's Reeves says. "Security and databases are currently the big bottlenecks for companies that have previously adopted DevOps."

### 4. Expect ROI to increase

As companies get deeper into their DevOps work, IT teams will be able to show greater return on investment in methodologies, processes, containers, and microservices, says Eric Schabell, global technology evangelist director, Red Hat. "The Holy Grail was to be moving faster, accomplishing more and becoming flexible. As these components find broader adoption and organizations become more vested in their application the results shall appear," Schabell says.

"Everything has a learning curve with a peak of excitement as the emerging technologies gain our attention, but also go through a trough of disillusionment when the realization hits that applying it all is hard. Finally, we'll start to see a climb out of the trough and reap the benefits that we've been chasing with DevOps, containers, and microservices."

### 5. Expect success metrics to keep evolving

"I believe that two of the core tenets of the DevOps culture, automation and measurement, are never 'done,'" says Mike Kail, CTO at [CYBRIC][10] and former CIO at Yahoo. "There will always be opportunities to automate a task or improve upon an already automated solution, and what is important to measure will likely change and expand over time. This maturation process is a continuous journey, not a destination or completed task."

In the spirit of DevOps, that maturation and learning will also depend on collaboration and sharing. Kail thinks it's still very much early days for Agile methodologies and DevOps culture, and that means plenty of room for growth.

"As more mature organizations continue to measure actionable metrics, I believe - [I] hope - that those learnings will be broadly shared so we can all learn and improve from them," Kail says.

As Red Hat technology evangelist [Gordon Haff][11] recently noted, organizations working hard to improve their DevOps metrics are using factors tied to business outcomes. "You probably don't really care about how many lines of code your developers write, whether a server had a hardware failure overnight, or how comprehensive your test coverage is," [writes Haff][12]. "In fact, you may not even directly care about the responsiveness of your website or the rapidity of your updates. But you do care to the degree such metrics can be correlated with customers abandoning shopping carts or leaving for a competitor."

Some examples of DevOps metrics tied to business outcomes include customer ticket volume (as an indicator of overall customer satisfaction) and Net Promoter Score (the willingness of customers to recommend a company's products or services). For more on this topic, see his full article, [DevOps metrics: Are you measuring what matters?][12]

### No rest for the speedy

By the way, if you were hoping things would get a little more leisurely anytime soon, you're out of luck.

"If you think releases are fast today, you ain't seen nothing yet," Reeves says. "That's why bringing all stakeholders, including security and database teams, into the DevOps tent is so crucial. The friction caused by these two groups today will only grow as releases increase exponentially."

--------------------------------------------------------------------------------

via: https://enterprisersproject.com/article/2017/10/what-s-next-devops-5-trends-watch

作者:[Kevin Casey][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://enterprisersproject.com/user/kevin-casey
[1]:http://www.jedi.be/presentations/agile-infrastructure-agile-2008.pdf
[2]:https://enterprisersproject.com/article/2017/9/5-ways-nurture-devops-culture
[3]:https://www.datical.com/
[4]:https://enterprisersproject.com/article/2017/9/microservices-and-containers-6-things-know-start-time
[5]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul
[6]:https://netsil.com/
[7]:http://retrievercommunications.com/
[8]:https://www.sonatype.com/
[9]:https://www.redhat.com/en/
[10]:https://www.cybric.io/
[11]:https://enterprisersproject.com/user/gordon-haff
[12]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters
@ -1,114 +0,0 @@
amwps290 translating
Make “rm” Command To Move The Files To “Trash Can” Instead Of Removing Them Completely
======
Humans make mistakes because we are not programmed devices, so take additional care while using the `rm` command and never run `rm -rf *`. When you use the `rm` command it deletes files permanently; it does not move them to a `Trash Can` the way a file manager does.

Sometimes we delete files by mistake and sometimes it happens accidentally, so what to do when that happens? You can look at recovery tools (there are plenty of data recovery tools available for Linux), but there is no guarantee of a 100% recovery, so how do we avoid the situation in the first place?

We recently published an article about [Trash-Cli][1]; in the comment section we got an update about the [saferm.sh][2] script from a user called Eemil Lgz, which helps us move files to a "Trash Can" instead of deleting them permanently.

Moving files to a "Trash Can" is a good idea, as it saves you when you run the `rm` command accidentally. Some would call it a bad habit, though: if you don't look after the "Trash Can", it may accumulate files and folders over time. In that case I would advise you to create a cronjob to empty it on whatever schedule suits you.

The script works in both server and desktop environments. If it detects a **GNOME, KDE, Unity, or LXDE** desktop environment (DE), it moves files or folders safely to the default trash at **$HOME/.local/share/Trash/files**; otherwise it creates a trash folder in your home directory at **$HOME/Trash**.

The saferm.sh script is hosted on GitHub; either clone the repository below or create a file called saferm.sh and paste the code into it.
```
$ git clone https://github.com/lagerspetz/linux-stuff
$ sudo mv linux-stuff/scripts/saferm.sh /bin
$ rm -Rf linux-stuff
```

Create an alias in your `bashrc` file.
```
alias rm=saferm.sh
```

For the change to take effect, run the following command.
```
$ source ~/.bashrc
```
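
Note that an alias only affects interactive shells that read your `.bashrc`; scripts and cron jobs still get the real `rm`. If you ever need to bypass the alias deliberately, two standard shell mechanisms do that:
```
$ \rm unwanted-file          # the backslash skips alias expansion and runs the real rm
$ command rm unwanted-file   # the "command" builtin does the same
```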

That's it. Everything is done; now the `rm` command will automatically move files to the "Trash Can" instead of deleting them permanently.

For testing purposes, let's delete a file called `magi.txt`; the output clearly says `Moving magi.txt to /home/magi/.local/share/Trash/files`.
```
$ rm -rf magi.txt
Moving magi.txt to /home/magi/.local/share/Trash/files
```

The same can be validated through the `ls` command or the `trash-cli` utility.
```
$ ls -lh /home/magi/.local/share/Trash/files
Permissions Size User Date Modified Name
.rw-r--r-- 32 magi 11 Oct 16:24 magi.txt
```

Alternatively, we can check the same in a GUI through the file manager.
[![][3]![][3]][4]

Create a cronjob to empty the "Trash Can" once a week. The entry below goes in your crontab (edit it with `crontab -e`) and runs at 01:01 every Sunday.
```
1 1 * * 0 trash-empty
```

`Note:` In a server environment, we need to empty the trash manually using the `rm` command.
```
$ rm -rf /root/Trash/
/root/Trash/magi1.txt is on . Unsafe delete (y/n)? y
Deleting /root/Trash/magi1.txt
```

For desktop environments, the same can be achieved with the `trash-put` command.

Create an alias in your `bashrc` file.
```
alias rm=trash-put
```

For the change to take effect, run the following command.
```
$ source ~/.bashrc
```

To learn about other options for saferm.sh, see its help output.
```
$ saferm.sh -h
This is saferm.sh 1.16. LXDE and Gnome3 detection.
Will ask to unsafe-delete instead of cross-fs move. Allows unsafe (regular rm) delete (ignores trashinfo).
Creates trash and trashinfo directories if they do not exist. Handles symbolic link deletion.
Does not complain about different user any more.

Usage: /path/to/saferm.sh [OPTIONS] [--] files and dirs to safely remove
OPTIONS:
-r allows recursively removing directories.
-f Allow deleting special files (devices, ...).
-u Unsafe mode, bypass trash and delete files permanently.
-v Verbose, prints more messages. Default in this version.
-q Quiet mode. Opposite of verbose.
```
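
For example, removing a whole directory tree through the trash uses the `-r` option from the help above:
```
$ saferm.sh -r old-project/
```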

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/rm-command-to-move-files-to-trash-can-rm-alias/

作者:[2DAYGEEK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/2daygeek/
[1]:https://www.2daygeek.com/trash-cli-command-line-trashcan-linux-system/
[2]:https://github.com/lagerspetz/linux-stuff/blob/master/scripts/saferm.sh
[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[4]:https://www.2daygeek.com/wp-content/uploads/2017/10/rm-command-to-move-files-to-trash-can-rm-alias-1.png
@ -1,3 +1,5 @@
Translating by MjSeven

What Are the Hidden Files in my Linux Home Directory For?
======

@ -1,3 +1,5 @@
translating---geekpi

How to configure login banners in Linux (RedHat, Ubuntu, CentOS, Fedora)
======
Learn how to create login banners in Linux to display warning or informational messages to users before they log in or after they have logged in.

@ -1,3 +1,5 @@
translating---geekpi

Record and Share Terminal Session with Showterm
======

@ -1,71 +0,0 @@
How DevOps eliminated bottlenecks for Ranger community
======
![Image](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/traffic-light-go.png?itok=nC_851ys)
The Visual Studio Application Lifecycle Management (ALM) [Ranger][1] program is a community of volunteers that gives professional guidance, practical experience, and gap-filling solutions to the developer community. It was created in 2006 as an internal Microsoft community to "connect the product group with the field and remove adoption blockers." By 2009, the community had more than 200 members, which led to collaboration and planning challenges, bottlenecks due to dependencies and manual processes, and increasing delays and dissatisfaction within the developer community. In 2010, the program evolved to include Microsoft Most Valued Professionals (MVPs), expanding into a geographically distributed community that spans the globe.

The community is divided into roughly a dozen active teams. Each team is committed to design, build, and support one guidance or tooling project through its lifetime. In the past, teams typically bottlenecked at the team management level due to a rigid, waterfall-style process and high dependency on one or more program managers. The program managers intervened in decision making, releases, and driving the "why, what, and how" for projects. Also, a lack of real-time metrics prevented teams from effectively monitoring their solutions, and alerts about bugs and issues typically came from the community.

It was time to find a better way of doing things and delivering value to the developer community.

### DevOps to the rescue

> "DevOps is the union of people, process, and products to enable continuous delivery of value to our end users." --[Donovan Brown][2]

To address these challenges, the community stopped all new projects for a couple of sprints to explore Agile practices and new products. The aim was to re-energize the community, to find ways to promote autonomy, mastery, and purpose, as outlined in the book [Drive][3], by Daniel H. Pink, and to overhaul the rigid processes and products.

> "Mature self-organizing, self-managed, and cross-functional teams thrive on autonomy, mastery, and purpose." --Drive, Daniel H. Pink.

Getting the culture--the people--right was the first step to embrace DevOps. The community implemented the [Scrum][4] framework, used [kanban][5] to improve the engineering process, and adopted visualization to improve transparency, awareness, and most important, trust. With self-organization of teams, the traditional hierarchy and chain-of-command disappeared. Self-management encouraged teams to actively monitor and evolve their own process.

In April 2010, the community took another pivotal step by switching and committing its culture, process, and products to the cloud. While the core focus of the free "by-the-community-for-the-community" [solutions][6] remains on guidance and filling gaps, there's a growing investment in open source solutions (OSS) to research and share outcomes from the DevOps transformations.

Continuous integration (CI) and continuous delivery (CD) replaced rigid manual processes with automated pipelines. This empowered teams to deploy solutions to canary and early-adopter users without intervention from program management. Adding telemetry enabled teams to watch their solutions and often detect and address unknown issues before users noticed them.

The DevOps transformation is an ongoing evolution, using experiments to explore and validate people, process, and product innovations. Recent experiments introduced pipeline innovations that are continuously improving the value flow. Scanning components automatically, continuously, and silently checks security, licensing, and quality of open source components. Deployment rings and feature flags enable teams to have fine-grained control of features for all or specific users.

In October 2017, the community moved most of its private version control repositories to [GitHub][7]. Transferring ownership and administration responsibilities for all repositories to the ALM DevOps Rangers community gives the teams autonomy and an opportunity to energize the broader community to contribute to the open source solutions. Teams are empowered to deliver quality and value to their end users.

### Benefits and accomplishments

Embracing DevOps enabled the Ranger community to become nimble, realize faster-to-market and quicker-to-learn-and-react processes, reduce investment of precious time, and proclaim autonomy.

Here's a list of our observations from the transition, listed in no specific order:

  * Autonomy, mastery, and purpose are core.
  * Start with something tangible and iterate--avoid the big bang.
  * Tangible and actionable metrics are important--ensure it does not turn into noise.
  * The most challenging parts of transformation are the people (culture).
  * There's no blueprint; every organization and every team is unique.
  * Transformation is continuous.
  * Transparency and visibility are key.
  * Use the engineering process to reinforce desired behavior.

Table of transformation changes:

| | PAST | CURRENT | ENVISIONED |
| --- | --- | --- | --- |
| Branching | Servicing and release isolation | Feature | Master |
| Build | Manual and error prone | Automated and consistent | |
| Issue detection | Call from user | Proactive telemetry | |
| Issue resolution | Days to weeks | Minutes to days | Minutes |
| Planning | Detailed design | Prototyping and storyboards | |
| Program management | 2 program managers (PM) | 0.25 PM | 0.125 PM |
| Release cadence | 6 to 12 months | 3 to 5 sprints | Every sprint |
| Release | Manual and error prone | Automated and consistent | |
| Sprints | 1 month | 3 weeks | |
| Team size | 10 to 15 | 2 to 5 | |
| Time to build | Hours | Seconds | |
| Time to release | Days | Minutes | |

But, we're not done! Instead, we're part of an exciting, continuous, and likely never-ending transformation.

If you'd like to learn more about our transformation, positive experiences, and known challenges that need to be addressed, see "[Our journey of transforming to a DevOps culture][8]."

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/11/devops-rangers-transformation

作者:[Willy Schaub][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/wpschaub
[1]:https://aka.ms/vsaraboutus
[2]:http://donovanbrown.com/post/what-is-devops
[3]:http://www.danpink.com/books/drive/
[4]:http://www.scrumguides.org/scrum-guide.html
[5]:https://leankit.com/learn/kanban/what-is-kanban/
[6]:https://aka.ms/vsarsolutions
[7]:https://github.com/ALM-Rangers
[8]:https://github.com/ALM-Rangers/Guidance/blob/master/src/Stories/our-journey-of-transforming-to-a-devops-culture.md
@ -1,118 +0,0 @@
6 open source home automation tools
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_openlightbulbs.png?itok=nrv9hgnH)

The [Internet of Things][13] isn't just a buzzword, it's a reality that's expanded rapidly since we last published a review article on home automation tools in 2016. In 2017, [26.5% of U.S. households][14] already had some type of smart home technology in use; within five years that percentage is expected to double.

With an ever-expanding number of devices available to help you automate, protect, and monitor your home, it has never been easier nor more tempting to try your hand at home automation. Whether you're looking to control your HVAC system remotely, integrate a home theater, protect your home from theft, fire, or other threats, reduce your energy usage, or just control a few lights, there are countless devices available at your disposal.

But at the same time, many users worry about the security and privacy implications of bringing new devices into their homes—a very real and [serious consideration][15]. They want to control who has access to the vital systems that control their appliances and record every moment of their everyday lives. And understandably so: In an era when even your refrigerator may now be a smart device, don't you want to know if your fridge is phoning home? Wouldn't you want some basic assurance that, even if you give a device permission to communicate externally, it is only accessible to those who are explicitly authorized?

[Security concerns][16] are among the many reasons why open source will be critical to our future with connected devices. Being able to fully understand the programs that control your home means you can view, and if necessary modify, the source code running on the devices themselves.

While connected devices often contain proprietary components, a good first step in bringing open source into your home automation system is to ensure that the device that ties your devices together—and presents you with an interface to them (the "hub")—is open source. Fortunately, there are many choices out there, with options to run on everything from your always-on personal computer to a Raspberry Pi.

Here are just a few of our favorites.

### Calaos

[Calaos][17] is designed as a full-stack home automation platform, including a server application, touchscreen interface, web application, native mobile applications for iOS and Android, and a preconfigured Linux operating system to run underneath. The Calaos project emerged from a French company, so its support forums are primarily in French, although most of the instructional material and documentation have been translated into English.

Calaos is licensed under version 3 of the [GPL][18] and you can view its source on [GitHub][19].

### Domoticz

[Domoticz][20] is a home automation system with a pretty wide library of supported devices, ranging from weather stations to smoke detectors to remote controls, and a large number of additional third-party [integrations][21] are documented on the project's website. It is designed with an HTML5 frontend, making it accessible from desktop browsers and most modern smartphones, and is lightweight, running on many low-power devices like the Raspberry Pi.

Domoticz is written primarily in C/C++ under the [GPLv3][22], and its [source code][23] can be browsed on GitHub.

### Home Assistant

[Home Assistant][24] is an open source home automation platform designed to be easily deployed on almost any machine that can run Python 3, from a Raspberry Pi to a network-attached storage (NAS) device, and it even ships with a Docker container to make deploying on other systems a breeze. It integrates with a large number of open source as well as commercial offerings, allowing you to link, for example, IFTTT, weather information, or your Amazon Echo device, to control hardware from locks to lights.

Home Assistant is released under an [MIT license][25], and its source can be downloaded from [GitHub][26].

### MisterHouse

[MisterHouse][27] has gained a lot of ground since 2016, when we mentioned it as "another option to consider" on this list. It uses Perl scripts to monitor anything that can be queried by a computer or control anything capable of being remote controlled. It responds to voice commands, time of day, weather, location, and other events to turn on the lights, wake you up, record your favorite TV show, announce phone callers, warn that your front door is open, report how long your son has been online, tell you if your daughter's car is speeding, and much more. It runs on Linux, macOS, and Windows computers and can read/write from a wide variety of devices including security systems, weather stations, caller ID, routers, vehicle location systems, and more.

MisterHouse is licensed under the [GPLv2][28] and you can view its source code on [GitHub][29].

### OpenHAB

[OpenHAB][30] (short for Open Home Automation Bus) is one of the best-known home automation tools among open source enthusiasts, with a large user community and quite a number of supported devices and integrations. Written in Java, openHAB is portable across most major operating systems and even runs nicely on the Raspberry Pi. Supporting hundreds of devices, openHAB is designed to be device-agnostic while making it easier for developers to add their own devices or plugins to the system. OpenHAB also ships iOS and Android apps for device control, as well as design tools so you can create your own UI for your home system.

You can find openHAB's [source code][31] on GitHub licensed under the [Eclipse Public License][32].

### OpenMotics

[OpenMotics][33] is a home automation system with both hardware and software under open source licenses. It's designed to provide a comprehensive system for controlling devices, rather than stitching together many devices from different providers. Unlike many of the other systems designed primarily for easy retrofitting, OpenMotics focuses on a hardwired solution. For more, see our [full article][34] from OpenMotics backend developer Frederick Ryckbosch.

The source code for OpenMotics is licensed under the [GPLv2][35] and is available for download on [GitHub][36].

These aren't the only options available, of course. Many home automation enthusiasts go with a different solution, or even decide to roll their own. Other users choose to use individual smart home devices without integrating them into a single comprehensive system.

If the solutions above don't meet your needs, here are some potential alternatives to consider:

  * [EventGhost][1] is an open source ([GPL v2][2]) home theater automation tool that operates only on Microsoft Windows PCs. It allows users to control media PCs and attached hardware by using plugins that trigger macros or by writing custom Python scripts.

  * [ioBroker][3] is a JavaScript-based IoT platform that can control lights, locks, thermostats, media, webcams, and more. It will run on any hardware that runs Node.js, including Windows, Linux, and macOS, and is open sourced under the [MIT license][4].

  * [Jeedom][5] is a home automation platform comprised of open source software ([GPL v2][6]) to control lights, locks, media, and more. It includes a mobile app (Android and iOS) and operates on Linux PCs; the company also sells hubs that it says provide a ready-to-use solution for setting up home automation.

  * [LinuxMCE][7] bills itself as the "'digital glue' between your media and all of your electrical appliances." It runs on Linux (including Raspberry Pi), is released under the Pluto open source [license][8], and can be used for home security, telecom (VoIP and voice mail), A/V equipment, home automation, and—uniquely—to play video games.

  * [OpenNetHome][9], like the other solutions in this category, is open source software for controlling lights, alarms, appliances, etc. It's based on Java and Apache Maven, operates on Windows, macOS, and Linux—including Raspberry Pi, and is released under [GPLv3][10].

  * [Smarthomatic][11] is an open source home automation framework that concentrates on hardware devices and software, rather than user interfaces. Licensed under [GPLv3][12], it's used for things such as controlling lights, appliances, and air humidity, measuring ambient temperature, and remembering to water your plants.

Now it's your turn: Do you already have an open source home automation system in place? Or perhaps you're researching the options to create one. What advice would you have to a newcomer to home automation, and what system or systems would you recommend?

--------------------------------------------------------------------------------

via: https://opensource.com/life/17/12/home-automation-tools

作者:[Jason Baker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jason-baker
[1]:http://www.eventghost.net/
[2]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.html
[3]:http://iobroker.net/
[4]:https://github.com/ioBroker/ioBroker#license
[5]:https://www.jeedom.com/site/en/index.html
[6]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.html
[7]:http://www.linuxmce.com/
[8]:http://wiki.linuxmce.org/index.php/License
[9]:http://opennethome.org/
[10]:https://github.com/NetHome/NetHomeServer/blob/master/LICENSE
[11]:https://www.smarthomatic.org/
[12]:https://github.com/breaker27/smarthomatic/blob/develop/GPL3.txt
[13]:https://opensource.com/resources/internet-of-things
[14]:https://www.statista.com/outlook/279/109/smart-home/united-states
[15]:http://www.crn.com/slide-shows/internet-of-things/300089496/black-hat-2017-9-iot-security-threats-to-watch.htm
[16]:https://opensource.com/business/15/5/why-open-source-means-stronger-security
[17]:https://calaos.fr/en/
[18]:https://github.com/calaos/calaos-os/blob/master/LICENSE
[19]:https://github.com/calaos
[20]:https://domoticz.com/
[21]:https://www.domoticz.com/wiki/Integrations_and_Protocols
[22]:https://github.com/domoticz/domoticz/blob/master/License.txt
[23]:https://github.com/domoticz/domoticz
[24]:https://home-assistant.io/
[25]:https://github.com/home-assistant/home-assistant/blob/dev/LICENSE.md
[26]:https://github.com/balloob/home-assistant
[27]:http://misterhouse.sourceforge.net/
[28]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
[29]:https://github.com/hollie/misterhouse
[30]:http://www.openhab.org/
[31]:https://github.com/openhab/openhab
[32]:https://github.com/openhab/openhab/blob/master/LICENSE.TXT
[33]:https://www.openmotics.com/
[34]:https://opensource.com/life/14/12/open-source-home-automation-system-opemmotics
[35]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
[36]:https://github.com/openmotics
@ -1,109 +0,0 @@
IPv6 Auto-Configuration in Linux
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_5.png?itok=3kN83IjL)

In [Testing IPv6 Networking in KVM: Part 1][1], we learned about unique local addresses (ULAs). In this article, we will learn how to set up automatic IP address configuration for ULAs.

### When to Use Unique Local Addresses

Unique local addresses use the fd00::/8 address block, and are similar to our old friends the IPv4 private address classes: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. But they are not intended as a direct replacement. IPv4 private address classes and network address translation (NAT) were created to alleviate the shortage of IPv4 addresses, a clever hack that prolonged the life of IPv4 for years after it should have been replaced. IPv6 supports NAT, but I can't think of a good reason to use it. IPv6 isn't just bigger IPv4; it is different and needs different thinking.

So what's the point of ULAs, especially when we have link-local addresses (fe80::/10) and don't even need to configure them? There are two important differences. One, link-local addresses are not routable, so you can't cross subnets. Two, you control ULAs; choose your own addresses, make subnets, and they are routable.

Another benefit of ULAs is you don't need an allocation of global unicast IPv6 addresses just for mucking around on your LAN. If you have an allocation from a service provider then you don't need ULAs. You can mix global unicast addresses and ULAs on the same network, but I can't think of a good reason to have both, and for darned sure you don't want to use network address translation (NAT) to make ULAs publicly accessible. That, in my peerless opinion, is daft.

ULAs are for private networks only and should be blocked from leaving your network, and not allowed to roam the Internet. Which should be simple: just block the whole fd00::/8 range on your border devices.
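
On a Linux border router that uses ip6tables, a minimal sketch of such a block might look like this (assuming eth0 is the WAN-facing interface; adapt to whatever firewall tooling you actually run):
```
$ sudo ip6tables -A FORWARD -s fd00::/8 -o eth0 -j DROP
$ sudo ip6tables -A FORWARD -d fd00::/8 -i eth0 -j DROP
```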

### Address Auto-Configuration

ULAs are not automatic like link-local addresses, but setting up auto-configuration is easy as pie with radvd, the router advertisement daemon. Before you change anything, run `ifconfig` or `ip addr show` to see your existing IP addresses.

You should install radvd on a dedicated router for production use, but for testing you can install it on any Linux PC on your network. In my little KVM test lab, I installed it on Ubuntu with `apt-get install radvd`. It should not start after installation, because there is no configuration file:
```
$ sudo systemctl status radvd
● radvd.service - LSB: Router Advertising Daemon
   Loaded: loaded (/etc/init.d/radvd; bad; vendor preset: enabled)
   Active: active (exited) since Mon 2017-12-11 20:08:25 PST; 4min 59s ago
     Docs: man:systemd-sysv-generator(8)

Dec 11 20:08:25 ubunut1 systemd[1]: Starting LSB: Router Advertising Daemon...
Dec 11 20:08:25 ubunut1 radvd[3541]: Starting radvd:
Dec 11 20:08:25 ubunut1 radvd[3541]: * /etc/radvd.conf does not exist or is empty.
Dec 11 20:08:25 ubunut1 radvd[3541]: * See /usr/share/doc/radvd/README.Debian
Dec 11 20:08:25 ubunut1 radvd[3541]: * radvd will *not* be started.
Dec 11 20:08:25 ubunut1 systemd[1]: Started LSB: Router Advertising Daemon.
```

It's a little confusing with all the start and not-started messages, but radvd is not running, which you can verify with good old `ps|grep radvd`. So we need to create `/etc/radvd.conf`. Copy this example, replacing the network interface name on the first line with your interface name:
```
interface ens7 {
  AdvSendAdvert on;
  MinRtrAdvInterval 3;
  MaxRtrAdvInterval 10;
  prefix fd7d:844d:3e17:f3ae::/64
  {
    AdvOnLink on;
    AdvAutonomous on;
  };
};
```

The prefix defines your network address, which is the first 64 bits of the address. The first two characters must be `fd`, then you define the remainder of the prefix, and leave the last 64 bits empty, as radvd will assign the last 64 bits. The next 16 bits after the prefix define the subnet, and the remaining bits define the host address. Your subnet size must always be /64. RFC 4193 requires that addresses be randomly generated; see [Testing IPv6 Networking in KVM: Part 1][1] for more information on creating and managing ULAs.
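
If you don't want to roll the 40 random bits by hand, a quick bash sketch builds a random RFC 4193 /48; you then append a 16-bit subnet ID (such as `f3ae` above) to get your /64 prefix:
```
$ r=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')
$ echo "fd${r:0:2}:${r:2:4}:${r:6:4}::/48"
fd8f:3b22:910c::/48   # example output; yours will differ
```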

### IPv6 Forwarding

IPv6 forwarding must be enabled. This command enables it until restart:
```
$ sudo sysctl -w net.ipv6.conf.all.forwarding=1
```

Uncomment or add this line to `/etc/sysctl.conf` to make it permanent:
```
net.ipv6.conf.all.forwarding = 1
```

Start the radvd daemon:
```
$ sudo systemctl stop radvd
$ sudo systemctl start radvd
```

This example reflects a quirk I ran into on my Ubuntu test system; I always have to stop radvd, no matter what state it is in, and then start it to apply any changes.

You won't see any output on a successful start, and often not on a failure either, so run `sudo systemctl status radvd`. If there are errors, systemctl will tell you. The most common errors are syntax errors in `/etc/radvd.conf`.

A cool thing I learned after complaining on Twitter: when you run `journalctl -xe --no-pager` to debug systemctl errors, your output lines will wrap, and then you can actually read your error messages.

Now check your hosts to see their new auto-assigned addresses:
```
$ ifconfig
ens7 Link encap:Ethernet HWaddr 52:54:00:57:71:50
 [...]
 inet6 addr: fd7d:844d:3e17:f3ae:9808:98d5:bea9:14d9/64 Scope:Global
 [...]
```

And there it is! Come back next week to learn how to manage DNS for ULAs, so you can use proper hostnames instead of those giant IPv6 addresses.

Learn more about Linux through the free ["Introduction to Linux"][2] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2017/12/ipv6-auto-configuration-linux

作者:[Carla Schroder][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-1
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
@ -1,111 +0,0 @@
How to use syslog-ng to collect logs from remote Linux machines
======
![linuxhero.jpg][1]

Image: Jack Wallen

Let's say your data center is filled with Linux servers and you need to administer them all. Part of that administration job is viewing log files. But if you're looking at numerous machines, that means logging into each machine individually, reading log files, and then moving on to the next. Depending upon how many machines you have, that can take a large chunk of time from your day.

Or, you could set up a single Linux machine to collect those logs. That would make your day considerably more efficient. To do this, you could opt for a number of different systems, one of which is syslog-ng.

The problem with syslog-ng is that the documentation isn't the easiest to comb through. However, I've taken care of that and am going to lay out the installation and configuration in such a way that you can have syslog-ng up and running in no time. I'll be demonstrating on Ubuntu Server 16.04 on a two-system setup:

 * UBUNTUSERVERVM at IP address 192.168.1.118 will serve as log collector
 * UBUNTUSERVERVM2 will serve as a client, sending log files to the collector

Let's install and configure.

## Installation

The installation is simple. I'll be installing from the standard repositories, in order to make this as easy as possible. To do this, open up a terminal window and issue the command:
```
sudo apt install syslog-ng
```

You must issue the above command on both collector and client. Once that's installed, you're ready to configure.

## Configuration for the collector

We'll start with the configuration of the log collector. The configuration file is /etc/syslog-ng/syslog-ng.conf. Out of the box, syslog-ng includes a configuration file. We're not going to use that. Let's rename the default config file with the command sudo mv /etc/syslog-ng/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf.BAK. Now create a new configuration file with the command sudo nano /etc/syslog-ng/syslog-ng.conf. In that file add the following:
```
@version: 3.5
@include "scl.conf"
@include "`scl-root`/system/tty10.conf"
options {
    time-reap(30);
    mark-freq(10);
    keep-hostname(yes);
    };
source s_local { system(); internal(); };
source s_network {
    syslog(transport(tcp) port(514));
    };
destination d_local {
    file("/var/log/syslog-ng/messages_${HOST}"); };
destination d_logs {
    file(
        "/var/log/syslog-ng/logs.txt"
        owner("root")
        group("root")
        perm(0777)
        ); };
log { source(s_local); source(s_network); destination(d_logs); };
```

Do note that we are working with port 514, so you'll need to make sure it is accessible on your network.
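
If the collector runs a host firewall, open that port there as well; for example, with ufw on Ubuntu (a sketch—your firewall tooling may differ):
```
sudo ufw allow 514/tcp
```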

Save and close the file. The above configuration will dump the desired log files (denoted with system() and internal()) into /var/log/syslog-ng/logs.txt. Because of this, you need to create the directory and file with the following commands:
```
sudo mkdir /var/log/syslog-ng
sudo touch /var/log/syslog-ng/logs.txt
```

Start and enable syslog-ng with the commands:
```
sudo systemctl start syslog-ng
sudo systemctl enable syslog-ng
```

## Configuration for the client

We're going to do the very same thing on the client (moving the default configuration file and creating a new configuration file). Copy the following text into the new client configuration file:
```
@version: 3.5
@include "scl.conf"
@include "`scl-root`/system/tty10.conf"
source s_local { system(); internal(); };
destination d_syslog_tcp {
    syslog("192.168.1.118" transport("tcp") port(514)); };
log { source(s_local); destination(d_syslog_tcp); };
```

Note: Change the IP address to match the address of your collector server.

Save and close that file. Start and enable syslog-ng in the same fashion you did on the collector.
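
Before relying on the pipeline, you can generate a test message on the client with logger, which writes to the local syslog and should therefore be forwarded to the collector:
```
logger "syslog-ng test from $(hostname)"
```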

## View the log files

Head back to your collector and issue the command sudo tail -f /var/log/syslog-ng/logs.txt. You should see output that includes log entries for both collector and client (**Figure A**).

**Figure A**

![Figure A][3]

Congratulations, syslog-ng is working. You can now log into your collector to view logs from both the local machine and the remote client. If you have more Linux servers in your data center, walk through the process of installing syslog-ng and setting each of them up as a client to send their logs to the collector, so you no longer have to log into individual machines to view logs.

--------------------------------------------------------------------------------

via: https://www.techrepublic.com/article/how-to-use-syslog-ng-to-collect-logs-from-remote-linux-machines/

作者:[Jack Wallen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[1]:https://tr1.cbsistatic.com/hub/i/r/2017/01/11/51204409-68e0-49b8-a637-01af26be85f6/resize/770x/688dfedad4ed30ec4baf548c2adb8cd4/linuxhero.jpg
[3]:https://tr4.cbsistatic.com/hub/i/2018/01/09/6a24e5c0-6a29-46d3-8a66-bc72747b5beb/6f94d3e6c6c2121fab6223ed9d8c6aa6/syslognga.jpg
@ -1,99 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Partclone – A Versatile Free Software for Partition Imaging and Cloning
|
||||
======
|
||||
|
||||
![](https://www.fossmint.com/wp-content/uploads/2018/01/Partclone-Backup-Tool-For-Linux.png)
|
||||
|
||||
**[Partclone][1]** is a free and open-source tool for creating and cloning partition images brought to you by the developers of **Clonezilla**. In fact, **Partclone** is one of the tools that **Clonezilla** is based on.
|
||||
|
||||
It provides users with the tools required to backup and restores used partition blocks along with high compatibility with several file systems thanks to its ability to use existing libraries like **e2fslibs** to read and write partitions e.g. **ext2**.
|
||||
|
||||
Its best stronghold is the variety of formats it supports including ext2, ext3, ext4, hfs+, reiserfs, reiser4, btrfs, vmfs3, vmfs5, xfs, jfs, ufs, ntfs, fat(12/16/32), exfat, f2fs, and nilfs.
|
||||
|
||||
It also has a plethora of available programs including **partclone.ext2** (ext3 & ext4), partclone.ntfs, partclone.exfat, partclone.hfsp, and partclone.vmfs (v3 and v5), among others.
|
||||
|
||||
### Features in Partclone
|
||||
|
||||
* **Freeware:** **Partclone** is free for everyone to download and use.
|
||||
* **Open Source:** **Partclone** is released under the GNU GPL license and is open to contribution on [GitHub][2].
|
||||
* **Cross-Platform** : Available on Linux, Windows, MAC, ESX file system backup/restore, and FreeBSD.
|
||||
* An online [Documentation page][3] from where you can view help docs and track its GitHub issues.
|
||||
* An online [user manual][4] for beginners and pros alike.
|
||||
* Rescue support.
|
||||
* Clone partitions to image files.
|
||||
* Restore image files to partitions.
|
||||
* Duplicate partitions quickly.
|
||||
* Support for raw clone.
|
||||
* Displays transfer rate and elapsed time.
|
||||
* Supports piping.
|
||||
* Support for crc32.
|
||||
* Supports vmfs for ESX vmware server and ufs for FreeBSD file system.
|
||||
|
||||
|
||||
|
||||
There are a lot more features bundled in **Partclone** and you can see the rest of them [here][5].
|
||||
|
||||
[__Download Partclone for Linux][6]
|
||||
|
||||
### How to Install and Use Partclone
|
||||
|
||||
To install Partclone on Linux, use your distribution's package manager:
|
||||
```
|
||||
$ sudo apt install partclone [On Debian/Ubuntu]
|
||||
$ sudo yum install partclone [On CentOS/RHEL/Fedora]
|
||||
|
||||
```
|
||||
|
||||
Clone partition to image.
|
||||
```
|
||||
# partclone.ext4 -d -c -s /dev/sda1 -o sda1.img
|
||||
|
||||
```
|
||||
|
||||
Restore image to partition.
|
||||
```
|
||||
# partclone.ext4 -d -r -s sda1.img -o /dev/sda1
|
||||
|
||||
```
|
||||
|
||||
Partition to partition clone.
|
||||
```
|
||||
# partclone.ext4 -d -b -s /dev/sda1 -o /dev/sdb1
|
||||
|
||||
```
|
||||
|
||||
Display image information.
|
||||
```
|
||||
# partclone.info -s sda1.img
|
||||
|
||||
```
|
||||
|
||||
Check image.
|
||||
```
|
||||
# partclone.chkimg -s sda1.img
|
||||
|
||||
```
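Because Partclone supports piping, you can also compress an image on the fly. A minimal sketch adapted to ext4, assuming `/dev/sda1` as the source partition (device and file names here are placeholders):

```
# clone /dev/sda1 and gzip the image stream on the fly
partclone.ext4 -c -d -s /dev/sda1 | gzip -c -9 > sda1.ext4-ptcl-img.gz

# restore: decompress and feed the stream back to partclone
zcat sda1.ext4-ptcl-img.gz | partclone.ext4 -d -r -s - -o /dev/sda1
```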
|
||||
|
||||
Are you a **Partclone** user? I wrote about [**Deepin Clone**][7] just recently and, apparently, there are certain tasks Partclone is better at handling. What has been your experience with other backup and restore utilities?
|
||||
|
||||
Do share your thoughts and suggestions with us in the comments section below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.fossmint.com/partclone-linux-backup-clone-tool/
|
||||
|
||||
作者:[Martins D. Okoi;View All Posts;Peter Beck;Martins Divine Okoi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[1]:https://partclone.org/
|
||||
[2]:https://github.com/Thomas-Tsai/partclone
|
||||
[3]:https://partclone.org/help/
|
||||
[4]:https://partclone.org/usage/
|
||||
[5]:https://partclone.org/features/
|
||||
[6]:https://partclone.org/download/
|
||||
[7]:https://www.fossmint.com/deepin-clone-system-backup-restore-for-deepin-users/
|
@ -1,264 +0,0 @@
|
||||
Monitor your Kubernetes Cluster
|
||||
======
|
||||
This article originally appeared on [Kevin Monroe's blog][1]
|
||||
|
||||
Keeping an eye on logs and metrics is a necessary evil for cluster admins. The benefits are clear: metrics help you set reasonable performance goals, while log analysis can uncover issues that impact your workloads. The hard part, however, is getting a slew of applications to work together in a useful monitoring solution.
|
||||
|
||||
In this post, I'll cover monitoring a Kubernetes cluster with [Graylog][2] (for logging) and [Prometheus][3] (for metrics). Of course that's not just wiring 3 things together. In fact, it'll end up looking like this:
|
||||
|
||||
![][4]
|
||||
|
||||
As you know, Kubernetes isn't just one thing -- it's a system of masters, workers, networking bits, etc(d). Similarly, Graylog comes with a supporting cast (apache2, mongodb, etc), as does Prometheus (telegraf, grafana, etc). Connecting the dots in a deployment like this may seem daunting, but the right tools can make all the difference.
|
||||
|
||||
I'll walk through this using [conjure-up][5] and the [Canonical Distribution of Kubernetes][6] (CDK). I find the conjure-up interface really helpful for deploying big software, but I know some of you hate GUIs and TUIs and probably other UIs too. For those folks, I'll do the same deployment again from the command line.
|
||||
|
||||
Before we jump in, note that Graylog and Prometheus will be deployed alongside Kubernetes and not in the cluster itself. Things like the Kubernetes Dashboard and Heapster are excellent sources of information from within a running cluster, but my objective is to provide a mechanism for log/metric analysis whether the cluster is running or not.
|
||||
|
||||
### The Walk Through
|
||||
|
||||
First things first, install conjure-up if you don't already have it. On Linux, that's simply:
|
||||
```
|
||||
sudo snap install conjure-up --classic
|
||||
```
|
||||
|
||||
There's also a brew package for macOS users:
|
||||
```
|
||||
brew install conjure-up
|
||||
```
|
||||
|
||||
You'll need at least version 2.5.2 to take advantage of the recent CDK spell additions, so be sure to `sudo snap refresh conjure-up` or `brew update && brew upgrade conjure-up` if you have an older version installed.
|
||||
|
||||
Once installed, run it:
|
||||
```
|
||||
conjure-up
|
||||
```
|
||||
|
||||
![][7]
|
||||
|
||||
You'll be presented with a list of various spells. Select CDK and press `Enter`.
|
||||
|
||||
![][8]
|
||||
|
||||
At this point, you'll see additional components that are available for the CDK spell. We're interested in Graylog and Prometheus, so check both of those and hit `Continue`.
|
||||
|
||||
You'll be guided through various cloud choices to determine where you want your cluster to live. After that, you'll see options for post-deployment steps, followed by a review screen that lets you see what is about to be deployed:
|
||||
|
||||
![][9]
|
||||
|
||||
In addition to the typical K8s-related applications (etcd, flannel, load-balancer, master, and workers), you'll see additional applications related to our logging and metric selections.
|
||||
|
||||
The Graylog stack includes the following:
|
||||
|
||||
* apache2: reverse proxy for the graylog web interface
|
||||
* elasticsearch: document database for the logs
|
||||
* filebeat: forwards logs from K8s master/workers to graylog
|
||||
* graylog: provides an api for log collection and an interface for analysis
|
||||
* mongodb: database for graylog metadata
|
||||
|
||||
|
||||
|
||||
The Prometheus stack includes the following:
|
||||
|
||||
* grafana: web interface for metric-related dashboards
|
||||
* prometheus: metric collector and time series database
|
||||
* telegraf: sends host metrics to prometheus
|
||||
|
||||
|
||||
|
||||
You can fine-tune the deployment from this review screen, but the defaults will suit our needs. Click `Deploy all Remaining Applications` to get things going.
|
||||
|
||||
The deployment will take a few minutes to settle as machines are brought online and applications are configured in your cloud. Once complete, conjure-up will show a summary screen that includes links to various interesting endpoints for you to browse:
|
||||
|
||||
![][10]
|
||||
|
||||
#### Exploring Logs
|
||||
|
||||
Now that Graylog has been deployed and configured, let's take a look at some of the data we're gathering. By default, the filebeat application will send both syslog and container log events to graylog (that's `/var/log/*.log` and `/var/log/containers/*.log` from the kubernetes master and workers).
|
||||
|
||||
Grab the apache2 address and graylog admin password as follows:
|
||||
```
|
||||
juju status --format yaml apache2/0 | grep public-address
|
||||
public-address: <your-apache2-ip>
|
||||
juju run-action --wait graylog/0 show-admin-password
|
||||
admin-password: <your-graylog-password>
|
||||
```
|
||||
|
||||
Browse to `http://<your-apache2-ip>` and log in with admin as the username and <your-graylog-password> as the password. **Note:** if the interface is not immediately available, please wait, as the reverse proxy configuration may take up to 5 minutes to complete.
|
||||
|
||||
Once logged in, head to the `Sources` tab to get an overview of the logs collected from our K8s master and workers:
|
||||
|
||||
![][11]
|
||||
|
||||
Drill into those logs by clicking the `System / Inputs` tab and selecting `Show received messages` for the filebeat input:
|
||||
|
||||
![][12]
|
||||
|
||||
From here, you may want to play around with various filters or setup Graylog dashboards to help identify the events that are most important to you. Check out the [Graylog Dashboard][13] docs for details on customizing your view.
|
||||
|
||||
#### Exploring Metrics
|
||||
|
||||
Our deployment exposes two types of metrics through our grafana dashboards: system metrics include things like cpu/memory/disk utilization for the K8s master and worker machines, and cluster metrics include container-level data scraped from the K8s cAdvisor endpoints.
|
||||
|
||||
Grab the grafana address and admin password as follows:
|
||||
```
|
||||
juju status --format yaml grafana/0 | grep public-address
|
||||
public-address: <your-grafana-ip>
|
||||
juju run-action --wait grafana/0 get-admin-password
|
||||
password: <your-grafana-password>
|
||||
```
|
||||
|
||||
Browse to `http://<your-grafana-ip>:3000` and log in with admin as the username and <your-grafana-password> as the password. Once logged in, check out the cluster metric dashboard by clicking the `Home` drop-down box and selecting `Kubernetes Metrics (via Prometheus)`:
|
||||
|
||||
![][14]
|
||||
|
||||
We can also check out the system metrics of our K8s host machines by switching the drop-down box to `Node Metrics (via Telegraf)`:
|
||||
|
||||
![][15]
|
||||
|
||||
|
||||
### The Other Way
|
||||
|
||||
As alluded to in the intro, I prefer the wizard-y feel of conjure-up to guide me through complex software deployments like Kubernetes. Now that we've seen the conjure-up way, some of you may want to see a command line approach to achieve the same results. Still others may have deployed CDK previously and want to extend it with the Graylog/Prometheus components described above. Regardless of why you've read this far, I've got you covered.
|
||||
|
||||
The tool that underpins conjure-up is [Juju][16]. Everything that the CDK spell did behind the scenes can be done on the command line with Juju. Let's step through how that works.
|
||||
|
||||
**Starting From Scratch**
|
||||
|
||||
If you're on Linux, install Juju like this:
|
||||
```
|
||||
sudo snap install juju --classic
|
||||
```
|
||||
|
||||
For macOS, Juju is available from brew:
|
||||
```
|
||||
brew install juju
|
||||
```
|
||||
|
||||
Now setup a controller for your preferred cloud. You may be prompted for any required cloud credentials:
|
||||
```
|
||||
juju bootstrap
|
||||
```
|
||||
|
||||
We then need to deploy the base CDK bundle:
|
||||
```
|
||||
juju deploy canonical-kubernetes
|
||||
```
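The bundle will take a while to settle. One convenient way to watch it come up (plain `juju status`, wrapped in `watch` so it keeps refreshing) is:

```
## refresh unit status until every application reports active/ready
watch -c juju status --color
```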
|
||||
|
||||
**Starting From CDK**
|
||||
|
||||
With our Kubernetes cluster deployed, we need to add all the applications required for Graylog and Prometheus:
|
||||
```
|
||||
## deploy graylog-related applications
|
||||
juju deploy xenial/apache2
|
||||
juju deploy xenial/elasticsearch
|
||||
juju deploy xenial/filebeat
|
||||
juju deploy xenial/graylog
|
||||
juju deploy xenial/mongodb
|
||||
```
|
||||
```
|
||||
## deploy prometheus-related applications
|
||||
juju deploy xenial/grafana
|
||||
juju deploy xenial/prometheus
|
||||
juju deploy xenial/telegraf
|
||||
```
|
||||
|
||||
Now that the software is deployed, connect them together so they can communicate:
|
||||
```
|
||||
## relate graylog applications
|
||||
juju relate apache2:reverseproxy graylog:website
|
||||
juju relate graylog:elasticsearch elasticsearch:client
|
||||
juju relate graylog:mongodb mongodb:database
|
||||
juju relate filebeat:beats-host kubernetes-master:juju-info
|
||||
juju relate filebeat:beats-host kubernetes-worker:juju-info
|
||||
```
|
||||
```
|
||||
## relate prometheus applications
|
||||
juju relate prometheus:grafana-source grafana:grafana-source
|
||||
juju relate telegraf:prometheus-client prometheus:target
|
||||
juju relate kubernetes-master:juju-info telegraf:juju-info
|
||||
juju relate kubernetes-worker:juju-info telegraf:juju-info
|
||||
```
|
||||
|
||||
At this point, all the applications can communicate with each other, but we have a bit more configuration to do (e.g., setting up the apache2 reverse proxy, telling prometheus how to scrape k8s, importing our grafana dashboards, etc):
|
||||
```
|
||||
## configure graylog applications
|
||||
juju config apache2 enable_modules="headers proxy_html proxy_http"
|
||||
juju config apache2 vhost_http_template="$(base64 <vhost-tmpl>)"
|
||||
juju config elasticsearch firewall_enabled="false"
|
||||
juju config filebeat \
|
||||
logpath="/var/log/*.log /var/log/containers/*.log"
|
||||
juju config filebeat logstash_hosts="<graylog-ip>:5044"
|
||||
juju config graylog elasticsearch_cluster_name="<es-cluster>"
|
||||
```
|
||||
```
|
||||
## configure prometheus applications
|
||||
juju config prometheus scrape-jobs="<scraper-yaml>"
|
||||
juju run-action --wait grafana/0 import-dashboard \
|
||||
dashboard="$(base64 <dashboard-json>)"
|
||||
```
|
||||
|
||||
Some of the above steps need values specific to your deployment. You can get these in the same way that conjure-up does:
|
||||
|
||||
* <vhost-tmpl>: fetch our sample [template][17] from github
|
||||
* <graylog-ip>: `juju run --unit graylog/0 'unit-get private-address'`
|
||||
* <es-cluster>: `juju config elasticsearch cluster-name`
|
||||
  * <scraper-yaml>: fetch our sample [scraper][18] from github; [substitute][19] appropriate values for `[K8S_PASSWORD][20]` and `[K8S_API_ENDPOINT][21]`
|
||||
* <dashboard-json>: fetch our [host][22] and [k8s][23] dashboards from github
|
||||
|
||||
|
||||
|
||||
Finally, you'll want to expose the apache2 and grafana applications to make their web interfaces accessible:
|
||||
```
|
||||
## expose relevant endpoints
|
||||
juju expose apache2
|
||||
juju expose grafana
|
||||
```
|
||||
|
||||
Now that we have everything deployed, related, configured, and exposed, you can login and poke around using the same steps from the **Exploring Logs** and **Exploring Metrics** sections above.
|
||||
|
||||
### The Wrap Up
|
||||
|
||||
My goal here was to show you how to deploy a Kubernetes cluster with rich monitoring capabilities for logs and metrics. Whether you prefer a guided approach or command line steps, I hope it's clear that monitoring complex deployments doesn't have to be a pipe dream. The trick is to figure out how all the moving parts work, make them work together repeatably, and then break/fix/repeat for a while until everyone can use it.
|
||||
|
||||
This is where tools like conjure-up and Juju really shine. Leveraging the expertise of contributors to this ecosystem makes it easy to manage big software. Start with a solid set of apps, customize as needed, and get back to work!
|
||||
|
||||
Give these bits a try and let me know how it goes. You can find enthusiasts like me on Freenode IRC in **#conjure-up** and **#juju**. Thanks for reading!
|
||||
|
||||
### About the author
|
||||
|
||||
Kevin joined Canonical in 2014 with his focus set on modeling complex software. He found his niche on the Juju Big Software team where his mission is to capture operational knowledge of Big Data and Machine Learning applications into repeatable (and reliable!) solutions.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://insights.ubuntu.com/2018/01/16/monitor-your-kubernetes-cluster/
|
||||
|
||||
作者:[Kevin Monroe][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://insights.ubuntu.com/author/kwmonroe/
|
||||
[1]:https://medium.com/@kwmonroe/monitor-your-kubernetes-cluster-a856d2603ec3
|
||||
[2]:https://www.graylog.org/
|
||||
[3]:https://prometheus.io/
|
||||
[4]:https://insights.ubuntu.com/wp-content/uploads/706b/1_TAA57DGVDpe9KHIzOirrBA.png
|
||||
[5]:https://conjure-up.io/
|
||||
[6]:https://jujucharms.com/canonical-kubernetes
|
||||
[7]:https://insights.ubuntu.com/wp-content/uploads/98fd/1_o0UmYzYkFiHIs2sBgj7G9A.png
|
||||
[8]:https://insights.ubuntu.com/wp-content/uploads/0351/1_pgVaO_ZlalrjvYd5pOMJMA.png
|
||||
[9]:https://insights.ubuntu.com/wp-content/uploads/9977/1_WXKxMlml2DWA5Kj6wW9oXQ.png
|
||||
[10]:https://insights.ubuntu.com/wp-content/uploads/8588/1_NWq7u6g6UAzyFxtbM-ipqg.png
|
||||
[11]:https://insights.ubuntu.com/wp-content/uploads/a1c3/1_hHK5mSrRJQi6A6u0yPSGOA.png
|
||||
[12]:https://insights.ubuntu.com/wp-content/uploads/937f/1_cP36lpmSwlsPXJyDUpFluQ.png
|
||||
[13]:http://docs.graylog.org/en/2.3/pages/dashboards.html
|
||||
[14]:https://insights.ubuntu.com/wp-content/uploads/9256/1_kskust3AOImIh18QxQPgRw.png
|
||||
[15]:https://insights.ubuntu.com/wp-content/uploads/2037/1_qJpjPOTGMQbjFY5-cZsYrQ.png
|
||||
[16]:https://jujucharms.com/
|
||||
[17]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/graylog/steps/01_install-graylog/graylog-vhost.tmpl
|
||||
[18]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/prometheus-scrape-k8s.yaml
|
||||
[19]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L25
|
||||
[20]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L10
|
||||
[21]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L11
|
||||
[22]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-telegraf.json
|
||||
[23]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-k8s.json
|
@ -1,107 +0,0 @@
|
||||
SPARTA – Network Penetration Testing GUI Toolkit
|
||||
======
|
||||
|
||||
![](https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/GjWDZ1516079830.png?resize=696%2C379&ssl=1)
|
||||
|
||||
SPARTA is a GUI application developed in Python and bundled with Kali Linux as a network penetration testing tool. It simplifies the scanning and enumeration phases and produces results faster.
|
||||
|
||||
The best thing about the SPARTA GUI toolkit is that its scans detect the services running on the target ports.
|
||||
|
||||
It also provides brute-force attacks against the scanned open ports and services as part of the enumeration phase.
|
||||
|
||||
|
||||
[Also Read: Network Pentesting Checklist][1]
|
||||
|
||||
## Installation
|
||||
|
||||
Please clone the latest version of SPARTA from github:
|
||||
|
||||
```
|
||||
git clone https://github.com/secforce/sparta.git
|
||||
```
|
||||
|
||||
Alternatively, download the latest zip file [here][2].
|
||||
```
|
||||
cd /usr/share/
|
||||
git clone https://github.com/secforce/sparta.git
|
||||
```
|
||||
Place the "sparta" file in /usr/bin/ and make it executable.
|
||||
Type `sparta` in any terminal to launch the application.
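A sketch of those two steps; the launcher's exact filename inside the cloned repository is an assumption and may vary between versions:

```
# copy the launcher into the PATH and make it executable
sudo cp /usr/share/sparta/sparta /usr/bin/sparta
sudo chmod +x /usr/bin/sparta
# launch it
sparta
```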
|
||||
|
||||
|
||||
## The scope of Network Penetration Testing Work:
|
||||
|
||||
  * An organization's security weaknesses in its network infrastructure are identified by building a list of hosts (or a single targeted host) and adding them to the scope.
|
||||
  * From the menu bar, select File > Add host(s) to scope.
|
||||
|
||||
|
||||
|
||||
[![Network Penetration Testing][3]][4]
|
||||
|
||||
[![Network Penetration Testing][5]][6]
|
||||
|
||||
  * The figures above show the target IP added to the scope. Depending on your network, you can add a range of IPs to scan.
|
||||
  * After the hosts are added, an Nmap scan begins and the results come back quickly; the scanning phase is then complete.
|
||||
|
||||
|
||||
|
||||
## Open Ports & Services:
|
||||
|
||||
  * The Nmap results list the target's open ports and the services running on them.
|
||||
|
||||
|
||||
|
||||
[![Network Penetration Testing][7]][8]
|
||||
|
||||
  * The figure above shows the target's operating system, open ports, and services discovered as scan results.
|
||||
|
||||
|
||||
|
||||
## Brute Force Attack on Open ports:
|
||||
|
||||
  * Let us brute-force Server Message Block (SMB) via port 445 to enumerate the list of users and their valid passwords.
|
||||
|
||||
|
||||
|
||||
[![Network Penetration Testing][9]][10]
|
||||
|
||||
  * Right-click and select the Send to Brute option. Also, select the discovered open ports and services on the target.
|
||||
  * Browse and add dictionary files for the username and password fields.
|
||||
|
||||
|
||||
|
||||
[![Network Penetration Testing][11]][12]
|
||||
|
||||
  * Click Run to start the brute-force attack on the target. The figure above shows the attack successfully completed against the target IP, with a valid password found.
|
||||
  * Keep in mind that failed login attempts will be logged in the Windows Event Log.
|
||||
  * A password rotation policy of 15 to 30 days is good practice.
|
||||
  * Using a strong password, as per policy, is always recommended. An account lockout policy is a good way to stop brute-force attacks (for example, locking the account after 5 failed attempts).
|
||||
  * Integrating business-critical assets with a SIEM (Security Incident & Event Management) system will detect these kinds of attacks as soon as possible.
|
||||
|
||||
|
||||
|
||||
SPARTA is a time-saving GUI toolkit for pentesters in the scanning and enumeration phases. SPARTA scans and brute-forces various protocols, and it has many more features. Happy hacking!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://gbhackers.com/sparta-network-penetration-testing-gui-toolkit/
|
||||
|
||||
作者:[Balaganesh][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://gbhackers.com/author/balaganesh/
|
||||
[1]:https://gbhackers.com/network-penetration-testing-checklist-examples/
|
||||
[2]:https://github.com/SECFORCE/sparta/archive/master.zip
|
||||
[3]:https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-526.png?resize=696%2C495&ssl=1
|
||||
[4]:https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-526.png?ssl=1
|
||||
[5]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-527.png?resize=696%2C516&ssl=1
|
||||
[6]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-527.png?ssl=1
|
||||
[7]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-528.png?resize=696%2C519&ssl=1
|
||||
[8]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-528.png?ssl=1
|
||||
[9]:https://i1.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-529.png?resize=696%2C525&ssl=1
|
||||
[10]:https://i1.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-529.png?ssl=1
|
||||
[11]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-531.png?resize=696%2C523&ssl=1
|
||||
[12]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-531.png?ssl=1
|
@ -1,3 +1,5 @@
|
||||
translated by cyleft
|
||||
|
||||
Migrating to Linux: The Command Line
|
||||
======
|
||||
|
||||
|
@ -1,170 +0,0 @@
|
||||
Never miss a Magazine's article, build your own RSS notification system
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/01/learn-python-rss-notifier.png-945x400.jpg)
|
||||
|
||||
Python is a great programming language for quickly building applications that make our lives easier. In this article we will learn how to use Python to build an RSS notification system, the goal being to have fun learning Python using Fedora. If you are looking for a complete RSS notifier application, there are a few already packaged in Fedora.
|
||||
|
||||
### Fedora and Python - getting started
|
||||
|
||||
Python 3.6 is available by default in Fedora, and it includes Python's extensive standard library. The standard library provides a collection of modules which make some tasks simpler for us. For example, in our case we will use the [**sqlite3**][1] module to create, add and read data from a database. When a particular problem we are trying to solve is not covered by the standard library, chances are that someone has already developed a module for everyone to use. The best place to search for such modules is the Python Package Index, known as [PyPI][2]. In our example we are going to use the [**feedparser**][3] module to parse an RSS feed.
|
||||
|
||||
Since **feedparser** is not in the standard library, we have to install it in our system. Luckily for us there is an rpm package in Fedora, so the installation of **feedparser** is as simple as:
|
||||
```
|
||||
$ sudo dnf install python3-feedparser
|
||||
```
|
||||
|
||||
We now have everything we need to start coding our application.
|
||||
|
||||
### Storing the feed data
|
||||
|
||||
We need to store data from the articles that have already been published so that we send a notification only for new articles. The data we want to store will give us a unique way to identify an article. Therefore we will store the **title** and the **publication date** of the article.
|
||||
|
||||
So let's create our database using the Python **sqlite3** module and a simple SQL query. We are also importing the modules we are going to use later (**feedparser**, **smtplib** and **email**).
|
||||
|
||||
#### Creating the Database
|
||||
```
|
||||
#!/usr/bin/python3
|
||||
import sqlite3
|
||||
import smtplib
|
||||
from email.mime.text import MIMEText
|
||||
|
||||
import feedparser
|
||||
|
||||
db_connection = sqlite3.connect('/var/tmp/magazine_rss.sqlite')
|
||||
db = db_connection.cursor()
|
||||
db.execute(' CREATE TABLE IF NOT EXISTS magazine (title TEXT, date TEXT)')
|
||||
|
||||
```
|
||||
|
||||
These few lines of code create a new sqlite database stored in a file called 'magazine_rss.sqlite', and then create a new table within the database called 'magazine'. This table has two columns - 'title' and 'date' - that can store data of the type TEXT, which means that the value of each column will be a text string.
|
||||
|
||||
#### Checking the Database for old articles
|
||||
|
||||
Since we only want to add new articles to our database we need a function that will check if the article we get from the RSS feed is already in our database or not. We will use it to decide if we should send an email notification (new article) or not (old article). Ok let's code this function.
|
||||
```
|
||||
def article_is_not_db(article_title, article_date):
|
||||
""" Check if a given pair of article title and date
|
||||
is in the database.
|
||||
Args:
|
||||
article_title (str): The title of an article
|
||||
article_date (str): The publication date of an article
|
||||
Return:
|
||||
True if the article is not in the database
|
||||
False if the article is already present in the database
|
||||
"""
|
||||
db.execute("SELECT * from magazine WHERE title=? AND date=?", (article_title, article_date))
|
||||
if not db.fetchall():
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
```
|
||||
|
||||
The main part of this function is the SQL query we execute to search through the database. We are using a SELECT instruction to define which columns of our magazine table we will run the query on. We are using the `*` symbol to select all columns (title and date). Then we ask to select only the rows of the table WHERE the article_title and article_date strings are equal to the values of the title and date columns.
|
||||
|
||||
To finish, we have a simple logic that will return True if the query did not return any results and False if the query found an article in database matching our title, date pair.
|
||||
|
||||
#### Adding a new article to the Database
|
||||
|
||||
Now we can code the function to add a new article to the database.
|
||||
```
|
||||
def add_article_to_db(article_title, article_date):
|
||||
""" Add a new article title and date to the database
|
||||
Args:
|
||||
article_title (str): The title of an article
|
||||
article_date (str): The publication date of an article
|
||||
"""
|
||||
db.execute("INSERT INTO magazine VALUES (?,?)", (article_title, article_date))
|
||||
db_connection.commit()
|
||||
```
|
||||
|
||||
This function is straightforward: we are using a SQL query to INSERT a new row INTO the magazine table with the VALUES of article_title and article_date. Then we commit the change to make it persistent.
|
||||
|
||||
That's all we need from the database's point of view, let's look at the notification system and how we can use python to send emails.
|
||||
|
||||
### Sending an email notification
|
||||
|
||||
Let's create a function to send an email using the python standard library module **smtplib.** We are also using the **email** module from the standard library to format our email message.
|
||||
```
|
||||
def send_notification(article_title, article_url):
|
||||
""" Add a new article title and date to the database
|
||||
|
||||
Args:
|
||||
article_title (str): The title of an article
|
||||
article_url (str): The url to access the article
|
||||
"""
|
||||
|
||||
smtp_server = smtplib.SMTP('smtp.gmail.com', 587)
|
||||
smtp_server.ehlo()
|
||||
smtp_server.starttls()
|
||||
smtp_server.login('your_email@gmail.com', '123your_password')
|
||||
msg = MIMEText(f'\nHi there is a new Fedora Magazine article : {article_title}. \nYou can read it here {article_url}')
|
||||
msg['Subject'] = 'New Fedora Magazine Article Available'
|
||||
msg['From'] = 'your_email@gmail.com'
|
||||
msg['To'] = 'destination_email@gmail.com'
|
||||
smtp_server.send_message(msg)
|
||||
smtp_server.quit()
|
||||
```
|
||||
|
||||
In this example I am using the Gmail SMTP server to send the email, but this will work with any email service that provides you with an SMTP server. Most of this function is boilerplate needed to configure access to the SMTP server. You will need to update the code with your email address and credentials.
|
||||
|
||||
If you are using two-factor authentication with your Gmail account, you can set up an app password that will give you a unique password to use for this application. Check out this help [page][4].
|
||||
|
||||
### Reading Fedora Magazine RSS feed
|
||||
|
||||
We now have functions to store an article in the database and send an email notification, let's create a function that parses the Fedora Magazine RSS feed and extract the articles' data.
|
||||
```
|
||||
def read_article_feed():
|
||||
""" Get articles from RSS feed """
|
||||
feed = feedparser.parse('https://fedoramagazine.org/feed/')
|
||||
for article in feed['entries']:
|
||||
if article_is_not_db(article['title'], article['published']):
|
||||
send_notification(article['title'], article['link'])
|
||||
add_article_to_db(article['title'], article['published'])
|
||||
|
||||
if __name__ == '__main__':
|
||||
read_article_feed()
|
||||
db_connection.close()
|
||||
```
|
||||
|
||||
Here we are making use of the **feedparser.parse** function. The function returns a dictionary representation of the RSS feed, for the full reference of the representation you can consult **feedparser** 's [documentation][5].
|
||||
|
||||
The RSS feed parser will return the last 10 articles as entries and then we extract the following information: the title, the link and the date the article was published. As a result, we can now use the functions we have previously defined to check if the article is not in the database, then send a notification email and finally, add the article to our database.
|
||||
|
||||
The last if statement is used to execute our read_article_feed function and then close the database connection when we execute our script.
|
||||
|
||||
### Running our script
|
||||
|
||||
Finally, to run our script we need to give the file the correct permissions. Next, we make use of the **cron** utility to automatically execute our script every hour (1 minute past the hour). **cron** is a job scheduler that we can use to run a task at a fixed time.
|
||||
```
|
||||
$ chmod a+x my_rss_notifier.py
|
||||
$ sudo cp my_rss_notifier.py /etc/cron.hourly
|
||||
```
|
||||
|
||||
To keep this tutorial simple, we are using the cron.hourly directory to execute the script every hour. If you wish to learn more about **cron** and how to configure the **crontab**, please read **cron**'s Wikipedia [page][6].
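If you want finer control than cron.hourly offers, you could instead install a crontab entry yourself. A minimal sketch, assuming the script lives at `/home/user/my_rss_notifier.py` (a placeholder path):

```
# added via: crontab -e
# run at one minute past every hour
1 * * * * /home/user/my_rss_notifier.py
```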
|
||||
|
||||
### Conclusion
|
||||
|
||||
In this tutorial we have learned how to use Python to create a simple sqlite database, parse an RSS feed and send emails. I hope that this showed you how you can easily build your own application using Python and Fedora.
|
||||
|
||||
The script is available on github [here][7].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/never-miss-magazines-article-build-rss-notification-system/
|
||||
|
||||
作者:[Clément Verna][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org
|
||||
[1]:https://docs.python.org/3/library/sqlite3.html
|
||||
[2]:https://pypi.python.org/pypi
|
||||
[3]:https://pypi.python.org/pypi/feedparser/5.2.1
|
||||
[4]:https://support.google.com/accounts/answer/185833?hl=en
|
||||
[5]:https://pythonhosted.org/feedparser/reference.html
|
||||
[6]:https://en.wikipedia.org/wiki/Cron
|
||||
[7]:https://github.com/cverna/rss_feed_notifier
|
@ -1,3 +1,5 @@
|
||||
translating----geekpi
|
||||
|
||||
How programmers learn to code
|
||||
============================================================
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
440+ Free Online Programming & Computer Science Courses You Can Start in February
|
||||
============================================================
|
||||
|
||||
@ -1403,4 +1404,4 @@ via: https://medium.freecodecamp.org/440-free-online-programming-computer-scienc
|
||||
[453]:https://www.class-central.com/subject/programming-and-software-development
|
||||
[454]:https://medium.com/@davidventuri
|
||||
[455]:https://medium.freecodecamp.com/the-best-data-science-courses-on-the-internet-ranked-by-your-reviews-6dc5b910ea40
|
||||
[456]:https://medium.freecodecamp.org/how-to-sign-up-for-coursera-courses-for-free-98266efaa531
|
||||
[456]:https://medium.freecodecamp.org/how-to-sign-up-for-coursera-courses-for-free-98266efaa531
|
||||
|
@ -1,64 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
How to Check Your Linux PC for Meltdown or Spectre Vulnerability
|
||||
======
|
||||
|
||||
![](https://www.maketecheasier.com/assets/uploads/2018/01/lmc-feat.jpg)
|
||||
|
||||
One of the scariest realities of the Meltdown and Spectre vulnerabilities is just how widespread they are. Virtually every modern computer is affected in some way. The real question is how exactly are _you_ affected? Every system is at a different state of vulnerability depending on which software has and hasn’t been patched.
|
||||
|
||||
Since Meltdown and Spectre are both fairly new and things are moving quickly, it’s not all that easy to tell what you need to look out for or what’s been fixed on your system. There are a couple of tools available that can help. They’re not perfect, but they can help you figure out what you need to know.
|
||||
|
||||
### Simple Test
|
||||
|
||||
One of the top Linux kernel developers provided a simple way of checking the status of your system in regard to the Meltdown and Spectre vulnerabilities. This one is the easiest and most concise, but it doesn’t work on every system. Some distributions decided not to include support for this report. Even so, it’s worth a shot to check.
|
||||
```
|
||||
grep . /sys/devices/system/cpu/vulnerabilities/*
|
||||
|
||||
```
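If your kernel supports the report, each vulnerability file prints on its own line. Purely as an illustration (the exact strings depend entirely on your kernel version and CPU; this guess reflects a partially patched early-2018 system):

```
/sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
/sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable
/sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable: Minimal generic ASM retpoline
```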
|
||||
|
||||
![Kernel Vulnerability Check][1]
|
||||
|
||||
You should see output similar to the image above. Chances are, you’ll see that at least one of the vulnerabilities remains unchecked on your system. This is especially true since Linux hasn’t made any progress in mitigating Spectre v1 yet.
|
||||
|
||||
### The Script
|
||||
|
||||
If the above method didn’t work for you, or you want a more detailed report of your system, a developer has created a shell script that will check your system to see what exactly it is susceptible to and what has been done to mitigate Meltdown and Spectre.
|
||||
|
||||
In order to get the script, make sure you have Git installed on your system, and then clone the script’s repository into a directory that you don’t mind running it out of.
|
||||
```
|
||||
cd ~/Downloads
|
||||
git clone https://github.com/speed47/spectre-meltdown-checker.git
|
||||
|
||||
```
|
||||
|
||||
It’s not a large repository, so it should only take a few seconds to clone. When it’s done, enter the newly created directory and run the provided script.
|
||||
```
|
||||
cd spectre-meltdown-checker
|
||||
./spectre-meltdown-checker.sh
|
||||
|
||||
```
|
||||
|
||||
You’ll see a lot of output spill into the terminal. Don’t worry, it’s not too hard to follow. First, the script checks your hardware, and then it runs through the three vulnerabilities: Spectre v1, Spectre v2, and Meltdown. Each gets its own section. In between, the script tells you plainly whether you are vulnerable to each of the three.
|
||||
|
||||
![Meltdown Spectre Check Script Ubuntu][2]
|
||||
|
||||
Each section provides you with a breakdown of potential mitigations and whether or not they have been applied. Here’s where you need to exercise a bit of common sense. The determinations it gives might seem to be in conflict. Do a bit of digging to see whether the fixes it says are applied actually mitigate the problem fully or not.
|
||||
|
||||
### What This Means
|
||||
|
||||
So, what’s the takeaway? Most Linux systems have been patched against Meltdown. If you haven’t updated yet for that, you should. Spectre v1 is still a big problem, and not a lot of progress has been made there as of yet. Spectre v2 will depend a lot on your distribution and what patches it’s chosen to apply. Regardless of what either tool says, nothing is perfect. Do your research and stay on the lookout for information coming straight from the kernel and distribution developers.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/check-linux-meltdown-spectre-vulnerability/
|
||||
|
||||
作者:[Nick Congleton][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com/author/nickcongleton/
|
||||
[1]:https://www.maketecheasier.com/assets/uploads/2018/01/lmc-kernel-check.jpg (Kernel Vulnerability Check)
|
||||
[2]:https://www.maketecheasier.com/assets/uploads/2018/01/lmc-script.jpg (Meltdown Spectre Check Script Ubuntu)
|
@ -1,147 +0,0 @@
|
||||
How to Manage PGP and SSH Keys with Seahorse
|
||||
============================================================
|
||||
|
||||
|
||||
![Seahorse](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fish-1907607_1920.jpg?itok=u07bav4m "Seahorse")
|
||||
Learn how to manage both PGP and SSH keys with the Seahorse GUI tool.[Creative Commons Zero][6]
|
||||
|
||||
Security is paramount to peace of mind. After all, security is a big reason why so many users migrated to Linux in the first place. But why stop with merely adopting the platform, when you can also employ several techniques and technologies to help secure your desktop or server systems?
|
||||
|
||||
One such technology involves keys—in the form of PGP and SSH. PGP keys allow you to encrypt and decrypt emails and files, and SSH keys allow you to log into servers with an added layer of security.
|
||||
|
||||
Sure, you can manage these keys via the command-line interface (CLI), but what if you’re working on a desktop with a resplendent GUI? Experienced Linux users may cringe at the idea of shrugging off the command line, but not all users have the same skill set and comfort level there. Thus, the GUI!
|
||||
|
||||
In this article, I will walk you through the process of managing both PGP and SSH keys through the [Seahorse][14] GUI tool. Seahorse has a pretty impressive feature set; it can:
|
||||
|
||||
* Encrypt/decrypt/sign files and text.
|
||||
|
||||
* Manage your keys and keyring.
|
||||
|
||||
* Synchronize your keys and your keyring with remote key servers.
|
||||
|
||||
* Sign and publish keys.
|
||||
|
||||
* Cache your passphrase.
|
||||
|
||||
* Backup both keys and keyring.
|
||||
|
||||
*   Add an image in any GDK-supported format as an OpenPGP photo ID.
|
||||
|
||||
* Create, configure, and cache SSH keys.
|
||||
|
||||
For those that don’t know, Seahorse is a GNOME application for managing both encryption keys and passwords within the GNOME keyring. But fear not, Seahorse is available for installation on numerous desktops. And since Seahorse is found in the standard repositories, you can open up your desktop’s app store (such as Ubuntu Software or Elementary OS AppCenter) and install. To do this, locate Seahorse in your distribution’s application store and click to install. Once you have Seahorse installed, you’re ready to start making use of a very handy tool.
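If you prefer the terminal, the same install is a one-liner on Debian- and Ubuntu-based systems; `seahorse` is the package name in the standard repositories:

```
sudo apt install seahorse
```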
|
||||
|
||||
Let’s do just that.
|
||||
|
||||
### PGP Keys
|
||||
|
||||
The first thing we’re going to do is create a new PGP key. As I said earlier, PGP keys can be used to encrypt email (with tools like [Thunderbird][15]’s [Enigmail][16] or the built-in encryption function with [Evolution][17]). A PGP key also allows you to encrypt files. Anyone with your public key will be able to decrypt those emails or files. Without a PGP key, no can do.
|
||||
|
||||
Creating a new PGP key pair is incredibly simple with Seahorse. Here’s what you do:
|
||||
|
||||
1. Open the Seahorse app
|
||||
|
||||
2. Click the + button in the upper left corner of the main pane
|
||||
|
||||
3. Select PGP Key (Figure 1)
|
||||
|
||||
4. Click Continue
|
||||
|
||||
5. When prompted, type a full name and email address
|
||||
|
||||
6. Click Create
|
||||
|
||||
|
||||
![Seahorse](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_1.jpg?itok=khLOYC61 "Seahorse")
|
||||
Figure 1: Creating a PGP key with Seahorse.[Used with permission][1]
|
||||
|
||||
While creating your PGP key, you can click to expand the Advanced key options section, where you can configure a comment for the key, encryption type, key strength, and expiration date (Figure 2).
|
||||
|
||||
|
||||
![PGP](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_2.jpg?itok=eWiazwrn "PGP")
|
||||
Figure 2: PGP key advanced options.[Used with permission][2]
|
||||
|
||||
The comment section is very handy to help you remember a key’s purpose (or other informative bits).
|
||||
With your PGP key created, double-click on it in the key listing. In the resulting window, click on the Names and Signatures tab. Here, you can sign your key (to indicate you trust it). Click the Sign button and then (in the resulting window) indicate how carefully you’ve checked this key and how others will see the signature (Figure 3).
|
||||
|
||||
|
||||
![Key signing](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_3.jpg?itok=7USKG9fI "Key signing")
|
||||
Figure 3: Signing a key to indicate trust level.[Used with permission][3]
|
||||
|
||||
Signing keys is very important when you’re dealing with other people’s keys, as a signed key assures your system (and you) that you’ve done the verification work and can fully trust an imported key.
|
||||
|
||||
Speaking of imported keys, Seahorse allows you to easily import someone’s public key file (the file will end in .asc). Having someone’s public key on your system means you can decrypt emails and files they send you. However, Seahorse has suffered from a [known bug][18] for quite some time. The problem is that Seahorse imports using gpg version one, but displays with gpg version two. This means, until this long-standing bug is fixed, importing public keys will always fail. If you want to import a public PGP key into Seahorse, you’re going to have to use the command line. So, if someone has sent you the file olivia.asc, and you want to import it so it can be used with Seahorse, you would issue the command `gpg2 --import olivia.asc`. That key would then appear in the GnuPG Keys listing. You can open the key, click the I trust signatures button, and then click the Sign this key button to indicate how carefully you’ve checked the key in question.
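A short sketch of that workaround, reusing the example file name from above:

```
# import the public key with GnuPG directly (sidesteps the Seahorse bug)
gpg2 --import olivia.asc
# confirm the key now appears in the keyring
gpg2 --list-keys
```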
|
||||
|
||||
### SSH Keys
|
||||
|
||||
Now we get to what I consider to be the most important aspect of Seahorse—SSH keys. Not only does Seahorse make it easy to generate an SSH key, it makes it easy to send that key to a server, so you can take advantage of SSH key authentication. Here’s how you generate a new key and then export it to a remote server.
|
||||
|
||||
1. Open up Seahorse
|
||||
|
||||
2. Click the + button
|
||||
|
||||
3. Select Secure Shell Key
|
||||
|
||||
4. Click Continue
|
||||
|
||||
5. Give the key a description
|
||||
|
||||
6. Click Create and Set Up
|
||||
|
||||
7. Type and verify a passphrase for the key
|
||||
|
||||
8. Click OK
|
||||
|
||||
9. Type the address of the remote server and a remote login name found on the server (Figure 4)
|
||||
|
||||
10. Type the password for the remote user
|
||||
|
||||
11. Click OK
|
||||
|
||||
|
||||
![SSH key](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_4.jpg?itok=ZxuxT8ry "SSH key")
|
||||
Figure 4: Uploading an SSH key to a remote server.[Used with permission][4]
|
||||
|
||||
The new key will be uploaded to the remote server and is ready to use. If your server is set up for SSH key authentication, you’re good to go.
|
||||
|
||||
Do note, during the creation of an SSH key, you can click to expand the Advanced key options and configure Encryption Type and Key Strength (Figure 5).
|
||||
|
||||
|
||||
![Advanced options](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_5.jpg?itok=vUT7pi0z "Advanced options")
|
||||
Figure 5: Advanced SSH key options.[Used with permission][5]
|
||||
|
||||
### A must-use for new Linux users
|
||||
|
||||
Any new-to-Linux user should get familiar with Seahorse. Even with its flaws, Seahorse is still an incredibly handy tool to have at the ready. At some point, you will likely want (or need) to encrypt or decrypt an email/file, or manage secure shell keys for SSH key authentication. If you want to do this, while avoiding the command line, Seahorse is the tool to use.
|
||||
|
||||
_Learn more about Linux through the free ["Introduction to Linux" ][13]course from The Linux Foundation and edX._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2018/2/how-manage-pgp-and-ssh-keys-seahorse
|
||||
|
||||
作者:[JACK WALLEN ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/jlwallen
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:https://www.linux.com/licenses/category/used-permission
|
||||
[3]:https://www.linux.com/licenses/category/used-permission
|
||||
[4]:https://www.linux.com/licenses/category/used-permission
|
||||
[5]:https://www.linux.com/licenses/category/used-permission
|
||||
[6]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[7]:https://www.linux.com/files/images/seahorse1jpg
|
||||
[8]:https://www.linux.com/files/images/seahorse2jpg
|
||||
[9]:https://www.linux.com/files/images/seahorse3jpg
|
||||
[10]:https://www.linux.com/files/images/seahorse4jpg
|
||||
[11]:https://www.linux.com/files/images/seahorse5jpg
|
||||
[12]:https://www.linux.com/files/images/fish-19076071920jpg
|
||||
[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
||||
[14]:https://wiki.gnome.org/Apps/Seahorse
|
||||
[15]:https://www.mozilla.org/en-US/thunderbird/
|
||||
[16]:https://enigmail.net/index.php/en/
|
||||
[17]:https://wiki.gnome.org/Apps/Evolution
|
||||
[18]:https://bugs.launchpad.net/ubuntu/+source/seahorse/+bug/1577198
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
3 Ways to Extend the Power of Kubernetes
|
||||
======
|
||||
|
||||
|
@ -1,93 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
|
||||
How to Check if Your Computer Uses UEFI or BIOS
|
||||
======
|
||||
**Brief: A quick tutorial to tell you if your system uses the modern UEFI or the legacy BIOS. Instructions for both Windows and Linux have been provided.**
|
||||
|
||||
When you are trying to [dual boot Linux with Windows][1], you will want to know whether your system uses UEFI or BIOS boot mode. It helps you decide how to partition the disk for installing Linux.
|
||||
|
||||
I am not going to discuss [what is BIOS][2] here. However, I would like to tell you a few advantages of [UEFI][3] over BIOS.
|
||||
|
||||
UEFI, or Unified Extensible Firmware Interface, was designed to overcome some of the limitations of BIOS. It added the ability to use disks larger than 2 TB and brought a CPU-independent architecture and drivers. With a modular design, it supports remote diagnostics and repair even with no operating system installed, as well as a flexible pre-OS environment with networking capability.
|
||||
|
||||
### Advantage of UEFI over BIOS
|
||||
|
||||
* UEFI is faster in initializing your hardware.
|
||||
  * Offers Secure Boot, which means everything loaded before the OS has to be signed. This gives your system an added layer of protection against running malware.
|
||||
  * BIOS does not support booting from a partition over 2 TB.
|
||||
  * Most importantly, if you are dual booting, it’s always advisable to install both OSes in the same boot mode.
|
||||
|
||||
|
||||
|
||||
![How to check if system has UEFI or BIOS][4]
|
||||
|
||||
If you are trying to find out whether your system runs UEFI or BIOS, it’s not that difficult. Let me start with Windows first and afterward, we’ll see how to check UEFI or BIOS on Linux systems.
|
||||
|
||||
### Check if you are using UEFI or BIOS on Windows
|
||||
|
||||
On Windows, open “System Information” from the Start menu and look under BIOS Mode to find the boot mode. If it says Legacy, your system has BIOS. If it says UEFI, well, it’s UEFI.
|
||||
|
||||
![][5]
|
||||
|
||||
**Alternative**: If you are using Windows 10, you can check whether you are using UEFI or BIOS by opening File Explorer and navigating to C:\Windows\Panther. Open the file setupact.log and search for the string below.
|
||||
```
|
||||
Detected boot environment
|
||||
|
||||
```
|
||||
|
||||
I would advise opening this file in Notepad++, since it’s a huge text file and Notepad may hang (at least it did for me, with 6 GB RAM).
|
||||
|
||||
You will find a couple of lines which will give you the information.
|
||||
```
|
||||
2017-11-27 09:11:31, Info IBS Callback_BootEnvironmentDetect:FirmwareType 1.
|
||||
2017-11-27 09:11:31, Info IBS Callback_BootEnvironmentDetect: Detected boot environment: BIOS
|
||||
|
||||
```
|
||||
|
||||
### Check if you are using UEFI or BIOS on Linux
|
||||
|
||||
The easiest way to find out if you are running UEFI or BIOS is to look for the folder `/sys/firmware/efi`. The folder will be missing if your system is using BIOS.
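A quick shell test for that folder:

```
# prints which firmware interface the running system booted with
[ -d /sys/firmware/efi ] && echo "UEFI" || echo "BIOS"
```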
|
||||
|
||||
![Find if system uses UEFI or BIOS on Ubuntu Linux][6]
|
||||
|
||||
**Alternative**: The other method is to install a package called efibootmgr.
|
||||
|
||||
On Debian and Ubuntu based distributions, you can install the efibootmgr package using the command below:
|
||||
```
|
||||
sudo apt install efibootmgr
|
||||
|
||||
```
|
||||
|
||||
Once done, type the below command:
|
||||
```
|
||||
sudo efibootmgr
|
||||
|
||||
```
|
||||
|
||||
If your system supports UEFI, it will output the EFI boot variables. If not, you will see a message saying EFI variables are not supported.
|
||||
|
||||
![][7]
|
||||
|
||||
### Final Words
|
||||
|
||||
Finding out whether your system is using UEFI or BIOS is easy. While features like faster boot and Secure Boot give UEFI the upper hand, there is not much that should bother you if you are using BIOS – unless you are planning to boot from a disk larger than 2 TB.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/check-uefi-or-bios/
|
||||
|
||||
作者:[Ambarish Kumar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/ambarish/
|
||||
[1]:https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
|
||||
[2]:https://www.lifewire.com/bios-basic-input-output-system-2625820
|
||||
[3]:https://www.howtogeek.com/56958/htg-explains-how-uefi-will-replace-the-bios/
|
||||
[4]:https://itsfoss.com/wp-content/uploads/2018/02/uefi-or-bios-800x450.png
|
||||
[5]:https://itsfoss.com/wp-content/uploads/2018/01/BIOS-800x491.png
|
||||
[6]:https://itsfoss.com/wp-content/uploads/2018/02/uefi-bios.png
|
||||
[7]:https://itsfoss.com/wp-content/uploads/2018/01/bootmanager.jpg
|
@ -1,50 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Gnome without chrome-gnome-shell
|
||||
======
|
||||
|
||||
New laptop: it has a touchscreen and can be folded into a tablet. I heard gnome-shell would be a good choice of desktop environment, and I managed to tweak it enough that I can reuse my existing habits.
|
||||
|
||||
I have a big problem, however, with how it encourages one to download random extensions off the internet and run them as part of the whole desktop environment. I have an even bigger problem with [gnome-core][1] having a hard dependency on [chrome-gnome-shell][2], a plugin which cannot be disabled without editing files in `/etc` as root, and which exposes parts of my desktop environment to websites.
|
||||
|
||||
Visit [this site][3] and it will know which extensions you have installed, and it will be able to install more. I do not trust that, I do not need that, I do not want that. I am horrified by the idea of that.
|
||||
|
||||
[I made a workaround.][4]
|
||||
|
||||
How can one do the same for firefox?
|
||||
|
||||
### Description
|
||||
|
||||
chrome-gnome-shell is a hard dependency of gnome-core, and it installs a browser plugin that one may not want, and mandates its use by system-wide chrome policies.
|
||||
|
||||
I consider having chrome-gnome-shell an unneeded increase of the attack surface of my system, in exchange for the dubious privilege of being able to download and execute, as my main user, random unreviewed code.
|
||||
|
||||
This package satisfies the chrome-gnome-shell dependency, but installs nothing.
|
||||
|
||||
Note that after installing this package you need to purge chrome-gnome-shell if it was previously installed, to have it remove its chromium policy files in /etc/chromium.
|
||||
|
||||
### Instructions
|
||||
```
|
||||
apt install equivs
|
||||
equivs-build contain-gnome-shell
|
||||
sudo dpkg -i contain-gnome-shell_1.0_all.deb
|
||||
sudo dpkg --purge chrome-gnome-shell
|
||||
|
||||
```
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.enricozini.org/blog/2018/debian/gnome-without-chrome-gnome-shell/
|
||||
|
||||
作者:[Enrico Zini][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.enricozini.org/
|
||||
[1]:https://packages.debian.org/gnome-core
|
||||
[2]:https://packages.debian.org/chrome-gnome-shell
|
||||
[3]:https://extensions.gnome.org/
|
||||
[4]:https://salsa.debian.org/enrico/contain-gnome-shell
|
@ -0,0 +1,268 @@
|
||||
Protecting Code Integrity with PGP — Part 1: Basic Concepts and Tools
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/pgp-security.jpg?itok=lulwyzYc)
|
||||
|
||||
In this article series, we take an in-depth look at using PGP to ensure the integrity of software. These articles will provide practical guidelines aimed at developers working on free software projects and will cover the following topics:
|
||||
|
||||
1. PGP basics and best practices
|
||||
|
||||
2. How to use PGP with Git
|
||||
|
||||
3. How to protect your developer accounts
|
||||
|
||||
|
||||
|
||||
|
||||
We use the term "Free" as in "Freedom," but the guidelines set out in this series can also be used for any other kind of software that relies on contributions from a distributed team of developers. If you write code that goes into public source repositories, you can benefit from getting acquainted with and following this guide.
|
||||
|
||||
### Structure
|
||||
|
||||
Each section is split into two areas:
|
||||
|
||||
* The checklist that can be adapted to your project's needs
|
||||
|
||||
* Free-form list of considerations that explain what dictated these decisions, together with configuration instructions
|
||||
|
||||
|
||||
|
||||
|
||||
#### Checklist priority levels
|
||||
|
||||
The items in each checklist include the priority level, which we hope will help guide your decision:
|
||||
|
||||
* (ESSENTIAL) items should definitely be high on the consideration list. If not implemented, they will introduce high risks to the code that gets committed to the open-source project.
|
||||
|
||||
* (NICE) to have items will improve the overall security, but will affect how you interact with your work environment, and probably require learning new habits or unlearning old ones.
|
||||
|
||||
|
||||
|
||||
|
||||
Remember, these are only guidelines. If you feel these priority levels do not reflect your project's commitment to security, you should adjust them as you see fit.
|
||||
|
||||
## Basic PGP concepts and tools
|
||||
|
||||
### Checklist
|
||||
|
||||
1. Understand the role of PGP in Free Software Development (ESSENTIAL)
|
||||
|
||||
2. Understand the basics of Public Key Cryptography (ESSENTIAL)
|
||||
|
||||
3. Understand PGP Encryption vs. Signatures (ESSENTIAL)
|
||||
|
||||
4. Understand PGP key identities (ESSENTIAL)
|
||||
|
||||
5. Understand PGP key validity (ESSENTIAL)
|
||||
|
||||
6. Install GnuPG utilities (version 2.x) (ESSENTIAL)
|
||||
|
||||
|
||||
|
||||
|
||||
### Considerations
|
||||
|
||||
The Free Software community has long relied on PGP for assuring the authenticity and integrity of software products it produced. You may not be aware of it, but whether you are a Linux, Mac or Windows user, you have previously relied on PGP to ensure the integrity of your computing environment:
|
||||
|
||||
* Linux distributions rely on PGP to ensure that binary or source packages have not been altered between when they have been produced and when they are installed by the end-user.
|
||||
|
||||
* Free Software projects usually provide detached PGP signatures to accompany released software archives, so that downstream projects can verify the integrity of downloaded releases before integrating them into their own distributed downloads.
|
||||
|
||||
* Free Software projects routinely rely on PGP signatures within the code itself in order to track provenance and verify integrity of code committed by project developers.
|
||||
|
||||
|
||||
|
||||
|
||||
This is very similar to developer certificates/code signing mechanisms used by programmers working on proprietary platforms. In fact, the core concepts behind these two technologies are very much the same -- they differ mostly in the technical aspects of the implementation and the way they delegate trust. PGP does not rely on centralized Certification Authorities, but instead lets each user assign their own trust to each certificate.
|
||||
|
||||
Our goal is to get your project on board using PGP for code provenance and integrity tracking, following best practices and observing basic security precautions.
|
||||
|
||||
### Extremely Basic Overview of PGP operations
|
||||
|
||||
You do not need to know the exact details of how PGP works -- understanding the core concepts is enough to be able to use it successfully for our purposes. PGP relies on Public Key Cryptography to convert plain text into encrypted text. This process requires two distinct keys:
|
||||
|
||||
* A public key that is known to everyone
|
||||
|
||||
* A private key that is only known to the owner
|
||||
|
||||
|
||||
|
||||
|
||||
#### Encryption
|
||||
|
||||
For encryption, PGP uses the public key of the owner to create a message that is only decryptable using the owner's private key:
|
||||
|
||||
1. The sender generates a random encryption key ("session key")
|
||||
|
||||
2. The sender encrypts the contents using that session key (using a symmetric cipher)
|
||||
|
||||
3. The sender encrypts the session key using the recipient's public PGP key
|
||||
|
||||
4. The sender sends both the encrypted contents and the encrypted session key to the recipient
|
||||
|
||||
|
||||
|
||||
|
||||
To decrypt:
|
||||
|
||||
1. The recipient decrypts the session key using their private PGP key
|
||||
|
||||
2. The recipient uses the session key to decrypt the contents of the message
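As a concrete sketch of that round trip with gpg (the file name is a placeholder, and it assumes Alice's public key from this guide is already in your keyring):

```
$ gpg --encrypt --recipient alice.engineer@example.com report.txt   # writes report.txt.gpg
$ gpg --decrypt report.txt.gpg > report.txt   # run by Alice, who holds the private key
```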
|
||||
|
||||
|
||||
|
||||
|
||||
#### Signatures
|
||||
|
||||
For creating signatures, the private/public PGP keys are used the opposite way:
|
||||
|
||||
1. The signer generates the checksum hash of the contents
|
||||
|
||||
2. The signer uses their own private PGP key to encrypt that checksum
|
||||
|
||||
3. The signer provides the encrypted checksum alongside the contents
|
||||
|
||||
|
||||
|
||||
|
||||
To verify the signature:
|
||||
|
||||
1. The verifier generates their own checksum hash of the contents
|
||||
|
||||
2. The verifier uses the signer's public PGP key to decrypt the provided checksum
|
||||
|
||||
3. If the checksums match, the integrity of the contents is verified
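In gpg terms, a hypothetical detached-signature round trip (the file names are placeholders) looks like this:

```
$ gpg --armor --detach-sign release.tar.gz        # signer: writes release.tar.gz.asc
$ gpg --verify release.tar.gz.asc release.tar.gz  # verifier: checks the signature against the contents
```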
|
||||
|
||||
|
||||
|
||||
|
||||
#### Combined usage
|
||||
|
||||
Frequently, encrypted messages are also signed with the sender's own PGP key. This should be the default whenever using encrypted messaging, as encryption without authentication is not very meaningful (unless you are a whistleblower or a secret agent and need plausible deniability).
|
||||
|
||||
### Understanding Key Identities
|
||||
|
||||
Each PGP key must have one or multiple Identities associated with it. Usually, an "Identity" is the person's full name and email address in the following format:
|
||||
```
|
||||
Alice Engineer <alice.engineer@example.com>
|
||||
|
||||
```
|
||||
|
||||
Sometimes it will also contain a comment in brackets, to tell the end-user more about that particular key:
|
||||
```
|
||||
Bob Designer (obsolete 1024-bit key) <bob.designer@example.com>
|
||||
|
||||
```
|
||||
|
||||
Since people can be associated with multiple professional and personal entities, they can have multiple identities on the same key:
|
||||
```
|
||||
Alice Engineer <alice.engineer@example.com>
|
||||
Alice Engineer <aengineer@personalmail.example.org>
|
||||
Alice Engineer <webmaster@girlswhocode.example.net>
|
||||
|
||||
```
|
||||
|
||||
When multiple identities are used, one of them would be marked as the "primary identity" to make searching easier.
|
||||
|
||||
### Understanding Key Validity
|
||||
|
||||
To be able to use someone else's public key for encryption or verification, you need to be sure that it actually belongs to the right person (Alice) and not to an impostor (Eve). In PGP, this certainty is called "key validity:"
|
||||
|
||||
* Validity: full -- means we are pretty sure this key belongs to Alice
|
||||
|
||||
* Validity: marginal -- means we are somewhat sure this key belongs to Alice
|
||||
|
||||
* Validity: unknown -- means there is no assurance at all that this key belongs to Alice
|
||||
|
||||
|
||||
|
||||
|
||||
#### Web of Trust (WOT) vs. Trust on First Use (TOFU)
|
||||
|
||||
PGP incorporates a trust delegation mechanism known as the "Web of Trust." At its core, this is an attempt to replace the need for centralized Certification Authorities of the HTTPS/TLS world. Instead of various software makers dictating who should be your trusted certifying entity, PGP leaves this responsibility to each user.
|
||||
|
||||
Unfortunately, very few people understand how the Web of Trust works, and even fewer bother to keep it going. It remains an important aspect of the OpenPGP specification, but recent versions of GnuPG (2.2 and above) have implemented an alternative mechanism called "Trust on First Use" (TOFU).
|
||||
|
||||
You can think of TOFU as "the SSH-like approach to trust." With SSH, the first time you connect to a remote system, its key fingerprint is recorded and remembered. If the key changes in the future, the SSH client will alert you and refuse to connect, forcing you to make a decision on whether you choose to trust the changed key or not.
|
||||
|
||||
Similarly, the first time you import someone's PGP key, it is assumed to be trusted. If at any point in the future GnuPG comes across another key with the same identity, both the previously imported key and the new key will be marked as invalid and you will need to manually figure out which one to keep.
|
||||
|
||||
In this guide, we will be using the TOFU trust model.
|
||||
|
||||
### Installing OpenPGP software
|
||||
|
||||
First, it is important to understand the distinction between PGP, OpenPGP, GnuPG and gpg:
|
||||
|
||||
* PGP ("Pretty Good Privacy") is the name of the original commercial software
|
||||
|
||||
* OpenPGP is the IETF standard compatible with the original PGP tool
|
||||
|
||||
* GnuPG ("Gnu Privacy Guard") is free software that implements the OpenPGP standard
|
||||
|
||||
* The command-line tool for GnuPG is called "gpg"
|
||||
|
||||
|
||||
|
||||
|
||||
Today, the term "PGP" is almost universally used to mean "the OpenPGP standard," not the original commercial software, and therefore "PGP" and "OpenPGP" are interchangeable. The terms "GnuPG" and "gpg" should only be used when referring to the tools, not to the output they produce or OpenPGP features they implement. For example:
|
||||
|
||||
* PGP (not GnuPG or GPG) key
|
||||
|
||||
* PGP (not GnuPG or GPG) signature
|
||||
|
||||
* PGP (not GnuPG or GPG) keyserver
|
||||
|
||||
|
||||
|
||||
|
||||
Understanding this should protect you from an inevitable pedantic "actually" from other PGP users you come across.
|
||||
|
||||
#### Installing GnuPG
|
||||
|
||||
If you are using Linux, you should already have GnuPG installed. On a Mac, you should install [GPG-Suite][1] or you can use `brew install gnupg2`. On a Windows PC, you should install [GPG4Win][2], and you will probably need to adjust some of the commands in the guide to work for you, unless you have a unix-like environment set up. For all other platforms, you'll need to do your own research to find the correct places to download and install GnuPG.
|
||||
|
||||
#### GnuPG 1 vs. 2
|
||||
|
||||
Both GnuPG v.1 and GnuPG v.2 implement the same standard, but they provide incompatible libraries and command-line tools, so many distributions ship both the legacy version 1 and the latest version 2. You need to make sure you are always using GnuPG v.2.
|
||||
|
||||
First, run:
|
||||
```
|
||||
$ gpg --version | head -n1
|
||||
|
||||
```
|
||||
|
||||
If you see gpg (GnuPG) 1.4.x, then you are using GnuPG v.1. Try the gpg2 command:
|
||||
```
|
||||
$ gpg2 --version | head -n1
|
||||
|
||||
```
|
||||
|
||||
If you see gpg (GnuPG) 2.x.x, then you are good to go. This guide will assume you have version 2.2 of GnuPG (or later). If you are using version 2.0 of GnuPG, some of the commands in this guide will not work, and you should consider installing the latest 2.2 version of GnuPG.
|
||||
|
||||
#### Making sure you always use GnuPG v.2
|
||||
|
||||
If you have both gpg and gpg2 commands, you should make sure you are always using GnuPG v2, not the legacy version. You can make sure of this by setting the alias:
|
||||
```
|
||||
$ alias gpg=gpg2
|
||||
|
||||
```
|
||||
|
||||
You can put that in your `.bashrc` to make sure it's always loaded whenever you use the gpg commands.
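For example, a minimal sketch of persisting the alias (assuming bash is your login shell):

```
$ echo "alias gpg=gpg2" >> ~/.bashrc
$ source ~/.bashrc   # reload it for the current session
```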
|
||||
|
||||
In part 2 of this series, we will explain the basic steps for generating and protecting your master PGP key.
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux" ][3]course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools
|
||||
|
||||
作者:[Konstantin Ryabitsev][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/mricon
|
||||
[1]:https://gpgtools.org/
|
||||
[2]:https://www.gpg4win.org/
|
||||
[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -1,5 +1,7 @@
|
||||
The List Of Useful Bash Keyboard Shortcuts
|
||||
======
|
||||
translating by heart4lor
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/02/Bash-720x340.jpg)
|
||||
|
||||
Nowadays, I spend more time in the Terminal, trying to accomplish more in the CLI than in the GUI. I learned many BASH tricks over time. Here is a list of useful BASH shortcuts that every Linux user should know to get things done faster in their BASH shell. I won’t claim that this list is a complete list of BASH shortcuts, but it is just enough to move around your BASH shell faster than before. Learning how to navigate faster in the BASH shell not only saves time, but also makes you proud of yourself for learning something worthwhile. Well, let’s get started.
|
||||
|
@ -1,119 +0,0 @@
|
||||
How to Get Started Using WSL in Windows 10
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/wsl-main.png?itok=wJ5WrU9U)
|
||||
|
||||
In the [previous article][1], we talked about the Windows Subsystem for Linux (WSL) and its target audience. In this article, we will walk through the process of getting started with WSL on your Windows 10 machine.
|
||||
|
||||
### Prepare your system for WSL
|
||||
|
||||
You must be running the latest version of Windows 10 with the Fall Creators Update installed. Then, check which version of Windows 10 is installed on your system by searching for “About” in the search box of the Start menu. You should be running version 1709 or later to use WSL.
|
||||
|
||||
Here is a screenshot from my system.
|
||||
|
||||
![kHFKOvrbG1gXdB9lsbTqXC4N4w0Lbsz1Bul5ey9m][2]
|
||||
|
||||
If an older version is installed, you need to download and install the Windows 10 Fall Creators Update (FCU) from [this][3] page. Once FCU is installed, go to Update Settings (just search for “updates” in the search box of the Start menu) and install any available updates.
|
||||
|
||||
Go to Turn Windows Features On or Off (you know the drill by now) and scroll to the bottom and tick on the box Windows Subsystem for Linux, as shown in the following figure. Click Ok. It will download and install the needed packages.
|
||||
|
||||
![oV1mDqGe3zwQgL0N3rDasHH6ZwHtxaHlyrLzjw7x][4]
|
||||
|
||||
Upon the completion of the installation, the system will offer to restart. Go ahead and reboot your machine. WSL won’t launch without a system reboot, as shown below:
|
||||
|
||||
![GsNOQLJlHeZbkaCsrDIhfVvEoycu3D0upoTdt6aN][5]
|
||||
|
||||
Once your system starts, go back to the Turn features on or off setting to confirm that the box next to Windows Subsystem for Linux is selected.
|
||||
|
||||
### Install Linux in Windows
|
||||
|
||||
There are many ways to install Linux on Windows, but we will choose the easiest way. Open the Windows Store and search for Linux. You will see the following option:
|
||||
|
||||
![YAR4UgZiFAy2cdkG4U7jQ7_m81lrxR6aHSMOdED7][6]
|
||||
|
||||
Click on Get the apps, and Windows Store will provide you with three options: Ubuntu, openSUSE Leap 42, and SUSE Linux Enterprise Server. You can install all three distributions side by side and run all three distributions simultaneously. To be able to use SLE, you need a subscription.
|
||||
|
||||
In this case, I am installing openSUSE Leap 42 and Ubuntu. Select your desired distro and click on the Get button to install it. Once installed, you can launch openSUSE in Windows. It can be pinned to the Start menu for quick access.
|
||||
|
||||
![4LU6eRrzDgBprDuEbSFizRuP1J_zS3rBnoJbU2OA][7]
|
||||
|
||||
### Using Linux in Windows
|
||||
|
||||
When you launch the distro, it will open the Bash shell and install the distro. Once installed, you can go ahead and start using it. Simple. Just bear in mind that openSUSE creates no regular user and runs as the root user, whereas Ubuntu will ask you to create a user. On Ubuntu, you can perform administrative tasks with sudo.
|
||||
|
||||
You can easily create a user on openSUSE:
|
||||
```
|
||||
# useradd [username]
|
||||
|
||||
# passwd [username]
|
||||
|
||||
```
|
||||
|
||||
Create a new password for the user and you are all set. For example:
|
||||
```
|
||||
# useradd swapnil
|
||||
|
||||
# passwd swapnil
|
||||
|
||||
```
|
||||
|
||||
You can switch from root to this user by running the su command:
|
||||
```
|
||||
su swapnil
|
||||
|
||||
```
|
||||
|
||||
You do need a non-root user to perform many tasks, such as using rsync to move files on your local machine.
|
||||
|
||||
The first thing you need to do is update the distro. For openSUSE:
|
||||
```
|
||||
zypper up
|
||||
|
||||
```
|
||||
|
||||
For Ubuntu:
|
||||
```
|
||||
sudo apt-get update
|
||||
|
||||
sudo apt-get dist-upgrade
|
||||
|
||||
```
|
||||
|
||||
![7cRgj1O6J8yfO3L4ol5sP-ZCU7_uwOuEoTzsuVW9][8]
|
||||
|
||||
You now have native Linux Bash shell on Windows. Want to ssh into your server from Windows 10? There’s no need to install puTTY or Cygwin. Just open Bash and then ssh into your server. Easy peasy.
|
||||
|
||||
Want to rsync files to your server? Go ahead and use rsync. It really transforms Windows into a usable machine for those Windows users who want native Linux command-line tools on their machines without having to deal with VMs.
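For example (the server name and paths are placeholders), both work exactly as they would on a native Linux box:

```
ssh swapnil@server.example.com                              # log in to a remote server
rsync -avz ~/Documents/ swapnil@server.example.com:backup/  # sync a local folder to it
```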
|
||||
|
||||
### Where is Fedora?
|
||||
|
||||
You may be wondering about Fedora. Unfortunately, Fedora is not yet available through the store. Matthew Miller, the release manager of Fedora said on Twitter, “We're working on resolving some non-technical issues. I'm afraid I don't have any more than that right now.”
|
||||
|
||||
We don’t know yet what these non-technical issues are. When some users asked why the WSL team could not publish Fedora themselves -- after all, it’s an open source project -- Rich Turner, a project manager at Microsoft, [responded][9], “We have a policy of not publishing others' IP into the store. We believe that the community would MUCH prefer to see a distro published by the distro owner vs. seeing it published by Microsoft or anyone else that isn't the authoritative source.”
|
||||
|
||||
So, Microsoft can’t just go ahead and publish Debian or Arch Linux on Windows Store. The onus is on the official communities to bring their distros to Windows 10 users.
|
||||
|
||||
### What’s next
|
||||
|
||||
In the next article, we will talk about using Windows 10 as a Linux machine and performing most of the tasks that you would perform on your Linux system using the command-line tools.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
|
||||
|
||||
作者:[SWAPNIL BHARTIYA][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/arnieswap
|
||||
[1]:https://www.linux.com/blog/learn/2018/2/windows-subsystem-linux-bridge-between-two-platforms
|
||||
[2]:https://lh6.googleusercontent.com/kHFKOvrbG1gXdB9lsbTqXC4N4w0Lbsz1Bul5ey9mr_E255GiiBxf8cRlatrte6z23yvo8lHJG8nQ_WeHhUNYqPp7kHuQTTMueqMshCT71JsbMr2Wih9KFHuHgNg1BclWz-iuBt4O
|
||||
[3]:https://www.microsoft.com/en-us/software-download/windows10
|
||||
[4]:https://lh4.googleusercontent.com/oV1mDqGe3zwQgL0N3rDasHH6ZwHtxaHlyrLzjw7xF9M9_AcHPNSxM18KDWK2ZpVcUOfxVVpNH9LwUJT5EtRE7zUrJC_gWV5f345SZRAgXcJzOE-8rM8-RCPTNtns6vVP37V5Eflp
|
||||
[5]:https://lh5.googleusercontent.com/GsNOQLJlHeZbkaCsrDIhfVvEoycu3D0upoTdt6aNEozAcQA59Z3hDu_SxT6I4K4gwxLPX0YnmUsCKjaQaaG2PoAgUYMcN0Zv0tBFaoUL3sZryddM4mdRj1E2tE-IK_GLK4PDa4zf
|
||||
[6]:https://lh3.googleusercontent.com/YAR4UgZiFAy2cdkG4U7jQ7_m81lrxR6aHSMOdED7MKEoYxEsX_yLwyMj9N2edt3GJ2JLx6mUsFEZFILCCSBU2sMOqveFVWZTHcCXhFi5P2Xk-9Ikc3NK9seup5CJObIcYJPORdPW
|
||||
[7]:https://lh6.googleusercontent.com/4LU6eRrzDgBprDuEbSFizRuP1J_zS3rBnoJbU2OAOH3Mx7nfOROfyf81k1s4YQyLBcu0qSXOoaqbYkXL5Wpp9gNCdKH_WsEcqWzjG6uXzYvCYQ42psOz6Iz3NF7ElsPrdiFI0cYv
|
||||
[8]:https://lh6.googleusercontent.com/7cRgj1O6J8yfO3L4ol5sP-ZCU7_uwOuEoTzsuVW9cU5xiBWz_cpZ1IBidNT0C1wg9zROIncViUzXD0vPoH5cggQtuwkanRfRdDVXOI48AcKFLt-Iq2CBF4mGRwqqWvSOhb0HFpjm
|
||||
[9]:https://github.com/Microsoft/WSL/issues/2584
|
@ -1,3 +1,5 @@
|
||||
translating by Auk7F7
|
||||
|
||||
Create a wiki on your Linux desktop with Zim
|
||||
======
|
||||
|
||||
|
@ -1,250 +0,0 @@
|
||||
Getting started with SQL
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
|
||||
|
||||
Building a database using SQL is simpler than most people think. In fact, you don't even need to be an experienced programmer to use SQL to create a database. In this article, I'll explain how to create a simple relational database management system (RDMS) using MySQL 5.6. Before I get started, I want to quickly thank [SQL Fiddle][1], which I used to run my script. It provides a useful sandbox for testing simple scripts.
|
||||
|
||||
|
||||
In this tutorial, I'll build a database that uses the simple schema shown in the entity relationship diagram (ERD) below. The database lists students and the course each is studying. I used two entities (i.e., tables) to keep things simple, with only a single relationship and dependency. The entities are called `dbo_students` and `dbo_courses`.
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/erd.png)
|
||||
|
||||
The multiplicity of the database is 1-to-many, as each course can contain many students, but each student can study only one course.
|
||||
|
||||
A quick note on terminology:
|
||||
|
||||
1. A table is called an entity.
|
||||
2. A field is called an attribute.
|
||||
3. A record is called a tuple.
|
||||
4. The script used to construct the database is called a schema.
|
||||
|
||||
|
||||
|
||||
### Constructing the schema
|
||||
|
||||
To construct the database, use the `CREATE TABLE <table name>` command, then define each field name and data type. This database uses `VARCHAR(n)` (string) and `INT(n)` (integer). For `VARCHAR(n)`, n is the maximum number of characters the field can hold; for `INT(n)`, n is only a display width, so `INT(2)` would display the value 1 as 01.
|
||||
|
||||
This is the code used to create the two tables:
|
||||
```
|
||||
CREATE TABLE dbo_students
|
||||
|
||||
(
|
||||
|
||||
student_id INT(2) AUTO_INCREMENT NOT NULL,
|
||||
|
||||
student_name VARCHAR(50),
|
||||
|
||||
course_studied INT(2),
|
||||
|
||||
PRIMARY KEY (student_id)
|
||||
|
||||
);
|
||||
|
||||
|
||||
|
||||
CREATE TABLE dbo_courses
|
||||
|
||||
(
|
||||
|
||||
course_id INT(2) AUTO_INCREMENT NOT NULL,
|
||||
|
||||
course_name VARCHAR(30),
|
||||
|
||||
PRIMARY KEY (course_id)
|
||||
|
||||
);
|
||||
|
||||
```
|
||||
|
||||
`NOT NULL` means that the field cannot be empty, and `AUTO_INCREMENT` means that when a new tuple is added, the ID number will be auto-generated by adding 1 to the previously stored ID number, so every tuple receives a unique ID. `PRIMARY KEY` is the unique identifier attribute for each table. This means each tuple has its own distinct identity.
|
||||
|
||||
### Relationships as a constraint
|
||||
|
||||
As it stands, the two tables exist on their own with no connections or relationships. To connect them, a foreign key must be identified. In `dbo_students`, the foreign key is `course_studied`, the source of which is within `dbo_courses`, meaning that the field is referenced. The specific command within SQL is called a `CONSTRAINT`, and this relationship will be added using another command called `ALTER TABLE`, which allows tables to be edited even after the schema has been constructed.
|
||||
|
||||
The following code adds the relationship to the database construction script:
|
||||
```
|
||||
ALTER TABLE dbo_students
|
||||
|
||||
ADD CONSTRAINT FK_course_studied
|
||||
|
||||
FOREIGN KEY (course_studied) REFERENCES dbo_courses(course_id);
|
||||
|
||||
```
|
||||
|
||||
Using the `CONSTRAINT` command is not actually necessary, but it's good practice because it means the constraint can be named and it makes maintenance easier. Now that the database is complete, it's time to add some data.
|
||||
|
||||
### Adding data to the database
|
||||
|
||||
`INSERT INTO <table name>` is the command used to choose which attributes (i.e., fields) the data is added to. The entity name is defined first, then the attributes. Underneath this command is the data that will be added to that entity, creating a tuple. If `NOT NULL` has been specified, it means that the attribute cannot be left blank. The following code shows how to add records to the table:
|
||||
```
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(001,'Software Engineering');
|
||||
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(002,'Computer Science');
|
||||
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(003,'Computing');
|
||||
|
||||
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(001,'student1',001);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(002,'student2',002);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(003,'student3',002);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(004,'student4',003);
|
||||
|
||||
```
|
||||
|
||||
Now that the database schema is complete and data is added, it's time to run queries on the database.
|
||||
|
||||
### Queries
|
||||
|
||||
Queries follow a set structure using these commands:
|
||||
```
|
||||
SELECT <attributes>
|
||||
|
||||
FROM <entity>
|
||||
|
||||
WHERE <condition>
|
||||
|
||||
```
|
||||
|
||||
To display all records within the `dbo_courses` entity, showing the course code and course name, use an asterisk. This is a wildcard that eliminates the need to type all attribute names. (Its use is not recommended in production databases.) The code for this query is:
|
||||
```
|
||||
SELECT *
|
||||
|
||||
FROM dbo_courses
|
||||
|
||||
```
|
||||
|
||||
The output of this query shows all tuples in the table, so all available courses can be displayed:
|
||||
```
|
||||
| course_id | course_name |
|
||||
|
||||
|-----------|----------------------|
|
||||
|
||||
| 1 | Software Engineering |
|
||||
|
||||
| 2 | Computer Science |
|
||||
|
||||
| 3 | Computing |
|
||||
|
||||
```
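As a quick preview of the `WHERE` clause, the following sketch lists only the students on course 002. It is shown here being piped into the `mysql` command-line client; the `studentdb` database name is hypothetical, since the original script ran on SQL Fiddle:

```
mysql -u root -p studentdb <<'SQL'
SELECT student_name
FROM dbo_students
WHERE course_studied = 002;
SQL
```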
|
||||
|
||||
In a future article, I'll explain more complicated queries using one of the three types of joins: Inner, Outer, or Cross.
|
||||
|
||||
Here is the completed script:
|
||||
```
|
||||
CREATE TABLE dbo_students
|
||||
|
||||
(
|
||||
|
||||
student_id INT(2) AUTO_INCREMENT NOT NULL,
|
||||
|
||||
student_name VARCHAR(50),
|
||||
|
||||
course_studied INT(2),
|
||||
|
||||
PRIMARY KEY (student_id)
|
||||
|
||||
);
|
||||
|
||||
|
||||
|
||||
CREATE TABLE dbo_courses
|
||||
|
||||
(
|
||||
|
||||
course_id INT(2) AUTO_INCREMENT NOT NULL,
|
||||
|
||||
course_name VARCHAR(30),
|
||||
|
||||
PRIMARY KEY (course_id)
|
||||
|
||||
);
|
||||
|
||||
|
||||
|
||||
ALTER TABLE dbo_students
|
||||
|
||||
ADD CONSTRAINT FK_course_studied
|
||||
|
||||
FOREIGN KEY (course_studied) REFERENCES dbo_courses(course_id);
|
||||
|
||||
|
||||
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(001,'Software Engineering');
|
||||
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(002,'Computer Science');
|
||||
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(003,'Computing');
|
||||
|
||||
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(001,'student1',001);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(002,'student2',002);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(003,'student3',002);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(004,'student4',003);
|
||||
|
||||
|
||||
|
||||
SELECT *
|
||||
|
||||
FROM dbo_courses
|
||||
|
||||
```
|
||||
|
||||
### Learning more
|
||||
|
||||
SQL isn't difficult; I think it is simpler than programming, and the language is universal to different database systems. Note that `dbo.<entity>` is not a required entity-naming convention; I used it simply because it is the standard in Microsoft SQL Server.
|
||||
|
||||
If you'd like to learn more, the best guide this side of the internet is [W3Schools.com][2]'s comprehensive guide to SQL for all database platforms.
|
||||
|
||||
Please feel free to play around with my database. Also, if you have suggestions or questions, please respond in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/getting-started-sql
|
||||
|
||||
作者:[Aaron Cocker][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/aaroncocker
|
||||
[1]:http://sqlfiddle.com
|
||||
[2]:https://www.w3schools.com/sql/default.asp
|
@ -0,0 +1,176 @@
|
||||
Protecting Code Integrity with PGP — Part 2: Generating Your Master Key
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/binary-1538717_1920.png?itok=kv_sxSnf)
|
||||
|
||||
In this article series, we're taking an in-depth look at using PGP and provide practical guidelines for developers working on free software projects. In the previous article, we provided an introduction to [basic tools and concepts][1]. In this installment, we show how to generate and protect your master PGP key.
|
||||
|
||||
### Checklist
|
||||
|
||||
1. Generate a 4096-bit RSA master key (ESSENTIAL)
|
||||
|
||||
2. Back up the master key using paperkey (ESSENTIAL)
|
||||
|
||||
3. Add all relevant identities (ESSENTIAL)
|
||||
|
||||
|
||||
|
||||
|
||||
### Considerations
|
||||
|
||||
#### Understanding the "Master" (Certify) key
|
||||
|
||||
In this and the next section, we'll talk about the "master key" and "subkeys." It is important to understand the following:
|
||||
|
||||
1. There are no technical differences between the "master key" and "subkeys."
|
||||
|
||||
2. At creation time, we assign functional limitations to each key by giving it specific capabilities.
|
||||
|
||||
3. A PGP key can have four capabilities.
|
||||
|
||||
* [S] key can be used for signing
|
||||
|
||||
* [E] key can be used for encryption
|
||||
|
||||
* [A] key can be used for authentication
|
||||
|
||||
* [C] key can be used for certifying other keys
|
||||
|
||||
4. A single key may have multiple capabilities.
|
||||
|
||||
|
||||
|
||||
|
||||
The key carrying the [C] (certify) capability is considered the "master" key because it is the only key that can be used to indicate relationship with other keys. Only the [C] key can be used to:
|
||||
|
||||
* Add or revoke other keys (subkeys) with S/E/A capabilities
|
||||
|
||||
* Add, change or revoke identities (uids) associated with the key
|
||||
|
||||
* Add or change the expiration date on itself or any subkey
|
||||
|
||||
* Sign other people's keys for the web of trust purposes
|
||||
|
||||
|
||||
|
||||
|
||||
In the Free Software world, the [C] key is your digital identity. Once you create that key, you should take extra care to protect it and prevent it from falling into malicious hands.
|
||||
|
||||
#### Before you create the master key
|
||||
|
||||
Before you create your master key you need to pick your primary identity and your master passphrase.
|
||||
|
||||
##### Primary identity
|
||||
|
||||
Identities are strings using the same format as the "From" field in emails:
|
||||
```
|
||||
Alice Engineer <alice.engineer@example.org>
|
||||
|
||||
```
|
||||
|
||||
You can create new identities, revoke old ones, and change which identity is your "primary" one at any time. Since the primary identity is shown in all GnuPG operations, you should pick a name and address that are both professional and the most likely ones to be used for PGP-protected communication, such as your work address or the address you use for signing off on project commits.
|
||||
|
||||
##### Passphrase
|
||||
|
||||
The passphrase is used exclusively for encrypting the private key with a symmetric algorithm while it is stored on disk. If the contents of your `.gnupg` directory ever get leaked, a good passphrase is the last line of defense preventing a thief from impersonating you online, which is why it is important to set one up.
|
||||
|
||||
A good guideline for a strong passphrase is 3-4 words from a rich or mixed dictionary that are not quotes from popular sources (songs, books, slogans). You'll be using this passphrase fairly frequently, so it should be both easy to type and easy to remember.
|
||||
|
||||
##### Algorithm and key strength
|
||||
|
||||
Even though GnuPG has had support for Elliptic Curve crypto for a while now, we'll be sticking to RSA keys, at least for a little while longer. While it is possible to start using ED25519 keys right now, it is likely that you will come across tools and hardware devices that will not be able to handle them correctly.
|
||||
|
||||
You may also wonder why the master key is 4096-bit, if later in the guide we state that 2048-bit keys should be good enough for the lifetime of RSA public key cryptography. The reasons are mostly social and not technical: master keys happen to be the most visible ones on the keychain, and some of the developers you interact with will inevitably judge you negatively if your master key has fewer bits than theirs.
|
||||
|
||||
#### Generate the master key
|
||||
|
||||
To generate your new master key, issue the following command, putting in the right values instead of "Alice Engineer:"
|
||||
```
|
||||
$ gpg --quick-generate-key 'Alice Engineer <alice@example.org>' rsa4096 cert
|
||||
|
||||
```
|
||||
|
||||
A dialog will pop up asking to enter the passphrase. Then, you may need to move your mouse around or type on some keys to generate enough entropy until the command completes.
|
||||
|
||||
Review the output of the command; it will be something like this:
|
||||
```
|
||||
pub rsa4096 2017-12-06 [C] [expires: 2019-12-06]
|
||||
111122223333444455556666AAAABBBBCCCCDDDD
|
||||
uid Alice Engineer <alice@example.org>
|
||||
|
||||
```
|
||||
|
||||
Note the long string on the second line -- that is the full fingerprint of your newly generated key. Key IDs can be represented in three different forms:
|
||||
|
||||
* Fingerprint, a full 40-character key identifier
|
||||
|
||||
* Long, the last 16 characters of the fingerprint (AAAABBBBCCCCDDDD)
|
||||
|
||||
* Short, last 8 characters of the fingerprint (CCCCDDDD)
|
||||
|
||||
|
||||
|
||||
|
||||
You should avoid using 8-character "short key IDs" as they are not sufficiently unique.
|
||||
|
||||
At this point, I suggest you open a text editor, copy the fingerprint of your new key and paste it there. You'll need to use it for the next few steps, so having it close by will be handy.
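If you ever misplace that note, the fingerprint can be retrieved again from your keyring (using the example identity from this guide):

```
$ gpg --fingerprint alice@example.org
```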
|
||||
|
||||
#### Back up your master key
|
||||
|
||||
For disaster recovery purposes -- and especially if you intend to use the Web of Trust and collect key signatures from other project developers -- you should create a hardcopy backup of your private key. This is supposed to be the "last resort" measure in case all other backup mechanisms have failed.
|
||||
|
||||
The best way to create a printable hardcopy of your private key is using the paperkey software written for this very purpose. Paperkey is available on all Linux distros, as well as installable via `brew install paperkey` on Macs.
|
||||
|
||||
Run the following command, replacing [fpr] with the full fingerprint of your key:
|
||||
```
|
||||
$ gpg --export-secret-key [fpr] | paperkey -o /tmp/key-backup.txt
|
||||
|
||||
```
|
||||
|
||||
The output will be in a format that is easy to OCR or input by hand, should you ever need to recover it. Print out that file, then take a pen and write the key passphrase on the margin of the paper. This is a required step because the key printout is still encrypted with the passphrase, and if you ever change the passphrase on your key, you will not remember what it used to be when you first created it -- guaranteed.
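Restoring from the paper backup later is also done with paperkey, which recombines the printed secret bits with your public key (the file names here are hypothetical):

```
$ paperkey --pubring alice-public.gpg --secrets key-backup.txt --output alice-secret.gpg
```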
|
||||
|
||||
Put the resulting printout and the hand-written passphrase into an envelope and store in a secure and well-protected place, preferably away from your home, such as your bank vault.
|
||||
|
||||
**Note on printers:** Long gone are days when printers were dumb devices connected to your computer's parallel port. These days they have full operating systems, hard drives, and cloud integration. Since the key content we send to the printer will be encrypted with the passphrase, this is a fairly safe operation, but use your best paranoid judgement.
|
||||
|
||||
#### Add relevant identities
|
||||
|
||||
If you have multiple relevant email addresses (personal, work, open-source project, etc), you should add them to your master key. You don't need to do this for any addresses that you don't expect to use with PGP (e.g., probably not your school alumni address).
|
||||
|
||||
The command is (put the full key fingerprint instead of [fpr]):
|
||||
```
|
||||
$ gpg --quick-add-uid [fpr] 'Alice Engineer <allie@example.net>'
|
||||
|
||||
```
|
||||
|
||||
You can review the UIDs you've already added using:
|
||||
```
|
||||
$ gpg --list-key [fpr] | grep ^uid
|
||||
|
||||
```
|
||||
|
||||
##### Pick the primary UID
|
||||
|
||||
GnuPG will make the latest UID you add your primary UID, so if that is different from what you want, you should set it back:
|
||||
```
|
||||
$ gpg --quick-set-primary-uid [fpr] 'Alice Engineer <alice@example.org>'
|
||||
|
||||
```
|
||||
|
||||
Next time, we'll look at generating PGP subkeys, which are the keys you'll actually be using for day-to-day work.
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux" ][2]course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/PGP/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key
|
||||
|
||||
作者:[KONSTANTIN RYABITSEV][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/mricon
|
||||
[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools
|
||||
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -1,3 +1,4 @@
|
||||
##amwps290 translating
|
||||
How to configure an Apache web server
|
||||
======
|
||||
|
||||
|
@ -0,0 +1,123 @@
|
||||
Plasma Mobile Could Give Life to a Mobile Linux Experience
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/plasma-mobile_0.png?itok=uUIQFRcm)
|
||||
|
||||
In the past few years, it’s become clear that, outside of powering Android, Linux on mobile devices has been a resounding failure. Canonical came close, even releasing devices running Ubuntu Touch. Unfortunately, the idea of [Scopes][1] was doomed before it touched down on its first piece of hardware and subsequently died a silent death.
|
||||
|
||||
The next best hope for mobile Linux comes in the form of the [Samsung DeX][2] program. With DeX, users will be able to install an app (Linux On Galaxy—not available yet) on their Samsung devices, which would in turn allow them to run a full-blown Linux distribution. The caveat here is that you’ll be running both Android and Linux at the same time—which is not exactly an efficient use of resources. On top of that, most Linux distributions aren’t designed to run on such small form factors. The good news for DeX is that, when you run Linux on Galaxy and dock your Samsung device to DeX, that Linux OS will be running on your connected monitor—so form factor issues need not apply.
|
||||
|
||||
Outside of those two options, a pure Linux on mobile experience doesn’t exist. Or does it?
|
||||
|
||||
You may have heard of the [Purism Librem 5][3]. It’s a crowdfunded device that promises to finally bring a pure Linux experience to the mobile landscape. This device will be powered by an i.MX8 SoC chip, so it should run almost any Linux operating system.
|
||||
|
||||
Out of the box, the device will run an encrypted version of [PureOS][4]. However, last year Purism and KDE joined together to create a mobile version of the KDE desktop that could run on the Librem 5. Recently [ISOs were made available for a beta version of Plasma Mobile][5] and, judging from first glance, they’re onto something that makes perfect sense for a mobile Linux platform. I’ve booted up a live instance of Plasma Mobile to kick the tires a bit.
|
||||
|
||||
What I saw seriously impressed me. Let’s take a look.
|
||||
|
||||
### Testing platform
|
||||
|
||||
Before you download the ISO and attempt to fire it up as a VirtualBox VM, you should know that it won’t work well. Because Plasma Mobile uses Wayland (and VirtualBox has yet to play well with that particular X replacement), you’ll find a VirtualBox VM a less-than-ideal platform for the beta release. Also know that the Calamares installer doesn’t function well either. In fact, I have yet to get the OS installed on a non-mobile device. And since I don’t own a supported mobile device, I’ve had to run it as a live session on either a laptop or an [Antsle][6] antlet VM every time.
|
||||
|
||||
### What makes Plasma Mobile special?
|
||||
|
||||
This could be easily summed up by saying, Plasma Mobile got it all right. Instead of Canonical re-inventing a perfectly functioning wheel, the developers of KDE simply re-tooled the interface such that a full-functioning Linux distribution (complete with all the apps you’ve grown to love and depend upon) could work on a smaller platform. And they did a spectacular job. Even better, they’ve created an interface that any user of a mobile device could instantly feel familiar with.
|
||||
|
||||
What you have with the Plasma Mobile interface (Figure 1) are the elements common to most Android home screens:
|
||||
|
||||
* Quick Launchers
|
||||
|
||||
* Notification Shade
|
||||
|
||||
* App Drawer
|
||||
|
||||
* Overview button (so you can go back to a previously used app, still running in memory)
|
||||
|
||||
* Home button
|
||||
|
||||
|
||||
|
||||
|
||||
![KDE mobile][8]
|
||||
|
||||
Figure 1: The Plasma Mobile desktop interface.
|
||||
|
||||
[Used with permission][9]
|
||||
|
||||
Because KDE went this route with the UX, it means there’s zero learning curve. And because this is an actual Linux platform, it takes that user-friendly mobile interface and overlays it onto a system that allows for easy installation and usage of apps like:
|
||||
|
||||
* GIMP
|
||||
|
||||
* LibreOffice
|
||||
|
||||
* Audacity
|
||||
|
||||
* Clementine
|
||||
|
||||
* Dropbox
|
||||
|
||||
* And so much more
|
||||
|
||||
|
||||
|
||||
|
||||
Unfortunately, without being able to install Plasma Mobile, you cannot really kick the tires too much, as the live user doesn’t have permission to install applications. However, once Plasma Mobile is fully installed, the Discover software center will allow you to install a host of applications (Figure 2).
|
||||
|
||||
|
||||
![Discover center][11]
|
||||
|
||||
Figure 2: The Discover software center on Plasma Mobile.
|
||||
|
||||
[Used with permission][9]
|
||||
|
||||
Swipe up (or scroll down—depending on what hardware you’re using) to reveal the app drawer, where you can launch all of your installed applications (Figure 3).
|
||||
|
||||
![KDE mobile][13]
|
||||
|
||||
Figure 3: The Plasma Mobile app drawer ready to launch applications.
|
||||
|
||||
[Used with permission][9]
|
||||
|
||||
Open up a terminal window and you can take care of standard Linux admin tasks, such as using SSH to log into a remote server. Using apt, you can install all of the developer tools you need to make Plasma Mobile a powerful development platform.
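For example (the host name and package selection are hypothetical), a quick session in the Plasma Mobile terminal could look like this:

```
ssh alice@server.example.com           # administer a remote server over SSH
sudo apt install build-essential git   # pull in a basic development toolchain
```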
|
||||
|
||||
We’re talking serious mobile power—either from a phone or a tablet.
|
||||
|
||||
### A ways to go
|
||||
|
||||
Clearly Plasma Mobile is still way too early in development for it to be of any use to the average user. And because most virtual machine technology doesn’t play well with Wayland, you’re likely to get too frustrated with the current ISO image to thoroughly try it out. However, even without being able to fully install the platform (or get full usage out of it), it’s obvious KDE and Purism are going to have the ideal platform that will put Linux into the hands of mobile users.
|
||||
|
||||
If you want to test the waters of Plasma Mobile on an actual mobile device, a handy list of supported hardware can be found [here][14] (for PostmarketOS) or [here][15] (for Halium). If you happen to be lucky enough to have a device that also includes Wi-Fi support, you’ll find you get more out of testing the environment.
|
||||
|
||||
If you do have a supported device, you’ll need to use either [PostmarketOS][16] (a touch-optimized, pre-configured Alpine Linux that can be installed on smartphones and other mobile devices) or [Halium][15] (an application that creates a minimal Android layer that allows a new interface to interact with the Android kernel). Using Halium further limits the number of supported devices, as it has only been built for select hardware. However, if you’re willing, you can build your own Halium images (documentation for this process is found [here][17]). If you want to give PostmarketOS a go, [here are the necessary build instructions][18].
|
||||
|
||||
Suffice it to say, Plasma Mobile isn’t nearly ready for mass market. If you’re a Linux enthusiast and want to give it a go, let either PostmarketOS or Halium help you get the operating system up and running on your device. Otherwise, your best bet is to wait it out and hope Purism and KDE succeed in bringing this outstanding mobile take on Linux to the masses.
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux" ][19]course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2018/2/plasma-mobile-could-give-life-mobile-linux-experience
|
||||
|
||||
作者:[JACK WALLEN][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/jlwallen
|
||||
[1]:https://launchpad.net/unity-scopes
|
||||
[2]:http://www.samsung.com/global/galaxy/apps/samsung-dex/
|
||||
[3]:https://puri.sm/shop/librem-5/
|
||||
[4]:https://www.pureos.net/
|
||||
[5]:http://blog.bshah.in/2018/01/26/trying-out-plasma-mobile/
|
||||
[6]:https://antsle.com/
|
||||
[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kdemobile_1.jpg?itok=EK3_vFVP (KDE mobile)
|
||||
[9]:https://www.linux.com/licenses/category/used-permission
|
||||
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kdemobile_2.jpg?itok=CiUQ-MnB (Discover center)
|
||||
[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kdemobile_3.jpg?itok=i6V8fgK8 (KDE mobile)
|
||||
[14]:http://blog.bshah.in/2018/02/02/trying-out-plasma-mobile-part-two/
|
||||
[15]:https://github.com/halium/projectmanagement/issues?q=is%3Aissue+is%3Aopen+label%3APorts
|
||||
[16]:https://postmarketos.org/
|
||||
[17]:http://docs.halium.org/en/latest/
|
||||
[18]:https://wiki.postmarketos.org/wiki/Installation_guide
|
||||
[19]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -0,0 +1,107 @@
|
||||
How To Run A Command For A Specific Time In Linux
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/02/Run-A-Command-For-A-Specific-Time-In-Linux-1-720x340.png)
|
||||
|
||||
The other day I was transferring a large file using rsync to another system on my local area network. Since it is a very big file, it took around 20 minutes to complete. I didn’t want to wait that long, and I didn’t want to terminate the process by pressing CTRL+C either. I was just wondering if there could be an easy way to run a command for a specific time and kill it automatically once the time is up in Unix-like operating systems – hence this post. Read on.
|
||||
|
||||
### Run A Command For A Specific Time In Linux
|
||||
|
||||
We can do this in two methods.
|
||||
|
||||
#### Method 1 – Using “timeout” command
|
||||
|
||||
The most common method is using the **timeout** command. For those who don’t know, the timeout command effectively limits the absolute execution time of a process. The timeout command is part of the GNU coreutils package, so it comes pre-installed on all GNU/Linux systems.
|
||||
|
||||
Let us say you want to run a command for a fixed amount of time and then kill it. To do so, we use:
|
||||
```
|
||||
$ timeout <time-limit-interval> <command>
|
||||
|
||||
```
|
||||
|
||||
For example, the following command will terminate after 10 seconds.
|
||||
```
|
||||
$ timeout 10s tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
You also don’t have to specify the suffix “s” for seconds. The following command is same as above.
|
||||
```
|
||||
$ timeout 10 tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
The other available suffixes are:
|
||||
|
||||
* ‘m’ for minutes,
|
||||
* ‘h’ for hours
|
||||
* ‘d’ for days.
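For instance, the following hypothetical command would stop a long-running backup job after two hours:

```
$ timeout 2h rsync -a /home/ /mnt/backup/
```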
|
||||
|
||||
|
||||
|
||||
If you run this **tail -f /var/log/pacman.log** command on its own, it will keep running until you manually end it by pressing CTRL+C. However, if you run it along with the **timeout** command, it will be killed automatically after the given time interval. If the command is still running after the timeout, you can send a **kill** signal like below.
|
||||
```
|
||||
$ timeout -k 20 10 tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
In this case, if the tail command is still running after 10 seconds, the timeout command will send it a kill signal after 20 seconds and end it.
|
||||
|
||||
For more details, check the man pages.
|
||||
```
|
||||
$ man timeout
|
||||
|
||||
```
|
||||
|
||||
Sometimes, a particular program might take a long time to complete and end up freezing your system. In such cases, you can use this trick to end the process automatically after a particular time.
|
||||
|
||||
Also, consider using **Cpulimit** , a simple application to limit the CPU usage of a process. For more details, check the following link.
|
||||
|
||||
#### Method 2 – Using “Timelimit” program
|
||||
|
||||
The Timelimit utility executes a given command with the supplied arguments and terminates the spawned process after a given time with a given signal. First, it sends the warning signal; then, after the timeout, it sends the **kill** signal.
|
||||
|
||||
Unlike the timeout utility, Timelimit has more options. You can pass a number of arguments, such as killsig, warnsig, killtime, and warntime. It is available in the default repositories of Debian-based systems, so you can install it using the command:
|
||||
```
|
||||
$ sudo apt-get install timelimit
|
||||
|
||||
```
|
||||
|
||||
For Arch-based systems, it is available in the AUR. So, you can install it using any AUR helper programs such as [**Pacaur**][3], [**Packer**][4], [**Yay**][5], [**Yaourt**][6] etc.
|
||||
|
||||
For other distributions, download the source [**from here**][7] and install it manually. After installing the Timelimit program, run the following command with a specific time limit, for example 10 seconds:
|
||||
```
|
||||
$ timelimit -t10 tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
If you run timelimit without any arguments, it will use the default values: warntime=3600 seconds, warnsig=15, killtime=120, killsig=9. For more details, refer to the man pages and the project’s website given at the end of this guide.
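Those defaults can be overridden on the command line. As a sketch (signal numbers 15 and 9 are SIGTERM and SIGKILL), the following warns after 10 seconds and kills 5 seconds later:

```
$ timelimit -t10 -s15 -T5 -S9 tail -f /var/log/pacman.log
```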
|
||||
```
|
||||
$ man timelimit
|
||||
|
||||
```
|
||||
|
||||
And that’s all for today. I hope this was useful. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/run-command-specific-time-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/02/Timeout.gif
|
||||
[3]:https://www.ostechnix.com/install-pacaur-arch-linux/
|
||||
[4]:https://www.ostechnix.com/install-packer-arch-linux-2/
|
||||
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
|
||||
[7]:http://devel.ringlet.net/sysutils/timelimit/#download
|
@ -0,0 +1,126 @@
|
||||
'Getting to Done' on the Linux command line
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah)
|
||||
There is a lot of talk about getting things done at the command line. How many articles are there about using obscure flags with `ls`, nifty regular expressions with Sed and Awk, and how to parse out lots of text with Perl? That isn't what this is about.
|
||||
|
||||
This is about [Getting _to_ Done][1], making sure that the stuff we have to do actually gets tracked and done using tools that don't require a graphical desktop, a web browser, or an internet connection. To do this, we'll look at four ways of tracking your to-do list: plaintext files, Todo.txt, TaskWarrior, and Org-mode.
|
||||
|
||||
### Plain (and simple) text
|
||||
|
||||
|
||||
![plaintext][3]
|
||||
|
||||
I like to use Vim, but you can use Nano too.
|
||||
|
||||
The most straightforward way to manage your to-do list is using a plaintext file in your editor of choice. Just open an empty file and add tasks, one per line. When you are done, delete the line. Simple, effective, and it doesn't matter what you use to do it. There are a couple of drawbacks to this method, though. Once you delete a line and save the file, it is gone forever. That can be a problem if you have to report on what you have done this week or last week. And while using a simple file is flexible, it can also get cluttered really easily.
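A minimal sketch of this workflow from the shell (the file name is arbitrary):

```
echo "Write the report" >> ~/todo.txt   # add a task
cat ~/todo.txt                          # review the list
sed -i '1d' ~/todo.txt                  # remove the first task once it's done
```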
|
||||
|
||||
### Todo.txt: Plaintext leveled up
|
||||
|
||||
|
||||
![todo.txt screen][5]
|
||||
|
||||
Neat, organized, and easy to use
|
||||
|
||||
That leads us to the [Todo.txt][6] file format and application. Installation is simple—[download][7] the latest release from GitHub and run `sudo make install` from the unpacked archive.
|
||||
|
||||
|
||||
![Installing todo.txt][9]
|
||||
|
||||
It works from a Git clone as well.
|
||||
|
||||
Todo.txt makes it very easy to add tasks, list tasks, and mark them as done:
|
||||
|
||||
| Command | Effect |
|---------|--------|
| `todo.sh add "Some Task"` | add "Some Task" to my todo list |
| `todo.sh ls` | list all my tasks |
| `todo.sh ls due:2018-02-15` | list all tasks due on February 15, 2018 |
| `todo.sh do 3` | mark task number 3 as "done" |
|
||||
|
||||
The actual list is still in plaintext, and you can edit it with your favorite text editor as long as you follow the [correct format][10].
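For reference, a small hand-written list in that format might look like the sample below (the tasks themselves are invented; the priority, dates, `+project`, and `@context` markers are what the format specification defines):

```
$ cat ~/todo.txt
(A) 2018-02-10 Renew the domain registration +website @computer due:2018-02-15
(B) Call the vendor about the invoice +work @phone
x 2018-02-11 2018-02-09 Back up the laptop +chores
```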
|
||||
|
||||
There is also very robust help built into the application.
|
||||
|
||||
|
||||
![Syntax highlighting in todo.txt][12]
|
||||
|
||||
You can even get syntax highlighting.
|
||||
|
||||
There is also a large selection of add-ons, as well as specifications for writing your own. There are even browser extensions, mobile apps, and desktop apps that support the Todo.txt format.
|
||||
|
||||
|
||||
![GNOME extensions in todo.txt][14]
|
||||
|
||||
Even GNOME extensions.
|
||||
|
||||
The biggest drawback to Todo.txt is the lack of an automatic or built-in synchronization mechanism. Most (if not all) of the browser extensions and mobile apps require Dropbox to perform synchronization between the app and the copy on your desktop. If you would like something with sync built-in, we have...
|
||||
|
||||
### Taskwarrior: Now we're cooking with Python
|
||||
|
||||
[Taskwarrior][15] is a Python application with many of the same features as Todo.txt. However, it stores the data in a database and has built-in synchronization capabilities. It also keeps track of what is next, notes how old tasks are, and will warn you if you have something more important to do than what you just did.
|
||||
|
||||
[Installation][16] of Taskwarrior can be done either with your distribution's package manager, through Python's `pip` utility, or built from source. Using it is also pretty straightforward, with commands similar to Todo.txt:
|
||||
|
||||
| Command | What it does |
| --- | --- |
| `task add "Some Task"` | Add "Some Task" to the list |
| `task list` | List all tasks |
| `task list due:today` | List all tasks due on today's date |
| `task do 3` | Complete task number 3 |
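Tasks can also carry attributes such as a project, a priority, and tags, and the filters shown above can match on them. A quick sketch (the task text is invented; the attribute syntax is Taskwarrior's):

```
$ task add "Draft the quarterly report" project:work priority:H +writing
$ task project:work list
$ task +writing list
```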
|
||||
|
||||
Taskwarrior also has some pretty nice text user interfaces.
|
||||
|
||||
![Taskwarrior in Vit][18]
|
||||
|
||||
I like Vit, which was inspired by Vim.
|
||||
|
||||
Unlike Todo.txt, Taskwarrior can synchronize with a local or remote server. A very basic synchronization server called `taskd` is available if you wish to run your own, and there are several services available if you do not.
|
||||
|
||||
Taskwarrior also has a thriving and extensive ecosystem of add-ons and extensions, as well as mobile and desktop apps.
|
||||
|
||||
![Taskwarrior on GNOME][20]
|
||||
|
||||
Taskwarrior looks really nice on GNOME.
|
||||
|
||||
The only disadvantage to Taskwarrior is that, unlike the other programs listed here, you cannot directly modify the to-do list itself. You can export the task list to various formats, modify the export, and then re-import the files, but it is a lot clunkier than just opening the file directly in a text editor.
|
||||
|
||||
Which brings us to the most powerful of them all...
|
||||
|
||||
### Emacs Org-mode: Hulk smash tasks
|
||||
|
||||
![Org-mode][22]
|
||||
|
||||
Emacs has everything.
|
||||
|
||||
Emacs [Org-mode][23] is by far the most powerful, most flexible open source to-do list manager out there. It supports multiple files, uses plaintext, is almost infinitely customizable, and understands calendars, due dates, and schedules. It is also significantly more complicated to set up than the other applications listed here. But once it is set up, it does everything the other applications do and more. If you are familiar with or a fan of [Bullet Journals][24], Org-mode is possibly the closest you can get on a computer.
|
||||
|
||||
Org-mode will run anywhere Emacs runs, and there are a few mobile applications that can interact with it as well. Unfortunately, there are no desktop apps or browser extensions that support Org. Despite all that, Org-mode is still one of the best applications for tracking your to-do list, since it is so very powerful.
|
||||
|
||||
### Choose your tool
|
||||
|
||||
In the end, the goal of all these programs is to help you track what you need to do and make sure you don't forget to do something. While they all have the same basic functions, choosing which one is right for you depends on a lot of factors. Do you want synchronization built-in or not? Do you need a mobile app? Do any of the add-ons include a "must have" feature? Whatever your choice, remember that the program alone cannot make you more organized, but it can help.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/getting-to-done-agile-linux-command-line
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[1]:https://www.scruminc.com/getting-done/
|
||||
[3]:https://opensource.com/sites/default/files/u128651/plain-text.png (plaintext)
|
||||
[5]:https://opensource.com/sites/default/files/u128651/todo-txt.png (todo.txt screen)
|
||||
[6]:http://todotxt.org/
|
||||
[7]:https://github.com/todotxt/todo.txt-cli/releases
|
||||
[9]:https://opensource.com/sites/default/files/u128651/todo-txt-install.png (Installing todo.txt)
|
||||
[10]:https://github.com/todotxt/todo.txt
|
||||
[12]:https://opensource.com/sites/default/files/u128651/todo-txt-vim.png (Syntax highlighting in todo.txt)
|
||||
[14]:https://opensource.com/sites/default/files/u128651/tod-txt-gnome.png (GNOME extensions in todo.txt)
|
||||
[15]:https://taskwarrior.org/
|
||||
[16]:https://taskwarrior.org/download/
|
||||
[18]:https://opensource.com/sites/default/files/u128651/taskwarrior-vit.png (Taskwarrior in Vit)
|
||||
[20]:https://opensource.com/sites/default/files/u128651/taskwarrior-gnome.png (Taskwarrior on GNOME)
|
||||
[22]:https://opensource.com/sites/default/files/u128651/emacs-org-mode.png (Org-mode)
|
||||
[23]:https://orgmode.org/
|
||||
[24]:http://bulletjournal.com/
|
@ -0,0 +1,58 @@
|
||||
Linux Virtual Machines vs Linux Live Images
|
||||
======
|
||||
I'll be the first to admit that I tend to try out new [Linux distros][1] on a far too frequent basis. Yet the method I use to test them does vary depending on my goals for each instance. In this article, we're going to look at both running Linux virtual machines and running Linux live images. There are advantages to each method, but there are some hurdles as well.
|
||||
|
||||
### Testing out a new Linux distro for the first time
|
||||
|
||||
When I test out a brand new Linux distro for the first time, the method I use depends heavily on the resources of the PC I'm currently on. If I have access to my desktop PC, I'm going to run the distro to be tested in a virtual machine. The reason for this approach is that I can download and test the distro not only in a live environment, but also as an installed product with persistent storage.
|
||||
|
||||
On the other hand, if I am working with much less robust hardware on a PC, then testing out a distro with a virtual machine installation of Linux is counter-productive. I'd be pushing that PC to its limits and honestly would be better off using a live Linux image running from a flash drive instead.
|
||||
|
||||
### Touring software on a new Linux distro
|
||||
|
||||
If you're interested in checking out a distro's desktop environment or the available software, you can't go wrong with a live image of the distro. A live environment provides you with a bird's-eye view of what to expect in terms of overall layout, the applications provided, and how the user experience flows overall.
|
||||
|
||||
To be fair, you could do the same thing with a virtual machine installation, but it may be a bit overkill if you would rather avoid filling up hard drive space with yet more data. After all, this is a simple tour of the distro. Remember what I said in the first section – I like to run Linux in a virtual machine to test it. This means I'm going to see how it installs, what the partition options look like and other elements you wouldn't see from using a live image of any given distro.
|
||||
|
||||
Touring usually indicates that you're only looking to take a quick look at a distro, so in this case the method that can be done with the least amount of resistance and time investment is a good course of action.
|
||||
|
||||
### Taking a Linux distro with you
|
||||
|
||||
While it's not as common as it was a few years ago, the ability to take a Linux distro with you may be a consideration for some users. Obviously, virtual machine installations don't necessarily lend themselves favorably to portability. However, a live image of a Linux distro is actually quite portable. A live image can be written to a DVD or copied onto a flash drive for easy traveling.
|
||||
|
||||
Expanding on this concept of Linux portability, it's also beneficial to have a live image on a flash drive when showing off how Linux works on a friend's computer. This empowers you to demonstrate how Linux can enrich their life while not relying on running a virtual machine on their PC. It's a bit of a win-win in favor of using a live image.
|
||||
|
||||
### Alternative to dual-booting Linux
|
||||
|
||||
This next item is a huge one. Consider this – perhaps you're a Windows user. You like playing with Linux, but would rather not take the plunge. Dual-booting is out of the question in case something goes wrong or perhaps you're not comfortable identifying individual partitions. Whatever the case may be, both using Linux in a virtual machine or from a live image might be a great option for you.
|
||||
|
||||
Now I'm going to take a rather odd stance on something. I think you'll get far more value in the long term running Linux on a flash drive using a live image than with a virtual machine. There are two reasons for this. First of all, you'll get used to truly running Linux vs running it inside of a virtual machine on top of Windows. Second, you can set up your flash drive to contain user data with persistent storage.
|
||||
|
||||
I'll grant you the same could be said with a virtual machine running Linux, however you will never have an update break anything using the live image approach. Why? Because you're not updating a host OS or the guest OS. Remember there are entire distros that are designed to be nothing more than persistent storage Linux distros. Puppy Linux is one great example. Not only can it run on PCs that would otherwise be recycled or thrown away, it allows you to never be bothered again with tedious system updates thanks to the way the distro handles security. It's not a normal Linux distro and it's walled off in such a way that the persistent live image is free from anything scary.
|
||||
|
||||
### When a Linux virtual machine is absolutely the best option
|
||||
|
||||
As I bring this article to a close, let me leave you with this. There is one instance where using a virtual machine such as Virtual Box is absolutely better than using a live image – recording the desktop environment of any Linux distro.
|
||||
|
||||
For example, I make videos that provide a tour and review of a variety of Linux distros. Doing this with live images would require me to capture the screen with a hardware device or install a software capture device from the live image's repositories. Clearly, a virtual machine is better suited for this job than a live image of a Linux distro.
|
||||
|
||||
Once you toss audio capture into the mix, there is no question that if you're going to use software to capture your review, you really want to have a host OS that has all the basic needs covered for a reasonably decent capture environment. Again, you could do all of this with a hardware device...but that might be cost-prohibitive if you only do video/audio capturing as a part-time endeavor.
|
||||
|
||||
### A Linux virtual machine vs a Linux live image
|
||||
|
||||
What is your preferred method of trying out new distros? Perhaps you're someone who is fine with formatting their hard drive and throwing caution to the wind, thus making any of this unnecessary?
|
||||
|
||||
Most people I've interacted with online tend to follow much of the methodology I've touched on above, but I'd love to hear what approach works best for you. Hit the comments, let me know which method you prefer when checking out the greatest and latest from the Linux distro world.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.datamation.com/open-source/linux-virtual-machines-vs-linux-live-images.html
|
||||
|
||||
作者:[Matt Hartley][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.datamation.com/author/Matt-Hartley-3080.html
|
||||
[1]:https://www.datamation.com/open-source/best-linux-distro.html
|
@ -0,0 +1,79 @@
|
||||
How to block local spoofed addresses using the Linux firewall
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA)
|
||||
|
||||
Attackers are finding sophisticated ways to penetrate even remote networks that are protected by intrusion detection and prevention systems. No IDS/IPS can halt or minimize attacks by hackers who are determined to take over your network. Improper configuration allows attackers to bypass all implemented network security measures.
|
||||
|
||||
In this article, I will explain how security engineers or system administrators can prevent these attacks.
|
||||
|
||||
Almost all Linux distributions come with a built-in firewall to secure processes and applications running on the Linux host. Most firewalls are designed as IDS/IPS solutions, whose primary purpose is to detect and prevent malicious packets from gaining access to a network.
|
||||
|
||||
A Linux firewall usually comes with two interfaces: iptables and ipchains. Most people refer to these interfaces as the "iptables firewall" or the "ipchains firewall." Both interfaces are designed as packet filters. Iptables acts as a stateful firewall, making decisions based on previous packets. Ipchains does not make decisions based on previous packets; hence, it is designed as a stateless firewall.
|
||||
|
||||
In this article, we will focus on the iptables firewall, which comes with kernel version 2.4 and beyond.
|
||||
|
||||
With the iptables firewall, you can create policies, or ordered sets of rules, which communicate to the kernel how it should treat specific classes of packets. Inside the kernel is the Netfilter framework. Netfilter is both a framework and the project name for the iptables firewall. As a framework, Netfilter allows iptables to hook functions designed to perform operations on packets. In a nutshell, iptables relies on the Netfilter framework to build firewall functionality such as filtering packet data.
|
||||
|
||||
Each iptables rule is applied to a chain within a table. An iptables chain is a collection of rules that are compared against packets with similar characteristics, while a table (such as nat or mangle) describes diverse categories of functionality. For instance, a mangle table alters packet data. Thus, specialized rules that alter packet data are applied to it, and filtering rules are applied to the filter table because the filter table filters packet data.
|
||||
|
||||
Iptables rules have a set of matches, along with a target, such as `DROP` or `REJECT`, that instructs iptables what to do with a packet that conforms to the rule. Thus, without a target and a set of matches, iptables can't effectively process packets. A target simply refers to a specific action to be taken if a packet matches a rule. Matches, on the other hand, must be met by every packet in order for iptables to process them.
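To make that concrete, here is a sketch of a typical rule (the interface and port are illustrative): everything between `-A INPUT` and `-j` is the match set, and `ACCEPT` is the target:

```
# accept TCP packets arriving on eth0 that are destined for port 22 (SSH)
iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
```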
|
||||
|
||||
Now that we understand how the iptables firewall operates, let's look at how to use iptables firewall to detect and reject or drop spoofed addresses.
|
||||
|
||||
### Turning on source address verification
|
||||
|
||||
The first step I, as a security engineer, take when I deal with spoofed addresses from remote hosts is to turn on source address verification in the kernel.
|
||||
|
||||
Source address verification is a kernel-level feature that drops packets pretending to come from your network. It uses the reverse path filter method to check whether the source of the received packet is reachable through the interface it came in.
|
||||
|
||||
To turn on source address verification, use the simple shell script below instead of doing it manually:
|
||||
```
|
||||
#!/bin/sh

# author's name: Michael K Aboagye
# purpose of program: to enable reverse path filtering
# date: 7/02/18

# display "Enabling source address verification" on the screen
echo -n "Enabling source address verification... "

# overwrite the default value 0 with 1 to enable source address verification
echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter

echo "completed"
|
||||
|
||||
```
|
||||
|
||||
The preceding script, when executed, displays the message `Enabling source address verification` without appending a newline. The default value of the reverse path filter is 0, which means no source validation. The `echo` line simply overwrites that default value of 0 with 1. A value of 1 means that the kernel will validate the source by confirming the reverse path.
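Writing to `/proc` takes effect immediately but is lost on reboot. If you want the setting to persist, the equivalent `sysctl` keys can be used instead; as a sketch, covering both the default template for new interfaces and all current interfaces:

```
# apply immediately
sysctl -w net.ipv4.conf.default.rp_filter=1
sysctl -w net.ipv4.conf.all.rp_filter=1

# persist across reboots
echo "net.ipv4.conf.all.rp_filter = 1" >> /etc/sysctl.conf
```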
|
||||
|
||||
Finally, you can use the following command to drop or reject spoofed addresses from remote hosts by choosing either one of these targets: `DROP` or `REJECT`. However, I recommend using `DROP` for security reasons.
|
||||
|
||||
Replace the `IP_address` placeholder with the IP address or network range you want to block, as shown below. Also, you must choose either `REJECT` or `DROP` as the target; the two don't work together.
|
||||
```
|
||||
# use either REJECT or DROP as the target (not both)
iptables -A INPUT -i internal_interface -s IP_address -j REJECT

iptables -A INPUT -i internal_interface -s 192.168.0.0/16 -j DROP
|
||||
|
||||
```
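As a fuller sketch, the rules below drop packets that arrive on a hypothetical external interface `eth0` while claiming a private (RFC 1918) or loopback source address, which legitimate Internet traffic should never carry:

```
# drop spoofed private and loopback source addresses on the external interface
iptables -A INPUT -i eth0 -s 10.0.0.0/8     -j DROP
iptables -A INPUT -i eth0 -s 172.16.0.0/12  -j DROP
iptables -A INPUT -i eth0 -s 192.168.0.0/16 -j DROP
iptables -A INPUT -i eth0 -s 127.0.0.0/8    -j DROP
```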
|
||||
|
||||
This article provides only the basics of how to prevent spoofing attacks from remote hosts using the iptables firewall.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/block-local-spoofed-addresses-using-linux-firewall
|
||||
|
||||
作者:[Michael Kwaku Aboagye][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/revoks
|
133
sources/tech/20180228 Why Python devs should use Pipenv.md
Normal file
@ -0,0 +1,133 @@
|
||||
Why Python devs should use Pipenv
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd)
|
||||
|
||||
This article was co-written with [Jeff Triplett][1].
|
||||
|
||||
Pipenv, the "Python Development Workflow for Humans" created by Kenneth Reitz a little more than a year ago, has become the [official Python-recommended resource][2] for managing package dependencies. But there is still confusion about what problems it solves and how it's more useful than the standard workflow using `pip` and a `requirements.txt` file. In this month's Python column, we'll fill in the gaps.
|
||||
|
||||
### A brief history of Python package installation
|
||||
|
||||
To understand the problems that Pipenv solves, it's useful to show how Python package management has evolved.
|
||||
|
||||
Take yourself back to the first Python iteration. We had Python, but there was no clean way to install packages.
|
||||
|
||||
Then came [Easy Install][3], a package that installs other Python packages with relative ease. But it came with a catch: it wasn't easy to uninstall packages that were no longer needed.
|
||||
|
||||
Enter [pip][4], which most Python users are familiar with. `pip` lets us install and uninstall packages. We could specify versions, run `pip freeze > requirements.txt` to output a list of installed packages to a text file, and use that same text file to install everything an app needed with `pip install -r requirements.txt`.
|
||||
|
||||
But `pip` didn't include a way to isolate packages from each other. We might work on apps that use different versions of the same libraries, so we needed a way to enable that. Along came [virtual environments][5], which enabled us to create small, isolated environments for each app we worked on. We've seen many tools for managing virtual environments: [virtualenv][6], [venv][7], [virtualenvwrapper][8], [pyenv][9], [pyenv-virtualenv][10], [pyenv-virtualenvwrapper][11], and even more. They all play well with `pip` and `requirements.txt` files.
|
||||
|
||||
### The new kid: Pipenv
|
||||
|
||||
Pipenv aims to solve several problems.
|
||||
|
||||
First, the problem of needing the `pip` library for package installation, plus a library for creating a virtual environment, plus a library for managing virtual environments, plus all the commands associated with those libraries. That's a lot to manage. Pipenv ships with package management and virtual environment support, so you can use one tool to install, uninstall, track, and document your dependencies and to create, use, and organize your virtual environments. When you start a project with it, Pipenv will automatically create a virtual environment for that project if you aren't already using one.
|
||||
|
||||
Pipenv accomplishes this dependency management by abandoning the `requirements.txt` norm and trading it for a new document called a [Pipfile][12]. When you install a library with Pipenv, a `Pipfile` for your project is automatically updated with the details of that installation, including version information and possibly the Git repository location, file path, and other information.
|
||||
|
||||
Second, Pipenv wants to make it easier to manage complex interdependencies. Your app might depend on a specific version of a library, and that library might depend on a specific version of another library, and it's just dependencies and turtles all the way down. When two libraries your app uses have conflicting dependencies, your life can become hard. Pipenv wants to ease that pain by keeping track of a tree of your app's interdependencies in a file called `Pipfile.lock`. `Pipfile.lock` also verifies that the right versions of dependencies are used in production.
|
||||
|
||||
Also, Pipenv is handy when multiple developers are working on a project. With a `pip` workflow, Casey might install a library and spend two days implementing a new feature using that library. When Casey commits the changes, they might forget to run `pip freeze` to update the requirements file. The next day, Jamie pulls down Casey's changes, and suddenly tests are failing. It takes time to realize that the problem is libraries missing from the requirements file that Jamie doesn't have installed in the virtual environment.
|
||||
|
||||
Because Pipenv auto-documents dependencies as you install them, if Jamie and Casey had been using Pipenv, the `Pipfile` would have been automatically updated and included in Casey's commit. Jamie and Casey would have saved time and shipped their product faster.
|
||||
|
||||
Finally, using Pipenv signals to other people who work on your project that it ships with a standardized way to install project dependencies and development and testing requirements. Using a workflow with `pip` and requirements files means that you may have one single `requirements.txt` file, or several requirements files for different environments. It might not be clear to your colleagues whether they should run `dev.txt` or `local.txt` when they're running the project on their laptops, for example. It can also create confusion when two similar requirements files get wildly out of sync with each other: Is `local.txt` out of date, or is it really supposed to be that different from `dev.txt`? Multiple requirements files require more context and documentation to enable others to install the dependencies properly and as expected. This workflow has the potential to confuse colleagues and increase your maintenance burden.
|
||||
|
||||
Using Pipenv, which gives you `Pipfile`, lets you avoid these problems by managing dependencies for different environments for you. This command will install the main project dependencies:
|
||||
```
|
||||
pipenv install
|
||||
|
||||
```
|
||||
|
||||
Adding the `--dev` tag will install the dev/testing requirements:
|
||||
```
|
||||
pipenv install --dev
|
||||
|
||||
```
|
||||
|
||||
There are other benefits to using Pipenv: It has better security features, graphs your dependencies in an easier-to-understand format, seamlessly handles `.env` files, and can automatically handle differing dependencies for development versus production environments in one file. You can read more in the [documentation][13].
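For instance, `pipenv graph` prints the full dependency tree, and `pipenv check` scans your installed dependencies for known security vulnerabilities; both are standard Pipenv subcommands:

```
# show the dependency tree for the current project
pipenv graph

# scan installed dependencies for known security vulnerabilities
pipenv check
```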
|
||||
|
||||
### Pipenv in action
|
||||
|
||||
The basics of using Pipenv are detailed in the [Managing Application Dependencies][14] section of the official Python packaging tutorial. To install Pipenv, use `pip`:
|
||||
```
|
||||
pip install pipenv
|
||||
|
||||
```
|
||||
|
||||
To install packages to use in your project, change into the directory for your project. Then to install a package (we'll use Django as an example), run:
|
||||
```
|
||||
pipenv install django
|
||||
|
||||
```
|
||||
|
||||
You will see some output that indicates that Pipenv is creating a `Pipfile` for your project.
|
||||
|
||||
If you aren't already using a virtual environment, you will also see some output from Pipenv saying it is creating a virtual environment for you.
|
||||
|
||||
Then, you will see the output you are used to seeing when you install packages.
|
||||
|
||||
To generate a `Pipfile.lock` file, run:
|
||||
```
|
||||
pipenv lock
|
||||
|
||||
```
|
||||
|
||||
You can also run Python scripts with Pipenv. To run a top-level Python script called `hello.py`, run:
|
||||
```
|
||||
pipenv run python hello.py
|
||||
|
||||
```
|
||||
|
||||
And you will see your expected result in the console.
|
||||
|
||||
To start a shell, run:
|
||||
```
|
||||
pipenv shell
|
||||
|
||||
```
|
||||
|
||||
If you would like to convert a project that currently uses a `requirements.txt` file to use Pipenv, install Pipenv and run:
|
||||
```
|
||||
pipenv install -r requirements.txt
|
||||
|
||||
```
|
||||
|
||||
This will create a Pipfile and install the specified requirements. Consider your project upgraded!
|
||||
|
||||
### Learn more
|
||||
|
||||
Check out the Pipenv documentation, particularly [Basic Usage of Pipenv][15], to take you further. Pipenv creator Kenneth Reitz gave a talk on Pipenv, "[The Future of Python Dependency Management][16]," at a recent PyTennessee event. The talk wasn't recorded, but his [slides][17] are helpful in understanding what Pipenv does and the problems it solves.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/why-python-devs-should-use-pipenv
|
||||
|
||||
作者:[Lacey Williams Henschel][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/laceynwilliams
|
||||
[1]:https://opensource.com/users/jefftriplett
|
||||
[2]:https://packaging.python.org/tutorials/managing-dependencies/#managing-dependencies
|
||||
[3]:http://peak.telecommunity.com/DevCenter/EasyInstall
|
||||
[4]:https://packaging.python.org/tutorials/installing-packages/#use-pip-for-installing
|
||||
[5]:https://packaging.python.org/tutorials/installing-packages/#creating-virtual-environments
|
||||
[6]:https://virtualenv.pypa.io/en/stable/
|
||||
[7]:https://docs.python.org/3/library/venv.html
|
||||
[8]:https://virtualenvwrapper.readthedocs.io/en/latest/
|
||||
[9]:https://github.com/pyenv/pyenv
|
||||
[10]:https://github.com/pyenv/pyenv-virtualenv
|
||||
[11]:https://github.com/pyenv/pyenv-virtualenvwrapper
|
||||
[12]:https://github.com/pypa/pipfile
|
||||
[13]:https://docs.pipenv.org/
|
||||
[14]:https://packaging.python.org/tutorials/managing-dependencies/
|
||||
[15]:https://docs.pipenv.org/basics/
|
||||
[16]:https://www.pytennessee.org/schedule/presentation/158/
|
||||
[17]:https://speakerdeck.com/kennethreitz/the-future-of-python-dependency-management
|
@ -1,324 +0,0 @@
|
||||
通过ncurses在终端创建一个冒险游戏
|
||||
======
|
||||
怎样使用curses函数读取键盘并操作屏幕。
|
||||
|
||||
我在[之前的文章][1]中介绍了 ncurses 库,并提供了一个简单的程序,展示了一些将文本放到屏幕上的 curses 函数。
|
||||
|
||||
### 探险
|
||||
|
||||
当我逐渐长大,家里有了一台苹果2电脑。我和我兄弟正是在这台电脑上自学了如何用AppleSoft BASIC写程序。我在写了一些数学智力游戏之后,继续创造游戏。作为80年代的人,我已经是龙与地下城桌游的粉丝,在游戏中角色扮演一个追求打败怪物并在陌生土地上抢掠的战士或者男巫。所以我创建一个基本的冒险游戏也在情理之中。
|
||||
|
||||
AppleSoft BASIC支持一种简洁的特性:在标准分辨率图形模式(GR模式)下,你可以检测屏幕上特定点的颜色。这为创建一个冒险游戏提供了捷径。比起创建并更新周期性传送到屏幕的内存地图,我现在可以依赖GR模式为我维护地图,我的程序还可以当玩家字符在屏幕四处移动的时候查询屏幕。通过这种方式,我让电脑完成了大部分艰难的工作。因此,我的自顶向下的冒险游戏使用了块状的GR模式图形来展示我的游戏地图。
|
||||
|
||||
我的冒险游戏使用了一张简单的地图,上面有一大片绿地伴着山脉从中间蔓延向下和一个在左上方的大湖。我要粗略地为桌游战役绘制这个地图,其中包含一个允许玩家穿过到远处的狭窄通道。
|
||||
|
||||
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-map.jpg)
|
||||
|
||||
图1.一个有湖和山的简单桌游地图
|
||||
|
||||
你可以用 curses 绘制这个地图,并用字符代表草地、山脉和水。接下来,我将介绍如何用 curses 做到这一点,以及如何在 Linux 终端里创建并游玩一个类似的冒险游戏。
|
||||
|
||||
### 构建程序
|
||||
|
||||
在我的上一篇文章,我提到了大多数curses程序以相同的一组指令获取终端类型和设置curses环境:
|
||||
|
||||
```
|
||||
initscr();
|
||||
cbreak();
|
||||
noecho();
|
||||
|
||||
```
|
||||
|
||||
在这个程序,我添加了另外的语句:
|
||||
|
||||
```
|
||||
keypad(stdscr, TRUE);
|
||||
|
||||
```
|
||||
|
||||
这里的TRUE标志允许curses从用户终端读取小键盘和功能键。如果你想要在你的程序中使用上下左右方向键,你需要使用这里的keypad(stdscr, TRUE)。
|
||||
|
||||
这样做了之后,你可以你可以开始在终端屏幕上绘图了。curses函数包括了一系列方法在屏幕上绘制文本。在我之前的文章中,我展示了addch()和addstr()函数以及他们对应的在添加文本之前先移动到指定屏幕位置的副本mvaddch()和mvaddstr()函数。为了创建这个冒险游戏,你可以使用另外一组函数:vline()和hline(),以及它们对应的函数mvvline()和mvhline()。这些mv函数接收屏幕坐标,一个要绘制的字符和要重复此字符的次数。例如,mvhline(1, 2, '-', 20)将会绘制一条开始于第一行第二列并由20个横线组成的线段。
|
||||
|
||||
为了以编程方式绘制地图到终端,让我们先定义这个draw_map()函数:
|
||||
|
||||
```
|
||||
#define GRASS ' '
|
||||
#define EMPTY '.'
|
||||
#define WATER '~'
|
||||
#define MOUNTAIN '^'
|
||||
#define PLAYER '*'
|
||||
|
||||
void draw_map(void)
|
||||
{
|
||||
int y, x;
|
||||
|
||||
/* 绘制探索地图 */
|
||||
|
||||
/* 背景 */
|
||||
|
||||
for (y = 0; y < LINES; y++) {
|
||||
mvhline(y, 0, GRASS, COLS);
|
||||
}
|
||||
|
||||
/* 山和山道 */
|
||||
|
||||
for (x = COLS / 2; x < COLS * 3 / 4; x++) {
|
||||
mvvline(0, x, MOUNTAIN, LINES);
|
||||
}
|
||||
|
||||
mvhline(LINES / 4, 0, GRASS, COLS);
|
||||
|
||||
/* 湖 */
|
||||
|
||||
for (y = 1; y < LINES / 2; y++) {
|
||||
mvhline(y, 1, WATER, COLS / 3);
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
在绘制这副地图时,记住填充大块字符到屏幕使用的mvvline()和mvhline()函数。我绘制从0列开始的字符水平线(mvhline)以创建草地区域,直到整个屏幕的高度和宽度。我绘制从0行开始的多条垂直线(mvvline)在此上添加了山脉,绘制单行水平线添加了一条山道(mvhline)。并且,我通过绘制一系列短水平线(mvhline)创建了湖。这种绘制重叠方块的方式看起来似乎并没有效率,但是记住在我们调用refresh()函数之前curses并不会真正更新屏幕。
|
||||
|
||||
绘制完地图,创建游戏就还剩下进入循环让程序等待用户按下上下左右方向键中的一个然后让玩家图标正确移动了。如果玩家想要移动的地方是空的,就应该允许玩家到那里。
|
||||
|
||||
你可以把 curses 当做捷径使用。比起在程序中维护一份地图并不断复制到屏幕(那样太复杂),你可以让屏幕为你跟踪所有东西。inch() 函数和相关联的 mvinch() 函数允许你探测屏幕的内容。这让你可以查询 curses,以了解玩家想要移动到的位置是被水填满还是被山阻挡。为此,你需要一个之后会用到的辅助函数:
|
||||
|
||||
```
|
||||
int is_move_okay(int y, int x)
|
||||
{
|
||||
int testch;
|
||||
|
||||
/* 如果要进入的位置可以进入,返回true */
|
||||
|
||||
testch = mvinch(y, x);
|
||||
return ((testch == GRASS) || (testch == EMPTY));
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
如你所见,这个函数探测第 y 行、第 x 列,在该位置未被占据时返回 true,否则返回 false。
|
||||
|
||||
这样我们写移动循环就很容易了:从键盘获取一个键值然后根据是上下左右键移动用户字符。这里是一个简单版本的这种循环:
|
||||
|
||||
```
|
||||
|
||||
do {
|
||||
ch = getch();
|
||||
|
||||
/* 测试输入的值并获取方向 */
|
||||
|
||||
switch (ch) {
|
||||
case KEY_UP:
|
||||
if ((y > 0) && is_move_okay(y - 1, x)) {
|
||||
y = y - 1;
|
||||
}
|
||||
break;
|
||||
case KEY_DOWN:
|
||||
if ((y < LINES - 1) && is_move_okay(y + 1, x)) {
|
||||
y = y + 1;
|
||||
}
|
||||
break;
|
||||
case KEY_LEFT:
|
||||
if ((x > 0) && is_move_okay(y, x - 1)) {
|
||||
x = x - 1;
|
||||
}
|
||||
break;
|
||||
case KEY_RIGHT:
|
||||
if ((x < COLS - 1) && is_move_okay(y, x + 1)) {
|
||||
x = x + 1;
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
while (1);
|
||||
|
||||
```
|
||||
|
||||
为了在游戏中使用(这个循环),你需要在循环里添加一些代码来启用其它的键(例如传统的移动键WASD)以提供方法供用户退出游戏和在屏幕上四处移动。这里是完整的程序:
|
||||
|
||||
```
|
||||
|
||||
/* quest.c */
|
||||
|
||||
#include <curses.h>
#include <stdlib.h>
|
||||
|
||||
#define GRASS ' '
|
||||
#define EMPTY '.'
|
||||
#define WATER '~'
|
||||
#define MOUNTAIN '^'
|
||||
#define PLAYER '*'
|
||||
|
||||
int is_move_okay(int y, int x);
|
||||
void draw_map(void);
|
||||
|
||||
int main(void)
|
||||
{
|
||||
int y, x;
|
||||
int ch;
|
||||
|
||||
/* 初始化curses */
|
||||
|
||||
initscr();
|
||||
keypad(stdscr, TRUE);
|
||||
cbreak();
|
||||
noecho();
|
||||
|
||||
clear();
|
||||
|
||||
/* 初始化探索地图 */
|
||||
|
||||
draw_map();
|
||||
|
||||
/* 在左下角初始化玩家 */
|
||||
|
||||
y = LINES - 1;
|
||||
x = 0;
|
||||
|
||||
do {
|
||||
/* 默认获得一个闪烁的光标--表示玩家字符 */
|
||||
|
||||
mvaddch(y, x, PLAYER);
|
||||
move(y, x);
|
||||
refresh();
|
||||
|
||||
ch = getch();
|
||||
|
||||
/* 测试输入的键并获取方向 */
|
||||
|
||||
switch (ch) {
|
||||
case KEY_UP:
|
||||
case 'w':
|
||||
case 'W':
|
||||
if ((y > 0) && is_move_okay(y - 1, x)) {
|
||||
mvaddch(y, x, EMPTY);
|
||||
y = y - 1;
|
||||
}
|
||||
break;
|
||||
case KEY_DOWN:
|
||||
case 's':
|
||||
case 'S':
|
||||
if ((y < LINES - 1) && is_move_okay(y + 1, x)) {
|
||||
mvaddch(y, x, EMPTY);
|
||||
y = y + 1;
|
||||
}
|
||||
break;
|
||||
case KEY_LEFT:
|
||||
case 'a':
|
||||
case 'A':
|
||||
if ((x > 0) && is_move_okay(y, x - 1)) {
|
||||
mvaddch(y, x, EMPTY);
|
||||
x = x - 1;
|
||||
}
|
||||
break;
|
||||
case KEY_RIGHT:
|
||||
case 'd':
|
||||
case 'D':
|
||||
if ((x < COLS - 1) && is_move_okay(y, x + 1)) {
|
||||
mvaddch(y, x, EMPTY);
|
||||
x = x + 1;
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
while ((ch != 'q') && (ch != 'Q'));
|
||||
|
||||
endwin();
|
||||
|
||||
exit(0);
|
||||
}
|
||||
|
||||
int is_move_okay(int y, int x)
|
||||
{
|
||||
int testch;
|
||||
|
||||
/* 当空间可以进入时返回true */
|
||||
|
||||
testch = mvinch(y, x);
|
||||
return ((testch == GRASS) || (testch == EMPTY));
|
||||
}
|
||||
|
||||
void draw_map(void)
|
||||
{
|
||||
int y, x;
|
||||
|
||||
/* 绘制探索地图 */
|
||||
|
||||
/* 背景 */
|
||||
|
||||
for (y = 0; y < LINES; y++) {
|
||||
mvhline(y, 0, GRASS, COLS);
|
||||
}
|
||||
|
||||
/* 山脉和山道 */
|
||||
|
||||
for (x = COLS / 2; x < COLS * 3 / 4; x++) {
|
||||
mvvline(0, x, MOUNTAIN, LINES);
|
||||
}
|
||||
|
||||
mvhline(LINES / 4, 0, GRASS, COLS);
|
||||
|
||||
/* 湖 */
|
||||
|
||||
for (y = 1; y < LINES / 2; y++) {
|
||||
mvhline(y, 1, WATER, COLS / 3);
|
||||
}
|
||||
}
|
||||
|
||||
```
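补充一个编译和运行的示例命令(假设你的系统已安装 ncurses 开发包;在某些发行版上链接参数可能是 `-lcurses` 而不是 `-lncurses`):

```
$ gcc -o quest quest.c -lncurses
$ ./quest
```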
|
||||
|
||||
在完整的程序清单中,你可以看见使用curses函数创建游戏的完整布置:
|
||||
|
||||
1) 初始化curses环境。
|
||||
|
||||
2) 绘制地图。
|
||||
|
||||
3) 初始化玩家坐标(左下角)
|
||||
|
||||
4) 循环:
|
||||
|
||||
* 绘制玩家字符。
|
||||
|
||||
* 从键盘获取键值。
|
||||
|
||||
* 对应地上下左右调整玩家坐标。
|
||||
|
||||
* 重复。
|
||||
|
||||
5) 完成时关闭curses环境并退出。
|
||||
|
||||
### 开始玩
|
||||
|
||||
当你运行游戏时,玩家的字符在左下角初始化。当玩家在游戏区域四处移动的时候,程序创建了“一串”点。这样可以展示玩家经过了的点,让玩家避免经过不必要的路径。
|
||||
|
||||
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-start.png)
|
||||
|
||||
图2\. 初始化在左下角的玩家
|
||||
|
||||
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-1.png)
|
||||
|
||||
图3\. 玩家可以在游戏区域四处移动,例如湖周围和山的通道
|
||||
|
||||
为了在此基础上创建一个完整的冒险游戏,你可能需要在玩家角色于游戏区域四处移动时随机生成不同的怪物。你也可以创建玩家在打败敌人后可以拾取的特殊道具,用来提高玩家的能力。
|
||||
|
||||
但是作为起点,这是一个展示如何使用curses函数读取键盘和操纵屏幕的好程序。
|
||||
|
||||
### 下一步
|
||||
|
||||
这是一个如何使用 curses 函数更新和读取屏幕和键盘的简单例子。根据你的程序的需要,curses 还能做得更多。在下一篇文章中,我计划展示如何更新这个简单程序以使用颜色。同时,如果你想学习更多关于 curses 的内容,我鼓励你阅读 Linux 文档计划中 Pradeep Padala 所著的[如何使用 NCURSES 编程][2]。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxjournal.com/content/creating-adventure-game-terminal-ncurses
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
译者:[Leemeans](https://github.com/leemeans)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxjournal.com/users/jim-hall
|
||||
[1]:http://www.linuxjournal.com/content/getting-started-ncurses
|
||||
[2]:http://tldp.org/HOWTO/NCURSES-Programming-HOWTO
|
@ -0,0 +1,53 @@
|
||||
放慢速度如何使我成为更好的领导者
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_leadership_brand.png?itok=YW1Syk4S)
|
||||
|
||||
在我职业生涯的早期,我认为我能做的最重要的事情就是行动。如果我的老板说跳,我的回答是“跳多高?”
|
||||
|
||||
但是当我成长为一个领导者和管理者时,我意识到了我能提供的最重要的品质是 [耐心][1] 和倾听。耐心和倾听意味着我关注于真正重要的东西。我很果断,所以我会毫不犹豫地行动。然而我了解到,当我考虑来自多个来源的意见,并就我们应该做什么提供建议,而不仅仅是对眼前的请求做出反应时,我的行动更具影响力。
|
||||
|
||||
实行开放式领导需要培养耐心和倾听技能,我需要在[最佳行动计划上进行合作,而不仅仅是最快的计划][2]。它还为我提供了一些工具,以解释 [为什么我会对某人说“不”][3] (或者,也许是“不是现在”),这样我就能以透明和自信的方式领导。
|
||||
|
||||
如果你正在从事软件开发并实践 scrum,那么下面的观点可能会引起你的共鸣:在 sprint 计划和 sprint 演示中,耐心和倾听对经理的表现来说和技能同样重要。(译注:scrum 是一种迭代式增量软件开发过程,通常用于敏捷软件开发。sprint 计划和 sprint 演示是其中的两个术语。)忽视它们,你能产生的影响就会打折扣。
|
||||
|
||||
### 专注于耐心
|
||||
|
||||
专注和耐心并不总是容易的。通常,我发现自己坐在会议上,一边用行动项目填满笔记本,一边想着:“我们可以简单地对 x 和 y 进行改进”。然后我会提醒自己,事情并不是那么线性的。
|
||||
|
||||
我需要考虑可能影响情况的其他因素。暂停下来,从多个人和多种来源获取数据,可以帮我充实策略,以确保组织的长期成功。它还帮助我确定短期的里程碑,这些里程碑能让我对业务交付负起责任。
|
||||
|
||||
这里有一个很好的例子,说明我以前并不把耐心当回事,以及这如何影响了我的表现。当我在北卡罗来纳州工作时,我与一个在亚利桑那州的人共事。我们没有使用视频会议技术,所以当我们交谈时我看不到她的肢体语言。然而当我负责为我领导的项目交付结果时,她是确保我获得足够支持的两个人之一。
|
||||
|
||||
无论出于何种原因,当我与她交谈、她要求我做某件事时,我就去做。她会为我的绩效评估提供意见,所以我想确保她满意。那时,我还不够成熟,不懂得其实没必要非要讨她开心;我的重点应该放在其他绩效指标上。我本应该花更多的时间倾听并与她合作,而不是在她还在说话的时候就拿起第一个“行动项目”开始工作。
|
||||
|
||||
在工作六个月后,她给了我一些负面的反馈。 我很生气,很伤心。 我没有做她所要求的一切吗? 我工作了很长时间,每周工作近七天,为期六个月。 她怎么敢批评我的表现?
|
||||
|
||||
然后,在我经历了愤怒和悲伤之后,我想到了她说的话,她的反馈很重要。
|
||||
|
||||
在 sprint 计划和 sprint 演示中,耐心和倾听对经理的表现来说和技能同样重要。
|
||||
|
||||
她对这个项目感到担忧,她继续让我负责是因为我是项目的负责人。我们解决了问题,并且我学到了关于如何领导的重要课程:领导力并不意味着“现在就完成”。 领导力意味着制定战略,然后制定沟通和实施支持战略的计划。这也意味着犯错和从这些问题中学习。
|
||||
|
||||
### 经验教训
|
||||
|
||||
事后看来,我意识到我本可以提出更多的问题,以更好地理解她反馈的意图。如果她的指导与我收到的其他意见不符,我也本可以提出异议。通过耐心倾听各方给我的关于项目的信息,综合我所学到的知识,并创建一个连贯的行动计划,我会成为一个更好的领导者,我所做的工作也会更有方向。我不会只对单个数据点做出反应,而是会实施一项战略计划。这样我也会得到更好的绩效评估。
|
||||
|
||||
我最终对她有一些反馈。 下次我们一起工作时,我不想在六个月后听到反馈意见。 我想早些时候和更频繁地听到反馈意见,以便我能够尽早从错误中学习。 关于这项工作的持续讨论是任何团队都应该发生的事情。
|
||||
|
||||
当我成为一名管理者和领导者时,我坚持要求我的团队达到相同的标准:计划,制定计划并反思。 重复。 不要让外力造成的麻烦让你偏离你需要实施的计划。 将工作分成小的增量,以便反思和调整计划。 正如 Daniel Goleman 写道:“把注意力放在需要的地方是领导力的一个主要任务。” 不要害怕面对这个挑战。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: [https://opensource.com/open-organization/18/2/open-leadership-patience-listening](https://opensource.com/open-organization/18/2/open-leadership-patience-listening)
|
||||
|
||||
作者:[Angela Robertson][a]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/arobertson98
|
||||
[1]:https://opensource.com/open-organization/16/3/my-most-difficult-leadership-lesson
|
||||
[2]:https://opensource.com/open-organization/16/3/fastest-result-isnt-always-best-result
|
||||
[3]:https://opensource.com/open-organization/17/5/saying-no-open-organization
|
@ -1,81 +0,0 @@
|
||||
如何使用 lftp 来加速 Linux/UNIX 上的 ftp/https 下载速度
|
||||
======
|
||||
lftp 是一个文件传输程序。它可以用复杂的 FTP, HTTP/HTTPS 和其他连接。如果指定了站点 URL,那么 lftp 将连接到该站点,否则会使用 open 命令建立连接。它是所有 Linux/Unix 命令行用户的必备工具。我目前写了一些关于[ Linux 下超快命令行下载加速器][1],比如 Axel 和 prozilla。lftp 是另一个能做相同的事,但有更多功能的工具。lftp 可以处理七种文件访问方式:
|
||||
|
||||
1. ftp
|
||||
2. ftps
|
||||
3. http
|
||||
4. https
|
||||
5. hftp
|
||||
6. fish
|
||||
7. sftp
|
||||
8. file
|
||||
|
||||
|
||||
|
||||
### 那么 lftp 的独特之处是什么?
|
||||
|
||||
* lftp 中的每个操作都是可靠的,即任何非致命错误都被忽略,并且重复操作。所以如果下载中断,它会自动重新启动。即使 FTP 服务器不支持 REST 命令,lftp 也会尝试从开头检索文件,直到文件传输完成。
|
||||
* lftp 具有类似 shell 的命令语法,允许你在后台并行启动多个命令。
|
||||
* lftp 有一个内置镜像,可以下载或更新整个目录树。还有一个反向镜像(mirror -R),它可以上传或更新服务器上的目录树。镜像也可以在两个远程服务器之间同步目录,如果可用的话会使用 FXP。
|
||||
|
||||
|
||||
### 如何使用 lftp 作为下载加速器
|
||||
|
||||
lftp 有 pget 命令。它能让你并行下载。语法是:
|
||||
`lftp -e 'pget -n NUM -c url; exit'`
|
||||
例如,使用 pget 分 5个部分下载 <http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2>:
|
||||
```
|
||||
$ cd /tmp
|
||||
$ lftp -e 'pget -n 5 -c http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2'
|
||||
```
|
||||
示例输出:
|
||||
```
|
||||
45108964 bytes transferred in 57 seconds (775.3K/s)
|
||||
lftp :~>quit
|
||||
|
||||
```
|
||||
|
||||
这里:
|
||||
|
||||
1. pget - 并行下载文件
|
||||
2. -n 5 - 将最大连接数设置为 5
|
||||
3. -c - 如果当前目录存在 lfile.lftp-pget-status,则继续中断的传输
|
||||
|
||||
|
||||
|
||||
### 如何在 Linux/Unix 中使用 lftp 来加速 ftp/https下载
|
||||
|
||||
再尝试添加退出命令:
|
||||
`$ lftp -e 'pget -n 10 -c https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.15.tar.xz; exit'`
|
||||
|
||||
[Linux-lftp-command-demo](https://www.cyberciti.biz/tips/wp-content/uploads/2007/08/Linux-lftp-command-demo.mp4)
|
||||
|
||||
### 关于并行下载的说明
|
||||
|
||||
请注意,通过使用下载加速器,你将增加远程服务器负载。另请注意,lftp 可能无法在不支持多点下载的站点上工作,或者防火墙阻止了此类请求。
|
||||
|
||||
lftp 命令还提供了许多其他功能。有关更多信息,请参考 [lftp][2] 的 man 页面:
|
||||
`man lftp`
|
||||
|
||||
### 关于作者
|
||||
|
||||
作者是 nixCraft 的创建者,经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本的培训师。他曾与全球客户以及IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 [Twitter][9]、[Facebook][10]、[Google +][11] 上关注他。通过[我的 RSS/XML 订阅][5]获取**最新的系统管理、Linux/Unix 以及开源主题教程**。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/tips/linux-unix-download-accelerator.html
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz
|
||||
[1]:https://www.cyberciti.biz/tips/download-accelerator-for-linux-command-line-tools.html
|
||||
[2]:https://lftp.yar.ru/
|
||||
[3]:https://twitter.com/nixcraft
|
||||
[4]:https://facebook.com/nixcraft
|
||||
[5]:https://plus.google.com/+CybercitiBiz
|
||||
[6]:https://www.cyberciti.biz/atom/atom.xml
|
@ -1,103 +0,0 @@
|
||||
内核如何管理内存
|
||||
============================================================
|
||||
|
||||
|
||||
在学习了进程的 [虚拟地址布局][1] 之后,我们回到内核,来学习它管理用户内存的机制。这里再次使用 Gonzo:
|
||||
|
||||
![Linux kernel mm_struct](http://static.duartes.org/img/blogPosts/mm_struct.png)
|
||||
|
||||
Linux 进程在内核中是作为进程描述符 [task_struct][2] (译者注:它是在 Linux 中描述进程完整信息的一种数据结构)的实例来实现的。在 task_struct 中的 [mm][3] 域指向到内存描述符,[mm_struct][4] 是一个程序在内存中的执行摘要。它保存了起始和结束内存段,如下图所示,进程使用的物理内存页面的 [数量][5](RSS 译者注:(Resident Set Size)常驻内存大小 )、虚拟地址空间使用的 [总数量][6]、以及其它片断。 在内存描述中,我们可以获悉它有两种管理内存的方式:虚拟内存区域集和页面表。Gonzo 的内存区域如下所示:
|
||||
|
||||
![Kernel memory descriptor and memory areas](http://static.duartes.org/img/blogPosts/memoryDescriptorAndMemoryAreas.png)
|
||||
|
||||
每个虚拟内存区域(VMA)是一个连续的虚拟地址范围;这些区域绝对不会重叠。一个 [vm_area_struct][7] 的实例完整描述了一个内存区域,包括它的起始和结束地址,[flags][8] 决定了访问权限和行为,并且 [vm_file][9] 域指定了映射到这个区域的文件(如果有的话)。除了内存映射段的例外情况之外,一个 VMA 是不能匿名映射文件的,上面的每个内存段(比如,堆、栈)都对应一个单个的 VMA。虽然它通常都使用在 x86 的机器上,但它并不是必需的。VMAs 也不关心它们在哪个段中。
|
||||
|
||||
一个程序的 VMAs 在内存描述符中作为 [mmap][10] 域的一个链接列表保存的,以起始虚拟地址为序进行排列,并且在 [mm_rb][12] 域中作为一个 [红黑树][11] 的根。红黑树允许内核通过给定的虚拟地址去快速搜索内存区域。在你读取文件 `/proc/pid_of_process/maps`时,内核只是简单地读取每个进程的 VMAs 的链接列表并显示它们。
|
||||
|
||||
在 Windows 中,[EPROCESS][14] 块大致类似于一个 task_struct 和 mm_struct 的结合。在 Windows 中模拟一个 VMA 的是虚拟地址描述符,或称为 [VAD][15];它保存在一个 [AVL 树][16] 中。你知道关于 Windows 和 Linux 之间最有趣的事情是什么吗?其实它们只有一点小差别。
|
||||
|
||||
4GB 虚拟地址空间被分为两个页面。在 32 位模式中的 x86 处理器中支持 4KB、2MB、以及 4MB 大小的页面。Linux 和 Windows 都使用大小为 4KB 的页面去映射用户的一部分虚拟地址空间。字节 0-4095 在 page 0 中,字节 4096-8191 在 page 1 中,依次类推。VMA 的大小 _必须是页面大小的倍数_ 。下图是使用 4KB 大小页面的总数量为 3GB 的用户空间:
|
||||
|
||||
![4KB Pages Virtual User Space](http://static.duartes.org/img/blogPosts/pagedVirtualSpace.png)
|
||||
|
||||
处理器通过查看页面表去转换一个虚拟内存地址到一个真实的物理内存地址。每个进程都有它自己的一组页面表;每当发生进程切换时,用户空间的页面表也同时切换。Linux 在内存描述符的 [pgd][17] 域中保存了一个指向处理器页面表的指针。对于每个虚拟页面,页面表中都有一个相应的页面表条目(PTE),在常规的 x86 页面表中,它是一个简单的如下所示的大小为 4 字节的一条记录:
|
||||
|
||||
![x86 Page Table Entry (PTE) for 4KB page](http://static.duartes.org/img/blogPosts/x86PageTableEntry4KB.png)
|
||||
|
||||
Linux 通过函数去 [读取][18] 和 [设置][19] PTE 条目中的每个标志位。标志位 P 告诉处理器这个虚拟页面是否在物理内存中。如果被清除(设置为 0),访问这个页面将触发一个页面故障。请记住,当这个标志位为 0 时,内核可以在剩余的域上做任何想做的事。R/W 标志位是读/写标志;如果被清除,这个页面将变成只读的。U/S 标志位表示用户/超级用户;如果被清除,这个页面将仅被内核访问。这些标志都是用于实现我们在前面看到的只读内存和内核空间保护。
|
||||
|
||||
标志位 D 和 A 用于标识页面是否是“脏的”或者是已被访问过。一个脏页面表示已经被写入,而一个被访问过的页面则表示有一个写入或者读取发生过。这两个标志位都是粘滞位:处理器只能设置它们,而清除则是由内核来完成的。最终,PTE 保存了这个页面相应的起始物理地址,它们按 4KB 进行整齐排列。这个看起来有点小的域是一些痛苦的根源,因为它限制了物理内存最大为 [4 GB][20]。其它的 PTE 域留到下次再讲,因为它是涉及了物理地址扩展的知识。
|
||||
|
||||
由于在一个虚拟页面上的所有字节都共享一个 U/S 和 R/W 标志位,所以内存保护的最小单元是一个虚拟页面。但是,同一个物理内存可能被映射到不同的虚拟页面,这样就有可能会出现相同的物理内存出现不同的保护标志位的情况。请注意,在 PTE 中是看不到运行权限的。这就是为什么经典的 x86 页面上允许代码在栈上被执行的原因,这样会很容易导致挖掘栈缓冲溢出的漏洞(可能会通过使用 [return-to-libc][21] 和其它技术来开发非可执行栈)。由于 PTE 缺少禁止运行标志位说明了一个更广泛的事实:在 VMA 中的权限标志位有可能或者不可能完全转换为硬件保护。内核只能做它能做到的,但是,最终的架构限制了它能做的事情。
|
||||
|
||||
虚拟内存不能保存任何东西,它只是简单地 _映射_ 一个程序的地址空间到底层的物理内存上。物理内存被当作一个被称为物理地址空间的巨大块来被处理器访问。虽然内存的操作[涉及到某些][22] 总线,我们在这里先忽略它,并假设物理地址范围从 0 到可用的最大值按字节递增。物理地址空间被内核进一步分解为页面帧。处理器并不会关心帧的具体情况,这一点对内核也是至关重要的,因为,页面帧是物理内存管理的最小单元。Linux 和 Windows 在 32 位模式下都使用 4KB 大小的页面帧;下图是一个有 2 GB 内存的机器的例子:
|
||||
|
||||
![Physical Address Space](http://static.duartes.org/img/blogPosts/physicalAddressSpace.png)
|
||||
|
||||
在 Linux 上每个页面帧是被一个 [描述符][23] 和 [几个标志][24] 来跟踪的。通过这些描述符和标志,实现了对机器上整个物理内存的跟踪;每个页面帧的具体状态是公开的。物理内存是通过使用 [Buddy 内存分配][25] (译者注:一种内存分配算法)技术来管理的,因此,如果可以通过 Buddy 系统分配内存,那么一个页面帧是未分配的(free)。一个被分配的页面帧可以是匿名的、持有程序数据的、或者它可能处于页面缓存中、持有数据保存在一个文件或者块设备中。还有其它的异形页面帧,但是这些异形页面帧现在已经不怎么使用了。Windows 有一个类似的页面帧号(Page Frame Number (PFN))数据库去跟踪物理内存。
|
||||
|
||||
我们把虚拟内存区域(VMA)、页面表条目(PTE)、以及页面帧放在一起来理解它们是如何工作的。下面是一个用户堆的示例:
|
||||
|
||||
![Physical Address Space](http://static.duartes.org/img/blogPosts/heapMapped.png)
|
||||
|
||||
蓝色的矩形框表示在 VMA 范围内的页面,而箭头表示页面表条目映射页面到页面帧。一些缺少箭头的虚拟页面,表示它们对应的 PTEs 的当前标志位被清除(置为 0)。这可能是因为这个页面从来没有被使用过,或者是它的内容已经被交换出去了(swapped out)。在这两种情况下,即便这些页面在 VMA 中,访问它们也将导致产生一个页面故障。对于这种 VMA 和页面表的不一致的情况,看上去似乎很奇怪,但是这种情况却经常发生。
|
||||
|
||||
一个 VMA 像一个在你的程序和内核之间的合约。你请求它做一些事情(分配内存、文件映射、等等),内核会回应“收到”,然后去创建或者更新相应的 VMA。 但是,它 _并不立刻_ 去“兑现”对你的承诺,而是它会等待到发生一个页面故障时才去 _真正_ 做这个工作。内核是个“懒惰的家伙”、“不诚实的人渣”;这就是虚拟内存的基本原理。它适用于大多数的、一些类似的和意外的情况,但是,它是规则是,VMAs 记录 _约定的_ 内容,而 PTEs 才反映这个“懒惰的内核” _真正做了什么_。通过这两种数据结构共同来管理程序的内存;它们共同来完成解决页面故障、释放内存、从内存中交换出数据、等等。下图是内存分配的一个简单案例:
|
||||
|
||||
![Example of demand paging and memory allocation](http://static.duartes.org/img/blogPosts/heapAllocation.png)
|
||||
|
||||
当程序通过 [brk()][26] 系统调用来请求一些内存时,内核只是简单地 [更新][27] 堆的 VMA 并给程序回复“已搞定”。而在这个时候并没有真正地分配页面帧并且新的页面也没有映射到物理内存上。一旦程序尝试去访问这个页面时,将发生页面故障,然后处理器调用 [do_page_fault()][28]。这个函数将使用 [find_vma()][30] 去 [搜索][29] 发生页面故障的 VMA。如果找到了,然后在 VMA 上进行权限检查以防范恶意访问(读取或者写入)。如果没有合适的 VMA,也没有尝试访问的内存的“合约”,将会给进程返回段故障。
|
||||
|
||||
当找到了一个合适的 VMA,内核必须通过查找 PTE 的内容和 VMA 的类型去处理故障。在我们的案例中,PTE 显示这个页面是 [不存在的][33]。事实上,我们的 PTE 是全部空白的(全部都是 0),在 Linux 中这表示虚拟内存还没有被映射。由于这是匿名 VMA,我们有一个完全的 RAM 事务,它必须被 [do_anonymous_page()][34] 来处理,它分配页面帧,并且用一个 PTE 去映射故障虚拟页面到一个新分配的帧。
|
||||
|
||||
有时候,事情可能会有所不同。例如,对于被交换出内存的页面的 PTE,在当前(Present)标志位上是 0,但它并不是空白的。而是在交换位置仍有页面内容,它必须从磁盘上读取并且通过 [do_swap_page()][35] 来加载到一个被称为 [major fault][36] 的页面帧上。
|
||||
|
||||
这是我们通过探查内核的用户内存管理得出的前半部分的结论。在下一篇文章中,我们通过将文件加载到内存中,来构建一个完整的内存框架图,以及对性能的影响。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://duartes.org/gustavo/blog/post/how-the-kernel-manages-your-memory/
|
||||
|
||||
作者:[Gustavo Duarte][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://duartes.org/gustavo/blog/about/
|
||||
[1]:http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
|
||||
[2]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/sched.h#L1075
|
||||
[3]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/sched.h#L1129
|
||||
[4]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L173
|
||||
[5]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L197
|
||||
[6]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L206
|
||||
[7]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L99
|
||||
[8]:http://lxr.linux.no/linux+v2.6.28/include/linux/mm.h#L76
|
||||
[9]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L150
|
||||
[10]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L174
|
||||
[11]:http://en.wikipedia.org/wiki/Red_black_tree
|
||||
[12]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L175
|
||||
[13]:http://lxr.linux.no/linux+v2.6.28.1/fs/proc/task_mmu.c#L201
|
||||
[14]:http://www.nirsoft.net/kernel_struct/vista/EPROCESS.html
|
||||
[15]:http://www.nirsoft.net/kernel_struct/vista/MMVAD.html
|
||||
[16]:http://en.wikipedia.org/wiki/AVL_tree
|
||||
[17]:http://lxr.linux.no/linux+v2.6.28.1/include/linux/mm_types.h#L185
|
||||
[18]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/include/asm/pgtable.h#L173
|
||||
[19]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/include/asm/pgtable.h#L230
|
||||
[20]:http://www.google.com/search?hl=en&amp;amp;amp;amp;q=2^20+*+2^12+bytes+in+GB
|
||||
[21]:http://en.wikipedia.org/wiki/Return-to-libc_attack
|
||||
[22]:http://duartes.org/gustavo/blog/post/getting-physical-with-memory
|
||||
[23]:http://lxr.linux.no/linux+v2.6.28/include/linux/mm_types.h#L32
|
||||
[24]:http://lxr.linux.no/linux+v2.6.28/include/linux/page-flags.h#L14
|
||||
[25]:http://en.wikipedia.org/wiki/Buddy_memory_allocation
|
||||
[26]:http://www.kernel.org/doc/man-pages/online/pages/man2/brk.2.html
|
||||
[27]:http://lxr.linux.no/linux+v2.6.28.1/mm/mmap.c#L2050
|
||||
[28]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L583
|
||||
[29]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L692
|
||||
[30]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1466
|
||||
[31]:http://lxr.linux.no/linux+v2.6.28/arch/x86/mm/fault.c#L711
|
||||
[32]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2653
|
||||
[33]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2674
|
||||
[34]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2681
|
||||
[35]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2280
|
||||
[36]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2316
|
192
translated/tech/20150708 Choosing a Linux Tracer (2015).md
Normal file
@ -0,0 +1,192 @@
|
||||
选择一个 Linux 跟踪器(2015)
|
||||
======
|
||||
[![][1]][2]
|
||||
_Linux 跟踪很神奇!_
|
||||
|
||||
跟踪器是高级的性能分析和调试工具,如果你使用过 strace(1) 或者 tcpdump(8),你不应该被它吓到 ... 你使用的就是跟踪器。系统跟踪器能让你看到很多的东西,而不仅是系统调用或者包,因为常见的跟踪器都可以跟踪内核或者应用程序的任何东西。
|
||||
|
||||
有大量的 Linux 跟踪器可供你选择。由于它们中的每个都有一个官方的(或者非官方的)的吉祥物,我们有足够多的选择给孩子们展示。
|
||||
|
||||
你喜欢使用哪一个呢?
|
||||
|
||||
我从两类读者的角度来回答这个问题:大多数人和性能/内核工程师。当然,随着时间的推移,这也可能会发生变化,因此,我需要及时去更新本文内容,或许是每年一次,或者更频繁。
|
||||
|
||||
## 对于大多数人
|
||||
|
||||
大多数人(开发者、系统管理员、运维人员、网络可靠性工程师(SRE)…)是不需要去学习系统跟踪器的详细内容的。以下是你需要去了解和做的事情:
|
||||
|
||||
### 1. 使用 perf_events 了解 CPU 概要信息
|
||||
|
||||
使用 perf_events 去了解 CPU 的基本情况。它的概要信息可以用一个 [火焰图][3] 来形象地表示。比如:
|
||||
```
|
||||
git clone --depth 1 https://github.com/brendangregg/FlameGraph
|
||||
perf record -F 99 -a -g -- sleep 30
|
||||
perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > perf.svg
|
||||
|
||||
```
|
||||
|
||||
Linux 的 perf_events(又称为 “perf”,后面用它来表示命令)是官方为 Linux 用户准备的跟踪器/分析器。它位于内核源码中,并且维护得非常好(而且它的功能还在快速增强)。它一般是通过 linux-tools-common 这个包来添加的。
|
||||
|
||||
perf 可以做的事情很多,但是,如果我建议你只学习其中的一个功能,那就是查看 CPU 概要信息。虽然从技术角度来说,这并不是事件“跟踪”,但它胜在简单。较难的部分是让完整的调用栈和符号正常工作,这部分在我的 [Linux Profiling at Netflix][4] 中讨论过。
|
||||
|
||||
### 2. 知道它能干什么
|
||||
|
||||
正如一位朋友所说的:“你不需要知道 X 光机是如何工作的,但你需要明白的是,如果你吞下了一个硬币,X 光机是你的一个选择!”你需要知道使用跟踪器能够做什么,因此,如果你在业务上需要它,你可以以后再去学习它,或者请会使用它的人来做。
|
||||
|
||||
简单地说:几乎任何事情都可以通过跟踪来了解它。内部文件系统、TCP/IP 处理过程、设备驱动、应用程序内部情况。阅读我在 lwn.net 上的 [ftrace][5] 的文章,也可以去浏览 [perf_events 页面][6],那里有一些跟踪能力的示例。
|
||||
|
||||
### 3. 请求一个前端
|
||||
|
||||
如果你在购买性能分析产品(有许多公司销售这类产品),请要求它支持 Linux 跟踪:希望能通过一个“点击”界面去探查内核的内部,并包含一个展示栈上不同位置延迟的热力图,就像我在 [Monitorama 演讲][7] 中描述的那样。
|
||||
|
||||
我创建并开源了我自己的一些前端,虽然它是基于 CLI 的(不是图形界面的)。这样将使其它人使用跟踪器更快更容易。比如,我的 [perf-tools][8],跟踪新进程是这样的:
|
||||
```
|
||||
# ./execsnoop
|
||||
Tracing exec()s. Ctrl-C to end.
|
||||
PID PPID ARGS
|
||||
22898 22004 man ls
|
||||
22905 22898 preconv -e UTF-8
|
||||
22908 22898 pager -s
|
||||
22907 22898 nroff -mandoc -rLL=164n -rLT=164n -Tutf8
|
||||
[...]
|
||||
|
||||
```
|
||||
|
||||
在 Netflix 上,我创建了一个 [Vector][9],它是一个实例分析工具,实际上它是一个 Linux 跟踪器的前端。
|
||||
|
||||
## 对于性能或者内核工程师
|
||||
|
||||
一般来说,我们的工作都非常难,因为大多数人或许要求我们去搞清楚如何去跟踪某个事件,以及因此需要选择使用其中一个跟踪器。为完全理解一个跟踪器,你通常需要花至少一百多个小时去使用它。理解所有的 Linux 跟踪器并能在它们之间做出正确的选择是件很难的事情。(我或许是唯一接近完成这件事的人)
|
||||
|
||||
在这里我建议选择如下之一:
|
||||
|
||||
A) 选择一个全能的跟踪器,并以它为标准。这需要在一个测试环境中,花大量的时间来搞清楚它的细微差别和安全性。我现在的建议是 SystemTap 的最新版本(即从这个 [源][10] 构建的)。我知道有的公司选择的是 LTTng ,尽管它并不是很强大(但是它很安全),但他们也用的很好。如果在 sysdig 中添加了跟踪点或者是 kprobes,它也是另外的一个候选者。
|
||||
|
||||
B) 按我的 [Velocity 教程中][11] 的流程图。这意味着可能是使用 ftrace 或者 perf_events,因为 eBPF 是集成在内核中的,然后用其它的跟踪器,如 SystemTap/LTTng 作为对 eBPF 的补充。我目前在 Netflix 的工作中就是这么做的。
|
||||
|
||||
以下是我对各个跟踪器的评价:
|
||||
|
||||
### 1. ftrace
|
||||
|
||||
我爱 [Ftrace][12],它是内核黑客最好的朋友。它被构建进内核中,它能够消费跟踪点、kprobes、以及 uprobes,并且提供一些功能:使用可选的过滤器和参数进行事件跟踪;事件计数和计时,内核概览;函数流步进。关于它的示例可以查看内核源树中的 [ftrace.txt][13]。它通过 /sys 来管理,是面向单 root 用户的(虽然你可以使用缓冲实例来破解它以支持多用户),它的界面有时很繁琐,但是它比较容易破解,并且有前端:Steven Rostedt,ftrace 的主要创建者,他设计了 trace-cmd,并且我已经创建了 perf-tools 集合。我最讨厌的就是它不可编程,因此你不能(比如)保存时间戳、计算延迟或者保存历史数据;你必须把事件转储到用户级再做后期处理,这是有开销的。它有望通过 eBPF 实现可编程。
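作为示意,下面是一个直接通过 ftrace 的 /sys 接口启用函数跟踪器的小例子(具体挂载路径因内核配置而异,有些系统使用 /sys/kernel/tracing):

```
# 进入 ftrace 的控制目录
cd /sys/kernel/debug/tracing

# 查看可用的跟踪器,并启用函数跟踪器
cat available_tracers
echo function > current_tracer

# 查看跟踪缓冲区的前 20 行,结束后恢复默认
head -20 trace
echo nop > current_tracer
```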
|
||||
|
||||
### 2. perf_events
|
||||
|
||||
[perf_events][14] 是 Linux 用户的主要跟踪工具,它来源于 Linux 内核,一般是通过 linux-tools-common 包来添加。又称为 “perf”,后面的 perf 指的是它的前端,它非常高效(动态缓冲),一般用于跟踪并转储到一个文件中(perf.data),然后可以在以后的某个时间进行后期处理。它可以做大部分 ftrace 能做的事情。它实现不了函数流步进,并且不太容易破解(因为它的安全/错误检查做的非常好)。但它可以做概览(采样)、CPU 性能计数、用户级的栈转换、以及利用调试信息(debuginfo)进行带局部变量的行级跟踪。它也支持多个并发用户。与 ftrace 一样,它也无法在内核内编程,除非将来获得 eBPF 支持(相关补丁已在计划中)。如果只学习一个跟踪器,我建议大家去学习 perf,它可以解决大量的问题,并且它也很安全。
|
||||
|
||||
### 3. eBPF
|
||||
|
||||
扩展的伯克利包过滤器(eBPF)是一个内核虚拟机,可以在事件上运行程序,它非常高效(JIT)。它可能最终为 ftrace 和 perf_events 提供内核可编程,并可以去增强其它跟踪器。它现在是由 Alexei Starovoitov 开发,还没有实现全整合,但是对于一些令人印象深刻的工具,有些内核版本(比如,4.1)已经支持了:比如,块设备 I/O 延迟热力图。更多参考资料,请查阅 Alexei 的 [BPF 演示][15],和它的 [eBPF 示例][16]。
|
||||
|
||||
### 4. SystemTap
|
||||
|
||||
[SystemTap][17] 是一个非常强大的跟踪器。它可以做任何事情:概览、跟踪点、kprobes、uprobes(它就来自 SystemTap)、USDT、内核编程等等。它将程序编译成内核模块并加载它们 —— 这是一种很难保证安全的方法。它是在内核树外开发的,并且在过去的一段时间内出现了很多问题(恐慌或冻结)。许多并不是 SystemTap 的过错 —— 它往往是第一个使用内核某些跟踪功能的工具,因此也最先碰到这些功能的 bug。最新版本的 SystemTap 是非常好的(你需要从它的源代码编译),但是,许多人仍然没有从早期版本的问题阴影中走出来。如果你想去使用它,花一些时间去测试环境,然后,在 irc.freenode.net 的 #systemtap 频道与开发者进行讨论。(Netflix 有一个容错架构,我们使用了 SystemTap,但是我们或许比起你来说,很少担心它的安全性)我最讨厌的事情是,它假设你有办法得到内核调试信息,而我并没有这些信息。没有它我确实可以做一些事情,但是缺少相关的文档和示例(我现在自己开始帮着做这些了)。
|
||||
|
||||
### 5. LTTng
|
||||
|
||||
[LTTng][18] 对事件收集进行了优化,性能要好于其它的跟踪器,也支持许多的事件类型,包括 USDT。它是在内核树外开发的。它的核心部分非常简单:通过一个很小的且很固定的指令集写入事件到跟踪缓冲区。这样让它既安全又快速。缺点是做内核编程不太容易。我觉得那不是个大问题,由于它优化的很好,尽管在需要后期处理的情况下,仍然可以充分的扩展。它也探索了一种不同的分析技术:很多的“黑匣子”记录了全部有趣的事件,可以在以后的 GUI 下分析它。我担心记录时意外遗漏事件,我真的需要花一些时间去看看它在实践中是如何工作的。这个跟踪器上我花的时间最少(原因是没有实践过它)。
|
||||
|
||||
### 6. ktap
|
||||
|
||||
[ktap][19] 是一个很有前途的跟踪器,它在内核中使用了一个 lua 虚拟机,它不需要调试信息和嵌入式设备就可以工作的很好。这使得它进入了人们的视野,在某个时候似乎要成为 Linux 上最好的跟踪器。然而,eBPF 开始集成到了内核,而 ktap 的集成工作被推迟了,直到它能够使用 eBPF 而不是它自己的虚拟机。由于 eBPF 在几个月后仍然在集成过程中,使得 ktap 的开发者等待了很长的时间。我希望在今年的晚些时间它能够重启开发。
|
||||
|
||||
### 7. dtrace4linux
|
||||
|
||||
[dtrace4linux][20] 主要是一个人(Paul Fox)利用业余时间将 Sun DTrace 移植到 Linux 的成果。它令人印象深刻,一部分 provider 已经可以工作,但是它距离完成还有很长的路,最多只能算是实验性的工具(不安全)。我认为对于许可证(license)的担心,使人们对它保持谨慎:它可能永远也进入不了 Linux 内核,因为 Sun 是基于 CDDL 许可证发布的 DTrace;Paul 的解决方法是将它作为一个附加模块。我非常希望看到 Linux 上的 DTrace,并且希望这个项目能够完成,我加入 Netflix 时曾想过要花一些时间来帮它完成。但是,我一直在使用内置的跟踪器 ftrace 和 perf_events。
|
||||
|
||||
### 8. OL DTrace
|
||||
|
||||
[Oracle Linux DTrace][21] 是将 DTrace 移植到 Linux(尤其是 Oracle Linux)的一系列努力之一。过去这些年中的许多发布版本都在稳定地进步,开发者甚至谈到了改善 DTrace 测试套件,这显示了这个项目很有前途。许多有用的功能已经完成:系统调用、概览、sdt、proc、sched、以及 USDT。我一直在等待 fbt(函数边界跟踪,对内核的动态跟踪),它将成为 Linux 内核上非常强大的功能。它最终能否成功取决于能否吸引足够多的人去使用 Oracle Linux(并为支持付费)。另一个羁绊是它并非完全开源:内核组件是开源的,但用户级代码我没有看到。
|
||||
|
||||
### 9. sysdig
|
||||
|
||||
[sysdig][22] 是一个很新的跟踪器,它可以使用类似 tcpdump 的过滤语法来处理系统调用事件,并用 lua 做后期处理。它也是令人印象深刻的,很高兴能在系统跟踪领域看到这样的创新。它的局限性是,目前它只支持系统调用,并且它要把所有事件都转储到用户级再进行后期处理。不过只靠系统调用你依然可以做很多事情,虽然我还是希望能看到它支持跟踪点、kprobes、以及 uprobes。我也希望看到它支持 eBPF 来做内核内概览。sysdig 的开发者现在正在增加对容器的支持。可以关注它的进一步发展。
|
||||
|
||||
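顺便给出几个 sysdig 的典型用法作为参考(一个最小示意):

```
$ sudo sysdig proc.name=sshd        # 用类似 tcpdump 的过滤语法实时查看 sshd 的系统调用
$ sudo sysdig -c topfiles_bytes     # 用 lua chisel 统计读写字节数最多的文件
$ sudo sysdig -w trace.scap         # 把事件写入文件,之后用 sysdig -r trace.scap 做后期处理
```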
## 深入阅读
|
||||
|
||||
我自己的工作中使用到的跟踪器包括:
|
||||
|
||||
**ftrace** : 我的 [perf-tools][8] 集合(查看示例目录);我的 lwn.net 的 [ftrace 跟踪器的文章][5]; 一个 [LISA14][8] 演讲;和文章: [function counting][23], [iosnoop][24], [opensnoop][25], [execsnoop][26], [TCP retransmits][27], [uprobes][28], 和 [USDT][29]。
|
||||
|
||||
**perf_events** : 我的 [perf_events 示例][6] 页面;在 SCALE 会议上的一个 [Linux Profiling at Netflix][4] 演讲;和文章:[CPU 采样][30]、[静态跟踪点][31]、[热力图][32]、[计数][33]、[内核行跟踪][34]、[off-CPU 时间火焰图][35]。
|
||||
|
||||
**eBPF** : 文章 [eBPF:一个小的进步][36],和一些 [BPF-tools][37] (我需要发布更多)。
|
||||
|
||||
**SystemTap** : 很久以前,我写了一篇 [使用 SystemTap][38] 的文章,它有点时间了。最近我发布了一些 [systemtap-lwtools][39],展示了在没有内核调试信息的情况下,SystemTap 是如何使用的。
|
||||
|
||||
**LTTng** : 我使用它的时间很短,也没有发布什么文章。
|
||||
|
||||
**ktap** : 我的 [ktap 示例][40] 页面包括一行程序和脚本,虽然它是早期的版本。
|
||||
|
||||
**dtrace4linux** : 在我的 [系统性能][41] 书中包含了一些示例,并且在过去的时间中我为了某些事情开发了一些小的修补,比如, [timestamps][42]。
|
||||
|
||||
**OL DTrace** : 因为它是对 DTrace 的直接移植,我早期关于 DTrace 的大部分工作都应该是与它相关的(链接太多了,可以去 [我的主页][43] 上搜索)。一旦它更加完善,我或许会开发一些专用工具。
|
||||
|
||||
**sysdig** : 我贡献了 [fileslower][44] 和 [subsecond offset spectrogram][45] chisels。
|
||||
|
||||
**其它** : 关于 [strace][46],我写了一篇告诫性的文章。
|
||||
|
||||
不好意思,没有更多的跟踪器了! … 如果你想知道为什么 Linux 中的跟踪器不止一个,或者关于 DTrace 的内容,在我的 [从 DTrace 到 Linux][47] 的演讲中有答案,从 [第 28 张幻灯片][48] 开始。
|
||||
|
||||
感谢 [Deirdre Straughan][49] 的编辑,以及创建了跟踪的小马(General Zoi 是小马的创建者)。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.brendangregg.com/blog/2015-07-08/choosing-a-linux-tracer.html
|
||||
|
||||
作者:[Brendan Gregg.][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.brendangregg.com
|
||||
[1]:http://www.brendangregg.com/blog/images/2015/tracing_ponies.png
|
||||
[2]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools/105
|
||||
[3]:http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html
|
||||
[4]:http://www.brendangregg.com/blog/2015-02-27/linux-profiling-at-netflix.html
|
||||
[5]:http://lwn.net/Articles/608497/
|
||||
[6]:http://www.brendangregg.com/perf.html
|
||||
[7]:http://www.brendangregg.com/blog/2015-06-23/netflix-instance-analysis-requirements.html
|
||||
[8]:http://www.brendangregg.com/blog/2015-03-17/linux-performance-analysis-perf-tools.html
|
||||
[9]:http://techblog.netflix.com/2015/04/introducing-vector-netflixs-on-host.html
|
||||
[10]:https://sourceware.org/git/?p=systemtap.git;a=blob_plain;f=README;hb=HEAD
|
||||
[11]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools
|
||||
[12]:http://lwn.net/Articles/370423/
|
||||
[13]:https://www.kernel.org/doc/Documentation/trace/ftrace.txt
|
||||
[14]:https://perf.wiki.kernel.org/index.php/Main_Page
|
||||
[15]:http://www.phoronix.com/scan.php?page=news_item&px=BPF-Understanding-Kernel-VM
|
||||
[16]:https://github.com/torvalds/linux/tree/master/samples/bpf
|
||||
[17]:https://sourceware.org/systemtap/wiki
|
||||
[18]:http://lttng.org/
|
||||
[19]:http://ktap.org/
|
||||
[20]:https://github.com/dtrace4linux/linux
|
||||
[21]:http://docs.oracle.com/cd/E37670_01/E38608/html/index.html
|
||||
[22]:http://www.sysdig.org/
|
||||
[23]:http://www.brendangregg.com/blog/2014-07-13/linux-ftrace-function-counting.html
|
||||
[24]:http://www.brendangregg.com/blog/2014-07-16/iosnoop-for-linux.html
|
||||
[25]:http://www.brendangregg.com/blog/2014-07-25/opensnoop-for-linux.html
|
||||
[26]:http://www.brendangregg.com/blog/2014-07-28/execsnoop-for-linux.html
|
||||
[27]:http://www.brendangregg.com/blog/2014-09-06/linux-ftrace-tcp-retransmit-tracing.html
|
||||
[28]:http://www.brendangregg.com/blog/2015-06-28/linux-ftrace-uprobe.html
|
||||
[29]:http://www.brendangregg.com/blog/2015-07-03/hacking-linux-usdt-ftrace.html
|
||||
[30]:http://www.brendangregg.com/blog/2014-06-22/perf-cpu-sample.html
|
||||
[31]:http://www.brendangregg.com/blog/2014-06-29/perf-static-tracepoints.html
|
||||
[32]:http://www.brendangregg.com/blog/2014-07-01/perf-heat-maps.html
|
||||
[33]:http://www.brendangregg.com/blog/2014-07-03/perf-counting.html
|
||||
[34]:http://www.brendangregg.com/blog/2014-09-11/perf-kernel-line-tracing.html
|
||||
[35]:http://www.brendangregg.com/blog/2015-02-26/linux-perf-off-cpu-flame-graph.html
|
||||
[36]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html
|
||||
[37]:https://github.com/brendangregg/BPF-tools
|
||||
[38]:http://dtrace.org/blogs/brendan/2011/10/15/using-systemtap/
|
||||
[39]:https://github.com/brendangregg/systemtap-lwtools
|
||||
[40]:http://www.brendangregg.com/ktap.html
|
||||
[41]:http://www.brendangregg.com/sysperfbook.html
|
||||
[42]:https://github.com/dtrace4linux/linux/issues/55
|
||||
[43]:http://www.brendangregg.com
|
||||
[44]:https://github.com/brendangregg/sysdig/commit/d0eeac1a32d6749dab24d1dc3fffb2ef0f9d7151
|
||||
[45]:https://github.com/brendangregg/sysdig/commit/2f21604dce0b561407accb9dba869aa19c365952
|
||||
[46]:http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-syscall.html
|
||||
[47]:http://www.brendangregg.com/blog/2015-02-28/from-dtrace-to-linux.html
|
||||
[48]:http://www.slideshare.net/brendangregg/from-dtrace-to-linux/28
|
||||
[49]:http://www.beginningwithi.com/
|
54
translated/tech/20170928 Process Monitoring.md
Normal file
@ -0,0 +1,54 @@
|
||||
监视进程
|
||||
======
|
||||
|
||||
由于将 Mon 项目 fork 到了 [etbemon][1] 中,我花了一些时间改进监视脚本。事实上监视一些东西通常很容易,但是决定监视什么才是困难的部分。进程监视脚本 ps.monitor 就是我重新设计过的一个。
|
||||
|
||||
对于进程监视我有一些思路。如果你对进程监视如何做的更好有任何建议,请通过评论区告诉我。
|
||||
|
||||
对于不使用 Mon 的人来说,如果一切 OK 监视脚本就返回 0,而如果有问题它会返回 1,并使用标准输出显示错误信息。虽然我并不知道有谁将 Mon 脚本挂进一个不同的监视系统中,但是,那样做其实很容易实现。我计划去做的一件事情就是,将来实现 mon 和其它的监视系统如 Nagios 之间的互操作性。
|
||||
|
||||
### 基本监视
|
||||
```
|
||||
ps.monitor tor:1-1 master:1-2 auditd:1-1 cron:1-5 rsyslogd:1-1 dbus-daemon:1- sshd:1- watchdog:1-2
|
||||
```
|
||||
|
||||
我现在计划重写进程监视脚本的某些部分。现在的功能是,在命令行上提供一个进程名字的列表,其中包含相应进程实例的最小和最大数量。上面的示例就是一个监视器的配置。这里存在一些局限:在这个示例中的 “master” 指的是 Postfix 的主进程,但是其它守护进程也使用了相同的进程名(这个名字起得不好,因为它太通用了)。一个显而易见的解决方案是,提供一个指定完整路径的选项,这样,/usr/lib/postfix/sbin/master 就可以与其它命名为 “master” 的程序区分开了。
|
||||
|
||||
下一个问题是那些可能以多个用户身份运行的进程。比如 sshd,它有一个以 root 身份运行的单独进程去接受新的连接请求,以及在每个登入用户的 UID 下运行的进程。因此,以 root 身份运行的 sshd 进程的数量几乎总是多于 root 会话的数量。这意味着如果一个系统管理员直接以 root 身份通过 ssh 登入系统(这是有争议的,但它不是本文的主题 —— 只是有些人需要这样做,所以我们支持),然后负责监听的父进程崩溃了(或者系统管理员意外或者故意杀死了它),这时并不会产生进程丢失的警报。当然,正确的做法是监视 22 号端口,查找字符串 “SSH-2.0-OpenSSH_”。有时候,守护进程的多个实例运行在不同的 UID 下,需要分别监视。因此,我们需要按 UID 监视进程的能力。
|
||||
|
||||
在许多情况下,进程监视可以换成对服务端口的监视。因此,如果监视的是 25 号端口,那么只要运行着 Postfix 的 “master” 就可以了,不用去理会其它的 “master” 进程。但是对于我而言,同时保留进程监视也很方便:如果我收到一条关于无法向某台服务器发送邮件的 Jabber 消息,再看一眼来自那台服务器的 Jabber 消息就能断定是 “master” 没有运行,而不需要挨个排查才能发现问题所在。
|
||||
|
||||
### SE Linux
|
||||
|
||||
我想要的一个功能就是,监视 SE Linux 进程上下文,就像监视 UIDs 一样。虽然我对为其它安全系统编写一个测试不感兴趣,但是,我很乐意将别人写好的代码包含进去。因此,不管我做什么,都希望它能与多个安全系统一起灵活地工作。
|
||||
|
||||
### 短暂进程
|
||||
|
||||
大多数守护进程在启动期间都会有一个同名的次级进程(second process)。这意味着如果你精确地监视一个进程的实例数,当 “logrotate” 或者类似的工具重启守护进程时,你或许会收到一个有两个进程在运行的警报。如果检查恰好发生在重启期间的某个错误时间点,你也或许会收到一个有 0 个实例的警报。我现在处理这种情况的方法是,通过 “alertafter 2” 指令让我的服务器在出现第二次失败事件之前不发出警报。当监视处于失败状态时,“failure_interval” 指令允许指定检查的时间间隔,将其设置为一个较低的值,意味着等待第二次失败结果时告警不会被延迟太多。
|
||||
|
||||
为处理这种情况,我考虑让 ps.monitor 脚本在一个指定的延迟后再次进行自动检查。我认为使用一个单个参数的监视脚本来解决这个问题比起使用两个配置指令的 mon 要好一些。
|
||||
|
||||
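把上面几节的内容串起来,一个 mon 配置片段大致如下(仅为示意草图,hostgroup 名称和告警方式是假设的,具体语法请以 mon/etbemon 的文档为准):

```
# /etc/mon/mon.cf 片段(示意)
hostgroup servers localhost

watch servers
    service processes
        interval 1m
        monitor ps.monitor tor:1-1 master:1-2 sshd:1-
        period wd {Sun-Sat}
            alert mail.alert root@localhost
            alertafter 2          # 第二次失败才告警,容忍守护进程重启时的短暂进程
            failure_interval 30s  # 处于失败状态时以更短的间隔复查
```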
### CPU 使用
|
||||
|
||||
Mon 现在有一个 loadavg.monitor 脚本,用于检查平均负载。但是它并不能捕获一个单个进程使用了太多的 CPU 时间、却没有使系统平均负载上升的情况。同样,它也不能捕获一个渴望 CPU 的进程进入沉默(例如,SETI 服务器停止下发任务)(LCTT 译注:SETI,由加州大学伯克利分校创建的一项利用全球的联网计算机的空闲计算资源来搜寻地外文明的科学实验计划),而另外的进程进入无限循环的情况。解决这种问题的一个方法是,给 ps.monitor 脚本增加另外一个选项去监视 CPU 的使用,但是这也可能会让人产生迷惑。另外的选择是,使用一个独立的脚本,当任何进程在它的生命周期内或者最后几秒内使用 CPU 时间超过指定百分比时就报警,除非它在进程白名单中,或者其属主用户被豁免这种检查。或者每个普通用户都应该被豁免这种检查,因为你压根不知道他们什么时候会合理地运行一个文件压缩程序。此外,还应该有一个被排除的守护进程(像 BOINC)和系统进程(像 gzip,它会被几个定时任务运行)的简短列表。
|
||||
|
||||
### 对例外的监视
|
||||
|
||||
一个常见的编程错误是在 setgid() 之前调用 setuid(),这意味着那个程序没有权限去调用 setgid()。如果没有检查返回代码(而犯这种低级错误的人往往不会去检查返回代码),那么进程会保留较高的组权限。检查以 GID 0 而不是 UID 0 运行的进程是很有用的。顺便说一下,对一个 Debian/Testing 工作站的快速检查显示,有一个使用 GID 0 的进程并没有可以被利用的更高权限,但是用一个 chmod 770 命令就可以改变这一点。
|
||||
|
||||
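这种检查本身用一行管道就可以表达出来。下面是一个最小示意(字段名基于 GNU ps;返回值遵循前文所述的约定:一切正常返回 0,有问题返回 1 并在标准输出中说明):

```
# 列出以 GID 0 运行、但 UID 不是 0 的进程
ps -eo pid,uid,gid,comm --no-headers | \
    awk '$3 == 0 && $2 != 0 { print "GID 0 process:", $0; bad = 1 } END { exit bad }'
```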
在一个 SE Linux 系统上,应该只有一个进程运行在 init_t 域中。但目前在运行 mysqld 和 tor 这类守护进程的系统上并非如此,原因是策略还没有跟上守护进程服务文件所用到的 systemd 最新功能。这样的问题还会不断发生,我们需要对它进行自动化测试。
|
||||
|
||||
对那些影响系统安全的配置错误进行自动化测试是一个更大的话题,我将来或许会就此单独写一篇博客文章。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://etbe.coker.com.au/2017/09/28/process-monitoring/
|
||||
|
||||
作者:[Andrew][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://etbe.coker.com.au
|
||||
[1]:https://doc.coker.com.au/projects/etbe-mon/
|
@ -0,0 +1,88 @@
|
||||
DevOps 接下来会发生什么:观察到的 5 个趋势
|
||||
======
|
||||
|
||||
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Magnifying%20Glass%20Code.png?itok=IqZsJCEH)
|
||||
|
||||
"DevOps" 一词通常认为是来源于 [2008 年关于敏捷基础设施和运营的介绍][1]。现在的 IT 词汇中,它无处不在,这个“混搭”的词汇出现还不到 10 年:我们还在研究它在 IT 中更现代化的工作方法。
|
||||
|
||||
当然,多年来一直在 “从事 DevOps" 的人积累了丰富的知识。但是大多数的 DevOps 环境 —— 人与 [文化][2] 、流程与方法、工具与技术的融合 —— 还远远没有成熟。
|
||||
|
||||
更多的变化即将到来。[Datical][3] 的 CTO Robert Reeves 说:“DevOps 是一个过程,一种算法,它的终极目标就是随着时间不断地改变和演进。”这就是重点。
|
||||
|
||||
那我们预计接下来会发生什么呢?这里有一些专家们观察到的重要趋势。
|
||||
|
||||
### 1. 预计 DevOps、容器、以及微服务之间的相互依赖会增强
|
||||
|
||||
驱动 DevOps 发展的文化本身可能会演进。当然,DevOps 仍然将在根本上打破传统的 IT 孤岛和瓶颈,但这样做的理由可能会变得更加急迫。证据 A 和 B:[对容器和微服务的兴趣][4] 与日俱增。这个技术组合很强大、可持续扩展、与规划和 [持续的管理][5] 配合最佳。
|
||||
|
||||
[Netsil][6] 的产品副总裁 Arvind Soni 说:“影响 DevOps 的其中一个主要因素是向微服务的转变”,同时,容器和编排工具使开发者可以越来越快地打包和交付。DevOps 团队的任务可能是帮助加速打包,并管理越来越复杂的微服务弹性架构。
|
||||
|
||||
### 2. 预计 ”安全网“ 更少
|
||||
|
||||
DevOps 使团队可以更快更敏捷地去构建软件,部署速度也更快更频繁、同时还能提升软件质量和稳定性。但是好的 IT 领导通常都不会忽视管理风险,因此,早期大量的 DevOps 迭代都是使用了安全防护 —— 从后备的不重要业务开始的。为了实现更快的速度和敏捷性,越来越多的团队将抛弃他们的 ”辅助轮“(译者注:意思说减少了安全防护措施)。
|
||||
|
||||
[Retriever Communications][7] 的 CTO Nic Grange 说:“随着团队的成熟,他们会决定不再需要一些早期增加的安全‘防护栏’了。”Grange 给出了一个预演(staging)服务器的例子:随着 DevOps 团队的成熟,他们决定不再需要它了,尤其是当他们很少在试生产环境中发现问题的时候。(Grange 指出,这一举措对于缺乏 DevOps 经验的团队来说,不可轻易效仿。)
|
||||
|
||||
Grange 说:“这个团队可能对自己在生产系统中监视、发现并解决问题的能力有足够的信心”,“如果没有任何证据证明预演环境的价值,部署过程中的这个测试阶段可能只会把整个进度拖慢”。
|
||||
|
||||
### 3. 预计 DevOps 将在其它领域大面积展开
|
||||
|
||||
DevOps 将两个传统的 IT 组(开发和运营)结合的更紧密。越来越多的公司看到了这种结合的好处,这种文化可能会传播开来。这种情况在一些组织中已经出现,在 “DevSecOps” 一词越来越多出现的情况下,它反映出了在软件开发周期中有意地、越来越早地包含了安全性。
|
||||
|
||||
[Sonatype][8] 的副总裁、DevOps 拥护者 Derek Weeks 说:“DevSecOps 不仅是一个工具,它是将安全思维更早地集成到开发实践中。”
|
||||
|
||||
[Red Hat][9] 的安全策略师 Kirsten Newcomer 说,这种做法并不是一个技术挑战,而是一个文化挑战。
|
||||
|
||||
Newcomer 说 "从历史来看,安全团队都是从开发团队中分离出来的 —— 每个团队在它们不同的 IT 领域中形成了各自的专长” ,"它并不需要这种方法。每个关心安全性的企业也关心他们通过软件快速交付业务价值的能力,这些企业正在寻找方法,将安全放进应用程序的开发周期中。它们采用 DevSecOps 通过 CI/CD 流水线去集成安全实践、工具、和自动化。为了做的更好,他们整合他们的团队 —— 将安全专家整合到应用程序开发团队中,参与到从设计到产品部署的全过程中。这种做法使双方都看到了价值 —— 每个团队都扩充了它们的技能和知识,使他们成为更有价值的技术专家。DevOps 做对了—— 或者说是 DevSecOps —— 提升了 IT 安全性。“
|
||||
|
||||
除了安全以外,让 DevOps 扩展到其它领域,比如数据库团队、QA、甚至是 IT 以外的潜在领域。
|
||||
|
||||
Datical 的 Reeves 说 "这是一件非常 DevOps 化的事情:发现相互掣肘的地方并解决它们”,"对于以前采用 DevOps 的企业来说,安全和数据库是他们面临的最大瓶颈。“
|
||||
|
||||
### 4. 预计 ROI 将会增加
|
||||
|
||||
Red Hat 的全球技术传播总监 Eric Schabell 说:“由于公司深入推进他们的 DevOps 工作,IT 团队在方法、流程、容器和微服务方面的投资将得到更多的回报。其终极目标是行动更快、完成更多、变得更灵活。随着这些组件获得更广泛的立足点、组织对应用程序有更强的归属感,成果就会显现出来。”
|
||||
|
||||
"每当新兴技术获得我们的关注时,任何事都有一个令人兴奋的学习曲线,但当认识到它应用很困难的时候,同时也会经历一个从兴奋到幻灭的低谷。最终,我们将开始看到从低谷中爬出来,并收获到应用 DevOps、容器、和微服务的好处。“
|
||||
|
||||
### 5. 预计成功的指标将持续演进
|
||||
|
||||
Mike Kail 说 "我相信 DevOps 文化的两个核心原则 —— 自动化和可衡量是从来不会变的”,它是 [CYBRIC][10] 的 CTO,也是 Yahoo 前 CIO。“总是有办法去自动化一个任务,或者提升一个已经自动化的解决方案,而随着时间的推移,重要的事情是测量可能的变化和扩展。这个成熟的过程是一个永不停步的旅行,而不是一个目的地或者已完成的任务。”
|
||||
|
||||
在 DevOps 的精神中,成熟和学习也与协作者和分享精神有关。Kail 认为,对于敏捷方法和 DevOps 文化来说,它仍然为时尚早,这意味着它们还有足够的增长空间。
|
||||
|
||||
Kail 说 "随着越来越多的成熟组织持续去测量可控指标,我相信(希望) —— 这些经验应该被广泛的分享,以便我们去学习并改善它们。“
|
||||
|
||||
作为 Red Hat 技术传播专家,[Gordon Haff][11] 最近写道,组织正努力把他们的 DevOps 指标与业务成果关联起来。[Haff 写道][12]:“你或许并不真正关心你的开发者写了多少行代码、服务器是否在一夜之间出现了硬件故障、或者你的测试覆盖面是否全面。事实上,你甚至可能并不直接关心你的网站的响应速度和更新快慢。但是你要注意的是,这些指标可能与消费者放弃购物车或者转到你的竞争对手那里有关。”
|
||||
|
||||
与业务成果相关的一些 DevOps 指标的例子包括,消费者交易金额(作为消费者花销统计的指标)和净推荐值(消费者推荐公司产品和服务的意愿)。关于这个主题更多的内容,请查看这篇完整的文章—— [DevOps 指标:你是否测量了重要的东西 ][12]。
|
||||
|
||||
### 唯一不变的就是改变
|
||||
|
||||
顺便说一句,如果你希望这一切能够一蹴而就,那你就要失望了。
|
||||
|
||||
Reeves 说 "如果你认为今天发布非常快,那你就什么也没有看到”,“这就是为什么要让相关者包括数据库团队进入到 DevOps 中的重要原因。因为今天这两组人员的冲突会随着发布速度的提升而呈指数级增长。”
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://enterprisersproject.com/article/2017/10/what-s-next-devops-5-trends-watch
|
||||
|
||||
作者:[Kevin Casey][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://enterprisersproject.com/user/kevin-casey
|
||||
[1]:http://www.jedi.be/presentations/agile-infrastructure-agile-2008.pdf
|
||||
[2]:https://enterprisersproject.com/article/2017/9/5-ways-nurture-devops-culture
|
||||
[3]:https://www.datical.com/
|
||||
[4]:https://enterprisersproject.com/article/2017/9/microservices-and-containers-6-things-know-start-time
|
||||
[5]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul
|
||||
[6]:https://netsil.com/
|
||||
[7]:http://retrievercommunications.com/
|
||||
[8]:https://www.sonatype.com/
|
||||
[9]:https://www.redhat.com/en/
|
||||
[10]:https://www.cybric.io/
|
||||
[11]:https://enterprisersproject.com/user/gordon-haff
|
||||
[12]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters
|
@ -0,0 +1,123 @@
|
||||
# 让 “rm” 命令将文件移动到“垃圾桶”,而不是完全删除它们
|
||||
|
||||
人类会犯错误,因为我们不是可编程的设备,所以,在使用 `rm` 命令时要额外注意,不要在任何时候使用 `rm -rf *`。当你使用 `rm` 命令时,它会永久删除文件,而不会像文件管理器那样将这些文件移动到“垃圾箱”。
|
||||
|
||||
有时我们会将不应该删除的文件删除掉,那么当错误地删除了文件时该怎么办?你只能求助于恢复工具(Linux 中有很多数据恢复工具),但我们不知道它们是否能够百分之百恢复,所以要如何解决这个问题?
|
||||
|
||||
我们最近发表了一篇关于 [Trash-Cli][1] 的文章,在评论部分,我们从用户 Eemil Lgz 那里获得了一个关于 [saferm.sh][2] 脚本的更新,它可以帮助我们将文件移动到“垃圾箱”而不是永久删除它们。
|
||||
|
||||
将文件移动到“垃圾桶”是一个好主意,当你无意中运行了 `rm` 命令时,它可以拯救你;但也有人会说这是一个坏习惯,如果你不留意“垃圾桶”,它可能会在一段时间后被文件和文件夹堆满。在这种情况下,我建议你按照自己的意愿创建一个定时任务来定期清理。
|
||||
|
||||
这适用于服务器和桌面两种环境。 如果脚本检测到 **GNOME 、KDE、Unity 或 LXDE** 桌面环境(DE),则它将文件或文件夹安全地移动到默认垃圾箱 **\$HOME/.local/share/Trash/files**,否则会在您的主目录中创建垃圾箱文件夹 **$HOME/Trash**。
|
||||
|
||||
saferm.sh 脚本托管在 GitHub 上,可以克隆它的仓库,也可以创建一个名为 saferm.sh 的文件并把仓库中的脚本内容复制进去。
|
||||
```
|
||||
$ git clone https://github.com/lagerspetz/linux-stuff
|
||||
$ sudo mv linux-stuff/scripts/saferm.sh /bin
|
||||
$ rm -Rf linux-stuff
|
||||
|
||||
```
|
||||
|
||||
在 `bashrc` 文件中设置别名,
|
||||
|
||||
```
|
||||
alias rm=saferm.sh
|
||||
|
||||
```
|
||||
|
||||
执行下面的命令使其生效,
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
|
||||
```
|
||||
|
||||
一切就绪,现在你可以执行 rm 命令,自动将文件移动到”垃圾桶”,而不是永久删除它们。
|
||||
|
||||
测试一下,我们将删除一个名为 `magi.txt` 的文件,命令行明确提示了 `Moving magi.txt to /home/magi/.local/share/Trash/files`:
|
||||
|
||||
```
|
||||
$ rm -rf magi.txt
|
||||
Moving magi.txt to /home/magi/.local/share/Trash/files
|
||||
|
||||
```
|
||||
|
||||
也可以通过 `ls` 命令或 `trash-cli` 进行验证。
|
||||
|
||||
```
|
||||
$ ls -lh /home/magi/.local/share/Trash/files
|
||||
Permissions Size User Date Modified Name
|
||||
.rw-r--r-- 32 magi 11 Oct 16:24 magi.txt
|
||||
|
||||
```
|
||||
|
||||
或者我们可以通过文件管理器界面中查看相同的内容。
|
||||
|
||||
![![][3]][4]
|
||||
|
||||
创建一个定时任务,每天清理一次“垃圾桶”(LCTT 译注:原文为每周一次,但根据下面的代码,应该是每天一次):
|
||||
|
||||
```
|
||||
$ crontab -e
1 1 * * * trash-empty
|
||||
|
||||
```
|
||||
|
||||
`注意` 对于服务器环境,我们需要使用 rm 命令手动删除。
|
||||
|
||||
```
|
||||
$ rm -rf /root/Trash/
|
||||
/root/Trash/magi1.txt is on . Unsafe delete (y/n)? y
|
||||
Deleting /root/Trash/magi1.txt
|
||||
|
||||
```
|
||||
|
||||
对于桌面环境,trash-put 命令也可以做到这一点。
|
||||
|
||||
在 `bashrc` 文件中创建别名,
|
||||
|
||||
```
|
||||
alias rm=trash-put
|
||||
|
||||
```
|
||||
|
||||
执行下面的命令使其生效。
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
|
||||
```
|
||||
|
||||
要了解 saferm.sh 的其他选项,请查看帮助。
|
||||
|
||||
```
|
||||
$ saferm.sh -h
|
||||
This is saferm.sh 1.16. LXDE and Gnome3 detection.
|
||||
Will ask to unsafe-delete instead of cross-fs move. Allows unsafe (regular rm) delete (ignores trashinfo).
|
||||
Creates trash and trashinfo directories if they do not exist. Handles symbolic link deletion.
|
||||
Does not complain about different user any more.
|
||||
|
||||
Usage: /path/to/saferm.sh [OPTIONS] [--] files and dirs to safely remove
|
||||
OPTIONS:
|
||||
-r allows recursively removing directories.
|
||||
-f Allow deleting special files (devices, ...).
|
||||
-u Unsafe mode, bypass trash and delete files permanently.
|
||||
-v Verbose, prints more messages. Default in this version.
|
||||
-q Quiet mode. Opposite of verbose.
|
||||
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/rm-command-to-move-files-to-trash-can-rm-alias/
|
||||
|
||||
作者:[2DAYGEEK][a]
|
||||
译者:[amwps290](https://github.com/amwps290)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/2daygeek/
|
||||
[1]:https://www.2daygeek.com/trash-cli-command-line-trashcan-linux-system/
|
||||
[2]:https://github.com/lagerspetz/linux-stuff/blob/master/scripts/saferm.sh
|
||||
[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[4]:https://www.2daygeek.com/wp-content/uploads/2017/10/rm-command-to-move-files-to-trash-can-rm-alias-1.png
|
@ -1,88 +0,0 @@
|
||||
# 关于处理器你需要知道的每件事
|
||||
|
||||
[![][b]][b]
|
||||
我们的手机、主机以及笔记本电脑已经变得如此成熟,以至于它们进化成为我们的一部分,而不只是一种设备。

在应用和软件的帮助下,处理器执行许多任务。我们是否曾经想过是什么给了这些软件这样的能力?它们是如何执行它们的逻辑的?它们的大脑在哪?

我们知道 CPU(或称处理器)是那些需要处理数据和执行逻辑任务的设备的大脑。
|
||||
[![cpu image][1]][1]
|
||||
在处理器的深处有哪些不一样的概念呢?它们是如何进化的?一些处理器是如何做到比其他处理器更快的?我们来看看关于处理器的主要术语,以及它们是如何影响处理器速度的。
|
||||
## 架构
|
||||
处理器有不同的架构,你一定遇到过说某些程序是 64 位或 32 位的情况,意思是程序支持特定的处理器架构。

如果一颗处理器是 32 位架构,意味着这颗处理器能够在一个处理周期内处理一个 32 位的数据。同理,64 位的处理器能够在一个周期内处理一个 64 位的数据。

你可以使用的 RAM 大小取决于处理器的架构,可用的 RAM 总量为 2 的处理器位宽次幂。

16 位架构的处理器,只能使用 64 KB 的 RAM;32 位架构的处理器,最大可使用 4 GB 的 RAM;64 位架构的处理器,可用的 RAM 达 16 EB。
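这些数字都来自 2 的幂,可以用 shell 简单验证一下(一个小示例):

```
$ echo $(( 2**16 )) $(( 2**32 ))
65536 4294967296
```

也就是 64 KB 和 4 GB;而 64 位对应的 2^64 字节(16 EB)已经超出了普通 shell 整数运算能直接表示的范围。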
|
||||
## 内核
|
||||
在电脑上,核心是基本的处理单元。核心接收指令并且执行它们。核心越多,速度越快。把核心当成工厂里的工人,越多的工人工作完成得越快。另一方面,工人越多,你所付出的薪水也就越多,工厂也会越拥挤;对应到处理器上,核心越多,消耗的能量越多,也比核心少的 CPU 更容易发热。
|
||||
## 时钟速度
|
||||
[![CPU CLOCK SPEED][2]][2]
|
||||
GHz 是 GigaHertz 的简写,Giga 意思是十亿,Hertz(赫兹)意思是一秒有几个周期。2 GHz 的处理器意味着处理器一秒能够执行 20 亿个周期。

它也以“频率”或者“时钟速度”为人熟知。这项数值越高,CPU 的性能越好。
|
||||
## CPU 缓存
|
||||
CPU 缓存是处理器内部的一小块高速存储单元,用来暂存一些数据。无论我们执行什么任务,数据都需要从 RAM 传递到 CPU。CPU 的工作速度远快于 RAM,所以 CPU 在大多数时间是在等待从 RAM 传递过来的数据,而此时 CPU 处于空闲状态。为了解决这个问题,RAM 持续地向 CPU 缓存发送数据。一般的处理器会有 2 ~ 3 MB 的 CPU 缓存,高端的处理器会有 6 MB 的 CPU 缓存。缓存越大,处理器越好。
|
||||
## 印刷工艺
|
||||
晶体管的大小就是处理器制造的光刻尺寸,通常以纳米计量,尺寸越小意味着集成度越高,可以在更小的面积上容纳更多的核心,消耗的能量也更小。

最新的 Intel 处理器采用 14 nm 的制程工艺。
|
||||
## 热功耗设计
|
||||
热功耗设计(TDP)表示的是当全部核心激活、以基础频率运行 Intel 定义的高复杂度负载时,处理器所散发的最大热量,单位是瓦特。

所以,热功耗设计越低,对你越好。更低的热功耗设计不仅可以更好地利用电池能量,而且产生的热量更少。
|
||||
[![battery][3]][3]
|
||||
桌面版本的处理器通常功耗更高,其热功耗设计往往在 40 瓦以上;相应的移动版本只有桌面版本的三分之一左右。
|
||||
## 内存支持
|
||||
我们已经提到了处理器的架构是如何影响我们能够使用的内存总量的,但这只是理论上的上限。在实际的应用中,可用的 RAM 总量由处理器的规格详细规定,其中也包含所支持的内存的 DDR 版本。
|
||||
[![RAM][4]][4]
|
||||
|
||||
## 超频
|
||||
前面我们讲过时钟速度,超频是强迫 CPU 执行更多周期的做法。游戏玩家经常会对他们的处理器超频,以此来获得更好的性能。这样确实会提高速度,但也会增加消耗的能量,产生更多的热量。

一些高端的处理器允许超频,如果我们想让一个不支持超频的处理器超频,就需要在主板上安装一个新的 BIOS。

这样做有时会成功,但是并不安全,也是不建议的。
|
||||
## 超线程
|
||||
如果为特定的处理器增加物理核心并不合适,那么超线程可以建立虚拟核心。

当我们说一个双核处理器支持超线程时,它就有两个物理核心和两个虚拟核心。从技术上讲,一个双核超线程处理器拥有四个逻辑核心。
|
||||
## 结论
|
||||
处理器有许多相关的参数,它是数字设备中最重要的部件。我们在选择设备时,应该仔细地对照处理器上面提到的这些参数。

时钟速度、核心数、CPU 缓存以及架构是最重要的参数,制程尺寸以及热功耗设计的重要性相对小一些。

仍然有疑惑?欢迎评论,我会尽快回复的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.theitstuff.com/processors-everything-need-know
|
||||
|
||||
作者:[Rishabh Kandari][a]
|
||||
译者:[singledo](https://github.com/singledo)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.theitstuff.com/author/reevkandari
|
||||
[b]:http://www.theitstuff.com/wp-content/uploads/2017/10/processors-all-you-need-to-know.jpg
|
||||
[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/download.jpg
|
||||
[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/download-1.jpg
|
||||
[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/download-2.jpg
|
||||
[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/images.jpg
|
||||
[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/processors-all-you-need-to-know.jpg
|
@ -0,0 +1,72 @@
|
||||
DevOps 如何消除掉 Ranger 社区的瓶颈
|
||||
======
|
||||
![配图](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/traffic-light-go.png?itok=nC_851ys)
|
||||
|
||||
Visual Studio Application Lifecycle Management(ALM)项目 —— [Ranger][1] 是一个志愿者社区,它提供专业的指导、实践经验、以及开发者社区的漏洞修补解决方案。它创建于 2006 年,作为微软的一个内部社区,旨在“将产品组与一线连接起来,并消除推广过程中的障碍”。在 2009 年时,社区已经有超过 200 位成员,这导致了协作和计划面临很大的挑战,在依赖和手工流程上产生了瓶颈,并导致了开发者社区不断增加的延迟和各种抱怨。在 2010 年时,社区进一步扩充,包括了微软最有价值专家(MVP)在内的分布在全球的成员。
|
||||
|
||||
这个社区被分割成十几个活跃的团队。每个团队都致力于通过它的生命周期去设计、构建和支持一个指导或处理项目。在以前,团队的瓶颈在团队管理级别上,原因是严格的、瀑布式的流程和高度依赖一个或多个项目经理。在制作、发布和“为什么、做什么、和怎么做”驱动的决定上,项目经理都要介入其中。另外,缺乏一个实时的指标阻止了团队对他们的解决方案效率的监控,以及对来自社区的关于 bug 和常见问题的关注。
|
||||
|
||||
是时候去寻找一些做好这些事情的方法了,更好地实现开发者社区的价值。
|
||||
|
||||
### DevOps 去“灭火”
|
||||
|
||||
> "DevOps 是人员、流程、和产品的结合,使我们的最终用户能够持续传递价值。" --[Donovan Brown][2]
|
||||
|
||||
为解决这些挑战,社区停止了所有新项目的冲刺,去探索敏捷实践和新产品。为了使社区重新活跃起来,并找到促进自治、掌控和目标的方法(正如 Daniel H. Pink 在《[Drive][3]》一书中所说的那样),社区对僵化的流程和产品进行了彻底的改革。
|
||||
|
||||
> “成熟的自组织、自管理、和跨职能团队,在自治、掌控、和目标上茁壮成长。" --Drive, Daniel H. Pink.
|
||||
|
||||
从文化开始 —— 人 —— 第一步是去拥抱 DevOps。社区实现了 [Scrum][4] 框架,使用 [kanban][5] 去提升工程化流程,并且通过可视化去提升透明度、意识和最重要的东西 —— 信任。使用自组织团队后,传统的等级制度和指挥系统消失了。自管理促使团队去积极监视和设计它们自己的流程。
|
||||
|
||||
在 2010 年 4 月份,社区又实施了另外关键的一步,将它们的文化、流程、以及产品切换并提交到云上。虽然核心的、“源于社区、服务社区”的 [解决方案][6] 仍然以指导和补充材料为主,但社区在开源解决方案(OSS)上大量增加投资,以研究和分享 DevOps 转型的成果。
|
||||
|
||||
持续集成(CI)和持续交付(CD)使用自动化流水线代替了死板的人工流程。这使得团队在不受来自项目经理的干预的情况下为早期问题和早期应用者部署解决方案。增加遥测技术可以使团队关注他们的解决方案,以及在用户注意到它们之前,检测和处理未知的问题。
|
||||
|
||||
DevOps 转变是一个持续进化的过程,通过实验去探索和验证人、流程、和产品的改革。最新的试验引入了流水线革新,它可以持续提升价值流。自动扫描组件、持续地、以及静默地检查安全、协议、和开源组件的品质。部署环和特性标志允许团队对所有或者特定用户进行更细粒度的控制。
|
||||
|
||||
在 2017 年 10 月,社区将大部分的私有版本控制仓库转移到 [GitHub][7] 上。对所有仓库转移所有者和管理职责到 ALM DevOps Rangers 社区,给团队提供自治和机会,去激励更多的社区对开源解决方案作贡献。团队被授权向他们的最终用户交付质量和价值。
|
||||
|
||||
### 好处和成就
|
||||
|
||||
拥抱 DevOps 使 Ranger 社区变得更加敏捷,实现了对市场的快速反应和快速学习和反应的流程,减少了宝贵的时间投入,并宣布自治。
|
||||
|
||||
下面是从这个转变中观察到的一个列表,排列没有特定的顺序:
|
||||
|
||||
* 自治、掌控、和目标是核心。
|
||||
* 从可触摸的和可迭代的东西开始 —— 避免摊子铺的过大。
|
||||
* 可触摸的和可操作的指标很重要 —— 确保不要掺杂其它东西。
|
||||
* 人(文化)的转变是最具挑战的部分。
|
||||
* 没有蓝图;任何一个组织和任何一个团队都是独一无二的。
|
||||
* 转变是一个持续的过程。
|
||||
* 透明和可视非常关键。
|
||||
* 使用工程化流程去强化预期行为。
|
||||
|
||||
|
||||
|
||||
转变前后对比表:

| | 过去 | 现在 | 展望 |
| --- | --- | --- | --- |
| 分支 | 按服务与发布隔离 | 特性分支 | master 分支 |
| 构建 | 手工且易错 | 自动化且一致 | |
| 问题检测 | 等待用户来报告 | 主动遥测 | |
| 问题解决 | 数天到数周 | 数分钟到数天 | 数分钟 |
| 计划 | 详细设计 | 原型与故事板 | |
| 项目管理 | 2 名项目经理(PM) | 0.25 PM | 0.125 PM |
| 发布节奏 | 6 到 12 个月 | 3 到 5 个冲刺 | 每个冲刺 |
| 发布 | 手工且易错 | 自动化且一致 | |
| 冲刺长度 | 1 个月 | 3 周 | |
| 团队规模 | 10 到 15 人 | 2 到 5 人 | |
| 构建时长 | 数小时 | 数秒 | |
| 发布时长 | 数天 | 数分钟 | |
|
||||
|
||||
但是,我们还没有做完,相反,我们就是一个令人兴奋的、持续不断的、几乎从不结束的转变的一部分。
|
||||
|
||||
如果你想了解更多关于我们的转型、有益的经验、以及我们所经历的挑战,请查看《[转变到 DevOps 文化的旅程][8]》。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/11/devops-rangers-transformation
|
||||
|
||||
作者:[Willy Schaub][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/wpschaub
|
||||
[1]:https://aka.ms/vsaraboutus
|
||||
[2]:http://donovanbrown.com/post/what-is-devops
|
||||
[3]:http://www.danpink.com/books/drive/
|
||||
[4]:http://www.scrumguides.org/scrum-guide.html
|
||||
[5]:https://leankit.com/learn/kanban/what-is-kanban/
|
||||
[6]:https://aka.ms/vsarsolutions
|
||||
[7]:https://github.com/ALM-Rangers
|
||||
[8]:https://github.com/ALM-Rangers/Guidance/blob/master/src/Stories/our-journey-of-transforming-to-a-devops-culture.md
|
113
translated/tech/20171214 6 open source home automation tools.md
Normal file
@ -0,0 +1,113 @@
|
||||
6 个开源的家庭自动化工具
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_openlightbulbs.png?itok=nrv9hgnH)
|
||||
|
||||
[物联网][13] 不仅是一个时髦词,在现实中,自 2016 年我们发布了一篇关于家庭自动化工具的评论文章以来,它也在迅速占领着我们的生活。在 2017,[26.5% 的美国家庭][14] 已经使用了一些智能家居技术;预计五年内,这一数字还将翻倍。
|
||||
|
||||
使用数量持续增加的各种设备,可以帮助你实现对家庭的自动化管理、安保、和监视,在家庭自动化方面,从来没有像现在这样容易和更加吸引人过。不论你是要远程控制你的 HVAC 系统,集成一个家庭影院,保护你的家免受盗窃、火灾、或是其它威胁,还是节省能源或只是控制几盏灯,现在都有无数的设备可以帮到你。
|
||||
|
||||
但同时,还有许多用户担心安装在他们家庭中的新设备带来的安全和隐私问题 —— 一个很现实也很 [严肃的问题][15]。他们想要去控制有谁可以接触到这个重要的系统,这个系统管理着他们的应用程序,记录了他们生活中的点点滴滴。这种想法是可以理解的:毕竟在一个连你的冰箱都是智能设备的今天,你不想要一个基本的保证吗?甚至是如果你授权了设备可以与外界通讯,它是否是仅被授权的人能够访问它呢?
|
||||
|
||||
[对安全的担心][16] 是开源对我们将来使用的互联设备至关重要的众多理由之一。要真正搞明白控制着你的家庭的那些程序,你需要能够查看它们的源代码;而由于代码运行在你自己的设备上,必要的话你甚至可以去修改它。
|
||||
|
||||
虽然联网设备通常都包含他们专有的组件,但是将开源引入家庭自动化的第一步是确保你的设备和这些设备可以共同工作 —— 它们为你提供一个接口—— 并且是开源的。幸运的是,现在有许多解决方案可供选择,从 PC 到树莓派,你可以在它们上做任何事情。
|
||||
|
||||
这里有几个我比较喜欢的。
|
||||
|
||||
### Calaos
|
||||
|
||||
[Calaos][17] 是一个设计为全栈家庭自动化的平台,包含一个服务器应用程序、触摸屏接口、Web 应用程序、支持 iOS 和 Android 的原生移动应用、以及一个运行在底层的预配置好的 Linux 操作系统。Calaos 项目出自一个法国公司,因此它的支持论坛以法语为主,不过大量的介绍资料和文档都已经翻译为英语了。
|
||||
|
||||
Calaos 使用的是 [GPL][18] v3 的许可证,你可以在 [GitHub][19] 上查看它的源代码。
|
||||
|
||||
### Domoticz
|
||||
|
||||
[Domoticz][20] 是一个有大量设备库支持的家庭自动化系统,在它的项目网站上有大量的文档,从气象站到远程控制的烟雾探测器,以及大量的第三方 [集成][21] 。它使用一个 HTML5 前端,可以从桌面浏览器或者大多数现代的智能手机上访问它,它是一个轻量级的应用,可以运行在像树莓派这样的低功耗设备上。
|
||||
|
||||
Domoticz 是用 C++ 写的,使用 [GPLv3][22] 许可证。它的 [源代码][23] 在 GitHub 上。
|
||||
|
||||
### Home Assistant
|
||||
|
||||
[Home Assistant][24] 是一个开源的家庭自动化平台,它可以轻松部署在任何能运行 Python 3 的机器上,从树莓派到网络附加存储(NAS),甚至可以使用 Docker 容器轻松地部署到其它系统上。它集成了大量的开源的和商业的产品,允许你去连接它们,比如,IFTTT、天气信息、或者你的 Amazon Echo 设备,去控制从锁到灯的各种硬件。
|
||||
|
||||
Home Assistant 以 [MIT 许可证][25] 发布,它的源代码可以从 [GitHub][26] 上下载。
|
||||
|
||||
### MisterHouse
|
||||
|
||||
从 2016 年起,[MisterHouse][27] 取得了很多的进展,我们把它作为一个“可以考虑的另外选择”列在这个清单上。它使用 Perl 脚本去监视任何东西,它可以通过一台计算机来查询或者控制任何可以远程控制的东西。它可以响应语音命令,查询当前时间、天气、位置、以及其它事件,比如去打开灯、唤醒你、记下你喜欢的电视节目、通报呼入的来电、开门报警、记录你儿子上了多长时间的网、如果你女儿汽车超速它也可以告诉你等等。它可以运行在 Linux、macOS、以及 Windows 计算机上,它可以读/写很多的设备,包括安全系统、气象站、来电显示、路由器、机动车位置系统等等。
|
||||
|
||||
MisterHouse 使用 [GPLv2][28] 许可证,你可以在 [GitHub][29] 上查看它的源代码。
|
||||
|
||||
### OpenHAB
|
||||
|
||||
[OpenHAB][30](开放家庭自动化总线的简称)是在开源爱好者中大家熟知的家庭自动化工具,它拥有大量用户的社区以及支持和集成了大量的设备。它是用 Java 写的,OpenHAB 非常轻便,可以跨大多数主流操作系统使用,它甚至在树莓派上也运行的很好。支持成百上千的设备,OpenHAB 被设计为与设备无关的,这使开发者在系统中添加他们的设备或者插件很容易。OpenHAB 也支持通过 iOS 和 Android 应用来控制设备以及设计工具,因此,你可以为你的家庭系统创建你自己的 UI。
|
||||
|
||||
你可以在 GitHub 上找到 OpenHAB 的 [源代码][31],它使用 [Eclipse 公共许可证][32]。
|
||||
|
||||
### OpenMotics
|
||||
|
||||
[OpenMotics][33] 是一个开源的硬件和软件家庭自动化系统。它的设计目标是为控制设备提供一个综合的系统,而不是从不同的供应商处将各种设备拼接在一起。不像其它主要为了方便改装而设计的系统,OpenMotics 专注于硬件解决方案。更多资料请查阅来自 OpenMotics 的后端开发者 Frederick Ryckbosch 的 [完整文章][34]。
|
||||
|
||||
OpenMotics 使用 [GPLv2][35] 许可证,它的源代码可以从 [GitHub][36] 上下载。
|
||||
|
||||
当然了,我们的选择不仅有这些。许多家庭自动化爱好者使用不同的解决方案,甚至是它们自己动手做。其它用户选择使用单独的智能家庭设备而无需集成它们到一个单一的综合系统中。
|
||||
|
||||
如果上面的解决方案并不能满足你的需求,下面还有一些潜在的替代者可以去考虑:
|
||||
|
||||
* [EventGhost][1] 是一个开源的([GPL v2][2])家庭影院自动化工具,它只能运行在 Microsoft Windows PC 上。它允许用户去控制多媒体电脑和连接的硬件,它通过触发宏指令的插件或者定制的 Python 脚本来使用。
|
||||
* [ioBroker][3] 是一个基于 JavaScript 的物联网平台,它能够控制灯、锁、空调、多媒体、网络摄像头等等。它可以运行在任何可以运行 Node.js 的硬件上,包括 Windows、Linux、以及 macOS,它使用 [MIT 许可证][4]。
|
||||
* [Jeedom][5] 是一个由开源软件([GPL v2][6])构成的家庭自动化平台,它可以控制灯、锁、多媒体等等。它包含一个移动应用程序(Android 和 iOS),并且可以运行在 Linux PC 上;该公司也销售 hubs,它为配置家庭自动化提供一个现成的解决方案。
|
||||
* [LinuxMCE][7] 标称它是你的多媒体与电子设备之间的“数字粘合剂”。它运行在 Linux(包括树莓派)上,它基于 Pluto 开源 [许可证][8] 发布,它可以用于家庭安全、电话(VoIP 和语音信箱)、A/V 设备、家庭自动化、以及玩视频游戏。
|
||||
* [OpenNetHome][9],和这一类中的其它解决方案一样,是一个控制灯、报警、应用程序等等的一个开源软件。它基于 Java 和 Apache Maven,可以运行在 Windows、macOS、以及 Linux —— 包括树莓派,它以 [GPLv3][10] 许可证发布。
|
||||
* [Smarthomatic][11] 是一个专注于硬件设备和软件的开源家庭自动化框架,而不仅是用户接口。它基于 [GPLv3][12] 许可证,它可用于控制灯、电器、以及空调、检测温度、提醒给植物浇水。
|
||||
|
||||
现在该轮到你了:你已经准备好家庭自动化系统了吗?或者正在研究去设计一个。你对家庭自动化的新手有什么建议,你会推荐什么样的系统?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/life/17/12/home-automation-tools
|
||||
|
||||
作者:[Jason Baker][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jason-baker
|
||||
[1]:http://www.eventghost.net/
|
||||
[2]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.html
|
||||
[3]:http://iobroker.net/
|
||||
[4]:https://github.com/ioBroker/ioBroker#license
|
||||
[5]:https://www.jeedom.com/site/en/index.html
|
||||
[6]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.html
|
||||
[7]:http://www.linuxmce.com/
|
||||
[8]:http://wiki.linuxmce.org/index.php/License
|
||||
[9]:http://opennethome.org/
|
||||
[10]:https://github.com/NetHome/NetHomeServer/blob/master/LICENSE
|
||||
[11]:https://www.smarthomatic.org/
|
||||
[12]:https://github.com/breaker27/smarthomatic/blob/develop/GPL3.txt
|
||||
[13]:https://opensource.com/resources/internet-of-things
|
||||
[14]:https://www.statista.com/outlook/279/109/smart-home/united-states
|
||||
[15]:http://www.crn.com/slide-shows/internet-of-things/300089496/black-hat-2017-9-iot-security-threats-to-watch.htm
|
||||
[16]:https://opensource.com/business/15/5/why-open-source-means-stronger-security
|
||||
[17]:https://calaos.fr/en/
|
||||
[18]:https://github.com/calaos/calaos-os/blob/master/LICENSE
|
||||
[19]:https://github.com/calaos
|
||||
[20]:https://domoticz.com/
|
||||
[21]:https://www.domoticz.com/wiki/Integrations_and_Protocols
|
||||
[22]:https://github.com/domoticz/domoticz/blob/master/License.txt
|
||||
[23]:https://github.com/domoticz/domoticz
|
||||
[24]:https://home-assistant.io/
|
||||
[25]:https://github.com/home-assistant/home-assistant/blob/dev/LICENSE.md
|
||||
[26]:https://github.com/balloob/home-assistant
|
||||
[27]:http://misterhouse.sourceforge.net/
|
||||
[28]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
|
||||
[29]:https://github.com/hollie/misterhouse
|
||||
[30]:http://www.openhab.org/
|
||||
[31]:https://github.com/openhab/openhab
|
||||
[32]:https://github.com/openhab/openhab/blob/master/LICENSE.TXT
|
||||
[33]:https://www.openmotics.com/
|
||||
[34]:https://opensource.com/life/14/12/open-source-home-automation-system-opemmotics
|
||||
[35]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
|
||||
[36]:https://github.com/openmotics
|
109
translated/tech/20171214 IPv6 Auto-Configuration in Linux.md
Normal file
@ -0,0 +1,109 @@
|
||||
在 Linux 中自动配置 IPv6 地址
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_5.png?itok=3kN83IjL)
|
||||
|
||||
在 [在 KVM 中测试 IPv6 网络:第 1 部分][1] 一文中,我们学习了关于唯一本地地址(ULAs)的相关内容。在本文中,我们将学习如何为 ULAs 自动配置 IP 地址。
|
||||
|
||||
### 何时使用唯一本地地址
|
||||
|
||||
唯一本地地址使用 fd00::/8 地址块,它类似于我们常用的 IPv4 的私有地址:10.0.0.0/8、172.16.0.0/12、以及 192.168.0.0/16。但它们并不能直接替换。IPv4 的私有地址分类和网络地址转换(NAT)功能是为了缓解 IPv4 地址短缺的问题,这是个明智的解决方案,它延缓了本该被替换的 IPv4 的生命周期。IPv6 也支持 NAT,但是我想不出使用它的理由。IPv6 的地址数量远远大于 IPv4;它是不一样的,因此需要做不一样的事情。
|
||||
|
||||
那么,ULAs 存在的意义是什么呢?尤其是在我们已经有了本地链路地址(fe80::/10)时,到底需不需要我们去配置它们呢?它们之间(译者注:指的是唯一本地地址和本地链路地址)有两个重要的区别。一是,本地链路地址是不可路由的,因此,你不能跨子网使用它。二是,ULAs 是你自己管理的;你可以自己选择它用于子网的地址范围,并且它们是可路由的。
|
||||
|
||||
使用 ULAs 的另一个好处是,如果你只是在局域网中“混日子”的话,你不需要为它们分配全局单播 IPv6 地址。当然了,如果你的 ISP 已经为你分配了 IPv6 的全局单播地址,就不需要使用 ULAs 了。你也可以在同一个网络中混合使用全局单播地址和 ULAs,但是,我想不出这样使用的一个好理由,并且要一定确保你不使用网络地址转换以使 ULAs 可公共访问。在我看来,这是很愚蠢的行为。
|
||||
|
||||
ULAs 是仅为私有网络使用的,并且它会阻塞所有流出你的网络的数据包,不允许进入因特网。这很简单,在你的边界设备上只要阻止整个 fd00::/8 范围的 IPv6 地址即可实现。
|
||||
|
||||
### 地址自动配置
|
||||
|
||||
ULAs 不像本地链路地址那样可以自动配置,但是使用 radvd(路由器通告守护进程)设置自动配置是非常容易的。在你开始之前,运行 `ifconfig` 或者 `ip addr show` 去查看你现有的 IP 地址。
|
||||
|
||||
在生产系统上使用时,你应该将 radvd 安装在一台单独的路由器上,如果只是测试使用,你可以将它安装在你的网络中的任意 Linux PC 上。在我的小型 KVM 测试实验室中,我使用 `apt-get install radvd` 命令把它安装在 Ubuntu 上。安装完成之后,我先不启动它,因为它还没有配置文件:
|
||||
```
|
||||
$ sudo systemctl status radvd
|
||||
● radvd.service - LSB: Router Advertising Daemon
|
||||
Loaded: loaded (/etc/init.d/radvd; bad; vendor preset: enabled)
|
||||
Active: active (exited) since Mon 2017-12-11 20:08:25 PST; 4min 59s ago
|
||||
Docs: man:systemd-sysv-generator(8)
|
||||
|
||||
Dec 11 20:08:25 ubunut1 systemd[1]: Starting LSB: Router Advertising Daemon...
|
||||
Dec 11 20:08:25 ubunut1 radvd[3541]: Starting radvd:
|
||||
Dec 11 20:08:25 ubunut1 radvd[3541]: * /etc/radvd.conf does not exist or is empty.
|
||||
Dec 11 20:08:25 ubunut1 radvd[3541]: * See /usr/share/doc/radvd/README.Debian
|
||||
Dec 11 20:08:25 ubunut1 radvd[3541]: * radvd will *not* be started.
|
||||
Dec 11 20:08:25 ubunut1 systemd[1]: Started LSB: Router Advertising Daemon.
|
||||
|
||||
```
|
||||
|
||||
这些所有的消息有点让人困惑,实际上 radvd 并没有运行,你可以使用经典命令 `ps|grep radvd` 来验证这一点。因此,我们现在需要去创建 `/etc/radvd.conf` 文件。拷贝这个示例,将第一行的网络接口名替换成你自己的接口名字:
|
||||
```
|
||||
interface ens7 {
|
||||
AdvSendAdvert on;
|
||||
MinRtrAdvInterval 3;
|
||||
MaxRtrAdvInterval 10;
|
||||
prefix fd7d:844d:3e17:f3ae::/64
|
||||
{
|
||||
AdvOnLink on;
|
||||
AdvAutonomous on;
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
```
|
||||
|
||||
前缀(prefix)定义了你的网络地址,即地址的前 64 位。前两个字符必须是 `fd`,之后的 40 位是你自己(随机)定义的全局 ID,随后的 16 位用来定义子网,最后的 64 位是主机地址,留空即可,因为 radvd 会分配这最后的 64 位。你的子网前缀必须总是 /64。RFC 4193 要求地址必须随机生成;查看 [在 KVM 中测试 IPv6 网络:第 1 部分][1] 学习创建和管理 ULAs 的更多知识。
|
||||
|
||||
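如果你还没有自己的 ULA 前缀,可以快速生成一个随机的全局 ID(一个示意;下面的输出是随机的,每次都不同):

```
$ openssl rand -hex 5      # 取 5 字节(40 位)随机数作为全局 ID
3e17f3ae98
```

由此组合出前缀 fd3e:17f3:ae98::/48,再加上 16 位子网,比如第一个子网就是 fd3e:17f3:ae98:0001::/64。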
### IPv6 转发
|
||||
|
||||
IPv6 转发必须要启用。下面的命令去启用它,重启后生效:
|
||||
```
|
||||
$ sudo sysctl -w net.ipv6.conf.all.forwarding=1
|
||||
|
||||
```
|
||||
|
||||
取消注释或者添加如下的行到 `/etc/sysctl.conf` 文件中,以使它永久生效:
|
||||
```
|
||||
net.ipv6.conf.all.forwarding = 1
|
||||
```
|
||||
|
||||
启动 radvd 守护程序:
|
||||
```
|
||||
$ sudo systemctl stop radvd
|
||||
$ sudo systemctl start radvd
|
||||
|
||||
```
|
||||
|
||||
这个示例在我的 Ubuntu 测试系统中遇到了一个怪事;radvd 总是停止,我查看它的状态却没有任何问题,做任何改变之后都需要重新启动 radvd。
|
||||
|
||||
启动成功后没有任何输出,失败时也是如此,因此,需要运行 `sudo systemctl status radvd` 去查看它的运行状态。如果有错误,systemctl 会告诉你。一般常见的错误都是 `/etc/radvd.conf` 中的语法错误。
|
||||
|
||||
在 Twitter 上抱怨了上述问题之后,我学到了一个很酷的技巧:当你运行 `journalctl -xe --no-pager` 去调试 systemctl 错误时,输出不会被分页截断,这样你就可以看到完整的错误信息。
|
||||
|
||||
现在检查你的主机,查看它们自动分配的新地址:
|
||||
```
|
||||
$ ifconfig
|
||||
ens7 Link encap:Ethernet HWaddr 52:54:00:57:71:50
|
||||
[...]
|
||||
inet6 addr: fd7d:844d:3e17:f3ae:9808:98d5:bea9:14d9/64 Scope:Global
|
||||
[...]
|
||||
|
||||
```
|
||||
|
||||
本文到此为止,下周继续学习如何为 ULAs 管理 DNS,这样你就可以使用一个合适的主机名来代替这些长长的 IPv6 地址。
|
||||
|
||||
通过来自 Linux 基金会和 edX 的 ["Linux 入门" ][2] 免费课程学习更多 Linux 的知识。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/12/ipv6-auto-configuration-linux
|
||||
|
||||
作者:[Carla Schroder][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/cschroder
|
||||
[1]:https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-1
|
||||
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -1,85 +0,0 @@
|
||||
Translating zjon
|
||||
2017最佳开源教程
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G)
|
||||
|
||||
一个精心编写的教程是任何软件的官方文档的一个很好的补充。 如果官方文件写得不好,不完整或不存在,它也可能是一个有效的选择。
|
||||
|
||||
2017 年,Opensource.com 发布了一些有关各种主题的优秀教程。这些教程不只是针对专家们的,我们的目标读者涵盖了各种技能水平和经验的用户。
|
||||
|
||||
让我们来看看最好的教程。
|
||||
|
||||
### 关于代码
|
||||
|
||||
对许多人来说,他们对开源的第一次涉足涉及为一个项目或另一个项目提供代码。你在哪里学习编码或编程?以下两篇文章是很好的起点。
|
||||
|
||||
严格来说,VM Brasseur 的《[如何开始学习编程][1]》并不是一个教程,但它是新手程序员的一个很好的起点。它不仅指出了一些有助于你起步的优秀资源,而且还就了解你的学习方式和如何选择语言给出了重要建议。
|
||||
|
||||
如果您已经在 [IDE][2] 或文本编辑器中花费了不少时间,那么您可能想学习更多关于编码的不同方法。Fraser Tweedale 的《[函数式编程简介][3]》很好地介绍了这种可以应用到许多主流编程语言中的编程范式。
|
||||
|
||||
### 流行的 Linux
|
||||
|
||||
Linux 是开源的典范。它运行着网络上的大量服务器,为世界顶级的超级计算机提供动力。它还让任何人的台式机都有了专有操作系统之外的选择。
|
||||
|
||||
如果你有兴趣深入Linux,这里有三个教程供你参考。
|
||||
|
||||
Jason Baker 告诉你如何[设置 Linux 的 $PATH 变量][4]。他会引导你掌握这个“对任何 Linux 初学者都很重要的技巧”,让你能够把系统指向包含程序和脚本的目录。
|
||||
|
||||
想拥抱你内心的极客吗?看看 David Both 的《[建立一个 DNS 域名服务器][5]》指南。他详细地记录了如何设置和运行服务器,包括要编辑哪些配置文件以及如何编辑它们。
|
||||
|
||||
想在你的电脑上更复古一点吗?Jim Hall 告诉你如何[在 Linux 下运行 DOS 程序][6]使用 [FreeDOS][7]和 [qemu][8]。Hall 的文章着重于运行 DOS 生产力工具,但并不全是严肃的——他也谈到了运行他最喜欢的 DOS 游戏。
|
||||
|
||||
### 3 个 Pi
|
||||
|
||||
廉价的单板机使硬件再次变得有趣,这并不是秘密。不仅如此,它们使更多的人更容易接近,无论他们的年龄或技术水平如何。
|
||||
|
||||
其中,[树莓派][9]可能是使用最广泛的单板计算机。Ben Nuttall 带我们学习了如何[在树莓派上安装和设置 PostgreSQL 数据库][10]。之后,你就可以在任何你想要的项目中使用它了。
|
||||
|
||||
如果你的品味包括文学和技术,你可能会对 Don Watkins 的[如何将树莓派变成电子书服务器][11]感兴趣。有一点工作和一个 [Calibre 电子书管理软件][12]的副本,你就可以得到你最喜欢的电子书,无论你在哪里。
|
||||
|
||||
树莓派并不是其中唯一的主角,还有 [Orange Pi PC Plus][13],一种开源的单板机。David Egts 带你了解了[如何开始使用这个可编程迷你电脑][14]。
|
||||
|
||||
### 日常计算
|
||||
|
||||
开源并不仅属于技术专家,更多的普通人用它来做日常工作,而且更加高效。这里有三篇文章,让我们这些普通人也可以优雅地(或者不那么优雅地)做任何事情。
|
||||
|
||||
当你想到微博的时候,你可能会想到 Twitter,但是 Twitter 的问题可不少。[Mastodon][15] 是 Twitter 的开放替代方案,它在 2016 年首次亮相。从此,Mastodon 就获得了相当大的用户基数。Seth Kenlon 说明了[如何加入和使用 Mastodon][16],甚至告诉你如何在 Mastodon 和 Twitter 间交叉发布。
|
||||
|
||||
你需要一点帮助来控制开支吗?你所需要的只是一个电子表格和正确的模板。我的文章《[掌控你的财务状况][17]》向你展示了如何用 [LibreOffice Calc][18](或任何其他电子表格编辑器)创建一个简单而有吸引力的财务跟踪表。
|
||||
|
||||
ImageMagick 是强大的图形处理工具,但很多人并不经常使用它,这意味着他们在最需要它的时候常常忘记命令。如果你也是这样,Greg Pittman 的《[ImageMagick 入门教程][19]》能在你需要帮助时派上用场。
|
||||
|
||||
你最喜欢 2017 年 Opensource.com 发布的哪个教程?欢迎留言与社区分享。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/best-tutorials
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
译者:[zjon](https://github.com/zjon)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/scottnesbitt
|
||||
[1]:https://opensource.com/article/17/4/how-get-started-learning-program
|
||||
[2]:https://en.wikipedia.org/wiki/Integrated_development_environment
|
||||
[3]:https://opensource.com/article/17/4/introduction-functional-programming
|
||||
[4]:https://opensource.com/article/17/6/set-path-linux
|
||||
[5]:https://opensource.com/article/17/4/build-your-own-name-server
|
||||
[6]:https://opensource.com/article/17/10/run-dos-applications-linux
|
||||
[7]:http://www.freedos.org/
|
||||
[8]:https://www.qemu.org
|
||||
[9]:https://en.wikipedia.org/wiki/Raspberry_Pi
|
||||
[10]:https://opensource.com/article/17/10/set-postgres-database-your-raspberry-pi
|
||||
[11]:https://opensource.com/article/17/6/raspberrypi-ebook-server
|
||||
[12]:https://calibre-ebook.com/
|
||||
[13]:http://www.orangepi.org/
|
||||
[14]:https://opensource.com/article/17/1/how-to-orange-pi
|
||||
[15]:https://joinmastodon.org/
|
||||
[16]:https://opensource.com/article/17/4/guide-to-mastodon
|
||||
[17]:https://opensource.com/article/17/8/budget-libreoffice-calc
|
||||
[18]:https://www.libreoffice.org/discover/calc/
|
||||
[19]:https://opensource.com/article/17/8/imagemagick
|
||||
|
||||
|
@ -0,0 +1,110 @@
|
||||
如何使用 syslog-ng 从远程 Linux 机器上收集日志
|
||||
======
|
||||
![linuxhero.jpg][1]
|
||||
|
||||
Image: Jack Wallen
|
||||
|
||||
如果你的数据中心全是 Linux 服务器,而你就是系统管理员,那么你的其中一项工作内容就是查看服务器的日志文件。但是,如果你要在大量的机器上查看日志文件,就意味着你需要挨个登入到机器中来阅读日志文件。如果你管理的机器很多,仅这项工作就可能花费你一天的时间。
|
||||
|
||||
另外的选择是,你可以配置一台单独的 Linux 机器去收集这些日志。这将使你的每日工作更加高效。要实现这个目的,有很多的不同系统可供你选择,而 syslog-ng 就是其中之一。
|
||||
|
||||
使用 syslog-ng 的问题是它的文档并不容易梳理。但是,我已经解决了这个问题,可以带你直接完成 syslog-ng 的安装和配置。下面我将在两台 Ubuntu Server 16.04 机器上进行示范:
|
||||
|
||||
* UBUNTUSERVERVM 的 IP 地址是 192.168.1.118 将配置为日志收集器
|
||||
* UBUNTUSERVERVM2 将配置为一个客户端,发送日志文件到收集器
|
||||
|
||||
|
||||
|
||||
现在我们来开始安装和配置。
|
||||
|
||||
## 安装
|
||||
|
||||
安装很简单。为了尽可能容易,我将从标准仓库安装。打开一个终端窗口,运行如下命令:
|
||||
```
|
||||
sudo apt install syslog-ng
|
||||
```
|
||||
|
||||
在作为收集器和客户端的机器上都要运行上面的命令。安装完成之后,你将开始配置。
|
||||
|
||||
## 配置收集器
|
||||
|
||||
现在,我们开始日志收集器的配置。它的配置文件是 `/etc/syslog-ng/syslog-ng.conf`。syslog-ng 安装完成时就已经包含了一个配置文件。我们不使用这个默认的配置文件,可以使用 `mv /etc/syslog-ng/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf.BAK` 将这个自带的默认配置文件重命名。现在使用 `sudo nano /etc/syslog-ng/syslog-ng.conf` 命令创建一个新的配置文件。在这个文件中添加如下的行:
|
||||
```
|
||||
@version: 3.5
|
||||
@include "scl.conf"
|
||||
@include "`scl-root`/system/tty10.conf"
|
||||
options {
|
||||
time-reap(30);
|
||||
mark-freq(10);
|
||||
keep-hostname(yes);
|
||||
};
|
||||
source s_local { system(); internal(); };
|
||||
source s_network {
|
||||
syslog(transport(tcp) port(514));
|
||||
};
|
||||
destination d_local {
|
||||
file("/var/log/syslog-ng/messages_${HOST}"); };
|
||||
destination d_logs {
|
||||
file(
|
||||
"/var/log/syslog-ng/logs.txt"
|
||||
owner("root")
|
||||
group("root")
|
||||
perm(0777)
|
||||
); };
|
||||
log { source(s_local); source(s_network); destination(d_logs); };
|
||||
```
|
||||
|
||||
需要注意的是,syslog-ng 使用 514 端口,你需要确保你的网络上它可以被访问。
|
||||
|
||||
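如果收集器上启用了 Ubuntu 的 UFW 防火墙,可以这样放行该端口(一个示意):

```
sudo ufw allow 514/tcp
```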
保存并关闭这个文件。上面的配置将把所需的日志(来自 system() 和 internal() 源)转储到 `/var/log/syslog-ng/logs.txt` 中。因此,你需要使用如下的命令去创建所需的目录和文件:
|
||||
```
|
||||
sudo mkdir /var/log/syslog-ng
|
||||
sudo touch /var/log/syslog-ng/logs.txt
|
||||
```
|
||||
|
||||
使用如下的命令启动和启用 syslog-ng:
|
||||
```
|
||||
sudo systemctl start syslog-ng
|
||||
sudo systemctl enable syslog-ng
|
||||
```
|
||||
|
||||
## 配置为客户端
|
||||
|
||||
我们将在客户端上做同样的事情(移动默认配置文件并创建新配置文件)。拷贝下列文本到新的客户端配置文件中:
|
||||
```
|
||||
@version: 3.5
|
||||
@include "scl.conf"
|
||||
@include "`scl-root`/system/tty10.conf"
|
||||
source s_local { system(); internal(); };
|
||||
destination d_syslog_tcp {
|
||||
syslog("192.168.1.118" transport("tcp") port(514)); };
|
||||
log { source(s_local);destination(d_syslog_tcp); };
|
||||
```
|
||||
|
||||
请注意:请将 IP 地址修改为收集器的 IP 地址。
|
||||
|
||||
保存和关闭这个文件。与在配置为收集器的机器上一样的方法启动和启用 syslog-ng。
|
||||
|
||||
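在继续之前,可以先在客户端验证转发是否正常(一个示意):用 `logger` 命令写一条测试日志,它会进入本地 syslog,再由上面配置的 system() 源转发到收集器:

```
logger "syslog-ng client test"
```

然后到收集器上运行 `grep 'client test' /var/log/syslog-ng/logs.txt`,应该能看到这条消息。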
## 查看日志文件
|
||||
|
||||
回到你的配置为收集器的服务器上,运行这个命令 `sudo tail -f /var/log/syslog-ng/logs.txt`。你将看到包含了收集器和客户端的日志条目的输出 ( **Figure A** )。
|
||||
|
||||
**Figure A**
|
||||
|
||||
![Figure A][3]
|
||||
|
||||
恭喜你!syslog-ng 已经正常工作了。你现在可以登入到你的收集器上查看本地机器和远程客户端的日志了。如果你的数据中心有很多 Linux 服务器,在每台服务器上都安装上 syslog-ng 并配置它们作为客户端发送日志到收集器,这样你就不需要登入到每个机器去查看它们的日志了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.techrepublic.com/article/how-to-use-syslog-ng-to-collect-logs-from-remote-linux-machines/
|
||||
|
||||
作者:[Jack Wallen][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[1]:https://tr1.cbsistatic.com/hub/i/r/2017/01/11/51204409-68e0-49b8-a637-01af26be85f6/resize/770x/688dfedad4ed30ec4baf548c2adb8cd4/linuxhero.jpg
|
||||
[3]:https://tr4.cbsistatic.com/hub/i/2018/01/09/6a24e5c0-6a29-46d3-8a66-bc72747b5beb/6f94d3e6c6c2121fab6223ed9d8c6aa6/syslognga.jpg
|
@ -0,0 +1,97 @@
|
||||
Partclone - 多功能的分区和克隆免费软件
|
||||
======
|
||||
|
||||
![](https://www.fossmint.com/wp-content/uploads/2018/01/Partclone-Backup-Tool-For-Linux.png)
|
||||
|
||||
**[Partclone][1]** 是由 **Clonezilla** 的开发者开发的、用于创建和克隆分区镜像的自由开源软件。实际上,**Partclone** 正是 **Clonezilla** 所基于的工具之一。
|
||||
|
||||
它为用户提供了只备份和恢复分区中已使用的块的工具,并且与多种文件系统高度兼容,这要归功于它能够使用像 **e2fslibs** 这样的现有库来读取和写入分区,例如 **ext2** 分区。
|
||||
|
||||
它最大的优点是支持各种格式,包括 ext2、ext3、ext4、hfs +、reiserfs、reiser4、btrfs、vmfs3、vmfs5、xfs、jfs、ufs、ntfs、fat(12/16/32)、exfat、f2fs 和 nilfs。
|
||||
|
||||
它还提供了许多程序,包括 **partclone.ext2**(ext3 和 ext4 同理)、partclone.ntfs、partclone.exfat、partclone.hfsp、以及 partclone.vmfs(v3 和 v5)等等。
|
||||
|
||||
### Partclone中的功能
|
||||
|
||||
* **免费软件:** **Partclone**免费供所有人下载和使用。
|
||||
* **开源:** **Partclone**是在 GNU GPL 许可下发布的,并在 [GitHub][2] 上公开。
|
||||
* **跨平台**:适用于 Linux、Windows、MAC、ESX 文件系统备份/恢复和 FreeBSD。
|
||||
* 一个在线的[文档页面][3],你可以从中查看帮助文档并跟踪其 GitHub 问题。
|
||||
* 为初学者和专业人士提供的在线[用户手册][4]。
|
||||
* 支持救援。
|
||||
* 克隆分区成镜像文件。
|
||||
* 将镜像文件恢复到分区。
|
||||
* 快速复制分区。
|
||||
* 支持 raw 克隆。
|
||||
* 显示传输速率和持续时间。
|
||||
* 支持管道。
|
||||
* 支持 crc32。
|
||||
* 支持 ESX vmware server 的 vmfs 和 FreeBSD 的文件系统 ufs。
|
||||
|
||||
|
||||
|
||||
**Partclone** 中还捆绑了更多功能,你可以在[这里][5]查看其余的功能。
|
||||
|
||||
[下载 Linux 中的 Partclone][6]
|
||||
|
||||
### 如何安装和使用 Partclone
|
||||
|
||||
在 Linux 上安装 Partclone。
|
||||
```
|
||||
$ sudo apt install partclone [On Debian/Ubuntu]
|
||||
$ sudo yum install partclone [On CentOS/RHEL/Fedora]
|
||||
|
||||
```
|
||||
|
||||
克隆分区为镜像。
|
||||
```
|
||||
# partclone.ext4 -d -c -s /dev/sda1 -o sda1.img
|
||||
|
||||
```
|
||||
|
||||
将镜像恢复到分区。
|
||||
```
|
||||
# partclone.ext4 -d -r -s sda1.img -o /dev/sda1
|
||||
|
||||
```
|
||||
|
||||
分区到分区克隆。
|
||||
```
|
||||
# partclone.ext4 -d -b -s /dev/sda1 -o /dev/sdb1
|
||||
|
||||
```
|
||||
|
||||
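由于 **Partclone** 支持管道,还可以把镜像边生成边压缩(一个示意草图;这里假设 `-o -` 表示写到标准输出、`-s -` 表示从标准输入读取,实际参数请以手册页为准)。第一条命令克隆并压缩,第二条解压并恢复:

```
# partclone.ext4 -c -s /dev/sda1 -o - | gzip -c > sda1.img.gz
# gunzip -c sda1.img.gz | partclone.ext4 -r -s - -o /dev/sda1
```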
显示镜像信息。
|
||||
```
|
||||
# partclone.info -s sda1.img
|
||||
|
||||
```
|
||||
|
||||
检查镜像。
|
||||
```
|
||||
# partclone.chkimg -s sda1.img
|
||||
|
||||
```
|
||||
|
||||
你是 **Partclone** 的用户吗?我最近写了一篇关于 [**Deepin Clone**][7] 的文章,而显然,有些任务 Partclone 处理得更好。你使用其它备份和恢复工具的体验如何?
|
||||
|
||||
请在下面的评论区与我们分享你的想法和建议。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.fossmint.com/partclone-linux-backup-clone-tool/
|
||||
|
||||
作者:[Martins D. Okoi;View All Posts;Peter Beck;Martins Divine Okoi][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[1]:https://partclone.org/
|
||||
[2]:https://github.com/Thomas-Tsai/partclone
|
||||
[3]:https://partclone.org/help/
|
||||
[4]:https://partclone.org/usage/
|
||||
[5]:https://partclone.org/features/
|
||||
[6]:https://partclone.org/download/
|
||||
[7]:https://www.fossmint.com/deepin-clone-system-backup-restore-for-deepin-users/
|
266
translated/tech/20180116 Monitor your Kubernetes Cluster.md
Normal file
@ -0,0 +1,266 @@
|
||||
监视 Kubernetes 集群
|
||||
======
|
||||
这篇文章最初发表在 [Kevin Monroe 的博客][1] 上
|
||||
|
||||
监视日志和指标状态是集群管理员的重点工作。它的好处很明显:指标能帮你设置一个合理的性能目标,而日志分析可以发现影响你工作负载的问题。然而,困难的是如何找到一个与大量运行的应用程序一起工作的监视解决方案。
|
||||
|
||||
在本文中,我将使用 [Graylog][2] (用于日志)和 [Prometheus][3] (用于指标)去打造一个Kubernetes 集群的监视解决方案。当然了,这不仅是将三个东西连接起来那么简单,实现上,最终结果看起来应该如下图所示:
|
||||
|
||||
![][4]
|
||||
|
||||
正如你所了解的,Kubernetes 不仅做一件事情 —— 它是 master、workers、networking bit 等等。同样,Graylog 是一个配角(apache2、mongodb、等等),Prometheus 也一样(telegraf、grafana 等等)。在部署中连接这些点看起来似乎有些让人恐惧,但是使用合适的工具将不会那么困难。
|
||||
|
||||
我将使用 [conjure-up][5] 和 [Canonical Distribution of Kubernetes][6] (CDK) 去探索 Kubernetes。我发现 conjure-up 接口对部署大型软件很有帮助,但是我知道一些人可能不喜欢 GUIs、TUIs 以及其它 UIs。对于这些人,我将用命令行再去部署一遍。
|
||||
|
||||
在开始之前需要注意的一点是,Graylog 和 Prometheus 是部署在 Kubernetes 侧而不是集群上。像 Kubernetes 仪表盘和 Heapster 是运行的集群中非常好的信息来源,但是我的目标是为日志/指标提供一个分析机制,而不管集群运行与否。
|
||||
|
||||
### 开始探索
|
||||
|
||||
如果你的系统上没有 conjure-up,首先要做的第一件事情是,请先安装它,在 Linux 上,这很简单:
|
||||
```
|
||||
sudo snap install conjure-up --classic
|
||||
```
|
||||
|
||||
对于 macOS 用户也提供了 brew 包:
|
||||
```
|
||||
brew install conjure-up
|
||||
```
|
||||
|
||||
你需要最新的 2.5.2 版,它的好处是添加了 CDK spell,因此,如果你的系统上已经安装了旧的版本,请使用 `sudo snap refresh conjure-up` 或者 `brew update && brew upgrade conjure-up` 去更新它。
|
||||
|
||||
安装完成后,运行它:
|
||||
```
|
||||
conjure-up
|
||||
```
|
||||
|
||||
![][7]
|
||||
|
||||
你将发现有一个 spell 列表。选择 CDK 然后按下 `Enter`。
|
||||
|
||||
![][8]
|
||||
|
||||
这个时候,你将看到 CDK spell 可用的附加组件。我们感兴趣的是 Graylog 和 Prometheus,因此选择这两个,然后点击 `Continue`。
|
||||
|
||||
它将引导你选择各种云,以决定你的集群部署的地方。之后,你将看到一些部署的后续步骤,接下来是回顾屏幕,让你再次确认部署内容:
|
||||
|
||||
![][9]
|
||||
|
||||
除了典型的 K8s 相关的应用程序(etcd、flannel、load-balancer、master、以及 workers)之外,你将看到我们选择的日志和指标相关的额外应用程序。
|
||||
|
||||
Graylog 栈包含如下:
|
||||
|
||||
* apache2:graylog web 接口的反向代理
|
||||
* elasticsearch:日志使用的文档数据库
|
||||
* filebeat:从 K8s master/workers 转发日志到 graylog
|
||||
* graylog:为日志收集器提供一个 api,以及提供一个日志分析界面
|
||||
* mongodb:保存 graylog 元数据的数据库
|
||||
|
||||
|
||||
|
||||
Prometheus 栈包含如下:
|
||||
|
||||
* grafana:指标相关的仪表板的 web 界面
|
||||
* prometheus:指标收集器以及时序数据库
|
||||
* telegraf:发送主机的指标到 prometheus 中
|
||||
|
||||
|
||||
|
||||
你可以在回顾屏幕上微调部署,但是默认组件是必选 的。点击 `Deploy all Remaining Applications` 继续。
|
||||
|
||||
部署工作将花费一些时间,它将部署你的机器和配置你的云。完成后,conjure-up 将展示一个摘要屏幕,它包含一些链连,你可以用你的终端去浏览各种感兴趣的内容:
|
||||
|
||||
![][10]
|
||||
|
||||
#### 浏览日志
|
||||
|
||||
现在,Graylog 已经部署和配置完成,我们可以看一下采集到的一些数据。默认情况下,filebeat 应用程序将从 Kubernetes 的 master 和 workers 中转发系统日志( `/var/log/*.log` )和容器日志(`/var/log/containers/*.log`)到 graylog 中。
|
||||
|
||||
记住如下的 apache2 的地址和 graylog 的 admin 密码:
|
||||
```
|
||||
juju status --format yaml apache2/0 | grep public-address
|
||||
public-address: <your-apache2-ip>
|
||||
juju run-action --wait graylog/0 show-admin-password
|
||||
admin-password: <your-graylog-password>
|
||||
```
|
||||
|
||||
在浏览器中输入 `http://<your-apache2-ip>` ,然后以管理员用户名(admin)和密码(<your-graylog-password>)登入。
|
||||
|
||||
**注意:** 如果这个界面不可用,请等待大约 5 分钟时间,以便于配置的反向代理生效。
|
||||
|
||||
登入后,顶部的 `Sources` 选项卡可以看到从 K8s 的 master 和 workers 中收集日志的概述:
|
||||
|
||||
![][11]
|
||||
|
||||
通过点击 `System / Inputs` 选项卡深入这些日志,选择 `Show received messages` 查看 filebeat 的输入:
|
||||
|
||||
![][12]
|
||||
|
||||
在这里,你可以应用各种过滤或者设置 Graylog 仪表板去帮助识别大多数比较重要的事件。查看 [Graylog Dashboard][13] 文档,可以了解如何定制你的视图的详细资料。
|
||||
|
||||
#### 浏览指标
|
||||
|
||||
我们的部署通过 grafana 仪表板提供了两种类型的指标:系统指标,包括像 K8s master 和 workers 的 cpu/内存/磁盘使用情况,以及集群指标,包括像从 K8s cAdvisor 端点上收集的容器级指标。
|
||||
|
||||
记住如下的 grafana 的地址和 admin 密码:
|
||||
```
|
||||
juju status --format yaml grafana/0 | grep public-address
|
||||
public-address: <your-grafana-ip>
|
||||
juju run-action --wait grafana/0 get-admin-password
|
||||
password: <your-grafana-password>
|
||||
```
|
||||
|
||||
在浏览器中输入 `http://<your-grafana-ip>:3000`,输入管理员用户(admin)和密码(<your-grafana-password>)登入。成功登入后,点击 `Home` 下拉框,选取 `Kubernetes Metrics (via Prometheus)` 去查看集群指标仪表板:
|
||||
|
||||
![][14]
|
||||
|
||||
我们也可以通过下拉框切换到 `Node Metrics (via Telegraf) ` 去查看 K8s 主机的系统指标。
|
||||
|
||||
![][15]
|
||||
|
||||
|
||||
### 另一种方法
|
||||
|
||||
正如在文章开始的介绍中提到的,我喜欢 conjure-up 的 “魔法之杖” 去指导我完成像 Kubernetes 这种复杂软件的部署。现在,我们来看一下 conjure-up 的另一种方法,你可能希望去看到实现相同结果的一些命令行的方法。还有其它的可能已经部署了前面的 CDK,想去扩展使用上述的 Graylog/Prometheus 组件。不管什么原因你既然看到这了,既来之则安之,继续向下看吧。
|
||||
|
||||
支持 conjure-up 的工具是 [Juju][16]。CDK spell 所做的一切,都可以使用 juju 命令行来完成。我们来看一下,如何一步步完成这些工作。
|
||||
|
||||
**从 Scratch 中启动**
|
||||
|
||||
如果你使用的是 Linux,安装 Juju 很简单,命令如下:
|
||||
```
|
||||
sudo snap install juju --classic
|
||||
```
|
||||
|
||||
对于 macOS,Juju 也可以从 brew 中安装:
|
||||
```
|
||||
brew install juju
|
||||
```
|
||||
|
||||
现在为你选择的云配置一个控制器。你或许会被提示请求一个凭据(用户名密码):
|
||||
```
|
||||
juju bootstrap
|
||||
```
|
||||
|
||||
我们接下来需要基于 CDK 捆绑部署:
|
||||
```
|
||||
juju deploy canonical-kubernetes
|
||||
```
|
||||
|
||||
**从 CDK 开始**
|
||||
|
||||
使用我们部署的 Kubernetes 集群,我们需要去添加 Graylog 和 Prometheus 所需要的全部应用程序:
|
||||
```
|
||||
## deploy graylog-related applications
|
||||
juju deploy xenial/apache2
|
||||
juju deploy xenial/elasticsearch
|
||||
juju deploy xenial/filebeat
|
||||
juju deploy xenial/graylog
|
||||
juju deploy xenial/mongodb
|
||||
```
|
||||
```
|
||||
## deploy prometheus-related applications
|
||||
juju deploy xenial/grafana
|
||||
juju deploy xenial/prometheus
|
||||
juju deploy xenial/telegraf
|
||||
```
|
||||
|
||||
现在软件已经部署完毕,将它们连接到一起,以便于它们之间可以相互通讯:
|
||||
```
|
||||
## relate graylog applications
|
||||
juju relate apache2:reverseproxy graylog:website
|
||||
juju relate graylog:elasticsearch elasticsearch:client
|
||||
juju relate graylog:mongodb mongodb:database
|
||||
juju relate filebeat:beats-host kubernetes-master:juju-info
|
||||
juju relate filebeat:beats-host kubernetes-worker:jujuu-info
|
||||
```
|
||||
```
|
||||
## relate prometheus applications
|
||||
juju relate prometheus:grafana-source grafana:grafana-source
|
||||
juju relate telegraf:prometheus-client prometheus:target
|
||||
juju relate kubernetes-master:juju-info telegraf:juju-info
|
||||
juju relate kubernetes-worker:juju-info telegraf:juju-info
|
||||
```
|
||||
|
||||
这个时候,所有的应用程序已经可以相互之间进行通讯了,但是我们还需要多做一点配置(比如,配置 apache2 反向代理、告诉 prometheus 如何从 K8s 中取数、导入到 grafana 仪表板等等):
|
||||
```
|
||||
## configure graylog applications
|
||||
juju config apache2 enable_modules="headers proxy_html proxy_http"
|
||||
juju config apache2 vhost_http_template="$(base64 <vhost-tmpl>)"
|
||||
juju config elasticsearch firewall_enabled="false"
|
||||
juju config filebeat \
|
||||
logpath="/var/log/*.log /var/log/containers/*.log"
|
||||
juju config filebeat logstash_hosts="<graylog-ip>:5044"
|
||||
juju config graylog elasticsearch_cluster_name="<es-cluster>"
|
||||
```
|
||||
```
|
||||
## configure prometheus applications
|
||||
juju config prometheus scrape-jobs="<scraper-yaml>"
|
||||
juju run-action --wait grafana/0 import-dashboard \
|
||||
dashboard="$(base64 <dashboard-json>)"
|
||||
```
|
||||
|
||||
以上的步骤需要根据你的部署来指定一些值。你可以用与 conjure-up 相同的方法得到这些:
|
||||
|
||||
* <vhost-tmpl>: 从 github 获取我们的示例 [模板][17]
|
||||
* <graylog-ip>: `juju run --unit graylog/0 'unit-get private-address'`
|
||||
* <es-cluster>: `juju config elasticsearch cluster-name`
|
||||
* <scraper-yaml>: 从 github 获取我们的示例 [scraper][18] ;`[K8S_PASSWORD][20]` 和 `[K8S_API_ENDPOINT][21]` [substitute][19] 的正确值
|
||||
* <dashboard-json>: 从 github 获取我们的 [主机][22] 和 [k8s][23] 仪表板
|
||||
|
||||
|
||||
|
||||
最后,发布 apache2 和 grafana 应用程序,以便于可以通过它们的 web 界面访问:
|
||||
```
|
||||
## expose relevant endpoints
|
||||
juju expose apache2
|
||||
juju expose grafana
|
||||
```
|
||||
|
||||
现在我们已经完成了所有的部署、配置、和发布工作,你可以使用与上面的**浏览日志**和**浏览指标**部分相同的方法去查看它们。
|
||||
|
||||
### 总结
|
||||
|
||||
我的目标是向你展示如何去部署一个 Kubernetes 集群,很方便地去监视它的日志和指标。无论你是喜欢向导的方式还是命令行的方式,我希望你清楚地看到部署一个监视系统并不复杂。关键是要搞清楚所有部分是如何工作的,并将它们连接到一起工作,通过断开/修复/重复的方式,直到它们每一个都能正常工作。
|
||||
|
||||
这里有一些非常好的工具像 conjure-up 和 Juju。充分发挥这个生态系统贡献者的专长让管理大型软件变得更容易。从一套可靠的应用程序开始,按需定制,然后投入到工作中!
|
||||
|
||||
大胆去尝试吧,然后告诉我你用的如何。你可以在 Freenode IRC 的 **#conjure-up** 和 **#juju** 中找到像我这样的爱好者。感谢阅读!
|
||||
|
||||
### 关于作者
|
||||
|
||||
Kevin 在 2014 年加入 Canonical 公司,他专注于复杂软件建模。他在 Juju 大型软件团队中找到了自己的位置,他的任务是将大数据和机器学习应用程序转化成可重复的(可靠的)解决方案。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://insights.ubuntu.com/2018/01/16/monitor-your-kubernetes-cluster/
|
||||
|
||||
作者:[Kevin Monroe][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://insights.ubuntu.com/author/kwmonroe/
|
||||
[1]:https://medium.com/@kwmonroe/monitor-your-kubernetes-cluster-a856d2603ec3
|
||||
[2]:https://www.graylog.org/
|
||||
[3]:https://prometheus.io/
|
||||
[4]:https://insights.ubuntu.com/wp-content/uploads/706b/1_TAA57DGVDpe9KHIzOirrBA.png
|
||||
[5]:https://conjure-up.io/
|
||||
[6]:https://jujucharms.com/canonical-kubernetes
|
||||
[7]:https://insights.ubuntu.com/wp-content/uploads/98fd/1_o0UmYzYkFiHIs2sBgj7G9A.png
|
||||
[8]:https://insights.ubuntu.com/wp-content/uploads/0351/1_pgVaO_ZlalrjvYd5pOMJMA.png
|
||||
[9]:https://insights.ubuntu.com/wp-content/uploads/9977/1_WXKxMlml2DWA5Kj6wW9oXQ.png
|
||||
[10]:https://insights.ubuntu.com/wp-content/uploads/8588/1_NWq7u6g6UAzyFxtbM-ipqg.png
|
||||
[11]:https://insights.ubuntu.com/wp-content/uploads/a1c3/1_hHK5mSrRJQi6A6u0yPSGOA.png
|
||||
[12]:https://insights.ubuntu.com/wp-content/uploads/937f/1_cP36lpmSwlsPXJyDUpFluQ.png
|
||||
[13]:http://docs.graylog.org/en/2.3/pages/dashboards.html
|
||||
[14]:https://insights.ubuntu.com/wp-content/uploads/9256/1_kskust3AOImIh18QxQPgRw.png
|
||||
[15]:https://insights.ubuntu.com/wp-content/uploads/2037/1_qJpjPOTGMQbjFY5-cZsYrQ.png
|
||||
[16]:https://jujucharms.com/
|
||||
[17]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/graylog/steps/01_install-graylog/graylog-vhost.tmpl
|
||||
[18]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/prometheus-scrape-k8s.yaml
|
||||
[19]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L25
|
||||
[20]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L10
|
||||
[21]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L11
|
||||
[22]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-telegraf.json
|
||||
[23]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-k8s.json
|
@ -0,0 +1,107 @@
|
||||
SPARTA —— 用于网络渗透测试的 GUI 工具套件
|
||||
======
|
||||
|
||||
![](https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/GjWDZ1516079830.png?resize=696%2C379&ssl=1)
|
||||
|
||||
SPARTA 是使用 Python 开发的 GUI 应用程序,它是 Kali Linux 内置的网络渗透测试工具。它简化了扫描和枚举阶段,并能更快速地得到结果。
|
||||
|
||||
SPARTA GUI 工具套件最擅长的事情是扫描和发现目标端口和运行的服务。
|
||||
|
||||
因此,作为枚举阶段的一部分,它提供了对开放端口和服务进行暴力破解的功能。
|
||||
|
||||
|
||||
延伸阅读:[网络渗透检查清单][1]
|
||||
|
||||
## 安装
|
||||
|
||||
请从 GitHub 上克隆最新版本的 SPARTA:
|
||||
|
||||
```
|
||||
git clone https://github.com/secforce/sparta.git
|
||||
```
|
||||
|
||||
或者,从 [这里][2] 下载最新版本的 Zip 文件。
|
||||
```
|
||||
cd /usr/share/
|
||||
git clone https://github.com/secforce/sparta.git
|
||||
```
|
||||
将 `sparta` 文件放到 /usr/bin/ 目录下并赋予可执行权限。
|
||||
在任意终端中输入 `sparta` 来启动应用程序。下面给出一个假设性的做法示意。
|
||||
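上面“放到 /usr/bin/ 并赋予权限”这一步的一种假设性做法如下(具体入口文件名以仓库实际内容为准):

```
## 假设仓库已克隆到 /usr/share/sparta,且入口脚本为 sparta.py
sudo ln -s /usr/share/sparta/sparta.py /usr/bin/sparta
sudo chmod +x /usr/bin/sparta
```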
|
||||
|
||||
## 网络渗透测试的范围:
|
||||
|
||||
* 添加一个目标主机或者目标主机的列表到范围中,来发现一个组织的网络基础设施在安全方面的薄弱环节。
|
||||
* 选择菜单条 - File > Add host(s) to scope
|
||||
|
||||
|
||||
|
||||
[![Network Penetration Testing][3]][4]
|
||||
|
||||
[![Network Penetration Testing][5]][6]
|
||||
|
||||
* 上图展示了在扫描范围中添加 IP 地址。根据你网络的具体情况,你可以添加一个 IP 地址的范围去扫描。
|
||||
* 扫描范围添加之后,Nmap 将开始扫描,并很快得到结果,扫描阶段结束。
|
||||
|
||||
|
||||
|
||||
## 开放的端口和服务:
|
||||
|
||||
* Nmap 扫描结果提供了目标上开放的端口和服务。
|
||||
|
||||
|
||||
|
||||
[![Network Penetration Testing][7]][8]
|
||||
|
||||
* 上图展示了扫描发现的目标操作系统、开放的端口和服务。
|
||||
|
||||
|
||||
|
||||
## 在开放端口上实施暴力攻击:
|
||||
|
||||
* 我们来通过 445 端口的服务器消息块(SMB)协议,暴力破解用户列表和它们的有效密码。
|
||||
|
||||
|
||||
|
||||
[![Network Penetration Testing][9]][10]
|
||||
|
||||
* 右键并选择 “Send to Brute” 选项。也可以选择发现的目标上的开放端口和服务。
|
||||
* 浏览和在用户名密码框中添加字典文件。
|
||||
|
||||
|
||||
|
||||
[![Network Penetration Testing][11]][12]
|
||||
|
||||
* 点击 “Run” 去启动对目标的暴力攻击。上图展示了对目标 IP 地址进行的暴力攻击取得成功,找到了有效的密码。
|
||||
* 在 Windows 中,失败的登录尝试总是会被记录到事件日志中。
|
||||
* 密码每 15 到 30 天改变一次的策略是非常好的一个实践经验。
|
||||
* 强烈建议使用强密码策略。密码锁定策略是阻止这种暴力攻击的最佳方法之一(5 次失败的登录尝试之后将锁定帐户)。
|
||||
* 将关键业务资产整合到 SIEM(安全信息和事件管理)系统中,可以尽可能快地检测到这类攻击行为。
|
||||
|
||||
|
||||
|
||||
SPARTA 对渗透测试的扫描和枚举阶段来说是一个非常省时的 GUI 工具套件。SPARTA 可以扫描和暴力破解各种协议。它有许多的功能!祝你测试顺利!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://gbhackers.com/sparta-network-penetration-testing-gui-toolkit/
|
||||
|
||||
作者:[Balaganesh][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://gbhackers.com/author/balaganesh/
|
||||
[1]:https://gbhackers.com/network-penetration-testing-checklist-examples/
|
||||
[2]:https://github.com/SECFORCE/sparta/archive/master.zip
|
||||
[3]:https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-526.png?resize=696%2C495&ssl=1
|
||||
[4]:https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-526.png?ssl=1
|
||||
[5]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-527.png?resize=696%2C516&ssl=1
|
||||
[6]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-527.png?ssl=1
|
||||
[7]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-528.png?resize=696%2C519&ssl=1
|
||||
[8]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-528.png?ssl=1
|
||||
[9]:https://i1.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-529.png?resize=696%2C525&ssl=1
|
||||
[10]:https://i1.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-529.png?ssl=1
|
||||
[11]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-531.png?resize=696%2C523&ssl=1
|
||||
[12]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-531.png?ssl=1
|
@ -0,0 +1,169 @@
|
||||
构建你自己的 RSS 提示系统——让杂志文章一篇也不会错过
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/01/learn-python-rss-notifier.png-945x400.jpg)
|
||||
|
||||
人生苦短,我用 Python,Python 是非常棒的快速构建应用程序的编程语言。在这篇文章中我们将学习如何使用 Python 去构建一个 RSS 提示系统,目标是使用 Fedora 快乐地学习 Python。如果你正在寻找一个完整的 RSS 提示应用程序,在 Fedora 中已经准备好了几个包。
|
||||
|
||||
### Fedora 和 Python —— 入门知识
|
||||
|
||||
Python 3.6 在 Fedora 中是默认安装的,它包含了 Python 的很多标准库。标准库提供了一些可以让我们的任务更加简单完成的模块的集合。例如,在我们的案例中,我们将使用 [**sqlite3**][1] 模块在数据库中去创建表、添加和读取数据。如果我们要解决的具体问题不在标准库的覆盖范围内,也很可能已经有人为我们开发了这样一个模块。最好是到大家熟知的 [PyPI][2](Python 包索引)中去搜索一下。在我们的示例中,我们将使用 [**feedparser**][3] 去解析 RSS 源。
|
||||
|
||||
因为 **feedparser** 并不是标准库,我们需要将它安装到我们的系统上。幸运的是,在 Fedora 中有这个 RPM 包,因此,我们可以运行如下的命令去安装 **feedparser**:
|
||||
```
|
||||
$ sudo dnf install python3-feedparser
|
||||
```
|
||||
|
||||
我们现在已经拥有了编写我们的应用程序所需的东西了。
|
||||
|
||||
### 存储源数据
|
||||
|
||||
我们需要存储已经发布的文章的数据,这样我们的系统就可以只提示新发布的文章。我们要保存的数据将是用来辨别一篇文章的唯一方法。因此,我们将存储文章的**标题**和**发布日期**。
|
||||
|
||||
因此,我们来使用 Python **sqlite3** 模块和一个简单的 SQL 语句来创建我们的数据库。同时也导入一些后面将要用到的模块(**feedparser**、**smtplib** 和 **email**)。
|
||||
|
||||
#### 创建数据库
|
||||
```
|
||||
#!/usr/bin/python3
|
||||
import sqlite3
|
||||
import smtplib
|
||||
from email.mime.text import MIMEText
|
||||
|
||||
import feedparser
|
||||
|
||||
db_connection = sqlite3.connect('/var/tmp/magazine_rss.sqlite')
|
||||
db = db_connection.cursor()
|
||||
db.execute(' CREATE TABLE IF NOT EXISTS magazine (title TEXT, date TEXT)')
|
||||
|
||||
```
|
||||
|
||||
这几行代码创建一个新的 sqlite 数据库,保存在名为 'magazine_rss.sqlite' 的文件中,然后在数据库中创建一个名为 'magazine' 的新表。这个表有两个列 —— 'title' 和 'date' —— 它们能存储 TEXT 类型的数据,也就是说每个列的值都是文本字符。
|
||||
|
||||
#### 检查数据库中的旧文章
|
||||
|
||||
由于我们仅希望把新的文章增加到我们的数据库中,因此我们需要一个函数去检查 RSS 源中的文章在数据库中是否已经存在。我们将根据它来判断是否发送(有新文章的)邮件提示。好,现在我们来写这个函数的代码。
|
||||
```
|
||||
def article_is_not_db(article_title, article_date):
|
||||
""" Check if a given pair of article title and date
|
||||
is in the database.
|
||||
Args:
|
||||
article_title (str): The title of an article
|
||||
article_date (str): The publication date of an article
|
||||
Return:
|
||||
True if the article is not in the database
|
||||
False if the article is already present in the database
|
||||
"""
|
||||
db.execute("SELECT * from magazine WHERE title=? AND date=?", (article_title, article_date))
|
||||
if not db.fetchall():
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
```
|
||||
|
||||
这个函数的主要部分是一个 SQL 查询,我们运行它去搜索数据库。我们使用 SELECT 命令去定义将要在哪个列上运行这个查询。我们使用 `*` 符号去选取所有列(title 和 date)。然后,我们在查询的 WHERE 条件中用 `article_title` 和 `article_date` 去匹配标题和日期列中的值,以检索出我们需要的内容。
|
||||
|
||||
最后,我们使用一个简单的返回 `True` 或者 `False` 的逻辑来表示是否在数据库中找到匹配的文章。
|
||||
|
||||
#### 在数据库中添加新文章
|
||||
|
||||
现在我们可以写一些代码去添加新文章到数据库中。
|
||||
```
|
||||
def add_article_to_db(article_title, article_date):
|
||||
""" Add a new article title and date to the database
|
||||
Args:
|
||||
article_title (str): The title of an article
|
||||
article_date (str): The publication date of an article
|
||||
"""
|
||||
db.execute("INSERT INTO magazine VALUES (?,?)", (article_title, article_date))
|
||||
db_connection.commit()
|
||||
```
|
||||
|
||||
这个函数很简单,我们使用了一个 SQL 查询,向 'magazine' 表的 article_title 和 article_date 列插入一个新行,然后提交它到数据库中永久保存。
|
||||
|
||||
这些就是在数据库中所需要的东西,接下来我们看一下,如何使用 Python 实现提示系统和发送电子邮件。
|
||||
|
||||
### 发送电子邮件提示
|
||||
|
||||
我们来使用 Python 标准库模块 **smtplib** 来创建一个发送电子邮件的功能。我们也可以使用标准库中的 **email** 模块去格式化我们的电子邮件信息。
|
||||
```
|
||||
def send_notification(article_title, article_url):
""" Send an email notification about a new article
|
||||
|
||||
Args:
|
||||
article_title (str): The title of an article
|
||||
article_url (str): The url to access the article
|
||||
"""
|
||||
|
||||
smtp_server = smtplib.SMTP('smtp.gmail.com', 587)
|
||||
smtp_server.ehlo()
|
||||
smtp_server.starttls()
|
||||
smtp_server.login('your_email@gmail.com', '123your_password')
|
||||
msg = MIMEText(f'\nHi there is a new Fedora Magazine article : {article_title}. \nYou can read it here {article_url}')
|
||||
msg['Subject'] = 'New Fedora Magazine Article Available'
|
||||
msg['From'] = 'your_email@gmail.com'
|
||||
msg['To'] = 'destination_email@gmail.com'
|
||||
smtp_server.send_message(msg)
|
||||
smtp_server.quit()
|
||||
```
|
||||
|
||||
在这个示例中,我使用了谷歌邮件系统的 smtp 服务器去发送电子邮件,在你自己的代码中,你需要将它更改为你自己的电子邮件服务提供者的 SMTP 服务器。这个函数是个样板,大多数的内容要根据你的 smtp 服务器的参数来配置。代码中的电子邮件地址和凭证也要更改为你自己的。
|
||||
|
||||
如果你的 Gmail 帐户启用了双因子认证,那么你需要配置一个应用专用密码,为这个脚本提供一个唯一的密码。可以看这个 [帮助页面][4]。
|
||||
|
||||
### 读取 Fedora Magazine 的 RSS 源
|
||||
|
||||
我们已经有了在数据库中存储文章和发送提示电子邮件的功能,现在来创建一个解析 Fedora Magazine RSS 源并提取文章数据的功能。
|
||||
```
|
||||
def read_article_feed():
|
||||
""" Get articles from RSS feed """
|
||||
feed = feedparser.parse('https://fedoramagazine.org/feed/')
|
||||
for article in feed['entries']:
|
||||
if article_is_not_db(article['title'], article['published']):
|
||||
send_notification(article['title'], article['link'])
|
||||
add_article_to_db(article['title'], article['published'])
|
||||
|
||||
if __name__ == '__main__':
|
||||
read_article_feed()
|
||||
db_connection.close()
|
||||
```
|
||||
|
||||
在这里我们将使用 **feedparser.parse** 函数。这个函数返回一个用字典表示的 RSS 源,对于 **feedparser** 的完整描述可以参考它的 [文档][5]。
|
||||
|
||||
RSS 源解析将返回最后的 10 篇文章作为 `entries`,然后我们提取以下信息:标题、链接、文章发布日期。因此,我们现在可以使用前面定义的函数检查文章是否已在数据库中,然后发送提示电子邮件,并将这篇文章添加到数据库中。想快速查看这些字段,可以参考下面的小示例。
|
||||
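下面是一个最小示例(字段名来自上文的脚本),用来直观查看每个条目包含的内容:

```
import feedparser

# 解析 RSS 源并打印第一篇文章的标题、发布日期和链接
feed = feedparser.parse('https://fedoramagazine.org/feed/')
entry = feed['entries'][0]
print(entry['title'], entry['published'], entry['link'])
```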
|
||||
当运行我们的脚本时,最后的 if 语句运行我们的 `read_article_feed` 函数,然后关闭数据库连接。
|
||||
|
||||
### 运行我们的脚本
|
||||
|
||||
给脚本文件赋予正确的运行权限。接下来,我们使用 **cron** 实用程序每小时自动运行一次我们的脚本。**cron** 是一个作业调度程序,我们可以使用它在固定的时间去运行任务。
|
||||
```
|
||||
$ chmod a+x my_rss_notifier.py
|
||||
$ sudo cp my_rss_notifier.py /etc/cron.hourly
|
||||
```
|
||||
|
||||
**为了使该教程保持简单**,我们使用了 cron.hourly 目录每小时运行一次我们的脚本,如果你想学习关于 **cron** 的更多知识以及如何配置 **crontab**,请阅读 **cron** 的 wikipedia [页面][6]。
|
||||
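如果你更喜欢用 **crontab** 来配置,下面是一个等效的假设示例(脚本路径需要换成你自己的):

```
## 运行 crontab -e,添加如下一行:每小时的第 0 分钟运行一次脚本
0 * * * * /home/user/my_rss_notifier.py
```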
|
||||
### 总结
|
||||
|
||||
在本教程中,我们学习了如何使用 Python 去创建一个简单的 sqlite 数据库、解析一个 RSS 源、以及发送电子邮件。我希望通过这篇文章能够向你展示,**使用 Python 和 Fedora 构建你自己的应用程序是件多么容易的事**。
|
||||
|
||||
这个脚本在 [GitHub][7] 上可以找到。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/never-miss-magazines-article-build-rss-notification-system/
|
||||
|
||||
作者:[Clément Verna][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org
|
||||
[1]:https://docs.python.org/3/library/sqlite3.html
|
||||
[2]:https://pypi.python.org/pypi
|
||||
[3]:https://pypi.python.org/pypi/feedparser/5.2.1
|
||||
[4]:https://support.google.com/accounts/answer/185833?hl=en
|
||||
[5]:https://pythonhosted.org/feedparser/reference.html
|
||||
[6]:https://en.wikipedia.org/wiki/Cron
|
||||
[7]:https://github.com/cverna/rss_feed_notifier
|
@ -1,123 +0,0 @@
|
||||
为初学者介绍 Linux whereis 命令(5 个例子)
|
||||
======
|
||||
|
||||
有时,在使用命令行的时候,我们需要快速找到某一个命令的二进制文件所在位置。这种情况下可以选择 [find][1] 命令,但使用它会耗费时间,可能也会出现意料之外的情况。有一个专门为这种情况设计的命令:**whereis**。
|
||||
|
||||
|
||||
在这篇文章里,我们会通过一些便于理解的例子来解释这一命令的基础内容。但在这之前,值得说明的一点是,下面出现的所有例子都在 Ubuntu 16.04 LTS 下测试过。
|
||||
|
||||
|
||||
|
||||
### Linux whereis 命令
|
||||
|
||||
whereis 命令可以帮助用户寻找某一命令的二进制文件,源码以及帮助页面。下面是它的格式:
|
||||
|
||||
```
|
||||
whereis [options] [-BMS directory... -f] name...
|
||||
```
|
||||
|
||||
这是这一命令的 man 页面给出的解释:
|
||||
|
||||
```
|
||||
|
||||
whereis 可以查找指定命令的二进制文件、源文件和帮助文件。被找到的文件在显示时,会去掉前导的路径名,然后再去掉文件的扩展名(如 .c),来源于源代码管理的 s. 前缀也会被去掉。接下来,whereis 会尝试在标准的 Linux 存放位置里寻找具体程序,也会在由 $PATH 和 $MANPATH 指定的路径中寻找。
|
||||
```
|
||||
|
||||
下面这些以 Q&A 形式出现的例子,可以给你一个关于如何使用 whereis 命令的直观感受。
|
||||
|
||||
|
||||
### Q1. 如何用 whereis 命令寻找二进制文件所在位置?
|
||||
|
||||
假设你想找到 whereis 命令自身所在的位置。下面是具体的操作:
|
||||
|
||||
|
||||
```
|
||||
whereis whereis
|
||||
```
|
||||
|
||||
[![How to find location of binary file using whereis][2]][3]
|
||||
|
||||
需要注意的是,输出的第一个路径才是你想要的结果。使用 whereis 命令,同时也会显示帮助页面和源码所在路径(如果能找到的话;本例中没有找到源码)。所以你在输出中看见的第二个路径就是帮助页面文件所在位置。下面另给出一个假设的输出示例。
|
||||
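作为参考,输出大致是下面这种形式(具体路径因发行版而异,仅为假设的示例):

```
$ whereis whereis
whereis: /usr/bin/whereis /usr/share/man/man1/whereis.1.gz
```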
|
||||
|
||||
|
||||
### Q2. 如何在搜索时限定只搜索二进制文件、帮助页面或源代码?
|
||||
|
||||
如果你只想搜索二进制文件,可以使用 **-b** 这一命令行选项。例如:
|
||||
|
||||
|
||||
```
|
||||
whereis -b cp
|
||||
```
|
||||
|
||||
[![How to specifically search for binaries, manuals, or source code][4]][5]
|
||||
|
||||
类似的,**-m** 和 **-s** 这两个选项分别对应帮助页面和源码。
|
||||
|
||||
|
||||
### Q3.如何限制whereis 命令的输出结果条数?
|
||||
|
||||
默认情况下,whereis 是从系统的硬编码路径来寻找文件的,它会输出所有符合条件的结果。但如果你想的话,你可以用命令行选项来限制搜索范围。例如,如果你只想在 /usr/bin 中寻找二进制文件,你可以用 **-B** 这一选项来实现。
|
||||
|
||||
|
||||
```
|
||||
whereis -B /usr/bin/ -f cp
|
||||
```
|
||||
|
||||
**注意**:使用这种方式时可以给出多个路径。**-f** 选项用于终止目录列表,并标明其后为要查找的文件名。
|
||||
|
||||
|
||||
类似的,如果你只想搜索帮助文件或源码,可以对应使用 **-M** 和 **-S** 这两个选项。
|
||||
|
||||
|
||||
### Q4. 如何查看 whereis 的搜索路径?
|
||||
|
||||
与此相对应的也有一个选项。只要在 whereis 后加上 **-l**:
|
||||
|
||||
|
||||
```
|
||||
whereis -l
|
||||
```
|
||||
|
||||
这是例子的部分输出结果:
|
||||
|
||||
|
||||
[![How to see paths that whereis uses for search][6]][7]
|
||||
|
||||
### Q5. 如何找到有异常条目的命令?
|
||||
|
||||
对于 whereis 命令来说,如果一个命令对每种请求的类型没有恰好一个条目,则该命令被视为异常。例如,没有可用文档的命令,或者对应文档有多份的命令,都算作异常命令。当使用 **-u** 这一选项时,whereis 就会显示那些有异常条目的命令。
|
||||
|
||||
|
||||
例如,下面这一例子就显示,在当前目录中,没有对应文档或有多个文档的命令。
|
||||
|
||||
|
||||
```
|
||||
whereis -m -u *
|
||||
```
|
||||
|
||||
### 总结
|
||||
|
||||
我同意,whereis 不是那种你需要经常使用的命令行工具。但在遇到某些特殊情况时,它绝对会让你的生活变得轻松。我们已经涉及了这一工具提供的一些重要命令行选项,所以要注意练习。想了解更多信息,直接去看它的[man][8]页面吧。
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/linux-whereis-command/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com
|
||||
[1]:https://www.howtoforge.com/tutorial/linux-find-command/
|
||||
[2]:https://www.howtoforge.com/images/command-tutorial/whereis-basic-usage.png
|
||||
[3]:https://www.howtoforge.com/images/command-tutorial/big/whereis-basic-usage.png
|
||||
[4]:https://www.howtoforge.com/images/command-tutorial/whereis-b-option.png
|
||||
[5]:https://www.howtoforge.com/images/command-tutorial/big/whereis-b-option.png
|
||||
[6]:https://www.howtoforge.com/images/command-tutorial/whereis-l.png
|
||||
[7]:https://www.howtoforge.com/images/command-tutorial/big/whereis-l.png
|
||||
[8]:https://linux.die.net/man/1/whereis
|
@ -1,158 +0,0 @@
|
||||
Python中最快解压zip文件的方法
|
||||
======
|
||||
假设业务场景是这样的:一个 zip 文件被上传到一个 [web 服务][1]中,然后 Python 需要解压这个 zip 文件,并分析和处理其中的每个文件。这个特定的应用会查看每个文件各自的名称和大小,并和已经上传到 AWS S3 上的文件进行比较,如果文件与 AWS S3 上的不同,或者文件本身更新过,那么就将它上传到 AWS S3。
|
||||
|
||||
[![Uploads today][2]][3]
|
||||
|
||||
挑战在于这些 zip 文件太大了。它们的平均大小是 560MB,但是其中一些大于 1GB。这些文件中大多数是文本文件,但是其中同样也有一些巨大的二进制文件。不同寻常的是,每个 zip 文件包含 100 个文件,但是其中 1-3 个文件却占据了多达 95% 的 zip 文件大小。
|
||||
|
||||
最开始我尝试在内存中解压文件,并且每次只处理一个文件。在各种内存爆炸和 EC2 耗尽内存的情况下,这个方法壮烈失败了。细想一下这也说得通:最开始你有 1GB 文件在 RAM 中,解压每个文件之后,内存中就有了大约 2-3GB 的数据。所以,在很多次测试之后,解决方案是将这些 zip 文件转储到磁盘上(在临时目录 `/tmp` 中),然后遍历这些文件。这次情况好多了,但是我仍然注意到整个解压过程花费了巨量的时间。**是否有方法可以优化呢?**
|
||||
|
||||
### 原始函数(baseline function)
|
||||
|
||||
首先是下面这些模拟对zip文件中文件实际操作的普通函数:
|
||||
```
|
||||
def _count_file(fn):
|
||||
with open(fn, 'rb') as f:
|
||||
return _count_file_object(f)
|
||||
|
||||
def _count_file_object(f):
|
||||
# Note that this iterates on 'f'.
|
||||
# You *could* do 'return len(f.read())'
|
||||
# which would be faster but potentially memory
|
||||
# inefficient and unrealistic in terms of this
|
||||
# benchmark experiment.
|
||||
total = 0
|
||||
for line in f:
|
||||
total += len(line)
|
||||
return total
|
||||
|
||||
```
|
||||
下面可能是最简单的一种实现:
|
||||
```
|
||||
def f1(fn, dest):
|
||||
with open(fn, 'rb') as f:
|
||||
zf = zipfile.ZipFile(f)
|
||||
zf.extractall(dest)
|
||||
|
||||
total = 0
|
||||
for root, dirs, files in os.walk(dest):
|
||||
for file_ in files:
|
||||
fn = os.path.join(root, file_)
|
||||
total += _count_file(fn)
|
||||
return total
|
||||
|
||||
```
|
||||
|
||||
如果我更仔细地分析一下,会发现这个函数 40% 的时间花在运行 `extractall` 上,60% 的时间花在执行读取文件长度的循环上。想自己复现这种分析,可以参考下面的剖析示例。
|
||||
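想复现这种剖析,可以使用标准库的 cProfile(一个粗略的示例;文件名和目标目录为假设值):

```
import cProfile

# 假设 f1 已按上文定义,'symbols.zip' 为待测的 zip 文件
cProfile.run("f1('symbols.zip', '/tmp/out')", sort='cumtime')
```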
|
||||
### 第一步尝试
|
||||
|
||||
我的第一步尝试是使用线程。先创建一个 `zipfile.ZipFile` 实例,展开其中的每个文件名,然后为每一个文件名启动一个线程。每个线程都被赋予一个函数来做“实质工作”(在这个基准测试中,就是遍历文件并获取它的长度)。实际业务中的函数所做的是复杂的 S3、Redis 和 PostgreSQL 操作,但是在我的基准测试中,我只需要一个可以得出文件长度的函数就好了。线程池函数:
|
||||
```
|
||||
def f2(fn, dest):
|
||||
|
||||
def unzip_member(zf, member, dest):
|
||||
zf.extract(member, dest)
|
||||
fn = os.path.join(dest, member.filename)
|
||||
return _count_file(fn)
|
||||
|
||||
with open(fn, 'rb') as f:
|
||||
zf = zipfile.ZipFile(f)
|
||||
futures = []
|
||||
with concurrent.futures.ThreadPoolExecutor() as executor:
|
||||
for member in zf.infolist():
|
||||
futures.append(
|
||||
executor.submit(
|
||||
unzip_member,
|
||||
zf,
|
||||
member,
|
||||
dest,
|
||||
)
|
||||
)
|
||||
total = 0
|
||||
for future in concurrent.futures.as_completed(futures):
|
||||
total += future.result()
|
||||
return total
|
||||
```
|
||||
|
||||
**结果:加速~10%**
|
||||
|
||||
### 第二步尝试
|
||||
|
||||
所以可能是 GIL(译注:Global Interpreter Lock,CPython 中的全局解释器锁)阻碍了我。最自然的想法是尝试使用 multiprocessing 在多个 CPU 上分配工作。但是这样做有个缺点,那就是你不能传递不可 pickle 序列化的对象,所以你只能把文件名传给后续执行的函数:
|
||||
```
|
||||
def unzip_member_f3(zip_filepath, filename, dest):
|
||||
with open(zip_filepath, 'rb') as f:
|
||||
zf = zipfile.ZipFile(f)
|
||||
zf.extract(filename, dest)
|
||||
fn = os.path.join(dest, filename)
|
||||
return _count_file(fn)
|
||||
|
||||
|
||||
|
||||
def f3(fn, dest):
|
||||
with open(fn, 'rb') as f:
|
||||
zf = zipfile.ZipFile(f)
|
||||
futures = []
|
||||
with concurrent.futures.ProcessPoolExecutor() as executor:
|
||||
for member in zf.infolist():
|
||||
futures.append(
|
||||
executor.submit(
|
||||
unzip_member_f3,
|
||||
fn,
|
||||
member.filename,
|
||||
dest,
|
||||
)
|
||||
)
|
||||
total = 0
|
||||
for future in concurrent.futures.as_completed(futures):
|
||||
total += future.result()
|
||||
return total
|
||||
```
|
||||
|
||||
**结果: 加速~300%**
|
||||
|
||||
### 这是作弊
|
||||
|
||||
使用进程池的问题是,它需要原始 `.zip` 文件存放在磁盘上。所以为了在我的 web 服务器上使用这个解决方案,我首先得把内存中的 zip 文件保存到磁盘,然后再调用这个函数。这样做的代价我不是很清楚,但应该不低。
|
||||
|
||||
好吧,再探究一下也没有什么损失。也许,解压过程的加速足以弥补这样做的代价。
|
||||
|
||||
但是一定要记住!这个优化取决于使用所有可用的 CPU。如果这些 CPU 还需要执行 `gunicorn` 中的其它事务呢?这时,那些进程必须等待,直到有 CPU 可用。由于这个服务器上还有其他事务正在进行,我不是很确定自己想在这个进程里接管所有其他 CPU。
|
||||
|
||||
### 结论
|
||||
|
||||
一次处理一个文件,这种做法感觉挺好的。虽然被限制在一个 CPU 上,但表现仍然特别好。同样,一定要看看 `f1` 和 `f2` 两段代码之间的差别!利用 `concurrent.futures` 的池类,你可以限定可使用的 CPU 个数,但这样做同样让人感觉不踏实。如果你在虚拟环境中获取的个数是错的呢?或者可用的个数太少,以致无法从负载分配中获益,而你只是在为分发任务支付额外的运营开销呢?
|
||||
|
||||
我将会继续使用`zipfile.ZipFile(file_buffer).extractall(temp_dir)`。这个工作这样做已经足够好了。
|
||||
|
||||
### 想试试手吗?
|
||||
|
||||
我使用一个`c5.4xlarge` EC2服务器来进行我的基准测试。文件可以从此处下载:
|
||||
```
|
||||
wget https://www.peterbe.com/unzip-in-parallel/hack.unzip-in-parallel.py
|
||||
wget https://www.peterbe.com/unzip-in-parallel/symbols-2017-11-27T14_15_30.zip
|
||||
|
||||
```
|
||||
|
||||
这里的 `.zip` 文件有 34MB,比服务器上实际处理的文件小了很多。
|
||||
|
||||
`hack.unzip-in-parallel.py` 文件里是一团糟。它包含了大量糟糕的临时代码(hack)和丑陋的东西,但万幸这只是一个起点。
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.peterbe.com/plog/fastest-way-to-unzip-a-zip-file-in-python
|
||||
|
||||
作者:[Peterbe][a]
|
||||
译者:[Leemeans](https://github.com/leemeans)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.peterbe.com/
|
||||
[1]:https://symbols.mozilla.org
|
||||
[2]:https://cdn-2916.kxcdn.com/cache/b7/bb/b7bbcf60347a5fa91420f71bbeed6d37.png
|
||||
[3]:https://cdn-2916.kxcdn.com/cache/e6/dc/e6dc20acd37d94239edbbc0727721e4a.png
|
@ -0,0 +1,62 @@
|
||||
如何检查你的 Linux PC 是否存在 Meltdown 或者 Spectre 漏洞
|
||||
======
|
||||
|
||||
![](https://www.maketecheasier.com/assets/uploads/2018/01/lmc-feat.jpg)
|
||||
|
||||
Meltdown 和 Spectre 漏洞的最恐怖的现实之一是它们涉及非常广泛。几乎每台现代计算机都会受到一些影响。真正的问题是_你_是否受到了影响?每个系统都处于不同的脆弱状态,具体取决于已经或者还没有打补丁的软件。
|
||||
|
||||
由于 Meltdown 和 Spectre 都是相当新的,并且事情正在迅速发展,所以告诉你需要注意什么或在系统上修复了什么并非易事。有一些工具可以提供帮助。它们并不完美,但它们可以帮助你找出你需要知道的东西。
|
||||
|
||||
### 简单测试
|
||||
|
||||
顶级的 Linux 内核开发人员之一提供了一种简单的方式来检查系统在 Meltdown 和 Spectre 漏洞方面的状态。这是最简单、最简洁的方法,但它不适用于每个系统,有些发行版不支持它。即使如此,也值得一试。
|
||||
```
|
||||
grep . /sys/devices/system/cpu/vulnerabilities/*
|
||||
|
||||
```
|
||||
|
||||
![Kernel Vulnerability Check][1]
|
||||
|
||||
你应该看到与上面截图类似的输出。很有可能你会发现系统中至少有一个漏洞还存在。这的确是真的,因为 Linux 在减轻 Spectre v1 影响方面还没有取得任何进展。
|
||||
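作为参考,输出大致是下面这种形式(假设的示例,具体内容取决于你的内核版本和微码):

```
/sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
/sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable
/sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable: Minimal generic ASM retpoline
```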
|
||||
### 脚本
|
||||
|
||||
如果上面的方法不适合你,或者你希望看到更详细的系统报告,一位开发人员已创建了一个 shell 脚本,它将检查你的系统来查看系统收到什么漏洞影响,还有做了什么来减轻 Meltdown 和 Spectre 的影响。
|
||||
|
||||
要得到脚本,请确保你的系统上安装了 Git,然后将脚本仓库克隆到一个你不介意运行它的目录中。
|
||||
```
|
||||
cd ~/Downloads
|
||||
git clone https://github.com/speed47/spectre-meltdown-checker.git
|
||||
|
||||
```
|
||||
|
||||
这不是一个大型仓库,所以它应该只需要几秒钟就克隆完成。完成后,输入新创建的目录并运行提供的脚本。
|
||||
```
|
||||
cd spectre-meltdown-checker
|
||||
./spectre-meltdown-checker.sh
|
||||
|
||||
```
|
||||
|
||||
你会在终端看到很多输出。别担心,它并不难理解。首先,脚本检查你的硬件,然后依次检查三个漏洞:Spectre v1、Spectre v2 和 Meltdown。每个漏洞都有自己的部分。在这之间,脚本会明确地告诉你是否受到这三个漏洞的影响。
|
||||
|
||||
![Meltdown Spectre Check Script Ubuntu][2]
|
||||
|
||||
每个部分为你提供潜在的可用的缓解方案,以及它们是否已被应用。这里需要你的一点常识。它给出的决定可能看起来有冲突。研究一下,看看它所说的修复是否实际上完全缓解了这个问题。
|
||||
|
||||
### 这意味着什么
|
||||
|
||||
所以,要点是什么?大多数 Linux 系统已经针对 Meltdown 进行了修补。如果你还没有更新,你应该更新一下。Spectre v1 仍然是一个大问题,到目前为止还没有取得很大进展。Spectre v2 的情况将取决于你的发行版以及它选择应用的补丁。无论工具怎么说,没有什么是完美的。做好研究,并留意直接来自内核和发行版开发者的信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/check-linux-meltdown-spectre-vulnerability/
|
||||
|
||||
作者:[Nick Congleton][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com/author/nickcongleton/
|
||||
[1]:https://www.maketecheasier.com/assets/uploads/2018/01/lmc-kernel-check.jpg (Kernel Vulnerability Check)
|
||||
[2]:https://www.maketecheasier.com/assets/uploads/2018/01/lmc-script.jpg (Meltdown Spectre Check Script Ubuntu)
|
@ -1,83 +0,0 @@
|
||||
如何在 Linux/Unix 中不重启 vim 而重新加载 .vimrc 文件
|
||||
======
|
||||
|
||||
我是一位新的 vim 编辑器用户。我通常加载 ~/.vimrc 用于配置。在编辑 .vimrc 时,我需要不重启 vim 会话而重新加载它。在 Linux 或者类 Unix 系统中,如何在编辑 .vimrc 后,重新加载它而不用重启 vim?
|
||||
|
||||
vim 是免费开源并且向上兼容 vi 的编辑器。它可以用来编辑各种文本。它在编辑用 C/Perl/Python 编写的程序时特别有用。可以用它来编辑 Linux/Unix 配置文件。~/.vimrc 是你个人的 vim 初始化和自定义文件。
|
||||
|
||||
### 如何在不重启 vim 会话的情况下重新加载 .vimrc
|
||||
|
||||
在 vim 中重新加载 .vimrc 而不重新启动的流程:
|
||||
|
||||
1. 输入 `vim filename` 启动 vim
|
||||
2. 按下 `Esc` 接着输入 `:vs ~/.vimrc` 来加载 vim 配置
|
||||
3. 像这样添加自定义:
|
||||
|
||||
```
|
||||
filetype indent plugin on
|
||||
set number
|
||||
syntax on
|
||||
```
|
||||
|
||||
4. 使用 `:wq` 保存文件,并从 ~/.vimrc 窗口退出
|
||||
5. 输入下面任一命令重载 ~/.vimrc:
|
||||
|
||||
```
|
||||
:so $MYVIMRC
|
||||
```
|
||||
或者
|
||||
```
|
||||
:source ~/.vimrc
|
||||
```
|
||||
|
||||
[![How to reload .vimrc file without restarting vim][1]][1]
|
||||
图1:编辑 ~/.vimrc 并在需要的时候重载而不用退出 vim,这样你就可以继续编辑程序了
|
||||
|
||||
`:so[urce]! {file}` 这个 vim 命令会从给定的文件比如 ~/.vimrc 读取配置。这些命令是在正常模式下执行,就像你输入它们一样。当你在 :global、:argdo、 :windo、:bufdo 之后、循环中或者跟着另一个命令时,显示不会再在执行命令时更新。
|
||||
|
||||
### 如何设置按键映射来编辑并重载 ~/.vimrc
|
||||
|
||||
在你的 ~/.vimrc 末尾追加这些内容:
|
||||
|
||||
```
|
||||
" Edit vimr configuration file
|
||||
nnoremap confe :e $MYVIMRC<CR>
" Reload vims configuration file
nnoremap confr :source $MYVIMRC<CR>
|
||||
```
|
||||
|
||||
现在只要按下 `Esc` 接着输入 `confe` 就能编辑 ~/.vimrc。按下 `Esc`,接着输入 `confr` 就能重新加载。一些人喜欢在 .vimrc 中使用 <Leader> 键。因此上面的映射变成:
|
||||
|
||||
```
|
||||
" Edit vimr configuration file
|
||||
nnoremap <Leader>ve :e $MYVIMRC<CR>
|
||||
"
|
||||
" Reload vimr configuration file
|
||||
nnoremap <Leader>vr :source $MYVIMRC<CR>
|
||||
```
|
||||
|
||||
<Leader> 键默认映射成 \\ 键。因此只要输入 \\ 接着 ve 就能编辑文件。按下 \\ 接着 vr 就能重载 ~/.vimrc。如果想换用别的 Leader 键,可以参考下面的示意。
|
||||
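如果不想用默认的 \\ 作为 Leader 键,也可以自己指定,下面是一个可选的示意(把 Leader 改为逗号):

```
" 可选:将 Leader 键改为逗号
let mapleader = ","
nnoremap <Leader>ve :e $MYVIMRC<CR>
nnoremap <Leader>vr :source $MYVIMRC<CR>
```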
这就完成了,你可以不用再重启 vim 就能重新加载 .vimrc 了。
|
||||
|
||||
### 关于作者
|
||||
|
||||
作者是 nixCraft 的创建者,经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本的培训师。他曾与全球客户以及IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 [Twitter][9]、[Facebook][10]、[Google +][11] 上关注他。通过[我的 RSS/XML 订阅][5]获取**最新的系统管理、Linux/Unix 以及开源主题教程**。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/faq/how-to-reload-vimrc-file-without-restarting-vim-on-linux-unix/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/media/new/faq/2018/02/How-to-reload-.vimrc-file-without-restarting-vim.jpg
|
||||
[2]:https://twitter.com/nixcraft
|
||||
[3]:https://facebook.com/nixcraft
|
||||
[4]:https://plus.google.com/+CybercitiBiz
|
||||
[5]:https://www.cyberciti.biz/atom/atom.xml
|
@ -0,0 +1,147 @@
|
||||
如何使用 Seahorse 管理 PGP 和 SSH 密钥
|
||||
============================================================
|
||||
|
||||
|
||||
![Seahorse](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fish-1907607_1920.jpg?itok=u07bav4m "Seahorse")
|
||||
学习使用 Seahorse GUI 工具去管理 PGP 和 SSH 密钥。[Creative Commons Zero][6]
|
||||
|
||||
安全无异于内心的平静。毕竟,安全是许多用户迁移到 Linux 的最大理由。既然你还可以采用一些方法和技术,让你的桌面或者服务器系统更加安全,为什么要止步于此呢?
|
||||
|
||||
其中一项技术涉及到密钥,即 PGP 密钥和 SSH 密钥。PGP 密钥允许你去加密和解密电子邮件和文件,而 SSH 密钥允许你通过一个额外的安全层登入服务器。
|
||||
|
||||
当然,你可以通过命令行接口(CLI)来管理这些密钥,但是,如果你使用一个华丽的 GUI 桌面环境呢?经验丰富的 Linux 用户可能对于摆脱命令行来工作感到很不适应,但是,并不是所有用户都具备与他们相同的技术和水平。因此,来使用 GUI 吧!
|
||||
|
||||
在本文中,我将带你探索如何使用 [Seahorse][14] GUI 工具来管理 PGP 和 SSH 密钥。Seahorse 有非常强大的功能,它可以:
|
||||
|
||||
* 加密/解密/签名文件和文本。
|
||||
|
||||
* 管理你的密钥和密钥对。
|
||||
|
||||
* 同步你的密钥和密钥对到远程密钥服务器。
|
||||
|
||||
* 签名和发布密钥。
|
||||
|
||||
* 缓存你的密码。
|
||||
|
||||
* 备份密钥和密钥对。
|
||||
|
||||
* 在任何一个 GDK 支持的格式中添加一个图像作为一个 OpenPGP photo ID。
|
||||
|
||||
* 创建、配置、和缓存 SSH 密钥。
|
||||
|
||||
对于那些不了解 Seahorse 的人来说,它是一个在 GNOME 密钥环中管理加密密钥和密码的 GNOME 应用程序。不用担心,Seahorse 可以安装在许多桌面上。并且由于 Seahorse 可以在标准仓库中找到,你可以打开你的桌面应用商店(比如,Ubuntu Software 或者 Elementary OS AppCenter)去安装它。因此,你可以在你的发行版的应用商店中点击安装它。安装完成后,你就可以去使用这个很方便的工具了。
|
||||
|
||||
我们开始去使用它吧。
|
||||
|
||||
### PGP 密钥
|
||||
|
||||
我们需要做的第一件事情就是生成一个新的 PGP 密钥。正如前面所述,PGP 密钥可以用于加密电子邮件(使用一些工具,像 [Thunderbird][15] 的 [Enigmail][16] 或者使用 [Evolution][17] 内置的加密功能)。一个 PGP 密钥也可以用于加密文件。别人可以使用你的公钥来加密发给你的电子邮件和文件,而只有持有对应私钥的你才能解密。没有 PGP 密钥是做不到的。
|
||||
|
||||
使用 Seahorse 创建一个新的 PGP 密钥对是非常简单的。以下是操作步骤:
|
||||
|
||||
1. 打开 Seahorse 应用程序
|
||||
|
||||
2. 在主面板的左上角点击 + 按钮
|
||||
|
||||
3. 选择 PGP Key(如图 1 )
|
||||
|
||||
4. 点击 Continue
|
||||
|
||||
5. 当提示时,输入完整的名字和电子邮件地址
|
||||
|
||||
6. 点击 Create
|
||||
|
||||
|
||||
![Seahorse](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_1.jpg?itok=khLOYC61 "Seahorse")
|
||||
图 1:使用 Seahorse 创建一个 PGP 密钥。[Used with permission][1]
|
||||
|
||||
在创建你的 PGP 密钥期间,你可以点击 Advanced key options 展开选项部分,在那里你可以为密钥添加注释信息、加密类型、密钥长度、以及过期时间(如图 2)。
|
||||
|
||||
|
||||
![PGP](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_2.jpg?itok=eWiazwrn "PGP")
|
||||
图 2:PGP 密钥高级选项[Used with permission][2]
|
||||
|
||||
增加注释部分可以很方便帮你记住密钥的用途(或者其它的信息)。
|
||||
要使用你创建的 PGP 密钥,可在密钥列表中双击它。在出现的窗口中,点击 Names 和 Signatures 选项卡。在这个窗口中,你可以签名你的密钥(表示你信任这个密钥)。点击 Sign 按钮,然后(在出现的窗口中)标明你对这个密钥检查得有多仔细,以及其他人将如何看到这个签名(如图 3)。
|
||||
|
||||
|
||||
![Key signing](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_3.jpg?itok=7USKG9fI "Key signing")
|
||||
图 3:签名一个密钥表示信任级别。[Used with permission][3]
|
||||
|
||||
当你处理其它人的密钥时,密钥签名是非常重要的,因为一个签名的密钥将确保你的系统(和你)做了这项工作并且完全信任这个重要的密钥。
|
||||
|
||||
谈到导入的密钥,Seahorse 可以允许你很容易地去导入其他人的公钥文件(这个文件以 .asc 为后缀)。你的系统上有其他人的公钥,意味着你可以解密从他们那里发送给你的电子邮件和文件。然而,Seahorse 在很长的一段时间内都存在一个 [已知的 bug][18]。这个问题是,Seahorse 导入时使用 GPG 版本 1,但显示的是 GPG 版本 2。这意味着,在这个存在了很长时间的 bug 被修复之前,导入公钥总是失败的。如果你想导入一个公钥文件到 Seahorse 中,你只能使用命令行。因此,如果有人发送给你一个文件 olivia.asc,你想导入到 Seahorse 中使用,你只能运行命令 gpg2 --import olivia.asc。那个密钥将出现在 GnuPG 密钥列表中。你可以打开该密钥,点击 I trust signatures 按钮,然后点击 Sign this key 按钮,并标明你对这个密钥检查得有多仔细。相关命令参见下面的小结。
|
||||
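整理一下,这个变通方法用到的命令大致如下(olivia.asc 为上文的示例文件名):

```
gpg2 --import olivia.asc
gpg2 --list-keys
```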
|
||||
### SSH 密钥
|
||||
|
||||
现在我们来谈谈我认为 Seahorse 中最重要的一个方面 — SSH 密钥。Seahorse 不仅可以很容易地生成一个 SSH 密钥,而且它也可以很容易地将生成的密钥发送到服务器上,因此,你可以享受到 SSH 密钥验证的好处。下面是如何生成一个新的密钥以及如何导出它到一个远程服务器上。
|
||||
|
||||
1. 打开 Seahorse 应用程序
|
||||
|
||||
2. 点击 + 按钮
|
||||
|
||||
3. 选择 Secure Shell Key
|
||||
|
||||
4. 点击 Continue
|
||||
|
||||
5. 提供一个密钥描述信息
|
||||
|
||||
6. 点击 Set Up 去创建密钥
|
||||
|
||||
7. 为密钥输入一个验证口令
|
||||
|
||||
8. 点击 OK
|
||||
|
||||
9. 输入远程服务器地址和服务器上的登录名(如图 4)
|
||||
|
||||
10. 输入远程用户的密码
|
||||
|
||||
11. 点击 OK
|
||||
|
||||
|
||||
![SSH key](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_4.jpg?itok=ZxuxT8ry "SSH key")
|
||||
图 4:上传一个 SSH 密钥到远程服务器。[Used with permission][4]
|
||||
|
||||
新密钥将上传到远程服务器上以准备好使用它。如果你的服务器已经设置为使用 SSH 密钥验证,那就一切就绪了。
|
||||
|
||||
需要注意的是,在创建一个 SSH 密钥期间,你可以点击 Advanced key options 去展开它,配置加密类型和密钥长度(如图 5)。
|
||||
|
||||
|
||||
![Advanced options](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_5.jpg?itok=vUT7pi0z "Advanced options")
|
||||
图 5:高级 SSH 密钥选项。[Used with permission][5]
|
||||
|
||||
### Linux 新手必备
|
||||
|
||||
任何 Linux 新手用户都可以很快熟悉使用 Seahorse。即便是它有缺陷,Seahorse 仍然是为你准备的一个极其方便的工具。有时候,你可能希望(或者需要)去加密或者解密一个电子邮件/文件,或者为使用 SSH 验证来管理 SSH 密钥。如果你想去这样做而不希望使用命令行,那么,Seahorse 将是非常适合你的工具。
|
||||
|
||||
_通过来自 Linux 基金会和 edX 的 ["Linux 入门" ][13] 免费课程学习更多 Linux 的知识。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2018/2/how-manage-pgp-and-ssh-keys-seahorse
|
||||
|
||||
作者:[JACK WALLEN ][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/jlwallen
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:https://www.linux.com/licenses/category/used-permission
|
||||
[3]:https://www.linux.com/licenses/category/used-permission
|
||||
[4]:https://www.linux.com/licenses/category/used-permission
|
||||
[5]:https://www.linux.com/licenses/category/used-permission
|
||||
[6]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[7]:https://www.linux.com/files/images/seahorse1jpg
|
||||
[8]:https://www.linux.com/files/images/seahorse2jpg
|
||||
[9]:https://www.linux.com/files/images/seahorse3jpg
|
||||
[10]:https://www.linux.com/files/images/seahorse4jpg
|
||||
[11]:https://www.linux.com/files/images/seahorse5jpg
|
||||
[12]:https://www.linux.com/files/images/fish-19076071920jpg
|
||||
[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
||||
[14]:https://wiki.gnome.org/Apps/Seahorse
|
||||
[15]:https://www.mozilla.org/en-US/thunderbird/
|
||||
[16]:https://enigmail.net/index.php/en/
|
||||
[17]:https://wiki.gnome.org/Apps/Evolution
|
||||
[18]:https://bugs.launchpad.net/ubuntu/+source/seahorse/+bug/1577198
|
@ -0,0 +1,90 @@
|
||||
如何检查你的计算机使用的是 UEFI 还是 BIOS
|
||||
======
|
||||
**简介:一个快速的教程,告诉你你的系统使用的是现代的 UEFI 还是传统的 BIOS。同时提供 Windows 和 Linux 下的操作说明。**
|
||||
|
||||
当你尝试[双启动 Linux 和 Windows ][1]时,你需要知道系统上是否有 UEFI 或 BIOS 启动模式。它可以帮助你决定安装 Linux 的分区。
|
||||
|
||||
我不打算在这里讨论[什么是 BIOS][2]。不过,我想告诉你 [UEFI][3] 相对于 BIOS 的一些优点。
|
||||
|
||||
UEFI,或者说统一可扩展固件接口,旨在克服 BIOS 的某些限制。它增加了使用大于 2TB 磁盘的能力,并具有独立于 CPU 的体系结构和驱动程序。采用模块化设计,即使没有安装操作系统,也可以支持远程诊断和修复,以及灵活的无操作系统环境(包括网络功能)。
|
||||
|
||||
### UEFI 相对于 BIOS 的优点
|
||||
|
||||
* UEFI在初始化硬件时速度更快。
|
||||
* 提供安全启动,这意味着你在加载操作系统之前加载的所有内容都必须签名。这为你的系统提供了额外的保护层。
|
||||
* BIOS 不支持超过 2TB 的分区。
|
||||
* 最重要的是,如果你是双引导,那么建议始终在相同的引导模式下安装两个操作系统。
|
||||
|
||||
|
||||
|
||||
![How to check if system has UEFI or BIOS][4]
|
||||
|
||||
如果试图查看你的系统运行的是 UEFI 还是 BIOS,这并不难。首先让我从 Windows 开始,然后看看如何在 Linux 系统上查看用的是 UEFI 还是 BIOS。
|
||||
|
||||
### 在 Windows 中检查使用的是 UEFI 还是 BIOS
|
||||
|
||||
在 Windows 中,在开始菜单中搜索“系统信息”,在“BIOS 模式”一项中可以找到启动模式。如果它显示的是 Legacy,那么你的系统是 BIOS。如果显示 UEFI,那么它是 UEFI。
|
||||
|
||||
![][5]
|
||||
|
||||
**另一个方法**:如果你使用 Windows 10,可以打开文件资源管理器并进入到 C:\Windows\Panther 来查看你使用的是 UEFI 还是 BIOS。打开文件 setupact.log 并搜索下面的字符串。
|
||||
```
|
||||
Detected boot environment
|
||||
|
||||
```
|
||||
|
||||
我建议在 notepad++ 中打开这个文件,因为这个文件很大(对我来说有 6GB),用记事本打开可能会挂起。
|
||||
|
||||
你会看到几行有用的信息。
|
||||
```
|
||||
2017-11-27 09:11:31, Info IBS Callback_BootEnvironmentDetect:FirmwareType 1.
|
||||
2017-11-27 09:11:31, Info IBS Callback_BootEnvironmentDetect: Detected boot environment: BIOS
|
||||
|
||||
```
|
||||
|
||||
### 在 Linux 中检查使用的是 UEFI 还是 BIOS
|
||||
|
||||
最简单的判断方法是查看 /sys/firmware/efi 文件夹是否存在。如果系统使用的是 BIOS,那么该文件夹不存在。也可以用下面给出的单行命令来检查。
|
||||
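例如,可以用下面这样的单行命令来检查(一个简单的示意):

```
[ -d /sys/firmware/efi ] && echo "UEFI" || echo "BIOS"
```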
|
||||
![Find if system uses UEFI or BIOS on Ubuntu Linux][6]
|
||||
|
||||
**另一种方法**:安装名为 efibootmgr 的软件包。
|
||||
|
||||
在基于 Debian 和 Ubuntu 的发行版中,你可以使用以下命令安装 efibootmgr 包:
|
||||
```
|
||||
sudo apt install efibootmgr
|
||||
|
||||
```
|
||||
|
||||
完成后,输入以下命令:
|
||||
```
|
||||
sudo efibootmgr
|
||||
|
||||
```
|
||||
|
||||
如果你的系统支持 UEFI,它会输出一些变量。如果没有,你将看到一条消息,指出不支持 EFI 变量。
|
||||
|
||||
![][7]
|
||||
|
||||
### 最后的话
|
||||
|
||||
查看你的系统使用的是 UEFI 还是 BIOS 很容易。启动更快、支持安全启动等特性为 UEFI 带来了优势,不过如果你使用的是 BIOS 也不必太担心,除非你打算使用大于 2TB 的硬盘。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/check-uefi-or-bios/
|
||||
|
||||
作者:[Ambarish Kumar][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/ambarish/
|
||||
[1]:https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
|
||||
[2]:https://www.lifewire.com/bios-basic-input-output-system-2625820
|
||||
[3]:https://www.howtogeek.com/56958/htg-explains-how-uefi-will-replace-the-bios/
|
||||
[4]:https://itsfoss.com/wp-content/uploads/2018/02/uefi-or-bios-800x450.png
|
||||
[5]:https://itsfoss.com/wp-content/uploads/2018/01/BIOS-800x491.png
|
||||
[6]:https://itsfoss.com/wp-content/uploads/2018/02/uefi-bios.png
|
||||
[7]:https://itsfoss.com/wp-content/uploads/2018/01/bootmanager.jpg
|
48
translated/tech/20180209 Gnome without chrome-gnome-shell.md
Normal file
@ -0,0 +1,48 @@
|
||||
没有 chrome-gnome-shell 的 Gnome
|
||||
======
|
||||
|
||||
新的笔记本有触摸屏,可以折叠成平板电脑。我听说 gnome-shell 是桌面环境的一个很好的选择,我也设法把它调整到足以沿用现有的使用习惯。
|
||||
|
||||
然而,我有一个很大的问题:它竟然鼓励人们从互联网上下载随机扩展,并将它们作为整个桌面环境的一部分运行。一个更大的问题是,[gnome-core][1] 对 [chrome-gnome-shell][2] 有强制依赖;这个插件如果不以 root 身份编辑 `/etc` 下的文件就无法禁用,而它会把我的桌面环境暴露给网站。
|
||||
|
||||
访问[这个网站][3],它会知道你已经安装了哪些扩展,并且能够安装更多。我不信任它,我不需要那样,我不想那样。我为此感到震惊。
|
||||
|
||||
[我想出了一个临时解决方法][4]。
|
||||
|
||||
人们会在 firefox 中如何做呢?
|
||||
|
||||
### 描述
|
||||
|
||||
chrome-gnome-shell 是 gnome-core 的一个强制依赖项,它安装了一个可能不需要的浏览器插件,并强制它使用系统范围的 chrome 策略。
|
||||
|
||||
我认为使用 chrome-gnome-shell 会不必要地增加系统的攻击面,它让我这个主要用户获得了下载并执行随机的未经审查代码的可疑“特权”。
|
||||
|
||||
这个包满足了 chrome-gnome-shell 的依赖,但不会安装任何东西。
|
||||
|
||||
请注意,在安装此包之后,如果先前安装过 chrome-gnome-shell,则需要彻底清除(purge)它,以便删除 /etc/chromium 中的 chromium 策略文件。
|
||||
|
||||
### 说明
|
||||
```
|
||||
apt install equivs
|
||||
equivs-build contain-gnome-shell
|
||||
sudo dpkg -i contain-gnome-shell_1.0_all.deb
|
||||
sudo dpkg --purge chrome-gnome-shell
|
||||
|
||||
```
|
||||
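文中的 `contain-gnome-shell` 是传给 equivs-build 的控制文件,原文没有给出其内容。下面是一个按 equivs 惯例给出的最小示例(字段值为假设,仅供参考):

```
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: contain-gnome-shell
Version: 1.0
Provides: chrome-gnome-shell
Description: empty package satisfying the chrome-gnome-shell dependency
 This equivs-generated package installs nothing.
```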
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.enricozini.org/blog/2018/debian/gnome-without-chrome-gnome-shell/
|
||||
|
||||
作者:[Enrico Zini][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.enricozini.org/
|
||||
[1]:https://packages.debian.org/gnome-core
|
||||
[2]:https://packages.debian.org/chrome-gnome-shell
|
||||
[3]:https://extensions.gnome.org/
|
||||
[4]:https://salsa.debian.org/enrico/contain-gnome-shell
|
@ -0,0 +1,119 @@
|
||||
如何在 Windows 10 上开启 WSL(Windows Subsystem for Linux) 之旅
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/wsl-main.png?itok=wJ5WrU9U)
|
||||
|
||||
在 [上一篇文章][1] 中,我们讨论过关于 Windows 的子系统 Linux(WSL)的目标用户。本文,我们将在 Windows 10 的设备上,开启 WSL 的旅程。
|
||||
|
||||
### 为 WSL 做准备
|
||||
|
||||
您必须使用最新版本的 Windows 10 Fall Creators Update。之后,在开始菜单中搜索 “About”,检查 Windows 10 的版本。为了使用 WSL,您的版本应当为 1709 或更新版。
|
||||
|
||||
这里有一张关于我的操作系统的截图。
|
||||
|
||||
![kHFKOvrbG1gXdB9lsbTqXC4N4w0Lbsz1Bul5ey9m][2]
|
||||
|
||||
如果您安装的是之前的版本,您需要在 [这里][3] 下载并且安装 Windows 10 Fall Creators Update (FCU)。安装完毕后,检查更新设置(在开始菜单的搜索框中搜索 “updates”)。
|
||||
|
||||
前往 `启用或关闭 Windows 功能` 页面,然后滚动至底部,如截图所示,勾选 `适用于 Linux 的 Windows 子系统`,点击确定。它将会下载安装需要的包。
|
||||
|
||||
![oV1mDqGe3zwQgL0N3rDasHH6ZwHtxaHlyrLzjw7x][4]
|
||||
|
||||
安装完成之后,系统将会询问是否重启。是的,重启设备吧。WSL 在系统重启之前不会启动,如下所示:
|
||||
|
||||
![GsNOQLJlHeZbkaCsrDIhfVvEoycu3D0upoTdt6aN][5]
|
||||
|
||||
一旦您的系统重启,返回 `启用或关闭 Windows 功能` 页面,确认 `适用于 Linux 的 Windows 子系统` 已经被勾选。
|
||||
|
||||
### 在 Windows 中安装 Linux
|
||||
|
||||
在 Windows 中安装 Linux,有很多方式,这里我们选择一种最简单的方式。打开 Microsoft Store,搜索 Linux。您将看到下面的选项:
|
||||
|
||||
![YAR4UgZiFAy2cdkG4U7jQ7_m81lrxR6aHSMOdED7][6]
|
||||
|
||||
点击 `获取`,之后 Windows 商店将会提供三个选项:Ubuntu、openSUSE Leap 42 和 SUSE Linux Enterprise Server。您可以一并安装上述三个发行版,并且它们可以同时运行。为了能使用 SLES,您需要一个订阅授权。
|
||||
|
||||
在此,我将安装 openSUSE Leap 42 和 Ubuntu。选中您想要的发行版,点击获得按钮并安装。一旦安装完毕,您就可以在 Windows 中启动 openSUSE。为了方便访问,可以将其固定到开始菜单中。
|
||||
|
||||
![4LU6eRrzDgBprDuEbSFizRuP1J_zS3rBnoJbU2OA][7]
|
||||
|
||||
### 在 Windows 中使用 Linux
|
||||
|
||||
当您启动发行版,它将会打开一个 Bash Shell 并且安装此发行版。安装完毕之后,您就可以开始使用了。您需要留意,openSUSE 中并没有普通用户,它直接以 root 用户运行,但是 Ubuntu 会询问您是否创建用户。在 Ubuntu 中,您可以使用 sudo 执行管理任务。
|
||||
|
||||
在 openSUSE 上,您可以很轻松的创建一个用户:
|
||||
```
|
||||
# useradd [username]
|
||||
|
||||
# passwd [username]
|
||||
|
||||
```
|
||||
|
||||
为此用户创建一个新的密码。例如:
|
||||
```
|
||||
# useradd swapnil
|
||||
|
||||
# passwd swapnil
|
||||
|
||||
```
|
||||
|
||||
您可以通过 su 命令从 root 用户切换过来。
|
||||
```
|
||||
su swapnil
|
||||
|
||||
```
|
||||
|
||||
您需要非 root 用户来执行许多任务,比如使用 rsync 移动文件到本地设备。
|
||||
|
||||
而首要任务是更新发行版。对于 openSUSE 来说,您应该:
|
||||
```
|
||||
zypper up
|
||||
|
||||
```
|
||||
|
||||
而对于 Ubuntu:
|
||||
```
|
||||
sudo apt-get update
|
||||
|
||||
sudo apt-get dist-upgrade
|
||||
|
||||
```
|
||||
|
||||
![7cRgj1O6J8yfO3L4ol5sP-ZCU7_uwOuEoTzsuVW9][8]
|
||||
|
||||
现在,您就在 Windows 上拥有了原生的 Linux Bash shell。想在 Windows 10 上通过 ssh 连接您的服务器?不需要安装 PuTTY 或是 Cygwin。打开 Bash 之后,就可以通过 ssh 进入您的服务器。简单之至。
|
||||
|
||||
想通过 rsync 同步文件到您的服务器?直接使用 rsync(下面给出一个假设的用法示意)。它切实地让我们的 Windows 设备变得更为实用,帮助那些需要使用原生 Linux 命令和 Linux 工具的用户避开虚拟机,大开方便之门。
|
||||
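例如(主机名和路径均为假设值):

```
# 在 WSL 的 Bash 中直接使用 ssh 和 rsync
ssh user@example.com
rsync -avz ~/project/ user@example.com:/srv/project/
```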
|
||||
### 找不到 Fedora?
|
||||
|
||||
您可能想了解 Fedora。可惜,商城里并没有 Fedora。Fedora 项目发布负责人在 Twitter 上表示,“我们正在解决一些非技术性问题。现在可能提供不了更多了。”
|
||||
|
||||
我们并不确定这些非技术性问题是什么。当一些用户询问为何不由 WSL 团队自己发布 Fedora(毕竟它是一个开源项目)时,微软的项目负责人 Rich Turner [回应][9]:“我们的政策是不把他人的知识产权(IP)发布到应用商店。我们相信,相较于由微软或其他非官方来源发布,社区更希望看到发行版由发行版所有者发布。”
|
||||
|
||||
因此,微软不便在 Windows 商店中直接发布 Debian 或是 Arch 等系统。这些工作应该由各发行版的官方团队来完成,由他们将发行版带给 Windows 10 的用户。
|
||||
|
||||
### 欲知后事,下回分解
|
||||
|
||||
下一篇文章,我们会讨论关于将 Windows 10 作为 Linux 设备,并且向您展示,您可能会在 Linux 系统上使用的命令行工具。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
|
||||
|
||||
作者:[SWAPNIL BHARTIYA][a]
|
||||
译者:[CYLeft](https://github.com/CYLeft)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/arnieswap
|
||||
[1]:https://www.linux.com/blog/learn/2018/2/windows-subsystem-linux-bridge-between-two-platforms
|
||||
[2]:https://lh6.googleusercontent.com/kHFKOvrbG1gXdB9lsbTqXC4N4w0Lbsz1Bul5ey9mr_E255GiiBxf8cRlatrte6z23yvo8lHJG8nQ_WeHhUNYqPp7kHuQTTMueqMshCT71JsbMr2Wih9KFHuHgNg1BclWz-iuBt4O
|
||||
[3]:https://www.microsoft.com/en-us/software-download/windows10
|
||||
[4]:https://lh4.googleusercontent.com/oV1mDqGe3zwQgL0N3rDasHH6ZwHtxaHlyrLzjw7xF9M9_AcHPNSxM18KDWK2ZpVcUOfxVVpNH9LwUJT5EtRE7zUrJC_gWV5f345SZRAgXcJzOE-8rM8-RCPTNtns6vVP37V5Eflp
|
||||
[5]:https://lh5.googleusercontent.com/GsNOQLJlHeZbkaCsrDIhfVvEoycu3D0upoTdt6aNEozAcQA59Z3hDu_SxT6I4K4gwxLPX0YnmUsCKjaQaaG2PoAgUYMcN0Zv0tBFaoUL3sZryddM4mdRj1E2tE-IK_GLK4PDa4zf
|
||||
[6]:https://lh3.googleusercontent.com/YAR4UgZiFAy2cdkG4U7jQ7_m81lrxR6aHSMOdED7MKEoYxEsX_yLwyMj9N2edt3GJ2JLx6mUsFEZFILCCSBU2sMOqveFVWZTHcCXhFi5P2Xk-9Ikc3NK9seup5CJObIcYJPORdPW
|
||||
[7]:https://lh6.googleusercontent.com/4LU6eRrzDgBprDuEbSFizRuP1J_zS3rBnoJbU2OAOH3Mx7nfOROfyf81k1s4YQyLBcu0qSXOoaqbYkXL5Wpp9gNCdKH_WsEcqWzjG6uXzYvCYQ42psOz6Iz3NF7ElsPrdiFI0cYv
|
||||
[8]:https://lh6.googleusercontent.com/7cRgj1O6J8yfO3L4ol5sP-ZCU7_uwOuEoTzsuVW9cU5xiBWz_cpZ1IBidNT0C1wg9zROIncViUzXD0vPoH5cggQtuwkanRfRdDVXOI48AcKFLt-Iq2CBF4mGRwqqWvSOhb0HFpjm
|
||||
[9]:https://github.com/Microsoft/WSL/issues/2584
|
239
translated/tech/20180221 Getting started with SQL.md
Normal file
@ -0,0 +1,239 @@
|
||||
开始使用 SQL
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
|
||||
|
||||
使用 SQL 构建数据库比大多数人想象得要简单。实际上,你甚至不需要成为一个有经验的程序员来使用 SQL 创建数据库。在本文中,我将解释如何使用 MySQL 5.6 来创建简单的关系型数据库管理系统(RDMS)。在开始之前,我想快速地感谢 [SQL Fiddle][1],这是我用来运行脚本的工具。它提供了一个用于测试简单脚本的有用的沙箱。
|
||||
|
||||
在本教程中,我将构建一个使用下面实体关系图(ERD)中显示的简单架构的数据库。数据库列出了学生和正在学习的课程。为了保持简单,我使用了两个实体(即表),只有一种关系和依赖关系。这两个实体称为 `dbo_students` 和 `dbo_courses`。
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/erd.png)
|
||||
|
||||
这个数据库中关系的多重性是一对多的,因为每门课程可以包含很多学生,但每个学生只能学习一门课程。
|
||||
|
||||
关于术语的快速说明:
|
||||
|
||||
1. 一张表称为一个实体。
|
||||
2. 一个字段称为一个属性。
|
||||
3. 一条记录称为一个元组。
|
||||
4. 用于构建数据库的脚本称为架构。
|
||||
|
||||
### 构建架构
|
||||
|
||||
要构建数据库,使用 `CREATE TABLE <表名>` 命令,然后定义每个字段的名称和数据类型。数据库使用 `VARCHAR(n)` (字符串)和 `INT(n)` (整数),其中 n 表示可以存储的值的长度。例如 `INT(2)` 可能是 01。
|
||||
|
||||
这是用于创建两个表的代码:
|
||||
```
|
||||
CREATE TABLE dbo_students
|
||||
|
||||
(
|
||||
|
||||
student_id INT(2) AUTO_INCREMENT NOT NULL,
|
||||
|
||||
student_name VARCHAR(50),
|
||||
|
||||
course_studied INT(2),
|
||||
|
||||
PRIMARY KEY (student_id)
|
||||
|
||||
);
|
||||
|
||||
|
||||
|
||||
CREATE TABLE dbo_courses
|
||||
|
||||
(
|
||||
|
||||
course_id INT(2) AUTO_INCREMENT NOT NULL,
|
||||
|
||||
course_name VARCHAR(30),
|
||||
|
||||
PRIMARY KEY (course_id)
|
||||
|
||||
);
|
||||
```
|
||||
|
||||
`NOT NULL` 意味着字段不能为空;`AUTO_INCREMENT` 意味着当一个新的元组被添加时,ID 号将自动生成,即在先前存储的 ID 号上加 1,以此维护各实体之间的参照完整性。`PRIMARY KEY` 是每个表的唯一标识符属性,这意味着每个元组都有自己独特的标识。
|
||||
|
||||
### 关系作为一种约束
|
||||
|
||||
就目前来看,这两张表格是独立存在的,没有任何联系或关系。要连接它们,必须标识一个外键。在 `dbo_students` 中,外键是 `course_studied`,其来源在 `dbo_courses` 中,意味着该字段被引用。SQL 中的特定命令为 `CONSTRAINT`,并且将使用另一个名为 `ALTER TABLE` 的命令添加这种关系,这样即使在架构构建完毕后,也可以编辑表。
|
||||
|
||||
以下代码将关系添加到数据库构造脚本中:
|
||||
```
|
||||
ALTER TABLE dbo_students
|
||||
|
||||
ADD CONSTRAINT FK_course_studied
|
||||
|
||||
FOREIGN KEY (course_studied) REFERENCES dbo_courses(course_id);
|
||||
```
|
||||
使用 `CONSTRAINT` 命令实际上并不是必要的,但这是一个好习惯,因为它意味着约束可以被命名并且使维护更容易。现在数据库已经完成了,是时候添加一些数据了。
|
||||
|
||||
### 将数据添加到数据库
|
||||
|
||||
`INSERT INTO <表名>`是用于直接选择将数据添加到哪些属性(即字段)的命令。首先声明实体名称,然后声明属性。命令下边是添加到实体的数据,从而创建一个元组。如果指定了 `NOT NULL`,这表示该属性不能留空。以下代码将展示如何向表中添加记录:
|
||||
```
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(001,'Software Engineering');
|
||||
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(002,'Computer Science');
|
||||
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(003,'Computing');
|
||||
|
||||
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(001,'student1',001);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(002,'student2',002);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(003,'student3',002);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(004,'student4',003);
|
||||
```
|
||||
|
||||
现在数据库架构已经完成并添加了数据,现在是时候在数据库上运行查询了。
|
||||
|
||||
### 查询
|
||||
|
||||
查询遵循使用以下命令的集合结构:
|
||||
```
|
||||
SELECT <attributes>
|
||||
|
||||
FROM <entity>
|
||||
|
||||
WHERE <condition>
|
||||
```
|
||||
|
||||
要显示 `dbo_courses` 实体内的所有记录并显示课程代码和课程名称,请使用 `*`。这是一个通配符,它省去了键入所有属性名称的麻烦。(在生产数据库中不建议使用它。)此处查询的代码是:
|
||||
```
|
||||
SELECT *
|
||||
|
||||
FROM dbo_courses
|
||||
```
|
||||
|
||||
此处查询的输出显示表中的所有元组,因此可显示所有可用课程:
|
||||
```
|
||||
| course_id | course_name |
|
||||
|
||||
|-----------|----------------------|
|
||||
|
||||
| 1 | Software Engineering |
|
||||
|
||||
| 2 | Computer Science |
|
||||
|
||||
| 3 | Computing |
|
||||
```
|
||||
|
||||
在以后的文章中,我将使用三种类型的连接(Inner、Outer 或 Cross)之一来解释更复杂的查询。作为预览,下面先给出一个简单的内连接示意。
|
||||
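这个内连接基于前面的架构,列出每个学生及其所学课程的名称:

```
SELECT dbo_students.student_name, dbo_courses.course_name
FROM dbo_students
INNER JOIN dbo_courses
ON dbo_students.course_studied = dbo_courses.course_id;
```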
|
||||
这是完整的脚本:
|
||||
```
|
||||
CREATE TABLE dbo_students
|
||||
|
||||
(
|
||||
|
||||
student_id INT(2) AUTO_INCREMENT NOT NULL,
|
||||
|
||||
student_name VARCHAR(50),
|
||||
|
||||
course_studied INT(2),
|
||||
|
||||
PRIMARY KEY (student_id)
|
||||
|
||||
);
|
||||
|
||||
|
||||
|
||||
CREATE TABLE dbo_courses
|
||||
|
||||
(
|
||||
|
||||
course_id INT(2) AUTO_INCREMENT NOT NULL,
|
||||
|
||||
course_name VARCHAR(30),
|
||||
|
||||
PRIMARY KEY (course_id)
|
||||
|
||||
);
|
||||
|
||||
|
||||
|
||||
ALTER TABLE dbo_students
|
||||
|
||||
ADD CONSTRAINT FK_course_studied
|
||||
|
||||
FOREIGN KEY (course_studied) REFERENCES dbo_courses(course_id);
|
||||
|
||||
|
||||
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(001,'Software Engineering');
|
||||
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(002,'Computer Science');
|
||||
|
||||
INSERT INTO dbo_courses(course_id,course_name)
|
||||
|
||||
VALUES(003,'Computing');
|
||||
|
||||
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(001,'student1',001);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(002,'student2',002);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(003,'student3',002);
|
||||
|
||||
INSERT INTO dbo_students(student_id,student_name,course_studied)
|
||||
|
||||
VALUES(004,'student4',003);
|
||||
|
||||
|
||||
|
||||
SELECT *
|
||||
|
||||
FROM dbo_courses
|
||||
```
|
||||
|
||||
### 学习更多
|
||||
|
||||
SQL 并不困难;我认为它比编程简单,并且该语言对于不同的数据库系统是通用的。 请注意,`dbo.<实体>` (译者注:文章中使用的是 `dbo_<实体>`) 不是必需的实体命名约定;我之所以使用,仅仅是因为它是 Microsoft SQL Server 中的标准。
|
||||
|
||||
如果你想了解更多,在网络上这方面的最佳指南是 [W3Schools.com][2] 中对所有数据库平台的 SQL 综合指南。
|
||||
|
||||
请随意使用我的数据库。另外,如果你有任何建议或疑问,请在评论中回复。(译注:请点击原文地址进行评论回应)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: [https://opensource.com/article/18/2/getting-started-sql](https://opensource.com/article/18/2/getting-started-sql)
|
||||
|
||||
作者:[Aaron Cocker][a]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/aaroncocker
|
||||
[1]:http://sqlfiddle.com
|
||||
[2]:https://www.w3schools.com/sql/default.asp
|