diff --git a/published/20090127 Anatomy of a Program in Memory.md b/published/20090127 Anatomy of a Program in Memory.md
new file mode 100644
index 0000000000..4a08caa4be
--- /dev/null
+++ b/published/20090127 Anatomy of a Program in Memory.md
@@ -0,0 +1,84 @@
+剖析内存中的程序之秘
+============================================================
+
+内存管理是操作系统的核心任务;它对程序员和系统管理员来说也是至关重要的。在接下来的几篇文章中,我将从实践出发着眼于内存管理,并深入到它的内部结构。虽然这些概念很通用,但示例大都来自于 32 位 x86 架构的 Linux 和 Windows。这第一篇文章描述了程序在内存中是如何分布的。
+
+在一个多任务操作系统中,每个进程都运行在它自己的内存“沙箱”中。这个沙箱是一个虚拟地址空间(virtual address space),在 32 位的模式中它总共有 4GB 的内存地址块。这些虚拟地址是通过内核页表(page table)映射到物理地址的,并且这些虚拟地址由操作系统内核来维护,进而供进程使用。每个进程都有它自己的一组页表,但是这里有点玄机。一旦虚拟地址被启用,这些虚拟地址将被应用到这台电脑上的 _所有软件_,_包括内核本身_。因此,一部分虚拟地址空间必须保留给内核使用:
+
+![Kernel/User Memory Split](http://static.duartes.org/img/blogPosts/kernelUserMemorySplit.png)
+
+但是,这并**不是**说内核就使用了很多的物理内存,恰恰相反,它只使用了很少一部分可用的地址空间映射到其所需要的物理内存。内核空间在内核页表中被标记为独占使用于 [特权代码][1](ring 2 或更低),因此,如果一个用户模式的程序尝试去访问它,将触发一个页面故障错误。在 Linux 中,内核空间是始终存在的,并且在所有进程中都映射相同的物理内存。内核代码和数据总是可寻址的,准备随时去处理中断或者系统调用。相比之下,用户模式中的地址空间,在每次进程切换时都会发生变化:
+
+![Process Switch Effects on Virtual Memory](http://static.duartes.org/img/blogPosts/virtualMemoryInProcessSwitch.png)
+
+蓝色的区域代表映射到物理地址的虚拟地址空间,白色的区域是尚未映射的部分。在上面的示例中,众所周知的内存“饕餮” Firefox 使用了大量的虚拟内存空间。在地址空间中不同的条带对应了不同的内存段,像堆(heap)、栈(stack)等等。请注意,这些段只是一系列内存地址的简化表示,它与 [Intel 类型的段][2] _并没有任何关系_。不过,下面是一个 Linux 进程的标准段布局:
+
+![Flexible Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxFlexibleAddressSpaceLayout.png)
+
+在计算机还处于快乐、安全的时代时,机器中几乎每个进程的这些段的起始虚拟地址都是**完全相同**的。这使得远程利用安全漏洞变得容易。漏洞利用经常需要去引用绝对内存位置:比如栈中的一个地址,一个库函数的地址,等等。远程攻击者可以闭着眼睛选择这些地址,因为地址空间都是相同的。当攻击者们这样做的时候,人们就会受到伤害。因此,地址空间随机化开始流行起来。Linux 会通过在其起始地址上增加偏移量来随机化[栈][3]、[内存映射段][4]、以及[堆][5]。不幸的是,32 位的地址空间是非常拥挤的,为地址空间随机化留下的空间不多,因此 [妨碍了地址空间随机化的效果][6]。
+
+在进程地址空间中最高的段是栈,在大多数编程语言中它存储局部变量和函数参数。调用一个方法或者函数会推送一个新的栈帧(stack frame)到这个栈,当函数返回时这个栈帧被删除。这种简单的设计之所以可行,是因为数据严格遵循 [后进先出(LIFO)][7] 的次序,这意味着跟踪栈内容时不需要复杂的数据结构 —— 一个指向栈顶的简单指针就可以做到。推入和弹出也因此非常快速、准确。另外,对栈区的持续重用往往会让活跃的栈内存保持在 [CPU 缓存][8] 中,从而加快访问速度。进程中的每个线程都有它自己的栈。
+
+如果向栈中推入的数据超出其容量,就会耗尽栈所映射的内存区域。这将触发一个页面故障,在 Linux 中它是通过 [`expand_stack()`][9] 来处理的,它会去调用 [`acct_stack_growth()`][10] 来检查栈的增长是否正常。如果栈的大小低于 `RLIMIT_STACK` 的值(一般是 8MB 大小),那么这是一个正常的栈增长和程序的合理使用,否则可能是发生了未知问题。这是一个栈大小按需调节的常见机制。但是,如果栈的大小达到了上述限制,就会发生一个栈溢出,并且,程序将会收到一个段故障(Segmentation Fault)错误。当映射的栈区为满足需要而扩展后,在栈缩小时,映射区域并不会收缩。就像美国联邦政府的预算一样,它只会扩张。
+
+动态栈增长是访问未映射内存区域(如上图中白色部分)而被允许的 [唯一例外情况][11]。除此之外,任何对未映射内存区域的访问都会触发一个页面故障,从而导致段故障。一些映射区域是只读的,因此,尝试去写入到这些区域也将触发一个段故障。
+
+在栈的下面,有内存映射段。在这里,内核将文件内容直接映射到内存。任何应用程序都可以通过 Linux 的 [`mmap()`][12] 系统调用([代码实现][13])或者 Windows 的 [`CreateFileMapping()`][14] / [`MapViewOfFile()`][15] 来请求一个映射。内存映射是实现文件 I/O 的方便高效的方式,因此,它经常被用于加载动态库。有时候,也会创建一种并不对应任何文件的匿名内存映射,转而用于存放程序数据。在 Linux 中,如果你通过 [`malloc()`][16] 去请求一个大的内存块,C 库将会创建这样一个匿名映射而不是使用堆内存。这里所谓的“大”指的是超过了 `MMAP_THRESHOLD` 设置的字节数,它的缺省值是 128 kB,可以通过 [`mallopt()`][17] 去调整这个设置值。
+
+接下来是“堆”,它是我们的地址空间中的下一个段。像栈一样,堆提供运行时内存分配;但又不同于栈的是,它分配的数据生存期可以长于分配它的函数。大多数编程语言都为程序提供了堆管理支持。因此,满足内存需要是编程语言运行时和内核共同来做的事情。在 C 中,堆分配的接口是 [`malloc()`][18] 一族,然而在支持垃圾回收的编程语言中,像 C#,这个接口使用 `new` 关键字。
+
+如果在堆中有足够的空间可以满足内存请求,它可以由编程语言运行时来处理内存分配请求,而无需内核参与。否则将通过 [`brk()`][19] 系统调用([代码实现][20])来扩大堆以满足内存请求所需的大小。堆管理是比较 [复杂的][21],在面对我们程序的混乱分配模式时,它通过复杂的算法,努力在速度和内存使用效率之间取得一种平衡。服务一个堆请求所需要的时间可能是非常可观的。实时系统有 [专用目的的分配器][22] 去处理这个问题。堆也会出现 _碎片化_,如下图所示:
+
+![Fragmented Heap](http://static.duartes.org/img/blogPosts/fragmentedHeap.png)
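+
+(LCTT 译注:下面是译者补充的一个小例子,原文中没有。它分别请求一小块和一大块内存;在常见的 glibc 缺省配置下,前者一般落在堆中(通过 `brk()` 扩展),后者则会走匿名 `mmap()`:)
+
+```
+#include <stdlib.h>
+
+int main(void) {
+    /* 64 kB,低于缺省的 MMAP_THRESHOLD(128 kB),通常从堆中分配 */
+    void *small = malloc(64 * 1024);
+    /* 1 MB,超过了阈值,glibc 通常会改用匿名 mmap */
+    void *big = malloc(1024 * 1024);
+    free(big);
+    free(small);
+    return 0;
+}
+```
+
+(用 `strace -e brk,mmap ./a.out` 运行它,就能看到这两类分配分别对应的系统调用。具体行为取决于 C 库的实现和 `mallopt()` 的设置,这里描述的是常见的缺省情况。)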
+最后,我们抵达了内存的低位段:BSS、数据,以及程序文本。在 C 中,静态(全局)变量的内容都保存在 BSS 和数据段中。它们之间的不同之处在于,BSS 保存 _未初始化的_ 静态变量的内容,它的值在源代码中并没有被程序员设置。BSS 内存区域是 _匿名_ 的:它没有映射到任何文件上。如果你在程序中写这样的语句 `static int cntActiveUsers`,`cntActiveUsers` 的内容就保存在 BSS 中。
+
+反过来,数据段保存了在源代码中 _已初始化_ 的静态变量的内容。这个内存区域是 _非匿名_ 的。它映射了程序二进制镜像中的一部分,即包含了在源代码中给定初始化值的静态变量内容的那一部分。因此,如果你在程序中写这样的语句 `static int cntWorkerBees = 10`,那么,`cntWorkerBees` 的内容就保存在数据段中,并且初始值为 `10`。尽管数据段映射了一个文件,但它是一个私有内存映射,这意味着,如果改变内存,这种变化并不会反映到底层的文件上。必须是这样的,否则,对全局变量的赋值将会改变你磁盘上的二进制镜像文件,这种做法就太不可思议了!
+
+图中展示的数据段的例子要更复杂些,因为它使用了一个指针。在这个例子中,指针 `gonzo` 的 _内容_(一个 4 字节的内存地址)保存在数据段中。然而,它所指向的那个实际的字符串并不在数据段里。这个字符串保存在文本段中,文本段是只读的,它用于保存你的代码中的类似于字符串常量这样的内容。文本段也会把你的二进制文件映射到内存中,但是,如果你的程序写入到这个区域,将会触发一个段故障错误。这种机制有助于避免指针错误,虽然它比不上从一开始就避开 C 语言那么有效。这里有一个展示这些段和示例变量的图:
+
+![ELF Binary Image Mapped Into Memory](http://static.duartes.org/img/blogPosts/mappingBinaryImage.png)
+
+你可以通过读取 `/proc/pid_of_process/maps` 文件来检查 Linux 进程中的内存区域。请记住,一个段可以包含很多的区域。例如,每个内存映射的文件一般都在 mmap 段中有它自己的区域,而动态库有类似于 BSS 和数据段一样的额外的区域。下一篇文章中我们将详细说明“区域”(area)的真正含义是什么。此外,有时候人们所说的“数据段”(data segment)是指“数据(data)+ BSS + 堆”。
+
+你可以使用 [nm][23] 和 [objdump][24] 命令去检查二进制镜像,去显示它们的符号、地址、段等等。最后,在 Linux 中,上面描述的虚拟地址布局是一种“弹性的”布局,也是这几年来的缺省情况。它假设 `RLIMIT_STACK` 有一个值。如果没有值的话,Linux 将恢复到如下所示的“经典”布局:
+
+![Classic Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxClassicAddressSpaceLayout.png)
+
+这就是虚拟地址空间布局。接下来的文章将讨论内核如何跟踪这些内存区域、内存映射的原理、文件如何读取和写入、以及内存使用数据的意义。
+
+--------------------------------------------------------------------------------
+
+via: http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/
+
+作者:[Gustavo Duarte][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
+[2]:http://duartes.org/gustavo/blog/post/memory-translation-and-segmentation
+[3]:http://lxr.linux.no/linux+v2.6.28.1/fs/binfmt_elf.c#L542
+[4]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/mm/mmap.c#L84
+[5]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/kernel/process_32.c#L729
+[6]:http://www.stanford.edu/~blp/papers/asrandom.pdf
+[7]:http://en.wikipedia.org/wiki/Lifo
+[8]:http://duartes.org/gustavo/blog/post/intel-cpu-caches
+[9]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1716
+[10]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1544
+[11]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/mm/fault.c#L692
+[12]:http://www.kernel.org/doc/man-pages/online/pages/man2/mmap.2.html
+[13]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/kernel/sys_i386_32.c#L27
+[14]:http://msdn.microsoft.com/en-us/library/aa366537(VS.85).aspx
+[15]:http://msdn.microsoft.com/en-us/library/aa366761(VS.85).aspx
+[16]:http://www.kernel.org/doc/man-pages/online/pages/man3/malloc.3.html
+[17]:http://www.kernel.org/doc/man-pages/online/pages/man3/undocumented.3.html
+[18]:http://www.kernel.org/doc/man-pages/online/pages/man3/malloc.3.html
+[19]:http://www.kernel.org/doc/man-pages/online/pages/man2/brk.2.html
+[20]:http://lxr.linux.no/linux+v2.6.28.1/mm/mmap.c#L248
+[21]:http://g.oswego.edu/dl/html/malloc.html
+[22]:http://rtportal.upv.es/rtmalloc/
+[23]:http://manpages.ubuntu.com/manpages/intrepid/en/man1/nm.1.html
+[24]:http://manpages.ubuntu.com/manpages/intrepid/en/man1/objdump.1.html
diff --git a/translated/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md b/published/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md
similarity index 71%
rename from translated/tech/20120611 30 Handy Bash Shell Aliases For Linux -
Unix - Mac OS X.md rename to published/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md index d637c92858..236e0defa6 100644 --- a/translated/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md +++ b/published/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md @@ -1,35 +1,43 @@ -Linux / Unix / Mac OS X 中的 30 个方便的 Bash shell 别名 +30 个方便的 Bash shell 别名 ====== -bash 别名不是把别的,只不过是指向命令的快捷方式而已。`alias` 命令允许用户只输入一个单词就运行任意一个命令或一组命令(包括命令选项和文件名)。执行 `alias` 命令会显示一个所有已定义别名的列表。你可以在 [~/.bashrc][1] 文件中自定义别名。使用别名可以在命令行中减少输入的时间,使工作更流畅,同时增加生产率。 + +bash 别名alias只不过是指向命令的快捷方式而已。`alias` 命令允许用户只输入一个单词就运行任意一个命令或一组命令(包括命令选项和文件名)。执行 `alias` 命令会显示一个所有已定义别名的列表。你可以在 [~/.bashrc][1] 文件中自定义别名。使用别名可以在命令行中减少输入的时间,使工作更流畅,同时增加生产率。 本文通过 30 个 bash shell 别名的实际案例演示了如何创建和使用别名。 ![30 Useful Bash Shell Aliase For Linux/Unix Users][2] -## bash alias 的那些事 +### bash alias 的那些事 bash shell 中的 alias 命令的语法是这样的: -### 如何列出 bash 别名 +``` +alias [alias-name[=string]...] +``` + +#### 如何列出 bash 别名 + +输入下面的 [alias 命令][3]: -输入下面的 [alias 命令 ][3]: ``` alias ``` + 结果为: + ``` alias ..='cd ..' alias amazonbackup='s3backup' alias apt-get='sudo apt-get' ... - ``` -默认 alias 命令会列出当前用户定义好的别名。 +`alias` 命令默认会列出当前用户定义好的别名。 -### 如何定义或者说创建一个 bash shell 别名 +#### 如何定义或者创建一个 bash shell 别名 + +使用下面语法 [创建别名][4]: -使用下面语法 [创建别名 ][4]: ``` alias name =value alias name = 'command' @@ -38,19 +46,22 @@ alias name = '/path/to/script' alias name = '/path/to/script.pl arg1' ``` -举个例子,输入下面命令并回车就会为常用的 `clear`( 清除屏幕)命令创建一个别名 **c**: +举个例子,输入下面命令并回车就会为常用的 `clear`(清除屏幕)命令创建一个别名 `c`: + ``` alias c = 'clear' ``` 然后输入字母 `c` 而不是 `clear` 后回车就会清除屏幕了: + ``` c ``` -### 如何临时性地禁用 bash 别名 +#### 如何临时性地禁用 bash 别名 + +下面语法可以[临时性地禁用别名][5]: -下面语法可以[临时性地禁用别名 ][5]: ``` ## path/to/full/command /usr/bin/clear @@ -60,37 +71,43 @@ c command ls ``` -### 如何删除 bash 别名 +#### 如何删除 bash 别名 + +使用 [unalias 命令来删除别名][6]。其语法为: -使用 [unalias 命令来删除别名 ][6]。其语法为: ``` unalias aliasname unalias foo ``` 例如,删除我们之前创建的别名 `c`: + ``` unalias c ``` -你还需要用文本编辑器删掉 [~/.bashrc 文件 ][1] 中的别名定义(参见下一部分内容)。 +你还需要用文本编辑器删掉 [~/.bashrc 文件][1] 中的别名定义(参见下一部分内容)。 -### 如何让 bash shell 别名永久生效 +#### 如何让 bash shell 别名永久生效 别名 `c` 在当前登录会话中依然有效。但当你登出或重启系统后,别名 `c` 就没有了。为了防止出现这个问题,将别名定义写入 [~/.bashrc file][1] 中,输入: + ``` vi ~/.bashrc ``` + 输入下行内容让别名 `c` 对当前用户永久有效: + ``` alias c = 'clear' ``` -保存并关闭文件就行了。系统级的别名(也就是对所有用户都生效的别名) 可以放在 `/etc/bashrc` 文件中。请注意,alias 命令内建于各种 shell 中,包括 ksh,tcsh/csh,ash,bash 以及其他 shell。 +保存并关闭文件就行了。系统级的别名(也就是对所有用户都生效的别名)可以放在 `/etc/bashrc` 文件中。请注意,`alias` 命令内建于各种 shell 中,包括 ksh,tcsh/csh,ash,bash 以及其他 shell。 -### 关于特权权限判断 +#### 关于特权权限判断 可以将下面代码加入 `~/.bashrc`: + ``` # if user is not root, pass all commands via sudo # if [ $UID -ne 0 ]; then @@ -99,9 +116,10 @@ if [ $UID -ne 0 ]; then fi ``` -### 定义与操作系统类型相关的别名 +#### 定义与操作系统类型相关的别名 + +可以将下面代码加入 `~/.bashrc` [使用 case 语句][7]: -可以将下面代码加入 `~/.bashrc` [使用 case 语句 ][7]: ``` ### Get os name via uname ### _myos="$(uname)" @@ -115,13 +133,14 @@ case $_myos in esac ``` -## 30 个 bash shell 别名的案例 +### 30 个 bash shell 别名的案例 你可以定义各种类型的别名来节省时间并提高生产率。 -### #1:控制 ls 命令的输出 +#### #1:控制 ls 命令的输出 + +[ls 命令列出目录中的内容][8] 而你可以对输出进行着色: -[ls 命令列出目录中的内容 ][8] 而你可以对输出进行着色: ``` ## Colorize the ls output ## alias ls = 'ls --color=auto' @@ -133,7 +152,8 @@ alias ll = 'ls -la' alias l.= 'ls -d . .. .git .gitignore .gitmodules .travis.yml --color=auto' ``` -### #2:控制 cd 命令的行为 +#### #2:控制 cd 命令的行为 + ``` ## get rid of command not found ## alias cd..= 'cd ..' @@ -147,9 +167,10 @@ alias .4= 'cd ../../../../' alias .5= 'cd ../../../../..' 
``` -### #3:控制 grep 命令的输出 +#### #3:控制 grep 命令的输出 + +[grep 命令是一个用于在纯文本文件中搜索匹配正则表达式的行的命令行工具][9]: -[grep 命令是一个用于在纯文本文件中搜索匹配正则表达式的行的命令行工具 ][9]: ``` ## Colorize the grep command output for ease of use (good for log files)## alias grep = 'grep --color=auto' @@ -157,44 +178,51 @@ alias egrep = 'egrep --color=auto' alias fgrep = 'fgrep --color=auto' ``` -### #4:让计算器默认开启 math 库 +#### #4:让计算器默认开启 math 库 + ``` alias bc = 'bc -l' ``` -### #4:生成 sha1 数字签名 +#### #4:生成 sha1 数字签名 + ``` alias sha1 = 'openssl sha1' ``` -### #5:自动创建父目录 +#### #5:自动创建父目录 + +[mkdir 命令][10] 用于创建目录: -[mkdir 命令 ][10] 用于创建目录: ``` alias mkdir = 'mkdir -pv' ``` -### #6:为 diff 输出着色 +#### #6:为 diff 输出着色 + +你可以[使用 diff 来一行行第比较文件][11] 而一个名为 `colordiff` 的工具可以为 diff 输出着色: -你可以[使用 diff 来一行行第比较文件 ][11] 而一个名为 colordiff 的工具可以为 diff 输出着色: ``` # install colordiff package :) alias diff = 'colordiff' ``` -### #7:让 mount 命令的输出更漂亮,更方便人类阅读 +#### #7:让 mount 命令的输出更漂亮,更方便人类阅读 + ``` alias mount = 'mount |column -t' ``` -### #8:简化命令以节省时间 +#### #8:简化命令以节省时间 + ``` # handy short cuts # alias h = 'history' alias j = 'jobs -l' ``` -### #9:创建一系列新命令 +#### #9:创建一系列新命令 + ``` alias path = 'echo -e ${PATH//:/\\n}' alias now = 'date +"%T"' @@ -202,7 +230,8 @@ alias nowtime =now alias nowdate = 'date +"%d-%m-%Y"' ``` -### #10:设置 vim 为默认编辑器 +#### #10:设置 vim 为默认编辑器 + ``` alias vi = vim alias svi = 'sudo vi' @@ -210,7 +239,8 @@ alias vis = 'vim "+set si"' alias edit = 'vim' ``` -### #11:控制网络工具 ping 的输出 +#### #11:控制网络工具 ping 的输出 + ``` # Stop after sending count ECHO_REQUEST packets # alias ping = 'ping -c 5' @@ -219,16 +249,18 @@ alias ping = 'ping -c 5' alias fastping = 'ping -c 100 -s.2' ``` -### #12:显示打开的端口 +#### #12:显示打开的端口 + +使用 [netstat 命令][12] 可以快速列出服务区中所有的 TCP/UDP 端口: -使用 [netstat 命令 ][12] 可以快速列出服务区中所有的 TCP/UDP 端口: ``` alias ports = 'netstat -tulanp' ``` -### #13:唤醒休眠额服务器 +#### #13:唤醒休眠的服务器 + +[Wake-on-LAN (WOL) 是一个以太网标准][13],可以通过网络消息来开启服务器。你可以使用下面别名来[快速激活 nas 设备][14] 以及服务器: -[Wake-on-LAN (WOL) 是一个以太网标准 ][13],可以通过网络消息来开启服务器。你可以使用下面别名来[快速激活 nas 设备 ][14] 以及服务器: ``` ## replace mac with your actual server mac address # alias wakeupnas01 = '/usr/bin/wakeonlan 00:11:32:11:15:FC' @@ -236,9 +268,10 @@ alias wakeupnas02 = '/usr/bin/wakeonlan 00:11:32:11:15:FD' alias wakeupnas03 = '/usr/bin/wakeonlan 00:11:32:11:15:FE' ``` -### #14:控制防火墙 (iptables) 的输出 +#### #14:控制防火墙 (iptables) 的输出 + +[Netfilter 是一款 Linux 操作系统上的主机防火墙][15]。它是 Linux 发行版中的一部分,且默认情况下是激活状态。[这里列出了大多数 Liux 新手防护入侵者最常用的 iptables 方法][16]。 -[Netfilter 是一款 Linux 操作系统上的主机防火墙 ][15]。它是 Linux 发行版中的一部分,且默认情况下是激活状态。[这里列出了大多数 Liux 新手防护入侵者最常用的 iptables 方法 ][16]。 ``` ## shortcut for iptables and pass it via sudo# alias ipt = 'sudo /sbin/iptables' @@ -251,7 +284,8 @@ alias iptlistfw = 'sudo /sbin/iptables -L FORWARD -n -v --line-numbers' alias firewall =iptlist ``` -### #15:使用 curl 调试 web 服务器 /cdn 上的问题 +#### #15:使用 curl 调试 web 服务器 / CDN 上的问题 + ``` # get web server headers # alias header = 'curl -I' @@ -260,7 +294,8 @@ alias header = 'curl -I' alias headerc = 'curl -I --compress' ``` -### #16:增加安全性 +#### #16:增加安全性 + ``` # do not delete / or prompt if deleting more than 3 files at a time # alias rm = 'rm -I --preserve-root' @@ -276,9 +311,10 @@ alias chmod = 'chmod --preserve-root' alias chgrp = 'chgrp --preserve-root' ``` -### #17:更新 Debian Linux 服务器 +#### #17:更新 Debian Linux 服务器 + +[apt-get 命令][17] 用于通过因特网安装软件包 (ftp 或 http)。你也可以一次性升级所有软件包: -[apt-get 命令 ][17] 用于通过因特网安装软件包 (ftp 或 http)。你也可以一次性升级所有软件包: ``` # distro specific - Debian / Ubuntu and friends # # install with apt-get @@ -289,25 +325,27 @@ alias updatey = "sudo apt-get 
--yes" alias update = 'sudo apt-get update && sudo apt-get upgrade' ``` -### #18:更新 RHEL / CentOS / Fedora Linux 服务器 +#### #18:更新 RHEL / CentOS / Fedora Linux 服务器 + +[yum 命令][18] 是 RHEL / CentOS / Fedora Linux 以及其他基于这些发行版的 Linux 上的软件包管理工具: -[yum 命令 ][18] 是 RHEL / CentOS / Fedora Linux 以及其他基于这些发行版的 Linux 上的软件包管理工具: ``` ## distrp specifc RHEL/CentOS ## alias update = 'yum update' alias updatey = 'yum -y update' ``` -### #19:优化 sudo 和 su 命令 +#### #19:优化 sudo 和 su 命令 + ``` # become root # alias root = 'sudo -i' alias su = 'sudo -i' ``` -### #20:使用 sudo 执行 halt/reboot 命令 +#### #20:使用 sudo 执行 halt/reboot 命令 -[shutdown 命令 ][19] 会让 Linux / Unix 系统关机: +[shutdown 命令][19] 会让 Linux / Unix 系统关机: ``` # reboot / halt / poweroff alias reboot = 'sudo /sbin/reboot' @@ -316,7 +354,8 @@ alias halt = 'sudo /sbin/halt' alias shutdown = 'sudo /sbin/shutdown' ``` -### #21:控制 web 服务器 +#### #21:控制 web 服务器 + ``` # also pass it via sudo so whoever is admin can reload it without calling you # alias nginxreload = 'sudo /usr/local/nginx/sbin/nginx -s reload' @@ -327,7 +366,8 @@ alias httpdreload = 'sudo /usr/sbin/apachectl -k graceful' alias httpdtest = 'sudo /usr/sbin/apachectl -t && /usr/sbin/apachectl -t -D DUMP_VHOSTS' ``` -### #22:与备份相关的别名 +#### #22:与备份相关的别名 + ``` # if cron fails or if you want backup on demand just run these commands # # again pass it via sudo so whoever is in admin group can start the job # @@ -342,7 +382,8 @@ alias rsnapshotmonthly = 'sudo /home/scripts/admin/scripts/backup/wrapper.rsnaps alias amazonbackup =s3backup ``` -### #23:桌面应用相关的别名 - 按需播放的 avi/mp3 文件 +#### #23:桌面应用相关的别名 - 按需播放的 avi/mp3 文件 + ``` ## play video files in a current directory ## # cd ~/Download/movie-name @@ -364,10 +405,10 @@ alias nplaymp3 = 'for i in /nas/multimedia/mp3/*.mp3; do mplayer "$i"; done' alias music = 'mplayer --shuffle *' ``` +#### #24:设置系统管理相关命令的默认网卡 -### #24:设置系统管理相关命令的默认网卡 +[vnstat 一款基于终端的网络流量检测器][20]。[dnstop 是一款分析 DNS 流量的终端工具][21]。[tcptrack 和 iftop 命令显示][22] TCP/UDP 连接方面的信息,它监控网卡并显示其消耗的带宽。 -[vnstat 一款基于终端的网络流量检测器 ][20]。[dnstop 是一款分析 DNS 流量的终端工具 ][21]。[tcptrack 和 iftop 命令显示 ][22] TCP/UDP 连接方面的信息,它监控网卡并显示其消耗的带宽。 ``` ## All of our servers eth1 is connected to the Internets via vlan / router etc ## alias dnstop = 'dnstop -l 5 eth1' @@ -381,7 +422,8 @@ alias ethtool = 'ethtool eth1' alias iwconfig = 'iwconfig wlan0' ``` -### #25:快速获取系统内存,cpu 使用,和 gpu 内存相关信息 +#### #25:快速获取系统内存,cpu 使用,和 gpu 内存相关信息 + ``` ## pass options to free ## alias meminfo = 'free -m -l -t' @@ -404,9 +446,10 @@ alias cpuinfo = 'lscpu' alias gpumeminfo = 'grep -i --color memory /var/log/Xorg.0.log' ``` -### #26:控制家用路由器 +#### #26:控制家用路由器 + +`curl` 命令可以用来 [重启 Linksys 路由器][23]。 -curl 命令可以用来 [重启 Linksys 路由器 ][23]。 ``` # Reboot my home Linksys WAG160N / WAG54 / WAG320 / WAG120N Router / Gateway from *nix. 
alias rebootlinksys = "curl -u 'admin:my-super-password' 'http://192.168.1.2/setup.cgi?todo=reboot'"
@@ -415,15 +458,17 @@ alias rebootlinksys = "curl -u 'admin:my-super-password' 'http://192.168.1.2/set
 alias reboottomato = "ssh admin@192.168.1.1 /sbin/reboot"
 ```

-### #27 wget 默认断点续传
-[GNU Wget 是一款用来从 web 下载文件的自由软件 ][25]。它支持 HTTP,HTTPS,以及 FTP 协议,而且它页支持断点续传:
+#### #27 wget 默认断点续传
+
+[GNU wget 是一款用来从 web 下载文件的自由软件][25]。它支持 HTTP,HTTPS,以及 FTP 协议,而且它也支持断点续传:

 ```
 ## this one saved by butt so many times ##
 alias wget = 'wget -c'
 ```

-### #28 使用不同浏览器来测试网站
+#### #28 使用不同浏览器来测试网站
+
 ```
 ## this one saved by butt so many times ##
 alias ff4 = '/opt/firefox4/firefox'
+alias ff13 = '/opt/firefox13/firefox'
 alias chrome = '/opt/google/chrome/chrome'
 alias opera = '/usr/bin/opera'
-alias ff13 = '/opt/firefox13/firefox'
 alias ff =ff13
 alias browser =chrome
 ```

-### #29:关于 ssh 别名的注意事项
+#### #29:关于 ssh 别名的注意事项

 不要创建 ssh 别名,代之以 `~/.ssh/config` 这个 OpenSSH SSH 客户端配置文件。它的选项更加丰富。下面是一个例子:
+
 ```
 Host server10
  Hostname 1.2.3.4
 IdentityFile ~/.ssh/server10.key
 ForwardX11Trusted yes
 TCPKeepAlive yes
 ```

-然后你就可以使用下面语句连接 peer1 了:
+然后你就可以使用下面语句连接 server10 了:
+
 ```
 $ ssh server10
 ```

-### #30:现在该分享你的别名了
+#### #30:现在该分享你的别名了

 ```
 ## set some other defaults ##
 alias df = 'df -H'
 alias du = 'du -ch'
 # top is atop, just like vi is vim #
 alias top = 'atop'
 ## nfsrestart - must be root ##
 ## refresh nfs mount / cache etc for Apache ##
 alias nfsrestart = 'sync && sleep 2 && /etc/init.d/httpd stop && umount netapp2:/exports/http && sleep 2 && mount -o rw,sync,rsize=32768,wsize=32768,intr,hard,proto=tcp,fsc natapp2:/exports /http/var/www/html && /etc/init.d/httpd start'
 ## Memcached server status ##
 alias mcdstats = '/usr/bin/memcached-tool 10.10.27.11:11211 stats'
 alias mcdshow = '/usr/bin/memcached-tool 10.10.27.11:11211 display'
 ## quickly flush out memcached server ##
 alias flushmcd = 'echo "flush_all" | nc 10.10.27.11 11211'
 ## Remove assets quickly from Akamai / Amazon cdn ##
 alias cdndel = '/home/scripts/admin/cdn/purge_cdn_cache --profile akamai'
 alias amzcdndel = '/home/scripts/admin/cdn/purge_cdn_cache --profile amazon'
@@ -486,27 +533,26 @@ alias cdnmdel = '/home/scripts/admin/cdn/purge_cdn_cache --profile akamai --stdi
 alias amzcdnmdel = '/home/scripts/admin/cdn/purge_cdn_cache --profile amazon --stdin'
 ```

-## 结论
+### 总结

 本文总结了 *nix bash 别名的多种用法:

- 1。为命令设置默认的参数(例如通过 `alias ethtool='ethtool eth0'` 设置 ethtool 命令的默认参数为 eth0)。
- 2。修正错误的拼写(通过 `alias cd。.='cd .。'`让 `cd。.` 变成 `cd .。`)。
- 3。缩减输入。
- 4。设置系统中多版本命令的默认路径(例如 GNU/grep 位于 /usr/local/bin/grep 中而 Unix grep 位于 /bin/grep 中。若想默认使用 GNU grep 则设置别名 `grep='/usr/local/bin/grep'` )。
- 5。通过默认开启命令(例如 rm,mv 等其他命令)的交互参数来增加 Unix 的安全性。
- 6。为老旧的操作系统(比如 MS-DOS 或者其他类似 Unix 的操作系统)创建命令以增加兼容性(比如 `alias del=rm` )。
+1. 为命令设置默认的参数(例如通过 `alias ethtool='ethtool eth0'` 设置 ethtool 命令的默认参数为 eth0)。
+2. 修正错误的拼写(通过 `alias cd..='cd ..'`让 `cd..` 变成 `cd ..`)。
+3. 缩减输入。
+4. 设置系统中多版本命令的默认路径(例如 GNU/grep 位于 `/usr/local/bin/grep` 中而 Unix grep 位于 `/bin/grep` 中。若想默认使用 GNU grep 则设置别名 `grep='/usr/local/bin/grep'` )。
+5. 通过默认开启命令(例如 `rm`,`mv` 等其他命令)的交互参数来增加 Unix 的安全性。
+6. 为老旧的操作系统(比如 MS-DOS 或者其他类似 Unix 的操作系统)创建命令以增加兼容性(比如 `alias del=rm`)。

 我已经分享了多年来为了减少重复输入命令而使用的别名。若你知道或使用过哪些能够减少输入的 bash/ksh/csh 别名,请在留言框中分享。

-
--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html

作者:[nixCraft][a]
译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20140210 Three steps to learning GDB.md b/published/20140210 Three steps to learning GDB.md
new file mode 100644
index 0000000000..01232ea184
--- /dev/null
+++ b/published/20140210 Three steps to learning GDB.md
@@ -0,0 +1,113 @@
+三步上手 GDB
+===============
+
+调试 C 程序,曾让我很困扰。然而当我之前在写我的[操作系统][2]时,我有很多的 Bug 需要调试。我很幸运地使用上了 qemu 模拟器,它允许我将调试器附加到我的操作系统上。这个调试器就是 `gdb`。
+
+我想先解释一下怎么用 `gdb` 做一些小事情,因为我发现初学它的时候真的很容易犯晕。我们接下来会在一个小程序中设置断点、查看内存。
+
+### 1、 设断点
+
+如果你曾经使用过调试器,那你可能已经会设置断点了。
+
+下面是一个我们要调试的程序(虽然没有任何 Bug):
+
+```
+#include <stdio.h>
+void do_thing() {
+    printf("Hi!\n");
+}
+int main() {
+    do_thing();
+}
+```
+
+另存为 `hello.c`。
我们可以使用 `gdb` 调试它,像这样:
+
+```
+bork@kiwi ~> gcc -g hello.c -o hello
+bork@kiwi ~> gdb ./hello
+```
+
+以上命令会带着调试信息编译 `hello.c`(这样 `gdb` 才能更好地工作),并且它会给我们醒目的提示符,就像这样:
+
+```
+(gdb)
+```
+
+我们可以使用 `break` 命令设置断点,然后使用 `run` 开始调试程序。
+
+```
+(gdb) break do_thing
+Breakpoint 1 at 0x4004f8
+(gdb) run
+Starting program: /home/bork/hello
+
+Breakpoint 1, 0x00000000004004f8 in do_thing ()
+```
+
+程序暂停在了 `do_thing` 开始的地方。
+
+我们可以通过 `where` 查看我们所在的调用栈。
+
+```
+(gdb) where
+#0 do_thing () at hello.c:3
+#1 0x08050cdb in main () at hello.c:6
+(gdb)
+```
+
+### 2、 阅读汇编代码
+
+使用 `disassemble` 命令,我们可以看到这个函数的汇编代码。棒极了,这是 x86 汇编代码。虽然我不是很懂它,但是 `callq` 这一行是 `printf` 函数调用。
+
+```
+(gdb) disassemble do_thing
+Dump of assembler code for function do_thing:
+ 0x00000000004004f4 <+0>: push %rbp
+ 0x00000000004004f5 <+1>: mov %rsp,%rbp
+=> 0x00000000004004f8 <+4>: mov $0x40060c,%edi
+ 0x00000000004004fd <+9>: callq 0x4003f0
+ 0x0000000000400502 <+14>: pop %rbp
+ 0x0000000000400503 <+15>: retq
+```
+
+你也可以使用 `disassemble` 的缩写 `disas`。
+
+### 3、 查看内存
+
+当调试我的内核时,我使用 `gdb` 的主要原因就是确保内存布局如我所想的那样。检查内存的命令是 `examine`,或者使用缩写 `x`。我们将使用 `x`。
+
+通过阅读上面的汇编代码,似乎 `0x40060c` 可能是我们所要打印的字符串的地址。我们来试一下。
+
+```
+(gdb) x/s 0x40060c
+0x40060c: "Hi!"
+```
+
+的确是这样。`x/s` 中 `/s` 部分,意思是“把它作为字符串展示”。我也可以“展示 10 个字符”,像这样:
+
+```
+(gdb) x/10c 0x40060c
+0x40060c: 72 'H' 105 'i' 33 '!' 0 '\000' 1 '\001' 27 '\033' 3 '\003' 59 ';'
+0x400614: 52 '4' 0 '\000'
+```
+
+你可以看到前四个字符是 `H`、`i`、`!` 和 `\0`,并且它们之后的是一些不相关的东西。
+
+我知道 `gdb` 还有很多其他的功能,但是我仍然不是很了解它,不过光是 `x` 和 `break` 就让我受益良多。你还可以阅读 [documentation for examining memory][4]。
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/
+
+作者:[Julia Evans][a]
+译者:[Torival](https://github.com/Torival)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca
+[1]:https://jvns.ca/categories/spytools
+[2]:https://jvns.ca/blog/categories/kernel
+[3]:https://twitter.com/mgedmin
+[4]:https://ftp.gnu.org/old-gnu/Manuals/gdb-5.1.1/html_chapter/gdb_9.html#SEC56
diff --git a/published/20160625 Trying out LXD containers on our Ubuntu.md b/published/20160625 Trying out LXD containers on our Ubuntu.md
new file mode 100644
index 0000000000..78257d02d0
--- /dev/null
+++ b/published/20160625 Trying out LXD containers on our Ubuntu.md
@@ -0,0 +1,239 @@
+在 Ubuntu 上体验 LXD 容器
+======
+
+本文的主角是容器,一种类似虚拟机但更轻量级的构造。你可以轻易地在你的 Ubuntu 桌面系统中创建一堆容器!
+
+虚拟机会虚拟出整个电脑让你来安装客户机操作系统。**相比之下**,容器**复用**了主机的 Linux 内核,只是简单地 **包容** 了我们选择的根文件系统(也就是运行时环境)。Linux 内核有很多功能可以将运行的 Linux 容器与我们的主机分割开(也就是我们的 Ubuntu 桌面)。
+
+Linux 本身需要一些手工操作来直接管理它们。好在,有 LXD(读音为 Lex-deeh),这是一款为我们管理 Linux 容器的服务。
+
+我们将会看到如何:
+
+1. 在我们的 Ubuntu 桌面上配置容器,
+2. 创建容器,
+3. 安装一台 web 服务器,
+4. 测试一下这台 web 服务器,以及
+5.
清理所有的东西。 + +### 设置 Ubuntu 容器 + +如果你安装的是 Ubuntu 16.04,那么你什么都不用做。只要安装下面所列出的一些额外的包就行了。若你安装的是 Ubuntu 14.04.x 或 Ubuntu 15.10,那么按照 [LXD 2.0 系列(二):安装与配置][1] 来进行一些操作,然后再回来。 + +确保已经更新了包列表: + +``` +sudo apt update +sudo apt upgrade +``` + +安装 `lxd` 包: + +``` +sudo apt install lxd +``` + +若你安装的是 Ubuntu 16.04,那么还可以让你的容器文件以 ZFS 文件系统的格式进行存储。Ubuntu 16.04 的 Linux kernel 包含了支持 ZFS 必要的内核模块。若要让 LXD 使用 ZFS 进行存储,我们只需要安装 ZFS 工具包。没有 ZFS,容器会在主机文件系统中以单独的文件形式进行存储。通过 ZFS,我们就有了写入时拷贝等功能,可以让任务完成更快一些。 + +安装 `zfsutils-linux` 包(若你安装的是 Ubuntu 16.04.x): + +``` +sudo apt install zfsutils-linux +``` + +安装好 LXD 后,包安装脚本应该会将你加入 `lxd` 组。该组成员可以使你无需通过 `sudo` 就能直接使用 LXD 管理容器。根据 Linux 的习惯,**你需要先登出桌面会话然后再登录** 才能应用 `lxd` 的组成员关系。(若你是高手,也可以通过在当前 shell 中执行 `newgrp lxd` 命令,就不用重登录了)。 + +在开始使用前,LXD 需要初始化存储和网络参数。 + +运行下面命令: + +``` +$ sudo lxd init +Name of the storage backend to use (dir or zfs): zfs +Create a new ZFS pool (yes/no)? yes +Name of the new ZFS pool: lxd-pool +Would you like to use an existing block device (yes/no)? no +Size in GB of the new loop device (1GB minimum): 30 +Would you like LXD to be available over the network (yes/no)? no +Do you want to configure the LXD bridge (yes/no)? yes +> You will be asked about the network bridge configuration. Accept all defaults and continue. +Warning: Stopping lxd.service, but it can still be activated by: + lxd.socket + LXD has been successfully configured. +$ _ +``` + +我们在一个(单独)的文件而不是块设备(即分区)中构建了一个文件系统来作为 ZFS 池,因此我们无需进行额外的分区操作。在本例中我指定了 30GB 大小,这个空间取之于根(`/`) 文件系统中。这个文件就是 `/var/lib/lxd/zfs.img`。 + +行了!最初的配置完成了。若有问题,或者想了解其他信息,请阅读 https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/ 。 + +### 创建第一个容器 + +所有 LXD 的管理操作都可以通过 `lxc` 命令来进行。我们通过给 `lxc` 不同参数来管理容器。 + +``` +lxc list +``` + +可以列出所有已经安装的容器。很明显,这个列表现在是空的,但这表示我们的安装是没问题的。 + +``` +lxc image list +``` + +列出可以用来启动容器的(已经缓存的)镜像列表。很明显这个列表也是空的,但这也说明我们的安装是没问题的。 + +``` +lxc image list ubuntu: +``` + +列出可以下载并启动容器的远程镜像。而且指定了显示 Ubuntu 镜像。 + +``` +lxc image list images: +``` + +列出可以用来启动容器的(已经缓存的)各种发行版的镜像列表。这会列出各种发行版的镜像比如 Alpine、Debian、Gentoo、Opensuse 以及 Fedora。 + +让我们启动一个 Ubuntu 16.04 容器,并称之为 `c1`: + +``` +$ lxc launch ubuntu:x c1 +Creating c1 +Starting c1 +$ +``` + +我们使用 `launch` 动作,然后选择镜像 `ubuntu:x` (`x` 表示 Xenial/16.04 镜像),最后我们使用名字 `c1` 作为容器的名称。 + +让我们来看看安装好的首个容器, + +``` +$ lxc list + ++---------|---------|----------------------|------|------------|-----------+ +| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | ++---------|---------|----------------------|------|------------|-----------+ +| c1 | RUNNING | 10.173.82.158 (eth0) | | PERSISTENT | 0 | ++---------|---------|----------------------|------|------------|-----------+ +``` + +我们的首个容器 c1 已经运行起来了,它还有自己的 IP 地址(可以本地访问)。我们可以开始用它了! + +### 安装 web 服务器 + +我们可以在容器中运行命令。运行命令的动作为 `exec`。 + +``` +$ lxc exec c1 -- uptime + 11:47:25 up 2 min,0 users,load average:0.07,0.05,0.04 +$ _ +``` + +在 `exec` 后面,我们指定容器、最后输入要在容器中运行的命令。该容器的运行时间只有 2 分钟,这是个新出炉的容器:-)。 + +命令行中的 `--` 跟我们 shell 的参数处理过程有关。若我们的命令没有任何参数,则完全可以省略 `-`。 + +``` +$ lxc exec c1 -- df -h +``` + +这是一个必须要 `-` 的例子,由于我们的命令使用了参数 `-h`。若省略了 `-`,会报错。 + +然后我们运行容器中的 shell 来更新包列表。 + +``` +$ lxc exec c1 bash +root@c1:~# apt update +Ign http://archive.ubuntu.com trusty InRelease +Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB] +Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB] +... +Hit http://archive.ubuntu.com trusty/universe Translation-en +Fetched 11.2 MB in 9s (1228 kB/s) +Reading package lists... Done +root@c1:~# apt upgrade +Reading package lists... Done +Building dependency tree +... 
+Processing triggers for man-db (2.6.7.1-1ubuntu1) ... +Setting up dpkg (1.17.5ubuntu5.7) ... +root@c1:~# _ +``` + +我们使用 nginx 来做 web 服务器。nginx 在某些方面要比 Apache web 服务器更酷一些。 + +``` +root@c1:~# apt install nginx +Reading package lists... Done +Building dependency tree +... +Setting up nginx-core (1.4.6-1ubuntu3.5) ... +Setting up nginx (1.4.6-1ubuntu3.5) ... +Processing triggers for libc-bin (2.19-0ubuntu6.9) ... +root@c1:~# _ +``` + +让我们用浏览器访问一下这个 web 服务器。记住 IP 地址为 10.173.82.158,因此你需要在浏览器中输入这个 IP。 + +[![lxd-nginx][2]][3] + +让我们对页面文字做一些小改动。回到容器中,进入默认 HTML 页面的目录中。 + +``` +root@c1:~# cd /var/www/html/ +root@c1:/var/www/html# ls -l +total 2 +-rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html +root@c1:/var/www/html# +``` + +使用 nano 编辑文件,然后保存: + +[![lxd-nginx-nano][4]][5] + +之后,再刷一下页面看看, + +[![lxd-nginx-modified][6]][7] + +### 清理 + +让我们清理一下这个容器,也就是删掉它。当需要的时候我们可以很方便地创建一个新容器出来。 + +``` +$ lxc list ++---------+---------+----------------------+------+------------+-----------+ +| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | ++---------+---------+----------------------+------+------------+-----------+ +| c1 | RUNNING | 10.173.82.169 (eth0) | | PERSISTENT | 0 | ++---------+---------+----------------------+------+------------+-----------+ +$ lxc stop c1 +$ lxc delete c1 +$ lxc list ++---------+---------+----------------------+------+------------+-----------+ +| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | ++---------+---------+----------------------+------+------------+-----------+ ++---------+---------+----------------------+------+------------+-----------+ +``` + +我们停止(关闭)这个容器,然后删掉它了。 + +本文至此就结束了。关于容器有很多玩法。而这只是配置 Ubuntu 并尝试使用容器的第一步而已。 + +-------------------------------------------------------------------------------- + +via: https://blog.simos.info/trying-out-lxd-containers-on-our-ubuntu/ + +作者:[Simos Xenitellis][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.simos.info/author/simos/ +[1]:https://linux.cn/article-7687-1.html +[2]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?resize=564%2C269&ssl=1 +[3]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?ssl=1 +[4]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-nano.png?resize=750%2C424&ssl=1 +[5]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-nano.png?ssl=1 +[6]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-modified.png?resize=595%2C317&ssl=1 +[7]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-modified.png?ssl=1 diff --git a/published/20160808 Top 10 Command Line Games For Linux.md b/published/20160808 Top 10 Command Line Games For Linux.md new file mode 100644 index 0000000000..0368635e73 --- /dev/null +++ b/published/20160808 Top 10 Command Line Games For Linux.md @@ -0,0 +1,255 @@ +十大 Linux 命令行游戏 +====== + +概要: 本文列举了 Linux 中最好的命令行游戏。 + +Linux 从来都不是游戏的首选操作系统,尽管近日来 [Linux 的游戏][1]提供了很多,你也可以从许多资源[下载到 Linux 游戏][2]。 + +也有专门的 [游戏版 Linux][3]。没错,确实有。但是今天,我们并不是要欣赏游戏版 Linux。 + +Linux 有一个超过 Windows 的优势。它拥有一个强大的 Linux 终端。在 Linux 终端上,你可以做很多事情,包括玩 **命令行游戏**。 + +当然,我们都是 Linux 终端的骨灰粉。终端游戏轻便、快速、有地狱般的魔力。而这最有意思的事情是,你可以在 Linux 终端上重温大量经典游戏。 + +### 最好的 Linux 终端游戏 + +来揭秘这张榜单,找出 Linux 终端最好的游戏。 + +#### 1. 
Bastet + +谁还没花上几个小时玩[俄罗斯方块][4]?它简单而且容易上瘾。 Bastet 就是 Linux 版的俄罗斯方块。 + +![Linux 终端游戏 Bastet][5] + +使用下面的命令获取 Bastet: + +``` +sudo apt install bastet +``` + +运行下列命令,在终端上开始这个游戏: + +``` +bastet +``` + +使用空格键旋转方块,方向键控制方块移动。 + +#### 2. Ninvaders + +Space Invaders(太空侵略者)。我仍记得这个游戏里,和我兄弟为了最高分而比拼。这是最好的街机游戏之一。 + +![Linux 终端游戏 nInvaders][6] + +复制粘贴这段代码安装 Ninvaders。 + +``` +sudo apt-get install ninvaders +``` + +使用下面的命令开始游戏: + +``` +ninvaders +``` + +方向键移动太空飞船。空格键射击外星人。 + +[推荐阅读:2016 你可以开始的 Linux 游戏 Top 10][21] + +#### 3. Pacman4console + +是的,这个就是街机之王。Pacman4console 是最受欢迎的街机游戏 Pacman(吃豆人)的终端版。 + +![Linux 命令行吃豆豆游戏 Pacman4console][7] + +使用以下命令获取 pacman4console: + +``` +sudo apt-get install pacman4console +``` + +打开终端,建议使用最大的终端界面。键入以下命令启动游戏: + +``` +pacman4console +``` + +使用方向键控制移动。 + +#### 4. nSnake + +记得在老式诺基亚手机里玩的贪吃蛇游戏吗? + +这个游戏让我在很长时间内着迷于手机。我曾经设计过各种姿态去获得更长的蛇身。 + +![nsnake : Linux 终端上的贪吃蛇游戏][8] + +我们拥有 [Linux 终端上的贪吃蛇游戏][9] 得感谢 [nSnake][9]。使用下面的命令安装它: + +``` +sudo apt-get install nsnake +``` + +键入下面的命令开始游戏: + +``` +nsnake +``` + +使用方向键控制蛇身并喂它。 + +#### 5. Greed + +Greed 有点像 Tron(类似贪吃蛇的进化版),但是减少了速度,也没那么刺激。 + +你当前的位置由闪烁的 ‘@’ 表示。你被数字所环绕,你可以在四个方向任意移动。 + +你选择的移动方向上标识的数字,就是你能移动的步数。你将重复这个步骤。走过的路不能再走,如果你无路可走,游戏结束。 + +似乎我让它听起来变得更复杂了。 + +![Greed : 命令行上的 Tron][10] + +通过下列命令获取 Greed: + +``` +sudo apt-get install greed +``` + +通过下列命令启动游戏,使用方向键控制游戏。 + +``` +greed +``` + +#### 6. Air Traffic Controller + +还有什么比做飞行员更有意思的?那就是空中交通管制员。在你的终端中,你可以模拟一个空中交通系统。说实话,在终端里管理空中交通蛮有意思的。 + +![Linux 空中交通管理员][11] + +使用下列命令安装游戏: + +``` +sudo apt-get install bsdgames +``` + +键入下列命令启动游戏: + +``` +atc +``` + +ATC 不是孩子玩的游戏。建议查看官方文档。 + +#### 7. Backgammon(双陆棋) + +无论之前你有没有玩过 [双陆棋][12],你都应该看看这个。 它的说明书和控制手册都非常友好。如果你喜欢,可以挑战你的电脑或者你的朋友。 + +![Linux 终端上的双陆棋][13] + +使用下列命令安装双陆棋: + +``` +sudo apt-get install bsdgames +``` + +键入下列命令启动游戏: + +``` +backgammon +``` + +当你提示游戏规则时,回复 ‘y’ 即可。 + +#### 8. Moon Buggy + +跳跃、开火。欢乐时光不必多言。 + +![Moon buggy][14] + +使用下列命令安装游戏: + +``` +sudo apt-get install moon-buggy +``` + +使用下列命令启动游戏: + +``` +moon-buggy +``` + +空格跳跃,‘a’ 或者 ‘l’射击。尽情享受吧。 + +#### 9. 2048 + +2048 可以活跃你的大脑。[2048][15] 是一个策咯游戏,很容易上瘾。以获取 2048 分为目标。 + +![Linux 终端上的 2048][16] + +复制粘贴下面的命令安装游戏: + +``` +wget https://raw.githubusercontent.com/mevdschee/2048.c/master/2048.c + +gcc -o 2048 2048.c +``` + +键入下列命令启动游戏: + +``` +./2048 +``` + +#### 10. Tron + +没有动作类游戏,这张榜单怎么可能结束? + +![Linux 终端游戏 Tron][17] + +是的,Linux 终端可以实现这种精力充沛的游戏 Tron。为接下来迅捷的反应做准备吧。无需被下载和安装困扰。一个命令即可启动游戏,你只需要一个网络连接: + +``` +ssh sshtron.zachlatta.com +``` + +如果有别的在线游戏者,你可以多人游戏。了解更多:[Linux 终端游戏 Tron][18]。 + +### 你看上了哪一款? + +伙计,十大 Linux 终端游戏都分享给你了。我猜你现在正准备键入 `ctrl+alt+T`(终端快捷键) 了。榜单中那个是你最喜欢的游戏?或者你有其它的终端游戏么?尽情分享吧! 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-command-line-games-linux/ + +作者:[Aquil Roshan][a] +译者:[CYLeft](https://github.com/CYleft) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/aquil/ +[1]:https://linux.cn/article-7316-1.html +[2]:https://itsfoss.com/download-linux-games/ +[3]:https://itsfoss.com/manjaro-gaming-linux/ +[4]:https://en.wikipedia.org/wiki/Tetris +[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/bastet.jpg +[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/ninvaders.jpg +[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/pacman.jpg +[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/nsnake.jpg +[9]:https://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/ +[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/greed.jpg +[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/atc.jpg +[12]:https://en.wikipedia.org/wiki/Backgammon +[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/backgammon.jpg +[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/moon-buggy.jpg +[15]:https://itsfoss.com/2048-offline-play-ubuntu/ +[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/2048.jpg +[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/tron.jpg +[18]:https://itsfoss.com/play-tron-game-linux-terminal/ +[19]:https://twitter.com/abhishek_pc +[20]:https://itsfoss.com/linux-gaming-guide/ +[21]:https://itsfoss.com/best-linux-games/ diff --git a/published/20161004 What happens when you start a process on Linux.md b/published/20161004 What happens when you start a process on Linux.md new file mode 100644 index 0000000000..952405c3e1 --- /dev/null +++ b/published/20161004 What happens when you start a process on Linux.md @@ -0,0 +1,135 @@ +当你在 Linux 上启动一个进程时会发生什么? 
+===========================================================
+
+本文讲的是 fork 和 exec 在 Unix 上是如何工作的。你或许已经知道,也有人还不知道。几年前当我了解到这些时,我惊叹不已。
+
+我们要做的是启动一个进程。我们已经在博客上讨论了很多关于**系统调用**的问题,每当你启动一个进程或者打开一个文件,这都是一个系统调用。所以你可能会认为有这样的系统调用:
+
+```
+start_process(["ls", "-l", "my_cool_directory"])
+```
+
+这是一个合理的想法,显然这是它在 DOS 或 Windows 中的工作原理。我想说的是,这并不是 Linux 上的工作原理。但是,我查阅了文档,确实有一个 [posix_spawn][2] 的系统调用基本上是这样做的,不过这不在本文的讨论范围内。
+
+### fork 和 exec
+
+Linux 上的 `posix_spawn` 是通过两个系统调用实现的,分别是 `fork` 和 `exec`(实际上是 `execve`),这些都是人们常常使用的。尽管在 OS X 上,人们使用 `posix_spawn`,而 `fork` 和 `exec` 是不提倡的,但我们将讨论的是 Linux。
+
+Linux 中的每个进程都存在于“进程树”中。你可以通过运行 `pstree` 命令查看进程树。树的根是 `init`,进程号是 1。每个进程(`init` 除外)都有一个父进程,而每个进程都可以有很多子进程。
+
+所以,假设我要启动一个名为 `ls` 的进程来列出一个目录。我是不是只要发起一个进程 `ls` 就好了呢?不是的。
+
+我要做的是,创建一个子进程,这个子进程是我(`me`)本身的一个克隆,然后这个子进程的“脑子”被吃掉了,变成 `ls`。
+
+开始是这样的:
+
+```
+my parent
+ |- me
+```
+
+然后运行 `fork()`,生成一个子进程,是我(`me`)自己的一份克隆:
+
+```
+my parent
+ |- me
+ |-- clone of me
+```
+
+然后我让该子进程运行 `exec("ls")`,变成这样:
+
+```
+my parent
+ |- me
+ |-- ls
+```
+
+当 ls 命令结束后,我几乎又变回了我自己:
+
+```
+my parent
+ |- me
+ |-- ls (zombie)
+```
+
+在这时 `ls` 其实是一个僵尸进程。这意味着它已经死了,但它还在等我,以防我需要检查它的返回值(使用 `wait` 系统调用)。一旦我获得了它的返回值,我将再次恢复独自一人的状态。
+
+```
+my parent
+ |- me
+```
+
+### fork 和 exec 的代码实现
+
+如果你要编写一个 shell,这是你必须做的一个练习(这是一个非常有趣和有启发性的项目。Kamal 在 Github 上有一个很棒的研讨会:[https://github.com/kamalmarhubi/shell-workshop][3])。
+
+事实证明,有了 C 或 Python 的技能,你可以在几个小时内编写一个非常简单的 shell,像 bash 一样。(至少如果你旁边能有个人多少懂一点,如果没有的话用时会久一点。)我已经完成啦,真的很棒。
+
+这就是 `fork` 和 `exec` 在程序中的用法。我写了一段 C 的伪代码。请记住,[fork 也可能会失败哦。][4]
+
+```
+int pid = fork();
+// 我要分身啦
+// “我”是谁呢?可能是子进程也可能是父进程
+if (pid == 0) {
+ // 我现在是子进程
+ // “ls” 吃掉了我脑子,然后变成一个完全不一样的进程
+ exec(["ls"])
+} else if (pid == -1) {
+ // 天啊,fork 失败了,简直是灾难!
+} else {
+ // 我是父进程耶
+ // 继续做一个酷酷的美男子吧
+ // 需要的话,我可以等待子进程结束
+}
+```
+
+### 上文提到的“脑子被吃掉”是什么意思呢?
+
+进程有很多属性:
+
+* 打开的文件(包括打开的网络连接)
+* 环境变量
+* 信号处理程序(在程序上运行 Ctrl + C 时会发生什么?)
+* 内存(你的“地址空间”)
+* 寄存器
+* 可执行文件(`/proc/$pid/exe`)
+* cgroups 和命名空间(与 Linux 容器相关)
+* 当前的工作目录
+* 运行程序的用户
+* 其他我还没想到的
+
+当你运行 `execve` 并让另一个程序吃掉你的脑子的时候,实际上几乎所有东西都是相同的! 你们有相同的环境变量、信号处理程序和打开的文件等等。
+
+唯一改变的是,内存、寄存器以及正在运行的程序,这可是件大事。
+
+### 为何 fork 并非那么耗费资源(写入时复制)
+
+你可能会问:“如果我有一个使用了 2GB 内存的进程,这是否意味着每次我启动一个子进程,所有 2GB 的内存都要被复制一次?这听起来要耗费很多资源!”
+
+事实上,Linux 为 `fork()` 调用实现了写时复制(copy on write),对于新进程的 2GB 内存来说,就像是“看看旧的进程就好了,是一样的!”。然后,如果任一进程试图写入内存,此时系统才真正地复制一个内存的副本给该进程。如果两个进程的内存是相同的,就不需要复制了。
+
+### 为什么你需要知道这么多
+
+你可能会说,好吧,这些细节听起来很厉害,但为什么这么重要?关于信号处理程序或环境变量的细节会被继承吗?这对我的日常编程有什么实际影响呢?
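+
+(LCTT 译注:下面是译者补充的一个可以编译运行的 C 小例子,原文中没有;它演示了 exec 之后环境变量确实被继承了下来:)
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/wait.h>
+
+int main(void) {
+    // 在父进程中设置一个环境变量
+    setenv("MY_COOL_VAR", "hello", 1);
+
+    int pid = fork();
+    if (pid == 0) {
+        // 子进程:让 sh 吃掉我们的脑子,但环境变量被继承了下来
+        execlp("sh", "sh", "-c", "echo MY_COOL_VAR=$MY_COOL_VAR", (char *)NULL);
+        perror("exec 失败");   // 只有在 exec 失败时才会执行到这里
+        exit(1);
+    } else if (pid > 0) {
+        waitpid(pid, NULL, 0); // 等待子进程,避免留下僵尸进程
+    }
+    return 0;
+}
+```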
有可能哦!比如说,在 Kamal 的博客上有一个很有意思的 [bug][5]。它讨论了 Python 如何将 `SIGPIPE` 的信号处理程序设置为忽略。也就是说,如果你从 Python 里运行一个程序,默认情况下它会忽略 `SIGPIPE`!这意味着,程序从 Python 脚本和从 shell 启动的表现会**有所不同**。在这种情况下,它会造成一个奇怪的问题。
+
+所以,你的程序所处的环境(环境变量、信号处理程序等)可能很重要,而这些都是从父进程继承来的。知道这些,在调试时是很有用的。
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2016/10/04/exec-will-eat-your-brain/
+
+作者:[Julia Evans][a]
+译者:[jessie-pang](https://github.com/jessie-pang)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca
+[1]:https://jvns.ca/categories/favorite
+[2]:http://man7.org/linux/man-pages/man3/posix_spawn.3.html
+[3]:https://github.com/kamalmarhubi/shell-workshop
+[4]:https://rachelbythebay.com/w/2014/08/19/fork/
+[5]:http://kamalmarhubi.com/blog/2015/06/30/my-favourite-bug-so-far-at-the-recurse-center/
diff --git a/published/20170319 ftrace trace your kernel functions.md b/published/20170319 ftrace trace your kernel functions.md
new file mode 100644
index 0000000000..c166a3c513
--- /dev/null
+++ b/published/20170319 ftrace trace your kernel functions.md
@@ -0,0 +1,268 @@
+ftrace:跟踪你的内核函数!
+============================================================
+
+大家好!今天我们将去讨论一个调试工具:ftrace,之前我的博客上还没有讨论过它。还有什么能比一个新的调试工具更让人激动呢?
+
+这个非常棒的 ftrace 并不是个新的工具!它大约在 Linux 的 2.6 内核版本中就有了,时间大约是在 2008 年。[这一篇是我用谷歌能找到的最早的文档][10]。因此,如果你是一个调试系统的“老手”,可能早就已经使用它了!
+
+我知道 ftrace 这个工具已经有大约 2.5 年了(LCTT 译注:距本文初次写作时),但是还没有真正地去学习它。假设我明天要召开一个专题研究会,那么,关于 ftrace 应该讨论些什么?因此,今天是时候去讨论一下它了!
+
+### 什么是 ftrace?
+
+ftrace 是一个 Linux 内核特性,它可以让你去跟踪 Linux 内核的函数调用。为什么要这么做呢?好吧,假设你在调试一个奇怪的问题,并且已经深入到了你所用内核版本的源代码层面,这时你想知道那里到底发生了什么。
+
+每次在调试的时候,我并不会经常去读内核源代码,但是,极个别的情况下会去读它!例如,本周在工作中,我有一个程序在内核中卡死了。查看到底是调用了什么函数,能够帮我更好地理解在内核中发生了什么,哪些系统涉及其中!(在我的那个案例中,它是虚拟内存系统)。
+
+我认为 ftrace 是一个十分好用的工具(它肯定没有 `strace` 那样用途广泛,也比它更难使用),但是它还是值得你去学习。因此,让我们开始吧!
+
+### 使用 ftrace 的第一步
+
+不像 `strace` 和 `perf`,ftrace 并不是真正的 **程序** – 你不能只运行 `ftrace my_cool_function`。那样太容易了!
+
+如果你去读 [使用 ftrace 调试内核][11],它会告诉你从 `cd /sys/kernel/debug/tracing` 开始,然后做很多文件系统的操作。
+
+对于我来说,这种办法太麻烦——一个使用 ftrace 的简单例子像是这样:
+
+```
+cd /sys/kernel/debug/tracing
+echo function > current_tracer
+echo do_page_fault > set_ftrace_filter
+cat trace
+```
+
+这个文件系统是跟踪系统的接口(“给这些神奇的文件赋值,然后该发生的事情就会发生”),理论上看起来似乎可用,但是它不是我的首选方式。
+
+幸运的是,ftrace 团队也考虑到这个并不友好的用户界面,因此,它有了一个更易于使用的界面,它就是 `trace-cmd`!!!`trace-cmd` 是一个带命令行参数的普通程序。我们后面将使用它!我在 LWN 上找到了一个 `trace-cmd` 的使用介绍:[trace-cmd: Ftrace 的一个前端][12]。
+
+### 开始使用 trace-cmd:让我们仅跟踪一个函数
+
+首先,我需要去使用 `sudo apt-get install trace-cmd` 安装 `trace-cmd`,这一步很容易。
+
+对于第一个 ftrace 的演示,我决定去了解我的内核如何去处理一个页面故障。当 Linux 分配内存时,它经常偷懒(“你并不是 _真的_ 计划去使用这些内存,对吗?”)。这意味着,当一个应用程序尝试去对分配给它的内存进行写入时,就会发生一个页面故障,而这个时候,内核才会真正地为应用程序去分配物理内存。
+
+我们开始使用 `trace-cmd` 并让它跟踪 `do_page_fault` 函数!
+
+```
+$ sudo trace-cmd record -p function -l do_page_fault
+ plugin 'function'
+Hit Ctrl^C to stop recording
+```
+
+我将它运行了几秒钟,然后按下了 `Ctrl+C`。 让我大吃一惊的是,它竟然产生了一个 2.5MB 大小的名为 `trace.dat` 的跟踪文件。我们来看一下这个文件的内容!
+ +``` +$ sudo trace-cmd report + chrome-15144 [000] 11446.466121: function: do_page_fault + chrome-15144 [000] 11446.467910: function: do_page_fault + chrome-15144 [000] 11446.469174: function: do_page_fault + chrome-15144 [000] 11446.474225: function: do_page_fault + chrome-15144 [000] 11446.474386: function: do_page_fault + chrome-15144 [000] 11446.478768: function: do_page_fault + CompositorTileW-15154 [001] 11446.480172: function: do_page_fault + chrome-1830 [003] 11446.486696: function: do_page_fault + CompositorTileW-15154 [001] 11446.488983: function: do_page_fault + CompositorTileW-15154 [001] 11446.489034: function: do_page_fault + CompositorTileW-15154 [001] 11446.489045: function: do_page_fault + +``` + +看起来很整洁 – 它展示了进程名(chrome)、进程 ID(15144)、CPU ID(000),以及它跟踪的函数。 + +通过察看整个文件,(`sudo trace-cmd report | grep chrome`)可以看到,我们跟踪了大约 1.5 秒,在这 1.5 秒的时间段内,Chrome 发生了大约 500 个页面故障。真是太酷了!这就是我们做的第一个 ftrace! + +### 下一个 ftrace 技巧:我们来跟踪一个进程! + +好吧,只看一个函数是有点无聊!假如我想知道一个程序中都发生了什么事情。我使用一个名为 Hugo 的静态站点生成器。看看内核为 Hugo 都做了些什么事情? + +在我的电脑上 Hugo 的 PID 现在是 25314,因此,我使用如下的命令去记录所有的内核函数: + +``` +sudo trace-cmd record --help # I read the help! +sudo trace-cmd record -p function -P 25314 # record for PID 25314 +``` + +`sudo trace-cmd report` 输出了 18,000 行。如果你对这些感兴趣,你可以看 [这里是所有的 18,000 行的输出][13]。 + +18,000 行太多了,因此,在这里仅摘录其中几行。 + +当系统调用 `clock_gettime` 运行的时候,都发生了什么: + +``` + compat_SyS_clock_gettime + SyS_clock_gettime + clockid_to_kclock + posix_clock_realtime_get + getnstimeofday64 + __getnstimeofday64 + arch_counter_read + __compat_put_timespec +``` + +这是与进程调试相关的一些东西: + +``` + cpufreq_sched_irq_work + wake_up_process + try_to_wake_up + _raw_spin_lock_irqsave + do_raw_spin_lock + _raw_spin_lock + do_raw_spin_lock + walt_ktime_clock + ktime_get + arch_counter_read + walt_update_task_ravg + exiting_task + +``` + +虽然你可能还不理解它们是做什么的,但是,能够看到所有的这些函数调用也是件很酷的事情。 + +### “function graph” 跟踪 + +这里有另外一个模式,称为 `function_graph`。除了它既可以进入也可以退出一个函数外,其它的功能和函数跟踪器是一样的。[这里是那个跟踪器的输出][14] + +``` +sudo trace-cmd record -p function_graph -P 25314 +``` + +同样,这里只是一个片断(这次来自 futex 代码): + +``` + | futex_wake() { + | get_futex_key() { + | get_user_pages_fast() { + 1.458 us | __get_user_pages_fast(); + 4.375 us | } + | __might_sleep() { + 0.292 us | ___might_sleep(); + 2.333 us | } + 0.584 us | get_futex_key_refs(); + | unlock_page() { + 0.291 us | page_waitqueue(); + 0.583 us | __wake_up_bit(); + 5.250 us | } + 0.583 us | put_page(); ++ 24.208 us | } +``` + +我们看到在这个示例中,在 `futex_wake` 后面调用了 `get_futex_key`。这是在源代码中真实发生的事情吗?我们可以检查一下!![这里是在 Linux 4.4 中 futex_wake 的定义][15] (我的内核版本是 4.4)。 + +为节省时间我直接贴出来,它的内容如下: + +``` +static int +futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset) +{ + struct futex_hash_bucket *hb; + struct futex_q *this, *next; + union futex_key key = FUTEX_KEY_INIT; + int ret; + WAKE_Q(wake_q); + + if (!bitset) + return -EINVAL; + + ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_READ); +``` + +如你所见,在 `futex_wake` 中的第一个函数调用真的是 `get_futex_key`! 太棒了!相比阅读内核代码,阅读函数跟踪肯定是更容易的找到结果的办法,并且让人高兴的是,还能看到所有的函数用了多长时间。 + +### 如何知道哪些函数可以被跟踪 + +如果你去运行 `sudo trace-cmd list -f`,你将得到一个你可以跟踪的函数的列表。它很简单但是也很重要。 + +### 最后一件事:事件! + +现在,我们已经知道了怎么去跟踪内核中的函数,真是太酷了! + +还有一类我们可以跟踪的东西!有些事件与我们的函数调用并不相符。例如,你可能想知道当一个程序被调度进入或者离开 CPU 时,都发生了什么事件!你可能想通过“盯着”函数调用计算出来,但是,我告诉你,不可行! 
幸运的是,ftrace 除了函数之外也为你提供了一些“事件”,当重要的事件发生时,你就可以看到它们。你可以使用 `sudo cat /sys/kernel/debug/tracing/available_events` 来查看这些事件的列表。
+
+我查看了全部的 sched_switch 事件。我并不完全知道 sched_switch 是什么,但是,我猜测它与调度有关。
+
+```
+sudo cat /sys/kernel/debug/tracing/available_events
+sudo trace-cmd record -e sched:sched_switch
+sudo trace-cmd report
+```
+
+输出如下:
+
+```
+ 16169.624862: Chrome_ChildIOT:24817 [112] S ==> chrome:15144 [120]
+ 16169.624992: chrome:15144 [120] S ==> swapper/3:0 [120]
+ 16169.625202: swapper/3:0 [120] R ==> Chrome_ChildIOT:24817 [112]
+ 16169.625251: Chrome_ChildIOT:24817 [112] R ==> chrome:1561 [112]
+ 16169.625437: chrome:1561 [112] S ==> chrome:15144 [120]
+
+```
+
+现在,可以很清楚地看到这些切换,从 PID 24817 -> 15144 -> kernel -> 24817 -> 1561 -> 15144。(所有的这些事件都发生在同一个 CPU 上)。
+
+### ftrace 是如何工作的?
+
+ftrace 是一个动态跟踪系统。当我们开始用 ftrace 跟踪内核函数时,**函数的代码会被改变**。让我们假设去跟踪 `do_page_fault` 函数。内核将在那个函数的汇编代码中插入一些额外的指令,以便每次该函数被调用时去提示跟踪系统。内核之所以能够添加额外的指令,是因为 Linux 将额外的几个 NOP 指令编译进了每个函数中,这样当需要的时候,这里就有了添加跟踪代码的地方。
+
+这一点非常巧妙,因为当我不需要使用 ftrace 去跟踪内核时,它根本就不影响性能;而当我需要跟踪时,跟踪的函数越多,产生的开销才越大。
+
+(或许有些地方说的不对,但是,我认为的 ftrace 就是这样工作的)
+
+### 更容易地使用 ftrace:Brendan Gregg 的工具及 kernelshark
+
+正如我们前面所讨论的,直接使用 ftrace 时,你需要仔细考虑各个内核函数/事件都做了些什么。能够做到这一点很酷!但是也需要做大量的工作!
+
+Brendan Gregg (我们的 Linux 调试工具“大神”)有个工具仓库,它使用 ftrace 去提供关于像 I/O 延迟这样的各种事情的信息。这是它在 GitHub 上全部的 [perf-tools][16] 仓库。
+
+这里有一个权衡,那就是这些工具易于使用,但是你只能查看那些 Brendan Gregg 预想到并实现到工具里面的方面(好在它涵盖的方面已经很多了!:))
+
+另一类工具是将 ftrace 的输出可视化,这方面做得比较好的是 [kernelshark][17]。我还没有用过它,但是看起来似乎很有用。你可以使用 `sudo apt-get install kernelshark` 来安装它。
+
+### 一个新的超能力
+
+我很高兴能够花一些时间去学习 ftrace!与任何内核工具一样,不同的内核版本下 ftrace 的表现也会有所不同,但我希望有一天你能发现它很有用!
+
+### ftrace 系列文章的一个索引
+
+最后,这里是我找到的一些 ftrace 方面的文章。它们大部分在 LWN(Linux 新闻周刊)上,它是 Linux 的一个极好的资源(你可以购买一个 [订阅][18]!)
+
+* [使用 Ftrace 调试内核 - part 1][1] (Dec 2009, Steven Rostedt)
+* [使用 Ftrace 调试内核 - part 2][2] (Dec 2009, Steven Rostedt)
+* [Linux 函数跟踪器的秘密][3] (Jan 2010, Steven Rostedt)
+* [trace-cmd:Ftrace 的一个前端][4] (Oct 2010, Steven Rostedt)
+* [使用 KernelShark 去分析实时调度器][5] (2011, Steven Rostedt)
+* [Ftrace: 神秘的开关][6] (2014, Brendan Gregg)
+* 内核文档:(它十分有用) [Documentation/ftrace.txt][7]
+* 你能跟踪的事件的文档 [Documentation/events.txt][8]
+* linux 内核开发上的一些 ftrace 设计文档(不是特别有用,但是很有趣!)
[Documentation/ftrace-design.txt][9] + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/03/19/getting-started-with-ftrace/ + +作者:[Julia Evans][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca +[1]:https://lwn.net/Articles/365835/ +[2]:https://lwn.net/Articles/366796/ +[3]:https://lwn.net/Articles/370423/ +[4]:https://lwn.net/Articles/410200/ +[5]:https://lwn.net/Articles/425583/ +[6]:https://lwn.net/Articles/608497/ +[7]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace.txt +[8]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/events.txt +[9]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace-design.txt +[10]:https://lwn.net/Articles/290277/ +[11]:https://lwn.net/Articles/365835/ +[12]:https://lwn.net/Articles/410200/ +[13]:https://gist.githubusercontent.com/jvns/e5c2d640f7ec76ed9ed579be1de3312e/raw/78b8425436dc4bb5bb4fa76a4f85d5809f7d1ef2/trace-cmd-report.txt +[14]:https://gist.githubusercontent.com/jvns/f32e9b06bcd2f1f30998afdd93e4aaa5/raw/8154d9828bb895fd6c9b0ee062275055b3775101/function_graph.txt +[15]:https://github.com/torvalds/linux/blob/v4.4/kernel/futex.c#L1313-L1324 +[16]:https://github.com/brendangregg/perf-tools +[17]:https://lwn.net/Articles/425583/ +[18]:https://lwn.net/subscribe/Info diff --git a/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md b/published/20170502 A beginner-s guide to Raspberry Pi 3.md similarity index 65% rename from translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md rename to published/20170502 A beginner-s guide to Raspberry Pi 3.md index b53c397aed..38b892e0ec 100644 --- a/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md +++ b/published/20170502 A beginner-s guide to Raspberry Pi 3.md @@ -1,103 +1,115 @@ -一个树莓派 3 的新手指南 +树莓派 3 的新手指南 ====== +> 这个教程将帮助你入门树莓派 3Raspberry Pi 3。 + ![](https://images.techhive.com/images/article/2017/03/raspberry2-100711632-large.jpeg) -这篇文章是我的使用树莓派 3 创建新项目的每周系列文章的一部分。该系列的第一篇文章专注于入门,它主要讲使用 PIXEL 桌面去安装树莓派、设置网络以及其它的基本组件。 +这篇文章是我的使用树莓派 3 创建新项目的每周系列文章的一部分。该系列的这个第一篇文章专注于入门,它主要讲安装 Raspbian 和 PIXEL 桌面,以及设置网络和其它的基本组件。 ### 你需要: - * 一台树莓派 3 - * 一个 5v 2mAh 带 USB 接口的电源适配器 - * 至少 8GB 容量的 Micro SD 卡 - * Wi-Fi 或者以太网线 - * 散热片 - * 键盘和鼠标 - * 一台 PC 显示器 - * 一台用于准备 microSD 卡的 Mac 或者 PC +* 一台树莓派 3 +* 一个 5v 2mAh 带 USB 接口的电源适配器 +* 至少 8GB 容量的 Micro SD 卡 +* Wi-Fi 或者以太网线 +* 散热片 +* 键盘和鼠标 +* 一台 PC 显示器 +* 一台用于准备 microSD 卡的 Mac 或者 PC - - -现在市面上有很多基于 Linux 操作系统的树莓派,这种树莓派你可以直接安装它,但是,如果你是第一次接触树莓派,我推荐使用 NOOBS,它是树莓派官方的操作系统安装器,它安装操作系统到设备的过程非常简单。 +现在有很多基于 Linux 操作系统可用于树莓派,你可以直接安装它,但是,如果你是第一次接触树莓派,我推荐使用 NOOBS,它是树莓派官方的操作系统安装器,它安装操作系统到该设备的过程非常简单。 在你的电脑上从 [这个链接][1] 下载 NOOBS。它是一个 zip 压缩文件。如果你使用的是 MacOS,可以直接双击它,MacOS 会自动解压这个文件。如果你使用的是 Windows,右键单击它,选择“解压到这里”。 -如果你运行的是 Linux,如何去解压 zip 文件取决于你的桌面环境,因为,不同的桌面环境下解压文件的方法不一样,但是,使用命令行可以很容易地完成解压工作。 +如果你运行的是 Linux 桌面,如何去解压 zip 文件取决于你的桌面环境,因为,不同的桌面环境下解压文件的方法不一样,但是,使用命令行可以很容易地完成解压工作。 -`$ unzip NOOBS.zip` +``` +$ unzip NOOBS.zip +``` 不管它是什么操作系统,打开解压后的文件,你看到的应该是如下图所示的样子: -![content][3] Swapnil Bhartiya +![content][3] 现在,在你的 PC 上插入 Micro SD 卡,将它格式化成 FAT32 格式的文件系统。在 MacOS 上,使用磁盘实用工具去格式化 Micro SD 卡: -![format][4] Swapnil Bhartiya +![format][4] -在 Windows 上,只需要右键单击这个卡,然后选择“格式化”选项。如果是在 Linux 上,不同的桌面环境使用不同的工具,就不一一去讲解了。在这里我写了一个教程,[在 Linux 上使用命令行接口][5] 去格式化 SD 卡为 Fat32 文件系统。 +在 Windows 
上,只需要右键单击这个卡,然后选择“格式化”选项。如果是在 Linux 上,不同的桌面环境使用不同的工具,就不一一去讲解了。在这里我写了一个教程,[在 Linux 上使用命令行界面][5] 去格式化 SD 卡为 Fat32 文件系统。 -在你拥有了 FAT32 格式的文件系统后,就可以去拷贝下载的 NOOBS 目录的内容到这个卡的根目录下。如果你使用的是 MacOS 或者 Linux,可以使用 rsync 将 NOOBS 的内容传到 SD 卡的根目录中。在 MacOS 或者 Linux 中打开终端应用,然后运行如下的 rsync 命令: +在你的卡格式成了 FAT32 格式的文件系统后,就可以去拷贝下载的 NOOBS 目录的内容到这个卡的根目录下。如果你使用的是 MacOS 或者 Linux,可以使用 `rsync` 将 NOOBS 的内容传到 SD 卡的根目录中。在 MacOS 或者 Linux 中打开终端应用,然后运行如下的 rsync 命令: -`rsync -avzP /path_of_NOOBS /path_of_sdcard` +``` +rsync -avzP /path_of_NOOBS /path_of_sdcard +``` 一定要确保选择了 SD 卡的根目录,在我的案例中(在 MacOS 上),它是: -`rsync -avzP /Users/swapnil/Downloads/NOOBS_v2_2_0/ /Volumes/U/` +``` +rsync -avzP /Users/swapnil/Downloads/NOOBS_v2_2_0/ /Volumes/U/ +``` 或者你也可以拷贝粘贴 NOOBS 目录中的内容。一定要确保将 NOOBS 目录中的内容全部拷贝到 Micro SD 卡的根目录下,千万不能放到任何的子目录中。 -现在可以插入这张 Micro SD 卡到树莓派 3 中,连接好显示器、键盘鼠标和电源适配器。如果你拥有有线网络,我建议你使用它,因为有线网络下载和安装操作系统更快。树莓派将引导到 NOOBS,它将提供一个供你去选择安装的分发版列表。从第一个选项中选择树莓派,紧接着会出现如下图的画面。 +现在可以插入这张 MicroSD 卡到树莓派 3 中,连接好显示器、键盘鼠标和电源适配器。如果你拥有有线网络,我建议你使用它,因为有线网络下载和安装操作系统更快。树莓派将引导到 NOOBS,它将提供一个供你去选择安装的分发版列表。从第一个选项中选择 Raspbian,紧接着会出现如下图的画面。 -![raspi config][6] Swapnil Bhartiya +![raspi config][6] -在你安装完成后,树莓派将重新启动,你将会看到一个欢迎使用树莓派的画面。现在可以去配置它,并且去运行系统更新。大多数情况下,我们都是在没有外设的情况下使用树莓派的,都是使用 SSH 基于网络远程去管理它。这意味着你不需要为了管理树莓派而去为它接上鼠标键盘和显示器。 +在你安装完成后,树莓派将重新启动,你将会看到一个欢迎使用树莓派的画面。现在可以去配置它,并且去运行系统更新。大多数情况下,我们都是在没有外设的情况下使用树莓派的,都是使用 SSH 基于网络远程去管理它。这意味着你不需要为了管理树莓派而去为它接上鼠标、键盘和显示器。 开始使用它的第一步是,配置网络(假如你使用的是 Wi-Fi)。点击顶部面板上的网络图标,然后在出现的网络列表中,选择你要配置的网络并为它输入正确的密码。 -![wireless][7] Swapnil Bhartiya +![wireless][7] 恭喜您,无线网络的连接配置完成了。在进入下一步的配置之前,你需要找到你的网络为树莓派分配的 IP 地址,因为远程管理会用到它。 打开一个终端,运行如下的命令: -`ifconfig` +``` +ifconfig +``` -现在,记下这个设备的 wlan0 部分的 IP 地址。它一般显示为 “inet addr” +现在,记下这个设备的 `wlan0` 部分的 IP 地址。它一般显示为 “inet addr”。 -现在,可以去启用 SSH 了,在树莓派上打开一个终端,然后打开 raspi-config 工具。 +现在,可以去启用 SSH 了,在树莓派上打开一个终端,然后打开 `raspi-config` 工具。 -`sudo raspi-config` +``` +sudo raspi-config +``` 树莓派的默认用户名和密码分别是 “pi” 和 “raspberry”。在上面的命令中你会被要求输入密码。树莓派配置工具的第一个选项是去修改默认密码,我强烈推荐你修改默认密码,尤其是你基于网络去使用它的时候。 第二个选项是去修改主机名,如果在你的网络中有多个树莓派时,主机名用于区分它们。一个有意义的主机名可以很容易在网络上识别每个设备。 -然后进入到接口选项,去启用摄像头、SSH、以及 VNC。如果你在树莓派上使用了一个涉及到多媒体的应用程序,比如,家庭影院系统或者 PC,你也可以去改变音频输出选项。缺省情况下,它的默认输出到 HDMI 接口,但是,如果你使用外部音响,你需要去改变音频输出设置。转到树莓派配置工具的高级配置选项,选择音频,然后选择 3.5mm 作为默认输出。 +然后进入到接口选项,去启用摄像头、SSH、以及 VNC。如果你在树莓派上使用了一个涉及到多媒体的应用程序,比如,家庭影院系统或者 PC,你也可以去改变音频输出选项。缺省情况下,它的默认输出到 HDMI 接口,但是,如果你使用外部音响,你需要去改变音频输出设置。转到树莓派配置工具的高级配置选项,选择音频,然后选择 “3.5mm” 作为默认输出。 [小提示:使用箭头键去导航,使用回车键去选择] -一旦所有的改变被应用, 树莓派将要求重新启动。你可以从树莓派上拔出显示器、鼠标键盘,以后可以通过网络来管理它。现在可以在你的本地电脑上打开终端。如果你使用的是 Windows,你可以使用 Putty 或者去读我的文章 - 怎么在 Windows 10 上安装 Ubuntu Bash。 +一旦应用了所有的改变, 树莓派将要求重新启动。你可以从树莓派上拔出显示器、鼠标键盘,以后可以通过网络来管理它。现在可以在你的本地电脑上打开终端。如果你使用的是 Windows,你可以使用 Putty 或者去读我的文章 - 怎么在 Windows 10 上安装 Ubuntu Bash。 在你的本地电脑上输入如下的 SSH 命令: -`ssh pi@IP_ADDRESS_OF_Pi` +``` +ssh pi@IP_ADDRESS_OF_Pi +``` 在我的电脑上,这个命令是这样的: -`ssh pi@10.0.0.161` +``` +ssh pi@10.0.0.161 +``` 输入它的密码,你登入到树莓派了!现在你可以从一台远程电脑上去管理你的树莓派。如果你希望通过因特网去管理树莓派,可以去阅读我的文章 - [如何在你的计算机上启用 RealVNC][8]。 在该系列的下一篇文章中,我将讲解使用你的树莓派去远程管理你的 3D 打印机。 -**这篇文章是作为 IDG 投稿网络的一部分发表的。[想加入吗?][9]** - -------------------------------------------------------------------------------- via: https://www.infoworld.com/article/3176488/linux/a-beginner-s-guide-to-raspberry-pi-3.html 作者:[Swapnil Bhartiya][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20170524 Working with Vi-Vim Editor - Advanced concepts.md b/published/20170524 
Working with Vi-Vim Editor - Advanced concepts.md
new file mode 100644
index 0000000000..a9fbe555c3
--- /dev/null
+++ b/published/20170524 Working with Vi-Vim Editor - Advanced concepts.md
@@ -0,0 +1,127 @@
+使用 Vi/Vim 编辑器:高级概念
+======
+
+早些时候我们已经讨论了一些关于 VI/VIM 编辑器的基础知识,但是 VI 和 VIM 都是非常强大的编辑器,它们还有很多其他的功能可以使用。在本教程中,我们将学习 VI/VIM 编辑器的一些高级用法。
+
+(**推荐阅读**:[使用 VI 编辑器:基础知识][1])
+
+### 使用 VI/VIM 编辑器打开多个文件
+
+要打开多个文件,命令将与打开单个文件相同,我们只要添加其余文件的名称:
+
+```
+ $ vi file1 file2 file3
+```
+
+要浏览到下一个文件,我们可以(在 vim 命令模式中)使用:
+
+```
+:n
+```
+
+或者我们也可以使用
+
+```
+:e filename
+```
+
+### 在编辑器中运行外部命令
+
+我们可以在 vi 编辑器内部运行外部的 Linux/Unix 命令,也就是说不需要退出编辑器。要在编辑器中运行命令,如果在插入模式下,先返回到命令模式,我们使用 BANG 也就是 `!` 接着输入需要使用的命令。运行命令的语法是:
+
+```
+:! command
+```
+
+这是一个例子:
+
+```
+:! df -H
+```
+
+### 根据模式搜索
+
+要在文本文件中搜索一个单词或模式,我们在命令模式下使用以下两个命令:
+
+* 命令 `/` 代表正向搜索模式
+* 命令 `?` 代表反向搜索模式
+
+这两个命令都用于相同的目的,唯一不同的是它们搜索的方向。一个例子是:
+
+如果要从文件开头向前搜索,
+
+```
+:/ search pattern
+```
+
+如果要从文件末尾向后搜索,
+
+```
+:? search pattern
+```
+
+### 搜索并替换一个模式
+
+我们可能需要搜索和替换我们的文本中的单词或模式。与其从整个文本中逐个找到单词出现的位置再替换它,我们不如在命令模式中用一条命令来自动完成替换。使用搜索和替换的语法是:
+
+```
+:s/pattern_to_be_found/New_pattern/g
+```
+
+假设我们想要将单词 “alpha” 用单词 “beta” 代替,命令就是这样:
+
+```
+:s/alpha/beta/g
+```
+
+如果我们只想替换第一个出现的 “alpha”,那么命令就是:
+
+```
+$ :s/alpha/beta/
+```
+
+### 使用 set 命令
+
+我们也可以使用 set 命令自定义 vi/vim 编辑器的行为和外观。下面是一些可以使用 set 命令修改 vi/vim 编辑器行为的选项列表:
+
+```
+:set ic ' 在搜索时忽略大小写
+
+:set smartcase ' 搜索强制区分大小写
+
+:set nu ' 在每行开始显示行号
+
+:set hlsearch ' 高亮显示匹配的单词
+
+:set ro ' 将文件更改为只读
+
+:set term ' 打印终端类型
+
+:set ai ' 设置自动缩进
+
+:set noai ' 取消自动缩进
+```
+
+其他一些修改 vi 编辑器的命令是:
+
+```
+:colorscheme ' 用来改变编辑器的配色方案。(仅适用于 VIM 编辑器)
+
+:syntax on ' 为 .xml、.html 等文件打开颜色方案。(仅适用于 VIM 编辑器)
+```
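+
+(LCTT 译注:补充一个小技巧,原文没有提到。上面这些 `set` 选项在退出编辑器后就会失效;如果想让它们在每次启动时自动生效,可以把它们写进主目录下的 `~/.vimrc` 文件,注意在该文件中不需要前面的冒号,例如:)
+
+```
+" 示例 ~/.vimrc,按需取舍
+set nu         " 显示行号
+set ic         " 搜索时忽略大小写
+set hlsearch   " 高亮显示匹配的单词
+set ai         " 自动缩进
+syntax on      " 打开语法高亮
+```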
+
+这篇结束了本系列教程,请在下面的评论栏中提出你的疑问/问题或建议。
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/working-vivim-editor-advanced-concepts/
+
+作者:[Shusain][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/working-vi-editor-basics/
diff --git a/translated/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md b/published/20170607 Why Car Companies Are Hiring Computer Security Experts.md
similarity index 67%
rename from translated/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md
rename to published/20170607 Why Car Companies Are Hiring Computer Security Experts.md
index ba5a9e2c2e..c41aefb701 100644
--- a/translated/tech/20170607 Why Car Companies Are Hiring Computer Security Experts.md
+++ b/published/20170607 Why Car Companies Are Hiring Computer Security Experts.md
@@ -1,29 +1,29 @@
 为什么车企纷纷招聘计算机安全专家
 ============================================================

![](https://static01.nyt.com/images/2017/06/08/business/08BITS-GURUS1/08BITS-GURUS1-superJumbo.jpg)

来自 CloudFlare 公司的网络安全专家 Marc Rogers(左)和来自 Lookout 的 Kevin Mahaffey 能够通过直接连接在汽车上的笔记本电脑控制特斯拉汽车进行许多操作。图为他们在 CloudFlare 大厅里的熔岩灯前的合影,这些熔岩灯被用来生成密钥。(纽约时报 Christie Hemm Klok 拍摄)

大约在七年前,伊朗的几位顶级核科学家经历过一系列形式类似的暗杀:凶手的摩托车接近他们乘坐的汽车,把磁性炸弹吸附在汽车上,然后逃离并引爆炸弹。

安全专家们警告人们,再过 7 年,凶手们不再需要摩托车或磁性炸弹。他们所需要的只是一台笔记本电脑和发送给无人驾驶汽车的一段代码——让汽车坠桥、被货车撞扁或者在高速公路上突然抛锚。

汽车制造商眼中的无人驾驶汽车,在黑客眼中只是一台可以达到时速 100 公里的计算机。

网络安全公司 CloudFlare 的首席安全研究员马克·罗杰斯(Marc Rogers)说:“它们已经不再是汽车了。它们是装在车轮上的数据中心。从外界接收的每一条数据都可以作为黑客的攻击载体。”

两年前,两名“白帽”黑客(寻找系统漏洞并修复它们的研究员,而不是利用漏洞来犯罪的破坏者(Cracker))成功地在数英里之外用电脑获得了一辆 Jeep Cherokee 的控制权。他们控制汽车撞击一个放置在高速公路中央的假人(在场景设定中是一位紧张的记者),直接终止了假人的一生。

黑客 Chris Valasek 和 Charlie Miller(现在是 Uber 和滴滴的安全研究人员)发现了一条 [由 Jeep 娱乐系统通向仪表板的电路][10]。他们利用这条线路控制了车辆转向、刹车和变速——他们在高速公路上撞击假人所需的一切。

Miller 先生上周日在 Twitter 上写道:“汽车被黑客入侵成为头条新闻,但是人们要清楚,没有谁的汽车被坏人入侵过。这些只是研究人员的测试。”

尽管如此,Miller 和 Valasek 的研究使 Jeep 汽车的制造商菲亚特克莱斯勒(Fiat Chrysler)付出了巨大的代价,因为这个安全漏洞,菲亚特克莱斯勒被迫召回了 140 万辆汽车。

毫无疑问,后来通用汽车首席执行官玛丽·巴拉(Mary Barra)把网络安全作为公司的首要任务。现在,计算机网络安全领域的人才在汽车制造商和高科技公司推进的无人驾驶汽车项目中的需求量很大。

优步、特斯拉、苹果和中国的滴滴一直在积极招聘像 Miller 先生和 Valasek 先生这样的白帽黑客,传统的网络安全公司和学术界也有这样的趋势。

去年,特斯拉挖走了苹果 iOS 操作系统的安全经理 Aaron Sigel。优步挖走了 Facebook 的白帽黑客 Chris Gates。Miller 先生在发现 Jeep 的漏洞后就职于优步,然后被滴滴挖走。计算机安全领域已经有数十名优秀的工程师加入无人驾驶汽车项目研究的行列。

Miller 先生说,他离开优步的一部分原因是滴滴给了他更自由的工作空间。

Miller 星期六在 Twitter 上写道:“汽车制造商对待网络攻击的威胁似乎更加严肃,但我仍然希望有更大的透明度。”

像许多大型科技公司一样,特斯拉和菲亚特克莱斯勒也开始给那些发现并提交漏洞的黑客们提供奖励。通用汽车公司也做了类似的事情,但批评人士认为通用汽车公司的计划与科技公司们提供的计划相比诚意不足,迄今为止还收效甚微。

在 Miller 和 Valasek 发现 Jeep 漏洞的一年后,他们又向人们演示了所有其他可能危害乘客安全的方式,包括劫持车辆的速度控制系统,猛打方向盘或在高速行驶下拉动手刹——这一切都是由汽车外的电脑操作的。(在测试中使用的汽车最后掉进路边的沟渠,他们只能寻求当地拖车公司的帮助)

虽然他们必须在 Jeep 车上才能做到这一切,但这也证明了入侵的可能性。

在 Jeep 被入侵之前,华盛顿大学和加利福尼亚大学圣地亚哥分校的[安全研究人员][12]第一个通过蓝牙远程控制轿车并控制其刹车。研究人员警告汽车公司:汽车联网程度越高,被入侵的可能性就越大。

2015 年,安全研究人员们发现了入侵高度软件化的特斯拉 Model S 的途径。Rogers 先生和网络安全公司 Lookout 的首席技术官凯文·马哈菲(Kevin Mahaffey)找到了一种通过直接连接在汽车上的笔记本电脑控制特斯拉汽车的方法。

一年后,来自中国腾讯的一支团队做了更进一步的尝试。他们入侵了一辆行驶中的特斯拉 Model S,并在 12 英里(约 19 公里)之外控制了其刹车。和 Jeep 不同,特斯拉可以通过远程安装补丁来修复那些可能被黑的安全漏洞。

以上所有的例子中,入侵者都是无恶意的白帽黑客或者安全研究人员,但是给无人驾驶汽车制造商的教训是惨重的。

黑客入侵汽车的动机是无穷的。在得知 Rogers 先生和 Mahaffey 先生对特斯拉 Model S 的研究之后,一位中国 app 开发者和他们联系、询问他们是否愿意分享或者出售他们发现的漏洞。(这位 app 开发者正在寻找后门,试图在特斯拉的仪表盘上偷偷安装 app)

但随着越来越多的无人驾驶和半自动驾驶的汽车驶入公路,它们将成为更有价值的目标。安全专家警告道:无人驾驶汽车面临着更复杂、更多面的入侵风险,每一辆新无人驾驶汽车的加入,都使这个系统变得更复杂,而复杂性不可避免地带来脆弱性。

20 年前,平均每辆汽车有 100 万行代码,通用汽车公司的 2010 [雪佛兰 Volt][13] 有大约 1000 万行代码——比一架 [F-35 战斗机][14]的代码还要多。

如今,平均每辆汽车至少有 1 亿行代码。无人驾驶汽车公司预计不久以后它们将有 2 亿行代码。当你想到平均每 1000 行代码就有 15 到 50 个缺陷时,就会明白潜在的可利用缺陷会以多么快的速度增加。

“计算机最大的安全威胁仅仅是数据被删除,但无人驾驶汽车一旦出现安全事故,失去的却是乘客的生命。”一家致力于解决汽车安全问题的以色列初创公司 Karamba Security 的联合创始人 David Barzilai 说。

安全专家说道:要想真正保障无人驾驶汽车的安全,汽车制造商必须想办法避免所有可能产生的漏洞——即使漏洞不可避免。其中最大的挑战,是汽车制造商和软件开发商们之间缺乏合作经验。

网络安全公司 Lookout 的 Mahaffey 先生说:“新的革命已经出现,我们不能固步自封,应该寻求新的思维。我们需要像发明出安全气囊那样的人来解决安全漏洞,但我们现在还没有看到行业内有人做出改变。”

Mahaffey 先生说:“在这场无人驾驶汽车的竞争中,那些最注重软件的公司将会成为最后的赢家。”

--------------------------------------------------------------------------------

via: https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html

作者:[NICOLE PERLROTH][a]
译者:[XiatianSummer](https://github.com/XiatianSummer)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20170802 Creating SWAP partition using FDISK - FALLOCATE commands.md b/published/20170802 Creating SWAP partition using FDISK - FALLOCATE commands.md
new file mode 100644
index 0000000000..09e1d1fbaf
--- /dev/null
+++ b/published/20170802 Creating SWAP partition using FDISK - FALLOCATE commands.md
@@ -0,0 +1,113 @@
+使用 fdisk 和 fallocate 命令创建交换分区
+======
+
+交换分区在物理内存(RAM)被占满时用来存放从内存中换出的内容。当 RAM 被耗尽时,Linux 会将内存中不活动的页移动到交换空间中,从而空出内存给系统使用。虽然如此,但交换空间不应被认为是物理内存的替代品。
+
+大多数情况下,建议交换内存的大小为物理内存的 1 到 2 倍。也就是说,如果你有 8GB 内存,那么交换空间大小应该介于 8 到 16 GB 之间。
+
+若系统中没有配置交换分区,当内存耗尽后,系统可能会杀掉正在运行中的进程/应用,从而导致系统崩溃。在本文中,我们将学会如何为 Linux 系统添加交换分区,我们有两个办法:
+
+- 使用 fdisk 命令
+- 使用 fallocate 命令
+
+### 第一个方法(使用 fdisk 命令)
+
+通常,系统的第一块硬盘会被命名为 `/dev/sda`,而其中的分区会命名为 `/dev/sda1`、`/dev/sda2`。本文我们使用的是一块有两个主分区的硬盘,两个分区分别为 `/dev/sda1`、`/dev/sda2`,而我们使用 `/dev/sda3` 来做交换分区。
+
+首先创建一个新分区:
+
+```
+$ fdisk /dev/sda
+```
+
+按 `n` 来创建新分区。系统会询问你从哪个柱面开始,直接按回车键使用默认值即可。然后系统询问你到哪个柱面结束,这时我们输入交换分区的大小(比如 1000MB),即输入 `+1000M`。
+
+![swap][2]
+
+现在我们创建了一个大小为 1000MB 的分区,但是我们并没有设置该分区的类型,我们按下 `t` 然后回车,来设置分区类型。
+
+现在我们要输入分区编号,这里我们输入 `3`,然后输入分区类型的编号,交换分区的类型号为 `82`(要显示所有可用的分区类型,按下 `l`),然后再按下 `w` 保存磁盘分区表。
+
+![swap][4]
+
+下一步,使用 `mkswap` 命令来格式化交换分区:
+
+```
+$ mkswap /dev/sda3
+```
+
+然后激活新建的交换分区:
+
+```
+$ swapon /dev/sda3
+```
+
+然而我们的交换分区在重启后并不会自动挂载。要做到永久挂载,我们需要添加内容到 `/etc/fstab` 文件中。打开 `/etc/fstab` 文件并输入下面行:
+
+```
+$ vi /etc/fstab
+
+/dev/sda3 swap swap defaults 0 0
+```
+
+保存并关闭文件。现在每次重启后都能使用我们的交换分区了。
+
+### 第二种方法(使用 fallocate 命令)
+
+我推荐用这种方法,因为这是最简单、最快速的创建交换空间的方法了。`fallocate` 是最被低估和使用最少的命令之一。`fallocate` 命令用于为文件预分配块/大小。
+
+使用 `fallocate` 创建交换空间,我们首先在 `/` 目录下创建一个名为 `swap_space` 的文件,然后分配 2GB 给 `swap_space` 文件:
+
+```
+$ fallocate -l 2G /swap_space
+```
+
+我们运行下面命令来验证文件大小:
+
+```
+$ ls -lh /swap_space
+```
+
+然后更改文件权限,让 `/swap_space` 更安全:
+
+```
+$ chmod 600 /swap_space
+```
+
+这样只有 root 可以读写该文件了。我们再来格式化交换分区(LCTT 译注:虽然这个 `swap_space` 是个文件,但是我们把它当成是分区来挂载):
+
+```
+$ mkswap /swap_space
+```
+
+然后启用交换空间:
+
+```
+$ swapon /swap_space
+```
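+
+(LCTT 译注:补充说明,非原文内容。在某些文件系统上,`swapon` 可能会因为 `fallocate` 预分配的文件带有“空洞”而报错,比如提示 “swapfile has holes”;遇到这种情况时,可以改用 `dd` 来创建这个文件,然后重新执行上面的 `mkswap` 和 `swapon`:)
+
+```
+$ dd if=/dev/zero of=/swap_space bs=1M count=2048
+```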
“计算机最大的安全威胁仅仅是数据被删除,但无人驾驶汽车一旦出现安全事故,失去的却是乘客的生命。”一家致力于解决汽车安全问题的以色列初创公司 Karamba Security 的联合创始人 David Barzilai 说。

安全专家说道:要想真正保障无人驾驶汽车的安全,汽车制造商必须想办法避免所有可能产生的漏洞——即使漏洞不可避免。其中最大的挑战,是汽车制造商和软件开发商们之间缺乏合作经验。

-网络安全公司 Lookout 的 Mahaffey 先生说:“新的革命已经出现,我们不能固步自封,应该寻求新的思维。我们需要像发明出安全气囊那样的人来解决安全漏洞,但我们现在还没有看到行业内有人做出改变。“
+网络安全公司 Lookout 的 Mahaffey 先生说:“新的革命已经出现,我们不能固步自封,应该寻求新的思维。我们需要像发明出安全气囊那样的人来解决安全漏洞,但我们现在还没有看到行业内有人做出改变。”
+
-Mahaffey 先生说:”在这场无人驾驶汽车的竞争中,那些最注重软件的公司将会成为最后的赢家“
+Mahaffey 先生说:“在这场无人驾驶汽车的竞争中,那些最注重软件的公司将会成为最后的赢家。”

--------------------------------------------------------------------------------

via: https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html

-作者:[NICOLE PERLROTH ][a]
+作者:[NICOLE PERLROTH][a]
译者:[XiatianSummer](https://github.com/XiatianSummer)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

diff --git a/published/20170802 Creating SWAP partition using FDISK - FALLOCATE commands.md b/published/20170802 Creating SWAP partition using FDISK - FALLOCATE commands.md
new file mode 100644
index 0000000000..09e1d1fbaf
--- /dev/null
+++ b/published/20170802 Creating SWAP partition using FDISK - FALLOCATE commands.md
@@ -0,0 +1,113 @@
+使用 fdisk 和 fallocate 命令创建交换分区
+======
+
+交换分区在物理内存(RAM)被占满时用来保存换出的内存内容。当 RAM 耗尽时,Linux 会将内存中不活动的页移动到交换空间中,从而空出内存给系统使用。虽然如此,但交换空间不应被认为是物理内存的替代品。
+
+大多数情况下,建议交换空间的大小为物理内存的 1 到 2 倍。也就是说如果你有 8GB 内存,那么交换空间大小应该介于 8 到 16 GB 之间。
+
+若系统中没有配置交换分区,当内存耗尽后,系统可能会杀掉正在运行中的进程/应用,从而导致系统崩溃。在本文中,我们将学会如何为 Linux 系统添加交换分区,我们有两个办法:
+
+- 使用 fdisk 命令
+- 使用 fallocate 命令
+
+### 第一个方法(使用 fdisk 命令)
+
+通常,系统的第一块硬盘会被命名为 `/dev/sda`,而其中的分区会命名为 `/dev/sda1`、`/dev/sda2`。本文我们使用的是一块有两个主分区的硬盘,两个分区分别为 `/dev/sda1`、`/dev/sda2`,而我们使用 `/dev/sda3` 来做交换分区。
+
+首先创建一个新分区:
+
+```
+$ fdisk /dev/sda
+```
+
+按 `n` 来创建新分区。系统会询问你从哪个柱面开始,直接按回车键使用默认值即可。然后系统询问你到哪个柱面结束,这里我们输入交换分区的大小(比如 1000MB),即输入 `+1000M`。
+
+![swap][2]
+
+现在我们创建了一个大小为 1000MB 的分区。但是我们并没有设置该分区的类型,我们按下 `t` 然后回车,来设置分区类型。
+
+现在我们要输入分区编号,这里我们输入 `3`,然后输入分区类型代码,交换分区的类型代码为 `82`(要显示所有可用的分区类型,按下 `l`),然后再按下 `w` 保存磁盘分区表。
+
+![swap][4]
+
+再下一步使用 `mkswap` 命令来格式化交换分区:
+
+```
+$ mkswap /dev/sda3
+```
+
+然后激活新建的交换分区:
+
+```
+$ swapon /dev/sda3
+```
+
+然而我们的交换分区在重启后并不会自动挂载。要做到永久挂载,我们需要添加内容到 `/etc/fstab` 文件中。打开 `/etc/fstab` 文件并加入下面这行:
+
+```
+$ vi /etc/fstab
+
+/dev/sda3 swap swap defaults 0 0
+```
+
+保存并关闭文件。现在每次重启后都能使用我们的交换分区了。
+
+### 第二种方法(使用 fallocate 命令)
+
+我推荐用这种方法,因为这是最简单、最快速的创建交换空间的方法了。`fallocate` 是最被低估和使用最少的命令之一了。`fallocate` 命令用于为文件预分配块/大小。
+
+使用 `fallocate` 创建交换空间,我们首先在 `/` 目录下创建一个名为 `swap_space` 的文件。然后分配 2GB 到 `swap_space` 文件:
+
+```
+$ fallocate -l 2G /swap_space
+```
+
+我们运行下面命令来验证文件大小:
+
+```
+$ ls -lh /swap_space
+```
+
+然后更改文件权限,让 `/swap_space` 更安全:
+
+```
+$ chmod 600 /swap_space
+```
+
+这样只有 root 可以读写该文件了。我们再来格式化交换文件(LCTT 译注:虽然这个 `swap_space` 是个文件,但是我们把它当成是分区来挂载):
+
+```
+$ mkswap /swap_space
+```
+
+然后启用交换空间:
+
+```
+$ swapon /swap_space
+```
+
+这样启用的交换空间在重启后会失效。因此为了使之持久化,就像上面一样,我们编辑 `/etc/fstab` 并加入下面这行:
+
+```
+/swap_space swap swap sw 0 0
+```
+
+保存并退出文件。现在我们的交换空间会一直被挂载了。重启后,我们可以在终端运行 `free -m` 或 `swapon -s` 来检查交换空间是否生效(文末附有一个输出示例)。
+
+我们的教程至此就结束了,希望本文足够容易理解和学习,如果有任何疑问欢迎提出。
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/create-swap-using-fdisk-fallocate/
+
+作者:[Shusain][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
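+附:交换空间启用并写入 `/etc/fstab` 之后,可以用 `swapon -s` 确认它已经生效。下面的输出只是示意,具体数值因系统而异:
+
+```
+$ swapon -s
+Filename      Type        Size     Used  Priority
+/dev/sda3     partition   975868   0     -1
+/swap_space   file        2097148  0     -2
+```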
+[a]:http://linuxtechlab.com/author/shsuain/ +[1]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=668%2C211 +[2]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/02/fidsk.jpg?resize=668%2C211 +[3]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=620%2C157 +[4]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/02/fidsk-swap-select.jpg?resize=620%2C157 diff --git a/translated/tech/20170820 How To Display Date And Time In History Command.md b/published/20170820 How To Display Date And Time In History Command.md similarity index 69% rename from translated/tech/20170820 How To Display Date And Time In History Command.md rename to published/20170820 How To Display Date And Time In History Command.md index 402b471d92..b3fd163009 100644 --- a/translated/tech/20170820 How To Display Date And Time In History Command.md +++ b/published/20170820 How To Display Date And Time In History Command.md @@ -1,19 +1,21 @@ -让 History 命令显示日期和时间 +让 history 命令显示日期和时间 ====== -我们都对 History 命令很熟悉。它将终端上 bash 执行过的所有命令存储到 `.bash_history` 文件中,来帮助我们复查用户之前执行过的命令。 -默认情况下 history 命令直接显示用户执行的命令而不会输出运行命令时的日期和时间,即使 history 命令记录了这个时间。 +我们都对 `history` 命令很熟悉。它将终端上 bash 执行过的所有命令存储到 `.bash_history` 文件中,来帮助我们复查用户之前执行过的命令。 -运行 history 命令时,它会检查一个叫做 `HISTTIMEFORMAT` 的环境变量,这个环境变量指明了如何格式化输出 history 命令中记录的这个时间。 +默认情况下 `history` 命令直接显示用户执行的命令而不会输出运行命令时的日期和时间,即使 `history` 命令记录了这个时间。 -若该值为 null 或者根本没有设置,则它跟大多数系统默认显示的一样,不会现实日期和时间。 +运行 `history` 命令时,它会检查一个叫做 `HISTTIMEFORMAT` 的环境变量,这个环境变量指明了如何格式化输出 `history` 命令中记录的这个时间。 -`HISTTIMEFORMAT` 使用 strftime 来格式化显示时间 (strftime - 将日期和时间转换为字符串)。history 命令输出日期和时间能够帮你更容易地追踪问题。 +若该值为 null 或者根本没有设置,则它跟大多数系统默认显示的一样,不会显示日期和时间。 - * **%T:** 替换为时间 ( %H:%M:%S )。 - * **%F:** 等同于 %Y-%m-%d (ISO 8601:2000 标准日期格式)。 +`HISTTIMEFORMAT` 使用 `strftime` 来格式化显示时间(`strftime` - 将日期和时间转换为字符串)。`history` 命令输出日期和时间能够帮你更容易地追踪问题。 + +* `%T`: 替换为时间(`%H:%M:%S`)。 +* `%F`: 等同于 `%Y-%m-%d` (ISO 8601:2000 标准日期格式)。 + +下面是 `history` 命令默认的输出。 -下面是 history 命令默认的输出。 ``` # history 1 yum install -y mysql-server mysql-client @@ -46,36 +48,36 @@ 28 sysdig 29 yum install httpd mysql 30 service httpd start - ``` -根据需求,有三种不同的方法设置环境变量。 +根据需求,有三种不同的设置环境变量的方法。 - * 临时设置当前用户的环境变量 - * 永久设置当前/其他用户的环境变量 - * 永久设置所有用户的环境变量 +* 临时设置当前用户的环境变量 +* 永久设置当前/其他用户的环境变量 +* 永久设置所有用户的环境变量 **注意:** 不要忘了在最后那个单引号前加上空格,否则输出会很混乱的。 -### 方法 -1: +### 方法 1: + +运行下面命令为为当前用户临时设置 `HISTTIMEFORMAT` 变量。这会一直生效到下次重启。 -运行下面命令为为当前用户临时设置 HISTTIMEFORMAT 变量。这会一直生效到下次重启。 ``` # export HISTTIMEFORMAT='%F %T ' - ``` -### 方法 -2: +### 方法 2: + +将 `HISTTIMEFORMAT` 变量加到 `.bashrc` 或 `.bash_profile` 文件中,让它永久生效。 -将 HISTTIMEFORMAT 变量加到 `.bashrc` 或 `.bash_profile` 文件中,让它永久生效。 ``` # echo 'HISTTIMEFORMAT="%F %T "' >> ~/.bashrc 或 # echo 'HISTTIMEFORMAT="%F %T "' >> ~/.bash_profile - ``` 运行下面命令来让文件中的修改生效。 + ``` # source ~/.bashrc 或 @@ -83,21 +85,22 @@ ``` -### 方法 -3: +### 方法 3: + +将 `HISTTIMEFORMAT` 变量加入 `/etc/profile` 文件中,让它对所有用户永久生效。 -将 HISTTIMEFORMAT 变量加入 `/etc/profile` 文件中,让它对所有用户永久生效。 ``` # echo 'HISTTIMEFORMAT="%F %T "' >> /etc/profile - ``` 运行下面命令来让文件中的修改生效。 + ``` # source /etc/profile - ``` -输出结果为。 +输出结果为: + ``` # history 1 2017-08-16 15:30:15 yum install -y mysql-server mysql-client @@ -130,7 +133,6 @@ 28 2017-08-16 15:30:15 sysdig 29 2017-08-16 15:30:15 yum install httpd mysql 30 2017-08-16 15:30:15 service httpd start - ``` -------------------------------------------------------------------------------- @@ -138,7 +140,7 @@ via: 
https://www.2daygeek.com/display-date-time-linux-bash-history-command/ 作者:[2daygeek][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md b/published/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md similarity index 83% rename from translated/tech/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md rename to published/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md index bb721e0042..f9bc03e3e1 100644 --- a/translated/tech/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md +++ b/published/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md @@ -1,23 +1,26 @@ -在 RHEL/CentOS 系统上使用 YUM History 命令回滚升级操作 +在 RHEL/CentOS 系统上使用 YUM history 命令回滚升级操作 ====== + 为服务器打补丁是 Linux 系统管理员的一项重要任务,为的是让系统更加稳定,性能更加优化。厂商经常会发布一些安全/高危的补丁包,相关软件需要升级以防范潜在的安全风险。 -Yum (Yellowdog Update Modified) 是 CentOS 和 RedHat 系统上用的 RPM 包管理工具,Yum history 命令允许系统管理员将系统回滚到上一个状态,但由于某些限制,回滚不是在所有情况下都能成功,有时 yum 命令可能什么都不做,有时可能会删掉一些其他的包。 +Yum (Yellowdog Update Modified) 是 CentOS 和 RedHat 系统上用的 RPM 包管理工具,`yum history` 命令允许系统管理员将系统回滚到上一个状态,但由于某些限制,回滚不是在所有情况下都能成功,有时 `yum` 命令可能什么都不做,有时可能会删掉一些其他的包。 -我建议你在升级之前还是要做一个完整的系统备份,而 yum history 并不能用来替代系统备份的。系统备份能让你将系统还原到任意时候的节点状态。 +我建议你在升级之前还是要做一个完整的系统备份,而 `yum history` 并不能用来替代系统备份的。系统备份能让你将系统还原到任意时候的节点状态。 **推荐阅读:** -**(#)** [在 RHEL/CentOS 系统上使用 YUM 命令管理软件包 ][1] -**(#)** [在 Fedora 系统上使用 DNF (YUM 的一个分支) 命令管理软件包 ][2] -**(#)** [如何让 History 命令显示日期和时间 ][3] -某些情况下,安装的应用程序在升级了补丁之后不能正常工作或者出现一些错误(可能是由于库不兼容或者软件包升级导致的),那该怎么办呢? +- [在 RHEL/CentOS 系统上使用 YUM 命令管理软件包][1] +- [在 Fedora 系统上使用 DNF (YUM 的一个分支)命令管理软件包 ][2] +- [如何让 history 命令显示日期和时间][3] + +某些情况下,安装的应用程序在升级了补丁之后不能正常工作或者出现一些错误(可能是由于库不兼容或者软件包升级导致的),那该怎么办呢? + +与应用开发团队沟通,并找出导致库和软件包的问题所在,然后使用 `yum history` 命令进行回滚。 -与应用开发团队沟通,并找出导致库和软件包的问题所在,然后使用 yum history 命令进行回滚。 **注意:** - * 它不支持回滚 selinux,selinux-policy-*,kernel,glibc (以及依赖 glibc 的包,比如 gcc)。 - * 不建议将系统降级到更低的版本(比如 CentOS 6.9 降到 CentOS 6.8),这回导致系统处于不稳定的状态 +* 它不支持回滚 selinux,selinux-policy-*,kernel,glibc (以及依赖 glibc 的包,比如 gcc)。 +* 不建议将系统降级到更低的版本(比如 CentOS 6.9 降到 CentOS 6.8),这会导致系统处于不稳定的状态 让我们先来看看系统上有哪些包可以升级,然后挑选出一些包来做实验。 @@ -66,10 +69,10 @@ Upgrade 4 Package(s) Total download size: 5.5 M Is this ok [y/N]: n - ``` -你会发现 `git` 包可以被升级,那我们就用它来实验吧。运行下面命令获得软件包的版本信息(当前安装的版本和可以升级的版本)。 +你会发现 `git` 包可以被升级,那我们就用它来实验吧。运行下面命令获得软件包的版本信息(当前安装的版本和可以升级的版本)。 + ``` # yum list git Loaded plugins: fastestmirror, security @@ -80,10 +83,10 @@ Installed Packages git.x86_64 1.7.1-8.el6 @base Available Packages git.x86_64 1.7.1-9.el6_9 updates - ``` 运行下面命令来将 `git` 从 `1.7.1-8` 升级到 `1.7.1-9`。 + ``` # yum update git Loaded plugins: fastestmirror, presto @@ -147,27 +150,29 @@ Dependency Updated: perl-Git.noarch 0:1.7.1-9.el6_9 Complete! - ``` 验证升级后的 `git` 版本. + ``` # yum list git Installed Packages git.x86_64 1.7.1-9.el6_9 @updates -or +或 # rpm -q git git-1.7.1-9.el6_9.x86_64 - ``` -现在我们成功升级这个软件包,可以对它进行回滚了. 步骤如下. +现在我们成功升级这个软件包,可以对它进行回滚了。步骤如下。 + +### 使用 YUM history 命令回滚升级操作 + +首先,使用下面命令获取 yum 操作的 id。下面的输出很清晰地列出了所有需要的信息,例如操作 id、谁做的这个操作(用户名)、操作日期和时间、操作的动作(安装还是升级)、操作影响的包数量。 -首先,使用下面命令获取yum操作id. 下面的输出很清晰地列出了所有需要的信息,例如操作 id, 谁做的这个操作(用户名), 操作日期和时间, 操作的动作(安装还是升级), 操作影响的包数量. 
``` # yum history -or +或 # yum history list all Loaded plugins: fastestmirror, presto ID | Login user | Date and time | Action(s) | Altered @@ -185,10 +190,10 @@ ID | Login user | Date and time | Action(s) | Altered 3 | root | 2016-10-18 12:53 | Install | 1 2 | root | 2016-09-30 10:28 | E, I, U | 31 EE 1 | root | 2016-07-26 11:40 | E, I, U | 160 EE - ``` -上面命令现实有两个包受到了影响,因为 git 还升级了它的依赖包 **perl-Git**. 运行下面命令来查看关于操作的详细信息. +上面命令显示有两个包受到了影响,因为 `git` 还升级了它的依赖包 `perl-Git`。 运行下面命令来查看关于操作的详细信息。 + ``` # yum history info 13 Loaded plugins: fastestmirror, presto @@ -214,7 +219,8 @@ history info ``` -运行下面命令来回滚 `git` 包到上一个版本. +运行下面命令来回滚 `git` 包到上一个版本。 + ``` # yum history undo 13 Loaded plugins: fastestmirror, presto @@ -279,21 +285,21 @@ Installed: git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6 Complete! - ``` -回滚后, 使用下面命令来检查降级包的版本. +回滚后,使用下面命令来检查降级包的版本。 + ``` # yum list git -or +或 # rpm -q git git-1.7.1-8.el6.x86_64 - ``` ### 使用YUM downgrade 命令回滚升级 -此外,我们也可以使用 YUM downgrade 命令回滚升级. +此外,我们也可以使用 YUM `downgrade` 命令回滚升级。 + ``` # yum downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6 Loaded plugins: search-disabled-repos, security, ulninfo @@ -346,14 +352,14 @@ Installed: git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6 Complete! - ``` -**注意 :** 你也需要降级依赖包, 否则它会删掉当前版本的依赖包而不是对依赖包做降级,因为downgrade命令无法处理依赖关系. +注意: 你也需要降级依赖包,否则它会删掉当前版本的依赖包而不是对依赖包做降级,因为 `downgrade` 命令无法处理依赖关系。 ### 至于 Fedora 用户 -命令是一样的,只需要将包管理器名称从YUM改成DNF就行了. +命令是一样的,只需要将包管理器名称从 `yum` 改成 `dnf` 就行了。 + ``` # dnf list git # dnf history @@ -361,7 +367,6 @@ Complete! # dnf history undo # dnf list git # dnf downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6 - ``` -------------------------------------------------------------------------------- @@ -370,7 +375,7 @@ via: https://www.2daygeek.com/rollback-fallback-updates-downgrade-packages-cento 作者:[2daygeek][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md b/published/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md similarity index 78% rename from translated/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md rename to published/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md index 10decaada3..94bc84b462 100644 --- a/translated/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md +++ b/published/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md @@ -1,7 +1,7 @@ 如何在 Linux 上让一段时间不活动的用户自动登出 ====== -![](https://www.ostechnix.com/wp-content/uploads/2017/09/logout-720x340.jpg) +![](https://www.ostechnix.com/wp-content/uploads/2017/09/logout-720x340.jpg) 让我们想象这么一个场景。你有一台服务器经常被网络中各系统的很多个用户访问。有可能出现某些用户忘记登出会话让会话保持会话处于连接状态。我们都知道留下一个处于连接状态的用户会话是一件多么危险的事情。有些用户可能会借此故意做一些损坏系统的事情。而你,作为一名系统管理员,会去每个系统上都检查一遍用户是否有登出吗?其实这完全没必要的。而且若网络中有成百上千台机器,这也太耗时了。不过,你可以让用户在本机或 SSH 会话上超过一定时间不活跃的情况下自动登出。本教程就将教你如何在类 Unix 系统上实现这一点。一点都不难。跟我做。 @@ -11,32 +11,40 @@ #### 方法 1: -编辑 **~/.bashrc** 或 **~/.bash_profile** 文件: +编辑 `~/.bashrc` 或 `~/.bash_profile` 文件: + ``` $ vi ~/.bashrc ``` + 或, + ``` $ vi ~/.bash_profile ``` -将下面行加入其中。 +将下面行加入其中: + ``` TMOUT=100 ``` -这回让用户在停止动作 100 秒后自动登出。你可以根据需要定义这个值。保存并关闭文件。 +这会让用户在停止动作 100 秒后自动登出。你可以根据需要定义这个值。保存并关闭文件。 运行下面命令让更改生效: + ``` $ source ~/.bashrc ``` + 或, + ``` $ source ~/.bash_profile ``` 
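+改动生效后,可以先确认一下变量的值是否正确(下面的输出只是示意):
+
+```
+$ echo $TMOUT
+100
+```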
现在让会话闲置 100 秒。100 秒不活动后,你会看到下面这段信息,并且用户会自动退出会话。 + ``` timed out waiting for input: auto-logout Connection to 192.168.43.2 closed. @@ -44,13 +52,16 @@ Connection to 192.168.43.2 closed. 该设置可以轻易地被用户所修改。因为,`~/.bashrc` 文件被用户自己所拥有。 -要修改或者删除超时设置,只需要删掉上面添加的行然后执行 "source ~/.bashrc" 命令让修改生效。 +要修改或者删除超时设置,只需要删掉上面添加的行然后执行 `source ~/.bashrc` 命令让修改生效。 + +此外,用户也可以运行下面命令来禁止超时: -此啊玩 i,用户也可以运行下面命令来禁止超时: ``` $ export TMOUT=0 ``` + 或, + ``` $ unset TMOUT ``` @@ -59,14 +70,16 @@ $ unset TMOUT #### 方法 2: -以 root 用户登陆 +以 root 用户登录。 创建一个名为 `autologout.sh` 的新文件。 + ``` # vi /etc/profile.d/autologout.sh ``` 加入下面内容: + ``` TMOUT=100 readonly TMOUT @@ -76,55 +89,58 @@ export TMOUT 保存并退出该文件。 为它添加可执行权限: + ``` # chmod +x /etc/profile.d/autologout.sh ``` 现在,登出或者重启系统。非活动用户就会在 100 秒后自动登出了。普通用户即使想保留会话连接但也无法修改该配置了。他们会在 100 秒后强制退出。 -这两种方法对本地会话和远程会话都适用(即本地登陆的用户和远程系统上通过 SSH 登陆的用户)。下面让我们来看看如何实现只自动登出非活动的 SSH 会话,而不自动登出本地会话。 +这两种方法对本地会话和远程会话都适用(即本地登录的用户和远程系统上通过 SSH 登录的用户)。下面让我们来看看如何实现只自动登出非活动的 SSH 会话,而不自动登出本地会话。 #### 方法 3: -这种方法,我们智慧让 SSH 会话用户在一段时间不活动后自动登出。 +这种方法,我们只会让 SSH 会话用户在一段时间不活动后自动登出。 编辑 `/etc/ssh/sshd_config` 文件: + ``` $ sudo vi /etc/ssh/sshd_config ``` 添加/修改下面行: + ``` ClientAliveInterval 100 ClientAliveCountMax 0 ``` 保存并退出该文件。重启 sshd 服务让改动生效。 + ``` $ sudo systemctl restart sshd ``` -现在,在远程系统通过 ssh 登陆该系统。100 秒后,ssh 会话就会自动关闭了,你也会看到下面消息: +现在,在远程系统通过 ssh 登录该系统。100 秒后,ssh 会话就会自动关闭了,你也会看到下面消息: + ``` $ Connection to 192.168.43.2 closed by remote host. Connection to 192.168.43.2 closed. ``` -现在,任何人从远程系统通过 SSH 登陆本系统,都会在 100 秒不活动后自动登出了。 +现在,任何人从远程系统通过 SSH 登录本系统,都会在 100 秒不活动后自动登出了。 -希望本文能对你有所帮助。我马上还会写另一篇实用指南。如果你觉得我们的指南有用,请在您的社交网络上分享,支持 OSTechNix! +希望本文能对你有所帮助。我马上还会写另一篇实用指南。如果你觉得我们的指南有用,请在您的社交网络上分享,支持 我们! 祝您好运! - - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/auto-logout-inactive-users-period-time-linux/ 作者:[SK][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20170918 Linux fmt command - usage and examples.md b/published/20170918 Linux fmt command - usage and examples.md similarity index 68% rename from translated/tech/20170918 Linux fmt command - usage and examples.md rename to published/20170918 Linux fmt command - usage and examples.md index e9b1d8921a..5724d9abb2 100644 --- a/translated/tech/20170918 Linux fmt command - usage and examples.md +++ b/published/20170918 Linux fmt command - usage and examples.md @@ -1,13 +1,13 @@ -Linux fmt 命令 - 用法与案例 +Linux 的 fmt 命令用法与案例 ====== -有时你会发现需要格式化某个文本文件中的内容。比如,该文本文件每行一个单词,而人物是把所有的单词都放在同一行。当然,你可以手工来做,但没人喜欢手工做这么耗时的工作。而且,这只是一个例子 - 事实上的任务可能千奇百怪。 +有时你会发现需要格式化某个文本文件中的内容。比如,该文本文件每行一个单词,而任务是把所有的单词都放在同一行。当然,你可以手工来做,但没人喜欢手工做这么耗时的工作。而且,这只是一个例子 - 事实上的任务可能千奇百怪。 好在,有一个命令可以满足至少一部分的文本格式化的需求。这个工具就是 `fmt`。本教程将会讨论 `fmt` 的基本用法以及它提供的一些主要功能。文中所有的命令和指令都在 Ubuntu 16.04LTS 下经过了测试。 ### Linux fmt 命令 -fmt 命令是一个简单的文本格式化工具,任何人都能在命令行下运行它。它的基本语法为: +`fmt` 命令是一个简单的文本格式化工具,任何人都能在命令行下运行它。它的基本语法为: ``` fmt [-WIDTH] [OPTION]... [FILE]... @@ -15,15 +15,13 @@ fmt [-WIDTH] [OPTION]... [FILE]... 它的 man 页是这么说的: -``` -重新格式化文件FILE(s)中的每一个段落,将结果写到标准输出. 选项 -WIDTH 是 --width=DIGITS 形式的缩写 -``` +> 重新格式化文件中的每一个段落,将结果写到标准输出。选项 `-WIDTH` 是 `--width=DIGITS` 形式的缩写。 -下面这些问答方式的例子应该能让你对 fmt 的用法有很好的了解。 +下面这些问答方式的例子应该能让你对 `fmt` 的用法有很好的了解。 -### Q1。如何使用 fmt 来将文本内容格式成同一行? +### Q1、如何使用 fmt 来将文本内容格式成同一行? 
-使用 `fmt` 命令的基本格式(省略任何选项)就能做到这一点。你只需要将文件名作为参数传递给它。 +使用 `fmt` 命令的基本形式(省略任何选项)就能做到这一点。你只需要将文件名作为参数传递给它。 ``` fmt [file-name] @@ -33,9 +31,9 @@ fmt [file-name] [![format contents of file in single line][1]][2] -你可以看到文件中多行内容都被格式化成同一行了。请注意,这并不会修改原文件(也就是 file1)。 +你可以看到文件中多行内容都被格式化成同一行了。请注意,这并不会修改原文件(file1)。 -### Q2。如何修改最大行宽? +### Q2、如何修改最大行宽? 默认情况下,`fmt` 命令产生的输出中的最大行宽为 75。然而,如果你想的话,可以用 `-w` 选项进行修改,它接受一个表示新行宽的数字作为参数值。 @@ -47,7 +45,7 @@ fmt -w [n] [file-name] [![change maximum line width][3]][4] -### Q3。如何让 fmt 突出显示第一行? +### Q3、如何让 fmt 突出显示第一行? 这是通过让第一行的缩进与众不同来实现的,你可以使用 `-t` 选项来实现。 @@ -57,7 +55,7 @@ fmt -t [file-name] [![make fmt highlight the first line][5]][6] -### Q4。如何使用 fmt 拆分长行? +### Q4、如何使用 fmt 拆分长行? fmt 命令也能用来对长行进行拆分,你可以使用 `-s` 选项来应用该功能。 @@ -69,9 +67,9 @@ fmt -s [file-name] [![make fmt split long lines][7]][8] -### Q5。如何在单词与单词之间,行与行之间用空格分开? +### Q5、如何在单词与单词之间,句子之间用空格分开? -fmt 命令提供了一个 `-u` 选项,这会在单词与单词之间用单个空格分开,行与行之间用两个空格分开。你可以这样用: +fmt 命令提供了一个 `-u` 选项,这会在单词与单词之间用单个空格分开,句子之间用两个空格分开。你可以这样用: ``` fmt -u [file-name] @@ -81,7 +79,7 @@ fmt -u [file-name] ### 总结 -没错,fmt 提供的功能不多,但不代表它的应用就不广泛。因为你永远不知道什么时候会用到它。在本教程中,我们已经讲解了 `fmt` 提供的主要选项。若想了解更多细节,请查看该工具的 [man 页 ][9]。 +没错,`fmt` 提供的功能不多,但不代表它的应用就不广泛。因为你永远不知道什么时候会用到它。在本教程中,我们已经讲解了 `fmt` 提供的主要选项。若想了解更多细节,请查看该工具的 [man 页][9]。 -------------------------------------------------------------------------------- @@ -90,7 +88,7 @@ via: https://www.howtoforge.com/linux-fmt-command/ 作者:[Himanshu Arora][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20170919 What Are Bitcoins.md b/published/20170919 What Are Bitcoins.md new file mode 100644 index 0000000000..49a58ef9d1 --- /dev/null +++ b/published/20170919 What Are Bitcoins.md @@ -0,0 +1,76 @@ +比特币是什么? 
+======
+
+![what are bitcoins](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-are-bitcoins_orig.jpg)
+
+[比特币][1]Bitcoin 是一种数字货币或者说是电子现金,依靠点对点技术来完成交易。由于使用点对点技术作为主要网络,比特币提供了一个类似于管制经济managed economy的社区。这就是说,比特币消除了货币的集中式管理方式,促进了货币的社区管理。大部分比特币数字现金的挖掘和管理软件也是开源的。
+
+第一个比特币软件是由中本聪Satoshi Nakamoto开发的,基于开源的密码协议。比特币最小单位被称为 Satoshi,它基本上是一个比特币的一亿分之一(0.00000001 BTC)。
+
+人们不能低估比特币在数字经济中消除的界限。例如,比特币消除了由中央机构对货币进行的管理控制,并将控制和管理交给了整个社区。此外,比特币基于开放源代码密码协议的事实使其成为一个开放的领域,其中会出现价值波动、通货紧缩和通货膨胀等剧烈的经济活动。当许多互联网用户正在意识到他们在网上完成交易的隐私性时,比特币正在变得比以往更受欢迎。但是,对于那些了解暗网及其工作原理的人们,可以确认有些人早就开始使用它了。
+
+不利的一面是,比特币在匿名支付方面也非常安全,可能会对安全或个人健康构成威胁。例如,暗网市场是进口药物甚至武器的主要供应商和零售商,在暗网中使用比特币会助长这种犯罪活动。尽管如此,如果使用得当,比特币有许多的好处,可以消除一些由于集中的货币代理管理导致的经济上的谬误。另外,比特币允许在世界任何地方交换现金。比特币的使用也可以减少货币假冒、滥发或贬值。同时,依托对等网络作为骨干网络,促进交易记录的分布式权限,交易会更加安全。
+
+比特币的其他优点包括:
+
+- 在网上商业世界里,比特币促进资金安全和完全控制。这是因为买家受到保护,以免商家为较低成本的服务额外收取钱财。买家也可以选择在交易后不分享个人信息。此外,由于隐藏了个人信息,也就保护了身份不被盗窃。
+- 对于主要的常见货币灾难,比如丢失、冻结或损坏,比特币是一种替代品。但是,始终都建议对比特币进行备份并使用密码加密。
+- 使用比特币进行网上购物和付款时,收取的费用少或者不收取。这就提高了使用时的可承受性。
+- 与其他电子货币不同,商家也面临较少的欺诈风险,因为比特币交易是无法逆转的。即使在高犯罪率和高欺诈的时刻,比特币也是有用的,因为在公开的公共总账(区块链)上难以欺骗某个人。
+- 比特币货币也很难被操纵,因为它是开源的,密码协议是非常安全的。
+- 交易也可以随时随地进行验证和批准。这是数字货币提供的灵活性水准。
+
+还可以阅读 - [Bitkey:专用于比特币交易的 Linux 发行版][2]
+
+### 如何挖掘比特币和完成必要的比特币管理任务的应用程序
+
+在数字货币中,比特币挖矿和管理需要额外的软件。有许多开源的比特币管理软件,便于进行支付、接收付款、加密和备份比特币,还有很多的比特币挖矿软件。有些网站,比如:通过查看广告赚取免费比特币的 [Freebitcoin][4],MoonBitcoin 是另一个可以免费注册并获得比特币的网站。但是,如果有空闲时间和相当多的人脉圈参与,会很方便。有很多提供比特币挖矿的网站,可以轻松注册然后开始挖矿。其中一个主要秘诀就是尽可能引入更多的人构建成一个大型的网络。
+
+与比特币一起使用时需要的应用程序包括比特币钱包,使得人们可以安全地持有比特币。这就像使用实物钱包来保存硬通货币一样,而这里是以数字形式存在的。钱包可以在这里下载 —— [比特币-钱包][6]。其他类似的应用包括:与比特币钱包类似的[区块链][7]。
+
+下面的屏幕截图分别显示了 Freebitco 和 MoonBitco 这两个挖矿网站。
+
+ [![freebitco bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/freebitco-bitcoin-mining-site_orig.jpg)][8]
+
+ [![moonbitcoin bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/moonbitcoin-bitcoin-mining-site_orig.png)][9]
+
+获得比特币的方式多种多样。其中一些包括比特币挖矿机的使用、在交易市场购买比特币以及免费的比特币在线采矿。比特币可以在 [MtGox][10](LCTT 译注:本文比较陈旧,此交易所已经倒闭),[bitNZ][11],[Bitstamp][12],[BTC-E][13],[VertEx][14] 等等这些网站买到,这些网站都提供了开源应用程序。这些应用包括:Bitminter、[5OMiner][15],[BFG Miner][16] 等等。这些应用程序使用一些图形卡和处理器功能来生成比特币。在个人电脑上开采比特币的效率在很大程度上取决于显卡的类型和采矿设备的处理器。(LCTT 译注:目前个人挖矿已经几乎毫无意义了)此外,还有很多安全的在线存储用于备份比特币。这些网站免费提供比特币存储服务。比特币管理网站的例子包括:[xapo][17] , [BlockChain][18] 等。在这些网站上注册需要有效的电子邮件和电话号码进行验证。 Xapo 通过电话应用程序提供额外的安全性,无论何时进行新的登录都需要做请求验证。
+
+### 比特币的缺点
+
+使用比特币数字货币所带来的众多优势不容忽视。但是,由于比特币还处于起步阶段,因此遇到了几个阻力点。例如,大多数人没有完全意识到比特币数字货币及其工作方式,这种认知的缺乏可以通过教育和宣传来缓解。比特币用户也面临价格波动,因为比特币的需求量高于可用的货币数量。但是,考虑到更长的时间,当更多人开始使用比特币的时候,波动性会降低。
+
+### 改进点
+
+由于[比特币技术][19]还处于起步阶段,它仍有改进的余地,可以变得更安全、更可靠。考虑到更长的时间,比特币货币将会发展到足以提供作为普通货币的灵活性。为了让比特币成功,除了给出有关比特币如何工作及其好处的信息之外,还需要让更多人了解比特币。
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxandubuntu.com/home/things-you-need-to-know-about-bitcoins
+
+作者:[LINUXANDUBUNTU][a]
+译者:[Flowsnow](https://github.com/Flowsnow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxandubuntu.com/
+[1]:http://www.linuxandubuntu.com/home/bitkey-a-linux-distribution-dedicated-for-conducting-bitcoin-transactions
+[2]:http://www.linuxandubuntu.com/home/bitkey-a-linux-distribution-dedicated-for-conducting-bitcoin-transactions
+[3]:http://www.linuxandubuntu.com/home/things-you-need-to-know-about-bitcoins
+[4]:https://freebitco.in/?r=2167375
+[5]:http://moonbit.co.in/?ref=c637809a5051
+[6]:https://bitcoin.org/en/choose-your-wallet +[7]:https://blockchain.info/wallet/ +[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/freebitco-bitcoin-mining-site_orig.jpg +[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/moonbitcoin-bitcoin-mining-site_orig.png +[10]:http://mtgox.com/ +[11]:https://en.bitcoin.it/wiki/BitNZ +[12]:https://www.bitstamp.net/ +[13]:https://btc-e.com/ +[14]:https://www.vertexinc.com/ +[15]:https://www.downloadcloud.com/bitcoin-miner-software.html +[16]:https://github.com/luke-jr/bfgminer +[17]:https://xapo.com/ +[18]:https://www.blockchain.com/ +[19]:https://en.wikipedia.org/wiki/Bitcoin diff --git a/translated/tech/20170924 Simulate System Loads.md b/published/20170924 Simulate System Loads.md similarity index 56% rename from translated/tech/20170924 Simulate System Loads.md rename to published/20170924 Simulate System Loads.md index 66b74be5c1..8c079664d9 100644 --- a/translated/tech/20170924 Simulate System Loads.md +++ b/published/20170924 Simulate System Loads.md @@ -1,71 +1,70 @@ -模拟系统负载的方法 +在 Linux 上简单模拟系统负载的方法 ====== + 系统管理员通常需要探索在不同负载对应用性能的影响。这意味着必须要重复地人为创造负载。当然,你可以通过专门的工具来实现,但有时你可能不想也无法安装新工具。 -每个 Linux 发行版中都自带有创建负载的工具。他们不如专门的工具那么灵活但它们是现成的,而且无需专门学习。 +每个 Linux 发行版中都自带有创建负载的工具。他们不如专门的工具那么灵活,但它们是现成的,而且无需专门学习。 ### CPU 下面命令会创建 CPU 负荷,方法是通过压缩随机数据并将结果发送到 `/dev/null`: + ``` cat /dev/urandom | gzip -9 > /dev/null - ``` 如果你想要更大的负荷,或者系统有多个核,那么只需要对数据进行压缩和解压就行了,像这样: + ``` cat /dev/urandom | gzip -9 | gzip -d | gzip -9 | gzip -d > /dev/null - ``` -按下 `CTRL+C` 来暂停进程。 +按下 `CTRL+C` 来终止进程。 -### RAM +### 内存占用 -下面命令会减少可用内存的总量。它是是通过在内存中创建文件系统然后往里面写文件来实现的。你可以使用任意多的内存,只需哟往里面写入更多的文件就行了。 +下面命令会减少可用内存的总量。它是通过在内存中创建文件系统然后往里面写文件来实现的。你可以使用任意多的内存,只需哟往里面写入更多的文件就行了。 + +首先,创建一个挂载点,然后将 ramfs 文件系统挂载上去: -首先,创建一个挂载点,然后将 `ramfs` 文件系统挂载上去: ``` mkdir z mount -t ramfs ramfs z/ - ``` 第二步,使用 `dd` 在该目录下创建文件。这里我们创建了一个 128M 的文件: + ``` dd if=/dev/zero of=z/file bs=1M count=128 - ``` 文件的大小可以通过下面这些操作符来修改: - + **bs=** 块大小。可以是任何数字后面接上 **B**( 表示字节 ),**K**( 表示 KB),**M**( 表示 MB) 或者 **G**( 表示 GB)。 - + **count=** 要写多少个块 +- `bs=` 块大小。可以是任何数字后面接上 `B`(表示字节),`K`(表示 KB),`M`( 表示 MB)或者 `G`(表示 GB)。 +- `count=` 要写多少个块。 +### 磁盘 I/O +创建磁盘 I/O 的方法是先创建一个文件,然后使用 `for` 循环来不停地拷贝它。 -### Disk +下面使用命令 `dd` 创建了一个全是零的 1G 大小的文件: -创建磁盘 I/O 的方法是先创建一个文件,然后使用 for 循环来不停地拷贝它。 - -下面使用命令 `dd` 创建了一个充满零的 1G 大小的文件: ``` dd if=/dev/zero of=loadfile bs=1M count=1024 - ``` -下面命令用 for 循环执行 10 次操作。每次都会拷贝 `loadfile` 来覆盖 `loadfile1`: +下面命令用 `for` 循环执行 10 次操作。每次都会拷贝 `loadfile` 来覆盖 `loadfile1`: + ``` for i in {1..10}; do cp loadfile loadfile1; done - ``` -通过修改 `{1。.10}` 中的第二个参数来调整运行时间的长短。 +通过修改 `{1..10}` 中的第二个参数来调整运行时间的长短。(LCTT 译注:你的 Linux 系统中的默认使用的 `cp` 命令很可能是 `cp -i` 的别名,这种情况下覆写会提示你输入 `y` 来确认,你可以使用 `-f` 参数的 `cp` 命令来覆盖此行为,或者直接用 `/bin/cp` 命令。) 若你想要一直运行,直到按下 `CTRL+C` 来停止,则运行下面命令: + ``` while true; do cp loadfile loadfile1; done - ``` -------------------------------------------------------------------------------- @@ -73,7 +72,7 @@ via: https://bash-prompt.net/guides/create-system-load/ 作者:[Elliot Cooper][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md b/published/20170925 A Commandline Fuzzy Search Tool For Linux.md similarity index 55% rename from translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md rename to published/20170925 A Commandline Fuzzy Search Tool For 
Linux.md index 9d16aaf1aa..d76309f820 100644 --- a/translated/tech/20170925 A Commandline Fuzzy Search Tool For Linux.md +++ b/published/20170925 A Commandline Fuzzy Search Tool For Linux.md @@ -1,34 +1,38 @@ -Pick - 一款 Linux 上的命令行模糊搜索工具 +Pick:一款 Linux 上的命令行模糊搜索工具 ====== -![](https://www.ostechnix.com/wp-content/uploads/2017/09/search-720x340.jpg) -今天,我们要讲的是一款有趣的命令行工具,名叫 `Pick`。它允许用户通过 ncurses(3X) 界面来从一系列选项中进行选择,而且还支持模糊搜索的功能。当你想要选择某个名字中包含非英文字符的目录或文件时,这款工具就很有用了。你根本都无需学习如何输入非英文字符。借助 Pick,你可以很方便地进行搜索,选择,然后浏览该文件或进入该目录。你甚至无需输入任何字符来过滤文件/目录。这很适合那些有大量目录和文件的人来用。 +![](https://www.ostechnix.com/wp-content/uploads/2017/09/search-720x340.jpg) -### Pick - 一款 Linux 上的命令行模糊搜索工具 +今天,我们要讲的是一款有趣的命令行工具,名叫 Pick。它允许用户通过 ncurses(3X) 界面来从一系列选项中进行选择,而且还支持模糊搜索的功能。当你想要选择某个名字中包含非英文字符的目录或文件时,这款工具就很有用了。你根本都无需学习如何输入非英文字符。借助 Pick,你可以很方便地进行搜索、选择,然后浏览该文件或进入该目录。你甚至无需输入任何字符来过滤文件/目录。这很适合那些有大量目录和文件的人来用。 -#### 安装 Pick +### 安装 Pick + +对 Arch Linux 及其衍生品来说,Pick 放在 [AUR][1] 中。因此 Arch 用户可以使用类似 [Pacaur][2],[Packer][3],以及 [Yaourt][4] 等 AUR 辅助工具来安装它。 -对 **Arch Linux** 及其衍生品来说,pick 放在 [**AUR**][1] 中。因此 Arch 用户可以使用类似 [**Pacaur**][2],[**Packer**][3],以及 [**Yaourt**][4] 等 AUR 辅助工具来安装它。 ``` pacaur -S pick ``` 或者, + ``` packer -S pick ``` 或者, + ``` yaourt -S pick ``` -**Debian**,**Ubuntu**,**Linux Mint** 用户则可以通过运行下面命令来安装 Pick。 +Debian,Ubuntu,Linux Mint 用户则可以通过运行下面命令来安装 Pick。 + ``` sudo apt-get install pick ``` -其他的发行版则可以从[**这里 **][5] 下载最新的安装包,然后按照下面的步骤来安装。在写本指南时,其最新版为 1.9.0。 +其他的发行版则可以从[这里][5]下载最新的安装包,然后按照下面的步骤来安装。在写本指南时,其最新版为 1.9.0。 + ``` wget https://github.com/calleerlandsson/pick/releases/download/v1.9.0/pick-1.9.0.tar.gz tar -zxvf pick-1.9.0.tar.gz @@ -36,81 +40,87 @@ cd pick-1.9.0/ ``` 使用下面命令进行配置: + ``` ./configure ``` -最后,构建并安装 pick: +最后,构建并安装 Pick: + ``` make sudo make install ``` -#### 用法 +### 用法 通过将它与其他命令集成能够大幅简化你的工作。我这里会给出一些例子,让你理解它是怎么工作的。 让们先创建一堆目录。 + ``` mkdir -p abcd/efgh/ijkl/mnop/qrst/uvwx/yz/ ``` -现在,你想进入目录 `/ijkl/`。你有两种选择。可以使用 **cd** 命令: +现在,你想进入目录 `/ijkl/`。你有两种选择。可以使用 `cd` 命令: + ``` cd abcd/efgh/ijkl/ ``` -或者,创建一个[**快捷方式 **][6] 或者说别名指向这个目录,这样你可以迅速进入该目录。 +或者,创建一个[快捷方式][6] 或者说别名指向这个目录,这样你可以迅速进入该目录。 + +但,使用 `pick` 命令则问题变得简单的多。看下面这个例子。 -但,使用 "pick" 命令则问题变得简单的多。看下面这个例子。 ``` cd $(find . -type d | pick) ``` 这个命令会列出当前工作目录下的所有目录及其子目录,你可以用上下箭头选择你想进入的目录,然后按下回车就行了。 -**像这样:** +像这样: -[![][7]][8] +![][8] 而且,它还会根据你输入的内容过滤目录和文件。比如,当我输入 “or” 时会显示如下结果。 -[![][7]][9] +![][9] -这只是一个例子。你也可以将 “pick” 命令跟其他命令一起混用。 +这只是一个例子。你也可以将 `pick` 命令跟其他命令一起混用。 这是另一个例子。 + ``` find -type f | pick | xargs less ``` -该命令让你选择当前目录中的某个文件并用 less 来查看它。 +该命令让你选择当前目录中的某个文件并用 `less` 来查看它。 -[![][7]][10] +![][10] + +还想看其他例子?还有呢。下面命令让你选择当前目录下的文件或目录,并将之迁移到其他地方去,比如这里我们迁移到 `/home/sk/ostechnix`。 -还想看其他例子?还有呢。下面命令让你选择当前目录下的文件或目录,并将之迁移到其他地方去,比如这里我们迁移到 **/home/sk/ostechnix**。 ``` mv "$(find . 
-maxdepth 1 |pick)" /home/sk/ostechnix/ ``` -[![][7]][11] +![][11] 通过上下按钮选择要迁移的文件,然后按下回车就会把它迁移到 `/home/sk/ostechnix/` 目录中的。 -[![][7]][12] +![][12] -从上面的结果中可以看到,我把一个名叫 “abcd” 的目录移动到 "ostechnix" 目录中了。 +从上面的结果中可以看到,我把一个名叫 `abcd` 的目录移动到 `ostechnix` 目录中了。 -使用案例是无限的。甚至 Vim 编辑器上还有一个叫做 [**pick.vim**][13] 的插件让你在 Vim 中选择更加方便。 +使用方式是无限的。甚至 Vim 编辑器上还有一个叫做 [pick.vim][13] 的插件让你在 Vim 中选择更加方便。 要查看详细信息,请参阅它的 man 页。 + ``` man pick ``` -我们的讲解至此就结束了。希望这狂工具能给你们带来帮助。如果你觉得我们的指南有用的话,请将它分享到您的社交网络上,并向大家推荐 OSTechNix 博客。 - - +我们的讲解至此就结束了。希望这款工具能给你们带来帮助。如果你觉得我们的指南有用的话,请将它分享到您的社交网络上,并向大家推荐我们。 -------------------------------------------------------------------------------- @@ -118,7 +128,7 @@ via: https://www.ostechnix.com/pick-commandline-fuzzy-search-tool-linux/ 作者:[SK][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -130,9 +140,9 @@ via: https://www.ostechnix.com/pick-commandline-fuzzy-search-tool-linux/ [5]:https://github.com/calleerlandsson/pick/releases/ [6]:https://www.ostechnix.com/create-shortcuts-frequently-used-directories-shell/ [7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[8]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_001-3.png () -[9]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_002-1.png () -[10]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_004-1.png () -[11]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_005.png () -[12]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_006-1.png () +[8]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_001-3.png +[9]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_002-1.png +[10]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_004-1.png +[11]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_005.png +[12]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_006-1.png [13]:https://github.com/calleerlandsson/pick.vim/ diff --git a/translated/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md b/published/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md similarity index 58% rename from translated/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md rename to published/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md index 80566a8ae0..c8129ee61e 100644 --- a/translated/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md +++ b/published/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md @@ -1,8 +1,9 @@ -如何方便地寻找 GitHub 上超棒的项目和资源 +如何轻松地寻找 GitHub 上超棒的项目和资源 ====== -![](https://www.ostechnix.com/wp-content/uploads/2017/09/Awesome-finder-Find-Awesome-Projects-720x340.png) -在 **GitHub** 网站上每天都会新增上百个项目。由于 GitHub 上有成千上万的项目,要在上面搜索好的项目简直要累死人。好在,有那么一伙人已经创建了一些这样的列表。其中包含的类别五花八门,如编程,数据库,编辑器,游戏,娱乐等。这使得我们寻找在 GitHub 上托管的项目,软件,资源,裤,书籍等其他东西变得容易了很多。有一个 GitHub 用户更进了一步,创建了一个名叫 `Awesome-finder` 的命令行工具,用来在 awesome 系列的仓库中寻找超棒的项目和资源。该工具帮助我们不需要离开终端(当然也就不需要使用浏览器了)的情况下浏览 awesome 列表。 +![](https://www.ostechnix.com/wp-content/uploads/2017/09/Awesome-finder-Find-Awesome-Projects-720x340.png) + +在 GitHub 网站上每天都会新增上百个项目。由于 GitHub 上有成千上万的项目,要在上面搜索好的项目简直要累死人。好在,有那么一伙人已经创建了一些这样的列表。其中包含的类别五花八门,如编程、数据库、编辑器、游戏、娱乐等。这使得我们寻找在 GitHub 上托管的项目、软件、资源、库、书籍等其他东西变得容易了很多。有一个 GitHub 用户更进了一步,创建了一个名叫 `Awesome-finder` 的命令行工具,用来在 awesome 
系列的仓库中寻找超棒的项目和资源。该工具可以让我们不需要离开终端(当然也就不需要使用浏览器了)的情况下浏览 awesome 列表。 在这篇简单的说明中,我会向你演示如何方便地在类 Unix 系统中浏览 awesome 列表。 @@ -12,12 +13,14 @@ 使用 `pip` 可以很方便地安装该工具,`pip` 是一个用来安装使用 Python 编程语言开发的程序的包管理器。 -在 **Arch Linux** 一起衍生发行版中(比如 **Antergos**,**Manjaro Linux**),你可以使用下面命令安装 `pip`: +在 Arch Linux 及其衍生发行版中(比如 Antergos,Manjaro Linux),你可以使用下面命令安装 `pip`: + ``` sudo pacman -S python-pip ``` -在 **RHEL**,**CentOS** 中: +在 RHEL,CentOS 中: + ``` sudo yum install epel-release ``` @@ -25,32 +28,33 @@ sudo yum install epel-release sudo yum install python-pip ``` -在 **Fedora** 上: +在 Fedora 上: + ``` sudo dnf install epel-release -``` -``` sudo dnf install python-pip ``` -在 **Debian**,**Ubuntu**,**Linux Mint** 上: +在 Debian,Ubuntu,Linux Mint 上: + ``` sudo apt-get install python-pip ``` -在 **SUSE**,**openSUSE** 上: +在 SUSE,openSUSE 上: ``` sudo zypper install python-pip ``` -PIP 安装好后,用下面命令来安装 'Awesome-finder'。 +`pip` 安装好后,用下面命令来安装 'Awesome-finder'。 + ``` sudo pip install awesome-finder ``` #### 用法 -Awesome-finder 会列出 GitHub 网站中如下这些主题(其实就是仓库)的内容: +Awesome-finder 会列出 GitHub 网站中如下这些主题(其实就是仓库)的内容: * awesome * awesome-android @@ -66,83 +70,84 @@ Awesome-finder 会列出 GitHub 网站中如下这些主题(其实就是仓库) * awesome-scala * awesome-swift - 该列表会定期更新。 比如,要查看 `awesome-go` 仓库中的列表,只需要输入: + ``` awesome go ``` 你就能看到用 “Go” 写的所有流行的东西了,而且这些东西按字母顺序进行了排列。 -[![][1]][2] +![][2] -你可以通过 **上/下** 箭头在列表中导航。一旦找到所需要的东西,只需要选中它,然后按下 **回车** 键就会用你默认的 web 浏览器打开相应的链接了。 +你可以通过 上/下 箭头在列表中导航。一旦找到所需要的东西,只需要选中它,然后按下回车键就会用你默认的 web 浏览器打开相应的链接了。 类似的, - * "awesome android" 命令会搜索 **awesome-android** 仓库。 - * "awesome awesome" 命令会搜索 **awesome** 仓库。 - * "awesome elixir" 命令会搜索 **awesome-elixir**。 - * "awesome go" 命令会搜索 **awesome-go**。 - * "awesome ios" 命令会搜索 **awesome-ios**。 - * "awesome java" 命令会搜索 **awesome-java**。 - * "awesome javascript" 命令会搜索 **awesome-javascript**。 - * "awesome php" 命令会搜索 **awesome-php**。 - * "awesome python" 命令会搜索 **awesome-python**。 - * "awesome ruby" 命令会搜索 **awesome-ruby**。 - * "awesome rust" 命令会搜索 **awesome-rust**。 - * "awesome scala" 命令会搜索 **awesome-scala**。 - * "awesome swift" 命令会搜索 **awesome-swift**。 + * `awesome android` 命令会搜索 awesome-android 仓库。 + * `awesome awesome` 命令会搜索 awesome 仓库。 + * `awesome elixir` 命令会搜索 awesome-elixir。 + * `awesome go` 命令会搜索 awesome-go。 + * `awesome ios` 命令会搜索 awesome-ios。 + * `awesome java` 命令会搜索 awesome-java。 + * `awesome javascript` 命令会搜索 awesome-javascript。 + * `awesome php` 命令会搜索 awesome-php。 + * `awesome python` 命令会搜索 awesome-python。 + * `awesome ruby` 命令会搜索 awesome-ruby。 + * `awesome rust` 命令会搜索 awesome-rust。 + * `awesome scala` 命令会搜索 awesome-scala。 + * `awesome swift` 命令会搜索 awesome-swift。 -而且,它还会随着你在提示符中输入的内容而自动进行筛选。比如,当我输入 "dj" 后,他会显示与 Django 相关的内容。 +而且,它还会随着你在提示符中输入的内容而自动进行筛选。比如,当我输入 `dj` 后,他会显示与 Django 相关的内容。 -[![][1]][3] +![][3] 若你想从最新的 `awesome-`( 而不是用缓存中的数据) 中搜索,使用 `-f` 或 `-force` 标志: + ``` awesome -f (--force) - ``` -**像这样:** +像这样: + ``` awesome python -f ``` 或, + ``` awesome python --force ``` -上面命令会显示 **awesome-python** GitHub 仓库中的列表。 +上面命令会显示 awesome-python GitHub 仓库中的列表。 很棒,对吧? -要退出这个工具的话,按下 **ESC** 键。要显示帮助信息,输入: +要退出这个工具的话,按下 ESC 键。要显示帮助信息,输入: + ``` awesome -h ``` 本文至此就结束了。希望本文能对你产生帮助。如果你觉得我们的文章对你有帮助,请将他们分享到你的社交网络中去,造福大众。我们马上还有其他好东西要来了。敬请期待! 
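+附:awesome 系列列表更新很频繁,因此可以偶尔用 pip 把这个工具本身也升级一下(假设当初就是按上文用 pip 安装的):
+
+```
+sudo pip install --upgrade awesome-finder
+```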
- - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/ 作者:[SK][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.ostechnix.com/author/sk/ [1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_008-1.png () -[3]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_009.png () +[2]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_008-1.png +[3]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_009.png [4]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=reddit (Click to share on Reddit) [5]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=twitter (Click to share on Twitter) [6]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=facebook (Click to share on Facebook) diff --git a/translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md b/published/20170927 Microservices and containers- 5 pitfalls to avoid.md similarity index 67% rename from translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md rename to published/20170927 Microservices and containers- 5 pitfalls to avoid.md index eb556dd301..b8d8c8d410 100644 --- a/translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md +++ b/published/20170927 Microservices and containers- 5 pitfalls to avoid.md @@ -1,57 +1,61 @@ 微服务和容器:需要去防范的 5 个“坑” ====== +> 微服务与容器天生匹配,但是你需要避开一些常见的陷阱。 + ![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk) 因为微服务和容器是 [天生的“一对”][1],所以一起来使用它们,似乎也就不会有什么问题。当我们将这对“天作之合”投入到生产系统后,你就会发现,随着你的 IT 基础的提升,等待你的将是大幅上升的成本。是不是这样的? +(让我们等一下,等人们笑声过去) + 是的,很遗憾,这并不是你所希望的结果。虽然这两种技术的组合是非常强大的,但是,如果没有很好的规划和适配,它们并不能发挥出强大的性能来。在前面的文章中,我们整理了如果你想 [使用它们你应该掌握的知识][2]。但是,那些都是组织在容器中使用微服务时所遇到的常见问题。 -事先了解这些可能出现的问题,可以为你的成功奠定更坚实的基础。 +事先了解这些可能出现的问题,能够帮你避免这些问题,为你的成功奠定更坚实的基础。 -微服务和容器技术的出现是基于组织的需要、知识、资源等等更多的现实的要求。Mac Browning 说,“他们最常犯的一个 [错误] 是试图一次就想“搞定”一切”,他是 [DigitalOcean][3] 的工程部经理。“而真正需要面对的问题是,你的公司应该采用什么样的容器和微服务。” +微服务和容器技术的出现是基于组织的需要、知识、资源等等更多的现实的要求。Mac Browning 说,“他们最常犯的一个 [错误] 是试图一次就想‘搞定’一切”,他是 [DigitalOcean][3] 的工程部经理。“而真正需要面对的问题是,你的公司应该采用什么样的容器和微服务。” **[ 努力向你的老板和同事去解释什么是微服务?阅读我们的入门读本[如何简单明了地解释微服务][4]。]** Browning 和其他的 IT 专业人员分享了他们遇到的,在组织中使用容器化微服务时的五个陷阱,特别是在他们的生产系统生命周期的早期时候。在你的组织中需要去部署微服务和容器时,了解这些知识,将有助于你去评估微服务和容器化的部署策略。 -### 1. 在部署微服务和容器化上,试图同时从零开始 +### 1、 在部署微服务和容器化上,试图同时从零开始 -如果你刚开始从完全的实体服务器上开始改变,或者如果你的组织在微服务和容器化上还没有足够的知识储备,那么,请记住:微服务和容器化并不是拴在一起,不可分别部署的。这就意味着,你可以发挥你公司内部专家的技术特长,先从部署其中的一个开始。Kevin McGrath,CTO, [Sungard 服务可用性][5] 资深设计师,他建议,通过首先使用容器化来为你的团队建立知识和技能储备,通过对现有应用或者新应用进行容器化部署,接着再将它们迁移到微服务架构,这样才能在最后的阶段感受到它们的优势所在。 +如果你刚开始从完全的单例应用开始改变,或者如果你的组织在微服务和容器化上还没有足够的知识储备,那么,请记住:微服务和容器化并不是拴在一起、不可分别部署的。这就意味着,你可以发挥你公司内部专家的技术特长,先从部署其中的一个开始。Sungard Availability Services][5] 的资深 CTO 架构师 Kevin McGrath 建议,通过首先使用容器化来为你的团队建立知识和技能储备,通过对现有应用或者新应用进行容器化部署,接着再将它们迁移到微服务架构,这样才能最终感受到它们的优势所在。 -McGrath 说,“微服务要想运行的很好,需要公司经过多年的反复迭代,这样才能实现快速部署和迁移”,“如果组织不能实现快速迁移,那么支持微服务将很困难。实现快速迁移,容器化可以帮助你,这样就不用担心业务整体停机” +McGrath 说,“微服务要想运行的很好,需要公司经过多年的反复迭代,这样才能实现快速部署和迁移”,“如果组织不能实现快速迁移,那么支持微服务将很困难。实现快速迁移,容器化可以帮助你,这样就不用担心业务整体停机”。 -### 2. 
从一个面向客户的或者关键的业务应用开始 +### 2、 从一个面向客户的或者关键的业务应用开始 -对组织来说,一个相关陷阱恰恰就是引入容器、微服务、或者同时两者都引入的这个开端:在尝试征服一片丛林中的雄狮之前,你应该先去征服处于食物链底端的一些小动物,以取得一些实践经验。 +对组织来说,一个相关陷阱恰恰就是从容器、微服务、或者两者同时起步:在尝试征服一片丛林中的雄狮之前,你应该先去征服处于食物链底端的一些小动物,以取得一些实践经验。 -在你的学习过程中预期会有一些错误出现 - 你是希望这些错误发生在面向客户的关键业务应用上,还是,仅对 IT 或者其他内部团队可见的低风险应用上? +在你的学习过程中可以预期会有一些错误出现 —— 你是希望这些错误发生在面向客户的关键业务应用上,还是,仅对 IT 或者其他内部团队可见的低风险应用上? DigitalOcean 的 Browning 说,“如果整个生态系统都是新的,为了获取一些微服务和容器方面的操作经验,那么,将它们先应用到影响面较低的区域,比如像你的持续集成系统或者内部工具,可能是一个低风险的做法。”你获得这方面的经验以后,当然会将这些技术应用到为客户提供服务的生产系统上。而现实情况是,不论你准备的如何周全,都不可避免会遇到问题,因此,需要提前为可能出现的问题制定应对之策。 -### 3. 在没有合适的团队之前引入了太多的复杂性 +### 3、 在没有合适的团队之前引入了太多的复杂性 由于微服务架构的弹性,它可能会产生复杂的管理需求。 -作为 [Red Hat][6] 技术的狂热拥护者,[Gordon Haff][7] 最近写道,“一个符合 OCI 标准的容器运行时本身管理单个容器是很擅长的,但是,当你开始使用多个容器和容器化应用时,并将它们分解为成百上千个节点后,管理和编配它们将变得极为复杂。最终,你将回过头来需要将容器分组来提供服务 - 比如,跨容器的网络、安全、测控” +作为 [Red Hat][6] 技术的狂热拥护者,[Gordon Haff][7] 最近写道,“一个符合 OCI 标准的容器运行时本身管理单个容器是很擅长的,但是,当你开始使用多个容器和容器化应用时,并将它们分解为成百上千个节点后,管理和编配它们将变得极为复杂。最终,你将需要回过头来将容器分组来提供服务 —— 比如,跨容器的网络、安全、测控”。 -Haff 提示说,“幸运的是,由于容器是可移植的,并且,与之相关的管理栈也是可移植的”。“这时出现的编配技术,比如像 [Kubernetes][8] ,使得这种 IT 需求变得简单化了”(更多内容请查阅 Haff 的文章:[容器化为编写应用带来的 5 个优势][1]) +Haff 提示说,“幸运的是,由于容器是可移植的,并且,与之相关的管理栈也是可移植的”。“这时出现的编配技术,比如像 [Kubernetes][8] ,使得这种 IT 需求变得简单化了”(更多内容请查阅 Haff 的文章:[容器化为编写应用带来的 5 个优势][1])。 另外,你需要合适的团队去做这些事情。如果你已经有 [DevOps shop][9],那么,你可能比较适合做这种转换。因为,从一开始你已经聚集了相关技能的人才。 -Mike Kavis 说,“随着时间的推移,会有越来越多的服务得以部署,管理起来会变得很不方便”,他是 [Cloud Technology Partners][10] 的副总裁兼首席云架构设计师。他说,“在 DevOps 的关键过程中,确保各个领域的专家 - 开发、测试、安全、运营等等 - 全部者参与进来,并且在基于容器的微服务中,在构建、部署、运行、安全方面实现协作。” +Mike Kavis 说,“随着时间的推移,部署了越来越多的服务,管理起来会变得很不方便”,他是 [Cloud Technology Partners][10] 的副总裁兼首席云架构设计师。他说,“在 DevOps 的关键过程中,确保各个领域的专家 —— 开发、测试、安全、运营等等 —— 全部都参与进来,并且在基于容器的微服务中,在构建、部署、运行、安全方面实现协作。” -### 4. 忽视重要的需求:自动化 +### 4、 忽视重要的需求:自动化 除了具有一个合适的团队之外,那些在基于容器化的微服务部署比较成功的组织都倾向于以“实现尽可能多的自动化”来解决固有的复杂性。 -Carlos Sanchez 说,“实现分布式架构并不容易,一些常见的挑战,像数据持久性、日志、排错等等,在微服务架构中都会变得很复杂”,他是 [CloudBees][11] 的资深软件工程师。根据定义,Sanchez 提到的分布式架构,随着业务的增长,将变成一个巨大无比的繁重的运营任务。“服务和组件的增殖,将使得运营自动化变成一项非常强烈的需求”。Sanchez 警告说。“手动管理将限制服务的规模” +Carlos Sanchez 说,“实现分布式架构并不容易,一些常见的挑战,像数据持久性、日志、排错等等,在微服务架构中都会变得很复杂”,他是 [CloudBees][11] 的资深软件工程师。根据定义,Sanchez 提到的分布式架构,随着业务的增长,将变成一个巨大无比的繁重的运营任务。“服务和组件的增殖,将使得运营自动化变成一项非常强烈的需求”。Sanchez 警告说。“手动管理将限制服务的规模”。 -### 5. 
随着时间的推移,微服务变得越来越臃肿 +### 5、 随着时间的推移,微服务变得越来越臃肿 -在一个容器中运行一个服务或者软件组件并不神奇。但是,这样做并不能证明你就一定在使用微服务。Manual Nedbal, [ShieldX Networks][12] 的 CTO,它警告说,IT 专业人员要确保,随着时间的推移,微服务仍然是微服务。 +在一个容器中运行一个服务或者软件组件并不神奇。但是,这样做并不能证明你就一定在使用微服务。Manual Nedbal, [ShieldX Networks][12] 的 CTO,他警告说,IT 专业人员要确保,随着时间的推移,微服务仍然是微服务。 -Nedbal 说,“随着时间的推移,一些软件组件积累了大量的代码和特性,将它们将在一个容器中将会产生并不需要的微服务,也不会带来相同的优势”,也就是说,“随着组件的变大,工程师需要找到合适的时机将它们再次分解” +Nedbal 说,“随着时间的推移,一些软件组件积累了大量的代码和特性,将它们放在一个容器中将会产生并不需要的微服务,也不会带来相同的优势”,也就是说,“随着组件的变大,工程师需要找到合适的时机将它们再次分解”。 -------------------------------------------------------------------------------- @@ -59,7 +63,7 @@ via: https://enterprisersproject.com/article/2017/9/using-microservices-containe 作者:[Kevin Casey][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20171002 Connect To Wifi From The Linux Command Line.md b/published/20171002 Connect To Wifi From The Linux Command Line.md similarity index 92% rename from translated/tech/20171002 Connect To Wifi From The Linux Command Line.md rename to published/20171002 Connect To Wifi From The Linux Command Line.md index 50c25bd839..c866c10590 100644 --- a/translated/tech/20171002 Connect To Wifi From The Linux Command Line.md +++ b/published/20171002 Connect To Wifi From The Linux Command Line.md @@ -28,22 +28,20 @@ wpa_supplicant 可以作为命令行工具来用。使用一个简单的配置 wpa_supplicant 中有一个工具叫做 `wpa_cli`,它提供了一个命令行接口来管理你的 WiFi 连接。事实上你可以用它来设置任何东西,但是设置一个配置文件看起来要更容易一些。 使用 root 权限运行 `wpa_cli`,然后扫描网络。 -``` +``` # wpa_cli > scan - ``` 扫描过程要花上一点时间,并且会显示所在区域的那些网络。记住你想要连接的那个网络。然后输入 `quit` 退出。 ### 生成配置块并且加密你的密码 -还有更方便的工具可以用来设置配置文件。它接受网络名称和密码作为参数,然后生成一个包含该网路配置块(其中的密码被加密处理了)的配置文件。 +还有更方便的工具可以用来设置配置文件。它接受网络名称和密码作为参数,然后生成一个包含该网路配置块(其中的密码被加密处理了)的配置文件。 + ``` - # wpa_passphrase networkname password > /etc/wpa_supplicant/wpa_supplicant.conf - ``` ### 裁剪你的配置 @@ -51,9 +49,9 @@ wpa_supplicant 中有一个工具叫做 `wpa_cli`,它提供了一个命令行 现在你已经有了一个配置文件了,这个配置文件就是 `/etc/wpa_supplicant/wpa_supplicant.conf`。其中的内容并不多,只有一个网络块,其中有网络名称和密码,不过你可以在此基础上对它进行修改。 用喜欢的编辑器打开该文件,首先删掉说明密码的那行注释。然后,将下面行加到配置最上方。 + ``` ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel - ``` 这一行只是让 `wheel` 组中的用户可以管理 wpa_supplicant。这会方便很多。 @@ -61,29 +59,29 @@ ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel 其他的内容则添加到网络块中。 如果你要连接到一个隐藏网络,你可以添加下面行来通知 wpa_supplicant 先扫描该网络。 + ``` scan_ssid=1 - ``` 下一步,设置协议以及密钥管理方面的配置。下面这些是 WPA2 相关的配置。 + ``` proto=RSN key_mgmt=WPA-PSK - ``` -group 和 pairwise 配置告诉 wpa_supplicant 你是否使用了 CCMP,TKIP,或者两者都用到了。为了安全考虑,你应该只用 CCMP。 +`group` 和 `pairwise` 配置告诉 wpa_supplicant 你是否使用了 CCMP、TKIP,或者两者都用到了。为了安全考虑,你应该只用 CCMP。 + ``` group=CCMP pairwise=CCMP - ``` 最后,设置网络优先级。越高的值越会优先连接。 + ``` priority=10 - ``` ![Complete WPA_Supplicant Settings][1] @@ -94,14 +92,13 @@ priority=10 当然,该方法并不是用于即时配置无线网络的最好方法,但对于定期连接的网络来说,这种方法非常有效。 - -------------------------------------------------------------------------------- via: https://linuxconfig.org/connect-to-wifi-from-the-linux-command-line 作者:[Nick Congleton][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20171004 How To Create A Video From PDF Files In Linux.md b/published/20171004 How To Create A Video From PDF Files In Linux.md new file mode 100644 index 0000000000..ffa9ff915a --- /dev/null +++ b/published/20171004 How To Create A 
Video From PDF Files In Linux.md
new file mode 100644
index 0000000000..ffa9ff915a
--- /dev/null
+++ b/published/20171004 How To Create A Video From PDF Files In Linux.md
@@ -0,0 +1,97 @@
+如何在 Linux 中从 PDF 创建视频
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2017/10/Video-1-720x340.jpg)
+
+我在我的平板电脑中收集了大量的 PDF 文件,其中主要是 Linux 教程。有时候我懒得在平板电脑上看。我认为如果我能够从 PDF 创建视频,并在大屏幕设备(如电视机或计算机)中观看会更好。虽然我对 [FFMpeg][1] 有一些经验,但我不知道如何使用它来创建视频。经过一番 Google 搜索,我想出了一个很好的解决方案。对于那些想从一组 PDF 文件制作视频文件的人,请继续阅读。这并不困难。
+
+### 在 Linux 中从 PDF 创建视频
+
+为此,你需要在系统中安装 “FFMpeg” 和 “ImageMagick”。
+
+要安装 FFMpeg,请参考以下链接。
+
+- [在 Linux 上安装 FFMpeg][2]
+
+ImageMagick 可在大多数 Linux 发行版的官方仓库中找到。
+
+在 Arch Linux 以及 Antergos、Manjaro Linux 等衍生产品上,运行以下命令进行安装。
+
+```
+sudo pacman -S imagemagick
+```
+
+Debian、Ubuntu、Linux Mint:
+
+```
+sudo apt-get install imagemagick
+```
+
+Fedora:
+
+```
+sudo dnf install imagemagick
+```
+
+RHEL、CentOS、Scientific Linux:
+
+```
+sudo yum install imagemagick
+```
+
+SUSE、openSUSE:
+
+```
+sudo zypper install imagemagick
+```
+
+在安装 ffmpeg 和 imagemagick 之后,将你的 PDF 文件转换成图像格式,如 PNG 或 JPG,如下所示。
+
+```
+convert -density 400 input.pdf picture.png
+```
+
+这里,`-density 400` 指定渲染 PDF 时使用的分辨率(DPI),数值越大,生成的图像越清晰。
+
+上面的命令会将指定 PDF 的所有页面转换为 PNG 格式。PDF 中的每个页面都将被转换成 PNG 文件,并保存在当前目录中,文件名为:`picture-0.png`、`picture-1.png` 等。根据所选 PDF 的页数,这将需要一些时间。
+
+将 PDF 中的所有页面转换为 PNG 格式后,运行以下命令以从 PNG 创建视频文件。
+
+```
+ffmpeg -r 1/10 -i picture-%01d.png -c:v libx264 -r 30 -pix_fmt yuv420p video.mp4
+```
+
+这里:
+
+* `-r 1/10`:每张图像显示 10 秒。
+* `-i picture-%01d.png`:读取以 `picture-` 开头,接着是一位数字(`%01d`),最后以 `.png` 结尾的所有图片。如果图片名称带有 2 位数字(也就是 `picture-10.png`、`picture-11.png` 等),在上面的命令中使用(`%02d`)。
+* `-c:v libx264`:输出的视频编码器(即 h264)。
+* `-r 30`:输出视频的帧率。
+* `-pix_fmt yuv420p`:输出视频的像素格式(yuv420p 的播放兼容性最好)。
+* `video.mp4`:以 .mp4 格式输出视频文件。
+
+好了,视频文件完成了!你可以在任何支持 .mp4 格式的设备上播放它。接下来,我需要找到一种方法来为我的视频插入一段很酷的音乐。我希望这也不难。(文末附了一个示例。)
+
+如果你想要其他的分辨率,你不必重新开始。只要将输出的视频文件转换为你选择的任何更高/更低的分辨率,比如说 720p,如下所示。
+
+```
+ffmpeg -i video.mp4 -vf scale=-1:720 video_720p.mp4
+```
+
+请注意,使用 ffmpeg 创建视频需要一台配置好的 PC。在转换视频时,ffmpeg 会消耗大量系统资源。我建议在高端系统中这样做。
+
+就是这些了。希望你觉得这个有帮助。还会有更好的东西。敬请关注!
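+附:前面提到想给视频配上背景音乐,这一步同样可以用 ffmpeg 完成。下面是一个示例,其中 `music.mp3` 是假设的音乐文件名;`-c:v copy` 表示直接复制视频流而不重新编码,`-shortest` 让输出在较短的那路输入结束时停止:
+
+```
+ffmpeg -i video.mp4 -i music.mp3 -c:v copy -c:a aac -shortest video_music.mp4
+```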
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/create-video-pdf-files-linux/ + +作者:[SK][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/20-ffmpeg-commands-beginners/ +[2]:https://www.ostechnix.com/install-ffmpeg-linux/ \ No newline at end of file diff --git a/translated/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md b/published/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md similarity index 66% rename from translated/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md rename to published/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md index e56ce8e292..8593e4f558 100644 --- a/translated/tech/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md +++ b/published/20171005 python-hwinfo - Display Summary Of Hardware Information In Linux.md @@ -1,80 +1,81 @@ -# python-hwinfo:使用Linux系统工具展示硬件信息概况 +python-hwinfo:使用 Linux 系统工具展示硬件信息概况 +========== ---- -到目前为止,获取Linux系统硬件信息和配置已经被大部分工具所涵盖,不过也有许多命令可用于相同目的。 +到目前为止,我们已经介绍了大部分获取 Linux 系统硬件信息和配置的工具,不过也有许多命令可用于相同目的。 -而且,一些工具会显示所有硬件组成的详细信息,重置后,只显示特定设备的信息。 +而且,一些工具会显示所有硬件组件的详细信息,或只显示特定设备的信息。 -在这个系列中, 今天我们讨论一下关于[python-hwinfo][1], 它是一个展示硬件信息概况和整洁配置的工具之一。 +在这个系列中, 今天我们讨论一下关于 [python-hwinfo][1], 它是一个展示硬件信息概况的工具之一,并且其配置简洁。 -### 什么是python-hwinfo +### 什么是 python-hwinfo -这是一个通过解析系统工具(例如lspci和dmidecode)的输出,来检查硬件和设备的Python库。 +这是一个通过解析系统工具(例如 `lspci` 和 `dmidecode`)的输出,来检查硬件和设备的 Python 库。 -它提供了一个简单的命令行工具,可以用来检查本地,远程和捕获到的主机。用sudo运行命令以获得最大的信息。 +它提供了一个简单的命令行工具,可以用来检查本地、远程的主机和记录的信息。用 `sudo` 运行该命令以获得最大的信息。 -另外,你可以提供服务器IP或者主机名,用户名和密码,在远程的服务器上执行它。当然你也可以使用这个工具查看其它工具捕获的输出(例如demidecode输出的'dmidecode.out',/proc/cpuinfo输出的'cpuinfo',lspci -nnm输出的'lspci-nnm.out')。 +另外,你可以提供服务器 IP 或者主机名、用户名和密码,在远程的服务器上执行它。当然你也可以使用这个工具查看其它工具捕获的输出(例如 `demidecode` 输出的 `dmidecode.out`,`/proc/cpuinfo` 输出的 `cpuinfo`,`lspci -nnm` 输出的 `lspci-nnm.out`)。 -**建议阅读 :** -**(#)** [inxi - A Great Tool to Check Hardware Information on Linux][2] -**(#)** [Dmidecode - Easy Way To Get Linux System Hardware Information][3] -**(#)** [LSHW (Hardware Lister) - A Nifty Tool To Get A Hardware Information On Linux][4] -**(#)** [hwinfo (Hardware Info) - A Nifty Tool To Detect System Hardware Information On Linux][5] -**(#)** [How To Use lspci, lsscsi, lsusb, And lsblk To Get Linux System Devices Information][6] +建议阅读: -### Linux上如何安装python-hwinfo +- [Inxi:一个功能强大的获取 Linux 系统信息的命令行工具][2] +- [Dmidecode:获取 Linux 系统硬件信息的简易方式][3] +- [LSHW (Hardware Lister): 一个在 Linux 上获取硬件信息的漂亮工具][4] +- [hwinfo (Hardware Info):一个在 Linux 上检测系统硬件信息的漂亮工具][5] +- [如何使用 lspci、lsscsi、lsusb 和 lsblk 获取 Linux 系统设备信息][6] -在绝大多数Linux发行版,都可以通过pip包安装。为了安装python-hwinfo, 确保你的系统已经有python和python-pip包作为先决条件。 +### Linux 上如何安装 python-hwinfo -pip是Python附带的一个包管理工具,在Linux上安装Python包的推荐工具之一。 +在绝大多数 Linux 发行版,都可以通过 pip 包安装。为了安装 python-hwinfo, 确保你的系统已经有 Python 和python-pip 包作为先决条件。 + +`pip` 是 Python 附带的一个包管理工具,在 Linux 上安装 Python 包的推荐工具之一。 + +在 Debian/Ubuntu 平台,使用 [APT-GET 命令][7] 或者 [APT 命令][8] 安装 `pip`。 -在**`Debian/Ubuntu`**平台,使用[APT-GET 命令][7] 或者 [APT 命令][8] 安装pip。 ``` $ sudo apt install python-pip - ``` -在**`RHEL/CentOS`**平台,使用[YUM 命令][9]安装pip。 +在 RHEL/CentOS 平台,使用 [YUM 命令][9]安装 `pip`。 + ``` $ sudo yum install python-pip python-devel - ``` -在**`Fedora`**平台,使用[DNF 
命令][10]安装pip。 +在 Fedora 平台,使用 [DNF 命令][10]安装 `pip`。 + ``` $ sudo dnf install python-pip - ``` -在**`Arch Linux`**平台,使用[Pacman 命令][11]安装pip。 +在 Arch Linux 平台,使用 [Pacman 命令][11]安装 `pip`。 + ``` $ sudo pacman -S python-pip - ``` -在**`openSUSE`**平台,使用[Zypper 命令][12]安装pip。 +在 openSUSE 平台,使用 [Zypper 命令][12]安装 `pip`。 + ``` $ sudo zypper python-pip - ``` -最后,执行下面的pip命令安装python-hwinfo。 +最后,执行下面的 `pip` 命令安装 python-hwinfo。 + ``` $ sudo pip install python-hwinfo - ``` -### 怎么使用python-hwinfo在本地机器 +### 怎么在本地机器使用 python-hwinfo 执行下面的命令,检查本地机器现有的硬件。输出很清楚和整洁,这是我在其他命令中没有看到的。 -它的输出分为了五类。 +它的输出分为了五类: - * **`Bios Info:`** bios供应商名称,系统产品名称, 系统序列号,系统唯一标识符,系统制造商,bios发布日期和bios版本。 - * **`CPU Info:`** 处理器编号,供应商ID,cpu系列代号,型号,制作更新版本,型号名称,cpu主频。 - * **`Ethernet Controller Info:`** 供应商名称,供应商ID,设备名称,设备ID,子供应商名称,子供应商ID,子设备名称,子设备ID。 - * **`Storage Controller Info:`** 供应商名称,供应商ID,设备名称,设备ID,子供应商名称,子供应商ID,子设备名称,子设备ID。 - * **`GPU Info:`** 供应商名称,供应商ID,设备名称,设备ID,子供应商名称,子供应商ID,子设备名称,子设备ID。 +* Bios Info(BIOS 信息): BIOS 供应商名称、系统产品名称、系统序列号、系统唯一标识符、系统制造商、BIOS 发布日期和BIOS 版本。 +* CPU Info(CPU 信息):处理器编号、供应商 ID,CPU 系列代号、型号、步进编号、型号名称、CPU 主频。 +* Ethernet Controller Info(网卡信息): 供应商名称、供应商 ID、设备名称、设备 ID、子供应商名称、子供应商 ID,子设备名称、子设备 ID。 +* Storage Controller Info(存储设备信息): 供应商名称、供应商 ID、设备名称、设备 ID、子供应商名称,子供应商 ID、子设备名称、子设备 ID。 +* GPU Info(GPU 信息): 供应商名称、供应商 ID、设备名称、设备 ID、子供应商名称、子供应商 ID、子设备名称、子设备 ID。 ``` @@ -136,20 +137,20 @@ GPU Info: ``` -### 怎么使用python-hwinfo在远处机器上 +### 怎么在远处机器上使用 python-hwinfo + +执行下面的命令检查远程机器现有的硬件,需要远程机器 IP,用户名和密码: -执行下面的命令检查远程机器现有的硬件,需要远程机器IP,用户名和密码 ``` $ hwinfo -m x.x.x.x -u root -p password - ``` -### 如何使用python-hwinfo读取捕获的输出 +### 如何使用 python-hwinfo 读取记录的输出 执行下面的命令,检查本地机器现有的硬件。输出很清楚和整洁,这是我在其他命令中没有看到的。 + ``` $ hwinfo -f [Path to file] - ``` -------------------------------------------------------------------------------- @@ -158,13 +159,13 @@ via: https://www.2daygeek.com/python-hwinfo-check-display-system-hardware-config 作者:[2DAYGEEK][a] 译者:[Torival](https://github.com/Torival) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.2daygeek.com/author/2daygeek/ [1]:https://github.com/rdobson/python-hwinfo -[2]:https://www.2daygeek.com/inxi-system-hardware-information-on-linux/ +[2]:https://linux.cn/article-8424-1.html [3]:https://www.2daygeek.com/dmidecode-get-print-display-check-linux-system-hardware-information/ [4]:https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/ [5]:https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/ diff --git a/published/20171008 The most important Firefox command line options.md b/published/20171008 The most important Firefox command line options.md new file mode 100644 index 0000000000..1f9383906c --- /dev/null +++ b/published/20171008 The most important Firefox command line options.md @@ -0,0 +1,55 @@ +最重要的 Firefox 命令行选项 +====== + +Firefox web 浏览器支持很多命令行选项,可以定制它启动的方式。 + +你可能已经接触过一些了,比如 `-P "配置文件名"` 指定浏览器启动加载时的配置文件,`-private` 开启一个私有会话。 + +本指南会列出对 FIrefox 来说比较重要的那些命令行选项。它并不包含所有的可选项,因为很多选项只用于特定的目的,对一般用户来说没什么价值。 + +你可以在 Firefox 开发者网站上看到[完整][1] 的命令行选项列表。需要注意的是,很多命令行选项对其它基于 Mozilla 的产品一样有效,甚至对某些第三方的程序也有效。 + +### 重要的 Firefox 命令行选项 + +![firefox command line][2] + +#### 配置文件相关选项 + + - `-CreateProfile 配置文件名` -- 创建新的用户配置信息,但并不立即使用它。 + - `-CreateProfile "配置文件名 存放配置文件的目录"` -- 跟上面一样,只是指定了存放配置文件的目录。 + - `-ProfileManager`,或 `-P` -- 打开内置的配置文件管理器。 + - `-P "配置文件名"` -- 使用指定的配置文件启动 Firefox。若指定的配置文件不存在则会打开配置文件管理器。只有在没有其他 Firefox 
实例运行时才有用。 + - `-no-remote` -- 与 `-P` 连用来创建新的浏览器实例。它允许你在同一时间运行多个配置文件。 + +#### 浏览器相关选项 + + - `-headless` -- 以无头模式(LCTT 译注:无显示界面)启动 Firefox。Linux 上需要 Firefox 55 才支持,Windows 和 Mac OS X 上需要 Firefox 56 才支持。 + - `-new-tab URL` -- 在 Firefox 的新标签页中加载指定 URL。 + - `-new-window URL` -- 在 Firefox 的新窗口中加载指定 URL。 + - `-private` -- 以隐私浏览模式启动 Firefox。可以用来让 Firefox 始终运行在隐私浏览模式下。 + - `-private-window` -- 打开一个隐私窗口。 + - `-private-window URL` -- 在新的隐私窗口中打开 URL。若已经打开了一个隐私浏览窗口,则在那个窗口中打开 URL。 + - `-search 单词` -- 使用 FIrefox 默认的搜索引擎进行搜索。 + - - `url URL` -- 在新的标签页或窗口中加载 URL。可以省略这里的 `-url`,而且支持打开多个 URL,每个 URL 之间用空格分离。 + +#### 其他选项 + + - `-safe-mode` -- 在安全模式下启动 Firefox。在启动 Firefox 时一直按住 Shift 键也能进入安全模式。 + - `-devtools` -- 启动 Firefox,同时加载并打开开发者工具。 + - `-inspector URL` -- 使用 DOM Inspector 查看指定的 URL + - `-jsconsole` -- 启动 Firefox,同时打开浏览器终端。 + - `-tray` -- 启动 Firefox,但保持最小化。 + +-------------------------------------------------------------------------------- + +via: https://www.ghacks.net/2017/10/08/the-most-important-firefox-command-line-options/ + +作者:[Martin Brinkmann][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ghacks.net/author/martin/ +[1]:https://developer.mozilla.org/en-US/docs/Mozilla/Command_Line_Options +[2]:https://cdn.ghacks.net/wp-content/uploads/2017/10/firefox-command-line.png \ No newline at end of file diff --git a/published/20171011 What is a firewall.md b/published/20171011 What is a firewall.md new file mode 100644 index 0000000000..d854340ab6 --- /dev/null +++ b/published/20171011 What is a firewall.md @@ -0,0 +1,81 @@ +什么是防火墙? +===== + +> 流行的防火墙是多数组织主要的边界防御。 + +![](https://images.techhive.com/images/article/2017/04/firewall-100716789-large.jpg) + +基于网络的防火墙已经在美国企业无处不在,因为它们证实了抵御日益增长的威胁的防御能力。 + +通过网络测试公司 NSS 实验室最近的一项研究发现,高达 80% 的美国大型企业运行着下一代防火墙。研究公司 IDC 评估防火墙和相关的统一威胁管理市场的营业额在 2015 是 76 亿美元,预计到 2020 年底将达到 127 亿美元。 + +**如果你想升级,这里是《[当部署下一代防火墙时要考虑什么》][1]** + +### 什么是防火墙? 
+ +防火墙作为一个边界防御工具,其监控流量——要么允许它、要么屏蔽它。 多年来,防火墙的功能不断增强,现在大多数防火墙不仅可以阻止已知的一些威胁、执行高级访问控制列表策略,还可以深入检查流量中的每个数据包,并测试包以确定它们是否安全。大多数防火墙都部署为用于处理流量的网络硬件,和允许终端用户配置和管理系统的软件。越来越多的软件版防火墙部署到高度虚拟化的环境中,以在被隔离的网络或 IaaS 公有云中执行策略。 + +随着防火墙技术的进步,在过去十年中创造了新的防火墙部署选择,所以现在对于部署防火墙的最终用户来说,有了更多选择。这些选择包括: + +### 有状态的防火墙 + +当防火墙首次创造出来时,它们是无状态的,这意味着流量所通过的硬件当单独地检查被监视的每个网络流量包时,屏蔽或允许是隔离的。从 1990 年代中后期开始,防火墙的第一个主要进展是引入了状态。有状态防火墙在更全面的上下文中检查流量,同时考虑到网络连接的工作状态和特性,以提供更全面的防火墙。例如,维持这个状态的防火墙可以允许某些流量访问某些用户,同时对其他用户阻塞同一流量。 + +### 基于代理的防火墙 + +这些防火墙充当请求数据的最终用户和数据源之间的网关。在传递给最终用户之前,所有的流量都通过这个代理过滤。这通过掩饰信息的原始请求者的身份来保护客户端不受威胁。 + +### Web 应用防火墙(WAF) + +这些防火墙位于特定应用的前面,而不是在更广阔的网络的入口或者出口上。基于代理的防火墙通常被认为是保护终端客户的,而 WAF 则被认为是保护应用服务器的。 + +### 防火墙硬件 + +防火墙硬件通常是一个简单的服务器,它可以充当路由器来过滤流量和运行防火墙软件。这些设备放置在企业网络的边缘,位于路由器和 Internet 服务提供商(ISP)的连接点之间。通常企业可能在整个数据中心部署十几个物理防火墙。 用户需要根据用户基数的大小和 Internet 连接的速率来确定防火墙需要支持的吞吐量容量。 + +### 防火墙软件 + +通常,终端用户部署多个防火墙硬件端和一个中央防火墙软件系统来管理该部署。 这个中心系统是配置策略和特性的地方,在那里可以进行分析,并可以对威胁作出响应。 + +### 下一代防火墙(NGFW) + +多年来,防火墙增加了多种新的特性,包括深度包检查、入侵检测和防御以及对加密流量的检查。下一代防火墙(NGFW)是指集成了许多先进的功能的防火墙。 + +#### 有状态的检测 + +阻止已知不需要的流量,这是基本的防火墙功能。 + +#### 反病毒 + +在网络流量中搜索已知病毒和漏洞,这个功能有助于防火墙接收最新威胁的更新,并不断更新以保护它们。 + +#### 入侵防御系统(IPS) + +这类安全产品可以部署为一个独立的产品,但 IPS 功能正逐步融入 NGFW。 虽然基本的防火墙技术可以识别和阻止某些类型的网络流量,但 IPS 使用更细粒度的安全措施,如签名跟踪和异常检测,以防止不必要的威胁进入公司网络。 这一技术的以前版本是入侵检测系统(IDS),其重点是识别威胁而不是遏制它们,已经被 IPS 系统取代了。 + +#### 深度包检测(DPI) + +DPI 可作为 IPS 的一部分或与其结合使用,但其仍然成为一个 NGFW 的重要特征,因为它提供细粒度分析流量的能力,可以具体到流量包头和流量数据。DPI 还可以用来监测出站流量,以确保敏感信息不会离开公司网络,这种技术称为数据丢失防御(DLP)。 + +#### SSL 检测 + +安全套接字层(SSL)检测是一个检测加密流量来测试威胁的方法。随着越来越多的流量进行加密,SSL 检测成为 NGFW 正在实施的 DPI 技术的一个重要组成部分。SSL 检测作为一个缓冲区,它在送到最终目的地之前解码流量以检测它。 + +#### 沙盒 + +这个是被卷入 NGFW 中的一个较新的特性,它指防火墙接收某些未知的流量或者代码,并在一个测试环境运行,以确定它是否存在问题的能力。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3230457/lan-wan/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html + +作者:[Brandon Butler][a] +译者:[zjon](https://github.com/zjon) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.networkworld.com/author/Brandon-Butler/ +[1]:https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html + + diff --git a/translated/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md b/published/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md similarity index 76% rename from translated/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md rename to published/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md index a40a1194d4..13c4dc78da 100644 --- a/translated/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md +++ b/published/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md @@ -1,14 +1,14 @@ 在 Ubuntu 16.04 上安装并使用 YouTube-DL ====== -Youtube-dl 是一个免费而开源的命令行视频下载工具,可以用来从 Youtube 等类似的网站上下载视频,目前它支持的网站除了 Youtube 还有 Facebook,Dailymotion,Google Video,Yahoo 等等。它构架于 pygtk 之上,需要 Python 的支持来运行。它支持很多操作系统,包括 Windows,Mac 以及 Unix。Youtube-dl 还有断点续传,下载整个频道或者整个播放清单中的视频,添加自定义的标题,代理,等等其他功能。 +Youtube-dl 是一个自由开源的命令行视频下载工具,可以用来从 Youtube 等类似的网站上下载视频,目前它支持的网站除了 Youtube 还有 Facebook、Dailymotion、Google Video、Yahoo 等等。它构架于 pygtk 之上,需要 Python 的支持来运行。它支持很多操作系统,包括 Windows、Mac 以及 Unix。Youtube-dl 还有断点续传、下载整个频道或者整个播放清单中的视频、添加自定义的标题、代理等等其他功能。 -本文中,我们将来学习如何在 Ubuntu16.04 上安装并使用 Youtube-dl 和 Youtube-dlg。我们还会学习如何以不同质量,不同格式来下载 Youtube 中的视频。 +本文中,我们将来学习如何在 Ubuntu 16.04 上安装并使用 Youtube-dl 和 Youtube-dlg。我们还会学习如何以不同质量,不同格式来下载 Youtube 中的视频。 ### 前置需求 - * 一台运行 
Ubuntu 16.04 的服务器。 - * 非 root 用户但拥有 sudo 特权。 +* 一台运行 Ubuntu 16.04 的服务器。 +* 非 root 用户但拥有 sudo 特权。 让我们首先用下面命令升级系统到最新版: @@ -21,37 +21,37 @@ sudo apt-get upgrade -y ### 安装 Youtube-dl -默认情况下,Youtube-dl 并不在 Ubuntu-16.04 仓库中。你需要从官网上来下载它。使用 curl 命令可以进行下载: +默认情况下,Youtube-dl 并不在 Ubuntu-16.04 仓库中。你需要从官网上来下载它。使用 `curl` 命令可以进行下载: -首先,使用下面命令安装 curl: +首先,使用下面命令安装 `curl`: ``` sudo apt-get install curl -y ``` -然后,下载 youtube-dl 的二进制包: +然后,下载 `youtube-dl` 的二进制包: ``` curl -L https://yt-dl.org/latest/youtube-dl -o /usr/bin/youtube-dl ``` -接着,用下面命令更改 youtube-dl 二进制包的权限: +接着,用下面命令更改 `youtube-dl` 二进制包的权限: ``` sudo chmod 755 /usr/bin/youtube-dl ``` -youtube-dl 有算是安装好了,现在可以进行下一步了。 +`youtube-dl` 算是安装好了,现在可以进行下一步了。 ### 使用 Youtube-dl -运行下面命令会列出 youtube-dl 的所有可选项: +运行下面命令会列出 `youtube-dl` 的所有可选项: ``` youtube-dl --h ``` -Youtube-dl 支持多种视频格式,像 Mp4,WebM,3gp,以及 FLV 都支持。你可以使用下面命令列出指定视频所支持的所有格式: +`youtube-dl` 支持多种视频格式,像 Mp4,WebM,3gp,以及 FLV 都支持。你可以使用下面命令列出指定视频所支持的所有格式: ``` youtube-dl -F https://www.youtube.com/watch?v=j_JgXJ-apXs @@ -94,6 +94,7 @@ youtube-dl -f 18 https://www.youtube.com/watch?v=j_JgXJ-apXs ``` 该命令会下载 640x360 分辨率的 mp4 格式的视频: + ``` [youtube] j_JgXJ-apXs: Downloading webpage [youtube] j_JgXJ-apXs: Downloading video info webpage @@ -101,7 +102,6 @@ youtube-dl -f 18 https://www.youtube.com/watch?v=j_JgXJ-apXs [youtube] j_JgXJ-apXs: Downloading MPD manifest [download] Destination: B.A. PASS 2 Trailer no 2 _ Filmybox-j_JgXJ-apXs.mp4 [download] 100% of 6.90MiB in 00:47 - ``` 如果你想以 mp3 音频的格式下载 Youtube 视频,也可以做到: @@ -122,7 +122,7 @@ youtube-dl -citw https://www.youtube.com/channel/UCatfiM69M9ZnNhOzy0jZ41A youtube-dl --proxy http://proxy-ip:port https://www.youtube.com/watch?v=j_JgXJ-apXs ``` -若想一条命令下载多个 Youtube 视频,那么首先把所有要下载的 Youtube 视频 URL 存在一个文件中(假设这个文件叫 youtube-list.txt),然后运行下面命令: +若想一条命令下载多个 Youtube 视频,那么首先把所有要下载的 Youtube 视频 URL 存在一个文件中(假设这个文件叫 `youtube-list.txt`),然后运行下面命令: ``` youtube-dl -a youtube-list.txt @@ -130,7 +130,7 @@ youtube-dl -a youtube-list.txt ### 安装 Youtube-dl GUI -若你想要图形化的界面,那么 youtube-dlg 是你最好的选择。youtube-dlg 是一款由 wxPython 所写的免费而开源的 youtube-dl 界面。 +若你想要图形化的界面,那么 `youtube-dlg` 是你最好的选择。`youtube-dlg` 是一款由 wxPython 所写的免费而开源的 `youtube-dl` 界面。 该工具默认也不在 Ubuntu 16.04 仓库中。因此你需要为它添加 PPA。 @@ -138,14 +138,14 @@ youtube-dl -a youtube-list.txt sudo add-apt-repository ppa:nilarimogard/webupd8 ``` -下一步,更新软件包仓库并安装 youtube-dlg: +下一步,更新软件包仓库并安装 `youtube-dlg`: ``` sudo apt-get update -y sudo apt-get install youtube-dlg -y ``` -安装好 Youtube-dl 后,就能在 `Unity Dash` 中启动它了: +安装好 Youtube-dl 后,就能在 Unity Dash 中启动它了: [![][2]][3] @@ -157,14 +157,13 @@ sudo apt-get install youtube-dlg -y 恭喜你!你已经成功地在 Ubuntu 16.04 服务器上安装好了 youtube-dl 和 youtube-dlg。你可以很方便地从 Youtube 及任何 youtube-dl 支持的网站上以任何格式和任何大小下载视频了。 - -------------------------------------------------------------------------------- via: https://www.howtoforge.com/tutorial/install-and-use-youtube-dl-on-ubuntu-1604/ 作者:[Hitesh Jethva][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20171016 Fixing vim in Debian - There and back again.md b/published/20171016 Fixing vim in Debian - There and back again.md new file mode 100644 index 0000000000..ebe765c4be --- /dev/null +++ b/published/20171016 Fixing vim in Debian - There and back again.md @@ -0,0 +1,59 @@ +修复 Debian 中的 vim 奇怪行为 +====== + +我一直在想,为什么我服务器上 vim 为什么在鼠标方面表现得如此愚蠢:不能像平时那样跳转、复制、粘贴。尽管在 `/etc/vim/vimrc.local` 中已经设置了。 + +``` +set mouse= +``` + 
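+补充一个通用的小排查技巧(非原文内容):可以先在 vim 里运行下面的命令,它会显示 `mouse` 选项的最终取值,以及这个值最后是被哪个文件设置的,从而快速定位“罪魁祸首”:
+
+```
+:verbose set mouse?
+```
+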
+最后我终于知道为什么了,多谢 bug [#864074][1] 并且修复了它。 + +![][2] + +原因是,当没有 `~/.vimrc` 的时候,vim 在 `vimrc.local` **之后**加载 `defaults.vim`,从而覆盖了几个设置。 + +在 `/etc/vim/vimrc` 中有一个注释(虽然我没有看到)解释了这一点: + +``` +" Vim will load $VIMRUNTIME/defaults.vim if the user does not have a vimrc. +" This happens after /etc/vim/vimrc(.local) are loaded, so it will override +" any settings in these files. +" If you don't want that to happen, uncomment the below line to prevent +" defaults.vim from being loaded. +" let g:skip_defaults_vim = 1 +``` + +我同意这是在正常安装 vim 后设置 vim 的好方法,但 Debian 包可以做得更好。在错误报告中清楚地说明了这个问题:如果没有 `~/.vimrc`,`/etc/vim/vimrc.local` 中的设置被覆盖。 + +这在Debian中是违反直觉的 - 而且我也不知道其他包中是否采用类似的方法。 + +由于 `defaults.vim` 中的设置非常合理,所以我希望使用它,但只修改了一些我不同意的项目,比如鼠标。最后,我在 `/etc/vim/vimrc.local` 中做了以下操作: + +``` +if filereadable("/usr/share/vim/vim80/defaults.vim") + source /usr/share/vim/vim80/defaults.vim +endif +" now set the line that the defaults file is not reloaded afterwards! +let g:skip_defaults_vim = 1 + +" turn of mouse +set mouse= +" other override settings go here +``` + +可能有更好的方式来获得一个不依赖于 vim 版本的通用加载语句, 但现在我对此很满意。 + +-------------------------------------------------------------------------------- + +via: https://www.preining.info/blog/2017/10/fixing-vim-in-debian/ + +作者:[Norbert Preining][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.preining.info/blog/author/norbert/ +[1]:https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864074 +[2]:https://www.preining.info/blog/wp-content/uploads/2017/10/fixing-debian-vim.jpg diff --git a/translated/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md b/published/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md similarity index 73% rename from translated/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md rename to published/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md index 9da14a545e..0e1ff54829 100644 --- a/translated/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md +++ b/published/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md @@ -1,17 +1,23 @@ 让我们使用 PC 键盘在终端演奏钢琴 ====== -厌倦了工作?那么来吧,让我们弹弹钢琴!是的,你没有看错。谁需要真的钢琴啊?我们可以用 PC 键盘在命令行下就能弹钢琴。向你们介绍一下 **Piano-rs** - 这是一款用 Rust 语言编写的,可以让你用 PC 键盘在终端弹钢琴的简单工具。它免费,开源,而且基于 MIT 协议。你可以在任何支持 Rust 的操作系统中使用它。 +![](https://www.ostechnix.com/wp-content/uploads/2017/10/Play-Piano-In-Terminal-720x340.jpg) -### Piano-rs:使用 PC 键盘在终端弹钢琴 +厌倦了工作?那么来吧,让我们弹弹钢琴!是的,你没有看错,根本不需要真的钢琴。我们可以用 PC 键盘在命令行下就能弹钢琴。向你们介绍一下 `piano-rs` —— 这是一款用 Rust 语言编写的,可以让你用 PC 键盘在终端弹钢琴的简单工具。它自由开源,基于 MIT 协议。你可以在任何支持 Rust 的操作系统中使用它。 + +### piano-rs:使用 PC 键盘在终端弹钢琴 #### 安装 确保系统已经安装了 Rust 编程语言。若还未安装,运行下面命令来安装它。 + ``` curl https://sh.rustup.rs -sSf | sh ``` -安装程序会问你是否默认安装还是自定义安装还是取消安装。我希望默认安装,因此输入 **1** (数字一)。 +(LCTT 译注:这种直接通过 curl 执行远程 shell 脚本是一种非常危险和不成熟的做法。) + +安装程序会问你是否默认安装还是自定义安装还是取消安装。我希望默认安装,因此输入 `1` (数字一)。 + ``` info: downloading installer @@ -43,7 +49,7 @@ default host triple: x86_64-unknown-linux-gnu 1) Proceed with installation (default) 2) Customize installation 3) Cancel installation -**1** +1 info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu' 223.6 KiB / 223.6 KiB (100 %) 215.1 KiB/s ETA: 0 s @@ -72,9 +78,10 @@ environment variable. Next time you log in this will be done automatically. 
To configure your current shell run source $HOME/.cargo/env ``` -登出然后重启系统来将 cargo 的 bin 目录纳入 PATH 变量中。 +登出然后重启系统来将 cargo 的 bin 目录纳入 `PATH` 变量中。 校验 Rust 是否正确安装: + ``` $ rustc --version rustc 1.21.0 (3b72af97e 2017-10-09) @@ -83,40 +90,44 @@ rustc 1.21.0 (3b72af97e 2017-10-09) 太棒了!Rust 成功安装了。是时候构建 piano-rs 应用了。 使用下面命令克隆 Piano-rs 仓库: + ``` git clone https://github.com/ritiek/piano-rs ``` -上面命令会在当前工作目录创建一个名为 "piano-rs" 的目录并下载所有内容到其中。进入该目录: +上面命令会在当前工作目录创建一个名为 `piano-rs` 的目录并下载所有内容到其中。进入该目录: + ``` cd piano-rs ``` 最后,运行下面命令来构建 Piano-rs: + ``` cargo build --release ``` 编译过程要花上一阵子。 -#### Usage +#### 用法 + +编译完成后,在 `piano-rs` 目录中运行下面命令: -编译完成后,在 **piano-rs** 目录中运行下面命令: ``` ./target/release/piano-rs ``` -这就我们在终端上的钢琴键盘了!可以开始弹指一些音符了。按下按键可以弹奏相应音符。使用 **左/右** 方向键可以在弹奏时调整音频。而,使用 **上/下** 方向键可以在弹奏时调整音长。 +这就是我们在终端上的钢琴键盘了!可以开始弹指一些音符了。按下按键可以弹奏相应音符。使用 **左/右** 方向键可以在弹奏时调整音频。而,使用 **上/下** 方向键可以在弹奏时调整音长。 -[![][1]][2] +![][2] -Piano-rs 使用与 [**multiplayerpiano.com**][3] 一样的音符和按键。另外,你可以使用[**这些音符 **][4] 来学习弹指各种流行歌曲。 +Piano-rs 使用与 [multiplayerpiano.com][3] 一样的音符和按键。另外,你可以使用[这些音符][4] 来学习弹指各种流行歌曲。 要查看帮助。输入: + ``` $ ./target/release/piano-rs -h -``` -``` + piano-rs 0.1.0 Ritiek Malhotra Play piano in the terminal using PC keyboard. @@ -141,19 +152,18 @@ OPTIONS: 此致敬礼! - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/let-us-play-piano-terminal-using-pc-keyboard/ 作者:[SK][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.ostechnix.com/author/sk/ [1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2017/10/Piano.png () +[2]:http://www.ostechnix.com/wp-content/uploads/2017/10/Piano.png [3]:http://www.multiplayerpiano.com/ [4]:https://pastebin.com/CX1ew0uB diff --git a/published/20171030 How To Create Custom Ubuntu Live CD Image.md b/published/20171030 How To Create Custom Ubuntu Live CD Image.md new file mode 100644 index 0000000000..97ede8771c --- /dev/null +++ b/published/20171030 How To Create Custom Ubuntu Live CD Image.md @@ -0,0 +1,156 @@ +如何创建定制的 Ubuntu Live CD 镜像 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-720x340.png) + +今天让我们来讨论一下如何创建 Ubuntu Live CD 的定制镜像(ISO)。我们以前可以使用 [Pinguy Builder][1] 完成这项工作。但是,现在它似乎停止维护了。最近 Pinguy Builder 的官方网站似乎没有任何更新。幸运的是,我找到了另一种创建 Ubuntu Live CD 镜像的工具。使用 Cubic 即 **C**ustom **Ub**untu **I**SO **C**reator 的首字母缩写,这是一个用来创建定制的可启动的 Ubuntu Live CD(ISO)镜像的 GUI 应用程序。 + +Cubic 正在积极开发,它提供了许多选项来轻松地创建一个定制的 Ubuntu Live CD ,它有一个集成的 chroot 命令行环境(LCTT 译注:chroot —— Change Root,也就是改变程序执行时所参考的根目录位置),在那里你可以定制各种方面,比如安装新的软件包、内核,添加更多的背景壁纸,添加更多的文件和文件夹。它有一个直观的 GUI 界面,在 live 镜像创建过程中可以轻松的利用导航(可以利用点击鼠标来回切换)。您可以创建一个新的自定义镜像或修改现有的项目。因为它可以用来制作 Ubuntu live 镜像,所以我相信它可以用在制作其他 Ubuntu 的发行版和衍生版镜像中,比如 Linux Mint。 + +### 安装 Cubic + +Cubic 的开发人员已经做出了一个 PPA 来简化安装过程。要在 Ubuntu 系统上安装 Cubic ,在你的终端上运行以下命令: + +``` +sudo apt-add-repository ppa:cubic-wizard/release +sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6494C6D6997C215E +sudo apt update +sudo apt install cubic +``` + +### 利用 Cubic 创建 Ubuntu Live CD 的定制镜像 + +安装完成后,从应用程序菜单或 dock 启动 Cubic。这是在我在 Ubuntu 16.04 LTS 桌面系统中 Cubic 的样子。 + +为新项目选择一个目录。它是保存镜像文件的目录。 + +![][3] + +请注意,Cubic 不是创建您当前系统的 Live CD 镜像,而是利用 Ubuntu 的安装 CD 来创建一个定制的 Live CD,因此,你应该有一个最新的 ISO 镜像。 + +选择您存储 Ubuntu 安装 ISO 镜像的路径。Cubic 将自动填写您定制操作系统的所有细节。如果你愿意,你可以改变细节。单击 Next 
继续。
+
+![][4]
+
+接下来,来自源安装介质中的压缩的 Linux 文件系统将被提取到项目的目录(在我们的例子中目录的位置是 `/home/ostechnix/custom_ubuntu`)。
+
+![][5]
+
+一旦文件系统被提取出来,将自动加载到 chroot 环境。如果你没有看到终端提示符,请按几次回车键。
+
+![][6]
+
+在这里可以安装任何额外的软件包、添加背景图片、添加软件源列表、添加最新的 Linux 内核,以及对你的 Live CD 做其他各种定制。
+
+例如,我希望 `vim` 安装在我的 Live CD 中,所以现在就要安装它。
+
+![][7]
+
+我们不需要使用 `sudo`,因为我们已经在具有最高权限(root)的环境中了。
+
+类似地,如果需要,还可以安装其他任意版本的 Linux 内核。
+
+```
+apt install linux-image-extra-4.10.0-24-generic
+```
+
+此外,您还可以更新软件源列表(添加或删除软件存储库列表):
+
+![][8]
+
+修改源列表后,不要忘记运行 `apt update` 命令来更新源列表:
+
+```
+apt update
+```
+
+另外,您还可以向 Live CD 中添加文件或文件夹。复制文件或文件夹(右击它们并选择复制或者利用 `CTRL+C`),在终端右键单击(在 Cubic 窗口内),选择 “Paste file(s)”,最后点击 Cubic 向导底部的 “Copy”。
+
+![][9]
+
+**Ubuntu 17.10 用户注意事项**
+
+> 在 Ubuntu 17.10 系统中,DNS 查询可能无法在 chroot 环境中工作。如果您正在制作一个定制的 Ubuntu 17.10 Live 镜像,您需要指向正确的 `resolv.conf` 配置文件:
+
+>```
+ln -sr /run/systemd/resolve/resolv.conf /run/systemd/resolve/stub-resolv.conf
+```
+
+> 要验证 DNS 解析工作,运行:
+
+> ```
+cat /etc/resolv.conf
+ping google.com
+```
+
+如果你想的话,可以添加你自己的壁纸。要做到这一点,请切换到 `/usr/share/backgrounds/` 目录,
+
+```
+cd /usr/share/backgrounds
+```
+
+并将图像拖放到 Cubic 窗口中。或复制图像,右键单击 Cubic 终端窗口并选择 “Paste file(s)” 选项。此外,确保你在 `/usr/share/gnome-background-properties` 下的 XML 文件中添加了新的壁纸,这样你就可以在桌面上右键单击,通过 “Change Desktop Background” 选用新添加的图像。完成所有更改后,在 Cubic 向导中单击 “Next”。
+
+接下来,选择引导到新的 Live ISO 镜像时使用的 Linux 内核版本。如果已经安装了其他版本内核,它们也将在这部分中被列出。然后选择您想在 Live CD 中使用的内核。
+
+![][10]
+
+在下一节中,选择要从您的 Live 映像中删除的软件包。在使用定制的 Live 映像安装完 Ubuntu 操作系统后,所选的软件包将自动删除。在选择要删除的软件包时,要格外小心,您可能在不知不觉中删除了一个软件包,而此软件包又是另外一个软件包的依赖包。
+
+![][11]
+
+接下来,Live 镜像创建过程将开始。这里所要花费的时间取决于你定制的系统规格。
+
+![][12]
+
+镜像创建完成后,单击 “Finish”。Cubic 将显示新创建的自定义镜像的细节。
+
+如果你想在将来修改刚刚创建的自定义 Live 镜像,不要选择 “Delete all project files, except the generated disk image and the corresponding MD5 checksum file”(除了生成的磁盘映像和相应的 MD5 校验和文件之外,删除所有的项目文件),这样 Cubic 将在项目的工作目录中保留自定义镜像,您可以在将来进行任何更改,而不用从头再来一遍。
+
+要为不同的 Ubuntu 版本创建新的 Live 镜像,最好使用不同的项目目录。
+
+### 利用 Cubic 修改 Ubuntu Live CD 的定制镜像
+
+从菜单中启动 Cubic,并选择一个现有的项目目录。单击 “Next” 按钮,您将看到以下三个选项:
+
+1. Create a disk image from the existing project. (从现有项目创建一个磁盘映像。)
+2. Continue customizing the existing project.(继续定制现有项目。)
+3. Delete the existing project.(删除当前项目。)
+
+![][13]
+
+第一个选项将允许您从现有项目中使用之前所做的自定义设置创建一个新的 Live ISO 镜像。如果您丢失了 ISO 镜像,您可以使用第一个选项来创建一个新的。
+
+第二个选项允许您在现有项目中进行任何其他更改。如果您选择此选项,您将再次进入 chroot 环境。您可以添加新的文件或文件夹,安装任何新的软件,删除任何软件,添加其他的 Linux 内核,添加桌面背景等等。
+
+第三个选项将删除现有的项目,所以您可以从头开始。选择此选项将删除所有文件,包括新生成的 ISO 镜像文件。
+
+我用 Cubic 做了一个定制的 Ubuntu 16.04 LTS 桌面 Live CD,就像这篇文章里描述的一样。如果你想创建一个 Ubuntu Live CD,Cubic 可能是一个不错的选择。
+
+就这些了,再会!
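+
+附:Cubic 会在项目目录中生成 ISO 镜像和对应的 MD5 校验和文件(正如上文“删除项目文件”那个选项所提到的)。这里补充一个非原文的小技巧:在分发或刻录之前,不妨先校验一下镜像的完整性。下面的目录名仅沿用本文示例,且假设校验和文件是标准的 `md5sum` 输出格式,请按需替换:
+
+```
+cd /home/ostechnix/custom_ubuntu
+md5sum -c *.md5
+```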
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/create-custom-ubuntu-live-cd-image/
+
+作者:[SK][a]
+译者:[stevenzdg988](https://github.com/stevenzdg988)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/pinguy-builder-build-custom-ubuntu-os/
+[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-1.png
+[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-2.png
+[5]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-3.png
+[6]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-4.png
+[7]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-6.png
+[8]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-5.png
+[9]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-7.png
+[10]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-8.png
+[11]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-10-1.png
+[12]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-12-1.png
+[13]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-13.png
diff --git a/published/20171106 Autorandr- automatically adjust screen layout.md b/published/20171106 Autorandr- automatically adjust screen layout.md
new file mode 100644
index 0000000000..3e87fce587
--- /dev/null
+++ b/published/20171106 Autorandr- automatically adjust screen layout.md
@@ -0,0 +1,51 @@
+autorandr:自动调整屏幕布局
+======
+
+像许多笔记本用户一样,我经常将笔记本插入到不同的显示器上(桌面上有多台显示器,演示时有投影机等)。运行 `xrandr` 命令或点击界面非常繁琐,编写脚本来处理也好不到哪里去。
+
+最近,我遇到了 [autorandr][1],它使用 EDID(和其他设置)检测连接的显示器,保存 `xrandr` 配置并恢复它们。它也可以在加载特定配置时运行任意脚本。我已经打包了它,目前仍在 NEW 状态。如果你不能等待,[这是 deb][2],[这是 git 仓库][3]。
+
+要使用它,只需安装软件包,并创建你的初始配置(我这里用的名字是 `undocked`):
+
+```
+autorandr --save undocked
+```
+
+然后,把你的笔记本接上扩展坞(或者插入你的外部显示器),使用 `xrandr`(或其他任意工具)更改配置,然后保存你的新配置(我这里用的名字是 `workstation`):
+
+```
+autorandr --save workstation
+```
+
+对你额外的配置(或当你有新的配置)进行重复操作。
+
+`autorandr` 有 `udev`、`systemd` 和 `pm-utils` 钩子,当新的显示器出现时 `autorandr --change` 应该会立即运行。如果需要,也可以手动运行 `autorandr --change` 或 `autorandr --load workstation`。你也可以在加载配置后在 `~/.config/autorandr/$PROFILE/postswitch` 添加自己的脚本来运行。由于我运行 i3,我的工作站配置如下所示:
+
+```
+#!/bin/bash
+
+xrandr --dpi 92
+xrandr --output DP2-2 --primary
+i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;'
+i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;'
+i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;'
+```
+
+它适当地修正了 dpi,设置主屏幕(可能不需要?),并移动 i3 工作区。你可以通过在配置文件目录中添加一个 `block` 钩子来安排配置永远不会运行。
+
+如果你定期更换显示器,请看一下!
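+
+附带一个思路(非原文内容):如果你不使用 i3,`postswitch` 钩子同样有用。下面是一个最小化的示例,假设你用 feh 来设置壁纸(壁纸路径请换成你自己的);把它保存为 `~/.config/autorandr/workstation/postswitch` 并加上可执行权限即可:
+
+```
+#!/bin/bash
+# 切换显示器布局后重新设置壁纸,避免分辨率变化导致壁纸拉伸或错位
+feh --bg-fill ~/Pictures/wallpaper.jpg
+```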
+ +-------------------------------------------------------------------------------- + +via: https://www.donarmstrong.com/posts/autorandr/ + +作者:[Don Armstrong][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.donarmstrong.com +[1]:https://github.com/phillipberndt/autorandr +[2]:https://www.donarmstrong.com/autorandr_1.2-1_all.deb +[3]:https://git.donarmstrong.com/deb_pkgs/autorandr.git diff --git a/published/20171107 The long goodbye to C.md b/published/20171107 The long goodbye to C.md new file mode 100644 index 0000000000..436c01021f --- /dev/null +++ b/published/20171107 The long goodbye to C.md @@ -0,0 +1,87 @@ +与 C 语言长别离 +========================================== + +这几天来,我在思考那些正在挑战 C 语言的系统编程语言领袖地位的新潮语言,尤其是 Go 和 Rust。思考的过程中,我意识到了一个让我震惊的事实 —— 我有着 35 年的 C 语言经验。每周我都要写很多 C 代码,但是我已经记不清楚上一次我 _创建一个新的 C 语言项目_ 是在什么时候了。 + +如果你完全不认为这种情况令人震惊,那你很可能不是一个系统程序员。我知道有很多程序员使用更高级的语言工作。但是我把大部分时间都花在了深入打磨像 NTPsec、 GPSD 以及 giflib 这些东西上。熟练使用 C 语言在这几十年里一直就是我的专长。但是,现在我不仅是不再使用 C 语言写新的项目,甚至我都记不清我是什么时候开始这样做的了,而且……回头想想,我觉得这都不是本世纪发生的事情。 + +这个对于我来说是件大事,因为如果你问我,我的五个最核心软件开发技能是什么,“C 语言专家” 一定是你最有可能听到的之一。这也激起了我的思考。C 语言的未来会怎样 ?C 语言是否正像当年的 COBOL 语言一样,在辉煌之后,走向落幕? + +我恰好是在 C 语言迅猛发展,并把汇编语言以及其它许多编译型语言挤出主流存在的前几年开始编程的。那场过渡大约是在 1982 到 1985 年之间。在那之前,有很多编译型语言争相吸引程序员的注意力,那些语言中还没有明确的领导者;但是在那之后,小众的语言就直接毫无声息的退出了舞台。主流的语言(FORTRAN、Pascal、COBOL)则要么只限于老代码,要么就是固守单一领域,再就是在 C 语言的边缘领域顶着愈来愈大的压力苟延残喘。 + +而在那以后,这种情形持续了近 30 年。尽管在应用程序开发上出现了新的动向: Java、 Perl、 Python, 以及许许多多不是很成功的竞争者。起初我很少关注这些语言,这很大一部分是因为在它们的运行时的开销对于当时的实际硬件来说太大。因此,这就使得 C 的成功无可撼动;为了使用和对接大量已有的 C 语言代码,你得使用 C 语言写新代码(一部分脚本语言尝试过打破这种壁垒,但是只有 Python 有可能取得成功)。 + +回想起来,我在 1997 年使用脚本语言写应用时本应该注意到这些语言的更重要的意义的。当时我写的是一个名为 SunSITE 的帮助图书管理员做源码分发的辅助软件,当时使用的是 Perl 语言。 + +这个应用完全是用来处理文本输入的,而且只需要能够应对人类的反应速度即可(大概 0.1 秒),因此使用 C 或者别的没有动态内存分配以及字符串类型的语言来写就会显得很傻。但是在当时,我仅仅是把其视为一个试验,而完全没有想到我几乎再也不会在一个新项目的第一个文件里敲下 `int main(int argc, char **argv)` 这样的 C 语言代码了。 + +我说“几乎”,主要是因为 1999 年的 [SNG][3]。 我想那是我最后一个用 C 从头开始写的项目了。 + +在那之后我写的所有的 C 代码都是在为那些上世纪已经存在的老项目添砖加瓦,或者是在维护诸如 GPSD 以及 NTPsec 一类的项目。 + +当年我本不应该使用 C 语言写 SNG 的。因为在那个年代,摩尔定律的快速迭代使得硬件愈加便宜,使得像 Perl 这样的语言的执行效率也不再是问题。仅仅三年以后,我可能就会毫不犹豫地使用 Python 而不是 C 语言来写 SNG。 + +在 1997 年我学习了 Python, 这对我来说是一道分水岭。这个语言很美妙 —— 就像我早年使用的 Lisp 一样,而且 Python 还有很酷的库!甚至还完全遵循了 POSIX!还有一个蛮好用的对象系统!Python 没有把 C 语言挤出我的工具箱,但是我很快就习惯了在只要能用 Python 时就写 Python ,而只在必须使用 C 语言时写 C。 + +(在此之后,我开始在我的访谈中指出我所谓的 “Perl 的教训” ,也就是任何一个没能实现和 C 语言语义等价的遵循 POSIX 的语言_都注定要失败_。在计算机科学的发展史上,很多学术语言的骨骸俯拾皆是,原因是这些语言的设计者没有意识到这个重要的问题。) + +显然,对我来说,Python 的主要优势之一就是它很简单,当我写 Python 时,我不再需要担心内存管理问题或者会导致核心转储的程序崩溃 —— 对于 C 程序员来说,处理这些问题烦的要命。而不那么明显的优势恰好在我更改语言时显现,我在 90 年代末写应用程序和非核心系统服务的代码时,为了平衡成本与风险都会倾向于选择具有自动内存管理但是开销更大的语言,以抵消之前提到的 C 语言的缺陷。而在仅仅几年之前(甚至是 1990 年),那些语言的开销还是大到无法承受的;那时硬件产业的发展还在早期阶段,没有给摩尔定律足够的时间来发挥威力。 + +尽量地在 C 语言和 Python 之间选择 C —— 只要是能的话我就会从 C 语言转移到 Python 。这是一种降低工程复杂程度的有效策略。我将这种策略应用在了 GPSD 中,而针对 NTPsec , 我对这个策略的采用则更加系统化。这就是我们能把 NTP 的代码库大小削减四分之一的原因。 + +但是今天我不是来讲 Python 的。尽管我觉得它在竞争中脱颖而出,Python 也未必真的是在 2000 年之前彻底结束我在新项目上使用 C 语言的原因,因为在当时任何一个新的学院派的动态语言都可以让我不再选择使用 C 语言。也有可能是在某段时间里在我写了很多 Java 之后,我才慢慢远离了 C 语言。 + +我写这个回忆录是因为我觉得我并非特例,在世纪之交,同样的发展和转变也改变了不少 C 语言老手的编码习惯。像我一样,他们在当时也并没有意识到这种转变正在发生。 + +在 2000 年以后,尽管我还在使用 C/C++ 写之前的项目,比如 GPSD ,游戏韦诺之战以及 NTPsec,但是我的所有新项目都是使用 Python 的。 + +有很多程序是在完全无法在 C 语言下写出来的,尤其是 [reposurgeon][4] 以及 [doclifter][5] 这样的项目。由于 C 语言受限的数据类型本体论以及其脆弱的底层数据管理问题,尝试用 C 写的话可能会很恐怖,并注定失败。 + +甚至是对于更小的项目 —— 那些可以在 C 中实现的东西 —— 我也使用 Python 写,因为我不想花不必要的时间以及精力去处理内核转储问题。这种情况一直持续到去年年底,持续到我创建我的第一个 Rust 
项目,以及成功写出第一个[使用 Go 语言的项目][6]。 + +如前文所述,尽管我是在讨论我的个人经历,但是我想我的经历体现了时代的趋势。我期待新潮流的出现,而不是仅仅跟随潮流。在 98 年的时候,我就是 Python 的早期使用者。来自 [TIOBE][7] 的数据则表明,在 Go 语言脱胎于公司的实验项目并刚刚从小众语言中脱颖而出的几个月内,我就开始实现自己的第一个 Go 语言项目了。 + +总而言之:直到现在第一批有可能挑战 C 语言的传统地位的语言才出现。我判断这个的标准很简单 —— 只要这个语言能让我等 C 语言老手接受不再写 C 的事实,这个语言才 “有可能” 挑战到 C 语言的地位 —— 来看啊,这有个新编译器,能把 C 转换到新语言,现在你可以让他完成你的_全部工作_了 —— 这样 C 语言的老手就会开心起来。 + +Python 以及和其类似的语言对此做的并不够好。使用 Python 实现 NTPsec(以此举例)可能是个灾难,最终会由于过高的运行时开销以及由于垃圾回收机制导致的延迟变化而烂尾。如果需求是针对单个用户且只需要以人类能接受的速度运行,使用 Python 当然是很好的,但是对于以 _机器的速度_ 运行的程序来说就不总是如此了 —— 尤其是在很高的多用户负载之下。这不只是我自己的判断 —— 因为拿 Go 语言来说,它的存在主要就是因为当时作为 Python 语言主要支持者的 Google 在使用 Python 实现一些工程的时候也遭遇了同样的效能痛点。 + +Go 语言就是为了解决 Python 搞不定的那些大多由 C 语言来实现的任务而设计的。尽管没有一个全自动语言转换软件让我很是不爽,但是使用 Go 语言来写系统程序对我来说不算麻烦,我发现我写 Go 写的还挺开心的。我的很多 C 编码技能还可以继续使用,我还收获了垃圾回收机制以及并发编程机制,这何乐而不为? + +([这里][8]有关于我第一次写 Go 的经验的更多信息) + +本来我想把 Rust 也视为 “C 语言要过时了” 的例证,但是在学习并尝试使用了这门语言编程之后,我觉得[这种语言现在还没有做好准备][9]。也许 5 年以后,它才会成为 C 语言的对手。 + +随着 2017 的尾声来临,我们已经发现了一个相对成熟的语言,其和 C 类似,能够胜任 C 语言的大部分工作场景(我在下面会准确描述),在几年以后,这个语言界的新星可能就会取得成功。 + +这件事意义重大。如果你不长远地回顾历史,你可能看不出来这件事情的伟大性。_三十年了_ —— 这几乎就是我作为一个程序员的全部生涯,我们都没有等到一个 C 语言的继任者,也无法遥望 C 之后的系统编程会是什么样子的。而现在,我们面前突然有了后 C 时代的两种不同的展望和未来…… + +……另一种展望则是下面这个语言留给我们的。我的一个朋友正在开发一个他称之为 “Cx” 的语言,这个语言在 C 语言上做了很少的改动,使得其能够支持类型安全;他的项目的目的就是要创建一个能够在最少人力参与的情况下把古典 C 语言修改为新语言的程序。我不会指出这位朋友的名字,免得给他太多压力,让他做出太多不切实际的保证。但是他的实现方法真的很是有意思,我会尽量给他募集资金。 + +现在,我们看到了可以替代 C 语言实现系统编程的三种不同的可能的道路。而就在两年之前,我们的眼前还是一片漆黑。我重复一遍:这件事情意义重大。 + +我是在说 C 语言将要灭绝吗?不是这样的,在可预见的未来里,C 语言还会是操作系统的内核编程以及设备固件编程的主流语言,在这些场景下,尽力压榨硬件性能的古老规则还在奏效,尽管它可能不是那么安全。 + +现在那些将要被 C 的继任者攻破的领域就是我之前提到的我经常涉及的领域 —— 比如 GPSD 以及 NTPsec、系统服务以及那些因为历史原因而使用 C 语言写的进程。还有就是以 DNS 服务器以及邮件传输代理 —— 那些需要以机器速度而不是人类的速度运行的系统程序。 + +现在我们可以对后 C 时代的未来窥见一斑,即上述这类领域的代码都可以使用那些具有强大内存安全特性的 C 语言的替代者实现。Go 、Rust 或者 Cx ,无论是哪个,都可能使 C 的存在被弱化。比如,如果我现在再来重新实现一遍 NTP ,我可能就会毫不犹豫的使用 Go 语言去完成。 + +-------------------------------------------------------------------------------- + +via: http://esr.ibiblio.org/?p=7711 + +作者:[Eric Raymond][a] +译者:[name1e5s](https://github.com/name1e5s) +校对:[yunfengHe](https://github.com/yunfengHe), [wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://esr.ibiblio.org/?author=2 +[1]:http://esr.ibiblio.org/?author=2 +[2]:http://esr.ibiblio.org/?p=7711 +[3]:http://sng.sourceforge.net/ +[4]:http://www.catb.org/esr/reposurgeon/ +[5]:http://www.catb.org/esr/doclifter/ +[6]:http://www.catb.org/esr/loccount/ +[7]:https://www.tiobe.com/tiobe-index/ +[8]:https://blog.ntpsec.org/2017/02/07/grappling-with-go.html +[9]:http://esr.ibiblio.org/?p=7303 diff --git a/published/20171119 10 Best LaTeX Editors For Linux.md b/published/20171119 10 Best LaTeX Editors For Linux.md new file mode 100644 index 0000000000..0493502624 --- /dev/null +++ b/published/20171119 10 Best LaTeX Editors For Linux.md @@ -0,0 +1,144 @@ +10 款 Linux 平台上最好的 LaTeX 编辑器 +====== + +**简介:一旦你克服了 LaTeX 的学习曲线,就没有什么比 LaTeX 更棒了。下面介绍的是针对 Linux 和其他平台的最好的 LaTeX 编辑器。** + +### LaTeX 是什么? + +[LaTeX][1] 是一个文档制作系统。与纯文本编辑器不同,在 LaTeX 编辑器中你不能只写纯文本,为了组织文档的内容,你还必须使用一些 LaTeX 命令。 + +![LaTeX 示例][3] + +LaTeX 编辑器一般用在出于学术目的的科学研究文档或书籍的出版,最重要的是,当你需要处理包含众多复杂数学符号的文档时,它能够为你带来方便。当然,使用 LaTeX 编辑器是很有趣的,但它也并非总是很有用,除非你对所要编写的文档有一些特别的需求。 + +### 为什么你应当使用 LaTeX? 
+ +好吧,正如我前面所提到的那样,使用 LaTeX 编辑器便意味着你有着特定的需求。为了捣腾 LaTeX 编辑器,并不需要你有一颗极客的头脑。但对于那些使用一般文本编辑器的用户来说,它并不是一个很有效率的解决方法。 + +假如你正在寻找一款工具来精心制作一篇文档,同时你对花费时间在格式化文本上没有任何兴趣,那么 LaTeX 编辑器或许正是你所寻找的那款工具。在 LaTeX 编辑器中,你只需要指定文档的类型,它便会相应地为你设置好文档的字体种类和大小尺寸。正是基于这个原因,难怪它会被认为是 [给作家的最好开源工具][4] 之一。 + +但请务必注意: LaTeX 编辑器并不是自动化的工具,你必须首先学会一些 LaTeX 命令来让它能够精确地处理文本的格式。 + +### 针对 Linux 平台的 10 款最好 LaTeX 编辑器 + +事先说明一下,以下列表并没有一个明确的先后顺序,序号为 3 的编辑器并不一定比序号为 7 的编辑器优秀。 + +#### 1、 LyX + +![][5] + +[LyX][6] 是一个开源的 LaTeX 编辑器,即是说它是网络上可获取到的最好的文档处理引擎之一。LyX 帮助你集中于你的文章,并忘记对单词的格式化,而这些正是每个 LaTeX 编辑器应当做的。LyX 能够让你根据文档的不同,管理不同的文档内容。一旦安装了它,你就可以控制文档中的很多东西了,例如页边距、页眉、页脚、空白、缩进、表格等等。 + +假如你正忙着精心撰写科学类文档、研究论文或类似的文档,你将会很高兴能够体验到 LyX 的公式编辑器,这也是其特色之一。 LyX 还包括一系列的教程来入门,使得入门没有那么多的麻烦。 + +#### 2、 Texmaker + +![][7] + +[Texmaker][8] 被认为是 GNOME 桌面环境下最好的 LaTeX 编辑器之一。它呈现出一个非常好的用户界面,带来了极好的用户体验。它也被称之为最实用的 LaTeX 编辑器之一。假如你经常进行 PDF 的转换,你将发现 TeXmaker 相比其他编辑器更加快速。在你书写的同时,你也可以预览你的文档最终将是什么样子的。同时,你也可以观察到可以很容易地找到所需要的符号。 + +Texmaker 也提供一个扩展的快捷键支持。你有什么理由不试着使用它呢? + +#### 3、 TeXstudio + +![][9] + +假如你想要一个这样的 LaTeX 编辑器:它既能为你提供相当不错的自定义功能,又带有一个易用的界面,那么 [TeXstudio][10] 便是一个完美的选择。它的 UI 确实很简单,但是不粗糙。 TeXstudio 带有语法高亮,自带一个集成的阅读器,可以让你检查参考文献,同时还带有一些其他的辅助工具。 + +它同时还支持某些酷炫的功能,例如自动补全,链接覆盖,书签,多游标等等,这使得书写 LaTeX 文档变得比以前更加简单。 + +TeXstudio 的维护很活跃,对于新手或者高级写作者来说,这使得它成为一个引人注目的选择。 + +#### 4、 Gummi + +![][11] + +[Gummi][12] 是一个非常简单的 LaTeX 编辑器,它基于 GTK+ 工具箱。当然,在这个编辑器中你找不到许多华丽的选项,但如果你只想能够立刻着手写作, 那么 Gummi 便是我们给你的推荐。它支持将文档输出为 PDF 格式,支持语法高亮,并帮助你进行某些基础的错误检查。尽管在 GitHub 上它已经不再被活跃地维护,但它仍然工作地很好。 + +#### 5、 TeXpen + +![][13] + +[TeXpen][14] 是另一个简洁的 LaTeX 编辑器。它为你提供了自动补全功能。但其用户界面或许不会让你感到印象深刻。假如你对用户界面不在意,又想要一个超级容易的 LaTeX 编辑器,那么 TeXpen 将满足你的需求。同时 TeXpen 还能为你校正或提高在文档中使用的英语语法和表达式。 + +#### 6、 ShareLaTeX + +![][15] + +[ShareLaTeX][16] 是一款在线 LaTeX 编辑器。假如你想与某人或某组朋友一同协作进行文档的书写,那么这便是你所需要的。 + +它提供一个免费方案和几种付费方案。甚至来自哈佛大学和牛津大学的学生也都使用它来进行个人的项目。其免费方案还允许你添加一位协作者。 + +其付费方案允许你与 GitHub 和 Dropbox 进行同步,并且能够记录完整的文档修改历史。你可以为你的每个方案选择多个协作者。对于学生,它还提供单独的计费方案。 + +#### 7、 Overleaf + +![][17] + +[Overleaf][18] 是另一款在线的 LaTeX 编辑器。它与 ShareLaTeX 类似,它为专家和学生提供了不同的计费方案。它也提供了一个免费方案,使用它你可以与 GitHub 同步,检查你的修订历史,或添加多个合作者。 + +在每个项目中,它对文件的数目有所限制。所以在大多数情况下如果你对 LaTeX 文件非常熟悉,这并不会为你带来不便。 + +#### 8、 Authorea + +![][19] + +[Authorea][20] 是一个美妙的在线 LaTeX 编辑器。当然,如果考虑到价格,它可能不是最好的一款。对于免费方案,它有 100 MB 的数据上传限制和每次只能创建一个私有文档。而付费方案则提供更多的额外好处,但如果考虑到价格,它可能不是最便宜的。你应该选择 Authorea 的唯一原因应该是因为其用户界面。假如你喜爱使用一款提供令人印象深刻的用户界面的工具,那就不要错过它。 + +#### 9、 Papeeria + +![][21] + +[Papeeria][22] 是在网络上你能够找到的最为便宜的 LaTeX 在线编辑器,如果考虑到它和其他的编辑器一样可信赖的话。假如你想免费地使用它,则你不能使用它开展私有项目。但是,如果你更偏爱公共项目,它允许你创建不限数目的项目,添加不限数目的协作者。它的特色功能是有一个非常简便的画图构造器,并且在无需额外费用的情况下使用 Git 同步。假如你偏爱付费方案,它赋予你创建 10 个私有项目的能力。 + +#### 10、 Kile + +![Kile LaTeX 编辑器][23] + +位于我们最好 LaTeX 编辑器清单的最后一位是 [Kile][24] 编辑器。有些朋友对 Kile 推崇备至,很大程度上是因为其提供某些特色功能。 + +Kile 不仅仅是一款编辑器,它还是一款类似 Eclipse 的 IDE 工具,提供了针对文档和项目的一整套环境。除了快速编译和预览功能,你还可以使用诸如命令的自动补全 、插入引用,按照章节来组织文档等功能。你真的应该使用 Kile 来见识其潜力。 + +Kile 在 Linux 和 Windows 平台下都可获取到。 + +### 总结 + +所以上面便是我们推荐的 LaTeX 编辑器,你可以在 Ubuntu 或其他 Linux 发行版本中使用它们。 + +当然,我们可能还遗漏了某些可以在 Linux 上使用并且有趣的 LaTeX 编辑器。如若你正好知道它们,请在下面的评论中让我们知晓。 + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/LaTeX-editors-linux/ + +作者:[Ankush Das][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/ankush/ +[1]:https://www.LaTeX-project.org/ +[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/latex-sample-example.jpeg 
+[4]:https://itsfoss.com/open-source-tools-writers/ +[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/lyx_latex_editor.jpg +[6]:https://www.LyX.org/ +[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/texmaker_latex_editor.jpg +[8]:http://www.xm1math.net/texmaker/ +[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/tex_studio_latex_editor.jpg +[10]:https://www.texstudio.org/ +[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/gummi_latex_editor.jpg +[12]:https://github.com/alexandervdm/gummi +[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/texpen_latex_editor.jpg +[14]:https://sourceforge.net/projects/texpen/ +[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/sharelatex.jpg +[16]:https://www.shareLaTeX.com/ +[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/overleaf.jpg +[18]:https://www.overleaf.com/ +[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/authorea.jpg +[20]:https://www.authorea.com/ +[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/papeeria_latex_editor.jpg +[22]:https://www.papeeria.com/ +[23]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/kile-latex-800x621.png +[24]:https://kile.sourceforge.io/ diff --git a/published/20171207 How To Find Files Based On their Permissions.md b/published/20171207 How To Find Files Based On their Permissions.md new file mode 100644 index 0000000000..4e8021bf35 --- /dev/null +++ b/published/20171207 How To Find Files Based On their Permissions.md @@ -0,0 +1,188 @@ +如何根据文件权限查找文件 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2017/12/Find-Files-Based-On-their-Permissions-720x340.png) + +在 Linux 中查找文件并不是什么大问题。市面上也有很多可靠的自由开源的可视化查找工具。但对我而言,查找文件,用命令行的方式会更快更简单。我们已经知道 [如何根据访问和修改文件的时间寻找或整理文件][1]。今天,在基于 Unix 的操作系统中,我们将见识如何通过权限查找文件。 + +本段教程中,我将创建三个文件名为 `file1`,`file2` 和 `file3` 分别赋予 `777`,`766` 和 `655` 文件权限,并分别置于名为 `ostechnix` 的文件夹中。 + +``` +mkdir ostechnix && cd ostechnix/ +install -b -m 777 /dev/null file1 +install -b -m 766 /dev/null file2 +install -b -m 655 /dev/null file3 +``` + +![][3] + +现在,让我们通过权限来查找一下文件。 + +### 根据权限查找文件 + +根据权限查找文件最具代表性的语法: + +``` +find -perm mode +``` + +mode 可以是代表权限的八进制数字(777、666 …)也可以是权限符号(u=x,a=r+x)。 + +在深入之前,我们就以下三点详细说明 mode 参数。 + +1. 如果我们不指定任何参数前缀,它将会寻找**具体**权限的文件。 +2. 如果我们使用 `-` 参数前缀, 寻找到的文件至少拥有 mode 所述的权限,而不是具体的权限(大于或等于此权限的文件都会被查找出来)。 +3. 
如果我们使用 `/` 参数前缀,那么所有者、组或者其他人任意一个应当享有此文件的权限。 + +为了让你更好的理解,让我举些例子。 + +首先,我们将要看到基于数字权限查找文件。 + +### 基于数字(八进制)权限查找文件 + +让我们运行下列命令: + +``` +find -perm 777 +``` + +这条命令将会查找到当前目录权限为**确切为 777** 权限的文件。 + +![1][4] + +如你看见的屏幕输出,file1 是唯一一个拥有**确切为 777 权限**的文件。 + +现在,让我们使用 `-` 参数前缀,看看会发生什么。 + +``` +find -perm -766 +``` + +![][5] + +如你所见,命令行上显示两个文件。我们给 file2 设置了 766 权限,但是命令行显示两个文件,什么鬼?因为,我们设置了 `-` 参数前缀。它意味着这条命令将在所有文件中查找文件所有者的“读/写/执行”权限,文件用户组的“读/写”权限和其他用户的“读/写”权限。本例中,file1 和 file2 都符合要求。换句话说,文件并不一样要求时确切的 766 权限。它将会显示任何属于(高于)此权限的文件 。 + +然后,让我们使用 `/` 参数前置,看看会发生什么。 + +``` +find -perm /222 +``` + +![][6] + +上述命令将会查找某些人(要么是所有者、用户组,要么是其他人)拥有写权限的文件。这里有另外一个例子: + +``` +find -perm /220 +``` + +这条命令会查找所有者或用户组中拥有写权限的文件。这意味着匹配所有者和用户组任一可写的文件,而其他人的权限随意。 + +如果你使用 `-` 前缀运行相同的命令,你只会看到所有者和用户组都拥有写权限的文件。 + +``` +find -perm -220 +``` + +下面的截图会告诉你这两个参数前缀的不同。 + +![][7] + +如我之前说过的一样,我们也可以使用符号表示文件权限。 + +请阅读: + +- [如何在 Linux 中找到最大和最小的目录和文件][10] +- [如何在 Linux 的目录树中找到最老的文件][11] +- [如何在 Linux 中找到超过或小于某个大小的文件][12] + +### 基于符号的文件权限查找文件 + +在下面的例子中,我们使用例如 `u`(所有者)、`g`(用户组) 和 `o`(其他) 的符号表示法。我们也可以使用字母 `a` 代表上述三种类型。我们可以通过特指的 `r` (读)、 `w` (写)、 `x` (执行)分别代表它们的权限。 + +例如,寻找用户组中拥有 `写` 权限的文件,执行: + +``` +find -perm -g=w +``` + +![][8] + +上面的例子中,file1 和 file2 都拥有 `写` 权限。请注意,你可以等效使用 `=` 或 `+` 两种符号标识。例如,下列两行相同效果的代码。 + +``` +find -perm -g=w +find -perm -g+w +``` + +查找文件所有者中拥有写权限的文件,执行: + +``` +find -perm -u=w +``` + +查找所有用户中拥有写权限的文件,执行: + +``` +find -perm -a=w +``` + +查找所有者和用户组中同时拥有写权限的文件,执行: + +``` +find -perm -g+w,u+w +``` + +上述命令等效与 `find -perm -220`。 + +查找所有者或用户组中拥有写权限的文件,执行: + +``` +find -perm /u+w,g+w +``` + +或者, + +``` +find -perm /u=w,g=w +``` + +上述命令等效于 `find -perm /220`。 + +更多详情,参照 man 手册。 + +``` +man find +``` + +了解更多简化案例或其他 Linux 命令,查看[man 手册][9]。 + +然后,这就是所有的内容。希望这个教程有用。更多干货,敬请关注。 + +干杯! + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/find-files-based-permissions/ + +作者:[SK][a] +译者:[CYLeft](https://github.com/CYLeft) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/find-sort-files-based-access-modification-date-time-linux/ +[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7/ +[3]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-1-1.png +[4]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-2.png +[5]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-3.png + +[6]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-6.png +[7]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-7.png +[8]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-8.png +[9]:https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/ +[10]:https://www.ostechnix.com/how-to-find-largest-and-smallest-directories-and-files-in-linux/ +[11]:https://www.ostechnix.com/find-oldest-file-directory-tree-linux/ +[12]:https://www.ostechnix.com/find-files-bigger-smaller-x-size-linux/ \ No newline at end of file diff --git a/published/20171212 How to Search PDF Files from the Terminal with pdfgrep.md b/published/20171212 How to Search PDF Files from the Terminal with pdfgrep.md new file mode 100644 index 0000000000..495817a531 --- /dev/null +++ b/published/20171212 How to Search PDF Files from the Terminal with pdfgrep.md @@ -0,0 +1,71 @@ +如何使用 pdfgrep 从终端搜索 PDF 文件 +====== + 
+![](https://www.maketecheasier.com/assets/uploads/2017/12/search-pdf-terminal.jpg) + +诸如 [grep][1] 和 [ack-grep][2] 之类的命令行工具对于搜索匹配指定[正则表达式][3]的纯文本非常有用。但是你有没有试过使用这些工具在 PDF 中搜索?不要这么做!由于这些工具无法读取PDF文件,因此你不会得到任何结果。它们只能读取纯文本文件。 + +顾名思义,[pdfgrep][4] 是一个可以在不打开文件的情况下搜索 PDF 中的文本的小命令行程序。它非常快速 —— 比几乎所有 PDF 浏览器提供的搜索更快。`grep` 和 `pdfgrep` 的最大区别在于 `pdfgrep` 对页进行操作,而 `grep` 对行操作。`grep` 如果在一行上找到多个匹配项,它也会多次打印单行。让我们看看如何使用该工具。 + +### 安装 + +对于 Ubuntu 和其他基于 Ubuntu 的 Linux 发行版来说,这非常简单: + +``` +sudo apt install pdfgrep +``` + +对于其他发行版,只要在[包管理器][5]里输入 “pdfgrep” 查找,它就应该能够安装它。万一你想浏览其代码,你也可以查看项目的 [GitLab 页面][6]。 + +### 测试运行 + +现在你已经安装了这个工具,让我们去测试一下。`pdfgrep` 命令采用以下格式: + +``` +pdfgrep [OPTION...] PATTERN [FILE...] +``` + +- `OPTION` 是一个额外的属性列表,给出诸如 `-i` 或 `--ignore-case` 这样的命令,这两者都会忽略匹配正则中的大小写。 +- `PATTERN` 是一个扩展正则表达式。 + +- `FILE` 如果它在相同的工作目录就是文件的名称,或文件的路径。 + +我对 Python 3.6 官方文档运行该命令。下图是结果。 + +![pdfgrep search][7] + +红色高亮显示所有遇到单词 “queue” 的地方。在命令中加入 `-i` 选项将会匹配单词 “Queue”。请记住,当加入 `-i` 时,大小写并不重要。 + +### 其它 + +`pdfgrep` 有相当多的有趣的选项。不过,我只会在这里介绍几个。 + +* `-c` 或者 `--count`:这会抑制匹配的正常输出。它只显示在文件中遇到该单词的次数,而不是显示匹配的长输出。 +* `-p` 或者 `--page-count`:这个选项打印页面上匹配的页码和页面上的该匹配模式出现次数。 +* `-m` 或者 `--max-count` [number]:指定匹配的最大数目。这意味着当达到匹配次数时,该命令停止读取文件。 + +所支持的选项的完整列表可以在 man 页面或者 `pdfgrep` 在线[文档][8]中找到。如果你在批量处理一些文件,不要忘记,`pdfgrep` 可以同时搜索多个文件。可以通过更改 `GREP_COLORS` 环境变量来更改默认的匹配高亮颜色。 + +### 总结 + +下一次你想在 PDF 中搜索一些东西。请考虑使用 `pdfgrep`。该工具会派上用场,并且节省你的时间。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/search-pdf-files-pdfgrep/ + +作者:[Bruno Edoh][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com +[1]:https://www.maketecheasier.com/what-is-grep-and-uses/ +[2]: https://www.maketecheasier.com/ack-a-better-grep/ +[3]: https://www.maketecheasier.com/the-beginner-guide-to-regular-expressions/ +[4]: https://pdfgrep.org/ +[5]: https://www.maketecheasier.com/install-software-in-various-linux-distros/ +[6]: https://gitlab.com/pdfgrep/pdfgrep +[7]: https://www.maketecheasier.com/assets/uploads/2017/11/pdfgrep-screenshot.png (pdfgrep search) +[8]: https://pdfgrep.org/doc.html diff --git a/published/20171214 A step-by-step guide to building open culture.md b/published/20171214 A step-by-step guide to building open culture.md new file mode 100644 index 0000000000..9bfea16c69 --- /dev/null +++ b/published/20171214 A step-by-step guide to building open culture.md @@ -0,0 +1,48 @@ +手把手教你构建开放式文化 +====== + +> 这本开放式组织的最新著作是大规模体验开方的手册。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/red_shoes_whitehurst_lead.jpeg?itok=jKL6AKeW) + +我们于 2015 年发表开放组织Open Organization 后,很多各种类型、各种规模的公司都对“开放式”文化究竟意味着什么感到好奇。甚至当我跟别的公司谈论我们产品和服务的优势时,也总是很快就从谈论技术转移到人和文化上去了。几乎所有对推动创新和保持行业竞争优势有兴趣的人都在思考这个问题。 + +不是只有高层领导团队senior leadership teams才对开放式工作感兴趣。[红帽公司最近一次调查 ][1] 发现 [81% 的受访者 ][2] 同意这样一种说法:“拥有开放式的组织文化对我们公司非常重要。” + +然而要注意的是。同时只有 [67% 的受访者 ][3] 认为:“我们的组织有足够的资源来构建开放式文化。” + +这个结果与我从其他公司那交流所听到的相吻合:人们希望在开放式文化中工作,他们只是不知道该怎么做。对此我表示同情,因为组织的行事风格是很难捕捉、评估和理解的。在 [Catalyst-In-Chief][4] 中,我将其称之为“组织中最神秘莫测的部分。” + +《开放式组织》认为, 在数字转型有望改变我们工作的许多传统方式的时代,拥抱开放文化是创造持续创新的最可靠途径。当我们在书写这本书的时候,我们所关注的是描述在红帽公司中兴起的那种文化--而不是编写一本如何操作的书。我们并不会制定出一步步的流程来让其他组织采用。 + +这也是为什么与其他领导者和高管谈论他们是如何开始构建开放式文化的会那么有趣。在创建开放组织时,很多高管会说我们要“改变我们的文化”。但是文化并不是一项输入。它是一项输出——它是人们互动和日常行为的副产品。 + +告诉组织成员“更加透明地工作”,“更多地合作”,以及“更加包容地行动”并没有什么作用。因为像“透明”,“合作”和“包容”这一类的文化特质并不是行动。他们只是组织内指导行为的价值观而已。 + 
+要如何才能构建开放式文化呢? + +在过去的两年里,Opensource.com 社区收集了各种以开放的精神来进行工作、管理和领导的最佳实践方法。现在我们在新书 《[The Open Organization Workbook][5]》 中将之分享出来,这是一本更加规范的引发文化变革的指引。 + +要记住,任何改变,尤其是巨大的改变,都需要承诺、耐心,以及努力的工作。我推荐你在通往伟大成功的大道上先使用这本工作手册来实现一些微小的,有意义的成果。 + +通过阅读这本书,你将能够构建一个开放而又富有创新的文化氛围,使你们的人能够茁壮成长。我已經迫不及待想听听你的故事了。 + +本文摘自 《[Open Organization Workbook project][6]》。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/17/12/whitehurst-workbook-introduction + +作者:[Jim Whitehurst][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jwhitehurst +[1]:https://www.redhat.com/en/blog/red-hat-releases-2017-open-source-culture-survey-results +[2]:https://www.techvalidate.com/tvid/923-06D-74C +[3]:https://www.techvalidate.com/tvid/D30-09E-B52 +[4]:https://opensource.com/open-organization/resources/catalyst-in-chief +[5]:https://opensource.com/open-organization/resources/workbook +[6]:https://opensource.com/open-organization/17/8/workbook-project-announcement diff --git a/published/20171215 How to find and tar files into a tar ball.md b/published/20171215 How to find and tar files into a tar ball.md new file mode 100644 index 0000000000..3dc34e7ab6 --- /dev/null +++ b/published/20171215 How to find and tar files into a tar ball.md @@ -0,0 +1,135 @@ +如何找出并打包文件成 tar 包 +====== + +Q:我想找出所有的 *.doc 文件并将它们创建成一个 tar 包,然后存储在 `/nfs/backups/docs/file.tar` 中。是否可以在 Linux 或者类 Unix 系统上查找并 tar 打包文件? + +`find` 命令用于按照给定条件在目录层次结构中搜索文件。`tar` 命令是用于 Linux 和类 Unix 系统创建 tar 包的归档工具。 + +[![How to find and tar files on linux unix][1]][1] + +让我们看看如何将 `tar` 命令与 `find` 命令结合在一个命令行中创建一个 tar 包。 + +### Find 命令 + +语法是: + +``` +find /path/to/search -name "file-to-search" -options +## 找出所有 Perl(*.pl)文件 ## +find $HOME -name "*.pl" -print +## 找出所有 *.doc 文件 ## +find $HOME -name "*.doc" -print +## 找出所有 *.sh(shell 脚本)并运行 ls -l 命令 ## +find . 
-iname "*.sh" -exec ls -l {} + +``` + +最后一个命令的输出示例: + +``` +-rw-r--r-- 1 vivek vivek 1169 Apr 4 2017 ./backups/ansible/cluster/nginx.build.sh +-rwxr-xr-x 1 vivek vivek 1500 Dec 6 14:36 ./bin/cloudflare.pure.url.sh +lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/cmspostupload.sh -> postupload.sh +lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/cmspreupload.sh -> preupload.sh +lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/cmssuploadimage.sh -> uploadimage.sh +lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/faqpostupload.sh -> postupload.sh +lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/faqpreupload.sh -> preupload.sh +lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/faquploadimage.sh -> uploadimage.sh +-rw-r--r-- 1 vivek vivek 778 Nov 6 14:44 ./bin/mirror.sh +-rwxr-xr-x 1 vivek vivek 136 Apr 25 2015 ./bin/nixcraft.com.301.sh +-rwxr-xr-x 1 vivek vivek 547 Jan 30 2017 ./bin/paypal.sh +-rwxr-xr-x 1 vivek vivek 531 Dec 31 2013 ./bin/postupload.sh +-rwxr-xr-x 1 vivek vivek 437 Dec 31 2013 ./bin/preupload.sh +-rwxr-xr-x 1 vivek vivek 1046 May 18 2017 ./bin/purge.all.cloudflare.domain.sh +lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/tipspostupload.sh -> postupload.sh +lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/tipspreupload.sh -> preupload.sh +lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/tipsuploadimage.sh -> uploadimage.sh +-rwxr-xr-x 1 vivek vivek 1193 Oct 18 2013 ./bin/uploadimage.sh +-rwxr-xr-x 1 vivek vivek 29 Nov 6 14:33 ./.vim/plugged/neomake/tests/fixtures/errors.sh +-rwxr-xr-x 1 vivek vivek 215 Nov 6 14:33 ./.vim/plugged/neomake/tests/helpers/trap.sh +``` + +### Tar 命令 + +要[创建 /home/vivek/projects 目录的 tar 包][2],运行: + +``` +$ tar -cvf /home/vivek/projects.tar /home/vivek/projects +``` + +### 结合 find 和 tar 命令 + +语法是: + +``` +find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} \; +``` + +或者 + +``` +find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} + +``` + +例子: + +``` +find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" \; +``` + +或者 + +``` +find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" + +``` + +这里,find 命令的选项: + +* `-name "*.doc"`:按照给定的模式/标准查找文件。在这里,在 $HOME 中查找所有 *.doc 文件。 +* `-exec tar ...` :对 `find` 命令找到的所有文件执行 `tar` 命令。 + +这里,`tar` 命令的选项: + +* `-r`:将文件追加到归档末尾。参数与 `-c` 选项具有相同的含义。 +* `-v`:详细输出。 +* `-f out.tar` : 将所有文件追加到 out.tar 中。 + +也可以像下面这样将 `find` 命令的输出通过管道输入到 `tar` 命令中: + +``` +find $HOME -name "*.doc" -print0 | tar -cvf /tmp/file.tar --null -T - +``` + +传递给 `find` 命令的 `-print0` 选项处理特殊的文件名。`--null` 和 `-T` 选项告诉 `tar` 命令从标准输入/管道读取输入。也可以使用 `xargs` 命令: + +``` +find $HOME -type f -name "*.sh" | xargs tar cfvz /nfs/x230/my-shell-scripts.tgz +``` + +有关更多信息,请参阅下面的 man 页面: + +``` +$ man tar +$ man find +$ man xargs +$ man bash +``` + +------------------------------ + +作者简介: + +作者是 nixCraft 的创造者,是一名经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本培训师。他曾与全球客户以及 IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 Twitter、Facebook 和 Google+ 上关注他。 + +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/faq/linux-unix-find-tar-files-into-tarball-command/ + +作者:[Vivek Gite][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-find-and-tar-files-on-linux-unix.jpg +[2]:https://www.cyberciti.biz/faq/creating-a-tar-file-linux-command-line/ diff --git a/translated/tech/20171219 Surf anonymously- Learn to install TOR 
network on Linux.md
similarity index 57%
rename from translated/tech/20171219 Surf anonymously- Learn to install TOR network on Linux.md
rename to published/20171219 Surf anonymously- Learn to install TOR network on Linux.md
index 97dc71c641..b2b763a1fb 100644
--- a/translated/tech/20171219 Surf anonymously- Learn to install TOR network on Linux.md
+++ b/published/20171219 Surf anonymously- Learn to install TOR network on Linux.md
@@ -1,70 +1,89 @@
 匿名上网:学习在 Linux 上安装 TOR 网络
 ======
-Tor 网络是一个匿名网络来保护你的互联网以及隐私。Tor 网络是一组志愿者运营的服务器。Tor 通过在由志愿者运营的分布式中继系统之间跳转来保护互联网通信。这避免了人们窥探我们的网络,他们无法了解我们访问的网站或者用户身在何处,并且也可以让我们访问被屏蔽的网站。
+
+Tor 网络是一个用来保护你的互联网通信以及隐私的匿名网络。Tor 网络是一组志愿者运营的服务器。Tor 通过在由志愿者运营的分布式中继系统之间跳转来保护互联网通信。这避免了人们窥探我们的网络,他们无法了解我们访问的网站或者用户身在何处,并且也可以让我们访问被屏蔽的网站。
 
 在本教程中,我们将学习在各种 Linux 操作系统上安装 Tor 网络,以及如何使用它来配置我们的程序来保护通信。
 
- **(推荐阅读:[如何在 Linux 上安装 Tor 浏览器(Ubuntu、Mint、RHEL、Fedora、CentOS)][1])**
+ 推荐阅读:[如何在 Linux 上安装 Tor 浏览器(Ubuntu、Mint、RHEL、Fedora、CentOS)][1]
 
 ### CentOS/RHEL/Fedora
 
-Tor 包是 EPEL 仓库的一部分,所以如果我们安装了 EPEL 仓库,我们可以直接使用 yum 来安装 Tor。如果你需要在您的系统上安装 EPEL 仓库,请使用下列适当的命令(基于操作系统和体系结构):
+Tor 包是 EPEL 仓库的一部分,所以如果我们安装了 EPEL 仓库,我们可以直接使用 `yum` 来安装 Tor。如果你需要在您的系统上安装 EPEL 仓库,请使用下列适当的命令(基于操作系统和体系结构):
 
- **RHEL/CentOS 7**
+RHEL/CentOS 7:
 
- **$ sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-11.noarch.rpm**
+```
+$ sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-11.noarch.rpm
+```
 
- **RHEL/CentOS 6 (64 位)**
+RHEL/CentOS 6 (64 位):
 
- **$ sudo rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm**
+```
+$ sudo rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
+```
 
- **RHEL/CentOS 6 (32 位)**
+RHEL/CentOS 6 (32 位):
 
- **$ sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm**
+```
+$ sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
+```
 
 安装完成后,我们可以用下面的命令安装 Tor 浏览器:
 
- **$ sudo yum install tor**
+```
+$ sudo yum install tor
+```
 
 ### Ubuntu
 
-为了在 Ubuntu 机器上安装 Tor 网络,我们需要添加官方 Tor 仓库。我们需要将仓库信息添加到 “/etc/apt/sources.list” 中。
+为了在 Ubuntu 机器上安装 Tor 网络,我们需要添加官方 Tor 仓库。我们需要将仓库信息添加到 `/etc/apt/sources.list` 中。
 
- **$ sudo nano /etc/apt/sources.list**
+```
+$ sudo nano /etc/apt/sources.list
+```
 
 现在根据你的操作系统添加下面的仓库信息:
 
- **Ubuntu 16.04**
+Ubuntu 16.04:
 
- **deb http://deb.torproject.org/torproject.org xenial main**
-**deb-src http://deb.torproject.org/torproject.org xenial main**
+```
+deb http://deb.torproject.org/torproject.org xenial main
+deb-src http://deb.torproject.org/torproject.org xenial main
+```
 
- **Ubuntu 14.04**
+Ubuntu 14.04:
 
- **deb http://deb.torproject.org/torproject.org trusty main**
-**deb-src http://deb.torproject.org/torproject.org trusty main**
+```
+deb http://deb.torproject.org/torproject.org trusty main
+deb-src http://deb.torproject.org/torproject.org trusty main
+```
 
 接下来打开终端并执行以下两个命令添加用于签名软件包的 gpg 密钥:
 
- **$ gpg -keyserver keys.gnupg.net -recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89**
-**$ gpg -export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -**
+```
+$ gpg --keyserver keys.gnupg.net --recv-keys A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
+$ gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -
+```
 
 现在运行更新并安装 Tor 网络:
 
- **$ sudo apt-get update**
-**$ sudo apt-get install tor deb.torproject.org-keyring**
+```
+$ sudo apt-get update
+$ sudo apt-get install tor deb.torproject.org-keyring
+```
 
 ### Debian
 
 我们可以无需添加任何仓库在 
Debian 上安装 Tor 网络。只要打开终端并以 root 身份执行以下命令: - **$ apt install tor** - -### +``` +$ apt install tor +``` ### Tor 配置 -如果你最终目的只是为了保护互联网浏览,而没有其他要求,直接使用 Tor 更好,但是如果你需要保护即时通信、IRC、Jabber 等程序,则需要配置这些应用程序进行安全通信。但在做之前,让我们先看看**[Tor 网站上提到的警告][2]**。 +如果你最终目的只是为了保护互联网浏览,而没有其他要求,直接使用 Tor 更好,但是如果你需要保护即时通信、IRC、Jabber 等程序,则需要配置这些应用程序进行安全通信。但在做之前,让我们先看看[Tor 网站上提到的警告][2]。 - 不要大流量使用 Tor - 不要在 Tor 中使用任何浏览器插件 @@ -72,7 +91,7 @@ Tor 包是 EPEL 仓库的一部分,所以如果我们安装了 EPEL 仓库, - 不要在线打开通过 Tor 下载的任何文档。 - 尽可能使用 Tor 桥 -现在配置程序来使用 Tor,例如 jabber。首先选择 “SOCKS代理” 而不是使用 HTTP 代理,并使用端口号 9050,或者也可以使用端口 9150(Tor 浏览器使用)。 +现在配置程序来使用 Tor,例如 jabber。首先选择 “SOCKS代理” 而不是使用 HTTP 代理,并使用端口号 `9050`,或者也可以使用端口 9150(Tor 浏览器使用)。 ![install tor network][4] @@ -90,7 +109,7 @@ via: http://linuxtechlab.com/learn-install-tor-network-linux/ 作者:[Shusain][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20171226 How to Configure Linux for Children.md b/published/20171226 How to Configure Linux for Children.md new file mode 100644 index 0000000000..8889150238 --- /dev/null +++ b/published/20171226 How to Configure Linux for Children.md @@ -0,0 +1,127 @@ +如何配置一个小朋友使用的 Linux +====== + +![](https://www.maketecheasier.com/assets/uploads/2017/12/keep-kids-safe-online-hero.jpg) + +如果你接触电脑有一段时间了,提到 Linux,你应该会联想到一些特定的人群。你觉得哪些人在使用 Linux?别担心,这就告诉你。 + +Linux 是一个可以深度定制的操作系统。这就赋予了用户高度控制权。事实上,家长们可以针对小朋友设置出一个专门的 Linux 发行版,确保让孩子不会在不经意间接触那些高危地带。但是相比 Windows,这些设置显得更费时,但是一劳永逸。Linux 的开源免费,让教室或计算机实验室系统部署变得容易。 + +### 小朋友的 Linux 发行版 + +这些为儿童而简化的 Linux 发行版,界面对儿童十分友好。家长只需要先安装和设置,孩子就可以完全独立地使用计算机了。你将看见多彩的图形界面,丰富的图画,简明的语言。 + +不过,不幸的是,这类发行版不会经常更新,甚至有些已经不再积极开发了。但也不意味着不能使用,只是故障发生率可能会高一点。 + +![qimo-gcompris][1] + +#### 1. Edubuntu + +[Edubuntu][2] 是 Ubuntu 的一个分支版本,专用于教育事业。它拥有丰富的图形环境和大量教育软件,易于更新维护。它被设计成初高中学生专用的操作系统。 + +#### 2. Ubermix + +[Ubermix][3] 是根据教育需求而被设计出来的。Ubermix 将学生从复杂的计算机设备中解脱出来,就像手机一样简单易用,而不会牺牲性能和操作系统的全部能力。一键开机、五分钟安装、二十秒钟快速还原机制,以及超过 60 个的免费预装软件,ubermix 就可以让你的硬件变成功能强大的学习设备。 + +#### 3. 
Sugar + +[Sugar][4] 是为“每个孩子一台笔记本(OLPC)计划”而设计的操作系统。Sugar 和普通桌面 Linux 大不相同,它更专注于学生课堂使用和教授编程能力。 + +**注意** :很多为儿童开发的 Linux 发行版我并没有列举,因为它们大都不再积极维护或是被长时间遗弃。 + +### 为小朋友过筛选内容的 Linux + +只有你,最能保护孩子拒绝访问少儿不宜的内容,但是你不可能每分每秒都在孩子身边。但是你可以设置“限制访问”的 URL 到内容过滤代理服务器(通过软件)。这里有两个主要的软件可以帮助你。 + +![儿童内容过滤 Linux][5] + +#### 1、 DansGuardian + +[DansGuardian][6],一个开源内容过滤软件,几乎可以工作在任何 Linux 发行版上,灵活而强大,需要你通过命令行设置你的代理。如果你不深究代理服务器的设置,这可能是最强力的选择。 + +配置 DansGuardian 可不是轻松活儿,但是你可以跟着安装说明按步骤完成。一旦设置完成,它将是过滤不良内容的高效工具。 + +#### 2、 Parental Control: Family Friendly Filter + +[Parental Control: Family Friendly Filter][7] 是 Firefox 的插件,允许家长屏蔽包含色情内容在内的任何少儿不宜的网站。你也可以设置不良网站黑名单,将其一直屏蔽。 + +![firefox 内容过滤插件][8] + +你使用的老版本的 Firefox 可能不支持 [网页插件][9],那么你可以使用 [ProCon Latte 内容过滤器][10]。家长们添加网址到预设的黑名单内,然后设置密码,防止设置被篡改。 + +#### 3、 Blocksi 网页过滤 + +[Blocksi 网页过滤][11] 是 Chrome 浏览器插件,能有效过滤网页和 Youtube。它也提供限时服务,这样你可以限制家里小朋友的上网时间。 + +### 闲趣 + +![Linux 儿童游戏:tux kart][12] + +给孩子们使用的计算机,不管是否是用作教育,最好都要有一些游戏。虽然 Linux 没有 Windows 那么好的游戏性,但也在奋力追赶。这有建议几个有益的游戏,你可以安装到孩子们的计算机上。 + +* [Super Tux Kart][21](竞速卡丁车) +* [GCompris][22](适合教育的游戏) +* [Secret Maryo Chronicles][23](超级马里奥) +* [Childsplay][24](教育/记忆力游戏) +* [EToys][25](儿童编程) +* [TuxTyping][26](打字游戏) +* [Kalzium][27](元素周期表) +* [Tux of Math Command][28](数学游戏) +* [Pink Pony][29](Tron 风格竞速游戏) +* [KTuberling][30](创造游戏) +* [TuxPaint][31](绘画) +* [Blinken][32]([记忆力][33] 游戏) +* [KTurtle][34](编程指导环境) +* [KStars][35](天文馆) +* [Marble][36](虚拟地球) +* [KHangman][37](猜单词) + +### 结论:为什么给孩子使用 Linux? + +Linux 以复杂著称。那为什么给孩子使用 Linux?这是为了让孩子适应 Linux。在 Linux 上工作给了解系统运行提供了很多机会。当孩子长大,他们就有随自己兴趣探索的机会。得益于 Linux 如此开放的平台,孩子们才能得到这么一个极佳的场所发现自己对计算机的毕生之恋。 + +本文于 2010 年 7 月首发,2017 年 12 月更新。 + +图片来自 [在校学生][13] + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/configure-linux-for-children/ + +作者:[Alexander Fox][a] +译者:[CYLeft](https://github.com/CYLeft) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/alexfox/ +[1]:https://www.maketecheasier.com/assets/uploads/2010/08/qimo-gcompris.jpg (qimo-gcompris) +[2]:http://www.edubuntu.org +[3]:http://www.ubermix.org/ +[4]:http://wiki.sugarlabs.org/go/Downloads +[5]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-content-filtering.png (linux-for-children-content-filtering) +[6]:https://help.ubuntu.com/community/DansGuardian +[7]:https://addons.mozilla.org/en-US/firefox/addon/family-friendly-filter/ +[8]:https://www.maketecheasier.com/assets/uploads/2017/12/firefox-content-filter-addon.png (firefox-content-filter-addon) +[9]:https://www.maketecheasier.com/best-firefox-web-extensions/ +[10]:https://addons.mozilla.org/en-US/firefox/addon/procon-latte/ +[11]:https://chrome.google.com/webstore/detail/blocksi-web-filter/pgmjaihnmedpcdkjcgigocogcbffgkbn?hl=en +[12]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-tux-kart-e1513389774535.jpg (linux-for-children-tux-kart) +[13]:https://www.flickr.com/photos/lupuca/8720604364 +[21]:http://supertuxkart.sourceforge.net/ +[22]:http://gcompris.net/ +[23]:http://www.secretmaryo.org/ +[24]:http://www.schoolsplay.org/ +[25]:http://www.squeakland.org/about/intro/ +[26]:http://tux4kids.alioth.debian.org/tuxtype/index.php +[27]:http://edu.kde.org/kalzium/ +[28]:http://tux4kids.alioth.debian.org/tuxmath/index.php +[29]:http://code.google.com/p/pink-pony/ +[30]:http://games.kde.org/game.php?game=ktuberling +[31]:http://www.tuxpaint.org/ 
+[32]:https://www.kde.org/applications/education/blinken/ +[33]:https://www.ebay.com/sch/i.html?_nkw=memory +[34]:https://www.kde.org/applications/education/kturtle/ +[35]:https://www.kde.org/applications/education/kstars/ +[36]:https://www.kde.org/applications/education/marble/ +[37]:https://www.kde.org/applications/education/khangman/ diff --git a/translated/tech/20171226 How to use-run bash aliases over ssh based session.md b/published/20171226 How to use-run bash aliases over ssh based session.md similarity index 62% rename from translated/tech/20171226 How to use-run bash aliases over ssh based session.md rename to published/20171226 How to use-run bash aliases over ssh based session.md index e93f9be95e..03c60be3d7 100644 --- a/translated/tech/20171226 How to use-run bash aliases over ssh based session.md +++ b/published/20171226 How to use-run bash aliases over ssh based session.md @@ -1,7 +1,8 @@ 通过 ssh 会话执行 bash 别名 ====== -我在远程主机上[上设置过一个叫做 file_repl 的 bash 别名 ][1] . 当我使用 ssh 命令登陆远程主机后,可以很正常的使用这个别名。然而这个 bash 别名却无法通过 ssh 来运行,像这样: +我在远程主机上[上设置过一个叫做 file_repl 的 bash 别名 ][1]。当我使用 ssh 命令登录远程主机后,可以很正常的使用这个别名。然而这个 bash 别名却无法通过 ssh 来运行,像这样: + ``` $ ssh vivek@server1.cyberciti.biz file_repl bash:file_repl:command not found @@ -9,38 +10,48 @@ bash:file_repl:command not found 我要怎样做才能通过 ssh 命令运行 bash 别名呢? -SSH 客户端 (ssh) 是一个登陆远程服务器并在远程系统上执行 shell 命令的 Linux/Unix 命令。它被设计用来在两个非信任的机器上通过不安全的网络(比如互联网)提供安全的加密通讯。 +SSH 客户端 (ssh) 是一个登录远程服务器并在远程系统上执行 shell 命令的 Linux/Unix 命令。它被设计用来在两个非信任的机器上通过不安全的网络(比如互联网)提供安全的加密通讯。 -## 如何用 ssh 客户端执行命令 +### 如何用 ssh 客户端执行命令 + +通过 ssh 运行 `free` 命令或 [date 命令][2] 可以这样做: + +``` +$ ssh vivek@server1.cyberciti.biz date +``` -通过 ssh 运行 free 命令或 [date 命令 ][2] 可以这样做: -`$ ssh vivek@server1.cyberciti.biz date` 结果为: + ``` Tue Dec 26 09:02:50 UTC 2017 ``` -或者 -`$ ssh vivek@server1.cyberciti.biz free -h` -结果为: +或者: + +``` +$ ssh vivek@server1.cyberciti.biz free -h +``` + +结果为: + ``` -  total used free shared buff/cache available Mem:2.0G 428M 138M 145M 1.4G 1.1G Swap:0B 0B 0B ``` -## 理解 bash shell 以及命令的类型 +### 理解 bash shell 以及命令的类型 [bash shell][4] 共有下面几类命令: - 1。别名,比如 ll - 2。关键字,比如 if - 3。函数(用户自定义函数,比如 genpasswd) - 4。内置命令,比如 pwd - 5。外部文件,比如 /bin/date +1. 别名,比如 `ll` +2. 关键字,比如 `if` +3. 函数 (用户自定义函数,比如 `genpasswd`) +4. 内置命令,比如 `pwd` +5. 外部文件,比如 `/bin/date` + +[type 命令][5] 和 [command 命令][6] 可以用来查看命令类型: -The [type 命令 ][5] 和 [command 命令 ][6] 可以用来查看命令类型: ``` $ type -a date date is /bin/date @@ -51,33 +62,38 @@ pwd is a shell builtin $ type -a file_repl is aliased to `sudo -i /shared/takes/master.replication' ``` -date 和 free 都是外部命令而 file_repl 是 `sudo -i /shared/takes/master.replication` 的别名。你不能直接执行像 file_repl 这样的别名: +`date` 和 `free` 都是外部命令,而 `file_repl` 是 `sudo -i /shared/takes/master.replication` 的别名。你不能直接执行像 `file_repl` 这样的别名: ``` $ ssh user@remote file_repl ``` -## 在 Unix 系统上无法直接通过 ssh 客户端执行 bash 别名 +### 在 Unix 系统上无法直接通过 ssh 客户端执行 bash 别名 要解决这个问题可以用下面方法运行 ssh 命令: + ``` $ ssh -t user@remote /bin/bash -ic 'your-alias-here' $ ssh -t user@remote /bin/bash -ic 'file_repl' ``` -ssh 命令选项: - - 1。**-t**:[强制分配伪终端。可以用来在远程机器上执行任意的 ][7] 基于屏幕的程序,有时这非常有用。当使用 `-t` 时你可能会收到一个类似" bash:cannot set terminal process group (-1):Inappropriate ioctl for device。bash:no job control in this shell ." 的错误。 +`ssh` 命令选项: +- `-t`:[强制分配伪终端。可以用来在远程机器上执行任意的][7] 基于屏幕的程序,有时这非常有用。当使用 `-t` 时你可能会收到一个类似“bash: cannot set terminal process group (-1): Inappropriate ioctl for device. 
bash: no job control in this shell .”的错误。 bash shell 的选项: - 1。**-i**:运行交互 shell,这样 shell 才能运行 bash 别名 - 2。**-c**:要执行的命令取之于第一个非选项参数的命令字符串。若在命令字符串后面还有其他参数,这些参会会作为位置参数传递给命令,参数从 $0 开始。 +- `-i`:运行交互 shell,这样 shell 才能运行 bash 别名。 +- `-c`:要执行的命令取之于第一个非选项参数的命令字符串。若在命令字符串后面还有其他参数,这些参数会作为位置参数传递给命令,参数从 `$0` 开始。 总之,要运行一个名叫 `ll` 的 bash 别名,可以运行下面命令: -`$ ssh -t [[email protected]][3] -ic 'll'` + +``` +$ ssh -t vivek@server1.cyberciti.biz -ic 'll' +``` + 结果为: + [![Running bash aliases over ssh based session when using Unix or Linux ssh cli][8]][8] 下面是我的一个 shell 脚本的例子: @@ -100,9 +116,10 @@ ssh ${box} /usr/bin/lxc file push /tmp/https.www.cyberciti.biz.410.url.conf ngin ssh -t ${box} /bin/bash -ic 'push_config_job' ``` -## 相关资料 +### 相关资料 + +更多信息请输入下面命令查看 [OpenSSH 客户端][9] 和 [bash 的 man 帮助 ][10]: -更多信息请输入下面命令查看 [OpenSSH client][9] 和 [bash 的 man 帮助 ][10]: ``` $ man ssh $ man bash @@ -110,14 +127,13 @@ $ help type $ help command ``` - -------------------------------------------------------------------------------- via: https://www.cyberciti.biz/faq/use-bash-aliases-ssh-based-session/ 作者:[Vivek Gite][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20171230 How To Sync Time Between Linux And Windows Dual Boot.md b/published/20171230 How To Sync Time Between Linux And Windows Dual Boot.md similarity index 73% rename from translated/tech/20171230 How To Sync Time Between Linux And Windows Dual Boot.md rename to published/20171230 How To Sync Time Between Linux And Windows Dual Boot.md index 2738213365..1c152f8ba5 100644 --- a/translated/tech/20171230 How To Sync Time Between Linux And Windows Dual Boot.md +++ b/published/20171230 How To Sync Time Between Linux And Windows Dual Boot.md @@ -1,12 +1,15 @@ 解决 Linux 和 Windows 双启动带来的时间同步问题 ====== -想在保留 windows 系统的前提下尝试其他 Linux 发行版,双启动是个常用的做法。这种方法如此风行是因为实现双启动是一件很容易的事情。然而这也带来了一个大问题,那就是 **时间**。 + +![](http://www.theitstuff.com/wp-content/uploads/2017/12/How-To-Sync-Time-Between-Linux-And-Windows-Dual-Boot.jpg) + +想在保留 Windows 系统的前提下尝试其他 Linux 发行版,双启动是个常用的做法。这种方法如此风行是因为实现双启动是一件很容易的事情。然而这也带来了一个大问题,那就是 **时间**。 是的,你没有看错。若你只是用一个操作系统,时间同步不会有什么问题。但若有 Windows 和 Linux 两个系统,则可能出现时间同步上的问题。Linux 使用的是格林威治时间而 Windows 使用的是本地时间。当你从 Linux 切换到 Windows 或者从 Windows 切换到 Linux 时,就可能显示错误的时间了。 不过不要担心,这个问题很好解决。 -点击 windows 系统中的开始菜单,然后搜索 regedit。 +点击 Windows 系统中的开始菜单,然后搜索 regedit。 [![open regedit in windows 10][1]][1] @@ -14,15 +17,13 @@ [![windows 10 registry editor][2]][2] -在左边的导航菜单,导航到 - +在左边的导航菜单,导航到 `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation`。 - **`HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation`** - -在右边窗口,右键点击空白位置,然后选择 **`New>> DWORD(32 bit) Value`**。 +在右边窗口,右键点击空白位置,然后选择 `New >> DWORD(32 bit) Value`。 [![change time format utc from windows registry][3]][3] -之后,会有新生成一个条目,而且这个条目默认是高亮的。将这个条目重命名为 `**RealTimeIsUniversal**` 并设置值为 **1。** +之后,你会新生成一个条目,而且这个条目默认是高亮的。将这个条目重命名为 `RealTimeIsUniversal` 并设置值为 `1`。 [![set universal time utc in windows][4]][4] @@ -34,7 +35,7 @@ via: http://www.theitstuff.com/how-to-sync-time-between-linux-and-windows-dual-b 作者:[Rishabh Kandari][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180101 The mysterious case of the Linux Page Table 
Isolation patches.md b/published/20180101 The mysterious case of the Linux Page Table Isolation patches.md new file mode 100644 index 0000000000..f4b2d1568c --- /dev/null +++ b/published/20180101 The mysterious case of the Linux Page Table Isolation patches.md @@ -0,0 +1,139 @@ +关于 Linux 页面表隔离补丁的神秘情况 +===== + +**[本文勘误与补充][1]** + +_长文预警:_ 这是一个目前严格限制的、禁止披露的安全 bug(LCTT 译注:目前已经部分披露),它影响到目前几乎所有实现虚拟内存的 CPU 架构,需要硬件的改变才能完全解决这个 bug。通过软件来缓解这种影响的紧急开发工作正在进行中,并且最近在 Linux 内核中已经得以实现,并且,在 11 月份,在 NT 内核中也开始了一个类似的紧急开发。在最糟糕的情况下,软件修复会导致一般工作负载出现巨大的减速(LCTT 译注:外在表现为 CPU 性能下降)。这里有一个提示,攻击会影响虚拟化环境,包括 Amazon EC2 和 Google 计算引擎,以及另外的提示是,这种精确的攻击可能涉及一个新的 Rowhammer 变种(LCTT 译注:一个由 Google 安全团队提出的 DRAM 的安全漏洞,在文章的后面部分会简单介绍)。 + +我一般不太关心安全问题,但是,对于这个 bug 我有点好奇,而一般会去写这个主题的人似乎都很忙,要么就是知道这个主题细节的人会保持沉默。这让我在新年的第一天(元旦那天)花了几个小时深入去挖掘关于这个谜团的更多信息,并且我将这些信息片断拼凑到了一起。 + +注意,这是一件相互之间高度相关的事件,因此,它的主要描述都是猜测,除非过一段时间,它的限制禁令被取消。我所看到的,包括涉及到的供应商、许多争论和这种戏剧性场面,将在限制禁令取消的那一天出现。 + +### LWN + +这个事件的线索出现于 12 月 20 日 LWN 上的 [内核页面表的当前状况:页面隔离][2]这篇文章。从文章语气上明显可以看到这项工作的紧急程度,内核的核心开发者紧急加入了 [KAISER 补丁系列][3]的开发——它由奥地利的 [TU Graz][4] 的一组研究人员首次发表于去年 10 月份。 + +这一系列的补丁的用途从概念上说很简单:为了阻止运行在用户空间的进程在进程页面表中通过映射得到内核空间页面的各种攻击方式,它可以很好地阻止了从非特权的用户空间代码中识别到内核虚拟地址的攻击企图。 + +这个小组在描述 KAISER 的论文《[KASLR 已死:KASLR 永存][5]》摘要中特别指出,当用户代码在 CPU 上处于活动状态的时候,在内存管理硬件中删除所有内核地址空间的信息。 + +这个补丁集的魅力在于它触及到了核心,内核的全部基柱(以及与用户空间的接口),显然,它应该被最优先考虑。遍观 Linux 中内存管理方面的变化,通常某个变化的首次引入会发生在该改变被合并的很久之前,并且,通常会进行多次的评估、拒绝、以及因各种原因爆发争论的一系列过程。 + +而 KAISER(就是现在的 KPTI)系列(从引入到)被合并还不足三个月。 + +### ASLR 概述 + +从表面上看,这些补丁设计以确保地址空间布局随机化Address Space Layout Randomization(ASLR)仍然有效:这是一个现代操作系统的安全特性,它试图将更多的随机位引入到公共映射对象的地址空间中。 + +例如,在引用 `/usr/bin/python` 时,动态链接将对系统的 C 库、堆、线程栈、以及主要的可执行文件进行排布,去接受随机分配的地址范围: + +``` +$ bash -c ‘grep heap /proc/$$/maps’ +019de000-01acb000 rw-p 00000000 00:00 0                                  [heap] +$ bash -c 'grep heap /proc/$$/maps’ +023ac000-02499000 rw-p 00000000 00:00 0                                  [heap] +``` +注意两次运行的 bash 进程的堆(heap)的开始和结束偏移量上的变化。 + +如果一个缓存区管理的 bug 将导致攻击者可以去覆写一些程序代码指向的内存地址,而那个地址之后将在程序控制流中使用,这样这种攻击者就可以使控制流转向到一个包含他们所选择的内容的缓冲区上。而这个特性的作用是,对于攻击者来说,使用机器代码来填充缓冲区做他们想做的事情(例如,调用 `system()` C 库函数)将更困难,因为那个函数的地址在不同的运行进程上不同的。 + +这是一个简单的示例,ASLR 被设计用于去保护类似这样的许多场景,包括阻止攻击者了解有可能被用来修改控制流的程序数据的地址或者实现一个攻击。 + +KASLR 是应用到内核本身的一个 “简化的” ASLR:在每个重新引导的系统上,属于内核的地址范围是随机的,这样就使得,虽然被攻击者操控的控制流运行在内核模式上,但是,他们不能猜测到为实现他们的攻击目的所需要的函数和结构的地址,比如,定位当前进程的数据段,将活动的 UID 从一个非特权用户提升到 root 用户,等等。 + +### 坏消息:缓减这种攻击的软件运行成本过于贵重 + +之前的方式,Linux 将内核的内存映射到用户内存的同一个页面表中的主要原因是,当用户的代码触发一个系统调用、故障、或者产生中断时,就不需要改变正在运行的进程的虚拟内存布局。 + +因为它不需要去改变虚拟内存布局,进而也就不需要去清洗掉(flush)依赖于该布局的与 CPU 性能高度相关的缓存(LCTT 译注:意即如果清掉这些高速缓存,CPU 性能就会下降),而主要是通过 [转换查找缓冲器][6]Translation Lookaside Buffer(TLB)(LCTT 译注:TLB ,将虚拟地址转换为物理地址)。 + +随着页面表分割补丁的合并,内核每次开始运行时,需要将内核的缓存清掉,并且,每次用户代码恢复运行时都会这样。对于大多数工作负载,在每个系统调用中,TLB 的实际总损失将导致明显的变慢:[@grsecurity 测量的一个简单的案例][7],在一个最新的 AMD CPU 上,Linux `du -s` 命令变慢了 50%。 + +### 34C3 + +在今年的 CCC 大会上,你可以找到 TU Graz 的另外一位研究人员,《[描述了一个纯 Javascript 的 ASLR 攻击][8]》,通过仔细地掌握 CPU 内存管理单元的操作时机,遍历了描述虚拟内存布局的页面表,来实现 ASLR 攻击。它通过高度精确的时间掌握和选择性回收的 CPU 缓存行的组合方式来实现这种结果,一个运行在 web 浏览器的 Javascript 程序可以找回一个 Javascript 对象的虚拟地址,使得可以利用浏览器内存管理 bug 进行接下来的攻击。(LCTT 译注:本文作者勘误说,上述链接 CCC 的讲演与 KAISER 补丁完全无关,是作者弄错了) + +因此,从表面上看,我们有一组 KAISER 补丁,也展示了解除 ASLR 化地址的技术,并且,这个展示使用的是 Javascript,它很快就可以在一个操作系统内核上进行重新部署。 + +### 虚拟内存概述 + +在通常情况下,当一些机器码尝试去加载、存储、或者跳转到一个内存地址时,现代的 CPU 必须首先去转换这个 _虚拟地址_ 到一个 _物理地址_ ,这是通过遍历一系列操作系统托管的数组(被称为页面表)的方式进行的,这些数组描述了虚拟地址和安装在这台机器上的物理内存之间的映射。 + +在现代操作系统中,虚拟内存可能是最重要的强大特性:它可以避免什么发生呢?例如,一个濒临死亡的进程崩溃了操作系统、一个 web 浏览器 bug 崩溃了你的桌面环境、或者一个运行在 Amazon EC2 中的虚拟机的变化影响了同一台主机上的另一个虚拟机。 + +这种攻击的原理是,利用 
CPU 上维护的大量的缓存,通过仔细地操纵这些缓存的内容,它可以去推测内存管理单元的地址,以去访问页面表的不同层级,因为一个未缓存的访问将比一个缓存的访问花费更长的时间(以实时而言)。通过检测页面表上可访问的元素,它可能能够恢复在 MMU(LCTT 译注:存储器管理单元)忙于解决的虚拟地址中的大部分比特(bits)。 + +### 这种动机的证据,但是不用恐慌 + +我们找到了动机,但是到目前为止,我们并没有看到这项工作引进任何恐慌。总的来说,ASLR 并不能完全缓减这种风险,并且也是一道最后的防线:仅在这 6 个月的周期内,即便是一个没有安全意识的人也能看到一些关于解除(unmasking) ASLR 化的指针的新闻,并且,实际上这种事从 ASLR 出现时就有了。 + +单独的修复 ASLR 并不足于去描述这项工作高优先级背后的动机。 + +### 它是硬件安全 bug 的证据 + +通过阅读这一系列补丁,可以明确许多事情。 + +第一,正如 [@grsecurity 指出][9] 的,代码中的一些注释已经被编辑掉了(redacted),并且,描述这项工作的附加的主文档文件已经在 Linux 源代码树中看不到了。 + +通过检查代码,它以运行时补丁的方式构建,在系统引导时仅当内核检测到是受影响的系统时才会被应用,与对臭名昭著的 [Pentium F00F bug][10] 的缓解措施,使用完全相同的机制: + +![](http://78.media.tumblr.com/1c80c45e14c1e676b35cdd89cc9b557c/tumblr_inline_p1untxZBBD1rkm8fh_500.jpg) + +### 更多的线索:Microsoft 也已经实现了页面表的分割 + +通过对 FreeBSD 源代码的一个简单挖掘可以看出,目前,其它的自由操作系统没有实现页面表分割,但是,通过 [Alex Ioniscu 在 Twitter][11] 上的提示,这项工作已经不局限于 Linux 了:从 11 月起,公开的 NT 内核也已经实现了同样的技术。 + +### 猜测:Rowhammer + +对 TU Graz 研究人员的工作的进一步挖掘,我们找到这篇 《[当 rowhammer 仅敲一次][12]》,这是 12 月 4 日通告的一个 [新的 Rowhammer 攻击的变种][13]: + +> 在这篇论文中,我们提出了新的 Rowhammer 攻击和漏洞的原始利用方式,表明即便是组合了所有防御也没有效果。我们的新攻击技术,对一个位置的反复 “敲打”(hammering),打破了以前假定的触发 Rowhammer bug 的前提条件。 + +快速回顾一下,Rowhammer 是多数(全部?)种类的商业 DRAM 的一类根本性问题,比如,在普通的计算机中的内存上。通过精确操作内存中的一个区域,这可能会导致内存该区域存储的相关(但是逻辑上是独立的)内容被毁坏。效果是,Rowhammer 可能被用于去反转内存中的比特(bits),使未经授权的用户代码可以访问到,比如,这个比特位描述了系统中的其它代码的访问权限。 + +我发现在 Rowhammer 上,这项工作很有意思,尤其是它反转的位接近页面表分割补丁时,但是,因为 Rowhammer 攻击要求一个目标:你必须知道你尝试去反转的比特在内存中的物理地址,并且,第一步是得到的物理地址可能是一个虚拟地址,就像在 KASLR 中的解除(unmasking)工作。 + +### 猜测:它影响主要的云供应商 + +在我能看到的内核邮件列表中,除了该子系统维护者的名字之外,e-mail 地址属于 Intel、Amazon 和 Google 的雇员,这表示这两个大的云计算供应商对此特别感兴趣,这为我们提供了一个强大的线索,这项工作很大的可能是受虚拟化安全驱动的。 + +它可能会导致产生更多的猜测:虚拟机 RAM 和由这些虚拟机所使用的虚拟内存地址,最终表示为在主机上大量的相邻的数组,那些数组,尤其是在一个主机上只有两个租户的情况下,在 Xen 和 Linux 内核中是通过内存分配来确定的,这样可能会有(准确性)非常高的可预测行为。 + +### 最喜欢的猜测:这是一个提升特权的攻击 + +把这些综合到一起,我并不难预测,可能是我们在 2018 年会使用的这些存在提升特权的 bug 的发行版,或者类似的系统推动了如此紧急的进展,并且在补丁集的抄送列表中出现如此多的感兴趣者的名字。 + +最后的一个趣闻,虽然我在阅读补丁集的时候没有找到我要的东西,但是,在一些代码中标记,paravirtual 或者 HVM Xen 是不受此影响的。 + +### 吃瓜群众表示 2018 将很有趣 + +这些猜想是完全有可能的,它离实现很近,但是可以肯定的是,当这些事情被公开后,那将是一个非常令人激动的几个星期。 + +-------------------------------------------------------------------------------- + +via: http://pythonsweetness.tumblr.com/post/169166980422/the-mysterious-case-of-the-linux-page-table + +作者:[python sweetness][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://pythonsweetness.tumblr.com/ +[1]:http://pythonsweetness.tumblr.com/post/169217189597/quiet-in-the-peanut-gallery +[2]:https://linux.cn/article-9201-1.html +[3]:https://lwn.net/Articles/738975/ +[4]:https://www.iaik.tugraz.at/content/research/sesys/ +[5]:https://gruss.cc/files/kaiser.pdf +[6]:https://en.wikipedia.org/wiki/Translation_lookaside_buffer +[7]:https://twitter.com/grsecurity/status/947439275460702208 +[8]:https://www.youtube.com/watch?v=ewe3-mUku94 +[9]:https://twitter.com/grsecurity/status/947147105684123649 +[10]:https://en.wikipedia.org/wiki/Pentium_F00F_bug +[11]:https://twitter.com/aionescu/status/930412525111296000 +[12]:https://www.tugraz.at/en/tu-graz/services/news-stories/planet-research/singleview/article/wenn-rowhammer-nur-noch-einmal-klopft/ +[13]:https://arxiv.org/abs/1710.00551 +[14]:http://pythonsweetness.tumblr.com/post/169166980422/the-mysterious-case-of-the-linux-page-table +[15]:http://pythonsweetness.tumblr.com/ + + diff --git a/published/20180104 4 artificial intelligence trends to watch.md b/published/20180104 4 artificial intelligence trends to 
watch.md new file mode 100644 index 0000000000..534575cebe --- /dev/null +++ b/published/20180104 4 artificial intelligence trends to watch.md @@ -0,0 +1,71 @@ +2018 年 4 个需要关注的人工智能趋势 +====== + +> 今年人工智能决策将变得更加透明? + +![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Mentor.png?itok=K-6s_q2C) + + +无论你的 IT 业务现在使用了多少[人工智能][1],预计你将会在 2018 年使用更多。即便你从来没有涉猎过 AI 项目,这也可能是将谈论转变为行动的一年,[德勤][2]董事总经理 David Schatsky 说。他说:“与 AI 开展合作的公司数量正在上升。” + +看看他对未来一年的 AI 预测: + +### 1、预期更多的企业 AI 试点项目 + +如今我们经常使用的许多现成的应用程序和平台都将 AI 结合在一起。 Schatsky 说:“除此之外,越来越多的公司正在试验机器学习或自然语言处理来解决特定的问题,或者帮助理解他们的数据,或者使内部流程自动化,或者改进他们自己的产品和服务。” + +“除此之外,公司与人工智能的合作强度将会上升。”他说,“早期采纳它的公司已经有五个或略少的项目正在进行中,但是我们认为这个数字会上升到十个或有更多正在进行的计划。” 他说,这个预测的一个原因是人工智能技术正在变得越来越好,也越来越容易使用。 + +### 2、人工智能将缓解数据科学人才紧缺的现状 + +人才是数据科学中的一个大问题,大多数大公司都在努力聘用他们所需要的数据科学家。 Schatsky 说,AI 可以承担一些负担。他说:“数据科学的实践正在逐渐被创业公司和大型成熟的技术供应商提供的工具所自动化。”他解释说,大量的数据科学工作是重复的、乏味的,自动化的时机已经成熟。 “数据科学家不会消亡,但他们将会获得更高的生产力,所以一家原本只能做少量数据科学项目的公司,即使不雇用更多的数据科学家,也能借助自动化做更多的事情。” + +### 3、合成数据模型将缓解瓶颈 + +Schatsky 指出,在你训练机器学习模型之前,你必须得到数据来训练它。 这并不容易,他说:“这通常是一个商业瓶颈,而不是生产瓶颈。 在某些情况下,由于有关健康记录和财务信息的规定,你无法获取数据。” + +他说,合成数据模型可以采集一小部分数据,并用它来生成可能需要的较大集合。 “如果你以前需要 10000 个数据点来训练一个模型,但是只能得到 2000 个,那么现在就可以产生缺少的 8000 个数据点,然后继续训练你的模型。” + +### 4、人工智能决策将变得更加透明 + +AI 的业务问题之一就是它经常作为一个黑匣子来操作。也就是说,一旦你训练了一个模型,它就会吐出你不能解释的答案。 Schatsky 说:“机器学习可以自动发现人类无法看到的数据模式,因为数据太多或太复杂。发现了这些模式后,它可以预测未见的新数据。” + +问题是,有时你确实需要知道 AI 发现或预测背后的原因。 “以医学图像为例子来说,模型说根据你给我的数据,这个图像中有 90% 的可能性是肿瘤。”Schatsky 说,“你说,‘你为什么这么认为?’ 模型说:‘我不知道,这是数据给的建议。’” + +Schatsky 说,如果你遵循这些数据,你将不得不对患者进行探查手术。 当你无法解释为什么时,这是一个艰难的请求。 “但在很多情况下,即使模型产生了非常准确的结果,如果不能解释为什么,也没有人愿意相信它。” + +还有一些情况是由于规定,你确实不能使用你无法解释的数据。 Schatsky 说:“如果一家银行拒绝贷款申请,就需要能够解释为什么。 这是一个法规,至少在美国是这样。传统上来说,这是由人来做判断的。一个机器学习模型可能会更准确,但如果不能解释它的答案,就不能使用。” + +大多数算法不是为了解释它们的推理而设计的。 他说:“所以研究人员正在找到聪明的方法来让 AI 泄漏秘密,并解释哪些变量使得这个病人更可能患有肿瘤。 一旦他们这样做,人们可以发现答案,看看为什么会有这样的结论。” + +他说,这意味着人工智能的发现和决定可以用在许多今天不可能的领域。 “这将使这些模型更加值得信赖,在商业世界中更具可用性。” + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2018/1/4-ai-trends-watch + +作者:[Minda Zetlin][a] +译者:[Wuod3n](https://github.com/Wuod3n) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/minda-zetlin +[1]:https://enterprisersproject.com/tags/artificial-intelligence +[2]:https://www2.deloitte.com/us/en.html +[3]:https://enterprisersproject.com/article/2017/12/8-emerging-ai-jobs-it-pros?sc_cid=70160000000h0aXAAQ + + + + + + + + + + + + + + + diff --git a/published/20180112 City of Barcelona Kicks Out Microsoft in Favor of Linux and Open Source.md b/published/20180112 City of Barcelona Kicks Out Microsoft in Favor of Linux and Open Source.md new file mode 100644 index 0000000000..2103838406 --- /dev/null +++ b/published/20180112 City of Barcelona Kicks Out Microsoft in Favor of Linux and Open Source.md @@ -0,0 +1,62 @@ +巴塞罗那城放弃微软,转向 Linux 和开源 +============= + +> 概述:巴塞罗那城市管理署已经规划好了路线图,将从现有的微软和专有软件系统转换到 Linux 和开源软件。 + +西班牙报纸 [El País][1] 日前报道,[巴塞罗那城][2]已在迁移其计算机系统至开源技术的进程中。 + +根据该新闻报道,巴塞罗那城计划首先用开源应用程序替换掉所有的用户端应用。所有的专有软件都会被替换,最后仅剩下 Windows,而最终它也会被一个 Linux 发行版替代。 + +![BarcelonaSave][image-1] + +### 巴塞罗那将会在 2019 年春季全面转换到开源 + +巴塞罗那城已经计划来年将其软件预算的 70% 投入到开源软件中。根据其城市议会技术和数字创新委员会委员 Francesca Bria 的说法,这一转换的过渡期将会在 2019 年春季本届城市管理署的任期结束前完成。 + +### 迁移旨在帮助 IT 人才 + +为了完成向开源的迁移,巴塞罗那城将会在中小企业中探索 IT 相关的项目。另外,城市管理署将吸纳 65 名新的开发者来构建软件以满足特定的需求。 + +设想中的一项重要项目,是开发一个在线的数字市场平台,小型企业将会利用其参加公开招标。
+ +### Ubuntu 将成为替代的 Linux 发行版 + +由于巴塞罗那已经运行着一个 1000 台规模的基于 Ubuntu 桌面的试点项目,Ubuntu 可能会成为替代 Windows 的 Linux 发行版。新闻报道同时披露,Open-Xchange 将会替代 Outlook 邮件客户端和 Exchange 邮件服务器,而 Firefox 与 LibreOffice 将会替代 Internet Explorer 与微软 Office。 + +### 巴塞罗那市政当局成为首个参与「公共资产,公共代码Public Money, Public Code」运动的当局 + +凭借此次向开源的迁移,巴塞罗那市政当局成为首个参与欧洲的「[公共资产,公共代码Public Money, Public Code][3]」运动的当局。 + +[欧洲自由软件基金会][4]发布了一封[公开信][5],倡议公共筹资的软件应该是自由的,并发起了这项运动。已有超过 15,000 人和 100 家组织支持这一号召。你也可以支持这项运动,只需要[签署请愿书][6]并且为开源发出你的声音。 + +### 资金永远是一个理由 + +根据 Bria 的说法,从 Windows 迁移到开源软件促进了重复利用,因为已开发的程序可以部署到西班牙或世界上其他的地方当局。显然,这一迁移也是为了防止大量的金钱被花费在专有软件上。 + +### 你的想法如何? + +对于开源社区来讲,巴塞罗那的迁移是一场已经赢得的战争,也是一个有利条件。在[慕尼黑选择回归微软的怀抱][7]之后,这正是开源社区十分需要的消息。 + +你对巴塞罗那转向开源有什么看法?你预见到其他欧洲城市也会跟随这一变化吗?在评论中和我们分享你的观点吧。 + +*来源: [Open Source Observatory][8]* + +-------------------------------------------------------------------------------- +via: https://itsfoss.com/barcelona-open-source/ + +作者:[Derick Sullivan M. Lobga][a] +译者:[Purling Nayuki](https://github.com/PurlingNayuki) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/derick/ +[1]:https://elpais.com/ccaa/2017/12/01/catalunya/1512145439_132556.html +[2]:https://en.wikipedia.org/wiki/Barcelona +[image-1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/01/barcelona-city-animated.jpg +[3]:https://publiccode.eu/ +[4]:https://fsfe.org/ +[5]:https://publiccode.eu/openletter/ +[6]:https://creativecommons.org/2017/09/18/sign-petition-public-money-produce-public-code/ +[7]:https://itsfoss.com/munich-linux-failure/ +[8]:https://joinup.ec.europa.eu/news/public-money-public-code diff --git a/published/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md b/published/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md new file mode 100644 index 0000000000..b5d1b8637f --- /dev/null +++ b/published/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md @@ -0,0 +1,98 @@ +Debian 取代 Ubuntu 成为 Google 内部 Linux 发行版的新选择 +============================================================ + +> 摘要:Google 多年来一直使用基于 Ubuntu 的内部操作系统 Goobuntu。如今,Goobuntu 正在被基于 Debian Testing 的 gLinux 所取代。 + +如果你读过那篇《[Ubuntu 十个令人惊奇的事实][18]》,你可能知道 Google 使用了一个名为 [Goobuntu][19] 的 Linux 发行版作为开发平台。这是一个定制化的 Linux 发行版,不难猜到,它是基于 Ubuntu 的。 + +Goobuntu 基本上是一个 [采用轻量级的界面的 Ubuntu][20],它是基于 Ubuntu LTS 版本的。如果你认为 Google 对 Ubuntu 的测试或开发做出了贡献,那么你就错了。Google 只是 Canonical 公司的 [Ubuntu Advantage Program][21] 计划的付费客户而已。[Canonical][22] 是 Ubuntu 的母公司。 + +### 遇见 gLinux:Google 基于 Debian Buster 的新 Linux 发行版 + +![gLinux from Goobuntu](https://itsfoss.com/wp-content/uploads/2018/01/glinux-announcement-800x450.jpg) + +在使用 Ubuntu 五年多以后,Google 正在用一个基于 Debian Testing 版本的 Linux 发行版 —— gLinux 取代 Goobuntu。 + +正如 [MuyLinux][23] 所报道的,gLinux 是从软件包的源代码中构建出来的,然后 Google 对其进行了修改,这些改动也将为上游做出贡献。 + +这个“新闻”并不是什么新鲜事,它早在去年八月就在 Debconf'17 开发者大会上宣布了。但不知为何,这件事并没有引起应有的关注。 + +请点击 [这里][24] 观看 Debconf 视频中的演示。gLinux 的演示从 12:00 开始。 + +[推荐阅读:微软出局,巴塞罗那青睐 Linux 系统和开源软件][25] + +### 从 Ubuntu 14.04 LTS 转移到 Debian 10 Buster + +Google 曾经看重 Ubuntu LTS 的稳定性,现在为了及时测试软件而转移到 Debian Testing 上。但目前尚不清楚 Google 为什么决定从 Ubuntu 切换到 Debian。 + +Google 计划如何转移到 Debian Testing?目前的 Debian Testing 版本是即将发布的 Debian 10 Buster。Google 开发了一个内部工具,用于将现有系统从 Ubuntu 14.04 LTS 迁移到 Debian 10 Buster。项目负责人 Margarita 在 Debconf 中声称,经过测试,该工具工作正常。 + +Google 还计划将这些改动发到 Debian 的上游项目中,从而为其发展做出贡献。 + +![gLinux testing plan from 
Google](https://itsfoss.com/wp-content/uploads/2018/01/glinux-testing-plan.jpg) + +*gLinux 的开发计划* + +### Ubuntu 丢失了一个大客户! + +回溯到 2012 年,Canonical 公司澄清说 Google 不是他们最大的商业桌面客户。但至少可以说,Google 是他们的大客户。当 Google 准备切换到 Debian 时,必然会使 Canonical 蒙受损失。 + +[推荐阅读:Mandrake Linux Creator 推出新的开源移动操作系统][26] + +### 你怎么看? + +请记住,Google 不会限制其开发者使用任何操作系统,但鼓励使用 Linux。 + +如果你想使用 Goobuntu 或 gLinux,那得成为 Google 公司的雇员才行。因为这是 Google 的内部项目,不对公众开放。 + +总的来说,这对 Debian 来说是一个好消息,尤其是他们成为了上游发行版的话。对 Ubuntu 来说可就不同了。我已经联系了 Canonical 公司征求意见,但至今没有回应。 + +更新:Canonical 公司回应称,他们“不共享与单个客户关系的细节”,因此他们不能提供有关收入和任何其他的细节。 + +你对 Google 抛弃 Ubuntu 而选择 Debian 有什么看法? + + +![](https://secure.gravatar.com/avatar/20749c268f5d3e4d2c785499eb6a17c0?s=125&d=mm&r=g) + +#### 关于作者 Abhishek Prakash + +我是一名专业的软件开发人员,也是 FOSS 的创始人。我是一个狂热的 Linux 爱好者和开源爱好者。我使用 Ubuntu 并相信知识共享。除了 Linux 之外,我还喜欢经典的侦探推理故事。我是阿加莎·克里斯蒂(Agatha Christie)作品的忠实粉丝。 + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/goobuntu-glinux-google/ + +作者:[Abhishek Prakash][a] +译者:[jessie-pang](https://github.com/jessie-pang) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/abhishek/ +[1]:https://itsfoss.com/author/abhishek/ +[2]:https://itsfoss.com/goobuntu-glinux-google/#comments +[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[4]:https://twitter.com/share?original_referer=/&text=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%E2%80%99s+In-house+Linux+Distribution&url=https://itsfoss.com/goobuntu-glinux-google/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=abhishek_foss +[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution +[8]:https://www.reddit.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution +[9]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[10]:https://twitter.com/share?original_referer=/&text=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%E2%80%99s+In-house+Linux+Distribution&url=https://itsfoss.com/goobuntu-glinux-google/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=abhishek_foss +[11]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[12]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[13]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution 
+[14]:https://www.reddit.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution +[15]:https://itsfoss.com/category/news/ +[16]:https://itsfoss.com/tag/glinux/ +[17]:https://itsfoss.com/tag/goobuntu/ +[18]:https://itsfoss.com/facts-about-ubuntu/ +[19]:https://en.wikipedia.org/wiki/Goobuntu +[20]:http://www.zdnet.com/article/the-truth-about-goobuntu-googles-in-house-desktop-ubuntu-linux/ +[21]:https://www.ubuntu.com/support +[22]:https://www.canonical.com/ +[23]:https://www.muylinux.com/2018/01/15/goobuntu-glinux-google/ +[24]:https://debconf17.debconf.org/talks/44/ +[25]:https://linux.cn/article-9236-1.html +[26]:https://itsfoss.com/eelo-mobile-os/ diff --git a/sources/talk/20160605 Manjaro Gaming- Gaming on Linux Meets Manjaro-s Awesomeness.md b/sources/talk/20160605 Manjaro Gaming- Gaming on Linux Meets Manjaro-s Awesomeness.md new file mode 100644 index 0000000000..78e700de26 --- /dev/null +++ b/sources/talk/20160605 Manjaro Gaming- Gaming on Linux Meets Manjaro-s Awesomeness.md @@ -0,0 +1,115 @@ +Manjaro Gaming: Gaming on Linux Meets Manjaro’s Awesomeness +====== +[![Meet Manjaro Gaming, a Linux distro designed for gamers with the power of Manjaro][1]][1] + +[Gaming on Linux][2]? Yes, that's very much possible and we have a dedicated new Linux distribution aiming for gamers. + +Manjaro Gaming is a Linux distro designed for gamers with the power of Manjaro. Those who have used Manjaro Linux before, know exactly why it is a such a good news for gamers. + +[Manjaro][3] is a Linux distro based on one of the most popular distro - [Arch Linux][4]. Arch Linux is widely known for its bleeding-edge nature offering a lightweight, powerful, extensively customizable and up-to-date experience. And while all those are absolutely great, the main drawback is that Arch Linux embraces the DIY (do it yourself) approach where users need to possess a certain level of technical expertise to get along with it. + +Manjaro strips that requirement and makes Arch accessible to newcomers, and at the same time provides all the advanced and powerful features of Arch for the experienced users as well. In short, Manjaro is an user-friendly Linux distro that works straight out of the box. + +The reasons why Manjaro makes a great and extremely suitable distro for gaming are: + + * Manjaro automatically detects computer's hardware (e.g. Graphics cards) + * Automatically installs the necessary drivers and software (e.g. Graphics drivers) + * Various codecs for media files playback comes pre-installed with it + * Has dedicated repositories that deliver fully tested and stable packages + + + +Manjaro Gaming is packed with all of Manjaro's awesomeness with the addition of various tweaks and software packages dedicated to make gaming on Linux smooth and enjoyable. + +![Inside Manjaro Gaming][5] + +#### Tweaks + +Some of the tweaks made on Manjaro Gaming are: + + * Manjaro Gaming uses highly customizable XFCE desktop environment with an overall dark theme. + * Sleep mode is disabled for preventing computers from sleeping while playing games with GamePad or watching long cutscenes. + + + +#### Softwares + +Maintaining Manjaro's tradition of working straight out of the box, Manjaro Gaming comes bundled with various Open Source software to provide often needed functionalities for gamers. 
Some of the software included are: + + * [**KdenLIVE**][6]: Videos editing software for editing gaming videos + * [**Mumble**][7]: Voice chatting software for gamers + * [**OBS Studio**][8]: Software for video recording and live streaming games videos on [Twitch][9] + * **[OpenShot][10]** : Powerful video editor for Linux + * [**PlayOnLinux**][11]: For running Windows games on Linux with [Wine][12] backend + * [**Shutter**][13]: Feature-rich screenshot tool + + + +#### Emulators + +Manjaro Gaming comes with a long list of gaming emulators: + + * **[DeSmuME][14]** : Nintendo DS emulator + * **[Dolphin Emulator][15]** : GameCube and Wii emulator + * [**DOSBox**][16]: DOS Games emulator + * **[FCEUX][17]** : Nintendo Entertainment System (NES), Famicom, and Famicom Disk System (FDS) emulator + * **Gens/GS** : Sega Mega Drive emulator + * **[PCSXR][18]** : PlayStation Emulator + * [**PCSX2**][19]: Playstation 2 emulator + * [**PPSSPP**][20]: PSP emulator + * **[Stella][21]** : Atari 2600 VCS emulator + * [**VBA-M**][22]: Gameboy and GameboyAdvance emulator + * [**Yabause**][23]: Sega Saturn Emulator + * **[ZSNES][24]** : Super Nintendo emulator + + + +#### Others + +There are some terminal add-ons - Color, ILoveCandy and Screenfetch. [Conky Manager][25] with Retro Conky theme is also included. + +**Point to be noted: Not all the features mentioned are included in the current release of Manjaro Gaming (which is 16.03). Some of them are scheduled to be included in the next release - Manjaro Gaming 16.06.** + +### Downloads + +Manjaro Gaming 16.06 is going to be the first proper release of Manjaro Gaming. But if you are interested enough to try it now, Manjaro Gaming 16.03 is available for downloading on the Sourceforge [project page][26]. Go there and grab the ISO. + +How do you feel about this new Gaming Linux distro? Are you thinking of giving it a try? Let us know! 
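+
+If you do take it for a spin on real hardware, you will first need to write the downloaded ISO to a USB stick. Below is a minimal sketch of one common way to do that with `dd`; the ISO filename and the `/dev/sdX` device node are assumptions, so double-check the device name with `lsblk` before running anything, because `dd` overwrites its target without asking.
+
+```
+# List block devices and identify the USB stick (assumed to be /dev/sdX below)
+lsblk
+
+# Write the image to the stick; the filename is hypothetical -- use the one you downloaded
+sudo dd if=manjaro-gaming-16.03-x86_64.iso of=/dev/sdX bs=4M status=progress
+
+# Flush write buffers before unplugging the stick
+sync
+```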
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/manjaro-gaming-linux/ + +作者:[Munif Tanjim][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/munif/ +[1]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming.jpg +[2]:https://itsfoss.com/linux-gaming-guide/ +[3]:https://manjaro.github.io/ +[4]:https://www.archlinux.org/ +[5]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming-Inside-1024x576.png +[6]:https://kdenlive.org/ +[7]:https://www.mumble.info +[8]:https://obsproject.com/ +[9]:https://www.twitch.tv/ +[10]:http://www.openshot.org/ +[11]:https://www.playonlinux.com +[12]:https://www.winehq.org/ +[13]:http://shutter-project.org/ +[14]:http://desmume.org/ +[15]:https://dolphin-emu.org +[16]:https://www.dosbox.com/ +[17]:http://www.fceux.com/ +[18]:https://pcsxr.codeplex.com +[19]:http://pcsx2.net/ +[20]:http://www.ppsspp.org/ +[21]:http://stella.sourceforge.net/ +[22]:http://vba-m.com/ +[23]:https://yabause.org/ +[24]:http://www.zsnes.com/ +[25]:https://itsfoss.com/conky-gui-ubuntu-1304/ +[26]:https://sourceforge.net/projects/mgame/ diff --git a/sources/talk/20170523 Best Websites to Download Linux Games.md b/sources/talk/20170523 Best Websites to Download Linux Games.md new file mode 100644 index 0000000000..d3b2870738 --- /dev/null +++ b/sources/talk/20170523 Best Websites to Download Linux Games.md @@ -0,0 +1,141 @@ +申请翻译  WangYueScream +================================ +Best Websites to Download Linux Games +====== +Brief: New to Linux gaming and wondering where to **download Linux games** from? We list the best resources from where you can **download free Linux games** as well as buy premium Linux games. + +Linux and Games? Once upon a time, it was hard to imagine these two going together. Then time passed and a lot of things happened. Fast-forward to the present, there are thousands and thousands of games available for Linux and more are being developed by both big game companies and independent developers. + +[Gaming on Linux][1] is real now and today we are going to see where you can find games for Linux platform and hunt down the games that you like. + +### Where to download Linux games? + +![Websites to download Linux games][2] + +First and foremost, look into your Linux distribution's software center (if it has one). You should find plenty of games there already. + +But that doesn't mean you should restrict yourself to the software center. Let me list you some websites to download Linux games. + +#### 1. Steam + +If you are a seasoned gamer, you have heard about Steam. Yes, if you don't know it already, Steam is available for Linux. Steam recommends Ubuntu but it should run on other major distributions too. And if you are really psyched up about Steam, there is even a dedicated operating system for playing Steam games - [SteamOS][3]. We covered it last year in the [Best Linux Gaming Distribution][4] article. + +![Steam Store][5] + +Steam has the largest games store for Linux. While writing this article, it has exactly 3487 games on Linux platform and that's really huge. You can find games from wide range of genre. As for [Digital Rights Management][6], most of the Steam games have some kind of DRM. + +For using Steam either you will have to install the [Steam client][7] on your Linux distribution or use SteamOS. 
One of the advantages of Steam is that, after your initial setup, for most of the games you wouldn't need to worry about dependencies and complex installation process. Steam client will do the heavy tasks for you. + +[Steam Store][8] + +#### 2. GOG + +If you are solely interested in DRM-free games, GOG has a pretty large collection of it. At this moment, GOG has 1978 DRM-free games in their library. GOG is kind of famous for its vast collection of DRM-free games. + +![GOG Store][9] + +Officially, GOG games support Ubuntu LTS versions and Linux Mint. So, Ubuntu and its derivatives will have no problem installing them. Installing them on other distributions might need some extra works, such as - installing correct dependencies. + +You will not need any extra clients for downloading games from GOG. All the purchased games will be available in your accounts section. You can download them directly with your favorite download manager. + +[GOG Store][10] + +#### 3. Humble Store + +The Humble Store is another place where you can find various games for Linux. There are both DRM-free and non-DRM-free games available on Humble Store. The non-DRM-free games are generally from the Steam. Currently there are about 1826 games for Linux in the Humble Store. + +![The Humble Store][11] + +Humble Store is famous for another reason though. They have a program called [**Humble Indie Bundle**][12] where they offer a bunch of games together with a compelling discount for a limited time period. Another thing about Humble is that when you make a purchase, 10% of the revenue from your purchase goes to charities. + +Humble doesn't have any extra clients for downloading their games. + +[The Humble Store][13] + +#### 4. itch.io + +itch.io is an open marketplace for independent digital creators with a focus on independent video games. itch.io has some of the most interesting and unique games that you can find. Most games available on itch.io are DRM-free. + +![itch.io Store][14] + +Right now, itch.io has 9514 games available in their store for Linux platform. + +itch.io has their own [client][15] for effortlessly downloading, installing, updating and playing their games. + +[itch.io Store][16] + +#### 5. LGDB + +LGDB is an abbreviation for Linux Game Database. Though technically not a game store, it has a large collection of games for Linux along with various information about them. Every game is documented with links of where you can find them. + +![Linux Game Database][17] + +As of now, there are 2046 games entries in the database. They also have very long lists for [Emulators][18], [Tools][19] and [Game Engines][20]. + +[LGDB][21] + +[Annoying Experiences Every Linux Gamer Never Wanted!][27] + +#### 6. Game Jolt + +Game Jolt has a very impressive collection with about 5000 indie games for Linux under their belt. + +![GameJolt Store][22] + +Game Jolt has an (pre-release) [client][23] for downloading, installing, updating and playing games with ease. + +[Game Jolt Store][24] + +### Others + +There are many other stores that sells Linux Games. Also there are many places you can find free games too. Here are a couple of them: + + * [**Bundle Stars**][25]: Bundle Stars currently has 814 Linux games and 31 games bundles. + * [**GamersGate**][26]: GamersGate has 595 Linux games as for now. There are both DRM-free and non-DRM-free games. + + + +#### App Stores, Software Center & Repositories + +Linux distribution has their own application stores or repositories. Though not many, but there you can find various games too. 
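+
+As an example, pulling an open source game straight from your distribution's repositories is usually a one-liner. Here is a minimal sketch on an Ubuntu or Debian system; the package name is an assumption and may differ between distributions and releases:
+
+```
+# Search the configured repositories for a game
+apt search supertuxkart
+
+# Install it (assumes the package is available under this name in your release)
+sudo apt install supertuxkart
+```
+
+The same idea works with `dnf`, `pacman`, or your distribution's graphical software center.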
+ +That's all for today. Did you know there are this many games available for Linux? How do you feel about this? Do you use some other websites to download Linux games? Do share your favorites with us. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/download-linux-games/ + +作者:[Munif Tanjim][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/munif/ +[1]:https://itsfoss.com/linux-gaming-guide/ +[2]:https://itsfoss.com/wp-content/uploads/2017/05/download-linux-games-800x450.jpg +[3]:http://store.steampowered.com/steamos/ +[4]:https://itsfoss.com/linux-gaming-distributions/ +[5]:https://itsfoss.com/wp-content/uploads/2017/05/Steam-Store-800x382.jpg +[6]:https://www.wikiwand.com/en/Digital_rights_management +[7]:http://store.steampowered.com/about/ +[8]:http://store.steampowered.com/linux +[9]:https://itsfoss.com/wp-content/uploads/2017/05/GOG-Store-800x366.jpg +[10]:https://www.gog.com/games?system=lin_mint,lin_ubuntu +[11]:https://itsfoss.com/wp-content/uploads/2017/05/The-Humble-Store-800x393.jpg +[12]:https://www.humblebundle.com/?partner=itsfoss +[13]:https://www.humblebundle.com/store?partner=itsfoss +[14]:https://itsfoss.com/wp-content/uploads/2017/05/itch.io-Store-800x485.jpg +[15]:https://itch.io/app +[16]:https://itch.io/games/platform-linux +[17]:https://itsfoss.com/wp-content/uploads/2017/05/LGDB-800x304.jpg +[18]:https://lgdb.org/emulators +[19]:https://lgdb.org/tools +[20]:https://lgdb.org/engines +[21]:https://lgdb.org/games +[22]:https://itsfoss.com/wp-content/uploads/2017/05/GameJolt-Store-800x357.jpg +[23]:http://gamejolt.com/client +[24]:http://gamejolt.com/games/best?os=linux +[25]:https://www.bundlestars.com/en/games?page=1&platforms=Linux +[26]:https://www.gamersgate.com/games?state=available +[27]:https://itsfoss.com/linux-gaming-problems/ diff --git a/sources/talk/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md b/sources/talk/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md index 5b0014246c..31dbeb394b 100644 --- a/sources/talk/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md +++ b/sources/talk/20170915 Deep learning wars- Facebook-backed PyTorch vs Google-s TensorFlow.md @@ -1,3 +1,4 @@ +Translating by Wuod3n Deep learning wars: Facebook-backed PyTorch vs Google's TensorFlow ====== The rapid rise of tools and techniques in Artificial Intelligence and Machine learning of late has been astounding. Deep Learning, or "Machine learning on steroids" as some say, is one area where data scientists and machine learning experts are spoilt for choice in terms of the libraries and frameworks available. A lot of these frameworks are Python-based, as Python is a more general-purpose and a relatively easier language to work with. [Theano][1], [Keras][2] [TensorFlow][3] are a few of the popular deep learning libraries built on Python, developed with an aim to make the life of machine learning experts easier. 
diff --git a/sources/talk/20180107 7 leadership rules for the DevOps age.md b/sources/talk/20180107 7 leadership rules for the DevOps age.md new file mode 100644 index 0000000000..cd202d5591 --- /dev/null +++ b/sources/talk/20180107 7 leadership rules for the DevOps age.md @@ -0,0 +1,124 @@ +7 leadership rules for the DevOps age +====== + +![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_DigitalAcumen_2.png?itok=TGeMQYs4) + +If [DevOps][1] is ultimately more about culture than any particular technology or platform, then remember this: There isn't a finish line. It's about continuous change and improvement - and the C-suite doesn't get a pass. + +Rather, leaders need to [revise some of their traditional approaches][2] if they expect DevOps to help drive the outcomes they seek. Let's consider seven ideas for more effective IT leadership in the DevOps era. + +### 1. Say "yes" to failure + +The word "failure" has long had very specific connotations in IT, and they're almost universally bad: server failure, backup failure, hard drive failure - you get the picture. + +A healthy DevOps culture, however, depends upon redefining failure - IT leaders should rewrite their thesaurus to make the word synonymous with "opportunity." + +"Prior to DevOps, we had a culture of punishing failure," says Robert Reeves, CTO and co-founder of [Datical][3]. "The only learning we had was to avoid mistakes. The number one way to avoid mistakes in IT is to not change anything: Don't accelerate the release schedule, don't move to the cloud, don't do anything differently!" + +That's a playbook for a bygone era and, as Reeves puts plainly, it doesn't work. In fact, that kind of stasis is actual failure. + +"Companies that release slowly and avoid the cloud are paralyzed by fear - and they will fail," Reeves says. "IT leaders must embrace failure as an opportunity. Humans not only learn from their mistakes, they learn from others' mistakes. A culture of openness and ['psychological safety'][4] fosters learning and improvement." + +**[ Related article: [Why agile leaders must move beyond talking about "failure."][5] ]** + +### 2. Live, eat, and breathe DevOps in the C-suite + +While DevOps culture can certainly grow organically in all directions, companies that are shifting from monolithic, siloed IT practices - and likely encountering headwinds en route - need total buy-in from executive leadership. Without it, you're sending mixed messages and likely emboldening those who'd rather push a _but this is the way we 've always done things_ agenda. [Culture change is hard][6]; people need to see leadership fully invested in that change for it to actually happen. + +"Top management must fully support DevOps in order for it to be successful in delivering the benefits," says Derek Choy, CIO at [Rainforest QA][7]. + +Becoming a DevOps shop. Choy notes, touches pretty much everything in the organization, from technical teams to tools to processes to roles and responsibilities. + +"Without unified sponsorship from top management, DevOps implementation will not be successful," Choy says. "Therefore, it is important to have leaders aligned at the top level before transitioning to DevOps." + +### 3. Don 't just declare "DevOps" - define it + +Even in IT organizations that have welcomed DevOps with open arms, it's possible that's not everyone's on the same page. 
+ +**[Read our related article,**[ **3 areas where DevOps and CIOs must get on the same page**][8] **.]** + +One fundamental reason for such disconnects: People might be operating with different definitions for what the term even means. + +"DevOps can mean different things to different people," Choy says. "It is important for C-level [and] VP-level execs to define the goals of DevOps, clearly stating the expected outcome, understand how this outcome can benefit the business and be able to measure and report on success along the way." + +Indeed, beyond the baseline definition and vision, DevOps requires ongoing and frequent communication, not just in the trenches but throughout the organization. IT leaders must make that a priority. + +"Inevitably, there will be hiccups, there will be failures and disruptions to the business," Choy says. "Leaders need to clearly communicate the journey to the rest of the company and what they can expect as part of the process." + +### 4.DevOps is as much about business as technology + +IT leaders running successful DevOps shops have embraced its culture and practices as a business strategy as much as an approach to building and operating software. DevOps culture is a great enabler of IT's shift from support arm to strategic business unit. + +"IT leaders must shift their thinking and approach from being cost/service centers to driving business outcomes, and a DevOps culture helps speed up those outcomes via automation and stronger collaboration," says Mike Kail, CTO and co-founder at [CYBRIC][9]. + +Indeed, this is a strong current that runs through much of these new "rules" for leading in the age of DevOps. + +"Promoting innovation and encouraging team members to take smart risks is a key part of a DevOps culture and IT leaders need to clearly communicate that on a continuous basis," Kail says. + +"An effective IT leader will need to be more engaged with the business than ever before," says Evan Callendar, director, performance services at [West Monroe Partners][10]. "Gone are the days of yearly or quarterly reviews - you need to welcome the [practice of] [bi-weekly backlog grooming][11]. The ability to think strategically at the year level, but interact at the sprint level, will be rewarded when business expectations are met." + +### 5. Change anything that hampers DevOps goals + + +While DevOps veterans generally agree that DevOps is much more a matter of culture than technology, success does depend on enabling that culture with the right processes and tools. Declaring your department a DevOps shop while resisting the necessary changes to processes or technologies is like buying a Ferrari but keeping the engine from your 20-year-old junker that billows smoke each time you turn the key. + +Exhibit A: [Automation][12]. It's critical parallel strategy for DevOps success. + +"IT leadership has to put an emphasis on automation," Callendar says. "This will be an upfront investment, but without it, DevOps simply will engulf itself with inefficiency and lack of delivery." + +Automation is a fundamental, but change doesn't stop there. + +"Leaders need to push for automation, monitoring, and a continuous delivery process. This usually means changes to many existing practices, processes, team structures, [and] roles," Choy says. "Leaders need to be willing to change anything that'll hinder the team's ability to fully automate the process." + +### 6. 
Rethink team structure and performance metrics + +While we're on the subject of change...if that org chart collecting dust on your desktop is the same one you've been plugging names into for the better part of a decade (or more), it's time for an overhaul. + +"IT executives need to take a completely different approach to organizational structure in this new era of DevOps culture," Kail says. "Remove strict team boundaries, which tend to hamper collaboration, and allow for the teams to be self-organizing and agile." + +Kail says this kind of rethinking can and should extend to other areas in the DevOps age, too, including how you measure individual and team success, and even how you interact with people. + +"Measure initiatives in terms of business outcomes and overall positive impact," Kail advises. "Finally, and something that I believe to be the most important aspect of management: Be empathetic." + +Beware easily collected measurements that are not truly DevOps metrics, writes [Red Hat ][13]technology evangelist Gordon Haff. "DevOps metrics should be tied to business outcomes in some manner," he notes. "You probably don't really care about how many lines of code your developers write, whether a server had a hardware failure overnight, or how comprehensive your test coverage is. In fact, you may not even directly care about the responsiveness of your website or the rapidity of your updates. But you do care to the degree such metrics can be correlated with customers abandoning shopping carts or leaving for a competitor." See his full article, [DevOps metrics: Are you measuring what matters?][14] + +### 7. Chuck conventional wisdom out the window + +If the DevOps age requires new ways of thinking about IT leadership, it follows that some of the old ways need to be retired. But which ones? + +"To be honest, all of them," Kail says. "Get rid of the 'because that's the way we've always done things' mindset. The transition to a culture of DevOps is a complete paradigm shift, not a few subtle changes to the old days of Waterfall and Change Advisory Boards." + +Indeed, IT leaders recognize that real transformation requires more than minor touch-ups to old approaches. Often, it requires a total reboot of a previous process or strategy. + +Callendar of West Monroe Partners shares a parting example of legacy leadership thinking that hampers DevOps: Failing to embrace hybrid IT models and modern infrastructure approaches such as containers and microservices. + +"One of the big rules I see going out the window is architecture consolidation, or the idea that long-term maintenance is cheaper if done within a homogenous environment," Callendar says. + +**Want more wisdom like this, IT leaders? 
[Sign up for our weekly email newsletter][15].** + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2018/1/7-leadership-rules-devops-age + +作者:[Kevin Casey][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/kevin-casey +[1]:https://enterprisersproject.com/tags/devops +[2]:https://enterprisersproject.com/article/2017/7/devops-requires-dumping-old-it-leadership-ideas +[3]:https://www.datical.com/ +[4]:https://rework.withgoogle.com/guides/understanding-team-effectiveness/steps/foster-psychological-safety/ +[5]:https://enterprisersproject.com/article/2017/10/why-agile-leaders-must-move-beyond-talking-about-failure?sc_cid=70160000000h0aXAAQ +[6]:https://enterprisersproject.com/article/2017/10/how-beat-fear-and-loathing-it-change +[7]:https://www.rainforestqa.com/ +[8]:https://enterprisersproject.com/article/2018/1/3-areas-where-devops-and-cios-must-get-same-page +[9]:https://www.cybric.io/ +[10]:http://www.westmonroepartners.com/ +[11]:https://www.scrumalliance.org/community/articles/2017/february/product-backlog-grooming +[12]:https://www.redhat.com/en/topics/automation?intcmp=701f2000000tjyaAAA +[13]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA +[14]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters +[15]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ diff --git a/sources/talk/20180110 8 simple ways to promote team communication.md b/sources/talk/20180110 8 simple ways to promote team communication.md index dc4ee37f72..fb8a334b98 100644 --- a/sources/talk/20180110 8 simple ways to promote team communication.md +++ b/sources/talk/20180110 8 simple ways to promote team communication.md @@ -1,6 +1,6 @@ 8 simple ways to promote team communication ====== - +translating ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/world_remote_teams.png?itok=Wk1yBFv6) Image by : opensource.com diff --git a/sources/tech/20190110 Why isn-t open source hot among computer science students.md b/sources/talk/20180110 Why isn-t open source hot among computer science students.md similarity index 100% rename from sources/tech/20190110 Why isn-t open source hot among computer science students.md rename to sources/talk/20180110 Why isn-t open source hot among computer science students.md diff --git a/sources/talk/20180111 AI and machine learning bias has dangerous implications.md b/sources/talk/20180111 AI and machine learning bias has dangerous implications.md deleted file mode 100644 index 7a83ebb3a2..0000000000 --- a/sources/talk/20180111 AI and machine learning bias has dangerous implications.md +++ /dev/null @@ -1,81 +0,0 @@ -AI and machine learning bias has dangerous implications -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_goodbadugly.png?itok=ZxaimUWU) - -Image by : opensource.com - -Algorithms are everywhere in our world, and so is bias. From social media news feeds to streaming service recommendations to online shopping, computer algorithms--specifically, machine learning algorithms--have permeated our day-to-day world. As for bias, we need only examine the 2016 American election to understand how deeply--both implicitly and explicitly--it permeates our society as well. 
- -What's often overlooked, however, is the intersection between these two: bias in computer algorithms themselves. - -Contrary to what many of us might think, technology is not objective. AI algorithms and their decision-making processes are directly shaped by those who build them--what code they write, what data they use to "[train][1]" the machine learning models, and how they [stress-test][2] the models after they're finished. This means that the programmers' values, biases, and human flaws are reflected in the software. If I fed an image-recognition algorithm the faces of only white researchers in my lab, for instance, it [wouldn't recognize non-white faces as human][3]. Such a conclusion isn't the result of a "stupid" or "unsophisticated" AI, but to a bias in training data: a lack of diverse faces. This has dangerous consequences. - -There's no shortage of examples. [State court systems][4] across the country use "black box" algorithms to recommend prison sentences for convicts. [These algorithms are biased][5] against black individuals because of the data that trained them--so they recommend longer sentences as a result, thus perpetuating existing racial disparities in prisons. All this happens under the guise of objective, "scientific" decision-making. - -The United States federal government uses machine-learning algorithms to calculate welfare payouts and other types of subsidies. But [information on these algorithms][6], such as their creators and their training data, is extremely difficult to find--which increases the risk of public officials operating under bias and meting out systematically unfair payments. - -This list goes on. From Facebook news algorithms to medical care systems to police body cameras, we as a society are at great risk of inserting our biases--racism, sexism, xenophobia, socioeconomic discrimination, confirmation bias, and more--into machines that will be mass-produced and mass-distributed, operating under the veil of perceived technological objectivity. - -This must stop. - -While we should by no means halt research and development on artificial intelligence, we need to slow its development such that we tread carefully. The danger of algorithmic bias is already too great. - -## How can we fight algorithmic bias? - -One of the best ways to fight algorithmic bias is by vetting the training data fed into machine learning models themselves. As [researchers at Microsoft][2] point out, this can take many forms. - -The data itself might have a skewed distribution--for instance, programmers may have more data about United States-born citizens than immigrants, and about rich men than poor women. Such imbalances will cause an AI to make improper conclusions about how our society is in fact represented--i.e., that most Americans are wealthy white businessmen--simply because of the way machine-learning models make statistical correlations. - -It's also possible, even if men and women are equally represented in training data, that the representations themselves result in prejudiced understandings of humanity. For instance, if all the pictures of "male occupation" are of CEOs and all those of "female occupation" are of secretaries (even if more CEOs are in fact male than female), the AI could conclude that women are inherently not meant to be CEOs. 
- -We can imagine similar issues, for example, with law enforcement AIs that examine representations of criminality in the media, which dozens of studies have shown to be [egregiously slanted][7] towards black and Latino citizens. - -Bias in training data can take many other forms as well--unfortunately, more than can be adequately covered here. Nonetheless, training data is just one form of vetting; it's also important that AI models are "stress-tested" after they're completed to seek out prejudice. - -If we show an Indian face to our camera, is it appropriately recognized? Is our AI less likely to recommend a job candidate from an inner city than a candidate from the suburbs, even if they're equally qualified? How does our terrorism algorithm respond to intelligence on a white domestic terrorist compared to an Iraqi? Can our ER camera pull up medical records of children? - -These are obviously difficult issues to resolve in the data itself, but we can begin to identify and address them through comprehensive testing. - -## Why is open source well-suited for this task? - -Both open source technology and open source methodologies have extreme potential to help in this fight against algorithmic bias. - -Modern artificial intelligence is dominated by open source software, from TensorFlow to IBM Watson to packages like [scikit-learn][8]. The open source community has already proven extremely effective in developing robust and rigorously tested machine-learning tools, so it follows that the same community could effectively build anti-bias tests into that same software. - -Debugging tools like [DeepXplore][9], out of Columbia and Lehigh Universities, for example, make the AI stress-testing process extensive yet also easy to navigate. This and other projects, such as work being done at [MIT's Computer Science and Artificial Intelligence Lab][10], develop the agile and rapid prototyping the open source community should adopt. - -Open source technology has also proven to be extremely effective for vetting and sorting large sets of data. Nothing should make this more obvious than the domination of open source tools in the data analysis market (Weka, Rapid Miner, etc.). Tools for identifying data bias should be designed by the open source community, and those techniques should also be applied to the plethora of open training data sets already published on sites like [Kaggle][11]. - -The open source methodology itself is also well-suited for designing processes to fight bias. Making conversations about software open, democratized, and in tune with social good are pivotal to combating an issue that is partly caused by the very opposite--closed conversations, private software development, and undemocratized decision-making. If online communities, corporations, and academics can adopt these open source characteristics when approaching machine learning, fighting algorithmic bias should become easier. - -## How can we all get involved? - -Education is extremely important. We all know people who may be unaware of algorithmic bias but who care about its implications--for law, social justice, public policy, and more. It's critical to talk to those people and explain both how the bias is formed and why it matters because the only way to get these conversations started is to start them ourselves. - -For those of us who work with artificial intelligence in some capacity--as developers, on the policy side, through academic research, or in other capacities--these conversations are even more important. 
Those who are designing the artificial intelligence of tomorrow need to understand the extreme dangers that bias presents today; clearly, integrating anti-bias processes into software design depends on this very awareness. - -Finally, we should all build and strengthen open source community around ethical AI. Whether that means contributing to software tools, stress-testing machine learning models, or sifting through gigabytes of training data, it's time we leverage the power of open source methodology to combat one of the greatest threats of our digital age. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/how-open-source-can-fight-algorithmic-bias - -作者:[Justin Sherman][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/justinsherman -[1]:https://www.crowdflower.com/what-is-training-data/ -[2]:https://medium.com/microsoft-design/how-to-recognize-exclusion-in-ai-ec2d6d89f850 -[3]:https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms -[4]:https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/ -[5]:https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing -[6]:https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3012499 -[7]:https://www.hivlawandpolicy.org/sites/default/files/Race%20and%20Punishment-%20Racial%20Perceptions%20of%20Crime%20and%20Support%20for%20Punitive%20Policies%20%282014%29.pdf -[8]:http://scikit-learn.org/stable/ -[9]:https://arxiv.org/pdf/1705.06640.pdf -[10]:https://www.csail.mit.edu/research/understandable-deep-networks -[11]:https://www.kaggle.com/datasets diff --git a/sources/talk/20180115 Why DevSecOps matters to IT leaders.md b/sources/talk/20180115 Why DevSecOps matters to IT leaders.md new file mode 100644 index 0000000000..e731013e2b --- /dev/null +++ b/sources/talk/20180115 Why DevSecOps matters to IT leaders.md @@ -0,0 +1,86 @@ +Why DevSecOps matters to IT leaders +====== + +![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/TEP_SecurityTraining1_620x414_1014.png?itok=zqxqJGDG) + +If [DevOps][1] is ultimately about building better software, that means better-secured software, too. + +Enter the term "DevSecOps." Like any IT term, DevSecOps - a descendant of the better-established DevOps - could be susceptible to hype and misappropriation. But the term has real meaning for IT leaders who've embraced a culture of DevOps and the practices and tools that help deliver on its promise. + +Speaking of which: What does "DevSecOps" mean? + +"DevSecOps is a portmanteau of development, security, and operations," says Robert Reeves, CTO and co-founder at [Datical][2]. "It reminds us that security is just as important to our applications as creating them and deploying them to production." + +**[ Want DevOps advice from other CIOs? See our comprehensive resource, [DevOps: The IT Leader's Guide][3]. ]** + +One easy way to explain DevSecOps to non-technical people: It bakes security into the development process intentionally and earlier. + +"Security teams have historically been isolated from development teams - and each team has developed deep expertise in different areas of IT," [Red Hat][4] security strategist Kirsten Newcomer [told us][5] recently. "It doesn't need to be this way. 
Enterprises that care deeply about security and also care deeply about their ability to quickly deliver business value through software are finding ways to move security left in their application development lifecycles. They're adopting DevSecOps by integrating security practices, tooling, and automation throughout the CI/CD pipeline." + +"To do this well, they're integrating their teams - security professionals are embedded with application development teams from inception (design) through to production deployment," she says. "Both sides are seeing the value - each team expands their skill sets and knowledge base, making them more valuable technologists. DevOps done right - or DevSecOps - improves IT security." + +IT teams are tasked with delivering services faster and more frequently than ever before. DevOps can be a great enabler of this, in part because it can remove some of the traditional friction between development and operations teams that commonly surfaced when Ops was left out of the process until deployment time and Dev tossed its code over an invisible wall, never to manage it again, much less have any infrastructure responsibility. That kind of siloed approach causes problems, to put it mildly, in the digital age. According to Reeves, the same holds true if security exists in a silo. + +"We have adopted DevOps because it's proven to improve our IT performance by removing the barriers between development and operations," Reeves says. "Much like we shouldn't wait until the end of the deployment cycle to involve operations, we shouldn't wait until the end to involve security." + +### Why DevSecOps is here to stay + +It may be tempting to see DevSecOps as just another buzzword, but for security-conscious IT leaders, it's a substantive term: Security must be a first-class citizen in the software development pipeline, not something that gets bolted on as a final step before a deploy, or worse, as a team that gets scrambled only after an actual incident occurs. + +"DevSecOps is not just a buzzword - it is the current and future state of IT for multiple reasons," says George Gerchow, VP of security and compliance at [Sumo Logic][6]. "The most important benefit is the ability to bake security into development and operational processes to provide guardrails - not barriers - to achieve agility and innovation." + +Moreover, the appearance of the DevSecOps on the scene might be another sign that DevOps itself is maturing and digging deep roots inside IT. + +"The culture of DevOps in the enterprise is here to stay, and that means that developers are delivering features and updates to the production environment at an increasingly higher velocity, especially as the self-organizing teams become more comfortable with both collaboration and measurement of results," says Mike Kail, CTO and co-founder at [CYBRIC][7]. + +Teams and companies that have kept their old security practices in place while embracing DevOps are likely experiencing an increasing amount of pain managing security risks as they continue to deploy faster and more frequently. + +"The current, manual testing approaches of security continue to fall further and further behind." 
+ +"The current, manual testing approaches of security continue to fall further and further behind, and leveraging both automation and collaboration to shift security testing left into the software development life cycle, thus driving the culture of DevSecOps, is the only way for IT leaders to increase overall resiliency and delivery security assurance," Kail says. + +Shifting security testing left (earlier) benefits developers, too: Rather than finding out about a glaring hole in their code right before a new or updated service is set to deploy, they can identify and resolve potential issues during much earlier stages of development - often with little or no intervention from security personnel. + +"Done right, DevSecOps can ingrain security into the development lifecycle, empowering developers to more quickly and easily secure their applications without security disruptions," says Brian Wilson, chief information security officer at [SAS][8]. + +Wilson points to static (SAST) and source composition analysis (SCA) tools, integrated into a team's continuous delivery pipelines, as useful technologies that help make this possible by giving developers feedback about potential issues in their own code as well as vulnerabilities in third-party dependencies. + +"As a result, developers can proactively and iteratively mitigate appsec issues and rerun security scans without the need to involve security personnel," Wilson says. He notes, too, that DevSecOps can also help the Dev team streamline updates and patching. + +DevSecOps doesn't mean you no longer need security pros, just as DevOps doesn't mean you no longer need infrastructure experts; it just helps reduce the likelihood of flaws finding their way into production, or from slowing down deployments because they're caught late in the pipeline. + +"We're here if they have questions or need help, but having given developers the tools they need to secure their apps, we're less likely to find a showstopper issue during a penetration test," Wilson says. + +### DevSecOps meets Meltdown + +Sumo Logic's Gerchow shares a timely example of the DevSecOps culture in action: When the recent [Meltdown and Spectre][9] news hit, the team's DevSecOps approach enabled a rapid response to mitigate its risks without any noticeable disruption to internal or external customers, which Gerchow said was particularly important for the cloud-native, highly regulated company. + +The first step: Gerchow's small security team, which he notes also has development skills, was able to work with one of its main cloud vendors via Slack to ensure its infrastructure was completely patched within 24 hours. + +"My team then began OS-level fixes immediately with zero downtime to end users without having to open tickets and requests with engineering that would have meant waiting on a long change management process. All the changes were accounted for via automated Jira tickets opened via Slack and monitored through our logs and analytics solution," Gerchow explains. + +In essence, it sounds a whole lot like the culture of DevOps, matched with the right mix of people, processes, and tools, but it explicitly includes security as part of that culture and mix. + +"In traditional environments, it would have taken weeks or months to do this with downtime because all three development, operations, and security functions were siloed," Gerchow says. "With a DevSecOps process and mindset, end users get a seamless experience with easy communication and same-day fixes." 
+ + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2018/1/why-devsecops-matters-it-leaders + +作者:[Kevin Casey][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/kevin-casey +[1]:https://enterprisersproject.com/tags/devops +[2]:https://www.datical.com/ +[3]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ +[4]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA +[5]:https://enterprisersproject.com/article/2017/10/what-s-next-devops-5-trends-watch +[6]:https://www.sumologic.com/ +[7]:https://www.cybric.io/ +[8]:https://www.sas.com/en_us/home.html +[9]:https://www.redhat.com/en/blog/what-are-meltdown-and-spectre-heres-what-you-need-know?intcmp=701f2000000tjyaAAA diff --git a/sources/talk/20180117 How technology changes the rules for doing agile.md b/sources/talk/20180117 How technology changes the rules for doing agile.md new file mode 100644 index 0000000000..1b67935509 --- /dev/null +++ b/sources/talk/20180117 How technology changes the rules for doing agile.md @@ -0,0 +1,95 @@ +How technology changes the rules for doing agile +====== + +![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk) + +More companies are trying agile and [DevOps][1] for a clear reason: Businesses want more speed and more experiments - which lead to innovations and competitive advantage. DevOps helps you gain that speed. But doing DevOps in a small group or startup and doing it at scale are two very different things. Any of us who've worked in a cross-functional group of 10 people, come up with a great solution to a problem, and then tried to apply the same patterns across a team of 100 people know the truth: It often doesn't work. This path has been so hard, in fact, that it has been easy for IT leaders to put off agile methodology for another year. + +But that time is over. If you've tried and stalled, it's time to jump back in. + +Until now, DevOps required customized answers for many organizations - lots of tweaks and elbow grease. But today, [Linux containers][2] and Kubernetes are fueling standardization of DevOps tools and processes. That standardization will only accelerate. The technology we are using to practice the DevOps way of working has finally caught up with our desire to move faster. + +Linux containers and [Kubernetes][3] are changing the way teams interact. Moreover, on the Kubernetes platform, you can run any application you now run on Linux. What does that mean? You can run a tremendous number of enterprise apps (and handle even previously vexing coordination issues between Windows and Linux.) Finally, containers and Kubernetes will handle almost all of what you'll run tomorrow. They're being future-proofed to handle machine learning, AI, and analytics workloads - the next wave of problem-solving tools. + +**[ See our related article, [4 container adoption patterns: What you need to know][4]. ]** + +Think about machine learning, for example. Today, people still find the patterns in much of an enterprise's data. When machines find the patterns (think machine learning), your people will be able to act on them faster. With the addition of AI, machines can not only find but also act on patterns.
Today, with people doing everything, three weeks is an aggressive software development sprint cycle. With AI, machines can change code multiple times per second. Startups will use that capability - to disrupt you. + +Consider how fast you have to be to compete. If you can't make a leap of faith now to DevOps and a one week cycle, think of what will happen when that startup points its AI-fueled process at you. It's time to move to the DevOps way of working now, or get left behind as your competitors do. + +### How are containers changing how teams work? + +DevOps has frustrated many groups trying to scale this way of working to a bigger group. Many IT (and business) people are suspicious of agile: They've heard it all before - languages, frameworks, and now models (like DevOps), all promising to revolutionize application development and IT process. + +**[ Want DevOps advice from other CIOs? See our comprehensive resource, [DevOps: The IT Leader's Guide][5]. ]** + +It's not easy to "sell" quick development sprints to your stakeholders, either. Imagine if you bought a house this way. You're not going to pay a fixed amount to your builder anymore. Instead, you get something like: "We'll pour the foundation in 4 weeks and it will cost x. Then we'll frame. Then we'll do electrical. But we only know the timing on the foundation right now." People are used to buying homes with a price up front and a schedule. + +The challenge is that building software is not like building a house. The same builder builds thousands of houses that are all the same. Software projects are never the same. This is your first hurdle to get past. + +Dev and operations teams really do work differently: I know because I've worked on both sides. We incent them differently. Developers are rewarded for changing and creating, while operations pros are rewarded for reducing cost and ensuring security. We put them in different groups and generally minimize interaction. And the roles typically attract technical people who think quite differently. This situation sets IT up to fail. You have to be willing to break down these barriers. + +Think of what has traditionally happened. You throw pieces over the wall, then the business throws requirements over the wall because they are operating in "house-buying" mode: "We'll see you in 9 months." Developers build to those requirements and make changes as needed for technical constraints. Then they throw it over the wall to operations to "figure out how to run this." Operations then works diligently to make a slew of changes to align the software with their infrastructure. And what's the end result? + +More often than not, the end result isn't even recognizable to the business when they see it in its final glory. We've watched this pattern play out time and time again in our industry for the better part of two decades. It's time for a change. + +It's Linux containers that truly crack the problem - because containers close the gap between development and operations. They allow both teams to understand and design to all of the critical requirements, but still uniquely fulfill their team's responsibilities. Basically, we take out the telephone game between developers and operations. With containers, we can have smaller operations teams, even teams responsible for millions of applications, but development teams that can change software as quickly as needed. (In larger organizations, the desired pace may be faster than humans can respond on the operations side.) 
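
As a small illustration of that division of labor, here is a sketch that uses the official Kubernetes Python client to scale an application out. It assumes a reachable cluster and a valid kubeconfig, and the deployment name, namespace, and replica count are placeholders; the point is that an operator adjusts declared state without touching anything inside the developers' containers.

```python
# pip install kubernetes -- assumes a reachable cluster and a valid
# kubeconfig; "webapp" and the replica count are placeholder values.
from kubernetes import client, config

config.load_kube_config()      # load credentials from ~/.kube/config
apps = client.AppsV1Api()

# Declare the new desired state; Kubernetes schedules the containers.
apps.patch_namespaced_deployment_scale(
    name="webapp",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
print("webapp scaled to 5 replicas")
```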
+ +With containers, you're separating what is delivered from where it runs. Your operations teams are responsible for the host that will run the containers and the security footprint, and that's all. What does this mean? + +First, it means you can get going on DevOps now, with the team you have. That's right. Keep teams focused on the expertise they already have: With containers, just teach them the bare minimum of the required integration dependencies. + +If you try to retrain everyone, no one will be that good at anything. Containers let teams interact, but alongside a strong boundary, built around each team's strengths. Your devs know what needs to be consumed, but don't need to know how to make it run at scale. Ops teams know the core infrastructure, but don't need to know the minutiae of the app. Also, Ops teams can update apps to address new security implications before you become the next trending data breach story. + +Teaching a large IT organization of, say, 30,000 people both ops and dev skills? It would take you a decade. You don't have that kind of time. + +When people talk about "building new, cloud-native apps will get us out of this problem," think critically. You can build cloud-native apps in 10-person teams, but that doesn't scale for a Fortune 1000 company. You can't just build new microservices one by one until you're somehow not reliant on your existing team: You'll end up with a siloed organization. It's an alluring idea, but you can't count on these apps to redefine your business. I haven't met a company that could fund parallel development at this scale and succeed. IT budgets are already constrained; doubling or tripling them for an extended period of time just isn't realistic. + +### When the remarkable happens: Hello, velocity + +Linux containers were made to scale. Once you start to do so, [orchestration tools like Kubernetes come into play][6] - because you'll need to run thousands of containers. Applications won't consist of just a single container; they will depend on many different pieces, all running on containers, all running as a unit. If they don't, your apps won't run well in production. + +Think of how many small gears and levers come together to run your business: The same is true for any application. Developers are responsible for all the pulleys and levers in the application. (You could have an integration nightmare if developers don't own those pieces.) At the same time, your operations team is responsible for all the pulleys and levers that make up your infrastructure, whether on-premises or in the cloud. With Kubernetes as an abstraction, your operations team can give the application the fuel it needs to run - without being experts on all those pieces. + +Developers get to experiment. The operations team keeps infrastructure secure and reliable. This combination opens up the business to take small risks that lead to innovation. Instead of having to make only a couple of bet-the-farm size bets, real experimentation happens inside the company, incrementally and quickly. + +In my experience, this is where the remarkable happens inside organizations: Because people say "How do we change planning to actually take advantage of this ability to experiment?" It forces agile planning. + +For example, KeyBank, which uses a DevOps model, containers, and Kubernetes, now deploys code every day. (Watch this [video][7] in which John Rzeszotarski, director of Continuous Delivery and Feedback at KeyBank, explains the change.)
Similarly, Macquarie Bank uses DevOps and containers to put something in production every day. + +Once you push software every day, it changes every aspect of how you plan - and [accelerates the rate of change to the business][8]. "An idea can get to a customer in a day," says Luis Uguina, CDO of Macquarie's banking and financial services group. (See this [case study][9] on Red Hat's work with Macquarie Bank). + +### The right time to build something great + +The Macquarie example demonstrates the power of velocity. How would that change your approach to your business? Remember, Macquarie is not a startup. This is the type of disruptive power that CIOs face, not only from new market entrants but also from established peers. + +The developer freedom also changes the talent equation for CIOs running agile shops. Suddenly, individuals within huge companies (even those not in the hottest industries or geographies) can have great impact. Macquarie uses this dynamic as a recruiting tool, promising developers that all new hires will push something live within the first week. + +At the same time, in this day of cloud-based compute and storage power, we have more infrastructure available than ever. That's fortunate, considering the [leaps that machine learning and AI tools will soon enable][10]. + +This all adds up to this being the right time to build something great. Given the pace of innovation in the market, you need to keep building great things to keep customers loyal. So if you've been waiting to place your bet on DevOps, now is the right time. Containers and Kubernetes have changed the rules - in your favor. + +**Want more wisdom like this, IT leaders? [Sign up for our weekly email newsletter][11].** + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2018/1/how-technology-changes-rules-doing-agile + +作者:[Matt Hicks][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/matt-hicks +[1]:https://enterprisersproject.com/tags/devops +[2]:https://www.redhat.com/en/topics/containers?intcmp=701f2000000tjyaAAA +[3]:https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA +[4]:https://enterprisersproject.com/article/2017/8/4-container-adoption-patterns-what-you-need-know?sc_cid=70160000000h0aXAAQ +[5]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ +[6]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity +[7]:https://www.redhat.com/en/about/videos/john-rzeszotarski-keybank-red-hat-summit-2017?intcmp=701f2000000tjyaAAA +[8]:https://enterprisersproject.com/article/2017/11/dear-cios-stop-beating-yourselves-being-behind-transformation +[9]:https://www.redhat.com/en/resources/macquarie-bank-case-study?intcmp=701f2000000tjyaAAA +[10]:https://enterprisersproject.com/article/2018/1/4-ai-trends-watch +[11]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ diff --git a/sources/talk/20180117 How to get into DevOps.md b/sources/talk/20180117 How to get into DevOps.md new file mode 100644 index 0000000000..09e50ae4f2 --- /dev/null +++ b/sources/talk/20180117 How to get into DevOps.md @@ -0,0 +1,143 @@ +How to get into DevOps +====== 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E) + +I've observed a sharp uptick in developers and systems administrators interested in "getting into DevOps" within the past year or so. This pattern makes sense: In an age in which a single developer can spin up a globally distributed infrastructure for an application with a few dollars and a few API calls, the gap between development and systems administration is narrower than ever. Although I've seen plenty of blog posts and articles about cool DevOps tools and thoughts to think about, I've seen less content offering pointers and suggestions for people looking to get into this work. + +My goal with this article is to draw what that path looks like. My thoughts are based upon several interviews, chats, late-night discussions on [reddit.com/r/devops][1], and random conversations, likely over beer and delicious food. I'm also interested in hearing feedback from those who have made the jump; if you have, please reach out through [my blog][2], [Twitter][3], or in the comments below. I'd love to hear your thoughts and stories. + +### Olde world IT + +Understanding history is key to understanding the future, and DevOps is no exception. To understand the pervasiveness and popularity of the DevOps movement, understanding what IT was like in the late '90s and most of the '00s is helpful. This was my experience. + +I started my career in late 2006 as a Windows systems administrator in a large, multi-national financial services firm. In those days, adding new compute involved calling Dell (or, in our case, CDW) and placing a multi-hundred-thousand-dollar order of servers, networking equipment, cables, and software, all destined for your on- and offsite datacenters. Although VMware was still convincing companies that using virtual machines was, indeed, a cost-effective way of hosting their "performance-sensitive" applications, many companies, including mine, pledged allegiance to running applications on their physical hardware. + +Our technology department had an entire group dedicated to datacenter engineering and operations, and its job was to negotiate our leasing rates down to some slightly less absurd monthly rate and ensure that our systems were being cooled properly (an exponentially difficult problem if you have enough equipment). If the group was lucky/wealthy enough, the offshore datacenter crew knew enough about all of our server models to not accidentally pull the wrong thing during after-hours trading. Amazon Web Services and Rackspace were slowly beginning to pick up steam, but were far from critical mass. + +In those days, we also had teams dedicated to ensuring that the operating systems and software running on top of that hardware worked when they were supposed to. The engineers were responsible for designing reliable architectures for patching, monitoring, and alerting these systems as well as defining what the "gold image" looked like. Most of this work was done with a lot of manual experimentation, and the extent of most tests was writing a runbook describing what you did, and ensuring that what you did actually did what you expected it to do after following said runbook. This was important in a large organization like ours, since most of the level 1 and 2 support was offshore, and the extent of their training ended with those runbooks. + +(This is the world that your author lived in for the first three years of his career.
My dream back then was to be the one who made the gold standard!) + +Software releases were another beast altogether. Admittedly, I didn't gain a lot of experience working on this side of the fence. However, from stories that I've gathered (and recent experience), much of the daily grind for software development during this time went something like this: + + * Developers wrote code as specified by the technical and functional requirements laid out by business analysts from meetings they weren't invited to. + * Optionally, developers wrote unit tests for their code to ensure that it didn't do anything obviously crazy, like try to divide by zero without throwing an exception. + * When done, developers would mark their code as "Ready for QA." A quality assurance person would pick up the code and run it in their own environment, which might or might not be like production or even the environment the developer used to test their own code against. + * Failures would get sent back to the developers within "a few days or weeks" depending on other business activities and where priorities fell. + + + +Although sysadmins and developers didn't often see eye to eye, the one thing they shared a common hatred for was "change management." This was a composition of highly regulated (and, in the case of my employer at the time, highly necessary) rules and procedures governing when and how technical changes happened in a company. Most companies followed [ITIL][4] practices, which, in a nutshell, asked a lot of questions around why, when, where, and how things happened and provided a process for establishing an audit trail of the decisions that led up to those answers. + +As you could probably gather from my short history lesson, many, many things were done manually in IT. This led to a lot of mistakes. Lots of mistakes led to lots of lost revenue. Change management's job was to minimize those lost revenues; this usually came in the form of releases only every two weeks and changes to servers, regardless of their impact or size, queued up to occur between Friday at 4 p.m. and Monday at 5:59 a.m. (Ironically, this batching of work led to even more mistakes, usually more serious ones.) + +### DevOps isn't a Tiger Team + +You might be thinking "What is Carlos going on about, and when is he going to talk about Ansible playbooks?" I love Ansible tons, but hang on; this is important. + +Have you ever been assigned to a project where you had to interact with the "DevOps" team? Or did you have to rely on a "configuration management" or "CI/CD" team to ensure your pipeline was set up properly? Have you had to attend meetings about your release and what it pertains to--weeks after the work was marked "code complete"? + +If so, then you're reliving history. All of that comes from all of the above. + +[Silos form][5] out of an instinctual draw to working with people like ourselves. Naturally, it's no surprise that this human trait also manifests in the workplace. I even saw this play out at a 250-person startup where I used to work. When I started, developers all worked in common pods and collaborated heavily with each other. As the codebase grew in complexity, developers who worked on common features naturally aligned with each other to try and tackle the complexity within their own feature. Soon afterwards, feature teams were officially formed. + +Sysadmins and developers at many of the companies I worked at not only formed natural silos like this, but also fiercely competed with each other.
Developers were mad at sysadmins when their environments were broken. Developers were mad at sysadmins when their environments were too locked down. Sysadmins were mad that developers were breaking their environments in arbitrary ways all of the time. Sysadmins were mad at developers for asking for way more computing power than they needed. Neither side understood each other, and worse yet, neither side wanted to. + +Most developers were uninterested in the basics of operating systems, kernels, or, in some cases, computer hardware. As well, most sysadmins, even Linux sysadmins, took a 10-foot pole approach to learning how to code. They tried a bit of C in college, hated it, and never wanted to touch an IDE again. Consequently, developers threw their environment problems over the wall to sysadmins, sysadmins prioritized them with the hundreds of other things that were thrown over the wall to them, and everyone busy-waited angrily while hating each other. The purpose of DevOps was to put an end to this. + +DevOps isn't a team. CI/CD isn't a group in Jira. DevOps is a way of thinking. According to the movement, in an ideal world, developers, sysadmins, and business stakeholders would be working as one team. While they might not know everything about each other's worlds, not only do they all know enough to understand each other and their backlogs, but they can, for the most part, speak the same language. + +This is the basis behind having all infrastructure and business logic be in code and subject to the same deployment pipelines as the software that sits on top of it. Everybody is winning because everyone understands each other. This is also the basis behind the rise of other tools like chatbots and easily accessible monitoring and graphing. + +[Adam Jacob said][6] it best: "DevOps is the word we will use to describe the operational side of the transition to enterprises being software led." + +### What do I need to know to get into DevOps? + +I'm commonly asked this question, and the answer, like most open-ended questions like this, is: It depends. + +At the moment, the "DevOps engineer" varies from company to company. Smaller companies that have plenty of software developers but fewer folks that understand infrastructure will likely look for people with more experience administrating systems. Other, usually larger and/or older companies that have a solid sysadmin organization will likely optimize for something closer to a [Google site reliability engineer][7], i.e. "a software engineer to design an operations function." This isn't written in stone, however, as, like any technology job, the decision largely depends on the hiring manager sponsoring it. + +That said, we typically look for engineers who are interested in learning more about: + + * How to administrate and architect secure and scalable cloud platforms (usually on AWS, but Azure, Google Cloud Platform, and PaaS providers like DigitalOcean and Heroku are popular too); + * How to build and optimize deployment pipelines and deployment strategies on popular [CI/CD][8] tools like Jenkins, Go continuous delivery, and cloud-based ones like Travis CI or CircleCI; + * How to monitor, log, and alert on changes in your system with time-series-based tools like Kibana, Grafana, Splunk, Loggly, or Logstash; and + * How to maintain infrastructure as code with configuration management tools like Chef, Puppet, or Ansible, as well as deploy said infrastructure with tools like Terraform or CloudFormation. (A small example of this kind of scripting follows this list.)
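
As promised above, here is a small sketch of this kind of scripting, using boto3, the AWS SDK for Python, to inventory running EC2 instances. It assumes AWS credentials are already configured, and the region and tag name are arbitrary examples.

```python
# pip install boto3 -- assumes credentials via env vars, ~/.aws/credentials,
# or an instance role; the region is an arbitrary example.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Page through every running instance and print a quick inventory.
paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            print(instance["InstanceId"], instance["InstanceType"],
                  tags.get("Name", "<unnamed>"))
```

Nothing here is exotic, and that's the point: a few dozen lines of Python routinely replace what used to be a runbook and a ticket queue.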
+ + + +Containers are becoming increasingly popular as well. Despite the [beef against the status quo][9] surrounding Docker at scale, containers are quickly becoming a great way of achieving an extremely high density of services and applications running on fewer systems while increasing their reliability. (Orchestration tools like Kubernetes or Mesos can spin up new containers in seconds if the host they're being served by fails.) Given this, having knowledge of Docker or rkt and an orchestration platform will go a long way. + +If you're a systems administrator that's looking to get into DevOps, you will also need to know how to write code. Python and Ruby are popular languages for this purpose, as they are portable (i.e., can be used on any operating system), fast, and easy to read and learn. They also form the underpinnings of the industry's most popular configuration management tools (Python for Ansible, Ruby for Chef and Puppet) and cloud API clients (Python and Ruby are commonly used for AWS, Azure, and Google Cloud Platform clients). + +If you're a developer looking to make this change, I highly recommend learning more about Unix, Windows, and networking fundamentals. Even though the cloud abstracts away many of the complications of administrating a system, debugging slow application performance is aided greatly by knowing how these things work. I've included a few books on this topic in the next section. + +If this sounds overwhelming, you aren't alone. Fortunately, there are plenty of small projects to dip your feet into. One such toy project is Gary Stafford's Voter Service, a simple Java-based voting platform. We ask our candidates to take the service from GitHub to production infrastructure through a pipeline. One can combine that with Rob Mile's awesome DevOps Tutorial repository to learn about ways of doing this. + +Another great way of becoming familiar with these tools is taking popular services and setting up an infrastructure for them using nothing but AWS and configuration management. Set it up manually first to get a good idea of what to do, then replicate what you just did using nothing but CloudFormation (or Terraform) and Ansible. Surprisingly, this is a large part of the work that we infrastructure devs do for our clients on a daily basis. Our clients find this work to be highly valuable! + +### Books to read + +If you're looking for other resources on DevOps, here are some theory and technical books that are worth a read. + +#### Theory books + + * [The Phoenix Project][10] by Gene Kim. This is a great book that covers much of the history I explained earlier (with much more color) and describes the journey to a lean company running on agile and DevOps. + * [Driving Technical Change][11] by Terrance Ryan. Awesome little book on common personalities within most technology organizations and how to deal with them. This helped me out more than I expected. + * [Peopleware][12] by Tom DeMarco and Tim Lister. A classic on managing engineering organizations. A bit dated, but still relevant. + * [Time Management for System Administrators][13] by Tom Limoncelli. While this is heavily geared towards sysadmins, it provides great insight into the life of a systems administrator at most large organizations. If you want to learn more about the war between sysadmins and developers, this book might explain more. + * [The Lean Startup][14] by Eric Ries. Describes how Eric's 3D avatar company, IMVU, discovered how to work lean, fail fast, and find profit faster. 
+ * [Lean Enterprise][15] by Jez Humble and friends. This book is an adaptation of The Lean Startup for the enterprise. Both are great reads and do a good job of explaining the business motivation behind DevOps. + * [Infrastructure As Code][16] by Kief Morris. Awesome primer on, well, infrastructure as code! It does a great job of describing why it's essential for any business to adopt this for their infrastructure. + * [Site Reliability Engineering][17] by Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy. A book explaining how Google does SRE, also known as "DevOps before DevOps was a thing." Provides interesting opinions on how to handle uptime, latency, and keeping engineers happy. + + + +#### Technical books + +If you're looking for books that'll take you straight to code, you've come to the right section. + + * [TCP/IP Illustrated][18] by the late W. Richard Stevens. This is the classic (and, arguably, complete) tome on the fundamental networking protocols, with special emphasis on TCP/IP. If you've heard of Layers 1, 2, 3, and 4 and are interested in learning more, you'll need this book. + * [UNIX and Linux System Administration Handbook][19] by Evi Nemeth, Trent Hein, and Ben Whaley. A great primer on how Linux and Unix work and how to navigate around them. + * [Learn Windows Powershell In A Month of Lunches][20] by Don Jones and Jeffrey Hicks. If you're doing anything automated with Windows, you will need to learn how to use PowerShell. This is the book that will help you do that. Don Jones is a well-known MVP in this space. + * Practically anything by [James Turnbull][21]. He puts out great technical primers on popular DevOps-related tools. + + + +From companies deploying everything to bare metal (there are plenty that still do, for good reasons) to trailblazers doing everything serverless, DevOps is likely here to stay for a while. The work is interesting, the results are impactful, and, most important, it helps bridge the gap between technology and business. It's a wonderful thing to see. + +Originally published at [Neurons Firing on a Keyboard][22], CC-BY-SA.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/getting-devops + +作者:[Carlos Nunez][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/carlosonunez +[1]:https://www.reddit.com/r/devops/ +[2]:https://carlosonunez.wordpress.com/ +[3]:https://twitter.com/easiestnameever +[4]:https://en.wikipedia.org/wiki/ITIL +[5]:https://www.psychologytoday.com/blog/time-out/201401/getting-out-your-silo +[6]:https://twitter.com/adamhjk/status/572832185461428224 +[7]:https://landing.google.com/sre/interview/ben-treynor.html +[8]:https://en.wikipedia.org/wiki/CI/CD +[9]:https://thehftguy.com/2016/11/01/docker-in-production-an-history-of-failure/ +[10]:https://itrevolution.com/book/the-phoenix-project/ +[11]:https://pragprog.com/book/trevan/driving-technical-change +[12]:https://en.wikipedia.org/wiki/Peopleware:_Productive_Projects_and_Teams +[13]:http://shop.oreilly.com/product/9780596007836.do +[14]:http://theleanstartup.com/ +[15]:https://info.thoughtworks.com/lean-enterprise-book.html +[16]:http://infrastructure-as-code.com/book/ +[17]:https://landing.google.com/sre/book.html +[18]:https://en.wikipedia.org/wiki/TCP/IP_Illustrated +[19]:http://www.admin.com/ +[20]:https://www.manning.com/books/learn-windows-powershell-in-a-month-of-lunches-third-edition +[21]:https://jamesturnbull.net/ +[22]:https://carlosonunez.wordpress.com/2017/03/02/getting-into-devops/ diff --git a/sources/talk/20180117 Some thoughts on Spectre and Meltdown.md b/sources/talk/20180117 Some thoughts on Spectre and Meltdown.md new file mode 100644 index 0000000000..ae8ce0d204 --- /dev/null +++ b/sources/talk/20180117 Some thoughts on Spectre and Meltdown.md @@ -0,0 +1,104 @@ +### Some thoughts on Spectre and Meltdown + +By now I imagine that all of my regular readers, and a large proportion of the rest of the world, have heard of the security issues dubbed "Spectre" and "Meltdown". While there have been some excellent technical explanations of these issues from several sources — I particularly recommend the [Project Zero][3] blog post — I have yet to see anyone really put these into a broader perspective; nor have I seen anyone make a serious attempt to explain these at a level suited for a wide audience. While I have not been involved with handling these issues directly, I think it's time for me to step up and provide both a wider context and a more broadly understandable explanation. + +The story of these attacks starts in late 2004. I had submitted my doctoral thesis and had a few months before flying back to Oxford for my defense, so I turned to some light reading: Intel's latest "Optimization Manual", full of tips on how to write faster code. (Eking out every last nanosecond of performance has long been an interest of mine.) Here I found an interesting piece of advice: On Intel CPUs with "Hyper-Threading", a common design choice (aligning the top of thread stacks on page boundaries) should be avoided because it would result in some resources being overused and others being underused, with a resulting drop in performance.
This started me thinking: If two programs can hurt each other's performance by accident, one should be able to _measure_ whether its performance is being hurt by the other; if it can measure whether its performance is being hurt by people not following Intel's optimization guidelines, it should be able to measure whether its performance is being hurt by other patterns of resource usage; and if it can measure that, it should be able to make deductions about what the other program is doing. + +It took me a few days to convince myself that information could be stolen in this manner, but within a few weeks I was able to steal an [RSA][4] private key from [OpenSSL][5]. Then started the lengthy process of quietly notifying Intel and all the major operating system vendors; and on Friday the 13th of May 2005 I presented [my paper][6] describing this new attack at [BSDCan][7] 2005 — the first attack of this type exploiting how a running program causes changes to the microarchitectural state of a CPU. Three months later, the team of Osvik, Shamir, and Tromer published [their work][8], which showed how the same problem could be exploited to steal [AES][9] keys. + +Over the years there have been many attacks which exploit different aspects of CPU design — exploiting L1 data cache collisions, exploiting L1 code cache collisions, exploiting L2 cache collisions, exploiting the TLB, exploiting branch prediction, etc. — but they have all followed the same basic mechanism: A program does something which interacts with the internal state of a CPU, and either we can measure that internal state (the more common case) or we can set up that internal state before the program runs in a way which makes the program faster or slower. These new attacks use the same basic mechanism, but exploit an entirely new angle. But before I go into details, let me go back to basics for a moment. + +#### Understanding the attacks + +These attacks exploit something called a "side channel". What's a side channel? It's when information is revealed as an inadvertent side effect of what you're doing. For example, in the movie [2001][10], Bowman and Poole enter a pod to ensure that the HAL 9000 computer cannot hear their conversation — but fail to block the _optical_ channel which allows Hal to read their lips. Side channels are related to a concept called "covert channels": Where side channels are about stealing information which was not intended to be conveyed, covert channels are about conveying information which someone is trying to prevent you from sending. The famous case of a [Prisoner of War][11] blinking the word "TORTURE" in Morse code is an example of using a covert channel to convey information. + +Another example of a side channel — and I'll be elaborating on this example later, so please bear with me if it seems odd — is as follows: I want to know when my girlfriend's passport expires, but she won't show me her passport (she complains that it has a horrible photo) and refuses to tell me the expiry date. I tell her that I'm going to take her to Europe on vacation in August and watch what happens: If she runs out to renew her passport, I know that it will expire before August; while if she doesn't get her passport renewed, I know that it will remain valid beyond that date. Her desire to ensure that her passport would be valid inadvertently revealed to me some information: Whether its expiry date was before or after August.
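
For readers who want to see a side channel in code, here is a toy demonstration, separate from the attacks described here: a string comparison that returns at the first mismatching character leaks, through timing alone, how much of a guess is correct. Microarchitectural attacks measure far subtler effects, but the underlying principle is the same: a secret is inferred from how long an operation takes. The secret value and trial count below are arbitrary.

```python
import time

SECRET = "hunter2"  # the value the attacker wants to learn

def insecure_compare(a: str, b: str) -> bool:
    """Classic timing-leaky comparison: bail out at the first mismatch."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def time_guess(guess: str, trials: int = 200_000) -> float:
    start = time.perf_counter()
    for _ in range(trials):
        insecure_compare(SECRET, guess)
    return time.perf_counter() - start

# Guesses sharing a longer prefix with the secret take measurably longer
# to reject, because the loop runs further before the early exit.
for guess in ["zzzzzzz", "hzzzzzz", "hunzzzz", "hunterz"]:
    print(f"{guess}: {time_guess(guess):.4f}s")
```

Repeating the measurement one character at a time recovers the whole secret, which is why comparison routines intended for secrets (Python's `hmac.compare_digest`, for instance) are written to take constant time.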
+ +Over the past 12 years, people have gotten reasonably good at writing programs which avoid leaking information via side channels; but as the saying goes, if you make something idiot-proof, the world will come up with a better idiot; in this case, the better idiot is newer and faster CPUs. The Spectre and Meltdown attacks make use of something called "speculative execution". This is a mechanism whereby, if a CPU isn't sure what you want it to do next, it will _speculatively_ perform some action. The idea here is that if it guessed right, it will save time later — and if it guessed wrong, it can throw away the work it did and go back to doing what you asked for. As long as it sometimes guesses right, this saves time compared to waiting until it's absolutely certain about what it should be doing next. Unfortunately, as several researchers recently discovered, it can accidentally leak some information during this speculative execution. + +Going back to my analogy: I tell my girlfriend that I'm going to take her on vacation in June, but I don't tell her where yet; however, she knows that it will either be somewhere within Canada (for which she doesn't need a passport, since we live in Vancouver) or somewhere in Europe. She knows that it takes time to get a passport renewed, so she checks her passport and (if it was about to expire) gets it renewed just in case I later reveal that I'm going to take her to Europe. If I tell her later that I'm only taking her to Ottawa — well, she didn't need to renew her passport after all, but in the meantime her behaviour has already revealed to me whether her passport was about to expire. This is what Google refers to as "variant 1" of the Spectre vulnerability: Even though she didn't need her passport, she made sure it was still valid _just in case_ she was going to need it. + +"Variant 2" of the Spectre vulnerability also relies on speculative execution but in a more subtle way. Here, instead of the CPU knowing that there are two possible execution paths and choosing one (or potentially both!) to speculatively execute, the CPU has no idea what code it will need to execute next. However, it has been keeping track and knows what it did the last few times it was in the same position, and it makes a guess — after all, there's no harm in guessing since if it guesses wrong it can just throw away the unneeded work. Continuing our analogy, a "Spectre version 2" attack on my girlfriend would be as follows: I spend a week talking about how Oxford is a wonderful place to visit and I really enjoyed the years I spent there, and then I tell her that I want to take her on vacation. She very reasonably assumes that — since I've been talking about Oxford so much — I must be planning on taking her to England, and runs off to check her passport and potentially renew it... but in fact I tricked her and I'm only planning on taking her to Ottawa. + +This "version 2" attack is far more powerful than "version 1" because it can be used to exploit side channels present in many different locations; but it is also much harder to exploit and depends intimately on details of CPU design, since the attacker needs to make the CPU guess the correct (wrong) location to anticipate that it will be visiting next. + +Now we get to the third attack, dubbed "Meltdown". This one is a bit weird, so I'm going to start with the analogy here: I tell my girlfriend that I want to take her to the Korean peninsula.
She knows that her passport is valid for long enough; but she immediately runs off to check that her North Korean visa hasn't expired. Why does she have a North Korean visa, you ask? Good question. She doesn't — but she runs off to check its expiry date anyway! Because she doesn't have a North Korean visa, she (somehow) checks the expiry date on  _someone else's_  North Korean visa, and then (if it is about to expire) runs out to renew it — and so by telling her that I want to take her to Korea for a vacation  _I find out something she couldn't have told me even if she wanted to_ . If this sounds like we're falling down a [Dodgsonian][12] rabbit hole... well, we are. The most common reaction I've heard from security people about this is "Intel CPUs are doing  _what???_ ", and it's not by coincidence that one of the names suggested for an early Linux patch was Forcefully Unmap Complete Kernel With Interrupt Trampolines (FUCKWIT). (For the technically-inclined: Intel CPUs continue speculative execution through faults, so the fact that a page of memory cannot be accessed does not prevent it from, well, being accessed.) + +#### How users can protect themselves + +So that's what these vulnerabilities are all about; but what can regular users do to protect themselves? To start with, apply the damn patches. For the next few months there are going to be patches to operating systems; patches to individual applications; patches to phones; patches to routers; patches to smart televisions... if you see a notification saying "there are updates which need to be installed", **install the updates**. (However, this doesn't mean that you should be stupid: If you get an email saying "click here to update your system", it's probably malware.) These attacks are complicated, and need to be fixed in many ways in many different places, so  _each individual piece of software_  may have many patches as the authors work their way through from fixing the most easily exploited vulnerabilities to the more obscure theoretical weaknesses. + +What else can you do? Understand the implications of these vulnerabilities. Intel caught some undeserved flak for stating that they believe "these exploits do not have the potential to corrupt, modify or delete data"; in fact, they're quite correct in a direct sense, and this distinction is very relevant. A side channel attack inherently  _reveals information_ , but it does not by itself allow someone to take control of a system. (In some cases side channels may make it easier to take advantage of other bugs, however.) As such, it's important to consider what information could be revealed: Even if you're not working on top secret plans for responding to a ballistic missile attack, you've probably accessed password-protected websites (Facebook, Twitter, Gmail, perhaps your online banking...) and possibly entered your credit card details somewhere today. Those passwords and credit card numbers are what you should worry about. + +Now, in order for you to be attacked, some code needs to run on your computer. The most likely vector for such an attack is through a website — and the more shady the website the more likely you'll be attacked. (Why? Because if the owners of a website are already doing something which is illegal — say, selling fake prescription drugs — they're far more likely to agree if someone offers to pay them to add some "harmless" extra code to their site.) 
You're not likely to get attacked by visiting your bank's website; but if you make a practice of visiting the less reputable parts of the World Wide Web, it's probably best to not log in to your bank's website at the same time. Remember, this attack won't allow someone to take over your computer — all they can do is get access to information which is in your computer's memory  _at the time they carry out the attack_ . + +For greater paranoia, avoid accessing suspicious websites  _after_  you handle any sensitive information (including accessing password-protected websites or entering your credit card details). It's possible for this information to linger in your computer's memory even after it isn't needed — it will stay there until it's overwritten, usually because the memory is needed for something else — so if you want to be safe you should reboot your computer in between. + +For maximum paranoia: Don't connect to the internet from systems you care about. In the industry we refer to "airgapped" systems; this is a reference back to the days when connecting to a network required wires, so if there was a literal gap with just air between two systems, there was no way they could communicate. These days, with ubiquitous wifi (and in many devices, access to mobile phone networks) the terminology is in need of updating; but if you place devices into "airplane" mode it's unlikely that they'll be at any risk. Mind you, they won't be nearly as useful — there's almost always a tradeoff between security and usability, but if you're handling something really sensitive, you may want to consider this option. (For my [Tarsnap online backup service][13] I compile and cryptographically sign the packages on a system which has never been connected to the Internet. Before I turned it on for the first time, I opened up the case and pulled out the wifi card; and I copy files on and off the system on a USB stick. Tarsnap's slogan, by the way, is "Online backups  _for the truly paranoid_ ".) + +#### How developers can protect everyone + +The patches being developed and distributed by operating systems — including microcode updates from Intel — will help a lot, but there are still steps individual developers can take to reduce the risk of their code being exploited. + +First, practice good "cryptographic hygiene": Information which isn't in memory can't be stolen this way. If you have a set of cryptographic keys, load only the keys you need for the operations you will be performing. If you take a password, use it as quickly as possible and then immediately wipe it from memory. This [isn't always possible][14], especially if you're using a high level language which doesn't give you access to low level details of pointers and memory allocation; but there's at least a chance that it will help. + +Second, offload sensitive operations — especially cryptographic operations — to other processes. The security community has become more aware of [privilege separation][15] over the past two decades; but we need to go further than this, to separation of  _information_  — even if two processes need exactly the same operating system permissions, it can be valuable to keep them separate in order to avoid information from one process leaking via a side channel attack against the other. + +One common design paradigm I've seen recently is to "[TLS][16] all the things", with a wide range of applications gaining understanding of the TLS protocol layer. 
This is something I've objected to in the past as it results in unnecessary exposure of applications to vulnerabilities in the TLS stacks they use; side channel attacks provide another reason, namely the unnecessary exposure of the TLS stack to side channels in the application. If you want to add TLS to your application, don't add it to the application itself; rather, use a separate process to wrap and unwrap connections with TLS, and have your application take unencrypted connections over a local (unix) socket or a loopback TCP/IP connection. + +Separating code into multiple processes isn't always practical, however, for reasons of both performance and practical matters of code design. I've been considering (since long before these issues became public) another form of mitigation: Userland page unmapping. In many cases programs have data structures which are "private" to a small number of source files; for example, a random number generator will have internal state which is only accessed from within a single file (with appropriate functions for inputting entropy and outputting random numbers), and a hash table library would have a data structure which is allocated, modified, accessed, and finally freed only by that library via appropriate accessor functions. If these memory allocations can be corralled into a subset of the system address space, and the pages in question only mapped upon entering those specific routines, it could dramatically reduce the risk of information being revealed as a result of vulnerabilities which — like these side channel attacks — are limited to leaking information but cannot be (directly) used to execute arbitrary code. + +Finally, developers need to get better at providing patches: Not just to get patches out promptly, but also to get them into users' hands _and to convince users to install them_. That last part requires building up trust; as I wrote last year, one of the worst problems facing the industry is the [mixing of security and non-security updates][17]. If users are worried that they'll lose features (or gain "features" they don't want), they won't install the updates you recommend; it's essential to give users the option of getting security patches without worrying about whether anything else they rely upon will change. + +#### What's next? + +So far we've seen three attacks demonstrated: Two variants of Spectre and one form of Meltdown. Get ready to see more over the coming months and years. Off the top of my head, there are four vulnerability classes I expect to see demonstrated before long: + +* Attacks on [p-code][1] interpreters. Google's "Variant 1" demonstrated an attack where a conditional branch was mispredicted resulting in a bounds check being bypassed; but the same problem could easily occur with mispredicted branches in a switch statement resulting in the wrong _operation_ being performed on a valid address. On p-code machines which have an opcode for "jump to this address, which contains machine code" (not entirely unlikely in the case of bytecode machines which automatically transpile "hot spots" into host machine code), this could very easily be exploited as a "speculatively execute attacker-provided code" mechanism. + +* Structure deserializing. This sort of code handles attacker-provided inputs which often include the lengths or numbers of fields in a structure, along with bounds checks to ensure the validity of the serialized structure.
This is prime territory for a CPU to speculatively reach past the end of the input provided if it mispredicts the layout of the structure. + +* Decompressors, especially in HTTP(S) stacks. Data decompression inherently involves a large number of steps of "look up X in a table to get the length of a symbol, then adjust pointers and perform more memory accesses" — exactly the sort of behaviour which can leak information via cache side channels if a branch mispredict results in X being speculatively looked up in the wrong table. Add attacker-controlled inputs to HTTP stacks and the fact that services speaking HTTP are often required to perform request authentication and/or include TLS stacks, and you have all the conditions needed for sensitive information to be leaked. + +* Remote attacks. As far as I'm aware, all of the microarchitectural side channels demonstrated over the past 14 years have made use of "attack code" running on the system in question to observe the state of the caches or other microarchitectural details in order to extract the desired data. This makes attacks far easier, but should not be considered to be a prerequisite! Remote timing attacks are feasible, and I am confident that we will see a demonstration of "innocent" code being used for the task of extracting the microarchitectural state information before long. (Indeed, I think it is very likely that [certain people][2] are already making use of such remote microarchitectural side channel attacks.) + +#### Final thoughts on vulnerability disclosure + +The way these issues were handled was a mess; frankly, I expected better of Google, I expected better of Intel, and I expected better of the Linux community. When I found that Hyper-Threading was easily exploitable, I spent five months notifying the security community and preparing everyone for my announcement of the vulnerability; but when the embargo ended at midnight UTC and FreeBSD published its advisory a few minutes later, the broader world was taken entirely by surprise. Nobody knew what was coming aside from the people who needed to know; and the people who needed to know had months of warning. + +Contrast that with what happened this time around. Google discovered a problem and reported it to Intel, AMD, and ARM on June 1st. Did they then go around contacting all of the operating systems which would need to work on fixes for this? Not even close. FreeBSD was notified  _the week before Christmas_ , over six months after the vulnerabilities were discovered. Now, FreeBSD can occasionally respond very quickly to security vulnerabilities, even when they arise at inconvenient times — on November 30th 2009 a [vulnerability was reported][18] at 22:12 UTC, and on December 1st I [provided a patch][19] at 01:20 UTC, barely over 3 hours later — but that was an extremely simple bug which needed only a few lines of code to fix; the Spectre and Meltdown issues are orders of magnitude more complex. + +To make things worse, the Linux community was notified  _and couldn't keep their mouths shut_ . Standard practice for multi-vendor advisories like this is that an embargo date is set, and **nobody does anything publicly prior to that date**. People don't publish advisories; they don't commit patches into their public source code repositories; and they  _definitely_  don't engage in arguments on public mailing lists about whether the patches are needed for different CPUs. 
As a result, despite an embargo date being set for January 9th, by January 4th anyone who cared knew about the issues, and there was code being passed around on Twitter for exploiting them.
+
+This is not the first time I've seen people get sloppy with embargoes recently, but it's by far the worst case. As an industry we pride ourselves on the concept of responsible disclosure — ensuring that people are notified in time to prepare fixes before an issue is disclosed publicly — but in this case there was far too much disclosure and nowhere near enough responsibility. We can do better, and I sincerely hope that next time we do.
+
+--------------------------------------------------------------------------------
+
+via: http://www.daemonology.net/blog/2018-01-17-some-thoughts-on-spectre-and-meltdown.html
+
+作者:[Daemonic Dispatches][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.daemonology.net/blog/
+[1]:https://en.wikipedia.org/wiki/P-code_machine
+[2]:https://en.wikipedia.org/wiki/National_Security_Agency
+[3]:https://googleprojectzero.blogspot.ca/2018/01/reading-privileged-memory-with-side.html
+[4]:https://en.wikipedia.org/wiki/RSA_(cryptosystem)
+[5]:https://www.openssl.org/
+[6]:http://www.daemonology.net/papers/cachemissing.pdf
+[7]:http://www.bsdcan.org/
+[8]:https://eprint.iacr.org/2005/271.pdf
+[9]:https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
+[10]:https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film)
+[11]:https://en.wikipedia.org/wiki/Jeremiah_Denton
+[12]:https://en.wikipedia.org/wiki/Lewis_Carroll
+[13]:https://www.tarsnap.com/
+[14]:http://www.daemonology.net/blog/2014-09-06-zeroing-buffers-is-insufficient.html
+[15]:https://en.wikipedia.org/wiki/Privilege_separation
+[16]:https://en.wikipedia.org/wiki/Transport_Layer_Security
+[17]:http://www.daemonology.net/blog/2017-06-14-oil-changes-safety-recalls-software-patches.html
+[18]:http://seclists.org/fulldisclosure/2009/Nov/371
+[19]:https://lists.freebsd.org/pipermail/freebsd-security/2009-December/005369.html
diff --git a/sources/talk/20180119 5 of the Best Linux Dark Themes that Are Easy on the Eyes.md b/sources/talk/20180119 5 of the Best Linux Dark Themes that Are Easy on the Eyes.md
new file mode 100644
index 0000000000..db70cd8732
--- /dev/null
+++ b/sources/talk/20180119 5 of the Best Linux Dark Themes that Are Easy on the Eyes.md
@@ -0,0 +1,73 @@
+5 of the Best Linux Dark Themes that Are Easy on the Eyes
+======
+
+![](https://www.maketecheasier.com/assets/uploads/2017/12/linux-themes.png)
+
+There are several reasons people opt for dark themes on their computers. Some find them easy on the eyes, while others need them because of a medical condition. Programmers especially like dark themes because they reduce the glare on their eyes.
+
+If you are a Linux user and a dark theme lover, you are in luck. Here are five of the best dark themes for Linux. Check them out!
+
+### 1. OSX-Arc-Shadow
+
+![OSX-Arc-Shadow Theme][1]
+
+As its name implies, this theme is inspired by OS X. It is a flat theme based on Arc. The theme supports GTK 3 and GTK 2, so users of GTK-based desktops such as GNOME, Cinnamon, Unity, MATE, and Xfce (and of distros like Manjaro that ship them) can install and use it. [OSX-Arc-Shadow][2] is part of the OSX-Arc theme collection. The collection includes several other themes, both dark and light. You can download the whole collection and just use the dark variants.
+
+Debian- and Ubuntu-based distro users have the option of installing the stable release using the .deb files found on this [page][3]. The compressed source files are also on the same page. Arch Linux users, check out this [AUR link][4]. Finally, to install the theme manually, extract the zip content to the "~/.themes" folder and set it as your current theme, controls, and window borders.
+
+### 2. Kiss-Kool-Red version 2
+
+![Kiss-Kool-Red version 2 ][5]
+
+The theme is only a few days old. It has a darker look than OSX-Arc-Shadow and uses red selection outlines. It is especially appealing to those who want more contrast and less glare from the computer screen; hence, it reduces distraction when used at night or in low-light settings. It supports GTK 3 and GTK 2.
+
+Head to [gnome-looks][6] to download the theme under the "Files" menu. The installation procedure is simple: extract the theme into the "~/.themes" folder and set it as your current theme, controls, and window borders.
+
+### 3. Equilux
+
+![Equilux][7]
+
+Equilux is another simple dark theme, based on the Materia theme. It has a neutral dark color tone and is not overly fancy. The contrast of its selection outlines is also minimal and not as sharp as the red in Kiss-Kool-Red. The theme is truly made with the reduction of eye strain in mind.
+
+[Download the compressed file][8] and unzip it into your "~/.themes" folder. Then you can set it as your theme. You can check [its GitHub page][9] for the latest additions.
+
+### 4. Deepin Dark
+
+![Deepin Dark][10]
+
+Deepin Dark is a completely dark theme. For those who like a little more darkness, this theme is definitely one to consider. Moreover, it also reduces the amount of glare from the computer screen. Additionally, it supports Unity. [Download Deepin Dark here][11].
+
+### 5. Ambiance DS BlueSB12
+
+![Ambiance DS BlueSB12 ][12]
+
+Ambiance DS BlueSB12 is a simple dark theme that makes the important details stand out. It helps with focus, as it is not unnecessarily fancy. It is very similar to Deepin Dark. Especially relevant to Ubuntu users, it is compatible with Ubuntu 17.04. You can download and try it from [here][13].
+
+### Conclusion
+
+If you use a computer for long stretches, dark themes are a great way to reduce the strain on your eyes. Even if you don't, dark themes can help you in other ways, like improving your focus. Let us know which is your favorite.
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/best-linux-dark-themes/
+
+作者:[Bruno Edoh][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com
+[1]:https://www.maketecheasier.com/assets/uploads/2017/12/osx-arc-shadow.png (OSX-Arc-Shadow Theme)
+[2]:https://github.com/LinxGem33/OSX-Arc-Shadow/
+[3]:https://github.com/LinxGem33/OSX-Arc-Shadow/releases
+[4]:https://aur.archlinux.org/packages/osx-arc-shadow/
+[5]:https://www.maketecheasier.com/assets/uploads/2017/12/Kiss-Kool-Red.png (Kiss-Kool-Red version 2 )
+[6]:https://www.gnome-look.org/p/1207964/
+[7]:https://www.maketecheasier.com/assets/uploads/2017/12/equilux.png (Equilux)
+[8]:https://www.gnome-look.org/p/1182169/
+[9]:https://github.com/ddnexus/equilux-theme
+[10]:https://www.maketecheasier.com/assets/uploads/2017/12/deepin-dark.png (Deepin Dark )
+[11]:https://www.gnome-look.org/p/1190867/
+[12]:https://www.maketecheasier.com/assets/uploads/2017/12/ambience.png (Ambiance DS BlueSB12 )
+[13]:https://www.gnome-look.org/p/1013664/
diff --git a/sources/talk/20180119 PlayOnLinux For Easier Use Of Wine.md b/sources/talk/20180119 PlayOnLinux For Easier Use Of Wine.md
new file mode 100644
index 0000000000..2af3433920
--- /dev/null
+++ b/sources/talk/20180119 PlayOnLinux For Easier Use Of Wine.md
@@ -0,0 +1,153 @@
+PlayOnLinux For Easier Use Of Wine
+======
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux-for-easier-use-of-wine_orig.jpg)
+
+[PlayOnLinux][1] is a free program that helps to install, run, and manage Windows software on Linux. It can also manage virtual C: drives (known as Wine prefixes) and download and install certain Windows libraries needed to get some software running properly on Wine. Creating different drives using different Wine versions is also possible, which is very handy because what runs well in one version may not run as well (if at all) in a newer one. There is [PlayOnMac][2] for macOS and PlayOnBSD for FreeBSD.
+
+[Wine][3] is the compatibility layer that allows many programs developed for Windows to run under operating systems such as Linux, FreeBSD, macOS and other UNIX systems. The app database ([AppDB][4]) gives users an overview of a multitude of programs that will function on Wine, with varying degrees of success.
+
+Both programs can be obtained using your distribution’s software center or package manager for convenience.
+
+### Installing Programs Using PlayOnLinux
+
+Installing software is easy. PlayOnLinux has hundreds of scripts that guide the setup of different software. In the sidebar, select “Install Software”. You will find several categories to choose from.
+
+Hundreds of games can be installed this way.
+
+ [![install games playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_orig.png)][5]
+
+Office software can be installed as well, including Microsoft Office as shown here.
+
+ [![microsoft office in linux playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_1_orig.png)][6]
+
+Let’s install Notepad++ using the script. You can select the script to read PlayOnLinux’s compatibility rating and an overview of the program. To get a better idea of compatibility, refer to the WineHQ App Database, choose “Browse Apps”, and look up a program like Notepad++.
+
+ [![install notepad++ in linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_2_orig.png)][7]
+
+Once you press “Install”, if you are using PlayOnLinux for the first time, you will encounter two popups: one giving you tips for installing programs with a script, and the other asking you not to submit bug reports to WineHQ, since PlayOnLinux is not affiliated with it.
+
+During the installation, I was given the choice to either download the setup executable or select one already on the computer. I downloaded the file but received a File Mismatch error; however, I continued, and the installation was successful. It’s not perfect, but it is functional. (It is possible to submit bug reports to PlayOnLinux if the option is given.)
+
+[![bug report on playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_3_orig.png)][8]
+
+Nevertheless, I was able to install Notepad++ successfully, run it, and update it from version 7.4.2 to the latest version (7.5.3 at the time of writing).
+
+Also during installation, PlayOnLinux created a virtual C: drive specifically for Notepad++. As there are no other Wine versions available for PlayOnLinux to use, it defaults to the version installed on the system. In this case, that is more than adequate for Notepad++ to run smoothly.
+
+### Installing Non-Listed Programs
+
+You can also install a program that is not on the list by pressing “Install Non-Listed Program” in the bottom-left corner of the install menu. Bear in mind that there is no script to install the libraries certain programs need to work properly; you will need to do this yourself. Look at the Wine AppDB for information about your program. Also, if an app isn’t listed, it doesn’t mean that it won’t work with Wine. It just means no one has submitted any information about it.
+
+I’ve installed Graphmatica, a graph plotting program, using this method. First I selected the option to install it on a new virtual drive.
+
+ [![install non listed programs on linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_4_orig.png)][9]
+
+Then I selected the options to install additional libraries after creating the drive and to choose a Wine version to use in doing so.
+
+ [![playonlinux setup wizard](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_5_orig.png)][10]
+
+I then selected Gecko (which encountered an error for some reason) and Mono 2.10 for installation.
+
+ [![playonlinux wizard POL_install](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_6_orig.png)][11]
+
+Finally, I installed Graphmatica. It’s as simple as that.
+
+ [![software installation done playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_7_orig.png)][12]
+
+A launcher can be created after installation. A list of executables found in the drive will appear. Search for the app’s executable (the right one may not always be obvious), which may display its own icon; select it and give it a display name. The icon will appear on the desktop.
+
+ [![install graphmatica in linux playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_8_orig.png)][13]
+ [![playonlinux install windows software](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_9_orig.png)][14]
+
+### Multiple “C:” Drives
+
+Now that we have easily installed a program, let’s have a look at the drive configuration. In the main window, press “Configure” in the toolbar and the configuration window will appear.
+
+ [![multiple c: drives in linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/playonlinux_10.png?1516170517)][15]
+
+On the left are the drives found within PlayOnLinux. To the right, the “General” tab allows you to create shortcuts to programs installed on that virtual drive.
+
+The “Wine” tab has eight buttons, including those to launch the Wine configuration program (winecfg), control panel, registry editor, command prompt, and so on.
+
+ [![playonlinux configuration wine](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_11_orig.png)][16]
+
+“Install Components” allows you to select different Windows libraries like DirectX 9, .NET Framework versions 2 – 4.5, the Visual C++ runtime, and so on, much as [winetricks][17] does.
+
+ [![install playonlinux components](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_12_orig.png)][18]
+
+“Display” allows the user to control advanced graphics settings like GLSL support, video memory size, and more. “Miscellaneous” covers other actions, like running an executable from anywhere on the computer under the selected virtual drive.
+
+### Creating Virtual Drives Without Installing Programs
+
+To create a drive without installing software, simply press “New” below the list of drives to launch the virtual drive creator. Drives are created using the same method used for installing programs not found in the install menu. Follow the prompts, select either a 32-bit or 64-bit installation (in this case we only have 32-bit versions, so select 32-bit), choose the Wine version, and give the drive a name. Once completed, it will appear in the drive list.
+
+ [![playonlinux sandbox](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_13_orig.png)][19]
+
+### Managing Wine Versions
+
+Entire Wine versions can be downloaded using the manager. To access it through the menu bar, press “Tools” and select “Manage Wine versions”. Software can behave differently under different Wine versions: an update can break something that made your application work in the previous version, rendering the application unstable or completely unusable. This feature is therefore one of the highlights of PlayOnLinux.
+
+If you’re still in the configuration window, you can also access the version manager from the “General” tab by pressing the “+” button next to the Wine version field.
+
+ [![playonlinux select wine version](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_14_orig.png)][20]
+
+To install a version of Wine (32-bit or 64-bit), simply select the version and press the “>” button to download and install it. After installation, if the setup executables for Mono and/or the Gecko HTML engine have not yet been downloaded, PlayOnLinux will fetch them as well.
+
+I went ahead and installed the 2.21-staging version of Wine afterward.
+
+ [![select wine version playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_15_orig.png)][21]
+
+To remove a version, press the “<” button.
+
+### Conclusion
+
+This article demonstrated how to use PlayOnLinux to easily install Windows software into separate virtual C: drives, create and manage virtual drives, and manage several Wine versions. The software isn’t perfect, but it is still functional and useful. Managing different drives with different Wine versions is one of the key features of PlayOnLinux, and it is a lot easier to use a front-end for Wine such as PlayOnLinux than pure Wine.
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxandubuntu.com/home/playonlinux-for-easier-use-of-wine
+
+作者:[LinuxAndUbuntu][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxandubuntu.com
+[1]:https://www.playonlinux.com/en/
+[2]:https://www.playonmac.com
+[3]:https://www.winehq.org/
+[4]:http://appdb.winehq.org/
+[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_orig.png
+[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_1_orig.png
+[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_2_orig.png
+[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_3_orig.png
+[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_4_orig.png
+[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_5_orig.png
+[11]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_6_orig.png
+[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_7_orig.png
+[13]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_8_orig.png
+[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_9_orig.png
+[15]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_10_orig.png
+[16]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_11_orig.png
+[17]:https://github.com/Winetricks/winetricks
+[18]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_12_orig.png
+[19]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_13_orig.png
+[20]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_14_orig.png
+[21]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_15_orig.png
diff --git a/sources/talk/20180122 An overview of the Perl 5 engine.md b/sources/talk/20180122 An overview of the Perl 5 engine.md
new file mode 100644
index 0000000000..a26266a39a
--- /dev/null
+++ b/sources/talk/20180122 An overview of the Perl 5 engine.md
@@ -0,0 +1,130 @@
+An overview of the Perl 5 engine
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camel-perl-lead.png?itok=VyEv-C5o)
+
+As I described in "[My DeLorean runs Perl][1]," switching to Perl has vastly improved my development speed and possibilities. Here I'll dive deeper into the design of Perl 5 to discuss aspects important to systems programming.
+
+Some years ago, I wrote "OpenGL bindings for Bash" as sort of a joke. The implementation was simply an X11 program written in C that read OpenGL calls on [stdin][2] (yes, as text) and emitted user input on [stdout][3]. Then I had a little file that would declare all the OpenGL functions as Bash functions, which echoed the name of the function into a pipe, starting the GL interpreter process if it wasn't already running. The point of the exercise was to show that OpenGL (the 1.4 API, not the newer shader stuff) could render a lot of graphics with just a few calls per frame by using GL display lists. The OpenGL library did all the heavy lifting, and Bash just printed a few dozen lines of text per frame.
+
+In the end though, Bash is a really horrible [glue language][4], both because of its high overhead and because of its limited operations and syntax. [Perl][5], on the other hand, is a great glue language.
+
+### Syntax aside...
+ +If you're not a regular Perl user, the first thing you probably notice is the syntax. + +Perl 5 is built on a long legacy of awkward syntax, but more recent versions have removed the need for much of the punctuation. The remaining warts can mostly be avoided by choosing modules that give you domain-specific "syntactic sugar," which even alter the Perl syntax as it is parsed. This is in stark contrast to most other languages, where you are stuck with the syntax you're given, and infinitely more flexible than C's macros. Combined with Perl's powerful sparse-syntax operators, like `map`, `grep`, `sort`, and similar user-defined operators, I can almost always write complex algorithms more legibly and with less typing using Perl than with JavaScript, PHP, or any compiled language. + +So, because syntax is what you make of it, I think the underlying machine is the most important aspect of the language to consider. Perl 5 has a very capable engine, and it differs in interesting and useful ways from other languages. + +### A layer above C + +I don't recommend anyone start working with Perl by looking at the interpreter's internal API, but a quick description is useful. One of the main problems we deal with in the world of C is acquiring and releasing memory while also supporting control flow through a chain of function calls. C has a rough ability to throw exceptions using `longjmp`, but it doesn't do any cleanup for you, so it is almost useless without a framework to manage resources. The Perl interpreter is exactly this sort of framework. + +Perl provides a stack of variables independent from C's stack of function calls on which you can mark the logical boundaries of a Perl scope. There are also API calls you can use to allocate memory, Perl variables, etc., and tell Perl to automatically free them at the end of the Perl scope. Now you can make whatever C calls you like, "die" out of the middle of them, and let Perl clean everything up for you. + +Although this is a really unconventional perspective, I bring it up to emphasize that Perl sits on top of C and allows you to use as much or as little interpreted overhead as you like. Perl's internal API is certainly not as nice as C++ for general programming, but C++ doesn't give you an interpreted language on top of your work when you're done. I've lost track of the number of times that I wanted reflective capability to inspect or alter my C++ objects, and following that rabbit hole has derailed more than one of my personal projects. + +### Lisp-like functions + +Perl functions take a list of arguments. The downside is that you have to do argument count and type checking at runtime. The upside is you don't end up doing that much, because you can just let the interpreter's own runtime check catch those mistakes. You can also create the effect of C++'s overloaded functions by inspecting the arguments you were given and behaving accordingly. + +Because arguments are a list, and return values are a list, this encourages [Lisp-style programming][6], where you use a series of functions to filter a list of data elements. This "piping" or "streaming" effect can result in some really complicated loops turning into a single line of code. + +Every function is available to the language as a `coderef` that can be passed around in variables, including anonymous closure functions. Also, I find `sub {}` more convenient to type than JavaScript's `function(){}` or C++11's `[&](){}`. 
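+
+Before moving on to Perl's data structures, here is a small illustration of the "layer above C" resource-management pattern described a couple of sections back. This sketch is mine, not the original author's; the macros (`ENTER`, `SAVETMPS`, `Newx`, `SAVEFREEPV`, `croak`) are the standard ones documented in perlguts and perlcall, while the function itself is hypothetical:
+
+```c
+#include <EXTERN.h>
+#include <perl.h>
+
+/* Hypothetical helper: allocate a scratch buffer, do risky work, and
+ * bail out with croak() on failure. The interpreter unwinds the scope
+ * and frees the buffer for us, so there are no manual cleanup paths. */
+static void risky_work(pTHX_ int fail)
+{
+    char *buf;
+
+    ENTER;                  /* open a scope on Perl's save stack */
+    SAVETMPS;
+
+    Newx(buf, 4096, char); /* Perl's malloc() wrapper */
+    SAVEFREEPV(buf);       /* free buf automatically when the scope unwinds */
+
+    /* ... arbitrary C calls using buf ... */
+    if (fail)
+        croak("risky_work failed"); /* "dies" out of the middle; buf is still freed */
+
+    FREETMPS;
+    LEAVE;                  /* a normal exit frees buf as well */
+}
+```
+
+If a surrounding Perl-level `eval {}` (or its C-side equivalent from perlcall) catches the `croak`, execution resumes there with the save stack already unwound, so `buf` is released on both the failure path and the success path.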
+ +### Generic data structures + +The variables in Perl are either "scalars," references, arrays, or "hashes" ... or some other stuff that I'll skip. + +Scalars act as a string/integer/float hybrid and are automatically typecast as needed for the purpose you are using them. In other words, instead of determining the operation by the type of variable, the type of operator determines how the variable should be interpreted. This is less efficient than if the language knows the type in advance, but not as inefficient as, for example, shell scripting because Perl caches the type conversions. + +Perl scalars may contain null characters, so they are fully usable as buffers for binary data. The scalars are mutable and copied by value, but optimized with copy-on-write, and substring operations are also optimized. Strings support unicode characters but are stored efficiently as normal bytes until you append a codepoint above 255. + +References (which are considered scalars as well) hold a reference to any other variable; `hashrefs` and `arrayrefs` are most common, along with the `coderefs` described above. + +Arrays are simply a dynamic-length array of scalars (or references). + +Hashes (i.e., dictionaries, maps, or whatever you want to call them) are a performance-tuned hash table implementation where every key is a string and every value is a scalar (or reference). Hashes are used in Perl in the same way structs are used in C. Clearly a hash is less efficient than a struct, but it keeps things generic so tasks that require dozens of lines of code in other languages can become one-liners in Perl. For instance, you can dump the contents of a hash into a list of (key, value) pairs or reconstruct a hash from such a list as a natural part of the Perl syntax. + +### Object model + +Any reference can be "blessed" to make it into an object, granting it a multiple-inheritance method-dispatch table. The blessing is simply the name of a package (namespace), and any function in that namespace becomes an available method of the object. The inheritance tree is defined by variables in the package. As a result, you can make modifications to classes or class hierarchies or create new classes on the fly with simple data edits, rather than special keywords or built-in reflection APIs. By combining this with Perl's `local` keyword (where changes to a global are automatically undone at the end of the current scope), you can even make temporary changes to class methods or inheritance! + +Perl objects only have methods, so attributes are accessed via accessors like the canonical Java `get_` and `set_` methods. Perl authors usually combine them into a single method of just the attribute name and differentiate `get` from `set` by whether a parameter was given. + +You can also "re-bless" objects from one class to another, which enables interesting tricks not available in most other languages. Consider state machines, where each method would normally start by checking the object's current state; you can avoid that in Perl by swapping the method table to one that matches the object's state. + +### Visibility + +While other languages spend a bunch of effort on access rules between classes, Perl adopted a simple "if the name begins with underscore, don't touch it unless it's yours" convention. Although I can see how this could be a problem with an undisciplined software team, it has worked great in my experience. 
The only thing C++'s `private` keyword ever did for me was impair my debugging efforts, yet it felt dirty to make everything `public`. Perl removes my guilt. + +Likewise, an object provides methods, but you can ignore them and just access the underlying Perl data structure. This is another huge boost for debugging. + +### Garbage collection via reference counting + +Although [reference counting][7] is a rather leak-prone form of memory management (it doesn't detect cycles), it has a few upsides. It gives you deterministic destruction of your objects, like in C++, and never interrupts your program with a surprise garbage collection. It strongly encourages module authors to use a tree-of-objects pattern, which I much prefer vs. the tangle-of-objects pattern often seen in Java and JavaScript. (I've found trees to be much more easily tested with unit tests.) But, if you need a tangle of objects, Perl does offer "weak" references, which won't be considered when deciding if it's time to garbage-collect something. + +On the whole, the only time this ever bites me is when making heavy use of closures for event-driven callbacks. It's easy to have an object hold a reference to an event handle holding a reference to a callback that references the containing object. Again, weak references solve this, but it's an extra thing to be aware of that JavaScript or Python don't make you worry about. + +### Parallelism + +The Perl interpreter is a single thread, although modules written in C can use threads of their own internally, and Perl often includes support for multiple interpreters within the same process. + +Although this is a large limitation, knowing that a data structure will only ever be touched by one thread is nice, and it means you don't need locks when accessing them from C code. Even in Java, where locking is built into the syntax in convenient ways, it can be a real time sink to reason through all the ways that threads can interact (and especially annoying that they force you to deal with that in every GUI program you write). + +There are several event libraries available to assist in writing event-driven callback programs in the style of Node.js to avoid the need for threads. + +### Access to C libraries + +Aside from directly writing your own C extensions via Perl's [XS][8] system, there are already lots of common C libraries wrapped for you and available on Perl's [CPAN][9] repository. There is also a great module, [Inline::C][10], that takes most of the pain out of bridging between Perl and C, to the point where you just paste C code into the middle of a Perl module. (It compiles the first time you run it and caches the .so shared object file for subsequent runs.) You still need to learn some of the Perl interpreter API if you want to manipulate the Perl stack or pack/unpack Perl's variables other than your C function arguments and return value. + +### Memory usage + +Perl can use a surprising amount of memory, especially if you make use of heavyweight libraries and create thousands of objects, but with the size of today's systems it usually doesn't matter. It also isn't much worse than other interpreted systems. My personal preference is to only use lightweight libraries, which also generally improve performance. + +### Startup speed + +The Perl interpreter starts in under five milliseconds on modern hardware. If you take care to use only lightweight modules, you can use Perl for anything you might have used Bash for, like `hotplug` scripts. 
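+
+The startup-speed and C-integration claims above are easy to verify for yourself, because the whole interpreter can be booted and torn down from a short C program. The following sketch follows the canonical skeleton from the perlembed documentation; the build command in the comment is typical but varies by system:
+
+```c
+#include <EXTERN.h>
+#include <perl.h>
+
+/* Boot a full Perl interpreter from C, run one statement, and shut down.
+ * Typical build: cc embed.c `perl -MExtUtils::Embed -e ccopts -e ldopts`
+ * Timing this program gives a rough measure of interpreter startup cost. */
+static PerlInterpreter *my_perl;
+
+int main(int argc, char **argv, char **env)
+{
+    char *args[] = { "", "-e", "print qq{hello from embedded perl\n};" };
+
+    PERL_SYS_INIT3(&argc, &argv, &env);
+    my_perl = perl_alloc();
+    perl_construct(my_perl);
+
+    perl_parse(my_perl, NULL, 3, args, NULL); /* compile the -e one-liner */
+    perl_run(my_perl);                        /* execute it */
+
+    perl_destruct(my_perl);
+    perl_free(my_perl);
+    PERL_SYS_TERM();
+    return 0;
+}
+```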
+ +### Regex implementation + +Perl provides the mother of all regex implementations... but you probably already knew that. Regular expressions are built into Perl's syntax rather than being an object-oriented or function-based API; this helps encourage their use for any text processing you might need to do. + +### Ubiquity and stability + +Perl 5 is installed on just about every modern Unix system, and the CPAN module collection is extensive and easy to install. There's a production-quality module for almost any task, with solid test coverage and good documentation. + +Perl 5 has nearly complete backward compatibility across two decades of releases. The community has embraced this as well, so most of CPAN is pretty stable. There's even a crew of testers who run unit tests on all of CPAN on a regular basis to help detect breakage. + +The toolchain is also pretty solid. The documentation syntax (POD) is a little more verbose than I'd like, but it yields much more useful results than [doxygen][11] or [Javadoc][12]. You can run `perldoc FILENAME` to instantly see the documentation of the module you're writing. `perldoc Module::Name` shows you the specific documentation for the version of the module that you would load from your `include` path and can likewise show you the source code of that module without needing to browse deep into your filesystem. + +The testcase system (the `prove` command and Test Anything Protocol, or TAP) isn't specific to Perl and is extremely simple to work with (as opposed to unit testing based around language-specific object-oriented structure, or XML). Modules like `Test::More` make writing the test cases so easy that you can write a test suite in about the same time it would take to test your module once by hand. The testing effort barrier is so low that I've started using TAP and the POD documentation style for my non-Perl projects as well. + +### In summary + +Perl 5 still has a lot to offer despite the large number of newer languages competing with it. The frontend syntax hasn't stopped evolving, and you can improve it however you like with custom modules. The Perl 5 engine is capable of handling most programming problems you can throw at it, and it is even suitable for low-level work as a "glue" layer on top of C libraries. Once you get really familiar with it, it can even be an environment for developing C code. 
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/why-i-love-perl-5
+
+作者:[Michael Conrad][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/nerdvana
+[1]:https://opensource.com/article/17/12/my-delorean-runs-perl
+[2]:https://en.wikipedia.org/wiki/Standard_streams#Standard_input_(stdin)
+[3]:https://en.wikipedia.org/wiki/Standard_streams#Standard_output_(stdout)
+[4]:https://www.techopedia.com/definition/19608/glue-language
+[5]:https://www.perl.org/
+[6]:https://en.wikipedia.org/wiki/Lisp_(programming_language)
+[7]:https://en.wikipedia.org/wiki/Reference_counting
+[8]:https://en.wikipedia.org/wiki/XS_(Perl)
+[9]:https://www.cpan.org/
+[10]:https://metacpan.org/pod/distribution/Inline-C/lib/Inline/C.pod
+[11]:http://www.stack.nl/~dimitri/doxygen/
+[12]:http://www.oracle.com/technetwork/java/javase/documentation/index-jsp-135444.html
diff --git a/sources/talk/20180122 How to price cryptocurrencies.md b/sources/talk/20180122 How to price cryptocurrencies.md
new file mode 100644
index 0000000000..061090db5a
--- /dev/null
+++ b/sources/talk/20180122 How to price cryptocurrencies.md
@@ -0,0 +1,73 @@
+How to price cryptocurrencies
+======
+
+![](https://tctechcrunch2011.files.wordpress.com/2018/01/fabian-blank-78637.jpg?w=1279&h=727&crop=1)
+
+Predicting cryptocurrency prices is a fool's game, yet this fool is about to try. The drivers of a single cryptocurrency's value are currently too varied and vague to make assessments based on any one point. News is trending up on Bitcoin? Maybe there's a hack or an API failure that is driving it down at the same time. Ethereum looking sluggish? Who knows: Maybe someone will build a new smarter DAO tomorrow that will draw in the big spenders.
+
+So how do you invest? Or, more correctly, on which currency should you bet?
+
+The key to understanding what to buy or sell and when to hold is to use the tools associated with assessing the value of open-source projects. This has been said again and again, but to understand the current crypto boom you have to go back to the quiet rise of Linux.
+
+Linux appeared on most radars during the dot-com bubble. At that time, if you wanted to set up a web server, you had to physically ship a Windows server or Sun Sparc Station to a server farm where it would do the hard work of delivering Pets.com HTML. At the same time, Linux, like a freight train running on a parallel path to Microsoft and Sun, would consistently allow developers to build one-off projects very quickly and easily using an OS and toolset that were improving daily. In comparison, then, the massive hardware and software expenditures associated with the status quo solution providers were deeply inefficient, and very quickly all of the tech giants that had made their money on software began making it on services or, like Sun, folded.
+
+From the acorn of Linux an open-source forest bloomed. But there was one clear problem: You couldn't make money from open source. You could consult and you could sell products that used open-source components, but early builders built primarily for the betterment of humanity and not the betterment of their bank accounts.
+
+Cryptocurrencies have followed the Linux model almost exactly, but cryptocurrencies have cash value.
Therefore, when you're working on a crypto project you're not doing it for the common good or for the joy of writing free software. You're writing it with the expectation of a big payout. This clouds the value judgements of many programmers. The same folks that brought you Python, PHP, Django and Node.js are back… and now they're programming money.
+
+### Check the codebase
+
+This year will be the year of great reckoning in the token sale and cryptocurrency space. While many companies have been able to get away with poor or unusable codebases, I doubt developers will let future companies get away with so much smoke and mirrors. It's safe to say we can [expect posts like this one detailing Storj's anemic codebase to become the norm][1] and, more importantly, that these commentaries will sink many so-called ICOs. Though massive, the money trough that is flowing from ICO to ICO is finite, and at some point there will be greater scrutiny paid to incomplete work.
+
+What does this mean? It means to understand cryptocurrency you have to treat it like a startup. Does it have a good team? Does it have a good product? Does the product work? Would someone want to use it? It's far too early to assess the value of cryptocurrency as a whole, but if we assume that tokens or coins will become the way computers pay each other in the future, this lets us hand wave away a lot of doubt. After all, not many people knew in 2000 that Apache was going to beat nearly every other web server in a crowded market or that Ubuntu instances would be so common that you'd spin them up and destroy them in an instant.
+
+The key to understanding cryptocurrency pricing is to ignore the froth, hype and FUD and instead focus on true utility. Do you think that some day your phone will pay another phone for, say, an in-game perk? Do you expect the credit card system to fold in the face of an Internet of Value? Do you expect that one day you'll move through life splashing out small bits of value in order to make yourself more comfortable? Then by all means, buy and hold or speculate on things that you think will make your life better. If you don't expect the Internet of Value to improve your life the way the TCP/IP internet did (or you do not understand enough to hold an opinion), then you're probably not cut out for this. NASDAQ is always open, at least during banker's hours.
+
+Still with us? Good, here are my predictions.
+
+### The rundown
+
+Here is my assessment of what you should look at when considering an "investment" in cryptocurrencies. There are a number of caveats we must address before we begin:
+
+ * Crypto is not a monetary investment in a real currency, but an investment in a pie-in-the-sky technofuture. That's right: When you buy crypto you're basically assuming that we'll all be on the deck of the Starship Enterprise exchanging them like Galactic Credits one day. This is the only inevitable future for crypto bulls. While you can force crypto into various economic models and hope for the best, the entire platform is techno-utopianist and assumes all sorts of exciting and unlikely things will come to pass in the next few years. If you have spare cash lying around and you like Star Wars, then you're golden. If you bought bitcoin on a credit card because your cousin told you to, then you're probably going to have a bad time.
+ * Don't trust anyone.
There is no guarantee and, in addition to offering the disclaimer that this is not investment advice and that this is in no way an endorsement of any particular cryptocurrency or even the concept in general, we must understand that everything I write here could be wrong. In fact, everything ever written about crypto could be wrong, and anyone who is trying to sell you a token with exciting upside is almost certainly wrong. In short, everyone is wrong and everyone is out to get you, so be very, very careful.
+ * You might as well hold. If you bought when BTC was $18,000 you'd best just hold on. Right now you're in Pascal's Wager territory. Yes, maybe you're angry at crypto for screwing you, but maybe you were just stupid and you got in too high and now you might as well keep believing because nothing is certain, or you can admit that you were a bit overeager and now you're being punished for it but that there is some sort of bitcoin god out there watching over you. Ultimately you need to take a deep breath, agree that all of this is pretty freaking weird, and hold on.
+
+
+
+Now on with the assessments.
+
+**Bitcoin** - Expect a rise over the next year that will surpass the current low. Also expect [bumps as the SEC and other federal agencies][2] around the world begin regulating the buying and selling of cryptocurrencies in very real ways. Now that banks are in on the joke, they're going to want to reduce risk. Therefore, bitcoin will become digital gold: a staid, boring, and volatility-proof safe haven for speculators. Although all but unusable as a real currency, it's good enough for what we need it to do, and we can also expect quantum computing hardware to change the face of the oldest and most familiar cryptocurrency.
+
+**Ethereum** - Ethereum could sustain another few thousand dollars on its price as long as Vitalik Buterin, the creator, doesn't throw too much cold water on it. Like a remorseful Victor Frankenstein, Buterin tends to make amazing things and then denigrate them online, a sort of self-flagellation that is actually quite useful in a space full of froth and outright lies. Ethereum is the closest we've come to a useful cryptocurrency, but it is still the Raspberry Pi of distributed computing -- it's a useful and clever hack that makes it easy to experiment, but no one has quite replaced the old systems with new distributed data stores or applications. In short, it's a really exciting technology, but nobody knows what to do with it.
+
+![][3]
+
+Where will the price go? It will hover around $1,000 and possibly go as high as $1,500 this year, but this is a principled tech project and not a store of value.
+
+**Altcoins** - One of the signs of a bubble is when average people make statements like "I couldn't afford a Bitcoin so I bought a Litecoin." This is exactly what I've heard multiple times from multiple people and it's akin to saying "I couldn't buy hamburger so I bought a pound of sawdust instead. I think the kids will eat it, right?" Play at your own risk. Altcoins are a very useful low-risk play for many, and if you create an algorithm -- say to sell when the asset hits a certain level -- then you could make a nice profit. Further, most altcoins will not disappear overnight. I would honestly recommend playing with Ethereum instead of altcoins, but if you're dead set on it, then by all means, enjoy.
+
+**Tokens** - This is where cryptocurrency gets interesting. Tokens require research, education and a deep understanding of technology to truly assess.
Many of the tokens I've seen are true crapshoots and are used primarily as pump-and-dump vehicles. I won't name names, but the rule of thumb is that if you're buying a token on an open market then you've probably already missed out. The value of the token sale as of January 2018 is to allow crypto whales to turn an investment of a few cents per token into a 100X return. While many founders talk about the magic of their product and the power of their team, token sales are quite simply vehicles to turn 4 cents into 20 cents into a dollar. Multiply that by millions of tokens and you see the draw.
+
+The answer is simple: find a few projects you like and lurk in their message boards. Assess whether the team is competent and figure out how to get in very, very early. Also expect your money to disappear into a rat hole in a few months or years. There are no sure things, and tokens are far too bleeding-edge a technology to assess sanely.
+
+You are reading this post because you are looking to maintain confirmation bias in a confusing space. That's fine. I've spoken to enough crypto-heads to know that nobody knows anything right now and that collusion and dirty dealings are the rule of the day. Therefore, it's up to folks like us to slowly but surely begin to understand just what's going on and, perhaps, profit from it. At the very least we'll all get a new Linux of Value when we're all done.
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://techcrunch.com/2018/01/22/how-to-price-cryptocurrencies/
+
+作者:[John Biggs][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://techcrunch.com/author/john-biggs/
+[1]:https://shitcoin.com/storj-not-a-dropbox-killer-1a9f27983d70
+[2]:http://www.businessinsider.com/bitcoin-price-cryptocurrency-warning-from-sec-cftc-2018-1
+[3]:https://tctechcrunch2011.files.wordpress.com/2018/01/vitalik-twitter-1312.png?w=525&h=615
+[4]:https://unsplash.com/photos/pElSkGRA2NU?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[5]:https://unsplash.com/search/photos/cash?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/sources/talk/20180122 Ick- a continuous integration system.md b/sources/talk/20180122 Ick- a continuous integration system.md
new file mode 100644
index 0000000000..4620e2c036
--- /dev/null
+++ b/sources/talk/20180122 Ick- a continuous integration system.md
@@ -0,0 +1,75 @@
+Ick: a continuous integration system
+======
+**TL;DR:** Ick is a continuous integration or CI system. See <http://ick.liw.fi/> for more information.
+
+More verbose version follows.
+
+### First public version released
+
+The world may not need yet another continuous integration system (CI), but I do. I've been unsatisfied with the ones I've tried or looked at. More importantly, I am interested in a few things that are more powerful than what I've ever even heard of. So I've started writing my own.
+
+My new personal hobby project is called ick. It is a CI system, which means it can run automated steps for building and testing software. The home page is at <http://ick.liw.fi/>, and the [download][1] page has links to the source code, .deb packages, and an Ansible playbook for installing it.
+
+I have now made the first publicly advertised release, dubbed ALPHA-1, version number 0.23.
It is of alpha quality, and that means it doesn't have all the intended features, and if any of the features it does have work, you should consider yourself lucky.
+
+### Invitation to contribute
+
+Ick has so far been my personal project. I am hoping to make it more than that, and I invite contributions. See the [governance][2] page for the constitution, the [getting started][3] page for tips on how to start contributing, and the [contact][4] page for how to get in touch.
+
+### Architecture
+
+Ick has an architecture consisting of several components that communicate over HTTPS using RESTful APIs and JSON for structured data. See the [architecture][5] page for details.
+
+### Manifesto
+
+Continuous integration (CI) is a powerful tool for software development. It should not be tedious, fragile, or annoying. It should be quick and simple to set up, and work quietly in the background unless there's a problem in the code being built and tested.
+
+A CI system should be simple, easy, clear, clean, scalable, fast, comprehensible, transparent, and reliable, and it should boost your productivity to get things done. It should not take a lot of effort to set up, require a lot of hardware just for the CI, or need frequent attention to keep working, and developers should never have to wonder why something isn't working.
+
+A CI system should be flexible enough to suit your build and test needs. It should support multiple types of workers, as far as CPU architecture and operating system version are concerned.
+
+Also, like all software, CI should be fully and completely free software, and your instance should be under your control.
+
+(Ick is little of this yet, but it will try to become all of it. In the best possible taste.)
+
+### Dreams of the future
+
+In the long run, I would like ick to have features like the ones described below. It may take a while to get them all implemented.
+
+ * A build may be triggered by a variety of events. Time is an obvious event, as is the source code repository for the project changing. More powerfully, any build dependency changing, regardless of whether the dependency comes from another project built by ick or from a package from, say, Debian: ick should keep track of all the packages that get installed into the build environment of a project, and if any of their versions change, it should trigger the project build and tests again.
+
+ * Ick should support building in (or against) any reasonable target, including any Linux distribution, any free operating system, and any non-free operating system that isn't brain-dead.
+
+ * Ick should manage the build environment itself, and be able to do builds that are isolated from the build host or the network. This partially works: one can ask ick to build a container and run a build in the container. The container is implemented using systemd-nspawn. This can be improved upon, however. (If you think Docker is the only way to go, please contribute support for that.)
+
+ * Ick should support any workers that it can control over ssh or a serial port or other such neutral communication channel, without having to install an agent of any kind on them. Ick won't assume that it can have, say, a full Java runtime, so that the worker can be, say, a microcontroller.
+
+ * Ick should be able to effortlessly handle very large numbers of projects. I'm thinking here that it should be able to keep up with building everything in Debian, whenever a new Debian source package is uploaded.
(Obviously whether that is feasible depends on whether there are enough resources to actually build things, but ick itself should not be the bottleneck.)
+
+ * Ick should optionally provision workers as needed. If all workers of a certain type are busy, and ick's been configured to allow using more resources, it should do so. This seems like it would be easy to do with virtual machines, containers, cloud providers, etc.
+
+ * Ick should be flexible in how it can notify interested parties, particularly about failures. It should allow an interested party to ask to be notified over IRC, Matrix, Mastodon, Twitter, email, SMS, or even by a phone call and speech synthesiser. "Hello, interested party. It is 04:00 and you wanted to be told when the hello package has been built for RISC-V."
+
+
+
+### Please give feedback
+
+If you try ick, or even if you've just read this far, please share your thoughts on it. See the [contact][4] page for where to send it. Public feedback is preferred over private, but if you prefer private, that's OK too.
+
+--------------------------------------------------------------------------------
+
+via: https://blog.liw.fi/posts/2018/01/22/ick_a_continuous_integration_system/
+
+作者:[Lars Wirzenius][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://blog.liw.fi/
+[1]:http://ick.liw.fi/download/
+[2]:http://ick.liw.fi/governance/
+[3]:http://ick.liw.fi/getting-started/
+[4]:http://ick.liw.fi/contact/
+[5]:http://ick.liw.fi/architecture/
diff --git a/sources/talk/20180122 Raspberry Pi Alternatives.md b/sources/talk/20180122 Raspberry Pi Alternatives.md
new file mode 100644
index 0000000000..bf3bca4f61
--- /dev/null
+++ b/sources/talk/20180122 Raspberry Pi Alternatives.md
@@ -0,0 +1,58 @@
+Raspberry Pi Alternatives
+======
+A look at some of the many interesting Raspberry Pi competitors.
+
+The phenomenon behind the Raspberry Pi computer series has been pretty amazing. It's obvious why it has become so popular for Linux projects—it's a low-cost computer that's actually quite capable for the price, and the GPIO pins allow you to use it in a number of electronics projects such that it starts to cross over into Arduino territory in some cases. Its overall popularity has spawned many different add-ons and accessories, not to mention step-by-step guides on how to use the platform. I've personally written about Raspberry Pis often in this space, and in my own home, I use one to control a beer fermentation fridge, one as my media PC, one to control my 3D printer and one as a handheld gaming device.
+
+The popularity of the Raspberry Pi also has spawned competition, and there are all kinds of other small, low-cost, Linux-powered Raspberry Pi-like computers for sale—many of which even go so far as to add "Pi" to their names. These computers aren't just clones, however. Although some share a similar form factor to the Raspberry Pi, and many also copy the GPIO pinouts, in many cases, these other computers offer features unavailable in a traditional Raspberry Pi. Some boards offer SATA, Wi-Fi or Gigabit networking; others offer USB3, and still others offer higher-performance CPUs or more RAM. When you are choosing a low-power computer for a project or as a home server, it pays to be aware of these Raspberry Pi alternatives, as in many cases, they will perform much better.
So in this article, I discuss some alternatives to Raspberry Pis that I've used personally, their pros and cons, and then provide some examples of where they work best.
+
+### Banana Pi
+
+I've mentioned the Banana Pi before in past articles (see "Papa's Got a Brand New NAS" in the September 2016 issue and "Banana Backups" in the September 2017 issue), and it's a great choice when you want a board with a similar form factor, similar CPU and RAM specs, and a similar price (~$30) to a Raspberry Pi but need faster I/O. The Raspberry Pi product line is used for a lot of home server projects, but it limits you to 10/100 networking and a USB2 port for additional storage. Where the Banana Pi product line really shines is in the fact that it includes both a Gigabit network port and a SATA port, while still having similar GPIO expansion options and running around the same price as a Raspberry Pi.
+
+Before I settled on an Odroid XU4 for my home NAS (more on that later), I first experimented with a cluster of Banana Pis. The idea was to attach a SATA disk to each Banana Pi and use software like Ceph or GlusterFS to create a storage cluster shared over the network. Even though any individual Banana Pi isn't necessarily that fast, considering how cheap they are, in aggregate they should be able to perform reasonably well and allow you to expand your storage by adding another disk and another Banana Pi. In the end, I decided to go a more traditional and simpler route with a single server and software RAID, and now I use one Banana Pi as an image gallery server. I attached a 2.5" laptop SATA drive to the other and use it as a local backup server running BackupPC. It's a nice solution that takes up almost no space and uses little power.
+
+### Orange Pi Zero
+
+I was really excited when I first heard about the Raspberry Pi Zero project. I couldn't believe there was such a capable little computer for only $5, and I started imagining all of the cool projects I could use one for around the house. That initial excitement was dampened a bit by the fact that they sold out quickly, and just about every vendor settled into the same pattern: put standalone Raspberry Pi Zeros on backorder but have special $20 starter kits in stock that include various adapter cables, a micro SD card and a plastic case that I didn't need. More than a year after the release, the situation still remains largely the same. Although I did get one Pi Zero and used it for a cool Adafruit "Pi Grrl Zero" gaming project, I had to put the rest of my ideas on hold, because they just never seemed to be in stock when I wanted them.
+
+The Orange Pi Zero was created by the same company that makes the entire line of Orange Pi computers that compete with the Raspberry Pi. The main thing that makes the Orange Pi Zero shine, in my mind, is its small, square form factor that is wider than a Raspberry Pi Zero but not as long. It also includes a Wi-Fi card like the more expensive Raspberry Pi Zero W, and it runs between $6 and $9, depending on whether you opt for 256MB or 512MB of RAM. More important, these boards are generally in stock, so there's no need to sit on a backorder list when you have a fun project in mind.
+
+The Orange Pi Zero boards themselves are pretty capable. Out of the box, they include a quad-core ARM CPU, Wi-Fi (as I mentioned before), along with a 10/100 network port and USB2.
They also include Raspberry-Pi-compatible GPIO pins, but even more interesting is that there is a $9 "NAS" expansion board for it that mounts to its 13-pin header and provides extra USB2 ports, SATA and mSATA ports, along with IR, audio and video ports, which makes it about as capable as a more expensive Banana Pi board. Even without the expansion board, this would make a nice computer you could sit anywhere within range of your Wi-Fi and run any number of services. The main downside is you are limited to composite video, so this isn't the best choice for gaming or video-based projects.
+
+Although Orange Pi Zeros are capable boards in their own right, what makes them particularly enticing to me is that they are actually available when you want them, unlike some of the other sub-$10 boards out there. There's nothing worse than having a cool idea for a cheap home project and then having to wait for a board to come off backorder.
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12261f1.jpg)
+
+Figure 1. An Orange Pi Zero (right) and an Espressobin (left)
+
+### Odroid XU4
+
+When I was looking to replace my rack-mounted NAS at home, I first looked at all of the Raspberry Pi options, including Banana Pi and other alternatives, but none of them seemed to have quite enough horsepower for my needs. I needed a machine that not only offered Gigabit networking to act as a NAS, but one that had high-speed disk I/O as well. The Odroid XU4 fit the bill with its eight-core ARM CPU, 2GB RAM, Gigabit network and USB3 ports. Although it was around $75 (more than twice the price of a Raspberry Pi), it was a much more capable computer, all while being small and low-power.
+
+The entire Odroid product line is a good one to consider if you want a low-power home server but need more resources than a traditional Raspberry Pi can offer and are willing to spend a little bit extra for the privilege. In addition to a NAS, the Odroid XU4, with its more powerful CPU and extra RAM, is a good all-around server for the home. The USB3 port means you have a lot of storage options should you need them.
+
+### Espressobin
+
+Although the Odroid XU4 is a great home server, I still sometimes see it get bogged down in disk and network I/O compared to a traditional higher-powered server. Some of this might be due to the chips that were selected for the board, and perhaps some of it has to do with the fact that I'm using both disk encryption and software RAID over USB3. In either case, I started looking for another option to help take a bit of the storage burden off this server, and I came across the Espressobin board.
+
+The Espressobin is a $50 board that launched as a popular Indiegogo campaign and is now a shipping product that you can pick up in a number of places, including Amazon. Although it costs a bit more than a Raspberry Pi 3, it includes a 64-bit dual-core ARM Cortex A53 at 1.2GHz, 1–2GB of RAM (depending on the configuration), three Gigabit network ports with a built-in switch, a SATA port, a USB3 port, a mini-PCIe port, plus a number of other options, including two sets of GPIO headers and a nice built-in serial console running on the micro-USB port.
+
+The main benefit to the Espressobin is the fact that it was designed by Marvell with chips that actually can use all of the bandwidth that the board touts.
On some other boards, you'll often find a SATA2 port hanging off a USB2 interface, or similar architectural hacks that let you connect a SATA disk or a Gigabit network port without actually giving you the full bandwidth the spec claims. Although I intend to have my own Espressobin take over home NAS duties, it also would make a great home gateway router, general-purpose server or even a Wi-Fi access point, provided you added the right Wi-Fi card.
+
+### Conclusion
+
+A whole world of alternatives to Raspberry Pis exists—this list covers only some of the ones I've used myself. I hope it has encouraged you to think twice before you default to a Raspberry Pi for your next project. Although there's certainly nothing wrong with Raspberry Pis, there are several small computers that run Linux well and, in many cases, offer better hardware or other expansion options beyond the capabilities of a Raspberry Pi for a similar price.
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxjournal.com/content/raspberry-pi-alternatives
+
+作者:[Kyle Rankin][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxjournal.com/users/kyle-rankin
diff --git a/sources/talk/20180123 Moving to Linux from dated Windows machines.md b/sources/talk/20180123 Moving to Linux from dated Windows machines.md
new file mode 100644
index 0000000000..6acd6e53f2
--- /dev/null
+++ b/sources/talk/20180123 Moving to Linux from dated Windows machines.md
@@ -0,0 +1,50 @@
+Moving to Linux from dated Windows machines
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/1980s-computer-yearbook.png?itok=eGOYEKK-)
+
+Every day, while working in the marketing department at ONLYOFFICE, I see Linux users discussing our office productivity software on the internet. Our products are popular among Linux users, which made me curious about using Linux as an everyday work tool. My old Windows XP-powered computer was holding my performance back, so I started reading about Linux systems (particularly Ubuntu) and decided to try it out as an experiment. Two of my colleagues joined me.
+
+### Why Linux?
+
+We needed to make a change, first, because our old systems simply did not perform well enough: we experienced regular crashes, an overload every time more than two apps were active, a 50% chance of freezing when a machine was shut down, and so forth. This was rather disruptive to our work, which meant we were considerably less efficient than we could be.
+
+Upgrading to newer versions of Windows was an option, too, but that would have been an additional expense; besides, our software competes against Microsoft's office suite, so there was an ideological question as well.
+
+Second, as I mentioned earlier, ONLYOFFICE products are rather popular within the Linux community. By reading about Linux users' experience with our software, we became interested in joining them.
+
+A week after we asked to change to Linux, we got our shiny new computer cases with [Kubuntu][1] inside. We chose version 16.04, which features KDE Plasma 5.5 and many KDE apps including Dolphin, as well as LibreOffice 5.1 and Firefox 45.
+
+### What we like about Linux
+
+Linux's biggest advantage, I believe, is its speed; for instance, it takes just seconds from pushing the machine's On button to starting your work.
Everything seemed amazingly rapid from the very beginning: the overall responsiveness, the graphics, and even system updates.
+
+One other thing that surprised me compared to Windows is that Linux allows you to configure nearly everything, including the entire look of your desktop. In Settings, I found out how to change the color and shape of bars, buttons, and fonts; relocate any desktop element; and build a composition of widgets, even including comics and Color Picker. I believe I've barely scratched the surface of the available options and have yet to explore most of the customization opportunities that this system is well known for.
+
+Linux distributions are generally a very safe environment. People rarely use antivirus apps in Linux, simply because there are so few viruses written for it. You save system resources, time, and, of course, money.
+
+In general, Linux has refreshed our everyday work lives, surprising us with a number of new options and opportunities. Even in the short time we've been using it, we'd characterize it as:
+
+  * Fast and smooth to operate
+  * Highly customizable
+  * Relatively newcomer-friendly
+  * Challenging with basic components, but very rewarding in return
+  * Safe and secure
+  * An exciting experience for everyone who seeks to refresh their workplace
+
+
+
+Have you switched from Windows or MacOS to Kubuntu or another Linux variant? Or are you considering making the change? Please share your reasons for wanting to adopt Linux, as well as your impressions of going open source, in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/move-to-linux-old-windows
+
+作者:[Michael Korotaev][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/michaelk
+[1]:https://kubuntu.org/
diff --git a/sources/talk/20180124 Containers, the GPL, and copyleft- No reason for concern.md b/sources/talk/20180124 Containers, the GPL, and copyleft- No reason for concern.md
new file mode 100644
index 0000000000..814598f7cf
--- /dev/null
+++ b/sources/talk/20180124 Containers, the GPL, and copyleft- No reason for concern.md
@@ -0,0 +1,44 @@
+Containers, the GPL, and copyleft: No reason for concern
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_patents4abstract_B.png?itok=6RHeRaYh)
+
+Though open source is thoroughly mainstream, new software technologies and old technologies that get newly popularized sometimes inspire hand-wringing about open source licenses. Most often the concern is about the GNU General Public License (GPL), and specifically the scope of its copyleft requirement, which is often described (somewhat misleadingly) as the GPL's derivative work issue.
+
+One imperfect way of framing the question is whether GPL-licensed code, when combined in some sense with proprietary code, forms a single modified work such that the proprietary code could be interpreted as being subject to the terms of the GPL. While we haven't yet seen much of that concern directed to Linux containers, we expect more questions to be raised as adoption of containers continues to grow. But it's fairly straightforward to show that containers do _not_ raise new or concerning GPL scope issues.
+
+Statutes and case law provide little help in interpreting a license like the GPL.
On the other hand, many of us give significant weight to the interpretive views of the Free Software Foundation (FSF), the drafter and steward of the GPL, even in the typical case where the FSF is not a copyright holder of the software at issue. In addition to being the author of the license text, the FSF has been engaged for many years in providing commentary and guidance on its licenses to the community. Its views have special credibility and influence based on its public interest mission and leadership in free software policy. + +The FSF's existing guidance on GPL interpretation has relevance for understanding the effects of including GPL and non-GPL code in containers. The FSF has placed emphasis on the process boundary when considering copyleft scope, and on the mechanism and semantics of the communication between multiple software components to determine whether they are closely integrated enough to be considered a single program for GPL purposes. For example, the [GNU Licenses FAQ][1] takes the view that pipes, sockets, and command-line arguments are mechanisms that are normally suggestive of separateness (in the absence of sufficiently "intimate" communications). + +Consider the case of a container in which both GPL code and proprietary code might coexist and execute. A container is, in essence, an isolated userspace stack. In the [OCI container image format][2], code is packaged as a set of filesystem changeset layers, with the base layer normally being a stripped-down conventional Linux distribution without a kernel. As with the userspace of non-containerized Linux distributions, these base layers invariably contain many GPL-licensed packages (both GPLv2 and GPLv3), as well as packages under licenses considered GPL-incompatible, and commonly function as a runtime for proprietary as well as open source applications. The ["mere aggregation" clause][3] in GPLv2 (as well as its counterpart GPLv3 provision on ["aggregates"][4]) shows that this type of combination is generally acceptable, is specifically contemplated under the GPL, and has no effect on the licensing of the two programs, assuming incompatibly licensed components are separate and independent. + +Of course, in a given situation, the relationship between two components may not be "mere aggregation," but the same is true of software running in non-containerized userspace on a Linux system. There is nothing in the technical makeup of containers or container images that suggests a need to apply a special form of copyleft scope analysis. + +It follows that when looking at the relationship between code running in a container and code running outside a container, the "separate and independent" criterion is almost certainly met. The code will run as separate processes, and the whole technical point of using containers is isolation from other software running on the system. + +Now consider the case where two components, one GPL-licensed and one proprietary, are running in separate but potentially interacting containers, perhaps as part of an application designed with a [microservices][5] architecture. In the absence of very unusual facts, we should not expect to see copyleft scope extending across multiple containers. Separate containers involve separate processes. Communication between containers by way of network interfaces is analogous to such mechanisms as pipes and sockets, and a multi-container microservices scenario would seem to preclude what the FSF calls "[intimate][6]" communication by definition. 
The composition of an application using multiple containers may not be dispositive of the GPL scope issue, but it makes the technical boundaries between the components more apparent and provides a strong basis for arguing separateness. Here, too, there is no technical feature of containers that suggests application of a different and stricter approach to copyleft scope analysis. + +A company that is overly concerned with the potential effects of distributing GPL-licensed code might attempt to prohibit its developers from adding any such code to a container image that it plans to distribute. Insofar as the aim is to avoid distributing code under the GPL, this is a dubious strategy. As noted above, the base layers of conventional container images will contain multiple GPL-licensed components. If the company pushes a container image to a registry, there is normally no way it can guarantee that this will not include the base layer, even if it is widely shared. + +On the other hand, the company might decide to embrace containerization as a means of limiting copyleft scope issues by isolating GPL and proprietary code--though one would hope that technical benefits would drive the decision, rather than legal concerns likely based on unfounded anxiety about the GPL. While in a non-containerized setting the relationship between two interacting software components will often be mere aggregation, the evidence of separateness that containers provide may be comforting to those who worry about GPL scope. + +Open source license compliance obligations may arise when sharing container images. But there's nothing technically different or unique about containers that changes the nature of these obligations or makes them harder to satisfy. With respect to copyleft scope, containerization should, if anything, ease the concerns of the extra-cautious. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/containers-gpl-and-copyleft + +作者:[Richard Fontana][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/fontana +[1]:https://www.gnu.org/licenses/gpl-faq.en.html#MereAggregation +[2]:https://github.com/opencontainers/image-spec/blob/master/spec.md +[3]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html#section2 +[4]:https://www.gnu.org/licenses/gpl.html#section5 +[5]:https://www.redhat.com/en/topics/microservices +[6]:https://www.gnu.org/licenses/gpl-faq.en.html#GPLPlugins diff --git a/sources/talk/20180124 Security Chaos Engineering- A new paradigm for cybersecurity.md b/sources/talk/20180124 Security Chaos Engineering- A new paradigm for cybersecurity.md new file mode 100644 index 0000000000..35c89150c8 --- /dev/null +++ b/sources/talk/20180124 Security Chaos Engineering- A new paradigm for cybersecurity.md @@ -0,0 +1,87 @@ +Security Chaos Engineering: A new paradigm for cybersecurity +====== +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_bank_vault_secure_safe.png?itok=YoW93h7C) + +Security is always changing and failure always exists. + +This toxic scenario requires a fresh perspective on how we think about operational security. We must understand that we are often the primary cause of our own security flaws. The industry typically looks at cybersecurity and failure in isolation or as separate matters. 
We believe that our lack of insight and operational intelligence into our own security control failures is one of the most common causes of security incidents and, subsequently, data breaches.
+
+> "Fall seven times, stand up eight." --Japanese proverb
+
+The simple fact is that "to err is human," and humans derive their success as a direct result of the failures they encounter. Their rate of failure, how they fail, and their ability to understand that they failed in the first place are important building blocks to success. Our ability to learn through failure is inherent in the systems we build, the way we operate them, and the security we use to protect them. Yet there has been a lack of focus when it comes to how we approach preventative security measures, and the spotlight has trended toward the evolving attack landscape and the need to buy or build new solutions.
+
+### Security spending is continually rising and so are security incidents
+
+We spend billions on new information security technologies; however, we rarely take a proactive look at whether those security investments perform as expected. This has resulted in a continual increase in security spending on new solutions to keep up with the evolving attacks.
+
+Despite spending more on security, data breaches are continuously getting bigger and more frequent across all industries. We have marched so fast down this path of the "get-ahead-of-the-attacker" strategy that we haven't considered that we may be a primary cause of our own demise. How is it that we are building more and more security measures, but the problem seems to be getting worse? Furthermore, many of the notable data breaches over the past year were not the result of an advanced nation-state or spy-vs.-spy malicious advanced persistent threats (APTs); rather the principal causes of those events were incomplete implementation, misconfiguration, design flaws, and lack of oversight.
+
+The 2017 Ponemon Cost of a Data Breach Study breaks down the [root causes of data breaches][1] into three areas: malicious or criminal attacks, human factors or errors, and system glitches, including both IT and business-process failure. Of the three categories, malicious or criminal attacks make up the largest share (47%), followed by human error (28%), and system glitches (25%). Cybersecurity vendors have historically focused on malicious root causes of data breaches, as it is the largest sole cause, but together human error and system glitches total 53%, a larger share of the overall problem.
+
+What is not often understood, whether due to lack of insight, reporting, or analysis, is that malicious or criminal attacks are often successful due to human error and system glitches. Both human error and system glitches are, at their root, primary markers of the existence of failure. Whether it's IT system failures, failures in process, or failures resulting from humans, it begs the question: "Should we be focusing on finding a method to identify, understand, and address our failures?" After all, it can be an arduous task to predict the next malicious attack, which often requires investment of time to sift threat intelligence, dig through forensic data, or churn threat feeds full of unknown factors and undetermined motives. Failure instrumentation, identification, and remediation mostly consist of things that we know, have the ability to test, and can measure.
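+
+To make that last point concrete, here is a minimal bash sketch (our illustration, not a tool from the article) of proactive failure instrumentation for a single control: a firewall that is supposed to block telnet (TCP port 23) in front of a host. The target host name and the alert hook are placeholder assumptions:
+
+```
+#!/bin/bash
+# Hypothetical control check: TCP/23 is expected to be blocked by the
+# firewall in front of this host. Host name and alerting are placeholders.
+target="bastion.example.internal"
+if timeout 3 bash -c "</dev/tcp/${target}/23" 2>/dev/null; then
+    echo "FAIL: ${target}:23 is reachable -- the firewall control failed silently"
+    # notify-oncall "firewall-control-failure ${target}"  # placeholder alert hook
+    exit 1
+else
+    echo "PASS: ${target}:23 is blocked or closed, as expected"
+fi
+```
+
+Run regularly (from cron, for example), a check like this surfaces a silently failing control long before a security incident does.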
+ +Failures we can analyze consist not only of IT, business, and general human factors but also the way we design, build, implement, configure, operate, observe, and manage security controls. People are the ones designing, building, monitoring, and managing the security controls we put in place to defend against malicious attackers. How often do we proactively instrument what we designed, built, and are operationally managing to determine if the controls are failing? Most organizations do not discover that their security controls were failing until a security incident results from that failure. The worst time to find out your security investment failed is during a security incident at 3 a.m. + +> Security incidents are not detective measures and hope is not a strategy when it comes to operating effective security controls. + +We hypothesize that a large portion of data breaches are caused not by sophisticated nation-state actors or hacktivists, but rather simple things rooted in human error and system glitches. Failure in security controls can arise from poor control placement, technical misconfiguration, gaps in coverage, inadequate testing practices, human error, and numerous other things. + +### The journey into Security Chaos Testing + +Our venture into this new territory of Security Chaos Testing has shifted our thinking about the root cause of many of our notable security incidents and data breaches. + +We were brought together by [Bruce Wong][2], who now works at Stitch Fix with Charles, one of the authors of this article. Prior to Stitch Fix, Bruce was a founder of the Chaos Engineering and System Reliability Engineering (SRE) practices at Netflix, the company commonly credited with establishing the field. Bruce learned about this article's other author, Aaron, through the open source [ChaoSlingr][3] Security Chaos Testing tool project, on which Aaron was a contributor. Aaron was interested in Bruce's perspective on the idea of applying Chaos Engineering to cybersecurity, which led Bruce to connect us to share what we had been working on. As security practitioners, we were both intrigued by the idea of Chaos Engineering and had each begun thinking about how this new method of instrumentation might have a role in cybersecurity. + +Within a short timeframe, we began finishing each other's thoughts around testing and validating security capabilities, which we collectively call "Security Chaos Engineering." We directly challenged many of the concepts we had come to depend on in our careers, such as compensating security controls, defense-in-depth, and how to design preventative security. Quickly we realized that we needed to challenge the status quo "set-it-and-forget-it" model and instead execute on continuous instrumentation and validation of security capabilities. + +Businesses often don't fully understand whether their security capabilities and controls are operating as expected until they are not. We had both struggled throughout our careers to provide measurements on security controls that go beyond simple uptime metrics. Our journey has shown us there is a need for a more pragmatic approach that emphasizes proactive instrumentation and experimentation over blind faith. + +### Defining new terms + +In the security industry, we have a habit of not explaining terms and assuming we are speaking the same language. 
To correct that, here are a few key terms in this new approach:
+
+  * **(Security) Chaos Experiments** are foundationally rooted in the scientific method, in that they seek not to validate what is already known to be true or already known to be false; rather, they are focused on deriving new insights about the current state.
+  * **Security Chaos Engineering** is the discipline of instrumentation, identification, and remediation of failure within security controls through proactive experimentation to build confidence in the system's ability to defend against malicious conditions in production.
+
+
+
+### Security and distributed systems
+
+Consider the evolving nature of modern application design where systems are becoming more and more distributed, ephemeral, and immutable in how they operate. In this shifting paradigm, it is becoming difficult to comprehend the operational state and health of our systems' security. Moreover, how are we ensuring that it remains effective and vigilant as the surrounding environment is changing its parameters, components, and methodologies?
+
+What does it mean to be effective in terms of security controls? After all, a single security capability could easily be implemented in a wide variety of scenarios in which failure may arise from many possible sources. For example, a standard firewall technology may be implemented, placed, managed, and configured differently depending on complexities in the business, web, and data logic.
+
+It is imperative that we not operate our business products and services on the assumption that something works. We must constantly, consistently, and proactively instrument our security controls to ensure they cut the mustard when it matters. This is why Security Chaos Testing is so important. Security Chaos Engineering provides a methodology for experimenting on the security of distributed systems in order to build confidence in their ability to withstand malicious conditions.
+
+In Security Chaos Engineering:
+
+  * Security capabilities must be end-to-end instrumented.
+  * Security must be continuously instrumented to build confidence in the system's ability to withstand malicious conditions.
+  * Readiness of a system's security defenses must be proactively assessed to ensure they are battle-ready and operating as intended.
+  * The security capability toolchain must be instrumented from end to end to drive new insights not only into the effectiveness of the functionality within the toolchain but also into where added value and improvement can be injected.
+  * Practiced instrumentation seeks to identify, detect, and remediate failures in security controls.
+  * The focus is on vulnerability and failure identification, not failure management.
+  * The operational effectiveness of incident management is sharpened.
+
+
+
+As Henry Ford said, "Failure is only the opportunity to begin again, this time more intelligently." Security Chaos Engineering and Security Chaos Testing give us that opportunity.
+
+Would you like to learn more? Join the discussion by following [@aaronrinehart][4] and [@charles_nwatu][5] on Twitter.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/new-paradigm-cybersecurity + +作者:[Aaron Rinehart][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/aaronrinehart +[1]:https://www.ibm.com/security/data-breach +[2]:https://twitter.com/bruce_m_wong?lang=en +[3]:https://github.com/Optum/ChaoSlingr +[4]:https://twitter.com/aaronrinehart +[5]:https://twitter.com/charles_nwatu diff --git a/sources/tech/20090211 Page Cache the Affair Between Memory and Files.md b/sources/tech/20090211 Page Cache the Affair Between Memory and Files.md deleted file mode 100644 index 98c546eb2a..0000000000 --- a/sources/tech/20090211 Page Cache the Affair Between Memory and Files.md +++ /dev/null @@ -1,76 +0,0 @@ -Translating by qhwdw [20090211 Page Cache, the Affair Between Memory and Files][1] -============================================================ - - -Previously we looked at how the kernel [manages virtual memory][2] for a user process, but files and I/O were left out. This post covers the important and often misunderstood relationship between files and memory and its consequences for performance. - -Two serious problems must be solved by the OS when it comes to files. The first one is the mind-blowing slowness of hard drives, and [disk seeks in particular][3], relative to memory. The second is the need to load file contents in physical memory once and share the contents among programs. If you use [Process Explorer][4] to poke at Windows processes, you'll see there are ~15MB worth of common DLLs loaded in every process. My Windows box right now is running 100 processes, so without sharing I'd be using up to ~1.5 GB of physical RAM just for common DLLs. No good. Likewise, nearly all Linux programs need [ld.so][5] and libc, plus other common libraries. - -Happily, both problems can be dealt with in one shot: the page cache, where the kernel stores page-sized chunks of files. To illustrate the page cache, I'll conjure a Linux program named render, which opens file scene.dat and reads it 512 bytes at a time, storing the file contents into a heap-allocated block. The first read goes like this: - -![Reading and the page cache](http://static.duartes.org/img/blogPosts/readFromPageCache.png) - -After 12KB have been read, render's heap and the relevant page frames look thus: - -![Non-mapped file read](http://static.duartes.org/img/blogPosts/nonMappedFileRead.png) - -This looks innocent enough, but there's a lot going on. First, even though this program uses regular read calls, three 4KB page frames are now in the page cache storing part of scene.dat. People are sometimes surprised by this, but all regular file I/O happens through the page cache. In x86 Linux, the kernel thinks of a file as a sequence of 4KB chunks. If you read a single byte from a file, the whole 4KB chunk containing the byte you asked for is read from disk and placed into the page cache. This makes sense because sustained disk throughput is pretty good and programs normally read more than just a few bytes from a file region. The page cache knows the position of each 4KB chunk within the file, depicted above as #0, #1, etc. Windows uses 256KB views analogous to pages in the Linux page cache. 
- -Sadly, in a regular file read the kernel must copy the contents of the page cache into a user buffer, which not only takes cpu time and hurts the [cpu caches][6], but also wastes physical memory with duplicate data. As per the diagram above, the scene.dat contents are stored twice, and each instance of the program would store the contents an additional time. We've mitigated the disk latency problem but failed miserably at everything else. Memory-mapped files are the way out of this madness: - -![Mapped file read](http://static.duartes.org/img/blogPosts/mappedFileRead.png) - -When you use file mapping, the kernel maps your program's virtual pages directly onto the page cache. This can deliver a significant performance boost: [Windows System Programming][7] reports run time improvements of 30% and up relative to regular file reads, while similar figures are reported for Linux and Solaris in [Advanced Programming in the Unix Environment][8]. You might also save large amounts of physical memory, depending on the nature of your application. - -As always with performance, [measurement is everything][9], but memory mapping earns its keep in a programmer's toolbox. The API is pretty nice too, it allows you to access a file as bytes in memory and does not require your soul and code readability in exchange for its benefits. Mind your [address space][10] and experiment with [mmap][11] in Unix-like systems, [CreateFileMapping][12] in Windows, or the many wrappers available in high level languages. When you map a file its contents are not brought into memory all at once, but rather on demand via [page faults][13]. The fault handler [maps your virtual pages][14] onto the page cache after [obtaining][15] a page frame with the needed file contents. This involves disk I/O if the contents weren't cached to begin with. - -Now for a pop quiz. Imagine that the last instance of our render program exits. Would the pages storing scene.dat in the page cache be freed immediately? People often think so, but that would be a bad idea. When you think about it, it is very common for us to create a file in one program, exit, then use the file in a second program. The page cache must handle that case. When you think more about it, why should the kernel ever get rid of page cache contents? Remember that disk is 5 orders of magnitude slower than RAM, hence a page cache hit is a huge win. So long as there's enough free physical memory, the cache should be kept full. It is therefore not dependent on a particular process, but rather it's a system-wide resource. If you run render a week from now and scene.dat is still cached, bonus! This is why the kernel cache size climbs steadily until it hits a ceiling. It's not because the OS is garbage and hogs your RAM, it's actually good behavior because in a way free physical memory is a waste. Better use as much of the stuff for caching as possible. - -Due to the page cache architecture, when a program calls [write()][16] bytes are simply copied to the page cache and the page is marked dirty. Disk I/O normally does not happen immediately, thus your program doesn't block waiting for the disk. On the downside, if the computer crashes your writes will never make it, hence critical files like database transaction logs must be [fsync()][17]ed (though one must still worry about drive controller caches, oy!). Reads, on the other hand, normally block your program until the data is available. 
Kernels employ eager loading to mitigate this problem, an example of which is read ahead where the kernel preloads a few pages into the page cache in anticipation of your reads. You can help the kernel tune its eager loading behavior by providing hints on whether you plan to read a file sequentially or randomly (see [madvise()][18], [readahead()][19], [Windows cache hints][20] ). Linux [does read-ahead][21] for memory-mapped files, but I'm not sure about Windows. Finally, it's possible to bypass the page cache using [O_DIRECT][22] in Linux or [NO_BUFFERING][23] in Windows, something database software often does. - -A file mapping may be private or shared. This refers only to updates made to the contents in memory: in a private mapping the updates are not committed to disk or made visible to other processes, whereas in a shared mapping they are. Kernels use the copy on write mechanism, enabled by page table entries, to implement private mappings. In the example below, both render and another program called render3d (am I creative or what?) have mapped scene.dat privately. Render then writes to its virtual memory area that maps the file: - -![The Copy-On-Write mechanism](http://static.duartes.org/img/blogPosts/copyOnWrite.png) - -The read-only page table entries shown above do not mean the mapping is read only, they're merely a kernel trick to share physical memory until the last possible moment. You can see how 'private' is a bit of a misnomer until you remember it only applies to updates. A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from. Once copy-on-write is done, changes by others are no longer seen. This behavior is not guaranteed by the kernel, but it's what you get in x86 and makes sense from an API perspective. By contrast, a shared mapping is simply mapped onto the page cache and that's it. Updates are visible to other processes and end up in the disk. Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write. - -Dynamically loaded libraries are brought into your program's address space via file mapping. There's nothing magical about it, it's the same private file mapping available to you via regular APIs. Below is an example showing part of the address spaces from two running instances of the file-mapping render program, along with physical memory, to tie together many of the concepts we've seen. - -![Mapping virtual memory to physical memory](http://static.duartes.org/img/blogPosts/virtualToPhysicalMapping.png) - -This concludes our 3-part series on memory fundamentals. I hope the series was useful and provided you with a good mental model of these OS topics. 
- 
---------------------------------------------------------------------------------
-
-via:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
-
-作者:[Gustavo Duarte][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://duartes.org/gustavo/blog/about/
-[1]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
-[2]:https://manybutfinite.com/post/how-the-kernel-manages-your-memory
-[3]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait
-[4]:http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
-[5]:http://ld.so
-[6]:https://manybutfinite.com/post/intel-cpu-caches
-[7]:http://www.amazon.com/Windows-Programming-Addison-Wesley-Microsoft-Technology/dp/0321256190/
-[8]:http://www.amazon.com/Programming-Environment-Addison-Wesley-Professional-Computing/dp/0321525949/
-[9]:https://manybutfinite.com/post/performance-is-a-science
-[10]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory
-[11]:http://www.kernel.org/doc/man-pages/online/pages/man2/mmap.2.html
-[12]:http://msdn.microsoft.com/en-us/library/aa366537(VS.85).aspx
-[13]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2678
-[14]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2436
-[15]:http://lxr.linux.no/linux+v2.6.28/mm/filemap.c#L1424
-[16]:http://www.kernel.org/doc/man-pages/online/pages/man2/write.2.html
-[17]:http://www.kernel.org/doc/man-pages/online/pages/man2/fsync.2.html
-[18]:http://www.kernel.org/doc/man-pages/online/pages/man2/madvise.2.html
-[19]:http://www.kernel.org/doc/man-pages/online/pages/man2/readahead.2.html
-[20]:http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx#caching_behavior
-[21]:http://lxr.linux.no/linux+v2.6.28/mm/filemap.c#L1424
-[22]:http://www.kernel.org/doc/man-pages/online/pages/man2/open.2.html
-[23]:http://msdn.microsoft.com/en-us/library/cc644950(VS.85).aspx
\ No newline at end of file
diff --git a/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md b/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
index bb527d5519..7e3f5708dd 100644
--- a/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
+++ b/sources/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md
@@ -1,3 +1,5 @@
+Translating by jessie-pang
+
 30 Linux System Monitoring Tools Every SysAdmin Should Know
 ======
 
diff --git a/sources/tech/20090724 Top 20 OpenSSH Server Best Security Practices.md b/sources/tech/20090724 Top 20 OpenSSH Server Best Security Practices.md
new file mode 100644
index 0000000000..a7ad346af4
--- /dev/null
+++ b/sources/tech/20090724 Top 20 OpenSSH Server Best Security Practices.md
@@ -0,0 +1,474 @@
+Top 20 OpenSSH Server Best Security Practices
+======
+![OpenSSH Security Tips][1]
+
+OpenSSH is the implementation of the SSH protocol. OpenSSH is recommended for remote login, making backups, remote file transfer via scp or sftp, and much more. SSH is perfect for maintaining the confidentiality and integrity of data exchanged between two networks and systems. However, its main advantage is server authentication, through the use of public key cryptography. From time to time there are [rumors][2] about OpenSSH zero-day exploits. This **page shows how to secure your OpenSSH server running on a Linux or Unix-like system to improve sshd security**.
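+
+Before you begin, confirm which OpenSSH version you are running, since several directives mentioned below are version-dependent. A quick check (commands shown for illustration; output varies by distro):
+```
+$ ssh -V
+$ apt-cache policy openssh-server   ## Debian/Ubuntu ##
+$ yum info openssh-server           ## CentOS/RHEL ##
+```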
+ 
+
+#### OpenSSH defaults
+
+  * TCP port - 22
+  * OpenSSH server config file - sshd_config (located in /etc/ssh/)
+
+
+
+#### 1. Use SSH public key based login
+
+OpenSSH server supports various authentication methods. It is recommended that you use public key based authentication. First, create the key pair using the following ssh-keygen command on your local desktop/laptop:
+
+DSA and RSA 1024 bit or lower ssh keys are considered weak. Avoid them. RSA keys are chosen over ECDSA keys when backward compatibility is a concern with ssh clients. All ssh keys should be either ED25519 or RSA. Do not use any other type.
+
+```
+$ ssh-keygen -t key_type -b bits -C "comment"
+$ ssh-keygen -t ed25519 -C "Login to production cluster at xyz corp"
+$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_aws_$(date +%Y-%m-%d) -C "AWS key for abc corp clients"
+```
+Next, install the public key using the ssh-copy-id command:
+```
+$ ssh-copy-id -i /path/to/public-key-file user@host
+$ ssh-copy-id user@remote-server-ip-or-dns-name
+$ ssh-copy-id vivek@rhel7-aws-server
+```
+When prompted, supply the user's password. Verify that ssh key based login is working for you:
+`$ ssh vivek@rhel7-aws-server`
+[![OpenSSH server security best practices][3]][3]
+For more info on ssh public key auth see:
+
+* [keychain: Set Up Secure Passwordless SSH Access For Backup Scripts][48]
+
+* [sshpass: Login To SSH Server / Provide SSH Password Using A Shell Script][49]
+
+* [How To Setup SSH Keys on a Linux / Unix System][50]
+
+* [How to upload ssh public key to as authorized_key using Ansible DevOPS tool][51]
+
+
+#### 2. Disable root user login
+
+Before we disable root user login, make sure a regular user can gain root privileges. For example, allow the vivek user to run commands as root using the sudo command.
+
+##### How to add vivek user to sudo group on a Debian/Ubuntu server
+
+Allow members of group sudo to execute any command. [Add user vivek to sudo group][4]:
+`$ sudo adduser vivek sudo`
+Verify group membership with the [id command][5]:
+`$ id vivek`
+
+##### How to add vivek user to sudo group on a CentOS/RHEL server
+
+People in the wheel group are allowed to run all commands on a CentOS/RHEL and Fedora Linux server. Use the usermod command to add the user named vivek to the wheel group:
+```
+$ sudo usermod -aG wheel vivek
+$ id vivek
+```
+
+##### Test sudo access and disable root login for ssh
+
+Test it and make sure user vivek can log in as root or run commands as root:
+```
+$ sudo -i
+$ sudo /etc/init.d/sshd status
+$ sudo systemctl status httpd
+```
+Once confirmed, disable root login by adding the following line to sshd_config:
+```
+PermitRootLogin no
+ChallengeResponseAuthentication no
+PasswordAuthentication no
+UsePAM no
+```
+See "[How to disable ssh password login on Linux to increase security][6]" for more info.
+
+#### 3. Disable password based login
+
+All password-based logins must be disabled. Only public key based logins are allowed. Add the following in your sshd_config file:
+```
+AuthenticationMethods publickey
+PubkeyAuthentication yes
+```
+Users of older SSHD versions on CentOS 6.x/RHEL 6.x should use the following setting:
+```
+PubkeyAuthentication yes
+```
+
+#### 4. Limit Users' ssh access
+
+By default, all system users can log in via SSH using their password or public key. Sometimes you create UNIX / Linux user accounts for FTP or email purposes. However, those users can log in to the system using ssh. They will have full access to system tools including compilers and scripting languages such as Perl and Python, which can open network ports and do many other fancy things. To allow only root, vivek and jerry to use the system via SSH, add the following to sshd_config:
+`AllowUsers vivek jerry`
+Alternatively, you can allow all users to login via SSH but deny only a few users, with the following line in sshd_config:
+`DenyUsers root saroj anjali foo`
+You can also [configure Linux PAM][7] to allow or deny login via the sshd server. And you can allow or deny access for a [list of group names][8], as in the sketch below.
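+
+A minimal group-based sketch, assuming you have created a dedicated group named sshusers (the group name is our example, not a convention required by OpenSSH):
+```
+# Allow ssh logins only for members of the sshusers group
+AllowGroups sshusers
+```
+Grant an account ssh access with `sudo usermod -aG sshusers vivek`; remove it from the group to revoke access.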
+
+#### 5. Disable Empty Passwords
+
+You need to explicitly disallow remote login from accounts with empty passwords; update sshd_config with the following line:
+`PermitEmptyPasswords no`
+
+#### 6. Use strong passwords and passphrase for ssh users/keys
+
+It cannot be stressed enough how important it is to use strong user passwords and passphrases for your keys. Brute force attacks work because users pick dictionary-based passwords. You can test user [passwords against a dictionary][9] attack and use the [john the ripper tool][10] to find out existing weak passwords. Here is a sample random password generator (put in your ~/.bashrc):
+```
+genpasswd() {
+    local l=$1
+    [ "$l" == "" ] && l=20
+    tr -dc A-Za-z0-9_ < /dev/urandom | head -c ${l} | xargs
+}
+```
+
+Run it:
+`genpasswd 16`
+Output:
+```
+uw8CnDVMwC6vOKgW
+```
+* [Generating Random Password With mkpasswd / makepasswd / pwgen][52]
+
+* [Linux / UNIX: Generate Passwords][53]
+
+* [Linux Random Password Generator Command][54]
+
+--------------------------------------------------------------------------------
+
+#### 7. Firewall SSH TCP port # 22
+
+You need to firewall ssh TCP port # 22 by updating iptables/ufw/firewall-cmd or pf firewall configurations. Usually, the OpenSSH server should accept connections only from your LAN or other remote WAN sites.
+
+##### Netfilter (Iptables) Configuration
+
+Update [/etc/sysconfig/iptables (Redhat and friends specific file) to accept connections][11] only from 192.168.1.0/24 and 202.54.1.5/29, enter:
+```
+-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 22 -j ACCEPT
+-A RH-Firewall-1-INPUT -s 202.54.1.5/29 -m state --state NEW -p tcp --dport 22 -j ACCEPT
+```
+
+If you've dual-stacked sshd with IPv6, edit /etc/sysconfig/ip6tables (Redhat and friends specific file), enter:
+```
+-A RH-Firewall-1-INPUT -s ipv6network::/ipv6mask -m tcp -p tcp --dport 22 -j ACCEPT
+
+```
+
+Replace ipv6network::/ipv6mask with actual IPv6 ranges.
+
+##### UFW for Debian/Ubuntu Linux
+
+[UFW is an acronym for uncomplicated firewall. It is used for managing a Linux firewall][12] and aims to provide an easy-to-use interface for the user. Use the [following command to accept port 22 from 202.54.1.5/29][13] only:
+`$ sudo ufw allow from 202.54.1.5/29 to any port 22`
+Read "[Linux: 25 Iptables Netfilter Firewall Examples For New SysAdmins][14]" for more info.
+
+##### *BSD PF Firewall Configuration
+
+If you are using the PF firewall, update [/etc/pf.conf][15] as follows:
+```
+pass in on $ext_if inet proto tcp from {192.168.1.0/24, 202.54.1.5/29} to $ssh_server_ip port ssh flags S/SA synproxy state
+```
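+
+##### FirewallD Configuration
+
+On CentOS/RHEL 7 and later, firewalld manages netfilter via the firewall-cmd tool mentioned above. A sketch using a rich rule so that only 192.168.1.0/24 can reach port 22 (the source network is illustrative; adjust it, and the zone, for your setup):
+```
+$ sudo firewall-cmd --permanent --remove-service=ssh
+$ sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="22" protocol="tcp" accept'
+$ sudo firewall-cmd --reload
+```
+Removing the blanket ssh service first matters; otherwise any source can still connect and the rich rule is redundant.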
+
+#### 8. Change SSH Port and limit IP binding
+
+By default, SSH listens on all available interfaces and IP addresses on the system. Limit the ssh port binding and change the ssh port (many brute-forcing scripts only try to connect to TCP port # 22). To bind to the 192.168.1.5 and 202.54.1.5 IPs and port 300, add or correct the following lines in sshd_config:
+```
+Port 300
+ListenAddress 192.168.1.5
+ListenAddress 202.54.1.5
+```
+
+A better approach is to use proactive tools such as fail2ban or denyhosts when you need to accept connections from dynamic WAN IP addresses.
+
+#### 9. Use TCP wrappers (optional)
+
+TCP Wrapper is a host-based networking ACL system, used to filter network access to the Internet. OpenSSH does support TCP wrappers. Just update your /etc/hosts.allow file as follows to allow SSH only from the 192.168.1.2 and 172.16.23.12 IP addresses:
+```
+sshd : 192.168.1.2 172.16.23.12
+```
+
+See this [FAQ about setting and using TCP wrappers][16] under Linux / Mac OS X and UNIX like operating systems.
+
+#### 10. Thwart SSH crackers/brute force attacks
+
+Brute force is a method of defeating a cryptographic scheme by trying a large number of possibilities (combinations of users and passwords) using a single or distributed computer network. To prevent brute force attacks against SSH, use the following software:
+
+  * [DenyHosts][17] is a Python based security tool for SSH servers. It is intended to prevent brute force attacks on SSH servers by monitoring invalid login attempts in the authentication log and blocking the originating IP addresses.
+  * This tutorial explains how to set up [DenyHosts][18] under RHEL / Fedora and CentOS Linux.
+  * [Fail2ban][19] is a similar program that prevents brute force attacks against SSH.
+  * [sshguard][20] protects hosts from brute force attacks against ssh and other services using pf.
+  * [security/sshblock][21] blocks abusive SSH login attempts.
+  * [IPQ BDB filter][22] may be considered a fail2ban lite.
+
+
+
+#### 11. Rate-limit incoming traffic at TCP port # 22 (optional)
+
+Both netfilter and pf provide rate-limiting options to perform simple throttling on incoming connections on port # 22.
+
+##### Iptables Example
+
+The following example will drop incoming connections that make more than 5 connection attempts on port 22 within 60 seconds:
+```
+#!/bin/bash
+IPT="/sbin/iptables"   # path to the iptables binary; adjust for your distro
+inet_if=eth1
+ssh_port=22
+$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --set
+$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --update --seconds 60 --hitcount 5 -j DROP
+```
+
+Call the above script from your iptables scripts. Another config option:
+```
+$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state NEW -m limit --limit 3/min --limit-burst 3 -j ACCEPT
+$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT
+$IPT -A OUTPUT -o ${inet_if} -p tcp --sport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT
+# another one line example
+# $IPT -A INPUT -i ${inet_if} -m state --state NEW,ESTABLISHED,RELATED -p tcp --dport 22 -m limit --limit 5/minute --limit-burst 5 -j ACCEPT
+```
+
+See the iptables man page for more details.
+
+##### *BSD PF Example
+
+The following limits the maximum number of connections per source to 20 and rate limits the number of connections to 15 in a 5 second span. If anyone breaks our rules, add them to our abusive_ips table and block them from making any further connections. Finally, the flush keyword kills all states created by the matching rule which originate from the host which exceeds these limits.
+```
+sshd_server_ip = "202.54.1.5"
+table <abusive_ips> persist
+block in quick from <abusive_ips>
+pass in on $ext_if proto tcp to $sshd_server_ip port ssh flags S/SA keep state (max-src-conn 20, max-src-conn-rate 15/5, overload <abusive_ips> flush)
+```
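+
+If you would rather not hand-roll rate limiting, the fail2ban tool from tip # 10 achieves a similar effect by banning offending IP addresses after repeated failures. A minimal jail sketch (values are illustrative; check the defaults packaged by your distro):
+```
+# /etc/fail2ban/jail.local (sketch)
+[sshd]
+enabled  = true
+port     = ssh
+# Debian/Ubuntu log path; CentOS/RHEL typically uses /var/log/secure
+logpath  = /var/log/auth.log
+maxretry = 5
+bantime  = 3600
+```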
+
+#### 12. Use port knocking (optional)
+
+[Port knocking][23] is a method of externally opening ports on a firewall by generating a connection attempt on a set of prespecified closed ports. Once a correct sequence of connection attempts is received, the firewall rules are dynamically modified to allow the host which sent the connection attempts to connect to the specific port(s). A sample port knocking example for ssh using iptables:
+```
+$IPT -N stage1
+$IPT -A stage1 -m recent --remove --name knock
+$IPT -A stage1 -p tcp --dport 3456 -m recent --set --name knock2
+
+$IPT -N stage2
+$IPT -A stage2 -m recent --remove --name knock2
+$IPT -A stage2 -p tcp --dport 2345 -m recent --set --name heaven
+
+$IPT -N door
+$IPT -A door -m recent --rcheck --seconds 5 --name knock2 -j stage2
+$IPT -A door -m recent --rcheck --seconds 5 --name knock -j stage1
+$IPT -A door -p tcp --dport 1234 -m recent --set --name knock
+
+$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
+$IPT -A INPUT -p tcp --dport 22 -m recent --rcheck --seconds 5 --name heaven -j ACCEPT
+$IPT -A INPUT -p tcp --syn -j door
+```
+
+
+For more info see:
+[Debian / Ubuntu: Set Port Knocking With Knockd and Iptables][55]
+
+#### 13. Configure idle log out timeout interval
+
+A user can log in to the server via ssh, and you can set an idle timeout interval to avoid unattended ssh sessions. Open sshd_config and make sure the following values are configured:
+```
+ClientAliveInterval 300
+ClientAliveCountMax 0
+```
+You are setting an idle timeout interval in seconds (300 secs == 5 minutes). After this interval has passed, the idle user will be automatically kicked out (read as logged out). See [how to automatically log BASH / TCSH / SSH users][24] out after a period of inactivity for more details.
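+
+On the shell side, you can complement this with the TMOUT variable so idle interactive shells exit on their own (a sketch; the file location is just one common choice):
+```
+# /etc/profile.d/autologout.sh (example location)
+# Terminate idle interactive bash/ksh sessions after 300 seconds
+readonly TMOUT=300
+export TMOUT
+```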
+
+#### 14. Enable a warning banner for ssh users
+
+Set a warning banner by updating sshd_config with the following line:
+`Banner /etc/issue`
+Sample /etc/issue file:
+```
+----------------------------------------------------------------------------------------------
+You are accessing a XYZ Government (XYZG) Information System (IS) that is provided for authorized use only.
+By using this IS (which includes any device attached to this IS), you consent to the following conditions:
+
++ The XYZG routinely intercepts and monitors communications on this IS for purposes including, but not limited to,
+penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM),
+law enforcement (LE), and counterintelligence (CI) investigations.
+
++ At any time, the XYZG may inspect and seize data stored on this IS.
+
++ Communications using, or data stored on, this IS are not private, are subject to routine monitoring,
+interception, and search, and may be disclosed or used for any XYZG authorized purpose.
+
++ This IS includes security measures (e.g., authentication and access controls) to protect XYZG interests--not
+for your personal benefit or privacy.
+
++ Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching
+or monitoring of the content of privileged communications, or work product, related to personal representation
+or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work
+product are private and confidential. See User Agreement for details.
+----------------------------------------------------------------------------------------------
+
+```
+
+Above is a standard sample; consult your legal team for specific user agreement and legal notice details.
+
+#### 15. Disable .rhosts files (verification)
+
+Don't let sshd read the user's ~/.rhosts and ~/.shosts files. Update sshd_config with the following settings:
+`IgnoreRhosts yes`
+SSH can emulate the behavior of the obsolete rsh command; just disable insecure access via RSH.
+
+#### 16. Disable host-based authentication (verification)
+
+To disable host-based authentication, update sshd_config with the following option:
+`HostbasedAuthentication no`
+
+#### 17. Patch OpenSSH and operating systems
+
+It is recommended that you use tools such as [yum][25], [apt-get][26], [freebsd-update][27] and others to keep systems up to date with the latest security patches.
+
+#### 18. Chroot OpenSSH (Lock down users to their home directories)
+
+By default, users are allowed to browse server directories such as /etc/, /bin and so on. You can protect ssh using an OS-based chroot or [special tools such as rssh][28]. With the release of OpenSSH 4.8p1 or 4.9p1, you no longer have to rely on third-party hacks such as rssh or complicated chroot(1) setups to lock users to their home directories. See [this blog post][29] about the new ChrootDirectory directive to lock down users to their home directories.
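+
+A minimal sftp-only sketch of that directive in sshd_config (the group name sftponly is our example; note that the chroot target must be owned by root and not writable by group or others):
+```
+Match Group sftponly
+    ChrootDirectory /home/%u
+    ForceCommand internal-sftp
+    AllowTcpForwarding no
+    X11Forwarding no
+```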
+
+#### 18. Chroot OpenSSH (Lock down users to their home directories)
+
+By default users are allowed to browse the server directories such as /etc/, /bin and so on. You can protect ssh using an OS-based chroot or [special tools such as rssh][28]. With the release of OpenSSH 4.8p1 or 4.9p1, you no longer have to rely on third-party hacks such as rssh or complicated chroot(1) setups to lock users to their home directories. See [this blog post][29] about the new ChrootDirectory directive to lock down users to their home directories.
+
+#### 19. Disable OpenSSH server on client computer
+
+Workstations and laptops can work without an OpenSSH server. If you do not provide the remote login and file transfer capabilities of SSH, disable and remove the SSHD server. CentOS / RHEL users can disable and remove openssh-server with the [yum command][30]:
+`$ sudo yum erase openssh-server`
+Debian / Ubuntu Linux users can disable and remove the same with the [apt command][31]/[apt-get command][32]:
+`$ sudo apt-get remove openssh-server`
+You may need to update your iptables script to remove the ssh exception rule. Under CentOS / RHEL / Fedora edit the files /etc/sysconfig/iptables and /etc/sysconfig/ip6tables. Once done [restart the iptables][33] service:
+```
+# service iptables restart
+# service ip6tables restart
+```
+
+#### 20. Bonus tips from Mozilla
+
+If you are using OpenSSH version 6.7+ or newer try the [following][34] settings:
+```
+#################[ WARNING ]########################
+# Do not use any setting blindly. Read sshd_config #
+# man page. You must understand cryptography to    #
+# tweak following settings. Otherwise use defaults #
+####################################################
+
+# Supported HostKey algorithms by order of preference.
+HostKey /etc/ssh/ssh_host_ed25519_key
+HostKey /etc/ssh/ssh_host_rsa_key
+HostKey /etc/ssh/ssh_host_ecdsa_key
+
+# Specifies the available KEX (Key Exchange) algorithms.
+KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
+
+# Specifies the ciphers allowed
+Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
+
+# Specifies the available MAC (message authentication code) algorithms
+MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
+
+# LogLevel VERBOSE logs user's key fingerprint on login. Needed to have a clear audit track of which key was used to log in.
+LogLevel VERBOSE
+
+# Log sftp level file access (read/write/etc.) that would not be easily logged otherwise.
+Subsystem sftp /usr/lib/ssh/sftp-server -f AUTHPRIV -l INFO
+```
+
+You can grab the list of ciphers and algorithms supported by your OpenSSH server using the following commands:
+```
+$ ssh -Q cipher
+$ ssh -Q cipher-auth
+$ ssh -Q mac
+$ ssh -Q kex
+$ ssh -Q key
+```
+[![OpenSSH Security Tutorial Query Ciphers and algorithms choice][35]][35]
+
+#### How do I test the sshd_config file and restart/reload my SSH server?
+
+To [check the validity of the configuration file and sanity of the keys][36] for any errors before restarting sshd, run:
+`$ sudo sshd -t`
+Extended test mode:
+`$ sudo sshd -T`
+Finally, [restart sshd on Linux or Unix-like systems][37] as per your distro version:
+```
+$ [sudo systemctl restart ssh][38] ## Debian/Ubuntu Linux ##
+$ [sudo systemctl restart sshd.service][39] ## CentOS/RHEL/Fedora Linux ##
+$ doas /etc/rc.d/sshd restart ## OpenBSD ##
+$ sudo service sshd restart ## FreeBSD ##
+```
+
+#### Other suggestions
+
+ 1. [Tighter SSH security with 2FA][40] - Multi-factor authentication can be enabled with the [OATH Toolkit][41] or [DuoSecurity][42].
+ 2. [Use keychain based authentication][43] - keychain is a special bash script designed to make key-based authentication incredibly convenient and flexible. It offers various security benefits over passphrase-free keys.
+
+#### See also:
+
+ * The [official OpenSSH][44] project.
+ * Man pages: sshd(8), ssh(1), ssh-add(1), ssh-agent(1)
+
+If you have a technique or handy software not mentioned here, please share in the comments below to help your fellow readers keep their OpenSSH based servers secure.
+
+#### About the author
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][45], [Facebook][46], [Google+][47].
+ +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/tips/linux-unix-bsd-openssh-server-best-practices.html + +作者:[Vivek Gite][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/media/new/tips/2009/07/openSSH_logo.png +[2]:https://isc.sans.edu/diary/OpenSSH+Rumors/6742 +[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2009/07/OpenSSH-server-security-best-practices.png +[4]:https://www.cyberciti.biz/faq/how-to-create-a-sudo-user-on-ubuntu-linux-server/ +[5]:https://www.cyberciti.biz/faq/unix-linux-id-command-examples-usage-syntax/ (See Linux/Unix id command examples for more info) +[6]:https://www.cyberciti.biz/faq/how-to-disable-ssh-password-login-on-linux/ +[7]:https://www.cyberciti.biz/tips/linux-pam-configuration-that-allows-or-deny-login-via-the-sshd-server.html +[8]:https://www.cyberciti.biz/tips/openssh-deny-or-restrict-access-to-users-and-groups.html +[9]:https://www.cyberciti.biz/tips/linux-check-passwords-against-a-dictionary-attack.html +[10]:https://www.cyberciti.biz/faq/unix-linux-password-cracking-john-the-ripper/ +[11]:https://www.cyberciti.biz/faq/rhel-fedorta-linux-iptables-firewall-configuration-tutorial/ +[12]:https://www.cyberciti.biz/faq/howto-configure-setup-firewall-with-ufw-on-ubuntu-linux/ +[13]:https://www.cyberciti.biz/faq/ufw-allow-incoming-ssh-connections-from-a-specific-ip-address-subnet-on-ubuntu-debian/ +[14]:https://www.cyberciti.biz/tips/linux-iptables-examples.html +[15]:https://bash.cyberciti.biz/firewall/pf-firewall-script/ +[16]:https://www.cyberciti.biz/faq/tcp-wrappers-hosts-allow-deny-tutorial/ +[17]:https://www.cyberciti.biz/faq/block-ssh-attacks-with-denyhosts/ +[18]:https://www.cyberciti.biz/faq/rhel-linux-block-ssh-dictionary-brute-force-attacks/ +[19]:https://www.fail2ban.org +[20]:https://sshguard.sourceforge.net/ +[21]:http://www.bsdconsulting.no/tools/ +[22]:https://savannah.nongnu.org/projects/ipqbdb/ +[23]:https://en.wikipedia.org/wiki/Port_knocking +[24]:https://www.cyberciti.biz/faq/linux-unix-login-bash-shell-force-time-outs/ +[25]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ +[26]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html +[27]:https://www.cyberciti.biz/tips/howto-keep-freebsd-system-upto-date.html +[28]:https://www.cyberciti.biz/tips/rhel-centos-linux-install-configure-rssh-shell.html +[29]:https://www.debian-administration.org/articles/590 +[30]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info) +[31]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info) +[32]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info) +[33]:https://www.cyberciti.biz/faq/howto-rhel-linux-open-port-using-iptables/ +[34]:https://wiki.mozilla.org/Security/Guidelines/OpenSSH +[35]:https://www.cyberciti.biz/tips/wp-content/uploads/2009/07/OpenSSH-Security-Tutorial-Query-Ciphers-and-algorithms-choice.jpg +[36]:https://www.cyberciti.biz/tips/checking-openssh-sshd-configuration-syntax-errors.html +[37]:https://www.cyberciti.biz/faq/howto-restart-ssh/ +[38]:https://www.cyberciti.biz/faq/howto-start-stop-ssh-server/ (Restart 
sshd on a Debian/Ubuntu Linux)
+[39]:https://www.cyberciti.biz/faq/centos-stop-start-restart-sshd-command/ (Restart sshd on a CentOS/RHEL/Fedora Linux)
+[40]:https://www.cyberciti.biz/open-source/howto-protect-linux-ssh-login-with-google-authenticator/
+[41]:http://www.nongnu.org/oath-toolkit/
+[42]:https://duo.com
+[43]:https://www.cyberciti.biz/faq/ssh-passwordless-login-with-keychain-for-scripts/
+[44]:https://www.openssh.com/
+[45]:https://twitter.com/nixcraft
+[46]:https://facebook.com/nixcraft
+[47]:https://plus.google.com/+CybercitiBiz
+[48]:https://www.cyberciti.biz/faq/ssh-passwordless-login-with-keychain-for-scripts/
+[49]:https://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
+[50]:https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/
+[51]:https://www.cyberciti.biz/faq/how-to-upload-ssh-public-key-to-as-authorized_key-using-ansible/
+[52]:https://www.cyberciti.biz/faq/generating-random-password/
+[53]:https://www.cyberciti.biz/faq/linux-unix-generating-passwords-command/
+[54]:https://www.cyberciti.biz/faq/linux-random-password-generator/
+[55]:https://www.cyberciti.biz/faq/debian-ubuntu-linux-iptables-knockd-port-knocking-tutorial/
diff --git a/sources/tech/20091104 Linux-Unix App For Prevention Of RSI (Repetitive Strain Injury).md b/sources/tech/20091104 Linux-Unix App For Prevention Of RSI (Repetitive Strain Injury).md
new file mode 100644
index 0000000000..0adea8a54c
--- /dev/null
+++ b/sources/tech/20091104 Linux-Unix App For Prevention Of RSI (Repetitive Strain Injury).md
@@ -0,0 +1,140 @@
+Linux/Unix App For Prevention Of RSI (Repetitive Strain Injury)
+======
+![workrave-image][1]
+
+[A repetitive strain injury][2] (RSI) is an occupational overuse syndrome, non-specific arm pain, or work-related upper limb disorder. RSI is caused by overusing the hands to perform a repetitive task, such as typing, writing, or clicking a mouse. Unfortunately, most people do not understand what RSI is or how dangerous it can be. You can easily prevent RSI using open source software called Workrave.
+
+### What are the symptoms of RSI?
+
+I'm quoting from this [page][3]. Do you experience:
+
+ 1. Fatigue or lack of endurance?
+ 2. Weakness in the hands or forearms?
+ 3. Tingling, numbness, or loss of sensation?
+ 4. Heaviness: Do your hands feel like dead weight?
+ 5. Clumsiness: Do you keep dropping things?
+ 6. Lack of strength in your hands? Is it harder to open jars? Cut vegetables?
+ 7. Lack of control or coordination?
+ 8. Chronically cold hands?
+ 9. Heightened awareness? Just being slightly more aware of a body part can be a clue that something is wrong.
+ 10. Hypersensitivity?
+ 11. Frequent self-massage (subconsciously)?
+ 12. Sympathy pains? Do your hands hurt when someone else talks about their hand pain?
+
+### How to reduce your risk of developing RSI
+
+ * Take breaks every 30 minutes or so when using your computer. Use software such as Workrave to prevent RSI.
+ * Regular exercise can prevent all sorts of injuries, including RSI.
+ * Use good posture. Adjust your computer desk and chair to support the muscles necessary for good posture.
+
+### Workrave
+
+Workrave is a free open source software application intended to prevent computer users from developing RSI or myopia. The software periodically locks the screen while an animated character, "Miss Workrave," walks the user through various stretching exercises and urges them to take a coffee break. The program frequently alerts you to take micro-pauses and rest breaks, and restricts you to your daily limit. The program works under MS-Windows, Linux, and UNIX-like operating systems.
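+
+If you just want a bare-bones reminder without a full GUI applet, a crude sketch along these lines implements the 30-minute break rule from the list above (notify-send is a standard desktop notification tool; this is no substitute for Workrave's guided exercises):
+```
+#!/bin/bash
+# Crude fallback: remind yourself to take a break every 30 minutes.
+while sleep 1800
+do
+  notify-send "Break time" "Step away from the keyboard and stretch."
+done
+```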
+
+#### Install workrave
+
+Type the following [apt command][4]/[apt-get command][5] under Debian / Ubuntu Linux:
+`$ sudo apt-get install workrave`
+Fedora Linux users should type the following dnf command:
+`$ sudo dnf install workrave`
+RHEL/CentOS Linux users should enable the EPEL repo and install it using the [yum command][6]:
+```
+### [ **tested on a CentOS/RHEL 7.x and clones** ] ###
+$ sudo yum install epel-release
+$ sudo yum install https://rpms.remirepo.net/enterprise/remi-release-7.rpm
+$ sudo yum install workrave
+```
+Arch Linux users type the following pacman command to install it:
+`$ sudo pacman -S workrave`
+FreeBSD users can install it using the following pkg command:
+`# pkg install workrave`
+OpenBSD users can install it using the following pkg_add command:
+```
+$ doas pkg_add workrave
+```
+
+#### How to configure workrave
+
+Workrave works as an applet, which is a small application whose user interface resides within a panel. You need to add workrave to the panel to control the behavior and appearance of the software.
+
+##### Adding a New Workrave Object To Panel
+
+ * Right-click on a vacant space on a panel to open the panel popup menu.
+ * Choose Add to Panel.
+ * The Add to Panel dialog opens. The available panel objects are listed alphabetically, with launchers at the top. Select the workrave applet and click on the Add button.
+
+![Fig.01: Adding an Object \(Workrave\) to a Panel][7]
+Fig.01: Adding an Object (Workrave) to a Panel
+
+##### How Do I Modify Properties Of Workrave Software?
+
+To modify the properties of the workrave object, perform the following steps:
+
+ * Right-click on the workrave object to open the panel object popup.
+ * Choose Preferences. Use the Properties dialog to modify the properties as required.
+
+![](https://www.cyberciti.biz/media/new/tips/2009/11/linux-gnome-workwave-preferences-.png)
+Fig.02: Modifying the Properties of The Workrave Software
+
+#### Workrave in Action
+
+The main window shows the time remaining until it suggests a pause. The window can be closed and you will see the time remaining on the panel itself:
+![Fig.03: Time remaining counter ][8]
+Fig.03: Time remaining counter
+
+![Fig.04: Miss Workrave - an animated character walks you through various stretching exercises][9]
+Fig.04: Miss Workrave - an animated character walks you through various stretching exercises
+
+The break prelude window, bugging you to take a micro-pause:
+![Fig.05: Time for a micro-pause reminder ][10]
+Fig.05: Time for a micro-pause reminder
+
+![Fig.06: You can skip a micro-break ][11]
+Fig.06: You can skip a micro-break
+
+##### References:
+
+ 1. [Workrave project][12] home page.
+ 2. [pokoy][13] - a lightweight daemon that helps prevent RSI and other computer-related stress.
+ 3. [A Pomodoro][14] timer for GNOME 3.
+ 4. [RSI][2] on Wikipedia.
+
+### About the author
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][15], [Facebook][16], [Google+][17].
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/tips/repetitive-strain-injury-prevention-software.html
+
+作者:[Vivek Gite][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz/
+[1]:https://www.cyberciti.biz/media/new/tips/2009/11/workrave-image.jpg (workrave-image)
+[2]:https://en.wikipedia.org/wiki/Repetitive_strain_injury
+[3]:https://web.eecs.umich.edu/~cscott/rsi.html#symptoms
+[4]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
+[5]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
+[6]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
+[7]:https://www.cyberciti.biz/media/new/tips/2009/11/add-workwave-to-panel.png (Adding an Object (Workrave) to a Gnome Panel)
+[8]:https://www.cyberciti.biz/media/new/tips/2009/11/screenshot-workrave.png (Workrave main window shows the time remaining until it suggests a pause.)
+[9]:https://www.cyberciti.biz/media/new/tips/2009/11/miss-workrave.png (Miss Workrave Software character walks you through various RSI stretching exercises )
+[10]:https://www.cyberciti.biz/media/new/tips/2009/11/time-for-micro-pause.gif (Workrave RSI Software Time for a micro-pause reminder )
+[11]:https://www.cyberciti.biz/media/new/tips/2009/11/Micro-break.png (Workrave RSI Software Micro-break )
+[12]:http://www.workrave.org/
+[13]:https://github.com/ttygde/pokoy
+[14]:http://gnomepomodoro.org
+[15]:https://twitter.com/nixcraft
+[16]:https://facebook.com/nixcraft
+[17]:https://plus.google.com/+CybercitiBiz
diff --git a/sources/tech/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md b/sources/tech/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md
new file mode 100644
index 0000000000..d350bd07b8
--- /dev/null
+++ b/sources/tech/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md
@@ -0,0 +1,383 @@
+10 Tools To Add Some Spice To Your UNIX/Linux Shell Scripts
+======
+There are some misconceptions that shell scripts are only for a CLI environment. You can efficiently use various tools to write GUI and network (socket) scripts under KDE or Gnome desktops. Shell scripts can make use of some of the GUI widgets (menus, warning boxes, progress bars, etc.). You can always control the final output, cursor position on the screen, various output effects, and more. With the following tools, you can build powerful, interactive, user-friendly UNIX / Linux bash shell scripts.
+
+Creating a GUI application is not an expensive task, but one that takes time and patience. Luckily, both UNIX and Linux ship with plenty of tools to write beautiful GUI scripts. The following tools are tested on FreeBSD and Linux operating systems but should work under other UNIX-like operating systems.
+
+### #1: notify-send Command
+
+The notify-send command allows you to send desktop notifications to the user via a notification daemon from the command line. This is useful to inform the desktop user about an event or display some form of information without getting in the user's way.
You need to install the following package on a Debian/Ubuntu Linux using the [apt command][1]/[apt-get command][2]:
+`$ sudo apt-get install libnotify-bin`
+CentOS/RHEL users try the following [yum command][3]:
+`$ sudo yum install libnotify`
+Fedora Linux users type the following dnf command:
+`$ sudo dnf install libnotify`
+In this example, send a simple desktop notification from the command line, enter:
+```
+### send some notification ##
+notify-send "rsnapshot done :)"
+```
+
+Sample outputs:
+![Fig:01: notify-send in action ][4]
+Here is another code with additional options:
+```
+....
+alert=18000
+live=$(lynx --dump http://money.rediff.com/ | grep 'BSE LIVE' | awk '{ print $5}' | sed 's/,//g;s/\.[0-9]*//g')
+[ $notify_counter -eq 0 ] && [ $live -ge $alert ] && { notify-send -t 5000 -u low -i gtk-dialog-info "BSE Sensex touched 18k"; notify_counter=1; }
+...
+```
+
+Sample outputs:
+![Fig.02: notify-send with timeouts and other options][5]
+Where,
+
+ * -t 5000 : Specifies the timeout in milliseconds (5000 milliseconds = 5 seconds).
+ * -u low : Set the urgency level (i.e. low, normal, or critical).
+ * -i gtk-dialog-info : Set an icon filename or stock icon to display (you can set a path as -i /path/to/your-icon.png).
+
+For more information on the use of the notify-send utility, please refer to the notify-send man page, viewable by typing man notify-send from the command line:
+```
+man notify-send
+```
+
+### #2: tput Command
+
+The tput command is used to set terminal features. With tput you can:
+
+ * Move the cursor around the screen.
+ * Get information about the terminal.
+ * Set colors (background and foreground).
+ * Set bold mode.
+ * Set reverse mode and much more.
+
+Here is a sample code:
+```
+#!/bin/bash
+
+# clear the screen
+tput clear
+
+# Move cursor to screen location X,Y (top left is 0,0)
+tput cup 3 15
+
+# Set a foreground colour using ANSI escape
+tput setaf 3
+echo "XYX Corp LTD."
+tput sgr0
+
+tput cup 5 17
+# Set reverse video mode
+tput rev
+echo "M A I N - M E N U"
+tput sgr0
+
+tput cup 7 15
+echo "1. User Management"
+
+tput cup 8 15
+echo "2. Service Management"
+
+tput cup 9 15
+echo "3. Process Management"
+
+tput cup 10 15
+echo "4. Backup"
+
+# Set bold mode
+tput bold
+tput cup 12 15
+read -p "Enter your choice [1-4] " choice
+
+tput clear
+tput sgr0
+tput rc
+```
+
+Sample outputs:
+![Fig.03: tput in action][6]
+For more detail concerning the tput command, see the following man pages:
+```
+man 5 terminfo
+man tput
+```
+
+### #3: setleds Command
+
+The setleds command allows you to set the keyboard LEDs. In this example, set NumLock on:
+```
+setleds -D +num
+```
+
+To turn NumLock off, enter:
+```
+setleds -D -num
+```
+
+ * -caps : Clear CapsLock.
+ * +caps : Set CapsLock.
+ * -scroll : Clear ScrollLock.
+ * +scroll : Set ScrollLock.
+
+See the setleds command man page for more information and options:
+`man setleds`
+
+### #4: zenity Command
+
+The [zenity command will display GTK+ dialog boxes][7], and return the user's input. This allows you to present information, and ask for information from the user, from all manner of shell scripts. Here is a sample GUI client for the whois directory service for a given domain name:
+
+```shell
+#!/bin/bash
+# Get domain name
+_zenity="/usr/bin/zenity"
+_out="/tmp/whois.output.$$"
+domain=$(${_zenity} --title "Enter domain" \
+ --entry --text "Enter the domain you would like to see whois info" )
+
+if [ $? -eq 0 ]
then
+ # Display a progress dialog while searching whois database
+ whois $domain | tee >(${_zenity} --width=200 --height=100 \
+ --title="whois" --progress \
+ --pulsate --text="Searching domain info..." \
+ --auto-kill --auto-close \
+ --percentage=10) >${_out}
+
+ # Display back output
+ ${_zenity} --width=800 --height=600 \
+ --title "Whois info for $domain" \
+ --text-info --filename="${_out}"
+else
+ ${_zenity} --error \
+ --text="No input provided"
+fi
+```
+
+Sample outputs:
+![Fig.04: zenity in Action][8]
+See the zenity man page for more information and all other supported GTK+ widgets:
+```
+zenity --help
+man zenity
+```
+
+### #5: kdialog Command
+
+kdialog is just like zenity but it is designed for KDE desktop / Qt apps. You can display dialogs using kdialog. The following will display a message on screen:
+```
+kdialog --dontagain myscript:nofilemsg --msgbox "File: '~/.backup/config' not found."
+```
+
+Sample outputs:
+![Fig.05: Suppressing the display of a dialog ][9]
+
+See the [shell scripting with KDE Dialogs][10] tutorial for more information.
+
+### #6: Dialog
+
+[Dialog is an application used in shell scripts][11] which displays text user interface widgets. It uses the curses or ncurses library. Here is a sample code:
+```
+#!/bin/bash
+dialog --title "Delete file" \
+--backtitle "Linux Shell Script Tutorial Example" \
+--yesno "Are you sure you want to permanently delete \"/tmp/foo.txt\"?" 7 60
+
+# Get exit status
+# 0 means user hit [yes] button.
+# 1 means user hit [no] button.
+# 255 means user hit [Esc] key.
+response=$?
+case $response in
+ 0) echo "File deleted.";;
+ 1) echo "File not deleted.";;
+ 255) echo "[ESC] key pressed.";;
+esac
+```
+
+See the dialog man page for details:
+`man dialog`
+
+#### A Note About Other User Interface Widgets Tools
+
+UNIX and Linux come with lots of other tools to display and control apps from the command line, and shell scripts can make use of some of the KDE / Gnome / X widget set:
+
+ * **gmessage** - a GTK-based xmessage clone.
+ * **xmessage** - display a message or query in a window (X-based /bin/echo)
+ * **whiptail** - display dialog boxes from shell scripts (see the sketch below)
+ * **python-dialog** - Python module for making simple Text/Console-mode user interfaces
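+
+Of these, whiptail is typically preinstalled on Debian-based systems (the installer itself uses it). A minimal sketch of a yes/no box, where exit status 0 means the user picked Yes:
+```
+#!/bin/bash
+# whiptail yes/no box: message text, height 8, width 40
+if whiptail --title "Confirm delete" --yesno "Delete /tmp/foo.txt?" 8 40
+then
+ echo "File deleted."
+else
+ echo "File not deleted."
+fi
+```
+whiptail understands a large subset of dialog's options, so the dialog example above ports over with few changes.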
+
+### #7: logger Command
+
+The logger command writes entries in the system log file such as /var/log/messages. It provides a shell command interface to the syslog system log module:
+```
+logger "MySQL database backup failed."
+tail -f /var/log/messages
+logger -t mysqld -p daemon.error "Database Server failed"
+tail -f /var/log/syslog
+```
+
+Sample outputs:
+```
+Apr 20 00:11:45 vivek-desktop kernel: [38600.515354] CPU0: Temperature/speed normal
+Apr 20 00:12:20 vivek-desktop mysqld: Database Server failed
+```
+
+See howto [write messages to a syslog / log file][12] for more information. Alternatively, you can see the logger man page for details:
+`man logger`
+
+### #8: setterm Command
+
+The setterm command can set various terminal attributes. In this example, force the screen to blank in 15 minutes. Monitor standby will occur at 60 minutes:
+```
+setterm -blank 15 -powersave powerdown -powerdown 60
+```
+
+In this example show underlined text in an xterm window:
+```
+setterm -underline on;
+echo "Add Your Important Message Here"
+setterm -underline off
+```
+
+Another useful option is to turn the cursor off:
+```
+setterm -cursor off
+```
+
+Turn it on:
+```
+setterm -cursor on
+```
+
+See the setterm command man page for details:
+`man setterm`
+
+### #9: smbclient: Sending Messages To MS-Windows Workstations
+
+The smbclient command can talk to an SMB/CIFS server. It can send a message to selected users or all users on MS-Windows systems:
+```
+smbclient -M WinXPPro <<EOF
+Server going down in 5 minutes. Save your work now!
+EOF
+```
+
+See how to [send a message to an MS-Windows workstation][13] for more info.
+
+### #10: Bash Socket Programming With /dev/tcp
+
+Under bash you can open a TCP or UDP socket via the /dev/tcp (or /dev/udp) pseudo-device. In this example, find out if TCP port 25 is open or not:
+```
+(echo >/dev/tcp/localhost/25) &>/dev/null && echo "TCP port 25 open" || echo "TCP port 25 close"
+```
+
+You can use a [bash loop and find out open ports][14] with this snippet:
+```
+echo "Scanning TCP ports..."
+for p in {1..1023}
+do
+ (echo >/dev/tcp/localhost/$p) >/dev/null 2>&1 && echo "$p open"
+done
+```
+
+Sample outputs:
+```
+Scanning TCP ports...
+22 open
+53 open
+80 open
+139 open
+445 open
+631 open
+```
+
+In this example, your bash script acts as an HTTP client:
+```
+#!/bin/bash
+exec 3<> /dev/tcp/${1:-www.cyberciti.biz}/80
+
+printf "GET / HTTP/1.0\r\n" >&3
+printf "Accept: text/html, text/plain\r\n" >&3
+printf "Accept-Language: en\r\n" >&3
+printf "User-Agent: nixCraft_BashScript v.%s\r\n" "${BASH_VERSION}" >&3
+printf "\r\n" >&3
+
+while read LINE <&3
+do
+ # do something on $LINE
+ # or send $LINE to grep or awk for grabbing data
+ # or simply display back data with echo command
+ echo $LINE
+done
+```
+
+See the bash man page for more information:
+`man bash`
+
+### A Note About GUI Tools and Cronjob
+
+You need to request the local display/input service using the export DISPLAY=[user's machine]:0 command if you are [using a cronjob][15] to call your scripts. For example, call /home/vivek/scripts/monitor.stock.sh as follows, which uses the zenity tool:
+`@hourly DISPLAY=:0.0 /home/vivek/scripts/monitor.stock.sh`
+
+Have a favorite UNIX tool to spice up shell scripts? Share it in the comments below.
+
+### About the author
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][16], [Facebook][17], [Google+][18].
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/tips/spice-up-your-unix-linux-shell-scripts.html
+
+作者:[Vivek Gite][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
+[2]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
+[3]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
+[4]:https://www.cyberciti.biz/media/new/tips/2010/04/notify-send.png (notify-send: Shell Script Get Or Send Desktop Notifications )
+[5]:https://www.cyberciti.biz/media/new/tips/2010/04/notify-send-with-icons-timeout.png (Linux / UNIX: Display Notifications From Your Shell Scripts With notify-send)
+[6]:https://www.cyberciti.biz/media/new/tips/2010/04/tput-options.png (Linux / UNIX Script Colours and Cursor Movement With tput)
+[7]:https://bash.cyberciti.biz/guide/Zenity:_Shell_Scripting_with_Gnome
+[8]:https://www.cyberciti.biz/media/new/tips/2010/04/zenity-outputs.png (zenity: Linux / UNIX display Dialogs Boxes From The Shell Scripts)
+[9]:https://www.cyberciti.biz/media/new/tips/2010/04/KDialog.png (Kdialog: Suppressing the display of a dialog )
+[10]:http://techbase.kde.org/Development/Tutorials/Shell_Scripting_with_KDE_Dialogs
+[11]:https://bash.cyberciti.biz/guide/Bash_display_dialog_boxes
+[12]:https://www.cyberciti.biz/tips/howto-linux-unix-write-to-syslog.html
+[13]:https://www.cyberciti.biz/tips/freebsd-sending-a-message-to-windows-workstation.html
+[14]:https://www.cyberciti.biz/faq/bash-for-loop/
+[15]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
+[16]:https://twitter.com/nixcraft
+[17]:https://facebook.com/nixcraft
+[18]:https://plus.google.com/+CybercitiBiz
diff --git a/sources/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md b/sources/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md
deleted file mode 100644
index 15b667f3d2..0000000000
--- a/sources/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md
+++ /dev/null
@@ -1,208 +0,0 @@
-Translating by ljgibbslf
-
-How to find hidden processes and ports on Linux/Unix/Windows
-======
-Unhide is a handy little forensic tool to find processes and TCP/UDP ports hidden by rootkits, LKMs, or other techniques. This tool works under Linux, Unix-like systems, and MS-Windows operating systems. From the man page:
-
-> It detects hidden processes using three techniques:
->
-> 1. The proc technique consists of comparing /proc with the output of [/bin/ps][1].
-> 2. The sys technique consists of comparing information gathered from [/bin/ps][1] with information gathered from system calls.
-> 3. The brute technique consists of bruteforcing all the process IDs. This technique is only available on Linux 2.6 kernels.
->
-
-
-Most rootkits/malware use the power of the kernel to hide; they are only visible from within the kernel. You can use unhide or a tool such as [rkhunter to scan for rootkits, backdoors, and possible][2] local exploits.
-[![How to find hidden process and ports on Linux, Unix, FreeBSD and Windows][3]][3] -This page describes how to install unhide and search for hidden process and TCP/UDP ports. - -### How do I Install Unhide? - -It is recommended that you run this tool from read-only media. To install the same under a Debian or Ubuntu Linux, type the following [apt-get command][4]/[apt command][5]: -`$ sudo apt-get install unhide` -Sample outputs: -``` -[sudo] password for vivek: -Reading package lists... Done -Building dependency tree -Reading state information... Done -Suggested packages: - rkhunter -The following NEW packages will be installed: - unhide -0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. -Need to get 46.6 kB of archives. -After this operation, 136 kB of additional disk space will be used. -Get:1 http://in.archive.ubuntu.com/ubuntu artful/universe amd64 unhide amd64 20130526-1 [46.6 kB] -Fetched 46.6 kB in 0s (49.0 kB/s) -Selecting previously unselected package unhide. -(Reading database ... 205367 files and directories currently installed.) -Preparing to unpack .../unhide_20130526-1_amd64.deb ... -Unpacking unhide (20130526-1) ... -Setting up unhide (20130526-1) ... -Processing triggers for man-db (2.7.6.1-2) ... -``` - -### How to install unhide on a RHEL/CentOS/Oracle/Scientific/Fedora Linux - -Type the following [yum command][6] (first turn on [EPLE repo on a CentOS/RHEL version 6.x][7] or [version 7.x][8]): -`$ sudo yum install unhide` -If you are using a Fedora Linux, type the following dnf command: -`$ sudo dnf install unhide` - -### How to install unhide on an Arch Linux - -Type the following pacman command: -`$ sudo pacman -S unhide` - -### FreeBSD : Install unhide - -Type the following command to install unhide using the port, enter: -``` -# cd /usr/ports/security/unhide/ -# make install clean -``` -OR, you can install the same using the binary package with help of pkg command: -`# pkg install unhide` -**unhide-tcp** is a forensic tool that identifies TCP/UDP ports that are listening but are not listed in [/bin/netstat][9] or [/bin/ss command][10] through brute forcing of all TCP/UDP ports available. - -### How do I use unhide tool? - -The syntax is: -` unhide [options] test_list` -Test_list is one or more of the following standard tests: - - 1. brute - 2. proc - 3. procall - 4. procfs - 5. quick - 6. reverse - 7. sys - - - -Elementary tests: - - 1. checkbrute - 2. checkchdir - 3. checkgetaffinity - 4. checkgetparam - 5. checkgetpgid - 6. checkgetprio - 7. checkRRgetinterval - 8. checkgetsched - 9. checkgetsid - 10. checkkill - 11. checknoprocps - 12. checkopendir - 13. checkproc - 14. checkquick - 15. checkreaddir - 16. checkreverse - 17. checksysinfo - 18. checksysinfo2 - 19. checksysinfo3 - - - -You can use it as follows: -``` -# unhide proc -# unhide sys -# unhide quick -``` -Sample outputs: -``` -Unhide 20130526 -Copyright © 2013 Yago Jesus & Patrick Gouin -License GPLv3+ : GNU GPL version 3 or later -http://www.unhide-forensics.info - -NOTE : This version of unhide is for systems using Linux >= 2.6 - -Used options: -[*]Searching for Hidden processes through comparison of results of system calls, proc, dir and ps -``` - -### How to use unhide-tcp forensic tool that identifies TCP/UDP ports - -From the man page: - -> unhide-tcp is a forensic tool that identifies TCP/UDP ports that are listening but are not listed by /sbin/ss (or alternatively by /bin/netstat) through brute forcing of all TCP/UDP ports available. 
-
-> Note1 : On FreeBSD and OpenBSD, netstat is always used as iproute2 doesn't exist on these OSes. In addition, on FreeBSD, sockstat is used instead of fuser.
-> Note2 : If iproute2 is not available on the system, option -n or -s SHOULD be given on the command line.
-
-```
-# unhide-tcp
-```
-Sample outputs:
-```
-Unhide 20100201
-http://www.security-projects.com/?Unhide
-
-Starting TCP checking
-
-Starting UDP checking
-```
-
-(Fig.02: No hidden ports found using the unhide-tcp command)
-However, I found something interesting:
-`# unhide-tcp`
-Sample outputs:
-```
-Unhide 20100201
-http://www.security-projects.com/?Unhide
-
-
-Starting TCP checking
-
-Found Hidden port that not appears in netstat: 1048
-Found Hidden port that not appears in netstat: 1049
-Found Hidden port that not appears in netstat: 1050
-Starting UDP checking
-
-```
-
-The [netstat -tulpn][11] or [ss commands][12] displayed nothing about the hidden TCP ports # 1048, 1049, and 1050:
-```
-# netstat -tulpn | grep 1048
-# ss -lp
-# ss -l | grep 1048
-```
-For more info read the man pages by typing the following commands:
-```
-$ man unhide
-$ man unhide-tcp
-```
-
-### A note about Windows users
-
-You can grab WinUnhide/WinUnhide-TCP by [visiting this page][13].
-
-
--------------------------------------------------------------------------------
-
-via: https://www.cyberciti.biz/tips/linux-unix-windows-find-hidden-processes-tcp-udp-ports.html
-
-作者:[Vivek Gite][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.cyberciti.biz
-[1]:https://www.cyberciti.biz/faq/show-all-running-processes-in-linux/ (Linux / Unix ps command)
-[2]:https://www.cyberciti.biz/faq/howto-check-linux-rootkist-with-detectors-software/
-[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2011/11/Linux-FreeBSD-Unix-Windows-Find-Hidden-Process-Ports.jpg
-[4]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
-[5]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
-[6]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
-[7]:https://www.cyberciti.biz/faq/fedora-sl-centos-redhat6-enable-epel-repo/
-[8]:https://www.cyberciti.biz/faq/installing-rhel-epel-repo-on-centos-redhat-7-x/
-[9]:https://www.cyberciti.biz/tips/linux-display-open-ports-owner.html (Linux netstat command)
-[10]:https://www.cyberciti.biz/tips/linux-investigate-sockets-network-connections.html
-[11]:https://www.cyberciti.biz/tips/netstat-command-tutorial-examples.html
-[12]:https://www.cyberciti.biz/tips/linux-investigate-sockets-network-connections.html
-[13]:http://www.unhide-forensics.info/?Windows:Download
diff --git a/sources/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md b/sources/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
deleted file mode 100644
index 3591e379d5..0000000000
--- a/sources/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
+++ /dev/null
@@ -1,101 +0,0 @@
-translating by lujun9972
-Python Nmon Analyzer: moving away from excel macros
-======
-[Nigel's monitor][1], dubbed "Nmon", is a fantastic tool for monitoring, recording and analyzing a Linux/*nix system's performance over time.
Nmon was originally developed by IBM and Open Sourced in the summer of 2009. By now Nmon is available on just about every Linux platform and architecture. It provides a great real-time command line visualization of current system statistics, such as CPU, RAM, Network and Disk I/O. However, Nmon's greatest feature is the capability to record system performance snapshots over time.
-For example: `nmon -f -s 1`.
-![nmon CPU and Disk utilization][2]
-This will create a log file starting off with some system metadata (sections AAA - BBBV), followed by timed snapshots of all monitored system attributes, such as CPU and Memory usage. This produces a file that is hard to directly interpret with a spreadsheet application, hence the birth of the [Nmon_Analyzer][3] excel macro. This tool is great if you have access to Windows/Mac with Microsoft Office installed. If not, there is also the Nmon2rrd tool, which generates RRD input files from which to build your graphs. This is a very rigid approach and slightly painful. To provide a more flexible tool, I am introducing the pyNmonAnalyzer, which aims to provide a customizable solution for generating organized CSV files and simple HTML reports with [matplotlib][4]-based graphs.
-
-### Getting Started:
-
-System requirements:
-As the name indicates you will need python. Additionally pyNmonAnalyzer depends on matplotlib and numpy. If you are on a debian-derivative system these are the packages you'll need to install:
-```
-$> sudo apt-get install python-numpy python-matplotlib
-
-```
-
-##### Getting pyNmonAnalyzer:
-
-Either clone the git repository:
-```
-$> git clone git@github.com:madmaze/pyNmonAnalyzer.git
-
-```
-
-or
-
-Download the current release here: [pyNmonAnalyzer-0.1.zip][5]
-
-Next we need an Nmon file; if you do not already have one, either use the example provided in the release or record a sample: `nmon -F test.nmon -s 1 -c 120`, which will record 120 snapshots at 1 second intervals to test.nmon.
-
-Let's have a look at the basic help output:
-```
-$> ./pyNmonAnalyzer.py -h
-usage: pyNmonAnalyzer.py [-h] [-x] [-d] [-o OUTDIR] [-c] [-b] [-r CONFFNAME]
- input_file
-
-nmonParser converts Nmon monitor files into time-sorted
-CSV/Spreadsheets for easier analysis, without the use of the
-MS Excel Macro. Also included is an option to build an HTML
-report with graphs, which is configured through report.config.
-
-positional arguments:
- input_file Input NMON file
-
-optional arguments:
- -h, --help show this help message and exit
- -x, --overwrite overwrite existing results (Default: False)
- -d, --debug debug? (Default: False)
- -o OUTDIR, --output OUTDIR
- Output dir for CSV (Default: ./data/)
- -c, --csv CSV output? (Default: False)
- -b, --buildReport report output? (Default: False)
- -r CONFFNAME, --reportConfig CONFFNAME
- Report config file, if none exists: we will write the
- default config file out (Default: ./report.config)
-
-```
-
-There are 2 main options for using this tool:
-
- 1. Turn the nmon file into a set of separate CSV files
- 2. Generate an HTML report with matplotlib graphs
-
-
-
-The following command does both:
-```
-$> ./pyNmonAnalyzer.py -c -b test.nmon
-
-```
-
-This will create a directory called ./data in which you will find a folder of CSV files ("./data/csv/"), a folder of PNG graphs ("./data/img/") and an HTML report ("./data/report.html").
-
-By default the HTML report will include graphs for CPU, Disk Busy, Memory utilization and Network transfers.
This is all defined in a self-explanatory configuration file, "report.config". At the moment this is not yet very flexible, as CPU and MEM are not configurable beyond on or off, but one of the next steps will be to refine the plotting approach and to expose more flexibility over which graphs plot which data points.
-
-### Report Example:
-
-[![pyNmonAnalyzer Graph output][6]
-**Click to see the full Report**][7]
-
-Currently these reports are very bare bones and only print out basic labeled graphs, but development is on-going. Currently in development is a wizard that will make adjusting the configurations easier. Please do let me know if you have any suggestions, find any bugs or have feature requests.
-
--------------------------------------------------------------------------------
-
-via: https://matthiaslee.com/python-nmon-analyzer-moving-away-from-excel-macros/
-
-作者:[Matthias Lee][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://matthiaslee.com/
-[1]:http://nmon.sourceforge.net/
-[2]:https://matthiaslee.com//content/images/2015/06/nmon_cpudisk.png
-[3]:http://www.ibm.com/developerworks/wikis/display/WikiPtype/nmonanalyser
-[4]:http://matplotlib.org/
-[5]:https://github.com/madmaze/pyNmonAnalyzer/blob/master/release/pyNmonAnalyzer-0.1.zip?raw=true
-[6]:https://matthiaslee.com//content/images/2017/04/teaser-short_0.png (pyNmonAnalyzer Graph output)
-[7]:http://matthiaslee.com/pub/pyNmonAnalyzer/data/report.html
diff --git a/sources/tech/20130319 Linux - Unix Bash Shell List All Builtin Commands.md b/sources/tech/20130319 Linux - Unix Bash Shell List All Builtin Commands.md
new file mode 100644
index 0000000000..230ca95cba
--- /dev/null
+++ b/sources/tech/20130319 Linux - Unix Bash Shell List All Builtin Commands.md
@@ -0,0 +1,170 @@
+Linux / Unix Bash Shell List All Builtin Commands
+======
+
+Builtin commands are contained within the bash shell itself. How do I list all built-in bash commands on Linux / Apple OS X / *BSD / Unix-like operating systems without reading the large bash man page?
+
+A shell builtin is nothing but a command or a function, called from a shell, that is executed directly in the shell itself. The bash shell executes the command directly, without invoking another program. You can view information for Bash built-ins with the help command. There are different types of built-in commands.
+
+### Built-in command types
+
+ 1. Bourne Shell Builtins: Builtin commands inherited from the Bourne Shell.
+ 2. Bash Builtins: Table of builtins specific to Bash.
+ 3. Modifying Shell Behavior: Builtins to modify shell attributes and optional behavior.
+ 4. Special Builtins: Builtin commands classified specially by POSIX.
+
+### How to see all bash builtins
+
+Type the following command:
+```
+$ help
+$ help | less
+$ help | grep read
+```
+
+Sample outputs:
+```
+GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu)
+These shell commands are defined internally. Type `help' to see this list.
+Type `help name' to find out more about the function `name'.
+Use `info bash' to find out more about the shell in general.
+Use `man -k' or `info' to find out more about commands not in this list.
+
+A star (*) next to a name means that the command is disabled.
+
+ job_spec [&]                           history [-c] [-d offset] [n] or hist>
+ (( expression ))                       if COMMANDS; then COMMANDS; [ elif C>
+ . filename [arguments]                 jobs [-lnprs] [jobspec ...] or jobs >
+ :                                      kill [-s sigspec | -n signum | -sigs>
+ [ arg... ]                             let arg [arg ...]
+ [[ expression ]]                       local [option] name[=value] ...
+ alias [-p] [name[=value] ... ]         logout [n]
+ bg [job_spec ...]                      mapfile [-n count] [-O origin] [-s c>
+ bind [-lpvsPVS] [-m keymap] [-f filen> popd [-n] [+N | -N]
+ break [n]                              printf [-v var] format [arguments]
+ builtin [shell-builtin [arg ...]]      pushd [-n] [+N | -N | dir]
+ caller [expr]                          pwd [-LP]
+ case WORD in [PATTERN [| PATTERN]...)> read [-ers] [-a array] [-d delim] [->
+ cd [-L|-P] [dir]                       readarray [-n count] [-O origin] [-s>
+ command [-pVv] command [arg ...]       readonly [-af] [name[=value] ...] or>
+ compgen [-abcdefgjksuv] [-o option] >  return [n]
+ complete [-abcdefgjksuv] [-pr] [-DE] > select NAME [in WORDS ... ;] do COMM>
+ compopt [-o|+o option] [-DE] [name ..> set [--abefhkmnptuvxBCHP] [-o option>
+ continue [n]                           shift [n]
+ coproc [NAME] command [redirections]   shopt [-pqsu] [-o] [optname ...]
+ declare [-aAfFilrtux] [-p] [name[=val> source filename [arguments]
+ dirs [-clpv] [+N] [-N]                 suspend [-f]
+ disown [-h] [-ar] [jobspec ...]        test [expr]
+ echo [-neE] [arg ...]                  time [-p] pipeline
+ enable [-a] [-dnps] [-f filename] [na> times
+ eval [arg ...]                         trap [-lp] [[arg] signal_spec ...]
+ exec [-cl] [-a name] [command [argume> true
+ exit [n]                               type [-afptP] name [name ...]
+ export [-fn] [name[=value] ...] or ex> typeset [-aAfFilrtux] [-p] name[=val>
+ false                                  ulimit [-SHacdefilmnpqrstuvx] [limit>
+ fc [-e ename] [-lnr] [first] [last] o> umask [-p] [-S] [mode]
+ fg [job_spec]                          unalias [-a] name [name ...]
+ for NAME [in WORDS ... ] ; do COMMAND> unset [-f] [-v] [name ...]
+ for (( exp1; exp2; exp3 )); do COMMAN> until COMMANDS; do COMMANDS; done
+ function name { COMMANDS ; } or name > variables - Names and meanings of so>
+ getopts optstring name [arg]           wait [id]
+ hash [-lr] [-p pathname] [-dt] [name > while COMMANDS; do COMMANDS; done
+ help [-dms] [pattern ...]              { COMMANDS ; }
+```
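+
+Another option (a small sketch): the compgen builtin can print just the names of all enabled builtins, which is handy in scripts:
+```
+compgen -b            ## list the names of all enabled builtins, one per line ##
+compgen -b | wc -l    ## count them ##
+for b in $(compgen -b); do help -d "$b"; done   ## pair each name with its one-line description ##
+```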
+
+### Viewing information for Bash built-ins
+
+To get detailed info run:
+```
+help command
+help read
+```
+To just get a list of all built-ins with a short description, execute:
+
+`$ help -d`
+
+### Find syntax and other options for builtins
+
+Use the following syntax to find out more about the builtin commands:
+```
+help name
+help cd
+help fg
+help for
+help read
+help :
+```
+
+Sample outputs:
+```
+: :
+    Null command.
+
+    No effect; the command does nothing.
+
+    Exit Status:
+    Always succeeds
+```
+
+### Find out if a command is internal (builtin) or external
+
+Use the type command or the command command:
+```
+type -a command-name-here
+type -a cd
+type -a uname
+type -a :
+type -a ls
+```
+
+OR
+```
+type -a cd uname : ls uname
+```
+
+Sample outputs:
+```
+cd is a shell builtin
+uname is /bin/uname
+: is a shell builtin
+ls is aliased to `ls --color=auto'
+ls is /bin/ls
+l is a function
+l ()
+{
+    ls --color=auto
+}
+```
+
+OR
+```
+command -V ls
+command -V cd
+command -V foo
+```
+
+[![View list bash built-ins command info on Linux or Unix][1]][1]
+
+### About the author
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][2], [Facebook][3], [Google+][4].
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/faq/linux-unix-bash-shell-list-all-builtin-commands/
+
+作者:[Vivek Gite][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/media/new/faq/2013/03/View-list-bash-built-ins-command-info-on-Linux-or-Unix.jpg
+[2]:https://twitter.com/nixcraft
+[3]:https://facebook.com/nixcraft
+[4]:https://plus.google.com/+CybercitiBiz
diff --git a/sources/tech/20140523 Tail Calls Optimization and ES6.md b/sources/tech/20140523 Tail Calls Optimization and ES6.md
new file mode 100644
index 0000000000..2999c606d2
--- /dev/null
+++ b/sources/tech/20140523 Tail Calls Optimization and ES6.md
@@ -0,0 +1,173 @@
+#Translating by qhwdw [Tail Calls, Optimization, and ES6][1]
+
+
+In this penultimate post about the stack, we take a quick look at tail calls, compiler optimizations, and the proper tail calls landing in the newest version of JavaScript.
+
+A tail call happens when a function F makes a function call as its final action. At that point F will do absolutely no more work: it passes the ball to whatever function is being called and vanishes from the game. This is notable because it opens up the possibility of tail call optimization: instead of [creating a new stack frame][6] for the function call, we can simply reuse F's stack frame, thereby saving stack space and avoiding the work involved in setting up a new frame. Here are some examples in C and their results compiled with [mild optimization][7]:

+Simple Tail Calls[download][2]
+
+```
+int add5(int a)
+{
+    return a + 5;
+}
+
+int add10(int a)
+{
+    int b = add5(a); // not tail
+    return add5(b);  // tail
+}
+
+int add5AndTriple(int a){
+    int b = add5(a);    // not tail
+    return 3 * add5(a); // not tail, doing work after the call
+}
+
+int finicky(int a){
+    if (a > 10){
+        return add5AndTriple(a); // tail
+    }
+
+    if (a > 5){
+        int b = add5(a);   // not tail
+        return finicky(b); // tail
+    }
+
+    return add10(a); // tail
+}
+```
+
+You can normally spot tail call optimization (hereafter, TCO) in compiler output by seeing a [jump][8] instruction where a [call][9] would have been expected. At runtime TCO leads to a reduced call stack.
+
+A common misconception is that tail calls are necessarily [recursive][10]. That's not the case: a tail call may be recursive, such as in finicky() above, but it need not be. As long as caller F is completely done at the call site, we've got ourselves a tail call. Whether it can be optimized is a different question whose answer depends on your programming environment.
+
+"Yes, it can, always!" is the best answer we can hope for, which is famously the case for Scheme, as discussed in [SICP][11] (by the way, if, when you program, you don't feel like "a Sorcerer conjuring the spirits of the computer with your spells," I urge you to read that book). It's also the case for [Lua][12]. And most importantly, it is the case for the next version of JavaScript, ES6, whose spec does a good job defining [tail position][13] and clarifying the few conditions required for optimization, such as [strict mode][14]. When a language guarantees TCO, it supports proper tail calls.
+
+Now some of us can't kick that C habit, heart bleed and all, and the answer there is a more complicated "sometimes" that takes us into compiler optimization territory.
We've seen the [simple examples][15] above; now let's resurrect our factorial from [last post][16]:
+
+Recursive Factorial[download][3]
+
+```
+#include <stdio.h>
+
+int factorial(int n)
+{
+    int previous = 0xdeadbeef;
+
+    if (n == 0 || n == 1) {
+        return 1;
+    }
+
+    previous = factorial(n-1);
+    return n * previous;
+}
+
+int main(int argc)
+{
+    int answer = factorial(5);
+    printf("%d\n", answer);
+}
+```
+
+So, is line 11 a tail call? It's not, because of the multiplication by n afterwards. But if you're not used to optimizations, gcc's [result][17] with [O2 optimization][18] might shock you: not only does it transform factorial into a [recursion-free loop][19], but the factorial(5) call is eliminated entirely and replaced by a [compile-time constant][20] of 120 (5! == 120). This is why debugging optimized code can be hard sometimes. On the plus side, if you call this function it will use a single stack frame regardless of n's initial value. Compiler algorithms are pretty fun, and if you're interested I suggest you check out [Building an Optimizing Compiler][21] and [ACDI][22].
+
+However, what happened here was not tail call optimization, since there was no tail call to begin with. gcc outsmarted us by analyzing what the function does and optimizing away the needless recursion. The task was made easier by the simple, deterministic nature of the operations being done. By adding a dash of chaos (e.g., getpid()) we can throw gcc off:
+
+Recursive PID Factorial[download][4]
+
+```
+#include <stdio.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+int pidFactorial(int n)
+{
+    if (1 == n) {
+        return getpid(); // tail
+    }
+
+    return n * pidFactorial(n-1) * getpid(); // not tail
+}
+
+int main(int argc)
+{
+    int answer = pidFactorial(5);
+    printf("%d\n", answer);
+}
+```
+
+Optimize that, unix fairies! So now we have a regular [recursive call][23] and this function allocates O(n) stack frames to do its work. Heroically, gcc still does [TCO for getpid][24] in the recursion base case. If we now wished to make this function tail recursive, we'd need a slight change:
+
+tailPidFactorial.c[download][5]
+
+```
+#include <stdio.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+int tailPidFactorial(int n, int acc)
+{
+    if (1 == n) {
+        return acc * getpid(); // not tail
+    }
+
+    acc = (acc * getpid() * n);
+    return tailPidFactorial(n-1, acc); // tail
+}
+
+int main(int argc)
+{
+    int answer = tailPidFactorial(5, 1);
+    printf("%d\n", answer);
+}
+```
+
+The accumulation of the result is now [a loop][25] and we've achieved true TCO. But before you go out partying, what can we say about the general case in C? Sadly, while good C compilers do TCO in a number of cases, there are many situations where they cannot do it. For example, as we saw in our [function epilogues][26], the caller is responsible for cleaning up the stack after a function call using the standard C calling convention. So if function F takes two arguments, it can only make TCO calls to functions taking two or fewer arguments. This is one among many restrictions. Mark Probst wrote an excellent thesis discussing [Proper Tail Recursion in C][27] where he discusses these issues along with C stack behavior. He also does [insanely cool juggling][28].
+
+"Sometimes" is a rocky foundation for any relationship, so you can't rely on TCO in C. It's a discrete optimization that may or may not take place, rather than a language feature like proper tail calls, though in practice the compiler will optimize the vast majority of cases. But if you must have it, say for transpiling Scheme into C, you will [suffer][29].
+ +Since JavaScript is now the most popular transpilation target, proper tail calls become even more important there. So kudos to ES6 for delivering it along with many other significant improvements. It's like Christmas for JS programmers. + +This concludes our brief tour of tail calls and compiler optimization. Thanks for reading and see you next time. + +-------------------------------------------------------------------------------- + +via:https://manybutfinite.com/post/tail-calls-optimization-es6/ + +作者:[Gustavo Duarte][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://duartes.org/gustavo/blog/about/ +[1]:https://manybutfinite.com/post/tail-calls-optimization-es6/ +[2]:https://manybutfinite.com/code/x86-stack/tail.c +[3]:https://manybutfinite.com/code/x86-stack/factorial.c +[4]:https://manybutfinite.com/code/x86-stack/pidFactorial.c +[5]:https://manybutfinite.com/code/x86-stack/tailPidFactorial.c +[6]:https://manybutfinite.com/post/journey-to-the-stack +[7]:https://github.com/gduarte/blog/blob/master/code/x86-stack/asm-tco.sh +[8]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail-tco.s#L27 +[9]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail.s#L37-L39 +[10]:https://manybutfinite.com/post/recursion/ +[11]:http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html +[12]:http://www.lua.org/pil/6.3.html +[13]:https://people.mozilla.org/~jorendorff/es6-draft.html#sec-tail-position-calls +[14]:https://people.mozilla.org/~jorendorff/es6-draft.html#sec-strict-mode-code +[15]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail.c +[16]:https://manybutfinite.com/post/recursion/ +[17]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-o2.s +[18]:https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html +[19]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-o2.s#L16-L19 +[20]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-o2.s#L38 +[21]:http://www.amazon.com/Building-Optimizing-Compiler-Bob-Morgan-ebook/dp/B008COCE9G/ +[22]:http://www.amazon.com/Advanced-Compiler-Design-Implementation-Muchnick-ebook/dp/B003VM7GGK/ +[23]:https://github.com/gduarte/blog/blob/master/code/x86-stack/pidFactorial-o2.s#L20 +[24]:https://github.com/gduarte/blog/blob/master/code/x86-stack/pidFactorial-o2.s#L43 +[25]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tailPidFactorial-o2.s#L22-L27 +[26]:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/ +[27]:http://www.complang.tuwien.ac.at/schani/diplarb.ps +[28]:http://www.complang.tuwien.ac.at/schani/jugglevids/index.html +[29]:http://en.wikipedia.org/wiki/Tail_call#Through_trampolining \ No newline at end of file diff --git a/sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md b/sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md index 4c0d541a5a..9a815f2852 100644 --- a/sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md +++ b/sources/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md @@ -1,3 +1,4 @@ +// Translating by Linchenguang.... Let’s Build A Simple Interpreter. Part 1. ====== diff --git a/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md b/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md deleted file mode 100644 index b9f923e048..0000000000 --- a/sources/tech/20150703 Let-s Build A Simple Interpreter. 
Part 2..md +++ /dev/null @@ -1,244 +0,0 @@ -Let’s Build A Simple Interpreter. Part 2. -====== - -In their amazing book "The 5 Elements of Effective Thinking" the authors Burger and Starbird share a story about how they observed Tony Plog, an internationally acclaimed trumpet virtuoso, conduct a master class for accomplished trumpet players. The students first played complex music phrases, which they played perfectly well. But then they were asked to play very basic, simple notes. When they played the notes, the notes sounded childish compared to the previously played complex phrases. After they finished playing, the master teacher also played the same notes, but when he played them, they did not sound childish. The difference was stunning. Tony explained that mastering the performance of simple notes allows one to play complex pieces with greater control. The lesson was clear - to build true virtuosity one must focus on mastering simple, basic ideas. - -The lesson in the story clearly applies not only to music but also to software development. The story is a good reminder to all of us to not lose sight of the importance of deep work on simple, basic ideas even if it sometimes feels like a step back. While it is important to be proficient with a tool or framework you use, it is also extremely important to know the principles behind them. As Ralph Waldo Emerson said: - -> "If you learn only methods, you'll be tied to your methods. But if you learn principles, you can devise your own methods." - -On that note, let's dive into interpreters and compilers again. - -Today I will show you a new version of the calculator from [Part 1][1] that will be able to: - - 1. Handle whitespace characters anywhere in the input string - 2. Consume multi-digit integers from the input - 3. Subtract two integers (currently it can only add integers) - - - -Here is the source code for your new version of the calculator that can do all of the above: -``` -# Token types -# EOF (end-of-file) token is used to indicate that -# there is no more input left for lexical analysis -INTEGER, PLUS, MINUS, EOF = 'INTEGER', 'PLUS', 'MINUS', 'EOF' - - -class Token(object): - def __init__(self, type, value): - # token type: INTEGER, PLUS, MINUS, or EOF - self.type = type - # token value: non-negative integer value, '+', '-', or None - self.value = value - - def __str__(self): - """String representation of the class instance. - - Examples: - Token(INTEGER, 3) - Token(PLUS '+') - """ - return 'Token({type}, {value})'.format( - type=self.type, - value=repr(self.value) - ) - - def __repr__(self): - return self.__str__() - - -class Interpreter(object): - def __init__(self, text): - # client string input, e.g. 
"3 + 5", "12 - 5", etc - self.text = text - # self.pos is an index into self.text - self.pos = 0 - # current token instance - self.current_token = None - self.current_char = self.text[self.pos] - - def error(self): - raise Exception('Error parsing input') - - def advance(self): - """Advance the 'pos' pointer and set the 'current_char' variable.""" - self.pos += 1 - if self.pos > len(self.text) - 1: - self.current_char = None # Indicates end of input - else: - self.current_char = self.text[self.pos] - - def skip_whitespace(self): - while self.current_char is not None and self.current_char.isspace(): - self.advance() - - def integer(self): - """Return a (multidigit) integer consumed from the input.""" - result = '' - while self.current_char is not None and self.current_char.isdigit(): - result += self.current_char - self.advance() - return int(result) - - def get_next_token(self): - """Lexical analyzer (also known as scanner or tokenizer) - - This method is responsible for breaking a sentence - apart into tokens. - """ - while self.current_char is not None: - - if self.current_char.isspace(): - self.skip_whitespace() - continue - - if self.current_char.isdigit(): - return Token(INTEGER, self.integer()) - - if self.current_char == '+': - self.advance() - return Token(PLUS, '+') - - if self.current_char == '-': - self.advance() - return Token(MINUS, '-') - - self.error() - - return Token(EOF, None) - - def eat(self, token_type): - # compare the current token type with the passed token - # type and if they match then "eat" the current token - # and assign the next token to the self.current_token, - # otherwise raise an exception. - if self.current_token.type == token_type: - self.current_token = self.get_next_token() - else: - self.error() - - def expr(self): - """Parser / Interpreter - - expr -> INTEGER PLUS INTEGER - expr -> INTEGER MINUS INTEGER - """ - # set current token to the first token taken from the input - self.current_token = self.get_next_token() - - # we expect the current token to be an integer - left = self.current_token - self.eat(INTEGER) - - # we expect the current token to be either a '+' or '-' - op = self.current_token - if op.type == PLUS: - self.eat(PLUS) - else: - self.eat(MINUS) - - # we expect the current token to be an integer - right = self.current_token - self.eat(INTEGER) - # after the above call the self.current_token is set to - # EOF token - - # at this point either the INTEGER PLUS INTEGER or - # the INTEGER MINUS INTEGER sequence of tokens - # has been successfully found and the method can just - # return the result of adding or subtracting two integers, - # thus effectively interpreting client input - if op.type == PLUS: - result = left.value + right.value - else: - result = left.value - right.value - return result - - -def main(): - while True: - try: - # To run under Python3 replace 'raw_input' call - # with 'input' - text = raw_input('calc> ') - except EOFError: - break - if not text: - continue - interpreter = Interpreter(text) - result = interpreter.expr() - print(result) - - -if __name__ == '__main__': - main() -``` - -Save the above code into the calc2.py file or download it directly from [GitHub][2]. Try it out. See for yourself that it works as expected: it can handle whitespace characters anywhere in the input; it can accept multi-digit integers, and it can also subtract two integers as well as add two integers. 
-
-Here is a sample session that I ran on my laptop:
-```
-$ python calc2.py
-calc> 27 + 3
-30
-calc> 27 - 7
-20
-calc>
-```
-
-The major code changes compared with the version from [Part 1][1] are:
-
- 1. The get_next_token method was refactored a bit. The logic to increment the pos pointer was factored into a separate method advance.
- 2. Two more methods were added: skip_whitespace to ignore whitespace characters and integer to handle multi-digit integers in the input.
- 3. The expr method was modified to recognize the INTEGER -> MINUS -> INTEGER phrase in addition to the INTEGER -> PLUS -> INTEGER phrase. The method now also interprets both addition and subtraction after having successfully recognized the corresponding phrase.
-
-In [Part 1][1] you learned two important concepts, namely that of a **token** and a **lexical analyzer**. Today I would like to talk a little bit about **lexemes**, **parsing**, and **parsers**.
-
-You already know about tokens. But in order for me to round out the discussion of tokens I need to mention lexemes. What is a lexeme? A **lexeme** is a sequence of characters that form a token. In the following picture you can see some examples of tokens and sample lexemes, and hopefully it will make the relationship between them clear:
-
-![][3]
-
-Now, remember our friend, the expr method? I said before that that's where the interpretation of an arithmetic expression actually happens. But before you can interpret an expression you first need to recognize what kind of phrase it is, whether it is addition or subtraction, for example. That's what the expr method essentially does: it finds the structure in the stream of tokens it gets from the get_next_token method and then it interprets the phrase that it has recognized, generating the result of the arithmetic expression.
-
-The process of finding the structure in the stream of tokens, or put differently, the process of recognizing a phrase in the stream of tokens is called **parsing**. The part of an interpreter or compiler that performs that job is called a **parser**.
-
-So now you know that the expr method is the part of your interpreter where both **parsing** and **interpreting** happen - the expr method first tries to recognize (**parse**) the INTEGER -> PLUS -> INTEGER or the INTEGER -> MINUS -> INTEGER phrase in the stream of tokens and after it has successfully recognized (**parsed**) one of those phrases, the method interprets it and returns the result of either addition or subtraction of two integers to the caller.
-
-And now it's time for exercises again.
-
-![][4]
-
- 1. Extend the calculator to handle multiplication of two integers
- 2. Extend the calculator to handle division of two integers
- 3. Modify the code to interpret expressions containing an arbitrary number of additions and subtractions, for example "9 - 5 + 3 + 11"
-
-**Check your understanding.**
-
- 1. What is a lexeme?
- 2. What is the name of the process that finds the structure in the stream of tokens, or put differently, what is the name of the process that recognizes a certain phrase in that stream of tokens?
- 3. What is the name of the part of the interpreter (compiler) that does parsing?
-
-I hope you liked today's material. In the next article of the series you will extend your calculator to handle more complex arithmetic expressions. Stay tuned.
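-
-P.S. If you want to double-check the new whitespace and multi-digit handling before diving into the exercises, a quick session like this will do it (a hypothetical transcript; the outputs follow directly from the expr method above, so they are easy to verify by hand):
-```
-$ python calc2.py
-calc>    12   -    5
-7
-calc> 102 + 3
-105
-calc>
-```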
- - --------------------------------------------------------------------------------- - -via: https://ruslanspivak.com/lsbasi-part2/ - -作者:[Ruslan Spivak][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://ruslanspivak.com -[1]:http://ruslanspivak.com/lsbasi-part1/ (Part 1) -[2]:https://github.com/rspivak/lsbasi/blob/master/part2/calc2.py -[3]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_lexemes.png -[4]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_exercises.png diff --git a/sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md b/sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md index 2502d2624a..d9deb9f50e 100644 --- a/sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md +++ b/sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md @@ -1,3 +1,5 @@ +BriFuture is Translating this article + Let’s Build A Simple Interpreter. Part 3. ====== diff --git a/sources/tech/20160808 Top 10 Command Line Games For Linux.md b/sources/tech/20160808 Top 10 Command Line Games For Linux.md deleted file mode 100644 index ebce8a8073..0000000000 --- a/sources/tech/20160808 Top 10 Command Line Games For Linux.md +++ /dev/null @@ -1,240 +0,0 @@ -Top 10 Command Line Games For Linux -====== -Brief: This article lists the **best command line games for Linux**. - -Linux has never been the preferred operating system for gaming. Though [gaming on Linux][1] has improved a lot lately. You can [download Linux games][2] from a number of resources. - -There are dedicated [Linux distributions for gaming][3]. Yes, they do exist. But, we are not going to see the Linux gaming distributions today. - -Linux has one added advantage over its Windows counterpart. It has got the mighty Linux terminal. You can do a hell lot of things in terminal including playing **command line games**. - -Yeah, hardcore terminal lovers, gather around. Terminal games are light, fast and hell lotta fun to play. And the best thing of all, you've got a lot of classic retro games in Linux terminal. - -[Suggested read: Gaming On Linux:All You Need To Know][20] - -### Best Linux terminal games - -So let's crack this list and see what are some of the best Linux terminal games. - -### 1. Bastet - -Who hasn't spent hours together playing [Tetris][4]? Simple, but totally addictive. Bastet is the Tetris of Linux. - -![Bastet Linux terminal game][5] - -Use the command below to get Bastet: -``` -sudo apt install bastet -``` - -To play the game, run the below command in terminal: -``` -bastet -``` - -Use spacebar to rotate the bricks and arrow keys to guide. - -### 2. Ninvaders - -Space Invaders. I remember tussling for high score with my brother on this. One of the best arcade games out there. - -![nInvaders command line game in Linux][6] - -Copy paste the command to install Ninvaders. -``` -sudo apt-get install ninvaders -``` - -To play this game, use the command below: -``` -ninvaders -``` - -Arrow keys to move the spaceship. Space bar to shoot at the aliens. - -[Suggested read:Top 10 Best Linux Games eleased in 2016 That You Can Play Today][21] - - -### 3. Pacman4console - -Yes, the King of the Arcade is here. Pacman4console is the terminal version of the popular arcade hit, Pacman. 
-
-![Pacman4console is a command line Pacman game in Linux][7]
-
-Use this command to get pacman4console:
-```
-sudo apt-get install pacman4console
-```
-
-Open a terminal, and I suggest you maximize it. Type the command below to launch the game:
-```
-pacman4console
-```
-
-Use the arrow keys to control the movement.
-
-### 4. nSnake
-
-Remember the snake game in old Nokia phones?
-
-That game kept me hooked to the phone for a really long time. I used to devise various coiling patterns to manage the grown-up snake.
-
-![nsnake : Snake game in Linux terminal][8]
-
-We have the [snake game in Linux terminal][9] thanks to [nSnake][9]. Use the command below to install it.
-```
-sudo apt-get install nsnake
-```
-
-To play the game, type in the command below to launch it.
-```
-nsnake
-```
-
-Use the arrow keys to move the snake and feed it.
-
-### 5. Greed
-
-Greed is a little like Tron, minus the speed and adrenaline.
-
-Your location is denoted by a blinking '@'. You are surrounded by numbers and you can choose to move in any of the 4 directions.
-
-The direction you choose has a number and you move exactly that number of steps. And you repeat the step again. You cannot revisit a visited spot, and the game ends when you cannot make a move.
-
-I made it sound more complicated than it really is.
-
-![Greed : Tron game in Linux command line][10]
-
-Grab Greed with the command below:
-```
-sudo apt-get install greed
-```
-
-To launch the game, use the command below. Then use the arrow keys to play.
-```
-greed
-```
-
-### 6. Air Traffic Controller
-
-What's better than being a pilot? An air traffic controller. You can simulate an entire air traffic system in your terminal. To be honest, managing air traffic from a terminal kinda feels real.
-
-![Air Traffic Controller game in Linux][11]
-
-Install the game using the command below:
-```
-sudo apt-get install bsdgames
-```
-
-Type in the command below to launch the game:
-```
-atc
-```
-
-ATC is not child's play, so read the man page first with `man atc`.
-
-### 7. Backgammon
-
-Whether you have played [Backgammon][12] before or not, you should check this out. The instructions and control manuals are all very friendly. Play it against the computer, or against a friend if you prefer.
-
-![Backgammon terminal game in Linux][13]
-
-Install Backgammon using this command:
-```
-sudo apt-get install bsdgames
-```
-
-Type in the command below to launch the game:
-```
-backgammon
-```
-
-Press 'y' when prompted for the rules of the game.
-
-### 8. Moon Buggy
-
-Jump. Fire. Hours of fun. No more words.
-
-![Moon buggy][14]
-
-Install the game using the command below:
-```
-sudo apt-get install moon-buggy
-```
-
-Use the command below to start the game:
-```
-moon-buggy
-```
-
-Press space to jump, 'a' or 'l' to shoot. Enjoy!
-
-### 9. 2048
-
-Here's something to make your brain flex. [2048][15] is a strategic as well as a highly addictive game. The goal is to get a score of 2048.
-
-![2048 game in Linux terminal][16]
-
-Copy-paste the commands below one by one to install the game.
-```
-wget https://raw.githubusercontent.com/mevdschee/2048.c/master/2048.c
-
-gcc -o 2048 2048.c
-```
-
-Type the command below to launch the game and use the arrow keys to play.
-```
-./2048
-```
-
-### 10. Tron
-
-How can this list be complete without a brisk action game?
-
-![Tron Linux terminal game][17]
-
-Yes, the snappy Tron is available on the Linux terminal. Get ready for some serious nimble action. No installation or setup hassle.
One command will launch the game. All You need is an internet connection. -``` -ssh sshtron.zachlatta.com -``` - -You can even play this game in multiplayer if there are other gamers online. Read more about [Tron game in Linux][18]. - -### Your pick? - -There you have it, people. Top 10 Linux terminal games. I guess it's ctrl+alt+T now. What is Your favorite among the list? Or got some other fun stuff for the terminal? Do share. - -With inputs from [Abhishek Prakash][19]. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/best-command-line-games-linux/ - -作者:[Aquil Roshan][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/aquil/ -[1]:https://itsfoss.com/linux-gaming-guide/ -[2]:https://itsfoss.com/download-linux-games/ -[3]:https://itsfoss.com/manjaro-gaming-linux/ -[4]:https://en.wikipedia.org/wiki/Tetris -[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/bastet.jpg -[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/ninvaders.jpg -[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/pacman.jpg -[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/nsnake.jpg -[9]:https://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/ -[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/greed.jpg -[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/atc.jpg -[12]:https://en.wikipedia.org/wiki/Backgammon -[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/backgammon.jpg -[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/moon-buggy.jpg -[15]:https://itsfoss.com/2048-offline-play-ubuntu/ -[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/2048.jpg -[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/tron.jpg -[18]:https://itsfoss.com/play-tron-game-linux-terminal/ -[19]:https://twitter.com/abhishek_pc -[20]:https://itsfoss.com/linux-gaming-guide/ -[21]:https://itsfoss.com/best-linux-games/ diff --git a/sources/tech/20160810 How does gdb work.md b/sources/tech/20160810 How does gdb work.md new file mode 100644 index 0000000000..56b0cfe7bf --- /dev/null +++ b/sources/tech/20160810 How does gdb work.md @@ -0,0 +1,220 @@ +translating by ucasFL + +How does gdb work? +============================================================ + +Hello! Today I was working a bit on my [ruby stacktrace project][1] and I realized that now I know a couple of things about how gdb works internally. + +Lately I’ve been using gdb to look at Ruby programs, so we’re going to be running gdb on a Ruby program. This really means the Ruby interpreter. First, we’re going to print out the address of a global variable: `ruby_current_thread`: + +### getting a global variable + +Here’s how to get the address of the global `ruby_current_thread`: + +``` +$ sudo gdb -p 2983 +(gdb) p & ruby_current_thread +$2 = (rb_thread_t **) 0x5598a9a8f7f0 + +``` + +There are a few places a variable can live: on the heap, the stack, or in your program’s text. Global variables are part of your program! You can think of them as being allocated at compile time, kind of. It turns out we can figure out the address of a global variable pretty easily! Let’s see how `gdb` came up with `0x5598a9a8f7f0`. 
+
+We can find the approximate region this variable lives in by looking at a cool file in `/proc` called `/proc/$pid/maps`.
+
+```
+$ sudo cat /proc/2983/maps | grep bin/ruby
+5598a9605000-5598a9886000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+5598a9a86000-5598a9a8b000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+5598a9a8b000-5598a9a8d000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+
+```
+
+So! There's this starting address `5598a9605000`. That's  _like_  `0x5598a9a8f7f0`, but different. How different? Well, here's what I get when I subtract them:
+
+```
+(gdb) p/x 0x5598a9a8f7f0 - 0x5598a9605000
+$4 = 0x48a7f0
+
+```
+
+"What's that number?", you might ask? WELL. Let's look at the **symbol table** for our program with `nm`.
+
+```
+sudo nm /proc/2983/exe | grep ruby_current_thread
+000000000048a7f0 b ruby_current_thread
+
+```
+
+What's that we see? Could it be `0x48a7f0`? Yes it is! So!! If we want to find the address of a global variable in our program, all we need to do is look up the name of the variable in the symbol table, and then add that to the start of the range in `/proc/whatever/maps`, and we're done!
+
+So now we know how gdb does that. But gdb does so much more!! Let's skip ahead to…
+
+### dereferencing pointers
+
+```
+(gdb) p ruby_current_thread
+$1 = (rb_thread_t *) 0x5598ab3235b0
+
+```
+
+The next thing we're going to do is **dereference** that `ruby_current_thread` pointer. We want to see what's in that address! To do that, gdb will run a bunch of system calls like this:
+
+```
+ptrace(PTRACE_PEEKTEXT, 2983, 0x5598a9a8f7f0, [0x5598ab3235b0]) = 0
+
+```
+
+You remember this address `0x5598a9a8f7f0`? gdb is asking "hey, what's in that address exactly"? `2983` is the PID of the process we're running gdb on. It's using the `ptrace` system call which is how gdb does everything.
+
+Awesome! So we can dereference memory and figure out what bytes are at what memory addresses. Some useful gdb commands to know here are `x/40w variable` and `x/40b variable` which will display 40 words / bytes at a given address, respectively.
+
+### describing structs
+
+The memory at an address looks like this. A bunch of bytes!
+
+```
+(gdb) x/40b ruby_current_thread
+0x5598ab3235b0: 16 -90 55 -85 -104 85 0 0
+0x5598ab3235b8: 32 47 50 -85 -104 85 0 0
+0x5598ab3235c0: 16 -64 -55 115 -97 127 0 0
+0x5598ab3235c8: 0 0 2 0 0 0 0 0
+0x5598ab3235d0: -96 -83 -39 115 -97 127 0 0
+
+```
+
+That's useful, but not that useful! If you are a human like me and want to know what it MEANS, you need more. Like this:
+
+```
+(gdb) p *(ruby_current_thread)
+$8 = {self = 94114195940880, vm = 0x5598ab322f20, stack = 0x7f9f73c9c010,
+ stack_size = 131072, cfp = 0x7f9f73d9ada0, safe_level = 0, raised_flag = 0,
+ last_status = 8, state = 0, waiting_fd = -1, passed_block = 0x0,
+ passed_bmethod_me = 0x0, passed_ci = 0x0, top_self = 94114195612680,
+ top_wrapper = 0, base_block = 0x0, root_lep = 0x0, root_svar = 8, thread_id =
+ 140322820187904,
+
+```
+
+GOODNESS. That is a lot more useful. How does gdb know that there are all these cool fields like `stack_size`? Enter DWARF. DWARF is a way to store extra debugging data about your program, so that debuggers like gdb can do their job better! It's generally stored as part of a binary.
If I run `dwarfdump` on my Ruby binary, I get some output like this:
+
+(I've redacted it heavily to make it easier to understand)
+
+```
+DW_AT_name "rb_thread_struct"
+DW_AT_byte_size 0x000003e8
+DW_TAG_member
+ DW_AT_name "self"
+ DW_AT_type <0x00000579>
+ DW_AT_data_member_location DW_OP_plus_uconst 0
+DW_TAG_member
+ DW_AT_name "vm"
+ DW_AT_type <0x0000270c>
+ DW_AT_data_member_location DW_OP_plus_uconst 8
+DW_TAG_member
+ DW_AT_name "stack"
+ DW_AT_type <0x000006b3>
+ DW_AT_data_member_location DW_OP_plus_uconst 16
+DW_TAG_member
+ DW_AT_name "stack_size"
+ DW_AT_type <0x00000031>
+ DW_AT_data_member_location DW_OP_plus_uconst 24
+DW_TAG_member
+ DW_AT_name "cfp"
+ DW_AT_type <0x00002712>
+ DW_AT_data_member_location DW_OP_plus_uconst 32
+DW_TAG_member
+ DW_AT_name "safe_level"
+ DW_AT_type <0x00000066>
+
+```
+
+So. The name of the type of `ruby_current_thread` is `rb_thread_struct`. It has size `0x3e8` (or 1000 bytes), and it has a bunch of member items. `stack_size` is one of them, at an offset of 24, and it has type 31. What's 31? No worries! We can look that up in the DWARF info too!
+
+```
+< 1><0x00000031> DW_TAG_typedef
+ DW_AT_name "size_t"
+ DW_AT_type <0x0000003c>
+< 1><0x0000003c> DW_TAG_base_type
+ DW_AT_byte_size 0x00000008
+ DW_AT_encoding DW_ATE_unsigned
+ DW_AT_name "long unsigned int"
+
+```
+
+So! `stack_size` has type `size_t`, which means `long unsigned int`, and is 8 bytes. That means that we can read the stack size!
+
+How that would break down, once we have the DWARF debugging data, is:
+
+1. Read the region of memory that `ruby_current_thread` is pointing to
+
+2. Add 24 bytes to get to `stack_size`
+
+3. Read 8 bytes (in little-endian format, since we're on x86)
+
+4. Get the answer!
+
+Which in this case is 131072 or 128 KB.
+
+To me, this makes it a lot more obvious what debugging info is **for** – if we didn't have all this extra metadata about what all these variables meant, we would have no idea what the bytes at address `0x5598ab3235b0` meant.
+
+This is also why you can install debug info for a program separately from your program – gdb doesn't care where it gets the extra debug info from.
+
+### DWARF is confusing
+
+I've been reading a bunch of DWARF info recently. Right now I'm using libdwarf which hasn't been the best experience – the API is confusing, you initialize everything in a weird way, and it's really slow (it takes 0.3 seconds to read all the debugging data out of my Ruby program which seems ridiculous). I've been told that libdw from elfutils is better.
+
+Also, I casually remarked that you can look at `DW_AT_data_member_location` to get the offset of a struct member! But I looked up on Stack Overflow how to actually do that and I got [this answer][2]. Basically you start with a check like:
+
+```
+dwarf_whatform(attrs[i], &form, &error);
+ if (form == DW_FORM_data1 || form == DW_FORM_data2 ||
+     form == DW_FORM_data4 || form == DW_FORM_data8 ||
+     form == DW_FORM_udata) {
+
+```
+
+and then it keeps GOING. Why are there 8 million different `DW_FORM_data` things I need to check for? What is happening? I have no idea.
+
+Anyway my impression is that DWARF is a large and complicated standard (and possibly the libraries people use to generate DWARF are subtly incompatible?), but it's what we have, so that's what we work with!
+
+I think it's really cool that I can write code that reads DWARF and my code actually mostly works. Except when it crashes. I'm working on that.
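+
+(By the way, the `/proc/$pid/maps` + `nm` arithmetic from the beginning of this post is easy to script. Here is a minimal shell sketch; the PID `2983` and the `bin/ruby` pattern are just the values from the examples above, so adjust both for your own process:)
+
+```
+pid=2983
+base=$(sudo grep -m1 'bin/ruby' /proc/$pid/maps | cut -d- -f1)
+offset=$(sudo nm /proc/$pid/exe | awk '$3 == "ruby_current_thread" {print $1}')
+# add the mapping base and the symbol offset, print as hex
+printf '0x%x\n' $(( 0x$base + 0x$offset ))
+```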
+
+### unwinding stacktraces
+
+In an earlier version of this post, I said that gdb unwinds stacktraces using libunwind. It turns out that this isn't true at all!
+
+Someone who's worked on gdb a lot emailed me to say that they actually spent a ton of time figuring out how to unwind stacktraces so that they can do a better job than libunwind does. This means that if you get stopped in the middle of a weird program that's done something strange with its stack, with less debug info than you might hope for, gdb will try to figure out where you are anyway. Thanks <3
+
+### other things gdb does
+
+The few things I've described here (reading memory, understanding DWARF to show you structs) aren't everything gdb does – just looking through Brendan Gregg's [gdb example from yesterday][3], we see that gdb also knows how to
+
+* disassemble assembly
+
+* show you the contents of your registers
+
+and in terms of manipulating your program, it can
+
+* set breakpoints and step through a program
+
+* modify memory (!! danger !!)
+
+Knowing more about how gdb works makes me feel a lot more confident when using it! I used to get really confused because gdb kind of acts like a C REPL sometimes – you type `ruby_current_thread->cfp->iseq`, and it feels like writing C code! But you're not really writing C at all, and it was easy for me to run into limitations in gdb and not understand why.
+
+Knowing that it's using DWARF to figure out the contents of the structs gives me a better mental model and more correct expectations! Awesome.
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2016/08/10/how-does-gdb-work/
+
+作者:[Julia Evans][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca/
+[1]:http://jvns.ca/blog/2016/06/12/a-weird-system-call-process-vm-readv/
+[2]:https://stackoverflow.com/questions/25047329/how-to-get-struct-member-offset-from-dwarf-info
+[3]:http://www.brendangregg.com/blog/2016-08-09/gdb-example-ncurses.html
diff --git a/sources/tech/20170101 How to resolve mount.nfs- Stale file handle error.md b/sources/tech/20170101 How to resolve mount.nfs- Stale file handle error.md
new file mode 100644
index 0000000000..d57280df28
--- /dev/null
+++ b/sources/tech/20170101 How to resolve mount.nfs- Stale file handle error.md
@@ -0,0 +1,96 @@
+How to resolve mount.nfs: Stale file handle error
+======
+Learn how to resolve the mount.nfs: Stale file handle error on the Linux platform. This Network File System error can be resolved from the client or server end.
+
+ _![][1]_
+
+When you are using the Network File System in your environment, you must have seen the `mount.nfs: Stale file handle` error at times. This error denotes that the NFS share is unable to mount since something has changed since the last known good configuration.
+
+Whenever you reboot the NFS server, or some of the NFS processes are not running on the client or server, or the share is not properly exported at the server; these can all be reasons for this error. Moreover, it's irritating when this error hits a previously mounted NFS share, because it means the configuration is correct, since it was previously mounted. In such a case, one can try the following commands:
+
+Make sure the NFS services are running fine on both client and server.
+
+```
+# service nfs status
+rpc.svcgssd is stopped
+rpc.mountd (pid 11993) is running...
+nfsd (pid 12009 12008 12007 12006 12005 12004 12003 12002) is running...
+rpc.rquotad (pid 11988) is running...
+```
+
+If the NFS share is currently mounted on the client, then un-mount it forcefully and try to remount it. Check that it is properly mounted with the `df` command and by changing directory inside it.
+
+```
+# umount -f /mydata_nfs
+
+# mount -t nfs server:/nfs_share /mydata_nfs
+
+# df -k
+------ output clipped -----
+server:/nfs_share 41943040 892928 41050112 3% /mydata_nfs
+```
+
+In the above mount command, `server` can be the IP or [hostname][4] of the NFS server.
+
+If you get an error while forcefully un-mounting, like below:
+
+```
+# umount -f /mydata_nfs
+umount2: Device or resource busy
+umount: /mydata_nfs: device is busy
+umount2: Device or resource busy
+umount: /mydata_nfs: device is busy
+```
+then you can check which processes or users are using that mount point with the `lsof` command, like below:
+
+```
+# lsof |grep mydata_nfs
+lsof: WARNING: can't stat() nfs file system /mydata_nfs
+ Output information may be incomplete.
+su 3327 root cwd unknown /mydata_nfs/dir (stat: Stale NFS file handle)
+bash 3484 grid cwd unknown /mydata_nfs/MYDB (stat: Stale NFS file handle)
+bash 20092 oracle11 cwd unknown /mydata_nfs/MPRP (stat: Stale NFS file handle)
+bash 25040 oracle11 cwd unknown /mydata_nfs/MUYR (stat: Stale NFS file handle)
+```
+
+In the above example, you can see that 4 PIDs are using files on the said mount point. Try killing them off to free the mount point. Once done, you will be able to un-mount it properly.
+
+Sometimes the mount command still gives the same error. In that case, try mounting after restarting the NFS service on the client using the command below.
+
+```
+# service nfs restart
+Shutting down NFS daemon: [ OK ]
+Shutting down NFS mountd: [ OK ]
+Shutting down NFS quotas: [ OK ]
+Shutting down RPC idmapd: [ OK ]
+Starting NFS services: [ OK ]
+Starting NFS quotas: [ OK ]
+Starting NFS mountd: [ OK ]
+Starting NFS daemon: [ OK ]
+```
+
+Also read: [How to restart NFS step by step in HPUX][5]
+
+If even this didn't solve your issue, the final step is to restart the NFS services on the server. Caution! This will disconnect all NFS shares exported from the server, and all clients will see their mount points disconnect. This step resolves the issue 99% of the time. If not, then the [NFS configurations][6] must be checked, especially if you changed the configuration shortly before you started seeing this error.
+
+The outputs in this post are from a RHEL 6.3 server. Drop us your comments related to this post.
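+
+If this situation keeps recurring, the manual steps above can be collected into a small helper script. Treat this as a hedged sketch only: the mount point and export are the ones used in this post, and the `fuser -km` step forcibly kills whatever is holding the mount, so use it with care.
+
+```
+#!/bin/bash
+# remount-nfs.sh : force a clean remount of a stale NFS share
+MNT=/mydata_nfs           # mount point used in this post
+SRC=server:/nfs_share     # export used in this post
+umount -f "$MNT" 2>/dev/null || { fuser -km "$MNT"; umount -f "$MNT"; }
+mount -t nfs "$SRC" "$MNT" && df -k "$MNT"
+```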
+ +-------------------------------------------------------------------------------- + +via: https://kerneltalks.com/troubleshooting/resolve-mount-nfs-stale-file-handle-error/ + +作者:[KernelTalks][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://kerneltalks.com +[1]:http://kerneltalks.com/wp-content/uploads/2017/01/nfs_error-2-150x150.png +[2]:https://www.clouddesktoponline.com/ +[3]:http://www.apps4rent.com +[4]:https://kerneltalks.com/linux/all-you-need-to-know-about-hostname-in-linux/ +[5]:http://kerneltalks.com/hpux/restart-nfs-in-hpux/ +[6]:http://kerneltalks.com/linux/nfs-configuration-linux-hpux/ diff --git a/sources/tech/20170216 25 Free Books To Learn Linux For Free.md b/sources/tech/20170216 25 Free Books To Learn Linux For Free.md deleted file mode 100644 index e549f50ea3..0000000000 --- a/sources/tech/20170216 25 Free Books To Learn Linux For Free.md +++ /dev/null @@ -1,295 +0,0 @@ -25 Free Books To Learn Linux For Free -====== -Brief: In this article, I'll share with you the best resource to **learn Linux for free**. This is a collection of websites, online video courses and free eBooks. - -**How to learn Linux?** - -This is perhaps the most commonly asked question in our Facebook group for Linux users. - -The answer to this simple looking question 'how to learn Linux' is not at all simple. - -Problem is that different people have different meanings of learning Linux. - - * If someone has never used Linux, be it command line or desktop version, that person might be just wondering to know more about it. - * If someone uses Windows as the desktop but have to use Linux command line at work, that person might be interested in learning Linux commands. - * If someone has been using Linux for sometimes and is aware of the basics but he/she might want to go to the next level. - * If someone is just interested in getting your way around a specific Linux distribution. - * If someone is trying to improve or learn Bash scripting which is almost synonymous with Linux command line. - * If someone is willing to make a career as a Linux SysAdmin or trying to improve his/her sysadmin skills. - - - -You see, the answer to "how do I learn Linux" depends on what kind of Linux knowledge you are seeking. And for this purpose, I have collected a bunch of resources that you could use for learning Linux. - -These free resources include eBooks, video courses, websites etc. And these are divided into sub-categories so that you can easily find what you are looking for when you seek to learn Linux. - -Again, there is no **best way to learn Linux**. It totally up to you how you go about learning Linux, by online web portals, downloaded eBooks, video courses or something else. - -Let's see how you can learn Linux. - -**Disclaimer** : All the books listed here are legal to download. The sources mentioned here are the official sources, as per my knowledge. However, if you find it otherwise, please let me know so that I can take appropriate action. - -![Best Free eBooks to learn Linux for Free][1] - -## 1. Free materials to learn Linux for absolute beginners - -So perhaps you have just heard of Linux from your friends or from a discussion online. You are intrigued about the hype around Linux and you are overwhelmed by the vast information available on the internet but just cannot figure out exactly where to look for to know more about Linux. - -Worry not. 
Most of us, if not all, have been to your stage. - -### Introduction to Linux by Linux Foundation [Video Course] - -If you have no idea about what is Linux and you want to get started with it, I suggest you to go ahead with the free video course provided by the [Linux Foundation][2] on [edX][3]. Consider it an official course by the organization that 'maintains' Linux. And yes, it is endorsed by [Linus Torvalds][4], the father of Linux himself. - -[Introduction To Linux][5] - -### Linux Journey [Online Portal] - -Not official and perhaps not very popular. But this little website is the perfect place for a no non-sense Linux learning for beginners. - -The website is designed beautifully and is well organized based on the topics. It also has interactive quizzes that you can take after reading a section or chapter. My advice, bookmark this website: - -[Linux Journey][6] - -### Learn Linux in 5 Days [eBook] - -This brilliant eBook is available for free exclusively to It's FOSS readers all thanks to [Linux Training Academy][7]. - -Written for absolute beginners in mind, this free Linux eBook gives you a quick overview of Linux, common Linux commands and other things that you need to learn to get started with Linux. - -You can download the book from the page below: - -[Learn Linux In 5 Days][8] - -### The Ultimate Linux Newbie Guide [eBook] - -This is a free to download eBook for Linux beginners. The eBook starts with explaining what is Linux and then go on to provide more practical usage of Linux as a desktop. - -You can download the latest version of this eBook from the link below: - -[The Ultimate Linux Newbie Guide][9] - -## 2. Free Linux eBooks for Beginners to Advanced - -This section lists out those Linux eBooks that are 'complete' in nature. - -What I mean is that these are like academic textbooks that focus on each and every aspects of Linux, well most of it. You can read those as an absolute beginner or you can read those for deeper understanding as an intermediate Linux user. You can also use them for reference even if you are at expert level. - -### Introduction to Linux [eBook] - -Introduction to Linux is a free eBook from [The Linux Documentation Project][10] and it is one of the most popular free Linux books out there. Though I think some parts of this book needs to be updated, it is still a very good book to teach you about Linux, its file system, command line, networking and other related stuff. - -[Introduction To Linux][11] - -### Linux Fundamentals [eBook] - -This free eBook by Paul Cobbaut teaches you about Linux history, installation and focuses on the basic Linux commands you should know. You can get the book from the link below: - -[Linux Fundamentals][12] - -### Advanced Linux Programming [eBook] - -As the name suggests, this is for advanced users who are or want to develop software for Linux. It deals with sophisticated features such as multiprocessing, multi-threading, interprocess communication, and interaction with hardware devices. - -Following the book will help you develop a faster, reliable and secure program that uses the full capability of a GNU/Linux system. - -[Advanced Linux Programming][13] - -### Linux From Scratch [eBook] - -If you think you know enough about Linux and you are a pro, then why not create your own Linux distribution? Linux From Scratch (LFS) is a project that provides you with step-by-step instructions for building your own custom Linux system, entirely from source code. 
- -Call it DIY Linux but this is a great way to put your Linux expertise to the next level. - -There are various sub-parts of this project, you can check it out on its website and download the books from there. - -[Linux From Scratch][14] - -## 3. Free eBooks to learn Linux command line and Shell scripting - -The real power of Linux lies in the command line and if you want to conquer Linux, you must learn Linux command line and Shell scripting. - -In fact, if you have to work on Linux terminal on your job, having a good knowledge of Linux command line will actually help you in your tasks and perhaps help you in advancing your career as well (as you'll be more efficient). - -In this section, we'll see various Linux commands free eBooks. - -### GNU/Linux Command−Line Tools Summary [eBook] - -This eBook from The Linux Documentation Project is a good place to begin with Linux command line and get acquainted with Shell scripting. - -[GNU/Linux Command−Line Tools Summary][15] - -### Bash Reference Manual from GNU [eBook] - -This is a free eBook to download from [GNU][16]. As the name suggests, it deals with Bash Shell (if I can call that). This book has over 175 pages and it covers a number of topics around Linux command line in Bash. - -You can get it from the link below: - -[Bash Reference Manual][17] - -### The Linux Command Line [eBook] - -This 500+ pages of free eBook by William Shotts is the MUST HAVE for anyone who is serious about learning Linux command line. - -Even if you think you know things about Linux, you'll be amazed at how much this book still teaches you. - -It covers things from beginners to advanced level. I bet that you'll be a hell lot of better Linux user after reading this book. Download it and keep it with you always. - -[The Linux Command Line][18] - -### Bash Guide for Beginners [eBook] - -If you just want to get started with Bash scripting, this could be a good companion for you. The Linux Documentation Project is behind this eBook again and it's the same author who wrote Introduction to Linux eBook (discussed earlier in this article). - -[Bash Guide for Beginners][19] - -### Advanced Bash-Scripting Guide [eBook] - -If you think you already know basics of Bash scripting and you want to take your skills to the next level, this is what you need. This book has over 900+ pages of various advanced commands and their examples. - -[Advanced Bash-Scripting Guide][20] - -### The AWK Programming Language [eBook] - -Not the prettiest book here but if you really need to go deeper with your scripts, this old-yet-gold book could be helpful. - -[The AWK Programming Language][21] - -### Linux 101 Hacks [eBook] - -This 270 pages eBook from The Geek Stuff teaches you the essentials of Linux command lines with easy to follow practical examples. You can get the book from the link below: - -[Linux 101 Hacks][22] - -## 4. Distribution specific free learning material - -This section deals with material that are dedicated to a certain Linux distribution. What we saw so far was the Linux in general, more focused on file systems, commands and other core stuff. - -These books, on the other hand, can be termed as manual or getting started guide for various Linux distributions. So if you are using a certain Linux distribution or planning to use it, you can refer to these resources. And yes, these books are more desktop Linux focused. - -I would also like to add that most Linux distributions have their own wiki or documentation section which are often pretty vast. 
You can always refer to them when you are online. - -### Ubuntu Manual - -Needless to say that this eBook is for Ubuntu users. It's an independent project that provides Ubuntu manual in the form of free eBook. It is updated for each version of Ubuntu. - -The book is rightly called manual because it is basically a composition of step by step instruction and aimed at absolute beginners to Ubuntu. So, you get to know Unity desktop, how to go around it and find applications etc. - -It's a must have if you never used Ubuntu Unity because it helps you to figure out how to use Ubuntu for your daily usage. - -[Ubuntu Manual][23] - -### For Linux Mint: Just Tell Me Damnit! [eBook] - -A very basic eBook that focuses on Linux Mint. It shows you how to install Linux Mint in a virtual machine, how to find software, install updates and customize the Linux Mint desktop. - -You can download the eBook from the link below: - -[Just Tell Me Damnit!][24] - -### Solus Linux Manual [eBook] - -Caution! This used to be the official manual from Solus Linux but I cannot find its mentioned on Solus Project's website anymore. I don't know if it's outdated or not. But in any case, a little something about Solu Linux won't really hurt, will it? - -[Solus Linux User Guide][25] - -## 5. Free eBooks for SysAdmin - -This section is dedicated to the SysAdmins, the superheroes for developers. I have listed a few free eBooks here for SysAdmin which will surely help anyone who is already a SysAdmin or aspirs to be one. I must add that you should also focus on essential Linux command lines as it will make your job easier. - -### The Debian Administration's Handbook [eBook] - -If you use Debian Linux for your servers, this is your bible. Book starts with Debian history, installation, package management etc and then moves on to cover topics like [LAMP][26], virtual machines, storage management and other core sysadmin stuff. - -[The Debian Administration's Handbook][27] - -### Advanced Linux System Administration [eBook] - -This is an ideal book if you are preparing for [LPI certification][28]. The book deals straightway to the topics essential for sysadmins. So knowledge of Linux command line is a prerequisite in this case. - -[Advanced Linux System Administration][29] - -### Linux System Administration [eBook] - -Another free eBook by Paul Cobbaut. The 370 pages long eBook covers networking, disk management, user management, kernel management, library management etc. - -[Linux System Administration][30] - -### Linux Servers [eBook] - -One more eBook from Paul Cobbaut of [linux-training.be][31]. This book covers web servers, mysql, DHCP, DNS, Samba and other file servers. - -[Linux Servers][32] - -### Linux Networking [eBook] - -Networking is the bread and butter of a SysAdmin, and this book by Paul Cobbaut (again) is a good reference material. - -[Linux Networking][33] - -### Linux Storage [eBook] - -This book by Paul Cobbaut (yes, him again) explains disk management on Linux in detail and introduces a lot of other storage-related technologies. - -[Linux Storage][34] - -### Linux Security [eBook] - -This is the last eBook by Paul Cobbaut in our list here. Security is one of the most important part of a sysadmin's job. This book focuses on file permissions, acls, SELinux, users and passwords etc. - -[Linux Security][35] - -## Your favorite Linux learning material? - -I know that this is a good collection of free Linux eBooks. But this could always be made better. 
-
-If you have some other resources that could be helpful in learning Linux, do share them with us. Please note: share only legal downloads, so that I can update this article with your suggestion(s) without any problem.
-
-I hope you find this article helpful in learning Linux. Your feedback is welcome :)
-
--------------------------------------------------------------------------------
-
-via: https://itsfoss.com/learn-linux-for-free/
-
-作者:[Abhishek Prakash][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://itsfoss.com/author/abhishek/
-[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/02/free-ebooks-linux-800x450.png
-[2]:https://www.linuxfoundation.org/
-[3]:https://www.edx.org
-[4]:https://www.youtube.com/watch?v=eE-ovSOQK0Y
-[5]:https://www.edx.org/course/introduction-linux-linuxfoundationx-lfs101x-0
-[6]:https://linuxjourney.com/
-[7]:https://www.linuxtrainingacademy.com/
-[8]:https://courses.linuxtrainingacademy.com/itsfoss-ll5d/
-[9]:https://linuxnewbieguide.org/ulngebook/
-[10]:http://www.tldp.org/index.html
-[11]:http://tldp.org/LDP/intro-linux/intro-linux.pdf
-[12]:http://linux-training.be/linuxfun.pdf
-[13]:http://advancedlinuxprogramming.com/alp-folder/advanced-linux-programming.pdf
-[14]:http://www.linuxfromscratch.org/
-[15]:http://tldp.org/LDP/GNU-Linux-Tools-Summary/GNU-Linux-Tools-Summary.pdf
-[16]:https://www.gnu.org/home.en.html
-[17]:https://www.gnu.org/software/bash/manual/bash.pdf
-[18]:http://linuxcommand.org/tlcl.php
-[19]:http://www.tldp.org/LDP/Bash-Beginners-Guide/Bash-Beginners-Guide.pdf
-[20]:http://www.tldp.org/LDP/abs/abs-guide.pdf
-[21]:https://ia802309.us.archive.org/25/items/pdfy-MgN0H1joIoDVoIC7/The_AWK_Programming_Language.pdf
-[22]:http://www.thegeekstuff.com/linux-101-hacks-ebook/
-[23]:https://ubuntu-manual.org/
-[24]:http://downtoearthlinux.com/resources/just-tell-me-damnit/
-[25]:https://drive.google.com/file/d/0B5Ymf8oYXx-PWTVJR0pmM3daZUE/view
-[26]:https://en.wikipedia.org/wiki/LAMP_(software_bundle)
-[27]:https://debian-handbook.info/about-the-book/
-[28]:https://www.lpi.org/our-certifications/getting-started
-[29]:http://www.nongnu.org/lpi-manuals/manual/pdf/GNU-FDL-OO-LPI-201-0.1.pdf
-[30]:http://linux-training.be/linuxsys.pdf
-[31]:http://linux-training.be/
-[32]:http://linux-training.be/linuxsrv.pdf
-[33]:http://linux-training.be/linuxnet.pdf
-[34]:http://linux-training.be/linuxsto.pdf
-[35]:http://linux-training.be/linuxsec.pdf
diff --git a/sources/tech/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md b/sources/tech/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md
new file mode 100644
index 0000000000..d0300fe6e3
--- /dev/null
+++ b/sources/tech/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md
@@ -0,0 +1,158 @@
+translating---geekpi
+
+Ansible Tutorial: Introduction to simple Ansible commands
+======
+In our earlier Ansible tutorial, we discussed [**the installation & configuration of Ansible**][1]. Now in this ansible tutorial, we will learn some basic examples of ansible commands that we will use to manage our infrastructure. So let us start by looking at the syntax of a complete ansible command:
+
+```
+$ ansible <hosts> -m <module_name> -a <arguments>
+```
+
+Here, we can also use a single host or `all` in place of `<hosts>`, & `<arguments>` are optional to provide.
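+
+For example, a hypothetical ad-hoc run against an inventory group named `webservers` (the group name and the command are made-up values, just to show the syntax filled in) would be:
+
+```
+$ ansible webservers -m shell -a "uptime"
+```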
Now let's look at some basic commands to use with ansible,
+
+### Check connectivity of hosts
+
+We have used this command in our previous tutorial also. The command to check the connectivity of hosts is
+
+```
+$ ansible <hosts> -m ping
+```
+
+### Rebooting hosts
+
+```
+$ ansible <hosts> -a "/sbin/reboot"
+```
+
+### Checking a host's system information
+
+Ansible collects the system's information for all the hosts connected to it. To display the information of hosts, run
+
+```
+$ ansible <hosts> -m setup | less
+```
+
+Secondly, to check a particular piece of info from the collected information, pass a filter argument:
+
+```
+$ ansible <hosts> -m setup -a "filter=ansible_distribution"
+```
+
+### Transferring files
+
+For transferring files, we use the module 'copy' & the complete command that is used is
+
+```
+$ ansible <hosts> -m copy -a "src=/home/dan dest=/tmp/home"
+```
+
+### Managing users
+
+So to manage the users on the connected hosts, we use the module named 'user', & the commands to use it are as follows,
+
+#### Creating a new user
+
+```
+$ ansible <hosts> -m user -a "name=testuser password=<encrypted_password>"
+```
+
+#### Deleting a user
+
+```
+$ ansible <hosts> -m user -a "name=testuser state=absent"
+```
+
+ **Note:-** To create an encrypted password, use the 'mkpasswd --method=sha-512' command.
+
+### Changing permissions & ownership
+
+So for changing ownership of files on connected hosts, we use the module named 'file', & the commands used are
+
+#### Changing permission of a file
+
+```
+$ ansible <hosts> -m file -a "dest=/home/dan/file1.txt mode=777"
+```
+
+#### Changing ownership of a file
+
+```
+$ ansible <hosts> -m file -a "dest=/home/dan/file1.txt mode=777 owner=dan group=dan"
+```
+
+### Managing Packages
+
+So, we can manage the packages installed on all the hosts connected to ansible by using the 'yum' & 'apt' modules, & the complete commands used are
+
+#### Check if package is installed & update it
+
+```
+$ ansible <hosts> -m yum -a "name=ntp state=latest"
+```
+
+#### Check if package is installed & don't update it
+
+```
+$ ansible <hosts> -m yum -a "name=ntp state=present"
+```
+
+#### Check if package is at a specific version
+
+```
+$ ansible <hosts> -m yum -a "name=ntp-1.8 state=present"
+```
+
+#### Check if package is not installed
+
+```
+$ ansible <hosts> -m yum -a "name=ntp state=absent"
+```
+
+### Managing services
+
+So to manage services with ansible, we use the module 'service', & the complete commands that are used are,
+
+#### Starting a service
+
+```
+$ ansible <hosts> -m service -a "name=httpd state=started"
+```
+
+#### Stopping a service
+
+```
+$ ansible <hosts> -m service -a "name=httpd state=stopped"
+```
+
+#### Restarting a service
+
+```
+$ ansible <hosts> -m service -a "name=httpd state=restarted"
+```
+
+So this completes our tutorial of some simple, one-line commands that can be used with ansible. Also, for our future tutorials, we will learn to create plays & playbooks that help us manage our hosts more easily & efficiently.
+
+If you think we have helped you or just want to support us, please consider these :-
+
+Connect to us: [Facebook][2] | [Twitter][3] | [Google Plus][4]
+
+Become a Supporter - [Make a contribution via PayPal][5]
+
+Linux TechLab is thankful for your continued support.
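+
+As a parting example, the user-management pieces above can be chained together. This is only a hedged sketch: it assumes the Debian-style mkpasswd from the 'whois' package (per the note earlier), and `testuser` / `all` are illustrative values:
+
+```
+$ pass=$(mkpasswd --method=sha-512)     # prompts for a password, prints the hash
+$ ansible all -m user -a "name=testuser password=$pass"
+```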
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/ansible-tutorial-simple-commands/
+
+作者:[SHUSAIN][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/create-first-ansible-server-automation-setup/
+[2]:https://www.facebook.com/linuxtechlab/
+[3]:https://twitter.com/LinuxTechLab
+[4]:https://plus.google.com/+linuxtechlab
+[5]:http://linuxtechlab.com/contact-us-2/
diff --git a/sources/tech/20170511 Working with VI editor - The Basics.md b/sources/tech/20170511 Working with VI editor - The Basics.md
index 4056c3c9ec..6653a1b2cc 100644
--- a/sources/tech/20170511 Working with VI editor - The Basics.md
+++ b/sources/tech/20170511 Working with VI editor - The Basics.md
@@ -1,3 +1,5 @@
+translating by ljgibbslf
+
 Working with VI editor : The Basics
 ======
 VI editor is a powerful command line based text editor that was originally created for Unix but has since been ported to various Unix & Linux distributions. In Linux there exists another, advanced version of the VI editor called VIM (also known as VI IMproved). VIM only adds functionalities to the already powerful VI editor, some of the added functionalities a
diff --git a/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md b/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md
deleted file mode 100644
index a12c95e409..0000000000
--- a/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md
+++ /dev/null
@@ -1,119 +0,0 @@
-translating---geekpi
-
-
-Working with Vi/Vim Editor : Advanced concepts
-======
-Earlier we discussed some basics of the VI/VIM editor, but VI & VIM are both very powerful editors and there are many other functionalities that can be used with these editors. In this tutorial, we are going to learn some advanced uses of the VI/VIM editor.
-
-(**Recommended Read**: [Working with VI editor : The Basics][1])
-
-## Opening multiple files with VI/VIM editor
-
-To open multiple files, the command is the same as for a single file; we just add the other file names as well.
-
-```
- $ vi file1 file2 file3
-```
-
-Now, to move to the next file, we can use
-
-```
-$ :n
-```
-
-or we can also use
-
-```
-$ :e filename
-```
-
-## Run external commands inside the editor
-
-We can run external Linux/Unix commands from inside the vi editor, i.e. without exiting the editor. To issue a command from the editor, go back to Command Mode if you are in Insert mode, & use the BANG, i.e. '!', followed by the command that needs to be run. The syntax for running a command is,
-
-```
-$ :! command
-```
-
-An example for this would be
-
-```
-$ :! df -H
-```
-
-## Searching for a pattern
-
-To search for a word or pattern in the text file, we use the following two commands in command mode,
-
- * command '/' searches for the pattern in the forward direction
-
- * command '?' searches for the pattern in the backward direction
-
-
-Both of these commands are used for the same purpose, the only difference being the direction they search in. An example would be,
-
- `$ :/ search pattern` (searches forward; handy if you are at the beginning of the file)
-
- `$ :? search pattern` (searches backward; handy if you are at the end of the file)
-
-## Searching & replacing a pattern
-
-We might be required to search & replace a word or a pattern in our text files.
So rather than finding each occurrence of the word in the whole text file & replacing it by hand, we can issue a command from the command mode to replace the word automatically. The syntax for search & replace is,
-
-```
-$ :%s/pattern_to_be_found/New_pattern/g
-```
-
-Suppose we want to find the word "alpha" & replace it with the word "beta"; the command would be
-
-```
-$ :%s/alpha/beta/g
-```
-
-If we want to replace only the first occurrence of the word "alpha" on each line, then drop the trailing 'g':
-
-```
-$ :%s/alpha/beta/
-```
-
-## Using Set commands
-
-We can also customize the behaviour and the look and feel of the vi/vim editor by using the set command. Here is a list of some options that can be used with the set command to modify the behaviour of the vi/vim editor,
-
- `$ :set ic ` ignores case while searching
-
- `$ :set smartcase ` enforces case-sensitive search if the pattern contains uppercase letters
-
- `$ :set nu` displays line numbers at the beginning of each line
-
- `$ :set hlsearch ` highlights the matching words
-
- `$ :set ro ` changes the file type to read-only
-
- `$ :set term ` prints the terminal type
-
- `$ :set ai ` sets auto-indent
-
- `$ :set noai ` unsets the auto-indent
-
-Some other commands to modify the editor are,
-
- `$ :colorscheme <scheme-name> ` is used to change the color scheme of the editor. (for VIM editor only)
-
- `$ :syntax on ` will turn on syntax coloring for .xml, .html files etc. (for VIM editor only)
-
-This completes our tutorial; do mention your queries/questions or suggestions in the comment box below.
-
-
--------------------------------------------------------------------------------
-
-via: http://linuxtechlab.com/working-vivim-editor-advanced-concepts/
-
-作者:[Shusain][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linuxtechlab.com/author/shsuain/
-[1]:http://linuxtechlab.com/working-vi-editor-basics/
diff --git a/sources/tech/20170526 Creating a YUM repository from ISO - Online repo.md b/sources/tech/20170526 Creating a YUM repository from ISO - Online repo.md
deleted file mode 100644
index cd21bb951a..0000000000
--- a/sources/tech/20170526 Creating a YUM repository from ISO - Online repo.md
+++ /dev/null
@@ -1,116 +0,0 @@
-Creating a YUM repository from ISO & Online repo
-======
-
-The YUM tool is one of the most important tools for CentOS/RHEL/Fedora. Though in the latest builds of Fedora it has been replaced with DNF, that does not at all mean that it has run its course. It is still widely used for installing rpm packages; we have already discussed YUM with examples in our earlier tutorial ([**READ HERE**][1]).
-
-In this tutorial, we are going to learn to create a local YUM repository, first by using an ISO image of the OS & then by creating a mirror of an online yum repository.
-
-### Creating YUM with DVD ISO
-
-We are using a CentOS 7 DVD for this tutorial, & the same process should work on RHEL 7 as well.
-
-Firstly, create a directory named YUM in the root folder
-
-```
-$ mkdir /YUM
-```
-
-then mount the CentOS 7 ISO,
-
-```
-$ mount -t iso9660 -o loop /home/dan/Centos-7-x86_x64-DVD.iso /mnt/iso/
-```
-
-Next, copy the packages from the mounted ISO to the /YUM folder. Once all the packages have been copied to the system, we will install the required packages for creating YUM.
Open /YUM & install the following RPM packages, - -``` -$ rpm -ivh deltarpm -$ rpm -ivh python-deltarpm -$ rpm -ivh createrepo -``` - -Once these packages have been installed, we will create a file named " **local.repo "** in **/etc/yum.repos.d** folder with all the yum information - -``` -$ vi /etc/yum.repos.d/local.repo -``` - -``` -LOCAL REPO] -Name=Local YUM -baseurl=file:///YUM -gpgcheck=0 -enabled=1 -``` - -Save & exit the file. Next we will create repo-data by running the following command - -``` -$ createrepo -v /YUM -``` - -It will take some time to create the repo data. Once the process finishes, run - -``` -$ yum clean all -``` - -to clean cache & then run - -``` -$ yum repolist -``` - -to check the list of all repositories. You should see repo "local.repo" in the list. - - -### Creating mirror YUM repository with online repository - -Process involved in creating a yum is similar to creating a yum with an ISO image with one exception that we will fetch our rpm packages from an online repository instead of an ISO. - -Firstly, we need to find an online repository to get the latest packages . It is advised to find an online yum that is closest to your location , in order to optimize the download speeds. We will be using below mentioned , you can select one nearest to yours location from [CENTOS MIRROR LIST][2] - -After selecting a mirror, we will sync that mirror with our system using rsync but before you do that, make sure that you plenty of space on your server - -``` -$ rsync -avz rsync://mirror.fibergrid.in/centos/7.2/os/x86_64/Packages/s/ /YUM -``` - -Sync will take quite a while (maybe an hour) depending on your internet speed. After the syncing is completed, we will update our repo-data - -``` -$ createrepo - v /YUM -``` - -Our Yum is now ready to used . We can create a cron job for our repo to be updated automatically at a determined time daily or weekly as per you needs. - -To create a cron job for syncing the repository, run - -``` -$ crontab -e -``` - -& add the following line - -``` -30 12 * * * rsync -avz http://mirror.centos.org/centos/7/os/x86_64/Packages/ /YUM -``` - -This will enable the syncing of yum every night at 12:30 AM. Also remember to create repository configuration file in /etc/yum.repos.d , as we did above. - -That's it guys, you now have your own yum repository to use. Please share this article if you like it & leave your comments/queries in the comment box down below. 
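-
-Since the repository metadata also needs refreshing after every sync, the two steps can be combined into a single cron entry, a sketch reusing the mirror from above:
-
-```
-30 12 * * * rsync -avz rsync://mirror.fibergrid.in/centos/7.2/os/x86_64/Packages/s/ /YUM && createrepo /YUM
-```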
-
-That's it guys, you now have your own yum repository to use. Please share this article if you like it & leave your comments/queries in the comment box down below.
-
-
---------------------------------------------------------------------------------
-
-via: http://linuxtechlab.com/creating-yum-repository-iso-online-repo/
-
-作者:[Shusain][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linuxtechlab.com/author/shsuain/
-[1]:http://linuxtechlab.com/using-yum-command-examples/
-[2]:http://mirror.centos.org/centos/
diff --git a/sources/tech/20170805 How to use Fio (Flexible I-O Tester) to Measure Disk Performance in Linux.md b/sources/tech/20170805 How to use Fio (Flexible I-O Tester) to Measure Disk Performance in Linux.md
new file mode 100644
index 0000000000..c2659f3664
--- /dev/null
+++ b/sources/tech/20170805 How to use Fio (Flexible I-O Tester) to Measure Disk Performance in Linux.md
@@ -0,0 +1,254 @@
+How to use Fio (Flexible I/O Tester) to Measure Disk Performance in Linux
+======
+![](https://wpmojo.com/wp-content/uploads/2017/08/wpmojo.com-how-to-use-fio-to-measure-disk-performance-in-linux-dotlayer.com-how-to-use-fio-to-measure-disk-performance-in-linux-816x457.jpeg)
+
+Fio, which stands for Flexible I/O Tester, [is a free and open source][1] disk I/O tool developed by Jens Axboe, used both for benchmarking and for stress/hardware verification.
+
+It has support for 19 different types of I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more), I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, and much more. It can work on block devices as well as files.
+
+Fio accepts job descriptions in a simple-to-understand text format. Several example job files are included. Fio displays all sorts of I/O performance information, including complete IO latencies and percentiles.
+
+It is in wide use in many places, for benchmarking, QA, and verification purposes. It supports Linux, FreeBSD, NetBSD, OpenBSD, OS X, OpenSolaris, AIX, HP-UX, Android, and Windows.
+
+In this tutorial, we will be using Ubuntu 16 and you are required to have sudo or root privileges on the computer. We will go over the installation and use of fio.
+
+### Installing fio from Source
+
+We are going to clone the repo from GitHub, install the prerequisites, and then build the packages from the source code. Let's start by making sure we have git installed.
+```
+
+sudo apt-get install git
+
+
+```
+
+For CentOS users you can use:
+```
+
+sudo yum install git
+
+
+```
+
+Now we change directory to /opt and clone the repo from Github:
+```
+
+cd /opt
+git clone https://github.com/axboe/fio
+
+
+```
+
+You should see the output below:
+```
+
+Cloning into 'fio'...
+remote: Counting objects: 24819, done.
+remote: Compressing objects: 100% (44/44), done.
+remote: Total 24819 (delta 39), reused 62 (delta 32), pack-reused 24743
+Receiving objects: 100% (24819/24819), 16.07 MiB | 0 bytes/s, done.
+Resolving deltas: 100% (16251/16251), done.
+Checking connectivity... done.
+
+
+```
+
+Now, we change directory into the fio codebase by typing the command below inside the opt folder:
+```
+
+cd fio
+
+
+```
+
+We can finally build fio from source using the `make` build utility by using the commands below:
+```
+
+# ./configure
+# make
+# make install
+
+
+```
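+
+Once `make install` finishes, a quick sanity check confirms the binary is on your path (the version reported will depend on the commit you built):
+```
+
+fio --version
+
+
+```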
+
+### Installing fio on Ubuntu
+
+For Ubuntu and Debian, fio is available in the main repository, so you can easily install it using the standard package manager.
+
+For Ubuntu and Debian you can simply use:
+```
+
+sudo apt-get install fio
+
+
+```
+
+For CentOS/RedHat:
+On CentOS, you might need to add the EPEL repository to your system before you can have access to fio. You can install it by running the following command:
+```
+
+sudo yum install epel-release -y
+
+
+```
+
+You can then install fio using the command below:
+```
+
+sudo yum install fio -y
+
+
+```
+
+### Disk Performance testing with Fio
+
+With fio installed on your system, it's time to see how to use it, with some examples below. We are going to perform a random write test, a random read test, and a mixed read/write test.
+
+### Performing a Random Write Test
+
+Let's start by running the following command. This command will write a total of 1GB [2 jobs x 512MB = 1GB], running 2 processes at a time:
+```
+
+sudo fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=512M --numjobs=2 --runtime=240 --group_reporting
+
+
+```
+```
+
+...
+fio-2.2.10
+Starting 2 processes
+
+randwrite: (groupid=0, jobs=2): err= 0: pid=7271: Sat Aug 5 13:28:44 2017
+ write: io=1024.0MB, bw=2485.5MB/s, iops=636271, runt= 412msec
+ slat (usec): min=1, max=268, avg= 1.79, stdev= 1.01
+ clat (usec): min=0, max=13, avg= 0.20, stdev= 0.40
+ lat (usec): min=1, max=268, avg= 2.03, stdev= 1.01
+ clat percentiles (usec):
+ | 1.00th=[ 0], 5.00th=[ 0], 10.00th=[ 0], 20.00th=[ 0],
+ | 30.00th=[ 0], 40.00th=[ 0], 50.00th=[ 0], 60.00th=[ 0],
+ | 70.00th=[ 0], 80.00th=[ 1], 90.00th=[ 1], 95.00th=[ 1],
+ | 99.00th=[ 1], 99.50th=[ 1], 99.90th=[ 1], 99.95th=[ 1],
+ | 99.99th=[ 1]
+ lat (usec) : 2=99.99%, 4=0.01%, 10=0.01%, 20=0.01%
+ cpu : usr=15.14%, sys=84.00%, ctx=8, majf=0, minf=26
+ IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
+ submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
+ complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
+ issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
+ latency : target=0, window=0, percentile=100.00%, depth=1
+
+Run status group 0 (all jobs):
+ WRITE: io=1024.0MB, aggrb=2485.5MB/s, minb=2485.5MB/s, maxb=2485.5MB/s, mint=412msec, maxt=412msec
+
+Disk stats (read/write):
+ sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
+
+
+```
+
+### Performing a Random Read Test
+
+We are going to perform a random read test now; we will be trying to read a total of 2GB [4 jobs x 512MB]:
+```
+
+sudo fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=0 --size=512M --numjobs=4 --runtime=240 --group_reporting
+
+
+```
+
+You should see the output below:
+```
+
+...
+fio-2.2.10 +Starting 4 processes +randread: Laying out IO file(s) (1 file(s) / 512MB) +randread: Laying out IO file(s) (1 file(s) / 512MB) +randread: Laying out IO file(s) (1 file(s) / 512MB) +randread: Laying out IO file(s) (1 file(s) / 512MB) +Jobs: 4 (f=4): [r(4)] [100.0% done] [71800KB/0KB/0KB /s] [17.1K/0/0 iops] [eta 00m:00s] +randread: (groupid=0, jobs=4): err= 0: pid=7586: Sat Aug 5 13:30:52 2017 + read : io=2048.0MB, bw=80719KB/s, iops=20179, runt= 25981msec + slat (usec): min=72, max=10008, avg=195.79, stdev=94.72 + clat (usec): min=2, max=28811, avg=2971.96, stdev=760.33 + lat (usec): min=185, max=29080, avg=3167.96, stdev=798.91 + clat percentiles (usec): + | 1.00th=[ 2192], 5.00th=[ 2448], 10.00th=[ 2576], 20.00th=[ 2736], + | 30.00th=[ 2800], 40.00th=[ 2832], 50.00th=[ 2928], 60.00th=[ 3024], + | 70.00th=[ 3120], 80.00th=[ 3184], 90.00th=[ 3248], 95.00th=[ 3312], + | 99.00th=[ 3536], 99.50th=[ 6304], 99.90th=[15168], 99.95th=[18816], + | 99.99th=[22912] + bw (KB /s): min=17360, max=25144, per=25.05%, avg=20216.90, stdev=1605.65 + lat (usec) : 4=0.01%, 10=0.01%, 250=0.01%, 500=0.01%, 750=0.01% + lat (usec) : 1000=0.01% + lat (msec) : 2=0.01%, 4=99.27%, 10=0.44%, 20=0.24%, 50=0.04% + cpu : usr=1.35%, sys=5.18%, ctx=524309, majf=0, minf=98 + IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0% + submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% + complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% + issued : total=r=524288/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 + latency : target=0, window=0, percentile=100.00%, depth=16 + +Run status group 0 (all jobs): + READ: io=2048.0MB, aggrb=80718KB/s, minb=80718KB/s, maxb=80718KB/s, mint=25981msec, maxt=25981msec + +Disk stats (read/write): + sda: ios=521587/871, merge=0/1142, ticks=96664/612, in_queue=97284, util=99.85% + + +``` + +Finally, we want to show a sample read-write test to see how the kind out output that fio returns. + +### Read Write Performance Test + +The command below will measure random read/write performance of USB Pen drive (/dev/sdc1): +``` + +sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 + + +``` + +Below is the outout we get from the command above. 
+``` + +fio-2.2.10 +Starting 1 process +Jobs: 1 (f=1): [m(1)] [100.0% done] [217.8MB/74452KB/0KB /s] [55.8K/18.7K/0 iops] [eta 00m:00s] +test: (groupid=0, jobs=1): err= 0: pid=8475: Sat Aug 5 13:36:04 2017 + read : io=3071.7MB, bw=219374KB/s, iops=54843, runt= 14338msec + write: io=1024.4MB, bw=73156KB/s, iops=18289, runt= 14338msec + cpu : usr=6.78%, sys=20.81%, ctx=1007218, majf=0, minf=9 + IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% + submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% + complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% + issued : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 + latency : target=0, window=0, percentile=100.00%, depth=64 + +Run status group 0 (all jobs): + READ: io=3071.7MB, aggrb=219374KB/s, minb=219374KB/s, maxb=219374KB/s, mint=14338msec, maxt=14338msec + WRITE: io=1024.4MB, aggrb=73156KB/s, minb=73156KB/s, maxb=73156KB/s, mint=14338msec, maxt=14338msec + +Disk stats (read/write): + sda: ios=774141/258944, merge=1463/899, ticks=748800/150316, in_queue=900720, util=99.35% + + +``` + +We hope you enjoyed this tutorial and enjoyed following along, Fio is a very useful tool and we hope you can use it in your next debugging activity. If you enjoyed reading this post feel free to leave a comment of questions. Go ahead and clone the repo and play around with the code. + + +-------------------------------------------------------------------------------- + +via: https://wpmojo.com/how-to-use-fio-to-measure-disk-performance-in-linux/ + +作者:[Alex Pearson][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://wpmojo.com/author/wpmojo/ +[1]:https://github.com/axboe/fio diff --git a/sources/tech/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md b/sources/tech/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md new file mode 100644 index 0000000000..45ce0c0a7a --- /dev/null +++ b/sources/tech/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md @@ -0,0 +1,333 @@ +How To Set Up PF Firewall on FreeBSD to Protect a Web Server +====== + +I am a new FreeBSD server user and moved from netfilter on Linux. How do I setup a firewall with PF on FreeBSD server to protect a web server with single public IP address and interface? + + +PF is an acronym for packet filter. It was created for OpenBSD but has been ported to FreeBSD and other operating systems. It is a stateful packet filtering engine. This tutorial will show you how to set up a firewall with PF on FreeBSD 10.x and 11.x server to protect your web server. + + +## Step 1 - Turn on PF firewall + +You need to add the following three lines to /etc/rc.conf file: +``` +# echo 'pf_enable="YES"' >> /etc/rc.conf +# echo 'pf_rules="/usr/local/etc/pf.conf"' >> /etc/rc.conf +# echo 'pflog_enable="YES"' >> /etc/rc.conf +# echo 'pflog_logfile="/var/log/pflog"' >> /etc/rc.conf +``` +Where, + + 1. **pf_enable="YES"** - Turn on PF service. + 2. **pf_rules="/usr/local/etc/pf.conf"** - Read PF rules from this file. + 3. **pflog_enable="YES"** - Turn on logging support for PF. + 4. **pflog_logfile="/var/log/pflog"** - File where pflogd should store the logfile i.e. store logs in /var/log/pflog file. 
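+
+If you prefer not to echo lines into rc.conf by hand, the same settings can be applied with FreeBSD's sysrc(8) utility, which edits rc.conf safely. A quick sketch:
+```
+# sysrc pf_enable="YES"
+# sysrc pf_rules="/usr/local/etc/pf.conf"
+# sysrc pflog_enable="YES"
+# sysrc pflog_logfile="/var/log/pflog"
+```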
+ + + +[![How To Set Up a Firewall with PF on FreeBSD to Protect a Web Server][1]][1] + +## Step 2 - Creating firewall rules in /usr/local/etc/pf.conf + +Type the following command: +``` +# vi /usr/local/etc/pf.conf +``` +Append the following PF rulesets : +``` +# vim: set ft=pf +# /usr/local/etc/pf.conf + +## Set your public interface ## +ext_if="vtnet0" + +## Set your server public IP address ## +ext_if_ip="172.xxx.yyy.zzz" + +## Set and drop these IP ranges on public interface ## +martians = "{ 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12, \ + 10.0.0.0/8, 169.254.0.0/16, 192.0.2.0/24, \ + 0.0.0.0/8, 240.0.0.0/4 }" + +## Set http(80)/https (443) port here ## +webports = "{http, https}" + +## enable these services ## +int_tcp_services = "{domain, ntp, smtp, www, https, ftp, ssh}" +int_udp_services = "{domain, ntp}" + +## Skip loop back interface - Skip all PF processing on interface ## +set skip on lo + +## Sets the interface for which PF should gather statistics such as bytes in/out and packets passed/blocked ## +set loginterface $ext_if + +## Set default policy ## +block return in log all +block out all + +# Deal with attacks based on incorrect handling of packet fragments +scrub in all + +# Drop all Non-Routable Addresses +block drop in quick on $ext_if from $martians to any +block drop out quick on $ext_if from any to $martians + +## Blocking spoofed packets +antispoof quick for $ext_if + +# Open SSH port which is listening on port 22 from VPN 139.xx.yy.zz Ip only +# I do not allow or accept ssh traffic from ALL for security reasons +pass in quick on $ext_if inet proto tcp from 139.xxx.yyy.zzz to $ext_if_ip port = ssh flags S/SA keep state label "USER_RULE: Allow SSH from 139.xxx.yyy.zzz" +## Use the following rule to enable ssh for ALL users from any IP address # +## pass in inet proto tcp to $ext_if port ssh +### [ OR ] ### +## pass in inet proto tcp to $ext_if port 22 + +# Allow Ping-Pong stuff. Be a good sysadmin +pass inet proto icmp icmp-type echoreq + +# All access to our Nginx/Apache/Lighttpd Webserver ports +pass proto tcp from any to $ext_if port $webports + +# Allow essential outgoing traffic +pass out quick on $ext_if proto tcp to any port $int_tcp_services +pass out quick on $ext_if proto udp to any port $int_udp_services + +# Add custom rules below +``` + +Save and close the file. PR [welcome here to improve rulesets][2]. To check for syntax error, run: +`# service pf check` +OR +`/etc/rc.d/pf check` +OR +`# pfctl -n -f /usr/local/etc/pf.conf ` + +## Step 3 - Start PF firewall + +The commands are as follows. 
Be careful you might be disconnected from your server over ssh based session: + +### Start PF + +`# service pf start` + +### Stop PF + +`# service pf stop` + +### Check PF for syntax error + +`# service pf check` + +### Restart PF + +`# service pf restart` + +### See PF status + +`# service pf status` +Sample outputs: +``` +Status: Enabled for 0 days 00:02:18 Debug: Urgent + +Interface Stats for vtnet0 IPv4 IPv6 + Bytes In 19463 0 + Bytes Out 18541 0 + Packets In + Passed 244 0 + Blocked 3 0 + Packets Out + Passed 136 0 + Blocked 12 0 + +State Table Total Rate + current entries 1 + searches 395 2.9/s + inserts 4 0.0/s + removals 3 0.0/s +Counters + match 19 0.1/s + bad-offset 0 0.0/s + fragment 0 0.0/s + short 0 0.0/s + normalize 0 0.0/s + memory 0 0.0/s + bad-timestamp 0 0.0/s + congestion 0 0.0/s + ip-option 0 0.0/s + proto-cksum 0 0.0/s + state-mismatch 0 0.0/s + state-insert 0 0.0/s + state-limit 0 0.0/s + src-limit 0 0.0/s + synproxy 0 0.0/s + map-failed 0 0.0/s +``` + + +### Command to start/stop/restart pflog service + +Type the following commands: +``` +# service pflog start +# service pflog stop +# service pflog restart +``` + +## Step 4 - A quick introduction to pfctl command + +You need to use the pfctl command to see PF ruleset and parameter configuration including status information from the packet filter. Let us see all common commands: + +### Show PF rules information + +`# pfctl -s rules` +Sample outputs: +``` +block return in log all +block drop out all +block drop in quick on ! vtnet0 inet from 172.xxx.yyy.zzz/24 to any +block drop in quick inet from 172.xxx.yyy.zzz/24 to any +pass in quick on vtnet0 inet proto tcp from 139.aaa.ccc.ddd to 172.xxx.yyy.zzz/24 port = ssh flags S/SA keep state label "USER_RULE: Allow SSH from 139.aaa.ccc.ddd" +pass inet proto icmp all icmp-type echoreq keep state +pass out quick on vtnet0 proto tcp from any to any port = domain flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = ntp flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = smtp flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = http flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = https flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = ftp flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = ssh flags S/SA keep state +pass out quick on vtnet0 proto udp from any to any port = domain keep state +pass out quick on vtnet0 proto udp from any to any port = ntp keep state +``` + +#### Show verbose output for each rule + +`# pfctl -v -s rules` + +#### Add rule numbers with verbose output for each rule + +`# pfctl -vvsr show` + +#### Show state + +``` +# pfctl -s state +# pfctl -s state | more +# pfctl -s state | grep 'something' +``` + +### How to disable PF from the CLI + +`# pfctl -d ` + +### How to enable PF from the CLI + +`# pfctl -e ` + +### How to flush ALL PF rules/nat/tables from the CLI + +`# pfctl -F all` +Sample outputs: +``` +rules cleared +nat cleared +0 tables deleted. 
+2 states cleared +source tracking entries cleared +pf: statistics cleared +pf: interface flags reset +``` + +#### How to flush only the PF RULES from the CLI + +`# pfctl -F rules ` + +#### How to flush only queue's from the CLI + +`# pfctl -F queue ` + +#### How to flush all stats that are not part of any rule from the CLI + +`# pfctl -F info` + +#### How to clear all counters from the CLI + +`# pfctl -z clear ` + +## Step 5 - See PF log + +PF logs are in binary format. To see them type: +`# tcpdump -n -e -ttt -r /var/log/pflog` +Sample outputs: +``` +Aug 29 15:41:11.757829 rule 0/(match) block in on vio0: 86.47.225.151.55806 > 45.FOO.BAR.IP.23: S 757158343:757158343(0) win 52206 [tos 0x28] +Aug 29 15:41:44.193309 rule 0/(match) block in on vio0: 5.196.83.88.25461 > 45.FOO.BAR.IP.26941: S 2224505792:2224505792(0) ack 4252565505 win 17520 (DF) [tos 0x24] +Aug 29 15:41:54.628027 rule 0/(match) block in on vio0: 45.55.13.94.50217 > 45.FOO.BAR.IP.465: S 3941123632:3941123632(0) win 65535 +Aug 29 15:42:11.126427 rule 0/(match) block in on vio0: 87.250.224.127.59862 > 45.FOO.BAR.IP.80: S 248176545:248176545(0) win 28200 (DF) +Aug 29 15:43:04.953537 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.7475: S 1164335542:1164335542(0) win 1024 +Aug 29 15:43:05.122156 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.7475: R 1164335543:1164335543(0) win 1200 +Aug 29 15:43:37.302410 rule 0/(match) block in on vio0: 94.130.12.27.18080 > 45.FOO.BAR.IP.64857: S 683904905:683904905(0) ack 4000841729 win 16384 +Aug 29 15:44:46.574863 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.7677: S 3451987887:3451987887(0) win 1024 +Aug 29 15:44:46.819754 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.7677: R 3451987888:3451987888(0) win 1200 +Aug 29 15:45:21.194752 rule 0/(match) block in on vio0: 185.40.4.130.55910 > 45.FOO.BAR.IP.80: S 3106068642:3106068642(0) win 1024 +Aug 29 15:45:32.999219 rule 0/(match) block in on vio0: 185.40.4.130.55910 > 45.FOO.BAR.IP.808: S 322591763:322591763(0) win 1024 +Aug 29 15:46:30.157884 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6511: S 2412580953:2412580953(0) win 1024 [tos 0x28] +Aug 29 15:46:30.252023 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6511: R 2412580954:2412580954(0) win 1200 [tos 0x28] +Aug 29 15:49:44.337015 rule 0/(match) block in on vio0: 189.219.226.213.22640 > 45.FOO.BAR.IP.23: S 14807:14807(0) win 14600 [tos 0x28] +Aug 29 15:49:55.161572 rule 0/(match) block in on vio0: 5.196.83.88.25461 > 45.FOO.BAR.IP.40321: S 1297217585:1297217585(0) ack 1051525121 win 17520 (DF) [tos 0x24] +Aug 29 15:49:59.735391 rule 0/(match) block in on vio0: 36.7.147.209.2545 > 45.FOO.BAR.IP.3389: SWE 3577047469:3577047469(0) win 8192 (DF) [tos 0x2 (E)] +Aug 29 15:50:00.703229 rule 0/(match) block in on vio0: 36.7.147.209.2546 > 45.FOO.BAR.IP.3389: SWE 1539382950:1539382950(0) win 8192 (DF) [tos 0x2 (E)] +Aug 29 15:51:33.880334 rule 0/(match) block in on vio0: 45.55.22.21.53510 > 45.FOO.BAR.IP.2362: udp 14 +Aug 29 15:51:34.006656 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6491: S 151489102:151489102(0) win 1024 [tos 0x28] +Aug 29 15:51:34.274654 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6491: R 151489103:151489103(0) win 1200 [tos 0x28] +Aug 29 15:51:36.393019 rule 0/(match) block in on vio0: 60.191.38.78.4249 > 45.FOO.BAR.IP.8000: S 3746478095:3746478095(0) win 29200 (DF) +Aug 29 15:51:57.213051 rule 0/(match) block 
in on vio0: 24.137.245.138.7343 > 45.FOO.BAR.IP.5358: S 14134:14134(0) win 14600 +Aug 29 15:52:37.852219 rule 0/(match) block in on vio0: 122.226.185.125.51128 > 45.FOO.BAR.IP.23: S 1715745381:1715745381(0) win 5840 (DF) +Aug 29 15:53:31.309325 rule 0/(match) block in on vio0: 189.218.148.69.377 > 45.FOO.BAR.IP5358: S 65340:65340(0) win 14600 [tos 0x28] +Aug 29 15:53:31.809570 rule 0/(match) block in on vio0: 13.93.104.140.53184 > 45.FOO.BAR.IP.1433: S 39854048:39854048(0) win 1024 +Aug 29 15:53:32.138231 rule 0/(match) block in on vio0: 13.93.104.140.53184 > 45.FOO.BAR.IP.1433: R 39854049:39854049(0) win 1200 +Aug 29 15:53:41.459088 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6028: S 168338703:168338703(0) win 1024 +Aug 29 15:53:41.789732 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6028: R 168338704:168338704(0) win 1200 +Aug 29 15:54:34.993594 rule 0/(match) block in on vio0: 212.47.234.50.5102 > 45.FOO.BAR.IP.5060: udp 408 (DF) [tos 0x28] +Aug 29 15:54:57.987449 rule 0/(match) block in on vio0: 51.15.69.145.5100 > 45.FOO.BAR.IP.5060: udp 406 (DF) [tos 0x28] +Aug 29 15:55:07.001743 rule 0/(match) block in on vio0: 190.83.174.214.58863 > 45.FOO.BAR.IP.23: S 757158343:757158343(0) win 27420 +Aug 29 15:55:51.269549 rule 0/(match) block in on vio0: 142.217.201.69.26112 > 45.FOO.BAR.IP.22: S 757158343:757158343(0) win 22840 +Aug 29 15:58:41.346028 rule 0/(match) block in on vio0: 169.1.29.111.29765 > 45.FOO.BAR.IP.23: S 757158343:757158343(0) win 28509 +Aug 29 15:59:11.575927 rule 0/(match) block in on vio0: 187.160.235.162.32427 > 45.FOO.BAR.IP.5358: S 22445:22445(0) win 14600 [tos 0x28] +Aug 29 15:59:37.826598 rule 0/(match) block in on vio0: 94.74.81.97.54656 > 45.FOO.BAR.IP.3128: S 2720157526:2720157526(0) win 1024 [tos 0x28] +Aug 29 15:59:37.991171 rule 0/(match) block in on vio0: 94.74.81.97.54656 > 45.FOO.BAR.IP.3128: R 2720157527:2720157527(0) win 1200 [tos 0x28] +Aug 29 16:01:36.990050 rule 0/(match) block in on vio0: 182.18.8.28.23299 > 45.FOO.BAR.IP.445: S 1510146048:1510146048(0) win 16384 +``` + +To see live log run: +`# tcpdump -n -e -ttt -i pflog0` +For more info the [PF FAQ][3], [FreeBSD HANDBOOK][4] and the following man pages: +``` +# man tcpdump +# man pfctl +# man pf +``` + +## about the author: + +The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][5], [Facebook][6], [Google+][7]. 
+ +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/faq/how-to-set-up-a-firewall-with-pf-on-freebsd-to-protect-a-web-server/ + +作者:[Vivek Gite][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/media/new/faq/2017/08/howto-setup-a-firewall-with-pf-on-freebsd.001.jpeg +[2]:https://github.com/nixcraft/pf.conf/blob/master/pf.conf +[3]:https://www.openbsd.org/faq/pf/ +[4]:https://www.freebsd.org/doc/handbook/firewalls.html +[5]:https://twitter.com/nixcraft +[6]:https://facebook.com/nixcraft +[7]:https://plus.google.com/+CybercitiBiz diff --git a/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md b/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md deleted file mode 100644 index 742e1d9f92..0000000000 --- a/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md +++ /dev/null @@ -1,102 +0,0 @@ -3 text editor alternatives to Emacs and Vim -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48) - -Before you start reaching for those implements of mayhem, Emacs and Vim fans, understand that this article isn't about putting the boot to your favorite editor. I'm a professed Emacs guy, but one who also likes Vim. A lot. - -That said, I realize that Emacs and Vim aren't for everyone. It might be that the silliness of the so-called [Editor war][1] has turned some people off. Or maybe they just want an editor that is less demanding and has a more modern sheen. - -If you're looking for an alternative to Emacs or Vim, keep reading. Here are three that might interest you. - -### Geany - - -![Editing a LaTeX document with Geany][3] - - -Editing a LaTeX document with Geany - -[Geany][4] is an old favorite from the days when I computed on older hardware running lightweight Linux distributions. Geany started out as my [LaTeX][5] editor, but quickly became the app in which I did all of my text editing. - -Although Geany is billed as a small and fast [IDE][6] (integrated development environment), it's definitely not just a techie's tool. Geany is small and it is fast, even on older hardware or a [Chromebook running Linux][7]. You can use Geany for everything from editing configuration files to maintaining a task list or journal, from writing an article or a book to doing some coding and scripting. - -[Plugins][8] give Geany a bit of extra oomph. Those plugins expand the editor's capabilities, letting you code or work with markup languages more effectively, manipulate text, and even check your spelling. - -### Atom - - -![Editing a webpage with Atom][10] - - -Editing a webpage with Atom - -[Atom][11] is a new-ish kid in the text editing neighborhood. In the short time it's been on the scene, though, Atom has gained a dedicated following. - -What makes Atom attractive is that you can customize it. If you're of a more technical bent, you can fiddle with the editor's configuration. If you aren't all that technical, Atom has [a number of themes][12] you can use to change how the editor looks. - -And don't discount Atom's thousands of [packages][13]. They extend the editor in many different ways, enabling you to turn it into the text editing or development environment that's right for you. Atom isn't just for coders. 
It's a very good [text editor for writers][14], too.
-
-### Xed
-
-![Writing this article in Xed][16]
-
-
-Writing this article in Xed
-
-Maybe Atom and Geany are a bit heavy for your tastes. Maybe you want a lighter editor, something that's not bare bones but also doesn't have features you'll rarely (if ever) use. In that case, [Xed][17] might be what you're looking for.
-
-If Xed looks familiar, it's a fork of the Pluma text editor for the MATE desktop environment. I've found that Xed is a bit faster and a bit more responsive than Pluma--your mileage may vary, though.
-
-Although Xed isn't as rich in features as other editors, it doesn't do too badly. It has solid syntax highlighting, a better-than-average search and replace function, a spelling checker, and a tabbed interface for editing multiple files in a single window.
-
-### Other editors worth exploring
-
-I'm not a KDE guy, but when I worked in that environment, [KDevelop][18] was my go-to editor for heavy-duty work. It's a lot like Geany in that KDevelop is powerful and flexible without a lot of bulk.
-
-Although I've never really felt the love, more than a couple of people I know swear by [Brackets][19]. It is powerful, and I have to admit its [extensions][20] look useful.
-
-Billed as a "text editor for developers," [Notepadqq][21] is an editor that's reminiscent of [Notepad++][22]. It's in the early stages of development, but Notepadqq does look promising.
-
-[Gedit][23] and [Kate][24] are excellent for anyone whose text editing needs are simple. They're definitely not bare bones--they pack enough features to do heavy text editing. Both Gedit and Kate balance that by being speedy and easy to use.
-
-Do you have another favorite text editor that's not Emacs or Vim? Feel free to share by leaving a comment.
-
-### About The Author
-Scott Nesbitt - I'm a long-time user of free/open source software, and I write various things for both fun and profit. I don't take myself too seriously, and I do all of my own stunts. You can find me at these fine establishments on the web.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/17/9/3-alternatives-emacs-and-vim
-
-作者:[Scott Nesbitt][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/scottnesbitt
-[1]:https://en.wikipedia.org/wiki/Editor_war
-[2]:/file/370196
-[3]:https://opensource.com/sites/default/files/u128651/geany.png (Editing a LaTeX document with Geany)
-[4]:https://www.geany.org/
-[5]:https://opensource.com/article/17/6/introduction-latex
-[6]:https://en.wikipedia.org/wiki/Integrated_development_environment
-[7]:https://opensource.com/article/17/4/linux-chromebook-gallium-os
-[8]:http://plugins.geany.org/
-[9]:/file/370191
-[10]:https://opensource.com/sites/default/files/u128651/atom.png (Editing a webpage with Atom)
-[11]:https://atom.io
-[12]:https://atom.io/themes
-[13]:https://atom.io/packages
-[14]:https://opensource.com/article/17/5/atom-text-editor-packages-writers
-[15]:/file/370201
-[16]:https://opensource.com/sites/default/files/u128651/xed.png (Writing this article in Xed)
-[17]:https://github.com/linuxmint/xed
-[18]:https://www.kdevelop.org/
-[19]:http://brackets.io/
-[20]:https://registry.brackets.io/
-[21]:http://notepadqq.altervista.org/s/
-[22]:https://opensource.com/article/16/12/notepad-text-editor
-[23]:https://wiki.gnome.org/Apps/Gedit
-[24]:https://kate-editor.org/
diff --git a/sources/tech/20170919 What Are Bitcoins.md b/sources/tech/20170919 What Are Bitcoins.md
deleted file mode 100644
index c61b32b76a..0000000000
--- a/sources/tech/20170919 What Are Bitcoins.md
+++ /dev/null
@@ -1,82 +0,0 @@
-translating by Flowsnow
-
-What Are Bitcoins?
-======
-
-![what are bitcoins](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-are-bitcoins_orig.jpg)
-
- **[Bitcoin][1]** is a digital currency or electronic cash that relies on peer to peer technology for completing transactions. Since a peer to peer network is used as the major infrastructure, bitcoins provide a community-managed economy. That is to say, bitcoins eliminate the centralized way of managing currency and promote community management of currency. Also, most of the software related to bitcoin mining and managing of bitcoin digital cash is open source.
-
-The first Bitcoin software was developed by Satoshi Nakamoto and is based on an open source cryptographic protocol. Bitcoin's smallest unit is known as the Satoshi, which is basically one hundred-millionth of a single bitcoin (0.00000001 BTC).
-
-One cannot underestimate the boundaries BITCOINS eliminate in the digital economy. For instance, the BITCOIN removes control over currency by a centralised agency and offers control and management to the community as a whole. Furthermore, although the BITCOIN is based on an open source cryptographic protocol, it remains exposed to issues such as fluctuating value, deflation and inflation, among others. While many internet users are becoming aware of the privacy they should exercise when completing some online transactions, bitcoin is gaining more popularity than ever before. However, those who know about the dark web and how it works can acknowledge that some people began using it long ago.
- -On the downside, the bitcoin is also very secure in making anonymous payments which may be a threat to security or personal health. For instance, the dark web markets are the major suppliers and retailers of imported drugs and even weapons. The use of BITCOINs in the dark web facilitates a safe network for such criminal activities. Despite that, if put to good use, bitcoin has many benefits that can eliminate some of the economic fallacy as a result of centralized agency management of currency. In addition, the bitcoin allows for instance exchange of cash anywhere in the world. The use of bitcoins also mitigates counterfeiting, printing, or devaluation over time. Also, while relying on peer to peer network as its backbone, it promotes the distributed authority of transaction records making it safe to make exchanges. - -Other advantages of the bitcoin include; - -* In the online business world, bitcoin promotes money security and total control. This is because buyers are protected against merchants who may want to charge extra for a lower cost service. The buyer can also choose not to share personal information after making a transaction. Besides, identity theft protection is achieved as a result of backed up hiding personal information. - -* Bitcoins are provided alternatives to major common currency catastrophes such as getting lost, frozen or damaged. However, it is recommended to always make a backup of your bitcoins and encrypt them with a password. - -* In making online purchases and payments using bitcoins, there is a small fee or zero transaction fee charged. This promotes affordability of use. - -* Merchants also face fewer risks that could result from fraud as bitcoin transactions cannot be reversed, unlike other currencies in electronic form. Bitcoins also prove useful even in moments of high crime rate and fraud since it is difficult to con someone over an open public ledger (Blockchain). - -* Bitcoin currency is also hard to be manipulated as it is open source and the cryptographic protocol is very secure. - -* Transactions can also be verified and approved, anywhere, anytime. This is the level of flexibility offered by this digital currency. - -Also Read - [Bitkey A Linux Distribution Dedicated To Bitcoin Transactions][2] - -### How To Mine Bitcoins and The Applications to Accomplish Necessary Bitcoin Management Tasks - -​In the digital currency, BITCOIN mining and management requires additional software. There are numerous open source bitcoin management software that make it easy to make payments, receive payments, encrypt and backup of your bitcoins and also bitcoin mining software. There are sites such as; [Freebitcoin][4] where one earns free bitcoins by viewing ads, [MoonBitcoin][5] is another site that one can sign up for free and earn bitcoins. However, it is convenient if one has spare time and a sizable network of friends participating in the same. There are many sites offering bitcoin mining and one can easily sign up and start mining. One of the major secrets is referring as many people as you can to create a large network. - -Applications required for use with bitcoins include the bitcoin wallet which allows one to safely keep bitcoins. This is just like the physical wallet using to keep hard cash but in a digital form. The wallet can be downloaded here - [Bitcoin - Wallet][6] . Other similar applications include; the [Blockchain][7] which works similar to the Bitcoin Wallet. - -The screenshots below show the Freebitco and MoonBitco mining sites respectively. 
- - [![freebitco bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/freebitco-bitcoin-mining-site_orig.jpg)][8] - [![moonbitcoin bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/moonbitcoin-bitcoin-mining-site_orig.png)][9] - -​There are various ways of acquiring the bitcoin currency. Some of them include the use of bitcoin mining rigs, purchasing of bitcoins in exchange markets and doing free bitcoin mining online. Purchasing of bitcoins can be done at; [MtGox][10] , [bitNZ][11] , [Bitstamp][12] , [BTC-E][13] , [VertEx][14] , etc.. Several mining open source applications are available online. These applications include; Bitminter, [5OMiner][15] , [BFG Miner][16] among others. These applications make use of some graphics card and processor features to generate bitcoins. The efficiency of mining bitcoins on a pc largely depends on the type of graphics card and the processor of the mining rig. Besides, there are many secure online storages for backing up bitcoins. These sites provide bitcoin storage services free of charge. Examples of bitcoin managing sites include; [xapo][17] , [BlockChain][18] etc. signing up on these sites require a valid email and phone number for verification. Xapo offers additional security through the phone application by requesting for verification whenever a new sign in is made. - -### Disadvantages Of Bitcoins - -​The numerous advantages ripped from using bitcoins digital currency cannot be overlooked. However, as it is still in its infancy stage, the bitcoin currency meets several points of resistance. For instance, the majority of individual are not fully aware of the bitcoin digital currency and how it works. The lack of awareness can be mitigated through education and creation of awareness. Bitcoin users also face volatility as the demand for bitcoins is higher than the available amount of coins. However, given more time, volatility will be lowered as when many people will start using bitcoins. - -### Improvements Can be Made - -​Based on the infancy of the [bitcoin technology][19] , there is still room for changes to make it more secure and reliable. Given more time, the bitcoin currency will be developed enough to provide flexibility as a common currency. For the bitcoin to succeed, many people need to be made aware of it besides being given information on how it works and its benefits. 
-
---------------------------------------------------------------------------------
-
-via: http://www.linuxandubuntu.com/home/things-you-need-to-know-about-bitcoins
-
-作者:[LINUXANDUBUNTU][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.linuxandubuntu.com/
-[1]:http://www.linuxandubuntu.com/home/bitkey-a-linux-distribution-dedicated-for-conducting-bitcoin-transactions
-[2]:http://www.linuxandubuntu.com/home/bitkey-a-linux-distribution-dedicated-for-conducting-bitcoin-transactions
-[3]:http://www.linuxandubuntu.com/home/things-you-need-to-know-about-bitcoins
-[4]:https://freebitco.in/?r=2167375
-[5]:http://moonbit.co.in/?ref=c637809a5051
-[6]:https://bitcoin.org/en/choose-your-wallet
-[7]:https://blockchain.info/wallet/
-[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/freebitco-bitcoin-mining-site_orig.jpg
-[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/moonbitcoin-bitcoin-mining-site_orig.png
-[10]:http://mtgox.com/
-[11]:https://en.bitcoin.it/wiki/BitNZ
-[12]:https://www.bitstamp.net/
-[13]:https://btc-e.com/
-[14]:https://www.vertexinc.com/
-[15]:https://www.downloadcloud.com/bitcoin-miner-software.html
-[16]:https://github.com/luke-jr/bfgminer
-[17]:https://xapo.com/
-[18]:https://www.blockchain.com/
-[19]:https://en.wikipedia.org/wiki/Bitcoin
diff --git a/sources/tech/20170920 Easy APT Repository - Iain R. Learmonth.md b/sources/tech/20170920 Easy APT Repository - Iain R. Learmonth.md
deleted file mode 100644
index b0031f8c94..0000000000
--- a/sources/tech/20170920 Easy APT Repository - Iain R. Learmonth.md
+++ /dev/null
@@ -1,83 +0,0 @@
-Easy APT Repository · Iain R. Learmonth
-======
-
-The [PATHspider][5] software I maintain as part of my work depends on some features in [cURL][6] and in [PycURL][7] that have [only][8] [just][9] been merged or are still [awaiting][10] merge. I need to build a docker container that includes these as Debian packages, so I need to quickly build an APT repository.
-
-A Debian repository can essentially be seen as a static website and the contents are GPG signed so it doesn't necessarily need to be hosted somewhere trusted (unless availability is critical for your application). I host my blog with [Netlify][11], a static website host, and I figured they would be perfect for this use case. They also [support open source projects][12].
-
-There is a CLI tool for netlify which you can install with:
-```
-sudo apt install npm
-sudo npm install -g netlify-cli
-
-```
-
-The basic steps for setting up a repository are:
-```
-mkdir repository
-cp /path/to/*.deb repository/
-cd repository
-apt-ftparchive packages . > Packages
-apt-ftparchive release . > Release
-gpg --clearsign -o InRelease Release
-netlify deploy
-
-```
-
-Once you've followed these steps, and created a new site on Netlify, you'll be able to manage this site also through the web interface. A few things you might want to do are set up a custom domain name for your repository, or enable HTTPS with Let's Encrypt. (Make sure you have `apt-transport-https` if you're going to enable HTTPS though.)
-
-To add this repository to your apt sources:
-```
-gpg --export -a YOURKEYID | sudo apt-key add -
-echo "deb https://SUBDOMAIN.netlify.com/ /" | sudo tee -a /etc/apt/sources.list
-sudo apt update
-
-```
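-
-To confirm that apt can now see packages from the new repository, you can query one of the packages you published (the package name below is hypothetical):
-```
-apt-cache policy your-package-name
-
-```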
Beware of [APT pinning][13] as you may find that the newer versions on your repository are not actually the preferred versions according to your policy. - -**Update** : If you're wanting a solution that would be more suitable for regular use, take a look at [repropro][14]. If you're wanting to have end-users add your apt repository as a third-party repository to their system, please take a look at [this page on the Debian wiki][15] which contains advice on how to instruct users to use your repository. - -**Update 2** : Another commenter has pointed out [aptly][16], which offers a greater feature set and removes some of the restrictions imposed by repropro. I've never use aptly myself so can't comment on specifics, but from the website it looks like it might be a nicely polished tool. - - - --------------------------------------------------------------------------------- - -via: https://iain.learmonth.me/blog/2017/2017w383/ - -作者:[Iain R. Learmonth][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://iain.learmonth.me -[1]:https://iain.learmonth.me/tags/netlify/ -[2]:https://iain.learmonth.me/tags/debian/ -[3]:https://iain.learmonth.me/tags/apt/ -[4]:https://iain.learmonth.me/tags/foss/ -[5]:https://pathspider.net -[6]:http://curl.haxx.se/ -[7]:http://pycurl.io/ -[8]:https://github.com/pycurl/pycurl/pull/456 -[9]:https://github.com/pycurl/pycurl/pull/458 -[10]:https://github.com/curl/curl/pull/1847 -[11]:http://netlify.com/ -[12]:https://www.netlify.com/open-source/ -[13]:https://wiki.debian.org/AptPreferences -[14]:https://mirrorer.alioth.debian.org/ -[15]:https://wiki.debian.org/DebianRepository/UseThirdParty -[16]:https://www.aptly.info/ diff --git a/sources/tech/20170921 Mastering file searches on Linux.md b/sources/tech/20170921 Mastering file searches on Linux.md deleted file mode 100644 index 524585003c..0000000000 --- a/sources/tech/20170921 Mastering file searches on Linux.md +++ /dev/null @@ -1,224 +0,0 @@ -Translating by jessie-pang - -Mastering file searches on Linux -====== - -![](https://images.idgesg.net/images/article/2017/09/telescope-100736548-large.jpg) - -There are many ways to search for files on Linux systems and the commands can be very easy or very specific -- narrowing down your search criteria to find what just you're looking for and nothing else. In today's post, we're going to examine some of the most useful commands and options for your file searches. We're going to look into: - - * Quick finds - * More complex search criteria - * Combining conditions - * Reversing criteria - * Simple vs. detailed responses - * Looking for duplicate files - - - -There are actually several useful commands for searching for files. The **find** command may be the most obvious, but it's not the only command or always the fastest way to find what you're looking for. - -### Quick file search commands: which and locate - -The simplest commands for searching for files are probably **which** and **locate**. Both have some constraints that you should be aware of. The **which** command is only going to search through directories on your search path looking for files that are executable. It is generally used to identify commands. If you are curious about what command will be run when you type "which", for example, you can use the command "which which" and it will point you to the executable. 
-``` -$ which which -/usr/bin/which - -``` - -The **which** command will display the first executable that it finds with the name you supply (i.e., the one you would run if you use that command) and then stop. - -The **locate** command is a bit more generous. However, it has a constraint, as well. It will find any number of files, but only if the file names are contained in a database prepared by the **updatedb** command. That file will likely be stored in some location like /var/lib/mlocate/mlocate.db, but is not intended to be read by anything other than the locate command. Updates to this file are generally made by updatedb running daily through cron. - -Simple **find** commands don't require a lot more effort, but they do require a starting point for the search and some kind of search criteria. The simplest find command -- one that searches for files by name -- might look like this: -``` -$ find . -name runme -./bin/runme - -``` - -Searching from the current position in the file system by file name as shown will also involve searching all subdirectories unless a search depth is specified. - -### More than just file names - -The **find** command allows you to search on a number of criteria beyond just file names. These include file owner, group, permissions, size, modification time, lack of an active owner or group and file type. And you can do things beyond just locating the files. You can delete them, rename them, change ownership, change permissions, or run nearly any command against the located files. - -These two commands would find 1) files owned by root within the current directory and 2) files _not_ owned by the specified user (in this case, shs). In this case, both responses are the same, but they won't always be. -``` -$ find . -user root -ls - 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./xyz -> /home/peanut/xyz -$ find . ! -user shs -ls - 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./xyz -> /home/peanut/xyz - -``` - -The ! character represents "not" -- reversing the condition that follows it. - -The command below finds files that have a particular set of permissions. -``` -$ find . -perm 750 -ls - 397176 4 -rwxr-x--- 1 shs shs 115 Sep 14 13:52 ./ll - 398209 4 -rwxr-x--- 1 shs shs 117 Sep 21 08:55 ./get-updates - 397145 4 drwxr-x--- 2 shs shs 4096 Sep 14 15:42 ./newdir - -``` - -This command displays files with 777 permissions that are _not_ symbolic links. -``` -$ sudo find /home -perm 777 ! -type l -ls - 397132 4 -rwxrwxrwx 1 shs shs 18 Sep 15 16:06 /home/shs/bin/runme - 396949 4 -rwxrwxrwx 1 root root 558 Sep 21 11:21 /home/oops - -``` - -The following command looks for files that are larger than a gigabyte in size. And notice that we've located a very interesting file. It represents the physical memory of this system in the ELF core file format. -``` -$ sudo find / -size +1G -ls -4026531994 0 -r-------- 1 root root 140737477881856 Sep 21 11:23 /proc/kcore - 1444722 15332 -rw-rw-r-- 1 shs shs 1609039872 Sep 13 15:55 /home/shs/Downloads/ubuntu-17.04-desktop-amd64.iso - -``` - -Finding files by file type is easy as long as you know how the file types are described for the find command. -``` -b = block special file -c = character special file -d = directory -p = named pipe -f = regular file -l = symbolic link -s = socket -D = door (Solaris only) - -``` - -In the commands below, we are looking for symbolic links and sockets. -``` -$ find . -type l -ls - 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./whatever -> /home/peanut/whatever -$ find . 
-type s -ls - 395256 0 srwxrwxr-x 1 shs shs 0 Sep 21 08:50 ./.gnupg/S.gpg-agent - -``` - -You can also search for files by inode number. -``` -$ find . -inum 397132 -ls - 397132 4 -rwx------ 1 shs shs 18 Sep 15 16:06 ./bin/runme - -``` - -Another way to search for files by inode involves using the **debugfs** command. On a large file system, this command might be considerably faster than using find. You may need to install icheck. -``` -$ sudo debugfs -R 'ncheck 397132' /dev/sda1 -debugfs 1.42.13 (17-May-2015) -Inode Pathname -397132 /home/shs/bin/runme - -``` - -In the following command, we're starting in our home directory (~), limiting the depth of our search (how deeply we'll search subdirectories) and looking only for files that have been created or modified within the last day (mtime setting). -``` -$ find ~ -maxdepth 2 -mtime -1 -ls - 407928 4 drwxr-xr-x 21 shs shs 4096 Sep 21 12:03 /home/shs - 394006 8 -rw------- 1 shs shs 5909 Sep 21 08:18 /home/shs/.bash_history - 399612 4 -rw------- 1 shs shs 53 Sep 21 08:50 /home/shs/.Xauthority - 399615 4 drwxr-xr-x 2 shs shs 4096 Sep 21 09:32 /home/shs/Downloads - -``` - -### More than just listing files - -With an **-exec** option, the find command allows you to change files in some way once you've found them. You simply need to follow the -exec option with the command you want to run. -``` -$ find . -name runme -exec chmod 700 {} \; -$ find . -name runme -ls - 397132 4 -rwx------ 1 shs shs 18 Sep 15 16:06 ./bin/runme - -``` - -In this command, {} represents the name of the file. This command would change permissions on any files named "runme" in the current directory and subdirectories. - -Put whatever command you want to run following the -exec option and using a syntax similar to what you see above. - -### Other search criteria - -As shown in one of the examples above, you can also search by other criteria -- file age, owner, permissions, etc. Here are some examples. - -#### Finding by user -``` -$ sudo find /home -user peanut -/home/peanut -/home/peanut/.bashrc -/home/peanut/.bash_logout -/home/peanut/.profile -/home/peanut/examples.desktop - -``` - -#### Finding by file permissions -``` -$ sudo find /home -perm 777 -/home/shs/whatever -/home/oops - -``` - -#### Finding by age -``` -$ sudo find /home -mtime +100 -/home/shs/.mozilla/firefox/krsw3giq.default/gmp-gmpopenh264/1.6/gmpopenh264.info -/home/shs/.mozilla/firefox/krsw3giq.default/gmp-gmpopenh264/1.6/libgmpopenh264.so - -``` - -#### Finding by age comparison - -Commands like this allow you to find files newer than some other file. -``` -$ sudo find /var/log -newer /var/log/syslog -/var/log/auth.log - -``` - -### Finding duplicate files - -If you're looking to clean up disk space, you might want to remove large duplicate files. The best way to determine whether files are truly duplicates is to use the **fdupes** command. This command uses md5 checksums to determine if files have the same content. With the -r (recursive) option, fdupes will run through a directory and find files that have the same checksum and are thus identical in content. - -If you run a command like this as root, you will likely find a lot of duplicate files, but many will be startup files that were added to home directories when they were created. 
-```
-# fdupes -rn /home > /tmp/dups.txt
-# more /tmp/dups.txt
-/home/jdoe/.profile
-/home/tsmith/.profile
-/home/peanut/.profile
-/home/rocket/.profile
-
-/home/jdoe/.bashrc
-/home/tsmith/.bashrc
-/home/peanut/.bashrc
-/home/rocket/.bashrc
-
-```
-
-Similarly, you might find a lot of duplicate configuration files in /usr that you shouldn't remove. So, be careful with the fdupes output.
-
-The fdupes command isn't always speedy, but keeping in mind that it's running checksum queries over a lot of files to compare them, you'll probably appreciate how efficient it is.
-
-### Wrap-up
-
-There are lots of ways to locate files on Linux systems. If you can describe what you're looking for, one of the commands above will help you find it.
-
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3227075/linux/mastering-file-searches-on-linux.html
-
-作者:[Sandra Henry-Stocker][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
diff --git a/sources/tech/20170927 Linux directory structure- -lib explained.md b/sources/tech/20170927 Linux directory structure- -lib explained.md
deleted file mode 100644
index 3f8322d630..0000000000
--- a/sources/tech/20170927 Linux directory structure- -lib explained.md
+++ /dev/null
@@ -1,77 +0,0 @@
-Linux directory structure: /lib explained
-======
-[![lib folder linux][1]][1]
-
-We have already explained other important system folders like /bin, /boot, /dev, /etc in our previous posts. Please check the links below for more information about the other folders you may be interested in. In this post, we will see what the /lib folder is all about.
-
-[**Linux Directory Structure explained: /bin folder**][2]
-
-[**Linux Directory Structure explained: /boot folder**][3]
-
-[**Linux Directory Structure explained: /dev folder**][4]
-
-[**Linux Directory Structure explained: /etc folder**][5]
-
-[**Linux Directory Structure explained: /lost+found folder**][6]
-
-[**Linux Directory Structure explained: /home folder**][7]
-
-### What is /lib folder in Linux?
-
-The /lib folder is a **library files directory** which contains all the helpful library files used by the system. In simple terms, these are files used by an application, a command or a process for its proper execution. The dynamic library files for the commands in /bin and /sbin are located in this directory. The kernel modules are also located here.
-
-Take the example of executing the pwd command. It requires some library files to execute properly. Let us see what happens with the pwd command when it executes; we will use [the strace command][8] to figure out which library files are used.
-
-Example:
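-
-A sketch of the kind of output to expect is shown below; the exact paths, and any extra startup files opened, vary from system to system, so treat these lines as illustrative only:
-
-```
-$ strace -e trace=open pwd
-open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
-open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
-/home/user
-+++ exited with 0 +++
-```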
The piece of code that is loaded into actual hardware is firmware and the software which communicate between this firmware and kernel is called drivers. This way the kernel directly communicate with hardware and make sure hardware is doing the work assigned to it. - -**/lib/modprobe.d** - Configuration directory for modprobe command - -**/lib/modules** - All loadable kernel modules are stored in this directory. If you have more kernels you will see folders within this directory each represents a kernel. - -**/lib/hdparm** - Contains SATA/IDE parameters for disks to run properly. - -**/lib/udev** - Userspace /dev is a device manager for Linux Kernel. This folder contains all udev related files/folders like rules.d folder which contain udev specific rules. - -### The /lib folder sister folders: /lib32 and /lib64 - -These folders contain their specific architecture library files. These folders are almost identical to /lib folder expects architecture level differences. - -### Other library folders in Linux - -**/usr/lib** - All software libraries are installed here. This does not contain system default or kernel libraries. - -**/usr/local/lib** - To place extra system library files here. These library files can be used by different applications. - -**/var/lib** - Holds dynamic data libraries/files like the rpm/dpkg database and game scores. - --------------------------------------------------------------------------------- - -via: https://www.linuxnix.com/linux-directory-structure-lib-explained/ - -作者:[Surendra Anne][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxnix.com/author/surendra/ -[1]:https://www.linuxnix.com/wp-content/uploads/2017/09/The-lib-folder-explained.png -[2]:https://www.linuxnix.com/linux-directory-structure-explained-bin-folder/ -[3]:https://www.linuxnix.com/linux-directory-structure-explained-boot-folder/ -[4]:https://www.linuxnix.com/linux-directory-structure-explained-dev-folder/ -[5]:https://www.linuxnix.com/linux-directory-structure-explainedetc-folder/ -[6]:https://www.linuxnix.com/lostfound-directory-linuxunix/ -[7]:https://www.linuxnix.com/linux-directory-structure-home-root-folders/ -[8]:https://www.linuxnix.com/10-strace-command-examples-linuxunix/ diff --git a/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md b/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md deleted file mode 100644 index a5f819da51..0000000000 --- a/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md +++ /dev/null @@ -1,61 +0,0 @@ -translating---geekpi - -Reset Linux Desktop To Default Settings With A Single Command -====== -![](https://www.ostechnix.com/wp-content/uploads/2017/10/Reset-Linux-Desktop-To-Default-Settings-720x340.jpg) - -A while ago, we shared an article about [**Resetter**][1] - an useful piece of software which is used to reset Ubuntu to factory defaults within few minutes. Using Resetter, anyone can easily reset their Ubuntu system to the state when you installed it in the first time. Today, I stumbled upon a similar thing. No, It's not an application, but a single-line command to reset your Linux desktop settings, tweaks and customization to default state. - -### Reset Linux Desktop To Default Settings - -This command will reset Ubuntu Unity, Gnome and MATE desktops to the default state. 
I tested this command on both my **Arch Linux MATE** desktop and **Ubuntu 16.04 Unity** desktop. It worked on both systems. I hope it will work on other desktops as well. I don't have any Linux desktop with GNOME as of writing this, so I couldn't confirm it. But I believe it will work on the GNOME DE as well.

**A word of caution:** Please be mindful that this command will reset all the customizations and tweaks you made on your system, including the pinned applications in the Unity launcher or Dock, desktop panel applets, desktop indicators, your system fonts, GTK themes, icon themes, monitor resolution, keyboard shortcuts, window button placement, menu and launcher behaviour, etc.

The good thing is it will only reset the desktop settings. It won't affect other applications that don't use dconf. Also, it won't delete your personal data.

Now, let us do this. To reset Ubuntu Unity or any other Linux desktop with GNOME/MATE DEs to its default settings, run:
```
dconf reset -f /
```

This is my Ubuntu 16.04 LTS desktop before running the above command:

[![][2]][3]

As you see, I have changed the desktop wallpaper and themes.

This is how my Ubuntu 16.04 LTS desktop looks after running that command:

[![][2]][4]

See? Now my Ubuntu desktop has gone back to the factory settings.

For more details about the "dconf" command, refer to the man pages.
```
man dconf
```

I personally prefer "Resetter" over the "dconf" command for this purpose, because Resetter provides more options to the user. Users can decide which applications to remove, which applications to keep, whether to keep the existing user account or create a new user, and more. If you're too lazy to install Resetter, you can just use this "dconf" command to reset your Linux system to default settings within a few minutes.

And, that's all. Hope this helps. I will be back soon with another useful guide. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/reset-linux-desktop-default-settings-single-command/

作者:[Edwin Arteaga][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com
[1]:https://www.ostechnix.com/reset-ubuntu-factory-defaults/
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Before-resetting-Ubuntu-to-default-1.png ()
[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/After-resetting-Ubuntu-to-default-1.png ()
diff --git a/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md b/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md
deleted file mode 100644
index 27aa32dc77..0000000000
--- a/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md
+++ /dev/null
@@ -1,89 +0,0 @@
How To Create A Video From PDF Files In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2017/10/Video-1-720x340.jpg)

I have a huge collection of PDF files, mostly Linux tutorials, on my tablet PC. Sometimes I feel too lazy to read them on the tablet. I thought it would be better if I could create a video from the PDF files and watch it on big-screen devices like a TV or a computer. Though I have a little working experience with [**FFMpeg**][1], I was not aware of how to create a movie file using it.
After a bit of Googling, I came up with a good solution. For those who want to make a movie file from a set of PDF files, read on. It is not that difficult.

### Create A Video From PDF Files In Linux

For this purpose, you need to install the **FFMpeg** and **ImageMagick** software on your system.

To install FFMpeg, refer to the following link.

ImageMagick is available in the official repositories of most Linux distributions.

On **Arch Linux** and derivatives such as **Antergos** and **Manjaro Linux**, run the following command to install it.
```
sudo pacman -S imagemagick
```

**Debian, Ubuntu, Linux Mint:**
```
sudo apt-get install imagemagick
```

**Fedora:**
```
sudo dnf install imagemagick
```

**RHEL, CentOS, Scientific Linux:**
```
sudo yum install imagemagick
```

**SUSE, openSUSE:**
```
sudo zypper install imagemagick
```

After installing ffmpeg and imagemagick, convert your PDF file to an image format such as PNG or JPG as shown below.
```
convert -density 400 input.pdf picture.png
```

Here, **-density 400** specifies the horizontal resolution of the output image file(s).

The above command will convert all pages in the given PDF file to PNG format. Each page in the PDF file will be converted into a PNG file and saved in the current directory with the file names **picture-1.png**, **picture-2.png** … and so on. It will take a while depending on the number of pages in the input PDF file.

Once all the pages in the PDF have been converted into PNG format, run the following command to create a video file from the PNG files.
```
ffmpeg -r 1/10 -i picture-%01d.png -c:v libx264 -r 30 -pix_fmt yuv420p video.mp4
```

Here,

  * **-r 1/10** : Display each image for 10 seconds.
  * **-i picture-%01d.png** : Reads all pictures whose names start with **picture-**, followed by one digit (%01d), and end with **.png**. If the image names have two digits (i.e. picture-10.png, picture-11.png, etc.), use (%02d) in the above command.
  * **-c:v libx264** : Output video codec (i.e. h264).
  * **-r 30** : Frame rate of the output video.
  * **-pix_fmt yuv420p** : Output pixel format (yuv420p is widely supported by video players).
  * **video.mp4** : Output video file in .mp4 format.

Hurrah! The movie file is ready!! You can play it on any device that supports the .mp4 format. Next, I need to find a way to add some cool music to my video. I hope that won't be difficult either.

If you want a higher pixel resolution, you don't have to start all over again. Just convert the output video file to any other higher/lower resolution of your choice, say 720p, as shown below.
```
ffmpeg -i video.mp4 -vf scale=-1:720 video_720p.mp4
```

Please note that creating a video using ffmpeg requires a well-configured PC. While converting videos, ffmpeg will consume most of your system resources. I recommend doing this on a high-end system.

And, that's all for now, folks. Hope you find this useful. More good stuff to come. Stay tuned!
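One last tip: if you do this often, the two steps above are easy to wrap in one small script. Here is a minimal sketch under the same assumptions as the examples above (400 dpi pages, 10 seconds per page); the script name and the temporary-directory handling are my own additions:
```
#!/usr/bin/env bash
# pdf2video.sh - turn a PDF into an .mp4 slideshow, one page per slide.
# Usage: ./pdf2video.sh input.pdf [output.mp4]
set -euo pipefail

pdf="$1"
out="${2:-video.mp4}"
workdir=$(mktemp -d)

# Step 1: render every PDF page as a PNG (400 dpi, as above).
convert -density 400 "$pdf" "$workdir/picture-%03d.png"

# Step 2: stitch the PNGs into a video, 10 seconds per page.
ffmpeg -r 1/10 -i "$workdir/picture-%03d.png" \
       -c:v libx264 -r 30 -pix_fmt yuv420p "$out"

rm -r "$workdir"
echo "Created $out"
```

Note that ImageMagick numbers the %03d output files from 0, which is also where ffmpeg's image-sequence input starts by default, so the two steps line up without any extra options.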
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/create-video-pdf-files-linux/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/20-ffmpeg-commands-beginners/ diff --git a/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md b/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md deleted file mode 100644 index b992cac2c3..0000000000 --- a/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md +++ /dev/null @@ -1,131 +0,0 @@ -10 layers of Linux container security | Opensource.com -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA) - -Containers provide an easy way to package applications and deliver them seamlessly from development to test to production. This helps ensure consistency across a variety of environments, including physical servers, virtual machines (VMs), or private or public clouds. These benefits are leading organizations to rapidly adopt containers in order to easily develop and manage the applications that add business value. - -Enterprises require strong security, and anyone running essential services in containers will ask, "Are containers secure?" and "Can we trust containers with our applications?" - -Securing containers is a lot like securing any running process. You need to think about security throughout the layers of the solution stack before you deploy and run your container. You also need to think about security throughout the application and container lifecycle. - -Try these 10 key elements to secure different layers of the container solution stack and different stages of the container lifecycle. - -### 1. The container host operating system and multi-tenancy - -Containers make it easier for developers to build and promote an application and its dependencies as a unit and to get the most use of servers by enabling multi-tenant application deployments on a shared host. It's easy to deploy multiple applications on a single host, spinning up and shutting down individual containers as needed. To take full advantage of this packaging and deployment technology, the operations team needs the right environment for running containers. Operations needs an operating system that can secure containers at the boundaries, securing the host kernel from container escapes and securing containers from each other. - -### 2. Container content (use trusted sources) - -Containers are Linux processes with isolation and resource confinement that enable you to run sandboxed applications on a shared host kernel. Your approach to securing containers should be the same as your approach to securing any running process on Linux. Dropping privileges is important and still the best practice. Even better is to create containers with the least privilege possible. Containers should run as user, not root. Next, make use of the multiple levels of security available in Linux. Linux namespaces, Security-Enhanced Linux ( [SELinux][1] ), [cgroups][2] , capabilities, and secure computing mode ( [seccomp][3] ) are five of the security features available for securing containers. - -When it comes to security, what's inside your container matters. 
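Before turning to where container content comes from, the least-privilege advice above is concrete enough to sketch. Assuming Docker as the runtime (the article itself is platform-neutral) and a hypothetical image name, a sandboxed, non-root container might be launched like this:
```
# Run as an unprivileged UID/GID, drop every Linux capability, forbid
# privilege escalation via setuid binaries, and keep the rootfs immutable.
docker run \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  myorg/myapp:latest
```

The capability and privilege flags map directly onto the kernel security features listed above.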
For some time now, applications and infrastructures have been composed from readily available components. Many of these are open source packages, such as the Linux operating system, Apache Web Server, Red Hat JBoss Enterprise Application Platform, PostgreSQL, and Node.js. Containerized versions of these packages are now also readily available, so you don't have to build your own. But, as with any code you download from an external source, you need to know where the packages originated, who built them, and whether there's any malicious code inside them.

### 3. Container registries (secure access to container images)

Your teams are building containers that layer content on top of downloaded public container images, so it's critical to manage access to and promotion of the downloaded container images and the internally built images in the same way other types of binaries are managed. Many private registries support storage of container images. Select a private registry that helps to automate policies for the use of container images stored in the registry.

### 4. Security and the build process

In a containerized environment, the software-build process is the stage in the lifecycle where application code is integrated with needed runtime libraries. Managing this build process is key to securing the software stack. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It's also important to maintain the immutability of your containers--in other words, do not patch running containers; rebuild and redeploy them instead.

Whether you work in a highly regulated industry or simply want to optimize your team's efforts, design your container image management and build process to take advantage of container layers to implement separation of control, so that the:

  * Operations team manages base images
  * Architects manage middleware, runtimes, databases, and other such solutions
  * Developers focus on application layers and just write code

Finally, sign your custom-built containers so that you can be sure they are not tampered with between build and deployment.

### 5. Control what can be deployed within a cluster

In case anything falls through during the build process, or for situations where a vulnerability is discovered after an image has been deployed, add yet another layer of security in the form of tools for automated, policy-based deployment.

Let's look at an application that's built using three container image layers: core, middleware, and the application layer. An issue is discovered in the core image and that image is rebuilt. Once the build is complete, the image is pushed to the container platform registry. The platform can detect that the image has changed. For builds that are dependent on this image and have triggers defined, the platform will automatically rebuild the application image, incorporating the fixed libraries.

Once the build is complete, the image is pushed to container platform's internal registry. It immediately detects changes to images in its internal registry and, for applications where triggers are defined, automatically deploys the updated image, ensuring that the code running in production is always identical to the most recently updated image.
All these capabilities work together to integrate security capabilities into your continuous integration and continuous deployment (CI/CD) process and pipeline.

### 6. Container orchestration: Securing the container platform

Of course, applications are rarely delivered in a single container. Even simple applications typically have a frontend, a backend, and a database. And deploying modern microservices applications in containers means deploying multiple containers, sometimes on the same host and sometimes distributed across multiple hosts or nodes, as shown in this diagram.

When managing container deployment at scale, you need to consider:

  * Which containers should be deployed to which hosts?
  * Which host has more capacity?
  * Which containers need access to each other? How will they discover each other?
  * How will you control access to--and management of--shared resources, like network and storage?
  * How will you monitor container health?
  * How will you automatically scale application capacity to meet demand?
  * How will you enable developer self-service while also meeting security requirements?

Given the wealth of capabilities for both developers and operators, strong role-based access control is a critical element of the container platform. For example, the orchestration management servers are a central point of access and should receive the highest level of security scrutiny. APIs are key to automating container management at scale and are used to validate and configure the data for pods, services, and replication controllers; perform project validation on incoming requests; and invoke triggers on other major system components.

### 7. Network isolation

Deploying modern microservices applications in containers often means deploying multiple containers distributed across multiple nodes. With network defense in mind, you need a way to isolate applications from one another within a cluster. A typical public cloud container service, like Google Container Engine (GKE), Azure Container Services, or Amazon Web Services (AWS) Container Service, is a single-tenant service. These services let you run your containers on the VM cluster that you initiate. For secure container multi-tenancy, you want a container platform that allows you to take a single cluster and segment the traffic to isolate different users, teams, applications, and environments within that cluster.

With network namespaces, each collection of containers (known as a "pod") gets its own IP and port range to bind to, thereby isolating pod networks from each other on the node. Pods from different namespaces (projects) cannot send packets to or receive packets from pods and services of a different project by default, with the exception of options noted below. You can use these features to isolate developer, test, and production environments within a cluster; however, this proliferation of IP addresses and ports makes networking more complicated. In addition, containers are designed to come and go.
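To see what that isolation looks like at the kernel level, here is a minimal sketch using raw iproute2 commands rather than a container platform; the namespace names and addresses are made up for illustration:
```
# Create two isolated network namespaces, as a container runtime would.
sudo ip netns add pod-a
sudo ip netns add pod-b

# Wire them together with a virtual ethernet (veth) pair.
sudo ip link add veth-a type veth peer name veth-b
sudo ip link set veth-a netns pod-a
sudo ip link set veth-b netns pod-b

# Give each end an address and bring the links up.
sudo ip netns exec pod-a ip addr add 10.0.0.1/24 dev veth-a
sudo ip netns exec pod-b ip addr add 10.0.0.2/24 dev veth-b
sudo ip netns exec pod-a ip link set veth-a up
sudo ip netns exec pod-b ip link set veth-b up

# The two namespaces can now reach each other -- and nothing else.
sudo ip netns exec pod-a ping -c 1 10.0.0.2
```

Doing this by hand for every short-lived pod clearly does not scale.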
Invest in tools that handle this complexity for you. The preferred tool is a container platform that uses [software-defined networking][4] (SDN) to provide a unified cluster network that enables communication between containers across the cluster.

### 8. Storage

Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Container platforms should provide plugins for multiple flavors of storage, including network file systems (NFS), AWS Elastic Block Stores (EBS), GCE Persistent Disks, GlusterFS, iSCSI, RADOS (Ceph), Cinder, etc.

A persistent volume (PV) can be mounted on a host in any way supported by the resource provider. Providers will have different capabilities, and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read only. Each PV gets its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

### 9. API management, endpoint security, and single sign-on (SSO)

Securing your applications includes managing application and API authentication and authorization.

Web SSO capabilities are a key part of modern applications. Container platforms can come with various containerized services for developers to use when building their applications.

APIs are key to applications composed of microservices. These applications have multiple independent API services, leading to proliferation of service endpoints, which require additional tools for governance. An API management tool is also recommended. All API platforms should offer a variety of standard options for API authentication and security, which can be used alone or in combination, to issue credentials and control access.

These options include standard API keys, application ID and key pairs, and OAuth 2.0.

### 10. Roles and access management in a cluster federation

In July 2016, Kubernetes 1.3 introduced [Kubernetes Federated Clusters][5]. This is one of the exciting new features evolving in the Kubernetes upstream, currently in beta in Kubernetes 1.6. Federation is useful for deploying and accessing application services that span multiple clusters running in the public cloud or enterprise datacenters. Multiple clusters can be useful to enable application high availability across multiple availability zones or to enable common management of deployments or migrations across multiple cloud providers, such as AWS, Google Cloud, and Azure.

When managing federated clusters, you must be sure that your orchestration tools provide the security you need across the different deployment platform instances. As always, authentication and authorization are key--as well as the ability to securely pass data to your applications, wherever they run, and manage application multi-tenancy across clusters. Kubernetes is extending Cluster Federation to include support for Federated Secrets, Federated Namespaces, and Ingress objects.

### Choosing a container platform

Of course, it is not just about security. Your container platform needs to provide an experience that works for your developers and your operations team.
It needs to offer a secure, enterprise-grade container-based application platform that enables both developers and operators, without compromising the functions needed by each team, while also improving operational efficiency and infrastructure utilization. - -Learn more in Daniel's talk, [Ten Layers of Container Security][6], at [Open Source Summit EU][7], which will be held October 23-26 in Prague. - -### About The Author -Daniel Oh;Microservives;Agile;Devops;Java Ee;Container;Openshift;Jboss;Evangelism - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/10/10-layers-container-security - -作者:[Daniel Oh][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/daniel-oh -[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux -[2]:https://en.wikipedia.org/wiki/Cgroups -[3]:https://en.wikipedia.org/wiki/Seccomp -[4]:https://en.wikipedia.org/wiki/Software-defined_networking -[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/ -[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223 -[7]:http://events.linuxfoundation.org/events/open-source-summit-europe diff --git a/sources/tech/20171016 5 SSH alias examples in Linux.md b/sources/tech/20171016 5 SSH alias examples in Linux.md index ddf92f1cc0..cd7e79db0b 100644 --- a/sources/tech/20171016 5 SSH alias examples in Linux.md +++ b/sources/tech/20171016 5 SSH alias examples in Linux.md @@ -1,3 +1,4 @@ +ch-cn translating 5 SSH alias examples in Linux ====== [![][1]][1] diff --git a/sources/tech/20171016 Fixing vim in Debian - There and back again.md b/sources/tech/20171016 Fixing vim in Debian - There and back again.md deleted file mode 100644 index 622b9fe885..0000000000 --- a/sources/tech/20171016 Fixing vim in Debian - There and back again.md +++ /dev/null @@ -1,59 +0,0 @@ -translating---geekpi - -Fixing vim in Debian – There and back again -====== -I was wondering for quite some time why on my server vim behaves so stupid with respect to the mouse: Jumping around, copy and paste wasn't possible the usual way. All this despite having -``` - set mouse= -``` - -in my `/etc/vim/vimrc.local`. Finally I found out why, thanks to bug [#864074][1] and fixed it. - -![][2] - -The whole mess comes from the fact that, when there is no `~/.vimrc`, vim loads `defaults.vim` **after** ` vimrc.local` and thus overwriting several settings put in there. - -There is a comment (I didn't see, though) in `/etc/vim/vimrc` explaining this: -``` -" Vim will load $VIMRUNTIME/defaults.vim if the user does not have a vimrc. -" This happens after /etc/vim/vimrc(.local) are loaded, so it will override -" any settings in these files. -" If you don't want that to happen, uncomment the below line to prevent -" defaults.vim from being loaded. -" let g:skip_defaults_vim = 1 -``` - - -I agree that this is a good way to setup vim on a normal installation of Vim, but the Debian package could do better. The problem is laid out clearly in the bug report: If there is no `~/.vimrc`, settings in `/etc/vim/vimrc.local` are overwritten. - -This is as counterintuitive as it can be in Debian - and I don't know any other package that does it in a similar way. - -Since the settings in `defaults.vim` are quite reasonable, I want to have them, but only fix a few of the items I disagree with, like the mouse. 
At the end, what I did is the following in my `/etc/vim/vimrc.local`:
```
if filereadable("/usr/share/vim/vim80/defaults.vim")
  source /usr/share/vim/vim80/defaults.vim
endif
" now set the flag so that the defaults file is not reloaded afterwards!
let g:skip_defaults_vim = 1

" turn off mouse
set mouse=
" other override settings go here
```

There is probably a better way to get a generic load statement that does not depend on the Vim version, but for now I am fine with that.

--------------------------------------------------------------------------------

via: https://www.preining.info/blog/2017/10/fixing-vim-in-debian/

作者:[Norbert Preining][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.preining.info/blog/author/norbert/
[1]:https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864074
[2]:https://www.preining.info/blog/wp-content/uploads/2017/10/fixing-debian-vim.jpg
diff --git a/sources/tech/20171016 Using the Linux find command with caution.md b/sources/tech/20171016 Using the Linux find command with caution.md
deleted file mode 100644
index bb43f2cd76..0000000000
--- a/sources/tech/20171016 Using the Linux find command with caution.md
+++ /dev/null
@@ -1,95 +0,0 @@
translating---geekpi

Using the Linux find command with caution
======
![](https://images.idgesg.net/images/article/2017/10/caution-sign-100738884-large.jpg)
A friend recently reminded me of a useful option that can add a little caution to the commands that I run with the Linux find command. It's called -ok, and it works like the -exec option except for one important difference -- it makes the find command ask for permission before taking the specified action.

Here's an example. If you were looking for files that you intended to remove from the system using find, you might run a command like this:
```
$ find . -name runme -exec rm {} \;

```

Anywhere within the current directory and its subdirectories, any files named "runme" would be summarily removed -- provided, of course, you have permission to remove them. Use the -ok option instead, and you'll see something like this. The find command will ask for approval before removing the files. Answering **y** for "yes" would allow the find command to go ahead and remove the files one by one.
```
$ find . -name runme -ok rm {} \;
< rm ... ./bin/runme > ?

```

### The -execdir command is also an option

Another option that can be used to modify the behavior of the find command and potentially make it more controllable is the -execdir command. Where -exec runs whatever command is specified, -execdir runs the specified command from the directory in which the located file resides rather than from the directory in which the find command is run. Here's an example of how it works:
```
$ pwd
/home/shs
$ find . -name runme -execdir pwd \;
/home/shs/bin

```
```
$ find . -name runme -execdir ls \;
ls rm runme

```

So far, so good. One important thing to keep in mind, however, is that the -execdir option will also run commands from the directories in which the located files reside. If you run the command shown below and the directory contains a file named "ls", it will run that file, and it will run it even if the file does _not_ have execute permissions set. Using **-exec** or **-execdir** is similar to running a command by sourcing it.
```
$ find . -name runme -execdir ls \;
Running the /home/shs/bin/ls file

```
```
$ find . -name runme -execdir rm {} \;
This is an imposter rm command

```
```
$ ls -l bin
total 12
-r-x------ 1 shs shs 25 Oct 13 18:12 ls
-rwxr-x--- 1 shs shs 36 Oct 13 18:29 rm
-rw-rw-r-- 1 shs shs 28 Oct 13 18:55 runme

```
```
$ cat bin/ls
echo Running the $0 file
$ cat bin/rm
echo This is an imposter rm command

```

### The -okdir option also asks for permission

To be more cautious, you can use the **-okdir** option. Like **-ok**, this option will prompt for permission to run the command.
```
$ find . -name runme -okdir rm {} \;
< rm ... ./bin/runme > ?

```

You can also be careful to specify the commands you want to run with full paths to avoid any problems with imposter commands like those shown above.
```
$ find . -name runme -execdir /bin/rm {} \;

```

The find command has a lot of options besides the default print. Some can make your file searching more precise, but a little caution is always a good idea.

Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3233305/linux/using-the-linux-find-command-with-caution.html

作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20171024 Run Linux On Android Devices, No Rooting Required.md b/sources/tech/20171024 Run Linux On Android Devices, No Rooting Required.md
deleted file mode 100644
index e93ea4638a..0000000000
--- a/sources/tech/20171024 Run Linux On Android Devices, No Rooting Required.md
+++ /dev/null
@@ -1,68 +0,0 @@
translating by lujun9972
Run Linux On Android Devices, No Rooting Required!
======
![](https://www.ostechnix.com/wp-content/uploads/2017/10/Termux-720x340.jpg)

The other day I was searching for a simple and easy way to run Linux on Android. My only intention was to use Linux with some basic applications like SSH, Git, awk, etc. Not much! I didn't want to root the Android device. I have a tablet PC that I mostly use for reading ebooks, news, and a few Linux blogs. I don't use it much for other activities. So, I decided to use it for some Linux activities. After spending a few minutes on the Google Play Store, one app immediately caught my attention and I wanted to give it a try. If you've ever wondered how to run Linux on Android devices, this one might help.

### Termux - An Android terminal emulator to run Linux on Android and Chrome OS

**Termux** is an Android terminal emulator and Linux environment app. Unlike many other apps, you don't need to root your device, nor is any setup required. It just works out of the box! A minimal base Linux system is installed automatically, and of course you can install other packages with the APT package manager. In short, you can use your Android device like a pocket Linux computer. It's not just for Android; you can install it on Chrome OS too.

Termux offers more significant features than you would think.

  * It allows you to SSH to your remote server via openSSH.
  * You can also SSH into your Android devices from any remote system.
  * Sync your smartphone contacts to a remote system using rsync and curl.
  * You can choose any shell, such as BASH, ZSH, FISH, etc.
  * You can choose different text editors, such as Emacs, Nano, and Vim, to edit/view files.
  * Install any packages of your choice on your Android device using the APT package manager. Up-to-date versions of Git, Perl, Python, Ruby and Node.js are all available.
  * Connect your Android device to a Bluetooth keyboard, mouse and external display and use it like a convergence device. Termux supports keyboard shortcuts.
  * Termux allows you to run almost all GNU/Linux commands.

It also has some extra features. You can enable them by installing the addons. For instance, the **Termux:API** addon will allow you to access Android and Chrome hardware features. The other useful addons are:

  * Termux:Boot - Run script(s) when your device boots.
  * Termux:Float - Run Termux in a floating window.
  * Termux:Styling - Provides color schemes and powerline-ready fonts to customize the appearance of the Termux terminal.
  * Termux:Task - Provides an easy way to call Termux executables from Tasker and compatible apps.
  * Termux:Widget - Provides an easy way to start small scriptlets from the home screen.

To know more about Termux, open the built-in help section by long-pressing anywhere on the terminal and selecting the Help menu option. The only drawback is that it **requires Android 5.0 and higher versions**. It could be more useful for many users if it supported Android 4.x and older versions. Termux is available in the **Google Play Store** and **F-Droid**.

To install Termux from the Google Play Store, click the following button.

[![termux][1]][2]

To install it from F-Droid, click the following button.

[![][1]][3]

You now know how to try Linux on your Android devices using Termux. Do you use any other, better apps worth trying? Please mention them in the comment section below. I'd love to try them too!

Cheers!

Resource:

+[Termux website][4]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/termux-run-linux-android-devices-no-rooting-required/

作者:[SK][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:https://play.google.com/store/apps/details?id=com.termux
[3]:https://f-droid.org/packages/com.termux/
[4]:https://termux.com/
diff --git a/sources/tech/20171027 Easy guide to secure VNC server with TLS encryption.md b/sources/tech/20171027 Easy guide to secure VNC server with TLS encryption.md
deleted file mode 100644
index 189e57535f..0000000000
--- a/sources/tech/20171027 Easy guide to secure VNC server with TLS encryption.md
+++ /dev/null
@@ -1,135 +0,0 @@
Easy guide to secure VNC server with TLS encryption
======
In this tutorial, we will learn to install a VNC server & secure VNC sessions with TLS encryption.
This method has been tested on CentOS 6 & 7 but should work on other versions/OSes as well (RHEL, Scientific Linux, etc.).

**(Recommended Read: [Ultimate guide for Securing SSH sessions][1])**

### Installing VNC server

Before we install the VNC server on our machines, let's make sure we have a working GUI.

If a GUI is not installed on our machine, we can install it by executing the following command,

```
yum groupinstall "GNOME Desktop"
```

Now we will use tigervnc as our VNC server; to install it, run,

```
# yum install tigervnc-server
```

Once the VNC server has been installed, we will create a new user to access the server,

```
# useradd vncuser
```

& assign it a password for accessing VNC using the following command,

```
# vncpasswd vncuser
```

The configuration differs a little between CentOS 6 & 7, so we will address the CentOS 6 configuration first,

#### CentOS 6

Now we need to edit the VNC configuration file,

```
# vim /etc/sysconfig/vncservers
```

& add the following lines,

```
[ …]
VNCSERVERS="1:vncuser"
VNCSERVERARGS[1]="-geometry 1024x768"
```

Save & exit the file. Once done, restart the VNC service to implement the changes,

```
# service vncserver restart
```

& enable it at boot,

```
# chkconfig vncserver on
```

#### CentOS 7

On CentOS 7, the /etc/sysconfig/vncservers file has been replaced by /lib/systemd/system/vncserver@.service. We will use this file as a reference, so create a copy of it,

```
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
```

Next, we will edit the file to include the user we created,

```
# vim /etc/systemd/system/vncserver@:1.service
```

& edit the user in the following 2 lines,

```
ExecStart=/sbin/runuser -l vncuser -c "/usr/bin/vncserver %i"
PIDFile=/home/vncuser/.vnc/%H%i.pid
```

Save the file & exit. Next, restart the service & enable it at boot,

```
systemctl restart vncserver@:1.service
systemctl enable vncserver@:1.service
```

We now have our VNC server ready & can connect to it from a client machine using the IP address of the VNC server. But before we do that, we will secure our connections with TLS encryption.

### Securing the VNC session

To secure VNC sessions, we will first configure the encryption method. We will be using TLS encryption, but SSL encryption can also be used. Execute the following command to start using TLS encryption on the VNC server,

```
# vncserver -SecurityTypes=VeNCrypt,TLSVnc
```

You will be asked to enter a password to access VNC (if using any user other than the above-mentioned user)

![secure vnc server][4]

We can now access the server using the VNC viewer from the client machine; use the following command to start the VNC viewer with a secure connection,

**# vncviewer -SecurityTypes=VeNCrypt,TLSVnc 192.168.1.45:1**

Here, 192.168.1.45 is the IP address of the VNC server.

![secure vnc server][6]

Enter the password & we can then access the server remotely, with TLS encryption.

This completes our tutorial; feel free to send your suggestions or queries using the comment box below.
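As one last convenience: if you connect often, a tiny client-side wrapper saves retyping the security options. This is only a sketch; the default host and display below are the ones assumed throughout this tutorial:

```
#!/usr/bin/env bash
# securevnc.sh - open a TLS-encrypted session to our VNC server.
# Usage: ./securevnc.sh [host] [display]
host="${1:-192.168.1.45}"
display="${2:-1}"

# Only VeNCrypt/TLSVnc are offered, so the client can never fall
# back to an unencrypted session.
exec vncviewer -SecurityTypes=VeNCrypt,TLSVnc "${host}:${display}"
```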
- - --------------------------------------------------------------------------------- - -via: http://linuxtechlab.com/secure-vnc-server-tls-encryption/ - -作者:[Shusain][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linuxtechlab.com/author/shsuain/ -[1]:http://linuxtechlab.com/ultimate-guide-to-securing-ssh-sessions/ -[2]:/cdn-cgi/l/email-protection -[3]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=642%2C241 -[4]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2017/10/secure_vnc-1.png?resize=642%2C241 -[5]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=665%2C419 -[6]:https://i2.wp.com/linuxtechlab.com/wp-content/uploads/2017/10/secure_vnc-2.png?resize=665%2C419 diff --git a/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md b/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md deleted file mode 100644 index be091e91a2..0000000000 --- a/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md +++ /dev/null @@ -1,94 +0,0 @@ -How to bind ntpd to specific IP addresses on Linux/Unix -====== -By default, my ntpd/NTP server listens on all interfaces or IP address i.e 0.0.0.0:123. How do I make sure ntpd only listen on a specific IP address such as localhost or 192.168.1.1:123 on a Linux or FreeBSD Unix server? - -NTP is an acronym for Network Time Protocol. It is used for clock synchronization between computers. The ntpd program is an operating system daemon which sets and maintains the system time of day in synchronism with Internet standard time servers. -[![How to prevent NTPD from listening on 0.0.0.0:123 and binding to specific IP addresses on a Linux/Unix server][1]][1] -The NTP is configured using ntp.conf located in /etc/ directory. - -## interface directive in /etc/ntp.conf - - -You can prevent ntpd to listen on 0.0.0.0:123 by setting the interface command. The syntax is: -`interface listen IPv4|IPv6|all -interface ignore IPv4|IPv6|all -interface drop IPv4|IPv6|all` -The above configures which network addresses ntpd listens or dropped without processing any requests. The ignore prevents opening matching addresses, drop causes ntpd to open the address and drop all received packets without examination. 
For example, to ignore listening on all interfaces, add the following in /etc/ntp.conf:
`interface ignore wildcard`
To listen only on the 127.0.0.1 and 192.168.1.1 addresses:
`interface listen 127.0.0.1`
`interface listen 192.168.1.1`
Here is my sample /etc/ntp.conf file from a FreeBSD cloud server:
`$ egrep -v '^#|$^' /etc/ntp.conf`
Sample outputs:
```
tos minclock 3 maxclock 6
pool 0.freebsd.pool.ntp.org iburst
restrict default limited kod nomodify notrap noquery nopeer
restrict -6 default limited kod nomodify notrap noquery nopeer
restrict source limited kod nomodify notrap noquery
restrict 127.0.0.1
restrict -6 ::1
leapfile "/var/db/ntpd.leap-seconds.list"
interface ignore wildcard
interface listen 172.16.3.1
interface listen 10.105.28.1
```

## Restart ntpd

Reload/restart the ntpd on a FreeBSD unix:
`$ sudo /etc/rc.d/ntpd restart`
OR [use the following command on a Debian/Ubuntu Linux][2]:
`$ sudo systemctl restart ntp`
OR [use the following on a CentOS/RHEL 7/Fedora Linux][2]:
`$ sudo systemctl restart ntpd`

## Verification

Use the netstat command/ss command to verify that ntpd is bound to the specific IP addresses only:
`$ netstat -tulpn | grep :123`
OR
`$ ss -tulpn | grep :123`
Sample outputs:
```
udp 0 0 10.105.28.1:123 0.0.0.0:*
udp 0 0 172.16.3.1:123 0.0.0.0:*
```

Use [the sockstat command on a FreeBSD Unix server][3]:
`$ sudo sockstat`
`$ sudo sockstat -4`
`$ sudo sockstat -4 | grep :123`
Sample outputs:
```
root ntpd 59914 22 udp4 127.0.0.1:123 *:*
root ntpd 59914 24 udp4 127.0.1.1:123 *:*
```

## Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][4], [Facebook][5], [Google+][6].

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/how-to-bind-ntpd-to-specific-ip-addresses-on-linuxunix/

作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/media/new/faq/2017/10/how-to-prevent-ntpd-to-listen-on-all-interfaces-on-linux-unix-box.jpg
[2]:https://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/
[3]:https://www.cyberciti.biz/faq/freebsd-unix-find-the-process-pid-listening-on-a-certain-port-commands/
[4]:https://twitter.com/nixcraft
[5]:https://facebook.com/nixcraft
[6]:https://plus.google.com/+CybercitiBiz
diff --git a/sources/tech/20171102 What is huge pages in Linux.md b/sources/tech/20171102 What is huge pages in Linux.md
deleted file mode 100644
index 448280643f..0000000000
--- a/sources/tech/20171102 What is huge pages in Linux.md
+++ /dev/null
@@ -1,138 +0,0 @@
translating by lujun9972
What is huge pages in Linux?
======
Learn about huge pages in Linux: what they are, how to configure them, how to check their current state, and how to disable them.

![Huge Pages in Linux][1]

In this article, we will walk you through the details of huge pages so that you will be able to answer: what are huge pages in Linux?
How to enable/disable huge pages? How to determine huge page value? in Linux like RHEL6, RHEL7, Ubuntu etc. - -Lets start with Huge pages basics. - -### What is Huge page in Linux? - -Huge pages are helpful in virtual memory management in Linux system. As name suggests, they help is managing huge size pages in memory in addition to standard 4KB page size. You can define as huge as 1GB page size using huge pages. - -During system boot, you reserve your memory portion with huge pages for your application. This memory portion i.e. these memory occupied by huge pages is never swapped out of memory. It will stick there until you change your configuration. This increases application performance to great extent like Oracle database with pretty large memory requirement. - -### Why use huge page? - -In virtual memory management, kernel maintains table in which it has mapping of virtual memory address to physical address. For every page transaction, kernel needs to load related mapping. If you have small size pages then you need to load more numbers of pages resulting kernel to load more mapping tables. This decreases performance. - -Using huge pages, means you will need fewer pages. This decreases number of mapping tables to load by kernel to great extent. This increases your kernel level performance which ultimately benefits your application. - -In short, by enabling huge pages, system has fewer page tables to deal with and hence less overhead to access / maintain them! - -### How to configure huge pages? - -Run below command to check current huge pages details. - -``` -root@kerneltalks # grep Huge /proc/meminfo -AnonHugePages: 0 kB -HugePages_Total: 0 -HugePages_Free: 0 -HugePages_Rsvd: 0 -HugePages_Surp: 0 -Hugepagesize: 2048 kB -``` - -In above output you can see one page size is 2MB `Hugepagesize` and total of 0 pages on system `HugePages_Total`. This huge page size can be increased from 2MB to max 1GB. - -Run below script to get how much huge pages your system needs currently . Script is from Oracle and can be found. - -``` -#!/bin/bash -# -# hugepages_settings.sh -# -# Linux bash script to compute values for the -# recommended HugePages/HugeTLB configuration -# -# Note: This script does calculation for all shared memory -# segments available when the script is run, no matter it -# is an Oracle RDBMS shared memory segment or not. -# Check for the kernel version -KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'` -# Find out the HugePage size -HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}` -# Start from 1 pages to be on the safe side and guarantee 1 free HugePage -NUM_PG=1 -# Cumulative number of pages required to handle the running shared memory segments -for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"` -do - MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q` - if [ $MIN_PG -gt 0 ]; then - NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q` - fi -done -# Finish with results -case $KERN in - '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`; - echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;; - '2.6' | '3.8' | '3.10' | '4.1' ) echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;; - *) echo "Unrecognized kernel version $KERN. Exiting." ;; -esac -# End -``` -You can save it in `/tmp` as `hugepages_settings.sh` and then run it like below : -``` -root@kerneltalks # sh /tmp/hugepages_settings.sh -Recommended setting: vm.nr_hugepages = 124 -``` - -Output will be similar to some number as shown in above sample output. 
This means your system currently needs 124 huge pages of 2 MB each! If you had set 4 MB as the page size, the output would have been 62. You get the point, right?

### Configure hugepages in the kernel

Now the last part is to configure the above-stated [kernel parameter][2] and reload it. Add the below value in `/etc/sysctl.conf` and reload the configuration by issuing the `sysctl -p` command.

```
vm.nr_hugepages=126
```

Notice that we added 2 extra pages in the kernel, since we want to keep a couple of spare pages over the actual required number.

Now huge pages have been configured in the kernel, but to allow your application to use them you need to increase the memory limits as well. The new memory limit should be 126 pages x 2 MB each = 252 MB, i.e. 258048 KB.

You need to edit the below settings in `/etc/security/limits.conf`

```
soft memlock 258048
hard memlock 258048
```

Sometimes these settings are configured in app-specific files, e.g. for Oracle DB it's in `/etc/security/limits.d/99-grid-oracle-limits.conf`

That's it! You might want to restart your application to make use of these new huge pages.

### How to disable hugepages?

Transparent huge pages are generally enabled by default. Use the below command to check the current state of transparent huge pages.

```
root@kerneltalks# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
```

The `[always]` flag in the output shows that transparent huge pages are enabled on the system.

For RedHat-based systems, the file path is `/sys/kernel/mm/redhat_transparent_hugepage/enabled`

If you want to disable transparent huge pages, add `transparent_hugepage=never` at the end of the `kernel` line in `/etc/grub.conf` and reboot the system.

--------------------------------------------------------------------------------

via: https://kerneltalks.com/services/what-is-huge-pages-in-linux/

作者:[Shrikant Lavhate][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://kerneltalks.com
[1]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png
[2]:https://kerneltalks.com/linux/how-to-tune-kernel-parameters-in-linux/
diff --git a/sources/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md b/sources/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md
deleted file mode 100644
index 922ef18040..0000000000
--- a/sources/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md
+++ /dev/null
@@ -1,185 +0,0 @@
Step by Step guide for creating Master Slave replication in MariaDB
======
In our earlier tutorials, we have already learned [**to install & configure MariaDB**][1] & also [**some basic administration commands for managing MariaDB**][2]. We are now going to learn to set up MASTER-SLAVE replication for a MariaDB server.

Replication is used to create multiple copies of our database. These copies can be used as another database for running queries that might otherwise affect the performance of the master server, such as heavy analytics queries, or they can be kept simply for data redundancy, or for both purposes. The whole process can be automated, i.e. data replication occurs automatically from master to slave, and backups can be taken without affecting the write operations of the master.

So we will now set up our **master-slave** replication; for this we need two machines with MariaDB installed.
The IP addresses of both machines are mentioned below,

**Master -** 192.168.1.120 **Hostname -** master.ltechlab.com

**Slave -** 192.168.1.130 **Hostname -** slave.ltechlab.com

Once MariaDB has been installed on those machines, we will move on with the tutorial. If you need help installing and configuring MariaDB, have a [**look at our tutorial HERE.**][1]

### **Step 1 - Master Server Configuration**

We are going to take a database named '**important**' in MariaDB, which will be replicated to our slave server. To start the process, we will edit the file '**/etc/my.cnf**', the configuration file for MariaDB,

```
$ vi /etc/my.cnf
```

& look for the [mysqld] section & then enter the following details,

```
[mysqld]
log-bin
server_id=1
replicate-do-db=important
bind-address=192.168.1.120
```

Save & exit the file. Once done, restart the MariaDB service,

```
$ systemctl restart mariadb
```

Next, we will log in to our MariaDB instance on the master server,

```
$ mysql -u root -p
```

& then create a new user for the slave named 'slaveuser' & assign it the necessary privileges by running the following commands,

```
STOP SLAVE;
GRANT REPLICATION SLAVE ON *.* TO 'slaveuser'@'%' IDENTIFIED BY 'iamslave';
FLUSH PRIVILEGES;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
```

**Note:-** We need the **MASTER_LOG_FILE** and **MASTER_LOG_POS** values from the output of 'SHOW MASTER STATUS' for configuring replication, so make sure you note them down.

Once these commands run successfully, exit the session by typing 'exit'.

### Step 2 - Create a backup of the database & move it to the slave

Now we need to create a backup of our database 'important', which can be done using the 'mysqldump' command,

```
$ mysqldump -u root -p important > important_backup.sql
```

Once the backup is complete, we need to log back into MariaDB & unlock our tables,

```
$ mysql -u root -p
$ UNLOCK TABLES;
```

& exit the session. Now move the database backup to the slave server, which has an IP address of 192.168.1.130.

This completes the configuration on the master server; we will now move on to configuring our slave server.

### Step 3 - Configuring the Slave server

We will again start by editing the '/etc/my.cnf' file: look for the [mysqld] section & enter the following details,

```
[mysqld]
server-id = 2
replicate-do-db=important
[ …]
```

We will now restore our database into MariaDB by running,

```
$ mysql -u root -p < /data/important_backup.sql
```

When the process completes, we will grant the privileges on the db 'important' to 'slaveuser' by logging into MariaDB on the slave server,

```
$ mysql -u root -p
```

```
GRANT ALL PRIVILEGES ON important.* TO 'slaveuser'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```

Next, restart MariaDB to implement the changes.

```
$ systemctl restart mariadb
```

### **Step 4 - Start the replication**

Remember, we need the **MASTER_LOG_FILE** and **MASTER_LOG_POS** values which we got from running 'SHOW MASTER STATUS' on the master server. Now log in to MariaDB on the slave server & tell the slave where to look for the master by running the following commands,

```
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='192.168.1.120', MASTER_USER='slaveuser', MASTER_PASSWORD='iamslave', MASTER_LOG_FILE='mariadb-bin.000001', MASTER_LOG_POS=460;
START SLAVE;
SHOW SLAVE STATUS\G
```

**Note:-** Change the details of your master as necessary; the host here is the master's IP address from above, and the log file & position must be the ones from your own 'SHOW MASTER STATUS' output.
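Before creating any test data, it is worth confirming that both replication threads actually came up. A quick non-interactive check from the slave's shell (a sketch, reusing the root credentials from this tutorial):

```
# Both *_Running fields should say "Yes"; Seconds_Behind_Master shows lag.
mysql -u root -p -e "SHOW SLAVE STATUS\G" \
  | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
```

If either thread shows "No", the Last_IO_Error and Last_SQL_Error fields of the same output usually say why.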
### Step 5 - Testing the replication

We will now create a new table in our database on the master to check whether the replication is working. So, log in to MariaDB on the master server:

```
$ mysql -u root -p
```

select the database 'important':

```
use important;
```

and create a table named test in the db:

```
create table test (c int);
```

then insert some value into it:

```
insert into test (c) values (1);
```

To check the added value:

```
select * from test;
```

& you will find that your db has a table with the value you inserted.

Now let's log in to our slave database to make sure our data replication is working:

```
$ mysql -u root -p
$ use important;
$ select * from test;
```

You will see that the output shows the same value that we inserted on the master server; hence our replication is working fine without any issues.

This concludes our tutorial; please send your queries/questions through the comment box below.

--------------------------------------------------------------------------------

via: http://linuxtechlab.com/creating-master-slave-replication-mariadb/

作者:[Shusain][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/installing-configuring-mariadb-rhelcentos/
[2]:http://linuxtechlab.com/mariadb-administration-commands-beginners/

diff --git a/sources/tech/20171113 The big break in computer languages.md b/sources/tech/20171113 The big break in computer languages.md
deleted file mode 100644
index c6eb8c3bfe..0000000000
--- a/sources/tech/20171113 The big break in computer languages.md
+++ /dev/null
@@ -1,93 +0,0 @@

Translated by name1e5s

The big break in computer languages
============================================================

My last post ([The long goodbye to C][3]) elicited a comment from a C++ expert I was friends with long ago, recommending C++ as the language to replace C. Which ain't gonna happen; if that were a viable future, Go and Rust would never have been conceived.

But my readers deserve more than a bald assertion. So here, for the record, is the story of why I don't touch C++ any more. This is a launch point for a disquisition on the economics of computer-language design, why some truly unfortunate choices got made and baked into our infrastructure, and how we're probably going to fix them.

Along the way I will draw aside the veil from a rather basic mistake that people trying to see into the future of programming languages (including me) have been making since the 1980s. Only very recently do we have the field evidence to notice where we went wrong.

I think I first picked up C++ because I needed GNU eqn to be able to output MathML, and eqn was written in C++. That project succeeded. Then I was a senior dev on Battle For Wesnoth for a number of years in the 2000s and got comfortable with the language.

Then came the day we discovered that a person we incautiously gave commit privileges to had fucked up the game's AI core. It became apparent that I was the only dev on the team not too frightened of that code to go in. And I fixed it all right – took me two weeks of struggle. After which I swore a mighty oath never to go near C++ again.

My problem with the language, starkly revealed by that adventure, is that it piles complexity on complexity upon chrome upon gingerbread in an attempt to address problems that cannot actually be solved because the foundational abstractions are leaky. It's all very well to say "well, don't do that" about things like bare pointers, and for small-scale single-developer projects (like my eqn upgrade) it is realistic to expect the discipline can be enforced.

Not so on projects with larger scale or multiple devs at varying skill levels (the case I normally deal with). With probability asymptotically approaching one over time and increasing LOC, someone is inadvertently going to poke through one of the leaks. At which point you have a bug which, because of over-layers of gnarly complexity such as STL, is much more difficult to characterize and fix than the equivalent defect in C. My Battle For Wesnoth experience rubbed my nose in this problem pretty hard.

What works for a Steve Heller (my old friend and C++ advocate) doesn't scale up when I'm dealing with multiple non-Steve-Hellers and might end up having to clean up their mess. So I just don't go there any more. Not worth the aggravation. C is flawed, but it does have one immensely valuable property that C++ didn't keep – if you can mentally model the hardware it's running on, you can easily see all the way down. If C++ had actually eliminated C's flaws (that is, been type-safe and memory-safe) giving away that transparency might be a trade worth making. As it is, nope.

One way we can tell that C++ is not sufficient is to imagine an alternate world in which it is. In that world, older C projects would routinely up-migrate to C++. Major OS kernels would be written in C++, and existing kernel implementations like Linux would be upgrading to it. In the real world, this ain't happening. Not only has C++ failed to present enough of a value proposition to keep language designers uninterested in imagining languages like D, Go, and Rust, it has failed to displace its own ancestor. There's no path forward from C++ without breaching its core assumptions; thus, the abstraction leaks won't go away.

Since I've mentioned D, I suppose this is also the point at which I should explain why I don't see it as a serious contender to replace C. Yes, it was spun up eight years before Rust and nine years before Go – props to Walter Bright for having the vision. But in 2001 the example of Perl and Python had already been set – the window when a proprietary language could compete seriously with open source was already closing. The wrestling match between the official D library/runtime and Tango hurt it, too. It has never recovered from those mistakes.

So now there's Go (I'd say "…and Rust", but for reasons I've discussed before I think it will be years before Rust is fully competitive). It _is_ type-safe and memory-safe (well, almost; you can partway escape using interfaces, but it's not normal to have to go to the unsafe places). One of my regulars, Mark Atwood, has correctly pointed out that Go is a language made of grumpy-old-man rage, specifically rage by _one of the designers of C_ (Ken Thompson) at the bloated mess that C++ became.

I can relate to Ken's grumpiness; I've been muttering for decades that C++ attacked the wrong problem. There were two directions a successor language to C might have gone. One was to do what C++ did – accept C's leaky abstractions, bare pointers and all, for backward compatibility, then try to build a state-of-the-art language on top of them. The other would have been to attack C's problems at their root – _fix_ the leaky abstractions. That would break backward compatibility, but it would foreclose the class of problems that dominate C/C++ defects.

The first serious attempt at the second path was Java in 1995. It wasn't a bad try, but the choice to build it over a j-code interpreter made it unsuitable for systems programming. That left a huge hole in the options for systems programming that wouldn't be properly addressed for another 15 years, until Rust and Go. In particular, it's why software like my GPSD and NTPsec projects is still predominantly written in C in 2017 despite C's manifest problems.

This is in many ways a bad situation. It was hard to really see this because of the lack of viable alternatives, but C/C++ has not scaled well. Most of us take for granted the escalating rate of defects and security compromises in infrastructure software without really thinking about how much of that is due to really fundamental language problems like buffer-overrun vulnerabilities.

So, why did it take so long to address that? It was 37 years from C (1972) to Go (2009); Rust only launched a year sooner. I think the underlying reasons are economic.

Ever since the very earliest computer languages it's been understood that every language design embodies an assertion about the relative value of programmer time vs. machine resources. At one end of that spectrum you have languages like assembler and (later) C that are designed to extract maximum performance at the cost of also pessimizing developer time and costs; at the other, languages like Lisp and (later) Python that try to automate away as much housekeeping detail as possible, at the cost of pessimizing machine performance.

In broadest terms, the most important discriminator between the ends of this spectrum is the presence or absence of automatic memory management. This corresponds exactly to the empirical observation that memory-management bugs are by far the most common class of defects in machine-centric languages that require programmers to manage that resource by hand.

A language becomes economically viable where and when its relative-value assertion matches the actual cost drivers of some particular area of software development. Language designers respond to the conditions around them by inventing languages that are a better fit for present or near-future conditions than the languages they have available to use.

Over time, there's been a gradual shift from languages that require manual memory management to languages with automatic memory management and garbage collection (GC). This shift corresponds to the Moore's Law effect of decreasing hardware costs making programmer time relatively more expensive. But there are at least two other relevant dimensions.

One is distance from the bare metal. Inefficiency low in the software stack (kernels and service code) ripples multiplicatively up the stack. Thus, we see machine-centric languages down low and programmer-centric languages higher up, most often in user-facing software that only has to respond at human speed (time scale 0.1 sec).

Another is project scale. Every language also has an expected rate of induced defects per thousand lines of code due to programmers tripping over leaks and flaws in its abstractions. This rate runs higher in machine-centric languages, much lower in programmer-centric ones with GC. As project scale goes up, therefore, languages with GC become more and more important as a strategy against unacceptable defect rates.

When we view language deployments along these three dimensions, the observed pattern today – C down below, an increasing gallimaufry of languages with GC above – almost makes sense. Almost. But there is something else going on. C is stickier than it ought to be, and used way further up the stack than actually makes sense.

Why do I say this? Consider the classic Unix command-line utilities. These are generally pretty small programs that would run acceptably fast implemented in a scripting language with a full POSIX binding. Re-coded that way they would be vastly easier to debug, maintain and extend.

Why are these still in C (or, in unusual exceptions like eqn, in C++)? Transition costs. It's difficult to translate even small, simple programs between languages and verify that you have faithfully preserved all non-error behaviors. More generally, any area of applications or systems programming can stay stuck to a language well after the tradeoff that language embodies is actually obsolete.

Here's where I get to the big mistake I and other prognosticators made. We thought falling machine-resource costs – increasing the relative cost of programmer-hours – would be enough by themselves to displace C (and non-GC languages generally). In this we were not entirely or even mostly wrong – the rise of scripting languages, Java, and things like Node.js since the early 1990s was pretty obviously driven that way.

Not so the new wave of contending systems-programming languages, though. Rust and Go are both explicitly responses to _increasing project scale_. Where scripting languages got started as an effective way to write small programs and gradually scaled up, Rust and Go were positioned from the start as ways to reduce defect rates in _really large_ projects. Like, Google's search service and Facebook's real-time-chat multiplexer.

I think this is the answer to the "why not sooner" question. Rust and Go aren't actually late at all, they're relatively prompt responses to a cost driver that was underweighted until recently.

OK, so much for theory. What predictions does this one generate? What does it tell us about what comes after C?

Here's the big one. The largest trend driving development towards GC languages hasn't reversed, and there's no reason to expect it will. Therefore: eventually we _will_ have GC techniques with low enough latency overhead to be usable in kernels and low-level firmware, and those will ship in language implementations. Those are the languages that will truly end C's long reign.

There are broad hints in the working papers from the Go development group that they're headed in this direction – references to academic work on concurrent garbage collectors that never have stop-the-world pauses. If Go itself doesn't pick up this option, other language designers will. But I think they will – the business case for Google to push them there is obvious (can you say "Android development"?).

Well before we get to GC that good, I'm putting my bet on Go to replace C anywhere that the GC it has now is affordable – which means not just applications but most systems work outside of kernels and embedded. The reason is simple: there is no path out of C's defect rates with lower transition costs.

I've been experimenting with moving C code to Go over the last week, and I'm noticing two things. One is that it's easy to do – C's idioms map over pretty well. The other is that the resulting code is much simpler. One would expect that, with GC in the language and maps as a first-class data type, but I'm seeing larger reductions in code volume than initially expected – about 2:1, similar to what I see when moving C code to Python.

Sorry, Rustaceans – you've got a plausible future in kernels and deep firmware, but too many strikes against you to beat Go over most of C's range. No GC, plus Rust is a harder transition from C because of the borrow checker, plus the standardized part of the API is still seriously incomplete (where's my select(2), again?).

The only consolation you get, if it is one, is that the C++ fans are screwed worse than you are. At least Rust has a real prospect of dramatically lowering downstream defect rates relative to C anywhere it's not crowded out by Go; C++ doesn't have that.

This entry was posted in [Software][4] by [Eric Raymond][5]. Bookmark the [permalink][6].

--------------------------------------------------------------------------------

via: http://esr.ibiblio.org/?p=7724

作者:[Eric Raymond][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://esr.ibiblio.org/?author=2
[1]:http://esr.ibiblio.org/?author=2
[2]:http://esr.ibiblio.org/?p=7724
[3]:http://esr.ibiblio.org/?p=7711
[4]:http://esr.ibiblio.org/?cat=13
[5]:http://esr.ibiblio.org/?author=2
[6]:http://esr.ibiblio.org/?p=7724

diff --git a/sources/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md b/sources/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md
deleted file mode 100644
index ab38f8856a..0000000000
--- a/sources/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md
+++ /dev/null
@@ -1,103 +0,0 @@

How to use special permissions: the setuid, setgid and sticky bits
======

### Objective

Getting to know how special permissions work, and how to identify and set them.

### Requirements

 * Knowledge of the standard unix/linux permissions system

### Difficulty

EASY

### Conventions

 * **#** - requires the given command to be executed with root privileges, either directly as the root user or by use of the `sudo` command
 * **$** - the given command is to be executed as a regular non-privileged user

### Introduction

Normally, on a unix-like operating system, the ownership of files and directories is based on the default `uid` (user-id) and `gid` (group-id) of the user who created them. The same thing happens when a process is launched: it runs with the effective user-id and group-id of the user who started it, and with the corresponding privileges. This behavior can be modified by using special permissions.

### The setuid bit

When the `setuid` bit is used, the behavior described above is modified so that when an executable is launched, it does not run with the privileges of the user who launched it, but with those of the file owner instead. So, for example, if an executable has the `setuid` bit set on it, and it's owned by root, when launched by a normal user, it will run with root privileges. It should be clear why this represents a potential security risk, if not used correctly.
An example of an executable with the setuid permission set is `passwd`, the utility we can use to change our login password. We can verify that by using the `ls` command:
```

ls -l /bin/passwd
-rwsr-xr-x. 1 root root 27768 Feb 11 2017 /bin/passwd

```

How do we identify the `setuid` bit? As you surely have noticed looking at the output of the command above, the `setuid` bit is represented by an `s` in place of the `x` of the executable bit. The lowercase `s` implies that the executable bit is set, otherwise you would see a capital `S`. The capital `S` appears when the `setuid` or `setgid` bits are set but the executable bit is not, showing the user an inconsistency: the `setuid` and `setgid` bits have no effect if the executable bit is not set. The setuid bit has no effect on directories.

### The setgid bit

Unlike the `setuid` bit, the `setgid` bit has effect on both files and directories. In the first case, when a file which has the `setgid` bit set is executed, instead of running with the privileges of the group of the user who started it, it runs with those of the group which owns the file: in other words, the group ID of the process will be the same as that of the file.

When used on a directory, instead, the `setgid` bit alters the standard behavior so that the group of the files created inside said directory will not be that of the user who created them, but that of the parent directory itself. This is often used to ease the sharing of files (files will be modifiable by all the users that are part of said group). Just like the setuid bit, the setgid bit can easily be spotted (in this case on a test directory):
```

ls -ld test
drwxrwsr-x. 2 egdoc egdoc 4096 Nov 1 17:25 test

```

This time the `s` is present in place of the executable bit in the group sector.

### The sticky bit

The sticky bit works in a different way: while it has no effect on files, when used on a directory, all the files in said directory will be modifiable only by their owners. A typical case in which it is used involves the `/tmp` directory. Typically this directory is writable by all users on the system, so to make it impossible for one user to delete the files of another one, the sticky bit is set:
```

$ ls -ld /tmp
drwxrwxrwt. 14 root root 300 Nov 1 16:48 /tmp

```

In this case the owner, the group, and all other users have full permissions on the directory (read, write and execute). The sticky bit is identifiable by a `t` which is reported where normally the executable `x` bit is shown, in the "other" section. Again, a lowercase `t` implies that the executable bit is also present, otherwise you would see a capital `T`.

### How to set special bits

Just like normal permissions, the special bits can be assigned with the `chmod` command, using the numeric or the `ugo/rwx` format. In the former case the `setuid`, `setgid`, and `sticky` bits are represented respectively by a value of 4, 2 and 1. So for example if we want to set the `setgid` bit on a directory we would execute:
```
$ chmod 2775 test
```

With this command we set the `setgid` bit on the directory (identified by the first of the four numbers), and gave full privileges on it to its owner and to the users that are members of the group the directory belongs to, plus read and execute permission for all the other users (remember that the execute bit on a directory means that a user is able to `cd` into it or use `ls` to list its content).
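As a quick recap of the numeric form — a sketch that is not part of the original article, with made-up file and directory names — the special-bit digit is simply prefixed to the usual three permission digits:

```
# setuid (4) + 755 on a file: 's' shows up in the owner triad
touch tool && chmod 4755 tool
# setgid (2) + 775 on a directory: 's' shows up in the group triad
mkdir shared && chmod 2775 shared
# sticky (1) + 777 on a directory: 't' shows up in the "other" triad
mkdir dropbox && chmod 1777 dropbox

ls -ld tool shared dropbox
# drwxrwxrwt ... dropbox
# drwxrwsr-x ... shared
# -rwsr-xr-x ... tool
```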
The other way we can set the special permission bits is to use the ugo/rwx syntax:
```
$ chmod g+s test
```

To apply the `setuid` bit to a file, we would have run:
```
$ chmod u+s file
```

While to apply the sticky bit:
```
$ chmod o+t test
```

The use of special permissions can be very useful in some situations, but if not used correctly they can introduce serious vulnerabilities, so think twice before using them.

--------------------------------------------------------------------------------

via: https://linuxconfig.org/how-to-use-special-permissions-the-setuid-setgid-and-sticky-bits

作者:[Egidio Docile][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://linuxconfig.org

diff --git a/sources/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md b/sources/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md
deleted file mode 100644
index 2bb34b90ef..0000000000
--- a/sources/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md
+++ /dev/null
@@ -1,221 +0,0 @@

Protecting Your Website From Application Layer DOS Attacks With mod_evasive
======
There exist many ways of maliciously taking a website offline. The more complicated methods involve technical knowledge of databases and programming. A far simpler method is known as a "Denial Of Service", or "DOS" attack. This attack derives its name from its goal, which is to deny your regular clients or site visitors normal website service.

There are, generally speaking, two forms of DOS attack:

 1. Layer 3,4 or Network-Layer attacks.
 2. Layer 7 or Application-Layer attacks.

The first type of DOS attack, network-layer, is when a huge quantity of junk traffic is directed at the web server. When the quantity of junk traffic exceeds the capacity of the network infrastructure, the website is taken offline.

The second type of DOS attack, application-layer, is where instead of junk traffic, legitimate-looking page requests are made. When the number of page requests exceeds the capacity of the web server to serve pages, legitimate visitors will not be able to use the site.

This guide will look at mitigating application-layer attacks. This is because mitigating network-layer attacks requires huge quantities of available bandwidth and the co-operation of upstream providers. This is usually not something that can be protected against through configuration of the web server.

An application-layer attack, at least a modest one, can be protected against through the configuration of a normal web server. Protecting against this form of attack is important because [Cloudflare][1] have [recently reported][2] that the number of network-layer attacks is diminishing while the number of application-layer attacks is increasing.

This guide will explain using the Apache2 module [mod_evasive][3] by [zdziarski][4].

In addition, mod_evasive will stop an attacker trying to guess a username/password combination by attempting hundreds of combinations, i.e. a brute force attack.

Mod_evasive works by keeping a record of the number of requests arriving from each IP address. When this number exceeds one of several thresholds, that IP is served an error page. Error pages require far fewer resources than a site page, keeping the site online for legitimate visitors.
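Before setting the module up, it can be instructive to do this counting by hand once. The following one-liner is not from the original guide, just a rough sketch (the log path is an assumption; adjust it for your distribution) of the per-IP request tally that mod_evasive automates:

```
# top 10 client IPs by request count in the Apache access log
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head
```

A single IP towering over the rest is the signature of the kind of application-layer flood described above.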
### Installing mod_evasive on Ubuntu 16.04

Mod_evasive is contained in the default Ubuntu 16.04 repositories with the package name "libapache2-mod-evasive". A simple `apt-get` will get it installed:
```
apt-get update
apt-get upgrade
apt-get install libapache2-mod-evasive

```

We now need to configure mod_evasive.

Its configuration file is located at `/etc/apache2/mods-available/evasive.conf`. By default, all the module's settings are commented out after installation. Therefore, the module won't interfere with site traffic until the configuration file has been edited.
```
<IfModule mod_evasive20.c>
    #DOSHashTableSize    3097
    #DOSPageCount        2
    #DOSSiteCount        50
    #DOSPageInterval     1
    #DOSSiteInterval     1
    #DOSBlockingPeriod   10

    #DOSEmailNotify      you@yourdomain.com
    #DOSSystemCommand    "su - someuser -c '/sbin/... %s ...'"
    #DOSLogDir           "/var/log/mod_evasive"
</IfModule>

```

The first block of directives means the following:

 * **DOSHashTableSize** - The size of the hash table that stores the current list of accessing IPs and their request counts.
 * **DOSPageCount** - The threshold number of page requests per DOSPageInterval.
 * **DOSPageInterval** - The amount of time in which mod_evasive counts up the page requests.
 * **DOSSiteCount** - The same as the DOSPageCount, but counts requests from the same IP for any page on the site.
 * **DOSSiteInterval** - The amount of time in which mod_evasive counts up the site requests.
 * **DOSBlockingPeriod** - The amount of time in seconds that an IP is blocked for.

If the default configuration shown above is used, then an IP will be blocked if it:

 * Requests a single page more than twice a second.
 * Requests more than 50 different pages per second.

If an IP exceeds these thresholds it is blocked for 10 seconds.

This may not seem like a lot, however, mod_evasive will continue monitoring the page requests even for blocked IPs and resets their block period. As long as an IP is attempting to DOS the site it will remain blocked.

The remaining directives are:

 * **DOSEmailNotify** - An email address to receive notification of DOS attacks and IPs being blocked.
 * **DOSSystemCommand** - A command to run in the event of a DOS.
 * **DOSLogDir** - The directory where mod_evasive keeps some temporary files.

### Configuring mod_evasive

The default configuration is a good place to start as it should not block any legitimate users. The configuration file with all directives (apart from DOSSystemCommand) uncommented looks like the following:
```
<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        2
    DOSSiteCount        50
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   10

    DOSEmailNotify      JohnW@example.com
    #DOSSystemCommand   "su - someuser -c '/sbin/... %s ...'"
    DOSLogDir           "/var/log/mod_evasive"
</IfModule>

```

The log directory must be created and given the same owner as the apache process. Here it is created at `/var/log/mod_evasive` and given the owner and group of the Apache web server on Ubuntu, `www-data`:
```
mkdir /var/log/mod_evasive
chown www-data:www-data /var/log/mod_evasive

```

After editing Apache's configuration, especially on a live website, it is always a good idea to check the syntax of the edits before restarting or reloading. This is because a syntax error will stop Apache from re-starting and take your site offline.

Apache comes packaged with a helper command that has a configuration syntax checker. Simply run the following command to check your edits:
```
apachectl configtest

```

If your configuration is correct you will get the response:
```
Syntax OK

```

However, if there is a problem you will be told where it occurred and what it was, e.g.:
```
AH00526: Syntax error on line 6 of /etc/apache2/mods-enabled/evasive.conf:
DOSSiteInterval takes one argument, Set site interval
Action 'configtest' failed.
The Apache error log may have more information.

```

If your configuration passes the configtest then the module can be safely enabled and Apache reloaded:
```
a2enmod evasive
systemctl reload apache2.service

```

Mod_evasive is now configured and running.

### Testing

In order to test mod_evasive, we simply need to make enough web requests to the server that we exceed the threshold, and record the response codes from Apache.

A normal, successful page request will receive the response:
```
HTTP/1.1 200 OK

```

However, one that has been denied by mod_evasive will return the following:
```
HTTP/1.1 403 Forbidden

```

The following script will make HTTP requests to `127.0.0.1:80`, that is localhost on port 80, as rapidly as possible and print out the response code of every request.

All you need to do is to copy the following bash script into a file e.g. `mod_evasive_test.sh`:
```
#!/bin/bash
set -e

for i in {1..50}; do
    curl -s -I 127.0.0.1 | head -n 1
done

```

The parts of this script mean the following:

 * curl - This is a command to make web requests.
 * -s - Hide the progress meter.
 * -I - Only display the response header information.
 * head - Print the first part of a file.
 * -n 1 - Only display the first line.

Then make it executable:
```
chmod 755 mod_evasive_test.sh

```

When the script is run **before** mod_evasive is enabled, you will see 50 lines of `HTTP/1.1 200 OK` returned.

However, after mod_evasive is enabled you will see the following:
```
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden
...

```

The first two requests were allowed, but then once a third was made in the same second, mod_evasive denied any further requests. You will also receive an email at the address you set with the `DOSEmailNotify` option, letting you know that a DOS attempt was detected.

Mod_evasive is now protecting your site!

--------------------------------------------------------------------------------

via: https://bash-prompt.net/guides/mod_proxy/

作者:[Elliot Cooper][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://bash-prompt.net/about/
[1]:https://www.cloudflare.com
[2]:https://blog.cloudflare.com/the-new-ddos-landscape/
[3]:https://github.com/jzdziarski/mod_evasive
[4]:https://www.zdziarski.com/blog/

diff --git a/sources/tech/20171207 How To Find Files Based On their Permissions.md b/sources/tech/20171207 How To Find Files Based On their Permissions.md
deleted file mode 100644
index d9e6ecc95a..0000000000
--- a/sources/tech/20171207 How To Find Files Based On their Permissions.md
+++ /dev/null
@@ -1,172 +0,0 @@

translated by cyleft
How To Find Files Based On their Permissions
======
Finding files in Linux is not a big deal. There are plenty of free and open source graphical utilities available on the market. In my opinion, finding files from the command line is much easier and faster. We already know how to [**find and sort files based on access and modification date and time**][1]. Today, we will see how to find files based on their permissions in Unix-like operating systems.

For the purpose of this guide, I am going to create three files, namely **file1**, **file2** and **file3**, with permissions **777**, **766**, **655** respectively, in a folder named **ostechnix**.
```
mkdir ostechnix && cd ostechnix/
```
```
install -b -m 777 /dev/null file1
```
```
install -b -m 766 /dev/null file2
```
```
install -b -m 655 /dev/null file3
```

[![][2]][3]

Now let us find the files based on their permissions.

### Find files Based On their Permissions

The typical syntax to find files based on their permissions is:
```
find -perm mode
```

The MODE can be either a numeric (octal) permission (like 777, 666, etc.) or a symbolic permission (like u=x, a=r+x).

Before going further, note that we can specify the MODE in three different ways.

 1. If we specify the mode without any prefix, it will find files with the **exact** permissions.
 2. If we use the **"-"** prefix with the mode, the files should have at least the given permissions, not the exact permissions.
 3. If we use the **"/"** prefix, either the owner, the group, or others should have the given permission on the file.

Allow me to explain with some examples, so you can understand better.

First, we will see finding files based on numeric permissions.

### Find Files Based On their Numeric (octal) Permissions

Now let me run the following command:
```
find -perm 777
```

This command will find the files with permissions of **exactly 777** in the current directory.

[![][2]][4]

As you see in the above output, file1 is the only one that has **exactly 777 permissions**.

Now, let us use the "-" prefix and see what happens.
```
find -perm -766
```

[![][2]][5]

As you see, the above command displays two files. We set 766 permissions on file2, but this command displays two files. Why? Because here we have used the "-" prefix. It means that this command will find all files where the file owner has read/write/execute permissions, file group members have read/write permissions, and everyone else also has read/write permissions. In our case, file1 and file2 meet these criteria. In other words, the files need not have the exact 766 permissions. Any file that satisfies at least this 766 permission will be displayed.

Next, we will use the "/" prefix and see what happens.
```
find -perm /222
```

[![][2]][6]

The above command will find files which are writable by somebody (either their owner, or their group, or anybody else). Here is another example.
```
find -perm /220
```

This command will find files which are writable by either their owner or their group. That means the files **don't have to be writable** by **both the owner and group** to be matched; **either** will do.

But if you run the same command with the "-" prefix, you will only see the files which are writable by both owner and group.
```
find -perm -220
```

The following screenshot will show you the difference between these two prefixes.

[![][2]][7]

Like I already said, we can also use symbolic notation to represent the file permissions.

### Find Files Based On their Permissions using symbolic notation

In the following examples, we use symbolic notations such as **u** (for user), **g** (group), **o** (others).
We can also use the letter **a** to represent all three of these categories. The permissions can be specified using the letters **r** (read), **w** (write), **x** (execute).

For instance, to find any file with group **write** permission, run:
```
find -perm -g=w
```

[![][2]][8]

As you see in the above example, file1 and file2 have group **write** permission. Please note that you can use either "=" or "+" for symbolic notation. It doesn't matter. For example, the following two commands do the same thing.
```
find -perm -g=w
find -perm -g+w
```

To find any file which is writable by the file owner, run:
```
find -perm -u=w
```

To find any file which is writable by all (the file owner, group and everyone else), run:
```
find -perm -a=w
```

To find files which are writable by **both** their **owner** and their **group**, use this command:
```
find -perm -g+w,u+w
```

The above command is the equivalent of the "find -perm -220" command.

To find files which are writable by **either** their **owner** or their **group**, run:
```
find -perm /u+w,g+w
```

Or,
```
find -perm /u=w,g=w
```

These two commands do the same job as the "find -perm /220" command.

For more details, refer to the man pages.
```
man find
```

Also, check the [**man pages alternatives**][9] to learn more simplified examples of any Linux command.

And, that's all for now folks. I hope this guide was useful. More good stuff to come. Stay tuned.

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/find-files-based-permissions/

作者:[][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com
[1]:https://www.ostechnix.com/find-sort-files-based-access-modification-date-time-linux/
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-1-1.png ()
[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-2.png ()
[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-3.png ()
[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-6.png ()
[7]:http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-7.png ()
[8]:http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-8.png ()
[9]:https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/

diff --git a/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md b/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md
deleted file mode 100644
index 9e8de9c467..0000000000
--- a/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md
+++ /dev/null
@@ -1,179 +0,0 @@

How To Count The Number Of Files And Folders/Directories In Linux
======
Hi folks, today again we have come up with a set of tricky commands that will help you in many ways. These are manipulation commands which help you to count files and directories in the current directory, do recursive counts, list files created by a particular user, etc.

In this tutorial, we are going to show you how to combine more than one command, using ls, egrep, wc and find together, to perform some advanced actions. The below set of commands will help you in many ways.
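One of the variations mentioned above, counting the files created by a particular user, is not demonstrated in the examples that follow, so here is a minimal sketch (the user name "magesh" is a placeholder):

```
# count regular files under the current directory owned by one user
find . -type f -user magesh | wc -l
```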
To experiment with this, I'm going to create a total of 7 files and 2 folders (5 regular files & 2 hidden files). See the below tree command output, which clearly shows the files and folder list.

**Suggested Read :** [File Manipulation Commands][1]
```
# tree -a /opt
/opt
├── magi
│   └── 2g
│       ├── test5.txt
│       └── .test6.txt
├── test1.txt
├── test2.txt
├── test3.txt
├── .test4.txt
└── test.txt

2 directories, 7 files

```

**Example-1 :** To count current directory files (excluding hidden files). Run the following command to determine how many files there are in the current directory; it doesn't count dotfiles.
```
# ls -l . | egrep -c '^-'
4

```

**Details :**

 * `ls` : list directory contents
 * `-l` : Use a long listing format
 * `.` : List information about the FILEs (the current directory by default).
 * `|` : control operator that sends the output of one program to another program for further processing.
 * `egrep` : print lines matching a pattern
 * `-c` : print a count of matching lines
 * `'^-'` : matches lines that start with `-`, i.e. regular file entries in the long listing.

**Example-2 :** To count current directory files including hidden files. This will include dotfiles as well in the current directory.
```
# ls -la . | egrep -c '^-'
5

```

**Example-3 :** Run the following command to count current directory files & folders. It will count all of them together at once.
```
# ls -1 | wc -l
5

```

**Details :**

 * `ls` : list directory contents
 * `-1` : list one file per line
 * `|` : control operator that sends the output of one program to another program for further processing.
 * `wc` : It's a command to print newline, word, and byte counts for each file
 * `-l` : print the newline counts

**Example-4 :** To count current directory files & folders including hidden files & directories.
```
# ls -1a | wc -l
8

```

**Example-5 :** To count current directory files recursively, including hidden files.
```
# find . -type f | wc -l
7

```

**Details :**

 * `find` : search for files in a directory hierarchy
 * `-type` : File is of type
 * `f` : regular file
 * `wc` : It's a command to print newline, word, and byte counts for each file
 * `-l` : print the newline counts

**Example-6 :** To print the directories & files count using the tree command (excluding hidden files).
```
# tree | tail -1
2 directories, 5 files

```

**Example-7 :** To print the directories & files count using the tree command, including hidden files.
```
# tree -a | tail -1
2 directories, 7 files

```

**Example-8 :** Run the below command to count directories recursively, including hidden directories.
```
# find . -type d | wc -l
3

```

**Example-9 :** To count the number of files based on file extension. Here we are going to count `.txt` files.
```
# find . -name "*.txt" | wc -l
7

```

**Example-10 :** Count all files in the current directory by using the echo command in combination with the wc command. `4` indicates the amount of files in the current directory.
```
# echo *.txt | wc
1 4 39

```

**Example-11 :** Count all directories in the current directory by using the echo command in combination with the wc command. `1` indicates the amount of directories in the current directory.
```
# echo */ | wc
1 1 6

```

**Example-12 :** Count all files and directories in the current directory by using the echo command in combination with the wc command.
`5` indicates the amount of directories and files in the current directory.
```
# echo * | wc
1 5 44

```

**Example-13 :** To count the number of files in the system (entire system):
```
# find / -type f | wc -l
69769

```

**Example-14 :** To count the number of folders in the system (entire system):
```
# find / -type d | wc -l
8819

```

**Example-15 :** Run the following command to count the number of files, folders, hardlinks, and symlinks in the system (entire system):
```
# find / -type d -exec echo dirs \; -o -type l -exec echo symlinks \; -o -type f -links +1 -exec echo hardlinks \; -o -type f -exec echo files \; | sort | uniq -c
 8779 dirs
 69343 files
 20 hardlinks
 11646 symlinks

```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-count-the-number-of-files-and-folders-directories-in-linux/

作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/magesh/
[1]:https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/

diff --git a/sources/tech/20171218 Internet Chemotherapy.md b/sources/tech/20171218 Internet Chemotherapy.md
index ffe15fb5c1..2d2b950db5 100644
--- a/sources/tech/20171218 Internet Chemotherapy.md
+++ b/sources/tech/20171218 Internet Chemotherapy.md
@@ -1,3 +1,4 @@
+(yixunx translating)
Internet Chemotherapy
======

diff --git a/sources/tech/20171219 How to generate webpages using CGI scripts.md b/sources/tech/20171219 How to generate webpages using CGI scripts.md
deleted file mode 100644
index b452cdee84..0000000000
--- a/sources/tech/20171219 How to generate webpages using CGI scripts.md
+++ /dev/null
@@ -1,129 +0,0 @@

How to generate webpages using CGI scripts
======
Back in the stone age of the Internet, when I created my first business website, life was good.

I installed Apache and created a few simple HTML pages that stated a few important things about my business and gave important information like an overview of my product and how to contact me. It was a static website because the content seldom changed. Maintenance was simple because of the unchanging nature of my site.

## Static content

Static content is easy and still common. Let's take a quick look at a couple of sample static web pages. You don't need a working website to perform these little experiments. Just place the files in your home directory and open them with your browser. You will see exactly what you would if the file were served to your browser via a web server.

The first thing you need on a static website is the index.html file, which is usually located in the /var/www/html directory. This file can be as simple as a text phrase such as "Hello world" without any HTML markup at all. This would simply display the text string. Create index.html in your home directory and add "Hello world" (without the quotes) as its only content. Open index.html in your browser with the following URL.
```
file:///home//index.html
```

So HTML is not required, but if you had a large amount of text that needed formatting, the results of a web page with no HTML coding would be incomprehensible, with everything running together.

So the next step is to make the content more readable by using a bit of HTML coding to provide some formatting. The following command creates a page with the absolute minimum markup required for a static web page with HTML. You could also use your favorite editor to create the content.
```
echo "<h1>Hello World</h1>" > test1.html
```

Now view test1.html and see the difference.

Of course you can put a lot of additional HTML around the actual content line to make a more complete and standard web page. That more complete version, as shown below, will still display the same results in the browser, but it also forms the basis for a more standardized web site. Go ahead and use this content for your index.html file and display it in your browser.
```
<html>
<head>
<title>My Web Page</title>
</head>
<body>
<h1>Hello World</h1>
</body>
</html>
```

I built a couple of static websites using these techniques, but my life was about to change.

## Dynamic web pages for a new job

I took a new job in which my primary task was to create and maintain the CGI ([Common Gateway Interface][6]) code for a very dynamic website. In this context, dynamic means that the HTML needed to produce the web page on a browser was generated from data that could be different every time the page was accessed. This includes input from the user on a web form that is used to look up data in a database. The resulting data is surrounded by appropriate HTML and displayed on the requesting browser. But it does not need to be that complex.

Using CGI scripts for a website allows you to create simple or complex interactive programs that can be run to provide a dynamic web page that can change based on input, calculations, current conditions in the server, and so on. There are many languages that can be used for CGI scripts. We will look at two of them, Perl and Bash. Other popular CGI languages include PHP and Python.

This article does not cover installation and setup of Apache or any other web server. If you have access to a web server that you can experiment with, you can directly view the results as they would appear in a browser. Otherwise, you can still run the programs from the command line and view the HTML that would be created. You can also redirect that HTML output to a file and then display the resulting file in your browser.

### Using Perl

Perl is a very popular language for CGI scripts. Its strength is that it is a very powerful language for the manipulation of text.

To get CGI scripts to execute, you need the following line in the httpd.conf for the website you are using. This tells the web server where your executable CGI files are located. For this experiment, let's not worry about that.
```
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
```

Add the following Perl code to the file index.cgi, which should be located in your home directory for your experimentation. Set the ownership of the file to apache.apache when you use a web server, and set the permissions to 755 because it must be executable no matter where it is located.

```
#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "<html><body>\n";
print "<h1>Hello World</h1>\n";
print "Using Perl<br>\n";
print "</body></html>\n";
```

Run this program from the command line and view the results. It should display the HTML code it will generate.

Now view index.cgi in your browser. Well, all you get is the contents of the file. Browsers really need to have this delivered as CGI content. Apache does not really know that it needs to run the file as a CGI program unless the Apache configuration for the web site includes the "ScriptAlias" definition as shown above. Without that bit of configuration, Apache simply sends the data in the file to the browser. If you have access to a web server, you could try this out with your executable index files in the /var/www/cgi-bin directory.

To see what this would look like in your browser, run the program again and redirect the output to a new file. Name it whatever you want. Then use your browser to view the file that contains the generated content.

The above CGI program is still generating static content because it always displays the same output. Add the following line to your CGI program immediately after the "Hello World" line. The Perl "system" command executes the commands following it in a system shell, and returns the result to the program. In this case, we simply grep the current RAM usage out of the results from the free command.

```
system "free | grep Mem\n";
```

Now run the program again and redirect the output to the results file. Reload the file in the browser. You should see an additional line that displays the system memory statistics. Run the program and refresh the browser a couple more times and notice that the memory usage should change occasionally.

### Using Bash

Bash is probably the simplest language of all for use in CGI scripts. Its primary strength for CGI programming is that it has direct access to all of the standard GNU utilities and system programs.

Rename the existing index.cgi to Perl.index.cgi and create a new index.cgi with the following content. Remember to set the permissions correctly to executable.

```
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo '<html>'
echo '<head>'
echo '<title>Hello World</title>'
echo '</head>'
echo '<body>'
echo '<h1>Hello World</h1>'
echo 'Using Bash<br>'
free | grep Mem
echo '</body>'
echo '</html>'
exit 0
```

Execute this program from the command line and view the output, then run it and redirect the output to the temporary results file you created before. Then refresh the browser to view what it looks like displayed as a web page.

## Conclusion

It is actually very simple to create CGI programs that can be used to generate a wide range of dynamic web pages. This is a trivial example, but you should now see some of the possibilities.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/12/cgi-scripts

作者:[David Both][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/dboth
[1]:http://december.com/html/4/element/html.html
[2]:http://december.com/html/4/element/head.html
[3]:http://december.com/html/4/element/title.html
[4]:http://december.com/html/4/element/body.html
[5]:http://december.com/html/4/element/h1.html
[6]:https://en.wikipedia.org/wiki/Common_Gateway_Interface
[7]:http://perldoc.perl.org/functions/system.html

diff --git a/sources/tech/20171224 My first Rust macro.md b/sources/tech/20171224 My first Rust macro.md
new file mode 100644
index 0000000000..a8002e050b
--- /dev/null
+++ b/sources/tech/20171224 My first Rust macro.md
@@ -0,0 +1,145 @@

My first Rust macro
============================================================

Last night I wrote a Rust macro for the first time!! The most striking thing to me about this was how **easy** it was – I kind of expected it to be a weird hard finicky thing, and instead I found that I could go from "I don't know how macros work but I think I could do this with a macro" to "wow I'm done" in less than an hour.

I used [these examples][2] to figure out how to write my macro.

### what's a macro?

There's more than one kind of macro in Rust –

* macros defined using `macro_rules` (they have an exclamation mark and you call them like functions – `my_macro!()`)

* "syntax extensions" / "procedural macros" like `#[derive(Debug)]` (you put these like annotations on your functions)

* built-in macros like `println!`

[Macros in Rust][3] and [Macros in Rust part II][4] seem like a nice overview of the different kinds, with examples

I'm not actually going to try to explain what a macro **is**, instead I will just show you what I used a macro for yesterday and hopefully that will be interesting. I'm going to be talking about `macro_rules!`, I don't understand syntax extension/procedural macros yet.

### compiling the `get_stack_trace` function for 30 different Ruby versions

I'd written some functions that got the stack trace out of a running Ruby program (`get_stack_trace`). But the function I wrote only worked for Ruby 2.2.0 – here's what it looked like. Basically it imported some structs from `bindings::ruby_2_2_0` and then used them.

```
use bindings::ruby_2_2_0::{rb_control_frame_struct, rb_thread_t, RString};
fn get_stack_trace(pid: pid_t) -> Vec<String> {
    // some code using rb_control_frame_struct, rb_thread_t, RString
}

```

Let's say I wanted to instead have a version of `get_stack_trace` that worked for Ruby 2.1.6. `bindings::ruby_2_2_0` and `bindings::ruby_2_1_6` had basically all the same structs in them.
But `bindings::ruby_2_1_6::rb_thread_t` wasn't the **same** as `bindings::ruby_2_2_0::rb_thread_t`, it just had the same name and most of the same struct members.

So I could implement a working function for Ruby 2.1.6 really easily! I just needed to basically replace `2_2_0` with `2_1_6`, and then the compiler would generate different code (because `rb_thread_t` is different). Here's a sketch of what the Ruby 2.1.6 version would look like:

```
use bindings::ruby_2_1_6::{rb_control_frame_struct, rb_thread_t, RString};
fn get_stack_trace(pid: pid_t) -> Vec<String> {
    // some code using rb_control_frame_struct, rb_thread_t, RString
}

```

### what I wanted to do

I basically wanted to write code like this, to generate a `get_stack_trace` function for every Ruby version. The code inside `get_stack_trace` would be the same in every case, it's just the `use bindings::ruby_2_1_3` that needed to be different

```
pub mod ruby_2_1_3 {
    use bindings::ruby_2_1_3::{rb_control_frame_struct, rb_thread_t, RString};
    fn get_stack_trace(pid: pid_t) -> Vec<String> {
        // insert code here
    }
}
pub mod ruby_2_1_4 {
    use bindings::ruby_2_1_4::{rb_control_frame_struct, rb_thread_t, RString};
    fn get_stack_trace(pid: pid_t) -> Vec<String> {
        // same code
    }
}
pub mod ruby_2_1_5 {
    use bindings::ruby_2_1_5::{rb_control_frame_struct, rb_thread_t, RString};
    fn get_stack_trace(pid: pid_t) -> Vec<String> {
        // same code
    }
}
pub mod ruby_2_1_6 {
    use bindings::ruby_2_1_6::{rb_control_frame_struct, rb_thread_t, RString};
    fn get_stack_trace(pid: pid_t) -> Vec<String> {
        // same code
    }
}

```

### macros to the rescue!

This really repetitive thing I wanted to do was a GREAT fit for macros. Here's what using `macro_rules!` to do this looked like!

```
macro_rules! ruby_bindings(
    ($ruby_version:ident) => (
        pub mod $ruby_version {
            use bindings::$ruby_version::{rb_control_frame_struct, rb_thread_t, RString};
            fn get_stack_trace(pid: pid_t) -> Vec<String> {
                // insert code here
            }
        }
));

```

I basically just needed to put my code in and insert `$ruby_version` in the places I wanted it to go. So simple! I literally just looked at an example, tried the first thing I thought would work, and it worked pretty much right away.

(the [actual code][5] is more lines and messier, but the usage of macros is exactly as simple as in this example)

I was SO HAPPY about this because I'd been worried getting this to work would be hard, but instead it was so easy!!

### dispatching to the right code

Then I wrote some super simple dispatch code to call the right code depending on which Ruby version was running!

```
    let version = get_api_version(pid);
    let stack_trace_function = match version.as_ref() {
        "2.1.1" => stack_trace::ruby_2_1_1::get_stack_trace,
        "2.1.2" => stack_trace::ruby_2_1_2::get_stack_trace,
        "2.1.3" => stack_trace::ruby_2_1_3::get_stack_trace,
        "2.1.4" => stack_trace::ruby_2_1_4::get_stack_trace,
        "2.1.5" => stack_trace::ruby_2_1_5::get_stack_trace,
        "2.1.6" => stack_trace::ruby_2_1_6::get_stack_trace,
        "2.1.7" => stack_trace::ruby_2_1_7::get_stack_trace,
        "2.1.8" => stack_trace::ruby_2_1_8::get_stack_trace,
        // and like 20 more versions
        _ => panic!("OH NO OH NO OH NO"),
    };

```

### it works!

I tried out my prototype, and it totally worked! The same program could get stack traces out of the running Ruby program for all of the ~10 different Ruby versions I tried – it figured out which Ruby version was running, called the right code, and got me stack traces!!
+ +Previously I’d compile a version for Ruby 2.2.0 but then if I tried to use it for any other Ruby version it would crash, so this was a huge improvement. + +There are still more issues with this approach that I need to sort out. The two main ones right now are: firstly the ruby binary that ships with Debian doesn’t have symbols and I need the address of the current thread, and secondly it’s still possible that `#ifdefs` will ruin my day. + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/12/24/my-first-rust-macro/ + +作者:[Julia Evans ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca +[1]:https://jvns.ca/categories/ruby-profiler +[2]:https://gist.github.com/jfager/5936197 +[3]:https://www.ncameron.org/blog/macros-in-rust-pt1/ +[4]:https://www.ncameron.org/blog/macros-in-rust-pt2/ +[5]:https://github.com/jvns/ruby-stacktrace/blob/b0b92863564e54da59ea7f066aff5bb0d92a4968/src/lib.rs#L249-L393 diff --git a/sources/tech/20171226 How to Configure Linux for Children.md b/sources/tech/20171226 How to Configure Linux for Children.md deleted file mode 100644 index 318e4126a7..0000000000 --- a/sources/tech/20171226 How to Configure Linux for Children.md +++ /dev/null @@ -1,143 +0,0 @@ -How to Configure Linux for Children -====== - -![](https://www.maketecheasier.com/assets/uploads/2017/12/keep-kids-safe-online-hero.jpg) - -If you've been around computers for a while, you might associate Linux with a certain stereotype of computer user. How do you know someone uses Linux? Don't worry, they'll tell you. - -But Linux is an exceptionally customizable operating system. This allows users an unprecedented degree of control. In fact, parents can set up a specialized distro of Linux for children, ensuring children don't stumble across dangerous content accidentally. While the process is more prolonged than using Windows, it's also more powerful and durable. Linux is also free, which can make it well-suited for classroom or computer lab deployment. - -## Linux Distros for Children - -These Linux distros for children are built with simplified, kid-friendly interfaces. An adult will need to install and set up the operating system at first, but kids can run the computer entirely alone. You'll find large colorful interfaces, plenty of pictures and simple language. - -Unfortunately, none of these distros are regularly updated, and some are no longer in active development. That doesn't mean they won't work, but it does make malfunctions more likely. - -![qimo-gcompris][1] - - -### 1. Edubuntu - -[Edubuntu][2] is an education-specific fork of the popular Ubuntu operating system. It has a rich graphical environment and ships with a lot of educational software that's easy to update and maintain. It's designed for children in middle and high school. - -### 2. Ubermix - -[Ubermix][3] is designed from the ground up with the needs of education in mind. Ubermix takes all the complexity out of student devices by making them as reliable and easy-to-use as a cell phone without sacrificing the power and capabilities of a full operating system. With a turn-key, five-minute installation, twenty-second quick recovery mechanism, and more than sixty free applications pre-installed, ubermix turns whatever hardware you have into a powerful device for learning. - -### 3. 
Sugar - -[Sugar][4] is the operating system built for the One Laptop Per Child initiative. Sugar is pretty different from normal desktop Linux, with a heavy bias towards classroom use and teaching programming skills. - - **Note** : do note that there are several more Linux distros for kids that we didn't include in the list above because they have not been actively developed or were abandoned a long time ago. - -## Content Filtering Linux for Children - -The best tool for protecting children from accessing inappropriate content is you, but you can't be there all the time. Content filtering via proxy filtering sets up certain URLs as "off limits." There are two main tools you can use. - -![linux-for-children-content-filtering][5] - -### 1. DansGuardian - -[DansGuardian][6], an open-source content filter that works on virtually every Linux distro, is flexible and powerful, requiring command-line setup with a proxy of your choice. If you don't mind digging into proxy settings, this is the most powerful choice. - -Setting up DansGuardian is not an easy task, and you can follow the installation instructions on its main page. But once it is set up, it is a very effective tool to filter out unwanted content. - -### 2. Parental Control: Family Friendly Filter - -[Parental Control: Family Friendly Filter][7] is an extension for Firefox that allows parents to block sites containing pornography and any other kind of inappropriate material. You can blacklist particular domains so that bad websites are always blocked. - -![firefox-content-filter-addon][8] - -If you are still using an older version of Firefox that doesn't support [web extensions][9], then you can check out [ProCon Latte Content Filter][10]. Parents add domains to a pre-loaded blacklist and set a password to keep the extension from being modified. - -### 3. Blocksi Web Filter - -[Blocksi Web Filter][11] is an extension for Chrome and is useful for Web and Youtube filtering. It also comes with a time-access control so that you can limit the hours your kids can access the Web. - -## Fun Stuff - -![linux-for-children-tux-kart][12] - -Any computer for children better have some games on it, educational or otherwise. While Linux isn't as gaming-friendly as Windows, it's getting closer all the time. Here are several suggestions for constructive games you might load on to Linux for children: - -* [Super Tux Kart][21] (kart racing game) - -* [GCompris][22] (educational game suite) - -* [Secret Maryo Chronicles][23] (Super Mario clone) - -* [Childsplay][24] (educational/memory games) - -* [EToys][25] (programming for kids) - -* [TuxTyping][26], (typing game) - -* [Kalzium][27] (periodic table guide) - -* [Tux of Math Command][28] (math arcade games) - -* [Pink Pony][29] (Tron-like racing game) - -* [KTuberling][30] (constructor game) - -* [TuxPaint][31] (painting) - -* [Blinken][32] ([memory][33] game) - -* [KTurtle][34] (educational programming environment) - -* [KStars][35] (desktop planetarium) - -* [Marble][36] (virtual globe) - -* [KHangman][37] (hangman guessing game) - -## Conclusion: Why Linux for Children? - -Linux has a reputation for being needlessly complex. So why use Linux for children? It's about setting kids up to learn. Working with Linux provides many opportunities to learn how the operating system works. As children get older, they'll have opportunities to explore, driven by their own interests and curiosity. 
Because the Linux platform is so open to users, it's an excellent venue for children to discover a life-long love of computers.
-
-This article was first published in July 2010 and was updated in December 2017.
-
-Image by [Children at school][13]
-
--------------------------------------------------------------------------------
-
-via: https://www.maketecheasier.com/configure-linux-for-children/
-
-作者:[Alexander Fox][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.maketecheasier.com/author/alexfox/
-[1]:https://www.maketecheasier.com/assets/uploads/2010/08/qimo-gcompris.jpg (qimo-gcompris)
-[2]:http://www.edubuntu.org
-[3]:http://www.ubermix.org/
-[4]:http://wiki.sugarlabs.org/go/Downloads
-[5]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-content-filtering.png (linux-for-children-content-filtering)
-[6]:https://help.ubuntu.com/community/DansGuardian
-[7]:https://addons.mozilla.org/en-US/firefox/addon/family-friendly-filter/
-[8]:https://www.maketecheasier.com/assets/uploads/2017/12/firefox-content-filter-addon.png (firefox-content-filter-addon)
-[9]:https://www.maketecheasier.com/best-firefox-web-extensions/
-[10]:https://addons.mozilla.org/en-US/firefox/addon/procon-latte/
-[11]:https://chrome.google.com/webstore/detail/blocksi-web-filter/pgmjaihnmedpcdkjcgigocogcbffgkbn?hl=en
-[12]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-tux-kart-e1513389774535.jpg (linux-for-children-tux-kart)
-[13]:https://www.flickr.com/photos/lupuca/8720604364
-[21]:http://supertuxkart.sourceforge.net/
-[22]:http://gcompris.net/
-[23]:http://www.secretmaryo.org/
-[24]:http://www.schoolsplay.org/
-[25]:http://www.squeakland.org/about/intro/
-[26]:http://tux4kids.alioth.debian.org/tuxtype/index.php
-[27]:http://edu.kde.org/kalzium/
-[28]:http://tux4kids.alioth.debian.org/tuxmath/index.php
-[29]:http://code.google.com/p/pink-pony/
-[30]:http://games.kde.org/game.php?game=ktuberling
-[31]:http://www.tuxpaint.org/
-[32]:https://www.kde.org/applications/education/blinken/
-[33]:https://www.ebay.com/sch/i.html?_nkw=memory
-[34]:https://www.kde.org/applications/education/kturtle/
-[35]:https://www.kde.org/applications/education/kstars/
-[36]:https://www.kde.org/applications/education/marble/
-[37]:https://www.kde.org/applications/education/khangman/
diff --git a/sources/tech/20171231 Why You Should Still Love Telnet.md b/sources/tech/20171231 Why You Should Still Love Telnet.md
index 6e6976fda4..201ee91bd4 100644
--- a/sources/tech/20171231 Why You Should Still Love Telnet.md
+++ b/sources/tech/20171231 Why You Should Still Love Telnet.md
@@ -1,3 +1,4 @@
+XYenChi is translating
Why You Should Still Love Telnet
======
Telnet, the protocol and the command line tool, was how system administrators used to log into remote servers. However, because there is no encryption, all communication, including passwords, is sent in plaintext; that fact meant Telnet was abandoned in favour of SSH almost as soon as SSH was created.
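Even so, the telnet client survives as a handy debugging tool for any plaintext protocol. As a quick illustration (the transcript below is illustrative and assumes a host that still answers plain HTTP on port 80), you can speak HTTP by hand: after connecting, type the request lines and then a blank line.

```
$ telnet example.com 80
Trying 93.184.216.34...
Connected to example.com.
Escape character is '^]'.
GET / HTTP/1.1
Host: example.com

```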
diff --git a/sources/tech/20180102 Best open source tutorials in 2017.md b/sources/tech/20180102 Best open source tutorials in 2017.md deleted file mode 100644 index e9d9d7b9ad..0000000000 --- a/sources/tech/20180102 Best open source tutorials in 2017.md +++ /dev/null @@ -1,82 +0,0 @@ -Best open source tutorials in 2017 -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G) - -A well-written tutorial is a great supplement to any software's official documentation. It can also be an effective alternative if that official documentation is poorly written, incomplete, or non-existent. - -In 2017, Opensource.com published a number of excellent tutorials on a variety of topics. Those tutorials weren't just for experts. We aimed them at users of all levels of skill and experience. - -Let's take a look at the best of those tutorials. - -### It's all about the code - -For many, their first foray into open source involved contributing code to one project or another. Where do you go to learn to code or program? The following two articles are great starting points. - -While not a tutorial in the strictest sense of the word, VM Brasseur's [How to get started learning to program][1] is a good starting point for the neophyte coder. It doesn't merely point out some excellent resources that will help you get started, but also offers important advice about understanding your learning style and how to pick a language. - -If you've logged a more than a few hours in an [IDE][2] or a text editor, you'll probably want to learn a bit more about different approaches to coding. Fraser Tweedale's [Introduction to functional programming][3] does a fine job of introducing a paradigm that you can apply to many widely used programming languages. - -### Going Linux - -Linux is arguably the poster child of open source. It runs a good chunk of the web and powers the world's top supercomputers. And it gives anyone an alternative to proprietary operating systems on their desktops. - -If you're interested in diving deeper into Linux, here are a trio of tutorials for you. - -Jason Baker looks at [setting the Linux $PATH variable][4]. He guides you through this "important skill for any beginning Linux user," which enables you to point the system to directories containing programs and scripts. - -Embrace your inner techie with David Both's guide to [building a DNS name server][5]. He documents, in considerable detail, how to set up and run the server, including what configuration files to edit and how to edit them. - -Want to go a bit more retro in your computing? Jim Hall shows you how to [run DOS programs in Linux][6] using [FreeDOS][7] and [QEMU][8]. Hall's article focuses on running DOS productivity tools, but it's not all serious--he talks about running his favorite DOS games, too. - -### Three slices of Pi - -It's no secret that inexpensive single-board computers have made hardware hacking fun again. Not only that, but they've made it more accessible to more people, regardless of their age or their level of technical proficiency. - -The [Raspberry Pi][9] is probably the most widely used single-board computer out there. Ben Nuttall walks us through how to install and set up [a Postgres database on a Raspberry Pi][10]. From there, you're ready to use it in whatever project you have in mind. 
- -If your tastes include both the literary and technical, you might be interested in Don Watkins' [How to turn a Raspberry Pi into an eBook server][11]. With a little work and a copy of the [Calibre eBook management software][12], you'll be able to get to your favorite eBooks anywhere you are. - -Raspberry isn't the only flavor of Pi out there. There's also the [Orange Pi Pc Plus][13], an open-source single-board computer. David Egts looks at [getting started with this hackable mini-computer][14]. - -### Day-to-day computing - -Open source isn't just for techies. Mere mortals use it to do their daily work and be more productive. Here are a trio of articles for those of us who have 10 thumbs when it comes to anything technical (and for those who don't). - -When you think of microblogging, you probably think Twitter. But Twitter has more than its share of problems. [Mastodon][15] is an open alternative to Twitter that debuted in 2016. Since then, Mastodon has gained a sizeable base of users. Seth Kenlon explains [how to join and use Mastodon][16], and even shows you how to cross-post between Mastodon and Twitter. - -Do you need a little help staying on top of your expenses? All you need is a spreadsheet and the right template. My article on [getting control of your finances][17] shows you how to create a simple, attractive finance-tracking spreadsheet with [LibreOffice Calc][18] (or any other spreadsheet editor). - -ImageMagick is a powerful tool for manipulating graphics. It's one, though, that many people don't use as often as they should. That means they forget the commands just when they need them the most. If that's you, then keep Greg Pittman's [introductory tutorial to ImageMagick][19] handy for those times you need some help. - -Do you have a favorite tutorial published by Opensource.com in 2017? Feel free to share it with the community by leaving a comment. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/best-tutorials - -作者:[Scott Nesbitt][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/scottnesbitt -[1]:https://opensource.com/article/17/4/how-get-started-learning-program -[2]:https://en.wikipedia.org/wiki/Integrated_development_environment -[3]:https://opensource.com/article/17/4/introduction-functional-programming -[4]:https://opensource.com/article/17/6/set-path-linux -[5]:https://opensource.com/article/17/4/build-your-own-name-server -[6]:https://opensource.com/article/17/10/run-dos-applications-linux -[7]:http://www.freedos.org/ -[8]:https://www.qemu.org -[9]:https://en.wikipedia.org/wiki/Raspberry_Pi -[10]:https://opensource.com/article/17/10/set-postgres-database-your-raspberry-pi -[11]:https://opensource.com/article/17/6/raspberrypi-ebook-server -[12]:https://calibre-ebook.com/ -[13]:http://www.orangepi.org/ -[14]:https://opensource.com/article/17/1/how-to-orange-pi -[15]:https://joinmastodon.org/ -[16]:https://opensource.com/article/17/4/guide-to-mastodon -[17]:https://opensource.com/article/17/8/budget-libreoffice-calc -[18]:https://www.libreoffice.org/discover/calc/ -[19]:https://opensource.com/article/17/8/imagemagick diff --git a/sources/tech/20180102 HTTP errors in WordPress.md b/sources/tech/20180102 HTTP errors in WordPress.md deleted file mode 100644 index 79c92c24b2..0000000000 --- a/sources/tech/20180102 HTTP errors in WordPress.md +++ /dev/null @@ -1,166 +0,0 @@ -translating by wenwensnow -HTTP errors in WordPress -====== -![http error wordpress][1] - -We'll show you, how to fix HTTP errors in WordPress, on a Linux VPS. Listed below are the most common HTTP errors in WordPress, experienced by WordPress users, and our suggestions on how to investigate and fix them. - -### 1\. Fix HTTP error in WordPress when uploading images - -If you get an error when uploading an image to your WordPress based site, it may be due to PHP configuration settings on your server, like insufficient memory limit or so. - -Locate the php configuration file using the following command: -``` -#php -i | grep php.ini -Configuration File (php.ini) Path => /etc -Loaded Configuration File => /etc/php.ini -``` - -According to the output, the PHP configuration file is located in the '/etc' directory, so edit the '/etc/php.ini' file, find the lines below and modify them with these values: -``` -vi /etc/php.ini -``` -``` -upload_max_filesize = 64M -post_max_size = 32M -max_execution_time = 300 -max_input_time 300 -memory_limit = 128M -``` - -Of course if you are unfamiliar with the vi text editor, use your favorite one. - -Do not forget to restart your web server for the changes to take effect. - -If the web server installed on your server is Apache, you may use .htaccess. First, locate the .htaccess file. It should be in the document root directory of the WordPress installation. If there is no .htaccess file, create one, then add the following content: -``` -vi /www/html/path_to_wordpress/.htaccess -``` -``` -php_value upload_max_filesize 64M -php_value post_max_size 32M -php_value max_execution_time 180 -php_value max_input_time 180 - -# BEGIN WordPress - -RewriteEngine On -RewriteBase / -RewriteRule ^index\.php$ - [L] -RewriteCond %{REQUEST_FILENAME} !-f -RewriteCond %{REQUEST_FILENAME} !-d -RewriteRule . 
/index.php [L] - -# END WordPress -``` - -If you are using nginx, configure the nginx server block about your WordPress instance. It should look something like the example below: -``` -server { - -listen 80; -client_max_body_size 128m; -client_body_timeout 300; - -server_name your-domain.com www.your-domain.com; - -root /var/www/html/wordpress; -index index.php; - -location = /favicon.ico { -log_not_found off; -access_log off; -} - -location = /robots.txt { -allow all; -log_not_found off; -access_log off; -} - -location / { -try_files $uri $uri/ /index.php?$args; -} - -location ~ \.php$ { -include fastcgi_params; -fastcgi_pass 127.0.0.1:9000; -fastcgi_index index.php; -fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; -} - -location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { -expires max; -log_not_found off; -} -} -``` - -Depending on the PHP configuration, you may need to replace 'fastcgi_pass 127.0.0.1:9000;' with 'fastcgi_pass unix:/var/run/php7-fpm.sock;' or so. - -Restart nginx service for the changes to take effect. - -### 2\. Fix HTTP error in WordPress due to incorrect file permissions - -If you get an unexpected HTTP error in WordPress, it may be due to incorrect file permissions, so set a proper ownership of your WordPress files and directories: -``` -chown www-data:www-data -R /var/www/html/path_to_wordpress/ -``` - -Replace 'www-data' with the actual web server user, and '/var/www/html/path_to_wordpress' with the actual path of the WordPress installation. - -### 3\. Fix HTTP error in WordPress due to memory limit - -The PHP memory_limit value can be set by adding this to your wp-config.php file: -``` - define('WP_MEMORY_LIMIT', '128MB'); -``` - -### 4\. Fix HTTP error in WordPress due to misconfiguration of PHP.INI - -Edit the main PHP configuration file and locate the line with the content 'cgi.fix_pathinfo' . This will be commented by default and set to 1. Uncomment the line (remove the semi-colon) and change the value from 1 to 0. You may also want to change the 'date.timezone' PHP setting, so edit the PHP configuration file and modify this setting to 'date.timezone = US/Central' (or whatever your timezone is). -``` - vi /etc/php.ini -``` -``` - cgi.fix_pathinfo=0 - date.timezone = America/New_York -``` - -### 5. Fix HTTP error in WordPress due to Apache mod_security modul - -If you are using the Apache mod_security module, it might be causing problems. Try to disable it to see if that is the problem by adding the following lines in .htaccess: -``` - -SecFilterEngine Off -SecFilterScanPOST Off - -``` - -### 6. Fix HTTP error in WordPress due to problematic plugin or theme - -Some plugins and/or themes may cause HTTP errors and other problems in WordPress. You can try to disable the problematic plugins/themes, or temporarily disable all the plugins. If you have phpMyAdmin, use it to deactivate all plugins: -Locate the table wp_options, under the option_name column (field) find the 'active_plugins' row and change the option_value field to: a:0:{} - -Or, temporarily rename your plugins directory via SSH using the following command: -``` - mv /www/html/path_to_wordpress/wp-content/plugins /www/html/path_to_wordpress/wp-content/plugins.old -``` - -In general, HTTP errors are logged in the web server log files, so a good starting point is to check the web server error log on your server. 
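-For example (a sketch only; the paths vary by distribution and web server, e.g. RHEL-based systems keep Apache logs under /var/log/httpd/), you can watch the error log live while reproducing the problem:
-```
-# Apache on Debian/Ubuntu
-tail -f /var/log/apache2/error.log
-
-# nginx
-tail -f /var/log/nginx/error.log
-```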
-
-You don't have to fix HTTP errors in WordPress yourself if you use one of our [WordPress VPS Hosting][2] services, in which case you can simply ask our expert Linux admins to **fix HTTP errors in WordPress** for you. They are available 24×7 and will take care of your request immediately.
-
--------------------------------------------------------------------------------
-
-via: https://www.rosehosting.com/blog/http-error-wordpress/
-
-作者:[rosehosting][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.rosehosting.com
-[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/01/http-error-wordpress.jpg
-[2]:https://www.rosehosting.com/wordpress-hosting.html
diff --git a/sources/tech/20180102 How To Find (Top-10) Largest Files In Linux.md b/sources/tech/20180102 How To Find (Top-10) Largest Files In Linux.md
new file mode 100644
index 0000000000..7e5d8c82a5
--- /dev/null
+++ b/sources/tech/20180102 How To Find (Top-10) Largest Files In Linux.md
@@ -0,0 +1,189 @@
+How To Find (Top-10) Largest Files In Linux
+======
+When you are running out of disk space on a system, you may reach for the `df`, `du`, or `ncdu` commands, but none of them directly lists the largest files system-wide; they only report on the directory you point them at.
+
+Hunting down the largest files in the system with those commands alone takes a huge amount of time, because you would have to walk into each and every directory yourself.
+
+That is tedious, and it is not the right way to do it.
+
+So, what is the suggested way to get the top 10 largest files in Linux?
+
+I spent a lot of time searching Google but didn't find what I needed; every article I came across listed only the top 10 files in the current directory. So, I want to make this article useful for anyone looking for the top 10 largest files in the whole system.
+
+In this tutorial, we are going to show you how to find the top 10 largest files in a Linux system using the four methods below.
+
+### Method-1 :
+
+There is no single command in Linux that does this by itself, so we combine several commands into a pipeline to get it done.
+```
+# find / -type f -print0 | xargs -0 du -h | sort -rh | head -n 10
+
+1.4G /swapfile
+1.1G /home/magi/ubuntu-17.04-desktop-amd64.iso
+564M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA
+378M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8
+377M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU
+100M /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0
+93M /usr/lib/firefox/libxul.so
+84M /var/lib/snapd/snaps/core_3604.snap
+84M /var/lib/snapd/snaps/core_3440.snap
+84M /var/lib/snapd/snaps/core_3247.snap

+```
+
+**Details :**
+**`find`** : search for files in a directory hierarchy.
+**`/`** : search the whole system (starting from the / directory)
+**`-type`** : match files of a given type
+
+**`f`** : regular files
+**`-print0`** : print the full file name on the standard output, followed by a null character
+**`|`** : pipe that sends the output of one program to another program for further processing.
+
+**`xargs`** : build and execute command lines from standard input.
+**`-0`** : input items are terminated by a null character instead of by whitespace
+**`du -h`** : calculate disk usage, printed in human-readable format
+
+**`sort`** : sort lines of text
+**`-r`** : reverse the result of comparisons
+**`-h`** : compare human-readable numbers, so 1G sorts above 500M
+
+**`head`** : output the first part of the input
+**`-n 10`** : print the first 10 lines.
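+
+A caveat worth noting before running this as root: starting at `/` sends `find` into pseudo-filesystems such as `/proc` and into any other mounted drives. If you only want to scan the filesystem that is actually running low, a hedged variant (a sketch of my own, assuming GNU findutils) adds `-xdev` so `find` never crosses into other mounted filesystems:
+```
+# -xdev keeps find on the starting filesystem; errors from
+# unreadable directories are discarded on stderr.
+find / -xdev -type f -print0 2>/dev/null | xargs -0 du -h | sort -rh | head -n 10
+```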
+### Method-2 :
+
+This is another way to find or check the top 10 largest files on a Linux system. Here, too, we string a few commands together to achieve this.
+```
+# find / -type f -exec du -Sh {} + | sort -rh | head -n 10
+
+1.4G /swapfile
+1.1G /home/magi/ubuntu-17.04-desktop-amd64.iso
+564M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA
+378M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8
+377M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU
+100M /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0
+93M /usr/lib/firefox/libxul.so
+84M /var/lib/snapd/snaps/core_3604.snap
+84M /var/lib/snapd/snaps/core_3440.snap
+84M /var/lib/snapd/snaps/core_3247.snap

+```
+
+**Details :**
+**`find`** : search for files in a directory hierarchy.
+**`/`** : search the whole system (starting from the / directory)
+**`-type`** : match files of a given type
+
+**`f`** : regular files
+**`-exec`** : run the specified command on the selected files; the `+` variant appends as many file names as possible to each invocation
+**`du`** : estimate file space usage.
+
+**`-S`** : do not include the size of subdirectories
+**`-h`** : print sizes in human-readable format
+**`{}`** : placeholder that is replaced by the names of the selected files.
+
+**`|`** : pipe that sends the output of one program to another program for further processing.
+**`sort`** : sort lines of text
+**`-r`** : reverse the result of comparisons
+
+**`-h`** : compare human-readable numbers
+**`head`** : output the first part of the input
+**`-n 10`** : print the first 10 lines.
+
+### Method-3 :
+
+This is another method to find or search for the top 10 largest files on a Linux system.
+```
+# find / -type f -print0 | xargs -0 du | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {}
+
+84M /var/lib/snapd/snaps/core_3247.snap
+84M /var/lib/snapd/snaps/core_3440.snap
+84M /var/lib/snapd/snaps/core_3604.snap
+93M /usr/lib/firefox/libxul.so
+100M /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0
+377M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU
+378M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8
+564M /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA
+1.1G /home/magi/ubuntu-17.04-desktop-amd64.iso
+1.4G /swapfile

+```
+
+**Details :**
+**`find`** : search for files in a directory hierarchy.
+**`/`** : search the whole system (starting from the / directory)
+**`-type`** : match files of a given type
+
+**`f`** : regular files
+**`-print0`** : print the full file name on the standard output, followed by a null character
+**`|`** : pipe that sends the output of one program to another program for further processing.
+
+**`xargs`** : build and execute command lines from standard input.
+**`-0`** : input items are terminated by a null character instead of by whitespace
+**`du`** : estimate file space usage (in plain blocks here, since no -h is given).
+
+**`sort`** : sort lines of text
+**`-n`** : compare according to string numerical value
+**`tail -10`** : output the last part of the input (the last 10 lines)
+
+**`cut`** : remove sections from each line
+**`-f2`** : select only the second field (the file path).
+**`-I{}`** : replace occurrences of `{}` in the second du command with names read from standard input.
+
+**`-s`** : display only a total for each argument
+**`-h`** : print sizes in human-readable format
+**`{}`** : placeholder that is replaced by each file name.
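+
+If you prefer to skip `find` entirely, a shorter alternative worth knowing (not one of the four methods above, just a sketch using plain coreutils) is to let `du` itself walk the tree, since `-a` makes it report files as well as directories:
+```
+# du -a lists files too; stderr is discarded because many
+# paths under / are unreadable for ordinary users.
+du -ah / 2>/dev/null | sort -rh | head -n 10
+```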
+### Method-4 :
+
+This is yet another method to find the top 10 largest files on a Linux system.
+```
+# find / -type f -ls | sort -k 7 -r -n | head -10 | column -t | awk '{print $7,$11}'
+
+1494845440 /swapfile
+1085984380 /home/magi/ubuntu-17.04-desktop-amd64.iso
+591003648 /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqTFU0XzkzUlJUZzA
+395770383 /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqeldzUmhPeC03Zm8
+394891761 /home/magi/.gdfuse/magi/cache/0B5nso_FPaZFqRGd4V0VrOXM4YVU
+103999072 /usr/lib/x86_64-linux-gnu/libOxideQtCore.so.0
+97356256 /usr/lib/firefox/libxul.so
+87896064 /var/lib/snapd/snaps/core_3604.snap
+87793664 /var/lib/snapd/snaps/core_3440.snap
+87089152 /var/lib/snapd/snaps/core_3247.snap

+```
+
+**Details :**
+**`find`** : search for files in a directory hierarchy.
+**`/`** : search the whole system (starting from the / directory)
+**`-type`** : match files of a given type
+
+**`f`** : regular files
+**`-ls`** : list the current file in ls -dils format on standard output.
+**`|`** : pipe that sends the output of one program to another program for further processing.
+
+**`sort`** : sort lines of text
+**`-k`** : start a sort key at the given column (7, the file size in bytes)
+**`-r`** : reverse the result of comparisons
+
+**`-n`** : compare according to string numerical value
+**`head`** : output the first part of the input
+**`-10`** : print the first 10 lines.
+
+**`column`** : format the input into multiple columns.
+**`-t`** : determine the number of columns the input contains and create a table.
+**`awk`** : pattern scanning and processing language
+**`'{print $7,$11}'`** : print only columns 7 (the size) and 11 (the file name).
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-find-search-check-print-top-10-largest-biggest-files-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/magesh/
diff --git a/sources/tech/20180103 How to preconfigure LXD containers with cloud-init.md b/sources/tech/20180103 How to preconfigure LXD containers with cloud-init.md
deleted file mode 100644
index ed6eacd2fb..0000000000
--- a/sources/tech/20180103 How to preconfigure LXD containers with cloud-init.md
+++ /dev/null
@@ -1,197 +0,0 @@
-How to preconfigure LXD containers with cloud-init
-======
-You are creating containers and you want them to be somewhat preconfigured. For example, you want them to run **apt update** automatically as soon as they are launched, get some packages pre-installed, or run a few commands. Here is how to perform this early initialization with [**cloud-init**][1], through [LXD, for container images that support **cloud-init**][2].
- -In the following, we are creating a separate LXD profile with some cloud-init instructions, then launch a container using that profile. - -### How to create a new LXD profile - -Let's see the existing profiles. -``` -$ **lxc profile list** -+---------|---------+ -| NAME | USED BY | -+---------|---------+ -| default | 11 | -+---------|---------+ -``` - -There is one profile, **default**. We copy it to a new name, so that we can start adding our instructions on that profile. -``` -$ **lxc profile copy default devprofile** - -$ **lxc profile list** -+------------|---------+ -| NAME | USED BY | -+------------|---------+ -| default | 11 | -+------------|---------+ -| devprofile | 0 | -+------------|---------+ -``` - -We have a new profile to work on, **devprofile**. Here is how it looks, -``` -$ **lxc profile show devprofile** -config: - environment.TZ: "" -description: Default LXD profile -devices: - eth0: - nictype: bridged - parent: lxdbr0 - type: nic - root: - path: / - pool: default - type: disk -name: devprofile -used_by: [] -``` - -Note the main sections, **config:** , **description:** , **devices:** , **name:** , and **used_by:**. There is careful indentation in the profile, and when you make edits, you need to take care of the indentation. - -### How to add cloud-init to an LXD profile - -In the **config:** section of a LXD profile, we can insert [cloud-init][1] instructions. Those[ cloud-init][1] instructions will be passed to the container and will be used when it is first launched. - -Here are those that we are going to use in the example, -``` - package_upgrade: true - packages: - - build-essential - locale: es_ES.UTF-8 - timezone: Europe/Madrid - runcmd: - - [touch, /tmp/simos_was_here] -``` - -**package_upgrade: true** means that we want **cloud-init** to run **sudo apt upgrade** when the container is first launched. Under **packages:** we list the packages that we want to get automatically installed. Then we set the **locale** and **timezone**. In the Ubuntu container images, the default locale for **root** is **C.UTF-8** , for the **ubuntu** account it 's **en_US.UTF-8**. The timezone is **Etc/UTC**. Finally, we show [how to run a Unix command with **runcmd**][3]. - -The part that needs a bit of attention is how to insert the **cloud-init** instructions into the LXD profile. My preferred way is -``` -$ **lxc profile edit devprofile** -``` - -This opens up a text editor and allows to paste the instructions. Here is [how the result should look like][4], -``` -$ **lxc profile show devprofile** -config: - environment.TZ: "" - - - user.user-data: | - #cloud-config - package_upgrade: true - packages: - - build-essential - locale: es_ES.UTF-8 - timezone: Europe/Madrid - runcmd: - - [touch, /tmp/simos_was_here] - - -description: Default LXD profile -devices: - eth0: - nictype: bridged - parent: lxdbr0 - type: nic - root: - path: / - pool: default - type: disk -name: devprofile -used_by: [] -``` - -WordPress can get a bit messed with indentation when you copy/paste, therefore, you may use [this pastebin][4] instead. - -### How to launch a container using a profile - -Let's launch a new container using the profile **devprofile**. -``` -$ **lxc launch --profile devprofile ubuntu:x mydev** -``` - -Let's get into the container and figure out whether our instructions took effect. -``` -$ **lxc exec mydev bash** -root@mydev:~# **ps ax** - PID TTY STAT TIME COMMAND - 1 ? Ss 0:00 /sbin/init - ... - 427 ? Ss 0:00 /usr/bin/python3 /usr/bin/cloud-init modules --mode=f - 430 ? 
S 0:00 /bin/sh -c tee -a /var/log/cloud-init-output.log - 431 ? S 0:00 tee -a /var/log/cloud-init-output.log - 432 ? S 0:00 /usr/bin/apt-get --option=Dpkg::Options::=--force-con - 437 ? S 0:00 /usr/lib/apt/methods/http - 438 ? S 0:00 /usr/lib/apt/methods/http - 440 ? S 0:00 /usr/lib/apt/methods/gpgv - 570 ? Ss 0:00 bash - 624 ? S 0:00 /usr/lib/apt/methods/store - 625 ? R+ 0:00 ps ax -root@mydev:~# -``` - -We connected quite quickly, and **ps ax** shows that the package update is indeed taking place! We can get the full output at /var/log/cloud-init-output.log and in there, -``` -Generating locales (this might take a while)... - es_ES.UTF-8... done -Generation complete. -``` - -The locale got set. The **root** user keeps having the **C.UTF-8** default locale. It is only the non-root account **ubuntu** that gets the new locale. -``` -Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease -Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB] -Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB] -``` - -Here is **apt update** that is required before installing packages. -``` -The following packages will be upgraded: - libdrm2 libseccomp2 squashfs-tools unattended-upgrades -4 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. -Need to get 211 kB of archives. -``` - -Here is runs **package_upgrade: true** and installs any available packages. -``` -The following NEW packages will be installed: - binutils build-essential cpp cpp-5 dpkg-dev fakeroot g++ g++-5 gcc gcc-5 - libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl -``` - -This is from our instruction to install the **build-essential** meta-package. - -What about the **runcmd** instruction? -``` -root@mydev:~# **ls -l /tmp/** -total 1 --rw-r--r-- 1 root root 0 Jan 3 15:23 simos_was_here -root@mydev:~# -``` - -It worked as well! - -### Conclusion - -When we launch LXD containers, we often need some configuration to be enabled by default and avoid repeated actions. The way to solve this, is to create LXD profiles. Each profile captures those configurations. Finally, when we launch the new container, we specify which LXD profile to use. - - --------------------------------------------------------------------------------- - -via: https://blog.simos.info/how-to-preconfigure-lxd-containers-with-cloud-init/ - -作者:[Simos Xenitellis][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://blog.simos.info/author/simos/ -[1]:http://cloudinit.readthedocs.io/en/latest/index.html -[2]:https://github.com/lxc/lxd/blob/master/doc/cloud-init.md -[3]:http://cloudinit.readthedocs.io/en/latest/topics/modules.html#runcmd -[4]:https://paste.ubuntu.com/26313399/ diff --git a/sources/tech/20180104 4 Tools for Network Snooping on Linux.md b/sources/tech/20180104 4 Tools for Network Snooping on Linux.md new file mode 100644 index 0000000000..0ba60006ee --- /dev/null +++ b/sources/tech/20180104 4 Tools for Network Snooping on Linux.md @@ -0,0 +1,197 @@ +4 Tools for Network Snooping on Linux +====== +Computer networking data has to be exposed, because packets can't travel blindfolded, so join us as we use `whois`, `dig`, `nmcli`, and `nmap` to snoop networks. + +Do be polite and don't run `nmap` on any network but your own, because probing other people's networks can be interpreted as a hostile act. 
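+
+If you are not sure which address range is actually yours, check your own interfaces and default route first (a quick sketch using the iproute2 tools; the addresses shown are illustrative):
+```
+$ ip -brief addr show
+lo        UNKNOWN  127.0.0.1/8 ::1/128
+wlan0     UP       192.168.1.2/24
+$ ip route | grep default
+default via 192.168.1.1 dev wlan0
+```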
+ +### Thin and Thick whois + +You may have noticed that our beloved old `whois` command doesn't seem to give the level of detail that it used to. Check out this example for Linux.com: +``` +$ whois linux.com +Domain Name: LINUX.COM +Registry Domain ID: 4245540_DOMAIN_COM-VRSN +Registrar WHOIS Server: whois.namecheap.com +Registrar URL: http://www.namecheap.com +Updated Date: 2018-01-10T12:26:50Z +Creation Date: 1994-06-02T04:00:00Z +Registry Expiry Date: 2018-06-01T04:00:00Z +Registrar: NameCheap Inc. +Registrar IANA ID: 1068 +Registrar Abuse Contact Email: abuse@namecheap.com +Registrar Abuse Contact Phone: +1.6613102107 +Domain Status: ok https://icann.org/epp#ok +Name Server: NS5.DNSMADEEASY.COM +Name Server: NS6.DNSMADEEASY.COM +Name Server: NS7.DNSMADEEASY.COM +DNSSEC: unsigned +[...] + +``` + +There is quite a bit more, mainly annoying legalese. But where is the contact information? It is sitting on whois.namecheap.com (see the third line of output above): +``` +$ whois -h whois.namecheap.com linux.com + +``` + +I won't print the output here, as it is very long, containing the Registrant, Admin, and Tech contact information. So what's the deal, Lucille? Some registries, such as .com and .net are "thin" registries, storing a limited subset of domain data. To get complete information use the `-h`, or `--host` option, to get the complete dump from the domain's `Registrar WHOIS Server`. + +Most of the other top-level domains are thick registries, such as .info. Try `whois blockchain.info` to see an example. + +Want to get rid of the obnoxious legalese? Use the `-H` option. + +### Digging DNS + +Use the `dig` command to compare the results from different name servers to check for stale entries. DNS records are cached all over the place, and different servers have different refresh intervals. This is the simplest usage: +``` +$ dig linux.com +<<>> DiG 9.10.3-P4-Ubuntu <<>> linux.com +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<<- opcode: QUERY, status: NOERROR, id: 13694 +;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1 + +;; OPT PSEUDOSECTION: +; EDNS: version: 0, flags:; udp: 1440 +;; QUESTION SECTION: +;linux.com. IN A + +;; ANSWER SECTION: +linux.com. 10800 IN A 151.101.129.5 +linux.com. 10800 IN A 151.101.65.5 +linux.com. 10800 IN A 151.101.1.5 +linux.com. 10800 IN A 151.101.193.5 + +;; Query time: 92 msec +;; SERVER: 127.0.1.1#53(127.0.1.1) +;; WHEN: Tue Jan 16 15:17:04 PST 2018 +;; MSG SIZE rcvd: 102 + +``` + +Take notice of the SERVER: 127.0.1.1#53(127.0.1.1) line near the end of the output. This is your default caching resolver. When the address is localhost, that means there is a DNS server installed on your machine. In my case that is Dnsmasq, which is being used by Network Manager: +``` +$ ps ax|grep dnsmasq +2842 ? S 0:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground +--no-hosts --bind-interfaces --pid-file=/var/run/NetworkManager/dnsmasq.pid +--listen-address=127.0.1.1 + +``` + +The `dig` default is to return A records, which define the domain name. IPv6 has AAAA records: +``` +$ $ dig linux.com AAAA +[...] +;; ANSWER SECTION: +linux.com. 60 IN AAAA 64:ff9b::9765:105 +linux.com. 60 IN AAAA 64:ff9b::9765:4105 +linux.com. 60 IN AAAA 64:ff9b::9765:8105 +linux.com. 60 IN AAAA 64:ff9b::9765:c105 +[...] + +``` + +Checkitout, Linux.com has IPv6 addresses. Very good! If your Internet service provider supports IPv6 then you can connect over IPv6. (Sadly, my overpriced mobile broadband does not.) 
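+
+A quick way to check whether your own connection can actually reach those AAAA records (a sketch; older distributions ship a separate `ping6` binary instead of supporting `ping -6`):
+```
+# Force IPv6 for both the ping and the HTTP request.
+$ ping -6 -c 3 linux.com
+$ curl -6 -sI https://linux.com | head -n 1
+```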
+ +Suppose you make some DNS changes to your domain, or you're seeing `dig` results that don't look right. Try querying with a public DNS service, like OpenNIC: +``` +$ dig @69.195.152.204 linux.com +[...] +;; Query time: 231 msec +;; SERVER: 69.195.152.204#53(69.195.152.204) + +``` + +`dig` confirms that you're getting your lookup from 69.195.152.204. You can query all kinds of servers and compare results. + +### Upstream Name Servers + +I want to know what my upstream name servers are. To find this, I first look in `/etc/resolv/conf`: +``` +$ cat /etc/resolv.conf +# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) +# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN +nameserver 127.0.1.1 + +``` + +Thanks, but I already knew that. Your Linux distribution may be configured differently, and you'll see your upstream servers. Let's try `nmcli`, the Network Manager command-line tool: +``` +$ nmcli dev show | grep DNS +IP4.DNS[1]: 192.168.1.1 + +``` + +Now we're getting somewhere, as that is the address of my mobile hotspot, and I should have thought of that myself. I can log in to its weird little Web admin panel to see its upstream servers. A lot of consumer Internet gateways don't let you view or change these settings, so try an external service such as [What's my DNS server?][1] + +### List IPv4 Addresses on your Network + +Which IPv4 addresses are up and in use on your network? +``` +$ nmap -sn 192.168.1.0/24 +Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 14:03 PST +Nmap scan report for Mobile.Hotspot (192.168.1.1) +Host is up (0.011s latency). +Nmap scan report for studio (192.168.1.2) +Host is up (0.000071s latency). +Nmap scan report for nellybly (192.168.1.3) +Host is up (0.015s latency) +Nmap done: 256 IP addresses (2 hosts up) scanned in 2.23 seconds + +``` + +Everyone wants to scan their network for open ports. This example looks for services and their versions: +``` +$ nmap -sV 192.168.1.1/24 + +Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 16:46 PST +Nmap scan report for Mobile.Hotspot (192.168.1.1) +Host is up (0.0071s latency). +Not shown: 997 closed ports +PORT STATE SERVICE VERSION +22/tcp filtered ssh +53/tcp open domain dnsmasq 2.55 +80/tcp open http GoAhead WebServer 2.5.0 + +Nmap scan report for studio (192.168.1.102) +Host is up (0.000087s latency). +Not shown: 998 closed ports +PORT STATE SERVICE VERSION +22/tcp open ssh OpenSSH 7.2p2 Ubuntu 4ubuntu2.2 (Ubuntu Linux; protocol 2.0) +631/tcp open ipp CUPS 2.1 +Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel + +Service detection performed. Please report any incorrect results at https://nmap.org/submit/ . +Nmap done: 256 IP addresses (2 hosts up) scanned in 11.65 seconds + +``` + +These are interesting results. Let's try the same run from a different Internet account, to see if any of these services are exposed to big bad Internet. You have a second network if you have a smartphone. There are probably apps you can download, or use your phone as a hotspot to your faithful Linux computer. Fetch the WAN IP address from the hotspot control panel and try again: +``` +$ nmap -sV 12.34.56.78 + +Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 17:05 PST +Nmap scan report for 12.34.56.78 +Host is up (0.0061s latency). +All 1000 scanned ports on 12.34.56.78 are closed + +``` + +That's what I like to see. Consult the fine man pages for these commands to learn more fun snooping techniques. 
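+
+One last sketch of my own for combining these tools: feed `dig`'s terse output straight into `whois` to see who operates the address a name resolves to.
+```
+# dig +short prints only the A records; whois on an IP
+# address returns the owner of the netblock.
+$ whois $(dig +short linux.com | head -n 1)
+```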
+ +Learn more about Linux through the free ["Introduction to Linux" ][2]course from The Linux Foundation and edX. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/1/4-tools-network-snooping-linux + +作者:[Carla Schroder][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder +[1]:http://www.whatsmydnsserver.com/ +[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180104 How does gdb call functions.md b/sources/tech/20180104 How does gdb call functions.md new file mode 100644 index 0000000000..c88fae999e --- /dev/null +++ b/sources/tech/20180104 How does gdb call functions.md @@ -0,0 +1,254 @@ +translating by ucasFL + +How does gdb call functions? +============================================================ + +(previous gdb posts: [how does gdb work? (2016)][4] and [three things you can do with gdb (2014)][5]) + +I discovered this week that you can call C functions from gdb! I thought this was cool because I’d previously thought of gdb as mostly a read-only debugging tool. + +I was really surprised by that (how does that WORK??). As I often do, I asked [on Twitter][6] how that even works, and I got a lot of really useful answers! My favorite answer was [Evan Klitzke’s example C code][7] showing a way to do it. Code that  _works_  is very exciting! + +I believe (through some stracing & experiments) that that example C code is different from how gdb actually calls functions, so I’ll talk about what I’ve figured out about what gdb does in this post and how I’ve figured it out. + +There is a lot I still don’t know about how gdb calls functions, and very likely some things in here are wrong. + +### What does it mean to call a C function from gdb? + +Before I get into how this works, let’s talk quickly about why I found it surprising / nonobvious. + +So, you have a running C program (the “target program”). You want to run a function from it. To do that, you need to basically: + +* pause the program (because it is already running code!) + +* find the address of the function you want to call (using the symbol table) + +* convince the program (the “target program”) to jump to that address + +* when the function returns, restore the instruction pointer and registers to what they were before + +Using the symbol table to figure out the address of the function you want to call is pretty straightforward – here’s some sketchy (but working!) Rust code that I’ve been using on Linux to do that. This code uses the [elf crate][8]. If I wanted to find the address of the `foo` function in PID 2345, I’d run `elf_symbol_value("/proc/2345/exe", "foo")`. + +``` +fn elf_symbol_value(file_name: &str, symbol_name: &str) -> Result> { + // open the ELF file + let file = elf::File::open_path(file_name).ok().ok_or("parse error")?; + // loop over all the sections & symbols until you find the right one! + let sections = &file.sections; + for s in sections { + for sym in file.get_symbols(&s).ok().ok_or("parse error")? { + if sym.name == symbol_name { + return Ok(sym.value); + } + } + } + None.ok_or("No symbol found")? +} + +``` + +This won’t totally work on its own, you also need to look at the memory maps of the file and add the symbol offset to the start of the place that file is mapped. 
But finding the memory maps isn’t so hard, they’re in `/proc/PID/maps`. + +Anyway, this is all to say that finding the address of the function to call seemed straightforward to me but that the rest of it (change the instruction pointer? restore the registers? what else?) didn’t seem so obvious! + +### You can’t just jump + +I kind of said this already but – you can’t just find the address of the function you want to run and then jump to that address. I tried that in gdb (`jump foo`) and the program segfaulted. Makes sense! + +### How you can call C functions from gdb + +First, let’s see that this is possible. I wrote a tiny C program that sleeps for 1000 seconds and called it `test.c`: + +``` +#include + +int foo() { + return 3; +} +int main() { + sleep(1000); +} + +``` + +Next, compile and run it: + +``` +$ gcc -o test test.c +$ ./test + +``` + +Finally, let’s attach to the `test` program with gdb: + +``` +$ sudo gdb -p $(pgrep -f test) +(gdb) p foo() +$1 = 3 +(gdb) quit + +``` + +So I ran `p foo()` and it ran the function! That’s fun. + +### Why is this useful? + +a few possible uses for this: + +* it lets you treat gdb a little bit like a C REPL, which is fun and I imagine could be useful for development + +* utility functions to display / navigate complex data structures quickly while debugging in gdb (thanks [@invalidop][1]) + +* [set an arbitrary process’s namespace while it’s running][2] (featuring a not-so-surprising appearance from my colleague [nelhage][3]!) + +* probably more that I don’t know about + +### How it works + +I got a variety of useful answers on Twitter when I asked how calling functions from gdb works! A lot of them were like “well you get the address of the function from the symbol table” but that is not the whole story!! + +One person pointed me to this nice 2 part series on how gdb works that they’d written: [Debugging with the natives, part 1][9] and [Debugging with the natives, part 2][10]. Part 1 explains approximately how calling functions works (or could work – figuring out what gdb **actually** does isn’t trivial, but I’ll try my best!). + +The steps outlined there are: + +1. Stop the process + +2. Create a new stack frame (far away from the actual stack) + +3. Save all the registers + +4. Set the registers to the arguments you want to call your function with + +5. Set the stack pointer to the new stack frame + +6. Put a trap instruction somewhere in memory + +7. Set the return address to that trap instruction + +8. Set the instruction pointer register to the address of the function you want to call + +9. Start the process again! + +I’m not going to go through how gdb does all of these (I don’t know!) but here are a few things I’ve learned about the various pieces this evening. + +**Create a stack frame** + +If you’re going to run a C function, most likely it needs a stack to store variables on! You definitely don’t want it to clobber your current stack. Concretely – before gdb calls your function (by setting the instruction pointer to it and letting it go), it needs to set the **stack pointer** to… something. + +There was some speculation on Twitter about how this works: + +> i think it constructs a new stack frame for the call right on top of the stack where you’re sitting! + +and: + +> Are you certain it does that? It could allocate a pseudo stack, then temporarily change sp value to that location. You could try, put a breakpoint there and look at the sp register address, see if it’s contiguous to your current program register? 
+ +I did an experiment where (inside gdb) I ran:` + +``` +(gdb) p $rsp +$7 = (void *) 0x7ffea3d0bca8 +(gdb) break foo +Breakpoint 1 at 0x40052a +(gdb) p foo() +Breakpoint 1, 0x000000000040052a in foo () +(gdb) p $rsp +$8 = (void *) 0x7ffea3d0bc00 + +``` + +This seems in line with the “gdb constructs a new stack frame for the call right on top of the stack where you’re sitting” theory, since the stack pointer (`$rsp`) goes from being `...bca8` to `..bc00` – stack pointers grow downward, so a `bc00`stack pointer is **after** a `bca8` pointer. Interesting! + +So it seems like gdb just creates the new stack frames right where you are. That’s a bit surprising to me! + +**change the instruction pointer** + +Let’s see whether gdb changes the instruction pointer! + +``` +(gdb) p $rip +$1 = (void (*)()) 0x7fae7d29a2f0 <__nanosleep_nocancel+7> +(gdb) b foo +Breakpoint 1 at 0x40052a +(gdb) p foo() +Breakpoint 1, 0x000000000040052a in foo () +(gdb) p $rip +$3 = (void (*)()) 0x40052a + +``` + +It does! The instruction pointer changes from `0x7fae7d29a2f0` to `0x40052a` (the address of the `foo` function). + +I stared at the strace output and I still don’t understand **how** it changes, but that’s okay. + +**aside: how breakpoints are set!!** + +Above I wrote `break foo`. I straced gdb while running all of this and understood almost nothing but I found ONE THING that makes sense to me!! + +Here are some of the system calls that gdb uses to set a breakpoint. It’s really simple! It replaces one instruction with `cc` (which [https://defuse.ca/online-x86-assembler.htm][11] tells me means `int3` which means `send SIGTRAP`), and then once the program is interrupted, it puts the instruction back the way it was. + +I was putting a breakpoint on a function `foo` with the address `0x400528`. + +This `PTRACE_POKEDATA` is how gdb changes the code of running programs. + +``` +// change the 0x400528 instructions +25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003b8e589]) = 0 +25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003cce589) = 0 +// start the program running +25622 ptrace(PTRACE_CONT, 25618, 0x1, SIG_0) = 0 +// get a signal when it hits the breakpoint +25622 ptrace(PTRACE_GETSIGINFO, 25618, NULL, {si_signo=SIGTRAP, si_code=SI_KERNEL, si_value={int=-1447215360, ptr=0x7ffda9bd3f00}}) = 0 +// change the 0x400528 instructions back to what they were before +25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003cce589]) = 0 +25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003b8e589) = 0 + +``` + +**put a trap instruction somewhere** + +When gdb runs a function, it **also** puts trap instructions in a bunch of places! Here’s one of them (per strace). It’s basically replacing one instruction with `cc` (`int3`). + +``` +5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0 +5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0 +5908 ptrace(PTRACE_POKEDATA, 5810, 0x7f6fa7c0b260, 0x48f389fd894853cc) = 0 + +``` + +What’s `0x7f6fa7c0b260`? Well, I looked in the process’s memory maps, and it turns it’s somewhere in `/lib/x86_64-linux-gnu/libc-2.23.so`. That’s weird! Why is gdb putting trap instructions in libc? + +Well, let’s see what function that’s in. It turns out it’s `__libc_siglongjmp`. The other functions gdb is putting traps in are `__longjmp`, `____longjmp_chk`, `dl_main`, and `_dl_close_worker`. + +Why? I don’t know! Maybe for some reason when our function `foo()` returns, it’s calling `longjmp`, and that is how gdb gets control back? 
I’m not sure. + +### how gdb calls functions is complicated! + +I’m going to stop there (it’s 1am!), but now I know a little more! + +It seems like the answer to “how does gdb call a function?” is definitely not that simple. I found it interesting to try to figure a little bit of it out and hopefully you have too! + +I still have a lot of unanswered questions about how exactly gdb does all of these things, but that’s okay. I don’t really need to know the details of how this works and I’m happy to have a slightly improved understanding. + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/ + +作者:[Julia Evans ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/ +[1]:https://twitter.com/invalidop/status/949161146526781440 +[2]:https://github.com/baloo/setns/blob/master/setns.c +[3]:https://github.com/nelhage +[4]:https://jvns.ca/blog/2016/08/10/how-does-gdb-work/ +[5]:https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/ +[6]:https://twitter.com/b0rk/status/948060808243765248 +[7]:https://github.com/eklitzke/ptrace-call-userspace/blob/master/call_fprintf.c +[8]:https://cole14.github.io/rust-elf +[9]:https://www.cl.cam.ac.uk/~srk31/blog/2016/02/25/#native-debugging-part-1 +[10]:https://www.cl.cam.ac.uk/~srk31/blog/2017/01/30/#native-debugging-part-2 +[11]:https://defuse.ca/online-x86-assembler.htm diff --git a/sources/tech/20180104 How to Change Your Linux Console Fonts.md b/sources/tech/20180104 How to Change Your Linux Console Fonts.md deleted file mode 100644 index 302f8459b4..0000000000 --- a/sources/tech/20180104 How to Change Your Linux Console Fonts.md +++ /dev/null @@ -1,88 +0,0 @@ -translating by lujun9972 -How to Change Your Linux Console Fonts -====== -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/font-size_0.png?itok=d97vmyYa) - -I try to be a peaceful soul, but some things make that difficult, like tiny console fonts. Mark my words, friends, someday your eyes will be decrepit and you won't be able to read those tiny fonts you coded into everything, and then you'll be sorry, and I will laugh. - -Fortunately, Linux fans, you can change your console fonts. As always, the ever-changing Linux landscape makes this less than straightforward, and font management on Linux is non-existent, so we'll muddle along as best we can. In this article, I'll show what I've found to be the easiest approach. - -### What is the Linux Console? - -Let us first clarify what we're talking about. When I say Linux console, I mean TTY1-6, the virtual terminals that you access from your graphical desktop with Ctrl+Alt+F1 through F6. To get back to your graphical environment, press Alt+F7. (This is no longer universal, however, and your Linux distribution may have it mapped differently. You may have more or fewer TTYs, and your graphical session may not be at F7. For example, Fedora puts the default graphical session at F2, and an extra one at F1.) I think it is amazingly cool that we can have both X and console sessions running at the same time. - -The Linux console is part of the kernel, and does not run in an X session. This is the same console you use on headless servers that have no graphical environments. 
I call the terminals in a graphical session X terminals, and terminal emulators is my catch-all name for both console and X terminals. - -But that's not all. The Linux console has come a long way from the early ANSI days, and thanks to the Linux framebuffer, it has Unicode and limited graphics support. There are also a number of console multimedia applications that we will talk about in a future article. - -### Console Screenshots - -The easy way to get console screenshots is from inside a virtual machine. Then you can use your favorite graphical screen capture program from the host system. You may also make screen captures from your console with [fbcat][1] or [fbgrab][2]. `fbcat` creates a portable pixmap format (PPM) image; this is a highly portable uncompressed image format that should be readable on any operating system, and of course you can convert it to whatever format you want. `fbgrab` is a wrapper script to `fbcat` that creates a PNG file. There are multiple versions of `fbgrab` written by different people floating around. Both have limited options and make only a full-screen capture. - -`fbcat` needs root permissions, and must redirect to a file. Do not specify a file extension, but only the filename: -``` -$ sudo fbcat > Pictures/myfile - -``` - -After cropping in GIMP, I get Figure 1. - -It would be nice to have a little padding on the left margin, so if any of you excellent readers know how to do this, please tell us in the comments. - -`fbgrab` has a few more options that you can read about in `man fbgrab`, such as capturing a different console, and time delay. This example makes a screen grab just like `fbcat`, except you don't have to explicitly redirect: -``` -$ sudo fbgrab Pictures/myOtherfile - -``` - -### Finding Fonts - -As far as I know, there is no way to list your installed kernel fonts other than looking in the directories they are stored in: `/usr/share/consolefonts/` (Debian/etc.), `/lib/kbd/consolefonts/` (Fedora), `/usr/share/kbd/consolefonts` (openSUSE)...you get the idea. - -### Changing Fonts - -Readable fonts are not a new concept. Embrace the old! Readability matters. And so does configurability, which sometimes gets lost in the rush to the new-shiny. - -On Debian/Ubuntu/etc. systems you can run `sudo dpkg-reconfigure console-setup` to set your console font, then run the `setupcon` command in your console to activate the changes. `setupcon` is part of the `console-setup` package. If your Linux distribution doesn't include it, there might be a package for you at [openSUSE][3]. - -You can also edit `/etc/default/console-setup` directly. This example sets the Terminus Bold font at 32 points, which is my favorite, and restricts the width to 80 columns. -``` -ACTIVE_CONSOLES="/dev/tty[1-6]" -CHARMAP="UTF-8" -CODESET="guess" -FONTFACE="TerminusBold" -FONTSIZE="16x32" -SCREEN_WIDTH="80" - -``` - -The FONTFACE and FONTSIZE values come from the font's filename, `TerminusBold32x16.psf.gz`. Yes, you have to know to reverse the order for FONTSIZE. Computers are so much fun. Run `setupcon` to apply the new configuration. You can see the whole character set for your active font with `showconsolefont`. Refer to `man console-setup` for complete options. - -### Systemd - -Systemd is different from `console-setup`, and you don't need to install anything, except maybe some extra font packages. All you do is edit `/etc/vconsole.conf` and then reboot. 
On my Fedora and openSUSE systems I had to install some extra Terminus packages to get the larger sizes, as the installed fonts only went up to 16 points and I wanted 32. This is the contents of `/etc/vconsole.conf` on both systems:
-```
-KEYMAP="us"
-FONT="ter-v32b"
-
-```
-
-Come back next week to learn some more cool console hacks, and some multimedia console applications.
-
-Learn more about Linux through the free ["Introduction to Linux" ][4]course from The Linux Foundation and edX.
-
--------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/intro-to-linux/2018/1/how-change-your-linux-console-fonts
-
-作者:[Carla Schroder][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/cschroder
-[1]:http://jwilk.net/software/fbcat
-[2]:https://github.com/jwilk/fbcat/blob/master/fbgrab
-[3]:https://software.opensuse.org/package/console-setup
-[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md b/sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md
index 8e0a970f7e..c6ed399cfd 100644
--- a/sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md
+++ b/sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md
@@ -1,3 +1,5 @@
+translating by Flowsnow
+
 Ansible: the Automation Framework That Thinks Like a Sysadmin
 ======
@@ -185,7 +187,6 @@ You should see the results of the uptime command for each host in the webservers
 In a future article, I plan start to dig in to Ansible's ability to manage the remote computers. I'll look at various modules and how you can use the ad-hoc mode to accomplish in a few keystrokes what would take a long time to handle individually on the command line. If you didn't get the results you expected from the sample Ansible commands above, take this time to make sure authentication is working. Check out [the Ansible docs][1] for more help if you get stuck.
-
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/ansible-automation-framework-thinks-sysadmin
diff --git a/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md b/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md
deleted file mode 100644
index 7a49972103..0000000000
--- a/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md
+++ /dev/null
@@ -1,72 +0,0 @@
-translating---geekpi
-
-How To Display Asterisks When You Type Password In terminal
-======
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/01/Display-Asterisks-When-You-Type-Password-In-terminal-1-720x340.png)
-
-When you type passwords in a web browser login or any GUI login, the passwords will be masked as asterisks like ********** or bullets like •••••••••••••. This is a built-in security mechanism to prevent the users near you from viewing your password. 
But when you type the password in Terminal to perform any administrative task with **sudo** or **su**, you won't even see the asterisks or bullets as you type the password. There won't be any visual indication that you are entering a password, no cursor movement, nothing at all. You will not know whether you entered all the characters or not. All you will see is just a blank screen!
-
-Look at the following screenshot.
-
-![][2]
-
-As you see in the above image, I've already entered the password, but there was no indication (either asterisks or bullets). Now, I am not sure whether I entered all the characters in my password or not. This security mechanism also prevents the person near you from guessing the password length. Of course, this behavior can be changed. That is what this guide is all about. It is not that difficult. Read on!
-
-#### Display Asterisks When You Type Password In terminal
-
-To display asterisks as you type your password in Terminal, we need to make a small modification in the **"/etc/sudoers"** file. Before making any changes, it is better to back up this file. To do so, just run:
-```
-sudo cp /etc/sudoers{,.bak}
-```
-
-The above command will back up the /etc/sudoers file to a new file named /etc/sudoers.bak. You can restore it, just in case something goes wrong after editing the file.
-
-Next, edit the **"/etc/sudoers"** file using the command:
-```
-sudo visudo
-```
-
-Find the following line:
-```
-Defaults env_reset
-```
-
-![][3]
-
-Add an extra word **",pwfeedback"** to the end of that line as shown below.
-```
-Defaults env_reset,pwfeedback
-```
-
-![][4]
-
-Then, press **"CTRL+x"** and **"y"** to save and close the file. Restart your Terminal for the changes to take effect.
-
-Now, you will see asterisks when you enter your password in Terminal.
-
-![][5]
-
-If you're not comfortable seeing a blank screen when you type passwords in Terminal, this small tweak will help. Please be aware that other users can predict the password length if they see the asterisks as you type. If you don't mind that, go ahead and make the changes as described above to make your password visible (masked as asterisks, of course!).
-
-And, that's all for now. More good stuff to come. Stay tuned!
-
-Cheers!
-
-
-
--------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/display-asterisks-type-password-terminal/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/password-1.png ()
-[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1.png ()
-[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1-1.png ()
-[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-2.png ()
diff --git a/sources/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md b/sources/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md
deleted file mode 100644
index 4467e442c5..0000000000
--- a/sources/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md
+++ /dev/null
@@ -1,143 +0,0 @@
-translating by lujun9972
-Linux size Command Tutorial for Beginners (6 Examples)
-======
-
-As some of you might already know, an object or executable file in Linux consists of several sections (like txt and data). 
In case you want to know the size of each section, there exists a command line utility - dubbed **size** - that provides you this information. In this tutorial, we will discuss the basics of this tool using some easy to understand examples.
-
-But before we do that, it's worth mentioning that all examples mentioned in this article have been tested on Ubuntu 16.04 LTS.
-
-## Linux size command
-
-The size command basically lists section sizes as well as the total size for the input object file(s). Here's the syntax for the command:
-```
-size [-A|-B|--format=compatibility]
-            [--help]
-            [-d|-o|-x|--radix=number]
-            [--common]
-            [-t|--totals]
-            [--target=bfdname] [-V|--version]
-            [objfile...]
-```
-
-And here's how the man page describes this utility:
-```
-The GNU size utility lists the section sizes---and the total size---for each of the object or archive files objfile in its argument list. By default, one line of output is generated for each object file or each module in an archive.
-
-objfile... are the object files to be examined. If none are specified, the file "a.out" will be used.
-```
-
-Following are some Q&A-styled examples that'll give you a better idea about how the size command works.
-
-## Q1. How to use size command?
-
-Basic usage of size is very simple. All you have to do is pass the object/executable file name as input to the tool. Following is an example:
-
-```
-size apl
-```
-
-Following is the output the above command produced on our system:
-
-[![How to use size command][1]][2]
-
-The first three entries are for the text, data, and bss sections, with their corresponding sizes. Then comes the total in decimal and hexadecimal formats. And finally, the last entry is for the filename.
-
-## Q2. How to switch between different output formats?
-
-The default output format, the man page for size says, is similar to the Berkeley format. However, if you want, you can go for the System V convention as well. For this, you'll have to use the **--format** option with SysV as its value.
-
-```
-size apl --format=SysV
-```
-
-Here's the output in this case:
-
-[![How to switch between different output formats][3]][4]
-
-## Q3. How to switch between different size units?
-
-By default, the size of sections is displayed in decimal. However, if you want, you can have this information in octal as well as hexadecimal. For this, use the **-o** and **-x** command line options.
-
-[![How to switch between different size units][5]][6]
-
-Here's what the man page says about these options:
-```
--d
--o
--x
---radix=number
-
-Using one of these options, you can control whether the size of each section is given in decimal
-(-d, or --radix=10); octal (-o, or --radix=8); or hexadecimal (-x, or --radix=16).  In
---radix=number, only the three values (8, 10, 16) are supported. The total size is always given in
-two radices; decimal and hexadecimal for -d or -x output, or octal and hexadecimal if you're using
--o.
-```
-
-## Q4. How to make size command show totals of all object files?
-
-If you are using size to find out section sizes for multiple files in one go, you can also have the tool provide totals of all column values. You can enable this feature using the **-t** command line option.
-
-```
-size -t [file1] [file2] ... 
-```
-
-The following screenshot shows this command line option in action:
-
-[![How to make size command show totals of all object files][7]][8]
-
-The last row in the output has been added by the **-t** command line option.
-
-## Q5. How to make size print total size of common symbols in each file?
-
-If you are running the size command with multiple input files, and want the command to display the total size of common symbols in each file, then you can do this with the **--common** command line option.
-
-```
-size --common [file1] [file2] ...
-```
-
-It's also worth mentioning that when using the Berkeley format these are included in the bss size.
-
-## Q6. What are the other available command line options?
-
-Aside from the ones discussed until now, size also offers some generic command line options like **-v** (for version info) and **-h** (for a summary of eligible arguments and options).
-
-[![What are the other available command line options][9]][10]
-
-In addition, you can also make size read command-line options from a file. You can do this using the **@file** option. Following are some details related to this option:
-```
-The options read are inserted in place of the original @file option. If file does not exist, or
- cannot be read, then the option will be treated literally, and not removed. Options in file are
-separated by whitespace. A whitespace character may be included in an option by surrounding the
-entire option in either single or double quotes. Any character (including a backslash) may be
-included by prefixing the character to be included with a backslash. The file may itself contain
-additional @file options; any such options will be processed recursively.
-```
-
-## Conclusion
-
-One thing is clear, the size command isn't for everybody. It's aimed at only those who deal with the structure of object/executable files in Linux. So if you are among the target audience, practice the options we've discussed here, and you should be ready to use the tool on a daily basis. For more information on size, head to its [man page][11]. 
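-
-As a parting, hedged illustration of the `@file` mechanism described in Q6 above (the options file name here is made up), you could keep your favorite flags in a file and reuse them:
-```
-$ echo "-t -x" > size.opts
-$ size @size.opts /bin/ls /bin/cat
-```
-
-This should behave exactly like running `size -t -x /bin/ls /bin/cat` directly.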
-
-
--------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/linux-size-command/
-
-作者:[Himanshu Arora][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com
-[1]:https://www.howtoforge.com/images/command-tutorial/size-basic-usage.png
-[2]:https://www.howtoforge.com/images/command-tutorial/big/size-basic-usage.png
-[3]:https://www.howtoforge.com/images/command-tutorial/size-format-option.png
-[4]:https://www.howtoforge.com/images/command-tutorial/big/size-format-option.png
-[5]:https://www.howtoforge.com/images/command-tutorial/size-o-x-options.png
-[6]:https://www.howtoforge.com/images/command-tutorial/big/size-o-x-options.png
-[7]:https://www.howtoforge.com/images/command-tutorial/size-t-option.png
-[8]:https://www.howtoforge.com/images/command-tutorial/big/size-t-option.png
-[9]:https://www.howtoforge.com/images/command-tutorial/size-v-x1.png
-[10]:https://www.howtoforge.com/images/command-tutorial/big/size-v-x1.png
-[11]:https://linux.die.net/man/1/size
diff --git a/sources/tech/20180109 Profiler adventures resolving symbol addresses is hard.md b/sources/tech/20180109 Profiler adventures resolving symbol addresses is hard.md
new file mode 100644
index 0000000000..971f575f5f
--- /dev/null
+++ b/sources/tech/20180109 Profiler adventures resolving symbol addresses is hard.md
@@ -0,0 +1,163 @@
+Profiler adventures: resolving symbol addresses is hard!
+============================================================
+
+The other day I posted [How does gdb call functions?][1]. In that post I said:
+
+> Using the symbol table to figure out the address of the function you want to call is pretty straightforward
+
+Unsurprisingly, it turns out that figuring out the address in memory corresponding to a given symbol is actually not really that straightforward. This is actually something I’ve been doing in my profiler, and I think it’s interesting, so I thought I’d write about it!
+
+Basically the problem I’ve been trying to solve is – I have a symbol (like `ruby_api_version`), and I want to figure out which address that symbol is mapped to in my target process’s memory (so that I can get the data in it, like the Ruby process’s Ruby version). So far I’ve run into (and fixed!) 3 issues when trying to do this:
+
+1. When binaries are loaded into memory, they’re loaded at a random address (so I can’t just read the symbol table)
+
+2. The symbol I want isn’t necessarily in the “main” binary (`/proc/PID/exe`), sometimes it’s in some other dynamically linked library
+
+3. I need to look at the ELF program header to adjust which address I look at for the symbol
+
+I’ll start with some background, and then explain these 3 things! (I actually don’t know what gdb does)
+
+### what’s a symbol?
+
+Most binaries have functions and variables in them. For instance, Perl has a global variable called `PL_bincompat_options` and a function called `Perl_sv_catpv_mg`.
+
+Sometimes binaries need to look up functions from another binary (for example, if the binary is a dynamically linked library, you need to look up its functions by name). Also sometimes you’re debugging your code and you want to know what function an address corresponds to.
+
+Symbols are how you look up functions / variables in a binary. They’re in a section called the “symbol table”. The symbol table is basically an index for your binary! Sometimes they’re missing (“stripped”).
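+
+For example, here’s a quick, hypothetical way to check whether a binary still has its symbol table (the binary name is made up, and the exact output wording varies between versions of `file` and `nm`):
+
+```
+$ file ./mybinary
+./mybinary: ELF 64-bit LSB executable, x86-64, ... stripped
+$ nm ./mybinary
+nm: ./mybinary: no symbols
+```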
There are a lot of binary formats, but this post is just about the usual binary format on Linux: ELF.
+
+### how do you get the symbol table of a binary?
+
+A thing that I learned today (or at least learned and then forgot) is that there are 2 possible sections symbols can live in: `.symtab` and `.dynsym`. `.dynsym` is the “dynamic symbol table”. According to [this page][2], the dynsym is a smaller version of the symtab that only contains global symbols.
+
+There are at least 3 ways to read the symbol table of a binary on Linux: you can use nm, objdump, or readelf.
+
+* **read the .symtab**: `nm $FILE`, `objdump --syms $FILE`, `readelf -a $FILE`
+
+* **read the .dynsym**: `nm -D $FILE`, `objdump --dynamic-syms $FILE`, `readelf -a $FILE`
+
+`readelf -a` is the same in both cases because `readelf -a` just shows you everything in an ELF file. It’s my favorite because I don’t need to guess where the information I want is, I can just print out everything and then use grep.
+
+Here’s an example of some of the symbols in `/usr/bin/perl`. You can see that each symbol has a **name**, a **value**, and a **type**. The value is basically the offset of the code/data corresponding to that symbol in the binary. (except some symbols have value 0. I think that has something to do with dynamic linking but I don’t understand it so we’re not going to get into it)
+
+```
+$ readelf -a /usr/bin/perl
+...
+ Num: Value Size Type Ndx Name
+ 523: 00000000004d6590 49 FUNC 14 Perl_sv_catpv_mg
+ 524: 0000000000543410 7 FUNC 14 Perl_sv_copypv
+ 525: 00000000005a43e0 202 OBJECT 16 PL_bincompat_options
+ 526: 00000000004e6d20 2427 FUNC 14 Perl_pp_ucfirst
+ 527: 000000000044a8c0 1561 FUNC 14 Perl_Gv_AMupdate
+...
+
+```
+
+### the question we want to answer: what address is a symbol mapped to?
+
+That’s enough background!
+
+Now – suppose I’m a debugger, and I want to know what address the `ruby_api_version` symbol is mapped to. Let’s use readelf to look at the relevant Ruby binary!
+
+```
+readelf -a ~/.rbenv/versions/2.1.6/bin/ruby | grep ruby_api_version
+ 365: 00000000001f9180 12 OBJECT GLOBAL DEFAULT 15 ruby_api_version
+
+```
+
+Neat! The offset of `ruby_api_version` is `0x1f9180`. We’re done, right? Of course not! :)
+
+### Problem 1: ASLR (Address space layout randomization)
+
+Here’s the first issue: when Linux loads a binary into memory (like `~/.rbenv/versions/2.1.6/bin/ruby`), it doesn’t just load it at the `0` address. Instead, it usually adds a random offset. Wikipedia’s article on ASLR explains why:
+
+> Address space layout randomization (ASLR) is a memory-protection process for operating systems (OSes) that guards against buffer-overflow attacks by randomizing the location where system executables are loaded into memory.
+
+We can see this happening in practice: I started `/home/bork/.rbenv/versions/2.1.6/bin/ruby` 3 times and every time the process gets mapped to a different place in memory. (`0x56121c86f000`, `0x55f440b43000`, `0x56163334a000`)
+
+Here we’re meeting our good friend `/proc/$PID/maps` – this file contains a list of memory maps for a process. The memory maps tell us every address range in the process’s virtual memory (it turns out virtual memory isn’t contiguous! Instead processes get a bunch of possibly-disjoint memory maps!). This file is so useful! You can find the address of the stack, the heap, every dynamically loaded library, anonymous memory maps, and probably more. 
+
+```
+$ cat /proc/(pgrep -f 2.1.6)/maps | grep 'bin/ruby'
+56121c86f000-56121caf0000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+56121ccf0000-56121ccf5000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+56121ccf5000-56121ccf7000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+$ cat /proc/(pgrep -f 2.1.6)/maps | grep 'bin/ruby'
+55f440b43000-55f440dc4000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+55f440fc4000-55f440fc9000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+55f440fc9000-55f440fcb000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+$ cat /proc/(pgrep -f 2.1.6)/maps | grep 'bin/ruby'
+56163334a000-5616335cb000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+5616337cb000-5616337d0000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+5616337d0000-5616337d2000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby
+
+```
+
+Okay, so in the last example we see that our binary is mapped at `0x56163334a000`. If we combine this with the knowledge that `ruby_api_version` is at `0x1f9180`, then that means that we just need to look at the address `0x1f9180 + 0x56163334a000` to find our variable, right?
+
+Yes! In this case, that works. But in other cases it won’t! So that brings us to problem 2.
+
+### Problem 2: dynamically loaded libraries
+
+Next up, I tried running system Ruby: `/usr/bin/ruby`. This binary has basically no symbols at all! Disaster! In particular it does not have a `ruby_api_version` symbol.
+
+But when I tried to print the `ruby_api_version` variable with gdb, it worked!!! Where was gdb finding my symbol? I found the answer with the help of our good friend: `/proc/PID/maps`
+
+It turns out that `/usr/bin/ruby` dynamically loads a library called `libruby-2.3`. You can see it in the memory maps here:
+
+```
+$ cat /proc/(pgrep -f /usr/bin/ruby)/maps | grep libruby
+7f2c5d789000-7f2c5d9f1000 r-xp 00000000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0
+7f2c5d9f1000-7f2c5dbf0000 ---p 00268000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0
+7f2c5dbf0000-7f2c5dbf6000 r--p 00267000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0
+7f2c5dbf6000-7f2c5dbf7000 rw-p 0026d000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0
+
+```
+
+And if we read it with `readelf`, we find the address of that symbol!
+
+```
+readelf -a /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0 | grep ruby_api_version
+ 374: 00000000001c72f0 12 OBJECT GLOBAL DEFAULT 13 ruby_api_version
+
+```
+
+So in this case the address of the symbol we want is `0x7f2c5d789000` (the start of the libruby-2.3 memory map) plus `0x1c72f0`. Nice! But we’re still not done. There is (at least) one more mystery!
+
+### Problem 3: the `vaddr` offset in the ELF program header
+
+This one I just figured out today so it’s the one I have the shakiest understanding of. Here’s what happened.
+
+I was running system ruby on Ubuntu 14.04: Ruby 1.9.3. And my usual code (find the libruby map, get its address, get the symbol offset, add them up) wasn’t working!!! I was confused.
+
+But I’d asked Julian if he knew of any weird stuff I need to worry about a while back and he said “well, you should read the code for `dlsym`, you’re trying to do basically the same thing”. So I decided to, instead of randomly guessing, go read the code for `dlsym`. 
+
+The man page for `dlsym` says “dlsym, dlvsym - obtain address of a symbol in a shared object or executable”. Perfect!!
+
+[Here’s the dlsym code from musl I read][3]. (musl is like glibc, but different. Maybe easier to read? I don’t understand it that well.)
+
+The dlsym code says (on line 1468) `return def.dso->base + def.sym->st_value;` That sounds like what I’m doing!! But what’s `dso->base`? It looks like `base = map - addr_min;`, and `addr_min = ph->p_vaddr;`. (there’s also some stuff that makes sure `addr_min` is aligned with the page size which I should maybe pay attention to.)
+
+So the code I want is something like `map_base - ph->p_vaddr + sym->st_value`.
+
+I looked up this `vaddr` thing in the ELF program header, subtracted it from my calculation, and voilà! It worked!!!
+
+### there are probably more problems!
+
+I imagine I will discover even more ways that I am calculating the symbol address wrong. It’s interesting that such a seemingly simple thing (“what’s the address of this symbol?”) is so complicated!
+
+It would be nice to be able to just call `dlsym` and have it do all the right calculations for me, but I think I can’t because the symbol is in a different process. Maybe I’m wrong about that though! I would like to be wrong about that. If you know an easier way to do all this I would very much like to know!
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2018/01/09/resolving-symbol-addresses/
+
+作者:[Julia Evans ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca
+[1]:https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/
+[2]:https://blogs.oracle.com/ali/inside-elf-symbol-tables
+[3]:https://github.com/esmil/musl/blob/194f9cf93da8ae62491b7386edf481ea8565ae4e/src/ldso/dynlink.c#L1451
diff --git a/sources/tech/20180111 BASH drivers, start your engines.md b/sources/tech/20180111 BASH drivers, start your engines.md
index 7126bea3e0..e5f8631e39 100644
--- a/sources/tech/20180111 BASH drivers, start your engines.md
+++ b/sources/tech/20180111 BASH drivers, start your engines.md
@@ -1,4 +1,4 @@
-BASH drivers, start your engines
+Translating by Torival BASH drivers, start your engines
 ======
 
 ![](http://www.thelinuxrain.com/content/01-articles/201-bash-drivers-start-your-engines/headimage.jpg)
diff --git a/sources/tech/20180111 How to Install Snipe-IT Asset Management Software on Debian 9.md b/sources/tech/20180111 How to Install Snipe-IT Asset Management Software on Debian 9.md
new file mode 100644
index 0000000000..80412f03f3
--- /dev/null
+++ b/sources/tech/20180111 How to Install Snipe-IT Asset Management Software on Debian 9.md
@@ -0,0 +1,374 @@
+How to Install Snipe-IT Asset Management Software on Debian 9
+======
+
+Snipe-IT is a free and open source IT asset management web application that can be used for tracking licenses, accessories, consumables, and components. It is written in PHP and uses MySQL to store its data. It is a cross-platform application that works on all the major operating systems, like Linux, Windows and Mac OS X. It easily integrates with Active Directory and LDAP, and supports two-factor authentication with Google Authenticator.
+
+In this tutorial, we will learn how to install Snipe-IT on a Debian 9 server.
+
+### Requirements
+
+ * A server running Debian 9.
+ * A non-root user with sudo privileges. 
+
+
+
+### Getting Started
+
+Before installing any packages, it is recommended to update the system packages to the latest versions. You can do this by running the following command:
+
+```
+sudo apt-get update -y
+sudo apt-get upgrade -y
+```
+
+Next, restart the system to apply all the updates. Then install the other required packages with the following command:
+
+```
+sudo apt-get install git curl unzip wget -y
+```
+
+Once all the packages are installed, you can proceed to the next step.
+
+### Install LAMP Server
+
+Snipe-IT runs on the Apache web server, so you will need to install a LAMP stack (Apache, MariaDB, PHP) on your system.
+
+First, install Apache, PHP and other PHP libraries with the following command:
+
+```
+sudo apt-get install apache2 libapache2-mod-php php php-pdo php-mbstring php-tokenizer php-curl php-mysql php-ldap php-zip php-fileinfo php-gd php-dom php-mcrypt php-bcmath -y
+```
+
+Once all the packages are installed, start the Apache service and enable it to start on boot with the following command:
+
+```
+sudo systemctl start apache2
+sudo systemctl enable apache2
+```
+
+### Install and Configure MariaDB
+
+Snipe-IT uses MariaDB to store its data. So you will need to install MariaDB on your system. By default, the latest version of MariaDB is not available in the Debian 9 repository. So you will need to add the MariaDB repository to your system.
+
+First, add the APT key with the following command:
+
+```
+sudo apt-get install software-properties-common -y
+sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
+```
+
+Next, add the MariaDB repository using the following command:
+
+```
+sudo add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.1/debian stretch main'
+```
+
+Next, update the repository with the following command:
+
+```
+sudo apt-get update -y
+```
+
+Once the repository is updated, you can install MariaDB with the following command:
+
+```
+sudo apt-get install mariadb-server mariadb-client -y
+```
+
+Next, start the MariaDB service and enable it to start at boot time with the following command:
+
+```
+sudo systemctl start mysql
+sudo systemctl enable mysql
+```
+
+You can check the status of the MariaDB server with the following command:
+
+```
+sudo systemctl status mysql
+```
+
+If everything is fine, you should see the following output:
+```
+● mariadb.service - MariaDB database server
+ Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
+ Active: active (running) since Mon 2017-12-25 08:41:25 EST; 29min ago
+ Process: 618 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
+ Process: 615 ExecStartPost=/etc/mysql/debian-start (code=exited, status=0/SUCCESS)
+ Process: 436 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemc
+ Process: 429 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
+ Process: 418 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
+ Main PID: 574 (mysqld)
+ Status: "Taking your SQL requests now..."
+ Tasks: 27 (limit: 4915)
+ CGroup: /system.slice/mariadb.service
+ └─574 /usr/sbin/mysqld
+
+Dec 25 08:41:07 debian systemd[1]: Starting MariaDB database server... 
+Dec 25 08:41:14 debian mysqld[574]: 2017-12-25 8:41:14 140488893776448 [Note] /usr/sbin/mysqld (mysqld 10.1.26-MariaDB-0+deb9u1) starting as p
+Dec 25 08:41:25 debian systemd[1]: Started MariaDB database server.
+
+```
+
+Next, secure your MariaDB by running the following script:
+
+```
+sudo mysql_secure_installation
+```
+
+Answer all the questions as shown below:
+```
+Set root password? [Y/n] n
+Remove anonymous users? [Y/n] y
+Disallow root login remotely? [Y/n] y
+Remove test database and access to it? [Y/n] y
+Reload privilege tables now? [Y/n] y
+
+```
+
+Once MariaDB is secured, log in to the MariaDB shell with the following command:
+
+```
+mysql -u root -p
+```
+
+Enter your root password when prompted, then create a database for Snipe-IT with the following command:
+
+```
+MariaDB [(none)]> create database snipeitdb character set utf8;
+```
+
+Next, create a user for Snipe-IT and grant it all privileges on the database with the following command:
+
+```
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON snipeitdb.* TO 'snipeit'@'localhost' IDENTIFIED BY 'password';
+```
+
+Next, flush the privileges with the following command:
+
+```
+MariaDB [(none)]> flush privileges;
+```
+
+Finally, exit from the MariaDB console using the following command:
+
+```
+MariaDB [(none)]> quit
+```
+
+### Install Snipe-IT
+
+You can download the latest version of Snipe-IT from its Git repository with the following command:
+
+```
+git clone https://github.com/snipe/snipe-it snipe-it
+```
+
+Next, move the downloaded directory to the Apache root directory with the following command:
+
+```
+sudo mv snipe-it /var/www/
+```
+
+Next, you will need to install Composer on your system. You can install it with the following command:
+
+```
+curl -sS https://getcomposer.org/installer | php
+sudo mv composer.phar /usr/local/bin/composer
+```
+
+Next, change the directory to snipe-it and install the PHP dependencies using Composer with the following command:
+
+```
+cd /var/www/snipe-it
+sudo composer install --no-dev --prefer-source
+```
+
+Next, generate the application key ("APP_KEY") with the following command:
+
+```
+sudo php artisan key:generate
+```
+
+You should see the following output:
+```
+**************************************
+* Application In Production! *
+**************************************
+
+ Do you really wish to run this command? (yes/no) [no]:
+ > yes
+
+Application key [base64:uWh7O0/TOV10asWpzHc0DH1dOxJHprnZw2kSOnbBXww=] set successfully.
+
+```
+
+Next, you will need to populate MySQL with Snipe-IT's default database schema. You can do this by running the following command:
+
+```
+sudo php artisan migrate
+```
+
+Type yes when prompted to confirm that you want to perform the migration:
+```
+**************************************
+* Application In Production! *
+**************************************
+
+ Do you really wish to run this command? (yes/no) [no]:
+ > yes
+
+Migration table created successfully.
+
+```
+
+Next, copy the sample .env file and make some changes in it:
+
+```
+sudo cp .env.example .env
+sudo nano .env
+```
+
+Change the following lines:
+```
+APP_URL=http://example.com
+APP_TIMEZONE=US/Eastern
+APP_LOCALE=en
+
+# --------------------------------------------
+# REQUIRED: DATABASE SETTINGS
+# --------------------------------------------
+DB_CONNECTION=mysql
+DB_HOST=localhost
+DB_DATABASE=snipeitdb
+DB_USERNAME=snipeit
+DB_PASSWORD=password
+DB_PREFIX=null
+DB_DUMP_PATH='/usr/bin'
+
+```
+
+Save and close the file when you are finished. 
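+
+Before moving on, you can optionally verify that the credentials you just put into `.env` really work. A quick test query, using the values from this tutorial (adjust the password if you chose a different one), should list the tables that `php artisan migrate` created earlier:
+
+```
+mysql -u snipeit -p'password' -e 'SHOW TABLES;' snipeitdb
+```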
+
+Next, provide the appropriate ownership and file permissions with the following command:
+
+```
+sudo chown -R www-data:www-data storage public/uploads
+sudo chmod -R 755 storage public/uploads
+```
+
+### Configure Apache For Snipe-IT
+
+Next, you will need to create an Apache virtual host directive for Snipe-IT. You can do this by creating a `snipeit.conf` file inside the `/etc/apache2/sites-available` directory:
+
+```
+sudo nano /etc/apache2/sites-available/snipeit.conf
+```
+
+Add the following lines:
+```
+<VirtualHost *:80>
+ServerAdmin webmaster@example.com
+<Directory /var/www/snipe-it/public>
+ Require all granted
+ AllowOverride All
+</Directory>
+ DocumentRoot /var/www/snipe-it/public
+ ServerName example.com
+ ErrorLog /var/log/apache2/snipeIT.error.log
+ CustomLog /var/log/apache2/access.log combined
+</VirtualHost>
+```
+
+Save and close the file when you are finished. Then, enable the virtual host with the following command:
+
+```
+sudo a2ensite snipeit.conf
+```
+
+Next, enable the PHP mcrypt and mbstring modules and the Apache rewrite module with the following command:
+
+```
+sudo phpenmod mcrypt
+sudo phpenmod mbstring
+sudo a2enmod rewrite
+```
+
+Finally, restart the Apache web server to apply all the changes:
+
+```
+sudo systemctl restart apache2
+```
+
+### Configure Firewall
+
+By default, Snipe-IT runs on port 80, so you will need to allow port 80 through the firewall. By default, the UFW firewall is not installed in Debian 9, so you will need to install it first. You can install it by just running the following command:
+
+```
+sudo apt-get install ufw -y
+```
+
+Once UFW is installed, enable it to start at boot time with the following command:
+
+```
+sudo ufw enable
+```
+
+Next, allow port 80 using the following command:
+
+```
+sudo ufw allow 80
+```
+
+Next, reload the UFW firewall rules with the following command:
+
+```
+sudo ufw reload
+```
+
+### Access Snipe-IT
+
+Everything is now installed and configured, so it's time to access the Snipe-IT web interface.
+
+Open your web browser and type the URL; you will be redirected to the following page:
+
+[![Snipe-IT Checks the system][2]][3]
+
+The above page will do a system check to make sure your configuration looks correct. Next, click on the **Create Database Table** button and you should see the following page:
+
+[![Create database table][4]][5]
+
+Here, click on the **Create User** button and you should see the following page:
+
+[![Create user][6]][7]
+
+Here, provide your Site name, Domain name, Admin username and password, then click on the **Save User** button. You should see the Snipe-IT default dashboard as below:
+
+[![Snipe-IT Dashboard][8]][9]
+
+### Conclusion
+
+In the above tutorial, we have learned to install Snipe-IT on a Debian 9 server. We have also learned to configure Snipe-IT through the web interface. I hope you now have enough knowledge to deploy Snipe-IT in your production environment. For more information you can refer to the Snipe-IT [Documentation Page][10]. 
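+
+One extra tip related to the Apache configuration above: before (or after) restarting Apache, you can ask it to lint its configuration files, which catches typos in the virtual host early. This uses the stock `apache2ctl` helper that ships with Debian's Apache package:
+
+```
+sudo apache2ctl configtest
+```
+
+If the virtual host file is well-formed, this should print `Syntax OK`.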
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/how-to-install-snipe-it-on-debian-9/
+
+作者:[Hitesh Jethva][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[2]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/Screenshot-of-snipeit-page1.png
+[3]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/big/Screenshot-of-snipeit-page1.png
+[4]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/Screenshot-of-snipeit-page2.png
+[5]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/big/Screenshot-of-snipeit-page2.png
+[6]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/Screenshot-of-snipeit-page3.png
+[7]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/big/Screenshot-of-snipeit-page3.png
+[8]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/Screenshot-of-snipeit-page4.png
+[9]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/big/Screenshot-of-snipeit-page4.png
+[10]:https://snipe-it.readme.io/docs
diff --git a/sources/tech/20180111 How to install software applications on Linux.md b/sources/tech/20180111 How to install software applications on Linux.md
new file mode 100644
index 0000000000..6414bd19be
--- /dev/null
+++ b/sources/tech/20180111 How to install software applications on Linux.md
@@ -0,0 +1,261 @@
+How to install software applications on Linux
+====== 
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_)
+
+Image by : Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0
+
+How do you install an application on Linux? As with many operating systems, there isn't just one answer to that question. Applications can come from so many sources--it's nearly impossible to count--and each development team may deliver their software whatever way they feel is best. Knowing how to install what you're given is part of being a true power user of your OS.
+
+### Repositories
+
+For well over a decade, Linux has used software repositories to distribute software. A "repository" in this context is a public server hosting installable software packages. A Linux distribution provides a command, and usually a graphical interface to that command, that pulls the software from the server and installs it onto your computer. It's such a simple concept that it has served as the model for all major cellphone operating systems and, more recently, the "app stores" of the two major closed source computer operating systems.
+
+
+![Linux repository][2]
+
+Not an app store
+
+Installing from a software repository is the primary method of installing apps on Linux. It should be the first place you look for any application you intend to install.
+
+To install from a software repository, there's usually a command:
+```
+
+
+$ sudo dnf install inkscape
+```
+
+The actual command you use depends on what distribution of Linux you use. Fedora uses `dnf`, OpenSUSE uses `zypper`, Debian and Ubuntu use `apt`, Slackware uses `sbopkg`, FreeBSD uses `pkg_add`, and Illumos-based OpenIndiana uses `pkg`. 
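+
+For example, the Inkscape install shown above would look like this on a couple of other distributions (assuming the package is also named `inkscape` there, which is usually--but not always--the case):
+```
+
+
+$ sudo apt install inkscape      # Debian, Ubuntu
+$ sudo zypper install inkscape   # OpenSUSE
+```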
Whatever you use, the incantation usually involves searching for the proper name of what you want to install, because sometimes what you call software is not its official or solitary designation:
+```
+
+
+$ sudo dnf search pyqt
+
+PyQt.x86_64 : Python bindings for Qt3
+
+PyQt4.x86_64 : Python bindings for Qt4
+
+python-qt5.x86_64 : PyQt5 is Python bindings for Qt5
+```
+
+Once you have located the name of the package you want to install, use the `install` subcommand to perform the actual download and automated install:
+```
+
+
+$ sudo dnf install python-qt5
+```
+
+For specifics on installing from a software repository, see your distribution's documentation.
+
+The same generally holds true with the graphical tools. Search for what you think you want, and then install it.
+
+![](https://opensource.com/sites/default/files/u128651/apper.png)
+
+Like the underlying command, the name of the graphical installer depends on what distribution you are running. The relevant application is usually tagged with the software or package keywords, so search your launcher or menu for those terms, and you'll find what you need. Since open source is all about user choice, if you don't like the graphical user interface (GUI) that your distribution provides, there may be an alternative that you can install. And now you know how to do that.
+
+#### Extra repositories
+
+Your distribution has its standard repository for software that it packages for you, and there are usually extra repositories common to your distribution. For example, [EPEL][3] serves Red Hat Enterprise Linux and CentOS, [RPMFusion][4] serves Fedora, Ubuntu has various levels of support as well as a Personal Package Archive (PPA) network, [Packman][5] provides extra software for OpenSUSE, and [SlackBuilds.org][6] provides community build scripts for Slackware.
+
+By default, your Linux OS is set to look at just its official repositories, so if you want to use additional software collections, you must add extra repositories yourself. You can usually install a repository as though it were a software package. In fact, when you install certain software, such as [GNU Ring][7] video chat, the [Vivaldi][8] web browser, Google Chrome, and many others, what you are actually installing is access to their private repositories, from which the latest version of their application is installed to your machine.
+
+
+![Installing a repo][10]
+
+Installing a repo
+
+You can also add the repository manually by editing a text file and adding it to your package manager's configuration directory, or by running a command to install the repository. As usual, the exact command you use depends on the distribution you are running; for example, here is a `dnf` command that adds a repository to the system:
+```
+
+
+$ sudo dnf config-manager --add-repo=http://example.com/pub/centos/7
+```
+
+### Installing apps without repositories
+
+The repository model is so popular because it provides a link between the user (you) and the developer. When important updates are released, your system kindly prompts you to accept the updates, and you can accept them all from one centralized location.
+
+Sometimes, though, a package is made available with no repository attached. These installable packages come in several forms.
+
+#### Linux packages
+
+Sometimes, a developer distributes software in a common Linux packaging format, such as RPM, DEB, or the newer but very popular FlatPak or Snap formats. 
You may not get access to a repository with this download; you might just get the package.
+
+The video editor [Lightworks][11], for example, provides a `.deb` file for APT users and an `.rpm` file for RPM users. When you want to update, you return to the website and download the latest appropriate file.
+
+These one-off packages can be installed with all the same tools used when installing from a repository. If you double-click the package you download, a graphical installer launches and steps you through the install process.
+
+Alternately, you can install from a terminal. The difference here is that a lone package file you've downloaded from the internet isn't coming from a repository. It's a "local" install, meaning your package management software doesn't need to download it to install it. Most package managers handle this transparently:
+```
+
+
+$ sudo dnf install ~/Downloads/lwks-14.0.0-amd64.rpm
+```
+
+In some cases, you need to take additional steps to get the application to run, so carefully read the documentation about the software you're installing.
+
+#### Generic install scripts
+
+Some developers release their packages in one of several generic formats. Common extensions include `.run` and `.sh`. NVIDIA graphics card drivers, Foundry visual FX packages like Nuke and Mari, and many DRM-free games from [GOG][12] use this style of installer.
+
+This model of installation relies on the developer to deliver an installation "wizard." Some of the installers are graphical, while others just run in a terminal.
+
+There are two ways to run these types of installers.
+
+ 1. You can run the installer directly from a terminal:
+
+
+```
+
+
+$ sh ./game/gog_warsow_x.y.z.sh
+```
+
+ 2. Alternately, you can run it from your desktop by marking it as executable. To mark an installer executable, right-click on its icon and select **Properties**.
+
+![Giving an installer executable permission][14]
+
+
+Giving an installer executable permission
+
+Once you've given permission for it to run, double-click the icon to start the install.
+
+![GOG installer][16]
+
+GOG installer
+
+For the rest of the install, just follow the instructions on the screen.
+
+#### AppImage portable apps
+
+The AppImage format is relatively new to Linux, although its concept is based on both NeXT and Rox. The idea is simple: everything required to run an application is placed into one directory, and then that directory is treated as an "app." To run the application, you just double-click the icon, and it runs. There's no need or expectation that the application is installed in the traditional sense; it just runs from wherever you have it lying around on your hard drive.
+
+Despite its ability to run as a self-contained app, an AppImage usually offers to do some soft system integration.
+
+![AppImage system integration][18]
+
+AppImage system integration
+
+If you accept this offer, a local `.desktop` file is installed to your home directory. A `.desktop` file is a small configuration file used by the Applications menu and mimetype system of a Linux desktop. Essentially, placing the desktop config file in your home directory's application list "installs" the application without actually installing it. You get all the benefits of having installed something, and the benefits of being able to run something locally, as a "portable app."
+
+#### Application directory
+
+Sometimes, a developer just compiles an application and posts the result as a download, with no install script and no packaging. 
Usually, this means that you download a TAR file, [extract it][19], and then double-click the executable file (it's usually the one with the name of the software you downloaded). + +![Twine downloaded for Linux][21] + + +Twine downloaded for Linux + +When presented with this style of software delivery, you can either leave it where you downloaded it and launch it manually when you need it, or you can do a quick and dirty install yourself. This involves two simple steps: + + 1. Save the directory to a standard location and launch it manually when you need it. + 2. Save the directory to a standard location and create a `.desktop` file to integrate it into your system. + + + +If you're just installing applications for yourself, it's traditional to keep a `bin` directory (short for "binary") in your home directory as a storage location for locally installed applications and scripts. If you have other users on your system who need access to the applications, it's traditional to place the binaries in `/opt`. Ultimately, it's up to you where you store the application. + +Downloads often come in directories with versioned names, such as `twine_2.13` or `pcgen-v6.07.04`. Since it's reasonable to assume you'll update the application at some point, it's a good idea to either remove the version number or to create a symlink to the directory. This way, the launcher that you create for the application can remain the same, even though you update the application itself. + +To create a `.desktop` launcher file, open a text editor and create a file called `twine.desktop`. The [Desktop Entry Specification][22] is defined by [FreeDesktop.org][23]. Here is a simple launcher for a game development IDE called Twine, installed to the system-wide `/opt` directory: +``` + + +[Desktop Entry] + +Encoding=UTF-8 + +Name=Twine + +GenericName=Twine + +Comment=Twine + +Exec=/opt/twine/Twine + +Icon=/usr/share/icons/oxygen/64x64/categories/applications-games.png + +Terminal=false + +Type=Application + +Categories=Development;IDE; +``` + +The tricky line is the `Exec` line. It must contain a valid command to start the application. Usually, it's just the full path to the thing you downloaded, but in some cases, it's something more complex. For example, a Java application might need to be launched as an argument to Java itself: +``` + + +Exec=java -jar /path/to/foo.jar +``` + +Sometimes, a project includes a wrapper script that you can run so you don't have to figure out the right command: +``` + + +Exec=/opt/foo/foo-launcher.sh +``` + +In the Twine example, there's no icon bundled with the download, so the example `.desktop` file assigns a generic gaming icon that shipped with the KDE desktop. You can use workarounds like that, but if you're more artistic, you can just create your own icon, or you can search the Internet for a good icon. As long as the `Icon` line points to a valid PNG or SVG file, your application will inherit the icon. + +The example script also sets the application category primarily to Development, so in KDE, GNOME, and most other Application menus, Twine appears under the Development category. + +To get this example to appear in an Application menu, place the `twine.desktop` file into one of two places: + + * Place it in `~/.local/share/applications` if you're storing the application in your own home directory. + * Place it in `/usr/share/applications` if you're storing the application in `/opt` or another system-wide location and want it to appear in all your users' Application menus. 
+ + + +And now the application is installed as it needs to be and integrated with the rest of your system. + +### Compiling from source + +Finally, there's the truly universal install format: source code. Compiling an application from source code is a great way to learn how applications are structured, how they interact with your system, and how they can be customized. It's by no means a push-button process, though. It requires a build environment, it usually involves installing dependency libraries and header files, and sometimes a little bit of debugging. + +To learn more about compiling from source code, [read my article][24] on the topic. + +### Now you know + +Some people think installing software is a magical process that only developers understand, or they think it "activates" an application, as if the binary executable file isn't valid until it has been "installed." Hopefully, learning about the many different methods of installing has shown you that install is really just shorthand for "copying files from one place to the appropriate places on your system." There's nothing mysterious about it. As long as you approach each install without expectations of how it's supposed to happen, and instead look for what the developer has set up as the install process, it's generally easy, even if it is different from what you're used to. + +The important thing is that an installer is honest with you. If you come across an installer that attempts to install additional software without your consent (or maybe it asks for consent, but in a confusing or misleading way), or that attempts to run checks on your system for no apparent reason, then don't continue an install. + +Good software is flexible, honest, and open. And now you know how to get good software onto your computer. 
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/how-install-apps-linux
+
+作者:[Seth Kenlon][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[1]:/file/382591
+[2]:https://opensource.com/sites/default/files/u128651/repo.png (Linux repository)
+[3]:https://fedoraproject.org/wiki/EPEL
+[4]:http://rpmfusion.org
+[5]:http://packman.links2linux.org/
+[6]:http://slackbuilds.org
+[7]:https://ring.cx/en/download/gnu-linux
+[8]:http://vivaldi.com
+[9]:/file/382566
+[10]:https://opensource.com/sites/default/files/u128651/access.png (Installing a repo)
+[11]:https://www.lwks.com/
+[12]:http://gog.com
+[13]:/file/382581
+[14]:https://opensource.com/sites/default/files/u128651/exec.jpg (Giving an installer executable permission)
+[15]:/file/382586
+[16]:https://opensource.com/sites/default/files/u128651/gog.jpg (GOG installer)
+[17]:/file/382576
+[18]:https://opensource.com/sites/default/files/u128651/appimage.png (AppImage system integration)
+[19]:https://opensource.com/article/17/7/how-unzip-targz-file
+[20]:/file/382596
+[21]:https://opensource.com/sites/default/files/u128651/twine.jpg (Twine downloaded for Linux)
+[22]:https://specifications.freedesktop.org/desktop-entry-spec/desktop-entry-spec-latest.html
+[23]:http://freedesktop.org
+[24]:https://opensource.com/article/17/10/open-source-cats
diff --git a/sources/tech/20180111 Multimedia Apps for the Linux Console.md b/sources/tech/20180111 Multimedia Apps for the Linux Console.md
new file mode 100644
index 0000000000..6cdd3ef857
--- /dev/null
+++ b/sources/tech/20180111 Multimedia Apps for the Linux Console.md
@@ -0,0 +1,112 @@
+Translating by Yinr
+
+Multimedia Apps for the Linux Console
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/multimedia.jpg?itok=v-XrnKRB)
+The Linux console supports multimedia, so you can enjoy music, movies, photos, and even read PDF files.
+
+When last we met, we learned that the Linux console supports multimedia. Yes, really! You can enjoy music, movies, photos, and even read PDF files without being in an X session with MPlayer, fbi, and fbgs. And, as a bonus, you can enjoy a Matrix-style screensaver for the console, CMatrix.
+
+You will probably have to make some tweaks to your system to make this work. The examples used here are for Ubuntu Linux 16.04.
+
+### MPlayer
+
+You're probably familiar with the amazing and versatile MPlayer, which supports almost every video and audio format, and runs on nearly everything, including Linux, Android, Windows, Mac, Kindle, OS/2, and AmigaOS. Using MPlayer in your console will probably require some tweaking, depending on your Linux distribution. To start, try playing a video:
+```
+$ mplayer [video name]
+
+```
+
+If it works, then hurrah, and you can invest your time in learning useful MPlayer options, such as controlling the size of the video screen. However, some Linux distributions are managing the framebuffer differently than in the olden days, and you may have to adjust some settings to make it work. This is how to make it work on recent Ubuntu releases.
+
+First, add yourself to the video group.
+
+Second, verify that `/etc/modprobe.d/blacklist-framebuffer.conf` has this line: `#blacklist vesafb`. It should already be commented out, and if it isn't then comment it.
All the other module lines should stay un-commented, as those `blacklist` entries prevent their modules from loading. Side note: if you want to dig more deeply into managing your framebuffer, the module for your video card may give better performance.
+
+Add these two modules to the end of `/etc/initramfs-tools/modules`, `vesafb` and `fbcon`, then rebuild the initramfs image:
+```
+$ sudo nano /etc/initramfs-tools/modules
+ # List of modules that you want to include in your initramfs.
+ # They will be loaded at boot time in the order below.
+ fbcon
+ vesafb
+
+$ sudo update-initramfs -u
+
+```
+
+[fbcon][1] is the Linux framebuffer console. It runs on top of the framebuffer and adds graphical features. It requires a framebuffer device, which is supplied by the `vesafb` module.
+
+Now you must edit your GRUB2 configuration. In `/etc/default/grub` you should see a line like this:
+```
+GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
+
+```
+
+It may have some other options, but it should be there. Add `vga=789`:
+```
+GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vga=789"
+
+```
+
+Reboot and enter your console (Ctrl+Alt+F1), and try playing a video. This command selects the `fbdev2` video device; I haven't learned yet how to know which one to use, but I had to use it to play the video. The default screen size is 320x240, so I scaled it to 960:
+```
+$ mplayer -vo fbdev2 -vf scale -zoom -xy 960 AlienSong_mp4.mov
+```
+
+And behold Figure 1. It's grainy because I have a low-fi copy of this video, not because MPlayer is making it grainy.
+
+MPlayer plays CDs, DVDs, network streams, and has a giant batch of playback options, which I shall leave as your homework to explore.
+
+### fbi Image Viewer
+
+`fbi`, the framebuffer image viewer, comes in the [fbida][2] package on most Linuxes. It has native support for the common image file formats, and uses `convert` (from Image Magick), if it is installed, for other formats. Its simplest use is to view a single image file:
+```
+$ fbi filename
+
+```
+
+Use the arrow keys to scroll a large image, + and - to zoom, and r and l to rotate 90 degrees right and left. Press the Escape key to close the image. You can play a slideshow by giving `fbi` a list of files:
+```
+$ fbi --list file-list.txt
+
+```
+
+`fbi` supports autozoom. With `-a` `fbi` controls the zoom factor. `--autoup` and `--autodown` tell `fbi` to only zoom up or down. Control the blend time between images with `--blend [time]`, in milliseconds. Press the k and j keys to jump backward and forward in your file list.
+
+`fbi` has commands for creating file lists from images you have viewed, and for exporting your commands to a file, and a host of other cool options. Check out `man fbi` for complete options.
+
+### CMatrix Console Screensaver
+
+The Matrix screensaver is still my favorite (Figure 2), second only to the bouncing cow. [CMatrix][3] runs on the console. Simply type `cmatrix` to start it, and Ctrl+C stops it. Run `cmatrix -s` to launch it in screensaver mode, which exits on any keypress. `-C` changes the color. Your choices are green, red, blue, yellow, white, magenta, cyan, and black.
+
+CMatrix supports asynchronous key presses, which means you can change options while it's running.
+
+`-B` is all bold text, and `-b` is partially bold.
+
+### fbgs PDF Viewer
+
+It seems that the addiction to PDF documents is pandemic and incurable, though PDFs are better than they used to be, with live hyperlinks, copy-paste, and good text search. The `fbgs` console PDF viewer is part of the `fbida` package.
Options include page size, resolution, page selections, and most `fbi` options, with the exceptions listed in `man fbgs`. The main option I use is page size; you get `-l`, `-xl`, and `-xxl` to choose from:
```
+$ fbgs -xl annoyingpdf.pdf
+
+```
+
+Learn more about Linux through the free ["Introduction to Linux"][4] course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2018/1/multimedia-apps-linux-console
+
+作者:[Carla Schroder][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/cschroder
+[1]:https://www.mjmwired.net/kernel/Documentation/fb/fbcon.txt
+[2]:https://www.kraxel.org/blog/linux/fbida/
+[3]:http://www.asty.org/cmatrix/
+[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20180111 The open organization and inner sourcing movements can share knowledge.md b/sources/tech/20180111 The open organization and inner sourcing movements can share knowledge.md
new file mode 100644
index 0000000000..272c1b03ae
--- /dev/null
+++ b/sources/tech/20180111 The open organization and inner sourcing movements can share knowledge.md
@@ -0,0 +1,121 @@
+The open organization and inner sourcing movements can share knowledge
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gov_collaborative_risk.png?itok=we8DKHuL)
+Image by : opensource.com
+
+Red Hat is a company with roughly 11,000 employees. The IT department consists of roughly 500 members. Though it makes up just a fraction of the entire organization, the IT department is still sufficiently staffed to have many application service, infrastructure, and operational teams within it. Our purpose is "to enable Red Hatters in all functions to be effective, productive, innovative, and collaborative, so that they feel they can make a difference,"--and, more specifically, to do that by providing technologies and related services in a fashion that is as open as possible.
+
+Being open like this takes time, attention, and effort. While we always strive to be as open as possible, it can be difficult. For a variety of reasons, we don't always succeed.
+
+In this story, I'll explain a time when, in the rush to innovate, the Red Hat IT organization lost sight of its open ideals. But I'll also explore how returning to those ideals--and using the collaborative tactics of "inner source"--helped us to recover and greatly improve the way we deliver services.
+
+### About inner source
+
+Before I explain how inner source helped our team, let me offer some background on the concept.
+
+Inner source is the adoption of open source development practices between teams within an organization to promote better and faster delivery without requiring project resources be exposed to the world or openly licensed. It allows an organization to receive many of the benefits of open source development methods within its own walls.
+
+In this way, inner source aligns well with open organization strategies and principles; it provides a path for open, collaborative development.
While the open organization defines its principles of openness broadly as transparency, inclusivity, adaptability, collaboration, and community--and covers how to use these open principles for communication, decision making, and many other topics--inner source is about the adoption of specific and tactical practices, processes, and patterns from open source communities to improve delivery.
+
+For instance, [the Open Organization Maturity Model][1] suggests that in order to be transparent, teams should, at minimum, share all project resources with the project team (though it suggests that it's generally better to share these resources with the entire organization). The common pattern in both inner source and open source development is to host all resources in a publicly available version control system, for source control management, which achieves the open organization goal of high transparency.
+
+Another example of value alignment appears in the way open source communities accept contributions. In open source communities, source code is transparently available. Community contributions in the form of patches or merge requests are commonly accepted practices (even expected ones). This provides one example of how to meet the open organization's goal of promoting inclusivity and collaboration.
+
+### The challenge
+
+Early in 2014, Red Hat IT began its first steps toward making Amazon Web Services (AWS) a standard hosting offering for business critical systems. While teams within Red Hat IT had built several systems and services in AWS by this time, these were bespoke creations, and we desired to make deploying services to IT standards in AWS both simple and standardized.
+
+In order to make AWS cloud hosting meet our operational standards (while being scalable), the Cloud Enablement team within Red Hat IT decided that all infrastructure in AWS would be configured through code, rather than manually, and that everyone would use a standard set of tools. The Cloud Enablement team designed and built these standard tools; a separate group, the Platform Operations team, was responsible for provisioning and hosting systems and services in AWS using the tools.
+
+The Cloud Enablement team built a toolset, obtusely named "Template Util," based on AWS Cloud Formations configurations wrapped in a management layer to enforce certain configuration requirements and make stamping out multiple copies of services across environments easier. While the Template Util toolset technically met all our initial requirements, and we eventually provisioned the infrastructure for more than a dozen services with it, engineers in every team working with the tool found using it to be painful. Michael Johnson, one engineer using the tool, said "It made doing something relatively straightforward really complicated."
+
+Among the issues Template Util exhibited were:
+
+ * Underlying cloud formations technologies implied constraints on application stack management at odds with how we managed our application systems.
+ * The tooling was needlessly complex and brittle in places, using multiple layered templating technologies and languages making syntax issues hard to debug.
+ * The code for the tool--and some of the data users needed to manipulate the tool--were kept in a repository that was difficult for most users to access.
+ * There was no standard process for contributing or accepting changes.
+ * The documentation was poor.
+
+
+
+As more engineers attempted to use the Template Util toolset, they found even more issues and limitations with the tools. Unhappiness continued to grow. To make matters worse, the Cloud Enablement team then shifted priorities to other deliverables without relinquishing ownership of the tool, so bug fixes and improvements to the tools were further delayed.
+
+The real, core issue here was our inability to build an inclusive community to collaboratively build shared tooling that met everyone's needs. Fear of losing "ownership," fear of changing requirements, and fear of seeing hard work abandoned all contributed to chronic conflict, which in turn led to poorer outcomes.
+
+### Crisis point
+
+By September 2015, more than a year after launching our first major service in AWS with the Template Util tool, we hit a crisis point.
+
+Many engineers refused to use the tools. That forced all of the related service provisioning work onto a small set of engineers, further fracturing the community and disrupting service delivery roadmaps as these engineers struggled to deal with unexpected work. We called an emergency meeting and invited all the teams involved to find a solution.
+
+During the emergency meeting, we found that people generally thought we needed immediate change and should start the tooling effort over, but even the decision to start over wasn't unanimous. Many solutions emerged--sometimes multiple solutions from within a single team--all of which would require significant work to implement. While we couldn't reach a consensus on which solution to use during this meeting, we did reach an agreement to give proponents of different technologies two weeks to work together, across teams, to build their case with a prototype, which the community could then review.
+
+While we didn't reach a final and definitive decision, this agreement was the first point where we started to return to the open source ideals that guide our mission. By inviting all involved parties, we were able to be transparent and inclusive, and we could begin rebuilding our internal community. By making clear that we wanted to improve things and were open to new options, we showed our commitment to adaptability and meritocracy. Most importantly, the plan for building prototypes gave people a clear return path to collaboration.
+
+When the community reviewed the prototypes, it determined that the clear leader was an Ansible-based toolset that would eventually become known, internally, as Ansicloud. (At the time, no one involved with this work had any idea that Red Hat would acquire Ansible the following month. It should also be noted that other teams within Red Hat have found tools based on Cloud Formation extremely useful, even when our specific Template Util tool did not find success.)
+
+This prototyping and testing phase didn't fix things overnight, though. While we had consensus on the general direction we needed to head, we still needed to improve the new prototype to the point at which engineers could use it reliably for production services.
+
+So over the next several months, a handful of engineers worked to further build and extend the Ansicloud toolset. We built three new production services. While we were sharing code, that sharing activity occurred at a low level of maturity. Some engineers had trouble getting access due to older processes. Other engineers headed in slightly different directions, with each engineer having to rediscover some of the core design issues themselves.
+ +### Returning to openness + +This led to a turning point: Building on top of the previous agreement, we focused on developing a unified vision and providing easier access. To do this, we: + + 1. created a list of specific goals for the project (both "must-haves" and "nice-to-haves"), + 2. created an open issue log for the project to avoid solving the same problem repeatedly, + 3. opened our code base so anyone in Red Hat could read or clone it, and + 4. made it easy for engineers to get trusted committer access + + + +Our agreement to collaborate, our finally unified vision, and our improved tool development methods spurred the growth of our community. Ansicloud adoption spread throughout the involved organizations, but this led to a new problem: The tool started changing more quickly than users could adapt to it, and improvements that different groups submitted were beginning to affect other groups in unanticipated ways. + +These issues resulted in our recent turn to inner source practices. While every open source project operates differently, we focused on adopting some best practices that seemed common to many of them. In particular: + + * We identified the business owner of the project and the core-contributor group of developers who would govern the development of the tools and decide what contributions to accept. While we want to keep things open, we can't have people working against each other or breaking each other's functionality. + * We developed a project README clarifying the purpose of the tool and specifying how to use it. We also created a CONTRIBUTING document explaining how to contribute, what sort of contributions would be useful, and what sort of tests a contribution would need to pass to be accepted. + * We began building continuous integration and testing services for the Ansicloud tool itself. This helped us ensure we could quickly and efficiently validate contributions technically, before the project accepted and merged them. + + + +With these basic agreements, documents, and tools available, we were back onto the path of open collaboration and successful inner sourcing. + +### Why it matters + +Why does inner source matter? + +From a developer community point of view, shifting from a traditional siloed development model to the inner source model has produced significant, quantifiable improvements: + + * Contributions to our tooling have grown 72% per week (by number of commits). + * The percentage of contributions from non-core committers has grown from 27% to 78%; the users of the toolset are driving its development. + * The contributor list has grown by 15%, primarily from new users of the tool set, rather than core committers, increasing our internal community. + + + +And the tools we've delivered through this project have allowed us to see dramatic improvements in our business outcomes. Using the Ansicloud tools, 54 new multi-environment application service deployments were created in 385 days (compared to 20 services in 1,013 days with the Template Util tools). We've gone from one new service deployment in a 50-day period to one every week--a seven-fold increase in the velocity of our delivery. + +What really matters here is that the improvements we saw were not aberrations. Inner source provides common, easily understood patterns that organizations can adopt to effectively promote collaboration (not to mention other open organization principles). 
By mirroring open source production practices, inner source can also mirror the benefits of open source code, which have been seen time and time again: higher quality code, faster development, and more engaged communities. + +This article is part of the [Open Organization Workbook project][2]. + +### about the author +Tom Benninger - Tom Benninger is a Solutions Architect, Systems Engineer, and continual tinkerer at Red Hat, Inc. Having worked with startups, small businesses, and larger enterprises, he has experience within a broad set of IT disciplines. His current area of focus is improving Application Lifecycle Management in the enterprise. He has a particular interest in how open source, inner source, and collaboration can help support modern application development practices and the adoption of DevOps, CI/CD, Agile,... + +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/18/1/open-orgs-and-inner-source-it + +作者:[Tom Benninger][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/tomben +[1]:https://opensource.com/open-organization/resources/open-org-maturity-model +[2]:https://opensource.com/open-organization/17/8/workbook-project-announcement diff --git a/sources/tech/20180112 8 KDE Plasma Tips and Tricks to Improve Your Productivity.md b/sources/tech/20180112 8 KDE Plasma Tips and Tricks to Improve Your Productivity.md new file mode 100644 index 0000000000..66e96549c7 --- /dev/null +++ b/sources/tech/20180112 8 KDE Plasma Tips and Tricks to Improve Your Productivity.md @@ -0,0 +1,96 @@ +8 KDE Plasma Tips and Tricks to Improve Your Productivity +====== + +![](https://www.maketecheasier.com/assets/uploads/2018/01/kde-plasma-desktop-featured.jpg) + +KDE's Plasma is easily one of the most powerful desktop environments available for Linux. It's highly configurable, and it looks pretty good, too. That doesn't amount to a whole lot unless you can actually get things done. + +You can easily configure Plasma and make use of a lot of its convenient and time-saving features to boost your productivity and have a desktop that empowers you, rather than getting in your way. + +These tips aren't in any particular order, so you don't need to prioritize. Pick the ones that best fit your workflow. + + **Related** : [10 of the Best KDE Plasma Applications You Should Try][1] + +### 1. Multimedia Controls + +This isn't so much of a tip as it is something that's good to keep in mind. Plasma keeps multimedia controls everywhere. You don't need to open your media player every time you need to pause, resume, or skip a song; you can mouse over the minimized window or even control it via the lock screen. There's no need to scramble to log in to change a song or because you forgot to pause one. + +### 2. KRunner + +![KDE Plasma KRunner][2] + +KRunner is an often under-appreciated feature of the Plasma desktop. Most people are used to digging through the application launcher menu to find the program that they're looking to launch. That's not necessary with KRunner. + +To use KRunner, make sure that your focus is on the desktop itself. (Click on it instead of a window.) Then, start typing the name of the program that you want. KRunner will automatically drop down from the top of your screen with suggestions. Click or press Enter on the one you're looking for. 
It's much faster than remembering which category your program is under. + +### 3. Jump Lists + +![KDE Plasma Jump Lists][3] + +Jump lists are a fairly recent addition to the Plasma desktop. They allow you to launch an application directly to a specific section or feature. + +So if you have a launcher on a menu bar, you can right-click and get a list of places to jump to. Select where you want to go, and you're off. + +### 4. KDE Connect + +![KDE Connect Menu Android][4] + +[KDE Connect][5] is a massive help if you have an Android phone. It connects the phone to your desktop so you can share things seamlessly between the devices. + +With KDE Connect, you can see your [Android device's notification][6] on your desktop in real time. It also enables you to send and receive text messages from Plasma without ever picking up your phone. + +KDE Connect also lets you send files and share web pages between your phone and your computer. You can easily move from one device to the other without a lot of hassle or losing your train of thought. + +### 5. Plasma Vaults + +![KDE Plasma Vault][7] + +Plasma Vaults are another new addition to the Plasma desktop. They are KDE's simple solution to encrypted files and folders. If you don't work with encrypted files, this one won't really save you any time. If you do, though, vaults are a much simpler approach. + +Plasma Vaults let you create encrypted directories as a regular user without root and manage them from your task bar. You can mount and unmount the directories on the fly without the need for external programs or additional privileges. + +### 6. Pager Widget + +![KDE Plasma Pager][8] + +Configure your desktop with the pager widget. It allows you to easily access three additional workspaces for even more screen room. + +Add the widget to your menu bar, and you can slide between multiple workspaces. These are all the size of your screen, so you gain multiple times the total screen space. That lets you lay out more windows without getting confused by a minimized mess or disorganization. + +### 7. Create a Dock + +![KDE Plasma Dock][9] + +Plasma is known for its flexibility and the room it allows for configuration. Use that to your advantage. If you have programs that you're always using, consider setting up an OS X style dock with your most used applications. You'll be able to get them with a single click rather than going through a menu or typing in their name. + +### 8. Add a File Tree to Dolphin + +![Plasma Dolphin Directory][10] + +It's much easier to navigate folders in a directory tree. Dolphin, Plasma's default file manager, has built-in functionality to display a directory listing in the form of a tree on the side of the folder window. + +To enable the directory tree, click on the "Control" tab, then "Configure Dolphin," "View Modes," and "Details." Finally, select "Expandable Folders." + +Remember that these tips are just tips. Don't try to force yourself to do something that's getting in your way. You may hate using file trees in Dolphin. You may never use Pager. That's alright. There may even be something that you personally like that's not listed here. Do what works for you. That said, at least a few of these should shave some serious time out of your work day. 
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/kde-plasma-tips-tricks-improve-productivity/
+
+作者:[Nick Congleton][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.maketecheasier.com/author/nickcongleton/
+[1]:https://www.maketecheasier.com/10-best-kde-plasma-applications/ (10 of the Best KDE Plasma Applications You Should Try)
+[2]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-krunner.jpg (KDE Plasma KRunner)
+[3]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-jumplist.jpg (KDE Plasma Jump Lists)
+[4]:https://www.maketecheasier.com/assets/uploads/2017/05/kde-connect-menu-e1494899929112.jpg (KDE Connect Menu Android)
+[5]:https://www.maketecheasier.com/send-receive-sms-linux-kde-connect/
+[6]:https://www.maketecheasier.com/android-notifications-ubuntu-kde-connect/
+[7]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-vault.jpg (KDE Plasma Vault)
+[8]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-pager.jpg (KDE Plasma Pager)
+[9]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-dock.jpg (KDE Plasma Dock)
+[10]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-dolphin.jpg (Plasma Dolphin Directory)
diff --git a/sources/tech/20180112 Linux yes Command Tutorial for Beginners (with Examples).md b/sources/tech/20180112 Linux yes Command Tutorial for Beginners (with Examples).md
new file mode 100644
index 0000000000..a4b4ff385c
--- /dev/null
+++ b/sources/tech/20180112 Linux yes Command Tutorial for Beginners (with Examples).md
@@ -0,0 +1,96 @@
+Linux yes Command Tutorial for Beginners (with Examples)
+======
+
+Most of the Linux commands you encounter do not depend on other operations for users to unlock their full potential, but there exists a small subset of command line tools which you could say are useless when used independently, but become a must-have or must-know when used with other command line operations. One such tool is **yes** , and in this tutorial, we will discuss this command with some easy to understand examples.
+
+But before we do that, it's worth mentioning that all examples provided in this tutorial have been tested on Ubuntu 16.04 LTS.
+
+### Linux yes command
+
+The yes command in Linux outputs a string repeatedly until killed. Following is the syntax of the command:
+
+```
+yes [STRING]...
+yes OPTION
+```
+
+And here's what the man page says about this tool:
+```
+Repeatedly output a line with all specified STRING(s), or 'y'.
+```
+
+The following Q&A-type examples should give you a better idea about the usage of yes.
+
+### Q1. How does the yes command work?
+
+As the man page says, the yes command produces continuous output - 'y' by default, or any other string if specified by the user. Here's a screenshot that shows the yes command in action:
+
+[![How yes command works][1]][2]
+
+I could only capture the last part of the output as the output frequency was so fast, but the screenshot should give you a good idea about what kind of output the tool produces.
+
+You can also provide a custom string for the yes command to use in output. For example:
+
+```
+yes HTF
+```
+
+[![Repeat word with yes command][3]][4]
+
+### Q2. Where does the yes command help the user?
+
+That's a valid question. Reason being, from what yes does, it's difficult to imagine the usefulness of the tool.
But you'll be surprised to know that yes can not only save you time, but also automate some mundane tasks.
+
+For example, consider the following scenario:
+
+[![Where yes command helps the user][5]][6]
+
+You can see that the user has to type 'y' for each query. It's in situations like these that yes can help. For the above scenario specifically, you can use yes in the following way:
+
+```
+yes | rm -ri test
+```
+
+[![yes command in action][7]][8]
+
+So the command made sure the user doesn't have to type 'y' each time rm asked for it. Of course, one would argue that we could have simply removed the '-i' option from the rm command. That's right, I took this example as it's simple enough to make people understand the situations in which yes can be helpful.
+
+Another - and probably more relevant - scenario would be when you're using the fsck command, and don't want to enter 'y' each time the system asks your permission before fixing errors.
+
+### Q3. Is there any use of yes when it's used alone?
+
+Yes, there's at least one use: to tell how well a computer system handles a high amount of load. Reason being, the tool utilizes 100% of the processor on systems that have a single processor. In case you want to apply this test on a system with multiple processors, you need to run a yes process for each processor.
+
+### Q4. What command line options does yes offer?
+
+The tool only offers generic command line options: --help and --version. As the names suggest, the former displays help information related to the command, while the latter one outputs version related information.
+
+[![What command line options yes offers][9]][10]
+
+### Conclusion
+
+So now you'd agree that there could be several scenarios where the yes command would be of help. There are no command line options unique to yes, so effectively, there's no learning curve associated with the tool. Just in case you need it, here's the command's [man page][11].
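+
+As a footnote to the load-testing idea from Q3, here's one hedged sketch for spinning up one `yes` process per processor (it assumes GNU coreutils' `nproc` is available to count CPUs):
+```
+$ for i in $(seq $(nproc)); do yes > /dev/null & done   # one busy loop per CPU
+$ jobs          # list the background yes processes
+$ killall yes   # stop the test when you're done
+```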
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-yes-command/
+
+作者:[Himanshu Arora][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/command-tutorial/yes-def-output.png
+[2]:https://www.howtoforge.com/images/command-tutorial/big/yes-def-output.png
+[3]:https://www.howtoforge.com/images/command-tutorial/yes-custom-string.png
+[4]:https://www.howtoforge.com/images/command-tutorial/big/yes-custom-string.png
+[5]:https://www.howtoforge.com/images/command-tutorial/rm-ri-output.png
+[6]:https://www.howtoforge.com/images/command-tutorial/big/rm-ri-output.png
+[7]:https://www.howtoforge.com/images/command-tutorial/yes-in-action.png
+[8]:https://www.howtoforge.com/images/command-tutorial/big/yes-in-action.png
+[9]:https://www.howtoforge.com/images/command-tutorial/yes-help-version1.png
+[10]:https://www.howtoforge.com/images/command-tutorial/big/yes-help-version1.png
+[11]:https://linux.die.net/man/1/yes
diff --git a/sources/tech/20180114 Playing Quake 4 on Linux in 2018.md b/sources/tech/20180114 Playing Quake 4 on Linux in 2018.md
new file mode 100644
index 0000000000..26dd305a4a
--- /dev/null
+++ b/sources/tech/20180114 Playing Quake 4 on Linux in 2018.md
@@ -0,0 +1,80 @@
+Playing Quake 4 on Linux in 2018
+======
+A few months back [I wrote an article][1] outlining the various options Linux users now have for playing Doom 3, as well as stating which of the three contenders I felt to be the best option in 2017. Having already gone to the trouble of getting the original Doom 3 binary working on my modern Arch Linux system, it made me wonder just how much effort it would take to get the closed source Quake 4 port up and running again as well.
+
+### Getting it running
+
+[![][2]][3] [![][4]][5]
+
+Quake 4 was ported to Linux by Timothee Besset in 2005, although the binaries themselves were later taken down along with the rest of the id Software FTP server by ZeniMax. The original [Linux FAQ page][6] is still online though, and mirrors hosting the Linux installer still exist, such as [this one][7] run by the fan website [Quaddicted][8]. Once downloaded, this will give you a graphical installer which will install the game binary without any of the game assets.
+
+These will need to be taken from either the game discs of a retail Windows version, as I did, or from an already installed Windows version of the game such as from [Steam][9]. Follow the steps in the Linux FAQ to the letter for best results. Please note that the [GOG.com][10] release of Quake 4 is unique in not supplying a valid CD key, something which is still required for the Linux port to launch. There are [ways to get around this][11], but we only condone these methods for legitimate purchasers.
+
+Like with Doom 3, I had to remove the libgcc_s.so.1, libSDL-1.2.id.so.0, and libstdc++.so.6 libraries that the game came with in the install directory in order to get it to run. I also ran into the same sound issue I had with Doom 3, meaning I had to modify the Quake4Config.cfg file located in the hidden ~/.quake4/q4base directory in the same fashion as before. However, this time I ran into a whole host of other issues that made me have to modify the configuration file as well.
+
+First off, the language the game wanted to use would always default to Spanish, meaning I had to manually tell the game to use English instead. I also ran into a known issue on all platforms wherein the game would not properly recognize the available VRAM on modern graphics cards, and as such would force the game to use lower image quality settings. Quake 4 will also not render see-through surfaces unless anti-aliasing is enabled, although going beyond 8x caused the game not to load for me.
+
+Appending the following to the end of the Quake4Config.cfg file resolved all of my issues:
+
+```
+seta image_downSize "0"
+seta image_downSizeBump "0"
+seta image_downSizeSpecular "0"
+seta image_filter "GL_LINEAR_MIPMAP_LINEAR"
+seta image_ignoreHighQuality "0"
+seta image_roundDown "0"
+seta image_useCompression "0"
+seta image_useNormalCompression "0"
+seta image_anisotropy "16"
+seta image_lodbias "0"
+seta r_renderer "best"
+seta r_multiSamples "8"
+seta sys_lang "english"
+seta s_alsa_pcm "hw:0,0"
+seta com_allowConsole "1"
+```
+
+Please note that this will also set the game to use 8x anti-aliasing and restore the drop down console to how it worked in all of the previous Quake games. Similar to the Linux port of Doom 3, the Linux version of Quake 4 also does not support Creative EAX ADVANCED HD audio technology. Unlike Doom 3, though, Quake 4 does seem to also feature an alternate method for surround sound, and widescreen support was thankfully patched into the game soon after its release.
+
+### Playing the game
+
+[![][12]][13] [![][14]][15]
+
+Over the years Quake 4 has gained something of a reputation as the black sheep of the Quake family, with many people complaining that the game's vehicle sections, squad mechanics, and general aesthetic made it feel too close to contemporary military shooters of the time. In its heart of hearts, though, it really does feel like a concerted sequel to Quake II, with some of developer Raven Software's own Star Trek: Voyager - Elite Force title thrown in for good measure.
+
+To me at least, Quake 4 does stand as being one of the "Last of the Romans" in terms of being a first person shooter that embraced classic design ideals at a time when similar titles were not getting the support of major publishers. Most of the game still features the player moving between levels featuring fixed enemy placements, a wide variety of available weapons, traditional health packs, and an array of enemies each sporting unique attributes and skills.
+
+Quake 4 also offers a well made campaign that I found myself going back to on a higher skill level not long after I had already finished my first try at the game. Certain aspects like the vehicle sections do indeed drag the game down a bit, and the multiplayer aspect pales in comparison to its predecessor Quake III Arena, but overall I am quite pleased with what Raven Software was able to accomplish with the Doom 3 engine, especially when so few others tried.
+
+### Final thoughts
+
+If anyone ever needed a reason to be reminded of the value of video game source code releases, this is it. Most of the problems I encountered could have been easily sidestepped if Quake 4 source ports were available, but with the likes of John Carmack and Timothee Besset gone from id Software and the current climate at ZeniMax not looking too promising, it is doubtful that any such creations will ever materialize. Doom 3 source ports look to be the end of the road.
+ +Instead we are stuck using this cranky 32 bit binary with an obstructive CD Key check and a graphics system that freaks out at the sight of any modern video card sporting more than 512 MB of VRAM. The game itself has aged well, with graphics that still look great and dynamic lighting that is better than what is included with many modern titles. It is just a shame that it is now such a pain to get running, not just on Linux, but on any platform. + +-------------------------------------------------------------------------------- + +via: https://www.gamingonlinux.com/articles/playing-quake-4-on-linux-in-2018.11017 + +作者:[Hamish][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.gamingonlinux.com/profiles/6 +[1]:https://www.gamingonlinux.com/articles/playing-doom-3-on-linux-in-2017.10561 +[2]:https://www.gamingonlinux.com/uploads/articles/article_images/thumbs/20458196191515697921gol6.jpg +[3]:https://www.gamingonlinux.com/uploads/articles/article_images/20458196191515697921gol6.jpg +[4]:https://www.gamingonlinux.com/uploads/articles/article_images/thumbs/9405540721515697921gol6.jpg +[5]:https://www.gamingonlinux.com/uploads/articles/article_images/9405540721515697921gol6.jpg +[6]:http://zerowing.idsoftware.com/linux/quake4/Quake4FrontPage/ +[7]:https://www.quaddicted.com/files/idgames2/idstuff/quake4/linux/ +[8]:https://www.quaddicted.com/ +[9]:http://store.steampowered.com/app/2210/Quake_IV/ +[10]:https://www.gog.com/game/quake_4 +[11]:https://www.gog.com/forum/quake_series/quake_4_on_linux_no_cd_key/post31 +[12]:https://www.gamingonlinux.com/uploads/articles/article_images/thumbs/5043571471515951537gol6.jpg +[13]:https://www.gamingonlinux.com/uploads/articles/article_images/5043571471515951537gol6.jpg +[14]:https://www.gamingonlinux.com/uploads/articles/article_images/thumbs/6922853731515697921gol6.jpg +[15]:https://www.gamingonlinux.com/uploads/articles/article_images/6922853731515697921gol6.jpg diff --git a/sources/tech/20180114 What a GNU C Compiler Bug looks like.md b/sources/tech/20180114 What a GNU C Compiler Bug looks like.md new file mode 100644 index 0000000000..3b95d4089b --- /dev/null +++ b/sources/tech/20180114 What a GNU C Compiler Bug looks like.md @@ -0,0 +1,77 @@ +What a GNU C Compiler Bug looks like +====== +Back in December a Linux Mint user sent a [strange bug report][1] to the darktable mailing list. 
Apparently the GNU C Compiler (GCC) on his system exited with the following error message, breaking the build process:
+```
+cc1: error: unrecognized command line option '-Wno-format-truncation' [-Werror]
+cc1: all warnings being treated as errors
+src/iop/CMakeFiles/colortransfer.dir/build.make:67: recipe for target 'src/iop/CMakeFiles/colortransfer.dir/introspection_colortransfer.c.o' failed
+make[2]: *** [src/iop/CMakeFiles/colortransfer.dir/introspection_colortransfer.c.o] Error 1
+CMakeFiles/Makefile2:6323: recipe for target 'src/iop/CMakeFiles/colortransfer.dir/all' failed
+
+make[1]: *** [src/iop/CMakeFiles/colortransfer.dir/all] Error 2
+
+```
+
+`-Wno-format-truncation` is a rather new GCC feature which instructs the compiler to issue a warning if it can already deduce at compile time that calls to formatted I/O functions like `snprintf()` or `vsnprintf()` might result in truncated output.
+
+That's definitely neat, but Linux Mint 18.3 (just like Ubuntu 16.04 LTS) uses GCC 5.4.0, which doesn't support this feature. And darktable relies on a chain of CMake macros to make sure it doesn't use any flags the compiler doesn't know about:
+```
+CHECK_COMPILER_FLAG_AND_ENABLE_IT(-Wno-format-truncation)
+
+```
+
+So why did this even happen? I logged into one of my Ubuntu 16.04 installations and tried to reproduce the problem. Which wasn't hard, I just had to check out the git tree in question and build it. Boom, same error.
+
+### The solution
+
+It turns out that while `-Wformat-truncation` isn't a valid option for GCC 5.4.0 (it's not documented), this version silently accepts the negation under some circumstances (!):
+```
+
+sturmflut@hogsmeade:/tmp$ gcc -Wformat-truncation -o test test.c
+gcc: error: unrecognized command line option '-Wformat-truncation'
+sturmflut@hogsmeade:/tmp$ gcc -Wno-format-truncation -o test test.c
+sturmflut@hogsmeade:/tmp$
+
+```
+
+(test.c just contains an empty main() function).
+
+Because darktable uses `CHECK_COMPILER_FLAG_AND_ENABLE_IT(-Wno-format-truncation)`, it is fooled into thinking this compiler version actually supports `-Wno-format-truncation` at all times. The simple test case used by the CMake macro doesn't fail, but the compiler later decides to no longer silently accept the invalid command line option for some reason.
+
+One of the cases which triggered this was when the source file under compilation had already generated some other warnings before. If I forced a serialized build using `make -j1` on a clean darktable checkout on this machine, `./src/iop/colortransfer.c` actually was the first file which caused any
+compiler warnings at all, so this is why the process failed exactly there.
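+
+For illustration, the kind of probe that gets fooled here follows the same pattern as CMake's stock `CheckCCompilerFlag` module. The following is a sketch of that general pattern, not darktable's actual macro:
+```
+include(CheckCCompilerFlag)
+
+# The probe compiles a trivial test program with the flag. GCC 5.4.0
+# accepts -Wno-format-truncation silently as long as no other warning
+# fires, so the check "succeeds" even though the option is unsupported.
+check_c_compiler_flag("-Wno-format-truncation" HAVE_NO_FORMAT_TRUNCATION)
+if(HAVE_NO_FORMAT_TRUNCATION)
+  add_compile_options(-Wno-format-truncation)
+endif()
+```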
+
+The minimum test case to trigger this behavior in GCC 5.4.0 is a C file with a `main()` function with a parameter which has the wrong type, like this one:
+```
+
+int main(int argc, int argv)
+{
+}
+
+```
+
+Then add `-Wall` to make sure the compiler will treat this as a warning, and it fails:
+```
+
+sturmflut@hogsmeade:/tmp$ gcc -Wall -Wno-format-truncation -o test test.c
+test.c:1:5: warning: second argument of 'main' should be 'char **' [-Wmain]
+ int main(int argc, int argv)
+     ^
+cc1: warning: unrecognized command line option '-Wno-format-truncation'
+
+```
+
+If you omit `-Wall`, the compiler will not generate the first warning and also not complain about `-Wno-format-truncation`.
+
+I've never run into this before, but I guess Ubuntu 16.04 is going to stay with us for a while since it is the current LTS release until May 2018, and even after that it will still be supported until 2021. So this buggy GCC version will most likely also stay alive for quite a while. Which is why the check for this flag has been removed from the darktable build system.
+
+--------------------------------------------------------------------------------
+
+via: http://www.lieberbiber.de/2018/01/14/what-a-gnu-compiler-bug-looks-like/
+
+作者:[sturmflut][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.lieberbiber.de/author/sturmflut/
+[1]:https://www.mail-archive.com/darktable-dev@lists.darktable.org/msg02760.html
diff --git a/sources/tech/20180115 2 scientific calculators for the Linux desktop.md b/sources/tech/20180115 2 scientific calculators for the Linux desktop.md
new file mode 100644
index 0000000000..f91450b383
--- /dev/null
+++ b/sources/tech/20180115 2 scientific calculators for the Linux desktop.md
@@ -0,0 +1,111 @@
+2 scientific calculators for the Linux desktop
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OpenData_CityNumbers.png?itok=lC03ce76)
+
+Image by : opensource.com
+
+Every Linux desktop environment comes with at least a simple desktop calculator, but most of those simple calculators are just that: a simple tool for simple calculations.
+
+Fortunately, there are exceptions; programs that go far beyond square roots and a couple of trigonometric functions, yet are still easy to use. Here are two powerful calculator tools for Linux, plus a couple of bonus options.
+
+### SpeedCrunch
+
+[SpeedCrunch][1] is a high-precision scientific calculator with a simple Qt5 graphical interface and a strong focus on the keyboard.
+
+![SpeedCrunch graphical interface][3]
+
+
+SpeedCrunch at work
+
+It supports working with units and comes loaded with all kinds of functions.
+
+For example, by writing:
+`2 * 10^6 newton / (meter^2)`
+
+you get:
+`= 2000000 pascal`
+
+By default, SpeedCrunch delivers its results in the international unit system, but units can be transformed with the "in" instruction.
+
+For example:
+`3*10^8 meter / second in kilo meter / hour`
+
+produces:
+`= 1080000000 kilo meter / hour`
+
+With the `F5` key, all results will turn into scientific notation (`1.08e9 kilo meter / hour`), while with `F2` only numbers that are small enough or big enough will change. More options are available on the Configuration menu.
+
+The list of available functions is really impressive. It works on Linux, Windows, and MacOS, and it's licensed under GPLv2; you can access its source code on [Bitbucket][4].
+
+### Qalculate!
+
+[Qalculate!][5] (with the exclamation point) has a long and complex history.
+
+The project offers a powerful library that can be used by other programs (the Plasma desktop can use it to perform calculations from krunner) and a graphical interface built on GTK3. It allows you to work with units, handle physical constants, create graphics, use complex numbers, matrices, and vectors, choose arbitrary precision, and more.
+
+
+![Qalculate! Interface][7]
+
+
+Looking for some physical constants on Qalculate!
+
+Its use of units is far more intuitive than SpeedCrunch's and it understands common prefixes without problem. Have you heard of an exapascal pressure? I hadn't (the Sun's core stops at `~26 PPa`), but Qalculate! has no problem understanding the meaning of `1 EPa`. Also, Qalculate! is more flexible with syntax errors, so you don't need to worry about closing all those parentheses: if there is no ambiguity, Qalculate! will give you the right answer.
+
+After a long period in which the project seemed orphaned, it came back to life in 2016 and has been going strong since, with more than 10 versions in just one year. It's licensed under GPLv2 (with source code on [GitHub][8]) and offers versions for Linux and Windows, as well as a MacOS port.
+
+### Bonus calculators
+
+#### ConvertAll
+
+OK, it's not a "calculator," yet this simple application is incredibly useful.
+
+Most unit converters stop at a long list of basic units and a bunch of common combinations, but not [ConvertAll][9]. Trying to convert from astronomical units per year into inches per second? It doesn't matter if it makes sense or not, if you need to transform a unit of any kind, ConvertAll is the tool for you.
+
+Just write the starting unit and the final unit in the corresponding boxes; if the units are compatible, you'll get the transformation without protest.
+
+The main application is written in PyQt5, but there is also an [online version written in JavaScript][10].
+
+#### (wx)Maxima with the units package
+
+Sometimes (OK, many times) a desktop calculator is not enough and you need more raw power.
+
+[Maxima][11] is a computer algebra system (CAS) with which you can do derivatives, integrals, series, equations, eigenvectors and eigenvalues, Taylor series, Laplace and Fourier transformations, as well as numerical calculations with arbitrary precision, graph in two and three dimensions… we could fill several pages just listing its capabilities.
+
+[wxMaxima][12] is a well-designed graphical frontend for Maxima that simplifies the use of many Maxima options without compromising others. On top of the full power of Maxima, wxMaxima allows you to create "notebooks" on which you write comments, keep your graphics with your math, etc. One of the (wx)Maxima combo's most impressive features is that it works with dimension units.
+
+On the prompt, just type:
+`load("unit")`
+
+press Shift+Enter, wait a few seconds, and you'll be ready to work.
+
+By default, the unit package works with the basic MKS units, but if you prefer, for instance, to get `N` instead of `kg*m/s^2`, you just need to type:
+`setunits(N)`
+
+Maxima's help (which is also available from wxMaxima's help menu) will give you more information.
+
+Do you use these programs? Do you know another great desktop calculator for scientists and engineers or another related tool? Tell us about them in the comments!
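+
+As a parting sketch of that units workflow, here is roughly what a session might look like. The prompt labels and the exact formatting of the echoed output vary between Maxima versions, so treat this as an approximation rather than verbatim output:
+```
+(%i1) load("unit")$     /* load the unit package; takes a few seconds */
+(%i2) setunits(N)$      /* report force results in newtons */
+(%i3) 700*kg*m/(s*s);   /* kg*m/s^2 simplifies to N */
+(%o3)                        700 N
+```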
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/scientific-calculators-linux
+
+作者:[Ricardo Berlasso][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/rgb-es
+[1]:http://speedcrunch.org/index.html
+[2]:/file/382511
+[3]:https://opensource.com/sites/default/files/u128651/speedcrunch.png (SpeedCrunch graphical interface)
+[4]:https://bitbucket.org/heldercorreia/speedcrunch
+[5]:https://qalculate.github.io/
+[6]:/file/382506
+[7]:https://opensource.com/sites/default/files/u128651/qalculate-600.png (Qalculate! Interface)
+[8]:https://github.com/Qalculate
+[9]:http://convertall.bellz.org/
+[10]:http://convertall.bellz.org/js/
+[11]:http://maxima.sourceforge.net/
+[12]:https://andrejv.github.io/wxmaxima/
diff --git a/sources/tech/20180115 How To Boot Into Linux Command Line.md b/sources/tech/20180115 How To Boot Into Linux Command Line.md
new file mode 100644
index 0000000000..00649cc678
--- /dev/null
+++ b/sources/tech/20180115 How To Boot Into Linux Command Line.md
@@ -0,0 +1,63 @@
+translating---geekpi
+
+How To Boot Into Linux Command Line
+====== 
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/how-to-boot-into-linux-command-line_orig.jpg)
+
+There may be times when you need or want to boot up a [Linux][1] system without using a GUI, that is, with no X, opting instead for the command line. Whatever the reason, fortunately, booting straight into the Linux **command-line** is very simple. It requires a simple change to the boot parameters after the other kernel options. This change specifies the runlevel to boot the system into.
+
+### Why Do This?
+
+If your system does not run Xorg because the configuration is invalid, or if the display manager is broken, or whatever may prevent the GUI from starting properly, booting into the command-line will allow you to troubleshoot by logging into a terminal (assuming you know what you're doing to start with) and do whatever you need to do. Booting into the command-line is also a great way to become more familiar with the terminal; otherwise, you can do it just for fun.
+
+### Accessing GRUB Menu
+
+On startup, you will need access to the GRUB boot menu. You may need to hold the SHIFT key down before the system boots if the menu isn't set to display every time the computer is started. In the menu, the [Linux distribution][2] entry must be selected. Once highlighted, press 'e' to edit the boot parameters.
+
+ [![zorin os grub menu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnu-grub_orig.png)][3]
+
+ Older GRUB versions follow a similar mechanism. The boot manager should provide instructions on how to edit the boot parameters.
+
+### Specify the Runlevel
+
+An editor will appear and you will see the options that GRUB passes to the kernel. Navigate to the line that starts with 'linux' (older GRUB versions may be 'kernel'; select that and follow the instructions). This specifies the parameters passed to the kernel. At the end of that line (it may appear to span multiple lines, depending on resolution), you simply specify the runlevel to boot into, which is 3 (multi-user mode, text-only).
+
+ [![customize grub menu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_orig.png)][4]
+
+Pressing Ctrl-X or F10 will boot the system using those parameters. Boot-up will continue as normal.
The only thing that has changed is the runlevel to boot into.
+
+This is what was started up:
+
+ [![boot linux in command line](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_1_orig.png)][5]
+
+### Runlevels
+
+You can specify different runlevels to boot into, with runlevel 5 being the default. Runlevel 1 boots into "single-user" mode, which drops you into a root shell. Runlevel 3 provides a multi-user, command-line-only system.
+
+### Switch From the Command Line
+
+At some point, you may want to run the display manager again to use a GUI. The quickest way to do that is to run this:
+```
+$ sudo init 5
+```
+
+And it is as simple as that. Personally, I find the command line much more exciting and hands-on than using GUI tools; however, that's just my preference.
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxandubuntu.com/home/how-to-boot-into-linux-command-line

作者:[LinuxAndUbuntu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxandubuntu.com
[1]:http://www.linuxandubuntu.com/home/category/linux
[2]:http://www.linuxandubuntu.com/home/category/distros
[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnu-grub_orig.png
[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_orig.png
[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_1_orig.png
diff --git a/sources/tech/20180115 How debuggers really work.md b/sources/tech/20180115 How debuggers really work.md
new file mode 100644
index 0000000000..452bc67823
--- /dev/null
+++ b/sources/tech/20180115 How debuggers really work.md
@@ -0,0 +1,99 @@
+How debuggers really work
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/annoyingbugs.png?itok=ywFZ99Gs)
+
+Image by : opensource.com
+
+A debugger is one of those pieces of software that most, if not every, developer uses at least once during their software engineering career, but how many of you know how they actually work? During my talk at [linux.conf.au 2018][1] in Sydney, I will be talking about writing a debugger from scratch... in [Rust][2]!
+
+In this article, the terms debugger and tracer are used interchangeably. "Tracee" refers to the process being traced by the tracer.
+
+### The ptrace system call
+
+Most debuggers rely heavily on a system call known as `ptrace(2)`, which has the prototype:
+```
+long ptrace(enum __ptrace_request request, pid_t pid, void *addr, void *data);
+```
+
+This is a system call that can manipulate almost all aspects of a process; however, before the debugger can attach to a process, the "tracee" has to call `ptrace` with the request `PTRACE_TRACEME`. This tells Linux that it is legitimate for the parent to attach to this process via `ptrace`. But... how do we coerce a process into calling `ptrace`? Easy-peasy! `fork/execve` provides an easy way of calling `ptrace` after `fork` but before the tracee really starts using `execve`. Conveniently, `fork` will also return the `pid` of the tracee, which is required for using `ptrace` later.
+
+Now that the tracee can be traced by the debugger, important changes take place:
+
+ * Every time a signal is delivered to the tracee, it stops and a wait-event is delivered to the tracer that can be captured by the `wait` family of system calls.
+ * Each `execve` system call will cause a `SIGTRAP` to be delivered to the tracee.
(Combined with the previous item, this means the tracee is stopped before an `execve` can fully take place.) + + + +This means that, once we issue the `PTRACE_TRACEME` request and call the `execve` system call to actually start the program in the tracee, the tracee will immediately stop, since `execve` delivers a `SIGTRAP`, and that is caught by a wait-event in the tracer. How do we continue? As one would expect, `ptrace` has a number of requests that can be used for telling the tracee it's fine to continue: + + * `PTRACE_CONT`: This is the simplest. The tracee runs until it receives a signal, at which point a wait-event is delivered to the tracer. This is most commonly used to implement "continue-until-breakpoint" and "continue-forever" options of a real-world debugger. Breakpoints will be covered below. + * `PTRACE_SYSCALL`: Very similar to `PTRACE_CONT`, but stops before a system call is entered and also before a system call returns to userspace. It can be used in combination with other requests (which we will cover later in this article) to monitor and modify a system call's arguments or return value. `strace`, the system call tracer, uses this request heavily to figure out what system calls are made by a process. + * `PTRACE_SINGLESTEP`: This one is pretty self-explanatory. If you used a debugger before, this request executes the next instruction, but stops immediately after. + + + +We can stop the process with a variety of requests, but how do we get the state of the tracee? The state of a process is mostly captured by its registers, so of course `ptrace` has a request to get (or modify!) the registers: + + * `PTRACE_GETREGS`: This request will give the registers' state as it was when a tracee was stopped. + * `PTRACE_SETREGS`: If the tracer has the values of registers from a previous call to `PTRACE_GETREGS`, it can modify the values in that structure and set the registers to the new values via this request. + * `PTRACE_PEEKUSER` and `PTRACE_POKEUSER`: These allow reading from the tracee's `USER` area, which holds the registers and other useful information. This can be used to modify a single register, without the more heavyweight `PTRACE_{GET,SET}REGS`. + + + +Modifying the registers isn't always sufficient in a debugger. A debugger will sometimes need to read some parts of the memory or even modify it. The GNU Project Debugger (GDB) can use `print` to get the value of a memory location or a variable. `ptrace` has the functionality to implement this: + + * `PTRACE_PEEKTEXT` and `PTRACE_POKETEXT`: These allow reading and writing a word in the address space of the tracee. Of course, the tracee has to be stopped for this to work. + + + +Real-world debuggers also have features like breakpoints and watchpoints. In the next section, I'll dive into the architectural details of debugging support. For the purposes of clarity and conciseness, this article will consider x86 only. + +### Architectural support + +`ptrace` is all cool, but how does it work? In the previous section, we've seen that `ptrace` has quite a bit to do with signals: `SIGTRAP` can be delivered during single-stepping, before `execve` and before or after system calls. Signals can be generated a number of ways, but we will look at two specific examples that can be used by debuggers to stop a program (effectively creating a breakpoint!) at a given location: + + * **Undefined instructions:** When a process tries to execute an undefined instruction, an exception is raised by the CPU. 
This exception is handled via a CPU interrupt, and a handler corresponding to the interrupt in the kernel is called. This will result in a `SIGILL` being sent to the process. This, in turn, causes the process to stop, and the tracer is notified via a wait-event. It can then decide what to do. On x86, the `ud2` instruction is guaranteed to always be undefined.
+
+ * **Debugging interrupt:** The problem with the previous approach is that the `ud2` instruction takes two bytes of machine code. A special instruction exists that takes one byte and raises an interrupt. It's `int $3` and its machine code is `0xCC`. When this interrupt is raised, the kernel sends a `SIGTRAP` to the process and, just as before, the tracer is notified.
+
+This is fine, but how do we coerce the tracee to execute these instructions? Easy: `ptrace` has `PTRACE_POKETEXT`, which can overwrite a word at a memory location. A debugger would read the original word at the location using `PTRACE_PEEKTEXT` and replace it with `0xCC`, remembering the original byte and the fact that it is a breakpoint in its internal state. The next time the tracee executes at that location, it is automatically stopped by virtue of a `SIGTRAP`. The debugger's end user can then decide how to continue (for instance, inspect the registers).
+
+Okay, we've covered breakpoints, but what about watchpoints? How does a debugger stop a program when a certain memory location is read or written? Surely you wouldn't just overwrite every instruction that could read or write some memory location with `int $3`. Meet the debug registers, a set of registers designed to fulfill this goal more efficiently:
+
+ * `DR0` to `DR3`: Each of these registers contains an address (a memory location) where the debugger wants the tracee to stop for some reason. The reason is specified as a bitmask in `DR7`.
+ * `DR4` and `DR5`: These are obsolete aliases of `DR6` and `DR7`, respectively.
+ * `DR6`: Debug status. Contains information about which of `DR0` to `DR3` caused the debugging exception to be raised. This is used by Linux to figure out the information passed along with the `SIGTRAP` to the tracee.
+ * `DR7`: Debug control. Using the bits in this register, the debugger can control how the addresses specified in `DR0` to `DR3` are interpreted. A bitmask controls the size of the watchpoint (whether 1, 2, 4, or 8 bytes are monitored) and whether to raise an exception on execution, on writing, or on either reading or writing.
+
+Because the debug registers form part of the `USER` area of a process, the debugger can use `PTRACE_POKEUSER` to write values into them. The debug registers are only relevant to a specific process and are thus restored to the values saved at preemption before the process regains control of the CPU.
+
+### Tip of the iceberg
+
+We've only glanced at the tip of the iceberg that is a debugger: we covered `ptrace`, went over some of its functionality, and then looked at how `ptrace` is implemented. Some parts of `ptrace` can be implemented in software, but other parts have to be implemented in hardware, otherwise they'd be very expensive or even impossible.
+
+There's plenty that we didn't cover, of course. Questions like "how does a debugger know where a variable is in memory?" remain open due to space and time constraints, but I hope you've learned something from this article; if it piqued your interest, there are plenty of resources available online to learn more.
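+
+To tie these pieces together, here is a minimal sketch (my own, not from the talk) of the flow described above: fork, `PTRACE_TRACEME`, wait for the `execve` stop, plant an `int $3` with `PTRACE_PEEKTEXT`/`PTRACE_POKETEXT`, continue, and read the registers at the stop. The breakpoint address is a made-up placeholder (a real debugger would look it up in the binary's symbol table), and error handling is omitted:
+```
+#include <errno.h>
+#include <signal.h>
+#include <stdio.h>
+#include <sys/ptrace.h>
+#include <sys/types.h>
+#include <sys/user.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+int main(void)
+{
+    pid_t pid = fork();
+    if (pid == 0) {
+        /* Tracee: opt in to tracing, then exec; execve delivers a SIGTRAP. */
+        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
+        execl("/bin/ls", "ls", (char *)NULL);
+        _exit(1);
+    }
+
+    int status;
+    waitpid(pid, &status, 0);              /* catch the stop caused by execve */
+
+    unsigned long addr = 0x400000;         /* placeholder breakpoint address */
+
+    /* Save the original word and patch its low byte to 0xCC (int $3). */
+    errno = 0;
+    long orig = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
+    ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)((orig & ~0xFFL) | 0xCC));
+
+    ptrace(PTRACE_CONT, pid, NULL, NULL);  /* run until a signal or exit */
+    waitpid(pid, &status, 0);
+
+    if (WIFSTOPPED(status) && WSTOPSIG(status) == SIGTRAP) {
+        struct user_regs_struct regs;
+        ptrace(PTRACE_GETREGS, pid, NULL, &regs);
+        printf("breakpoint hit, rip=0x%llx\n", (unsigned long long)regs.rip);
+        /* A real debugger would now restore the original word, rewind rip
+           by one byte, single-step, and re-plant the breakpoint. */
+    }
+    kill(pid, SIGKILL);                    /* clean up the tracee */
+    return 0;
+}
+```
+
+Because the placeholder address is unlikely to be mapped in a modern position-independent `/bin/ls`, the poke may simply fail and the tracee will run to completion; the point of the sketch is the request sequence, not the address.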
+
+For more, attend Levente Kurusa's talk, [Let's Write a Debugger!][3], at [linux.conf.au][1], which will be held January 22-26 in Sydney.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/how-debuggers-really-work

作者:[Levente Kurusa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/lkurusa
[1]:https://linux.conf.au/index.html
[2]:https://www.rust-lang.org
[3]:https://rego.linux.conf.au/schedule/presentation/91/
diff --git a/sources/tech/20180116 A Versatile Free Software for Partition Imaging and Cloning.md b/sources/tech/20180116 A Versatile Free Software for Partition Imaging and Cloning.md
new file mode 100644
index 0000000000..d5cf47b45e
--- /dev/null
+++ b/sources/tech/20180116 A Versatile Free Software for Partition Imaging and Cloning.md
@@ -0,0 +1,97 @@
+Partclone – A Versatile Free Software for Partition Imaging and Cloning
+======
+
+![](https://www.fossmint.com/wp-content/uploads/2018/01/Partclone-Backup-Tool-For-Linux.png)
+
+**[Partclone][1]** is a free and open-source tool for creating and cloning partition images, brought to you by the developers of **Clonezilla**. In fact, **Partclone** is one of the tools that **Clonezilla** is based on.
+
+It provides users with the tools required to back up and restore used partition blocks, along with high compatibility with several file systems, thanks to its ability to use existing libraries like **e2fslibs** to read and write partitions, e.g. **ext2**.
+
+Its greatest strength is the variety of file systems it supports, including ext2, ext3, ext4, hfs+, reiserfs, reiser4, btrfs, vmfs3, vmfs5, xfs, jfs, ufs, ntfs, fat(12/16/32), exfat, f2fs, and nilfs.
+
+It also has a plethora of available programs, including **partclone.ext2** (ext3 & ext4), partclone.ntfs, partclone.exfat, partclone.hfsp, and partclone.vmfs (v3 and v5), among others.
+
+### Features in Partclone
+
+ * **Freeware:** **Partclone** is free for everyone to download and use.
+ * **Open Source:** **Partclone** is released under the GNU GPL license and is open to contribution on [GitHub][2].
+ * **Cross-Platform:** Available on Linux, Windows, and macOS, with backup/restore support for the ESX and FreeBSD file systems.
+ * An online [Documentation page][3] where you can view help docs and track its GitHub issues.
+ * An online [user manual][4] for beginners and pros alike.
+ * Rescue support.
+ * Clone partitions to image files.
+ * Restore image files to partitions.
+ * Duplicate partitions quickly.
+ * Support for raw clone.
+ * Displays transfer rate and elapsed time.
+ * Supports piping.
+ * Support for crc32.
+ * Supports vmfs for VMware ESX servers and ufs for the FreeBSD file system.
+
+There are a lot more features bundled in **Partclone** and you can see the rest of them [here][5].
+
+[__Download Partclone for Linux][6]
+
+### How to Install and Use Partclone
+
+To install Partclone on Linux, run:
+```
+$ sudo apt install partclone [On Debian/Ubuntu]
+$ sudo yum install partclone [On CentOS/RHEL/Fedora]
+```
+
+Clone a partition to an image:
+```
+# partclone.ext4 -d -c -s /dev/sda1 -o sda1.img
+```
+
+Restore an image to a partition:
+```
+# partclone.ext4 -d -r -s sda1.img -o /dev/sda1
+```
+
+Partition-to-partition clone:
+```
+# partclone.ext4 -d -b -s /dev/sda1 -o /dev/sdb1
+```
+
+Display image information:
+```
+# partclone.info -s sda1.img
+```
+
+Check an image:
+```
+# partclone.chkimg -s sda1.img
+```
+
+Are you a **Partclone** user? I wrote about [**Deepin Clone**][7] just recently, and apparently there are certain tasks Partclone is better at handling. What has been your experience with other backup and restore utility tools?
+
+Do share your thoughts and suggestions with us in the comments section below.
+
+--------------------------------------------------------------------------------
+
+via: https://www.fossmint.com/partclone-linux-backup-clone-tool/

作者:[Martins Divine Okoi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[1]:https://partclone.org/
[2]:https://github.com/Thomas-Tsai/partclone
[3]:https://partclone.org/help/
[4]:https://partclone.org/usage/
[5]:https://partclone.org/features/
[6]:https://partclone.org/download/
[7]:https://www.fossmint.com/deepin-clone-system-backup-restore-for-deepin-users/
diff --git a/sources/tech/20180116 Analyzing the Linux boot process.md b/sources/tech/20180116 Analyzing the Linux boot process.md
new file mode 100644
index 0000000000..24a7cb971d
--- /dev/null
+++ b/sources/tech/20180116 Analyzing the Linux boot process.md
@@ -0,0 +1,253 @@
+Translating by jessie-pang
+
+Analyzing the Linux boot process
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_boot.png?itok=FUesnJQp)
+
+Image by : Penguin, Boot. Modified by Opensource.com. CC BY-SA 4.0.
+
+The oldest joke in open source software is the statement that "the code is self-documenting." Experience shows that reading the source is akin to listening to the weather forecast: sensible people still go outside and check the sky. What follows are some tips on how to inspect and observe Linux systems at boot by leveraging knowledge of familiar debugging tools. Analyzing the boot processes of systems that are functioning well prepares users and developers to deal with the inevitable failures.
+
+In some ways, the boot process is surprisingly simple. The kernel starts up single-threaded and synchronous on a single core and seems almost comprehensible to the pitiful human mind. But how does the kernel itself get started? What functions do the [initial ramdisk][1] and bootloaders perform? And wait, why is the LED on the Ethernet port always on?
+
+Read on for answers to these and other questions; the [code for the described demos and exercises][2] is also available on GitHub.
+
+### The beginning of boot: the OFF state
+
+#### Wake-on-LAN
+
+The OFF state means that the system has no power, right? The apparent simplicity is deceptive. For example, the Ethernet LED is illuminated because wake-on-LAN (WOL) is enabled on your system. Check whether this is the case by typing:
+```
+$# sudo ethtool <interface>
+```
+
+where `<interface>` might be, for example, `eth0`. (`ethtool` is found in Linux packages of the same name.) If "Wake-on" in the output shows `g`, remote hosts can boot the system by sending a [MagicPacket][3]. If you have no intention of waking up your system remotely and do not wish others to do so, turn WOL off either in the system BIOS menu, or via:
+```
+$# sudo ethtool -s <interface> wol d
+```
+
+The processor that responds to the MagicPacket may be part of the network interface or it may be the [Baseboard Management Controller][4] (BMC).
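+
+As an aside, the MagicPacket itself is nothing exotic: six `0xFF` bytes followed by the target's MAC address repeated 16 times, conventionally sent as a UDP broadcast to port 9 (or 7). Here is a rough sketch of a sender (my own, not from the article); the MAC address below is a placeholder and error checking is omitted:
+```
+#include <arpa/inet.h>
+#include <netinet/in.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <unistd.h>
+
+int main(void)
+{
+    /* Placeholder MAC of the machine to wake; substitute the real one. */
+    const unsigned char mac[6] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};
+
+    /* MagicPacket payload: 6 x 0xFF, then the MAC repeated 16 times. */
+    unsigned char packet[6 + 16 * 6];
+    memset(packet, 0xFF, 6);
+    for (int i = 0; i < 16; i++)
+        memcpy(packet + 6 + i * 6, mac, 6);
+
+    int fd = socket(AF_INET, SOCK_DGRAM, 0);
+    int on = 1;
+    setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));
+
+    struct sockaddr_in addr = {0};
+    addr.sin_family = AF_INET;
+    addr.sin_port = htons(9);                       /* discard port, by convention */
+    addr.sin_addr.s_addr = htonl(INADDR_BROADCAST); /* 255.255.255.255 */
+
+    sendto(fd, packet, sizeof(packet), 0, (struct sockaddr *)&addr, sizeof(addr));
+    close(fd);
+    return 0;
+}
+```
+
+Any machine on the local segment whose firmware or NIC is listening will compare the repeated MAC against its own and power the system on if it matches.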
+
+#### Intel Management Engine, Platform Controller Hub, and Minix
+
+The BMC is not the only microcontroller (MCU) that may be listening when the system is nominally off. x86_64 systems also include the Intel Management Engine (IME) software suite for remote management of systems. A wide variety of devices, from servers to laptops, includes this technology, [which enables functionality][5] such as KVM Remote Control and Intel Capability Licensing Service. The [IME has unpatched vulnerabilities][6], according to [Intel's own detection tool][7]. The bad news is, it's difficult to disable the IME. Trammell Hudson has created an [me_cleaner project][8] that wipes some of the more egregious IME components, like the embedded web server, but could also brick the system on which it is run.
+
+The IME firmware and the System Management Mode (SMM) software that follows it at boot are [based on the Minix operating system][9] and run on the separate Platform Controller Hub processor, not the main system CPU. The SMM then launches the Universal Extensible Firmware Interface (UEFI) software, about which much has [already been written][10], on the main processor. The Coreboot group at Google has started a breathtakingly ambitious [Non-Extensible Reduced Firmware][11] (NERF) project that aims to replace not only UEFI but early Linux userspace components such as systemd. While we await the outcome of these new efforts, Linux users may now purchase laptops from Purism, System76, or Dell [with IME disabled][12], plus we can hope for laptops [with ARM 64-bit processors][13].
+
+#### Bootloaders
+
+Besides starting buggy spyware, what function does early boot firmware serve? The job of a bootloader is to make available to a newly powered processor the resources it needs to run a general-purpose operating system like Linux. At power-on, there not only is no virtual memory, but no DRAM until its controller is brought up. A bootloader then turns on power supplies and scans buses and interfaces in order to locate the kernel image and the root filesystem. Popular bootloaders like U-Boot and GRUB have support for familiar interfaces like USB, PCI, and NFS, as well as more embedded-specific devices like NOR- and NAND-flash. Bootloaders also interact with hardware security devices like [Trusted Platform Modules][14] (TPMs) to establish a chain of trust from earliest boot.
+
+![Running the U-boot bootloader][16]
+
+Running the U-boot bootloader in the sandbox on the build host.
+
+The open source, widely used [U-Boot][17] bootloader is supported on systems ranging from Raspberry Pi to Nintendo devices to automotive boards to Chromebooks. There is no syslog, and when things go sideways, often not even any console output. To facilitate debugging, the U-Boot team offers a sandbox in which patches can be tested on the build-host, or even in a nightly Continuous Integration system. Playing with U-Boot's sandbox is relatively simple on a system where common development tools like Git and the GNU Compiler Collection (GCC) are installed:
+```
+$# git clone git://git.denx.de/u-boot; cd u-boot
+$# make ARCH=sandbox defconfig
+$# make; ./u-boot
+=> printenv
+=> help
+```
+
+That's it: you're running U-Boot on x86_64 and can test tricky features like [mock storage device][2] repartitioning, TPM-based secret-key manipulation, and hotplug of USB devices. The U-Boot sandbox can even be single-stepped under the GDB debugger.
Development using the sandbox is 10x faster than testing by reflashing the bootloader onto a board, and a "bricked" sandbox can be recovered with Ctrl+C.
+
+### Starting up the kernel
+
+#### Provisioning a booting kernel
+
+Upon completion of its tasks, the bootloader will execute a jump to kernel code that it has loaded into main memory and begin execution, passing along any command-line options that the user has specified. What kind of program is the kernel? `file /boot/vmlinuz` indicates that it is a bzImage, meaning a big compressed one. The Linux source tree contains an [extract-vmlinux tool][18] that can be used to uncompress the file:
+```
+$# scripts/extract-vmlinux /boot/vmlinuz-$(uname -r) > vmlinux
+$# file vmlinux
+vmlinux: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically
+linked, stripped
+```
+
+The kernel is an [Executable and Linkable Format][19] (ELF) binary, like Linux userspace programs. That means we can use commands from the `binutils` package like `readelf` to inspect it. Compare the output of, for example:
+```
+$# readelf -S /bin/date
+$# readelf -S vmlinux
+```
+
+The list of sections in the binaries is largely the same.
+
+So the kernel must start up somewhat like other Linux ELF binaries... but how do userspace programs actually start? In the `main()` function, right? Not precisely.
+
+Before the `main()` function can run, programs need an execution context that includes heap and stack memory plus file descriptors for `stdin`, `stdout`, and `stderr`. Userspace programs obtain these resources from the standard library, which is `glibc` on most Linux systems. Consider the following:
+```
+$# file /bin/date
+/bin/date: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically
+linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32,
+BuildID[sha1]=14e8563676febeb06d701dbee35d225c5a8e565a, stripped
+```
+
+ELF binaries have an interpreter, just as Bash and Python scripts do, but the interpreter need not be specified with `#!` as in scripts, as ELF is Linux's native format. The ELF interpreter [provisions a binary][20] with the needed resources by calling `_start()`, a function available from the `glibc` source package that can be [inspected via GDB][21]. The kernel obviously has no interpreter and must provision itself, but how?
+
+Inspecting the kernel's startup with GDB gives the answer. First install the debug package for the kernel that contains an unstripped version of `vmlinux`, for example `apt-get install linux-image-amd64-dbg`, or compile and install your own kernel from source, for example, by following instructions in the excellent [Debian Kernel Handbook][22]. `gdb vmlinux` followed by `info files` shows the ELF section `init.text`. List the start of program execution in `init.text` with `l *(address)`, where `address` is the hexadecimal start of `init.text`. GDB will indicate that the x86_64 kernel starts up in the kernel's file [arch/x86/kernel/head_64.S][23], where we find the assembly function `start_cpu0()` and code that explicitly creates a stack and decompresses the zImage before calling the x86_64 `start_kernel()` function. ARM 32-bit kernels have the similar [arch/arm/kernel/head.S][24]. `start_kernel()` is not architecture-specific, so the function lives in the kernel's [init/main.c][25]. `start_kernel()` is arguably Linux's true `main()` function.
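+
+To drive home the `_start()` point from above, here is a quick illustration of my own (a sketch, x86-64 Linux only, not from the original article): a freestanding C program with no libc and no `main()` must provision itself with raw system calls from `_start`, which is roughly the situation the kernel is in at boot. Build it with `gcc -nostdlib -static -o start start.c`:
+```
+/* start.c: the ELF entry point really is _start, not main(). */
+static const char msg[] = "hello from _start\n";
+
+void _start(void)
+{
+    long ret;
+    /* write(1, msg, sizeof msg - 1): syscall 1 on x86-64 */
+    __asm__ volatile ("syscall"
+                      : "=a"(ret)
+                      : "0"(1L), "D"(1), "S"(msg), "d"(sizeof msg - 1)
+                      : "rcx", "r11", "memory");
+    /* exit(0): syscall 60 on x86-64; there is no caller to return to */
+    __asm__ volatile ("syscall" : : "a"(60L), "D"(0) : "rcx", "r11");
+    __builtin_unreachable();
+}
+```
+
+Running the result prints the message and exits cleanly without glibc ever being involved; everything `_start()` normally sets up on a program's behalf is conspicuous here by its absence.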
+
+### From start_kernel() to PID 1
+
+#### The kernel's hardware manifest: the device-tree and ACPI tables
+
+At boot, the kernel needs information about the hardware beyond the processor type for which it has been compiled. The instructions in the code are augmented by configuration data that is stored separately. There are two main methods of storing this data: [device-trees][26] and [ACPI tables][27]. The kernel learns what hardware it must manage at each boot by reading these files.
+
+For embedded devices, the device-tree is a manifest of installed hardware. The device-tree is simply a file that is compiled at the same time as the kernel source and is typically located in `/boot` alongside `vmlinux`. To see what's in the binary device-tree on an ARM device, just use the `strings` command from the `binutils` package on a file whose name matches `/boot/*.dtb`, as `dtb` refers to a device-tree binary. Clearly the device-tree can be modified simply by editing the JSON-like files that compose it and rerunning the special `dtc` compiler that is provided with the kernel source. While the device-tree is a static file whose file path is typically passed to the kernel by the bootloader on the command line, a [device-tree overlay][28] facility has been added in recent years, where the kernel can dynamically load additional fragments in response to hotplug events after boot.
+
+x86-family and many enterprise-grade ARM64 devices make use of the alternative Advanced Configuration and Power Interface ([ACPI][27]) mechanism. In contrast to the device-tree, the ACPI information is stored in the `/sys/firmware/acpi/tables` virtual filesystem that is created by the kernel at boot by accessing onboard ROM. The easy way to read the ACPI tables is with the `acpidump` command from the `acpica-tools` package. Here's an example:
+
+![ACPI tables on Lenovo laptops][30]
+
+ACPI tables on Lenovo laptops are all set for Windows 2001.
+
+Yes, your Linux system is ready for Windows 2001, should you care to install it. ACPI has both methods and data, unlike the device-tree, which is more of a hardware-description language. ACPI methods continue to be active post-boot. For example, starting the command `acpi_listen` (from the package `acpid`) and opening and closing the laptop lid will show that ACPI functionality is running all the time. While temporarily and dynamically [overwriting the ACPI tables][31] is possible, permanently changing them involves interacting with the BIOS menu at boot or reflashing the ROM. If you're going to that much trouble, perhaps you should just [install coreboot][32], the open source firmware replacement.
+
+#### From start_kernel() to userspace
+
+The code in [init/main.c][25] is surprisingly readable and, amusingly, still carries Linus Torvalds' original copyright from 1991-1992. The lines found in `dmesg | head` on a newly booted system originate mostly from this source file. The first CPU is registered with the system, global data structures are initialized, and the scheduler, interrupt handlers (IRQs), timers, and console are brought online one-by-one, in strict order. Until the function `timekeeping_init()` runs, all timestamps are zero. This part of the kernel initialization is synchronous, meaning that execution occurs in precisely one thread, and no function is executed until the last one completes and returns. As a result, the `dmesg` output will be completely reproducible, even between two systems, as long as they have the same device-tree or ACPI tables.
Linux is behaving like one of the RTOSes (real-time operating systems) that run on MCUs, for example QNX or VxWorks. The situation persists into the function `rest_init()`, which is called by `start_kernel()` at its termination.
+
+![Summary of early kernel boot process.][34]
+
+Summary of early kernel boot process.
+
+The rather humbly named `rest_init()` spawns a new thread that runs `kernel_init()`, which invokes `do_initcalls()`. Users can spy on `initcalls` in action by appending `initcall_debug` to the kernel command line, resulting in `dmesg` entries every time an `initcall` function runs. `initcalls` pass through a sequence of levels: early, core, postcore, arch, subsys, fs, device, and late. The most user-visible part of the `initcalls` is the probing and setup of all the processors' peripherals: buses, network, storage, displays, etc., accompanied by the loading of their kernel modules. `rest_init()` also spawns a second thread on the boot processor that begins by running `cpu_idle()` while it waits for the scheduler to assign it work.
+
+`kernel_init()` also [sets up symmetric multiprocessing][35] (SMP). With more recent kernels, find this point in `dmesg` output by looking for "Bringing up secondary CPUs..." SMP proceeds by "hotplugging" CPUs, meaning that it manages their lifecycle with a state machine that is notionally similar to that of devices like hotplugged USB sticks. The kernel's power-management system frequently takes individual cores offline, then wakes them as needed, so that the same CPU hotplug code is called over and over on a machine that is not busy. Observe the power-management system's invocation of CPU hotplug with the [BCC tool][36] called `offcputime.py`.
+
+Note that the code in `init/main.c` is nearly finished executing when `smp_init()` runs: The boot processor has completed most of the one-time initialization that the other cores need not repeat. Nonetheless, the per-CPU threads must be spawned for each core to manage interrupts (IRQs), workqueues, timers, and power events on each. For example, see the per-CPU threads that service softirqs and workqueues in action via the `ps -o psr` command.
+```
+$# ps -o pid,psr,comm $(pgrep ksoftirqd)
+ PID PSR COMMAND
+   7   0 ksoftirqd/0
+  16   1 ksoftirqd/1
+  22   2 ksoftirqd/2
+  28   3 ksoftirqd/3
+
+$# ps -o pid,psr,comm $(pgrep kworker)
+PID  PSR COMMAND
+   4   0 kworker/0:0H
+  18   1 kworker/1:0H
+  24   2 kworker/2:0H
+  30   3 kworker/3:0H
+[ . . . ]
+```
+
+where the PSR field stands for "processor." Each core must also host its own timers and `cpuhp` hotplug handlers.
+
+How is it, finally, that userspace starts? Near its end, `kernel_init()` looks for an `initrd` that can execute the `init` process on its behalf. If it finds none, the kernel directly executes `init` itself. Why then might one want an `initrd`?
+
+#### Early userspace: who ordered the initrd?
+
+Besides the device-tree, another file path that is optionally provided to the kernel at boot is that of the `initrd`. The `initrd` often lives in `/boot` alongside the bzImage file vmlinuz on x86, or alongside the similar uImage and device-tree for ARM. List the contents of the `initrd` with the `lsinitramfs` tool that is part of the `initramfs-tools-core` package. Distro `initrd` schemes contain minimal `/bin`, `/sbin`, and `/etc` directories along with kernel modules, plus some files in `/scripts`.
All of these should look pretty familiar, as the `initrd` for the most part is simply a minimal Linux root filesystem. The apparent similarity is a bit deceptive, as nearly all the executables in `/bin` and `/sbin` inside the ramdisk are symlinks to the [BusyBox binary][37], resulting in `/bin` and `/sbin` directories that are 10x smaller than glibc's.
+
+Why bother to create an `initrd` if all it does is load some modules and then start `init` on the regular root filesystem? Consider an encrypted root filesystem. The decryption may rely on loading a kernel module that is stored in `/lib/modules` on the root filesystem ... and, unsurprisingly, in the `initrd` as well. The crypto module could be statically compiled into the kernel instead of loaded from a file, but there are various reasons for not wanting to do so. For example, statically compiling the kernel with modules could make it too large to fit on the available storage, or static compilation may violate the terms of a software license. Unsurprisingly, storage, network, and human input device (HID) drivers may also be present in the `initrd`--basically any code that is not part of the kernel proper that is needed to mount the root filesystem. The `initrd` is also a place where users can stash their own [custom ACPI][38] table code.
+
+![Rescue shell and a custom initrd.][40]
+
+Having some fun with the rescue shell and a custom `initrd`.
+
+`initrd`s are also great for testing filesystems and data-storage devices themselves. Stash these test tools in the `initrd` and run your tests from memory rather than from the object under test.
+
+At last, when `init` runs, the system is up! Since the secondary processors are now running, the machine has become the asynchronous, preemptible, unpredictable, high-performance creature we know and love. Indeed, `ps -o pid,psr,comm -p 1` is liable to show that userspace's `init` process is no longer running on the boot processor.
+
+### Summary
+
+The Linux boot process sounds forbidding, considering the number of different pieces of software that participate even on simple embedded devices. Looked at differently, the boot process is rather simple, since the bewildering complexity caused by features like preemption, RCU, and race conditions is absent during boot. Focusing on just the kernel and PID 1 overlooks the large amount of work that bootloaders and subsidiary processors may do in preparing the platform for the kernel to run. While the kernel is certainly unique among Linux programs, some insight into its structure can be gleaned by applying to it some of the same tools used to inspect other ELF binaries. Studying the boot process while it's working well arms system maintainers for failures when they come.
+
+To learn more, attend Alison Chaiken's talk, [Linux: The first second][41], at [linux.conf.au][42], which will be held January 22-26 in Sydney.
+
+Thanks to [Akkana Peck][43] for originally suggesting this topic and for many corrections.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/analyzing-linux-boot-process + +作者:[Alison Chaiken][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/don-watkins +[1]:https://en.wikipedia.org/wiki/Initial_ramdisk +[2]:https://github.com/chaiken/LCA2018-Demo-Code +[3]:https://en.wikipedia.org/wiki/Wake-on-LAN +[4]:https://lwn.net/Articles/630778/ +[5]:https://www.youtube.com/watch?v=iffTJ1vPCSo&index=65&list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk +[6]:https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00086&languageid=en-fr +[7]:https://www.intel.com/content/www/us/en/support/articles/000025619/software.html +[8]:https://github.com/corna/me_cleaner +[9]:https://lwn.net/Articles/738649/ +[10]:https://lwn.net/Articles/699551/ +[11]:https://trmm.net/NERF +[12]:https://www.extremetech.com/computing/259879-dell-now-shipping-laptops-intels-management-engine-disabled +[13]:https://lwn.net/Articles/733837/ +[14]:https://linuxplumbersconf.org/2017/ocw/events/LPC2017/tracks/639 +[15]:/file/383501 +[16]:https://opensource.com/sites/default/files/u128651/linuxboot_1.png (Running the U-boot bootloader) +[17]:http://www.denx.de/wiki/DULG/Manual +[18]:https://github.com/torvalds/linux/blob/master/scripts/extract-vmlinux +[19]:http://man7.org/linux/man-pages/man5/elf.5.html +[20]:https://0xax.gitbooks.io/linux-insides/content/Misc/program_startup.html +[21]:https://github.com/chaiken/LCA2018-Demo-Code/commit/e543d9812058f2dd65f6aed45b09dda886c5fd4e +[22]:http://kernel-handbook.alioth.debian.org/ +[23]:https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/head_64.S +[24]:https://github.com/torvalds/linux/blob/master/arch/arm/boot/compressed/head.S +[25]:https://github.com/torvalds/linux/blob/master/init/main.c +[26]:https://www.youtube.com/watch?v=m_NyYEBxfn8 +[27]:http://events.linuxfoundation.org/sites/events/files/slides/x86-platform.pdf +[28]:http://lwn.net/Articles/616859/ +[29]:/file/383506 +[30]:https://opensource.com/sites/default/files/u128651/linuxboot_2.png (ACPI tables on Lenovo laptops) +[31]:https://www.mjmwired.net/kernel/Documentation/acpi/method-customizing.txt +[32]:https://www.coreboot.org/Supported_Motherboards +[33]:/file/383511 +[34]:https://opensource.com/sites/default/files/u128651/linuxboot_3.png (Summary of early kernel boot process.) +[35]:http://free-electrons.com/pub/conferences/2014/elc/clement-smp-bring-up-on-arm-soc +[36]:http://www.brendangregg.com/ebpf.html +[37]:https://www.busybox.net/ +[38]:https://www.mjmwired.net/kernel/Documentation/acpi/initrd_table_override.txt +[39]:/file/383516 +[40]:https://opensource.com/sites/default/files/u128651/linuxboot_4.png (Rescue shell and a custom initrd.) +[41]:https://rego.linux.conf.au/schedule/presentation/16/ +[42]:https://linux.conf.au/index.html +[43]:http://shallowsky.com/ diff --git a/sources/tech/20180116 How To Create A Bootable Zorin OS USB Drive.md b/sources/tech/20180116 How To Create A Bootable Zorin OS USB Drive.md new file mode 100644 index 0000000000..4ab7fea3f6 --- /dev/null +++ b/sources/tech/20180116 How To Create A Bootable Zorin OS USB Drive.md @@ -0,0 +1,315 @@ +How To Create A Bootable Zorin OS USB Drive +====== +![Zorin OS][17] + +### Introduction + +In this guide I will show you how to create a bootable Zorin OS USB Drive. 
+ +To be able to follow this guide you will need the following: + + * A blank USB drive + * An internet connection + + + +### What Is Zorin OS? + +Zorin OS is a Linux based operating system. + +If you are a Windows user you might wonder why you would bother with Zorin OS. If you are a Linux user then you might also wonder why you would use Zorin OS over other distributions such as Linux Mint or Ubuntu. + +If you are using an older version of Windows and you can't afford to upgrade to Windows 10 or your computer doesn't have the right specifications for running Windows 10 then Zorin OS provides a free (or cheap, depending how much you choose to donate) upgrade path allowing you to continue to use your computer in a much more secure environment. + +If your current operating system is Windows XP or Windows Vista then you might consider using Zorin OS Lite as opposed to Zorin OS Core. + +The features of Zorin OS Lite are generally the same as the Zorin OS Core product but some of the applications installed and the desktop environment used for displaying menus and icons and other Windowsy features take up much less memory and processing power. + +If you are running Windows 7 then your operating system is coming towards the end of its life. You could probably upgrade to Windows 10 but at a hefty price. + +Not everybody has the finances to pay for a new Windows license and not everybody has the money to buy a brand new computer. + +Zorin OS will help you extend the life of your computer and you will still feel you are using a premium product and that is because you will be. The product with the highest price doesn't always provide the best value. + +Whilst we are talking about value for money, Zorin OS allows you to install the best free and open source software available and comes with a good selection of packages pre-installed. + +For the home user, using Zorin OS doesn't have to feel any different to running Windows. You can browse the web using the browser of your choice, you can listen to music and watch videos. There are mail clients and other productivity tools. + +Talking of productivity there is LibreOffice. LibreOffice has everything the average home user requires from an office suite with a word processor, spreadsheet and presentations package. + +If you want to run Windows software then you can use the pre-installed PlayOnLinux and WINE packages to install and run all manner of packages including Microsoft Office. + +By running Zorin OS you will get the extra security benefits of running a Linux based operating system. + +Are you fed up with Windows updates stalling your productivity? When Windows wants to install updates it requires a reboot and then a long wait whilst it proceeds to install update after update. Sometimes it even forces a reboot whilst you are busy working. + +Zorin OS is different. Updates download and install themselves whilst you are using the computer. You won't even need to know it is happening. + +Why Zorin over Mint or Ubuntu? Zorin is the happy stepping stone between Windows and Linux. It is Linux but you don't need to care that it is Linux. If you decide later on to move to something different then so be it but there really is no need. + +### The Zorin OS Website + +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinwebsite1-678x381.png) + +You can visit the Zorin OS website by visiting [www.zorinos.com][18]. + +The homepage of the Zorin OS website tells you everything you need to know. 
+
+"Zorin OS is an alternative to Windows and macOS, designed to make your computer faster, more powerful and secure".
+
+There is nothing that tells you that Zorin OS is based on Linux. There is no need for Zorin to tell you that, because even though Windows used to be heavily based on DOS, you didn't need to know DOS commands to use it. Likewise, you don't necessarily need to know Linux commands to use Zorin.
+
+If you scroll down the page you will see a slide show highlighting the way the desktop looks and feels under Zorin.
+
+The good thing is that you can customise the user interface, so if you prefer a Windows-style layout you can use one, but if you prefer a Mac-style layout you can go for that as well.
+
+Zorin OS is based on Ubuntu Linux, and the website uses this fact to highlight that underneath it has a stable base, and it highlights the security benefits provided by Linux.
+
+If you want to see what applications are available for Zorin then there is a link to do that, and Zorin never sells your data and protects your privacy.
+
+### What Are The Different Versions Of Zorin OS
+
+#### Zorin OS Ultimate
+
+The Ultimate edition takes the Core edition and adds other features such as different layouts, more applications pre-installed, and extra games.
+
+The Ultimate edition comes at a price of 19 euros, which is a bargain compared to other operating systems.
+
+#### Zorin OS Core
+
+The Core version is the standard edition and comes with everything the average person could need from the outset.
+
+This is the version I will show you how to download and install in this guide.
+
+#### Zorin OS Lite
+
+Zorin OS Lite is also available in Ultimate and Core versions. Zorin OS Lite is perfect for older computers, and the main difference is the desktop environment used to display menus and handle screen elements such as icons and panels.
+
+Zorin OS Lite is less memory-intensive than Zorin OS.
+
+#### Zorin OS Business
+
+Zorin OS Business comes with business applications installed as standard, such as finance applications and office applications.
+
+### How To Get Zorin OS
+
+To download Zorin OS, visit the download page on the Zorin OS website.
+
+To get the Core version, scroll past the Zorin Ultimate section until you get to the Zorin Core section.
+
+You will see a small pay panel which allows you to choose how much you wish to pay for Zorin Core, with a purchase now button underneath.
+
+#### How To Pay For Zorin OS
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinwebsite1-678x381.png)
+
+You can choose from the three preset amounts or enter an amount of your choice in the "Custom" box.
+
+When you click "Purchase Zorin OS Core" the following window will appear:
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/payforzorin.png)
+
+You can now enter your email and credit card information.
+
+When you click the "pay" button a window will appear with a download link.
+
+#### How To Get Zorin OS For Free
+
+If you don't wish to pay anything at all, you can enter zero (0) into the custom box. The button will change to show the words "Download Zorin OS Core".
+
+#### How To Download Zorin OS
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/downloadzorin.png)
+
+Whether you have bought Zorin or have chosen to download it for free, a window will appear with the option to download a 64-bit or 32-bit version of Zorin.
+
+Most modern computers are capable of running 64-bit operating systems, but to check within Windows, click the "start" button and type "system information".
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/systeminfo.png)
+
+Click on the "System Information" desktop app, and halfway down the right panel you will see the words "system type". If you see the words "x64 based PC" then the system is capable of running 64-bit operating systems.
+
+If your computer is capable of running 64-bit operating systems, click on the "Download 64 bit" button; otherwise click on "Download 32 bit".
+
+The ISO image file for Zorin will now start to download to your computer.
+
+### How To Verify If The Zorin OS Download Is Valid
+
+It is important to check whether the download is valid, for several reasons.
+
+If the file has only partially downloaded, or there were interruptions whilst downloading and you had to resume, then the image might not be perfect and it should be downloaded again.
+
+More importantly, you should check the validity to make sure the version you downloaded is genuine and wasn't uploaded by a hacker.
+
+In order to check the validity of the ISO image, you should download a piece of software called QuickHash for Windows from the QuickHash website.
+
+Click the "download" link, and when the file has downloaded, double click on it.
+
+Click on the relevant application file within the zip file. If you have a 32-bit system click "Quickhash-v2.8.4-32bit", or for a 64-bit system click "Quickhash-v2.8.4-64bit".
+
+Click on the "Run" button.
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinhash.png)
+
+Click the SHA256 radio button on the left side of the screen and then click on the file tab.
+
+Click "Select File" and navigate to the downloads folder.
+
+Choose the Zorin ISO image downloaded previously.
+
+A progress bar will now work out the hash value for the ISO image.
+
+To compare this with the valid checksums published for Zorin, go back to the Zorin OS download page and scroll down until you see the list of checksums, as follows:
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinhashcodes.png)
+
+Select the long string of scrambled characters next to the version of Zorin OS that you downloaded and press CTRL and C to copy it.
+
+Go back to the QuickHash screen and paste the value into the "Expected hash value" box by pressing CTRL and V.
+
+You should see the words "Expected hash matches the computed file hash, OK".
+
+If the values do not match, you will see the words "Expected hash DOES NOT match the computed file hash", and you should download the ISO image again.
+
+### How To Create A Bootable Zorin OS USB Drive
+
+In order to be able to install Zorin, you will need to install a piece of software called Etcher. You will also need a blank USB drive.
+
+You can download Etcher from the Etcher website.
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/downloadetcher.png)
+
+If you are using a 64-bit computer, click on the "Download for Windows x64" link; otherwise click on the little arrow and choose "Etcher for Windows x86 (32-bit) (Installer)".
+
+Insert the USB drive into your computer and double click on the "Etcher" setup executable file.
+
+![](http://dailylinuxuser.com/wp-content/uploads/2018/01/etcherlicense.png)
+
+When the license screen appears, click "I Agree".
+
+Etcher should start automatically after the installation completes, but if it doesn't, you can press the Windows key or click the start button and search for "Etcher".
+ +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/etcherscreen.png) + +Click on "Select Image" and select the "Zorin" ISO image downloaded previously. + +Click "Flash". + +Windows will ask for your permission to continue. Click "Yes" to accept. + +After a while a window will appear with the words "Flash Complete". + +### How To Buy A Zorin OS USB Drive + +If the above instructions seem too much like hard work then you can order a Zorin USB Drive by clicking one of the following links: + +* [Zorin OS Core – 32-bit DVD][1] + +* [Zorin OS Core – 64-bit DVD][2] + +* [Zorin OS Core – 16 gigabyte USB drive (32-bit)][3] + +* [Zorin OS Core – 32 gigabyte USB drive (32-bit)][4] + +* [Zorin OS Core – 64 gigabyte USB drive (32-bit)][5] + +* [Zorin OS Core – 16 gigabyte USB drive (64-bit)][6] + +* [Zorin OS Core – 32 gigabyte USB drive (64-bit)][7] + +* [Zorin OS Core – 64 gigabyte USB drive (64-bit)][8] + +* [Zorin OS Lite – 32-bit DVD][9] + +* [Zorin OS Lite – 64-bit DVD][10] + +* [Zorin OS Lite – 16 gigabyte USB drive (32-bit)][11] + +* [Zorin OS Lite – 32 gigabyte USB drive (32-bit)][12] + +* [Zorin OS Lite – 64 gigabyte USB drive (32-bit)][13] + +* [Zorin OS Lite – 16 gigabyte USB drive (64-bit)][14] + +* [Zorin OS Lite – 32 gigabyte USB drive (64-bit)][15] + +* [Zorin OS Lite – 64 gigabyte USB drive (64-bit)][16] + + +### How To Boot Into Zorin OS Live + +On older computers simply insert the USB drive and restart the computer. The boot menu for Zorin should appear straight away. + +On modern computers insert the USB drive, restart the computer and before Windows loads press the appropriate function key to bring up the boot menu. + +The following list shows the key or keys you can press for the most popular computer manufacturers. + + * Acer - Escape, F12, F9 + * Asus - Escape, F8 + * Compaq - Escape, F9 + * Dell - F12 + * Emachines - F12 + * HP - Escape, F9 + * Intel - F10 + * Lenovo - F8, F10, F12 + * Packard Bell - F8 + * Samsung - Escape, F12 + * Sony - F10, F11 + * Toshiba - F12 + + + +Check the manufacturer's website to find the key for your computer if it isn't listed or keep trying different function keys or the escape key. + +A screen will appear with the following three options: + + 1. Try Zorin OS without Installing + 2. Install Zorin OS + 3. Check disc for defects + + + +Choose "Try Zorin OS without Installing" by pressing enter with that option selected. + +### Summary + +You can now try Zorin OS without damaging your current operating system. + +To get back to your original operating system reboot and remove the USB drive. 
+ +### How To Remove Zorin OS From The USB Drive + +If you have decided that Zorin OS is not for you and you want to get the USB drive back into its pre-Zorin state follow this guide: + +[How To Fix A USB Drive After Linux Has Been Installed On It][19] + +-------------------------------------------------------------------------------- + +via: http://dailylinuxuser.com/2018/01/how-to-create-a-bootable-zorin-os-usb-drive.html + +作者:[admin][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[1]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-install-live-dvd-32bit.html?affiliate=everydaylinuxuser +[2]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-install-live-dvd-64bit.html?affiliate=everydaylinuxuser +[3]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-16gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[4]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-32gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[5]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-64gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[6]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-16gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser +[7]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-32gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser +[8]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-64gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser +[9]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-install-live-dvd-32bit.html?affiliate=everydaylinuxuser +[10]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-install-live-dvd-64bit.html?affiliate=everydaylinuxuser +[11]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-16gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[12]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-32gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[13]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-64gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[14]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-16gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser +[15]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-32gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser +[16]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-64gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser +[17]:http://dailylinuxuser.com/wp-content/uploads/2018/01/zorindesktop-678x381.png (Zorin OS) +[18]:http://www.zorinos.com +[19]:http://dailylinuxuser.com/2016/04/how-to-fix-usb-drive-after-linux-has.html diff --git a/sources/tech/20180116 How to Install and Optimize Apache on Ubuntu - ThisHosting.Rocks.md b/sources/tech/20180116 How to Install and Optimize Apache on Ubuntu - ThisHosting.Rocks.md new file mode 100644 index 0000000000..eba7ce9c54 --- /dev/null +++ b/sources/tech/20180116 How to Install and Optimize Apache on Ubuntu - ThisHosting.Rocks.md @@ -0,0 +1,267 @@ +How to Install and Optimize Apache on Ubuntu +====== + +This is the beginning of our LAMP tutorial series: how to install the Apache web server on Ubuntu. + +These instructions should work on any Ubuntu-based distro, including Ubuntu 14.04, Ubuntu 16.04, [Ubuntu 18.04][1], and even non-LTS Ubuntu releases like 17.10. They were tested and written for Ubuntu 16.04. 
+
+Apache (aka httpd) is the most popular and most widely used web server, so this should be useful for everyone.
+
+### Before we begin installing Apache
+
+Some requirements and notes before we begin:
+
+ * Apache may already be installed on your server, so check if it is first. You can do so with the "apachectl -V" command, which outputs the Apache version you're using and some other information.
+ * You'll need an Ubuntu server. You can buy one from [Vultr][2]; they're one of the [best and cheapest cloud hosting providers][3]. Their servers start from $2.5 per month.
+ * You'll need the root user or a user with sudo access. All commands below are executed by the root user so we didn't have to append 'sudo' to each command.
+ * You'll need [SSH enabled][4] if you use Ubuntu, or an SSH client like [MobaXterm][5] if you use Windows.
+
+That's most of it. Let's move on to the installation.
+
+### Install Apache on Ubuntu
+
+The first thing you always need to do is update Ubuntu before you do anything else. You can do so by running:
+```
+apt-get update && apt-get upgrade
+```
+
+Next, to install Apache, run the following command:
+```
+apt-get install apache2
+```
+
+If you want to, you can also install the Apache documentation and some Apache utilities. You'll need the Apache utilities for some of the modules we'll install later.
+```
+apt-get install apache2-doc apache2-utils
+```
+
+**And that's it. You've successfully installed Apache.**
+
+You'll still need to configure it.
+
+### Configure and Optimize Apache on Ubuntu
+
+There are various configs you can do on Apache, but the main and most common ones are explained below.
+
+#### Check if Apache is running
+
+By default, Apache is configured to start automatically on boot, so you don't have to enable it. You can check if it's running, along with other relevant information, with the following command:
+```
+systemctl status apache2
+```
+
+[![check if apache is running][6]][6]
+
+And you can check what version you're using with:
+```
+apachectl -V
+```
+
+A simpler way of checking this is by visiting your server's IP address. If you get the default Apache page, then everything's working fine.
+
+#### Update your firewall
+
+If you use a firewall (which you should), you'll probably need to update your firewall rules and allow access to the default ports. The most common firewall used on Ubuntu is UFW, so the instructions below are for UFW.
+
+To allow traffic through both the 80 (http) and 443 (https) ports, run the following command:
+```
+ufw allow 'Apache Full'
+```
+
+#### Install common Apache modules
+
+Some modules are frequently recommended and you should install them. We'll include instructions for the most common ones:
+
+##### Speed up your website with the PageSpeed module
+
+The PageSpeed module will optimize and speed up your Apache server automatically.
+
+First, go to the [PageSpeed download page][7] and choose the file you need. We're using a 64-bit Ubuntu server and we'll install the latest stable version. Download it using wget:
+```
+wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb
+```
+
+Then, install it with the following commands:
+```
+dpkg -i mod-pagespeed-stable_current_amd64.deb
+apt-get -f install
+```
+
+Restart Apache for the changes to take effect:
+```
+systemctl restart apache2
+```
+
+##### Enable rewrites/redirects using the mod_rewrite module
+
+This module is used for rewrites (redirects), as the name suggests.
You'll need it if you use WordPress or any other CMS for that matter. To install it, just run:
+```
+a2enmod rewrite
+```
+
+And restart Apache again. You may need some extra configuration depending on what CMS you're using, if any. Google it for specific instructions for your setup.
+
+##### Secure your Apache with the ModSecurity module
+
+ModSecurity is a module used for security, again, as the name suggests. It basically acts as a firewall, and it monitors your traffic. To install it, run the following command:
+```
+apt-get install libapache2-modsecurity
+```
+
+And restart Apache again:
+```
+systemctl restart apache2
+```
+
+ModSecurity comes with a default setup that's enough by itself, but if you want to extend it, you can use the [OWASP rule set][8].
+
+##### Block DDoS attacks using the mod_evasive module
+
+You can use the mod_evasive module to block and prevent DDoS attacks on your server, though it's debatable how useful it is in preventing attacks. To install it, use the following command:
+```
+apt-get install libapache2-mod-evasive
+```
+
+By default, mod_evasive is disabled; to enable it, edit the following file:
+```
+nano /etc/apache2/mods-enabled/evasive.conf
+```
+
+And uncomment all the lines (remove #) and configure it per your requirements. You can leave everything as-is if you don't know what to edit.
+
+[![mod_evasive][9]][9]
+
+And create a log file:
+```
+mkdir /var/log/mod_evasive
+chown -R www-data:www-data /var/log/mod_evasive
+```
+
+That's it. Now restart Apache for the changes to take effect:
+```
+systemctl restart apache2
+```
+
+There are [additional modules][10] you can install and configure, but it's all up to you and the software you're using. They're usually not required. Even the 4 modules we included are not required. If a module is required for a specific application, its documentation will probably note that.
+
+#### Optimize Apache with the Apache2Buddy script
+
+Apache2Buddy is a script that will automatically fine-tune your Apache configuration. The only thing you need to do is run the following command and the script does the rest automatically:
+```
+curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl
+```
+
+You may need to install curl if you don't have it already installed. Use the following command to install curl:
+```
+apt-get install curl
+```
+
+#### Additional configurations
+
+There's some extra stuff you can do with Apache, but we'll leave it for another tutorial: stuff like enabling http/2 support, turning off (or on) KeepAlive, and tuning your Apache even more. You don't have to do any of this, but you can find tutorials online and do it if you can't wait for our tutorials.
+
+### Create your first website with Apache
+
+Now that we're done with all the tuning, let's move on to creating an actual website. Follow our instructions to create a simple HTML page and a virtual host that's going to run on Apache.
+
+The first thing you need to do is create a new directory for your website. Run the following command to do so:
+```
+mkdir -p /var/www/example.com/public_html
+```
+
+Of course, replace example.com with your desired domain. You can get a cheap domain name from [Namecheap][11].
+
+Don't forget to replace example.com in all of the commands below.
+
+Next, create a simple, static web page. Create the HTML file:
+```
+nano /var/www/example.com/public_html/index.html
+```
+
+And paste this:
+```
+<html>
+     <head>
+       <title>Simple Page</title>
+     </head>
+     <body>
+       <p>
+         If you're seeing this in your browser then everything works.
+       </p>
+     </body>
+</html>
+```
+
+Save and close the file.
+
+Configure the permissions of the directory:
+```
+chown -R www-data:www-data /var/www/example.com
+chmod -R og-r /var/www/example.com
+```
+
+Create a new virtual host for your site:
+```
+nano /etc/apache2/sites-available/example.com.conf
+```
+
+And paste the following:
+```
+<VirtualHost *:80>
+     ServerAdmin admin@example.com
+     ServerName example.com
+     ServerAlias www.example.com
+
+     DocumentRoot /var/www/example.com/public_html
+
+     ErrorLog ${APACHE_LOG_DIR}/error.log
+     CustomLog ${APACHE_LOG_DIR}/access.log combined
+</VirtualHost>
+```
+
+This is a basic virtual host. You may need a more advanced .conf file depending on your setup.
+
+Save and close the file after updating everything accordingly.
+
+Now, enable the virtual host with the following command:
+```
+a2ensite example.com.conf
+```
+
+And finally, restart Apache for the changes to take effect:
+```
+systemctl restart apache2
+```
+
+That's it. You're done. Now you can visit example.com and view your page.
+
+--------------------------------------------------------------------------------
+
+via: https://thishosting.rocks/how-to-install-optimize-apache-ubuntu/
+
+作者:[ThisHosting][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://thishosting.rocks
+[1]:https://thishosting.rocks/ubuntu-18-04-new-features-release-date/
+[2]:https://thishosting.rocks/go/vultr/
+[3]:https://thishosting.rocks/cheap-cloud-hosting-providers-comparison/
+[4]:https://thishosting.rocks/how-to-enable-ssh-on-ubuntu/
+[5]:https://mobaxterm.mobatek.net/
+[6]:https://thishosting.rocks/wp-content/uploads/2018/01/apache-running.jpg
+[7]:https://www.modpagespeed.com/doc/download
+[8]:https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project
+[9]:https://thishosting.rocks/wp-content/uploads/2018/01/mod_evasive.jpg
+[10]:https://httpd.apache.org/docs/2.4/mod/
+[11]:https://thishosting.rocks/neamcheap-review-cheap-domains-cool-names
+[12]:https://thishosting.rocks/wp-content/plugins/patron-button-and-widgets-by-codebard/images/become_a_patron_button.png
+[13]:https://www.patreon.com/thishostingrocks
diff --git a/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md b/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md
new file mode 100644
index 0000000000..7ddb17eb68
--- /dev/null
+++ b/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md
@@ -0,0 +1,225 @@
+How to Install and Use iostat on Ubuntu 16.04 LTS
+======
+
+iostat, also known as input/output statistics, is a popular Linux system monitoring tool that can be used to collect statistics from input and output devices. It allows users to identify performance issues with local disks, remote disks and the system overall. iostat creates three types of reports: the CPU Utilization report, the Device Utilization report and the Network Filesystem report.
+
+In this tutorial, we will learn how to install iostat on Ubuntu 16.04 and how to use it.
+
+### Prerequisite
+
+  * Ubuntu 16.04 desktop installed on your system.
+  * Non-root user with sudo privileges set up on your system
+
+
+
+### Install iostat
+
+By default, iostat is included with the sysstat package in Ubuntu 16.04. You can easily install it by just running the following command:
+
+```
+sudo apt-get install sysstat -y
+```
+
+Once sysstat is installed, you can proceed to the next step.
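+
+As an optional aside: iostat itself works as soon as sysstat is installed, but sysstat's periodic history collection (used by tools like sar) has to be switched on separately. On Debian/Ubuntu this is controlled by the ENABLED flag in /etc/default/sysstat (a packaging default, so check the file on your system first):
+```
+# Turn on sysstat's periodic data collection (optional; not needed for iostat)
+sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
+sudo systemctl restart sysstat
+```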
+
+### iostat Basic Example
+
+Let's start by running the iostat command without any argument. This will display information about the CPU usage and I/O statistics of your system:
+
+```
+iostat
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 22.67 0.52 6.99 1.88 0.00 67.94
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 15.15 449.15 119.01 771022 204292
+
+```
+
+In the above output, the first line displays the Linux kernel version and hostname. The next two lines display CPU statistics: average CPU usage, the percentage of time the CPU was idle while waiting for an I/O response, the percentage of time a virtual CPU spent waiting, and the percentage of time the CPU was idle. The two lines after that display the device utilization report: the number of blocks read and written per second, and the total blocks read and written.
+
+By default, iostat displays the report with the current date. If you want to display the current time as well, run the following command:
+
+```
+iostat -t
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+Saturday 16 December 2017 09:44:55 IST
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.37 0.31 6.93 1.28 0.00 70.12
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 9.48 267.80 79.69 771022 229424
+
+```
+
+To check the version of iostat, run the following command:
+
+```
+iostat -V
+```
+
+Output:
+```
+sysstat version 10.2.0
+(C) Sebastien Godard (sysstat <at> orange.fr)
+
+```
+
+You can list all the options available with the iostat command using the following command:
+
+```
+iostat --help
+```
+
+Output:
+```
+Usage: iostat [ options ] [ <interval> [ <count> ] ]
+Options are:
+[ -c ] [ -d ] [ -h ] [ -k | -m ] [ -N ] [ -t ] [ -V ] [ -x ] [ -y ] [ -z ]
+[ -j { ID | LABEL | PATH | UUID | ... } ]
+[ [ -T ] -g <group_name> ] [ -p [ <device> [,...] | ALL ] ]
+[ <device> [...] | ALL ]
+
+```
+
+### iostat Advanced Usage Examples
+
+If you want to view the device report only once, run the following command:
+
+```
+iostat -d
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 12.18 353.66 102.44 771022 223320
+
+```
+
+To view the device report continuously, every 5 seconds, 3 times:
+
+```
+iostat -d 5 3
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 11.77 340.71 98.95 771022 223928
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 2.00 0.00 8.00 0 40
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 0.60 0.00 3.20 0 16
+
+```
+
+If you want to view the statistics of specific devices, run the following command:
+
+```
+iostat -p sda
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.69 0.36 6.98 1.44 0.00 69.53
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 11.00 316.91 92.38 771022 224744
+sda1 0.07 0.27 0.00 664 0
+sda2 0.01 0.05 0.00 128 0
+sda3 0.07 0.27 0.00 648 0
+sda4 10.56 315.21 92.35 766877 224692
+sda5 0.12 0.48 0.02 1165 52
+sda6 0.07 0.32 0.00 776 0
+
+```
+
+You can also view the statistics of multiple devices with the following command:
+
+```
+iostat -p sda,sdb,sdc
+```
+
+If you want to display the device I/O statistics in MB/second, run the following command:
+
+```
+iostat -m
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.39 0.31 6.94 1.30 0.00 70.06
+
+Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
+sda 9.67 0.27 0.08 752 223
+
+```
+
+If you want to view the extended information for a specific partition (sda4), run the following command:
+
+```
+iostat -x sda4
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.26 0.28 6.87 1.19 0.00 70.39
+
+Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
+sda4 0.79 4.65 5.71 2.68 242.76 73.28 75.32 0.35 41.80 43.66 37.84 4.55 3.82
+
+```
+
+If you want to display only the CPU usage statistics, run the following command:
+
+```
+iostat -c
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.45 0.33 6.96 1.34 0.00 69.91
+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/how-to-install-and-use-iostat-on-ubuntu-1604/
+
+作者:[Hitesh Jethva][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.howtoforge.com
diff --git a/sources/tech/20180116 Monitor your Kubernetes Cluster.md b/sources/tech/20180116 Monitor your Kubernetes Cluster.md
new file mode 100644
index 0000000000..f0ac585f6f
--- /dev/null
+++ b/sources/tech/20180116 Monitor your Kubernetes Cluster.md
@@ -0,0 +1,264 @@
+Monitor your Kubernetes Cluster
+======
+This article originally appeared on [Kevin Monroe's blog][1]
+
+Keeping an eye on logs and metrics is a necessary evil for cluster admins. The benefits are clear: metrics help you set reasonable performance goals, while log analysis can uncover issues that impact your workloads. The hard part, however, is getting a slew of applications to work together in a useful monitoring solution.
+
+In this post, I'll cover monitoring a Kubernetes cluster with [Graylog][2] (for logging) and [Prometheus][3] (for metrics). Of course, that's not just wiring 3 things together. In fact, it'll end up looking like this:
+
+![][4]
+
+As you know, Kubernetes isn't just one thing -- it's a system of masters, workers, networking bits, etc(d). Similarly, Graylog comes with a supporting cast (apache2, mongodb, etc), as does Prometheus (telegraf, grafana, etc). Connecting the dots in a deployment like this may seem daunting, but the right tools can make all the difference.
+
+I'll walk through this using [conjure-up][5] and the [Canonical Distribution of Kubernetes][6] (CDK). I find the conjure-up interface really helpful for deploying big software, but I know some of you hate GUIs and TUIs and probably other UIs too. For those folks, I'll do the same deployment again from the command line.
+
+Before we jump in, note that Graylog and Prometheus will be deployed alongside Kubernetes and not in the cluster itself. Things like the Kubernetes Dashboard and Heapster are excellent sources of information from within a running cluster, but my objective is to provide a mechanism for log/metric analysis whether the cluster is running or not.
+
+### The Walk Through
+
+First things first, install conjure-up if you don't already have it. On Linux, that's simply:
+```
+sudo snap install conjure-up --classic
+```
+
+There's also a brew package for macOS users:
+```
+brew install conjure-up
+```
+
+You'll need at least version 2.5.2 to take advantage of the recent CDK spell additions, so be sure to `sudo snap refresh conjure-up` or `brew update && brew upgrade conjure-up` if you have an older version installed.
+
+Once installed, run it:
+```
+conjure-up
+```
+
+![][7]
+
+You'll be presented with a list of various spells. Select CDK and press `Enter`.
+
+![][8]
+
+At this point, you'll see additional components that are available for the CDK spell. We're interested in Graylog and Prometheus, so check both of those and hit `Continue`.
+
+You'll be guided through various cloud choices to determine where you want your cluster to live. After that, you'll see options for post-deployment steps, followed by a review screen that lets you see what is about to be deployed:
+
+![][9]
+
+In addition to the typical K8s-related applications (etcd, flannel, load-balancer, master, and workers), you'll see additional applications related to our logging and metric selections.
+
+The Graylog stack includes the following:
+
+  * apache2: reverse proxy for the graylog web interface
+  * elasticsearch: document database for the logs
+  * filebeat: forwards logs from K8s master/workers to graylog
+  * graylog: provides an api for log collection and an interface for analysis
+  * mongodb: database for graylog metadata
+
+
+
+The Prometheus stack includes the following:
+
+  * grafana: web interface for metric-related dashboards
+  * prometheus: metric collector and time series database
+  * telegraf: sends host metrics to prometheus
+
+
+
+You can fine-tune the deployment from this review screen, but the defaults will suit our needs.
Click `Deploy all Remaining Applications` to get things going.
+
+The deployment will take a few minutes to settle as machines are brought online and applications are configured in your cloud. Once complete, conjure-up will show a summary screen that includes links to various interesting endpoints for you to browse:
+
+![][10]
+
+#### Exploring Logs
+
+Now that Graylog has been deployed and configured, let's take a look at some of the data we're gathering. By default, the filebeat application will send both syslog and container log events to graylog (that's `/var/log/*.log` and `/var/log/containers/*.log` from the kubernetes master and workers).
+
+Grab the apache2 address and graylog admin password as follows:
+```
+juju status --format yaml apache2/0 | grep public-address
+ public-address: <APACHE_IP>
+juju run-action --wait graylog/0 show-admin-password
+ admin-password: <GRAYLOG_PASSWORD>
+```
+
+Browse to `http://<APACHE_IP>` and login with admin as the username and `<GRAYLOG_PASSWORD>` as the password. **Note:** if the interface is not immediately available, please wait as the reverse proxy configuration may take up to 5 minutes to complete.
+
+Once logged in, head to the `Sources` tab to get an overview of the logs collected from our K8s master and workers:
+
+![][11]
+
+Drill into those logs by clicking the `System / Inputs` tab and selecting `Show received messages` for the filebeat input:
+
+![][12]
+
+From here, you may want to play around with various filters or set up Graylog dashboards to help identify the events that are most important to you. Check out the [Graylog Dashboard][13] docs for details on customizing your view.
+
+#### Exploring Metrics
+
+Our deployment exposes two types of metrics through our grafana dashboards: system metrics include things like cpu/memory/disk utilization for the K8s master and worker machines, and cluster metrics include container-level data scraped from the K8s cAdvisor endpoints.
+
+Grab the grafana address and admin password as follows:
+```
+juju status --format yaml grafana/0 | grep public-address
+ public-address: <GRAFANA_IP>
+juju run-action --wait grafana/0 get-admin-password
+ password: <GRAFANA_PASSWORD>
+```
+
+Browse to `http://<GRAFANA_IP>:3000` and login with admin as the username and `<GRAFANA_PASSWORD>` as the password. Once logged in, check out the cluster metric dashboard by clicking the `Home` drop-down box and selecting `Kubernetes Metrics (via Prometheus)`:
+
+![][14]
+
+We can also check out the system metrics of our K8s host machines by switching the drop-down box to `Node Metrics (via Telegraf)`
+
+![][15]
+
+
+### The Other Way
+
+As alluded to in the intro, I prefer the wizard-y feel of conjure-up to guide me through complex software deployments like Kubernetes. Now that we've seen the conjure-up way, some of you may want to see a command line approach to achieve the same results. Still others may have deployed CDK previously and want to extend it with the Graylog/Prometheus components described above. Regardless of why you've read this far, I've got you covered.
+
+The tool that underpins conjure-up is [Juju][16]. Everything that the CDK spell did behind the scenes can be done on the command line with Juju. Let's step through how that works.
+
+**Starting From Scratch**
+
+If you're on Linux, install Juju like this:
+```
+sudo snap install juju --classic
+```
+
+For macOS, Juju is available from brew:
+```
+brew install juju
+```
+
+Now set up a controller for your preferred cloud.
You may be prompted for any required cloud credentials:
+```
+juju bootstrap
+```
+
+We then need to deploy the base CDK bundle:
+```
+juju deploy canonical-kubernetes
+```
+
+**Starting From CDK**
+
+With our Kubernetes cluster deployed, we need to add all the applications required for Graylog and Prometheus:
+```
+## deploy graylog-related applications
+juju deploy xenial/apache2
+juju deploy xenial/elasticsearch
+juju deploy xenial/filebeat
+juju deploy xenial/graylog
+juju deploy xenial/mongodb
+```
+```
+## deploy prometheus-related applications
+juju deploy xenial/grafana
+juju deploy xenial/prometheus
+juju deploy xenial/telegraf
+```
+
+Now that the software is deployed, connect them together so they can communicate:
+```
+## relate graylog applications
+juju relate apache2:reverseproxy graylog:website
+juju relate graylog:elasticsearch elasticsearch:client
+juju relate graylog:mongodb mongodb:database
+juju relate filebeat:beats-host kubernetes-master:juju-info
+juju relate filebeat:beats-host kubernetes-worker:juju-info
+```
+```
+## relate prometheus applications
+juju relate prometheus:grafana-source grafana:grafana-source
+juju relate telegraf:prometheus-client prometheus:target
+juju relate kubernetes-master:juju-info telegraf:juju-info
+juju relate kubernetes-worker:juju-info telegraf:juju-info
+```
+
+At this point, all the applications can communicate with each other, but we have a bit more configuration to do (e.g., setting up the apache2 reverse proxy, telling prometheus how to scrape k8s, importing our grafana dashboards, etc):
+```
+## configure graylog applications
+juju config apache2 enable_modules="headers proxy_html proxy_http"
+juju config apache2 vhost_http_template="$(base64 <vhost-tmpl>)"
+juju config elasticsearch firewall_enabled="false"
+juju config filebeat \
+ logpath="/var/log/*.log /var/log/containers/*.log"
+juju config filebeat logstash_hosts="<graylog-ip>:5044"
+juju config graylog elasticsearch_cluster_name="<es-cluster>"
+```
+```
+## configure prometheus applications
+juju config prometheus scrape-jobs="<scraper-yaml>"
+juju run-action --wait grafana/0 import-dashboard \
+ dashboard="$(base64 <dashboard-json>)"
+```
+
+Some of the above steps need values specific to your deployment. You can get these in the same way that conjure-up does:
+
+  * `<vhost-tmpl>`: fetch our sample [template][17] from github
+  * `<graylog-ip>`: `juju run --unit graylog/0 'unit-get private-address'`
+  * `<es-cluster>`: `juju config elasticsearch cluster-name`
+  * `<scraper-yaml>`: fetch our sample [scraper][18] from github; [substitute][19] appropriate values for `[K8S_PASSWORD][20]` and `[K8S_API_ENDPOINT][21]`
+  * `<dashboard-json>`: fetch our [host][22] and [k8s][23] dashboards from github
+
+
+
+Finally, you'll want to expose the apache2 and grafana applications to make their web interfaces accessible:
+```
+## expose relevant endpoints
+juju expose apache2
+juju expose grafana
+```
+
+Now that we have everything deployed, related, configured, and exposed, you can log in and poke around using the same steps from the **Exploring Logs** and **Exploring Metrics** sections above.
+
+### The Wrap Up
+
+My goal here was to show you how to deploy a Kubernetes cluster with rich monitoring capabilities for logs and metrics. Whether you prefer a guided approach or command line steps, I hope it's clear that monitoring complex deployments doesn't have to be a pipe dream. The trick is to figure out how all the moving parts work, make them work together repeatably, and then break/fix/repeat for a while until everyone can use it.
+
+This is where tools like conjure-up and Juju really shine.
Leveraging the expertise of contributors to this ecosystem makes it easy to manage big software. Start with a solid set of apps, customize as needed, and get back to work! + +Give these bits a try and let me know how it goes. You can find enthusiasts like me on Freenode IRC in **#conjure-up** and **#juju**. Thanks for reading! + +### About the author + +Kevin joined Canonical in 2014 with his focus set on modeling complex software. He found his niche on the Juju Big Software team where his mission is to capture operational knowledge of Big Data and Machine Learning applications into repeatable (and reliable!) solutions. + +-------------------------------------------------------------------------------- + +via: https://insights.ubuntu.com/2018/01/16/monitor-your-kubernetes-cluster/ + +作者:[Kevin Monroe][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://insights.ubuntu.com/author/kwmonroe/ +[1]:https://medium.com/@kwmonroe/monitor-your-kubernetes-cluster-a856d2603ec3 +[2]:https://www.graylog.org/ +[3]:https://prometheus.io/ +[4]:https://insights.ubuntu.com/wp-content/uploads/706b/1_TAA57DGVDpe9KHIzOirrBA.png +[5]:https://conjure-up.io/ +[6]:https://jujucharms.com/canonical-kubernetes +[7]:https://insights.ubuntu.com/wp-content/uploads/98fd/1_o0UmYzYkFiHIs2sBgj7G9A.png +[8]:https://insights.ubuntu.com/wp-content/uploads/0351/1_pgVaO_ZlalrjvYd5pOMJMA.png +[9]:https://insights.ubuntu.com/wp-content/uploads/9977/1_WXKxMlml2DWA5Kj6wW9oXQ.png +[10]:https://insights.ubuntu.com/wp-content/uploads/8588/1_NWq7u6g6UAzyFxtbM-ipqg.png +[11]:https://insights.ubuntu.com/wp-content/uploads/a1c3/1_hHK5mSrRJQi6A6u0yPSGOA.png +[12]:https://insights.ubuntu.com/wp-content/uploads/937f/1_cP36lpmSwlsPXJyDUpFluQ.png +[13]:http://docs.graylog.org/en/2.3/pages/dashboards.html +[14]:https://insights.ubuntu.com/wp-content/uploads/9256/1_kskust3AOImIh18QxQPgRw.png +[15]:https://insights.ubuntu.com/wp-content/uploads/2037/1_qJpjPOTGMQbjFY5-cZsYrQ.png +[16]:https://jujucharms.com/ +[17]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/graylog/steps/01_install-graylog/graylog-vhost.tmpl +[18]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/prometheus-scrape-k8s.yaml +[19]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L25 +[20]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L10 +[21]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L11 +[22]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-telegraf.json +[23]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-k8s.json diff --git a/sources/tech/20180116 SPARTA - Network Penetration Testing GUI Toolkit.md b/sources/tech/20180116 SPARTA - Network Penetration Testing GUI Toolkit.md new file mode 100644 index 0000000000..06427c101d --- /dev/null +++ b/sources/tech/20180116 SPARTA - Network Penetration Testing GUI Toolkit.md @@ -0,0 +1,107 @@ +SPARTA – Network Penetration Testing GUI Toolkit +====== + 
+![](https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/GjWDZ1516079830.png?resize=696%2C379&ssl=1)
+
+SPARTA is a GUI application developed in Python, and it ships with Kali Linux as a built-in network penetration testing tool. It simplifies the scanning and enumeration phases and delivers results faster.
+
+The best part of the SPARTA GUI toolkit is that it detects the services running on the target ports while it scans.
+
+It also provides brute-force attacks against the discovered open ports and services as a part of the enumeration phase.
+
+
+[Also Read: Network Pentesting Checklist][1]
+
+## Installation
+
+Please clone the latest version of SPARTA from github:
+
+```
+git clone https://github.com/secforce/sparta.git
+```
+
+Alternatively, download the latest zip file [here][2].
+```
+cd /usr/share/
+git clone https://github.com/secforce/sparta.git
+```
+Place the "sparta" file in /usr/bin/ and make it executable.
+Type 'sparta' in any terminal to launch the application.
+
+
+## The scope of Network Penetration Testing Work:
+
+  * An organization's security weaknesses in its network infrastructure are identified by scanning a list of hosts or a targeted host, so add them to the scope.
+  * Select menu bar - File > Add host(s) to scope
+
+
+
+[![Network Penetration Testing][3]][4]
+
+[![Network Penetration Testing][5]][6]
+
+  * The figures above show a target IP being added to the scope. Depending on your network, you can add a range of IPs to scan.
+  * After adding, an Nmap scan will begin and results will come in quickly. The scanning phase is then done.
+
+
+
+## Open Ports & Services:
+
+  * The Nmap results will provide the target's open ports and services.
+
+
+
+[![Network Penetration Testing][7]][8]
+
+  * The figure above shows that the target's operating system, open ports and services are discovered as scan results.
+
+
+
+## Brute Force Attack on Open ports:
+
+  * Let us brute-force Server Message Block (SMB) via port 445 to enumerate the list of users and their valid passwords.
+
+
+
+[![Network Penetration Testing][9]][10]
+
+  * Right-click and select the "Send to Brute" option. Also, select the discovered open ports and services on the target.
+  * Browse and add dictionary files for the username and password fields.
+
+
+
+[![Network Penetration Testing][11]][12]
+
+  * Click "Run" to start the brute-force attack on the target. The figure above shows the brute-force attack completing successfully on the target IP, and a valid password being found!
+  * Keep in mind that failed login attempts are recorded as event logs in Windows.
+  * A password-change policy of 15 to 30 days is good practice.
+  * It is always recommended to use a strong password, per policy. An account-lockout policy is a good way to stop brute-force attacks (the account is locked after 5 failed attempts).
+  * Integrating business-critical assets into a SIEM (Security Incident & Event Management) system will detect these kinds of attacks as soon as possible.
+
+
+
+SPARTA is a time-saving GUI toolkit for the scanning and enumeration phases of pentesting. SPARTA scans and brute-forces various protocols. It has many more features! Happy Hacking.
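+
+As a footnote to the installation step above ("place the sparta file in /usr/bin/ and make it executable"), one possible way to do that is with a small wrapper script. This is only a sketch: it assumes the repository was cloned to /usr/share/sparta and that sparta.py is the entry point, so adjust the paths to match your clone:
+```
+# Create a launcher that runs SPARTA from its clone directory (path is an assumption)
+printf '#!/bin/sh\ncd /usr/share/sparta && exec python sparta.py "$@"\n' | sudo tee /usr/bin/sparta
+sudo chmod +x /usr/bin/sparta
+```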
+
+--------------------------------------------------------------------------------
+
+via: https://gbhackers.com/sparta-network-penetration-testing-gui-toolkit/
+
+作者:[Balaganesh][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://gbhackers.com/author/balaganesh/
+[1]:https://gbhackers.com/network-penetration-testing-checklist-examples/
+[2]:https://github.com/SECFORCE/sparta/archive/master.zip
+[3]:https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-526.png?resize=696%2C495&ssl=1
+[4]:https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-526.png?ssl=1
+[5]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-527.png?resize=696%2C516&ssl=1
+[6]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-527.png?ssl=1
+[7]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-528.png?resize=696%2C519&ssl=1
+[8]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-528.png?ssl=1
+[9]:https://i1.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-529.png?resize=696%2C525&ssl=1
+[10]:https://i1.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-529.png?ssl=1
+[11]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-531.png?resize=696%2C523&ssl=1
+[12]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-531.png?ssl=1
diff --git a/sources/tech/20180116 Why building a community is worth the extra effort.md b/sources/tech/20180116 Why building a community is worth the extra effort.md
new file mode 100644
index 0000000000..ec971e84eb
--- /dev/null
+++ b/sources/tech/20180116 Why building a community is worth the extra effort.md
@@ -0,0 +1,66 @@
+Why building a community is worth the extra effort
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_brandbalance.png?itok=XSQ1OU16)
+
+When we launched [Nethesis][1] in 2003, we were just system integrators. We only used existing open source projects. Our business model was clear: Add multiple forms of value to those projects: know-how, documentation for the Italian market, extra modules, professional support, and training courses. We gave back to upstream projects as well, through upstream code contributions and by participating in their communities.
+
+Times were different then. We couldn't use the term "open source" too loudly. People associated it with words like: "nerdy," "no value" and, worst of all, "free." Not too good for a business.
+
+On a Saturday in 2010, with pasties and espresso in hand, the Nethesis staff were discussing how to move things forward (hey, we like to eat and drink while we innovate!). In spite of the momentum working against us, we decided not to change course. In fact, we decided to push harder--to make open source, and an open way of working, a successful model for running a business.
+
+Over the years, we've proven that model's potential. And one thing has been key to our success: community.
+
+In this three-part series, I'll explain the important role community plays in an open organization's existence. I'll explore why an organization would want to build a community, and discuss how to build one--because I really do believe it's the best way to generate new innovations today.
+
+### The crazy idea
+
+Together with the Nethesis guys, we decided to build our own open source project: our own operating system, built on top of CentOS (because we didn't want to reinvent the wheel). We assumed that we had the experience, know-how, and workforce to achieve it. We felt brave.
+
+And we very much wanted to build an operating system called [NethServer][2] with one mission: making a sysadmin's life easier with open source. We knew we could create a Linux distribution for a server that would be more accessible, easier to adopt, and simpler to understand than anything currently offered.
+
+Above all, though, we decided to create a real, 100% open project with three primary rules:
+
+  * completely free to download,
+  * openly developed, and
+  * community-driven
+
+
+
+That last one is important. We were a company; we were able to develop it by ourselves. We would have been more effective (and made quicker decisions) if we'd done the work in-house. It would be so simple, like any other company in Italy.
+
+But we were so deeply into open source culture that we chose a different path.
+
+We really wanted as many people as possible around us, around the product, and around the company. We wanted as many perspectives on the work as possible. We realized: Alone, you can go fast--but if you want to go far, you need to go together.
+
+So we decided to build a community instead.
+
+### What next?
+
+We realized that creating a community has so many benefits. For example, if the people who use your product are really involved in the project, they will provide feedback and use cases, write documentation, catch bugs, compare with other products, suggest features, and contribute to development. All of this generates innovations, attracts contributors and customers, and expands your product's user base.
+
+But quickly the question arose: How can we build a community? We didn't know how to achieve that. We'd participated in many communities, but we'd never built one.
+
+We were good at code--not with people. And we were a company, an organization with very specific priorities. So how were we going to build a community and foster good relationships between the company and the community itself?
+
+We did the first thing you had to do: study. We learned from experts, blogs, and lots of books. We experimented. We failed many times, collected data from the outcomes, and tested them again.
+
+Eventually we learned the golden rule of community management: There is no golden rule of community management.
+
+People are too complex and communities are too different to have one rule "to rule them all."
+
+One thing I can say, however, is that a healthy relationship between a community and a company is always a process of give and take. In my next article, I'll discuss what your organization should expect to give if it wants a flourishing and innovating community.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/18/1/why-build-community-1 + +作者:[Alessio Fattorini][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/alefattorini +[1]:http://www.nethesis.it/ +[2]:http://www.nethserver.org/ diff --git a/sources/tech/20180117 Avoiding Server Disaster.md b/sources/tech/20180117 Avoiding Server Disaster.md new file mode 100644 index 0000000000..cb88fe20d9 --- /dev/null +++ b/sources/tech/20180117 Avoiding Server Disaster.md @@ -0,0 +1,125 @@ +Avoiding Server Disaster +====== + +Worried that your server will go down? You should be. Here are some disaster-planning tips for server owners. + +If you own a car or a house, you almost certainly have insurance. Insurance seems like a huge waste of money. You pay it every year and make sure that you get the best possible price for the best possible coverage, and then you hope you never need to use the insurance. Insurance seems like a really bad deal—until you have a disaster and realize that had it not been for the insurance, you might have been in financial ruin. + +Unfortunately, disasters and mishaps are a fact of life in the computer industry. And so, just as you pay insurance and hope never to have to use it, you also need to take time to ensure the safety and reliability of your systems—not because you want disasters to happen, or even expect them to occur, but rather because you have to. + +If your website is an online brochure for your company and then goes down for a few hours or even days, it'll be embarrassing and annoying, but not financially painful. But, if your website is your business, when your site goes down, you're losing money. If that's the case, it's crucial to ensure that your server and software are not only unlikely to go down, but also easily recoverable if and when that happens. + +Why am I writing about this subject? Well, let's just say that this particular problem hit close to home for me, just before I started to write this article. After years of helping clients around the world to ensure the reliability of their systems, I made the mistake of not being as thorough with my own. ("The shoemaker's children go barefoot", as the saying goes.) This means that just after launching my new online product for Python developers, a seemingly trivial upgrade turned into a disaster. The precautions I put in place, it turns out, weren't quite enough—and as I write this, I'm still putting my web server together. I'll survive, as will my server and business, but this has been a painful and important lesson—one that I'll do almost anything to avoid repeating in the future. + +So in this article, I describe a number of techniques I've used to keep servers safe and sound through the years, and to reduce the chances of a complete meltdown. You can think of these techniques as insurance for your server, so that even if something does go wrong, you'll be able to recover fairly quickly. + +I should note that most of the advice here assumes no redundancy in your architecture—that is, a single web server and (at most) a single database server. If you can afford to have a bunch of servers of each type, these sorts of problems tend to be much less frequent. However, that doesn't mean they go away entirely. 
Besides, although people like to talk about heavy-duty web applications that require massive iron in order to run, the fact is that many businesses run on small, one- and two-computer servers. Moreover, those businesses don't need more than that; the ROI (return on investment) they'll get from additional servers cannot be justified. However, the ROI from a good backup and recovery plan is huge, and thus worth the investment. + +### The Parts of a Web Application + +Before I can talk about disaster preparation and recovery, it's important to consider the different parts of a web application and what those various parts mean for your planning. + +For many years, my website was trivially small and simple. Even if it contained some simple programs, those generally were used for sending email or for dynamically displaying different assets to visitors. The entire site consisted of some static HTML, images, JavaScript and CSS. No database or other excitement was necessary. + +At the other end of the spectrum, many people have full-blown web applications, sitting on multiple servers, with one or more databases and caches, as well as HTTP servers with extensively edited configuration files. + +But even when considering those two extremes, you can see that a web application consists of only a few parts: + +* The application software itself. + +* Static assets for that application. + +* Configuration file(s) for the HTTP server(s). + +* Database configuration files. + +* Database schema and contents. + +Assuming that you're using a high-level language, such as Python, Ruby or JavaScript, everything in this list either is a file or can be turned into one. (All databases make it possible to "dump" their contents onto disk, into a format that then can be loaded back into the database server.) + +Consider a site containing only application software, static assets and configuration files. (In other words, no database is involved.) In many cases, such a site can be backed up reliably in Git. Indeed, I prefer to keep my sites in Git, backed up on a commercial hosting service, such as GitHub or Bitbucket, and then deployed using a system like Capistrano. + +In other words, you develop the site on your own development machine. Whenever you are happy with a change that you've made, you commit the change to Git (on your local machine) and then do a git push to your central repository. In order to deploy your application, you then use Capistrano to do a cap deploy, which reads the data from the central repository, puts it into the appropriate place on the server's filesystem, and you're good to go. + +This system keeps you safe in a few different ways. The code itself is located in at least three locations: your development machine, the server and the repository. And those central repositories tend to be fairly reliable, if only because it's in the financial interest of the hosting company to ensure that things are reliable. + +I should add that in such a case, you also should include the HTTP server's configuration files in your Git repository. Those files aren't likely to change very often, but I can tell you from experience, if you're recovering from a crisis, the last thing you want to think about is how your Apache configuration files should look. Copying those files into your Git repository will work just fine. + +### Backing Up Databases + +You could argue that the difference between a "website" and a "web application" is a database. 
Databases long have powered the back ends of many web applications and for good reason—they allow you to store and retrieve data reliably and flexibly. The power that modern open-source databases provide was unthinkable just a decade or two ago, and there's no reason to think that they'll be any less reliable in the future.
+
+And yet, just because your database is pretty reliable doesn't mean that it won't have problems. This means you're going to want to keep a snapshot ("dump") of the database's contents around, in case the database server corrupts information, and you need to roll back to a previous version.
+
+My favorite solution for such a problem is to dump the database on a regular basis, preferably hourly. Here's a shell script I've used, in one form or another, for creating such regular database dumps:

+```
+#!/bin/sh
+
+BACKUP_ROOT="/home/database-backups/"
+YEAR=`/bin/date +'%Y'`
+MONTH=`/bin/date +'%m'`
+DAY=`/bin/date +'%d'`
+
+DIRECTORY="$BACKUP_ROOT/$YEAR/$MONTH/$DAY"
+USERNAME=dbuser
+DATABASE=dbname
+HOST=localhost
+PORT=3306
+
+/bin/mkdir -p $DIRECTORY
+
+/usr/bin/mysqldump -h $HOST --databases $DATABASE -u $USERNAME |
+    /bin/gzip --best --verbose > $DIRECTORY/$DATABASE-dump.gz
+```
+
+The above shell script starts off by defining a bunch of variables, from the directory in which I want to store the backups, to the parts of the date (stored in $YEAR, $MONTH and $DAY). This is so I can have a separate directory for each day of the month. I could, of course, go further and have separate directories for each hour, but I've found that I rarely need more than one backup from a day.
+
+Once I have defined those variables, I then use the mkdir command to create a new directory. The -p option tells mkdir that if necessary, it should create all of the directories it needs such that the entire path will exist.
+
+Finally, I then run the database's "dump" command. In this particular case, I'm using MySQL, so I'm using the mysqldump command. The output from this command is a stream of SQL that can be used to re-create the database. I thus take the output from mysqldump and pipe it into gzip, which compresses the output file. Finally, the resulting dumpfile is placed, in compressed form, inside the daily backup directory.
+
+Depending on the size of your database and the amount of disk space you have on hand, you'll have to decide just how often you want to run dumps and how often you want to clean out old ones. I know from experience that dumping every hour can cause some load problems. On one virtual machine I've used, the overall administration team was unhappy that I was dumping and compressing every hour, which they saw as an unnecessary use of system resources.
+
+If you're worried your system will run out of disk space, you might well want to run a space-checking program that'll alert you when the filesystem is low on free space. In addition, you can run a cron job that uses find to erase all dumpfiles from before a certain cutoff date. I'm always a bit nervous about programs that automatically erase backups, so I generally prefer not to do this. Rather, I run a program that warns me if the disk usage is going above 85% (which is usually low enough to ensure that I can fix the problem in time, even if I'm on a long flight). Then I can go in and remove the problematic files by hand.
+
+When you back up your database, you should be sure to back up the configuration for that database as well.
The database schema and data, which are part of the dumpfile, are certainly important. However, if you find yourself having to re-create your server from scratch, you'll want to know precisely how you configured the database server, with a particular emphasis on the filesystem configuration and memory allocations. I tend to use PostgreSQL for most of my work, and although postgresql.conf is simple to understand and configure, I still like to keep it around with my dumpfiles.
+
+Another crucial thing to do is to check your database dumps occasionally to be sure that they are working the way you want. It turns out that the backups I thought I was making weren't actually happening, in no small part because I had modified the shell script and hadn't double-checked that it was creating useful backups. Occasionally pulling out one of your dumpfiles and restoring it to a separate (and offline!) database to check its integrity is a good practice, both to ensure that the dump is working and that you remember how to restore it in the case of an emergency.
+
+### Storing Backups
+
+But wait. It might be great to have these backups, but what if the server goes down entirely? In the case of the code, I mentioned ensuring that it was located on more than one machine, protecting its integrity. By contrast, your database dumps are now on the server, such that if the server fails, your database dumps will be inaccessible.
+
+This means you'll want to have your database dumps stored elsewhere, preferably automatically. How can you do that?
+
+There are a few relatively easy and inexpensive solutions to this problem. If you have two servers—ideally in separate physical locations—you can use rsync to copy the files from one to the other. Don't rsync the database's actual files, since those might get corrupted in transfer and aren't designed to be copied when the server is running. By contrast, the dumpfiles that you have created are more than able to go elsewhere. Setting up a remote server, with a user specifically for handling these backup transfers, shouldn't be too hard and will go a long way toward ensuring the safety of your data.
+
+I should note that using rsync in this way basically requires that you set up passwordless SSH, so that you can transfer without having to be physically present to enter the password.
+
+Another possible solution is Amazon's Simple Storage Service (S3), which offers astonishing amounts of disk space at very low prices. I know that many companies use S3 as a simple (albeit slow) backup system. You can set up a cron job to run a program that copies the contents of a particular database dumpfile directory onto a particular server. The assumption here is that you're not ever going to use these backups, meaning that S3's slow searching and access will not be an issue once you're working on the server.
+
+Similarly, you might consider using Dropbox. Dropbox is best known for its desktop client, but it has a "headless", text-based client that can be used on Linux servers without a GUI connected. One nice advantage of Dropbox is that you can share a folder with any number of people, which means you can have Dropbox distribute your backup databases everywhere automatically, including to a number of people on your team. The backups arrive in their Dropbox folders, so the dumps end up copied to several machines without any extra work on your part.
+
+Finally, if you're running a WordPress site, you might want to consider VaultPress, a for-pay backup system.
I must admit that in the weeks before I took my server down with a database backup error, I kept seeing ads in WordPress for VaultPress. "Who would buy that?", I asked myself, thinking that I'm smart enough to do backups myself. Of course, after disaster occurred and my database was ruined, I realized that $30/year to back up all of my data is cheap, and I should have done it before.
+
+### Conclusion
+
+When it comes to your servers, think less like an optimistic programmer and more like an insurance agent. Perhaps disaster won't strike, but if it does, will you be able to recover? Making sure that even if your server is completely unavailable, you'll be able to bring up your program and any associated database is crucial.
+
+My preferred solution involves combining a Git repository for code and configuration files, distributed across several machines and services. For the databases, however, it's not enough to dump your database; you'll need to get that dump onto a separate machine, and preferably test the backup file on a regular basis. That way, even if things go wrong, you'll be able to get back up in no time.
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxjournal.com/content/avoiding-server-disaster
+
+作者:[Reuven M.Lerner][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:http://www.linuxjournal.com/user/1000891
diff --git a/sources/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md b/sources/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md
new file mode 100644
index 0000000000..40f6c926f1
--- /dev/null
+++ b/sources/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md
@@ -0,0 +1,252 @@
+translated by cyleft
+
+How To Manage Vim Plugins Using Vundle On Linux
+======
+![](https://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-4-720x340.png)
+
+**Vim**, undoubtedly, is one of the most powerful and versatile tools for manipulating text files, managing system configuration files and writing code. The functionality of Vim can be extended to different levels using plugins. Usually, all plugins and additional configuration files are stored in the **~/.vim** directory. Since all plugin files are stored in a single directory, the files from different plugins get mixed up together as you install more plugins. Hence, tracking and managing all of them becomes a daunting task. This is where Vundle comes to the rescue. Vundle, short for **V**im B**undle**, is an extremely useful plugin to manage Vim plugins.
+
+Vundle creates a separate directory tree for each plugin you install and stores the additional configuration files in the respective plugin directory. Therefore, no files get mixed up with one another. In a nutshell, Vundle allows you to install new plugins, configure existing plugins, update configured plugins, search for installed plugins and clean up unused plugins. All actions can be done in a single keypress with interactive mode. In this brief tutorial, let me show you how to install Vundle and how to manage Vim plugins using Vundle in GNU/Linux.
+
+### Installing Vundle
+
+If you need Vundle, I assume you have already installed **vim** on your system. If not, install vim and **git** (to download vundle).
Both packages are available in the official repositories of most GNU/Linux distributions. For instance, you can use the following command to install these packages on Debian based systems.
+```
+sudo apt-get install vim git
+```
+
+**Download Vundle**
+
+Clone the Vundle GitHub repository:
+```
+git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim
+```
+
+**Configure Vundle**
+
+To tell vim to use the new plugin manager, we need to create the **~/.vimrc** file. This file is required to install, update, configure and remove plugins.
+```
+vim ~/.vimrc
+```
+
+Put the following lines at the top of this file:
+```
+set nocompatible " be iMproved, required
+filetype off " required
+
+" set the runtime path to include Vundle and initialize
+set rtp+=~/.vim/bundle/Vundle.vim
+call vundle#begin()
+" alternatively, pass a path where Vundle should install plugins
+"call vundle#begin('~/some/path/here')
+
+" let Vundle manage Vundle, required
+Plugin 'VundleVim/Vundle.vim'
+
+" The following are examples of different formats supported.
+" Keep Plugin commands between vundle#begin/end.
+" plugin on GitHub repo
+Plugin 'tpope/vim-fugitive'
+" plugin from http://vim-scripts.org/vim/scripts.html
+" Plugin 'L9'
+" Git plugin not hosted on GitHub
+Plugin 'git://git.wincent.com/command-t.git'
+" git repos on your local machine (i.e. when working on your own plugin)
+Plugin 'file:///home/gmarik/path/to/plugin'
+" The sparkup vim script is in a subdirectory of this repo called vim.
+" Pass the path to set the runtimepath properly.
+Plugin 'rstacruz/sparkup', {'rtp': 'vim/'}
+" Install L9 and avoid a Naming conflict if you've already installed a
+" different version somewhere else.
+" Plugin 'ascenator/L9', {'name': 'newL9'}
+
+" All of your Plugins must be added before the following line
+call vundle#end() " required
+filetype plugin indent on " required
+" To ignore plugin indent changes, instead use:
+"filetype plugin on
+"
+" Brief help
+" :PluginList - lists configured plugins
+" :PluginInstall - installs plugins; append `!` to update or just :PluginUpdate
+" :PluginSearch foo - searches for foo; append `!` to refresh local cache
+" :PluginClean - confirms removal of unused plugins; append `!` to auto-approve removal
+"
+" see :h vundle for more details or wiki for FAQ
+" Put your non-Plugin stuff after this line
+```
+
+The lines which are marked as "required" are Vundle's requirements. The rest of the lines are just examples. You can remove those lines if you don't want to install the specified plugins. Once you have finished, type **:wq** to save and close the file.
+
+Finally, open vim:
+```
+vim
+```
+
+And type the following to install the plugins.
+```
+:PluginInstall
+```
+
+[![][1]][2]
+
+A new split window will open and all the plugins which we added in the .vimrc file will be installed automatically.
+
+[![][1]][3]
+
+When the installation is completed, you can delete the buffer cache and close the split window by typing the following command:
+```
+:bdelete
+```
+
+You can also install the plugins without opening vim using the following command from the Terminal:
+```
+vim +PluginInstall +qall
+```
+
+For those using the [**fish shell**][4], add the following line to your **.vimrc** file.
+```
+set shell=/bin/bash
+```
+
+### Manage Vim Plugins Using Vundle
+
+**Add New Plugins**
+
+First, search for the available plugins using the command:
+```
+:PluginSearch
+```
+
+To refresh the local list from the vimscripts site, add **"!"** at the end.
+```
+:PluginSearch!
+```
+
+A new split window will open, listing all available plugins.
+
+[![][1]][5]
+
+You can also narrow down your search by directly specifying the name of the plugin, like below.
+```
+:PluginSearch vim
+```
+
+This will list the plugin(s) that contain the word "vim".
+
+You can, of course, specify the exact plugin name, like below.
+```
+:PluginSearch vim-dasm
+```
+
+To install a plugin, move the cursor to the correct line and hit **i**. Now, the selected plugin will be installed.
+
+[![][1]][6]
+
+Similarly, install all the plugins you want to have on your system. Once installed, delete the Vundle buffer cache using the command:
+```
+:bdelete
+```
+
+Now the plugin is installed. To make it autoload correctly, we need to add the installed plugin's name to the .vimrc file.
+
+To do so, type:
+```
+:e ~/.vimrc
+```
+
+Add the following line.
+```
+[...]
+Plugin 'vim-dasm'
+[...]
+```
+
+Replace vim-dasm with your plugin name. Then, hit the ESC key and type **:wq** to save the changes and close the file.
+
+Please note that all of your Plugins must be added before the following line in your .vimrc file.
+```
+[...]
+filetype plugin indent on
+```
+
+**List installed Plugins**
+
+To list installed plugins, type the following from the vim editor:
+```
+:PluginList
+```
+
+[![][1]][7]
+
+**Update plugins**
+
+To update all installed plugins, type:
+```
+:PluginUpdate
+```
+
+To reinstall all plugins, type:
+```
+:PluginInstall!
+```
+
+**Uninstall plugins**
+
+First, list out all installed plugins:
+```
+:PluginList
+```
+
+Then place the cursor on the correct line and press **SHIFT+d**.
+
+[![][1]][8]
+
+Then, edit your .vimrc file:
+```
+:e ~/.vimrc
+```
+
+And delete the Plugin entry. Finally, type **:wq** to save the changes and exit the vim editor.
+
+Alternatively, you can uninstall a plugin by removing its line from the .vimrc file and running:
+```
+:PluginClean
+```
+
+This command will remove all plugins which are no longer present in your .vimrc but are still present in the bundle directory.
+
+At this point, you should have learned the basics of managing plugins using Vundle. For details, refer to the help section by typing the following in your vim editor.
+```
+:h vundle
+```
+
+And, that's all for now. I will be here soon with another useful guide. Until then, stay tuned with OSTechNix!
+
+Cheers!
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/manage-vim-plugins-using-vundle-linux/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-1.png ()
+[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-2.png ()
+[4]:https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/
+[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-3.png ()
+[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-4-2.png ()
+[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-5-1.png ()
+[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-6.png ()
diff --git a/sources/tech/20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md b/sources/tech/20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md
new file mode 100644
index 0000000000..9ddb25b40b
--- /dev/null
+++ b/sources/tech/20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md
@@ -0,0 +1,82 @@
+Configuring MSMTP On Ubuntu 16.04 (Again)
+======
+This post exists as a copy of what I had on my previous blog about configuring MSMTP on Ubuntu 16.04; I'm posting it as-is for posterity, and have no idea if it'll work on later versions. As I'm not hosting my own Ubuntu/MSMTP server anymore I can't see any updates being made to this, but if I ever do have to set this up again I'll create an updated post! Anyway, here's what I had…
+
+I previously wrote an article about configuring msmtp on Ubuntu 12.04, but as I hinted at in a previous post, that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you're using Apache as the web server, but I'm sure it shouldn't be too different if your web server of choice is something else.
+
+I use [msmtp][1] for sending emails from this blog to notify me of comments and upgrades etc. Here I'm going to document how I configured it to send emails via a Google Apps account, although this should work with a standard Gmail account too.
+
+To begin, we need to install 3 packages:
+`sudo apt-get install msmtp msmtp-mta ca-certificates`
+Once these are installed, a default config is required. By default msmtp will look at `/etc/msmtprc`, so I created that using vim, though any text editor will do the trick. This file looked something like this:
+```
+# Set defaults.
+defaults
+# Enable or disable TLS/SSL encryption.
+tls on
+tls_starttls on
+tls_trust_file /etc/ssl/certs/ca-certificates.crt
+# Setup WP account's settings.
+account 
+host smtp.gmail.com
+port 587
+auth login
+user 
+password 
+from 
+logfile /var/log/msmtp/msmtp.log
+
+account default : 
+```
+
+Any of the uppercase items (i.e. ``) are things that need replacing specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.
+
+Once that file is saved, we'll update the permissions on the above configuration file -- msmtp won't run if the permissions on that file are too open -- and create the directory for the log file.
+```
+sudo mkdir /var/log/msmtp
+sudo chown -R www-data:adm /var/log/msmtp
+sudo chmod 0600 /etc/msmtprc
+```
+
+Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don't get too large and to keep the log directory a little tidier. To do this, we create `/etc/logrotate.d/msmtp` and configure it with the following file. Note that this is optional; you may choose not to do this, or you may choose to configure the logs differently.
+```
+/var/log/msmtp/*.log {
+rotate 12
+monthly
+compress
+missingok
+notifempty
+}
+```
+
+Now that the logging is configured, we need to tell PHP to use msmtp by editing `/etc/php/7.0/apache2/php.ini` and updating the sendmail path from
+`sendmail_path =`
+to
+`sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a -t"`
+Here I did run into an issue where even though I specified the account name it wasn't sending emails correctly when I tested it. This is why the line `account default : ` was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved and run `sudo service apache2 restart`, then run `php -a` and execute the following:
+```
+mail ('personal@email.com', 'Test Subject', 'Test body text');
+exit();
+```
+
+Any errors that occur at this point will be displayed in the output, so it should be relatively easy to diagnose any problems after the test. If all is successful, you should now be able to use PHP's sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).
+
+I make no claims that this is the most secure configuration, so if you come across this and realise it's grossly insecure or something is drastically wrong, please let me know and I'll update it accordingly.
+
+
+--------------------------------------------------------------------------------
+
+via: https://codingproductivity.wordpress.com/2018/01/18/configuring-msmtp-on-ubuntu-16-04-again/
+
+作者:[JOE][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://codingproductivity.wordpress.com/author/joeb454/
+[1]:http://msmtp.sourceforge.net/
diff --git a/sources/tech/20180118 Getting Started with ncurses.md b/sources/tech/20180118 Getting Started with ncurses.md
new file mode 100644
index 0000000000..d02ad61785
--- /dev/null
+++ b/sources/tech/20180118 Getting Started with ncurses.md
@@ -0,0 +1,213 @@
+Getting Started with ncurses
+======
+How to use curses to draw to the terminal screen.
+
+While graphical user interfaces are very cool, not every program needs to run with a point-and-click interface. For example, the venerable vi editor ran in plain-text terminals long before the first GUI.
+
+The vi editor is one example of a screen-oriented program that draws in "text" mode, using a library called curses, which provides a set of programming interfaces to manipulate the terminal screen. The curses library originated in BSD UNIX, but Linux systems provide this functionality through the ncurses library.
+
+[For a "blast from the past" on ncurses, see ["ncurses: Portable Screen-Handling for Linux"][1], September 1, 1995, by Eric S. Raymond.]
+
+Creating programs that use curses is actually quite simple. In this article, I show an example program that leverages curses to draw to the terminal screen.
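+
+Before we get to that program, here is a minimal "hello world" sketch of my own (not part of the article's code) that shows the basic curses lifecycle: initialize the library, draw, refresh, wait and clean up.
+```
+/* hello.c -- a minimal curses sketch */
+#include <curses.h>
+
+int main(void)
+{
+    initscr();                          /* set up the curses environment */
+    cbreak();                           /* read one character at a time */
+    noecho();                           /* don't echo input to the screen */
+    mvaddstr(0, 0, "Hello, curses!");   /* draw text at row 0, column 0 */
+    refresh();                          /* push the update to the terminal */
+    getch();                            /* wait for a keypress */
+    endwin();                           /* restore the terminal */
+    return 0;
+}
+```
+
+Compile it with `gcc -o hello hello.c -lncurses` and you have a working, if tiny, curses program. The real example below does the same things, just with more drawing in between.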
+
+### Sierpinski's Triangle
+
+One simple way to demonstrate a few curses functions is by generating Sierpinski's Triangle. If you aren't familiar with this method to generate Sierpinski's Triangle, here are the rules:
+
+1. Set three points that define a triangle.
+
+2. Randomly select a point anywhere (x,y).
+
+Then:
+
+1. Randomly select one of the triangle's points.
+
+2. Set the new x,y to be the midpoint between the previous x,y and the triangle point.
+
+3. Repeat.
+
+So with those instructions, I wrote this program to draw Sierpinski's Triangle to the terminal screen using the curses functions:
+
+```
+
+ 1 /* triangle.c */
+ 2
+ 3 #include <curses.h>
+ 4 #include <stdlib.h>
+ 5
+ 6 #include "getrandom_int.h"
+ 7
+ 8 #define ITERMAX 10000
+ 9
+ 10 int main(void)
+ 11 {
+ 12 long iter;
+ 13 int yi, xi;
+ 14 int y[3], x[3];
+ 15 int index;
+ 16 int maxlines, maxcols;
+ 17
+ 18 /* initialize curses */
+ 19
+ 20 initscr();
+ 21 cbreak();
+ 22 noecho();
+ 23
+ 24 clear();
+ 25
+ 26 /* initialize triangle */
+ 27
+ 28 maxlines = LINES - 1;
+ 29 maxcols = COLS - 1;
+ 30
+ 31 y[0] = 0;
+ 32 x[0] = 0;
+ 33
+ 34 y[1] = maxlines;
+ 35 x[1] = maxcols / 2;
+ 36
+ 37 y[2] = 0;
+ 38 x[2] = maxcols;
+ 39
+ 40 mvaddch(y[0], x[0], '0');
+ 41 mvaddch(y[1], x[1], '1');
+ 42 mvaddch(y[2], x[2], '2');
+ 43
+ 44 /* initialize yi,xi with random values */
+ 45
+ 46 yi = getrandom_int() % maxlines;
+ 47 xi = getrandom_int() % maxcols;
+ 48
+ 49 mvaddch(yi, xi, '.');
+ 50
+ 51 /* iterate the triangle */
+ 52
+ 53 for (iter = 0; iter < ITERMAX; iter++) {
+ 54 index = getrandom_int() % 3;
+ 55
+ 56 yi = (yi + y[index]) / 2;
+ 57 xi = (xi + x[index]) / 2;
+ 58
+ 59 mvaddch(yi, xi, '*');
+ 60 refresh();
+ 61 }
+ 62
+ 63 /* done */
+ 64
+ 65 mvaddstr(maxlines, 0, "Press any key to quit");
+ 66
+ 67 refresh();
+ 68
+ 69 getch();
+ 70 endwin();
+ 71
+ 72 exit(0);
+ 73 }
+
+```
+
+Let me walk through that program by way of explanation. First, getrandom_int() is my own wrapper to the Linux getrandom() system call, but it's guaranteed to return a positive integer value. Otherwise, you should be able to identify the code lines that initialize and then iterate Sierpinski's Triangle, based on the above rules. Aside from that, let's look at the curses functions I used to draw the triangle on a terminal.
+
+Most curses programs will start with these four instructions. 1) The initscr() function determines the terminal type, including its size and features, and sets up the curses environment based on what the terminal can support. 2) The cbreak() function disables line buffering and sets curses to take one character at a time. 3) The noecho() function tells curses not to echo the input back to the screen, and 4) the clear() function clears the screen:
+
+```
+
+ 20 initscr();
+ 21 cbreak();
+ 22 noecho();
+ 23
+ 24 clear();
+
+```
+
+The program then sets a few variables to define the three points that define a triangle. Note the use of LINES and COLS here, which were set by initscr(). These values tell the program how many lines and columns exist on the terminal. Screen coordinates start at zero, so the top-left of the screen is row 0, column 0. The bottom-right of the screen is row LINES - 1, column COLS - 1. To make this easy to remember, my program sets these values in the variables maxlines and maxcols, respectively.
+
+Two simple methods to draw text on the screen are the addch() and addstr() functions. To put text at a specific screen location, use the related mvaddch() and mvaddstr() functions.
My program uses these functions in several places. First, the program draws the three points that define the triangle, labeled "0", "1" and "2":
+
+```
+
+ 40 mvaddch(y[0], x[0], '0');
+ 41 mvaddch(y[1], x[1], '1');
+ 42 mvaddch(y[2], x[2], '2');
+
+```
+
+To draw the random starting point, the program makes a similar call:
+
+```
+
+ 49 mvaddch(yi, xi, '.');
+
+```
+
+And to draw each successive point in Sierpinski's Triangle iteration:
+
+```
+
+ 59 mvaddch(yi, xi, '*');
+
+```
+
+When the program is done, it displays a helpful message at the lower-left corner of the screen (at row maxlines, column 0):
+
+```
+
+ 65 mvaddstr(maxlines, 0, "Press any key to quit");
+
+```
+
+It's important to note that curses maintains a version of the screen in memory and updates the screen only when you ask it to. This provides greater performance, especially if you want to display a lot of text to the screen. This is because curses can update only those parts of the screen that changed since the last update. To cause curses to update the terminal screen, use the refresh() function.
+
+In my example program, I've chosen to update the screen after "drawing" each successive point in Sierpinski's Triangle. By doing so, users should be able to observe each iteration in the triangle.
+
+Before exiting, I use the getch() function to wait for the user to press a key. Then I call endwin() to exit the curses environment and return the terminal screen to normal control:
+
+```
+
+ 69 getch();
+ 70 endwin();
+
+```
+
+### Compiling and Sample Output
+
+Now that you have your first sample curses program, it's time to compile and run it. Remember that Linux systems implement the curses functionality via the ncurses library, so you need to link with -lncurses when you compile—for example:
+
+```
+
+$ ls
+getrandom_int.c getrandom_int.h triangle.c
+
+$ gcc -Wall -lncurses -o triangle triangle.c getrandom_int.c
+
+```
+
+Running the triangle program on a standard 80x24 terminal is not very interesting. You just can't see much detail in Sierpinski's Triangle at that resolution. If you run a terminal window and set a very small font size, you can see the fractal nature of Sierpinski's Triangle more easily. On my system, the output looks like Figure 1.
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/triangle.png)
+
+Figure 1. Output of the triangle Program
+
+Despite the random nature of the iteration, every run of Sierpinski's Triangle will look pretty much the same. The only difference will be where the first few points are drawn to the screen. In this example, you can see the single dot that starts the triangle, near point 1. It looks like the program picked point 2 next, and you can see the asterisk halfway between the dot and the "2". And it looks like the program randomly picked point 2 for the next random number, because you can see the asterisk halfway between the first asterisk and the "2". From there, it's impossible to tell how the triangle was drawn, because all of the successive dots fall within the triangle area.
+
+### Starting to Learn ncurses
+
+This program is a simple example of how to use the curses functions to draw characters to the screen. You can do so much more with curses, depending on what you need your program to do. In a follow-up article, I will show how to use curses to allow the user to interact with the screen.
If you are interested in getting a head start with curses, I encourage you to read Pradeep Padala's ["NCURSES Programming HOWTO"][2], at the Linux Documentation Project.
+
+### About the author
+
+Jim Hall is an advocate for free and open-source software, best known for his work on the FreeDOS Project, and he also focuses on the usability of open-source software. Jim is the Chief Information Officer at Ramsey County, Minn.
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxjournal.com/content/getting-started-ncurses
+
+作者:[Jim Hall][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxjournal.com/users/jim-hall
+[1]:http://www.linuxjournal.com/article/1124
+[2]:http://tldp.org/HOWTO/NCURSES-Programming-HOWTO
diff --git a/sources/tech/20180118 How To List and Delete iptables Firewall Rules.md b/sources/tech/20180118 How To List and Delete iptables Firewall Rules.md
new file mode 100644
index 0000000000..b6b875ad11
--- /dev/null
+++ b/sources/tech/20180118 How To List and Delete iptables Firewall Rules.md
@@ -0,0 +1,106 @@
+How To List and Delete iptables Firewall Rules
+======
+![How To List and Delete iptables Firewall Rules][1]
+
+We'll show you how to list and delete iptables firewall rules. Iptables is a command line utility that allows system administrators to configure the packet filtering rule set on Linux. iptables requires elevated privileges to operate and must be executed by user root, otherwise it fails to function.
+
+### How to List iptables Firewall Rules
+
+Iptables allows you to list all the rules which are already added to the packet filtering rule set. In order to be able to check this you need to have SSH access to the server. [Connect to your Linux VPS via SSH][2] and run the following command:
+```
+sudo iptables -nvL
+```
+
+To run the command above, your user needs to have `sudo` privileges. Otherwise, you need to [add a sudo user on your Linux VPS][3] or use the root user.
+
+If there are no rules added to the packet filtering ruleset, the output should be similar to the one below:
+```
+Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+```
+
+Since NAT (Network Address Translation) can also be configured via iptables, you can use iptables to list the NAT rules:
+```
+sudo iptables -t nat -n -L -v
+```
+
+The output will be similar to the one below if there are no rules added:
+```
+Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+```
+
+If this is the case, we recommend checking our tutorial on how to [Set Up a Firewall with iptables on Ubuntu and CentOS][4] to make your server more secure.
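+
+One more listing option worth knowing about (my own addition, using standard iptables flags) is printing the rules together with their line numbers, which becomes handy when deleting rules by number as described in the next section:
+```
+sudo iptables -L INPUT -n --line-numbers
+```
+
+The number in the first column is the `rulenum` that the delete syntax below expects.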
+
+### How to Delete iptables Firewall Rules
+
+At some point, you may need to remove a specific iptables firewall rule on your server. For that purpose, you need to use the following syntax:
+```
+iptables [-t table] -D chain rulenum
+```
+
+For example, if you have a firewall rule to block all connections from 111.111.111.111 to your server on port 22 and you want to remove that rule, you can use the following command:
+```
+sudo iptables -D INPUT -s 111.111.111.111 -p tcp --dport 22 -j DROP
+```
+
+Now that you have removed the iptables firewall rule, you need to save the changes to make them persistent.
+
+In case you are using an [Ubuntu VPS][5], you need to install an additional package for that purpose. To install the required package, use the following command:
+```
+sudo apt-get install iptables-persistent
+```
+
+On **Ubuntu 14.04** you can save and reload the firewall rules using the commands below:
+```
+sudo /etc/init.d/iptables-persistent save
+sudo /etc/init.d/iptables-persistent reload
+```
+
+On **Ubuntu 16.04** use the following commands instead:
+```
+sudo netfilter-persistent save
+sudo netfilter-persistent reload
+```
+
+If you are using a [CentOS VPS][6], you can save the changes using the command below:
+```
+service iptables save
+```
+
+Of course, you don't have to list and delete iptables firewall rules if you use one of our [Managed VPS Hosting][7] services, in which case you can simply ask our expert Linux admins to help you list and delete iptables firewall rules on your server. They are available 24×7 and will take care of your request immediately.
+
+**PS**. If you liked this post on how to list and delete iptables firewall rules, please share it with your friends on the social networks using the buttons on the left or simply leave a reply below. Thanks.
+
+--------------------------------------------------------------------------------
+
+via: https://www.rosehosting.com/blog/how-to-list-and-delete-iptables-firewall-rules/
+
+作者:[RoseHosting][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.rosehosting.com
+[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/01/How-To-List-and-Delete-iptables-Firewall-Rules.jpg
+[2]:https://www.rosehosting.com/blog/connect-to-your-linux-vps-via-ssh/
+[3]:https://www.rosehosting.com/blog/how-to-create-a-sudo-user-on-ubuntu/
+[4]:https://www.rosehosting.com/blog/how-to-set-up-a-firewall-with-iptables-on-ubuntu-and-centos/
+[5]:https://www.rosehosting.com/ubuntu-vps.html
+[6]:https://www.rosehosting.com/centos-vps.html
+[7]:https://www.rosehosting.com/managed-vps-hosting.html
diff --git a/sources/tech/20180118 How to Play Sound Through Two or More Output Devices in Linux.md b/sources/tech/20180118 How to Play Sound Through Two or More Output Devices in Linux.md
new file mode 100644
index 0000000000..2f35b15ac7
--- /dev/null
+++ b/sources/tech/20180118 How to Play Sound Through Two or More Output Devices in Linux.md
@@ -0,0 +1,62 @@
+translating by lujun9972
+How to Play Sound Through Two or More Output Devices in Linux
+====== 
+
+![](https://www.maketecheasier.com/assets/uploads/2018/01/output-audio-multiple-devices-featured.jpg)
+
+Handling audio in Linux can be a pain. Pulseaudio has made it both better and worse. While some things work better than they did before, other things have become more complicated. Handling audio output is one of those things.
+
+If you want to enable multiple audio outputs from your Linux PC, you can use a simple utility to enable your other sound devices on a virtual interface.
It's a lot easier than it sounds.
+
+In case you're wondering why you'd want to do this, a pretty common instance is playing video from your computer on a TV and using both the PC and TV speakers.
+
+### Install Paprefs
+
+The easiest way to enable audio playback from multiple sources is to use a simple graphical utility called "paprefs." It's short for PulseAudio Preferences.
+
+It's available through the Ubuntu repositories, so just install it with Apt.
+```
+sudo apt install paprefs
+```
+
+When the install finishes, you can just launch the program.
+
+### Enable Dual Audio Playback
+
+Even though the utility is graphical, it's still probably easier to launch it by typing `paprefs` in the command line as a regular user.
+
+The window that opens has a few tabs with settings that you can tweak. The tab that you're looking for is the last one, "Simultaneous Output."
+
+![Paprefs on Ubuntu][1]
+
+There isn't a whole lot on the tab, just a checkbox to enable the setting.
+
+Next, open up the regular sound preferences. It's in different places on different distributions. On Ubuntu it'll be under the GNOME system settings.
+
+![Enable Simultaneous Audio][2]
+
+Once you have your sound preferences open, select the "Output" tab. Select the "Simultaneous output" radio button. It's now your default output.
+
+### Test It
+
+To test it, you can use anything you like, but music always works. If you are using a video, as suggested earlier, you can certainly test it with that as well.
+
+If everything is working well, you should hear audio out of all connected devices.
+
+That's all there really is to do. This works best when there are multiple devices, like the HDMI port and the standard analog output. You can certainly try it with other configurations, too. You should also keep in mind that there will only be a single volume control, so adjust the physical output devices accordingly.
+
+--------------------------------------------------------------------------------
+
+via: https://www.maketecheasier.com/play-sound-through-multiple-devices-linux/
+
+作者:[Nick Congleton][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.maketecheasier.com/author/nickcongleton/
+[1]:https://www.maketecheasier.com/assets/uploads/2018/01/sa-paprefs.jpg (Paprefs on Ubuntu)
+[2]:https://www.maketecheasier.com/assets/uploads/2018/01/sa-enable.jpg (Enable Simultaneous Audio)
+[3]:https://depositphotos.com/89314442/stock-photo-headphones-on-speakers.html
diff --git a/sources/tech/20180118 Rediscovering make- the power behind rules.md b/sources/tech/20180118 Rediscovering make- the power behind rules.md
new file mode 100644
index 0000000000..2dbddb8949
--- /dev/null
+++ b/sources/tech/20180118 Rediscovering make- the power behind rules.md
@@ -0,0 +1,100 @@
+Rediscovering make: the power behind rules
+======
+
+![](https://user-images.githubusercontent.com/4419992/35015638-0529f1c0-faf4-11e7-9801-4995fc4b54f0.jpg)
+
+I used to think makefiles were just a convenient way to list groups of shell commands; over time I've learned how powerful, flexible, and full-featured they are. This post brings to light some of those features related to rules.
+
+### Rules
+
+Rules are instructions that tell `make` how and when a file called the target should be built. The target can depend on other files called prerequisites.
+
+You instruct `make` how to build the target in the recipe, which is no more than a set of shell commands to be executed, one at a time, in the order they appear. The syntax looks like this:
+```
+target_name : prerequisites
+ recipe
+```
+
+Once you have defined a rule, you can build the target from the command line by executing:
+```
+$ make target_name
+```
+
+Once the target is built, `make` is smart enough to not run the recipe ever again unless at least one of the prerequisites has changed.
+
+### More on prerequisites
+
+Prerequisites indicate two things:
+
+ * When the target should be built: if a prerequisite is newer than the target, `make` assumes that the target should be built.
+ * An order of execution: since prerequisites can, in turn, be built by another rule in the makefile, they also implicitly set an order in which rules are executed.
+
+If you want to define an order, but you don't want to rebuild the target if the prerequisite changes, you can use a special kind of prerequisite called order-only, which can be placed after the normal prerequisites, separated by a pipe (`|`).
+
+### Patterns
+
+For convenience, `make` accepts patterns for targets and prerequisites. A pattern is defined by including the `%` character, a wildcard that matches any number of literal characters or an empty string. Here are some examples:
+
+ * `%`: match any file
+ * `%.md`: match all files with the `.md` extension
+ * `prefix%.go`: match all files that start with `prefix` and have the `.go` extension
+
+### Special targets
+
+There's a set of target names that have special meaning for `make`, called special targets.
+
+You can find the full list of special targets in the [documentation][1]. As a rule of thumb, special targets start with a dot followed by uppercase letters.
+
+Here are a few useful ones:
+
+**.PHONY**: Tells `make` that the prerequisites of this target are considered to be phony targets, which means that `make` will always run its recipe regardless of whether a file with that name exists or what its last-modification time is.
+
+**.DEFAULT**: Used for any target for which no rules are found.
+
+**.IGNORE**: If you specify prerequisites for `.IGNORE`, `make` will ignore errors in the execution of their recipes.
+
+### Substitutions
+
+Substitutions are useful when you need to modify the value of a variable with alterations that you specify.
+
+A substitution has the form `$(var:a=b)` and its meaning is to take the value of the variable `var`, replace every `a` at the end of a word with `b` in that value, and substitute the resulting string. For example:
+```
+foo := a.o
+bar := $(foo:.o=.c) # sets bar to a.c
+```
+
+Note: special thanks to [Luis Lavena][2] for letting me know about the existence of substitutions.
+
+### Archive Files
+
+Archive files are used to collect multiple data files together into a single file (same concept as a zip file); they are built with the `ar` Unix utility. `ar` can be used to create archives for any purpose, but has been largely replaced by `tar` for any purposes other than [static libraries][3].
+
+In `make`, you can use an individual member of an archive file as a target or prerequisite as follows:
+```
+archive(member) : prerequisite
+ recipe
+```
+
+### Final Thoughts
+
+There's a lot more to discover about make, but at least this counts as a start. I strongly encourage you to check the [documentation][4], create a dumb makefile, and just play with it.
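+
+If you want a starting point for that experiment, here is a deliberately small sketch of my own (not from the original post) that exercises a rule with prerequisites, a pattern rule, and `.PHONY`:
+```
+# Link an executable from two objects; rebuild only what changed.
+# (Recipe lines must start with a tab character.)
+app: main.o util.o
+	cc -o app main.o util.o
+
+# Pattern rule: any %.o is built from the matching %.c.
+# $@ is the target, $< is the first prerequisite.
+%.o: %.c
+	cc -c -o $@ $<
+
+# 'clean' is not a file, so mark it phony to always run its recipe.
+.PHONY: clean
+clean:
+	rm -f app *.o
+```
+
+Run `make app`, touch `util.c`, then run `make app` again and watch `make` rebuild only `util.o` and the final link.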
+ +-------------------------------------------------------------------------------- + +via: https://monades.roperzh.com/rediscovering-make-power-behind-rules/ + +作者:[Roberto Dip][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://monades.roperzh.com +[1]:https://www.gnu.org/software/make/manual/make.html#Special-Targets +[2]:https://twitter.com/luislavena/ +[3]:http://tldp.org/HOWTO/Program-Library-HOWTO/static-libraries.html +[4]:https://www.gnu.org/software/make/manual/make.html diff --git a/sources/tech/20180118 Securing the Linux filesystem with Tripwire.md b/sources/tech/20180118 Securing the Linux filesystem with Tripwire.md new file mode 100644 index 0000000000..a359e3a422 --- /dev/null +++ b/sources/tech/20180118 Securing the Linux filesystem with Tripwire.md @@ -0,0 +1,112 @@ +Securing the Linux filesystem with Tripwire +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/file_system.jpg?itok=pzCrX1Kc) + +While Linux is considered to be the most secure operating system (ahead of Windows and MacOS), it is still vulnerable to rootkits and other variants of malware. Thus, Linux users need to know how to protect their servers or personal computers from destruction, and the first step they need to take is to protect the filesystem. + +In this article, we'll look at [Tripwire][1], an excellent tool for protecting Linux filesystems. Tripwire is an integrity checking tool that enables system administrators, security engineers, and others to detect alterations to system files. Although it's not the only option available ([AIDE][2] and [Samhain][3] offer similar features), Tripwire is arguably the most commonly used integrity checker for Linux system files, and it is available as open source under GPLv2. + +### How Tripwire works + +It's helpful to know how Tripwire operates in order to understand what it does once it's installed. Tripwire is made up of two major components: policy and database. Policy lists all the files and directories that the integrity checker should take a snapshot of, in addition to creating rules for identifying violations of changes to directories and files. Database consists of the snapshot taken by Tripwire. + +Tripwire also has a configuration file, which specifies the locations of the database, policy file, and Tripwire executable. It also provides two cryptographic keys--site key and local key--to protect important files against tampering. The site key protects the policy and configuration files, while the local key protects the database and generated reports. + +Tripwire works by periodically comparing the directories and files against the snapshot in the database and reporting any changes. + +### Installing Tripwire + +In order to use Tripwire, we need to download and install it first. Tripwire works on almost all Linux distributions; you can download an open source version from [Sourceforge][4] and install it as follows, depending on your version of Linux. + +Debian and Ubuntu users can install Tripwire directly from the repository using `apt-get`. Non-root users should type the `sudo` command to install Tripwire via `apt-get`. +``` + + +sudo apt-get update + +sudo  apt-get install tripwire   +``` + +CentOS and other rpm-based distributions use a similar process. For the sake of best practice, update your repository before installing a new package such as Tripwire. 
The command `yum install epel-release` simply means we want to install extra repositories. (`epel` stands for Extra Packages for Enterprise Linux.)
+```
+yum update
+yum install epel-release
+yum install tripwire
+```
+
+This command causes the installation to run a configuration of the packages that are required for Tripwire to function effectively. In addition, it will ask if you want to select passphrases during installation. You can select "Yes" to both prompts.
+
+Also, select "Yes" if it's required to build the configuration file. Choose and confirm a passphrase for a site key and for a local key. (A complex passphrase such as `Il0ve0pens0urce` is recommended.)
+
+### Build and initialize Tripwire's database
+
+Next, initialize the Tripwire database as follows:
+```
+tripwire --init
+```
+
+You'll need to provide your local key passphrase to run the commands.
+
+### Basic integrity checking using Tripwire
+
+You can use the following command to instruct Tripwire to check whether your files or directories have been modified. Tripwire's ability to compare files and directories against the initial snapshot in the database is based on the rules you created in the active policy.
+```
+tripwire --check
+```
+
+You can also limit the `--check` command to specific files or directories, such as in this example:
+```
+tripwire --check /usr/tmp
+```
+
+In addition, if you need extended help on using Tripwire's `--check` command, this command allows you to consult Tripwire's manual:
+```
+tripwire --check --help
+```
+
+### Generating reports using Tripwire
+
+To easily generate a daily system integrity report, create a `crontab` entry with this command:
+```
+crontab -e
+```
+
+Afterward, you can edit this file (with the text editor of your choice) to introduce tasks to be run by cron. For instance, you can set up a cron job to send Tripwire reports to your email daily at 5:40 a.m. by using this entry:
+```
+40 5 * * * /usr/sbin/tripwire --check
+```
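+
+One caveat worth adding here (my note, not from the original article): whether that report actually lands in your inbox depends on cron's mail handling being configured on the host. If you want to be explicit about it, a common pattern is to pipe the output to a mail client yourself, assuming a working `mail` command and substituting your own address:
+```
+40 5 * * * /usr/sbin/tripwire --check | mail -s "Daily Tripwire report" admin@example.com
+```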
+
+Whether you decide to use Tripwire or another integrity checker with similar features, the key issue is making sure you have a solution to protect the security of your Linux filesystem.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/securing-linux-filesystem-tripwire
+
+作者:[Michael Kwaku Aboagye][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/revoks
+[1]:https://www.tripwire.com/
+[2]:http://aide.sourceforge.net/
+[3]:http://www.la-samhna.de/samhain/
+[4]:http://sourceforge.net/projects/tripwire
diff --git a/sources/tech/20180119 How to install Spotify application on Linux.md b/sources/tech/20180119 How to install Spotify application on Linux.md
new file mode 100644
index 0000000000..3050e36199
--- /dev/null
+++ b/sources/tech/20180119 How to install Spotify application on Linux.md
@@ -0,0 +1,103 @@
+translating---geekpi
+
+How to install Spotify application on Linux
+======
+
+How do I install the Spotify app on an Ubuntu Linux desktop to stream music?
+
+Spotify is a digital music streaming service that gives you access to tons of songs. You can stream for free or buy a subscription. Creating a playlist is possible. A subscriber can listen ad-free, and you get better sound quality. This page **shows how to install Spotify on Linux using the snap package manager, which works on Ubuntu, Mint, Debian, Fedora, Arch and many other distros**.
+
+### Installing the spotify application on Linux
+
+The procedure to install spotify on Linux is as follows:
+
+1. Install snapd
+2. Turn on snapd
+3. Find the Spotify snap:
+```
+snap find spotify
+```
+4. Install the spotify music app:
+```
+sudo snap install spotify
+```
+5. Run it:
+```
+spotify &
+```
+
+Let us see all the steps and examples in detail.
+
+### Step 1 - Install Snapd
+
+You need to install the snapd package. It is the daemon (service) and tooling that enables snap packages on the Linux operating system.
+
+#### Snapd on a Debian/Ubuntu/Mint Linux
+
+Type the following [apt command][1]/[apt-get command][2]:
+`$ sudo apt install snapd`
+
+#### Install snapd on an Arch Linux
+
+snapd is available in the Arch User Repository (AUR) only. Run the yaourt command (see [how to install yaourt on Archlinux][3]):
+```
+$ sudo yaourt -S snapd
+$ sudo systemctl enable --now snapd.socket
+```
+
+#### Get snapd on a Fedora Linux
+
+Run the following commands:
+```
+sudo dnf install snapd
+sudo ln -s /var/lib/snapd/snap /snap
+```
+
+#### OpenSUSE install snapd
+
+Execute the snap command:
+`$ snap find spotify`
+[![snap search for spotify app command][4]][4]
+Install it:
+`$ sudo snap install spotify`
+[![How to install Spotify application on Linux using snap command][5]][5]
+
+### Step 3 - Run spotify and enjoy it (Translator's note: the original post really does jump straight to step 3 like this.)
+
+Run it from the GUI or simply type:
+`$ spotify`
+Automatically sign in to your account on startup:
+```
+$ spotify --username vivek@nixcraft.com
+$ spotify --username vivek@nixcraft.com --password 'myPasswordHere'
+```
+Start the spotify client with a given URI when initialized:
+`$ spotify --uri=`
+Start with the specified URL:
+`$ spotify --url=`
+[![Spotify client app running on my Ubuntu Linux desktop][6]][6]
+
+### About the author
+
+The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][7], [Facebook][8], [Google+][9].
+ +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/faq/how-to-install-spotify-application-on-linux/ + +作者:[Vivek Gite][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info) +[2]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info) +[3]:https://www.cyberciti.biz/faq/how-to-install-yaourt-in-arch-linux/ +[4]:https://www.cyberciti.biz/media/new/faq/2018/01/snap-search-for-spotify-app-command.jpg +[5]:https://www.cyberciti.biz/media/new/faq/2018/01/How-to-install-Spotify-application-on-Linux-using-snap-command.jpg +[6]:https://www.cyberciti.biz/media/new/faq/2018/01/Spotify-client-app-running-on-my-Ubuntu-Linux-desktop.jpg +[7]:https://twitter.com/nixcraft +[8]:https://facebook.com/nixcraft +[9]:https://plus.google.com/+CybercitiBiz diff --git a/sources/tech/20180119 Linux mv Command Explained for Beginners (8 Examples).md b/sources/tech/20180119 Linux mv Command Explained for Beginners (8 Examples).md new file mode 100644 index 0000000000..786528137f --- /dev/null +++ b/sources/tech/20180119 Linux mv Command Explained for Beginners (8 Examples).md @@ -0,0 +1,186 @@ +Linux mv Command Explained for Beginners (8 Examples) +====== + +Just like [cp][1] for copying and rm for deleting, Linux also offers an in-built command for moving and renaming files. It's called **mv**. In this article, we will discuss the basics of this command line tool using easy to understand examples. Please note that all examples used in this tutorial have been tested on Ubuntu 16.04 LTS. + +#### Linux mv command + +As already mentioned, the mv command in Linux is used to move or rename files. Following is the syntax of the command: + +``` +mv [OPTION]... [-T] SOURCE DEST +mv [OPTION]... SOURCE... DIRECTORY +mv [OPTION]... -t DIRECTORY SOURCE... +``` + +And here's what the man page says about it: +``` +Rename SOURCE to DEST, or move SOURCE(s) to DIRECTORY. +``` + +The following Q&A-styled examples will give you a better idea on how this tool works. + +#### Q1. How to use mv command in Linux? + +If you want to just rename a file, you can use the mv command in the following way: + +``` +mv [filename] [new_filename] +``` + +For example: + +``` +mv names.txt fullnames.txt +``` + +[![How to use mv command in Linux][2]][3] + +Similarly, if the requirement is to move a file to a new location, use the mv command in the following way: + +``` +mv [filename] [dest-dir] +``` + +For example: + +``` +mv fullnames.txt /home/himanshu/Downloads +``` + +[![Linux mv command][4]][5] + +#### Q2. How to make sure mv prompts before overwriting? + +By default, the mv command doesn't prompt when the operation involves overwriting an existing file. For example, the following screenshot shows the existing full_names.txt was overwritten by mv without any warning or notification. + +[![How to make sure mv prompts before overwriting][6]][7] + +However, if you want, you can force mv to prompt by using the **-i** command line option. 
+
+```
+mv -i [file_name] [new_file_name]
+```
+
+[![the -i command option][8]][9]
+
+So the above screenshot clearly shows that **-i** leads to mv asking for user permission before overwriting an existing file. Please note that in case you want to explicitly specify that you don't want mv to prompt before overwriting, use the **-f** command line option.
+
+#### Q3. How to make mv not overwrite an existing file?
+
+For this, you need to use the **-n** command line option.
+
+```
+mv -n [filename] [new_filename]
+```
+
+The following screenshot shows the mv operation wasn't successful, as a file with the name 'full_names.txt' already existed and the command had the -n option in it.
+
+[![How to make mv not overwrite an existing file][10]][11]
+
+Note:
+```
+If you specify more than one of -i, -f, -n, only the final one takes effect.
+```
+
+#### Q4. How to make mv remove trailing slashes (if any) from the source argument?
+
+To remove any trailing slashes from source arguments, use the **\--strip-trailing-slashes** command line option.
+
+```
+mv --strip-trailing-slashes [source] [dest]
+```
+
+Here's how the official documentation explains the usefulness of this option:
+```
+This is useful when a source argument may have a trailing slash and specify a symbolic link to a directory. This scenario is in fact rather common because some shells can automatically append a trailing slash when performing file name completion on such symbolic links. Without this option, mv, for example, (via the system's rename function) must interpret a trailing slash as a request to dereference the symbolic link and so must rename the indirectly referenced directory and not the symbolic link. Although it may seem surprising that such behavior be the default, it is required by POSIX and is consistent with other parts of that standard.
+```
+
+#### Q5. How to make mv treat the destination as a normal file?
+
+To be absolutely sure that the destination entity is treated as a normal file (and not a directory), use the **-T** command line option.
+
+```
+mv -T [source] [dest]
+```
+
+Here's why this command line option exists:
+```
+This can help avoid race conditions in programs that operate in a shared area. For example, when the command 'mv /tmp/source /tmp/dest' succeeds, there is no guarantee that /tmp/source was renamed to /tmp/dest: it could have been renamed to /tmp/dest/source instead, if some other process created /tmp/dest as a directory. However, if mv -T /tmp/source /tmp/dest succeeds, there is no question that /tmp/source was renamed to /tmp/dest.
+```
+```
+In the opposite situation, where you want the last operand to be treated as a directory and want a diagnostic otherwise, you can use the --target-directory (-t) option.
+```
+
+#### Q6. How to make mv move a file only when it's newer than the destination file?
+
+Suppose there exists a file named fullnames.txt in the Downloads directory of your system, and there's a file with the same name in your home directory. Now, you want to update ~/Downloads/fullnames.txt with ~/fullnames.txt, but only when the latter is newer. Then in this case, you'll have to use the **-u** command line option.
+
+```
+mv -u ~/fullnames.txt ~/Downloads/fullnames.txt
+```
+
+This option is particularly useful in cases when you need to take such decisions from within a shell script.
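+
+For instance, a small update script might use it like this (a hypothetical sketch of mine; the file names are made up):
+```
+#!/bin/bash
+# Refresh the published copy only if the working copy is newer
+# (or the published copy does not exist yet).
+mv -u ~/work/report.txt ~/public/report.txt
+```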
+
+#### Q7. How to make mv emit details of what it is doing?
+
+If you want mv to output information explaining what exactly it's doing, use the **-v** command line option.
+
+```
+mv -v [filename] [new_filename]
+```
+
+For example, the following screenshot shows mv emitting some helpful details of what exactly it did.
+
+[![How make mv emit details of what all it is doing][12]][13]
+
+#### Q8. How to force mv to create a backup of existing destination files?
+
+You can do this using the **-b** command line option. The backup file created this way will have the same name as the destination file, but with a tilde (~) appended to it. Here's an example:
+
+[![How to force mv to create backup of existing destination files][14]][15]
+
+#### Conclusion
+
+As you'd have guessed by now, mv is as important as cp and rm for the functionality it offers - renaming/moving files around is one of the basic operations after all. We've discussed a majority of the command line options this tool offers. So you can just practice them and start using the command. To know more about mv, head to its [man page][16].
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-mv-command/
+
+作者:[Himanshu Arora][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/linux-cp-command/
+[2]:https://www.howtoforge.com/images/command-tutorial/mv-rename-ex.png
+[3]:https://www.howtoforge.com/images/command-tutorial/big/mv-rename-ex.png
+[4]:https://www.howtoforge.com/images/command-tutorial/mv-transfer-file.png
+[5]:https://www.howtoforge.com/images/command-tutorial/big/mv-transfer-file.png
+[6]:https://www.howtoforge.com/images/command-tutorial/mv-overwrite.png
+[7]:https://www.howtoforge.com/images/command-tutorial/big/mv-overwrite.png
+[8]:https://www.howtoforge.com/images/command-tutorial/mv-prompt-overwrite.png
+[9]:https://www.howtoforge.com/images/command-tutorial/big/mv-prompt-overwrite.png
+[10]:https://www.howtoforge.com/images/command-tutorial/mv-n-option.png
+[11]:https://www.howtoforge.com/images/command-tutorial/big/mv-n-option.png
+[12]:https://www.howtoforge.com/images/command-tutorial/mv-v-option.png
+[13]:https://www.howtoforge.com/images/command-tutorial/big/mv-v-option.png
+[14]:https://www.howtoforge.com/images/command-tutorial/mv-b-option.png
+[15]:https://www.howtoforge.com/images/command-tutorial/big/mv-b-option.png
+[16]:https://linux.die.net/man/1/mv
diff --git a/sources/tech/20180119 Two great uses for the cp command Bash shortcuts.md b/sources/tech/20180119 Two great uses for the cp command Bash shortcuts.md
new file mode 100644
index 0000000000..9a45c26e7a
--- /dev/null
+++ b/sources/tech/20180119 Two great uses for the cp command Bash shortcuts.md
@@ -0,0 +1,154 @@
+Translating by cncuckoo
+
+Two great uses for the cp command: Bash shortcuts
+============================================================
+
+### Here's how to streamline the backup and synchronize functions of the cp command.
+
+![Two great uses for the cp command: Bash shortcuts ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC)
+
+>Image by : [Internet Archive Book Images][6]. Modified by Opensource.com. CC BY-SA 4.0
+
+Last July, I wrote about [two great uses for the cp command][7]: making a backup of a file, and synchronizing a secondary copy of a folder.
+ +Having discovered these great utilities, I find that they are more verbose than necessary, so I created shortcuts to them in my Bash shell startup script. I thought I’d share these shortcuts in case they are useful to others or could offer inspiration to Bash users who haven’t quite taken on aliases or shell functions. + +### Updating a second copy of a folder – Bash alias + +The general pattern for updating a second copy of a folder with cp is: + +``` +cp -r -u -v SOURCE-FOLDER DESTINATION-DIRECTORY +``` + +I can easily remember the -r option because I use it often when copying folders around. I can probably, with some more effort, remember -v, and with even more effort, -u (is it “update” or “synchronize” or…). + +Or I can just use the [alias capability in Bash][8] to convert the cp command and options to something more memorable, like this: + +``` +alias sync='cp -r -u -v' +``` + +``` +sync Pictures /media/me/4388-E5FE +``` + +Not sure if you already have a sync alias defined? You can list all your currently defined aliases by typing the word alias at the command prompt in your terminal window. + +Like this so much you just want to start using it right away? Open a terminal window and type: + +``` +echo "alias sync='cp -r -u -v'" >> ~/.bash_aliases +``` + +``` +me@mymachine~$ alias + +alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"' + +alias egrep='egrep --color=auto' + +alias fgrep='fgrep --color=auto' + +alias grep='grep --color=auto' + +alias gvm='sdk' + +alias l='ls -CF' + +alias la='ls -A' + +alias ll='ls -alF' + +alias ls='ls --color=auto' + +alias sync='cp -r -u -v' + +me@mymachine:~$ +``` + +### Making versioned backups – Bash function + +The general pattern for making a backup of a file with cp is: + +``` +cp --force --backup=numbered WORKING-FILE BACKED-UP-FILE +``` + +Besides remembering the options to the cp command, we also need to remember to repeat the WORKING-FILE name a second time. But why repeat ourselves when [a Bash function][9] can take care of that overhead for us, like this: + +Again, you can save this to your .bash_aliases file in your home directory. + +``` +function backup { + +    if [ $# -ne 1 ]; then + +        echo "Usage: $0 filename" + +    elif [ -f $1 ] ; then + +        echo "cp --force --backup=numbered $1 $1" + +        cp --force --backup=numbered $1 $1 + +    else + +        echo "$0: $1 is not a file" + +    fi + +} +``` + +The first if statement checks to make sure that only one argument is provided to the function, otherwise printing the correct usage with the echo command. + +The elif statement checks to make sure the argument provided is a file, and if so, it (verbosely) uses the second echo to print the cp command to be used and then executes it. + +If the single argument is not a file, the third echo prints an error message to that effect. + +In my home directory, if I execute the backup command so defined on the file checkCounts.sql, I see that backup creates a file called checkCounts.sql.~1~. If I execute it once more, I see a new file checkCounts.sql.~2~. + +Success! As planned, I can go on editing checkCounts.sql, but if I take a snapshot of it every so often with backup, I can return to the most recent snapshot should I run into trouble. + +At some point, it’s better to start using git for version control, but backup as defined above is a nice cheap tool when you need to create snapshots but you’re not ready for git. 
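+
+Restoring from one of those snapshots is just an ordinary copy in the other direction. For example, to roll the working file back to the second snapshot (using the hypothetical file from above):
+```
+cp checkCounts.sql.~2~ checkCounts.sql
+```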
+ +### Conclusion + +In my last article, I promised you that repetitive tasks can often be easily streamlined through the use of shell scripts, shell functions, and shell aliases. + +Here I’ve shown concrete examples of the use of shell aliases and shell functions to streamline the synchronize and backup functionality of the cp command. If you’d like to learn more about this, check out the two articles cited above: [How to save keystrokes at the command line with alias][10] and [Shell scripting: An introduction to the shift method and custom functions][11], written by my colleagues Greg and Seth, respectively. + + +### About the author + + [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/clh_portrait2.jpg?itok=V1V-YAtY)][13] Chris Hermansen  + + + Engaged in computing since graduating from the University of British Columbia in 1978, I have been a full-time Linux user since 2005 and a full-time Solaris, SunOS and UNIX System V user before that. On the technical side of things, I have spent a great deal of my career doing data analysis; especially spatial data analysis. I have a substantial amount of programming experience in relation to data analysis, using awk, Python, PostgreSQL, PostGIS and lately Groovy. I have also built a few... [more about Chris Hermansen][14] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/two-great-uses-cp-command-update + +作者:[Chris Hermansen][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/clhermansen +[1]:https://opensource.com/users/clhermansen +[2]:https://opensource.com/users/clhermansen +[3]:https://opensource.com/user/37806/feed +[4]:https://opensource.com/article/18/1/two-great-uses-cp-command-update?rate=J_7R7wSPbukG9y8jrqZt3EqANfYtVAwZzzpopYiH3C8 +[5]:https://opensource.com/article/18/1/two-great-uses-cp-command-update#comments +[6]:https://www.flickr.com/photos/internetarchivebookimages/14803082483/in/photolist-oy6EG4-pZR3NZ-i6r3NW-e1tJSX-boBtf7-oeYc7U-o6jFKK-9jNtc3-idt2G9-i7NG1m-ouKjXe-owqviF-92xFBg-ow9e4s-gVVXJN-i1K8Pw-4jybMo-i1rsBr-ouo58Y-ouPRzz-8cGJHK-85Evdk-cru4Ly-rcDWiP-gnaC5B-pAFsuf-hRFPcZ-odvBMz-hRCE7b-mZN3Kt-odHU5a-73dpPp-hUaaAi-owvUMK-otbp7Q-ouySkB-hYAgmJ-owo4UZ-giHgqu-giHpNc-idd9uQ-osAhcf-7vxk63-7vwN65-fQejmk-pTcLgA-otZcmj-fj1aSX-hRzHQk-oyeZfR +[7]:https://opensource.com/article/17/7/two-great-uses-cp-command +[8]:https://opensource.com/article/17/5/introduction-alias-command-line-tool +[9]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions +[10]:https://opensource.com/article/17/5/introduction-alias-command-line-tool +[11]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions +[12]:https://opensource.com/tags/linux +[13]:https://opensource.com/users/clhermansen +[14]:https://opensource.com/users/clhermansen diff --git a/sources/tech/20180120 The World Map In Your Terminal.md b/sources/tech/20180120 The World Map In Your Terminal.md new file mode 100644 index 0000000000..edc23edf12 --- /dev/null +++ b/sources/tech/20180120 The World Map In Your Terminal.md @@ -0,0 +1,112 @@ +translating---geekpi + +The World Map In Your Terminal +====== +I just stumbled upon an interesting utility. The World map in the Terminal! Yes, It is so cool. 
Say hello to **MapSCII**, a Braille and ASCII world map renderer for your xterm-compatible terminals. It supports GNU/Linux, Mac OS, and Windows. I thought it was just another project hosted on GitHub. But I was wrong! It is really impressive what they did there. We can use the mouse pointer to drag the map and zoom in and out on any location in the world map. The other notable features are:
+
+ * Discover Points of Interest around any given location
+ * Highly customizable layer styling with [Mapbox Styles][1] support
+ * Connect to any public or private vector tile server
+ * Or just use the supplied and optimized [OSM2VectorTiles][2] based one
+ * Work offline and discover local [VectorTile][3]/[MBTiles][4]
+ * Compatible with most Linux and OSX terminals
+ * Highly optimized algorithms for a smooth experience
+
+### Displaying the World Map in your Terminal using MapSCII
+
+To open the map, just run the following command from your Terminal:
+```
+telnet mapscii.me
+```
+
+Here is the World map from my Terminal.
+
+[![][5]][6]
+
+Cool, yeah?
+
+To switch to Braille view, press **c**.
+
+[![][5]][7]
+
+Type **c** again to switch back to the previous format.
+
+To scroll around the map, use the arrow keys **up**, **down**, **left**, **right**. To zoom in or out on a location, use the **a** and **z** keys. Also, you can use the scroll wheel of your mouse to zoom in or out. To quit the map, press **q**.
+
+Like I already said, don't think it is a simple project. Click on any location on the map and press **a** to zoom in.
+
+Here are some sample screenshots after I zoomed in.
+
+[![][5]][8]
+
+I was able to zoom in to view the states in my country (India).
+
+[![][5]][9]
+
+And the districts in a state (Tamilnadu):
+
+[![][5]][10]
+
+Even the [Taluks][11] and the towns in a district:
+
+[![][5]][12]
+
+And, the place where I completed my schooling:
+
+[![][5]][13]
+
+Even though it is just a small town, MapSCII displayed it accurately. MapSCII uses [**OpenStreetMap**][14] to collect the data.
+
+### Install MapSCII locally
+
+Liked it? Great! You can host it on your own system.
+
+Make sure you have installed Node.js on your system. If not, refer to the following link.
+
+[Install NodeJS on Linux][15]
+
+Then, run the following command to install it.
+```
+sudo npm install -g mapscii
+```
+
+To launch MapSCII, run:
+```
+mapscii
+```
+
+Have fun! More good stuff to come. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/mapscii-world-map-terminal/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.mapbox.com/mapbox-gl-style-spec/
+[2]:https://github.com/osm2vectortiles
+[3]:https://github.com/mapbox/vector-tile-spec
+[4]:https://github.com/mapbox/mbtiles-spec
+[5]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-1-2.png
+[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-2.png
+[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-3.png
+[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-4.png
+[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-5.png
+[11]:https://en.wikipedia.org/wiki/Tehsils_of_India
+[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-6.png
+[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-7.png
+[14]:https://www.openstreetmap.org/
+[15]:https://www.ostechnix.com/install-node-js-linux/
diff --git a/sources/tech/20180120 socat as a handler for multiple reverse shells - System Overlord.md b/sources/tech/20180120 socat as a handler for multiple reverse shells - System Overlord.md
new file mode 100644
index 0000000000..b57a1e0140
--- /dev/null
+++ b/sources/tech/20180120 socat as a handler for multiple reverse shells - System Overlord.md
@@ -0,0 +1,66 @@
+socat as a handler for multiple reverse shells · System Overlord
+======
+
+I was looking for a new way to handle multiple incoming reverse shells. My shells needed to be encrypted and I preferred not to use Metasploit in this case. Because of the way I was deploying my implants, I wasn't able to use separate incoming port numbers or other ways of directing the traffic to multiple listeners.
+
+Obviously, it's important to keep each reverse shell separated, so I couldn't just have a listener redirecting all the connections to STDIN/STDOUT. I also didn't want to wait for sessions serially - obviously I wanted to be connected to all of my implants simultaneously. (And allow them to disconnect/reconnect as needed due to loss of network connectivity.)
+
+As I was thinking about the problem, I realized that I basically wanted `tmux` for reverse shells. So I began to wonder if there was some way to connect `openssl s_server` or something similar to `tmux`. Given the limitations of `s_server`, I started looking at `socat`. Despite its versatility, I've actually only used it once or twice before this, so I spent a fair bit of time reading the man page and the examples.
+
+I couldn't find a way to get `socat` to talk directly to `tmux` in a way that would spawn each connection as a new window (file descriptors are not passed to the newly-started process in `tmux new-window`), so I ended up with a strange workaround. I feel a little bit like Rube Goldberg inventing C2 software (and I need to get something more permanent and featureful eventually, but this was a quick and dirty PoC), but I've put together a chain of `socat` processes to get a working solution.
+ +My implementation works by having a single `socat` process receive the incoming connections (forking on incoming connection), and executing a script that first starts a `socat` instance within tmux, and then another `socat` process to copy from the first to the second over a UNIX domain socket. + +Yes, this is 3 socat processes. It's a little ridiculous, but I couldn't find a better approach. Roughly speaking, the communications flow looks a little like this: +``` +TLS data <--> socat listener <--> script stdio <--> socat <--> unix socket <--> socat in tmux <--> terminal window + +``` + +Getting it started is fairly simple. Begin by generating your SSL certificate. In this case, I'm using a self-signed certificate, but obviously you could go through a commercial CA, Let's Encrypt, etc. +``` +openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 30 -out server.crt +cat server.key server.crt > server.pem + +``` + +Now we will create the script that is run on each incoming connection. This script needs to launch a `tmux` window running a `socat` process copying from a UNIX domain socket to `stdio` (in tmux), and then connecting another `socat` between the `stdio` coming in to the UNIX domain socket. +``` +#!/bin/bash + +SOCKDIR=$(mktemp -d) +SOCKF=${SOCKDIR}/usock + +# Start tmux, if needed +tmux start +# Create window +tmux new-window "socat UNIX-LISTEN:${SOCKF},umask=0077 STDIO" +# Wait for socket +while test ! -e ${SOCKF} ; do sleep 1 ; done +# Use socat to ship data between the unix socket and STDIO. +exec socat STDIO UNIX-CONNECT:${SOCKF} +``` + +The while loop is necessary to make sure that the last `socat` process does not attempt to open the UNIX domain socket before it has been created by the new `tmux` child process. + +Finally, we can launch the `socat` process that will accept the incoming requests (handling all the TLS steps) and execute our per-connection script: +``` +socat OPENSSL-LISTEN:8443,cert=server.pem,reuseaddr,verify=0,fork EXEC:./socatscript.sh + +``` + +This listens on port 8443, using the certificate and private key contained in `server.pem`, performs a `fork()` on accepting each incoming connection (so they do not block each other) and disables certificate verification (since we're not expecting our clients to provide a certificate). On the other side, it launches our script, providing the data from the TLS connection via STDIO. + +At this point, an incoming TLS connection connects, and is passed through our processes to eventually arrive on the `STDIO` of a new window in the running `tmux` server. Each connection gets its own window, allowing us to easily see and manage the connections for our implants. + +-------------------------------------------------------------------------------- + +via: https://systemoverlord.com/2018/01/20/socat-as-a-handler-for-multiple-reverse-shells.html + +作者:[David][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://systemoverlord.com/about diff --git a/sources/tech/20180121 Shell Scripting a Bunco Game.md b/sources/tech/20180121 Shell Scripting a Bunco Game.md new file mode 100644 index 0000000000..4d5113ec74 --- /dev/null +++ b/sources/tech/20180121 Shell Scripting a Bunco Game.md @@ -0,0 +1,235 @@ +translating by wenwensnow +Shell Scripting a Bunco Game +====== +I haven't dug into any game programming for a while, so I thought it was high time to do something in that realm. 
At first, I thought "Halo as a shell script?", but then I came to my senses. Instead, let's look at a simple dice game called Bunco. You may not have heard of it, but I bet your Mom has—it's quite a popular game for groups of gals at a local pub or tavern.
+
+Played in six rounds with three dice, the game is simple. You roll all three dice and have to match the current round number. If all three dice match the current round number (for example, three 3s in round three), you score 25. If all three match but aren't the current round number, it's a Mini Bunco and worth five points. Failing both of those, each die with the same value as the round number is worth one point.
+
+Played properly, the game also involves teams, multiple tables including a winner's table, and usually cash prizes funded by everyone paying $5 or similar to play and based on specific winning scenarios like "most Buncos" or "most points". I'll skip that part here, however, and just focus on the dice part.
+
+### Let's Do the Math
+
+Before I go too far into the programming side of things, let me talk briefly about the math behind the game. Dice are easy to work with because on a properly weighted die, the chance of a particular value coming up is 1 in 6.
+
+Random tip: not sure whether your dice are balanced? Toss them in salty water and spin them. There are some really interesting YouTube videos from the D&D world showing how to do this test.
+
+So what are the odds of three dice having the same value? The first die has a 100% chance of having a value (no leaners here), so that's easy. The second die has a 16.66% chance of being any particular value, and then the third die has the same chance of being that value, but of course, they multiply, so three dice have about a 2.7% chance of all having the same value.
+
+Then, it's a 16.66% chance that those three dice would be the current round's number—or, in mathematical terms: 0.166 * 0.166 * 0.166 = 0.00462.
+
+In other words, you have a 0.46% chance of rolling a Bunco, which is a bit less than once out of every 200 rolls of three dice.
+
+It could be tougher though. If you were playing with five dice, the chance of rolling a Mini Bunco (or Yahtzee) is 0.077%, and if you were trying to accomplish a specific value, say just sixes, then it's only about 0.013% likely (once in 7,776 rolls) on any given roll—which is to say, not bloody likely!
+
+### And So into the Coding
+
+As with every game, the hardest part is really having a good random number generator that generates truly random values. That's actually hard to achieve in a shell script though, so I'm going to sidestep this entire issue and assume that the shell's built-in random number generator will be sufficient.
+
+What's nice is that it's super easy to work with. Just reference $RANDOM, and you'll have a random value between 0 and MAXINT (32767):
+
+```
+
+$ echo $RANDOM $RANDOM $RANDOM
+10252 22142 14863
+
+```
+
+To constrain that to values between 1–6, use the modulus operator:
+
+```
+
+$ echo $(( $RANDOM % 6 ))
+3
+$ echo $(( $RANDOM % 6 ))
+0
+
+```
+
+Oops! I forgot to shift it one. Here's another try:
+
+```
+
+$ echo $(( ( $RANDOM % 6 ) + 1 ))
+6
+
+```
+
+That's the dice-rolling feature.
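+
+Before wrapping the roll in a function, we can sanity-check the 2.7% figure from the math section empirically. Here is a rough sketch using the same $RANDOM trick (the expected tally in the comment is an average; your numbers will vary from run to run):
+
+```
+
+count=0
+for i in {1..10000} ; do
+  d1=$(( ( $RANDOM % 6 ) + 1 ))
+  d2=$(( ( $RANDOM % 6 ) + 1 ))
+  d3=$(( ( $RANDOM % 6 ) + 1 ))
+  # tally the rolls where all three dice match; expect roughly 270 of 10000
+  [ $d1 -eq $d2 ] && [ $d2 -eq $d3 ] && count=$(( count + 1 ))
+done
+echo "all three dice matched in $count of 10000 rolls"
+
+```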
Let's make it a function where you can specify, as part of the invocation, the variable that should receive the generated value:
+
+```
+
+rolldie()
+{
+  local result=$1
+  rolled=$(( ( $RANDOM % 6 ) + 1 ))
+  eval $result=$rolled
+}
+
+```
+
+The use of the eval is to ensure that the variable specified in the invocation is actually assigned the calculated value. It's easy to work with:
+
+```
+
+rolldie die1
+
+```
+
+That will load a random value between 1–6 into the variable die1. To roll your three dice, it's straightforward:
+
+```
+
+rolldie die1 ; rolldie die2 ; rolldie die3
+
+```
+
+Now to test the values. First, let's test for a Bunco, where all three dice have the same value, and it's the value of the current round too:
+
+```
+
+if [ $die1 -eq $die2 ] && [ $die2 -eq $die3 ] ; then
+  if [ $die1 -eq $round ] ; then
+    echo "BUNCO!"
+    score=25
+  else
+    echo "Mini Bunco!"
+    score=5
+  fi
+
+```
+
+That's probably the hardest of the tests, and notice the unusual use of test in the first conditional: [ cond1 ] && [ cond2 ]. If you're thinking that you could also write it as cond1 -a cond2, you're right. As with so much in the shell, there's more than one way to get to the solution.
+
+The remainder of the code is straightforward; you just need to test for whether each die matches the current round value:
+
+```
+
+if [ $die1 -eq $round ] ; then
+  score=1
+fi
+if [ $die2 -eq $round ] ; then
+  score=$(( $score + 1 ))
+fi
+if [ $die3 -eq $round ] ; then
+  score=$(( $score + 1 ))
+fi
+
+```
+
+The only thing to consider here is that you don't want to score die value vs. round if you've also scored a Bunco or Mini Bunco, so the entire second set of tests needs to be within the else clause of the first conditional (the one that checks whether all three dice have the same value).
+
+Put it together and specify the round number on the command line, and here's what you have at this point:
+
+```
+
+$ sh bunco.sh 5
+You rolled: 1 1 5
+score = 1
+$ sh bunco.sh 2
+You rolled: 6 4 3
+score = 0
+$ sh bunco.sh 1
+You rolled: 1 1 1
+BUNCO!
+score = 25
+
+```
+
+A Bunco so quickly? Well, as I said, there might be a slight issue with the randomness of the random number generator in the shell.
+
+You can test it once you have the script working by running it a few hundred times and then checking to see what percentage are Bunco or Mini Bunco, but I'll leave that as an exercise for you, dear reader. Well, maybe I'll come back to it another time.
+
+Let's finish up this script by having it accumulate score and run for all six rounds instead of specifying a round on the command line. That's easily done, because it's just a wrapper around the entire script, or, better, the big conditional statement becomes a function all its own:
+
+```
+
+BuncoRound()
+{
+  # roll, display, and score a round of bunco!
+  # round is specified when invoked, score added to totalscore
+
+  local score=0 ; local round=$1 ; local hidescore=0
+
+  rolldie die1 ; rolldie die2 ; rolldie die3
+  echo Round $round. You rolled: $die1 $die2 $die3
+
+  if [ $die1 -eq $die2 ] && [ $die2 -eq $die3 ] ; then
+    if [ $die1 -eq $round ] ; then
+      echo "  BUNCO!"
+      score=25
+      hidescore=1
+    else
+      echo "  Mini Bunco!"
+      score=5
+      hidescore=1
+    fi
+  else
+    if [ $die1 -eq $round ] ; then
+      score=1
+    fi
+    if [ $die2 -eq $round ] ; then
+      score=$(( $score + 1 ))
+    fi
+    if [ $die3 -eq $round ] ; then
+      score=$(( $score + 1 ))
+    fi
+  fi
+
+  if [ $hidescore -eq 0 ] ; then
+    echo "  score this round: $score"
+  fi
+
+  totalscore=$(( $totalscore + $score ))
+}
+
+```
+
+I admit, I couldn't resist a few improvements as I went along, including having it show either Bunco, Mini Bunco or a score value (that's what $hidescore does).
+
+Invoking it is a breeze, and you'll use a for loop:
+
+```
+
+for round in {1..6} ; do
+  BuncoRound $round
+done
+
+```
+
+That's about the entire program at this point. Let's run it once and see what happens:
+
+```
+
+$ sh bunco.sh 1
+Round 1. You rolled: 2 3 3
+  score this round: 0
+Round 2. You rolled: 2 6 6
+  score this round: 1
+Round 3. You rolled: 1 2 4
+  score this round: 0
+Round 4. You rolled: 2 1 4
+  score this round: 1
+Round 5. You rolled: 5 5 6
+  score this round: 2
+Round 6. You rolled: 2 1 3
+  score this round: 0
+Game over. Your total score was 4
+
+```
+
+Ugh. Not too impressive, but it's probably a typical round. Again, you can run it a few hundred—or thousand—times, just save the "Game over" line, then do some quick statistical analysis to see how often you score more than 3 points in six rounds. (With three dice to roll a given value, you should hit that 50% of the time.)
+
+It's not a complicated game by any means, but it makes for an interesting little programming project. Now, what if it used 20-sided dice and let you re-roll one die per round and had a dozen rounds?
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxjournal.com/content/shell-scripting-bunco-game
+
+作者:[Dave Taylor][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:http://www.linuxjournal.com/users/dave-taylor
diff --git a/sources/tech/20180122 A Simple Command-line Snippet Manager.md b/sources/tech/20180122 A Simple Command-line Snippet Manager.md
new file mode 100644
index 0000000000..1c8ef14fb6
--- /dev/null
+++ b/sources/tech/20180122 A Simple Command-line Snippet Manager.md
@@ -0,0 +1,319 @@
+A Simple Command-line Snippet Manager
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/01/pet-6-720x340.png)
+
+We can't remember all the commands, right? Yes. Apart from the frequently used commands, it is nearly impossible to remember long commands that we rarely use. That's why we need some external tools to help us find the commands when we need them. In the past, we have reviewed two useful utilities named [**Bashpast**][1] and [**Keep**][2]. Using Bashpast, we can easily bookmark Linux commands for easier repeated invocation. And the Keep utility can be used to keep important and lengthy commands in your terminal, so you can use them on demand. Today, we are going to see yet another tool in the series to help you remember commands. Say hello to **Pet**, a simple command-line snippet manager written in the **Go** language.
+
+Using Pet, you can:
+
+  * Register/add your important, long and complex command snippets.
+  * Search the saved command snippets interactively.
+  * Run snippets directly without having to type over and over.
+  * Edit the saved command snippets easily.
+  * Sync the snippets via Gist.
+  * Use variables in snippets.
+  * And more yet to come.
+
+
+
+#### Installing Pet CLI Snippet Manager
+
+Since it is written in the Go language, make sure you have installed Go on your system.
+
+Once Go is installed, grab the latest binaries from [**the releases page**][3].
+```
+wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_amd64.zip
+```
+
+For 32 bit:
+```
+wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_386.zip
+```
+
+Extract the downloaded archive:
+```
+unzip pet_0.2.4_linux_amd64.zip
+```
+
+32 bit:
+```
+unzip pet_0.2.4_linux_386.zip
+```
+
+Copy the pet binary file to a directory in your PATH (e.g., **/usr/local/bin** or the like).
+```
+sudo cp pet /usr/local/bin/
+```
+
+Finally, make it executable:
+```
+sudo chmod +x /usr/local/bin/pet
+```
+
+If you're using Arch-based systems, you can install it from the AUR using any AUR helper tool.
+
+Using [**Pacaur**][4]:
+```
+pacaur -S pet-git
+```
+
+Using [**Packer**][5]:
+```
+packer -S pet-git
+```
+
+Using [**Yaourt**][6]:
+```
+yaourt -S pet-git
+```
+
+Using [**Yay**][7]:
+```
+yay -S pet-git
+```
+
+Also, you need to install the **[fzf][8]** or [**peco**][9] tool to enable interactive search. Refer to the official GitHub links to learn how to install these tools.
+
+#### Usage
+
+Run 'pet' without any arguments to view the list of available commands and general options.
+```
+$ pet
+pet - Simple command-line snippet manager.
+
+Usage:
+  pet [command]
+
+Available Commands:
+  configure   Edit config file
+  edit        Edit snippet file
+  exec        Run the selected commands
+  help        Help about any command
+  list        Show all snippets
+  new         Create a new snippet
+  search      Search snippets
+  sync        Sync snippets
+  version     Print the version number
+
+Flags:
+      --config string   config file (default is $HOME/.config/pet/config.toml)
+      --debug           debug mode
+  -h, --help            help for pet
+
+Use "pet [command] --help" for more information about a command.
+```
+
+To view the help section of a specific command, run:
+```
+$ pet [command] --help
+```
+
+**Configure Pet**
+
+It works just fine with the default values. However, you can change the default directory for saving snippets, choose the selector (fzf or peco) to use, set the default text editor for editing snippets, add Gist ID details, etc.
+
+To configure Pet, run:
+```
+$ pet configure
+```
+
+This command will open the default configuration in the default text editor (for example **vim** in my case). Change/edit the values as per your requirements.
+```
+[General]
+  snippetfile = "/home/sk/.config/pet/snippet.toml"
+  editor = "vim"
+  column = 40
+  selectcmd = "fzf"
+
+[Gist]
+  file_name = "pet-snippet.toml"
+  access_token = ""
+  gist_id = ""
+  public = false
+```
+
+**Creating Snippets**
+
+To create a new snippet, run:
+```
+$ pet new
+```
+
+Add the command and the description and hit ENTER to save it.
+```
+Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'
+Description> Remove numbers from output.
+```
+
+[![][10]][11]
+
+This is a simple command to remove all numbers from the echo command output. You can easily remember it. But if you rarely use it, you may forget it completely after a few days. Of course we can search the history using "CTRL+r", but "Pet" is much easier. Also, Pet can hold any number of entries.
+
+Another cool feature is that we can easily add the previous command. To do so, add the following lines in your **.bashrc** or **.zshrc** file.
+```
+function prev() {
+  PREV=$(fc -lrn | head -n 1)
+  sh -c "pet new `printf %q "$PREV"`"
+}
+```
+
+Run the following command for the saved changes to take effect.
+```
+source .bashrc
+```
+
+Or,
+```
+source .zshrc
+```
+
+Now, run any command, for example:
+```
+$ cat Documents/ostechnix.txt | tr '|' '\n' | sort | tr '\n' '|' | sed "s/.$/\\n/g"
+```
+
+To add the above command, you don't have to use the "pet new" command. Just do:
+```
+$ prev
+```
+
+Add the description to the command snippet and hit ENTER to save.
+
+[![][10]][12]
+
+**List Snippets**
+
+To view the saved snippets, run:
+```
+$ pet list
+```
+
+[![][10]][13]
+
+**Edit Snippets**
+
+If you want to edit the description or the command of a snippet, run:
+```
+$ pet edit
+```
+
+This will open all saved snippets in your default text editor. You can edit or change the snippets as you wish.
+```
+[[snippets]]
+  description = "Remove numbers from output."
+  command = "echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'"
+  output = ""
+
+[[snippets]]
+  description = "Alphabetically sort one line of text"
+  command = "\t prev"
+  output = ""
+```
+
+**Use Tags in Snippets**
+
+To add tags to a snippet, use the **-t** flag like below.
+```
+$ pet new -t
+Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'
+Description> Remove numbers from output.
+Tag> tr command examples
+
+```
+
+**Execute Snippets**
+
+To execute a saved snippet, run:
+```
+$ pet exec
+```
+
+Choose the snippet you want to run from the list and hit ENTER to run it.
+
+[![][10]][14]
+
+Remember you need to install fzf or peco to use this feature.
+
+**Search Snippets**
+
+If you have plenty of saved snippets, you can easily search them using a string or keyword like below.
+```
+$ pet search
+```
+
+Enter the search term or keyword to narrow down the search results.
+
+[![][10]][15]
+
+**Sync Snippets**
+
+First, you need to obtain an access token. Go to your GitHub token settings page (https://github.com/settings/tokens) and create an access token (it only needs the "gist" scope).
+
+Configure Pet using the command:
+```
+$ pet configure
+```
+
+Set that token as **access_token** in the **[Gist]** section.
+
+After setting it, you can upload snippets to Gist like below.
+```
+$ pet sync -u
+Gist ID: 2dfeeeg5f17e1170bf0c5612fb31a869
+Upload success

+```
+
+You can also download snippets on another PC. To do so, edit the configuration file and set the **Gist ID** as **gist_id** in **[Gist]**.
+
+Then, download the snippets using the command:
+```
+$ pet sync
+Download success

+```
+
+For more details, refer to the help section:
+```
+pet -h
+```
+
+Or,
+```
+pet [command] -h
+```
+
+And that's all. Hope this helps. As you can see, Pet is fairly simple and easy to use! If you're having a hard time remembering lengthy commands, the Pet utility can definitely be useful.
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/pet-simple-command-line-snippet-manager/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/bookmark-linux-commands-easier-repeated-invocation/
+[2]:https://www.ostechnix.com/save-commands-terminal-use-demand/
+[3]:https://github.com/knqyf263/pet/releases
+[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
+[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
+[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
+[7]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
+[8]:https://github.com/junegunn/fzf
+[9]:https://github.com/peco/peco
+[10]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-1.png
+[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-2.png
+[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-3.png
+[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-4.png
+[15]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-5.png
diff --git a/sources/tech/20180122 How to Create a Docker Image.md b/sources/tech/20180122 How to Create a Docker Image.md
new file mode 100644
index 0000000000..4894085a8f
--- /dev/null
+++ b/sources/tech/20180122 How to Create a Docker Image.md
@@ -0,0 +1,197 @@
+How to Create a Docker Image
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/container-image_0.jpg?itok=G_Gz80R9)
+
+In the previous [article][1], we learned how to get started with Docker on Linux, macOS, and Windows. In this article, we will get a basic understanding of creating Docker images. There are prebuilt images available on DockerHub that you can use for your own project, and you can publish your own image there.
+
+We are going to use prebuilt images to get the base Linux subsystem, as it's a lot of work to build one from scratch. You can get Alpine (the official distro used by Docker Editions), Ubuntu, BusyBox, or scratch. In this example, I will use Ubuntu.
+
+Before we start building our images, let's "containerize" them! By this I just mean creating directories for all of your Docker images so that you can maintain different projects and stages isolated from each other.
+```
+$ mkdir dockerprojects

+cd dockerprojects

+```
+
+Now create a Dockerfile inside the dockerprojects directory using your favorite text editor; I prefer nano, which is also easy for new users.
+```
+$ nano Dockerfile

+```
+
+And add this line:
+```
+FROM ubuntu

+```
+
+![m7_f7No0pmZr2iQmEOH5_ID6MDG2oEnODpQZkUL7][2]
+
+Save it with Ctrl+X, then Y.
+
+Now create your new image and provide it with a name (run these commands within the same directory):
+```
+$ docker build -t dockp .

+```
+
+(Note the dot at the end of the command.)
This should build successfully, so you'll see:
+```
+Sending build context to Docker daemon 2.048kB
+
+Step 1/1 : FROM ubuntu
+
+---> 2a4cca5ac898
+
+Successfully built 2a4cca5ac898
+
+Successfully tagged dockp:latest

+```
+
+It's time to run and test your image:
+```
+$ docker run -it dockp

+```
+
+You should see the root prompt:
+```
+root@c06fcd6af0e8:/#

+```
+
+This means you are literally running bare minimal Ubuntu inside Linux, Windows, or macOS. You can run all native Ubuntu commands and CLI utilities.
+
+![vpZ8ts9oq3uk--z4n6KP3DD3uD_P4EpG7fX06MC3][3]
+
+Let's check all the Docker images you have in your directory:
+```
+$ docker images

+REPOSITORY    TAG     IMAGE ID      CREATED      SIZE
+
+dockp         latest  2a4cca5ac898  1 hour ago   111MB
+
+ubuntu        latest  2a4cca5ac898  1 hour ago   111MB
+
+hello-world   latest  f2a91732366c  8 weeks ago  1.85kB

+```
+
+You can see all three images: dockp, ubuntu, and hello-world, which I created a few weeks ago when working on the previous articles of this series. Building a whole LAMP stack can be challenging, so we are going to create a simple Apache server image with a Dockerfile.
+
+A Dockerfile is basically a set of instructions to install all the needed packages, configure things, and copy files. In this case, it installs Apache.
+
+You may also want to create an account on DockerHub and log into your account before building images, in case you are pulling something from DockerHub. To log into DockerHub from the command line, just run:
+```
+$ docker login

+```
+
+Enter your username and password and you are logged in.
+
+Next, create a directory for Apache inside the dockerprojects directory:
+```
+$ mkdir apache

+```
+
+Create a Dockerfile inside the apache folder:
+```
+$ nano Dockerfile

+```
+
+And paste these lines:
+```
+FROM ubuntu
+
+MAINTAINER Kimbro Staken version: 0.1
+
+RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*

+ENV APACHE_RUN_USER www-data
+
+ENV APACHE_RUN_GROUP www-data
+
+ENV APACHE_LOG_DIR /var/log/apache2

+EXPOSE 80

+CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

+```
+
+Then, build the image:
+```
+docker build -t apache .

+```
+
+(Note the dot after a space at the end.)
+
+It will take some time, then you should see a successful build like this:
+```
+Successfully built e7083fd898c7
+
+Successfully tagged ng:latest
+
+Swapnil:apache swapnil$

+```
+
+Now let's run the server:
+```
+$ docker run -d apache
+
+a189a4db0f7c245dd6c934ef7164f3ddde09e1f3018b5b90350df8be85c8dc98

+```
+
+Eureka. Your container is running. Check all the running containers:
+```
+$ docker ps
+
+CONTAINER ID  IMAGE   COMMAND                 CREATED
+
+a189a4db0f7   apache  "/usr/sbin/apache2ctl"  10 seconds ago

+```
+
+You can kill the container with the docker kill command:
+```
+$ docker kill a189a4db0f7

+```
+
+So, you see: the "image" itself is persistent and stays in your directory, while a container runs and goes away. Now you can create as many images as you want and spin up and nuke as many containers as you need from those images.
+
+That's how to create an image and run containers.
+
+To learn more, you can open your web browser and check out the documentation about how to build more complicated Docker images like the whole LAMP stack. Here is a [Dockerfile][4] for you to play with. In the next article, I'll show how to push images to DockerHub.
+
+Learn more about Linux through the free ["Introduction to Linux"][5] course from The Linux Foundation and edX.
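+
+One more experiment worth trying: the apache image above EXPOSEs port 80, but `docker run -d apache` doesn't publish that port to the host. The sketch below (assuming port 8080 is free on your machine) maps the container's port 80 to local port 8080 so you can reach Apache from curl or a browser:
+```
+$ docker run -d -p 8080:80 apache
+
+$ curl http://localhost:8080

+```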
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/how-create-docker-image
+
+作者:[SWAPNIL BHARTIYA][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.linux.com/users/arnieswap
+[1]:https://www.linux.com/blog/learn/intro-to-linux/how-install-docker-ce-your-desktop
+[2]:https://lh6.googleusercontent.com/m7_f7No0pmZr2iQmEOH5_ID6MDG2oEnODpQZkUL7q3GYRB9f1-lvMYLE5f3GBpzIk-ev5VlcB0FHYSxn6NNQjxY4jJGqcgdFWaeQ-027qX_g-SVtbCCMybJeD6QIXjzM2ga8M4l4
+[3]:https://lh3.googleusercontent.com/vpZ8ts9oq3uk--z4n6KP3DD3uD_P4EpG7fX06MC3uFvj2-WaI1DfOfec9ZXuN7XUNObQ2SCc4Nbiqp-CM7ozUcQmtuzmOdtUHTF4Jq8YxkC49o2k7y5snZqTXsueITZyaLiHq8bT
+[4]:https://github.com/fauria/docker-lamp/blob/master/Dockerfile
+[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20180122 Linux rm Command Explained for Beginners (8 Examples).md b/sources/tech/20180122 Linux rm Command Explained for Beginners (8 Examples).md
new file mode 100644
index 0000000000..5ba87a1b7e
--- /dev/null
+++ b/sources/tech/20180122 Linux rm Command Explained for Beginners (8 Examples).md
@@ -0,0 +1,172 @@
+Linux rm Command Explained for Beginners (8 Examples)
+======
+
+Deleting files is a fundamental operation, just like copying files or renaming/moving them. In Linux, there's a dedicated command - dubbed **rm** - that lets you perform all deletion-related operations. In this tutorial, we will discuss the basics of this tool along with some easy-to-understand examples.
+
+But before we do that, it's worth mentioning that all the examples in this article have been tested on Ubuntu 16.04 LTS.
+
+#### Linux rm command
+
+So in layman's terms, we can simply say the rm command is used for removing/deleting files and directories. Following is the syntax of the command:
+
+```
+rm [OPTION]... [FILE]...
+```
+
+And here's how the tool's man page describes it:
+```
+This manual page documents the GNU version of rm. rm removes each specified file. By default, it
+does not remove directories.
+
+If  the  -I or --interactive=once option is given, and there are more than three files or the -r,
+-R, or --recursive are given, then rm prompts the user for whether to proceed with the entire
+operation. If the response is not affirmative, the entire command is aborted.
+
+Otherwise, if a file is unwritable, standard input is a terminal, and the -f or --force option is
+not given, or the -i or --interactive=always option is given, rm prompts the user for whether to
+remove the file. If the response is not affirmative, the file is skipped.
+```
+
+The following Q&A-styled examples will give you a better idea of how the tool works.
+
+#### Q1. How to remove files using rm command?
+
+That's pretty easy and straightforward. All you have to do is pass the names of the files (along with paths if they are not in the current working directory) as input to the rm command.
+
+```
+rm [filename]
+```
+
+For example:
+
+```
+rm testfile.txt
+```
+
+[![How to remove files using rm command][1]][2]
+
+#### Q2. How to remove directories using rm command?
+
+If you are trying to remove a directory, you need to use the **-r** command line option. Otherwise, rm will throw an error saying that what you are trying to delete is a directory.
+
+```
+rm -r [dir name]
+```
+
+For example:
+
+```
+rm -r testdir
+```
+
+[![How to remove directories using rm command][3]][4]
+
+#### Q3. How to make rm prompt before every removal?
+
+If you want rm to prompt before each delete action it performs, use the **-i** command line option.
+
+```
+rm -i [file or dir]
+```
+
+For example, suppose you want to delete a directory 'testdir' and all its contents, but want rm to prompt before every deletion; here's how you can do that:
+
+```
+rm -r -i testdir
+```
+
+[![How to make rm prompt before every removal][5]][6]
+
+#### Q4. How to force rm to ignore nonexistent files?
+
+The rm command lets you know through an error message if you try deleting a non-existent file or directory.
+
+[![Linux rm command example][7]][8]
+
+However, if you want, you can make rm suppress such errors/notifications - all you have to do is use the **-f** command line option.
+
+```
+rm -f [filename]
+```
+
+[![How to force rm to ignore nonexistent files][9]][10]
+
+#### Q5. How to make rm prompt only in some scenarios?
+
+There exists a command line option **-I**, which, when used, makes sure the command only prompts once before removing more than three files, or when removing recursively.
+
+For example, the following screenshot shows this option in action - there was no prompt when two files were deleted, but the command prompted when more than three files were deleted.
+
+[![How to make rm prompt only in some scenarios][11]][12]
+
+#### Q6. How does rm work when dealing with the root directory?
+
+Of course, deleting the root directory is the last thing a Linux user would want. That's why the rm command doesn't let you perform a recursive delete operation on this directory by default.
+
+[![How rm works when dealing with root directory][13]][14]
+
+However, if you want to go ahead with this operation for whatever reason, you need to tell this to rm by using the **--no-preserve-root** option. When this option is enabled, rm doesn't treat the root directory (/) specially.
+
+In case you want to know the scenarios in which a user might want to delete the root directory of their system, head [here][15].
+
+#### Q7. How to make rm only remove empty directories?
+
+In case you want to restrict rm's directory deletion ability to empty directories only, you can use the -d command line option.
+
+```
+rm -d [dir]
+```
+
+The following screenshot shows the -d command line option in action - only the empty directory got deleted.
+
+[![How to make rm only remove empty directories][16]][17]
+
+#### Q8. How to force rm to emit details of operation it is performing?
+
+If you want rm to display detailed information about the operation being performed, use the **-v** command line option.
+
+```
+rm -v [file or directory name]
+```
+
+For example:
+
+[![How to force rm to emit details of operation it is performing][18]][19]
+
+#### Conclusion
+
+Given the kind of functionality it offers, rm is one of the most frequently used commands in Linux (like [cp][20] and mv). Here, in this tutorial, we have covered almost all the major command line options this tool provides. rm has a bit of a learning curve associated with it, so you'll have to spend some time practicing its options before you start using the tool in your day-to-day work. For more information, head to the command's [man page][21].
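+
+As a parting example, several of the options discussed above combine naturally. Here is a sketch of a verbose, recursive delete (the exact wording of the verbose output may vary between coreutils versions):
+
+```
+rm -rv testdir
+removed 'testdir/testfile.txt'
+removed directory 'testdir'
+```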
+ + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/linux-rm-command/ + +作者:[Himanshu Arora][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.howtoforge.com +[1]:https://www.howtoforge.com/images/command-tutorial/rm-basic-usage.png +[2]:https://www.howtoforge.com/images/command-tutorial/big/rm-basic-usage.png +[3]:https://www.howtoforge.com/images/command-tutorial/rm-r.png +[4]:https://www.howtoforge.com/images/command-tutorial/big/rm-r.png +[5]:https://www.howtoforge.com/images/command-tutorial/rm-i-option.png +[6]:https://www.howtoforge.com/images/command-tutorial/big/rm-i-option.png +[7]:https://www.howtoforge.com/images/command-tutorial/rm-non-ext-error.png +[8]:https://www.howtoforge.com/images/command-tutorial/big/rm-non-ext-error.png +[9]:https://www.howtoforge.com/images/command-tutorial/rm-f-option.png +[10]:https://www.howtoforge.com/images/command-tutorial/big/rm-f-option.png +[11]:https://www.howtoforge.com/images/command-tutorial/rm-I-option.png +[12]:https://www.howtoforge.com/images/command-tutorial/big/rm-I-option.png +[13]:https://www.howtoforge.com/images/command-tutorial/rm-root-default.png +[14]:https://www.howtoforge.com/images/command-tutorial/big/rm-root-default.png +[15]:https://superuser.com/questions/742334/is-there-a-scenario-where-rm-rf-no-preserve-root-is-needed +[16]:https://www.howtoforge.com/images/command-tutorial/rm-d-option.png +[17]:https://www.howtoforge.com/images/command-tutorial/big/rm-d-option.png +[18]:https://www.howtoforge.com/images/command-tutorial/rm-v-option.png +[19]:https://www.howtoforge.com/images/command-tutorial/big/rm-v-option.png +[20]:https://www.howtoforge.com/linux-cp-command/ +[21]:https://linux.die.net/man/1/rm diff --git a/sources/tech/20180123 Never miss a Magazine-s article, build your own RSS notification system.md b/sources/tech/20180123 Never miss a Magazine-s article, build your own RSS notification system.md new file mode 100644 index 0000000000..8794ca611a --- /dev/null +++ b/sources/tech/20180123 Never miss a Magazine-s article, build your own RSS notification system.md @@ -0,0 +1,170 @@ +Never miss a Magazine's article, build your own RSS notification system +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/01/learn-python-rss-notifier.png-945x400.jpg) + +Python is a great programming language to quickly build applications that make our life easier. In this article we will learn how to use Python to build a RSS notification system, the goal being to have fun learning Python using Fedora. If you are looking for a complete RSS notifier application, there are a few already packaged in Fedora. + +### Fedora and Python - getting started + +Python 3.6 is available by default in Fedora, that includes Python's extensive standard library. The standard library provides a collection of modules which make some tasks simpler for us. For example, in our case we will use the [**sqlite3**][1] module to create, add and read data from a database. In the case where a particular problem we are trying to solve is not covered by the standard library, the chance is that someone has already developed a module for everyone to use. The best place to search for such modules is the Python Package Index known as [PyPI][2]. In our example we are going to use the [**feedparser**][3] to parse an RSS feed. 
+
+Since **feedparser** is not in the standard library, we have to install it in our system. Luckily for us there is an rpm package in Fedora, so the installation of **feedparser** is as simple as:
+```
+$ sudo dnf install python3-feedparser
+```
+
+We now have everything we need to start coding our application.
+
+### Storing the feed data
+
+We need to store data from the articles that have already been published so that we send a notification only for new articles. The data we want to store will give us a unique way to identify an article. Therefore we will store the **title** and the **publication date** of the article.
+
+So let's create our database using the Python **sqlite3** module and a simple SQL query. We are also adding the modules we are going to use later (**feedparser**, **smtplib** and **email**).
+
+#### Creating the Database
+```
+#!/usr/bin/python3
+import sqlite3
+import smtplib
+from email.mime.text import MIMEText
+
+import feedparser
+
+db_connection = sqlite3.connect('/var/tmp/magazine_rss.sqlite')
+db = db_connection.cursor()
+db.execute(' CREATE TABLE IF NOT EXISTS magazine (title TEXT, date TEXT)')

+```
+
+These few lines of code create a new sqlite database stored in a file called 'magazine_rss.sqlite', and then create a new table within the database called 'magazine'. This table has two columns - 'title' and 'date' - that can store data of the type TEXT, which means that the value of each column will be a text string.
+
+#### Checking the Database for old articles
+
+Since we only want to add new articles to our database, we need a function that will check if the article we get from the RSS feed is already in our database or not. We will use it to decide if we should send an email notification (new article) or not (old article). OK, let's code this function.
+```
+def article_is_not_db(article_title, article_date):
+    """ Check if a given pair of article title and date
+    is in the database.
+    Args:
+        article_title (str): The title of an article
+        article_date (str): The publication date of an article
+    Return:
+        True if the article is not in the database
+        False if the article is already present in the database
+    """
+    db.execute("SELECT * from magazine WHERE title=? AND date=?", (article_title, article_date))
+    if not db.fetchall():
+        return True
+    else:
+        return False
+```
+
+The main part of this function is the SQL query we execute to search through the database. We are using a SELECT instruction to define which columns of our magazine table we will run the query on. We are using the `*` symbol to select all columns (title and date). Then we ask to select only the rows of the table WHERE the article_title and article_date strings are equal to the values of the title and date columns.
+
+To finish, we have a simple logic that will return True if the query did not return any results and False if the query found an article in the database matching our title, date pair.
+
+Now we can code the function to add a new article to the database.
+```
+def add_article_to_db(article_title, article_date):
+    """ Add a new article title and date to the database
+    Args:
+        article_title (str): The title of an article
+        article_date (str): The publication date of an article
+    """
+    db.execute("INSERT INTO magazine VALUES (?,?)", (article_title, article_date))
+    db_connection.commit()
+```
+
+This function is straightforward: we are using a SQL query to INSERT a new row INTO the magazine table with the VALUES of the article_title and article_date. Then we commit the change to make it persistent.
+
+That's all we need from the database's point of view. Let's look at the notification system and how we can use Python to send emails.
+
+### Sending an email notification
+
+Let's create a function to send an email using the Python standard library module **smtplib**. We are also using the **email** module from the standard library to format our email message.
+```
+def send_notification(article_title, article_url):
+    """ Send an email notification about a new article
+
+    Args:
+        article_title (str): The title of an article
+        article_url (str): The url to access the article
+    """
+
+    smtp_server = smtplib.SMTP('smtp.gmail.com', 587)
+    smtp_server.ehlo()
+    smtp_server.starttls()
+    smtp_server.login('your_email@gmail.com', '123your_password')
+    msg = MIMEText(f'\nHi there is a new Fedora Magazine article : {article_title}. \nYou can read it here {article_url}')
+    msg['Subject'] = 'New Fedora Magazine Article Available'
+    msg['From'] = 'your_email@gmail.com'
+    msg['To'] = 'destination_email@gmail.com'
+    smtp_server.send_message(msg)
+    smtp_server.quit()
+```
+
+In this example I am using the Google mail smtp server to send an email, but this will work with any email service that provides you with an SMTP server. Most of this function is boilerplate needed to configure the access to the smtp server. You will need to update the code with your email address and credentials.
+
+If you are using 2 Factor Authentication with your gmail account, you can set up an app password that will give you a unique password to use for this application. Check out this help [page][4].
+
+### Reading Fedora Magazine RSS feed
+
+We now have functions to store an article in the database and send an email notification. Let's create a function that parses the Fedora Magazine RSS feed and extracts the articles' data.
+```
+def read_article_feed():
+    """ Get articles from RSS feed """
+    feed = feedparser.parse('https://fedoramagazine.org/feed/')
+    for article in feed['entries']:
+        if article_is_not_db(article['title'], article['published']):
+            send_notification(article['title'], article['link'])
+            add_article_to_db(article['title'], article['published'])
+
+if __name__ == '__main__':
+    read_article_feed()
+    db_connection.close()
+```
+
+Here we are making use of the **feedparser.parse** function. The function returns a dictionary representation of the RSS feed; for the full reference of the representation you can consult **feedparser**'s [documentation][5].
+
+The RSS feed parser will return the last 10 articles as entries, and then we extract the following information: the title, the link and the date the article was published. As a result, we can now use the functions we have previously defined to check if the article is not in the database, then send a notification email and finally, add the article to our database.
+
+The last if statement is used to execute our read_article_feed function and then close the database connection when we execute our script.
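+
+If you want to verify what the script has stored so far, you can also inspect the database directly. A quick sketch, assuming the sqlite3 command-line tool is installed on your system:
+```
+$ sqlite3 /var/tmp/magazine_rss.sqlite 'SELECT * FROM magazine;'
+```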
+
+### Running our script
+
+Finally, to run our script we need to give the correct permission to the file. Next, we make use of the **cron** utility to automatically execute our script every hour (1 minute past the hour). **cron** is a job scheduler that we can use to run a task at a fixed time.
+```
+$ chmod a+x my_rss_notifier.py
+$ sudo cp my_rss_notifier.py /etc/cron.hourly
+```
+
+To keep this tutorial simple, we are using the cron.hourly directory to execute the script every hour. If you wish to learn more about **cron** and how to configure the **crontab**, please read **cron**'s Wikipedia [page][6].
+
+### Conclusion
+
+In this tutorial we have learned how to use Python to create a simple sqlite database, parse an RSS feed and send emails. I hope that this showed you how you can easily build your own application using Python and Fedora.
+
+The script is available on github [here][7].
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/never-miss-magazines-article-build-rss-notification-system/
+
+作者:[Clément Verna][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://fedoramagazine.org
+[1]:https://docs.python.org/3/library/sqlite3.html
+[2]:https://pypi.python.org/pypi
+[3]:https://pypi.python.org/pypi/feedparser/5.2.1
+[4]:https://support.google.com/accounts/answer/185833?hl=en
+[5]:https://pythonhosted.org/feedparser/reference.html
+[6]:https://en.wikipedia.org/wiki/Cron
+[7]:https://github.com/cverna/rss_feed_notifier
diff --git a/sources/tech/20180123 What Is bashrc and Why Should You Edit It.md b/sources/tech/20180123 What Is bashrc and Why Should You Edit It.md
new file mode 100644
index 0000000000..93b8b5dc7f
--- /dev/null
+++ b/sources/tech/20180123 What Is bashrc and Why Should You Edit It.md
@@ -0,0 +1,108 @@
+What Is bashrc and Why Should You Edit It
+======
+
+![](https://www.maketecheasier.com/assets/uploads/2018/01/what-is-bashrc-hero.png)
+
+There are a number of hidden files tucked away in your home directory. If you run macOS or a popular Linux distribution, you'll see a file named ".bashrc" up near the top of your hidden files. What is bashrc, and why is editing bashrc useful?
+
+![finder-find-bashrc][1]
+
+If you run a Unix-based or Unix-like operating system, you likely have bash installed as your default shell. While many [different shells][2] exist, bash is both the most common and, likely, the most popular. If you don't know what that means: bash interprets your typed input in the Terminal program and runs commands based on your input. It allows for some degree of customization using scripting, which is where bashrc comes in.
+
+In order to load your preferences, bash runs the contents of the bashrc file at each launch. This shell script is found in each user's home directory. It's used to save and load your terminal preferences and environmental variables.
+
+Terminal preferences can contain a number of different things. Most commonly, the bashrc file contains aliases that the user always wants available. Aliases allow the user to refer to commands by shorter or alternative names, and can be a huge time-saver for those that work in a terminal regularly.
+
+![terminal-edit-bashrc-1][3]
+
+You can edit bashrc in any terminal text editor. We will use `nano` in the following examples.
+
+To edit bashrc using `nano`, invoke the following command in Terminal:
+```
+nano ~/.bashrc
+```
+
+If you've never edited your bashrc file before, you might find that it's empty. That's fine! If not, you can feel free to put your additions on any line.
+
+Any changes you make to bashrc will be applied the next time you launch a terminal. If you want to apply them immediately, run the command below:
+```
+source ~/.bashrc
+```
+
+You can add to bashrc wherever you like, but feel free to use comments (preceded by `#`) to organize your code.
+
+Edits in bashrc have to follow [bash's scripting format][4]. If you don't know how to script with bash, there are a number of resources you can use online. This guide represents a fairly [comprehensive introduction][5] to the aspects of bashrc that we couldn't mention here.
+
+ **Related** : [How to Run Bash Script as Root During Startup on Linux][6]
+
+There are a couple of useful tricks you can use to make your terminal experience more efficient and user-friendly.
+
+### Why should I edit bashrc?
+
+#### Bash Prompt
+
+The bash prompt allows you to style up your terminal and have it show useful information when you run a command. A customized bash prompt can indeed make your work on the terminal more productive and efficient.
+
+Check out some of the [useful][7] and [interesting][8] bash prompts you can add to your bashrc.
+
+#### Aliases
+
+![terminal-edit-bashrc-3][9]
+
+Aliases can also allow you to access a favored form of a command with a shorthand code. Let's take the command `ls` as an example. By default, `ls` displays the contents of your directory. That's useful, but it's often more useful to know more about the directory, or know the hidden contents of the directory. As such, a common alias is `ll`, which is set to run `ls -lha` or something similar. That will display the most details about files, revealing hidden files and showing file sizes in "human readable" units instead of blocks.
+
+You'll need to format your aliases like so (note that, unlike an ordinary variable assignment, an alias definition must not have spaces around the `=`):
+```
+alias ll="ls -lha"
+```
+
+Type the text you want to replace on the left, and the command on the right between quotes. You can use this to create shorter versions of commands, guard against common typos, or force a command to always run with your favored flags. You can also circumvent annoying or easy-to-forget syntax with your own preferred shorthand. Here are some of the [commonly used aliases][10] you can add to your bashrc.
+
+#### Functions
+
+![terminal-edit-bashrc-2][11]
+
+In addition to shorthand command names, you can combine multiple commands into a single operation using bash functions. They can get pretty complicated, but they generally follow this syntax:
+```
+function_name () {
+  command_1
+  command_2
+}
+```
+
+The command below combines `mkdir` and `cd`. Typing `md folder_name` creates a directory named "folder_name" in your working directory and navigates into it immediately.
+```
+md () {
+  mkdir -p $1
+  cd $1
+}
+```
+
+The `$1` you see in the function represents the first argument, which is the text you type immediately after the function name.
+
+### Conclusion
+
+Unlike some terminal customization tricks, messing with bashrc is fairly straightforward and low risk. If you mess anything up, you can always delete the bashrc file completely and start over again. Try it out now and you will be amazed at your improved productivity.
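+
+One last sanity check before you go: reload bashrc and try the `md` function from above (the example path here is just an illustration):
+```
+source ~/.bashrc
+md projects/demo    # mkdir -p creates the nested path, then cd drops you into it
+pwd                 # now prints the full path ending in projects/demo
+```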
+ +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/what-is-bashrc/ + +作者:[Alexander Fox][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/alexfox/ +[1]:https://www.maketecheasier.com/assets/uploads/2018/01/finder-find-bashrc.png (finder-find-bashrc) +[2]:https://www.maketecheasier.com/alternative-linux-shells/ +[3]:https://www.maketecheasier.com/assets/uploads/2018/01/terminal-edit-bashrc-1.png (terminal-edit-bashrc-1) +[4]:http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO.html +[5]:https://www.digitalocean.com/community/tutorials/an-introduction-to-useful-bash-aliases-and-functions +[6]:https://www.maketecheasier.com/run-bash-script-as-root-during-startup-linux/ (How to Run Bash Script as Root During Startup on Linux) +[7]:https://www.maketecheasier.com/8-useful-and-interesting-bash-prompts/ +[8]:https://www.maketecheasier.com/more-useful-and-interesting-bash-prompts/ +[9]:https://www.maketecheasier.com/assets/uploads/2018/01/terminal-edit-bashrc-3.png (terminal-edit-bashrc-3) +[10]:https://www.maketecheasier.com/install-software-in-various-linux-distros/#aliases +[11]:https://www.maketecheasier.com/assets/uploads/2018/01/terminal-edit-bashrc-2.png (terminal-edit-bashrc-2) diff --git a/sources/tech/20180124 4 cool new projects to try in COPR for January.md b/sources/tech/20180124 4 cool new projects to try in COPR for January.md new file mode 100644 index 0000000000..53e8f362a0 --- /dev/null +++ b/sources/tech/20180124 4 cool new projects to try in COPR for January.md @@ -0,0 +1,85 @@ +translating---geekpi + +4 cool new projects to try in COPR for January +====== + +![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg) + +COPR is a [collection][1] of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software. + +Here's a set of new and interesting projects in COPR. + +### Elisa + +[Elisa][2] is a minimal music player. It lets you browse music by albums, artists or tracks. It automatically detects all playable music in your ~/Music directory, thus it requires no set up at all - neither does it offer any. Currently, Elisa focuses on being a simple music player, so it offers no tools for managing your music collection. + +![][3] + +#### Installation instructions + +The repo currently provides Elisa for Fedora 26, 27 and Rawhide. To install Elisa, use these commands: +``` +sudo dnf copr enable eclipseo/elisa +sudo dnf install elisa +``` + +### Bing Wallpapers + +[Bing Wallpapers][4] is a simple program that downloads Bing's wallpaper of the day and sets it as a desktop wallpaper or a lock screen image. The program can rotate over pictures in its directory in set intervals as well as delete old pictures after a set amount of time. + +#### Installation instructions + +The repo currently provides Bing Wallpapers for Fedora 25, 26, 27 and Rawhide. 
To install Bing Wallpapers, use these commands:
+```
+sudo dnf copr enable julekgwa/Bingwallpapers
+sudo dnf install bingwallpapers
+```
+
+### Polybar
+
+[Polybar][5] is a tool for creating status bars. It has a lot of customization options as well as built-in functionality to display information about commonly used services, such as systray icons, window title, workspace and desktop panel for [bspwm][6], [i3][7], and more. You can also configure your own modules for your status bar. See [Polybar's wiki][8] for more information about usage and configuration.
+
+#### Installation instructions
+
+The repo currently provides Polybar for Fedora 27. To install Polybar, use these commands:
+```
+sudo dnf copr enable tomwishaupt/polybar
+sudo dnf install polybar
+```
+
+### Netdata
+
+[Netdata][9] is a distributed monitoring system. It can run on all your systems including PCs, servers, containers and IoT devices, from which it collects metrics in real time. All the information can then be accessed using Netdata's web dashboard. Additionally, Netdata provides pre-configured alarms and notifications for detecting performance issues, as well as templates for creating your own alarms.
+
+![][10]
+
+#### Installation instructions
+
+The repo currently provides Netdata for EPEL 7, Fedora 27 and Rawhide. To install Netdata, use these commands:
+```
+sudo dnf copr enable recteurlp/netdata
+sudo dnf install netdata
+```
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january/
+
+作者:[Dominik Turecek][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://fedoramagazine.org
+[1]:https://copr.fedorainfracloud.org/
+[2]:https://community.kde.org/Elisa
+[3]:https://fedoramagazine.org/wp-content/uploads/2018/01/elisa.png
+[4]:http://bingwallpapers.lekgoara.com/
+[5]:https://github.com/jaagr/polybar
+[6]:https://github.com/baskerville/bspwm
+[7]:https://i3wm.org/
+[8]:https://github.com/jaagr/polybar/wiki
+[9]:http://my-netdata.io/
+[10]:https://fedoramagazine.org/wp-content/uploads/2018/01/netdata.png
diff --git a/sources/tech/20180124 8 ways to generate random password in Linux.md b/sources/tech/20180124 8 ways to generate random password in Linux.md
new file mode 100644
index 0000000000..ee60df826b
--- /dev/null
+++ b/sources/tech/20180124 8 ways to generate random password in Linux.md
@@ -0,0 +1,272 @@
+8 ways to generate random password in Linux
+======
+Learn 8 different ways to generate random passwords in Linux using native Linux commands or third-party utilities.
+
+![][1]
+
+In this article, we will walk you through various ways to generate a random password in the Linux terminal. A few of them use native Linux commands and others use third-party tools or utilities which can easily be installed on a Linux machine. Here we are looking at native commands like `openssl`, [dd][2], `md5sum`, `tr` and `/dev/urandom`, and third-party tools like mkpasswd, randpw, pwgen, spw, gpg, xkcdpass, diceware, revelation, keepassx, passwordmaker.
+
+These are actually ways to get some random alphanumeric string which can be utilized as a password. Random passwords can be used for new users so that there will be uniqueness no matter how large your user base is; a short sketch of that scenario follows below. Without any further delay, let's jump into those 8 different ways to generate a random password in Linux.
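+
+First, the promised sketch of the new-user scenario. This is illustrative only: the usernames and password length are made-up example values, and the generator is just one of the methods covered below:
+```
+#!/bin/bash
+# Illustrative sketch: create accounts and assign each a unique random password.
+# Run as root; replace the example usernames with your own list.
+for user in alice bob carol; do
+    pass=$(openssl rand -base64 12)        # 12 random bytes, base64 encoded
+    useradd -m "$user"                     # create the account with a home directory
+    echo "${user}:${pass}" | chpasswd      # set the generated password
+    echo "Generated password for ${user}: ${pass}"   # hand over via a secure channel
+done
+```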
+
+##### Generate password using mkpasswd utility
+
+`mkpasswd` comes with the install of the `expect` package on RHEL based systems. On Debian based systems `mkpasswd` comes with the package `whois`. Trying to install the `mkpasswd` package directly will result in an error:
+
+`No package mkpasswd available.` on RHEL systems, and `E: Unable to locate package mkpasswd` on Debian based systems.
+
+So install their parent packages as mentioned above and you are good to go.
+
+Run `mkpasswd` to get passwords:
+
+```
+root@kerneltalks# mkpasswd << on RHEL
+zt*hGW65c
+```
+
+The command behaves differently on different systems, so work accordingly. There are many switches that can be used to control length and other parameters. You can explore them in the man pages.
+
+##### Generate password using openssl
+
+OpenSSL comes built in with almost all Linux distributions. We can use its `rand` command to generate an alphanumeric string that can be used as a password.
+
+```
+root@kerneltalks # openssl rand -base64 10
+nU9LlHO5nsuUvw==
+```
+
+Here we are using the `rand` command with `base64` encoding; the last argument (`10`) is the number of random bytes to generate before they are `base64` encoded.
+
+##### Generate password using urandom
+
+The device file `/dev/urandom` is another source of random characters. We use the `tr` command to filter the stream and `head` to trim the output to a random string to use as a password.
+
+```
+root@kerneltalks # strings /dev/urandom |tr -dc A-Za-z0-9 | head -c20; echo
+UiXtr0NAOSIkqtjK4c0X
+```
+
+##### dd command to generate password
+
+We can even use the /dev/urandom device along with the [dd command][2] to get a string of random characters.
+
+```
+root@kerneltalks# dd if=/dev/urandom bs=1 count=15|base64 -w 0
+15+0 records in
+15+0 records out
+15 bytes (15 B) copied, 5.5484e-05 s, 270 kB/s
+QMsbe2XbrqAc2NmXp8D0
+```
+
+We need to pass the output through `base64` encoding to make it human readable. You can play with the count value to get the desired length. For much cleaner output, redirect stderr (file descriptor 2) to `/dev/null`. The cleaner command will be:
+
+```
+root@kerneltalks # dd if=/dev/urandom bs=1 count=15 2>/dev/null|base64 -w 0
+F8c3a4joS+a3BdPN9C++
+```
+
+##### Using md5sum to generate password
+
+Another way to get an array of random characters which can be used as a password is to calculate an MD5 checksum! As you know, a checksum value indeed looks like random characters grouped together, so we can use it as a password. Make sure you use a source that varies, so that you get a different checksum every time you run the command. For example, `date`! The [date command][3] always yields changing output.
+
+```
+root@kerneltalks # date |md5sum
+4d8ce5c42073c7e9ca4aeffd3d157102 -
+```
+
+Here we passed the `date` command output to `md5sum` and got the checksum hash! You can use the [cut command][4] to get the desired length of output.
+
+##### Generate password using pwgen
+
+The `pwgen` package comes with [repositories like EPEL][5]. `pwgen` is more focused on generating passwords that are pronounceable, yet not dictionary words or plain English. You may not find it in the standard distribution repo. Install the package and run the `pwgen` command. Boom !
+
+```
+root@kerneltalks # pwgen
+thu8Iox7 ahDeeQu8 Eexoh0ai oD8oozie ooPaeD9t meeNeiW2 Eip6ieph Ooh1tiet
+cootad7O Gohci0vo wah9Thoh Ohh3Ziur Ao1thoma ojoo6aeW Oochai4v ialaiLo5
+aic2OaDa iexieQu8 Aesoh4Ie Eixou9ph ShiKoh0i uThohth7 taaN3fuu Iege0aeZ
+cah3zaiW Eephei0m AhTh8guo xah1Shoo uh8Iengo aifeev4E zoo4ohHa fieDei6c
+aorieP7k ahna9AKe uveeX7Hi Ohji5pho AigheV7u Akee9fae aeWeiW4a tiex8Oht
+```
+You will be presented with a list of passwords at your terminal!
What else you want? Ok. You still want to explore, `pwgen` comes with many custom options which can be referred for man page. + +##### Generate password using gpg tool + +GPG is a OpenPGP encryption and signing tool. Mostly gpg tool comes pre-installed (at least it is on my RHEL7). But if not you can look for `gpg` or `gpg2` package and [install][6] it. + +Use below command to generate password from gpg tool. + +``` +root@kerneltalks # gpg --gen-random --armor 1 12 +mL8i+PKZ3IuN6a7a +``` + +Here we are passing generate random byte sequence switch (`--gen-random`) of quality 1 (first argument) with count of 12 (second argument). Switch `--armor` ensures output is `base64` encoded. + +##### Generate password using xkcdpass + +Famous geek humor website [xkcd][7], published a very interesting post about memorable but still complex passwords. You can view it [here][8]. So `xkcdpass` tool took inspiration from this post and did its work! Its a python package and available on python's official website [here][9] + +All installation and usage instructions are mentioned on that page. Here is install steps and outputs from my test RHEL server for your reference. + +``` +root@kerneltalks # wget https://pypi.python.org/packages/b4/d7/3253bd2964390e034cf0bba227db96d94de361454530dc056d8c1c096abc/xkcdpass-1.14.3.tar.gz#md5=5f15d52f1d36207b07391f7a25c7965f +--2018-01-23 19:09:17-- https://pypi.python.org/packages/b4/d7/3253bd2964390e034cf0bba227db96d94de361454530dc056d8c1c096abc/xkcdpass-1.14.3.tar.gz +Resolving pypi.python.org (pypi.python.org)... 151.101.32.223, 2a04:4e42:8::223 +Connecting to pypi.python.org (pypi.python.org)|151.101.32.223|:443... connected. +HTTP request sent, awaiting response... 200 OK +Length: 871848 (851K) [binary/octet-stream] +Saving to: ‘xkcdpass-1.14.3.tar.gz’ + +100%[==============================================================================================================================>] 871,848 --.-K/s in 0.01s + +2018-01-23 19:09:17 (63.9 MB/s) - ‘xkcdpass-1.14.3.tar.gz’ saved [871848/871848] + + +root@kerneltalks # tar -xvf xkcdpass-1.14.3.tar.gz +xkcdpass-1.14.3/ +xkcdpass-1.14.3/examples/ +xkcdpass-1.14.3/examples/example_import.py +xkcdpass-1.14.3/examples/example_json.py +xkcdpass-1.14.3/examples/example_postprocess.py +xkcdpass-1.14.3/LICENSE.BSD +xkcdpass-1.14.3/MANIFEST.in +xkcdpass-1.14.3/PKG-INFO +xkcdpass-1.14.3/README.rst +xkcdpass-1.14.3/setup.cfg +xkcdpass-1.14.3/setup.py +xkcdpass-1.14.3/tests/ +xkcdpass-1.14.3/tests/test_list.txt +xkcdpass-1.14.3/tests/test_xkcdpass.py +xkcdpass-1.14.3/tests/__init__.py +xkcdpass-1.14.3/xkcdpass/ +xkcdpass-1.14.3/xkcdpass/static/ +xkcdpass-1.14.3/xkcdpass/static/eff-long +xkcdpass-1.14.3/xkcdpass/static/eff-short +xkcdpass-1.14.3/xkcdpass/static/eff-special +xkcdpass-1.14.3/xkcdpass/static/fin-kotus +xkcdpass-1.14.3/xkcdpass/static/ita-wiki +xkcdpass-1.14.3/xkcdpass/static/legacy +xkcdpass-1.14.3/xkcdpass/static/spa-mich +xkcdpass-1.14.3/xkcdpass/xkcd_password.py +xkcdpass-1.14.3/xkcdpass/__init__.py +xkcdpass-1.14.3/xkcdpass.1 +xkcdpass-1.14.3/xkcdpass.egg-info/ +xkcdpass-1.14.3/xkcdpass.egg-info/dependency_links.txt +xkcdpass-1.14.3/xkcdpass.egg-info/entry_points.txt +xkcdpass-1.14.3/xkcdpass.egg-info/not-zip-safe +xkcdpass-1.14.3/xkcdpass.egg-info/PKG-INFO +xkcdpass-1.14.3/xkcdpass.egg-info/SOURCES.txt +xkcdpass-1.14.3/xkcdpass.egg-info/top_level.txt + + +root@kerneltalks # cd xkcdpass-1.14.3 + +root@kerneltalks # python setup.py install +running install +running bdist_egg +running egg_info +writing 
xkcdpass.egg-info/PKG-INFO +writing top-level names to xkcdpass.egg-info/top_level.txt +writing dependency_links to xkcdpass.egg-info/dependency_links.txt +writing entry points to xkcdpass.egg-info/entry_points.txt +reading manifest file 'xkcdpass.egg-info/SOURCES.txt' +reading manifest template 'MANIFEST.in' +writing manifest file 'xkcdpass.egg-info/SOURCES.txt' +installing library code to build/bdist.linux-x86_64/egg +running install_lib +running build_py +creating build +creating build/lib +creating build/lib/xkcdpass +copying xkcdpass/xkcd_password.py -> build/lib/xkcdpass +copying xkcdpass/__init__.py -> build/lib/xkcdpass +creating build/lib/xkcdpass/static +copying xkcdpass/static/eff-long -> build/lib/xkcdpass/static +copying xkcdpass/static/eff-short -> build/lib/xkcdpass/static +copying xkcdpass/static/eff-special -> build/lib/xkcdpass/static +copying xkcdpass/static/fin-kotus -> build/lib/xkcdpass/static +copying xkcdpass/static/ita-wiki -> build/lib/xkcdpass/static +copying xkcdpass/static/legacy -> build/lib/xkcdpass/static +copying xkcdpass/static/spa-mich -> build/lib/xkcdpass/static +creating build/bdist.linux-x86_64 +creating build/bdist.linux-x86_64/egg +creating build/bdist.linux-x86_64/egg/xkcdpass +copying build/lib/xkcdpass/xkcd_password.py -> build/bdist.linux-x86_64/egg/xkcdpass +copying build/lib/xkcdpass/__init__.py -> build/bdist.linux-x86_64/egg/xkcdpass +creating build/bdist.linux-x86_64/egg/xkcdpass/static +copying build/lib/xkcdpass/static/eff-long -> build/bdist.linux-x86_64/egg/xkcdpass/static +copying build/lib/xkcdpass/static/eff-short -> build/bdist.linux-x86_64/egg/xkcdpass/static +copying build/lib/xkcdpass/static/eff-special -> build/bdist.linux-x86_64/egg/xkcdpass/static +copying build/lib/xkcdpass/static/fin-kotus -> build/bdist.linux-x86_64/egg/xkcdpass/static +copying build/lib/xkcdpass/static/ita-wiki -> build/bdist.linux-x86_64/egg/xkcdpass/static +copying build/lib/xkcdpass/static/legacy -> build/bdist.linux-x86_64/egg/xkcdpass/static +copying build/lib/xkcdpass/static/spa-mich -> build/bdist.linux-x86_64/egg/xkcdpass/static +byte-compiling build/bdist.linux-x86_64/egg/xkcdpass/xkcd_password.py to xkcd_password.pyc +byte-compiling build/bdist.linux-x86_64/egg/xkcdpass/__init__.py to __init__.pyc +creating build/bdist.linux-x86_64/egg/EGG-INFO +copying xkcdpass.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO +copying xkcdpass.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO +copying xkcdpass.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO +copying xkcdpass.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO +copying xkcdpass.egg-info/not-zip-safe -> build/bdist.linux-x86_64/egg/EGG-INFO +copying xkcdpass.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO +creating dist +creating 'dist/xkcdpass-1.14.3-py2.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it +removing 'build/bdist.linux-x86_64/egg' (and everything under it) +Processing xkcdpass-1.14.3-py2.7.egg +creating /usr/lib/python2.7/site-packages/xkcdpass-1.14.3-py2.7.egg +Extracting xkcdpass-1.14.3-py2.7.egg to /usr/lib/python2.7/site-packages +Adding xkcdpass 1.14.3 to easy-install.pth file +Installing xkcdpass script to /usr/bin + +Installed /usr/lib/python2.7/site-packages/xkcdpass-1.14.3-py2.7.egg +Processing dependencies for xkcdpass==1.14.3 +Finished processing dependencies for xkcdpass==1.14.3 +``` + +Now running xkcdpass command will give you random set of dictionary words like below - + +``` 
+root@kerneltalks # xkcdpass
+broadside unpadded osmosis statistic cosmetics lugged
+```
+
+You can use these words as input to other commands like `md5sum` to get a random password (like below), or you can even use the Nth letter of each word to form your password!
+
+```
+root@kerneltalks # xkcdpass |md5sum
+45f2ec9b3ca980c7afbd100268c74819 -
+
+root@kerneltalks # xkcdpass |md5sum
+ad79546e8350744845c001d8836f2ff2 -
+```
+Or you can even use all those words together as one long password that is easy for a user to remember and very hard to crack with a computer program.
+
+There are tools like [Diceware][10], [KeePassX][11], [Revelation][12], [PasswordMaker][13] for Linux that can be considered for making strong random passwords.
+
+--------------------------------------------------------------------------------
+
+via: https://kerneltalks.com/tips-tricks/8-ways-to-generate-random-password-in-linux/
+
+作者:[kerneltalks][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://kerneltalks.com
+[1]:https://a1.kerneltalks.com/wp-content/uploads/2018/01/different-ways-to-generate-password-in-linux.png
+[2]:https://kerneltalks.com/commands/learn-dd-command-with-examples/
+[3]:https://kerneltalks.com/commands/date-time-management-using-timedatectl-command/
+[4]:https://kerneltalks.com/linux/cut-command-examples/
+[5]:https://kerneltalks.com/package/how-to-install-epel-repository/
+[6]:https://kerneltalks.com/tools/package-installation-linux-yum-apt/
+[7]:https://xkcd.com/
+[8]:https://xkcd.com/936/
+[9]:https://pypi.python.org/pypi/xkcdpass/
+[10]:http://world.std.com/~reinhold/diceware.html
+[11]:https://www.keepassx.org/
+[12]:https://packages.debian.org/sid/gnome/revelation
+[13]:https://passwordmaker.org/
diff --git a/sources/tech/20180124 Containers the GPL and copyleft No reason for concern.md b/sources/tech/20180124 Containers the GPL and copyleft No reason for concern.md
new file mode 100644
index 0000000000..0578bdd642
--- /dev/null
+++ b/sources/tech/20180124 Containers the GPL and copyleft No reason for concern.md
@@ -0,0 +1,61 @@
+Containers, the GPL, and copyleft: No reason for concern
+============================================================
+
+### Wondering how open source licensing affects Linux containers? Here's what you need to know.
+
+
+![Containers, the GPL, and copyleft: No reason for concern](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_patents4abstract_B.png?itok=6RHeRaYh "Containers, the GPL, and copyleft: No reason for concern")
+Image by : opensource.com
+
+Though open source is thoroughly mainstream, new software technologies and old technologies that get newly popularized sometimes inspire hand-wringing about open source licenses. Most often the concern is about the GNU General Public License (GPL), and specifically the scope of its copyleft requirement, which is often described (somewhat misleadingly) as the GPL’s derivative work issue.
+
+One imperfect way of framing the question is whether GPL-licensed code, when combined in some sense with proprietary code, forms a single modified work such that the proprietary code could be interpreted as being subject to the terms of the GPL. While we haven’t yet seen much of that concern directed to Linux containers, we expect more questions to be raised as adoption of containers continues to grow.
But it’s fairly straightforward to show that containers do  _not_  raise new or concerning GPL scope issues. + +Statutes and case law provide little help in interpreting a license like the GPL. On the other hand, many of us give significant weight to the interpretive views of the Free Software Foundation (FSF), the drafter and steward of the GPL, even in the typical case where the FSF is not a copyright holder of the software at issue. In addition to being the author of the license text, the FSF has been engaged for many years in providing commentary and guidance on its licenses to the community. Its views have special credibility and influence based on its public interest mission and leadership in free software policy. + +The FSF’s existing guidance on GPL interpretation has relevance for understanding the effects of including GPL and non-GPL code in containers. The FSF has placed emphasis on the process boundary when considering copyleft scope, and on the mechanism and semantics of the communication between multiple software components to determine whether they are closely integrated enough to be considered a single program for GPL purposes. For example, the [GNU Licenses FAQ][4] takes the view that pipes, sockets, and command-line arguments are mechanisms that are normally suggestive of separateness (in the absence of sufficiently "intimate" communications). + +Consider the case of a container in which both GPL code and proprietary code might coexist and execute. A container is, in essence, an isolated userspace stack. In the [OCI container image format][5], code is packaged as a set of filesystem changeset layers, with the base layer normally being a stripped-down conventional Linux distribution without a kernel. As with the userspace of non-containerized Linux distributions, these base layers invariably contain many GPL-licensed packages (both GPLv2 and GPLv3), as well as packages under licenses considered GPL-incompatible, and commonly function as a runtime for proprietary as well as open source applications. The ["mere aggregation" clause][6] in GPLv2 (as well as its counterpart GPLv3 provision on ["aggregates"][7]) shows that this type of combination is generally acceptable, is specifically contemplated under the GPL, and has no effect on the licensing of the two programs, assuming incompatibly licensed components are separate and independent. + +Of course, in a given situation, the relationship between two components may not be "mere aggregation," but the same is true of software running in non-containerized userspace on a Linux system. There is nothing in the technical makeup of containers or container images that suggests a need to apply a special form of copyleft scope analysis. + +It follows that when looking at the relationship between code running in a container and code running outside a container, the "separate and independent" criterion is almost certainly met. The code will run as separate processes, and the whole technical point of using containers is isolation from other software running on the system. + +Now consider the case where two components, one GPL-licensed and one proprietary, are running in separate but potentially interacting containers, perhaps as part of an application designed with a [microservices][8] architecture. In the absence of very unusual facts, we should not expect to see copyleft scope extending across multiple containers. Separate containers involve separate processes. 
Communication between containers by way of network interfaces is analogous to such mechanisms as pipes and sockets, and a multi-container microservices scenario would seem to preclude what the FSF calls "[intimate][9]" communication by definition. The composition of an application using multiple containers may not be dispositive of the GPL scope issue, but it makes the technical boundaries between the components more apparent and provides a strong basis for arguing separateness. Here, too, there is no technical feature of containers that suggests application of a different and stricter approach to copyleft scope analysis. + +A company that is overly concerned with the potential effects of distributing GPL-licensed code might attempt to prohibit its developers from adding any such code to a container image that it plans to distribute. Insofar as the aim is to avoid distributing code under the GPL, this is a dubious strategy. As noted above, the base layers of conventional container images will contain multiple GPL-licensed components. If the company pushes a container image to a registry, there is normally no way it can guarantee that this will not include the base layer, even if it is widely shared. + +On the other hand, the company might decide to embrace containerization as a means of limiting copyleft scope issues by isolating GPL and proprietary code—though one would hope that technical benefits would drive the decision, rather than legal concerns likely based on unfounded anxiety about the GPL. While in a non-containerized setting the relationship between two interacting software components will often be mere aggregation, the evidence of separateness that containers provide may be comforting to those who worry about GPL scope. + +Open source license compliance obligations may arise when sharing container images. But there’s nothing technically different or unique about containers that changes the nature of these obligations or makes them harder to satisfy. With respect to copyleft scope, containerization should, if anything, ease the concerns of the extra-cautious. + + +### About the author + + [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-216.jpg?itok=R8W7jae8)][10] Richard Fontana - Richard is Senior Commercial Counsel on the Products and Technologies team in Red Hat's legal department. 
Most of his work focuses on open source-related legal issues.[More about me][2] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/containers-gpl-and-copyleft + +作者:[Richard Fontana ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/fontana +[1]:https://opensource.com/article/18/1/containers-gpl-and-copyleft?rate=qTlANxnuA2tf0hcGE6Po06RGUzcbB-cBxbU3dCuCt9w +[2]:https://opensource.com/users/fontana +[3]:https://opensource.com/user/10544/feed +[4]:https://www.gnu.org/licenses/gpl-faq.en.html#MereAggregation +[5]:https://github.com/opencontainers/image-spec/blob/master/spec.md +[6]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html#section2 +[7]:https://www.gnu.org/licenses/gpl.html#section5 +[8]:https://www.redhat.com/en/topics/microservices +[9]:https://www.gnu.org/licenses/gpl-faq.en.html#GPLPlugins +[10]:https://opensource.com/users/fontana +[11]:https://opensource.com/users/fontana +[12]:https://opensource.com/users/fontana +[13]:https://opensource.com/tags/licensing +[14]:https://opensource.com/tags/containers diff --git a/sources/tech/20180125 A step-by-step guide to Git.md b/sources/tech/20180125 A step-by-step guide to Git.md new file mode 100644 index 0000000000..cf43f625ce --- /dev/null +++ b/sources/tech/20180125 A step-by-step guide to Git.md @@ -0,0 +1,130 @@ +A step-by-step guide to Git +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lightbulb-idea-think-yearbook-lead.png?itok=5ZpCm0Jh) + +If you've never used [Git][1], you may be nervous about it. There's nothing to worry about--just follow along with this step-by-step getting-started guide, and you will soon have a new Git repository hosted on [GitHub][2]. + +Before we dive in, let's clear up a common misconception: Git isn't the same thing as GitHub. Git is a version-control system (i.e., a piece of software) that helps you keep track of your computer programs and files and the changes that are made to them over time. It also allows you to collaborate with your peers on a program, code, or file. GitHub and similar services (including GitLab and BitBucket) are websites that host a Git server program to hold your code. + +### Step 1: Create a GitHub account + +The easiest way to get started is to create an account on [GitHub.com][3] (it's free). + +![](https://opensource.com/sites/default/files/u128651/git_guide1.png) + +Pick a username (e.g., octocat123), enter your email address and a password, and click **Sign up for GitHub**. Once you are in, it will look something like this: + +![](https://opensource.com/sites/default/files/u128651/git_guide2.png) + +### Step 2: Create a new repository + +A repository is like a place or a container where something is stored; in this case we're creating a Git repository to store code. To create a new repository, select **New Repository** from the `+` sign dropdown menu (you can see I've selected it in the upper-right corner in the image above). + +![](https://opensource.com/sites/default/files/u128651/git_guide3.png) + +Enter a name for your repository (e.g, "Demo") and click **Create Repository**. Don't worry about changing any other options on this page. + +Congratulations! You have set up your first repo on GitHub.com. 
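+
+One quick note before moving on; this setup is not part of the original walkthrough, but if Git has never been used on this computer before, it needs to know who you are so it can record that information in the commits you will make in Step 4. It is a one-time configuration, and the name and email below are placeholders for your own:
+```
+# One-time setup (placeholder values): tell Git who is making commits.
+git config --global user.name "Your Name"
+git config --global user.email "you@example.com"
+```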
+
+### Step 3: Create a file
+
+Once your repo is created, it will look like this:
+
+![](https://opensource.com/sites/default/files/u128651/git_guide4.png)
+
+Don't panic, it's simpler than it looks. Stay with me. Look at the section that starts "...or create a new repository on the command line," and ignore the rest for now.
+
+Open the Terminal program on your computer.
+
+![](https://opensource.com/sites/default/files/u128651/git_guide5.png)
+
+Type `git` and hit **Enter**. If it says `bash: git: command not found`, then [install Git][4] with the command for your Linux operating system or distribution. Check the installation by typing `git` and hitting **Enter** ; if it's installed, you should see a bunch of information about how you can use the command.
+
+In the terminal, type:
+```
+mkdir Demo
+```
+
+This command will create a directory (or folder) named Demo.
+
+Change your terminal to the Demo directory with the command:
+```
+cd Demo
+```
+
+Then enter:
+```
+echo "#Demo" >> README.md
+```
+
+This creates a file named `README.md` and writes `#Demo` in it. To check that the file was created successfully, enter:
+```
+cat README.md
+```
+
+This will show you what is inside the `README.md` file, if the file was created correctly. Your terminal will look like this:
+
+![](https://opensource.com/sites/default/files/u128651/git_guide7.png)
+
+To tell your computer that Demo is a directory managed by the Git program, enter:
+```
+git init
+```
+
+Then, to tell the Git program you care about this file and want to track any changes from this point forward, enter:
+```
+git add README.md
+```
+
+### Step 4: Make a commit
+
+So far you've created a file and told Git about it, and now it's time to create a commit. A commit can be thought of as a milestone. Every time you accomplish some work, you can write a Git commit to store that version of your file, so you can go back later and see what it looked like at that point in time. Whenever you make a change to your file, you create a new version of that file, different from the previous one.
+
+To make a commit, enter:
+```
+git commit -m "first commit"
+```
+
+That's it! You just created a Git commit and included a message that says first commit. You must always write a message with a commit; it not only helps you identify a commit, but it also enables you to understand what you did with the file at that point. So tomorrow, if you add a new piece of code in your file, you can write a commit message that says, Added new code, and when you come back in a month to look at your commit history or Git log (the list of commits), you will know what you changed in the files.
+
+### Step 5: Connect your GitHub repo with your computer
+
+Now, it's time to connect your computer to GitHub with the command:
+```
+git remote add origin https://github.com/<your_username>/Demo.git
+```
+
+Let's look at this command step by step. We are telling Git to add a `remote` called `origin` with the address `https://github.com/<your_username>/Demo.git` (i.e., the URL of your Git repo on GitHub.com). This allows you to interact with your Git repository on GitHub.com by typing `origin` instead of the full URL, and Git will know where to send your code. Why `origin`? Well, you can name it anything else if you'd like.
+
+Now we have connected our local copy of the Demo repository to its remote counterpart on GitHub.com. Your terminal looks like this:
+
+![](https://opensource.com/sites/default/files/u128651/git_guide8.png)
+
+Now that we have added the remote, we can push our code (i.e., upload our `README.md` file) to GitHub.com with:
+```
+git push -u origin master
+```
+
+Once you are done, your terminal will look like this:
+
+![](https://opensource.com/sites/default/files/u128651/git_guide9.png)
+
+And if you go to `https://github.com/<your_username>/Demo` you will see something like this:
+
+![](https://opensource.com/sites/default/files/u128651/git_guide10.png)
+
+That's it! You have created your first GitHub repo, connected it to your computer, and pushed (or uploaded) a file from your computer to your repository called Demo on GitHub.com. Next time, I will write about Git cloning (downloading your code from GitHub to your computer), adding new files, modifying existing files, and pushing (uploading) files to GitHub.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/step-step-guide-git
+
+作者:[Kedar Vijay Kulkarni][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://opensource.com/users/kkulkarn
+[1]:https://opensource.com/resources/what-is-git
+[2]:https://opensource.com/life/15/11/short-introduction-github
+[3]:https://github.com/
+[4]:https://www.linuxbabe.com/linux-server/install-git-verion-control-on-linux-debianubuntufedoraarchlinux#crt-2
diff --git a/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md b/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md
new file mode 100644
index 0000000000..817931c2a4
--- /dev/null
+++ b/sources/tech/20180125 Keep Accurate Time on Linux with NTP.md
@@ -0,0 +1,146 @@
+Keep Accurate Time on Linux with NTP
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/usno-amc.jpg?itok=KA8HwI02)
+
+How to keep the correct time and keep your computers synchronized without abusing time servers, using NTP and systemd.
+
+### What Time is It?
+
+Linux is funky when it comes to telling the time. You might think that the `time` command tells the time, but it doesn't, because it is a timer that measures how long a process runs. To get the time, you run the `date` command, and to view more than one date, you use `cal`. Timestamps on files are also a source of confusion as they are typically displayed in two different ways, depending on your distro defaults. This example is from Ubuntu 16.04 LTS:
+```
+$ ls -l
+drwxrwxr-x 5 carla carla 4096 Mar 27 2017 stuff
+drwxrwxr-x 2 carla carla 4096 Dec 8 11:32 things
+-rw-rw-r-- 1 carla carla 626052 Nov 21 12:07 fatpdf.pdf
+-rw-rw-r-- 1 carla carla 2781 Apr 18 2017 oddlots.txt

+```
+
+Some display the year, some display the time, which makes ordering your files rather a mess. The GNU default is that files dated within the last six months display the time instead of the year. I suppose there is a reason for this. If your Linux does this, try `ls -l --time-style=long-iso` to display the timestamps all the same way, sorted alphabetically. See [How to Change the Linux Date and Time: Simple Commands][1] to learn all manner of fascinating ways to manage the time on Linux.
+
+### Check Current Settings
+
+NTP, the network time protocol, is the old-fashioned way of keeping correct time on computers. `ntpd`, the NTP daemon, periodically queries a public time server and adjusts your system time as needed.
It's a simple lightweight protocol that is easy to set up for basic use. Systemd has barged into NTP territory with the `systemd-timesyncd.service`, which acts as a client to `ntpd`. + +Before messing with NTP, let's take a minute to check that current time settings are correct. + +There are (at least) two timekeepers on your system: system time, which is managed by the Linux kernel, and the hardware clock on your motherboard, which is also called the real-time clock (RTC). When you enter your system BIOS, you see the hardware clock time and you can change its settings. When you install a new Linux, and in some graphical time managers, you are asked if you want your RTC set to the UTC (Coordinated Universal Time) zone. It should be set to UTC, because all time zone and daylight savings time calculations are based on UTC. Use the `hwclock` command to check: +``` +$ sudo hwclock --debug +hwclock from util-linux 2.27.1 +Using the /dev interface to the clock. +Hardware clock is on UTC time +Assuming hardware clock is kept in UTC time. +Waiting for clock tick... +...got clock tick +Time read from Hardware Clock: 2018/01/22 22:14:31 +Hw clock time : 2018/01/22 22:14:31 = 1516659271 seconds since 1969 +Time since last adjustment is 1516659271 seconds +Calculated Hardware Clock drift is 0.000000 seconds +Mon 22 Jan 2018 02:14:30 PM PST .202760 seconds + +``` + +"Hardware clock is kept in UTC time" confirms that your RTC is on UTC, even though it translates the time to your local time. If it were set to local time it would report "Hardware clock is kept in local time." + +You should have a `/etc/adjtime` file. If you don't, sync your RTC to system time: +``` +$ sudo hwclock -w + +``` + +This should generate the file, and the contents should look like this example: +``` +$ cat /etc/adjtime +0.000000 1516661953 0.000000 +1516661953 +UTC + +``` + +The new-fangled systemd way is to run `timedatectl`, which does not need root permissions: +``` +$ timedatectl + Local time: Mon 2018-01-22 14:17:51 PST + Universal time: Mon 2018-01-22 22:17:51 UTC + RTC time: Mon 2018-01-22 22:17:51 + Time zone: America/Los_Angeles (PST, -0800) + Network time on: yes +NTP synchronized: yes + RTC in local TZ: no + +``` + +"RTC in local TZ: no" confirms that it is on UTC time. What if it is on local time? There are, as always, multiple ways to change it. The easy way is with a nice graphical configuration tool, like YaST in openSUSE. You can use `timedatectl`: +``` +$ timedatectl set-local-rtc 0 +``` + +Or edit `/etc/adjtime`, replacing UTC with LOCAL. + +### systemd-timesyncd Client + +Now I'm tired, and we've just gotten to the good part. Who knew timekeeping was so complex? We haven't even scratched the surface; read `man 8 hwclock` to get an idea of how time is kept on computers. + +Systemd provides the `systemd-timesyncd.service` client, which queries remote time servers and adjusts your system time. Configure your servers in `/etc/systemd/timesyncd.conf`. Most Linux distributions provide a default configuration that points to time servers that they maintain, like Fedora: +``` +[Time] +#NTP= +#FallbackNTP=0.fedora.pool.ntp.org 1.fedora.pool.ntp.org + +``` + +You may enter any other servers you desire, such as your own local NTP server, on the `NTP=` line in a space-delimited list. (Remember to uncomment this line.) Anything you put on the `NTP=` line overrides the fallback. + +What if you are not using systemd? Then you need only NTP. 
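+
+Before we leave systemd behind, here is a short sketch recapping the client-side sequence with `systemd-timesyncd`; the pool servers shown are just the Fedora defaults quoted above, so substitute your own:
+```
+# 1. Point the client at your preferred servers in
+#    /etc/systemd/timesyncd.conf (uncomment the NTP= line):
+#      [Time]
+#      NTP=0.fedora.pool.ntp.org 1.fedora.pool.ntp.org
+
+# 2. Restart the client and make sure NTP synchronization is enabled:
+sudo systemctl restart systemd-timesyncd
+sudo timedatectl set-ntp true
+
+# 3. Verify; look for "NTP synchronized: yes" in the output:
+timedatectl
+```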
+ +### Setting up NTP Server and Client + +It is a good practice to set up your own LAN NTP server, so that you are not pummeling public NTP servers from all of your computers. On most Linuxes NTP comes in the `ntp` package, and most of them provide `/etc/ntp.conf` to configure the service. Consult [NTP Pool Time Servers][2] to find the NTP server pool that is appropriate for your region. Then enter 4-5 servers in your `/etc/ntp.conf` file, with each server on its own line: +``` +driftfile /var/ntp.drift +logfile /var/log/ntp.log +server 0.europe.pool.ntp.org +server 1.europe.pool.ntp.org +server 2.europe.pool.ntp.org +server 3.europe.pool.ntp.org + +``` + +The `driftfile` tells `ntpd` where to store the information it needs to quickly synchronize your system clock with the time servers at startup, and your logs should have their own home instead of getting dumped into the syslog. Use your Linux distribution defaults for these files if it provides them. + +Now start the daemon; on most Linuxes this is `sudo systemctl start ntpd`. Let it run for a few minutes, then check its status: +``` +$ ntpq -p + remote refid st t when poll reach delay offset jitter +============================================================== ++dev.smatwebdesi 192.168.194.89 3 u 25 64 37 92.456 -6.395 18.530 +*chl.la 127.67.113.92 2 u 23 64 37 75.175 8.820 8.230 ++four0.fairy.mat 35.73.197.144 2 u 22 64 37 116.272 -10.033 40.151 +-195.21.152.161 195.66.241.2 2 u 27 64 37 107.559 1.822 27.346 + +``` + +I have no idea what any of that means, other than your daemon is talking to the remote time servers, and that is what you want. To permanently enable it, run `sudo systemctl enable ntpd`. If your Linux doesn't use systemd then it is your homework to figure out how to run `ntpd`. + +Now you can set up `systemd-timesyncd` on your other LAN hosts to use your local NTP server, or install NTP on them and enter your local server in their `/etc/ntp.conf` files. + +NTP servers take a beating, and demand continually increases. You can help by running your own public NTP server. Come back next week to learn how. + +Learn more about Linux through the free ["Introduction to Linux" ][3]course from The Linux Foundation and edX. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/1/keep-accurate-time-linux-ntp + +作者:[CARLA SCHRODER][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder +[1]:https://www.linux.com/learn/how-change-linux-date-and-time-simple-commands +[2]:http://support.ntp.org/bin/view/Servers/NTPPoolServers +[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180125 Linux whereis Command Explained for Beginners (5 Examples).md b/sources/tech/20180125 Linux whereis Command Explained for Beginners (5 Examples).md new file mode 100644 index 0000000000..45107b050c --- /dev/null +++ b/sources/tech/20180125 Linux whereis Command Explained for Beginners (5 Examples).md @@ -0,0 +1,108 @@ +Linux whereis Command Explained for Beginners (5 Examples) +====== + +Sometimes, while working on the command line, we just need to quickly find out the location of the binary file for a command. 
Yes, the [find][1] command is an option in this case, but it's a bit time consuming and will likely produce some non-desired results as well. There's a specific command that's designed for this purpose: **whereis**. + +In this article, we will discuss the basics of this command using some easy to understand examples. But before we do that, it's worth mentioning that all examples in this tutorial have been tested on Ubuntu 16.04LTS. + +### Linux whereis command + +The whereis command lets users locate binary, source, and manual page files for a command. Following is its syntax: + +``` +whereis [options] [-BMS directory... -f] name... +``` + +And here's how the tool's man page explains it: +``` +whereis locates the binary, source and manual files for the specified command names. The supplied +names are first stripped of leading pathname components and any (single) trailing extension of the +form .ext (for example: .c) Prefixes of s. resulting from use of source code control are also dealt +with. whereis then attempts to locate the desired program in the standard Linux places, and in the +places specified by $PATH and $MANPATH. +``` + +The following Q&A-styled examples should give you a good idea on how the whereis command works. + +### Q1. How to find location of binary file using whereis? + +Suppose you want to find the location for, let's say, the whereis command itself. Then here's how you can do that: + +``` +whereis whereis +``` + +[![How to find location of binary file using whereis][2]][3] + +Note that the first path in the output is what you are looking for. The whereis command also produces paths for manual pages and source code (if available, which isn't in this case). So the second path you see in the output above is the path to the whereis manual file(s). + +### Q2. How to specifically search for binaries, manuals, or source code? + +If you want to search specifically for, say binary, then you can use the **-b** command line option. For example: + +``` +whereis -b cp +``` + +[![How to specifically search for binaries, manuals, or source code][4]][5] + +Similarly, the **-m** and **-s** options are used in case you want to find manuals and sources. + +### Q3. How to limit whereis search as per requirement? + +By default whereis tries to find files from hard-coded paths, which are defined with glob patterns. However, if you want, you can limit the search using specific command line options. For example, if you want whereis to only search for binary files in /usr/bin, then you can do this using the **-B** command line option. + +``` +whereis -B /usr/bin/ -f cp +``` + +**Note** : Since you can pass multiple paths this way, the **-f** command line option terminates the directory list and signals the start of file names. + +Similarly, if you want to limit manual or source searches, you can use the **-M** and **-S** command line options. + +### Q4. How to see paths that whereis uses for search? + +There's an option for this as well. Just run the command with **-l**. + +``` +whereis -l +``` + +Here is the list (partial) it produced for us: + +[![How to see paths that whereis uses for search][6]][7] + +### Q5. How to find command names with unusual entries? + +For whereis, a command becomes unusual if it does not have just one entry of each explicitly requested type. For example, commands with no documentation available, or those with documentation in multiple places are considered unusual. 
The **-u** command line option, when used, makes whereis show the command names that have unusual entries. + +For example, the following command should display files in the current directory which have no documentation file, or more than one. + +``` +whereis -m -u * +``` + +### Conclusion + +Agreed, whereis is not the kind of command line tool that you'll require very frequently. But when the situation arises, it definitely makes your life easy. We've covered some of the important command line options the tool offers, so do practice them. For more info, head to its [man page][8]. + + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/linux-whereis-command/ + +作者:[Himanshu Arora][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.howtoforge.com +[1]:https://www.howtoforge.com/tutorial/linux-find-command/ +[2]:https://www.howtoforge.com/images/command-tutorial/whereis-basic-usage.png +[3]:https://www.howtoforge.com/images/command-tutorial/big/whereis-basic-usage.png +[4]:https://www.howtoforge.com/images/command-tutorial/whereis-b-option.png +[5]:https://www.howtoforge.com/images/command-tutorial/big/whereis-b-option.png +[6]:https://www.howtoforge.com/images/command-tutorial/whereis-l.png +[7]:https://www.howtoforge.com/images/command-tutorial/big/whereis-l.png +[8]:https://linux.die.net/man/1/whereis diff --git a/translated/talk/20171107 The long goodbye to C.md b/translated/talk/20171107 The long goodbye to C.md deleted file mode 100644 index 4b19be074a..0000000000 --- a/translated/talk/20171107 The long goodbye to C.md +++ /dev/null @@ -1,86 +0,0 @@ -对 C 的漫长的告别 -========================================== - - -这几天来,我就在思考那些能够挑战 C 语言作为系统编程语言堆中的根节点的地位的新潮语言,尤其是 Go 和 Rust。我发现了一个让我震惊的事实 —— 我有着 35 年的 C 语言经验。每周我都要写很多 C 代码,但是我已经忘了我上一次是在什么时候 _创建新的 C 语言项目_ 了。 - -如果你认为这件事情不够震惊,那你可能不是一个系统程序员。我知道有很多程序员使用更高级的语言工作。但是我把大部分时间都花在了深入打磨像 NTPsec , GPSD 以及 giflib 这些东西上。熟练使用 C 语言在这几十年里一直就是我的专长。但是,现在我不仅是不再使用 C 语言写新的项目,而且我都记不清我什么时候开始这样做的了。而且...回望历史,我不认为这是本世纪发生的事情。 - -当你问到我我的五个核心软件开发技能,“C 语言专家” 一定是你最有可能听到的,这件事情对我来说很好。这也激起了我的思考。C 的未来会怎样 ?C 是否正像当年的 COBOL 一样,在辉煌之后,走向落幕? - -我恰好是在 C 语言迅猛发展并把汇编语言以及其他许多编译型语言挤出主流存在的前几年开始编程的。那场过渡大约是在 1982 到 1985 年之间。在那之前,有很多编译型语言来争相吸引程序员的注意力,那些语言中还没有明确的领导者;但是在那之后,小众的语言直接毫无声息的退出舞台。主流的(FORTRAN,Pascal,COBOL)语言则要么只限于老代码,要么就是固守单一领域,再就是在 C 语言的边缘领域顶着愈来愈大的压力苟延残喘。 - -在那以后,这种情形持续了近 30 年。尽管在应用程序开发上出现了新的动向: Java, Perl, Python, 以及许许多多不是很成功的竞争者。起初我很少关注这些语言,这很大一部是是因为在它们的运行时的开销对于当时的实际硬件来说太大。因此,这就使得 C 的成功无可撼动;为了使用之前存在的 C 语言代码,你得使用 C 语言写新代码(一部分脚本语言尝试过大伯这个限制,但是只有 Python 做到了) - -回想起来,我在 1997 年使用脚本语言写应用时本应该注意到这些语言的更重要的意义的。当时我写的是一个帮助图书管理员使用一款叫做 SunSITE 的源码分发式软件,我使用的那个语言,叫做 Perl。 - -这个应用完全是基于文本的,而且只需要以人类能反应过来的速度运行(大概 0.1 秒),因此使用 C 或者别的没有动态内存分配以及字符串类型的语言来写就会显得很傻。但是在当时,我仅仅是把其视为一个试验,我在那时没想到我几乎再也不会在一个新项目的第一个文件里敲下 “int main(int argc, char **argv)” 了。 - -我说“几乎”,主要是因为 1999 年的 [SNG][3].我像那是我最后一个从头开始写的项目。在那之后我的所有新的 C 代码都是为我贡献代码,或者成为维护者的项目而写 —— 比如 GPSD 以及 NTPsec。 - -当年我本不应该使用 C 语言写 SNG 的。因为在那个年代,摩尔定律的快速循环使得硬件愈加便宜,像 Perl 这样的语言的运行也不再是问题。仅仅三年以后,我可能就会毫不犹豫地使用 Python 而不是 C 语言来写 SNG。 - -在 1997 年学习了 Python 这件事对我来说是一道分水岭。这个语言很完美 —— 就像我早年使用的 Lisp 一样,而且 Python 还有很酷的库!还完全绑定了 POSIX!还有一个绝不完犊子的对象系统!Python 没有把 C 语言挤出我的工具箱,但是我很快就习惯了在只要能用 Python 时就写 Python ,而只在必须使用 C 时写 C . 
- -(在此之后,我开始在我的访谈中指出我所谓的 “Perl 的教训” ,也就是任何一个没有和 C 语言语义等价的 POSIX 绑定的语言_都得失败_。在计算机科学的发展史上,作者没有意识到这一点的学术语言的骨骸俯拾皆是。) - -显然,对我来说,,Python 的主要优势之一就是它很简单,当我写 Python 时,我不再需要担心内存管理问题或者会导致吐核的程序崩溃 —— 对于 C 程序员来说,处理这些问题烦的要命。而不那么明显的优势恰好在我更改语言时显现,我在 90 年代末写应用程序和非核心系统服务的代码时为了平衡成本与风险都会倾向于选择具有自动内存管理但是开销更大的语言,以抵消之前提到的 C 语言的缺陷。而在仅仅几年之前(甚至是 1990 年),那些语言的开销还是大到无法承受的;那时摩尔定律还没让硬件产业迅猛发展。 - -与 C 相比更喜欢 Python —— 然后只要是能的话我就会从 C 语言转移到 Python ,这让我的工作的复杂程度降了不少。我开始在 GPSD 以及 NTPsec 里面加入 Python。这就是我们能把 NTP 的代码库大小削减四分之一的原因。 - -但是今天我不是来讲 Python 的。尽管我觉得它在竞争中脱颖而出,Python 也不是在 2000 年之前彻底结束我在新项目上使用 C 语言的原因,在当时任何一个新的学院派的动态语言都可以让我不写 C 语言代码。那件事可能是在我写了很多 Java 之后发生的,这就是另一段时间线了。 - -我写这个回忆录部分原因是我觉得我不特殊,我像在世纪之交,同样的事件也改变了不少 C 语言老手的编码习惯。他们也会和我之前一样,没有发现这一转变。 - -在 2000 年以后,尽管我还在使用 C/C++ 写之前的项目,比如 GPSD ,游戏韦诺之战以及 NTPsec,但是我的所有新项目都是使用 Python 的。 - -有很多程序是在完全无法在 C 语言下写出来的,尤其是 [reposurgeon][4] 以及 [doclifter][5] 这样的项目。由于 C 语言的有限的数据本体以及其脆弱的底层管理,尝试用 C 写的话可能会很恐怖,并注定失败。 - -甚至是对于更小的项目 —— 那些可以在 C 中实现的东西 —— 我也使用 Python 写,因为我不想花不必要的时间以及精力去处理内核转储问题。这种情况一直持续到去年年底,持续到我创建我的第一个 Rust 项目,以及成功写出第一个[使用 Go 语言的项目][6]。 - -如前文所述,尽管我是在讨论我的个人经历,但是我想我的经历体现了时代的趋势。我期待新潮流的出现,而不是仅仅跟随潮流。在 98 年,我是 Python 的早期使用者。来自 [TIOBE][7] 的数据让我在 Go 语言脱胎于公司的实验项目从小众语言火爆的几个月内开始写自己的第一个 Go 语言项目。 - -总而言之:直到现在第一批有可能挑战 C 语言的传统地位的语言才出现。我判断这个的标砖很简单 —— 只要这个语言能让我等 C 语言老手接受不再写 C 的 事实,这个语言才 “有可能” 挑战到 C 语言的地位 —— 来看啊,这有个新编译器,能把 C 转换到新语言,现在你可以让他完成你的_全部工作_了 —— 这样 C 语言的老手就会开心起来。 - -Python 以及和其类似的语言对此做的并不够好。使用 Python 实现 NTPsec(以此举例)可能是个灾难,最终会由于过高的运行时开销以及由于垃圾回收机制导致的延迟变化而烂尾。当写单用户且只需要以人类能接受的速度运行的程序时,使用 Python 很好,但是对于以 _机器的速度_ 运行的程序来说就不总是如此了 —— 尤其是在很高的多用户负载之下。这不只是我自己的判断,起初 Go 存在的主要原因就是 Google ,然后 Python 的众多支持者也来支持这款语言 ——— 他们遭遇了同样的痛点。 - -Go 语言就是为了处理 Python 处理不了的类 C 语言工作而设计的。尽管没有一个全自动语言转换软件让我很是不爽,但是使用 Go 语言来写系统程序对我来说不算麻烦,我发现我写 Go 写的还挺开心的。我的 很多 C 编码技能还可以继续使用,我还收获了垃圾回收机制以及并发编程机制,这何乐而不为? - -([这里][8]有关于我第一次写 Go 的经验的更多信息) - -本来我像把 Rust 也视为 “C 语言要过时了” 的例子,但是在学习这们语言并尝试使用这门语言编程之后,我觉得[这语言现在还不行][9]。也许 5 年以后,它才会成为 C 语言的对手。 - -随着 2017 的临近,我们已经发现了一个相对成熟的语言,其和 C 类似,能够胜任 C 语言的大部分工作场景(我在下面会准确描述),在几年以后,这个语言届的新星可能就会取得成功。 - -这件事意义重大。如果你不长远地回顾历史,你可能看不出来这件事情的伟大性。_三十年了_ —— 这几乎就是我写代码的时间,我们都没有等到 C 语言的继任者。也无法体验在前 C 语言时代的系统编程是什么模样。但是现在我们可以使用两种视角来看待系统编程... 
- -...另一个视角就是下面这个语言。我的一个朋友正在开发一个他称之为 "Cx" 的语言,这个语言在 C 语言上做了很少的改动,使得其能够支持类型安全;他的项目的目的就是要创建一个能够在最少人力参与的情况下把古典 C 语言修改为新语言的程序。我不会指出这位朋友的名字,免得给他太多压力,让他给我做出不切实际的保证,他的实现方法真的很是有意思,我会尽量给他募集资金。 - -现在,除了 C 语言之外,我看到了三种不同的道路。在两年之前,我一种都不会发现。我重复一遍:这件事情意义重大。 - -我是说 C 语言将要灭绝吗?没有,在可预见的未来里,C 语言还会在操作系统的内核以及设备固件的编程的主流语言,在那里,尽力压榨硬件性能的古老命令还在奏效,尽管它可能不是那么安全。 - -现在被攻破的领域就是我之前提到的我经常出没的领域 —— 比如 GPSD 以及 NTPsec ,系统服务以及那些因为历史原因而使用 C 语言写的进程。还有就是以 DNS 服务器以及邮箱 —— 那些得以机器而不是人类的速度运行的系统程序。 - -现在我们可以预见,未来大多数代码都是由具有强大内存安全特性的 C 语言的替代者实现。Go , Rust 或者 Cx ,无论是哪个, C 的存在都将被弱化。如果我现在来实现 NTP ,我可能就会毫不犹豫的使用 Go 语言来实现。 - --------------------------------------------------------------------------------- - -via: http://esr.ibiblio.org/?p=7711 - -作者:[Eric Raymond][a] -译者:[name1e5s](https://github.com/name1e5s) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://esr.ibiblio.org/?author=2 -[1]:http://esr.ibiblio.org/?author=2 -[2]:http://esr.ibiblio.org/?p=7711 -[3]:http://sng.sourceforge.net/ -[4]:http://www.catb.org/esr/reposurgeon/ -[5]:http://www.catb.org/esr/doclifter/ -[6]:http://www.catb.org/esr/loccount/ -[7]:https://www.tiobe.com/tiobe-index/ -[8]:https://blog.ntpsec.org/2017/02/07/grappling-with-go.html -[9]:http://esr.ibiblio.org/?p=7303 diff --git a/translated/talk/20180111 AI and machine learning bias has dangerous implications.md b/translated/talk/20180111 AI and machine learning bias has dangerous implications.md new file mode 100644 index 0000000000..3484b21163 --- /dev/null +++ b/translated/talk/20180111 AI and machine learning bias has dangerous implications.md @@ -0,0 +1,81 @@ +AI 和机器中暗含的算法偏见是怎样形成的,我们又能通过开源社区做些什么 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_goodbadugly.png?itok=ZxaimUWU) + +图片来源:opensource.com + +在我们的世界里,算法无处不在,偏见也是一样。从社会媒体新闻的提供到流式媒体服务的推荐到线上购物,计算机算法,尤其是机器学习算法,已经渗透到我们日常生活的每一个角落。至于偏见,我们只需要参考 2016 年美国大选就可以知道,偏见是怎样在明处与暗处影响着我们的社会。 + +很难想像,我们经常忽略的一点是这二者的交集:计算机算法中存在的偏见。 + +与我们大多数人所认为的相反,科技并不是客观的。 AI 算法和它们的决策程序是由它们的研发者塑造的,他们写入的代码,使用的“[训练][1]”数据还有他们对算法进行[应力测试][2] 的过程,都会影响这些算法今后的选择。这意味着研发者的价值观,偏见和人类缺陷都会反映在软件上。如果我只给实验室中的人脸识别算法提供白人的照片,当遇到不是白人照片时,它[不会认为照片中的是人类][3] 。这结论并不意味着 AI 是“愚蠢的”或是“天真的”,它显示的是训练数据的分布偏差:缺乏多种的脸部照片。这会引来非常严重的后果。 + +这样的例子并不少。全美范围内的[州法院系统][4] 都使用“黑箱子”对罪犯进行宣判。由于训练数据的问题,[这些算法对黑人有偏见][5] ,他们对黑人罪犯会选择更长的服刑期,因此监狱中的种族差异会一直存在。而这些都发生在科技的客观性伪装下,这是“科学的”选择。 + +美国联邦政府使用机器学习算法来计算福利性支出和各类政府补贴。[但这些算法中的信息][6],例如它们的创造者和训练信息,都很难找到。这增加了政府工作人员进行不平等补助金分发操作的几率。 + +算法偏见情况还不止这些。从 Facebook 的新闻算法到医疗系统再到警方使用的相机,我们作为社会的一部分极有可能对这些算法输入各式各样的偏见,性别歧视,仇外思想,社会经济地位歧视,确认偏误等等。这些被输入了偏见的机器会大量生产分配,将种种社会偏见潜藏于科技客观性的面纱之下。 + +这种状况绝对不能再继续下去了。 + +在我们对人工智能进行不断开发研究的同时,需要降低它的开发速度,小心仔细地开发。算法偏见的危害已经足够大了。 + +## 我们能怎样减少算法偏见? + +最好的方式是从算法训练的数据开始审查,根据 [Microsoft 的研究者][2] 所说,这方法很有效。 + +数据分布本身就带有一定的偏见性。编程者手中的美国公民数据分布并不均衡,本地居民的数据多于移民者,富人的数据多于穷人,这是极有可能出现的情况。这种数据的不平均会使 AI 对我们是社会组成得出错误的结论。例如机器学习算法仅仅通过统计分析,就得出“大多数美国人都是富有的白人”这个结论。 + +即使男性和女性的样本在训练数据中等量分布,也可能出现偏见的结果。如果训练数据中所有男性的职业都是 CEO,而所有女性的职业都是秘书(即使现实中男性 CEO 的数量要多于女性),AI 也可能得出女性天生不适合做 CEO 的结论。 + +同样的,大量研究表明,用于执法部门的 AI 在检测新闻中出现的罪犯照片时,结果会 [惊人地偏向][7] 黑人及拉丁美洲裔居民。 + +在训练数据中存在的偏见还有很多其他形式,不幸的是比这里提到的要多得多。但是训练数据只是审查方式的一种,通过“应力测验”找出人类存在的偏见也同样重要。 + +如果提供一张印度人的照片,我们自己的相机能够识别吗?在两名同样水平的应聘者中,我们的 AI 是否会倾向于推荐住在市区的应聘者呢?对于情报中本地白人恐怖分子和伊拉克籍恐怖分子,反恐算法会怎样选择呢?急诊室的相机可以调出儿童的病历吗? + +这些对于 AI 来说是十分复杂的数据,但我们可以通过多项测试对它们进行定义和传达。 + +## 为什么开源很适合这项任务? 
+ +开源方法和开源技术都有着极大的潜力改变算法偏见。 + +现代人工智能已经被开源软件占领,TensorFlow、IBM Watson 还有 [scikit-learn][8] 这类的程序包都是开源软件。开源社区已经证明它能够开发出强健的,经得住严酷测试的机器学习工具。同样的,我相信,开源社区也能开发出消除偏见的测试程序,并将其应用于这些软件中。 + +调试工具如哥伦比亚大学和理海大学推出的 [DeepXplore][9],增强了 AI 应力测试的强度,同时提高了其操控性。还有 [麻省理工学院的计算机科学和人工智能实验室][10]完成的项目,它开发出敏捷快速的样机研究软件,这些应该会被开源社区采纳。 + +开源技术也已经证明了其在审查和分类大组数据方面的能力。最明显的体现在开源工具在数据分析市场的占有率上(Weka , Rapid Miner 等等)。应当由开源社区来设计识别数据偏见的工具,已经在网上发布的大量训练数据组比如 [Kaggle][11]也应当使用这种技术进行识别筛选。 + +开源方法本身十分适合消除偏见程序的设计。内部谈话,私人软件开发及非民主的决策制定引起了很多问题。开源社区能够进行软件公开的谈话,进行大众化,维持好与大众的关系,这对于处理以上问题是十分重要的。如果线上社团,组织和院校能够接受这些开源特质,那么由开源社区进行消除算法偏见的机器设计也会顺利很多。 + +## 我们怎样才能够参与其中? + +教育是一个很重要的环节。我们身边有很多还没意识到算法偏见的人,但算法偏见在立法,社会公正,政策及更多领域产生的影响与他们息息相关。让这些人知道算法偏见是怎样形成的和它们带来的重要影响是很重要的,因为想要改变目前是局面,从我们自身做起是唯一的方法。 + +对于我们中间那些与人工智能一起工作的人来说,这种沟通尤其重要。不论是人工智能的研发者,警方或是科研人员,当他们为今后设计人工智能时,应当格外意识到现今这种偏见存在的危险性,很明显,想要消除人工智能中存在的偏见,就要从意识到偏见的存在开始。 + +最后,我们需要围绕 AI 伦理化建立并加强开源社区。不论是需要建立应力实验训练模型,软件工具,或是从千兆字节的训练数据中筛选,现在已经到了我们利用开源方法来应对数字化时代最大的威胁的时间了。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/how-open-source-can-fight-algorithmic-bias + +作者:[Justin Sherman][a] +译者:[Valoniakim](https://github.com/Valoniakim) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/justinsherman +[1]:https://www.crowdflower.com/what-is-training-data/ +[2]:https://medium.com/microsoft-design/how-to-recognize-exclusion-in-ai-ec2d6d89f850 +[3]:https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms +[4]:https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/ +[5]:https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing +[6]:https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3012499 +[7]:https://www.hivlawandpolicy.org/sites/default/files/Race%20and%20Punishment-%20Racial%20Perceptions%20of%20Crime%20and%20Support%20for%20Punitive%20Policies%20%282014%29.pdf +[8]:http://scikit-learn.org/stable/ +[9]:https://arxiv.org/pdf/1705.06640.pdf +[10]:https://www.csail.mit.edu/research/understandable-deep-networks +[11]:https://www.kaggle.com/datasets diff --git a/translated/tech/20070129 How To Debug a Bash Shell Script Under Linux or UNIX.md b/translated/tech/20070129 How To Debug a Bash Shell Script Under Linux or UNIX.md new file mode 100644 index 0000000000..4cc979015b --- /dev/null +++ b/translated/tech/20070129 How To Debug a Bash Shell Script Under Linux or UNIX.md @@ -0,0 +1,246 @@ +如何在 Linux 或者 UNIX 下调试 Bash Shell 脚本 +====== +来自我的邮箱: +**我写了一个你好世界的小脚本。我如何能调试运行在 Linux 或者类 UNIX 的系统上的 bash shell 脚本呢?** +这是 Linux / Unix 系统管理员或新用户最常问的问题。shell 脚本调试可能是一项繁琐的工作(不容易阅读)。调试 shell 脚本有多种方法。 + +您需要传递 -X 或 -V 参数,以在 bash shell 中浏览每行代码。 + +[![如何在 Linux 或者 UNIX 下调试 Bash Shell 脚本][1]][1] + +让我们看看如何使用各种方法调试 Linux 和 UNIX 上运行的脚本。 + +``` +### -x 选项来调试脚本 + +用 -x 选项来运行脚本 +``` +$ bash -x script-name +$ bash -x domains.sh +``` + +### 使用 set 内置命令 + +bash shell 提供调试选项,可以打开或关闭使用 [set 命令][2]: + + * **set -x** : 显示命令及其执行时的参数。 + * **set -v** : 显示 shell 输入行作为它们读取的 + +可以在shell脚本本身中使用上面的两个命令: +``` +#!/bin/bash +clear + +# turn on debug mode +set -x +for f in * +do + file $f +done +# turn OFF debug mode +set +x +ls +# more commands +``` + +你可以代替 [标准 Shebang][3] 行: +`#!/bin/bash` +用一下代码(用于调试): +`#!/bin/bash -xv` + +### 使用智能调试功能 + +首先添加一个叫做 _DEBUG 的特殊变量。当你需要调试脚本的时候,设置 _DEBUG 为 'on': +`_DEBUG="on"` + + +在脚本的开头放置以下函数: +``` +function DEBUG() +{ + [ "$_DEBUG" == "on" ] && $@ +} +``` + 
+
+现在,只要你需要调试,只需使用 DEBUG 函数如下:
+`DEBUG echo "File is $filename"`
+或者
+```
+DEBUG set -x
+Cmd1
+Cmd2
+DEBUG set +x
+```
+
+调试完成后(在把你的脚本移动到生产环境之前),将 _DEBUG 设置为 'off'。不需要删除调试行。
+`_DEBUG="off" # 设置为非 'on' 的任何字符`
+
+示例脚本:
+```
+#!/bin/bash
+_DEBUG="on"
+function DEBUG()
+{
+ [ "$_DEBUG" == "on" ] && $@
+}

+DEBUG echo 'Reading files'
+for i in *
+do
+ grep 'something' $i > /dev/null
+ [ $? -eq 0 ] && echo "Found in $i file"
+done
+DEBUG set -x
+a=2
+b=3
+c=$(( $a + $b ))
+DEBUG set +x
+echo "$a + $b = $c"
+```
+
+保存并关闭文件。运行脚本如下:
+`$ ./script.sh`
+输出:
+```
+Reading files
+Found in xyz.txt file
++ a=2
++ b=3
++ c=5
++ DEBUG set +x
++ '[' on == on ']'
++ set +x
+2 + 3 = 5

+```
+
+现在设置 DEBUG 为关闭(你需要编辑文件):
+`_DEBUG="off"`
+运行脚本:
+`$ ./script.sh`
+输出:
+```
+Found in xyz.txt file
+2 + 3 = 5

+```
+
+以上是一个简单但非常有效的技术。还可以尝试使用 DEBUG 作为别名替代函数。
+
+### 调试 Bash Shell 的常见错误
+
+Bash 或者 sh 或者 ksh 会在屏幕上给出各种错误信息,但在很多情况下,错误信息可能不提供详细的原因。
+
+#### 跳过在文件上应用执行权限
+
+当你 [编写你的第一个 hello world 脚本][4] 时,你可能会得到一个错误,如下所示:
+`bash: ./hello.sh: Permission denied`
+使用 chmod 命令设置执行权限:
+```
+$ chmod +x hello.sh
+$ ./hello.sh
+$ bash hello.sh
+```
+
+#### 意外的文件结束错误
+
+如果你收到“意外的文件结束”(unexpected end of file)错误消息,请打开脚本文件,并确保它的引号成对出现。在这个例子中,echo 语句有一个开头引号,但没有结束引号:
+```
+#!/bin/bash


+...
+....


+echo 'Error: File not found
+        ^^^^^^^
+        missing quote
+```
+
+还要确保你检查缺少的括号和大括号 ({}):
+```
+#!/bin/bash
+.....
+[ ! -d $DIRNAME ] && { echo "Error: Chroot dir not found"; exit 1;
+                                                           ^^^^^^^^^^^^^
+                                                           missing brace }
+...
+```
+
+#### 丢失像 fi、esac、;; 等关键字
+如果你缺少了结尾的关键字,如 fi 或 ;;,你会得到一个类似 “XXX 意外” 的错误。因此,确保所有嵌套的 if 和 case 语句都以适当的关键字结束;相关语法要求请参考手册页。在本例中,缺少 fi:
+```
+#!/bin/bash
+echo "Starting..."
+....
+if [ $1 -eq 10 ]
+then
+   if [ $2 -eq 100 ]
+   then
+      echo "Do something"
+fi
+
+for f in $files
+do
+  echo $f
+done
+
+# 注意 fi 已经丢失
+```
+
+#### 在 Windows 和 UNIX 系统之间移动或编辑 shell 脚本
+
+不要在 Linux 上创建脚本后又拿到 Windows 上去编辑。另一个常见问题是在 Windows 10 上编辑 shell 脚本,然后将其移动到 UNIX 服务器上。这会由于回车符(DOS CR-LF)导致 “command not found” 之类的错误。你可以使用下列命令 [将 DOS 换行符(CR-LF)转换为 Unix/Linux 格式][5]:
+`dos2unix my-script.sh`
+
+### 提示 1 - 把调试信息输出到标准错误
+[标准错误][6] 是默认的错误输出设备,所有的系统错误信息都写到这里。因此,把(调试和错误)消息发送到这个默认的错误设备上是个好主意:
+```
+# 写错误到标准输出
+echo "Error: $1 file not found"
+#
+# 写错误到标准错误(注意 echo 命令末尾的 1>&2)
+#
+echo "Error: $1 file not found" 1>&2
+```
+
+### 提示 2 - 在使用 vim 文本编辑器时,打开语法高亮
+大多数现代文本编辑器都可以设置语法高亮选项。这对于发现语法问题、避免引号未闭合之类的常见错误非常有用。不同的语法元素会以不同的颜色显示,这让 shell 脚本的结构一目了然,语法错误在视觉上也更容易分辨。高亮并不影响文本本身的含义,它只是为你的阅读提供便利。在这个例子中,我的脚本使用了 vim 语法高亮:
+[![在 Linux 或者 UNIX 上使用 Vim 语法高亮特性调试 Bash Shell 脚本][7]][7]
+
+### 提示 3 - 使用 shellcheck 检查脚本
+[shellcheck 是一个用于静态分析 shell 脚本的工具][8],可以使用它来查找 shell 脚本中的错误,它是用 Haskell 编写的。你可以使用这个工具得到警告和建议。让我们看看如何在 Linux 或类 UNIX 系统上安装和使用 shellcheck 来改善你的 shell 脚本,避免错误,提高效率。
+
+### 关于作者
+
+作者是 nixCraft 的创建者,一位经验丰富的系统管理员,也是一名 Linux 操作系统/UNIX shell 脚本培训师。他的客户遍布全球各个行业,包括 IT、教育、国防和航天研究以及非营利部门。你可以在 [推特][9]、[脸谱网][10]、[谷歌+][11] 上关注他。
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/tips/debugging-shell-script.html
+
+作者:[Vivek Gite][a]
+译者:[zjon](https://github.com/zjon)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/tips/wp-content/uploads/2007/01/How-to-debug-a-bash-shell-script-on-Linux-or-Unix.jpg
+[2]:https://bash.cyberciti.biz/guide/Set_command
+[3]:https://bash.cyberciti.biz/guide/Shebang
+[4]:https://www.cyberciti.biz/faq/hello-world-bash-shell-script/
+[5]:https://www.cyberciti.biz/faq/howto-unix-linux-convert-dos-newlines-cr-lf-unix-text-format/
+[6]:https://bash.cyberciti.biz/guide/Standard_error
+[7]:https://www.cyberciti.biz/media/new/tips/2007/01/bash-vim-debug-syntax-highlighting.png
+[8]:https://www.cyberciti.biz/programming/improve-your-bashsh-shell-script-with-shellcheck-lint-script-analysis-tool/
+[9]:https://twitter.com/nixcraft
+[10]:https://facebook.com/nixcraft
+[11]:https://plus.google.com/+CybercitiBiz
+
+
diff --git a/translated/tech/20090127 Anatomy of a Program in Memory.md b/translated/tech/20090127 Anatomy of a Program in Memory.md
deleted file mode 100644
index aa478535f4..0000000000
--- a/translated/tech/20090127 Anatomy of a Program in Memory.md
+++ /dev/null
@@ -1,84 +0,0 @@
-剖析内存中的程序
-============================================================
-
-内存管理是一个操作系统的核心任务;它对程序员和系统管理员来说也是至关重要的。在接下来的几篇文章中,我将从实践出发着眼于内存管理,并深入到它的内部结构。尽管这些概念很普通,示例也大都来自于 32 位 x86 架构的 Linux 和 Windows 上。第一篇文章描述了在内存中程序如何分布。
-
-在一个多任务操作系统中的每个进程都运行在它自己的内存“沙箱”中。这个沙箱是一个虚拟地址空间,它在 32 位的模式中它总共有 4GB 的内存地址块。这些虚拟地址是通过内核页表映射到物理地址的,并且这些虚拟地址是由操作系统内核来维护,进而被进程所消费的。每个进程都有它自己的一组页表,但是在它这里仅是一个钩子。一旦虚拟地址被启用,这些虚拟地址将被应用到这台电脑上的 _所有软件_,_包括内核本身_。因此,一部分虚拟地址空间必须保留给内核使用:
-
-![Kernel/User Memory Split](http://static.duartes.org/img/blogPosts/kernelUserMemorySplit.png)
-
-但是,这并不说内核就使用了很多的物理内存,恰恰相反,它只使用了很少一部分用于去做地址映射。内核空间在内核页表中被标记为仅 [特权代码][1] (ring 2 或更低)独占使用,因此,如果一个用户模式的程序尝试去访问它,将触发一个页面故障错误。在 Linux 中,内核空间是始终存在的,并且在所有进程中都映射相同的物理内存。内核代码和数据总是可寻址的,准备随时去处理中断或者系统调用。相比之下,用户模式中的地址空间,在每次进程切换时都会发生变化:
-
-![Process Switch Effects on Virtual Memory](http://static.duartes.org/img/blogPosts/virtualMemoryInProcessSwitch.png)
-
-蓝色的区域代表映射到物理地址的虚拟地址空间,白色的区域是尚未映射的部分。在上面的示例中,Firefox 
因它令人惊奇的“狂吃”内存而使用了大量的虚拟内存空间。在地址空间中不同的组合对应了不同的内存段,像堆、栈、等等。请注意,这些段只是一系列内存地址的简化表示,它与 [Intel 类型的段][2] _并没有任何关系_ 。不过,这是一个在 Linux 中的标准的段布局: - -![Flexible Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxFlexibleAddressSpaceLayout.png) - -当计算是快乐、安全、讨人喜欢的时候,在机器中的几乎每个进程上,它们的起始虚拟地址段都是完全相同的。这将使远程挖掘安全漏洞变得容易。一个漏洞利用经常需要去引用绝对内存位置:在栈中的一个地址,这个地址可能是一个库的函数,等等。远程攻击必须要“盲选”这个地址,因为地址空间都是相同的。当攻击者们这样做的时候,人们就会受到伤害。因此,地址空间随机化开始流行起来。Linux 随机化栈、内存映射段、以及在堆上增加起始地址偏移量。不幸的是,32 位的地址空间是非常拥挤的,为地址空间随机化留下的空间不多,因此 [妨碍了地址空间随机化的效果][6]。 - -在进程地址空间中最高的段是栈,在大多数编程语言中它存储本地变量和函数参数。调用一个方法或者函数将推送一个新的栈帧到这个栈。当函数返回时这个栈帧被删除。这个简单的设计,可能是因为数据严格遵循 [后进先出(LIFO)][7] 的次序,这意味着跟踪栈内容时不需要复杂的数据结构 – 一个指向栈顶的简单指针就可以做到。推送和弹出也因此而非常快且准确。也可能是,持续的栈区重用倾向于在 [CPU 缓存][8] 中保持活跃的栈内存,这样可以加快访问速度。进程中的每个线程都有它自己的栈。 - -向栈中推送更多的而不是刚合适的数据可能会耗尽栈的映射区域。这将触发一个页面故障,在 Linux 中它是通过 [expand_stack()][9] 来处理的,它会去调用 [acct_stack_growth()][10] 来检查栈的增长是否正常。如果栈的大小低于 RLIMIT_STACK 的值(一般是 8MB 大小),那么这是一个正常的栈增长和程序的合理使用,否则可能是发生了未知问题。这是一个栈大小按需调节的常见机制。但是,栈的大小达到了上述限制,将会发生一个栈溢出,并且,程序将会收到一个段故障错误。当映射的栈为满足需要而扩展后,在栈缩小时,映射区域并不会收缩。就像美国联邦政府的预算一样,它只会扩张。 - -动态栈增长是 [唯一例外的情况][11] ,当它去访问一个未映射的内存区域,如上图中白色部分,是允许的。除此之外的任何其它访问未映射的内存区域将在段故障中触发一个页面故障。一些映射区域是只读的,因此,尝试去写入到这些区域也将触发一个段故障。 - -在栈的下面,有内存映射段。在这里,内核将文件内容直接映射到内存。任何应用程序都可以通过 Linux 的 [mmap()][12] 系统调用( [实现][13])或者 Windows 的 [CreateFileMapping()][14] / [MapViewOfFile()][15] 来请求一个映射。内存映射是实现文件 I/O 的方便高效的方式。因此,它经常被用于加载动态库。有时候,也被用于去创建一个并不匹配任何文件的匿名内存映射,这种映射经常被用做程序数据的替代。在 Linux 中,如果你通过 [malloc()][16] 去请求一个大的内存块,C 库将会创建这样一个匿名映射而不是使用堆内存。这里的‘大’ 表示是超过了MMAP_THRESHOLD 设置的字节数,它的缺省值是 128 kB,可以通过 [mallopt()][17] 去调整这个设置值。 - -接下来讲的是“堆”,就在我们接下来的地址空间中,堆提供运行时内存分配,像栈一样,但又不同于栈的是,它分配的数据生存期要长于分配它的函数。大多数编程语言都为程序去提供堆管理支持。因此,满足内存需要是编程语言运行时和内核共同来做的事情。在 C 中,堆分配的接口是 [malloc()][18] ,它是个用户友好的接口,然而在编程语言的垃圾回收中,像 C# 中,这个接口使用 new 关键字。 - -如果在堆中有足够的空间去满足内存请求,它可以由编程语言运行时来处理内存分配请求,而无需内核参与。否则将通过 [brk()][19] 系统调用([实现][20])来扩大堆以满足内存请求所需的大小。堆的管理是比较 [复杂的][21],在面对我们程序的混乱分配模式时,它通过复杂的算法,努力在速度和内存使用效率之间取得一种平衡。服务一个堆请求所需要的时间可能是非常可观的。实时系统有一个 [特定用途的分配器][22] 去处理这个问题。堆也会出现  _碎片化_ ,如下图所示: - -![Fragmented Heap](http://static.duartes.org/img/blogPosts/fragmentedHeap.png) - -最后,我们取得了内存的低位段:BSS、数据、以及程序文本。在 C 中,静态(全局)变量的内容都保存在 BSS 和数据中。它们之间的不同之处在于,BSS 保存 _未初始化的_  静态变量的内容,它的值在源代码中并没有被程序员设置。BSS 内存区域是_匿名_的:它没有映射到任何文件上。如果你在程序中写这样的语句 static int cntActiveUserscntActiveUsers 的内容就保存在 BSS 中。 - -反过来,数据段,用于保存在源代码中静态变量_初始化后_的内容。这个内存区域是_非匿名_的。它映射到程序的二进值镜像上的一部分,这个二进制镜像包含在源代码中给定初始化值的静态变量内容。因此,如果你在程序中写这样的语句 static int cntWorkerBees = 10,那么,cntWorkerBees 的内容就保存在数据段中,并且初始值为 10。尽管可以通过数据段映射到一个文件,但是这是一个私有内存映射,意味着,如果在内存中这个文件发生了变化,它并不会将这种变化反映到底层的文件上。必须是这样的,否则,分配的全局变量将会改变你磁盘上的二进制文件镜像,这种做法就太不可思议了! 
- -用图去展示一个数据段是很困难的,因为它使用一个指针。在那种情况下,指针 gonzo 的_内容_ – 保存在数据段上的一个 4 字节的内存地址。它并没有指向一个真实的字符串。而这个字符串存在于文本段中,文本段是只读的,它用于保存你的代码中的类似于字符串常量这样的内容。文本段也映射你的内存中的库,但是,如果你的程序写入到这个区域,将会触发一个段故障错误。尽管在 C 中,它比不上从一开始就避免这种指针错误那么有效,但是,这种机制也有助于避免指针错误。这里有一个展示这些段和示例变量的图: - -![ELF Binary Image Mapped Into Memory](http://static.duartes.org/img/blogPosts/mappingBinaryImage.png) - -你可以通过读取 /proc/pid_of_process/maps 文件来检查 Linux 进程中的内存区域。请记住,一个段可以包含很多的区域。例如,每个内存映射的文件一般都在 mmap 段中的它自己的区域中,而动态库有类似于BSS 和数据一样的额外的区域。下一篇文章中我们将详细说明“区域(area)”的真正含义是什么。此外,有时候人们所说的“数据段(data segment)”是指“数据 + BSS + 堆”。 - -你可以使用 [nm][23] 和 [objdump][24] 命令去检查二进制镜像,去显示它们的符号、地址、段、等等。最终,在 Linux 中上面描述的虚拟地址布局是一个“弹性的”布局,这就是这几年来的缺省情况。它假设 RLIMIT_STACK 有一个值。如果没有值的话,Linux 将恢复到如下所示的“经典” 布局: - -![Classic Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxClassicAddressSpaceLayout.png) - -这就是虚拟地址空间布局。接下来的文章将讨论内核如何对这些内存区域保持跟踪、内存映射、文件如何读取和写入、以及内存使用数据的意义。 - --------------------------------------------------------------------------------- - -via: http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/ - -作者:[gustavo ][a] -译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://duartes.org/gustavo/blog/about/ -[1]:http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection -[2]:http://duartes.org/gustavo/blog/post/memory-translation-and-segmentation -[3]:http://lxr.linux.no/linux+v2.6.28.1/fs/binfmt_elf.c#L542 -[4]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/mm/mmap.c#L84 -[5]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/kernel/process_32.c#L729 -[6]:http://www.stanford.edu/~blp/papers/asrandom.pdf -[7]:http://en.wikipedia.org/wiki/Lifo -[8]:http://duartes.org/gustavo/blog/post/intel-cpu-caches -[9]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1716 -[10]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1544 -[11]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/mm/fault.c#L692 -[12]:http://www.kernel.org/doc/man-pages/online/pages/man2/mmap.2.html -[13]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/kernel/sys_i386_32.c#L27 -[14]:http://msdn.microsoft.com/en-us/library/aa366537(VS.85).aspx -[15]:http://msdn.microsoft.com/en-us/library/aa366761(VS.85).aspx -[16]:http://www.kernel.org/doc/man-pages/online/pages/man3/malloc.3.html -[17]:http://www.kernel.org/doc/man-pages/online/pages/man3/undocumented.3.html -[18]:http://www.kernel.org/doc/man-pages/online/pages/man3/malloc.3.html -[19]:http://www.kernel.org/doc/man-pages/online/pages/man2/brk.2.html -[20]:http://lxr.linux.no/linux+v2.6.28.1/mm/mmap.c#L248 -[21]:http://g.oswego.edu/dl/html/malloc.html -[22]:http://rtportal.upv.es/rtmalloc/ -[23]:http://manpages.ubuntu.com/manpages/intrepid/en/man1/nm.1.html -[24]:http://manpages.ubuntu.com/manpages/intrepid/en/man1/objdump.1.html diff --git a/translated/tech/20090211 Page Cache the Affair Between Memory and Files.md b/translated/tech/20090211 Page Cache the Affair Between Memory and Files.md new file mode 100644 index 0000000000..644cb1c33b --- /dev/null +++ b/translated/tech/20090211 Page Cache the Affair Between Memory and Files.md @@ -0,0 +1,76 @@ +[页面缓存,内存和文件之间的那些事][1] +============================================================ + + +上一篇文章中我们学习了内核怎么为一个用户进程 [管理虚拟内存][2],而忽略了文件和 I/O。这一篇文章我们将专门去讲这个重要的主题 —— 页面缓存。文件和内存之间的关系常常很不好去理解,而它们对系统性能的影响却是非常大的。 + +在面对文件时,有两个很重要的问题需要操作系统去解决。第一个是相对内存而言,慢的让人发狂的硬盘驱动器,[尤其是磁盘查找][3]。第二个是需要将文件内容一次性地加载到物理内存中,以便程序间共享文件内容。如果你在 Windows 中使用 [进程浏览器][4] 
去查看它的进程,你将会看到每个进程中加载了大约 ~15MB 的公共 DLLs。我的 Windows 机器上现在大约运行着 100 个进程,因此,如果不共享的话,仅这些公共的 DLLs 就要使用高达 ~1.5 GB 的物理内存。如果是那样的话,那就太糟糕了。同样的,几乎所有的 Linux 进程都需要 [ld.so][5] 和 libc,加上其它的公共库,它们占用的内存数量也不是一个小数目。
+
+幸运的是,所有的这些问题都用一个办法解决了:页面缓存 —— 保存在内存中的页面大小的文件块。为了用图去说明页面缓存,我捏造出一个名为 Render 的 Linux 程序,它打开了文件 scene.dat,并且一次读取 512 字节,并将文件内容存储到一个分配的堆块中。第一次读取的过程如下:
+
+![Reading and the page cache](http://static.duartes.org/img/blogPosts/readFromPageCache.png)
+
+读取完 12KB 的文件内容以后,Render 程序的堆和相关的页面帧如下图所示:
+
+![Non-mapped file read](http://static.duartes.org/img/blogPosts/nonMappedFileRead.png)
+
+它看起来很简单,其实这一过程做了很多的事情。首先,虽然这个程序使用了普通的读取调用,但是,已经有三个 4KB 的页面帧将文件 scene.dat 的一部分内容保存在了页面缓存中。虽然有时让人觉得很惊奇,但是,普通的文件 I/O 就是这样通过页面缓存来进行的。在 x86 架构的 Linux 中,内核将文件认为是一系列的 4KB 大小的块。如果你从文件中读取单个字节,包含这个字节的整个 4KB 块将被从磁盘中读入到页面缓存中。这是可以理解的,因为磁盘通常是持续吞吐的,并且程序读取的磁盘区域也不仅仅只保存几个字节。页面缓存知道文件中的每个 4KB 块的位置,在上图中用 #0、#1、等等来描述。Windows 也是类似的,它使用的是与页面类似的 256KB 大小的“视图(view)”。
+
+不幸的是,在一个普通的文件读取中,内核必须拷贝页面缓存中的内容到一个用户缓存中,它不仅花费 CPU 时间和影响 [CPU 缓存][6],在复制数据时也浪费物理内存。如前面的图示,scene.dat 的内容在内存中被保存了两次,并且,程序的每个新实例还都会再额外保存一份。我们虽然解决了从磁盘中读取文件缓慢的问题,但是在其它的方面带来了更痛苦的问题。内存映射文件是解决这种痛苦的一个方法:
+
+![Mapped file read](http://static.duartes.org/img/blogPosts/mappedFileRead.png)
+
+当你使用文件映射时,内核直接在页面缓存上映射你的程序的虚拟页面。这样可以显著提升性能:[Windows 系统编程][7] 的报告指出,在相关的普通文件读取上运行时性能有多达 30% 的提升,在 [Unix 环境中的高级编程][8] 的报告中,文件映射在 Linux 和 Solaris 上也有类似的效果。取决于你的应用程序类型的不同,通过使用文件映射,可以节约大量的物理内存。
+
+对高性能的追求是永恒不变的目标,[测量是很重要的事情][9],内存映射应该是程序员工具箱中的常备工具。而 API 提供了非常好用的实现方式,它允许你通过内存中的字节去访问一个文件,而不需要为了这种好处而牺牲代码可读性。在一个类 Unix 的系统中,可以使用 [mmap][11] 映射你的 [地址空间][10],在 Windows 中,可以使用 [CreateFileMapping][12],或者在高级编程语言中还有更多的可用封装。当你映射一个文件内容时,它并不是一次性将全部内容都映射到内存中,而是通过 [页面故障][13] 来按需映射的。在 [获取][15] 需要的文件内容的页面帧后,页面故障处理程序在页面缓存上 [映射你的虚拟页面][14] 。如果一开始文件内容没有缓存,这还将涉及到磁盘 I/O。
+
+现在假设我们的 Render 程序的最后一个实例退出了。那么,在页面缓存中保存着 scene.dat 内容的页面要立刻释放掉吗?这是一个人们经常要考虑的问题,但是,那样做并不是个好主意。你应该想到,我们经常在一个程序中创建一个文件,退出程序,然后,在第二个程序去使用这个文件。页面缓存正好可以处理这种情况。如果考虑更多的情况,内核为什么要清除页面缓存的内容?请记住,磁盘读取的速度要慢于内存 5 个数量级,因此,命中一个页面缓存是一件有非常大收益的事情。因此,只要有足够大的物理内存,缓存就应该始终完整保存。并且,这一原则适用于所有的进程。如果你现在运行 Render,一周后 scene.dat 的内容还在缓存中,那么应该恭喜你!这就是为什么内核缓存越来越大,直至达到最大限制的原因。它并不是因为操作系统设计的太“垃圾”而浪费你的内存,其实这是一个非常好的行为,因为,闲置物理内存才是一种“浪费”。(译者注:释放物理内存会导致页面缓存被清除,下次运行程序需要的相关数据,需要再次从磁盘上进行读取,会“浪费” CPU 和 I/O 资源)最好的做法是尽可能多的使用缓存。
+
+由于页面缓存架构的原因,当程序调用 [write()][16] 时,字节只是被简单地拷贝到页面缓存中,并将这个页面标记为“脏”(dirty)页面。磁盘 I/O 通常并不会立即发生,因此,你的程序并不会被阻塞在等待磁盘写入上。如果这时候电脑死机,你写入的数据将会丢失,因此,对于至关重要的文件,像数据库事务日志,必须进行 [fsync()][17](并且仍然需要担心磁盘控制器的缓存问题),另一方面,读取将阻塞你的程序,直到数据可用为止。内核采取预加载的方式来缓解这个矛盾,它一般提前预读取几个页面并将它加载到页面缓存中,以备你后来的读取。在你计划进行一个顺序或者随机读取时(请查看 [madvise()][18]、[readahead()][19]、[Windows cache hints][20] ),你可以通过提示(hint)帮助内核去调整这个预加载行为。Linux 会对内存映射的文件进行 [预读取][21],但是,在 Windows 上并不能确保被内存映射的文件也会预读。当然,在 Linux 中可以使用 [O_DIRECT][22] 绕过页面缓存,或者在 Windows 中使用 [NO_BUFFERING][23] 绕过页面缓存,一些数据库软件就经常这么做。
+
+一个内存映射的文件可以是私有的,也可以是共享的。当然,这只是针对内存中内容的更新而言:在一个私有的内存映射文件上,更新并不会提交到磁盘或者被其它进程可见,然而,共享的内存映射文件,则正好相反,它的任何更新都会提交到磁盘上,并且对其它的进程可见。内核通过页表条目,使用写时拷贝(copy-on-write)机制来实现私有映射。在下面的例子中,Render 和另一个叫做 render3d 的程序都私有映射了 scene.dat。然后 Render 去写入映射该文件的虚拟内存区域:
+
+![The Copy-On-Write mechanism](http://static.duartes.org/img/blogPosts/copyOnWrite.png)
+
+上面展示的只读页表条目并不意味着映射是只读的,它只是内核的一个技巧,用于尽可能拖到最后一刻才去复制共享的物理内存。你可以认为“私有”一词用的有点不太恰当,你只需要记住,这个“私有”仅用于更新的情况。这种设计的重要性在于,要想看到被映射的文件的变化,其它程序只能读取它的虚拟页面。一旦“写时复制”发生,从其它地方是看不到这种变化的。但是,内核并不保证这种行为,因为它只是 x86 上的实现方式,不过从 API 的角度来看,这是有意义的。相比之下,一个共享的映射只是将它简单地映射到页面缓存上。更新会被所有的进程看到并被写入到磁盘上。最终,如果上面的映射是只读的,页面故障将触发一个段错误,而不是进行写时复制。
+
+动态加载库是通过文件映射融入到你的程序的地址空间中的。这没有什么可奇怪的,它通过普通的 APIs 为你提供与私有文件映射相同的效果。下面的示例展示了映射了同一文件的 Render 程序的两个运行实例的地址空间的一部分,以及物理内存,尝试将我们看到的许多概念综合到一起。
+
+![Mapping virtual memory to physical memory](http://static.duartes.org/img/blogPosts/virtualToPhysicalMapping.png)
+
+这是内存架构系列的第三部分的结论。我希望这个系列文章对你有帮助,对理解操作系统的这些主题提供一个很好的思维模型。
+
+--------------------------------------------------------------------------------
+
+via:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
+
+作者:[Gustavo Duarte][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
+[2]:https://manybutfinite.com/post/how-the-kernel-manages-your-memory
+[3]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait
+[4]:http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
+[5]:http://ld.so
+[6]:https://manybutfinite.com/post/intel-cpu-caches
+[7]:http://www.amazon.com/Windows-Programming-Addison-Wesley-Microsoft-Technology/dp/0321256190/
+[8]:http://www.amazon.com/Programming-Environment-Addison-Wesley-Professional-Computing/dp/0321525949/
+[9]:https://manybutfinite.com/post/performance-is-a-science
+[10]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory
+[11]:http://www.kernel.org/doc/man-pages/online/pages/man2/mmap.2.html
+[12]:http://msdn.microsoft.com/en-us/library/aa366537(VS.85).aspx
+[13]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2678
+[14]:http://lxr.linux.no/linux+v2.6.28/mm/memory.c#L2436
+[15]:http://lxr.linux.no/linux+v2.6.28/mm/filemap.c#L1424
+[16]:http://www.kernel.org/doc/man-pages/online/pages/man2/write.2.html
+[17]:http://www.kernel.org/doc/man-pages/online/pages/man2/fsync.2.html
+[18]:http://www.kernel.org/doc/man-pages/online/pages/man2/madvise.2.html
+[19]:http://www.kernel.org/doc/man-pages/online/pages/man2/readahead.2.html
+[20]:http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx#caching_behavior
+[21]:http://lxr.linux.no/linux+v2.6.28/mm/filemap.c#L1424
+[22]:http://www.kernel.org/doc/man-pages/online/pages/man2/open.2.html
+[23]:http://msdn.microsoft.com/en-us/library/cc644950(VS.85).aspx
\ No newline at end of file
diff --git a/translated/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md b/translated/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md
new file mode 100644
index 0000000000..dd834e3a53
--- /dev/null
+++ b/translated/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md
@@ -0,0 +1,183 @@
+# 如何在 Linux/Unix/Windows 中发现隐藏的进程和端口
+
+
+unhide 是一个小巧的网络取证工具,能够发现那些借助 rootkits、LKM 等其他技术隐藏的进程和 TCP/UDP 端口。这个工具在 Linux、类 Unix、Windows 等操作系统下都可以工作。根据其 man 页面的说明:
+
+> Unhide 通过下述三项技术来发现隐藏的进程。
+> 1. 进程相关的技术,包括将 /proc 目录与 /bin/ps 命令的输出进行比较。
+> 2. 系统相关的技术,包括将 ps 命令的输出结果同从系统调用方面得到的信息进行比较。
+> 3. 穷举法相关的技术,包括对所有的进程 ID 进行暴力求解,该技术仅限于在基于 Linux 2.6 内核的系统中使用。
+
+绝大多数的 Rootkits 工具或者恶意软件借助内核来实现进程隐藏,这些进程只在内核内部可见。你可以使用 unhide 或者诸如 rkhunter 等工具,扫描 rootkit 程序、后门程序以及一些可能存在的本地漏洞。
+
+![本文讲解如何在多个操作系统下安装和使用 unhide][1]
+如何安装 unhide
+-----------
+
+这里首先建议你在只读介质上运行这个工具。如果使用的是 Ubuntu 或者 Debian 发行版,输入下述的 apt-get/apt 命令以安装 Unhide:`$ sudo apt-get install unhide` 一切顺利的话你的命令行会输出以下内容:
+
+    [sudo] password for vivek:
+    Reading package lists... Done
+    Building dependency tree
+    Reading state information... Done
+    Suggested packages:
+      rkhunter
+    The following NEW packages will be installed:
+      unhide
+    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
+    Need to get 46.6 kB of archives.
+    After this operation, 136 kB of additional disk space will be used.
+    Get:1 http://in.archive.ubuntu.com/ubuntu artful/universe amd64 unhide amd64 20130526-1 [46.6 kB]
+    Fetched 46.6 kB in 0s (49.0 kB/s)
+    Selecting previously unselected package unhide.
+    (Reading database ... 205367 files and directories currently installed.)
+    Preparing to unpack .../unhide_20130526-1_amd64.deb ...
+    Unpacking unhide (20130526-1) ...
+    Setting up unhide (20130526-1) ...
+    Processing triggers for man-db (2.7.6.1-2) ...
+
+如何在 RHEL/CentOS/Oracle/Scientific/Fedora 上安装 unhide
+------------------------------------------------------------------
+
+你可以使用以下的 yum 命令:
+
+    `sudo yum install unhide`
+
+在 Fedora 上则使用以下 dnf 命令:
+
+    `sudo dnf install unhide`
+
+如何在 Arch 上安装 unhide
+-------------------
+
+键入以下 pacman 命令安装:`$ sudo pacman -S unhide`
+
+如何在 FreeBSD 上安装 unhide
+----------------------
+
+可以通过以下的命令使用 port 来安装 unhide:
+
+    # cd /usr/ports/security/unhide/
+    # make install clean
+
+或者可以使用 pkg 命令,通过二进制包来安装 unhide:
+
+    # pkg install unhide
+
+Unhide-tcp 取证工具通过对所有可用的 TCP/UDP 端口进行暴力求解的方式,辨别所有正在监听、却没有被 /bin/netstat 或者 /bin/ss 命令列出的 TCP/UDP 端口身份。
+
+如何使用 unhide 工具?
+---------------
+
+Unhide 的语法是 `unhide [options] test_list`,test_list 参数可以是以下测试列表中的一个或者多个标准测试:
+
+
+  1. brute
+  2. proc
+  3. procall
+  4. procfs
+  5. quick
+  6. reverse
+  7. sys
+
+基本测试:
+
+  1. checkbrute
+  2. checkchdir
+  3. checkgetaffinity
+  4. checkgetparam
+  5. checkgetpgid
+  6. checkgetprio
+  7. checkRRgetinterval
+  8. checkgetsched
+  9. checkgetsid
+  10. checkkill
+  11. checknoprocps
+  12. checkopendir
+  13. checkproc
+  14. checkquick
+  15. checkreaddir
+  16. checkreverse
+  17. checksysinfo
+  18. checksysinfo2
+  19. checksysinfo3
+
+你可以通过以下示例命令使用 unhide:
+
+    # unhide proc
+    # unhide sys
+    # unhide quick
+
+示例输出:
+
+    Unhide 20130526
+    Copyright © 2013 Yago Jesus & Patrick Gouin
+    License GPLv3+ : GNU GPL version 3 or later
+    http://www.unhide-forensics.info
+
+    NOTE : This version of unhide is for systems using Linux >= 2.6
+
+    Used options:
+    [*]Searching for Hidden processes through comparison of results of system calls, proc, dir and ps
+
+如何使用 unhide-tcp 工具辨明 TCP/UDP 端口的身份
+----------------------------------
+
+以下是来自 man 页面的介绍:
+
+> Unhide-tcp 取证工具,通过对所有可用的 TCP/UDP 端口进行暴力求解的方式,辨别所有正在监听、却没有被 /sbin/ss(或者 /bin/netstat)命令列出的 TCP/UDP 端口身份。注意 1:在 FreeBSD 和 OpenBSD 上,由于这些系统上不存在 iproute2,所以总是使用 netstat 命令;此外,在 FreeBSD 上还会使用 sockstat 来代替 fuser。注意 2:如果操作系统上没有 iproute2 命令,在使用 unhide-tcp 时需要在命令行上加上 -n 或者 -s 选项。
+
+    # `unhide-tcp`
+
+示例输出:
+
+    Unhide 20100201
+    http://www.security-projects.com/?Unhide
+    Starting TCP checking
+    Starting UDP checking
+
+上述操作中,没有发现隐藏的端口。但在下述示例中,我展示了一些有趣的事。
+
+    # `unhide-tcp`
+
+示例输出:
+
+    Unhide 20100201
+    http://www.security-projects.com/?Unhide
+    Starting TCP checking
+    Found Hidden port that not appears in netstat: 1048
+    Found Hidden port that not appears in netstat: 1049
+    Found Hidden port that not appears in netstat: 1050
+    Starting UDP checking
+
+可以看到 netstat -tulpn 和 ss 命令确实没有反映出这三个隐藏的端口:
+
+    # netstat -tulpn | grep 1048
+    # ss -lp
+    # ss -l | grep 1048
+
+通过下述的 man 命令可以更多地了解 unhide:
+
+    $ man unhide
+    $ man unhide-tcp
+
+Windows 用户如何安装使用 unhide
+---------------------
+你可以通过这个 [页面][2] 获取 Windows 版本的 unhide。
+
+via: https://www.cyberciti.biz/tips/linux-unix-windows-find-hidden-processes-tcp-udp-ports.html
+作者:Vivek Gite 译者:[ljgibbs][3] 校对:校对者ID
+本文由 LCTT 原创编译,Linux中国 荣誉推出!
+
+
+
+ [1]: https://camo.githubusercontent.com/51ee31c20a799512dcd09d88cacbe8dd04731529/68747470733a2f2f7777772e6379626572636974692e62697a2f746970732f77702d636f6e74656e742f75706c6f6164732f323031312f31312f4c696e75782d467265654253442d556e69782d57696e646f77732d46696e642d48696464656e2d50726f636573732d506f7274732e6a7067
+ [2]: http://www.unhide-forensics.info/?Windows:Download
+ [3]: https://github.com/ljgibbslf
diff --git a/translated/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md b/translated/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
new file mode 100644
index 0000000000..c772ceff73
--- /dev/null
+++ b/translated/tech/20121211 Python Nmon Analyzer- moving away from excel macros.md
@@ -0,0 +1,100 @@
+Python 版的 Nmon 分析器:让你远离 excel 宏
+======
+[Nigel's monitor][1],也叫做 "Nmon",是一个很好的用于监控、记录和分析 Linux/*nix 系统性能随时间变化的工具。Nmon 最初由 IBM 开发并于 2009 年夏天开源。时至今日 Nmon 已经在所有 linux 平台和架构上都可用了。它提供了大量的实时工具来可视化当前系统统计信息,这些统计信息包括 CPU、RAM、网络和磁盘 I/O。然而,Nmon 最棒的特性是可以随着时间的推移记录系统性能快照。
+比如:`nmon -f -s 1`。
+![nmon CPU and Disk utilization][2]
+会创建一个日志文件,该日志文件最开头是一些系统的元数据(章节 AAA - BBBV),后面是定时抓取的监控系统属性的快照,比如 CPU 和内存的使用情况。这个文件很难直接由电子表格应用来处理,因此诞生了 [Nmon_Analyzer][3] excel 宏。如果你用的是 Windows/Mac 并安装了 Microsoft Office,那么这个工具非常不错。如果没有这个环境那也可以使用 Nmon2rrd 工具,这个工具能将日志文件转换为 RRD 输入文件,进而生成图形。这个过程很死板而且有点麻烦。现在出现了一个更灵活的工具,下面向你们介绍一下 pyNmonAnalyzer,它是一个可定制化的解决方案,用来生成结构化的 CSV 文件和基于 [matplotlib][4] 生成图片的简单 HTML 报告。
+
+### 入门介绍:
+
+系统需求:
+从名字中就能看出我们需要有 python。此外 pyNmonAnalyzer 还依赖于 matplotlib 和 numpy。若你使用的是 debian 衍生的系统,则你需要先安装这些包:
+```
+$> sudo apt-get install python-numpy python-matplotlib
+
+```
+
+##### 获取 pyNmonAnalyzer:
+
+你可以克隆 git 仓库:
+```
+$> git clone git@github.com:madmaze/pyNmonAnalyzer.git
+
+```
+
+或者
+
+直接从这里下载:[pyNmonAnalyzer-0.1.zip][5]
+
+接下来我们需要一个 Nmon 文件,如果没有的话,可以使用发行版中提供的实例,或者自己录制一个样本:`nmon -F test.nmon -s 1 -c 120`,它会每 1 秒采样一次,共采集 120 个快照,保存到 test.nmon 文件中。
+
+让我们来看看基本的帮助信息:
+```
+$> ./pyNmonAnalyzer.py -h
+usage: pyNmonAnalyzer.py [-h] [-x] [-d] [-o OUTDIR] [-c] [-b] [-r CONFFNAME]
+ input_file
+
+nmonParser converts Nmon monitor files into time-sorted
+CSV/Spreadsheets for easier analysis, without the use of the
+MS Excel Macro. Also included is an option to build an HTML
+report with graphs, which is configured through report.config.
+
+positional arguments:
+ input_file Input NMON file
+
+optional arguments:
+ -h, --help show this help message and exit
+ -x, --overwrite overwrite existing results (Default: False)
+ -d, --debug debug? (Default: False)
+ -o OUTDIR, --output OUTDIR
+ Output dir for CSV (Default: ./data/)
+ -c, --csv CSV output? (Default: False)
+ -b, --buildReport report output? (Default: False)
+ -r CONFFNAME, --reportConfig CONFFNAME
+ Report config file, if none exists: we will write the
+ default config file out (Default: ./report.config)
+
+```
+
+该工具有两个主要的选项:
+
+ 1. 将 nmon 文件转换成一系列独立的 CSV 文件
+ 2. 使用 matplotlib 生成带图形的 HTML 报告
+
+
+
+下面命令既会生成 CSV 文件,也会生成 HTML 报告:
+```
+$> ./pyNmonAnalyzer.py -c -b test.nmon
+
+```
+
+这会创建一个 `./data` 目录,其中有一个存放 CSV 文件的目录(`./data/csv/`)、一个存放 PNG 图片的目录(`./data/img/`)以及一个 HTML 报告(`./data/report.html`)。
+
+默认情况下,HTML 报告中会用图片展示 CPU、磁盘繁忙度、内存使用情况和网络传输情况。所有这些都定义在一个自解释的配置文件中("report.config")。目前这个工具还不是特别的灵活,因为 CPU 和 MEM 除了 on 和 off 外,无法做其他的配置。不过下一步将会改进作图的方法并允许用户灵活地指定针对哪些数据使用哪种作图方法。
+
+### 报告的例子:
+
+[![pyNmonAnalyzer Graph output][6]
+**点击查看完整报告**][7]
+
+目前这些报告还十分的枯燥,而且只能打印出基本的几种标记图表,不过它的功能还在不断的完善中。目前在开发的是一个向导,用来让配置调整变得更容易。如果有任何建议,找到任何 bug 或者有任何功能需求,欢迎与我交流。
+
+--------------------------------------------------------------------------------
+
+via: https://matthiaslee.com/python-nmon-analyzer-moving-away-from-excel-macros/
+
+作者:[Matthias Lee][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://matthiaslee.com/
+[1]:http://nmon.sourceforge.net/
+[2]:https://matthiaslee.com//content/images/2015/06/nmon_cpudisk.png
+[3]:http://www.ibm.com/developerworks/wikis/display/WikiPtype/nmonanalyser
+[4]:http://matplotlib.org/
+[5]:https://github.com/madmaze/pyNmonAnalyzer/blob/master/release/pyNmonAnalyzer-0.1.zip?raw=true
+[6]:https://matthiaslee.com//content/images/2017/04/teaser-short_0.png (pyNmonAnalyzer Graph output)
+[7]:http://matthiaslee.com/pub/pyNmonAnalyzer/data/report.html
diff --git a/translated/tech/20140410 Recursion- dream within a dream.md b/translated/tech/20140410 Recursion- dream within a dream.md
new file mode 100644
index 0000000000..3becf75ebd
--- /dev/null
+++ b/translated/tech/20140410 Recursion- dream within a dream.md
@@ -0,0 +1,122 @@
+#[递归:梦中梦][1]
+递归是很神奇的,但是在大多数的编程类书籍中对递归讲解得并不好。它们只是给你展示一个递归阶乘的实现,然后警告你递归运行得很慢,并且还有可能因为栈缓冲区溢出而崩溃。“你可以将头伸进微波炉中去烘干你的头发,但是需要警惕颅内高压以及让你的头爆炸的危险,或者你可以使用毛巾来擦干头发。”这就是人们不愿意使用递归的原因。这是很糟糕的,因为在算法中,递归是最强大的。
+
+我们来看一下这个经典的递归阶乘:
+
+递归阶乘 - factorial.c
+
+```
+#include <stdio.h>
+
+int factorial(int n)
+{
+    int previous = 0xdeadbeef;
+
+    if (n == 0 || n == 1) {
+        return 1;
+    }
+
+    previous = factorial(n-1);
+    return n * previous;
+}
+
+int main(int argc)
+{
+    int answer = factorial(5);
+    printf("%d\n", answer);
+}
+```
+
+函数调用它自己这件事,一开始是让人很难理解的。为了了解具体的内容,当调用 `factorial(5)` 并且达到 `n == 1` 时,[在栈上][3] 究竟发生了什么?
+
+![](https://manybutfinite.com/img/stack/factorial.png)
+
+每次调用 `factorial` 都生成一个新的 [栈帧][4]。这些栈帧的创建和 [销毁][5] 是递归慢于迭代的原因。在调用返回之前,累积的这些栈帧可能会耗尽栈空间,进而使你的程序崩溃。
+
+而这些担心经常是存在于理论上的。例如,对于每个 `factorial` 的栈帧取 16 字节(这可能取决于栈对齐以及其它因素)。如果在你的电脑上运行着现代的 x86 的 Linux 内核,一般情况下你拥有 8MB 的栈空间,因此,`factorial` 最多可以被运行 ~512,000 次。这是一个 [巨大无比的结果][6],它相当于 8,971,833 比特,因此,栈空间根本就不是什么问题:一个极小的整数 - 甚至是一个 64 位的整数 - 在我们的栈空间被耗尽之前就早已经溢出了成千上万次了。
+
+过一会儿我们再去看 CPU 的使用,现在,我们先从比特和字节回退一步,把递归看作一种通用技术。我们的阶乘算法总结为将整数 N、N-1、… 1 推入到一个栈,然后将它们按相反的顺序相乘。实际上我们使用了程序调用栈来实现这一点,但这只是一个实现细节:我们也可以在堆上分配一个栈来使用。虽然调用栈具有特殊的特性,但是,你只是把它用作一种另外的数据结构。我希望示意图可以让你明白这一点。
+
+当你看到栈调用作为一种数据结构使用,有些事情将变得更加清晰明了:将那些整数堆积起来,然后再将它们相乘,这并不是一个好的想法。那是一种有缺陷的实现:就像你拿螺丝刀去钉钉子一样。相对更合理的是使用一个迭代过程去计算阶乘。
+
+但是,世上的“螺丝钉”多得是,我们只能挑一个。有一个经典的面试题,在迷宫里有一只老鼠,你必须帮助这只老鼠找到一个奶酪。假设老鼠能够在迷宫中向左或者向右转弯。你该怎么去建模来解决这个问题?
+
+就像现实生活中的很多问题一样,你可以将这个老鼠找奶酪的问题简化为一个图,一个二叉树的每个结点代表在迷宫中的一个位置。然后你可以让老鼠在任何可能的地方都左转,而当它进入一个死胡同时,再返回来右转。这是一个老鼠行走的 [迷宫示例][7]:
+
+![](https://manybutfinite.com/img/stack/mazeGraph.png)
+
+每到边缘(线)都让老鼠左转或者右转来到达一个新的位置。如果向哪边转都被拦住,说明相关的边缘不存在。现在,我们来讨论一下!无论你用调用栈还是其它数据结构来实现,这都是一个递归的过程。而使用调用栈是非常容易的:
+
+递归迷宫求解 [下载][2]
+
+```
+#include <stdio.h>
+#include "maze.h"
+
+int explore(maze_t *node)
+{
+    int found = 0;
+
+    if (node == NULL)
+    {
+        return 0;
+    }
+    if (node->hasCheese){
+        return 1;// found cheese
+    }
+
+    found = explore(node->left) || explore(node->right);
+    return found;
+ }
+
+ int main(int argc)
+ {
+    int found = explore(&maze);
+ }
+```
+当我们在 `maze.c:13` 中找到奶酪时,栈的情况如下图所示。你也可以在 [GDB 输出][8] 中看到更详细的数据,它是使用 [命令][9] 采集的数据。
+
+![](https://manybutfinite.com/img/stack/mazeCallStack.png)
+
+它展示了递归的良好表现,因为这是一个适合使用递归的问题。而且这并不奇怪:当涉及到算法时,递归是一种使用较多的算法,而不是被排除在外的。当进行搜索时、当进行遍历树和其它数据结构时、当进行解析时、当需要排序时:它的用途无处不在。正如众所周知的 pi 或者 e,它们在数学中像“神”一样的存在,因为它们是宇宙万物的基础,而递归也和它们一样:只是它在计算的结构中。
+
+Steven Skiena 的优秀著作 [算法设计指南][10] 的精彩之处在于,他通过“战争故事” 作为手段来诠释工作,以此来展示解决现实世界中的问题背后的算法。这是我所知道的拓展你的算法知识的最佳资源。另一个较好的做法是,去读 McCarthy 的 [关于 LISP 的原创论文][11]。递归在这门语言中既是它的名字也是它的基本原理。这篇论文既可读又有趣,在工作中能看到大师的作品是件让人兴奋的事情。
+
+回到迷宫问题上。虽然它在这里很难离开递归,但是并不意味着必须通过调用栈的方式来实现。你可以使用像 “RRLL” 这样的字符串去跟踪转向,然后,依据这个字符串去决定老鼠下一步的动作。或者你可以分配一些其它的东西来记录奶酪的状态。你仍然是去实现一个递归的过程,但是需要你实现一个自己的数据结构。
+
+那样似乎更复杂一些,因为栈调用更合适。每个栈帧记录的不仅是当前节点,也记录那个节点上的计算状态(在这个案例中,我们是否只让它走左边,或者已经尝试向右)。因此,代码已经变得不重要了。然而,有时候我们因为害怕溢出和期望中的性能而放弃这种优秀的算法。那是很愚蠢的!
+
+正如我们所见,栈空间是非常大的,在耗尽栈空间之前往往会遇到其它的限制。一方面可以通过检查问题大小来确保它能够被安全地处理。而对 CPU 的担心是由两个广为流传的有问题的示例所导致的:哑阶乘(dumb factorial)和可怕的无记忆化的 O(2^n) [Fibonacci 递归][12]。它们并不是栈递归算法的正确代表。
+
+事实上栈操作是非常快的。通常,对栈上数据的访问使用的是精确的偏移量,栈在 [缓存][13] 中是热点,并且有专门的指令来操作它。同时,使用你自己定义的堆上分配的数据结构的相关开销是很大的。经常能看到人们写的一些比栈调用递归更复杂、性能更差的实现方法。最后,现代的 CPU 的性能都是 [非常好的][14],并且一般 CPU 不会是性能瓶颈所在。在为了性能而牺牲简单性之前要慎重,并且要 [实际测量][15]。
+
+下一篇文章将是探秘栈系列的最后一篇了,我们将了解尾调用、闭包、以及其它相关概念。然后,我们就该深入我们的老朋友—— Linux 内核了。感谢你的阅读!
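+
+(补充一个可以自己动手验证的小例子:上文提到的“无记忆化的 O(2^n) Fibonacci 递归”,只要加上一个简单的记忆化表,递归调用次数就会降到 O(n)。下面是一个示意性的草图,数组上限 92 是一个假设的边界,因为再大的 Fibonacci 数就放不进带符号的 64 位整数了:)
+
+```
+#include <stdio.h>
+
+/* 朴素递归:调用次数按 O(2^n) 增长 */
+long long fib(int n)
+{
+    return n < 2 ? n : fib(n - 1) + fib(n - 2);
+}
+
+/* 记忆化递归:每个 n 只计算一次 */
+long long memo[93]; /* 全局数组,自动初始化为 0;仅适用于 n <= 92 */
+
+long long fibMemo(int n)
+{
+    if (n < 2) return n;
+    if (memo[n] == 0) memo[n] = fibMemo(n - 1) + fibMemo(n - 2);
+    return memo[n];
+}
+
+int main(void)
+{
+    printf("%lld\n", fibMemo(50)); /* 输出 12586269025 */
+    return 0;
+}
+```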
+
+![](https://manybutfinite.com/img/stack/1000px-Sierpinski-build.png)
+
+--------------------------------------------------------------------------------
+
+via:https://manybutfinite.com/post/recursion/
+
+作者:[Gustavo Duarte][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:http://duartes.org/gustavo/blog/about/
+[1]:https://manybutfinite.com/post/recursion/
+[2]:https://manybutfinite.com/code/x86-stack/maze.c
+[3]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-gdb-output.txt
+[4]:https://manybutfinite.com/post/journey-to-the-stack
+[5]:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/
+[6]:https://gist.github.com/gduarte/9944878
+[7]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze.h
+[8]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze-gdb-output.txt
+[9]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze-gdb-commands.txt
+[10]:http://www.amazon.com/Algorithm-Design-Manual-Steven-Skiena/dp/1848000693/
+[11]:https://github.com/papers-we-love/papers-we-love/blob/master/comp_sci_fundamentals_and_history/recursive-functions-of-symbolic-expressions-and-their-computation-by-machine-parti.pdf
+[12]:http://stackoverflow.com/questions/360748/computational-complexity-of-fibonacci-sequence
+[13]:https://manybutfinite.com/post/intel-cpu-caches/
+[14]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait/
+[15]:https://manybutfinite.com/post/performance-is-a-science
\ No newline at end of file
diff --git a/translated/tech/20140510 Journey to the Stack Part I.md b/translated/tech/20140510 Journey to the Stack Part I.md
new file mode 100644
index 0000000000..b18c7d32f5
--- /dev/null
+++ b/translated/tech/20140510 Journey to the Stack Part I.md
@@ -0,0 +1,103 @@
+#[探秘“栈”之旅(I)][1]
+
+早些时候,我们讲解了 [“剖析内存中的程序之秘”][2],我们欣赏了在一台电脑中是如何运行我们的程序的。今天,我们去探索栈的调用,它在大多数编程语言和虚拟机中都默默地存在。在此过程中,我们将接触到一些平时很难见到的东西,像闭包(closures)、递归、以及缓冲区溢出等等。但是,我们首先要做的事情是,描绘出栈是如何运作的。
+
+栈非常重要,因为它持有着在一个程序中运行的函数,而函数又是一个软件的重要组成部分。事实上,程序的内部操作都是非常简单的:它大部分是由函数在相互调用中向栈推入和弹出数据所组成的,只有需要在跨函数调用时仍然保留的数据,才会被分配到堆上。不论是低级(low-level)的 C 软件,还是像 JavaScript 和 C# 这样的基于虚拟机的语言,都是这样的。而对这些行为的深刻理解,对排错、性能调优以及大概了解究竟发生了什么是非常重要的。
+
+当一个函数被调用时,将会创建一个栈帧(stack frame)去支持函数的运行。这个栈帧包含函数的本地变量和调用者传递给它的参数。这个栈帧也包含了允许被调用的函数安全返回给调用者的内部记录信息。栈帧的精确内容和结构因处理器架构和函数调用约定而不同。在本文中我们以 Intel x86 架构和使用 C 风格的函数调用约定(cdecl)的栈为例。下图是一个处于栈顶部的单个栈帧:
+
+![](https://manybutfinite.com/img/stack/stackIntro.png)
+
+在图上的场景中,有三个 CPU 寄存器与栈有关。栈指针 `esp`(译者注:扩展栈指针寄存器)指向栈的顶部。栈的顶部总是被最后一个推入到栈且还没有弹出的东西所占据,就像现实世界中堆在一起的一叠盘子或者面值 $100 的钞票。
+
+保存在 `esp` 中的地址始终在变化着,因为栈中的东西不停被推入和弹出,而它总是指向栈中最后一个推入的东西。许多 CPU 指令的一个副作用就是自动更新 `esp`,不通过这个寄存器而使用栈是行不通的。
+
+在 Intel 的架构中,绝大多数情况下,栈的增长是向着低位内存地址的方向。因此,这个“顶部”在包含数据(在这种情况下,包含的数据是 `local_buffer`)的栈中是处于最低位的内存地址。注意,关于从 `esp` 到 `local_buffer` 的箭头,这里并没有模糊的地方。这个箭头是“动真格的”:它精确指向由 `local_buffer` 所拥有的第一个字节,因为,那是一个保存在 `esp` 中的精确地址。
+
+第二个跟踪栈的寄存器是 `ebp`(译者注:扩展基址指针寄存器),它包含一个基址指针或者称为帧指针。它指向当前运行的函数的栈帧内的一个固定位置,并且它为参数和本地变量的访问提供一个稳定的参考点(基址)。仅当开始或者结束调用一个函数时,`ebp` 的内容才会发生变化。因此,我们可以很容易地用相对 `ebp` 的固定偏移去访问栈中的每个东西。如下图所示。
+
+不像 `esp`,`ebp` 大多数情况下是由程序代码来维护的,只花费很少的 CPU 开销。有时候,完全抛弃 `ebp` 会有一些性能优势,可以通过 [编译标志][3] 来做到这一点。Linux 内核中有一个实现的示例。
+
+最后,`eax`(译者注:扩展的 32 位通用数据寄存器)寄存器是被调用约定所使用的寄存器,对于大多数 C 数据类型来说,它的作用是传递一个返回值给调用者。
+
+现在,我们来看一下在我们的栈帧中的数据。下图清晰地按字节展示了字节的内容,就像你在一个调试器中所看到的内容一样,内存是从左到右、从底部到顶部增长的,如下图所示:
+
+![](https://manybutfinite.com/img/stack/frameContents.png)
+
+本地变量 `local_buffer` 是一个字节数组,它包含一个以空字符结尾(null-terminated)的 ascii 字符串,这是 C 程序中的基本元素。这个字符串可以是从任意位置读取的,例如,来自键盘输入或者来自一个文件,它只有 7 个字节的长度。因为 `local_buffer` 可以保存 8 字节,所以在它的左侧保留了 1 个未使用的字节。这个字节的内容是未知的,因为栈的推入和弹出是极其活跃的,除了你写入的之外,你从不知道内存中保存了什么。因为 C 编译器并不为栈帧初始化内存,所以它的内容是未知的并且是随机的 - 除非是你自己写入。这使得一些人对此很困惑。
+
+再往上走,`local1` 是一个 4 字节的整数,并且你可以看到每个字节的内容。它似乎是一个很大的数字,所有的零都在 8 后面,在这里可能会让你误入歧途。
+
+Intel 处理器是按小端(little endian)机制来处理的,这表示在内存中的数字是从低位字节开始保存的。因此,在一个多字节数字中,最小的有效字节在内存中处于低端地址。因为一般情况下是从左边开始显示的,这背离了我们一般意义上对数字的认识。我们讨论的这种小端机制,使我想起《格列佛游记》:就像小人国的人们吃鸡蛋是从小头开始的一样,Intel 处理器处理它们的数字也是从字节的小端开始的。
+
+因此,`local1` 事实上只保存了一个数字 8,就像章鱼的一条腿。然而,`param1` 在第二个字节的位置有一个值 2,因此,它的数学上的值是 2 * 256 = 512(我们与 256 相乘是因为,每个位置的取值范围都是从 0 到 255)。同时,`param2` 承载的数值是 1 * 256 * 256 = 65536。
+
+这个栈帧的内部数据是由两个重要的部分组成:前一个栈帧的地址和函数出口处要运行的指令的地址(返回地址)。它们一起确保了函数能够正常返回,从而使程序可以继续正常运行。
+
+现在,我们来看一下栈帧是如何产生的,以及去建立一个它们如何共同工作的内部蓝图。在刚开始的时候,栈的增长是非常令人困惑的,因为它发生的一切都不是你所期望的东西。例如,在栈上分配 8 字节的空间,是通过从 `esp` 减去 8 实现的,而用减法去分配空间总让人觉得奇怪。
+
+我们来看一个简单的 C 程序:
+
+Simple Add Program - add.c
+
+```
+int add(int a, int b)
+{
+    int result = a + b;
+    return result;
+}
+
+int main(int argc)
+{
+    int answer;
+    answer = add(40, 2);
+}
+```
+
+假设我们在 Linux 中不使用命令行参数去运行它。当你运行一个 C 程序时,真实运行的第一个代码是 C 运行时库,由它来调用我们的 `main` 函数。下图展示了程序运行时每一步都发生了什么。每个图链接的 GDB 输出展示了内存的状态和寄存器。你也可以看到所使用的 [GDB 命令][4],以及整个 [GDB 输出][5]。如下:
+
+![](https://manybutfinite.com/img/stack/mainProlog.png)
+
+第 2 步和第 3 步,以及下面的第 4 步,都只是函数的开端(prologue),几乎所有的函数都是这样的:先将 `ebp` 的当前值保存到栈的顶部,然后,将 `esp` 的内容拷贝到 `ebp`,建立一个新帧。`main` 的开端和任何一个其它函数都是一样,但是,不同之处在于,当程序启动时 `ebp` 被清零。
+
+如果你去检查栈下面的整型变量(argc),你将找到更多的数据,包括指向程序名和命令行参数(传统的 C 参数数组)、Unix 环境变量以及它们真实内容的指针。但是,在这里这些并不是重点,因此,继续向前调用 add():
+
+![](https://manybutfinite.com/img/stack/callAdd.png)
+
+在 `main` 从 `esp` 减去 12 之后得到它所需的栈空间,它为 a 和 b 设置值。在内存中值展示为十六进制,并且是小端格式,与你从调试器中看到的一样。一旦设置了参数值,`main` 将调用 `add` ,并且它开始运行:
+
+![](https://manybutfinite.com/img/stack/addProlog.png)
+
+现在,有一点小激动!我们进入了另一个开端,在这时你可以明确看到栈帧是如何以 `ebp` 串成的链表的形式存在于栈中的。这就是调试器和高级语言中的异常对象对它们的栈进行跟踪的方法。当一个新帧产生时,你也可以看到更多这种从 `ebp` 到 `esp` 的典型的捕获。我们再次从 `esp` 中做减法得到更多的栈空间。
+
+当 `ebp` 寄存器的值拷贝到内存时,这里也有一个稍微有些怪异的地方。在这里发生的奇怪事情是,寄存器并没有真的按字节顺序拷贝:因为对于内存,没有像寄存器那样的“增长的地址”。因此,调试器的惯例是以对人最自然的格式去展示寄存器的值:从最高位到最低位的数字。因此,这个按小端机制拷贝的结果,与内存中常用的从左到右的标记法正好相反。我想用图去展示你将会看到的东西,因此有了下面的图。
+
+在比较难懂的部分,我们增加了注释:
+
+![](https://manybutfinite.com/img/stack/doAdd.png)
+
+这是一个临时寄存器,用于帮你做加法,因此没有什么警报或者惊喜。对于加法这样的作业,栈的动作正好相反,我们留到下次再讲。
+
+对于任何读到这篇文章的人都应该有一个小礼物,因此,我做了一个大的图表展示了 [组合到一起的所有步骤][6]。
+
+一旦把它们全部布置好了,看上去似乎很乏味。这些小方框给我们提供了很多帮助。事实上,在计算机科学中,这些小方框是主要的展示工具。我希望这些图片和寄存器的移动能够提供一种更直观的构想图,将栈的增长和内存的内容整合到一起。从软件的底层运作来看,我们的软件与一个简单的图灵机器差不多。
+
+这就是我们栈探秘的第一部分,再讲一些内容之后,我们将看到构建在这个基础上的高级编程的概念。下周见!
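+
+(补充一个可以自己动手做的小实验:下面这个示意性的小程序会打印出嵌套调用中本地变量的地址,你可以借此观察栈向低位地址增长。这只是一个草图,具体打印出的地址值因系统和编译选项而异:)
+
+```
+#include <stdio.h>
+
+void inner(void)
+{
+    int local = 2;
+    /* 这里打印的地址通常应该比 outer 中的更低 */
+    printf("inner: %p\n", (void *)&local);
+}
+
+void outer(void)
+{
+    int local = 1;
+    printf("outer: %p\n", (void *)&local);
+    inner();
+}
+
+int main(void)
+{
+    outer();
+    return 0;
+}
+```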
+
+--------------------------------------------------------------------------------
+
+via:https://manybutfinite.com/post/journey-to-the-stack/
+
+作者:[Gustavo Duarte][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:https://manybutfinite.com/post/journey-to-the-stack/
+[2]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory
+[3]:http://stackoverflow.com/questions/14666665/trying-to-understand-gcc-option-fomit-frame-pointer
+[4]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-commands.txt
+[5]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-output.txt
+[6]:https://manybutfinite.com/img/stack/callSequence.png
\ No newline at end of file
diff --git a/translated/tech/20140519 Epilogues Canaries and Buffer Overflows.md b/translated/tech/20140519 Epilogues Canaries and Buffer Overflows.md
new file mode 100644
index 0000000000..b74400a68b
--- /dev/null
+++ b/translated/tech/20140519 Epilogues Canaries and Buffer Overflows.md
@@ -0,0 +1,100 @@
+[探秘“栈”之旅(II)—— 谢幕,金丝雀,和缓冲区溢出][1]
+============================================================
+
+上一周我们讲解了 [栈是如何工作的][2] 以及在函数的开端上栈帧是如何被构建的。今天,我们来看一下它的相反的过程,在函数结束时,栈帧是如何被销毁的。重新回到我们的 add.c 上:
+
+简单的一个做加法的程序 - add.c
+
+```
+int add(int a, int b)
+{
+    int result = a + b;
+    return result;
+}
+
+int main(int argc)
+{
+    int answer;
+    answer = add(40, 2);
+}
+```
+
+在运行到第 4 行时,在把 `a + b` 值赋给 `result` 后,这时发生了什么:
+
+![](https://manybutfinite.com/img/stack/returnFromAdd.png)
+
+第一个指令是有些多余而且有点傻的,因为我们知道 `eax` 已经等于 `result` 了,但这就是关闭优化时得到的结果。剩余的指令接着运行,这一小段完成了两个任务:重置 `esp` 并将它指向到当前栈帧开始的地方,另一个是恢复在 `ebp` 中保存的值。这两个操作在逻辑上是独立的,因此,在图中将它们分开来说,但是,如果你使用一个调试器去跟踪,你就会发现它们都是自动发生的。
+
+在运行完毕后,恢复了前一个栈帧。`add` 调用唯一留下的东西就是在栈顶部的返回地址。它包含了运行完 `add` 之后在 `main` 中的指令的地址。接下来运行的是 `ret` 指令:它弹出返回地址到 `eip` 寄存器(译者注:32 位的指令寄存器),这个寄存器指向下一个要执行的指令。现在程序将返回到 `main`,主要部分如下:
+
+![](https://manybutfinite.com/img/stack/returnFromMain.png)
+
+`main` 从 `add` 中拷贝返回值到本地变量 `answer`,然后,运行它自己的“谢幕仪式”,这一点和其它的函数是一样的。在 `main` 中唯一的怪异之处是,它在 `ebp` 中保存的是 `null` 值,因为在我们的代码中它是第一个栈帧。最后一步执行的是,返回到 C 运行时库(libc),它将退回到操作系统中。这里为需要的人提供了一个 [完整的返回顺序][3] 的图。
+
+现在,你已经理解了栈是如何运作的,所以我们现在可以来看一下,一直以来最著名的黑客行为:挖掘缓冲区溢出。这是一个有漏洞的程序:
+
+有漏洞的程序 - buffer.c
+
+```
+void doRead()
+{
+    char buffer[28];
+    gets(buffer);
+}
+
+int main(int argc)
+{
+    doRead();
+}
+```
+
+上面的代码中使用了 [gets][4] 从标准输入中去读取内容。`gets` 会持续读取,直到遇到一个换行或者文件结束。下图是读取一个字符串之后栈的示意图:
+
+![](https://manybutfinite.com/img/stack/bufferCopy.png)
+
+在这里存在的问题是,`gets` 并不知道缓冲区大小:它毫无察觉地持续读取输入内容,并将读取的内容填入到缓冲区之外的栈,覆盖掉保存在 `ebp` 中的值、返回地址,以及下面的其它内容。对于挖掘这种漏洞,攻击者制作一个载荷片段并将它“喂”给程序。在这个时候,栈应该是下图所示的样子,然后去调用 `gets`:
+
+![](https://manybutfinite.com/img/stack/bufferOverflowExploit.png)
+
+基本的想法是提供一个恶意的汇编代码去运行,通过覆写栈上的返回地址指向到那个代码。这有点像病毒侵入一个细胞,颠覆它,然后引入一些 RNA 去达到它的目的。
+
+和病毒一样,挖掘者的载荷有许多特别的功能。它从几个 `nop` 指令开始,以提升成功挖掘漏洞的可能性。这是因为返回的地址是一个靠猜测的且不受约束的地址,因此,攻击者并不知道保存它的代码的栈的准确位置。但是,只要它们进入一个 `nop`,这个漏洞挖掘工作就会进行:处理器将运行 `nops`,直到击中它希望去运行的指令。
+
+`exec /bin/sh` 代表的是运行一个 shell 的原生汇编指令(假设漏洞位于一个网络程序中,因此,这个漏洞可能提供一个访问系统的 shell)。将原生汇编指令嵌入到程序的输入中,让程序去产生一个命令行窗口,这种想法是很可怕的,但那正是让安全研究如此有趣且“脑洞大开”的原因之一。为了让你对挖掘这类漏洞的“扭曲”程度有一个概念:有时候,有漏洞的程序会把它的输入转换为小写或者大写,这将迫使攻击者编写的汇编指令的所有字节都必须落在小写或者大写的 ascii 字母范围之内。
+
+最后,攻击者把猜测的返回地址重复多次,这将再次提升他们的胜算。通过在一个 4 字节的边界上多次重复,它们更有可能覆写到栈上的原始返回地址。
+
+幸亏,现代操作系统有了 [防止缓冲区溢出][5] 的一系列保护措施,包括不可执行的栈和栈金丝雀(stack canaries)。这个“金丝雀(canary)”名字来自 [煤矿中的金丝雀(canary in a coal mine)][6] 的表述(译者注:指在煤矿工人下井时,带一只金丝雀,因为金丝雀对煤矿中的瓦斯气体非常敏感,如果进入煤矿后,金丝雀死亡,说明瓦斯超标,矿工会立即撤出煤矿。金丝雀作为煤矿中瓦斯预警器来使用),是对丰富的计算机科学词汇的补充,用 Steve McConnell 的话解释如下:
+
+> 计算机科学拥有比其它任何领域都丰富多彩的语言,在其它的领域中你进入一个无菌室,小心地将温度控制在 68°F,然后,能找到病毒、特洛伊木马、蠕虫、臭虫、炸弹、崩溃、爆发、扭曲的变性者、以及致命错误吗? Steve McConnell 代码大全 2
+
+不管怎么说,这里所谓的“栈金丝雀”看起来应该是这个样子的:
+
+![](https://manybutfinite.com/img/stack/bufferCanary.png)
+
+金丝雀是由编译器来实现的。例如,GCC 的 [栈保护器][7] 选项可以让金丝雀被用于任何可能有漏洞的函数上。函数的开端加载一个魔法值到金丝雀的位置,并且在函数收尾时确保这个值完好无损。如果这个值发生了变化,那就表示发生了一个缓冲区溢出(或者 bug),这时,程序通过 [__stack_chk_fail][8] 被终止运行。由于金丝雀处于栈的关键位置上,它使得栈缓冲区溢出的漏洞挖掘变得非常困难。
+
+深入栈的探秘之旅结束了,我并不打算再进一步深入了。下一周我将深入递归、尾调用以及其它相关内容,或许还会用到谷歌的 V8 引擎。为了总结函数的开端和结束的讨论,我引用美国国家档案馆纪念雕像上的一句名言:“凡是过去,皆为序章”(what is past is prologue)。
+
+![](https://manybutfinite.com/img/stack/past-is-prologue.jpg)
+
+--------------------------------------------------------------------------------
+
+via:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/
+
+作者:[Gustavo Duarte][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/
+[2]:https://manybutfinite.com/post/journey-to-the-stack
+[3]:https://manybutfinite.com/img/stack/returnSequence.png
+[4]:http://linux.die.net/man/3/gets
+[5]:http://paulmakowski.wordpress.com/2011/01/25/smashing-the-stack-in-2011/
+[6]:http://en.wiktionary.org/wiki/canary_in_a_coal_mine
+[7]:http://gcc.gnu.org/onlinedocs/gcc-4.2.3/gcc/Optimize-Options.html
+[8]:http://refspecs.linux-foundation.org/LSB_4.0.0/LSB-Core-generic/LSB-Core-generic/libc---stack-chk-fail-1.html
\ No newline at end of file
diff --git a/translated/tech/20140523 Tail Calls Optimization and ES6.md b/translated/tech/20140523 Tail Calls Optimization and ES6.md
new file mode 100644
index 0000000000..f04b743d26
--- /dev/null
+++ b/translated/tech/20140523 Tail Calls Optimization and ES6.md
@@ -0,0 +1,173 @@
+#[尾调用,优化,和 ES6][1]
+
+
+在探秘“栈”的倒数第二篇文章中,我们提到了**尾调用**、编译优化、以及新发布的 JavaScript 上*特有的*尾调用。
+
+当一个函数 F 调用另一个函数作为它的结束动作时,就发生了一个**尾调用**。在那个时间点,函数 F 不会再有任何遗留的工作:函数 F 将“球”传给被它调用的任意函数之后,它自己就“消失”了。这就是关键点,因为它打开了尾调用优化的“可能之门”:我们可以简单地重用函数 F 的栈帧,而不是为被调用的函数 [创建一个新的栈帧][6],因此节省了栈空间并且避免了新建一个栈帧所需要的工作量。下面是一个用 C 写的简单示例,以及使用 [温和的优化][7] 来编译它的结果:
+
+简单的尾调用 [下载][2]
+
+```
+int add5(int a)
+{
+    return a + 5;
+}
+
+int add10(int a)
+{
+    int b = add5(a); // not tail
+    return add5(b); // tail
+}
+
+int add5AndTriple(int a){
+    int b = add5(a); // not tail
+    return 3 * add5(a); // not tail, doing work after the call
+}
+
+int finicky(int a){
+    if (a > 10){
+        return add5AndTriple(a); // tail
+    }
+
+    if (a > 5){
+        int b = add5(a); // not tail
+        return finicky(b); // tail
+    }
+
+    return add10(a); // tail
+}
+```
+
+在编译器的输出中,如果在预期会有一个 [调用][9] 的地方,你看到的是一个 [跳转][8] 指令,通常就说明发生了尾调用优化(以下简称 TCO)。在运行时中,TCO 将会引起调用栈的减少。
+
+一个常见的错误观念是,尾调用必须要 [递归][10]。实际上并不是这样的:一个尾调用可以是递归的,比如在上面的 `finicky()` 中,但是,并不是必须要使用递归的。只要函数 F 在调用点之后再无其它工作,我们就得到了一个单独的尾调用。是否能够进行优化这是另外一个问题,它取决于你的编程环境。
+
+“是的,它总是可以!”,这是我们所希望的最佳答案。Scheme 就是这样的,就像在 [SICP][11](顺便说一声,如果你的程序不像“一个魔法师使用你的咒语召唤你的电脑精灵”那般有效,建议你读一下那本书)上所讨论的那样;[Lua][12] 也是如此。而更重要的是,下一个版本的 JavaScript —— ES6 也是如此,这个规范定义了 [尾位置][13],并且明确了优化所需要的几个条件,比如 [严格模式][14]。当一种编程语言保证总是进行 TCO 时,它就支持了特有的尾调用(proper tail calls)。
+
+现在,我们中的一些人不能抛开那些 C 的习惯,心脏出血,等等,而答案是一个更复杂的“有时候(sometimes)”,它将我们带进了编译优化的领域。我们看一下上面的那个 [简单示例][15];把我们 [上篇文章][16] 的阶乘程序重新拿出来:
+
+递归阶乘 [下载][3]
+
+```
+#include <stdio.h>
+
+int factorial(int n)
+{
+    int previous = 0xdeadbeef;
+
+    if (n == 0 || n == 1) {
+        return 1;
+    }
+
+    previous = factorial(n-1);
+    return n * previous;
+}
+
+int main(int argc)
+{
+    int answer = factorial(5);
+    printf("%d\n", answer);
+}
+```
+
+像第 11 行那样的,是尾调用吗?答案是:“不是”,因为函数返回后它的结果还要与 n 相乘。但是,如果你不去优化它,GCC 使用 [O2 优化][18] 的 [结果][17] 会让你震惊:它不仅将阶乘转换为一个 [无递归循环][19],而且 `factorial(5)` 调用被消除了,以一个 120(5! == 120)的 [编译时常数][20] 来替换。这就是调试优化代码有时会很难的原因。好的方面是,如果你调用这个函数,它将使用一个单个的栈帧,而不会去考虑 n 的初始值。编译器算法是非常有趣的,如果你对它感兴趣,我建议你去阅读 [构建一个优化编译器][21] 和 [ACDI][22]。
+
+但是,这里的尾调用优化到底发生了什么?**并没有**。GCC 分析了这个函数之后,在完全没有 TCO 参与的情况下直接把递归优化掉了,它比我们想象的更聪明,因为一开始这里就没有尾调用可以优化。由于这个操作足够简单和确定,这种优化才成为可能。我们给它增加一些可以引起混乱的东西(比如,getpid()),给 GCC 增加难度:
+
+递归 PID 阶乘 [下载][4]
+
+```
+#include <stdio.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+int pidFactorial(int n)
+{
+    if (1 == n) {
+        return getpid(); // tail
+    }
+
+    return n * pidFactorial(n-1) * getpid(); // not tail
+}
+
+int main(int argc)
+{
+    int answer = pidFactorial(5);
+    printf("%d\n", answer);
+}
+```
+
+优化它,unix 精灵!现在,我们有了一个常规的 [递归调用][23],并且这个函数要分配 O(n) 个栈帧来完成工作。GCC 在递归的基础上仍然 [为 getpid 使用了 TCO][24]。如果我们现在希望让这个函数变成尾调用递归的,我们需要稍微变一下:
+
+tailPidFactorial.c [下载][5]
+
+```
+#include <stdio.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+int tailPidFactorial(int n, int acc)
+{
+    if (1 == n) {
+        return acc * getpid(); // not tail
+    }
+
+    acc = (acc * getpid() * n);
+    return tailPidFactorial(n-1, acc); // tail
+}
+
+int main(int argc)
+{
+    int answer = tailPidFactorial(5, 1);
+    printf("%d\n", answer);
+}
+```
+
+现在,结果的累加是 [一个循环][25],并且我们获得了真实的 TCO。但是,在你庆祝之前,我们能说一下关于在 C 中的一般情况吗?不幸的是,虽然优秀的 C 编译器在大多数情况下都可以实现 TCO,但是,在一些情况下它们仍然做不到。例如,正如我们在 [函数开端][26] 中所看到的那样,在使用标准的 C 调用约定时,函数调用者在被调用的函数返回之后,要负责去清理栈。因此,如果函数 F 带了两个参数,它只能对带两个或者更少参数的函数做 TCO 调用。这是 TCO 的众多限制之一。Mark Probst 写了一篇非常好的论文,其中讨论了 [在 C 中正确使用尾递归][27],以及这些属于 C 栈行为的问题。他还演示了一些 [疯狂的、很酷的欺骗方法][28]。
+
+“有时候”这样的回答,在任何一种关系中都是靠不住的,因此,在 C 中你不能依赖 TCO。它是一种时有时无的独立优化,而不是像特有的尾调用那样的编程语言特性,虽然在实践中,编译器可以优化绝大部分的情况。但是,如果你必须要得到 TCO,比如要将 Scheme 编译转换成 C,你将会 [很痛苦][29]。
+
+因为 JavaScript 现在是最流行的编译目标语言之一,特有的尾调用在那里就显得尤其重要。因此,要为 ES6 点赞,它在带来特有的尾调用的同时,还提供了许多其它的重大改进。它就像 JS 程序员的圣诞节一样。
+
+这就是尾调用和编译优化的简短结论。感谢你的阅读,下次再见!
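+
+(顺手补充一个 ES6 风格的小示例。这只是一个示意性的草图:按照规范,严格模式下处于尾位置的调用允许复用栈帧,但具体是否优化取决于引擎的实现:)
+
+```
+"use strict";
+
+// 带累加器的阶乘:对 factorial 的递归调用处于尾位置
+function factorial(n, acc = 1) {
+    if (n <= 1) {
+        return acc;
+    }
+    return factorial(n - 1, acc * n); // 尾调用:返回前没有多余的工作
+}
+
+console.log(factorial(5)); // 120
+```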
+
+--------------------------------------------------------------------------------
+
+via:https://manybutfinite.com/post/tail-calls-optimization-es6/
+
+作者:[Gustavo Duarte][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:https://manybutfinite.com/post/tail-calls-optimization-es6/
+[2]:https://manybutfinite.com/code/x86-stack/tail.c
+[3]:https://manybutfinite.com/code/x86-stack/factorial.c
+[4]:https://manybutfinite.com/code/x86-stack/pidFactorial.c
+[5]:https://manybutfinite.com/code/x86-stack/tailPidFactorial.c
+[6]:https://manybutfinite.com/post/journey-to-the-stack
+[7]:https://github.com/gduarte/blog/blob/master/code/x86-stack/asm-tco.sh
+[8]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail-tco.s#L27
+[9]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail.s#L37-L39
+[10]:https://manybutfinite.com/post/recursion/
+[11]:http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html
+[12]:http://www.lua.org/pil/6.3.html
+[13]:https://people.mozilla.org/~jorendorff/es6-draft.html#sec-tail-position-calls
+[14]:https://people.mozilla.org/~jorendorff/es6-draft.html#sec-strict-mode-code
+[15]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail.c
+[16]:https://manybutfinite.com/post/recursion/
+[17]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-o2.s
+[18]:https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
+[19]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-o2.s#L16-L19
+[20]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-o2.s#L38
+[21]:http://www.amazon.com/Building-Optimizing-Compiler-Bob-Morgan-ebook/dp/B008COCE9G/
+[22]:http://www.amazon.com/Advanced-Compiler-Design-Implementation-Muchnick-ebook/dp/B003VM7GGK/
+[23]:https://github.com/gduarte/blog/blob/master/code/x86-stack/pidFactorial-o2.s#L20
+[24]:https://github.com/gduarte/blog/blob/master/code/x86-stack/pidFactorial-o2.s#L43
+[25]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tailPidFactorial-o2.s#L22-L27
+[26]:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/
+[27]:http://www.complang.tuwien.ac.at/schani/diplarb.ps
+[28]:http://www.complang.tuwien.ac.at/schani/jugglevids/index.html
+[29]:http://en.wikipedia.org/wiki/Tail_call#Through_trampolining
\ No newline at end of file
diff --git a/translated/tech/20141027 Closures Objects and the Fauna of the Heap.md b/translated/tech/20141027 Closures Objects and the Fauna of the Heap.md
new file mode 100644
index 0000000000..a075e583ca
--- /dev/null
+++ b/translated/tech/20141027 Closures Objects and the Fauna of the Heap.md
@@ -0,0 +1,234 @@
+#[闭包,对象,以及堆“族”][1]
+
+
+在上篇文章中我们提到了闭包、对象、以及栈外的其它东西。我们学习的大部分内容都是与特定编程语言无关的,但是,我主要还是专注于 JavaScript,以及一些 C。让我们以一个简单的 C 程序开始,它的功能是读取一首歌曲和乐队名字,然后将它们输出给用户:
+
+stackFolly.c [下载][2]
+
+```
+#include <stdio.h>
+#include <string.h>
+
+char *read()
+{
+    char data[64];
+    fgets(data, 64, stdin);
+    return data;
+}
+
+int main(int argc, char *argv[])
+{
+    char *song, *band;
+
+    puts("Enter song, then band:");
+    song = read();
+    band = read();
+
+    printf("\n%sby %s", song, band);
+    return 0;
+}
+```
+
+如果你运行这个程序,你会得到什么?(=> 表示程序输出):
+
+```
+./stackFolly
+=> Enter song, then band:
+The Past is a Grotesque Animal
+of Montreal
+
+=> ?ǿontreal
+=> by ?ǿontreal
+```
+
+(曾经的 C 新手会问)这里发生了什么错误?
+
+事实证明,函数的栈变量的内容仅在栈帧活动期间才是可用的,也就是说,仅在函数返回之前。在上面的返回中,被栈帧使用的内存 [被认为是可用的][3],并且在下一个函数调用中可以被覆写。
+
+下面的图展示了这种情况下究竟发生了什么。这个图带有可以点击的图像映射(image map),因此,你可以点击一个数据片断去看一下相关的 GDB 输出(GDB 命令在 [这里][4])。只要 `read()` 读取了歌曲的名字,栈将是这个样子:
+
+![](https://manybutfinite.com/img/stack/readSong.png)
+
+在这个时候,这个 `song` 变量立即指向到歌曲的名字。不幸的是,存储字符串的内存位置准备被下次调用的任意函数的栈帧重用。在这种情况下,`read()` 再次被调用,而且使用的是同一个位置的栈帧,因此,结果变成下图的样子:
+
+![](https://manybutfinite.com/img/stack/readBand.png)
+
+乐队名字被读入到相同的内存位置,并且覆盖了前面存储的歌曲名字。`band` 和 `song` 最终都准确指向到相同点。最后,我们甚至都不能得到 “of Montreal”(译者注:一个欧美乐队的名字)的正确输出。你能猜到是为什么吗?
+
+因此,即使栈很有用,但也有很重要的限制。它不能被一个函数用于去存储比该函数的运行周期还要长的数据。你必须将它交给 [堆][5],然后与火热的缓存、确定性的即时操作、以及预先算好的偏移量道别。有利的一面是,它是 [工作][6] 的:
+
+![](https://manybutfinite.com/img/stack/readIntoHeap.png)
+
+这个代价是你必须记得去 `free()` 内存,或者由一个垃圾回收机制花费一些性能来回收,垃圾回收将去找到未使用的堆对象,然后去回收它们。那就是栈和堆之间在本质上的权衡:性能 vs. 灵活性。
+
+大多数编程语言的虚拟机都有一个中间层用来做一个 C 程序员该做的一些事情。栈被用于**值类型**,比如,整数、浮点数、以及布尔型。它们作为实际的值(像上面的 `argc` 一样)被直接保存在本地变量和对象字段中。相比之下,堆被用于**引用类型**,比如,字符串和 [对象][7]。变量和字段包含一个引用到这个对象的内存地址,像上面的 `song` 和 `band`。
+
+参考这个 JavaScript 函数:
+
+```
+function fn()
+{
+    var a = 10;
+    var b = { name: 'foo', n: 10 };
+}
+```
+它可能的结果如下:
+
+![](https://manybutfinite.com/img/stack/fnFrame.png)
+
+我之所以说“可能”的原因是,特定的行为高度依赖于实现。这篇文章使用的许多图形是以一个 V8 为中心的方法,这些图形都链接到相关的源代码。在 V8 中,仅 [小整数][8] 是 [以值的方式保存][9]。因此,从现在开始,我将把字符串直接展示在对象中,以避免引起混乱,但是,请记住,正如上图所示的那样,它们在堆中是分开保存的。
+
+现在,我们来看一下闭包,它其实很简单,但是由于我们将它宣传得过于夸张,以致于有点神化了。先看一个简单的 JS 函数:
+
+```
+function add(a, b)
+{
+    var c = a + b;
+    return c;
+}
+```
+
+这个函数定义了一个词法域(lexical scope),它是一个快乐的小王国,在这里它的名字 a、b、c 是有明确意义的。它有两个参数和由函数声明的一个本地变量。程序也可以在别的地方使用相同的名字,但是在 `add` 内部它们所引用的内容是明确的。尽管词法域是一个很好的术语,它符合我们直观上的理解:毕竟,我们从字面意义上看,我们可以像词法分析器一样,把它看作在源代码中的一个文本块。
+
+在看到栈帧的操作之后,很容易想像出这些名字的具体实现。在 `add` 内部,这些名字引用到函数的每个运行实例中私有的栈的位置。这种情况在一个虚拟机中经常发生。
+
+现在,我们来嵌套两个词法域:
+
+```
+function makeGreeter()
+{
+    return function hi(name){
+        console.log('hi, ' + name);
+    }
+}
+
+var hi = makeGreeter();
+hi('dear reader'); // prints "hi, dear reader"
+```
+
+那样更有趣。函数 `hi` 在函数 `makeGreeter` 运行的时候被构建在它内部。它有它自己的词法域,`name` 在这个地方是一个栈上的参数,但是,它似乎也可以访问父级的词法域,而事实也确实如此。我们来看一下那样做的好处:
+
+```
+function makeGreeter(greeting)
+{
+    return function greet(name){
+        console.log(greeting + ', ' + name);
+    }
+}
+
+var heya = makeGreeter('HEYA');
+heya('dear reader'); // prints "HEYA, dear reader"
+```
+
+虽然有点不习惯,但是很酷。即便这样违背了我们的直觉:`greeting` 确实看起来像一个栈变量,这种类型应该在 `makeGreeter()` 返回后消失。可是因为 `greet()` 一直保持工作,出现了一些奇怪的事情。闭包登场:
+
+![](https://manybutfinite.com/img/stack/closure.png)
+
+虚拟机分配一个对象去保存被里面的 `greet()` 使用的父级变量。它就好像是 `makeGreeter` 的词法作用域在那个时刻被关闭了,一旦需要时被具体化到一个堆对象(在这个案例中,是指返回的函数的生命周期)。因此叫做闭包,当你这样去想它的时候,它的名字就有意义了。如果使用(或者捕获)了更多的父级变量,对象内容将有更多的属性,每个捕获的变量有一个。当然,为 `greet()` 生成的代码知道从对象内容中去读取问候语,而不是从栈上。
+
+这是完整的示例:
+
+```
+function makeGreeter(greetings)
+{
+    var count = 0;
+    var greeter = {};
+
+    for (var i = 0; i < greetings.length; i++) {
+        var greeting = greetings[i];
+
+        greeter[greeting] = function(name){
+            count++;
+            console.log(greeting + ', ' + name);
+        }
+    }
+
+    greeter.count = function(){return count;}
+
+    return greeter;
+}
+
+var greeter = makeGreeter(["hi", "hello","howdy"])
+greeter.hi('poppet');//prints "howdy, poppet"
+greeter.hello('darling');// prints "howdy, darling"
+greeter.count(); // returns 2
+```
+
+是的,`count()` 在工作,但是我们的 `greeter` 却卡在了 “howdy” 上。你能告诉我为什么吗?我们使用 `count` 是一条线索:尽管词法域被关闭进一个堆对象中,但是变量(或者对象属性)所带的值仍然可能被改变。下图是我们拥有的内容:
+
+![](https://manybutfinite.com/img/stack/greeterFail.png)
+
+这是一个被所有函数共享的公共内容。那就是为什么 `count` 能工作的原因。
+但是,`greeting` 也是被共享的,并且它被设置为迭代结束后的最后一个值,在这个案例中是 “howdy”。这是一个很常见的错误,避免它的简单方法是,引入一个函数调用,以闭包变量作为一个参数。在 CoffeeScript 中,[do][10] 命令提供了一个实现这种目的的简单方式。下面是对我们的 `greeter` 的一个简单的解决方案:
+
+```
+function makeGreeter(greetings)
+{
+    var count = 0;
+    var greeter = {};
+
+    greetings.forEach(function(greeting){
+        greeter[greeting] = function(name){
+            count++;
+            console.log(greeting + ', ' + name);
+        }
+    });
+
+    greeter.count = function(){return count;}
+
+    return greeter;
+}
+
+var greeter = makeGreeter(["hi", "hello", "howdy"])
+greeter.hi('poppet'); // prints "hi, poppet"
+greeter.hello('darling'); // prints "hello, darling"
+greeter.count(); // returns 2
+```
+
+它现在是工作的,并且结果将变成下图所示:
+
+![](https://manybutfinite.com/img/stack/greeter.png)
+
+这里有许多箭头!在这里我们感兴趣的特性是:在我们的代码中,我们闭包了两个嵌套的词法上下文,并且完全可以确保我们得到了两个链接到堆上的对象内容。你可以嵌套并且闭包任何词法上下文,像“俄罗斯套娃”那样,最终从本质上说你使用的是所有那些对象内容的一个链表。
+
+当然,就像受信鸽携带信息启发实现了 TCP 一样,去实现这些编程语言的特性也有很多种方法。例如,ES6 规范定义了 [词法环境][11] 作为 [环境记录][12](大致相当于在一个块内的本地标识)的组成部分,加上一个链接到外部环境的记录,这样就允许我们看到的嵌套。逻辑规则是由规范确定的(但愿如此),但是其实现取决于将它们变成比特和字节的转换。
+
+你也可以检查具体案例中由 V8 产生的汇编代码。[Vyacheslav Egorov][13] 有一篇很好的文章,它在细节中使用 V8 的 [闭包内部构件][14] 解释了这一过程。我刚开始学习 V8,因此,欢迎指教。如果你熟悉 C#,检查闭包产生的中间代码将会很受启发 - 你将看到显式定义的内容类和实例化的模拟。
+
+闭包是个强大的“家伙”。它提供了一种简单的方式,在一组函数之间共享信息的同时,对调用者隐藏这些信息。我喜欢它们真正地隐藏你的数据:不像对象字段,调用者并不能访问或者甚至是看到闭包变量。保持接口清晰而安全。
+
+但是,它们并不是“银弹”(译者注:意指极为有效的解决方案,或者寄予厚望的新技术)。有时候一个对象的拥护者和一个闭包的狂热者会无休止地争论它们的优点。就像大多数的技术讨论一样,他们通常更关注的是自尊而不是真正的权衡。不管怎样,Anton van Straaten 的这篇 [史诗级的公案][15] 解决了这个问题:
+
+> 德高望重的老师 Qc Na 和他的学生 Anton 一起散步。Anton 希望将老师引入到一个讨论中,Anton 说:“老师,我听说对象是一个非常好的东西,是这样的吗?”Qc Na 同情地看了一眼,责备他的学生说:“可怜的孩子 - 对象不过是穷人的闭包。” Anton 待他的老师走了之后,回到他的房间,专心学习闭包。他认真地阅读了完整的 “Lambda: The Ultimate…” 系列文章和它的相关资料,并使用一个基于闭包的对象系统实现了一个小的 Scheme 解释器。他学到了很多的东西,并期待告诉老师他的进步。在又一次和 Qc Na 散步时,Anton 尝试给老师留下一个好的印象,说:“老师,我仔细研究了这个问题,并且,现在理解了对象真的是穷人的闭包。”Qc Na 用他的手杖打了一下 Anton 说:“你什么时候才能明白?闭包是穷人的对象。”在那个时候,Anton 顿悟了。(出自 Anton van Straaten)
+
+探秘“栈”系列文章到此结束了。后面我将计划去写一些其它的编程语言实现的主题,像对象绑定和虚表。但是,内核的召唤太强烈了,因此,明天将发布一篇操作系统的文章。我邀请你 [订阅][16] 并 [关注我][17]。
+
+--------------------------------------------------------------------------------
+
+via:https://manybutfinite.com/post/closures-objects-heap/
+
+作者:[Gustavo Duarte][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:https://manybutfinite.com/post/closures-objects-heap/
+[2]:https://manybutfinite.com/code/x86-stack/stackFolly.c
+[3]:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/
+[4]:https://github.com/gduarte/blog/blob/master/code/x86-stack/stackFolly-gdb-commands.txt
+[5]:https://github.com/gduarte/blog/blob/master/code/x86-stack/readIntoHeap.c
+[6]:https://github.com/gduarte/blog/blob/master/code/x86-stack/readIntoHeap-gdb-output.txt#L47
+[7]:https://code.google.com/p/v8/source/browse/trunk/src/objects.h#37
+[8]:https://code.google.com/p/v8/source/browse/trunk/src/objects.h#1264
+[9]:https://code.google.com/p/v8/source/browse/trunk/src/objects.h#148
+[10]:http://coffeescript.org/#loops
+[11]:http://people.mozilla.org/~jorendorff/es6-draft.html#sec-lexical-environments
+[12]:http://people.mozilla.org/~jorendorff/es6-draft.html#sec-environment-records
+[13]:http://mrale.ph
+[14]:http://mrale.ph/blog/2012/09/23/grokking-v8-closures-for-fun.html
+[15]:http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03277.html
+[16]:https://manybutfinite.com/feed.xml
+[17]:http://twitter.com/manybutfinite
\ No newline at end of file
diff --git a/translated/tech/20141106 System Calls Make the World Go Round.md b/translated/tech/20141106 System Calls Make the World Go Round.md
new file mode 100644
index 0000000000..e2841b6d4b
--- /dev/null
+++ b/translated/tech/20141106 System Calls Make the World Go Round.md
@@ -0,0 +1,164 @@
+# 系统调用,让世界转起来!
+
+很抱歉这么直白地告诉你:一个用户应用程序在整个系统中就像一个可怜的孤儿一样无依无靠:
+
+![](https://manybutfinite.com/img/os/appInVat.png)
+
+它与外部世界的每个交流都要在内核的帮助下通过系统调用才能完成。一个应用程序要想保存一个文件、写到终端、或者打开一个 TCP 连接,内核都要参与。应用程序是被内核高度怀疑的:内核认为它到处充斥着 bugs,而最糟糕的情况是那些充满邪恶想法的天才大脑(写的恶意程序)。
+
+这些系统调用是从一个应用程序到内核的函数调用。出于安全考虑,它们使用了特定的调用机制,实际上你只是调用了内核的 API。“系统调用”这个术语指的是调用由内核提供的特定功能(比如,系统调用 open())或者是调用途径。你也可以简称为:syscall。
+
+这篇文章讲解系统调用,系统调用与调用一个库有何区别,以及在操作系统/应用程序接口上的刺探工具。如果你彻底了解了应用程序借助操作系统都发生了哪些事情,那么许多看起来无从下手的问题就会变成快速而有趣的谜题。
+
+因此,下图是一个运行着的应用程序,一个用户进程:
+
+![](https://manybutfinite.com/img/os/sandbox.png)
+
+它有一个私有的 [虚拟地址空间][2]—— 它自己的内存沙箱。整个系统都在它的地址空间中,程序的二进制文件加上它所需要的库全部都 [被映射到内存中][3]。内核自身也映射为地址空间的一部分。
+
+下面是我们程序的代码,它通过 [getpid(2)][4] 获取进程的 PID:
+
+pid.c [下载][1]
+
+```
+#include <sys/types.h>
+#include <unistd.h>
+#include <stdio.h>
+
+int main()
+{
+    pid_t p = getpid();
+    printf("%d\n", p);
+}
+```
+
+在 Linux 中,一个进程并不是一出生就知道它的 PID。要想知道它的 PID,它必须去询问内核,因此,这个询问请求也是一个系统调用:
+
+![](https://manybutfinite.com/img/os/syscallEnter.png)
+
+它的第一步是开始于调用 C 库的 [getpid()][5],它是系统调用的一个封装。当你调用 open(2)、read(2) 以及类似的功能时,你调用的也是这些封装。其实,对于大多数编程语言在这一块的原生方法,最终都是在 libc 中完成的。
+
+这些封装提供了方便的 API,使内核可以保持极简的设计。所有的内核代码运行在特权模式下,有 bugs 的内核代码行将会产生致命的后果。在用户模式下能做的任何事情都应该放在用户模式中完成,由库来提供友好的方法和想要的参数处理,像 printf(3) 这样。
+
+我们拿 web API 进行比较:内核的封装方式类似于用尽可能简单的 HTTP 接口提供服务,然后由各个语言提供带辅助方法的特定语言库。或者封装里也可能有一些缓存,这就是 getpid() 所做的:首次调用时,它真实地执行了一个系统调用,然后,它缓存了 PID,这样就可以避免后续调用时的系统调用开销。
+
+封装一旦被调用,它做的第一件事就是进入内核的“超空间”(hyperspace)。这种转换机制因处理器架构设计不同而不同。在 Intel 处理器中,参数和 [系统调用号][6] 是 [加载到寄存器中的][7],然后,运行一个 [指令][8] 将 CPU 置于 [特权模式][9] 中,并立即将控制权转移到内核中的全局系统调用 [入口][10]。如果你对这些细节感兴趣,David Drysdale 在 LWN 上有两篇非常好的文章([第一篇][11],[第二篇][12])。
+
+内核然后使用这个系统调用号作为进入 [sys_call_table][14] 的一个 [索引][13],它是一个由函数指针组成的数组,每个指针指向一个系统调用的实现。在这里,[sys_getpid][15] 被调用了:
+
+![](https://manybutfinite.com/img/os/syscallExit.png)
+
+在 Linux 中,系统调用大多数都实现为独立的 C 函数,有时候这样做 [很琐碎][16],但是通过内核优秀的设计,系统调用被严格隔离。它们是操作一般数据结构的普通代码,只不过对参数的校验近乎偏执地严格。
+
+一旦它们的工作完成,它们就会正常返回,然后,由架构相关的代码转回到用户模式,封装将在那里继续做一些后续处理工作。在我们的例子中,[getpid(2)][17] 现在缓存了由内核返回的 PID。如果内核返回了一个错误,封装还会去设置全局 errno 变量。这些小事让你知道 GNU 有多用心。
+
+如果你想使用原始的系统调用,glibc 提供了 [syscall(2)][18] 函数,它可以不通过封装来产生一个系统调用。你也可以通过它来做一个你自己的封装。这对一个 C 库来说,并不神奇,也不是保密的。
+
+这种系统调用的设计影响是很深远的。我们从一个非常有用的 [strace(1)][19] 开始,这个工具可以用来监视 Linux 进程的系统调用(在 Mac 上,可以看 [dtruss(1m)][20] 和神奇的 [dtrace][21];在 Windows 中,可以看 [sysinternals][22])。下面是对我们的 pid 程序的跟踪:
+
+```
+~/code/x86-os$ strace ./pid
+execve("./pid", ["./pid"], [/* 20 vars */]) = 0
+brk(0)                                  = 0x9aa0000
+access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
+mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7767000
+access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
+open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
+fstat64(3, {st_mode=S_IFREG|0644, st_size=18056, ...}) = 0
+mmap2(NULL, 18056, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7762000
+close(3)                                = 0
+[...snip...]
+getpid()                                = 14678
+fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 1), ...}) = 0
+mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7766000
"14678\n", 614678) = 6exit_group(6) = ? +​``` + | + +输出的每一行都显示了一个系统调用 、它的参数、以及返回值。如果你在一个循环中将 getpid(2) 运行 1000 次,你就会发现始终只有一个 getpid() 系统调用,因为,它的 PID 已经被缓存了。我们也可以看到在格式化输出字符串之后,printf(3) 调用了 write(2)。 + +strace 可以开始一个新进程,也可以附加到一个已经运行的进程上。你可以通过不同程序的系统调用学到很多的东西。例如,sshd 守护进程一天都干了什么? + +| +​``` +1234567891011121314151617181920212223242526272829 +​``` + | +​``` +~/code/x86-os$ ps ax | grep sshd12218 ? Ss 0:00 /usr/sbin/sshd -D~/code/x86-os$ sudo strace -p 12218Process 12218 attached - interrupt to quitselect(7, [3 4], NULL, NULL, NULL[ ... nothing happens ... No fun, it's just waiting for a connection using select(2) If we wait long enough, we might see new keys being generated and so on, but let's attach again, tell strace to follow forks (-f), and connect via SSH]~/code/x86-os$ sudo strace -p 12218 -f[lots of calls happen during an SSH login, only a few shown][pid 14692] read(3, "-----BEGIN RSA PRIVATE KEY-----\n"..., 1024) = 1024[pid 14692] open("/usr/share/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)[pid 14692] open("/etc/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)[pid 14692] open("/etc/ssh/ssh_host_dsa_key", O_RDONLY|O_LARGEFILE) = 3[pid 14692] open("/etc/protocols", O_RDONLY|O_CLOEXEC) = 4[pid 14692] read(4, "# Internet (IP) protocols\n#\n# Up"..., 4096) = 2933[pid 14692] open("/etc/hosts.allow", O_RDONLY) = 4[pid 14692] open("/lib/i386-linux-gnu/libnss_dns.so.2", O_RDONLY|O_CLOEXEC) = 4[pid 14692] stat64("/etc/pam.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0[pid 14692] open("/etc/pam.d/common-password", O_RDONLY|O_LARGEFILE) = 8[pid 14692] open("/etc/pam.d/other", O_RDONLY|O_LARGEFILE) = 4 +​``` + | + +看懂 SSH 的调用是块难啃的骨头,但是,如果搞懂它你就学会了跟踪。也可以用它去看一个应用程序打开的哪个文件是有用的(“这个配置是从哪里来的?”)。如果你有一个出现错误的进程,你可以跟踪它,然后去看它通过系统调用做了什么?当一些应用程序没有提供适当的错误信息而意外退出时,你可以去检查它是否是一个系统调用失败。你也可以使用过滤器,查看每个调用的次数,等等: + +| + +``` + +``` + +123456789 + +``` + +``` + + | +​``` +~/code/x86-os$ strace -T -e trace=recv curl -silent www.google.com. 
+
+我鼓励你在自己的操作系统上摆弄一下这些工具。用好它们会让你觉得自己像超人一样强大。
+
+但是,工具再有用,也不妨再深入到它的设计里去。我们可以看到,用户空间中的应用程序被严格限制在它自己的虚拟地址空间中,运行在 Ring 3(非特权模式)。一般来说,只涉及计算和内存访问的任务是不需要系统调用的。例如,像 [strlen(3)][23] 和 [memcpy(3)][24] 这样的 C 库函数并不需要内核做什么,它们完全在应用程序内部执行。
+
+一个 C 库函数的 man 手册节号(圆括号里的 2 和 3)也提供了线索:节 2 是系统调用封装,而节 3 是其它 C 库函数。不过,正如我们在 printf(3) 中所看到的,一个库函数最终可能产生一个或者多个系统调用。
+
+如果你对此感到好奇,这里是 [Linux][25]([Filippo 的列表][26])和 [Windows][27] 的全部系统调用列表,它们分别有约 310 个和约 460 个系统调用。浏览这些系统调用是非常有趣的,因为它们代表了软件在现代计算机上所能做的一切。另外,你还可能在这里找到与进程间通讯和性能相关的"宝藏"。这是一个"不懂 Unix 的人注定最终还要重新发明一个蹩脚的 Unix"的地方。(译者注:"Those who do not understand Unix are condemned to reinvent it, poorly。"这句话是 [Henry Spencer][35] 的名言,反映了 Unix 的设计哲学,它的一些理念和文化是技术发展的必然结果,看似糟糕却难以超越。)
+
+与 CPU 周期相比,许多系统调用要花很长的时间去执行任务,例如从硬盘中读取内容。在这种情况下,调用进程在底层工作完成之前会一直处于休眠状态。因为 CPU 运行得非常快,一般的程序都受 I/O 所限,生命周期的大部分时间都在系统调用中休眠等待。相反,如果你跟踪一个计算密集型任务,你经常会看到没有任何系统调用参与其中。在这种情况下,[top(1)][29] 将显示大量的 CPU 使用。
+
+系统调用本身的开销也可能成为问题。例如,固态硬盘比普通硬盘快很多,但操作系统的开销可能比 I/O 操作本身 [更加昂贵][30]。执行大量读写操作的程序,瓶颈很可能就是操作系统开销。[向量化 I/O][31] 对此有一些帮助,[文件的内存映射][32] 也是,它允许程序仅通过访问内存就能读写磁盘文件。类似的映射也存在于像显卡这样的设备上。最终,对性能的极致追求可能促使用户模式/内核模式切换被最小化甚至消除。
+
+最后,系统调用对系统安全也很有意义。一是,无论一个二进制程序多么不透明,你都可以通过观察它的系统调用来检查它的行为,这种方式可以用于检测恶意程序。例如,我们可以记录一个未知程序的系统调用模式,并对偏差进行报警,或者给程序的调用指定一个白名单,让漏洞利用变得更加困难。在这个领域,我们有大量的研究和许多工具,但还没有"杀手级"的解决方案。
+
+这就是系统调用。很抱歉这篇文章有点长,我希望它对你有用。接下来我会写一些更短的文章,可以通过 [RSS][33] 和 [Twitter][34] 关注我。这篇文章献给光荣的 Clube Atlético Mineiro。
+
+--------------------------------------------------------------------------------
+
+via:https://manybutfinite.com/post/system-calls/
+
+作者:[Gustavo Duarte][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:https://manybutfinite.com/code/x86-os/pid.c
+[2]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory
+[3]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
+[4]:http://linux.die.net/man/2/getpid
+[5]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/getpid.c;h=937b1d4e113b1cff4a5c698f83d662e130d596af;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l49
+[6]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl#L48
+[7]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l139
+[8]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l179
+[9]:https://manybutfinite.com/post/cpu-rings-privilege-and-protection
+[10]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L354-L386
+[11]:http://lwn.net/Articles/604287/
+[12]:http://lwn.net/Articles/604515/
+[13]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L422
+[14]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/syscall_64.c#L25
+[15]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L809
+[16]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L859
+[17]:http://linux.die.net/man/2/getpid
+[18]:http://linux.die.net/man/2/syscall
+[19]:http://linux.die.net/man/1/strace
+[20]:https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/dtruss.1m.html
+[21]:http://dtrace.org/blogs/brendan/2011/10/10/top-10-dtrace-scripts-for-mac-os-x/
+[22]:http://technet.microsoft.com/en-us/sysinternals/bb842062.aspx
+[23]:http://linux.die.net/man/3/strlen
+[24]:http://linux.die.net/man/3/memcpy
+[25]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl
+[26]:https://filippo.io/linux-syscall-table/
+[27]:http://j00ru.vexillium.org/ntapi/
+[28]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait/
+[29]:http://linux.die.net/man/1/top
+[30]:http://danluu.com/clwb-pcommit/
+[31]:http://en.wikipedia.org/wiki/Vectored_I/O
+[32]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
+[33]:http://feeds.feedburner.com/GustavoDuarte
+[34]:http://twitter.com/food4hackers
+[35]:https://en.wikipedia.org/wiki/Henry_Spencer
\ No newline at end of file
diff --git a/translated/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md b/translated/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md
new file mode 100644
index 0000000000..a22a94bae0
--- /dev/null
+++ b/translated/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md
@@ -0,0 +1,234 @@
+让我们做个简单的解释器(2)
+======
+
+在一本叫做《高效思考的 5 要素》的书中,作者 Burger 和 Starbird 讲述了他们观摩 Tony Plog 的故事——他是一位举世闻名的小号演奏名家,曾为一些有才华的演奏者开设大师班。这些学生一开始演奏复杂的乐曲,他们演奏得非常好。然后他们被要求演奏非常基础简单的乐曲。当他们演奏这些乐曲时,与之前所演奏的相比,听起来非常幼稚。在他们结束演奏后,老师也演奏了同样的乐曲,但是听上去非常娴熟。差别令人震惊。Tony 解释道,精通简单的乐句,才能更有掌控力地演奏复杂的部分。道理很清楚——要成为真正的名家,必须先掌握简单、基础的东西。
+
+故事中的道理显然不仅仅适用于音乐,也适用于软件开发。这个故事告诉我们,不要忽视日常工作中简单基础的概念的重要性,哪怕有时候这让人感觉是一种倒退。尽管熟练掌握一门工具或者框架非常重要,了解它们背后的原理也是极其重要的。正如 Ralph Waldo Emerson 所说:
+
+> "如果你只学习方法,你就会被方法束缚。但如果你知道原理,就可以发明自己的方法。"
+
+有鉴于此,让我们再次深入了解解释器和编译器。
+
+今天我会向你们展示一个全新的计算器,与 [第一部分][1] 相比,它可以做到:
+
+ 1. 处理输入字符串任意位置的空白符
+ 2. 识别输入字符串中的多位整数
+ 3. 做两个整数之间的减法(记住,之前它只能做整数加法)
+
+
+新版本计算器的源代码在这里,它可以做到上述的所有事情:
+```
+# 标记类型
+# EOF (end-of-file 文件末尾) 标记是用来表示所有输入都解析完成
+INTEGER, PLUS, MINUS, EOF = 'INTEGER', 'PLUS', 'MINUS', 'EOF'
+
+
+class Token(object):
+    def __init__(self, type, value):
+        # token 类型: INTEGER, PLUS, MINUS, or EOF
+        self.type = type
+        # token 值: 非负整数值, '+', '-', 或无
+        self.value = value
+
+    def __str__(self):
+        """String representation of the class instance.
+
+        Examples:
+            Token(INTEGER, 3)
+            Token(PLUS '+')
+        """
+        return 'Token({type}, {value})'.format(
+            type=self.type,
+            value=repr(self.value)
+        )
+
+    def __repr__(self):
+        return self.__str__()
+
+
+class Interpreter(object):
+    def __init__(self, text):
+        # 客户端字符输入, 例如. 
"3 + 5", "12 - 5", + self.text = text + # self.pos 是 self.text 的索引 + self.pos = 0 + # 当前标记实例 + self.current_token = None + self.current_char = self.text[self.pos] + + def error(self): + raise Exception('Error parsing input') + + def advance(self): + """Advance the 'pos' pointer and set the 'current_char' variable.""" + self.pos += 1 + if self.pos > len(self.text) - 1: + self.current_char = None # Indicates end of input + else: + self.current_char = self.text[self.pos] + + def skip_whitespace(self): + while self.current_char is not None and self.current_char.isspace(): + self.advance() + + def integer(self): + """Return a (multidigit) integer consumed from the input.""" + result = '' + while self.current_char is not None and self.current_char.isdigit(): + result += self.current_char + self.advance() + return int(result) + + def get_next_token(self): + """Lexical analyzer (also known as scanner or tokenizer) + + This method is responsible for breaking a sentence + apart into tokens. + """ + while self.current_char is not None: + + if self.current_char.isspace(): + self.skip_whitespace() + continue + + if self.current_char.isdigit(): + return Token(INTEGER, self.integer()) + + if self.current_char == '+': + self.advance() + return Token(PLUS, '+') + + if self.current_char == '-': + self.advance() + return Token(MINUS, '-') + + self.error() + + return Token(EOF, None) + + def eat(self, token_type): + # 将当前的标记类型与传入的标记类型作比较,如果他们相匹配,就 + # “eat” 掉当前的标记并将下一个标记赋给 self.current_token, + # 否则抛出一个异常 + if self.current_token.type == token_type: + self.current_token = self.get_next_token() + else: + self.error() + + def expr(self): + """Parser / Interpreter + + expr -> INTEGER PLUS INTEGER + expr -> INTEGER MINUS INTEGER + """ + # 将输入中的第一个标记设置成当前标记 + self.current_token = self.get_next_token() + + # 当前标记应该是一个整数 + left = self.current_token + self.eat(INTEGER) + + # 当前标记应该是 ‘+’ 或 ‘-’ + op = self.current_token + if op.type == PLUS: + self.eat(PLUS) + else: + self.eat(MINUS) + + # 当前标记应该是一个整数 + right = self.current_token + self.eat(INTEGER) + # 在上述函数调用后,self.current_token 就被设为 EOF 标记 + + # 这时要么是成功地找到 INTEGER PLUS INTEGER,要么是 INTEGER MINUS INTEGER + # 序列的标记,并且这个方法可以仅仅返回两个整数的加或减的结果,就能高效解释客户端的输入 + if op.type == PLUS: + result = left.value + right.value + else: + result = left.value - right.value + return result + + +def main(): + while True: + try: + # To run under Python3 replace 'raw_input' call + # with 'input' + text = raw_input('calc> ') + except EOFError: + break + if not text: + continue + interpreter = Interpreter(text) + result = interpreter.expr() + print(result) + + +if __name__ == '__main__': + main() +``` + +把上面的代码保存到 calc2.py 文件中,或者直接从 [GitHub][2] 上下载。试着运行它。看看它是不是正常工作:它应该能够处理输入中任意位置的空白符;能够接受多位的整数,并且能够对两个整数做减法和加法。 + +这是我在自己的笔记本上运行的示例: +``` +$ python calc2.py +calc> 27 + 3 +30 +calc> 27 - 7 +20 +calc> +``` + +与 [第一部分][1] 的版本相比,主要的代码改动有: + + 1. get_next_token 方法重写了很多。增加指针位置的逻辑之前是放在一个单独的方法中。 + 2. 增加了一些方法:skip_whitespace 用于忽略空白字符,integer 用于处理输入字符的多位整数。 + 3. 
expr 方法修改成了可以识别 “整数 -> 减号 -> 整数” 词组和 “整数 -> 加号 -> 整数” 词组。在成功识别相应的词组后,这个方法现在可以解释加法和减法。 + +[第一部分][1] 中你学到了两个重要的概念,叫做 **标记** 和 **词法分析**。现在我想谈一谈 **词法**, **解析**,和**解析器**。 + +你已经知道标记。但是为了让我详细的讨论标记,我需要谈一谈词法。词法是什么?**词法** 是一个标记中的字符序列。在下图中你可以看到一些关于标记的例子,还好这可以让它们之间的关系变得清晰: + +![][3] + +现在还记得我们的朋友,expr 方法吗?我之前说过,这是数学表达式实际被解释的地方。但是你要先识别这个表达式有哪些词组才能解释它,比如它是加法还是减法。expr 方法最重要的工作是:它从 get_next_token 方法中得到流,并找出标记流的结构然后解释已经识别出的词组,产生数学表达式的结果。 + +在标记流中找出结构的过程,或者换种说法,识别标记流中的词组的过程就叫 **解析**。解释器或者编译器中执行这个任务的部分就叫做 **解析器**。 + +现在你知道 expr 方法就是你的解释器的部分,**解析** 和 **解释** 都在这里发生 - expr 方法首先尝试识别(**解析**)标记流里的 “整数 -> 加法 -> 整数” 或者 “整数 -> 减法 -> 整数” 词组,成功识别后 (**解析**) 其中一个词组,这个方法就开始解释它,返回两个整数的和或差。 + +又到了练习的时间。 + +![][4] + + 1. 扩展这个计算器,让它能够计算两个整数的乘法 + 2. 扩展这个计算器,让它能够计算两个整数的除法 + 3. 修改代码,让它能够解释包含了任意数量的加法和减法的表达式,比如 “9 - 5 + 3 + 11” + + + +**检验你的理解:** + + 1. 词法是什么? + 2. 找出标记流结构的过程叫什么,或者换种说法,识别标记流中一个词组的过程叫什么? + 3. 解释器(编译器)执行解析的部分叫什么? + + +希望你喜欢今天的内容。在该系列的下一篇文章里你就能扩展计算器从而处理更多复杂的算术表达式。敬请期待。 + +-------------------------------------------------------------------------------- + +via: https://ruslanspivak.com/lsbasi-part2/ + +作者:[Ruslan Spivak][a] +译者:[BriFuture](https://github.com/BriFuture) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://ruslanspivak.com +[1]:http://ruslanspivak.com/lsbasi-part1/ (Part 1) +[2]:https://github.com/rspivak/lsbasi/blob/master/part2/calc2.py +[3]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_lexemes.png +[4]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_exercises.png diff --git a/translated/tech/20160625 Trying out LXD containers on our Ubuntu.md b/translated/tech/20160625 Trying out LXD containers on our Ubuntu.md deleted file mode 100644 index 29a19792fa..0000000000 --- a/translated/tech/20160625 Trying out LXD containers on our Ubuntu.md +++ /dev/null @@ -1,223 +0,0 @@ -在 Ubuntu 上玩玩 LXD 容器 -====== -本文的主角是容器,一种类似虚拟机但更轻量级的构造。你可以轻易地在你的 Ubuntu 桌面系统中创建一堆个容器! 
- -虚拟机会虚拟出正太电脑让你来安装客户机操作系统。**相比之下**,容器**复用**了主机 Linux 内核,只是简单地 **包容** 了我们选择的根文件系统(也就是运行时环境)。Linux 内核有很多功能可以将运行的 Linux 容器与我们的主机分割开(也就是我们的 Ubuntu 桌面)。 - -Linux 本身需要一些手工操作来直接管理他们。好在,有 LXD( 读音为 Lex-deeh),一款为我们管理 Linux 容器的服务。 - -我们将会看到如何 - - 1。在我们的 Ubuntu 桌面上配置容器, - 2。创建容器, - 3。安装一台 web 服务器, - 4。测试一下这台 web 服务器,以及 - 5。清理所有的东西。 - -### 设置 Ubuntu 容器 - -如果你安装的是 Ubuntu 16.04,那么你什么都不用做。只要安装下面所列出的一些额外的包就行了。若你安装的是 Ubuntu 14.04.x 或 Ubuntu 15.10,那么按照 [LXD 2.0:Installing and configuring LXD [2/12]][1] 来进行一些操作,然后再回来。 - -确保已经更新了包列表: -``` -sudo apt update -sudo apt upgrade -``` - -安装 **lxd** 包: -``` -sudo apt install lxd -``` - -若你安装的是 Ubuntu 16.04,那么还可以让你的容器文件以 ZFS 文件系统的格式进行存储。Ubuntu 16.04 的 Linux kernel 包含了支持 ZFS 必要的内核模块。若要让 LXD 使用 ZFS 进行存储,我们只需要安装 ZFS 工具包。没有 ZFS,容器会在主机文件系统中以单独的文件形式进行存储。通过 ZFS,我们就有了写入时拷贝等功能,可以让任务完成更快一些。 - -安装 **zfsutils-linux** 包 (若你安装的是 Ubuntu 16.04.x): -``` -sudo apt install zfsutils-linux -``` - -安装好 LXD 后,包安装脚本应该会将你加入 **lxd** 组。该组成员可以使你无需通过 sudo 就能直接使用 LXD 管理容器。根据 Linux 的尿性,**你需要先登出桌面会话然后再登陆** 才能应用 **lxd** 的组成员关系。(若你是高手,也可以通过在当前 shell 中执行 newgrp lxd 命令,就不用重登陆了)。 - -在开始使用前,LXD 需要初始化存储和网络参数。 - -运行下面命令: -``` -$ **sudo  lxd init** -Name of the storage backend to use (dir or zfs):**zfs** -Create a new ZFS pool (yes/no)?**yes** -Name of the new ZFS pool:**lxd-pool** -Would you like to use an existing block device (yes/no)?**no** -Size in GB of the new loop device (1GB minimum):**30** -Would you like LXD to be available over the network (yes/no)?**no** -Do you want to configure the LXD bridge (yes/no)?**yes** -**> You will be asked about the network bridge configuration。Accept all defaults and continue。** -Warning:Stopping lxd.service,but it can still be activated by: - lxd.socket - LXD has been successfully configured。 -$ _ -``` - -我们在一个(独立)的文件而不是块设备(即分区)中构建了一个文件系统来作为 ZFS 池,因此我们无需进行额外的分区操作。在本例中我指定了 30GB 大小,这个空间取之于根(/) 文件系统中。这个文件就是 `/var/lib/lxd/zfs.img`。 - -行了!最初的配置完成了。若有问题,或者想了解其他信息,请阅读 https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/ - -### 创建第一个容器 - -所有 LXD 的管理操作都可以通过 **lxc** 命令来进行。我们通过给 **lxc** 不同参数来管理容器。 -``` -lxc list -``` -可以列出所有已经安装的容器。很明显,这个列表现在是空的,但这表示我们的安装是没问题的。 - -``` -lxc image list -``` -列出可以用来启动容器的(已经缓存)镜像列表。很明显这个列表也是空的,但这也说明我们的安装是没问题的。 - -``` -lxc image list ubuntu: -``` -列出可以下载并启动容器的远程镜像。而且指定了是显示 Ubuntu 镜像。 - -``` -lxc image list images: -``` -列出可以用来启动容器的(已经缓存)各种发行版的镜像列表。这会列出各种发行版的镜像比如 Alpine,Debian,Gentoo,Opensuse 以及 Fedora。 - -让我们启动一个 Ubuntu 16.04 容器,并称之为 c1: -``` -$ lxc launch ubuntu:x c1 -Creating c1 -Starting c1 -$ -``` - -我们使用 launch 动作,然后选择镜像 **ubuntu:x** (x 表示 Xenial/16.04 镜像),最后我们使用名字 `c1` 作为容器的名称。 - -让我们来看看安装好的首个容器, -``` -$ lxc list - -+---------|---------|----------------------|------|------------|-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+---------|---------|----------------------|------|------------|-----------+ -| c1 | RUNNING | 10.173.82.158 (eth0) | | PERSISTENT | 0 | -+---------|---------|----------------------|------|------------|-----------+ -``` - -我们的首个容器 c1 已经运行起来了,它还有自己的 IP 地址(可以本地访问)。我们可以开始用它了! 
- -### 安装 web 服务器 - -我们可以在容器中运行命令。运行命令的动作为 **exec**。 -``` -$ lxc exec c1 -- uptime - 11:47:25 up 2 min,0 users,load average:0.07,0.05,0.04 -$ _ -``` - -在 exec 后面,我们指定容器,最后输入要在容器中运行的命令。运行时间只有 2 分钟,这是个新出炉的容器:-)。 - -命令行中的`--`跟我们 shell 的参数处理过程有关是告诉。若我们的命令没有任何参数,则完全可以省略`-`。 -``` -$ lxc exec c1 -- df -h -``` - -这是一个必须要`-`的例子,由于我们的命令使用了参数 -h。若省略了 -,会报错。 - -然我们运行容器中的 shell 来新包列表。 -``` -$ lxc exec c1 bash -root@c1:~# apt update -Ign http://archive.ubuntu.com trusty InRelease -Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB] -Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB] -.。。 -Hit http://archive.ubuntu.com trusty/universe Translation-en -Fetched 11.2 MB in 9s (1228 kB/s) -Reading package lists。.. Done -root@c1:~# **apt upgrade** -Reading package lists。.. Done -Building dependency tree -.。。 -Processing triggers for man-db (2.6.7.1-1ubuntu1) .。。 -Setting up dpkg (1.17.5ubuntu5.7) .。。 -root@c1:~# _ -``` - -我们使用 **nginx** 来做 web 服务器。nginx 在某些方面要比 Apache web 服务器更酷一些。 -``` -root@c1:~# apt install nginx -Reading package lists。.. Done -Building dependency tree -.。。 -Setting up nginx-core (1.4.6-1ubuntu3.5) .。。 -Setting up nginx (1.4.6-1ubuntu3.5) .。。 -Processing triggers for libc-bin (2.19-0ubuntu6.9) .。。 -root@c1:~# _ -``` - -让我们用浏览器访问一下这个 web 服务器。记住 IP 地址为 10.173.82.158,因此你需要在浏览器中输入这个 IP。 - -[![lxd-nginx][2]][3] - -让我们对页面文字做一些小改动。回到容器中,进入默认 HTML 页面的目录中。 -``` -root@c1:~# **cd /var/www/html/** -root@c1:/var/www/html# **ls -l** -total 2 --rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html -root@c1:/var/www/html# -``` - -使用 nano 编辑文件,然后保存 - -[![lxd-nginx-nano][4]][5] - -子后,再刷一下页面看看, - -[![lxd-nginx-modified][6]][7] - -### 清理 - -让我们清理一下这个容器,也就是删掉它。当需要的时候我们可以很方便地创建一个新容器出来。 -``` -$ **lxc list** -+---------|---------|----------------------|------|------------|-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+---------|---------|----------------------|------|------------|-----------+ -| c1 | RUNNING | 10.173.82.169 (eth0) | | PERSISTENT | 0 | -+---------|---------|----------------------|------|------------|-----------+ -$ **lxc stop c1** -$ **lxc delete c1** -$ **lxc list** -+---------|---------|----------------------|------|------------|-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+---------|---------|----------------------|------|------------|-----------+ -+---------|---------|----------------------|------|------------|-----------+ - -``` - -我们停止(关闭)这个容器,然后删掉它了。 - -本文至此就结束了。关于容器有很多玩法。而这只是配置 Ubuntu 并尝试使用容器的第一步而已。 - - --------------------------------------------------------------------------------- - -via: https://blog.simos.info/trying-out-lxd-containers-on-our-ubuntu/ - -作者:[Simos Xenitellis][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://blog.simos.info/author/simos/ -[1]:https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/ -[2]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?resize=564%2C269&ssl=1 -[3]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?ssl=1 -[4]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-nano.png?resize=750%2C424&ssl=1 -[5]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-nano.png?ssl=1 -[6]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-modified.png?resize=595%2C317&ssl=1 
-[7]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-modified.png?ssl=1
diff --git a/translated/tech/20170216 25 Free Books To Learn Linux For Free.md b/translated/tech/20170216 25 Free Books To Learn Linux For Free.md
new file mode 100644
index 0000000000..5bdf056669
--- /dev/null
+++ b/translated/tech/20170216 25 Free Books To Learn Linux For Free.md
@@ -0,0 +1,292 @@
+25 本免费学习 Linux 的书
+======
+简介:在这篇文章中,我将与你分享**免费学习 Linux** 的最佳资源。这是一个网站、在线视频课程和免费电子书的集合。
+
+**如何学习 Linux?**
+
+这可能是 Facebook Linux 用户群组中最常见的问题。
+
+"如何学习 Linux"这个看起来简单的问题,答案却并不简单。
+
+问题在于,不同的人对于"学习 Linux"有不同的理解。
+* 比如有人从来没有使用过 Linux,无论是命令行还是桌面版本,那个人可能只是想知道更多关于它的信息。
+* 比如有人使用 Windows 作为桌面,但必须在工作中使用 Linux 命令行,那个人可能对学习 Linux 命令感兴趣。
+* 比如有人已经使用过一段时间的 Linux,而且懂得一些基础,但他/她可能想要更上一层楼。
+* 比如有人只是对 Linux 特定的发行版本感兴趣。
+* 比如有人想要改进或学习 Bash 脚本,这几乎就是 Linux 命令行的代名词。
+* 比如有人想要从事 Linux 系统管理员的职业,或者想提高他/她的系统管理技能。
+
+你看,"我如何学习 Linux"的答案取决于你追求什么样的 Linux 知识。为此,我收集了大量能用来学习 Linux 的资源。
+
+这些免费的资源包括电子书、视频课程、网站等。这些资源分成几个子类别,以便当你试图学习 Linux 时可以很容易地找到你想要的东西。
+
+再者,这里没有**最好的方式来学习 Linux**。这完全取决于你喜欢怎样去学习 Linux:通过在线门户网站、下载电子书、视频课程或者其他方式。
+
+让我们看看你能如何学习 Linux。
+
+**免责声明**:这里列举的所有书都可以合法地下载。据我所知,这里提到的资源都是官方的资源。但是,如果你发现它不是,请让我知道,以便我采取适当的措施。
+
+![Best Free eBooks to learn Linux for Free][1]
+
+## 1. 对于完全新手的免费资料
+
+也许你刚刚从朋友那里或者从网上的讨论中听到了 Linux。关于 Linux 的热议让你对它很感兴趣,而你又被互联网上的大量信息所淹没,不知道在哪里寻找更多的关于 Linux 的知识。
+
+不用担心,我们中的大多数人(即使不是全部)都经历过这个阶段。
+
+### Linux 基金会的 Linux 入门课程 [Video Course]
+
+如果你对于什么是 Linux 和如何开始学习 Linux 完全没有概念的话,我建议你从 [Linux 基金会][2]在 [edX][3] 上提供的免费视频课程开始。
+可以把它看作是来自维护 Linux 的组织的官方课程。是的,它得到了 Linux 之父 [Linus Torvalds][4] 的背书。
+
+[Introduction To Linux][5]
+
+### Linux 旅程 [Online Portal]
+
+不是官方的,也许不是很受欢迎。但是这个小网站对于初学者来说是一个学习 Linux 的完美场所。
+
+该网站设计精美,并且按主题组织得很好。每读完一个片段或章节,它都会提供互动式测验。我的建议:收藏这个网站:
+[Linux Journey][6]
+
+### 5 天学会 Linux [eBook]
+
+感谢 [Linux Training Academy][7],这本出色的书对 It's FOSS 的读者完全免费。
+
+这本免费的 Linux 电子书专为完全的新手而写,它给你一个关于 Linux 的概述、常用的 Linux 命令,以及你开始学习 Linux 所需要的其他东西。
+
+你可以从下面的网页下载这本书:
+
+[Learn Linux In 5 Days][8]
+
+### 终极 Linux 新手指南 [eBook]
+
+这是一本 Linux 初学者可以免费下载的电子书。这本电子书从解释什么是 Linux 讲起,接着介绍把 Linux 用作日常桌面的更多实际用法。
+
+您可以从下面的链接下载最新版本的电子书:
+
+[The Ultimate Linux Newbie Guide][9]
+
+## 2. 初学者进阶的免费书籍
+
+本节列出的 Linux 电子书都比较"完整"。
+
+我的意思是,这些书大部分就像是专注于 Linux 每个方面的学术教科书。你可以作为一个绝对的新手阅读这些书,也可以作为一个中级的 Linux 用户来深入学习。即使你已经是专家级,你也可以把它们作为参考。
+
+### Linux 简介 [eBook]
+
+《Linux 简介》是 [Linux 文档项目][10]的免费电子书,而且它是最热门的 Linux 免费电子书之一。即使我认为其中的部分段落需要更新,它仍然是一本非常好的电子书,可以教你 Linux、Linux 的文件系统、命令行、网络和其他相关的东西。
+
+[Introduction To Linux][11]
+
+### Linux 基础 [eBook]
+
+这本由 Paul Cobbaut 编写的免费电子书教你 Linux 的历史、安装,以及你需要知道的基本 Linux 命令。你可以从下面的链接上得到这本书:
+
+[Linux Fundamentals][12]
+
+### 高级 Linux 编程 [eBook]
+
+顾名思义,这是一本面向想要或者正在开发 Linux 软件的高级用户的书。它涉及复杂的功能,比如多进程、多线程、进程间通信以及和硬件设备的交互。
+
+跟着这本书学习,会帮你开发出更快速、更可靠、更安全的,能使用 GNU/Linux 系统全部功能的项目。
+
+[Advanced Linux Programming][13]
+
+### Linux From Scratch(一种直接从源码开始、从头编译 Linux 的安装方式)[eBook]
+
+如果你认为自己对 Linux 有足够的了解,并且你是一个专业人士,那么为什么不创建一个自己的 Linux 版本呢?
Linux From Scratch(LFS)完全基于源代码,为你构建自己定制的 Linux 系统提供手把手的指导。
+
+你可以把它叫做 DIY Linux,但它确实是把你的 Linux 专业知识提升到新高度的一种方式。
+
+这个项目还有许多子项目,你可以在官网上查看和下载。
+
+[Linux From Scratch][14]
+
+## 3. 学习 Linux 命令和 Shell 脚本的免费电子书
+
+Linux 的真正强大之处在于命令行。如果你想要征服 Linux,你必须学习命令行和 Shell。
+
+事实上,如果你必须在工作中使用 Linux 终端,那么熟悉 Linux 命令行不但会帮助你完成任务,也有可能帮助你提升职业生涯(因为你会更有效率)。
+
+在本节中,我们将看到各种 Linux 命令的免费电子书。
+
+### GNU/Linux Command−Line Tools Summary [eBook]
+
+这本来自 Linux 文档项目的电子书,是接触 Linux 命令行并开始熟悉 Shell 脚本的好地方。
+
+[GNU/Linux Command−Line Tools Summary][15]
+
+### 来自 GNU 的 Bash 参考指南 [eBook]
+
+这是一本从 [GNU][16] 下载的免费电子书。就像名字暗示的那样,它涉及 Bash Shell(如果我能这么叫的话)。这本书有超过 175 页,包括了许多在 Bash 里和 Linux 有关的主题。
+
+你可以从下面的链接中获取:
+
+[Bash Reference Manual][17]
+
+### Linux 命令行 [eBook]
+
+这本由 William Shotts 编写、500 多页的免费电子书,对于那些认真学习 Linux 命令行的人来说,是一本必须拥有的书。
+
+即使你认为你已经了解 Linux,你还是会惊讶于这本书能教你很多东西。
+
+它涵盖了从初学者到高级的内容。我敢打赌,读完这本书之后你会成为一个更好的 Linux 用户。请下载这本书并且随时带着它。
+
+[The Linux Command Line][18]
+
+### Bash 入门指南 [eBook]
+
+如果你想开始学习 Bash 脚本,这本书会是很好的助手。它同样来自 Linux 文档项目——也就是前面讨论过的《Linux 简介》电子书的出品方。
+
+[Bash Guide for Beginners][19]
+
+### 高级 Bash 脚本指南 [eBook]
+
+如果你认为自己已经掌握了基本的 Bash 脚本知识,并且想把技能提高到一个新的水平,这本书就是你所需要的。这本书有超过 900 页,包含各种高级命令和示例。
+
+[Advanced Bash-Scripting Guide][20]
+
+### AWK 编程语言 [eBook]
+
+这不是最漂亮的书,但如果你真的想在脚本上研究得更深,这本老而弥坚的书会很有帮助。
+
+[The AWK Programming Language][21]
+
+### Linux 101 Hacks [eBook]
+
+这本来自 "The Geek Stuff" 的书,通过易于跟随的示例教你基本的 Linux 命令行技巧。你可以从下面的链接获取:
+
+[Linux 101 Hacks][22]
+
+## 4. 特定发行版的免费学习资料
+
+这个章节专注于特定 Linux 发行版的材料。到目前为止,我们看到的都是常规的 Linux 知识,更多地关注文件系统、命令和其他核心内容。
+
+这些书,换个角度看,可以被当作用户手册或者入门指南,来学习各种 Linux 发行版。所以如果你正在使用某个特定的 Linux 发行版,或者准备使用它,你可以参考这些资源。是的,这些书更加关注 Linux 桌面。
+
+我还想补充的是,大部分的 Linux 发行版有自己的大量 wiki 或者文档,你随时可以在网上找到它们。
+
+### Ubuntu 用户指南
+
+不用说,这本书是针对 Ubuntu 用户的。这是一个以免费电子书形式提供 Ubuntu 用户指南的独立项目,它对每个版本的 Ubuntu 都有更新。
+
+这本书被叫做用户指南,因为它由一步步的指导组成,而且目标受众是 Ubuntu 的绝对新手。所以,你会了解到 Unity 桌面,知道怎样四处浏览以及查找应用等等。
+
+如果你从来没有使用过 Ubuntu Unity,那么这是一本你必须拥有的书,因为它帮助你理解怎样在日常中使用 Ubuntu。
+
+[Ubuntu Manual][23]
+
+### 对于 Linux Mint:只要告诉我 Damnit! [eBook]
+
+一本非常基础的关于 Linux Mint 的电子书。它告诉你怎样在虚拟机中安装 Linux Mint,怎样查找软件、安装更新,以及自定义 Linux Mint 桌面。
+
+你可以在下面的链接下载电子书:
+
+[Just Tell Me Damnit!][24]
+
+### Solus Linux 用户指南 [eBook]
+
+注意!这本书曾是 Solus Linux 的官方用户指南,但我在 Solus 项目的网站上已经找不到对它的介绍了,我也不确定它是否已经过时。尽管如此,稍微了解一点 Solus Linux 也无伤大雅,不是吗?
+
+[Solus Linux User Guide][25]
+
+## 5. 面向系统管理员的免费电子书
+
+这一章节献给系统管理员——开发者背后的超级英雄。我列出了一些能真正帮助那些已经是系统管理员,或者想要成为系统管理员的人的免费电子书。我还要补充一句:请务必掌握基本的 Linux 命令行,它会让你的工作轻松很多。
+
+### Debian 管理员手册 [eBook]
+
+如果你使用 Debian Linux 作为你的服务器,这本书就是你的圣经。这本书从 Debian 的历史、安装、包管理等开始,接着覆盖 [LAMP][26]、虚拟机、存储管理和其他核心系统管理主题。
+
+[The Debian Administration's Handbook][27]
+
+### 高级 Linux 系统管理 [eBook]
+
+如果你在准备 [LPI 认证][28],这本书很理想。它涉及系统管理员必须掌握的主题,当然,熟悉 Linux 命令行是阅读它的前提。
+
+[Advanced Linux System Administration][29]
+
+### Linux 系统管理 [eBook]
+
+这是 Paul Cobbaut 编写的另一本免费电子书。这本 370 多页的书涵盖了网络、磁盘管理、用户管理、内核管理、库管理等内容。
+
+[Linux System Administration][30]
+
+### Linux 服务器 [eBook]
+
+又一本来自 [linux-training.be][31] 的 Paul Cobbaut 作品。这本书涵盖了 Web 服务器、MySQL、DHCP、DNS、Samba 和其他文件服务器。
+
+[Linux Servers][32]
+
+### Linux 网络 [eBook]
+
+网络是系统管理员的看家本领,这本由 Paul Cobbaut 编写的书是一本很好的参考资料。
+
+[Linux Networking][33]
+
+### Linux 存储 [eBook]
+
+这本由 Paul Cobbaut(对,还是他)编写的书详细讲解了 Linux 的磁盘管理,并介绍了许多其他与存储相关的技术。
+
+[Linux Storage][34]
+
+### Linux 安全 [eBook]
+
+这是这份书单里最后一本由 Paul Cobbaut 编写的书。安全是系统管理员最重要的工作之一。这本书关注文件权限、ACL、SELinux、用户和密码等内容。
+
+[Linux Security][35]
+
+## 你最喜爱的 Linux 资料?
+
+我知道这是一个免费 Linux 电子书的集合,但它还可以做得更好。
+
+如果你有其他对学习 Linux 更有帮助的资料,请务必和我们共享。请注意只共享合法的下载资料,以便我可以根据你的建议更新这篇文章而不会有任何问题。
+
+我希望你觉得这篇文章对学习 Linux 有帮助,欢迎你的反馈。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/learn-linux-for-free/
+
+作者:[Abhishek Prakash][a]
+译者:[yyyfor](https://github.com/yyyfor)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/abhishek/
+[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/02/free-ebooks-linux-800x450.png
+[2]:https://www.linuxfoundation.org/
+[3]:https://www.edx.org
+[4]:https://www.youtube.com/watch?v=eE-ovSOQK0Y
+[5]:https://www.edx.org/course/introduction-linux-linuxfoundationx-lfs101x-0
+[6]:https://linuxjourney.com/
+[7]:https://www.linuxtrainingacademy.com/
+[8]:https://courses.linuxtrainingacademy.com/itsfoss-ll5d/
+[9]:https://linuxnewbieguide.org/ulngebook/
+[10]:http://www.tldp.org/index.html
+[11]:http://tldp.org/LDP/intro-linux/intro-linux.pdf
+[12]:http://linux-training.be/linuxfun.pdf
+[13]:http://advancedlinuxprogramming.com/alp-folder/advanced-linux-programming.pdf
+[14]:http://www.linuxfromscratch.org/
+[15]:http://tldp.org/LDP/GNU-Linux-Tools-Summary/GNU-Linux-Tools-Summary.pdf
+[16]:https://www.gnu.org/home.en.html
+[17]:https://www.gnu.org/software/bash/manual/bash.pdf
+[18]:http://linuxcommand.org/tlcl.php
+[19]:http://www.tldp.org/LDP/Bash-Beginners-Guide/Bash-Beginners-Guide.pdf
+[20]:http://www.tldp.org/LDP/abs/abs-guide.pdf
+[21]:https://ia802309.us.archive.org/25/items/pdfy-MgN0H1joIoDVoIC7/The_AWK_Programming_Language.pdf
+[22]:http://www.thegeekstuff.com/linux-101-hacks-ebook/
+[23]:https://ubuntu-manual.org/
+[24]:http://downtoearthlinux.com/resources/just-tell-me-damnit/
+[25]:https://drive.google.com/file/d/0B5Ymf8oYXx-PWTVJR0pmM3daZUE/view
+[26]:https://en.wikipedia.org/wiki/LAMP_(software_bundle)
+[27]:https://debian-handbook.info/about-the-book/
+[28]:https://www.lpi.org/our-certifications/getting-started
+[29]:http://www.nongnu.org/lpi-manuals/manual/pdf/GNU-FDL-OO-LPI-201-0.1.pdf
+[30]:http://linux-training.be/linuxsys.pdf
+[31]:http://linux-training.be/
+[32]:http://linux-training.be/linuxsrv.pdf
+[33]:http://linux-training.be/linuxnet.pdf
+[34]:http://linux-training.be/linuxsto.pdf
+[35]:http://linux-training.be/linuxsec.pdf
diff --git a/translated/tech/20170526 Creating a YUM repository from ISO - Online repo.md b/translated/tech/20170526 Creating a YUM repository from ISO - Online repo.md
new file mode 100644
index 0000000000..a483766ddf
--- /dev/null
+++ b/translated/tech/20170526 Creating a YUM repository from ISO - Online repo.md
@@ -0,0 +1,116 @@
+从 ISO 和在线仓库创建一个 YUM 仓库
+======
+
+YUM 是 Centos/RHEL/Fedora 中最重要的工具之一。尽管在 Fedora 的最新版本中,它已经被 DNF 所取代,但这并不意味着它已经退出了历史舞台。它仍然被广泛用于安装 rpm 包,我们已经在前面的教程([**在这里阅读**][1])中用示例讨论了 YUM。
+
+在本教程中,我们将学习创建一个本地 YUM 仓库,首先使用系统的 ISO 镜像,然后创建一个在线 yum 仓库的镜像。
+
+### 用 DVD ISO 创建 YUM 仓库
+
+我们在本教程中使用 Centos 7 DVD,同样的过程也应该可以用在 RHEL 7 上。
+
+首先在根目录中创建一个名为 YUM 的目录:
+
+```
+$ mkdir /YUM
+```
+
+然后挂载 Centos 7 ISO:
+
+```
+$ mount -t iso9660 -o loop /home/dan/Centos-7-x86_x64-DVD.iso /mnt/iso/
+```
+
+接下来,从挂载的 ISO 中复制软件包到 /YUM 中。当所有的软件包都被复制到系统中后,我们将安装创建 YUM 所需的软件包。进入 /YUM 并安装以下 RPM 包:
+
+```
+$ rpm -ivh deltarpm
+$ rpm -ivh python-deltarpm
+$ rpm -ivh createrepo
+```
+
+安装完成后,我们将在 **/etc/yum.repos.d** 中创建一个名为 **"local.repo"** 的文件,其中包含所有的 yum 信息。
+
+```
+$ vi /etc/yum.repos.d/local.repo
+```
+
+```
+[LOCAL REPO]
+Name=Local YUM
+baseurl=file:///YUM
+gpgcheck=0
+enabled=1
+```
+
+保存并退出文件。接下来,我们将通过运行以下命令来创建仓库数据。
+
+```
+$ createrepo -v /YUM
+```
+
+创建仓库数据需要一些时间。一切完成后,请运行
+
+```
+$ yum clean all
+```
+
+清理缓存,然后运行
+
+```
+$ yum repolist
+```
+
+检查所有仓库列表。你应该在列表中看到 "local.repo"。
+
+
+### 使用在线仓库创建镜像 YUM 仓库
+
+创建在线 yum 仓库的过程与使用 ISO 镜像创建类似,只是我们将从在线仓库而不是 ISO 中获取 rpm 软件包。
+
+首先,我们需要找到一个在线仓库来获取最新的软件包。建议你找一个离你位置最近的在线 yum 仓库,以优化下载速度。我们将使用下面的镜像,你可以从 [CentOS 镜像列表][2]中选择一个离你最近的镜像。
+
+选择镜像之后,我们将使用 rsync 将该镜像与我们的系统同步,但在此之前,请确保你服务器上有足够的空间。
+
+```
+$ rsync -avz rsync://mirror.fibergrid.in/centos/7.2/os/x86_64/Packages/s/ /YUM
+```
+
+同步将需要相当长一段时间(也许一个小时),这取决于你互联网的速度。同步完成后,我们将更新我们的仓库数据。
+
+```
+$ createrepo -v /YUM
+```
+
+我们的 Yum 仓库已经可以使用了。我们可以创建一个 cron 任务,根据你的需求每天或每周定时地自动更新仓库数据。
+
+要创建一个用于同步仓库的 cron 任务,请运行:
+
+```
+$ crontab -e
+```
+
+并添加以下行
+
+```
+30 12 * * * rsync -avz rsync://mirror.centos.org/centos/7/os/x86_64/Packages/ /YUM
+```
+
+这会在每天 12:30 同步 yum 仓库。还请记住在 /etc/yum.repos.d 中创建仓库配置文件,就像我们上面所做的一样。
+
+就是这样,你现在有了自己的 yum 仓库可以使用。如果你喜欢它,请分享这篇文章,并在下面的评论栏留下你的意见/疑问。
+
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/creating-yum-repository-iso-online-repo/
+
+作者:[Shusain][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/using-yum-command-examples/
+[2]:http://mirror.centos.org/centos/
diff --git a/translated/tech/20170628 Notes on BPF and eBPF.md b/translated/tech/20170628 Notes on BPF and eBPF.md
new file mode 100644
index 0000000000..b7fad29ba1
--- /dev/null
+++ b/translated/tech/20170628 Notes on BPF and eBPF.md
@@ -0,0 +1,152 @@
+关于 BPF 和 eBPF 的笔记
+============================================================
+
+今天,在我最喜欢的 meetup 活动上,[Suchakra Sharma][6]([@tuxology][7] 在 twitter/github)做了一场非常棒的关于传统 BPF 和 Linux 中最新加入的 eBPF 的讨论,正是它促使我想去写一个 eBPF 的程序!
+
+讨论的这篇文章是 —— [BSD 包过滤器:一个新的用户级包捕获架构][8]
+
+我想在讨论的基础上写一些笔记,因为,我觉得它超级棒!
+
+这是 [幻灯片][9] 和一个 [pdf][10]。这个 pdf 非常好,结尾有一些链接,在 PDF 中你可以直接点击这些链接。
+
+### 什么是 BPF?
+
+在 BPF 出现之前,如果你想做包过滤,必须把所有的包拷贝到用户空间,然后才能在那里过滤它们(使用 "tap")。
+
+这样做存在两个问题:
+
+1. 如果你在用户空间中过滤,意味着你必须把所有的包拷贝到用户空间,而拷贝数据的代价是很昂贵的。
+
+2. 使用的过滤算法很低效
+
+问题 #1 的解决方法似乎很明显,就是将过滤逻辑移到内核中。(虽然具体实现的细节并没有明确,我们将在稍后讨论)
+
+但是,为什么过滤算法会很低效?
+
+如果你运行 `tcpdump host foo`,它实际上运行了一个相当复杂的查询,用下图的这个树来描述它:
+
+![](https://jvns.ca/images/bpf-1.png)
+
+评估这个树的代价有点高。因此,可以用一种更聪明的方式来表示它,像这样:
+
+![](https://jvns.ca/images/bpf-2.png)
+
+然后,一旦你确认了 `ether.type = IP` 和 `ip.src = foo`,你就知道这个包匹配 `host foo`,不用再检查任何其它的东西了。因此,这个数据结构(它们称为"控制流图",或者 "CFG")比起前面的树,更适合表示你真正想要执行的匹配检查程序。
+
+### 为什么 BPF 要工作在内核中
+
+这里的关键点是,包仅仅是个字节数组,而 BPF 程序就运行在这些字节数组上。它们不允许有循环(loops),但是,它们 _可以_ 有聪明的办法知道 IP 包头的长度(IPv6 和 IPv4 长度是不同的),然后基于这个长度找到 TCP 端口:
+
+```
+x = ip_header_length
+port = *(packet_start + x + port_offset) 
+
+```
+
+(实际的写法和这里看起来不太一样,但基本上是相同的)。这篇论文/幻灯片上有对这个虚拟机非常详细的描述,因此,我不打算解释它。
+
+当你运行 `tcpdump host foo` 后,发生了什么?就我的理解,应该是如下的过程。
+
+1. 转换 `host foo` 为一个高效的 DAG 规则
+
+2. 转换那个 DAG 规则为 BPF 虚拟机的一个 BPF 程序(BPF 字节码)
+
+3. 发送 BPF 字节码到 Linux 内核,由 Linux 内核验证它
+
+4. 把这个 BPF 字节码程序编译为原生(native)代码。例如 [ARM 上的 JIT 代码][1] 以及 [x86][2] 上的机器码
+
+5. 当包进入时,Linux 运行这段原生代码来决定是否过滤这个包。对于每个需要处理的包,它通常仅需运行 100 - 200 条 CPU 指令就可以完成,这个速度是非常快的!
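+
+传统 BPF 的这套机制如今依然可以直接使用:给一个套接字挂上过滤器。下面是一段最小化的示意 C 程序(假设运行在 Linux 上;为了简单,这个过滤器只有一条"接受整个包"的指令,真实的过滤器会像上面的 CFG 那样做一连串检查):
+
+```c
+#include <stdio.h>
+#include <linux/filter.h>
+#include <sys/socket.h>
+#include <netinet/in.h>
+
+int main(void)
+{
+    /* 一条指令的 BPF 程序:无条件返回"接受整个包" */
+    struct sock_filter code[] = {
+        BPF_STMT(BPF_RET | BPF_K, 0xFFFFFFFF),
+    };
+    struct sock_fprog prog = {
+        .len    = sizeof(code) / sizeof(code[0]),
+        .filter = code,
+    };
+
+    int sock = socket(AF_INET, SOCK_DGRAM, 0);
+    if (sock < 0) {
+        perror("socket");
+        return 1;
+    }
+    /* 把 BPF 程序交给内核,之后每个到达该套接字的包都会先经过它 */
+    if (setsockopt(sock, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
+        perror("setsockopt");
+        return 1;
+    }
+    printf("filter attached\n");
+    return 0;
+}
+```
+
+另外,如果想看看 libpcap 为某个过滤表达式生成的 BPF 指令,可以运行 `tcpdump -d 'host foo'`。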
+ +### 现状:eBPF + +毕竟 BPF 出现已经有很长的时间了!现在,我们可以拥有一个更加令人激动的东西,它就是 eBPF。我以前听说过 eBPF,但是,我觉得像这样把这些片断拼在一起更好(我在 4 月份的 netdev 上我写了这篇 [XDP & eBPF 的文章][11]回复) + +关于 eBPF 的一些事实是: + +* eBPF 程序有它们自己的字节码语言,并且从那个字节码语言编译成内核原生代码,就像 BPF 程序 + +* eBPF 运行在内核中 + +* eBPF 程序不能随心所欲的访问内核内存。而是通过内核提供的函数去取得一些受严格限制的所需要的内容的子集。 + +* 它们  _可以_  与用户空间的程序通过 BPF 映射进行通讯 + +* 这是 Linux 3.18 的 `bpf` 系统调用 + +### kprobes 和 eBPF + +你可以在 Linux 内核中挑选一个函数(任意函数),然后运行一个你写的每次函数被调用时都运行的程序。这样看起来是不是很神奇。 + +例如:这里有一个 [名为 disksnoop 的 BPF 程序][12],它的功能是当你开始/完成写入一个块到磁盘时,触发它执行跟踪。下图是它的代码片断: + +``` +BPF_HASH(start, struct request *); +void trace_start(struct pt_regs *ctx, struct request *req) { + // stash start timestamp by request ptr + u64 ts = bpf_ktime_get_ns(); + start.update(&req, &ts); +} +... +b.attach_kprobe(event="blk_start_request", fn_name="trace_start") +b.attach_kprobe(event="blk_mq_start_request", fn_name="trace_start") + +``` + +从根本上来说,它声明一个 BPF 哈希(它的作用是当请求开始/完成时,这个程序去触发跟踪),一个名为 `trace_start` 的函数将被编译进 BPF 字节码,然后附加 `trace_start` 到内核函数 `blk_start_request` 上。 + +这里使用的是 `bcc` 框架,它可以使你写的 Python 化的程序去生成 BPF 代码。你可以在 [https://github.com/iovisor/bcc][13] 找到它(那里有非常多的示例程序)。 + +### uprobes 和 eBPF + +因为我知道你可以附加 eBPF 程序到内核函数上,但是,我不知道你能否将 eBPF 程序附加到用户空间函数上!那会有更多令人激动的事情。这是 [在 Python 中使用一个 eBPF 程序去计数 malloc 调用的示例][14]。 + +### 附加 eBPF 程序时应该考虑的事情 + +* 带 XDP 的网卡(我之前写过关于这方面的文章) + +* tc egress/ingress (在网络栈上) + +* kprobes(任意内核函数) + +* uprobes(很明显,任意用户空间函数??像带符号的任意 C 程序) + +* probes 是为 dtrace 构建的名为 “USDT probes” 的探针(像 [这些 mysql 探针][3])。这是一个 [使用 dtrace 探针的示例程序][4] + +* [JVM][5] + +* 跟踪点 + +* seccomp / landlock 安全相关的事情 + +* 更多的事情 + +### 这个讨论超级棒 + +在幻灯片里有很多非常好的链接,并且在  iovisor 仓库里有个 [LINKS.md][15]。现在已经很晚了,但是,很快我将写我的第一个 eBPF 程序了! + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/06/28/notes-on-bpf---ebpf/ + +作者:[Julia Evans ][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/ +[1]:https://github.com/torvalds/linux/blob/v4.10/arch/arm/net/bpf_jit_32.c#L512 +[2]:https://github.com/torvalds/linux/blob/v3.18/arch/x86/net/bpf_jit_comp.c#L189 +[3]:https://dev.mysql.com/doc/refman/5.7/en/dba-dtrace-ref-query.html +[4]:https://github.com/iovisor/bcc/blob/master/examples/tracing/mysqld_query.py +[5]:http://blogs.microsoft.co.il/sasha/2016/03/31/probing-the-jvm-with-bpfbcc/ +[6]:http://suchakra.in/ +[7]:https://twitter.com/tuxology +[8]:http://www.vodun.org/papers/net-papers/van_jacobson_the_bpf_packet_filter.pdf +[9]:https://speakerdeck.com/tuxology/the-bsd-packet-filter +[10]:http://step.polymtl.ca/~suchakra/PWL-Jun28-MTL.pdf +[11]:https://jvns.ca/blog/2017/04/07/xdp-bpf-tutorial/ +[12]:https://github.com/iovisor/bcc/blob/0c8c179fc1283600887efa46fe428022efc4151b/examples/tracing/disksnoop.py +[13]:https://github.com/iovisor/bcc +[14]:https://github.com/iovisor/bcc/blob/00f662dbea87a071714913e5c7382687fef6a508/tests/lua/test_uprobes.lua +[15]:https://github.com/iovisor/bcc/blob/master/LINKS.md diff --git a/translated/tech/20170802 Creating SWAP partition using FDISK - FALLOCATE commands.md b/translated/tech/20170802 Creating SWAP partition using FDISK - FALLOCATE commands.md deleted file mode 100644 index 455ade916c..0000000000 --- a/translated/tech/20170802 Creating SWAP partition using FDISK - FALLOCATE commands.md +++ /dev/null @@ -1,117 +0,0 @@ -使用 FDISK 和 FALLOCATE 命令创建交换分区 -====== -交换分区在物理内存(RAM)被填满时用来保持内存中的内容. 
当 RAM 被耗尽, Linux 会将内存中不活动的页移动到交换空间中,从而空出内存给系统使用. 虽然如此, 但交换空间不应被认为是可以用来替代物理内存/RAM的. - -大多数情况下, 建议交换内存的大小为物理内存的1到2倍. 也就是说如果你有8GB内存, 那么交换空间大小应该介于8-16 GB. - -若系统中没有配置交换分区, 当内存耗尽后,系统可能会杀掉正在运行中哦该的进程/应哟该从而导致系统崩溃. 在本文中, 我们将学会如何为Linux系统添加交换分区,我们有两个办法: - -+ **使用 fdisk 命令** -+ **使用 fallocate 命令** - - - -### 第一个方法(使用 Fdisk 命令) - -通常, 系统的第一块硬盘会被命名为 **/dev/sda** 而其中的分区会命名为 **/dev/sda1** , **/dev/sda2**. 本文我们使用的石块有两个主分区的硬盘,两个分区分别为 /dev/sda1, /dev/sda2,而我们使用 /dev/sda3 来做交换分区. - -首先创建一个新分区, - -``` -$ fdisk /dev/sda -``` - -按 **' n'** 来创建新分区. 系统会询问你从哪个柱面开始, 直接按回车键使用默认值即可。然后系统询问你到哪个柱面结束, 这里我们输入交换分区的大小(比如1000MB). 这里我们输入 +1000M. - -![swap][2] - -现在我们创建了一个大小为 1000MB 的磁盘了。但是我们并没有设个分区的类型, 我们按下 **" t"** 然后回车来设置分区类型. - -现在我们要输入分区编号, 这里我们输入 **3**,然后输入磁盘分类id,交换分区的磁盘类型为 **82** (要显示所有可用的磁盘类型, 按下 **" l"** ) 然后再按下 " **w "** 保存磁盘分区表. - -![swap][4] - -再下一步使用 `mkswap` 命令来格式化交换分区 - -``` -$ mkswap /dev/sda3 -``` - -然后激活新建的交换分区 - -``` -$ swapon /dev/sda3 -``` - -然而我们的交换分区在重启后并不会自动挂载. 要做到永久挂载,我们需要添加内容道 `/etc/fstab` 文件中. 打开 `/etc/fstab` 文件并输入下面行 - -``` -$ vi /etc/fstab -``` - -``` -/dev/sda3 swap swap default 0 0 -``` - -保存并关闭文件. 现在每次重启后都能使用我们的交换分区了. - -### 第二种方法(使用 fallocate 命令) - -我推荐用这种方法因为这个是最简单,最快速的创建交换空间的方法了. Fallocate 是最被低估和使用最少的命令之一了. Fallocate 用于为文件预分配块/大小. - -使用 fallocate 创建交换空间, 我们首先在 ** '/'** 目录下创建一个名为 **swap_space** 的文件. 然后分配2GB道 swap_space 文件, - -``` -$ fallocate -l 2G /swap_space -``` - -我们运行下面命令来验证文件大小 - -``` -ls-lh /swap_space. -``` - -然后更改文件权限,让 `/swap_space` 更安全 - -``` -$ chmod 600 /swap_space** -``` - -这样只有 root 可以读写该文件了. 我们再来格式化交换分区(译者注:虽然这个swap_space应该是文件,但是我们把它当成是分区来挂载), - -``` -$ mkswap /swap_space -``` - -然后启用交换空间 - -``` -$ swapon -s -``` - -每次重启后都要重现挂载磁盘分区. 因此为了使之持久话,就像上面一样,我们编辑 `/etc/fstab` 并输入下面行 - -``` -/swap_space swap swap sw 0 0 -``` - -保存并退出文件. 现在我们的交换分区会一直被挂载了. 我们重启后可以在终端运行 **free -m** 来检查交换分区是否生效. - -我们的教程至此就结束了, 希望本文足够容易理解和学习. 如果有任何疑问欢迎提出. 
- - --------------------------------------------------------------------------------- - -via: http://linuxtechlab.com/create-swap-using-fdisk-fallocate/ - -作者:[Shusain][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linuxtechlab.com/author/shsuain/ -[1]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=668%2C211 -[2]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/02/fidsk.jpg?resize=668%2C211 -[3]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=620%2C157 -[4]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/02/fidsk-swap-select.jpg?resize=620%2C157 diff --git a/translated/tech/20170918 3 text editor alternatives to Emacs and Vim.md b/translated/tech/20170918 3 text editor alternatives to Emacs and Vim.md new file mode 100644 index 0000000000..136214ce33 --- /dev/null +++ b/translated/tech/20170918 3 text editor alternatives to Emacs and Vim.md @@ -0,0 +1,102 @@ +3 个替代 Emacs 的 Vim 文本编辑器 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48) + +Emacs 和 Vim 的粉丝们,在你们开始编辑器之争之前,请你们理解,这篇文章并不会把引导放在诸位最喜欢的编辑器上。我是一个 Emacs 爱好者,但是也很喜欢 Vim。 + +就是说,我已经意识到 Emacs 和 Vim 并不适合所有人。也许 [编辑器之争][1] 略显幼稚,让很多人失望了。也许他们只是想要有一个不太苛刻的现代化的编辑器。 + +如果你正寻找可以替代 Emacs 或者 Vim 的编辑器,请继续阅读下去。这里有三个可能会让你感兴趣的编辑器。 + +### Geany + + +![用 Geany 编辑一个 LaTeX 文档][3] + + +你可以用 Geany 编辑 LaTeX 文档 + +[Geany][4] 是一个古老的编辑器,当我还在过时的硬件上运行轻量级 Linux 发行版的时候,[Geany][4] 就是一个优秀的的编辑器。Geany 开始于我的 [LaTeX][5] 编辑,但是很快就成为我所有应用程序的编辑器了。 + +尽管 Geany 号称是轻量且高速的 [IDE][6](集成开发环境),但是它绝不仅仅是一个技术工具。Geany 轻便快捷,即便是在一个过时的机器或是 [运行 Linux 的 Chromebook][7] 也能轻松运行起来。无论是编辑配置文件维护任务列表、写文章、代码还是脚本,Geany 都能轻松胜任。 + +[插件][8] 给 Geany 带来一些额外的魅力。这些插件拓展了 Geany 的功能,让你编码或是处理一些标记语言变得更高效,帮助你处理文本,甚至做拼写检查。 + +### Atom + + +![使用 Atom 编辑网页][10] + + +使用 Atom 编辑网页 + +在文本编辑器领域,[Atom][11] 后来居上。很短的时间内,Atom 就获得了一批忠实的追随者。 + +Atom 的定制功能让其拥有如此的吸引力。如果有一些技术癖好,你完全可以在这个编辑器上随意设置。如果你不仅仅是忠于技术,Atom 也有 [一些主题][12] ,你可以用来更改编辑器外观。 + +千万不要低估 Atom 数以千计的 [拓展包][13]。它们能在不同功能上拓展 Atom,能根据你的爱好把 Atom 转化成合适的文本编辑器或是开发环境。Atom 不仅为程序员提供服务。它同样适用于 [作家的文本编辑器][14]。 + +### Xed + +![使用 Xed 编辑文章][16] + + +使用 Xed 编辑文章 + +可能对用户体验来说,Atom 和 Geany 略显臃肿。也许你只想要一个轻量级,一个不要太露骨也不要有太多很少使用的特性的编辑器,如此看来,[Xed][17] 正是你所期待的。 + +如果 Xed 你看着眼熟,那是因为它是 MATE 桌面环境中 Pluma 编辑器上的分支。我发现相比于 Pluma,Xed 可能速度更快一点,响应更灵敏一点--不过,因人而异吧。 + +虽然 Xed 没有那么多的功能,但也不至于太糟。它有扎实的语法高亮,略强于一般的搜索替换和拼写检查功能以及单窗口编辑多文件的选项卡式界面。 + +### 其他值得发掘的编辑器 + +我不是 KDE 痴,当我工作在 KDE 环境下时, [KDevelop][18] 就已经是我深度工作时的首选了。它很强大而且灵活,又没有过大的体积,很像 Genany。 + +虽然我还没感受过爱,但是我发誓我和我了解的几个人都在 [Brackets][19] 感受到了。它很强大,而且不得不承认它的 [拓展][20] 真的很实用。 + +被称为 “开发者的编辑器” 的 [Notepadqq][21] ,总让人联想到 [Notepad++][22]。虽然它的发展仍处于早期阶段,但至少它看起来还是很有前景的。 + +对于那些只有简单的文本编辑器需求的人来说,[Gedit][23] 和 [Kate][24] 相比是极好的。它绝不是太过原始的编辑器--它有丰富的功能去完成大型文本编辑。无论是 Gedit 还是 Kate 都缘于速度和易上手而齐名。 + +你有其他 Emacs 和 Vim 之外的挚爱编辑器么?留言下来,免费分享。 + +### 关于作者 +Scott Nesbitt;我长期使用开源软件;记录各种有趣的事物;利益。做自己力所能及的事,并不把自己当回事。你可以在网络上的这些地方找到我。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/9/3-alternatives-emacs-and-vim + +作者:[Scott Nesbitt][a] +译者:[CYLeft](https://github.com/CYLeft) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + 
+[a]:https://opensource.com/users/scottnesbitt
+[1]:https://en.wikipedia.org/wiki/Editor_war
+[2]:/file/370196
+[3]:https://opensource.com/sites/default/files/u128651/geany.png (Editing a LaTeX document with Geany)
+[4]:https://www.geany.org/
+[5]:https://opensource.com/article/17/6/introduction-latex
+[6]:https://en.wikipedia.org/wiki/Integrated_development_environment
+[7]:https://opensource.com/article/17/4/linux-chromebook-gallium-os
+[8]:http://plugins.geany.org/
+[9]:/file/370191
+[10]:https://opensource.com/sites/default/files/u128651/atom.png (Editing a webpage with Atom)
+[11]:https://atom.io
+[12]:https://atom.io/themes
+[13]:https://atom.io/packages
+[14]:https://opensource.com/article/17/5/atom-text-editor-packages-writers
+[15]:/file/370201
+[16]:https://opensource.com/sites/default/files/u128651/xed.png (Writing this article in Xed)
+[17]:https://github.com/linuxmint/xed
+[18]:https://www.kdevelop.org/
+[19]:http://brackets.io/
+[20]:https://registry.brackets.io/
+[21]:http://notepadqq.altervista.org/s/
+[22]:https://opensource.com/article/16/12/notepad-text-editor
+[23]:https://wiki.gnome.org/Apps/Gedit
+[24]:https://kate-editor.org/
diff --git a/translated/tech/20170920 Easy APT Repository - Iain R. Learmonth.md b/translated/tech/20170920 Easy APT Repository - Iain R. Learmonth.md
new file mode 100644
index 0000000000..8ebb6a2cfd
--- /dev/null
+++ b/translated/tech/20170920 Easy APT Repository - Iain R. Learmonth.md
@@ -0,0 +1,83 @@
+简化 APT 仓库
+======
+
+作为我工作的一部分,我所维护的 [PATHspider][5] 依赖于 [cURL][6] 和 [PycURL][7] 中的一些[刚刚][8][被][9]合并或仍在[等待][10]合并的功能。我需要构建一个包含这些 Debian 包的 Docker 容器,所以我需要快速构建一个 APT 仓库。
+
+Debian 仓库本质上可以看作是一个静态的网站,而且内容是经过 GPG 签名的,所以它不一定需要托管在某个可信任的地方(除非可用性对你的程序来说是至关重要的)。我在 [Netlify][11] 上托管我的博客,这是一个静态网站主机,我认为它用在这个场景里很完美。他们也[支持开源项目][12]。
+
+你可以用下面的命令安装 netlify 的 CLI 工具:
+```
+sudo apt install npm
+sudo npm install -g netlify-cli
+
+```
+
+设置仓库的基本步骤是:
+```
+mkdir repository
+cp /path/to/*.deb repository/
+cd repository
+apt-ftparchive packages . > Packages
+apt-ftparchive release . > Release
+gpg --clearsign -o InRelease Release
+netlify deploy
+
+```
+
+完成这些步骤并在 Netlify 上创建了一个新的站点之后,你也可以通过网页来管理这个站点。你可能想要做的一些事情是为你的仓库设置自定义域名,或者使用 Let's Encrypt 启用 HTTPS。(如果你打算启用 HTTPS,请确保安装了 "apt-transport-https"。)
+
+要将这个仓库添加到你的 apt 源:
+```
+gpg --export -a YOURKEYID | sudo apt-key add -
+echo "deb https://SUBDOMAIN.netlify.com/ /" | sudo tee -a /etc/apt/sources.list
+sudo apt update
+
+```
+
+你会发现这些软件包是可以安装的。注意 [APT pinning][13],因为你可能会发现,根据你的 pin 策略,这个仓库上较新的版本并不一定是首选版本。
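+
+如果希望 apt 总是优先选用你这个仓库里的包,可以加一条 pin 规则。下面是一个示意配置(SUBDOMAIN 沿用上文的占位符,优先级数值按需调整),放进 /etc/apt/preferences.d/ 下的任意文件即可:
+
+```
+Package: *
+Pin: origin "SUBDOMAIN.netlify.com"
+Pin-Priority: 900
+```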
+
+**更新**:如果你想要一个更适合日常使用的解决方案,请参考 [reprepro][14]。如果你想让最终用户把你的 apt 仓库作为第三方仓库添加到他们的系统中,请查看 [Debian wiki 上的这个页面][15],其中包含如何指导用户使用你的仓库的说明。
+
+**更新 2**:有一位评论者指出可以用 [aptly][16],它提供了更多的功能,并消除了 reprepro 的一些限制。我从来没有用过 aptly,所以不能评论具体细节,但从网站看来,这是一个很好的工具。
+
+--------------------------------------------------------------------------------
+
+via: https://iain.learmonth.me/blog/2017/2017w383/
+
+作者:[Iain R. Learmonth][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://iain.learmonth.me
+[1]:https://iain.learmonth.me/tags/netlify/
+[2]:https://iain.learmonth.me/tags/debian/
+[3]:https://iain.learmonth.me/tags/apt/
+[4]:https://iain.learmonth.me/tags/foss/
+[5]:https://pathspider.net
+[6]:http://curl.haxx.se/
+[7]:http://pycurl.io/
+[8]:https://github.com/pycurl/pycurl/pull/456
+[9]:https://github.com/pycurl/pycurl/pull/458
+[10]:https://github.com/curl/curl/pull/1847
+[11]:http://netlify.com/
+[12]:https://www.netlify.com/open-source/
+[13]:https://wiki.debian.org/AptPreferences
+[14]:https://mirrorer.alioth.debian.org/
+[15]:https://wiki.debian.org/DebianRepository/UseThirdParty
+[16]:https://www.aptly.info/
diff --git a/translated/tech/20170921 Mastering file searches on Linux.md b/translated/tech/20170921 Mastering file searches on Linux.md
new file mode 100644
index 0000000000..e964a35a64
--- /dev/null
+++ b/translated/tech/20170921 Mastering file searches on Linux.md
@@ -0,0 +1,234 @@
+精通 Linux 上的文件搜索
+======
+
+![](https://images.idgesg.net/images/article/2017/09/telescope-100736548-large.jpg)
+
+在 Linux 系统上搜索文件的方法有很多,有的命令很简单,有的则很精细。我们的目标是:缩小搜索范围,找到您正在寻找的文件,又不受其他文件的干扰。在今天的文章中,我们将研究一些对文件搜索最有用的命令和选项。我们将涉及:
+
+  * 快速搜索
+  * 更复杂的搜索条件
+  * 连接条件
+  * 反转条件
+  * 简单和详细的回应
+  * 寻找重复的文件
+
+有很多有用的命令可以搜索文件,**find** 命令可能是其中最有名的,但它不是唯一的命令,也不一定总是找到目标文件的最快方法。
+
+### 快速搜索命令:which 和 locate
+
+搜索文件的最简单的命令可能就是 **which** 和 **locate** 了,但二者都有一些局限性。**which** 命令只会在系统定义的搜索路径中查找可执行文件,通常用于识别命令。如果您好奇输入 which 时会运行什么命令,可以使用命令 which which,它会指向对应的可执行文件。
+
+```
+$ which which
+/usr/bin/which
+
+```
+
+**which** 命令会显示它找到的第一个以相应名称命名的可执行文件(也就是使用该命令时将运行的那个文件),然后停止。
+
+**locate** 命令更宽松一些,它可以查找任意数量的文件,但它也有一个限制:仅当文件名被包含在由 **updatedb** 命令准备的数据库中时才有效。这个数据库文件通常存储在 /var/lib/mlocate/mlocate.db 这样的位置,但不能用 locate 以外的命令读取。它通常由 cron 每天运行一次的 updatedb 来更新。
+
+最简单的 **find** 命令用法不需要太多条件,但至少要指定搜索的起点和搜索条件。最简单的 find 命令:按文件名搜索文件。如下所示:
+
+```
+$ find . -name runme
+./bin/runme
+
+```
+
+如上所示,从文件系统的当前位置按文件名搜索,会搜索所有子目录,除非您指定了搜索深度。
+
+### 不仅仅是文件名
+
+**find** 命令允许您按文件名以外的多种条件搜索,包括文件所有者、组、权限、大小、修改时间、缺少所有者或组,以及文件类型等。除了查找文件外,您还可以删除文件、对其进行重命名、更改所有者、更改权限,或对找到的文件运行几乎任何命令。
+
+下面两条命令会查找:在当前目录中 root 用户拥有的文件,以及非指定用户(在本例中为 shs)拥有的文件。在这个例子中,两个输出是一样的,但并不总是如此。
+
+```
+$ find . -user root -ls
+ 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./xyz -> /home/peanut/xyz
+$ find . ! -user shs -ls
+ 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./xyz -> /home/peanut/xyz
+
+```
+
+感叹号"!"字符代表"非":反转跟随其后的条件。
+
+下面的命令将查找具有特定权限的文件:
+
+```
+$ find . -perm 750 -ls
+ 397176 4 -rwxr-x--- 1 shs shs 115 Sep 14 13:52 ./ll
+ 398209 4 -rwxr-x--- 1 shs shs 117 Sep 21 08:55 ./get-updates
+ 397145 4 drwxr-x--- 2 shs shs 4096 Sep 14 15:42 ./newdir
+
+```
+
+接下来的命令显示具有 777 权限的非符号链接文件:
+
+```
+$ sudo find /home -perm 777 ! -type l -ls
+ 397132 4 -rwxrwxrwx 1 shs shs 18 Sep 15 16:06 /home/shs/bin/runme
+ 396949 4 -rwxrwxrwx 1 root root 558 Sep 21 11:21 /home/oops
+
+```
+
+以下命令将查找大小超过 1GB 的文件。请注意,我们找到了一个非常有趣的文件,它以 ELF 核心文件格式表示该系统的物理内存。
+
+```
+$ sudo find / -size +1G -ls
+ 4026531994 0 -r-------- 1 root root 140737477881856 Sep 21 11:23 /proc/kcore
+ 1444722 15332 -rw-rw-r-- 1 shs shs 1609039872 Sep 13 15:55 /home/shs/Downloads/ubuntu-17.04-desktop-amd64.iso
+
+```
+
+只要您知道 find 命令是如何描述文件类型的,就可以通过文件类型来查找文件。
+
+```
+b = 块设备文件
+c = 字符设备文件
+d = 目录
+p = 命名管道
+f = 常规文件
+l = 符号链接
+s = 套接字
+D = 门(仅限 Solaris)
+
+```
+
+在下面的命令中,我们要寻找符号链接和套接字:
+
+```
+$ find . 
-type l -ls + 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./whatever -> /home/peanut/whatever +$ find . -type s -ls + 395256 0 srwxrwxr-x 1 shs shs 0 Sep 21 08:50 ./.gnupg/S.gpg-agent + +``` + +您还可以根据 inode 数字来搜索文件: + +``` +$ find . -inum 397132 -ls + 397132 4 -rwx------ 1 shs shs 18 Sep 15 16:06 ./bin/runme + +``` + +另一种通过 inode 搜索文件的方法是使用 **debugfs** 命令。在大的文件系统上,这个命令可能比 find 快得多,您可能需要安装 icheck。 + +``` +$ sudo debugfs -R 'ncheck 397132' /dev/sda1 +debugfs 1.42.13 (17-May-2015) +Inode Pathname +397132 /home/shs/bin/runme + +``` + +在下面的命令中,我们从主目录(〜)开始,限制搜索的深度(是我们将搜索子目录的层数),并只查看在最近一天内创建或修改的文件(mtime 设置)。 + +``` +$ find ~ -maxdepth 2 -mtime -1 -ls + 407928 4 drwxr-xr-x 21 shs shs 4096 Sep 21 12:03 /home/shs + 394006 8 -rw------- 1 shs shs 5909 Sep 21 08:18 /home/shs/.bash_history + 399612 4 -rw------- 1 shs shs 53 Sep 21 08:50 /home/shs/.Xauthority + 399615 4 drwxr-xr-x 2 shs shs 4096 Sep 21 09:32 /home/shs/Downloads + +``` + +### 不仅仅是列出文件 + +使用 **-exec** 选项,在您使用 find 命令找到文件后可以以某种方式更改文件。您只需参照 -exec 选项即可运行相应的命令。 + +``` +$ find . -name runme -exec chmod 700 {} \; +$ find . -name runme -ls + 397132 4 -rwx------ 1 shs shs 18 Sep 15 16:06 ./bin/runme + +``` + +在这条命令中,“{}”代表文件名。此命令将更改当前目录和子目录中任何名为“runme”的文件的权限。 + +把您想运行的任何命令放在 -exec 选项之后,并使用类似于上面命令的语法即可。 + +### 其他搜索条件 + +如上面的例子所示,您还可以通过其他条件进行搜索:文件的修改时间、所有者、权限等。以下是一些示例。 + +#### 根据用户查找文件 +``` +$ sudo find /home -user peanut +/home/peanut +/home/peanut/.bashrc +/home/peanut/.bash_logout +/home/peanut/.profile +/home/peanut/examples.desktop + +``` + +#### 根据权限查找文件 +``` +$ sudo find /home -perm 777 +/home/shs/whatever +/home/oops + +``` + +#### 根据修改时间查找文件 +``` +$ sudo find /home -mtime +100 +/home/shs/.mozilla/firefox/krsw3giq.default/gmp-gmpopenh264/1.6/gmpopenh264.info +/home/shs/.mozilla/firefox/krsw3giq.default/gmp-gmpopenh264/1.6/libgmpopenh264.so + +``` + +#### 通过比较修改时间查找文件 + +像这样的命令可以让您找到修改时间较近的文件。 + +``` +$ sudo find /var/log -newer /var/log/syslog +/var/log/auth.log + +``` + +### 寻找重复的文件 + +如果您正在清理磁盘空间,则可能需要删除较大的重复文件。确定文件是否真正重复的最好方法是使用 **fdupes** 命令。此命令使用 md5 校验和来确定文件是否具有相同的内容。使用 -r(递归)选项,fdupes 将在一个目录下并查找具有相同校验和而被确定为内容相同的文件。 + +如果以 root 身份运行这样的命令,您可能会发现很多重复的文件,但是很多文件都是创建时被添加到主目录的启动文件。 + +``` +# fdupes -rn /home > /tmp/dups.txt +# more /tmp/dups.txt +/home/jdoe/.profile +/home/tsmith/.profile +/home/peanut/.profile +/home/rocket/.profile + +/home/jdoe/.bashrc +/home/tsmith/.bashrc +/home/peanut/.bashrc +/home/rocket/.bashrc + +``` + +同样,您可能会在 /usr 中发现很多重复的但不该删除的配置文件。所以,请谨慎利用 fdupes 的输出。 + +fdupes 命令并不总是很快,但是要记住,它正在对许多文件运行校验和来做比较,你可能会意识到它的有效性。 + +### 总结 + +有很多方法可以在 Linux 系统上查找文件。如果您可以描述清楚您正在寻找什么,上面的命令将帮助您找到目标。 + + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3227075/linux/mastering-file-searches-on-linux.html + +作者:[Sandra Henry-Stocker][a] +译者:[jessie-pang](https://github.com/jessie-pang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ diff --git a/translated/tech/20170927 Linux directory structure- -lib explained.md b/translated/tech/20170927 Linux directory structure- -lib explained.md new file mode 100644 index 0000000000..3472981eb9 --- /dev/null +++ b/translated/tech/20170927 Linux directory structure- -lib explained.md @@ -0,0 +1,77 @@ +Linux 目录结构:/lib 分析 +====== +[![linux 目录 lib][1]][1] + +我们在之前的文章中已经分析了其他重要系统目录,比如 bin、/boot、/dev、 /etc 等。可以根据自己的兴趣进入下列链接了解更多信息。本文中,让我们来看看 /lib 目录都有些什么。 + +[**目录结构分析:/bin 
文件夹**][2]
+
+[**目录结构分析:/boot 文件夹**][3]
+
+[**目录结构分析:/dev 文件夹**][4]
+
+[**目录结构分析:/etc 文件夹**][5]
+
+[**目录结构分析:/lost+found 文件夹**][6]
+
+[**目录结构分析:/home 文件夹**][7]
+
+### Linux 中,/lib 文件夹是什么?
+
+lib 文件夹是**库文件目录**,包含了所有对系统有用的库文件。简单来说,这里存放的是应用程序、命令或进程正确执行所需要的文件。命令位于 /bin 或 /sbin 目录,而它们运行所需的动态库文件就在此目录中。内核模块同样也在这里。
+
+以 pwd 命令为例,它的正确执行需要调用一些库文件。让我们用 [strace 命令][8] 探索一下 pwd 执行时都发生了什么,找出它调用的库文件。
+
+示例:如果你观察 strace 的输出,会发现 pwd 命令只进行了少量内核调用,而它的正确执行需要加载两个库文件。
+
+### Linux 中 /lib 文件夹的内部信息
+
+正如之前所说,这个文件夹包含了目标文件和一些库文件,如果能了解这个文件夹的一些重要子目录,想必是极好的。下面列举的内容基于我自己的系统,对于你的系统来说,可能会有所不同。
+
+**/lib/firmware** - 这个文件夹包含了一些硬件的固件(firmware)代码。
+
+### 硬件和固件(Firmware)之间有什么不同?
+
+为了使硬件正常运行,很多设备的软件由两部分组成:加载到硬件本身上运行的那段代码就是固件,而代表内核与硬件交流的软件,被称为驱动。这样一来,内核就可以直接与硬件交流,确保硬件完成指派给它的工作。
+
+**/lib/modprobe.d** - modprobe 命令的配置目录,用于自动处理可加载模块。
+
+**/lib/modules** - 所有可加载的内核模块都存储在这个目录下。如果你有多个内核,那这个目录下就会有多个子文件夹,每一个对应一个内核版本。
+
+**/lib/hdparm** - 包含 SATA/IDE 硬盘正确运行的参数。
+
+**/lib/udev** - Userspace /dev,是 Linux 内核设备管理器。这个文件夹包含了 udev 的相关文件/文件夹,例如存放特殊规则的 rules.d 目录。
+
+### /lib 的姊妹文件夹:/lib32 和 /lib64
+
+这两个文件夹存放特定架构的库文件。除了面向的架构不同,它们和 /lib 文件夹几乎一样。
+
+### Linux 其他的库文件夹
+
+**/usr/lib** - 所有软件的库都安装在这里,但是不包含系统默认库文件和内核库文件。
+
+**/usr/local/lib** - 存放额外的系统库文件,可供各种应用调用。
+
+**/var/lib** - 存放程序运行时会修改的动态数据,例如 rpm/dpkg 的数据库和游戏缓存。
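+
+除了 strace,还可以用 ldd 直接列出一个可执行文件链接了哪些动态库(输出因系统而异,下面的命令仅作示意):
+
+```
+# 列出 pwd 所链接的动态库(通常会看到 libc 和动态链接器)
+$ ldd /bin/pwd
+
+# 用 strace 观察 pwd 启动时加载库的相关调用
+$ strace pwd 2>&1 | grep -E 'open|mmap'
+```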
+你可以使用 alias 命令定义或显示 bash shell 别名。一旦创建了 bash shell 别名,它们将优先于外部或内部命令。本文将展示如何暂时绕过 bash 别名,以便你可以运行实际的内部或外部命令。 [![Bash Bypass Alias Linux BSD macOS Unix Command][1]][1] -## Four ways to bash bypass alias +## 4 种绕过 bash 别名的方法 -Try any one of the following ways to run a command that is shadowed by a bash shell alias. Let us [define an alias as follows][2]: +尝试以下任意一种方法来运行被 bash shell 别名绕过的命令。让我们[如下定义一个别名][2]: `alias mount='mount | column -t'` -Run it as follows: +运行如下: `mount ` -Sample outputs: +示例输出: ``` sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) @@ -31,16 +30,16 @@ binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_m lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other) ``` -### Method 1 - Use \command +### 方法1 - 使用 \command -Type the following command to temporarily bypass a bash alias called mount: +输入以下命令暂时绕过名为 mount 的 bash 别名: `\mount` -### Method 2 - Use "command" or 'command' +### 方法2 - 使用 "command" 或 'command' -Quote the mount command as follows to call actual /bin/mount: +如下引用 mount 命令调用实际的 /bin/mount: `"mount"` -OR +或者 `'mount'` ### Method 3 - Use full command path @@ -49,27 +48,27 @@ Use full binary path such as /bin/mount: `/bin/mount /bin/mount /dev/sda1 /mnt/sda` -### Method 4 - Use internal command +### 方法3 - 使用完整的命令路径 -The syntax is: +语法是: `command cmd command cmd arg1 arg2` -To override alias set in .bash_aliases such as mount: +要覆盖 .bash_aliases 中设置的别名,例如 mount: `command mount command mount /dev/sdc /mnt/pendrive/` -[The 'command' run a simple command or display][3] information about commands. It runs COMMAND with ARGS suppressing shell function lookup or aliases, or display information about the given COMMANDs. +[”command“ 运行命令或显示][3]关于命令的信息。它带参数运行命令会抑制 shell 函数查询或者别名,或者显示有关给定命令的信息。 -## A note about unalias command +## 关于 unalias 命令的说明 -To remove each alias from the list of defined aliases from the current session use unalias command: +要从当前会话的已定义别名列表中移除别名,请使用 unalias 命令: `unalias mount` -To remove all alias definitions from the current bash session: +要从当前 bash 会话中删除所有别名定义: `unalias -a` -Make sure you update your ~/.bashrc or $HOME/.bash_aliases file. 
You must remove defined aliases if you want to remove them permanently: +确保你更新你的 ~/.bashrc 或 $HOME/.bash_aliases。如果要永久删除定义的别名,则必须删除定义的别名: `vi ~/.bashrc` -OR +或者 `joe $HOME/.bash_aliases` -For more information see bash command man page online [here][4] or read it by typing the following command: +想了解更多信息,参考[这里][4]的在线手册,或者输入下面的命令查看: ``` man bash help command @@ -83,7 +82,7 @@ help alias via: https://www.cyberciti.biz/faq/bash-bypass-alias-command-on-linux-macos-unix/ 作者:[Vivek Gite][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md b/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md new file mode 100644 index 0000000000..d486a777de --- /dev/null +++ b/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md @@ -0,0 +1,59 @@ +使用一个命令重置 Linux 桌面到默认设置 +====== +![](https://www.ostechnix.com/wp-content/uploads/2017/10/Reset-Linux-Desktop-To-Default-Settings-720x340.jpg) + +前段时间,我们分享了一篇关于 [**Resetter**][1] 的文章 - 这是一个有用的软件,可以在几分钟内将 Ubuntu 重置为出厂默认设置。使用 Resetter,任何人都可以轻松地将 Ubuntu 重置为第一次安装时的状态。今天,我偶然发现了一个类似的东西。不,它不是一个应用程序,而是一个单行的命令来重置你的 Linux 桌面设置、调整和定制到默认状态。 + +### 将 Linux 桌面重置为默认设置 + +这个命令会将 Ubuntu Unity、Gnome 和 MATE 桌面重置为默认状态。我在我的 **Arch Linux MATE** 和 **Ubuntu 16.04 Unity** 上测试了这个命令。它可以在两个系统上工作。我希望它也能在其他桌面上运行。在写这篇文章的时候,我还没有安装 GNOME 的 Linux 桌面,因此我无法确认。但是,我相信它也可以在 Gnome 桌面环境中使用。 + +**一句忠告:**请注意,此命令将重置你在系统中所做的所有定制和调整,包括 Unity 启动器或 Dock 中的固定应用程序、桌面小程序、桌面指示器、系统字体、GTK主题、图标主题、显示器分辨率、键盘快捷键、窗口按钮位置、菜单和启动器行为等。 + +好的是它只会重置桌面设置。它不会影响其他不使用 dconf 的程序。此外,它不会删除你的个人资料。 + +现在,让我们开始。要将 Ubuntu Unity 或其他带有 GNOME/MATE 环境的 Linux 桌面重置,运行下面的命令: +``` +dconf reset -f / +``` + +在运行上述命令之前,这是我的 Ubuntu 16.04 LTS 桌面: + +[![][2]][3] + +如你所见,我已经改变了桌面壁纸和主题。 + +这是运行该命令后,我的 Ubuntu 16.04 LTS 桌面的样子: + +[![][2]][4] + +看见了么?现在,我的 Ubuntu 桌面已经回到了出厂设置。 + +有关 “dconf” 命令的更多详细信息,请参阅手册页。 +``` +man dconf +``` + +在重置桌面上我个人更喜欢 “Resetter” 而不是 “dconf” 命令。因为,Resetter 给用户提供了更多的选择。用户可以决定删除哪些应用程序、保留哪些应用程序、是保留现有用户帐户还是创建新用户等等。如果你懒得安装 Resetter,你可以使用这个 “dconf” 命令在几分钟内将你的 Linux 系统重置为默认设置。 + +就是这样了。希望这个有帮助。我将很快发布另一篇有用的指导。敬请关注! + +干杯! 
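+
+**附注:**如果担心重置之后找不回原来的配置,可以在运行 `dconf reset -f /` 之前先导出一份当前设置的备份,需要时再恢复。下面是一个最小示例(`dump` 和 `load` 是 dconf 自带的子命令,备份文件名只是示意):
+
+```
+# 把当前全部 dconf 设置导出到一个文本文件中作为备份
+dconf dump / > dconf-backup.txt
+
+# 需要恢复时,从备份文件重新载入这些设置
+dconf load / < dconf-backup.txt
+```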
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/reset-linux-desktop-default-settings-single-command/ + +作者:[Edwin Arteaga][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com +[1]:https://www.ostechnix.com/reset-ubuntu-factory-defaults/ +[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Before-resetting-Ubuntu-to-default-1.png () +[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/After-resetting-Ubuntu-to-default-1.png () diff --git a/translated/tech/20171008 The most important Firefox command line options.md b/translated/tech/20171008 The most important Firefox command line options.md deleted file mode 100644 index 14daac06cb..0000000000 --- a/translated/tech/20171008 The most important Firefox command line options.md +++ /dev/null @@ -1,58 +0,0 @@ -最重要的 Firefox 命令行选项 -====== -Firefox web 浏览器支持很多命令行选项,可以定制它启动的方式。 - -你可能已经接触过一些了,比如 `-P "profile name"` 指定浏览器启动加载时的配置文件,`-private` 开启一个私有会话。 - -本指南会列出对 FIrefox 来说比较重要的那些命令行选项。它并不包含所有的可选项,因为很多选项只用于特定的目的,对一般用户来说没什么价值。 - -你可以在 Firefox 开发者网站上看到[完整 ][1] 的命令行选项。需要注意的是,很多命令行选项对其他基于 Mozilla 的产品一样有效,甚至对某些第三方的程序也有效。 - -### 重要的 Firefox 命令行选项 - -![firefox command line][2] - -#### Profile 相关选项 - - + **-CreateProfile profile 名称** -- 创建新的用户配置信息,但并不立即使用它。 - + **-CreateProfile "profile 名 存放 profile 的目录"** -- 跟上面一样,只是指定了存放 profile 的目录。 - + **-ProfileManager**,或 **-P** -- 打开内置的 profile 管理器。 - + - **P "profile 名"** -- 使用 n 指定的 profile 启动 Firefox。若指定的 profile 不存在则会打开 profile 管理器。只有在没有其他 Firefox 实例运行时才有用。 - + **-no-remote** -- 与 `-P` 连用来创建新的浏览器实例。它允许你在同一时间运行多个 profile。 - -#### 浏览器相关选项 - - + **-headless** -- 以无头模式启动 Firefox。Linux 上需要 Firefox 55 才支持,Windows 和 Mac OS X 上需要 Firefox 56 才支持。 - + **-new-tab URL** -- 在 Firefox 的新标签页中加载指定 URL。 - + **-new-window URL** -- 在 Firefox 的新窗口中加载指定 URL。 - + **-private** -- 以私隐私浏览模式启动 Firefox。可以用来让 Firefox 始终运行在隐私浏览模式下。 - + **-private-window** -- 打开一个隐私窗口 - + **-private-window URL** -- 在新的隐私窗口中打开 URL。若已经打开了一个隐私浏览窗口,则在那个窗口中打开 URL。 - + **-search 单词** -- 使用 FIrefox 默认的搜索引擎进行搜索。 - + - **url URL** -- 在新的标签也或窗口中加载 URL。可以省略这里的 `-url`,而且支持打开多个 URL,每个 URL 之间用空格分离。 - - - -#### 其他 options - - + **-safe-mode** -- 在安全模式下启动 Firefox。在启动 Firefox 时一直按住 Shift 键也能进入安全模式。 - + **-devtools** -- 启动 Firefox,同时加载并打开 Developer Tools。 - + **-inspector URL** -- 使用 DOM Inspector 查看指定的 URL - + **-jsconsole** -- 启动 Firefox,同时打开 Browser Console。 - + **-tray** -- 启动 Firefox,但保持最小化。 - - - - --------------------------------------------------------------------------------- - -via: https://www.ghacks.net/2017/10/08/the-most-important-firefox-command-line-options/ - -作者:[Martin Brinkmann][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ghacks.net/author/martin/ -[1]:https://developer.mozilla.org/en-US/docs/Mozilla/Command_Line_Options diff --git a/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md b/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md new file mode 100644 index 0000000000..1c3425d008 --- /dev/null +++ b/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md @@ -0,0 +1,131 @@ +Linux 容器安全的 10 个层面 
+====== +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA) + +容器提供了打包应用程序的一种简单方法,它实现了从开发到测试到投入生产系统的无缝传递。它也有助于确保跨不同环境的连贯性,包括物理服务器、虚拟机、以及公有云或私有云。这些好处使得一些组织为了更方便地部署和管理为他们提升业务价值的应用程序,而快速部署容器。 + +企业要求存储安全,在容器中运行基础服务的任何人都会问,“容器安全吗?”以及“怎么相信运行在容器中的我的应用程序是安全的?” + +安全的容器就像是许多安全运行的进程。在你部署和运行你的容器之前,你需要去考虑整个解决方案栈~~(致校对,容器是由不同的层堆叠而成,英文原文中使用的stack,可以直译为“解决方案栈”,但是似乎没有这一习惯说法,也可以翻译为解决方案的不同层级,哪个更合适?)~~各个层面的安全。你也需要去考虑应用程序和容器整个生命周期的安全。 + +尝试从这十个关键的因素去确保容器解决方案栈不同层面、以及容器生命周期的不同阶段的安全。 + +### 1. 容器宿主机操作系统和多租户环境 + +由于容器将应用程序和它的依赖作为一个单元来处理,使得开发者构建和升级应用程序变得更加容易,并且,容器可以启用多租户技术将许多应用程序和服务部署到一台共享主机上。在一台单独的主机上以容器方式部署多个应用程序、按需启动和关闭单个容器都是很容易的。为完全实现这种打包和部署技术的优势,运营团队需要运行容器的合适环境。运营者需要一个安全的操作系统,它能够在边界上保护容器安全、从容器中保护主机内核、以及保护容器彼此之间的安全。 + +### 2. 容器内容(使用可信来源) + +容器是隔离的 Linux 进程,并且在一个共享主机的内核中,容器内使用的资源被限制在仅允许你运行着应用程序的沙箱中。保护容器的方法与保护你的 Linux 中运行的任何进程的方法是一样的。降低权限是非常重要的,也是保护容器安全的最佳实践。甚至是使用尽可能小的权限去创建容器。容器应该以一个普通用户的权限来运行,而不是 root 权限的用户。在 Linux 中可以使用多级安全,Linux 命名空间、安全强化 Linux( [SELinux][1])、[cgroups][2] 、capabilities(译者注:Linux 内核的一个安全特性,它打破了传统的普通用户与 root 用户的概念,在进程级提供更好的安全控制)、以及安全计算模式( [seccomp][3] ),Linux 的这五种安全特性可以用于保护容器的安全。 + +在谈到安全时,首先要考虑你的容器里面有什么?例如 ,有些时候,应用程序和基础设施是由很多可用的组件所构成。它们中的一些是开源的包,比如,Linux 操作系统、Apache Web 服务器、Red Hat JBoss 企业应用平台、PostgreSQL、以及Node.js。这些包的容器化版本已经可以使用了,因此,你没有必要自己去构建它们。但是,对于你从一些外部来源下载的任何代码,你需要知道这些包的原始来源,是谁构建的它,以及这些包里面是否包含恶意代码。 + +### 3. 容器注册(安全访问容器镜像) + +你的团队所构建的容器的最顶层的内容是下载的公共容器镜像,因此,管理和下载容器镜像以及内部构建镜像,与管理和下载其它类型的二进制文件的方式是相同的,这一点至关重要。许多私有的注册者支持容器镜像的保存。选择一个私有的注册者,它可以帮你将存储在它的注册中的容器镜像实现策略自动化。 + +### 4. 安全性与构建过程 + +在一个容器化环境中,构建过程是软件生命周期的一个阶段,它将所需的运行时库和应用程序代码集成到一起。管理这个构建过程对于软件栈安全来说是很关键的。遵守“一次构建,到处部署”的原则,可以确保构建过程的结果正是生产系统中需要的。保持容器的恒定不变也很重要 — 换句话说就是,不要对正在运行的容器打补丁,而是,重新构建和部署它们。 + +不论是因为你处于一个高强度监管的行业中,还是只希望简单地优化你的团队的成果,去设计你的容器镜像管理以及构建过程,可以使用容器层的优势来实现控制分离,因此,你应该去这么做: + + * 运营团队管理基础镜像 + * 设计者管理中间件、运行时、数据库、以及其它解决方案 + * 开发者专注于应用程序层面,并且只写代码 + + + +最后,标记好你的定制构建容器,这样可以确保在构建和部署时不会搞混乱。 + +### 5. 控制好在同一个集群内部署应用 + +如果是在构建过程中出现的任何问题,或者在镜像被部署之后发现的任何漏洞,那么,请在基于策略的、自动化工具上添加另外的安全层。 + +我们来看一下,一个应用程序的构建使用了三个容器镜像层:内核、中间件、以及应用程序。如果在内核镜像中发现了问题,那么只能重新构建镜像。一旦构建完成,镜像就会被发布到容器平台注册中。这个平台可以自动检测到发生变化的镜像。对于基于这个镜像的其它构建将被触发一个预定义的动作,平台将自己重新构建应用镜像,合并进修复库。 + +在基于策略的、自动化工具上添加另外的安全层。 + +一旦构建完成,镜像将被发布到容器平台的内部注册中。在它的内部注册中,会立即检测到镜像发生变化,应用程序在这里将会被触发一个预定义的动作,自动部署更新镜像,确保运行在生产系统中的代码总是使用更新后的最新的镜像。所有的这些功能协同工作,将安全功能集成到你的持续集成和持续部署(CI/CD)过程和管道中。 + +### 6. 容器编配:保护容器平台 + +一旦构建完成,镜像被发布到容器平台的内部注册中。内部注册会立即检测到镜像的变化,应用程序在这里会被触发一个预定义的动作,自己部署更新,确保运行在生产系统中的代码总是使用更新后的最新的镜像。所有的功能协同工作,将安全功能集成到你的持续集成和持续部署(CI/CD)过程和管道中。~~(致校对:这一段和上一段是重复的,请确认,应该是选题工具造成的重复!!)~~ + +当然了,应用程序很少会部署在单一的容器中。甚至,单个应用程序一般情况下都有一个前端、一个后端、以及一个数据库。而在容器中以微服务模式部署的应用程序,意味着应用程序将部署在多个容器中,有时它们在同一台宿主机上,有时它们是分布在多个宿主机或者节点上,如下面的图所示:~~(致校对:图去哪里了???应该是选题问题的问题!)~~ + +在大规模的容器部署时,你应该考虑: + + * 哪个容器应该被部署在哪个宿主机上? + * 那个宿主机应该有什么样的性能? + * 哪个容器需要访问其它容器?它们之间如何发现彼此? + * 你如何控制和管理对共享资源的访问,像网络和存储? + * 如何监视容器健康状况? + * 如何去自动扩展性能以满足应用程序的需要? + * 如何在满足安全需求的同时启用开发者的自助服务? + + + +考虑到开发者和运营者的能力,提供基于角色的访问控制是容器平台的关键要素。例如,编配管理服务器是中心访问点,应该接受最高级别的安全检查。APIs 是规模化的自动容器平台管理的关键,可以用于为 pods、服务、以及复制控制器去验证和配置数据;在入站请求上执行项目验证;以及调用其它主要系统组件上的触发器。 + +### 7. 
网络隔离 + +在容器中部署现代微服务应用,经常意味着跨多个节点在多个容器上部署。考虑到网络防御,你需要一种在一个集群中的应用之间的相互隔离的方法。一个典型的公有云容器服务,像 Google 容器引擎(GKE)、Azure 容器服务、或者 Amazon Web 服务(AWS)容器服务,是单租户服务。他们让你在你加入的虚拟机集群上运行你的容器。对于多租户容器的安全,你需要容器平台为你启用一个单一集群,并且分割通讯以隔离不同的用户、团队、应用、以及在这个集群中的环境。 + +使用网络命名空间,容器内的每个集合(即大家熟知的“pod”)得到它自己的 IP 和绑定的端口范围,以此来从一个节点上隔离每个 pod 网络。除使用下文所述的选项之外,~~(选项在哪里???,请查看原文,是否是选题丢失???)~~默认情况下,来自不同命名空间(项目)的Pods 并不能发送或者接收其它 Pods 上的包和不同项目的服务。你可以使用这些特性在同一个集群内,去隔离开发者环境、测试环境、以及生产环境。但是,这样会导致 IP 地址和端口数量的激增,使得网络管理更加复杂。另外,容器是被反复设计的,你应该在处理这种复杂性的工具上进行投入。在容器平台上比较受欢迎的工具是使用 [软件定义网络][4] (SDN) 去提供一个定义的网络集群,它允许跨不同集群的容器进行通讯。 + +### 8. 存储 + +容器即可被用于无状态应用,也可被用于有状态应用。保护附加存储是保护有状态服务的一个关键要素。容器平台对多个受欢迎的存储提供了插件,包括网络文件系统(NFS)、AWS 弹性块存储(EBS)、GCE 持久磁盘、GlusterFS、iSCSI、 RADOS(Ceph)、Cinder、等等。 + +一个持久卷(PV)可以通过资源提供者支持的任何方式装载到一个主机上。提供者有不同的性能,而每个 PV 的访问模式是设置为被特定的卷支持的特定模式。例如,NFS 能够支持多路客户端同时读/写,但是,一个特定的 NFS 的 PV 可以在服务器上被发布为只读模式。每个 PV 得到它自己的一组反应特定 PV 性能的访问模式的描述,比如,ReadWriteOnce、ReadOnlyMany、以及 ReadWriteMany。 + +### 9. API 管理、终端安全、以及单点登陆(SSO) + +保护你的应用包括管理应用、以及 API 的认证和授权。 + +Web SSO 能力是现代应用程序的一个关键部分。在构建它们的应用时,容器平台带来了开发者可以使用的多种容器化服务。 + +APIs 是微服务构成的应用程序的关键所在。这些应用程序有多个独立的 API 服务,这导致了终端服务数量的激增,它就需要额外的管理工具。推荐使用 API 管理工具。所有的 API 平台应该提供多种 API 认证和安全所需要的标准选项,这些选项既可以单独使用,也可以组合使用,以用于发布证书或者控制访问。 + +保护你的应用包括管理应用以及 API 的认证和授权。~~(致校对:这一句话和本节的第一句话重复)~~ + +这些选项包括标准的 API keys、应用 ID 和密钥对、 以及 OAuth 2.0。 + +### 10. 在一个联合集群中的角色和访问管理 + +这些选项包括标准的 API keys、应用 ID 和密钥对、 以及 OAuth 2.0。~~(致校对:这一句和上一节最后一句重复)~~ + +在 2016 年 7 月份,Kubernetes 1.3 引入了 [Kubernetes 联合集群][5]。这是一个令人兴奋的新特性之一,它是在 Kubernetes 上游、当前的 Kubernetes 1.6 beta 中引用的。联合是用于部署和访问跨多集群运行在公有云或企业数据中心的应用程序服务的。多个集群能够用于去实现应用程序的高可用性,应用程序可以跨多个可用区域、或者去启用部署公共管理、或者跨不同的供应商进行迁移,比如,AWS、Google Cloud、以及 Azure。 + +当管理联合集群时,你必须确保你的编配工具能够提供,你所需要的跨不同部署平台的实例的安全性。一般来说,认证和授权是很关键的 — 不论你的应用程序运行在什么地方,将数据安全可靠地传递给它们,以及管理跨集群的多租户应用程序。Kubernetes 扩展了联合集群,包括对联合的秘密数据、联合的命名空间、以及 Ingress objects 的支持。 + +### 选择一个容器平台 + +当然,它并不仅关乎安全。你需要提供一个你的开发者团队和运营团队有相关经验的容器平台。他们需要一个安全的、企业级的基于容器的应用平台,它能够同时满足开发者和运营者的需要,而且还能够提高操作效率和基础设施利用率。 + +想从 Daniel 在 [欧盟开源峰会][7] 上的 [容器安全的十个层面][6] 的演讲中学习更多知识吗?这个峰会将于10 月 23 - 26 日在 Prague 举行。 + +### 关于作者 +Daniel Oh;Microservives;Agile;Devops;Java Ee;Container;Openshift;Jboss;Evangelism + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/10/10-layers-container-security + +作者:[Daniel Oh][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/daniel-oh +[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux +[2]:https://en.wikipedia.org/wiki/Cgroups +[3]:https://en.wikipedia.org/wiki/Seccomp +[4]:https://en.wikipedia.org/wiki/Software-defined_networking +[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/ +[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223 +[7]:http://events.linuxfoundation.org/events/open-source-summit-europe diff --git a/translated/tech/20171011 What is a firewall.md b/translated/tech/20171011 What is a firewall.md deleted file mode 100644 index cdbf18a5c9..0000000000 --- a/translated/tech/20171011 What is a firewall.md +++ /dev/null @@ -1,78 +0,0 @@ -什么是防火墙? -===== -基于网络的防火墙已经在美国企业无处不在,因为它们证实了抵御日益增长的威胁的防御能力。 - -通过网络测试公司 NSS 实验室最近的一项研究发现高达 80% 的美国大型企业运行着下一代防火墙。研究公司 IDC 评估防火墙和相关的统一威胁管理市场营业额在 2015 是 76 亿美元,预计到 2020 年底将达到 127 亿美元。 - -**如果你想提升,这里是[What to consider when deploying a next generation firewall][1]** - -### 什么是防火墙? 
- -防火墙充当一个监控流量的边界防御工具,要么允许它要么屏蔽它。 多年来,防火墙的功能不断增强,现在大多数防火墙不仅可以阻止已知的一组威胁,并执行高级访问控制列表策略,还可以深入检查各个包的流量和测试包,以确定它们是否安全。大多数防火墙被部署为网络硬件,用于处理流量和允许终端用户配置和管理系统的软件。越来越多的软件版防火墙部署到高度虚拟机环境中执行策略在被隔离的网络或 IaaS 公有云中。 - -随着防火墙技术的进步在过去十年中创造了新的防火墙部署选项,所以现在对于部署防火墙的最终用户来说,有一些选择。这些选择包括: - -### 有状态的防火墙 - 当首次创造防火墙时,它们是无状态的,这意味着流量通过硬件,在检查被监视的每个网络包流量的过程中,并单独屏蔽或允许它。从1990年代中后期开始,防火墙的第一个主要进展是引入状态。有状态防火墙在更全面的上下文中检查流量,同时考虑到网络连接的工作状态和特性,以提供更全面的防火墙。例如,维持这状态的防火墙允许某些流量访问某些用户,同时阻塞其他用户的同一流量。 - -### 下一代防火墙 - 多年来,防火墙增加了多种新的特性,包括深度包检查、入侵检测以及对加密流量的预防和检查。下一代防火墙(NGFWs)是指有许多先进的功能集成到防火墙的防火墙。 - -### 基于代理的防火墙 - -这些防火墙充当请求数据的最终用户和数据源之间的网关。在传递给最终用户之前,所有的流量都通过这个代理过滤。这通过掩饰信息的原始请求者的身份来保护客户端不受威胁。 - -### Web 应用防火墙 - -这些防火墙位于特定应用程序的前面,而不是在更广阔的网络的入口或则出口上。而基于代理的防火墙通常被认为是保护终端客户,WAFs 通常被认为是保护应用服务器。 - -### 防火墙硬件 - -防火墙硬件通常是一个简单的服务器,它可以充当路由器来过滤流量和运行防火墙软件。这些设备放置在企业网络的边缘,路由器和 Internet 服务提供商的连接点之间。通常企业可能在整个数据中心部署十几个物理防火墙。 用户需要根据用户基数的大小和 Internet 连接的速率来确定防火墙需要支持的吞吐量容量。 - -### 防火墙软件 - -通常,终端用户部署多个防火墙硬件端和一个中央防火墙软件系统来管理部署。 这个中心系统是配置策略和特性的地方,在那里可以进行分析,并可以对威胁作出响应。 - -### 下一代防火墙 - -多年来,防火墙增加了多种新的特性,包括深度包检查、入侵检测以及对加密流量的预防和检查。下一代防火墙(NGFWs)是指集成了这些先进功能的防火墙,这里描述的是它们中的一些。 - -### 有状态的检测 - -阻止已知不需要的流量,这是基本的防火墙功能。 - -### 抵御病毒 - -在网络流量中搜索已知病毒和漏洞,这个功能有助于防火墙接收最新威胁的更新,并不断更新以保护它们。 - -### 入侵防御系统 - -这类安全产品可以部署为一个独立的产品,但 IPS 功能正逐步融入 NGFWs。 虽然基本的防火墙技术识别和阻止某些类型的网络流量,但 IPS 使用更多的细粒度安全措施,如签名跟踪和异常检测,以防止不必要的威胁进入公司网络。 IPS 系统已经取代了以前这一技术的版本,入侵检测系统(IDS)的重点是识别威胁而不是遏制它们。 - -### 深度包检测(DPI) - -DPI 可部分或用于与 IPS 的结合,但其仍然成为一个 NGFWs 的重要特征,因为它提供细粒度分析的能力,具体到流量包和流量数据的头文件。DPI 还可以用来监测出站流量,以确保敏感信息不会离开公司网络,这种技术称为数据丢失预防(DLP)。 - -### SSL 检测 - -安全套接字层(SSL)检测是一个检测加密流量来测试威胁的方法。随着越来越多的流量进行加密,SSL 检测成为 DPI 技术,NGFWs 正在实施的一个重要组成部分。SSL 检测作为一个缓冲区,它在送到最终目的地之前解码流量以检测它。 - -### 沙盒 - -这个是被卷入 NGFWs 中的一个较新的特性,它指防火墙接收某些未知的流量或者代码,并在一个测试环境运行,以确定它是否是邪恶的能力。 - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3230457/lan-wan/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html - -作者:[Brandon Butler][a] -译者:[zjon](https://github.com/zjon) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.networkworld.com/author/Brandon-Butler/ -[1]:https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html - - diff --git a/translated/tech/20171016 Using the Linux find command with caution.md b/translated/tech/20171016 Using the Linux find command with caution.md new file mode 100644 index 0000000000..a72ff48c11 --- /dev/null +++ b/translated/tech/20171016 Using the Linux find command with caution.md @@ -0,0 +1,93 @@ +谨慎使用 Linux find 命令 +====== +![](https://images.idgesg.net/images/article/2017/10/caution-sign-100738884-large.jpg) +最近有朋友提醒我可以添加一个有用的选项来更加谨慎地运行 find 命令,它是 -ok。除了一个重要的区别之外,它的工作方式与 -exec 相似,它使 find 命令在执行指定的操作之前请求权限。 + +这有一个例子。如果你使用 find 命令查找文件并删除它们,则可以运行下面的命令: +``` +$ find . -name runme -exec rm {} \; + +``` + +在当前目录及其子目录中中任何名为 “runme” 的文件都将被立即删除 - 当然,你要有权删除它们。改用 -ok 选项,你会看到类似这样的东西,find 命令将在删除文件之前会请求权限。回答 **y** 代表 “yes” 将允许 find 命令继续并逐个删除文件。 +``` +$ find . -name runme -ok rm {} \; +< rm ... ./bin/runme > ? + +``` + +### -exedir 命令也是一个选项 + +另一个可以用来修改 find 命令行为并可能使其更可控的选项是 -execdir 。其中 -exec 运行指定的任何命令,-execdir 从文件所在的目录运行指定的命令,而不是在运行 find 命令的目录运行。这是一个它的例子: +``` +$ pwd +/home/shs +$ find . -name runme -execdir pwd \; +/home/shs/bin + +``` +``` +$ find . 
-name runme -execdir ls \; +ls rm runme + +``` + +到现在为止还挺好。但要记住的是,-execdir 也会在匹配文件的目录中执行命令。如果运行下面的命令,并且目录包含一个名为 “ls” 的文件,那么即使该文件没有_执行权限,它也将运行该文件。使用 **-exec** 或 **-execdir** 类似于通过 source 来运行命令。 +``` +$ find . -name runme -execdir ls \; +Running the /home/shs/bin/ls file + +``` +``` +$ find . -name runme -execdir rm {} \; +This is an imposter rm command + +``` +``` +$ ls -l bin +total 12 +-r-x------ 1 shs shs 25 Oct 13 18:12 ls +-rwxr-x--- 1 shs shs 36 Oct 13 18:29 rm +-rw-rw-r-- 1 shs shs 28 Oct 13 18:55 runme + +``` +``` +$ cat bin/ls +echo Running the $0 file +$ cat bin/rm +echo This is an imposter rm command + +``` + +### -okdir 选项也会请求权限 + +要更谨慎,可以使用 **-okdir** 选项。类似 **-ok**,该选项将请求权限来运行该命令。 +``` +$ find . -name runme -okdir rm {} \; +< rm ... ./bin/runme > ? + +``` + +你也可以小心地指定你想用的命令的完整路径,以避免像上面那样的冒牌命令出现的任何问题。 +``` +$ find . -name runme -execdir /bin/rm {} \; + +``` + +find 命令除了默认打印之外还有很多选项,有些可以使你的文件搜索更精确,但谨慎一点总是好的。 + +在 [Facebook][1] 和 [LinkedIn][2] 上加入网络世界社区来进行评论。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3233305/linux/using-the-linux-find-command-with-caution.html + +作者:[Sandra Henry-Stocker][a] +译者:[geekpi](https://github.com/geekpi) +校对:[Locez](https://github.com/locez) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[1]:https://www.facebook.com/NetworkWorld/ +[2]:https://www.linkedin.com/company/network-world diff --git a/sources/tech/20171019 More ways to examine network connections on Linux.md b/translated/tech/20171019 More ways to examine network connections on Linux.md similarity index 60% rename from sources/tech/20171019 More ways to examine network connections on Linux.md rename to translated/tech/20171019 More ways to examine network connections on Linux.md index 41e19559bf..8afd276c88 100644 --- a/sources/tech/20171019 More ways to examine network connections on Linux.md +++ b/translated/tech/20171019 More ways to examine network connections on Linux.md @@ -1,13 +1,12 @@ -translating by kimii -More ways to examine network connections on Linux +检查 linux 上网络连接的更多方法 ====== -The ifconfig and netstat commands are incredibly useful, but there are many other commands that can help you see what's up with you network on Linux systems. Today's post explores some very handy commands for examining network connections. +ifconfig 和 netstat 命令当然非常有用,但还有很多其他命令能帮你查看 linux 系统上的网络状况。本文探索了一些检查网络连接的非常简便的命令。 -### ip command +### ip 命令 -The **ip** command shows a lot of the same kind of information that you'll get when you use **ifconfig**. Some of the information is in a different format - e.g., "192.168.0.6/24" instead of "inet addr:192.168.0.6 Bcast:192.168.0.255" and ifconfig is better for packet counts, but the ip command has many useful options. +**ip** 命令显示了许多与你使用 **ifconfig** 命令时的一样信息。其中一些信息以不同的格式呈现,比如使用“192.168.0.6/24”,而不是“inet addr:192.168.0.6 Bcast:192.168.0.255”,尽管 ifconfig 更适合数据包计数,但 ip 命令有许多有用的选项。 -First, here's the **ip a** command listing information on all network interfaces. +首先,这里是 **ip a** 命令列出的所有网络接口的信息。 ``` $ ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 @@ -25,7 +24,7 @@ $ ip a ``` -If you want only to see a simple list of network interfaces, you can limit its output with **grep**. 
+如果你只想看到简单的网络接口列表,你可以用 **grep** 限制它的输出。 ``` $ ip a | grep inet inet 127.0.0.1/8 scope host lo @@ -35,7 +34,7 @@ $ ip a | grep inet ``` -You can get a glimpse of your default route using a command like this: +使用如下面的命令,你可以看到你的默认路由: ``` $ ip route show default via 192.168.0.1 dev eth0 @@ -43,18 +42,18 @@ default via 192.168.0.1 dev eth0 ``` -In this output, you can see that the default gateway is 192.168.0.1 through eth0 and that the local network is the fairly standard 192.168.0.0/24. +在这个输出中,你可以看到通过 eth0 的默认网关是 192.168.0.1,并且本地网络是相当标准的 192.168.0.0/24。 -You can also use the **ip** command to bring network interfaces up and shut them down. +你也可以使用 **ip** 命令来启用和禁用网络接口。 ``` $ sudo ip link set eth1 up $ sudo ip link set eth1 down ``` -### ethtool command +### ethtool 命令 -Another very useful tool for examining networks is **ethtool**. This command provides a lot of descriptive data on network interfaces. +另一个检查网络非常有用的工具是 **ethtool**。这个命令提供了网络接口上的许多描述性的数据。 ``` $ ethtool eth0 Settings for eth0: @@ -83,7 +82,7 @@ Cannot get wake-on-lan settings: Operation not permitted ``` -You can also use the **ethtool** command to examine ethernet driver settings. +你也可以使用 **ethtool** 命令来检查以太网驱动设置。 ``` $ ethtool -i eth0 driver: e1000e @@ -99,7 +98,7 @@ supports-priv-flags: no ``` -The autonegotiation details can be displayed with a command like this: +自动协商的详细信息可以用这样的命令来显示: ``` $ ethtool -a eth0 Pause parameters for eth0: @@ -109,9 +108,10 @@ TX: on ``` -### traceroute command +### traceroute 命令 -The **traceroute** command displays routing pathways. It works by using the TTL (time to live) field in the packet header in a series of packets to capture the path that packets take and how long they take to get from one hop to the next. Traceroute's output helps to gauge the health of network connections, since some routes might take much longer to reach the eventual destination. + +**traceroute** 命令显示路由路径。它通过在一系列数据包中设置数据包头的TTL(生存时间)字段来捕获数据包所经过的路径,以及数据包从一跳到下一跳需要的时间。Traceroute 的输出有助于评估网络连接的健康状况,因为某些路由可能需要花费更长的时间才能到达最终的目的地。 ``` $ sudo traceroute world.std.com traceroute to world.std.com (192.74.137.5), 30 hops max, 60 byte packets @@ -133,13 +133,13 @@ traceroute to world.std.com (192.74.137.5), 30 hops max, 60 byte packets ``` -### tcptraceroute command +### tcptraceroute 命令 -The **tcptraceroute** command does basically the same thing as traceroute except that it is able to bypass the most common firewall filters. As the command's man page explains, tcptraceroute sends out TCP SYN packets instead of UDP or ICMP ECHO packets, thus making it less susceptible to being blocked. +**tcptraceroute** 命令与 traceroute 基本上是一样的,只是它能够绕过最常见的防火墙的过滤。正如该命令的手册页所述,tcptraceroute 发送 TCP SYN 数据包而不是 UDP 或 ICMP ECHO 数据包,所以其不易被阻塞。 -### tcpdump command +### tcpdump 命令 -The **tcpdump** command allows you to capture network packets for later analysis. With the -D option, it lists available interfaces. +**tcpdump** 命令允许你捕获网络数据包来进一步分析。使用 -D 选项列出可用的网络接口。 ``` $ tcpdump -D 1.eth0 [Up, Running] @@ -157,7 +157,7 @@ $ tcpdump -D ``` -The -v (verbose) option controls how much detail you will see -- more v's, more details, but more than three v's doesn't add anything more. +-v(verbose)选项控制你看到的细节程度--越多的 v,越详细,但超过 3 个 v 不会有更多意义。 ``` $ sudo tcpdump -vv host 192.168.0.32 tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes @@ -172,9 +172,10 @@ tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 byt ``` -Expect to see a _lot_ of output when you run commands like this one. 
+当你运行像这样的命令时,会看到非常多的输出。 + +这个命令捕获来自特定主机和 eth0 上的 11 个数据包。-w 选项标识保存捕获包的文件。在这个示例命令中,我们只要求捕获 11 个数据包。 -This command captures 11 packets from a specific host and over eth0. The -w option identifies the file that will contain the capture packets. In this example command, we've only asked to capture 11 packets. ``` $ sudo tcpdump -c 11 -i eth0 src 192.168.0.32 -w packets.pcap tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes @@ -184,9 +185,10 @@ tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 byt ``` -### arp command +### arp 命令 + +arp 命令将 IPv4 地址映射到硬件地址。它所提供的信息也可以在一定程度上用于识别系统,因为网络适配器可以告诉你使用它们的系统的一些信息。下面的第二个MAC 地址,从 f8:8e:85 开始,很容易被识别为 Comtrend 路由器。 -The arp command maps IPv4 addresses to hardware addresses. The information provided can also be used to identify the systems to some extent, since the network adaptors in use can tell you something about the systems using them. The second MAC address below, starting with f8:8e:85, is easily identified as a Comtrend router. ``` $ arp -a ? (192.168.0.12) at b0:c0:90:3f:10:15 [ether] on eth0 @@ -194,15 +196,14 @@ $ arp -a ``` -The first line above shows the MAC address for the network adaptor on the system itself. This network adaptor appears to have been manufactured by Chicony Electronics in Taiwan. You can look up MAC address associations fairly easily on the web with tools such as this one from Wireshark -- https://www.wireshark.org/tools/oui-lookup.html - +上面的第一行显示了系统本身的网络适配器的 MAC 地址。该网络适配器似乎已由台湾 Chicony 电子公司制造。你可以很容易地在网上查找 MAC 地址关联,例如来自 Wireshark 的这个工具 -- https://www.wireshark.org/tools/oui-lookup.html -------------------------------------------------------------------------------- via: https://www.networkworld.com/article/3233306/linux/more-ways-to-examine-network-connections-on-linux.html 作者:[Sandra Henry-Stocker][a] -译者:[译者ID](https://github.com/译者ID) +译者:[kimii](https://github.com/kimii) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20171024 Run Linux On Android Devices, No Rooting Required.md b/translated/tech/20171024 Run Linux On Android Devices, No Rooting Required.md new file mode 100644 index 0000000000..929c3ecdf8 --- /dev/null +++ b/translated/tech/20171024 Run Linux On Android Devices, No Rooting Required.md @@ -0,0 +1,65 @@ +无需 Root 实现在 Android 设备上运行 Linux +====== +![](https://www.ostechnix.com/wp-content/uploads/2017/10/Termux-720x340.jpg) + +曾今,我尝试过搜索一种简单的可以在 Android 上运行 Linux 的方法。我当时唯一的意图只是想使用 Linux 以及一些基本的用用程序,比如 SSH,Git,awk 等。要求的并不多!我不不想 root Android 设备。我有一台平板电脑,主要用于阅读电子书,新闻和少数 Linux 博客。除此之外也不怎么用它了。因此我决定用它来实现一些 Linux 的功能。在 Google Play 商店上浏览了几分钟后,一个应用程序瞬间引起了我的注意,勾起了我实验的欲望。如果你也想在 Android 设备上运行 Linux,这个应用可能会有所帮助。 + +### Termux - 在 Android 和 Chrome OS 上运行的 Android 终端模拟器 + +**Termux** 是一个 Android 终端模拟器以及提供 Linux 环境的应用程序。跟许多其他应用程序不同,你无需 root 设备也无需进行设置。它是开箱即用的!它会自动安装好一个最基本的 Linux 系统,当然你也可以使用 APT 软件包管理器来安装其他软件包。总之,你可以让你的 Android 设备变成一台袖珍的 Linux 电脑。它不仅适用于 Android,你还能在 Chrome OS 上安装它。 + +![](http://www.ostechnix.com/wp-content/uploads/2017/10/termux.png) + +Termux 提供了许多重要的功能,比您想象的要多。 + + * 它允许你通过 openSSH 登陆远程服务器 + * 你还能够从远程系统 SSH 到 Android 设备中。 + * 使用 rsync 和 curl 将您的智能手机通讯录同步到远程系统。 + * 支持不同的 shell,比如 BASH,ZSH,以及 FISH 等等。 + * 可以选择不同的文本编辑器来编辑/查看文件,支持 Emacs,Nano 和 Vim。 + * 使用 APT 软件包管理器在 Android 设备上安装你想要的软件包。支持 Git,Perl,Python,Ruby 和 Node.js 的最新版本。 + * 可以将 Android 设备与蓝牙键盘,鼠标和外置显示器连接起来,就像是整合在一起的设备一样。Termux 支持键盘快捷键。 + * Termux 支持几乎所有 GNU/Linux 命令。 + 
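+在动手之前,不妨先看一个小示例:下面的命令演示了如何在 Termux 中用 APT 系的包管理器安装软件,并通过 SSH 登录远程服务器(`pkg` 是 Termux 对 APT 的封装;其中的用户名和主机名只是示意):
+
+```
+# 更新软件包索引并升级已安装的软件包
+pkg update && pkg upgrade
+
+# 安装 git 和 openssh,对应上面提到的 Git 与 SSH 功能
+pkg install git openssh
+
+# 通过 SSH 登录远程服务器(用户名和主机名仅为示例)
+ssh user@example.com
+```
+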
+此外,通过安装插件还可以启用其他一些功能。例如,**Termux:API** 插件允许你访问 Android 和 Chrome 的硬件功能。其他有用的插件包括:
+
+ * Termux:Boot - 设备启动时运行脚本
+ * Termux:Float - 在浮动窗口中运行 Termux
+ * Termux:Styling - 提供配色方案和支持 powerline 的字体来定制 Termux 终端的外观。
+ * Termux:Task - 提供一种从任务栏类的应用中调用 Termux 可执行文件的简易方法。
+ * Termux:Widget - 提供一种从主屏幕启动小脚本的简易方法。
+
+要了解更多有关 Termux 的信息,请长按终端上的任意位置并选择“帮助”菜单选项来打开内置的帮助部分。它唯一的缺点就是**需要 Android 5.0 及更高版本**。如果它还能支持 Android 4.x 和更旧的版本,将会更有用得多。你可以在 **Google Play 商店** 和 **F-Droid** 中找到并安装 Termux。
+
+要在 Google Play 商店中安装 Termux,点击下面按钮。
+
+[![termux][1]][2]
+
+若要在 F-Droid 中安装,则点击下面按钮。
+
+[![][1]][3]
+
+你现在知道如何使用 Termux 在 Android 设备上使用 Linux 了。你有用过其他更好的应用吗?请在下面留言框中留言,我也很乐意去尝试它们!
+
+此致敬礼!
+
+相关资源:
+
++[Termux 官网][4]
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/termux-run-linux-android-devices-no-rooting-required/
+
+作者:[SK][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]:https://play.google.com/store/apps/details?id=com.termux
+[3]:https://f-droid.org/packages/com.termux/
+[4]:https://termux.com/
diff --git a/translated/tech/20171027 Easy guide to secure VNC server with TLS encryption.md b/translated/tech/20171027 Easy guide to secure VNC server with TLS encryption.md
new file mode 100644
index 0000000000..bc3e2587e8
--- /dev/null
+++ b/translated/tech/20171027 Easy guide to secure VNC server with TLS encryption.md
@@ -0,0 +1,135 @@
+使用 TLS 加密保护 VNC 服务器的简单指南
+======
+在本教程中,我们将学习安装 VNC 服务器,并使用 TLS 加密保护 VNC 会话。
+此方法已经在 CentOS 6&7 上测试过了,但是也可以在其他的版本/操作系统上运行(RHEL、Scientific Linux 等)。
+
+**(推荐阅读:[保护 SSH 会话终极指南][1])**
+
+### 安装 VNC 服务器
+
+在机器上安装 VNC 服务器之前,请确保我们有一个可用的 GUI。如果机器上还没有安装 GUI,我们可以通过执行以下命令来安装:
+
+```
+yum groupinstall "GNOME Desktop"
+```
+
+现在我们将使用 tigervnc 作为 VNC 服务器,运行下面的命令进行安装:
+
+```
+# yum install tigervnc-server
+```
+
+安装完成后,我们将创建一个用来访问服务器的新用户:
+
+```
+# useradd vncuser
+```
+
+并使用以下命令为其分配访问 VNC 的密码:
+
+```
+# vncpasswd vncuser
+```
+
+CentOS 6 和 CentOS 7 上的配置略有不同,我们首先来看 CentOS 6 的配置。
+
+#### CentOS 6
+
+现在我们需要编辑 VNC 配置文件:
+
+```
+# vim /etc/sysconfig/vncservers
+```
+
+并添加下面这几行:
+
+```
+[…]
+VNCSERVERS="1:vncuser"
+VNCSERVERARGS[1]="-geometry 1024x768"
+```
+
+保存文件并退出。接下来重启 vnc 服务使改动生效:
+
+```
+# service vncserver restart
+```
+
+并在启动时启用它:
+
+```
+# chkconfig vncserver on
+```
+
+#### CentOS 7
+
+在 CentOS 7 上,/etc/sysconfig/vncservers 已经改为 /lib/systemd/system/vncserver@.service。我们将使用这个配置文件作为参考,所以先创建该文件的一个副本:
+
+```
+# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
+```
+
+接下来,我们将编辑文件以包含我们创建的用户:
+
+```
+# vim /etc/systemd/system/vncserver@:1.service
+```
+
+编辑下面 2 行中的用户:
+
+```
+ExecStart=/sbin/runuser -l vncuser -c "/usr/bin/vncserver %i"
+PIDFile=/home/vncuser/.vnc/%H%i.pid
+```
+
+保存文件并退出。接下来重启服务并在启动时启用它:
+
+```
+systemctl restart vncserver@:1.service
+systemctl enable vncserver@:1.service
+```
+
+现在我们已经设置好了 VNC 服务器,并且可以使用 VNC 服务器的 IP 地址从客户机连接到它。但是,在此之前,我们将使用 TLS 加密保护我们的连接。
+
+### 保护 VNC 会话
+
+要保护 VNC 会话,我们将首先配置加密方法。我们将使用 TLS 加密,但也可以使用 SSL 加密。执行以下命令在 VNC 服务器上启用 TLS 加密:
+
+```
+# vncserver -SecurityTypes=VeNCrypt,TLSVnc
+```
+
+系统将要求你输入访问 VNC 的密码(如果使用的是上述用户之外的其他用户)。
+
+![secure vnc server][4]
+
+现在,我们可以使用客户机上的 VNC 查看器访问服务器,使用以下命令以安全连接启动 vncviewer:
+
+```
+# vncviewer -SecurityTypes=VeNCrypt,TLSVnc 192.168.1.45:1
+```
+
+
+这里,192.168.1.45 是 VNC 服务器的 IP 地址。 + +![secure vnc server][6] + +输入密码,我们可以远程访问服务器,并且也是 TLS 加密的。 + +这篇教程就完了,欢迎随时使用下面的评论栏提交你的建议或疑问。 + + +-------------------------------------------------------------------------------- + +via: http://linuxtechlab.com/secure-vnc-server-tls-encryption/ + +作者:[Shusain][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linuxtechlab.com/author/shsuain/ +[1]:http://linuxtechlab.com/ultimate-guide-to-securing-ssh-sessions/ +[2]:/cdn-cgi/l/email-protection +[3]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=642%2C241 +[4]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2017/10/secure_vnc-1.png?resize=642%2C241 +[5]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=665%2C419 +[6]:https://i2.wp.com/linuxtechlab.com/wp-content/uploads/2017/10/secure_vnc-2.png?resize=665%2C419 diff --git a/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md b/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md deleted file mode 100644 index 2a6dad8027..0000000000 --- a/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md +++ /dev/null @@ -1,157 +0,0 @@ -如何创建 Ubuntu Live CD (Linux 中国注:Ubuntu 原生光盘)的定制镜像 -====== -![](https://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-720x340.png) - -今天让我们来讨论一下如何创建 Ubuntu Live CD 的定制镜像(ISO)。我们已经使用[* *Pinguy Builder* *][1]完成了这项工作。但是,现在似乎停止了。最近 Pinguy Builder 的官方网站似乎没有任何更新。幸运的是,我找到了另一种创建 Ubuntu Live CD 镜像的工具。使用 **Cubic** 即 **C**ustom **Ub**untu **I**SO **C**reator (Linux 中国注:Ubuntu 镜像定制器)的首字母所写,一个 GUI (图形用户界面)应用程序用来创建一个可定制的可启动的 Ubuntu Live CD(ISO)镜像。 - -Cubic 正在积极开发,它提供了许多选项来轻松地创建一个定制的 Ubuntu Live CD ,它有一个集成的命令行环境``chroot``(Linux 中国注:Change Root,也就是改变程序执行时所参考的根目录位置),在那里你可以定制所有,比如安装新的软件包,内核,添加更多的背景壁纸,添加更多的文件和文件夹。它有一个直观的 GUI 界面,在实时镜像创建过程中可以轻松的利用导航(可以利用点击鼠标来回切换)。您可以创建一个新的自定义镜像或修改现有的项目。因为它可以用来实时制作 Ubuntu 镜像,所以我相信它可以被利用在制作其他 Ubuntu 的发行版和衍生版镜像中使用,比如 Linux Mint。 -### 安装 Cubic - -Cubic 的开发人员已经开发出了一个 PPA (Linux 中国注:Personal Package Archives 首字母简写,私有的软件包档案) 来简化安装过程。要在 Ubuntu 系统上安装 Cubic ,在你的终端上运行以下命令: -``` -sudo apt-add-repository ppa:cubic-wizard/release -``` -``` -sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6494C6D6997C215E -``` -``` -sudo apt update -``` -``` -sudo apt install cubic -``` - -### 利用 Cubic 创建 Ubuntu Live CD 的定制镜像 - - -安装完成后,从应用程序菜单或坞站启动 Cubic。这是在我在 Ubuntu 16.04 LTS 桌面系统中 Cubic 的样子。 - -为新项目选择一个目录。它是保存镜像文件的目录。 -[![][2]][3] - -请注意,Cubic 不是创建您系统的 Live CD 镜像。而它只是利用 Ubuntu 安装 CD 来创建一个定制的 Live CD,因此,你应该有一个最新的 ISO 镜像。 -选择您存储 Ubuntu 安装 ISO 镜像的路径。Cubic 将自动填写您定制操作系统的所有细节。如果你愿意,你可以改变细节。单击 Next 继续。 -[![][2]][4] - - -接下来,从压缩的源安装介质中的 Linux 文件系统将被提取到项目的目录(在我们的例子中目录的位置是 **/home/ostechnix/custom_ubuntu**)。 -[![][2]][5] - - -一旦文件系统被提取出来,将自动加载到``chroot``环境。如果你没有看到终端提示,按下回车键几次。 -[![][2]][6] - - -在这里可以安装任何额外的软件包,添加背景图片,添加软件源列表,添加最新的 Linux 内核和所有其他定制到你的 Live CD 。 - -例如,我希望 `vim` 安装在我的 Live CD 中,所以现在就要安装它。 -[![][2]][7] - - -我们不需要使用 ``sudo``,因为我们已经在具有最高权限(root)的环境中了。 - -类似地,如果需要,可以安装添加的任何版本 Linux Kernel 。 -``` -apt install linux-image-extra-4.10.0-24-generic -``` - -此外,您还可以更新软件源列表(添加或删除软件存储库列表): -[![][2]][8] - -修改源列表后,不要忘记运行 ``apt update`` 命令来更新源列表: -``` -apt update -``` - - -另外,您还可以向 Live CD 中添加文件或文件夹。复制文件/文件夹(右击它们并选择复制或者利用 `CTRL+C`),在终端右键单击(在 Cubic 窗口内),选择**Paste file(s)**,最后点击它将其复制进 Cubic 向导的底部。 -[![][2]][9] - -**Ubuntu 17.10 用户注意事项: ** 
- - -在 Ubuntu 17.10 系统中,DNS 查询可能无法在 ``chroot``环境中工作。如果您正在制作一个定制的 Ubuntu 17.10 原生镜像,您需要指向正确的 `resolve.conf` 配置文件: -``` -ln -sr /run/systemd/resolve/resolv.conf /run/systemd/resolve/stub-resolv.conf - -``` - -验证 DNS 解析工作,运行: -``` -cat /etc/resolv.conf -ping google.com -``` - - -如果你想的话,可以添加你自己的壁纸。要做到这一点,请切换到 **/usr/share/backgrounds/** 目录, -``` -cd /usr/share/backgrounds -``` - - -并将图像拖放到 Cubic 窗口中。或复制图像,右键单击 Cubic 终端窗口,选择 **Paste file(s)** 选项。此外,确保你在**/usr/share/gnome-backproperties** 的XML文件中添加了新的壁纸,这样你可以在桌面上右键单击新添加的图像选择**Change Desktop Background** 进行交互。完成所有更改后,在 Cubic 向导中单击 ``Next``。 - -接下来,选择引导到新的原生 ISO 镜像时使用的 Linux 内核版本。如果已经安装了其他版本内核,它们也将在这部分中被列出。然后选择您想在 Live CD 中使用的内核。 -[![][2]][10] - - -在下一节中,选择要从您的原生映像中删除的软件包。在使用定制的原生映像安装完 Ubuntu 操作系统后,所选的软件包将自动删除。在选择要删除的软件包时,要格外小心,您可能在不知不觉中删除了一个软件包,而此软件包又是另外一个软件包的依赖包。 -[![][2]][11] - - -接下来,原生镜像创建过程将开始。这里所要花费的时间取决于你定制的系统规格。 -[![][2]][12] - - -镜像创建完成后后,单击 ``Finish``。Cubic 将显示新创建的自定义镜像的细节。 - -如果你想在将来修改刚刚创建的自定义原生镜像,**uncheck** 选项解释说**" Delete all project files, except the generated disk image and the corresponding MD5 checksum file"** (**除了生成的磁盘映像和相应的MD5校验和文件之外,删除所有的项目文件**) Cubic 将在项目的工作目录中保留自定义图像,您可以在将来进行任何更改。而不用从头再来一遍。 - -要为不同的 Ubuntu 版本创建新的原生镜像,最好使用不同的项目目录。 -### 利用 Cubic 修改 Ubuntu Live CD 的定制镜像 - -从菜单中启动 Cubic ,并选择一个现有的项目目录。单击 Next 按钮,您将看到以下三个选项: - 1. 从现有项目创建一个磁盘映像。 - 2. 继续定制现有项目。 - 3. 删除当前项目。 - - - -[![][2]][13] - - -第一个选项将允许您使用之前所做的自定义在现有项目中创建一个新的原生 ISO 镜像。如果您丢失了 ISO 镜像,您可以使用第一个选项来创建一个新的。 - -第二个选项允许您在现有项目中进行任何其他更改。如果您选择此选项,您将再次进入 ``chroot``环境。您可以添加新的文件或文件夹,安装任何新的软件,删除任何软件,添加其他的 Linux 内核,添加桌面背景等等。 - -第三个选项将删除现有的项目,所以您可以从头开始。选择此选项将删除所有文件,包括新生成的 ISO 镜像文件。 - -我用 Cubic 做了一个定制的 Ubuntu 16.04 LTS 桌面 Live CD 。就像这篇文章里描述的一样。如果你想创建一个 Ubuntu Live CD, Cubic 可能是一个不错的选择。 - -就这些了,再会! - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/create-custom-ubuntu-live-cd-image/ - -作者:[SK][a] -译者:[stevenzdg988](https://github.com/stevenzdg988) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/pinguy-builder-build-custom-ubuntu-os/ -[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-1.png () -[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-2.png () -[5]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-3.png () -[6]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-4.png () -[7]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-6.png () -[8]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-5.png () -[9]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-7.png () -[10]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-8.png () -[11]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-10-1.png () -[12]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-12-1.png () -[13]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-13.png () diff --git a/translated/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md b/translated/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md new file mode 100644 index 0000000000..6fd4ee93a3 --- /dev/null +++ b/translated/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md @@ -0,0 +1,123 @@ +如何在 Linux/Unix 之上绑定 ntpd 到特定的 IP 地址 +====== + +默认的情况下,我们的 ntpd/NTP 服务器会监听所有的端口或者 IP 地址,也就是:0.0.0.0:123。 
怎样才能在一台 Linux 或 FreeBSD Unix 服务器上,确保它只监听特定的 IP 地址,比如 localhost 或者 192.168.1.1:123 呢?
+
+NTP 是网络时间协议(Network Time Protocol)的首字母缩写,这是一个用来同步两台电脑之间时间的协议。ntpd 是一个操作系统守护进程,可以设置并且保证系统的时间与互联网标准时间服务器同步。
+
+[![如何在 Linux 和 Unix 服务器上防止 NTPD 监听 0.0.0.0:123 并将其绑定到特定的 IP 地址][1]][1]
+
+NTP 使用 `/etc` 目录之下的 `ntp.conf` 作为配置文件。
+
+
+
+## /etc/ntp.conf 之中的 interface 指令
+
+你可以通过设置 interface 指令来防止 ntpd 监听 0.0.0.0:123,语法如下:
+
+```
+interface listen IPv4|IPv6|all
+interface ignore IPv4|IPv6|all
+interface drop IPv4|IPv6|all
+```
+
+上面的指令决定了 ntpd 监听或丢弃哪些网络地址,而不会处理任何请求:**ignore 会阻止打开与之匹配的地址;drop 则会让 ntpd 打开该地址,但把收到的所有数据包直接丢弃、不做检查。**举个例子,如果要忽略在所有接口上的监听,把下面的语句加入 `/etc/ntp.conf`:
+
+`interface ignore wildcard`
+
+如果只监听 127.0.0.1 和 192.168.1.1 则是这样:
+
+```
+interface listen 127.0.0.1
+interface listen 192.168.1.1
+```
+
+这是我 FreeBSD 云服务器上的样例 /etc/ntp.conf 文件:
+
+`$ egrep -v '^#|$^' /etc/ntp.conf`
+
+样例输出为:
+
+```
+tos minclock 3 maxclock 6
+pool 0.freebsd.pool.ntp.org iburst
+restrict default limited kod nomodify notrap noquery nopeer
+restrict -6 default limited kod nomodify notrap noquery nopeer
+restrict source limited kod nomodify notrap noquery
+restrict 127.0.0.1
+restrict -6 ::1
+leapfile "/var/db/ntpd.leap-seconds.list"
+interface ignore wildcard
+interface listen 172.16.3.1
+interface listen 10.105.28.1
+```
+
+
+## 重启 ntpd
+
+在 FreeBSD Unix 之上重新加载/重启 ntpd:
+
+`$ sudo /etc/rc.d/ntpd restart`
+或者 [在 Debian 和 Ubuntu Linux 之上使用下面的命令][2]:
+`$ sudo systemctl restart ntp`
+或者 [在 CentOS/RHEL 7/Fedora Linux 之上使用下面的命令][2]:
+`$ sudo systemctl restart ntpd`
+
+## 校验
+
+使用 `netstat` 和 `ss` 命令来检查 ntpd 是否只绑定到了特定的 IP 地址:
+
+`$ netstat -tulpn | grep :123`
+或是
+`$ ss -tulpn | grep :123`
+样例输出:
+
+```
+udp 0 0 10.105.28.1:123 0.0.0.0:* -
+udp 0 0 172.16.3.1:123 0.0.0.0:* -
+```
+
+也可以使用 [sockstat 命令(FreeBSD Unix 服务器)][3]:
+
+```
+$ sudo sockstat
+$ sudo sockstat -4
+$ sudo sockstat -4 | grep :123
+```
+
+
+样例输出:
+
+```
+root ntpd 59914 22 udp4 127.0.0.1:123 *:*
+root ntpd 59914 24 udp4 127.0.1.1:123 *:*
+```
+
+
+
+## Vivek Gite 投稿
+
+本文作者是 nixCraft 的创建者,一位经验丰富的系统管理员,也是一名 Linux 操作系统和 Unix shell 脚本方面的培训师。他为全球不同行业的客户工作,包括 IT、教育、国防、空间研究和非营利性组织。可以在 [Twitter][4]、[Facebook][5] 和 [Google+][6] 上关注他。
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/faq/how-to-bind-ntpd-to-specific-ip-addresses-on-linuxunix/
+
+作者:[Vivek Gite][a]
+译者:[Drshu](https://github.com/Drshu)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/media/new/faq/2017/10/how-to-prevent-ntpd-to-listen-on-all-interfaces-on-linux-unix-box.jpg
+[2]:https://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/
+[3]:https://www.cyberciti.biz/faq/freebsd-unix-find-the-process-pid-listening-on-a-certain-port-commands/
+[4]:https://twitter.com/nixcraft
+[5]:https://facebook.com/nixcraft
+[6]:https://plus.google.com/+CybercitiBiz
diff --git a/translated/tech/20171102 What is huge pages in Linux.md b/translated/tech/20171102 What is huge pages in Linux.md
new file mode 100644
index 0000000000..ee261956ad
--- /dev/null
+++ b/translated/tech/20171102 What is huge pages in Linux.md
@@ -0,0 +1,137 @@
+Linux 中的 huge pages 是个什么玩意?
+
+====== +学习 Linux 中的 huge pages( 巨大页)。理解什么是 hugepages,如何进行配置,如何查看当前状态以及如何禁用它。 + +![Huge Pages in Linux][1] + +本文,我们会详细介绍 huge page,让你能够回答:Linux 中的 huge page 是什么玩意?在 RHEL6,RHEL7,Ubuntu 等 Linux 中,如何启用/禁用 huge pages?如何查看 huge page 的当前值? + +首先让我们从 Huge page 的基础知识开始讲起。 + +### Linux 中的 Huge page 是个什么玩意? + +Huge pages 有助于 Linux 系统进行虚拟内存管理。顾名思义,除了标准的 4KB 大小的页面外,他们还能帮助管理内存中的巨大页面。使用 huge pages,你最大可以定义 1GB 的页面大小。 + +在系统启动期间,huge pages 会为应用程序预留一部分内存。这部分内存,即被 huge pages 占用的这些存储器永远不会被交换出内存。它会一直保留其中除非你修改了配置。这会极大地提高像 Orcle 数据库这样的需要海量内存的应用程序的性能。 + +### 为什么使用巨大的页? + +在虚拟内存管理中,内核维护一个将虚拟内存地址映射到物理地址的表,对于每个页面操作,内核都需要加载相关的映射标。如果你的内存页很小,那么你需要加载的页就会很多,导致内核加载更多的映射表。而这会降低性能。 + +使用巨大的页,意味着所需要的页变少了。从而大大减少由内核加载的映射表的数量。这提高了内核级别的性能最终有利于应用程序的性能。 + +简而言之,通过启用 huge pages,系统具只需要处理较少的页面映射表,从而减少访问/维护它们的开销! + +### 如何配置 huge pages? + +运行下面命令来查看当前 huge pages 的详细内容。 + +``` +root@kerneltalks # grep Huge /proc/meminfo +AnonHugePages: 0 kB +HugePages_Total: 0 +HugePages_Free: 0 +HugePages_Rsvd: 0 +HugePages_Surp: 0 +Hugepagesize: 2048 kB +``` + +从上面输出可以看到,每个页的大小为 2MB(`Hugepagesize`) 并且系统中目前有 0 个页 (`HugePages_Total`)。这里巨大页的大小可以从 2MB 增加到 1GB。 + +运行下面的脚本可以获取系统当前需要多少个巨大页。该脚本取之于 Oracle。 + +``` +#!/bin/bash +# +# hugepages_settings.sh +# +# Linux bash script to compute values for the +# recommended HugePages/HugeTLB configuration +# +# Note: This script does calculation for all shared memory +# segments available when the script is run, no matter it +# is an Oracle RDBMS shared memory segment or not. +# Check for the kernel version +KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'` +# Find out the HugePage size +HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}` +# Start from 1 pages to be on the safe side and guarantee 1 free HugePage +NUM_PG=1 +# Cumulative number of pages required to handle the running shared memory segments +for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"` +do + MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q` + if [ $MIN_PG -gt 0 ]; then + NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q` + fi +done +# Finish with results +case $KERN in + '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`; + echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;; + '2.6' | '3.8' | '3.10' | '4.1' ) echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;; + *) echo "Unrecognized kernel version $KERN. Exiting." ;; +esac +# End +``` +将它以 `hugepages_settings.sh` 为名保存到 `/tmp` 中,然后运行之: +``` +root@kerneltalks # sh /tmp/hugepages_settings.sh +Recommended setting: vm.nr_hugepages = 124 +``` + +输出如上结果,只是数字会有一些出入。 + +这意味着,你系统需要 124 个每个 2MB 的巨大页!若你设置页面大小为 4MB,则结果就变成了 62。你明白了吧? + +### 配置内核中的 hugepages + +本文最后一部分内容是配置上面提到的 [内核参数 ][2] 然后重新加载。将下面内容添加到 `/etc/sysctl.conf` 中,然后输入 `sysctl -p` 命令重新加载配置。 + +``` +vm .nr_hugepages=126 +``` + +注意我们这里多加了两个额外的页,因为我们希望在实际需要的页面数量外多一些额外的空闲页。 + +现在,内核已经配置好了,但是要让应用能够使用这些巨大页还需要提高内存的使用阀值。新的内存阀值应该为 126 个页 x 每个页 2 MB = 252 MB,也就是 258048 KB。 + +你需要编辑 `/etc/security/limits.conf` 中的如下配置 + +``` +soft memlock 258048 +hard memlock 258048 +``` + +某些情况下,这些设置是在指定应用的文件中配置的,比如 Oracle DB 就是在 `/etc/security/limits.d/99-grid-oracle-limits.conf` 中配置的。 + +这就完成了!你可能还需要重启应用来让应用来使用这些新的巨大页。 + +### 如何禁用 hugepages? 
+ +HugePages 默认是开启的。使用下面命令来查看 hugepages 的当前状态。 + +``` +root@kerneltalks# cat /sys/kernel/mm/transparent_hugepage/enabled +[always] madvise never +``` + +输出中的 `[always]` 标志说明系统启用了 hugepages。 + +若使用的是基于 RedHat 的系统,则应该要查看的文件路径为 `/sys/kernel/mm/redhat_transparent_hugepage/enabled`。 + +若想禁用巨大页,则在 `/etc/grub.conf` 中的 `kernel` 行后面加上 `transparent_hugepage=never`,然后重启系统。 + +-------------------------------------------------------------------------------- + +via: https://kerneltalks.com/services/what-is-huge-pages-in-linux/ + +作者:[Shrikant Lavhate][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://kerneltalks.com +[1]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png +[2]:https://kerneltalks.com/linux/how-to-tune-kernel-parameters-in-linux/ diff --git a/translated/tech/20171106 Autorandr- automatically adjust screen layout.md b/translated/tech/20171106 Autorandr- automatically adjust screen layout.md deleted file mode 100644 index 4dc8095669..0000000000 --- a/translated/tech/20171106 Autorandr- automatically adjust screen layout.md +++ /dev/null @@ -1,50 +0,0 @@ -Autorandr:自动调整屏幕布局 -====== -像许多笔记本用户一样,我经常将笔记本插入到不同的显示器上(桌面上有多台显示器,演示时有投影机等)。运行 xrandr 命令或点击界面非常繁琐,编写脚本也不是很好。 - -最近,我遇到了 [autorandr][1],它使用 EDID(和其他设置)检测连接的显示器,保存 xrandr 配置并恢复它们。它也可以在加载特定配置时运行任意脚本。我已经打包了它,目前仍在 NEW 状态。如果你不能等待,[这是 deb][2],[这是 git 仓库][3]。 - -要使用它,只需安装软件包,并创建你的初始配置(我这里是 undocked): -``` - autorandr --save undocked - -``` - -然后,连接你的笔记本(或者插入你的外部显示器),使用 xrandr(或其他任何)更改配置,然后保存你的新配置(我这里是 workstation): -``` -autorandr --save workstation - -``` - -对你额外的配置(或当你有新的配置)进行重复操作。 - -Autorandr 有 `udev`、`systemd` 和 `pm-utils` 钩子,当新的显示器出现时 `autorandr --change` 应该会立即运行。如果需要,也可以手动运行 `autorandr --change` 或 `autorandr - load workstation`。你也可以在加载配置后在 `~/.config/autorandr/$PROFILE/postswitch` 添加自己的脚本来运行。由于我运行 i3,我的工作站配置如下所示: -``` - #!/bin/bash - - xrandr --dpi 92 - xrandr --output DP2-2 --primary - i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;' - i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;' - i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;' - -``` - -它适当地修正了 dpi,设置主屏幕(可能不需要?),并移动 i3 工作区。你可以通过在配置文件目录中添加一个 `block` 钩子来安排配置永远不会运行。 - -如果你定期更换显示器,请看一下! 
- --------------------------------------------------------------------------------- - -via: https://www.donarmstrong.com/posts/autorandr/ - -作者:[Don Armstrong][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.donarmstrong.com -[1]:https://github.com/phillipberndt/autorandr -[2]:https://www.donarmstrong.com/autorandr_1.2-1_all.deb -[3]:https://git.donarmstrong.com/deb_pkgs/autorandr.git diff --git a/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md b/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md index e924dcbf28..97bbfe6fb6 100644 --- a/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md +++ b/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md @@ -7,7 +7,7 @@ ### 在Arch Linux中设置日语环境 -首先,安装必要的日语字体,以正确查看日语 ASCII 格式: +首先,为了正确查看日语 ASCII 格式,先安装必要的日语字体: ``` sudo pacman -S adobe-source-han-sans-jp-fonts otf-ipafont ``` @@ -27,7 +27,7 @@ pacaur -S ttf-monapo sudo pacman -S ibus ibus-anthy ``` -在 **~/.xprofile** 中添加以下行(如果不存在,创建一个): +在 **~/.xprofile** 中添加以下几行(如果不存在,创建一个): ``` # Settings for Japanese input export GTK_IM_MODULE='ibus' @@ -38,7 +38,7 @@ export XMODIFIERS=@im='ibus' ibus-daemon -drx ``` -~/.xprofile 允许我们在窗口管理器启动之前在 X 用户会话开始时执行命令。 +~/.xprofile 允许我们在 X 用户会话开始时且在窗口管理器启动之前执行命令。 保存并关闭文件。重启 Arch Linux 系统以使更改生效。 @@ -72,9 +72,9 @@ ibus-setup [![][2]][8] -你还可以在键盘绑定中编辑默认的快捷键。完成所有更改后,单击应用并确定。就是这样。从任务栏中的 iBus 图标中选择日语,或者按下**Command/Window 键+空格键**来在日语和英语(或者系统中的其他默认语言)之间切换。你可以从 iBus 首选项窗口更改键盘快捷键。 +你还可以在键盘绑定中编辑默认的快捷键。完成所有更改后,点击应用并确定。就是这样。从任务栏中的 iBus 图标中选择日语,或者按下**SUPER 键+空格键**(LCTT译注:SUPER KEY 通常为 Command/Window KEY)来在日语和英语(或者系统中的其他默认语言)之间切换。你可以从 iBus 首选项窗口更改键盘快捷键。 -你现在知道如何在 Arch Linux 及其衍生版中使用日语了。如果你发现我们的指南很有用,那么请您在社交、专业网络上分享,并支持 OSTechNix。 +现在你知道如何在 Arch Linux 及其衍生版中使用日语了。如果你发现我们的指南很有用,那么请您在社交、专业网络上分享,并支持 OSTechNix。 @@ -84,7 +84,7 @@ via: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/ 作者:[][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[Locez](https://github.com/locez) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md b/translated/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md new file mode 100644 index 0000000000..397843785e --- /dev/null +++ b/translated/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md @@ -0,0 +1,184 @@ +一步一步学习如何在 MariaDB 中配置主从复制 +====== +在我们前面的教程中,我们已经学习了 [**如何安装和配置 MariaDB**][1],也学习了 [**管理 MariaDB 的一些基础命令**][2]。现在我们来学习,如何在 MariaDB 服务器上配置一个主从复制。 + +复制是用于为我们的数据库去创建多个副本,这些副本可以在其它数据库上用于运行查询,像一些非常繁重的查询可能会影响主数据库服务器的性能,或者我们可以使用它来做数据冗余,或者兼具以上两个目的。我们可以将这个过程自动化,即主服务器到从服务器的复制过程自动进行。执行备份而不影响在主服务器上的写操作。 + +因此,我们现在去配置我们的主-从复制,它需要两台安装了 MariaDB 的机器。它们的 IP 地址如下: + + **主服务器 -** 192.168.1.120 **主机名** master.ltechlab.com + + **从服务器 -** 192.168.1.130 **主机名 -** slave.ltechlab.com + +MariaDB 安装到这些机器上之后,我们继续进行本教程。如果你需要安装和配置 MariaDB 的教程,请查看[ **这个教程**][1]。 + + +### **第 1 步 - 主服务器配置** + +我们现在进入到 MariaDB 中的一个命名为 ' **important '** 的数据库,它将被复制到我们的从服务器。为开始这个过程,我们编辑名为 ' **/etc/my.cnf** ' 的文件,它是 MariaDB 的配置文件。 + +``` +$ vi /etc/my.cnf +``` + +在这个文件中找到 [mysqld] 节,然后输入如下内容: + +``` +[mysqld] +log-bin +server_id=1 +replicate-do-db=important 
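+# 下面几行的含义(此注释为补充说明;my.cnf 中以 # 开头的行会被忽略):
+# log-bin 开启二进制日志,这是主从复制的前提;
+# server_id 在主、从服务器上必须各不相同;
+# replicate-do-db 指定参与复制的数据库;
+# bind-address 是主服务器监听的 IP 地址。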
+bind-address=192.168.1.120 +``` + +保存并退出这个文件。完成之后,需要重启 MariaDB 服务。 + +``` +$ systemctl restart mariadb +``` + +接下来,我们登入我们的主服务器上的 Mariadb 实例。 + +``` +$ mysql -u root -p +``` + +在它上面创建一个命名为 'slaveuser' 的为主从复制使用的新用户,然后运行如下的命令为它分配所需要的权限: + +``` +STOP SLAVE; +GRANT REPLICATION SLAVE ON *.* TO 'slaveuser'@'%' IDENTIFIED BY 'iamslave'; +FLUSH PRIVILEGES; +FLUSH TABLES WITH READ LOCK; +SHOW MASTER STATUS; +``` + +**注意: ** 我们配置主从复制需要 **MASTER_LOG_FILE 和 MASTER_LOG_POS ** 的值,它可以通过 'show master status' 来获得,因此,你一定要确保你记下了它们的值。 + +这些命令运行完成之后,输入 'exit' 退出这个会话。 + +### 第 2 步 - 创建一个数据库备份,并将它移动到从服务器上 + +现在,我们需要去为我们的数据库 'important' 创建一个备份,可以使用 'mysqldump' 命令去备份。 + +``` +$ mysqldump -u root -p important > important_backup.sql +``` + +备份完成后,我们需要重新登陆到 MariaDB 数据库,并解锁我们的表。 + +``` +$ mysql -u root -p +$ UNLOCK TABLES; +``` + +然后退出这个会话。现在,我们移动我们刚才的备份到从服务器上,它的 IP 地址是:192.168.1.130。 + +在主服务器上的配置已经完成了,现在,我们开始配置从服务器。 + +### 第 3 步:配置从服务器 + +我们再次去编辑 '/etc/my.cnf' 文件,找到配置文件中的 [mysqld] 节,然后输入如下内容: + +``` +[mysqld] +server-id = 2 +replicate-do-db=important +[ …] +``` + +现在,我们恢复我们主数据库的备份到从服务器的 MariaDB 上,运行如下命令: + +``` +$ mysql -u root -p < /data/ important_backup.sql +``` + +当这个恢复过程结束之后,我们将通过登入到从服务器上的 MariaDB,为数据库 'important' 上的用户 'slaveuser' 授权。 + +``` +$ mysql -u root -p +``` + +``` +GRANT ALL PRIVILEGES ON important.* TO 'slaveuser'@'localhost' WITH GRANT OPTION; +FLUSH PRIVILEGES; +``` + +接下来,为了这个变化生效,重启 MariaDB。 + +``` +$ systemctl restart mariadb +``` + +### **第 4 步:启动复制** + +记住,我们需要 **MASTER_LOG_FILE 和 MASTER_LOG_POS** 变量的值,它可以通过在主服务器上运行 'SHOW MASTER STATUS' 获得。现在登入到从服务器上的 MariaDB,然后通过运行下列命令,告诉我们的从服务器它应该去哪里找主服务器。 + +``` +STOP SLAVE; +CHANGE MASTER TO MASTER_HOST= '192.168.1.110′, MASTER_USER='slaveuser', MASTER_PASSWORD='iamslave', MASTER_LOG_FILE='mariadb-bin.000001′, MASTER_LOG_POS=460; +SLAVE START; +SHOW SLAVE STATUS\G; +``` + +**注意:** 请根据你的机器的具体情况来改变主服务器的配置。 + +### 第 5 步:测试复制 + +我们将在我们的主服务器上创建一个新表来测试主从复制是否正常工作。因此,登入到主服务器上的 MariaDB。 + +``` +$ mysql -u root -p +``` + +选择数据库为 'important': + +``` +use important; +``` + +在这个数据库上创建一个名为 ‘test’ 的表: + +``` +create table test (c int); +``` + +然后在这个表中插入一些数据: + +``` +insert into test (c) value (1); +``` + +检索刚才插入的值是否存在: + +``` +select * from test; +``` + +你将会看到刚才你插入的值已经在这个新建的表中了。 + +现在,我们登入到从服务器的数据库中,查看主从复制是否正常工作。 + +``` +$ mysql -u root -p +$ use important; +$ select * from test; +``` + +你可以看到与前面在主服务器上的命令输出是一样的。因此,说明我们的主从服务工作正常,没有发生任何问题。 + +我们的教程结束了,请在下面的评论框中留下你的查询/问题。 + +-------------------------------------------------------------------------------- + +via: http://linuxtechlab.com/creating-master-slave-replication-mariadb/ + +作者:[Shusain][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linuxtechlab.com/author/shsuain/ +[1]:http://linuxtechlab.com/installing-configuring-mariadb-rhelcentos/ +[2]:http://linuxtechlab.com/mariadb-administration-commands-beginners/ diff --git a/translated/tech/20171113 The big break in computer languages.md b/translated/tech/20171113 The big break in computer languages.md new file mode 100644 index 0000000000..c8d2f9a07a --- /dev/null +++ b/translated/tech/20171113 The big break in computer languages.md @@ -0,0 +1,91 @@ +计算机语言的巨变 +==================================================== + + +我的上一篇博文([与 C 的长久别离][3])引来了我的老朋友,一位 C++ 专家的评论。在评论里,他推荐把 C++ 作为 C 的替代品。这是不可能发生的,如果 C ++ 代替 C 是趋势的话,那么 Go 和 Rust 也就不会出现了。 + +但是我不能只给我的读者一个空洞的看法。所以,在这篇文章中,我来讲述一下为什么我不再碰 C++ 
的故事。这是关于计算机语言设计经济学专题文章的起始点。这篇文章会讨论为什么一些真心不好的决策会被做出来,然后进入语言的基础设计之中,以及我们该如何修正这些问题。 + +在这篇文章中,我会一点一点的指出人们(当然也包括我)自从 20 世纪 80 年代以来就存在的关于未来的编程语言的预见失误。直到最近我们才找到了证明我们错了的证据。 + +我第一次学习 C++ 是因为我需要使用 GNU eqn 输出 MathXML,而 eqn 是使用 C++ 写的。那个项目不错。在那之后,21世纪初,我在韦诺之战那边当了多年的高级开发工程师,并且与 C++ 相处甚欢。 + +在那之后啊,有一天我们发现一个不小心被我们授予特权的人已经把游戏的 AI 核心搞崩掉了。显然,在团队中只有我是不那么害怕查看代码的。最终,我把一切都恢复正常了 —— 我折腾了整整两周。再那之后,我就发誓我再也不靠近 C++ 了。 + +在那次经历过后,我发现这个语言的问题就是它在尝试使得本来就复杂的东西更加复杂,来属兔补上因为基础概念的缺失造成的漏洞。对于裸指针,他说“别这样做”,这没有问题。对于小规模的个人项目(比如我的魔改版 eqn),遵守这些规定没有问题。 + +但是对于大型项目,或者开发者水平不同的多人项目(这是我经常要处理的情况)就不能这样。随着时间的推移以及代码行数的增加,有的人就会捅篓子。当别人指出有 BUG 时,因为诸如 STL 之类的东西给你增加了一层复杂度,你处理这种问题所需要的精力就比处理同等规模的 C 语言的问题就要难上很多。我在韦诺之战时,我就知道了,处理这种问题真的相当棘手。 + +我给 Stell Heller(我的老朋友 ,C++ 的支持者)写代码时不会发生的问题在我与非 Heller 们合作时就被放大了,我和他们合作的结局可能就是我得给他们擦屁股。所以我就不用 C++ ,我觉得不值得为了其花时间。 C 是有缺陷的,但是 C 有 C++ 没有的优点 —— 如果你能在脑内模拟出硬件,那么你就能很简单的看出程序是怎么运行的。如果 C++ 真的能解决 C 的问题(也就是说,C++ 是类型安全以及内存安全的),那么失去其透明性也是值得的。但是,C++ 并没有这样。 + +我们判断 C++ 做的还不够的方法之一是想象一个 C++ 已经搞得不错的世界。在那个世界里,老旧的 C 语言项目会被迁移到 C++ 上来。主流的操作系统内核会是 C++ 写就,而现存的内核实现,比如 Linux 会渐渐升级成那样。在现实世界,这些都没有发生。C++ 不仅没有打消语言设计者设想像 D,Go 以及 Rust 那样的新语言的想法,他甚至都没有取代他的前辈。不改变 C++ 的核心思想,他就没有未来,也因此,C++ 的抽象泄露也不会消失。 + +既然我刚刚提到了 D 语言,那我就说说为什么我不把 D 视为一个够格的 C 语言竞争者的原因吧.尽管他比 Rust 早出现了八年 -- 和 Rust 相比是九年 -- Walter Bright 早在那时就有了构建那样一个语言的想法.但是在 2001 年,以 Python 和 Perl 为首的语言的出现已经确定了,专有语言能和开源语言抗衡的时代已经过去.官方 D 语言库/运行时和 Tangle 的无谓纷争也打击了其发展.它从未修正这些错误。 + +然后就是 Go 语言(我本来想说“以及 Rust”。但是如前文所述,我认为 Rust 还需要几年时间才能有竞争力)。它 _的确是_ 类型安全以及内存安全的(好吧,是在大多数时候是这样,但是如果你要使用接口的话就不是如此了,但是自找麻烦可不是正常人的做法)。我的一位好友,Mark Atwood,曾指出过 Go 语言是脾气暴躁的老头子因为愤怒创造出的语言,主要是 _C 语言的作者之一_(Ken Thompson) 因为 C++ 的混乱臃肿造成的愤怒,我深以为然。 + +我能理解 Ken 恼火的原因。这几十年来我就一直认为 C++ 搞错了需要解决的问题。C 语言的后继者有两条路可走。其一就是 C++ 那样,接受 C 的抽象泄漏,裸指针等等,以保证兼容性。然后以此为基础,构建一个最先进的语言。还有一条道路,就是从根源上解决问题 —— _修正_ C语言的抽象泄露。这一来就会破环其兼容性,但是也会杜绝 C/C++ 现有的问题。 + +对于第二条道路,第一次严谨的尝试就是 1995 年出现的 Java。Java 搞得不错,但是在语言解释器上构建这门语言使其不适合系统编程。这就在系统编程那留下一个巨大的漏洞,在 Go 以及 Rust 出现之前的 15 年里,都没有语言来填补这个空白。这也就是我的GPSD和NTPsec等软件在2017年仍然主要用C写成的原因,尽管C的问题也很多。 + +程序员的现状很差。尽管由于缺少足够多样化的选择,我们很难认识到 C/C++ 做的不够好的地方。我们都认为在软件里面出现缺陷以及基于安全方面考虑的妥协是理所当然的,而不是想想这其中多少是真的由于语言的设计问题导致的,就像缓存区溢出漏洞一样。 + +所以,为什么我们花了这么长时间才开始解决这个问题?从 C(1972) 面世到 Go(2009) 出现,这其中隔了 37 年;Rust也是在其仅仅一年之前出现。我想根本原因还是经济。 + +从最早的计算机语言开始,人们就已经知道,每种语言的设计都体现了程序员时间与机器资源的相对价值。在机器这端,就是汇编语言,以及之后的 C 语言,这些语言以牺牲开发人员的时间为代价来提高性能。 另一方面,像 Lisp 和(之后的)Python 这样的语言则试图自动处理尽可能多的细节,但这是以牺牲机器性能为代价的。 + +广义地说,这两端的语言的最重要的区别就是有没有自动内存管理。这与经验一致,内存管理缺陷是以机器为中心的语言中最常见的一类缺陷,程序员需要手动管理资源。 + +当一个语言对于程序员和机器的价值的理念与软件开发的某些领域的理念一致时,这个语言就是在经济上可行的。语言设计者通过设计一个适合处理现在或者不远的将来出现的情况的语言,而不是使用现有的语言来解决他们遇到的问题。 + +今年来,时兴的编程语言已经渐渐从需要手动管理内存的语言变为带有自动内存管理以及垃圾回收(GC)机制的语言。这种变化对应了摩尔定律导致的计算机硬件成本的降低,使得程序员的时间与之前相比更加的宝贵。但是,除了程序员的时间以及机器效率的变化之外,至少还有两个维度与这种变化相关。 + +其一就是距离底层硬件的距离。底层软件(内核与服务代码)的低效率会被成倍地扩大。因此我们可以发现,以机器为中心的语言像底层推进而以程序员为中心的语言向着高级发展。因为大多数情况下面向用户的语言仅仅需要以人类的反应速度(0.1秒)做出回应即可。 + +另一个维度就是项目的规模。由于程序员抽象出的问题的漏洞以及自身的疏忽,任何语言都会有预期的每千行代码的出错率。这个比率在以机器为中心的语言上很高,而在程序员为中心的带有 GC 的语言里就大大降低。随着项目规模的增大,带有 GC 语言作为一个防止出错率不堪入目的策略就显得愈发重要起来。 + +当我们使用这三种维度来看当今的编程语言的形势 —— C 语言在底层,蓬勃发展的带有 GC 的语言在上层,我们会发现这基本上很合理。但是还有一些看似不合理的是 —— C 语言的应用不合理地广泛。 + +我为什么这么说?想想那些经典的 Unix 命令行工具吧。这些通常都是可以使用带有完整的POSIX绑定的脚本语言写出的小程序。那样重新编码的程序调试维护拓展起来都会更加简单。 + +但是为什么还是使用 C (或者某些像 eqn 的项目,使用 C++)?因为有转型成本。就算是把相当小相当简单的语言使用新的语言重写并且确认你已经忠实地保留了所有非错误行为都是相当困难的。笼统地说,在任何一个领域的应用编程或者系统编程在语言的权衡过去之后,都可以使用一种哪怕是过时的语言。 + +这就是我和其他预测者犯的大错。 我们认为,降低机器资源成本(增加程序员时间的相对成本)本身就足以取代C语言(以及没有 GC 的语言)。 在这个过程中,我们有一部分或者甚至一大部分都是错误的 - 自20世纪90年代初以来,脚本语言,Java 以及像 Node.js 这样的东西的兴起显然都是这样兴起的的。 + +但是,竞争系统编程语言的新浪潮并非如此。 Rust和Go都明确地回应了_增加项目规模_ 这一需求。 
脚本语言是先是作为编写小程序的有效途径,并逐渐扩大规模,而Rust和Go从一开始就定位为减少_大型项目_中的缺陷率。 比如 Google 的搜索服务和 Facebook 的实时聊天多服务。 + +我认为这就是对 "为什么不再早点儿" 这个问题的回答。Rust 和 Go 实际上并不算晚,他们相对迅速地回应了一个直到最近才被发现低估的成本问题。 + +好,说了这么多理论上的问题。按照这些理论我们能预言什么?它高偶素我们在 C 之后会出现什么? + +推动 GC 语言发展的趋势还没有扭转,也不要期待其扭转。这是大势所趋。因此:最终我们将拥有具有足够低延迟的 GC 技术,可用于内核和底层固件,这些技术将以语言实现方式被提供。 这些才是真正结束C长期统治的语言应有的特性。 + +我们能从 Go 语言开发团队的工作文件中发现端倪,他们正朝着这个方向前进 - 参考关于并发GC 的学术研究,从来没有停止研究。 如果 Go 语言自己没有选择这么做,其他的语言设计师也会这样。 但我认为他们会这么做 - 谷歌推动他们的项目的能力是显而易见的(我们从 “Android 的发展”就能看出来)。 + +在我们拥有那么理想的 GC 之前,我把能替换 C 语言的赌注押在 Go 语言上。因为其 GC 的开销是可以接受的 —— 也就是说不只是应用,甚至是大部分内核外的服务都可以使用。原因很简单: C 的出错率无药可医,转化成本还很高。 + +上周我尝试将 C 语言项目转化到 Go 语言上,我发现了两件事。其一就是这话很简单, C 的语言和 Go 对应的很好。还有就是写出的代码相当简单。因为 GC 的存在以及把集合视为首要的我数据结构,人人都要注意到这一点。但是我意识到我写的代码比我期望的多了不少,比例约为 2:1 —— 和 C 转 Python 类似。 + +抱歉呐,Rust 粉们。你们在内核以及底层固件上有着美好的未来。但是你们在别的领域被 Go 压的很惨。没有 GC ,再加上难以从 C 语言转化过来,还有就是有一部分 API 还是不够完善。(我的 select(2) 又哪去了啊?)。 + +对你们来说,唯一的安慰就是,C++ 粉比你们更糟糕 —— 如果这算是安慰的话。至少 Rust 还可以在 Go 顾及不到的 C 领域内大展宏图。C++ 可不能。 + +本文由 [Eric Raymond][5] 发布在 [Software][4] 栏。[收藏链接][6]。 + +-------------------------------------------------------------------------------- + +via: http://esr.ibiblio.org/?p=7724 + +作者:[Eric Raymond][a] +译者:[name1e5s](https://github.com/name1e5s) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://esr.ibiblio.org/?author=2 +[1]:http://esr.ibiblio.org/?author=2 +[2]:http://esr.ibiblio.org/?p=7724 +[3]:http://esr.ibiblio.org/?p=7711 +[4]:http://esr.ibiblio.org/?cat=13 +[5]:http://esr.ibiblio.org/?author=2 +[6]:http://esr.ibiblio.org/?p=7724 diff --git a/sources/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md b/translated/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md similarity index 71% rename from sources/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md rename to translated/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md index 123dca59cb..3f1cacfaab 100644 --- a/sources/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md +++ b/translated/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md @@ -1,19 +1,19 @@ -Command line fun: Insult the user when typing wrong bash command +命令行乐趣:恶搞输错 Bash 命令的用户 ====== -You can configure sudo command to insult user when they type the wrong password. Now, it is possible to abuse insult the user when they enter the wrong command at the shell prompt. +你可以通过配置 sudo 命令去恶搞输入错误密码的用户。但是之后,shell 的恶搞提示语可能会滥用于输入错误命令的用户。 -## Say hello bash-insulter +## 你好 bash-insulter -From the Github page: +来自 Github 页面: -> Randomly insults the user when typing wrong command. It use a new builtin error-handling function named command_not_found_handle in bash 4.x. +> 当用户键入错误命令,随机嘲讽。它使用了一个 bash4.x. 版本的全新内置错误处理函数,叫 command_not_found_handle。 -## Installation +## 安装 -Type the following git command to clone repo: +键入下列 git 命令克隆一个仓库: `git clone https://github.com/hkbakke/bash-insulter.git bash-insulter` -Sample outputs: +示例输出: ``` Cloning into 'bash-insulter'... remote: Counting objects: 52, done. @@ -23,35 +23,35 @@ Unpacking objects: 100% (52/52), done. 
```
-Edit your ~/.bashrc or /etc/bash.bashrc using a text editor such as vi command:
+用文本编辑器编辑你的 ~/.bashrc 或者 /etc/bash.bashrc 文件,比如说使用 vi:
`$ vi ~/.bashrc`
-Append the following lines (see [if..else..fi statement][1] and [source command][2]):
+在其后追加如下几行(具体了解请查看 [if..else..fi 语句][1] 和 [source 命令][2]):
```
if [ -f $HOME/bash-insulter/src/bash.command-not-found ]; then
source $HOME/bash-insulter/src/bash.command-not-found
fi
```
-Save and close the file. Login again or just run it manually if you do not want to logout:
+保存并关闭文件。重新登录;如果不想退出账号,也可以手动运行它:
```
$ . $HOME/bash-insulter/src/bash.command-not-found
```
-## How do I use it?
+## 如何使用它?
-Just type some invalid commands:
+尝试键入一些无效命令:
```
$ ifconfigs
$ dates
```
-Sample outputs:
-[![An interesting bash hook feature to insult you when you type an invalid command. ][3]][3]
+示例输出:
+[![一个有趣的 bash 钩子功能,嘲讽输入了错误命令的你。][3]][3]
-## Customization
+## 自定义
-You need to edit $HOME/bash-insulter/src/bash.command-not-found:
+你需要编辑 $HOME/bash-insulter/src/bash.command-not-found:
`$ vi $HOME/bash-insulter/src/bash.command-not-found`
-Sample code:
+示例代码:
```
command_not_found_handle () {
local INSULTS=(
@@ -89,7 +89,7 @@ command_not_found_handle () {
"Pro tip: type a valid command!"
)
- # Seed "random" generator
+ # 为"随机"数生成器设置种子
RANDOM=$(date +%s%N)
VALUE=$((${RANDOM}%2))
@@ -99,20 +99,20 @@ command_not_found_handle () {
echo "-bash: $1: command not found"
- # Return the exit code normally returned on invalid command
+ # 返回无效命令通常会返回的退出码
return 127
}
```
-## sudo insults
+## sudo 嘲讽
-Edit the sudoers file:
+编辑 sudoers 文件:
`$ sudo visudo`
-Append the following line:
+追加下面这一行:
`Defaults insults`
-Or update as follows i.e. add insults at the end of line:
+或者按如下方式更新,即在行尾添加 insults:
`Defaults !lecture,tty_tickets,!fqdn,insults`
-Here is my file:
+这是我的文件:
```
Defaults env_reset
Defaults mail_badpass
@@ -146,21 +146,21 @@ $ sudo -k # clear old stuff so that we get a fresh prompt
$ sudo ls /root/
$ sudo -i
```
-Sample session:
-[![An interesting sudo feature to insult you when you type an invalid password.][4]][4]
+样例会话:
+[![当输入错误密码时,你会被一个有趣的 sudo 嘲讽语戏弄。][4]][4]
-## Say hello to sl
+## 你好 sl
-[sl is a joke software or classic UNIX][5] game. It is a steam locomotive runs across your screen if you type "sl" (Steam Locomotive) instead of "ls" by mistake.
+[sl 是一个恶搞软件,也是一个经典的 UNIX][5] 游戏。当你错误地把 "ls" 输入成 "sl",将会有一辆蒸汽机车穿过你的屏幕。
`$ sl`
-[![Linux / UNIX Desktop Fun: Steam Locomotive][6]][5]
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/howto/insult-linux-unix-bash-user-when-typing-wrong-command/
作者:[Vivek Gite][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[CYLeft](https://github.com/CYLeft)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/20171119 10 Best LaTeX Editors For Linux.md b/translated/tech/20171119 10 Best LaTeX Editors For Linux.md
deleted file mode 100644
index 9b4650ac97..0000000000
--- a/translated/tech/20171119 10 Best LaTeX Editors For Linux.md
+++ /dev/null
@@ -1,184 +0,0 @@
-针对 Linux 平台的 10 款最好 LaTeX 编辑器
-======
-**简介:一旦你克服了 LaTeX 的学习曲线,就没有什么比得上 LaTeX 了。下面介绍的是针对 Linux 和其他平台的最好的 LaTeX 编辑器。**
-
-## LaTeX 是什么?
-
-[LaTeX][1] 是一个文档制作系统。与纯文本编辑器不同,在 LaTeX 编辑器中你不能只写纯文本,为了组织文档的内容,你还必须使用一些 LaTeX 命令。
-
-![LaTeX 示例][2]![LaTeX 示例][3]
-
-LaTeX 编辑器一般用在出于学术目的的科学研究文档或书籍的出版,最重要的是,当你需要处理包含众多复杂数学符号的文档时,它能够为你带来方便。当然,使用 LaTeX 编辑器是很有趣的,但它也并非总是很有用,除非你对所要编写的文档有一些特别的需求。
-
-## 为什么你应当使用 LaTeX?
- -好吧,正如我前面所提到的那样,使用 LaTeX 编辑器便意味着你有着特定的需求。为了捣腾 LaTeX 编辑器,并不需要你有一颗极客的头脑。但对于那些使用一般文本编辑器的用户来说,它并不是一个很有效率的解决方法。 - -假如你正在寻找一款工具来精心制作一篇文档,同时你对花费时间在格式化文本上没有任何兴趣,那么 LaTeX 编辑器或许正是你所寻找的那款工具。在 LaTeX 编辑器中,你只需要指定文档的类型,它便会相应地为你设置好文档的字体种类和大小尺寸。正是基于这个原因,难怪它会被认为是 [给作家的最好开源工具][4] 之一。 - -但请务必注意: LaTeX 编辑器并不是自动化的工具,你必须首先学会一些 LaTeX 命令来让它能够精确地处理文本的格式。 - -## 针对 Linux 平台的 10 款最好 LaTeX 编辑器 - -事先说明一下,以下列表并没有一个明确的先后顺序,序号为 3 的编辑器并不一定比序号为 7 的编辑器优秀。 - -### 1\. LyX - -![][2] - -![][5] - -LyX 是一个开源的 LaTeX 编辑器,即是说它是网络上可获取到的最好的文档处理引擎之一。LyX 帮助你集中于你的文章,并忘记对单词的格式化,而这些正是每个 LaTeX 编辑器应当做的。LyX 能够让你根据文档的不同,管理不同的文档内容。一旦安装了它,你就可以控制文档中的很多东西了,例如页边距,页眉,页脚,空白,缩进,表格等等。 - -假如你正忙着精心撰写科学性的文档,研究论文或类似的文档,你将会很高兴能够体验到 LyX 的公式编辑器,这也是其特色之一。 LyX 还包括一系列的教程来入门,使得入门没有那么多的麻烦。 - -[LyX][6] - -### 2\. Texmaker - -![][2] - -![][7] - -Texmaker 被认为是 GNOME 桌面环境下最好的 LaTeX 编辑器之一。它呈现出一个非常好的用户界面,带来了极好的用户体验。它也被冠以最实用的 LaTeX 编辑器之一。假如你经常进行 PDF 的转换,你将发现 TeXmaker 相比其他编辑器更加快速。在你书写的同时,你也可以预览你的文档最终将是什么样子的。同时,你也可以观察到可以很容易地找到所需要的符号。 - -Texmaker 也提供一个扩展的快捷键支持。你有什么理由不试着使用它呢? - -[Texmaker][8] - -### 3\. TeXstudio - -![][2] - -![][9] - -假如你想要一个这样的 LaTeX 编辑器:它既能为你提供相当不错的自定义功能,又带有一个易用的界面,那么 TeXstudio 便是一个完美的选择。它的 UI 确实很简单,但是不粗糙。 TeXstudio 带有语法高亮,自带一个集成的阅读器,可以让你检查参考文献,同时还带有一些其他的辅助工具。 - -它同时还支持某些酷炫的功能,例如自动补全,链接覆盖,书签,多游标等等,这使得书写 LaTeX 文档变得比以前更加简单。 - -TeXstudio 的维护很活跃,对于新手或者高级写作者来说,这使得它成为一个引人注目的选择。 - -[TeXstudio][10] - -### 4\. Gummi - -![][2] - -![][11] - -Gummi 是一个非常简单的 LaTeX 编辑器,它基于 GTK+ 工具箱。当然,在这个编辑器中你找不到许多华丽的选项,但如果你只想能够立刻着手写作, 那么 Gummi 便是我们给你的推荐。它支持将文档输出为 PDF 格式,支持语法高亮,并帮助你进行某些基础的错误检查。尽管在 GitHub 上它已经不再被活跃地维护,但它仍然工作地很好。 - -[Gummi][12] - -### 5\. TeXpen - -![][2] - -![][13] - -TeXpen 是另一个简洁的 LaTeX 编辑器。它为你提供了自动补全功能。但其用户界面或许不会让你感到印象深刻。假如你对用户界面不在意,又想要一个超级容易的 LaTeX 编辑器,那么 TeXpen 将满足你的需求。同时 TeXpen 还能为你校正或提高在文档中使用的英语语法和表达式。 - -[TeXpen][14] - -### 6\. ShareLaTeX - -![][2] - -![][15] - -ShareLaTeX 是一款在线 LaTeX 编辑器。假如你想与某人或某组朋友一同协作进行文档的书写,那么这便是你所需要的。 - -它提供一个免费方案和几种付费方案。甚至来自哈佛大学和牛津大学的学生也都使用它来进行个人的项目。其免费方案还允许你添加一位协作者。 - -其付费方案允许你与 GitHub 和 Dropbox 进行同步,并且能够记录完整的文档修改历史。你可以为你的每个方案选择多个协作者。对于学生,它还提供单独的计费方案。 - -[ShareLaTeX][16] - -### 7\. Overleaf - -![][2] - -![][17] - -Overleaf 是另一款在线的 LaTeX 编辑器。它与 ShareLaTeX 类似,它为专家和学生提供了不同的计费方案。它也提供了一个免费方案,使用它你可以与 GitHub 同步,检查你的修订历史,或添加多个合作者。 - -在每个项目中,它对文件的数目有所限制。所以在大多数情况下如果你对 LaTeX 文件非常熟悉,这并不会为你带来不便。 - -[Overleaf][18] - -### 8\. Authorea - -![][2] - -![][19] - -Authorea 是一个美妙的在线 LaTeX 编辑器。当然,如果考虑到价格,它可能不是最好的一款。对于免费方案,它有 100 MB 的数据上传限制和每次只能创建一个私有文档。而付费方案则提供更多的额外好处,但如果考虑到价格,它可能不是最便宜的。你应该选择 Authorea 的唯一原因应该是因为其用户界面。假如你喜爱使用一款提供令人印象深刻的用户界面的工具,那就不要错过它。 - -[Authorea][20] - -### 9\. Papeeria - -![][2] - -![][21] - -Papeeria 是在网络上你能够找到的最为便宜的 LaTeX 在线编辑器,如果考虑到它和其他的编辑器一样可信赖的话。假如你想免费地使用它,则你不能使用它开展私有项目。但是,如果你更偏爱公共项目,它允许你创建不限数目的项目,添加不限数目的协作者。它的特色功能是有一个非常简便的画图构造器,并且在无需额外费用的情况下使用 Git 同步。假如你偏爱付费方案,它赋予你创建 10 个私有项目的能力。 - -[Papeeria][22] - -### 10\. 
Kile - -![Kile LaTeX 编辑器][2] - -![Kile LaTeX 编辑器][23] - -位于我们最好 LaTeX 编辑器清单的最后一位是 Kile 编辑器。有些朋友对 Kile 推崇备至,很大程度上是因为其提供某些特色功能。 - -Kile 不仅仅是一款编辑器,它还是一款类似 Eclipse 的 IDE 工具,提供了针对文档和项目的一整套环境。除了快速编译和预览功能,你还可以使用诸如命令的自动补全,插入引用,按照章节来组织文档等功能。你真的应该使用 Kile 来见识其潜力。 - -Kile 在 Linux 和 Windows 平台下都可获取到。 - -[Kile][24] - -### 总结 - -所以上面便是我们推荐的 LaTeX 编辑器,你可以在 Ubuntu 或其他 Linux 发行版本中使用它们。 - -当然,我们可能还遗漏了某些可以在 Linux 上使用并且有趣的 LaTeX 编辑器。如若你正好知道它们,请在下面的评论中让我们知晓。 - - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/LaTeX-editors-linux/ - -作者:[Ankush Das][a] -译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/ankush/ -[1]:https://www.LaTeX-project.org/ -[2]:data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs= -[3]:https://itsfoss.com/wp-content/uploads/2017/11/LaTeX-sample-example.jpeg -[4]:https://itsfoss.com/open-source-tools-writers/ -[5]:https://itsfoss.com/wp-content/uploads/2017/10/LyX_LaTeX_editor.jpg -[6]:https://www.LyX.org/ -[7]:https://itsfoss.com/wp-content/uploads/2017/10/texmaker_LaTeX_editor.jpg -[8]:http://www.xm1math.net/texmaker/ -[9]:https://itsfoss.com/wp-content/uploads/2017/10/tex_studio_LaTeX_editor.jpg -[10]:https://www.texstudio.org/ -[11]:https://itsfoss.com/wp-content/uploads/2017/10/gummi_LaTeX_editor.jpg -[12]:https://github.com/alexandervdm/gummi -[13]:https://itsfoss.com/wp-content/uploads/2017/10/texpen_LaTeX_editor.jpg -[14]:https://sourceforge.net/projects/texpen/ -[15]:https://itsfoss.com/wp-content/uploads/2017/10/shareLaTeX.jpg -[16]:https://www.shareLaTeX.com/ -[17]:https://itsfoss.com/wp-content/uploads/2017/10/overleaf.jpg -[18]:https://www.overleaf.com/ -[19]:https://itsfoss.com/wp-content/uploads/2017/10/authorea.jpg -[20]:https://www.authorea.com/ -[21]:https://itsfoss.com/wp-content/uploads/2017/10/papeeria_LaTeX_editor.jpg -[22]:https://www.papeeria.com/ -[23]:https://itsfoss.com/wp-content/uploads/2017/11/kile-LaTeX-800x621.png -[24]:https://kile.sourceforge.io/ diff --git a/translated/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md b/translated/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md new file mode 100644 index 0000000000..80805b0d30 --- /dev/null +++ b/translated/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md @@ -0,0 +1,108 @@ +如何使用特殊权限:setuid、setgid 和 sticky 位 +====== + +### 目标 + +了解特殊权限的工作原理,以及如何识别和设置它们。 + +### 要求 + + * 了解标准的 Unix / Linux 权限系统 + +### 难度 + +简单 + +### 约定 + + * **#** \- 要求直接以 root 用户或使用 `sudo` 命令执行指定的命令 + * **$** \- 用普通的非特权用户来执行指定的命令 + +### 介绍 + +通常,在类 Unix 操作系统上,文件和目录的所有权是基于文件创建者的默认 `uid`(user-id)和 `gid`(group-id)的。启动一个进程时也是同样的情况:它以启动它的用户的 uid 和 gid 运行,并具有相应的权限。这种行为可以通过使用特殊的权限进行改变。 + +### setuid 位 + +当使用 setuid 位时,之前描述的行为会有所变化,所以当一个可执行文件启动时,它不会以启动它的用户的权限运行,而是以该文件所有者的权限运行。所以,如果在一个可执行文件上设置了 setuid 位,并且该文件由 root 拥有,当一个普通用户启动它时,它将以 root 权限运行。显然,如果 setuid 位使用不当的话,会带来潜在的安全风险。 + +使用 setuid 权限的可执行文件的例子是 `passwd`,我们可以使用该程序更改登录密码。我们可以通过使用 `ls` 命令来验证: + +``` + +ls -l /bin/passwd +-rwsr-xr-x. 
1 root root 27768 Feb 11 2017 /bin/passwd + +``` + +如何识别 `setuid` 位呢?相信您在上面命令的输出已经注意到,`setuid` 位是用 `s` 来表示的,代替了可执行位的 `x`。小写的 `s` 意味着可执行位已经被设置,否则你会看到一个大写的 `S`。大写的 `S` 发生于当设置了 `setuid` 或 `setgid` 位、但没有设置可执行位 `x` 时。它用于提醒用户这个矛盾的设置:如果可执行位未设置,则 `setuid` 和 `setgid` 位均不起作用。setuid 位对目录没有影响。 + +### setgid 位 + +与 `setuid` 位不同,`setgid` 位对文件和目录都有影响。在第一个例子中,具有 `setgid` 位设置的文件在执行时,不是以启动它的用户所属组的权限运行,而是以拥有该文件的组运行。换句话说,进程的 gid 与文件的 gid 相同。 + +当在一个目录上使用时,`setgid` 位与一般的行为不同,它使得在所述目录内创建的文件,不属于创建者所属的组,而是属于父目录所属的组。这个功能通常用于文件共享(目录所属组中的所有用户都可以修改文件)。就像 setuid 一样,setgid 位很容易识别(我们用 test 目录举例): + +``` + +ls -ld test +drwxrwsr-x. 2 egdoc egdoc 4096 Nov 1 17:25 test + +``` + +这次 `s` 出现在组权限的可执行位上。 + +### sticky 位 + +Sticky 位的工作方式有所不同:它对文件没有影响,但当它在目录上使用时,所述目录中的所有文件只能由其所有者删除或移动。一个典型的例子是 `/tmp` 目录,通常系统中的所有用户都对这个目录有写权限。所以,设置 sticky 位使用户不能删除其他用户的文件: + +``` + +$ ls -ld /tmp +drwxrwxrwt. 14 root root 300 Nov 1 16:48 /tmp + +``` + +在上面的例子中,目录所有者、组和其他用户对该目录具有完全的权限(读、写和执行)。Sticky 位在可执行位上用 `t` 来标识。同样,小写的 `t` 表示可执行权限 `x`也被设置了,否则你会看到一个大写字母 `T`。 + +### 如何设置特殊权限位 + +就像普通的权限一样,特殊权限位可以用 `chmod` 命令设置,使用数字或者 `ugo/rwx` 格式。在前一种情况下,`setuid`、`setgid` 和 `sticky` 位分别由数值 4、2 和 1 表示。例如,如果我们要在目录上设置 `setgid` 位,我们可以运行: + +``` +$ chmod 2775 test +``` + +通过这个命令,我们在目录上设置了 `setgid` 位(由四个数字中的第一个数字标识),并给它的所有者和该目录所属组的所有用户赋予全部权限,对其他用户赋予读和执行的权限(目录上的执行位意味着用户可以 `cd` 进入该目录或使用 `ls` 列出其内容)。 + +另一种设置特殊权限位的方法是使用 `ugo/rwx` 语法: + +``` +$ chmod g+s test +``` + +要将 `setuid` 位应用于一个文件,我们可以运行: + +``` +$ chmod u+s file +``` + +要设置 Sticky 位,可运行: + +``` +$ chmod o+t test +``` + +在某些情况下,使用特殊权限会非常有用。但如果使用不当,可能会引入严重的漏洞,因此使用之前请三思。 + +-------------------------------------------------------------------------------- + +via: https://linuxconfig.org/how-to-use-special-permissions-the-setuid-setgid-and-sticky-bits + +作者:[Egidio Docile][a] +译者:[jessie-pang](https://github.com/jessie-pang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://linuxconfig.org \ No newline at end of file diff --git a/translated/tech/20171121 How to organize your passwords using pass password manager.md b/translated/tech/20171121 How to organize your passwords using pass password manager.md index b129a5daf9..be460cc720 100644 --- a/translated/tech/20171121 How to organize your passwords using pass password manager.md +++ b/translated/tech/20171121 How to organize your passwords using pass password manager.md @@ -3,9 +3,9 @@ ### 目标 -学习使用 "pass" 密码管理器来组织你的密码 +学习在 Linux 上使用 "pass" 密码管理器来管理你的密码 -### 需求 +### 条件 * 需要 root 权限来安装需要的包 @@ -16,15 +16,15 @@ ### 约定 * **#** - 执行指定命令需要 root 权限,可以是直接使用 root 用户来执行或者使用 `sudo` 命令来执行 - * **$** - 使用非特权普通用户执行指定命令 + * **$** - 使用普通的非特权用户执行指定命令 ### 介绍 -如果你有根据目的不同设置不同密码的好习惯,你可能已经感受到要一个密码管理器的必要性了。在 Linux 上有很多选择,可以是专利软件(如果你敢的话)也可以是开源软件。如果你跟我一样喜欢简洁的话,你可能会对 `pass` 感兴趣。 +如果你有根据不同的意图设置不同密码的好习惯,你可能已经感受到需要一个密码管理器的必要性了。在 Linux 上有很多选择,可以是专利软件(如果你敢的话)也可以是开源软件。如果你跟我一样喜欢简洁的话,你可能会对 `pass` 感兴趣。 ### First steps -Pass 作为一个密码管理器,其实际上是对类似 `gpg` 和 `git` 等可信赖的实用工具的一种封装。虽然它也有图形界面,但它专门设计能成在命令行下工作的:因此它也可以在 headless machines 上工作 (LCTT 注:根据 wikipedia 的说法,所谓 headless machines 是指没有显示器、键盘和鼠标的机器,一般通过网络链接来控制)。 +Pass 作为一个密码管理器,其实际上是一些你可能早已每天使用的、可信赖且实用的工具的一种封装,比如 `gpg` 和 `git` 。虽然它也有图形界面,但它专门设计能成在命令行下工作的:因此它也可以在 headless machines 上工作 (LCTT 注:根据 wikipedia 的说法,所谓 headless machines 是指没有显示器、键盘和鼠标的机器,一般通过网络链接来控制)。 ### 步骤 1 - 安装 @@ -42,7 +42,7 @@ Pass 不在官方仓库中,但你可以从 `epel` 中获取道它。要在 Cen # yum install epel-release ``` -然而在 Red Hat 企业版的 Linux 上,这个额外的源是不可用的;你需要从官方的 EPEL 网站上下载它。 +然而在 Red Hat 企业版的 Linux 上,这个额外的源是不可用的;你需要从 EPEL 
官方网站上下载它。 #### Debian and Ubuntu ``` @@ -95,12 +95,12 @@ Password Store pass mysite ``` -然而更好的方法是使用 `-c` 选项让 pass 将密码直接拷贝道粘帖板上: +然而更好的方法是使用 `-c` 选项让 pass 将密码直接拷贝到剪切板上: ``` pass -c mysite ``` -这种情况下粘帖板中的内容会在 `45` 秒后自动清除。两种方法都会要求你输入 gpg 密码。 +这种情况下剪切板中的内容会在 `45` 秒后自动清除。两种方法都会要求你输入 gpg 密码。 ### 生成密码 @@ -109,11 +109,11 @@ Pass 也可以为我们自动生成(并自动存储)安全密码。假设我们 pass generate mysite 15 ``` -若希望密码只包含字母和数字则可以是使用 `--no-symbols` 选项。生成的密码会显示在屏幕上。也可以通过 `--clip` 或 `-c` 选项让 pass 吧密码直接拷贝到粘帖板中。通过使用 `-q` 或 `--qrcode` 选项来生成二维码: +若希望密码只包含字母和数字则可以是使用 `--no-symbols` 选项。生成的密码会显示在屏幕上。也可以通过 `--clip` 或 `-c` 选项让 pass 把密码直接拷贝到剪切板中。通过使用 `-q` 或 `--qrcode` 选项来生成二维码: ![qrcode][1] -从上面的截屏中尅看出,生成了一个二维码,不过由于 `mysite` 的密码以及存在了,pass 会提示我们确认是否要覆盖原密码。 +从上面的截屏中尅看出,生成了一个二维码,不过由于 `mysite` 的密码已经存在了,pass 会提示我们确认是否要覆盖原密码。 Pass 使用 `/dev/urandom` 设备作为(伪)随机数据生成器来生成密码,同时它使用 `xclip` 工具来将密码拷贝到粘帖板中,同时使用 `qrencode` 来将密码以二维码的形式显示出来。在我看来,这种模块化的设计正是它最大的优势:它并不重复造轮子,而只是将常用的工具包装起来完成任务。 @@ -131,9 +131,9 @@ pass git init pass git remote add ``` -我们可以把这个仓库当成普通密码仓库来用。唯一的不同点在于每次我们新增或修改一个密码,`pass` 都会自动将该文件加入索引并创建一个提交。 +我们可以把这个密码仓库当成普通仓库来用。唯一的不同点在于每次我们新增或修改一个密码,`pass` 都会自动将该文件加入索引并创建一个提交。 -`pass` 有一个叫做 `qtpass` 的图形界面,而且 `pass` 也支持 Windows 和 MacOs。通过使用 `PassFF` 插件,它还能获取 firefox 中存储的密码。在它的项目网站上可以查看更多详细信息。试一下 `pass` 吧,你不会失望的! +`pass` 有一个叫做 `qtpass` 的图形界面,而且也支持 Windows 和 MacOs。通过使用 `PassFF` 插件,它还能获取 firefox 中存储的密码。在它的项目网站上可以查看更多详细信息。试一下 `pass` 吧,你不会失望的! -------------------------------------------------------------------------------- @@ -142,7 +142,7 @@ via: https://linuxconfig.org/how-to-organize-your-passwords-using-pass-password- 作者:[Egidio Docile][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[Locez](https://github.com/locez) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md b/translated/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md new file mode 100644 index 0000000000..7913acd02c --- /dev/null +++ b/translated/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md @@ -0,0 +1,216 @@ +用 mod 保护您的网站免受应用层 DOS 攻击 +====== + +有多种恶意攻击网站的方法,比较复杂的方法要涉及数据库和编程方面的技术知识。一个更简单的方法被称为“拒绝服务”或“DOS”攻击。这个攻击方法的名字来源于它的意图:使普通客户或网站访问者的正常服务请求被拒绝。 + +一般来说,有两种形式的 DOS 攻击: + + 1. OSI 模型的三、四层,即网络层攻击 + 2. OSI 模型的七层,即应用层攻击 + +第一种类型的 DOS 攻击——网络层,发生于当大量的垃圾流量流向网页服务器时。当垃圾流量超过网络的处理能力时,网站就会宕机。 + +第二种类型的 DOS 攻击是在应用层,是利用合法的服务请求,而不是垃圾流量。当页面请求数量超过网页服务器能承受的容量时,即使是合法访问者也将无法使用该网站。 + +本文将着眼于缓解应用层攻击,因为减轻网络层攻击需要大量的可用带宽和上游提供商的合作,这通常不是通过配置网络服务器就可以做到的。 + +通过配置普通的网页服务器,可以保护网页免受应用层攻击,至少是适度的防护。防止这种形式的攻击是非常重要的,因为 [Cloudflare][1] 最近 [报道][2] 了网络层攻击的数量正在减少,而应用层攻击的数量则在增加。 + +本文将根据 [zdziarski 的博客][4] 来解释如何使用 Apache2 的模块 [mod_evasive][3]。 + +另外,mod_evasive 会阻止攻击者试图通过尝试数百个组合来猜测用户名和密码,即暴力攻击。 + +Mod_evasive 会记录来自每个 IP 地址的请求的数量。当这个数字超过相应 IP 地址的几个阈值之一时,会出现一个错误页面。错误页面所需的资源要比一个能够响应合法访问的在线网站少得多。 + +### 在 Ubuntu 16.04 上安装 mod_evasive + +Ubuntu 16.04 默认的软件库中包含了 mod_evasive,名称为“libapache2-mod-evasive”。您可以使用 `apt-get` 来完成安装: +``` +apt-get update +apt-get upgrade +apt-get install libapache2-mod-evasive + +``` + +现在我们需要配置 mod_evasive。 + +它的配置文件位于 `/etc/apache2/mods-available/evasive.conf`。默认情况下,所有模块的设置在安装后都会被注释掉。因此,在修改配置文件之前,模块不会干扰到网站流量。 +``` + + #DOSHashTableSize 3097 + #DOSPageCount 2 + #DOSSiteCount 50 + #DOSPageInterval 1 + #DOSSiteInterval 1 + #DOSBlockingPeriod 10 + + #DOSEmailNotify you@yourdomain.com + #DOSSystemCommand "su - someuser -c '/sbin/... 
%s ...'" + #DOSLogDir "/var/log/mod_evasive" + + +``` + +第一部分的参数的含义如下: + + * **DOSHashTableSize** - 正在访问网站的 IP 地址列表及其请求数。 + * **DOSPageCount** - 在一定的时间间隔内,每个的页面的请求次数。时间间隔由 DOSPageInterval 定义。 + * **DOSPageInterval** - mod_evasive 统计页面请求次数的时间间隔。 + * **DOSSiteCount** - 与 DOSPageCount 相同,但统计的是网站内任何页面的来自相同 IP 地址的请求数量。 + * **DOSSiteInterval** - mod_evasive 统计网站请求次数的时间间隔。 + * **DOSBlockingPeriod** - 某个 IP 地址被加入黑名单的时长(以秒为单位)。 + + +如果使用上面显示的默认配置,则在如下情况下,一个 IP 地址会被加入黑名单: + + * 每秒请求同一页面超过两次。 + * 每秒请求 50 个以上不同页面。 + + +如果某个 IP 地址超过了这些阈值,则被加入黑名单 10 秒钟。 + +这看起来可能不算久,但是,mod_evasive 将一直监视页面请求,包括在黑名单中的 IP 地址,并重置其加入黑名单的起始时间。只要一个 IP 地址一直尝试使用 DOS 攻击该网站,它将始终在黑名单中。 + +其余的参数是: + + * **DOSEmailNotify** - 用于接收 DOS 攻击信息和 IP 地址黑名单的电子邮件地址。 + * **DOSSystemCommand** - 检测到 DOS 攻击时运行的命令。 + * **DOSLogDir** - 用于存放 mod_evasive 的临时文件的目录。 + + +### 配置 mod_evasive + +默认的配置是一个很好的开始,因为它的黑名单里不该有任何合法的用户。取消配置文件中的所有参数(DOSSystemCommand 除外)的注释,如下所示: +``` + + DOSHashTableSize 3097 + DOSPageCount 2 + DOSSiteCount 50 + DOSPageInterval 1 + DOSSiteInterval 1 + DOSBlockingPeriod 10 + + DOSEmailNotify JohnW@example.com + #DOSSystemCommand "su - someuser -c '/sbin/... %s ...'" + DOSLogDir "/var/log/mod_evasive" + + +``` + +必须要创建日志目录并且要赋予其与 apache 进程相同的所有者。这里创建的目录是 `/var/log/mod_evasive` ,并且在 Ubuntu 上将该目录的所有者和组设置为 `www-data` ,与 Apache 服务器相同: +``` +mkdir /var/log/mod_evasive +chown www-data:www-data /var/log/mod_evasive + +``` + +在编辑了 Apache 的配置之后,特别是在正在运行的网站上,在重新启动或重新加载之前,最好检查一下语法,因为语法错误将影响 Apache 的启动从而使网站宕机。 + +Apache 包含一个辅助命令,是一个配置语法检查器。只需运行以下命令来检查您的语法: +``` +apachectl configtest + +``` + +如果您的配置是正确的,会得到如下结果: +``` +Syntax OK + +``` + +但是,如果出现问题,您会被告知在哪部分发生了什么错误,例如: +``` +AH00526: Syntax error on line 6 of /etc/apache2/mods-enabled/evasive.conf: +DOSSiteInterval takes one argument, Set site interval +Action 'configtest' failed. +The Apache error log may have more information. + +``` + +如果您的配置通过了 configtest 的测试,那么这个模块可以安全地被启用并且 Apache 可以重新加载: +``` +a2enmod evasive +systemctl reload apache2.service + +``` + +Mod_evasive 现在已配置好并正在运行了。 + +### 测试 + +为了测试 mod_evasive,我们只需要向服务器提出足够的网页访问请求,以使其超出阈值,并记录来自 Apache 的响应代码。 + +一个正常并成功的页面请求将收到如下响应: +``` +HTTP/1.1 200 OK + +``` + +但是,被 mod_evasive 拒绝的将返回以下内容: +``` +HTTP/1.1 403 Forbidden + +``` + +以下脚本会尽可能迅速地向本地主机(127.0.0.1,localhost)的 80 端口发送 HTTP 请求,并打印出每个请求的响应代码。 + +你所要做的就是把下面的 bash 脚本复制到一个文件中,例如 `mod_evasive_test.sh`: +``` +#!/bin/bash +set -e + +for i in {1..50}; do + curl -s -I 127.0.0.1 | head -n 1 +done + +``` + +这个脚本的部分含义如下: + + * curl - 这是一个发出网络请求的命令。 + * -s - 隐藏进度表。 + * -I - 仅显示响应头部信息。 + * head - 打印文件的第一部分。 + * -n 1 - 只显示第一行。 + +然后赋予其执行权限: +``` +chmod 755 mod_evasive_test.sh + +``` + +在启用 mod_evasive **之前**,脚本运行时,将会看到 50 行“HTTP / 1.1 200 OK”的返回值。 + +但是,启用 mod_evasive 后,您将看到以下内容: +``` +HTTP/1.1 200 OK +HTTP/1.1 200 OK +HTTP/1.1 403 Forbidden +HTTP/1.1 403 Forbidden +HTTP/1.1 403 Forbidden +HTTP/1.1 403 Forbidden +HTTP/1.1 403 Forbidden +... + +``` + +前两个请求被允许,但是在同一秒内第三个请求发出时,mod_evasive 拒绝了任何进一步的请求。您还将收到一封电子邮件(邮件地址在选项 `DOSEmailNotify` 中设置),通知您有 DOS 攻击被检测到。 + +Mod_evasive 现在已经在保护您的网站啦! 
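+
+最后再补充一个小技巧:如果你的监控探针、负载均衡器或公司出口 IP 会高频请求网站,它们可能会触发上述阈值而被误封。mod_evasive 提供了 `DOSWhitelist` 指令,用来把可信地址加入白名单。下面是一个示意性的配置片段,可以加入前文编辑过的 evasive.conf 中;其中的内网网段只是假设的例子,请换成你自己环境中的可信地址:
+```
+<IfModule mod_evasive20.c>
+    # 白名单中的地址永远不会被 mod_evasive 封禁
+    # (地址的后几段支持通配符;这里的地址仅为示例)
+    DOSWhitelist 127.0.0.1
+    DOSWhitelist 192.168.1.*
+</IfModule>
+```
+修改配置后,同样先运行 `apachectl configtest` 检查语法,再重新加载 Apache 使其生效。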
+ + +-------------------------------------------------------------------------------- + +via: https://bash-prompt.net/guides/mod_proxy/ + +作者:[Elliot Cooper][a] +译者:[jessie-pang](https://github.com/jessie-pang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://bash-prompt.net/about/ +[1]:https://www.cloudflare.com +[2]:https://blog.cloudflare.com/the-new-ddos-landscape/ +[3]:https://github.com/jzdziarski/mod_evasive +[4]:https://www.zdziarski.com/blog/ \ No newline at end of file diff --git a/translated/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md b/translated/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md new file mode 100644 index 0000000000..5b8fe7f215 --- /dev/null +++ b/translated/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md @@ -0,0 +1,163 @@ +如何统计Linux中文件和文件夹/目录的数量 +====== +嗨,伙计们,今天我们又来了一系列棘手的命令,会多方面帮助你。 这是一种操作命令,它可以帮助您计算当前目录中的文件和目录,递归计数,特定用户创建的文件列表等。 + +在本教程中,我们将向您展示如何使用多个命令,并使用ls,egrep,wc和find命令执行一些高级操作。 下面的命令很有帮助。 + +为了实验,我打算总共创建7个文件和2个文件夹(5个常规文件和2个隐藏文件)。 看到下面的tree命令的输出清楚的展示文件和文件夹列表。 + +**推荐阅读** [文件操作命令][1] +``` +# tree -a /opt +/opt +├── magi +│   └── 2g +│   ├── test5.txt +│   └── .test6.txt +├── test1.txt +├── test2.txt +├── test3.txt +├── .test4.txt +└── test.txt + +2 directories, 7 files + +``` + +**示例-1 :** 统计当前目录文件(排除隐藏文件)。 运行以下命令以确定当前目录中有多少个文件,并且不计算点文件(LCTT译者注:点文件即当前目录文件和上级目录文件)。 +``` +# ls -l . | egrep -c '^-' +4 +``` + +**细节:** + + * `ls` : 列出目录内容 + * `-l` : 使用长列表格式 + * `.` : 列出有关文件的信息(默认为当前目录) + * `|` : 控制操作器将一个程序的输出发送到另一个程序进行进一步处理 + * `egrep` : 打印符合模式的行 + * `-c` : 通用输出控制 + * `'^-'` : 它们分别匹配一行的开头和结尾的空字符串 + + + +**示例-2 :** 统计包含隐藏文件的当前目录文件。 包括当前目录中的点文件。 +``` +# ls -la . | egrep -c '^-' +5 +``` + +**示例-3 :** 运行以下命令来计算当前目录文件和文件夹。 它会一次计算所有的。 +``` +# ls -1 | wc -l +5 +``` + +**细节:** + + * `ls` : 列出目录内容 + * `-l` : 使用长列表格式 + * `|` : 控制操作器将一个程序的输出发送到另一个程序进行进一步处理 + * `wc` : 这是一个为每个文件打印换行符,字和字节数的命令 + * `-l` : 打印换行符数 + + + +**示例-4 :** 统计包含隐藏文件和目录的当前目录文件和文件夹。 +``` +# ls -1a | wc -l +8 +``` + +**示例-5 :** 递归计算当前目录文件,其中包括隐藏文件。 +``` +# find . -type f | wc -l +7 +``` + +**细节 :** + + * `find` : 搜索目录层次结构中的文件 + * `-type` : 文件类型 + * `f` : 常规文件 + * `wc` : 这是一个为每个文件打印换行符,字和字节数的命令 + * `-l` : 打印换行符数 + + + +**示例-6 :** 使用tree命令打印目录和文件数(排除隐藏文件)。 +``` +# tree | tail -1 +2 directories, 5 files +``` + +**示例-7 :** 使用包含隐藏文件的树命令打印目录和文件数。 +``` +# tree -a | tail -1 +2 directories, 7 files +``` + +**示例-8 :** 运行下面的命令递归计算包含隐藏目录的目录。 +``` +# find . -type d | wc -l +3 +``` + +**示例-9 :** 根据文件扩展名计算文件数量。 这里我们要计算 `.txt` 文件。 +``` +# find . 
-name "*.txt" | wc -l +7 +``` + +**示例-10 :** 使用echo命令和wc命令统计当前目录中的所有文件。 `4`表示当前目录中的文件数量。 +``` +# echo * | wc +1 4 39 +``` + +**示例-11 :** 通过使用echo命令和wc命令来统计当前目录中的所有目录。 `1`表示当前目录中的目录数量。 +``` +# echo comic/ published/ sources/ translated/ | wc +1 1 6 +``` + +**示例-12 :** 通过使用echo命令和wc命令来统计当前目录中的所有文件和目录。 `5`表示当前目录中的目录和文件的数量。 +``` +# echo * | wc +1 5 44 +``` + +**示例-13 :** 统计系统(整个系统)中的文件数。 +``` +# find / -type f | wc -l +69769 +``` + +**示例-14 :** 统计系统(整个系统)中的文件夹数。 +``` +# find / -type d | wc -l +8819 +``` + +**示例-15 :** 运行以下命令来计算系统(整个系统)中的文件,文件夹,硬链接和符号链接数。 +``` +# find / -type d -exec echo dirs \; -o -type l -exec echo symlinks \; -o -type f -links +1 -exec echo hardlinks \; -o -type f -exec echo files \; | sort | uniq -c + 8779 dirs + 69343 files + 20 hardlinks + 11646 symlinks +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-count-the-number-of-files-and-folders-directories-in-linux/ + +作者:[Magesh Maruthamuthu][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.2daygeek.com/author/magesh/ +[1]:https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/ diff --git a/translated/tech/20171212 How to Search PDF Files from the Terminal with pdfgrep.md b/translated/tech/20171212 How to Search PDF Files from the Terminal with pdfgrep.md deleted file mode 100644 index 75aae3b97e..0000000000 --- a/translated/tech/20171212 How to Search PDF Files from the Terminal with pdfgrep.md +++ /dev/null @@ -1,64 +0,0 @@ -如何使用 pdfgrep 从终端搜索 PDF 文件 -====== -诸如 [grep][1] 和 [ack-grep][2] 之类的命令行工具对于搜索匹配指定[正则表达式][3]的纯文本非常有用。但是你有没有试过使用这些工具在 PDF 中搜索模板?不要这么做!由于这些工具无法读取PDF文件,因此你不会得到任何结果。他们只能读取纯文本文件。 - -顾名思义,[pdfgrep][4] 是一个小的命令行程序,可以在不打开文件的情况下搜索 PDF 中的文本。它非常快速 - 比几乎所有 PDF 浏览器提供的搜索更快。grep 和 pdfgrep 的区别在于 pdfgrep 对页进行操作,而 grep 对行操作。grep 如果在一行上找到多个匹配项,它也会多次打印单行。让我们看看如何使用该工具。 - -对于 Ubuntu 和其他基于 Ubuntu 的 Linux 发行版来说,这非常简单: -``` -sudo apt install pdfgrep -``` - -对于其他发行版,只要将 `pdfgrep` 作为[包管理器][5]的输入,它就应该能够安装。万一你想浏览代码,你也可以查看项目的[ GitLab 页面][6]。 - -现在你已经安装了这个工具,让我们去测试一下。pdfgrep 命令采用以下格式: -``` -pdfgrep [OPTION...] PATTERN [FILE...] 
-``` - - **OPTION** 是一个额外的属性列表,给出诸如 `-i` 或 `--ignore-case` 这样的命令,这两者都会忽略匹配正则中的大小写。 - - **PATTERN** 是一个扩展的正则表达式。 - - **FILE** 如果它在相同的工作目录或文件的路径,这是文件的名称。 - -我根据官方文档用 Python 3.6 运行命令。下图是结果。 - -![pdfgrep search][7] - -![pdfgrep search][7] - -红色高亮显示所有遇到单词 “queue” 的地方。在命令中加入 `-i` 选项将会匹配单词 “Queue”。请记住,当加入 `-i` 时,大小写并不重要。 - -pdfgrep 有相当多的有趣的选项。不过,我只会在这里介绍几个。 - - - * `-c` 或者 `--count`:这会抑制匹配的正常输出。它只显示在文件中遇到该单词的次数,而不是显示匹配的长输出, -  * `-p` 或者 `--page-count`:这个选项打印页面上匹配的页码和页面上的模式出现次数 -  * `-m` 或者 `--max-count` [number]:指定匹配的最大数目。这意味着当达到匹配次数时,该命令停止读取文件。 - - - -支持的选项的完整列表可以在 man 页面或者 pdfgrep 在线[文档][8]中找到。以防你在处理一些批量文件,不要忘记,pdfgrep 可以同时搜索多个文件。可以通过更改 GREP_COLORS 环境变量来更改默认的匹配高亮颜色。 - -下一次你想在 PDF 中搜索一些东西。请考虑使用 pdfgrep。该工具会派上用场,并且节省你的时间。 - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/search-pdf-files-pdfgrep/ - -作者:[Bruno Edoh][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com -[1] https://www.maketecheasier.com/what-is-grep-and-uses/ -[2] https://www.maketecheasier.com/ack-a-better-grep/ -[3] https://www.maketecheasier.com/the-beginner-guide-to-regular-expressions/ -[4] https://pdfgrep.org/ -[5] https://www.maketecheasier.com/install-software-in-various-linux-distros/ -[6] https://gitlab.com/pdfgrep/pdfgrep -[7] https://www.maketecheasier.com/assets/uploads/2017/11/pdfgrep-screenshot.png (pdfgrep search) -[8] https://pdfgrep.org/doc.html diff --git a/translated/tech/20171214 A step-by-step guide to building open culture.md b/translated/tech/20171214 A step-by-step guide to building open culture.md deleted file mode 100644 index d6674c4286..0000000000 --- a/translated/tech/20171214 A step-by-step guide to building open culture.md +++ /dev/null @@ -1,43 +0,0 @@ -手把手教你构建开放式文化 -====== -我们于 2015 年发表 `开放组织 (Open Organization)` 后,很对各种类型不同大小的公司都对“开放式”文化究竟意味着什么感到好奇。甚至当我跟别的公司谈论我们产品和服务的优势时,也总是很快就从谈论技术转移到人和文化上去了。几乎所有对推动创新和保持行业竞争优势有兴趣的人都在思考这个问题。 - -不是只有高级领导团队 (Senior leadership teams) 才对开放式工作感兴趣。[红帽公司最近一次调查 ][1] 发现 [81% 的受访者 ][2] 同意这样一种说法:"拥有开放式的组织文化对我们公司非常重要。" - -然而要注意的是。同时只有 [67% 的受访者 ][3] 认为:"我们的组织有足够的资源来构建开放式文化。" - -这个结果与我从其他公司那交流所听到的相吻合:人们希望在开放式文化中工作,他们只是不知道该怎么做。对此我表示同情,因为组织的行事风格是很难捕捉,评估,和理解的。在 [Catalyst-In-Chief][4] 中,我将其称之为 "组织中最神秘莫测的部分。" - -开放式组织之所以让人神往是因为在这个数字化转型有望改变传统工作方式的时代,拥抱开放文化是保持持续创新的最可靠的途径。当我们在书写本文的时候,我们所关注的是描述在红帽公司中兴起的那种文化--而不是编写一本如何操作的书。我们并不会制定出一步步的流程来让其他组织采用。 - -这也是为什么与其他领导者和高管谈论他们是如何开始构建开放式文化的会那么有趣。在创建开发组织时,很多高管会说我们要"改变我们的文化"。但是文化并不是一项输入。它是一项输出--它是人们互动和日常行为的副产品。 - -告诉组织成员"更加透明地工作","更多地合作",以及 "更加包容地行动" 并没有什么作用。因为像 "透明," "合作," and "包容" 这一类的文化特质并不是行动。他们只是组织内指导行为的价值观而已。 - -纳入要如何才能构建开放式文化呢? 
- -在过去的两年里,Opensource.com 设计收集了各种以开放的精神来进行工作,管理和领导的最佳实践方法。现在我们在新书 [The Open Organization Workbook][5] 中将之分享出来,这是一本更加规范的引发文化变革的指引。 - -要记住,任何改变,尤其是巨大的改变,都需要许诺 (commitment),耐心,以及努力的工作。我推荐你在通往伟大成功的大道上先使用这本工作手册来实现一些微小的,有意义的成果。 - -通过阅读这本书,你将能够构建一个开放而又富有创新的文化氛围,使你们的人能够茁壮成长。我已經迫不及待想听听你的故事了。 - -本文摘自 [Open Organization Workbook project][6]。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/open-organization/17/12/whitehurst-workbook-introduction - -作者:[Jim Whitehurst][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jwhitehurst -[1]:https://www.redhat.com/en/blog/red-hat-releases-2017-open-source-culture-survey-results -[2]:https://www.techvalidate.com/tvid/923-06D-74C -[3]:https://www.techvalidate.com/tvid/D30-09E-B52 -[4]:https://opensource.com/open-organization/resources/catalyst-in-chief -[5]:https://opensource.com/open-organization/resources/workbook -[6]:https://opensource.com/open-organization/17/8/workbook-project-announcement diff --git a/translated/tech/20171215 How to find and tar files into a tar ball.md b/translated/tech/20171215 How to find and tar files into a tar ball.md deleted file mode 100644 index b1cc728635..0000000000 --- a/translated/tech/20171215 How to find and tar files into a tar ball.md +++ /dev/null @@ -1,120 +0,0 @@ -如何找出并打包文件成 tar 包 -====== - -我想找出所有的 \*.doc 文件并将它们创建成一个 tar 包,然后存储在 /nfs/backups/docs/file.tar 中。是否可以在 Linux 或者类 Unix 系统上查找并 tar 打包文件? - -find 命令用于按照给定条件在目录层次结构中搜索文件。tar 命令是用于 Linux 和类 Unix 系统创建 tar 包的归档工具。 - -[![How to find and tar files on linux unix][1]][1] - -让我们看看如何将 tar 命令与 find 命令结合在一个命令行中创建一个 tar 包。 - -## Find 命令 - -语法是: -``` -find /path/to/search -name "file-to-search" -options -## 找出所有 Perl(*.pl)文件 ## -find $HOME -name "*.pl" -print -## 找出所有 \*.doc 文件 ## -find $HOME -name "*.doc" -print -## 找出所有 *.sh(shell 脚本)并运行 ls -l 命令 ## -find . 
-iname "*.sh" -exec ls -l {} + -``` -最后一个命令的输出示例: -``` --rw-r--r-- 1 vivek vivek 1169 Apr 4 2017 ./backups/ansible/cluster/nginx.build.sh --rwxr-xr-x 1 vivek vivek 1500 Dec 6 14:36 ./bin/cloudflare.pure.url.sh -lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/cmspostupload.sh -> postupload.sh -lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/cmspreupload.sh -> preupload.sh -lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/cmssuploadimage.sh -> uploadimage.sh -lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/faqpostupload.sh -> postupload.sh -lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/faqpreupload.sh -> preupload.sh -lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/faquploadimage.sh -> uploadimage.sh --rw-r--r-- 1 vivek vivek 778 Nov 6 14:44 ./bin/mirror.sh --rwxr-xr-x 1 vivek vivek 136 Apr 25 2015 ./bin/nixcraft.com.301.sh --rwxr-xr-x 1 vivek vivek 547 Jan 30 2017 ./bin/paypal.sh --rwxr-xr-x 1 vivek vivek 531 Dec 31 2013 ./bin/postupload.sh --rwxr-xr-x 1 vivek vivek 437 Dec 31 2013 ./bin/preupload.sh --rwxr-xr-x 1 vivek vivek 1046 May 18 2017 ./bin/purge.all.cloudflare.domain.sh -lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/tipspostupload.sh -> postupload.sh -lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/tipspreupload.sh -> preupload.sh -lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/tipsuploadimage.sh -> uploadimage.sh --rwxr-xr-x 1 vivek vivek 1193 Oct 18 2013 ./bin/uploadimage.sh --rwxr-xr-x 1 vivek vivek 29 Nov 6 14:33 ./.vim/plugged/neomake/tests/fixtures/errors.sh --rwxr-xr-x 1 vivek vivek 215 Nov 6 14:33 ./.vim/plugged/neomake/tests/helpers/trap.sh -``` - -## Tar 命令 - -要[创建 /home/vivek/projects 目录的 tar 包][2],运行: -``` -$ tar -cvf /home/vivek/projects.tar /home/vivek/projects -``` - -## 结合 find 和 tar 命令 - -语法是: -``` -find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} \; -``` -或者 -``` -find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} + -``` -例子: -``` -find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" \; -``` -或者 -``` -find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" + -``` -这里,find 命令的选项: - - * **-name "*.doc"** : 按照给定的模式/标准查找文件。在这里,在 $HOME 中查找所有 \*.doc 文件。 - * **-exec tar ...** : 对 find 命令找到的所有文件执行 tar 命令。 - -这里,tar 命令的选项: - - * **-r** : 将文件追加到归档末尾。参数与 -c 选项具有相同的含义。 - * **-v** : 详细输出。 - * **-f** : out.tar : 将所有文件追加到 out.tar 中。 - - - -也可以像下面这样将 find 命令的输出通过管道输入到 tar 命令中: -``` -find $HOME -name "*.doc" -print0 | tar -cvf /tmp/file.tar --null -T - -``` -传递给 find 命令的 -print0 选项处理特殊的文件名。-null 和 -T 选项告诉 tar 命令从标准输入/管道读取输入。也可以使用 xargs 命令: -``` -find $HOME -type f -name "*.sh" | xargs tar cfvz /nfs/x230/my-shell-scripts.tgz -``` -有关更多信息,请参阅下面的 man 页面: -``` -$ man tar -$ man find -$ man xargs -$ man bash -``` - ------------------------------- - -作者简介: - -作者是 nixCraft 的创造者,是一名经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本培训师。他曾与全球客户以及 IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 Twitter、Facebook 和 Google+ 上关注他。 - --------------------------------------------------------------------------------- - -via: https://www.cyberciti.biz/faq/linux-unix-find-tar-files-into-tarball-command/ - -作者:[Vivek Gite][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.cyberciti.biz -[1]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-find-and-tar-files-on-linux-unix.jpg -[2]:https://www.cyberciti.biz/faq/creating-a-tar-file-linux-command-line/ diff --git a/translated/tech/20171219 How to generate webpages using CGI scripts.md 
b/translated/tech/20171219 How to generate webpages using CGI scripts.md new file mode 100644 index 0000000000..47d424303a --- /dev/null +++ b/translated/tech/20171219 How to generate webpages using CGI scripts.md @@ -0,0 +1,129 @@ +如何使用 CGI 脚本生成网页 +====== +回到互联网的开端,当我第一次创建了我的第一个商业网站,生活是无比的美好。 + +我安装 Apache 并写了一些简单的 HTML 网页,网页上列出了一些关于我的业务的重要信息,比如产品概览以及如何联系我。这是一个静态网站,因为内容很少改变。由于网站的内容很少改变这一性质,因此维护起来也很简单。 + +## 静态内容 + +静态内容很简单,同时也很常见。让我们快速的浏览一些静态网页的例子。你不需要一个可运行网站来执行这些小实验,只需要把这些文件放到 home 目录,然后使用浏览器打开。你所看到的内容将和通过 web 服务器提供这一文件看到的内容一样。 + +对于一个静态网站,你需要的第一件东西就是 index.html 文件,该文件通常放置在 `/var/www/html` 目录下。这个文件的内容可以非常简单,比如可以是像 "Hello, world" 这样一句短文本,没有任何 HTML 标记。它将简单的展示文本串内容。在你的 home 目录创建 index.html 文件,并添加 "hello, world" 作为内容(不需要引号)。在浏览器中通过下面的链接来打开这一文件: +``` +file:///home//index.html +``` + +所以 HTML 不是必须的,但是,如果你有大量需要格式化的文本,那么,不用 HTML 编码的网页的结果将会令人难以理解。 + +所以,下一步就是通过使用一些 HTML 编码来提供格式化,从而使内容更加可读。下面这一命令创建了一个具有 HTML 静态网页所需要的绝对最小标记的页面。你也可以使用你最喜欢的编辑器来创建这一内容。 +``` +echo "

<h1>Hello World</h1>" > test1.html
+```
+
+现在,再次查看 index.html 文件,将会看到和刚才有些不同。
+
+当然,你可以在实际的内容行上添加大量的 HTML 标记,以形成更加完整和标准的网页。下面展示的是更加完整的版本,尽管在浏览器中会看到同样的内容,但这也为更加标准化的网站奠定了基础。继续在 index.html 中写入这些内容并通过浏览器查看。
+```
+<!DOCTYPE html>
+<html>
+<head>
+<title>My Web Page</title>
+</head>
+<body>
+<h1>Hello World</h1>
+</body>
+</html>
+ + +``` + +我使用这些技术搭建了一些静态网站,但我的生活正在改变。 + +## 动态网页 + +我找了一份新工作,这份工作的主要任务就是创建并维护用于一个动态网站的 CGI([公共网关接口][6])代码。字面意思来看,动态意味着在浏览器中生成的网页所需要的 HTML 是由每次访问页面时所访问到的数据生成的。这些数据包括网页表格中的用户输入,以用来在数据库中进行数据查找,结果数据被一些恰当的 HTML 包围着并展示在所请求的浏览器中。但是这不需要非常复杂。 + +通过使用 CGI 脚本,你可以创建一些简单或复杂的交互式程序,通过运行这些程序能够生成基于输入、计算、服务器的当前条件等改变的动态页面。有许多种语言可以用来写 CGI 脚本,在这篇文章中,我将谈到的是 Perl 和 Bash ,其他非常受欢迎的 CGI 语言包括 PHP 和 Python 。 + +这篇文章不会介绍 Apache 或其他任何 web 服务器的安装和配置。如果你能够访问一个你可以进行实验的 web 服务器,那么你可以直接查看它们在浏览器中出现的结果。否则,你可以在命令行中运行程序来查看它们所创建的 HTML 文本。你也可以重定向 HTML 输出到一个文件中,然后通过浏览器查看结果文件。 + +### 使用 Perl + +Perl 是一门非常受欢迎的 CGI 脚本语言,它的优势是强大的文本操作能力。 + +为了使 CGI 脚本可执行,你需要在你的网站的 httpd.conf 中添加下面这行内容。这会告诉服务器可执行 CGI 文件的位置。在这次实验中,不必担心这个问题。 +``` +ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" +``` + +把下面的 Perl 代码添加到文件 index.cgi,在这次实验中,这个文件应该放在你的 home 目录下。如果你使用 web 服务器,那么应把文件的所有者更改为 apache.apache,同时将文件权限设置为 755,因为无论位于哪,它必须是可执行的。 + +``` +#!/usr/bin/perl +print "Content-type: text/html\n\n"; +print "\n"; +print "

<h1>Hello World</h1>\n";
+print "Using Perl<p>\n";
+print "</body></html>\n";
+```
+
+在命令行中运行这个程序并查看结果,它将会展示出它所生成的 HTML 内容。
+
+现在,在浏览器中查看 index.cgi 文件,你所看到的只是文件的内容。浏览器的确将它看做 CGI 内容,但是,Apache 不知道需要将这个文件作为 CGI 程序运行,除非 Apache 的配置中包括上面所展示的 "ScriptAlias" 定义。没有这一配置,Apache 只会简单地将文件中的数据发送给浏览器。如果你能够访问 web 服务器,那么你可以将可执行文件放到 `/var/www/cgi-bin` 目录下。
+
+如果想知道这个脚本的运行结果在浏览器中长什么样,那么,重新运行程序并把输出重定向到一个新文件,名字可以是任何你想要的。然后使用浏览器来查看这一文件,它包含了脚本所生成的内容。
+
+上面这个 CGI 程序依旧生成静态内容,因为它总是生成相同的输出。把下面这行内容添加到 CGI 程序中 "Hello, world" 这一行后面。Perl 的 "system" 命令将会执行跟在它后面的 shell 命令,并把结果返回给程序。此时,我们将会通过 `free` 命令获得当前的 RAM 使用量。
+
+```
+system "free | grep Mem\n";
+```
+
+现在,重新运行这个程序,并把结果重定向到一个文件,在浏览器中重新加载这个文件。你将会看到额外的一行,它展示了系统的内存统计数据。多次运行程序并刷新浏览器,你将会发现,内存使用量应该是不断变化的。

+### 使用 Bash
+
+Bash 可能是用于 CGI 脚本中最简单的语言。用 Bash 来进行 CGI 编程的最大优势是它能够直接访问所有的标准 GNU 工具和系统程序。
+
+把已经存在的 index.cgi 文件重命名为 Perl.index.cgi,然后创建一个新的 index.cgi 文件并添加下面这些内容。记得设置权限使它可执行。
+
+```
+#!/bin/bash
+echo "Content-type: text/html"
+echo ""
+echo '<html>'
+echo '<head>'
+echo '<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">'
+echo '<title>Hello World</title>'
+echo '</head>'
+echo '<body>'
+echo '<h1>Hello World</h1>'
+echo 'Using Bash<p>
' +free | grep Mem +echo '' +echo '' +exit 0 +``` + +在命令行中执行这个文件并查看输出,然后再次运行并把结果重定向到一个临时结果文件中。然后,刷新浏览器查看它所展示的网页是什么样子。 + +## 结论 + +创建能够生成许多种动态网页的 CGI 程序实际上非常简单。尽管这是一个很简单的例子,但是现在你应该看到一些可能性了。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/12/cgi-scripts + +作者:[David Both][a] +译者:[ucasFL](https://github.com/ucasFL) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/dboth +[1]:http://december.com/html/4/element/html.html +[2]:http://december.com/html/4/element/head.html +[3]:http://december.com/html/4/element/title.html +[4]:http://december.com/html/4/element/body.html +[5]:http://december.com/html/4/element/h1.html +[6]:https://en.wikipedia.org/wiki/Common_Gateway_Interface +[7]:http://perldoc.perl.org/functions/system.html diff --git a/translated/tech/20180101 The mysterious case of the Linux Page Table Isolation patches.md b/translated/tech/20180101 The mysterious case of the Linux Page Table Isolation patches.md deleted file mode 100644 index 2cfd429533..0000000000 --- a/translated/tech/20180101 The mysterious case of the Linux Page Table Isolation patches.md +++ /dev/null @@ -1,139 +0,0 @@ -# [关于 Linux 页面表隔离补丁的神秘情况][14] - -* * * - -_长文预警:_ 这是一个目前严格限制的、禁止披露的安全 bug,它影响到目前几乎所有实现虚拟内存的 CPU 架构,需要硬件的改变才能完全解决这个 bug。通过软件来缓解这种影响的紧急开发工作正在进行中,并且最近在 Linux 内核中已经得以实现,并且,在 11 月份,在 NT 内核中也开始了一个类似的紧急开发。在最糟糕的情况下,软件修复会导致一般工作负载出现巨大的减速(译者注:外在表现为 CPU 性能下降)。这里有一个提示,攻击会影响虚拟化环境,包括 Amazon EC2 和 Google 计算引擎,以及另外的提示是,这种精确的攻击可能涉及一个新的 Rowhammer 变种(译者注:一个由 Google 安全团队提出的 DRAM 的安全漏洞,在文章的后面部分会简单介绍)。 - -* * * - -我一般不太关心安全问题,但是,对于这个 bug 我有点好奇,而一般会去写这个主题的人似乎都很忙,要么就是知道这个主题细节的人会保持沉默。这让我在新年的第一天(元旦那天)花了几个小时深入去挖掘关于这个谜团的更多信息,并且我将这些信息片断拼凑到了一起。 - -注意,这是一件相互之间高度相关的事件,因此,它的主要描述都是猜测,除非过一段时间,它的限制禁令被取消。我所看到的,包括涉及到的供应商、许多争论和这种戏剧性场面,将在限制禁令取消的那一天出现。 - -**LWN** - -这个事件的线索出现于 12 月 20 日 LWN 上的 [内核页面表的当前状况:页面隔离][2](致校对:就是昨天我翻译的那篇) 这篇文章。它在 10 月份被奥地利的 [TU Graz][4] 的一组研究人员第一次发表。从文章语气上明显可以看到这项工作的紧急程度,内核的核心开发者紧急加入了 [KAISER 补丁系列][3]。 - -这一系列的补丁的用途从概念上说很简单:为了阻止运行在用户空间的进程在进程页面表中,通过映射得到内核空间页面的各种攻击方式,可以很好地阻止了从非特权的用户空间代码中识别到内核虚拟地址的攻击企图。 - -这组论文描述的 KAISER,[KASLR 已死:KASLR 永存][5](致校对:这里我觉得是[ASLR 已死:KASLR 永存],请查看原文出处。),在它的抽象中,通过特定的引用,在内存管理硬件中去删除所有内核地址空间的信息,即便是用户代码在这个 CPU 上处于活动状态的时候。 - -这个补丁集的魅力在于它触及到了核心,内核的全部基础核心(和与用户空间的接口),显然,它应该被最优先考虑。在 Linux 中当我读到关于内存管理的变化时,通常,第一个引用发生在变化被合并的很久之前,并且,通常会进行多次的评估、拒绝、以及因各种原因爆发争论的一系列过程。 - -KAISER(就是现在的 KPTI)系列被合并还不足三个月。 - -**ASLR 概述** - -从表面上看,设计的这些补丁可以确保地址空间布局随机化仍然有效:这是一个现代操作系统的安全特性,它企图去将更多的随机位,引入到公共映射对象的地址空间中。 - -例如,在引用 /usr/bin/python 时,动态链接将对系统的 C 库、堆、线程栈、以及主要的可执行文件进行排布,去接受随机分配的地址范围: - -> $ bash -c ‘grep heap /proc/$$/maps’ -> 019de000-01acb000 rw-p 00000000 00:00 0                                  [heap] -> $ bash -c 'grep heap /proc/$$/maps’ -> 023ac000-02499000 rw-p 00000000 00:00 0                                  [heap] - -注意跨 bash 进程的开始和结束偏移量上的堆的变化。 - -这个特性的效果是,一个 buffer 管理的 bug 导致一个攻击者可以去覆写一些程序代码指向的内存地址,并且,那个地址将在程序控制流中被使用,诸如这种攻击者可以使控制流转向到一个包含他们选择的内容的 buffer 上,对于攻击者来说,使用机器代码来填充 buffer 将更困难。例如,system() C 库函数将被引用,因为,那个函数的地址在不同的运行进程上不同的。 - -这是一个简单的示例,ASLR 被设计用于去保护类似这样的许多场景,包括阻止攻击者从有可能被用来修改控制流或者实现一个攻击的程序数据的地址内容。 - -KASLR 是 “简化的” 应用到内核本身的 ASLR:在每个重新引导的系统上,属于内核的地址范围是随机的,这样就使得,虽然被攻击者转向的控制流运行在内核模式上,但是,不能猜测到为实现他们的攻击目的所需要的函数和结构的地址,比如,定位当前进程数据,将活动的 UID 从一个非特权用户提升到 root 用户,等等。 - -**坏消息:缓减这种攻击的软件运行成本过于贵重** - -老的 Linux 将内核内存映射在同一个页面表中的这个行为的主要原因是,当用户的代码触发一个系统调用、故障、或者产生中断时,用户内存也是这种行为,这样就不需要改变正在运行的进程的虚拟内存布局。 - 
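顺便说一下,如果你想在这类补丁合入前后直观感受一下系统调用的进出开销,可以做一个很粗糙的对比实验:`bs=1` 会让 dd 以每字节一次 read 加一次 write 系统调用的方式工作,耗时几乎完全由系统调用开销主导。具体数字因硬件和内核配置而异,下面的命令仅供示意:

```
$ time dd if=/dev/zero of=/dev/null bs=1 count=1000000
```

在启用了页面表隔离的内核上,每次系统调用都要额外付出切换页表、冲刷 TLB 的代价,这类负载的耗时会明显上升 —— 这与下文提到的 @grsecurity 测得 "du -s" 变慢 50% 是同一个原因。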
-因为在那样,它不需要去改变虚拟内存布局,进而也就不需要去清洗掉(flush)与 CPU 性能高度依赖的缓存(致校对:意思是如果清掉这些缓存,CPU 性能就会下降),主要是通过 [转换查找缓冲器][6](译者注:Translation Lookaside Buffer(TLB)(将虚拟地址转换为物理地址)。 - -使用已合并的页面表分割补丁后变成,内核每次开始运行时,需要将内核的缓存清掉,并且,每次用户代码恢复运行时都会这样。对于大多数工作负载,在每个系统调用中,TLB 的实际总损失将导致明显的变慢:[@grsecurity 测量的一个简单的案例][7],在一个最新的 AMD CPU 上,Linux “du -s” 变慢了 50%。 - -**34C3** - -在今年的 CCC 上,你可以找到 TU Graz 的研究人员的另一篇,[一个纯 Javascript 的 ASLR 攻击描述][8] ,通过仔细地掌握 CPU 内存管理单元的操作时机,遍历了描述虚拟内存布局的页面表,来实现 ASLR 攻击。它通过高度精确的时间掌握和选择性回收的 CPU 缓存行的组合方式来实现这种结果,一个运行在 web 浏览器的 Javascript 程序可以找回一个 Javascript 对象的虚拟地址,使得利用浏览器内存管理 bugs 被允许进行接下来的攻击。 - -因此,从表面上看,我们有一组 KAISER 补丁,也展示了解除 ASLR 的地址的技术,并且,这个展示使用的是 Javascript,很快就可以在一个操作系统内核上进行重新部署。 - -**虚拟内存概述** - -在通常情况下,当一些机器码尝试去加载、存储、或者跳转到一个内存地址时,现代的 CPUs 必须首先去转换这个  _虚拟地址_  到一个  _物理地址_ ,通过使用一系列操作系统托管的数组(被称为页面表),来描述一个虚拟地址和安装在这台机器上的物理内存之间的映射。 - -在现代操作系统中,虚拟内存可能是仅有的一个非常重要的强大特性:它都阻止了什么呢?例如,一个濒临死亡的进程崩溃了操作系统、一个 web 浏览器 bugs 崩溃了你的桌面环境、或者,一个运行在 Amazon EC2 中的虚拟机的变化影响了同一台主机上的另一个虚拟机。 - -这种攻击的原理是,利用 CPU 上维护的大量的缓存,通过仔细地操纵这些缓存的内存,它可以去推测内存管理单元的地址,以去访问页面表的不同层级,因为一个未缓存的访问将比一个缓存的访问花费更长的时间。通过检测页面表上可访问的元素,它可能去恢复在 MMU(译者注:存储器管理单元)忙于解决的虚拟地址中的大部分比特(bits)。 - -**这种动机的证据,但是不用恐慌** - -我们找到了动机,但是到目前为止,我们并没有看到这项工作引进任何恐慌。总的来说,ASLR 并不能完全缓减这种风险,并且也是一道最后的防线:仅在这 6 个月的周期内,即便是一个没有安全意识的人也能看到一些关于解除(unmasking) ASLR 的指针的新闻,并且,实际上 ASLR 已经存在了。 - -单独的修复 ASLR 并不足于去描述这项工作高优先级背后的动机。 - -**它是硬件安全 bug 的证据** - -通过阅读这一系列补丁,可以明确许多事情。 - -第一,正如 [@grsecurity 指出][9] 的,代码中的一些注释已经被编辑(redacted),并且,描述这项工作的额外的主文档文件已经在 Linux 源代码树中看不到了。 - -测试代码已经以运行时补丁的方式构建,在系统引导时仅当内核检测到是受影响的系统时才会被应用,与对臭名昭著的 [Pentium F00F bug][10] 的缓解措施,使用完全相同的机制: - -![](http://78.media.tumblr.com/1c80c45e14c1e676b35cdd89cc9b557c/tumblr_inline_p1untxZBBD1rkm8fh_500.jpg) - -**更多的线索:Microsoft 也已经实现了页面表的分割** - -通过对 FreeBSD 源代码的一个小挖掘可以看出,目前,其它的免费操作系统没有实现页面表分割,但是,通过 [Alex Ioniscu on Twitter][11] 的启示,这项工作已经不局限于 Linux 了:从 11 月起,公开的 NT 内核也已经实现了同样的技术。 - -**猜测的结果:Rowhammer** - -在 TU Graz 上进一步挖掘对这项工作的研究,我们找到 [When rowhammer only knocks once][12],12 月 4 日通告的一个 [新的 Rowhammer 攻击的变种][13]: - -> 在这篇论文中,我们提出了新的 Rowhammer 攻击和原始的漏洞利用,表明即便是所有防御的组合也没有效果。我们的新攻击技术,对一个位置的反复 “敲打”(hammering),打破了以前假定的触发 Rowhammer bug 的前提条件。 - -作一个快速回顾,Rowhammer 是一个对主要(全部?)种类的商品 DRAMs 的基础问题的一个类别,比如,在普通的计算机中的内存上。通过精确操作内存中的一个区域,这可能会导致内存该区域存储的相关(但是逻辑上是独立的)内容被毁坏。效果是,Rowhammer 可能被用于去反转内存中的比特(bits),使未经授权的用户代码可以访问到,比如,这个比特位描述了系统中的其它代码的访问权限。 - -我发现在 Rowhammer 上,这项工作很有意思,尤其是它反转的位接近页面表分割补丁时,但是,因为 Rowhammer 攻击要求一个目标:你必须知道你尝试去反转的比特在内存中的物理地址,并且,第一步是得到的物理地址可能是一个虚拟地址,比如,在 KASLR 中的解除(unmasking)工作。 - -**猜测的结果:它影响主要的云供应商** - -在我能看到的内核邮件列表中,除了子系统维护者的名字之外,e-mail 地址是属于 Intel、Amazon、和 Google 的雇员,这表示这两个大的云计算供应商对此特别感兴趣,这为我们提供了一个强大的线索,这项工作很大的可能是受虚拟化安全驱动的。 - -它可能会导致产生更多的猜测:虚拟机 RAM 和由这些虚拟机所使用的虚拟内存地址,最终表示为在主机上大量的相邻的数组,那些数组,尤其是在一个主机上只有两个租户的情况下,在 Xen 和 Linux 内核中是通过内存分配来确定的,这样可能会有(准确性)非常高的可预测行为。 - -**最喜欢的猜测:这是一个提升特权的攻击** - -把这些综合到一起,我并不难预测,如果我们在 2018 年使用这些存在提升特权的 bug 的发行版,或者类似的系统去驱动如此紧急的情况,并且在补丁集的抄送列表中出现如此多的感兴趣者的名字。 - -最后的一个趣闻,虽然我在阅读补丁集的时候没有找到我要的东西,但是,在一些代码中标记,paravirtual 或者 HVM Xen 是不受此影响的。 - -**Invest in popcorn, 2018 将很有趣** - -这些猜想是完全有可能的,它离实现很近,但是可以肯定的是,当这些事情被公开后,那将是一个非常令人激动的几个星期。 - --------------------------------------------------------------------------------- - -via: http://pythonsweetness.tumblr.com/post/169166980422/the-mysterious-case-of-the-linux-page-table - -作者:[python sweetness][a] -译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://pythonsweetness.tumblr.com/ -[1]:http://pythonsweetness.tumblr.com/post/169217189597/quiet-in-the-peanut-gallery 
-[2]:http://t.umblr.com/redirect?z=https%3A%2F%2Flwn.net%2FArticles%2F741878%2F&t=ODY1YTM4MjYyYzU2NzNmM2VmYzEyMGIzODJkY2IxNDg0MDhkZDM1MSxXRG55eVpXNw%3D%3D&b=t%3AqBH2b-yWL63V8acbuG-EUQ&p=http%3A%2F%2Fpythonsweetness.tumblr.com%2Fpost%2F169166980422%2Fthe-mysterious-case-of-the-linux-page-table&m=1 -[3]:http://t.umblr.com/redirect?z=https%3A%2F%2Flwn.net%2FArticles%2F738975%2F&t=MzQxMmMyYThhNDdiMGJkZmRmZWI5NDkzZmQ3ZTM4ZDcwYzFhMjU5OSxXRG55eVpXNw%3D%3D&b=t%3AqBH2b-yWL63V8acbuG-EUQ&p=http%3A%2F%2Fpythonsweetness.tumblr.com%2Fpost%2F169166980422%2Fthe-mysterious-case-of-the-linux-page-table&m=1 -[4]:http://t.umblr.com/redirect?z=https%3A%2F%2Fwww.iaik.tugraz.at%2Fcontent%2Fresearch%2Fsesys%2F&t=NzEwZjg5YmQ1ZTNlZWIyYWE0YzgzZmZjN2ZmM2E2YjMzNDk5YTk4YixXRG55eVpXNw%3D%3D&b=t%3AqBH2b-yWL63V8acbuG-EUQ&p=http%3A%2F%2Fpythonsweetness.tumblr.com%2Fpost%2F169166980422%2Fthe-mysterious-case-of-the-linux-page-table&m=1 -[5]:http://t.umblr.com/redirect?z=https%3A%2F%2Fgruss.cc%2Ffiles%2Fkaiser.pdf&t=OTk4NGQwZTQ1NTdlNzE1ZGEyZTdlY2ExMTY1MTJhNzk2ODIzYWY1OSxXRG55eVpXNw%3D%3D&b=t%3AqBH2b-yWL63V8acbuG-EUQ&p=http%3A%2F%2Fpythonsweetness.tumblr.com%2Fpost%2F169166980422%2Fthe-mysterious-case-of-the-linux-page-table&m=1 -[6]:http://t.umblr.com/redirect?z=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FTranslation_lookaside_buffer&t=NjEyNGUzNTk2MGY3ODY3ODIxZjQ1Yjc4YWZjMGNmNmI1OWU1M2U0YyxXRG55eVpXNw%3D%3D&b=t%3AqBH2b-yWL63V8acbuG-EUQ&p=http%3A%2F%2Fpythonsweetness.tumblr.com%2Fpost%2F169166980422%2Fthe-mysterious-case-of-the-linux-page-table&m=1 -[7]:https://twitter.com/grsecurity/status/947439275460702208 -[8]:http://t.umblr.com/redirect?z=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dewe3-mUku94&t=NjczZmIzNWY3YTA2NGFiZDJmYThlMjlhMWM1YTE3NThhNzY0OGJlMSxXRG55eVpXNw%3D%3D&b=t%3AqBH2b-yWL63V8acbuG-EUQ&p=http%3A%2F%2Fpythonsweetness.tumblr.com%2Fpost%2F169166980422%2Fthe-mysterious-case-of-the-linux-page-table&m=1 -[9]:https://twitter.com/grsecurity/status/947147105684123649 -[10]:http://t.umblr.com/redirect?z=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FPentium_F00F_bug&t=Yjc4MDZhNDZjZDdiYWNkNmJkNjQ3ZDNjZmVlZmRkMGM2NDYwN2I2YSxXRG55eVpXNw%3D%3D&b=t%3AqBH2b-yWL63V8acbuG-EUQ&p=http%3A%2F%2Fpythonsweetness.tumblr.com%2Fpost%2F169166980422%2Fthe-mysterious-case-of-the-linux-page-table&m=1 -[11]:https://twitter.com/aionescu/status/930412525111296000 -[12]:http://t.umblr.com/redirect?z=https%3A%2F%2Fwww.tugraz.at%2Fen%2Ftu-graz%2Fservices%2Fnews-stories%2Fplanet-research%2Fsingleview%2Farticle%2Fwenn-rowhammer-nur-noch-einmal-klopft%2F&t=NWM1ZjZlZWU2NzFlMWIyNmI5MGZlNjJlZmM2YTlhOTIzNGY3Yjk4NyxXRG55eVpXNw%3D%3D&b=t%3AqBH2b-yWL63V8acbuG-EUQ&p=http%3A%2F%2Fpythonsweetness.tumblr.com%2Fpost%2F169166980422%2Fthe-mysterious-case-of-the-linux-page-table&m=1 -[13]:http://t.umblr.com/redirect?z=https%3A%2F%2Farxiv.org%2Fabs%2F1710.00551&t=ZjAyMDUzZWRmYjExNGNlYzRlMjE1NTliMTI2M2Y4YjkxMTFhMjI0OCxXRG55eVpXNw%3D%3D&b=t%3AqBH2b-yWL63V8acbuG-EUQ&p=http%3A%2F%2Fpythonsweetness.tumblr.com%2Fpost%2F169166980422%2Fthe-mysterious-case-of-the-linux-page-table&m=1 -[14]:http://pythonsweetness.tumblr.com/post/169166980422/the-mysterious-case-of-the-linux-page-table -[15]:http://pythonsweetness.tumblr.com/ - - diff --git a/translated/tech/20180102 Best open source tutorials in 2017.md b/translated/tech/20180102 Best open source tutorials in 2017.md new file mode 100644 index 0000000000..892c7d7a8e --- /dev/null +++ b/translated/tech/20180102 Best open source tutorials in 2017.md @@ -0,0 +1,85 @@ +Translating zjon +2017最佳开源教程 +====== 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G) + +一个精心编写的教程是任何软件的官方文档的一个很好的补充。 如果官方文件写得不好,不完整或不存在,它也可能是一个有效的选择。 + +2017、Opensource.com 发布一些有关各种主题的优秀教程。这些教程不只是针对专家们的。我们把他们针对各种技能水平和经验的用户。 + +让我们来看看最好的教程。 + +### 关于代码 + +对许多人来说,他们对开源的第一次涉足涉及为一个项目或另一个项目提供代码。你在哪里学习编码或编程?以下两篇文章是很好的起点。 + +严格来说,VM Brasseur 的[如何开始学习编程][1]是为新手程序员的一个很好的起点,而不是一个教程。它不仅指出了一些有助于你开始学习的优秀资源,而且还提供了了解你的学习方式和如何选择语言的重要建议。 + +如果您已经在一个 [IDE][2] 或文本编辑器中记录了几个小时,那么您可能需要学习更多关于编码的不同方法。Fraser Tweedale 的[功能编程的简介][3]很好地引入范式可以应用到许多广泛使用的编程语言。 + +### 流行的 Linux + +Linux 是开源的典范。它运行了大量的网络,为世界顶级超级计算机提供动力。它让任何人都可以在台式机上使用专有的操作系统。 + +如果你有兴趣深入Linux,这里有三个教程供你参考。 + +Jason Baker 查看[设置 Linux $PATH 变量][4]。他引导你通过这一“任何Linux初学者的重要技巧”,使您能够将系统指向包含程序和脚本的目录。 + +拥抱你的核心技师 David Both 指南[建立一个 DNS 域名服务器][5]。他详细地记录了如何设置和运行服务器,包括要编辑的配置文件以及如何编辑它们。 + +想在你的电脑上更复古一点吗?Jim Hall 告诉你如何[在 Linux 下运行 DOS 程序][6]使用 [FreeDOS][7]和 [qemu][8]。Hall 的文章着重于运行 DOS 生产力工具,但并不全是严肃的——他也谈到了运行他最喜欢的 DOS 游戏。 + +### 3 个 Pi + +廉价的单板机使硬件再次变得有趣,这并不是秘密。不仅如此,它们使更多的人更容易接近,无论他们的年龄或技术水平如何。 + +其中,[树莓派][9]可能是最广泛使用的单板计算机。Ben Nuttall 带我们通过如何安装和设置 [Postgres 数据库在树莓派上][10]。从那里,你可以在任何你想要的项目中使用它。 + +如果你的品味包括文学和技术,你可能会对 Don Watkins 的[如何将树莓派变成电子书服务器][11]感兴趣。有一点工作和一个 [Calibre 电子书管理软件][12]的副本,你就可以得到你最喜欢的电子书,无论你在哪里。 + +树莓派并不是其中唯一有特点的。还有 [Orange Pi Pc Plus][13],一种开源的单板机。David Egts 看着[开始使用这个可编程迷你电脑][14]。 + +### 日常计算学 + +开源并不仅针对技术专家,更多的凡人用它来做日常工作,而且更加效率。这里有三篇文章,使我们这些笨手笨脚的人做任何事情变得优雅(或者不是)。 + +当你想到微博的时候,你可能会想到 Twitter。但是 Twitter 的问题多于它的问题。[Mastodon][15] 是 Twitter 的开放的替代方案,它在 2016 年首次亮相。从此, Mastodon 就获得相当大的用户基数。Seth Kenlon 说明[如何加入和使用 Mastodon][16],甚至告诉你如何在 Mastodon 和 Twitter 间交替使用。 + +你需要一点帮助来维持开支吗?你所需要的只是一个电子表格和正确的模板。我的文章[要控制你的财政状况] [17],向你展示了如何用[LibreOffice Calc][18] (或任何其他电子表格编辑器)创建一个简单而有吸引力的财务跟踪。 + +ImageMagick 是强大的图形处理工具。但是,很多人不经常使用。这意味着他们在最需要它们时忘记了命令。如果是你,Greg Pittman 的 [ImageMagick 入门教程][19]在你需要一些帮助时候能派上用场。 + +你有最喜欢的 2017 Opensource.com 公布的教程吗?请随意留言与社区分享。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/best-tutorials + +作者:[Scott Nesbitt][a] +译者:[zjon](https://github.com/zjon) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/scottnesbitt +[1]:https://opensource.com/article/17/4/how-get-started-learning-program +[2]:https://en.wikipedia.org/wiki/Integrated_development_environment +[3]:https://opensource.com/article/17/4/introduction-functional-programming +[4]:https://opensource.com/article/17/6/set-path-linux +[5]:https://opensource.com/article/17/4/build-your-own-name-server +[6]:https://opensource.com/article/17/10/run-dos-applications-linux +[7]:http://www.freedos.org/ +[8]:https://www.qemu.org +[9]:https://en.wikipedia.org/wiki/Raspberry_Pi +[10]:https://opensource.com/article/17/10/set-postgres-database-your-raspberry-pi +[11]:https://opensource.com/article/17/6/raspberrypi-ebook-server +[12]:https://calibre-ebook.com/ +[13]:http://www.orangepi.org/ +[14]:https://opensource.com/article/17/1/how-to-orange-pi +[15]:https://joinmastodon.org/ +[16]:https://opensource.com/article/17/4/guide-to-mastodon +[17]:https://opensource.com/article/17/8/budget-libreoffice-calc +[18]:https://www.libreoffice.org/discover/calc/ +[19]:https://opensource.com/article/17/8/imagemagick + + diff --git a/translated/tech/20180102 HTTP errors in WordPress.md b/translated/tech/20180102 HTTP errors in WordPress.md new file mode 100644 index 
0000000000..5acb3613be --- /dev/null +++ b/translated/tech/20180102 HTTP errors in WordPress.md @@ -0,0 +1,189 @@ +WordPress 中的HTTP错误 +====== +![http error wordpress][1] + +我们会向你介绍,如何修复WordPress中的HTTP错误(在Linux VPS上)。 下面列出了WordPress用户遇到的最常见的HTTP错误,我们的建议侧重于如何发现错误原因以及解决方法。 + + + + +### 1\. 修复在上传图像时出现的HTTP错误 + +如果你在基于WordPress的网页中上传图像时出现错误,这也许是因为服务器上PHP配置,例如存储空间不足或者其他配置问题造成的。 + + +用如下命令查找php配置文件: + + +``` +#php -i | grep php.ini +Configuration File (php.ini) Path => /etc +Loaded Configuration File => /etc/php.ini +``` + +根据输出结果,php配置文件位于 '/etc'文件夹下。编辑 '/etc/php.ini'文件,找出下列行,并按照下面的例子修改其中相对应的值: + + +``` +vi /etc/php.ini +``` +``` +upload_max_filesize = 64M +post_max_size = 32M +max_execution_time = 300 +max_input_time 300 +memory_limit = 128M +``` + +当然,如果你不习惯使用vi文本编辑器,你可以选用自己喜欢的。 + + +不要忘记重启你的网页服务器来让改动生效。 + + +如果你安装的网页服务器是Apache,你需要使用 .htaccess文件。首先,找到 .htaccess 文件。它位于WordPress安装路径的根文件夹下。如果没有找到 .htaccess文件,需要自己手动创建一个,然后加入如下内容: + + +``` +vi /www/html/path_to_wordpress/.htaccess +``` +``` +php_value upload_max_filesize 64M +php_value post_max_size 32M +php_value max_execution_time 180 +php_value max_input_time 180 + +# BEGIN WordPress + +RewriteEngine On +RewriteBase / +RewriteRule ^index\.php$ - [L] +RewriteCond %{REQUEST_FILENAME} !-f +RewriteCond %{REQUEST_FILENAME} !-d +RewriteRule . /index.php [L] + +# END WordPress +``` +如果你使用的网页服务器是nginx,在WordPress实例中具体配置nginx服务器的设置。详细配置和下面的例子相似: + +``` +server { + +listen 80; +client_max_body_size 128m; +client_body_timeout 300; + +server_name your-domain.com www.your-domain.com; + +root /var/www/html/wordpress; +index index.php; + +location = /favicon.ico { +log_not_found off; +access_log off; +} + +location = /robots.txt { +allow all; +log_not_found off; +access_log off; +} + +location / { +try_files $uri $uri/ /index.php?$args; +} + +location ~ \.php$ { +include fastcgi_params; +fastcgi_pass 127.0.0.1:9000; +fastcgi_index index.php; +fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; +} + +location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { +expires max; +log_not_found off; +} +} +``` + +根据自己的PHP配置,你需要将 'fastcgi_pass 127.0.0.1:9000;' 用类似于 'fastcgi_pass unix:/var/run/php7-fpm.sock;' 替换掉(依照实际连接方式) + + +重启nginx服务来使改动生效。 + + + +### 2\. 修复因为不恰当的文件权限而产生的HTTP错误 + +如果你在WordPress中出现一个意外错误,也许是因为不恰当的文件权限导致的,所以需要给WordPress文件和文件夹设置一个正确的权限: + +``` +chown www-data:www-data -R /var/www/html/path_to_wordpress/ +``` + +将 'www-data' 替换成实际的网页服务器用户,将 '/var/www/html/path_to_wordpress' 换成WordPress的实际安装路径。 + + +### 3\. 修复因为内存不足而产生的HTTP错误 + +你可以通过在wp-config.php中添加如下内容来设置PHP的最大内存限制: + +``` + define('WP_MEMORY_LIMIT', '128MB'); +``` + +### 4\. 修复因为PHP.INI文件错误配置而产生的HTTP错误 + +编辑PHP配置主文件,然后找到 'cgi.fix_pathinfo' 这一行。 这一行内容默认情况下是被注释掉的,默认值为1。取消这一行的注释(删掉这一行最前面的分号),然后将1改为0.同时需要修改 'date.timezone' 这一PHP设置,再次编辑 PHP 配置文件并将这一选项改成 'date.timezone = US/Central' (或者将等号后内容改为你所在的时区) + +``` + vi /etc/php.ini +``` +``` + cgi.fix_pathinfo=0 + date.timezone = America/New_York +``` + +### 5. 修复因为Apache mod_security模块而产生的HTTP错误 + +如果你在使用 Apache mod_security 模块,这可能也会引起问题。试着禁用这一模块,确认是否因为在 .htaccess 文件中加入如下内容而引起了问题: + +``` + +SecFilterEngine Off +SecFilterScanPOST Off + +``` + +### 6. 
修复因为有问题的插件/主题而产生的HTTP错误 + +一些插件或主题也会导致HTTP错误以及其他问题。你可以首先禁用有问题的插件/主题,或暂时禁用所有WordPress插件。如果你有phpMyAdmin,使用它来禁用所有插件:在其中找到 wp_options这一表格,在 option_name 这一列中找到 'active_plugins' 这一行,然后将 option_value 改为 :a:0:{} + + +或者用以下命令通过SSH重命名插件所在文件夹: + +``` + mv /www/html/path_to_wordpress/wp-content/plugins /www/html/path_to_wordpress/wp-content/plugins.old +``` + +通常情况下,HTTP错误会被记录在网页服务器的日志文件中,所以寻找错误时一个很好的切入点就是查看服务器日志。 + + +如果你在使用WordPress VPS主机服务的话,你不需要自己去修复WordPress中出现的HTTP错误。你只要让你的Linux管理员来处理它们,他们24小时在线并且会立刻开始着手解决你的问题。 + + + +-------------------------------------------------------------------------------- + +via: https://www.rosehosting.com/blog/http-error-wordpress/ + +作者:[rosehosting][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.rosehosting.com +[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/01/http-error-wordpress.jpg +[2]:https://www.rosehosting.com/wordpress-hosting.html diff --git a/translated/tech/20180103 How to preconfigure LXD containers with cloud-init.md b/translated/tech/20180103 How to preconfigure LXD containers with cloud-init.md new file mode 100644 index 0000000000..919efe4a26 --- /dev/null +++ b/translated/tech/20180103 How to preconfigure LXD containers with cloud-init.md @@ -0,0 +1,204 @@ +如何使用cloud-init来预配置LXD容器 +====== +当你正在创建LXD容器的时候,你希望它们能被预先配置好。例如在容器一启动就自动执行 **apt update**来安装一些软件包,或者运行一些命令。 +这篇文章将讲述如何用[**cloud-init**][1]来对[LXD容器进行进行早期初始化][2]。 +接下来,我们将创建一个包含cloud-init指令的LXD profile,然后启动一个新的容器来使用这个profile。 + +### 如何创建一个新的LXD profile + +查看已经存在的profile: + +```shell +$ lxc profile list ++---------|---------+ +| NAME | USED BY | ++---------|---------+ +| default | 11 | ++---------|---------+ +``` + +我们把名叫default的profile复制一份,然后在其内添加新的指令: + +```shell +$ lxc profile copy default devprofile + +$ lxc profile list ++------------|---------+ +| NAME | USED BY | ++------------|---------+ +| default | 11 | ++------------|---------+ +| devprofile | 0 | ++------------|---------+ +``` + +我们就得到了一个新的profile: **devprofile**。下面是它的详情: + +```yaml +$ lxc profile show devprofile +config: + environment.TZ: "" +description: Default LXD profile +devices: + eth0: + nictype: bridged + parent: lxdbr0 + type: nic + root: + path: / + pool: default + type: disk +name: devprofile +used_by: [] +``` + +注意这几个部分: **config:** , **description:** , **devices:** , **name:** 和 **used_by:**,当你修改这些内容的时候注意不要搞错缩进。(译者注:因为这些内容是YAML格式的,缩进是语法的一部分) + +### 如何把cloud-init添加到LXD profile里 + +[cloud-init][1]可以添加到LXD profile的 **config** 里。当这些指令将被传递给容器后,会在容器第一次启动的时候执行。 +下面是用在示例中的指令: + +```yaml + package_upgrade: true + packages: + - build-essential + locale: es_ES.UTF-8 + timezone: Europe/Madrid + runcmd: + - [touch, /tmp/simos_was_here] +``` + +**package_upgrade: true** 是指当容器第一次被启动时,我们想要**cloud-init** 运行 **sudo apt upgrade**。 +**packages:** 列出了我们想要自动安装的软件。然后我们设置了**locale** and **timezone**。在Ubuntu容器的镜像里,root用户默认的 locale 是**C.UTF-8**,而**ubuntu** 用户则是 **en_US.UTF-8**。此外,我们把时区设置为**Etc/UTC**。 +最后,我们展示了[如何使用**runcmd**来运行一个Unix命令][3]。 + +我们需要关注如何将**cloud-init**指令插入LXD profile。 + +我首选的方法是: + +``` +$ lxc profile edit devprofile +``` + +它会打开一个文本编辑器,以便你将指令粘贴进去。[结果应该是这样的][4]: + +```yaml +$ lxc profile show devprofile +config: + environment.TZ: "" + user.user-data: | + #cloud-config + package_upgrade: true + packages: + - build-essential + locale: es_ES.UTF-8 + timezone: Europe/Madrid + runcmd: + - [touch, /tmp/simos_was_here] +description: Default LXD profile +devices: + eth0: + nictype: 
+
+### 如何使用 LXD profile 启动一个容器
+
+使用 profile **devprofile** 来启动一个新容器:
+
+```
+$ lxc launch --profile devprofile ubuntu:x mydev
+```
+
+然后访问该容器来查看我们的指令是否生效:
+
+```shell
+$ lxc exec mydev bash
+root@mydev:~# ps ax
+  PID TTY      STAT   TIME COMMAND
+    1 ?        Ss     0:00 /sbin/init
+  ...
+  427 ?        Ss     0:00 /usr/bin/python3 /usr/bin/cloud-init modules --mode=f
+  430 ?        S      0:00 /bin/sh -c tee -a /var/log/cloud-init-output.log
+  431 ?        S      0:00 tee -a /var/log/cloud-init-output.log
+  432 ?        S      0:00 /usr/bin/apt-get --option=Dpkg::Options::=--force-con
+  437 ?        S      0:00 /usr/lib/apt/methods/http
+  438 ?        S      0:00 /usr/lib/apt/methods/http
+  440 ?        S      0:00 /usr/lib/apt/methods/gpgv
+  570 ?        Ss     0:00 bash
+  624 ?        S      0:00 /usr/lib/apt/methods/store
+  625 ?        R+     0:00 ps ax
+root@mydev:~#
+```
+
+如果我们连接得够快,通过 **ps ax** 将能够看到系统正在更新软件。我们可以从 /var/log/cloud-init-output.log 中看到完整的日志:
+
+```
+Generating locales (this might take a while)...
+  es_ES.UTF-8... done
+Generation complete.
+```
+
+以上可以看出 locale 已经被更改了。root 用户还是保持默认的 **C.UTF-8**,只有非 root 用户 **ubuntu** 使用了新的 locale。
+
+```
+Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
+Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
+Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
+```
+
+以上是安装软件包之前执行的 **apt update**。
+
+```
+The following packages will be upgraded:
+  libdrm2 libseccomp2 squashfs-tools unattended-upgrades
+4 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
+Need to get 211 kB of archives.
+```
+
+以上是 **package_upgrade: true** 生效、升级软件包时的输出。
+
+```
+The following NEW packages will be installed:
+  binutils build-essential cpp cpp-5 dpkg-dev fakeroot g++ g++-5 gcc gcc-5
+  libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl
+```
+
+以上是安装我们指定的 **build-essential** 软件包时的输出。
+
+**runcmd** 执行的结果如何?
+
+```
+root@mydev:~# ls -l /tmp/
+total 1
+-rw-r--r-- 1 root root 0 Jan  3 15:23 simos_was_here
+root@mydev:~#
+```
+
+可见它已经生效了!
+
+### 结论
+
+当我们启动 LXD 容器的时候,常常需要默认启用一些配置,并且希望能够避免重复劳动。解决这个问题的通常方法是创建 LXD profile,把需要的配置添加进去。这样,当我们启动新的容器时,只需要应用该 profile 即可。
+
+--------------------------------------------------------------------------------
+
+via: https://blog.simos.info/how-to-preconfigure-lxd-containers-with-cloud-init/
+
+作者:[Simos Xenitellis][a]
+译者:[kaneg](https://github.com/kaneg)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://blog.simos.info/author/simos/
+[1]:http://cloudinit.readthedocs.io/en/latest/index.html
+[2]:https://github.com/lxc/lxd/blob/master/doc/cloud-init.md
+[3]:http://cloudinit.readthedocs.io/en/latest/topics/modules.html#runcmd
+[4]:https://paste.ubuntu.com/26313399/
\ No newline at end of file
diff --git a/translated/tech/20180104 How to Change Your Linux Console Fonts.md b/translated/tech/20180104 How to Change Your Linux Console Fonts.md
new file mode 100644
index 0000000000..245f15924e
--- /dev/null
+++ b/translated/tech/20180104 How to Change Your Linux Console Fonts.md
@@ -0,0 +1,88 @@
+如何更改 Linux 控制台上的字体
+======
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/font-size_0.png?itok=d97vmyYa)
+
+我尝试尽可能地保持心灵祥和,然而总有一些事情让我意难平,比如控制台字体太小了。记住我的话,朋友,有一天你的眼睛会退化,无法再看清你编码时用的那些细小字体,到那时你就后悔莫及了。
+
+幸好,Linux 死忠们,你可以更改控制台的字体。按照 Linux 一贯的风格,不断变化的 Linux 环境使得这个问题变得不太简单明了,而 Linux 上也没有统一的字体管理工具,这使得我们很容易就被搞晕。本文,我将会向你展示我找到的更改字体的最简方法。
+
+### Linux 控制台是个什么鬼?
+
+首先让我们来澄清一下我们说的到底是个什么东西。当我提到 Linux 控制台,我指的是 TTY1-6,即你从图形环境用 `Ctrl-Alt-F1` 到 `F6` 切换到的虚拟终端。按下 `Ctrl+Alt+F7` 会切回图形环境。(不过这些热键已经不再通用,你的 Linux 发行版可能有不同的键映射。你的 TTY 的数量也可能不同,你图形环境会话也可能不在 `F7`。比如,Fedora 的默认图形会话是 `F2`,它只有一个额外的终端在 `F1`。)我觉得能同时拥有 X 会话和终端会话实在是太酷了。
+
+Linux 控制台是内核的一部分,而且并不运行在 X 会话中。它和你在没有图形环境的无头服务器中用的控制台是一样的。我称呼在图形会话中的 X 终端为终端,而将控制台和 X 终端统称为终端模拟器。
+
+但这还没完。Linux 终端从早期的 ANSI 时代开始已经经历了长久的发展,多亏了 Linux framebuffer,它现在支持 Unicode,并且对图形也有了有限的一些支持。而且出现了很多在控制台下运行的多媒体应用,这些我们在以后的文章中会提到。
+
+### 控制台截屏
+
+获取控制台截屏的最简单方法是让控制台跑在虚拟机内部,然后你可以在宿主系统上使用中意的截屏软件来抓取。不过借助 [fbcat][1] 和 [fbgrab][2],你也可以直接在控制台上截屏。`fbcat` 会创建一个可移植像素映射格式(PPM)的图像;这是一个高度可移植的未压缩图像格式,可以在所有的操作系统上读取,当然你也可以把它转换成任何喜欢的其他格式。`fbgrab` 则是 `fbcat` 的一个封装脚本,用来生成一个 PNG 文件。不同的人写过多个版本的 `fbgrab`,每个版本的选项都有限,而且只能截取全屏。
+
+`fbcat` 的执行需要 root 权限,而且它的输出需要重定向到文件中。你无需指定文件扩展名,只需要输入文件名就行了:
+```
+$ sudo fbcat > Pictures/myfile
+```
+
+在 GIMP 中裁剪后,就得到了图 1。
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-1_10.png?itok=bHOxrZk9)
+图 1:裁剪后的视图。
+
+如果能在左边空白处有一点填充就好了,如果有读者知道如何实现,请在留言框中告诉我。
+
+`fbgrab` 还有一些选项,你可以通过 `man fbgrab` 来查看,这些选项包括对另一个控制台进行截屏,以及延时截屏等。在下面的例子中可以看到,`fbgrab` 截屏跟 `fbcat` 截屏类似,只是你无需明确进行输出重定向了:
+```
+$ sudo fbgrab Pictures/myOtherfile
+```
+
+### 查找字体
+
+就我所知,除了查看字体存储目录 `/usr/share/consolefonts/`(Debian 等)、`/lib/kbd/consolefonts/`(Fedora)、`/usr/share/kbd/consolefonts`(openSUSE)之外,没有其他方法可以列出已安装的字体了。
+
+### 更改字体
+
+可读的字体不是什么新概念。我们应该尊重以前的经验!可读性是很重要的。可配置性也很重要,然而现如今却不怎么被看重了。
+
+在 Debian/Ubuntu 等系统上,可以运行 `sudo dpkg-reconfigure console-setup` 来设置控制台字体,然后在控制台运行 `setupcon` 命令来让变更生效。`setupcon` 属于 `console-setup` 软件包的一部分。若你的 Linux 发行版中不包含该工具,可以在 [openSUSE][3] 中下载到它。
+
+你也可以直接编辑 `/etc/default/console-setup` 文件。下面这个例子中设置字体为 32 点大小的 Terminus Bold 字体,这是我的最爱,并且严格限制控制台宽度为 80 列。
+```
+ACTIVE_CONSOLES="/dev/tty[1-6]"
+CHARMAP="UTF-8"
+CODESET="guess"
+FONTFACE="TerminusBold"
+FONTSIZE="16x32"
+SCREEN_WIDTH="80"
+```
+
+这里的 FONTFACE 和 FONTSIZE 的值来自于字体的文件名 `TerminusBold32x16.psf.gz`。是的,你需要反转 FONTSIZE 中值的顺序。计算机就是这么搞笑。然后再运行 `setupcon` 来让新配置生效。可以使用 `showconsolefont` 来查看当前所用字体的所有字符。要查看完整的选项说明,请参考 `man console-setup`。
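+如果只想先试试某个字体、不想马上改动配置文件,也可以用 kbd 软件包里的 setfont 命令临时加载字体,重启后即恢复原状。下面的示例沿用上文 Debian 系的字体路径,其他发行版的路径请自行替换:
+
+```
+# 临时加载字体,只影响当前虚拟控制台
+$ sudo setfont /usr/share/consolefonts/TerminusBold32x16.psf.gz
+# 不带参数运行可恢复默认字体
+$ sudo setfont
+```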
+### Systemd
+
+Systemd 与 `console-setup` 不太一样,除了字体之外,你无需安装任何东西。你只需要编辑 `/etc/vconsole.conf` 然后重启就行了。我在 Fedora 和 openSUSE 系统中安装了一些额外的大型号的 Terminus 字体包,因为默认安装的字体最大只有 16 点,而我想要的是 32 点。然后将 `/etc/vconsole.conf` 的内容修改为:
+```
+KEYMAP="us"
+FONT="ter-v32b"
+```
+
+下周我们还将学习一些更加酷的控制台小技巧,以及一些在控制台上运行的多媒体应用。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2018/1/how-change-your-linux-console-fonts
+
+作者:[Carla Schroder][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/cschroder
+[1]:http://jwilk.net/software/fbcat
+[2]:https://github.com/jwilk/fbcat/blob/master/fbgrab
+[3]:https://software.opensuse.org/package/console-setup
diff --git a/translated/tech/20180105 How To Display Asterisks When You Type Password In terminal.md b/translated/tech/20180105 How To Display Asterisks When You Type Password In terminal.md
new file mode 100644
index 0000000000..0b764d093f
--- /dev/null
+++ b/translated/tech/20180105 How To Display Asterisks When You Type Password In terminal.md
@@ -0,0 +1,70 @@
+如何在终端输入密码时显示星号
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/01/Display-Asterisks-When-You-Type-Password-In-terminal-1-720x340.png)
+
+当你在 Web 浏览器或任何 GUI 登录界面中输入密码时,密码会被显示成星号 ******** 或圆点符号 •••••••••••••。这是内置的安全机制,以防止你附近的用户看到你的密码。但是当你在终端输入密码来执行任何 **sudo** 或 **su** 的管理任务时,你在输入密码的时候不会看见星号或者圆点符号。它不会有任何输入密码的视觉指示,也不会有任何光标移动,什么也没有。你不知道自己是否输入了所有的字符,你只会看到一个空白的屏幕!
+
+看看下面的截图。
+
+![][2]
+
+正如你在上面的图片中看到的,我已经输入了密码,但没有任何指示(星号或圆点符号)。现在,我不确定我是否输完了密码。这个安全机制也可以防止你附近的人猜测密码长度。当然,这种行为是可以改变的,这正是本指南要讲的。这并不困难,请继续阅读。
+
+#### 当你在终端输入密码时显示星号
+
+要在终端输入密码时显示星号,我们需要在 **“/etc/sudoers”** 中做一些小修改。在做任何更改之前,最好备份这个文件。为此,只需运行:
+```
+sudo cp /etc/sudoers{,.bak}
+```
+
+上述命令将 /etc/sudoers 备份为名为 /etc/sudoers.bak 的文件。万一在编辑文件后出了什么错,你可以用它来恢复。
+
+接下来,使用下面的命令编辑 **“/etc/sudoers”**:
+```
+sudo visudo
+```
+
+找到下面这行:
+```
+Defaults env_reset
+```
+
+![][3]
+
+在该行的末尾添加一个额外的单词 **“,pwfeedback”**,如下所示。
+```
+Defaults env_reset,pwfeedback
+```
+
+![][4]
+
+然后,按下 **“CTRL + x”** 和 **“y”** 保存并关闭文件。重新启动终端以使更改生效。
+
+现在,当你在终端输入密码时,你会看到星号。
+
+![][5]
+
+如果你对在终端输入密码时看不到密码感到不习惯,那么这个小技巧会有帮助。请注意,这样一来,当你输入密码时,其他用户就可以预测你的密码长度了。如果你不介意,请按照上述方法进行更改,以使你的密码可见(当然,是显示为星号!)。
+
+现在就是这样了。后面还有更好的东西,敬请关注!
+
+干杯!
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/display-asterisks-type-password-terminal/
+
+作者:[SK][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/password-1.png ()
+[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1.png ()
+[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1-1.png ()
+[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-2.png ()
diff --git a/translated/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md b/translated/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md
new file mode 100644
index 0000000000..3681dfa3c6
--- /dev/null
+++ b/translated/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md
@@ -0,0 +1,137 @@
+六个例子带你入门 size 命令
+======
+
+正如你所知道的那样,Linux 中的目标文件或者说可执行文件由多个段组成(比如 text 和 data)。若你想知道每个段的大小,那么确实存在这么一个命令行工具,那就是 `size`。在本教程中,我们将会用几个简单易懂的案例来讲解该工具的基本用法。
+
+在我们开始前,有必要先声明一下,本文的所有案例都在 Ubuntu 16.04 LTS 中测试过了。
+
+## Linux size 命令
+
+size 命令基本上就是输出指定目标文件各段及其总和的大小。下面是该命令的语法:
+```
+size [-A|-B|--format=compatibility]
+            [--help]
+            [-d|-o|-x|--radix=number]
+            [--common]
+            [-t|--totals]
+            [--target=bfdname] [-V|--version]
+            [objfile...]
+```
+
+man 页是这样描述它的:
+```
+GNU 的 size 程序列出参数列表 objfile 中,各目标文件(object)或存档库文件(archive)的段节(section)大小,以及总大小。默认情况下,对每个目标文件或存档库中的每个模块都会产生一行输出。
+
+objfile... 是待检查的目标文件(object)。如果没有指定,则默认为文件 "a.out"。
+```
+
+下面是一些问答方式的案例,希望能让你对 size 命令有所了解。
+
+## Q1。如何使用 size 命令?
+
+size 的基本用法很简单。你只需要将目标文件/可执行文件名称作为输入就行了。下面是一个例子:
+
+```
+size apl
+```
+
+该命令在我的系统中的输出如下:
+
+[![How to use size command][1]][2]
+
+前三列的内容分别是 text、data 和 bss 段及其相应的大小。然后是十进制格式和十六进制格式的总大小。最后是文件名。
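+为了更直观地感受这几个段的含义,你可以自己编译一个小程序来观察 data 段和 bss 段的变化。下面是我为演示而构造的一个假设性示例(文件名、数组大小都可随意更换,需要系统中装有 gcc):
+
+```
+$ cat > size_demo.c <<'EOF'
+/* 未初始化的全局数组,计入 bss 段 */
+int uninitialized_buf[4096];
+/* 已初始化的全局数组,计入 data 段 */
+int initialized_buf[4096] = { 1 };
+int main(void) { return 0; }
+EOF
+$ gcc size_demo.c -o size_demo
+$ size size_demo
+```
+
+试着调整两个数组的大小并重新编译,再运行 size,就能看到 data 和 bss 两列相应地变化。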
+
+## Q2。如何切换不同的输出格式?
+
+根据 man 页的说法,size 的默认输出格式类似于 Berkeley 的格式。然而,如果你想的话,你也可以使用 System V 规范。要做到这一点,你可以使用 `--format` 选项加上 `SysV` 值。
+
+```
+size apl --format=SysV
+```
+
+下面是它的输出:
+
+[![How to switch between different output formats][3]][4]
+
+## Q3。如何切换使用其他的单位?
+
+默认情况下,段的大小是以十进制的方式来展示。然而,如果你想的话,也可以使用八进制或十六进制来表示。对应的命令行参数分别为 `-o` 和 `-x`。
+
+[![How to switch between different size units][5]][6]
+
+关于这些参数,man 页是这么说的:
+```
+-d
+-o
+-x
+--radix=number
+
+使用这几个选项,你可以让各个段节的大小以十进制(`-d',或 `--radix 10')、八进制(`-o',或 `--radix 8')或十六进制(`-x',或 `--radix 16')数字的格式显示。`--radix number' 只支持三个数值参数(8、10、16)。总大小以两种进制给出:`-d' 或 `-x' 时为十进制和十六进制输出,`-o' 时为八进制和十六进制输出。
+```
+
+## Q4。如何让 size 命令显示所有目标文件的总大小?
+
+如果你用 size 一次性查看多个文件的段大小,则通过使用 `-t` 选项还可以让它显示各列值的总和。
+
+```
+size -t [file1] [file2] ...
+```
+
+下面是该命令执行时的截屏:
+
+[![How to make size command show totals of all object files][7]][8]
+
+`-t` 选项让它多加了最后那一行。
+
+## Q5。如何让 size 输出每个文件中公共符号的总大小?
+
+若你为 size 提供多个输入文件作为参数,而且想让它显示每个文件中公共符号(指 common segment 中的 symbol)的大小,则你可以带上 `--common` 选项。
+
+```
+size --common [file1] [file2] ...
+```
+
+另外需要指出的是,当使用 Berkeley 格式时,这些公共符号的大小会被纳入 bss 的大小中。
+
+## Q6。还有什么其他的选项?
+
+除了刚才提到的那些选项外,size 还有一些一般性的命令行选项,比如 `-v`(显示版本信息)和 `-h`(显示可选参数和选项的摘要)。
+
+[![What are the other available command line options][9]][10]
+
+除此之外,你也可以使用 `@file` 选项来让 size 从文件中读取命令行选项。下面是详细的相关说明:
+```
+读出来的选项会插入并替代原来的 @file 选项。若文件不存在或者无法读取,则该选项不会被删除,而是会以字面意义来解释该选项。
+
+文件中的选项以空格分隔。当选项中要包含空格时,需要用单引号或双引号将整个选项包起来。
+通过在字符前面添加一个反斜杠,可以将任何字符(包括反斜杠本身)纳入到选项中。
+文件本身也能包含其他的 @file 选项;任何这样的选项都会被递归处理。
+```
+
+## 结论
+
+很明显,size 命令并不适用于所有人,它的目标群体是那些需要处理 Linux 中目标文件/可执行文件结构的人。因此,如果你刚好是目标受众,那么多试试我们这里提到的那些选项,你应该做好每天都使用这个工具的准备。想了解关于 size 的更多信息,请阅读它的 [man 页][11]。
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-size-command/
+
+作者:[Himanshu Arora][a]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/command-tutorial/size-basic-usage.png
+[2]:https://www.howtoforge.com/images/command-tutorial/big/size-basic-usage.png
+[3]:https://www.howtoforge.com/images/command-tutorial/size-format-option.png
+[4]:https://www.howtoforge.com/images/command-tutorial/big/size-format-option.png
+[5]:https://www.howtoforge.com/images/command-tutorial/size-o-x-options.png
+[6]:https://www.howtoforge.com/images/command-tutorial/big/size-o-x-options.png
+[7]:https://www.howtoforge.com/images/command-tutorial/size-t-option.png
+[8]:https://www.howtoforge.com/images/command-tutorial/big/size-t-option.png
+[9]:https://www.howtoforge.com/images/command-tutorial/size-v-x1.png
+[10]:https://www.howtoforge.com/images/command-tutorial/big/size-v-x1.png
+[11]:https://linux.die.net/man/1/size
diff --git a/translated/tech/20180110 Best Linux Screenshot and Screencasting Tools.md b/translated/tech/20180110 Best Linux Screenshot and Screencasting Tools.md
new file mode 100644
index 0000000000..277ded9f69
--- /dev/null
+++ b/translated/tech/20180110 Best Linux Screenshot and Screencasting Tools.md
@@ -0,0 +1,140 @@
+Linux 最好的图片截取和视频截录工具
+======
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/best-linux-screenshot-and-screencasting-tools_orig.jpg)
+
+这里可能有一个困扰你多时的问题:当你想要获取一张屏幕截图向开发者反馈问题,或是在 _Stack Overflow_ 寻求帮助时,你可能缺乏一个可靠的工具来保存和发送截图。GNOME 有一些以程序和 shell 拓展形式存在的工具。不必担心,这里有 Linux 最好的屏幕截图工具,供你截取图片或截录视频。
+
+## Linux 最好的图片截取和视频截录工具
+
+### 1. Shutter
+
+ [![shutter Linux 截图工具](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg)][2]
+
+[Shutter][3] 可以截取任意你想截取的屏幕区域,是 Linux 最好的截屏工具之一。它还允许你在保存截屏之前预览图片。GNOME 面板顶部有一个 Shutter 拓展菜单,使用户访问该软件更加方便。
+
+你可以选择性地截取窗口、桌面、光标下的面板、自由区域、菜单、提示框或网页。Shutter 允许用户把屏幕截图直接上传到设置中选定的云服务上。它同样允许用户在保存截图之前编辑图片,还提供可自由添加或移除的插件。
+
+终端内键入下列命令安装此工具:
+
+```
+sudo add-apt-repository -y ppa:shutter/ppa
+sudo apt-get update && sudo apt-get install shutter
+```
+
+### 2. Vokoscreen
+
+ [![vokoscreen Linux 屏幕录制工具](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg)][4]
+
+[Vokoscreen][5] 是一款可以记录屏幕活动并配上讲解的软件。它有一个简洁的界面,界面的顶端包含一个简明的菜单栏,方便用户开始录制视频。
+
+你可以选择录制整个屏幕、录制一个窗口,抑或是录制一个自由区域,并且自定义保存类型;你甚至可以将屏幕录制保存为 gif 文件。当然,你也可以用网络摄像头录下自己,就像那些教程视频的作者一样。录制完成后,你还可以在应用程序中回放视频。
+
+ [![vokoscreen preferences](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg)][6]
+
+你可以从自己发行版的仓库中安装 Vokoscreen,或者在 [pkgs.org][7] 选择下载适合你发行版的软件包。
+
+```
+sudo dpkg -i vokoscreen_2.5.0-1_amd64.deb
+```
+
+### 3. OBS
+
+ [![obs Linux 视频截录](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg)][8]
+
+[OBS][9] 既可以用来录制屏幕,也可以用来向互联网推流直播。它允许你一边讲解,一边看到自己正在录制的内容。它允许你根据喜好选择录制视频的品质,也允许你选择文件的保存类型。除了视频录制功能之外,你还可以切换到 Studio 模式,不借助其他软件编辑视频。要在你的 Linux 系统中安装 OBS,你必须确保你的电脑已安装 FFmpeg。Ubuntu 14.04 或更早的版本安装 FFmpeg 可以使用如下命令:
+
+```
+sudo add-apt-repository ppa:kirillshkrogalev/ffmpeg-next
+
+sudo apt-get update && sudo apt-get install ffmpeg
+```
+
+Ubuntu 15.04 以及之后的版本,你可以在终端中键入如下命令安装 FFmpeg:
+
+```
+sudo apt-get install ffmpeg
+```
+
+如果 FFmpeg 安装完成,在终端中键入如下命令安装 OBS:
+
+```
+sudo add-apt-repository ppa:obsproject/obs-studio
+
+sudo apt-get update
+
+sudo apt-get install obs-studio
+```
+
+### 4. Green Recorder
+
+ [![屏幕录制工具](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg)][10]
+
+[Green recorder][11] 是一款界面简洁的简单录屏程序。你可以选择录制的内容(视频或单纯的音频),可以显示鼠标指针,甚至可以跟随鼠标录制视频。同样,你可以选择录制窗口或是自由区域,以便在录制结果中只保留需要的内容;你还可以自定义保存视频的帧率。如果你想要延迟录制,它提供给你一个选项,可以设置出你想要的延迟时间。它还提供一个录制结束时运行指定命令的选项,这样,命令就可以在视频录制结束后立即运行。
+
+在终端中键入如下命令来安装 green recorder:
+
+```
+sudo add-apt-repository ppa:fossproject/ppa
+
+sudo apt update && sudo apt install green-recorder
+```
+
+### 5. Kazam
+
+ [![kazam screencasting tool for linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg)][12]
+
+[Kazam][13] 在使用截图工具的 Linux 用户中十分流行。这是一款简单直观的软件,它可以让你进行屏幕截图或视频录制,也允许在截图或录制之前设置延时。它可以让你选择录制区域、窗口或是你想要抓取的整个屏幕。Kazam 的界面布局非常好,和其他软件相比毫无复杂感。它的特点,就是让你优雅地截图。Kazam 在系统托盘和菜单中都有图标,无需打开应用本身,你就可以开始屏幕截图。
+
+终端中键入如下命令来安装 Kazam:
+
+```
+sudo apt-get install kazam
+```
+
+如果仓库中没有找到该软件包,你需要使用下面的命令通过 PPA 安装它:
+
+```
+sudo add-apt-repository ppa:kazam-team/stable-series
+
+sudo apt-get update && sudo apt-get install kazam
+```
+### 6. GNOME 拓展截屏工具
+
+ [![gnome screenshot extension](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg)][1]
+
+GNOME 有一个拓展就叫做 Screenshot Tool,它常驻系统面板(除非你手动禁用它)。由于常驻面板,它会一直待命,点击即可截图,方便易得是它最主要的特点。这个工具也有用来设置首选项的选项窗口。在 extensions.gnome.org 中搜索 “_Screenshot Tool_”,即可在你的 GNOME 中安装它。
+
+你需要安装 gnome 拓展、chrome 拓展和 GNOME 调整工具才能使用这个工具。
+
+ [![gnome screenshot 拓展选项](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg)][14]
+
+当你碰到一个问题,不知道怎么处理,想要在 [the Linux community][15] 或者其他开发社区分享、寻求帮助的时候,**Linux 截图工具**尤其合适。学习开发、编程或者其他任何事物时,你都会发现这些工具在分享截图的时候真的很实用。Youtube 用户和教程制作爱好者会发现视频截录工具真的很适合录制可以发表的教程。
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxandubuntu.com/home/best-linux-screenshot-screencasting-tools
+
+作者:[linuxandubuntu][a]
+译者:[CYLeft](https://github.com/CYLeft)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxandubuntu.com
+[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg
+[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg
+[3]:http://shutter-project.org/
+[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg
+[5]:https://github.com/vkohaupt/vokoscreen
+[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg
+[7]:https://pkgs.org/download/vokoscreen
+[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg
+[9]:https://obsproject.com/
+[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg
+[11]:https://github.com/foss-project/green-recorder
+[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg
+[13]:https://launchpad.net/kazam
+[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg
+[15]:http://www.linuxandubuntu.com/home/top-10-communities-to-help-you-learn-linux
diff --git a/translated/tech/20180111 The Fold Command Tutorial With Examples For Beginners.md b/translated/tech/20180111 The Fold Command Tutorial With Examples For Beginners.md
new file mode 100644
index 0000000000..9cc63eb46a
--- /dev/null
+++ b/translated/tech/20180111 The Fold Command Tutorial With Examples For Beginners.md
@@ -0,0 +1,118 @@
+fold 命令入门级示例教程
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/01/Fold-Command-2-720x340.png)
+
+你有没有在某种情况下,想要折叠或打断命令的输出,来适应特定的宽度?在运行虚拟机的时候,我遇到过几次这种情况,特别是在没有 GUI 的服务器上。无论何时,如果你想把命令的输出限制在特定的宽度内,来看看这里!**fold** 命令就能派上用场!fold 命令会按照指定的宽度调整输入文件中的每一行,并将其打印到标准输出。
+
+在这个简短的教程中,我们将通过一些实例来了解 fold 命令的用法。
+
+### fold 命令示例教程
+
+fold 命令是 GNU coreutils 包的一部分,所以我们不用为安装的事情烦恼。
+
+fold 命令的典型语法:
+
+```
+fold [OPTION]... [FILE]...
+```
+
+请允许我向您展示一些示例,以便您更好地了解 fold 命令。
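+如果你想跟着动手练习,可以先自己生成一个类似的测试文件。下面是一个假设性的示例命令,生成三行很长的随机文本(内容当然和我文件里的不一样):
+
+```
+# 生成 3 行很长的随机文本,写入 linux.txt
+$ for i in 1 2 3; do head -c 300 /dev/urandom | base64 -w0; echo; done > linux.txt
+```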
+ +![][2] + +要将上述文件中的每一行换行为默认宽度,请运行: + +``` +fold linux.txt +``` + +每行**80**列是默认的宽度。 这里是上述命令的输出: + +![][3] + +正如你在上面的输出中看到的,fold命令已经将输出限制为80个字符的宽度。 + +当然,我们可以指定您的首选宽度,例如50,如下所示: + +``` +fold -w50 linux.txt +``` + +Sample output would be: + +![][4] + +我们也可以将输出写入一个新的文件,如下所示: + +``` +fold -w50 linux.txt > linux1.txt +``` + +以上命令将把**linux.txt**的行宽度改为50个字符,并将输出写入到名为**linux1.txt**的新文件中。 + +让我们检查一下新文件的内容: + +``` +cat linux1.txt +``` + +![][5] + +你有没有注意到前面的命令的输出? 有些词在行之间被打破。 为了解决这个问题,我们可以使用-s标志来在空格处换行。 + +以下命令将给定文件中的每行调整为宽度“50”,并在空格处换到新行: + +``` +fold -w50 -s linux.txt +``` + +示例输出: + +![][6] + +看清楚了吗? 现在,输出很清楚。 换到新行中的单词都是用空格隔开的,所在行单词的长度大于50的时候就会被调整到下一行。 + +在所有上面的例子中,我们用列来限制输出宽度。 但是,我们可以使用**-b**选项将输出的宽度强制为指定的字节数。 以下命令以20个字节中断输出。 + +``` +fold -b20 linux.txt +``` + +Sample output: + +![][7] + +**另请阅读:** + ++ [Unix命令入门级示例教程][8] + +有关更多详细信息,请参阅man手册页。 +``` +man fold +``` + +而且,这些就是所有的内容了。 您现在知道如何使用fold命令以适应特定的宽度来限制命令的输出。 我希望这是有用的。 我们将每天发布更多有用的指南。 敬请关注! + +干杯! + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/fold-command-tutorial-examples-beginners/ + +作者:[SK][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-1.png +[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-2.png +[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-3-1.png +[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-4.png +[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-5-1.png +[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-6-1.png +[8]:https://www.ostechnix.com/uniq-command-tutorial-examples-beginners/ diff --git a/translated/tech/20180112 Top 5 Firefox extensions to install now.md b/translated/tech/20180112 Top 5 Firefox extensions to install now.md new file mode 100644 index 0000000000..9f4698aea7 --- /dev/null +++ b/translated/tech/20180112 Top 5 Firefox extensions to install now.md @@ -0,0 +1,79 @@ +五个值得现在安装的火狐插件 +====== + +合适的插件能大大增强你浏览器的功能,但仔细挑选插件很重要。本文有五个值得一看的插件。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/firefox_blue_lead.jpg) + +对于很多用户来说,网页浏览器已经成为电脑使用体验的重要环节。现代浏览器已经发展成强大、可拓展的平台。作为平台的一部分,_插件_能添加或修改浏览器的功能。火狐插件的构建使用了 WebExtensions API ,一个跨浏览器的开发系统。 + +你得安装哪一个插件?一般而言,这个问题的答案取决于你如何使用你的浏览器、你对于隐私的看法、你信任插件开发者多少以及其他个人喜好。 + +首先,我想指出浏览器插件通常需要读取和(或者)修改你浏览的网页上的每项内容。你应该_非常_仔细地考虑这件事的后果。如果一个插件有修改所有你访问过的网页的权限,那么它可能记录你的按键、拦截信用卡信息、在线跟踪你、插入广告,以及其他各种各样邪恶的行为。 + +并不是每个插件都偷偷摸摸地做这些事,但是在你安装任何插件之前,你要慎重考虑下插件安装来源、涉及的权限、你的风险数据和其他因素。记住,你可以从个人数据的角度来管理一个插件如何影响你的攻击面( LCTT 译者注:攻击面是指入侵者能尝试获取或提取数据的途径总和)——例如使用特定的配置、不使用插件来完成例如网上银行的操作。 + +考虑到这一点,这里有你或许想要考虑的五个火狐插件 + +### uBlock Origin + +![ublock origin ad blocker screenshot][2] + +ublock Origin 可以拦截广告和恶意网页,还允许用户定义自己的内容过滤器。 + +[uBlock Origin][3] 是一款快速、内存占用低、适用范围广的拦截器,它不仅能屏蔽广告,还能让你执行你自己的内容过滤。uBlock Origin 默认使用多份预定义好的过滤名单来拦截广告、跟踪器和恶意网页。它允许你任意地添加列表和规则,或者锁定在一个默认拒绝的模式。除了强大之外,这个插件已被证明是效率高、性能好。 + +### Privacy Badger + +![privacy badger ad blocker][5] + +Privacy Badger 运用了算法来无缝地屏蔽侵犯用户准则的广告和跟踪器。 + +正如它名字所表明,[Privacy Badger][6] 是一款专注于隐私的插件,它屏蔽广告和第三方跟踪器。EFF (LCTT 译者注:EFF全称是电子前哨基金会(Electronic Frontier Foundation),旨在宣传互联网版权和监督执法机构 )说:“我们想要推荐一款能自动分析并屏蔽任何侵犯用户准则的跟踪器和广告,而 Privacy Badger 
+**另请阅读:**
+
++ [Uniq 命令入门级示例教程][8]
+
+有关更多详细信息,请参阅 man 手册页:
+```
+man fold
+```
+
+这些就是本文的全部内容了。您现在知道如何使用 fold 命令把命令的输出限制在特定的宽度内了。我希望这篇文章对您有用。我们将每天发布更多有用的指南,敬请关注!
+
+干杯!
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/fold-command-tutorial-examples-beginners/
+
+作者:[SK][a]
+译者:[Flowsnow](https://github.com/Flowsnow)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-1.png
+[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-2.png
+[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-3-1.png
+[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-4.png
+[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-5-1.png
+[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-6-1.png
+[8]:https://www.ostechnix.com/uniq-command-tutorial-examples-beginners/
diff --git a/translated/tech/20180112 Top 5 Firefox extensions to install now.md b/translated/tech/20180112 Top 5 Firefox extensions to install now.md
new file mode 100644
index 0000000000..9f4698aea7
--- /dev/null
+++ b/translated/tech/20180112 Top 5 Firefox extensions to install now.md
@@ -0,0 +1,79 @@
+五个值得现在安装的火狐插件
+======
+
+合适的插件能大大增强你浏览器的功能,但仔细挑选插件很重要。本文有五个值得一看的插件。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/firefox_blue_lead.jpg)
+
+对于很多用户来说,网页浏览器已经成为电脑使用体验的重要环节。现代浏览器已经发展成强大、可拓展的平台。作为平台的一部分,_插件_能添加或修改浏览器的功能。火狐插件的构建使用了 WebExtensions API,一个跨浏览器的开发系统。
+
+你应该安装哪一个插件?一般而言,这个问题的答案取决于你如何使用你的浏览器、你对于隐私的看法、你信任插件开发者多少,以及其他个人喜好。
+
+首先,我想指出浏览器插件通常需要读取和(或者)修改你浏览的网页上的每项内容。你应该_非常_仔细地考虑这件事的后果。如果一个插件有修改所有你访问过的网页的权限,那么它可能记录你的按键、拦截信用卡信息、在线跟踪你、插入广告,以及进行其他各种各样邪恶的行为。
+
+并不是每个插件都偷偷摸摸地做这些事,但是在你安装任何插件之前,你要慎重考虑插件的安装来源、涉及的权限、自己的风险状况和其他因素。记住,你可以从个人数据的角度来管理一个插件如何影响你的攻击面(LCTT 译者注:攻击面是指入侵者能尝试获取或提取数据的途径总和),例如使用特定的配置、不使用插件来完成例如网上银行的操作。
+
+考虑到这一点,这里有你或许想要考虑的五个火狐插件。
+
+### uBlock Origin
+
+![ublock origin ad blocker screenshot][2]
+
+uBlock Origin 可以拦截广告和恶意网页,还允许用户定义自己的内容过滤器。
+
+[uBlock Origin][3] 是一款快速、内存占用低、适用范围广的拦截器,它不仅能屏蔽广告,还能让你执行你自己的内容过滤。uBlock Origin 默认使用多份预定义好的过滤名单来拦截广告、跟踪器和恶意网页,并允许你任意地添加列表和规则,甚至锁定在一个默认拒绝的模式。除了强大之外,这个插件已被证明是效率高、性能好。
+
+### Privacy Badger
+
+![privacy badger ad blocker][5]
+
+Privacy Badger 运用了算法来无缝地屏蔽侵犯用户准则的广告和跟踪器。
+
+正如它名字所表明的,[Privacy Badger][6] 是一款专注于隐私的插件,它屏蔽广告和第三方跟踪器。EFF(LCTT 译者注:EFF 全称是电子前哨基金会(Electronic Frontier Foundation),旨在宣传互联网版权和监督执法机构)说:“我们想要推荐一款能自动分析并屏蔽任何侵犯用户准则的跟踪器和广告,而 Privacy Badger 诞生于此目的;它不用任何设置、知识或者用户的配置,就能运行得很好;它是由一个明显为用户服务而不是为广告主服务的组织出品;它使用算法来决定什么正在跟踪,什么没有在跟踪。”
+
+既然 Privacy Badger 跟 uBlock Origin 如此相似,为什么它还会出现在这个列表上?其中一个原因是两者的工作原理有根本上的不同;另一个原因是,纵深防御是一个值得遵循的合理策略。
+
+### LastPass
+
+![lastpass password manager screenshot][8]
+
+LastPass 是一款用户友好的密码管理插件,支持双重认证。
+
+把这个插件列进来,对很多人来说是有争议的。你是否应该使用密码管理器——如果你用了,你是否应该选择一个浏览器插件——这都是热议的话题,而答案取决于你自己的风险状况。我想说,大部分普通电脑用户都应该用一个,因为这比起常见的替代方案(在每一处使用相同的弱密码)好太多了。
+
+[LastPass][9] 对于用户很友好,支持双重认证,而且相当安全。这家公司过去出过一些安全事故,但是都处理得当,而且资金充足。记住,使用密码管理器不是非此即彼的命题。很多用户选择用密码管理器管理绝大部分密码,但对像网上银行这样特别重要的网站,仍然单独设置复杂密码并使用多重认证。
+
+### Xmarks Sync
+
+[Xmarks Sync][10] 是一款方便的插件,能跨实例同步你的书签、打开的标签页、配置项和浏览器历史。如果你有多台机器,想要在桌面设备和移动设备之间同步,或者在同一台设备上使用不同的浏览器,那就来看看 Xmarks Sync。(注意,这款插件最近被 LastPass 收购。)
+
+### Awesome Screenshot Plus
+
+![Awesome Screenshot Plus screenshot][13]
+
+Awesome Screenshot Plus 允许你容易地截下任何网页的部分或全部内容。
+
+[Awesome Screenshot Plus][11] 允许你很容易地捕获任意网页的全部或部分区域,也能添加注释、评论、模糊敏感信息等。你还能用一个可选的在线服务来分享图片。我发现这个工具在调试网页、讨论设计和分享信息时很棒。这是一款你会发现自己用得比预期更频繁的工具。
+
+我发现这五款插件很有用,于是把它们推荐给其他人。话虽如此,浏览器插件还有很多。我很好奇 Opensource.com 社区的用户们还在使用并推荐哪些插件,请在评论中让我知道。(LCTT 译者注:本文引用自 Opensource.com,这两句话意在引导用户留言,推荐自己使用的插件)
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/top-5-firefox-extensions
+
+作者:[Jeremy Garcia][a]
+译者:[ypingcn](https://github.com/ypingcn)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jeremy-garcia
+[2]: https://opensource.com/sites/default/files/ublock.png "ublock origin ad blocker screenshot"
+[3]: https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/
+[5]: https://opensource.com/sites/default/files/images/life-uploads/privacy_badger_1.0.1.png "privacy badger ad blocker screenshot"
+[6]: https://www.eff.org/privacybadger
+[8]: https://opensource.com/sites/default/files/images/life-uploads/lastpass4.jpg "lastpass password manager screenshot"
+[9]: https://addons.mozilla.org/en-US/firefox/addon/lastpass-password-manager/
+[10]: https://addons.mozilla.org/en-US/firefox/addon/xmarks-sync/
+[11]: https://addons.mozilla.org/en-US/firefox/addon/screenshot-capture-annotate/
+[13]: https://opensource.com/sites/default/files/screenshot_from_2018-01-04_17-11-32.png "Awesome Screenshot Plus screenshot"
diff --git a/translated/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md b/translated/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md
new file mode 100644
index 0000000000..f945733855
--- /dev/null
+++ b/translated/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md
@@ -0,0 +1,132 @@
+为初学者介绍的 Linux tee 命令(6 个例子)
+======
+
+有时候,你会想手动跟踪命令的输出内容,同时又想将输出的内容写入文件,确保之后可以用来参考。如果你正在寻找与此相关的工具,那么恭喜你,Linux 已经有了一个叫做 **tee** 的命令可以帮助你。
+
+本教程中,我们将基于 tee 命令,用一些简单的例子开始讨论。但是在此之前,值得一提的是,本文所有的测试实例都基于 Ubuntu 16.04 LTS。
+
+### Linux tee 命令
+
+tee 命令从标准输入读取数据,并将其同时写入标准输出和文件。感受下这个命令的语法:
+
+```
+tee [OPTION]... [FILE]...
+```
+
+这里是帮助文档的说明:
+```
+从标准输入中复制到每一个文件,并输出到标准输出。
+```
+
+让 Q&A(问&答)风格的实例给我们带来更多灵感,深入了解这个命令。
+
+### Q1. 如何在 Linux 上使用这个命令?
+
+假设因为某些原因,你正在使用 ping 命令。
+
+```
+ping google.com
+```
+
+[![如何在 Linux 上使用 tee 命令][1]][2]
+
+同时,你还想将输出的信息写入文件。这个时候,tee 命令就有用武之地了。
+
+```
+ping google.com | tee output.txt
+```
+
+下面的截图展示了输出内容不仅被写入 ‘output.txt’ 文件,也被显示在标准输出中。
+
+[![tee command 输出][3]][4]
+
+如此,应当算是明确了 tee 的基础用法。
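+在继续之前,还可以用一个更简单的假设性例子快速验证这一行为:
+
+```
+$ echo "hello tee" | tee hello.txt
+hello tee
+$ cat hello.txt
+hello tee
+```
+
+可以看到,同样的内容既打印到了屏幕上,也写进了 hello.txt 文件里。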
+
+### Q2. 如何确保 tee 命令追加信息到文件中?
+
+默认情况下,再次对同一个文件使用 tee 命令时,会覆盖之前的内容。如果你想的话,可以通过 -a 命令行选项改变默认设置。
+
+```
+[command] | tee -a [file]
+```
+
+基本上,-a 选项强制 tee 命令把信息追加到文件末尾。
+
+### Q3. 如何让 tee 写入多个文件?
+
+这非常简单。你只需要把多个文件名都写上即可。
+
+```
+[command] | tee [file1] [file2] [file3]
+```
+
+比如:
+
+```
+ping google.com | tee output1.txt output2.txt output3.txt
+```
+
+[![如何让 tee 写入多个文件][5]][6]
+
+### Q4. 如何让 tee 命令的输出内容直接作为另一个命令的输入内容?
+
+使用 tee 命令,你不仅可以将输出内容写入文件,还可以把输出内容作为另一个命令的输入内容。比如说,下面的命令不仅会将文件名存入 ‘output.txt’ 文件中,还会通过 wc 命令让你知道写入 output.txt 的文件数目。
+
+```
+ls file* | tee output.txt | wc -l
+```
+
+[![如何让 tee 命令的输出内容直接作为另一个命令的输入内容][7]][8]
+
+### Q5. 如何使用 tee 命令提升文件写入权限?
+
+假如你使用 [Vim editor][9] 打开文件,并且做了很多更改,然后当你尝试保存修改时,你得到一个报错,让你意识到那是一个 root 所拥有的文件,这意味着你需要使用 sudo 权限保存修改。
+
+[![如何使用 tee 命令提升文件写入权限][10]][11]
+
+如此情况下,你可以在 Vim 内使用 tee 命令来提升权限:
+
+```
+:w !sudo tee %
+```
+
+上述命令会向你索要 sudo 密码,然后就能让你保存修改了。
+
+### Q6. 如何让 tee 命令忽视中断?
+
+-i 命令行选项使 tee 命令忽视通常由 ctrl+c 组合键发起的中断信号(`SIGINT`)。
+
+```
+[command] | tee -i [file]
+```
+
+当你想要用 ctrl+c 中断该命令,同时让 tee 命令优雅地退出时,这个选项尤为实用。
+
+### 总结
+
+现在你可能已经认同 tee 是一个非常实用的命令了。基于 tee 命令的用法,我们已经介绍了其绝大多数的命令行选项。这个工具并没有什么陡峭的学习曲线,所以,只需跟随这几个例子练习,你就可以运用自如了。更多信息,请查看[帮助文档][12]。
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-tee-command/
+
+作者:[Himanshu Arora][a]
+译者:[CYLeft](https://github.com/CYLeft)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/command-tutorial/ping-example.png
+[2]:https://www.howtoforge.com/images/command-tutorial/big/ping-example.png
+[3]:https://www.howtoforge.com/images/command-tutorial/ping-with-tee.png
+[4]:https://www.howtoforge.com/images/command-tutorial/big/ping-with-tee.png
+[5]:https://www.howtoforge.com/images/command-tutorial/tee-mult-files1.png
+[6]:https://www.howtoforge.com/images/command-tutorial/big/tee-mult-files1.png
+[7]:https://www.howtoforge.com/images/command-tutorial/tee-redirect-output.png
+[8]:https://www.howtoforge.com/images/command-tutorial/big/tee-redirect-output.png
+[9]:https://www.howtoforge.com/vim-basics
+[10]:https://www.howtoforge.com/images/command-tutorial/vim-write-error.png
+[11]:https://www.howtoforge.com/images/command-tutorial/big/vim-write-error.png
+[12]:https://linux.die.net/man/1/tee
diff --git a/translated/tech/20180123 Linux mkdir Command Explained for Beginners (with examples).md b/translated/tech/20180123 Linux mkdir Command Explained for Beginners (with examples).md
new file mode 100644
index 0000000000..a1c52b4ba7
--- /dev/null
+++ b/translated/tech/20180123 Linux mkdir Command Explained for Beginners (with examples).md
@@ -0,0 +1,95 @@
+Linux mkdir 命令的初学者教程
+======
+
+当你使用命令行的时候,无论什么时候,你都位于某个目录中。在 Linux 中,你可以使用 `rm` 命令删除目录,但是首先,你需要使用 `mkdir` 命令来创建目录。在这篇教程中,我将使用一些易于理解的例子来讲解这个工具的基本用法。
+
+在开始之前,值得一提的是,这篇教程中的所有例子都已经在 Ubuntu 16.04 LTS 中测试过。
+
+### Linux `mkdir` 命令
+
+正如上面所提到的,用户可以使用 `mkdir` 命令来创建目录。它的语法如下:
+
+```
+mkdir [OPTION]... DIRECTORY...
+```
+
+下面的内容是 man 手册对这个工具的描述:
+```
+Create the DIRECTORY(ies), if they do not already exist.
+```
+
+下面这些问答式的例子将能够帮助你更好地理解 `mkdir` 这个命令是如何工作的。
+
+### Q1. 如何使用 `mkdir` 命令创建目录?
+
+创建目录非常简单,你唯一需要做的就是把你想创建的目录的名字跟在 `mkdir` 命令的后面作为参数。
+
+```
+mkdir [dir-name]
+```
+
+下面是一个简单例子:
+
+```
+mkdir test-dir
+```
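+顺便补充一点:`mkdir` 可以一次接收多个目录名。下面是一个假设性的小例子,一条命令同时创建三个目录(目录名是我随意取的):
+
+```
+$ mkdir reports images scripts
+$ ls -d reports images scripts
+images  reports  scripts
+```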
+
+### Q2. 如何确保当父目录不存在的时候,同时创建父目录?
+
+有时候,我们需要使用一条 `mkdir` 命令来创建一个完整的目录结构,这时候,你只需要使用 `-p` 这个命令行选项即可。
+
+比如,你想创建目录 `dir1/dir2/dir3`,但是该目录的父目录都不存在,这时候,你可以像下面这样做:
+
+```
+mkdir -p dir1/dir2/dir3
+```
+
+[![How to make sure parent directories \(if non-existent\) are created][1]][2]
+
+### Q3. 如何在创建目录时自定义权限?
+
+默认情况下,`mkdir` 命令创建目录时会把权限设置为 `rwx, rwx, r-x`。
+
+[![How to set permissions for directory being created][3]][4]
+
+但是,如果你想自定义权限,那么你可以使用 `-m` 这一命令行选项。
+
+[![mkdir -m command option][5]][6]
+
+### Q4. 如何使 `mkdir` 命令显示操作细节?
+
+如果你希望 `mkdir` 命令显示它所执行的操作的完整细节,那么你可以使用 `-v` 这一命令行选项。
+
+```
+mkdir -v [dir]
+```
+
+下面是一个例子:
+
+[![How to make mkdir emit details of operation][7]][8]
+
+### 结论
+
+你已经看到,`mkdir` 是一个非常简单、易于理解和使用的命令,学习这一命令不会遇到任何障碍。在这篇教程中,我们讨论到了它的绝大部分命令行选项。记得练习这些命令,并在日复一日的工作中使用它们。如果你想了解关于这一命令的更多内容,请查看它的 [man][9] 手册。
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/linux-mkdir-command/
+
+作者:[Himanshu Arora][a]
+译者:[ucasFL](https://github.com/ucasFL)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.howtoforge.com
+[1]:https://www.howtoforge.com/images/command-tutorial/mkdir-p.png
+[2]:https://www.howtoforge.com/images/command-tutorial/big/mkdir-p.png
+[3]:https://www.howtoforge.com/images/command-tutorial/mkdir-def-perm.png
+[4]:https://www.howtoforge.com/images/command-tutorial/big/mkdir-def-perm.png
+[5]:https://www.howtoforge.com/images/command-tutorial/mkdir-custom-perm.png
+[6]:https://www.howtoforge.com/images/command-tutorial/big/mkdir-custom-perm.png
+[7]:https://www.howtoforge.com/images/command-tutorial/mkdir-verbose.png
+[8]:https://www.howtoforge.com/images/command-tutorial/big/mkdir-verbose.png
+[9]:https://linux.die.net/man/1/mkdir