mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-01-16 22:42:21 +08:00, commit 769e232a82
@ -64,6 +64,7 @@ LCTT 的组成
|
||||
* 2017/11/19 wxy 在上海交大举办的 2017 中国开源年会上做了演讲:《[如何以翻译贡献参与开源社区](https://linux.cn/article-9084-1.html)》。
|
||||
* 2018/01/11 提升 lujun9972 成为核心成员,并加入选题组。
|
||||
* 2018/02/20 遭遇 DMCA 仓库被封。
|
||||
* 2018/05/15 提升 MjSeven 为核心成员。
|
||||
|
||||
核心成员
|
||||
-------------------------------
|
||||
@ -92,6 +93,7 @@ LCTT 的组成
|
||||
- 核心成员 @rusking,
|
||||
- 核心成员 @qhwdw,
|
||||
- 核心成员 @lujun9972
|
||||
- 核心成员 @MjSeven
|
||||
- 前任选题 @DeadFire,
|
||||
- 前任校对 @reinoir222,
|
||||
- 前任校对 @PurlingNayuki,
|
||||
|
@ -3,9 +3,10 @@ Caffeinated 6.828:练习 shell
|
||||
|
||||
通过在 shell 中实现多项功能,该作业将使你更加熟悉 Unix 系统调用接口和 shell。你可以在支持 Unix API 的任何操作系统(一台 Linux Athena 机器、装有 Linux 或 Mac OS 的笔记本电脑等)上完成此作业。请在第一次上课前将你的 shell 提交到[网站][1]。
|
||||
|
||||
如果你在练习中遇到困难或不理解某些内容时,你不要害羞给[员工邮件列表][2]发送邮件,但我们确实希望全班的人能够自行处理这级别的 C 编程。如果你对 C 不是很熟悉,可以认为这个是你对 C 熟悉程度的检查。再说一次,如果你有任何问题,鼓励你向我们寻求帮助。
|
||||
如果你在练习中遇到困难或不理解某些内容时,你不要羞于给[员工邮件列表][2]发送邮件,但我们确实希望全班的人能够自行处理这级别的 C 编程。如果你对 C 不是很熟悉,可以认为这个是你对 C 熟悉程度的检查。再说一次,如果你有任何问题,鼓励你向我们寻求帮助。
|
||||
|
||||
下载 xv6 shell 的[框架][3],然后查看它。框架 shell 包含两个主要部分:解析 shell 命令并实现它们。解析器只能识别简单的 shell 命令,如下所示:
|
||||
|
||||
```
|
||||
ls > y
|
||||
cat < y | sort | uniq | wc > y1
|
||||
@ -13,31 +14,30 @@ cat y1
|
||||
rm y1
|
||||
ls | sort | uniq | wc
|
||||
rm y
|
||||
|
||||
```
|
||||
|
||||
将这些命令剪切并粘贴到 `t.sh `中。
|
||||
将这些命令剪切并粘贴到 `t.sh` 中。
|
||||
|
||||
你可以按如下方式编译框架 shell 的代码:
|
||||
|
||||
你可以按如下方式编译框架 shell:
|
||||
```
|
||||
$ gcc sh.c
|
||||
|
||||
```
|
||||
|
||||
它会生成一个名为 `a.out` 的文件,你可以运行它:
|
||||
|
||||
```
|
||||
$ ./a.out < t.sh
|
||||
|
||||
```
|
||||
|
||||
执行会崩溃,因为你还没有实现几个功能。在本作业的其余部分中,你将实现这些功能。
|
||||
执行会崩溃,因为你还没有实现其中的几个功能。在本作业的其余部分中,你将实现这些功能。
|
||||
|
||||
### 执行简单的命令
|
||||
|
||||
实现简单的命令,例如:
|
||||
|
||||
```
|
||||
$ ls
|
||||
|
||||
```
|
||||
|
||||
解析器已经为你构建了一个 `execcmd`,所以你唯一需要编写的代码是 `runcmd` 中的 case ' '。要测试你可以运行 “ls”。你可能会发现查看 `exec` 的手册页是很有用的。输入 `man 3 exec`。
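下面是一个独立的小示例，演示 case ' ' 分支所依赖的 fork 加 `execvp` 模式（这只是一个假设在普通 Linux/Unix 环境下的示意，并非框架 `sh.c` 的标准答案；框架中的 `runcmd` 本身已经运行在子进程里，通常只需要 exec 的部分）：

```
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    char *argv[] = { "ls", "-l", NULL };   // 要执行的命令及其参数
    pid_t pid = fork();                    // 先派生一个子进程
    if (pid == 0) {
        execvp(argv[0], argv);             // 在子进程中替换为目标程序，成功则不会返回
        perror("exec failed");             // 只有 exec 失败才会执行到这里
        _exit(1);
    }
    wait(NULL);                            // 父进程等待子进程结束
    return 0;
}
```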
|
||||
@ -47,10 +47,10 @@ $ ls
|
||||
### I/O 重定向
|
||||
|
||||
实现 I/O 重定向命令,这样你可以运行:
|
||||
|
||||
```
|
||||
echo "6.828 is cool" > x.txt
|
||||
cat < x.txt
|
||||
|
||||
```
|
||||
|
||||
解析器已经识别出 '>' 和 '<',并且为你构建了一个 `redircmd`,所以你的工作就是在 `runcmd` 中为这些符号填写缺少的代码。确保你的实现在上面的测试输入中正确运行。你可能会发现 `open`(`man 2 open`) 和 `close` 的 man 手册页很有用。
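下面是一个独立的小示意（同样不是作业的标准答案，`out.txt` 只是一个假设的文件名），演示重定向的一个常见技巧：先 `close` 掉标准输出，再 `open` 目标文件，让它占用最小的空闲描述符：

```
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    char *argv[] = { "echo", "redirected", NULL };
    if (fork() == 0) {
        close(1);                                             // 释放标准输出（描述符 1）
        open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);  // open 总是返回最小的空闲描述符，这里正好是 1
        execvp(argv[0], argv);                                // 此后子进程写到标准输出的内容都会进入 out.txt
        perror("exec failed");
        _exit(1);
    }
    wait(NULL);
    return 0;
}
```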
|
||||
@ -60,9 +60,9 @@ cat < x.txt
|
||||
### 实现管道
|
||||
|
||||
实现管道,这样你可以运行命令管道,例如:
|
||||
|
||||
```
|
||||
$ ls | sort | uniq | wc
|
||||
|
||||
```
|
||||
|
||||
解析器已经识别出 “|”,并且为你构建了一个 `pipecmd`,所以你必须编写的唯一代码是 `runcmd` 中的 case '|'。测试你可以运行上面的管道。你可能会发现 `pipe`、`fork`、`close` 和 `dup` 的 man 手册页很有用。
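下面是一个独立的小示意（不是作业的标准答案），演示 `pipe`、`fork`、`close` 和 `dup` 如何把 `ls | wc -l` 这样的两个命令串起来：

```
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int p[2];
    pipe(p);                        // p[0] 是读端，p[1] 是写端

    if (fork() == 0) {              // 左侧命令：ls
        close(1); dup(p[1]);        // dup 返回最小空闲描述符，把标准输出接到管道写端
        close(p[0]); close(p[1]);   // 关闭子进程里多余的管道描述符
        execlp("ls", "ls", (char *)NULL);
        _exit(1);
    }
    if (fork() == 0) {              // 右侧命令：wc -l
        close(0); dup(p[0]);        // 把标准输入接到管道读端
        close(p[0]); close(p[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(1);
    }
    close(p[0]); close(p[1]);       // 父进程也要关闭两端，否则 wc 永远读不到 EOF
    wait(NULL); wait(NULL);
    return 0;
}
```

注意最后一步：如果父进程不关闭管道的写端，管道永远不会产生 EOF，`wc` 也就不会退出。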
|
||||
@ -71,7 +71,6 @@ $ ls | sort | uniq | wc
|
||||
|
||||
```
|
||||
$ ./a.out < t.sh
|
||||
|
||||
```
|
||||
|
||||
无论是否完成挑战任务,不要忘记将你的答案提交给[网站][1]。
|
||||
@ -80,11 +79,10 @@ $ ./a.out < t.sh
|
||||
|
||||
如果你想进一步尝试,可以将所选的任何功能添加到你的 shell。你可以尝试以下建议之一:
|
||||
|
||||
* 实现由 `;` 分隔的命令列表
|
||||
* 通过实现 `(` 和 `)` 来实现子 shell
|
||||
* 通过支持 `&` 和 `wait` 在后台执行命令
|
||||
* 实现参数引用
|
||||
|
||||
* 实现由 `;` 分隔的命令列表
|
||||
* 通过实现 `(` 和 `)` 来实现子 shell
|
||||
* 通过支持 `&` 和 `wait` 在后台执行命令
|
||||
* 实现参数引用
|
||||
|
||||
|
||||
所有这些都需要改变解析器和 `runcmd` 函数。
|
||||
@ -95,7 +93,7 @@ via: https://sipb.mit.edu/iap/6.828/lab/shell/
|
||||
|
||||
作者:[mit][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,101 @@
|
||||
在 Ubuntu 和 Linux Mint 中轻松安装 Android Studio
|
||||
======
|
||||
|
||||
[Android Studio][1] 是谷歌自己的 Android 开发 IDE,是带 ADT 插件的 Eclipse 的不错替代品。Android Studio 可以通过源代码安装,但在这篇文章中,我们将看到**如何在 Ubuntu 18.04、16.04 和相应的 Linux Mint 变体**中安装 Android Studio。
|
||||
|
||||
在继续安装 Android Studio 之前,请确保你已经[在 Ubuntu 中安装了 Java][2]。
|
||||
|
||||
![How to install Android Studio in Ubuntu][3]
|
||||
|
||||
### 使用 Snap 在 Ubuntu 和其他发行版中安装 Android Studio
|
||||
|
||||
自从 Ubuntu 开始专注于 Snap 软件包以来,越来越多的软件开始提供易于安装的 Snap 软件包。Android Studio 就是其中之一。Ubuntu 用户可以直接在软件中心找到 Android Studio 程序并从那里安装。
|
||||
|
||||
![Install Android Studio in Ubuntu from Software Center][4]
|
||||
|
||||
如果你在软件中心安装 Android Studio 时看到错误，则可以使用 [Snap 命令][5] 安装 Android Studio。
|
||||
|
||||
```
|
||||
sudo snap install android-studio --classic
|
||||
```
|
||||
|
||||
非常简单!
|
||||
|
||||
### 另一种方式 1:在 Ubuntu 中使用 umake 安装 Android Studio
|
||||
|
||||
你也可以使用 Ubuntu Developer Tools Center,现在称为 [Ubuntu Make][6],轻松安装 Android Studio。Ubuntu Make 提供了一个命令行工具来安装各种开发工具和 IDE 等。Ubuntu Make 在 Ubuntu 仓库中就有。
|
||||
|
||||
要安装 Ubuntu Make,请在终端中使用以下命令:
|
||||
|
||||
```
|
||||
sudo apt-get install ubuntu-make
|
||||
```
|
||||
|
||||
安装 Ubuntu Make 后,请使用以下命令在 Ubuntu 中安装 Android Studio:
|
||||
|
||||
```
|
||||
umake android
|
||||
```
|
||||
|
||||
在安装过程中它会给你几个选项，我认为你可以自行处理。如果你决定卸载 Android Studio，可以按照以下方式使用同一个 umake 工具：
|
||||
|
||||
```
|
||||
umake android --remove
|
||||
|
||||
```
|
||||
|
||||
### 另一种方式 2:通过非官方的 PPA 在 Ubuntu 和 Linux Mint 中安装 Android Studio
|
||||
|
||||
感谢 [Paolo Ratolo][7],我们有一个 PPA,可用于 Ubuntu 16.04、14.04、Linux Mint 和其他基于 Ubuntu 的发行版中轻松安装 Android Studio。请注意,它将下载大约 650MB 的数据。请注意你的互联网连接以及数据费用(如果有的话)。
|
||||
|
||||
打开一个终端并使用以下命令:
|
||||
|
||||
```
|
||||
sudo apt-add-repository ppa:paolorotolo/android-studio
|
||||
sudo apt-get update
|
||||
sudo apt-get install android-studio
|
||||
```
|
||||
|
||||
这不是很容易吗？虽然从源代码安装程序很有趣，但拥有这样的 PPA 总是不错的。我们已经看到了如何安装 Android Studio，现在来看看如何卸载它。
|
||||
|
||||
### 卸载 Android Studio:
|
||||
|
||||
如果你还没有安装 PPA Purge:
|
||||
|
||||
```
|
||||
sudo apt-get install ppa-purge
|
||||
```
|
||||
|
||||
现在使用 PPA Purge 来清除已安装的 PPA:
|
||||
|
||||
```
|
||||
sudo apt-get remove android-studio
|
||||
sudo ppa-purge ppa:paolorotolo/android-studio
|
||||
```
|
||||
|
||||
就是这些了。我希望这能够帮助你**在 Ubuntu 和 Linux Mint 上安装 Android Studio**。在运行 Android Studio 之前，请确保[在 Ubuntu 中安装了 Java][8]。在类似的文章中，我建议你阅读[如何安装和配置 Ubuntu SDK][9]和[如何在 Ubuntu 中轻松安装 Microsoft Visual Studio][10]。
|
||||
|
||||
欢迎提出任何问题或建议。再见 :)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/install-android-studio-ubuntu-linux/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/abhishek/
|
||||
[1]:http://developer.android.com/sdk/installing/studio.html
|
||||
[2]:https://itsfoss.com/install-java-ubuntu-1404/
|
||||
[3]:https://itsfoss.com/wp-content/uploads/2014/04/Android_Studio_Ubuntu.jpeg
|
||||
[4]:https://itsfoss.com/wp-content/uploads/2014/04/install-android-studio-snap-800x469.jpg
|
||||
[5]:https://itsfoss.com/install-snap-linux/
|
||||
[6]:https://wiki.ubuntu.com/ubuntu-make
|
||||
[7]:https://plus.google.com/+PaoloRotolo
|
||||
[8]:https://itsfoss.com/install-java-ubuntu-1404/ (How To Install Java On Ubuntu 14.04)
|
||||
[9]:https://itsfoss.com/install-configure-ubuntu-sdk/
|
||||
[10]:https://itsfoss.com/install-visual-studio-code-ubuntu/
|
@ -1,11 +1,12 @@
|
||||
[递归:梦中梦][1]
|
||||
递归:梦中梦
|
||||
======
|
||||
|
||||
**递归**是很神奇的,但是在大多数的编程类书藉中对递归讲解的并不好。它们只是给你展示一个递归阶乘的实现,然后警告你递归运行的很慢,并且还有可能因为栈缓冲区溢出而崩溃。“你可以将头伸进微波炉中去烘干你的头发,但是需要警惕颅内高压以及让你的头发生爆炸,或者你可以使用毛巾来擦干头发。”难怪人们不愿意使用递归。但这种建议是很糟糕的,因为在算法中,递归是一个非常强大的观点。
|
||||
> “方其梦也,不知其梦也。梦之中又占其梦焉,觉而后知其梦也。” —— 《庄子·齐物论》
|
||||
|
||||
**递归**是很神奇的,但是在大多数的编程类书藉中对递归讲解的并不好。它们只是给你展示一个递归阶乘的实现,然后警告你递归运行的很慢,并且还有可能因为栈缓冲区溢出而崩溃。“你可以将头伸进微波炉中去烘干你的头发,但是需要警惕颅内高压并让你的头发生爆炸,或者你可以使用毛巾来擦干头发。”难怪人们不愿意使用递归。但这种建议是很糟糕的,因为在算法中,递归是一个非常强大的思想。
|
||||
|
||||
我们来看一下这个经典的递归阶乘:
|
||||
|
||||
递归阶乘 - factorial.c
|
||||
```
|
||||
#include <stdio.h>
|
||||
|
||||
@ -28,15 +29,18 @@ int main(int argc)
|
||||
}
|
||||
```
|
||||
|
||||
*递归阶乘 - factorial.c*
|
||||
|
||||
|
||||
函数调用自身的这个观点在一开始是让人很难理解的。为了让这个过程更形象具体，下图展示的是当调用 `factorial(5)` 并且达到 `n == 1` 这行代码时，[栈上][3] 端点的情况：
|
||||
|
||||
![](https://manybutfinite.com/img/stack/factorial.png)
|
||||
|
||||
每次调用 `factorial` 都生成一个新的 [栈帧][4]。这些栈帧的创建和 [销毁][5] 是使得递归版本的阶乘慢于其相应的迭代版本的原因。在调用返回之前,累积的这些栈帧可能会耗尽栈空间,进而使你的程序崩溃。
|
||||
|
||||
而这些担心经常是存在于理论上的。例如,对于每个 `factorial` 的栈帧占用 16 字节(这可能取决于栈排列以及其它因素)。如果在你的电脑上运行着现代的 x86 的 Linux 内核,一般情况下你拥有 8 GB 的栈空间,因此,`factorial` 程序中的 `n` 最多可以达到 512,000 左右。这是一个 [巨大无比的结果][6],它将花费 8,971,833 比特来表示这个结果,因此,栈空间根本就不是什么问题:一个极小的整数 - 甚至是一个 64 位的整数 - 在我们的栈空间被耗尽之前就早已经溢出了成千上万次了。
|
||||
而这些担心经常是存在于理论上的。例如，对于每个 `factorial` 的栈帧占用 16 字节（这可能取决于栈排列以及其它因素）。如果在你的电脑上运行着现代的 x86 的 Linux 内核，一般情况下你拥有 8 MB 的栈空间，因此，`factorial` 程序中的 `n` 最多可以达到 512,000 左右。这是一个 [巨大无比的结果][6]，它将花费 8,971,833 比特来表示这个结果，因此，栈空间根本就不是什么问题：一个极小的整数 —— 甚至是一个 64 位的整数 —— 在我们的栈空间被耗尽之前就早已经溢出了成千上万次了。
|
||||
|
||||
过一会儿我们再去看 CPU 的使用,现在,我们先从比特和字节回退一步,把递归看作一种通用技术。我们的阶乘算法可归结为:将整数 N、N-1、 … 1 推入到一个栈,然后将它们按相反的顺序相乘。实际上我们使用了程序调用栈来实现这一点,这是它的细节:我们在堆上分配一个栈并使用它。虽然调用栈具有特殊的特性,但是它也只是额外的一种数据结构,你可以随意处置。我希望示意图可以让你明白这一点。
|
||||
过一会儿我们再去看 CPU 的使用,现在,我们先从比特和字节回退一步,把递归看作一种通用技术。我们的阶乘算法可归结为:将整数 N、N-1、 … 1 推入到一个栈,然后将它们按相反的顺序相乘。实际上我们使用了程序调用栈来实现这一点,这是它的细节:我们在堆上分配一个栈并使用它。虽然调用栈具有特殊的特性,但是它也只是又一种数据结构而已,你可以随意使用。我希望这个示意图可以让你明白这一点。
|
||||
|
||||
当你将栈调用视为一种数据结构,有些事情将变得更加清晰明了:将那些整数堆积起来,然后再将它们相乘,这并不是一个好的想法。那是一种有缺陷的实现:就像你拿螺丝刀去钉钉子一样。相对更合理的是使用一个迭代过程去计算阶乘。
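作为对照，下面是一个极简的迭代版阶乘示意（与上文的 factorial.c 对应，仅供参考）：

```
#include <stdio.h>

// 迭代版本的阶乘：只用一个循环变量，不需要额外的栈帧
long factorial(int n)
{
    long result = 1;
    for (int i = 2; i <= n; i++)
        result *= i;
    return result;
}

int main(void)
{
    printf("%ld\n", factorial(5));   // 输出 120
    return 0;
}
```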
|
||||
|
||||
@ -48,7 +52,6 @@ int main(int argc)
|
||||
|
||||
每到边缘(线)都让老鼠左转或者右转来到达一个新的位置。如果向哪边转都被拦住,说明相关的边缘不存在。现在,我们来讨论一下!这个过程无论你是调用栈还是其它数据结构,它都离不开一个递归的过程。而使用调用栈是非常容易的:
|
||||
|
||||
递归迷宫求解 [下载][2]
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
@ -75,21 +78,24 @@ int explore(maze_t *node)
|
||||
int found = explore(&maze);
|
||||
}
|
||||
```
|
||||
|
||||
*递归迷宫求解 [下载][2]*
|
||||
|
||||
当我们在 `maze.c:13` 中找到奶酪时,栈的情况如下图所示。你也可以在 [GDB 输出][8] 中看到更详细的数据,它是使用 [命令][9] 采集的数据。
|
||||
|
||||
![](https://manybutfinite.com/img/stack/mazeCallStack.png)
|
||||
|
||||
它展示了递归的良好表现,因为这是一个适合使用递归的问题。而且这并不奇怪:当涉及到算法时,递归是规则,而不是例外。它出现在如下情景中:当进行搜索时、当进行遍历树和其它数据结构时、当进行解析时、当需要排序时:它无处不在。正如众所周知的 pi 或者 e,它们在数学中像“神”一样的存在,因为它们是宇宙万物的基础,而递归也和它们一样:只是它在计算的结构中。
|
||||
它展示了递归的良好表现,因为这是一个适合使用递归的问题。而且这并不奇怪:当涉及到算法时,*递归是规则,而不是例外*。它出现在如下情景中——进行搜索时、进行遍历树和其它数据结构时、进行解析时、需要排序时——它无处不在。正如众所周知的 pi 或者 e,它们在数学中像“神”一样的存在,因为它们是宇宙万物的基础,而递归也和它们一样:只是它存在于计算结构中。
|
||||
|
||||
Steven Skienna 的优秀著作 [算法设计指南][10] 的精彩之处在于,他通过“战争故事” 作为手段来诠释工作,以此来展示解决现实世界中的问题背后的算法。这是我所知道的拓展你的算法知识的最佳资源。另一个读物是 McCarthy 的 [关于 LISP 实现的的原创论文][11]。递归在语言中既是它的名字也是它的基本原理。这篇论文既可读又有趣,在工作中能看到大师的作品是件让人兴奋的事情。
|
||||
Steven Skienna 的优秀著作 [算法设计指南][10] 的精彩之处在于,他通过 “战争故事” 作为手段来诠释工作,以此来展示解决现实世界中的问题背后的算法。这是我所知道的拓展你的算法知识的最佳资源。另一个读物是 McCarthy 的 [关于 LISP 实现的的原创论文][11]。递归在语言中既是它的名字也是它的基本原理。这篇论文既可读又有趣,在工作中能看到大师的作品是件让人兴奋的事情。
|
||||
|
||||
回到迷宫问题上。虽然它在这里很难离开递归,但是并不意味着必须通过调用栈的方式来实现。你可以使用像 `RRLL` 这样的字符串去跟踪转向,然后,依据这个字符串去决定老鼠下一步的动作。或者你可以分配一些其它的东西来记录追寻奶酪的整个状态。你仍然是去实现一个递归的过程,但是需要你实现一个自己的数据结构。
|
||||
回到迷宫问题上。虽然它在这里很难离开递归,但是并不意味着必须通过调用栈的方式来实现。你可以使用像 `RRLL` 这样的字符串去跟踪转向,然后,依据这个字符串去决定老鼠下一步的动作。或者你可以分配一些其它的东西来记录追寻奶酪的整个状态。你仍然是实现了一个递归的过程,只是需要你实现一个自己的数据结构。
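下面是一个用自己分配的显式栈来做同类探索的极简示意（其中的 `node_t` 结构和字段名纯属假设，与文中 maze.c 的定义无关）：

```
#include <stdio.h>
#include <stdbool.h>

// 假设的节点结构，仅作示意
typedef struct node {
    bool has_cheese;
    struct node *left, *right;
} node_t;

// 用显式栈代替调用栈来完成同样的递归式探索
bool explore_iterative(node_t *start)
{
    node_t *stack[64];                 // 演示用的定长栈，未做扩容
    int top = 0;
    stack[top++] = start;
    while (top > 0) {
        node_t *cur = stack[--top];
        if (cur == NULL)
            continue;
        if (cur->has_cheese)
            return true;               // 找到奶酪
        if (top + 2 <= 64) {
            stack[top++] = cur->left;  // 左右子节点入栈，顺序可以自行决定
            stack[top++] = cur->right;
        }
    }
    return false;
}

int main(void)
{
    node_t cheese = { true, NULL, NULL };
    node_t root = { false, &cheese, NULL };
    printf("%s\n", explore_iterative(&root) ? "found" : "not found");
    return 0;
}
```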
|
||||
|
||||
那样似乎更复杂一些,因为栈调用更合适。每个栈帧记录的不仅是当前节点,也记录那个节点上的计算状态(在这个案例中,我们是否只让它走左边,或者已经尝试向右)。因此,代码已经变得不重要了。然而,有时候我们因为害怕溢出和期望中的性能而放弃这种优秀的算法。那是很愚蠢的!
|
||||
|
||||
正如我们所见,栈空间是非常大的,在耗尽栈空间之前往往会遇到其它的限制。一方面可以通过检查问题大小来确保它能够被安全地处理。而对 CPU 的担心是由两个广为流传的有问题的示例所导致的:哑阶乘(dumb factorial)和可怕的无记忆的 O(2n) [Fibonacci 递归][12]。它们并不是栈递归算法的正确代表。
|
||||
正如我们所见,栈空间是非常大的,在耗尽栈空间之前往往会遇到其它的限制。一方面可以通过检查问题大小来确保它能够被安全地处理。而对 CPU 的担心是由两个广为流传的有问题的示例所导致的:<ruby>哑阶乘<rt>dumb factorial</rt></ruby>和可怕的无记忆的 O( 2^n ) [Fibonacci 递归][12]。它们并不是栈递归算法的正确代表。
|
||||
|
||||
事实上栈操作是非常快的。通常,栈对数据的偏移是非常准确的,它在 [缓存][13] 中是热点,并且是由专门的指令来操作它。同时,使用你自己定义的堆上分配的数据结构的相关开销是很大的。经常能看到人们写的一些比栈调用递归更复杂、性能更差的实现方法。最后,现代的 CPU 的性能都是 [非常好的][14] ,并且一般 CPU 不会是性能瓶颈所在。在考虑牺牲程序的简单性时要特别注意,就像经常考虑程序的性能及性能的[测量][15]那样。
|
||||
事实上栈操作是非常快的。通常,栈对数据的偏移是非常准确的,它在 [缓存][13] 中是热数据,并且是由专门的指令来操作它的。同时,使用你自己定义的在堆上分配的数据结构的相关开销是很大的。经常能看到人们写的一些比栈调用递归更复杂、性能更差的实现方法。最后,现代的 CPU 的性能都是 [非常好的][14] ,并且一般 CPU 不会是性能瓶颈所在。在考虑牺牲程序的简单性时要特别注意,就像经常考虑程序的性能及性能的[测量][15]那样。
|
||||
|
||||
下一篇文章将是探秘栈系列的最后一篇了,我们将了解尾调用、闭包、以及其它相关概念。然后,我们就该深入我们的老朋友—— Linux 内核了。感谢你的阅读!
|
||||
|
197 published/20141106 System Calls Make the World Go Round.md (Normal file)
@ -0,0 +1,197 @@
|
||||
系统调用,让世界转起来!
|
||||
=====================
|
||||
|
||||
|
||||
我不想打击你，但其实用户应用程序就是一个可怜无助的<ruby>瓮中大脑<rt>brain in a vat</rt></ruby>：
|
||||
|
||||
![](https://manybutfinite.com/img/os/appInVat.png)
|
||||
|
||||
它与外部世界的*每个*交流都要在内核的帮助下通过**系统调用**才能完成。一个应用程序要想保存一个文件、写到终端、或者打开一个 TCP 连接,内核都要参与。应用程序是被内核高度怀疑的:认为它到处充斥着 bug,甚至是个充满邪恶想法的脑子。
|
||||
|
||||
这些系统调用是从一个应用程序到内核的函数调用。出于安全考虑,它们使用了特定的机制,实际上你只是调用了内核的 API。“<ruby>系统调用<rt>system call</rt></ruby>”这个术语指的是调用由内核提供的特定功能(比如,系统调用 `open()`)或者是调用途径。你也可以简称为:**syscall**。
|
||||
|
||||
这篇文章讲解系统调用：它与调用一个库有何区别，以及用来探查操作系统与应用程序接口的工具。如果你彻底了解了应用程序借助操作系统所发生的事情，那么就可以把一个看似无解的问题变成一个能快速解决且有趣的难题。
|
||||
|
||||
那么,下图是一个运行着的应用程序,一个用户进程:
|
||||
|
||||
![](https://manybutfinite.com/img/os/sandbox.png)
|
||||
|
||||
它有一个私有的 [虚拟地址空间][2]—— 它自己的内存沙箱。整个系统都在它的地址空间中(即上面比喻的那个“瓮”),程序的二进制文件加上它所使用的库全部都 [被映射到内存中][3]。内核自身也映射为地址空间的一部分。
|
||||
|
||||
下面是我们程序 `pid` 的代码,它通过 [getpid(2)][4] 直接获取了其进程 id:
|
||||
|
||||
|
||||
```
|
||||
#include <sys/types.h>
|
||||
#include <unistd.h>
|
||||
#include <stdio.h>
|
||||
|
||||
int main()
|
||||
{
|
||||
pid_t p = getpid();
|
||||
printf("%d\n", p);
|
||||
}
|
||||
```
|
||||
|
||||
*pid.c [download][1]*
|
||||
|
||||
|
||||
在 Linux 中,一个进程并不是一出生就知道它的 PID。要想知道它的 PID,它必须去询问内核,因此,这个询问请求也是一个系统调用:
|
||||
|
||||
![](https://manybutfinite.com/img/os/syscallEnter.png)
|
||||
|
||||
它的第一步是开始于调用 C 库的 [getpid()][5],它是系统调用的一个*封装*。当你调用一些函数时,比如,`open(2)`、`read(2)` 之类,你是在调用这些封装。其实,对于大多数编程语言在这一块的原生方法,最终都是在 libc 中完成的。
|
||||
|
||||
封装为这些基本的操作系统 API 提供了方便,这样可以保持内核的简洁。*所有的内核代码*运行在特权模式下,有 bug 的内核代码行将会产生致命的后果。能在用户模式下做的任何事情都应该在用户模式中完成。由库来提供友好的方法和想要的参数处理,像 `printf(3)` 这样。
|
||||
|
||||
我们拿一个 web API 进行比较,内核的封装方式可以类比为构建一个尽可能简单的 HTTP 接口去提供服务,然后提供特定语言的库及辅助方法。或者也可能有一些缓存,这就是 libc 的 `getpid()` 所做的:首次调用时,它真实地去执行了一个系统调用,然后,它缓存了 PID,这样就可以避免后续调用时的系统调用开销。
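下面是这种“带缓存的封装”思路的一个极简示意（纯属演示，`my_getpid` 是虚构的名字，并非 glibc 的真实实现）：

```
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

// 只有第一次调用才真正陷入内核，之后直接返回缓存值
static pid_t my_getpid(void)
{
    static pid_t cached = 0;
    if (cached == 0)
        cached = (pid_t)syscall(SYS_getpid);
    return cached;
}

int main(void)
{
    printf("%d %d\n", my_getpid(), my_getpid());
    return 0;
}
```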
|
||||
|
||||
一旦封装完成,它做的第一件事就是进入了<ruby>~~超空间~~<rt>hyperspace</rt></ruby>:内核。这种转换机制因处理器架构设计不同而不同。在 Intel 处理器中,参数和 [系统调用号][6] 是 [加载到寄存器中的][7],然后,运行一个 [指令][8] 将 CPU 置于 [特权模式][9] 中,并立即将控制权转移到内核中的全局系统调用 [入口][10]。如果你对这些细节感兴趣,David Drysdale 在 LWN 上有两篇非常好的文章([其一][11],[其二][12])。
|
||||
|
||||
内核然后使用这个系统调用号作为进入 [`sys_call_table`][14] 的一个 [索引][13],它是一个函数指针到每个系统调用实现的数组。在这里,调用了 [`sys_getpid`][15]:
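下面这个玩具程序只是为了说明“用调用号去索引一个函数指针数组”这个机制本身（其中的名字全是虚构的，与内核中真实的 `sys_call_table` 代码无关）：

```
#include <stdio.h>

// 两个假装的“系统调用实现”
long toy_getpid(void) { return 14678; }
long toy_getuid(void) { return 1000; }

// 以调用号为下标的函数指针数组
long (*toy_call_table[])(void) = { toy_getpid, toy_getuid };

long toy_dispatch(int nr)
{
    return toy_call_table[nr]();   // 跳转到对应的实现
}

int main(void)
{
    printf("%ld\n", toy_dispatch(0));   // “调用”编号为 0 的 toy_getpid
    return 0;
}
```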
|
||||
|
||||
![](https://manybutfinite.com/img/os/syscallExit.png)
|
||||
|
||||
在 Linux 中,系统调用大多数都实现为架构无关的 C 函数,有时候这样做 [很琐碎][16],但是通过内核优秀的设计,系统调用机制被严格隔离。它们是工作在一般数据结构中的普通代码。嗯,除了*完全偏执*的参数校验以外。
|
||||
|
||||
一旦它们的工作完成,它们就会正常*返回*,然后,架构特定的代码会接手转回到用户模式,封装将在那里继续做一些后续处理工作。在我们的例子中,[getpid(2)][17] 现在缓存了由内核返回的 PID。如果内核返回了一个错误,另外的封装可以去设置全局 `errno` 变量。这些细节可以让你知道 GNU 是怎么处理的。
|
||||
|
||||
如果你想要原生的调用,glibc 提供了 [syscall(2)][18] 函数,它可以不通过封装来产生一个系统调用。你也可以通过它来做一个你自己的封装。这对一个 C 库来说,既不神奇,也不特殊。
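例如，下面这个小程序（假设运行在 Linux 上）用 `syscall(2)` 直接发起一次 `getpid` 系统调用，完全绕开 libc 的封装：

```
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>   // 提供 SYS_getpid 等系统调用号

int main(void)
{
    long pid = syscall(SYS_getpid);   // 不经过封装（也就没有缓存），直接陷入内核
    printf("%ld\n", pid);
    return 0;
}
```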
|
||||
|
||||
这种系统调用的设计影响是很深远的。我们从一个非常有用的 [strace(1)][19] 开始,这个工具可以用来监视 Linux 进程的系统调用(在 Mac 上,参见 [dtruss(1m)][20] 和神奇的 [dtrace][21];在 Windows 中,参见 [sysinternals][22])。这是对 `pid` 程序的跟踪:
|
||||
|
||||
```
|
||||
~/code/x86-os$ strace ./pid
|
||||
|
||||
execve("./pid", ["./pid"], [/* 20 vars */]) = 0
|
||||
brk(0) = 0x9aa0000
|
||||
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
|
||||
mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7767000
|
||||
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
|
||||
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
|
||||
fstat64(3, {st_mode=S_IFREG|0644, st_size=18056, ...}) = 0
|
||||
mmap2(NULL, 18056, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7762000
|
||||
close(3) = 0
|
||||
|
||||
[...snip...]
|
||||
|
||||
getpid() = 14678
|
||||
fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 1), ...}) = 0
|
||||
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7766000
|
||||
write(1, "14678\n", 614678
|
||||
) = 6
|
||||
exit_group(6) = ?
|
||||
```
|
||||
|
||||
输出的每一行都显示了一个系统调用、它的参数,以及返回值。如果你在一个循环中将 `getpid(2)` 运行 1000 次,你就会发现始终只有一个 `getpid()` 系统调用,因为,它的 PID 已经被缓存了。我们也可以看到在格式化输出字符串之后,`printf(3)` 调用了 `write(2)`。
|
||||
|
||||
`strace` 可以开始一个新进程,也可以附加到一个已经运行的进程上。你可以通过不同程序的系统调用学到很多的东西。例如,`sshd` 守护进程一天都在干什么?
|
||||
|
||||
```
|
||||
~/code/x86-os$ ps ax | grep sshd
|
||||
12218 ? Ss 0:00 /usr/sbin/sshd -D
|
||||
|
||||
~/code/x86-os$ sudo strace -p 12218
|
||||
Process 12218 attached - interrupt to quit
|
||||
select(7, [3 4], NULL, NULL, NULL
|
||||
|
||||
[
|
||||
... nothing happens ...
|
||||
No fun, it's just waiting for a connection using select(2)
|
||||
If we wait long enough, we might see new keys being generated and so on, but
|
||||
let's attach again, tell strace to follow forks (-f), and connect via SSH
|
||||
]
|
||||
|
||||
~/code/x86-os$ sudo strace -p 12218 -f
|
||||
|
||||
[lots of calls happen during an SSH login, only a few shown]
|
||||
|
||||
[pid 14692] read(3, "-----BEGIN RSA PRIVATE KEY-----\n"..., 1024) = 1024
|
||||
[pid 14692] open("/usr/share/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
|
||||
[pid 14692] open("/etc/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
|
||||
[pid 14692] open("/etc/ssh/ssh_host_dsa_key", O_RDONLY|O_LARGEFILE) = 3
|
||||
[pid 14692] open("/etc/protocols", O_RDONLY|O_CLOEXEC) = 4
|
||||
[pid 14692] read(4, "# Internet (IP) protocols\n#\n# Up"..., 4096) = 2933
|
||||
[pid 14692] open("/etc/hosts.allow", O_RDONLY) = 4
|
||||
[pid 14692] open("/lib/i386-linux-gnu/libnss_dns.so.2", O_RDONLY|O_CLOEXEC) = 4
|
||||
[pid 14692] stat64("/etc/pam.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
|
||||
[pid 14692] open("/etc/pam.d/common-password", O_RDONLY|O_LARGEFILE) = 8
|
||||
[pid 14692] open("/etc/pam.d/other", O_RDONLY|O_LARGEFILE) = 4
|
||||
```
|
||||
|
||||
看懂 SSH 的调用是块难啃的骨头,但是,如果搞懂它你就学会了跟踪。能够看到应用程序打开的是哪个文件是有用的(“这个配置是从哪里来的?”)。如果你有一个出现错误的进程,你可以 `strace` 它,然后去看它通过系统调用做了什么?当一些应用程序意外退出而没有提供适当的错误信息时,你可以去检查它是否有系统调用失败。你也可以使用过滤器,查看每个调用的次数,等等:
|
||||
|
||||
```
|
||||
~/code/x86-os$ strace -T -e trace=recv curl -silent www.google.com. > /dev/null
|
||||
|
||||
recv(3, "HTTP/1.1 200 OK\r\nDate: Wed, 05 N"..., 16384, 0) = 4164 <0.000007>
|
||||
recv(3, "fl a{color:#36c}a:visited{color:"..., 16384, 0) = 2776 <0.000005>
|
||||
recv(3, "adient(top,#4d90fe,#4787ed);filt"..., 16384, 0) = 4164 <0.000007>
|
||||
recv(3, "gbar.up.spd(b,d,1,!0);break;case"..., 16384, 0) = 2776 <0.000006>
|
||||
recv(3, "$),a.i.G(!0)),window.gbar.up.sl("..., 16384, 0) = 1388 <0.000004>
|
||||
recv(3, "margin:0;padding:5px 8px 0 6px;v"..., 16384, 0) = 1388 <0.000007>
|
||||
recv(3, "){window.setTimeout(function(){v"..., 16384, 0) = 1484 <0.000006>
|
||||
```
|
||||
|
||||
我鼓励你在你的操作系统中试验这些工具。把它们用好会让你觉得自己有超能力。
|
||||
|
||||
但是,足够有用的东西,往往要让我们深入到它的设计中。我们可以看到那些用户空间中的应用程序是被严格限制在它自己的虚拟地址空间里,运行在 Ring 3(非特权模式)中。一般来说,只涉及到计算和内存访问的任务是不需要请求系统调用的。例如,像 [strlen(3)][23] 和 [memcpy(3)][24] 这样的 C 库函数并不需要内核去做什么。这些都是在应用程序内部发生的事。
|
||||
|
||||
C 库函数的 man 页面所在的节(即圆括号里的 `2` 和 `3`)也提供了线索。节 2 是用于系统调用封装,而节 3 包含了其它 C 库函数。但是,正如我们在 `printf(3)` 中所看到的,库函数最终可以产生一个或者多个系统调用。
|
||||
|
||||
如果你对此感到好奇,这里是 [Linux][25] (也有 [Filippo 的列表][26])和 [Windows][27] 的全部系统调用列表。它们各自有大约 310 和 460 个系统调用。看这些系统调用是非常有趣的,因为,它们代表了*软件*在现代的计算机上能够做什么。另外,你还可能在这里找到与进程间通讯和性能相关的“宝藏”。这是一个“不懂 Unix 的人注定最终还要重新发明一个蹩脚的 Unix ” 的地方。(LCTT 译注:原文 “Those who do not understand Unix are condemned to reinvent it,poorly。” 这句话是 [Henry Spencer][35] 的名言,反映了 Unix 的设计哲学,它的一些理念和文化是一种技术发展的必须结果,看似糟糕却无法超越。)
|
||||
|
||||
与 CPU 周期相比,许多系统调用花[很长的时间][28]去执行任务,例如,从一个硬盘驱动器中读取内容。在这种情况下,调用进程在底层的工作完成之前一直*处于休眠状态*。因为,CPU 运行的非常快,一般的程序都因为 **I/O 的限制**在它的生命周期的大部分时间处于休眠状态,等待系统调用返回。相反,如果你跟踪一个计算密集型任务,你经常会看到没有任何的系统调用参与其中。在这种情况下,[top(1)][29] 将显示大量的 CPU 使用。
|
||||
|
||||
在一个系统调用中的开销可能会是一个问题。例如,固态硬盘比普通硬盘要快很多,但是,操作系统的开销可能比 I/O 操作本身的开销 [更加昂贵][30]。执行大量读写操作的程序可能就是操作系统开销的瓶颈所在。[向量化 I/O][31] 对此有一些帮助。因此要做 [文件的内存映射][32],它允许一个程序仅访问内存就可以读或写磁盘文件。类似的映射也存在于像视频卡这样的地方。最终,云计算的经济性可能导致内核消除或最小化用户模式/内核模式的切换。
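下面是文件内存映射的一个极简示意（假设当前目录下有一个 `data.txt` 文件）：把文件映射进地址空间之后，读取文件内容就只是普通的内存访问：

```
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("data.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    // 把整个文件映射为只读内存
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(p, 1, (size_t)st.st_size, stdout);   // 像访问内存一样“读”文件

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```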
|
||||
|
||||
最终,系统调用还有益于系统安全。一是,无论如何来历不明的一个二进制程序,你都可以通过观察它的系统调用来检查它的行为。这种方式可能用于去检测恶意程序。例如,我们可以记录一个未知程序的系统调用的策略,并对它的异常行为进行报警,或者对程序调用指定一个白名单,这样就可以让漏洞利用变得更加困难。在这个领域,我们有大量的研究,和许多工具,但是没有“杀手级”的解决方案。
|
||||
|
||||
这就是系统调用。很抱歉这篇文章有点长,我希望它对你有用。接下来的时间,我将写更多(短的)文章,也可以在 [RSS][33] 和 [Twitter][34] 关注我。这篇文章献给 glorious Clube Atlético Mineiro。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://manybutfinite.com/post/system-calls/
|
||||
|
||||
作者:[Gustavo Duarte][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://duartes.org/gustavo/blog/about/
|
||||
[1]:https://manybutfinite.com/code/x86-os/pid.c
|
||||
[2]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory
|
||||
[3]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
|
||||
[4]:http://linux.die.net/man/2/getpid
|
||||
[5]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/getpid.c;h=937b1d4e113b1cff4a5c698f83d662e130d596af;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l49
|
||||
[6]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl#L48
|
||||
[7]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l139
|
||||
[8]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l179
|
||||
[9]:https://manybutfinite.com/post/cpu-rings-privilege-and-protection
|
||||
[10]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L354-L386
|
||||
[11]:http://lwn.net/Articles/604287/
|
||||
[12]:http://lwn.net/Articles/604515/
|
||||
[13]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L422
|
||||
[14]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/syscall_64.c#L25
|
||||
[15]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L809
|
||||
[16]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L859
|
||||
[17]:http://linux.die.net/man/2/getpid
|
||||
[18]:http://linux.die.net/man/2/syscall
|
||||
[19]:http://linux.die.net/man/1/strace
|
||||
[20]:https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/dtruss.1m.html
|
||||
[21]:http://dtrace.org/blogs/brendan/2011/10/10/top-10-dtrace-scripts-for-mac-os-x/
|
||||
[22]:http://technet.microsoft.com/en-us/sysinternals/bb842062.aspx
|
||||
[23]:http://linux.die.net/man/3/strlen
|
||||
[24]:http://linux.die.net/man/3/memcpy
|
||||
[25]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl
|
||||
[26]:https://filippo.io/linux-syscall-table/
|
||||
[27]:http://j00ru.vexillium.org/ntapi/
|
||||
[28]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait/
|
||||
[29]:http://linux.die.net/man/1/top
|
||||
[30]:http://danluu.com/clwb-pcommit/
|
||||
[31]:http://en.wikipedia.org/wiki/Vectored_I/O
|
||||
[32]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
|
||||
[33]:http://feeds.feedburner.com/GustavoDuarte
|
||||
[34]:http://twitter.com/food4hackers
|
||||
[35]:https://en.wikipedia.org/wiki/Henry_Spencer
|
@ -1,64 +1,67 @@
|
||||
Getting Started with Taskwarrior
|
||||
======
|
||||
Taskwarrior is a flexible [command-line task management program][1]. In their [own words][2]:
|
||||
基于命令行的任务管理器 Taskwarrior
|
||||
=====
|
||||
|
||||
Taskwarrior manages your TODO list from your command line. It is flexible, fast, efficient, unobtrusive, does its job then gets out of your way.
|
||||
Taskwarrior 是一个灵活的[命令行任务管理程序][1],用他们[自己的话说][2]:
|
||||
|
||||
Taskwarrior is highly customizable, but can also be used "right out of the box." In this article, we'll show you the basic commands to add and complete tasks. Then we'll cover a couple more advanced commands. And finally, we'll show you some basic configuration settings to begin customizing your setup.
|
||||
> Taskwarrior 在命令行里管理你的 TODO 列表。它灵活,快速,高效,不显眼,它默默做自己的事情让你避免自己管理。
|
||||
|
||||
### Installing Taskwarrior
|
||||
Taskwarrior 是高度可定制的,但也可以“立即使用”。在本文中,我们将向你展示添加和完成任务的基本命令,然后我们将介绍几个更高级的命令。最后,我们将向你展示一些基本的配置设置,以开始自定义你的设置。
|
||||
|
||||
### 安装 Taskwarrior
|
||||
|
||||
Taskwarrior 在 Fedora 仓库中是可用的，所以安装它很容易：
|
||||
|
||||
Taskwarrior is available in the Fedora repositories, so installing it is simple:
|
||||
```
|
||||
sudo dnf install task
|
||||
|
||||
```
|
||||
|
||||
Once installed, run `task`. This first run will create a `~/.taskrc` file for you.
|
||||
一旦完成安装,运行 `task` 命令。第一次运行将会创建一个 `~/.taskrc` 文件。
|
||||
|
||||
```
|
||||
$ **task**
|
||||
$ task
|
||||
A configuration file could not be found in ~
|
||||
|
||||
Would you like a sample /home/link/.taskrc created, so Taskwarrior can proceed? (yes/no) yes
|
||||
[task next]
|
||||
No matches.
|
||||
|
||||
```
|
||||
|
||||
### Adding Tasks
|
||||
### 添加任务
|
||||
|
||||
添加任务快速而不显眼。
|
||||
|
||||
Adding tasks is fast and unobtrusive.
|
||||
```
|
||||
$ **task add Plant the wheat**
|
||||
$ task add Plant the wheat
|
||||
Created task 1.
|
||||
|
||||
```
|
||||
|
||||
Run `task` or `task list` to show upcoming tasks.
|
||||
运行 `task` 或者 `task list` 来显示即将来临的任务。
|
||||
|
||||
```
|
||||
$ **task list**
|
||||
$ task list
|
||||
|
||||
ID Age Description Urg
|
||||
1 8s Plant the wheat 0
|
||||
|
||||
1 task
|
||||
|
||||
```
|
||||
|
||||
Let's add a few more tasks to round out the example.
|
||||
让我们添加一些任务来完成这个示例。
|
||||
|
||||
```
|
||||
$ **task add Tend the wheat**
|
||||
$ task add Tend the wheat
|
||||
Created task 2.
|
||||
$ **task add Cut the wheat**
|
||||
$ task add Cut the wheat
|
||||
Created task 3.
|
||||
$ **task add Take the wheat to the mill to be ground into flour**
|
||||
$ task add Take the wheat to the mill to be ground into flour
|
||||
Created task 4.
|
||||
$ **task add Bake a cake**
|
||||
$ task add Bake a cake
|
||||
Created task 5.
|
||||
|
||||
```
|
||||
|
||||
Run `task` again to view the list.
|
||||
再次运行 `task` 来查看列表。
|
||||
|
||||
```
|
||||
[task next]
|
||||
|
||||
@ -70,83 +73,83 @@ ID Age Description Urg
|
||||
5 2s Bake a cake 0
|
||||
|
||||
5 tasks
|
||||
|
||||
```
|
||||
|
||||
### Completing Tasks
|
||||
### 完成任务
|
||||
|
||||
将一个任务标记为完成, 查找其 ID 并运行:
|
||||
|
||||
To mark a task as complete, look up its ID and run:
|
||||
```
|
||||
$ **task 1 done**
|
||||
$ task 1 done
|
||||
Completed task 1 'Plant the wheat'.
|
||||
Completed 1 task.
|
||||
|
||||
```
|
||||
|
||||
You can also mark a task done with its description.
|
||||
你也可以用它的描述来标记一个任务已完成。
|
||||
|
||||
```
|
||||
$ **task 'Tend the wheat' done**
|
||||
$ task 'Tend the wheat' done
|
||||
Completed task 1 'Tend the wheat'.
|
||||
Completed 1 task.
|
||||
|
||||
```
|
||||
|
||||
With `add`, `list` and `done`, you're all ready to get started with Taskwarrior.
|
||||
通过使用 `add`、`list` 和 `done`,你可以说已经入门了。
|
||||
|
||||
### Setting Due Dates
|
||||
### 设定截止日期
|
||||
|
||||
很多任务不需要一个截止日期:
|
||||
|
||||
Many tasks do not require a due date:
|
||||
```
|
||||
task add Finish the article on Taskwarrior
|
||||
|
||||
```
|
||||
|
||||
But sometimes, setting a due date is just the kind of motivation you need to get productive. Use the `due` modifier when adding a task to set a specific due date.
|
||||
但是有时候,设定一个截止日期正是你需要提高效率的动力。在添加任务时使用 `due` 修饰符来设置特定的截止日期。
|
||||
|
||||
```
|
||||
task add Finish the article on Taskwarrior due:tomorrow
|
||||
|
||||
```
|
||||
|
||||
`due` is highly flexible. It accepts specific dates ("2017-02-02"), or ISO-8601 ("2017-02-02T20:53:00Z"), or even relative time ("8hrs"). See the [Date & Time][3] documentation for all the examples.
|
||||
`due` 非常灵活。它接受特定日期 (`2017-02-02`) 或 ISO-8601 (`2017-02-02T20:53:00Z`),甚至相对时间 (`8hrs`)。可以查看所有示例的 [Date & Time][3] 文档。
|
||||
|
||||
日期也不只有截止日期,Taskwarrior 有 `scheduled`, `wait` 和 `until` 选项。
|
||||
|
||||
Dates go beyond due dates too. Taskwarrior has `scheduled`, `wait`, and `until`.
|
||||
```
|
||||
task add Proof the article on Taskwarrior scheduled:thurs
|
||||
|
||||
```
|
||||
|
||||
Once the date (Thursday in this example) passes, the task is tagged with the `READY` virtual tag. It will then show up in the `ready` report.
|
||||
一旦日期(本例中的星期四)通过,该任务就会被标记为 `READY` 虚拟标记。它会显示在 `ready` 报告中。
|
||||
|
||||
```
|
||||
$ **task ready**
|
||||
$ task ready
|
||||
|
||||
ID Age S Description Urg
|
||||
1 2s 1d Proof the article on Taskwarrior 5
|
||||
|
||||
```
|
||||
|
||||
To remove a date, `modify` the task with a blank value:
|
||||
要移除一个日期,使用空白值来 `modify` 任务:
|
||||
|
||||
```
|
||||
$ task 1 modify scheduled:
|
||||
|
||||
```
|
||||
|
||||
### Searching Tasks
|
||||
### 查找任务
|
||||
|
||||
如果没有使用正则表达式搜索的能力,任务列表是不完整的,对吧?
|
||||
|
||||
No task list is complete without the ability to search with regular expressions, right?
|
||||
```
|
||||
$ **task '/.* the wheat/' list**
|
||||
$ task '/.* the wheat/' list
|
||||
|
||||
ID Age Project Description Urg
|
||||
2 42min Take the wheat to the mill to be ground into flour 0
|
||||
1 42min Home Cut the wheat 1
|
||||
|
||||
2 tasks
|
||||
|
||||
```
|
||||
|
||||
### Customizing Taskwarrior
|
||||
### 自定义 Taskwarrior
|
||||
|
||||
记得我们在开头创建的文件 (`~/.taskrc`)吗?让我们来看看默认设置:
|
||||
|
||||
Remember that file we created back in the beginning (`~/.taskrc`). Let's take at the defaults:
|
||||
```
|
||||
# [Created by task 2.5.1 2/9/2017 16:39:14]
|
||||
# Taskwarrior program configuration file.
|
||||
@ -178,47 +181,46 @@ data.location=~/.task
|
||||
#include /usr//usr/share/task/solarized-dark-256.theme
|
||||
#include /usr//usr/share/task/solarized-light-256.theme
|
||||
#include /usr//usr/share/task/no-color.theme
|
||||
|
||||
|
||||
```
|
||||
|
||||
The only active option right now is `data.location=~/.task`. To view active configuration settings (including the built-in defaults), run `show`.
|
||||
现在唯一生效的选项是 `data.location=~/.task`。要查看活动配置设置(包括内置的默认设置),运行 `show`。
|
||||
|
||||
```
|
||||
task show
|
||||
|
||||
```
|
||||
|
||||
To change a setting, use `config`.
|
||||
要改变设置,使用 `config`。
|
||||
|
||||
```
|
||||
$ **task config displayweeknumber no**
|
||||
$ task config displayweeknumber no
|
||||
Are you sure you want to add 'displayweeknumber' with a value of 'no'? (yes/no) yes
|
||||
Config file /home/link/.taskrc modified.
|
||||
|
||||
```
|
||||
|
||||
### Examples
|
||||
### 示例
|
||||
|
||||
These are just some of the things you can do with Taskwarrior.
|
||||
这些只是你可以用 Taskwarrior 做的一部分事情。
|
||||
|
||||
将你的任务分配到一个项目:
|
||||
|
||||
Assign a project to your tasks:
|
||||
```
|
||||
task 'Fix leak in the roof' modify project:Home
|
||||
|
||||
```
|
||||
|
||||
Use `start` to mark what you were working on. This can help you remember what you were working on after the weekend:
|
||||
使用 `start` 来标记你正在做的事情,这可以帮助你回忆起你周末后在做什么:
|
||||
|
||||
```
|
||||
task 'Fix bug #141291' start
|
||||
|
||||
```
|
||||
|
||||
Use relevant tags:
|
||||
使用相关的标签:
|
||||
|
||||
```
|
||||
task add 'Clean gutters' +weekend +house
|
||||
|
||||
```
|
||||
|
||||
Be sure to read the [complete documentation][4] to learn all the ways you can catalog and organize your tasks.
|
||||
务必阅读[完整文档][4]以了解你可以编目和组织任务的所有方式。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
@ -226,8 +228,8 @@ Be sure to read the [complete documentation][4] to learn all the ways you can ca
|
||||
via: https://fedoramagazine.org/getting-started-taskwarrior/
|
||||
|
||||
作者:[Link Dupont][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -7,17 +7,16 @@
|
||||
|
||||
我想在这个讨论的基础上写一些笔记，因为我觉得它超级棒！
|
||||
|
||||
这是 [幻灯片][9] 和一个 [pdf][10]。这个 pdf 非常好,结束的位置有一些链接,在 PDF 中你可以直接点击这个链接。
|
||||
开始前,这里有个 [幻灯片][9] 和一个 [pdf][10]。这个 pdf 非常好,结束的位置有一些链接,在 PDF 中你可以直接点击这个链接。
|
||||
|
||||
### 什么是 BPF?
|
||||
|
||||
在 BPF 出现之前,如果你想去做包过滤,你必须拷贝所有进入用户空间的包,然后才能去过滤它们(使用 “tap”)。
|
||||
在 BPF 出现之前,如果你想去做包过滤,你必须拷贝所有的包到用户空间,然后才能去过滤它们(使用 “tap”)。
|
||||
|
||||
这样做存在两个问题:
|
||||
|
||||
1. 如果你在用户空间中过滤,意味着你将拷贝所有进入用户空间的包,拷贝数据的代价是很昂贵的。
|
||||
|
||||
2. 使用的过滤算法很低效
|
||||
1. 如果你在用户空间中过滤,意味着你将拷贝所有的包到用户空间,拷贝数据的代价是很昂贵的。
|
||||
2. 使用的过滤算法很低效。
|
||||
|
||||
问题 #1 的解决方法似乎很明显,就是将过滤逻辑移到内核中。(虽然具体实现的细节并没有明确,我们将在稍后讨论)
|
||||
|
||||
@ -35,12 +34,11 @@
|
||||
|
||||
### 为什么 BPF 要工作在内核中
|
||||
|
||||
这里的关键点是,包仅仅是个字节的数组。BPF 程序是运行在这些字节的数组上。它们不允许有循环(loops),但是,它们 _可以_ 有聪明的办法知道 IP 包头(IPv6 和 IPv4 长度是不同的)以及基于它们的长度来找到 TCP 端口
|
||||
这里的关键点是,包仅仅是个字节的数组。BPF 程序是运行在这些字节的数组之上。它们不允许有循环(loop),但是,它们 _可以_ 有聪明的办法知道 IP 包头(IPv6 和 IPv4 长度是不同的)以及基于它们的长度来找到 TCP 端口:
|
||||
|
||||
```
|
||||
x = ip_header_length
|
||||
port = *(packet_start + x + port_offset)
|
||||
|
||||
```
|
||||
|
||||
(看起来不一样,其实它们基本上都相同)。在这个论文/幻灯片上有一个非常详细的虚拟机的描述,因此,我不打算解释它。
|
||||
@ -48,13 +46,9 @@ port = *(packet_start + x + port_offset)
|
||||
当你运行 `tcpdump host foo` 后,这时发生了什么?就我的理解,应该是如下的过程。
|
||||
|
||||
1. 转换 `host foo` 为一个高效的 DAG 规则
|
||||
|
||||
2. 转换那个 DAG 规则为 BPF 虚拟机的一个 BPF 程序(BPF 字节码)
|
||||
|
||||
3. 发送 BPF 字节码到 Linux 内核，由 Linux 内核验证它（这一步的用户态做法可以参考本列表之后的示意代码）
|
||||
|
||||
4. 编译这个 BPF 字节码程序为一个原生(native)代码。例如, [在 ARM 上是 JIT 代码][1] 以及为 [x86][2] 的机器码
|
||||
|
||||
4. 编译这个 BPF 字节码程序为一个<ruby>原生<rt>native</rt></ruby>代码。例如,这是个[ARM 上的 JIT 代码][1] 以及 [x86][2] 的机器码
|
||||
5. 当包进入时,Linux 运行原生代码去决定是否过滤这个包。对于每个需要去处理的包,它通常仅需运行 100 - 200 个 CPU 指令就可以完成,这个速度是非常快的!
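下面是上面第 3 步在用户态一侧的一个极简示意（假设运行在 Linux 上，且需要 root 权限）：程序把一段只有一条指令、无条件接受整个包的经典 BPF 字节码通过 `setsockopt` 交给内核；真实的 tcpdump 过滤器则是由 libpcap 编译出更长的指令序列：

```
#include <stdio.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/if_ether.h>
#include <linux/filter.h>

int main(void)
{
    // 打开一个原始套接字来接收所有以太网帧（需要 root 权限）
    int sock = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (sock < 0) { perror("socket"); return 1; }

    // 最小的经典 BPF 程序：一条指令，返回“接受整个包”
    struct sock_filter code[] = {
        BPF_STMT(BPF_RET | BPF_K, 0xFFFFFFFF),
    };
    struct sock_fprog prog = { .len = 1, .filter = code };

    // 把字节码交给内核，由内核验证并在每个到达的包上运行它
    if (setsockopt(sock, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
        perror("setsockopt");
        return 1;
    }
    printf("filter attached\n");
    return 0;
}
```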
|
||||
|
||||
### 现状:eBPF
|
||||
@ -63,19 +57,15 @@ port = *(packet_start + x + port_offset)
|
||||
|
||||
关于 eBPF 的一些事实是:
|
||||
|
||||
* eBPF 程序有它们自己的字节码语言,并且从那个字节码语言编译成内核原生代码,就像 BPF 程序
|
||||
|
||||
* eBPF 程序有它们自己的字节码语言,并且从那个字节码语言编译成内核原生代码,就像 BPF 程序一样
|
||||
* eBPF 运行在内核中
|
||||
|
||||
* eBPF 程序不能随心所欲的访问内核内存。而是通过内核提供的函数去取得一些受严格限制的所需要的内容的子集。
|
||||
|
||||
* eBPF 程序不能随心所欲的访问内核内存。而是通过内核提供的函数去取得一些受严格限制的所需要的内容的子集
|
||||
* 它们 _可以_ 与用户空间的程序通过 BPF 映射进行通讯
|
||||
|
||||
* 这是 Linux 3.18 的 `bpf` 系统调用
|
||||
|
||||
### kprobes 和 eBPF
|
||||
|
||||
你可以在 Linux 内核中挑选一个函数(任意函数),然后运行一个你写的每次函数被调用时都运行的程序。这样看起来是不是很神奇。
|
||||
你可以在 Linux 内核中挑选一个函数(任意函数),然后运行一个你写的每次该函数被调用时都运行的程序。这样看起来是不是很神奇。
|
||||
|
||||
例如:这里有一个 [名为 disksnoop 的 BPF 程序][12],它的功能是当你开始/完成写入一个块到磁盘时,触发它执行跟踪。下图是它的代码片断:
|
||||
|
||||
@ -92,45 +82,37 @@ b.attach_kprobe(event="blk_mq_start_request", fn_name="trace_start")
|
||||
|
||||
```
|
||||
|
||||
从根本上来说,它声明一个 BPF 哈希(它的作用是当请求开始/完成时,这个程序去触发跟踪),一个名为 `trace_start` 的函数将被编译进 BPF 字节码,然后附加 `trace_start` 到内核函数 `blk_start_request` 上。
|
||||
本质上它声明一个 BPF 哈希(它的作用是当请求开始/完成时,这个程序去触发跟踪),一个名为 `trace_start` 的函数将被编译进 BPF 字节码,然后附加 `trace_start` 到内核函数 `blk_start_request` 上。
|
||||
|
||||
这里使用的是 `bcc` 框架,它可以使你写的 Python 化的程序去生成 BPF 代码。你可以在 [https://github.com/iovisor/bcc][13] 找到它(那里有非常多的示例程序)。
|
||||
这里使用的是 `bcc` 框架,它可以让你写 Python 式的程序去生成 BPF 代码。你可以在 [https://github.com/iovisor/bcc][13] 找到它(那里有非常多的示例程序)。
|
||||
|
||||
### uprobes 和 eBPF
|
||||
|
||||
因为我知道你可以附加 eBPF 程序到内核函数上,但是,我不知道你能否将 eBPF 程序附加到用户空间函数上!那会有更多令人激动的事情。这是 [在 Python 中使用一个 eBPF 程序去计数 malloc 调用的示例][14]。
|
||||
因为我知道可以附加 eBPF 程序到内核函数上,但是,我不知道能否将 eBPF 程序附加到用户空间函数上!那会有更多令人激动的事情。这是 [在 Python 中使用一个 eBPF 程序去计数 malloc 调用的示例][14]。
|
||||
|
||||
### 附加 eBPF 程序时应该考虑的事情
|
||||
|
||||
* 带 XDP 的网卡(我之前写过关于这方面的文章)
|
||||
|
||||
* tc egress/ingress (在网络栈上)
|
||||
|
||||
* kprobes(任意内核函数)
|
||||
|
||||
* uprobes(很明显,任意用户空间函数??像带符号的任意 C 程序)
|
||||
|
||||
* uprobes(很明显,任意用户空间函数??像带调试符号的任意 C 程序)
|
||||
* probes 是为 dtrace 构建的名为 “USDT probes” 的探针(像 [这些 mysql 探针][3])。这是一个 [使用 dtrace 探针的示例程序][4]
|
||||
|
||||
* [JVM][5]
|
||||
|
||||
* 跟踪点
|
||||
|
||||
* seccomp / landlock 安全相关的事情
|
||||
|
||||
* 更多的事情
|
||||
* 等等
|
||||
|
||||
### 这个讨论超级棒
|
||||
|
||||
在幻灯片里有很多非常好的链接,并且在 iovisor 仓库里有个 [LINKS.md][15]。现在已经很晚了,但是,很快我将写我的第一个 eBPF 程序了!
|
||||
在幻灯片里有很多非常好的链接,并且在 iovisor 仓库里有个 [LINKS.md][15]。虽然现在已经很晚了,但是我马上要去写我的第一个 eBPF 程序了!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2017/06/28/notes-on-bpf---ebpf/
|
||||
|
||||
作者:[Julia Evans ][a]
|
||||
作者:[Julia Evans][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,122 @@
|
||||
ImageMagick 的一些高级图片查看技巧
|
||||
======
|
||||
|
||||
> 用这些 ImageMagick 命令行图像编辑应用的技巧更好的管理你的数码照片集。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-photo-camera-green.png?itok=qiDqmXV1)
|
||||
|
||||
图片源于 [Internet Archive Book Images](https://www.flickr.com/photos/internetarchivebookimages/14759826206/in/photolist-ougY7b-owgz5y-otZ9UN-waBxfL-oeEpEf-xgRirT-oeMHfj-wPAvMd-ovZgsb-xhpXhp-x3QSRZ-oeJmKC-ovWeKt-waaNUJ-oeHPN7-wwMsfP-oeJGTK-ovZPKv-waJnTV-xDkxoc-owjyCW-oeRqJh-oew25u-oeFTm4-wLchfu-xtjJFN-oxYznR-oewBRV-owdP7k-owhW3X-oxXxRg-oevDEY-oeFjP1-w7ZB6f-x5ytS8-ow9C7j-xc6zgV-oeCpG1-oewNzY-w896SB-wwE3yA-wGNvCL-owavts-oevodT-xu9Lcr-oxZqZg-x5y4XV-w89d3n-x8h6fi-owbfiq),Opensource.com 修改,[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)协议
|
||||
|
||||
在我先前的[ImageMagick 入门:使用命令行来编辑图片][1] 文章中,我展示了如何使用 ImageMagick 的菜单栏进行图片的编辑和变换风格。在这篇续文里,我将向你展示使用这个开源的图像编辑器来查看图片的另外方法。
|
||||
|
||||
### 别样的风格
|
||||
|
||||
在深入 ImageMagick 的高级图片查看技巧之前,我想先分享另一个使用 `convert` 达到的有趣但简单的效果,在[上一篇文章][1]中我已经详细地介绍了 `convert` 命令,这个技巧涉及这个命令的 `edge` 和 `negate` 选项:
|
||||
|
||||
```
|
||||
convert DSC_0027.JPG -edge 3 -negate edge3+negate.jpg
|
||||
```
|
||||
|
||||
![在图片上使用 `edge` 和 `negate` 选项][3]
|
||||
|
||||
*使用`edge` 和 `negate` 选项前后的图片对比*
|
||||
|
||||
这些使我更喜爱编辑后的图片:海的外观,作为前景和背景的植被,特别是太阳及其在海上的反射,最后是天空。
|
||||
|
||||
### 使用 display 来查看一系列图片
|
||||
|
||||
假如你跟我一样是个命令行用户,你就知道 shell 为复杂任务提供了更多的灵活性和快捷方法。下面我将展示一个例子来佐证这个观点。ImageMagick 的 `display` 命令可以克服我在 GNOME 桌面上使用 [Shotwell][4] 图像管理器导入图片时遇到的问题。
|
||||
|
||||
Shotwell 会根据每张导入图片的 [Exif][5] 数据,创建以图片被生成或者拍摄时的日期为名称的目录结构。最终的效果是最上层的目录以年命名,接着的子目录是以月命名 (01、 02、 03 等等),然后是以每月的日期命名的子目录。我喜欢这种结构,因为当我想根据图片被创建或者拍摄时的日期来查找它们时将会非常方便。
|
||||
|
||||
但这种结构也并不是非常完美的,当我想查看最近几个月或者最近一年的所有图片时就会很麻烦。使用常规的图片查看器,我将不停地在不同层级的目录间跳转,但 ImageMagick 的 `display` 命令可以使得查看更加简单。例如,假如我想查看最近一年的图片,我便可以在命令行中键入下面的 `display` 命令:
|
||||
|
||||
```
|
||||
display -resize 35% 2017/*/*/*.JPG
|
||||
```
|
||||
|
||||
我可以一个月又一个月,一天又一天地遍历这一年。
|
||||
|
||||
现在假如我想查看某张图片,但我不确定我是在 2016 年的上半年还是在 2017 的上半年拍摄的,那么我便可以使用下面的命令来找到它:
|
||||
|
||||
```
|
||||
display -resize 35% 201[6-7]/0[1-6]/*/*.JPG
|
||||
```
|
||||
|
||||
这就把查看范围限制为拍摄于 2016 和 2017 年的一月到六月之间的图片。
|
||||
|
||||
### 使用 montage 来查看图片的缩略图
|
||||
|
||||
假如现在我要查找一张我想要编辑的图片,使用 `display` 的一个问题是它只会显示每张图片的文件名,而不显示其在目录结构中的位置,所以想要找到那张图片并不容易。另外,假如我很偶然地在从相机下载图片的过程中将这些图片从相机的内存里面清除了它们,结果使得下次拍摄照片的名称又从 `DSC_0001.jpg` 开始命名,那么当使用 `display` 来展示一整年的图片时,将会在这 12 个月的图片中花费很长的时间来查找它们。
|
||||
|
||||
这时 `montage` 命令便可以派上用场了。它可以将一系列的图片缩略图放在一张图片中,这样就会非常有用。例如可以使用下面的命令来完成上面的任务:
|
||||
|
||||
```
|
||||
montage -label %d/%f -title 2017 -tile 5x -resize 10% -geometry +4+4 2017/0[1-4]/*/*.JPG 2017JanApr.jpg
|
||||
```
|
||||
|
||||
从左到右,这个命令以标签开头,标签的形式是包含文件名(`%f`)和以 `/` 分割的目录(`%d`)结构,接着这个命令以目录的名称(2017)来作为标题,然后将图片排成 5 列,每个图片缩放为 10% (这个参数可以很好地匹配我的屏幕)。`geometry` 的设定将在每张图片的四周留白,最后指定那些图片要包括到这张合成图片中,以及一个合适的文件名称(`2017JanApr.jpg`)。现在图片 `2017JanApr.jpg` 便可以成为一个索引,使得我可以不时地使用它来查看这个时期的所有图片。
|
||||
|
||||
### 注意内存消耗
|
||||
|
||||
你可能会好奇为什么我在上面的合成图中只特别指定了为期 4 个月(从一月到四月)的图片。因为 `montage` 将会消耗大量内存,所以你需要多加注意。我的相机产生的图片每张大约有 2.5MB,我发现我的系统可以很轻松地处理 60 张图片。但一旦图片增加到 80 张,如果此时还有另外的程序(例如 Firefox 、Thunderbird)在后台工作,那么我的电脑将会死机,这似乎和内存使用相关,`montage`可能会占用可用 RAM 的 80% 乃至更多(你可以在此期间运行 `top` 命令来查看内存占用)。假如我关掉其他的程序,我便可以在我的系统死机前处理 80 张图片。
|
||||
|
||||
下面的命令可以让你知晓在你运行 `montage` 命令前你需要处理图片张数:
|
||||
|
||||
```
|
||||
ls 2017/0[1-4]/*/*.JPG > filelist; wc -l filelist
|
||||
```
|
||||
|
||||
`ls` 命令生成我们搜索的文件的列表,然后通过重定向将这个列表保存在任意以名为 `filelist` 的文件中。接着带有 `-l` 选项的 `wc` 命令输出该列表文件共有多少行,换句话说,展示出了需要处理的文件个数。下面是我运行命令后的输出:
|
||||
|
||||
```
|
||||
163 filelist
|
||||
```
|
||||
|
||||
啊呀!从一月到四月我居然有 163 张图片,使用这些图片来创建一张合成图一定会使得我的系统死机的。我需要将这个列表减少点,可能只处理到 3 月份或者更早的图片。但如果我在 4 月 20 号到 30 号期间拍摄了很多照片,我想这便是问题的所在。下面的命令便可以帮助指出这个问题:
|
||||
|
||||
```
|
||||
ls 2017/0[1-3]/*/*.JPG > filelist; ls 2017/04/0[1-9]/*.JPG >> filelist; ls 2017/04/1[0-9]/*.JPG >> filelist; wc -l filelist
|
||||
```
|
||||
|
||||
上面一行中共有 4 个命令,它们以分号分隔。第一个命令特别指定从一月到三月期间拍摄的照片;第二个命令使用 `>>` 将拍摄于 4 月 1 日至 9 日的照片追加到这个列表文件中;第三个命令将拍摄于 4 月 10 日到 19 日的照片追加到列表中。最终它的显示结果为:
|
||||
|
||||
```
|
||||
81 filelist
|
||||
```
|
||||
|
||||
我知道假如我关掉其他的程序,处理 81 张图片是可行的。
|
||||
|
||||
使用 `montage` 来处理它们是很简单的,因为我们只需要将上面所做的处理添加到 `montage` 命令的后面即可:
|
||||
|
||||
```
|
||||
montage -label %d/%f -title 2017 -tile 5x -resize 10% -geometry +4+4 2017/0[1-3]/*/*.JPG 2017/04/0[1-9]/*.JPG 2017/04/1[0-9]/*.JPG 2017Jan01Apr19.jpg
|
||||
```
|
||||
|
||||
从左到右,`montage` 命令后面最后的那个文件名将会作为输出,在它之前的都是输入。这个命令将花费大约 3 分钟来运行,并生成一张大小约为 2.5MB 的图片,但我的系统只是有一点反应迟钝而已。
|
||||
|
||||
### 展示合成图片
|
||||
|
||||
当你第一次使用 `display` 查看一张巨大的合成图片时,你将看到合成图的宽度很合适,但图片的高度被压缩了,以便和屏幕相适应。不要慌,只需要左击图片,然后选择 `View > Original Size` 便会显示整个图片。再次点击图片便可以使菜单栏隐藏。
|
||||
|
||||
我希望这篇文章可以在你使用新方法查看图片时帮助你。在我的下一篇文章中,我将讨论更加复杂的图片操作技巧。
|
||||
|
||||
### 作者简介
|
||||
|
||||
Greg Pittman - Greg 肯塔基州路易斯维尔的一名退休的神经科医生,对计算机和程序设计有着长期的兴趣,最早可以追溯到 1960 年代的 Fortran IV 。当 Linux 和开源软件相继出现时,他开始学习更多的相关知识,并分享自己的心得。他是 Scribus 团队的成员。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/9/imagemagick-viewing-images
|
||||
|
||||
作者:[Greg Pittman][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/greg-p
|
||||
[1]:https://linux.cn/article-8851-1.html
|
||||
[3]:https://opensource.com/sites/default/files/u128651/edge3negate.jpg "Using the edge and negate options on an image."
|
||||
[4]:https://wiki.gnome.org/Apps/Shotwell
|
||||
[5]:https://en.wikipedia.org/wiki/Exif
|
@ -1,55 +1,55 @@
|
||||
5 个最佳实践开始你的 DevOps 之旅
|
||||
============================================================
|
||||
|
||||
### 想要实现 DevOps 但是不知道如何开始吗?试试这 5 个最佳实践吧。
|
||||
|
||||
> 想要实现 DevOps 但是不知道如何开始吗?试试这 5 个最佳实践吧。
|
||||
|
||||
![5 best practices for getting started with DevOps](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops-gears.png?itok=rUejbLQX "5 best practices for getting started with DevOps")
|
||||
|
||||
Image by : [Andrew Magill][8]. Modified by Opensource.com. [CC BY 4.0][9]
|
||||
|
||||
想要采用 DevOps 的人通常会过早的被它的歧义性给吓跑,更不要说更加深入的使用了。当一些人开始使用 DevOps 的时候都会问:“如何开始使用呢?”,”怎么才算使用了呢?“。这 5 个最佳实践是很好的路线图来指导你的 DevOps 之旅。
|
||||
想要采用 DevOps 的人通常会过早地被它的歧义性给吓跑，更不要说更加深入地使用了。当人们开始使用 DevOps 的时候都会问：“如何开始使用呢？”、“怎么才算使用了呢？”。这 5 个最佳实践是指导你的 DevOps 之旅的很好的路线图。
|
||||
|
||||
### 1\. 衡量所有的事情
|
||||
### 1、 衡量所有的事情
|
||||
|
||||
除非你能量化输出结果,否则你并不能确认你的努力能否使事情变得更好。新功能能否快速的输出给客户?有更少的漏洞泄漏给他们吗?出错了能快速应对和恢复吗?
|
||||
除非你能够量化输出结果,否则你并不能确认你的努力能否使事情变得更好。新功能能否快速的输出给客户?带给他们的缺陷更少吗?出错了能快速应对和恢复吗?
|
||||
|
||||
在你开始做任何修改之前,思考一下你切换到 DevOps 之后想要一些什么样的输出。随着你的 DevOps 之旅,将享受到服务的所有内容的丰富的实时报告,从这两个指标考虑一下:
|
||||
|
||||
* **上架时间** 衡量端到端,通常是面向客户的业务经验。这通常从一个功能被正式提出而开始,客户在产品中开始使用这个功能而结束。上架时间不是团队的主要指标;更加重要的是,当开发出一个有价值的新功能时,它表明了你完成业务的效率,为系统改进提供了一个机会。
|
||||
* **上架时间** 衡量端到端,通常是面向客户的业务经验。这通常从一个功能被正式提出而开始,客户在产品中开始使用这个功能而结束。上架时间不是工程团队的主要指标;更加重要的是,当开发出一个有价值的新功能时,它表明了你完成业务的效率,为系统改进提供了一个机会。
|
||||
|
||||
* **时间周期** 衡量工程团队的进度。从开始开发一个新功能开始,到在产品中运行需要多久?这个指标对于你理解团队的效率是非常有用的,为团队等级的提升提供了一个机会。
|
||||
* **时间周期** 衡量工程团队的进度。从开始开发一个新功能开始,到在产品环境中运行需要多久?这个指标对于你了解团队的效率是非常有用的,为团队层面的提升提供了一个机会。
|
||||
|
||||
### 2\. 放飞你的流程
|
||||
### 2、 放飞你的流程
|
||||
|
||||
DevOps 的成功需要团队布置一个定期流程并且持续提升它。这不总是有效的,但是必须是一个定期(希望有效)的流程。通常它有一些敏捷开发的味道,就像 Scrum 或者 Scrumban 一样;一些时候它也像精益开发。不论你用的什么方法,挑选一个正式的流程,开始使用它,并且做好这些基础。
|
||||
DevOps 的成功需要团队布置一个定期(但愿有效)流程并且持续提升它。这不总是有效的,但是必须是一个定期的流程。通常它有一些敏捷开发的味道,就像 Scrum 或者 Scrumban 一样;一些时候它也像精益开发。不论你用的什么方法,挑选一个正式的流程,开始使用它,并且做好这些基础。
|
||||
|
||||
定期检查和调整流程是 DevOps 成功的关键,抓住相关演示,团队回顾,每日会议的机会来提升你的流程。
|
||||
定期检查和调整流程是 DevOps 成功的关键,抓住相关演示、团队回顾、每日会议的机会来提升你的流程。
|
||||
|
||||
DevOps 的成功取决于大家一起有效的工作。团队的成员需要在一个有权改进的公共流程中工作。他们也需要定期找机会分享从这个流程中上游或下游的其他人那里学到的东西。
|
||||
|
||||
随着你构建成功。好的流程规范能帮助你的团队以很快的速度体会到 DevOps 其他的好处
|
||||
随着你构建成功,好的流程规范能帮助你的团队以很快的速度体会到 DevOps 其他的好处
|
||||
|
||||
尽管更多面向开发的团队采用 Scrum 是常见的,但是以运营为中心的团队(或者其他中断驱动的团队)可能选用一个更短期的流程,例如 Kanban。
|
||||
|
||||
### 3\. 可视化工作流程
|
||||
### 3、 可视化工作流程
|
||||
|
||||
这是很强大的,能够看到哪个人在给定的时间做哪一部分工作,可视化你的工作流程能帮助大家知道接下来应该做什么,流程中有多少工作以及流程中的瓶颈在哪里。
|
||||
|
||||
在你看到和衡量之前你并不能有效的限制流程中的工作。同样的,你也不能有效的排除瓶颈直到你清楚的看到它。
|
||||
|
||||
全部工作可视化能帮助团队中的成员了解他们在整个工作中的贡献。这样可以促进跨组织边界的关系建设,帮助您的团队更有效地协作,实现共同的成就感。
|
||||
|
||||
### 4\. 持续化所有的事情
|
||||
### 4、 持续化所有的事情
|
||||
|
||||
DevOps 应该是强制自动化的。然而罗马不是一日建成的。你应该注意的第一个事情应该是努力的持续集成(CI),但是不要停留到这里;紧接着的是持续交付(CD)以及最终的持续部署。
|
||||
DevOps 应该是强制自动化的。然而罗马不是一日建成的。你应该注意的第一个事情应该是努力的[持续集成(CI)][10],但是不要停留到这里;紧接着的是[持续交付(CD)][11]以及最终的持续部署。
|
||||
|
||||
持续部署的过程中是个注入自动测试的好时机。这个时候新代码刚被提交,你的持续部署应该运行测试代码来测试你的代码和构建成功的加工品。这个加工品经受流程的考验被产出直到最终被客户看到。
|
||||
持续部署的过程中是个注入自动测试的好时机。这个时候新代码刚被提交,你的持续部署应该运行测试代码来测试你的代码和构建成功的加工品。这个加工品经受流程的考验被产出,直到最终被客户看到。
|
||||
|
||||
另一个“持续”是不太引人注意的持续改进。一个简单的场景是每天询问你旁边的同事:“今天做些什么能使工作变得更好?”,随着时间的推移,这些日常的小改进融合到一起会引起很大的结果,你将很惊喜!但是这也会让人一直思考着如何改进。
|
||||
另一个“持续”是不太引人注意的持续改进。一个简单的场景是每天询问你旁边的同事:“今天做些什么能使工作变得更好?”,随着时间的推移,这些日常的小改进融合到一起会带来很大的结果,你将很惊喜!但是这也会让人一直思考着如何改进。
|
||||
|
||||
### 5\. Gherkinize
|
||||
### 5、 Gherkinize
|
||||
|
||||
促进组织间更有效的沟通对于成功的 DevOps 的系统思想至关重要。在程序员和业务员之间直接使用共享语言来描述新功能的需求文档对于沟通是个好办法。一个好的产品经理能在一天内学会 [Gherkin][12] 然后使用它构造出明确的英语来描述需求文档,工程师会使用 Gherkin 描述的需求文档来写功能测试,之后开发功能代码直到代码通过测试。这是一个简化的 [验收测试驱动开发][13](ATDD),这样就开始了你的 DevOps 文化和开发实践。
|
||||
促进组织间更有效的沟通对于成功的 DevOps 的系统思想至关重要。在程序员和业务员之间直接使用共享语言来描述新功能的需求文档对于沟通是个好办法。一个好的产品经理能在一天内学会 [Gherkin][12] 然后使用它以平实的英语构造出明确的描述需求文档,工程师会使用 Gherkin 描述的需求文档来写功能测试,之后开发功能代码直到代码通过测试。这是一个简化的 [验收测试驱动开发][13](ATDD),能够帮助你开始你的 DevOps 文化和开发实践。
|
||||
|
||||
### 开始你旅程
|
||||
|
||||
@ -60,15 +60,15 @@ DevOps 应该是强制自动化的。然而罗马不是一日建成的。你应
|
||||
|
||||
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot_4.jpg?itok=jntfDCfX)][14]
|
||||
|
||||
Magnus Hedemark - Magnus 在IT行业已有20多年,并且一直热衷于技术。他目前是 nitedHealth Group 的 DevOps 工程师。在业余时间,Magnus 喜欢摄影和划独木舟。
|
||||
Magnus Hedemark - Magnus 在 IT 行业已有 20 多年，并且一直热衷于技术。他目前是 UnitedHealth Group 的 DevOps 工程师。在业余时间，Magnus 喜欢摄影和划独木舟。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/11/5-keys-get-started-devops
|
||||
|
||||
作者:[Magnus Hedemark ][a]
|
||||
作者:[Magnus Hedemark][a]
|
||||
译者:[aiwhj](https://github.com/aiwhj)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,4 +1,4 @@
|
||||
DevOps 将让你失业?
|
||||
DevOps 会让你失业吗?
|
||||
======
|
||||
|
||||
>你是否担心工作中自动化将代替人?可能是对的,但是这并不是件坏事。
|
||||
@ -8,19 +8,19 @@ DevOps 将让你失业?
|
||||
|
||||
这是一个很正常的担心:DevOps 最终会让你失业?毕竟,DevOps 意味着开发人员做运营,对吗?DevOps 是自动化的。如果我的工作都自动化了,我去做什么?实行持续分发和容器化意味着运营已经过时了吗?对于 DevOps 来说,所有的东西都是代码:基础设施是代码、测试是代码、这个和那个都是代码。如果我没有这些技能怎么办?
|
||||
|
||||
[DevOps][1] 是一个即将到来的变化,将颠覆这一领域,狂热的拥挤者们正在谈论,如何使用 [三种方法][2] 去改变世界 —— 即 DevOps 的三大基础 —— 去推翻一个旧的世界。它是势不可档的。那么,问题来了 —— DevOps 将会让我失业吗?
|
||||
[DevOps][1] 是一个即将到来的变化，它将颠覆这一领域，狂热的拥趸们正在谈论，如何使用 [三种方法][2] 去改变世界 —— 即 DevOps 的三大基础 —— 去推翻一个旧的世界。它是势不可挡的。那么，问题来了 —— DevOps 将会让我失业吗？
|
||||
|
||||
### 第一个担心:再也不需要我了
|
||||
|
||||
由于开发者来管理应用程序的整个生命周期,接受 DevOps 的理念很容易。容器化可能是影响这一想法的重要因素。当容器化在各种场景下铺开之后,它们被吹嘘成开发者构建、测试、和部署他们代码的一站式解决方案。DevOps 对于运营、测试、以及 QA 团队来说,有什么作用呢?
|
||||
由于开发者来管理应用程序的整个生命周期,接受 DevOps 的理念很容易。容器化可能是影响这一想法的重要因素。当容器化在各种场景下铺开之后,它们被吹嘘成开发者构建、测试和部署他们代码的一站式解决方案。DevOps 对于运营、测试、以及 QA 团队来说,有什么作用呢?
|
||||
|
||||
这源于对 DevOps 原则的误解。DevOps 的第一原则,或者第一方法是,_系统思考_ ,或者强调整体管理方法和了解应用程序或服务的整个生命周期。这并不意味着应用程序的开发者将学习和管理整个过程。相反,是拥有各个专业和技能的人共同合作,以确保成功。让开发者对这一过程完全负责的作法,几乎是将开发者置于使用者的对立面—— 本质上就是 “将鸡蛋放在了一个篮子里”。
|
||||
这源于对 DevOps 原则的误解。DevOps 的第一原则,或者第一方法是,<ruby>系统思考<rt>Systems Thinking</rt></ruby>,或者强调整体管理方法和了解应用程序或服务的整个生命周期。这并不意味着应用程序的开发者将学习和管理整个过程。相反,是拥有各个专业和技能的人共同合作,以确保成功。让开发者对这一过程完全负责的作法,几乎是将开发者置于使用者的对立面 —— 本质上就是 “将鸡蛋放在了一个篮子里”。
|
||||
|
||||
在 DevOps 中有一个为你保留的专门职位。就像将一个受过传统教育的、拥有线性回归和二分查找知识的软件工程师,被用去写一些 Ansible playbooks 和 Docker 文件,这是一种浪费。而对于那些拥有高级技能,知道如何保护一个系统和优化数据库执行的系统管理员,被浪费在写一些 CSS 和设计用户流这样的工作上。写代码、做测试、和维护应用程序的高效团队一般是跨学科、跨职能的、拥有不同专业技术和背景的人组成的混编团队。
|
||||
|
||||
### 第二个担心:我的工作将被自动化
|
||||
|
||||
或许是,或许不是,DevOps 可能在有时候是自动化的同义词。当自动化构建、测试、部署、监视、以及提醒等事项,已经占据了整个应用程序生命周期管理的时候,还会给我们剩下什么工作呢?这种对自动化的关注可能与第二个方法有关:_放大反馈循环_。DevOps 的第二个方法是在团队和部署的应用程序之间,采用相反的方向优先处理快速反馈 —— 从监视和维护部署、测试、开发、等等,通过强调,使反馈更加重要并且可操作。虽然这第二种方式与自动化并不是特别相关,许多自动化工具团队在它们的部署流水线中使用,以促进快速提醒和快速行动,或者基于对使用者的支持业务中产生的反馈来改进。传统的做法是靠人来完成的,这就可以理解为什么自动化可能会导致未来一些人失业的焦虑了。
|
||||
或许是,或许不是,DevOps 可能在有时候是自动化的同义词。当自动化构建、测试、部署、监视,以及提醒等事项,已经占据了整个应用程序生命周期管理的时候,还会给我们剩下什么工作呢?这种对自动化的关注可能与第二个方法有关:<ruby>放大反馈循环<rt>Amplify Feedback Loops</rt></ruby>。DevOps 的第二个方法是在团队和部署的应用程序之间,采用相反的方向优先处理快速反馈 —— 从监视和维护部署、测试、开发、等等,通过强调,使反馈更加重要并且可操作。虽然这第二种方式与自动化并不是特别相关,许多自动化工具团队在它们的部署流水线中使用,以促进快速提醒和快速行动,或者基于对使用者的支持业务中产生的反馈来改进。传统的做法是靠人来完成的,这就可以理解为什么自动化可能会导致未来一些人失业的焦虑了。
|
||||
|
||||
自动化只是一个工具,它并不能代替人。聪明的人使用它来做一些重复的工作,不去开发智力和创造性的财富,而是去按红色的 “George Jetson” 按钮是一种极大的浪费。让每天工作中的苦活自动化,意味着有更多的时间去解决真正的问题和即将到来的创新的解决方案。人类需要解决更多的 “怎么做和为什么” 问题,而计算机只能处理 “复制和粘贴”。
|
||||
|
||||
@ -28,17 +28,17 @@ DevOps 将让你失业?
|
||||
|
||||
### 第三个担心:我没有这些技能怎么办
|
||||
|
||||
"我怎么去继续做这些事情?我不懂如何自动化。现在所有的工作都是代码 —— 我不是开发人员,我不会做 DevOps 中写代码的工作“,第三个担心是一种不自信的担心。由于文化的改变,是的,团队将也会要求随之改变,一些人可能担心,他们缺乏继续做他们工作的技能。
|
||||
“我怎么去继续做这些事情?我不懂如何自动化。现在所有的工作都是代码 —— 我不是开发人员,我不会做 DevOps 中写代码的工作”,第三个担心是一种不自信的担心。由于文化的改变,是的,团队将也会要求随之改变,一些人可能担心,他们缺乏继续做他们工作的技能。
|
||||
|
||||
然而,大多数人或许已经比他们所想的更接近。Dockerfile 是什么,或者像 Puppet 或 Ansible 配置管理是什么,但是环境即代码,系统管理员已经写了 shell 脚本和 Python 程序去处理他们重复的任务。学习更多的知识并使用已有的工具处理他们的更多问题 —— 编排、部署、维护即代码 —— 尤其是当从繁重的手动任务中解放出来,专注于成长时。
|
||||
然而,大多数人或许已经比他们所想的更接近。Dockerfile 是什么,或者像 Puppet 或 Ansible 配置管理是什么,这就是环境即代码,系统管理员已经写了 shell 脚本和 Python 程序去处理他们重复的任务。学习更多的知识并使用已有的工具处理他们的更多问题 —— 编排、部署、维护即代码 —— 尤其是当从繁重的手动任务中解放出来,专注于成长时。
|
||||
|
||||
在 DevOps 的使用者中去回答这第三个担心,第三个方法是:_一种不断实验和学习的文化_。尝试、失败、并从错误中吸取教训而不是责怪它们的能力,是设计出更有创意的解决方案的重要因素。第三个方法是为前两个方法授权—— 允许快速检测和修复问题,并且开发人员可以自由地尝试和学习,其它的团队也是如此。从未使用过配置管理或者写过自动供给基础设施程序的运营团队也要自由尝试并学习。测试和 QA 团队也要自由实现新测试流水线,并且自动批准和发布新流程。在一个拥抱学习和成长的文化中,每个人都可以自由地获取他们需要的技术,去享受工作带来的成功和喜悦。
|
||||
在 DevOps 的使用者中去回答这第三个担心，第三个方法是：<ruby>一种不断实验和学习的文化<rt>A Culture of Continual Experimentation and Learning</rt></ruby>。尝试、失败，并从错误中吸取教训而不是责怪它们的能力，是设计出更有创意的解决方案的重要因素。第三个方法是为前两个方法授权 —— 允许快速检测和修复问题，并且开发人员可以自由地尝试和学习，其它的团队也是如此。从未使用过配置管理或者写过自动供给基础设施程序的运营团队也要自由尝试并学习。测试和 QA 团队也要自由实现新测试流水线，并且自动批准和发布新流程。在一个拥抱学习和成长的文化中，每个人都可以自由地获取他们需要的技术，去享受工作带来的成功和喜悦。
|
||||
|
||||
### 结束语
|
||||
|
||||
在一个行业中,任何可能引起混乱的实践或变化都会产生担心和不确定,DevOps 也不例外。对自己工作的担心是对成百上千的文章和演讲的合理回应,其中列举了无数的实践和技术,而这些实践和技术正致力于授权开发者对行业的各个方面承担职责。
|
||||
|
||||
然而,事实上,DevOps 是 "[一个跨学科的沟通实践,致力于研究构建、进化、和运营快速变化的弹性系统][3]"。 DevOps 意味着终结 ”筒仓“,但并不专业化。它是受委托去做苦差事的自动化系统,解放你,让你去做人类更擅长做的事:思考和想像。并且,如果你愿意去学习和成长,它将不会终结你解决新的、挑战性的问题的机会。
|
||||
然而,事实上,DevOps 是 “[一个跨学科的沟通实践,致力于研究构建、进化、和运营快速变化的弹性系统][3]”。 DevOps 意味着终结 “筒仓”,但并不专业化。它是受委托去做苦差事的自动化系统,解放你,让你去做人类更擅长做的事:思考和想像。并且,如果你愿意去学习和成长,它将不会终结你解决新的、挑战性的问题的机会。
|
||||
|
||||
DevOps 会让你失业吗?会的,但它同时给你提供了更好的工作。
|
||||
|
||||
@ -48,7 +48,7 @@ via: https://opensource.com/article/17/12/will-devops-steal-my-job
|
||||
|
||||
作者:[Chris Collins][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,71 +1,72 @@
|
||||
Red Hat 的去 Docker 化容器实践
|
||||
======
|
||||
|
||||
最近几年,开源项目Docker (已更名为[Moby][1]) 在容器普及化方面建树颇多。然而,它的功能特性不断集中到一个单一、庞大的系统,该系统由具有 root 权限运行的守护进程 `dockerd` 管控,这引发了人们的焦虑。对这些焦虑的阐述,具有代表性的是 Red Hat 公司的容器团队负责人 Dan Walsh 在 [KubeCon \+ CloudNativecon][3] 会议中的[演讲][2]。Walsh讲述了他的容器团队目前的工作方向,即使用一系列更小、可协同工作的组件替代 Docker。他的战斗口号是”拒绝臃肿的守护进程“,理由是与公认的 Unix 哲学相违背。
|
||||
最近几年,开源项目 Docker (已更名为[Moby][1]) 在容器普及化方面建树颇多。然而,它的功能特性不断集中到一个单一、庞大的系统,该系统由具有 root 权限运行的守护进程 `dockerd` 管控,这引发了人们的焦虑。对这些焦虑的阐述,具有代表性的是 Red Hat 公司的容器团队负责人 Dan Walsh 在 [KubeCon + CloudNativecon][3] 会议中的[演讲][2]。Walsh 讲述了他的容器团队目前的工作方向,即使用一系列更小、可协同工作的组件替代 Docker。他的战斗口号是“拒绝臃肿的守护进程”,理由是与公认的 Unix 哲学相违背。
|
||||
|
||||
### Docker 模块化实践
|
||||
|
||||
就像我们在[早期文献][4]中看到的那样,容器的基础操作不复杂:你首先拉取一个容器镜像,利用该镜像创建一个容器,最后启动这个容器。除此之外,你要懂得如何构建镜像并推送至镜像仓库。大多数人在上述这些步骤中使用 Docker,但其实 Docker 并不是唯一的选择,目前的可替换选择是 `rkt`。rkt引发了一系列标准的创建,包括运行时标准 CRI,镜像标准 OCI 及网络标准 CNI 等。遵守这些标准的后端,如 [CRI-O][5] 和 Docker,可以与 [Kubernetes][6] 为代表的管理软件协同工作。
|
||||
就像我们在[早期文献][4]中看到的那样,容器的基础操作不复杂:你首先拉取一个容器镜像,利用该镜像创建一个容器,最后启动这个容器。除此之外,你要懂得如何构建镜像并推送至镜像仓库。大多数人在上述这些步骤中使用 Docker,但其实 Docker 并不是唯一的选择,目前的可替换选择是 `rkt`。rkt 引发了一系列标准的创建,包括运行时标准 CRI,镜像标准 OCI 及网络标准 CNI 等。遵守这些标准的后端,如 [CRI-O][5] 和 Docker,可以与 [Kubernetes][6] 为代表的管理软件协同工作。
|
||||
|
||||
这些标准促使 Red Hat 公司开发了一系列部分实现标准的”核心应用“供 Kubernetes 使用,例如 CRI-O 运行时。但 Kubernetes 提供的功能不足以满足 Red Hat公司的 [OpenShift][7] 项目所需。开发者可能需要构建容器并推送至镜像仓库,实现这些操作需要额外的一整套方案。
|
||||
这些标准促使 Red Hat 公司开发了一系列实现了部分标准的“核心应用”供 Kubernetes 使用,例如 CRI-O 运行时。但 Kubernetes 提供的功能不足以满足 Red Hat 公司的 [OpenShift][7] 项目所需。开发者可能需要构建容器并推送至镜像仓库,实现这些操作需要额外的一整套方案。
|
||||
|
||||
事实上,目前市面上已有多种构建容器的工具。来自 Sysdig 公司的 Michael Ducy 在[分会场][8]中回顾了 Docker 本身之外的8种镜像构建工具,而这也很可能不是全部的工具。Ducy 将理想的构建工具定义如下:可以用可重现的方式创建最小化镜像。最小化镜像并不包含操作系统,只包含应用本身及其依赖。Ducy 认为 [Distroless][9], [Smith][10] 及 [Source-to-Image][11] 都是很好的工具,可用于构建最小化镜像。Ducy 将最小化镜像称为”微容器“。
|
||||
事实上,目前市面上已有多种构建容器的工具。来自 Sysdig 公司的 Michael Ducy 在[分会场][8]中回顾了 Docker 本身之外的 8 种镜像构建工具,而这也很可能不是全部。Ducy 将理想的构建工具定义如下:可以用可重现的方式创建最小化镜像。最小化镜像并不包含操作系统,只包含应用本身及其依赖。Ducy 认为 [Distroless][9], [Smith][10] 及 [Source-to-Image][11] 都是很好的工具,可用于构建最小化镜像。Ducy 将最小化镜像称为“微容器”。
|
||||
|
||||
可重现镜像是指构建多次结果保持不变的镜像。为达到这个目标,Ducy 表示应该使用“宣告式”而不是“命令式”的方式。考虑到 Ducy 来自 Chef 配置管理工具领域,你应该能理解他的意思。Ducy 给出了符合标准的几个不错的实现,包括 [Ansible 容器][12], [Habitat][13], [nixos-容器][14]和 [Simth][10] 等,但你需要了解这些项目对应的编程语言。Ducy 额外指出 Habitat 构建的容器自带管理功能,如果你已经使用了systemd, Docker 或 Kubernetes 等外部管理工具,Habitat 的管理功能可能是冗余的。除此之外,我们还要提从 Docker 和 [Buildah][16] 项目诞生的新项目 [BuildKit][15], 它是 Red Hat 公司 [Atomic 工程][17]的一个组件。
|
||||
<ruby>可重现镜像<rt>reproducible container</rt></ruby>是指构建多次结果保持不变的镜像。为达到这个目标,Ducy 表示应该使用“宣告式”而不是“命令式”的方式。考虑到 Ducy 来自 Chef 配置管理工具领域,你应该能理解他的意思。Ducy 给出了符合标准的几个不错的实现,包括 [Ansible 容器][12]、 [Habitat][13]、 [nixos-容器][14]和 [Simth][10] 等,但你需要了解这些项目对应的编程语言。Ducy 额外指出 Habitat 构建的容器自带管理功能,如果你已经使用了 systemd、 Docker 或 Kubernetes 等外部管理工具,Habitat 的管理功能可能是冗余的。除此之外,我们还要提到从 Docker 和 [Buildah][16] 项目诞生的新项目 [BuildKit][15], 它是 Red Hat 公司 [Atomic 工程][17]的一个组件。
|
||||
|
||||
### 使用Buildah构建容器
|
||||
### 使用 Buildah 构建容器
|
||||
|
||||
![\[Buildah logo\]][18] Buildah 名称显然来自于 Walsh 风趣的 [Boston 口音][19]; 该工具的品牌宣传中充满了 Boston 风格,例如 logo 使用了 Boston 梗犬(如图所示)。该项目的实现思路与 Ducy 不同:为了构建容器,与其被迫使用宣告式配置管理的方案,不如构建一些简单工具,结合你最喜欢的配置管理工具使用。这样你可以如愿的使用命令行,例如使用 `cp` 命令代替 Docker 的自定义指令 `COPY` 。除此之外,你可以使用如下工具为容器提供内容:1) 配置管理工具,例如Ansible 或 Puppet;2) 操作系统相关或编程语言相关的安装工具,例如 APT 和 pip; 3) 其它系统。下面展示了基于通用 shell 命令的容器构建场景,其中只需要使用 `make` 命令即可为容器安装可执行文件。
|
||||
![\[Buildah logo\]][18]
|
||||
|
||||
Buildah 名称显然来自于 Walsh 风趣的 [波士顿口音][19]; 该工具的品牌宣传中充满了波士顿风格,例如 logo 使用了波士顿梗犬(如图所示)。该项目的实现思路与 Ducy 不同:为了构建容器,与其被迫使用宣告式配置管理的方案,不如构建一些简单工具,结合你最喜欢的配置管理工具使用。这样你可以如愿的使用命令行,例如使用 `cp` 命令代替 Docker 的自定义指令 `COPY` 。除此之外,你可以使用如下工具为容器提供内容:1) 配置管理工具,例如Ansible 或 Puppet;2) 操作系统相关或编程语言相关的安装工具,例如 APT 和 pip; 3) 其它系统。下面展示了基于通用 shell 命令的容器构建场景,其中只需要使用 `make` 命令即可为容器安装可执行文件。
|
||||
|
||||
```
|
||||
# 拉取基础镜像, 类似 Dockerfile 中的 FROM 命令
|
||||
buildah from redhat
|
||||
# 拉取基础镜像, 类似 Dockerfile 中的 FROM 命令
|
||||
buildah from redhat
|
||||
|
||||
# 挂载基础镜像, 在其基础上工作
|
||||
crt=$(buildah mount)
|
||||
ap foo $crt
|
||||
make install DESTDIR=$crt
|
||||
|
||||
# 下一步,生成快照
|
||||
buildah commit
|
||||
# 挂载基础镜像, 在其基础上工作
|
||||
crt=$(buildah mount)
|
||||
ap foo $crt
|
||||
make install DESTDIR=$crt
|
||||
|
||||
# 下一步,生成快照
|
||||
buildah commit
|
||||
```
|
||||
|
||||
有趣的是,基于这个思路,你可以复用主机环境中的构建工具,无需在镜像中安装这些依赖,故可以构建非常微小的镜像。通常情况下,构建容器镜像时需要在容器中安装目标应用的构建依赖。例如,从源码构建需要容器中有编译器工具链,这是因为构建并不在主机环境进行。大量的容器也包含了 `ps` 和 `bash` 这样的 Unix 命令,对微容器而言其实是多余的。开发者经常忘记或无法从构建好的容器中移除一些依赖,增加了不必要的开销和攻击面。
|
||||
|
||||
Buildah的模块化方案能够以非 root 方式进行部分构建;但`mount` 命令仍然需要 `CAP_SYS_ADMIN` 或 `等同 root 访问权限` 的能力,有一个 [issue][20] 试图解决该问题。但 Buildah 与 Docker [都有][21]同样的限制[22],即无法在容器内构建容器。对于 Docker,你需要使用“特权”模式运行容器,一些特殊的环境很难满足这个条件,例如 [GitLab 持续集成][23];即使满足该条件,配置也特别[繁琐][24]。
|
||||
Buildah 的模块化方案能够以非 root 方式进行部分构建;但`mount` 命令仍然需要 `CAP_SYS_ADMIN`,有一个 [工单][20] 试图解决该问题。但 Buildah 与 Docker [都有][21]同样的[限制][22],即无法在容器内构建容器。对于 Docker,你需要使用“特权”模式运行容器,一些特殊的环境很难满足这个条件,例如 [GitLab 持续集成][23];即使满足该条件,配置也特别[繁琐][24]。
|
||||
|
||||
手动提交的步骤可以对创建容器快照的时间节点进行细粒度控制。Dockerfile 每一行都会创建一个新的快照;相比而言,Buildah 的提交检查点都是事先选择好的,这可以减少不必要的快照并节省磁盘空间。这也有利于隔离私钥或密码等敏感信息,避免其出现在公共镜像中。
|
||||
|
||||
Docker 构建的镜像是非标准的、仅供其自身使用;相比而言,Buildah 提供[多种输出格式][25],其中包括符合 OCI 标准的镜像。为向后兼容,Buildah 提供 一个 `使用Dockerfile构建` 的命令,即 [`buildah bud`][26], 它可以解析标准的 Dockerfile。Buildah 提供 `enter` 命令直接查看镜像内部信息,`run` 命令启动一个容器。实现这些功能仅使用了 `runc` 在内的标准工具,无需在后台运行一个“臃肿的守护进程”。
|
||||
Docker 构建的镜像是非标准的、仅供其自身使用;相比而言,Buildah 提供[多种输出格式][25],其中包括符合 OCI 标准的镜像。为向后兼容,Buildah 提供了一个“使用 Dockerfile 构建”的命令,即 [`buildah bud`][26], 它可以解析标准的 Dockerfile。Buildah 提供 `enter` 命令直接查看镜像内部信息,`run` 命令启动一个容器。实现这些功能仅使用了 `runc` 在内的标准工具,无需在后台运行一个“臃肿的守护进程”。
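下面是一个概念性的操作示意(镜像名与仓库地址均为假设,具体子命令和参数以 Buildah 的文档为准),大致展示了 `buildah bud`、`buildah run` 和 `buildah push` 的配合方式:

```
# 用现有的 Dockerfile 构建镜像(bud 即 build-using-dockerfile)
buildah bud -t myapp .

# 基于该镜像创建一个工作容器,并在其中执行命令检查内容
container=$(buildah from myapp)
buildah run "$container" -- ls /usr/local/bin

# 推送到远端仓库(此处的仓库地址仅为示例)
buildah push myapp docker://registry.example.com/myapp:latest
```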
|
||||
|
||||
Ducy 对 Buildah 表示质疑,认为采用非宣告性不利于可重现性。如果允许使用 shell 命令,可能产生很多预想不到的情况;例如,一个 shell 脚本下载了任意的可执行程序,但后续无法追溯文件的来源。shell 命令的执行受环境变量影响,执行结果可能大相径庭。与基于 shell 的工具相比,Puppet 或 Chef 这样的配置管理系统在理论上更加可靠,因为他们的设计初衷就是收敛于最终配置;事实上,可以通过配置管理系统调用 shell 命令。但 Walsh 对此提出反驳,认为已有的配置管理工具可以在 Buildah 的基础上工作,用户可以选择是否使用配置管理;这样更加符合“机制与策略分离”的经典 Unix 哲学。
|
||||
Ducy 对 Buildah 表示质疑,认为采用非宣告性不利于可重现性。如果允许使用 shell 命令,可能产生很多预想不到的情况;例如,一个 shell 脚本下载了任意的可执行程序,但后续无法追溯文件的来源。shell 命令的执行受环境变量影响,执行结果可能大相径庭。与基于 shell 的工具相比,Puppet 或 Chef 这样的配置管理系统在理论上更加可靠,因为它们的设计初衷就是收敛于最终配置;事实上,可以通过配置管理系统调用 shell 命令。但 Walsh 对此提出反驳,认为已有的配置管理工具可以在 Buildah 的基础上工作,用户可以选择是否使用配置管理;这样更加符合“机制与策略分离”的经典 Unix 哲学。
|
||||
|
||||
目前 Buildah 处于测试阶段,Red Hat 公司正努力将其集成到 OpenShift。我写这篇文章时已经测试过 Buildah,它缺少一些主题的文档,但基本可以稳定运行。尽管在错误处理方面仍有待提高,但它确实是一款值得你关注的容器工具。
|
||||
目前 Buildah 处于测试阶段,Red Hat 公司正努力将其集成到 OpenShift。我写这篇文章时已经测试过 Buildah,它缺少一些文档,但基本可以稳定运行。尽管在错误处理方面仍有待提高,但它确实是一款值得你关注的容器工具。
|
||||
|
||||
### 替换其它 Docker 命令行
|
||||
|
||||
Walsh 在其演讲中还简单介绍了 Redhat 公司 正在开发的另一个暂时叫做 [libpod][24] 的项目。项目名称来源于 Kubernetes 中的 “pod”, 在 Kubernetes 中 “pod” 用于分组主机内的容器,分享名字空间等。
|
||||
Walsh 在其演讲中还简单介绍了 Red Hat 公司正在开发的另一个暂时叫做 [libpod][24] 的项目。项目名称来源于 Kubernetes 中的 “pod”,在 Kubernetes 中 “pod” 用于将主机内的容器分组、共享命名空间等。
|
||||
|
||||
Libpod 提供 `kpod` 命令,用于直接检查和操作容器存储。Walsh 分析了该命令发挥作用的场景,例如 `dockerd` 停止响应或 Kubernetes 集群崩溃。基本上,`kpod` 独立地再次实现了 `docker` 命令行工具。`kpod ps` 返回运行中的容器列表,`kpod images` 返回镜像列表。事实上,[命令转换速查手册][28] 中给出了每一条 Docker 命令对应的 `kpod` 命令。
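下面给出一个概念性的示意(当时 `kpod` 仍在快速迭代、后来更名为 `podman`,具体子命令请以上面提到的 [命令转换速查手册][28] 为准):

```
kpod images          # 列出本地镜像,相当于 docker images
kpod ps -a           # 列出容器(包括已停止的),相当于 docker ps -a
kpod pull fedora     # 从仓库拉取镜像,相当于 docker pull fedora
kpod rm <容器ID>      # 删除容器,相当于 docker rm
```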
|
||||
|
||||
这种模块化实现的一个好处是,当你使用 `kpod run` 运行容器时,容器直接作为当前 shell 而不是 `dockerd` 的子进程启动。理论上,可以直接使用 systemd 启动容器,这样可以消除 `dockerd` 引入的冗余。这让[由套接字激活的容器][29]成为可能,但暂时基于 Docker 实现该特性[并不容易][30],[即使借助 Kubernetes][31] 也是如此。但我在测试过程中发现,使用 `kpod` 启动的容器有一些基础功能性缺失,具体而言是网络功能(!),相关实现在[活跃开发][32]过程中。
|
||||
这种模块化实现的一个好处是,当你使用 `kpod run` 运行容器时,容器直接作为当前 shell 而不是 `dockerd` 的子进程启动。理论上,可以直接使用 systemd 启动容器,这样可以消除 `dockerd` 引入的冗余。这让[由套接字激活的容器][29]成为可能,但暂时基于 Docker 实现该特性[并不容易][30],[即使借助 Kubernetes][31] 也是如此。但我在测试过程中发现,使用 `kpod` 启动的容器有一些基础功能性缺失,具体而言是网络功能(!),相关实现在[活跃开发][32]过程中。
|
||||
|
||||
我们最后提到的命令是 `push`。虽然上述命令已经足以满足本地使用容器的需求,但没有提到远程仓库,借助远程仓库开发者可以活跃地进行应用打包协作。仓库也是持续部署框架的核心组件。[skopeo][33] 项目用于填补这个空白,它是另一个 Atomic 成员项目,按其 `README` 文件描述,“包含容器镜像及镜像库的多种操作”。该项目的设计初衷是,在不用类似 `docker pull` 那样实际下载可能体积庞大的镜像的前提下,检查容器镜像的内容。Docker [拒绝加入][34] 检查功能,建议通过一个额外的工具实现该功能,这促成了 Skopeo 项目。除了`pull`,`push`,Skopeo现在还可以完成很多其它操作,例如在,不产生本地副本的情况下将镜像在不同的仓库中复制和转换。由于部分功能比较基础,可供其它项目使用,目前很大一部分 Skopeo 代码位于一个叫做 [containers/image][35] 的基础库。[Pivotal][36], Google的 [container-diff][37] ,`kpod push` 及 `buildah push` 都使用了该库。
|
||||
我们最后提到的命令是 `push`。虽然上述命令已经足以满足本地使用容器的需求,但没有提到远程仓库,借助远程仓库开发者可以活跃地进行应用打包协作。仓库也是持续部署框架的核心组件。[skopeo][33] 项目用于填补这个空白,它是另一个 Atomic 成员项目,按其 `README` 文件描述,“包含容器镜像及镜像库的多种操作”。该项目的设计初衷是,在不用类似 `docker pull` 那样实际去下载可能体积庞大的镜像的前提下,检查容器镜像的内容。Docker [拒绝加入][34] 检查功能,建议通过一个额外的工具实现该功能,这促成了 Skopeo 项目。除了 `pull`、`push`,Skopeo 现在还可以完成很多其它操作,例如在不产生本地副本的情况下将镜像在不同的仓库中复制和转换。由于部分功能比较基础,可供其它项目使用,目前很大一部分 Skopeo 代码位于一个叫做 [containers/image][35] 的基础库。[Pivotal][36]、 Google 的 [container-diff][37] 、`kpod push` 及 `buildah push` 都使用了该库。
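下面是两个概念性的 skopeo 用法示意(镜像名和目标仓库地址均为示例):

```
# 不拉取镜像本体,直接查看远端镜像的元数据
skopeo inspect docker://docker.io/library/alpine:latest

# 在两个仓库之间复制镜像,本地不保留副本(目标地址为假设)
skopeo copy docker://docker.io/library/alpine:latest \
    docker://registry.example.com/mirror/alpine:latest
```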
|
||||
|
||||
`kpod` 与 Kubernetes 并没有紧密的联系,故未来可能会更换名称(事实上,在本文刊发过程中,已经更名为 [`podman`][38]),毕竟 Red Hat 法务部门还没有明确其名称。该团队希望实现更多 pod 级别的命令,这样可以对多个容器进行操作,有点类似于 [`docker compose`][39] 实现的功能。但在这方面,[Kompose][40] 是更好的工具,可以通过 [复合 YAML 文件][41] 在 Kubernetes 集群中运行容器。按计划,我们不会实现类似于 [`swarm`] 的 Docker 命令,这部分功能最好由 Kubernetes 本身完成。
|
||||
`kpod` 与 Kubernetes 并没有紧密的联系,故未来可能会更换名称(事实上,在本文刊发过程中,已经更名为 [`podman`][38]),毕竟 Red Hat 法务部门还没有明确其名称。该团队希望实现更多 pod 级别的命令,这样可以对多个容器进行操作,有点类似于 [`docker compose`][39] 实现的功能。但在这方面,[Kompose][40] 是更好的工具,可以通过 [复合 YAML 文件][41] 在 Kubernetes 集群中运行容器。按计划,我们不会实现类似于 [`swarm`] 的 Docker 命令,这部分功能最好由 Kubernetes 本身完成。
|
||||
|
||||
目前看来,已经持续数年的 Docker 模块化努力终将硕果累累。但目前 `kpod` 处于快速迭代过程中,不太适合用于生产环境,但那些工具的众不同的设计理念让人很感兴趣,而且其中大部分的工具已经可以用于开发环境。目前只能通过编译源码的方式安装 libpod,但最终会提供各个发行版的二进制包。
|
||||
目前看来,已经持续数年的 Docker 模块化努力终将硕果累累。但目前 `kpod` 处于快速迭代过程中,不太适合用于生产环境,不过那些工具的与众不同的设计理念让人很感兴趣,而且其中大部分的工具已经可以用于开发环境。目前只能通过编译源码的方式安装 libpod,但最终会提供各个发行版的二进制包。
|
||||
|
||||
> 本文[最初发表][43]于 [Linux Weekly News][44]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
链接: https://anarc.at/blog/2017-12-20-docker-without-docker/
|
||||
via: https://anarc.at/blog/2017-12-20-docker-without-docker/
|
||||
|
||||
作者:[À propos de moi][a]
|
||||
作者:[Anarcat][a]
|
||||
译者:[pinewall](https://github.com/pinewall)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,7 +1,7 @@
|
||||
"Exit Traps" 让你的 Bash 脚本更稳固可靠
|
||||
“Exit Trap” 让你的 Bash 脚本更稳固可靠
|
||||
============================================================
|
||||
|
||||
有个简单实用的方针可以让你的 bash 脚本更稳健 -- 确保总是执行必要的收尾工作,哪怕是在发生异常的时候。要做到这一点,秘诀就是 bash 提供的一个叫做 EXIT 的伪信号,你可以 trap 它,当脚本因为任何原因退出时,相应的命令或函数就会执行。我们来看看它是如何工作的。
|
||||
有个简单实用的技巧可以让你的 bash 脚本更稳健 -- 确保总是执行必要的收尾工作,哪怕是在发生异常的时候。要做到这一点,秘诀就是 bash 提供的一个叫做 EXIT 的伪信号,你可以 [trap][1] 它,当脚本因为任何原因退出时,相应的命令或函数就会执行。我们来看看它是如何工作的。
|
||||
|
||||
基本的代码结构看起来像这样:
|
||||
|
||||
@ -13,7 +13,7 @@ function finish {
|
||||
trap finish EXIT
|
||||
```
|
||||
|
||||
你可以把任何你觉得务必要运行的代码放在这个 "finish" 函数里。一个很好的例子是:创建一个临时目录,事后再删除它。
|
||||
你可以把任何你觉得务必要运行的代码放在这个 `finish` 函数里。一个很好的例子是:创建一个临时目录,事后再删除它。
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
@ -24,10 +24,10 @@ function finish {
|
||||
trap finish EXIT
|
||||
```
|
||||
|
||||
这样,在你的核心代码中,你就可以在这个 `$scratch` 目录里下载、生成、操作中间或临时数据了。[[1]][2]
|
||||
这样,在你的核心代码中,你就可以在这个 `$scratch` 目录里下载、生成、操作中间或临时数据了。^[注1][2]
|
||||
|
||||
```
|
||||
# 下载所有版本的 linux 内核…… 为了科学!
|
||||
# 下载所有版本的 linux 内核…… 为了科学研究!
|
||||
for major in {1..4}; do
|
||||
for minor in {0..99}; do
|
||||
for patchlevel in {0..99}; do
|
||||
@ -45,7 +45,7 @@ cp "$scratch/frankenstein-linux.tar.bz2" "$1"
|
||||
# 脚本结束, scratch 目录自动被删除
|
||||
```
|
||||
|
||||
比较一下如果不用 trap ,你是怎么删除 scratch 目录的:
|
||||
比较一下如果不用 `trap` ,你是怎么删除 `scratch` 目录的:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
@ -61,11 +61,10 @@ rm -rf "$scratch"
|
||||
|
||||
这有什么问题么?很多:
|
||||
|
||||
* 如果运行出错导致脚本提前退出, scratch 目录及里面的内容不会被删除。这会导致资料泄漏,可能引发安全问题。
|
||||
* 如果运行出错导致脚本提前退出, `scratch` 目录及里面的内容不会被删除。这会导致资料泄漏,可能引发安全问题。
|
||||
* 如果这个脚本的设计初衷就是在脚本末尾以前退出,那么你必须手动复制粘贴 `rm` 命令到每一个出口。
|
||||
|
||||
* 如果这个脚本的设计初衷就是在末尾以前退出,那么你必须手动复制粘贴 rm 命令到每一个出口。
|
||||
|
||||
* 这也给维护带来了麻烦。如果今后在脚本某处添加了一个 exit ,你很可能就忘了加上删除操作 -- 从而制造潜在的安全漏洞。
|
||||
* 这也给维护带来了麻烦。如果今后在脚本某处添加了一个 `exit` ,你很可能就忘了加上删除操作 -- 从而制造潜在的安全漏洞。
|
||||
|
||||
### 无论如何,服务要在线
|
||||
|
||||
@ -93,28 +92,25 @@ function finish {
|
||||
trap finish EXIT
|
||||
# 关闭 mongod 服务
|
||||
sudo service mongod stop
|
||||
# (如果 mongod 配置了 fork ,比如 replica set ,你可能需要执行 "sudo killall --wait /usr/bin/mongod")
|
||||
# (如果 mongod 配置了 fork ,比如 replica set ,你可能需要执行 “sudo killall --wait /usr/bin/mongod”)
|
||||
```
|
||||
|
||||
### 控制开销
|
||||
|
||||
有一种情况特别能体现 EXIT trap 的价值:你要在脚本运行过程中创建一些临时的付费资源,结束时要确保把它们释放掉。比如你在 AWS (Amazon Web Services) 上工作,要在脚本中创建一个镜像。
|
||||
有一种情况特别能体现 EXIT `trap` 的价值:如果你的脚本运行过程中需要初始化一些成本高昂的资源,结束时要确保把它们释放掉。比如你在 AWS (Amazon Web Services) 上工作,要在脚本中创建一个镜像。
|
||||
|
||||
(名词解释: 在亚马逊云上的运行的服务器叫实例。实例从镜像创建而来,镜像通常被称为 "AMIs" 或 "images" 。AMI 相当于某个特殊时间点的服务器快照。)
|
||||
(名词解释: 在亚马逊云上运行的服务器叫“[实例][3]”。实例从<ruby>亚马逊机器镜像<rt>Amazon Machine Image</rt></ruby>创建而来,通常被称为 “AMI” 或 “镜像” 。AMI 相当于某个特殊时间点的服务器快照。)
|
||||
|
||||
我们可以这样创建一个自定义的 AMI :
|
||||
|
||||
1. 基于一个基准 AMI 运行(创建)一个实例。
|
||||
|
||||
1. 基于一个基准 AMI 运行一个实例(例如,启动一个服务器)。
|
||||
2. 在实例中手动或运行脚本来做一些修改。
|
||||
|
||||
3. 用修改后的实例创建一个镜像。
|
||||
|
||||
4. 如果不再需要这个实例,可以将其删除。
|
||||
|
||||
最后一步**相当重要**。如果你的脚本没有把实例删除掉,它会一直运行并计费。(到月底你的账单让你大跌眼镜时,恐怕哭都来不及了!)
|
||||
|
||||
如果把 AMI 的创建封装在脚本里,我们就可以利用 trap EXIT 来删除实例了。我们还可以用上 EC2 的命令行工具:
|
||||
如果把 AMI 的创建封装在脚本里,我们就可以利用 `trap` EXIT 来删除实例了。我们还可以用上 EC2 的命令行工具:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
@ -137,7 +133,7 @@ ec2-run-instances "$ami" > "$scratch/run-instance"
|
||||
instance=$(grep '^INSTANCE' "$scratch/run-instance" | cut -f 2)
|
||||
```
|
||||
|
||||
脚本执行到这里,实例(EC2 服务器)已经开始运行 [[2]][4]。接下来你可以做任何事情:在实例中安装软件,修改配置文件等,然后为最终版本创建一个镜像。实例会在脚本结束时被删除 -- 即使脚本因错误而提前退出。(请确保实例创建成功后再运行业务代码。)
|
||||
脚本执行到这里,实例(EC2 服务器)已经开始运行 ^[注2][4]。接下来你可以做任何事情:在实例中安装软件,修改配置文件等,然后为最终版本创建一个镜像。实例会在脚本结束时被删除 -- 即使脚本因错误而提前退出。(请确保实例创建成功后再运行业务代码。)
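上面的片段没有展示收尾部分;一种可能的写法大致如下(概念示意,沿用上文的 `$instance` 和 `$scratch` 变量,命令名称以你实际使用的 EC2 命令行工具为准):

```
function finish {
    # 删除临时实例,避免脚本退出后继续计费
    ec2-terminate-instances "$instance"
    # 顺便清理临时目录
    rm -rf "$scratch"
}
trap finish EXIT
```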
|
||||
|
||||
### 更多应用
|
||||
|
||||
@ -145,22 +141,17 @@ instance=$(grep '^INSTANCE' "$scratch/run-instance" | cut -f 2)
|
||||
|
||||
### 尾注
|
||||
|
||||
1. mktemp 的选项 "-t" 在 Linux 上可选,在 OS X 上必需。带上此选项可以让你的脚本有更好的可移植性。
|
||||
- 注1. `mktemp` 的选项 `-t` 在 Linux 上是可选的,在 OS X 上是必需的。带上此选项可以让你的脚本有更好的可移植性。
|
||||
- 注2. 如果只是为了获取实例 ID ,我们不用创建文件,直接写成 `instance=$(ec2-run-instances "$ami" | grep '^INSTANCE' | cut -f 2)` 就可以。但把输出写入文件可以记录更多有用信息,便于调试 ,代码可读性也更强。
|
||||
|
||||
2. 如果只是为了获取实例 ID ,我们不用创建文件,直接写成 `instance=$(ec2-run-instances "$ami" | grep '^INSTANCE' | cut -f 2)` 就可以。但把输出写入文件可以记录更多有用信息,便于 debug ,代码可读性也更强。
|
||||
作者简介:美国加利福尼亚旧金山的作家,软件工程师,企业家。[Powerful Python][5] 的作者,他的 [blog][6]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
美国加利福尼亚旧金山的作家,软件工程师,企业家
|
||||
|
||||
Author of [Powerful Python][5] and its [blog][6].
|
||||
via: http://redsymbol.net/articles/bash-exit-traps/
|
||||
|
||||
作者:[aaron maxwell ][a]
|
||||
作者:[aaron maxwell][a]
|
||||
译者:[Dotcra](https://github.com/Dotcra)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,122 @@
|
||||
两款 Linux 桌面端可用的科学计算器
|
||||
======
|
||||
|
||||
> 如果你想找个高级的桌面计算器的话,你可以看看开源软件,以及一些其它有趣的工具。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OpenData_CityNumbers.png?itok=lC03ce76)
|
||||
|
||||
每个 Linux 桌面环境都至少带有一个功能简单的桌面计算器,但大多数计算器只能进行一些简单的计算。
|
||||
|
||||
幸运的是,还是有例外的:不仅可以做得比开平方根和一些三角函数还多,而且还很简单。这里将介绍两款强大的计算器,外加一大堆额外的功能。
|
||||
|
||||
### SpeedCrunch
|
||||
|
||||
[SpeedCrunch][1] 是一款高精度科学计算器,有着简明的 Qt5 图形界面,并且强烈依赖键盘。
|
||||
|
||||
![SpeedCrunch graphical interface][3]
|
||||
|
||||
*SpeedCrunch 在工作时*
|
||||
|
||||
它支持单位,并且可用在所有函数中。
|
||||
|
||||
例如,
|
||||
|
||||
```
|
||||
2 * 10^6 newton / (meter^2)
|
||||
```
|
||||
|
||||
你可以得到:
|
||||
|
||||
```
|
||||
= 2000000 pascal
|
||||
```
|
||||
|
||||
SpeedCrunch 默认会将结果转化为国际标准单位,但还是可以用 `in` 命令转换:
|
||||
|
||||
例如:
|
||||
|
||||
```
|
||||
3*10^8 meter / second in kilo meter / hour
|
||||
```
|
||||
|
||||
结果是:
|
||||
|
||||
```
|
||||
= 1080000000 kilo meter / hour
|
||||
```
|
||||
|
||||
`F5` 键可以将所有结果转为科学计数法(`1.08e9 kilo meter / hour`),`F2` 键可以只将那些很大的数或很小的数转为科学计数法。更多选项可以在配置页面找到。
|
||||
|
||||
可用函数的列表看上去非常壮观。它可以用在 Linux、Windows、macOS 上。许可证是 GPLv2,你可以在 [Bitbucket][4] 上得到它的源码。
|
||||
|
||||
### Qalculate!
|
||||
|
||||
[Qalculate!][5](有感叹号)有一段长而复杂的历史。
|
||||
|
||||
这个项目给了我们一个强大的库,而这个库可以被其它程序使用(在 Plasma 桌面中,krunner 可以用它来计算),以及一个用 GTK3 搭建的图形界面。它允许你转换单位、处理物理常量、绘制图形、使用复数、矩阵以及向量、选择任意精度,等等。
|
||||
|
||||
![Qalculate! Interface][7]
|
||||
|
||||
*在 Qalculate! 中寻找物理常量*
|
||||
|
||||
在单位的使用方面,Qalculate! 会比 SpeedCrunch 更加直观,而且可以识别一些常用前缀。你有听说过 exapascal(艾帕斯卡)这个压力单位吗?反正我没有(太阳的中心压强大概是 `~26 PPa`),但 Qalculate! 可以准确理解 `1 EPa` 的意思。同时,Qalculate! 可以更加灵活地处理语法错误,所以你不需要太担心括号的问题:如果没有歧义,Qalculate! 会直接给出正确答案。
|
||||
|
||||
有一段时间这个项目看上去被遗弃了,但在 2016 年它又强势回归,一年里更新了 10 个版本。它的许可证是 GPLv2(源码在 [GitHub][8] 上),提供 Linux、Windows、macOS 的版本。
|
||||
|
||||
### 更多计算器
|
||||
|
||||
#### ConvertAll
|
||||
|
||||
好吧,这不是“计算器”,但这个程序非常好用。
|
||||
|
||||
大部分单位转换器只是一个大的基本单位列表以及一大堆基本组合,但 [ConvertAll][9] 与它们不一样。有试过把光年转换为英尺每秒吗?不管它们说不说得通,只要你想转换任何种类的单位,ConvertAll 就是你要的工具。
|
||||
|
||||
只需要在相应的输入框内输入转换前和转换后的单位:如果单位相容,你会直接得到答案。
|
||||
|
||||
主程序是在 PyQt5 上搭建的,但也有 [JavaScript 的在线版本][10]。
|
||||
|
||||
#### 带有单位包的 (wx)Maxima
|
||||
|
||||
有时候(好吧,很多时候)一款桌面计算器是不够你用的,然后你需要更多的原力。
|
||||
|
||||
[Maxima][11] 是一款计算机代数系统(LCTT 译注:进行符号运算的软件。这种系统的要件是数学表示式的符号运算),你可以用它计算导数、积分、方程、特征值和特征向量、泰勒级数、拉普拉斯变换与傅立叶变换,以及任意精度的数字计算、二维或三维图像……列出这些都够我们写几页纸的了。
|
||||
|
||||
[wxMaxima][12] 是一个设计精湛的 Maxima 的图形前端,它简化了许多 Maxima 的选项,但并不会影响其它。在 Maxima 的基础上,wxMaxima 还允许你创建 “笔记本”,你可以在上面写一些笔记,保存你的图像等。其中一项 (wx)Maxima 最惊艳的功能是它可以处理尺寸单位。
|
||||
|
||||
在提示符只需要输入:
|
||||
|
||||
```
|
||||
load("unit")
|
||||
```
|
||||
|
||||
按 `Shift+Enter`,等几秒钟的时间,然后你就可以开始了。
|
||||
|
||||
默认情况下,单位包使用基本的 MKS 单位,但如果你愿意,比如想用 `N`(牛顿)而不是 `kg*m/s^2` 作为单位,只需要输入:`setunits(N)`。
|
||||
|
||||
Maxima 的帮助(也可以在 wxMaxima 的帮助菜单中找到)会给你更多信息。
|
||||
|
||||
你使用这些程序吗?你知道还有其它好的科学、工程用途的桌面计算器或者其它相关的计算器吗?在评论区里告诉我们吧!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/scientific-calculators-linux
|
||||
|
||||
作者:[Ricardo Berlasso][a]
|
||||
译者:[zyk2290](https://github.com/zyk2290)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/rgb-es
|
||||
[1]:http://speedcrunch.org/index.html
|
||||
[2]:/file/382511
|
||||
[3]:https://opensource.com/sites/default/files/u128651/speedcrunch.png "SpeedCrunch graphical interface"
|
||||
[4]:https://bitbucket.org/heldercorreia/speedcrunch
|
||||
[5]:https://qalculate.github.io/
|
||||
[6]:/file/382506
|
||||
[7]:https://opensource.com/sites/default/files/u128651/qalculate-600.png "Qalculate! Interface"
|
||||
[8]:https://github.com/Qalculate
|
||||
[9]:http://convertall.bellz.org/
|
||||
[10]:http://convertall.bellz.org/js/
|
||||
[11]:http://maxima.sourceforge.net/
|
||||
[12]:https://andrejv.github.io/wxmaxima/
|
@ -1,29 +1,32 @@
|
||||
如何使用 Npm 管理 NodeJS 包
|
||||
如何使用 npm 管理 NodeJS 包
|
||||
=====
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/01/npm-720x340.png)
|
||||
|
||||
前一段时间,我们发布了一个[**使用 PIP 管理 Python 包**][3]的指南。今天,我们将讨论如何使用 Npm 管理 NodeJS 包。NPM 是最大的软件注册中心,包含 600,000 多个包。每天,世界各地的开发人员通过 npm 共享和下载软件包。在本指南中,我将解释使用 npm 基础知识,例如安装包(本地和全局)、安装特定版本的包、更新、删除和管理 NodeJS 包等等。
|
||||
前一段时间,我们发布了一个[使用 pip 管理 Python 包][3]的指南。今天,我们将讨论如何使用 npm 管理 NodeJS 包。npm 是最大的软件注册中心,包含 600,000 多个包。每天,世界各地的开发人员通过 npm 共享和下载软件包。在本指南中,我将解释使用 npm 基础知识,例如安装包(本地和全局)、安装特定版本的包、更新、删除和管理 NodeJS 包等等。
|
||||
|
||||
### 使用 Npm 管理 NodeJS 包
|
||||
|
||||
##### 安装 NPM
|
||||
### 安装 npm
|
||||
|
||||
由于 npm 是用 NodeJS 编写的,我们需要安装 NodeJS 才能使用 npm。要在不同的 Linux 发行版上安装 NodeJS,请参考下面的链接。
|
||||
|
||||
- [在 Linux 上安装 NodeJS](https://www.ostechnix.com/install-node-js-linux/)
|
||||
|
||||
检查 node 安装的位置:
|
||||
|
||||
```
|
||||
$ which node
|
||||
/home/sk/.nvm/versions/node/v9.4.0/bin/node
|
||||
```
|
||||
|
||||
检查它的版本:
|
||||
|
||||
```
|
||||
$ node -v
|
||||
v9.4.0
|
||||
```
|
||||
|
||||
进入 Node 交互式解释器:
|
||||
|
||||
```
|
||||
$ node
|
||||
> .help
|
||||
@ -38,43 +41,46 @@ $ node
|
||||
```
|
||||
|
||||
检查 npm 安装的位置:
|
||||
|
||||
```
|
||||
$ which npm
|
||||
/home/sk/.nvm/versions/node/v9.4.0/bin/npm
|
||||
```
|
||||
|
||||
还有版本:
|
||||
|
||||
```
|
||||
$ npm -v
|
||||
5.6.0
|
||||
```
|
||||
|
||||
棒极了!Node 和 NPM 已安装并能工作!正如你可能已经注意到,我已经在我的 $HOME 目录中安装了 NodeJS 和 NPM,这样是为了避免在全局模块时出现权限问题。这是 NodeJS 团队推荐的方法。
|
||||
棒极了!Node 和 npm 已安装好!正如你可能已经注意到的,我把 NodeJS 和 npm 安装在了我的 `$HOME` 目录中,这样是为了避免安装全局模块时出现权限问题。这是 NodeJS 团队推荐的方法。
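从上面 `which node` 的输出路径可以看出,这里是通过 nvm 安装在用户主目录下的;一种可能的做法大致如下(假设已按官方说明装好 nvm,版本号仅为示例):

```
nvm install 9.4.0     # 安装指定版本的 NodeJS,自带对应版本的 npm
nvm use 9.4.0         # 在当前 shell 中切换到该版本
node -v && npm -v     # 确认版本
```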
|
||||
|
||||
那么,让我们继续看看如何使用 npm 管理 NodeJS 模块(或包)。
|
||||
|
||||
##### 安装 NodeJS 模块
|
||||
### 安装 NodeJS 模块
|
||||
|
||||
NodeJS 模块可以安装在本地或全局(系统范围)。现在我将演示如何在本地安装包。
|
||||
NodeJS 模块可以安装在本地或全局(系统范围)。现在我将演示如何在本地安装包(LCTT 译注:即将包安装到一个 NodeJS 项目当中,所以下面会先创建一个空项目做演示)。
|
||||
|
||||
**在本地安装包**
|
||||
#### 在本地安装包
|
||||
|
||||
为了在本地管理包,我们通常使用 **package.json** 文件来管理。
|
||||
为了在本地管理包,我们通常使用 `package.json` 文件来管理。
|
||||
|
||||
首先,让我们创建我们的项目目录。
|
||||
|
||||
```
|
||||
$ mkdir demo
|
||||
```
|
||||
```
|
||||
$ cd demo
|
||||
```
|
||||
|
||||
在项目目录中创建一个 package.json 文件。为此,运行:
|
||||
在项目目录中创建一个 `package.json` 文件。为此,运行:
|
||||
|
||||
```
|
||||
$ npm init
|
||||
```
|
||||
|
||||
输入你的包的详细信息,例如名称,版本,作者,github 页面等等,或者按下 ENTER 键接受默认值并键入 **YES** 确认。
|
||||
输入你的包的详细信息,例如名称、版本、作者、GitHub 页面等等,或者按下回车键接受默认值并键入 `yes` 确认。
|
||||
|
||||
```
|
||||
This utility will walk you through creating a package.json file.
|
||||
It only covers the most common items, and tries to guess sensible defaults.
|
||||
@ -112,19 +118,22 @@ About to write to /home/sk/demo/package.json:
|
||||
Is this ok? (yes) yes
|
||||
```
|
||||
|
||||
上面的命令初始化你的项目并创建了 package.json 文件。
|
||||
上面的命令初始化你的项目并创建了 `package.json` 文件。
|
||||
|
||||
你也可以使用命令以非交互式方式执行此操作:
|
||||
|
||||
```
|
||||
npm init --y
|
||||
```
|
||||
|
||||
现在让我们安装名为 [**commander**][2] 的包。
|
||||
现在让我们安装名为 [commander][2] 的包。
|
||||
|
||||
```
|
||||
$ npm install commander
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
npm notice created a lockfile as package-lock.json. You should commit this file.
|
||||
npm WARN demo@1.0.0 No repository field.
|
||||
@ -133,9 +142,10 @@ npm WARN demo@1.0.0 No repository field.
|
||||
added 1 package in 2.519s
|
||||
```
|
||||
|
||||
这将在项目的根目录中创建一个名为 **" node_modules"** 的目录(如果它不存在的话),并在其中下载包。
|
||||
这将在项目的根目录中创建一个名为 `node_modules` 的目录(如果它不存在的话),并在其中下载包。
|
||||
|
||||
让我们检查 `pachage.json` 文件。
|
||||
|
||||
让我们检查 pachage.json 文件。
|
||||
```
|
||||
$ cat package.json
|
||||
{
|
||||
@ -148,30 +158,33 @@ $ cat package.json
|
||||
},
|
||||
"author": "",
|
||||
"license": "ISC",
|
||||
**"dependencies": {**
|
||||
**"commander": "^2.13.0"**
|
||||
"dependencies": {
|
||||
"commander": "^2.13.0"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
你会看到添加了依赖文件,版本号前面的插入符号 ( **^** ) 表示在安装时,npm 将取出它可以找到的最高版本的包。
|
||||
你会看到添加了依赖文件,版本号前面的插入符号 ( `^` ) 表示在安装时,npm 将取出它可以找到的最高版本的包。
|
||||
|
||||
```
|
||||
$ ls node_modules/
|
||||
commander
|
||||
```
|
||||
|
||||
package.json 文件的优点是,如果你的项目目录中有 package.json 文件,只需键入 "npm install",那么 npm 将查看文件中列出的依赖关系并下载它们。你甚至可以与其他开发人员共享它或将其推送到你的 GitHub 仓库。因此,当他们键入 “npm install” 时,他们将获得你拥有的所有相同的包。
|
||||
`package.json` 文件的优点是,如果你的项目目录中有 `package.json` 文件,只需键入 `npm install`,那么 `npm` 将查看文件中列出的依赖关系并下载它们。你甚至可以与其他开发人员共享它或将其推送到你的 GitHub 仓库。因此,当他们键入 `npm install` 时,他们将获得你拥有的所有相同的包。
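下面是一个概念性的协作流程示意(仓库地址为假设):拿到带有 `package.json` 的项目之后,一条 `npm install` 就能装齐所有依赖。

```
git clone https://github.com/example/demo.git   # 仓库地址仅为示例
cd demo
npm install          # 读取 package.json 中的依赖并全部下载
ls node_modules      # 依赖都会出现在这里
```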
|
||||
|
||||
你也可能会注意到另一个名为 **package-lock.json** 的文件,该文件确保在项目安装的所有系统上都保持相同的依赖关系。
|
||||
你也可能会注意到另一个名为 `package-lock.json` 的文件,该文件确保在项目安装的所有系统上都保持相同的依赖关系。
|
||||
|
||||
要在你的程序中使用已安装的包,使用实际代码在项目目录中创建一个 `index.js`(或者其他任何名称)文件,然后使用以下命令运行它:
|
||||
|
||||
要在你的程序中使用已安装的包,使用实际代码在项目目录中创建一个 **index.js**(或者其他任何名称)文件,然后使用以下命令运行它:
|
||||
```
|
||||
$ node index.js
|
||||
```
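下面是一个可能的 `index.js` 写法(概念示意,基于上面安装的 commander v2 的常见用法,API 细节请以其文档为准):

```
# 用 here-doc 创建一个最小的 index.js,然后运行它
cat > index.js <<'EOF'
const program = require('commander');

program
  .version('0.1.0')
  .option('-g, --greet <name>', 'greet someone')
  .parse(process.argv);

if (program.greet) {
  console.log('Hello, ' + program.greet + '!');
}
EOF

node index.js --greet ostechnix   # 输出:Hello, ostechnix!
```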
|
||||
|
||||
**在全局安装包**
|
||||
#### 在全局安装包
|
||||
|
||||
如果你想使用一个包作为命令行工具,那么最好在全局安装它。这样,无论你的当前目录是哪个目录,它都能正常工作。
|
||||
|
||||
```
|
||||
$ npm install async -g
|
||||
+ async@2.6.0
|
||||
@ -179,23 +192,27 @@ added 2 packages in 4.695s
|
||||
```
|
||||
|
||||
或者
|
||||
|
||||
```
|
||||
$ npm install async --global
|
||||
```
|
||||
|
||||
要安装特定版本的包,我们可以:
|
||||
|
||||
```
|
||||
$ npm install async@2.6.0 --global
|
||||
```
|
||||
|
||||
##### 更新 NodeJS 模块
|
||||
### 更新 NodeJS 模块
|
||||
|
||||
要更新本地包,转到 `package.json` 所在的项目目录并运行:
|
||||
|
||||
要更新本地包,转到 package.json 所在的项目目录并运行:
|
||||
```
|
||||
$ npm update
|
||||
```
|
||||
|
||||
然后,运行以下命令确保所有包都更新了。
|
||||
|
||||
```
|
||||
$ npm outdated
|
||||
```
|
||||
@ -203,6 +220,7 @@ $ npm outdated
|
||||
如果没有需要更新的,那么它返回空。
|
||||
|
||||
要找出哪一个全局包需要更新,运行:
|
||||
|
||||
```
|
||||
$ npm outdated -g --depth=0
|
||||
```
|
||||
@ -210,32 +228,37 @@ $ npm outdated -g --depth=0
|
||||
如果没有输出,意味着所有包都已更新。
|
||||
|
||||
更新单个全局包,运行:
|
||||
|
||||
```
|
||||
$ npm update -g <package-name>
|
||||
```
|
||||
|
||||
更新所有的全局包,运行:
|
||||
|
||||
```
|
||||
$ npm update -g <package>
|
||||
$ npm update -g
|
||||
```
|
||||
|
||||
##### 列出 NodeJS 模块
|
||||
### 列出 NodeJS 模块
|
||||
|
||||
列出本地包,转到项目目录并运行:
|
||||
|
||||
```
|
||||
$ npm list
|
||||
demo@1.0.0 /home/sk/demo
|
||||
└── commander@2.13.0
|
||||
```
|
||||
|
||||
如你所见,我在本地安装了 "commander" 这个包。
|
||||
如你所见,我在本地安装了 `commander` 这个包。
|
||||
|
||||
要列出全局包,从任何位置都可以运行以下命令:
|
||||
|
||||
```
|
||||
$ npm list -g
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
/home/sk/.nvm/versions/node/v9.4.0/lib
|
||||
├─┬ async@2.6.0
|
||||
@ -252,7 +275,8 @@ $ npm list -g
|
||||
|
||||
该命令将列出所有模块及其依赖关系。
|
||||
|
||||
要仅仅列出顶级模块,使用 -depth=0 选项:
|
||||
要仅仅列出顶级模块,使用 `-depth=0` 选项:
|
||||
|
||||
```
|
||||
$ npm list -g --depth=0
|
||||
/home/sk/.nvm/versions/node/v9.4.0/lib
|
||||
@ -260,43 +284,48 @@ $ npm list -g --depth=0
|
||||
└── npm@5.6.0
|
||||
```
|
||||
|
||||
##### 寻找 NodeJS 模块
|
||||
#### 寻找 NodeJS 模块
|
||||
|
||||
要搜索一个模块,使用 `npm search` 命令:
|
||||
|
||||
要搜索一个模块,使用 "npm search" 命令:
|
||||
```
|
||||
npm search <search-string>
|
||||
```
|
||||
|
||||
例如:
|
||||
|
||||
```
|
||||
$ npm search request
|
||||
```
|
||||
|
||||
该命令将显示包含搜索字符串 "request" 的所有模块。
|
||||
该命令将显示包含搜索字符串 `request` 的所有模块。
|
||||
|
||||
##### 移除 NodeJS 模块
|
||||
|
||||
要删除本地包,转到项目目录并运行以下命令,这会从 **node_modules** 目录中删除包:
|
||||
要删除本地包,转到项目目录并运行以下命令,这会从 `node_modules` 目录中删除包:
|
||||
|
||||
```
|
||||
$ npm uninstall <package-name>
|
||||
```
|
||||
|
||||
要从 **package.json** 文件中的依赖关系中删除它,使用如下所示的 **save** 标志:
|
||||
要从 `package.json` 文件中的依赖关系中删除它,使用如下所示的 `save` 选项:
|
||||
|
||||
```
|
||||
$ npm uninstall --save <package-name>
|
||||
|
||||
```
|
||||
|
||||
要删除已安装的全局包,运行:
|
||||
|
||||
```
|
||||
$ npm uninstall -g <package>
|
||||
```
|
||||
|
||||
##### 清楚 NPM 缓存
|
||||
### 清除 npm 缓存
|
||||
|
||||
默认情况下,NPM 在安装包时,会将其副本保存在 $HOME 目录中名为 npm 的缓存文件夹中。所以,你可以在下次安装时不必再次下载。
|
||||
默认情况下,npm 在安装包时,会将其副本保存在 `$HOME` 目录中名为 `.npm` 的缓存文件夹中。所以,你可以在下次安装时不必再次下载。
|
||||
|
||||
查看缓存模块:
|
||||
|
||||
```
|
||||
$ ls ~/.npm
|
||||
```
|
||||
@ -304,28 +333,33 @@ $ ls ~/.npm
|
||||
随着时间的推移,缓存文件夹会充斥着大量旧的包。所以不时清理缓存会好一些。
|
||||
|
||||
从 npm@5 开始,npm 缓存可以从数据损坏(corruption)问题中自行修复,并且保证从缓存中提取的数据有效。如果你想确保一切都一致,运行:
|
||||
|
||||
```
|
||||
$ npm cache verify
|
||||
```
|
||||
|
||||
清楚整个缓存,运行:
|
||||
清除整个缓存,运行:
|
||||
|
||||
```
|
||||
$ npm cache clean --force
|
||||
```
|
||||
|
||||
##### 查看 NPM 配置
|
||||
### 查看 npm 配置
|
||||
|
||||
要查看 npm 配置,键入:
|
||||
|
||||
要查看 NPM 配置,键入:
|
||||
```
|
||||
$ npm config list
|
||||
```
|
||||
|
||||
或者
|
||||
或者:
|
||||
|
||||
```
|
||||
$ npm config ls
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
; cli configs
|
||||
metrics-registry = "https://registry.npmjs.org/"
|
||||
@ -339,26 +373,25 @@ user-agent = "npm/5.6.0 node/v9.4.0 linux x64"
|
||||
```
|
||||
|
||||
要显示当前的全局位置:
|
||||
|
||||
```
|
||||
$ npm config get prefix
|
||||
/home/sk/.nvm/versions/node/v9.4.0
|
||||
```
|
||||
|
||||
好吧,这就是全部了。我们刚才介绍的只是基础知识,NPM 是一个广泛话题。有关更多详细信息,参阅 [**NPM Getting Started**][3] 指南。
|
||||
好吧,这就是全部了。我们刚才介绍的只是基础知识,npm 是一个广泛话题。有关更多详细信息,参阅 [**NPM Getting Started**][3] 指南。
|
||||
|
||||
希望这对你有帮助。更多好东西即将来临,敬请关注!
|
||||
|
||||
干杯!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/manage-nodejs-packages-using-npm/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,24 +1,26 @@
|
||||
如何使用 virsh 命令创建、还原和删除 KVM 虚拟机快照
|
||||
======
|
||||
[![KVM-VirtualMachine-Snapshot][1]![KVM-VirtualMachine-Snapshot][2]][2]
|
||||
|
||||
在虚拟化平台上进行系统管理工作时经常在开始主要活动比如部署补丁和代码前先设置一个虚拟机快照。
|
||||
![KVM-VirtualMachine-Snapshot][2]
|
||||
|
||||
在虚拟化平台上进行系统管理工作时,经常需要在开始重大操作比如部署补丁和代码前先设置一个虚拟机<ruby>快照<rt>snapshot</rt></ruby>。
|
||||
|
||||
虚拟机**快照**是特定时间点的虚拟机磁盘的副本。换句话说,快照保存了给定的时间点虚拟机的状态和数据。
|
||||
|
||||
### 我们可以在哪里使用虚拟机快照?
|
||||
|
||||
如果你在使用基于 **KVM** 的**虚拟机管理程序**,那么可以使用 virsh 命令获取虚拟机或域快照。快照在一种情况下变得非常有用,当你已经在虚拟机上安装或应用了最新的补丁,但是由于某些原因,虚拟机上的程序变得不稳定,程序团队想要还原所有的更改和补丁。如果你在应用补丁之前设置了虚拟机的快照,那么可以使用快照将虚拟机恢复到之前的状态。
|
||||
如果你在使用基于 **KVM** 的**虚拟机管理程序**,那么可以使用 `virsh` 命令获取虚拟机或域快照。快照在一种情况下变得非常有用,当你已经在虚拟机上安装或应用了最新的补丁,但是由于某些原因,虚拟机上的程序变得不稳定,开发团队想要还原所有的更改和补丁。如果你在应用补丁之前设置了虚拟机的快照,那么可以使用快照将虚拟机恢复到之前的状态。
|
||||
|
||||
**注意:**我们只能对磁盘格式为 **Qcow2** 的虚拟机进行快照,并且 kvm 的 `virsh` 命令不支持 raw 磁盘格式,请使用以下命令将原始磁盘格式转换为 qcow2。
|
||||
|
||||
**注意:**我们只能设置磁盘格式为 **Qcow2** 的虚拟机的快照,并且 kvm virsh 命令不支持 raw 磁盘格式,请使用以下命令将原始磁盘格式转换为 qcow2。
|
||||
```
|
||||
# qemu-img convert -f raw -O qcow2 image-name.img image-name.qcow2
|
||||
|
||||
```
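转换之前,可以先用 `qemu-img info` 确认磁盘镜像当前的格式(路径仅为示例),输出中的 `file format` 字段即为当前格式:

```
# qemu-img info /var/lib/libvirt/images/image-name.img
image: /var/lib/libvirt/images/image-name.img
file format: raw
...
```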
|
||||
|
||||
### 创建 KVM 虚拟机(域)快照
|
||||
|
||||
我假设 KVM 管理程序已经在 CentOS 7 / RHEL 7 机器上配置好了,并且有虚拟机正在运行。我们可以使用下面的 virsh 命令列出虚拟机管理程序中的所有虚拟机,
|
||||
我假设 KVM 管理程序已经在 CentOS 7 / RHEL 7 机器上配置好了,并且有虚拟机正在运行。我们可以使用下面的 `virsh` 命令列出虚拟机管理程序中的所有虚拟机,
|
||||
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh list --all
|
||||
Id Name State
|
||||
@ -29,35 +31,33 @@
|
||||
103 overcloud-compute1 running
|
||||
114 webserver running
|
||||
115 Test-MTN running
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
|
||||
|
||||
假设我们想创建 ‘ **webserver** ‘ 虚拟机的快照,运行下面的命令,
|
||||
假设我们想创建 webserver 虚拟机的快照,运行下面的命令,
|
||||
|
||||
**语法:**
|
||||
|
||||
```
|
||||
# virsh snapshot-create-as –domain {vm_name} –name {snapshot_name} –description “enter description here”
|
||||
```
|
||||
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-create-as --domain webserver --name webserver_snap --description "snap before patch on 4Feb2018"
|
||||
Domain snapshot webserver_snap created
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
|
||||
|
||||
创建快照后,我们可以使用下面的命令列出与虚拟机相关的快照,
|
||||
创建快照后,我们可以使用下面的命令列出与虚拟机相关的快照:
|
||||
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-list webserver
|
||||
Name Creation Time State
|
||||
------------------------------------------------------------
|
||||
webserver_snap 2018-02-04 15:05:05 +0530 running
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
|
||||
|
||||
要列出虚拟机快照的详细信息,请运行下面的 virsh 命令,
|
||||
要列出虚拟机快照的详细信息,请运行下面的 `virsh` 命令:
|
||||
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-info --domain webserver --snapshotname webserver_snap
|
||||
Name: webserver_snap
|
||||
@ -69,50 +69,44 @@ Parent: -
|
||||
Children: 0
|
||||
Descendants: 0
|
||||
Metadata: yes
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
|
||||
|
||||
我们可以使用下面的 qemu-img 命令查看快照的大小,
|
||||
我们可以使用下面的 `qemu-img` 命令查看快照的大小:
|
||||
|
||||
```
|
||||
[root@kvm-hypervisor ~]# qemu-img info /var/lib/libvirt/images/snaptestvm.img
|
||||
|
||||
```
|
||||
|
||||
[![qemu-img-command-output-kvm][1]![qemu-img-command-output-kvm][3]][3]
|
||||
![qemu-img-command-output-kvm][3]
|
||||
|
||||
### 还原 KVM 虚拟机快照
|
||||
|
||||
假设我们想要将 webserver 虚拟机还原到我们在上述步骤中创建的快照。使用下面的 virsh 命令将 Webserver 虚拟机恢复到其快照 “**webserver_snap**” 上。
|
||||
假设我们想要将 webserver 虚拟机还原到我们在上述步骤中创建的快照。使用下面的 `virsh` 命令将 webserver 虚拟机恢复到其快照 webserver_snap 的状态。
|
||||
|
||||
**语法:**
|
||||
|
||||
```
|
||||
# virsh snapshot-revert {vm_name} {snapshot_name}
|
||||
```
|
||||
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-revert webserver webserver_snap
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
|
||||
|
||||
### 删除 KVM 虚拟机快照
|
||||
|
||||
要删除 KVM 虚拟机快照,首先使用 “**virsh snapshot-list**” 命令获取虚拟机的快照详细信息,然后使用 “**virsh snapshot-delete**” 命令删除快照。如下示例所示:
|
||||
要删除 KVM 虚拟机快照,首先使用 `virsh snapshot-list` 命令获取虚拟机的快照详细信息,然后使用 `virsh snapshot-delete` 命令删除快照。如下示例所示:
|
||||
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-list --domain webserver
|
||||
Name Creation Time State
|
||||
------------------------------------------------------------
|
||||
webserver_snap 2018-02-04 15:05:05 +0530 running
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-delete --domain webserver --snapshotname webserver_snap
|
||||
Domain snapshot webserver_snap deleted
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
|
||||
|
||||
这就是本文的全部内容,我希望你们能够了解如何使用 virsh 命令来管理 KVM 虚拟机快照。请分享你的反馈,并不要犹豫地分享给你的技术朋友🙂
|
||||
这就是本文的全部内容,我希望你们能够了解如何使用 `virsh` 命令来管理 KVM 虚拟机快照。欢迎提出你的反馈,也请毫不犹豫地把它分享给你的技术朋友🙂
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -120,7 +114,7 @@ via: https://www.linuxtechi.com/create-revert-delete-kvm-virtual-machine-snapsho
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,198 @@
|
||||
初识 Python:全局、局部和非局部变量(带示例)
|
||||
======
|
||||
|
||||
### 全局变量
|
||||
|
||||
在 Python 中,在函数之外或在全局范围内声明的变量被称为全局变量。 这意味着,全局变量可以在函数内部或外部访问。
|
||||
|
||||
我们来看一个关于如何在 Python 中创建一个全局变量的示例。
|
||||
|
||||
#### 示例 1:创建全局变量
|
||||
|
||||
```python
|
||||
x = "global"
|
||||
|
||||
def foo():
|
||||
print("x inside :", x)
|
||||
|
||||
foo()
|
||||
print("x outside:", x)
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
|
||||
```
|
||||
x inside : global
|
||||
x outside: global
|
||||
```
|
||||
|
||||
在上面的代码中,我们创建了 `x` 作为全局变量,并定义了一个 `foo()` 来打印全局变量 `x`。 最后,我们调用 `foo()` 来打印 `x` 的值。
|
||||
|
||||
倘若你想改变一个函数内的 `x` 的值该怎么办?
|
||||
|
||||
```python
|
||||
x = "global"
|
||||
|
||||
def foo():
|
||||
x = x * 2
|
||||
print(x)
|
||||
foo()
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
|
||||
```
|
||||
UnboundLocalError: local variable 'x' referenced before assignment
|
||||
```
|
||||
|
||||
输出显示一个错误,因为 Python 将 `x` 视为局部变量,而 `x` 没有在 `foo()` 内部定义。
|
||||
|
||||
为了使其正常运行,我们需要使用 `global` 关键字,查看 [Python global 关键字][1]以便了解更多。
|
||||
|
||||
### 局部变量
|
||||
|
||||
在函数体内或局部作用域内声明的变量称为局部变量。
|
||||
|
||||
#### 示例 2:访问作用域外的局部变量
|
||||
|
||||
```python
|
||||
def foo():
|
||||
y = "local"
|
||||
|
||||
foo()
|
||||
print(y)
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
|
||||
```
|
||||
NameError: name 'y' is not defined
|
||||
```
|
||||
|
||||
输出显示了一个错误,因为我们试图在全局范围内访问局部变量 `y`,而局部变量只能在 `foo()` 函数内部或局部作用域内有效。
|
||||
|
||||
我们来看一个关于如何在 Python 中创建一个局部变量的例子。
|
||||
|
||||
#### 示例 3:创建一个局部变量
|
||||
|
||||
通常,我们在函数内声明一个变量来创建一个局部变量。
|
||||
|
||||
```python
|
||||
def foo():
|
||||
y = "local"
|
||||
print(y)
|
||||
|
||||
foo()
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
|
||||
```
|
||||
local
|
||||
```
|
||||
|
||||
让我们来看看前面的问题,其中 `x` 是一个全局变量,我们想修改 `foo()` 内部的 `x`。
|
||||
|
||||
### 全局变量和局部变量
|
||||
|
||||
在这里,我们将展示如何在同一份代码中使用全局变量和局部变量。
|
||||
|
||||
#### 示例 4:在同一份代码中使用全局变量和局部变量
|
||||
|
||||
```python
|
||||
x = "global"
|
||||
|
||||
def foo():
|
||||
global x
|
||||
y = "local"
|
||||
x = x * 2
|
||||
print(x)
|
||||
print(y)
|
||||
|
||||
foo()
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出(LCTT 译注:原文中输出结果的两个 `global` 有空格,正确的是没有空格):
|
||||
```
|
||||
globalglobal
|
||||
local
|
||||
```
|
||||
|
||||
在上面的代码中,我们将 `x` 声明为全局变量,将 `y` 声明为 `foo()` 中的局部变量。 然后,我们使用乘法运算符 `*` 来修改全局变量 `x`,并打印 `x` 和 `y`。
|
||||
|
||||
在调用 `foo()` 之后,`x` 的值变成了 `globalglobal`(LCTT 译注:原文同样有空格,正确的是没有空格),因为我们使用 `x * 2` 打印了两次 `global`。 之后,我们打印局部变量 `y` 的值,即 `local`。
|
||||
|
||||
#### 示例 5:具有相同名称的全局变量和局部变量
|
||||
|
||||
```python
|
||||
x = 5
|
||||
|
||||
def foo():
|
||||
x = 10
|
||||
print("local x:", x)
|
||||
|
||||
foo()
|
||||
print("global x:", x)
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
|
||||
```
|
||||
local x: 10
|
||||
global x: 5
|
||||
```
|
||||
|
||||
在上面的代码中,我们对全局变量和局部变量使用了相同的名称 `x`。 当我们打印相同的变量时却得到了不同的结果,因为这两个作用域内都声明了变量,即 `foo()` 内部的局部作用域和 `foo()` 外面的全局作用域。
|
||||
|
||||
当我们在 `foo()` 内部打印变量时,它输出 `local x: 10`,这被称为变量的局部作用域。
|
||||
|
||||
同样,当我们在 `foo()` 外部打印变量时,它输出 `global x: 5`,这被称为变量的全局作用域。
|
||||
|
||||
### 非局部变量
|
||||
|
||||
非局部变量用于局部作用域未定义它的嵌套函数中。 这意味着,该变量既不在局部作用域内,也不在全局作用域内。
|
||||
|
||||
我们来看一个关于如何在 Python 中创建一个非局部变量的例子。(LCTT 译者注:原文为创建全局变量,疑为笔误)
|
||||
|
||||
我们使用 `nonlocal` 关键字来创建非局部变量。
|
||||
|
||||
#### 例 6:创建一个非局部变量
|
||||
|
||||
```python
|
||||
def outer():
|
||||
x = "local"
|
||||
|
||||
def inner():
|
||||
nonlocal x
|
||||
x = "nonlocal"
|
||||
print("inner:", x)
|
||||
|
||||
inner()
|
||||
print("outer:", x)
|
||||
|
||||
outer()
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
|
||||
```
|
||||
inner: nonlocal
|
||||
outer: nonlocal
|
||||
```
|
||||
|
||||
在上面的代码中有一个嵌套函数 `inner()`。 我们使用 `nonlocal` 关键字来创建非局部变量。`inner()` 函数是在另一个函数 `outer()` 的作用域中定义的。
|
||||
|
||||
注意:如果我们改变非局部变量的值,那么这个变化也会体现在外层函数的局部变量中。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.programiz.com/python-programming/global-local-nonlocal-variables
|
||||
|
||||
作者:[programiz][a]
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.programiz.com/
|
||||
[1]:https://www.programiz.com/python-programming/global-keyword
|
@ -1,6 +1,8 @@
|
||||
放慢速度是如何使我变得更好的领导者
|
||||
放慢速度如何使我变成更好的领导者
|
||||
======
|
||||
|
||||
> 开放式领导和耐心、倾听一样重要,它们都是关于执行的。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_leadership_brand.png?itok=YW1Syk4S)
|
||||
|
||||
在我职业生涯的早期,我认为我能做的最重要的事情就是行动。如果我的老板说跳,我的回答是“跳多高?”
|
||||
@ -9,15 +11,15 @@
|
||||
|
||||
实行开放式领导需要培养耐心和倾听技能,我需要在[最佳行动计划上进行合作,而不仅仅是最快的计划][2]。它还为我提供了一些工具,以解释 [为什么我会对某人说“不”][3] (或者,也许是“不是现在”),这样我就能以透明和自信的方式领导。
|
||||
|
||||
如果你正在进行软件开发和实践 scrum 中,那么下面的观点可能会引起你的共鸣:在 sprint 计划和 sprint 演示中,耐心和倾听经理的表现和它的技能一样重要。(译注: scrum 是迭代式增量软件开发过程,通常用于敏捷软件开发。 sprint 计划和 sprint 演示是其中的两个术语。)忘掉它们,你会减少你能够产生的影响。
|
||||
如果你正在进行软件开发和实践 scrum 中,那么下面的观点可能会引起你的共鸣:在 sprint 计划和 sprint 演示中,耐心和倾听经理的表现和它的技能一样重要。(LCTT 译注: scrum 是迭代式增量软件开发过程,通常用于敏捷软件开发。 sprint 计划和 sprint 演示是其中的两个术语。)忘掉它们,你会减少你能够产生的影响。
|
||||
|
||||
### 专注于耐心
|
||||
|
||||
专注和耐心并不总是容易的。通常,我发现自己正坐在会议上,用行动项目填满我的笔记本时,我一般会思考:“我们可以简单地对 x 和 y 进行改进”。然后我记得事情不是那么线性的。(译者注:这句话感觉翻译得并不通顺)
|
||||
专注和耐心并不总是容易的。通常,我发现自己正坐在会议上,用行动项目填满我的笔记本时,我一般会思考:“我们只要做了某事,另外一件事就会得到改善”。然后我记得事物不是那么线性发展的。
|
||||
|
||||
我需要考虑可能影响情况的其他因素。暂停下来从多个人和资源中获取数据可以帮我充实策略,以确保出组织长期成功。它还帮助我确定那些短期的里程碑,这些里程碑应该会让我负责生产的业务完成交付。
|
||||
我需要考虑可能影响情况的其他因素。暂停下来从多个人和资源中获取数据可以帮我充实策略,以确保组织长期成功。它还帮助我确定那些短期的里程碑,这些里程碑应该可以让我负责生产的业务完成交付。
|
||||
|
||||
这里有一个很好的例子,以前耐心不是我认为应该拥有的东西,而这又是如何影响了我的表现。当我在北卡罗来纳州工作时,我与一个在亚利桑那州的人共事。我们没有使用视频会议技术,所以当我们交谈时我没有看到她的肢体语言。然而当我负责为我领导的项目交付结果时,她是确保我获得足够支持的两个人之一。
|
||||
这里有一个很好的例子,以前耐心不是我认为应该拥有、以及影响我的表现的东西。当我在北卡罗来纳州工作时,我与一个在亚利桑那州的人共事。我们没有使用视频会议技术,所以当我们交谈时我没有看到她的肢体语言。然而当我负责为我领导的项目交付结果时,她是确保我获得足够支持的两个人之一。
|
||||
|
||||
无论出于何种原因,当我与她交谈时,当她要求我做某件事时,我做了。她会为我的绩效评估提供意见,所以我想确保她高兴。那时,我还不够成熟不懂得其实没必要非要讨她开心;我的重点应该放在其他绩效指标上。我本应该花更多的时间倾听并与她合作,而不是在她还在说话的时候拿起第一个“行动项目”并开始工作。
|
||||
|
||||
@ -35,7 +37,7 @@
|
||||
|
||||
我最终对她有一些反馈。 下次我们一起工作时,我不想在六个月后听到反馈意见。 我想早些时候和更频繁地听到反馈意见,以便我能够尽早从错误中学习。 关于这项工作的持续讨论是任何团队都应该发生的事情。
|
||||
|
||||
当我成为一名管理者和领导者时,我坚持要求我的团队达到相同的标准:计划,制定计划并反思。 重复。 不要让外力造成的麻烦让你偏离你需要实施的计划。 将工作分成小的增量,以便反思和调整计划。 正如 Daniel Goleman 写道:“把注意力放在需要的地方是领导力的一个主要任务。” 不要害怕面对这个挑战。
|
||||
当我成为一名管理者和领导者时,我坚持要求我的团队达到相同的标准:计划,执行计划并反思。 重复。 不要让外力造成的麻烦让你偏离你需要实施的计划。 将工作分成小的增量,以便反思和调整计划。 正如 Daniel Goleman 写道:“把注意力放在需要的地方是领导力的一个主要任务。” 不要害怕面对这个挑战。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -43,7 +45,7 @@ via: [https://opensource.com/open-organization/18/2/open-leadership-patience-lis
|
||||
|
||||
作者:[Angela Robertson][a]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,64 +1,51 @@
|
||||
# 在 Linux 上使用 groff -me 格式化你的学术论文
|
||||
在 Linux 上使用 groff -me 格式化你的学术论文
|
||||
===========
|
||||
|
||||
> 学习用简单的宏为你的课程论文添加脚注、引用、子标题及其它格式。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T)
|
||||
|
||||
当我在 1993 年发现 Linux 时,我还是一名本科生。我很兴奋在我的宿舍里拥有 Unix 系统的强大功能,但是尽管它有很多功能,Linux 却缺乏应用程序。像 LibreOffice 和 OpenOffice 这样的文字处理程序还需要几年的时间。如果你想使用文字处理器,你可能会将你的系统引导到 MS-DOS 中,并使用 WordPerfect、shareware GalaxyWrite 或类似的程序。
|
||||
当我在 1993 年发现 Linux 时,我还是一名本科生。我很兴奋在我的宿舍里拥有 Unix 系统的强大功能,但是尽管它有很多功能,但 Linux 却缺乏应用程序。像 LibreOffice 和 OpenOffice 这样的文字处理程序还需要几年的时间才出现。如果你想使用文字处理器,你可能会将你的系统引导到 MS-DOS 中,并使用 WordPerfect、共享软件 GalaxyWrite 或类似的程序。
|
||||
|
||||
`nroff` 和 `troff ` 。它们是同一系统的不同接口:`nroff` 生成纯文本输出,适用于屏幕或行式打印机,而 `troff` 产生非常优美的输出,通常用于在激光打印机上打印。
|
||||
|
||||
这就是我的方法,因为我需要为我的课程写论文,但我更喜欢呆在 Linux 中。我从我们的 “大 Unix ” 校园计算机实验室得知,Unix 系统提供了一组文本格式化的程序。它们是同一系统的不同接口:生成纯文本的输出,适合于屏幕或行打印机,或者生成非常优美的输出,通常用于在激光打印机上打印。
|
||||
这就是我的方法,因为我需要为我的课程写论文,但我更喜欢呆在 Linux 中。我从我们的 “大 Unix” 校园计算机实验室得知,Unix 系统提供了一组文本格式化的程序 `nroff` 和 `troff ` ,它们是同一系统的不同接口:`nroff` 生成纯文本输出,适用于屏幕或行式打印机,而 `troff` 产生非常优美的输出,通常用于在激光打印机上打印。
|
||||
|
||||
在 Linux 上,`nroff` 和 `troff` 被合并为 GNU troff,通常被称为 [groff][1]。 我很高兴看到早期的 Linux 发行版中包含了某个版本的 groff,因此我着手学习如何使用它来编写课程论文。 我学到的第一个宏集是 `-me` 宏包,一个简单易学的宏集。
|
||||
|
||||
关于 `groff` ,首先要了解的是它根据一组宏处理和格式化文本。一个宏通常是一个两个字符的命令,它自己设置在一行上,并带有一个引导点。宏可能包含一个或多个选项。当 groff 在处理文档时遇到这些宏中的一个时,它会自动对文本进行格式化。
|
||||
关于 `groff` ,首先要了解的是它根据一组宏来处理和格式化文本。宏通常是个两个字符的命令,它自己设置在一行上,并带有一个引导点。宏可能包含一个或多个选项。当 `groff` 在处理文档时遇到这些宏中的一个时,它会自动对文本进行格式化。
|
||||
|
||||
下面,我将分享使用 `groff -me` 编写课程论文等简单文档的基础知识。 我不会深入细节进行讨论,比如如何创建嵌套列表,保存和显示,以及使用表格和数字。
|
||||
|
||||
### 段落
|
||||
|
||||
让我们从一个简单的例子开始,在几乎所有类型的文档中都可以看到:段落。段落可以格式化第一行的缩进或不缩进(即,与左边齐平)。 包括学术论文,杂志,期刊和书籍在内的许多印刷文档都使用了这两种类型的组合,其中文档或章节中的第一个(主要)段落与左侧的所有段落以及所有其他(常规)段落缩进。 在 `groff -me`中,您可以使用两种段落类型:前导段落(`.lp`)和常规段落(`.pp`)。
|
||||
让我们从一个简单的例子开始,在几乎所有类型的文档中都可以看到:段落。段落可以格式化为首行缩进或不缩进(即,与左边齐平)。 包括学术论文,杂志,期刊和书籍在内的许多印刷文档都使用了这两种类型的组合,其中文档或章节中的第一个(主要)段落左侧对齐,而所有其他(常规)的段落缩进。 在 `groff -me`中,您可以使用两种段落类型:前导段落(`.lp`)和常规段落(`.pp`)。
|
||||
|
||||
```
|
||||
.lp
|
||||
|
||||
This is the first paragraph.
|
||||
|
||||
.pp
|
||||
|
||||
This is a standard paragraph.
|
||||
|
||||
```
|
||||
|
||||
### 文本格式
|
||||
|
||||
用粗体格式化文本的宏是 `.b`,斜体格式是 `.i` 。 如果您将 `.b` 或 `.i` 放在一行上,则后面的所有文本将以粗体或斜体显示。 但更有可能你只是想用粗体或斜体来表示一个或几个词。 要将一个词加粗或斜体,将该单词放在与 `.b` 或 `.i` 相同的行上作为选项。 要用**粗体**或斜体格式化多个单词,请将文字用引号引起来。
|
||||
用粗体格式化文本的宏是 `.b`,斜体格式是 `.i` 。 如果您将 `.b` 或 `.i` 放在一行上,则后面的所有文本将以粗体或斜体显示。 但更有可能你只是想用粗体或斜体来表示一个或几个词。 要将一个词加粗或斜体,将该单词放在与 `.b` 或 `.i` 相同的行上作为选项。 要用粗体或斜体格式化多个单词,请将文字用引号引起来。
|
||||
|
||||
```
|
||||
.pp
|
||||
|
||||
You can do basic formatting such as
|
||||
|
||||
.i italics
|
||||
|
||||
or
|
||||
|
||||
.b "bold text."
|
||||
|
||||
```
|
||||
|
||||
在上面的例子中,粗体文本结尾的句点也是粗体。 在大多数情况下,这不是你想要的。 只要文字是粗体字,而不是后面的句点也是粗体字。 要获得您想要的效果,您可以向 `.b` 或 `.i` 添加第二个参数,以指示要以粗体或斜体显示的文本,但是正常类型的文本。 您可以这样做,以确保尾随句点不会以粗体显示。
|
||||
在上面的例子中,粗体文本结尾的句点也是粗体。 在大多数情况下,这不是你想要的:只有文字需要加粗,而后面的句点不需要。 要获得您想要的效果,您可以向 `.b` 或 `.i` 添加第二个参数,用来指定紧跟在粗体或斜体文本之后、以正常字体显示的文本。 这样可以确保尾随的句点不会以粗体显示。
|
||||
|
||||
```
|
||||
.pp
|
||||
|
||||
You can do basic formatting such as
|
||||
|
||||
.i italics
|
||||
|
||||
or
|
||||
|
||||
.b "bold text" .
|
||||
|
||||
```
|
||||
|
||||
### 列表
|
||||
@ -67,64 +54,38 @@ or
|
||||
|
||||
```
|
||||
.pp
|
||||
|
||||
Bullet lists are easy to make:
|
||||
|
||||
.bu
|
||||
|
||||
Apple
|
||||
|
||||
.bu
|
||||
|
||||
Banana
|
||||
|
||||
.bu
|
||||
|
||||
Pineapple
|
||||
|
||||
.pp
|
||||
|
||||
Numbered lists are as easy as:
|
||||
|
||||
.np
|
||||
|
||||
One
|
||||
|
||||
.np
|
||||
|
||||
Two
|
||||
|
||||
.np
|
||||
|
||||
Three
|
||||
|
||||
.pp
|
||||
|
||||
Note that numbered lists will reset at the next pp or lp.
|
||||
|
||||
```
|
||||
|
||||
### 副标题
|
||||
|
||||
如果你正在写一篇长论文,你可能想把你的内容分成几部分。使用 `groff -me`,您可以创建编号的标题 (`.sh`) 和未编号的标题 (`.uh`)。在这两种方法中,将节标题作为参数括起来。对于编号的标题,您还需要提供标题级别 `:1` 将给出一个一级标题(例如,1)。同样,`2` 和 `3` 将给出第二和第三级标题,如 2.1 或 3.1.1。
|
||||
如果你正在写一篇长论文,你可能想把你的内容分成几部分。使用 `groff -me`,您可以创建编号的标题(`.sh`)和未编号的标题(`.uh`)。在这两种方法中,将节标题作为参数括起来。对于编号的标题,您还需要提供标题级别:`1` 将给出一个一级标题(例如,`1`)。同样,`2` 和 `3` 将给出第二和第三级标题,如 `2.1` 或 `3.1.1`。
|
||||
|
||||
```
|
||||
.uh Introduction
|
||||
|
||||
.pp
|
||||
|
||||
Provide one or two paragraphs to describe the work
|
||||
|
||||
and why it is important.
|
||||
|
||||
.sh 1 "Method and Tools"
|
||||
|
||||
.pp
|
||||
|
||||
Provide a few paragraphs to describe how you
|
||||
|
||||
did the research, including what equipment you used
|
||||
|
||||
```
|
||||
|
||||
### 智能引号和块引号
|
||||
@ -133,135 +94,88 @@ did the research, including what equipment you used
|
||||
|
||||
```
|
||||
.pp
|
||||
|
||||
Christine Peterson coined the phrase \*(lqopen source.\*(rq
|
||||
|
||||
```
|
||||
|
||||
`groff -me` 中还有一个快捷方式来创建这些引号(`.q`),我发现它更易于使用。
|
||||
|
||||
```
|
||||
.pp
|
||||
|
||||
Christine Peterson coined the phrase
|
||||
|
||||
.q "open source."
|
||||
|
||||
```
|
||||
|
||||
如果引用的是跨越几行的较长的引用,则需要使用一个块引用。为此,在引用的开头和结尾插入块引用宏(
|
||||
|
||||
`.(q`)。
|
||||
如果引用的是跨越几行的较长的引用,则需要使用一个块引用。为此,在引用的开头和结尾插入块引用宏(`.(q`)。
|
||||
|
||||
```
|
||||
.pp
|
||||
|
||||
Christine Peterson recently wrote about open source:
|
||||
|
||||
.(q
|
||||
|
||||
On April 7, 1998, Tim O'Reilly held a meeting of key
|
||||
|
||||
leaders in the field. Announced in advance as the first
|
||||
|
||||
.q "Freeware Summit,"
|
||||
|
||||
by April 14 it was referred to as the first
|
||||
|
||||
.q "Open Source Summit."
|
||||
|
||||
.)q
|
||||
|
||||
```
|
||||
|
||||
### 脚注
|
||||
|
||||
要插入脚注,请在脚注文本前后添加脚注宏(`.(f`),并使用内联宏(`\ **`)添加脚注标记。脚注标记应出现在文本中和脚注中。
|
||||
要插入脚注,请在脚注文本前后添加脚注宏(`.(f`),并使用内联宏(`\**`)添加脚注标记。脚注标记应出现在文本中和脚注中。
|
||||
|
||||
```
|
||||
.pp
|
||||
|
||||
Christine Peterson recently wrote about open source:\**
|
||||
|
||||
.(f
|
||||
|
||||
\**Christine Peterson.
|
||||
|
||||
.q "How I coined the term open source."
|
||||
|
||||
.i "OpenSource.com."
|
||||
|
||||
1 Feb 2018.
|
||||
|
||||
.)f
|
||||
|
||||
.(q
|
||||
|
||||
On April 7, 1998, Tim O'Reilly held a meeting of key
|
||||
|
||||
leaders in the field. Announced in advance as the first
|
||||
|
||||
.q "Freeware Summit,"
|
||||
|
||||
by April 14 it was referred to as the first
|
||||
|
||||
.q "Open Source Summit."
|
||||
|
||||
.)q
|
||||
|
||||
```
|
||||
|
||||
### 封面
|
||||
|
||||
大多数课程论文都需要一个包含论文标题,姓名和日期的封面。 在 `groff -me` 中创建封面需要一些组件。 我发现最简单的方法是使用居中的文本块并在标题,名称和日期之间添加额外的行。 (我倾向于在每一行之间使用两个空行)。在文章顶部,从标题页(`.tp`)宏开始,插入五个空白行(`.sp 5`),然后添加居中文本(`.(c`) 和额外的空白行(`.sp 2`)。
|
||||
大多数课程论文都需要一个包含论文标题,姓名和日期的封面。 在 `groff -me` 中创建封面需要一些组件。 我发现最简单的方法是使用居中的文本块并在标题、名字和日期之间添加额外的行。 (我倾向于在每一行之间使用两个空行)。在文章顶部,从标题页(`.tp`)宏开始,插入五个空白行(`.sp 5`),然后添加居中文本(`.(c`) 和额外的空白行(`.sp 2`)。
|
||||
|
||||
```
|
||||
.tp
|
||||
|
||||
.sp 5
|
||||
|
||||
.(c
|
||||
|
||||
.b "Writing Class Papers with groff -me"
|
||||
|
||||
.)c
|
||||
|
||||
.sp 2
|
||||
|
||||
.(c
|
||||
|
||||
Jim Hall
|
||||
|
||||
.)c
|
||||
|
||||
.sp 2
|
||||
|
||||
.(c
|
||||
|
||||
February XX, 2018
|
||||
|
||||
.)c
|
||||
|
||||
.bp
|
||||
|
||||
```
|
||||
|
||||
最后一个宏(`.bp`)告诉 groff 在标题页后添加一个分页符。
|
||||
|
||||
### 更多内容
|
||||
|
||||
这些是用 `groff-me` 写一份专业的论文非常基础的东西,包括前导和缩进段落,粗体和斜体,有序和无需列表,编号和不编号的章节标题,块引用以及脚注。
|
||||
这些是用 `groff -me` 写一份专业的论文非常基础的东西,包括前导和缩进段落、粗体和斜体、有序和无序列表、编号和不编号的章节标题、块引用以及脚注。
|
||||
|
||||
我已经包含一个示例 groff 文件来演示所有这些格式。 将 `lorem-ipsum.me` 文件保存到您的系统并通过 groff 运行。 `-Tps` 选项将输出类型设置为 `PostScript` ,以便您可以将文档发送到打印机或使用 `ps2pdf` 程序将其转换为 PDF 文件。
|
||||
我已经包含一个[示例 groff 文件](https://opensource.com/sites/default/files/lorem-ipsum.me_.txt)来演示所有这些格式。 将 `lorem-ipsum.me` 文件保存到您的系统并通过 groff 运行。 `-Tps` 选项将输出类型设置为 `PostScript` ,以便您可以将文档发送到打印机或使用 `ps2pdf` 程序将其转换为 [PDF 文件](https://opensource.com/sites/default/files/lorem-ipsum.me_.pdf)。
|
||||
|
||||
```
|
||||
groff -Tps -me lorem-ipsum.me > lorem-ipsum.me.ps
|
||||
|
||||
ps2pdf lorem-ipsum.me.ps lorem-ipsum.me.pdf
|
||||
|
||||
```
|
||||
|
||||
如果你想使用 groff-me 的更多高级功能,请参阅 Eric Allman 所著的 “使用 `Groff-me` 来写论文”,你可以在你系统的 groff 的 `doc` 目录下找到一个名叫 `meintro.me` 的文件。这份文档非常完美的说明了如何使用 `groff-me` 宏来格式化你的论文。
|
||||
如果你想使用 `groff -me` 的更多高级功能,请参阅 Eric Allman 所著的 “使用 Groff -me 来写论文”,你可以在你系统的 groff 的 `doc` 目录下找到一个名叫 `meintro.me` 的文件。这份文档非常完美地说明了如何使用 `groff -me` 宏来格式化你的论文。
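该文件的具体位置因发行版而异;一种可能的查找和排版方式如下(路径仅为示例,有些发行版会把它压缩为 `.gz` 文件):

```
# 先找到 meintro.me 的实际位置
find /usr/share/doc -name 'meintro.me*' 2>/dev/null

# 再用 groff 将其排版为 PostScript 以便阅读或打印
groff -Tps -me /usr/share/doc/groff/meintro.me > meintro.ps
```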
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -269,7 +183,7 @@ via: https://opensource.com/article/18/2/how-format-academic-papers-linux-groff-
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
译者:[amwps290](https://github.com/amwps290)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,134 +1,120 @@
|
||||
如何使用树莓派测定颗粒物
|
||||
如何使用树莓派测定颗粒物(PM 2.5)
|
||||
======
|
||||
|
||||
> 使用两个简单的硬件设备和几行代码构建一个空气质量探测器。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bubblehands_fromRHT_520_0612LL.png?itok=_iQ2dO3S)
|
||||
|
||||
我们在东南亚的学校定期测定空气中的颗粒物。这里的测定值非常高,尤其是在二到五月之间,干燥炎热、土地干旱等各种因素都对空气质量产生了不利的影响。我将会在这篇文章中展示如何使用树莓派来测定颗粒物。
|
||||
|
||||
### 什么是颗粒物?
|
||||
|
||||
颗粒物就是粉尘或者空气中的微小颗粒。其中 PM10 和 PM2.5 之间的差别就是 PM10 指的是粒径小于10微米的颗粒,而 PM2.5 指的是粒径小于2.5微米的颗粒。在粒径小于2.5微米的的情况下,由于它们能被吸入肺泡中并且对呼吸系统造成影响,因此颗粒越小,对人的健康危害越大。
|
||||
颗粒物就是粉尘或者空气中的微小颗粒。其中 PM10 和 PM2.5 之间的差别就是 PM10 指的是粒径小于 10 微米的颗粒,而 PM2.5 指的是粒径小于 2.5 微米的颗粒。在粒径小于 2.5 微米的的情况下,由于它们能被吸入肺泡中并且对呼吸系统造成影响,因此颗粒越小,对人的健康危害越大。
|
||||
|
||||
世界卫生组织的建议[颗粒物浓度][1]是:
|
||||
|
||||
* 年均 PM10 不高于20 µg/m³
|
||||
* 年均 PM2.5 不高于10 µg/m³
|
||||
* 不允许超标时,日均 PM10 不高于50 µg/m³
|
||||
* 不允许超标时,日均 PM2.5 不高于25 µg/m³
|
||||
* 年均 PM10 不高于 20 µg/m³
|
||||
* 年均 PM2.5 不高于 10 µg/m³
|
||||
* 不允许超标时,日均 PM10 不高于 50 µg/m³
|
||||
* 不允许超标时,日均 PM2.5 不高于 25 µg/m³
|
||||
|
||||
以上数值实际上是低于大多数国家的标准的,例如欧盟对于 PM10 所允许的年均值是不高于40 µg/m³。
|
||||
以上数值实际上是低于大多数国家的标准的,例如欧盟对于 PM10 所允许的年均值是不高于 40 µg/m³。
|
||||
|
||||
### 什么是空气质量指数(AQI, Air Quality Index)?
|
||||
### 什么是<ruby>空气质量指数<rt>Air Quality Index</rt></ruby>(AQI)?
|
||||
|
||||
空气质量指数按照颗粒物的测定值来评价空气质量的好坏,然而由于各国之间的计算方式有所不同,这个指数并没有统一的标准。维基百科上关于[空气质量指数][2]的词条对此给出了一个概述。我们学校则以[美国环境保护协会][3](EPA, Environment Protection Agency)建立的分类法来作为依据。
|
||||
空气质量指数是按照颗粒物的测定值来评价空气质量的好坏,然而由于各国之间的计算方式有所不同,这个指数并没有统一的标准。维基百科上关于[空气质量指数][2]的词条对此给出了一个概述。我们学校则以<ruby>[美国环境保护协会][3]<rt>Environment Protection Agency</rt></ruby>(EPA)建立的分类法来作为依据。
|
||||
|
||||
![空气质量指数][5]
|
||||
|
||||
空气质量指数
|
||||
*空气质量指数*
|
||||
|
||||
### 测定颗粒物需要哪些准备?
|
||||
|
||||
测定颗粒物只需要以下两种器材:
|
||||
|
||||
* 树莓派(款式不限,最好带有 WiFi)
|
||||
* SDS011 颗粒物传感器
|
||||
|
||||
|
||||
|
||||
![颗粒物传感器][7]
|
||||
|
||||
颗粒物传感器
|
||||
*颗粒物传感器*
|
||||
|
||||
如果是只带有 Micro USB的树莓派Zero W,那还需要一根连接到标准 USB 端口的适配线,只需要20美元,而传感器则自带适配串行接口的 USB 适配器。
|
||||
如果是只带有 Micro USB 的树莓派 Zero W,那还需要一根连接到标准 USB 端口的适配线,只需要 20 美元,而传感器则自带适配串行接口的 USB 适配器。
|
||||
|
||||
### 安装过程
|
||||
|
||||
对于树莓派,只需要下载对应的 Raspbian Lite 镜像并且[写入到 Micro SD 卡][8]上就可以了(网上很多教程都有介绍如何设置 WLAN 连接,我就不细说了)。
|
||||
|
||||
如果要使用 SSH,那还需要在启动分区建立一个名为 `ssh` 的空文件。树莓派的 IP 通过路由器或者 DHCP 服务器获取,随后就可以通过 SSH 登录到树莓派了(默认密码是 raspberry):
|
||||
|
||||
```
|
||||
$ ssh pi@192.168.1.5
|
||||
|
||||
```
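上一段提到的那个用于启用 SSH 的空 `ssh` 文件,可以在写完 Micro SD 卡之后、首次开机之前创建;一种可能的做法如下(挂载点仅为示例,实际路径因系统而异):

```
# 假设 SD 卡的启动分区挂载在 /media/boot
touch /media/boot/ssh
```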
|
||||
|
||||
首先我们需要在树莓派上安装一下这些包:
|
||||
|
||||
```
|
||||
$ sudo apt install git-core python-serial python-enum lighttpd
|
||||
|
||||
```
|
||||
|
||||
在开始之前,我们可以用 `dmesg` 来获取 USB 适配器连接的串行接口:
|
||||
|
||||
```
|
||||
$ dmesg
|
||||
|
||||
[ 5.559802] usbcore: registered new interface driver usbserial
|
||||
|
||||
[ 5.559930] usbcore: registered new interface driver usbserial_generic
|
||||
|
||||
[ 5.560049] usbserial: USB Serial support registered for generic
|
||||
|
||||
[ 5.569938] usbcore: registered new interface driver ch341
|
||||
|
||||
[ 5.570079] usbserial: USB Serial support registered for ch341-uart
|
||||
|
||||
[ 5.570217] ch341 1–1.4:1.0: ch341-uart converter detected
|
||||
|
||||
[ 5.575686] usb 1–1.4: ch341-uart converter now attached to ttyUSB0
|
||||
|
||||
```
|
||||
|
||||
在最后一行,可以看到接口 `ttyUSB0`。然后我们需要写一个 Python 脚本来读取传感器的数据并以 JSON 格式存储,再通过一个 HTML 页面把数据展示出来。
|
||||
|
||||
### 在树莓派上读取数据
|
||||
|
||||
首先创建一个传感器实例,每5分钟读取一次传感器的数据,持续30秒,这些数值后续都可以调整。在每两次测定的间隔,我们把传感器调到睡眠模式以延长它的使用寿命(厂商认为元件的寿命大约8000小时)。
|
||||
首先创建一个传感器实例,每 5 分钟读取一次传感器的数据,持续 30 秒,这些数值后续都可以调整。在每两次测定的间隔,我们把传感器调到睡眠模式以延长它的使用寿命(厂商认为元件的寿命大约 8000 小时)。
|
||||
|
||||
我们可以使用以下命令来下载 Python 脚本:
|
||||
|
||||
```
|
||||
$ wget -O /home/pi/aqi.py https://raw.githubusercontent.com/zefanja/aqi/master/python/aqi.py
|
||||
|
||||
```
|
||||
|
||||
另外还需要执行以下两条命令来保证脚本正常运行:
|
||||
|
||||
```
|
||||
$ sudo chown pi:pi /var/wwww/html/
|
||||
|
||||
$ echo[] > /var/wwww/html/aqi.json
|
||||
|
||||
$ sudo chown pi:pi /var/www/html/
|
||||
$ echo '[]' > /var/www/html/aqi.json
|
||||
```
|
||||
|
||||
下面就可以执行脚本了:
|
||||
|
||||
```
|
||||
$ chmod +x aqi.py
|
||||
|
||||
$ chmod +x aqi.p
|
||||
$ ./aqi.py
|
||||
|
||||
PM2.5:55.3, PM10:47.5
|
||||
|
||||
PM2.5:55.5, PM10:47.7
|
||||
|
||||
PM2.5:55.7, PM10:47.8
|
||||
|
||||
PM2.5:53.9, PM10:47.6
|
||||
|
||||
PM2.5:53.6, PM10:47.4
|
||||
|
||||
PM2.5:54.2, PM10:47.3
|
||||
|
||||
…
|
||||
|
||||
```
|
||||
|
||||
### 自动化执行脚本
|
||||
|
||||
只需要使用诸如 crontab 的服务,我们就不需要每次都手动启动脚本了。按照以下命令打开 crontab 文件:
|
||||
|
||||
```
|
||||
$ crontab -e
|
||||
|
||||
```
|
||||
|
||||
在文件末尾添加这一行:
|
||||
|
||||
```
|
||||
@reboot cd /home/pi/ && ./aqi.py
|
||||
|
||||
```
|
||||
|
||||
现在我们的脚本就会在树莓派每次重启后自动执行了。
|
||||
@ -138,17 +124,14 @@ $ crontab -e
|
||||
我们在前面已经安装了一个轻量级的 web 服务器 `lighttpd`,所以我们需要把 HTML、JavaScript、CSS 文件放置在 `/var/www/html` 目录中,这样就能通过电脑和智能手机访问到相关数据了。执行下面的三条命令,可以下载到对应的文件:
|
||||
|
||||
```
|
||||
$ wget -O /var/wwww/html/index.html https://raw.githubusercontent.com/zefanja/aqi/master/html/index.html
|
||||
|
||||
$ wget -O /var/wwww/html/aqi.js https://raw.githubusercontent.com/zefanja/aqi/master/html/aqi.js
|
||||
|
||||
$ wget -O /var/wwww/html/style.css https://raw.githubusercontent.com/zefanja/aqi/master/html/style.css
|
||||
|
||||
$ wget -O /var/www/html/index.html https://raw.githubusercontent.com/zefanja/aqi/master/html/index.html
|
||||
$ wget -O /var/www/html/aqi.js https://raw.githubusercontent.com/zefanja/aqi/master/html/aqi.js
|
||||
$ wget -O /var/www/html/style.css https://raw.githubusercontent.com/zefanja/aqi/master/html/style.css
|
||||
```
|
||||
|
||||
在 JavaScript 文件中,实现了打开 JSON 文件、提取数据、计算空气质量指数的过程,随后页面的背景颜色将会根据 EPA 的划分标准而变化。
|
||||
|
||||
你只需要用浏览器访问树莓派的地址,就可以看到当前颗粒物浓度值等数据了。[http://192.168.1.5:][9]
|
||||
你只需要用浏览器访问树莓派的地址,就可以看到当前颗粒物浓度值等数据了: [http://192.168.1.5:][9]
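除了用浏览器,也可以直接在命令行里查看原始的 JSON 数据(IP 地址沿用上文示例;`jq` 为可选工具,需要另行安装,用于格式化输出):

```
curl -s http://192.168.1.5/aqi.json | jq '.'
```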
|
||||
|
||||
这个页面比较简单而且可扩展,比如可以添加一个展示过去数小时历史数据的表格等等。
|
||||
|
||||
@ -158,7 +141,7 @@ $ wget -O /var/wwww/html/style.css https://raw.githubusercontent.com/zefanja/aqi
|
||||
|
||||
在资金相对紧张的情况下,树莓派是一种选择。除此以外,还有很多可以用来测定颗粒物的应用,包括室外固定装置、移动测定设备等等。我们学校则同时采用了这两种:固定装置在室外测定全天颗粒物浓度,而移动测定设备在室内检测空调过滤器的效果。
|
||||
|
||||
[Luftdaten.info][12]提供了一个如何设计类似的传感器的介绍,其中的软件效果出众,而且因为它没有使用树莓派,所以硬件更是小巧。
|
||||
[Luftdaten.info][12] 提供了一个如何设计类似的传感器的介绍,其中的软件效果出众,而且因为它没有使用树莓派,所以硬件更是小巧。
|
||||
|
||||
对于学生来说,设计一个颗粒物传感器确实算得上是一个优秀的课外项目。
|
||||
|
||||
@ -170,7 +153,7 @@ via: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberr
|
||||
|
||||
作者:[Stephan Tetzel][a]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,41 +1,40 @@
|
||||
如何使用 Vim 编辑器编辑多个文件
|
||||
如何使用 Vim 编辑多个文件
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/03/Edit-Multiple-Files-Using-Vim-Editor-720x340.png)
|
||||
|
||||
|
||||
有时候,您可能需要修改多个文件,或要将一个文件的内容复制到另一个文件中。在图形用户界面中,您可以在任何图形文本编辑器(如 gedit)中打开文件,并使用 CTRL + C 和 CTRL + V 复制和粘贴内容。在命令行模式下,您不能使用这种编辑器。不过别担心,只要有 vim 编辑器就有办法。在本教程中,我们将学习使用 Vim 编辑器同时编辑多个文件。相信我,很有意思哒。
|
||||
有时候,您可能需要修改多个文件,或要将一个文件的内容复制到另一个文件中。在图形用户界面中,您可以在任何图形文本编辑器(如 gedit)中打开文件,并使用 `CTRL + C` 和 `CTRL + V` 复制和粘贴内容。在命令行模式下,您不能使用这种编辑器。不过别担心,只要有 `vim` 编辑器就有办法。在本教程中,我们将学习使用 `vim` 编辑器同时编辑多个文件。相信我,很有意思哒。
|
||||
|
||||
### 安装 Vim
|
||||
|
||||
Vim 编辑器可在大多数 Linux 发行版的官方软件仓库中找到,所以您可以用默认的软件包管理器来安装它。例如,在 Arch Linux 及其变体上,您可以使用如下命令:
|
||||
|
||||
```
|
||||
$ sudo pacman -S vim
|
||||
|
||||
```
|
||||
|
||||
在 Debian 和 Ubuntu 上:
|
||||
|
||||
```
|
||||
$ sudo apt-get install vim
|
||||
|
||||
```
|
||||
|
||||
在 RHEL 和 CentOS 上:
|
||||
|
||||
```
|
||||
$ sudo yum install vim
|
||||
|
||||
```
|
||||
|
||||
在 Fedora 上:
|
||||
|
||||
```
|
||||
$ sudo dnf install vim
|
||||
|
||||
```
|
||||
|
||||
在 openSUSE 上:
|
||||
|
||||
```
|
||||
$ sudo zypper install vim
|
||||
|
||||
```
|
||||
|
||||
### 使用 Linux 的 Vim 编辑器同时编辑多个文件
|
||||
@ -44,7 +43,8 @@ $ sudo zypper install vim
|
||||
|
||||
#### 方法一
|
||||
|
||||
有两个文件,即 **file1.txt** 和 **file2.txt**,带有一堆随机单词:
|
||||
有两个文件,即 `file1.txt` 和 `file2.txt`,带有一堆随机单词:
|
||||
|
||||
```
|
||||
$ cat file1.txt
|
||||
ostechnix
|
||||
@ -59,53 +59,52 @@ line2
|
||||
line3
|
||||
line4
|
||||
line5
|
||||
|
||||
```
|
||||
|
||||
现在,让我们同时编辑这两个文件。请运行:
|
||||
|
||||
```
|
||||
$ vim file1.txt file2.txt
|
||||
|
||||
```
|
||||
|
||||
Vim 将按顺序显示文件的内容。首先显示第一个文件的内容,然后显示第二个文件,依此类推。
|
||||
|
||||
![][2]
|
||||
|
||||
**在文件中切换**
|
||||
##### 在文件中切换
|
||||
|
||||
要移至下一个文件,请键入:
|
||||
|
||||
```
|
||||
:n
|
||||
|
||||
```
|
||||
|
||||
![][3]
|
||||
|
||||
要返回到前一个文件,请键入:
|
||||
|
||||
```
|
||||
:N
|
||||
|
||||
```
|
||||
|
||||
如果有任何未保存的更改,Vim 将不允许您移动到下一个文件。要保存当前文件中的更改,请键入:
|
||||
|
||||
```
|
||||
ZZ
|
||||
|
||||
```
|
||||
|
||||
请注意,是两个大写字母 ZZ(SHIFT + zz)。
|
||||
请注意,是两个大写字母 `ZZ`(`SHIFT + zz`)。
|
||||
|
||||
要放弃更改并移至上一个文件,请键入:
|
||||
|
||||
```
|
||||
:N!
|
||||
|
||||
```
|
||||
|
||||
要查看当前正在编辑的文件,请键入:
|
||||
|
||||
```
|
||||
:buffers
|
||||
|
||||
```
|
||||
|
||||
![][4]
|
||||
@ -114,57 +113,59 @@ ZZ
|
||||
|
||||
![][5]
|
||||
|
||||
要切换到下一个文件,请输入 **:buffer**,后跟缓冲区编号。例如,要切换到第一个文件,请键入:
|
||||
要切换到下一个文件,请输入 `:buffer`,后跟缓冲区编号。例如,要切换到第一个文件,请键入:
|
||||
|
||||
```
|
||||
:buffer 1
|
||||
|
||||
```
|
||||
|
||||
![][6]
|
||||
|
||||
**打开其他文件进行编辑**
|
||||
##### 打开其他文件进行编辑
|
||||
|
||||
目前我们正在编辑两个文件,即 file1.txt 和 file2.txt。我想打开另一个名为 **file3.txt** 的文件进行编辑。
|
||||
目前我们正在编辑两个文件,即 `file1.txt` 和 `file2.txt`。我想打开另一个名为 `file3.txt` 的文件进行编辑。
|
||||
|
||||
您会怎么做?这很容易。只需键入 `:e`,然后输入如下所示的文件名即可:
|
||||
|
||||
您会怎么做?这很容易。只需键入 **:e**,然后输入如下所示的文件名即可:
|
||||
```
|
||||
:e file3.txt
|
||||
|
||||
```
|
||||
|
||||
![][7]
|
||||
|
||||
现在你可以编辑 file3.txt 了。
|
||||
现在你可以编辑 `file3.txt` 了。
|
||||
|
||||
要查看当前正在编辑的文件数量,请键入:
|
||||
|
||||
```
|
||||
:buffers
|
||||
|
||||
```
|
||||
|
||||
![][8]
|
||||
|
||||
请注意,对于使用 **:e** 打开的文件,您无法使用 **:n** 或 **:N** 进行切换。要切换到另一个文件,请输入 **:buffer**,然后输入文件缓冲区编号。
|
||||
请注意,对于使用 `:e` 打开的文件,您无法使用 `:n` 或 `:N` 进行切换。要切换到另一个文件,请输入 `:buffer`,然后输入文件缓冲区编号。
|
||||
|
||||
**将一个文件的内容复制到另一个文件中**
|
||||
##### 将一个文件的内容复制到另一个文件中
|
||||
|
||||
您已经知道了如何同时打开和编辑多个文件。有时,您可能想要将一个文件的内容复制到另一个文件中。这也是可以做到的。切换到您选择的文件,例如,假设您想将 file1.txt 的内容复制到 file2.txt 中:
|
||||
您已经知道了如何同时打开和编辑多个文件。有时,您可能想要将一个文件的内容复制到另一个文件中。这也是可以做到的。切换到您选择的文件,例如,假设您想将 `file1.txt` 的内容复制到 `file2.txt` 中:
|
||||
|
||||
|
||||
首先,请切换到 `file1.txt`:
|
||||
|
||||
首先,请切换到 file1.txt:
|
||||
```
|
||||
:buffer 1
|
||||
|
||||
```
|
||||
|
||||
将光标移动至在想要复制的行的前面,并键入**yy** 以抽出(复制)该行。然后,移至 file2.txt:
|
||||
将光标移动至想要复制的行的前面,并键入 `yy` 以抽出(复制)该行。然后,移至 `file2.txt`:
|
||||
|
||||
```
|
||||
:buffer 2
|
||||
|
||||
```
|
||||
|
||||
将光标移至要从 file1.txt 粘贴复制行的位置,然后键入 **p**。例如,您想要将复制的行粘贴到 line2 和 line3 之间,请将鼠标光标置于行前并键入 **p**。
|
||||
将光标移至要从 `file1.txt` 粘贴复制行的位置,然后键入 `p`。例如,您想要将复制的行粘贴到 `line2` 和 `line3` 之间,请将鼠标光标置于行前并键入 `p`。
|
||||
|
||||
输出示例:
|
||||
|
||||
```
|
||||
line1
|
||||
line2
|
||||
@ -172,54 +173,54 @@ ostechnix
|
||||
line3
|
||||
line4
|
||||
line5
|
||||
|
||||
```
|
||||
|
||||
![][9]
|
||||
|
||||
要保存当前文件中所做的更改,请键入:
|
||||
|
||||
```
|
||||
ZZ
|
||||
|
||||
```
|
||||
|
||||
再次提醒,是两个大写字母 ZZ(SHIFT + z)。
|
||||
再次提醒,是两个大写字母 `ZZ`(`SHIFT + zz`)。
|
||||
|
||||
保存所有文件的更改并退出 vim 编辑器,键入:
|
||||
|
||||
```
|
||||
:wq
|
||||
|
||||
```
|
||||
|
||||
同样,您可以将任何文件的任何行复制到其他文件中。
|
||||
|
||||
**将整个文件内容复制到另一个文件中**
|
||||
##### 将整个文件内容复制到另一个文件中
|
||||
|
||||
我们知道如何复制一行,那么整个文件的内容呢?也是可以的。比如说,您要将 file1.txt 的全部内容复制到 file2.txt 中。
|
||||
我们知道如何复制一行,那么整个文件的内容呢?也是可以的。比如说,您要将 `file1.txt` 的全部内容复制到 `file2.txt` 中。
|
||||
|
||||
先打开 `file2.txt`:
|
||||
|
||||
先打开 file2.txt:
|
||||
```
|
||||
$ vim file2.txt
|
||||
|
||||
```
|
||||
|
||||
如果文件已经加载,您可以通过输入以下命令切换到 file2.txt:
|
||||
如果文件已经加载,您可以通过输入以下命令切换到 `file2.txt`:
|
||||
|
||||
```
|
||||
:buffer 2
|
||||
|
||||
```
|
||||
|
||||
将光标移动到您想要粘贴 file1.txt 内容的位置。我想在 file2.txt 的第 5 行之后粘贴 file1.txt 的内容,所以我将光标移动到第 5 行。然后,键入以下命令并按回车键:
|
||||
将光标移动到您想要粘贴 `file1.txt` 的内容的位置。我想在 `file2.txt` 的第 5 行之后粘贴 `file1.txt` 的内容,所以我将光标移动到第 5 行。然后,键入以下命令并按回车键:
|
||||
|
||||
```
|
||||
:r file1.txt
|
||||
|
||||
```
|
||||
|
||||
![][10]
|
||||
|
||||
这里,**r** 代表 **read**。
|
||||
这里,`r` 代表 “read”。
|
||||
|
||||
现在您会看到 `file1.txt` 的内容被粘贴在 `file2.txt` 的第 5 行之后。
|
||||
|
||||
现在您会看到 file1.txt 的内容被粘贴在 file2.txt 的第 5 行之后。
|
||||
```
|
||||
line1
|
||||
line2
|
||||
@ -231,107 +232,103 @@ open source
|
||||
technology
|
||||
linux
|
||||
unix
|
||||
|
||||
```
|
||||
|
||||
![][11]
|
||||
|
||||
要保存当前文件中的更改,请键入:
|
||||
|
||||
```
|
||||
ZZ
|
||||
|
||||
```
|
||||
|
||||
要保存所有文件的所有更改并退出 vim 编辑器,请输入:
|
||||
|
||||
```
|
||||
:wq
|
||||
|
||||
```
|
||||
|
||||
#### 方法二
|
||||
|
||||
另一种同时打开多个文件的方法是使用 **-o** 或 **-O** 标志。
|
||||
另一种同时打开多个文件的方法是使用 `-o` 或 `-O` 标志。
|
||||
|
||||
要在水平窗口中打开多个文件,请运行:
|
||||
|
||||
```
|
||||
$ vim -o file1.txt file2.txt
|
||||
|
||||
```
|
||||
|
||||
![][12]
|
||||
|
||||
要在窗口之间切换,请按 **CTRL-w w**(即按 **CTRL + w** 并再次按 **w**)。或者,您可以使用以下快捷方式在窗口之间移动:
|
||||
|
||||
* **CTRL-w k** – 上面的窗口
|
||||
* **CTRL-w j** – 下面的窗口
|
||||
|
||||
要在窗口之间切换,请按 `CTRL-w w`(即按 `CTRL + w` 并再次按 `w`)。或者,您可以使用以下快捷方式在窗口之间移动:
|
||||
|
||||
* `CTRL-w k` – 上面的窗口
|
||||
* `CTRL-w j` – 下面的窗口
|
||||
|
||||
要在垂直窗口中打开多个文件,请运行:
|
||||
|
||||
```
|
||||
$ vim -O file1.txt file2.txt file3.txt
|
||||
|
||||
```
|
||||
|
||||
![][13]
|
||||
|
||||
要在窗口之间切换,请按 **CTRL-w w**(即按 **CTRL + w** 并再次按 **w**)。或者,使用以下快捷方式在窗口之间移动:
|
||||
|
||||
* **CTRL-w l** – 左面的窗口
|
||||
* **CTRL-w h** – 右面的窗口
|
||||
|
||||
要在窗口之间切换,请按 `CTRL-w w`(即按 `CTRL + w` 并再次按 `w`)。或者,使用以下快捷方式在窗口之间移动:
|
||||
|
||||
* `CTRL-w l` – 右面的窗口
|
||||
* `CTRL-w h` – 左面的窗口
|
||||
|
||||
其他的一切都与方法一的描述相同。
|
||||
|
||||
例如,要列出当前加载的文件,请运行:
|
||||
|
||||
```
|
||||
:buffers
|
||||
|
||||
```
|
||||
|
||||
在文件之间切换:
|
||||
|
||||
```
|
||||
:buffer 1
|
||||
|
||||
```
|
||||
|
||||
打开其他文件,请键入:
|
||||
|
||||
```
|
||||
:e file3.txt
|
||||
|
||||
```
|
||||
|
||||
将文件的全部内容复制到另一个文件中:
|
||||
|
||||
```
|
||||
:r file1.txt
|
||||
|
||||
```
|
||||
|
||||
方法二的唯一区别是,只要您使用 **ZZ** 保存对当前文件的更改,文件将自动关闭。然后,您需要依次键入 **:wq ** 来关闭文件。但是,如果您按照方法一进行操作,输入 **:wq** 时,所有更改将保存在所有文件中,并且所有文件将立即关闭。
|
||||
方法二的唯一区别是,只要您使用 `ZZ` 保存对当前文件的更改,文件将自动关闭。然后,您需要依次键入 `:wq` 来关闭文件。但是,如果您按照方法一进行操作,输入 `:wq` 时,所有更改将保存在所有文件中,并且所有文件将立即关闭。
|
||||
|
||||
有关更多详细信息,请参阅手册页。
|
||||
|
||||
```
|
||||
$ man vim
|
||||
|
||||
```
|
||||
|
||||
|
||||
**建议阅读:**
|
||||
### 建议阅读
|
||||
|
||||
您现在掌握了如何在 Linux 中使用 vim 编辑器编辑多个文件。正如您所见,编辑多个文件并不难。Vim 编辑器还有更强大的功能。我们接下来会提供更多关于 Vim 编辑器的内容。
|
||||
|
||||
再见!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-edit-multiple-files-using-vim-editor/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[jessie-pang](https://github.com/jessie-pang)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,53 +1,52 @@
|
||||
一个用于追踪工作时间的命令行生产力工具
|
||||
moro:一个用于追踪工作时间的命令行生产力工具
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/03/Moro-720x340.jpg)
|
||||
保持对你的工作小时数的追踪将让你知晓在一个特定时间区间内你所完成的工作总量。在网络上有大量的基于GUI的生产力工具可以用来追踪工作小时数。但我却不能找到一个基于CLI的工具。今天我偶然发现了一个简单而奏效的叫做 **“Moro”** 的追踪工作时间数的工具。Moro是一个芬兰词汇,意为"Hello"。通过使用Moro,你可以找到你在完成某项特定任务时花费了多少时间。这个工具是免费且开源的,它是通过**NodeJS**编写的。
|
||||
|
||||
保持对你的工作小时数的追踪将让你知晓在一个特定时间区间内你所完成的工作总量。在网络上有大量的基于 GUI 的生产力工具可以用来追踪工作小时数。但我却不能找到一个基于 CLI 的工具。今天我偶然发现了一个简单而奏效的叫做 Moro 的追踪工作时间数的工具。Moro 是一个芬兰词汇,意为“Hello”。通过使用 Moro,你可以找到你在完成某项特定任务时花费了多少时间。这个工具是自由开源软件,它是通过 NodeJS 编写的。
|
||||
|
||||
### Moro - 一个追踪工作时间的命令行生产力工具
|
||||
|
||||
由于Moro是使用NodeJS编写的,保证你的系统上已经安装了(NodeJS)。如果你没有安装好NodeJS,跟随下面的链接在你的Linux中安装NodeJS和NPM。
|
||||
由于 Moro 是使用 NodeJS 编写的,保证你的系统上已经安装了 NodeJS。如果你没有安装好 NodeJS,跟随下面的链接在你的 Linux 中安装 NodeJS 和 NPM。
|
||||
|
||||
- [如何在 Linux 上安装 NodeJS](https://www.ostechnix.com/install-node-js-linux/)
|
||||
|
||||
NodeJS 和 NPM 一旦装好,运行下面的命令来安装 Moro:
|
||||
|
||||
NodeJS和NPM一旦装好,运行下面的命令来安装Moro。
|
||||
```
|
||||
$ npm install -g moro
|
||||
|
||||
```
|
||||
|
||||
### 用法
|
||||
|
||||
Moro的工作概念非常简单。它记录了你的工作开始时间,结束时间和在你的系统上的休息时间。在每天结束时,它将会告知你已经工作了多少时间。
|
||||
Moro 的工作概念非常简单。它会在你的系统中记录你开始工作的时间、结束工作的时间和中间的休息时间。在每天结束时,它将会告知你已经工作了多少时间。
|
||||
|
||||
当你到达办公室时,只需键入:
|
||||
|
||||
```
|
||||
$ moro
|
||||
|
||||
```
|
||||
|
||||
示例输出:
|
||||
```
|
||||
💙 Moro \o/
|
||||
|
||||
✔ You clocked in at: 9:20
|
||||
|
||||
♥ Moro \o/
|
||||
√ You clocked in at: 9:20
|
||||
```
|
||||
|
||||
Moro将会把这个时间注册为你的开始时间。
|
||||
Moro 将会把这个时间注册为你的开始时间。
|
||||
|
||||
当你离开办公室时,再次键入:
|
||||
|
||||
```
|
||||
$ moro
|
||||
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
💙 Moro \o/
|
||||
|
||||
✔ You clocked out at: 19:22
|
||||
|
||||
♥ Moro \o/
|
||||
√ You clocked out at: 19:22
|
||||
ℹ Today looks like this so far:
|
||||
|
||||
┌──────────────────┬─────────────────────────┐
|
||||
│ Today you worked │ 9 Hours and 72 Minutes │
|
||||
├──────────────────┼─────────────────────────┤
|
||||
@ -60,43 +59,37 @@ $ moro
|
||||
│ Date │ 2018-03-19 │
|
||||
└──────────────────┴─────────────────────────┘
|
||||
ℹ Run moro --help to learn how to edit your clock in, clock out or break duration for today
|
||||
|
||||
```
|
||||
|
||||
Moro将会把这个时间注册为你的结束时间。
|
||||
Moro 将会把这个时间注册为你的结束时间。
|
||||
|
||||
现在,Moro将会从结束时间减去开始时间然后从总的时间减去另外的30分钟作为休息时间并给你在那天总的工作时间。抱歉,我的数学计算过程解释实在糟糕。假设你在早上10:00来工作并在晚上17:30离开。所以,你总共在办公室呆了7.30小时(例如17.30-10)。然后在总的时间减去休息时间(默认是30分钟)。因此,你的总工作时间是7小时。明白了?很好!
|
||||
现在,Moro 将会用结束时间减去开始时间,再从中减去 30 分钟的休息时间,得出你那天的总工作时间。抱歉,我的数学解释得实在糟糕,举个例子:假设你早上 10:00 来上班,晚上 17:30 离开,那么你总共在办公室呆了 7 小时 30 分钟(即 17:30 减去 10:00);再减去休息时间(默认是 30 分钟),你的总工作时间就是 7 小时。明白了?很好!
|
||||
|
||||
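把上面的计算写成算式就是(示意):

```
17:30 - 10:00 = 7 小时 30 分钟
7 小时 30 分钟 - 30 分钟(默认休息时间) = 7 小时
```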
**注意:**不要像我在写这个手册的时候一样把“moro”和“more”弄混了。
|
||||
**注意:**不要像我在写这个手册的时候一样把 “moro” 和 “more” 弄混了。
|
||||
|
||||
查看你注册的所有小时数,运行:
|
||||
|
||||
```
|
||||
$ moro report --all
|
||||
|
||||
```
|
||||
|
||||
以防万一,如果你忘记注册开始时间或者结束时间,你一样可以在之后指定这些值。
|
||||
|
||||
例如,将上午10点注册为开始时间,运行:
|
||||
例如,将上午 10 点注册为开始时间,运行:
|
||||
|
||||
```
|
||||
$ moro hi 10:00
|
||||
|
||||
💙 Moro \o/
|
||||
|
||||
✔ You clocked in at: 10:00
|
||||
|
||||
♥ Moro \o/
|
||||
√ You clocked in at: 10:00
|
||||
⏰ Working until 18:00 will make it a full (7.5 hours) day
|
||||
|
||||
```
|
||||
|
||||
注册17:30作为结束时间:
|
||||
注册 17:30 作为结束时间:
|
||||
|
||||
```
|
||||
$ moro bye 17:30
|
||||
|
||||
💙 Moro \o/
|
||||
|
||||
✔ You clocked out at: 17:30
|
||||
|
||||
♥ Moro \o/
|
||||
√ You clocked out at: 17:30
|
||||
ℹ Today looks like this so far:
|
||||
|
||||
┌──────────────────┬───────────────────────┐
|
||||
@ -111,79 +104,75 @@ $ moro bye 17:30
|
||||
│ Date │ 2018-03-19 │
|
||||
└──────────────────┴───────────────────────┘
|
||||
ℹ Run moro --help to learn how to edit your clock in, clock out or break duration for today
|
||||
|
||||
```
|
||||
|
||||
你已经知道Moro默认将会减去30分钟的休息时间。如果你需要设置一个自定义的休息时间,你可以简单使用以下命令:
|
||||
你已经知道 Moro 默认将会减去 30 分钟的休息时间。如果你需要设置一个自定义的休息时间,你可以简单使用以下命令:
|
||||
|
||||
```
|
||||
$ moro break 45
|
||||
|
||||
```
|
||||
|
||||
现在,休息时间是45分钟了。
|
||||
现在,休息时间是 45 分钟了。
|
||||
|
||||
若要清除所有的数据:
|
||||
|
||||
```
|
||||
$ moro clear --yes
|
||||
|
||||
💙 Moro \o/
|
||||
|
||||
✔ Database file deleted successfully
|
||||
|
||||
♥ Moro \o/
|
||||
√ Database file deleted successfully
|
||||
```
|
||||
|
||||
**添加笔记**
|
||||
#### 添加笔记
|
||||
|
||||
有时候,你想要在工作时添加笔记。不必去寻找一个独立的作笔记的应用。Moro 将会帮助你添加笔记。要添加笔记,只需运行:
|
||||
|
||||
有时候,你想要在工作时添加笔记。不必去寻找一个独立的作笔记的应用。Moro将会帮助你添加笔记。要添加笔记,只需运行:
|
||||
```
|
||||
$ moro note mynotes
|
||||
|
||||
```
|
||||
|
||||
要在之后搜索所有已经注册的笔记,只需做:
|
||||
|
||||
```
|
||||
$ moro search mynotes
|
||||
|
||||
```
|
||||
|
||||
**修改默认设置**
|
||||
#### 修改默认设置
|
||||
|
||||
默认的完整工作时间是7.5小时。这是因为开发者来自芬兰,这是官方的工作小时数。但是你也可以修改这个设置为你的国家的工作小时数。
|
||||
默认的完整工作时间是 7.5 小时。这是因为开发者来自芬兰,这是官方的工作小时数。但是你也可以修改这个设置为你的国家的工作小时数。
|
||||
|
||||
举个例子,要将其设置为 7 小时,运行:
|
||||
|
||||
举个例子,要将其设置为7小时,运行:
|
||||
```
|
||||
$ moro config --day 7
|
||||
|
||||
```
|
||||
|
||||
同样地,默认的休息时间也可以像下面这样从30分钟修改:
|
||||
同样地,默认的休息时间也可以像下面这样从 30 分钟修改:
|
||||
|
||||
```
|
||||
$ moro config --break 45
|
||||
|
||||
```
|
||||
|
||||
**备份你的数据**
|
||||
#### 备份你的数据
|
||||
|
||||
正如我已经说了的,Moro将时间追踪信息存储在你的家目录,文件名是**.moro-data.db**。
|
||||
正如我已经说了的,Moro 将时间追踪信息存储在你的家目录,文件名是 `.moro-data.db`。
|
||||
|
||||
但是,你可以保存备份数据库到不同的位置。要这样做的话,像下面这样将**.moro-data.db**文件移到你选择的一个不同的位置并告知Moro使用那个数据库文件。
|
||||
但是,你可以保存备份数据库到不同的位置。要这样做的话,像下面这样将 `.moro-data.db` 文件移到你选择的一个不同的位置并告知 Moro 使用那个数据库文件。
|
||||
|
||||
```
|
||||
$ moro config --database-path /home/sk/personal/moro-data.db
|
||||
|
||||
```
|
||||
|
||||
在上面的每一个命令,我都已经把默认的数据库文件分配到了**/home/sk/personal**目录。
|
||||
在上面的每一个命令,我都已经把默认的数据库文件分配到了 `/home/sk/personal` 目录。
|
||||
|
||||
需要帮助的话,运行:
|
||||
|
||||
```
|
||||
$ moro --help
|
||||
|
||||
```
|
||||
|
||||
正如你所见,Moro是非常简单而又能用于追踪你完成你的工作使用了多少时间的。对于自由职业者和任何想要在一定时间范围内完成事情的人,它将会是有用的。
|
||||
正如你所见,Moro 非常简单,却能有效地追踪你在工作上花费了多少时间。对于自由职业者和任何想要在一定时间范围内完成事情的人来说,它会很有用。
|
||||
|
||||
并且,这些只是今天的(内容)。希望这些(内容)能够有所帮助。更多的好东西将会出现。请保持关注!
|
||||
并且,这些只是今天的内容。希望这些内容能够有所帮助。更多的好东西将会出现。请保持关注!
|
||||
|
||||
干杯!
|
||||
|
||||
@ -194,7 +183,7 @@ via: https://www.ostechnix.com/moro-a-command-line-productivity-tool-for-trackin
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[leemeans](https://github.com/leemeans)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,8 +1,9 @@
|
||||
Dry – 一个命令行交互式 Docker 容器管理器
|
||||
Dry:一个命令行交互式 Docker 容器管理器
|
||||
======
|
||||
Docker 是一种实现操作系统级别虚拟化或容器化的软件。
|
||||
|
||||
基于 Linux 内核的 cgroups 和 namespaces 等资源隔离特性,Docker 可以在单个 Linux 实例中运行多个独立的容器。
|
||||
Docker 是一种所谓容器化的操作系统级的虚拟化软件。
|
||||
|
||||
基于 Linux 内核的 cgroup 和 namespace 等资源隔离特性,Docker 可以在单个 Linux 实例中运行多个独立的容器。
|
||||
|
||||
通过将应用依赖和相关库打包进容器,Docker 使得应用可以在容器中安全隔离地运行。
|
||||
|
||||
@ -10,25 +11,26 @@ Docker 是一种实现操作系统级别虚拟化或容器化的软件。
|
||||
|
||||
[Dry][1] 是一个管理并监控 Docker 容器和镜像的命令行工具。
|
||||
|
||||
Dry 给出容器相关的信息,包括对应镜像、容器名称、网络、容器中运行的命令及容器状态;如果运行在 Docker Swarm 中,工具还会给出 Swarm 集群的各种状态信息。
|
||||
Dry 可以给出容器相关的信息,包括对应镜像、容器名称、网络、容器中运行的命令及容器状态;如果运行在 Docker Swarm 中,工具还会给出 Swarm 集群的各种状态信息。
|
||||
|
||||
Dry 可以连接至本地或远程的 Docker 守护进程。如果连接本地 Docker,Docker 主机显示为`unix:///var/run/docker.sock`。
|
||||
Dry 可以连接至本地或远程的 Docker 守护进程。如果连接本地 Docker,Docker 主机显示为 `unix:///var/run/docker.sock`。
|
||||
|
||||
如果连接远程 Docker,Docker 主机显示为 `tcp://IP Address:Port Number` 或 `tcp://Host Name:Port Number`。
|
||||
|
||||
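下面是一个连接远程 Docker 守护进程的示意(这里假设 Dry 沿用 Docker 客户端通用的 `DOCKER_HOST` 环境变量,IP 和端口仅为示例):

```
$ DOCKER_HOST=tcp://192.168.1.100:2375 dry
```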
Dry 可以提供类似 `docker ps` 的指标输出,但输出比 “docker ps” 内容详实、富有色彩。
|
||||
Dry 可以提供类似 `docker ps` 的指标输出,但输出比 `docker ps` 内容详实、富有色彩。
|
||||
|
||||
相比 Docker,Dry 还可以手动添加一个额外的名称列,用于降低记忆难度。
|
||||
|
||||
***推荐阅读:**
|
||||
**推荐阅读:**
|
||||
|
||||
**(#)** [Portainer – 用于 Docker 管理的简明 GUI][2]
|
||||
**(#)** [Rancher – 适用于生产环境的完备容器管理平台][3]
|
||||
**(#)** [cTop – Linux环境下容器管理与监控的命令行工具][4]
|
||||
- [Portainer – 用于 Docker 管理的简明 GUI][2]
|
||||
- [Rancher – 适用于生产环境的完备容器管理平台][3]
|
||||
- [cTop – Linux环境下容器管理与监控的命令行工具][4]
|
||||
|
||||
### 如何在 Linux 中安装 Dry
|
||||
|
||||
在 Linux 中,可以通过一个简单的 shell 脚本安装最新版本的 dry 工具。Dry 不依赖外部库。对于绝大多数的 Docker 命令,dry 提供类似样式的命令。
|
||||
在 Linux 中,可以通过一个简单的 shell 脚本安装最新版本的 Dry 工具。Dry 不依赖外部库。对于绝大多数的 Docker 命令,Dry 提供类似样式的命令。
|
||||
|
||||
```
|
||||
$ curl -sSf https://moncho.github.io/dry/dryup.sh | sudo sh
|
||||
% Total % Received % Xferd Average Speed Time Time Time Current
|
||||
@ -38,41 +40,40 @@ dryup: downloading dry binary
|
||||
######################################################################## 100.0%
|
||||
dryup: Moving dry binary to its destination
|
||||
dryup: dry binary was copied to /usr/local/bin, now you should 'sudo chmod 755 /usr/local/bin/dry'
|
||||
|
||||
```
|
||||
|
||||
使用如下命令将文件权限变更为 `755`
|
||||
使用如下命令将文件权限变更为 `755`:
|
||||
|
||||
```
|
||||
$ sudo chmod 755 /usr/local/bin/dry
|
||||
|
||||
```
|
||||
|
||||
对于使用 Arch Linux 的用户,可以使用 **[Packer][5]** or **[Yaourt][6]** 包管理器,从 AUR 源安装该工具。
|
||||
对于使用 Arch Linux 的用户,可以使用 **[Packer][5]** 或 **[Yaourt][6]** 包管理器,从 AUR 源安装该工具。
|
||||
```
|
||||
$ yaourt -S dry-bin
|
||||
或者
|
||||
$ packer -S dry-bin
|
||||
|
||||
```
|
||||
|
||||
如果希望在 Docker 容器中运行 dry,可以运行如下命令。前提条件是已确认在操作系统中安装了 Docker。
|
||||
|
||||
**推荐阅读:**
|
||||
**(#)** [如何在 Linux 中安装 Docker][7]
|
||||
**(#)** [如何在 Linux 中玩转 Docker 镜像][8]
|
||||
**(#)** [如何在 Linux 中玩转 Docker 容器][9]
|
||||
**(#)** [如何在 Docker 容器中安装并运行应用程序][10]
|
||||
|
||||
- [如何在 Linux 中安装 Docker][7]
|
||||
- [如何在 Linux 中玩转 Docker 镜像][8]
|
||||
- [如何在 Linux 中玩转 Docker 容器][9]
|
||||
- [如何在 Docker 容器中安装并运行应用程序][10]
|
||||
|
||||
```
|
||||
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock moncho/dry
|
||||
|
||||
```
|
||||
|
||||
### 如何启动并运行 Dry
|
||||
|
||||
在控制台运行 `dry` 命令即可启动该工具,其默认输出如下:
|
||||
|
||||
```
|
||||
$ dry
|
||||
|
||||
```
|
||||
|
||||
![][12]
|
||||
@ -80,18 +81,20 @@ $ dry
|
||||
### 如何使用 Dry 监控 Docker
|
||||
|
||||
你可以在 dry 的界面中按下 `m` 键打开监控模式。
|
||||
|
||||
![][13]
|
||||
|
||||
### 如何使用 Dry 管理容器
|
||||
|
||||
在选中的容器上单击 `Enter` 键,即可管理容器。Dry 提供如下操作:查看日志,查看、杀死、删除容器,停止、启动、重启容器,查看容器状态及镜像历史记录等。
|
||||
在选中的容器上单击回车键,即可管理容器。Dry 提供如下操作:查看日志,查看、杀死、删除容器,停止、启动、重启容器,查看容器状态及镜像历史记录等。
|
||||
|
||||
![][14]
|
||||
|
||||
### 如何监控容器资源利用率
|
||||
|
||||
用户可以使用 `Stats+Top` 选项查看指定容器的资源利用率。
|
||||
|
||||
该操作需要在容器管理界面完成(在上一步的基础上,点击 `Stats+Top` 选项)。另外,也可以按下 `s` 打开容器资源利用率界面。
|
||||
该操作需要在容器管理界面完成(在上一步的基础上,点击 `Stats+Top` 选项)。另外,也可以按下 `s` 打开容器资源利用率界面。
|
||||
|
||||
![][15]
|
||||
|
||||
@ -100,35 +103,39 @@ $ dry
|
||||
可以使用 `F8` 键查看容器、镜像及本地卷的磁盘使用情况。
|
||||
|
||||
该界面明确地给出容器、镜像和卷的总数,哪些处于使用状态,以及整体磁盘使用情况、可回收空间大小的详细信息。
|
||||
|
||||
![][16]
|
||||
|
||||
### 如何查看已下载的镜像
|
||||
|
||||
按下 `2` 键即可列出全部的已下载镜像。
|
||||
|
||||
![][17]
|
||||
|
||||
### 如何查看网络列表
|
||||
|
||||
按下 `3` 键即可查看全部网络及网关。
|
||||
|
||||
![][18]
|
||||
|
||||
### 如何查看全部 Docker 容器
|
||||
|
||||
按下 `F2` 键即可列出全部容器,包括运行中和已关闭的容器。
|
||||
|
||||
![][19]
|
||||
|
||||
### Dry 快捷键
|
||||
|
||||
查看帮助页面或 [dry github][1] 即可查看全部快捷键。
|
||||
查看帮助页面或 [dry GitHub][1] 即可查看全部快捷键。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/dry-an-interactive-cli-manager-for-docker-containers/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
译者:[pinewall](https://github.com/pinewall)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[pinewall](https://github.com/pinewall)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,4 +1,4 @@
|
||||
我们可以在同一个虚拟机中运行 Python 2 和 Python 3 代码而不需要更改代码吗?
|
||||
我们可以在同一个虚拟机中运行 Python 2 和 3 代码而不需要更改代码吗?
|
||||
=====
|
||||
|
||||
从理论上来说,可以。Zed Shaw 说过一句著名的话,如果不行,那么 Python 3 一定不是图灵完备的。但在实践中,这是不现实的,我将通过给你们举几个例子来说明原因。
|
||||
@ -6,9 +6,8 @@
|
||||
### 对于字典(dict)来说,这意味着什么?
|
||||
|
||||
让我们来想象一台拥有 Python 6 的虚拟机,它可以读取 Python 3.6 编写的 `module3.py`。但是在这个模块中,它可以导入 Python 2.7 编写的 `module2.py`,并成功使用它,没有问题。这显然是实验代码,但假设 `module2.py` 包含以下的功能:
|
||||
|
||||
```
|
||||
|
||||
|
||||
def update_config_from_dict(config_dict):
|
||||
items = config_dict.items()
|
||||
while items:
|
||||
@ -28,14 +27,13 @@ def update_in_place(config_dict):
|
||||
del config_dict[k]
|
||||
elif new_value != v:
|
||||
config_dict[k] = v
|
||||
|
||||
```
|
||||
|
||||
现在,当我们想从 `module3` 中调用这些函数时,我们遇到了一个问题:Python 3.6 中的字典类型与 Python 2.7 中的字典类型不同。在 Python 2 中,dicts 是无序的,它们的 `.keys()`, `.values()`, `.items()` 方法返回了正确的序列,这意味着调用 `.items()` 会在字典中创建状态的副本。在 Python 3 中,这些方法返回字典当前状态的动态视图。
|
||||
现在,当我们想从 `module3` 中调用这些函数时,我们遇到了一个问题:Python 3.6 中的字典类型与 Python 2.7 中的字典类型不同。在 Python 2 中,字典是无序的,它们的 `.keys()`, `.values()`, `.items()` 方法返回了正确的序列,这意味着调用 `.items()` 会在字典中创建状态的副本。在 Python 3 中,这些方法返回字典当前状态的动态视图。
|
||||
|
||||
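如果系统里同时装有两个解释器,可以用下面的小例子快速验证这一差异(示意):

```
$ python2 -c "d = {'a': 1}; print(type(d.items()))"
<type 'list'>
$ python3 -c "d = {'a': 1}; print(type(d.items()))"
<class 'dict_items'>
```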
这意味着如果 `module3` 调用 `module2.update_config_from_dict(some_dictionary)`,它将无法运行,因为 Python 3 中 `dict.items()` 返回的值不是一个列表,并且没有 `.pop()` 方法。反过来也是如此。如果 `module3` 调用 `module2.config_to_dict()`,它可能会返回一个 Python 2 的字典。现在调用 `.items()` 突然返回一个列表,所以这段代码无法正常工作(这对 Python 3 字典来说工作正常):
|
||||
```
|
||||
|
||||
```
|
||||
def main(cmdline_options):
|
||||
d = module2.config_to_dict()
|
||||
items = d.items()
|
||||
@ -45,7 +43,6 @@ def main(cmdline_options):
|
||||
d[k] = v
|
||||
for k, v in items:
|
||||
print(f'Config with cmdline overrides: {k}={v}')
|
||||
|
||||
```
|
||||
|
||||
最后,使用 `module2.update_in_place()` 会失败,因为 Python 3 中 `.items()` 的值现在不允许在迭代过程中改变。
|
||||
@ -54,7 +51,7 @@ def main(cmdline_options):
|
||||
|
||||
### Python 应该神奇地知道类型并会自动转换!
|
||||
|
||||
为什么拥有 Python 6 的虚拟机无法识别 Python 3 的代码,在 Python 2 中调用 `some_dict.keys()` 时,我们还有别的意思吗?好吧,Python 不知道代码的作者在编写代码时,她所认为的 `some_dict` 应该是什么。代码中没有任何内容表明它是否是一个字典。在 Python 2 中没有类型注释,因为它们是可选的,即使在 Python 3 中,大多数代码也不会使用它们。
|
||||
为什么我们的 Python 6 的虚拟机无法识别 Python 3 的代码,在 Python 2 中调用 `some_dict.keys()` 时,我们还有别的意思吗?好吧,Python 不知道代码的作者在编写代码时,她所认为的 `some_dict` 应该是什么。代码中没有任何内容表明它是否是一个字典。在 Python 2 中没有类型注释,因为它们是可选的,即使在 Python 3 中,大多数代码也不会使用它们。
|
||||
|
||||
在运行时,当你调用 `some_dict.keys()` 的时候,Python 只是简单地在对象上查找一个属性,该属性恰好隐藏在 `some_dict` 名下,并试图在该属性上运行 `__call__()`。这里有一些关于方法绑定,描述符,slots 等技术问题,但这是它的核心。我们称这种行为为“鸭子类型”。
|
||||
|
||||
@ -62,33 +59,33 @@ def main(cmdline_options):
|
||||
|
||||
### 好的,让我们在运行时做出这个决定
|
||||
|
||||
Python 6 的虚拟机可以通过标记每个属性查找来实现这一点,信息是“来自 py2 的调用”或“来自 py3 的调用”,并使对象发送正确的属性。这会让事情变得很慢,并且使用更多的内存。这将要求我们使用用户代码使用的代理将两种版本的给定类型保留在内存中。我们需要将这些对象的状态同步到用户背后,使工作加倍。毕竟,新字典的内存表示与 Python 2 不同。
|
||||
Python 6 的虚拟机可以标记每个属性,通过查找“来自 py2 的调用”或“来自 py3 的调用”的信息来实现这一点,并使对象发送正确的属性。这会让它变得很慢,并且使用更多的内存。这将要求我们在内存中保留两种版本的代码,并通过代理来使用它们。我们需要加倍付出努力,在用户背后同步这些对象的状态。毕竟,新字典的内存表示与 Python 2 不同。
|
||||
|
||||
如果你在思考字典问题,考虑 Python 3 中的 Unicode 字符串和 Python 2 中的字节(byte)字符串的所有问题。
|
||||
如果你已经被字典问题绕晕了,那么再想想 Python 3 中的 Unicode 字符串和 Python 2 中的字节(byte)字符串的各种问题吧。
|
||||
|
||||
### 一切都会丢失吗?Python 3 不能运行旧代码?
|
||||
### 没有办法了吗?Python 3 根本就不能运行旧代码吗?
|
||||
|
||||
一切都不会丢失。每天都会有项目移植到 Python 3。将 Python 2 代码移植到两个版本的 Python 上推荐方法是在代码上运行 [Python-Modernize][1]。它会捕获那些在 Python 3 上不起作用的代码,并使用 [six][2] 库将其替换,以便它在 Python 2 和 Python 3 上运行。这是对 `2to3` 的改编,它正在生成 Python 3-only 代码。`Modernize` 是首选,因为它提供了更多的增量迁移路线。所有的这些在 Python 文档中的 [Porting Python 2 Code to Python 3][3]文档中都有很好的概述。
|
||||
不会。每天都会有项目移植到 Python 3。将 Python 2 代码移植到两个版本的 Python 上的推荐方法,是在你的代码上运行 [Python-Modernize][1](见下面的示意)。它会捕获那些在 Python 3 上不起作用的代码,并使用 [six][2] 库将其替换,以便它在 Python 2 和 Python 3 上运行。它是 `2to3` 的一个改编版本,而 `2to3` 生成的是只支持 Python 3 的代码。`Modernize` 是首选,因为它提供了更多的增量迁移路线。所有的这些在 Python 文档的 [Porting Python 2 Code to Python 3][3] 一文中都有很好的概述。
|
||||
|
||||
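一个可能的用法示意(包名与入口命令请以该项目的文档为准;默认只打印建议的改动,不会改写文件):

```
$ pip install modernize
$ python-modernize mymodule.py
```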
但是,等一等,你不是说 Python 6 的虚拟机不能自动执行此操作吗?对。`Modernize` 查看你的代码,并试图猜测哪些是安全的。它会做出一些不必要的改变,还会错过其他必要的改变。但是,它不会帮助你处理字符串。如果你的代码没有保留“来自外部的二进制数据”和“流程中的文本数据”之间的界限,那么这种转换并非微不足道。
|
||||
但是,等一等,你不是说 Python 6 的虚拟机不能自动执行此操作吗?对。`Modernize` 查看你的代码,并试图猜测哪些是安全的。它会做出一些不必要的改变,还会错过其他必要的改变。但是,它不会帮助你处理字符串。如果你的代码没有在“来自外部的二进制数据”和“流程中的文本数据”之间保持界限,那么这种转换就不会那么轻易。
|
||||
|
||||
因此,迁移大项目不能自动完成,并且涉及人类进行测试,发现问题并修复它们。它工作吗?是的,我曾帮助[将一百万行代码迁移到 Python 3][4],并且交换没有造成事故。这一举措重获了我们服务器内存的 1/3,并使代码运行速度提高了 12%。那是在 Python 3.5 上,但是 Python 3.6 的速度要快得多,根据你的工作量,你甚至可以达到 [4 倍加速][5]。
|
||||
因此,大项目的迁移不能自动完成,并且需要人类进行测试,发现问题并修复它们。它工作吗?是的,我曾帮助[将一百万行代码迁移到 Python 3][4],并且这种切换没有造成事故。这一举措让我们重新获得了 1/3 的服务器内存,并使代码运行速度提高了 12%。那是在 Python 3.5 上,但是 Python 3.6 的速度要快得多,根据你的工作量,你甚至可以达到 [4 倍加速][5]。
|
||||
|
||||
### 亲爱的 Zed
|
||||
|
||||
hi,伙计,我关注你已经超过 10 年了。我一直在观察,当你感到沮丧的时候,你对 Mongrel 没有任何信任,尽管 Rails 生态系统几乎全部都在上面运行。当你重新设计它并开始 Mongrel 2 项目时,我一直在观察。我一直在关注你使用 Fossil 这一令人惊讶的举动。随着你发布 “Rails 是一个贫民窟”的帖子,我看到你突然离开了 Ruby 社区。当你开始编写 “Learn Python The Hard Way” 并且开始推荐它时,我感到非常兴奋。2013 年我在 [DjangoCon Europe][6] 见过你,我们谈了很多关于绘画,唱歌和倦怠的内容。关于[这张你的照片][7]是我在 Instagram 上的第一篇文章。
|
||||
hi,伙计,我关注你已经超过 10 年了。我看到你因为 Mongrel 几乎得不到任何认可而感到沮丧,尽管整个 Rails 生态系统几乎都运行在它上面。当你重新设计它并开始 Mongrel 2 项目时,我一直在观察。我一直在关注你使用 Fossil 这一令人惊讶的举动。随着你发布 “Rails 是一个贫民窟”的帖子,我看到你突然离开了 Ruby 社区。当你开始编写《笨方法学 Python》并且开始推荐它时,我感到非常兴奋。2013 年我在 [DjangoCon Europe][6] 见过你,我们谈了很多关于绘画、唱歌和倦怠的内容。[你的这张照片][7]是我在 Instagram 上的第一个帖子。
|
||||
|
||||
你几乎把另一个“贫民区”的行动与 [“反对 Python 3” 案例][8] 文章拉到一起。我认为你本意是好的,但是这篇文章引起了很多混乱,包括许多人认为你认为 Python 3 不是图灵完整的。我花了好几个小时让人们相信,你是在开玩笑。但是,鉴于你对 Python 学习方式的重大贡献,我认为这是值得的。特别是你为 Python 3 更新了你的书。感谢你做这件事。如果我们社区中真的有人要求因你的帖子为由将你和你的书列入黑名单,把他们请出去。这是一个双输的局面,这是错误的。
|
||||
你几乎把另一个“贫民区”的行动与 [“反对 Python 3” 案例][8] 文章拉到一起。我认为你本意是好的,但是这篇文章引起了很多混淆,包括许多人觉得你认为 Python 3 不是图灵完备的。我花了好几个小时让人们相信,你是在开玩笑。但是,鉴于你对《笨方法学 Python》的重大贡献,我认为这是值得的。特别是你为 Python 3 更新了你的书。感谢你做这件事。如果我们社区中真的有人以你的帖子为由,要求将你和你的书列入黑名单,请把他们请出去。这是一个双输的局面,这是错误的。
|
||||
|
||||
在记录中,没有一个核心 Python 开发人员认为 Python 2 到 Python 3 的转换过程会顺利而且计划得当,[包括 Guido van Rossum][9]。真的,可以看那个视频,这有点事后诸葛亮的意思了。从这个意义上说,我们实际上是积极地相互认同的。如果我们再做一次,它会看起来不一样。但在这一点上,[在 2020 年 1 月 1 日,Python 2 将会到达终结][10]。大多数第三方库已经支持 Python 3,甚至开始发布只支持 Python 3 版本(参见[Django][11]或 [科学项目关于 Python 3 的声明][12])。
|
||||
说实话,没有一个核心 Python 开发人员认为 Python 2 到 Python 3 的转换过程会顺利而且计划得当,[包括 Guido van Rossum][9]。真的,可以看那个视频,这有点事后诸葛亮的意思了。从这个意义上说,*我们实际上是积极地相互认同的*。如果我们再做一次,它会看起来不一样。但在这一点上,[在 2020 年 1 月 1 日,Python 2 将会到达终结][10]。大多数第三方库已经支持 Python 3,甚至开始发布只支持 Python 3 的版本(参见 [Django][11] 或 [科学项目关于 Python 3 的声明][12])。
|
||||
|
||||
我们也积极地就另一件事达成一致。就像你于 Mongrel 一样,Python 核心开发人员是志愿者,他们的工作没有得到报酬。我们大多数人在这个项目上投入了大量的时间和精力,因此[我们自然而然敏感][13]于那些对他们的贡献不屑一顾和激烈的评论。特别是如果这个信息既攻击目前的事态,又要求更多的自由劳动。
|
||||
我们也积极地就另一件事达成一致。就像你之于 Mongrel 一样,Python 核心开发人员是志愿者,他们的工作没有得到报酬。我们大多数人在这个项目上投入了大量的时间和精力,因此[我们自然而然会敏感][13]于那些对我们的贡献不屑一顾的激烈评论。特别是如果这种言论既攻击目前的事态,又要求更多的无偿劳动。
|
||||
|
||||
我希望到 2018 年你会让忘记 2016 发布的帖子,有一堆好的反驳。[我特别喜欢 eevee][14](译注:eevee 是一个为 Blender 设计的渲染器)。它特别针对“一起运行 Python 2 和 Python 3 ”的场景,这是不现实的,就像在同一个虚拟机中运行 Ruby 1.8 和 Ruby 2.x 一样,或者像 Lua 5.3 和 Lua 5.1 同时运行一样。你甚至不能用 libc.so.6 运行针对 libc.so.5 编译的 C 二进制文件。然而,我发现最令人惊讶的是,你声称 Python 核心开发者是“有目的地”创造诸如 2to3 之类的破坏工具,这些由 Guido 创建,其最大利益就是让每个人尽可能顺利,快速地迁移。我很高兴你在之后的帖子中放弃了这个说法,但是你必须意识到你会激怒那些阅读原始版本的人。对蓄意伤害的指控最好有强有力的证据支持。
|
||||
我希望到了 2018 年,你已经放下了 2016 年发布的那篇帖子,它收到了一堆好的反驳。[我特别喜欢 eevee][14](LCTT 译注:eevee 是一个为 Blender 设计的渲染器)。它特别针对“一起运行 Python 2 和 Python 3”的场景,这是不现实的,就像在同一个虚拟机中运行 Ruby 1.8 和 Ruby 2.x 一样,或者像 Lua 5.3 和 Lua 5.1 同时运行一样。你甚至不能用 libc.so.6 运行针对 libc.so.5 编译的 C 二进制文件。然而,我发现最令人惊讶的是,你声称 Python 核心开发者是“有目的地”创造诸如 2to3 之类的破坏工具,而这些工具是由 Guido 创建的,让每个人尽可能顺利、快速地迁移正是他的最大利益所在。我很高兴你在之后的帖子中放弃了这个说法,但是你必须意识到你会激怒那些阅读了原始版本的人。对蓄意伤害的指控最好有强有力的证据支持。
|
||||
|
||||
但看起来你仍然会这样做。[就在今天][15]你说 Python 核心开发者“忽略”尝试解决 API 的问题,特别是 `six`。正如我上面写的那样,Python 文档中的官方移植指南涵盖了 “six”。更重要的是,`six` 是由 Python 2.7 的发布管理者 Benjamin Peterson 编写。很多人学会了编程,这要归功于你,而且由于你在网上有大量的粉丝,人们会阅读这样的推文,他们会相信它的价值,这是有害的。
|
||||
但看起来你仍然会这样做。[就在今天][15]你说 Python 核心开发者“忽略”尝试解决 API 的问题,特别是 `six`。正如我上面写的那样,Python 文档中的官方移植指南涵盖了 `six`。更重要的是,`six` 是由 Python 2.7 的发布管理者 Benjamin Peterson 编写。很多人学会了编程,这要归功于你,而且由于你在网上有大量的粉丝,人们会阅读这样的推文,他们会相信它的价值,这是有害的。
|
||||
|
||||
我有一个建议,让我们把 “Python 3 管理不善”的争议搁置起来。Python 2 正在死亡,这个过程会很慢,并且它是丑陋而血腥的,但它是一条单行道。争论那些没有用。相反,让我们专注于我们现在可以做什么来使 Python 3.8 比其他任何 Python 版本更好。也许你更喜欢看外面的角色,但作为这个社区的成员,你会更有影响力。请说“我们”而不是“他们”。
|
||||
我有一个建议,让我们把 “Python 3 管理不善”的争议搁置起来。Python 2 正在死亡,这个过程会很慢,并且它是丑陋而血腥的,但它是一条单行道。争论那些没有用。相反,让我们专注于我们现在可以做什么来使 Python 3.8 比其他任何 Python 版本更好。也许你更喜欢看外面的角色,但作为这个社区的成员,你会更有影响力。请说“我们”而不是“他们”。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
@ -96,9 +93,9 @@ hi,伙计,我关注你已经超过 10 年了。我一直在观察,当你
|
||||
via: http://lukasz.langa.pl/13/could-we-run-python-2-and-python-3-code-same-vm/
|
||||
|
||||
作者:[Łukasz Langa][a]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,32 +1,41 @@
|
||||
Linux 下的 4 个命令行笔记记录程序
|
||||
======
|
||||
4 个 Linux 下的命令行笔记程序
|
||||
===============
|
||||
|
||||
> 这些工具可以让你在 Linux 命令行下简单而有效地记录笔记和保存信息。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/note-taking.jpeg?itok=fiF5EBEb)
|
||||
当你需要保存代码段或 URL、想法或引用时,可能会启动文本编辑器或使用[桌面][1]或[基于 Web 的] [2]笔记记录工具。但那些不是你唯一的选择。如果你在终端窗口中工作,则可以使用 Linux 命令行下的许多笔记记录工具之一。
|
||||
|
||||
当你需要保存代码段或 URL、想法或引用时,可能会启动文本编辑器或使用[桌面][1]或[基于 Web 的][2]笔记记录工具。但那些不是你唯一的选择。如果你在终端窗口中工作,则可以使用 Linux 命令行下的许多笔记记录工具之一。
|
||||
|
||||
我们来看看这四个程序。
|
||||
|
||||
### tnote
|
||||
|
||||
[tnote][3] 使在终端窗口中记笔记很简单 - 几乎太简单了。
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/tnote.png?itok=M84ABZcr)
|
||||
|
||||
tnote 是一个 Python 脚本。首次启动时,它会要求你输入密码和口令来加密存储笔记的[ SQLite 数据库][4]。完成之后,按 “A” 创建一个笔记。输入你的笔记,然后按 CTRL-D 保存。
|
||||
[tnote][3] 使在终端窗口中记笔记很简单 —— 几乎太简单了。
|
||||
|
||||
一旦你有几个(或多个)笔记,你可以查看它们或搜索特定的笔记,单词或短语或标签。tnote 不包含很多功能,但它确实实现了任务。
|
||||
tnote 是一个 Python 脚本。首次启动时,它会要求你输入密码和口令来加密存储笔记的 [SQLite 数据库][4]。完成之后,按 `A` 创建一个笔记。输入你的笔记,然后按 `CTRL-D` 保存。
|
||||
|
||||
一旦你有几个(或多个)笔记,你可以查看它们或搜索特定的笔记,单词或短语或标签。tnote 没有很多功能,但它确实实现了任务。
|
||||
|
||||
### Terminal Velocity
|
||||
|
||||
如果你使用的是 Mac OS,你可能会看到一个名为 [Notational Velocity][5] 的流行开源笔记程序,这是一种记录笔记的简单有效方法。[Terminal Velocity][6] 在将 Notational Velocity 体验带入命令行方面做得很好。
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/terminal_velocity.png?itok=rkWejQ7_)
|
||||
|
||||
如果你使用过 Mac OS,你可能会看到一个名为 [Notational Velocity][5] 的流行开源笔记程序,这是一种记录笔记的简单有效方法。[Terminal Velocity][6] 在将 Notational Velocity 体验带入命令行方面做得很好。
|
||||
|
||||
Terminal Velocity 打开你的默认文本编辑器(由你的 `.profile` 或 `.bashrc` 文件中的 `$EDITOR` 变量设置)。输入你的笔记,然后保存。该笔记出现在 Terminal Velocity 窗口的列表中。
|
||||
|
||||
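例如,想让它使用 vim,可以在 `~/.bashrc` 中加入类似下面的一行(仅为示例):

```
export EDITOR=vim
```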
使用键盘上的箭头键滚动查看你的笔记列表。要查看或编辑笔记,请按 Enter 键。如果你有一长串笔记,则可以在 `Find or Create` 字段中输入笔记标题的前几个字符以缩小列表的范围。在那里滚动笔记并按下 Enter 键将其打开。
|
||||
使用键盘上的箭头键滚动查看你的笔记列表。要查看或编辑笔记,请按回车键。如果你有一长串笔记,则可以在 `Find or Create` 字段中输入笔记标题的前几个字符以缩小列表的范围。在那里滚动笔记并按下回车键将其打开。
|
||||
|
||||
### pygmynote
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/pygmynote.png?itok=Z8qEC4dq)
|
||||
|
||||
在本文的四个应用中,[pygmynote][7] 可能是对用户最不友好的,然而它也是最灵活的。
|
||||
|
||||
像 tnote 一样,pygmynote 将你的笔记和附件保存在 SQLite 数据库中。当你启动它时,pygmynote 看起来并不特别有用。在任何时候,输入 `help` 并按下 Enter 键获取命令列表。
|
||||
像 tnote 一样,pygmynote 将你的笔记和附件保存在 SQLite 数据库中。当你启动它时,pygmynote 看起来并不特别有用。在任何时候,输入 `help` 并按下回车键获取命令列表。
|
||||
|
||||
你可以添加、编辑、查看和搜索笔记,并在笔记中添加[标签][8]。标签使找到笔记更容易,特别是如果你有很多笔记的时候。
|
||||
|
||||
@ -34,6 +43,8 @@ pygmynote 的灵活性在于它能够将附件添加到笔记中。这些附件
|
||||
|
||||
### jrnl
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/jrnl.png?itok=Mx7QIgYj)
|
||||
|
||||
[jrnl][9] 是这里的一个奇怪应用。正如你可能从它的名字中猜到的那样,jrnl 意在成为一种日记工具。但这并不意味着你不能记笔记。 jrnl 做得很好。
|
||||
|
||||
当你第一次启动 jrnl 时,它会询问你想把文件 `journal.txt` (它存储你的笔记)保存的位置以及是否需要密码保护。如果你决定添加密码,那么你在应用内的操作都需要输入密码。
|
||||
@ -50,7 +61,7 @@ via: https://opensource.com/article/18/3/command-line-note-taking-applications
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,39 +1,41 @@
|
||||
如何在终端中使用 Instagram
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/03/instagram-in-terminal-720x340.png)
|
||||
Instagram 不需要介绍。它是像 Facebook 和 Twitter 之类的流行社交网络平台之一,它可以公开或私下分享照片和视频给确认过的粉丝。它是由两位企业家于 2010 年发起的,分别是 **Kevin Systrom** and **Mike Krieger**。2012 年,社交网络巨头 Facebook 收购了 Instagram。Android 和 iOS 设备上免费提供 Instagram。我们也可以通过网络浏览器在桌面系统中使用它。而且,最酷的是现在你可以在任何类 Unix 操作系统上的终端中使用 Instagram。你兴奋了吗?那么,请阅读以下内容了解如何在终端上查看你的Instagram feed。
|
||||
|
||||
Instagram 不需要介绍。它是像 Facebook 和 Twitter 之类的流行社交网络平台之一,它可以公开或私下分享照片和视频给确认过的粉丝。它是由两位企业家 **Kevin Systrom** 和 **Mike Krieger**于 2010 年发起的。2012 年,社交网络巨头 Facebook 收购了 Instagram。Android 和 iOS 设备上可以免费下载 Instagram。我们也可以通过网络浏览器在桌面系统中使用它。而且,最酷的是现在你可以在任何类 Unix 操作系统上的终端中使用 Instagram。你兴奋了吗?那么,请阅读以下内容了解如何在终端上查看你的 Instagram feed。
|
||||
|
||||
### 终端中的 Instagram
|
||||
|
||||
首先,按照以下链接中的说明安装 **pip3**。
|
||||
首先,按照以下链接中的说明安装 `pip3`。
|
||||
|
||||
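例如,在 Debian/Ubuntu 系的发行版上通常可以这样安装(具体包名请以你的发行版为准):

```
$ sudo apt install python3-pip
```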
然后,git clone 它的脚本仓库。
|
||||
|
||||
然后,git clone “instagram-terminal-news-feed” 脚本仓库。
|
||||
```
|
||||
$ git clone https://github.com/billcccheng/instagram-terminal-news-feed.git
|
||||
|
||||
```
|
||||
|
||||
以上命令会将 instagram 脚本的内容克隆到当前工作目录中名为 “instagram-terminal-news-feed” 的目录中。cd 到该目录:
|
||||
以上命令会将 instagram 脚本的内容克隆到当前工作目录中名为 `instagram-terminal-news-feed` 的目录中。cd 到该目录:
|
||||
|
||||
```
|
||||
$ cd instagram-terminal-news-feed/
|
||||
|
||||
```
|
||||
|
||||
然后,运行以下命令安装 instagram 终端 feed:
|
||||
然后,运行以下命令安装它:
|
||||
|
||||
```
|
||||
$ pip3 install -r requirements.txt
|
||||
|
||||
```
|
||||
|
||||
现在,运行以下命令在 Linux 终端中启动 instagram。
|
||||
|
||||
```
|
||||
$ python3 start.py
|
||||
|
||||
```
|
||||
|
||||
输入你的 Instagram 用户名和密码,并直接从终端中浏览你的 Instagram feed。你的 instragram 用户名和密码将仅本地存储在名为 **credential.json** 的文件中。所以,你不必担心它。你也可以选择不保存默认保存的凭证。
|
||||
输入你的 Instagram 用户名和密码,并直接从终端中浏览你的 Instagram feed。你的 instragram 用户名和密码将仅本地存储在名为 `credential.json` 的文件中。所以,你不必担心它。你也可以选择不保存默认保存的凭证。
|
||||
|
||||
下面是[**我的 Instagram 页面**][1]的一些截图。
|
||||
下面是[我的 Instagram 页面][1]的一些截图。
|
||||
|
||||
![][3]
|
||||
|
||||
@ -53,9 +55,9 @@ $ python3 start.py
|
||||
via: https://www.ostechnix.com/how-to-use-instagram-in-terminal/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,30 +1,33 @@
|
||||
为啥我喜欢 ARM 和 PowerPC
|
||||
为什么我喜欢 ARM 和 PowerPC?
|
||||
======
|
||||
|
||||
> 一个学生在搜寻强劲而节能的工作站的历程中怎样对开源系统的热情与日俱增的。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6)
|
||||
最近我被问起为啥在博客和推特里经常提到 [ARM][1] 和 [PowerPC][2]。我有两个答案:一个是个人原因,另一个是技术上的。
|
||||
|
||||
最近我被问起为什么在博客和推特里经常提到 [ARM][1] 和 [PowerPC][2]。我有两个答案:一个是个人原因,另一个是技术上的。
|
||||
|
||||
### 个人原因
|
||||
|
||||
从前,我是学环境保护的。在我读博的时候,我准备买个新电脑。作为一个环保人士,我需要一台强劲且环保的电脑。这就是我开始对 PowerPC 感兴趣的原因,我找到了 [Pegasos][3], 一台 [Genesi][4] 公司制造的 PowerPC 工作站。
|
||||
从前,我是学环境保护的。在我读博的时候,我准备买个新电脑。作为一个环保人士,我需要一台强劲且节能的电脑。这就是我开始对 PowerPC 感兴趣的原因,我找到了 [Pegasos][3],这是一台 [Genesi][4] 公司制造的 PowerPC 工作站。
|
||||
|
||||
我还用过 [RS/6000][5] (PowerPC), [SGI][6] (MIPS), [HP-UX][7] (PA-RISC),和[VMS][8] (Alpha)的服务器和工作站,由于我的 PC 使用 Linux 而非 Windows,所以使用不同的 CPU 架构对我来说并没有什么区别。 [Pegasos][9] 是我第一台工作站,它很小而且对家用来说性能足够。
|
||||
我还用过 [RS/6000][5] (PowerPC)、 [SGI][6] (MIPS)、 [HP-UX][7] (PA-RISC)和 [VMS][8] (Alpha)的服务器和工作站,由于我的 PC 使用 Linux 而非 Windows,所以使用不同的 CPU 架构对我来说并没有什么区别。 [Pegasos][9] 是我第一台工作站,它小型而节能而且对家用来说性能足够。
|
||||
|
||||
很快我就开始为 Genesi 工作,为 Pegasos 移植 [openSUSE][10], Ubuntu 和其他 Linux 发行版,并提供质量保证和社区支持。继 Pegasos 之后是 [EFIKA][11],另一款基于 PowerPC 的开发板。在用过工作站之后,刚开始使用嵌入式系统会感觉有点奇怪。但是作为第一代普及价位的开发板,这是一场革命的开端。
|
||||
很快我就开始为 Genesi 工作,为 Pegasos 移植 [openSUSE][10]、 Ubuntu 和其他 Linux 发行版,并提供质量保证和社区支持。继 Pegasos 之后是 [EFIKA][11],这是另一款基于 PowerPC 的开发板。在用过工作站之后,刚开始使用嵌入式系统会感觉有点奇怪。但是作为第一代普及价位的开发板,这是一场革命的开端。
|
||||
|
||||
在我收到 Genesi 的另一块有趣的开发板的时候,我开始了一个大规模的服务器项目:基于 ARM 的 [Smarttop][12] 和 [Smartbook][13]。我最喜欢的 Linux 发行版————openSUSE,也收到了一打这种机器。这在当时 ARM 电脑非常稀缺的情况下,极大地促进了 ARM 版 openSUSE 项目的开发。
|
||||
我工作于一个大规模的服务器项目时,我收到 Genesi 的另一块有趣的开发板:基于 ARM 的 [Smarttop][12] 和 [Smartbook][13]。我最喜欢的 Linux 发行版——openSUSE,也收到了一打这种机器。这在当时 ARM 电脑非常稀缺的情况下,极大地促进了 ARM 版 openSUSE 项目的开发。
|
||||
|
||||
尽管最近我很忙,我尽量保持对 ARM 和 PowerPC 新闻的关注。这有助于我支持非 x86 平台上的 SysLog-NG 用户。只要有半个小时的空,我就会去捣鼓一下 ARM 机器。我在[树莓派2][14]上做了很多 [syslog-ng][15] 的测试,结果令人振奋。我最近在树莓派上做了个音乐播放器,用了一块 USB 声卡和[音乐播放守护进程][17],我经常使用它。
|
||||
尽管最近我很忙,我尽量保持对 ARM 和 PowerPC 新闻的关注。这有助于我支持非 x86 平台上的 syslog-ng 用户。只要有半个小时的空,我就会去捣鼓一下 ARM 机器。我在[树莓派2][14]上做了很多 [syslog-ng][15] 的测试,结果令人振奋。我最近在树莓派上做了个音乐播放器,用了一块 USB 声卡和[音乐播放守护进程][17],我经常使用它。
|
||||
|
||||
### 技术方面
|
||||
|
||||
美好的多样性:它创造了竞争,而竞争创造了更好的产品。虽然 x86 是一款强劲的通用处理器,但 ARM 和 PowerPC (以及许多其他)这样的芯片在多种特定场景下显得更适合。
|
||||
|
||||
如果你有一部运行[安卓][18]的移动设备或者[苹果][19]的 iPhone 或 iPad,极有可能它使用的就是 基于ARM 的 SoC (片上系统)。网络存储服务器也一样。原因很简单:省电。你不会希望手机一直在充电,也不想为你的路由器付更多的电费。
|
||||
如果你有一部运行[安卓][18]的移动设备或者[苹果][19]的 iPhone 或 iPad,极有可能它使用的就是基于ARM 的 SoC (片上系统)。网络存储服务器也一样。原因很简单:省电。你不会希望手机一直在充电,也不想为你的路由器付更多的电费。
|
||||
|
||||
ARM 亦在使用 64-bit ARMv8 芯片征战服务器市场。很多任务只需要极少的计算能力,另一方面省电和快速IO才是关键,比如思维存储(译者注:原文为 think storage),静态网页服务器,电子邮件和其他网络/存储相关的功能。一个最好的例子就是 [Ceph][20],一个分布式的面向对象文件系统。[SoftIron][21] 就是一个基于 ARMv8 开发版,使用 CentOS 作为基准软件,运行在 Ceph 上的完整存储应用。
|
||||
ARM 亦在使用 64 位 ARMv8 芯片征战企业级服务器市场。很多任务只需要极少的计算能力,另一方面省电和快速 IO 才是关键,想想存储、静态网页服务器、电子邮件和其他网络/存储相关的功能。一个最好的例子就是 [Ceph][20],一个分布式的面向对象文件系统。[SoftIron][21] 就是一个基于 ARMv8 开发版,使用 CentOS 作为基准软件,运行在 Ceph 上的完整存储应用。
|
||||
|
||||
众所周知 PowerPC 是旧版苹果 [Mac][22] 电脑上的 CPU。虽然它不再作为通用桌面电脑的 CPU ,它依然在路由器和电信设备里发挥作用。而且 [IBM][23] 仍在为高端服务器制造芯片。几年前,随着 Power8 的引入, IBM 在 [OpenPower 基金会][24] 的支持下开放了架构。 Power8 对于关心内存带宽的设备,比如 HPC , 大数据,数据挖掘来说,是非常理想的平台。目前,Power9 也正呼之欲出。
|
||||
众所周知 PowerPC 是旧版苹果 [Mac][22] 电脑上的 CPU。虽然它不再作为通用桌面电脑的 CPU ,它依然在路由器和电信设备里发挥作用。而且 [IBM][23] 仍在为高端服务器制造芯片。几年前,随着 Power8 的引入, IBM 在 [OpenPower 基金会][24] 的支持下开放了架构。 Power8 对于关心内存带宽的设备,比如 HPC 、大数据、数据挖掘来说,是非常理想的平台。目前,Power9 也正呼之欲出。
|
||||
|
||||
这些都是服务器应用,但也有计划用于终端用户。猛禽工程团队正在开发一款基于 [Power9 的工作站][25],也有一个基于飞思卡尔/恩智浦 QORIQ E6500 芯片[制造笔记本][26]的倡议。当然,这些电脑并不适合所有人,你不能在它们上面安装 Windows 游戏或者商业应用。但它们对于 PowerPC 开发人员和爱好者,或者任何想要完全开放系统的人来说是理想的选择,因为从硬件到固件到应用程序都是开放的。
|
||||
|
||||
@ -32,7 +35,7 @@ ARM 亦在使用 64-bit ARMv8 芯片征战服务器市场。很多任务只需
|
||||
|
||||
我的梦想是完全没有 x86 的环境,不是因为我讨厌 x86 ,而是因为我喜欢多样化而且总是希望使用最适合工作的工具。如果你看看猛禽工程网页上的[图][27],根据不同的使用情景, ARM 和 POWER 完全可以代替 x86 。现在,我在笔记本的 x86 虚拟机上编译、打包和测试 syslog-ng。如果能用上足够强劲的 ARMv8 或者 PowerPC 电脑,无论工作站还是服务器,我就能避免在 x86 上做这些事。
|
||||
|
||||
现在我正在等待下一代[菠萝本][28]的到来,就像我在二月份 [FOSDEM][29] 上说的,下一代有望提供更高的性能。和 Chrome 本不同的是,这个 ARM 笔记本设计用于运行 Linux 而非仅是个客户端(译者注:Chrome 笔记本只提供基于网页的应用)。作为桌面系统,我在寻找 ARMv8 工作站级别的硬件。有些已经接近完成——就像 Avantek 公司的 [雷神X 台式机][30]——不过他们还没有装备最新最快最重要也最节能的 ARMv8 CPU。当这些都实现了,我将用我的 Pixel C 笔记本运行安卓。它不像 Linux 那样简单灵活,但它以强大的 ARM SoC 和 Linux 内核为基础。
|
||||
现在我正在等待下一代[菠萝本][28]的到来,就像我在二月份 [FOSDEM][29] 上说的,下一代有望提供更高的性能。和 Chrome 本不同的是,这个 ARM 笔记本设计用于运行 Linux 而非仅是个客户端(LCTT 译注:Chrome 笔记本只提供基于网页的应用)。作为桌面系统,我在寻找 ARMv8 工作站级别的硬件。有些已经接近完成——就像 Avantek 公司的 [雷神 X 台式机][30]——不过他们还没有装备最新最快最重要也最节能的 ARMv8 CPU。当这些都实现了,我将用我的 Pixel C 笔记本运行安卓。它不像 Linux 那样简单灵活,但它以强大的 ARM SoC 和 Linux 内核为基础。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -40,7 +43,7 @@ via: https://opensource.com/article/18/4/why-i-love-arm-and-powerpc
|
||||
|
||||
作者:[Peter Czanik][a]
|
||||
译者:[kennethXia](https://github.com/kennethXia)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,12 +1,14 @@
|
||||
给初学者看的 Shuf 命令教程
|
||||
给初学者看的 shuf 命令教程
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/04/shuf-command-720x340.png)
|
||||
Shuf 命令用于在类 Unix 操作系统中生成随机排列。使用 shuf 命令,我们可以随机打乱给定输入文件的行。Shuf 命令是 GNU Coreutils 的一部分,因此你不必担心安装问题。在这个简短的教程中,让我向你展示一些 shuf 命令的例子。
|
||||
|
||||
### 带例子的 Shuf 命令教程
|
||||
`shuf` 命令用于在类 Unix 操作系统中生成随机排列。使用 `shuf` 命令,我们可以随机打乱给定输入文件的行。`shuf` 命令是 GNU Coreutils 的一部分,因此你不必担心安装问题。在这个简短的教程中,让我向你展示一些 `shuf` 命令的例子。
|
||||
|
||||
### 带例子的 shuf 命令教程
|
||||
|
||||
我有一个名为 `ostechnix.txt` 的文件,内容如下:
|
||||
|
||||
我有一个名为 **ostechnix.txt** 的文件,内容如下。
|
||||
```
|
||||
$ cat ostechnix.txt
|
||||
line1
|
||||
@ -19,10 +21,10 @@ line7
|
||||
line8
|
||||
line9
|
||||
line10
|
||||
|
||||
```
|
||||
|
||||
现在让我们以随机顺序显示上面的行。为此,请运行:
|
||||
|
||||
```
|
||||
$ shuf ostechnix.txt
|
||||
line2
|
||||
@ -35,24 +37,24 @@ line4
|
||||
line6
|
||||
line9
|
||||
line3
|
||||
|
||||
```
|
||||
|
||||
看到了吗?上面的命令将名为 “ostechnix.txt” 中的行随机排列并输出了结果。
|
||||
看到了吗?上面的命令将名为 `ostechnix.txt` 中的行随机排列并输出了结果。
|
||||
|
||||
你可能想将输出写入另一个文件。例如,我想将输出保存到 `output.txt` 中。为此,请先创建 `output.txt`:
|
||||
|
||||
你可能想将输出写入另一个文件。例如,我想将输出保存到 **output.txt** 中。为此,请先创建 output.txt:
|
||||
```
|
||||
$ touch output.txt
|
||||
|
||||
```
|
||||
|
||||
然后,像下面使用 **-o** 标志将输出写入该文件。
|
||||
然后,像下面使用 `-o` 标志将输出写入该文件:
|
||||
|
||||
```
|
||||
$ shuf ostechnix.txt -o output.txt
|
||||
|
||||
```
|
||||
|
||||
上面的命令将随机随机打乱 ostechnix.txt 的内容并将输出写入 output.txt。你可以使用命令查看 output.txt 的内容:
|
||||
上面的命令将随机打乱 `ostechnix.txt` 的内容并将输出写入 `output.txt`。你可以使用以下命令查看 `output.txt` 的内容:
|
||||
|
||||
```
|
||||
$ cat output.txt
|
||||
|
||||
@ -66,17 +68,17 @@ line7
|
||||
line6
|
||||
line4
|
||||
line5
|
||||
|
||||
```
|
||||
|
||||
我只想显示文件中的任意一行。我该怎么做?很简单!
|
||||
|
||||
```
|
||||
$ shuf -n 1 ostechnix.txt
|
||||
line6
|
||||
|
||||
```
|
||||
|
||||
同样,我们可以选择前 “n” 个随机条目。以下命令将只显示前五个随机条目。
|
||||
同样,我们可以选择前 “n” 个随机条目。以下命令将只显示前五个随机条目:
|
||||
|
||||
```
|
||||
$ shuf -n 5 ostechnix.txt
|
||||
line10
|
||||
@ -84,10 +86,10 @@ line4
|
||||
line5
|
||||
line9
|
||||
line3
|
||||
|
||||
```
|
||||
|
||||
如下所示,我们可以直接使用 **-e** 标志传入输入,而不是从文件中读取行。
|
||||
如下所示,我们可以直接使用 `-e` 标志传入输入,而不是从文件中读取行:
|
||||
|
||||
```
|
||||
$ shuf -e line1 line2 line3 line4 line5
|
||||
line1
|
||||
@ -95,10 +97,10 @@ line3
|
||||
line5
|
||||
line4
|
||||
line2
|
||||
|
||||
```
|
||||
|
||||
你也可以传入数字:
|
||||
|
||||
```
|
||||
$ shuf -e 1 2 3 4 5
|
||||
3
|
||||
@ -106,25 +108,25 @@ $ shuf -e 1 2 3 4 5
|
||||
1
|
||||
4
|
||||
2
|
||||
|
||||
```
|
||||
|
||||
要快速在给定范围选择一个,请改用此命令。
|
||||
要快速在给定范围选择一个,请改用此命令:
|
||||
|
||||
```
|
||||
$ shuf -n 1 -e 1 2 3 4 5
|
||||
|
||||
```
|
||||
|
||||
或者,选择下面的任意三个随机数字。
|
||||
或者,选择下面的任意三个随机数字:
|
||||
|
||||
```
|
||||
$ shuf -n 3 -e 1 2 3 4 5
|
||||
3
|
||||
5
|
||||
1
|
||||
|
||||
```
|
||||
|
||||
我们也可以在特定范围内生成随机数。例如,要显示 1 到 10 之间的随机数,只需使用:
|
||||
|
||||
```
|
||||
$ shuf -i 1-10
|
||||
1
|
||||
@ -137,29 +139,26 @@ $ shuf -i 1-10
|
||||
3
|
||||
10
|
||||
5
|
||||
|
||||
```
|
||||
|
||||
有关更多详细信息,请参阅手册页。
|
||||
|
||||
```
|
||||
$ man shuf
|
||||
|
||||
```
|
||||
|
||||
今天就是这些。还有更多更好的东西。敬请关注!
|
||||
|
||||
干杯!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/the-shuf-command-tutorial-with-examples-for-beginners/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,98 +1,103 @@
|
||||
强制关闭系统的内核模块
|
||||
强制关闭你的系统的内核模块
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/04/kgotobed-720x340.png)
|
||||
|
||||
我知道熬夜对健康不利。但谁在乎?多年来我一直是一只夜猫子。我通常在 12 点以后睡觉,有时在凌晨 1 点以后睡觉。第二天早上,我至少推迟三次闹钟,醒来后又累又有脾气。每天,我向自己保证早点睡觉,但最终会像平常一样晚睡。而且,这个循环还在继续!如果你和我一样,这有一个好消息。一个同学通宵开发了一个名为 **“Kgotobed”** 的内核模块,它迫使你在特定的时间上床睡觉。也就是说它会强制关闭你的系统。
|
||||
|
||||
我为什么要用这个?我有很多其他的选择。我可以设置一个 cron 作业来安排在特定时间关闭系统。我可以设置提醒或闹钟。我可以使用浏览器插件或软件。你可能会问!但是,它们都可以轻易忽略或绕过。Kgotobed 是你不能忽视的东西。**即使您是 root 用户也无法禁用**。是的,它会在指定的时间强制关闭你的系统。没有推迟选项。你不能推迟关机过程,也不能取消它。无论如何,系统都会在指定的时间停止运行。你被警告了!!
|
||||
你可能会问!我为什么要用这个?我有很多其他的选择。我可以设置一个 cron 作业来安排在特定时间关闭系统。我可以设置提醒或闹钟。我可以使用浏览器插件或软件。但是,它们都可以轻易忽略或绕过。Kgotobed 是你不能忽视的东西。**即使您是 root 用户也无法禁用**。是的,它会在指定的时间强制关闭你的系统。没有推迟选项。你不能推迟关机过程,也不能取消它。无论如何,系统都会在指定的时间停止运行。你被警告了!!
|
||||
|
||||
### 安装 Kgotobed
|
||||
|
||||
确保你已经安装了 **dkms**。它在大多数 Linux 发行版的默认仓库中都有。
|
||||
确保你已经安装了 `dkms`。它在大多数 Linux 发行版的默认仓库中都有。
|
||||
|
||||
例如在 Fedora 上,你可以使用以下命令安装它:
|
||||
|
||||
```
|
||||
$ sudo dnf install kernel-devel-$(uname -r) dkms
|
||||
|
||||
```
|
||||
|
||||
在 Debian、Ubuntu、linux Mint 上:
|
||||
|
||||
```
|
||||
$ sudo apt install dkms
|
||||
|
||||
```
|
||||
|
||||
安装完成后,git clone Kgotobed 项目。
|
||||
安装完成后,`git clone` Kgotobed 项目。
|
||||
|
||||
```
|
||||
$ git clone https://github.com/nikital/kgotobed.git
|
||||
|
||||
```
|
||||
|
||||
该命令会在当前工作目录中将所有 Kgotobed 仓库的内容克隆到名为 “kgotobed” 的文件夹中。cd 到该目录:
|
||||
该命令会在当前工作目录中将所有 Kgotobed 仓库的内容克隆到名为 `kgotobed` 的文件夹中。进入到该目录:
|
||||
|
||||
```
|
||||
$ cd kgotobed/
|
||||
|
||||
```
|
||||
|
||||
接着,使用命令安装 Kgotobed 驱动:
|
||||
|
||||
```
|
||||
$ sudo make install
|
||||
|
||||
```
|
||||
|
||||
上面的命令将 **kgotobed.ko** 模块注册到 **DKMS**(这样它会为每个你运行的内核重建)并在 **/usr/local/bin/** 目录下安装 **gotobed**,然后注册、启用并启动 kgotobed 服务。
|
||||
上面的命令将 `kgotobed.ko` 模块注册到 **DKMS**(这样它会为每个你运行的内核重建)并在 `/usr/local/bin/` 目录下安装 `gotobed`,然后注册、启用并启动 kgotobed 服务。
|
||||
|
||||
### 如何运行
|
||||
|
||||
默认情况下,Kgotobed 将睡前时间设置为 **1:00 AM**。也就是说,无论你在做什么,你的电脑都会在凌晨 1 点关机。
|
||||
|
||||
要查看当前的睡前时间,请运行:
|
||||
|
||||
```
|
||||
$ gotobed
|
||||
Current bedtime is 2018-04-10 01:00:00
|
||||
|
||||
```
|
||||
|
||||
要提前睡眠时间,例如 22:00(晚上 10 点),请运行:
|
||||
|
||||
```
|
||||
$ sudo gotobed 22:00
|
||||
[sudo] password for sk:
|
||||
Current bedtime is 2018-04-10 00:58:00
|
||||
Setting bedtime to 2018-04-09 22:00:00
|
||||
Bedtime will be in 2 hours 16 minutes
|
||||
|
||||
```
|
||||
|
||||
当你想早点睡觉时,这会很有帮助!
|
||||
|
||||
但是,你不能设置更晚的时间也就是凌晨 1 点以后。你无法卸载模块,并且调整系统时钟也无济于事。唯一的出路是重启!
|
||||
|
||||
要设置不同的默认时间,您需要自定义 **kgotobed.service**(通过编辑或使用 systemd 工具)。
|
||||
要设置不同的默认时间,您需要自定义 `kgotobed.service`(通过编辑或使用 systemd 工具)。
|
||||
|
||||
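例如,可以用 systemd 自带的编辑命令打开该单元文件,修改其中设定就寝时间的那一行,然后重启服务(具体字段取决于安装的单元文件,这里仅作示意):

```
$ sudo systemctl edit --full kgotobed.service
$ sudo systemctl restart kgotobed.service
```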
### 卸载 Kgotobed
|
||||
|
||||
对 Kgotobed 不满意?别担心!进入我们先前克隆的 “kgotobed” 文件夹,然后运行以下命令将其卸载。
|
||||
对 Kgotobed 不满意?别担心!进入我们先前克隆的 `kgotobed` 文件夹,然后运行以下命令将其卸载。
|
||||
|
||||
```
|
||||
$ sudo make uninstall
|
||||
|
||||
```
|
||||
|
||||
再一次,我警告你,即使你是 root 用户,也没有办法推迟或取消关机过程。你的系统将在指定的时间强制关闭。这并不适合每个人!当你在做一项重要任务时,它可能会让你疯狂。在这种情况下,请确保你已经不时地保存工作,或使用下面链接中的一些高级工具来帮助你在特定时间自动关闭、重启、暂停和休眠系统。
|
||||
|
||||
- [在特定时间自动关闭、重启、暂停和休眠系统](https://www.ostechnix.com/auto-shutdown-reboot-suspend-hibernate-linux-system-specific-time/)
|
||||
|
||||
就是这些了。希望你觉得这个指南有帮助。还有更好的东西。敬请关注!
|
||||
|
||||
干杯!
|
||||
|
||||
### 资源
|
||||
|
||||
- [Kgotobed GitHub 仓库](https://github.com/nikital/kgotobed)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/kgotobed-a-kernel-module-that-forcibly-shutdown-your-system/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,8 +1,9 @@
|
||||
另一个 TUI 图形活动监视器,使用 Go 编写
|
||||
Gotop:另一个 TUI 图形活动监视器,使用 Go 编写
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/04/Gotop-720x340.png)
|
||||
你已经知道 “top” 命令,对么?是的,它提供类 Unix 操作系统中运行中的进程的动态实时信息。一些开发人员为 top 命令构建了图形前端,因此用户可以在图形窗口中轻松找到他们系统的活动。其中之一是 **Gotop**。顾名思义,Gotop 是一个 TUI 图形活动监视器,使用 **Go** 语言编写。它是完全免费、开源的,受到 [gtop][1] 和 [vtop][2] 的启发。
|
||||
|
||||
你已经知道 `top` 命令,对么?是的,它提供类 Unix 操作系统中运行中的进程的动态实时信息。一些开发人员为 `top` 命令构建了图形前端,因此用户可以在图形窗口中轻松找到他们系统的活动。其中之一是 **Gotop**。顾名思义,Gotop 是一个 TUI 图形活动监视器,使用 **Go** 语言编写。它是完全免费、开源的,受到了 [gtop][1] 和 [vtop][2] 的启发。
|
||||
|
||||
在此简要的指南中,我们将讨论如何安装和使用 Gotop 来监视 Linux 系统的活动。
|
||||
|
||||
@ -11,21 +12,21 @@
|
||||
Gotop 是用 Go 编写的,所以我们需要先安装它。要在 Linux 中安装 Go 语言,请参阅以下指南。
|
||||
|
||||
安装 Go 之后,使用以下命令下载最新的 Gotop 二进制文件。
|
||||
|
||||
```
|
||||
$ sh -c "$(curl https://raw.githubusercontent.com/cjbassi/gotop/master/download.sh)"
|
||||
|
||||
```
|
||||
|
||||
然后,将下载的二进制文件移动到您的 $PATH 中,例如 **/usr/local/bin/**。
|
||||
然后,将下载的二进制文件移动到您的 `$PATH` 中,例如 `/usr/local/bin/`。
|
||||
|
||||
```
|
||||
$ cp gotop /usr/local/bin
|
||||
|
||||
```
|
||||
|
||||
最后,用下面的命令使其可执行:
|
||||
|
||||
```
|
||||
$ chmod +x /usr/local/bin/gotop
|
||||
|
||||
```
|
||||
|
||||
如果你使用的是基于 Arch 的系统,Gotop 存在于 **AUR** 中,所以你可以使用任何 AUR 助手程序进行安装。
|
||||
@ -33,92 +34,91 @@ $ chmod +x /usr/local/bin/gotop
|
||||
使用 [**Cower**][3]:
|
||||
```
|
||||
$ cower -S gotop
|
||||
|
||||
```
|
||||
|
||||
使用 [**Pacaur**][4]:
|
||||
|
||||
```
|
||||
$ pacaur -S gotop
|
||||
|
||||
```
|
||||
|
||||
使用 [**Packer**][5]:
|
||||
|
||||
```
|
||||
$ packer -S gotop
|
||||
|
||||
```
|
||||
|
||||
使用 [**Trizen**][6]:
|
||||
|
||||
```
|
||||
$ trizen -S gotop
|
||||
|
||||
```
|
||||
|
||||
使用 [**Yay**][7]:
|
||||
|
||||
```
|
||||
$ yay -S gotop
|
||||
|
||||
```
|
||||
|
||||
使用 [yaourt][8]:
|
||||
|
||||
```
|
||||
$ yaourt -S gotop
|
||||
|
||||
```
|
||||
|
||||
### 用法
|
||||
|
||||
Gotop 的使用非常简单!你所要做的就是从终端运行以下命令。
|
||||
|
||||
```
|
||||
$ gotop
|
||||
|
||||
```
|
||||
|
||||
这样就行了!你将在一个简单的 TUI 窗口中看到系统的 CPU、磁盘、内存、网络、CPU 温度的使用情况以及进程列表。
|
||||
|
||||
![][10]
|
||||
|
||||
要仅显示CPU、内存和进程组件,请使用下面的 **-m** 标志
|
||||
要仅显示 CPU、内存和进程组件,请使用下面的 `-m` 标志:
|
||||
|
||||
```
|
||||
$ gotop -m
|
||||
|
||||
```
|
||||
|
||||
![][11]
|
||||
|
||||
你可以使用以下键盘快捷键对进程表进行排序。
|
||||
|
||||
* **c** – CPU
|
||||
* **m** – 内存
|
||||
* **p** – PID
|
||||
* `c` – CPU
|
||||
* `m` – 内存
|
||||
* `p` – PID
|
||||
|
||||
|
||||
|
||||
对于进程浏览,请使用以下键。
|
||||
|
||||
* **上/下** 箭头或者 **j/k** 键用于上移下移。
|
||||
* **Ctrl-d** 和 **Ctrl-u** – 上移和下移半页。
|
||||
* **Ctrl-f** 和 **Ctrl-b** – 上移和下移整页。
|
||||
* **gg** 和 **G** – 跳转顶部和底部。
|
||||
* `上/下` 箭头或者 `j/k` 键用于上移下移。
|
||||
* `Ctrl-d` 和 `Ctrl-u` – 上移和下移半页。
|
||||
* `Ctrl-f` 和 `Ctrl-b` – 上移和下移整页。
|
||||
* `gg` 和 `G` – 跳转顶部和底部。
|
||||
|
||||
|
||||
|
||||
按下 **< TAB>** 切换进程分组。要杀死选定的进程或进程组,请输入 **dd**。要选择一个进程,只需点击它。要向下/向上滚动,请使用鼠标滚动按钮。要放大和缩小 CPU 和内存图,请使用 **h** 和 **l**。要显示帮助菜单,只需按 **?**。
|
||||
|
||||
**推荐阅读:**
|
||||
按下 `TAB` 切换进程分组。要杀死选定的进程或进程组,请输入 `dd`。要选择一个进程,只需点击它。要向下/向上滚动,请使用鼠标滚动按钮。要放大和缩小 CPU 和内存的图形,请使用 `h` 和 `l`。要显示帮助菜单,只需按 `?`。
|
||||
|
||||
就是这些了。希望这有帮助。还有更多好东西。敬请关注!
|
||||
|
||||
### 资源
|
||||
|
||||
- [Gotop GitHub Repository](https://github.com/cjbassi/gotop)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/gotop-yet-another-tui-graphical-activity-monitor-written-in-go/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,11 +1,11 @@
|
||||
在 GITLAB CI 中使用 DOCKER 构建 GO 项目
|
||||
在 GitLab CI 中使用 Docker 构建 Go 项目
|
||||
===============================================
|
||||
|
||||
### 介绍
|
||||
|
||||
这篇文章是我在 CI 的 Docker 容器中构建 Go 项目的研究总结(特别是在 Gitlab 中)。我发现很难解决私有依赖问题(来自 Node/.NET 背景),因此这是我写这篇文章的主要原因。如果 Docker 镜像上存在任何问题或提交请求,请随时与我们联系。
|
||||
这篇文章是我在 CI 环境(特别是在 Gitlab 中)的 Docker 容器中构建 Go 项目的研究总结。我发现很难解决私有依赖问题(来自 Node/.NET 背景),因此这是我写这篇文章的主要原因。如果 Docker 镜像上存在任何问题或提交请求,请随时与我们联系。
|
||||
|
||||
### Dep
|
||||
### dep
|
||||
|
||||
由于 dep 是现在管理 Go 依赖关系的最佳选择,因此要在构建之前运行 `dep ensure`。
|
||||
|
||||
@ -18,50 +18,43 @@
|
||||
我第一次尝试使用 `golang:1.10`,但这个镜像没有:
|
||||
|
||||
* curl
|
||||
|
||||
* git
|
||||
|
||||
* make
|
||||
|
||||
* dep
|
||||
|
||||
* golint
|
||||
|
||||
我已经为我将不断更新的构建创建好了镜像([github][2] / [dockerhub][3]) - 但我不提供任何保证,因此你应该创建并管理自己的 Dockerhub。
|
||||
我已经创建好了用于构建的镜像([github][2] / [dockerhub][3]),我会保持更新,但我不提供任何担保,因此你应该创建并管理自己的 Dockerhub。
|
||||
|
||||
### 内部依赖关系
|
||||
|
||||
我们完全有能力创建一个有公共依赖关系的项目。但是如果你的项目依赖于另一个私人 gitlab 仓库呢?
|
||||
我们完全有能力创建一个有公共依赖关系的项目。但是如果你的项目依赖于另一个私人 Gitlab 仓库呢?
|
||||
|
||||
在本地运行 `dep ensure` 应该可以和你的 git 设置一起工作,但是一旦在 CI 上不适用,构建就会失败。
|
||||
|
||||
### Gitlab 权限模型
|
||||
#### Gitlab 权限模型
|
||||
|
||||
这是在[ Gitlab 8.12 中添加的][4],我们关心的最有用的功能是在构建期提供的 `CI_JOB_TOKEN` 环境变量。
|
||||
这是在 [Gitlab 8.12 中添加的][4],这个我们最关心的有用的功能是在构建期提供的 `CI_JOB_TOKEN` 环境变量。
|
||||
|
||||
这基本上意味着我们可以像这样克隆[依赖仓库][5]
|
||||
这基本上意味着我们可以像这样克隆[依赖仓库][5]:
|
||||
|
||||
```
|
||||
git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/myuser/mydependentrepo
|
||||
|
||||
```
|
||||
|
||||
然而,我们希望使这更友好一点,因为 dep 在试图拉取代码时不会奇迹般地添加凭据。
|
||||
然而,我们希望使这更友好一点,因为 `dep` 在试图拉取代码时不会奇迹般地添加凭据。
|
||||
|
||||
我们将把这一行添加到 `.gitlab-ci.yml` 的 `before_script` 部分。
|
||||
|
||||
```
|
||||
before_script:
|
||||
- echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc
|
||||
|
||||
```
|
||||
|
||||
使用 `.netrc` 文件可以指定哪个凭证用于哪个服务器。这种方法可以避免每次从 Git 中拉取(或推送)时输入用户名和密码。密码以明文形式存储,因此你不应在自己的计算机上执行此操作。这实际用于 Git 在背后使用 `cURL`。 [在这里阅读更多][6]。
|
||||
|
||||
项目文件
|
||||
============================================================
|
||||
### 项目文件
|
||||
|
||||
### Makefile
|
||||
#### Makefile
|
||||
|
||||
虽然这是可选的,但我发现它使事情变得更容易。
|
||||
|
||||
@ -93,7 +86,7 @@ lint-all:
|
||||
|
||||
```
|
||||
|
||||
### .gitlab-ci.yml
|
||||
#### .gitlab-ci.yml
|
||||
|
||||
这是 Gitlab CI 魔术发生的地方。你可能想使用自己的镜像。
|
||||
|
||||
@ -132,7 +125,7 @@ build:
|
||||
|
||||
### 缺少了什么
|
||||
|
||||
我通常会用我的二进制文件构建 Docker 镜像,并将其推送到 Gitlab 容器注册器中。
|
||||
我通常会用我的二进制文件构建 Docker 镜像,并将其推送到 Gitlab 容器注册库中。
|
||||
|
||||
你可以看到我正在构建二进制文件并退出,你至少需要将该二进制文件(例如生成文件)存储在某处。
|
||||
|
||||
@ -140,9 +133,9 @@ build:
|
||||
|
||||
via: https://seandrumm.co.uk/blog/building-go-projects-with-docker-on-gitlab-ci/
|
||||
|
||||
作者:[ SEAN DRUMM][a]
|
||||
作者:[SEAN DRUMM][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,5 +1,6 @@
|
||||
如何在 Linux 上查看用户的创建日期
|
||||
======
|
||||
|
||||
你知道如何在 Linux 系统上查看帐户的创建日期吗?如果知道,又有哪些办法呢?
|
||||
|
||||
你成功了么?如果是的话,该怎么做?
|
||||
@ -12,19 +13,18 @@
|
||||
|
||||
可以使用以下 7 种方法进行验证。
|
||||
|
||||
* 使用 /var/log/secure
|
||||
* 使用 aureport 工具
|
||||
* 使用 .bash_logout
|
||||
* 使用 chage 命令
|
||||
* 使用 useradd 命令
|
||||
* 使用 passwd 命令
|
||||
* 使用 last 命令
|
||||
|
||||
|
||||
* 使用 `/var/log/secure`
|
||||
* 使用 `aureport` 工具
|
||||
* 使用 `.bash_logout`
|
||||
* 使用 `chage` 命令
|
||||
* 使用 `useradd` 命令
|
||||
* 使用 `passwd` 命令
|
||||
* 使用 `last` 命令
|
||||
|
||||
### 方式 1:使用 /var/log/secure
|
||||
|
||||
它存储所有安全相关的消息,包括身份验证失败和授权特权。它还会通过系统安全守护进程跟踪 sudo 登录、SSH 登录和其他错误记录。
|
||||
它存储所有安全相关的消息,包括身份验证失败和授权特权。它还会通过系统安全守护进程跟踪 `sudo` 登录、SSH 登录和其他错误记录。
|
||||
|
||||
```
|
||||
# grep prakash /var/log/secure
|
||||
Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new group: name=prakash, GID=501
|
||||
@ -32,24 +32,24 @@ Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new user: name=prakash, UID=
|
||||
Apr 12 04:07:34 centos.2daygeek.com passwd: pam_unix(passwd:chauthtok): password changed for prakash
|
||||
Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: Accepted password for prakash from 103.5.134.167 port 60554 ssh2
|
||||
Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: pam_unix(sshd:session): session opened for user prakash by (uid=0)
|
||||
|
||||
```
|
||||
|
||||
### 方式 2:使用 aureport 工具
|
||||
|
||||
aureport 工具可以根据记录在审计日志中的事件记录生成汇总和柱状报告。默认情况下,它会查询 /var/log/audit/ 目录中的所有 audit.log 文件来创建报告。
|
||||
`aureport` 工具可以根据记录在审计日志中的事件记录生成汇总和柱状报告。默认情况下,它会查询 `/var/log/audit/` 目录中的所有 `audit.log` 文件来创建报告。
|
||||
|
||||
```
|
||||
# aureport --auth | grep prakash
|
||||
46. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 288
|
||||
47. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 291
|
||||
|
||||
```
|
||||
|
||||
### 方式 3:使用 .bash_logout
|
||||
|
||||
家目录中的 .bash_logout 对 bash 有特殊的含义,它提供了一种在用户退出系统时执行命令的方式。
|
||||
家目录中的 `.bash_logout` 对 bash 有特殊的含义,它提供了一种在用户退出系统时执行命令的方式。
|
||||
|
||||
我们可以查看用户家目录中 `.bash_logout` 的更改日期。该文件是在用户第一次注销时创建的。
|
||||
|
||||
我们可以查看用户家目录中 .bash_logout 的更改日期。该文件是在用户第一次注销时创建的。
|
||||
```
|
||||
# stat /home/prakash/.bash_logout
|
||||
File: `/home/prakash/.bash_logout'
|
||||
@ -59,14 +59,14 @@ Access: (0644/-rw-r--r--) Uid: ( 501/ prakash) Gid: ( 501/ prakash)
|
||||
Access: 2017-03-22 20:15:00.000000000 -0400
|
||||
Modify: 2017-03-22 20:15:00.000000000 -0400
|
||||
Change: 2018-04-12 04:07:18.283000323 -0400
|
||||
|
||||
```
|
||||
|
||||
### 方式 4:使用 chage 命令
|
||||
|
||||
chage 代表 change age。该命令让用户管理密码过期信息。chage 命令更改密码更改时和上次密码更改日期之间的天数。
|
||||
`chage` 意即 “change age”。该命令让用户管理密码过期信息。`chage` 命令可以修改上次密码更改日期后需要更改密码的天数。
|
||||
|
||||
系统使用此信息来确定用户何时必须更改其密码。如果用户自帐户创建日期以来没有更改密码,这个就有用。
|
||||
|
||||
```
|
||||
# chage --list prakash
|
||||
Last password change : Apr 12, 2018
|
||||
@ -76,45 +76,44 @@ Account expires : never
|
||||
Minimum number of days between password change : 0
|
||||
Maximum number of days between password change : 99999
|
||||
Number of days of warning before password expires : 7
|
||||
|
||||
```
|
||||
|
||||
### 方式 5:使用 useradd 命令
|
||||
|
||||
useradd 命令用于在 Linux 中创建新帐户。默认情况下,它不会添加用户创建日期,我们必须使用 “Comment” 选项添加日期。
|
||||
`useradd` 命令用于在 Linux 中创建新帐户。默认情况下,它不会添加用户创建日期,我们必须使用 “备注” 选项添加日期。
|
||||
|
||||
```
|
||||
# useradd -m prakash -c `date +%Y/%m/%d`
|
||||
|
||||
# grep prakash /etc/passwd
|
||||
prakash:x:501:501:2018/04/12:/home/prakash:/bin/bash
|
||||
|
||||
```
|
||||
|
||||
### 方式 6:使用 passwd 命令
|
||||
|
||||
passwd 命令用于将密码分配给本地帐户或用户。如果用户在帐户创建后没有修改密码,那么可以使用 passwd 命令查看最后一次密码修改的日期。
|
||||
`passwd` 命令用于将密码分配给本地帐户或用户。如果用户在帐户创建后没有修改密码,那么可以使用 `passwd` 命令查看最后一次密码修改的日期。
|
||||
|
||||
```
|
||||
# passwd -S prakash
|
||||
prakash PS 2018-04-11 0 99999 7 -1 (Password set, MD5 crypt.)
|
||||
|
||||
```
|
||||
|
||||
### 方式 7:使用 last 命令
|
||||
|
||||
last 命令读取 /var/log/wtmp,并显示自该文件创建以来所有登录(和退出)用户的列表。
|
||||
`last` 命令读取 `/var/log/wtmp`,并显示自该文件创建以来所有登录(和退出)用户的列表。
|
||||
|
||||
```
|
||||
# last | grep "prakash"
|
||||
prakash pts/2 103.5.134.167 Thu Apr 12 04:08 still logged in
|
||||
|
||||
```
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-check-user-created-date-on-linux/
|
||||
|
||||
作者:[Prakash Subramanian][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
137
published/20180413 Finding what you-re looking for on Linux.md
Normal file
137
published/20180413 Finding what you-re looking for on Linux.md
Normal file
@ -0,0 +1,137 @@
|
||||
在 Linux 上寻找你正在寻找的东西
|
||||
=====
|
||||
|
||||
> 怎样在 Linux 系统上使用 find、locate、mlocate、which、 whereis、 whatis 和 apropos 命令寻找文件。
|
||||
|
||||
![](https://images.idgesg.net/images/article/2018/04/binoculars-100754967-large.jpg)
|
||||
|
||||
在 Linux 系统上找到你要找的文件或命令并不难,有很多种方法可以寻找。
|
||||
|
||||
### find
|
||||
|
||||
最显然的无疑是 `find` 命令,而且 `find` 比前几年更容易使用了。它过去需要指定一个搜索的起始位置,但是现在,如果你只想把搜索限制在当前目录中,也可以只带文件名或正则表达式来运行 `find` 命令。
|
||||
|
||||
```
|
||||
$ find e*
|
||||
empty
|
||||
examples.desktop
|
||||
```
|
||||
|
||||
这样,它就像 `ls` 命令一样工作,并没有做太多的搜索。
|
||||
|
||||
对于更专业的搜索,`find` 命令需要一个起点和一些搜索条件(除非你只是希望它提供该起点目录的递归列表)。命令 `find -type f` 从当前目录开始将递归列出所有常规文件,而 `find ~nemo -type f -empty` 将在 nemo 的主目录中找到空文件。
|
||||
|
||||
```
|
||||
$ find ~nemo -type f -empty
|
||||
/home/nemo/empty
|
||||
```
|
||||
|
||||
参见:[11 个好玩的 Linux 终端技巧][1]。
|
||||
|
||||
### locate
|
||||
|
||||
`locate` 命令的名称表明它与 `find` 命令基本相同,但它的工作原理完全不同。`find` 命令可以根据各种条件 —— 名称、大小、所有者、权限、状态(如空文件)等等选择文件并作为搜索选择深度,`locate` 命令通过名为 `/var/lib/mlocate/mlocate.db` 的文件查找你要查找的内容。该数据文件会定期更新,因此你刚创建的文件的位置它可能无法找到。如果这让你感到困扰,你可以运行 `updatedb` 命令立即获得更新。
|
||||
|
||||
```
|
||||
$ sudo updatedb
|
||||
```
|
||||
|
||||
### mlocate
|
||||
|
||||
`mlocate` 命令的工作类似于 `locate` 命令,它使用与 `locate` 相同的 `mlocate.db` 文件。
|
||||
|
||||
### which
|
||||
|
||||
`which` 命令的工作方式与 `find` 命令和 `locate` 命令有很大的区别。它使用你的搜索路径(`$PATH`)并检查其上的每个目录中具有你要查找的文件名的可执行文件。一旦找到一个,它会停止搜索并显示该可执行文件的完整路径。
|
||||
|
||||
`which` 命令的主要优点是它回答了“如果我输入此命令,将运行什么可执行文件?”的问题。它会忽略不可执行文件,并且不会列出系统上带有该名称的所有可执行文件 —— 列出的就是它找到的第一个。如果你想查找具有某个名称的所有可执行文件,可以像下面这样运行 `find` 命令,但是它会比非常高效的 `which` 命令花费更长的时间。
|
||||
|
||||
```
|
||||
$ find / -name locate -perm -a=x 2>/dev/null
|
||||
/usr/bin/locate
|
||||
/etc/alternatives/locate
|
||||
```
|
||||
|
||||
在这个 `find` 命令中,我们在寻找名为 “locate” 的所有可执行文件(任何人都可以运行的文件)。我们也选择了不要查看所有“拒绝访问”的消息,否则这些消息会混乱我们的屏幕。
|
||||
|
||||
### whereis
|
||||
|
||||
`whereis` 命令与 `which` 命令非常类似,但它提供了更多信息。它不仅仅是寻找可执行文件,它还寻找手册页(man page)和源文件。像 `which` 命令一样,它使用搜索路径(`$PATH`) 来驱动搜索。
|
||||
|
||||
```
|
||||
$ whereis locate
|
||||
locate: /usr/bin/locate /usr/share/man/man1/locate.1.gz
|
||||
```
|
||||
|
||||
### whatis
|
||||
|
||||
`whatis` 命令有其独特的使命。它不是实际查找文件,而是在手册页中查找有关所询问命令的信息,并从手册页的顶部提供该命令的简要说明。
|
||||
|
||||
```
|
||||
$ whatis locate
|
||||
locate (1) - find files by name
|
||||
```
|
||||
|
||||
如果你询问你刚刚设置的脚本,它不会知道你指的是什么,并会告诉你。
|
||||
|
||||
```
|
||||
$ whatis cleanup
|
||||
cleanup: nothing appropriate.
|
||||
```
|
||||
|
||||
### apropos
|
||||
|
||||
当你知道你想要做什么,但不知道应该使用什么命令来执行此操作时,`apropos` 命令很有用。例如,如果你想知道如何查找文件,那么 `apropos find` 和 `apropos locate` 会提供很多建议。
|
||||
|
||||
```
|
||||
$ apropos find
|
||||
File::IconTheme (3pm) - find icon directories
|
||||
File::MimeInfo::Applications (3pm) - Find programs to open a file by mimetype
|
||||
File::UserDirs (3pm) - find extra media and documents directories
|
||||
find (1) - search for files in a directory hierarchy
|
||||
findfs (8) - find a filesystem by label or UUID
|
||||
findmnt (8) - find a filesystem
|
||||
gst-typefind-1.0 (1) - print Media type of file
|
||||
ippfind (1) - find internet printing protocol printers
|
||||
locate (1) - find files by name
|
||||
mlocate (1) - find files by name
|
||||
pidof (8) - find the process ID of a running program.
|
||||
sane-find-scanner (1) - find SCSI and USB scanners and their device files
|
||||
systemd-delta (1) - Find overridden configuration files
|
||||
xdg-user-dir (1) - Find an XDG user dir
|
||||
$
|
||||
$ apropos locate
|
||||
blkid (8) - locate/print block device attributes
|
||||
deallocvt (1) - deallocate unused virtual consoles
|
||||
fallocate (1) - preallocate or deallocate space to a file
|
||||
IO::Tty (3pm) - Low-level allocate a pseudo-Tty, import constants.
|
||||
locate (1) - find files by name
|
||||
mlocate (1) - find files by name
|
||||
mlocate.db (5) - a mlocate database
|
||||
mshowfat (1) - shows FAT clusters allocated to file
|
||||
ntfsfallocate (8) - preallocate space to a file on an NTFS volume
|
||||
systemd-sysusers (8) - Allocate system users and groups
|
||||
systemd-sysusers.service (8) - Allocate system users and groups
|
||||
updatedb (8) - update a database for mlocate
|
||||
updatedb.mlocate (8) - update a database for mlocate
|
||||
whereis (1) - locate the binary, source, and manual page files for a...
|
||||
which (1) - locate a command
|
||||
```
|
||||
|
||||
### 总结
|
||||
|
||||
Linux 上可用于查找和识别文件的命令有很多种,但它们都非常有用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3268768/linux/finding-what-you-re-looking-for-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[1]:http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb
|
@ -1,10 +1,11 @@
|
||||
4 月 COPR 中 4 个新的酷项目
|
||||
4 月 COPR 中的 4 个新酷项目
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
|
||||
COPR 是一个人仓库[收集][1],它不在 Fedora 中运行。某些软件不符合易于打包的标准。或者它可能不符合其他 Fedora 标准,尽管它是免费且开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施支持或项目签名。但是,它可能是尝试新软件或实验软件的一种很好的方式。
|
||||
|
||||
这是 COPR 中一系列新的和有趣的项目。
|
||||
COPR 是一个个人软件仓库[集合][1],它包含 Fedora 所没有提供的软件。这些软件或不符合易于打包的标准,或者它可能不符合其他 Fedora 标准,尽管它是自由且开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件并没有得到 Fedora 基础设施支持,也没有由该项目背书。但是,它可能是尝试新软件或实验软件的一种很好的方式。
|
||||
|
||||
这是 COPR 中一些新的和有趣的项目。
|
||||
|
||||
### Anki
|
||||
|
||||
@ -17,28 +18,28 @@ COPR 是一个人仓库[收集][1],它不在 Fedora 中运行。某些软件
|
||||
#### 安装说明
|
||||
|
||||
仓库目前为 Fedora 27、28 和 Rawhide 提供 Anki。要安装 Anki,请使用以下命令:
|
||||
|
||||
```
|
||||
sudo dnf copr enable thomasfedb/anki
|
||||
sudo dnf install anki
|
||||
|
||||
```
|
||||
|
||||
### Fd
|
||||
|
||||
[Fd][5] 是一个命令行工具,它是简单而稍快的替代 [find][6] 的方法。它可以并行地查找项目。fd 也使用彩色输出,并默认忽略隐藏文件和 .gitignore 中指定模式的文件。
|
||||
[Fd][5] 是一个命令行工具,它是 [find][6] 的一个简单而更快速的替代品。它可以并行地查找项目。`fd` 还使用彩色输出,并默认忽略隐藏文件和 `.gitignore` 中指定模式的文件。
|
||||
|
||||
#### 安装说明
|
||||
|
||||
仓库目前为 Fedora 26、27、28 和 Rawhide 提供 fd。要安装 fd,请使用以下命令:
|
||||
仓库目前为 Fedora 26、27、28 和 Rawhide 提供 `fd`。要安装 `fd`,请使用以下命令:
|
||||
|
||||
```
|
||||
sudo dnf copr enable keefle/fd
|
||||
sudo dnf install fd
|
||||
|
||||
```
|
||||
|
||||
### KeePass
|
||||
|
||||
[KeePass][7]是一个密码管理器。它将所有密码保存在一个由主密钥或密钥文件锁定的端对端加密数据库中。密码可以组织成组并由程序的内置生成器生成。其他功能包括自动输入,它可以为选定的表单输入用户名和密码。
|
||||
[KeePass][7] 是一个密码管理器。它将所有密码保存在一个由主密钥或密钥文件锁定的端对端加密数据库中。密码可以组织成组并由程序的内置生成器生成。其他功能包括自动输入,它可以为选定的表单输入用户名和密码。
|
||||
|
||||
虽然 KeePass 已经在 Fedora 中,但这个仓库提供了最新版本。
|
||||
|
||||
@ -47,10 +48,10 @@ sudo dnf install fd
|
||||
#### 安装说明
|
||||
|
||||
仓库目前为 Fedora 26 和 27 提供 KeePass。要安装 KeePass,请使用以下命令:
|
||||
|
||||
```
|
||||
sudo dnf copr enable mavit/keepass
|
||||
sudo dnf install keepass
|
||||
|
||||
```
|
||||
|
||||
### jo
|
||||
@ -60,10 +61,10 @@ sudo dnf install keepass
|
||||
#### 安装说明
|
||||
|
||||
目前,仓库为 Fedora 26、27 和 Rawhide 以及 EPEL 6 和 7 提供 jo。要安装 jo,请使用以下命令:
|
||||
|
||||
```
|
||||
sudo dnf copr enable ganto/jo
|
||||
sudo dnf install jo
|
||||
|
||||
```
|
||||
|
||||
|
||||
@ -72,9 +73,9 @@ sudo dnf install jo
|
||||
via: https://fedoramagazine.org/4-try-copr-april-2018/
|
||||
|
||||
作者:[Dominik Turecek][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,23 +1,27 @@
|
||||
Linux 命令行下的数学运算
|
||||
======
|
||||
|
||||
> 有几个有趣的命令可以在 Linux 系统下做数学运算: `expr`、`factor`、`jot` 和 `bc` 命令。
|
||||
|
||||
![](https://images.techhive.com/images/article/2014/12/math_blackboard-100534564-large.jpg)
|
||||
|
||||
可以在 Linux 命令行下做数学运算吗?当然可以!事实上,有不少命令可以轻松完成这些操作,其中一些甚至让你大吃一惊。让我们来学习这些有用的数学运算命令或命令语法吧。
|
||||
|
||||
### expr
|
||||
|
||||
首先,对于在命令行使用命令进行数学运算,可能最容易想到、最常用的命令就是 **expr** (expression)。它可以完成四则运算,也可以用于比较大小。下面是几个例子:
|
||||
首先,对于在命令行使用命令进行数学运算,可能最容易想到、最常用的命令就是 `expr`(<ruby>表达式<rt>expression</rt></ruby>)。它可以完成四则运算,也可以用于比较大小。下面是几个例子:
|
||||
|
||||
#### 变量递增
|
||||
|
||||
```
|
||||
$ count=0
|
||||
$ count=`expr $count + 1`
|
||||
$ echo $count
|
||||
1
|
||||
|
||||
```
|
||||
|
||||
#### 完成简单运算
|
||||
|
||||
```
|
||||
$ expr 11 + 123
|
||||
134
|
||||
@ -31,11 +35,12 @@ $ expr 11 \* 123
|
||||
1353
|
||||
$ expr 20 % 3
|
||||
2
|
||||
|
||||
```
|
||||
注意,你需要在 * 运算符之前增加 \ 符号,避免语法错误。% 运算符用于取余运算。
|
||||
|
||||
注意,你需要在 `*` 运算符之前增加 `\` 符号,避免语法错误。`%` 运算符用于取余运算。
|
||||
|
||||
下面是一个稍微复杂的例子:
|
||||
|
||||
```
|
||||
participants=11
|
||||
total=156
|
||||
@ -45,47 +50,49 @@ echo $share
|
||||
14
|
||||
echo $remaining
|
||||
2
|
||||
|
||||
```
|
||||
|
||||
假设某个活动中有 11 位参与者,需要颁发的奖项总数为 156,那么平均每个参与者获得 14 项奖项,额外剩余 2 个奖项。
|
||||
|
||||
#### 比较大小
|
||||
#### 比较
|
||||
|
||||
下面让我们看一下比较的操作。从第一印象来看,语句看似有些怪异;这里并不是**设置**数值,而是进行数字比较。在本例中 `expr` 判断表达式是否为真:如果结果是 1,那么表达式为真;反之,表达式为假。
|
||||
|
||||
下面让我们看一下比较大小的操作。从第一印象来看,语句看似有些怪异;这里并不是设置数值,而是进行数字大小比较。在本例中 **expr** 判断表达式是否为真:如果结果是 1,那么表达式为真;反之,表达式为假。
|
||||
```
|
||||
$ expr 11 = 11
|
||||
1
|
||||
$ expr 11 = 12
|
||||
0
|
||||
|
||||
```
|
||||
请读作"11 是否等于 11?"及"11 是否等于 12?",你很快就会习惯这种写法。当然,我们不会在命令行上执行上述比较,可能的比较是 $age 是否等于 11。
|
||||
|
||||
请读作“11 是否等于 11?”及“11 是否等于 12?”,你很快就会习惯这种写法。当然,我们不会在命令行上执行上述比较,可能的比较是 `$age` 是否等于 `11`。
|
||||
|
||||
```
|
||||
$ age=11
|
||||
$ expr $age = 11
|
||||
1
|
||||
|
||||
```
|
||||
|
||||
如果将数字放到引号中间,那么你将进行字符串比较,而不是数值比较。
|
||||
|
||||
```
|
||||
$ expr "11" = "11"
|
||||
1
|
||||
$ expr "eleven" = "11"
|
||||
0
|
||||
|
||||
```
|
||||
|
||||
在本例中,我们判断 10 是否大于 5,以及是否 大于 99。
|
||||
在本例中,我们判断 10 是否大于 5,以及是否大于 99。
|
||||
|
||||
```
|
||||
$ expr 10 \> 5
|
||||
1
|
||||
$ expr 10 \> 99
|
||||
0
|
||||
|
||||
```
|
||||
|
||||
的确,返回 1 和 0 分别代表比较的结果为真和假,我们一般预期在 Linux 上得到这个结果。在下面的例子中,按照上述逻辑使用 **expr** 并不正确,因为 **if** 的工作原理刚好相反,即 0 代表真。
|
||||
的确,返回 1 和 0 分别代表比较的结果为真和假,我们一般预期在 Linux 上得到这个结果。在下面的例子中,按照上述逻辑使用 `expr` 并不正确,因为 `if` 的工作原理刚好相反,即 0 代表真。
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
|
||||
@ -99,19 +106,19 @@ if [ `expr $price \> $cost` ]; then
|
||||
else
|
||||
echo "Don't sell it"
|
||||
fi
|
||||
|
||||
```
|
||||
|
||||
下面,我们运行这个脚本:
|
||||
|
||||
```
|
||||
$ ./checkPrice
|
||||
Cost to us> 11.50
|
||||
Price we're asking> 6
|
||||
We make money
|
||||
|
||||
```
|
||||
|
||||
这显然与我们预期不符!我们稍微修改一下,以便使其按我们预期工作:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
|
||||
@ -125,12 +132,12 @@ if [ `expr $price \> $cost` == 1 ]; then
|
||||
else
|
||||
echo "Don't sell it"
|
||||
fi
|
||||
|
||||
```
|
||||
|
||||
### factor
|
||||
|
||||
**factor** 命令的功能基本与你预期相符。你给出一个数字,该命令会给出对应数字的因子。
|
||||
`factor` 命令的功能基本与你预期相符。你给出一个数字,该命令会给出对应数字的因子。
|
||||
|
||||
```
|
||||
$ factor 111
|
||||
111: 3 37
|
||||
@ -143,11 +150,12 @@ $ factor 1987
|
||||
|
||||
```
|
||||
|
||||
注:factor 命令对于最后一个数字没有返回很多,这是因为 1987 是一个 **质数**。
|
||||
注:`factor` 命令对于最后一个数字没有返回更多因子,这是因为 1987 是一个**质数**。
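
作为对比,我们再分解一个因子较多的合数,其结果可以手工相乘验证(3 × 7 × 11 × 13 × 37 = 111111):

```
$ factor 111111
111111: 3 7 11 13 37
```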
|
||||
|
||||
### jot
|
||||
|
||||
**jot** 命令可以创建一系列数字。给定数字总数及起始数字即可。
|
||||
`jot` 命令可以创建一系列数字。给定数字总数及起始数字即可。
|
||||
|
||||
```
|
||||
$ jot 8 10
|
||||
10
|
||||
@ -158,10 +166,10 @@ $ jot 8 10
|
||||
15
|
||||
16
|
||||
17
|
||||
|
||||
```
|
||||
|
||||
你也可以用如下方式使用 **jot**,这里我们要求递减至数字 2。
|
||||
你也可以用如下方式使用 `jot`,这里我们要求递减至数字 2。
|
||||
|
||||
```
|
||||
$ jot 8 10 2
|
||||
10
|
||||
@ -172,10 +180,10 @@ $ jot 8 10 2
|
||||
4
|
||||
3
|
||||
2
|
||||
|
||||
```
|
||||
|
||||
**jot** 可以帮你构造一系列数字组成的列表,该列表可以用于其它任务。
|
||||
`jot` 可以帮你构造一系列数字组成的列表,该列表可以用于其它任务。
|
||||
|
||||
```
|
||||
$ for i in `jot 7 17`; do echo April $i; done
|
||||
April 17
|
||||
@ -185,28 +193,28 @@ April 20
|
||||
April 21
|
||||
April 22
|
||||
April 23
|
||||
|
||||
```
|
||||
|
||||
### bc
|
||||
|
||||
**bc** 基本上是命令行数学运算最佳工具之一。输入你想执行的运算,使用管道发送至该命令即可:
|
||||
`bc` 基本上是命令行数学运算最佳工具之一。输入你想执行的运算,使用管道发送至该命令即可:
|
||||
|
||||
```
|
||||
$ echo "123.4+5/6-(7.89*1.234)" | bc
|
||||
113.664
|
||||
|
||||
```
|
||||
|
||||
可见 **bc** 并没有忽略精度,而且输入的字符串也相当直截了当。它还可以进行大小比较、处理布尔值、计算平方根、正弦、余弦和正切等。
|
||||
可见 `bc` 并没有忽略精度,而且输入的字符串也相当直截了当。它还可以进行大小比较、处理布尔值、计算平方根、正弦、余弦和正切等。
|
||||
|
||||
```
|
||||
$ echo "sqrt(256)" | bc
|
||||
16
|
||||
$ echo "s(90)" | bc -l
|
||||
.89399666360055789051
|
||||
|
||||
```
|
||||
|
||||
事实上,**bc** 甚至可以计算 pi。你需要指定需要的精度。
|
||||
事实上,`bc` 甚至可以计算 pi。你需要指定需要的精度。
|
||||
|
||||
```
|
||||
$ echo "scale=5; 4*a(1)" | bc -l
|
||||
3.14156
|
||||
@ -216,10 +224,10 @@ $ echo "scale=20; 4*a(1)" | bc -l
|
||||
3.14159265358979323844
|
||||
$ echo "scale=40; 4*a(1)" | bc -l
|
||||
3.1415926535897932384626433832795028841968
|
||||
|
||||
```
|
||||
|
||||
除了通过管道接收数据并返回结果,**bc**还可以交互式运行,输入你想执行的运算即可。本例中提到的 scale 设置可以指定有效数字的个数。
|
||||
除了通过管道接收数据并返回结果,`bc` 还可以交互式运行,直接输入你想执行的运算即可。本例中用到的 `scale` 设置可以指定小数点后保留的位数。
|
||||
|
||||
```
|
||||
$ bc
|
||||
bc 1.06.95
|
||||
@ -232,10 +240,10 @@ scale=2
|
||||
2/3
|
||||
.66
|
||||
quit
|
||||
|
||||
```
|
||||
|
||||
你还可以使用 **bc** 完成数字进制转换。**obase** 用于设置输出的数字进制。
|
||||
你还可以使用 `bc` 完成数字进制转换。`obase` 用于设置输出的数字进制。
|
||||
|
||||
```
|
||||
$ bc
|
||||
bc 1.06.95
|
||||
@ -248,23 +256,23 @@ obase=16
|
||||
256 <=== entered
|
||||
100 <=== response
|
||||
quit
|
||||
|
||||
```
|
||||
|
||||
按如下方式使用 **bc** 也是完成十六进制与十进制转换的最简单方式之一:
|
||||
按如下方式使用 `bc` 也是完成十六进制与十进制转换的最简单方式之一:
|
||||
|
||||
```
|
||||
$ echo "ibase=16; F2" | bc
|
||||
242
|
||||
$ echo "obase=16; 242" | bc
|
||||
F2
|
||||
|
||||
```
|
||||
|
||||
在上面第一个例子中,我们将输入进制 (ibase) 设置为十六进制 (hex),完成十六进制到为十进制的转换。在第二个例子中,我们执行相反的操作,即将输出进制 (obase) 设置为十六进制。
|
||||
在上面第一个例子中,我们将输入进制(`ibase`)设置为十六进制(`hex`),完成十六进制到十进制的转换。在第二个例子中,我们执行相反的操作,即将输出进制(`obase`)设置为十六进制。
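
如果需要同时设置两个进制,一般先设置 `obase` 再设置 `ibase`,因为一旦设置了 `ibase`,后面输入的数字都会按新的输入进制来解释。下面是一个把二进制转换为十六进制的小例子:

```
$ echo "obase=16; ibase=2; 11000000" | bc
C0
```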
|
||||
|
||||
### 简单的 bash 数学运算
|
||||
|
||||
通过使用双括号,我们可以在 bash 中完成简单的数学运算。在下面的例子中,我们创建一个变量,为变量赋值,然后依次执行加法、自减和平方。
|
||||
|
||||
```
|
||||
$ ((e=11))
|
||||
$ (( e = e + 7 ))
|
||||
@ -278,19 +286,19 @@ $ echo $e
|
||||
$ ((e=e**2))
|
||||
$ echo $e
|
||||
289
|
||||
|
||||
```
|
||||
|
||||
允许使用的运算符包括:
|
||||
|
||||
```
|
||||
+ - 加法及减法
|
||||
++ -- 自增与自减
|
||||
* / % 乘法,除法及求余数
|
||||
* / % 乘法、除法及求余数
|
||||
** 指数运算
|
||||
|
||||
```
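
下面用一个简短的例子演示其中的取余和指数运算符:

```
$ (( m = 17 % 5 )); echo $m
2
$ (( p = 2 ** 10 )); echo $p
1024
```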
|
||||
|
||||
你还可以使用逻辑运算符和布尔运算符:
|
||||
|
||||
```
|
||||
$ ((x=11)); ((y=7))
|
||||
$ if (( x > y )); then
|
||||
@ -303,14 +311,13 @@ $ if (( x > y )) >> (( y > z )); then
|
||||
> echo "letters roll downhill"
|
||||
> fi
|
||||
letters roll downhill
|
||||
|
||||
```
|
||||
|
||||
或者如下方式:
|
||||
|
||||
```
|
||||
$ if [ "$x" -gt "$y" ] && [ "$y" -gt "${z:-0}" ]; then echo "letters roll downhill"; fi
|
||||
letters roll downhill
|
||||
|
||||
```
|
||||
|
||||
下面计算 2 的 3 次幂:
|
||||
@ -319,23 +326,20 @@ $ echo "2 ^ 3"
|
||||
2 ^ 3
|
||||
$ echo "2 ^ 3" | bc
|
||||
8
|
||||
|
||||
```
|
||||
|
||||
### 总结
|
||||
|
||||
在 Linux 系统中,有很多不同的命令行工具可以完成数字运算。希望你在读完本文之后,能掌握一两个新工具。
|
||||
|
||||
使用 [Facebook][1] 或 [LinkedIn][2] 加入 Network World 社区,点评你最喜爱的主题。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3268964/linux/how-to-do-math-on-the-linux-command-line.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
译者:[pinewall](https://github.com/pinewall)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[pinewall](https://github.com/pinewall)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -2,38 +2,39 @@
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/04/java-getting-started-816x345.jpg)
|
||||
Java 是世界上最流行的编程语言之一。它广泛用于开发物联网设备、Android 程序,Web 和企业应用。本文将提供使用 [OpenJDK][1] 安装和配置工作站的指南。
|
||||
|
||||
Java 是世界上最流行的编程语言之一。它广泛用于开发物联网设备、Android 程序、Web 和企业应用。本文将提供使用 [OpenJDK][1] 安装和配置工作站的指南。
|
||||
|
||||
### 安装编译器和工具
|
||||
|
||||
在 Fedora 中安装编译器或 Java Development Kit(JDK)很容易。在写这篇文章时,可以用 v8 和 v9。只需打开一个终端并输入:
|
||||
在 Fedora 中安装编译器或 Java Development Kit(JDK)很容易。在写这篇文章时,可以用 v8 和 v9 版本。只需打开一个终端并输入:
|
||||
|
||||
```
|
||||
sudo dnf install java-1.8.0-openjdk-devel
|
||||
|
||||
```
|
||||
|
||||
这会安装 JDK v8。如果要安装 v9,请输入:
|
||||
|
||||
```
|
||||
sudo dnf install java-9-openjdk-devel
|
||||
|
||||
```
|
||||
|
||||
对于需要其他工具和库(如 Ant 和 Maven)的开发人员,可以使用 **Java Development** 组。要安装套件,请输入:
|
||||
|
||||
```
|
||||
sudo dnf group install "Java Development"
|
||||
|
||||
```
|
||||
|
||||
要验证编译器是否已安装,请运行:
|
||||
|
||||
```
|
||||
javac -version
|
||||
|
||||
```
|
||||
|
||||
输出显示编译器版本,如下所示:
|
||||
|
||||
```
|
||||
javac 1.8.0_162
|
||||
|
||||
```
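
类似地,如果还想确认 Java 运行时环境也已就绪,可以运行:

```
java -version
```

输出的第一行通常类似于 `openjdk version "1.8.0_162"`,具体版本号取决于你安装的软件包。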
|
||||
|
||||
### 编译程序
|
||||
@ -41,49 +42,48 @@ javac 1.8.0_162
|
||||
你可以使用任何基本的文本编辑器(如 nano、vim 或 gedit)编写程序。这个例子提供了一个简单的 “Hello Fedora” 程序。
|
||||
|
||||
打开你最喜欢的文本编辑器并输入以下内容:
|
||||
|
||||
```
|
||||
public class HelloFedora {
|
||||
|
||||
|
||||
public static void main (String[] args) {
|
||||
System.out.println("Hello Fedora!");
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
将文件保存为 HelloFedora.java。在终端切换到包含该文件的目录并执行以下操作:
|
||||
将文件保存为 `HelloFedora.java`。在终端切换到包含该文件的目录并执行以下操作:
|
||||
|
||||
```
|
||||
javac HelloFedora.java
|
||||
|
||||
```
|
||||
|
||||
如果编译器遇到任何语法错误,它会报告出来;否则,它不会有任何输出,只会回到 shell 提示符。
|
||||
|
||||
你现在应该有一个名为 HelloFedora 的文件,它是编译好的程序。使用以下命令运行它:
|
||||
你现在应该有一个名为 `HelloFedora.class` 的文件,它就是编译好的程序。使用以下命令运行它(注意不需要带 `.class` 后缀):
|
||||
|
||||
```
|
||||
java HelloFedora
|
||||
|
||||
```
|
||||
|
||||
输出将显示:
|
||||
|
||||
```
|
||||
Hello Fedora!
|
||||
|
||||
```
|
||||
|
||||
### 安装集成开发环境(IDE)
|
||||
|
||||
有些程序可能更复杂,这时 IDE 可以帮助开发顺利进行。Java 程序员有很多可用的 IDE,其中包括:
|
||||
|
||||
+ Geany,一个加载快速的基本 IDE,并提供内置模板
|
||||
+ Geany,一个快速加载的基本 IDE,并提供内置模板
|
||||
+ Anjuta
|
||||
+ GNOME Builder,已经在 Builder 的文章中介绍过 - 这是一个专门面向 GNOME 程序开发人员的新 IDE
|
||||
+ GNOME Builder,已经在 [Builder - 这是一个专门面向 GNOME 程序开发人员的新 IDE][6] 的文章中介绍过
|
||||
|
||||
然而,主要用 Java 编写的最流行的开源 IDE 之一是 [Eclipse][2]。 Eclipse 在官方仓库中有。要安装它,请运行以下命令:
|
||||
|
||||
```
|
||||
sudo dnf install eclipse-jdt
|
||||
|
||||
```
|
||||
|
||||
安装完成后,Eclipse 的快捷方式会出现在桌面菜单中。
|
||||
@ -93,9 +93,9 @@ sudo dnf install eclipse-jdt
|
||||
### 浏览器插件
|
||||
|
||||
如果你正在开发 Web 小程序并需要一个用于浏览器的插件,则可以使用 [IcedTea-Web][4]。像 OpenJDK 一样,它是开源的并易于在 Fedora 中安装。运行这个命令:
|
||||
|
||||
```
|
||||
sudo dnf install icedtea-web
|
||||
|
||||
```
|
||||
|
||||
从 Firefox 52 开始,Web 插件不再有效。有关详细信息,请访问 Mozilla 支持网站 [https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct][5]。
|
||||
@ -110,7 +110,7 @@ via: https://fedoramagazine.org/start-developing-java-fedora/
|
||||
作者:[Shaun Assam][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -120,3 +120,4 @@ via: https://fedoramagazine.org/start-developing-java-fedora/
|
||||
[3]:http://help.eclipse.org/oxygen/nav/0
|
||||
[4]:https://icedtea.classpath.org/wiki/IcedTea-Web
|
||||
[5]:https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct
|
||||
[6]:https://fedoramagazine.org/builder-a-new-ide-specifically-for-gnome-app-developers-2/
|
@ -0,0 +1,97 @@
|
||||
如何在任何地方使用 Vim 编辑器输入文本
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-720x340.png)
|
||||
|
||||
各位 Vim 使用者大家好!今天,我这里有个好消息告诉大家。我会向大家介绍 **Vim-anywhere**,这是一个简单的脚本,它允许你使用 Vim 编辑器在 Linux 中的任何地方输入文本。这意味着你能简单地调用自己最爱的 Vim 编辑器,输入任何你所想的,并将这些文本粘贴到任意的应用和网站中。这些文本将在剪贴板可用,直到你重启了系统。这个工具对那些喜欢在非 Vim 环境中使用 Vim 键位绑定的人来说十分有用。
|
||||
|
||||
### 在 Linux 中安装 Vim-anywhere
|
||||
|
||||
Vim-anywhere 工具可以运行在任何基于 GNOME(或其他衍生品)的 Linux 发行版上。另外,确保你已经安装了下面的依赖。
|
||||
|
||||
* Curl
|
||||
* Git
|
||||
* gVim
|
||||
* xclip
|
||||
|
||||
比如,你可以用下面的命令在 Ubuntu 中安装这些工具:
|
||||
|
||||
```
|
||||
$ sudo apt install curl git vim-gnome xclip
|
||||
```
|
||||
|
||||
然后运行如下的命令来安装 Vim-anywhere:
|
||||
|
||||
```
|
||||
$ curl -fsSL https://raw.github.com/cknadler/vim-anywhere/master/install | bash
|
||||
```
|
||||
|
||||
Vim-anywhere 到此已经安装完成。现在我们来看看如何使用它。
|
||||
|
||||
### 在任何地方使用 Vim 编辑器输入文本
|
||||
|
||||
假如你需要创建一个 word 文档。但是你更愿意使用 Vim 编辑器,而不是 LibreOffice。没问题,这里 Vim-anywhere 就派上用场了。Vim-anywhere 自动化了整个流程。它仅仅简单地调用 Vim 编辑器,所以你能写任何你所想的,然后将之粘贴到 .doc 文件中。
|
||||
|
||||
让我给你展示一个用例。打开 LibreOffice 或者你选择的任何图形文本编辑器,然后调出 Vim-anywhere:只需按下 `CTRL+ALT+V`,它就会打开 gVim 编辑器。按下 `i` 进入插入模式并输入文本,完成之后键入 `:wq` 保存并关闭文件。
|
||||
|
||||
![][2]
|
||||
|
||||
这些文本会一直保留在剪贴板中,直到你重启系统。在你关闭编辑器之后,之前的应用会重新回到前台,你只需按下 `CTRL+V` 将文本粘贴进去。
|
||||
|
||||
![][3]
|
||||
|
||||
这仅仅是一个例子。你甚至可以使用 Vim-anywhere 在烦人的 web 表单或者其他应用中进行输入。一旦 Vim-anywhere 被调用,它就会打开一个缓冲区;关闭 Vim-anywhere 之后,缓冲区内的内容会自动复制到你的剪贴板中,之前的应用会重新回到前台。
|
||||
|
||||
Vim-anywhere 在被调用的时候会在 `/tmp/vim-anywhere` 中创建一个临时文件。这些临时文件会一直保留,直到你重启系统,因此也相当于为你提供了一份临时的历史记录。
|
||||
|
||||
```
|
||||
$ ls /tmp/vim-anywhere
|
||||
```
|
||||
|
||||
你可以用下面的命令重新打开最近的文件:
|
||||
|
||||
```
|
||||
$ vim $( ls /tmp/vim-anywhere | sort -r | head -n 1 )
|
||||
```
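
如果你不再需要这些临时的历史记录,也可以手动清空该目录(这只是一个可选操作的示例):

```
$ rm /tmp/vim-anywhere/*
```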
|
||||
|
||||
#### 更新 Vim-anywhere
|
||||
|
||||
运行下面的命令来更新 Vim-anywhere:
|
||||
|
||||
```
|
||||
$ ~/.vim-anywhere/update
|
||||
|
||||
```
|
||||
|
||||
#### 更改快捷键
|
||||
|
||||
默认调用 Vim-anywhere 的键位是 `CTRL+ALT+V`。你可以用 `gconf` 工具将其更改为任何自定义的键位绑定。
|
||||
|
||||
```
|
||||
$ gconftool -t str --set /desktop/gnome/keybindings/vim-anywhere/binding <custom binding>
|
||||
```
|
||||
|
||||
#### 卸载 Vim-anywhere
|
||||
|
||||
可能有些人觉得每次打开 Vim 编辑器,输入一些文本,然后将文本复制到其他应用中是没有意义也毫无必要的。
|
||||
|
||||
如果你不觉得这个工具有用,只需使用下面的命令来卸载它:
|
||||
|
||||
```
|
||||
$ ~/.vim-anywhere/uninstall
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-use-vim-editor-to-input-text-anywhere/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[paperzhang](https://github.com/paperzhang)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-1-1.png
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-2.png
|
@ -1,4 +1,4 @@
|
||||
Being open about data privacy
|
||||
Translating by FelixYFZ Being open about data privacy
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_opendata.png?itok=M8L2HGVx)
|
||||
|
||||
|
@ -1,95 +0,0 @@
|
||||
Translating by FelxiYFZ IT automation: How to make the case
|
||||
======
|
||||
At the start of any significant project or change initiative, IT leaders face a proverbial fork in the road.
|
||||
|
||||
Path #1 might seem to offer the shortest route from A to B: Simply force-feed the project to everyone by executive mandate, essentially saying, “You’re going to do this – or else.”
|
||||
|
||||
Path #2 might appear less direct, because on this journey you take the time to explain the strategy and the reasons behind it. In fact, you’re going to be making pit stops along this route, rather than marathoning from start to finish: “Here’s what we’re doing – and why we’re doing it.”
|
||||
|
||||
Guess which path bears better results?
|
||||
|
||||
If you said #2, you’ve traveled both paths before – and experienced the results first-hand. Getting people on board with major changes beforehand is almost always the smarter choice.
|
||||
|
||||
IT leaders know as well as anyone that with significant change often comes [significant fear][1], skepticism, and other challenges. It may be especially true with IT automation. The term alone sounds scary to some people, and it is often tied to misconceptions. Helping people understand the what, why, and how of your company’s automation strategy is a necessary step to achieving your goals associated with that strategy.
|
||||
|
||||
[ **Read our related article,** [**IT automation best practices: 7 keys to long-term success**][2]. ]
|
||||
|
||||
With that in mind, we asked a variety of IT leaders for their advice on making the case for automation in your organization:
|
||||
|
||||
## 1. Show people what’s in it for them
|
||||
|
||||
Let’s face it: Self-interest and self-preservation are natural instincts. Tapping into that human tendency is a good way to get people on board: Show people how your automation strategy will benefit them and their jobs. Will automating a particular process in the software pipeline mean fewer middle-of-the-night calls for team members? Will it enable some people to dump low-skill, manual tasks in favor of more strategic, higher-order work – the sort that helps them take the next step in their career?
|
||||
|
||||
“Convey what’s in it for them, and how it will benefit clients and the whole company,” advises Vipul Nagrath, global CIO at [ADP][3]. “Compare the current state to a brighter future state, where the company enjoys greater stability, agility, efficiency, and security.”
|
||||
|
||||
The same approach holds true when making the case outside of IT; just lighten up on the jargon when explaining the benefits to non-technical stakeholders, Nagrath says.
|
||||
|
||||
Setting up a before-and-after picture is a good storytelling device for helping people see the upside.
|
||||
|
||||
“You want to paint a picture of the current state that people can relate to,” Nagrath says. “Present what’s working, but also highlight what’s causing teams to be less than agile.” Then explain how automating certain processes will improve that current state.
|
||||
|
||||
## 2. Connect automation to specific business goals
|
||||
|
||||
Part of making a strong case entails making sure people understand that you’re not just trend-chasing. If you’re automating simply for the sake of automating, people will sniff that out and become more resistant – perhaps especially within IT.
|
||||
|
||||
“The case for automation needs to be driven by a business demand signal, such as revenue or operating expense,” says David Emerson, VP and deputy CISO at [Cyxtera][4]. “No automation endeavor is self-justifying, and no technical feat, generally, should be a means unto itself, unless it’s a core competency of the company.”
|
||||
|
||||
Like Nagrath, Emerson recommends promoting the incentives associated with achieving the business goals of automation, and working toward these goals (and corresponding incentives) in an iterative, step-by-step fashion.
|
||||
|
||||
## 3. Break the automation plan into manageable pieces
|
||||
|
||||
Even if your automation strategy is literally “automate everything,” that’s a tough sell (and probably unrealistic) for most organizations. You’ll make a stronger case with a plan that approaches automation manageable piece by manageable piece, and that enables greater flexibility to adapt along the way.
|
||||
|
||||
“When making a case for automation, I recommend clearly illustrating the incentive to move to an automated process, and allowing iteration toward that goal to introduce and prove the benefits at lower risk,” Emerson says.
|
||||
|
||||
Sergey Zuev, founder at [GA Connector][5], shares an in-the-trenches account of why automating incrementally is crucial – and how it will help you build a stronger, longer-lasting argument for your strategy. Zuev should know: His company’s tool automates the import of data from CRM applications into Google Analytics. But it was actually the company’s internal experience automating its own customer onboarding process that led to a lightbulb moment.
|
||||
|
||||
“At first, we tried to build the whole onboarding funnel at once, and as a result, the project dragged [on] for months,” Zuev says. “After realizing that it [was] going nowhere, we decided to select small chunks that would have the biggest immediate effect, and start with that. As a result, we managed to implement one of the email sequences in just a week, and are already reaping the benefits of the desecrated manual effort.”
|
||||
|
||||
## 4. Sell the big-picture benefits too
|
||||
|
||||
A step-by-step approach does not preclude painting a bigger picture. Just as it’s a good idea to make the case at the individual or team level, it’s also a good idea for help people understand the company-wide benefits.
|
||||
|
||||
“If we can accelerate the time it takes for the business to get what it needs, it will silence the skeptics.”
|
||||
|
||||
Eric Kaplan, CTO at [AHEAD][6], agrees that using small wins to show automation’s value is a smart strategy for winning people over. But the value those so-called “small” wins reveal can actually help you sharpen the big picture for people. Kaplan points to the value of individual and organizational time as an area everyone can connect with easily.
|
||||
|
||||
“The best place to do this is where you can show savings in terms of time,” Kaplan says. “If we can accelerate the time it takes for the business to get what it needs, it will silence the skeptics.”
|
||||
|
||||
Time and scalability are powerful benefits business and IT colleagues, both charged with growing the business, can grasp.
|
||||
|
||||
“The result of automation is scalability – less effort per person to maintain and grow your IT environment, as [Red Hat][7] VP, Global Services John Allessio recently [noted][8]. “If adding manpower is the only way to grow your business, then scalability is a pipe dream. Automation reduces your manpower requirements and provides the flexibility required for continued IT evolution.” (See his full article, [What DevOps teams really need from a CIO][8].)
|
||||
|
||||
## 5. Promote the heck out of your results
|
||||
|
||||
At the outset of your automation strategy, you’ll likely be making the case based on goals and the anticipated benefits of achieving those goals. But as your automation strategy evolves, there’s no case quite as convincing as one grounded in real-world results.
|
||||
|
||||
“Seeing is believing,” says Nagrath, ADP’s CIO. “Nothing quiets skeptics like a track record of delivery.”
|
||||
|
||||
That means, of course, not only achieving your goals, but also doing so on time – another good reason for the iterative, step-by-step approach.
|
||||
|
||||
While quantitative results such as percentage improvements or cost savings can speak loudly, Nagrath advises his fellow IT leaders not to stop there when telling your automation story.
|
||||
|
||||
“Making a case for automation is also a qualitative discussion, where we can promote the issues prevented, overall business continuity, reductions in failures/errors, and associates taking on [greater] responsibility as they tackle more value-added tasks.”
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
|
||||
|
||||
作者:[Kevin Casey][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://enterprisersproject.com/user/kevin-casey
|
||||
[1]:https://enterprisersproject.com/article/2017/10/how-beat-fear-and-loathing-it-change
|
||||
[2]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success?sc_cid=70160000000h0aXAAQ
|
||||
[3]:https://www.adp.com/
|
||||
[4]:https://www.cyxtera.com/
|
||||
[5]:http://gaconnector.com/
|
||||
[6]:https://www.thinkahead.com/
|
||||
[7]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
|
||||
[8]:https://enterprisersproject.com/article/2017/12/what-devops-teams-really-need-cio
|
||||
[9]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ
|
@ -1,104 +0,0 @@
|
||||
fuzheng1998 translating
|
||||
|
||||
Why Linux is better than Windows or macOS for security
|
||||
======
|
||||
|
||||
![](https://images.idgesg.net/images/article/2018/02/linux_security_vs_macos_and_windows_locks_data_thinkstock-100748607-large.jpg)
|
||||
|
||||
Enterprises invest a lot of time, effort and money in keeping their systems secure. The most security-conscious might have a security operations center. They of course use firewalls and antivirus tools. They probably spend a lot of time monitoring their networks, looking for telltale anomalies that could indicate a breach. What with IDS, SIEM and NGFWs, they deploy a veritable alphabet of defenses.
|
||||
|
||||
But how many have given much thought to one of the cornerstones of their digital operations: the operating systems deployed on the workforce’s PCs? Was security even a factor when the desktop OS was selected?
|
||||
|
||||
This raises a question that every IT person should be able to answer: Which operating system is the most secure for general deployment?
|
||||
|
||||
We asked some experts what they think of the security of these three choices: Windows, the ever-more-complex platform that’s easily the most popular desktop system; macOS X, the FreeBSD Unix-based operating system that powers Apple Macintosh systems; and Linux, by which we mean all the various Linux distributions and related Unix-based systems.
|
||||
|
||||
### How we got here
|
||||
|
||||
One reason enterprises might not have evaluated the security of the OS they deployed to the workforce is that they made the choice years ago. Go back far enough and all operating systems were reasonably safe, because the business of hacking into them and stealing data or installing malware was in its infancy. And once an OS choice is made, it’s hard to consider a change. Few IT organizations would want the headache of moving a globally dispersed workforce to an entirely new OS. Heck, they get enough pushback when they move users to a new version of their OS of choice.
|
||||
|
||||
Still, would it be wise to reconsider? Are the three leading desktop OSes different enough in their approach to security to make a change worthwhile?
|
||||
|
||||
Certainly the threats confronting enterprise systems have changed in the last few years. Attacks have become far more sophisticated. The lone teen hacker that once dominated the public imagination has been supplanted by well-organized networks of criminals and shadowy, government-funded organizations with vast computing resources.
|
||||
|
||||
Like many of you, I have firsthand experience of the threats that are out there: I have been infected by malware and viruses on numerous Windows computers, and I even had macro viruses that infected files on my Mac. More recently, a widespread automated hack circumvented the security on my website and infected it with malware. The effects of such malware were always initially subtle, something you wouldn’t even notice, until the malware ended up so deeply embedded in the system that performance started to suffer noticeably. One striking thing about the infestations was that I was never specifically targeted by the miscreants; nowadays, it’s as easy to attack 100,000 computers with a botnet as it is to attack a dozen.
|
||||
|
||||
### Does the OS really matter?
|
||||
|
||||
The OS you deploy to your users does make a difference for your security stance, but it isn’t a sure safeguard. For one thing, a breach these days is more likely to come about because an attacker probed your users, not your systems. A [survey][1] of hackers who attended a recent DEFCON conference revealed that “84 percent use social engineering as part of their attack strategy.” Deploying a secure operating system is an important starting point, but without user education, strong firewalls and constant vigilance, even the most secure networks can be invaded. And of course there’s always the risk of user-downloaded software, extensions, utilities, plug-ins and other software that appears benign but becomes a path for malware to appear on the system.
|
||||
|
||||
And no matter which platform you choose, one of the best ways to keep your system secure is to ensure that you apply software updates promptly. Once a patch is in the wild, after all, the hackers can reverse engineer it and find a new exploit they can use in their next wave of attacks.
|
||||
|
||||
And don’t forget the basics. Don’t use root, and don’t grant guest access to even older servers on the network. Teach your users how to pick really good passwords and arm them with tools such as [1Password][2] that make it easier for them to have different passwords on every account and website they use.
|
||||
|
||||
Because the bottom line is that every decision you make regarding your systems will affect your security, even the operating system your users do their work on.
|
||||
|
||||
**[ To comment on this story, visit[Computerworld's Facebook page][3]. ]**
|
||||
|
||||
### Windows, the popular choice
|
||||
|
||||
If you’re a security manager, it is extremely likely that the questions raised by this article could be rephrased like so: Would we be more secure if we moved away from Microsoft Windows? To say that Windows dominates the enterprise market is to understate the case. [NetMarketShare][4] estimates that a staggering 88% of all computers on the internet are running a version of Windows.
|
||||
|
||||
If your systems fall within that 88%, you’re probably aware that Microsoft has continued to beef up security in the Windows system. Among its improvements have been rewriting and re-rewriting its operating system codebase, adding its own antivirus software system, improving firewalls and implementing a sandbox architecture, where programs can’t access the memory space of the OS or other applications.
|
||||
|
||||
But the popularity of Windows is a problem in itself. The security of an operating system can depend to a large degree on the size of its installed base. For malware authors, Windows provides a massive playing field. Concentrating on it gives them the most bang for their efforts.
|
||||
As Troy Wilkinson, CEO of Axiom Cyber Solutions, explains, “Windows always comes in last in the security world for a number of reasons, mainly because of the adoption rate of consumers. With a large number of Windows-based personal computers on the market, hackers historically have targeted these systems the most.”
|
||||
|
||||
It’s certainly true that, from Melissa to WannaCry and beyond, much of the malware the world has seen has been aimed at Windows systems.
|
||||
|
||||
### macOS X and security through obscurity
|
||||
|
||||
If the most popular OS is always going to be the biggest target, then can using a less popular option ensure security? That idea is a new take on the old — and entirely discredited — concept of “security through obscurity,” which held that keeping the inner workings of software proprietary and therefore secret was the best way to defend against attacks.
|
||||
|
||||
Wilkinson flatly states that macOS X “is more secure than Windows,” but he hastens to add that “macOS used to be considered a fully secure operating system with little chance of security flaws, but in recent years we have seen hackers crafting additional exploits against macOS.”
|
||||
|
||||
In other words, the attackers are branching out and not ignoring the Mac universe.
|
||||
|
||||
Security researcher Lee Muson of Comparitech says that “macOS is likely to be the pick of the bunch” when it comes to choosing a more secure OS, but he cautions that it is not impenetrable, as once thought. Its advantage is that “it still benefits from a touch of security through obscurity versus the still much larger target presented by Microsoft’s offering.”
|
||||
|
||||
Joe Moore of Wolf Solutions gives Apple a bit more credit, saying that “off the shelf, macOS X has a great track record when it comes to security, in part because it isn’t as widely targeted as Windows and in part because Apple does a pretty good job of staying on top of security issues.”
|
||||
|
||||
### And the winner is …
|
||||
|
||||
You probably knew this from the beginning: The clear consensus among experts is that Linux is the most secure operating system. But while it’s the OS of choice for servers, enterprises deploying it on the desktop are few and far between.
|
||||
|
||||
And if you did decide that Linux was the way to go, you would still have to decide which distribution of the Linux system to choose, and things get a bit more complicated there. Users are going to want a UI that seems familiar, and you are going to want the most secure OS.
|
||||
|
||||
As Moore explains, “Linux has the potential to be the most secure, but requires the user be something of a power user.” So, not for everyone.
|
||||
|
||||
Linux distros that target security as a primary feature include [Parrot Linux][5], a Debian-based distro that Moore says provides numerous security-related tools right out of the box.
|
||||
|
||||
Of course, an important differentiator is that Linux is open source. The fact that coders can read and comment upon each other’s work might seem like a security nightmare, but it actually turns out to be an important reason why Linux is so secure, says Igor Bidenko, CISO of Simplex Solutions. “Linux is the most secure OS, as its source is open. Anyone can review it and make sure there are no bugs or back doors.”
|
||||
|
||||
Wilkinson elaborates that “Linux and Unix-based operating systems have less exploitable security flaws known to the information security world. Linux code is reviewed by the tech community, which lends itself to security: By having that much oversight, there are fewer vulnerabilities, bugs and threats.”
|
||||
|
||||
That’s a subtle and perhaps counterintuitive explanation, but by having dozens — or sometimes hundreds — of people read through every line of code in the operating system, the code is actually more robust and the chance of flaws slipping into the wild is diminished. That had a lot to do with why PC World came right out and said Linux is more secure. As Katherine Noyes [explains][6], “Microsoft may tout its large team of paid developers, but it’s unlikely that team can compare with a global base of Linux user-developers around the globe. Security can only benefit through all those extra eyeballs.”
|
||||
|
||||
Another factor cited by PC World is Linux’s better user privileges model: Windows users “are generally given administrator access by default, which means they pretty much have access to everything on the system,” according to Noyes’ article. Linux, in contrast, greatly restricts “root.”
|
||||
|
||||
Noyes also noted that the diversity possible within Linux environments is a better hedge against attacks than the typical Windows monoculture: There are simply a lot of different distributions of Linux available. And some of them are differentiated in ways that specifically address security concerns. Security Researcher Lee Muson of Comparitech offers this suggestion for a Linux distro: “The[Qubes OS][7] is as good a starting point with Linux as you can find right now, with an [endorsement from Edward Snowden][8] massively overshadowing its own extremely humble claims.” Other security experts point to specialized secure Linux distributions such as [Tails Linux][9], designed to run securely and anonymously directly from a USB flash drive or similar external device.
|
||||
|
||||
### Building security momentum
|
||||
|
||||
Inertia is a powerful force. Although there is clear consensus that Linux is the safest choice for the desktop, there has been no stampede to dump Windows and Mac machines in favor of it. Nonetheless, a small but significant increase in Linux adoption would probably result in safer computing for everyone, because in market share loss is one sure way to get Microsoft’s and Apple’s attention. In other words, if enough users switch to Linux on the desktop, Windows and Mac PCs are very likely to become more secure platforms.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.computerworld.com/article/3252823/linux/why-linux-is-better-than-windows-or-macos-for-security.html
|
||||
|
||||
作者:[Dave Taylor][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.computerworld.com/author/Dave-Taylor/
|
||||
[1]:https://www.esecurityplanet.com/hackers/fully-84-percent-of-hackers-leverage-social-engineering-in-attacks.html
|
||||
[2]:http://www.1password.com
|
||||
[3]:https://www.facebook.com/Computerworld/posts/10156160917029680
|
||||
[4]:https://www.netmarketshare.com/operating-system-market-share.aspx?options=%7B%22filter%22%3A%7B%22%24and%22%3A%5B%7B%22deviceType%22%3A%7B%22%24in%22%3A%5B%22Desktop%2Flaptop%22%5D%7D%7D%5D%7D%2C%22dateLabel%22%3A%22Trend%22%2C%22attributes%22%3A%22share%22%2C%22group%22%3A%22platform%22%2C%22sort%22%3A%7B%22share%22%3A-1%7D%2C%22id%22%3A%22platformsDesktop%22%2C%22dateInterval%22%3A%22Monthly%22%2C%22dateStart%22%3A%222017-02%22%2C%22dateEnd%22%3A%222018-01%22%2C%22segments%22%3A%22-1000%22%7D
|
||||
[5]:https://www.parrotsec.org/
|
||||
[6]:https://www.pcworld.com/article/202452/why_linux_is_more_secure_than_windows.html
|
||||
[7]:https://www.qubes-os.org/
|
||||
[8]:https://twitter.com/snowden/status/781493632293605376?lang=en
|
||||
[9]:https://tails.boum.org/about/index.en.html
|
@ -1,3 +1,5 @@
|
||||
translating----geekpi
|
||||
|
||||
College student reflects on getting started in open source
|
||||
======
|
||||
|
||||
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
Easily Run And Integrate AppImage Files With AppImageLauncher
|
||||
======
|
||||
Did you ever download an AppImage file and you didn't know how to use it? Or maybe you know how to use it but you have to navigate to the folder where you downloaded the .AppImage file every time you want to run it, or manually create a launcher for it.
|
||||
|
@ -0,0 +1,38 @@
|
||||
How a university network assistant used Linux in the 90s
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/moneyrecycle_520x292.png?itok=SAaIziNr)
|
||||
In the mid-1990s, I was enrolled in computer science classes. My university’s computer science department provided a SunOS server—a multi-user, multitasking Unix system—for its students. We logged into it and wrote source code for the programming languages we were learning, such as C, C++, and ADA. In those days, well before social networks and instant messaging, we also used the system to communicate with each other, sending emails and using utilities such as `write` and `talk`. We were each also allowed to host a personal website. I enjoyed being able to complete my assignments and contact other users.
|
||||
|
||||
It was my first experience with this type of operating environment, but I soon learned about another operating system that could do the same thing: Linux.
|
||||
|
||||
While I was a student, I also worked part-time at the university. My first position was as a network installer in the Department of Housing and Residence (H&R). This involved connecting student dormitories to the campus network. As this was the university's first dormitory network service, only two buildings and about 75 students had been connected.
|
||||
|
||||
In my second year, the network expanded to cover an additional two buildings. H&R decided to let the university’s Office of Information Technology (OIT) manage this growing operation. I transferred to OIT and started the position of Student Assistant to the OIT Network Manager. That is how I discovered Linux. One of my new responsibilities was to manage the firewall systems that provided network and internet access to the dormitories.
|
||||
|
||||
Each student was registered with their hardware MAC address. Registered students could connect to the dorm network and receive an IP address and a route to the internet. Unlike the other expensive SunOS and VMS servers used by the university, these firewalls used low-cost computers running the free and open source Linux operating system. By the end of the year, the system had registered nearly 500 students.
|
||||
|
||||
![Red hat Linux install disks][1]
|
||||
|
||||
The OIT network staff members were using Linux for HTTP, FTP, and other services. They also used Linux on their personal desktops. That's when I realized I had my hands on a computer system that looked and acted just like the expensive SunOS box in the CS department but without the high cost. Linux could run on commodity x86 hardware, such as a Dell Latitude with 8 MB of RAM and a 133Mhz Intel Pentium CPU. That was the selling point for me! I installed Red Hat Linux 5.2 on a box scavenged from the surplus warehouse and gave my friends login accounts.
|
||||
|
||||
While I used my new Linux server to host my website and provide accounts to my friends, it also offered graphics capabilities over the CS department server. Using the X Windows system, I could browse the web with Netscape Navigator, play music with [XMMS][2], and try out different window managers. I could also download and compile other open source software and write my own code.
|
||||
|
||||
I learned that Linux offered some pretty advanced features, many of which were more convenient than or superior to more mainstream operating systems. For example, many operating systems did not yet offer simple ways to apply updates. In Linux, this was easy, thanks to [autoRPM][3], an update manager written by Kirk Bauer, which sent the root user a daily email with available updates. It had an intuitive interface for reviewing and selecting software updates to install—pretty amazing for the mid-'90s.
|
||||
|
||||
Linux may not have been well-known back then, and it was often received with skepticism, but I was convinced it would survive. And survive it did!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/my-linux-story-student
|
||||
|
||||
作者:[Alan Formy-Duval][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/alanfdoss
|
||||
[1]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/red_hat_linux_install_disks.png?itok=VSw6Cke9 (Red hat Linux install disks)
|
||||
[2]:http://www.xmms.org/
|
||||
[3]:http://www.ccp14.ac.uk/solution/linux/autorpm_redhat7_3.html
|
@ -0,0 +1,84 @@
|
||||
Person with diabetes finds open source and builds her own medical device
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/health_heartbeat.png?itok=P-GXea-p)
|
||||
Dana Lewis is the 2018 Women in [Open Source Community Award][1] winner! Here is her story about how open source improved her health in a big way.
|
||||
|
||||
Dana has Type 1 diabetes and commercially available medical devices were failing her. The continuous glucose monitor (CGM) alarm she was using to manage her blood sugar was not loud enough to wake her up. The product design put her in danger every time she went to sleep.
|
||||
|
||||
"I went to a bunch of manufacturers and asked what they could do, and I was told, 'It’s loud enough for most people.' I was told that 'it’s not a problem for most people, and we're working on it. It'll be out in a future version.’' That was all really frustrating to hear, but at the same time, I didn’t feel like I could do anything about it because it’s an FDA-approved medical device. You can’t change it."
|
||||
|
||||
These obstacles aside, Dana thought that if she could get her data from the device, she could use her phone to make a louder alarm. Toward the end of 2013, she saw a tweet that provided an answer to her problem. The author of the tweet, who is the parent of a child with diabetes, had reverse-engineered a CGM to get the data off his child’s device so that he could monitor his child’s blood sugar remotely.
|
||||
|
||||
She realized that if he was willing to share, she could use the same code to build a louder alarm system.
|
||||
|
||||
"I didn’t understand that it was perfectly normal to ask people to share code. That was my first introduction to open source."
|
||||
|
||||
The system evolved from a louder alarm to a web page where she could share data with her loved ones. Together with her now-husband, Scott Leibrand, she iteratively added features to the page, eventually including an algorithm that could not only monitor glucose levels in real time but also proactively predict future highs and lows.
|
||||
|
||||
As Dana got more involved with the open source diabetes community, she met Ben West. He had spent years figuring out how to communicate with the insulin pump Dana used. Unlike a CGM, which tells the user if their blood sugar is high or low, an insulin pump is a separate device used to continuously infuse insulin throughout the day.
|
||||
|
||||
"A light bulb went off. We said, 'Oh, if we take this code to communicate with the pump with what we’ve done to access the data from the CGM in real time and our algorithm, we can actually process data from both devices in real time and create a closed-loop system.'"
|
||||
|
||||
The result was a do-it-yourself artificial pancreas system (DIY APS).
|
||||
|
||||
Using the algorithm to process data from the insulin pump and CGM, the DIY APS forecasts predicted blood glucose levels and automates adjustments to the insulin delivery, making small changes to keep blood sugar within the target range. This makes life much easier for people with diabetes because they no longer have to calibrate insulin delivery manually several times per day.
|
||||
|
||||
"Because we had been using open source software, we knew that the right thing to do was to turn around and make what we had done open source as well so that other people could leverage it." And thus, OpenAPS (the Open Source Artificial Pancreas System) was born.
|
||||
|
||||
The OpenAPS community has since grown to more than 600 users of various DIY "closed-loop" systems. OpenAPS contributors have embraced the hashtag #WeAreNotWaiting as a mantra to express their belief that patient communities should not have to wait for the healthcare industry to create something that works for them.
|
||||
|
||||
"Yes, you may choose to adopt a commercial solution in the future—that’s totally fine, and you should have the freedom do to that. Waiting should be a choice and not the status quo. To me, what’s amazing about this movement of open source in healthcare is that waiting is now a choice. You can choose not to DIY. You can choose to wait for a commercial solution. But if you don’t want to wait, you don’t have to. There are a plethora of options to take advantage of. A lot of problems have been solved by people in the community."
|
||||
|
||||
The OpenAPS community is made up of people with diabetes, their loved ones, parents of children with diabetes, and people who want to use their skills for good. By helping lead the community, Dana has learned about the many ways of contributing to an open source project. She sees many valuable contributions to OpenAPS come from non-technical contributors on Facebook or [Gitter][2].
|
||||
|
||||
"There are a lot of different ways that people contribute, and it’s important that we recognize all of those because they’re all equally valuable. And they often involve different interests and different skill sets, but together, that’s what makes a community project in open source succeed."
|
||||
|
||||
She knows firsthand how discouraging it can be for contributions to go unrecognized by others in a community. She also isn’t shy about discussing people’s tendency to discount the contributions of women. She first wrote about her experience being treated differently in a [2014 blog post][3] and [reflected on it again][4] when she learned she was a Women in Open Source Award finalist.
|
||||
|
||||
In her first blog post, she and Scott shared the differences in the way they were treated by members of the community. They both noticed that, in subtle ways, Dana was constantly battling to be taken seriously. People often directed technical questions to him instead of her, even after Scott tried to redirect them to Dana. By calling out these behaviors, the post opened up a highly productive discussion in the community.
|
||||
|
||||
"People would talk about the project initially as 'Scott’s thing' instead of 'Dana and Scott’s thing.' It was death by a thousand paper cuts in terms of frustration. I wrote the blog post to call it out. I said, 'Look, for some of you it’s conscious, but for some of you, it’s unconscious. We need to think that if we want this community to grow and support and allow many diverse participants, we need to talk about how we’re behaving.' To their credit, a lot of people in the community stopped and had serious conversations. They said, 'OK, here’s what I’m going to do to change. Call me out if I do it unconsciously.' It was phenomenal."
|
||||
|
||||
She added that if it weren’t for the support of Scott as another active developer in the community, as well as that of other women in the community she could talk to and get encouragement from, she might have stopped.
|
||||
|
||||
"I think that might have totally changed what happened in diabetes in open source if I had just thrown up my hands. I know that happens to other people, and it’s unfortunate. They leave open source because they don’t feel welcome or valued. My hope is that we continue to have the conversation about it and recognize that even if you’re not consciously trying to discourage people, we can all always do better at being more welcoming and engaging and recognizing contributions."
|
||||
|
||||
Communication and sharing about OpenAPS are examples of non-technical contributions that have been critical to the success of the community. Dana’s background in public relations and communications certainly contributed to getting the word out. She has written and spoken extensively about the community on the [DIYPS blog][5], in a [TEDx Talk][6], at [OSCON][7], and more.
|
||||
|
||||
"Not every project that is really impactful to a patient community has made it into the mainstream the way OpenAPS has. The diabetes community has done a really good job communicating about various projects, which brings more people with diabetes in and also gets the attention of people who want to help."
|
||||
|
||||
Her goal now is to help bring to light to other patient community projects. Specifically, she wants to share tools or skills community members have learned with other patient communities looking to take projects to the next level, facilitate research, or work with companies.
|
||||
|
||||
"I also realize that a lot of patients in these projects are told, 'You should patent that. You should create a company. You should create a non-profit.' But all those are huge things. They’re very time-consuming. They take away from your day job or require you to totally switch professions. People like me, we don’t always want to do that, and we shouldn’t have to do that in order to scale projects like this and help other people."
|
||||
|
||||
To this end, she also wants to find other pathways people can take that aren’t all-consuming—for example, writing a children’s book. Dana took on this challenge in 2017 to help her nieces and nephews understand their aunt’s diabetes devices. When her niece asked her what "the thing on her arm was" (her CGM), she realized she didn’t have a point of reference to explain to a young child what it meant to be a person with diabetes. Her solution was [Carolyn’s Robot Relative][8].
|
||||
|
||||
"I wanted to talk to my nieces and nephews in a way that was age-appropriate that also normalizes that people are different in different ways. I was like, 'I wish there was a kid’s book that talks about this. Well, why don’t I write my own?'"
|
||||
|
||||
She wrote the book and published it on Amazon because, true to her open source values, she wanted it to be available to others. She followed up by writing a [blog post about self-publishing a book on Amazon][9] in the hopes that others would publish books that speak to their own experiences.
|
||||
|
||||
Books like Carolyn’s Robot Relative and awards like the Women in Open Source Award speak to the greater need for representation of different kinds of people in many areas of life, including open source.
|
||||
|
||||
"Things are always better when the communities are more diverse."
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/dana-lewis-women-open-source-community-award-winner-2018
|
||||
|
||||
作者:[Taylor Greene][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/tgreene
|
||||
[1]:https://www.redhat.com/en/about/women-in-open-source
|
||||
[2]:https://gitter.im/
|
||||
[3]:https://diyps.org/2014/08/25/being-female-a-patient-and-co-designing-diyps-means-often-being-discounted/
|
||||
[4]:https://diyps.org/2018/02/01/women-in-open-source-make-a-difference/
|
||||
[5]:https://diyps.org/
|
||||
[6]:https://www.youtube.com/watch?v=kgu-AYSnyZ8
|
||||
[7]:https://www.youtube.com/watch?v=eQGWrdgu_fE
|
||||
[8]:https://www.amazon.com/gp/product/1977641415/ref=as_li_tl?ie=UTF8&tag=diyps-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=1977641415&linkId=96bb65e21b5801901586e9fabd12c860
|
||||
[9]:https://diyps.org/2017/11/01/makers-gonna-make-a-book-about-diabetes-devices-kids-book-written-by-danamlewis/
|
@ -1,101 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
|
||||
Easily Install Android Studio in Ubuntu And Linux Mint
|
||||
======
|
||||
[Android Studio][1], Google’s own IDE for Android development, is a nice alternative to Eclipse with the ADT plugin. Android Studio can be installed from its source code, but in this quick post, we shall see **how to install Android Studio in Ubuntu 18.04, 16.04 and the corresponding Linux Mint variants**.
|
||||
|
||||
Before you proceed to install Android Studio, make sure that you have [Java installed in Ubuntu][2].
|
||||
|
||||
![How to install Android Studio in Ubuntu][3]
|
||||
|
||||
### Install Android Studio in Ubuntu and other distributions using Snap
|
||||
|
||||
Ever since Ubuntu started focusing on Snap packages, more software has started providing easy-to-install Snap packages. Android Studio is one of them. Ubuntu users can simply find the Android Studio application in the Software Center and install it from there.
|
||||
|
||||
![Install Android Studio in Ubuntu from Software Center][4]
|
||||
|
||||
If you see an error while installing Android Studio from the Software Center, you can use the [Snap commands][5] to install Android Studio.
|
||||
```
|
||||
sudo snap install android-studio --classic
|
||||
|
||||
```
|
||||
|
||||
Easy peasy!
|
||||
|
||||
### Alternative Method 1: Install Android Studio using umake in Ubuntu
|
||||
|
||||
You can also easily install Android Studio using the Ubuntu Developer Tools Center, now known as [Ubuntu Make][6]. Ubuntu Make provides a command-line tool to install various development tools, IDEs, etc. Ubuntu Make is available in the Ubuntu repository.
|
||||
|
||||
To install Ubuntu Make, use the commands below in a terminal:
|
||||
|
||||
`sudo apt-get install ubuntu-make`
|
||||
|
||||
Once you have installed Ubuntu Make, use the command below to install Android Studio in Ubuntu:
|
||||
```
|
||||
umake android
|
||||
|
||||
```
|
||||
|
||||
It will give you a couple of options in the course of the installation; I presume you can handle them. If you decide to uninstall Android Studio, you can use the same umake tool in the following manner:
|
||||
```
|
||||
umake android --remove
|
||||
|
||||
```
|
||||
|
||||
### Alternative Method 2: Install Android Studio in Ubuntu and Linux Mint via unofficial PPA
|
||||
|
||||
Thanks to [Paolo Rotolo][7], we have a PPA which can be used to easily install Android Studio in Ubuntu 16.04, 14.04, Linux Mint, and other Ubuntu-based distributions. Just note that it will download around 650 MB of data, so mind your internet connection as well as data charges (if any).
|
||||
|
||||
Open a terminal and use the following commands:
|
||||
```
|
||||
sudo apt-add-repository ppa:paolorotolo/android-studio
|
||||
sudo apt-get update
|
||||
sudo apt-get install android-studio
|
||||
|
||||
```
|
||||
|
||||
Was it not easy? While installing a program from source code is fun in a way, it is always nice to have such PPAs. Now that we have seen how to install Android Studio, let's see how to uninstall it.
|
||||
|
||||
### Uninstall Android Studio:
|
||||
|
||||
If you don’t have it already, install PPA Purge:
|
||||
```
|
||||
sudo apt-get install ppa-purge
|
||||
|
||||
```
|
||||
|
||||
Now use the PPA Purge to purge the installed PPA:
|
||||
```
|
||||
sudo apt-get remove android-studio
|
||||
|
||||
sudo ppa-purge ppa:paolorotolo/android-studio
|
||||
|
||||
```
|
||||
|
||||
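If you installed Android Studio with Snap instead of the PPA, removing it should be a single command (assuming the same package name used earlier in this post):

```
sudo snap remove android-studio
```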
That’s it. I hope this quick post helps you **install Android Studio in Ubuntu and Linux Mint**. Before you run Android Studio, make sure to [install Java in Ubuntu][8] first. For similar posts, I suggest reading [how to install and configure Ubuntu SDK][9] and [how to easily install Microsoft Visual Studio in Ubuntu][10].
|
||||
|
||||
Any questions or suggestions are always welcomed. Ciao :)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/install-android-studio-ubuntu-linux/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/abhishek/
|
||||
[1]:http://developer.android.com/sdk/installing/studio.html
|
||||
[2]:https://itsfoss.com/install-java-ubuntu-1404/
|
||||
[3]:https://itsfoss.com/wp-content/uploads/2014/04/Android_Studio_Ubuntu.jpeg
|
||||
[4]:https://itsfoss.com/wp-content/uploads/2014/04/install-android-studio-snap-800x469.jpg
|
||||
[5]:https://itsfoss.com/install-snap-linux/
|
||||
[6]:https://wiki.ubuntu.com/ubuntu-make
|
||||
[7]:https://plus.google.com/+PaoloRotolo
|
||||
[8]:https://itsfoss.com/install-java-ubuntu-1404/ (How To Install Java On Ubuntu 14.04)
|
||||
[9]:https://itsfoss.com/install-configure-ubuntu-sdk/
|
||||
[10]:https://itsfoss.com/install-visual-studio-code-ubuntu/
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
3 ways robotics affects the CIO role
|
||||
======
|
||||
![配图](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai.png?itok=toMIgELj)
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Testing IPv6 Networking in KVM: Part 2
|
||||
======
|
||||
|
||||
|
@ -1,79 +0,0 @@
|
||||
Translating by FelixYFZ

10 easy steps from proprietary to open source
|
||||
======
|
||||
"But surely open source software is less secure, because everybody can see it, and they can just recompile it and replace it with bad stuff they've written." Hands up: who's heard this?1
|
||||
|
||||
When I talk to customers--yes, they let me talk to customers sometimes--and to folks in the field2 this comes up quite frequently. In a previous article, "[Review by many eyes does not always prevent buggy code][1]", I talked about how open source software--particularly security software--isn't magically more secure than proprietary software, but I'd still go with open source over proprietary every time. But the way I've heard the particular question--about open source software being less secure--suggests that sometimes it's not enough to just explain that open source needs work, but we must also actively engage in [apologetics][2]3.
|
||||
|
||||
So here goes. I don't expect it to be up to Newton's or Wittgenstein's levels of logic, but I'll do what I can, and I'll summarise at the bottom so you have a quick list of the points if you want it.
|
||||
|
||||
### The arguments
|
||||
|
||||
First, we should accept that no software is perfect6. Not proprietary software, not open source software. Second, we should accept that good proprietary software exists, and third, there is also some bad open source software out there. Fourth, there are extremely intelligent, gifted, and dedicated architects, designers, and software engineers who create proprietary software.
|
||||
|
||||
But here's the rub: fifth, there is a limited pool of people who will work on or otherwise look at proprietary software. And you can never hire all the best people. Even in government and public sector organisations--who often have a larger talent pool available to them, particularly for cough security-related cough applications--the pool is limited.
|
||||
|
||||
Sixth, the pool of people available to look at, test, improve, break, re-improve, and roll out open source software is almost unlimited and does include the best people. Seventh (and I love this one), the pool also includes many of the people writing the proprietary software. Eighth, many of the applications being written by public sector and government organisations are open sourced anyway.
|
||||
|
||||
Ninth, if you're worried about running open source software that is unsupported or comes from dodgy, un-provenanced sources, then good news: There are a bunch of organisations7 who will check the provenance of that code, support, maintain, and patch it. They'll do it along the same type of business lines that you'd expect from a proprietary software provider. You can also ensure that the software you get from them is the right software: Their standard technique is to sign bundles of software so you can verify that what you're installing isn't from some random bad person who's taken that code and done Bad Things™ with it.
|
||||
|
||||
Tenth (and here's the point of this article), when you run open source software, when you test it, when you provide feedback on issues, when you discover errors and report them, you are tapping into--and adding to--the commonwealth of knowledge and expertise and experience that is open source, which is made only greater by your doing so. If you do this yourself, or through one of the businesses that support open source software8, you are part of this commonwealth. Things get better with open source software, and you can see them getting better. Nothing is hidden--it's, well, open. Can things get worse? Yes, they can, but we can see when that happens and fix it.
|
||||
|
||||
This commonwealth does not apply to proprietary software: what stays hidden does not enlighten or enrich the world.
|
||||
|
||||
I know that I need to be careful about the use of the "commonwealth" as a Briton; it has connotations of (faded…) empires, which I don't intend in this case. It's probably not what Cromwell9 had in mind when he talked about the "Commonwealth," either, and anyway, he's a somewhat controversial historical figure. What I'm talking about is a concept in which I think the words deserve concatenation--"common" and "wealth"--to show that we're talking about something more than just money, but shared wealth available to all of humanity.
|
||||
|
||||
I really believe in this. If you want to take away a religious message from this article, it should be this10: the commonwealth is our heritage, our experience, our knowledge, our responsibility. The commonwealth is available to all of humanity. We have it in common, and it is an almost inestimable wealth.
|
||||
|
||||
### A handy crib sheet
|
||||
|
||||
1. (Almost) no software is perfect.
|
||||
2. There is good proprietary software.
|
||||
3. There is bad open source software.
|
||||
4. There are clever, talented, and devoted people who create proprietary software.
|
||||
5. The pool of people available to write and improve proprietary software is limited, even within the public sector and government realm.
|
||||
6. The corresponding pool of people for open source is virtually unlimited…
|
||||
7. …and includes a goodly number of the talent pool of people writing proprietary software.
|
||||
8. Public sector and government organisations often open source their software anyway.
|
||||
9. There are businesses that will support open source software for you.
|
||||
10. Contribution--even usage--adds to the commonwealth.
|
||||
|
||||
|
||||
|
||||
1 OK--you can put your hands down now.
|
||||
|
||||
2 Should this be capitalized? Is there a particular field, or how does it work? I'm not sure.
|
||||
|
||||
3 I have a degree in English literature and theology--this probably won't surprise regular readers of my articles.4
|
||||
|
||||
4 Not, I hope, because I spout too much theology,5 but because it's often full of long-winded, irrelevant humanities (U.S. English: "liberal arts") references.
|
||||
|
||||
5 Emacs. Every time.
|
||||
|
||||
6 Not even Emacs. And yes, I know that there are techniques to prove the correctness of some software. (I suspect that Emacs doesn't pass many of them…)
|
||||
|
||||
7 Hand up here: I'm employed by one of them, [Red Hat][3]. Go have a look--it's a fun place to work, and [we're usually hiring][4].
|
||||
|
||||
8 Assuming that they fully abide by the rules of the open source licence(s) they're using, that is.
|
||||
|
||||
9 Erstwhile "Lord Protector of England, Scotland, and Ireland"--that Cromwell.
|
||||
|
||||
10 Oh, and choose Emacs over Vi variants, obviously.
|
||||
|
||||
This article originally appeared on [Alice, Eve, and Bob - a security blog][5] and is republished with permission.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/11/commonwealth-open-source
|
||||
|
||||
作者:[Mike Bursell][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/mikecamel
|
||||
[1]:https://opensource.com/article/17/10/many-eyes
|
||||
[2]:https://en.wikipedia.org/wiki/Apologetics
|
||||
[3]:https://www.redhat.com/
|
||||
[4]:https://www.redhat.com/en/jobs
|
||||
[5]:https://aliceevebob.com/2017/10/24/the-commonwealth-of-open-source/
|
@ -1,3 +1,5 @@
|
||||
translating by cizezsy
|
||||
|
||||
How To Kill The Largest Process In An Unresponsive Linux System
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2017/11/Kill-The-Largest-Process-720x340.png)
|
||||
|
@ -1,141 +0,0 @@
|
||||
HeRM’s - A Commandline Food Recipes Manager
|
||||
======
|
||||
![配图](https://www.ostechnix.com/wp-content/uploads/2017/12/herms-720x340.jpg)
|
||||
|
||||
Cooking is love made visible, isn't it? Indeed! Whether cooking is your passion, hobby, or profession, I am sure you maintain a cooking journal. Keeping a cooking journal is one way to improve your cooking practice. There are many ways to take notes about recipes. You could maintain a small diary/notebook, store the notes on your smartphone, or save them in a word document on your computer. There are a multitude of options. Today, I introduce **HeRM's**, a Haskell-based command-line food recipe manager for taking notes about your delicious food recipes. Using HeRM's, you can add, view, edit, and delete food recipes and even make shopping lists, all from your terminal! It is a free and open source utility written in the Haskell programming language. The source code is freely available on GitHub, so you can fork it, add more features, or improve it.
|
||||
|
||||
### HeRM's - A Commandline Food Recipes Manager
|
||||
|
||||
#### **Installing HeRM's**
|
||||
|
||||
Since it is written in Haskell, we need to install Cabal first. Cabal is a command-line program for downloading and building software written in the Haskell programming language. Cabal is available in the core repositories of most Linux distributions, so you can install it using your distribution's default package manager.
|
||||
|
||||
For instance, you can install Cabal in Arch Linux and its variants such as Antergos and Manjaro Linux using the command:
|
||||
```
|
||||
sudo pacman -S cabal-install
|
||||
```
|
||||
|
||||
On Debian, Ubuntu:
|
||||
```
|
||||
sudo apt-get install cabal-install
|
||||
```
|
||||
|
||||
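On Fedora and similar distributions, the package is presumably available under the same name (an assumption; check your distribution's repositories):

```
sudo dnf install cabal-install
```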
After installing Cabal, make sure you have added it to your PATH. To do so, edit your **~/.bashrc** file:
|
||||
```
|
||||
vi ~/.bashrc
|
||||
```
|
||||
|
||||
Add the following line:
|
||||
```
|
||||
PATH=$PATH:~/.cabal/bin
|
||||
```
|
||||
|
||||
Type **:wq** to save and quit the file. Then, run the following command to apply the changes:
|
||||
```
|
||||
source ~/.bashrc
|
||||
```
|
||||
|
||||
Once Cabal is installed, run the following command to install HeRM's:
|
||||
```
|
||||
cabal install herms
|
||||
```
|
||||
|
||||
Have a cup of coffee! This will take a while. After a couple of minutes, you will see output something like below.
|
||||
```
|
||||
[...]
|
||||
Linking dist/build/herms/herms ...
|
||||
Installing executable(s) in /home/sk/.cabal/bin
|
||||
Installed herms-1.8.1.2
|
||||
```
|
||||
|
||||
Congratulations! Herms is installed.
|
||||
|
||||
#### **Adding recipes**
|
||||
|
||||
Let us add a food recipe, for example **Dosa**. For those wondering, Dosa is a popular south Indian food served hot with **sambar** and **chutney**. It is healthy and arguably one of the most delicious foods. It contains no added sugars or saturated fats, and it is also easy to make. There are a couple of different types of Dosas; the most common one served in our home is plain Dosa.
|
||||
|
||||
To add a recipe, type:
|
||||
```
|
||||
herms add
|
||||
```
|
||||
|
||||
You will see a screen something like below. Start entering the recipe's details.
|
||||
|
||||
[![][1]][2]
|
||||
|
||||
To navigate through the fields, use the following keyboard shortcuts:
|
||||
|
||||
* **Tab / Shift+Tab** - Next / Previous field
|
||||
* **Ctrl + <Arrow keys>** - Navigate fields
|
||||
* **[Meta or Alt] + <h-j-k-l>** - Navigate fields
|
||||
* **Esc** - Save or Cancel.
|
||||
|
||||
|
||||
|
||||
Once you have added the recipe's details, press the ESC key and hit Y to save it. Similarly, you can add as many recipes as you want.
|
||||
|
||||
To list the added recipes, type:
|
||||
```
|
||||
herms list
|
||||
```
|
||||
|
||||
[![][1]][3]
|
||||
|
||||
To view the details of any recipe listed above, just use its number, as shown below.
|
||||
```
|
||||
herms view 1
|
||||
```
|
||||
|
||||
[![][1]][4]
|
||||
|
||||
To edit any recipes, use:
|
||||
```
|
||||
herms edit 1
|
||||
```
|
||||
|
||||
Once you have made the changes, press the ESC key. You'll be asked whether you want to save or not; just choose the appropriate option.
|
||||
|
||||
[![][1]][5]
|
||||
|
||||
To delete a recipe, the command would be:
|
||||
```
|
||||
herms remove 1
|
||||
```
|
||||
|
||||
To generate a shopping list for a given recipe(s), run:
|
||||
```
|
||||
herms shopping 1
|
||||
```
|
||||
|
||||
[![][1]][6]
|
||||
|
||||
For help, run:
|
||||
```
|
||||
herms -h
|
||||
```
|
||||
|
||||
The next time you overhear a conversation about a good recipe from a colleague, a friend, or somewhere else, just open HeRM's, quickly take a note, and share it with your spouse. She would be delighted!
|
||||
|
||||
And that's all. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/herms-commandline-food-recipes-manager/
|
||||
|
||||
作者:[][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com
|
||||
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2017/12/Make-Dosa-1.png ()
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-1-1.png ()
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-2.png ()
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-3.png ()
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-4.png ()
|
@ -1,3 +1,6 @@
|
||||
Translating by MjSeven
|
||||
|
||||
|
||||
How to create mobile-friendly documentation
|
||||
======
|
||||
![配图](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd)
|
||||
|
@ -1,113 +0,0 @@
|
||||
2 scientific calculators for the Linux desktop
|
||||
======
|
||||
|
||||
Translating by zyk2290
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OpenData_CityNumbers.png?itok=lC03ce76)
|
||||
|
||||
Image by : opensource.com
|
||||
|
||||
Every Linux desktop environment comes with at least a simple desktop calculator, but most of those simple calculators are just that: a simple tool for simple calculations.
|
||||
|
||||
Fortunately, there are exceptions: programs that go far beyond square roots and a couple of trigonometric functions, yet are still easy to use. Here are two powerful calculator tools for Linux, plus a couple of bonus options.
|
||||
|
||||
### SpeedCrunch
|
||||
|
||||
[SpeedCrunch][1] is a high-precision scientific calculator with a simple Qt5 graphical interface and strong focus on the keyboard.
|
||||
|
||||
![SpeedCrunch graphical interface][3]
|
||||
|
||||
|
||||
SpeedCrunch at work
|
||||
|
||||
It supports working with units and comes loaded with all kinds of functions.
|
||||
|
||||
For example, by writing:
|
||||
`2 * 10^6 newton / (meter^2)`
|
||||
|
||||
you get:
|
||||
`= 2000000 pascal`
|
||||
|
||||
By default, SpeedCrunch delivers its results in the international unit system, but units can be transformed with the "in" instruction.
|
||||
|
||||
For example:
|
||||
`3*10^8 meter / second in kilo meter / hour`
|
||||
|
||||
produces:
|
||||
`= 1080000000 kilo meter / hour`
|
||||
|
||||
With the `F5` key, all results will turn into scientific notation (`1.08e9 kilo meter / hour`), while with `F2` only numbers that are small enough or big enough will change. More options are available on the Configuration menu.
|
||||
|
||||
The list of available functions is really impressive. It works on Linux, Windows, and MacOS, and it's licensed under GPLv2; you can access its source code on [Bitbucket][4].
|
||||
|
||||
### Qalculate!
|
||||
|
||||
[Qalculate!][5] (with the exclamation point) has a long and complex history.
|
||||
|
||||
The project offers a powerful library that can be used by other programs (the Plasma desktop can use it to perform calculations from krunner) and a graphical interface built on GTK3. It allows you to work with units, handle physical constants, create graphics, use complex numbers, matrices, and vectors, choose arbitrary precision, and more.
|
||||
|
||||
|
||||
![Qalculate! Interface][7]
|
||||
|
||||
|
||||
Looking for some physical constants on Qalculate!
|
||||
|
||||
Its use of units is far more intuitive than SpeedCrunch's and it understands common prefixes without problem. Have you heard of an exapascal pressure? I hadn't (the Sun's core stops at `~26 PPa`), but Qalculate! has no problem understanding the meaning of `1 EPa`. Also, Qalculate! is more flexible with syntax errors, so you don't need to worry about closing all those parentheses: if there is no ambiguity, Qalculate! will give you the right answer.
|
||||
|
||||
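Qalculate! also ships a command-line front end, `qalc` (usually packaged alongside libqalculate; treat its availability and exact behavior on your distribution as an assumption), which understands the same kind of expressions. A quick sketch:

```
qalc "1 EPa to Pa"
qalc "3 * 10^8 m/s to km/h"
```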
After a long period during which the project seemed orphaned, it came back to life in 2016 and has been going strong since, with more than 10 versions in just one year. It's licensed under GPLv2 (with source code on [GitHub][8]) and offers versions for Linux and Windows, as well as a MacOS port.
|
||||
|
||||
### Bonus calculators
|
||||
|
||||
#### ConvertAll
|
||||
|
||||
OK, it's not a "calculator," yet this simple application is incredibly useful.
|
||||
|
||||
Most unit converters stop at a long list of basic units and a bunch of common combinations, but not [ConvertAll][9]. Trying to convert from astronomical units per year into inches per second? It doesn't matter whether it makes sense or not: if you need to transform a unit of any kind, ConvertAll is the tool for you.
|
||||
|
||||
Just write the starting unit and the final unit in the corresponding boxes; if the units are compatible, you'll get the transformation without protest.
|
||||
|
||||
The main application is written in PyQt5, but there is also an [online version written in JavaScript][10].
|
||||
|
||||
#### (wx)Maxima with the units package
|
||||
|
||||
Sometimes (OK, many times) a desktop calculator is not enough and you need more raw power.
|
||||
|
||||
[Maxima][11] is a computer algebra system (CAS) with which you can do derivatives, integrals, series, equations, eigenvectors and eigenvalues, Taylor series, Laplace and Fourier transformations, as well as numerical calculations with arbitrary precision and graphs in two and three dimensions… we could fill several pages just listing its capabilities.
|
||||
|
||||
[wxMaxima][12] is a well-designed graphical frontend for Maxima that simplifies the use of many Maxima options without compromising others. On top of the full power of Maxima, wxMaxima allows you to create "notebooks" on which you write comments, keep your graphics with your math, etc. One of the (wx)Maxima combo's most impressive features is that it works with dimension units.
|
||||
|
||||
On the prompt, just type:
|
||||
`load("unit")`
|
||||
|
||||
press Shift+Enter, wait a few seconds, and you'll be ready to work.
|
||||
|
||||
By default, the unit package works with the basic MKS units, but if you prefer, for instance, to get `N` instead of `kg*m/s^2`, you just need to type:
|
||||
`setunits(N)`
|
||||
|
||||
Maxima's help (which is also available from wxMaxima's help menu) will give you more information.
|
||||
|
||||
Do you use these programs? Do you know another great desktop calculator for scientists and engineers or another related tool? Tell us about them in the comments!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/scientific-calculators-linux
|
||||
|
||||
作者:[Ricardo Berlasso][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/rgb-es
|
||||
[1]:http://speedcrunch.org/index.html
|
||||
[2]:/file/382511
|
||||
[3]:https://opensource.com/sites/default/files/u128651/speedcrunch.png (SpeedCrunch graphical interface)
|
||||
[4]:https://bitbucket.org/heldercorreia/speedcrunch
|
||||
[5]:https://qalculate.github.io/
|
||||
[6]:/file/382506
|
||||
[7]:https://opensource.com/sites/default/files/u128651/qalculate-600.png (Qalculate! Interface)
|
||||
[8]:https://github.com/Qalculate
|
||||
[9]:http://convertall.bellz.org/
|
||||
[10]:http://convertall.bellz.org/js/
|
||||
[11]:http://maxima.sourceforge.net/
|
||||
[12]:https://andrejv.github.io/wxmaxima/
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
5 open source software tools for supply chain management
|
||||
======
|
||||
|
||||
|
@ -1,312 +0,0 @@
|
||||
Translating by qhwdw
|
||||
How to apply Machine Learning to IoT using Android Things and TensorFlow
|
||||
============================================================
|
||||
|
||||
This project explores how to apply Machine Learning to IoT. In more details, as IoT platform, we will use **Android Things** and as Machine Learning engine we will use **Google TensorFlow**.
|
||||
|
||||
![Machine Learning with Android Things](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/machine_learning_android_things.png)
|
||||
|
||||
Nowadays, Machine Learning, together with the Internet of Things, is one of the most interesting technological topics. To give a simple definition of Machine Learning, we can refer to the [Wikipedia definition][13]: Machine learning is a field of computer science that gives computer systems the ability to “learn” (i.e. progressively improve performance on a specific task) with data, without being explicitly programmed.
|
||||
|
||||
In other words, after a training step, a system can predict outcomes even if it is not specifically programmed for them. On the other hand, we all know IoT and the concept of connected devices. One of the most promising topics is how to apply Machine Learning to IoT, building expert systems: systems that are able to “learn” and then use that knowledge to control and manage physical objects.
|
||||
|
||||
There are several fields where applying Machine Learning and IoT produce an important value, just to mention a few interesting fields, there are:
|
||||
|
||||
* Industrial IoT (IIoT), for predictive maintenance
|
||||
|
||||
* Consumer IoT, where Machine Learning can make a device intelligent so that it can adapt to our habits
|
||||
|
||||
In this tutorial, we want to explore how to apply Machine Learning to IoT using Android Things and TensorFlow. The basic idea behind this Android Things IoT project is exploring how to build a _robot car that is able to recognize some basic shapes (like arrows) and steer itself accordingly_. We have already covered [how to build a robot car using Android Things][5], so I suggest you read that tutorial before starting this project.
|
||||
|
||||
This Machine Learning and IoT project cover these main topics:
|
||||
|
||||
* How to set up the TensorFlow environment using Docker
|
||||
|
||||
* How to train the TensorFlow system
|
||||
|
||||
* How to integrate TensorFlow with Android Things
|
||||
|
||||
* How to control the robot car using TensorFlow result
|
||||
|
||||
This project is derived from [Android Things TensorFlow image classifier][6].
|
||||
|
||||
Let us start!
|
||||
|
||||
### How to use Tensorflow image recognition
|
||||
|
||||
Before starting, it is necessary to install and configure the TensorFlow environment. I’m not a Machine Learning expert, so I needed to find something fast and ready to use to build the TensorFlow image classifier. For this reason, we can use Docker to run a TensorFlow image. Follow these steps:
|
||||
|
||||
1. Clone the TensorFlow repository:
|
||||
```
|
||||
git clone https://github.com/tensorflow/tensorflow.git
|
||||
cd /tensorflow
|
||||
git checkout v1.5.0
|
||||
```
|
||||
|
||||
2. Create a directory (`/tf-data`) that will hold all the files that we will use during the project.
|
||||
|
||||
3. Run Docker:
|
||||
```
|
||||
docker run -it \
|
||||
--volume /tf-data:/tf-data \
|
||||
--volume /tensorflow:/tensorflow \
|
||||
--workdir /tensorflow tensorflow/tensorflow:1.5.0 bash
|
||||
```
|
||||
|
||||
Using this command, we run an interactive TensorFlow environment and mount some directories that we will use during the project.
|
||||
|
||||
### How to Train TensorFlow to recognize images
|
||||
|
||||
Before the Android Things system is able to recognize images, it is necessary to train the TensorFlow engine so that it can build its model. For this purpose, it is necessary to gather several images. As said before, we want to use arrows to control the Android Things robot car, so we have to collect at least four arrow types:
|
||||
|
||||
* up arrow
|
||||
|
||||
* down arrow
|
||||
|
||||
* left arrow
|
||||
|
||||
* right arrow
|
||||
|
||||
To train the system, it is necessary to create a “knowledge base” with these four different image categories. Create a directory called `images` in `/tf-data` and, under it, four sub-directories named as follows (a one-line command to create this layout is shown after the list):
|
||||
|
||||
* up-arrow
|
||||
|
||||
* down-arrow
|
||||
|
||||
* left-arrow
|
||||
|
||||
* right-arrow
|
||||
|
||||
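A quick way to create this layout in one go, using bash brace expansion (run it on the host or inside the container, where `/tf-data` is mounted):

```
mkdir -p /tf-data/images/{up-arrow,down-arrow,left-arrow,right-arrow}
```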
Now it is time to look for the images. I have used Google Image search, but you can use other approaches too. To simplify the image download process, you can install a Chrome plugin that downloads all the images with only one click. Remember: the more images you download, the better the training process will be, even though the time to create the model may increase.
|
||||
|
||||
**You may like also**
|
||||
[How to integrate Android Things using API][2]
|
||||
[How to use Android Things with Firebase][3]
|
||||
|
||||
Open your browser and start looking for the four image categories:
|
||||
|
||||
![TensorFlow image classifier](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/TensorFlow-image-classifier.png)
|
||||
[Save][7]
|
||||
|
||||
I have downloaded 80 images for each category. Do not worry about the image extensions.
|
||||
|
||||
Once all the categories have their images follow these steps (in the Docker interface):
|
||||
|
||||
```
|
||||
python /tensorflow/examples/image_retraining/retrain.py \
|
||||
--bottleneck_dir=tf_files/bottlenecks \
|
||||
--how_many_training_steps=4000 \
|
||||
--output_graph=/tf-data/retrained_graph.pb \
|
||||
--output_labels=/tf-data/retrained_labels.txt \
|
||||
--image_dir=/tf-data/images
|
||||
```
|
||||
|
||||
It could take some time, so be patient. At the end, you should have two files in the `/tf-data` folder:
|
||||
|
||||
1. retrained_graph.pb
|
||||
|
||||
2. retrained_labels.txt
|
||||
|
||||
The first file contains our model as the result of the TensorFlow training process while the second file contains the labels related to our four image categories.
|
||||
|
||||
### How to test the Tensorflow model
|
||||
|
||||
If you want to test the model to check if everything is working you can use this command:
|
||||
|
||||
```
|
||||
python scripts.label_image \
|
||||
--graph=/tf-data/retrained_graph.pb \
|
||||
--image=/tf-data/images/[category]/[image_name.jpg]
|
||||
```
|
||||
|
||||
### Optimizing the model
|
||||
|
||||
Before we can use this TensorFlow model in the Android Things project, it is necessary to optimize it:
|
||||
|
||||
```
|
||||
python /tensorflow/python/tools/optimize_for_inference.py \
|
||||
--input=/tf-data/retrained_graph.pb \
|
||||
--output=/tf-data/opt_graph.pb \
|
||||
--input_names="Mul" \
|
||||
--output_names="final_result"
|
||||
```
|
||||
|
||||
That’s all: we have our model. We will use this model to apply Machine Learning to IoT, or, in more detail, to integrate Android Things with TensorFlow. The goal is to give the Android Things app the intelligence to recognize arrow images and react accordingly, controlling the robot car’s direction.
|
||||
|
||||
If you want more details about TensorFlow and how to generate the model, look at the official documentation and at this [tutorial][8].
|
||||
|
||||
### How to apply Machine Learning to IoT using Android Things and TensorFlow
|
||||
|
||||
Once the TensorFlow data model is ready, we can move to the next step: how to integrate Android Things with TensorFlow. For this purpose, we can split the task into two steps:
|
||||
|
||||
1. The hardware part, where we connect motors and other peripherals to the Android Things board
|
||||
|
||||
2. Implementing the app
|
||||
|
||||
### Android Things Schematics
|
||||
|
||||
Before digging into the details about how to connect peripherals, this is the list of components used in this Android Things project:
|
||||
|
||||
1. Android Things board (Raspberry Pi 3)
|
||||
|
||||
2. Raspberry Pi Camera
|
||||
|
||||
3. One LED
|
||||
|
||||
4. LN298N Dual H Bridge (to control the motors)
|
||||
|
||||
5. A robot car chassis with two wheels
|
||||
|
||||
I do not cover [how to control motors using Android Things][9] again, because we already covered it in a previous post.
|
||||
|
||||
Below are the schematics:
|
||||
|
||||
![Integrating Android Things with IoT](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/tensor_bb.png)
|
||||
[Save][10]
|
||||
|
||||
In the picture above, the camera is not shown. The final result is:
|
||||
|
||||
![Integrating Android Things with TensorFlow](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/android_things_with_tensorflow-min.jpg)
|
||||
[Save][11]
|
||||
|
||||
### Implementing the Android Things app with TensorFlow
|
||||
|
||||
The last step is implementing the Android Things app. For this purpose, we can re-use the example available on GitHub named [sample TensorFlow image classifier][12]. Before starting, clone the GitHub repository so that you can modify the source code.
|
||||
|
||||
This Android Things app is different from the original app because:
|
||||
|
||||
1. It does not use the button to start the camera and capture the image
|
||||
|
||||
2. It uses a different model
|
||||
|
||||
3. It uses a blinking LED to signal that the camera will take the picture after the LED stops blinking
|
||||
|
||||
4. It controls the motors when TensorFlow detects an image (arrows). Moreover, it turns on the motors for 5 seconds before starting the loop from step 3
|
||||
|
||||
To handle a blinking LED, use the following code:
|
||||
|
||||
```
|
||||
private Handler blinkingHandler = new Handler();
|
||||
private Runnable blinkingLED = new Runnable() {
|
||||
@Override
|
||||
public void run() {
|
||||
try {
|
||||
// If the motor is running the app does not start the cam
|
||||
if (mc.getStatus())
|
||||
return ;
|
||||
|
||||
Log.d(TAG, "Blinking..");
|
||||
mReadyLED.setValue(!mReadyLED.getValue());
|
||||
if (currentValue <= NUM_OF_TIMES) {
|
||||
currentValue++;
|
||||
blinkingHandler.postDelayed(blinkingLED,
|
||||
BLINKING_INTERVAL_MS);
|
||||
}
|
||||
else {
|
||||
mReadyLED.setValue(false);
|
||||
currentValue = 0;
|
||||
mBackgroundHandler.post(mBackgroundClickHandler);
|
||||
}
|
||||
} catch (IOException e) {
|
||||
e.printStackTrace();
|
||||
}
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
When the LED stops blinking, the app captures the image.
|
||||
|
||||
Now it is necessary to focus on how to control the motors according to the image detected. Modify the method:
|
||||
|
||||
```
|
||||
@Override
|
||||
public void onImageAvailable(ImageReader reader) {
|
||||
final Bitmap bitmap;
|
||||
try (Image image = reader.acquireNextImage()) {
|
||||
bitmap = mImagePreprocessor.preprocessImage(image);
|
||||
}
|
||||
|
||||
final List<Classifier.Recognition> results =
|
||||
mTensorFlowClassifier.doRecognize(bitmap);
|
||||
|
||||
Log.d(TAG,
|
||||
"Got the following results from Tensorflow: " + results);
|
||||
|
||||
// Check the result
|
||||
if (results == null || results.size() == 0) {
|
||||
Log.d(TAG, "No command..");
|
||||
blinkingHandler.post(blinkingLED);
|
||||
return ;
|
||||
}
|
||||
|
||||
Classifier.Recognition rec = results.get(0);
|
||||
Float confidence = rec.getConfidence();
|
||||
Log.d(TAG, "Confidence " + confidence.floatValue());
|
||||
|
||||
if (confidence.floatValue() < 0.55) {
|
||||
Log.d(TAG, "Confidence too low..");
|
||||
blinkingHandler.post(blinkingLED);
|
||||
return ;
|
||||
}
|
||||
|
||||
String command = rec.getTitle();
|
||||
Log.d(TAG, "Command: " + rec.getTitle());
|
||||
|
||||
if (command.indexOf("down") != -1)
|
||||
mc.backward();
|
||||
else if (command.indexOf("up") != -1)
|
||||
mc.forward();
|
||||
else if (command.indexOf("left") != -1)
|
||||
mc.turnLeft();
|
||||
else if (command.indexOf("right") != -1)
|
||||
mc.turnRight();
|
||||
}
|
||||
```
|
||||
|
||||
In this method, after TensorFlow returns the possible labels matching the captured image, the app compares the result with the possible directions and controls the motors accordingly.
|
||||
|
||||
Finally, it is time to use the model created at the beginning. Copy `opt_graph.pb` and `retrained_labels.txt` into the _assets_ folder, replacing the existing files.
|
||||
|
||||
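Assuming the usual Android project layout of the sample, copying the files might look like this (the destination path is a placeholder; adjust it to wherever you cloned the sample project):

```
cp /tf-data/opt_graph.pb /tf-data/retrained_labels.txt \
   sample-tensorflow-imageclassifier/app/src/main/assets/
```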
Open the `Helper.java` and modify the following lines:
|
||||
|
||||
```
|
||||
public static final int IMAGE_SIZE = 299;
|
||||
private static final int IMAGE_MEAN = 128;
|
||||
private static final float IMAGE_STD = 128;
|
||||
private static final String LABELS_FILE = "retrained_labels.txt";
|
||||
public static final String MODEL_FILE = "file:///android_asset/opt_graph.pb";
|
||||
public static final String INPUT_NAME = "Mul";
|
||||
public static final String OUTPUT_OPERATION = "output";
|
||||
public static final String OUTPUT_NAME = "final_result";
|
||||
```
|
||||
|
||||
Run the app and have fun showing arrows to the camera and checking the result. The robot car should move according to the arrow shown.
|
||||
|
||||
### Summary
|
||||
|
||||
At the end of this tutorial, we have discovered how to apply Machine Learning to IoT using Android Things and TensorFlow. We can control the robot car using images and make it move according to the image shown.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html
|
||||
|
||||
作者:[Francesco Azzola][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.survivingwithandroid.com/author/francesco-azzolagmail-com
|
||||
[1]:https://www.survivingwithandroid.com/author/francesco-azzolagmail-com
|
||||
[2]:https://www.survivingwithandroid.com/2017/11/building-a-restful-api-interface-using-android-things.html
|
||||
[3]:https://www.survivingwithandroid.com/2017/10/synchronize-android-things-with-firebase-real-time-control-firebase-iot.html
|
||||
[4]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Machine%20Learning%20with%20Android%20Things
|
||||
[5]:https://www.survivingwithandroid.com/2017/12/building-a-remote-controlled-car-using-android-things-gpio.html
|
||||
[6]:https://github.com/androidthings/sample-tensorflow-imageclassifier
|
||||
[7]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=TensorFlow%20image%20classifier
|
||||
[8]:https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0
|
||||
[9]:https://www.survivingwithandroid.com/2017/12/building-a-remote-controlled-car-using-android-things-gpio.html
|
||||
[10]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Integrating%20Android%20Things%20with%20IoT
|
||||
[11]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Integrating%20Android%20Things%20with%20TensorFlow
|
||||
[12]:https://github.com/androidthings/sample-tensorflow-imageclassifier
|
||||
[13]:https://en.wikipedia.org/wiki/Machine_learning
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Dynamic Linux Routing with Quagga
|
||||
============================================================
|
||||
|
||||
|
@ -1,150 +0,0 @@
|
||||
Translating by qhwdw
|
||||
How to build a digital pinhole camera with a Raspberry Pi
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rasp-pi-pinhole-howto.png?itok=ubmevVZB)
|
||||
At the tail end of 2015, the Raspberry Pi Foundation surprised the world by releasing the diminutive [Raspberry Pi Zero][1]. What's more, they [gave it away for free][2] on the cover of the MagPi magazine. I immediately rushed out and trawled around several newsagents until I found the last two copies in the area. I wasn't sure what I would use them for, but I knew their small size would allow me to do interesting projects that a full-sized Pi could not satisfy.
|
||||
|
||||
|
||||
![Raspberry Pi Zero][4]
|
||||
|
||||
Raspberry Pi Zero from MagPi magazine. CC BY-SA.4.0.
|
||||
|
||||
Because of my interest in astrophotography, I had previously modified a Microsoft LifeCam Cinema HD webcam, stripping its case, lens, and infrared cut filter to expose the bare [CCD chip][5]. I used it with a custom-built case in place of an eyepiece in my Celestron telescope for astrophotography. It has captured incredible views of Jupiter, close-ups of craters on the Moon, and sunspots on the Sun (with suitable Baader safety film protection).
|
||||
|
||||
Even before that, I had turned my film SLR camera into a [pinhole camera][6] by drilling a hole in a body cap (the cap that protects a camera's internal workings when it doesn't have a lens attached) and covering it in a thin disc cut from a soda can, pierced with a needle to provide a pinhole. Quite by chance, one day this pinhole body cap was sitting on my desk next to the modified astrophotography webcam. I wondered whether the webcam had good enough low-light performance to capture an image from behind the pinhole body cap. It only took a minute with the [GNOME Cheese][7] application to verify that a pinhole webcam was indeed a viable idea.
|
||||
|
||||
From this seed of an idea, I had a way to use one of my Raspberry Pi Zeros! The pinhole cameras that people build are typically minimalistic, offering no controls other than the exposure duration and film's ISO rating. Digital cameras, by contrast, have 20 or more buttons and hundreds more settings buried in menus. My goal for the digital pinhole webcam was to stay true to pinhole photography traditions and build a minimalist device with no controls at all, not even exposure time.
|
||||
|
||||
The digital pinhole camera, created from a Raspberry Pi Zero, HD webcam, and empty powder compact, was the [first project][8] in an [ongoing series][9] of pinhole cameras I built. Here's how I made it.
|
||||
|
||||
### Hardware
|
||||
|
||||
Since I already had the Raspberry Pi Zero in hand, I needed a webcam for this project. Given that the Pi Zero retails for £4 in the UK, I wanted other parts of the project to be priced similarly. Spending £30 on a camera to use with a £4 computer board just feels unbalanced. The obvious answer was to head over to a well-known internet auction site and bid on some secondhand webcams. Soon, I'd acquired a generic HD resolution webcam for a mere £1 plus shipping. After a quick test to ensure it operated correctly with Fedora, I went about stripping the case to examine the size of the electronics.
|
||||
|
||||
|
||||
![Hercules DualPix HD webcam][11]
|
||||
|
||||
Hercules DualPix HD webcam, which would be gutted to extract the circuit board and CCD imaging sensor. CC BY-SA 4.0.
|
||||
|
||||
Next, I needed a case to house the camera. The Raspberry Pi Zero circuit board is a mere 65mm x 30mm x 5mm. The webcam's circuit board is even smaller, although it has a plastic mounting around the CCD chip to hold the lens in place. I looked around the house for a container that would fit the two tiny circuit boards. I discovered that my wife's powder compact was just wide enough to fit the Pi Zero circuit board. With a little fiddling, it looked as though I could squeeze the webcam board inside, too.
|
||||
|
||||
![Powder compact][13]
|
||||
|
||||
Powder compact that became the case for the pinhole camera. CC BY-SA 4.0.
|
||||
|
||||
I set out to strip the case off of the webcam by removing a handful of tiny screws to get at the innards. The size of a webcam's case gives little clue about the size of the circuit board inside or where the CCD is positioned. I was lucky that this webcam was small with a convenient layout. Since I was making a pinhole camera, I had to remove the lens to expose the bare CCD chip.
|
||||
|
||||
The plastic mounting was about 1cm high, which would be too tall to fit inside the powder compact. I could remove it entirely with a couple more screws on the rear of the circuit board, but I thought it would be useful to keep it to block light coming from gaps in the case, so I trimmed it down to 4mm high using a craft knife, then reattached it. I bent the legs on the LED to reduce its height. Finally, I chopped off a second plastic tube mounted over the microphone that funneled the sound, since I didn't intend to capture audio.
|
||||
|
||||
![Bare CCD chip][15]
|
||||
|
||||
With the lens removed, the bare CCD chip is visible. The cylindrical collar holds the lens in place and prevents light from the power LED from spoiling the image. CC BY-SA 4.0.
|
||||
|
||||
The webcam had a long USB cable with a full-size plug, while the Raspberry Pi Zero uses a Micro-USB socket, so I needed a USB-to-Micro-USB adapter. But, with the adapter plugged in, the Pi wouldn't fit inside the powder compact, nor would the 1m of USB cable. So I took a sharp knife to the Micro-USB adapter, cutting off its USB socket entirely and stripping plastic to reveal the metal tracks leading to the Micro-USB plug. I also cut the webcam's USB cable down to about 6cm and removed its outer sheath and foil wrap to expose the four individual cable strands. I soldered them directly to the tracks on the Micro-USB plug. Now the webcam could be plugged into the Pi Zero, and the pair would still fit inside the powder compact case.
|
||||
|
||||
![Modified USB plugs][17]
|
||||
|
||||
The stripped-down Micro-USB plug with the webcam USB cable strands directly soldered onto the individual contact strips. The plug now protrudes only about 1cm from the Raspberry Pi Zero when attached. CC BY-SA 4.0.
|
||||
|
||||
Originally I thought this would be the end of my electrical design, but after testing, I realized I couldn't tell if the camera was capturing images or even powered on. I decided to use the Pi's GPIO pins to drive indicator LEDs. A yellow LED illuminates when the camera control software is running, and a green LED illuminates when the webcam is capturing an image. I connected BCM pins 17 and 18 to the positive leg of the LEDs via 300ohm current-limiting resistors, then connected both negative legs to a common ground pin.
|
||||
|
||||
![LEDs][19]
|
||||
|
||||
The LEDs are connected to GPIO pins BCM 17 and BCM 18, with a 300ohm resistor in series and a common ground. CC BY-SA 4.0.
|
||||
|
||||
Next, it was time to modify the powder compact. First, I removed the inner tray that holds the powder to free up space inside the case by cutting it off with a knife at its hinge. I was planning to run the Pi Zero on a portable USB power-bank battery, which wouldn't fit inside the case, so I cut a hole in the side of the case for the USB cable connector. The LEDs needed to be visible outside the case, so I used a 3mm drill bit to make two holes in the lid.
|
||||
|
||||
Then I used a 6mm drill bit to make a hole in the center of the bottom of the case, which I covered with a thin piece of metal and used a sewing needle to pierce a pinhole in its center. I made sure that only the very tip of the needle poked through, as inserting the entire needle would make the hole far too large. I used fine wet/dry sandpaper to smooth out the pinhole, then re-pierced it from the other side, again using only the tip of the needle. The goal with a pinhole camera is to get a clean, round hole with no deformations or ridges and that just barely lets light through. The smaller the hole, the sharper the images.
|
||||
|
||||
![Bottom of the case with the pinhole aperture][21]
|
||||
|
||||
The bottom of the case with the pinhole aperture. CC BY-SA 4.0.
|
||||
|
||||
All that remained was assembling the finished device. First I fixed the webcam circuit board in the case, using blue putty to hold it in position so the CCD was directly over the pinhole. Using putty allows me to easily reposition the CCD when I need to clean dust spots (and as insurance in case I put it in the wrong place). I placed the Raspberry Pi Zero board directly on top of the webcam board. To protect against electrical short circuits between the two boards, I covered the back of the Pi in several layers of electrical tape.
|
||||
|
||||
The [Raspberry Pi Zero][22] was such a perfect fit for the powder compact that it didn't need anything else to hold it in position, besides the USB cable for the battery that sticks out through the hole in the case. Finally, I poked the LEDs through the previously drilled holes and glued them into place. I added more electrical tape on the legs of the LEDs to prevent short circuits against the Pi Zero board when the lid is closed.
|
||||
|
||||
![Raspberry Pi Zero slotted into the case][24]
|
||||
|
||||
The Raspberry Pi Zero slotted into the case with barely 1mm of clearance at the edge. The hacked up Micro-USB plug connected to the webcam is next to the Micro-USB plug from the battery. CC BY-SA 4.0.
|
||||
|
||||
### Software
|
||||
|
||||
Computer hardware is useless without software to control it, of course. The Raspberry Pi Zero can run the same software as a full-sized Pi, but booting up a traditional [Raspbian OS][25] image is a very time-consuming process due to the Zero's slow CPU speed. A camera that takes more than a minute to turn on is a camera that will not get much real-world use. Furthermore, almost nothing that a full Raspbian OS runs is useful to this camera. Even if I disable all the redundant services launched at boot, it still takes unreasonably long to boot. I decided the only stock software I would use is a [U-Boot][26] bootloader and the Linux kernel. A custom-written `init` binary mounts the root filesystem from the microSD card, loads the kernel modules needed to drive the webcam, populates `/dev`, and runs the application binary.
|
||||
|
||||
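Conceptually, that custom `init` does roughly what the following shell sketch does. This is illustrative only: the real thing is a statically linked C binary, and the device node, module path, and application name below are assumptions, not the author's actual values.

```
#!/bin/sh
# Rough shell equivalent of the custom C init (illustration only).
mount -t devtmpfs devtmpfs /dev            # populate /dev
mount -o ro /dev/mmcblk0p2 /mnt/root       # root filesystem on the microSD card
insmod /mnt/root/lib/modules/uvcvideo.ko   # webcam driver (its dependencies first)
exec /mnt/root/usr/bin/camera-app          # hand over to the application binary
```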
The application binary is another custom C program that does the core job of controlling the camera. First, it waits for the kernel driver to initialize the webcam, opens it, and initializes it via low-level `v4l ioctl` calls. The GPIO pins are configured to drive the LEDs by poking registers via `/dev/mem`.
|
||||
|
||||
With initialization out of the way, the camera goes into a loop. Each iteration captures a single frame from the webcam in JPEG format using default exposure settings, saves the image straight to the SD card, then sleeps for three seconds. This loop runs forever until the battery is unplugged. This nicely achieves the original goal, which was to create a digital camera with simplicity on par with a typical analog pinhole camera.
|
||||
|
||||
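As an illustration of that loop's logic only (the real program is the custom C binary using V4L2 ioctls, not a script), an equivalent capture loop using the common `fswebcam` tool could look like the sketch below; the device node and output path are assumptions.

```
#!/bin/sh
# Illustration of the capture loop; not the author's C implementation.
i=0
while true; do
    fswebcam --device /dev/video0 --resolution 1280x1024 --no-banner \
             "/mnt/sdcard/frame_$(printf '%05d' "$i").jpg"
    i=$((i + 1))
    sleep 3                                # three seconds between frames
done
```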
[The code][27] for this custom userspace is made available under [GPLv3][28] or any later version. The Raspberry Pi Zero requires an ARMv6 binary, so I built it from an x86_64 host using the [QEMU ARM][29] emulator to run compilers from a `chroot` populated with the toolchain for the [Pignus][30] distro (a Fedora 23 port/rebuild for ARMv6). Both binaries were statically linked with [glibc][31], so they are self-contained. I built a custom RAMDisk containing the binaries and a few required kernel modules and copied it to the SD card, where the bootloader can find them.
|
||||
|
||||
![Completed camera][33]
|
||||
|
||||
The finished camera is entirely hidden inside the powder compact case. The only hint of something unusual is the USB cable coming out of the side. CC BY-SA 4.0.
|
||||
|
||||
### Taking photos
|
||||
|
||||
With both the hardware and software complete, it was time to see what the camera was capable of. Everyone is familiar with the excellent quality of images produced by modern digital cameras, whether professional DSLRs or mobile phones. It is important to reset expectations here to a more realistic level. The HD webcam captures 1280x1024 resolution (~1 megapixel). The CCD struggles to capture an image from the tiny amount of light allowed through the pinhole. The webcam automatically increases gain and exposure time to compensate, which results in very noisy images. The images also suffer from a very narrow dynamic range, as evidenced by a squashed histogram, which has to be stretched in post-processing to get true blacks and whites.
|
||||
|
||||
The best results are achieved by capturing images outside in daylight, as most interiors have insufficient illumination to register any kind of usable image. The CCD is only about 1cm in diameter, and it's just a few millimeters away from the pinhole, which creates a relatively narrow field of view. For example, in a selfie taken by holding the camera at arm's length, the person's head fills the entire frame. Finally, the images are in very soft focus, which is a defining characteristic of all pinhole cameras.
|
||||
|
||||
![Picture of houses taken with pinhole webcam][35]
|
||||
|
||||
Terraced houses in the street, London. CC BY-SA 4.0.
|
||||
|
||||
![Airport photo][37]
|
||||
|
||||
Farnborough airport, former terminal building. CC BY-SA 4.0.
|
||||
|
||||
Initially, I just used the camera to capture small numbers of still images. I later reduced the loop delay from three seconds to one second and used the camera to capture sequences of images over many minutes. I rendered the images into time-lapse videos using [GStreamer][38].
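A GStreamer pipeline along the following lines can turn a directory of numbered JPEG frames into a video. This is a sketch that assumes frames named `img00000.jpg`, `img00001.jpg`, and so on; the article does not show the exact pipeline the author used:

```
# Read the numbered JPEG frames at 20 frames per second, decode them,
# encode to H.264, and mux the result into an MP4 file.
gst-launch-1.0 multifilesrc location="img%05d.jpg" index=0 \
    caps="image/jpeg,framerate=20/1" ! jpegdec ! videoconvert ! \
    x264enc ! mp4mux ! filesink location=timelapse.mp4
```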
|
||||
|
||||
Here's a video I created with this process:
|
||||
|
||||
[][38]
|
||||
|
||||
Video of the walk from Bank to Waterloo along the Thames to unwind after a day's work. 1200 frames captured at 40 frames per minute animated at 20 frames per second.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/3/how-build-digital-pinhole-camera-raspberry-pi
|
||||
|
||||
作者:[Daniel Berrange][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/berrange
|
||||
[1]:https://www.raspberrypi.org/products/raspberry-pi-zero/
|
||||
[2]:https://opensource.com/users/node/24776
|
||||
[3]:https://opensource.com/file/390776
|
||||
[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-raspberrypizero.jpg?itok=1ry7Kx9m (Raspberry Pi Zero)
|
||||
[5]:https://en.wikipedia.org/wiki/Charge-coupled_device
|
||||
[6]:https://en.wikipedia.org/wiki/Pinhole_camera
|
||||
[7]:https://help.gnome.org/users/cheese/stable/
|
||||
[8]:https://pinholemiscellany.berrange.com/motivation/m-arcturus/
|
||||
[9]:https://pinholemiscellany.berrange.com/
|
||||
[10]:https://opensource.com/file/390756
|
||||
[11]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-hercules_dualpix_hd.jpg?itok=r858OM9_ (Hercules DualPix HD webcam)
|
||||
[12]:https://opensource.com/file/390771
|
||||
[13]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-powdercompact.jpg?itok=RZSwqCY7 (Powder compact)
|
||||
[14]:https://opensource.com/file/390736
|
||||
[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-bareccdchip.jpg?itok=IQzjZmED (Bare CCD chip)
|
||||
[17]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-usbs.jpg?itok=QJBkbI1F (Modified USB plugs)
|
||||
[19]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-_pi-zero-led.png?itok=oH4c2oCn (LEDs)
|
||||
[21]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-casebottom.jpg?itok=QjDMaWLi (Bottom of the case with the pinhole aperture)
|
||||
[22]:https://opensource.com/users/node/34916
|
||||
[24]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-pizeroincase.jpg?itok=cyUIvjjt (Raspberry Pi Zero slotted into the case)
|
||||
[25]:https://www.raspberrypi.org/downloads/raspbian/
|
||||
[26]:https://www.denx.de/wiki/U-Boot
|
||||
[27]:https://gitlab.com/berrange/pinholemiscellany/
|
||||
[28]:https://www.gnu.org/licenses/gpl-3.0.en.html
|
||||
[29]:https://wiki.qemu.org/Documentation/Platforms/ARM
|
||||
[30]:https://pignus.computer/
|
||||
[31]:https://www.gnu.org/software/libc/
|
||||
[33]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-completedcamera.jpg?itok=VYFaT-kA (Completed camera)
|
||||
[35]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-housesimage.jpg?itok=-_gtwn9N (Picture of houses taken with pinhole webcam)
|
||||
[37]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-farnboroughairportimage.jpg?itok=E829gg4F (Airport photo)
|
||||
[38]:https://gstreamer.freedesktop.org/modules/gst-ffmpeg.html
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
How To Register The Oracle Linux System With The Unbreakable Linux Network (ULN)
|
||||
======
|
||||
Most of us knows about RHEL subscription but only few of them knows about Oracle subscription and its details.
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Top 9 open source ERP systems to consider | Opensource.com
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart1.png?itok=tukiFj89)
|
||||
|
@ -1,126 +0,0 @@
|
||||
Useful Resources for Those Who Want to Know More About Linux
|
||||
======
|
||||
|
||||
Linux is one of the most popular and versatile operating systems available. It can be used on a smartphone, computer and even a car. Linux has been around since the 1990s and is still one of the most widespread operating systems.
|
||||
|
||||
Linux is actually used to run most of the Internet as it is considered to be rather stable compared to other operating systems. This is one of the [reasons why people choose Linux over Windows][1]. Besides, Linux provides its users with privacy and doesn’t collect their data at all, while Windows 10 and its Cortana voice control system always require updating your personal information.
|
||||
|
||||
Linux has many advantages. However, people do not hear much about it, as it has been squeezed out from the market by Windows and Mac. And many people get confused when they start using Linux, as it’s a bit different from popular operating systems.
|
||||
|
||||
So to help you out we’ve collected 5 useful resources for those who want to know more about Linux.
|
||||
|
||||
### 1.[Linux for Absolute Beginners][2]
|
||||
|
||||
If you want to learn as much about Linux as you can, you should consider taking a full course for beginners, provided by Eduonix. This course will introduce you to all features of Linux and provide you with all necessary materials to help you find out more about the peculiarities of how Linux works.
|
||||
|
||||
You should definitely choose this course if:
|
||||
|
||||
* you want to learn the details about the Linux operating system;
|
||||
|
||||
* you want to find out how to install it;
|
||||
|
||||
* you want to understand how Linux cooperates with your hardware;
|
||||
|
||||
* you want to learn how to operate Linux command line.
|
||||
|
||||
|
||||
|
||||
|
||||
### 2.[PC World: A Linux Beginner’s Guide][3]
|
||||
|
||||
A free resource for those who want to learn everything about Linux in one place. PC World specializes in various aspects of working with computer operating systems, and it provides its subscribers with the most accurate and up-to-date information. Here you can also learn more about the [benefits of Linux][4] and the latest news about this operating system.
|
||||
|
||||
This resource provides you with information on:
|
||||
|
||||
* how to install Linux;
|
||||
|
||||
* how to use command line;
|
||||
|
||||
* how to install additional software;
|
||||
|
||||
* how to operate Linux desktop environment.
|
||||
|
||||
|
||||
|
||||
|
||||
### 3.[Linux Training][5]
|
||||
|
||||
A lot of people who work with computers are required to learn how to operate Linux in case Windows operating system suddenly crashes. And what can be better than using an official resource to start your Linux training?
|
||||
|
||||
This resource provides online enrollment on the Linux training, where you can get the most updated information from the authentic source. “A year ago our IT department offered us a Linux training on the official website”, says Martin Gibson, a developer at [Assignmenthelper.com.au][6]. “We took this course because we needed to learn how to back up all our files to another system to provide our customers with maximum security, and this resource really taught us everything.”
|
||||
|
||||
So you should definitely use this resource if:
|
||||
|
||||
* you want to receive firsthand information about the operating system;
|
||||
|
||||
* want to learn the peculiarities of how to run Linux on your computer;
|
||||
|
||||
* want to connect with other Linux users and share your experience with them.
|
||||
|
||||
|
||||
|
||||
|
||||
### 4. [The Linux Foundation: Training Videos][7]
|
||||
|
||||
If you easily get bored from reading a lot of resources, this website is definitely for you. The Linux Foundation provides training videos, lectures and webinars, held by IT specialists, software developers and technical consultants.
|
||||
|
||||
All the training videos are subdivided into categories for:
|
||||
|
||||
* Developers: working with Linux Kernel, handling Linux Device Drivers, Linux virtualization etc.;
|
||||
|
||||
* System Administrators: developing virtual hosts on Linux, building a Firewall, analyzing Linux performance etc.;
|
||||
|
||||
* Users: getting started using Linux, introduction to embedded Linux and so on.
|
||||
|
||||
|
||||
|
||||
|
||||
### 5. [LinuxInsider][8]
|
||||
|
||||
Did you know that Microsoft was so amazed by the efficiency of Linux that it [allowed users to run Linux on Microsoft cloud computing device][9]? If you want to learn more about this operating system, Linux Insider provides its subscribers with the latest news on Linux operating systems, gives information about the latest updates and Linux features.
|
||||
|
||||
On this resource, you will have the opportunity to:
|
||||
|
||||
* participate in Linux community;
|
||||
|
||||
* learn about how to run Linux on various devices;
|
||||
|
||||
* check out reviews;
|
||||
|
||||
* participate in blog discussions and read the tech blog.
|
||||
|
||||
|
||||
|
||||
|
||||
### Wrapping up…
|
||||
|
||||
Linux offers a lot of benefits, including complete privacy, stable operation and even malware protection. It’s definitely worth trying: learning how to use it will help you better understand how your computer works and what it needs to operate smoothly.
|
||||
|
||||
### About the Author
|
||||
_Lucy Benton is a digital marketing specialist and business consultant who helps people turn their dreams into a profitable business. She currently writes for marketing and business resources. Lucy also has her own blog,_ [_Prowritingpartner.com_][10]_, where you can check out her latest publications._
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://linuxaria.com/article/useful-resources-for-those-who-want-to-know-more-about-linux
|
||||
|
||||
作者:[Lucy Benton][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.lifewire.com
|
||||
[1]:https://www.lifewire.com/windows-vs-linux-mint-2200609
|
||||
[2]:https://www.eduonix.com/courses/system-programming/linux-for-absolute-beginners
|
||||
[3]:https://www.pcworld.com/article/2918397/operating-systems/how-to-get-started-with-linux-a-beginners-guide.html
|
||||
[4]:https://www.popsci.com/switch-to-linux-operating-system#page-4
|
||||
[5]:https://www.linux.com/learn/training
|
||||
[6]:https://www.assignmenthelper.com.au/
|
||||
[7]:https://training.linuxfoundation.org/free-linux-training/linux-training-videos
|
||||
[8]:https://www.linuxinsider.com/
|
||||
[9]:https://www.wired.com/2016/08/linux-took-web-now-taking-world/
|
||||
[10]:https://prowritingpartner.com/
|
||||
[11]:https://cdn.linuxaria.com/wp-content/plugins/flattr/img/flattr-badge-large.png
|
||||
[12]:https://linuxaria.com/?flattrss_redirect&id=8570&md5=ee76fa2b44bdf6ef419a7f9906d3a5ad (Flattr)
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Some Common Concurrent Programming Mistakes
|
||||
============================================================
|
||||
|
||||
|
@ -1,119 +0,0 @@
|
||||
pinewall translating
|
||||
|
||||
Getting started with Anaconda Python for data science
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
|
||||
Like many others, I've been trying to get involved in the rapidly expanding field of data science. When I took Udemy courses on the [R][1] and [Python][2] programming languages, I downloaded and installed the applications independently. As I was trying to work through the challenges of installing data science packages like [NumPy][3] and [Matplotlib][4] and solving the various dependencies, I learned about the [Anaconda Python distribution][5].
|
||||
|
||||
Anaconda is a complete, [open source][6] data science package with a community of over 6 million users. It is easy to [download][7] and install, and it is supported on Linux, MacOS, and Windows.
|
||||
|
||||
I appreciate that Anaconda eases the frustration of getting started for new users. The distribution comes with more than 1,000 data packages as well as the [Conda][8] package and virtual environment manager, so it eliminates the need to learn to install each library independently. As Anaconda's website says, "The Python and R conda packages in the Anaconda Repository are curated and compiled in our secure environment so you get optimized binaries that 'just work' on your system."
|
||||
|
||||
I recommend using [Anaconda Navigator][9], a desktop graphical user interface (GUI) system that includes links to all the applications included with the distribution including [RStudio][10], [iPython][11], [Jupyter Notebook][12], [JupyterLab][13], [Spyder][14], [Glue][15], and [Orange][16]. The default environment is Python 3.6, but you can also easily install Python 3.5, Python 2.7, or R. The [documentation][9] is incredibly detailed and there is an excellent community of users for additional support.
|
||||
|
||||
### Installing Anaconda
|
||||
|
||||
To install Anaconda on my Linux laptop (an I3 with 4GB of RAM), I downloaded the Anaconda 5.1 Linux installer and ran `md5sum` to verify the file:
|
||||
```
|
||||
$ md5sum Anaconda3-5.1.0-Linux-x86_64.sh
|
||||
|
||||
```
|
||||
|
||||
Then I followed the directions in the [documentation][17], which instructed me to issue the following Bash command whether I was in the Bash shell or not:
|
||||
```
|
||||
$ bash Anaconda3-5.1.0-Linux-x86_64.sh
|
||||
|
||||
```
|
||||
|
||||
I followed the installation directions exactly, and the well-scripted install took about five minutes to complete. When the installation prompted: "Do you wish the installer to prepend the Anaconda install location to PATH in your `/home/<user>/.bashrc`?" I allowed it and restarted the shell, which I found was necessary for the `.bashrc` environment to work correctly.
|
||||
|
||||
After completing the install, I launched Anaconda Navigator by entering the following at the command prompt in the shell:
|
||||
```
|
||||
$ anaconda-navigator
|
||||
|
||||
```
|
||||
|
||||
Every time Anaconda Navigator launches, it checks to see if new software is available and prompts you to update if necessary.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-update.png?itok=wMk78pGQ)
|
||||
|
||||
Anaconda updated successfully without needing to return to the command line. Anaconda's initial launch was a little slow; that plus the update meant it took a few additional minutes to get started.
|
||||
|
||||
You can also update manually by entering the following:
|
||||
```
|
||||
$ conda update anaconda-navigator
|
||||
|
||||
```
|
||||
|
||||
### Exploring and installing applications
|
||||
|
||||
Once Navigator launched, I was free to explore the range of applications included with Anaconda Distribution. According to the documentation, the 64-bit Python 3.6 version of Anaconda [supports 499 packages][18]. The first application I explored was [Jupyter QtConsole][19]. The easy-to-use GUI supports inline figures and syntax highlighting.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-jupyterqtconsole.png?itok=fQQoErIO)
|
||||
|
||||
Jupyter Notebook is included with the distribution, so (unlike other Python environments I have used) there is no need for a separate install.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-jupyternotebook.png?itok=VqvbyOcI)
|
||||
|
||||
I was already familiar with RStudio. It's not installed by default, but it's easy to add with the click of a mouse. Other applications, including JupyterLab, Orange, Glue, and Spyder, can be launched or installed with just a mouse click.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-otherapps.png?itok=9QmSUdel)
|
||||
|
||||
One of the Anaconda distribution's strengths is the ability to create multiple environments. For example, if I wanted to create a Python 2.7 environment instead of the default Python 3.6, I would enter the following in the shell:
|
||||
```
|
||||
$ conda create -n py27 python=2.7 anaconda
|
||||
|
||||
```
|
||||
|
||||
Conda takes care of the entire install; to launch it, just open the shell and enter:
|
||||
```
|
||||
$ anaconda-navigator
|
||||
|
||||
```
|
||||
|
||||
Select the **py27** environment from the "Applications on" drop-down in the Anaconda GUI.
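If you prefer to work from the shell instead, the same environment can be activated there as well (assuming a reasonably recent conda; older releases use `source activate py27` instead):

```
$ conda activate py27
$ python --version
$ conda deactivate
```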
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-navigator.png?itok=2i5qYAyG)
|
||||
|
||||
### Learn more
|
||||
|
||||
There's a wealth of information available about Anaconda if you'd like to know more. You can start by searching the [Anaconda Community][20] and its [mailing list][21].
|
||||
|
||||
Are you using Anaconda Distribution and Navigator? Let us know your impressions in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/4/getting-started-anaconda-python
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/don-watkins
|
||||
[1]:https://www.r-project.org/
|
||||
[2]:https://www.python.org/
|
||||
[3]:http://www.numpy.org/
|
||||
[4]:https://matplotlib.org/
|
||||
[5]:https://www.anaconda.com/distribution/
|
||||
[6]:https://docs.anaconda.com/anaconda/eula
|
||||
[7]:https://www.anaconda.com/download/#linux
|
||||
[8]:https://conda.io/
|
||||
[9]:https://docs.anaconda.com/anaconda/navigator/
|
||||
[10]:https://www.rstudio.com/
|
||||
[11]:https://ipython.org/
|
||||
[12]:http://jupyter.org/
|
||||
[13]:https://blog.jupyter.org/jupyterlab-is-ready-for-users-5a6f039b8906
|
||||
[14]:https://spyder-ide.github.io/
|
||||
[15]:http://glueviz.org/
|
||||
[16]:https://orange.biolab.si/
|
||||
[17]:https://docs.anaconda.com/anaconda/install/linux
|
||||
[18]:https://docs.anaconda.com/anaconda/packages/py3.6_linux-64
|
||||
[19]:http://qtconsole.readthedocs.io/en/stable/
|
||||
[20]:https://www.anaconda.com/community/
|
||||
[21]:https://groups.google.com/a/continuum.io/forum/#!forum/anaconda
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Passwordless Auth: Server
|
||||
============================================================
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
An introduction to Python bytecode
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82)
|
||||
|
@ -1,95 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Enhance your Python with an interactive shell
|
||||
======
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/03/python-shells-816x345.jpg)
|
||||
The Python programming language has become one of the most popular languages used in IT. One reason for this success is it can be used to solve a variety of problems. From web development to data science, machine learning to task automation, the Python ecosystem is rich in popular frameworks and libraries. This article presents some useful Python shells available in the Fedora packages collection to make development easier.
|
||||
|
||||
### Python Shell
|
||||
|
||||
The Python Shell lets you use the interpreter in an interactive mode. It’s very useful to test code or try a new library. In Fedora you can invoke the default shell by typing python3 in a terminal session. Some more advanced and enhanced shells are available to Fedora, though.
|
||||
|
||||
### IPython
|
||||
|
||||
IPython provides many useful enhancements to the Python shell. Examples include tab completion, object introspection, system shell access and command history retrieval. Many of these features are also used by the [Jupyter Notebook][1] , since it uses IPython underneath.
|
||||
|
||||
#### Install and run IPython
|
||||
```
|
||||
dnf install ipython3
|
||||
ipython3
|
||||
|
||||
```
|
||||
|
||||
Using tab completion prompts you with possible choices. This feature comes in handy when you are using an unfamiliar library.
|
||||
|
||||
![][2]
|
||||
|
||||
If you need more information, use the documentation by typing the ? command. For more details, you can use the ?? command.
|
||||
|
||||
![][3]
|
||||
|
||||
Another cool feature is the ability to execute a system shell command using the ! character. The result of the command can then be referenced in the IPython shell.
|
||||
|
||||
![][4]
|
||||
|
||||
A comprehensive list of IPython features is available in the [official documentation][5].
|
||||
|
||||
### bpython
|
||||
|
||||
bpython doesn’t do as much as IPython, but nonetheless it provides a useful set of features in a simple and lightweight package. Among other features, bpython provides:
|
||||
|
||||
* In-line syntax highlighting
|
||||
* Autocomplete with suggestions as you type
|
||||
* Expected parameter list
|
||||
* Ability to send or save code to a pastebin service or file
|
||||
|
||||
|
||||
|
||||
#### Install and run bpython
|
||||
```
|
||||
dnf install bpython3
|
||||
bpython3
|
||||
|
||||
```
|
||||
|
||||
As you type, bpython offers you choices to autocomplete your code.
|
||||
|
||||
![][6]
|
||||
|
||||
When you call a function or method, the expected parameters and the docstring are automatically displayed.
|
||||
|
||||
![][7]
|
||||
|
||||
Another neat feature is the ability to open the current bpython session in an external editor (Vim by default) using the function key F7. This is very useful when testing more complex programs.
|
||||
|
||||
For more details about configuration and features, consult the bpython [documentation][8].
|
||||
|
||||
### Conclusion
|
||||
|
||||
Using an enhanced Python shell is a good way to increase productivity. It gives you enhanced features to write a quick prototype or try out a new library. Are you using an enhanced Python shell? Feel free to mention it in the comment section below.
|
||||
|
||||
Photo by [David Clode][9] on [Unsplash][10]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/enhance-python-interactive-shell/
|
||||
|
||||
作者:[Clément Verna][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/cverna/
|
||||
[1]:https://ipython.org/notebook.html
|
||||
[2]:https://fedoramagazine.org/wp-content/uploads/2018/03/ipython-tabcompletion.png
|
||||
[3]:https://fedoramagazine.org/wp-content/uploads/2018/03/ipyhton_doc1.png
|
||||
[4]:https://fedoramagazine.org/wp-content/uploads/2018/03/ipython_shell.png
|
||||
[5]:https://ipython.readthedocs.io/en/stable/overview.html#main-features-of-the-interactive-shell
|
||||
[6]:https://fedoramagazine.org/wp-content/uploads/2018/03/bpython1.png
|
||||
[7]:https://fedoramagazine.org/wp-content/uploads/2018/03/bpython2.png
|
||||
[8]:https://docs.bpython-interpreter.org/
|
||||
[9]:https://unsplash.com/photos/d0CasEMHDQs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[10]:https://unsplash.com/search/photos/python?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -1,3 +1,5 @@
|
||||
translating----geekpi
|
||||
|
||||
Continuous Profiling of Go programs
|
||||
============================================================
|
||||
|
||||
|
@ -1,144 +0,0 @@
|
||||
How to Compile a Linux Kernel
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/chester-alvarez-644-unsplash.jpg?itok=aFxG9kUZ)
|
||||
|
||||
Once upon a time the idea of upgrading the Linux kernel sent fear through the hearts of many a user. Back then, the process of upgrading the kernel involved a lot of steps and even more time. Now, installing a new kernel can be easily handled with package managers like apt. With the addition of certain repositories, you can even easily install experimental or specific kernels (such as real-time kernels for audio production) without breaking a sweat.
|
||||
|
||||
Considering how easy it is to upgrade your kernel, why would you bother compiling one yourself? Here are a few possible reasons:
|
||||
|
||||
* You simply want to know how it’s done.
|
||||
|
||||
* You need to enable or disable specific options into a kernel that simply aren’t available via the standard options.
|
||||
|
||||
* You want to enable hardware support that might not be found in the standard kernel.
|
||||
|
||||
* You’re using a distribution that requires you to compile the kernel.
|
||||
|
||||
* You’re a student and this is an assignment.
|
||||
|
||||
|
||||
|
||||
|
||||
Regardless of why, knowing how to compile a Linux kernel is very useful and can even be seen as a rite of passage. When I first compiled a new Linux kernel (a long, long time ago) and managed to boot from said kernel, I felt a certain thrill coursing through my system (which was quickly crushed the next time I attempted and failed).
|
||||
With that said, let’s walk through the process of compiling a Linux kernel. I’ll be demonstrating on Ubuntu 16.04 Server. After running through a standard sudo apt upgrade, the installed kernel is 4.4.0-121. I want to upgrade to kernel 4.17. Let’s take care of that.
|
||||
|
||||
A word of warning: I highly recommend you practice this procedure on a virtual machine. By working with a VM, you can always create a snapshot and back out of any problems with ease. DO NOT upgrade the kernel this way on a production machine… not until you know what you’re doing.
|
||||
|
||||
### Downloading the kernel
|
||||
|
||||
The first thing to do is download the kernel source file. This can be done by finding the URL of the kernel you want to download (from [Kernel.org][1]). Once you have the URL, download the source file with the following command (I’ll demonstrate with kernel 4.17 RC2):
|
||||
```
|
||||
wget https://git.kernel.org/torvalds/t/linux-4.17-rc2.tar.gz
|
||||
|
||||
```
|
||||
|
||||
While that file is downloading, there are a few bits to take care of.
|
||||
|
||||
### Installing requirements
|
||||
|
||||
In order to compile the kernel, we’ll need to first install a few requirements. This can be done with a single command:
|
||||
```
|
||||
sudo apt-get install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison
|
||||
|
||||
```
|
||||
|
||||
Do note: You will need at least 12GB of free space on your local drive to get through the kernel compilation process. So make sure you have enough space.
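A quick way to check is to run df against the directory you plan to build in before you start (here the current directory):

```
$ df -h .
```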
|
||||
|
||||
### Extracting the source
|
||||
|
||||
From within the directory housing our newly downloaded kernel, extract the kernel source with the command:
|
||||
```
|
||||
tar xvzf linux-4.17-rc2.tar.gz
|
||||
|
||||
```
|
||||
|
||||
Change into the newly created directory with the command cd linux-4.17-rc2.
|
||||
|
||||
### Configuring the kernel
|
||||
|
||||
Before we actually compile the kernel, we must first configure which modules to include. There is actually a really easy way to do this. With a single command, you can copy the current kernel’s config file and then use the tried and true menuconfig command to make any necessary changes. To do this, issue the command:
|
||||
```
|
||||
cp /boot/config-$(uname -r) .config
|
||||
|
||||
```
|
||||
|
||||
Now that you have a configuration file, issue the command make menuconfig. This command will open up a configuration tool (Figure 1) that allows you to go through every module available and enable or disable what you need or don’t need.
|
||||
|
||||
|
||||
![menuconfig][3]
|
||||
|
||||
Figure 1: The make menuconfig in action.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
It is quite possible you might disable a critical portion of the kernel, so step through menuconfig with care. If you’re not sure about an option, leave it alone. Or, better yet, stick with the configuration we just copied from the running kernel (as we know it works). Once you’ve gone through the entire list (it’s quite long), you’re ready to compile!
|
||||
|
||||
### Compiling and installing
|
||||
|
||||
Now it’s time to actually compile the kernel. The first step is to compile using the make command. So issue make and then answer the necessary questions (Figure 2). The questions asked will be determined by what kernel you’re upgrading from and what kernel you’re upgrading to. Trust me when I say there’s a ton of questions to answer, so give yourself plenty of time here.
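For example, from the top of the kernel source tree (the `-j $(nproc)` flag, which parallelizes the build across all CPU cores, is an optional speed-up and not strictly part of the steps described here):

```
$ make -j $(nproc)
```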
|
||||
|
||||
|
||||
![make][6]
|
||||
|
||||
Figure 2: Answering the questions for the make command.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
After answering the litany of questions, you can then install the modules you’ve enabled with the command:
|
||||
```
|
||||
make modules_install
|
||||
|
||||
```
|
||||
|
||||
Once again, this command will take some time, so either sit back and watch the output, or go do something else (as it will not require your input). Chances are, you’ll want to undertake another task (unless you really enjoy watching output fly by in a terminal).
|
||||
|
||||
Now we install the kernel with the command:
|
||||
```
|
||||
sudo make install
|
||||
|
||||
```
|
||||
|
||||
Again, another command that’s going to take a significant amount of time. In fact, the make install command will take even longer than the make modules_install command. Go have lunch, configure a router, install Linux on a few servers, or take a nap.
|
||||
|
||||
### Enable the kernel for boot
|
||||
|
||||
Once the make install command completes, it’s time to enable the kernel for boot. To do this, issue the command:
|
||||
```
|
||||
sudo update-initramfs -c -k 4.17-rc2
|
||||
|
||||
```
|
||||
|
||||
Of course, you would substitute the kernel number above for the kernel you’ve compiled. When that command completes, update grub with the command:
|
||||
```
|
||||
sudo update-grub
|
||||
|
||||
```
|
||||
|
||||
You should now be able to restart your system and select the newly installed kernel.
|
||||
|
||||
### Congratulations!
|
||||
|
||||
You’ve compiled a Linux kernel! It’s a process that may take some time; but, in the end, you’ll have a custom kernel for your Linux distribution, as well as an important skill that many Linux admins tend to overlook.
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux" ][7] course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2018/4/how-compile-linux-kernel-0
|
||||
|
||||
作者:[Jack Wallen][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/jlwallen
|
||||
[1]:https://www.kernel.org/
|
||||
[2]:/files/images/kernelcompile1jpg
|
||||
[3]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kernel_compile_1.jpg?itok=ZNybYgEt (menuconfig)
|
||||
[4]:/licenses/category/used-permission
|
||||
[5]:/files/images/kernelcompile2jpg
|
||||
[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kernel_compile_2.jpg?itok=TYfV02wC (make)
|
||||
[7]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -1,92 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
How to use FIND in Linux
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux31x_cc.png?itok=Pvim4U-B)
|
||||
|
||||
In [a recent Opensource.com article][1], Lewis Cowles introduced the `find` command.
|
||||
|
||||
`find` is one of the more powerful and flexible command-line programs in the daily toolbox, so it's worth spending a little more time on it.
|
||||
|
||||
At a minimum, `find` takes a path to find things. For example:
|
||||
```
|
||||
find /
|
||||
|
||||
```
|
||||
|
||||
will find (and print) every file on the system. And since everything is a file, you will get a lot of output to sort through. This probably doesn't help you find what you're looking for. You can change the path argument to narrow things down a bit, but it's still not really any more helpful than using the `ls` command. So you need to think about what you're trying to locate.
|
||||
|
||||
Perhaps you want to find all the JPEG files in your home directory. The `-name` argument allows you to restrict your results to files that match the given pattern.
|
||||
```
|
||||
find ~ -name '*jpg'
|
||||
|
||||
```
|
||||
|
||||
But wait! What if some of them have an uppercase extension? `-iname` is like `-name`, but it is case-insensitive.
|
||||
```
|
||||
find ~ -iname '*jpg'
|
||||
|
||||
```
|
||||
|
||||
Great! But the 8.3 name scheme is so 1985. Some of the pictures might have a .jpeg extension. Fortunately, we can combine patterns with an "or," represented by `-o`.
|
||||
```
|
||||
find ~ \( -iname '*jpeg' -o -iname '*jpg' \)
|
||||
|
||||
```
|
||||
|
||||
We're getting closer. But what if you have some directories that end in jpg? (Why you named a directory `bucketofjpg` instead of `pictures` is beyond me.) We can modify our command with the `-type` argument to look only for files.
|
||||
```
|
||||
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f
|
||||
|
||||
```
|
||||
|
||||
Or maybe you'd like to find those oddly named directories so you can rename them later:
|
||||
```
|
||||
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d
|
||||
|
||||
```
|
||||
|
||||
It turns out you've been taking a lot of pictures lately, so let's narrow this down to files that have changed in the last week.
|
||||
```
|
||||
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7
|
||||
|
||||
```
|
||||
|
||||
You can do time filters based on file status change time (`ctime`), modification time (`mtime`), or access time (`atime`). These are in days, so if you want finer-grained control, you can express it in minutes instead (`cmin`, `mmin`, and `amin`, respectively). Unless you know exactly the time you want, you'll probably prefix the number with `+` (more than) or `-` (less than).
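For example, to narrow the search to pictures changed in the last 30 minutes instead of the last week:

```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mmin -30
```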
|
||||
|
||||
But maybe you don't care about your pictures. Maybe you're running out of disk space, so you want to find all the gigantic (let's define that as "greater than 1 gigabyte") files in the `log` directory:
|
||||
```
|
||||
find /var/log -size +1G
|
||||
|
||||
```
|
||||
|
||||
Or maybe you want to find all the files owned by bcotton in `/data`:
|
||||
```
|
||||
find /data -owner bcotton
|
||||
|
||||
```
|
||||
|
||||
You can also look for files based on permissions. Perhaps you want to find all the world-readable files in your home directory to make sure you're not oversharing.
|
||||
```
|
||||
find ~ -perm -o=r
|
||||
|
||||
```
|
||||
|
||||
This post only scratches the surface of what `find` can do. Combining tests with Boolean logic can give you incredible flexibility to find exactly the files you're looking for. And with arguments like `-exec` or `-delete`, you can have `find` take action on what it... finds. Have any favorite `find` expressions? Share them in the comments!
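As one last illustration, the week-old picture search from earlier can copy everything it matches in a single pass with `-exec` (the destination directory is just an example and must already exist; `cp -t` is the GNU coreutils form that takes the target directory first):

```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7 \
    -exec cp -t ~/pictures-backup {} +
```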
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/4/how-use-find-linux
|
||||
|
||||
作者:[Ben Cotton][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/bcotton
|
||||
[1]:https://opensource.com/article/18/4/how-find-files-linux
|
@ -1,87 +0,0 @@
|
||||
Easily Search And Install Google Web Fonts In Linux
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/04/Font-Finder-720x340.png)
|
||||
**Font Finder** is the Rust implementation of good old [**Typecatcher**][1], which is used to easily search and install Google web fonts from [**Google’s font archive**][2]. It helps you install hundreds of free and open source fonts on your Linux desktop. In case you’re looking for beautiful fonts for your web projects, apps, and whatever else, Font Finder can easily get them for you. It is a free, open source GTK3 application written in the Rust programming language. Unlike Typecatcher, which is written in Python, Font Finder can filter fonts by category, has zero Python runtime dependencies, and offers much better performance and resource consumption.
|
||||
|
||||
In this brief tutorial, we are going to see how to install and use Font Finder in Linux.
|
||||
|
||||
### Install Font Finder
|
||||
|
||||
Since Font Finder is written in the Rust programming language, you first need to install Rust on your system, as described below.
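One common way to install Rust is with the rustup installer (this downloads a script from the official Rust site and runs it, so review it first if you prefer; the second command makes cargo available in the current shell):

```
$ curl https://sh.rustup.rs -sSf | sh
$ source $HOME/.cargo/env
```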
|
||||
|
||||
After installing Rust, run the following command to install Font Finder:
|
||||
```
|
||||
$ cargo install fontfinder
|
||||
|
||||
```
|
||||
|
||||
Font Finder is also available as [**flatpak app**][3]. First install Flatpak in your system as described in the link below.
|
||||
|
||||
Then, install Font Finder using command:
|
||||
```
|
||||
$ flatpak install flathub io.github.mmstick.FontFinder
|
||||
|
||||
```
|
||||
|
||||
### Search And Install Google Web Fonts In Linux Using Font Finder
|
||||
|
||||
You can launch font finder either from the application launcher or run the following command to launch it.
|
||||
```
|
||||
$ flatpak run io.github.mmstick.FontFinder
|
||||
|
||||
```
|
||||
|
||||
This is how Font Finder default interface looks like.
|
||||
|
||||
![][5]
|
||||
|
||||
As you can see, Font Finder user interface is very simple. All Google web fonts are listed in the left pane and the preview of the respective font is given at the right pane. You can type any words in the preview box to view how the words will look like in the selected font. There is also a search box on the top left which allows you to quickly search for a font of your choice.
|
||||
|
||||
By default, Font Finder displays all types of fonts. You can, however, display the fonts category-wise using the category drop-down box above the search box.
|
||||
|
||||
![][6]
|
||||
|
||||
To install a font, just choose it and click the **Install** button on the top.
|
||||
|
||||
![][7]
|
||||
|
||||
You can test the newly installed fonts in any text processing applications.
|
||||
|
||||
![][8]
|
||||
|
||||
Similarly, to remove a font, just choose it from the Font Finder dashboard and click the **Uninstall** button. It’s that simple!
|
||||
|
||||
The Settings button (the gear button) on the top left corner provides the option to switch to dark preview.
|
||||
|
||||
![][9]
|
||||
|
||||
As you can see, Font Finder is very simple and does the job exactly as advertised in its home page. If you’re looking for an application to install Google web fonts, Font Finder is one such application.
|
||||
|
||||
And, that’s all for now. Hope this helps. More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/font-finder-easily-search-and-install-google-web-fonts-in-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/install-google-web-fonts-ubuntu/
|
||||
[2]:https://fonts.google.com/
|
||||
[3]:https://flathub.org/apps/details/io.github.mmstick.FontFinder
|
||||
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-1.png
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-2.png
|
||||
[7]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-3.png
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-5.png
|
||||
[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-4.png
|
@ -1,82 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Reset a lost root password in under 5 minutes
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum)
|
||||
|
||||
A system administrator can easily reset passwords for users who have forgotten theirs. But what happens if the system administrator forgets the root password, or leaves the company? This guide will show you how to reset a lost or forgotten root password on a Red Hat-compatible system, including Fedora and CentOS, in less than 5 minutes.
|
||||
|
||||
Please note, if the entire system hard disk has been encrypted with LUKS, you would need to provide the LUKS password when prompted. Also, this procedure is applicable to systems running systemd, which has been the default init system since Fedora 15, CentOS 7, and Red Hat Enterprise Linux 7.0.
|
||||
|
||||
First, you need to interrupt the boot process, so you'll need to turn the system on or restart it if it’s already powered on. The first step is tricky because the GRUB menu tends to flash very quickly on the screen. You may need to try this a few times until you are able to do it.
|
||||
|
||||
Press **e** on your keyboard when you see this screen:
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub0.png?itok=cz9nk5BT)
|
||||
|
||||
If you've done this correctly, you should see a screen similar to this one:
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub1.png?itok=3ZY5uiGq)
|
||||
|
||||
Use your arrow keys to move to the Linux16 line:
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub2_0.png?itok=8epRyqOl)
|
||||
|
||||
Using your **del** key or your **backspace** key, remove `rhgb quiet` and replace with the following:
|
||||
|
||||
`rd.break enforcing=0`
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub3.png?itok=JDdMXnUb)
|
||||
|
||||
Setting `enforcing=0` will allow you to avoid performing a complete system SELinux relabeling. Once the system is rebooted, you'll only have to restore the correct SELinux context for the `/etc/shadow` file. I'll show you how to do this too.
|
||||
|
||||
Press **Ctrl-x** to start.
|
||||
|
||||
**The system will now be in emergency mode.**
|
||||
|
||||
Remount the hard drive with read-write access:
|
||||
```
|
||||
# mount -o remount,rw /sysroot
|
||||
|
||||
```
|
||||
|
||||
Run `chroot` to access the system:
|
||||
```
|
||||
# chroot /sysroot
|
||||
|
||||
```
|
||||
|
||||
You can now change the root password:
|
||||
```
|
||||
# passwd
|
||||
|
||||
```
|
||||
|
||||
Type the new root password twice when prompted. If you are successful, you should see a message that reads " **all authentication tokens updated successfully**. "
|
||||
|
||||
Type **exit** twice to reboot the system.
|
||||
|
||||
Log in as root and restore the SELinux label to the `/etc/shadow` file.
|
||||
```
|
||||
# restorecon -v /etc/shadow
|
||||
|
||||
```
|
||||
|
||||
Turn SELinux back to enforcing mode:
|
||||
```
|
||||
# setenforce 1
|
||||
|
||||
```
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/4/reset-lost-root-password
|
||||
|
||||
作者:[Curt Warfield][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/rcurtiswarfield
|
@ -1,100 +0,0 @@
|
||||
How To Use Vim Editor To Input Text Anywhere
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-720x340.png)
|
||||
|
||||
Howdy Vim users! Today, I have some good news for all of you. Say hello to **Vim-anywhere**, a simple script that allows you to use the Vim editor to input text anywhere on your Linux box. That means you can simply invoke your favorite Vim editor, type whatever you want, and paste the text into any application or website. The text will be available in your clipboard until you restart your system. This utility is very useful for those who love to use Vim keybindings in non-Vim environments.
|
||||
|
||||
### Install Vim-anywhere in Linux
|
||||
|
||||
The Vim-anywhere utility will work on any GNOME based (or derivatives) Linux distributions. Also, make sure you have installed the following prerequisites.
|
||||
|
||||
* Curl
|
||||
* Git
|
||||
* gVim
|
||||
* xclip
|
||||
|
||||
|
||||
|
||||
For instance, you can install those utilities in Ubuntu as shown below.
|
||||
```
|
||||
$ sudo apt install curl git vim-gnome xclip
|
||||
|
||||
```
|
||||
|
||||
Then, run the following command to install Vim-anywhere:
|
||||
```
|
||||
$ curl -fsSL https://raw.github.com/cknadler/vim-anywhere/master/install | bash
|
||||
|
||||
```
|
||||
|
||||
Vim-anywhere has been installed. Now let us see how to use it.
|
||||
|
||||
### Use Vim Editor To Input Text Anywhere
|
||||
|
||||
Let us say you need to create a word document. But you’re much more comfortable using Vim editor than LibreOffice writer. No problem, this is where Vim-anywhere comes in handy. It automates the entire process. It simply invokes the Vim editor, so you can write whatever you want in it and paste it in the .doc file.
|
||||
|
||||
Let me show you an example. Open LibreOffice Writer or any graphical text editor of your choice. Then, open Vim-anywhere. To do so, simply press **CTRL+ALT+V**. It will open the gVim editor. Press “i” to switch to insert mode and input the text. Once done, save and close it by typing **:wq**.
|
||||
|
||||
![][2]
|
||||
|
||||
The text will be available in the clipboard until you restart the system. After you closed the editor, your previous application is refocused. Just press **CTRL+P** to paste the text in it.
|
||||
|
||||
![][3]
|
||||
|
||||
It’s just an example. You can even use Vim-anywhere to write something on an annoying web form or any other applications. Once Vim-anywhere invoked, it will open a buffer. Close it and its contents are automatically copied to your clipboard and your previous application is refocused.
|
||||
|
||||
The vim-anywhere utility will create a temporary file in **/tmp/vim-anywhere** when invoked. These temporary files stick around until you restart your system, giving you a temporary history.
|
||||
```
|
||||
$ ls /tmp/vim-anywhere
|
||||
|
||||
```
|
||||
|
||||
You can re-open your most recent file using command:
|
||||
```
|
||||
$ vim $( ls /tmp/vim-anywhere | sort -r | head -n 1 )
|
||||
|
||||
```
|
||||
|
||||
**Update Vim-anywhere**
|
||||
|
||||
Run the following command to update Vim-anywhere:
|
||||
```
|
||||
$ ~/.vim-anywhere/update
|
||||
|
||||
```
|
||||
|
||||
**Change keyboard shortcut**
|
||||
|
||||
The default keybinding to invoke Vim-anywhere is CTRL+ALT+V. You can change it to any custom keybinding using gconf tool.
|
||||
```
|
||||
$ gconftool -t str --set /desktop/gnome/keybindings/vim-anywhere/binding <custom binding>
|
||||
|
||||
```
|
||||
|
||||
**Uninstall Vim-anywhere**
|
||||
|
||||
Some of you might think that opening Vim editor each time to input text and paste the text back to another application might be pointless and completely unnecessary.
|
||||
|
||||
If you don’t find this utility useful, simply uninstall it using command:
|
||||
```
|
||||
$ ~/.vim-anywhere/uninstall
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-use-vim-editor-to-input-text-anywhere/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-1-1.png
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-2.png
|
@ -0,0 +1,193 @@
|
||||
11 Methods To Find System/Server Uptime In Linux
|
||||
======
|
||||
Do you want to know how long your Linux system has been running without downtime, and since what date and time it has been up?
|
||||
|
||||
There are multiple commands available in Linux to check server/system uptime, and most users prefer the standard and very famous command called `uptime` to get these details.
|
||||
|
||||
Server uptime is not important for some people, but it is very important for server administrators when the server runs mission-critical applications such as an online shopping portal or a net-banking portal.
|
||||
|
||||
Such servers must have zero downtime, because any outage badly impacts millions of users.
|
||||
|
||||
As mentioned, many commands are available to check server uptime in Linux. In this tutorial we are going to teach you how to check it using the 11 methods below.
|
||||
|
||||
Uptime means how long the server has been up since its last shutdown or reboot.
|
||||
|
||||
The uptime command fetches these details from files under `/proc` and prints the server uptime; the raw `/proc` values are not meant to be read directly by humans.
|
||||
|
||||
The commands below will print how long the system has been up and running. Some of them also show additional information.
|
||||
|
||||
### Method-1 : Using uptime Command
|
||||
|
||||
uptime command will tell how long the system has been running. It gives a one line display of the following information.
|
||||
|
||||
The current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.
|
||||
```
|
||||
# uptime
|
||||
|
||||
08:34:29 up 21 days, 5:46, 1 user, load average: 0.06, 0.04, 0.00
|
||||
|
||||
```
|
||||
|
||||
### Method-2 : Using w Command
|
||||
|
||||
w command provides a quick summary of every user logged into a computer, what each user is currently doing, and what load all the activity is imposing on the computer itself. The command is a one-command combination of several other Unix programs: who, uptime, and ps -a.
|
||||
```
|
||||
# w
|
||||
|
||||
08:35:14 up 21 days, 5:47, 1 user, load average: 0.26, 0.09, 0.02
|
||||
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
|
||||
root pts/1 103.5.134.167 08:34 0.00s 0.01s 0.00s w
|
||||
|
||||
```
|
||||
|
||||
### Method-3 : Using top Command
|
||||
|
||||
The top command is one of the basic commands for monitoring real-time system processes in Linux. It displays system information and running-process information such as uptime, load average, running tasks, number of logged-in users, number of CPUs and CPU utilization, and memory and swap usage. Run the top command and then hit `E` to show memory utilization in MB.
|
||||
|
||||
**Suggested Read :** [TOP Command Examples to Monitor Server Performance][1]
|
||||
```
|
||||
# top -c
|
||||
|
||||
top - 08:36:01 up 21 days, 5:48, 1 user, load average: 0.12, 0.08, 0.02
|
||||
Tasks: 98 total, 1 running, 97 sleeping, 0 stopped, 0 zombie
|
||||
Cpu(s): 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
|
||||
Mem: 1872888k total, 1454644k used, 418244k free, 175804k buffers
|
||||
Swap: 2097148k total, 0k used, 2097148k free, 1098140k cached
|
||||
|
||||
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
|
||||
1 root 20 0 19340 1492 1172 S 0.0 0.1 0:01.04 /sbin/init
|
||||
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kthreadd]
|
||||
3 root RT 0 0 0 0 S 0.0 0.0 0:00.00 [migration/0]
|
||||
4 root 20 0 0 0 0 S 0.0 0.0 0:34.32 [ksoftirqd/0]
|
||||
5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 [stopper/0]
|
||||
|
||||
```
|
||||
|
||||
### Method-4 : Using who Command
|
||||
|
||||
who command displays a list of users who are currently logged into the computer. The who command is related to the command w, which provides the same information but also displays additional data and statistics.
|
||||
```
|
||||
# who -b
|
||||
|
||||
system boot 2018-04-12 02:48
|
||||
|
||||
```
|
||||
|
||||
### Method-5 : Using last Command
|
||||
|
||||
The last command displays a list of last logged in users. Last searches back through the file /var/log/wtmp and displays a list of all users logged in (and out) since that file was created.
|
||||
```
|
||||
# last reboot -F | head -1 | awk '{print $5,$6,$7,$8,$9}'
|
||||
|
||||
Thu Apr 12 02:48:04 2018
|
||||
|
||||
```
|
||||
|
||||
### Method-6 : Using /proc/uptime File
|
||||
|
||||
This file contains information detailing how long the system has been on since its last restart. The output of `/proc/uptime` is quite minimal.
|
||||
|
||||
The first number is the total number of seconds the system has been up. The second number is how much of that time the machine has spent idle, in seconds.
|
||||
```
|
||||
# cat /proc/uptime
|
||||
|
||||
1835457.68 1809207.16
|
||||
|
||||
```
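For example, the first field can be turned into the boot timestamp with GNU date (just an illustration of the idea; the output format is up to you):

```
# date -d "$(awk '{print int($1)}' /proc/uptime) seconds ago"
```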
|
||||
|
||||
### Method-7 : Using tuptime Command
|
||||
|
||||
Tuptime is a tool for reporting the historical and statistical running time of the system, keeping track of it between restarts. It is like the uptime command, but with more interesting output.
|
||||
```
|
||||
$ tuptime
|
||||
|
||||
```
|
||||
|
||||
### Method-8 : Using htop Command
|
||||
|
||||
htop is an interactive process viewer for Linux which was developed by Hisham using the ncurses library. htop has many more features and options compared to the top command.
|
||||
|
||||
**Suggested Read :** [Monitor system resources using Htop command][2]
|
||||
```
|
||||
# htop
|
||||
|
||||
CPU[| 0.5%] Tasks: 48, 5 thr; 1 running
|
||||
Mem[||||||||||||||||||||||||||||||||||||||||||||||||||| 165/1828MB] Load average: 0.10 0.05 0.01
|
||||
Swp[ 0/2047MB] Uptime: 21 days, 05:52:35
|
||||
|
||||
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
|
||||
29166 root 20 0 110M 2484 1240 R 0.0 0.1 0:00.03 htop
|
||||
29580 root 20 0 11464 3500 1032 S 0.0 0.2 55:15.97 /bin/sh ./OSWatcher.sh 10 1
|
||||
1 root 20 0 19340 1492 1172 S 0.0 0.1 0:01.04 /sbin/init
|
||||
486 root 16 -4 10780 900 348 S 0.0 0.0 0:00.07 /sbin/udevd -d
|
||||
748 root 18 -2 10780 932 360 S 0.0 0.0 0:00.00 /sbin/udevd -d
|
||||
|
||||
```
|
||||
|
||||
### Method-9 : Using glances Command
|
||||
|
||||
Glances is a cross-platform, curses-based system monitoring tool written in Python. It aims to present a maximum of information in a minimum of space, all in one place. It uses the psutil library to get information from your system.
|
||||
|
||||
Glances can monitor CPU, memory, load, the process list, network interfaces, disk I/O, RAID, sensors, filesystems (and folders), Docker, monitors, alerts, system info, uptime, Quicklook (CPU, MEM, LOAD), and more.
|
||||
|
||||
**Suggested Read :** [Glances (All in one Place)– An Advanced Real Time System Performance Monitoring Tool for Linux][3]
|
||||
```
|
||||
glances
|
||||
|
||||
ubuntu (Ubuntu 17.10 64bit / Linux 4.13.0-37-generic) - IP 192.168.1.6/24 Uptime: 21 days, 05:55:15
|
||||
|
||||
CPU [|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 90.6%] CPU - 90.6% nice: 0.0% ctx_sw: 4K MEM \ 78.4% active: 942M SWAP - 5.9% LOAD 2-core
|
||||
MEM [||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 78.0%] user: 55.1% irq: 0.0% inter: 1797 total: 1.95G inactive: 562M total: 12.4G 1 min: 4.35
|
||||
SWAP [|||| 5.9%] system: 32.4% iowait: 1.8% sw_int: 897 used: 1.53G buffers: 14.8M used: 749M 5 min: 4.38
|
||||
idle: 7.6% steal: 0.0% free: 431M cached: 273M free: 11.7G 15 min: 3.38
|
||||
|
||||
NETWORK Rx/s Tx/s TASKS 211 (735 thr), 4 run, 207 slp, 0 oth sorted automatically by memory_percent, flat view
|
||||
docker0 0b 232b
|
||||
enp0s3 12Kb 4Kb Systemd 7 Services loaded: 197 active: 196 failed: 1
|
||||
lo 616b 616b
|
||||
_h478e48e 0b 232b CPU% MEM% VIRT RES PID USER NI S TIME+ R/s W/s Command
|
||||
63.8 18.9 2.33G 377M 2536 daygeek 0 R 5:57.78 0 0 /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51
|
||||
DefaultGateway 83ms 78.5 10.9 3.46G 217M 2039 daygeek 0 S 21:07.46 0 0 /usr/bin/gnome-shell
|
||||
8.5 10.1 2.32G 201M 2464 daygeek 0 S 8:45.69 0 0 /usr/lib/firefox/firefox -new-window
|
||||
DISK I/O R/s W/s 1.1 8.5 2.19G 170M 2653 daygeek 0 S 2:56.29 0 0 /usr/lib/firefox/firefox -contentproc -childID 4 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51
|
||||
dm-0 0 0 1.7 7.2 2.15G 143M 2880 daygeek 0 S 7:10.46 0 0 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51
|
||||
sda1 9.46M 12K 0.0 4.9 1.78G 97.2M 6125 daygeek 0 S 1:36.57 0 0 /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51
|
||||
|
||||
```
|
||||
|
||||
### Method-10 : Using stat Command
|
||||
|
||||
The stat command displays the detailed status of a particular file or file system. Since /var/log/dmesg is typically written at boot time, its modification time gives a good approximation of when the system last started.
|
||||
```
|
||||
# stat /var/log/dmesg | grep Modify
|
||||
|
||||
Modify: 2018-04-12 02:48:04.027999943 -0400
|
||||
|
||||
```
|
||||
|
||||
### Method-11 : Using procinfo Command
|
||||
|
||||
procinfo gathers some system data from the /proc directory and prints it nicely formatted on the standard output device.
|
||||
```
|
||||
# procinfo | grep Bootup
|
||||
|
||||
Bootup: Fri Apr 20 19:40:14 2018 Load average: 0.16 0.05 0.06 1/138 16615
|
||||
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/magesh/
|
||||
[1]:https://www.2daygeek.com/top-command-examples-to-monitor-server-performance/
|
||||
[2]:https://www.2daygeek.com/htop-command-examples-to-monitor-system-resources/
|
||||
[3]:https://www.2daygeek.com/install-glances-advanced-real-time-linux-system-performance-monitoring-tool-on-centos-fedora-ubuntu-debian-opensuse-arch-linux/
|
@ -0,0 +1,155 @@
|
||||
How the four components of a distributed tracing system work together
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/touch-tracing.jpg?itok=rOmsY-nU)
|
||||
Ten years ago, essentially the only people thinking hard about distributed tracing were academics and a handful of large internet companies. Today, it’s turned into table stakes for any organization adopting microservices. The rationale is well-established: microservices fail in surprising and often spectacular ways, and distributed tracing is the best way to describe and diagnose those failures.
|
||||
|
||||
That said, if you set out to integrate distributed tracing into your own application, you’ll quickly realize that the term “Distributed Tracing” means different things to different people. Furthermore, the tracing ecosystem is crowded with partially-overlapping projects with similar charters. This article describes the four (potentially) independent components in distributed tracing, and how they fit together.
|
||||
|
||||
### Distributed tracing: A mental model
|
||||
|
||||
Most mental models for tracing descend from [Google’s Dapper paper][1]. [OpenTracing][2] uses similar nouns and verbs, so we will borrow the terms from that project:
|
||||
|
||||
![Tracing][3]
|
||||
|
||||
* **Trace:** The description of a transaction as it moves through a distributed system.
|
||||
* **Span:** A named, timed operation representing a piece of the workflow. Spans accept key:value tags as well as fine-grained, timestamped, structured logs attached to the particular span instance.
|
||||
* **Span context:** Trace information that accompanies the distributed transaction, including when it passes from service to service over the network or through a message bus. The span context contains the trace identifier, span identifier, and any other data that the tracing system needs to propagate to the downstream service.
|
||||
|
||||
|
||||
|
||||
If you would like to dig into a detailed description of this mental model, please check out the [OpenTracing specification][4].
|
||||
|
||||
### The four big pieces
|
||||
|
||||
From the perspective of an application-layer distributed tracing system, a modern software system looks like the following diagram:
|
||||
|
||||
![Tracing][5]
|
||||
|
||||
The components in a modern software system can be broken down into three categories:
|
||||
|
||||
* **Application and business logic:** Your code.
|
||||
* **Widely shared libraries:** Other people's code.
|
||||
* **Widely shared services:** Other people’s infrastructure.
|
||||
|
||||
|
||||
|
||||
These three components have different requirements, and those requirements drive the design of the distributed tracing system that is tasked with monitoring the application. The resulting design yields four important pieces:
|
||||
|
||||
* **A tracing instrumentation API:** What decorates application code.
|
||||
* **Wire protocol:** What gets sent alongside application data in RPC requests.
|
||||
* **Data protocol:** What gets sent asynchronously (out-of-band) to your analysis system.
|
||||
* **Analysis system:** A database and interactive UI for working with the trace data.
|
||||
|
||||
|
||||
|
||||
To explain this further, we’ll dig into the details which drive this design. If you just want my suggestions, please skip to the four big solutions at the bottom.
|
||||
|
||||
### Requirements, details, and explanations
|
||||
|
||||
Application code, shared libraries, and shared services have notable operational differences, which heavily influence the requirements for instrumenting them.
|
||||
|
||||
#### Instrumenting application code and business logic
|
||||
|
||||
In any particular microservice, the bulk of the code written by the microservice developer is the application or business logic. This is the code that defines domain-specific operations; typically, it contains whatever special, unique logic justified the creation of a new microservice in the first place. Almost by definition, **this code is usually not shared or otherwise present in more than one service.**
|
||||
|
||||
That said, you still need to understand it, and that means it needs to be instrumented somehow. Some monitoring and tracing analysis systems auto-instrument code using black-box agents, and others expect explicit "white-box" instrumentation. For the latter, abstract tracing APIs offer many practical advantages for microservice-specific application code:
|
||||
|
||||
* An abstract API allows you to swap in new monitoring tools without re-writing instrumentation code. You may want to change cloud providers, vendors, and monitoring technologies, and a huge pile of non-portable instrumentation code would add meaningful overhead and friction to that procedure.
|
||||
* It turns out there are other interesting uses for instrumentation, beyond production monitoring. There are existing projects that use this same tracing instrumentation to power testing tools, distributed debuggers, “chaos engineering” fault injectors, and other meta-applications.
|
||||
* But most importantly, what if you wanted to extract an application component into a shared library? That leads us to:
|
||||
|
||||
|
||||
|
||||
#### Instrumenting shared libraries
|
||||
|
||||
The utility code present in most applications—code that handles network requests, database calls, disk writes, threading, queueing, concurrency management, and so on—is often generic and not specific to any particular application. This code is packaged up into libraries and frameworks which are then installed in many microservices, and deployed into many different environments.
|
||||
|
||||
This is the real difference: with shared code, someone else is the user. Most users have different dependencies and operational styles. If you attempt to instrument this shared code, you will note a couple of common issues:
|
||||
|
||||
* You need an API to write instrumentation. However, your library does not know what analysis system is being used. There are many choices, and all the libraries running in the same application cannot make incompatible choices.
|
||||
  * The task of injecting and extracting span contexts from request headers often falls on RPC libraries, since those packages encapsulate all network-handling code. However, a shared library cannot know which tracing protocol is being used by each application.
|
||||
  * Finally, you don’t want to force conflicting dependencies on your user. Even if they use gRPC, will it be the same version of gRPC you are binding to? So any monitoring API your library brings in for tracing must be free of dependencies.
|
||||
|
||||
|
||||
|
||||
**So, an abstract API which (a) has no dependencies, (b) is wire protocol agnostic, and (c) works with popular vendors and analysis systems should be a requirement for instrumenting shared library code.**
|
||||
|
||||
#### Instrumenting shared services
|
||||
|
||||
Finally, sometimes entire services—or sets of microservices—are general-purpose enough that they are used by many independent applications. These shared services are often hosted and managed by third parties. Examples might be cache servers, message queues, and databases.
|
||||
|
||||
It’s important to understand that **shared services are essentially "black boxes" from the perspective of application developers.** It is not possible to inject your application’s monitoring solution into a shared service. Instead, the hosted service often runs its own monitoring solution.
|
||||
|
||||
### **The four big solutions**
|
||||
|
||||
So, an abstracted tracing API would help libraries emit data and inject/extract Span Context. A standard wire protocol would help black-box services interconnect, and a standard data format would help separate analysis systems consolidate their data. Let's have a look at some promising options for solving these problems.
|
||||
|
||||
#### Tracing API: The OpenTracing project
|
||||
|
||||
As shown above, in order to instrument application code, a tracing API is required. And in order to extend that instrumentation to shared libraries, where most of the Span Context injection and extraction occurs, the API must be abstracted in certain critical ways.
|
||||
|
||||
The [OpenTracing][2] project aims to solve this problem for library developers. OpenTracing is a vendor-neutral tracing API which comes with no dependencies, and is quickly gaining support from a large number of monitoring systems. This means that, increasingly, if libraries ship with native OpenTracing instrumentation baked in, tracing will automatically be enabled when a monitoring system connects at application startup.
|
||||
|
||||
Personally, as someone who has been writing, shipping, and operating open source software for over a decade, it is profoundly satisfying to work on the OpenTracing project and finally scratch this observability itch.
|
||||
|
||||
In addition to the API, the OpenTracing project maintains a growing list of contributed instrumentation, some of which can be found [here][6]. If you would like to get involved, either by contributing an instrumentation plugin, natively instrumenting your own OSS libraries, or just want to ask a question, please find us on [Gitter][7] and say hi.
|
||||
|
||||
#### Wire Protocol: The trace-context HTTP headers
|
||||
|
||||
In order for monitoring systems to interoperate, and to mitigate migration issues when changing from one monitoring system to another, a standard wire protocol is needed for propagating Span Context.
|
||||
|
||||
The [w3c Distributed Trace Context Community Group][8] is hard at work defining this standard. Currently, the focus is on defining a set of standard HTTP headers. The latest draft of the specification can be found [here][9]. If you have questions for this group, the [mailing list][10] and [Gitter chatroom][11] are great places to go for answers.
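As a purely illustrative sketch of what such a header can look like, the following curl call forwards a trace-context header on an outgoing request. The header name and value layout follow the traceparent format (version-traceid-parentid-flags) the group eventually standardized; the draft linked above may use different names, and the IDs below are made up:

```
# Hypothetical example: propagating a trace-context header to a downstream service.
# The trace-id and parent-id values here are placeholders for illustration only.
curl -H "traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01" \
     https://example.com/api/orders
```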
|
||||
|
||||
#### Data protocol (Doesn't exist yet!!)
|
||||
|
||||
For black-box services, where it is not possible to install a tracer or otherwise interact with the program, a data protocol is needed to export data from the system.
|
||||
|
||||
Work on this data format and protocol is currently at an early stage, and mostly happening within the context of the w3c Distributed Trace Context Working Group. There is particular interest in defining higher-level concepts, such as RPC calls, database statements, etc., in a standard data schema. This would allow tracing systems to make assumptions about what kind of data would be available. The OpenTracing project is also working on this issue, by starting to define a [standard set of tags][12]. The plan is for these two efforts to dovetail with each other.
|
||||
|
||||
Note that there is a middle ground available at the moment. For “network appliances” that the application developer operates, but does not want to compile or otherwise perform code modifications to, dynamic linking can help. The primary examples of this are service meshes and proxies, such as Envoy or NGINX. For this situation, an OpenTracing-compliant tracer can be compiled as a shared object, and then dynamically linked into the executable at runtime. This option is currently provided by the [C++ OpenTracing API][13]. For Java, an OpenTracing [Tracer Resolver][14] is also under development.
|
||||
|
||||
These solutions work well for services that support dynamic linking, and are deployed by the application developer. But in the long run, a standard data protocol may solve this problem more broadly.
|
||||
|
||||
#### Analysis system: A service for extracting insights from trace data
|
||||
|
||||
Last but not least, there is now a cornucopia of tracing and monitoring solutions. A list of monitoring systems known to be compatible with OpenTracing can be found [here][15], but there are many more options out there. I would encourage you to research your options, and I hope you find the framework provided in this article to be useful when comparing options. In addition to rating monitoring systems based on their operational characteristics (not to mention whether you like the UI and features), make sure you think about the three big pieces above, their relative importance to you, and how the tracing system you are interested in provides a solution to them.
|
||||
|
||||
### Conclusion
|
||||
|
||||
In the end, how important each piece is depends heavily on who you are and what kind of system you are building. For example, open source library authors are very interested in the OpenTracing API, while service developers tend to be more interested in the Trace-Context specification. When someone says one piece is more important than the other, they usually mean “one piece is more important to me than the other."
|
||||
|
||||
However, the reality is this: Distributed Tracing has become a necessity for monitoring modern systems. In designing the building blocks for these systems, the age-old approach—"decouple where you can"—still holds true. Cleanly decoupled components are the best way to maintain flexibility and forwards-compatibility when building a system as cross-cutting as a distributed monitoring system.
|
||||
|
||||
Thanks for reading! Hopefully, now when you're ready to implement tracing in your own application, you have a guide to understanding which pieces they are talking about, and how they fit together.
|
||||
|
||||
Want to learn more? Sign up to attend [KubeCon EU][16] in May or [KubeCon North America][17] in December.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/distributed-tracing
|
||||
|
||||
作者:[Ted Young][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/tedsuo
|
||||
[1]:https://research.google.com/pubs/pub36356.html
|
||||
[2]:http://opentracing.io/
|
||||
[3]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing1_0.png?itok=dvDTX0JJ (Tracing)
|
||||
[4]:https://github.com/opentracing/specification/blob/master/specification.md
|
||||
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing2_0.png?itok=yokjNLZk (Tracing)
|
||||
[6]:https://github.com/opentracing-contrib/
|
||||
[7]:https://gitter.im/opentracing/public
|
||||
[8]:https://www.w3.org/community/trace-context/
|
||||
[9]:https://w3c.github.io/distributed-tracing/report-trace-context.html
|
||||
[10]:http://lists.w3.org/Archives/Public/public-trace-context/
|
||||
[11]:https://gitter.im/TraceContext/Lobby
|
||||
[12]:https://github.com/opentracing/specification/blob/master/semantic_conventions.md
|
||||
[13]:https://github.com/opentracing/opentracing-cpp
|
||||
[14]:https://github.com/opentracing-contrib/java-tracerresolver
|
||||
[15]:http://opentracing.io/documentation/pages/supported-tracers
|
||||
[16]:https://events.linuxfoundation.org/kubecon-eu-2018/
|
||||
[17]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/
|
@ -1,135 +0,0 @@
|
||||
How to build container images with Buildah
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/04/buildah-816x345.png)
|
||||
|
||||
Project Atomic, through its work on the Open Container Initiative (OCI), has created a great tool called [Buildah][1]. Buildah helps with creating, building, and updating container images, supporting both Docker-formatted images and OCI-compliant images.
|
||||
|
||||
Buildah handles building container images without the need to have a full container runtime or daemon installed. This particularly shines for setting up a continuous integration and continuous delivery pipeline for building containers.
|
||||
|
||||
Buildah makes the container’s filesystem directly available to the build host. This means that the build tooling is available on the host and not needed in the container image, which keeps the build faster and the image smaller and safer. There are Buildah packages for CentOS, Fedora, and Debian.
|
||||
|
||||
### Installing Buildah
|
||||
|
||||
Since Fedora 26 Buildah can be installed using dnf.
|
||||
```
|
||||
$ sudo dnf install buildah -y
|
||||
|
||||
```
|
||||
|
||||
The current version of buildah is 0.16, which can be displayed by the following command.
|
||||
```
|
||||
$ buildah --version
|
||||
|
||||
```
|
||||
|
||||
### Basic commands
|
||||
|
||||
The first step needed to build a container image is to get a base image; this is done with the FROM statement in a Dockerfile. Buildah handles this in a similar way.
|
||||
```
|
||||
$ sudo buildah from fedora
|
||||
|
||||
```
|
||||
|
||||
This command pulls the Fedora base image and stores it on the host. It is possible to inspect the images available on the host by running the following.
|
||||
```
|
||||
$ sudo buildah images
|
||||
IMAGE ID IMAGE NAME CREATED AT SIZE
|
||||
9110ae7f579f docker.io/library/fedora:latest Mar 7, 2018 20:51 234.7 MB
|
||||
|
||||
```
|
||||
|
||||
After pulling the base image, a container instance of this image is available; this is a “working-container”.
|
||||
|
||||
The following command displays the running containers.
|
||||
```
|
||||
$ sudo buildah containers
|
||||
CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME
6112db586ab9 * 9110ae7f579f docker.io/library/fedora:latest fedora-working-container
|
||||
|
||||
```
|
||||
|
||||
Buildah also provides a very useful command to stop and remove all the containers that are currently running.
|
||||
```
|
||||
$ sudo buildah rm --all
|
||||
|
||||
```
|
||||
|
||||
The full list of commands is available using the --help option.
|
||||
```
|
||||
$ buildah --help
|
||||
|
||||
```
|
||||
|
||||
### Building an Apache web server container image
|
||||
|
||||
Let’s see how to use Buildah to install an Apache web server on a Fedora base image, then copy a custom index.html to be served by the server.
|
||||
|
||||
First let’s create the custom index.html.
|
||||
```
|
||||
$ echo "Hello Fedora Magazine !!!" > index.html
|
||||
|
||||
```
|
||||
|
||||
Then install the httpd package inside the running container.
|
||||
```
|
||||
$ sudo buildah from fedora
|
||||
$ sudo buildah run fedora-working-container dnf install httpd -y
|
||||
|
||||
```
|
||||
|
||||
Let’s copy index.html to /var/www/html/.
|
||||
```
|
||||
$ sudo buildah copy fedora-working-container index.html /var/www/html/index.html
|
||||
|
||||
```
|
||||
|
||||
Then configure the container entrypoint to start httpd.
|
||||
```
|
||||
$ sudo buildah config --entrypoint "/usr/sbin/httpd -DFOREGROUND" fedora-working-container
|
||||
|
||||
```
|
||||
|
||||
Now to make the “working-container” available, the commit command saves the container to an image.
|
||||
```
|
||||
$ sudo buildah commit fedora-working-container hello-fedora-magazine
|
||||
|
||||
```
|
||||
|
||||
The hello-fedora-magazine image is now available, and can be pushed to a registry to be used.
|
||||
```
|
||||
$ sudo buildah images
|
||||
IMAGE ID IMAGE NAME CREATED AT SIZE
9110ae7f579f docker.io/library/fedora:latest Mar 7, 2018 22:51 234.7 MB
49bd5ec5be71 docker.io/library/hello-fedora-magazine:latest Apr 27, 2018 11:01 427.7 MB
|
||||
|
||||
```
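For instance, pushing it could look like the following sketch; the registry hostname here is only a placeholder, and credentials for the registry may be required:

```
$ sudo buildah push hello-fedora-magazine docker://registry.example.com/hello-fedora-magazine:latest
```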
|
||||
|
||||
It is also possible to use Buildah to test this image by running the following steps.
|
||||
```
|
||||
$ sudo buildah from --name=hello-magazine docker.io/library/hello-fedora-magazine
|
||||
|
||||
$ sudo buildah run hello-magazine
|
||||
|
||||
```
|
||||
|
||||
Accessing <http://localhost> will display “Hello Fedora Magazine !!!”.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/daemon-less-container-management-buildah/
|
||||
|
||||
作者:[Ashutosh Sudhakar Bhakare][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/ashutoshbhakare/
|
||||
[1]:https://github.com/projectatomic/buildah
|
230
sources/tech/20180504 A Beginners Guide To Cron Jobs.md
Normal file
230
sources/tech/20180504 A Beginners Guide To Cron Jobs.md
Normal file
@ -0,0 +1,230 @@
|
||||
A Beginners Guide To Cron Jobs
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/05/cron-jobs1-720x340.jpg)
|
||||
**Cron** is one of the most useful utilities that you can find in any Unix-like operating system. It is used to schedule commands to run at a specific time. These scheduled commands or tasks are known as “Cron Jobs”. Cron is generally used for running scheduled backups, monitoring disk space, periodically deleting files that are no longer required (for example, log files), running system maintenance tasks, and a lot more. In this brief guide, we will see the basic usage of Cron Jobs in Linux.
|
||||
|
||||
### The Beginners Guide To Cron Jobs
|
||||
|
||||
The typical format of a cron job is:
|
||||
```
|
||||
Minute(0-59) Hour(0-23) Day_of_month(1-31) Month(1-12) Day_of_week(0-6) Command_to_execute
|
||||
|
||||
```
|
||||
|
||||
Just memorize the cron job format, or print the following illustration and keep it on your desk.
|
||||
|
||||
![][2]
|
||||
|
||||
In the above picture, the asterisks refer to the specific blocks of time.
|
||||
|
||||
To display the contents of the **crontab** file of the currently logged in user:
|
||||
```
|
||||
$ crontab -l
|
||||
|
||||
```
|
||||
|
||||
To edit the current user’s cron jobs, do:
|
||||
```
|
||||
$ crontab -e
|
||||
|
||||
```
|
||||
|
||||
If it is the first time, you will be asked to choose an editor to edit the jobs.
|
||||
```
|
||||
no crontab for sk - using an empty one
|
||||
|
||||
Select an editor. To change later, run 'select-editor'.
|
||||
1. /bin/nano <---- easiest
|
||||
2. /usr/bin/vim.basic
|
||||
3. /usr/bin/vim.tiny
|
||||
4. /bin/ed
|
||||
|
||||
Choose 1-4 [1]:
|
||||
|
||||
```
|
||||
|
||||
Choose any one that suits you. Here is how a sample crontab file looks.
|
||||
|
||||
![][3]
|
||||
|
||||
In this file, you need to add your cron jobs.
|
||||
|
||||
To edit the crontab of a different user, for example ostechnix, do:
|
||||
```
|
||||
$ crontab -u ostechnix -e
|
||||
|
||||
```
|
||||
|
||||
Let us see some examples.
|
||||
|
||||
To run a cron job **every minute**, the format should be like below.
|
||||
```
|
||||
* * * * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
To run a cron job every 5 minutes, add the following in your crontab file.
|
||||
```
|
||||
*/5 * * * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
To run a cron job at every quarter hour (every 15th minute), add this:
|
||||
```
|
||||
*/15 * * * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
To run a cron job every hour at minute 30, add:
|
||||
```
|
||||
30 * * * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
You can also define multiple time intervals separated by commas. For example, the following cron job will run three times every hour, at minutes 0, 5 and 10:
|
||||
```
|
||||
0,5,10 * * * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
Run a cron job every half hour:
|
||||
```
|
||||
*/30 * * * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
Run a job every hour:
|
||||
```
|
||||
0 * * * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
Run a job every 2 hours:
|
||||
```
|
||||
0 */2 * * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
Run a job every day (It will run at 00:00):
|
||||
```
|
||||
0 0 * * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
Run a job every day at 3am:
|
||||
```
|
||||
0 3 * * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
Run a job every Sunday:
|
||||
```
|
||||
0 0 * * SUN <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
Or,
|
||||
```
|
||||
0 0 * * 0 <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
It will run exactly at 00:00 on Sunday.
|
||||
|
||||
Run a job on every day-of-week from Monday through Friday, i.e., every weekday:
|
||||
```
|
||||
0 0 * * 1-5 <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
The job will start at 00:00.
|
||||
|
||||
Run a job every month:
|
||||
```
|
||||
0 0 1 * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
Run a job at 16:15 on day-of-month 1:
|
||||
```
|
||||
15 16 1 * * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
Run a job every quarter, i.e., at 00:00 on day-of-month 1 in every 3rd month:
|
||||
```
|
||||
0 0 1 */3 * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
Run a job on a specific month at a specific time:
|
||||
```
|
||||
5 0 * 4 * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
The job will run at 00:05 every day in April.
|
||||
|
||||
Run a job every 6 months:
|
||||
```
|
||||
0 0 1 */6 * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
This cron job will start at 00:00 on day-of-month 1 in every 6th month.
|
||||
|
||||
Run a job every year:
|
||||
```
|
||||
0 0 1 1 * <command-to-execute>
|
||||
|
||||
```
|
||||
|
||||
This cron job will start at 00:00 on day-of-month 1 in January.
|
||||
|
||||
We can also use the following special strings to define jobs.
|
||||
|
||||
* @reboot : Run once, at startup.
* @yearly : Run once a year.
* @annually : Same as @yearly.
* @monthly : Run once a month.
* @weekly : Run once a week.
* @daily : Run once a day.
* @midnight : Same as @daily.
* @hourly : Run once an hour.
|
||||
|
||||
For example, to run a job every time the server is rebooted, add this line in your crontab file.
|
||||
```
|
||||
@reboot <command-to-execute>
|
||||
|
||||
```
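Putting the pieces together, a complete crontab entry pairs one of the schedules above with a real command. A hypothetical example that archives /home at 03:00 every day (the paths are placeholders, and note that the % character must be escaped as \% inside a crontab):

```
0 3 * * * tar -czf /backup/home-$(date +\%F).tar.gz /home
```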
|
||||
|
||||
To remove all cron jobs for the current user:
|
||||
```
|
||||
$ crontab -r
|
||||
|
||||
```
|
||||
|
||||
There is also a dedicated website named [**crontab.guru**][4] for learning cron jobs examples. This site provides a lot of cron job examples.
|
||||
|
||||
For more details, check man pages.
|
||||
```
|
||||
$ man crontab
|
||||
|
||||
```
|
||||
|
||||
And, that’s all for now. At this point, you should have a basic understanding of cron jobs and how to use them in real-world scenarios. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/05/cron-job-format-1.png
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/cron-jobs-1.png
|
||||
[4]:https://crontab.guru/
|
75
sources/tech/20180507 4 Firefox extensions to install now.md
Normal file
75
sources/tech/20180507 4 Firefox extensions to install now.md
Normal file
@ -0,0 +1,75 @@
|
||||
4 Firefox extensions to install now
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/redpanda_firefox_pet_animal.jpg?itok=aSpKsyna)
|
||||
As I mentioned in my [original article][1] on Firefox extensions, the web browser has become a critical component of the computing experience for many users. Modern browsers have evolved into powerful and extensible platforms, and extensions can add or modify their functionality. Extensions for Firefox are built using the WebExtensions API, a cross-browser development system.
|
||||
|
||||
In the first article, I asked readers: "Which extensions should you install?" To reiterate, that decision largely comes down to how you use your browser, your views on privacy, how much you trust extension developers, and other personal preferences. Since that article was published, one extension I recommended (Xmarks) has been discontinued. Additionally, that article received a ton of feedback that has been taken into account for this update.
|
||||
|
||||
Once again, I'd like to point out that browser extensions often require the ability to read and/or change everything on the web pages you visit. You should consider the ramifications of this very carefully. If an extension has modify access to all the web pages you visit, it could act as a keylogger, intercept credit card information, track you online, insert advertisements, and perform a variety of other nefarious activities. That doesn't mean every extension will surreptitiously do these things, but you should carefully consider the installation source, the permissions involved, your risk profile, and other factors before you install any extension. Keep in mind you can use profiles to manage how an extension impacts your attack surface—for example, using a dedicated profile with no extensions to perform tasks such as online banking.
|
||||
|
||||
With that in mind, here are four open source Firefox extensions you may want to consider.
|
||||
|
||||
### uBlock Origin
|
||||
|
||||
![ublock origin ad blocker screenshot][2]
|
||||
|
||||
My first recommendation remains unchanged. [uBlock Origin][3] is a fast, low memory, wide-spectrum blocker that allows you to not only block ads but also enforce your own content filtering. The default behavior of uBlock Origin is to block ads, trackers, and malware sites using multiple, predefined filter lists. From there it allows you to arbitrarily add lists and rules, or even lock down to a default-deny mode. Despite being powerful, the extension has proven to be efficient and performant. It continues to be updated regularly and is one of the best options available for this functionality.
|
||||
|
||||
### Privacy Badger
|
||||
|
||||
![privacy badger ad blocker][4]
|
||||
|
||||
My second recommendation also remains unchanged. If anything, privacy has been brought even more to the forefront since my previous article, making this extension an easy recommendation. As the name indicates, [Privacy Badger][5] is a privacy-focused extension that blocks ads and other third-party trackers. It's a project of the Electronic Frontier Foundation, which says:
|
||||
|
||||
> "Privacy Badger was born out of our desire to be able to recommend a single extension that would automatically analyze and block any tracker or ad that violated the principle of user consent; which could function well without any settings, knowledge, or configuration by the user; which is produced by an organization that is unambiguously working for its users rather than for advertisers; and which uses algorithmic methods to decide what is and isn't tracking."
|
||||
|
||||
Why is Privacy Badger on this list when the previous item may seem similar? A couple reasons. The first is that it fundamentally works differently than uBlock Origin. The second is that a practice of defense in depth is a sound policy to follow. Speaking of defense in depth, the EFF also maintains [HTTPS Everywhere][6] to automatically ensure https is used for many major websites. When you're installing Privacy Badger, you may want to consider HTTPS Everywhere as well.
|
||||
|
||||
In case you were starting to think this article was simply going to be a rehash of the last one, here's where my recommendations diverge.
|
||||
|
||||
### Bitwarden
|
||||
|
||||
![Bitwarden][7]
|
||||
|
||||
When recommending LastPass in the previous article, I mentioned it was likely going to be a controversial selection. That certainly proved true. Whether you should use a password manager at all—and if you do, whether you should choose one that has a browser plugin—is a hotly debated topic, and the answer very much depends on your personal risk profile. I asserted that most casual computer users should use one because it's much better than the most common alternative: using the same weak password everywhere. I still believe that.
|
||||
|
||||
[Bitwarden][8] has really matured since the last time I checked it out. Like LastPass, it is user-friendly, supports two-factor authentication, and is reasonably secure. Unlike LastPass, it is [open source][9]. It can be used with or without the browser plugin and supports importing from other solutions including LastPass. The core functionality is completely free, and there is a premium version that is $10/year.
|
||||
|
||||
### Vimium-FF
|
||||
|
||||
![Vimium][10]
|
||||
|
||||
[Vimium][11] is another open source extension that provides Firefox keyboard shortcuts for navigation and control in the spirit of Vim. They call it "The Hacker's Browser." Modifier keys are specified as **< c-x>**, **< m-x>**, and **< a-x>** for Ctrl+x, Meta+x, and Alt+x, respectively, and the defaults can be easily customized. Once you have Vimium installed, you can see this list of key bindings at any time by typing **?**. Note that if you prefer Emacs, there are also a couple of extensions for those keybindings as well. Either way, I think keyboard shortcuts are an underutilized productivity booster.
|
||||
|
||||
### Bonus: Grammarly
|
||||
|
||||
Not everyone is lucky enough to write a column on Opensource.com—although you should seriously consider writing for the site; if you have questions, are interested, or would like a mentor, reach out and let's chat. But even without a column to write, proper grammar is beneficial in a large variety of situations. Enter [Grammarly][12]. This extension is not open source, unfortunately, but it does make sure everything you type is clear, effective, and mistake-free. It does this by scanning your text for common and complex grammatical mistakes, spanning everything from subject-verb agreement to article use to modifier placement. Basic functionality is free, with a premium version with additional checks available for a monthly charge. I used it for this article and it caught multiple errors that my proofreading didn't.
|
||||
|
||||
Again, Grammarly is the only extension included on this list that is not open source, so if you know of a similar high-quality open source replacement, let us know in the comments.
|
||||
|
||||
These extensions are ones I've found useful and recommend to others. Let me know in the comments what you think of the updated recommendations.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/firefox-extensions
|
||||
|
||||
作者:[Jeremy Garcia][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jeremy-garcia
|
||||
[1]:https://opensource.com/article/18/1/top-5-firefox-extensions
|
||||
[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/ublock.png?itok=_QFEbDmq (ublock origin ad blocker screenshot)
|
||||
[3]:https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/
|
||||
[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/privacy_badger_1.0.1.png?itok=qZXQeKtc (privacy badger ad blocker screenshot)
|
||||
[5]:https://www.eff.org/privacybadger
|
||||
[6]:https://www.eff.org/https-everywhere
|
||||
[7]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/bitwarden.png?itok=gZPrCYoi (Bitwarden)
|
||||
[8]:https://bitwarden.com/
|
||||
[9]:https://github.com/bitwarden
|
||||
[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/vimium.png?itok=QRESXjWG (Vimium)
|
||||
[11]:https://addons.mozilla.org/en-US/firefox/addon/vimium-ff/
|
||||
[12]:https://www.grammarly.com/
|
@ -0,0 +1,162 @@
|
||||
A reading list for Linux and open source fans
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_stack_library_reading.jpg?itok=uulcS8Sw)
|
||||
I recently asked our writer community to share with us what they're reading. These folks come from all different walks of life and roles in tech. What they have in common is that they are living and breathing Linux and open source every day.
|
||||
|
||||
Drink in this fantastic list. Many of them are free and available to download.
|
||||
|
||||
You may see books you've been meaning to get around to, books that are completely new to you, and some that feel like old friends.
|
||||
|
||||
We'd love to hear what you think of this list. Share with us in the comments below or on [Twitter][1] with #Linuxbooks #opensourcebooks.
|
||||
|
||||
### 17 books to add to your reading list
|
||||
|
||||
**Plus, a bonus fiction read.**
|
||||
|
||||
[23 Years of FreeDOS][2] by Jim Hall
|
||||
|
||||
Last year, the [FreeDOS][3] Project turned 23 years old. While there's nothing special about 23 years, the project decided to celebrate that milestone by sharing stories about how different people use or contribute to FreeDOS. The free, CC BY eBook is a collection of essays that describe the history of FreeDOS since 1994, and how people use FreeDOS today. (Recommendation and review by [Jim Hall][4])
|
||||
|
||||
[Eloquent JavaScript][5] by Marijn Haverbeke
|
||||
|
||||
This book teaches you how to write beautifully crafted programs using one of the most ubiquitous programming languages: [Javascript][6]. Learn the basics and advanced concepts of the language, and how to write programs that run in the browser or Node.js environment. The book also includes five fun projects so you can dive into actual programming while making a platform game or even writing your own programming language. (Recommendation and review by [Rahul Thakoor][7])
|
||||
|
||||
[_Forge Your Future with Open Source_][8] by VM (Vicky) Brasseur
|
||||
|
||||
If you're looking to contribute to open source, but you don't know how to start, this is the book for you. It covers how to find a project to join and how to make your first contributions. (Recommendation and review by [Ben Cotton][9])
|
||||
|
||||
[_Git for Teams_][10] by Emma Jane Hogbin Westby
|
||||
|
||||
Git is a widely-used version control system for individuals and teams alike, but its power means it can be complex. This book provides guidance on how to effectively use [git][11] in a team environment. For more, read our [in-depth review][12]. (Recommendation and review by [Ben Cotton][9])
|
||||
|
||||
[Getting to Yes][13] by Fisher, Ury, and Patton
|
||||
|
||||
The Harvard Negotiation Project, formed in the 1970s, was an academic effort involving economists, psychologists, sociologists, and political scientists to create a framework for negotiations which allows better outcomes for all involved. Their framework and techniques have been used in a diverse set of circumstances, including the Camp David Accords between Egypt and Israel in 1978.
|
||||
|
||||
Principled Negotiation involves understanding the real interests of the participants in a negotiation and using this knowledge to generate options acceptable to all. The same techniques can be used to resolve interpersonal issues, negotiations over cars and houses, discussions with insurance companies, and so on.
|
||||
|
||||
What does this have to do with open source software development? Everything in open source is a negotiation, in some sense. Submitting a bug report is outlining a position—that something does not work correctly—and requesting that someone reprioritize their work to fix it. A heated discussion on a mailing list over the right way to do something or a comment on a feature request is a negotiation, often with imperfect knowledge, about the scope and goals of the project.
|
||||
|
||||
Reframing these conversations as explorations, trying to understand why the other person is asking for something, and being transparent about the reasons why you believe another viewpoint to apply, can dramatically change your relationships and effectiveness working in an open source project. (Recommendation and review by [Dave Neary][14])
|
||||
|
||||
[Just for Fun: The Story of an Accidental Revolutionary][15] by Linus Torvalds et al.
|
||||
|
||||
Linux is an amazing and powerful operating system that spawned a movement to transparency and openness. And, the open source ethos that drives it flies in the face of traditional models of business and capital appreciation. In this book, learn about the genius of Linus the man and [Linux][16] the operating system. Get insight into the experiences that shaped Linus's life and fueled his transformation from a nerdy young man who enjoyed toying with his grandfather's clock to the master programmer of the world's predominant operating system. (Recommendation and review by [Don Watkins][17])
|
||||
|
||||
[Linux in a Month of Lunches][18] by Steven Ovadia
|
||||
|
||||
This book is designed to teach non-technical users how to use desktop [Linux][19] in about an hour a day. The book covers everything from choosing a desktop environment to installing software, to using Git. At the end of the month, readers can use Linux fulltime, replacing their other operating systems. (Recommendation and review by [Steven Ovadia][20])
|
||||
|
||||
[Linux in Action][21] by David Clinton
|
||||
|
||||
This book introduces serious Linux administration tools for anyone interested in getting more out of their tech, including IT professionals, developers, [DevOps][22] specialists, and more. Rather than teaching skills in isolation, the book is organized around practical projects like automating off-site data backups, securing a web server, and creating a VPN to safely connect an organization's resources. [Read more][23] by this author. (Recommendation and review by [David Clinton][24])
|
||||
|
||||
[Make: Linux for Makers][25] by Aaron Newcomb
|
||||
|
||||
This book is a must-read for anyone wanting to create and innovate with the [Raspberry Pi][26]. It will have you up and running with your Raspberry Pi while at the same time helping you understand the nuances of its Raspbian Linux operating system. This is a masterful basic text that will help any maker unlock the potential of the Raspberry Pi. It’s concise and well written, with a lot of fantastic illustrations and practical examples. (Recommendation by Jason Hibbets | Review by [Don Watkins][17])
|
||||
|
||||
[Managing Humans: Biting and Humorous Tales of a Software Engineering Manager][27] by Michael Lopp
|
||||
|
||||
Michael Lopp is better known by the nom de plume Rands, author of the popular blog [Rands in Repose][28]. This book is an edited, curated collection of blog posts, all related to the management of software development teams. What I love about the book and the blog, is that Rands starts from the fundamental principle that the most complex part of software development is human interactions. The book covers a range of topics about reading a group, understanding the personalities that make it up, and figuring out how to get the best out of everyone.
|
||||
|
||||
These things are universal, and as an open source community manager, I come across them all the time. How do you know if someone might be burning out? How do you run a good meeting? How do you evolve the culture of a project and team as it grows? How much process is the right amount? Regardless of the activity, questions like these arise all the time, and Rands's irreverent, humorous take is educational and entertaining. (Recommendation and review by [Dave Neary][14])
|
||||
|
||||
[Open Sources: Voices from the Open Source Revolution][29] (O'Reilly, 1999)
|
||||
|
||||
This book is a must-read for all open source enthusiasts. Linus Torvalds, Eric S. Raymond, Richard Stallman, Michael Tiemann, Tim O'Reilly, and other important figures in the open source movement share their thoughts on the forward momentum of [open source software][30]. (Recommendation by [Jim Hall][4] | Review by Jen Wike Huger)
|
||||
|
||||
[Producing Open Source Software: How to Run a Successful Free Software Project][31] by Karl Fogel
|
||||
|
||||
This book is for anyone who wants to build an open source community, is already building one, or wants to better understand trends in successful open source project community development. Karl Fogel analyzes and studies traits and characteristics of successful open source projects and how they have developed a community around the project. The book offers helpful advice to community managers (or want-to-be community managers) on how to navigate community development around a project. This is a rare book that takes a deeper look into open source community development and offers plenty of ingredients for success, but you have to take it and create the recipe for your project or community. (Recommendation and review by [Justin Flory][32])
|
||||
|
||||
[Programming with Robots][33] by Albert W. Schueller
|
||||
|
||||
This book introduces the basics of programming using the Lego Mindstorms NXT. Instead of writing abstract programs, learn how to program devices that can sense and interface with the physical world. Learn how software and hardware interact with each other while experimenting with sensors, motors or making music using code. (Recommendation and review by [Rahul Thakoor][7])
|
||||
|
||||
[The AWK programming language][34] by Alfred V. Aho, Brian W. Kernighan, and Peter J. Weinberger
|
||||
|
||||
This book, written by the creators of awk, follows a pattern similar to other books about *nix tools written by the original Bell Labs Unix team and published in the 1970s-1990s, explaining the rationale and intended use of awk in clear and compact prose, liberally sprinkled with examples that start simply and are further elaborated by the need to deal with more fully-detailed problems and edge cases. When published, the typical reader of this book would have been someone who had files of textual or numeric data that needed to be processed and transformed, and who wanted to be able to easily create lookup tables, apply regular expressions, react to structure changes within the input, apply mathematical transformations to numbers and easily format the output.
|
||||
|
||||
While that characterization still applies, today the book can also provide a window back into the time when the only user interface available was a terminal, when "modularity" created the ability to string together numerous single-purpose utility programs in shell scripts to create data transformation pipelines that crunched the data and produced the reports that everyone expected of computers. Today, awk should be a part of the operations toolbox, providing a fine ability to further process configuration and log files, and this book still provides a great introduction to that process. (Recommendation by [Jim Hall][4] | Review by [Chris Hermansen][35])
|
||||
|
||||
[Think Python: Think Like a Computer Scientist][36] by Allen Downey
|
||||
|
||||
This book about [Python][37] is part of [a series][38] that covers other languages as well, like Java, [Perl][39], etc. It moves past simple language syntax and approaches the topic through the lens of how a problem solver would build a solution. It's a great introductory guide to programming through a layering of concepts, but it can also serve the dabbler who is looking to develop skills in an area such as classes or inheritance, with chapters that have examples and exercises to apply the skills taught. (Recommendation and review by [Steve Morris][40])
|
||||
|
||||
[Understanding Open Source and Free Software Licensing][41] (O'Reilly, 2004)
|
||||
|
||||
"This book bridges the gap between the open source vision and the practical implications of its legal underpinnings. If open source and free software licenses interest you, this book will help you understand them. If you're an open source/free software developer, this book is an absolute necessity." (Recommendation by [Jim Hall][4] | review from [Amazon][42])
|
||||
|
||||
[Unix Text Processing][43] by Dale Dougherty and Tim O'Reilly
|
||||
|
||||
This book was written in 1987 as an introduction to Unix systems and how writers could use Unix tools to do work. It's still a useful resource for beginners to learn the basics of the Unix shell, the vi editor, awk and shell scripts, and the nroff and troff typesetting system. The original edition is out of print, but O'Reilly has made the book available for free via their website. (Recommendation and review by [Jim Hall][4])
|
||||
|
||||
### Bonus: Fiction book
|
||||
|
||||
[Station Eleven][44] by Emily St. John Mandel
|
||||
|
||||
This story is set in a near future, twenty years after the earth's population has been decimated by a mysterious and deadly flu. We follow Kirsten Raymonde, a young woman who is traveling near the Great Lakes with a nomadic theatre group because "Survival is insufficient," as she makes her way through the post-apocalyptic world. It's a wonderful story, well worth reading.
|
||||
|
||||
What struck me about the book is how tenuous our relationship with technology actually is. In the Douglas Adams book "Mostly Harmless", there is a great line: "Left to his own devices he couldn't build a toaster. He could just about make a sandwich and that was it." This is the world of Kirsten Raymonde. Everyone has been left to their own devices: There is no electricity because no one can work the power grid. No cars, no oil refineries.
|
||||
|
||||
There is a fascinating passage where one inventor has rigged up a generator with a bicycle and is trying to turn on a laptop, trying to see if there is still an internet. We discover the Museum of Civilization, stocked with objects which have no use, which has been left over from the old world: passports, mobile phones, credit cards, stilettoes.
|
||||
|
||||
All of the world's technology becomes useless. (Recommendation and review by [Dave Neary][14])
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/list-books-Linux-open-source
|
||||
|
||||
作者:[Jen Wike Huger][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/remyd
|
||||
[1]:https://twitter.com/opensourceway
|
||||
[2]:http://www.freedos.org/ebook/
|
||||
[3]:https://opensource.com/article/18/5/node/44116
|
||||
[4]:https://opensource.com/users/jim-hall
|
||||
[5]:https://eloquentjavascript.net/
|
||||
[6]:https://opensource.com/article/18/5/node/32826
|
||||
[7]:https://opensource.com/users/rahul27
|
||||
[8]:https://pragprog.com/book/vbopens/forge-your-future-with-open-source
|
||||
[9]:https://opensource.com/users/bcotton
|
||||
[10]:http://gitforteams.com/
|
||||
[11]:https://opensource.com/article/18/5/node/43741
|
||||
[12]:https://opensource.com/business/15/11/git-for-teams-review
|
||||
[13]:http://www.williamury.com/books/getting-to-yes/
|
||||
[14]:https://opensource.com/users/dneary
|
||||
[15]:http://a.co/749s27n
|
||||
[16]:https://opensource.com/article/18/5/node/19796
|
||||
[17]:https://opensource.com/users/don-watkins
|
||||
[18]:https://manning.com/ovadia
|
||||
[19]:https://opensource.com/article/18/5/node/42626
|
||||
[20]:https://opensource.com/users/stevenov
|
||||
[21]:https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9
|
||||
[22]:https://opensource.com/article/18/5/node/44696
|
||||
[23]:https://bootstrap-it.com/index.php/books/
|
||||
[24]:https://opensource.com/users/dbclinton
|
||||
[25]:https://www.makershed.com/products/make-linux-for-makers
|
||||
[26]:https://opensource.com/article/18/5/node/35731
|
||||
[27]:https://www.amazon.com/Managing-Humans-Humorous-Software-Engineering/dp/1484221575/ref=dp_ob_title_bk
|
||||
[28]:http://randsinrepose.com/
|
||||
[29]:https://www.oreilly.com/openbook/opensources/book/index.html
|
||||
[30]:https://opensource.com/article/18/5/node/42001
|
||||
[31]:https://producingoss.com/
|
||||
[32]:https://opensource.com/users/justinflory
|
||||
[33]:http://engineering.nyu.edu/gk12/amps-cbri/pdf/RobotC%20FTC%20Books/notesRobotC.pdf
|
||||
[34]:https://archive.org/details/pdfy-MgN0H1joIoDVoIC7
|
||||
[35]:https://opensource.com/users/clhermansen
|
||||
[36]:http://greenteapress.com/thinkpython2/thinkpython2.pdf
|
||||
[37]:https://opensource.com/article/18/5/node/40481
|
||||
[38]:http://greenteapress.com/wp/
|
||||
[39]:https://opensource.com/article/18/5/node/35141
|
||||
[40]:https://opensource.com/users/smorris12
|
||||
[41]:http://shop.oreilly.com/product/9780596005818.do
|
||||
[42]:https://www.amazon.com/Understanding-Open-Source-Software-Licensing/dp/0596005814
|
||||
[43]:http://www.oreilly.com/openbook/utp/
|
||||
[44]:http://www.emilymandel.com/stationeleven.html
|
@ -0,0 +1,181 @@
|
||||
Systemd Services: Beyond Starting and Stopping
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/systemd-minetest-2.jpg?itok=bXO0ggHL)
|
||||
[In the previous article][1], we showed how to create a systemd service that you can run as a regular user to start and stop your game server. As it stands, however, your service is still not much better than running the server directly. Let's jazz it up a bit by having it send out emails to the players, alerting them when the server becomes available and warning them when it is about to be turned off:
|
||||
```
|
||||
# minetest.service
|
||||
|
||||
[Unit]
|
||||
Description= Minetest server
|
||||
Documentation= https://wiki.minetest.net/Main_Page
|
||||
|
||||
[Service]
|
||||
Type= simple
|
||||
|
||||
ExecStart= /usr/games/minetest --server
|
||||
ExecStartPost= /home/<username>/bin/mtsendmail.sh "Ready to rumble?" "Minetest Starting up"
|
||||
|
||||
TimeoutStopSec= 180
|
||||
ExecStop= /home/<username>/bin/mtsendmail.sh "Off to bed. Nightie night!" "Minetest Stopping in 2 minutes"
|
||||
ExecStop= /bin/sleep 120
|
||||
ExecStop= /bin/kill -2 $MAINPID
|
||||
|
||||
```
|
||||
|
||||
There are a few new things in here. First, there's the `ExecStartPost` directive. You can use this directive for anything you want to run right after the main application starts. In this case, you run a custom script, `mtsendmail` (see below), that sends an email to your friends telling them that the server is up.
|
||||
```
|
||||
#!/bin/bash
|
||||
# mtsendmail
|
||||
echo "$1" | mutt -F /home/<username>/.muttrc -s "$2" my_minetest@mailing_list.com
|
||||
|
||||
```
|
||||
|
||||
You can use [Mutt][2], a command-line email client, to shoot off your messages. Although the script shown above is, for all practical purposes, only one line long, remember that you can't have a line with pipes and redirections as a systemd unit argument, so you have to wrap it in a script.
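
For reference, another approach you sometimes see (a sketch of an assumption on my part, not what this article's unit uses) is to invoke a shell explicitly so the pipe is interpreted inside it:

```
# Hypothetical alternative to the wrapper script: run a shell so it can
# handle the pipe itself; the unit in this article sticks with mtsendmail.sh.
ExecStartPost=/bin/sh -c 'echo "Ready to rumble?" | mutt -F /home/<username>/.muttrc -s "Minetest Starting up" my_minetest@mailing_list.com'
```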
|
||||
|
||||
For the record, there is also an `ExecStartPre` directive for things you want to execute before starting the service proper.
|
||||
|
||||
Next up, you have a block of commands that close down the server. The `TimeoutStopSec` directive extends the time systemd waits before it gives up on shutting down the service. The default timeout is around 90 seconds; anything longer and systemd forces the service to close and reports a failure. But, as you want to give your users a couple of minutes before closing the server completely, you are going to push the default up to three minutes. This stops systemd from thinking the shutdown has failed.
|
||||
|
||||
Then the close down proper starts. Although there is no `ExecStopPre` as such, you can simulate running stuff before closing down your server by using more than one `ExecStop` directive. They will be executed in order, from topmost to bottommost, and will allow you to send out a message before the server is actually stopped.
|
||||
|
||||
With that in mind, the first thing you do is shoot off an email to your friends, warning them the server is going down. Then you wait two minutes. Finally you close down the server. Minetest likes to be closed down with [Ctrl] + [c], which translates into an interrupt signal ( _SIGINT_ ). That is what you do when you issue the `kill -2 $MAINPID` command. `$MAINPID` is a systemd variable for your service that points to the PID of the main application.
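
As an aside, systemd also has a `KillSignal` directive that can be set to _SIGINT_; the snippet below is only a sketch of that variant, not something the article's unit relies on:

```
# Hypothetical variant: let systemd deliver SIGINT itself when stopping,
# instead of calling kill -2 explicitly in an ExecStop line.
[Service]
KillSignal=SIGINT
```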
|
||||
|
||||
This is much better! Now, when you run
|
||||
```
|
||||
systemctl --user start minetest
|
||||
|
||||
```
|
||||
|
||||
the service will start up the Minetest server and send out an email to your users. Likewise, when you are about to close it down, it will warn them and give them two minutes to log off.
|
||||
|
||||
### Starting at Boot
|
||||
|
||||
The next step is to make your service available as soon as the machine boots up, and close down when you switch it off at night.
|
||||
|
||||
Start by moving your service to where the system services live. The directory you are looking for is _/etc/systemd/system/_ :
|
||||
```
|
||||
sudo mv /home/<username>/.config/systemd/user/minetest.service /etc/systemd/system/
|
||||
|
||||
```
|
||||
|
||||
If you were to try and run the service now, it would have to be with superuser privileges:
|
||||
```
|
||||
sudo systemctl start minetest
|
||||
|
||||
```
|
||||
|
||||
But, what's more, if you check your service's status with
|
||||
```
|
||||
sudo systemctl status minetest
|
||||
|
||||
```
|
||||
|
||||
you would see that it has failed miserably. This is because systemd has no context: no links to worlds, textures or configuration files, and no details of the specific user running the service. You can solve this problem by adding the `User` directive to your unit:
|
||||
```
|
||||
# minetest.service
|
||||
|
||||
[Unit]
|
||||
Description= Minetest server
|
||||
Documentation= https://wiki.minetest.net/Main_Page
|
||||
|
||||
[Service]
|
||||
Type= simple
|
||||
User= <username>
|
||||
|
||||
ExecStart= /usr/games/minetest --server
|
||||
ExecStartPost= /home/<username>/bin/mtsendmail.sh "Ready to rumble?" "Minetest Starting up"
|
||||
|
||||
TimeoutStopSec= 180
|
||||
ExecStop= /home/<username>/bin/mtsendmail.sh "Off to bed. Nightie night!" "Minetest Stopping in 2 minutes"
|
||||
ExecStop= /bin/sleep 120
|
||||
ExecStop= /bin/kill -2 $MAINPID
|
||||
|
||||
```
|
||||
|
||||
The `User` directive tells systemd which user's environment it should use to correctly run the service. You could use root, but that would probably be a security hazard. You could also use your personal user and that would be a bit better, but what many administrators do is create a specific user for each service, effectively isolating the service from the rest of the system and users.
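
If you go the dedicated-user route, creating one is a one-liner; the sketch below assumes a user called `minetest` with its own home directory (the names are illustrative, not from the article):

```
# Create a system user to run the server under (illustrative names)
sudo useradd --system --create-home --home-dir /var/lib/minetest minetest
```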
|
||||
|
||||
The next step is to make your service start when you boot up and stop when you power down your computer. To do that you need to _enable_ your service, but, before you can do that, you have to tell systemd where to _install_ it.
|
||||
|
||||
In systemd parlance, _installing_ means telling systemd when in the boot sequence your service should become activated. For example, the _cups.service_ , the service for the _Common UNIX Printing System_ , has to be brought up after the network framework is activated, but before any other printing services are enabled. Likewise, the _minetest.service_ uses a user's email (among other things) and has to be slotted in when the network is up and services for regular users become available.
|
||||
|
||||
You do all that by adding a new section and directive to your unit:
|
||||
```
|
||||
...
|
||||
[Install]
|
||||
WantedBy= multi-user.target
|
||||
|
||||
```
|
||||
|
||||
You can read this as "wait until we have everything ready for a multi-user system." Targets in systemd are like the old runlevels and can be used to put your machine into one state or another, or, as here, to tell your service to wait until a certain state has been reached.
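
If you are curious which targets exist on your machine, you can list them with standard systemctl commands; the lines below are purely illustrative:

```
# List the targets systemd knows about
systemctl list-units --type=target
# Show what multi-user.target pulls in
systemctl show -p Wants multi-user.target
```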
|
||||
|
||||
Your final _minetest.service_ file will look like this:
|
||||
```
|
||||
# minetest.service
|
||||
[Unit]
|
||||
Description= Minetest server
|
||||
Documentation= https://wiki.minetest.net/Main_Page
|
||||
|
||||
[Service]
|
||||
Type= simple
|
||||
User= <username>
|
||||
|
||||
ExecStart= /usr/games/minetest --server
|
||||
ExecStartPost= /home/<username>/bin/mtsendmail.sh "Ready to rumble?" "Minetest Starting up"
|
||||
|
||||
TimeoutStopSec= 180
|
||||
ExecStop= /home/<username>/bin/mtsendmail.sh "Off to bed. Nightie night!" "Minetest Stopping in 2 minutes"
|
||||
ExecStop= /bin/sleep 120
|
||||
ExecStop= /bin/kill -2 $MAINPID
|
||||
|
||||
[Install]
|
||||
WantedBy= multi-user.target
|
||||
|
||||
```
|
||||
|
||||
Before trying it out, you may have to make some adjustments to your email script:
|
||||
```
|
||||
#!/bin/bash
|
||||
# mtsendmail
|
||||
|
||||
sleep 20
|
||||
echo "$1" | mutt -F /home/<username>/.muttrc -s "$2" my_minetest@mailing_list.com
|
||||
sleep 10
|
||||
|
||||
```
|
||||
|
||||
This is because the system will need some time to set up the emailing system (so you wait 20 seconds) and also some time to actually send the email (so you wait 10 seconds). Notice that these are the wait times that worked for me. You may have to adjust these for your own system.
|
||||
|
||||
And you're done! Run:
|
||||
```
|
||||
sudo systemctl enable minetest
|
||||
|
||||
```
|
||||
|
||||
and the Minetest service will come online when you power up and gracefully shut down when you power off, warning your users in the process.
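
A couple of optional checks you might run at this point (plain systemctl usage, nothing specific to this setup):

```
# Confirm the unit is enabled and inspect its state
systemctl is-enabled minetest
systemctl status minetest
# Undo the enabling if you change your mind
sudo systemctl disable minetest
```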
|
||||
|
||||
### Conclusion
|
||||
|
||||
The fact that Debian, Ubuntu, and distros of the same family have a special package called _minetest-server_ that does some of the above for you (but no messaging!) should not deter you from setting up your own customised services. In fact, the version you set up here is much more versatile and does more than Debian's default server.
|
||||
|
||||
Furthermore, the process described here will allow you to set up most simple servers as services, whether they are for games, web applications, or whatever. And those are the first steps towards veritable systemd guruhood.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2018/5/systemd-services-beyond-starting-and-stopping
|
||||
|
||||
作者:[Paul Brown][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/bro66
|
||||
[1]:https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit
|
||||
[2]:http://www.mutt.org/
|
@ -0,0 +1,127 @@
|
||||
How To Check Ubuntu Version and Other System Information Easily
|
||||
======
|
||||
**Brief: Wondering which Ubuntu version you are using? Here's how to check your Ubuntu version, desktop environment and other relevant system information.**
|
||||
|
||||
You can easily find the Ubuntu version you are using in the command line or via the graphical interface. Knowing the exact Ubuntu version, desktop environment and other system information helps a lot when you are trying to follow a tutorial from the web or seeking help in various forums.
|
||||
|
||||
In this quick tip, I’ll show you various ways to check [Ubuntu][1] version and other common system information.
|
||||
|
||||
### How to check Ubuntu version in terminal
|
||||
|
||||
This is the best way to find your Ubuntu version. I could have mentioned the graphical way first, but I chose this method because it doesn't depend on the [desktop environment][2] you are using. You can use it on any Ubuntu variant.
|
||||
|
||||
Open a terminal (Ctrl+Alt+T) and type the following command:
|
||||
```
|
||||
lsb_release -a
|
||||
|
||||
```
|
||||
|
||||
The output of the above command should be like this:
|
||||
```
|
||||
No LSB modules are available.
|
||||
Distributor ID: Ubuntu
|
||||
Description: Ubuntu 16.04.4 LTS
|
||||
Release: 16.04
|
||||
Codename: xenial
|
||||
|
||||
```
|
||||
|
||||
![How to check Ubuntu version in command line][3]
|
||||
|
||||
As you can see, the Ubuntu version installed on my system is Ubuntu 16.04 and its codename is Xenial.
|
||||
|
||||
Wait! Why does it say Ubuntu 16.04.4 in Description and 16.04 in the Release? Which one is it, 16.04 or 16.04.4? What’s the difference between the two?
|
||||
|
||||
The short answer is that you are using Ubuntu 16.04. That's the base image. 16.04.4 signifies the fourth point release of 16.04. A point release can be thought of as a service pack in the Windows era. Both 16.04 and 16.04.4 are correct answers here.
|
||||
|
||||
What’s Xenial in the output? That’s the codename of the Ubuntu 16.04 release. You can read this [article to know about Ubuntu naming convention][4].
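
If you only need one of these fields, for instance in a script, `lsb_release` can print it on its own; a quick illustrative example:

```
$ lsb_release -cs
xenial

$ lsb_release -rs
16.04
```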
|
||||
|
||||
#### Some alternate ways to find Ubuntu version
|
||||
|
||||
Alternatively, you can use either of the following commands to find the Ubuntu version:
|
||||
```
|
||||
cat /etc/lsb-release
|
||||
|
||||
```
|
||||
|
||||
The output of the above command would look like this:
|
||||
```
|
||||
DISTRIB_ID=Ubuntu
|
||||
DISTRIB_RELEASE=16.04
|
||||
DISTRIB_CODENAME=xenial
|
||||
DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"
|
||||
|
||||
```
|
||||
|
||||
![How to check Ubuntu version in command line][5]
|
||||
|
||||
You can also use this command to find the Ubuntu version:
|
||||
```
|
||||
cat /etc/issue
|
||||
|
||||
```
|
||||
|
||||
The output of this command will be like this:
|
||||
```
|
||||
Ubuntu 16.04.4 LTS \n \l
|
||||
|
||||
```
|
||||
|
||||
Ignore the \n \l. The Ubuntu version is 16.04.4 in this case, or simply Ubuntu 16.04.
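
On recent Ubuntu releases you can also look at `/etc/os-release`, which most modern distributions ship; the output below is only indicative and may differ slightly on your system:

```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
ID=ubuntu
VERSION_ID="16.04"
```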
|
||||
|
||||
### How to check Ubuntu version graphically
|
||||
|
||||
Checking Ubuntu version graphically is no big deal either. I am going to use screenshots from Ubuntu 18.04 GNOME here. Things may look different if you are using Unity or some other desktop environment. This is why I recommend the command line version discussed in the previous sections because that doesn’t depend on the desktop environment.
|
||||
|
||||
I’ll show you how to find the desktop environment in the next section.
|
||||
|
||||
For now, go to System Settings and look under the Details segment.
|
||||
|
||||
![Finding Ubuntu version graphically][6]
|
||||
|
||||
You should see the Ubuntu version here along with the information about the desktop environment you are using, [GNOME][7] being the case here.
|
||||
|
||||
![Finding Ubuntu version graphically][8]
|
||||
|
||||
### How to know the desktop environment and other system information in Ubuntu
|
||||
|
||||
So you just learned how to find the Ubuntu version. What about the desktop environment in use? Which Linux kernel version is being used?
|
||||
|
||||
Of course, there are various commands you can use to get all that information, but I'll recommend a command-line utility called [Neofetch][9]. It will show you the essential system information in the terminal, beautifully presented along with the logo of Ubuntu or whichever Linux distribution you are using.
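
If you just want quick one-liners instead of a full tool, a couple of standard commands cover the basics; this is only an illustration, and the desktop environment variable may not be set on every setup:

```
$ uname -r                      # Linux kernel version
$ echo $XDG_CURRENT_DESKTOP     # desktop environment, e.g. ubuntu:GNOME
```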
|
||||
|
||||
Install Neofetch using the command below:
|
||||
```
|
||||
sudo apt install neofetch
|
||||
|
||||
```
|
||||
|
||||
Once installed, simply run the command `neofetch` in the terminal and see a beautiful display of system information.
|
||||
|
||||
![System information in Linux terminal][10]
|
||||
|
||||
As you can see, Neofetch shows you the Linux kernel version, the Ubuntu version, the desktop environment in use along with its version, the themes and icons in use, and so on.
|
||||
|
||||
I hope this helps you find your Ubuntu version and other system information. If you have suggestions to improve this article, feel free to drop them in the comment section. Ciao :)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/how-to-know-ubuntu-unity-version/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[1]:https://www.ubuntu.com/
|
||||
[2]:https://en.wikipedia.org/wiki/Desktop_environment
|
||||
[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/check-ubuntu-version-command-line-1-800x216.jpeg
|
||||
[4]:https://itsfoss.com/linux-code-names/
|
||||
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/check-ubuntu-version-command-line-2-800x185.jpeg
|
||||
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/ubuntu-version-system-settings.jpeg
|
||||
[7]:https://www.gnome.org/
|
||||
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/checking-ubuntu-version-gui.jpeg
|
||||
[9]:https://itsfoss.com/display-linux-logo-in-ascii/
|
||||
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/ubuntu-system-information-terminal-800x400.jpeg
|
@ -0,0 +1,93 @@
|
||||
translating---geekpi
|
||||
|
||||
Orbital Apps – A New Generation Of Linux applications
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-720x340.jpg)
|
||||
|
||||
Today, we are going to learn about **Orbital Apps** or **ORB** ( **O** pen **R** unnable **B** undle) **apps**, a collection of free, cross-platform, open source applications. All ORB apps are portable. You can either install them on your Linux system or on a USB drive, so that you can use the same app on any system. There is no need for root privileges, and there are no dependencies to install; everything the apps require is bundled inside them. Just copy the ORB apps to your USB drive, plug it into any Linux system, and start using them in no time. All settings, configurations and app data are stored on the USB drive. Since there is no need to install the apps on the local drive, you can run them on both online and offline computers; no Internet connection is needed to download dependencies.
|
||||
|
||||
ORB apps are compressed to be up to 60% smaller, so you can store and use them even on small USB drives. All ORB apps are signed with PGP/RSA and distributed via TLS 1.2. All applications are packaged without any modifications; they are not even re-compiled. Here is the list of currently available portable ORB applications:
|
||||
|
||||
* abiword
|
||||
* audacious
|
||||
* audacity
|
||||
* darktable
|
||||
* deluge
|
||||
* filezilla
|
||||
* firefox
|
||||
* gimp
|
||||
* gnome-mplayer
|
||||
* hexchat
|
||||
* inkscape
|
||||
* isomaster
|
||||
* kodi
|
||||
* libreoffice
|
||||
* qbittorrent
|
||||
* sound-juicer
|
||||
* thunderbird
|
||||
* tomahawk
|
||||
* uget
|
||||
* vlc
|
||||
* And more yet to come.
|
||||
|
||||
|
||||
|
||||
ORB is open source, so if you're a developer, feel free to collaborate and add more applications.
|
||||
|
||||
### Download and use portable ORB apps
|
||||
|
||||
As I mentioned already, you don't need to install portable ORB apps. However, the ORB team strongly recommends using the **ORB launcher** for a better experience. The ORB launcher is a small installer file (less than 5 MB) that helps you launch ORB apps with a better and smoother experience.
|
||||
|
||||
Let us install the ORB launcher first. To do so, [**download the ORB launcher**][1]. You can manually download the ORB launcher ISO and mount it with your file manager, or run one of the following commands in the terminal to install it:
|
||||
```
|
||||
$ wget -O - https://www.orbital-apps.com/orb.sh | bash
|
||||
|
||||
```
|
||||
|
||||
If you don’t have wget, run:
|
||||
```
|
||||
$ curl https://www.orbital-apps.com/orb.sh | bash
|
||||
|
||||
```
|
||||
|
||||
Enter the root user password when asked.
|
||||
|
||||
That's it. The ORB launcher is installed and ready to use.
|
||||
|
||||
Now, go to the [**ORB portable apps download page**][2], and download the apps of your choice. For the purpose of this tutorial, I am going to download the Firefox application.
|
||||
|
||||
Once you have downloaded the package, go to the download location and double-click the ORB app to launch it. Click Yes to confirm.
|
||||
|
||||
![][4]
|
||||
|
||||
Firefox ORB application in action!
|
||||
|
||||
![][5]
|
||||
|
||||
Similarly, you can download and run any application instantly.
|
||||
|
||||
If you don't want to use the ORB launcher, make the downloaded .orb installer file executable and double-click it to install. However, the ORB launcher is recommended, as it gives you an easier and smoother experience while using ORB apps. A minimal sketch of the manual route is shown below.
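
The commands below are only an example of that manual route; the file name is illustrative, so use whatever you actually downloaded:

```
$ chmod +x firefox*.orb
$ ./firefox*.orb
```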
|
||||
|
||||
In my testing, ORB apps worked just fine out of the box. Hope this helps. That's all for now. Have a good day!
|
||||
|
||||
Cheers!!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/orbitalapps-new-generation-ubuntu-linux-applications/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.orbital-apps.com/documentation/orb-launcher-all-installers
|
||||
[2]:https://www.orbital-apps.com/download/portable_apps_linux/
|
||||
[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-1-2.png
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-2.png
|
@ -0,0 +1,419 @@
|
||||
UKTools - Easy Way To Install Latest Linux Kernel
|
||||
======
|
||||
There are multiple utilities available for Ubuntu to upgrade the Linux kernel to the latest stable version. We have already written about such utilities in the past, namely Linux Kernel Utilities (LKU), Ubuntu Kernel Upgrade Utility (UKUU) and Ubunsys.
|
||||
|
||||
A few other utilities are also available, such as ubuntu-mainline-kernel.sh and the manual method using the mainline kernel, and we are planning to cover them in further articles.
|
||||
|
||||
Today we are going to show you a similar utility called UKTools. You can try any one of these utilities to bring your Linux kernel to the latest release.
|
||||
|
||||
The latest kernel releases come with security fixes and some improvements, so it is better to keep the latest one for a reliable, secure system and better hardware performance.
|
||||
|
||||
Sometimes the latest kernel version might be buggy and can crash your system, so you use it at your own risk. I would advise you not to install it in a production environment.
|
||||
|
||||
**Suggested Read :**
|
||||
**(#)** [Linux Kernel Utilities (LKU) – A Set Of Shell Scripts To Compile, Install & Update Latest Kernel In Ubuntu/LinuxMint][1]
|
||||
**(#)** [Ukuu – An Easy Way To Install/Upgrade Linux Kernel In Ubuntu based Systems][2]
|
||||
**(#)** [6 Methods To Check The Running Linux Kernel Version On System][3]
|
||||
|
||||
### What Is UKTools
|
||||
|
||||
[UKTools][4] stands for Ubuntu Kernel Tools, which contains two shell scripts, `ukupgrade` and `ukpurge`.
|
||||
|
||||
ukupgrade stands for “Ubuntu Kernel Upgrade”, which allows users to upgrade the Linux kernel to the latest stable version on Ubuntu/Mint and derivatives, based on [kernel.ubuntu.com][5].
|
||||
|
||||
ukpurge stands for “Ubuntu Kernel Purge”, which allows users to remove old Linux kernel images/headers from the machine on Ubuntu/Mint and derivatives. It keeps only three kernel versions.
|
||||
|
||||
There is no GUI for this utility; however, it is very simple and straightforward, so a newbie can perform the upgrade without any issues.
|
||||
|
||||
I'm running Ubuntu 17.10 and the current kernel version is shown below.
|
||||
```
|
||||
$ uname -a
|
||||
Linux ubuntu 4.13.0-39-generic #44-Ubuntu SMP Thu Apr 5 14:25:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
|
||||
|
||||
```
|
||||
|
||||
Run the following command to get the list of installed kernels on your system (Ubuntu and derivatives). Currently I have `seven` kernels installed.
|
||||
```
|
||||
$ dpkg --list | grep linux-image
|
||||
ii linux-image-4.13.0-16-generic 4.13.0-16.19 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-17-generic 4.13.0-17.20 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-32-generic 4.13.0-32.35 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-36-generic 4.13.0-36.40 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-37-generic 4.13.0-37.42 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-38-generic 4.13.0-38.43 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-39-generic 4.13.0-39.44 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-16-generic 4.13.0-16.19 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-17-generic 4.13.0-17.20 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-32-generic 4.13.0-32.35 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-36-generic 4.13.0-36.40 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-37-generic 4.13.0-37.42 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-38-generic 4.13.0-38.43 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-39-generic 4.13.0-39.44 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-generic 4.13.0.39.42 amd64 Generic Linux kernel image
|
||||
|
||||
```
|
||||
|
||||
### How To Install UKTools
|
||||
|
||||
Just run the below commands to install UKTools on Ubuntu and derivatives.
|
||||
|
||||
Run the command below to clone the UKTools repository onto your system.
|
||||
```
|
||||
$ git clone https://github.com/usbkey9/uktools
|
||||
|
||||
```
|
||||
|
||||
Navigate to the uktools directory.
|
||||
```
|
||||
$ cd uktools
|
||||
|
||||
```
|
||||
|
||||
Run the Makefile to generate the necessary files. This will also automatically install the latest available kernel. Just reboot the system in order to use the latest kernel.
|
||||
```
|
||||
$ sudo make
|
||||
[sudo] password for daygeek:
|
||||
Creating the directories if neccessary
|
||||
Linking profile.d file for reboot message
|
||||
Linking files to global sbin directory
|
||||
Ubuntu Kernel Upgrade - by Mustafa Hasturk
|
||||
------------------------------------------
|
||||
This script is based on the work of Mustafa Hasturk and was reworked by
|
||||
Caio Oliveira and modified and fixed by Christoph Kepler
|
||||
|
||||
Current Development and Maintenance by Christoph Kepler
|
||||
|
||||
Do you want the Stable Release (if not sure, press y)? (y/n): y
|
||||
Do you want the Generic kernel? (y/n): y
|
||||
Do you want to autoremove old kernel? (y/n): y
|
||||
no crontab for root
|
||||
Do you want to update the kernel automatically? (y/n): y
|
||||
Setup complete. Update the kernel right now? (y/n): y
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
The following packages were automatically installed and are no longer required:
|
||||
linux-headers-4.13.0-16 linux-headers-4.13.0-16-generic linux-headers-4.13.0-17 linux-headers-4.13.0-17-generic linux-headers-4.13.0-32 linux-headers-4.13.0-32-generic linux-headers-4.13.0-36
|
||||
linux-headers-4.13.0-36-generic linux-headers-4.13.0-37 linux-headers-4.13.0-37-generic linux-image-4.13.0-16-generic linux-image-4.13.0-17-generic linux-image-4.13.0-32-generic linux-image-4.13.0-36-generic
|
||||
linux-image-4.13.0-37-generic linux-image-extra-4.13.0-16-generic linux-image-extra-4.13.0-17-generic linux-image-extra-4.13.0-32-generic linux-image-extra-4.13.0-36-generic
|
||||
linux-image-extra-4.13.0-37-generic
|
||||
Use 'sudo apt autoremove' to remove them.
|
||||
The following additional packages will be installed:
|
||||
lynx-common
|
||||
The following NEW packages will be installed:
|
||||
lynx lynx-common
|
||||
0 upgraded, 2 newly installed, 0 to remove and 71 not upgraded.
|
||||
Need to get 1,498 kB of archives.
|
||||
After this operation, 5,418 kB of additional disk space will be used.
|
||||
Get:1 http://in.archive.ubuntu.com/ubuntu artful/universe amd64 lynx-common all 2.8.9dev16-1 [873 kB]
|
||||
Get:2 http://in.archive.ubuntu.com/ubuntu artful/universe amd64 lynx amd64 2.8.9dev16-1 [625 kB]
|
||||
Fetched 1,498 kB in 12s (120 kB/s)
|
||||
Selecting previously unselected package lynx-common.
|
||||
(Reading database ... 441037 files and directories currently installed.)
|
||||
Preparing to unpack .../lynx-common_2.8.9dev16-1_all.deb ...
|
||||
Unpacking lynx-common (2.8.9dev16-1) ...
|
||||
Selecting previously unselected package lynx.
|
||||
Preparing to unpack .../lynx_2.8.9dev16-1_amd64.deb ...
|
||||
Unpacking lynx (2.8.9dev16-1) ...
|
||||
Processing triggers for mime-support (3.60ubuntu1) ...
|
||||
Processing triggers for doc-base (0.10.7) ...
|
||||
Processing 1 added doc-base file...
|
||||
Processing triggers for man-db (2.7.6.1-2) ...
|
||||
Setting up lynx-common (2.8.9dev16-1) ...
|
||||
Setting up lynx (2.8.9dev16-1) ...
|
||||
update-alternatives: using /usr/bin/lynx to provide /usr/bin/www-browser (www-browser) in auto mode
|
||||
|
||||
Cleaning old downloads in /tmp
|
||||
|
||||
Downloading the kernel's components...
|
||||
Checksum for linux-headers-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb succeed
|
||||
Checksum for linux-image-unsigned-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb succeed
|
||||
Checksum for linux-modules-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb succeed
|
||||
|
||||
Downloading the shared kernel header...
|
||||
Checksum for linux-headers-4.16.7-041607_4.16.7-041607.201805021131_all.deb succeed
|
||||
|
||||
Installing Kernel and Headers...
|
||||
Selecting previously unselected package linux-headers-4.16.7-041607.
|
||||
(Reading database ... 441141 files and directories currently installed.)
|
||||
Preparing to unpack .../linux-headers-4.16.7-041607_4.16.7-041607.201805021131_all.deb ...
|
||||
Unpacking linux-headers-4.16.7-041607 (4.16.7-041607.201805021131) ...
|
||||
Selecting previously unselected package linux-headers-4.16.7-041607-generic.
|
||||
Preparing to unpack .../linux-headers-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb ...
|
||||
Unpacking linux-headers-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
|
||||
Selecting previously unselected package linux-image-unsigned-4.16.7-041607-generic.
|
||||
Preparing to unpack .../linux-image-unsigned-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb ...
|
||||
Unpacking linux-image-unsigned-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
|
||||
Selecting previously unselected package linux-modules-4.16.7-041607-generic.
|
||||
Preparing to unpack .../linux-modules-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb ...
|
||||
Unpacking linux-modules-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
|
||||
Setting up linux-headers-4.16.7-041607 (4.16.7-041607.201805021131) ...
|
||||
dpkg: dependency problems prevent configuration of linux-headers-4.16.7-041607-generic:
|
||||
linux-headers-4.16.7-041607-generic depends on libssl1.1 (>= 1.1.0); however:
|
||||
Package libssl1.1 is not installed.
|
||||
|
||||
Setting up linux-modules-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
|
||||
Setting up linux-image-unsigned-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
|
||||
I: /vmlinuz.old is now a symlink to boot/vmlinuz-4.13.0-39-generic
|
||||
I: /initrd.img.old is now a symlink to boot/initrd.img-4.13.0-39-generic
|
||||
I: /vmlinuz is now a symlink to boot/vmlinuz-4.16.7-041607-generic
|
||||
I: /initrd.img is now a symlink to boot/initrd.img-4.16.7-041607-generic
|
||||
Processing triggers for linux-image-unsigned-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
|
||||
/etc/kernel/postinst.d/initramfs-tools:
|
||||
update-initramfs: Generating /boot/initrd.img-4.16.7-041607-generic
|
||||
/etc/kernel/postinst.d/zz-update-grub:
|
||||
Generating grub configuration file ...
|
||||
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
|
||||
Found linux image: /boot/vmlinuz-4.16.7-041607-generic
|
||||
Found initrd image: /boot/initrd.img-4.16.7-041607-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-39-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-39-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-38-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-38-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-37-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-37-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-36-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-36-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-32-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-32-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-17-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-17-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-16-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-16-generic
|
||||
Found memtest86+ image: /boot/memtest86+.elf
|
||||
Found memtest86+ image: /boot/memtest86+.bin
|
||||
done
|
||||
|
||||
Thanks for using this script! Hope it helped.
|
||||
Give it a star: https://github.com/MarauderXtreme/uktools
|
||||
|
||||
```
|
||||
|
||||
Restart the system to activate the latest kernel.
|
||||
```
|
||||
$ sudo shutdown -r now
|
||||
|
||||
```
|
||||
|
||||
Once the system is back up, re-check the kernel version.
|
||||
```
|
||||
$ uname -a
|
||||
Linux ubuntu 4.16.7-041607-generic #201805021131 SMP Wed May 2 15:34:55 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
|
||||
|
||||
```
|
||||
|
||||
This make command will also drop the following files into the `/usr/local/bin` directory.
|
||||
```
|
||||
do-kernel-upgrade
|
||||
do-kernel-purge
|
||||
|
||||
```
|
||||
|
||||
To remove old kernels, run the following command.
|
||||
```
|
||||
$ do-kernel-purge
|
||||
|
||||
Ubuntu Kernel Purge - by Caio Oliveira
|
||||
|
||||
This script will only keep three versions: the first and the last two, others will be purge
|
||||
|
||||
---Current version:
|
||||
Linux Kernel 4.16.7-041607 Generic (linux-image-4.16.7-041607-generic)
|
||||
|
||||
---Versions to remove:
|
||||
4.13.0-16
|
||||
4.13.0-17
|
||||
4.13.0-32
|
||||
4.13.0-36
|
||||
4.13.0-37
|
||||
|
||||
---Do you want to remove the old kernels/headers versions? (Y/n): y
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
The following packages were automatically installed and are no longer required:
|
||||
linux-headers-4.13.0-17 linux-headers-4.13.0-17-generic linux-headers-4.13.0-32 linux-headers-4.13.0-32-generic linux-headers-4.13.0-36 linux-headers-4.13.0-36-generic linux-headers-4.13.0-37
|
||||
linux-headers-4.13.0-37-generic linux-image-4.13.0-17-generic linux-image-4.13.0-32-generic linux-image-4.13.0-36-generic linux-image-4.13.0-37-generic linux-image-extra-4.13.0-17-generic
|
||||
linux-image-extra-4.13.0-32-generic linux-image-extra-4.13.0-36-generic linux-image-extra-4.13.0-37-generic
|
||||
Use 'sudo apt autoremove' to remove them.
|
||||
The following packages will be REMOVED:
|
||||
linux-headers-4.13.0-16* linux-headers-4.13.0-16-generic* linux-image-4.13.0-16-generic* linux-image-extra-4.13.0-16-generic*
|
||||
0 upgraded, 0 newly installed, 4 to remove and 71 not upgraded.
|
||||
After this operation, 318 MB disk space will be freed.
|
||||
(Reading database ... 465582 files and directories currently installed.)
|
||||
Removing linux-headers-4.13.0-16-generic (4.13.0-16.19) ...
|
||||
Removing linux-headers-4.13.0-16 (4.13.0-16.19) ...
|
||||
Removing linux-image-extra-4.13.0-16-generic (4.13.0-16.19) ...
|
||||
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
|
||||
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
|
||||
update-initramfs: Generating /boot/initrd.img-4.13.0-16-generic
|
||||
run-parts: executing /etc/kernel/postinst.d/unattended-upgrades 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
|
||||
run-parts: executing /etc/kernel/postinst.d/update-notifier 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
|
||||
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
|
||||
Generating grub configuration file ...
|
||||
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
|
||||
Found linux image: /boot/vmlinuz-4.16.7-041607-generic
|
||||
Found initrd image: /boot/initrd.img-4.16.7-041607-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-39-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-39-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-38-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-38-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-37-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-37-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-36-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-36-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-32-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-32-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-17-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-17-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-16-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-16-generic
|
||||
Found memtest86+ image: /boot/memtest86+.elf
|
||||
Found memtest86+ image: /boot/memtest86+.bin
|
||||
done
|
||||
Removing linux-image-4.13.0-16-generic (4.13.0-16.19) ...
|
||||
Examining /etc/kernel/postrm.d .
|
||||
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
|
||||
update-initramfs: Deleting /boot/initrd.img-4.13.0-16-generic
|
||||
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
|
||||
Generating grub configuration file ...
|
||||
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
|
||||
Found linux image: /boot/vmlinuz-4.16.7-041607-generic
|
||||
Found initrd image: /boot/initrd.img-4.16.7-041607-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-39-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-39-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-38-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-38-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-37-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-37-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-36-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-36-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-32-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-32-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-17-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-17-generic
|
||||
Found memtest86+ image: /boot/memtest86+.elf
|
||||
Found memtest86+ image: /boot/memtest86+.bin
|
||||
done
|
||||
(Reading database ... 430635 files and directories currently installed.)
|
||||
Purging configuration files for linux-image-extra-4.13.0-16-generic (4.13.0-16.19) ...
|
||||
Purging configuration files for linux-image-4.13.0-16-generic (4.13.0-16.19) ...
|
||||
Examining /etc/kernel/postrm.d .
|
||||
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
|
||||
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
.
|
||||
.
|
||||
.
|
||||
.
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
The following packages will be REMOVED:
|
||||
linux-headers-4.13.0-37* linux-headers-4.13.0-37-generic* linux-image-4.13.0-37-generic* linux-image-extra-4.13.0-37-generic*
|
||||
0 upgraded, 0 newly installed, 4 to remove and 71 not upgraded.
|
||||
After this operation, 321 MB disk space will be freed.
|
||||
(Reading database ... 325772 files and directories currently installed.)
|
||||
Removing linux-headers-4.13.0-37-generic (4.13.0-37.42) ...
|
||||
Removing linux-headers-4.13.0-37 (4.13.0-37.42) ...
|
||||
Removing linux-image-extra-4.13.0-37-generic (4.13.0-37.42) ...
|
||||
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
|
||||
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
|
||||
update-initramfs: Generating /boot/initrd.img-4.13.0-37-generic
|
||||
run-parts: executing /etc/kernel/postinst.d/unattended-upgrades 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
|
||||
run-parts: executing /etc/kernel/postinst.d/update-notifier 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
|
||||
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
|
||||
Generating grub configuration file ...
|
||||
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
|
||||
Found linux image: /boot/vmlinuz-4.16.7-041607-generic
|
||||
Found initrd image: /boot/initrd.img-4.16.7-041607-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-39-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-39-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-38-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-38-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-37-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-37-generic
|
||||
Found memtest86+ image: /boot/memtest86+.elf
|
||||
Found memtest86+ image: /boot/memtest86+.bin
|
||||
done
|
||||
Removing linux-image-4.13.0-37-generic (4.13.0-37.42) ...
|
||||
Examining /etc/kernel/postrm.d .
|
||||
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
|
||||
update-initramfs: Deleting /boot/initrd.img-4.13.0-37-generic
|
||||
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
|
||||
Generating grub configuration file ...
|
||||
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
|
||||
Found linux image: /boot/vmlinuz-4.16.7-041607-generic
|
||||
Found initrd image: /boot/initrd.img-4.16.7-041607-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-39-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-39-generic
|
||||
Found linux image: /boot/vmlinuz-4.13.0-38-generic
|
||||
Found initrd image: /boot/initrd.img-4.13.0-38-generic
|
||||
Found memtest86+ image: /boot/memtest86+.elf
|
||||
Found memtest86+ image: /boot/memtest86+.bin
|
||||
done
|
||||
(Reading database ... 290810 files and directories currently installed.)
|
||||
Purging configuration files for linux-image-extra-4.13.0-37-generic (4.13.0-37.42) ...
|
||||
Purging configuration files for linux-image-4.13.0-37-generic (4.13.0-37.42) ...
|
||||
Examining /etc/kernel/postrm.d .
|
||||
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
|
||||
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
|
||||
|
||||
Thanks for using this script!!!
|
||||
|
||||
```
|
||||
|
||||
Re-check the list of installed kernels using the command below. Only three kernel versions are kept.
|
||||
```
|
||||
$ dpkg --list | grep linux-image
|
||||
ii linux-image-4.13.0-38-generic 4.13.0-38.43 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-39-generic 4.13.0-39.44 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-38-generic 4.13.0-38.43 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-39-generic 4.13.0-39.44 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-generic 4.13.0.39.42 amd64 Generic Linux kernel image
|
||||
ii linux-image-unsigned-4.16.7-041607-generic 4.16.7-041607.201805021131 amd64 Linux kernel image for version 4.16.7 on 64 bit x86 SMP
|
||||
|
||||
```
|
||||
|
||||
Next time, you can call the `do-kernel-upgrade` utility to install a new kernel. If a new kernel is available, it will be installed; if not, it will report that no kernel update is available at the moment.
|
||||
```
|
||||
$ do-kernel-upgrade
|
||||
Kernel up to date. Finishing
|
||||
|
||||
```
|
||||
|
||||
Run the `do-kernel-purge` command once again to confirm this. If it finds more than three kernels, it will remove them; if not, it will report that there is nothing to remove.
|
||||
```
|
||||
$ do-kernel-purge
|
||||
|
||||
Ubuntu Kernel Purge - by Caio Oliveira
|
||||
|
||||
This script will only keep three versions: the first and the last two, others will be purge
|
||||
|
||||
---Current version:
|
||||
Linux Kernel 4.16.7-041607 Generic (linux-image-4.16.7-041607-generic)
|
||||
Nothing to remove!
|
||||
|
||||
Thanks for using this script!!!
|
||||
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/uktools-easy-way-to-install-latest-stable-linux-kernel-on-ubuntu-mint-and-derivatives/
|
||||
|
||||
作者:[Prakash Subramanian][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/prakash/
|
||||
[1]:https://www.2daygeek.com/lku-linux-kernel-utilities-compile-install-update-latest-kernel-in-linux-mint-ubuntu/
|
||||
[2]:https://www.2daygeek.com/ukuu-install-upgrade-linux-kernel-in-linux-mint-ubuntu-debian-elementary-os/
|
||||
[3]:https://www.2daygeek.com/check-find-determine-running-installed-linux-kernel-version/
|
||||
[4]:https://github.com/usbkey9/uktools
|
||||
[5]:http://kernel.ubuntu.com/~kernel-ppa/mainline/
|
@ -0,0 +1,247 @@
|
||||
3 Methods To Install Latest Python3 Package On CentOS 6 System
|
||||
======
|
||||
CentOS is a RHEL clone and is available free of cost. It's an industry-standard, cutting-edge operating system that has been used by 90% of web hosting providers, since it supports the leading server control panel cPanel/WHM.
|
||||
|
||||
This control panel allows users to manage everything without entering the terminal.
|
||||
|
||||
As we already know, RHEL has long term support and doesn't offer the latest versions of packages, for stability reasons.
|
||||
|
||||
If you want to install the latest version of a package that is not available in the default repository, you have to install it manually by compiling the source package.
|
||||
|
||||
That is risky, because we can't upgrade manually installed packages to the latest version when a new release comes out; we have to reinstall them manually.
|
||||
|
||||
In this case, what is the suggested method to install the latest version of a package? It can be done by adding the necessary third-party repository to the system.
|
||||
|
||||
There are many third-party repositories available for Enterprise Linux, but only a few of them are recommended by the CentOS community, since they don't alter the base packages on a large scale.
|
||||
|
||||
They are usually well maintained and provide a substantial number of additional packages to CentOS.
|
||||
|
||||
In this tutorial, we will show you how to install the latest Python 3 package on a CentOS 6 system.
|
||||
|
||||
### Method-1 : Using Software Collections Repository (SCL)
|
||||
|
||||
The SCL repository is now maintained by a CentOS SIG, which rebuilds the Red Hat Software Collections and also provides some additional packages of their own.
|
||||
|
||||
It contains newer versions of various programs that can be installed alongside existing older packages and invoked by using the scl command.
|
||||
|
||||
Run the following command to install the Software Collections repository on CentOS:
|
||||
```
|
||||
# yum install centos-release-scl
|
||||
|
||||
```
|
||||
|
||||
Check the available python 3 version.
|
||||
```
|
||||
# yum info rh-python35
|
||||
Loaded plugins: fastestmirror, security
|
||||
Loading mirror speeds from cached hostfile
|
||||
* epel: ewr.edge.kernel.org
|
||||
* remi-safe: mirror.team-cymru.com
|
||||
Available Packages
|
||||
Name : rh-python35
|
||||
Arch : x86_64
|
||||
Version : 2.0
|
||||
Release : 2.el6
|
||||
Size : 0.0
|
||||
Repo : installed
|
||||
From repo : centos-sclo-rh
|
||||
Summary : Package that installs rh-python35
|
||||
License : GPLv2+
|
||||
Description : This is the main package for rh-python35 Software Collection.
|
||||
|
||||
```
|
||||
|
||||
Run the command below to install the latest available Python 3 package from SCL.
|
||||
```
|
||||
# yum install rh-python35
|
||||
|
||||
```
|
||||
|
||||
Run the special scl command below to enable the installed package version in your shell.
|
||||
```
|
||||
# scl enable rh-python35 bash
|
||||
|
||||
```
|
||||
|
||||
Run the command below to check the installed Python 3 version.
|
||||
```
|
||||
# python --version
|
||||
Python 3.5.1
|
||||
|
||||
```
|
||||
|
||||
Run the following command to get a list of the SCL packages that have been installed on the system.
|
||||
```
|
||||
# scl -l
|
||||
rh-python35
|
||||
|
||||
```
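
Note that `scl enable` only affects the shell it spawns. If you want the collection in every new shell, one common approach (my assumption, not covered above; double-check the path on your system) is to source the collection's enable script from your profile:

```
# Make rh-python35 available in every new bash session (path follows the
# standard SCL layout; verify that it exists on your machine first)
echo 'source /opt/rh/rh-python35/enable' >> ~/.bashrc
```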
|
||||
|
||||
### Method-2 : Using EPEL Repository (Extra Packages for Enterprise Linux)
|
||||
|
||||
EPEL stands for Extra Packages for Enterprise Linux, maintained by a Fedora Special Interest Group.
|
||||
|
||||
It creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL) and Oracle Linux (OL).
|
||||
|
||||
EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions.
|
||||
|
||||
**Suggested Read :** [Install / Enable EPEL Repository on RHEL, CentOS, Oracle Linux & Scientific Linux][1]
|
||||
|
||||
The EPEL release package is included in the CentOS Extras repository, which is enabled by default, so we can install it by running the command below.
|
||||
```
|
||||
# yum install epel-release
|
||||
|
||||
```
|
||||
|
||||
Check the available python 3 version.
|
||||
```
|
||||
# yum --disablerepo="*" --enablerepo="epel" info python34
|
||||
Loaded plugins: fastestmirror, security
|
||||
Loading mirror speeds from cached hostfile
|
||||
* epel: ewr.edge.kernel.org
|
||||
Available Packages
|
||||
Name : python34
|
||||
Arch : x86_64
|
||||
Version : 3.4.5
|
||||
Release : 4.el6
|
||||
Size : 50 k
|
||||
Repo : epel
|
||||
Summary : Version 3 of the Python programming language aka Python 3000
|
||||
URL : http://www.python.org/
|
||||
License : Python
|
||||
Description : Python 3 is a new version of the language that is incompatible with the 2.x
|
||||
: line of releases. The language is mostly the same, but many details, especially
|
||||
: how built-in objects like dictionaries and strings work, have changed
|
||||
: considerably, and a lot of deprecated features have finally been removed.
|
||||
|
||||
|
||||
```
|
||||
|
||||
Run the command below to install the latest available Python 3 package from the EPEL repository.
|
||||
```
|
||||
# yum --disablerepo="*" --enablerepo="epel" install python34
|
||||
|
||||
```
|
||||
|
||||
By default, this will not install the matching pip & setuptools, so we have to install them by running the commands below.
|
||||
```
|
||||
# curl -O https://bootstrap.pypa.io/get-pip.py
|
||||
% Total % Received % Xferd Average Speed Time Time Time Current
|
||||
Dload Upload Total Spent Left Speed
|
||||
100 1603k 100 1603k 0 0 2633k 0 --:--:-- --:--:-- --:--:-- 4816k
|
||||
|
||||
# /usr/bin/python3.4 get-pip.py
|
||||
Collecting pip
|
||||
Using cached https://files.pythonhosted.org/packages/0f/74/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4/pip-10.0.1-py2.py3-none-any.whl
|
||||
Collecting setuptools
|
||||
Downloading https://files.pythonhosted.org/packages/8c/10/79282747f9169f21c053c562a0baa21815a8c7879be97abd930dbcf862e8/setuptools-39.1.0-py2.py3-none-any.whl (566kB)
|
||||
100% |████████████████████████████████| 573kB 4.0MB/s
|
||||
Collecting wheel
|
||||
Downloading https://files.pythonhosted.org/packages/1b/d2/22cde5ea9af055f81814f9f2545f5ed8a053eb749c08d186b369959189a8/wheel-0.31.0-py2.py3-none-any.whl (41kB)
|
||||
100% |████████████████████████████████| 51kB 8.0MB/s
|
||||
Installing collected packages: pip, setuptools, wheel
|
||||
Successfully installed pip-10.0.1 setuptools-39.1.0 wheel-0.31.0
|
||||
|
||||
```
|
||||
|
||||
Run the command below to check the installed Python 3 version.
|
||||
```
|
||||
# python3 --version
|
||||
Python 3.4.5
|
||||
|
||||
```
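
With pip in place for this interpreter, installing a library works as usual; a quick illustrative example (the package name is arbitrary):

```
# Install a package only for the current user with the newly added pip
# ("requests" is just an example package)
# python3.4 -m pip install --user requests
```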
|
||||
|
||||
### Method-3 : Using IUS Community Repository
|
||||
|
||||
IUS Community is a CentOS Community approved third-party RPM repository which contains the latest upstream versions of PHP, Python, MySQL, etc. for Enterprise Linux (RHEL & CentOS) 5, 6 & 7.
|
||||
|
||||
The IUS Community repository depends on the EPEL repository, so we have to install EPEL prior to installing the IUS repository. Follow the steps below to install and enable the EPEL and IUS Community repositories on RPM systems and install the packages.
|
||||
|
||||
**Suggested Read :** [Install / Enable IUS Community Repository on RHEL & CentOS][2]
|
||||
|
||||
The EPEL release package is included in the CentOS Extras repository, which is enabled by default, so we can install it by running the command below.
|
||||
```
|
||||
# yum install epel-release
|
||||
|
||||
```
|
||||
|
||||
Download the IUS Community repository shell script:
|
||||
```
|
||||
# curl 'https://setup.ius.io/' -o setup-ius.sh
|
||||
% Total % Received % Xferd Average Speed Time Time Time Current
|
||||
Dload Upload Total Spent Left Speed
|
||||
100 1914 100 1914 0 0 6563 0 --:--:-- --:--:-- --:--:-- 133k
|
||||
|
||||
```
|
||||
|
||||
Install/Enable IUS Community Repository.
|
||||
```
|
||||
# sh setup-ius.sh
|
||||
|
||||
```
|
||||
|
||||
Check the available python 3 version.
|
||||
```
|
||||
# yum --enablerepo=ius info python36u
|
||||
Loaded plugins: fastestmirror, security
|
||||
Loading mirror speeds from cached hostfile
|
||||
* epel: ewr.edge.kernel.org
|
||||
* ius: mirror.team-cymru.com
|
||||
* remi-safe: mirror.team-cymru.com
|
||||
Available Packages
|
||||
Name : python36u
|
||||
Arch : x86_64
|
||||
Version : 3.6.5
|
||||
Release : 1.ius.centos6
|
||||
Size : 55 k
|
||||
Repo : ius
|
||||
Summary : Interpreter of the Python programming language
|
||||
URL : https://www.python.org/
|
||||
License : Python
|
||||
Description : Python is an accessible, high-level, dynamically typed, interpreted programming
|
||||
: language, designed with an emphasis on code readability.
|
||||
: It includes an extensive standard library, and has a vast ecosystem of
|
||||
: third-party libraries.
|
||||
:
|
||||
: The python36u package provides the "python3.6" executable: the reference
|
||||
: interpreter for the Python language, version 3.
|
||||
: The majority of its standard library is provided in the python36u-libs package,
|
||||
: which should be installed automatically along with python36u.
|
||||
: The remaining parts of the Python standard library are broken out into the
|
||||
: python36u-tkinter and python36u-test packages, which may need to be installed
|
||||
: separately.
|
||||
:
|
||||
: Documentation for Python is provided in the python36u-docs package.
|
||||
:
|
||||
: Packages containing additional libraries for Python are generally named with
|
||||
: the "python36u-" prefix.
|
||||
|
||||
```
|
||||
|
||||
Run the command below to install the latest available Python 3 package from the IUS repository.
|
||||
```
|
||||
# yum --enablerepo=ius install python36u
|
||||
|
||||
```
|
||||
|
||||
Run the command below to check the installed Python 3 version.
|
||||
```
|
||||
# python3.6 --version
|
||||
Python 3.6.5
|
||||
|
||||
```
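
As an optional extra check, Python 3.6 ships with the venv module, so you can create a throwaway virtual environment with the new interpreter (the directory name below is arbitrary):

```
# python3.6 -m venv /tmp/venv36
# /tmp/venv36/bin/python3 --version
```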
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/3-methods-to-install-latest-python3-package-on-centos-6-system/
|
||||
|
||||
作者:[PRAKASH SUBRAMANIAN][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/prakash/
|
||||
[1]:https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
|
||||
[2]:https://www.2daygeek.com/install-enable-ius-community-repository-on-rhel-centos/
|
@ -0,0 +1,114 @@
|
||||
4MLinux Revives Your Older Computer [Review]
|
||||
======
|
||||
**Brief:** 4MLinux is a lightweight Linux distribution that can turn your old computer into a functional one with multimedia support, maintenance tools and classic games.
|
||||
|
||||
As more and more [Linux distributions drop the support for 32-bit systems][1], you may wonder what to do with that old computer of yours. Thankfully, there are plenty of [lightweight Linux distributions][2] that can put those old computers to use for regular computing tasks such as playing small games, watching movies, listening to music and surfing the web.
|
||||
|
||||
[4MLinux][3] is one such Linux distribution that requires fewer system resources and can even run on 128 MB of RAM. The desktop edition is available only for the 32-bit architecture, while the server edition is 64-bit.
|
||||
|
||||
4MLinux can also be used as a rescue CD, besides serving as a full-fledged working system or as a mini-server.
|
||||
|
||||
![4MLinux Review][4]
|
||||
|
||||
It is named 4MLinux because it focuses mainly on four points, called the “4 M”:
|
||||
|
||||
* Maintenance – You can use 4MLinux as a rescue Live CD.
|
||||
* Multimedia – There is inbuilt support for almost every multimedia format, be it image, audio or video.
|
||||
* Miniserver – A 64-bit server is included running LAMP suite, which can be enabled from the Application Menu.
|
||||
* Mystery – Includes a collection of classic Linux games.
|
||||
|
||||
|
||||
|
||||
Most Linux distributions are based either on Debian with DEB packages or on Fedora with RPM. 4MLinux, on the other hand, does not rely on either of these package management systems; it is pretty damn fast and works quite well on older systems.
|
||||
|
||||
### 4MLinux
|
||||
|
||||
The 4MLinux Desktop comes with a variety of [lightweight applications][5] so that it can work on older hardware. It uses [JWM][6] (Joe’s Window Manager), a lightweight stacking window manager for the [X Window System][7]. For managing desktop wallpapers, the lightweight and powerful [feh][8] is used. It also uses the [PCMan File Manager][9], which is the standard file manager of [LXDE][10].
|
||||
|
||||
#### Installing 4MLinux is quick
|
||||
|
||||
I grabbed the ISO from the 4MLinux website, used [MultiBootUSB][11] to create a bootable drive, and live-booted from it.
|
||||
|
||||
4MLinux does not use the GRUB or GRUB 2 bootloader; it uses the **LI**nux **LO**ader ([LILO][12]) instead. The main advantage of LILO is that it allows fast boot-ups for a Linux system.
|
||||
|
||||
Now, to install 4MLinux, you will have to manually create a partition. Go to **Maintenance -> Partitions -> GParted**. Click on **Device -> Create Partition Table**. Once done, click on **New**, leave the settings at their defaults and click on **Add**. Click on **Apply** to save the settings and close it.
|
||||
|
||||
The next step is to go to 4MLinux -> Installer, which launches a text-based installer.
|
||||
|
||||
![][13]
|
||||
|
||||
Select the partition you created as the default partition on which to install 4MLinux, and follow the instructions to complete the installation.
|
||||
|
||||
Surprisingly, the installation took less than a minute. Restart your system and remove the live USB and you will be greeted with this desktop.
|
||||
|
||||
![][14]
|
||||
|
||||
#### Experiencing 4MLinux
|
||||
|
||||
The default desktop screen has a dock at the top with the most common applications pinned. There is a taskbar, a [Conky theme][15] with an option in the dock to turn it on/off, and a clock at the bottom right corner. Left-clicking on the desktop opens the application menu.
|
||||
|
||||
CPU usage was minimal, at less than 2%, and RAM usage was less than 100 MB.
|
||||
|
||||
4MLinux comes with a number of applications organized under different sections. There is Transmission for torrent downloads, Tor is included by default, and Bluetooth is supported.
|
||||
|
||||
Under Maintenance, there are options to back up the system and recover it using TestDisk and GNU ddrescue; CD burning tools are available along with partitioning tools. There are also a number of monitoring tools and Clam AntiVirus.
|
||||
|
||||
Multimedia section includes various video and music players and mixers, image viewers and editors and tools for digital cameras.
|
||||
|
||||
Mystery section is interesting. It includes a number of [console games][16] like Snake, Tetris, Mines, Casino etc.
|
||||
|
||||
Under Settings, you can set your preferences for display, networking and the desktop, and choose default applications. The maximum desktop resolution was 1024×768 by default, so that might disappoint you.
|
||||
|
||||
Some of the applications are not installed by default; launching one gives you an option to install it. But that’s about it. Since there is no package manager here, you are limited to the available applications. If you want software that is not available in the system, you’ll have to [install it from source code][17].
|
||||
|
||||
This is by design, because 4MLinux is focused on providing only the essential desktop experience. A small, handpicked selection of lightweight applications fits its ecosystem.
|
||||
|
||||
#### Download 4M Linux
|
||||
|
||||
The Download section features the 32-bit stable 4MLinux and its beta version, the 64-bit 4MServer and a 4MRescueKit. Although the ISO size is over 1 GB, 4MLinux is very light in its design.
|
||||
|
||||
[Download 4MLinux][18]
|
||||
|
||||
There is a [separate page to download additional drivers][19]. For any other missing driver, 4MLinux asks you to download and install it when you launch an application that needs it.
|
||||
|
||||
#### Final thoughts on 4MLinux
|
||||
|
||||
4MLinux has the look and feel of an old-school Linux system, but the desktop is super fast. I was able to run it on an Intel dual-core desktop with ease, and most things worked. WiFi connected fine, the application section included most of the software I use on a daily basis, and the retro games section was pretty cool.
|
||||
|
||||
The one negative point was the limited selection of available applications. If you can manage with that handful of applications, 4MLinux can be seen as one of the best Linux distributions for older systems and for people who don’t want to deal with technicalities at all.
|
||||
|
||||
Fast boot makes it an ideal rescue disc!
|
||||
|
||||
What do you think of 4MLinux? Are you willing to give it a try? Let us know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/4mlinux-review/
|
||||
|
||||
作者:[Ambarish Kumar][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/ambarish/
|
||||
[1]:https://itsfoss.com/32-bit-os-list/
|
||||
[2]:https://itsfoss.com/lightweight-linux-beginners/
|
||||
[3]:http://4mlinux.com/
|
||||
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/4minux-review-feature-800x450.jpeg
|
||||
[5]:https://itsfoss.com/lightweight-alternative-applications-ubuntu/
|
||||
[6]:https://joewing.net/projects/jwm/
|
||||
[7]:https://en.wikipedia.org/wiki/X_Window_System
|
||||
[8]:https://feh.finalrewind.org/
|
||||
[9]:https://wiki.lxde.org/en/PCManFM
|
||||
[10]:https://lxde.org/
|
||||
[11]:https://itsfoss.com/multiple-linux-one-usb/
|
||||
[12]:https://en.wikipedia.org/wiki/LILO_(boot_loader)
|
||||
[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/4MLinux-installer.png
|
||||
[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/2-800x600.jpg
|
||||
[15]:https://itsfoss.com/conky-gui-ubuntu-1304/
|
||||
[16]:https://itsfoss.com/best-command-line-games-linux/
|
||||
[17]:https://itsfoss.com/install-software-from-source-code/
|
||||
[18]:http://4mlinux.com/index.php?page=download
|
||||
[19]:http://sourceforge.net/projects/linux4m/files/24.0/drivers/
|
@ -0,0 +1,121 @@
|
||||
How to kill a process or stop a program in Linux
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/x_stop_terminate_program_kill.jpg?itok=9rM8i9x8)
|
||||
When a process misbehaves, you might sometimes want to terminate or kill it. In this post, we'll explore a few ways to terminate a process or an application from the command line as well as from a graphical interface, using [gedit][1] as a sample application.
|
||||
|
||||
### Using the command line/termination characters
|
||||
|
||||
#### Ctrl + C
|
||||
|
||||
One problem invoking `gedit` from the command line (if you are not using `gedit &`) is that it will not free up the prompt, so that shell session is blocked. In such cases, Ctrl+C (the Control key in combination with 'C') comes in handy. That will terminate `gedit` and all work will be lost (unless the file was saved). Ctrl+C sends the `SIGINT` signal to `gedit`. This is a stop signal whose default action is to terminate the process. It instructs the shell to stop `gedit` and return to the main loop, and you'll get the prompt back.
|
||||
```
|
||||
$ gedit
|
||||
|
||||
^C
|
||||
|
||||
```
|
||||
|
||||
#### Ctrl + Z
|
||||
|
||||
This is called the suspend character. It sends a `SIGTSTP` signal to the process. This is also a stop signal, but the default action is not to kill the process but to suspend it.
|
||||
|
||||
It will stop (suspend, not kill/terminate) `gedit` and return the shell prompt.
|
||||
```
|
||||
$ gedit
|
||||
|
||||
^Z
|
||||
|
||||
[1]+ Stopped gedit
|
||||
|
||||
$
|
||||
|
||||
```
|
||||
|
||||
Once the process is suspended (in this case, `gedit`), it is not possible to write or do anything in `gedit`. In the background, the process becomes a job. This can be verified by the `jobs` command.
|
||||
```
|
||||
$ jobs
|
||||
|
||||
[1]+ Stopped gedit
|
||||
|
||||
```
|
||||
|
||||
`jobs` allows you to control multiple processes within a single shell session. You can stop, resume, and move jobs to the background or foreground as needed.
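
For example, a suspended job can be brought back to the foreground with `fg` and the job ID reported by `jobs`:

```
$ fg %1
```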
|
||||
|
||||
Let's resume `gedit` in the background and free up the prompt to run other commands. You can do this using the `bg` command, followed by the job ID (notice `[1]` in the output of `jobs` above; `[1]` is the job ID).
|
||||
```
|
||||
$ bg 1
|
||||
|
||||
[1]+ gedit &
|
||||
|
||||
```
|
||||
|
||||
This is similar to starting `gedit` with `&`:
|
||||
```
|
||||
$ gedit &
|
||||
|
||||
```
|
||||
|
||||
### Using kill
|
||||
|
||||
`kill` allows fine control over signals, enabling you to signal a process by specifying either a signal name or a signal number, followed by a process ID, or PID.
|
||||
|
||||
What I like about `kill` is that it can also work with job IDs. Let's start `gedit` in the background using `gedit &`. Assuming I have a job ID of `gedit` from the `jobs` command, let's send `SIGINT` to `gedit`:
|
||||
```
|
||||
$ kill -s SIGINT %1
|
||||
|
||||
```
|
||||
|
||||
Note that the job ID should be prefixed with `%`, or `kill` will consider it a PID.
|
||||
|
||||
`kill` can work without specifying a signal explicitly. In that case, the default action is to send `SIGTERM`, which will terminate the process. Execute `kill -l` to list all signal names, and use the `man kill` command to read the man page.
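
For example, assuming `gedit` is running with PID 14852 (the PID here is only illustrative; you could look it up with `pgrep gedit`), a typical escalation looks like this:

```
$ kill 14852             # send SIGTERM (the default) and let the process exit cleanly
$ kill -s SIGKILL 14852  # force-kill only if the process ignores SIGTERM
```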
|
||||
|
||||
### Using killall
|
||||
|
||||
If you don't want to specify a job ID or PID, `killall` lets you specify a process by name. The simplest way to terminate `gedit` using `killall` is:
|
||||
```
|
||||
$ killall gedit
|
||||
|
||||
```
|
||||
|
||||
This will kill all the processes with the name `gedit`. Like `kill`, the default signal is `SIGTERM`. It has the option to ignore case using `-I`:
|
||||
```
|
||||
$ gedit &
|
||||
|
||||
[1] 14852
|
||||
|
||||
|
||||
|
||||
$ killall -I GEDIT
|
||||
|
||||
[1]+ Terminated gedit
|
||||
|
||||
```
|
||||
|
||||
To learn more about the various flags provided by `killall` (such as `-u`, which allows you to kill processes owned by a given user), check the man page (`man killall`).
|
||||
|
||||
### Using xkill
|
||||
|
||||
Have you ever encountered an issue where a media player, such as [VLC][2], grayed out or hung? Now you can find the PID and kill the application using one of the commands listed above or use `xkill`.
|
||||
|
||||
![Using xkill][3]
|
||||
|
||||
`xkill` allows you to kill a window using a mouse. Simply execute `xkill` in a terminal, which should change the mouse cursor to an **x** or a tiny skull icon. Click **x** on the window you want to close. Be careful using `xkill`, though—as its man page explains, it can be dangerous. You have been warned!
|
||||
|
||||
Refer to the man page of each command for more information. You can also explore commands like `pkill` and `pgrep`.
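
For instance, `pgrep` and `pkill` work by process name, so a quick way to find and then terminate every `gedit` process is:

```
$ pgrep gedit        # list the PIDs of all processes named gedit
$ pkill gedit        # send SIGTERM (the default) to all of them
```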
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/how-kill-process-stop-program-linux
|
||||
|
||||
作者:[Sachin Patil][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/psachin
|
||||
[1]:https://wiki.gnome.org/Apps/Gedit
|
||||
[2]:https://www.videolan.org/vlc/index.html
|
||||
[3]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/xkill_gedit.png?itok=TBvMw0TN (Using xkill)
|
119
sources/tech/20180510 Analyzing Ansible runs using ARA.md
Normal file
@ -0,0 +1,119 @@
|
||||
Analyzing Ansible runs using ARA
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
|
||||
[Ansible][1] is a versatile platform that has become popular for managing servers and server configurations. Today, Ansible is used heavily to deploy and test through continuous integration (CI).
|
||||
|
||||
In the world of automated continuous integration, it’s not uncommon to have hundreds, if not thousands, of jobs running every day for testing, building, compiling, deploying, and more.
|
||||
|
||||
### The Ansible Run Analysis (ARA) tool
|
||||
|
||||
Ansible runs generate a large amount of console data, and keeping up with high volumes of Ansible output in the context of CI is challenging. The Ansible Run Analysis (ARA) tool makes this verbose output readable and more representative of the job status and debug information. ARA organizes recorded playbook data so you can search and find what you’re interested in as quickly and as easily as possible.
|
||||
|
||||
Note that ARA doesn't run your playbooks for you; rather, it integrates with Ansible as a callback plugin wherever Ansible runs. A callback plugin enables adding new behaviors to Ansible when responding to events. It can perform custom actions in response to Ansible events such as a play starting or a task completing on a host.
|
||||
|
||||
Compared to [AWX][2] and [Tower][3], which are tools that control the entire workflow, with features like inventory management, playbook execution, editing features, and more, the scope of ARA is comparatively narrow: It records data and provides an intuitive interface. It is a relatively simple application that is easy to install and configure.
|
||||
|
||||
#### Installation
|
||||
|
||||
There are two ways to install ARA on your system:
|
||||
|
||||
  * Using the Ansible role hosted on my [GitHub account][4]: clone the repo and run:
|
||||
|
||||
|
||||
```
|
||||
ansible-playbook Playbook.yml
|
||||
|
||||
```
|
||||
|
||||
If the playbook run is successful, you will get:
|
||||
```
|
||||
TASK [ara : Display ara UI URL] ************************
|
||||
|
||||
ok: [localhost] => {}
|
||||
|
||||
"msg": "Access playbook records at http://YOUR_IP:9191"
|
||||
|
||||
```
|
||||
|
||||
Note: It picks the IP address from the `ansible_default_ipv4` fact gathered by Ansible. If no such fact has been gathered, replace it with your IP in the `main.yml` file in the `roles/ara/tasks/` folder.
|
||||
|
||||
* ARA is an open source project available on [GitHub][5] under the Apache v2 license. Installation instructions are in the Quickstart chapter. The [documentation][6] and [FAQs][7] are available on [readthedocs.io][6].
|
||||
|
||||
|
||||
|
||||
#### What can ARA do?
|
||||
|
||||
The image below shows the ARA landing page launched from the browser:
|
||||
|
||||
|
||||
![ara landing page][9]
|
||||
|
||||
The ARA landing page
|
||||
|
||||
It provides summaries of task results per host or per playbook:
|
||||
|
||||
|
||||
![task summaries][11]
|
||||
|
||||
ARA displays task summaries
|
||||
|
||||
It allows you to filter task results by playbook, play, host, task, or status:
|
||||
|
||||
|
||||
![playbook runs filtered by hosts][13]
|
||||
|
||||
Playbook runs, filtered by host
|
||||
|
||||
With ARA, you can easily drill down from the summary view to find the results you’re interested in, whether it’s a particular host or a specific task:
|
||||
|
||||
|
||||
![summary of each task][15]
|
||||
|
||||
A detailed summary of each task
|
||||
|
||||
ARA supports recording and viewing multiple runs in the same database.
|
||||
|
||||
|
||||
![show gathered facts][17]
|
||||
|
||||
Displaying gathered facts
|
||||
|
||||
#### Wrapping up
|
||||
|
||||
ARA is a useful resource that has helped me get more out of Ansible run logs and outputs. I highly recommend it to all Ansible ninjas out there.
|
||||
|
||||
Feel free to share this, and please let me know about your experience using ARA in the comments.
|
||||
|
||||
**[See our related story,[Tips for success when getting started with Ansible][18].]**
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/analyzing-ansible-runs-using-ara
|
||||
|
||||
作者:[Ajinkya Bapat][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/iamajinkya
|
||||
[1]:https://www.ansible.com/
|
||||
[2]:https://www.ansible.com/products/awx-project
|
||||
[3]:https://www.ansible.com/products/tower
|
||||
[4]:https://github.com/AjinkyaBapat/Ansible-Run-Analyser
|
||||
[5]:https://github.com/dmsimard/ara
|
||||
[6]:http://ara.readthedocs.io/en/latest/
|
||||
[7]:http://ara.readthedocs.io/en/latest/faq.html
|
||||
[8]:/file/395716
|
||||
[9]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/ara_landing_page.png?itok=PoB7KfhB (ara landing page)
|
||||
[10]:/file/395726
|
||||
[11]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/task_summaries.png?itok=8EBP9sTG (task summaries)
|
||||
[12]:/file/395731
|
||||
[13]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/playbook_filtered_by_hosts.png?itok=Lol0K_My (playbook runs filtered by hosts)
|
||||
[14]:/file/395736
|
||||
[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/summary_of_each_task.png?itok=KJnLHEZC (summary of each task)
|
||||
[16]:/file/395741
|
||||
[17]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/showing_gathered_facts.png?itok=FVDc6oA0 (show gathered facts)
|
||||
[18]:/article/18/2/tips-success-when-getting-started-ansible
|
@ -0,0 +1,98 @@
|
||||
Creating small containers with Buildah
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open%20source_collaboration_0.png?itok=YEl_GXbv)
|
||||
I recently joined Red Hat after many years working for another tech company. In my previous job, I developed a number of different software products that were successful but proprietary. Not only were we legally compelled to not share the software outside of the company, we often didn’t even share it within the company. At the time, that made complete sense to me: The company spent time, energy, and budget developing the software, so they should protect and claim the rewards it garnered.
|
||||
|
||||
Fast-forward to a year ago, when I joined Red Hat and developed a completely different mindset. One of the first things I jumped into was the [Buildah project][1]. It facilitates building Open Container Initiative (OCI) images, and it is especially good at allowing you to tailor the size of the image that is created. At that time Buildah was in its very early stages, and there were some warts here and there that weren’t quite production-ready.
|
||||
|
||||
Being new to the project, I made a few minor changes, then asked where the company’s internal git repository was so that I could push my changes. The answer: Nothing internal, just push your changes to GitHub. I was baffled—sending my changes out to GitHub would mean anyone could look at that code and use it for their own projects. Plus, the code still had a few warts, so that just seemed so counterintuitive. But being the new guy, I shook my head in wonder and pushed the changes out.
|
||||
|
||||
A year later, I’m now convinced of the power and value of open source software. I’m still working on Buildah, and we recently had an issue that illustrates that power and value. The issue, titled [Buildah images not so small?][2], was raised by Tim Dudgeon (@tdudgeon). To summarize, he noted that images created by Buildah were bigger than those created by Docker, even though the Buildah images didn’t contain the extra "fluff" he saw in the Docker images.
|
||||
|
||||
For comparison he first did:
|
||||
```
|
||||
$ docker pull centos:7
|
||||
$ docker images
|
||||
REPOSITORY TAG IMAGE ID CREATED SIZE
|
||||
docker.io/centos 7 2d194b392dd1 2 weeks ago 195 MB
|
||||
```
|
||||
|
||||
He noted that the size of the Docker image was 195MB. Tim then created a minimal (scratch) image using Buildah, with only the `coreutils` and `bash` packages added to the image, using the following script:
|
||||
```
|
||||
$ cat ./buildah-base.sh
|
||||
#!/bin/bash
|
||||
|
||||
set -x
|
||||
|
||||
# build a minimal image
|
||||
newcontainer=$(buildah from scratch)
|
||||
scratchmnt=$(buildah mount $newcontainer)
|
||||
|
||||
# install the packages
|
||||
yum install --installroot $scratchmnt bash coreutils --releasever 7 --setopt install_weak_deps=false -y
|
||||
yum clean all -y --installroot $scratchmnt --releasever 7
|
||||
|
||||
sudo buildah config --cmd /bin/bash $newcontainer
|
||||
|
||||
# set some config info
|
||||
buildah config --label name=centos-base $newcontainer
|
||||
|
||||
# commit the image
|
||||
buildah unmount $newcontainer
|
||||
buildah commit $newcontainer centos-base
|
||||
|
||||
$ sudo ./buildah-base.sh
|
||||
|
||||
$ sudo buildah images
|
||||
IMAGE ID IMAGE NAME CREATED AT SIZE
|
||||
8379315d3e3e docker.io/library/centos-base:latest Mar 25, 2018 17:08 212.1 MB
|
||||
```
|
||||
|
||||
Tim wondered why the image was 17MB larger, because `python` and `yum` were not installed in the Buildah image, whereas they were installed in the Docker image. This set off quite the discussion in the GitHub issue, as it was not at all an expected result.
|
||||
|
||||
What was great about the discussion was that not only were Red Hat folks involved, but several others from outside as well. In particular, a lot of great discussion and investigation was led by GitHub user @pixdrift, who noted that the documentation and locale-archive were chewing up a little more than 100MB of space in the Buildah image. Pixdrift suggested forcing locale in the yum installer and provided this updated `buildah-base.sh` script with those changes:
|
||||
```
|
||||
#!/bin/bash
|
||||
|
||||
set -x
|
||||
|
||||
# build a minimal image
|
||||
newcontainer=$(buildah from scratch)
|
||||
scratchmnt=$(buildah mount $newcontainer)
|
||||
|
||||
# install the packages
|
||||
yum install --installroot $scratchmnt bash coreutils --releasever 7 --setopt=install_weak_deps=false --setopt=tsflags=nodocs --setopt=override_install_langs=en_US.utf8 -y
|
||||
yum clean all -y --installroot $scratchmnt --releasever 7
|
||||
|
||||
sudo buildah config --cmd /bin/bash $newcontainer
|
||||
|
||||
# set some config info
|
||||
buildah config --label name=centos-base $newcontainer
|
||||
|
||||
# commit the image
|
||||
buildah unmount $newcontainer
|
||||
buildah commit $newcontainer centos-base
|
||||
```
|
||||
|
||||
When Tim ran this new script, the image size shrank to 92MB, shedding 120MB from the original Buildah image size and getting closer to the expected size; however, engineers being engineers, a size savings of 56% wasn’t enough. The discussion went further, involving how to remove individual locale packages to save even more space. To see more details of the discussion, click the [Buildah images not so small?][2] link. Who knows—maybe you’ll have a helpful tip, or better yet, become a contributor for Buildah. On a side note, this solution illustrates how the Buildah software can be used to quickly and easily create a minimally sized container that's loaded only with the software that you need to do your job efficiently. As a bonus, it doesn’t require a daemon to be running.
|
||||
|
||||
This image-sizing issue drove home the power of open source software for me. A number of people from different companies all collaborated to solve a problem through open discussion in a little over a day. Although no code changes were created to address this particular issue, there have been many code contributions to Buildah from contributors outside of Red Hat, and this has helped to make the project even better. These contributions have served to get a wider variety of talented people to look at the code than ever would have if it were a proprietary piece of software stuck in a private git repository. It’s taken only a year to convert me to the [open source way][3], and I don’t think I could ever go back.
|
||||
|
||||
This article was originally posted at [Project Atomic][4]. Reposted with permission.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/containers-buildah
|
||||
|
||||
作者:[Tom Sweeney][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/tomsweeneyredhat
|
||||
[1]:https://github.com/projectatomic/buildah
|
||||
[2]:https://github.com/projectatomic/buildah/issues/532
|
||||
[3]:https://twitter.com/opensourceway
|
||||
[4]:http://www.projectatomic.io/blog/2018/04/open-source-what-a-concept/
|
@ -0,0 +1,260 @@
|
||||
Get more done at the Linux command line with GNU Parallel
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR)
|
||||
|
||||
Do you ever get the funny feeling that your computer isn't quite as fast as it should be? I used to feel that way, and then I found GNU Parallel.
|
||||
|
||||
GNU Parallel is a shell utility for executing jobs in parallel. It can parse multiple inputs, thereby running your script or command against sets of data at the same time. You can use all your CPU at last!
|
||||
|
||||
If you've ever used `xargs`, you already know how to use Parallel. If you haven't, this article will teach you how, along with many other use cases.
|
||||
|
||||
### Installing GNU Parallel
|
||||
|
||||
GNU Parallel may not come pre-installed on your Linux or BSD computer. Install it from your repository or ports collection. For example, on Fedora:
|
||||
```
|
||||
$ sudo dnf install parallel
|
||||
|
||||
```
|
||||
|
||||
Or on NetBSD:
|
||||
```
|
||||
# pkg_add parallel
|
||||
|
||||
```
|
||||
|
||||
If all else fails, refer to the [project homepage][1].
|
||||
|
||||
### From serial to parallel
|
||||
|
||||
As its name suggests, Parallel's strength is that it runs jobs in parallel rather than, as many of us still do, sequentially.
|
||||
|
||||
When you run one command against many objects, you're inherently creating a queue. Some number of objects can be processed by the command, and all the other objects just stand around and wait their turn. It's inefficient. Given enough data, there's always going to be a queue, but instead of having just one queue, why not have lots of small queues?
|
||||
|
||||
Imagine you have a folder full of images you want to convert from JPEG to PNG. There are many ways to do this. There's the manual way of opening each image in GIMP and exporting it to the new format. That's usually the worst possible way. It's not only time-intensive, it's labor-intensive.
|
||||
|
||||
A pretty neat variation on this theme is the shell-based solution:
|
||||
```
|
||||
$ convert 001.jpeg 001.png
|
||||
|
||||
$ convert 002.jpeg 002.png
|
||||
|
||||
$ convert 003.jpeg 003.png
|
||||
|
||||
... and so on ...
|
||||
|
||||
```
|
||||
|
||||
It's a great trick when you first learn it, and at first it's a vast improvement. No need for a GUI and constant clicking. But it's still labor-intensive.
|
||||
|
||||
Better still:
|
||||
```
|
||||
$ for i in *jpeg; do convert $i $i.png ; done
|
||||
|
||||
```
|
||||
|
||||
This, at least, sets the job(s) in motion and frees you up to do more productive things. The problem is, it's still a serial process. One image gets converted, and then the next one in the queue steps up for conversion, and so on until the queue has been emptied.
|
||||
|
||||
With Parallel:
|
||||
```
|
||||
$ find . -name "*jpeg" | parallel -I% --max-args 1 convert % %.png
|
||||
|
||||
```
|
||||
|
||||
This is a combination of two commands: the `find` command, which gathers the objects you want to operate on, and the `parallel` command, which sorts through the objects and makes sure everything gets processed as required.
|
||||
|
||||
* `find . -name "*jpeg"` finds all files in the current directory that end in `jpeg`.
|
||||
* `parallel` invokes GNU Parallel.
|
||||
* `-I%` creates a placeholder, called `%`, to stand in for whatever `find` hands over to Parallel. You use this because otherwise you'd have to manually write a new command for each result of `find`, and that's exactly what you're trying to avoid.
|
||||
* `--max-args 1` limits the rate at which Parallel requests a new object from the queue. Since the command Parallel is running requires only one file, you limit the rate to 1. Were you doing a more complex command that required two files (such as `cat 001.txt 002.txt > new.txt`), you would limit the rate to 2.
|
||||
* `convert % %.png` is the command you want to run in Parallel.
|
||||
|
||||
|
||||
|
||||
The result of this command is that `find` gathers all relevant files and hands them over to `parallel`, which launches a job and immediately requests the next in line. Parallel continues to do this for as long as it is safe to launch new jobs without crippling your computer. As old jobs are completed, it replaces them with new ones, until all the data provided to it has been processed. What took 10 minutes before might take only 5 or 3 with Parallel.
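
If you'd like to preview what Parallel would do before running it for real, GNU Parallel's `--dry-run` option prints each command it would execute instead of running it:

```
$ find . -name "*jpeg" | parallel --dry-run -I% --max-args 1 convert % %.png
```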
|
||||
|
||||
### Multiple inputs
|
||||
|
||||
The `find` command is an excellent gateway to Parallel as long as you're familiar with `find` and `xargs` (collectively called GNU Find Utilities, or `findutils`). It provides a flexible interface that many Linux users are already comfortable with and is pretty easy to learn if you're a newcomer.
|
||||
|
||||
The `find` command is fairly straightforward: you provide `find` with a path to a directory you want to search and some portion of the file name you want to search for. Use wildcard characters to cast your net wider; in this example, the asterisk indicates anything, so `find` locates all files that end with the string `searchterm`:
|
||||
```
|
||||
$ find /path/to/directory -name "*searchterm"
|
||||
|
||||
```
|
||||
|
||||
By default, `find` returns the results of its search one item at a time, with one item per line:
|
||||
```
|
||||
$ find ~/graphics -name "*jpg"
|
||||
|
||||
/home/seth/graphics/001.jpg
|
||||
|
||||
/home/seth/graphics/cat.jpg
|
||||
|
||||
/home/seth/graphics/penguin.jpg
|
||||
|
||||
/home/seth/graphics/IMG_0135.jpg
|
||||
|
||||
```
|
||||
|
||||
When you pipe the results of `find` to `parallel`, each item on each line is treated as one argument to the command that `parallel` is arbitrating. If, on the other hand, you need to process more than one argument in one command, you can split up the way the data in the queue is handed over to `parallel`.
|
||||
|
||||
Here's a simple, unrealistic example, which I'll later turn into something more useful. You can follow along with this example, as long as you have GNU Parallel installed.
|
||||
|
||||
Assume you have four files. List them, one per line, to see exactly what you have:
|
||||
```
|
||||
$ echo ada > ada ; echo lovelace > lovelace
|
||||
|
||||
$ echo richard > richard ; echo stallman > stallman
|
||||
|
||||
$ ls -1
|
||||
|
||||
ada
|
||||
|
||||
lovelace
|
||||
|
||||
richard
|
||||
|
||||
stallman
|
||||
|
||||
```
|
||||
|
||||
You want to combine two files into a third that contains the contents of both files. This requires that Parallel has access to two files, so the `-I%` variable won't work in this case.
|
||||
|
||||
Parallel's default behavior is basically invisible:
|
||||
```
|
||||
$ ls -1 | parallel echo
|
||||
|
||||
ada
|
||||
|
||||
lovelace
|
||||
|
||||
richard
|
||||
|
||||
stallman
|
||||
|
||||
```
|
||||
|
||||
Now tell Parallel you want to get two objects per job:
|
||||
```
|
||||
$ ls -1 | parallel --max-args=2 echo
|
||||
|
||||
ada lovelace
|
||||
|
||||
richard stallman
|
||||
|
||||
```
|
||||
|
||||
Now the lines have been combined. Specifically, two results from `ls -1` are passed to Parallel all at once. That's the right number of arguments for this task, but they're effectively one argument right now: "ada lovelace" and "richard stallman." What you actually want is two distinct arguments per job.
|
||||
|
||||
Luckily, that technicality is parsed by Parallel itself. If you set `--max-args` to `2`, you get two variables, `{1}` and `{2}`, representing the first and second parts of the argument:
|
||||
```
|
||||
$ ls -1 | parallel --max-args=2 cat {1} {2} ">" {1}_{2}.person
|
||||
|
||||
```
|
||||
|
||||
In this command, the variable `{1}` is ada or richard (depending on which job you look at) and `{2}` is either `lovelace` or `stallman`. The contents of the files are redirected with a redirect symbol in quotes (the quotes grab the redirect symbol from Bash so Parallel can use it) and placed into new files called `ada_lovelace.person` and `richard_stallman.person`.
|
||||
```
|
||||
$ ls -1
|
||||
|
||||
ada
|
||||
|
||||
ada_lovelace.person
|
||||
|
||||
lovelace
|
||||
|
||||
richard
|
||||
|
||||
richard_stallman.person
|
||||
|
||||
stallman
|
||||
|
||||
|
||||
|
||||
$ cat ada_*person
|
||||
|
||||
ada lovelace
|
||||
|
||||
$ cat ri*person
|
||||
|
||||
richard stallman
|
||||
|
||||
```
|
||||
|
||||
If you spend all day parsing log files that are hundreds of megabytes in size, you might see how parallelized text parsing could be useful to you; otherwise, this is mostly a demonstrative exercise.
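
As a rough sketch of that idea (the paths and search string here are placeholders), you could count matching lines across many logs in parallel:

```
$ find /var/log -name "*.log" | parallel --max-args 1 grep -cH "error" {}
```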
|
||||
|
||||
However, this kind of processing is invaluable for more than just text parsing. Here's a real-life example from the film world. Consider a directory of video files and audio files that need to be joined together.
|
||||
```
|
||||
$ ls -1
|
||||
|
||||
12_LS_establishing-manor.avi
|
||||
|
||||
12_wildsound.flac
|
||||
|
||||
14_butler-dialogue-mixed.flac
|
||||
|
||||
14_MS_butler.avi
|
||||
|
||||
...and so on...
|
||||
|
||||
```
|
||||
|
||||
Using the same principles, a simple command can be created so that the files are combined in parallel:
|
||||
```
|
||||
$ ls -1 | parallel --max-args=2 ffmpeg -i {1} -i {2} -vcodec copy -acodec copy {1}.mkv
|
||||
|
||||
```
|
||||
|
||||
### Brute. Force.
|
||||
|
||||
All this fancy input and output parsing isn't to everyone's taste. If you prefer a more direct approach, you can throw commands at Parallel and walk away.
|
||||
|
||||
First, create a text file with one command on each line:
|
||||
```
|
||||
$ cat jobs2run
|
||||
|
||||
bzip2 oldstuff.tar
|
||||
|
||||
oggenc music.flac
|
||||
|
||||
opusenc ambiance.wav
|
||||
|
||||
convert bigfile.tiff small.jpeg
|
||||
|
||||
ffmpeg -i foo.avi -b:v 12000k foo.mp4
|
||||
|
||||
xsltproc --output build/tmp.fo style/dm.xsl src/tmp.xml
|
||||
|
||||
bzip2 archive.tar
|
||||
|
||||
```
|
||||
|
||||
Then hand the file over to Parallel:
|
||||
```
|
||||
$ parallel --jobs 6 < jobs2run
|
||||
|
||||
```
|
||||
|
||||
And now all jobs in your file are run in Parallel. If more jobs exist than jobs allowed, a queue is formed and maintained by Parallel until all jobs have run.
|
||||
|
||||
### Much, much more
|
||||
|
||||
GNU Parallel is a powerful and flexible tool, with far more use cases than can fit into this article. Its man page provides examples of really cool things you can do with it, from remote execution over SSH to incorporating Bash functions into your Parallel commands. There's even an extensive demonstration series on [YouTube][2], so you can learn from the GNU Parallel team directly. The GNU Parallel lead maintainer has also just released the command's official guide, available from [Lulu.com][3].
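
To give a small taste of those (check `man parallel` for the exact behavior of your version; the hostnames below are placeholders):

```
$ parallel --jobs 50% gzip ::: *.log            # use only half of the available CPU cores
$ parallel -S server1,server2 echo ::: a b c    # spread jobs over SSH across two machines
```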
|
||||
|
||||
GNU Parallel has the power to change the way you compute, and if it doesn't do that, it will at the very least change the time your computer spends computing. Try it today!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/gnu-parallel
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[1]:https://www.gnu.org/software/parallel
|
||||
[2]:https://www.youtube.com/watch?v=OpaiGYxkSuQ&list=PL284C9FF2488BC6D1
|
||||
[3]:http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html
|
129
sources/tech/20180510 How To Display Images In The Terminal.md
Normal file
@ -0,0 +1,129 @@
|
||||
Translating KevinSJ -- 05142018
|
||||
How To Display Images In The Terminal
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/05/fim-2-720x340.png)
|
||||
There are plenty of GUI picture viewers available for Linux, but I hadn’t heard of or used any application that displays pictures in the Terminal itself. Luckily, I have just found a CLI image viewer named **FIM** that can be used to display images in the Terminal. The FIM utility drew my attention because it is very lightweight compared to most GUI picture viewer applications. Without further ado, let us go ahead and see what it is capable of.
|
||||
|
||||
### Display Images In the Terminal Using FIM
|
||||
|
||||
**FIM** stands for **F**bi **IM**proved. For those who don’t know, **Fbi** is a Linux **f**rame**b**uffer **i**mage viewer. It uses the system’s framebuffer to display images directly from the command line. By default, it displays bmp, gif, jpeg, PhotoCD, png, ppm, tiff, and xwd files from the Terminal itself. For other formats, it will try to use ImageMagick’s convert.
|
||||
|
||||
FIM is based on Fbi. It is a highly customizable and scriptable image viewer targeted at users who are comfortable with software like the Vim text editor or the Mutt mail user agent. It displays images in full screen, and the images can be controlled (resized, flipped, zoomed and so on) using keyboard shortcuts. Unlike Fbi, the FIM utility is universal: it can open many file formats, and it can display pictures in the following video modes:
|
||||
|
||||
* Graphically, with the Linux framebuffer device.
|
||||
* Graphically, under X/Xorg, using the SDL library.
|
||||
* Graphically, under X/Xorg, using the Imlib2 library.
|
||||
* Rendered as ASCII Art in any textual console, using the AAlib library.
|
||||
|
||||
|
||||
|
||||
FIM is completely free and open source.
|
||||
|
||||
### Install FIM
|
||||
|
||||
The FIM image viewer is available in the default repositories of DEB-based systems such as Ubuntu and Linux Mint, so you can install it using the command:
|
||||
```
|
||||
$ sudo apt-get install fim
|
||||
|
||||
```
|
||||
|
||||
If it is not available in the default repositories of your Linux distribution, you can download, compile and install from source as shown below.
|
||||
```
|
||||
wget http://download.savannah.nongnu.org/releases/fbi-improved/fim-0.6-trunk.tar.gz
|
||||
wget http://download.savannah.nongnu.org/releases/fbi-improved/fim-0.6-trunk.tar.gz.sig
|
||||
gpg --search 'dezperado autistici org'
|
||||
# import the key from a trusted keyserver by following on screen instructions
|
||||
gpg --verify fim-0.6-trunk.tar.gz.sig
|
||||
|
||||
tar xzf fim-0.6-trunk.tar.gz
|
||||
cd fim-0.6-trunk
|
||||
./configure --help=short
|
||||
# read the ./configure --help=short output: you can give options to ./configure
|
||||
./configure
|
||||
make
|
||||
su -c "make install"
|
||||
|
||||
```
|
||||
|
||||
### FIM Usage
|
||||
|
||||
Once installed, you can display an image with the “auto zoom” option using the command:
|
||||
```
|
||||
$ fim -a dog.jpg
|
||||
|
||||
```
|
||||
|
||||
Here is the sample output from my Ubuntu box.
|
||||
|
||||
![][1]
|
||||
|
||||
As you can see in the above screenshot, FIM didn’t use any external GUI picture viewers. Instead, it uses our system’s framebuffer to display the image.
|
||||
|
||||
If you have multiple .jpg files in the current directory, you can use a wildcard to open all of them, as shown below.
|
||||
```
|
||||
$ fim -a *.jpg
|
||||
|
||||
```
|
||||
|
||||
To open all images in a directory, for example **Pictures**, run:
|
||||
```
|
||||
$ fim Pictures/
|
||||
|
||||
```
|
||||
|
||||
We can also open images recursively in a folder and its sub-folders, sorting the list, as shown below.
|
||||
```
|
||||
$ fim -R Pictures/ --sort
|
||||
|
||||
```
|
||||
|
||||
To render the image in ASCII format, you can use the **-t** flag.
|
||||
```
|
||||
$ fim -t dog.jpg
|
||||
|
||||
```
|
||||
|
||||
To quit Fim, press **ESC** or **q**.
|
||||
|
||||
**Keyboard shortcuts**
|
||||
|
||||
You can use various keyboard shortcuts to manage the images. For example, to load the next or previous image, press the PgUp/PgDown keys. To zoom in or out, use the +/- keys. Here are the common keys used to control images in FIM.
|
||||
|
||||
* **PageUp/Down** : Prev/Next image
|
||||
* **+/-** : Zoom in/out
|
||||
* **a** : Autoscale
|
||||
* **w** : Fit to width
|
||||
* **h** : Fit to height
|
||||
* **j/k** : Pan down/up
|
||||
* **f/m** : flip/mirror
|
||||
  * **r/R** : Rotate (clockwise and anti-clockwise)
|
||||
* **ESC/q** : Quit
|
||||
|
||||
|
||||
|
||||
For complete details, refer to the man page:
|
||||
```
|
||||
$ man fim
|
||||
|
||||
```
|
||||
|
||||
And that’s all for now. Hope this helps. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-display-images-in-the-terminal/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:http://www.ostechnix.com/wp-content/uploads/2018/05/fim-1.png
|
@ -0,0 +1,84 @@
|
||||
3 useful things you can do with the IP tool in Linux
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
|
||||
|
||||
It has been more than a decade since the `ifconfig` command was deprecated on Linux in favor of the `iproute2` project, which contains the magical tool `ip`. Many online tutorials still refer to old command-line tools like `ifconfig`, `route`, and `netstat`. The goal of this tutorial is to share some of the simple networking-related things you can do easily using the `ip` tool instead.
|
||||
|
||||
### Find your IP address
|
||||
```
|
||||
[dneary@host]$ ip addr show
|
||||
|
||||
[snip]
|
||||
|
||||
44: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
|
||||
|
||||
link/ether 5c:e0:c5:c7:f0:f1 brd ff:ff:ff:ff:ff:ff
|
||||
|
||||
inet 10.16.196.113/23 brd 10.16.197.255 scope global dynamic wlp4s0
|
||||
|
||||
valid_lft 74830sec preferred_lft 74830sec
|
||||
|
||||
inet6 fe80::5ee0:c5ff:fec7:f0f1/64 scope link
|
||||
|
||||
valid_lft forever preferred_lft forever
|
||||
|
||||
```
|
||||
|
||||
`ip addr show` will show you a lot of information about all of your network link devices. In this case, my wireless Ethernet card (wlp4s0) has the IPv4 address (the `inet` field) `10.16.196.113/23`. The `/23` means that 23 of the 32 bits in the IP address are shared by all of the IP addresses in this subnet. IP addresses in the subnet range from `10.16.196.0` to `10.16.197.254`. The broadcast address for the subnet (the `brd` field after the IP address), `10.16.197.255`, is reserved for broadcast traffic to all hosts on the subnet.
|
||||
|
||||
We can show only the information about a single device using `ip addr show dev wlp4s0`, for example.
|
||||
|
||||
### Display your routing table
|
||||
```
|
||||
[dneary@host]$ ip route list
|
||||
|
||||
default via 10.16.197.254 dev wlp4s0 proto static metric 600
|
||||
|
||||
10.16.196.0/23 dev wlp4s0 proto kernel scope link src 10.16.196.113 metric 601
|
||||
|
||||
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
|
||||
|
||||
```
|
||||
|
||||
The routing table is the local host's way of helping network traffic figure out where to go. It contains a set of signposts, sending traffic to a specific interface, and a specific next waypoint on its journey.
|
||||
|
||||
If you run any virtual machines or containers, these will get their own IP addresses and subnets, which can make these routing tables quite complicated, but in a single host, there are typically two instructions. For local traffic, send it out onto the local Ethernet, and the network switches will figure out (using a protocol called ARP) which host owns the destination IP address, and thus where the traffic should be sent. For traffic to the internet, send it to the local gateway node, which will have a better idea how to get to the destination.
|
||||
|
||||
In the situation above, the first line represents the external gateway for external traffic, the second line is for local traffic, and the third is reserved for a virtual bridge for VMs running on the host, but this link is not currently active.
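
If you want to know which of these signposts a particular destination would follow, `ip route get` asks the kernel to resolve it for you (the address below is just an example). The output shows the chosen gateway, the outgoing interface, and the source address that would be used:

```
$ ip route get 8.8.8.8
```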
|
||||
|
||||
### Monitor your network configuration
|
||||
```
|
||||
[dneary@host]$ ip monitor all
|
||||
|
||||
[dneary@host]$ ip -s link list wlp4s0
|
||||
|
||||
```
|
||||
|
||||
The `ip monitor` command can be used to monitor changes in routing tables, network addressing on network interfaces, or changes in ARP tables on the local host. This command can be particularly useful in debugging network issues related to containers and networking, when two VMs should be able to communicate with each other but cannot.
|
||||
|
||||
When used with `all`, `ip monitor` will report all changes, prefixed with one of `[LINK]` (network interface changes), `[ROUTE]` (changes to a routing table), `[ADDR]` (IP address changes), or `[NEIGH]` (nothing to do with horses; changes related to ARP addresses of neighbors).
|
||||
|
||||
You can also monitor changes on specific objects (for example, a specific routing table or an IP address).
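
For example, to watch only routing-table changes or only address changes, pass the object name to `ip monitor`:

```
$ ip monitor route
$ ip monitor address
```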
|
||||
|
||||
Another useful option that works with many commands is `ip -s`, which gives some statistics. Adding a second `-s` option adds even more statistics. `ip -s link list wlp4s0` above will give lots of information about packets received and transmitted, with the number of packets dropped, errors detected, and so on.
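
For example, the doubled flag looks like this:

```
$ ip -s -s link list wlp4s0
```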
|
||||
|
||||
### Handy tip: Shorten your commands
|
||||
|
||||
In general, for the `ip` tool, you need to include only enough letters to uniquely identify what you want to do. Instead of `ip monitor`, you can use `ip mon`. Instead of `ip addr list`, you can use `ip a l`, and you can use `ip r` in place of `ip route`. `ip link list` can be shortened to `ip l ls`. To read about the many options you can use to change the behavior of a command, visit the [ip manpage][1].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/useful-things-you-can-do-with-IP-tool-Linux
|
||||
|
||||
作者:[Dave Neary][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/dneary
|
||||
[1]:https://www.systutorials.com/docs/linux/man/8-ip-route/
|
357
sources/tech/20180511 Looking at the Lispy side of Perl.md
Normal file
@ -0,0 +1,357 @@
|
||||
Looking at the Lispy side of Perl
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)
|
||||
Some programming languages (e.g., C) have named functions only, whereas others (e.g., Lisp, Java, and Perl) have both named and unnamed functions. A lambda is an unnamed function, with Lisp as the language that popularized the term. Lambdas have various uses, but they are particularly well-suited for data-rich applications. Consider this depiction of a data pipeline, with two processing stages shown:
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/data_source.png?itok=OON2cC2R)
|
||||
|
||||
### Lambdas and higher-order functions
|
||||
|
||||
The filter and transform stages can be implemented as higher-order functions—that is, functions that can take a function as an argument. Suppose that the depicted pipeline is part of an accounts-receivable application. The filter stage could consist of a function named `filter_data`, whose single argument is another function—for example, a `high_buyers` function that filters out amounts that fall below a threshold. The transform stage might convert amounts in U.S. dollars to equivalent amounts in euros or some other currency, depending on the function plugged in as the argument to the higher-order `transform_data` function. Changing the filter or the transform behavior requires only plugging in a different function argument to the higher order `filter_data` or `transform_data` functions.
|
||||
|
||||
Lambdas serve nicely as arguments to higher-order functions for two reasons. First, lambdas can be crafted on the fly, and even written in place as arguments. Second, lambdas encourage the coding of pure functions, which are functions whose behavior depends solely on the argument(s) passed in; such functions have no side effects and thereby promote safe concurrent programs.
|
||||
|
||||
Perl has a straightforward syntax and semantics for lambdas and higher-order functions, as shown in the following example:
|
||||
|
||||
### A first look at lambdas in Perl
|
||||
|
||||
```
|
||||
#!/usr/bin/perl
|
||||
|
||||
use strict;
|
||||
use warnings;
|
||||
|
||||
## References to lambdas that increment, decrement, and do nothing.
|
||||
## $_[0] is the argument passed to each lambda.
|
||||
my $inc = sub { $_[0] + 1 }; ## could use 'return $_[0] + 1' for clarity
|
||||
my $dec = sub { $_[0] - 1 }; ## ditto
|
||||
my $nop = sub { $_[0] }; ## ditto
|
||||
|
||||
sub trace {
|
||||
my ($val, $func, @rest) = @_;
|
||||
print $val, " ", $func, " ", @rest, "\nHit RETURN to continue...\n";
|
||||
<STDIN>;
|
||||
}
|
||||
|
||||
## Apply an operation to a value. The base case occurs when there are
|
||||
## no further operations in the list named @rest.
|
||||
sub apply {
|
||||
my ($val, $first, @rest) = @_;
|
||||
trace($val, $first, @rest) if 1; ## 0 to stop tracing
|
||||
|
||||
return ($val, apply($first->($val), @rest)) if @rest; ## recursive case
|
||||
return ($val, $first->($val)); ## base case
|
||||
}
|
||||
|
||||
my $init_val = 0;
|
||||
my @ops = ( ## list of lambda references
|
||||
$inc, $dec, $dec, $inc,
|
||||
$inc, $inc, $inc, $dec,
|
||||
$nop, $dec, $dec, $nop,
|
||||
$nop, $inc, $inc, $nop
|
||||
);
|
||||
|
||||
## Execute.
|
||||
print join(' ', apply($init_val, @ops)), "\n";
|
||||
## Final line of output: 0 1 0 -1 0 1 2 3 2 2 1 0 0 0 1 2 2
|
||||
```
|
||||
|
||||
The lispy program shown above highlights the basics of Perl lambdas and higher-order functions. Named functions in Perl start with the keyword `sub` followed by a name:
|
||||
```
|
||||
sub increment { ... } # named function
|
||||
|
||||
```
|
||||
|
||||
An unnamed or anonymous function omits the name:
|
||||
```
|
||||
sub {...} # lambda, or unnamed function
|
||||
|
||||
```
|
||||
|
||||
In the lispy example, there are three lambdas, and each has a reference to it for convenience. Here, for review, is the `$inc` reference and the lambda referred to:
|
||||
```
|
||||
my $inc = sub { $_[0] + 1 };
|
||||
|
||||
```
|
||||
|
||||
The lambda itself, the code block to the right of the assignment operator `=`, increments its argument `$_[0]` by 1. The lambda’s body is written in Lisp style; that is, without either an explicit `return` or a semicolon after the incrementing expression. In Perl, as in Lisp, the value of the last expression in a function’s body becomes the returned value if there is no explicit `return` statement. In this example, each lambda has only one expression in its body—a simplification that befits the spirit of lambda programming.
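As a quick aside (this snippet is not part of the lispy listing; the names are illustrative only), the implicit-return rule means the following two lambdas behave identically:

```
#!/usr/bin/perl
use strict;
use warnings;

## Two equivalent squaring lambdas: the first relies on Perl returning the
## value of the last evaluated expression, the second spells the return out.
my $square_implicit = sub { $_[0] * $_[0] };
my $square_explicit = sub { return $_[0] * $_[0]; };

print $square_implicit->(4), " ", $square_explicit->(4), "\n"; ## prints: 16 16
```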
|
||||
|
||||
The `trace` function in the lispy program helps to clarify how the program works (as I'll illustrate below). The higher-order function `apply`, a nod to a Lisp function of the same name, takes a numeric value as its first argument and a list of lambda references as its second argument. The `apply` function is called initially, at the bottom of the program, with zero as the first argument and the list named `@ops` as the second argument. This list consists of 16 lambda references from among `$inc` (increment a value), `$dec` (decrement a value), and `$nop` (do nothing). The list could contain the lambdas themselves, but the code is easier to write and to understand with the more concise lambda references.
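As a side note, and not something the lispy program itself does, the final value of such a pipeline could also be computed with the core `List::Util::reduce` higher-order function; unlike `apply`, though, `reduce` keeps only the end result rather than the whole list of intermediate values. A minimal sketch with its own short list of lambdas:

```
use strict;
use warnings;
use List::Util qw(reduce);

my $inc = sub { $_[0] + 1 };
my $dec = sub { $_[0] - 1 };

## Fold a list of lambdas over a starting value: $a holds the running value,
## $b holds the next lambda reference taken from the list.
my @ops   = ($inc, $inc, $dec);
my $final = reduce { $b->($a) } 0, @ops;
print "$final\n"; ## prints: 1
```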
|
||||
|
||||
The logic of the higher-order `apply` function can be clarified as follows:
|
||||
|
||||
1. The argument list passed to `apply` in typical Perl fashion is separated into three pieces:
|
||||
```
|
||||
my ($val, $first, @rest) = @_; ## break the argument list into three elements
|
||||
|
||||
```
|
||||
|
||||
The first element `$val` is a numeric value, initially `0`. The second element `$first` is a lambda reference, one of `$inc`, `$dec`, or `$nop`. The third element `@rest` is a list of any remaining lambda references after the first such reference is extracted as `$first`.
|
||||
|
||||
2. If the list `@rest` is not empty after its first element is removed, then `apply` is called recursively. The two arguments to the recursively invoked `apply` are:
|
||||
|
||||
* The value generated by applying lambda operation `$first` to numeric value `$val`. For example, if `$first` is the incrementing lambda to which `$inc` refers, and `$val` is 2, then the new first argument to `apply` would be 3.
|
||||
* The list of remaining lambda references. Eventually, this list becomes empty because each call to `apply` shortens the list by extracting its first element.
|
||||
|
||||
|
||||
|
||||
Here is some output from a sample run of the lispy program, with `%` as the command-line prompt:
|
||||
```
|
||||
% ./lispy.pl
|
||||
|
||||
0 CODE(0x8f6820) CODE(0x8f68c8)CODE(0x8f68c8)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)...
|
||||
Hit RETURN to continue...
|
||||
|
||||
1 CODE(0x8f68c8) CODE(0x8f68c8)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)...
|
||||
Hit RETURN to continue
|
||||
```
|
||||
|
||||
The first output line can be clarified as follows:
|
||||
|
||||
* The `0` is the numeric value passed as an argument in the initial (and thus non-recursive) call to function `apply`. The argument name is `$val` in `apply`.
|
||||
* The `CODE(0x8f6820)` is a reference to one of the lambdas, in this case the lambda to which `$inc` refers. The second argument is thus the address of some lambda code. The argument name is `$first` in `apply`.
|
||||
* The third piece, the series of `CODE` references, is the list of lambda references beyond the first. The argument name is `@rest` in `apply`.
|
||||
|
||||
|
||||
|
||||
The second line of output shown above also deserves a look. The numeric value is now `1`, the result of incrementing `0`: the initial lambda is `$inc` and the initial value is `0`. The extracted reference `CODE(0x8f68c8)` is now `$first`, as this reference is the first element in the `@rest` list after `$inc` has been extracted earlier.
|
||||
|
||||
Eventually, the `@rest` list becomes empty, which ends the recursive calls to `apply`. In this case, the function `apply` simply returns a list with two elements:
|
||||
|
||||
1. The numeric value taken in as an argument (in the sample run, 2).
|
||||
2. This argument transformed by the lambda (also 2 because the last lambda reference happens to be `$nop` for do nothing).
|
||||
|
||||
|
||||
|
||||
The lispy example underscores that Perl supports lambdas without any special fussy syntax: A lambda is just an unnamed code block, perhaps with a reference to it for convenience. Lambdas themselves, or references to them, can be passed straightforwardly as arguments to higher-order functions such as `apply` in the lispy example. Invoking a lambda through a reference is likewise straightforward. In the `apply` function, the call is:
|
||||
```
|
||||
$first->($val) ## $first is a lambda reference, $val a numeric argument passed to the lambda
|
||||
|
||||
```
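Because lambdas can be written in place, the reference need not even be stored in a variable. A small sketch (assuming the lispy listing above is in scope; the doubling lambda here is purely illustrative):

```
## Pass an in-place lambda, together with $inc, straight to apply():
my @result = apply(3, sub { $_[0] * 2 }, $inc);
print join(' ', @result), "\n"; ## final print: 3 6 7 (trace pauses for RETURN along the way)
```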
|
||||
|
||||
### A richer code example
|
||||
|
||||
The next code example puts a lambda and a higher-order function to practical use. The example implements Conway’s Game of Life, a cellular automaton that can be represented as a matrix of cells. Such a matrix goes through various transformations, each yielding a new generation of cells. The Game of Life is fascinating because even relatively simple initial configurations can lead to quite complex behavior. A quick look at the rules governing cell birth, survival, and death is in order.
|
||||
|
||||
Consider this 5x5 matrix, with a star representing a live cell and a dash representing a dead one:
|
||||
```
|
||||
----- ## initial configuration
|
||||
--*--
|
||||
--*--
|
||||
--*--
|
||||
-----
|
||||
```
|
||||
|
||||
The next generation becomes:
|
||||
```
|
||||
----- ## next generation
|
||||
-----
|
||||
-***-
|
||||
-----
|
||||
-----
|
||||
```
|
||||
|
||||
As life continues, the generations oscillate between these two configurations.
|
||||
|
||||
Here are the rules determining birth, death, and survival for a cell. A given cell has between three neighbors (a corner cell) and eight neighbors (an interior cell):
|
||||
|
||||
* A dead cell with exactly three live neighbors comes to life.
|
||||
* A live cell with more than three live neighbors dies from over-crowding.
|
||||
* A live cell with two or three live neighbors survives; hence, a live cell with fewer than two live neighbors dies from loneliness.
|
||||
|
||||
|
||||
|
||||
In the initial configuration shown above, the top and bottom live cells die because neither has two or three live neighbors. By contrast, the middle live cell in the initial configuration gains two live neighbors, one on either side, in the next generation.
|
||||
|
||||
### Conway’s Game of Life
|
||||
```
|
||||
#!/usr/bin/perl
|
||||
|
||||
### A simple implementation of Conway's game of life.
|
||||
# Usage: ./gol.pl [input file] ;; If no file name given, DefaultInfile is used.
|
||||
|
||||
use constant Dead => "-";
|
||||
use constant Alive => "*";
|
||||
use constant DefaultInfile => 'conway.in';
|
||||
|
||||
use strict;
|
||||
use warnings;
|
||||
|
||||
my $dimension = undef;
|
||||
my @matrix = ();
|
||||
my $generation = 1;
|
||||
|
||||
sub read_data {
|
||||
my $datafile = DefaultInfile;
|
||||
$datafile = shift @ARGV if @ARGV;
|
||||
die "File $datafile does not exist.\n" if !-f $datafile;
|
||||
open(INFILE, "<$datafile");
|
||||
|
||||
## Check 1st line for dimension;
|
||||
$dimension = <INFILE>;
|
||||
die "1st line of input file $datafile not an integer.\n" if $dimension !~ /\d+/;
|
||||
|
||||
my $record_count = 0;
|
||||
while (<INFILE>) {
|
||||
chomp($_);
|
||||
last if $record_count++ == $dimension;
|
||||
die "$_: bad input record -- incorrect length\n" if length($_) != $dimension;
|
||||
my @cells = split(//, $_);
|
||||
push @matrix, @cells;
|
||||
}
|
||||
close(INFILE);
|
||||
draw_matrix();
|
||||
}
|
||||
|
||||
sub draw_matrix {
|
||||
my $n = $dimension * $dimension;
|
||||
print "\n\tGeneration $generation\n";
|
||||
for (my $i = 0; $i < $n; $i++) {
|
||||
print "\n\t" if ($i % $dimension) == 0;
|
||||
print $matrix[$i];
|
||||
}
|
||||
print "\n\n";
|
||||
$generation++;
|
||||
}
|
||||
|
||||
sub has_left_neighbor {
|
||||
my ($ind) = @_;
|
||||
return ($ind % $dimension) != 0;
|
||||
}
|
||||
|
||||
sub has_right_neighbor {
|
||||
my ($ind) = @_;
|
||||
return (($ind + 1) % $dimension) != 0;
|
||||
}
|
||||
|
||||
sub has_up_neighbor {
|
||||
my ($ind) = @_;
|
||||
return (int($ind / $dimension)) != 0;
|
||||
}
|
||||
|
||||
sub has_down_neighbor {
|
||||
my ($ind) = @_;
|
||||
return (int($ind / $dimension) + 1) != $dimension;
|
||||
}
|
||||
|
||||
sub has_left_up_neighbor {
|
||||
my ($ind) = @_;
|
||||
return has_left_neighbor($ind) && has_up_neighbor($ind);
|
||||
}
|
||||
|
||||
sub has_right_up_neighbor {
|
||||
my ($ind) = @_;
|
||||
return has_right_neighbor($ind) && has_up_neighbor($ind);
|
||||
}
|
||||
|
||||
sub has_left_down_neighbor {
|
||||
my ($ind) = @_;
|
||||
return has_left_neighbor($ind) && has_down_neighbor($ind);
|
||||
}
|
||||
|
||||
sub has_right_down_neighbor {
|
||||
my ($ind) = @_;
|
||||
return has_right_neighbor($ind) && has_down_neighbor($ind);
|
||||
}
|
||||
|
||||
sub compute_cell {
|
||||
my ($ind) = @_;
|
||||
my @neighbors;
|
||||
|
||||
# 8 possible neighbors
|
||||
push(@neighbors, $ind - 1) if has_left_neighbor($ind);
|
||||
push(@neighbors, $ind + 1) if has_right_neighbor($ind);
|
||||
push(@neighbors, $ind - $dimension) if has_up_neighbor($ind);
|
||||
push(@neighbors, $ind + $dimension) if has_down_neighbor($ind);
|
||||
push(@neighbors, $ind - $dimension - 1) if has_left_up_neighbor($ind);
|
||||
push(@neighbors, $ind - $dimension + 1) if has_right_up_neighbor($ind);
|
||||
push(@neighbors, $ind + $dimension - 1) if has_left_down_neighbor($ind);
|
||||
push(@neighbors, $ind + $dimension + 1) if has_right_down_neighbor($ind);
|
||||
|
||||
my $count = 0;
|
||||
foreach my $n (@neighbors) {
|
||||
$count++ if $matrix[$n] eq Alive;
|
||||
}
|
||||
|
||||
return Alive if ($matrix[$ind] eq Alive) && (($count == 2) || ($count == 3)); ## survival
|
||||
return Alive if ($matrix[$ind] eq Dead) && ($count == 3); ## birth
|
||||
return Dead; ## death
|
||||
}
|
||||
|
||||
sub again_or_quit {
|
||||
print "RETURN to continue, 'q' to quit.\n";
|
||||
my $flag = <STDIN>;
|
||||
chomp($flag);
|
||||
return ($flag eq 'q') ? 1 : 0;
|
||||
}
|
||||
|
||||
sub animate {
|
||||
my @new_matrix;
|
||||
my $n = $dimension * $dimension - 1;
|
||||
|
||||
while (1) { ## loop until user signals stop
|
||||
@new_matrix = map {compute_cell($_)} (0..$n); ## generate next matrix
|
||||
|
||||
splice @matrix; ## empty current matrix
|
||||
push @matrix, @new_matrix; ## repopulate matrix
|
||||
draw_matrix(); ## display the current matrix
|
||||
|
||||
last if again_or_quit(); ## continue?
|
||||
splice @new_matrix; ## empty temp matrix
|
||||
}
|
||||
}
|
||||
|
||||
### Execute
|
||||
read_data(); ## read initial configuration from input file
|
||||
animate(); ## display and recompute the matrix until user tires
|
||||
```
|
||||
|
||||
The gol program (see [Conway’s Game of Life][1]) has almost 140 lines of code, but most of these involve reading the input file, displaying the matrix, and bookkeeping tasks such as determining the number of live neighbors for a given cell. Input files should be configured as follows:
|
||||
```
|
||||
5
|
||||
-----
|
||||
--*--
|
||||
--*--
|
||||
--*--
|
||||
-----
|
||||
```
|
||||
|
||||
The first record gives the matrix side, in this case 5 for a 5x5 matrix. The remaining rows are the contents, with stars for live cells and dashes for dead ones.
|
||||
|
||||
The code of primary interest resides in two functions, `animate` and `compute_cell`. The `animate` function constructs the next generation, and this function needs to call `compute_cell` on every cell in order to determine the cell’s new status as either alive or dead. How should the `animate` function be structured?
|
||||
|
||||
The `animate` function has a `while` loop that iterates until the user decides to terminate the program. Within this `while` loop the high-level logic is straightforward:
|
||||
|
||||
1. Create the next generation by iterating over the matrix cells, calling function `compute_cell` on each cell to determine its new status. At issue is how best to do the iteration. A loop nested inside the `while` loop would do, of course, but nested loops can be clunky. Another way is to use a higher-order function, as clarified shortly.
|
||||
2. Replace the current matrix with the new one.
|
||||
3. Display the next generation.
|
||||
4. Check if the user wants to continue: if so, continue; otherwise, terminate.
|
||||
|
||||
|
||||
|
||||
Here, for review, is the call to Perl’s higher-order `map` function, with the function’s name again a nod to Lisp. This call occurs as the first statement within the `while` loop in `animate`:
|
||||
```
|
||||
while (1) {
|
||||
@new_matrix = map {compute_cell($_)} (0..$n); ## generate next matrix
|
||||
```
|
||||
|
||||
The `map` function takes two arguments: an unnamed code block (a lambda!), and a list of values passed to this code block one at a time. In this example, the code block calls the `compute_cell` function with one of the matrix indexes, 0 through the matrix size - 1. Although the matrix is displayed as two-dimensional, it is implemented as a one-dimensional list.
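A freestanding sketch of the same idea, separate from the gol program, may make the shape of a `map` call easier to see:

```
use strict;
use warnings;

## map hands each element of the list to the code block (a lambda) in turn
## and collects the results into a new list.
my @squares = map { $_ * $_ } (0..4);
print "@squares\n"; ## prints: 0 1 4 9 16
```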
|
||||
|
||||
Higher-order functions such as `map` encourage the code brevity for which Perl is famous. My view is that such functions also make code easier to write and to understand, as they dispense with the required but messy details of loops. In any case, lambdas and higher-order functions make up the Lispy side of Perl.
|
||||
|
||||
If you're interested in more detail, I recommend Mark Jason Dominus's book, [Higher-Order Perl][2].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/looking-lispy-side-perl
|
||||
|
||||
作者:[Marty Kalin][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/mkalindepauledu
|
||||
[1]:https://trello-attachments.s3.amazonaws.com/575088ec94ca6ac38b49b30e/5ad4daf12f6b6a3ac2318d28/c0700c7379983ddf61f5ab5ab4891f0c/lispyPerl.html#gol (Conway’s Game of Life)
|
||||
[2]:https://www.elsevier.com/books/higher-order-perl/dominus/978-1-55860-701-9
|
@ -0,0 +1,179 @@
|
||||
MidnightBSD Could Be Your Gateway to FreeBSD
|
||||
======
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/midnight_4_0.jpg?itok=T2gpLVui)
|
||||
|
||||
[FreeBSD][1] is an open source operating system that descended from the famous [Berkeley Software Distribution][2]. The first version of FreeBSD was released in 1993 and is still going strong. Around 2007, Lucas Holt wanted to create a fork of FreeBSD that made use of the [GnuStep][3] implementation of the OpenStep (now Cocoa) Objective-C frameworks, widget toolkit, and application development tools. To that end, he began development of the MidnightBSD desktop distribution.
|
||||
|
||||
MidnightBSD (named after Lucas’s cat, Midnight) is still in active (albeit slow) development. The latest stable release (0.8.6) has been available since August, 2017. Although the BSD distributions aren’t what you might call user-friendly, getting up to speed on their installation is a great way to familiarize yourself with how to deal with an ncurses installation and with finalizing an install via the command line.
|
||||
|
||||
In the end, you’ll wind up with a desktop distribution of a very reliable fork of FreeBSD. It’ll take a bit of work, but if you’re a Linux user looking to stretch your skills… this is a good place to start.
|
||||
|
||||
I want to walk you through the process of installing MidnightBSD, adding a graphical desktop environment, and then installing applications.
|
||||
|
||||
### Installation
|
||||
|
||||
As I mentioned, this is an ncurses installation process, so there is no point-and-click to be found. Instead, you’ll be using your keyboard Tab and arrow keys. Once you’ve downloaded the [latest release][4], burn it to a CD/DVD or USB drive and boot your machine (or create a virtual machine in [VirtualBox][5]). The installer will open and give you three options (Figure 1). Select Install (using your keyboard arrow keys) and hit Enter.
|
||||
|
||||
|
||||
![MidnightBSD installer][7]
|
||||
|
||||
Figure 1: Launching the MidnightBSD installer.
|
||||
|
||||
[Used with permission][8]
|
||||
|
||||
At this point, there are quite a lot of screens to go through. Many of those screens are self-explanatory:
|
||||
|
||||
1. Set non-default key mapping (yes/no)
|
||||
|
||||
2. Set hostname
|
||||
|
||||
3. Add optional system components (documentation, games, 32-bit compatibility, system source code)
|
||||
|
||||
4. Partitioning hard drive
|
||||
|
||||
5. Administrator password
|
||||
|
||||
6. Configure networking interface
|
||||
|
||||
7. Select region (for timezone)
|
||||
|
||||
8. Enable services (such as secure shell)
|
||||
|
||||
9. Add users (Figure 2)
|
||||
|
||||
|
||||
|
||||
|
||||
![Adding a user][10]
|
||||
|
||||
Figure 2: Adding a user to the system.
|
||||
|
||||
[Used with permission][8]
|
||||
|
||||
After you’ve added the user(s) to the system, you will then be dropped to a window (Figure 3), where you can take care of anything you might have missed or you want to re-configure. If you don’t need to make any changes, select Exit, and your configurations will be applied.
|
||||
|
||||
In the next window, when prompted, select No, and the system will reboot. Once MidnightBSD reboots, you’re ready for the next phase of the installation.
|
||||
|
||||
### Post install
|
||||
|
||||
When your newly installed MidnightBSD boots, you’ll find yourself at a command prompt. At this point, there is no graphical interface to be found. To install applications, MidnightBSD relies on the mport tool. Let’s say you want to install the Xfce desktop environment. To do this, log into MidnightBSD and issue the following commands:
|
||||
```
|
||||
sudo mport index
|
||||
|
||||
sudo mport install xorg
|
||||
|
||||
```
|
||||
|
||||
You now have the Xorg window server installed, which will allow you to install the desktop environment. Installing Xfce is handled with the command:
|
||||
```
|
||||
sudo mport install xfce
|
||||
|
||||
```
|
||||
|
||||
Xfce is now installed. However, we must configure it so it can be launched with the startx command. To do this, let’s first install the nano editor. Issue the command:
|
||||
```
|
||||
sudo mport install nano
|
||||
|
||||
```
|
||||
|
||||
With nano installed, issue the command:
|
||||
```
|
||||
nano ~/.xinitrc
|
||||
|
||||
```
|
||||
|
||||
That file need only contain a single line:
|
||||
```
|
||||
exec startxfce4
|
||||
|
||||
```
|
||||
|
||||
Save and close that file. If you now issue the command startx, the Xfce desktop environment will start. You should start to feel a bit more at home (Figure 4).
|
||||
|
||||
![ Xfce][12]
|
||||
|
||||
Figure 4: The Xfce desktop interface is ready to serve.
|
||||
|
||||
[Used with permission][8]
|
||||
|
||||
Since you don’t want to always have to issue the command startx, you’ll want to enable the login daemon. However, it’s not installed. To install this subsystem, issue the command:
|
||||
```
|
||||
sudo mport install mlogind
|
||||
|
||||
```
|
||||
|
||||
When the installation completes, enable mlogind at boot by adding an entry to the /etc/rc.conf file. At the bottom of the rc.conf file, add the following:
|
||||
```
|
||||
mlogind_enable="YES"
|
||||
|
||||
```
|
||||
|
||||
Save and close that file. Now, when you boot (or reboot) the machine, you should be greeted by the graphical login screen. At the time of writing, after logging in, I wound up with a blank screen and the dreaded X cursor. Unfortunately, it seems there’s no fix for this at the moment. So, to gain access to your desktop environment, you must make use of the startx command.
|
||||
|
||||
### Installing
|
||||
|
||||
Out of the box, you won’t find much in the way of applications. If you attempt to install applications (using mport), you’ll quickly find yourself frustrated, as very few applications can be found. To get around this, we need to check out the list of available mport software, using the svnlite command. Go back to the terminal window and issue the command:
|
||||
```
|
||||
svnlite co http://svn.midnightbsd.org/svn/mports/trunk mports
|
||||
|
||||
```
|
||||
|
||||
Once you do that, you should see a new directory named ~/mports. Change into that directory (with the command cd ~/mports). Issue the ls command and you should see a number of categories (Figure 5).
|
||||
|
||||
![applications][14]
|
||||
|
||||
Figure 5: The categories of applications now available for mport.
|
||||
|
||||
[Used with permission][8]
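If you are not sure which category holds a given package, a quick search of the checked-out tree helps. A short sketch, assuming the tree was checked out to ~/mports as above:

```
ls ~/mports                                 # list the top-level categories
ls ~/mports/www | grep -i firefox           # look for Firefox-related ports
find ~/mports -type d -iname '*abiword*'    # search every category at once
```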
|
||||
|
||||
Say you want to install Firefox. If you look in the www directory, you’ll see a listing for linux-firefox. Issue the command:
|
||||
```
|
||||
sudo mport install linux-firefox
|
||||
|
||||
```
|
||||
|
||||
You should now see an entry for Firefox in the Xfce desktop menu. Go through all of the categories and install all of the software you need, using the mport command.
|
||||
|
||||
### A sad caveat
|
||||
|
||||
One sad little caveat is that the only version of an office suite to be found for mport (via svnlite) is OpenOffice 3. That’s quite out of date. And although Abiword is found in the ~/mports/editors directory, it seems it’s not available for installation. Even after installing OpenOffice 3, it errors out with an Exec format error. In other words, you won’t be doing much in the way of office productivity with MidnightBSD. But, hey, if you have an old Palm Pilot lying around, you can always install pilot-link. In other words, the available software doesn’t make for an incredibly useful desktop distribution… at least not for the average user. However, if you want to develop on MidnightBSD, you’ll find plenty of available tools, ready to install (check out the ~/mports/devel directory). You could even install Drupal with the command:
|
||||
|
||||
```
sudo mport install drupal7
```
|
||||
|
||||
Of course, after that you’ll need to create a database (MySQL is already installed), install Apache (sudo mport install apache24) and configure the necessary Apache directives.
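A sketch of those follow-up steps is below; note that the apache24_enable rc.conf variable is an assumption carried over from FreeBSD's apache24 port and may differ on MidnightBSD:

```
sudo mport install apache24

# Assumption: the port installs a FreeBSD-style rc script, so enabling it at
# boot mirrors the mlogind step above; add this line to /etc/rc.conf:
#   apache24_enable="YES"
```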
|
||||
|
||||
Clearly, what is installed and what can be installed is a bit of a hodgepodge of applications, systems, and servers. But with enough work, you could wind up with a distribution that could serve a specific purpose.
|
||||
|
||||
### Enjoy the *BSD Goodness
|
||||
|
||||
And that is how you can get MidnightBSD up and running as a somewhat useful desktop distribution. It’s not as quick and easy as many other Linux distributions, but if you want a distribution that’ll make you think, this could be exactly what you’re looking for. Although much of the competition has many more software titles ready for installation, MidnightBSD is certainly an interesting challenge that every Linux enthusiast or admin should try.
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux"][15] course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2018/5/midnightbsd-could-be-your-gateway-freebsd
|
||||
|
||||
作者:[Jack Wallen][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/jlwallen
|
||||
[1]:https://www.freebsd.org/
|
||||
[2]:https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
|
||||
[3]:https://en.wikipedia.org/wiki/GNUstep
|
||||
[4]:http://www.midnightbsd.org/download/
|
||||
[5]:https://www.virtualbox.org/
|
||||
[6]:/files/images/midnight1jpg
|
||||
[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/midnight_1.jpg?itok=BRfGIEk_ (MidnightBSD installer)
|
||||
[8]:/licenses/category/used-permission
|
||||
[9]:/files/images/midnight2jpg
|
||||
[10]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/midnight_2.jpg?itok=xhxHlNJr (Adding a user)
|
||||
[11]:/files/images/midnight4jpg
|
||||
[12]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/midnight_4.jpg?itok=DNqA47s_ ( Xfce)
|
||||
[13]:/files/images/midnight5jpg
|
||||
[14]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/midnight_5.jpg?itok=LpavDHQP (applications)
|
||||
[15]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -0,0 +1,309 @@
|
||||
How To Check Laptop Battery Status In Terminal In Linux
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2016/12/Check-Laptop-Battery-Status-In-Terminal-In-Linux-720x340.png)
|
||||
Finding your Laptop battery status in GUI mode is easy. You could easily tell the battery level by hovering the mouse pointer over the battery indicator icon in the task bar. But, how about from the command line? Not everyone knows this. The other day a friend of mine asked how to check his Laptop battery level from Terminal in his Ubuntu desktop – hence this post. Here I have included three simple methods which will help you to check Laptop battery status in Terminal in any Linux distribution.
|
||||
|
||||
### Check Laptop Battery Status In Terminal In Linux
|
||||
|
||||
We can find the Laptop battery status from the command line using three methods.
|
||||
|
||||
##### Method 1 – Using “Upower” command
|
||||
|
||||
The **Upower** command comes preinstalled with most Linux distributions. To display the battery status using Upower, open up the Terminal and run:
|
||||
```
|
||||
$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
native-path: BAT0
|
||||
vendor: Samsung SDI
|
||||
model: DELL 7XFJJA2
|
||||
serial: 4448
|
||||
power supply: yes
|
||||
updated: Sat 12 May 2018 06:48:48 PM IST (41 seconds ago)
|
||||
has history: yes
|
||||
has statistics: yes
|
||||
battery
|
||||
present: yes
|
||||
rechargeable: yes
|
||||
state: charging
|
||||
warning-level: none
|
||||
energy: 43.3011 Wh
|
||||
energy-empty: 0 Wh
|
||||
energy-full: 44.5443 Wh
|
||||
energy-full-design: 48.84 Wh
|
||||
energy-rate: 9.8679 W
|
||||
voltage: 12.548 V
|
||||
time to full: 7.6 minutes
|
||||
percentage: 97%
|
||||
capacity: 91.2045%
|
||||
technology: lithium-ion
|
||||
icon-name: 'battery-full-charging-symbolic'
|
||||
History (charge):
|
||||
1526131128 97.000 charging
|
||||
History (rate):
|
||||
1526131128 9.868 charging
|
||||
|
||||
```
|
||||
|
||||
As you see above, my battery is in charging mode now and the battery level is 97%.
|
||||
|
||||
If the above command doesn’t work for any reason, try the following command instead:
|
||||
```
|
||||
$ upower -i `upower -e | grep 'BAT'`
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
native-path: BAT0
|
||||
vendor: Samsung SDI
|
||||
model: DELL 7XFJJA2
|
||||
serial: 4448
|
||||
power supply: yes
|
||||
updated: Sat 12 May 2018 06:50:49 PM IST (22 seconds ago)
|
||||
has history: yes
|
||||
has statistics: yes
|
||||
battery
|
||||
present: yes
|
||||
rechargeable: yes
|
||||
state: charging
|
||||
warning-level: none
|
||||
energy: 43.6119 Wh
|
||||
energy-empty: 0 Wh
|
||||
energy-full: 44.5443 Wh
|
||||
energy-full-design: 48.84 Wh
|
||||
energy-rate: 8.88 W
|
||||
voltage: 12.552 V
|
||||
time to full: 6.3 minutes
|
||||
percentage: 97%
|
||||
capacity: 91.2045%
|
||||
technology: lithium-ion
|
||||
icon-name: 'battery-full-charging-symbolic'
|
||||
History (rate):
|
||||
1526131249 8.880 charging
|
||||
|
||||
```
|
||||
|
||||
Upower displays not just the battery status, but also the complete details of the installed battery, such as model, vendor name, serial number, state, voltage, etc.
|
||||
|
||||
However, you can display only the status of the battery by combining the upower and [**grep**][1] commands, as shown below.
|
||||
```
|
||||
$ upower -i $(upower -e | grep BAT) | grep --color=never -E "state|to\ full|to\ empty|percentage"
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
state: fully-charged
|
||||
percentage: 100%
|
||||
|
||||
```
|
||||
|
||||
![][3]
|
||||
|
||||
As you see in the above output, my Laptop battery has been fully charged.
|
||||
|
||||
For more details, refer to the man pages.
|
||||
```
|
||||
$ man upower
|
||||
|
||||
```
|
||||
|
||||
##### Method 2 – Using “acpi” command
|
||||
|
||||
The **acpi** command shows battery status and other ACPI information in your Linux distribution.
|
||||
|
||||
You might need to install the **acpi** command in some Linux distributions.
|
||||
|
||||
To install acpi on Debian, Ubuntu and its derivatives:
|
||||
```
|
||||
$ sudo apt-get install acpi
|
||||
|
||||
```
|
||||
|
||||
On RHEL, CentOS, Fedora:
|
||||
```
|
||||
$ sudo yum install acpi
|
||||
|
||||
```
|
||||
|
||||
Or,
|
||||
```
|
||||
$ sudo dnf install acpi
|
||||
|
||||
```
|
||||
|
||||
On Arch Linux and its derivatives:
|
||||
```
|
||||
$ sudo pacman -S acpi
|
||||
|
||||
```
|
||||
|
||||
Once acpi is installed, run the following command:
|
||||
```
|
||||
$ acpi -V
|
||||
|
||||
```
|
||||
|
||||
**Note:** Here, “V” is a capital letter.
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Battery 0: Charging, 99%, 00:02:09 until charged
|
||||
Battery 0: design capacity 4400 mAh, last full capacity 4013 mAh = 91%
|
||||
Battery 1: Discharging, 0%, rate information unavailable
|
||||
Adapter 0: on-line
|
||||
Thermal 0: ok, 77.5 degrees C
|
||||
Thermal 0: trip point 0 switches to mode critical at temperature 84.0 degrees C
|
||||
Cooling 0: Processor 0 of 3
|
||||
Cooling 1: Processor 0 of 3
|
||||
Cooling 2: LCD 0 of 15
|
||||
Cooling 3: Processor 0 of 3
|
||||
Cooling 4: Processor 0 of 3
|
||||
Cooling 5: intel_powerclamp no state information available
|
||||
Cooling 6: x86_pkg_temp no state information available
|
||||
|
||||
```
|
||||
|
||||
Let us check only the charge state of the battery. To do so, run:
|
||||
```
|
||||
$ acpi
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Battery 0: Charging, 99%, 00:01:41 until charged
|
||||
Battery 1: Discharging, 0%, rate information unavailable
|
||||
|
||||
```
|
||||
|
||||
Let us check the battery temperature:
|
||||
```
|
||||
$ acpi -t
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Thermal 0: ok, 63.5 degrees C
|
||||
|
||||
```
|
||||
|
||||
Let us view the above output in Fahrenheit:
|
||||
```
|
||||
$ acpi -t -f
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Thermal 0: ok, 144.5 degrees F
|
||||
|
||||
```
|
||||
|
||||
Want to know whether the AC power is connected or not? Run:
|
||||
```
|
||||
$ acpi -a
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Adapter 0: on-line
|
||||
|
||||
```
|
||||
|
||||
If the AC power is not available, you would see the following instead:
|
||||
```
|
||||
Adapter 0: off-line
|
||||
|
||||
```
|
||||
|
||||
For more details, check the man pages.
|
||||
```
|
||||
$ man acpi
|
||||
|
||||
```
|
||||
|
||||
##### Method 3: Using “Batstat” Program
|
||||
|
||||
**batstat** is a small ncurses-based CLI utility that displays your Laptop battery status on Unix-like systems. It will display the following details:
|
||||
|
||||
* Current battery level
|
||||
* Current Energy
|
||||
* Full charge energy
|
||||
* Time elapsed from the start of the program, without tracking the sleep time of the machine.
|
||||
* Battery level history
|
||||
|
||||
|
||||
|
||||
Installing batstat is a piece of cake. Git clone the latest version using the command:
|
||||
```
|
||||
$ git clone https://github.com/Juve45/batstat.git
|
||||
|
||||
```
|
||||
|
||||
The above command will pull the latest batstat version and save its contents in a folder named “batstat”.
|
||||
|
||||
Change to the batstat/bin/ directory:
|
||||
```
|
||||
$ cd batstat/bin/
|
||||
|
||||
```
|
||||
|
||||
Copy the “batstat” binary to a directory in your PATH, for example /usr/local/bin/:
|
||||
```
|
||||
$ sudo cp batstat /usr/local/bin/
|
||||
|
||||
```
|
||||
|
||||
Make it executable using command:
|
||||
```
|
||||
$ sudo chmod +x /usr/local/bin/batstat
|
||||
|
||||
```
|
||||
|
||||
Finally, run the following command to view your battery status.
|
||||
```
|
||||
$ batstat
|
||||
|
||||
```
|
||||
|
||||
Sample output:
|
||||
|
||||
![][4]
|
||||
|
||||
As you see in the above screenshot, my battery is in charging mode.
|
||||
|
||||
This utility has some limitations, though. As of writing this guide, batstat supports only one battery. And, it gathers information only from this folder – **“/sys/class/power_supply/”**. If your machine keeps the battery information in a different folder, this program will not work.
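If batstat cannot find your battery, the same sysfs folder it reads can be queried by hand. A minimal sketch, assuming the common BAT0 device name (yours might be BAT1 or similar):

```
$ cat /sys/class/power_supply/BAT0/capacity
97

$ cat /sys/class/power_supply/BAT0/status
Charging
```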
|
||||
|
||||
For more details, check the batstat GitHub page.
|
||||
|
||||
And, that’s all for today, folks. There might be many other commands and programs out there to check the laptop battery status in the Terminal in Linux. As far as I know, the methods given above have worked just fine, as expected. If you know some other commands to find out the battery status, let me know in the comment section below. I will add them to the article if they work.
|
||||
|
||||
And, that’s all for now. More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-check-laptop-battery-status-in-terminal-in-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/the-grep-command-tutorial-with-examples-for-beginners/
|
||||
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2016/12/sk@sk_006-1.png
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2016/12/batstat-1.png
|
@ -0,0 +1,47 @@
|
||||
LikeCoin, a cryptocurrency for creators of openly licensed content
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0)
|
||||
|
||||
Conventional wisdom indicates that writers, photographers, artists, and other creators who share their content for free, under Creative Commons and other open licenses, won't get paid. That means most independent creators don't make any money by publishing their work on the internet. Enter [LikeCoin][1]: a new, open source project that intends to make this convention, where artists often have to compromise or sacrifice in order to contribute, a thing of the past.
|
||||
|
||||
The LikeCoin protocol is designed to monetize creative content so creators can focus on creating great material rather than selling it.
|
||||
|
||||
The protocol is also based on decentralized technologies that track when content is used and reward its creators with LikeCoin, an [Ethereum ERC-20][2] cryptocurrency token. It operates through a "Proof of Creativity" algorithm which assigns LikeCoins based partially on how many "likes" a piece of content receives and how many derivative works are produced from it. Because openly licensed content has more opportunity to be reused and earn LikeCoin tokens, the system encourages content creators to publish under Creative Commons licenses.
|
||||
|
||||
### How it works
|
||||
|
||||
When a creative piece is uploaded via the LikeCoin protocol, the content creator includes the work's metadata, including author information and its InterPlanetary Linked Data ([IPLD][3]). This data forms a family graph of derivative works; we call the relationships between a work and its derivatives the "content footprint." This structure allows a content's inheritance tree to be easily traced all the way back to the original work.
|
||||
|
||||
LikeCoin tokens will be distributed to creators using information about a work's derivation history. Since all creative works contain the metadata of the author's wallet, the corresponding LikeCoin shares can be calculated through the algorithm and distributed accordingly.
|
||||
|
||||
LikeCoins are awarded in two ways: either directly by individuals who want to show their appreciation by paying a content creator, or through the Creators Pool, which collects viewers’ “Likes” and distributes LikeCoin according to a content’s LikeRank. Based on content-footprint tracing in the LikeCoin protocol, LikeRank measures the importance (or creativity, as we define it in this context) of a piece of content. In general, the more derivative works a piece of content generates, the more creative it is, and thus the higher its LikeRank. LikeRank is the quantifier of a content’s creativity.
|
||||
|
||||
### Want to get involved?
|
||||
|
||||
LikeCoin is still very new, and we expect to launch our first decentralized application later in 2018 to reward Creative Commons content and connect seamlessly with a much larger and established community.
|
||||
|
||||
Most of LikeCoin's code can be accessed in the [LikeCoin GitHub][4] repository under a [GPL 3.0 license][5]. Since it's still under active development, some of the experimental code is not yet open to the public, but we will make it so as soon as possible.
|
||||
|
||||
We welcome feature requests, pull requests, forks, and stars. Please join our development on GitHub and our general discussions on [Telegram][6]. We also release updates about our progress on [Medium][7], [Facebook][8], [Twitter][9], and our website, [like.co][1].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/likecoin
|
||||
|
||||
作者:[Kin Ko][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/ckxpress
|
||||
[1]:https://like.co/
|
||||
[2]:https://en.wikipedia.org/wiki/ERC20
|
||||
[3]:https://ipld.io/
|
||||
[4]:https://github.com/likecoin
|
||||
[5]:https://www.gnu.org/licenses/gpl-3.0.en.html
|
||||
[6]:https://t.me/likecoin
|
||||
[7]:http://medium.com/likecoin
|
||||
[8]:http://fb.com/likecoin.foundation
|
||||
[9]:https://twitter.com/likecoin_fdn
|
@ -0,0 +1,96 @@
|
||||
IT自动化:如何去实现
|
||||
======
|
||||
|
||||
在任何重要的项目或变更刚开始的时候,IT 管理者在前进的道路上都面临着一个普遍的抉择。
|
||||
|
||||
第一条路径看上去是提供了一个从A到B的最短路径:简单的把项目强制分配给每个人去执行,本质来说就是你要么按照要求去做要么就不要做了。
|
||||
|
||||
第二条路径可能看上去会不是很直接,因为要通过这条路径你要花时间去解释项目背后的策略以及原因。你会沿着这条路线设置停靠站点而不是从起点到终点的马拉松:
|
||||
“这就是我们正在做的-和为什么我们这么做。”
|
||||
|
||||
猜想一下哪条路径会赢得更好的结果?
|
||||
|
||||
如果你选的是路径2,你肯定是以前都经历过这两条路径-而且经历了第一次的结局。让人们参与到重大变革中总会是最明智的选择。
|
||||
|
||||
IT领导者也知道重大的变革总会带来严重的恐慌、怀疑,和其他的挑战。IT自动化确实是很正确的改变。这个术语对某些人来说是很可怕的,而且容易被曲解。帮助人们理解你的公司需要IT自动化的必要性的原因以及如何去实现是达到你的目标和策略的重要步骤。
|
||||
|
||||
[**阅读我们的相关文章,**[**IT自动化最佳实践:持久成功的7个关键点**][2]. ]
|
||||
|
||||
考虑到这一点,我们咨询了许多IT管理者关于如何在你的组织中实现IT自动化。
|
||||
|
||||
## 1. 向人们展示它的优点
|
||||
|
||||
我们要面对的一点事实是:自我利益和自我保护是本能。利用人们的这种本能是一个吸引他们的好方法:向他们展示自动化策略将如何让他们和他们的工作获益。自动化将会是软件管道中的一个特定过程意味着将会减少在半夜呼叫团队同事来解决故障?他将能让一些人丢弃技术含量低的技能,用更有策略,高效的有序工作代替手工作业,这将会帮助他们的职业生涯更进一步?
|
||||
|
||||
“向他们传达他们能得到什么好处,自动化将会如何让他们的客户和公司受益,”ADP 全球首席技术官 Vipul Nagrath 建议道,“将现在的状态和光明的未来进行对比,展现公司将会变得如何稳定、敏捷、高效和安全。”
|
||||
|
||||
这样的方法同样适用于IT领域之外的其他领域;只要在向非技术领域的股东们解读利益的时候解释清楚一些术语即可,Nagrath 说道。
|
||||
|
||||
设置好前后的情景是一个不错的帮助人们理解的更透彻的故事机。
|
||||
|
||||
“你要描述一幅人们能够联想到的当前状态的画面,”Nagrath 说。“描述现在是什么工作,但也要重点强调是什么导致团队的工作效率不够敏捷。”然后再阐释自动化过程将如何提高现在的状态。
|
||||
|
||||
## 2.将自动化和特定的商业目标绑定在一起
|
||||
|
||||
一个强有力的案例的一部分,是要确保人们理解你不只是在追逐潮流趋势。如果只是为了自动化而自动化,人们会很快察觉到,进而会更加抵制,也许在 IT 界更是如此。
|
||||
|
||||
“自动化需要商业需求的驱动,例如收入和运营开销,”Cyxtera 的副总裁兼首席信息安全官 David Emerson 说道,“没有自动化的努力是自我辩护的,而且任何技术专长都不应该被当做一种手段,除非它是公司的一项核心能力。”
|
||||
|
||||
像Nagrath一样,Emerson建议将达到自动化的商业目标和奖励措施挂钩,用迭代式的循序渐进的方式推进这些目标和相关的激励措施。
|
||||
|
||||
## 3. 将自动化计划分解为可管理的条目
|
||||
|
||||
即使你的自动化策略字面上是“一切都自动化,”对大多数组织来说那也是很艰难的而且可能是没有灵活性的。你需要一个能够将自动化目标分解为可管理的目标的计划来制定一个强有力的方案。而且这将能够创造很大的灵活性来适应之后漫长的道路。
|
||||
|
||||
“当制定一个自动化方案的时候,我建议详细的阐明推进自动化进程的奖励措施,而且允许迭代朝着目标前进来介绍和证明利益处于一个低风险水平,”Emerson说道。
|
||||
|
||||
Sergey Zuev,GA Connector 的创始人,分享了一个关于自动化为何如此重要的切身体验,以及它将如何帮助你为自动化策略建立一个强壮持久的论点。Zuev 应该知道:他的公司的自动化工具将公司的客户关系应用数据导入谷歌分析。但实际上是公司内部使客户培训流程自动化的经验带来了一个闪耀的时刻。
|
||||
|
||||
“起初,我们曾尝试去建立整个培训机制,结果这个项目搁浅了好几个月,”Zuev 说道,“认识到这样无法继续下去之后,我们决定挑选其中一个能立竿见影的领域,并立即启动。结果我们只用了一周就实现了其中的电子邮件序列的目标,而且已经从省下的人工操作中获益。”
|
||||
|
||||
## 4. 也要宣传全局性的好处
|
||||
|
||||
循序渐进的方法并不会阻碍构建一个宏伟的蓝图。就像以个人或者团队的水平来制定方案是一个好主意,帮助人们理解全公司的利益也是一个不错的主意。
|
||||
|
||||
“如果我们能够加速达到商业需求所需的时间,那么一切质疑将会平息。”
|
||||
|
||||
Eric Kaplan, AHEAD的首席技术官,赞同通过小范围的胜利来展示自动化的价值是一个赢得人心的聪明策略。但是那些所谓的“小的”的价值揭示能够帮助你提高人们的整体形象。Kaplan指出个人和组织间的价值是每个人都可以容易联系到的领域。
|
||||
|
||||
“最能展现价值的地方就是你能够节约多少时间,”Kaplan 说,“如果我们能够缩短满足商业需求所需的时间,那么一切质疑将会消失。”
|
||||
|
||||
时间和可伸缩性是业务和 IT 同事都能切身感受到的强大优势,尤其是在业务增长带来压力的时候。
|
||||
|
||||
“自动化的结果是伸缩灵活的-每个人只需较少的努力就能保持和改善你的IT环境”,红帽的全球服务副总裁John最近提到。“如果增加人力是提升你的商业的唯一途径,那么伸缩灵活就是白日梦。自动化减少了你的人力需求而且提供了IT演进所需的灵活性和韧性。”(详细内容请参考他的文章,[DevOps团队对CIO的真正需求是什么。])
|
||||
|
||||
## 5. 推广你的成果。
|
||||
|
||||
在你自动化策略的开始时,你可能是在目标和要达到目标的预期利益上制定方案。但随着你的自动化策略的不断演进,没有什么能够比现实中的实际结果令人信服。
|
||||
|
||||
“眼见为实,”ADP 的首席技术官 Nagrath 说,“没有什么比过往的实绩更能平息质疑。”
|
||||
|
||||
那意味着,不仅仅要达到你的目标,还要准时的完成-这是迭代的循序渐进的方法论的另一个不错的解释。
|
||||
|
||||
虽然如比例的提高或者成本的节省这样的量化结果可以大声宣扬出来,但 Nagrath 建议其他 IT 领导者在讲述自动化故事的时候不要仅仅止步于此。
|
||||
|
||||
为自动化提供案例也是一个定性的讨论:通过它,我们能够促进问题的预防,保障业务的连续性,减少失败或错误,并让人们在处理更有价值的任务时承担更多的责任。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
|
||||
|
||||
作者:[Kevin Casey][a]
|
||||
译者:[FelixYFZ](https://github.com/FelixYFZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://enterprisersproject.com/user/kevin-casey
|
||||
[1]:https://enterprisersproject.com/article/2017/10/how-beat-fear-and-loathing-it-change
|
||||
[2]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success?sc_cid=70160000000h0aXAAQ
|
||||
[3]:https://www.adp.com/
|
||||
[4]:https://www.cyxtera.com/
|
||||
[5]:http://gaconnector.com/
|
||||
[6]:https://www.thinkahead.com/
|
||||
[7]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
|
||||
[8]:https://enterprisersproject.com/article/2017/12/what-devops-teams-really-need-cio
|
||||
[9]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ
|
@ -0,0 +1,104 @@
|
||||
|
||||
为什么 Linux 比 Windows 和 macOS 的安全性好
|
||||
======
|
||||
|
||||
> 多年前做出的操作系统选型终将影响到如今的企业安全。在三大主流操作系统当中,有一个能被称作最安全的。
|
||||
|
||||
![](https://images.idgesg.net/images/article/2018/02/linux_security_vs_macos_and_windows_locks_data_thinkstock-100748607-large.jpg)
|
||||
|
||||
企业投入了大量时间、精力和金钱来保障系统的安全性。最强的安全意识可能就是有一个安全运营中心,肯定用上了防火墙以及反病毒软件,可能花费大量时间监控他们的网络,寻找可能表明违规的异常信号,就像 IDS、SIEM 和 NGFW 一样,他们部署了一个名副其实的防御阵列。
|
||||
|
||||
然而又有多少人想过数字化操作的基础之一:部署在员工的个人电脑上的操作系统?当选择桌面操作系统时,安全性是一个考虑的因素吗?
|
||||
|
||||
这就产生了一个 IT 人士都应该能回答的问题:一般部署哪种操作系统最安全呢?
|
||||
|
||||
我们问了一些专家他们对于以下三种选项的看法:Windows,最复杂的平台也是最受欢迎的桌面操作系统;macOS X,基于 FreeBSD 的 Unix 操作系统,驱动着苹果的 Macintosh 系统运行;还有 Linux,这里我们指的是所有的 Linux 发行版以及与基于 Unix 的操作系统相关的系统。
|
||||
|
||||
### 怎么会这样
|
||||
|
||||
企业可能没有评估他们部署到工作人员的操作系统的安全性的一个原因是,他们多年前就已经做出了选择。退一步讲,所有操作系统都还算安全,因为侵入它们,窃取数据或安装恶意软件的业务还处于起步阶段。而且一旦选择了操作系统,就很难再想改变。很少有 IT 组织希望将全球分散的员工队伍转移到全新的操作系统上。唉,他们已经受够了把用户搬到一个选好的新版本操作系统时的负面反响。
|
||||
|
||||
还有,重新考虑它是高明的吗?这三款领先的桌面操作系统在安全方面的差异是否足以值得我们去做出改变呢?
|
||||
|
||||
当然商业系统面临的威胁近几年已经改变了。攻击变得成熟多了。曾经支配了公众想象力的单枪匹马的青少年黑客已经被组织良好的犯罪分子网络以及具有庞大计算资源的政府资助组织的网络所取代。
|
||||
|
||||
像你们许多人一样,我有过很多那时的亲身经历:我曾经在许多 Windows 电脑上被恶意软件和病毒感染,我甚至被 宏病毒感染了 Mac 上的文件。最近,一个广泛传播的自动黑客绕开了网站的保护程序并用恶意软件感染了它。这种恶意软件的影响一开始是隐形的,甚至有些东西你没注意,直到恶意软件最终深深地植入系统以至于它的性能开始变差。一件有关病毒蔓延的震惊之事是不法之徒从来没有特定针对过我;当今世界,用僵尸网络攻击 100,000 台电脑容易得就像一次攻击几台电脑一样。
|
||||
|
||||
### 操作系统真的很重要吗?
|
||||
|
||||
给你的用户部署的那个操作系统确实对你的安全态度产生了影响,但那并不是一个可靠的安全措施。首先,现在的攻击很可能会发生,因为攻击者探测了你的用户,而不是你的系统。一项对参与过 DEFCON 会议黑客的[调查][1]表明“84%的人使用社交工程作为攻击策略的一部分。”部署安全操作系统只是一个重要的起点,但如果没有用户培训,强大的防火墙和持续的警惕性,即使是最安全的网络也会受到入侵。当然,用户下载的软件,扩展程序,实用程序,插件和其他看起来还好的软件总是有风险的,成为了恶意软件出现在系统上的一种途径.
|
||||
|
||||
无论你选择哪种平台,保持你系统安全最好的方法之一就是确保立即应用了软件更新。一旦补丁正式发布,黑客就可以对其进行反向工程并找到一种新的漏洞,以便在下一波攻击中使用。
|
||||
|
||||
而且别忘了最基本的操作。别用 root 权限,别授权用户连接到网络中的老服务器上。教您的用户如何挑选一个真正的好密码并且使用例如 [1Password][2] 这样的工具,以便在每个他们使用的帐户和网站上拥有不同的密码
|
||||
|
||||
因为底线是您对系统做出的每一个决定都会影响您的安全性,即使您的用户工作使用的操作系统也是如此。
|
||||
|
||||
### Windows,流行之选
|
||||
|
||||
若你是一个安全管理人员,很可能文章中提出的问题就会变成这样:是否我们远离微软的 Windows 会更安全呢?说 Windows 主导商业市场都是低估事实了。[NetMarketShare][4] 估计互联网上 88% 的电脑令人震惊地运行着 Windows 的某个版本。
|
||||
|
||||
如果你的系统在这 88% 之中,你可能知道微软会继续加强 Windows 系统的安全性。不断重写其改进或者重新改写了其代码库,增加了它的反病毒软件系统,改进了防火墙以及实现了沙箱架构,这样在沙箱里的程序就不能访问系统的内存空间或者其他应用程序。
|
||||
|
||||
但可能 Windows 的流行本身就是个问题,操作系统的安全性可能很大程度上依赖于装机用户量的规模。对于恶意软件作者来说,Windows 提供了大的施展平台。专注其中可以让他们的努力发挥最大作用。
|
||||
|
||||
像 Troy Wilkinson,Axiom Cyber Solutions 的 CEO 解释的那样,“Windows 总是因为很多原因而导致安全性保障来的最晚,主要是因为消费者的采用率。由于市场上大量基于 Windows 的个人电脑,黑客历来最有针对性地将这些系统作为目标。”
|
||||
|
||||
可以肯定地说,从梅丽莎病毒到 WannaCry 及其后来者,世界上许多已知的恶意软件早已对准了 Windows 系统。
|
||||
|
||||
### macOS X 以及通过隐匿实现的安全
|
||||
|
||||
如果最流行的操作系统总是成为大目标,那么用一个不流行的操作系统能确保安全吗?这个主意是老法新用——而且是完全不可信的概念——“通过隐匿实现的安全”,这秉承了让软件内部运作保持专有,从而不为人知是抵御攻击的最好方法的理念。
|
||||
|
||||
Wilkinson 坦言,macOS X “比 Windows 更安全”,但他急于补充说,“macOS 曾被认为是一个安全漏洞很小的完全安全的操作系统,但近年来,我们看到黑客制造了攻击苹果系统的额外漏洞。”
|
||||
|
||||
换句话说,攻击者会扩大活动范围而不会无视 Mac 领域。
|
||||
|
||||
Comparitech 的安全研究员 Lee Muson 说,在选择更安全的操作系统时,“macOS 很可能是被选中的目标”,但他提醒说,这一想法并不令人费解。它的优势在于“它仍然受益于通过隐匿实现的安全感和微软提供的更大的目标。”
|
||||
|
||||
Wolf Solutions 公司的 Joe Moore 给予了苹果更多的信任,称“现成的 macOS X 在安全方面有着良好的记录,部分原因是它不像 Windows 那么广泛,而且部分原因是苹果公司在安全问题上干的不错。”
|
||||
|
||||
### 最终胜者是 ……
|
||||
|
||||
你可能一开始就知道它:专家们的明确共识是 Linux 是最安全的操作系统。然而,尽管它是服务器的首选操作系统,而将其部署在桌面上的企业很少。
|
||||
|
||||
如果你确定 Linux 是要选择的系统,你仍然需要决定选择哪种 Linux 系统,并且事情会变得更加复杂。 用户需要一个看起来很熟悉的用户界面,而你需要最安全的操作系统。
|
||||
|
||||
像 Moore 解释的那样,“Linux 有可能是最安全的,但要求用户是资深用户。”所以,它不是针对所有人的。
|
||||
|
||||
将安全性作为主要功能的 Linux 发行版包括 Parrot Linux,这是一个基于 Debian 的发行版,Moore 说,它提供了许多与安全相关开箱即用的工具。
|
||||
|
||||
当然,一个重要的区别是 Linux 是开源的。Simplex Solutions 的 CISO Igor Bidenko 说,编码人员可以阅读和评论彼此工作的现实看起来像是一场安全噩梦,但这确实是让 Linux 如此安全的重要原因。 “Linux 是最安全的操作系统,因为它的源代码是开放的。任何人都可以查看它,并确保没有错误或后门。”
|
||||
|
||||
Wilkinson 阐述说:“Linux 和基于 Unix 的操作系统具有较少的信息安全领域已知的、可利用的安全缺陷。技术社区对 Linux 代码进行了审查,该代码有助于提高安全性:通过进行这么多的监督,易受攻击之处、漏洞和威胁就会减少。”
|
||||
|
||||
这是一个微妙的而违反直觉的解释,但是通过让数十人(有时甚至数百人)通读操作系统中的每一行代码,代码实际上更加健壮,并且发布漏洞错误的机会减少了。这与 PC World 为何出来说 Linux 更安全有很大关系。正如 Katherine Noyes 解释的那样,“微软可能吹捧它的大型付费开发者团队,但团队不太可能与基于全球的 Linux 用户开发者进行比较。 安全只能通过所有额外的关注获益。”
|
||||
|
||||
另一个被 《PC World》举例的原因是 Linux 更好的用户特权模式:Windows 用户“一般被默认授予管理员权限,那意味着他们几乎可以访问系统中的一切,”Noye 的文章讲到。Linux,反而很好地限制了“root”权限。
|
||||
|
||||
Noyes 还指出,Linux 环境下的多样性可能比典型的 Windows 单一文化更好地对抗攻击:Linux 有很多不同的发行版。其中一些以其特别的安全关注点进行差异化。Comparitech 的安全研究员 Lee Muson 为 Linux 发行版提供了这样的建议:“Qubes OS 对于 Linux 来说是一个很好的出发点,现在你可以发现,爱德华·斯诺登的认可大大地掩盖了它自己极其不起眼的主张。”其他安全性专家指出了专门的安全 Linux 发行版,如 Tails Linux,它旨在直接从 USB 闪存驱动器或类似的外部设备安全地匿名运行。
|
||||
|
||||
### 构建安全趋势
|
||||
|
||||
惯性是一股强大的力量。虽然人们有明确的共识,认为 Linux 是桌面系统的最安全选择,但并没有出现对 Windows 和 Mac 机器压倒性的倾向。尽管如此,Linux 采用率的小幅增长却可能会产生对所有人都更加安全的计算,因为市场份额的丧失是确定能获得微软和苹果公司关注的一个方式。换句话说,如果有足够的用户在桌面上切换到 Linux,Windows 和 Mac PC 很可能成为更安全的平台。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.computerworld.com/article/3252823/linux/why-linux-is-better-than-windows-or-macos-for-security.html
|
||||
|
||||
作者:[Dave Taylor][a]
|
||||
译者:[fuzheng1998](https://github.com/fuzheng1998)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.computerworld.com/author/Dave-Taylor/
|
||||
[1]:https://www.esecurityplanet.com/hackers/fully-84-percent-of-hackers-leverage-social-engineering-in-attacks.html
|
||||
[2]:http://www.1password.com
|
||||
[3]:https://www.facebook.com/Computerworld/posts/10156160917029680
|
||||
[4]:https://www.netmarketshare.com/operating-system-market-share.aspx?options=%7B%22filter%22%3A%7B%22%24and%22%3A%5B%7B%22deviceType%22%3A%7B%22%24in%22%3A%5B%22Desktop%2Flaptop%22%5D%7D%7D%5D%7D%2C%22dateLabel%22%3A%22Trend%22%2C%22attributes%22%3A%22share%22%2C%22group%22%3A%22platform%22%2C%22sort%22%3A%7B%22share%22%3A-1%7D%2C%22id%22%3A%22platformsDesktop%22%2C%22dateInterval%22%3A%22Monthly%22%2C%22dateStart%22%3A%222017-02%22%2C%22dateEnd%22%3A%222018-01%22%2C%22segments%22%3A%22-1000%22%7D
|
||||
[5]:https://www.parrotsec.org/
|
||||
[6]:https://www.pcworld.com/article/202452/why_linux_is_more_secure_than_windows.html
|
||||
[7]:https://www.qubes-os.org/
|
||||
[8]:https://twitter.com/snowden/status/781493632293605376?lang=en
|
||||
[9]:https://tails.boum.org/about/index.en.html
|
@ -1,164 +0,0 @@
|
||||
# 系统调用,让世界转起来!
|
||||
|
||||
我其实不想将它分解开给你看,一个用户应用程序在整个系统中就像一个可怜的孤儿一样无依无靠:
|
||||
|
||||
![](https://manybutfinite.com/img/os/appInVat.png)
|
||||
|
||||
它与外部世界的每个交流都要在内核的帮助下通过系统调用才能完成。一个应用程序要想保存一个文件、写到终端、或者打开一个 TCP 连接,内核都要参与。应用程序是被内核高度怀疑的:认为它到处充斥着 bugs,而最糟糕的是那些充满邪恶想法的天才大脑(写的恶意程序)。
|
||||
|
||||
这些系统调用是从一个应用程序到内核的函数调用。它们因为安全考虑使用一个特定的机制,实际上你只是调用了内核的 API。“系统调用”这个术语指的是调用由内核提供的特定功能(比如,系统调用 open())或者是调用途径。你也可以简称为:syscall。
|
||||
|
||||
这篇文章讲解系统调用,系统调用与调用一个库有何区别,以及在操作系统/应用程序接口上的刺探工具。如果想彻底了解应用程序借助操作系统都发生的哪些事情?那么就可以将一个不可能解决的问题转变成一个快速而有趣的难题。
|
||||
|
||||
因此,下图是一个运行着的应用程序,一个用户进程:
|
||||
|
||||
![](https://manybutfinite.com/img/os/sandbox.png)
|
||||
|
||||
它有一个私有的 [虚拟地址空间][2]—— 它自己的内存沙箱。整个系统都在地址空间中,程序的二进制文件加上它所需要的库全部都 [被映射到内存中][3]。内核自身也映射为地址空间的一部分。
|
||||
|
||||
下面是我们程序的代码和 PID,进程的 PID 可以通过 [getpid(2)][4]:
|
||||
|
||||
pid.c [download][1]
|
||||
|
||||
|
|
||||
```
|
||||
123456789
|
||||
```
|
||||
|
|
||||
```
|
||||
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

int main()
{
    pid_t p = getpid();
    printf("%d\n", p);
}
|
||||
```
|
||||
|
|
||||
|
||||
**(致校对:本文的所有代码部分都出现了排版错误,请与原文核对确认!!)**
|
||||
|
||||
在 Linux 中,一个进程并不是一出生就知道它的 PID。要想知道它的 PID,它必须去询问内核,因此,这个询问请求也是一个系统调用:
|
||||
|
||||
![](https://manybutfinite.com/img/os/syscallEnter.png)
|
||||
|
||||
它的第一步是开始于调用一个 C 库的 [getpid()][5],它是系统调用的一个封装。当你调用一些功能时,比如,open(2)、read(2)、以及相关的一些支持时,你就调用了这些封装。其实,对于大多数编程语言在这一块的原生方法,最终都是在 libc 中完成的。
|
||||
|
||||
极简设计的操作系统都提供了方便的 API 封装,这样可以保持内核的简洁。所有的内核代码运行在特权模式下,有 bugs 的内核代码行将会产生致命的后果。在用户模式下做的任何事情都是在用户模式中完成的。由库来提供友好的方法和想要的参数处理,像 printf(3) 这样。
|
||||
|
||||
我们拿一个 web APIs 进行比较,内核的封装方式与构建一个简单易行的 HTTP 接口去提供服务是类似的,然后使用特定语言的守护方法去提供特定语言的库。或者也可能有一些缓存,它是库的 getpid() 完成的内容:首次调用时,它真实地去执行了一个系统调用,然后,它缓存了 PID,这样就可以避免后续调用时的系统调用开销。
|
||||
|
||||
一旦封装完成,它做的第一件事就是进入了超空间(hyperspace)的内核(译者注:一个快速而安全的计算环境,独立于操作系统而存在)。这种转换机制因处理器架构设计不同而不同。(译者注:就是前一段时间爆出的存在于处理器硬件中的运行于 Ring -3 的操作系统,比如,Intel 的 ME)在 Intel 处理器中,参数和 [系统调用号][6] 是 [加载到寄存器中的][7],然后,运行一个 [指令][8] 将 CPU 置于 [特权模式][9] 中,并立即将控制权转移到内核中的全局系统调用 [入口][10]。如果你对这些细节感兴趣,David Drysdale 在 LWN 上有两篇非常好的文章([第一篇][11],[第二篇][12])。
|
||||
|
||||
内核然后使用这个系统调用号作为进入 [sys_call_table][14] 的一个 [索引][13],它是一个函数指针到每个系统调用实现的数组。在这里,调用 了 [sys_getpid][15]:
|
||||
|
||||
![](https://manybutfinite.com/img/os/syscallExit.png)
|
||||
|
||||
在 Linux 中,系统调用大多数都实现为独立的 C 函数,有时候这样做 [很琐碎][16],但是通过内核优秀的设计,系统调用被严格隔离。它们是工作在一般数据结构中的普通代码。关于这些争论的验证除了完全偏执的以外,其它的还是非常好的。
|
||||
|
||||
一旦它们的工作完成,它们就会正常返回,然后,根据特定代码转回到用户模式,封装将在那里继续做一些后续处理工作。在我们的例子中,[getpid(2)][17] 现在缓存了由内核返回的 PID。如果内核返回了一个错误,另外的封装可以去设置全局 errno 变量。让你知道 GNU 所关心的一些小事。
|
||||
|
||||
如果你想看未处理的原生内容,glibc 提供了 [syscall(2)][18] 函数,它可以不通过封装来产生一个系统调用。你也可以通过它来做一个你自己的封装。这对一个 C 库来说,并不神奇,也不是保密的。
|
||||
|
||||
这种系统调用的设计影响是很深远的。我们从一个非常有用的 [strace(1)][19] 开始,这个工具可以用来监视 Linux 进程的系统调用(在 Mac 上,看 [dtruss(1m)][20] 和神奇的 [dtrace][21];在 Windows 中,看 [sysinternals][22])。这里在 pid 上的跟踪:
|
||||
|
||||
|
|
||||
```
|
||||
1234567891011121314151617181920
|
||||
```
|
||||
|
|
||||
```
|
||||
~/code/x86-os$ strace ./pid
execve("./pid", ["./pid"], [/* 20 vars */]) = 0
brk(0)                                  = 0x9aa0000
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7767000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=18056, ...}) = 0
mmap2(NULL, 18056, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7762000
close(3)                                = 0
[...snip...]
getpid()                                = 14678
fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 1), ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7766000
write(1, "14678\n", 614678
)                                       = 6
exit_group(6)                           = ?
|
||||
```
|
||||
|
|
||||
|
||||
输出的每一行都显示了一个系统调用 、它的参数、以及返回值。如果你在一个循环中将 getpid(2) 运行 1000 次,你就会发现始终只有一个 getpid() 系统调用,因为,它的 PID 已经被缓存了。我们也可以看到在格式化输出字符串之后,printf(3) 调用了 write(2)。
|
||||
|
||||
strace 可以开始一个新进程,也可以附加到一个已经运行的进程上。你可以通过不同程序的系统调用学到很多的东西。例如,sshd 守护进程一天都干了什么?
|
||||
|
||||
|
|
||||
```
|
||||
1234567891011121314151617181920212223242526272829
|
||||
```
|
||||
|
|
||||
```
|
||||
~/code/x86-os$ ps ax | grep sshd
12218 ?        Ss     0:00 /usr/sbin/sshd -D

~/code/x86-os$ sudo strace -p 12218
Process 12218 attached - interrupt to quit
select(7, [3 4], NULL, NULL, NULL

[ ... nothing happens ...
  No fun, it's just waiting for a connection using select(2)
  If we wait long enough, we might see new keys being generated and so on,
  but let's attach again, tell strace to follow forks (-f), and connect via SSH ]

~/code/x86-os$ sudo strace -p 12218 -f

[lots of calls happen during an SSH login, only a few shown]

[pid 14692] read(3, "-----BEGIN RSA PRIVATE KEY-----\n"..., 1024) = 1024
[pid 14692] open("/usr/share/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
[pid 14692] open("/etc/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
[pid 14692] open("/etc/ssh/ssh_host_dsa_key", O_RDONLY|O_LARGEFILE) = 3
[pid 14692] open("/etc/protocols", O_RDONLY|O_CLOEXEC) = 4
[pid 14692] read(4, "# Internet (IP) protocols\n#\n# Up"..., 4096) = 2933
[pid 14692] open("/etc/hosts.allow", O_RDONLY) = 4
[pid 14692] open("/lib/i386-linux-gnu/libnss_dns.so.2", O_RDONLY|O_CLOEXEC) = 4
[pid 14692] stat64("/etc/pam.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 14692] open("/etc/pam.d/common-password", O_RDONLY|O_LARGEFILE) = 8
[pid 14692] open("/etc/pam.d/other", O_RDONLY|O_LARGEFILE) = 4
|
||||
```
|
||||
|
|
||||
|
||||
看懂 SSH 的调用是块难啃的骨头,但是,如果搞懂它你就学会了跟踪。也可以用它去看一个应用程序打开的哪个文件是有用的(“这个配置是从哪里来的?”)。如果你有一个出现错误的进程,你可以跟踪它,然后去看它通过系统调用做了什么?当一些应用程序没有提供适当的错误信息而意外退出时,你可以去检查它是否是一个系统调用失败。你也可以使用过滤器,查看每个调用的次数,等等:
|
||||
|
||||
|
|
||||
|
||||
```
|
||||
|
||||
```
|
||||
|
||||
123456789
|
||||
|
||||
```
|
||||
|
||||
```
|
||||
|
||||
|
|
||||
```
|
||||
~/code/x86-os$ strace -T -e trace=recv curl -silent www.google.com. > /dev/null
recv(3, "HTTP/1.1 200 OK\r\nDate: Wed, 05 N"..., 16384, 0) = 4164 <0.000007>
recv(3, "fl a{color:#36c}a:visited{color:"..., 16384, 0) = 2776 <0.000005>
recv(3, "adient(top,#4d90fe,#4787ed);filt"..., 16384, 0) = 4164 <0.000007>
recv(3, "gbar.up.spd(b,d,1,!0);break;case"..., 16384, 0) = 2776 <0.000006>
recv(3, "$),a.i.G(!0)),window.gbar.up.sl("..., 16384, 0) = 1388 <0.000004>
recv(3, "margin:0;padding:5px 8px 0 6px;v"..., 16384, 0) = 1388 <0.000007>
recv(3, "){window.setTimeout(function(){v"..., 16384, 0) = 1484 <0.000006>
|
||||
```
|
||||
|
|
||||
|
||||
我鼓励你去浏览在你的操作系统中的这些工具。使用它们会让你觉得自己像个超人一样强大。
|
||||
|
||||
但是,足够有用的东西,往往要让我们深入到它的设计中。我们可以看到那些用户空间中的应用程序是被严格限制在它自己的虚拟地址空间中,它的虚拟地址空间运行在 Ring 3(非特权模式)中。一般来说,只涉及到计算和内存访问的任务是不需要请求系统调用的。例如,像 [strlen(3)][23] 和 [memcpy(3)][24] 这样的 C 库函数并不需要内核去做什么。这些都是在应用程序内部发生的事。
|
||||
|
||||
一个 C 库函数的 man 页面节上(在圆括号 2 和 3 中)也提供了线索。节 2 是用于系统调用封装,而节 3 包含了其它 C 库函数。但是,正如我们在 printf(3) 中所看到的,一个库函数可以最终产生一个或者多个系统调用。
|
||||
|
||||
如果你对此感到好奇,这里是 [Linux][25] ( [Filippo's list][26])和 [Windows][27] 的全部系统调用列表。它们各自有 ~310 和 ~460 个系统调用。看这些系统调用是非常有趣的,因为,它们代表了软件在现代的计算机上能够做什么。另外,你还可能在这里找到与进程间通讯和性能相关的“宝藏”。这是一个“不懂 Unix 的人注定最终还要重新发明一个蹩脚的 Unix ” 的地方。(译者注:“Those who do not understand Unix are condemned to reinvent it,poorly。”这句话是 [Henry Spencer][35] 的名言,反映了 Unix 的设计哲学,它的一些理念和文化是一种技术发展的必须结果,看似糟糕却无法超越。)
|
||||
|
||||
与 CPU 周期相比,许多系统调用要花很长的时间才能完成任务,例如从硬盘中读取内容。在这种情况下,调用进程会一直休眠,直到底层工作完成为止。由于 CPU 运行得非常快,一般受 I/O 限制的程序在其生命周期的大部分时间里都在休眠,等待系统调用返回。相反,如果你跟踪一个计算密集型任务,你经常会看到几乎没有系统调用参与其中,此时 [top(1)][29] 会显示大量的 CPU 占用。
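想直观感受这种差别,可以用 shell 内置的 `time` 对比一下两类命令(路径和循环次数仅作示例,结果因机器而异):

```
# 以 I/O 为主的命令:会看到相当一部分时间记在 sys 上
time grep -r "syscall" /usr/include > /dev/null

# 纯计算的命令:时间几乎都记在 user 上,系统调用很少
time awk 'BEGIN { for (i = 0; i < 10000000; i++) s += i }'
```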
|
||||
|
||||
在一个系统调用中的开销可能会是一个问题。例如,固态硬盘比普通硬盘要快很多,但是,操作系统的开销可能比 I/O 操作本身的开销 [更加昂贵][30]。执行大量读写操作的程序可能就是操作系统开销的瓶颈所在。[向量化 I/O][31] 对此有一些帮助。因此要做 [文件的内存映射][32],它允许一个程序仅访问内存就可以读或写磁盘文件。类似的映射也存在于像视频卡这样的地方。最终,经济性俱佳的云计算可能导致内核在用户模式/内核模式的切换消失或者最小化。
|
||||
|
||||
最后,系统调用对系统安全也很有帮助。一方面,无论一个二进制程序看起来多么晦涩难懂,你都可以通过观察它的系统调用来检查它的行为,这种方式可以用来检测恶意程序。另一方面,我们可以记录一个未知程序的系统调用"画像",对偏离它的行为进行报警,或者给程序的调用指定一个白名单,让漏洞利用变得更加困难。在这个领域已经有大量的研究和许多工具,但还没有"杀手级"的解决方案。
|
||||
|
||||
这就是系统调用。很抱歉这篇文章有点长,我希望它对你有用。接下来的时间,我将写更多(短的)文章,也可以在 [RSS][33] 和 [Twitter][34] 关注我。这篇文章献给 glorious Clube Atlético Mineiro。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via:https://manybutfinite.com/post/system-calls/
|
||||
|
||||
作者:[Gustavo Duarte][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://duartes.org/gustavo/blog/about/
|
||||
[1]:https://manybutfinite.com/code/x86-os/pid.c
|
||||
[2]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory
|
||||
[3]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
|
||||
[4]:http://linux.die.net/man/2/getpid
|
||||
[5]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/getpid.c;h=937b1d4e113b1cff4a5c698f83d662e130d596af;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l49
|
||||
[6]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl#L48
|
||||
[7]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l139
|
||||
[8]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l179
|
||||
[9]:https://manybutfinite.com/post/cpu-rings-privilege-and-protection
|
||||
[10]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L354-L386
|
||||
[11]:http://lwn.net/Articles/604287/
|
||||
[12]:http://lwn.net/Articles/604515/
|
||||
[13]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L422
|
||||
[14]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/syscall_64.c#L25
|
||||
[15]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L809
|
||||
[16]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L859
|
||||
[17]:http://linux.die.net/man/2/getpid
|
||||
[18]:http://linux.die.net/man/2/syscall
|
||||
[19]:http://linux.die.net/man/1/strace
|
||||
[20]:https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/dtruss.1m.html
|
||||
[21]:http://dtrace.org/blogs/brendan/2011/10/10/top-10-dtrace-scripts-for-mac-os-x/
|
||||
[22]:http://technet.microsoft.com/en-us/sysinternals/bb842062.aspx
|
||||
[23]:http://linux.die.net/man/3/strlen
|
||||
[24]:http://linux.die.net/man/3/memcpy
|
||||
[25]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl
|
||||
[26]:https://filippo.io/linux-syscall-table/
|
||||
[27]:http://j00ru.vexillium.org/ntapi/
|
||||
[28]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait/
|
||||
[29]:http://linux.die.net/man/1/top
|
||||
[30]:http://danluu.com/clwb-pcommit/
|
||||
[31]:http://en.wikipedia.org/wiki/Vectored_I/O
|
||||
[32]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/
|
||||
[33]:http://feeds.feedburner.com/GustavoDuarte
|
||||
[34]:http://twitter.com/food4hackers
|
||||
[35]:https://en.wikipedia.org/wiki/Henry_Spencer
|
@ -1,108 +0,0 @@
|
||||
ImageMagick 的一些高级图片查看技巧
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-photo-camera-green.png?itok=qiDqmXV1)
|
||||
图片源于 [Internet Archive Book Images](https://www.flickr.com/photos/internetarchivebookimages/14759826206/in/photolist-ougY7b-owgz5y-otZ9UN-waBxfL-oeEpEf-xgRirT-oeMHfj-wPAvMd-ovZgsb-xhpXhp-x3QSRZ-oeJmKC-ovWeKt-waaNUJ-oeHPN7-wwMsfP-oeJGTK-ovZPKv-waJnTV-xDkxoc-owjyCW-oeRqJh-oew25u-oeFTm4-wLchfu-xtjJFN-oxYznR-oewBRV-owdP7k-owhW3X-oxXxRg-oevDEY-oeFjP1-w7ZB6f-x5ytS8-ow9C7j-xc6zgV-oeCpG1-oewNzY-w896SB-wwE3yA-wGNvCL-owavts-oevodT-xu9Lcr-oxZqZg-x5y4XV-w89d3n-x8h6fi-owbfiq),Opensource.com 修改,[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)协议
|
||||
|
||||
在我先前的[ImageMagick 入门:使用命令行来编辑图片](https://linux.cn/article-8851-1.html) 文章中,我展示了如何使用 ImageMagick 的菜单栏进行图片的编辑和变换风格。在这篇续文里,我将向你展示使用这个开源的图像编辑器来查看图片的额外方法。
|
||||
|
||||
### 别样的风格
|
||||
|
||||
在深入 ImageMagick 的高级图片查看技巧之前,我想先分享另一个使用 **convert** 达到的有趣但简单的效果,在上一篇文章中我已经详细地介绍了 **convert** 命令,这个技巧涉及这个命令的 `edge` 和 `negate` 选项:
|
||||
```
|
||||
convert DSC_0027.JPG -edge 3 -negate edge3+negate.jpg
|
||||
```
|
||||
|
||||
![在图片上使用 `edge` 和 `negate` 选项][3]
|
||||
使用`edge` 和 `negate` 选项前后的图片对比
|
||||
|
||||
编辑后的图片让我更加喜爱,具体是因为如下因素:海的外观,作为前景和背景的植被,特别是太阳及其在海上的反射,最后是天空。
|
||||
|
||||
### 使用 `display` 来查看一系列图片
|
||||
|
||||
假如你跟我一样是个命令行用户,你就知道 shell 为复杂任务提供了更多的灵活性和快捷方法。下面我将展示一个例子来佐证这个观点。ImageMagick 的 **display** 命令可以克服我在 GNOME 桌面上使用 [Shotwell][4] 图像管理器导入图片时遇到的问题。
|
||||
|
||||
Shotwell 会根据每张导入图片的 [Exif][5] 数据,创建以图片被生成或者拍摄时的日期为名称的目录结构。最终的效果是最上层的目录以年命名,接着的子目录是以月命名 (01, 02, 03 等等),然后是以每月的日期命名的子目录。我喜欢这种结构,因为当我想根据图片被创建或者拍摄时的日期来查找它们时将会非常方便。
|
||||
|
||||
但这种结构也并不是非常完美的,当我想查看最近几个月或者最近一年的所有图片时就会很麻烦。使用常规的图片查看器,我将不停地在不同层级的目录间跳转,但 ImageMagick 的 **display** 命令可以使得查看更加简单。例如,假如我想查看最近一年的图片,我便可以在命令行中键入下面的 **display** 命令:
|
||||
```
|
||||
display -resize 35% 2017/*/*/*.JPG
|
||||
```
|
||||
|
||||
我可以匹配一年中的每一月和每一天。
|
||||
|
||||
现在假如我想查看某张图片,但我不确定我是在 2016 年的上半年还是在 2017 的上半年拍摄的,那么我便可以使用下面的命令来找到它:
|
||||
```
|
||||
display -resize 35% 201[6-7]/0[1-6]/*/*.JPG
|
||||
```
|
||||
限制查看的图片拍摄于 2016 和 2017 年的一月到六月
|
||||
|
||||
### 使用 `montage` 来查看图片的缩略图
|
||||
|
||||
假如现在我要查找一张想要编辑的图片,使用 **display** 的一个问题是:它只会显示每张图片的文件名,而不显示其在目录结构中的位置,所以想要找到那张图片并不容易。另外,假如我在从相机下载图片之后顺手清空了相机的存储,那么下次拍摄的照片又会从 **DSC_0001.jpg** 开始命名;这样当使用 **display** 来展示一整年的图片时,就得在这 12 个月的同名图片中花费很长的时间来查找它们。
|
||||
|
||||
这时 **montage** 命令便可以派上用场了。它可以将一系列的图片放在一张图片中,这样就会非常有用。例如可以使用下面的命令来完成上面的任务:
|
||||
```
|
||||
montage -label %d/%f -title 2017 -tile 5x -resize 10% -geometry +4+4 2017/0[1-4]/*/*.JPG 2017JanApr.jpg
|
||||
```
|
||||
|
||||
从左到右,这个命令以标签开头,标签的形式是包含文件名( **%f** )和以 **/** 分割的目录( **%d** )结构,接着这个命令以目录的名称(2017)来作为标题,然后将图片排成 5 列,每个图片缩放为 10% (这个参数可以很好地匹配我的屏幕)。`geometry` 的设定将在每张图片的四周留白,最后在接上要处理的对象和一个合适的名称( **2017JanApr.jpg** )。现在图片 **2017JanApr.jpg** 便可以成为一个索引,使得我可以不时地使用它来查看这个时期的所有图片。
|
||||
|
||||
### 注意内存消耗
|
||||
|
||||
你可能会好奇为什么我在上面的合成图中只特别指定了为期 4 个月(从一月到四月)的图片。因为 **montage** 将会消耗大量内存,所以你需要多加注意。我的相机产生的图片每张大约有 2.5MB,我发现我的系统可以很轻松地处理 60 张图片。但一旦图片增加到 80 张,如果此时还有另外的程序(例如 Firefox 、Thunderbird)在后台工作,那么我的电脑将会死机,这似乎和内存使用相关,**montage**可能会占用可用 RAM 的 80% 乃至更多(你可以在此期间运行 **top** 命令来查看内存占用)。假如我关掉其他的程序,我便可以在我的系统死机前处理 80 张图片。
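如果想在 **montage** 运行期间盯着内存占用,可以在另一个终端里运行类似下面的命令(仅作示意):

```
# 每秒刷新一次整体内存情况
watch -n 1 free -m

# 或者只看 montage 进程本身的占用
top -b -n 1 | grep -i montage
```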
|
||||
|
||||
下面的命令可以让你知晓在你运行 **montage** 命令前你需要处理图片张数:
|
||||
```
|
||||
ls 2017/0[1-4]/*/*.JPG > filelist; wc -l filelist
|
||||
```
|
||||
|
||||
**ls** 命令生成我们搜索的文件的列表,然后通过重定向将这个列表保存在任意以 filelist 为名称的文件中。接着带有 **-l** 选项的 **wc** 命令输出该列表文件共有多少行,换句话说,展示出需要处理的文件个数。下面是我运行命令后的输出:
|
||||
```
|
||||
163 filelist
|
||||
```
|
||||
|
||||
啊呀!从一月到四月我居然有 163 张图片,使用这些图片来创建一张合成图一定会使得我的系统死机的。我需要将这个列表减少点,可能只处理到 3 月份或者更早的图片。但如果我在4月20号到30号期间拍摄了很多照片,我想这便是问题的所在。下面的命令便可以帮助指出这个问题:
|
||||
```
|
||||
ls 2017/0[1-3]/*/*.JPG > filelist; ls 2017/04/0[1-9]/*.JPG >> filelist; ls 2017/04/1[0-9]/*.JPG >> filelist; wc -l filelist
|
||||
```
|
||||
|
||||
上面一行中共有 4 个命令,它们以分号分隔。第一个命令特别指定从一月到三月期间拍摄的照片;第二个命令使用 **>>** 将拍摄于 4 月 1 日至 9 日的照片追加到这个列表文件中;第三个命令将拍摄于 4 月 1 0 日到 19 日的照片追加到列表中。最终它的显示结果为:
|
||||
```
|
||||
81 filelist
|
||||
```
|
||||
|
||||
我知道假如我关掉其他的程序,处理 81 张图片是可行的。
|
||||
|
||||
使用 **montage** 来处理它们是很简单的,因为我们只需要将上面所做的处理添加到 **montage** 命令的后面即可:
|
||||
```
|
||||
montage -label %d/%f -title 2017 -tile 5x -resize 10% -geometry +4+4 2017/0[1-3]/*/*.JPG 2017/04/0[1-9]/*.JPG 2017/04/1[0-9]/*.JPG 2017Jan01Apr19.jpg
|
||||
```
|
||||
|
||||
从左到右,**montage** 命令后面最后的那个文件名将会作为输出,在它之前的都是输入。这个命令将花费大约 3 分钟来运行,并生成一张大小约为 2.5MB 的图片,但我的系统只是有一点反应迟钝而已。
|
||||
|
||||
### 展示合成图片
|
||||
|
||||
当你第一次使用 **display** 查看一张巨大的合成图片时,你将看到合成图的宽度很合适,但图片的高度被压缩了,以便和屏幕相适应。不要慌,只需要左击图片,然后选择 **View > Original Size** 便会显示整个图片。再次点击图片便可以使菜单栏隐藏。
|
||||
|
||||
我希望这篇文章可以在你使用新方法查看图片时帮助你。在我的下一篇文章中,我将讨论更加复杂的图片操作技巧。
|
||||
|
||||
### 作者简介
|
||||
Greg Pittman - Greg 肯塔基州路易斯维尔的一名退休的神经科医生,对计算机和程序设计有着长期的兴趣,最早可以追溯到 1960 年代的 Fortran IV 。当 Linux 和开源软件相继出现时,他开始学习更多的相关知识,并分享自己的心得。他是 Scribus 团队的成员。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/9/imagemagick-viewing-images
|
||||
|
||||
作者:[Greg Pittman][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/greg-p
|
||||
[1]:https://opensource.com/article/17/8/imagemagick
|
||||
[2]:/file/370946
|
||||
[3]:https://opensource.com/sites/default/files/u128651/edge3negate.jpg "Using the edge and negate options on an image."
|
||||
[4]:https://wiki.gnome.org/Apps/Shotwell
|
||||
[5]:https://en.wikipedia.org/wiki/Exif
|
@ -0,0 +1,81 @@
|
||||
从专有到开源的十个简单步骤
|
||||
======
|
||||
|
||||
"开源软件的确不是很安全,因为每个人都能使用它,而且他们能够随意的进行编译并且用他们自己写的不好的东西进行替换。"举手示意:谁之前听说过这个?1
|
||||
|
||||
当我和客户交谈的时候(是的,他们有时候确实会让我去和客户交谈),以及和"一线"2的人们交流时,经常会听到这种说法。正如我在前一篇文章《[许多人的评论并不一定能防止错误代码][1]》中谈到的,开源软件(尤其是安全软件)并不像有些人宣称的那样天然就比专有软件更安全;但如果两者可选,我每次还是更青睐开源软件。而我不断听到"开源软件不安全"这类说法,说明有时候仅仅解释"开源也需要投入工作"是不够的,我们还需要更主动地参与到正面的宣传中去。
|
||||
|
||||
我并不期望能够达到牛顿或者维特根斯坦的逻辑水平,但是我会尽我所能,而且我会在结尾做个总结,如果你感兴趣的话可以去快速的浏览一下。
|
||||
|
||||
### 关键因素
|
||||
|
||||
首先,我们必须明白,没有任何一款软件是绝对安全的,无论是专有软件还是开源软件。第二,我们应该承认,确实存在一些很不错的专有软件。第三,也存在一些很糟糕的专有软件。第四,有很多优秀、有天赋、敬业的架构师、设计师和软件工程师在设计开发专有软件。
|
||||
|
||||
但问题在于:第五点,从事某款专有软件的人员是有限的,而且你不可能总是能雇到最好的人。即使是政府部门或公共组织,它们虽然拥有庞大的人才库,在安全领域可用的人才也同样有限。
|
||||
|
||||
第六点,能够开发、测试、改进开源软件的人实际上是无限的,而且其中就包括最优秀的人才。第七(也是我最喜欢的一点),这群人当中还包括许多编写专有软件的人才。第八,许多政府或者公共组织开发的软件也都逐渐开源了。
|
||||
|
||||
第九,如果你担心正在运行的软件得不到支持,或者来源不明,好消息是:有一批组织会检查软件代码的来源,并提供支持和补丁更新。它们可以按照类似专有软件的模式来运营开源软件,例如对构建进行签名认证,以便你验证自己运行的开源软件并非来源不明,也没有被植入恶意代码。
|
||||
|
||||
第十点(也是这篇文章的重点),当你运行、测试开源软件,就问题提供反馈,发现缺陷并报告的时候,你就是在为公共福利贡献知识、专业技能和经验,而这正是开源:它也因为你所做的这些而变得更好。无论你是以个人身份参与,还是通过提供支持的商业组织参与,你都已经成为这个共同体的一部分了。开源让软件变得越来越好,而且你可以亲眼看到这些变化。没有什么是隐藏封闭的,它是完全开放的。事情会变坏吗?会的,但是我们能够及时发现问题并且修复它。
|
||||
|
||||
这种公共福利并不适用于专有软件:被隐藏起来的东西是无法照亮、丰富这个世界的。
|
||||
|
||||
我知道,作为一个英国人,在使用"联邦/共同体(commonwealth)"这个词时需要小心谨慎;它常常让人联想到帝国,但我想表达的不是那个意思,也不是克伦威尔使用这个词时的含义(无论如何,他都是一个有争议的历史人物)。我想表达的是"共同(common)"与"福利(wealth)"的结合:这里的财富不是指金钱,而是全人类都能共享的福祉。
|
||||
|
||||
我是真心相信这一点的。如果要你从这篇文章里只带走一条信息,那应该就是第十条:公共福利是我们的遗产、我们的经验、我们的知识、我们的责任。公共福利是全人类共有的,我们共同拥有它,而且它是一笔无法估量的财富。
|
||||
|
||||
### 便利贴
|
||||
|
||||
1. (几乎)没有一款软件是完美无缺的。
|
||||
2. 有很好的专有软件。
|
||||
3. 有不好的专有软件。
|
||||
4. 有聪明、有才能、专注的人在开发专有软件。
|
||||
5. 从事开发完善专有软件的人是有限的,即使在政府或者公共组织也是如此。
|
||||
6. 相对来说从事开源软件的人是无限的。
|
||||
7. …而且包括很多从事专有软件的人才。
|
||||
8. 政府和公共组织经常会开源它们的软件。
|
||||
9. 有商业组织会为你使用的开源软件提供支持。
|
||||
10. 参与贡献,哪怕只是使用,也是在为开源软件做贡献。
|
||||
|
||||
|
||||
|
||||
1 OK--you can put your hands down now.
|
||||
|
||||
2 Should this be capitalized? Is there a particular field, or how does it work? I'm not sure.
|
||||
|
||||
3 I have a degree in English literature and theology--this probably won't surprise regular readers of my articles.4
|
||||
|
||||
4 Not, I hope, because I spout too much theology,5 but because it's often full of long-winded, irrelevant humanities (U.S. English: "liberal arts") references.
|
||||
|
||||
5 Emacs. Every time.
|
||||
|
||||
6 Not even Emacs. And yes, I know that there are techniques to prove the correctness of some software. (I suspect that Emacs doesn't pass many of them…)
|
||||
|
||||
7 Hand up here: I'm employed by one of them, [Red Hat][3]. Go have a look--it's a fun place to work, and [we're usually hiring][4].
|
||||
|
||||
8 Assuming that they fully abide by the rules of the open source licence(s) they're using, that is.
|
||||
|
||||
9 Erstwhile "Lord Protector of England, Scotland, and Ireland"--that Cromwell.
|
||||
|
||||
10 Oh, and choose Emacs over Vi variants, obviously.
|
||||
|
||||
This article originally appeared on [Alice, Eve, and Bob - a security blog][5] and is republished with permission.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/11/commonwealth-open-source
|
||||
|
||||
作者:[Mike Bursell][a]
|
||||
译者:[FelixYFZ](https://github.com/FelixYFZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/mikecamel
|
||||
[1]:https://opensource.com/article/17/10/many-eyes
|
||||
[2]:https://en.wikipedia.org/wiki/Apologetics
|
||||
[3]:https://www.redhat.com/
|
||||
[4]:https://www.redhat.com/en/jobs
|
||||
[5]:https://aliceevebob.com/2017/10/24/the-commonwealth-of-open-source/
|
141
translated/tech/20171221 A Commandline Food Recipes Manager.md
Normal file
@ -0,0 +1,141 @@
|
||||
HeRM’s - 一个命令行食谱管理器
|
||||
======
|
||||
![配图](https://www.ostechnix.com/wp-content/uploads/2017/12/herms-720x340.jpg)
|
||||
|
||||
烹饪让爱变得可见,不是吗?确实!烹饪也许是你的热情、爱好或者职业,我相信你会想维护一份烹饪日记,坚持写烹饪日记也是改善烹饪习惯的一种方法。记录食谱的方法有很多:你可以用一个小本子做笔记,或者把食谱存在智能手机里,也可以保存在计算机的文档中,选择非常多。今天,我要介绍的是 **HeRM's**,一个基于 Haskell 的命令行食谱管理器,用来为你的美食食谱做笔记。使用 HeRM's,你可以添加、查看、编辑和删除食谱,甚至可以制作购物清单,而这些全部在你的终端里完成!它是免费的、使用 Haskell 语言编写的开源程序。源代码在 GitHub 上公开,因此你可以 fork 它、添加更多功能或者改进它。
|
||||
|
||||
### HeRM's - 一个命令行食谱管理器
|
||||
|
||||
#### **安装 HeRM's**
|
||||
|
||||
由于它是使用 Haskell 编写的,因此我们需要首先安装 Cabal。 Cabal 是一个用于下载和编译用 Haskell 语言编写的软件的命令行程序。Cabal 存在于大多数 Linux 发行版的核心软件库中,因此你可以使用发行版的默认软件包管理器来安装它。
|
||||
|
||||
例如,你可以使用以下命令在 Arch Linux 及其变体(如 Antergos、Manjaro Linux)中安装 cabal:
|
||||
```
|
||||
sudo pacman -S cabal-install
|
||||
```
|
||||
|
||||
在 Debian、Ubuntu 上:
|
||||
```
|
||||
sudo apt-get install cabal-install
|
||||
```
|
||||
|
||||
安装 Cabal 后,确保你已经添加了 PATH。为此,请编辑你的 **~/.bashrc** :
|
||||
```
|
||||
vi ~/.bashrc
|
||||
```
|
||||
|
||||
添加下面这行:
|
||||
```
|
||||
PATH=$PATH:~/.cabal/bin
|
||||
```
|
||||
|
||||
按 **:wq** 保存并退出文件。然后,运行以下命令更新所做的更改。
|
||||
```
|
||||
source ~/.bashrc
|
||||
```
|
||||
|
||||
安装 cabal 后,运行以下命令安装 herms:
|
||||
```
|
||||
cabal install herms
|
||||
```
|
||||
|
||||
喝一杯咖啡!这将需要一段时间。几分钟后,你会看到一个输出,如下所示。
|
||||
```
|
||||
[...]
|
||||
Linking dist/build/herms/herms ...
|
||||
Installing executable(s) in /home/sk/.cabal/bin
|
||||
Installed herms-1.8.1.2
|
||||
```
|
||||
|
||||
恭喜! Herms 已经安装完成。
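在继续之前,可以先确认 herms 可执行文件确实已经在 PATH 中(假设你按前面的步骤把 `~/.cabal/bin` 加入了 PATH):

```
which herms
herms -h
```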
|
||||
|
||||
#### **添加食谱**
|
||||
|
||||
让我们添加一个食谱,例如 **Dosa**。对于那些想知道的,Dosa 是一种受欢迎的南印度食物,配以 **sambar** 和**酸辣酱**。这是一种健康的,可以说是最美味的食物。它不含添加的糖或饱和脂肪。制作一个也很容易。有几种不同的 Dosas,在我们家中最常见的是 Plain Dosa。
|
||||
|
||||
要添加食谱,请输入:
|
||||
```
|
||||
herms add
|
||||
```
|
||||
|
||||
你会看到一个如下所示的屏幕。开始输入食谱的详细信息。
|
||||
|
||||
[![][1]][2]
|
||||
|
||||
要变换字段,请使用以下键盘快捷键:
|
||||
|
||||
* **Tab / Shift+Tab** - 下一个/前一个字段
|
||||
* **Ctrl + <箭头键>** - 导航字段
|
||||
* **[Meta 或者 Alt] + <h-j-k-l>** - 导航字段
|
||||
* **Esc** - 保存或取消。
|
||||
|
||||
|
||||
|
||||
添加完配方的详细信息后,按下 ESC 键并点击 Y 保存。同样,你可以根据需要添加尽可能多的食谱。
|
||||
|
||||
要列出添加的食谱,输入:
|
||||
```
|
||||
herms list
|
||||
```
|
||||
|
||||
[![][1]][3]
|
||||
|
||||
要查看上面列出的任何食谱的详细信息,请使用下面的相应编号。
|
||||
```
|
||||
herms view 1
|
||||
```
|
||||
|
||||
[![][1]][4]
|
||||
|
||||
要编辑任何食谱,使用:
|
||||
```
|
||||
herms edit 1
|
||||
```
|
||||
|
||||
完成更改后,按下 ESC 键。系统会询问你是否要保存。你只需选择适当的选项。
|
||||
|
||||
[![][1]][5]
|
||||
|
||||
要删除食谱,命令是:
|
||||
```
|
||||
herms remove 1
|
||||
```
|
||||
|
||||
要为指定食谱生成购物清单,运行:
|
||||
```
|
||||
herms shopping 1
|
||||
```
|
||||
|
||||
[![][1]][6]
|
||||
|
||||
要获得帮助,运行:
|
||||
```
|
||||
herms -h
|
||||
```
|
||||
|
||||
当你下次听到你的同事、朋友或其他地方谈到好的食谱时,只需打开 Herms,并快速记下,并将它们分享给你的配偶。她会很高兴!
|
||||
|
||||
今天就是这些。还有更好的东西。敬请关注!
|
||||
|
||||
干杯!!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/herms-commandline-food-recipes-manager/
|
||||
|
||||
作者:[][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com
|
||||
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2017/12/Make-Dosa-1.png ()
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-1-1.png ()
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-2.png ()
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-3.png ()
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-4.png ()
|
@ -1,187 +0,0 @@
|
||||
Python全局,局部和非局部变量(带示例)
|
||||
======
|
||||
|
||||
### 全局变量
|
||||
|
||||
在Python中,在函数之外或在全局范围内声明的变量被称为全局变量。 这意味着,全局变量可以在函数内部或外部访问。
|
||||
|
||||
我们来看一个关于如何在Python中创建一个全局变量的示例。
|
||||
|
||||
#### 示例1:创建全局变量
|
||||
```python
|
||||
x = "global"
|
||||
|
||||
def foo():
|
||||
print("x inside :", x)
|
||||
|
||||
foo()
|
||||
print("x outside:", x)
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
```
|
||||
x inside : global
|
||||
x outside: global
|
||||
```
|
||||
|
||||
在上面的代码中,我们创建了x作为全局变量,并定义了一个`foo()`来打印全局变量x。 最后,我们调用`foo()`来打印x的值。
|
||||
|
||||
倘若你想改变一个函数内的x的值该怎么办?
|
||||
|
||||
```python
|
||||
x = "global"
|
||||
|
||||
def foo():
|
||||
x = x * 2
|
||||
print(x)
|
||||
foo()
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
```
|
||||
UnboundLocalError: local variable 'x' referenced before assignment
|
||||
```
|
||||
|
||||
输出显示一个错误,因为 Python 将 x 视为局部变量,而 x 并没有在 `foo()` 内部定义。
|
||||
|
||||
为了让它正常运行,我们需要使用 `global` 关键字,请访问 [Python global 关键字][1] 以了解更多。
|
||||
|
||||
### 局部变量
|
||||
|
||||
在函数体内或局部作用域内声明的变量称为局部变量。
|
||||
|
||||
#### 示例2:访问作用域外的局部变量
|
||||
|
||||
```python
|
||||
def foo():
|
||||
y = "local"
|
||||
|
||||
foo()
|
||||
print(y)
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
```
|
||||
NameError: name 'y' is not defined
|
||||
```
|
||||
|
||||
输出显示了一个错误,因为我们试图在全局范围内访问局部变量y,而局部变量只能在`foo() `函数内部或局部作用域内有效。
|
||||
|
||||
我们来看一个关于如何在Python中创建一个局部变量的例子。
|
||||
|
||||
#### 示例3:创建一个局部变量
|
||||
|
||||
通常,我们在函数内声明一个变量来创建一个局部变量。
|
||||
```python
|
||||
def foo():
|
||||
y = "local"
|
||||
print(y)
|
||||
|
||||
foo()
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
```
|
||||
local
|
||||
```
|
||||
|
||||
让我们来看看前面的问题,其中x是一个全局变量,我们想修改`foo()`内部的x。
|
||||
|
||||
### 全局变量和局部变量
|
||||
|
||||
在这里,我们将展示如何在同一份代码中使用全局变量和局部变量。
|
||||
|
||||
#### 示例4:在同一份代码中使用全局变量和局部变量
|
||||
```python
|
||||
x = "global"
|
||||
|
||||
def foo():
|
||||
global x
|
||||
y = "local"
|
||||
x = x * 2
|
||||
print(x)
|
||||
print(y)
|
||||
|
||||
foo()
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出(译者注:原文中输出结果的两个global有空格,正确的是没有空格):
|
||||
```
|
||||
globalglobal
|
||||
local
|
||||
```
|
||||
|
||||
在上面的代码中,我们将x声明为全局变量,将y声明为`foo()`中的局部变量。 然后,我们使用乘法运算符`*`来修改全局变量x,并打印x和y。
|
||||
|
||||
在调用`foo()`之后,x的值变成`globalglobal`了(译者注:原文同样有空格,正确的是没有空格),因为我们使用`x * 2`打印两次`global`。 之后,我们打印局部变量y的值,即`local`。
|
||||
|
||||
#### 示例5:具有相同名称的全局变量和局部变量
|
||||
```python
|
||||
x = 5
|
||||
|
||||
def foo():
|
||||
x = 10
|
||||
print("local x:", x)
|
||||
|
||||
foo()
|
||||
print("global x:", x)
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
```
|
||||
local x: 10
|
||||
global x: 5
|
||||
```
|
||||
|
||||
在上面的代码中,我们对全局变量和局部变量使用了相同的名称x。 当我们打印相同的变量时却得到了不同的结果,因为这两个作用域内都声明了变量,即`foo()`内部的局部作用域和`foo()`外面的全局作用域。
|
||||
|
||||
当我们在`foo()`内部打印变量时,它输出`local x: 10`,这被称为变量的局部作用域。
|
||||
|
||||
同样,当我们在`foo()`外部打印变量时,它输出`global x: 5`,这被称为变量的全局作用域。
|
||||
|
||||
### 非局部变量
|
||||
|
||||
非局部变量用于局部作用域未定义的嵌套函数。 这意味着,变量既不能在局部也不能在全局范围内。
|
||||
|
||||
我们来看一个关于如何在Python中创建一个非局部变量的例子。(译者注:原文为创建全局变量,疑为笔误)
|
||||
|
||||
我们使用`nonlocal`关键字来创建非局部变量。
|
||||
|
||||
#### 例6:创建一个非局部变量
|
||||
```python
|
||||
def outer():
|
||||
x = "local"
|
||||
|
||||
def inner():
|
||||
nonlocal x
|
||||
x = "nonlocal"
|
||||
print("inner:", x)
|
||||
|
||||
inner()
|
||||
print("outer:", x)
|
||||
|
||||
outer()
|
||||
```
|
||||
|
||||
当我们运行代码时,将会输出:
|
||||
```
|
||||
inner: nonlocal
|
||||
outer: nonlocal
|
||||
```
|
||||
|
||||
在上面的代码中有一个嵌套函数`inner()`。 我们使用`nonlocal`关键字来创建非局部变量。`inner()`函数是在另一个函数`outer()`的作用域中定义的。
|
||||
|
||||
注意:如果我们改变非局部变量的值,那么变化就会出现在局部变量中。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.programiz.com/python-programming/global-local-nonlocal-variables
|
||||
|
||||
作者:[programiz][a]
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.programiz.com/
|
||||
[1]:https://www.programiz.com/python-programming/global-keyword
|
@ -0,0 +1,311 @@
|
||||
如何使用 Android Things 和 TensorFlow 在物联网上应用机器学习
|
||||
============================================================
|
||||
|
||||
这个项目探索了如何将机器学习应用到物联网上。具体来说,物联网平台我们将使用 **Android Things**,而机器学习引擎我们将使用 **Google TensorFlow**。
|
||||
|
||||
![Machine Learning with Android Things](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/machine_learning_android_things.png)
|
||||
|
||||
现如今,机器学习是物联网领域最热门的主题之一。给机器学习下一个最简单的定义,可能就是 [维基百科上的定义][13]:机器学习是计算机科学的一个领域,它让计算机无需显式编程,就能利用数据去"学习"(即逐步提升在特定任务上的性能)。

换句话说,经过训练之后,哪怕系统没有针对某种情况进行过专门的编程,它也能够预测出结果。另一方面,我们都知道物联网和联网设备的概念,其中一个前景看好的方向,就是如何在物联网上应用机器学习,构建能够"学习"的专家系统。此外,还可以利用这些知识去控制和管理物理对象。
|
||||
|
||||
这里有几个应用机器学习和物联网产生重要价值的领域,以下仅提到了几个感兴趣的领域,它们是:
|
||||
|
||||
* 在工业物联网(IIoT)中的预见性维护
|
||||
|
||||
* 消费物联网中,机器学习可以让设备更智能,它通过调整使设备更适应我们的习惯
|
||||
|
||||
在本教程中,我们希望去探索如何使用 Android Things 和 TensorFlow 在物联网上应用机器学习。这个 Android Things 物联网项目的基本想法是,探索如何去*构建一个能够识别前方道路上基本形状(比如箭头)的无人驾驶汽车*。我们已经介绍了 [如何使用 Android Things 去构建一个无人驾驶汽车][5],因此,在开始这个项目之前,我们建议你去阅读那个教程。
|
||||
|
||||
这个机器学习和物联网项目包含如下的主题:
|
||||
|
||||
* 如何使用 Docker 配置 TensorFlow 环境
|
||||
|
||||
* 如何训练 TensorFlow 系统
|
||||
|
||||
* 如何使用 Android Things 去集成 TensorFlow
|
||||
|
||||
* 如何使用 TensorFlow 的成果去控制无人驾驶汽车
|
||||
|
||||
这个项目起源于 [Android Things TensorFlow 图像分类器][6]。
|
||||
|
||||
我们开始吧!
|
||||
|
||||
### 如何使用 Tensorflow 图像识别
|
||||
|
||||
在开始之前,需要安装和配置 TensorFlow 环境。我不是机器学习方面的专家,因此我需要找到一些能快速上手、拿来即用的东西,以便我们构建 TensorFlow 图像识别器。为此,我们使用 Docker 去运行一个 TensorFlow 镜像。以下是操作步骤:
|
||||
|
||||
1. 克隆 TensorFlow 仓库:
|
||||
```
|
||||
git clone https://github.com/tensorflow/tensorflow.git
|
||||
cd /tensorflow
|
||||
git checkout v1.5.0
|
||||
```
|
||||
|
||||
2. 创建一个目录(`/tf-data`),它将用于保存这个项目中使用的所有文件。
|
||||
|
||||
3. 运行 Docker:
|
||||
```
|
||||
docker run -it \
|
||||
--volume /tf-data:/tf-data \
|
||||
--volume /tensorflow:/tensorflow \
|
||||
--workdir /tensorflow tensorflow/tensorflow:1.5.0 bash
|
||||
```
|
||||
|
||||
使用这个命令,我们运行一个交互式 TensorFlow 环境,可以在使用项目期间挂载一些目录。
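进入容器后,可以先确认一下 TensorFlow 的版本是否符合预期(这里应该输出 1.5.0):

```
python -c "import tensorflow as tf; print(tf.__version__)"
```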
|
||||
|
||||
### 如何训练 TensorFlow 去识别图像
|
||||
|
||||
在 Android Things 系统能够识别图像之前,我们需要去训练 TensorFlow 引擎,以使它能够构建它的模型。为此,我们需要去收集一些图像。正如前面所言,我们需要使用箭头来控制 Android Things 无人驾驶汽车,因此,我们至少要收集四种类型的箭头:
|
||||
|
||||
* 向上的箭头
|
||||
|
||||
* 向下的箭头
|
||||
|
||||
* 向左的箭头
|
||||
|
||||
* 向右的箭头
|
||||
|
||||
为了训练这个系统,需要使用这四类不同的图像去创建一个"知识库"。在 `/tf-data` 目录下创建一个名为 `images` 的目录,然后在它下面创建如下名字的四个子目录(列表后面给出了一次性创建它们的示例命令):
|
||||
|
||||
* up-arrow
|
||||
|
||||
* down-arrow
|
||||
|
||||
* left-arrow
|
||||
|
||||
* right-arrow
|
||||
|
||||
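前面列出的四个子目录可以用一条命令创建好(假设 `/tf-data` 就是挂载给 Docker 容器的那个目录):

```
mkdir -p /tf-data/images/up-arrow /tf-data/images/down-arrow \
         /tf-data/images/left-arrow /tf-data/images/right-arrow
```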
现在,我们去找图片。我使用的是 Google 图片搜索,你也可以使用其它的方法。为了简化图片下载过程,你可以安装一个 Chrome 下载插件,这样你只需要点击就可以下载选定的图片。别忘了多下载一些图片,这样训练效果更好,当然,这样创建模型的时间也会相应增加。
|
||||
|
||||
**扩展阅读**
|
||||
[如何使用 API 去集成 Android Things][2]
|
||||
[如何与 Firebase 一起使用 Android Things][3]
|
||||
|
||||
打开浏览器,开始去查找四种箭头的图片:
|
||||
|
||||
![TensorFlow image classifier](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/TensorFlow-image-classifier.png)
|
||||
[Save][7]
|
||||
|
||||
每个类别我下载了 80 张图片。不用管图片文件的扩展名。
|
||||
|
||||
为所有类别的图片做一次如下的操作(在 Docker 界面下):
|
||||
|
||||
```
|
||||
python /tensorflow/examples/image_retraining/retrain.py \
|
||||
--bottleneck_dir=tf_files/bottlenecks \
|
||||
--how_many_training_steps=4000 \
|
||||
--output_graph=/tf-data/retrained_graph.pb \
|
||||
--output_labels=/tf-data/retrained_labels.txt \
|
||||
--image_dir=/tf-data/images
|
||||
```
|
||||
|
||||
这个过程你需要耐心等待,它需要花费很长时间。结束之后,你将在 `/tf-data` 目录下发现如下的两个文件:
|
||||
|
||||
1. retrained_graph.pb
|
||||
|
||||
2. retrained_labels.txt
|
||||
|
||||
第一个文件包含了 TensorFlow 训练过程产生的结果模型,而第二个文件包含了我们的四个图片类相关的标签。
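可以顺手确认一下这两个文件确实已经生成(在宿主机或容器内执行均可):

```
ls -lh /tf-data/retrained_graph.pb /tf-data/retrained_labels.txt
```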
|
||||
|
||||
### 如何测试 Tensorflow 模型
|
||||
|
||||
如果你想去测试这个模型,去验证它是否能按预期工作,你可以使用如下的命令:
|
||||
|
||||
```
|
||||
python scripts.label_image \
|
||||
--graph=/tf-data/retrained_graph.pb \
|
||||
--image=/tf-data/images/[category]/[image_name.jpg]
|
||||
```
|
||||
|
||||
### 优化模型
|
||||
|
||||
在 Android Things 项目中使用我们的 TensorFlow 模型之前,需要去优化它:
|
||||
|
||||
```
|
||||
python /tensorflow/python/tools/optimize_for_inference.py \
|
||||
--input=/tf-data/retrained_graph.pb \
|
||||
--output=/tf-data/opt_graph.pb \
|
||||
--input_names="Mul" \
|
||||
--output_names="final_result"
|
||||
```
|
||||
|
||||
这样,我们需要的模型就全部准备好了。接下来我们将使用这个模型,把 TensorFlow 与 Android Things 集成到一起,在物联网上应用机器学习。目标是让 Android Things 应用程序智能地识别箭头图片,并据此控制无人驾驶汽车接下来的行进方向。
|
||||
|
||||
如果你想去了解关于 TensorFlow 以及如何生成模型的更多细节,请查看官方文档以及这篇 [教程][8]。
|
||||
|
||||
### 如何使用 Android Things 和 TensorFlow 在物联网上应用机器学习
|
||||
|
||||
TensorFlow 的数据模型准备就绪之后,我们继续下一步:如何将 Android Things 与 TensorFlow 集成到一起。为此,我们将这个任务分为两步来完成:
|
||||
|
||||
1. 硬件部分,我们将把电机和其它部件连接到 Android Things 开发板上
|
||||
|
||||
2. 实现这个应用程序
|
||||
|
||||
### Android Things 示意图
|
||||
|
||||
在深入到如何连接外围部件之前,先列出在这个 Android Things 项目中使用到的组件清单:
|
||||
|
||||
1. Android Things 开发板(树莓派 3)
|
||||
|
||||
2. 树莓派摄像头
|
||||
|
||||
3. 一个 LED 灯
|
||||
|
||||
4. LN298N 双 H 桥电机驱动模块(连接控制电机)
|
||||
|
||||
5. 一个带两个轮子的无人驾驶汽车底盘
|
||||
|
||||
我不再重复 [如何使用 Android Things 去控制电机][9] 了,因为在以前的文章中已经讲过了。
|
||||
|
||||
下面是示意图:
|
||||
|
||||
![Integrating Android Things with IoT](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/tensor_bb.png)
|
||||
[Save][10]
|
||||
|
||||
上图中没有展示摄像头。最终成果如下图:
|
||||
|
||||
![Integrating Android Things with TensorFlow](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/android_things_with_tensorflow-min.jpg)
|
||||
[Save][11]
|
||||
|
||||
### 使用 TensorFlow 实现 Android Things 应用程序
|
||||
|
||||
最后一步是实现 Android Things 应用程序。为此,我们可以复用 Github 上名为 [TensorFlow 图片分类器示例][12] 的示例代码。开始之前,先克隆 Github 仓库,这样你就可以修改源代码。
|
||||
|
||||
这个 Android Things 应用程序与原始的应用程序是不一样的,因为:
|
||||
|
||||
1. 它不使用按钮去开启摄像头图像捕获
|
||||
|
||||
2. 它使用了不同的模型
|
||||
|
||||
3. 它使用一个闪烁的 LED 灯来提示,摄像头将在 LED 停止闪烁后拍照
|
||||
|
||||
4. 当 TensorFlow 检测到图像时(箭头)它将控制电机。此外,在第 3 步的循环开始之前,它将打开电机 5 秒钟。
|
||||
|
||||
为了让 LED 闪烁,使用如下的代码:
|
||||
|
||||
```
|
||||
private Handler blinkingHandler = new Handler();
|
||||
private Runnable blinkingLED = new Runnable() {
|
||||
@Override
|
||||
public void run() {
|
||||
try {
|
||||
// If the motor is running the app does not start the cam
|
||||
if (mc.getStatus())
|
||||
return ;
|
||||
|
||||
Log.d(TAG, "Blinking..");
|
||||
mReadyLED.setValue(!mReadyLED.getValue());
|
||||
if (currentValue <= NUM_OF_TIMES) {
|
||||
currentValue++;
|
||||
blinkingHandler.postDelayed(blinkingLED,
|
||||
BLINKING_INTERVAL_MS);
|
||||
}
|
||||
else {
|
||||
mReadyLED.setValue(false);
|
||||
currentValue = 0;
|
||||
mBackgroundHandler.post(mBackgroundClickHandler);
|
||||
}
|
||||
} catch (IOException e) {
|
||||
e.printStackTrace();
|
||||
}
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
当 LED 停止闪烁后,应用程序将捕获图片。
|
||||
|
||||
现在需要去关心如何根据检测到的图片去控制电机。修改这个方法:
|
||||
|
||||
```
|
||||
@Override
|
||||
public void onImageAvailable(ImageReader reader) {
|
||||
final Bitmap bitmap;
|
||||
try (Image image = reader.acquireNextImage()) {
|
||||
bitmap = mImagePreprocessor.preprocessImage(image);
|
||||
}
|
||||
|
||||
final List<Classifier.Recognition> results =
|
||||
mTensorFlowClassifier.doRecognize(bitmap);
|
||||
|
||||
Log.d(TAG,
|
||||
"Got the following results from Tensorflow: " + results);
|
||||
|
||||
// Check the result
|
||||
if (results == null || results.size() == 0) {
|
||||
Log.d(TAG, "No command..");
|
||||
blinkingHandler.post(blinkingLED);
|
||||
return ;
|
||||
}
|
||||
|
||||
Classifier.Recognition rec = results.get(0);
|
||||
Float confidence = rec.getConfidence();
|
||||
Log.d(TAG, "Confidence " + confidence.floatValue());
|
||||
|
||||
if (confidence.floatValue() < 0.55) {
|
||||
Log.d(TAG, "Confidence too low..");
|
||||
blinkingHandler.post(blinkingLED);
|
||||
return ;
|
||||
}
|
||||
|
||||
String command = rec.getTitle();
|
||||
Log.d(TAG, "Command: " + rec.getTitle());
|
||||
|
||||
if (command.indexOf("down") != -1)
|
||||
mc.backward();
|
||||
else if (command.indexOf("up") != -1)
|
||||
mc.forward();
|
||||
else if (command.indexOf("left") != -1)
|
||||
mc.turnLeft();
|
||||
else if (command.indexOf("right") != -1)
|
||||
mc.turnRight();
|
||||
}
|
||||
```
|
||||
|
||||
在这个方法中,当 TensorFlow 返回捕获的图片匹配到的可能的标签之后,应用程序将比较这个结果与可能的方向,并因此来控制电机。
|
||||
|
||||
最后,就要用上前面创建的模型了。把 `opt_graph.pb` 和 `retrained_labels.txt` 拷贝到 _assets_ 文件夹下,替换现有的文件。
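这一步大致相当于下面这样的一次拷贝(工程目录按你实际克隆的位置调整,这里的路径只是示例):

```
cp /tf-data/opt_graph.pb /tf-data/retrained_labels.txt \
   sample-tensorflow-imageclassifier/app/src/main/assets/
```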
|
||||
|
||||
打开 `Helper.java` 并修改如下的行:
|
||||
|
||||
```
|
||||
public static final int IMAGE_SIZE = 299;
|
||||
private static final int IMAGE_MEAN = 128;
|
||||
private static final float IMAGE_STD = 128;
|
||||
private static final String LABELS_FILE = "retrained_labels.txt";
|
||||
public static final String MODEL_FILE = "file:///android_asset/opt_graph.pb";
|
||||
public static final String INPUT_NAME = "Mul";
|
||||
public static final String OUTPUT_OPERATION = "output";
|
||||
public static final String OUTPUT_NAME = "final_result";
|
||||
```
|
||||
|
||||
运行这个应用程序,并给摄像头展示几种箭头,以检查它的反应。无人驾驶汽车将根据展示的箭头进行移动。
|
||||
|
||||
### 总结
|
||||
|
||||
教程到此结束,我们讲解了如何使用 Android Things 和 TensorFlow 在物联网上应用机器学习。我们使用图片去控制无人驾驶汽车的移动。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html
|
||||
|
||||
作者:[Francesco Azzola ][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.survivingwithandroid.com/author/francesco-azzolagmail-com
|
||||
[1]:https://www.survivingwithandroid.com/author/francesco-azzolagmail-com
|
||||
[2]:https://www.survivingwithandroid.com/2017/11/building-a-restful-api-interface-using-android-things.html
|
||||
[3]:https://www.survivingwithandroid.com/2017/10/synchronize-android-things-with-firebase-real-time-control-firebase-iot.html
|
||||
[4]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Machine%20Learning%20with%20Android%20Things
|
||||
[5]:https://www.survivingwithandroid.com/2017/12/building-a-remote-controlled-car-using-android-things-gpio.html
|
||||
[6]:https://github.com/androidthings/sample-tensorflow-imageclassifier
|
||||
[7]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=TensorFlow%20image%20classifier
|
||||
[8]:https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0
|
||||
[9]:https://www.survivingwithandroid.com/2017/12/building-a-remote-controlled-car-using-android-things-gpio.html
|
||||
[10]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Integrating%20Android%20Things%20with%20IoT
|
||||
[11]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Integrating%20Android%20Things%20with%20TensorFlow
|
||||
[12]:https://github.com/androidthings/sample-tensorflow-imageclassifier
|
||||
[13]:https://en.wikipedia.org/wiki/Machine_learning
|
@ -0,0 +1,151 @@
|
||||
|
||||
|
||||
如何使用树莓派制作一个数字针孔摄像头
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rasp-pi-pinhole-howto.png?itok=ubmevVZB)
|
||||
在 2015 年底的时候,树莓派基金会发布了一个非常小的 [Raspberry Pi Zero][1],这让大家感到很意外。更夸张的是,他们随 MagPi 杂志一起 [免费赠送][2]。我看到这个消息后立即冲出去到处找报刊亭,直到我在这一区域的某处找到最后两份。实际上我还没有想好如何去使用它们,但是我知道,因为它们非常小,所以,它们可以做很多全尺寸树莓派没法做的一些项目。
|
||||
|
||||
|
||||
![Raspberry Pi Zero][4]
|
||||
|
||||
从 MagPi 杂志上获得的树莓派 Zero。CC BY-SA.4.0。
|
||||
|
||||
因为我对天文摄影非常感兴趣,我以前还改过一台微软出的 LifeCam Cinema 高清网络摄像头,拆掉了它的外壳、镜头、以及红外滤镜,露出了它的 [CCD 芯片][5]。我把它定制为我的 Celestron 天文望远镜的目镜。用它我捕获到了令人难以置信的木星照片、月球上的陨石坑、以及太阳黑子的特写镜头(使用了适当的 Baader 安全保护膜)。
|
||||
|
||||
在那之前,我甚至还在我的使用胶片的 SLR 摄像机上,通过在镜头盖(这个盖子就是在摄像机上没有安装镜头时,用来保护摄像机的内部元件的那个盖子)上钻一个小孔来变成一个 [针孔摄像头][6],将这个钻了小孔的盖子,盖到一个汽水罐上切下来的小圆盘上,以提供一个针孔。碰巧有一天,在我的桌子上发现了一个可以用来做针孔体的盖子,随后我将它改成了用于天文摄像的网络摄像头。我很好奇这个网络摄像头是否有从针孔盖子后面捕获低照度图像的能力。我花了一些时间使用 [GNOME Cheese][7] 应用程序,去验证这个针孔摄像头确实是个可行的创意。
|
||||
|
||||
自从有了这个想法,我就有了树莓派 Zero 的一个用法!针孔摄像机一般都非常小,除了曝光时间和胶片的 ISO 速率外,一般都不提供其它的控制选项。数字摄像机就不一样了,它至少有 20 多个按钮和成百上千的设置菜单。我的数字针孔摄像头的目标是真实反映天文摄像的传统风格,设计一个没有控制选项的极简设备,甚至连曝光时间控制也没有。
|
||||
|
||||
用树莓派 Zero、高清网络镜头和空粉盒设计的数字针孔摄像头,是我设计的 [一系列][9] 针孔摄像头的 [第一个项目][8]。现在,我们开始来制作它。
|
||||
|
||||
### 硬件
|
||||
|
||||
因为我手头已经有了一个树莓派 Zero,为完成这个项目我还需要一个网络摄像头。这个树莓派 Zero 在英国的零售价是 4 英磅,这个项目其它部件的价格,我希望也差不多是这样的价格水平。花费 30 英磅买来的摄像头安装在一个 4 英磅的计算机主板上,让我感觉有些不平衡。显而易见的答案是前往一个知名的拍卖网站上,去找到一些二手的网络摄像头。不久之后,我仅花费了 1 英磅再加一些运费,获得了一个普通的高清摄像头。之后,在 Fedora 上做了一些测试操作,以确保它是可用正常使用的,我拆掉了它的外壳,以检查它的电子元件的大小是否适合我的项目。
|
||||
|
||||
|
||||
![Hercules DualPix HD webcam][11]
|
||||
|
||||
Hercules DualPix 高清网络摄像头,它将被解剖以提取它的电路板和 CCD 图像传感器。CC BY-SA 4.0.
|
||||
|
||||
接下来,我需要一个安放摄像头的外壳。树莓派 Zero 电路板大小仅仅为 65mm x 30mm x 5mm。虽然网络摄像头的 CCD 芯片周围有一个用来安装镜头的塑料支架,但是,实际上它的电路板尺寸更小。我在家里找了一圈,希望能够找到一个适合盛放这两个极小的电路板的容器。最后,我发现我妻子的粉盒足够去安放树莓派的电路板。稍微做一些小调整,似乎也可以将网络摄像头的电路板放进去。
|
||||
|
||||
![Powder compact][13]
|
||||
|
||||
变成我的针孔摄像头外壳的粉盒。CC BY-SA 4.0.
|
||||
|
||||
我拆掉了网络摄像头外壳上的一些螺丝,取出了它的内部元件。网络摄像头外壳的大小反映了它的电路板的大小或 CCD 的位置。我很幸运,这个网络摄像头很小而且它的电路板的布局也很方便。因为我要做一个针孔摄像头,我需要取掉镜像,露出 CCD 芯片。
|
||||
|
||||
它的塑料外壳大约有 1 厘米高,它太高了没有办法放进粉盒里。我拆掉了电路板后面的螺丝,将它完全拆开,我认为将它放在盒子里有助于阻挡从缝隙中来的光线,因此,我用一把工艺刀将它修改成 4 毫米高,然后将它重新安装。我折弯了 LED 的支脚以降低它的高度。最后,我切掉了安装麦克风的塑料管,因为我不想采集声音。
|
||||
|
||||
![Bare CCD chip][15]
|
||||
|
||||
取下镜头以后,就可以看到裸露的 CCD 芯片了。圆柱形的塑料柱将镜头固定在合适的位置上,并阻挡 LED 光进入镜头破坏图像。CC BY-SA 4.0
|
||||
|
||||
网络摄像头有一个很长的带全尺寸插头的 USB 线缆,而树莓派 Zero 使用的是一个 Micro-USB 插座,因此,我需要一个 USB-to-Micro-USB 适配器。但是,使用适配器插入,这个树莓派将放不进这个粉盒中,更不用说还有将近一米长的 USB 线缆。因此,我用刀将 Micro-USB 适配器削开,切掉了它的 USB 插座并去掉这些塑料,露出连接到 Micro-USB 插头上的金属材料。同时也切掉了网络摄像头上大约 6 厘米长的 USB 电缆,并剥掉裹在它外面的锡纸,露出它的四根电线。我把它们直接焊接到 Micro-USB 插头上。现在网络摄像头可以插入到树莓派 Zero 上了,并且电线也可以放到粉盒中了。
|
||||
|
||||
![Modified USB plugs][17]
|
||||
|
||||
网络摄像头使用的 Micro-USB 插头已经剥掉了线,并直接焊接到触点上。这个插头现在插入到树莓派 Zero 后大约仅高出树莓派 1 厘米。CC BY-SA 4.0
|
||||
|
||||
最初,我认为到此为止,已经全部完成了电子设计部分,但是在测试之后,我意识到,如果摄像头没有捕获图像或者甚至没有加电我都不知道。我决定使用树莓派的 GPIO 针脚去驱动 LED 指示灯。一个黄色的 LED 表示网络摄像头控制软件已经运行,而一个绿色的 LED 表示网络摄像头正在捕获图像。我在 BCM 的 17 号和 18 号针脚上各自串接一个 300 欧姆的电阻,并将它们各自连接到 LED 的正极上,然后将 LED 的负极连接到一起并接入到公共地针脚上。
|
||||
|
||||
![LEDs][19]
|
||||
|
||||
LED 使用一个 300 欧姆的电阻连接到 GPIO 的 BCM 17 号和 BCM 18 号针脚上,负极连接到公共地针脚。CC BY-SA 4.0.
|
||||
|
||||
接下来,该去修改粉盒了。首先,我去掉了卡在粉盒上的托盘以腾出更多的空间,并且用刀将连接处切开。我打算在一个便携式充电宝上运行树莓派 Zero,充电宝肯定是放不到这个盒子里面,因此,我挖了一个孔,这样就可以引出 USB 连接头。LED 的光需要能够从盒子外面看得见,因此,我在盖子上钻了两个 3 毫米的小孔。
|
||||
|
||||
然后,我使用一个 6 毫米的钻头在盖子的底部中间处钻了一个孔,并找了一个薄片盖在它上面,然后用针在它的中央扎了一个小孔。一定要确保针尖很细,因为如果插入整个针会使孔太大。我使用干/湿砂纸去打磨这个针孔,以让它更光滑,然后从另一面再次打孔,再强调一次仅使用针尖。使用针孔摄像头的目的是为了得到一个规则的、没有畸形或凸起的圆孔,并且勉强让光通过。孔越小,图像越锐利。
|
||||
|
||||
![Bottom of the case with the pinhole aperture][21]
|
||||
|
||||
带针孔的盒子底部。CC BY-SA 4.0
|
||||
|
||||
剩下的工作就是将这些已经改造完成的设备封装起来。首先我使用蓝色腻子将摄像头的电路板固定在盒子中合适的位置,这样针孔就直接处于 CCD 之上了。使用蓝色腻子的好处是,如果我需要清理污渍(或者如果放错了位置)时,就可以很容易地重新安装 CCD 了。将树莓派 Zero 直接放在摄像头电路板上面。为防止这两个电路板之间可能出现的短路情况,我在这两个电路板之间放了几层防静电袋。
|
||||
|
||||
[树莓派 Zero][22] 非常适合放到这个粉盒中,并且不需要任何固定,而从小孔中穿出去连接充电宝的 USB 电缆需要将它粘住固定。最后,我将 LED 塞进了前面在盒子上钻的孔中,并用胶水将它们固定住。我在 LED 的针脚之中放了一些防静电袋,以防止盒子盖上时,它与树莓派电路板接触而发生短路。
|
||||
|
||||
![Raspberry Pi Zero slotted into the case][24]
|
||||
|
||||
树莓派 Zero 塞入到这个盒子中后,周围的空隙不到 1mm。从盒子中引出的连接到网络摄像头上的 Micro-USB 插头,接下来需要将它连接到充电宝上。CC BY-SA 4.0
|
||||
|
||||
### 软件
|
||||
|
||||
当然,计算机硬件离开控制它的软件是不能使用的。树莓派 Zero 同样可以运行全尺寸树莓派能够运行的软件,但是,因为树莓派 Zero 的 CPU 速度比较慢,让它去引导传统的 [Raspbian OS][25] 镜像非常耗时。打开摄像头都要花费差不多一分钟的时间,这样的速度在现实中是没有什么用处的。而且,一个完整的树莓派操作系统对我的这个摄像头项目来说也没有必要。甚至是,我禁用了引导时启动的所有可禁用的服务,启动仍然需要很长的时间。因此,我决定仅使用需要的软件,我将用一个 [U-Boot][26] 引导加载器和 Linux 内核。自定义 `init` 二进制文件从 microSD 卡上加载 root 文件系统,读入驱动网络摄像头所需要的内核模块,并将它放在 `/dev` 目录下,然后运行二进制的应用程序。
|
||||
|
||||
这个二进制的应用程序是另一个定制的 C 程序,它做的核心工作就是管理摄像头。首先,它等待内核驱动程序去初始化网络摄像头、打开它、以及通过低级的 `v4l ioctl` 调用去初始化它。GPIO 针是通过 `/dev/mem` 寄存器被配置为驱动 LED。
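作者是在 C 代码里通过 `/dev/mem` 直接映射寄存器来驱动这两个 LED 的;如果只是想在命令行里快速验证接线,也可以借助传统的 sysfs GPIO 接口做个粗略测试(仅作示意,较新的内核中该接口已被标记为过时):

```
# 导出 BCM 17(黄色 LED),设置为输出并点亮、熄灭
echo 17 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio17/direction
echo 1 > /sys/class/gpio/gpio17/value
echo 0 > /sys/class/gpio/gpio17/value
echo 17 > /sys/class/gpio/unexport
```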
|
||||
|
||||
初始化完成之后,摄像头进入一个 loop 循环。每个图像捕获循环是摄像头使用缺省配置,以 JPEG 格式捕获一个单一的图像帧、保存这个图像帧到 SD 卡、然后休眠三秒。这个循环持续运行直到断电为止。这已经很完美地实现了我的最初目标,那就是用一个传统的模拟的针孔摄像头设计一个简单的数字摄像头。
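作者的捕获循环是用 C 直接调用 v4l 接口实现的;为了便于理解,它的逻辑大致相当于下面这个 shell 草图(假设系统里装有 fswebcam,输出路径只是示例):

```
#!/bin/sh
# 每 3 秒按默认参数抓取一帧 JPEG,按序号保存到 SD 卡上,直到断电为止
n=0
while true; do
    fswebcam --no-banner "/mnt/sd/frame-$(printf '%05d' "$n").jpg"
    n=$((n + 1))
    sleep 3
done
```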
|
||||
|
||||
定制的用户空间 [代码][27] 在遵守 [GPLv3][28] 或者更新版许可下自由使用。树莓派 Zero 需要 ARMv6 的二进制文件,因此,我使用了 [QEMU ARM][29] 模拟器在一台 x86_64 主机上进行编译,它使用了 [Pignus][30] 发行版(一个针对 ARMv6 移植/重构建的 Fedora 23 版本)下的工具链,在 `chroot` 下进行编译。所有的二进制文件都静态链接了 [glibc][31],因此,它们都是自包含的。我构建了一个定制的 RAMDisk 去包含这些二进制文件和所需的内核模块,并将它拷贝到 SD 卡,这样引导加载器就可以找到它们了。
|
||||
|
||||
![Completed camera][33]
|
||||
|
||||
最终完成的摄像机完全隐藏在这个粉盒中了。唯一露在外面的东西就是 USB 电缆。CC BY-SA 4.0
|
||||
|
||||
### 拍照
|
||||
|
||||
软件和硬件已经完成了,是该去验证一下它能做什么了。每个人都熟悉用现代的数字摄像头拍摄的高质量图像,不论它是用专业的 DSLRs 还是移动电话拍的。但是,这个高清的 1280x1024 分辨率的网络摄像头(差不多是一百万像素),在这里可能会让你失望。这个 CCD 从一个光通量极小的针孔中努力捕获图像。网络摄像头自动提升增益和曝光时间来进行补偿,最后的结果是一幅噪点很高的图像。图像的动态范围也非常窄,从一个非常拥挤的柱状图就可以看出来,这可以通过后期处理来拉长它,以得到更真实的亮部和暗部。
|
||||
|
||||
在户外阳光充足时捕获的图像达到了最佳效果,因此在室内获得的图像大多数都是不可用的图像。它的 CCD 直径仅有大约 1cm,并且是从一个几毫米的针孔中来捕获图像的,它的视界相当窄。比如,在自拍时,手臂拿着相机尽可能伸长,所获得的图像也就是充满整个画面的人头。最后,图像都是失焦的,所有的针孔摄像机都是这样的。
|
||||
|
||||
![Picture of houses taken with pinhole webcam][35]
|
||||
|
||||
在伦敦,大街上的屋顶。CC BY-SA 4.0
|
||||
|
||||
![Airport photo][37]
|
||||
|
||||
范堡罗机场的老航站楼。CC BY-SA 4.0
|
||||
|
||||
最初,我只是想使用摄像头去捕获一些静态图像。后面,我降低了 loop 循环的延迟时间,从三秒改为一秒,然后用它捕获一段时间内的一系列图像。我使用 [GStreamer][38] 将这些图像做成了延时视频。
|
||||
|
||||
以下是我创建视频的过程:
|
||||
|
||||
[][38]
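如果想自己复现"把图片序列合成延时视频"这一步,用 GStreamer 可以拼出类似下面的管线。这只是一个用常见元素写成的示意,并不一定是作者当时使用的确切命令,帧率、文件名模式和输出格式都需要按实际情况调整:

```
gst-launch-1.0 multifilesrc location="frame-%05d.jpg" index=0 \
    caps="image/jpeg,framerate=20/1" \
    ! jpegdec ! videoconvert ! x264enc ! mp4mux \
    ! filesink location=timelapse.mp4
```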
|
||||
|
||||
视频是我在某天下班之后,从银行沿着泰晤式河到滑铁卢的画面。以每分钟 40 帧捕获的 1200 帧图像被我制作成了每秒 20 帧的动画。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/3/how-build-digital-pinhole-camera-raspberry-pi
|
||||
|
||||
作者:[Daniel Berrange][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/berrange
|
||||
[1]:https://www.raspberrypi.org/products/raspberry-pi-zero/
|
||||
[2]:https://opensource.com/users/node/24776
|
||||
[3]:https://opensource.com/file/390776
|
||||
[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-raspberrypizero.jpg?itok=1ry7Kx9m "Raspberry Pi Zero"
|
||||
[5]:https://en.wikipedia.org/wiki/Charge-coupled_device
|
||||
[6]:https://en.wikipedia.org/wiki/Pinhole_camera
|
||||
[7]:https://help.gnome.org/users/cheese/stable/
|
||||
[8]:https://pinholemiscellany.berrange.com/motivation/m-arcturus/
|
||||
[9]:https://pinholemiscellany.berrange.com/
|
||||
[10]:https://opensource.com/file/390756
|
||||
[11]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-hercules_dualpix_hd.jpg?itok=r858OM9_ "Hercules DualPix HD webcam"
|
||||
[12]:https://opensource.com/file/390771
|
||||
[13]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-powdercompact.jpg?itok=RZSwqCY7 "Powder compact"
|
||||
[14]:https://opensource.com/file/390736
|
||||
[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-bareccdchip.jpg?itok=IQzjZmED "Bare CCD chip"
|
||||
[17]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-usbs.jpg?itok=QJBkbI1F "Modified USB plugs"
|
||||
[19]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-_pi-zero-led.png?itok=oH4c2oCn "LEDs"
|
||||
[21]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-casebottom.jpg?itok=QjDMaWLi "Bottom of the case with the pinhole aperture"
|
||||
[22]:https://opensource.com/users/node/34916
|
||||
[24]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-pizeroincase.jpg?itok=cyUIvjjt "Raspberry Pi Zero slotted into the case"
|
||||
[25]:https://www.raspberrypi.org/downloads/raspbian/
|
||||
[26]:https://www.denx.de/wiki/U-Boot
|
||||
[27]:https://gitlab.com/berrange/pinholemiscellany/
|
||||
[28]:https://www.gnu.org/licenses/gpl-3.0.en.html
|
||||
[29]:https://wiki.qemu.org/Documentation/Platforms/ARM
|
||||
[30]:https://pignus.computer/
|
||||
[31]:https://www.gnu.org/software/libc/
|
||||
[33]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-completedcamera.jpg?itok=VYFaT-kA "Completed camera"
|
||||
[35]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-housesimage.jpg?itok=-_gtwn9N "Picture of houses taken with pinhole webcam"
|
||||
[37]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pinhole-farnboroughairportimage.jpg?itok=E829gg4F "Airport photo"
|
||||
[38]:https://gstreamer.freedesktop.org/modules/gst-ffmpeg.html
|