* 'master' of https://github.com/LCTT/TranslateProject.git: (109 commits)
  PUB:20171020 How Eclipse is advancing IoT development.md
  PRF:20171020 How Eclipse is advancing IoT development.md
  PUB:20171202 docker - Use multi-stage builds.md
  PRF:20171202 docker - Use multi-stage builds.md
  Topic: 10 useful ncat (nc) Command Examples for Linux Systems
  Topic: What Are Zombie Processes And How To Find & Kill Zombie Processes?
  darsh8 translating
  Translation finished
  Reassigned
  PRF&PUB:20171201 Randomize your WiFi MAC address on Ubuntu 16.04.md
  PRF&PUB:20171204 How To Know What A Command Or Program Will Exactly Do Before Executing It.md
  PRF&PUB:20171108 Archiving repositories.md
  translating
  translated
  PRF&PUB:20171125 AWS to Help Build ONNX Open Source AI Platform.md
  Rename 20171121 7 tools for analyzing performance in Linux with bccBPF.md to 20171207 7 tools for analyzing performance in Linux with bccBPF.md
  Delete 20171207 7 tools for analyzing performance in Linux with bccBPF.md
  Create 20171121 7 tools for analyzing performance in Linux with bccBPF.md
  PRF&PUB:20171205 NETSTAT Command Learn to use netstat with examples.md
  PRF&PUB:20171206 How to extract substring in Bash.md
  ...
This commit is contained in:
yunfengHe 2017-12-11 23:59:53 +08:00
commit be33696b2a
62 changed files with 5797 additions and 1590 deletions

View File

@ -0,0 +1,122 @@
Use multi-stage builds in Docker
============================================================
Multi-stage builds are a new feature available in Docker 17.05 and higher. They help anyone who has struggled to optimize Dockerfiles, and they make Dockerfiles easier to read and maintain.
> Acknowledgment: Special thanks to [Alex Ellis][1] for granting permission to use his blog post [Builder pattern vs. Multi-stage builds in Docker][2] as the basis of the examples below.
### Before multi-stage builds
One of the most challenging things about building images is keeping the image size down. Each instruction in a Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you no longer need before moving on to the next layer. To write a really efficient Dockerfile, you traditionally had to employ shell tricks and other logic to keep the layers as small as possible, and to ensure that each layer has the artifacts it needs from the previous layer and nothing else.
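To make this concrete, here is a minimal sketch (not from the original article) of the kind of shell trick it describes: installing and cleaning up in a single `RUN` instruction, so the temporary files never persist in a layer:
```
FROM ubuntu:16.04
# Install and clean up in ONE RUN instruction: the package lists are
# removed in the same layer that created them, so they add no size.
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential \
    && rm -rf /var/lib/apt/lists/*
```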
In practice, it was very common to have one Dockerfile for development (containing everything needed to build your application), and a slimmed-down one for production, containing only your application and exactly what was needed to run it. This has been referred to as the "builder pattern". Maintaining two Dockerfiles is not ideal, though.
Here is an example of a `Dockerfile.build` and a `Dockerfile` that follow the builder pattern above:
`Dockerfile.build`
```
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
COPY app.go .
RUN go get -d -v golang.org/x/net/html \
    && CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
```
Note that this example also artificially compresses two `RUN` commands together using the Bash `&&` operator, to avoid creating an additional layer in the image. This is failure-prone and hard to maintain: when another command is inserted, for example, it's easy to forget to continue the line with the `\` character.
`Dockerfile`
```
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY app .
CMD ["./app"]
```
`build.sh`
```
#!/bin/sh
echo Building alexellis2/href-counter:build
docker build --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \
-t alexellis2/href-counter:build . -f Dockerfile.build
docker create --name extract alexellis2/href-counter:build
docker cp extract:/go/src/github.com/alexellis/href-counter/app ./app
docker rm -f extract
echo Building alexellis2/href-counter:latest
docker build --no-cache -t alexellis2/href-counter:latest .
rm ./app
```
When you run the `build.sh` script, it builds the first image and creates a container from it in order to copy the artifact out, then builds the second image. Both images take up space on your system, and you still have the `app` artifact sitting on your local disk as well.
Multi-stage builds vastly simplify this situation!
### Use multi-stage builds
With multi-stage builds, you use multiple `FROM` statements in your Dockerfile. Each `FROM` instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image. To show how this works, let's adapt the Dockerfile from the previous section to use multi-stage builds.
`Dockerfile`
```
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
```
You only need the single Dockerfile. No separate build script is needed, either. Just run `docker build`.
```
$ docker build -t alexellis2/href-counter:latest .
```
The end result is the same tiny production image as before, with a significant reduction in complexity. You don't need to create any intermediate images, and you don't need to extract any artifacts to your local system at all.
How does it work? The second `FROM` instruction starts a new build stage with the `alpine:latest` image as its base. The `COPY --from=0` line copies just the built artifact from the previous stage into this new stage. The Go SDK and any intermediate artifacts are left behind, and not saved in the final image.
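As an aside (an addition to the original example): the Docker documentation also allows `COPY --from` to name an arbitrary image instead of a stage index. A sketch, assuming the standard nginx image layout:
```
FROM alpine:latest
# Use a published image as an anonymous build stage and copy one file
# out of it (the path assumes the official nginx image layout).
COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf.default
```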
### Name your build stages
By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first `FROM` instruction. However, you can name your stages by adding `as <NAME>` to the `FROM` instruction. The following example improves the previous one by naming the stages and using the name in the `COPY` instruction. This means that even if the instructions in your `Dockerfile` get re-ordered later, the `COPY` won't break.
```
FROM golang:1.7.3 as builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
```
--------------------------------------------------------------------------------
via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/
Author: [docker][a]
Translator: [iron0x](https://github.com/iron0x)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]:https://docs.docker.com/engine/userguide/eng-image/multistage-build/
[1]:https://twitter.com/alexellisuk
[2]:http://blog.alexellis.io/mutli-stage-docker-builds/

View File

@ -0,0 +1,179 @@
Writing man Pages Using groff
===================
`groff` is the GNU version of the popular nroff/troff text-formatting tools provided on most Unix systems. It is commonly used to write man pages, the online documentation for commands, programming interfaces, and so on. In this article, we'll show you how to write your own man page with `groff`.
Two text-processing systems were originally available on Unix: troff and nroff, developed at Bell Labs for the original Unix (in fact, part of the reason for developing Unix in the first place was to support such a text-processing system). The first version of this text processor was called roff, for "runoff"; troff came later, and at the time was used to produce output for a particular typesetter. nroff was a later version that became the standard text processor on Unix systems everywhere. groff is the GNU implementation of nroff and troff, used on Linux systems. It includes several extended features and drivers for a number of printing devices.
`groff` can produce documents, articles, and books, much in the same vein as other text-formatting systems such as TeX. However, `groff` (and the original nroff) has one intrinsic feature that TeX and its variants lack: producing plain-ASCII output. While other systems are great at producing printed documents, `groff` can produce plain ASCII to be viewed online, or even printed directly as plain text on the simplest of printers. If you need to produce documents to be viewed online as well as in printed form, `groff` may be what you need (although there are alternatives, such as Texinfo, Lametex, and so forth).
`groff` also has the benefit of being much smaller than TeX; it requires fewer support files and executables than even a minimal TeX distribution.
One special-purpose use of `groff` is formatting Unix man pages. If you're a Unix programmer, you will certainly need to write and produce man pages of various kinds. In this article, we'll introduce the use of `groff` by writing a short man page.
Like TeX, `groff` uses a particular text-formatting language to describe how to process the text. This language is slightly more cryptic than systems such as TeX, but also more concise. In addition, `groff` provides several macro packages on top of the basic formatter; these macro packages are tailored to particular types of documents. For example, the mgs macros are a good choice for writing articles and papers, while the man macros are used for man pages.
### Writing a man page
Writing man pages with `groff` is actually quite simple. For your man page to look like other man pages, you need to follow several conventions in the source, which are presented below. In this example, we'll write a man page for a fictitious command `coffee`, which controls your networked coffee machine in various ways.
Using any text editor, enter the following source and save it as `coffee.man`. (The original article numbered each source line for reference; the line numbers mentioned below refer to the lines of this listing, counted from 1.)
```
.TH COFFEE 1 "23 March 94"
.SH NAME
coffee \- Control remote coffee machine
.SH SYNOPSIS
\fBcoffee\fP [ -h | -b ] [ -t \fItype\fP ]
\fIamount\fP
.SH DESCRIPTION
\fBcoffee\fP queues a request to the remote
coffee machine at the device \fB/dev/cf0\fR.
The required \fIamount\fP argument specifies
the number of cups, generally between 0 and
12 on ISO standard coffee machines.
.SS Options
.TP
\fB-h\fP
Brew hot coffee. Cold is the default.
.TP
\fB-b\fP
Burn coffee. Especially useful when executing
\fBcoffee\fP on behalf of your boss.
.TP
\fB-t \fItype\fR
Specify the type of coffee to brew, where
\fItype\fP is one of \fBcolumbian\fP,
\fBregular\fP, or \fBdecaf\fP.
.SH FILES
.TP
\fC/dev/cf0\fR
The remote coffee machine device
.SH "SEE ALSO"
milk(5), sugar(5)
.SH BUGS
May require human intervention if coffee
supply is exhausted.
```
*清单 1示例 man 手册页源文件*
不要让这些晦涩的代码吓坏了你。字符串序列 `\fB`、`\fI` 和 `\fR` 分别用来改变字体为粗体、斜体和正体(罗马字体)。`\fP` 设置字体为前一个选择的字体。
其它的 `groff` <ruby>请求<rt>request</rt></ruby>以点(`.`)开头出现在行首。第 1 行中,我们看到的 `.TH` 请求用于设置该 man 手册页的标题为 `COFFEE`、man 的部分为 `1`、以及该 man 手册页的最新版本的日期。说明man 手册的第 1 部分用于用户命令、第 2 部分用于系统调用等等。使用 `man man` 命令了解各个部分)。
在第 2 行,`.SH` 请求用于标记一个<ruby><rt>section</rt></ruby>的开始,并给该节名称为 `NAME`。注意,大部分的 Unix man 手册页依次使用 `NAME``SYNOPSIS`、`DESCRIPTION`、`FILES`、`SEE ALSO`、`NOTES`、`AUTHOR` 和 `BUGS` 等节,个别情况下也需要一些额外的可选节。这只是编写 man 手册页的惯例,并不强制所有软件都如此。
第 3 行给出命令的名称,并在一个横线(`-`)后给出简短描述。在 `NAME` 节使用这个格式以便你的 man 手册页可以加到 whatis 数据库中——它可以用于 `man -k``apropos` 命令。
第 4-6 行我们给出了 `coffee` 命令格式的大纲。注意,斜体 `\fI...\fP` 用于表示命令行的参数,可选参数用方括号扩起来。
第 7-12 行给出了该命令的摘要介绍。粗体通常用于表示程序或文件的名称。
在 13 行,使用 `.SS` 开始了一个名为 `Options` 的子节。
接着第 14-25 行是选项列表,会使用参数列表样式表示。参数列表中的每一项以 `.TP` 请求来标记;`.TP` 后的行是参数,再之后是该项的文本。例如,第 14-16 行:
```
.TP
\fB-h\fP
Brew hot coffee. Cold is the default.
```
will appear as:
```
-h Brew hot coffee. Cold is the default.
```
Lines 26-29 make up the `FILES` section of the man page, which describes files that the command might use. A tagged list using the `.TP` request can be used for this as well.
On lines 30-31, the `SEE ALSO` section is given, which provides cross-references to other man pages of note. Notice that the string `"SEE ALSO"` following the `.SH` request on line 30 is in quotes; this is because `.SH` uses the first whitespace-delimited word as the section title. Any section title of more than one word needs to be enclosed in quotes to make up a single argument.
Finally, on lines 32-34, the `BUGS` section.
### Formatting and installing the man page
In order to format this man page and view it on your screen, use the command:
```
$ groff -Tascii -man coffee.man | more
```
The `-Tascii` option tells `groff` to produce plain-ASCII output; `-man` tells `groff` to use the man-page macro set. If all goes well, the man page should be displayed as follows.
```
COFFEE(1) COFFEE(1)
NAME
coffee - Control remote coffee machine
SYNOPSIS
coffee [ -h | -b ] [ -t type ] amount
DESCRIPTION
coffee queues a request to the remote coffee machine at
the device /dev/cf0. The required amount argument speci-
fies the number of cups, generally between 0 and 12 on ISO
standard coffee machines.
Options
-h Brew hot coffee. Cold is the default.
-b Burn coffee. Especially useful when executing cof-
fee on behalf of your boss.
-t type
Specify the type of coffee to brew, where type is
one of columbian, regular, or decaf.
FILES
/dev/cf0
The remote coffee machine device
SEE ALSO
milk(5), sugar(5)
BUGS
May require human intervention if coffee supply is
exhausted.
```
*The formatted man page*
As mentioned before, `groff` is capable of producing other types of output. Using the `-Tps` option in place of `-Tascii` will produce PostScript output that you can save to a file, view with GhostView, or print on a PostScript printer. `-Tdvi` will produce device-independent .dvi output similar to that produced by TeX.
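For example, a sketch of the PostScript workflow (assuming GhostView is installed as `gv`):
```
$ groff -Tps -man coffee.man > coffee.ps   # PostScript output to a file
$ gv coffee.ps                             # view it, or print it with lpr
```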
If you wish to make the man page available for others to view on your system, you need to install the groff source in a directory that is present in other users' `MANPATH`. The standard location for man pages is `/usr/man`. The source for section 1 man pages should therefore go in `/usr/man/man1`, so use the command:
```
$ cp coffee.man /usr/man/man1/coffee.1
```
This installs the man page in `/usr/man` for all to use (note the `.1` extension instead of `.man`). When `man coffee` is subsequently invoked, the man page will be automatically reformatted, and the viewable text saved in `/usr/man/cat1/coffee.1.Z`.
If you can't copy man page sources directly to `/usr/man` (say, because you're not the system administrator), you can create your own man page directory tree and add it to your `MANPATH`. The `MANPATH` environment variable has the same format as `PATH`; for example, to add the directory `/home/mdw/man` to `MANPATH`, just use:
```
$ export MANPATH=/home/mdw/man:$MANPATH
```
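A minimal sketch of the remaining steps (the directory name is only an example, and `man coffee` assumes the export above has been done):
```
$ mkdir -p /home/mdw/man/man1
$ cp coffee.man /home/mdw/man/man1/coffee.1
$ man coffee
```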
`groff` and the man page macros have many other options and formatting commands. The best way to find out about them is to look at the files in `/usr/lib/groff`; the `tmac` directory contains the macro files, which often document the commands they provide. To have `groff` use a particular macro set, just use the `-m macro` (or `-macro`) option. For example, to use the mgs macros, use the command:
```
groff -Tascii -mgs files...
```
The `groff` man page describes this option in more detail.
Unfortunately, the macro sets provided with `groff` are not well-documented. There are section 7 man pages for some of them; for example, `man 7 groff_mm` will tell you about the mm macro set. However, this documentation usually covers only the differences and new features in the `groff` implementation, and assumes you are familiar with the original nroff/troff macro sets (known as DWB, the Documentor's Work Bench). The best source of information may be a book covering those classic macro sets in detail. For more about writing man pages, you can always look at the man page sources (in `/usr/man`) and compare them to the formatted output.
This article is adapted from *Running Linux*, by Matt Welsh and Lar Kaufman, published by O'Reilly (ISBN 1-56592-100-3). The book also includes tutorials on the various text-formatting systems used under Linux. This issue of *Linux Journal* and *Running Linux* should give you a good head start on using the text tools on Linux.
### Good luck, and happy documenting!
Matt Welsh ([mdw@cs.cornell.edu][1]) is a student and systems programmer at Cornell University, working in the Robotics and Vision Laboratory on real-time machine vision research.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/article/1158
Author: [Matt Welsh][a]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]:http://www.linuxjournal.com/user/800006
[1]:mailto:mdw@cs.cornell.edu

View File

@ -0,0 +1,75 @@
Executing Commands & Scripts at Reboot or Startup in Linux
======
We may sometimes need to run certain commands or scripts at reboot, or every time the system starts. How do we do that? In this article we discuss exactly that: two ways to execute commands and scripts at reboot or startup, on CentOS/RHEL as well as Ubuntu systems. Both methods have been tested.
### Method 1 - Using rc.local
This method uses the `rc.local` file in `/etc/` to execute scripts and commands at startup. We add a line to the file to have it execute our script; the script then runs each time the system starts.
First, though, we need to grant execute permission to `/etc/rc.local`:
```
$ sudo chmod +x /etc/rc.local
```
Then add the script to be executed by editing the file:
```
$ sudo vi /etc/rc.local
```
and add the following at the end of the file:
```
sh /root/script.sh &
```
Then save the file and exit. Commands can be run via the `rc.local` file in the same way, but remember to use the command's full path. To find a command's full path, run:
```
$ which command
```
For example:
```
$ which shutter
/usr/bin/shutter
```
On CentOS, we modify the file `/etc/rc.d/rc.local` instead of `/etc/rc.local`. This file also needs to be made executable first.
Note: be sure that any script to be executed at startup ends with `exit 0`.
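Putting the pieces together, a minimal `/etc/rc.local` might look like the sketch below (`/root/script.sh` is the example script from above):
```
#!/bin/sh
# /etc/rc.local: executed at the end of each multiuser runlevel.
sh /root/script.sh &
exit 0
```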
### Method 2 - Using crontab
This is the simplest method. We create a cron job that waits 90 seconds after system startup and then executes the command or script.
To create a cron job, open a terminal and run:
```
$ crontab -e
```
Then enter the following line,
```
@reboot ( sleep 90 ; sh /location/script.sh )
```
where `/location/script.sh` is the path of the script to execute.
That wraps up this article. If you have any questions, feel free to leave a comment.
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/executing-commands-scripts-at-reboot/
Author: [Shusain][a]
Translator: [lujun9972](https://github.com/lujun9972)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]:http://linuxtechlab.com/author/shsuain/

View File

@ -0,0 +1,72 @@
How to Disable USB Storage on Linux
======
To protect data from leaking out, we use software and hardware firewalls to restrict unauthorized access from outside, but data leaks can also originate from the inside. To remove that possibility, organizations limit and monitor Internet access, and also disable USB storage devices.
In this tutorial, we discuss three different ways to disable USB storage devices on a Linux machine. All three methods were tested on CentOS 6 & 7 machines. Let's discuss them one by one,
(Also read: [Ultimate guide to securing SSH sessions][1])
### Method 1 - Fake install
In this method, we add the line `install usb-storage /bin/true` to a configuration file, which causes `/bin/true` to be run instead of actually installing the usb-storage module; that's why this method is called a "fake install". Concretely, create and open a file named `block_usb.conf` (the name can be anything) in the folder `/etc/modprobe.d`:
```
$ sudo vim /etc/modprobe.d/block_usb.conf
```
then add the following line to it:
```
install usb-storage /bin/true
```
Finally, save the file and exit.
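To check that the fake install is in effect without rebooting, a `modprobe` dry run should show the rule being applied (a sketch; the exact output can vary by distribution):
```
$ sudo modprobe -n -v usb-storage
install /bin/true
```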
### Method 2 - Removing the USB driver
In this method, we delete or move away the driver for USB storage (`usb-storage.ko`), making USB storage devices inaccessible. Run the following command to move the driver out of its default location:
```
$ sudo mv /lib/modules/$(uname -r)/kernel/drivers/usb/storage/usb-storage.ko /home/user1
```
Now the driver can no longer be found in its default location, so it will not be loaded when a USB storage device is attached to the system, and the device will not be usable. But this method has one small problem: when the system kernel is updated, the `usb-storage` module will re-appear in its default location.
### Method 3 - Blacklisting USB storage
We can also blacklist usb-storage using the `/etc/modprobe.d/blacklist.conf` file. The file is available out of the box on RHEL/CentOS 6, but may need to be created on 7. To blacklist USB storage, open/create the above file with vim:
```
$ sudo vim /etc/modprobe.d/blacklist.conf
```
and enter the following line to blacklist USB storage:
```
blacklist usb-storage
```
Save the file and exit. `usb-storage` will now be blocked from loading by the system, but this method has one big drawback: any privileged user can still load the `usb-storage` module by running,
```
$ sudo modprobe usb-storage
```
This problem makes the method less than ideal, but it works well against non-privileged users.
Reboot the system after making the changes to bring them into effect. Try these methods to disable USB storage, and let us know if you run into any problems or have any questions.
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/disable-usb-storage-linux/
Author: [Shusain][a]
Translator: [lujun9972](https://github.com/lujun9972)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/ultimate-guide-to-securing-ssh-sessions/

View File

@ -1,43 +1,50 @@
translated by smartgrids
How Eclipse Is Advancing IoT Development
============================================================
### The open source group's modular development approach is a great fit for the Internet of Things.
> The open source group's modular development approach is a great fit for the Internet of Things.
![How Eclipse is advancing IoT development](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_BUS_ArchitectureOfParticipation_520x292.png?itok=FA0Uuwzv "How Eclipse is advancing IoT development")
Image credit: opensource.com
[Eclipse][3] may not be the first open source organization that comes to mind for IoT. But long before IoT was a household word, the foundation had been supporting the commercialization of open source software, starting around 2001. September's Eclipse IoT Day, held along with RedMonk's [ThingMonk 2017][4], highlighted the important role Eclipse plays in [IoT development][5]. It currently hosts 28 projects, covering most of the needs of IoT projects. During the conference, I talked with [Ian Skerritt][6], who heads marketing for Eclipse, about Eclipse's IoT projects and how to grow them.
[Eclipse][3] may not be the first open source organization that comes to mind for IoT. But long before IoT was a household word, the foundation had been supporting the commercialization of open source software, starting around 2001.
September's Eclipse IoT Day, held along with RedMonk's [ThingMonk 2017][4], highlighted the important role Eclipse plays in [IoT development][5]. It currently hosts 28 projects, covering most of the needs of IoT projects. During the conference, I talked with [Ian Skerritt][6], who heads marketing for Eclipse, about Eclipse's IoT projects and how to grow them.
### What's the latest in IoT?
###What's the latest in IoT?
I asked Ian how IoT differs from traditional industrial automation, that is, the decades-old practice of connecting factories with sensors and the corresponding tooling. Ian pointed out that many factories are still not connected at all.
In addition, he said, "SCADA [supervisory control and data analysis] systems and the factory floor technology are proprietary and siloed. It's hard to change them, and hard to adapt to them... Now, if you want to run a production system, you need to design hundreds of units. What production lines want is to meet user demand and make manufacturing more flexible, so they can keep producing." That's a big part of the help IoT will bring to manufacturing.
In addition, he said, "SCADA [supervisory control and data analysis] systems and the factory floor technology are very proprietary and siloed. It's hard to change them, and hard to adapt to them... Now, if you want to run a production system, you need to design hundreds of units. What production lines want is to meet user demand and make manufacturing more flexible, so they can keep producing." That's a big part of the help IoT will bring to manufacturing.
###Eclipse's work on IoT
Ian describes Eclipse's work on IoT as "the core fundamental technology that every IoT solution needs"; by using open source technology, "everyone can use it, so there's better adoption." Eclipse, he said, views IoT as three connected software stacks. At a high level, these stacks map onto the (now common) description of IoT as a network spanning three tiers. Particular views may hold that there are more layers, but they always fit the functions of this three-tier model:
### Eclipse's work on IoT
Ian describes Eclipse's work on IoT as "the core fundamental technology that every IoT solution needs"; by using open source technology, "everyone can use it, so there's better adoption." Eclipse, he said, views IoT as three connected software stacks. At a high level, these stacks map onto the (now common) description of IoT as a network spanning three tiers. Particular implementations may contain more layers, but they generally map onto the functions of this three-tier model:
* A stack of software for constrained devices (e.g., the device, endpoint, microcontroller, or sensor).
* A gateway layer that aggregates the information collected from different sensors and transmits it to the network. This layer may also react in real time to what the sensors detect
* A gateway layer that aggregates the information collected from different sensors and transmits it to the network. This layer may also react in real time to what the sensor data shows
* A software stack for the IoT platform on the backend. This backend cloud stores the data and can provide services based on the collected data, such as historical trends and predictive analytics.
The three stacks are described in greater detail in Eclipse's white paper " [The Three Software Stacks Required for IoT Architectures][7] ".
The three stacks are described in greater detail in Eclipse's white paper "[The Three Software Stacks Required for IoT Architectures][7]".
Ian said that when developing a solution within these architectures, "there are some really specific things that need to be built, but a lot of the underlying technology can be reused, like communication protocols and gateway services. It needs to be a modular approach to satisfy the needs out there." Eclipse's work on IoT can be summarized as: developing modular open source components that can be used to build a wide range of business-specific services and solutions.
Ian said that when developing a solution within these architectures, "there are some really specific things that need to be built, but a lot of the underlying technology can be reused, like communication protocols and gateway services. It needs to be a modular approach to satisfy the different needs out there." Eclipse's work on IoT can be summarized as: developing modular open source components that can be used to build a wide range of business-specific services and solutions.
###Eclipse's IoT projects
### Eclipse's IoT projects
Among the many Eclipse IoT projects in use, Ian pointed to two prominent ones related to [MQTT][8], a machine-to-machine (M2M) messaging protocol for IoT. He describes it as "a publish/subscribe messaging protocol designed specifically for oil and gas pipeline monitoring, where power management is really important. MQTT has been a real success story among widely adopted IoT standards." [Eclipse Mosquitto][9] is MQTT's broker and [Eclipse Paho][10] its client.
[Eclipse Kura][11] is an IoT gateway that, in Ian's words, "bridges many different protocols", including Bluetooth, Modbus, CANbus, and OPC Unified Architecture, with more protocols being added all the time. One benefit, he said, is that "instead of writing your own protocols, Kura provides that and connects you to the network via satellite, via Ethernet, or anything else." It also handles firewall configuration, network latency, and other functions. Ian also noted, "if the network goes down, it will store messages until it comes back up."
Among the many Eclipse IoT projects already in use, Ian pointed to two prominent ones related to [MQTT][8], a machine-to-machine (M2M) messaging protocol for IoT. He describes it as "a publish/subscribe messaging protocol designed specifically for oil and gas pipeline monitoring, where power management is really important. MQTT has been a real success story among widely adopted IoT standards." [Eclipse Mosquitto][9] is MQTT's broker and [Eclipse Paho][10] its client.
[Eclipse Kura][11] is an IoT gateway that, in Ian's words, "bridges many different protocols", including Bluetooth, Modbus, CANbus, and OPC Unified Architecture, with various protocols being added all the time. He said that one benefit is that "instead of writing your own protocols, Kura provides that and connects you to the network via satellite, via Ethernet, or anything else." It also handles firewall configuration, network latency, and other functions. Ian also noted, "if the network goes down, it will store messages until it comes back up."
A newer project, [Eclipse Kapua][12], is taking a microservices approach to providing different services for an IoT cloud platform. For example, it integrates communications, aggregation, management, storage, and analytics functions. Ian says "it's up and coming; it isn't fully developed yet, but Eurotech and Red Hat are very active in the project."
Ian said [Eclipse hawkBit][13], software for managing software updates, is "a really interesting project. From a security perspective, if you can't update your devices, you've got a huge security hole." Many IoT security incidents have been related to devices that can't be updated, he said. "HawkBit basically manages the backend of how you do scalable updates across your IoT system."
The difficulty of updating software on IoT devices is regularly cited as one of the biggest security challenges. IoT devices aren't always connected and can be very numerous; on top of that, update procedures on the devices themselves are hard to get completely right. For this reason, projects around IoT software updates have been pushed forward as a priority.
Ian said [Eclipse hawkBit][13], a piece of software-update-management software, is "a really interesting project. From a security perspective, if you can't update your devices, you've got a huge security hole." Many IoT security incidents have been related to devices that can't be updated, he said. "HawkBit basically manages the backend of how you do scalable updates across your IoT system."
###Why IoT is a good fit for Eclipse
The difficulty of updating software on IoT devices is regularly cited as one of the biggest security challenges. IoT devices aren't always connected and can be very numerous; on top of that, update procedures on the devices themselves are hard to get completely right. For this reason, projects around IoT software updates have been pushed forward as a priority.
One aspect of the IoT trend is about building blocks that solve business problems, rather than broad industry-and-company-spanning IoT platforms. Eclipse's work on IoT focuses on a series of modular stacks, on projects that provide specific and commonly needed functions, and on the middleware, gateway, and protocol components that can be bundled for a given target.
### Why IoT is a good fit for Eclipse
One aspect of the IoT trend is about building blocks that solve business problems, rather than one-size-fits-all IoT platforms spanning industries and companies. Eclipse's work on IoT focuses on a series of modular stacks, on projects that provide specific and commonly needed functions, and on the middleware, gateway, and protocol components that can be bundled for a given target.
--------------------------------------------------------------------------------
@ -46,15 +53,15 @@ Ian said [Eclipse hawkBit][13], software for managing software updates, is "a ver
Author bio:
Gordon Haff - Gordon Haff is Red Hat's cloud marketing guy, a frequent speaker at customer and industry conferences, and helps develop Red Hat's all-around cloud solutions. He is the author of Computing Next: how the cloud opens the future. Before Red Hat, Gordon wrote hundreds of research reports and was often quoted in publications like The New York Times on IT topics and product advice...
Gordon Haff - Gordon Haff is Red Hat's cloud specialist, a frequent speaker at customer and industry conferences, and helps develop Red Hat's comprehensive cloud solutions. He is the author of *Computing Next: How the Cloud Opens the Future*. Before Red Hat, Gordon wrote hundreds of research reports and was often quoted in publications like The New York Times on IT topics and product advice...
--------------------------------------------------------------------------------
转自 https://opensource.com/article/17/10/eclipse-and-iot
via https://opensource.com/article/17/10/eclipse-and-iot
Author: [Gordon Haff ][a]
Author: [Gordon Haff][a]
Translator: [smartgrids](https://github.com/smartgrids)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

View File

@ -1,20 +1,19 @@
Archiving repositories
How to archive a GitHub repository
====================
Just because a repository isn't actively developed anymore, or you don't want to accept additional contributions, doesn't mean you want to delete it. Archive repositories on GitHub now to make them read-only.
If a repository is no longer actively developed, or you don't want to accept additional contributions, that doesn't mean you want to delete it. You can now archive a repository on GitHub to make it read-only.
[![archived repository banner](https://user-images.githubusercontent.com/7321362/32558403-450458dc-c46a-11e7-96f9-af31d2206acb.png)][1]
Archiving a repository makes it read-only for everyone (including repository owners). This includes editing the repository, issues, pull requests, labels, milestones, wikis, releases, commits, tags, branches, reactions, and comments. No one can create new issues, pull requests, or comments on an archived repository, but you can still fork the repository, allowing an archived project to be developed elsewhere.
Archiving a repository makes it read-only for everyone (including repository owners). This includes editing the repository, issues, pull requests (PRs), labels, milestones, projects, wikis, releases, commits, tags, branches, reactions, and comments. No one can create new issues, pull requests, or comments on an archived repository, but you can still fork an archived repository, allowing development to continue elsewhere.
To archive a repository, go to its Settings page and click Archive this repository.
To archive a repository, go to its Settings page and click "Archive this repository".
[![archive repository button](https://user-images.githubusercontent.com/125011/32273119-0fc5571e-bef9-11e7-9909-d137268a1d6d.png)][2]
Before archiving your repository, make sure you've changed its settings, and consider closing all open issues and pull requests. You should also update your README and description to let visitors know they can no longer contribute.
Before archiving your repository, make sure you've changed its settings, and consider closing all open issues and pull requests. You should also update your README and description to make it clear to visitors that contributions are no longer accepted.
If you change your mind and want to unarchive your repository, click Unarchive in the same place. Note that most archived-repository settings are hidden, and you'll have to unarchive the repository to change them.
If you change your mind and want to unarchive your repository, click "Unarchive this repository" in the same place. Note that most settings are hidden for archived repositories, and you'll need to unarchive the repository to change them.
[![archived labelled repository](https://user-images.githubusercontent.com/125011/32541128-9d67a064-c466-11e7-857e-3834054ba3c9.png)][3]
@ -24,9 +23,9 @@
via: https://github.com/blog/2460-archiving-repositories
Author: [MikeMcQuaid ][a]
Author: [MikeMcQuaid][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

View File

@ -0,0 +1,76 @@
AWS to Help Build the ONNX Open Source AI Platform
============================================================
![onnx-open-source-ai-platform](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-onnx-1.jpg)
AWS recently became the latest tech firm to join the deep learning community's collaboration on the Open Neural Network Exchange (ONNX), launched recently to advance artificial intelligence in a frictionless and interoperable environment. Facebook and Microsoft are leading the effort.
As part of the collaboration, AWS open-sourced its deep learning framework Python package, ONNX-MXNet, which provides an application programming interface (API) across multiple languages, including Python, Scala, and the open source statistics software R.
The ONNX format will help developers build and train models for other frameworks, including PyTorch, Microsoft Cognitive Toolkit, or Caffe2, AWS Deep Learning Engineering Manager Hagay Lupesko and Software Developer Roshani Nagmote wrote in a post last week. It lets developers import those models into MXNet and run them for inference.
### Help for developers
Facebook and Microsoft launched ONNX this summer to support shared-model interoperability for the advancement of AI. Microsoft submitted its Cognitive Toolkit, Caffe2, and PyTorch to support ONNX.
Cognitive Toolkit and the other frameworks make it easier for developers to construct and run computation graphs that represent neural networks, Microsoft said.
The initial release of the [ONNX code and documentation][4] has been made available on GitHub.
AWS and Microsoft last month announced plans for a new Gluon interface in Apache MXNet, which allows developers to build and train deep learning models.
"Gluon is an extension of their partnership attempting to compete with Google's TensorFlow," observed Aditya Kaul, research director at [Tractica][5].
"Google's absence from the effort is quite telling, but it also speaks to their dominance in the market," he told LinuxInsider.
"Even TensorFlow is open source, so open source is not the big deal here; what this boils down to is the rest of the ecosystem joining forces to compete with Google," Kaul said.
Earlier this month, the Apache MXNet community introduced version 0.12 of MXNet, which extends Gluon functionality to allow for new, cutting-edge research, according to AWS. Among its new features is variational dropout, which allows developers to apply the dropout technique to mitigate overfitting in recurrent neural networks.
Convolutional RNNs, LSTM networks, and gated recurrent units (GRUs) allow datasets to be modeled using time-based sequences and spatial dimensions, AWS noted.
### Framework-neutral approach
"This looks like a nice way to deliver inference regardless of which framework generated the model," said Paul Teich, principal analyst at [Tirias Research][6].
"This is basically a framework-neutral way to deliver inference," he told LinuxInsider.
Cloud providers like AWS, Microsoft, and others are under pressure from customers to be able to train on one network while delivering on another, in order to advance AI, Teich pointed out.
"I view this as a basic way for these vendors to check the interoperability box," he said.
"Framework interoperability is a good thing, and this will help developers make sure that the models they build on MXNet or Caffe or CNTK are interoperable," Tractica's Kaul noted.
As for how this interoperability might apply in the real world, Teich pointed out that technologies such as natural language translation or speech recognition will require that Alexa's speech recognition technology be packaged and delivered to another developer's embedded environment.
### Thanks to open source
"Despite their competitive differences, these companies all recognize the enormous success they've experienced in the advancement of software development brought about by the open source movement," said Jeff Kaplan, managing director of [ThinkStrategies][7].
"The Open Neural Network Exchange (ONNX) is committed to producing similar benefits and innovations in AI," he told LinuxInsider.
A growing number of major technology companies have announced plans to use open source to speed the collaborative development of AI, in order to create more uniform platforms for development and research.
AT&T announced plans a few weeks ago to [launch the Acumos project][8] in partnership with TechMahindra and The Linux Foundation. The platform is designed to open up collaboration in the telecom, media, and technology spaces.
--------------------------------------------------------------------------------
via: https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
Author: [David Jones][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#searchbyline
[1]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#
[2]:https://www.linuxinsider.com/perl/mailit/?id=84971
[3]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
[4]:https://github.com/onnx/onnx
[5]:https://www.tractica.com/
[6]:http://www.tiriasresearch.com/
[7]:http://www.thinkstrategies.com/
[8]:https://www.linuxinsider.com/story/84926.html
[9]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html

View File

@ -0,0 +1,161 @@
Randomize your WiFi MAC address on Ubuntu 16.04
============================================================
> Your device's MAC address can be used to track your movements across different WiFi networks. That data can be shared and sold to identify particular individuals. But this can be prevented by randomizing a spoofed MAC address.
![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif)
_Image courtesy of [Cloudessa][4]_
Every network device, such as a WiFi or Ethernet card, has a unique identifier called a MAC address, for example `b4:b6:76:31:8c:ff`. It's how you get online: whenever you connect to WiFi, the router uses this address to send and receive packets to and from your machine, and to tell you apart from other devices on the network.
The flaw in this design is its uniqueness: a never-changing MAC address is perfect for tracking you. Connected to the Starbucks WiFi? Noted. Riding the London Underground? Logged as well.
If you've ever entered your real name on one of those WiFi sign-in pages, you've tied it to your MAC address. Didn't read the terms and conditions carefully? You can assume the airport's free WiFi is profiting by selling so-called "customer analytics" (your personal information) to hotels, restaurant chains, and anyone else who wants to know about you.
I don't want my visits logged and then sold on to multiple companies, so I spent a few hours working out a solution.
### MAC addresses don't have to stay the same
Fortunately, it's possible to spoof a random pseudo MAC address without disconnecting from the network.
I wanted to randomize my MAC address, with three requirements:
1. The MAC address should differ across networks. This means my MAC at Starbucks and my MAC on the London Underground are not the same, so the different providers can't link up my activity.
2. The MAC address should change frequently, so that nobody on a network can tell that I'm the person who walked past here 75 times last year.
3. The MAC address should stay the same for a whole day. When the MAC address changes, most networks kick you off, and you have to sign in at a captive portal again, which is annoying.
### Working with NetworkManager
My first attempt, using a tool called `macchanger`, failed: NetworkManager would restore the default MAC address according to its own configuration.
I learned that NetworkManager 1.4.1 and above can generate random MAC addresses automatically. If you're on Ubuntu 17.04, you can get this with [this configuration file][7]. It doesn't quite satisfy all three of my requirements, though (you must choose between *random* and *stable*, and there is no option for "constant within a day").
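For reference, a sketch of what such a NetworkManager (>= 1.4) configuration looks like; the file path is arbitrary, and the two keys below are my reading of the NetworkManager settings documentation, so verify them against your version:
```
# /etc/NetworkManager/conf.d/wifi_rand_mac.conf  (example path)
[device]
# Randomize the MAC address used while scanning for networks.
wifi.scan-rand-mac-address=yes

[connection]
# "random": a new MAC on every connect; "stable": one persistent MAC per network.
wifi.cloned-mac-address=random
```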
Since I'm on Ubuntu 16.04 with NetworkManager version 1.2, I couldn't use the new feature directly. NetworkManager may have some randomization support, but I didn't manage to make it work, so I wrote a script instead.
Fortunately, NetworkManager 1.2 does allow spoofing the MAC address. You can see the "Edit connections" option for connected networks:
![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png)
NetworkManager also supports hooks: any script placed in `/etc/NetworkManager/dispatcher.d/pre-up.d/` is executed before a network connection is brought up.
### Assigning a randomly generated pseudo MAC address
I wanted to generate new random MAC addresses based on the network ID and the date. We can use NetworkManager's command line tool, nmcli, to show all available networks:
```
> nmcli connection
NAME UUID TYPE DEVICE
Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0
DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless --
PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless --
virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless --
```
Since each network has a unique identifier (UUID), to implement my scheme I concatenated the UUID with the date, then hashed it with MD5:
```
# eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03
> echo -n "${UUID}-$(date +%F)" | md5sum
53594de990e92f9b914a723208f22b3f -
```
The result can be used in place of the last five bytes of the MAC address.
Note that the leading byte `02` marks the address as [locally administered][8]. On real MAC addresses, the first three bytes are assigned to the manufacturer; `b4:b6:76`, for example, denotes Intel.
It's possible that some routers will reject locally administered MAC addresses, but I haven't run into that yet.
Every time a connection comes up, the script assigns a freshly generated pseudo MAC address with `nmcli`:
![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png)
Finally, looking at the output of `ifconfig`, I could see that the `HWaddr` was now a randomly generated address (masquerading as an Intel one) rather than my real MAC address.
```
> ifconfig
wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d
inet addr:192.168.0.86 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0
TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB)
```
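On newer systems where `ifconfig` is deprecated, the same check can be done with `ip` (a sketch; the interface name and output details will vary):
```
$ ip link show wlp1s0
3: wlp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    link/ether b4:b6:76:45:64:4d brd ff:ff:ff:ff:ff:ff
```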
### The script
The complete script is also available [on GitHub][9].
```
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses
# Configure every saved WiFi connection in NetworkManager with a spoofed MAC
# address, seeded from the UUID of the connection and the date eg:
# 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20'
# This makes your MAC impossible(?) to track across WiFi providers, and
# for one provider to track across days.
# For craptive portals that authenticate based on MAC, you might want to
# automate logging in :)
# Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar
# automatically.
export PATH=$PATH:/usr/bin:/bin
LOG_FILE=/var/log/randomize-mac-addresses
echo "$(date): $*" > ${LOG_FILE}
WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3)
for UUID in ${WIFI_UUIDS}
do
UUID_DAILY_HASH=$(echo "${UUID}-$(date +%F)" | md5sum)
RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')"
CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}"
echo "$CMD" >> ${LOG_FILE}
$CMD &
done
wait
```
_更新[使用自己指定的 MAC 地址][5]可以避免和真正的 intel 地址冲突。感谢 [@_fink][6]_
---------------------------------------------------------------------------------
via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/
Author: [Paul M Furley][a]
Translator: [wenwensnow](https://github.com/wenwensnow)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]:https://www.paulfurley.com/
[1]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/raw/5f02fc8f6ff7fca5bca6ee4913c63bf6de15abcarandomize-mac-addresses
[2]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f#file-randomize-mac-addresses
[3]:https://github.com/
[4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/
[5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef
[6]:https://twitter.com/fink_/status/937305600005943296
[7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/
[8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local
[9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f

View File

@ -0,0 +1,140 @@
How To Know What A Command Or Program Will Exactly Do Before Executing It
======
Ever wondered what a Unix command will do before executing it? Not everyone knows what a particular command or program does. Of course, you can check it with [Explainshell][2]: paste the command into the Explainshell website, and it tells you what each part of the command does. However, that is no longer necessary. We can now easily know what a command or program will do before executing it, right from the terminal. Say hello to `maybe`, a simple tool that lets you run a command and see what it would do to your files, without actually doing it! After reviewing the listed output of `maybe`, you can decide whether you really want to run the command.
![](https://www.ostechnix.com/wp-content/uploads/2017/12/maybe-2-720x340.png)
### How does `maybe` work?
According to the developer:
> `maybe` runs processes under the control of `ptrace`, with the help of the `python-ptrace` library. When it intercepts a system call that is about to make changes to the file system, it logs that call, and then modifies CPU registers to both redirect the call to an invalid syscall ID (effectively turning it into a no-op) and set the return value of that no-op call to one indicating success of the original call. As a result, the process believes that everything it is trying to do is actually happening, when in reality nothing is.
Warning: use this tool with caution in production environments, or on any system you care about. It can still do serious damage, because it blocks only a handful of system calls.
#### Installing `maybe`
Make sure you have `pip` installed on your Linux system. If not, install it as shown below for your distribution.
On Arch Linux and its derivatives (like Antergos and Manjaro Linux), install `pip` with the following command:
```
sudo pacman -S python-pip
```
On RHEL and CentOS:
```
sudo yum install epel-release
sudo yum install python-pip
```
On Fedora:
```
sudo dnf install python-pip
```
在 DebianUbuntuLinux Mint 上:
```
sudo apt-get install python-pip
```
On SUSE and openSUSE:
```
sudo zypper install python-pip
```
After installing `pip`, run the following command to install `maybe`:
```
sudo pip install maybe
```
### Find out what a command or program will do before executing it
Usage is absolutely simple! Just prepend `maybe` to the command you want to run.
Let me show you an example:
```
$ maybe rm -r ostechnix/
```
As you can see, I am deleting a folder named `ostechnix` from my system. Here is the sample output:
```
maybe has prevented rm -r ostechnix/ from performing 5 file system operations:
delete /home/sk/inboxer-0.4.0-x86_64.AppImage
delete /home/sk/Docker.pdf
delete /home/sk/Idhayathai Oru Nodi.mp3
delete /home/sk/dThmLbB334_1398236878432.jpg
delete /home/sk/ostechnix
Do you want to rerun rm -r ostechnix/ and permit these operations? [y/N] y
```
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png)][3]
`maybe` prevented 5 file system operations and showed me what the command (`rm -r ostechnix/`) would actually do. Now I can decide whether this operation should be performed. Cool, isn't it? Indeed!
Here is another example. I am installing the Inboxer desktop client for Gmail. This is what I got:
```
$ maybe ./inboxer-0.4.0-x86_64.AppImage
fuse: bad mount point `/tmp/.mount_inboxemDzuGV': No such file or directory
squashfuse 0.1.100 (c) 2012 Dave Vasilevsky
Usage: /home/sk/Downloads/inboxer-0.4.0-x86_64.AppImage [options] ARCHIVE MOUNTPOINT
FUSE options:
-d -o debug enable debug output (implies -f)
-f foreground operation
-s disable multi-threaded operation
open dir error: No such file or directory
maybe has prevented ./inboxer-0.4.0-x86_64.AppImage from performing 1 file system operations:
create directory /tmp/.mount_inboxemDzuGV
Do you want to rerun ./inboxer-0.4.0-x86_64.AppImage and permit these operations? [y/N]
```
If it doesn't detect any file system operations, it simply displays a result like the following.
For example, here I run this command to update my Arch Linux:
```
$ maybe sudo pacman -Syu
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
maybe has not detected any file system operations from sudo pacman -Syu.
```
See? It didn't detect any file system operations, so there were no warnings. Brilliant, and exactly what I expected. From now on, I can know ahead of time what a command or a program will do, even before executing it. I hope this will be helpful to you.
Cheers!
Resource:
* [`maybe` GitHub page][1]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/know-command-program-will-exactly-executing/
Author: [SK][a]
Translator: [imquanquan](https://github.com/imquanquan)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]:https://www.ostechnix.com/author/sk/
[1]:https://github.com/p-e-w/maybe
[2]:https://www.ostechnix.com/explainshell-find-part-linux-command/
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png
[4]:https://www.ostechnix.com/inboxer-unofficial-google-inbox-desktop-client/

View File

@ -1,26 +1,25 @@
The NETSTAT Command: Learn to use netstate with examples
Learn to use netstat with examples
======
Netstat is a command line utility that tells us the status of all the tcp/udp/unix socket connections on our system. It lists all connections that are established or in a waiting state. The tool is extremely useful for identifying which port an application is listening on, and for telling whether an application is listening on a port as it should.
The Netstat command can also show various other network-related information, such as the routing table, interface statistics, masquerade connections, multicast memberships, etc.
netstat is a command line utility that tells us the status of all the tcp/udp/unix socket connections on our system. It lists all connections that are established or in a waiting state. The tool is extremely useful for identifying which port an application is listening on, and for telling whether an application is listening on a port as it should.
In this article, we will learn about Netstat with some examples.
The netstat command can also show various other network-related information, such as the routing table, interface statistics, masquerade connections, multicast memberships, etc.
(Recommended read: [Learn to use CURL command with examples][1] )
In this article, we will learn about netstat with some examples.
Netstat with examples
============================================================
(Recommended read: [Learn to use the CURL command with examples][1] )
### 1- Checking all connections
### 1 - Checking all connections
To list all the connections on a system, use the `a` option:
```shell
$ netstat -a
```
This will show all the tcp,udp and unix socket connections on the system.
This will show all the tcp, udp, and unix socket connections on the system.
### 2- Checking all tcp/udp/unix socket connections
### 2 - Checking all tcp/udp/unix socket connections
Use the `t` option to list only tcp connections,
@ -28,19 +27,19 @@ $ netstat -a
$ netstat -at
```
Similarly, use the `u` option to list only udp connections to list out only the udp connections on our system we can use u option with netstat
Similarly, use the `u` option to list only the udp connections,
```shell
$ netstat -au
```
Use the `x` option to list only Unix socket connections, we can use x options
Use the `x` option to list only the Unix socket connections,
```shell
$ netstat -ax
```
### 3- Listing the process ID/process name as well
### 3 - Listing the process ID/process name as well
Use the `p` option to show the PID or process name along with the connections; it can also be combined with other options,
@ -48,15 +47,15 @@ $ netstat -ax
$ netstat -ap
```
### 4- Listing port numbers instead of service names
### 4 - Listing port numbers instead of service names
Using the `n` option speeds up the output: it does not perform any reverse lookups (translator's note: the original text reads "it will perform any reverse lookup", which appears to be a mistake) and prints the numbers directly. Since no lookups are needed, the output is much faster.
Using the `n` option speeds up the output: it does not perform any reverse lookups (LCTT translator's note: the original text here is erroneous) and prints the numbers directly. Since no lookups are needed, the output is much faster.
```shell
$ netstat -an
```
### 5- Printing only listening ports
### 5 - Printing only listening ports
Use the `l` option to print only the listening ports. It cannot be used together with the `a` option, since `a` prints all ports,
@ -64,15 +63,15 @@ $ netstat -an
$ netstat -l
```
### 6- Printing network statistics
### 6 - Printing network statistics
Use the `s` option to print statistics for each protocol, including the number of packets received/sent
Use the `s` option to print statistics for each protocol, including the number of packets received/sent:
```shell
$ netstat -s
```
### 7- Printing interface statistics
### 7 - Printing interface statistics
Use the `i` option to show only the statistics of the network interfaces,
@ -80,7 +79,7 @@ $ netstat -s
$ netstat -i
```
### 8- Showing multicast group information
### 8 - Showing multicast group information
Use the `g` option to print multicast group information for IPv4 and IPv6,
@ -88,7 +87,7 @@ $ netstat -i
$ netstat -g
```
### 9- Showing network routing information
### 9 - Showing network routing information
Use the `r` option to print network routing information,
@ -96,7 +95,7 @@ $ netstat -g
$ netstat -r
```
### 10- Printing continuously
### 10 - Printing continuously
Use the `c` option to print results continuously,
@ -104,7 +103,7 @@ $ netstat -r
$ netstat -c
```
### 11- Filtering a single port
### 11 - Filtering a single port
Combine with `grep` to filter the connections of a particular port,
@ -112,17 +111,17 @@ $ netstat -c
$ netstat -anp | grep 3306
```
### 12- Counting the number of connections
### 12 - Counting the number of connections
Combining with the wc and grep commands, we can count the number of connections on a specified port
Combining with the `wc` and `grep` commands, we can count the number of connections on a specified port:
```shell
$ netstat -anp | grep 3306 | wc -l
```
This prints the number of connections of the mysql service port (3306).
This prints the number of connections of the mysql service port (i.e. 3306).
That wraps up our intermittent example guide; we hope it has been informative enough. Feel free to ask any questions you have.
That wraps up our brief example guide; we hope it has been informative enough. Feel free to ask any questions you have.
--------------------------------------------------------------------------------
@ -130,7 +129,7 @@ via: http://linuxtechlab.com/learn-use-netstat-with-examples/
Author: [Shusain][a]
Translator: [lujun9972](https://github.com/lujun9972)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

View File

@ -0,0 +1,274 @@
How to Install Fish, the Friendly Interactive Shell, in Linux
======
Fish<ruby>友好的交互式 shell<rt>Friendly Interactive SHell</rt></ruby> 的缩写,它是一个适于装备于类 Unix 系统的智能而用户友好的 shell。Fish 有着很多重要的功能,比如自动建议、语法高亮、可搜索的历史记录(像在 bash 中 `CTRL+r`)、智能搜索功能、极好的 VGA 颜色支持、基于 web 的设置方式、完善的手册页和许多开箱即用的功能。尽管安装并立即使用它吧。无需更多其他配置,你也不需要安装任何额外的附加组件/插件!
在这篇教程中,我们讨论如何在 Linux 中安装和使用 fish shell。
#### 安装 Fish
尽管 fish 是一个非常用户友好的并且功能丰富的 shell但并没有包括在大多数 Linux 发行版的默认仓库中。它只能在少数 Linux 发行版中的官方仓库中找到,如 Arch LinuxGentooNixOS和 Ubuntu 等。然而,安装 fish 并不难。
在 Arch Linux 和它的衍生版上,运行以下命令来安装它。
```
sudo pacman -S fish
```
On CentOS 7, run the following commands as root:
```
cd /etc/yum.repos.d/
wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_7/shells:fish:release:2.repo
yum install fish
```
On CentOS 6, run the following commands as root:
```
cd /etc/yum.repos.d/
wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_6/shells:fish:release:2.repo
yum install fish
```
On Debian 9, run the following commands as root:
```
wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_9.0/Release.key -O Release.key
apt-key add - < Release.key
echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_9.0/ /' > /etc/apt/sources.list.d/fish.list
apt-get update
apt-get install fish
```
On Debian 8, run the following commands as root:
```
wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_8.0/Release.key -O Release.key
apt-key add - < Release.key
echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_8.0/ /' > /etc/apt/sources.list.d/fish.list
apt-get update
apt-get install fish
```
On Fedora 26, run the following commands as root:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_26/shells:fish:release:2.repo
dnf install fish
```
On Fedora 25, run the following commands as root:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_25/shells:fish:release:2.repo
dnf install fish
```
On Fedora 24, run the following commands as root:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_24/shells:fish:release:2.repo
dnf install fish
```
On Fedora 23, run the following commands as root:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_23/shells:fish:release:2.repo
dnf install fish
```
On openSUSE, run the following command as root:
```
zypper install fish
```
On RHEL 7, run the following commands as root:
```
cd /etc/yum.repos.d/
wget https://download.opensuse.org/repositories/shells:fish:release:2/RHEL_7/shells:fish:release:2.repo
yum install fish
```
On RHEL 6, run the following commands as root:
```
cd /etc/yum.repos.d/
wget https://download.opensuse.org/repositories/shells:fish:release:2/RedHat_RHEL-6/shells:fish:release:2.repo
yum install fish
```
On Ubuntu and its derivatives:
```
sudo apt-get update
sudo apt-get install fish
```
That's it. Time to explore the fish shell.
### Usage
To switch from your default shell to fish, do the following:
```
$ fish
Welcome to fish, the friendly interactive shell
```
The default fish configuration is located at `~/.config/fish/config.fish` (similar to `.bashrc`). If it doesn't exist, just create it.
#### Autosuggestions
When I type a command, fish automatically suggests one in light grey. So I only have to type the first few letters of a Linux command and press the `tab` key to complete it.
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png)][2]
If there are more possibilities, fish lists them. You can choose one of the listed commands with the up/down arrow keys. After picking the command you want to run, just press the right arrow key and hit `ENTER` to run it.
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png)][3]
No more `CTRL+r`! As you already know, we reverse-search history in the Bash shell by pressing `CTRL+r`. That isn't necessary in the fish shell: thanks to autosuggestions, just type the first few letters of a command and pick an already-executed command from the history. Cool, isn't it?
#### Smart search
We can also use smart search to find a specific command, file, or directory. For example, I type part of a command, then press the down arrow key for a smart search, and type another letter to pick the desired command from the list.
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png)][4]
#### Syntax highlighting
You will notice syntax highlighting as you type a command. See the difference in the screenshots below when I type the same command in the Bash shell and in the fish shell.
Bash
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png)][5]
Fish
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png)][6]
As you can see, `sudo` is highlighted in the fish shell. In addition, invalid commands are displayed in red by default.
#### Web-based configuration
This is another cool feature of the fish shell. We can set our colors, change the fish prompt, and view all functions, variables, history, and key bindings from a web page.
To start the web configuration interface, just type:
```
fish_config
```
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png)][7]
#### Man page completions
Bash and other shells support programmable completions, but only fish generates them automatically by parsing your installed man pages.
To do so, run:
```
fish_update_completions
```
Sample output would be:
```
Parsing man pages and writing completions to /home/sk/.local/share/fish/generated_completions/
3435 / 3435 : zramctl.8.gz
```
#### Disabling the greeting
By default, fish greets you at startup with "Welcome to fish, the friendly interactive shell". If you don't want this greeting message, you can disable it. To do so, edit the fish configuration file:
```
vi ~/.config/fish/config.fish
```
and add the following line:
```
set -g -x fish_greeting ''
```
Instead of disabling the fish greeting, you can also set any custom greeting message:
```
set -g -x fish_greeting 'Welcome to OSTechNix'
```
#### Getting help
This is another impressive feature that caught my attention. To open the fish documentation page in your default web browser from the terminal, just type:
```
help
```
The official documentation will open in your default browser. You can also use the man pages to display the help section of any command.
```
man fish
```
#### Setting fish as the default shell
Like it a lot? Great, set it as your default shell! To do so, use the `chsh` command:
```
chsh -s /usr/bin/fish
```
Here, `/usr/bin/fish` is the path to the fish shell. If you don't know the correct path, the following command will help:
```
which fish
```
Log out and log back in to use the new default shell.
Remember that many shell scripts written for Bash may not be fully compatible with fish.
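A tiny illustration of why (a sketch, not from the original article): even variable assignment differs between the two shells:
```
# bash:   x=5; export PATH="$PATH:/opt/bin"
# fish equivalents:
set x 5                        # plain variable assignment uses `set`
set -x PATH $PATH /opt/bin     # -x exports; PATH is a real list in fish
```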
To switch back to Bash, just run:
```
bash
```
If you want Bash as your permanent default shell again, run:
```
chsh -s /bin/bash
```
And that's all, folks. At this point you should have a basic idea of how to use the fish shell. If you're looking for an alternative to Bash, fish might be a good option.
Cheers!
Resource:
* [fish shell website][1]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/
Author: [SK][a]
Translator: [kimii](https://github.com/kimii)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]:https://www.ostechnix.com/author/sk/
[1]:https://fishshell.com/
[2]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png
[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png
[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png
[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png
[7]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png

View File

@ -0,0 +1,165 @@
How to Extract a Substring in Bash
======
A "substring" is a string that occurs inside another string. For example, "3382" is a substring of "this is a 3382 test". There are several ways to extract the number, or a specified portion, from such a string.
[![How to Extract substring in Bash Shell on Linux or Unix](https://www.cyberciti.biz/media/new/faq/2017/12/How-to-Extract-substring-in-Bash-Shell-on-Linux-or-Unix.jpg)][2]
This article shows you how to find out, that is, extract, a substring in the bash shell.
### Extracting a substring in Bash
The syntax is:
```shell
## syntax ##
${parameter:offset:length}
```
Substring expansion is a bash feature. It expands to up to `length` characters of the value of `parameter`, starting at the character specified by `offset`. Say `$u` is defined as follows:
```shell
## define the variable u ##
u="this is a test"
```
Then the following substring parameter expansion extracts the substring:
```shell
var="${u:10:4}"
echo "${var}"
```
The result:
```
test
```
Where the parameters mean (a sketch of negative offsets follows this list):
+ 10 : the offset
+ 4 : the length
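Bash also accepts a negative offset, counted from the end of the string. Note the space (or parentheses) needed so the expression is not parsed as the `:-` use-default operator. A quick sketch:
```shell
u="this is a test"
echo "${u: -4}"      # last four characters -> test
echo "${u:(-4):3}"   # parentheses work too  -> tes
```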
### Using IFS
From the bash man page:
> The [IFS (Internal Field Separator)][3] is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is <space><tab><newline>.
Another POSIX-ready solution is as follows:
```shell
u="this is a test"
set -- $u
echo "$1"
echo "$2"
echo "$3"
echo "$4"
```
The output:
```shell
this
is
a
test
```
Here is a piece of bash code that purges URLs, along with the corresponding home page, from the Cloudflare cache.
```shell
#!/bin/bash
####################################################
## Author - Vivek Gite {https://www.cyberciti.biz/}
## Purpose - Purge CF cache
## License - Under GPL ver 3.x+
####################################################
## set me first ##
zone_id="YOUR_ZONE_ID_HERE"
api_key="YOUR_API_KEY_HERE"
email_id="YOUR_EMAIL_ID_HERE"
## hold data ##
home_url=""
amp_url=""
urls="$@"
## Show usage
[ "$urls" == "" ] && { echo "Usage: $0 url1 url2 url3"; exit 1; }
## Get home page url as we have various sub dirs on domain
## /tips/
## /faq/
get_home_url(){
local u="$1"
IFS='/'
set -- $u
echo "${1}${IFS}${IFS}${3}${IFS}${4}${IFS}"
}
echo
echo "Purging cache from Cloudflare。.。"
echo
for u in $urls
do
home_url="$(get_home_url $u)"
amp_url="${u}amp/"
curl -X DELETE "https://api.cloudflare.com/client/v4/zones/${zone_id}/purge_cache" \
-H "X-Auth-Email: ${email_id}" \
-H "X-Auth-Key: ${api_key}" \
-H "Content-Type: application/json" \
--data "{\"files\":[\"${u}\"\"${amp_url}\"\"${home_url}\"]}"
echo
done
echo
```
Usage:
```shell
~/bin/cf.clear.cache https://www.cyberciti.biz/faq/bash-for-loop/ https://www.cyberciti.biz/tips/linux-security.html
```
### Using the cut command
The `cut` command can be used to cut out parts of each line of a file, or of a variable. Its syntax is:
```shell
u="this is a test"
echo "$u" | cut -d' ' -f 4
echo "$u" | cut --delimiter=' ' --fields=4
##########################################
## WHERE
## -d' ' : Use a whitespace as delimiter
## -f 4 : Select only 4th field
##########################################
var="$(cut -d' ' -f 4 <<< $u)"
echo "${var}"
```
For more information, read the man pages of bash and cut:
```shell
man bash
man cut
```
See also: [Bash String Comparison: Find Out IF a Variable Contains a Substring][1]
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/how-to-extract-substring-in-bash/
Author: [Vivek Gite][a]
Translator: [lujun9972](https://github.com/lujun9972)
Proofreader: [wxy](https://github.com/wxy)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/faq/bash-find-out-if-variable-contains-substring/
[2]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-Extract-substring-in-Bash-Shell-on-Linux-or-Unix.jpg
[3]:https://bash.cyberciti.biz/guide/$IFS

View File

@ -1,3 +1,5 @@
darsh8 Translating
Book review: Ours to Hack and to Own
============================================================

View File

@ -0,0 +1,159 @@
Annoying Experiences Every Linux Gamer Never Wanted!
============================================================
[![Linux gamer's problem](https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg)][10]
[Gaming on Linux][12] has come a long way. There are dedicated [Linux gaming distributions][13] now. But this doesn't mean that the gaming experience on Linux is as smooth as on Windows.
What are the obstacles that should be thought about to ensure that we enjoy games as much as Windows users do?
[Wine][14], [PlayOnLinux][15] and other similar tools are not always able to play every popular Windows game. In this article, I would like to discuss various factors that must be dealt with in order to have the best possible Linux gaming experience.
### #1 SteamOS is Open Source, Steam for Linux is NOT
As stated on the [SteamOS page][16], even though SteamOS is open source, Steam for Linux continues to be proprietary. Had it also been open source, the amount of support from the open source community would have been tremendous! Since it is not, [the birth of Project Ascension was inevitable][17]:
[video](https://youtu.be/07UiS5iAknA)
Project Ascension is an open source game launcher designed to launch games that have been bought and downloaded from anywhere they can be Steam games, [Origin games][18], Uplay games, games downloaded directly from game developer websites or from DVD/CD-ROMs.
Here is how it all began: [Sharing The Idea][19] resulted in a very interesting discussion with readers all over from the gaming community pitching in their own opinions and suggestions.
### #2 Performance compared to Windows
Getting Windows games to run on Linux is not always an easy task. But thanks to a feature called [CSMT][20] (command stream multi-threading), PlayOnLinux is now better equipped to deal with these performance issues, though it's still a long way from achieving Windows-level outcomes.
Native Linux support for games has not been so good for past releases.
Last year, it was reported that SteamOS performed [significantly worse][21] than Windows. Tomb Raider was released on SteamOS/Steam for Linux last year. However, benchmark results were [not at par][22] with performance on Windows.
[video](https://youtu.be/nkWUBRacBNE)
This was quite obviously due to the fact that the game had been developed with [DirectX][23] in mind and not [OpenGL][24].
Tomb Raider is the [first Linux game that uses TressFX][25]. This video includes TressFX comparisons:
[video](https://youtu.be/-IeY5ZS-LlA)
Here is another interesting comparison which shows Wine+CSMT performing much better than the native Linux version itself on Steam! This is the power of Open Source!
[Suggested read: A New Linux OS "OSu" Vying To Be Ubuntu Of Arch Linux World][26]
[video](https://youtu.be/sCJkC6oJ08A)
TressFX has been turned off in this case to avoid FPS loss.
Here is another Linux vs Windows comparison for the recently released “[Life is Strange][27]” on Linux:
[video](https://youtu.be/Vlflu-pIgIY)
It's good to know that [_Steam for Linux_][28] has begun to show better performance improvements for this new Linux game.
Before launching any game for Linux, developers should consider optimizing it, especially if it's a DirectX game that requires OpenGL translation. We really do hope that [Deus Ex: Mankind Divided on Linux][29] gets benchmarked well upon release. As it's a DirectX game, we hope it's being ported well to Linux. Here's [what the Executive Game Director had to say][30].
### #3 Proprietary NVIDIA Drivers
[AMD's support for Open Source][31] is definitely commendable when compared to [NVIDIA][32]. Though [AMD][33] driver support is [pretty good on Linux][34] now due to its better open source driver, NVIDIA graphics card owners will still have to use the proprietary NVIDIA drivers because of the limited capabilities of the open-source version of NVIDIA's graphics driver, called Nouveau.
In the past, the legendary Linus Torvalds also shared his thoughts on Linux support from NVIDIA, calling it totally unacceptable:
[video](https://youtu.be/O0r6Pr_mdio)
You can watch the complete talk [here][35]. Although NVIDIA responded with [a commitment for better linux support][36], the open source graphics driver still continues to be weak as before.
### #4 Need for Uplay and Origin DRM support on Linux
[video](https://youtu.be/rc96NFwyxWU)
The above video describes how to install the [Uplay][37] DRM on Linux. The uploader also suggests that the use of wine as the main tool of games and applications is not recommended on Linux. Rather, preference to native applications should be encouraged instead.
The following video is a guide about installing the [Origin][38] DRM on Linux:
[video](https://youtu.be/ga2lNM72-Kw)
Digital Rights Management Software adds another layer for game execution and hence it adds up to the already challenging task to make a Windows game run well on Linux. So in addition to making the game execute, W.I.N.E has to take care of running the DRM software such as Uplay or Origin as well. It would have been great if, like Steam, Linux could have got its own native versions of Uplay and Origin.
[Suggested read: Linux Foundation Head Calls 2017 'Year of the Linux Desktop'... While Running Apple's macOS Himself][39]
### #5 DirectX 11 support for Linux
Even though we have tools on Linux to run Windows applications, every game comes with its own set of tweak requirements for it to be playable on Linux. Though there was an announcement about [DirectX 11 support for Linux][40] last year via Code Weavers, it's still a long way to go to make playing newly launched titles on Linux a possibility.
Currently, you can [buy Crossover from Codeweavers][41] to get the best DirectX 11 support available. This [thread][42] on the Arch Linux forums clearly shows how much more effort is required to make this dream a possibility. Here is an interesting [find][43] from a [Reddit thread][44], which mentions Wine getting [DirectX 11 patches from Codeweavers][45]. Now that's definitely some good news.
### #6 100% of Steam games are not available for Linux
This is an important point to ponder as Linux gamers continue to miss out on every major game release since most of them land up on Windows. Here is a guide to [install Steam for Windows on Linux][46].
### #7 Better Support from video game publishers for OpenGL
Currently, developers and publishers focus primarily on DirectX for video game development rather than OpenGL. Now as Steam is officially here for Linux, developers should start considering development in OpenGL as well.
[Direct3D][47] is made solely for the Windows platform. The OpenGL API is an open standard, and implementations exist for not only Windows but a wide variety of other platforms.
Though quite an old article, [this valuable resource][48] shares a lot of thoughtful information on the realities of OpenGL and DirectX. The points made are truly very sensible and enlightens the reader about the facts based on actual chronological events.
Publishers who are launching their titles on Linux should definitely not leave out the fact that developing the game on OpenGL would be a much better deal than translating it from DirectX to OpenGL. If conversion has to be done, the translations must be well optimized and carefully looked into. There might be a delay in releasing the games but still it would definitely be worth the wait.
Have more annoyances to share? Do let us know in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-gaming-problems/
Author: [Avimanyu Bandyopadhyay ][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]:https://itsfoss.com/author/avimanyu/
[1]:https://itsfoss.com/author/avimanyu/
[2]:https://itsfoss.com/linux-gaming-problems/#comments
[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[4]:https://twitter.com/share?original_referer=/&text=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21&url=https://itsfoss.com/linux-gaming-problems/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=itsfoss2
[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/linux-gaming-problems/&title=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21
[8]:https://www.reddit.com/submit?url=https://itsfoss.com/linux-gaming-problems/&title=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21
[9]:https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg
[10]:https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg
[11]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg&url=https://itsfoss.com/linux-gaming-problems/&is_video=false&description=Linux%20gamer%27s%20problem
[12]:https://itsfoss.com/linux-gaming-guide/
[13]:https://itsfoss.com/linux-gaming-distributions/
[14]:https://itsfoss.com/use-windows-applications-linux/
[15]:https://www.playonlinux.com/en/
[16]:http://store.steampowered.com/steamos/
[17]:http://www.ibtimes.co.uk/reddit-users-want-replace-steam-open-source-game-launcher-project-ascension-1498999
[18]:https://www.origin.com/
[19]:https://www.reddit.com/r/pcmasterrace/comments/33xcvm/we_hate_valves_monopoly_over_pc_gaming_why/
[20]:https://github.com/wine-compholio/wine-staging/wiki/CSMT
[21]:http://arstechnica.com/gaming/2015/11/ars-benchmarks-show-significant-performance-hit-for-steamos-gaming/
[22]:https://www.gamingonlinux.com/articles/tomb-raider-benchmark-video-comparison-linux-vs-windows-10.7138
[23]:https://en.wikipedia.org/wiki/DirectX
[24]:https://en.wikipedia.org/wiki/OpenGL
[25]:https://www.gamingonlinux.com/articles/tomb-raider-released-for-linux-video-thoughts-port-report-included-the-first-linux-game-to-use-tresfx.7124
[26]:https://itsfoss.com/osu-new-linux/
[27]:http://lifeisstrange.com/
[28]:https://itsfoss.com/install-steam-ubuntu-linux/
[29]:https://itsfoss.com/deus-ex-mankind-divided-linux/
[30]:http://wccftech.com/deus-ex-mankind-divided-director-console-ports-on-pc-is-disrespectful/
[31]:http://developer.amd.com/tools-and-sdks/open-source/
[32]:http://nvidia.com/
[33]:http://amd.com/
[34]:http://www.makeuseof.com/tag/open-source-amd-graphics-now-awesome-heres-get/
[35]:https://youtu.be/MShbP3OpASA
[36]:https://itsfoss.com/nvidia-optimus-support-linux/
[37]:http://uplay.com/
[38]:http://origin.com/
[39]:https://itsfoss.com/linux-foundation-head-uses-macos/
[40]:http://www.pcworld.com/article/2940470/hey-gamers-directx-11-is-coming-to-linux-thanks-to-codeweavers-and-wine.html
[41]:https://itsfoss.com/deal-run-windows-software-and-games-on-linux-with-crossover-15-66-off/
[42]:https://bbs.archlinux.org/viewtopic.php?id=214771
[43]:https://ghostbin.com/paste/sy3e2
[44]:https://www.reddit.com/r/linux_gaming/comments/3ap3uu/directx_11_support_coming_to_codeweavers/
[45]:https://www.codeweavers.com/about/blogs/caron/2015/12/10/directx-11-really-james-didnt-lie
[46]:https://itsfoss.com/linux-gaming-guide/
[47]:https://en.wikipedia.org/wiki/Direct3D
[48]:http://blog.wolfire.com/2010/01/Why-you-should-use-OpenGL-and-not-DirectX

View File

@ -0,0 +1,68 @@
translating by zrszrszrs
GitHub Is Building a Coders Paradise. Its Not Coming Cheap
============================================================
The VC-backed unicorn startup lost $66 million in nine months of 2016, financial documents show.
Though the name GitHub is practically unknown outside technology circles, coders around the world have embraced the software. The startup operates a sort of Google Docs for programmers, giving them a place to store, share and collaborate on their work. But GitHub Inc. is losing money through profligate spending and has stood by as new entrants emerged in a software category it essentially gave birth to, according to people familiar with the business and financial paperwork reviewed by Bloomberg.
The rise of GitHub has captivated venture capitalists. Sequoia Capital led a $250 million investment in mid-2015. But GitHub management may have been a little too eager to spend the new money. The company paid to send employees jetting across the globe to Amsterdam, London, New York and elsewhere. More costly, it doubled headcount to 600 over the course of about 18 months.
GitHub lost $27 million in the fiscal year that ended in January 2016, according to an income statement seen by Bloomberg. It generated $95 million in revenue during that period, the internal financial document says.
![Chris Wanstrath, co-founder and chief executive officer at GitHub Inc., speaks during the 2015 Bloomberg Technology Conference in San Francisco, California, U.S., on Tuesday, June 16, 2015. The conference gathers global business leaders, tech influencers, top investors and entrepreneurs to shine a spotlight on how coders and coding are transforming business and fueling disruption across all industries. Photographer: David Paul Morris/Bloomberg *** Local Caption *** Chris Wanstrath](https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iXpmtRL9Q0C4/v0/400x-1.jpg)
GitHub CEO Chris Wanstrath.Photographer: David Paul Morris/Bloomberg
Sitting in a conference room featuring an abstract art piece on the wall and a Mad Men-style rollaway bar cart in the corner, GitHubs Chris Wanstrath says the business is running more smoothly now and growing. “What happened to 2015?” says the 31-year-old co-founder and chief executive officer. “Nothing was getting done, maybe? I shouldnt say that. Strike that.”
GitHub recently hired Mike Taylor, the former treasurer and vice president of finance at Tesla Motors Inc., to manage spending as chief financial officer. It also hopes to add a seasoned chief operating officer. GitHub has already surpassed last years revenue in nine months this year, with $98 million, the financial document shows. “The whole product road map, we have all of our shit together in a way that weve never had together. Im pretty elated right now with the way things are going,” says Wanstrath. “Weve had a lot of ups and downs, and right now were definitely in an up.”
Also up: expenses. The income statement shows a loss of $66 million in the first three quarters of this year. Thats more than twice as much as was lost in any nine-month time frame by Twilio Inc., another maker of software tools founded the same year as GitHub. At least a dozen members of GitHubs leadership team have left since last year, several of whom expressed unhappiness with Wanstraths management style. GitHub says the company has flourished under his direction but declined to comment on finances. Wanstrath says: “We raised $250 million last year, and were putting it to use. Were not expecting to be profitable right now.”
Wanstrath started GitHub with three friends during the recession of 2008 and bootstrapped the business for four years. They encouraged employees to [work remotely][1], which forced the team to adopt GitHubs tools for their own projects and had the added benefit of saving money on office space. GitHub quickly became essential to the code-writing process at technology companies of all sizes and gave birth to a new generation of programmers by hosting their open-source code for free.
Peter Levine, a partner at Andreessen Horowitz, courted the founders and eventually convinced them to take their first round of VC money in 2012. The firm led a $100 million cash infusion, and Levine joined the board. The next year, GitHub signed a seven-year lease worth about $35 million for a headquarters in San Francisco, says a person familiar with the project.
The new digs gave employees a reason to come into the office. Visitors would enter a lobby modeled after the White Houses Oval Office before making their way to a replica of the Situation Room. The company also erected a statue of its mascot, a cartoon octopus-cat creature known as the Octocat. The 55,000-square-foot space is filled with wooden tables and modern art.
In GitHubs cultural hierarchy, the coder is at the top. The company has strived to create the best product possible for software developers and watch them flock to it. In addition to offering its base service for free, GitHub sells more advanced programming tools to companies big and small. But it found that some chief information officers want a human touch and began to consider building out a sales team.
The issue took on a new sense of urgency in 2014 with the formation of a rival startup with a similar name. GitLab Inc. went after large businesses from the start, offering them a cheaper alternative to GitHub. “The big differentiator for GitLab is that it was designed for the enterprise, and GitHub was not,” says GitLab CEO Sid Sijbrandij. “One of the values is frugality, and this is something very close to our heart. We want to treat our team members really well, but we dont want to waste any money where its not needed. So we dont have a big fancy office because we can be effective without it.”
Y Combinator, a Silicon Valley business incubator, welcomed GitLab into the fold last year. GitLab says more than 110,000 organizations, including IBM and Macys Inc., use its software. (IBM also uses GitHub.) Atlassian Corp. has taken a similar top-down approach with its own code repository Bitbucket.
Wanstrath says the competition has helped validate GitHubs business. “When we started, people made fun of us and said there is no money in developer tools,” he says. “Ive kind of been waiting for this for a long time—to be proven right, that this is a real market.”
![GitHub_Office-03](https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iQB5sqXgihdQ/v0/400x-1.jpg)
Source: GitHub
It also spurred GitHub into action. With fresh capital last year valuing the company at $2 billion, it went on a hiring spree. It spent $71 million on salaries and benefits last fiscal year, according to the financial document seen by Bloomberg. This year, those costs rose to $108 million from February to October, with three months still to go in the fiscal year, the document shows. This was the startups biggest expense by far.
The emphasis on sales seemed to be making an impact, but the team missed some of its targets, says a person familiar with the matter. In September 2014, subscription revenue on an annualized basis was about $25 million each from enterprise sales and organizations signing up through the site, according to another financial document. After GitHub staffed up, annual recurring revenue from large clients increased this year to $70 million while the self-service business saw healthy, if less dramatic, growth to $52 million.
But the uptick in revenue wasnt keeping pace with the aggressive hiring. GitHub cut about 20 employees in recent weeks. “The unicorn trap is that youve sold equity against a plan that you often cant hit; then what do you do?” says Nick Sturiale, a VC at Ignition Partners.
Such business shifts are risky, and stumbles arent uncommon, says Jason Lemkin, a corporate software VC whos not an investor in GitHub. “That transition from a self-service product in its early days to being enterprise always has bumps,” he says. GitHub says it has 18 million users, and its Enterprise service is used by half of the worlds 10 highest-grossing companies, including Wal-Mart Stores Inc. and Ford Motor Co.
Some longtime GitHub fans werent happy with the new direction, though. More than 1,800 developers signed an online petition, saying: “Those of us who run some of the most popular projects on GitHub feel completely ignored by you.”
The backlash was a wake-up call, Wanstrath says. GitHub is now more focused on its original mission of catering to coders, he says. “I want us to be judged on, Are we making developers more productive?’” he says. At GitHubs developer conference in September, Wanstrath introduced several new features, including an updated process for reviewing code. He says 2016 was a “marquee year.”
At least five senior staffers left in 2015, and turnover among leadership continued this year. Among them was co-founder and CIO Scott Chacon, who says he left to start a new venture. “GitHub was always very good to me, from the first day I started when it was just the four of us,” Chacon says. “They allowed me to travel the world representing them; they supported my teaching and evangelizing Git and remote work culture for a long time.”
The travel excursions are expected to continue at GitHub, and theres little evidence it can rein in spending any time soon. The company says about half its staff is remote and that the trips bring together GitHubs distributed workforce and encourage collaboration. Last week, at least 20 employees on GitHubs human-resources team convened in Rancho Mirage, California, for a retreat at the Ritz Carlton.
--------------------------------------------------------------------------------
via: https://www.bloomberg.com/news/articles/2016-12-15/github-is-building-a-coder-s-paradise-it-s-not-coming-cheap
作者:[Eric Newcomer ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.bloomberg.com/authors/ASFMS16EsvU/eric-newcomer
[1]:https://www.bloomberg.com/news/articles/2016-09-06/why-github-finally-abandoned-its-bossless-workplace

View File

@ -0,0 +1,104 @@
New Years resolution: Donate to 1 free software project every month
============================================================
### Donating just a little bit helps ensure the open source software I use remains alive
Free and open source software is an absolutely critical part of our world—and the future of technology and computing. One problem that consistently plagues many free software projects, though, is the challenge of funding ongoing development (and support and documentation). 
With that in mind, I have finally settled on a New Years resolution for 2017: to donate to one free software project (or group) every month—for the whole year. After all, these projects are saving me a boatload of money because I dont need to buy expensive, proprietary packages to accomplish the same things.
Im not setting some crazy goal here—not requiring that I donate beyond my means. Heck, some months I may be able to donate only a few bucks. But every little bit helps, right? 
To help me accomplish that goal, below is a list of free software projects with links to where I can donate to them. Organized by categories, just because. Im scheduling a monthly calendar item to remind me to bring up this page and donate to one of these projects. 
This isnt a complete list—not by any measure—but its a good starting point. Apologies to the (many) great projects out there that I missed.
#### Linux distributions 
[elementary OS][20] — In addition to the distribution itself (which is based, in part, on Ubuntu), this team also develops the Pantheon desktop environment. 
[Solus][21] — This is a “from scratch” distro using their own custom-developed desktop environment, “Budgie.” 
[Ubuntu MATE][22] — Its Ubuntu—with Unity ripped out and replaced with MATE. I like to think of this as “What Ubuntu was like back when I still used Ubuntu.” 
[Debian][23] — If you use Ubuntu or elementary or Mint, you are using a system based on Debian. Personally, I use Debian on my [PocketCHIP][24].
#### Linux components 
[PulseAudio][25] — PulseAudio is all over the place now. If it stopped being supported and maintained, that would be… highly inconvenient. 
#### Productivity/Creation 
[Gimp][26] — The GNU Image Manipulation Program is one of the most famous free software projects—and the standard for cross-platform raster design tools. 
[FreeCAD][27] — When people talk about difficulty in moving from Windows to Linux, the lack of CAD software often crops up. Supporting projects such as FreeCAD helps to remove that barrier. 
[OpenShot][28] — Video editing on Linux (and other free software desktops) has improved tremendously over the past few years. But there is still work to be done. 
[Blender][29] — What is Blender? A 3D modelling suite? A video editor? A game creation system? All three (and more)? Whatever you use Blender for, its amazing. 
[Inkscape][30] — This is the most fantastic vector graphics editing suite on the planet (in my oh-so-humble opinion). 
[LibreOffice / The Document Foundation][31] — I am writing this very document in LibreOffice. Donating to their foundation to help further development seems to be in my best interests. 
#### Software development 
[Python Software Foundation][32] — Python is a great language and is used all over the place. 
#### Free and open source foundations 
[Free Software Foundation][33] — “The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user freedom. We defend the rights of all software users.” 
[Software Freedom Conservancy][34] — “Software Freedom Conservancy helps promote, improve, develop and defend Free, Libre and Open Source Software (FLOSS) projects.” 
Again—this is, by no means, a complete list. Not even close. Luckily many projects provide easy donation mechanisms on their websites.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3160174/linux/new-years-resolution-donate-to-1-free-software-project-every-month.html
作者:[ Bryan Lunduke][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Bryan-Lunduke/
[1]:https://www.networkworld.com/article/3143583/linux/linux-y-things-i-am-thankful-for.html
[2]:https://www.networkworld.com/article/3152745/linux/5-rock-solid-linux-distros-for-developers.html
[3]:https://www.networkworld.com/article/3130760/open-source-tools/elementary-os-04-review-and-interview-with-the-founder.html
[4]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
[5]:https://twitter.com/intent/tweet?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&via=networkworld&text=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
[6]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
[7]:http://www.linkedin.com/shareArticle?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&title=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
[8]:https://plus.google.com/share?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
[9]:http://reddit.com/submit?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&title=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
[10]:http://www.stumbleupon.com/submit?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
[11]:https://www.networkworld.com/article/3160174/linux/new-years-resolution-donate-to-1-free-software-project-every-month.html#email
[12]:https://www.networkworld.com/article/3143583/linux/linux-y-things-i-am-thankful-for.html
[13]:https://www.networkworld.com/article/3152745/linux/5-rock-solid-linux-distros-for-developers.html
[14]:https://www.networkworld.com/article/3130760/open-source-tools/elementary-os-04-review-and-interview-with-the-founder.html
[15]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
[16]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
[17]:https://www.facebook.com/NetworkWorld/
[18]:https://www.linkedin.com/company/network-world
[19]:http://www.networkworld.com/article/3158685/open-source-tools/free-software-foundation-shakes-up-its-list-of-priority-projects.html
[20]:https://www.patreon.com/elementary
[21]:https://www.patreon.com/solus
[22]:https://www.patreon.com/ubuntu_mate
[23]:https://www.debian.org/donations
[24]:http://www.networkworld.com/article/3157210/linux/review-pocketchipsuper-cheap-linux-terminal-that-fits-in-your-pocket.html
[25]:https://www.patreon.com/tanuk
[26]:https://www.gimp.org/donating/
[27]:https://www.patreon.com/yorikvanhavre
[28]:https://www.patreon.com/openshot
[29]:https://www.blender.org/foundation/donation-payment/
[30]:https://inkscape.org/en/support-us/donate/
[31]:https://www.libreoffice.org/donate/
[32]:https://www.python.org/psf/donations/
[33]:http://www.fsf.org/associate/
[34]:https://sfconservancy.org/supporter/

View File

@ -1,3 +1,6 @@
translating by HardworkFish
INTRODUCING DOCKER SECRETS MANAGEMENT
============================================================

View File

@ -0,0 +1,168 @@
Which Official Ubuntu Flavor Is Best for You?
============================================================
![Ubuntu Budgie](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_budgie.jpg?itok=xpo3Ujfw "Ubuntu Budgie")
Ubuntu Budgie is just one of the few officially recognized flavors of Ubuntu. Jack Wallen takes a look at some important differences between them.[Used with permission][7]
Ubuntu Linux comes in a few officially recognized flavors, as well as several derivative distributions. The recognized flavors are:
* [Kubuntu][9] - Ubuntu with the KDE desktop
* [Lubuntu][10] - Ubuntu with the LXDE desktop
* [Mythbuntu][11] - Ubuntu with MythTV
* [Ubuntu Budgie][12] - Ubuntu with the Budgie desktop
* [Xubuntu][8] - Ubuntu with Xfce
Up until recently, the official Ubuntu Linux included the in-house Unity desktop and a sixth recognized flavor existed: Ubuntu GNOME -- Ubuntu with the GNOME desktop environment.
When Mark Shuttleworth decided to nix Unity, the choice was obvious to Canonical—make GNOME the official desktop of Ubuntu Linux. This begins with Ubuntu 18.04 (so April, 2018) and well be down to the official distribution and five recognized flavors.
For those already enmeshed in the Linux community, thats some seriously simple math to do—you know which Linux desktop you like, so making the choice between Ubuntu, Kubuntu, Lubuntu, Mythbuntu, Ubuntu Budgie, and Xubuntu couldnt be easier. Those that havent already been indoctrinated into the way of Linux wont see that as such a cut-and-dried decision.
To that end, I thought it might be a good idea to help newer users decide which flavor is best for them. After all, choosing the wrong distribution out of the starting gate can make for a less-than-ideal experience.
And so, if youre considering a flavor of Ubuntu, and you want your experience to be as painless as possible, read on.
### Ubuntu
Ill begin with the official flavor of Ubuntu. I am also going to warp time a bit and skip Unity, to launch right into the upcoming GNOME-based distribution. Beyond GNOME being an incredibly stable and easy to use desktop environment, there is one very good reason to select the official flavor—support. The official flavor of Ubuntu is commercially supported by Canonical. For $150.00 per year, you can purchase [official support][20] for the Ubuntu desktop. There is, of course, a 50-desktop minimum for this level of support. For individuals, the best bet for support would be the [Ubuntu Forums][21], the [Ubuntu documentation][22], or the [Community help wiki][23].
Beyond the commercial support, the reason to choose the official Ubuntu flavor would be if youre looking for a modern, full-featured desktop that is incredibly reliable and easy to use. GNOME has been designed to serve as a platform perfectly suited for both desktops and laptops (Figure 1). Unlike its predecessor, Unity, GNOME can be far more easily customized to suit your needs—to a point. If youre not one to tinker with the desktop, fear not, GNOME just works. In fact, the out of the box experience with GNOME might well be one of the finest on the market—even rivaling (or besting) Mac OS X. If tinkering and tweaking is of primary interest, you will find GNOME somewhat limiting. The [GNOME Tweak Tool][24] and [GNOME Shell Extensions][25] will only take you so far, before you find yourself wanting more.
![GNOME desktop](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_a.jpg?itok=Ir6jBKbd "GNOME desktop")
Figure 1: The GNOME desktop with a Unity-like flavor might be what we see with Ubuntu 18.04.[Used with permission][1]
### Kubuntu
The [K Desktop Environment][26] (otherwise known as KDE) has been around as long as GNOME and has, at times, been maligned as a lesser desktop. With the release of KDE Plasma 5, that changed. KDE has become an incredibly powerful, efficient, and stable desktop that can stand toe to toe with the best of them. But why would you select Kubuntu over the official Ubuntu? The answer to that question is quite simple—youre used to the Windows XP/7 desktop metaphor. Start menu, taskbar, system tray, etc., KDE has those and more, all fashioned in such a way that will make you feel like youre using the best of the past and current technologies. In fact, if youre looking for one of the most Windows 7-like official Ubuntu flavors, you wont find one that better fits the bill.
One of the nice things about Kubuntu, is that youll find it a bit more flexible than any Windows iteration youve ever used—and equally reliable/user-friendly. And dont think, because KDE opts to offer a desktop somewhat similar to Windows 7, that it doesnt have a modern flavor. In fact, Kubuntu takes what worked well with the Windows 7 interface and updates it to meet a more modern aesthetic (Figure 2).
![Kubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_b.jpg?itok=dGpebi4z "Kubuntu")
Figure 2: Kubuntu offers a modern take on an old UX.[Used with permission][2]
The official Ubuntu is not the only flavor to offer desktop support. Kubuntu users also can pay for [commercial support][27]. Be warned, its not cheap. One hour of support time will cost you $103.88.
### Lubuntu
If youre looking for an easy-to-use desktop that is very fast (so that older hardware will feel like new) and far more flexible than just about any desktop youve ever used, Lubuntu is what you want. The only caveat to Lubuntu is that youre looking at a bit more bare bones on the desktop than you may be accustomed to. Lubuntu makes use of the [LXDE desktop][28] and includes a list of applications that continues the lightweight theme. So if youre looking for blazing fast speeds on the desktop, Lubuntu might be a good choice.
However, there is a caveat with Lubuntu and, for some users, this might be a deal breaker. Along with the small footprint of Lubuntu come pre-installed applications that might not stand up to the task. For example, instead of the full-blown office suite, youll find the [AbiWord word processor][29] and the [Gnumeric spreadsheet][30] tool. Dont get me wrong; both of these are fine tools. However, if youre looking for software thats business-ready, you will find them lacking. On the other hand, if you want to install more work-centric tools (e.g., LibreOffice), Lubuntu includes the Synaptic Package Manager to make installation of third-party software simple.
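For instance, pulling in LibreOffice takes only a couple of commands from a terminal as well (a minimal sketch; the `libreoffice` metapackage name assumes the standard Ubuntu repositories):
```
# Refresh the package index, then install the full LibreOffice suite
sudo apt-get update
sudo apt-get install libreoffice
```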
Even with the limited default software, Lubuntu offers a clean and easy to use desktop (Figure 3), that anyone could start using with little to no learning curve.
![Lubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_c.jpg?itok=nWsJr39r "Lubuntu")
Figure 3: What Lubuntu lacks in software, it makes up for in speed and simplicity.[Used with permission][3]
### Mythbuntu
Mythbuntu is a sort of odd bird here, because it isnt really a desktop variant. Instead, Mythbuntu is a special flavor of Ubuntu designed to be a multimedia powerhouse. Using Mythbuntu requires TV Tuners and TV Out cards. And, during the installation, there are a number of additional steps that must be taken (choosing how to set up the frontend/backend as well as setting up your IR remotes).
If you do happen to have the hardware (and the desire to create your own Ubuntu-powered entertainment system), Mythbuntu is the distribution you want. Once youve installed Mythbuntu, you will then be prompted to walk through the setup of your Capture cards, recording profiles, video sources, and Input connections (Figure 4).
![Mythbuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_d.jpg?itok=Uk16xUIF "Mythbuntu")
Figure 4: Getting ready to set up Mythbuntu.[Used with permission][4]
### Ubuntu Budgie
Ubuntu Budgie is the new kid on the block to the official flavor list. Sporting the Budgie Desktop, this is a beautiful and modern take on Linux that will please just about any type of user. The goal of Ubuntu Budgie was to create an elegant and simple desktop interface. Mission accomplished. If youre looking for a beautiful desktop to work on top of the remarkably stable Ubuntu Linux platform, look no further than Ubuntu Budgie.
Adding this particular spin on Ubuntu to the list of official variants was a smart move on the part of Canonical. With Unity going away, they needed a desktop that would offer the elegance found in Unity. Customization of Budgie is very easy, and the list of included software will get you working and browsing immediately.
And, unlike the learning curve many users encountered with Unity, the developers/designers of Ubuntu Budgie have done a remarkable job of keeping this take on Ubuntu familiar. Click on the “start” button to reveal a fairly standard menu of applications. Budgie also includes an easy to use Dock (Figure 5) that holds application launchers for quick access.
![Budgie](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_e.jpg?itok=mwlo4xzm "Budgie")
Figure 5: This is one beautiful desktop.[Used with permission][5]
Another really nice feature found in Ubuntu Budgie is a sidebar that can be quickly revealed and hidden. This sidebar holds applets and notifications. With this in play, your desktop can be both incredibly useful, while remaining clutter free.
In the end, if youre looking for something a bit different, that happens to also be a very modern take on the desktop—with features and functions not found on other distributions—Ubuntu Budgie is what youre looking for.
### Xubuntu
Another official flavor of Ubuntu that does a nice job of providing a small footprint version of Linux is [Xubuntu][32]. The difference between Xubuntu and Lubuntu is that, where Lubuntu uses the LXDE desktop, Xubuntu makes use of [Xfce][33]. What you get with that difference is a lightweight desktop that is far more configurable (than Lubuntu) as well as one that includes the more business-ready LibreOffice office suite.
Xubuntu is an out of the box experience that anyone, regardless of experience, can use. But don't think that immediate familiarity means this flavor of Ubuntu is locked out of making it your own. If you're looking for a take on Ubuntu that's somewhat old-school out of the box, but can be heavily tweaked to better resemble a more modern desktop, Xubuntu is what you want.
One really handy addition to Xubuntu that I've always enjoyed (one that harks back to Enlightenment) is the ability to bring up the "start" menu by right-clicking anywhere on the desktop (Figure 6). This can make for very efficient usage.
![Xubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/xubuntu.jpg?itok=XL8_hLet "Xubuntu")
Figure 6: Xubuntu lets you bring up the "start" menu by right-clicking anywhere on the desktop.[Used with permission][6]
### The choice is yours
There is a flavor of Ubuntu to meet nearly any need—which one you choose is up to you. Ask yourself questions such as:
* What are your needs?
* What type of desktop do you prefer to interact with?
* Is your hardware aging?
* Do you prefer a Windows XP/7 feel?
* Are you wanting a multimedia system?
Your answers to the above questions will go a long way to determining which flavor of Ubuntu is right for you. The good news is that you cant really go wrong with any of the available options.
_Learn more about Linux through the free ["Introduction to Linux"][31] course from The Linux Foundation and edX._
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/5/which-official-ubuntu-flavor-best-you
作者:[ JACK WALLEN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://www.linux.com/licenses/category/used-permission
[7]:https://www.linux.com/licenses/category/used-permission
[8]:http://xubuntu.org/
[9]:http://www.kubuntu.org/
[10]:http://lubuntu.net/
[11]:http://www.mythbuntu.org/
[12]:https://ubuntubudgie.org/
[13]:https://www.linux.com/files/images/ubuntuflavorajpg
[14]:https://www.linux.com/files/images/ubuntuflavorbjpg
[15]:https://www.linux.com/files/images/ubuntuflavorcjpg
[16]:https://www.linux.com/files/images/ubuntuflavordjpg
[17]:https://www.linux.com/files/images/ubuntuflavorejpg
[18]:https://www.linux.com/files/images/xubuntujpg
[19]:https://www.linux.com/files/images/ubuntubudgiejpg
[20]:https://buy.ubuntu.com/collections/ubuntu-advantage-for-desktop
[21]:https://ubuntuforums.org/
[22]:https://help.ubuntu.com/?_ga=2.155705979.1922322560.1494162076-828730842.1481046109
[23]:https://help.ubuntu.com/community/CommunityHelpWiki?_ga=2.155705979.1922322560.1494162076-828730842.1481046109
[24]:https://apps.ubuntu.com/cat/applications/gnome-tweak-tool/
[25]:https://extensions.gnome.org/
[26]:https://www.kde.org/
[27]:https://kubuntu.emerge-open.com/buy
[28]:http://lxde.org/
[29]:https://www.abisource.com/
[30]:http://www.gnumeric.org/
[31]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[32]:https://xubuntu.org/
[33]:https://www.xfce.org/

View File

@ -1,3 +1,5 @@
Translating by XiatianSummer
Why Car Companies Are Hiring Computer Security Experts
============================================================

View File

@ -1,232 +0,0 @@
translating by liuxinyu123
Containing System Services in Red Hat Enterprise Linux Part 1
============================================================
At the 2017 Red Hat Summit, several people asked me “We normally use full VMs to separate network services like DNS and DHCP, can we use containers instead?”. The answer is yes, and heres an example of how to create a system container in Red Hat Enterprise Linux 7 today.   
### **THE GOAL**
#### _Create a network service that can be updated independently of any other services of the system, yet easily managed and updated from the host._
Lets explore setting up a BIND server running under systemd in a container. In this part, well look at building our container, as well as managing the BIND configuration and data files.
In Part Two, well look at how systemd on the host integrates with systemd in the container. Well explore managing the service in the container, and enabling it as a service on the host.
### **CREATING THE BIND CONTAINER**
To get systemd working inside a container easily, we first need to add two packages on the host: `oci-register-machine` and `oci-systemd-hook`. The `oci-systemd-hook` hook allows us to run systemd in a container without needing to use a privileged container or manually configuring tmpfs and cgroups. The `oci-register-machine` hook allows us to keep track of the container with the systemd tools like `systemctl` and `machinectl`.
```
[root@rhel7-host ~]# yum install oci-register-machine oci-systemd-hook
```
On to creating our BIND container. The [Red Hat Enterprise Linux 7 base image][6]  includes systemd as an init system. We can install and enable BIND the same way we would on a typical system. You can [download this Dockerfile from the git repository][7] in the Resources.
```
[root@rhel7-host bind]# vi Dockerfile
# Dockerfile for BIND
FROM registry.access.redhat.com/rhel7/rhel
ENV container docker
RUN yum -y install bind && \
   yum clean all && \
   systemctl enable named
STOPSIGNAL SIGRTMIN+3
EXPOSE 53
EXPOSE 53/udp
CMD [ "/sbin/init" ]
```
Since were starting with an init system as PID 1, we need to change the signal sent by the docker CLI when we tell the container to stop. From the `kill` system call man pages (`man 2 kill`):
```
The only signals that can be sent to process ID 1, the init
process, are those for which init has explicitly installed
signal handlers. This is done to assure the system is not
brought down accidentally.
```
For the systemd signal handlers, `SIGRTMIN+3` is the signal that corresponds to `systemd start halt.target`. We also expose both TCP and UDP ports for BIND, since both protocols could be in use.
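As a quick sanity check (a sketch, assuming a Docker version recent enough to record `STOPSIGNAL` in the image metadata; we build the image as `named` in the next section), you can confirm the signal stored in the image:
```
# .Config.StopSignal holds the value set by the STOPSIGNAL instruction
[root@rhel7-host bind]# docker inspect --format '{{.Config.StopSignal}}' named
SIGRTMIN+3
```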
### **MANAGING DATA**
With a functional BIND service, we need a way to manage the configuration and zone files. Currently those are inside the container, so we  _could_  enter the container any time we wanted to update the configs or make a zone file change. This isnt ideal from a management perspective.  Well need to rebuild the container when we need to update BIND, so any changes made inside the running container would be lost. Having to enter the container any time we need to update a file or restart the service adds steps and time.
Instead, well extract the configuration and data files from the container and copy them to the host, then mount them at run time. This way we can easily restart or rebuild the container without losing changes. We can also modify configs and zones by using an editor outside of the container. Since this container data looks like “ _site-specific data served by this system_ ”, lets follow the File System Hierarchy and create `/srv/named` on the local host to maintain administrative separation.
```
[root@rhel7-host ~]# mkdir -p /srv/named/etc
[root@rhel7-host ~]# mkdir -p /srv/named/var/named
```
##### _NOTE: If you are migrating an existing configuration, you can skip the following step and copy it directly to the`/srv/named` directories. You may still want to check the container assigned GID with a temporary container._
Lets build and run a temporary container to examine BIND. With an init process as PID 1, we cant run the container interactively to get a shell. Well exec into it after it launches, and check for important files with `rpm`.
```
[root@rhel7-host ~]# docker build -t named .
[root@rhel7-host ~]# docker exec -it $( docker run -d named ) /bin/bash
[root@0e77ce00405e /]# rpm -ql bind
```
For this example, well need `/etc/named.conf` and everything under `/var/named/`. We can extract these with `machinectl`. If theres more than one container registered, we can see whats running in any machine with `machinectl status`. Once we have the configs we can stop the temporary container.
_Theres also a [sample `named.conf` and zone files for `example.com` in the Resources][2] if you prefer._
```
[root@rhel7-host bind]# machinectl list
MACHINE                          CLASS     SERVICE
8824c90294d5a36d396c8ab35167937f container docker
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /etc/named.conf /srv/named/etc/named.conf
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /var/named /srv/named/var/named
[root@rhel7-host ~]# docker stop infallible_wescoff
```
### **FINAL CREATION**
To create and run the final container, add the volume options to mount:
* file `/srv/named/etc/named.conf` as `/etc/named.conf`
* directory `/srv/named/var/named` as `/var/named`
Since this is our final container, well also provide a meaningful name that we can refer to later.
```
[root@rhel7-host ~]# docker run -d -p 53:53 -p 53:53/udp -v /srv/named/etc/named.conf:/etc/named.conf:Z -v /srv/named/var/named:/var/named:Z --name named-container named
```
With the final container running, we can modify the local configs to change the behavior of BIND in the container. The BIND server will need to listen on any IP that the container might be assigned. Be sure the GID of any new file matches the rest of the BIND files from the container. 
```
[root@rhel7-host bind]# cp named.conf /srv/named/etc/named.conf
[root@rhel7-host ~]# cp example.com.zone /srv/named/var/named/example.com.zone
[root@rhel7-host ~]# cp example.com.rr.zone  /srv/named/var/named/example.com.rr.zone
```
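One way to verify that (a sketch; the GID of 25 below comes from the sidebar example at the end of this post, so check the actual value with your temporary container) is to compare the numeric group owner of the copied files and correct any mismatch:
```
# Show the numeric group owner of each file we copied in
[root@rhel7-host ~]# stat -c '%g %n' /srv/named/etc/named.conf /srv/named/var/named/*
# If anything doesn't match what BIND in the container expects, fix it (25 here)
[root@rhel7-host ~]# chgrp -R 25 /srv/named/etc/named.conf /srv/named/var/named
```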
> [Curious why I didnt need to change SELinux context on the host directories?][3]
Well reload the config by execing the `rndc` binary provided by the container. We can use `journald` in the same fashion to check the BIND logs. If you run into errors, you can edit the file on the host, and reload the config. Using `host` or `dig` on the host, we can check the responses from the contained service for example.com.
```
[root@rhel7-host ~]# docker exec -it named-container rndc reload       
server reload successful
[root@rhel7-host ~]# docker exec -it named-container journalctl -u named -n
-- Logs begin at Fri 2017-05-12 19:15:18 UTC, end at Fri 2017-05-12 19:29:17 UTC. --
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 9.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: A.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: B.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 8.B.D.0.1.0.0.2.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: reloading configuration succeeded
May 12 19:29:17 ac1752c314a7 named[27]: reloading zones succeeded
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: loaded serial 2001062601
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: sending notifies (serial 2001062601)
May 12 19:29:17 ac1752c314a7 named[27]: all zones loaded
May 12 19:29:17 ac1752c314a7 named[27]: running
[root@rhel7-host bind]# host www.example.com localhost
Using domain server:
Name: localhost
Address: ::1#53
Aliases:
www.example.com is an alias for server1.example.com.
server1.example.com is an alias for mail
```
> [Did your zone file not update? It might be your editor not the serial number.][4]
### THE FINISH LINE (?)
Weve got what we set out to accomplish. DNS requests and zones are being served from a container. Weve got a persistent location to manage data and configurations across updates.  
In Part 2 of this series, well see how to treat the container as a normal service on the host.
* * *
_[Follow the RHEL Blog][5] to receive updates on Part 2 of this series and other new posts via email._
* * *
### _**Additional Resources:**_
#### GitHub repository for accompanying files:  [https://github.com/nzwulfin/named-container][8]
#### **SIDEBAR 1: ** _SELinux context on local files accessed by a container_
You may have noticed that when I copied the files from the container to the local host, I didnt run a `chcon` to change the files on the host to type `svirt_sandbox_file_t`.  Why didnt it break? Copying a file into `/srv` should have made that file label type `var_t`. Did I `setenforce 0`?
Of course not, that would make Dan Walsh cry.  And yes, `machinectl` did indeed set the label type as expected, take a look:
Before starting the container:
```
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
-rw-r-----. unconfined_u:object_r:var_t:s0   /srv/named/etc/named.conf
```
No, I used a volume option in run that makes Dan Walsh happy, `:Z`.  This part of the command `-v /srv/named/etc/named.conf:/etc/named.conf:Z` does two things: first it says this needs to be relabeled with a private volume SELinux label, and second it says to mount it read / write.
After starting the container:
```
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
-rw-r-----. root 25 system_u:object_r:svirt_sandbox_file_t:s0:c821,c956 /srv/named/etc/named.conf
```
#### **SIDEBAR 2: ** _VIM backup behavior can change inodes_
If you made the edits to the config file with `vim` on the local host and you arent seeing the changes in the container, you may have inadvertently created a new file that the container isnt aware of. There are three `vim` settings that affect backup copies during editing: backup, writebackup, and backupcopy.
Ive snipped out the defaults that apply for RHEL 7 from the official VIM backup_table [http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table]
```
backup    writebackup
  off     on backup current file, deleted afterwards (default)
```
So we dont create tilde copies that stick around, but we are creating backups. The other setting is backupcopy, where `auto` is the shipped default:
```
"yes" make a copy of the file and overwrite the original one
"no" rename the file and write a new one
"auto" one of the previous, what works best
```
This combo means that when you edit a file, unless `vim` sees a reason not to (check the docs for the logic) you will end up with a new file that contains your edits, which will be renamed to the original filename when you save. This means the file gets a new inode. For most situations this isnt a problem, but here the bind mount into the container *is* sensitive to inode changes. To solve this, you need to change the backupcopy behavior.
Either in the `vim` session or in your `.vimrc`, add `set backupcopy=yes`. This will make sure the original file gets truncated and overwritten, preserving the inode and propagating the changes into the container.
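You can watch this happen for yourself (a small demonstration; the inode numbers are illustrative, and whether `vim` picks the rename strategy depends on the file and filesystem):
```
[root@rhel7-host ~]# ls -i /srv/named/etc/named.conf
8213169 /srv/named/etc/named.conf
[root@rhel7-host ~]# vim /srv/named/etc/named.conf    # edit and :wq with backupcopy=auto
[root@rhel7-host ~]# ls -i /srv/named/etc/named.conf
8213200 /srv/named/etc/named.conf
# A new inode means the container's bind mount still points at the old file
```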
--------------------------------------------------------------------------------
via: http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
作者:[Matt Micene ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
[1]:http://rhelblog.redhat.com/author/mmicenerht/
[2]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo
[3]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_1
[4]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_2
[5]:http://redhatstackblog.wordpress.com/feed/
[6]:https://access.redhat.com/containers
[7]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo
[8]:https://github.com/nzwulfin/named-container

View File

@ -1,175 +0,0 @@
translating by HardworkFish
How to answer questions in a helpful way
============================================================
Your coworker asks you a slightly unclear question. How do you answer? I think asking questions is a skill (see [How to ask good questions][1]) and that answering questions in a helpful way is also a skill! Both of them are super useful.
To start out with: sometimes the people asking you questions dont respect your time, and that sucks. Im assuming here throughout that thats not whats happening: were going to assume that the person asking you questions is a reasonable person who is trying their best to figure something out and that you want to help them out. Everyone I work with is like that and so thats the world I live in :)
Here are a few strategies for answering questions in a helpful way!
### If theyre not asking clearly, help them clarify
Often beginners dont ask clear questions, or ask questions that dont have the necessary information to answer the questions. Here are some strategies you can use to help them clarify.
* **Rephrase a more specific question** back at them (“Are you asking X?”)
* **Ask them for more specific information** they didnt provide (“are you using IPv6?”)
* **Ask what prompted their question**. For example, sometimes people come into my teams channel with questions about how our service discovery works. Usually this is because theyre trying to set up/reconfigure a service. In that case its helpful to ask “which service are you working with? Can I see the pull request youre working on?”
A lot of these strategies come from the [how to ask good questions][2] post. (though I would never say to someone “oh you need to read this Document On How To Ask Good Questions before asking me a question”)
### Figure out what they know already
Before answering a question, its very useful to know what the person knows already!
Harold Treen gave me a great example of this:
> Someone asked me the other day to explain “Redux Sagas”. Rather than dive in and say “They are like worker threads that listen for actions and let you update the store!” 
> I started figuring out how much they knew about Redux, actions, the store and all these other fundamental concepts. From there it was easier to explain the concept that ties those other concepts together.
Figuring out what your question-asker knows already is important because they may be confused about fundamental concepts (“Whats Redux?”), or they may be an expert whos getting at a subtle corner case. An answer building on concepts they dont know is confusing, and an answer that recaps things they know is tedious.
One useful trick for asking what people know: instead of “Do you know X?”, maybe try “How familiar are you with X?”.
### Point them to the documentation
“RTFM” is the classic unhelpful answer to a question, but pointing someone to a specific piece of documentation can actually be really helpful! When Im asking a question, Id honestly rather be pointed to documentation that actually answers my question, because its likely to answer other questions I have too.
I think its important here to make sure youre linking to documentation that actually answers the question, or at least check in afterwards to make sure it helped. Otherwise you can end up with this (pretty common) situation:
* Ali: How do I do X?
* Jada: <link to documentation>
* Ali: That doesnt actually explain how to X, it only explains Y!
If the documentation Im linking to is very long, I like to point out the specific part of the documentation Im talking about. The [bash man page][3] is 44,000 words (really!), so just saying “its in the bash man page” is not that helpful :)
### Point them to a useful search
Often I find things at work by searching for some Specific Keyword that I know will find me the answer. That keyword might not be obvious to a beginner! So saying “this is the search Id use to find the answer to that question” can be useful. Again, check in afterwards to make sure the search actually gets them the answer they need :)
### Write new documentation
People often come and ask my team the same questions over and over again. This is obviously not the fault of the people (how should  _they_  know that 10 people have asked this already, or what the answer is?). So instead of answering the questions directly, were trying to:
1. Immediately write documentation
2. Point the person to the new documentation we just wrote
3. Celebrate!
Writing documentation sometimes takes more time than just answering the question, but its often worth it! Writing documentation is especially worth it if:
a. Its a question which is being asked again and again
b. The answer doesnt change too much over time (if the answer changes every week or month, the documentation will just get out of date and be frustrating)
### Explain what you did
As a beginner to a subject, its really frustrating to have an exchange like this:
* New person: “hey how do you do X?”
* More Experienced Person: “I did it, it is done.”
* New person: ….. but what did you DO?!
If the person asking you is trying to learn how things work, its helpful to:
* Walk them through how to accomplish a task instead of doing it yourself
* Tell them the steps for how you got the answer you gave them!
This might take longer than doing it yourself, but its a learning opportunity for the person who asked, so that theyll be better equipped to solve such problems in the future.
Then you can have WAY better exchanges, like this:
* New person: “Im seeing errors on the site, whats happening?”
* More Experienced Person: (2 minutes later) “oh thats because theres a database failover happening”
* New person: how did you know that??!?!?
* More Experienced Person: “Heres what I did!”:
1. Often these errors are due to Service Y being down. I looked at $PLACE and it said Service Y was up. So that wasnt it.
2. Then I looked at dashboard X, and this part of that dashboard showed there was a database failover happening.
3. Then I looked in the logs for the service and it showed errors connecting to the database, heres what those errors look like.
If youre explaining how you debugged a problem, its useful both to explain how you found out what the problem was, and how you found out what the problem wasnt. While it might feel good to look like you knew the answer right off the top of your head, it feels even better to help someone improve at learning and diagnosis, and understand the resources available.
### Solve the underlying problem
This one is a bit tricky. Sometimes people think theyve got the right path to a solution, and they just need one more piece of information to implement that solution. But they might not be quite on the right path! For example:
* George: Im doing X, and I got this error, how do I fix it
* Jasminda: Are you actually trying to do Y? If so, you shouldnt do X, you should do Z instead
* George: Oh, youre right!!! Thank you! I will do Z instead.
Jasminda didnt answer Georges question at all! Instead she guessed that George didnt actually want to be doing X, and she was right. That is helpful!
Its possible to come off as condescending here though, like
* George: Im doing X, and I got this error, how do I fix it?
* Jasminda: Dont do that, youre trying to do Y and you should do Z to accomplish that instead.
* George: Well, I am not trying to do Y, I actually want to do X because REASONS. How do I do X?
So dont be condescending, and keep in mind that some questioners might be attached to the steps theyve taken so far! It might be appropriate to answer both the question they asked and the one they should have asked: “Well, if you want to do X then you might try this, but if youre trying to solve problem Y with that, you might have better luck doing this other thing, and heres why thatll work better”.
### Ask “Did that answer your question?”
I always like to check in after I  _think_  Ive answered the question and ask “did that answer your question? Do you have more questions?”.
Its good to pause and wait after asking this because often people need a minute or two to know whether or not theyve figured out the answer. I especially find this extra “did this answer your questions?” step helpful after writing documentation! Often when writing documentation about something I know well Ill leave out something very important without realizing it.
### Offer to pair program/chat in real life
I work remote, so many of my conversations at work are text-based. I think of that as the default mode of communication.
Today, we live in a world of easy video conferencing & screensharing! At work I can at any time click a button and immediately be in a video call/screensharing session with someone. Some problems are easier to talk about using your voices!
For example, recently someone was asking about capacity planning/autoscaling for their service. I could tell there were a few things we needed to clear up but I wasnt exactly sure what they were yet. We got on a quick video call and 5 minutes later wed answered all their questions.
I think especially if someone is really stuck on how to get started on a task, pair programming for a few minutes can really help, and it can be a lot more efficient than email/instant messaging.
### Dont act surprised
This ones a rule from the Recurse Center: [no feigning surprise][4]. Heres a relatively common scenario
* Human 1: “whats the Linux kernel?”
* Human 2: “you dont know what the LINUX KERNEL is?!!!!?!!!???”
Human 2s reaction (regardless of whether theyre  _actually_  surprised or not) is not very helpful. It mostly just serves to make Human 1 feel bad that they dont know what the Linux kernel is.
Ive worked on actually pretending not to be surprised even when I actually am a bit surprised that the person doesnt know the thing, and its awesome.
### Answering questions well is awesome
Obviously not all these strategies are appropriate all the time, but hopefully you will find some of them helpful! I find taking the time to answer questions and teach people can be really rewarding.
Special thanks to Josh Triplett for suggesting this post and making many helpful additions, and to Harold Treen, Vaibhav Sagar, Peter Bhat Harkins, Wesley Aptekar-Cassels, and Paul Gowder for reading/commenting.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/answer-questions-well/
作者:[ Julia Evans][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about
[1]:https://jvns.ca/blog/good-questions/
[2]:https://jvns.ca/blog/good-questions/
[3]:https://linux.die.net/man/1/bash
[4]:https://jvns.ca/blog/2017/04/27/no-feigning-surprise/

View File

@ -1,149 +0,0 @@
Translating by qhwdw
Reasons Kubernetes is cool
============================================================
When I first learned about Kubernetes (a year and a half ago?) I really didnt understand why I should care about it.
Ive been working full time with Kubernetes for 3 months or so and now have some thoughts about why I think its useful. (Im still very far from being a Kubernetes expert!) Hopefully this will help a little in your journey to understand what even is going on with Kubernetes!
I will try to explain some reasons I think Kubernetes is interesting without using the words “cloud native”, “orchestration”, “container”, or any Kubernetes-specific terminology :). Im going to explain this mostly from the perspective of a kubernetes operator / infrastructure engineer, since my job right now is to set up Kubernetes and make it work well.
Im not going to try to address the question of “should you use kubernetes for your production systems?” at all, that is a very complicated question. (not least because “in production” has totally different requirements depending on what youre doing)
### Kubernetes lets you run code in production without setting up new servers
The first pitch I got for Kubernetes was the following conversation with my partner Kamal:
Here's an approximate transcript:
* Kamal: With Kubernetes you can set up a new service with a single command
* Julia: I don't understand how that's possible.
* Kamal: Like, you just write 1 configuration file, apply it, and then you have an HTTP service running in production
* Julia: But today I need to create new AWS instances, write a puppet manifest, set up service discovery, configure my load balancers, configure our deployment software, and make sure DNS is working, it takes at least 4 hours if nothing goes wrong.
* Kamal: Yeah. With Kubernetes you don't have to do any of that, you can set up a new HTTP service in 5 minutes and it'll just automatically run. As long as you have spare capacity in your cluster it just works!
* Julia: There must be a trap
There kind of is a trap, setting up a production Kubernetes cluster is (in my experience) definitely not easy. (see [Kubernetes The Hard Way][3] for what's involved to get started). But we're not going to go into that right now!
So the first cool thing about Kubernetes is that it has the potential to make life way easier for developers who want to deploy new software into production. That's cool, and it's actually true, once you have a working Kubernetes cluster you really can set up a production HTTP service (“run 5 of this application, set up a load balancer, give it this DNS name, done”) with just one configuration file. It's really fun to see.
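To make that concrete, here's a rough sketch of what that moment can look like from the command line (the app name, image, and ports are placeholders I made up; you can also express the same thing as a YAML file and `kubectl apply -f` it):

```
$ kubectl run my-app --image=example.com/my-app:1.0 --replicas=5 --port=8080
$ kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080
```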
### Kubernetes gives you easy visibility & control of what code you have running in production
IMO you can't understand Kubernetes without understanding etcd. So let's talk about etcd!
Imagine that I asked you today “hey, tell me every application you have running in production, what host it's running on, whether it's healthy or not, and whether or not it has a DNS name attached to it”. I don't know about you but I would need to go look in a bunch of different places to answer this question and it would take me quite a while to figure out. I definitely can't query just one API.
In Kubernetes, all the state in your cluster (applications running (“pods”), nodes, DNS names, cron jobs, and more) is stored in a single database (etcd). Every Kubernetes component is stateless, and basically works by:
* Reading state from etcd (eg “the list of pods assigned to node 1”)
* Making changes (eg “actually start running pod A on node 1”)
* Updating the state in etcd (eg “set the state of pod A to running”)
This means that if you want to answer a question like “hey, how many nginx pods do I have running right now in that availability zone?” you can answer it by querying a single unified API (the Kubernetes API!). And you have exactly the same access to that API that every other Kubernetes component does.
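For instance, here's a sketch of that query, assuming `kubectl` is pointed at your cluster and your nginx pods carry an `app=nginx` label (the label is my invention for illustration, not something Kubernetes imposes):

```
$ kubectl get pods --all-namespaces -l app=nginx -o wide
```

The `-o wide` output includes which node each pod landed on, so filtering by availability zone is one `grep` away.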
This also means that you have easy control of everything running in Kubernetes. If you want to, say,
* Implement a complicated custom rollout strategy for deployments (deploy 1 thing, wait 2 minutes, deploy 5 more, wait 3.7 minutes, etc)
* Automatically [start a new webserver][1] every time a branch is pushed to github
* Monitor all your running applications to make sure all of them have a reasonable cgroups memory limit
all you need to do is to write a program that talks to the Kubernetes API. (a “controller”)
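You don't even need a client library to start poking at that API; here's a minimal sketch using `kubectl proxy`, which forwards a local port to the apiserver:

```
$ kubectl proxy --port=8001 &
$ curl -s http://localhost:8001/api/v1/pods | head
```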
Another very exciting thing about the Kubernetes API is that you're not limited to just functionality that Kubernetes provides! If you decide that you have your own opinions about how your software should be deployed / created / monitored, then you can write code that uses the Kubernetes API to do it! It lets you do everything you need.
### If every Kubernetes component dies, your code will still keep running
One thing I was originally promised (by various blog posts :)) about Kubernetes was “hey, if the Kubernetes apiserver and everything else dies, it's ok, your code will just keep running”. I thought this sounded cool in theory but I wasn't sure if it was actually true.
So far it seems to be actually true!
I've been through some etcd outages now, and what happens is:
1. All the code that was running keeps running
2. Nothing  _new_  happens (you can't deploy new code or make changes, cron jobs will stop working)
3. When everything comes back, the cluster will catch up on whatever it missed
This does mean that if etcd goes down and one of your applications crashes or something, it can't come back up until etcd returns.
### Kubernetes design is pretty resilient to bugs
Like any piece of software, Kubernetes has bugs. For example right now in our cluster the controller manager has a memory leak, and the scheduler crashes pretty regularly. Bugs obviously aren't good but so far I've found that Kubernetes' design helps mitigate a lot of the bugs in its core components really well.
If you restart any component, what happens is:
* It reads all its relevant state from etcd
* It starts doing the necessary things it's supposed to be doing based on that state (scheduling pods, garbage collecting completed pods, scheduling cronjobs, deploying daemonsets, whatever)
Because all the components don't keep any state in memory, you can just restart them at any time and that can help mitigate a variety of bugs.
For example! Let's say you have a memory leak in your controller manager. Because the controller manager is stateless, you can just periodically restart it every hour or something and feel confident that you won't cause any consistency issues. Or we ran into a bug in the scheduler where it would sometimes just forget about pods and never schedule them. You can sort of mitigate this just by restarting the scheduler every 10 minutes. (we didn't do that, we fixed the bug instead, but you  _could_  :) )
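As a sketch of what such a periodic restart might look like (assuming your control plane components run as systemd units, which depends entirely on how you installed them):

```
# restart the controller manager to clear a hypothetical memory leak
$ sudo systemctl restart kube-controller-manager
```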
So I feel like I can trust Kubernetes' design to help make sure the state in the cluster is consistent even when there are bugs in its core components. And in general I think the software is improving over time. The only stateful thing you have to operate is etcd.
Not to harp on this “state” thing too much but I think it's cool that in Kubernetes the only thing you have to come up with backup/restore plans for is etcd (unless you use persistent volumes for your pods). I think it makes kubernetes operations a lot easier to think about.
### Implementing new distributed systems on top of Kubernetes is relatively easy
Suppose you want to implement a distributed cron job scheduling system! Doing that from scratch is a ton of work. But implementing a distributed cron job scheduling system inside Kubernetes is much easier! (still not trivial, it's still a distributed system)
The first time I read the code for the Kubernetes cronjob controller I was really delighted by how simple it was. Here, go read it! The main logic is like 400 lines of Go. Go ahead, read it! => [cronjob_controller.go][4] <=
Basically what the cronjob controller does is (there's a rough shell sketch of this loop after the list):
* Every 10 seconds:
* Lists all the cronjobs that exist
* Checks if any of them need to run right now
* If so, creates a new Job object to be scheduled & actually run by other Kubernetes controllers
* Cleans up finished jobs
* Repeat
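Here's that loop sketched as shell pseudocode, purely illustrative (the real controller is Go code talking to the API, not shelling out to kubectl):

```
while true; do
    for job in $(kubectl get cronjobs -o name); do
        # check the job's schedule; if it's due, create a Job object.
        # other controllers will notice the Job and actually run it
        :
    done
    # clean up finished jobs, then repeat
    sleep 10
done
```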
The Kubernetes model is pretty constrained (it has this pattern: resources are defined in etcd, controllers read those resources and update etcd), and I think having this relatively opinionated/constrained model makes it easier to develop your own distributed systems inside the Kubernetes framework.
Kamal introduced me to this idea of “Kubernetes is a good platform for writing your own distributed systems” instead of just “Kubernetes is a distributed system you can use” and I think it's really interesting. He has a prototype of a [system to run an HTTP service for every branch you push to github][5]. It took him a weekend and is like 800 lines of Go, which I thought was impressive!
### Kubernetes lets you do some amazing things (but isn't easy)
I started out by saying “kubernetes lets you do these magical things, you can just spin up so much infrastructure with a single configuration file, it's amazing”. And that's true!
What I mean by “Kubernetes isn't easy” is that Kubernetes has a lot of moving parts, and learning how to successfully operate a highly available Kubernetes cluster is a lot of work. Like I find that with a lot of the abstractions it gives me, I need to understand what is underneath those abstractions in order to debug issues and configure things properly. I love learning new things so this doesn't make me angry or anything, I just think it's important to know :)
One specific example of “I can't just rely on the abstractions” that I've struggled with is that I needed to learn a LOT [about how networking works on Linux][6] to feel confident with setting up Kubernetes networking, way more than I'd ever had to learn about networking before. This was very fun but pretty time consuming. I might write more about what is hard/interesting about setting up Kubernetes networking at some point.
Or I wrote a [2000 word blog post][7] about everything I had to learn about Kubernetes' different options for certificate authorities to be able to set up my Kubernetes CAs successfully.
I think some of these managed Kubernetes systems like GKE (Google's Kubernetes product) may be simpler since they make a lot of decisions for you but I haven't tried any of them.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2017/10/05/reasons-kubernetes-is-cool/
作者:[ Julia Evans][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about
[1]:https://github.com/kamalmarhubi/kubereview
[2]:https://jvns.ca/categories/kubernetes
[3]:https://github.com/kelseyhightower/kubernetes-the-hard-way
[4]:https://github.com/kubernetes/kubernetes/blob/e4551d50e57c089aab6f67333412d3ca64bc09ae/pkg/controller/cronjob/cronjob_controller.go
[5]:https://github.com/kamalmarhubi/kubereview
[6]:https://jvns.ca/blog/2016/12/22/container-networking/
[7]:https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/

View File

@ -1,4 +1,4 @@
fuzheng1998 translating
A Large-Scale Study of Programming Languages and Code Quality in GitHub
============================================================
@ -36,7 +36,7 @@ Our language and project data was extracted from the  _GitHub Archive_ , a data
**Identifying top languages.** We aggregate projects based on their primary language. Then we select the languages with the most projects for further analysis, as shown in [Table 1][48]. A given project can use many languages; assigning a single language to it is difficult. Github Archive stores information gathered from GitHub Linguist which measures the language distribution of a project repository using the source file extensions. The language with the maximum number of source files is assigned as the  _primary language_  of the project.
[![t1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t1.jpg)][49]
**Table 1. Top 3 projects in each language.**
**Retrieving popular projects.** For each selected language, we filter the project repositories written primarily in that language by its popularity based on the associated number of  _stars._ This number indicates how many people have actively expressed interest in the project, and is a reasonable proxy for its popularity. Thus, the top 3 projects in C are  _linux, git_ , and  _php-src_ ; and for C++ they are  _node-webkit, phantomjs_ , and  _mongo_ ; and for `Java` they are  _storm, elasticsearch_ , and  _ActionBarSherlock._  In total, we select the top 50 projects in each language.
@ -47,7 +47,7 @@ To ensure that these projects have a sufficient development history, we drop the
[Table 2][51] summarizes our data set. Since a project may use multiple languages, the second column of the table shows the total number of projects that use a certain language at some capacity. We further exclude some languages from a project that have fewer than 20 commits in that language, where 20 is the first quartile value of the total number of commits per project per language. For example, we find 220 projects that use more than 20 commits in C. This ensures sufficient activity for each languageproject pair.
[![t2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t2.jpg)][52]
**Table 2. Study subjects.**
In summary, we study 728 projects developed in 17 languages with 18 years of history. This includes 29,000 different developers, 1.57 million commits, and 564,625 bug fix commits.
@ -57,14 +57,14 @@ In summary, we study 728 projects developed in 17 languages with 18 years of his
We define language classes based on several properties of the language thought to influence language quality,[7][9], [8][10], [12][11] as shown in [Table 3][53]. The  _Programming Paradigm_  indicates whether the project is written in an imperative procedural, imperative scripting, or functional language. In the rest of the paper, we use the terms procedural and scripting to indicate imperative procedural and imperative scripting respectively.
[![t3.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t3.jpg)][54]
**Table 3. Different types of language classes.**
_Type Checking_  indicates static or dynamic typing. In statically typed languages, type checking occurs at compile time, and variable names are bound to a value and to a type. In addition, expressions (including variables) are classified by types that correspond to the values they might take on at run-time. In dynamically typed languages, type checking occurs at run-time. Hence, in the latter, it is possible to bind a variable name to objects of different types in the same program.
_Implicit Type Conversion_  allows access of an operand of type T1 as a different type T2, without an explicit conversion. Such implicit conversion may introduce type-confusion in some cases, especially when it presents an operand of specific type T1, as an instance of a different type T2. Since not all implicit type conversions are immediately a problem, we operationalize our definition by showing examples of the implicit type confusion that can happen in all the languages we identified as allowing it. For example, in languages like `Perl, JavaScript`, and `CoffeeScript` adding a string to a number is permissible (e.g., "5" + 2 yields "52"). The same operation yields 7 in `Php`. Such an operation is not permitted in languages such as `Java` and `Python` as they do not allow implicit conversion. In C and C++ coercion of data types can result in unintended results, for example, `int x; float y; y=3.5; x=y;` is legal C code, and results in different values for x and y, which, depending on intent, may be a problem downstream.[a][12] In `Objective-C` the data type  _id_  is a generic object pointer, which can be used with an object of any data type, regardless of the class.[b][13] The flexibility that such a generic data type provides can lead to implicit type conversion and also have unintended consequences.[c][14] Hence, we classify a language based on whether its compiler  _allows_  or  _disallows_  the implicit type conversion as above; the latter explicitly detects type confusion and reports it.
Disallowing implicit type conversion could result from static type inference within a compiler (e.g., with `Java`), using a type-inference algorithm such as Hindley[10][15] and Milner,[17][16] or at run-time using a dynamic type checker. In contrast, a type-confusion can occur silently because it is either undetected or is unreported. Either way, implicitly allowing type conversion provides flexibility but may eventually cause errors that are difficult to localize. To abbreviate, we refer to languages allowing implicit type conversion as  _implicit_  and those that disallow it as  _explicit._
_Memory Class_  indicates whether the language requires developers to manage memory. We treat `Objective-C` as unmanaged, in spite of it following a hybrid model, because we observe many memory errors in its codebase, as discussed in RQ4 in Section 3.
@ -77,7 +77,7 @@ We classify the studied projects into different domains based on their features
We detect 30 distinct domains, that is, topics, and estimate the probability that each project belongs to each domain. Since these auto-detected domains include several project-specific keywords, for example, facebook, it is difficult to identify the underlying common functions. In order to assign a meaningful name to each domain, we manually inspect each of the 30 domains to identify projectname-independent, domain-identifying keywords. We manually rename all of the 30 auto-detected domains and find that the majority of the projects fall under six domains: Application, Database, CodeAnalyzer, Middleware, Library, and Framework. We also find that some projects do not fall under any of the above domains and so we assign them to a catchall domain labeled as  _Other_ . This classification of projects into domains was subsequently checked and confirmed by another member of our research group. [Table 4][57] summarizes the identified domains resulting from this process.
[![t4.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t4.jpg)][58]
**Table 4. Characteristics of domains.**
![*](http://dl.acm.org/images/bullet.gif)
@ -87,7 +87,7 @@ While fixing software bugs, developers often leave important information in the
First, we categorize the bugs based on their  _Cause_  and  _Impact. Causes_  are further classified into disjoint subcategories of errors: Algorithmic, Concurrency, Memory, generic Programming, and Unknown. The bug  _Impact_  is also classified into four disjoint subcategories: Security, Performance, Failure, and Other unknown categories. Thus, each bug-fix commit also has an induced Cause and an Impact type. [Table 5][59] shows the description of each bug category. This classification is performed in two phases:
[![t5.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t5.jpg)][60]
**Table 5. Categories of bugs and their distribution in the whole dataset.**
**(1) Keyword search.** We randomly choose 10% of the bug-fix messages and use a keyword based search technique to automatically categorize them as potential bug types. We use this annotation, separately, for both Cause and Impact types. We chose a restrictive set of keywords and phrases, as shown in [Table 5][61]. Such a restrictive set of keywords and phrases helps reduce false positives.
@ -119,7 +119,7 @@ We begin with a straightforward question that directly addresses the core of wha
We use a regression model to compare the impact of each language on the number of defects with the average impact of all languages, against defect fixing commits (see [Table 6][64]).
[![t6.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t6.jpg)][65]
**Table 6. Some languages induce fewer defects than other languages.**
We include some variables as controls for factors that will clearly influence the response. Project age is included as older projects will generally have a greater number of defect fixes. Trivially, the number of commits to a project will also impact the response. Additionally, the number of developers who touch a project and the raw size of the project are both expected to grow with project activity.
@ -128,11 +128,11 @@ The sign and magnitude of the estimated coefficients in the above model relates
One should take care not to overestimate the impact of language on defects. While the observed relationships are statistically significant, the effects are quite small. Analysis of deviance reveals that language accounts for less than 1% of the total explained deviance.
[![ut1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut1.jpg)][66]
We can read the model coefficients as the expected change in the log of the response for a one unit change in the predictor with all other predictors held constant; that is, for a coefficient β_i, a one unit change in β_i yields an expected change in the response of e^β_i. For the factor variables, this expected change is compared to the average across all languages. Thus, if, for some number of commits, a particular project developed in an  _average_  language had four defective commits, then the choice to use C++ would mean that we should expect one additional defective commit since e^0.18 × 4 = 4.79. For the same project, choosing `Haskell` would mean that we should expect about one fewer defective commit as e^−0.26 × 4 = 3.08. The accuracy of this prediction depends on all other factors remaining the same, a challenging proposition for all but the most trivial of projects. All observational studies face similar limitations; we address this concern in more detail in Section 5.
**Result 1:**  _Some languages have a greater association with defects than other languages, although the effect is small._
In the remainder of this paper we expand on this basic result by considering how different categories of application, defect, and language, lead to further insight into the relationship between languages and defect proneness.
@ -150,26 +150,26 @@ Rather than considering languages individually, we aggregate them by language cl
As with language (earlier in [Table 6][67]), we are comparing language  _classes_  with the average behavior across all language classes. The model is presented in [Table 7][68]. It is clear that `Script-Dynamic-Explicit-Managed` class has the smallest magnitude coefficient. The coefficient is insignificant, that is, the z-test for the coefficient cannot distinguish the coefficient from zero. Given the magnitude of the standard error, however, we can assume that the behavior of languages in this class is very close to the average across all languages. We confirm this by recoding the coefficient using `Proc-Static-Implicit-Unmanaged` as the base level and employing treatment, or dummy coding that compares each language class with the base level. In this case, `Script-Dynamic-Explicit-Managed` is significantly different with  _p_  = 0.00044\. We note here that while choosing different coding methods affects the coefficients and z-scores, the models are identical in all other respects. When we change the coding we are rescaling the coefficients to reflect the comparison that we wish to make.[4][28] Comparing the other language classes to the grand mean, `Proc-Static-Implicit-Unmanaged` languages are more likely to induce defects. This implies that either implicit type conversion or memory management issues contribute to greater defect proneness as compared with other procedural languages.
[![t7.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t7.jpg)][69]
**Table 7. Functional languages have a smaller relationship to defects than other language classes whereas procedural languages are greater than or similar to the average.**
Among scripting languages we observe a similar relationship between languages that allow versus those that do not allow implicit type conversion, providing some evidence that implicit type conversion (vs. explicit) is responsible for this difference as opposed to memory management. We cannot state this conclusively given the correlation between factors. However when compared to the average, as a group, languages that do not allow implicit type conversion are less error-prone while those that do are more error-prone. The contrast between static and dynamic typing is also visible in functional languages.
The functional languages as a group show a strong difference from the average. Statically typed languages have a substantially smaller coefficient yet both functional language classes have the same standard error. This is strong evidence that functional static languages are less error-prone than functional dynamic languages, however, the z-tests only test whether the coefficients are different from zero. In order to strengthen this assertion, we recode the model as above using treatment coding and observe that the `Functional-Static-Explicit-Managed` language class is significantly less defect-prone than the `Functional-Dynamic-Explicit-Managed`language class with  _p_  = 0.034.
[![ut2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut2.jpg)][70]
As with language and defects, the relationship between language class and defects is based on a small effect. The deviance explained is similar, albeit smaller, with language class explaining much less than 1% of the deviance.
We now revisit the question of application domain. Does domain have an interaction with language class? Does the choice of, for example, a functional language, have an advantage for a particular domain? As above, a Chi-square test for the relationship between these factors and the project domain yields a value of 99.05 and  _df_  = 30 with  _p_  = 2.622e09 allowing us to reject the null hypothesis that the factors are independent. Cramer's-V yields a value of 0.133, a weak level of association. Consequently, although there is some relation between domain and language, there is only a weak relationship between domain and language class.
**Result 2:**  _There is a small but significant relationship between language class and defects. Functional languages are associated with fewer defects than either procedural or scripting languages._
It is somewhat unsatisfying that we do not observe a strong association between language, or language class, and domain within a project. An alternative way to view this same data is to disregard projects and aggregate defects over all languages and domains. Since this does not yield independent samples, we do not attempt to analyze it statistically, rather we take a descriptive, visualization-based approach.
We define  _Defect Proneness_  as the ratio of bug fix commits over total commits per language per domain. [Figure 1][71] illustrates the interaction between domain and language using a heat map, where the defect proneness increases from lighter to darker zone. We investigate which language factors influence defect fixing commits across a collection of projects written across a variety of languages. This leads to the following research question:
[![f1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/f1.jpg)][72]
**Figure 1. Interaction of language's defect proneness with domain. Each cell in the heat map represents defect proneness of a language (row header) for a given domain (column header). The "Overall" column represents defect proneness of a language over all the domains. The cells with white cross mark indicate null value, that is, no commits were made corresponding to that cell.**
**RQ3. Does language defect proneness depend on domain?**
@ -178,9 +178,9 @@ In order to answer this question we first filtered out projects that would have
We see only a subdued variation in this heat map which is a result of the inherent defect proneness of the languages as seen in RQ1\. To validate this, we measure the pairwise rank correlation between the language defect proneness for each domain with the overall. For all of the domains except Database, the correlation is positive, and p-values are significant (<0.01). Thus, w.r.t. defect proneness, the language ordering in each domain is strongly correlated with the overall language ordering.
[![ut3.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut3.jpg)][74]
**Result 3:**  _There is no general relationship between application domain and language defect proneness._
We have shown that different languages induce a larger number of defects and that this relationship is not only related to particular languages but holds for general classes of languages; however, we find that the type of project does not mediate this relationship to a large degree. We now turn our attention to categorization of the response. We want to understand how language relates to specific kinds of defects and how this relationship compares to the more general relationship that we observe. We divide the defects into categories as described in [Table 5][75] and ask the following question:
@ -188,12 +188,12 @@ We have shown that different languages induce a larger number of defects and tha
We use an approach similar to RQ3 to understand the relation between languages and bug categories. First, we study the relation between bug categories and language class. A heat map ([Figure 2][76]) shows aggregated defects over language classes and bug types. To understand the interaction between bug categories and languages, we use an NBR regression model for each category. For each model we use the same control factors as RQ1 as well as languages encoded with weighted effects to predict defect fixing commits.
[![f2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/f2.jpg)][77]
**Figure 2. Relation between bug categories and language class. Each cell represents percentage of bug fix commit out of all bug fix commits per language class (row header) per bug category (column header). The values are normalized column wise.**
The results along with the anova value for language are shown in [Table 8][78]. The overall deviance for each model is substantially smaller and the proportion explained by language for a specific defect type is similar in magnitude for most of the categories. We interpret this relationship to mean that language has a greater impact on specific categories of bugs, than it does on bugs overall. In the next section we expand on these results for the bug categories with significant bug counts as reported in [Table 5][79]. However, our conclusion generalizes for all categories.
[![t8.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t8.jpg)][80]
**Table 8. While the impact of language on defects varies across defect category, language has a greater impact on specific categories than it does on defects in general.**
**Programming errors.** Generic programming errors account for around 88.53% of all bug fix commits and occur in all the language classes. Consequently, the regression analysis draws a similar conclusion as of RQ1 (see [Table 6][81]). All languages incur programming errors such as faulty error-handling, faulty definitions, typos, etc.
@ -202,7 +202,7 @@ The results along with the anova value for language are shown in [Table 8][78].
**Concurrency errors.** 1.99% of the total bug fix commits are related to concurrency errors. The heat map shows that `Proc-Static-Implicit-Unmanaged` dominates this error type. C and C++ introduce 19.15% and 7.89% of the errors, and they are distributed across the projects.
[![ut4.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut4.jpg)][84]
Both of the `Static-Strong-Managed` language classes are in the darker zone in the heat map, confirming that, in general, static languages produce more concurrency errors than others. Among the dynamic languages, only `Erlang` is more prone to concurrency errors, perhaps relating to the greater use of this language for concurrent applications. Likewise, the negative coefficients in [Table 8][85] show that projects written in dynamic languages like `Ruby` and `Php` have fewer concurrency errors. Note that certain languages like `JavaScript, CoffeeScript`, and `TypeScript` do not support concurrency, in its traditional form, while `Php` has limited support depending on its implementation. These languages introduce artificial zeros in the data, and thus the concurrency model coefficients in [Table 8][86] for those languages cannot be interpreted like the other coefficients. Due to these artificial zeros, the average over all languages in this model is smaller, which may affect the sizes of the coefficients, since they are given w.r.t. the average, but it will not affect their relative relationships, which is what we are after.
@ -210,7 +210,7 @@ A textual analysis based on word-frequency of the bug fix messages suggests that
**Security and other impact errors.** Around 7.33% of all the bug fix commits are related to Impact errors. Among them `Erlang, C++`, and `Python` associate with more security errors than average ([Table 8][87]). `Clojure` projects associate with fewer security errors ([Figure 2][88]). From the heat map we also see that `Static` languages are in general more prone to failure and performance errors, these are followed by `Functional-Dynamic-Explicit-Managed` languages such as `Erlang`. The analysis of deviance results confirm that language is strongly associated with failure impacts. While security errors are the weakest among the categories, the deviance explained by language is still quite strong when compared with the residual deviance.
**Result 4:**  _Defect types are strongly associated with languages; some defect types like memory errors and concurrency errors also depend on language primitives. Language matters more for specific categories than it does for defects overall._

View File

@ -1,3 +1,4 @@
**translating by [erlinux](https://github.com/erlinux)**
Operating a Kubernetes network
============================================================

View File

@ -0,0 +1,189 @@
3 Simple, Excellent Linux Network Monitors
============================================================
![network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_3.png?itok=iuPcSN4k "network")
Learn more about your network connections with the iftop, Nethogs, and vnstat tools.[Used with permission][3]
You can learn an amazing amount of information about your network connections with these three glorious Linux networking commands. iftop tracks network connections by process number, Nethogs quickly reveals what is hogging your bandwidth, and vnstat runs as a nice lightweight daemon to record your usage over time.
### iftop
The excellent [iftop][8] listens to the network interface that you specify, and displays connections in a top-style interface.
This is a great little tool for quickly identifying hogs, measuring speed, and also to maintain a running total of your network traffic. It is rather surprising to see how much bandwidth we use, especially for us old people who remember the days of telephone land lines, modems, screaming kilobits of speed, and real live bauds. We abandoned bauds a long time ago in favor of bit rates. Baud measures signal changes, which sometimes were the same as bit rates, but mostly not.
If you have just one network interface, run iftop with no options. iftop requires root permissions:
```
$ sudo iftop
```
When you have more than one, specify the interface you want to monitor:
```
$ sudo iftop -i wlan0
```
Just like top, you can change the display options while it is running.
* **h** toggles the help screen.
* **n** toggles name resolution.
* **s** toggles source host display, and **d** toggles the destination hosts.
* **S** toggles port numbers.
* **N** toggles port resolution; to see all port numbers toggle resolution off.
* **t** toggles the text interface. The default display requires ncurses. I think the text display is more readable and better-organized (Figure 1).
* **p** pauses the display.
* **q** quits the program.
![text display](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-1_8.png?itok=luKHS5ve "text display")
Figure 1: The text display is readable and organized.[Used with permission][1]
When you toggle the display options, iftop continues to measure all traffic. You can also select a single host to monitor. You need the host's IP address and netmask. I was curious how much of a load Pandora put on my sad little meager bandwidth cap, so first I used dig to find their IP address:
```
$ dig A pandora.com
[...]
;; ANSWER SECTION:
pandora.com. 267 IN A 208.85.40.20
pandora.com. 267 IN A 208.85.40.50
```
What's the netmask? [ipcalc][9] tells us:
```
$ ipcalc -b 208.85.40.20
Address: 208.85.40.20
Netmask: 255.255.255.0 = 24
Wildcard: 0.0.0.255
=>
Network: 208.85.40.0/24
```
Now feed the address and netmask to iftop:
```
$ sudo iftop -F 208.85.40.20/24 -i wlan0
```
Is that not seriously groovy? I was surprised to learn that Pandora is easy on my precious bits, using around 500Kb per hour. And, like most streaming services, Pandora's traffic comes in spurts and relies on caching to smooth out the lumps and bumps.
You can do the same with IPv6 addresses, using the **-G** option. Consult the fine man page to learn the rest of iftop's features, including customizing your default options with a personal configuration file, and applying custom filters (see [PCAP-FILTER][10] for a filter reference).
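For example, here's a sketch of an IPv6 version of the Pandora-style filter above (the prefix is the IPv6 documentation range, not a real service):

```
$ sudo iftop -i wlan0 -G 2001:db8::/64
```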
### Nethogs
When you want to quickly learn who is sucking up your bandwidth, Nethogs is fast and easy. Run it as root and specify the interface to listen on. It displays the hoggy application and the process number, so that you may kill it if you so desire:
```
$ sudo nethogs wlan0
NetHogs version 0.8.1
PID USER PROGRAM DEV SENT RECEIVED
7690 carla /usr/lib/firefox wlan0 12.494 556.580 KB/sec
5648 carla .../chromium-browser wlan0 0.052 0.038 KB/sec
TOTAL 12.546 556.618 KB/sec
```
Nethogs has few options: cycling between kb/s, kb, b, and mb, sorting by received or sent packets, and adjusting the delay between refreshes. See `man nethogs`, or run `nethogs -h`.
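Those options map to command-line flags as well; a quick sketch (double-check `man nethogs` on your version, since option letters have shifted between releases):

```
$ sudo nethogs -d 5 wlan0    # refresh every 5 seconds instead of every second
$ sudo nethogs -v 3 wlan0    # start in total-MB view rather than KB/sec
```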
### vnstat
[vnstat][11] is the easiest network data collector to use. It is lightweight and does not need root permissions. It runs as a daemon and records your network statistics over time. The `vnstat` command displays the accumulated data:
```
$ vnstat -i wlan0
Database updated: Tue Oct 17 08:36:38 2017
wlan0 since 10/17/2017
rx: 45.27 MiB tx: 3.77 MiB total: 49.04 MiB
monthly
rx | tx | total | avg. rate
------------------------+-------------+-------------+---------------
Oct '17 45.27 MiB | 3.77 MiB | 49.04 MiB | 0.28 kbit/s
------------------------+-------------+-------------+---------------
estimated 85 MiB | 5 MiB | 90 MiB |
daily
rx | tx | total | avg. rate
------------------------+-------------+-------------+---------------
today 45.27 MiB | 3.77 MiB | 49.04 MiB | 12.96 kbit/s
------------------------+-------------+-------------+---------------
estimated 125 MiB | 8 MiB | 133 MiB |
```
By default it displays all network interfaces. Use the `-i` option to select a single interface. Merge the data of multiple interfaces this way:
```
$ vnstat -i wlan0+eth0+eth1
```
You can filter the display in several ways (examples follow the list):
* **-h** displays statistics by hours.
* **-d** displays statistics by days.
* **-w** and **-m** display statistics by weeks and months.
* Watch live updates with the **-l** option.
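For example:

```
$ vnstat -i wlan0 -h    # hourly
$ vnstat -i wlan0 -d    # daily
$ vnstat -i wlan0 -m    # monthly
$ vnstat -i wlan0 -l    # live updates until you press Ctrl-C
```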
This command deletes the database for wlan1 and stops watching it:
```
$ vnstat -i wlan1 --delete
```
This command creates an alias for a network interface. This example uses one of the weird interface names from Ubuntu 16.04:
```
$ vnstat -u -i enp0s25 --nick eth0
```
By default vnstat monitors eth0. You can change this in `/etc/vnstat.conf`, or create your own personal configuration file in your home directory. See `man vnstat` for a complete reference.
You can also install vnstati to create simple, colored graphs (Figure 2):
```
$ vnstati -s -i wlx7cdd90a0a1c2 -o vnstat.png
```
![vnstati](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-2_5.png?itok=HsWJMcW0 "vnstati")
Figure 2: You can create simple colored graphs with vnstati.[Used with permission][2]
See `man vnstati` for complete options.
_Learn more about Linux through the free ["Introduction to Linux" ][7]course from The Linux Foundation and edX._
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/10/3-simple-excellent-linux-network-monitors
作者:[CARLA SCHRODER ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/files/images/fig-1png-8
[5]:https://www.linux.com/files/images/fig-2png-5
[6]:https://www.linux.com/files/images/bannerpng-3
[7]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[8]:http://www.ex-parrot.com/pdw/iftop/
[9]:https://www.linux.com/learn/intro-to-linux/2017/8/how-calculate-network-addresses-ipcalc
[10]:http://www.tcpdump.org/manpages/pcap-filter.7.html
[11]:http://humdi.net/vnstat/

View File

@ -1,4 +1,4 @@
Translating by qhwdw Dive into BPF: a list of reading material
============================================================
* [What is BPF?][143]
@ -709,3 +709,5 @@ via: https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/
[190]:https://github.com/torvalds/linux
[191]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[192]:https://qmonnet.github.io/whirl-offload/categories/#BPF

View File

@ -1,95 +0,0 @@
translating---geekpi
GitHub welcomes all CI tools
====================
[![GitHub and all CI tools](https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png)][11]
Continuous Integration ([CI][12]) tools help you stick to your team's quality standards by running tests every time you push a new commit and [reporting the results][13] to a pull request. Combined with continuous delivery ([CD][14]) tools, you can also test your code on multiple configurations, run additional performance tests, and automate every step [until production][15].
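Those reported results surface as status checks on each pull request, and they're also readable through the GitHub API. A quick sketch (`:owner`, `:repo`, and `:ref` are placeholders):

```
$ curl -s https://api.github.com/repos/:owner/:repo/commits/:ref/statuses
```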
There are several CI and CD tools that [integrate with GitHub][16], some of which you can install in a few clicks from [GitHub Marketplace][17]. With so many options, you can pick the best tool for the job—even if it's not the one that comes pre-integrated with your system.
The tools that will work best for you depend on many factors, including:
* Programming language and application architecture
* Operating system and browsers you plan to support
* Your team's experience and skills
* Scaling capabilities and plans for growth
* Geographic distribution of dependent systems and the people who use them
* Packaging and delivery goals
Of course, it isn't possible to optimize your CI tool for all of these scenarios. The people who build them have to choose which use cases to serve best—and when to prioritize complexity over simplicity. For example, if you like to test small applications written in a particular programming language for one platform, you won't need the complexity of a tool that tests embedded software controllers on dozens of platforms with a broad mix of programming languages and frameworks.
If you need a little inspiration for which CI tool might work best, take a look at [popular GitHub projects][18]. Many show the status of their integrated CI/CD tools as badges in their README.md. We've also analyzed the use of CI tools across more than 50 million repositories in the GitHub community, and found a lot of variety. The following diagram shows the relative percentage of the top 10 CI tools used with GitHub.com, based on the most used [commit status contexts][19] used within our pull requests.
_Our analysis also showed that many teams use more than one CI tool in their projects, allowing them to emphasize what each tool does best._
[![Top 10 CI systems used with GitHub.com based on most used commit status contexts](https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png)][20]
If you'd like to check them out, here are the top 10 tools teams use:
* [Travis CI][1]
* [Circle CI][2]
* [Jenkins][3]
* [AppVeyor][4]
* [CodeShip][5]
* [Drone][6]
* [Semaphore CI][7]
* [Buildkite][8]
* [Wercker][9]
* [TeamCity][10]
It's tempting to just pick the default, pre-integrated tool without taking the time to research and choose the best one for the job, but there are plenty of [excellent choices][21] built for your specific use cases. And if you change your mind later, no problem. When you choose the best tool for a specific situation, you're guaranteeing tailored performance and the freedom of interchangeability when it no longer fits.
Ready to see how CI tools can fit into your workflow?
[Browse GitHub Marketplace][22]
--------------------------------------------------------------------------------
via: https://github.com/blog/2463-github-welcomes-all-ci-tools
作者:[jonico ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://github.com/jonico
[1]:https://travis-ci.org/
[2]:https://circleci.com/
[3]:https://jenkins.io/
[4]:https://www.appveyor.com/
[5]:https://codeship.com/
[6]:http://try.drone.io/
[7]:https://semaphoreci.com/
[8]:https://buildkite.com/
[9]:http://www.wercker.com/
[10]:https://www.jetbrains.com/teamcity/
[11]:https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png
[12]:https://en.wikipedia.org/wiki/Continuous_integration
[13]:https://github.com/blog/2051-protected-branches-and-required-status-checks
[14]:https://en.wikipedia.org/wiki/Continuous_delivery
[15]:https://developer.github.com/changes/2014-01-09-preview-the-new-deployments-api/
[16]:https://github.com/works-with/category/continuous-integration
[17]:https://github.com/marketplace/category/continuous-integration
[18]:https://github.com/explore?trending=repositories#trending
[19]:https://developer.github.com/v3/repos/statuses/
[20]:https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png
[21]:https://github.com/works-with/category/continuous-integration
[22]:https://github.com/marketplace/category/continuous-integration

View File

@ -1,61 +0,0 @@
[Translating by @haoqixu] Sysadmin 101: Patch Management
============================================================
* [HOW-TOs][1]
* [Servers][2]
* [SysAdmin][3]
A few articles ago, I started a Sysadmin 101 series to pass down some fundamental knowledge about systems administration that the current generation of junior sysadmins, DevOps engineers or "full stack" developers might not learn otherwise. I had thought that I was done with the series, but then the WannaCry malware came out and exposed some of the poor patch management practices still in place in Windows networks. I imagine some readers that are still stuck in the Linux versus Windows wars of the 2000s might have even smiled with a sense of superiority when they heard about this outbreak.
The reason I decided to revive my Sysadmin 101 series so soon is I realized that most Linux system administrators are no different from Windows sysadmins when it comes to patch management. Honestly, in some areas (in particular, uptime pride), some Linux sysadmins are even worse than Windows sysadmins regarding patch management. So in this article, I cover some of the fundamentals of patch management under Linux, including what a good patch management system looks like, the tools you will want to put in place and how the overall patching process should work.
### What Is Patch Management?
When I say patch management, I'm referring to the systems you have in place to update software already on a server. I'm not just talking about keeping up with the latest-and-greatest bleeding-edge version of a piece of software. Even more conservative distributions like Debian that stick with a particular version of software for its "stable" release still release frequent updates that patch bugs or security holes.
Of course, if your organization decided to roll its own version of a particular piece of software, either because developers demanded the latest and greatest, you needed to fork the software to apply a custom change, or you just like giving yourself extra work, you now have a problem. Ideally you have put in a system that automatically packages up the custom version of the software for you in the same continuous integration system you use to build and package any other software, but many sysadmins still rely on the outdated method of packaging the software on their local machine based on (hopefully up to date) documentation on their wiki. In either case, you will need to confirm that your particular version has the security flaw, and if so, make sure that the new patch applies cleanly to your custom version.
### What Good Patch Management Looks Like
Patch management starts with knowing that there is a software update to begin with. First, for your core software, you should be subscribed to your Linux distribution's security mailing list, so you're notified immediately when there are security patches. If you use any software that doesn't come from your distribution, you must find out how to be kept up to date on security patches for that software as well. When new security notifications come in, you should review the details so you understand how severe the security flaw is, whether you are affected, and get a sense of how urgent the patch is.
Some organizations have a purely manual patch management system. With such a system, when a security patch comes along, the sysadmin figures out which servers are running the software, generally by relying on memory and by logging in to servers and checking. Then the sysadmin uses the server's built-in package management tool to update the software with the latest from the distribution. Then the sysadmin moves on to the next server, and the next, until all of the servers are patched.
There are many problems with manual patch management. First is the fact that it makes patching a laborious chore. The more work patching is, the more likely a sysadmin will put it off or skip doing it entirely. The second problem is that manual patch management relies too much on the sysadmin's ability to remember and recall all of the servers he or she is responsible for and keep track of which are patched and which aren't. This makes it easy for servers to be forgotten and sit unpatched.
The faster and easier patch management is, the more likely you are to do it. You should have a system in place that quickly can tell you which servers are running a particular piece of software at which version. Ideally, that system also can push out updates. Personally, I prefer orchestration tools like MCollective for this task, but Red Hat provides Satellite, and Canonical provides Landscape as central tools that let you view software versions across your fleet of servers and apply patches all from a central place.
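Even without those tools, a rough sketch of the same idea with plain ssh, assuming Debian-family hosts (the hostnames and package are placeholders):

```
for host in web1 web2 db1; do
    ssh "$host" 'dpkg-query -W openssl'    # print the installed openssl version
done
```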
Patching should be fault-tolerant as well. You should be able to patch a service and restart it without any overall down time. The same idea goes for kernel patches that require a reboot. My approach is to divide my servers into different high availability groups so that lb1, app1, rabbitmq1 and db1 would all be in one group, and lb2, app2, rabbitmq2 and db2 are in another. Then, I know I can patch one group at a time without it causing downtime anywhere else.
So, how fast is fast? Your system should be able to roll out a patch to a minor piece of software that doesn't have an accompanying service (such as bash in the case of the ShellShock vulnerability) within a few minutes to an hour at most. For something like OpenSSL that requires you to restart services, the careful process of patching and restarting services in a fault-tolerant way probably will take more time, but this is where orchestration tools come in handy. I gave examples of how to use MCollective to accomplish this in my recent MCollective articles (see the December 2016 and January 2017 issues), but ideally, you should put a system in place that makes it easy to patch and restart services in a fault-tolerant and automated way.
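For a package like bash that doesn't require a service restart, the patch itself can be a one-liner on each host; here's a Debian/Ubuntu sketch (substitute your distribution's package manager):

```
$ sudo apt-get update && sudo apt-get install --only-upgrade bash
```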
When patching requires a reboot, such as in the case of kernel patches, it might take a bit more time, but again, automation and orchestration tools can make this go much faster than you might imagine. I can patch and reboot the servers in an environment in a fault-tolerant way within an hour or two, and it would be much faster than that if I didn't need to wait for clusters to sync back up in between reboots.
Unfortunately, many sysadmins still hold on to the outdated notion that uptime is a badge of pride—given that serious kernel patches tend to come out at least once a year if not more often, to me, it's proof you don't take security seriously.
Many organizations also still have that single point of failure server that can never go down, and as a result, it never gets patched or rebooted. If you want to be secure, you need to remove these outdated liabilities and create systems that at least can be rebooted during a late-night maintenance window.
Ultimately, fast and easy patch management is a sign of a mature and professional sysadmin team. Updating software is something all sysadmins have to do as part of their jobs, and investing time into systems that make that process easy and fast pays dividends far beyond security. For one, it helps identify bad architecture decisions that cause single points of failure. For another, it helps identify stagnant, out-of-date legacy systems in an environment and provides you with an incentive to replace them. Finally, when patching is managed well, it frees up sysadmins' time and turns their attention to the things that truly require their expertise.
______________________
Kyle Rankin is senior security and infrastructure architect, the author of many books including Linux Hardening in Hostile Networks, DevOps Troubleshooting and The Official Ubuntu Server Book, and a columnist for Linux Journal. Follow him @kylerankin
--------------------------------------------------------------------------------
via: https://www.linuxjournal.com/content/sysadmin-101-patch-management
作者:[Kyle Rankin ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxjournal.com/users/kyle-rankin
[1]:https://www.linuxjournal.com/tag/how-tos
[2]:https://www.linuxjournal.com/tag/servers
[3]:https://www.linuxjournal.com/tag/sysadmin
[4]:https://www.linuxjournal.com/users/kyle-rankin

@@ -1,3 +1,5 @@
translating by aiwhj
Adopting Kubernetes step by step
============================================================

@@ -1,3 +1,5 @@
**translating by [erlinux](https://github.com/erlinux)**
Why microservices are a security issue
============================================================

@@ -1,78 +0,0 @@
translating---geekpi
AWS to Help Build ONNX Open Source AI Platform
============================================================
![onnx-open-source-ai-platform](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-onnx-1.jpg)
Amazon Web Services has become the latest tech firm to join the deep learning community's collaboration on the Open Neural Network Exchange, recently launched to advance artificial intelligence in a frictionless and interoperable environment. Facebook and Microsoft led the effort.
As part of that collaboration, AWS made its open source Python package, ONNX-MxNet, available as a deep learning framework that offers application programming interfaces across multiple languages including Python, Scala and open source statistics software R.
The ONNX format will help developers build and train models for other frameworks, including PyTorch, Microsoft Cognitive Toolkit or Caffe2, AWS Deep Learning Engineering Manager Hagay Lupesko and Software Developer Roshani Nagmote wrote in an online post last week. It will let developers import those models into MXNet, and run them for inference.
### Help for Developers
Facebook and Microsoft this summer launched ONNX to support a shared model of interoperability for the advancement of AI. Microsoft committed its Cognitive Toolkit, Caffe2 and PyTorch to support ONNX.
Cognitive Toolkit and other frameworks make it easier for developers to construct and run computational graphs that represent neural networks, Microsoft said.
Initial versions of [ONNX code and documentation][4] were made available on Github.
AWS and Microsoft last month announced plans for Gluon, a new interface in Apache MXNet that allows developers to build and train deep learning models.
Gluon "is an extension of their partnership where they are trying to compete with Google's Tensorflow," observed Aditya Kaul, research director at [Tractica][5].
"Google's omission from this is quite telling but also speaks to their dominance in the market," he told LinuxInsider.
"Even Tensorflow is open source, and so open source is not the big catch here -- but the rest of the ecosystem teaming up to compete with Google is what this boils down to," Kaul said.
The Apache MXNet community earlier this month introduced version 0.12 of MXNet, which extends Gluon functionality to allow for new, cutting-edge research, according to AWS. Among its new features are variational dropout, which allows developers to apply the dropout technique for mitigating overfitting to recurrent neural networks.
Convolutional RNN, Long Short-Term Memory and gated recurrent unit cells allow datasets to be modeled using time-based sequence and spatial dimensions, AWS noted.
### Framework-Neutral Method
"This looks like a great way to deliver inference regardless of which framework generated a model," said Paul Teich, principal analyst at [Tirias Research][6].
"This is basically a framework-neutral way to deliver inference," he told LinuxInsider.
Cloud providers like AWS, Microsoft and others are under pressure from customers to be able to train on one network while delivering on another, in order to advance AI, Teich pointed out.
"I see this as kind of a baseline way for these vendors to check the interoperability box," he remarked.
"Framework interoperability is a good thing, and this will only help developers in making sure that models that they build on MXNet or Caffe or CNTK are interoperable," Tractica's Kaul pointed out.
As to how this interoperability might apply in the real world, Teich noted that technologies such as natural language translation or speech recognition would require that Alexa's voice recognition technology be packaged and delivered to another developer's embedded environment.
### Thanks, Open Source
"Despite their competitive differences, these companies all recognize they owe a significant amount of their success to the software development advancements generated by the open source movement," said Jeff Kaplan, managing director of [ThinkStrategies][7].
"The Open Neural Network Exchange is committed to producing similar benefits and innovations in AI," he told LinuxInsider.
A growing number of major technology companies have announced plans to use open source to speed the development of AI collaboration, in order to create more uniform platforms for development and research.
AT&T just a few weeks ago announced plans [to launch the Acumos Project][8] with TechMahindra and The Linux Foundation. The platform is designed to open up efforts for collaboration in telecommunications, media and technology. 
--------------------------------------------------------------------------------
via: https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
作者:[David Jones][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#searchbyline
[4]:https://github.com/onnx/onnx
[5]:https://www.tractica.com/
[6]:http://www.tiriasresearch.com/
[7]:http://www.thinkstrategies.com/
[8]:https://www.linuxinsider.com/story/84926.html

@@ -0,0 +1,54 @@
translating by liuxinyu123
Long-term Linux support future clarified
============================================================
Long-term support Linux version 4.4 will get six years of life, but that doesn't mean other LTS editions will last so long.
[video](http://www.zdnet.com/video/video-torvalds-surprised-by-resilience-of-2-6-kernel-1/)
_Video: Torvalds surprised by resilience of 2.6 kernel_
In October 2017, the [Linux kernel team agreed to extend the next version of Linux's Long Term Support (LTS) from two years to six years][5], [Linux 4.14][6]. This helps [Android][7], embedded Linux, and Linux Internet of Things (IoT) developers. But this move did not mean all future Linux LTS versions will have a six-year lifespan.
As Konstantin Ryabitsev, [The Linux Foundation][8]'s director of IT infrastructure security, explained in a Google+ post, "Despite what various news sites out there may have told you, [kernel 4.14 LTS is not planned to be supported for 6 years][9]. Just because Greg Kroah-Hartman is doing it for 4.4 does not mean that all LTS kernels from now on are going to be maintained for that long."
So, in short, 4.14 will be supported until January 2020, while the 4.4 Linux kernel, which arrived on Jan. 20, 2016, will be supported until 2022. Therefore, if you're working on a Linux distribution that's meant for the longest possible run, you want to base it on [Linux 4.4][10].
[Linux LTS versions][11] incorporate back-ported bug fixes for older kernel trees. Not all bug fixes are imported; only important ones are applied to such kernels. LTS kernels, especially the older trees, don't usually see very frequent releases.
The other Linux versions are Prepatch or release candidates (RC), Mainline, Stable, and LTS.
RC must be compiled from source and usually contains bug fixes and new features. These are maintained and released by Linus Torvalds. He also maintains the Mainline tree (this is where all new features are introduced). New mainline kernels are released every few months. When the mainline kernel is released for general use, it is considered "stable." Bug fixes for a stable kernel are back-ported from the mainline tree and applied by a designated stable kernel maintainer. There are usually only a few bug-fix kernel releases until the next mainline kernel becomes available.
As for the latest LTS, Linux 4.14, Ryabitsev said, "It is possible that someone may pick up maintainership of 4.14 after Greg is done with it (it's happened in the past on multiple occasions), but you should emphatically not plan on that."
Kroah-Hartman simply added to Ryabitsev's post: ["What he said."][12]
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/long-term-linux-support-future-clarified/
作者:[Steven J. Vaughan-Nichols][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[5]:http://www.zdnet.com/article/long-term-support-linux-gets-a-longer-lease-on-life/
[6]:http://www.zdnet.com/article/the-new-long-term-linux-kernel-linux-4-14-has-arrived/
[7]:https://www.android.com/
[8]:https://www.linuxfoundation.org/
[9]:https://plus.google.com/u/0/+KonstantinRyabitsev/posts/Lq97ZtL8Xw9
[10]:http://www.zdnet.com/article/whats-new-and-nifty-in-linux-4-4/
[11]:https://www.kernel.org/releases.html
[12]:https://plus.google.com/u/0/+gregkroahhartman/posts/ZUcSz3Sn1Hc

@@ -1,3 +1,5 @@
translating---geekpi
Suplemon - Modern CLI Text Editor with Multi Cursor Support
======
Suplemon is a modern text editor for CLI that emulates the multi cursor behavior and other features of [Sublime Text][1]. It's lightweight and really easy to use, just as Nano is.

@@ -0,0 +1,117 @@
TLDR pages: Simplified Alternative To Linux Man Pages
============================================================
[![](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu-640x360.jpg "tldr page ubuntu")][22]
Working on the terminal and using various commands to carry out important tasks is an indispensable part of a Linux desktop experience. This open-source operating system possesses an [abundance of commands][23], which makes it impossible for any user to remember all of them. To make things more complex, each command has its own set of options to bring a wider set of functionality.
To solve this problem, [Man Pages][12], short for manual pages, were created. First written in English, they contain tons of in-depth information about different commands. Sometimes, when you're looking for just basic information on a command, they can become overwhelming. To solve this issue, [TLDR pages][13] were created.
_Before going ahead and knowing more about it, don't forget to check a few more terminal tricks:_
* _**[Watch Star Wars in terminal ][1]**_
* _**[Use StackOverflow in terminal][2]**_
* _**[Get Weather report in terminal][3]**_
* _**[Access Google through terminal][4]**_
* [**_Use Wikipedia from command line_**][7]
* _**[Check Cryptocurrency Prices From Terminal][5]**_
* _**[Search and download torrent in terminal][6]**_
### What are TLDR pages?
The GitHub page of TLDR pages for Linux/Unix describes it as a collection of simplified and community-driven man pages. It's an effort to make the experience of using man pages simpler with the help of practical examples. For those who don't know, TLDR is taken from common internet slang _Too Long Didn't Read_.
In case you wish to compare, let's take the example of the tar command. The usual man page extends over 1,000 lines. It's an archiving utility that's often combined with a compression method like bzip or gzip. Take a look at its man page:
[![tar man page](https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg)][14] On the other hand, TLDR pages lets you simply take a glance at the command and see how it works. Tar's TLDR page simply looks like this and comes with some handy examples of the most common tasks you can complete with this utility:
[![tar tldr page](https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg)][15] Let's take another example and show you what TLDR pages has to offer when it comes to apt:
[![tldr-page-of-apt](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg)][16] Having shown you how TLDR works and makes your life easier, let's tell you how to install it on your Linux-based operating system.
### How to install and use TLDR pages on Linux?
The most mature TLDR client is based on Node.js, and you can install it easily using the NPM package manager. In case Node and NPM are not available on your system, run the following commands:
```
sudo apt-get install nodejs
sudo apt-get install npm
```
In case you're using an OS other than Debian, Ubuntu, or Ubuntu's derivatives, you can use the yum, dnf, or pacman package manager, as per your convenience.
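For instance, on a Fedora-like system the equivalent would look something like this (package names can differ between distributions, so check your repositories first):
```
sudo dnf install nodejs npm
```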
Now, by running the following command in terminal, install TLDR client on your Linux machine:
```
sudo npm install -g tldr
```
Once you've installed this terminal utility, it would be a good idea to update its cache before trying it out. To do so, run the following command:
```
tldr --update
```
After doing this, feel free to read the TLDR page of any Linux command. To do so, simply type:
```
tldr <commandname>
```
[![tldr kill command](https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg)][17]
You can also run the following help command to see all the different parameters that can be used with TLDR to get the desired output. As usual, this help page is also accompanied by examples.
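That help command is not shown above; for the Node.js client it is the usual `--help` flag (other TLDR clients may differ):
```
tldr --help
```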
### TLDR web, Android, and iOS versions
You would be pleasantly surprised to know that TLDR pages isn't limited to your Linux desktop. Instead, it can also be used in your web browser, which can be accessed from any machine.
To use TLDR web version, visit [tldr.ostera.io][18] and perform the required search operation.
Alternatively, you can also download the [iOS][19] and [Android][20] apps and keep learning new commands on the go.
[![tldr app ios](https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg)][21]
Did you find this cool Linux terminal trick interesting? Do give it a try and let us know your feedback.
--------------------------------------------------------------------------------
via: https://fossbytes.com/tldr-pages-linux-man-pages-alternative/
作者:[Adarsh Verma][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fossbytes.com/author/adarsh/
[1]:https://fossbytes.com/watch-star-wars-command-prompt-via-telnet/
[2]:https://fossbytes.com/use-stackoverflow-linux-terminal-mac/
[3]:https://fossbytes.com/single-command-curl-wttr-terminal-weather-report/
[4]:https://fossbytes.com/how-to-google-search-in-command-line-using-googler/
[5]:https://fossbytes.com/check-bitcoin-cryptocurrency-prices-command-line-coinmon/
[6]:https://fossbytes.com/review-torrench-download-torrents-using-terminal-linux/
[7]:https://fossbytes.com/use-wikipedia-termnianl-wikit/
[12]:https://fossbytes.com/linux-lexicon-man-pages-navigation/
[13]:https://github.com/tldr-pages/tldr
[14]:https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg
[15]:https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg
[16]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg
[17]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg
[18]:https://tldr.ostera.io/
[19]:https://itunes.apple.com/us/app/tldt-pages/id1071725095?ls=1&mt=8
[20]:https://play.google.com/store/apps/details?id=io.github.hidroh.tldroid
[21]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg
[22]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu.jpg
[23]:https://fossbytes.com/a-z-list-linux-command-line-reference/

@@ -0,0 +1,85 @@
# [Google launches TensorFlow-based vision recognition kit for RPi Zero W][26]
![](http://linuxgizmos.com/files/google_aiyvisionkit-thm.jpg)
Google's $45 “AIY Vision Kit” for the Raspberry Pi Zero W performs TensorFlow-based vision recognition using a “VisionBonnet” board with a Movidius chip.
Google's AIY Vision Kit for on-device neural network acceleration follows an earlier [AIY Projects][7] voice/AI kit for the Raspberry Pi that shipped to MagPi subscribers back in May. Like the voice kit and the older Google Cardboard VR viewer, the new AIY Vision Kit has a cardboard enclosure. The kit differs from the [Cloud Vision API][8], which was demoed in 2015 with a Raspberry Pi based GoPiGo robot, in that it runs entirely on local processing power rather than requiring a cloud connection. The AIY Vision Kit is available now for pre-order at $45, with shipments due in early December.
[![](http://linuxgizmos.com/files/google_aiyvisionkit-sm.jpg)][9]   [![](http://linuxgizmos.com/files/rpi_zerow-sm.jpg)][10]
**AIY Vision Kit, fully assembled (left) and Raspberry Pi Zero W**
(click images to enlarge)
The kit's key processing element, aside from the 1GHz ARM11-based Broadcom BCM2835 SoC found on the required [Raspberry Pi Zero W][21] SBC, is Google's new VisionBonnet RPi accessory board. The VisionBonnet pHAT board uses a Movidius MA2450, a version of the [Movidius Myriad 2 VPU][22] processor. On the VisionBonnet, the processor runs Google's open source [TensorFlow][23] machine intelligence library for neural networking. The chip enables visual perception processing at up to 30 frames per second.
The AIY Vision Kit requires a user-supplied RPi Zero W, a [Raspberry Pi Camera v2][11], and a 16GB micro SD card for downloading the Linux-based image. The kit includes the VisionBonnet, an RGB arcade-style button, a piezo speaker, a macro/wide lens kit, and the cardboard enclosure. You also get flex cables, standoffs, a tripod mounting nut, and connecting components.
[![](http://linuxgizmos.com/files/google_aiyvisionkit_pieces-sm.jpg)][12]   [![](http://linuxgizmos.com/files/google_visionbonnet-sm.jpg)][13]
**AIY Vision Kit kit components (left) and VisonBonnet accessory board**
(click images to enlarge)
Three neural network models are available. There's a general-purpose model that can recognize 1,000 common objects, a facial detection model that can also score facial expression on a “joy scale” that ranges from “sad” to “laughing,” and a model that can identify whether the image contains a dog, cat, or human. The 1,000-image model derives from Google's open source [MobileNets][24], a family of TensorFlow-based computer vision models designed for the restricted resources of a mobile or embedded device.
MobileNet models offer low latency and low power consumption, and are parameterized to meet the resource constraints of different use cases. The models can be built for classification, detection, embeddings, and segmentation, says Google. Earlier this month, Google released a developer preview of a mobile-friendly [TensorFlow Lite][14] library for Android and iOS that is compatible with MobileNets and the Android Neural Networks API.
[![](http://linuxgizmos.com/files/google_aiyvisionkit_assembly-sm.jpg)][15]
**AIY Vision Kit assembly views**
(click image to enlarge)
In addition to providing the three models, the AIY Vision Kit provides basic TensorFlow code and a compiler, so users can develop their own models. In addition, Python developers can write new software to customize RGB button colors, piezo element sounds, and 4x GPIO pins on the VisionBonnet that can add additional lights, buttons, or servos. Potential models include recognizing food items, opening a dog door based on visual input, sending a text when your car leaves the driveway, or playing particular music based on facial recognition of a person entering the cameras viewpoint.
[![](http://linuxgizmos.com/files/movidius_myriad2vpu_block-sm.jpg)][16]   [![](http://linuxgizmos.com/files/movidius_myriad2_reference_board-sm.jpg)][17]
**Myriad 2 VPU block diagram (left) and reference board**
(click image to enlarge)
The Movidius Myriad 2 processor provides TeraFLOPS of performance within a nominal 1 Watt power envelope. The chip appeared on early Project Tango reference platforms, and is built into the Ubuntu-driven [Fathom][25] neural processing USB stick that Movidius debuted in May 2016, prior to being acquired by Intel. According to Movidius, the Myriad 2 is available “in millions of devices on the market today.”
**Further information**
The AIY Vision Kit is available for pre-order from Micro Center at $44.99, with shipments due in early December. More information may be found in the AIY Vision Kit [announcement][18], [Google Blog notice][19], and [Micro Center shopping page][20].
--------------------------------------------------------------------------------
via: http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
作者:[Eric Brown][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
[7]:http://linuxgizmos.com/free-raspberry-pi-voice-kit-taps-google-assistant-sdk/
[8]:http://linuxgizmos.com/google-releases-cloud-vision-api-with-demo-for-pi-based-robot/
[9]:http://linuxgizmos.com/files/google_aiyvisionkit.jpg
[10]:http://linuxgizmos.com/files/rpi_zerow.jpg
[11]:http://linuxgizmos.com/raspberry-pi-cameras-jump-to-8mp-keep-25-dollar-price/
[12]:http://linuxgizmos.com/files/google_aiyvisionkit_pieces.jpg
[13]:http://linuxgizmos.com/files/google_visionbonnet.jpg
[14]:https://developers.googleblog.com/2017/11/announcing-tensorflow-lite.html
[15]:http://linuxgizmos.com/files/google_aiyvisionkit_assembly.jpg
[16]:http://linuxgizmos.com/files/movidius_myriad2vpu_block.jpg
[17]:http://linuxgizmos.com/files/movidius_myriad2_reference_board.jpg
[18]:https://blog.google/topics/machine-learning/introducing-aiy-vision-kit-make-devices-see/
[19]:https://developers.googleblog.com/2017/11/introducing-aiy-vision-kit-add-computer.html
[20]:http://www.microcenter.com/site/content/Google_AIY.aspx?ekw=aiy&rd=1
[21]:http://linuxgizmos.com/raspberry-pi-zero-w-adds-wifi-and-bluetooth-for-only-5-more/
[22]:https://www.movidius.com/solutions/vision-processing-unit
[23]:https://www.tensorflow.org/
[24]:https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html
[25]:http://linuxgizmos.com/usb-stick-brings-neural-computing-functions-to-devices/
[26]:http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/

@@ -0,0 +1,187 @@
translating by zrszrszr
12 MySQL/MariaDB Security Best Practices for Linux
============================================================
MySQL is the world's most popular open source database system and MariaDB (a fork of MySQL) is the world's fastest growing open source database system. A MySQL server is insecure in its default configuration after installation, and securing it is one of the essential tasks in general database management.
This will contribute to hardening and boosting overall Linux server security, as attackers always scan for vulnerabilities in any part of a system, and databases have in the past been key target areas. A common example is the brute-forcing of the root password for the MySQL database.
In this guide, we will explain useful MySQL/MariaDB security best practices for Linux.
### 1\. Secure MySQL Installation
This is the first recommended step after installing MySQL server towards securing the database server. The `mysql_secure_installation` script helps improve the security of your MySQL server by asking you to:
* set a password for the root account, if you didnt set it during installation.
* disable remote root user login by removing root accounts that are accessible from outside the local host.
* remove anonymous-user accounts and test database which by default can be accessed by all users, even anonymous users.
```
# mysql_secure_installation
```
After running it, set the root password and answer the series of questions by entering [Yes/Y] and pressing [Enter].
[![Secure MySQL Installation](https://www.tecmint.com/wp-content/uploads/2017/12/Secure-MySQL-Installation.png)][2]
Secure MySQL Installation
### 2\. Bind Database Server To Loopback Address
This configuration will restrict access from remote machines; it tells the MySQL server to only accept connections from within the localhost. You can set it in the main configuration file.
```
# vi /etc/my.cnf [RHEL/CentOS]
# vi /etc/mysql/my.conf [Debian/Ubuntu]
OR
# vi /etc/mysql/mysql.conf.d/mysqld.cnf [Debian/Ubuntu]
```
Add the following line under the `[mysqld]` section.
```
bind-address = 127.0.0.1
```
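Once you have restarted the service (the service name varies by distribution, as shown in the last section of this guide), it is worth verifying that the setting took effect. A quick check, assuming the default port of 3306 and the `ss` utility, looks like this:
```
# ss -tlnp | grep 3306
```
The output should show the server listening on `127.0.0.1:3306` rather than `0.0.0.0:3306`.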
### 3\. Disable LOCAL INFILE in MySQL
As part of security hardening, you need to disable local_infile to prevent access to the underlying filesystem from within MySQL. Add the following directive under the `[mysqld]` section.
```
local-infile=0
```
### 4\. Change MYSQL Default Port
The port variable sets the MySQL port number that will be used to listen on TCP/IP connections. The default port number is 3306, but you can change it under the `[mysqld]` section as shown.
```
Port=5000
```
### 5\. Enable MySQL Logging
Logs are one of the best ways to understand what happens on a server. In the case of an attack, you can easily see intrusion-related activity in the log files. You can enable MySQL logging by adding the following variable under the `[mysqld]` section.
```
log=/var/log/mysql.log
```
### 6\. Set Appropriate Permission on MySQL Files
Ensure that you have appropriate permissions set for all mysql server files and data directories. The /etc/my.cnf file should only be writeable by root. This blocks other users from changing database server configurations.
```
# chmod 644 /etc/my.cnf
```
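The data directory deserves the same attention. Its location varies by distribution, so treat the path below as an assumption, but it should be owned by the `mysql` user and closed to everyone else:
```
# chown -R mysql:mysql /var/lib/mysql
# chmod 750 /var/lib/mysql
```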
### 7\. Delete MySQL Shell History
All commands you execute on the MySQL shell are stored by the mysql client in a history file: ~/.mysql_history. This can be dangerous, because for any user accounts that you create, all usernames and passwords typed on the shell will be recorded in the history file.
```
# cat /dev/null > ~/.mysql_history
```
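If you would rather the history never be written again, a common additional trick (an optional hardening step, not part of the original tip) is to replace the file with a symlink to `/dev/null`:
```
# ln -sf /dev/null ~/.mysql_history
```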
### 8\. Dont Run MySQL Commands from Commandline
As you already know, all commands you type on the terminal are stored in a history file, depending on the shell you are using (for example ~/.bash_history for bash). An attacker who manages to gain access to this history file can easily see any passwords recorded there.
It is strongly recommended not to type passwords on the command line, like this:
```
# mysql -u root -ppassword
```
[![Connect MySQL with Password](https://www.tecmint.com/wp-content/uploads/2017/12/Connect-MySQL-with-Password.png)][3]
Connect MySQL with Password
When you check the last section of the command history file, you will see the password typed above.
```
# history
```
[![Check Command History](https://www.tecmint.com/wp-content/uploads/2017/12/Check-Command-History.png)][4]
Check Command History
The appropriate way to connect to MySQL is:
```
# mysql -u root -p
Enter password:
```
### 9\. Define Application-Specific Database Users
For each application running on the server, only give access to a user who is in charge of a database for a given application. For example, if you have an Osclass site, create a specific user for the Osclass database as follows.
```
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE osclass_db;
MariaDB [(none)]> CREATE USER 'osclassdmin'@'localhost' IDENTIFIED BY 'osclass@dmin%!2';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON osclass_db.* TO 'osclassdmin'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit
```
And remember to always remove user accounts that are no longer managing any application database on the server.
### 10\. Use Additional Security Plugins and Libraries
MySQL includes a number of security plugins for authenticating client connection attempts, validating passwords, and securing storage for sensitive information, all of which are available in the free version.
You can find more here: [https://dev.mysql.com/doc/refman/5.7/en/security-plugins.html][5]
### 11\. Change MySQL Passwords Regularly
This is a common piece of information/application/system security advice. How often you do this will entirely depend on your internal security policy. However, it can prevent “snoopers” who might have been tracking your activity over a long period of time from gaining access to your mysql server.
```
MariaDB [(none)]> USE mysql;
MariaDB [(none)]> UPDATE user SET password=PASSWORD('YourPasswordHere') WHERE User='root' AND Host = 'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
```
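Note that the `UPDATE ... PASSWORD()` statement above only works on older releases. On MySQL 5.7.6+ and MariaDB 10.4+ the `password` column no longer exists, and `ALTER USER` is the supported way to change a password (shown here through the shell; the account and password are placeholders):
```
# mysql -u root -p -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'YourPasswordHere';"
```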
### 12\. Update MySQL Server Package Regularly
It is highly recommended to upgrade mysql/mariadb packages regularly to keep up with security updates and bug fixes from the vendor's repository. Normally, packages in the default operating system repositories are outdated.
```
# yum update
# apt update
```
After making any changes to the mysql/mariadb server, always restart the service.
```
# systemctl restart mariadb #RHEL/CentOS
# systemctl restart mysql #Debian/Ubuntu
```
Read Also: [15 Useful MySQL/MariaDB Performance Tuning and Optimization Tips][6]
Thats all! We love to hear from you via the comment form below. Do share with us any MySQL/MariaDB security tips missing in the above list.
--------------------------------------------------------------------------------
via: https://www.tecmint.com/mysql-mariadb-security-best-practices-for-linux/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.tecmint.com/author/aaronkili/
[2]:https://www.tecmint.com/wp-content/uploads/2017/12/Secure-MySQL-Installation.png
[3]:https://www.tecmint.com/wp-content/uploads/2017/12/Connect-MySQL-with-Password.png
[4]:https://www.tecmint.com/wp-content/uploads/2017/12/Check-Command-History.png
[5]:https://dev.mysql.com/doc/refman/5.7/en/security-plugins.html
[6]:https://www.tecmint.com/mysql-mariadb-performance-tuning-and-optimization/

@@ -0,0 +1,85 @@
Launching an Open Source Project: A Free Guide
============================================================
![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/project-launch-1024x645.jpg)
Launching a project and then rallying community support can be complicated, but the new guide to Starting an Open Source Project can help.
Increasingly, as open source programs become more pervasive at organizations of all sizes, tech and DevOps workers are choosing to or being asked to launch their own open source projects. From Google to Netflix to Facebook, companies are also releasing their open source creations to the community. It's become common for open source projects to start from scratch internally, after which they benefit from collaboration involving external developers.
Launching a project and then rallying community support can be more complicated than you think, however. A little up-front work can help things go smoothly, and that's exactly where the new guide to [Starting an Open Source Project][1] comes in.
This free guide was created to help organizations already versed in open source learn how to start their own open source projects. It starts at the beginning of the process, including deciding what to open source, and moves on to budget and legal considerations, and more. The road to creating an open source project may be foreign, but major companies, from Google to Facebook, have opened up resources and provided guidance. In fact, Google has [an extensive online destination][2] dedicated to open source best practices and how to open source projects.
“No matter how many smart people we hire inside the company, there's always smarter people on the outside,” notes Jared Smith, Open Source Community Manager at Capital One. “We find it is worth it to us to open source and share our code with the outside world in exchange for getting some great advice from people on the outside who have expertise and are willing to share back with us.”
In the new guide, noted open source expert Ibrahim Haddad provides five reasons why an organization might open source a new project:
1. Accelerate an open solution; provide a reference implementation to a standard; share development costs for strategic functions
2. Commoditize a market; reduce prices of non-strategic software components.
3. Drive demand by building an ecosystem for your products.
4. Partner with others; engage customers; strengthen relationships with common goals.
5. Offer your customers the ability to self-support: the ability to adapt your code without waiting for you.
The guide notes: “The decision to release or create a new open source project depends on your circumstances. Your company should first achieve a certain level of open source mastery by using open source software and contributing to existing projects. This is because consuming can teach you how to leverage external projects and developers to build your products. And participation can bring more fluency in the conventions and culture of open source communities. (See our guides on [Using Open Source Code][3] and [Participating in Open Source Communities][4]) But once you have achieved open source fluency, the best time to start launching your own open source projects is simply ‘early and often.’”
The guide also notes that planning can keep you and your organization out of legal trouble. Issues pertaining to licensing, distribution, support options, and even branding require thinking ahead if you want your project to flourish.
“I think it is a crucial thing for a company to be thinking about what they're hoping to achieve with a new open source project,” said John Mertic, Director of Program Management at The Linux Foundation. “They must think about the value of it to the community and developers out there and what outcomes they're hoping to get out of it. And then they must understand all the pieces they must have in place to do this the right way, including legal, governance, infrastructure and a starting community. Those are the things I always stress the most when you're putting an open source project out there.”
The [Starting an Open Source Project][5] guide can help you with everything from licensing issues to best development practices, and it explores how to seamlessly and safely weave existing open components into your open source projects. It is one of a new collection of free guides from The Linux Foundation and The TODO Group that are all extremely valuable for any organization running an open source program. [The guides are available][6] now to help you run an open source program office where open source is supported, shared, and leveraged. With such an office, organizations can establish and execute on their open source strategies efficiently, with clear terms.
These free resources were produced based on expertise from open source leaders. [Check out all the guides here][7] and stay tuned for our continuing coverage.
Also, don't miss the previous articles in the series:
[How to Create an Open Source Program][8]
[Tools for Managing Open Source Programs][9]
[Measuring Your Open Source Programs Success][10]
[Effective Strategies for Recruiting Open Source Developers][11]
[Participating in Open Source Communities][12]
[Using Open Source Code][13]
--------------------------------------------------------------------------------
via: https://www.linuxfoundation.org/blog/launching-open-source-project-free-guide/
作者:[Sam Dean][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxfoundation.org/author/sdean/
[1]:https://www.linuxfoundation.org/resources/open-source-guides/starting-open-source-project/
[2]:https://www.linux.com/blog/learn/chapter/open-source-management/2017/5/googles-new-home-all-things-open-source-runs-deep
[3]:https://www.linuxfoundation.org/using-open-source-code/
[4]:https://www.linuxfoundation.org/participating-open-source-communities/
[5]:https://www.linuxfoundation.org/resources/open-source-guides/starting-open-source-project/
[6]:https://github.com/todogroup/guides
[7]:https://github.com/todogroup/guides
[8]:https://github.com/todogroup/guides/blob/master/creating-an-open-source-program.md
[9]:https://www.linuxfoundation.org/blog/managing-open-source-programs-free-guide/
[10]:https://www.linuxfoundation.org/measuring-your-open-source-program-success/
[11]:https://www.linuxfoundation.org/blog/effective-strategies-recruiting-open-source-developers/
[12]:https://www.linuxfoundation.org/participating-open-source-communities/
[13]:https://www.linuxfoundation.org/using-open-source-code/

@@ -1,161 +0,0 @@
translating by wenwensnow
Randomize your WiFi MAC address on Ubuntu 16.04
============================================================
_Your device's MAC address can be used to track you across the WiFi networks you connect to. That data can be shared and sold, and often identifies you as an individual. It's possible to limit this tracking by using pseudo-random MAC addresses._
![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif)
_Image courtesy of [Cloudessa][4]_
Every network device like a WiFi or Ethernet card has a unique identifier called a MAC address, for example `b4:b6:76:31:8c:ff`. It's how networking works: any time you connect to a WiFi network, the router uses that address to send and receive packets to your machine and distinguish it from other devices in the area.
The snag with this design is that your unique, unchanging MAC address is just perfect for tracking you. Logged into Starbucks WiFi? Noted. London Underground? Logged.
If you've ever put your real name into one of those Craptive Portals on a WiFi network you've now tied your identity to that MAC address. Didn't read the terms and conditions? You might assume that free airport WiFi is subsidised by flogging customer analytics (your personal information) to hotels, restaurant chains and whomever else wants to know about you.
I don't subscribe to being tracked and sold by mega-corps, so I spent a few hours hacking a solution.
### MAC addresses don't need to stay the same
Fortunately, it's possible to spoof your MAC address to a random one without fundamentally breaking networking.
I wanted to randomize my MAC address, but with three particular caveats:
1. The MAC should be different across different networks. This means Starbucks WiFi sees a different MAC from London Underground, preventing linking my identity across different providers.
2. The MAC should change regularly to prevent a network knowing that I'm the same person who walked past 75 times over the last year.
3. The MAC stays the same throughout each working day. When the MAC address changes, most networks will kick you off, and those with Craptive Portals will usually make you sign in again - annoying.
### Manipulating NetworkManager
My first attempt at using the `macchanger` tool was unsuccessful, as NetworkManager would override the MAC address according to its own configuration.
I learned that NetworkManager 1.4.1+ can do MAC address randomization right out of the box. If you're using Ubuntu 17.04 upwards, you can get most of the way with [this config file][7]. You can't quite achieve all three of my requirements (you must choose _random_ or _stable_ but it seems you can't do _stable-for-one-day_).
Since I'm sticking with Ubuntu 16.04, which ships with NetworkManager 1.2, I couldn't make use of the new functionality. Supposedly there is some randomization support but I failed to actually make it work, so I scripted up a solution instead.
Fortunately NetworkManager 1.2 does allow for spoofing your MAC address. You can see this in the Edit connections dialog for a given network:
![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png)
NetworkManager also supports hooks - any script placed in `/etc/NetworkManager/dispatcher.d/pre-up.d/` is run before a connection is brought up.
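One practical note if you write your own hook: NetworkManager only executes dispatcher scripts that are owned by root and marked executable, so after saving the file you would run something like:
```
sudo chown root:root /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses
sudo chmod 755 /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses
```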
### Assigning pseudo-random MAC addresses
To recap, I wanted to generate random MAC addresses based on the _network_ and the _date_. We can use the NetworkManager command line, nmcli, to show a full list of networks:
```
> nmcli connection
NAME UUID TYPE DEVICE
Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0
DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless --
PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless --
virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless --
```
Since each network has a unique identifier, to achieve my scheme I just concatenated the UUID with today's date and hashed the result:
```
# eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03
> echo -n "${UUID}-$(date +%F)" | md5sum
53594de990e92f9b914a723208f22b3f -
```
That produced bytes which can be substituted in for the last octets of the MAC address.
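For example, the hash above, `53594de990e92f9b914a723208f22b3f`, becomes the spoofed address `02:53:59:4d:e9:90`: the first five byte pairs of the hash fill in the octets after the fixed first byte of `02`.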
Note that the first byte `02` signifies the address is [locally administered][8]. Real, burned-in MAC addresses start with 3 bytes designating their manufacturer, for example `b4:b6:76` for Intel.
It's possible that some routers may reject locally administered MACs but I haven't encountered that yet.
On every connection up, the script calls `nmcli` to set the spoofed MAC address for every connection:
![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png)
As a final check, if I look at `ifconfig` I can see that the `HWaddr` is the spoofed one, not my real MAC address:
```
> ifconfig
wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d
inet addr:192.168.0.86 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0
TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB)
```
The full script is [available on Github][9].
```
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses
# Configure every saved WiFi connection in NetworkManager with a spoofed MAC
# address, seeded from the UUID of the connection and the date eg:
# 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20'
# This makes your MAC impossible(?) to track across WiFi providers, and
# for one provider to track across days.
# For craptive portals that authenticate based on MAC, you might want to
# automate logging in :)
# Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar
# automatically.
export PATH=$PATH:/usr/bin:/bin
LOG_FILE=/var/log/randomize-mac-addresses
echo "$(date): $*" > ${LOG_FILE}
WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3)
for UUID in ${WIFI_UUIDS}
do
UUID_DAILY_HASH=$(echo "${UUID}-$(date +%F)" | md5sum)
RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')"
CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}"
echo "$CMD" >> ${LOG_FILE}
$CMD &
done
wait
```
Enjoy!
_Update: [Use locally administered MAC addresses][5] to avoid clashing with real Intel ones. Thanks [@_fink][6]_
--------------------------------------------------------------------------------
via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/
作者:[Paul M Furley][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.paulfurley.com/
[4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/
[5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef
[6]:https://twitter.com/fink_/status/937305600005943296
[7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/
[8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local
[9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f

@@ -1,129 +0,0 @@
translating by iron0x
Use multi-stage builds
============================================================
Multi-stage builds are a new feature requiring Docker 17.05 or higher on the daemon and client. Multi-stage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain.
> Acknowledgment: Special thanks to [Alex Ellis][1] for granting permission to use his blog post [Builder pattern vs. Multi-stage builds in Docker][2] as the basis of the examples below.
### Before multi-stage builds
One of the most challenging things about building images is keeping the image size down. Each instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you don't need before moving on to the next layer. To write a really efficient Dockerfile, you have traditionally needed to employ shell tricks and other logic to keep the layers as small as possible and to ensure that each layer has the artifacts it needs from the previous layer and nothing else.
It was actually very common to have one Dockerfile to use for development (which contained everything needed to build your application), and a slimmed-down one to use for production, which only contained your application and exactly what was needed to run it. This has been referred to as the “builder pattern”. Maintaining two Dockerfiles is not ideal.
Here's an example of a `Dockerfile.build` and `Dockerfile` which adhere to the builder pattern above:
`Dockerfile.build`:
```
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN go get -d -v golang.org/x/net/html \
&& CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
```
Notice that this example also artificially compresses two `RUN` commands together using the Bash `&&` operator, to avoid creating an additional layer in the image. This is failure-prone and hard to maintain. It's easy to insert another command and forget to continue the line using the `\` character, for example.
`Dockerfile`:
```
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY app .
CMD ["./app"]
```
`build.sh`:
```
#!/bin/sh
echo Building alexellis2/href-counter:build
docker build --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \
-t alexellis2/href-counter:build . -f Dockerfile.build
docker create --name extract alexellis2/href-counter:build
docker cp extract:/go/src/github.com/alexellis/href-counter/app ./app
docker rm -f extract
echo Building alexellis2/href-counter:latest
docker build --no-cache -t alexellis2/href-counter:latest .
rm ./app
```
When you run the `build.sh` script, it needs to build the first image, create a container from it in order to copy the artifact out, then build the second image. Both images take up room on your system and you still have the `app` artifact on your local disk as well.
Multi-stage builds vastly simplify this situation!
### Use multi-stage builds
With multi-stage builds, you use multiple `FROM` statements in your Dockerfile. Each `FROM` instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image. To show how this works, let's adapt the Dockerfile from the previous section to use multi-stage builds.
`Dockerfile`:
```
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
```
You only need the single Dockerfile. You don't need a separate build script, either. Just run `docker build`.
```
$ docker build -t alexellis2/href-counter:latest .
```
The end result is the same tiny production image as before, with a significant reduction in complexity. You don't need to create any intermediate images and you don't need to extract any artifacts to your local system at all.
How does it work? The second `FROM` instruction starts a new build stage with the `alpine:latest` image as its base. The `COPY --from=0` line copies just the built artifact from the previous stage into this new stage. The Go SDK and any intermediate artifacts are left behind, and not saved in the final image.
### Name your build stages
By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first `FROM` instruction. However, you can name your stages, by adding an `as <NAME>` to the `FROM` instruction. This example improves the previous one by naming the stages and using the name in the `COPY` instruction. This means that even if the instructions in your Dockerfile are re-ordered later, the `COPY` won't break.
```
FROM golang:1.7.3 as builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
```
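Building the image is invoked exactly as before; naming a stage only changes how the stages refer to each other, not how you run the build:
```
$ docker build -t alexellis2/href-counter:latest .
```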
--------------------------------------------------------------------------------
via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/#name-your-build-stages
作者:[docker docs][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://docs.docker.com/engine/userguide/eng-image/multistage-build/
[1]:https://twitter.com/alexellisuk
[2]:http://blog.alexellis.io/mutli-stage-docker-builds/

@@ -0,0 +1,307 @@
Top 20 GNOME Extensions You Should Be Using Right Now
============================================================
_Brief: You can enhance the capacity of your GNOME desktop with extensions. Here, we list the best GNOME extensions to save you the trouble of finding them on your own._
[GNOME extensions][9] are a major part of the [GNOME][10] experience. These extensions add a lot of value to the ecosystem, whether it is to mold the GNOME Desktop Environment (DE) to your workflow, to add more functionality than there is by default, or simply to freshen up the experience.
With default [Ubuntu 17.10][11] switching from [Unity to Gnome][12], now is the time to familiarize yourself with the various extensions that the GNOME community has to offer. We already showed you [how to enable and manage GNOME extensions][13]. But finding good extensions can be a daunting task. That's why I created this list of best GNOME extensions to save you some trouble.
### Best GNOME Extensions
![Best GNOME Extensions for Ubuntu](https://itsfoss.com/wp-content/uploads/2017/12/Best-GNOME-Extensions-800x450.jpg)
The list is in alphabetical order, and there is no ranking involved: the extension at the number one position is not better than the rest of the extensions.
### 1\. Appfolders Management extensions
One of the major features that I think GNOME is missing is the ability to organize the default application grid. This is something included by default in [KDE][14]'s Application Dashboard, in [Elementary OS][15]'s Slingshot Launcher, and even in macOS, yet as of [GNOME 3.26][16] it isn't something that comes baked in. The Appfolders Management extension changes that.
This extension gives the user an easy way to organize their applications into various folders with a simple right click > add to folder. Creating folders and adding applications to them is not only simple through this extension, but it feels so natively implemented that you will wonder why this isn't built into the default GNOME experience.
![](https://itsfoss.com/wp-content/uploads/2017/11/folders-300x225.jpg)
[Appfolders Management extension][17]
### 2\. Apt Update Indicator
For distributions that utilize [Apt as their package manager][18], such as Ubuntu or Debian, the Apt Update Indicator extension allows for a more streamlined update experience in GNOME.
The extension settles into your top bar and notifies the user of updates waiting on their system. It also displays recently added repos, residual config files, and files that are auto-removable, and allows the user to manually check for updates, all in one basic drop-down menu.
It is a simple extension that adds an immense amount of functionality to any system. 
![](https://itsfoss.com/wp-content/uploads/2017/11/Apt-Update-300x185.jpg)
[Apt Update Indicator][19]
### 3\. Auto Move Windows
If, like me, you utilize multiple virtual desktops, then this extension will make your workflow much easier. Auto Move Windows allows you to set your applications to automatically open on a virtual desktop of your choosing. It is as simple as adding an application to the list and selecting the desktop you would like that application to open on.
From then on, every time you open that application, it will open on that desktop. This makes all the difference: as soon as you log in to your computer, all you have to do is open the application and it immediately opens where you want it, without you having to move it around manually every time before you can get to work.
![](https://itsfoss.com/wp-content/uploads/2017/11/auto-move-300x225.jpg)
[Auto Move Windows][20]
### 4\. Caffeine 
Caffeine allows the user to keep their computer screen from auto-suspending at the flip of a switch. The coffee-mug-shaped extension icon embeds itself into the right side of your top bar and, with a click, shows that your computer is “caffeinated” with a subtle addition of steam to the mug and a notification.
The same is true for turning Caffeine off, enabling auto-suspend and/or the screensaver again. It's incredibly simple to use and works just as you would expect.
Caffeine Disabled:  
![](https://itsfoss.com/wp-content/uploads/2017/11/caffeine-enabled-300x78.jpg)
Caffeine Enabled:
![](https://itsfoss.com/wp-content/uploads/2017/11/caffeine-disabled-300x75.jpg)
[Caffeine][21]
### 5\. CPU Power Management [Only for Intel CPUs]
This is an extension that, at first, I didn't think would be very useful, but after some time using it I have found that functionality like this should be baked into all computers by default, or at least into all laptops. CPU Power Management allows you to choose how much of your computer's resources are being used at any given time.
Its simple drop-down menu allows the user to change between various preset or user-made profiles that control at what frequency your CPU is to run. For example, you can set your CPU to the “Quiet” preset, which in this case tells your computer to use a maximum of only 30% of its resources.
On the other hand, you can set it to the “High Performance” preset to allow your computer to run at full potential. This comes in handy if you have loud fans and want to minimize the amount of noise they make or if you just need to save some battery life.
One thing to note is that  _this only works on computers with an Intel CPU_ , so keep that in mind.
![](https://itsfoss.com/wp-content/uploads/2017/11/CPU-300x194.jpg)
[CPU Power Management][22]
### 6\. Clipboard Indicator 
Clipboard Indicator is a clean and simple clipboard management tool. The extension sits in the top bar and caches your recent clipboard history (things you copy and paste). It will continue to save this information until the user clears the extension's history.
If you know that you are about to work with content that you don't want saved in this way, like credit card numbers or any of your personal information, Clipboard Indicator offers a private mode that the user can toggle on and off for such cases.
![](https://itsfoss.com/wp-content/uploads/2017/11/clipboard-300x200.jpg)
[Clipboard Indicator][23]
### 7\. Extensions
The Extensions extension allows the user to enable/disable other extensions and to access their settings from a single extension. Extensions either sits next to your other icons and extensions in the panel or in the user drop-down menu.
Redundancies aside, Extensions is a great way to gain easy access to all your extensions without the need to open up the GNOME Tweak Tool to do so. 
![](https://itsfoss.com/wp-content/uploads/2017/11/extensions-300x185.jpg)
[Extensions][24]
### 8\. Frippery Move Clock
For those of us who are used to having the clock to the right of the panel in Unity, this extension does the trick. Frippery Move Clock moves the clock from the middle of the top panel to the right side. It takes the calendar and notification window with it but does not migrate the notifications themselves. We have another extension later in this list, Panel OSD, that can bring your notifications over to the right as well.
Before Frippery: 
![](https://itsfoss.com/wp-content/uploads/2017/11/before-move-clock-300x19.jpg)
After Frippery:
![](https://itsfoss.com/wp-content/uploads/2017/11/after-move-clock-300x19.jpg)
[Frippery Move Clock][25]
### 9\. Gno-Menu
Gno-Menu brings a more traditional menu to the GNOME DE. Not only does it add an applications menu to the top panel, but it also brings a ton of functionality and customization with it. If you are used to the Applications Menu extension traditionally found in GNOME but don't want the bugs and issues that Ubuntu 17.10 brought to it, Gno-Menu is an awesome alternative.
![](https://itsfoss.com/wp-content/uploads/2017/11/Gno-Menu-300x169.jpg)
[Gno-Menu][26]
### 10\. User Themes
User Themes is a must for anyone looking to customize their GNOME desktop. By default, GNOME Tweaks lets its users change the theme of the applications themselves, icons, and cursors, but not the theme of the shell. User Themes fixes that by enabling us to change the theme of GNOME Shell, allowing us to get the most out of our customization experience. Check out our [video][27] or read our article to learn how to [install new themes][28].
User Themes Off:
![](https://itsfoss.com/wp-content/uploads/2017/11/user-themes-off-300x141.jpg)
User Themes On:
![](https://itsfoss.com/wp-content/uploads/2017/11/user-themes-on-300x141.jpg)
[User Themes][29]
### 11\. Hide Activities Button
Hide Activities Button does exactly what you would expect. It hides the Activities button found at the leftmost corner of the top panel. This button traditionally activates the activities overview in GNOME, but plenty of people use the Super key on the keyboard for this same function.
Though this disables the button itself, it does not disable the hot corner. Since Ubuntu 17.10 offers the ability to shut off the hot corner in the native settings application, this is not a huge deal for Ubuntu users. For other distributions, there are a plethora of other ways to disable the hot corner if you so desire, which we will not cover in this particular article.
Before:
![](https://itsfoss.com/wp-content/uploads/2017/11/activies-present-300x15.jpg)
After:
![](https://itsfoss.com/wp-content/uploads/2017/11/activities-removed-300x15.jpg)
#### [Hide Activities Button][30] 
### 12\. MConnect
MConnect offers a way to seamlessly integrate the [KDE Connect][31] application within the GNOME desktop. Though KDE Connect offers a way for users to connect their Android handsets with virtually any Linux DE, its indicator lacks a good way to integrate seamlessly into any DE other than [Plasma][32].
MConnect fixes that, giving the user a straightforward drop-down menu that allows them to send SMS messages, locate their phones, browse their phone's file system, and send files to their phone from the desktop. Though I had to do some tweaking to get MConnect to work just as I would expect it to, I couldn't be any happier with the extension.
Do remember that you will need KDE Connect installed alongside MConnect in order to get it to work.
![](https://itsfoss.com/wp-content/uploads/2017/11/MConenct-300x174.jpg)
[MConnect][33]
### 13\. OpenWeather
OpenWeather adds an extension to the panel that gives the user weather information at a glance. It is customizable, it lets the user view weather information for whatever location they want, and it doesn't rely on the computer's location services. OpenWeather gives the user the choice between [OpenWeatherMap][34] and [Dark Sky][35] to provide the weather information that is to be displayed.
![](https://itsfoss.com/wp-content/uploads/2017/11/OpenWeather-300x147.jpg)
[OpenWeather][36]
### 14\. Panel OSD
This is the extension I mentioned earlier which allows the user to customize the location in which their desktop notifications appear on the screen. Not only does this allow the user to move their notifications over to the right, but Panel OSD gives the user the option to put their notifications literally anywhere they want on the screen. For those of us migrating from Unity to GNOME, switching the notifications from the top middle to the top right may make us feel more at home.
Before:
![](https://itsfoss.com/wp-content/uploads/2017/11/osd1-300x40.jpg)
After:
![](https://itsfoss.com/wp-content/uploads/2017/11/osd-300x36.jpg)
#### [Panel OSD][37] 
### 15\. Places Status Indicator
Places Status Indicator has been a recommended extension for as long as people have been recommending extensions. Places adds a drop-down menu to the panel that gives the user quick access to various areas of the file system, from the home directory to servers your computer has access to, and anywhere in between.
The convenience and usefulness of this extension become more apparent as you use it, and it becomes a fundamental way you navigate your system. I couldn't recommend it highly enough.
![](https://itsfoss.com/wp-content/uploads/2017/11/Places-288x300.jpg)
[Places Status Indicator][38]
### 16\. Refresh Wifi Connections
One minor annoyance in GNOME is that the Wi-Fi Networks dialog box does not have a refresh button on it when you are trying to connect to a new Wi-Fi network. Instead, it makes the user wait while the system automatically refreshes the list. Refresh Wifi Connections fixes this. It simply adds that desired refresh button to the dialog box, adding functionality that really should be included out of the box.
Before: 
![](https://itsfoss.com/wp-content/uploads/2017/11/refresh-before-292x300.jpg)
After:
![](https://itsfoss.com/wp-content/uploads/2017/11/Refresh-after-280x300.jpg)
#### [Refresh Wifi Connections][39] 
### 17\. Remove Dropdown Arrows
The Remove Dropdown Arrows extension removes the arrows on the panel that signify when an icon has a drop-down menu that you can interact with. This is purely an aesthetic tweak and isn't always necessary, as some themes remove these arrows by default. But themes such as [Numix][40], which happens to be my personal favorite, don't remove them.
Remove Dropdown Arrows brings that clean look to the GNOME Shell, removing some unneeded clutter. The only bug I have encountered is that the CPU Power Management extension I mentioned earlier will randomly “respawn” the drop-down arrow. To turn it back off, I have to disable Remove Dropdown Arrows and then enable it again, until once more it randomly reappears out of nowhere.
Before:  
![](https://itsfoss.com/wp-content/uploads/2017/11/remove-arrows-before-300x17.jpg)
After:
![](https://itsfoss.com/wp-content/uploads/2017/11/remove-arrows-after-300x14.jpg)
[Remove Dropdown Arrows][41]
### 18\. Status Area Horizontal Spacing
This is another extension that is purely aesthetic and is only “necessary” in certain themes. Status Area Horizontal Spacing allows the user to control the amount of space between the icons in the status bar. If you think your status icons are too close together or too spaced out, then this extension has you covered. Just select the padding you would like and you're set.
Maximum Spacing: 
![](https://itsfoss.com/wp-content/uploads/2017/11/spacing-2-300x241.jpg)
Minimum Spacing:
![](https://itsfoss.com/wp-content/uploads/2017/11/spacing-300x237.jpg)
#### [Status Area Horizontal Spacing][42] 
### 19\. Steal My Focus
By default, when you open an application in GNOME, it will sometimes stay behind what you have open if a different application has focus. GNOME then notifies you that the application you selected has opened, and it is up to you to switch over to it. But, in my experience, this isn't always consistent. There are certain applications that seem to jump to the front when opened, while the rest rely on you to see the notifications to know they opened.
Steal My Focus changes that by removing the notification and immediately giving the user focus of the application they just opened. Because of this inconsistency, it was difficult for me to get a screenshot, so you just have to trust me on this one. ;)
#### [Steal My Focus][43] 
### 20\. Workspaces to Dock 
This extension changed the way I use GNOME. Period. It allows me to be more productive and aware of my virtual desktops, making for a much better user experience. Workspaces to Dock allows the user to customize their overview workspaces by turning them into an interactive dock.
You can customize its look, size, functionality, and even position. It can be used purely for aesthetics, but I think the real gold is using it to make the workspaces more fluid, functional, and consistent with the rest of the UI.
![](https://itsfoss.com/wp-content/uploads/2017/11/Workspaces-to-dock-300x169.jpg)
[Workspaces to Dock][44]
### Honorable Mentions: Dash to Dock and Dash to Panel  
Dash to Dock and Dash to Panel are not included in the official 20 extensions of this article for one main reason: Ubuntu Dock. Both extensions allow the user to turn the GNOME Dash into either a dock or a panel, respectively, and they add more customization than comes by default.
The problem is that to get the full functionality of these two extensions you will need to jump through some hoops to disable Ubuntu Dock, which I won't outline in this article. We acknowledge that not everyone will be using Ubuntu 17.10, so for those of you that aren't, this may not apply to you. That being said, both of these extensions are great and are included among some of the most popular GNOME extensions you will find.
Currently, there is a “bug” in Dash to Dock whereby changing its settings, even with the extension disabled, applies the changes to the Ubuntu Dock as well. I say “bug” because I actually use this myself to customize Ubuntu Dock without needing the extension to be activated. This may get patched in the future, but until then consider it a free tip; a minimal sketch follows below.
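Here is a minimal sketch of that tip, assuming the usual Dash to Dock GSettings schema; the schema and key names are my assumption rather than something documented in this article:

```
# tweak Ubuntu Dock through Dash to Dock's GSettings schema
# (the extension itself can remain disabled)
gsettings set org.gnome.shell.extensions.dash-to-dock dock-position BOTTOM
gsettings set org.gnome.shell.extensions.dash-to-dock dash-max-icon-size 32
```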
### [Dash to Dock][45] / [Dash to Panel][46]
So there you have it: our top 20 GNOME extensions you should try right now. Which of these extensions do you particularly like? Which do you dislike? Let us know in the comments below, and don't be afraid to say something if there is anything you think we missed.
### About Phillip Prado
Phillip Prado is an avid follower of all things tech, culture, and art. Not only is he an all-around geek, he has a BA in cultural communications and considers himself a serial hobbyist. He loves hiking, cycling, poetry, video games, and movies. But no matter what his passions are there is only one thing he loves more than Linux and FOSS: coffee. You can find him (nearly) everywhere on the web as @phillipprado.
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-gnome-extensions/
作者:[ Phillip Prado][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/phillip/
[1]:https://itsfoss.com/author/phillip/
[2]:https://itsfoss.com/best-gnome-extensions/#comments
[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fbest-gnome-extensions%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[4]:https://twitter.com/share?original_referer=/&text=Top+20+GNOME+Extensions+You+Should+Be+Using+Right+Now&url=https://itsfoss.com/best-gnome-extensions/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=phillipprado
[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-gnome-extensions%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-gnome-extensions%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/best-gnome-extensions/&title=Top+20+GNOME+Extensions+You+Should+Be+Using+Right+Now
[8]:https://www.reddit.com/submit?url=https://itsfoss.com/best-gnome-extensions/&title=Top+20+GNOME+Extensions+You+Should+Be+Using+Right+Now
[9]:https://extensions.gnome.org/
[10]:https://www.gnome.org/
[11]:https://itsfoss.com/ubuntu-17-10-release-features/
[12]:https://itsfoss.com/ubuntu-unity-shutdown/
[13]:https://itsfoss.com/gnome-shell-extensions/
[14]:https://www.kde.org/
[15]:https://elementary.io/
[16]:https://itsfoss.com/gnome-3-26-released/
[17]:https://extensions.gnome.org/extension/1217/appfolders-manager/
[18]:https://en.wikipedia.org/wiki/APT_(Debian)
[19]:https://extensions.gnome.org/extension/1139/apt-update-indicator/
[20]:https://extensions.gnome.org/extension/16/auto-move-windows/
[21]:https://extensions.gnome.org/extension/517/caffeine/
[22]:https://extensions.gnome.org/extension/945/cpu-power-manager/
[23]:https://extensions.gnome.org/extension/779/clipboard-indicator/
[24]:https://extensions.gnome.org/extension/1036/extensions/
[25]:https://extensions.gnome.org/extension/2/move-clock/
[26]:https://extensions.gnome.org/extension/608/gnomenu/
[27]:https://youtu.be/9TNvaqtVKLk
[28]:https://itsfoss.com/install-themes-ubuntu/
[29]:https://extensions.gnome.org/extension/19/user-themes/
[30]:https://extensions.gnome.org/extension/744/hide-activities-button/
[31]:https://community.kde.org/KDEConnect
[32]:https://www.kde.org/plasma-desktop
[33]:https://extensions.gnome.org/extension/1272/mconnect/
[34]:http://openweathermap.org/
[35]:https://darksky.net/forecast/40.7127,-74.0059/us12/en
[36]:https://extensions.gnome.org/extension/750/openweather/
[37]:https://extensions.gnome.org/extension/708/panel-osd/
[38]:https://extensions.gnome.org/extension/8/places-status-indicator/
[39]:https://extensions.gnome.org/extension/905/refresh-wifi-connections/
[40]:https://numixproject.github.io/
[41]:https://extensions.gnome.org/extension/800/remove-dropdown-arrows/
[42]:https://extensions.gnome.org/extension/355/status-area-horizontal-spacing/
[43]:https://extensions.gnome.org/extension/234/steal-my-focus/
[44]:https://extensions.gnome.org/extension/427/workspaces-to-dock/
[45]:https://extensions.gnome.org/extension/307/dash-to-dock/
[46]:https://extensions.gnome.org/extension/1160/dash-to-panel/

View File

@ -0,0 +1,81 @@
5 Tips to Improve Technical Writing for an International Audience
============================================================
![documentation](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/typewriter-801921_1920.jpg?itok=faTXFNoE "documentation")
Writing in English for an international audience takes work; here are some handy tips to remember.[Creative Commons Zero][2]
Writing in English for an international audience does not necessarily put native English speakers in a better position. On the contrary, they tend to forget that the document's language might not be the first language of the audience. Let's have a look at the following simple sentence as an example: “Encrypt the password using the 'foo bar' command.”
Grammatically, the sentence is correct. Given that "-ing" forms (gerunds) are frequently used in the English language, most native speakers would probably not hesitate to phrase a sentence like this. However, on closer inspection, the sentence is ambiguous: The word “using” may refer either to the object (“the password”) or to the verb (“encrypt”). Thus, the sentence can be interpreted in two different ways:
* Encrypt the password that uses the 'foo bar' command.
* Encrypt the password by using the 'foo bar' command.
As long as you have previous knowledge about the topic (password encryption or the 'foo bar' command), you can resolve this ambiguity and correctly decide that the second reading is the intended meaning of this sentence. But what if you lack in-depth knowledge of the topic? What if you are not an expert but a translator with only general knowledge of the subject? Or, what if you are a non-native speaker of English who is unfamiliar with advanced grammatical forms?
### Know Your Audience
Even native English speakers may need some training to write clear and straightforward technical documentation. Raising awareness of usability and potential problems is the first step. This article, based on my talk at [Open Source Summit EU][5], offers several useful techniques. Most of them are useful not only for technical documentation but also for everyday written communication, such as writing email or reports.
**1\. Change perspective.** Step into your audience's shoes. Step one is to know your intended audience. If you are a developer writing for end users, view the product from their perspective. The [persona technique][6] can help to focus on the target audience and to provide the right level of detail for your readers.
**2\. Follow the KISS principle.** Keep it short and simple. The principle can be applied on several levels, like grammar, sentences, or words. Here are some examples:
_Words: _ Uncommon and long words slow down reading and might be obstacles for non-native speakers. Use simpler alternatives:
“utilize” → “use”
“indicate” → “show”, “tell”, “say”
“prerequisite” → “requirement”
_Grammar: _ Use the simplest tense that is appropriate. For example, use present tense when mentioning the result of an action: "Click  _OK_ . The  _Printer Options_  dialog appears.”
_Sentences: _ As a rule of thumb, present one idea in one sentence. However, restricting sentence length to a certain number of words is not useful in my opinion. Short sentences are not automatically easy to understand (especially if they are a cluster of nouns). Sometimes, trimming down sentences to a certain word count can introduce ambiguities, which can, in turn, make sentences even more difficult to understand.
**3\. Beware of ambiguities.** As authors, we often do not notice ambiguity in a sentence. Having your texts reviewed by others can help identify such problems. If that's not an option, try to look at each sentence from different perspectives: Does the sentence also work for readers without in-depth knowledge of the topic? Does it work for readers with limited language skills? Is the grammatical relationship between all sentence parts clear? If the sentence does not meet these requirements, rephrase it to resolve the ambiguity.
**4\. Be consistent.** This applies to choice of words, spelling, and punctuation as well as phrases and structure. For lists, use parallel grammatical construction. For example:
Why white space is important:
* It focuses attention.
* It visually separates sections.
* It splits content into chunks. 
**5\. Remove redundant content.** Keep only information that is relevant for your target audience. On a sentence level, avoid fillers (basically, easily) and unnecessary modifications:
"already existing" → "existing"
"completely new" → "new"
As you might have guessed by now, writing is rewriting. Good writing requires effort and practice. But even if you write only occasionally, you can significantly improve your texts by focusing on the target audience and by using basic writing techniques. The better the readability of a text, the easier it is to process, even for an audience with varying language skills. When it comes to localization especially, good quality of the source text is important: Garbage in, garbage out. If the original text has deficiencies, it will take longer to translate the text, resulting in higher costs. In the worst case, the flaws will be multiplied during translation and need to be corrected in various languages. 
![Tanja Roth](https://www.linux.com/sites/lcom/files/styles/floated_images/public/tanja-roth.jpg?itok=eta0fvZC "Tanja Roth")
Tanja Roth, Technical Documentation Specialist at SUSE Linux GmbH[Used with permission][1]
_Driven by an interest in both language and technology, Tanja has been working as a technical writer in mechanical engineering, medical technology, and IT for many years. She joined SUSE in 2005 and contributes to a wide range of product and project documentation, including High Availability and Cloud topics._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience?sf175396579=1
作者:[TANJA ROTH ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/tanja-roth
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/tanja-rothjpg
[4]:https://www.linux.com/files/images/typewriter-8019211920jpg
[5]:https://osseu17.sched.com/event/ByIW
[6]:https://en.wikipedia.org/wiki/Persona_(user_experience)

View File

@ -0,0 +1,82 @@
translating---geekpi
FreeCAD A 3D Modeling and Design Software for Linux
============================================================
![FreeCAD 3D Modeling Software](https://www.fossmint.com/wp-content/uploads/2017/12/FreeCAD-3D-Modeling-Software.png)
[FreeCAD][8] is a cross-platform OpenCasCade-based mechanical engineering and product design tool. Being a parametric 3D modeler, it works with PLM, CAx, CAE, MCAD and CAD, and its functionalities can be extended using tons of advanced extensions and customization options.
It features a Qt-based minimalist user interface with togglable panels, layouts, toolbars, a broad Python API, and an Open Inventor-compliant 3D scene representation model (thanks to the Coin 3D library).
[![FreeCAD 3D Software](https://www.fossmint.com/wp-content/uploads/2017/12/FreeCAD-3D-Software.png)][9]
FreeCAD 3D Software
As listed on the website, FreeCAD has a couple of use cases, namely:
> * The Home User/Hobbyist: Got yourself a project you want to build, have built, or 3D printed? Model it in FreeCAD. No previous CAD experience required. Our community will help you get the hang of it quickly!
>
> * The Experienced CAD User: If you use commercial CAD or BIM modeling software at work, you will find similar tools and workflow among the many workbenches of FreeCAD.
>
> * The Programmer: Almost all of FreeCADs functionality is accessible to Python. You can easily extend FreeCADs functionality, automatize it with scripts, build your own modules or even embed FreeCAD in your own application.
>
> * The Educator: Teach your students a free software with no worry about license purchase. They can install the same version at home and continue using it after leaving school.
#### Features in FreeCAD
* Freeware: FreeCAD is free for everyone to download and use.
* Open Source: Contribute to the source code on [GitHub][4].
* Cross-Platform: All Windows, Linux, and Mac users can enjoy the coolness of FreeCAD.
* A comprehensive [Online Documentation][5].
* A free [Online Manual][6] for beginners and pros alike.
* Annotations support e.g. text and dimensions.
* A built-in Python console.
* A fully customizable and scriptable UI.
* An online community for showcasing projects [here][7].
* Extendable modules for modeling and designing a variety of objects e.g.
FreeCAD has a lot more features to offer users than we can list here, so feel free to see the rest of them on its website's [Features page][11].
There are many 3D modeling tools on the market, but very few of them are free. If you are a modeling engineer, architect, or artist and are looking for an application you can use without necessarily shelling out any cash, then FreeCAD is a beautiful open-source project you should check out.
Give it a test-drive and see if you don't like it.
[Download FreeCAD for Linux][13]
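As a hedged aside, FreeCAD is also packaged in many distributions' official repositories, so installing it from the terminal may be quicker than a manual download (package name assumed; shown here for Debian/Ubuntu-based systems):

```
# install FreeCAD from the distribution repositories
sudo apt install freecad
```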
Are you already a FreeCAD user? Which of its features do you enjoy the most and have you come across any alternatives that may go head to head with its abilities?
Remember that your comments, suggestions, and constructive criticisms are always welcome in the comments section below.
--------------------------------------------------------------------------------
via: https://www.fossmint.com/freecad-3d-modeling-and-design-software-for-linux/
作者:[Martins D. Okoi ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.fossmint.com/author/dillivine/
[1]:https://www.fossmint.com/author/dillivine/
[2]:https://www.fossmint.com/author/dillivine/
[3]:https://www.fossmint.com/freecad-3d-modeling-and-design-software-for-linux/#disqus_thread
[4]:https://github.com/FreeCAD/FreeCAD
[5]:https://www.freecadweb.org/wiki/Main_Page
[6]:https://www.freecadweb.org/wiki/Manual
[7]:https://forum.freecadweb.org/viewforum.php?f=24
[8]:http://www.freecadweb.org/
[9]:https://www.fossmint.com/wp-content/uploads/2017/12/FreeCAD-3D-Software.png
[10]:https://www.fossmint.com/synfig-an-adobe-animate-alternative-for-gnulinux/
[11]:https://www.freecadweb.org/wiki/Feature_list
[12]:http://www.tecmint.com/red-hat-rhcsa-rhce-exam-certification-book/
[13]:https://www.freecadweb.org/wiki/Download

View File

@ -0,0 +1,47 @@
translating---geekpi
# GNOME Boxes Makes It Easier to Test Drive Linux Distros
![GNOME Boxes Distribution Selection](http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/GNOME-Boxes-INstall-Distros-750x475.jpg)
Creating Linux virtual machines on the GNOME desktop is about to get a whole lot easier.
The next major release of  [_GNOME Boxes_][5]  is able to download popular Linux (and BSD-based) operating systems directly inside the app itself.
Boxes is free, open-source software. It can be used to access both remote and virtual systems as it is built around [QEMU][6], KVM, and libvirt virtualisation technologies.
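(A quick, hedged aside that is not from the original article: since Boxes leans on KVM, you can check in advance whether your CPU exposes hardware virtualisation; a non-zero count below means VT-x/AMD-V is available.)

```
# count the CPU virtualisation flags in /proc/cpuinfo
egrep -c '(vmx|svm)' /proc/cpuinfo
```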
For its new ISO-toting integration,  _Boxes_  makes use of [libosinfo][7], a database of operating systems that also provides details on any virtualized environment requirements.
In [this (mis-titled) video][8] from GNOME developer Felipe Borges you can see just how easy the improved Source Selection screen makes things, including the ability to download a specific ISO architecture for a given distro:
[video](https://youtu.be/CGahI05Gbac)
Despite it being a core GNOME app, I have to confess that I have never used Boxes. It's not that I don't hear good things about it (I do); it's just that I'm more familiar with setting up and configuring virtual machines in VirtualBox.
> The lazy geek inside me is going to appreciate this integration
Admittedly, it's not exactly  _difficult_  to head out and download an ISO using the browser, then point a virtual machine app to it (heck, it's what most of us have been doing for a decade or so).
But the lazy geek inside me is really going to appreciate this integration.
So, thanks to this feature, I'll be unpacking Boxes on my system when GNOME 3.28 is released next March. I will be able to launch  _Boxes_ , close my eyes, pick a distro from the list at random, and instantly broaden my Tux-shaped horizons.
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2017/12/gnome-boxes-install-linux-distros-directly
作者:[ JOEY SNEDDON ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://plus.google.com/117485690627814051450/?rel=author
[2]:http://www.omgubuntu.co.uk/category/dev
[3]:http://www.omgubuntu.co.uk/category/video
[4]:http://www.omgubuntu.co.uk/2017/12/gnome-boxes-install-linux-distros-directly
[5]:https://en.wikipedia.org/wiki/GNOME_Boxes
[6]:https://en.wikipedia.org/wiki/QEMU
[7]:https://libosinfo.org/
[8]:https://blogs.gnome.org/felipeborges/boxes-downloadable-oses/

View File

@ -1,143 +0,0 @@
How To Know What A Command Or Program Will Exactly Do Before Executing It
======
Ever wondered what a Unix command will do before executing it? Not everyone knows what a particular command or program will do. Of course, you can check it with [Explainshell][2]: you copy/paste the command into the Explainshell website and it lets you know what each part of a Linux command does. However, that is not strictly necessary. Now, we can easily know what a command or program will exactly do before executing it, right from the Terminal. Say hello to “maybe”, a simple tool that allows you to run a command and see what it does to your files without actually doing it! After reviewing the output listed, you can then decide whether you really want to run it or not.
#### How does “maybe” work?
According to the developer,
> “maybe” runs processes under the control of ptrace with the help of python-ptrace library. When it intercepts a system call that is about to make changes to the file system, it logs that call, and then modifies CPU registers to both redirect the call to an invalid syscall ID (effectively turning it into a no-op) and set the return value of that no-op call to one indicating success of the original call. As a result, the process believes that everything it is trying to do is actually happening, when in reality nothing is.
Warning: You should be very, very careful when using this utility on a production system or on any system you care about. It can still do serious damage, because it blocks only a handful of syscalls.
#### Installing “maybe”
Make sure you have installed pip in your Linux system. If not, install it as shown below depending upon the distribution you use.
On Arch Linux and its derivatives like Antergos and Manjaro Linux, install pip using the following command:
```
sudo pacman -S python-pip
```
On RHEL, CentOS:
```
sudo yum install epel-release
```
```
sudo yum install python-pip
```
On Fedora (which does not need the EPEL repository):
```
sudo dnf install python-pip
```
On Debian, Ubuntu, Linux Mint:
```
sudo apt-get install python-pip
```
On SUSE, openSUSE:
```
sudo zypper install python-pip
```
Once pip installed, run the following command to install “maybe”.
```
sudo pip install maybe
```
#### Know What A Command Or Program Will Exactly Do Before Executing It
Usage is absolutely easy! Just add “maybe” in front of a command that you want to execute.
Allow me to show you an example.
```
$ maybe rm -r ostechnix/
```
As you can see, I am going to delete a folder called “ostechnix” from my system. Here is the sample output.
```
maybe has prevented rm -r ostechnix/ from performing 5 file system operations:
delete /home/sk/inboxer-0.4.0-x86_64.AppImage
delete /home/sk/Docker.pdf
delete /home/sk/Idhayathai Oru Nodi.mp3
delete /home/sk/dThmLbB334_1398236878432.jpg
delete /home/sk/ostechnix
Do you want to rerun rm -r ostechnix/ and permit these operations? [y/N] y
```
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png)][3]
The “maybe” tool intercepted 5 file system operations and showed me exactly what this command (rm -r ostechnix/) would do. Now I can decide whether I should perform this operation or not. Cool, yeah? Indeed!
Here is another example. I am going to install the [Inboxer][4] desktop client for Gmail. This is what I got.
```
$ maybe ./inboxer-0.4.0-x86_64.AppImage
fuse: bad mount point `/tmp/.mount_inboxemDzuGV': No such file or directory
squashfuse 0.1.100 (c) 2012 Dave Vasilevsky
Usage: /home/sk/Downloads/inboxer-0.4.0-x86_64.AppImage [options] ARCHIVE MOUNTPOINT
FUSE options:
-d -o debug enable debug output (implies -f)
-f foreground operation
-s disable multi-threaded operation
open dir error: No such file or directory
maybe has prevented ./inboxer-0.4.0-x86_64.AppImage from performing 1 file system operations:
create directory /tmp/.mount_inboxemDzuGV
Do you want to rerun ./inboxer-0.4.0-x86_64.AppImage and permit these operations? [y/N]
```
If it does not detect any file system operations, it will simply display a result like the one below.
For instance, I run this command to update my Arch Linux.
```
$ maybe sudo pacman -Syu
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
maybe has not detected any file system operations from sudo pacman -Syu.
```
See? It didn't detect any file system operations, so there were no warnings. This is absolutely brilliant and exactly what I was looking for. From now on, I can easily know what a command or a program will do even before executing it. I hope this will be useful to you too. More good stuff to come. Stay tuned!
Cheers!
Resource:
* [“maybe” GitHub page][1]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/know-command-program-will-exactly-executing/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://github.com/p-e-w/maybe
[2]:https://www.ostechnix.com/explainshell-find-part-linux-command/
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png
[4]:https://www.ostechnix.com/inboxer-unofficial-google-inbox-desktop-client/

View File

@ -0,0 +1,113 @@
# [Improve your Bash scripts with Argbash][1]
![](https://fedoramagazine.org/wp-content/uploads/2017/11/argbash-1-945x400.png)
Do you write or maintain non-trivial bash scripts? If so, you probably want them to accept command-line arguments in a standard and robust way. Fedora recently got [a nice addition][2] which can help you produce better scripts. And don't worry, it won't cost you much of your time or energy.
### Why Argbash?
Bash is an interpreted command-line language with no standard library. Therefore, if you write bash scripts and want command-line interfaces that conform to [POSIX][3] and [GNU CLI][4] standards, you're used to only two options:
1. Write the argument-parsing functionality tailored to your script yourself (possibly using the `getopts` builtin).
2. Use an external bash module.
The first option looks incredibly silly as implementing the interface properly is not trivial. However, it is suggested as the best choice on various sites ranging from [Stack Overflow][5] to the [Bash Hackers][6] wiki.
The second option looks smarter, but using a module has its issues. The biggest is that you have to bundle its code with your script. This may mean either:
* You distribute the library as a separate file, or
* You include the library code at the beginning of your script.
Having two files instead of one is awkward. So is polluting your bash scripts with a chunk of complex code over a thousand lines long.
This was the main reason why the Argbash [project came to life][7]. Argbash is a code generator, so it generates a tailor-made parsing library for your script. Unlike the generic code of other bash modules, it produces the minimal code your script needs. Moreover, you can request even simpler code if you don't need 100% conformance to these CLI standards.
### Example
### Analysis
Let's say you want to implement a script that [draws a bar][8] across the terminal window. You do that by repeating a single character of your choice multiple times. This means you need to get the following information from the command-line:
* _The character which is the element of the line. If not specified, use a dash._  On the command-line, this would be a single-valued positional argument  _character_  with a default value of -.
* _Length of the line. If not specified, go for 80._  This is a single-valued optional argument  _length_  with a default of 80.
* _Verbose mode (for debugging)._  This is a boolean argument  _verbose_ , off by default.
As the body of the script is really simple, this article focuses on getting the input of the user from the command-line into appropriate script variables. Argbash generates code that saves parsing results to the shell variables `_arg_character`, `_arg_length` and `_arg_verbose`.
### Execution
In order to proceed, you need the  _argbash-init_  and  _argbash_  bash scripts that are part of the  _argbash_  package. Therefore, run this command:
```
sudo dnf install argbash
```
Then, use  _argbash-init_  to generate a template for  _argbash_ , which generates the executable script. You want three arguments: a positional one called  _character_ , an optional  _length_  and an optional boolean  _verbose_ . Tell this to  _argbash-init_ , and then pass the output to  _argbash_ :
```
argbash-init --pos character --opt length --opt-bool verbose script-template.sh
argbash script-template.sh -o script
./script
```
See the help message? Looks like the script doesn't know about the default option for the character argument. So take a look at the [Argbash API][9], and then fix the issue by editing the template section of the script:
```
# ...
# ARG_OPTIONAL_SINGLE([length],[l],[Length of the line],[80])
# ARG_OPTIONAL_BOOLEAN([verbose],[V],[Debug mode])
# ARG_POSITIONAL_SINGLE([character],[The element of the line],[-])
# ARG_HELP([The line drawer])
# ...
```
Argbash is so smart that it tries to make every generated script a template of itself. This means you don't have to worry about storing source templates for further use. You just shouldn't lose your generated bash scripts. Now, try to regenerate the future line drawer to work as expected:
```
argbash script -o script
./script
```
As you can see, everything is working all right. The only thing left to do is fill in the line drawing functionality itself.
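To round the example off, here is a hedged sketch of that drawing body, appended after the generated parsing section. The `_arg_*` variable names are the ones generated above; treating the boolean as the string `on` is my assumption about Argbash's output conventions:

```
# draw the bar: print _arg_length copies of _arg_character
if [ "$_arg_verbose" = "on" ]; then
    echo "drawing ${_arg_length} '${_arg_character}' characters" >&2
fi
printf "%${_arg_length}s" '' | tr ' ' "$_arg_character"
echo
```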
### Conclusion
You might find the section containing parsing code quite long, but consider that it allows you to call `./script.sh x -Vl50` and it will be understood the same way as `./script -V -l 50 x`. It does require some code to get this right.
However, you can shift the balance between generated code complexity and parsing abilities toward simpler code by calling  _argbash-init_  with the argument  _mode_  set to  _minimal_ . This option reduces the size of the script by about 20 lines, which corresponds to a roughly 25% decrease in the size of the generated parsing code. On the other hand, the  _full_  mode makes the script even smarter.
If you want to examine the generated code, give  _argbash_  the argument  _commented_ , which puts comments into the parsing code that reveal the intent behind various sections. Compare that to other argument parsing libraries such as [shflags][10], [argsparse][11] or [bash-modules/arguments][12], and youll see the powerful simplicity of Argbash. If something goes horribly wrong and you need to fix a glitch in the parsing functionality quickly, Argbash allows you to do that as well.
As you're most likely a Fedora user, you can enjoy the luxury of having command-line Argbash installed from the official repositories. However, there is also an [online parsing code generator][13] at your service. Furthermore, if you're working on a server with Docker, you can appreciate the [Argbash Docker image][14].
So enjoy and make sure that your scripts have a command-line interface that pleases your users. Argbash is here to help, with minimal effort required from your side.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/improve-bash-scripts-argbash/
作者:[Matěj Týč ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/bubla/
[1]:https://fedoramagazine.org/improve-bash-scripts-argbash/
[2]:https://argbash.readthedocs.io/
[3]:http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap12.html
[4]:https://www.gnu.org/prep/standards/html_node/Command_002dLine-Interfaces.html
[5]:https://stackoverflow.com/questions/192249/how-do-i-parse-command-line-arguments-in-bash
[6]:http://wiki.bash-hackers.org/howto/getopts_tutorial
[7]:https://argbash.readthedocs.io/
[8]:http://wiki.bash-hackers.org/snipplets/print_horizontal_line
[9]:http://argbash.readthedocs.io/en/stable/guide.html#argbash-api
[10]:https://raw.githubusercontent.com/Anvil/bash-argsparse/master/argsparse.sh
[11]:https://raw.githubusercontent.com/Anvil/bash-argsparse/master/argsparse.sh
[12]:https://raw.githubusercontent.com/vlisivka/bash-modules/master/main/bash-modules/src/bash-modules/arguments.sh
[13]:https://argbash.io/generate
[14]:https://hub.docker.com/r/matejak/argbash/

View File

@ -0,0 +1,210 @@
# Tutorial on how to write basic udev rules in Linux
Contents
* [1. Objective][4]
* [2. Requirements][5]
* [3. Difficulty][6]
* [4. Conventions][7]
* [5. Introduction][8]
* [6. How rules are organized][9]
* [7. The rules syntax][10]
* [8. A test case][11]
* [9. Operators][12]
  * [9.1.1. == and != operators][1]
  * [9.1.2. The assignment operators: = and :=][2]
  * [9.1.3. The += and -= operators][3]
* [10. The keys we used][13]
### Objective
Understand the basic concepts behind udev and learn how to write simple rules.
### Requirements
* Root permissions
### Difficulty
MEDIUM
### Conventions
* **#** - requires the given command to be executed with root privileges, either directly as the root user or by use of the `sudo` command
* **$** - given command to be executed as a regular non-privileged user
### Introduction
In a GNU/Linux system, while low-level support for devices is handled at the kernel level, the management of events related to them happens in userspace, handled by `udev` and, more precisely, by the `udevd` daemon. Learning how to write rules that are applied when those events occur can be really useful for modifying the behavior of the system and adapting it to our needs.
### How rules are organized
Udev rules are defined in files with the `.rules` extension. There are two main locations in which those files can be placed: `/usr/lib/udev/rules.d` is the directory used for system-installed rules, while `/etc/udev/rules.d/` is reserved for custom-made rules.
The files in which the rules are defined are conventionally named with a number as a prefix (e.g. `50-udev-default.rules`) and are processed in lexical order independently of the directory they are in. Files installed in `/etc/udev/rules.d`, however, override those with the same name installed in the system default path.
### The rules syntax
The syntax of udev rules is not very complicated once you understand the logic behind it. A rule is composed of two main sections: the "match" part, in which we define the conditions for the rule to be applied, using a series of keys separated by commas, and the "action" part, in which we perform some kind of action when those conditions are met.
### A test case
What better way to explain the possible options than to configure an actual rule? As an example, we are going to define a rule to disable the touchpad when a mouse is connected. Obviously, the attributes provided in the rule definition will reflect my hardware.
We will write our rule in the `/etc/udev/rules.d/99-togglemouse.rules` file with the help of our favorite text editor. A rule definition can span over multiple lines, but if that's the case, a backslash must be used before the newline character, as a line continuation, just as in shell scripts. Here is our rule:
```
ACTION=="add" \
, ATTRS{idProduct}=="c52f" \
, ATTRS{idVendor}=="046d" \
, ENV{DISPLAY}=":0" \
, ENV{XAUTHORITY}="/run/user/1000/gdm/Xauthority" \
, RUN+="/usr/bin/xinput --disable 16"
```
Let's analyze it.
### Operators
First of all, an explanation of the used and possible operators:
#### == and != operators
The `==` is the equality operator and the `!=` is the inequality operator. By using them, we establish that, for the rule to be applied, the defined keys must match, or not match, the defined value respectively.
#### The assignment operators: = and :=
The `=` assignment operator is used to assign a value to the keys that accept one. We use the `:=` operator, instead, when we want to assign a value and make sure that it is not overridden by other rules: values assigned with this operator, in fact, cannot be altered.
#### The += and -= operators
The `+=` and `-=` operators are used respectively to add or to remove a value from the list of values defined for a specific key.
### The keys we used
Let's now analyze the keys we used in the rule. First of all, we have the `ACTION` key: by using it, we specified that our rule is to be applied when a specific event happens for the device. Valid values are `add`, `remove` and `change`.
We then used the `ATTRS` keyword to specify an attribute to be matched. We can list a device's attributes by using the `udevadm info` command, providing its name or `sysfs` path:
```
udevadm info -ap /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39
Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.
looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39':
KERNEL=="input39"
SUBSYSTEM=="input"
DRIVER==""
ATTR{name}=="Logitech USB Receiver"
ATTR{phys}=="usb-0000:00:1d.0-1.2/input1"
ATTR{properties}=="0"
ATTR{uniq}==""
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010':
KERNELS=="0003:046D:C52F.0010"
SUBSYSTEMS=="hid"
DRIVERS=="hid-generic"
ATTRS{country}=="00"
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1':
KERNELS=="2-1.2:1.1"
SUBSYSTEMS=="usb"
DRIVERS=="usbhid"
ATTRS{authorized}=="1"
ATTRS{bAlternateSetting}==" 0"
ATTRS{bInterfaceClass}=="03"
ATTRS{bInterfaceNumber}=="01"
ATTRS{bInterfaceProtocol}=="00"
ATTRS{bInterfaceSubClass}=="00"
ATTRS{bNumEndpoints}=="01"
ATTRS{supports_autosuspend}=="1"
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2':
KERNELS=="2-1.2"
SUBSYSTEMS=="usb"
DRIVERS=="usb"
ATTRS{authorized}=="1"
ATTRS{avoid_reset_quirk}=="0"
ATTRS{bConfigurationValue}=="1"
ATTRS{bDeviceClass}=="00"
ATTRS{bDeviceProtocol}=="00"
ATTRS{bDeviceSubClass}=="00"
ATTRS{bMaxPacketSize0}=="8"
ATTRS{bMaxPower}=="98mA"
ATTRS{bNumConfigurations}=="1"
ATTRS{bNumInterfaces}==" 2"
ATTRS{bcdDevice}=="3000"
ATTRS{bmAttributes}=="a0"
ATTRS{busnum}=="2"
ATTRS{configuration}=="RQR30.00_B0009"
ATTRS{devnum}=="12"
ATTRS{devpath}=="1.2"
ATTRS{idProduct}=="c52f"
ATTRS{idVendor}=="046d"
ATTRS{ltm_capable}=="no"
ATTRS{manufacturer}=="Logitech"
ATTRS{maxchild}=="0"
ATTRS{product}=="USB Receiver"
ATTRS{quirks}=="0x0"
ATTRS{removable}=="removable"
ATTRS{speed}=="12"
ATTRS{urbnum}=="1401"
ATTRS{version}==" 2.00"
[...]
```
Above is the truncated output received after running the command. As you can see from the output, `udevadm` starts with the path that we specified and gives us information about all the parent devices. Notice that attributes of the device are reported in singular form (e.g. `KERNEL`), while the parent ones are in plural form (e.g. `KERNELS`). The parent information can be part of a rule, but only one of the parents can be referenced at a time: mixing attributes of different parent devices will not work. In the rule we defined above, we used the attributes of one parent device: `idProduct` and `idVendor`.
The next thing we have done in our rule is to use the `ENV` keyword: it can be used both to set environment variables and to try to match them. We assigned a value to the `DISPLAY` and `XAUTHORITY` variables. Those variables are essential when interacting with the X server programmatically, to set up some needed information: with the `DISPLAY` variable we specify on what machine the server is running, and what display and screen we are referencing, and with `XAUTHORITY` we provide the path to the file which contains Xorg authentication and authorization information. This file is usually located in the user's home directory.
Finally, we used the `RUN` keyword: this is used to run external programs. Very important: the command is not executed immediately; instead, the various actions are executed once all the rules have been parsed. In this case we used the `xinput` utility to change the status of the touchpad. I will not explain the syntax of xinput here, as it would be out of context; just notice that `16` is the id of the touchpad.
Once our rule is set, we can debug it by using the `udevadm test` command. This is useful for debugging but it doesn't really run commands specified using the `RUN` key:
```
$ udevadm test --action="add" /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39
```
What we provided to the command is the action to simulate, using the `--action` option, and the sysfs path of the device. If no errors are reported, our rule should be good to go. To run it in the real world, we must reload the rules:
```
# udevadm control --reload
```
This command reloads the rule files; however, it takes effect only for newly generated events.
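To make already-connected devices pick up the new rule without replugging them, the corresponding events can be replayed manually. This is standard `udevadm` usage rather than something covered above:
```
# udevadm trigger --action=add
```
Without a filter such as `--subsystem-match`, this replays events for all devices, so on a production machine you may want to narrow it down.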
We have seen the basic concepts and logic used to create a udev rule; however, we only scratched the surface of the many options and possible settings. The udev manpage provides an exhaustive list: please refer to it for more in-depth knowledge.
--------------------------------------------------------------------------------
via: https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux
作者:[Egidio Docile ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://disqus.com/by/egidiodocile/
[1]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-1-and-operators
[2]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-2-the-assignment-operators-and
[3]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-3-the-and-operators
[4]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h1-objective
[5]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h2-requirements
[6]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h3-difficulty
[7]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h4-conventions
[8]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h5-introduction
[9]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h6-how-rules-are-organized
[10]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h7-the-rules-syntax
[11]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h8-a-test-case
[12]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-operators
[13]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h10-the-keys-we-used

View File

@ -0,0 +1,102 @@
ANNOUNCING THE GENERAL AVAILABILITY OF CONTAINERD 1.0, THE INDUSTRY-STANDARD RUNTIME USED BY MILLIONS OF USERS
============================================================
Today, we're pleased to announce that containerd (pronounced Con-Tay-Ner-D), an industry-standard runtime for building container solutions, has reached its 1.0 milestone. containerd has already been deployed in millions of systems in production today, making it the most widely adopted runtime and an essential upstream component of the Docker platform.
Built to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes, containerd ensures users have a consistent dev-to-ops experience. From [Docker's initial announcement][22] last year that it was spinning out its core runtime to [its donation to the CNCF][23] in March 2017, the containerd project has experienced significant growth and progress over the past 12 months.
Within both the Docker and Kubernetes communities, there has been a significant uptick in contributions from independents and CNCF member companies alike, including Docker, Google, NTT, IBM, Microsoft, AWS, ZTE, Huawei and ZJU. Similarly, the maintainers have been working to add key functionality to containerd. The initial containerd donation provided everything users need to ensure a seamless container experience, including methods for:
* transferring container images,
* container execution and supervision,
* low-level local storage and network interfaces and
* the ability to work on Linux, Windows, and other platforms.
Additional work has been done to add even more powerful capabilities to containerd, including:
* A complete storage and distribution system that supports both OCI and Docker image formats
* A robust events system
* A more sophisticated snapshot model to manage container filesystems
These changes helped the team build out a smaller interface for the snapshotters, while still fulfilling the requirements needed from things like a builder. It also reduces the amount of code needed, making it much easier to maintain in the long run.
The containerd 1.0 milestone comes after several months of testing both the alpha and beta versions, which enabled the team to implement many performance improvements. Some of these improvements include the creation of a stress testing system, improvements in garbage collection and shim memory usage.
“In 2017 key functionality has been added to containerd to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes,” said Michael Crosby, Maintainer for containerd and engineer at Docker. “Since our announcement in December, we have been progressing the design of the project with the goal of making it easily embeddable in higher level systems to provide core container capabilities. We will continue to work with the community to create a runtime that's lightweight yet powerful, balancing new functionality with the desire for code that is easy to support and maintain.”
containerd is already being used by Kubernetes for its [cri-containerd project][24], which enables users to run Kubernetes clusters using containerd as the underlying runtime. containerd is also an essential upstream component of the Docker platform and is currently used by millions of end users. There is also strong alignment with other CNCF projects: containerd exposes an API using [gRPC][25] and exposes metrics in the [Prometheus][26] format. containerd also fully leverages the Open Container Initiative (OCI) runtime, image format specifications and OCI reference implementation ([runC][27]), and will pursue OCI certification when it is available.
Key Milestones in the progress to 1.0 include:
![containerd 1.0](https://i2.wp.com/blog.docker.com/wp-content/uploads/4f8d8c4a-6233-4d96-a0a2-77ed345bf42b-5.jpg?resize=720%2C405&ssl=1)
Notable containerd facts and figures:
* 1994 GitHub stars, 401 forks
* 108 contributors
* 8 maintainers from independents and member companies alike, including Docker, Google, IBM, ZTE and ZJU
* 3030+ commits, 26 releases
Availability and Resources
To participate in containerd: [github.com/containerd/containerd][28]
* Getting Started with containerd: [http://mobyproject.org/blog/2017/08/15/containerd-getting-started/][8]
* Roadmap: [https://github.com/containerd/containerd/blob/master/ROADMAP.md][1]
* Scope table: [https://github.com/containerd/containerd#scope][2]
* Architecture document: [https://github.com/containerd/containerd/blob/master/design/architecture.md][3]
* APIs: [https://github.com/containerd/containerd/tree/master/api/][9].
* Learn more about containerd at KubeCon by attending Justin Cormack's [LinuxKit & Kubernetes talk at Austin Docker Meetup][10], Patrick Chanezon's [Moby session][11], [Phil Estes' session][12], or the [containerd salon][13]
--------------------------------------------------------------------------------
via: https://blog.docker.com/2017/12/cncf-containerd-1-0-ga-announcement/
作者:[Patrick Chanezon ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.docker.com/author/chanezon/
[1]:https://github.com/docker/containerd/blob/master/ROADMAP.md
[2]:https://github.com/docker/containerd#scope
[3]:https://github.com/docker/containerd/blob/master/design/architecture.md
[4]:http://www.linkedin.com/shareArticle?mini=true&url=http://dockr.ly/2ArQe3G&title=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users&summary=Today,%20we%E2%80%99re%20pleased%20to%20announce%20that%20containerd%20(pronounced%20Con-Tay-Ner-D),%20an%20industry-standard%20runtime%20for%20building%20container%20solutions,%20has%20reached%20its%201.0%20milestone.%20containerd%20has%20already%20been%20deployed%20in%20millions%20of%20systems%20in%20production%20today,%20making%20it%20the%20most%20widely%20adopted%20runtime%20and%20an%20essential%20upstream%20component%20of%20the%20Docker%20platform.%20Built%20...
[5]:http://www.reddit.com/submit?url=http://dockr.ly/2ArQe3G&title=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users
[6]:https://plus.google.com/share?url=http://dockr.ly/2ArQe3G
[7]:http://news.ycombinator.com/submitlink?u=http://dockr.ly/2ArQe3G&t=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users
[8]:http://mobyproject.org/blog/2017/08/15/containerd-getting-started/
[9]:https://github.com/docker/containerd/tree/master/api/
[10]:https://www.meetup.com/Docker-Austin/events/245536895/
[11]:http://sched.co/CU6G
[12]:https://kccncna17.sched.com/event/CU6g/embedding-the-containerd-runtime-for-fun-and-profit-i-phil-estes-ibm
[13]:https://kccncna17.sched.com/event/Cx9k/containerd-salon-hosted-by-derek-mcgowan-docker-lantao-liu-google
[14]:https://blog.docker.com/author/chanezon/
[15]:https://blog.docker.com/tag/cloud-native-computing-foundation/
[16]:https://blog.docker.com/tag/cncf/
[17]:https://blog.docker.com/tag/container-runtime/
[18]:https://blog.docker.com/tag/containerd/
[19]:https://blog.docker.com/tag/cri-containerd/
[20]:https://blog.docker.com/tag/grpc/
[21]:https://blog.docker.com/tag/kubernetes/
[22]:https://blog.docker.com/2016/12/introducing-containerd/
[23]:https://blog.docker.com/2017/03/docker-donates-containerd-to-cncf/
[24]:http://blog.kubernetes.io/2017/11/containerd-container-runtime-options-kubernetes.html
[25]:http://www.grpc.io/
[26]:https://prometheus.io/
[27]:https://github.com/opencontainers/runc
[28]:http://github.com/containerd/containerd


@ -0,0 +1,154 @@
Ubuntu 18.04 New Features, Release Date & More
============================================================
We've all been waiting for it: the new LTS release of Ubuntu 18.04. Learn more about the new features, the release dates, and more.
> Note: we'll frequently update this article with new information, so bookmark this page and check back soon.
### Basic information about Ubuntu 18.04
Let's start with some basic information.
* It's a new LTS (Long Term Support) release, so you get 5 years of support for both the desktop and server versions.
* It's named “Bionic Beaver”. The founder of Canonical, Mark Shuttleworth, explained the meaning behind the name: the mascot is a beaver because it's energetic, industrious, and an awesome engineer, which perfectly describes a typical Ubuntu user and the new Ubuntu release itself. The “Bionic” adjective is due to the increased number of robots that run on Ubuntu Core.
### Ubuntu 18.04 Release Dates & Schedule
If you're new to Ubuntu, you may not be familiar with what the version numbers actually mean. They encode the year and month of the official release, so Ubuntu 18.04's official release will be in the 4th month of the year 2018. Ubuntu 17.10 was released in 2017, in the 10th month of the year.
To go into further detail, here are the important dates you need to know about Ubuntu 18.04 LTS:
* November 30th, 2017: Feature Definition Freeze.
* January 4th, 2018: First Alpha release. So if you opted in to receive new Alpha releases, you'll get the Alpha 1 update on this date.
* February 1st, 2018: Second Alpha release.
* March 1st, 2018: Feature Freeze. No new features will be introduced or released, so the development team will only work on improving existing features and fixing bugs. With exceptions, of course. If you're not a developer or an experienced user, but would still like to try the new Ubuntu ASAP, then I'd personally recommend starting with this release.
* March 8th, 2018: First Beta release. If you opted in to receive Beta updates, you'll get your update on this day.
* March 22nd, 2018: User Interface Freeze. It means that no further changes or updates will be made to the actual user interface, so if you write documentation, [tutorials][1], and use screenshots, it's safe to start then.
* March 29th, 2018: Documentation String Freeze. There won't be any edits or new stuff (strings) added to the documentation, so translators can start translating the documentation.
* April 5th, 2018: Final Beta release. This is also a good day to start using the new release.
* April 19th, 2018: Final Freeze. Everything's pretty much done now. Images for the release are created and distributed, and will likely not have any changes.
* April 26th, 2018: Official, final release of Ubuntu 18.04. Everyone can start using it from this day on, even on production servers. We recommend getting an Ubuntu 18.04 server from [Vultr][2] and testing out the new features. Servers at [Vultr][3] start at $2.5 per month.
### Whats New in Ubuntu 18.04
All the new features in Ubuntu 18.04 LTS:
### Color emojis are now supported 
With previous versions, Ubuntu only supported monochrome (black and white) emojis, which, quite frankly, didn't look so good. Ubuntu 18.04 will support color emojis by using the [Noto Color Emoji font][7]. With 18.04, you can view and add color emojis with ease everywhere. They are supported natively, so you can use them without third-party apps or installing/configuring anything extra. You can always disable the color emojis by removing the font, as sketched below.
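If you do decide to remove it, a minimal sketch might look like the following (assuming the font ships in a package called fonts-noto-color-emoji, which is a guess; check your package manager for the exact name):
```
# Find the exact package name first (the name below is an assumption)
$ apt search noto-color-emoji
# Remove the font package to fall back to monochrome emojis
$ sudo apt remove fonts-noto-color-emoji
```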
### GNOME desktop environment
[![ubuntu 17.10 gnome](https://thishosting.rocks/wp-content/uploads/2017/12/ubuntu-17-10-gnome.jpg.webp)][8]
Ubuntu started using the GNOME desktop environment with Ubuntu 17.10 instead of the default Unity environment. Ubuntu 18.04 will continue using GNOME. This is a major change to Ubuntu.
### Ubuntu 18.04 Desktop will have a new default theme
Ubuntu 18.04 is saying goodbye to the old Ambiance default theme in favor of a new GTK theme. If you want to help with the new theme, or check out some screenshots and more, go [here][9].
As of now, there is speculation that Suru will be the [new default icon theme][10] for Ubuntu 18.04. Here's a screenshot:
[![suru icon theme ubuntu 18.04](https://thishosting.rocks/wp-content/uploads/2017/12/suru-icon-theme-ubuntu-18-04.jpg.webp)][11]
> Worth noting: all new features in Ubuntu 16.10, 17.04, and 17.10 will roll through to Ubuntu 18.04. So updates like window buttons on the right, a better login screen, improved Bluetooth support, etc. will roll out to Ubuntu 18.04. We won't include a special section for them since they're not really new to Ubuntu 18.04 itself. If you want to learn more about all the changes from 16.04 to 18.04, google each version in between.
### Download Ubuntu 18.04
First off, if you're already using Ubuntu, you can just upgrade to Ubuntu 18.04.
If you need to download Ubuntu 18.04:
Go to the [official Ubuntu download page][12] after the final release.
For the daily builds (alpha, beta, and non-final releases), go [here][13].
### FAQs
Now for some of the frequently asked questions (with answers) that should give you more information about all of this.
### When is it safe to switch to Ubuntu 18.04?
On the official final release date, of course. But if you can't wait, start using the desktop version on March 1st, 2018, and start testing out the server version on April 5th, 2018. But for you to truly be “safe”, you'll need to wait for the final release, maybe even longer, so that the third-party services and apps you are using have been tested and work well on the new release.
### How do I upgrade my server to Ubuntu 18.04?
It's a fairly simple process, but it has potential risks. We may publish a tutorial sometime in the near future, but you'll basically need to use `do-release-upgrade`. Again, upgrading your server has potential risks, and if you're on a production server, I'd think twice before upgrading, especially if you're on 16.04, which has a few years of support left.
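As a rough sketch of what that upgrade usually looks like on Ubuntu (assuming the standard update-manager-core tooling; the exact steps may change before the final release):
```
# Bring the current system fully up to date first
$ sudo apt update && sudo apt upgrade
# Make sure the do-release-upgrade tool is installed
$ sudo apt install update-manager-core
# Start the interactive release upgrade
$ sudo do-release-upgrade
```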
### How can I help with Ubuntu 18.04?
Even if you're not an experienced developer or Ubuntu user, you can still help by:
* Spreading the word. Let people know about Ubuntu 18.04. A simple share on social media helps a bit too.
* Using and testing the release. Start using the release and test it. Again, you don't have to be a developer. You can still find and report bugs, or send feedback.
* Translating. Join the translating teams and start translating documentation and/or applications.
* Helping other people. Join some online Ubuntu communities and help others with issues they're having with Ubuntu 18.04. Sometimes people need help with simple stuff like “where can I download Ubuntu?”
### What does Ubuntu 18.04 mean for other distros like Lubuntu?
All distros that are based on Ubuntu will have similar new features and a similar release schedule. You'll need to check your distro's official website for more information.
### Is Ubuntu 18.04 an LTS release?
Yes, Ubuntu 18.04 is an LTS (Long Term Support) release, so you'll get support for 5 years.
### Can I switch from Windows/OS X to Ubuntu 18.04?
Of course! You'll most likely experience a performance boost too. Switching from a different OS to Ubuntu is fairly easy; there are quite a lot of tutorials for doing that. You can even set up a dual boot where you'll be using multiple OSes, so you can use both Windows and Ubuntu 18.04.
### Can I try Ubuntu 18.04 without installing it?
Sure. You can use something like [VirtualBox][14] to create a “virtual desktop”: install it on your local machine and use Ubuntu 18.04 without actually installing Ubuntu on the hardware.
Or you can try an Ubuntu 18.04 server at [Vultr][15] for $2.5 per month. It's essentially free if you use some [free credits][16].
### Why cant I find a 32-bit version of Ubuntu 18.04?
Because there is no 32-bit version. Ubuntu dropped 32-bit versions with its 17.10 release. If you're using old hardware, you're better off using a different [lightweight Linux distro][17] instead of Ubuntu 18.04 anyway.
### Any other question?
Leave a comment below! Share your thoughts; we're super excited, and we're gonna update this article as soon as new information comes in. Stay tuned and be patient!
--------------------------------------------------------------------------------
via: https://thishosting.rocks/ubuntu-18-04-new-features-release-date/
作者:[ thishosting.rocks][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:thishosting.rocks
[1]:https://thishosting.rocks/category/knowledgebase/
[2]:https://thishosting.rocks/go/vultr/
[3]:https://thishosting.rocks/go/vultr/
[4]:https://thishosting.rocks/category/knowledgebase/
[5]:https://thishosting.rocks/tag/ubuntu/
[6]:https://thishosting.rocks/2017/12/05/
[7]:https://www.google.com/get/noto/help/emoji/
[8]:https://thishosting.rocks/wp-content/uploads/2017/12/ubuntu-17-10-gnome.jpg
[9]:https://community.ubuntu.com/t/call-for-participation-an-ubuntu-default-theme-lead-by-the-community/1545
[10]:http://www.omgubuntu.co.uk/2017/11/suru-default-icon-theme-ubuntu-18-04-lts
[11]:https://thishosting.rocks/wp-content/uploads/2017/12/suru-icon-theme-ubuntu-18-04.jpg
[12]:https://www.ubuntu.com/download
[13]:http://cdimage.ubuntu.com/daily-live/current/
[14]:https://www.virtualbox.org/
[15]:https://thishosting.rocks/go/vultr/
[16]:https://thishosting.rocks/vultr-coupons-for-2017-free-credits-and-more/
[17]:https://thishosting.rocks/best-lightweight-linux-distros/


@ -0,0 +1,200 @@
translating by lujun9972
10 useful ncat (nc) Command Examples for Linux Systems
======
[![nc-ncat-command-examples-Linux-Systems](https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg)][1]
ncat, or nc, is a networking utility with functionality similar to the cat command, but for the network. It is a general-purpose CLI tool for reading, writing, and redirecting data across a network. It is designed to be a reliable back-end tool that can be used with scripts or other programs. It's also a great tool for network debugging, as it can create any kind of connection one might need.
ncat/nc can be a port scanning tool, a security tool, or a monitoring tool, and it is also a simple TCP proxy. Since it has so many features, it is known as a network Swiss Army knife. It's one of those tools that every system admin should know & master.
In most Debian-based distributions, nc is available and its package is installed automatically during installation. But in a minimal CentOS 7 / RHEL 7 installation you will not find nc as a default package. You need to install it using the following command.
```
[root@linuxtechi ~]# yum install nmap-ncat -y
```
System admins can use it to audit their system security; they can use it to find the ports that are open & then secure them. Admins can also use it as a client for auditing web servers, telnet servers, mail servers, etc. With nc we can control every character sent & can also view the responses to sent queries.
We can also use it to capture data being sent by a client to understand what it is up to.
In this tutorial, we are going to learn how to use the nc command with 10 examples.
#### Example: 1) Listen to inbound connections
Ncat can work in listen mode & we can listen for inbound connections on a port number with the -l option. The complete command is:
```
$ ncat -l port_number
```
For example,
```
$ ncat -l 8080
```
The server will now start listening on port 8080 for inbound connections.
#### Example: 2) Connect to a remote system
To connect to a remote system with nc, we can use the following command:
```
$ ncat IP_address port_number
```
Let's take an example:
```
$ ncat 192.168.1.100 80
```
Now a connection to the server with IP address 192.168.1.100 will be made on port 80 & we can now send instructions to the server. For example, we can request the complete page content with:
```
GET / HTTP/1.1
```
or get the page name:
```
GET / HTTP/1.1
```
or we can grab the banner for OS fingerprinting with:
```
HEAD / HTTP/1.1
```
This will tell us what software is being used to run the web server.
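If you prefer a non-interactive one-liner, a sketch like the following pipes the HEAD request into ncat and prints the response headers (assuming a web server is listening on 192.168.1.100:80):
```
$ echo -e "HEAD / HTTP/1.1\r\nHost: 192.168.1.100\r\nConnection: close\r\n\r\n" | ncat 192.168.1.100 80
```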
#### Example: 3) Connecting to UDP ports
By default, the nc utility makes connections only to TCP ports. But we can also make connections to UDP ports; for that we can use the -u option:
```
$ ncat -l -u 1234
```
Now our system will start listening on UDP port 1234. We can verify this using the netstat command below:
```
$ netstat -tunlp | grep 1234
udp 0 0 0.0.0.0:1234 0.0.0.0:* 17341/nc
udp6 0 0 :::1234 :::* 17341/nc
```
Let's assume we want to send data to, or test UDP port connectivity on, a specific remote host; then use the following command:
```
$ ncat -v -u {host-ip} {udp-port}
```
For example:
```
[root@localhost ~]# ncat -v -u 192.168.105.150 53
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 192.168.105.150:53.
```
#### Example: 4) NC as chat tool
NC can also be used as a chat tool: we can configure the server to listen on a port, then make a connection to the server from a remote machine on the same port & start sending messages. On the server side, run:
```
$ ncat -l 8080
```
On the remote client machine, run:
```
$ ncat 192.168.1.100 8080
```
Then start sending messages & they will be displayed on the server terminal.
#### Example: 5) NC as a proxy
NC can also be used as a proxy with a simple command. Let's take an example:
```
$ ncat -l 8080 | ncat 192.168.1.200 80
```
Now all the connections coming to our server on port 8080 will be automatically redirected to the 192.168.1.200 server on port 80. But since we are using a pipe, data can only be transferred one way; to be able to receive the data back, we need to create a two-way pipe. Use the following commands to do so:
```
$ mkfifo 2way
$ ncat -l 8080 0<2way | ncat 192.168.1.200 80 1>2way
```
Now you will be able to send & receive data over nc proxy.
#### Example: 6) Copying Files using nc/ncat
NC can also be used to copy files from one system to another, though it is not recommended, as almost all systems have ssh/scp installed by default. Nonetheless, if you come across a system with no ssh/scp, you can use nc as a last-ditch effort.
Start on the machine on which the data is to be received & start nc in listener mode:
```
$ ncat -l 8080 > file.txt
```
Now, on the machine from which the data is to be copied, run the following command:
```
$ ncat 192.168.1.100 8080 --send-only < data.txt
```
Here, data.txt is the file that has to be sent. The --send-only option will close the connection once the file has been copied. If we do not use this option, then we will have to press Ctrl+C to close the connection manually.
We can also copy entire disk partitions using this method, as sketched below, but it should be done with caution.
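As an illustration only, a sketch of cloning a partition over nc might look like this (the device name /dev/sda1 and the IP address are assumptions; double-check both before trying anything similar):
```
# On the receiving machine: listen and write the incoming stream to an image file
$ ncat -l 8080 > sda1.img

# On the sending machine: read the raw partition and pipe it to ncat
$ sudo dd if=/dev/sda1 bs=4M | ncat --send-only 192.168.1.100 8080
```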
#### Example: 7) Create a backdoor via nc/ncat
The nc command can also be used to create a backdoor to your systems & this technique is actually used by hackers a lot. We should know how it works in order to secure our systems. To create a backdoor, the command is:
```
$ ncat -l 10000 -e /bin/bash
```
The -e flag attaches a bash shell to port 10000. Now a client can connect to port 10000 on the server & will have complete access to our system via bash:
```
$ ncat 192.168.1.100 10000
```
#### Example: 8) Port forwarding via nc/ncat
We can also use nc for port forwarding with the help of the -c option; the syntax for accomplishing port forwarding is:
```
$ ncat -u -l 80 -c 'ncat -u -l 8080'
```
Now all the connections for port 80 will be forwarded to port 8080.
#### Example: 9) Set Connection timeouts
Listener mode in ncat will continue to run & would have to be terminated manually. But we can configure timeouts with the -w option:
```
$ ncat -w 10 192.168.1.100 8080
```
This will cause the connection to be terminated after 10 seconds, but it can only be used on the client side & not on the server side.
#### Example: 10) Force server to stay up using -k option in ncat
When a client disconnects from the server, after some time the server also stops listening. But we can force the server to stay up & continue listening on the port with the -k option. Run the following command:
```
$ ncat -l -k 8080
```
Now the server will stay up, even if a connection from a client is broken.
With this we end our tutorial. Please feel free to ask any questions regarding this article using the comment box below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/nc-ncat-command-examples-linux-systems/
作者:[Pradeep Kumar][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxtechi.com/author/pradeep/
[1]:https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg


@ -0,0 +1,95 @@
Getting started with Turtl, an open source alternative to Evernote
======
![Using Turtl as an open source alternative to Evernote](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_brainstorm_island_520px.png?itok=6IUPyxkY)
Just about everyone I know takes notes, and many people use an online note-taking application like Evernote, Simplenote, or Google Keep. Those are all good tools, but you have to wonder about the security and privacy of your information—especially in light of [Evernote's privacy flip-flop of 2016][1]. If you want more control over your notes and your data, you really need to turn to an open source tool.
Whatever your reasons for moving away from Evernote, there are open source alternatives out there. Let's look at one of those alternatives: Turtl.
### Getting started
The developers behind [Turtl][2] want you to think of it as "Evernote with ultimate privacy." To be honest, I can't vouch for the level of privacy that Turtl offers, but it is quite a good note-taking tool.
To get started with Turtl, [download][3] a desktop client for Linux, Mac OS, or Windows, or grab the [Android app][4]. Install it, then fire up the client or app. You'll be asked for a username and passphrase. Turtl uses the passphrase to generate a cryptographic key that, according to the developers, encrypts your notes before storing them anywhere on your device or on their servers.
### Using Turtl
You can create the following types of notes with Turtl:
* Password
* File
* Image
* Bookmark
* Text note
No matter what type of note you choose, you create it in a window that's similar for all types of notes:
### [turtl-new-note-520.png][5]
![Create new text note with Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-new-note-520.png)
Creating a new text note in Turtl
Add information like the title of the note, some text, and (if you're creating a File or Image note) attach a file or an image. Then click Save.
You can add formatting to your notes via [Markdown][6]. You need to add the formatting by hand—there are no toolbar shortcuts.
If you need to organize your notes, you can add them to Boards. Boards are just like notebooks in Evernote. To create a new board, click on the Boards tab, then click the Create a board button. Type a title for the board, then click Create.
### [turtl-boards-520.png][7]
![Create new board in Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-boards-520.png)
Creating a new board in Turtl
To add a note to a board, create or edit the note, then click the This note is not in any boards link at the bottom of the note. Select one or more boards, then click Done.
To add tags to a note, click the Tags icon at the bottom of a note, enter one or more keywords separated by commas, and click Done.
### Syncing your notes across your devices
If you use Turtl across several computers and an Android device, for example, Turtl will sync your notes whenever you're online. However, I've encountered a small problem with syncing: Every so often, a note I've created on my phone doesn't sync to my laptop. I tried to sync manually by clicking the icon in the top left of the window and then clicking Sync Now, but that doesn't always work. I found that I occasionally need to click that icon, click Your settings, and then click Clear local data. I then need to log back into Turtl, but all the data syncs properly.
### A question, and a couple of problems
When I started using Turtl, I was dogged by one question: Where are my notes kept online? It turns out that the developers behind Turtl are based in the U.S., and that's also where their servers are. Although the encryption that Turtl uses is [quite strong][8] and your notes are encrypted on the server, the paranoid part of me says that you shouldn't save anything sensitive in Turtl (or any online note-taking tool, for that matter).
Turtl displays notes in a tiled view, reminiscent of Google Keep:
### [turtl-notes-520.png][9]
![Notes in Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-notes-520.png)
A collection of notes in Turtl
There's no way to change that to a list view, either on the desktop or on the Android app. This isn't a problem for me, but I've heard some people pan Turtl because it lacks a list view.
Speaking of the Android app, it's not bad; however, it doesn't integrate with the Android Share menu. If you want to add a note to Turtl based on something you've seen or read in another app, you need to copy and paste it manually.
I've been using Turtl for several months on a Linux-powered laptop, my [Chromebook running GalliumOS][10], and an Android-powered phone. It's been a pretty seamless experience across all those devices. Although it's not my favorite open source note-taking tool, Turtl does a pretty good job. Give it a try; it might be the simple note-taking tool you're looking for.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/12/using-turtl-open-source-alternative-evernote
作者:[Scott Nesbitt][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/scottnesbitt
[1]:https://blog.evernote.com/blog/2016/12/15/evernote-revisits-privacy-policy/
[2]:https://turtlapp.com/
[3]:https://turtlapp.com/download/
[4]:https://turtlapp.com/download/
[5]:https://opensource.com/file/378346
[6]:https://en.wikipedia.org/wiki/Markdown
[7]:https://opensource.com/file/378351
[8]:https://turtlapp.com/docs/security/encryption-specifics/
[9]:https://opensource.com/file/378356
[10]:https://opensource.com/article/17/4/linux-chromebook-gallium-os


@ -1,288 +0,0 @@
translating by yongshouzhang
How to use cron in Linux
============================================================
### No time for commands? Scheduling tasks with cron means programs can run but you don't have to stay up late.
06 Nov 2017, [David Both][11]
![How to use cron in Linux](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_)
Image by :
[Internet Archive Book Images][15]. Modified by Opensource.com. [CC BY-SA 4.0][16]
One of the challenges (among the many advantages) of being a sysadmin is running tasks when you'd rather be sleeping. For example, some tasks (including regularly recurring tasks) need to run overnight or on weekends, when no one is expected to be using computer resources. I have no time to spare in the evenings to run commands and scripts that have to operate during off-hours. And I don't want to have to get up at oh-dark-hundred to start a backup or major update.
Instead, I use two service utilities that allow me to run commands, programs, and tasks at predetermined times. The [cron][17] and at services enable sysadmins to schedule tasks to run at a specific time in the future. The at service specifies a one-time task that runs at a certain time. The cron service can schedule tasks on a repetitive basis, such as daily, weekly, or monthly.
In this article, I'll introduce the cron service and how to use it.
### Common (and uncommon) cron uses
I use the cron service to schedule obvious things, such as regular backups that occur daily at 2 a.m. I also use it for less obvious things.
* The system times (i.e., the operating system time) on my many computers are set using the Network Time Protocol (NTP). While NTP sets the system time, it does not set the hardware time, which can drift. I use cron to set the hardware time based on the system time.
* I also have a Bash program I run early every morning that creates a new "message of the day" (MOTD) on each computer. It contains information, such as disk usage, that should be current in order to be useful.
* Many system processes and services, like [Logwatch][1], [logrotate][2], and [Rootkit Hunter][3], use the cron service to schedule tasks and run programs every day.
The crond daemon is the background service that enables cron functionality.
The cron service checks for files in the /var/spool/cron and /etc/cron.d directories and the /etc/anacrontab file. The contents of these files define cron jobs that are to be run at various intervals. The individual user cron files are located in /var/spool/cron, and system services and applications generally add cron job files in the /etc/cron.d directory. The /etc/anacrontab is a special case that will be covered later in this article.
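A quick way to look at where these definitions live on your own system (paths can vary slightly between distributions):
```
$ sudo ls /var/spool/cron     # per-user crontab files
$ ls /etc/cron.d              # cron files added by system services and applications
$ cat /etc/anacrontab         # the anacron configuration covered later in this article
```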
### Using crontab
The cron utility runs based on commands specified in a cron table (crontab). Each user, including root, can have a cron file. These files don't exist by default, but can be created in the /var/spool/cron directory using the crontab -e command that's also used to edit a cron file (see the script below). I strongly recommend that you not use a standard editor (such as Vi, Vim, Emacs, Nano, or any of the many other editors that are available). Using the crontab command not only allows you to edit the command, it also restarts the crond daemon when you save and exit the editor. The crontab command uses Vi as its underlying editor, because Vi is always present (on even the most basic of installations).
New cron files are empty, so commands must be added from scratch. I added the job definition example below to my own cron files, just as a quick reference, so I know what the various parts of a command mean. Feel free to copy it for your own use.
```
# crontab -e
SHELL=/bin/bash
MAILTO=root@example.com
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
# backup using the rsbu program to the internal 4TB HDD and then 4TB external
01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2
# Set the hardware clock to keep it in sync with the more accurate system clock
03 05 * * * /sbin/hwclock --systohc
# Perform monthly updates on the first of the month
# 25 04 1 * * /usr/bin/dnf -y update
```
The first three lines in the code above set up a default environment. The environment must be set to whatever is necessary for a given user because cron does not provide an environment of any kind. The SHELL variable specifies the shell to use when commands are executed. This example specifies the Bash shell. The MAILTO variable sets the email address where cron job results will be sent. These emails can provide the status of the cron job (backups, updates, etc.) and consist of the output you would see if you ran the program manually from the command line. The third line sets up the PATH for the environment. Even though the path is set here, I always prepend the fully qualified path to each executable.
There are several comment lines in the example above that detail the syntax required to define a cron job. I'll break those commands down, then add a few more to show you some more advanced capabilities of crontab files.
```
01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2
```
This line runs my self-written Bash shell script, rsbu, that backs up all my systems. This job kicks off at 1:01 a.m. (01 01) every day. The asterisks (*) in positions three, four, and five of the time specification are like file globs, or wildcards, for other time divisions; they specify "every day of the month," "every month," and "every day of the week." This line runs my backups twice; one backs up to an internal dedicated backup hard drive, and the other backs up to an external USB drive that I can take to the safe deposit box.
The following line sets the hardware clock on the computer using the system clock as the source of an accurate time. This line is set to run at 5:03 a.m. (03 05) every day.
```
03 05 * * * /sbin/hwclock --systohc
```
I was using the third and final cron job (commented out) to perform a dnf or yum update at 04:25 a.m. on the first day of each month, but I commented it out so it no longer runs.
```
# 25 04 1 * * /usr/bin/dnf -y update
```
### Other scheduling tricks
Now let's do some things that are a little more interesting than these basics. Suppose you want to run a particular job every Thursday at 3 p.m.:
```
00 15 * * Thu /usr/local/bin/mycronjob.sh
```
Or, maybe you need to run quarterly reports after the end of each quarter. The cron service has no option for "The last day of the month," so instead you can use the first day of the following month, as shown below. (This assumes that the data needed for the reports will be ready when the job is set to run.)
```
02 03 1 1,4,7,10 * /usr/local/bin/reports.sh
```
The following shows a job that runs one minute past every hour between 9:01 a.m. and 5:01 p.m.
```
01 09-17 * * * /usr/local/bin/hourlyreminder.sh
```
I have encountered situations where I need to run a job every two, three, or four hours. That can be accomplished by dividing the hours by the desired interval, such as */3 for every three hours, or 6-18/3 to run every three hours between 6 a.m. and 6 p.m. Other intervals can be divided similarly; for example, the expression */15 in the minutes position means "run the job every 15 minutes."
```
*/5 08-18/2 * * * /usr/local/bin/mycronjob.sh
```
One thing to note: The division expressions must result in a remainder of zero for the job to run. That's why, in this example, the job is set to run every five minutes (08:05, 08:10, 08:15, etc.) during even-numbered hours from 8 a.m. to 6 p.m., but not during any odd-numbered hours. For example, the job will not run at all from 9 p.m. to 9:59 a.m.
I am sure you can come up with many other possibilities based on these examples.
### Limiting cron access
Regular users with cron access could make mistakes that, for example, might cause system resources (such as memory and CPU time) to be swamped. To prevent possible misuse, the sysadmin can limit user access by creating a **/etc/cron.allow** file that contains a list of all users with permission to create cron jobs. The root user cannot be prevented from using cron.
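A minimal sketch of such a file, with hypothetical usernames (one username per line):
```
student
dev1
```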
If you prevent non-root users from creating their own cron jobs, it may be necessary for root to add their cron jobs to the root crontab. "But wait!" you say. "Doesn't that run those jobs as root?" Not necessarily. In the first example in this article, the username field shown in the comments can be used to specify the user ID a job is to have when it runs. This prevents the specified non-root user's jobs from running as root. The following example shows a job definition that runs a job as the user "student":
```
04 07 * * * student /usr/local/bin/mycronjob.sh
```
### cron.d
The directory /etc/cron.d is where some applications, such as [SpamAssassin][18] and [sysstat][19], install cron files. Because there is no spamassassin or sysstat user, these programs need a place to locate cron files, so they are placed in /etc/cron.d.
The /etc/cron.d/sysstat file below contains cron jobs that relate to system activity reporting (SAR). These cron files have the same format as a user cron file.
```
# Run system activity accounting tool every 10 minutes
*/10 * * * * root /usr/lib64/sa/sa1 1 1
# Generate a daily summary of process accounting at 23:53
53 23 * * * root /usr/lib64/sa/sa2 -A
```
The sysstat cron file has two lines that perform tasks. The first line runs the sa1 program every 10 minutes to collect data stored in special binary files in the /var/log/sa directory. Then, every night at 23:53, the sa2 program runs to create a daily summary.
### Scheduling tips
Some of the times I set in the crontab files seem rather random—and to some extent they are. Trying to schedule cron jobs can be challenging, especially as the number of jobs increases. I usually have only a few tasks to schedule on each of my computers, which is simpler than in some of the production and lab environments where I have worked.
One system I administered had around a dozen cron jobs that ran every night and an additional three or four that ran on weekends or the first of the month. That was a challenge, because if too many jobs ran at the same time—especially the backups and compiles—the system would run out of RAM and nearly fill the swap file, which resulted in system thrashing while performance tanked, so nothing got done. We added more memory and improved how we scheduled tasks. We also removed a task that was very poorly written and used large amounts of memory.
The crond service assumes that the host computer runs all the time. That means that if the computer is turned off during a period when cron jobs were scheduled to run, they will not run until the next time they are scheduled. This might cause problems if they are critical cron jobs. Fortunately, there is another option for running jobs at regular intervals: anacron.
### anacron
The [anacron][20] program performs the same function as crond, but it adds the ability to run jobs that were skipped, such as if the computer was off or otherwise unable to run the job for one or more cycles. This is very useful for laptops and other computers that are turned off or put into sleep mode.
As soon as the computer is turned on and booted, anacron checks to see whether configured jobs missed their last scheduled run. If they have, those jobs run immediately, but only once (no matter how many cycles have been missed). For example, if a weekly job was not run for three weeks because the system was shut down while you were on vacation, it would be run soon after you turn the computer on, but only once, not three times.
The anacron program provides some easy options for running regularly scheduled tasks. Just install your scripts in the /etc/cron.[hourly|daily|weekly|monthly] directories, depending on how frequently they need to be run.
How does this work? The sequence is simpler than it first appears.
1. The crond service runs the cron job specified in /etc/cron.d/0hourly.
```
# Run the hourly jobs
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
01 * * * * root run-parts /etc/cron.hourly
```
2. The cron job specified in /etc/cron.d/0hourly runs the run-parts program once per hour.
3. The run-parts program runs all the scripts located in the /etc/cron.hourly directory.
4. The /etc/cron.hourly directory contains the 0anacron script, which runs the anacron program using the /etc/anacrontab configuration file shown here.
```
# /etc/anacrontab: configuration file for anacron
# See anacron(8) and anacrontab(5) for details.
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22
#period in days delay in minutes job-identifier command
1 5 cron.daily nice run-parts /etc/cron.daily
7 25 cron.weekly nice run-parts /etc/cron.weekly
@monthly 45 cron.monthly nice run-parts /etc/cron.monthly
```
5. The anacron program runs the programs located in /etc/cron.daily once per day; it runs the jobs located in /etc/cron.weekly once per week, and the jobs in cron.monthly once per month. Note the specified delay times in each line that help prevent these jobs from overlapping themselves and other cron jobs.
Instead of placing complete Bash programs in the cron.X directories, I install them in the /usr/local/bin directory, which allows me to run them easily from the command line. Then I add a symlink in the appropriate cron directory, such as /etc/cron.daily.
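As a sketch, with a hypothetical script name:
```
# Keep the actual program in /usr/local/bin so it can also be run by hand
$ sudo cp mytask.sh /usr/local/bin/mytask.sh
$ sudo chmod +x /usr/local/bin/mytask.sh
# Symlink it into the daily cron directory; omit the extension, since run-parts may skip names containing dots
$ sudo ln -s /usr/local/bin/mytask.sh /etc/cron.daily/mytask
```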
The anacron program is not designed to run programs at specific times. Rather, it is intended to run programs at intervals that begin at the specified times, such as 3 a.m. (see the START_HOURS_RANGE line in the script just above) of each day, on Sunday (to begin the week), and on the first day of the month. If any one or more cycles are missed, anacron will run the missed jobs once, as soon as possible.
### More on setting limits
I use most of these methods for scheduling tasks to run on my computers. All those tasks are ones that need to run with root privileges. It's rare in my experience that regular users really need a cron job. One case was a developer user who needed a cron job to kick off a daily compile in a development lab.
It is important to restrict access to cron functions by non-root users. However, there are circumstances when a user needs to set a task to run at pre-specified times, and cron can allow them to do that. Many users do not understand how to properly configure these tasks using cron and they make mistakes. Those mistakes may be harmless, but, more often than not, they can cause problems. By setting functional policies that cause users to interact with the sysadmin, individual cron jobs are much less likely to interfere with other users and other system functions.
It is possible to set limits on the total resources that can be allocated to individual users or groups, but that is an article for another time.
For more information, the man pages for [cron][21], [crontab][22], [anacron][23], [anacrontab][24], and [run-parts][25] all have excellent information and descriptions of how the cron system works.
### Topics
[Linux][26][SysAdmin][27]
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/david-crop.jpg?itok=oePpOpyV)][28] David Both
-
David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM, where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for... [more about David Both][29][More about me][30]
* [Learn how you can contribute][9]
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/how-use-cron-linux
作者:[David Both ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[1]:https://sourceforge.net/projects/logwatch/files/
[2]:https://github.com/logrotate/logrotate
[3]:http://rkhunter.sourceforge.net/
[4]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[7]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[8]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[9]:https://opensource.com/participate
[10]:https://opensource.com/users/dboth
[11]:https://opensource.com/users/dboth
[12]:https://opensource.com/user/14106/feed
[13]:https://opensource.com/article/17/11/how-use-cron-linux?rate=9R7lrdQXsne44wxIh0Wu91ytYaxxi86zT1-uHo1a1IU
[14]:https://opensource.com/article/17/11/how-use-cron-linux#comments
[15]:https://www.flickr.com/photos/internetarchivebookimages/20570945848/in/photolist-xkMtw9-xA5zGL-tEQLWZ-wFwzFM-aNwxgn-aFdWBj-uyFKYv-7ZCCBU-obY1yX-UAPafA-otBzDF-ovdDo6-7doxUH-obYkeH-9XbHKV-8Zk4qi-apz7Ky-apz8Qu-8ZoaWG-orziEy-aNwxC6-od8NTv-apwpMr-8Zk4vn-UAP9Sb-otVa3R-apz6Cb-9EMPj6-eKfyEL-cv5mwu-otTtHk-7YjK1J-ovhxf6-otCg2K-8ZoaJf-UAPakL-8Zo8j7-8Zk74v-otp4Ls-8Zo8h7-i7xvpR-otSosT-9EMPja-8Zk6Zi-XHpSDB-hLkuF3-of24Gf-ouN1Gv-fJzkJS-icfbY9
[16]:https://creativecommons.org/licenses/by-sa/4.0/
[17]:https://en.wikipedia.org/wiki/Cron
[18]:http://spamassassin.apache.org/
[19]:https://github.com/sysstat/sysstat
[20]:https://en.wikipedia.org/wiki/Anacron
[21]:http://man7.org/linux/man-pages/man8/cron.8.html
[22]:http://man7.org/linux/man-pages/man5/crontab.5.html
[23]:http://man7.org/linux/man-pages/man8/anacron.8.html
[24]:http://man7.org/linux/man-pages/man5/anacrontab.5.html
[25]:http://manpages.ubuntu.com/manpages/zesty/man8/run-parts.8.html
[26]:https://opensource.com/tags/linux
[27]:https://opensource.com/tags/sysadmin
[28]:https://opensource.com/users/dboth
[29]:https://opensource.com/users/dboth
[30]:https://opensource.com/users/dboth


@ -0,0 +1,251 @@
24 Must Have Essential Linux Applications In 2017
======
Brief: What are the must-have applications for Linux? The answer is subjective, and it depends on the purposes for which you use your desktop Linux. But there are still some essential Linux apps that are more likely to be used by most Linux users. We have listed the best Linux applications that you should have installed on every Linux distribution you use.
In the world of Linux, everything is full of alternatives. Have to choose a distro? You have several dozen of them. Trying to find a decent music player? Alternatives are there too.
But not all of them are built with the same thing in mind: some of them might target minimalism while others might offer tons of features. Finding the right application for your needs can be quite a confusing and tiresome task. Let's make that a bit easier.
### Best free applications for Linux users
I'm putting together a list of essential free Linux applications I prefer to use, in different categories. I'm not saying that they are the best, but I have tried lots of applications in each category and finally liked the listed ones better. So, you are more than welcome to mention your favorite applications in the comment section.
We have also compiled a nice video of this list. Do subscribe to our YouTube channel for more such educational Linux videos:
### Web Browser
![Web Browsers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Web-Browser-1024x512.jpg)
#### [Google Chrome][12]
Google Chrome is a powerful and complete solution for a web browser. It comes with excellent syncing capabilities and offers a vast collection of extensions. If you are accustomed to the Google ecosystem, Google Chrome is for you without any doubt. If you prefer a more open source solution, you may want to try out [Chromium][13], which is the project Google Chrome is based on.
#### [Firefox][14]
If you are not a fan of Google Chrome, you can try out Firefox. It's been around for a long time and is a very stable and robust web browser.
#### [Vivaldi][15]
However, if you want something new and different, you can check out Vivaldi. Vivaldi takes a completely fresh approach to the web browser. It's from former team members of Opera and built on top of the Chromium project. It's lightweight and customizable. Though it is still quite new and missing some features, it feels amazingly refreshing and does a really decent job.
[Suggested read[Review] Otter Browser Brings Hope To Opera Lovers][40]
### Download Manager
![Download Managers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Download-Manager-1024x512.jpg)
#### [uGet][16]
uGet is the best download manager I have come across. It is open source and offers everything you can expect from a download manager. uGet offers advanced settings for managing downloads. It can queue and resume downloads, use multiple connections for downloading large files, download files to different directories according to categories and so on.
#### [XDM][17]
Xtreme Download Manager (XDM) is a powerful and open source tool developed with Java. It has all the basic features of a download manager, including video grabber, smart scheduler and browser integration.
[Suggested read4 Best Download Managers For Linux][41]
### BitTorrent Client
![BitTorrent Clients](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-BitTorrent-Client-1024x512.jpg)
#### [Deluge][18]
Deluge is an open source BitTorrent client. It has a beautiful user interface. If you are used to using uTorrent on Windows, Deluge's interface will feel familiar. It has various configuration options as well as plugin support for various tasks.
#### [Transmission][19]
Transmission takes the minimal approach. It is an open source BitTorrent client with a minimal user interface. Transmission comes pre-installed with many Linux distributions.
[Suggested readTop 5 Torrent Clients For Ubuntu Linux][42]
### Cloud Storage
![Cloud Storages](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Cloud-Storage-1024x512.jpg)
#### [Dropbox][20]
Dropbox is one of the most popular cloud storage services out there. It gives you 2GB of free storage to start with. Dropbox has a robust and straightforward Linux client.
#### [MEGA][21]
MEGA offers 50GB of free storage. But that is not the best thing about it. The best thing about MEGA is that it has end-to-end encryption support for your files. MEGA has a solid Linux client named MEGAsync.
[Suggested readBest Free Cloud Services For Linux in 2017][43]
### Communication
![Communication Apps](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Communication-1024x512.jpg)
#### [Pidgin][22]
Pidgin is an open source instant messenger client. It supports many chatting platforms, including Google Talk, Yahoo and even IRC. Pidgin is extensible through third-party plugins that can provide a lot of additional functionality.
You can also use [Franz][23] or [Rambox][24] to use several messaging services in one application.
#### [Skype][25]
We all know Skype, it is one of the most popular video chatting platforms. Recently it has [released a brand new desktop client][26] for Linux.
[Suggested read6 Best Messaging Apps Available For Linux In 2017][44]
### Office Suite
![Office Suites](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Office-Suite-1024x512.jpg)
#### [LibreOffice][27]
LibreOffice is the most actively developed open source office suite for Linux. It has six main modules: Writer, Calc, Impress, Draw, Math, and Base. And every one of them supports a wide range of file formats. LibreOffice also supports third-party extensions. It is the default office suite for many of the Linux distributions.
#### [WPS Office][28]
If you want to try out something other than LibreOffice, WPS Office might be your go-to. The WPS Office suite includes writer, presentation, and spreadsheet support.
[Suggested read6 Best Open Source Alternatives to Microsoft Office for Linux][45]
### Music Player
![Music Players](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Music-Player-1024x512.jpg)
#### [Lollypop][29]
This is a relatively new music player. Lollypop is open source and has a beautiful yet simple user interface. It offers a nice music organizer, scrobbling support, online radio, and a party mode. Though it is a simple music player without many advanced features, it is worth a try.
#### [Rhythmbox][30]
Rhythmbox is the music player mainly developed for the GNOME desktop environment, but it works on other desktop environments as well. It does all the basic tasks of a music player, including CD ripping & burning, scrobbling, etc. It also has support for iPods.
#### [cmus][31]
If you want minimalism and love your terminal window, cmus is for you. Personally, I'm a fan and user of this one. cmus is a small, fast and powerful console music player for Unix-like operating systems. It has all the basic music player features. And you can also extend its functionality with additional extensions and scripts.
[Suggested readHow To Install Tomahawk Player In Ubuntu 14.04 And Linux Mint 17][46]
### Video Player
![Video Player](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Video-Player-1024x512.jpg)
#### [VLC][32]
VLC is an open source media player. It is simple, fast, lightweight and really powerful. VLC can play almost any media format you can throw at it out of the box. It can also stream online media. It also has some nifty extensions for various tasks, like downloading subtitles right from the player.
#### [Kodi][33]
Kodi is a full-fledged media center. Kodi is open source and very popular among its user base. It can handle videos, music, pictures, podcasts and even games, from both local and network media storage. You can even record TV with it. The behavior of Kodi can be customized via add-ons and skins.
[Suggested read4 Format Factory Alternative In Linux][47]
### Photo Editor
![Photo Editors](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Photo-Editor-1024x512.jpg)
#### [GIMP][34]
GIMP is the Photoshop alternative for Linux. It is open source, full-featured, professional photo editing software. It is packed with a wide range of tools for manipulating images. And on top of that, there are various customization options and third-party plugins for enhancing the experience.
#### [Krita][35]
Krita is mainly a painting tool but serves as a photo editing application as well. It is open source and packed with lots of sophisticated and advanced tools.
[Suggested readBest Photo Applications For Linux][48]
### Text Editor
Every Linux distribution comes with their own solution for text editors. Generally, they are quite simple and without much functionality. But here are some text editors with enhanced capabilities.
![Text Editors](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Text-Editor-1024x512.jpg)
#### [Atom][36]
Atom is the modern, hackable text editor maintained by GitHub. It is completely open source and offers everything you could want from a text editor. You can use it right out of the box, or you can customize and tune it just the way you want. And it has a ton of extensions and themes from the community up for grabs.
#### [Sublime Text][37]
Sublime Text is one of the most popular text editors. Though it is not free, it allows you to use the software for evaluation without any time limit. Sublime Text is a feature-rich and sophisticated piece of software. And of course, it has plugins and themes support.
[Suggested read: 4 Best Modern Open Source Code Editors For Linux][49]
### Launcher
![Launchers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Launcher-1024x512.jpg)
#### [Albert][38]
Albert is inspired by Alfred (a productivity application for Mac, which is totally kickass, by the way) and is still in the development phase. Albert is fast, extensible, and customizable. Its goal is to "Access everything with virtually zero effort". It integrates nicely with your Linux distribution and helps you boost your productivity.
#### [Synapse][39]
Synapse has been around for years. It's a simple launcher that can search for and run applications. It can also speed up various workflows, such as controlling music; searching files, directories, and bookmarks; and running commands.
As Abhishek advised, we will keep this list of the best Linux software updated with our readers' (i.e., your) feedback. So, what are your favorite must-have Linux applications? Share with us, and do suggest more categories of software to add to this list.
--------------------------------------------------------------------------------
via: https://itsfoss.com/essential-linux-applications/
Author: [Munif Tanjim][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://itsfoss.com/author/munif/
[12]:https://www.google.com/chrome/browser
[13]:https://www.chromium.org/Home
[14]:https://www.mozilla.org/en-US/firefox
[15]:https://vivaldi.com
[16]:http://ugetdm.com/
[17]:http://xdman.sourceforge.net/
[18]:http://deluge-torrent.org/
[19]:https://transmissionbt.com/
[20]:https://www.dropbox.com
[21]:https://mega.nz/
[22]:https://www.pidgin.im/
[23]:https://itsfoss.com/franz-messaging-app/
[24]:http://rambox.pro/
[25]:https://www.skype.com
[26]:https://itsfoss.com/skpe-alpha-linux/
[27]:https://www.libreoffice.org
[28]:https://www.wps.com
[29]:http://gnumdk.github.io/lollypop-web/
[30]:https://wiki.gnome.org/Apps/Rhythmbox
[31]:https://cmus.github.io/
[32]:http://www.videolan.org
[33]:https://kodi.tv
[34]:https://www.gimp.org/
[35]:https://krita.org/en/
[36]:https://atom.io/
[37]:http://www.sublimetext.com/
[38]:https://github.com/ManuelSchneid3r/albert
[39]:https://launchpad.net/synapse-project
[40]:https://itsfoss.com/otter-browser-review/
[41]:https://itsfoss.com/4-best-download-managers-for-linux/
[42]:https://itsfoss.com/best-torrent-ubuntu/
[43]:https://itsfoss.com/cloud-services-linux/
[44]:https://itsfoss.com/best-messaging-apps-linux/
[45]:https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
[46]:https://itsfoss.com/install-tomahawk-ubuntu-1404-linux-mint-17/
[47]:https://itsfoss.com/format-factory-alternative-linux/
[48]:https://itsfoss.com/image-applications-ubuntu-linux/
[49]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/

View File

@ -0,0 +1,90 @@
translating by lujun9972
What Are Zombie Processes And How To Find & Kill Zombie Processes?
======
[![What Are Zombie Processes And How To Find & Kill Zombie Processes?](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-are-the-zombie-processes_orig.jpg)][1]
If you are a regular Linux user, you must have encountered the term `zombie process`. So what are zombie processes? How do they get created? Are they harmful to the system? And how do you kill them? Keep reading for the answers to all these questions.
### What are Zombie Processes?
We all know how processes work. We launch a program, it performs its task, and once the task is over, we end the process. Once a process has ended, it has to be removed from the process table.
You can see the current processes in the System Monitor.
[![Check zombie processes in Linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-check-zombie-processes_orig.jpg)][2]
But sometimes some of these processes stay in the process table even after they have completed execution.
These processes, which have finished executing but still exist in the process table, are called zombie processes.
### And how exactly do they get created?
Whenever we run a program, it creates a parent process and possibly a number of child processes. All of these child processes use resources such as memory and CPU allocated to them by the kernel.
Once a child process has finished executing, it exits and dies. Its exit status has to be read by the parent process, which should then call wait() to collect it so that the child can be removed from the process table.
If the parent correctly reads the exit status sent by the child process, the child is removed from the process table.
But if the parent fails to do so, the child process, which has already finished executing and is now dead, is not removed from the process table.
### Are zombie processes harmful to the system?
**No.**
Since a zombie process is not doing any work, not using any resources, and not affecting any other process, there is no harm in having one. But since the exit status and other process information are stored in RAM, having too many zombie processes can sometimes become an issue.
**_Imagine it like this:_**
_"You are the owner of a construction company. You pay daily wages to your workers depending upon how they work. A worker comes to the construction site every day and just sits there; you don't have to pay him, and he doesn't do any work. He just comes every day and sits, that's it!"_
Such a worker is the living example of a zombie process.
**But,**
if you have a lot of zombie workers, your construction site will get crowded, and it might get difficult for the people who are actually working.
### So how do you find zombie processes?
Fire up a terminal and type the following command:
```
ps aux | grep Z
```
You will now get the details of all zombie processes in the process table.
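If you would like a harmless zombie to experiment with, here is a minimal sketch. It relies on `exec` replacing the shell with a `sleep` that never calls wait(), so its already-exited child stays in the process table:
```
# Create a test zombie: `sleep 1` exits after a second, but its parent
# (the shell replaced itself with `sleep 60` via exec) never reaps it.
sh -c 'sleep 1 & exec sleep 60' &
sleep 2
ps aux | grep Z
```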
### How do you kill zombie processes?
Normally we kill processes with the SIGKILL signal, but zombie processes are already dead. You cannot kill something that is already dead. So what you do instead is run this command:
```
kill -s SIGCHLD pid
```
Replace `pid` with the ID of the parent process, so that the parent process removes all of its dead, completed child processes.
**_Imagine it like this:_**
_"You find a dead body in the middle of the road, so you call the dead body's family, and they take the body away from the road."_
But a lot of programs are not programmed well enough to remove these child zombies, because if they were, you wouldn't have the zombies in the first place. So the only thing guaranteed to remove child zombies is killing the parent.
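A short sketch of that workflow (the PIDs below are made up):
```
ps -o ppid= -p 1234      # print the parent PID of zombie 1234
kill -s SIGCHLD 5678     # ask that parent (5678) to reap its children
kill 5678                # last resort: once the parent dies, the zombie is
                         # adopted and immediately reaped by init/systemd
```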
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/what-are-zombie-processes-and-how-to-find-kill-zombie-processes
Author: [linuxandubuntu][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.linuxandubuntu.com
[1]:http://www.linuxandubuntu.com/home/what-are-zombie-processes-and-how-to-find-kill-zombie-processes
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-check-zombie-processes_orig.jpg

View File

@ -0,0 +1,132 @@
More Unknown Linux Commands
============================================================
![unknown Linux commands](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/outer-limits-of-linux.jpg?itok=5L5xfj2v "unknown Linux commands")
> Explore some lesser-known yet powerful tools in Linux with Carla Schroder. [CC Zero][2] Pixabay
This article covers a few fun, lesser-known tools: `termsaver`, `pv`, and `calendar`. `termsaver` is an ASCII screensaver for the terminal, and `pv` measures data throughput and simulates typing. Debian's `calendar` ships with a number of different calendars, and you can also create your own.
![Linux commands](https://www.linux.com/sites/lcom/files/styles/floated_images/public/linux-commands-fig-1.png?itok=HveXXLLK "Linux commands")
*Figure 1: The Star Wars screensaver. [Used with permission][1]*
### Terminal screensavers
Why should graphical desktops have all the fun screensavers? Install `termsaver` and you can enjoy ASCII screensavers such as matrix (the falling-code screensaver made famous by The Matrix), clock, starwars, and a couple of not-safe-for-work ones. The NSFW screensavers will clear a room in no time.
`termsaver` can be installed directly from the Debian/Ubuntu package manager; if your distribution does not package it (CentOS, for example), you can download it from [termsaver.brunobraga.net][7] and follow the installation instructions.
Run `termsaver -h` to see the list of screensavers:
```
randtxt displays word in random places on screen
starwars runs the asciimation Star Wars movie
urlfetcher displays url contents with typing animation
quotes4all displays recent quotes from quotes4all.net
rssfeed displays rss feed information
matrix displays a matrix movie alike screensaver
clock displays a digital clock on screen
rfc randomly displays RFC contents
jokes4all displays recent jokes from jokes4all.net (NSFW)
asciiartfarts displays ascii images from asciiartfarts.com (NSFW)
programmer displays source code in typing animation
sysmon displays a graphical system monitor
```
You can run a screensaver with `termsaver [name]`, for example `termsaver matrix`, and press `Ctrl+c` to stop it. You can also get information on a particular screensaver by running `termsaver [name] -h`. Figure 1 is from the `starwars` screensaver, which plays the venerable and beloved [Asciimation Wars][8].
The NSFW screensavers fetch their content from online sources. I am not fond of them, but the good news is that, since `termsaver` is a collection of Python scripts, it is easy to hook them up to any RSS feed you like.
### pv
The `pv` command is a fun little tool with a practical side: it monitors the progress of data being copied, for example when you run `rsync` or create a `tar` archive. When you run `pv` without options, the defaults are:
* -p: progress
* -t: timer, the total elapsed time so far
* -e: ETA, which is often inaccurate because `pv` frequently does not know the size of the data being moved
* -r: rate counter, i.e. throughput
* -b: byte counter
An `rsync` transfer looks like this:
```
$ rsync -av /home/carla/ /media/carla/backup/ | pv
sending incremental file list
[...]
103GiB 0:02:48 [ 615MiB/s] [ <=>
```
Creating a tar archive looks like the following example:
```
$ tar -czf - /file/path| (pv > backup.tgz)
885MiB 0:00:30 [28.6MiB/s] [ <=>
```
`pv` can also watch a process's activity, such as a web browser's; it is surprising how much activity it generates:
```
$ pv -d 3095
58:/home/carla/.pki/nssdb/key4.db: 0 B 0:00:33
[ 0 B/s] [<=> ]
78:/home/carla/.config/chromium/Default/Visited Links:
256KiB 0:00:33 [ 0 B/s] [<=> ]
]
85:/home/carla/.con...romium/Default/data_reduction_proxy_leveldb/LOG:
298 B 0:00:33 [ 0 B/s] [<=> ]
```
Somewhere online I stumbled across the most entertaining use of `pv`: using it to echo back typed input:
```
$ echo "typing random stuff to pipe through pv" | pv -qL 8
typing random stuff to pipe through pv
```
A plain `echo` command prints the whole line at once. Piping it through `pv` makes the text appear as though it were being retyped. I have no idea whether this has any practical value, but I like it a lot. The `-L` option controls the replay speed, in bytes per second.
`pv` is a very old and very fun command that has accumulated many options over the years, including interesting formatting options, multiple output options, and transfer-rate modifiers. Run `man pv` to see all of them.
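Two more option sketches worth knowing (the file names here are made up):
```
pv -L 1m big.iso > /mnt/usb/big.iso    # throttle the copy to 1 MiB/s
pv -s 500m dump.sql | gzip > d.sql.gz  # tell pv the size so the ETA is real
```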
### /usr/bin/calendar
You can learn a lot by browsing `/usr/bin` and the other command directories, and by reading man pages. `/usr/bin/calendar` on Debian/Ubuntu is a variant of the BSD calendar, minus the moon and sun phases. It retains multiple calendars, including `calendar.computer`, `calendar.discordian`, `calendar.music`, and `calendar.lotr`. On my system, the man page lists the different calendars available in `/usr/bin/calendar`. This example displays the Lord of the Rings calendar for the next 60 days:
```
$ calendar -f /usr/share/calendar/calendar.lotr -A 60
Apr 17 An unexpected party
Apr 23 Crowning of King Ellesar
May 19 Arwen leaves Lorian to wed King Ellesar
Jun 11 Sauron attacks Osgilliath
```
These calendars are plain text files, so you can easily create your own. The easiest way is to copy the format of the existing calendar files. Run `man calendar` for detailed instructions on creating a personal calendar file.
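For example, a tiny personal calendar might look like this; the entries and path are made up, and the format is copied from the shipped calendar files:
```
mkdir -p ~/.calendar
cat > ~/.calendar/calendar <<'EOF'
01/15   Pay the server bill
06/11   Annual backup-restore drill
EOF
calendar -f ~/.calendar/calendar -A 365
```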
And that, too quickly, brings us to the end. Take some time to cruise your own filesystem and dig up more interesting commands.
_You can learn more about Linux through the free ["Introduction to Linux"][5] course from The Linux Foundation and edX._
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/4/more-unknown-linux-commands
Author: [Carla Schroder][a]
Translator: [ucasFL](https://github.com/ucasFL)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/linux-commands-fig-1png
[4]:https://www.linux.com/files/images/outer-limits-linuxjpg
[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[6]:https://www.addtoany.com/share#url=https%3A%2F%2Fwww.linux.com%2Flearn%2Fintro-to-linux%2F2017%2F4%2Fmore-unknown-linux-commands&amp;amp;amp;title=More%20Unknown%20Linux%20Commands
[7]:http://termsaver.brunobraga.net/
[8]:http://www.asciimation.co.nz/

View File

@ -0,0 +1,202 @@
# Containing System Services in Red Hat Enterprise Linux - Part 1
At the 2017 Red Hat Summit, several people asked me, "We normally use full virtual machines to isolate network services such as DNS and DHCP; can we use containers instead?" The answer is yes, and here is an example of creating a system container on a current Red Hat Enterprise Linux 7 system.
## **Our goal**
### *Create a network service that can be updated independently of any other services on the system, yet can be managed and updated easily from the host.*
Let's explore setting up a BIND server running under systemd in a container. In this part, we will look at building our container, as well as managing the BIND configuration and data files.
In Part 2, we will look at how systemd on the host integrates with systemd in the container. We will explore managing the service in the container and making it act as a service of the host.
## **Creating the BIND container**
To get systemd working easily inside a container, we first need to add two packages on the host: `oci-register-machine` and `oci-systemd-hook`. The `oci-systemd-hook` hook allows us to run systemd in a container without needing a privileged container or manually configuring tmpfs and cgroups. The `oci-register-machine` hook allows us to keep track of the container with the systemd tools like `systemctl` and `machinectl`.
```
[root@rhel7-host ~]# yum install oci-register-machine oci-systemd-hook
```
Back to creating our BIND container. The [Red Hat Enterprise Linux 7 base image](https://access.redhat.com/containers) includes systemd as an init system. We can install and enable BIND the same way we would on a typical system. You can [download this Dockerfile from the git repository](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo) in the Resources section.
```
[root@rhel7-host bind]# vi Dockerfile
# Dockerfile for BIND
FROM registry.access.redhat.com/rhel7/rhel
ENV container docker
RUN yum -y install bind && \
yum clean all && \
systemctl enable named
STOPSIGNAL SIGRTMIN+3
EXPOSE 53
EXPOSE 53/udp
CMD [ "/sbin/init" ]
```
Because we are starting an init system as PID 1, we need to change the signal that the docker CLI sends when we tell the container to stop. From the `kill` system call man page (man 2 kill):
```
The only signals that can be sent to process ID 1, the init
process, are those for which init has explicitly installed
signal handlers. This is done to assure the system is not
brought down accidentally.
```
For the systemd signal handlers, `SIGRTMIN+3` is the signal that corresponds to `systemd start halt.target`. We also expose both the TCP and UDP ports for BIND, since both protocols may be in use.
## **Managing data**
With a functional BIND service, we need a way to manage the configuration and zone files. Currently those live inside the container, so we could enter the container any time to update the configs or change a zone file. From a management perspective, this is not ideal. When we need to update BIND, we will have to rebuild the container, so changes made inside the image would be lost. And any time we need to update a file or restart the service, we have to enter the container, which adds steps and time.
Instead, we will extract the configuration and data files from the container, copy them to the host, and mount them at run time. This way we can easily restart or rebuild the container without losing our changes. We can also modify the configs and zones using editors outside of the container. Since this container data looks like "site-specific data served by this system service", let's follow the File System Hierarchy and create `/srv/named` on the current host to maintain administrative separation.
```
[root@rhel7-host ~]# mkdir -p /srv/named/etc
[root@rhel7-host ~]# mkdir -p /srv/named/var/named
```
***Tip: If you are migrating an existing configuration, you can skip the step below and copy it directly to the `/srv/named` directory. You may still want to check the GID assigned to the container with a temporary container.***
Let's build and run a temporary container to examine BIND. With an init process as PID 1, we cannot run the container interactively to get a shell. We will exec into it after it launches, and check for important files with `rpm`.
```
[root@rhel7-host ~]# docker build -t named .
[root@rhel7-host ~]# docker exec -it $( docker run -d named ) /bin/bash
[root@0e77ce00405e /]# rpm -ql bind
```
For this example, we will need `/etc/named.conf` and everything under `/var/named/`. We can extract them with `machinectl`. If there is more than one container registered, the `machinectl status` command will show what is running on any machine. Once we have the configs, we can stop the temporary container.
*There is also a [sample `named.conf` and zone files for `example.com`](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo) in the repository, if you prefer.*
```
[root@rhel7-host bind]# machinectl list
MACHINE CLASS SERVICE
8824c90294d5a36d396c8ab35167937f container docker
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /etc/named.conf /srv/named/etc/named.conf
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /var/named /srv/named/var/named
[root@rhel7-host ~]# docker stop infallible_wescoff
```
## **The final creation**
To create and run the final container, add the volume options for the mounts:
- map the file `/srv/named/etc/named.conf` to `/etc/named.conf`
- map the directory `/srv/named/var/named` to `/var/named`
Since this is our final container, we will also give it a meaningful name that we can refer to later.
```
[root@rhel7-host ~]# docker run -d -p 53:53 -p 53:53/udp -v /srv/named/etc/named.conf:/etc/named.conf:Z -v /srv/named/var/named:/var/named:Z --name named-container named
```
With the final container running, we can modify the local configs to change the behavior of BIND in the container. The BIND server will need to listen on any IP the container is assigned. Make sure the GID of any new file matches the rest of the BIND files from the container.
```
[root@rhel7-host bind]# cp named.conf /srv/named/etc/named.conf
[root@rhel7-host ~]# cp example.com.zone /srv/named/var/named/example.com.zone
[root@rhel7-host ~]# cp example.com.rr.zone /srv/named/var/named/example.com.rr.zone
```
> [Curious why I didn't need to change the SELinux context on the host directories?](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_1)
We will reload the configuration using the `rndc` binary provided by the container, and we can check the BIND logs with `journald` in the same fashion. If you run into errors, you can edit the file on the host and reload the configuration. Using `host` or `dig` on the host, we can check the responses from the contained service for example.com.
```
[root@rhel7-host ~]# docker exec -it named-container rndc reload
server reload successful
[root@rhel7-host ~]# docker exec -it named-container journalctl -u named -n
-- Logs begin at Fri 2017-05-12 19:15:18 UTC, end at Fri 2017-05-12 19:29:17 UTC. --
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 9.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: A.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: B.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 8.B.D.0.1.0.0.2.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: reloading configuration succeeded
May 12 19:29:17 ac1752c314a7 named[27]: reloading zones succeeded
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: loaded serial 2001062601
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: sending notifies (serial 2001062601)
May 12 19:29:17 ac1752c314a7 named[27]: all zones loaded
May 12 19:29:17 ac1752c314a7 named[27]: running
[root@rhel7-host bind]# host www.example.com localhost
Using domain server:
Name: localhost
Address: ::1#53
Aliases:
www.example.com is an alias for server1.example.com.
server1.example.com is an alias for mail
```
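You could equally verify with dig; a quick sketch using the sample zone:
```
dig @localhost www.example.com +short
```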
> [Zone file not updating? It might be your editor, not the serial number.](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_2)
## **The finish line**
We have achieved what we set out to do: serving DNS requests and zones from a container. We have a persistent location to manage updates and configs, and we can update those configs without disrupting the service.
In Part 2 of this series, we will see how to treat the container as a normal service on the host.
---
[Follow the RHEL Blog](http://redhatstackblog.wordpress.com/feed/) via email to receive updates on Part 2 of this series and other new posts.
---
## **Additional resources**
**GitHub repository with the accompanying files:** [**https://github.com/nzwulfin/named-container**](https://github.com/nzwulfin/named-container)
**Sidebar 1:** **SELinux context for local files accessed from a container**
You may have noticed that when I copied the files from the container to the host, I did not run `chcon` to change the host files to type `svirt_sandbox_file_t`. Why didn't it break? Copying a file into `/srv` should have labeled it with type `var_t`. Did I `setenforce 0`?
Of course not; that would make Dan Walsh weep (he leads SELinux work at Red Hat). So did `machinectl` set the label type to what the container needs? See for yourself:
Before starting the container:
```
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
-rw-r-----. unconfined_u:object_r:var_t:s0 /srv/named/etc/named.conf
```
No: what happened is that I used a volume option for the mount that keeps Dan Walsh happy: `:Z`. This part of the command, `-v /srv/named/etc/named.conf:/etc/named.conf:Z`, does two things: first, it says the mount should be relabeled with a private volume SELinux label, and second, it says the mount should be read/write.
After starting the container:
```
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
-rw-r-----. root 25 system_u:object_r:svirt_sandbox_file_t:s0:c821,c956 /srv/named/etc/named.conf
```
**Sidebar 2:** **VIM backup behavior can change the inode**
If you use `vim` on the host to edit the configuration files and you do not see your changes take effect in the container, you may have inadvertently created a new file that the container cannot see. Three `vim` settings affect backup copies during editing: backup, writebackup, and backupcopy.
I snipped the defaults that apply to RHEL 7 from the official VIM backup-table documentation
[[http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table](http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table)]:
```
backup writebackup
off on backup current file, deleted afterwards (default)
```
So we don't create tilde copies that stick around, but we are creating backups. The other setting is backupcopy, where `auto` is the shipped default:
```
"yes" make a copy of the file and overwrite the original one
"no" rename the file and write a new one
"auto" one of the previous, what works best
```
Taken together, these settings mean that when you edit a file, unless `vim` sees a reason not to (it checks the file's setup), you end up with a new file that contains your edits and is renamed over the original file when you save. That means the file gets a new inode. For most situations this is not a problem, but here the bind mount into the container is sensitive to inode changes. To solve this, you need to change the backupcopy behavior.
Either in the `vim` session or in your `.vimrc`, add `set backupcopy=yes`. This ensures the original file gets truncated and overwritten, preserving the inode and propagating the changes into the container.
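One way to make the setting permanent from the shell (assuming your user's vimrc):
```
echo 'set backupcopy=yes' >> ~/.vimrc
```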
------------
via: http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
Author: [Matt Micene][a]
Translator: [liuxinyu123](https://github.com/liuxinyu123)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

View File

@ -0,0 +1,197 @@
How to answer questions in a helpful way
=============================
If a coworker asks you a somewhat unclear question, how do you answer? I think asking questions is a skill (see [How to Ask Good Questions][1]), and answering questions in a helpful way is a skill, too. Both are very useful.
To begin with: sometimes the people asking you questions don't respect your time, and that stinks.
Ideally, we will assume the person asking is a reasonable human being who is trying their best to solve a problem, and that you want to help them. The people I work with are like that, and that's the world I live in. Of course, real life is not always that way.
Here are some approaches that help when answering questions!
### If their question is unclear, help them clarify
Beginners often don't ask very clear questions, or ask questions that are missing information needed to answer them. You can try these strategies to clarify:
* **Restate the question as a more specific one** and reply with it ("Are you asking X?")
* **Ask them for more specific information** they didn't provide ("Are you using IPv6?")
* **Ask what prompted their question.** For example, sometimes people come into my team's channel asking how our service discovery works. Usually this is because they are trying to set up or reconfigure a service. In that case it helps to ask, "Which service are you working on? Can you show me the pull request you're working on?"
A lot of these strategies come from [How to Ask Good Questions][2]. (Though I would never say to someone, "Oh, you need to go read 'How to Ask Good Questions' before asking me anything.")
### Figure out what they already know
Before answering a question, it is very useful to know what the person already knows!
Harold Treen gave me a great example of this:
> The other day, someone asked me to explain Redux-Sagas. Rather than dive straight into an explanation like "they're like worker threads that listen for actions and let you update the Redux store," ...
> ... I started by finding out what they knew about Redux, actions, the store, and the other basic concepts. Connecting all those concepts into an explanation became much easier from there.
Figuring out what the asker already knows matters because sometimes they may be confused about fundamentals ("What even is Redux?"), or they may be experts who happen to be hitting a subtle corner case. An answer built on concepts they don't know will confuse them, while an answer that rehashes what they already know is tedious.
A useful trick here is asking "How much do you know about X?" rather than "Do you know about X?"
### Point them to the documentation
"RTFM" ("Read The Fucking Manual") is the classic unhelpful answer, but pointing someone to a specific piece of documentation can actually be really helpful! When I ask a question, I am delighted to be pointed at documentation that actually answers it, because it will likely also answer other questions I have.
I think it is important to make sure the documentation you give actually answers the question, or at least to check it first and confirm it helps. Otherwise, the conversation can end like this (all too common):
* Ali: How do I do X?
* Jada: <link to documentation>
* Ali: That doesn't actually explain how to do X, it only explains how to do Y!
If the documentation I am pointing someone at is very long, I will point out the specific section I am referring to. The [bash man page][3] is 44,000 words long (really!), so just saying "it's in the bash man page" is not helpful :)
### Tell them a useful search
At work I often find that I can answer my own question by searching for the right keywords. For beginners, those keywords are often not so obvious. So saying, "Here's the search I used to find the answer," can be useful. Again, check the search first to make sure it actually turns up the answer they need :)
### Write new documentation
People often ask my team the same questions over and over. This is obviously not their fault: how could they know that ten people have already asked the question and what the answer was? So instead of answering directly, we try to write new documentation:
1. Immediately write the new documentation
2. Send them the documentation we just wrote
3. Announce it publicly
Writing docs sometimes takes a lot longer than just answering the question, but it is often worth it. Writing documentation is especially worthwhile if:
a. The question is being asked over and over
b. The answer does not change much over time (if the answer changes every week or month, the docs go stale and become frustrating)
### Explain what you did
This kind of exchange is really frustrating when you are a beginner in a topic:
* New person: "Hi! How do you do X?"
* More experienced person: "I did it, and it's all fixed."
* New person: "...but what did you DO?!"
If the person asking wants to learn how things work, it helps to:
* Walk them through doing the task rather than doing it yourself
* Tell them how you arrived at the answer you gave them.
This may take longer than just doing it yourself, but it is a learning opportunity for the person who asked, and it leaves them better equipped to solve such problems in the future.
Then you can have better conversations, like this one:
* New person: "The site is giving errors, what's going on?"
* More experienced person: (2 minutes later) "Oh, that's because there was a database failover."
* New person: "How did you know that??!?!?"
* More experienced person: "Here's what I did!"
1. Often these errors happen because server Y is down. I looked at `$PLACE`, but it said server Y was up, so it wasn't that.
2. Then I looked at dashboard X, and this section of the dashboard showed there was a database failover.
3. Then I found the relevant server in the logs, and it showed database connection errors, so that looked like it.
If you are explaining how you debugged a problem, explain how you found out what the problem was and how you tracked it down. Even though it might seem like you already have the right answer, it feels even better to help them improve their learning and diagnostic skills and to know which resources are available.
### Solve the underlying problem
This one is a little tricky. Sometimes people think they have found the right path to a solution and just need one more piece of information to get there. But they might not be on the right path at all! For example:
* George: "I'm getting error X when doing Y; how do I fix it?"
* Jasminda: "Are you actually trying to do Z? If so, you shouldn't be doing Y; you should do W instead."
* George: "Oh, you're right! Thank you, I will do W instead."
Jasminda did not answer George's question at all! Instead she guessed that George did not really want to be doing Y, and she was right. That is very useful!
It is possible to come across as condescending when you do this, like:
* George: "I'm getting error X when doing Y; how do I fix it?"
* Jasminda: "Don't do that. If you want to do Z, you should do W instead."
* George: "Well, I'm not trying to do Z. I actually want to do Y, for REASONS. So how do I do Y?"
So don't be condescending, and remember that some askers may have strayed far from the underlying problem. It is reasonable to answer both the question they asked and the question they should have asked: "Well, if you want to do Y, here's how, but if you are trying to solve Z with it, you might do better this other way, and here's why."
### Ask "Did that answer your question?"
I always like to check in after answering a question: "Did that answer your question? Do you have more questions?" It is good to wait a moment after asking this, because people often need a minute or two to know whether they have their answer.
I have found this extra step of asking "Did that answer your question?" especially useful after writing documentation. Often, when writing docs about something I know well, I leave out something important without realizing it.
### Offer to pair program or talk face to face
I work remotely, so many of my conversations are text-based. I think of that as the default mode of communication.
Today, we live in a world where starting a quick video call or screen share is easy! At work, I can click a button at any time and quickly be in a video conversation or a screen-sharing session with someone.
For example, someone recently asked about capacity planning for autoscaling their service. I told them we had a few things to clean up, but I wasn't quite sure what they were trying to do. After a quick video chat, five minutes later we had solved their problem.
I think that, especially when someone is really stuck on how to get started on a task, pairing over video for a few minutes is often far more effective than email or instant messaging.
### Don't act surprised
This one comes from a rule at the Recurse Center: [no feigning surprise][4]. Here is a common scenario:
* Person 1: "What is the Linux kernel?"
* Person 2: "You don't know what the LINUX KERNEL is?!!!!?!!!????"
Person 2's display of surprise (whether or not they are actually surprised) is not helpful. It mostly just makes Person 1 feel bad for not knowing what the Linux kernel is.
I have worked on not acting surprised, even when I actually am a little surprised someone doesn't know a thing, and on being welcoming about it.
### Answering questions well is awesome
Obviously not all of these approaches fit every situation, but hopefully you will find some of them helpful! I find that taking the time to answer questions and teach people can be really rewarding.
Special thanks to Josh Triplett for suggesting this post and making many helpful additions, and to Harold Treen, Vaibhav Sagar, Peter Bhat Harkins, Wesley Aptekar-Cassels, and Paul Gowder for reading and commenting.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/answer-questions-well/
Author: [Julia Evans][a]
Translator: [HardworkFish](https://github.com/HardworkFish)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://jvns.ca/about
[1]:https://jvns.ca/blog/good-questions/
[2]:https://jvns.ca/blog/good-questions/
[3]:https://linux.die.net/man/1/bash
[4]:https://jvns.ca/blog/2017/04/27/no-feigning-surprise/

View File

@ -0,0 +1,150 @@
Why Kubernetes is cool
============================================================
When I first started learning about Kubernetes (about a year and a half ago?), I genuinely did not understand why I should care about it.
After I had been working full time with Kubernetes for about three months, I developed some opinions about why I should consider using it. (I am still very far from being a Kubernetes expert!) Hopefully this post will help you understand what Kubernetes can do!
I will try to explain some of the reasons I find Kubernetes interesting without using the terms "cloud native", "orchestration", "container", or any Kubernetes-specific terminology :). I am going to explain mostly from the viewpoint of a Kubernetes operator / infrastructure engineer, since my job right now is to set up Kubernetes and make it work well.
I am not going to try to address the question "Should you use Kubernetes for your production systems?" at all. That is a very complicated question. (Not least because "production" always has different requirements depending on what you are doing.)
### Kubernetes lets you run code in production without setting up new servers
The first pitch I got for Kubernetes was the following conversation with my partner Kamal.
It went roughly like this:
* Kamal: With Kubernetes, you can set up a new service with a single command.
* Julia: I don't see how that is possible.
* Kamal: Like, you just write one configuration file, apply it, and then you have an HTTP service running in production.
* Julia: But today I need to create a new AWS instance, write a Puppet manifest, set up service discovery, configure the load balancers, configure our deployment software, and make sure DNS is working. It takes at least four hours if nothing goes wrong.
* Kamal: Yeah, with Kubernetes you don't have to do any of that. You can set up a new HTTP service in five minutes, and it will just automatically run. As long as there is spare capacity in the cluster, it will just work!
* Julia: There must be a trap.
There is a sort of trap: setting up a production Kubernetes cluster is (in my experience) really not easy. (See [Kubernetes The Hard Way][3] for the many things involved in getting started.) But we are not going to go into that deeply right now.
So the first cool thing about Kubernetes is that it may make life easier for people who want to deploy new software into production. That is cool, and it is actually true: once you are working with a Kubernetes cluster, you really can set up a production HTTP service (run the application, set up a load balancer, give it a DNS name, and so on) with just one configuration file, in about five minutes. It is really fun to see.
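For flavor, here is roughly what such a configuration can look like. This is a minimal sketch with made-up names, not a production manifest, and API versions vary by cluster version:
```
# One file, one command: two replicas of an HTTP service.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-http
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-http
  template:
    metadata:
      labels:
        app: hello-http
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
EOF
```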
### Kubernetes gives you visibility into, and control over, the code running in production
In my opinion, you probably cannot understand Kubernetes without first understanding etcd. So let's talk about etcd!
Imagine I asked you right now, "Tell me every application you have running in production, which host it runs on, whether it is healthy, and whether it has a DNS name assigned." I could not answer that from one place; I would need to go query a lot of different systems, and it would take quite a while to piece together. I certainly could not answer it by querying a single API.
In Kubernetes, all the state of your cluster (the applications running ("pods"), the nodes, the DNS names, the cron jobs, and so on) is stored in a single database (etcd). Every Kubernetes component is stateless and basically works by:
* reading state from etcd (e.g. "the list of pods assigned to node 1")
* making changes (e.g. "actually start running pod A on node 1")
* updating the state in etcd (e.g. "set the state of pod A to 'running'")
This means that if you want to answer a question like "how many pods running nginx are in that availability zone?", you can answer it by querying a single unified API (the Kubernetes API). And you have exactly the same API access as every other Kubernetes component.
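A couple of hypothetical queries against that single API:
```
# All pods labelled app=nginx, across namespaces, with the node they run on:
kubectl get pods --all-namespaces -l app=nginx -o wide
# The same state, straight from the API as JSON:
kubectl get pods -l app=nginx -o json
```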
It also means it is easy to manage everything running in Kubernetes. If you want to, you can:
* implement a complex custom rollout strategy for deployments (deploy 1 thing, wait 2 minutes, deploy 5 more, wait 3.7 minutes, and so on)
* automatically [start a new web server][1] for every branch pushed to GitHub
* monitor all of your running applications to make sure every one of them has a reasonable memory limit set
All you have to do is write a program ("controller") that talks to the Kubernetes API.
Another exciting thing about the Kubernetes API is that you are not limited to the functionality Kubernetes itself provides! If you have your own ideas about how to deploy/create/monitor software, you can write code against the Kubernetes API to make them happen! It lets you do anything you want.
### If every Kubernetes component dies, your code keeps running
One thing I had been promised (by various blog posts :)) about Kubernetes was, "If the Kubernetes API server and the other components die, your code keeps running." In theory this is the second cool thing, but I wasn't sure whether it was actually true.
So far, it seems to be true!
I have taken down a running etcd, and what happened was:
1. All the code kept running
2. Nothing _new_ could happen (you cannot deploy new code or make changes; cron jobs stop working)
3. When it came back, the cluster caught up on whatever it had missed
It does mean, though, that if etcd goes down and one of your applications crashes or something, it cannot come back up until etcd returns.
### Kubernetes' design is resilient to bugs
Like any software, Kubernetes has bugs. So far, for example, our cluster's controller manager has a memory leak, and the scheduler crashes pretty regularly. Bugs are certainly not good, but I have found that Kubernetes' design helps mitigate a lot of the bugs in its core components.
If you restart any component, what happens is:
* it reads all the state relevant to it from etcd
* it starts doing whatever it thinks it needs to do based on that state (scheduling pods, garbage-collecting completed pods, scheduling cron jobs, deploying things, and so on)
Because all the components keep no state in memory, you can restart them at any time, and that helps mitigate all kinds of bugs.
For example, say there is a memory leak in your controller manager. Because the controller manager is stateless, you can just restart it periodically, every hour, or whenever you feel an inconsistency may have crept in. Or, we hit a bug where the scheduler would sometimes just forget about pods and never schedule them. You can mitigate that by restarting the scheduler every 10 minutes. (We did not do that; we fixed the bug instead, but you _could_ :))
So I feel that, even when its core components have bugs, I can trust Kubernetes' design to help keep the cluster's state consistent. And, in general, the software will improve over time. The only stateful thing you have to operate is etcd.
And, not to harp on the "state" thing too much, but: I think it is pretty cool that the only thing you need a backup/restore plan for in Kubernetes is etcd (unless you use persistent volumes for your pods). I think it makes Kubernetes operations easier to reason about.
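A sketch of what that backup plan can look like with etcd v3's built-in snapshots (the endpoint and paths here are assumptions):
```
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  snapshot save /var/backups/etcd-snapshot.db
# and later, to restore:
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-snapshot.db
```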
### Implementing new distributed systems on top of Kubernetes is relatively easy
Suppose you want to implement a distributed cron job scheduling system! Doing that from scratch is a ton of work. But implementing a distributed cron job scheduling system inside Kubernetes is much easier! (It is still a distributed system, which is not trivial.)
The first time I read the code of Kubernetes' cronjob controller, I was really delighted by how simple it was. Here, go read it; the main logic is about 400 lines. Go ahead, read it! => [cronjob_controller.go][4] <=
Essentially, what the cronjob controller does is:
* Every 10 seconds:
 * list all the cronjobs that exist
 * check whether any need to run right now
 * if so, create a new Job object to be scheduled and actually run by other Kubernetes controllers
 * clean up finished jobs
 * repeat
The Kubernetes model is pretty constrained (it has resource schemas defined in etcd, and controllers read those resources and update etcd), and I think having this relatively opinionated/constrained model makes it easier to develop your own distributed systems within the Kubernetes framework.
Kamal introduced me to the idea of "Kubernetes is a great platform for writing your own distributed systems" rather than just "Kubernetes is a distributed system you can use", and I think it is really interesting. He has a prototype of a [system to run an HTTP service for every branch you push to GitHub][5]. It took him a weekend and is about 800 lines, which I thought was impressive!
### Kubernetes lets you do some amazing things (but it isn't easy)
I started out by saying, "Kubernetes lets you do these magical things; you can spin up so much infrastructure with just one configuration file; it's amazing." And that is true!
What I mean by "Kubernetes isn't easy" is that Kubernetes has a lot of moving parts, and learning how to successfully operate a highly available Kubernetes cluster is a lot of work. I find that it gives me a lot of abstractions that I need to understand in order to debug issues and configure things properly. I love learning new things, so this does not make me angry or frustrated; I just think it is important to understand :)
One concrete example of "I can't just rely on the abstractions" is that I have been working to learn far more about [networking on Linux][6] than I ever expected to, in order to feel confident about setting up Kubernetes networking. This is fun, but very time-consuming. I may at some point write more about the hard/interesting parts of setting up Kubernetes networking.
Also, I wrote a [2,000-word blog post][7] about everything I had to learn about the different options for Kubernetes certificate authorities just to be able to set up my Kubernetes CAs successfully.
I think some of the managed Kubernetes systems, such as GKE (Google's Kubernetes product), may be simpler, since they make a lot of the decisions for you, but I have not tried them.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2017/10/05/reasons-kubernetes-is-cool/
Author: [Julia Evans][a]
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://jvns.ca/about
[1]:https://github.com/kamalmarhubi/kubereview
[2]:https://jvns.ca/categories/kubernetes
[3]:https://github.com/kelseyhightower/kubernetes-the-hard-way
[4]:https://github.com/kubernetes/kubernetes/blob/e4551d50e57c089aab6f67333412d3ca64bc09ae/pkg/controller/cronjob/cronjob_controller.go
[5]:https://github.com/kamalmarhubi/kubereview
[6]:https://jvns.ca/blog/2016/12/22/container-networking/
[7]:https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/

View File

@ -0,0 +1,93 @@
GitHub welcomes all CI tools
====================
[![GitHub and all CI tools](https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png)][11]
Continuous integration ([CI][12]) tools help you stick to your team's quality standards by running tests on every commit and [reporting the results][13] to pull requests. Combined with continuous delivery ([CD][14]) tools, you can also test your code on multiple configurations, run additional performance tests, and automate every step [through production][15].
Several CI and CD tools [integrate with GitHub][16], some of which can be installed in a few clicks from [GitHub Marketplace][17]. With so many options, you can pick the best tool for the job, even if it is not the one that comes pre-integrated with your system.
The tool that is best for you depends on many factors, including:
* Programming language and application architecture
* Operating systems and browsers you plan to support
* Your team's experience and skills
* Scaling capabilities and plans for growth
* Geographic distribution of dependent systems and the people who use them
* Packaging and delivery goals
Of course, it is not possible to optimize your CI tool for all of these scenarios. The people who build them have to decide which use cases to serve best and when to prioritize complexity over simplicity. For example, if you want to test a small program written in one language for one platform, you will not need the complexity of a tool that can test embedded software controllers on dozens of platforms with many programming languages and frameworks.
If you need some inspiration for which CI tool might work best, check out [popular GitHub projects][18]. Many show the status of their integrated CI/CD tools as badges in their README.md. We also analyzed the use of CI tools across more than 50 million repositories in the GitHub community and found a lot of variety. The following diagram shows the relative percentage of the top 10 CI tools used with GitHub.com, based on the most used [commit status contexts][19] in our pull requests.
_Our analysis also showed that many teams use more than one CI tool in their projects, allowing them to emphasize what each tool does best._
[![Top 10 CI systems used with GitHub.com based on most used commit status contexts](https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png)][20]
If you would like to check them out, here are the top 10 tools teams are using:
* [Travis CI][1]
* [Circle CI][2]
* [Jenkins][3]
* [AppVeyor][4]
* [CodeShip][5]
* [Drone][6]
* [Semaphore CI][7]
* [Buildkite][8]
* [Wercker][9]
* [TeamCity][10]
It is tempting to just settle for the default, pre-integrated tool without spending time researching and choosing the best one for the job, but there are plenty of [great choices][21] for your specific situation. And if you change your mind later, no problem. When you choose the best tool for a specific situation, you are guaranteeing tailored performance, along with the freedom to swap it out when it no longer fits.
Ready to see how CI tools can fit into your workflow?
[Browse GitHub Marketplace][22]
--------------------------------------------------------------------------------
via: https://github.com/blog/2463-github-welcomes-all-ci-tools
Author: [jonico][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://github.com/jonico
[1]:https://travis-ci.org/
[2]:https://circleci.com/
[3]:https://jenkins.io/
[4]:https://www.appveyor.com/
[5]:https://codeship.com/
[6]:http://try.drone.io/
[7]:https://semaphoreci.com/
[8]:https://buildkite.com/
[9]:http://www.wercker.com/
[10]:https://www.jetbrains.com/teamcity/
[11]:https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png
[12]:https://en.wikipedia.org/wiki/Continuous_integration
[13]:https://github.com/blog/2051-protected-branches-and-required-status-checks
[14]:https://en.wikipedia.org/wiki/Continuous_delivery
[15]:https://developer.github.com/changes/2014-01-09-preview-the-new-deployments-api/
[16]:https://github.com/works-with/category/continuous-integration
[17]:https://github.com/marketplace/category/continuous-integration
[18]:https://github.com/explore?trending=repositories#trending
[19]:https://developer.github.com/v3/repos/statuses/
[20]:https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png
[21]:https://github.com/works-with/category/continuous-integration
[22]:https://github.com/marketplace/category/continuous-integration

View File

@ -0,0 +1,54 @@
Sysadmin 101: Patch Management
============================================================
A few articles ago, I started a Sysadmin 101 series to pass down some fundamental knowledge about systems administration that today's junior sysadmins, DevOps engineers, or "full-stack" developers may not have been exposed to. I had thought that series was finished, but then the WannaCry malware appeared and spread through networks of Windows hosts with poor patch management. I can imagine that readers still stuck in the Linux-versus-Windows wars of the 2000s may have smirked at that news with an air of superiority.
The reason I decided to revive the Sysadmin 101 series so soon is that I realized that, when it comes to patch management, some Linux administrators are no different from those Windows administrators. Frankly, in some ways they are even worse (particularly those who pride themselves on their uptime). So this article covers the basics of patch management under Linux: what good patch management looks like, some related tools you might use, and how the patching process itself works.
### What is patch management?
By patch management, I mean the system you have in place to deploy software updates to your servers, not just updating software to the latest-and-greatest bleeding-edge version. Even conservative distributions like Debian, which maintain a particular version of software for "stability", regularly release patches to fix bugs and security holes.
Of course, things get complicated when your organization decides to maintain its own versions of particular software, either because developers demand the latest and greatest, requiring you to fork the upstream source code and apply modifications, or because you like giving yourself extra work. Ideally, you have set things up so that your custom versions of software are built and packaged automatically by the same continuous integration system used for everything else. However, many sysadmins still package software on their local host, following steps from a (hopefully current) wiki page. Either way, you must figure out whether the version you are using has security flaws and, if it does, make sure the new patch gets applied to your custom version.
### What good patch management looks like
Patch management starts with knowing when software updates are available. First, for your core software, you should subscribe to the security mailing list of your Linux distribution so you hear about security patches the moment they come out. If you use any software that does not come from a distribution repository, you must find a way to track its security updates as well. When a new security notification arrives, read through the details so you understand how severe the vulnerability is, whether your systems are affected, and how urgent the patch is.
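What the "check for pending updates" step looks like varies by distribution; two hedged sketches:
```
yum updateinfo list security                  # RHEL/CentOS (dnf updateinfo on Fedora)
apt-get -s dist-upgrade | grep -i security    # rough dry-run check on Debian/Ubuntu
```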
Some organizations still manage patches manually. With this approach, when a security patch comes out, the sysadmin relies on memory, logs into individual servers to check them, determines which servers need the upgrade, and then uses the server's built-in package management tool to update the software from the distribution repository, repeating the process on all remaining servers.
There are many problems with manual patch management. First, it makes patching a chore: the more labor patching involves, the more likely a sysadmin will put it off or skip it entirely. Second, it relies on the sysadmin's memory to keep track of which servers he or she is responsible for upgrading. This makes it very easy for some servers to be missed and never get upgraded.
The faster and easier patch management is, the more likely you are to do it properly. You should build a system that lets you quickly see which servers run a particular piece of software, and at which version, and that ideally can also push out updates. Personally, I prefer orchestration tools such as MCollective for this task, but Red Hat's Satellite and Canonical's Landscape also let you view the software versions across your servers from a unified interface and install patches.
Patching should also be fault-tolerant. You should be able to patch a service without taking it down, and the same goes for kernel patches that require a reboot. My approach is to divide my servers into different high-availability groups: lb1, app1, rabbitmq1 and db1 are in one group, while lb2, app2, rabbitmq2 and db2 are in another. This lets me upgrade one group at a time without taking the service down.
So how fast is fast? For the handful of packages that have no accompanying services, your system should be able to apply a patch within minutes to an hour at most (as with bash's ShellShock vulnerability). For software like OpenSSL that requires a service restart, patching in a fault-tolerant way and restarting services may take a bit more time, but this is where orchestration tools pay off. I gave a few examples of using MCollective for patch management in my recent articles on it (see the December 2016 and January 2017 issues), and you would do well to deploy a system that simplifies patching and service restarts in an automated, fault-tolerant fashion.
Patches that require a reboot, such as kernel patches, take more time. Again, automation and orchestration tools can make this go faster than you might think. I can patch and reboot servers in production in a fault-tolerant way within an hour or two, and faster if I don't have to wait for clusters to sync backups between reboots.
Unfortunately, many sysadmins still hold on to the outdated notion of uptime as a badge of pride, given that critical kernel patches come out about once a year. For me, that just means you are not taking your system's security seriously.
Many organizations also still have single-point-of-failure servers that can never go down, and as a result can never be upgraded or rebooted. If you want your systems to be more secure, you need to shed that baggage and build a system that, at the very least, can be rebooted during a late-night maintenance window.
Ultimately, fast and easy patch management is also a sign of a mature, professional sysadmin team. Upgrading software is something all sysadmins simply have to do, and spending time making that process simple and fast pays dividends far beyond security. For one, it helps us uncover single points of failure in our architecture. It also helps identify legacy systems in the environment and gives us motivation to replace them. Finally, when patch management is done well, it frees up sysadmins' time and lets them focus their attention on areas where their expertise is actually needed.
______________________
Kyle Rankin is a senior security and infrastructure architect. His books include Linux Hardening in Hostile Networks, DevOps Troubleshooting, and The Official Ubuntu Server Book. He is also a columnist for Linux Journal.
--------------------------------------------------------------------------------
via: https://www.linuxjournal.com/content/sysadmin-101-patch-management
Author: [Kyle Rankin][a]
Translator: [haoqixu](https://github.com/haoqixu)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.linuxjournal.com/users/kyle-rankin
[1]:https://www.linuxjournal.com/tag/how-tos
[2]:https://www.linuxjournal.com/tag/servers
[3]:https://www.linuxjournal.com/tag/sysadmin
[4]:https://www.linuxjournal.com/users/kyle-rankin

View File

@ -0,0 +1,163 @@
How to Use the Date Command
======
In this article, we will show you some examples of how to use the date command in Linux. The date command can be used to print or set the system date and time. Using date is simple; see the examples and syntax below.
By default, when date is run without any arguments, it prints the current system date and time:
```shell
date
```
```
Sat 2 Dec 12:34:12 CST 2017
```
#### Syntax
```
Usage: date [OPTION]... [+FORMAT]
or: date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]
Display the current time in the given FORMAT, or set the system date.
```
### Examples
The following examples show how to use the date command to look up dates and times in the past or future.
#### 1\. Find the date 5 weeks from now
```shell
date -d "5 weeks"
Sun Jan 7 19:53:50 CST 2018
```
#### 2\. Find the date 5 weeks and 4 days from now
```shell
date -d "5 weeks 4 days"
Thu Jan 11 19:55:35 CST 2018
```
#### 3\. Get the date of next month
```shell
date -d "next month"
Wed Jan 3 19:57:43 CST 2018
```
#### 4\. Get the date of last Sunday
```shell
date -d last-sunday
Sun Nov 26 00:00:00 CST 2017
```
The date command also has many formatting options; the following examples show how to format its output.
#### 5\. Display the date in yyyy-mm-dd format
```shell
date +"%F"
2017-12-03
```
#### 6\. Display the date in mm/dd/yyyy format
```shell
date +"%m/%d/%Y"
12/03/2017
```
#### 7\. Display only the time
```shell
date +"%T"
20:07:04
```
#### 8\. Display which day of the year it is
```shell
date +"%j"
337
```
#### 9\. Formatting options

| Format | Description |
| --- | --- |
| **%%** | a literal percent sign ("**%**"). |
| **%a** | abbreviated weekday name (e.g., **Sun**). |
| **%A** | full weekday name (e.g., **Sunday**). |
| **%b** | abbreviated month name (e.g., **Jan**). |
| **%B** | locale's full month name (e.g., **January**). |
| **%c** | date and time (e.g., **Thu Mar 3 23:05:25 2005**). |
| **%C** | century; like **%Y**, except omit last two digits (e.g., **20**). |
| **%d** | day of month (e.g., **01**). |
| **%D** | date; same as **%m/%d/%y**. |
| **%e** | day of month, space padded; same as **%_d**. |
| **%F** | full date; same as **%Y-%m-%d**. |
| **%g** | last two digits of the year (see **%G**). |
| **%G** | year (see **%V**); normally useful only with **%V**. |
| **%h** | same as **%b**. |
| **%H** | hour (**00**..**23**). |
| **%I** | hour (**01**..**12**). |
| **%j** | day of year (**001**..**366**). |
| **%k** | hour, space padded ( **0**..**23**); same as **%_H**. |
| **%l** | hour, space padded ( **1**..**12**); same as **%_I**. |
| **%m** | month (**01**..**12**). |
| **%M** | minute (**00**..**59**). |
| **%n** | a newline. |
| **%N** | nanoseconds (**000000000**..**999999999**). |
| **%p** | locale's equivalent of either **AM** or **PM**; blank if not known. |
| **%P** | like **%p**, but lower case. |
| **%r** | locale's 12-hour clock time (e.g., **11:11:04 PM**). |
| **%R** | 24-hour hour and minute; same as **%H:%M**. |
| **%s** | seconds since 1970-01-01 00:00:00 UTC. |
| **%S** | second (**00**..**60**). |
| **%t** | a tab. |
| **%T** | time; same as **%H:%M:%S**. |
| **%u** | day of week (**1**..**7**); 1 is **Monday**. |
| **%U** | week number of year, with Sunday as first day of week (**00**..**53**). |
| **%V** | ISO week number, with Monday as first day of week (**01**..**53**). |
| **%w** | day of week (**0**..**6**); 0 is **Sunday**. |
| **%W** | week number of year, with Monday as first day of week (**00**..**53**). |
| **%x** | locale's date representation (e.g., **12/31/99**). |
| **%X** | locale's time representation (e.g., **23:13:48**). |
| **%y** | last two digits of year (**00**..**99**). |
| **%Y** | year. |
| **%z** | +hhmm numeric time zone (e.g., **-0400**). |
| **%:z** | +hh:mm numeric time zone (e.g., **-04:00**). |
| **%::z** | +hh:mm:ss numeric time zone (e.g., **-04:00:00**). |
| **%:::z** | numeric time zone with "**:**" to the necessary precision (e.g., **-04**, **+05:30**). |
| **%Z** | alphabetic time zone abbreviation (e.g., **EDT**). |
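A common way to combine these specifiers is building timestamped file names; a small sketch (the path is made up):
```shell
tar -czf "backup-$(date +%F-%H%M%S).tar.gz" /etc
# produces e.g. backup-2017-12-03-203415.tar.gz
```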
#### 10\. Set the system time
You can also use the date command to set the system time manually, using the `--set` option. The following example sets the system time to 4:22 PM on August 30, 2017:
```shell
date --set="20170830 16:22"
```
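On most systems you would then also write the new time to the hardware clock so it survives a reboot (this assumes the hwclock utility is available):
```shell
hwclock --systohc
```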
Of course, if you use one of our [VPS Hosting services][1], you can always contact our expert Linux admins (via chat or support ticket) about anything related to the date command. They are available 24x7 and will help you immediately.
PS. If you liked this post, please share it using the buttons below or leave a comment. Thank you.
--------------------------------------------------------------------------------
via: https://www.rosehosting.com/blog/use-the-date-command-in-linux/
Author: [][a]
Translator: [lujun9972](https://github.com/lujun9972)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.rosehosting.com
[1]:https://www.rosehosting.com/hosting-services.html

View File

@ -0,0 +1,422 @@
translating by yongshouzhang
7 Tools for Analyzing Performance in Linux with bcc/BPF
============================================================
### Look deep into your Linux code with these Berkeley Packet Filter (BPF) Compiler Collection (bcc) tools.
21 Nov 2017, [Brendan Gregg][8]
![7 superpowers for Fedora bcc/BPF performance analysis](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/penguins%20in%20space_0.jpg?itok=umpCTAul)
Image credit: opensource.com
A new technology in Linux brings a large number of new tools and dashboards for performance analysis and troubleshooting to sysadmins and developers. It is called the enhanced Berkeley Packet Filter (eBPF, or just BPF), although these enhancements were not developed at Berkeley, they operate on much more than just packets, and they do much more than just filtering. I will discuss one way to use BPF on the Fedora and Red Hat family of Linux distributions, demonstrating on Fedora 26.
BPF can run user-defined sandboxed programs in the kernel to add new custom capabilities instantly. It is like adding superpowers to your Linux system, on demand. Examples of what you can use it for include:
* Advanced performance tracing tools: programmatic, low-overhead instrumentation of filesystem operations, TCP events, user-level events, and more
* Network performance: dropping packets early to improve DDoS resilience, or redirecting packets in-kernel to improve performance
* Security monitoring: 24x7 custom monitoring and logging of suspicious kernel- and user-space events
Where possible, BPF programs must pass an in-kernel verifier to ensure they run safely, which makes them safer than writing custom kernel modules. I assume most people will not write their own BPF programs, but will use ones that others have written. I have published many open source BPF tools on GitHub in the [BPF Compiler Collection (bcc)][12] project. bcc provides different frontends for BPF development, including Python and Lua, and it is currently the most active project for BPF tooling.
### 7 useful new bcc/BPF tools
To understand the bcc/BPF tools and what they instrument, I created the following diagram and added it to the bcc project:
### [bcc_tracing_tools.png][13]
![Linux bcc/BPF tracing tools diagram](https://opensource.com/sites/default/files/u128651/bcc_tracing_tools.png)
Brendan Gregg, [CC BY-SA 4.0][14]
These are command-line interface (CLI) tools that you can use over SSH (secure shell). Much analysis nowadays, including at my employer, is conducted using GUIs and dashboards, with SSH as a last resort. But these CLI tools are still a good way to preview BPF capabilities, even if you ultimately intend to use them through a GUI when available. I have begun adding BPF capabilities to an open source GUI, but that is a topic for another article. For now, I want to share the CLI tools you can use today.
### 1\. execsnoop
Where to begin? How about watching new processes. These can consume system resources but be so short-lived that they never show up in top(1) or other tools. New processes can be instrumented (or, using industry jargon, traced) with [execsnoop][15]. While tracing, I will log in over SSH in another window:
```
# /usr/share/bcc/tools/execsnoop
PCOMM PID PPID RET ARGS
sshd 12234 727 0 /usr/sbin/sshd -D -R
unix_chkpwd 12236 12234 0 /usr/sbin/unix_chkpwd root nonull
unix_chkpwd 12237 12234 0 /usr/sbin/unix_chkpwd root chkexpiry
bash 12239 12238 0 /bin/bash
id 12241 12240 0 /usr/bin/id -un
hostname 12243 12242 0 /usr/bin/hostname
pkg-config 12245 12244 0 /usr/bin/pkg-config --variable=completionsdir bash-completion
grepconf.sh 12246 12239 0 /usr/libexec/grepconf.sh -c
grep 12247 12246 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
tty 12249 12248 0 /usr/bin/tty -s
tput 12250 12248 0 /usr/bin/tput colors
dircolors 12252 12251 0 /usr/bin/dircolors --sh /etc/DIR_COLORS
grep 12253 12239 0 /usr/bin/grep -qi ^COLOR.*none /etc/DIR_COLORS
grepconf.sh 12254 12239 0 /usr/libexec/grepconf.sh -c
grep 12255 12254 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
grepconf.sh 12256 12239 0 /usr/libexec/grepconf.sh -c
grep 12257 12256 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
```
Wow. What is all that? What is grepconf.sh? What is /etc/GREP_COLORS? And is grep really reading its own configuration file... by running itself? How does that even work?
Welcome to the fun world of system tracing. You can learn a lot about how systems actually work (or, in some cases, don't work) and discover some easy optimizations along the way. execsnoop works by tracing the exec() system call, which is usually used to load new program code into a new process.
### 2\. opensnoop
Continuing from above: so, grepconf.sh is likely a shell script, right? I'll run file(1) to check, and also use the [opensnoop][16] bcc tool to see which files are being opened:
```
# /usr/share/bcc/tools/opensnoop
PID COMM FD ERR PATH
12420 file 3 0 /etc/ld.so.cache
12420 file 3 0 /lib64/libmagic.so.1
12420 file 3 0 /lib64/libz.so.1
12420 file 3 0 /lib64/libc.so.6
12420 file 3 0 /usr/lib/locale/locale-archive
12420 file -1 2 /etc/magic.mgc
12420 file 3 0 /etc/magic
12420 file 3 0 /usr/share/misc/magic.mgc
12420 file 3 0 /usr/lib64/gconv/gconv-modules.cache
12420 file 3 0 /usr/libexec/grepconf.sh
1 systemd 16 0 /proc/565/cgroup
1 systemd 16 0 /proc/536/cgroup
```
Tools like execsnoop and opensnoop print one line per event. The output above shows which files the file(1) command requested to open (or attempted to open): the file descriptor ("FD" column) is -1 for /etc/magic.mgc, and the "ERR" column indicates "file not found". I did not know about that file, nor about the /usr/share/misc/magic.mgc file that file(1) ends up reading. I should not be surprised, but file(1) has no problem identifying the file types:
```
# file /usr/share/misc/magic.mgc /etc/magic
/usr/share/misc/magic.mgc: magic binary file for file(1) cmd (version 14) (little endian)
/etc/magic: magic text file for file(1) cmd, ASCII text
```
opensnoop works by tracing the open() system call. Why not simply use `strace -feopen file`? That would work in this case. A few advantages of opensnoop, however, are that it works system-wide, tracing open() calls across all processes: note that the output above included files opened by systemd. opensnoop should also have much lower overhead: BPF tracing has been optimized, whereas the current version of strace(1) still uses the older and slower ptrace(2) interface.
### 3\. xfsslower
bcc/BPF can analyze much more than just system calls. The [xfsslower][17] tool traces common XFS filesystem operations whose latency exceeds 1 millisecond (the argument):
```
# /usr/share/bcc/tools/xfsslower 1
Tracing XFS operations slower than 1 ms
TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME
14:17:34 systemd-journa 530 S 0 0 1.69 system.journal
14:17:35 auditd 651 S 0 0 2.43 audit.log
14:17:42 cksum 4167 R 52976 0 1.04 at
14:17:45 cksum 4168 R 53264 0 1.62 [
14:17:45 cksum 4168 R 65536 0 1.01 certutil
14:17:45 cksum 4168 R 65536 0 1.01 dir
14:17:45 cksum 4168 R 65536 0 1.17 dirmngr-client
14:17:46 cksum 4168 R 65536 0 1.06 grub2-file
14:17:46 cksum 4168 R 65536 128 1.01 grub2-fstest
[...]
```
In the output above, I caught several cksum(1) reads (the "T" column shows "R") that took longer than 1 millisecond. xfsslower works by dynamically instrumenting kernel functions in XFS while the tool runs, and removing that instrumentation when it finishes. There are versions of this bcc tool for other filesystems as well: ext4slower, btrfsslower, zfsslower, and nfsslower.
This is a useful tool, and an important example of BPF tracing. Traditional analysis of filesystem performance focuses on block I/O statistics, the kind you typically see printed by iostat(1) and plotted by many performance-monitoring GUIs. Those statistics show how the disks are performing, but not really the filesystem. Often you care more about the filesystem's performance than the disks', since it is the filesystem that applications make requests to and wait on. And the performance of filesystems can be quite different from that of disks! Filesystems may serve reads entirely from memory cache, and populate that cache via read-ahead algorithms and write-back caching. xfsslower shows filesystem performance, which is what applications directly experience. This is often useful for exonerating the entire storage subsystem: if there really is no filesystem latency, then a performance issue is likely to be elsewhere.
### 4\. biolatency
Although filesystem performance is important to study for understanding application performance, studying disk performance has merit as well. Poor disk performance will eventually hurt the application, once the various caching tricks can no longer hide its latency. Disk performance is also a target of capacity-planning studies.
The iostat(1) tool shows average disk I/O latency, but averages can be misleading. It is useful to study the distribution of I/O latency as a histogram, which can be done using [biolatency][18]:
```
# /usr/share/bcc/tools/biolatency
Tracing block device I/O... Hit Ctrl-C to end.
^C
usecs : count distribution
0 -> 1 : 0 | |
2 -> 3 : 0 | |
4 -> 7 : 0 | |
8 -> 15 : 0 | |
16 -> 31 : 0 | |
32 -> 63 : 1 | |
64 -> 127 : 63 |**** |
128 -> 255 : 121 |********* |
256 -> 511 : 483 |************************************ |
512 -> 1023 : 532 |****************************************|
1024 -> 2047 : 117 |******** |
2048 -> 4095 : 8 | |
```
This is another useful tool and another useful example: it uses a BPF feature called maps, which can be used to implement efficient in-kernel summary statistics. The only data transferred from kernel level to user level is the "count" column; the user-level program generates the rest.
It is worth noting that many of these tools support CLI options and arguments, as shown by their usage messages:
```
# /usr/share/bcc/tools/biolatency -h
usage: biolatency [-h] [-T] [-Q] [-m] [-D] [interval] [count]
Summarize block device I/O latency as a histogram
positional arguments:
interval output interval, in seconds
count number of outputs
optional arguments:
-h, --help show this help message and exit
-T, --timestamp include timestamp on output
-Q, --queued include OS queued time in I/O time
-m, --milliseconds millisecond histogram
-D, --disks print a histogram per disk device
examples:
./biolatency # summarize block I/O latency as a histogram
./biolatency 1 10 # print 1 second summaries, 10 times
./biolatency -mT 1 # 1s summaries, milliseconds, and timestamps
./biolatency -Q # include OS queued time in I/O time
./biolatency -D # show each disk device separately
```
By design, they behave like other Unix tools, to make them easy to adopt.
### 5\. tcplife
Another useful tool is [tcplife][19], which shows the lifespan and throughput statistics of TCP sessions:
```
# /usr/share/bcc/tools/tcplife
PID COMM LADDR LPORT RADDR RPORT TX_KB RX_KB MS
12759 sshd 192.168.56.101 22 192.168.56.1 60639 2 3 1863.82
12783 sshd 192.168.56.101 22 192.168.56.1 60640 3 3 9174.53
12844 wget 10.0.2.15 34250 54.204.39.132 443 11 1870 5712.26
12851 curl 10.0.2.15 34252 54.204.39.132 443 0 74 505.90
```
Before you say, "Can't I just scrape tcpdump(8) output for this?", note that running tcpdump(8), or any packet sniffer, on high-packet-rate systems can cost noticeable overhead, even though the user- and kernel-level mechanics of tcpdump(8) have been optimized over the years (it could be worse). tcplife does not instrument every packet; it only watches TCP session state changes for efficiency, and from those it measures the duration of each session. It also uses kernel counters that already track throughput, as well as process and command information (the "PID" and "COMM" columns), which on-the-wire sniffing tools like tcpdump(8) cannot provide.
### 6\. gethostlatency
Every example so far has involved kernel tracing, so I need at least one user-level tracing example. Here is [gethostlatency][20], which instruments gethostbyname(3) and related library calls for name resolution:
```
# /usr/share/bcc/tools/gethostlatency
TIME PID COMM LATms HOST
06:43:33 12903 curl 188.98 opensource.com
06:43:36 12905 curl 8.45 opensource.com
06:43:40 12907 curl 6.55 opensource.com
06:43:44 12911 curl 9.67 opensource.com
06:45:02 12948 curl 19.66 opensource.cats
06:45:06 12950 curl 18.37 opensource.cats
06:45:07 12952 curl 13.64 opensource.cats
06:45:19 13139 curl 13.10 opensource.cats
```
Yes, it is always DNS, so having a tool to watch DNS requests system-wide can be handy (this only works if applications use the standard system libraries). Notice how I traced multiple lookups of "opensource.com": the first took 188.98 milliseconds, and the subsequent ones were much faster, less than 10 milliseconds, no doubt thanks to caching. It also traced multiple lookups of "opensource.cats", a sadly nonexistent host, but we can still examine the latency of the first and subsequent lookups. (Is there a little negative caching after the second lookup?)
### 7\. trace
OK, one more example. The [trace][21] tool was contributed by Sasha Goldshtein and provides some basic printf(1)-style capabilities with custom probes. For example:
```
# /usr/share/bcc/tools/trace 'pam:pam_start "%s: %s", arg1, arg2'
PID TID COMM FUNC -
13266 13266 sshd pam_start sshd: root
```
Here I am tracing libpam and its pam_start(3) function, printing both of its arguments as strings. libpam is for the pluggable authentication modules system, and the output shows that sshd called pam_start() for the "root" user (I logged in). There are more examples in the USAGE message ("trace -h"), and, in the bcc repository, all of these tools have man pages and example files, e.g. trace_example.txt and trace.8.
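A further one-liner in the same spirit (probe names can vary by kernel version, so treat this as a hedged sketch):
```
# Print reads larger than 20000 bytes, system-wide:
/usr/share/bcc/tools/trace 'sys_read (arg3 > 20000) "read %d bytes", arg3'
```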
### Install bcc via packages
The best way to install bcc is from the iovisor repository, following the instructions in the bcc [INSTALL.md][22]. [IO Visor][23] is the Linux Foundation project that includes bcc. The BPF enhancements these tools use were added in the 4.x-series Linux kernels, up to 4.9. This means that Fedora 25, with its 4.8 kernel, can run most of these tools, and Fedora 26, with its 4.11 kernel, can run them all (at least currently).
If you are on Fedora 25 (or Fedora 26 and this post was published many months ago — hello from the distant past!), then this package approach should just work. If you are on Fedora 26, skip to the "Install via source" section, which avoids a known and now-fixed bug. (That bug fix has not yet made it into the Fedora 26 package dependencies.) The system I am using is:
```
# uname -a
Linux localhost.localdomain 4.11.8-300.fc26.x86_64 #1 SMP Thu Jun 29 20:09:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/fedora-release
Fedora release 26 (Twenty Six)
```
Here are the install steps I followed, but please refer to INSTALL.md for an updated version:
```
# echo -e '[iovisor]\nbaseurl=https://repo.iovisor.org/yum/nightly/f25/$basearch\nenabled=1\ngpgcheck=0' | sudo tee /etc/yum.repos.d/iovisor.repo
# dnf install bcc-tools
[...]
Total download size: 37 M
Installed size: 143 M
Is this ok [y/N]: y
```
After the install, you should see the new tools in /usr/share:
```
# ls /usr/share/bcc/tools/
argdist dcsnoop killsnoop softirqs trace
bashreadline dcstat llcstat solisten ttysnoop
[...]
```
Let's try running one of them:
```
# /usr/share/bcc/tools/opensnoop
chdir(/lib/modules/4.11.8-300.fc26.x86_64/build): No such file or directory
Traceback (most recent call last):
File "/usr/share/bcc/tools/opensnoop", line 126, in
b = BPF(text=bpf_text)
File "/usr/lib/python3.6/site-packages/bcc/__init__.py", line 284, in __init__
raise Exception("Failed to compile BPF module %s" % src_file)
Exception: Failed to compile BPF module
```
This failed to run, complaining that /lib/modules/4.11.8-300.fc26.x86_64/build is missing. If you hit this too, it is just because the system is missing the kernel headers. If you look at what that file points to (it is a symlink) and then search for it with "dnf whatprovides", it will tell you which package you need to install next. For this system, it is:
```
# dnf install kernel-devel-4.11.8-300.fc26.x86_64
[...]
Total download size: 20 M
Installed size: 63 M
Is this ok [y/N]: y
[...]
```
And now:
```
# /usr/share/bcc/tools/opensnoop
PID COMM FD ERR PATH
11792 ls 3 0 /etc/ld.so.cache
11792 ls 3 0 /lib64/libselinux.so.1
11792 ls 3 0 /lib64/libcap.so.2
11792 ls 3 0 /lib64/libc.so.6
[...]
```
It works. This is catching activity from an ls command in another window. See the earlier sections for other useful commands.
### Install via source
If you need to install from source, you can also find documentation and updated instructions in [INSTALL.md][27]. I did the following on Fedora 26:
```
sudo dnf install -y bison cmake ethtool flex git iperf libstdc++-static \
python-netaddr python-pip gcc gcc-c++ make zlib-devel \
elfutils-libelf-devel
sudo dnf install -y luajit luajit-devel # for Lua support
sudo dnf install -y \
http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm
sudo pip install pyroute2
sudo dnf install -y clang clang-devel llvm llvm-devel llvm-static ncurses-devel
```
Everything installed except netperf, which hit the following error:
```
Curl error (28): Timeout was reached for http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm [Connection timed out after 120002 milliseconds]
```
Not to worry: netperf is optional (it is only used for tests), and bcc will compile without it.
Here are the remaining steps for compiling and installing bcc:
```
git clone https://github.com/iovisor/bcc.git
mkdir bcc/build; cd bcc/build
cmake .. -DCMAKE_INSTALL_PREFIX=/usr
make
sudo make install
```
At this point, the commands should work:
```
# /usr/share/bcc/tools/opensnoop
PID COMM FD ERR PATH
4131 date 3 0 /etc/ld.so.cache
4131 date 3 0 /lib64/libc.so.6
4131 date 3 0 /usr/lib/locale/locale-archive
4131 date 3 0 /etc/localtime
[...]
```
### Final words and other frontends
This has been a quick tour of the new BPF performance-analysis superpowers that you can use on the Fedora and Red Hat family of operating systems. I demonstrated the popular [bcc][28] frontend to BPF and included installation instructions for Fedora. bcc comes with more than 60 new tools for performance analysis, which will help you get the most out of your Linux systems. Perhaps you will use these tools directly over SSH, or perhaps you will use the same functionality via monitoring GUIs once they support BPF.
Also, bcc is not the only frontend in development. [ply][29] and [bpftrace][30] aim to provide higher-level languages for quickly writing custom tools. In addition, [SystemTap][31] just released [version 3.2][32], including an early, experimental eBPF backend. Should that continue to be developed, it would provide a production-safe and efficient engine for running the many SystemTap scripts and tapsets that have been developed over the years. (Using SystemTap with eBPF would be a good topic for another article.)
If you need to develop custom tools, you can do that with bcc as well, although the language is currently much more verbose than SystemTap, ply, or bpftrace. My bcc tools can serve as code examples, and I have also contributed a [tutorial][33] for developing bcc tools in Python. I recommend learning the bcc multi-tools first, as you may get a lot of mileage out of them before needing to write anything new. You can study the multi-tools from their example files in the bcc repository: [funccount][34], [funclatency][35], [funcslower][36], [stackcount][37], [trace][38], [argdist][39].
Thanks to [Opensource.com][40] for the edits.
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/brendan_face2017_620d.jpg?itok=LIwTJjL9)][43] Brendan Gregg is a senior performance architect at Netflix, where he does large-scale computer performance design, analysis, and tuning. [More about him][44]
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/bccbpf-performance
Author: [Brendan Gregg][a]
Translator: [yongshouzhang](https://github.com/yongshouzhang)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://opensource.com/participate
[7]:https://opensource.com/users/brendang
[8]:https://opensource.com/users/brendang
[9]:https://opensource.com/user/77626/feed
[10]:https://opensource.com/article/17/11/bccbpf-performance?rate=r9hnbg3mvjFUC9FiBk9eL_ZLkioSC21SvICoaoJjaSM
[11]:https://opensource.com/article/17/11/bccbpf-performance#comments
[12]:https://github.com/iovisor/bcc
[13]:https://opensource.com/file/376856
[14]:https://opensource.com/usr/share/bcc/tools/trace
[15]:https://github.com/brendangregg/perf-tools/blob/master/execsnoop
[16]:https://github.com/brendangregg/perf-tools/blob/master/opensnoop
[17]:https://github.com/iovisor/bcc/blob/master/tools/xfsslower.py
[18]:https://github.com/iovisor/bcc/blob/master/tools/biolatency.py
[19]:https://github.com/iovisor/bcc/blob/master/tools/tcplife.py
[20]:https://github.com/iovisor/bcc/blob/master/tools/gethostlatency.py
[21]:https://github.com/iovisor/bcc/blob/master/tools/trace.py
[22]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---binary
[23]:https://www.iovisor.org/
[24]:https://opensource.com/article/17/11/bccbpf-performance#InstallViaSource
[25]:https://github.com/iovisor/bcc/issues/1221
[26]:https://reviews.llvm.org/rL302055
[27]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---source
[28]:https://github.com/iovisor/bcc
[29]:https://github.com/iovisor/ply
[30]:https://github.com/ajor/bpftrace
[31]:https://sourceware.org/systemtap/
[32]:https://sourceware.org/ml/systemtap/2017-q4/msg00096.html
[33]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md
[34]:https://github.com/iovisor/bcc/blob/master/tools/funccount_example.txt
[35]:https://github.com/iovisor/bcc/blob/master/tools/funclatency_example.txt
[36]:https://github.com/iovisor/bcc/blob/master/tools/funcslower_example.txt
[37]:https://github.com/iovisor/bcc/blob/master/tools/stackcount_example.txt
[38]:https://github.com/iovisor/bcc/blob/master/tools/trace_example.txt
[39]:https://github.com/iovisor/bcc/blob/master/tools/argdist_example.txt
[40]:http://opensource.com/
[41]:https://opensource.com/tags/linux
[42]:https://opensource.com/tags/sysadmin
[43]:https://opensource.com/users/brendang
[44]:https://opensource.com/users/brendang

View File

@ -0,0 +1,72 @@
Sessions and Cookies: How Does User Login Work?
======
Facebook, Gmail, and Twitter are websites we use every day. What they have in common is that they all require you to log in before you can do anything more. Only after you have been authenticated and logged in can you tweet on Twitter, comment on Facebook, or handle email in Gmail.
[![gmail, facebook login page](http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg)][1]
So how does login work? How does a website authenticate you? How does it know which user has logged in, and from where? Let's answer these questions one by one.
### How does user login work?
Every time you enter your username and password on a website's login page, that information is sent to the server. The server then verifies your password against the one it has stored. If the two do not match, you get an incorrect-password message; if they match, you are logged in.
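As an aside, a well-built server never stores or compares the plain-text password itself; it stores a one-way hash and compares against that. A minimal sketch of such a check, assuming the Python `bcrypt` package (the function and variable names are illustrative, not any particular site's code):
```
import bcrypt

def verify_password(submitted: str, stored_hash: bytes) -> bool:
    # stored_hash is fetched from the user database at login time
    return bcrypt.checkpw(submitted.encode("utf-8"), stored_hash)
```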
### What happens when you log in?
After you log in, the web server initializes a session and sets a cookie variable in your browser. The cookie serves as a reference to the newly created session. Confused? Let's put it more simply.
### How do sessions work?
When the username and password are both correct, the server initializes a session. Sessions are hard to define precisely; think of one as "the start of a relationship".
[![session beginning of a relationship or partnership](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png)][2]
Once you are authenticated, the server begins a relationship with you. Since the server cannot see the way we humans do, it sets a cookie in our browser to distinguish our relationship with it from everyone else's.
### What is a cookie?
A cookie is a small piece of data that a website stores in your browser. You have almost certainly seen them before.
[![theitstuff official facebook page cookies](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png)][3]
When you log in, the server creates a relationship, that is, a session, for you, and then stores the session id that uniquely identifies that session in your browser as a cookie.
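Concretely, the session id travels in a `Set-Cookie` response header. A hypothetical login exchange might look like this; the domain, cookie name, and value are all made up for illustration:
```
$ curl -i -d 'user=alice&pass=secret' https://example.com/login
HTTP/1.1 200 OK
Set-Cookie: session_id=9f2b1c8a77d4; HttpOnly; Secure
...
```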
### What does that mean?
All of this exists to identify you, so that when you write a comment or send a tweet, the server knows who posted that comment or sent that tweet.
Once you are logged in, a cookie containing the session id is created. That session id is thereby assigned to the person who entered the correct username and password.
[![facebook cookies in web browser](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png)][4]
In other words, the session id is assigned to the owner of the account. From then on, for everything that happens on the website, the server can tell who did it by looking at the session id.
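Server-side, that amounts to a lookup table from session id to user. A toy sketch, reusing the made-up id from above (real servers keep this in a database or an in-memory cache and expire entries):
```
# session_id -> username; populated at login, removed at logout or expiry
sessions = {"9f2b1c8a77d4": "alice"}

def user_for(session_id):
    # None means the session is unknown or has expired
    return sessions.get(session_id)
```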
### How do I stay logged in?
Sessions have a time limit. Unlike real life, where a relationship can survive long stretches without contact, a session expires: you have to keep telling the server you are still around by taking some action. Otherwise the server closes the session and you are logged out.
[![websites keep me logged in option](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png)][5]
On some websites, however, you can enable "Keep me logged in". The server then saves another unique variable in your browser as a cookie and logs you in automatically by comparing it against a copy kept on the server. If someone steals that unique identifier (this is known as cookie stealing), they can access your account.
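A sketch of how such a "remember me" token might be handled; again a toy under the assumption of an in-memory store, whereas production systems hash the token server-side and rotate it:
```
import secrets

remember_tokens = {}  # token -> username, persisted on the server

def issue_token(username):
    token = secrets.token_hex(16)      # sent to the browser as a long-lived cookie
    remember_tokens[token] = username
    return token

def auto_login(token):
    return remember_tokens.get(token)  # None forces a normal login
```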
### Conclusion
We discussed how login systems work and how websites authenticate you. We also learned what sessions and cookies are and what role they play in the login mechanism.
We hope you now understand how user login works; if you have any questions, feel free to ask.
--------------------------------------------------------------------------------
via: http://www.theitstuff.com/sessions-cookies-user-login-work
Author: [Rishabh Kandari][a]
Translator: [lujun9972](https://github.com/lujun9972)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.theitstuff.com/author/reevkandari
[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg
[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png
[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png
[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png
[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png