20171202 docker - Use multi-stage builds.md
@ -0,0 +1,122 @@
|
||||
Docker:使用多阶段构建镜像
|
||||
============================================================
|
||||
|
||||
多阶段构建是 Docker 17.05 及更高版本提供的新功能。对于那些努力优化 Dockerfile 的人来说,它可以让 Dockerfile 更易于阅读和维护。
|
||||
|
||||
> 致谢: 特别感谢 [Alex Ellis][1] 授权使用他的关于 Docker 多阶段构建的博客文章 [Builder pattern vs. Multi-stage builds in Docker][2] 作为以下示例的基础。
|
||||
|
||||
### 在多阶段构建之前
|
||||
|
||||
关于构建镜像最具挑战性的事情之一是保持镜像体积小巧。Dockerfile 中的每条指令都会在镜像中增加一层,你需要记得在进入下一层之前清理掉不需要的构件。要编写一个非常高效的 Dockerfile,你通常需要使用 shell 技巧和其它方式来尽可能地减少层数,并确保每一层只带上它从上一层所需要的构件,而不保留任何其它东西。
|
||||
|
||||
实际上最常见的是,有一个 Dockerfile 用于开发(其中包含构建应用程序所需的所有内容),而另一个裁剪过的用于生产环境,它只包含您的应用程序以及运行它所需的内容。这被称为“构建器模式”。但是维护两个 Dockerfile 并不理想。
|
||||
|
||||
下面分别是一个 `Dockerfile.build` 和遵循上面的构建器模式的 `Dockerfile` 的例子:
|
||||
|
||||
`Dockerfile.build`:
|
||||
|
||||
```
|
||||
FROM golang:1.7.3
|
||||
WORKDIR /go/src/github.com/alexellis/href-counter/
|
||||
RUN go get -d -v golang.org/x/net/html
|
||||
COPY app.go .
|
||||
RUN go get -d -v golang.org/x/net/html \
|
||||
&& CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
|
||||
```
|
||||
|
||||
注意这个例子还使用 Bash 的 `&&` 运算符人为地将两个 `RUN` 命令压缩在一起,以避免在镜像中创建额外的层。这种写法容易出错,也难以维护。例如,插入另一条命令时,很容易忘记继续使用 `\` 字符。
|
||||
|
||||
`Dockerfile`:
|
||||
|
||||
```
|
||||
FROM alpine:latest
|
||||
RUN apk --no-cache add ca-certificates
|
||||
WORKDIR /root/
|
||||
COPY app .
|
||||
CMD ["./app"]
|
||||
```
|
||||
|
||||
`build.sh`:
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
echo Building alexellis2/href-counter:build
|
||||
|
||||
docker build --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \
|
||||
-t alexellis2/href-counter:build . -f Dockerfile.build
|
||||
|
||||
docker create --name extract alexellis2/href-counter:build
|
||||
docker cp extract:/go/src/github.com/alexellis/href-counter/app ./app
|
||||
docker rm -f extract
|
||||
|
||||
echo Building alexellis2/href-counter:latest
|
||||
|
||||
docker build --no-cache -t alexellis2/href-counter:latest .
|
||||
rm ./app
|
||||
```
|
||||
|
||||
当您运行 `build.sh` 脚本时,它会构建第一个镜像,并从中创建一个容器以便把构件复制出来,然后再构建第二个镜像。这两个镜像都会占用您系统的空间,而且 `app` 构件也还残留在您的本地磁盘上。
|
||||
|
||||
多阶段构建大大简化了这种情况!
|
||||
|
||||
### 使用多阶段构建
|
||||
|
||||
在多阶段构建中,您需要在 Dockerfile 中多次使用 `FROM` 声明。每次 `FROM` 指令可以使用不同的基础镜像,并且每次 `FROM` 指令都会开始新阶段的构建。您可以选择性地将构件从一个阶段复制到另一个阶段,把您不需要的所有内容都留在前面的阶段,而不带入最终镜像。为了演示这是如何工作的,让我们调整前一节中的 Dockerfile 以使用多阶段构建。
|
||||
|
||||
`Dockerfile`:
|
||||
|
||||
```
|
||||
FROM golang:1.7.3
|
||||
WORKDIR /go/src/github.com/alexellis/href-counter/
|
||||
RUN go get -d -v golang.org/x/net/html
|
||||
COPY app.go .
|
||||
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
|
||||
|
||||
FROM alpine:latest
|
||||
RUN apk --no-cache add ca-certificates
|
||||
WORKDIR /root/
|
||||
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
|
||||
CMD ["./app"]
|
||||
```
|
||||
|
||||
您只需要一个 Dockerfile,不再需要单独的构建脚本,只需运行 `docker build` 即可。
|
||||
|
||||
```
|
||||
$ docker build -t alexellis2/href-counter:latest .
|
||||
```
|
||||
|
||||
最终的结果是和以前体积一样小的生产镜像,复杂性显著降低。您不需要创建任何中间镜像,也不需要将任何构件提取到本地系统。
|
||||
|
||||
它是如何工作的呢?第二条 `FROM` 指令以 `alpine:latest` 镜像作为基础开始一个新的构建阶段。`COPY --from=0` 这一行将前一个阶段产生的构件复制到这个新阶段。Go SDK 和所有中间构件都留在了前一个阶段,不会保存到最终镜像中。
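如果想直观地验证这一点,可以在构建完成后用 `docker images` 比较基础镜像和最终镜像的大小(下面只是一个示意,具体命令输出和数值取决于你的环境):

```
$ docker build -t alexellis2/href-counter:latest .

# 分别查看构建阶段所用的基础镜像和最终生成的镜像(输出因环境而异)
$ docker images golang:1.7.3
$ docker images alexellis2/href-counter:latest
```

基于 `alpine` 的最终镜像通常只有十几 MB,而 `golang` 基础镜像往往有数百 MB。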
|
||||
|
||||
### 命名您的构建阶段
|
||||
|
||||
默认情况下,这些阶段没有命名,您可以通过它们的整数序号来引用它们,从第一个 `FROM` 指令的 0 开始。不过,你也可以通过在 `FROM` 指令中使用 `as <NAME>` 来为阶段命名。以下示例通过命名阶段并在 `COPY` 指令中使用名称来改进前一个示例。这意味着,即使您的 `Dockerfile` 中的指令稍后重新排序,`COPY` 也不会出问题。
|
||||
|
||||
```
|
||||
FROM golang:1.7.3 as builder
|
||||
WORKDIR /go/src/github.com/alexellis/href-counter/
|
||||
RUN go get -d -v golang.org/x/net/html
|
||||
COPY app.go .
|
||||
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
|
||||
|
||||
FROM alpine:latest
|
||||
RUN apk --no-cache add ca-certificates
|
||||
WORKDIR /root/
|
||||
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
|
||||
CMD ["./app"]
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/
|
||||
|
||||
作者:[docker][a]
|
||||
译者:[iron0x](https://github.com/iron0x)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://docs.docker.com/engine/userguide/eng-image/multistage-build/
|
||||
[1]:https://twitter.com/alexellisuk
|
||||
[2]:http://blog.alexellis.io/mutli-stage-docker-builds/
|
published/19951001 Writing man Pages Using groff.md
@ -0,0 +1,179 @@
|
||||
使用 groff 编写 man 手册页
|
||||
===================
|
||||
|
||||
`groff` 是大多数 Unix 系统上所提供的流行的文本格式化工具 nroff/troff 的 GNU 版本。它一般用于编写手册页,即命令、编程接口等的在线文档。在本文中,我们将给你展示如何使用 `groff` 编写你自己的 man 手册页。
|
||||
|
||||
在 Unix 系统上最初有两个文本处理系统:troff 和 nroff,它们是由贝尔实验室为初始的 Unix 所开发的(事实上,开发 Unix 系统的部分原因就是为了支持这样的一个文本处理系统)。这个文本处理器的第一个版本被称作 roff(意为 “runoff”——径流);稍后出现了 troff,在那时用于为特定的<ruby>排字机<rt>Typesetter</rt></ruby>生成输出。nroff 是更晚一些的版本,它成为了各种 Unix 系统的标准文本处理器。groff 是 nroff 和 troff 的 GNU 实现,用在 Linux 系统上。它包括了几个扩展功能和一些打印设备的驱动程序。
|
||||
|
||||
`groff` 能够生成文档、文章和书籍,在很多方面它与其它的文本格式化系统(如 TeX)有异曲同工之处。然而,`groff`(以及原来的 nroff)有一个固有的功能是 TeX 及其变体所缺乏的:生成普通 ASCII 输出。其它的系统在生成打印的文档方面做得很好,而 `groff` 却能够生成可以在线浏览的普通 ASCII(甚至可以在最简单的打印机上直接以普通文本打印)。如果要生成在线浏览的文档以及打印的表单,`groff` 也许是你所需要的(虽然也有替代品,如 Texinfo、Lametex 等等)。
|
||||
|
||||
`groff` 还有一个好处是它比 TeX 小很多;它所需要的支持文件和可执行程序甚至比最小化的 TeX 版本都少。
|
||||
|
||||
`groff` 一个特定的用途是用于格式化 Unix 的 man 手册页。如果你是一个 Unix 程序员,你肯定需要编写和生成各种 man 手册页。在本文中,我们将通过编写一个简短的 man 手册页来介绍 `groff` 的使用。
|
||||
|
||||
像 TeX 一样,`groff` 使用特定的文本格式化语言来描述如何处理文本。这种语言比 TeX 之类的系统更加神秘一些,但是更加简洁。此外,`groff` 在基本的格式化器之上提供了几个宏软件包;这些宏软件包是为一些特定类型的文档所定制的。举个例子, mgs 宏对于写作文章或论文很适合,而 man 宏可用于 man 手册页。
|
||||
|
||||
### 编写 man 手册页
|
||||
|
||||
用 `groff` 编写 man 手册页十分简单。要让你的 man 手册页看起来和其它的一样,你需要从源头上遵循几个惯例,如下所示。在这个例子中,我们将为一个虚构的命令 `coffee` 编写 man 手册页,它用于以各种方式控制你的联网咖啡机。
|
||||
|
||||
使用任意文本编辑器,输入如下代码,并保存为 `coffee.man`。不要输入每行的行号,它们仅用于本文中的说明。
|
||||
|
||||
```
|
||||
.TH COFFEE 1 "23 March 94"
|
||||
.SH NAME
|
||||
coffee \- Control remote coffee machine
|
||||
.SH SYNOPSIS
|
||||
\fBcoffee\fP [ -h | -b ] [ -t \fItype\fP ]
|
||||
\fIamount\fP
|
||||
.SH DESCRIPTION
|
||||
\fBcoffee\fP queues a request to the remote
|
||||
coffee machine at the device \fB/dev/cf0\fR.
|
||||
The required \fIamount\fP argument specifies
|
||||
the number of cups, generally between 0 and
|
||||
12 on ISO standard coffee machines.
|
||||
.SS Options
|
||||
.TP
|
||||
\fB-h\fP
|
||||
Brew hot coffee. Cold is the default.
|
||||
.TP
|
||||
\fB-b\fP
|
||||
Burn coffee. Especially useful when executing
|
||||
\fBcoffee\fP on behalf of your boss.
|
||||
.TP
|
||||
\fB-t \fItype\fR
|
||||
Specify the type of coffee to brew, where
|
||||
\fItype\fP is one of \fBcolumbian\fP,
|
||||
\fBregular\fP, or \fBdecaf\fP.
|
||||
.SH FILES
|
||||
.TP
|
||||
\fC/dev/cf0\fR
|
||||
The remote coffee machine device
|
||||
.SH "SEE ALSO"
|
||||
milk(5), sugar(5)
|
||||
.SH BUGS
|
||||
May require human intervention if coffee
|
||||
supply is exhausted.
|
||||
```
|
||||
|
||||
*清单 1:示例 man 手册页源文件*
|
||||
|
||||
不要让这些晦涩的代码吓坏了你。字符串序列 `\fB`、`\fI` 和 `\fR` 分别用来改变字体为粗体、斜体和正体(罗马字体)。`\fP` 设置字体为前一个选择的字体。
|
||||
|
||||
其它的 `groff` <ruby>请求<rt>request</rt></ruby>以点(`.`)开头出现在行首。第 1 行中,我们看到的 `.TH` 请求用于设置该 man 手册页的标题为 `COFFEE`、man 的部分为 `1`、以及该 man 手册页的最新版本的日期。(说明,man 手册的第 1 部分用于用户命令、第 2 部分用于系统调用等等。使用 `man man` 命令了解各个部分)。
|
||||
|
||||
在第 2 行,`.SH` 请求用于标记一个<ruby>节<rt>section</rt></ruby>的开始,并给该节名称为 `NAME`。注意,大部分的 Unix man 手册页依次使用 `NAME`、 `SYNOPSIS`、`DESCRIPTION`、`FILES`、`SEE ALSO`、`NOTES`、`AUTHOR` 和 `BUGS` 等节,个别情况下也需要一些额外的可选节。这只是编写 man 手册页的惯例,并不强制所有软件都如此。
|
||||
|
||||
第 3 行给出命令的名称,并在一个横线(`-`)后给出简短描述。在 `NAME` 节使用这个格式以便你的 man 手册页可以加到 whatis 数据库中——它可以用于 `man -k` 或 `apropos` 命令。
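等这个手册页安装好、whatis 数据库也更新过之后(不同系统上用 `makewhatis` 或 `mandb` 命令,视发行版而定),就可以这样查找它了(输出格式因系统而异):

```
$ man -k coffee
coffee (1)           - Control remote coffee machine
```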
|
||||
|
||||
第 4-6 行我们给出了 `coffee` 命令格式的大纲。注意,斜体 `\fI...\fP` 用于表示命令行的参数,可选参数用方括号括起来。
|
||||
|
||||
第 7-12 行给出了该命令的摘要介绍。粗体通常用于表示程序或文件的名称。
|
||||
|
||||
在 13 行,使用 `.SS` 开始了一个名为 `Options` 的子节。
|
||||
|
||||
接着第 14-25 行是选项列表,会使用参数列表样式表示。参数列表中的每一项以 `.TP` 请求来标记;`.TP` 后的行是参数,再之后是该项的文本。例如,第 14-16 行:
|
||||
|
||||
```
|
||||
.TP
|
||||
\fB-h\fP
|
||||
Brew hot coffee. Cold is the default.
|
||||
```
|
||||
|
||||
将会显示如下:
|
||||
|
||||
```
|
||||
-h Brew hot coffee. Cold is the default.
|
||||
```
|
||||
|
||||
第 26-29 行创建该 man 手册页的 `FILES` 节,它用于描述该命令可能使用的文件。可以使用 `.TP` 请求来表示文件列表。
|
||||
|
||||
第 30-31 行,给出了 `SEE ALSO` 节,它提供了其它可以参考的 man 手册页。注意,第 30 行的 `.SH` 请求中 `"SEE ALSO"` 使用引号括起来,这是因为 `.SH` 使用第一个空格来分隔该节的标题。任何超过一个单词的标题都需要用引号括起来,成为一个单一参数。
|
||||
|
||||
最后,第 32-34 行,是 `BUGS` 节。
|
||||
|
||||
### 格式化和安装 man 手册页
|
||||
|
||||
为了在你的屏幕上查看这个手册页格式化的样式,你可以使用如下命令:
|
||||
|
||||
|
||||
```
|
||||
$ groff -Tascii -man coffee.man | more
|
||||
```
|
||||
|
||||
`-Tascii` 选项告诉 `groff` 生成普通 ASCII 输出;`-man` 告诉 `groff` 使用 man 手册页宏集合。如果一切正常,这个 man 手册页显示应该如下。
|
||||
|
||||
```
|
||||
COFFEE(1) COFFEE(1)
|
||||
NAME
|
||||
coffee - Control remote coffee machine
|
||||
SYNOPSIS
|
||||
coffee [ -h | -b ] [ -t type ] amount
|
||||
DESCRIPTION
|
||||
coffee queues a request to the remote coffee machine at
|
||||
the device /dev/cf0. The required amount argument speci-
|
||||
fies the number of cups, generally between 0 and 12 on ISO
|
||||
standard coffee machines.
|
||||
Options
|
||||
-h Brew hot coffee. Cold is the default.
|
||||
-b Burn coffee. Especially useful when executing cof-
|
||||
fee on behalf of your boss.
|
||||
-t type
|
||||
Specify the type of coffee to brew, where type is
|
||||
one of columbian, regular, or decaf.
|
||||
FILES
|
||||
/dev/cf0
|
||||
The remote coffee machine device
|
||||
SEE ALSO
|
||||
milk(5), sugar(5)
|
||||
BUGS
|
||||
May require human intervention if coffee supply is
|
||||
exhausted.
|
||||
```
|
||||
|
||||
*格式化的 man 手册页*
|
||||
|
||||
如之前提到过的,`groff` 能够生成其它类型的输出。使用 `-Tps` 选项替代 `-Tascii` 将会生成 PostScript 输出,你可以将其保存为文件,用 GhostView 查看,或用一个 PostScript 打印机打印出来。`-Tdvi` 会生成设备无关的 .dvi 输出,类似于 TeX 的输出。
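下面是生成并查看 PostScript 输出的一个示意(查看器以 GhostView 的 `gv` 命令为例,你的系统上的命令名可能不同):

```
$ groff -Tps -man coffee.man > coffee.ps
$ gv coffee.ps          # 用 GhostView 查看
$ lpr coffee.ps         # 或者发送到 PostScript 打印机
```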
|
||||
|
||||
如果你希望让别人在你的系统上也可以查看这个 man 手册页,你需要把这个 groff 源文件安装到其它用户的 `$MANPATH` 所包含的目录里面。标准的 man 手册页放在 `/usr/man`。第一部分的 man 手册页应该放在 `/usr/man/man1` 下,因此,使用命令:
|
||||
|
||||
```
|
||||
$ cp coffee.man /usr/man/man1/coffee.1
|
||||
```
|
||||
|
||||
这将安装该 man 手册页到 `/usr/man` 中供所有人使用(注意使用 `.1` 扩展名而不是 `.man`)。当接下来执行 `man coffee` 命令时,该 man 手册页会被自动重新格式化,并且可查看的文本会被保存到 `/usr/man/cat1/coffee.1.Z` 中。
|
||||
|
||||
如果你不能直接复制 man 手册页的源文件到 `/usr/man`(比如说你不是系统管理员),你可以创建你自己的 man 手册页目录树,并将其加入到你的 `$MANPATH`。`$MANPATH` 环境变量的格式同 `$PATH` 一样,举个例子,要添加目录 `/home/mdw/man` 到 `$MANPATH`,只需要:
|
||||
|
||||
```
|
||||
$ export MANPATH=/home/mdw/man:$MANPATH
|
||||
```
|
||||
|
||||
`groff` 和 man 手册页宏还有许多其它的选项和格式化命令。找到它们的最好办法是查看 `/usr/lib/groff` 中的文件;`tmac` 目录包含了宏文件,这些文件本身通常包含其所提供命令的文档。要让 `groff` 使用特定的宏集合,只需要使用 `-m <宏名>` 选项(通常也可以连写,比如下面例子中的 `-mgs`)。例如,要使用 mgs 宏,使用命令:
|
||||
|
||||
```
|
||||
groff -Tascii -mgs files...
|
||||
```
|
||||
|
||||
`groff` 的 man 手册页对这个选项描述了更多细节。
|
||||
|
||||
不幸的是,随同 `groff` 提供的宏集合没有完善的文档。第 7 部分的 man 手册页提供了一些,例如,`man 7 groff_mm` 会给你 mm 宏集合的信息。然而,该文档通常只覆盖了 `groff` 实现中与原版不同之处以及新增的功能,而假设你已经了解过原来的 nroff/troff 宏集合(称作 DWB:the Documentor's Work Bench)。最佳的信息来源或许是一本讲述那些经典宏集合细节的书。要了解更多编写 man 手册页的信息,你可以看看 man 手册页的源文件(在 `/usr/man` 中),并将它们与格式化后的输出进行比较。
|
||||
|
||||
这篇文章是《Running Linux》 中的一章,由 Matt Welsh 和 Lar Kaufman 著,奥莱理出版(ISBN 1-56592-100-3)。在本书中,还包括了 Linux 下使用的各种文本格式化系统的教程。这期的《Linux Journal》中的内容及《Running Linux》应该可以给你提供在 Linux 上使用各种文本工具的良好开端。
|
||||
|
||||
### 祝好,撰写快乐!
|
||||
|
||||
Matt Welsh ([mdw@cs.cornell.edu][1])是康奈尔大学的一名学生和系统程序员,在机器人和视觉实验室从事实时机器视觉研究。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxjournal.com/article/1158
|
||||
|
||||
作者:[Matt Welsh][a]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxjournal.com/user/800006
|
||||
[1]:mailto:mdw@cs.cornell.edu
|
@ -0,0 +1,219 @@
|
||||
在红帽企业版 Linux 中将系统服务容器化(一)
|
||||
====================
|
||||
|
||||
在 2017 年红帽峰会上,有几个人问我“我们通常用完整的虚拟机来隔离如 DNS 和 DHCP 等网络服务,那我们可以用容器来取而代之吗?”答案是可以的,下面是在当前红帽企业版 Linux 7 系统上创建一个系统容器的例子。
|
||||
|
||||
### 我们的目的
|
||||
|
||||
**创建一个可以独立于任何其它系统服务而更新的网络服务,并且可以从主机端容易地管理和更新。**
|
||||
|
||||
让我们来探究一下在容器中建立一个运行在 systemd 之下的 BIND 服务器。在这一部分,我们将了解到如何建立自己的容器以及管理 BIND 配置和数据文件。
|
||||
|
||||
在本系列的第二部分,我们将看到如何整合主机中的 systemd 和容器中的 systemd。我们将探究如何管理容器中的服务,并且使它作为一种主机中的服务。
|
||||
|
||||
### 创建 BIND 容器
|
||||
|
||||
为了使 systemd 在一个容器中轻松运行,我们首先需要在主机中增加两个包:`oci-register-machine` 和 `oci-systemd-hook`。`oci-systemd-hook` 这个钩子允许我们在一个容器中运行 systemd,而不需要使用特权容器或者手工配置 tmpfs 和 cgroups。`oci-register-machine` 这个钩子允许我们使用 systemd 工具如 `systemctl` 和 `machinectl` 来跟踪容器。
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# yum install oci-register-machine oci-systemd-hook
|
||||
```
|
||||
|
||||
回到创建我们的 BIND 容器上。[红帽企业版 Linux 7 基础镜像][6]包含了 systemd 作为其初始化系统。我们可以如我们在典型的系统中做的那样安装并激活 BIND。你可以从 [git 仓库中下载这份 Dockerfile][8]。
|
||||
|
||||
```
|
||||
[root@rhel7-host bind]# vi Dockerfile
|
||||
|
||||
# Dockerfile for BIND
|
||||
FROM registry.access.redhat.com/rhel7/rhel
|
||||
ENV container docker
|
||||
RUN yum -y install bind && \
|
||||
yum clean all && \
|
||||
systemctl enable named
|
||||
STOPSIGNAL SIGRTMIN+3
|
||||
EXPOSE 53
|
||||
EXPOSE 53/udp
|
||||
CMD [ "/sbin/init" ]
|
||||
```
|
||||
|
||||
因为我们以 PID 1 来启动一个初始化系统,当我们告诉容器停止时,需要改变 docker CLI 发送的信号。从 `kill` 系统调用手册中 (`man 2 kill`):
|
||||
|
||||
> 唯一可以发送给 PID 1 进程(即 init 进程)的信号,是那些初始化系统明确安装了<ruby>信号处理器<rt>signal handler</rt></ruby>的信号。这是为了避免系统被意外破坏。
|
||||
|
||||
对于 systemd 信号处理器,`SIGRTMIN+3` 是对应于 `systemd start halt.target` 的信号。我们也需要为 BIND 暴露 TCP 和 UDP 端口号,因为这两种协议可能都要使用。
|
||||
|
||||
### 管理数据
|
||||
|
||||
有了一个可以工作的 BIND 服务,我们还需要一种管理配置文件和区域文件的方法。目前这些都放在容器里面,所以我们任何时候都可以进入容器去更新配置或者改变一个区域文件。从管理的角度来说,这并不是很理想。当要更新 BIND 时,我们将需要重建这个容器,所以镜像中的改变将会丢失。任何时候我们需要更新一个文件或者重启服务时,都需要进入这个容器,而这增加了步骤和时间。
|
||||
|
||||
相反的,我们将从这个容器中提取出配置文件和数据文件,把它们拷贝到主机上,然后在运行的时候挂载它们。用这种方式我们可以很容易地重启或者重建容器,而不会丢失所做出的更改。我们也可以使用容器外的编辑器来更改配置和区域文件。因为这个容器的数据看起来像“该系统所提供服务的特定站点数据”,让我们遵循 Linux <ruby>文件系统层次标准<rt>File System Hierarchy</rt></ruby>,并在当前主机上创建 `/srv/named` 目录来保持管理权分离。
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# mkdir -p /srv/named/etc
|
||||
|
||||
[root@rhel7-host ~]# mkdir -p /srv/named/var/named
|
||||
```
|
||||
|
||||
*提示:如果你正在迁移一个已有的配置文件,你可以跳过下面的步骤并且将它直接拷贝到 `/srv/named` 目录下。你也许仍然要用一个临时容器来检查一下分配给这个容器的 GID。*
|
||||
|
||||
让我们建立并运行一个临时容器来检查 BIND。在将 init 进程以 PID 1 运行时,我们不能交互地运行这个容器来获取一个 shell。我们会在容器启动后执行 shell,并且使用 `rpm` 命令来检查重要文件。
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# docker build -t named .
|
||||
|
||||
[root@rhel7-host ~]# docker exec -it $( docker run -d named ) /bin/bash
|
||||
|
||||
[root@0e77ce00405e /]# rpm -ql bind
|
||||
```
|
||||
|
||||
对于这个例子来说,我们将需要 `/etc/named.conf` 和 `/var/named/` 目录下的任何文件。我们可以使用 `machinectl` 命令来提取它们。如果注册了一个以上的容器,我们可以在任一机器上使用 `machinectl status` 命令来查看运行的是什么。一旦有了这些配置,我们就可以终止这个临时容器了。
|
||||
|
||||
*如果你喜欢,资源库中也有一个[样例 `named.conf` 和针对 `example.com` 的区域文件][8]。*
|
||||
|
||||
```
|
||||
[root@rhel7-host bind]# machinectl list
|
||||
|
||||
MACHINE CLASS SERVICE
|
||||
8824c90294d5a36d396c8ab35167937f container docker
|
||||
|
||||
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /etc/named.conf /srv/named/etc/named.conf
|
||||
|
||||
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /var/named /srv/named/var/named
|
||||
|
||||
[root@rhel7-host ~]# docker stop infallible_wescoff
|
||||
```
|
||||
|
||||
### 最终的创建
|
||||
|
||||
为了创建和运行最终的容器,添加卷选项以挂载:
|
||||
|
||||
- 将文件 `/srv/named/etc/named.conf` 映射为 `/etc/named.conf`
|
||||
- 将目录 `/srv/named/var/named` 映射为 `/var/named`
|
||||
|
||||
因为这是我们最终的容器,我们将提供一个有意义的名字,以供我们以后引用。
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# docker run -d -p 53:53 -p 53:53/udp -v /srv/named/etc/named.conf:/etc/named.conf:Z -v /srv/named/var/named:/var/named:Z --name named-container named
|
||||
```
|
||||
|
||||
在最终容器运行时,我们可以更改本机配置来改变这个容器中 BIND 的行为。这个 BIND 服务器将需要在这个容器分配的任何 IP 上监听。请确保任何新文件的 GID 与来自这个容器中的其余的 BIND 文件相匹配。
|
||||
|
||||
```
|
||||
[root@rhel7-host bind]# cp named.conf /srv/named/etc/named.conf
|
||||
|
||||
[root@rhel7-host ~]# cp example.com.zone /srv/named/var/named/example.com.zone
|
||||
|
||||
[root@rhel7-host ~]# cp example.com.rr.zone /srv/named/var/named/example.com.rr.zone
|
||||
```
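比如,可以先用 `ls -ln` 看看从容器中拷贝出来的文件使用的是哪个 GID,再让新文件保持一致(下面的 GID `25` 只是示例,请以你系统上的实际输出为准):

```
$ ls -ln /srv/named/var/named
# 第四列即为 GID,这里假设输出显示为 25
$ sudo chgrp 25 /srv/named/var/named/example.com.zone
$ sudo chgrp 25 /srv/named/var/named/example.com.rr.zone
```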
|
||||
|
||||
> 很好奇为什么我不需要在主机目录中改变 SELinux 上下文?^注1
|
||||
|
||||
我们将运行这个容器提供的 `rndc` 二进制文件重新加载配置。我们可以使用 `journald` 以同样的方式检查 BIND 日志。如果运行出现错误,你可以在主机中编辑该文件,并且重新加载配置。在主机中使用 `host` 或 `dig`,我们可以检查来自该容器化服务的 example.com 的响应。
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# docker exec -it named-container rndc reload
|
||||
server reload successful
|
||||
|
||||
[root@rhel7-host ~]# docker exec -it named-container journalctl -u named -n
|
||||
-- Logs begin at Fri 2017-05-12 19:15:18 UTC, end at Fri 2017-05-12 19:29:17 UTC. --
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 9.E.F.IP6.ARPA
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: A.E.F.IP6.ARPA
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: B.E.F.IP6.ARPA
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 8.B.D.0.1.0.0.2.IP6.ARPA
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: reloading configuration succeeded
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: reloading zones succeeded
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: loaded serial 2001062601
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: sending notifies (serial 2001062601)
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: all zones loaded
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: running
|
||||
|
||||
[root@rhel7-host bind]# host www.example.com localhost
|
||||
Using domain server:
|
||||
Name: localhost
|
||||
Address: ::1#53
|
||||
Aliases:
|
||||
www.example.com is an alias for server1.example.com.
|
||||
server1.example.com is an alias for mail
|
||||
```
|
||||
|
||||
> 你的区域文件没有更新吗?可能是因为你的编辑器,而不是序列号。^注2
|
||||
|
||||
### 终点线
|
||||
|
||||
我们已经达成了我们打算完成的目标,从容器中为 DNS 请求和区域文件提供服务。我们已经得到一个持久化的位置来管理更新和配置,并且更新后该配置不变。
|
||||
|
||||
在这个系列的第二部分,我们将看到怎样将一个容器看作为主机中的一个普通服务来运行。
|
||||
|
||||
---
|
||||
|
||||
[关注 RHEL 博客](http://redhatstackblog.wordpress.com/feed/),通过电子邮件来获得本系列第二部分和其它新文章的更新。
|
||||
|
||||
---
|
||||
|
||||
### 附加资源
|
||||
|
||||
- **所附带文件的 Github 仓库:** [https://github.com/nzwulfin/named-container](https://github.com/nzwulfin/named-container)
|
||||
- **注1:** **通过容器访问本地文件的 SELinux 上下文**
|
||||
|
||||
你可能已经注意到当我从容器向本地主机拷贝文件时,我没有运行 `chcon` 将主机中的文件类型改变为 `svirt_sandbox_file_t`。为什么它没有出错?将一个文件拷贝到 `/srv` 会将这个文件标记为类型 `var_t`。我 `setenforce 0` (关闭 SELinux)了吗?
|
||||
|
||||
当然没有,这将让 [Dan Walsh 大哭](https://stopdisablingselinux.com/)(LCTT 译注:RedHat 的 SELinux 团队负责人,倡议不要禁用 SELinux)。是的,`machinectl` 确实将文件标记类型设置为期望的那样,可以看一下:
|
||||
|
||||
启动一个容器之前:
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
|
||||
-rw-r-----. unconfined_u:object_r:var_t:s0 /srv/named/etc/named.conf
|
||||
```
|
||||
|
||||
不过,运行中我使用了一个能让 Dan Walsh 先生高兴起来的卷选项:`:Z`。`-v /srv/named/etc/named.conf:/etc/named.conf:Z` 命令的这部分做了两件事情:首先它表示需要使用一个私有卷的 SELinux 标记来重新标记;其次它表明以读写模式挂载。
|
||||
|
||||
启动容器之后:
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
|
||||
-rw-r-----. root 25 system_u:object_r:svirt_sandbox_file_t:s0:c821,c956 /srv/named/etc/named.conf
|
||||
```
|
||||
|
||||
- **注2:** **VIM 备份行为能改变 inode**
|
||||
|
||||
如果你在本地主机中使用 `vim` 来编辑配置文件,而你没有看到容器中的改变,你可能不经意的创建了容器感知不到的新文件。在编辑时,有三种 `vim` 设定影响备份副本:`backup`、`writebackup` 和 `backupcopy`。
|
||||
|
||||
我摘录了 RHEL 7 中的来自官方 VIM [backup_table][9] 中的默认配置。
|
||||
|
||||
```
|
||||
backup writebackup
|
||||
off on backup current file, deleted afterwards (default)
|
||||
```
|
||||
也就是说,vim 会创建备份,只是不会留下残留的 `~` 副本(备份在写入完成后即被删除)。另外的设定是 `backupcopy`,`auto` 是默认的设置:
|
||||
|
||||
```
|
||||
"yes" make a copy of the file and overwrite the original one
|
||||
"no" rename the file and write a new one
|
||||
"auto" one of the previous, what works best
|
||||
```
|
||||
|
||||
这种组合设定意味着当你编辑一个文件时,除非 `vim` 有理由(请查看文档了解其逻辑),你将会得到一个包含你编辑内容的新文件,当你保存时它会重命名为原先的文件。这意味着这个文件获得了新的 inode。对于大多数情况,这不是问题,但是这里容器的<ruby>绑定挂载<rt>bind mount</rt></ruby>对 inode 的改变很敏感。为了解决这个问题,你需要改变 `backupcopy` 的行为。
|
||||
|
||||
不管是在 `vim` 会话中还是在你的 `.vimrc`中,请添加 `set backupcopy=yes`。这将确保原先的文件被清空并覆写,维持了 inode 不变并且将该改变传递到了容器中。
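例如,可以直接把这条设置追加到你的 vim 配置文件中(这里假设是 `~/.vimrc`):

```
$ echo 'set backupcopy=yes' >> ~/.vimrc
```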
|
||||
|
||||
------------
|
||||
|
||||
via: http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
|
||||
|
||||
作者:[Matt Micene][a]
|
||||
译者:[liuxinyu123](https://github.com/liuxinyu123)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
|
||||
[1]:http://rhelblog.redhat.com/author/mmicenerht/
|
||||
[2]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo
|
||||
[3]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_1
|
||||
[4]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_2
|
||||
[5]:http://redhatstackblog.wordpress.com/feed/
|
||||
[6]:https://access.redhat.com/containers
|
||||
[7]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo
|
||||
[8]:https://github.com/nzwulfin/named-container
|
||||
[9]:http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table
|
@ -0,0 +1,75 @@
|
||||
在 Linux 启动或重启时执行命令与脚本
|
||||
======
|
||||
|
||||
有时可能会需要在重启时或者每次系统启动时运行某些命令或者脚本。我们要怎样做呢?本文中我们就对此进行讨论。 我们会用两种方法来描述如何在 CentOS/RHEL 以及 Ubuntu 系统上做到重启或者系统启动时执行命令和脚本。 两种方法都通过了测试。
|
||||
|
||||
### 方法 1 – 使用 rc.local
|
||||
|
||||
这种方法会利用 `/etc/` 中的 `rc.local` 文件来在启动时执行脚本与命令。我们在文件中加上一行来执行脚本,这样每次启动系统时,都会执行该脚本。
|
||||
|
||||
不过我们首先需要为 `/etc/rc.local` 添加执行权限,
|
||||
|
||||
```
|
||||
$ sudo chmod +x /etc/rc.local
|
||||
```
|
||||
|
||||
然后将要执行的脚本加入其中:
|
||||
|
||||
```
|
||||
$ sudo vi /etc/rc.local
|
||||
```
|
||||
|
||||
在文件最后加上:
|
||||
|
||||
```
|
||||
sh /root/script.sh &
|
||||
```
|
||||
|
||||
然后保存文件并退出。使用 `rc.local` 文件来执行命令也是一样的,但是一定要记得填写命令的完整路径。 想知道命令的完整路径可以运行:
|
||||
|
||||
```
|
||||
$ which command
|
||||
```
|
||||
|
||||
比如:
|
||||
|
||||
```
|
||||
$ which shutter
|
||||
/usr/bin/shutter
|
||||
```
|
||||
|
||||
如果是 CentOS,我们修改的是文件 `/etc/rc.d/rc.local` 而不是 `/etc/rc.local`。 不过我们也需要先为该文件添加可执行权限。
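对应的命令如下(以 CentOS 7 上的默认路径为例):

```
$ sudo chmod +x /etc/rc.d/rc.local
```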
|
||||
|
||||
注意:启动时执行的脚本,请一定保证是以 `exit 0` 结尾的。
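综合起来,一个符合上述要求的最小 `/etc/rc.local` 大致如下(其中的脚本路径仅为示例):

```
#!/bin/sh
# 在系统启动时后台执行自定义脚本(路径为示例)
sh /root/script.sh &
exit 0
```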
|
||||
|
||||
### 方法 2 – 使用 Crontab
|
||||
|
||||
该方法最简单了。我们创建一个 cron 任务,这个任务在系统启动后等待 90 秒,然后执行命令和脚本。
|
||||
|
||||
要创建 cron 任务,打开终端并执行
|
||||
|
||||
```
|
||||
$ crontab -e
|
||||
```
|
||||
|
||||
然后输入下行内容,
|
||||
|
||||
```
|
||||
@reboot ( sleep 90 ; sh /location/script.sh )
|
||||
```
|
||||
|
||||
这里 `/location/script.sh` 就是待执行脚本的路径。
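保存退出后,可以用 `crontab -l` 确认这条任务已经添加成功,输出中应该能看到刚才的那一行:

```
$ crontab -l
@reboot ( sleep 90 ; sh /location/script.sh )
```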
|
||||
|
||||
我们的文章至此就完了。如有疑问,欢迎留言。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/executing-commands-scripts-at-reboot/
|
||||
|
||||
作者:[Shusain][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
published/20170922 How to disable USB storage on Linux.md
@ -0,0 +1,72 @@
|
||||
Linux 上如何禁用 USB 存储
|
||||
======
|
||||
|
||||
为了保护数据不被泄漏,我们使用软件和硬件防火墙来限制外部未经授权的访问,但是数据泄露也可能发生在内部。 为了消除这种可能性,机构会限制和监测访问互联网,同时禁用 USB 存储设备。
|
||||
|
||||
在本教程中,我们将讨论三种不同的方法来禁用 Linux 机器上的 USB 存储设备。所有这三种方法都在 CentOS 6&7 机器上通过测试。那么让我们一一讨论这三种方法,
|
||||
|
||||
(另请阅读: [Ultimate guide to securing SSH sessions][1])
|
||||
|
||||
### 方法 1 – 伪安装
|
||||
|
||||
在本方法中,我们往配置文件中添加一行 `install usb-storage /bin/true`,这会让安装 usb-storage 模块的操作实际上变成运行 `/bin/true`,这也是为什么这种方法叫做“伪安装”的原因。具体做法是,在 `/etc/modprobe.d` 目录下创建并打开一个名为 `block_usb.conf`(也可以叫其它名字)的文件:
|
||||
|
||||
```
|
||||
$ sudo vim /etc/modprobe.d/block_usb.conf
|
||||
```
|
||||
|
||||
然后将下行内容添加进去:
|
||||
|
||||
```
|
||||
install usb-storage /bin/true
|
||||
```
|
||||
|
||||
最后保存文件并退出。
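保存后可以做个快速验证:手动加载模块不会报错(因为实际执行的是 `/bin/true`),但 `lsmod` 的输出里不会出现 `usb_storage`(如果该模块此前已经加载,需要先 `rmmod` 或重启后再验证):

```
$ sudo modprobe usb-storage
$ lsmod | grep usb_storage
```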
|
||||
|
||||
### 方法 2 – 删除 USB 驱动
|
||||
|
||||
这种方法要求我们将 USB 存储的驱动程序(`usb_storage.ko`)删掉或者移走,从而达到无法再访问 USB 存储设备的目的。 执行下面命令可以将驱动从它默认的位置移走:
|
||||
|
||||
```
|
||||
$ sudo mv /lib/modules/$(uname -r)/kernel/drivers/usb/storage/usb-storage.ko /home/user1
|
||||
```
|
||||
|
||||
现在在默认的位置上无法再找到驱动程序了,因此当 USB 存储器连接到系统上时也就无法加载到驱动程序了,从而导致磁盘不可用。 但是这个方法有一个小问题,那就是当系统内核更新的时候,`usb-storage` 模块会再次出现在它的默认位置。
|
||||
|
||||
### 方法 3 - 将 USB 存储器纳入黑名单
|
||||
|
||||
我们也可以通过 `/etc/modprobe.d/blacklist.conf` 文件将 usb-storage 纳入黑名单。这个文件在 RHEL/CentOS 6 是现成就有的,但在 7 上可能需要自己创建。 要将 USB 存储列入黑名单,请使用 vim 打开/创建上述文件:
|
||||
|
||||
```
|
||||
$ sudo vim /etc/modprobe.d/blacklist.conf
|
||||
```
|
||||
|
||||
并输入以下行将 USB 纳入黑名单:
|
||||
|
||||
```
|
||||
blacklist usb-storage
|
||||
```
|
||||
|
||||
保存文件并退出。这样 `usb-storage` 就会被系统阻止加载,但这种方法有一个很大的缺点,即任何特权用户都可以通过执行以下命令来加载 `usb-storage` 模块:
|
||||
|
||||
```
|
||||
$ sudo modprobe usb-storage
|
||||
```
|
||||
|
||||
这个问题使得这个方法不是那么理想,但是对于非特权用户来说,这个方法效果很好。
|
||||
|
||||
在更改完成后重新启动系统,以使更改生效。请尝试用这些方法来禁用 USB 存储,如果您遇到任何问题或有什么疑问,请告知我们。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/disable-usb-storage-linux/
|
||||
|
||||
作者:[Shusain][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject)原创编译,[Linux 中国](https://linux.cn/)荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/ultimate-guide-to-securing-ssh-sessions/
|
@ -1,9 +1,9 @@
|
||||
并发服务器(3) —— 事件驱动
|
||||
并发服务器(三):事件驱动
|
||||
============================================================
|
||||
|
||||
这是《并发服务器》系列的第三节。[第一节][26] 介绍了阻塞式编程,[第二节 —— 线程][27] 探讨了多线程,将其作为一种可行的方法来实现服务器并发编程。
|
||||
这是并发服务器系列的第三节。[第一节][26] 介绍了阻塞式编程,[第二节:线程][27] 探讨了多线程,将其作为一种可行的方法来实现服务器并发编程。
|
||||
|
||||
另一种常见的实现并发的方法叫做 _事件驱动编程_,也可以叫做 _异步_ 编程 [^注1][28]。这种方法变化万千,因此我们会从最基本的开始,使用一些基本的 APIs 而非从封装好的高级方法开始。本系列以后的文章会讲高层次抽象,还有各种混合的方法。
|
||||
另一种常见的实现并发的方法叫做 _事件驱动编程_,也可以叫做 _异步_ 编程 ^注1 。这种方法变化万千,因此我们会从最基本的开始,使用一些基本的 API 而非从封装好的高级方法开始。本系列以后的文章会讲高层次抽象,还有各种混合的方法。
|
||||
|
||||
本系列的所有文章:
|
||||
|
||||
@ -13,13 +13,13 @@
|
||||
|
||||
### 阻塞式 vs. 非阻塞式 I/O
|
||||
|
||||
要介绍这个标题,我们先讲讲阻塞和非阻塞 I/O 的区别。阻塞式 I/O 更好理解,因为这是我们使用 I/O 相关 API 时的“标准”方式。从套接字接收数据的时候,调用 `recv` 函数会发生 _阻塞_,直到它从端口上接收到了来自另一端套接字的数据。这恰恰是第一部分讲到的顺序服务器的问题。
|
||||
作为本篇的介绍,我们先讲讲阻塞和非阻塞 I/O 的区别。阻塞式 I/O 更好理解,因为这是我们使用 I/O 相关 API 时的“标准”方式。从套接字接收数据的时候,调用 `recv` 函数会发生 _阻塞_,直到它从端口上接收到了来自另一端套接字的数据。这恰恰是第一部分讲到的顺序服务器的问题。
|
||||
|
||||
因此阻塞式 I/O 存在着固有的性能问题。第二节里我们讲过一种解决方法,就是用多线程。哪怕一个线程的 I/O 阻塞了,别的线程仍然可以使用 CPU 资源。实际上,阻塞 I/O 通常在利用资源方面非常高效,因为线程就等待着 —— 操作系统将线程变成休眠状态,只有满足了线程需要的条件才会被唤醒。
|
||||
|
||||
_非阻塞式_ I/O 是另一种思路。把套接字设成非阻塞模式时,调用 `recv` 时(还有 `send`,但是我们现在只考虑接收),函数返回地会很快,哪怕没有数据要接收。这时,就会返回一个特殊的错误状态 ^[注2][15] 来通知调用者,此时没有数据传进来。调用者可以去做其他的事情,或者尝试再次调用 `recv` 函数。
|
||||
_非阻塞式_ I/O 是另一种思路。把套接字设成非阻塞模式时,调用 `recv` 时(还有 `send`,但是我们现在只考虑接收),函数返回的会很快,哪怕没有接收到数据。这时,就会返回一个特殊的错误状态 ^注2 来通知调用者,此时没有数据传进来。调用者可以去做其他的事情,或者尝试再次调用 `recv` 函数。
|
||||
|
||||
证明阻塞式和非阻塞式的 `recv` 区别的最好方式就是贴一段示例代码。这里有个监听套接字的小程序,一直在 `recv` 这里阻塞着;当 `recv` 返回了数据,程序就报告接收到了多少个字节 ^[注3][16]:
|
||||
示范阻塞式和非阻塞式的 `recv` 区别的最好方式就是贴一段示例代码。这里有个监听套接字的小程序,一直在 `recv` 这里阻塞着;当 `recv` 返回了数据,程序就报告接收到了多少个字节 ^注3 :
|
||||
|
||||
```
|
||||
int main(int argc, const char** argv) {
|
||||
@ -69,8 +69,7 @@ hello # wait for 2 seconds after typing this
|
||||
socket world
|
||||
^D # to end the connection>
|
||||
```
|
||||
|
||||
The listening program will print the following:
|
||||
|
||||
监听程序会输出以下内容:
|
||||
|
||||
```
|
||||
@ -144,7 +143,6 @@ int main(int argc, const char** argv) {
|
||||
这里与阻塞版本有些差异,值得注意:
|
||||
|
||||
1. `accept` 函数返回的 `newsocktfd` 套接字因调用了 `fcntl`, 被设置成非阻塞的模式。
|
||||
|
||||
2. 检查 `recv` 的返回状态时,我们对 `errno` 进行了检查,判断它是否被设置成表示没有可供接收的数据的状态。这时,我们仅仅是休眠了 200 毫秒然后进入到下一轮循环。
|
||||
|
||||
同样用 `nc` 进行测试,以下是非阻塞监听器的输出:
|
||||
@ -183,19 +181,19 @@ Peer disconnected; I'm done.
|
||||
|
||||
作为练习,给输出添加一个时间戳,确认调用 `recv` 得到结果之间花费的时间是比输入到 `nc` 中所用的多还是少(每一轮是 200 ms)。
|
||||
|
||||
这里就实现了使用非阻塞的 `recv` 让监听者检查套接字变为可能,并且在没有数据的时候重新获得控制权。换句话说,这就是 _polling(轮询)_ —— 主程序周期性的查询套接字以便读取数据。
|
||||
这里就实现了使用非阻塞的 `recv` 让监听者检查套接字变为可能,并且在没有数据的时候重新获得控制权。换句话说,用编程的语言说这就是 <ruby>轮询<rt>polling</rt></ruby> —— 主程序周期性的查询套接字以便读取数据。
|
||||
|
||||
对于顺序响应的问题,这似乎是个可行的方法。非阻塞的 `recv` 让同时与多个套接字通信变成可能,轮询这些套接字,仅当有新数据到来时才处理。就是这样,这种方式 _可以_ 用来写并发服务器;但实际上一般不这么做,因为轮询的方式很难扩展。
|
||||
|
||||
首先,我在代码中引入的 200 ms 延迟对于记录非常好(监听器在我输入 `nc` 之间只打印几行 “Calling recv...”,但实际上应该有上千行)。但它也增加了多达 200 ms 的服务器响应时间,这几乎是意料不到的。实际的程序中,延迟会低得多,休眠时间越短,进程占用的 CPU 资源就越多。有些时钟周期只是浪费在等待,这并不好,尤其是在移动设备上,这些设备的电量往往有限。
|
||||
首先,我在代码中引入的 200ms 延迟对于演示非常好(监听器在我输入 `nc` 之间只打印几行 “Calling recv...”,但实际上应该有上千行)。但它也增加了多达 200ms 的服务器响应时间,这无疑是不必要的。实际的程序中,延迟会低得多,休眠时间越短,进程占用的 CPU 资源就越多。有些时钟周期只是浪费在等待,这并不好,尤其是在移动设备上,这些设备的电量往往有限。
|
||||
|
||||
但是当我们实际这样来使用多个套接字的时候,更严重的问题出现了。想像下监听器正在同时处理 1000 个 客户端。这意味着每一个循环迭代里面,它都得为 _这 1000 个套接字中的每一个_ 执行一遍非阻塞的 `recv`,找到其中准备好了数据的那一个。这非常低效,并且极大的限制了服务器能够并发处理的客户端数。这里有个准则:每次轮询之间等待的间隔越久,服务器响应性越差;而等待的时间越少,CPU 在无用的轮询上耗费的资源越多。
|
||||
但是当我们实际这样来使用多个套接字的时候,更严重的问题出现了。想像下监听器正在同时处理 1000 个客户端。这意味着每一个循环迭代里面,它都得为 _这 1000 个套接字中的每一个_ 执行一遍非阻塞的 `recv`,找到其中准备好了数据的那一个。这非常低效,并且极大的限制了服务器能够并发处理的客户端数。这里有个准则:每次轮询之间等待的间隔越久,服务器响应性越差;而等待的时间越少,CPU 在无用的轮询上耗费的资源越多。
|
||||
|
||||
讲真,所有的轮询都像是无用功。当然操作系统应该是知道哪个套接字是准备好了数据的,因此没必要逐个扫描。事实上,就是这样,接下来就会讲一些API,让我们可以更优雅地处理多个客户端。
|
||||
讲真,所有的轮询都像是无用功。当然操作系统应该是知道哪个套接字是准备好了数据的,因此没必要逐个扫描。事实上,就是这样,接下来就会讲一些 API,让我们可以更优雅地处理多个客户端。
|
||||
|
||||
### select
|
||||
|
||||
`select` 的系统调用是轻便的(POSIX),标准 Unix API 中常有的部分。它是为上一节最后一部分描述的问题而设计的 —— 允许一个线程可以监视许多文件描述符 ^[注4][17] 的变化,不用在轮询中执行不必要的代码。我并不打算在这里引入一个关于 `select` 的理解性的教程,有很多网站和书籍讲这个,但是在涉及到问题的相关内容时,我会介绍一下它的 API,然后再展示一个非常复杂的例子。
|
||||
`select` 的系统调用是可移植的(POSIX),是标准 Unix API 中常有的部分。它是为上一节最后一部分描述的问题而设计的 —— 允许一个线程可以监视许多文件描述符 ^注4 的变化,而不用在轮询中执行不必要的代码。我并不打算在这里引入一个关于 `select` 的全面教程,有很多网站和书籍讲这个,但是在涉及到问题的相关内容时,我会介绍一下它的 API,然后再展示一个非常复杂的例子。
|
||||
|
||||
`select` 允许 _多路 I/O_,监视多个文件描述符,查看其中任何一个的 I/O 是否可用。
|
||||
|
||||
@ -209,30 +207,25 @@ int select(int nfds, fd_set *readfds, fd_set *writefds,
|
||||
`select` 的调用过程如下:
|
||||
|
||||
1. 在调用之前,用户先要为所有不同种类的要监视的文件描述符创建 `fd_set` 实例。如果想要同时监视读取和写入事件,`readfds` 和 `writefds` 都要被创建并且引用。
|
||||
|
||||
2. 用户可以使用 `FD_SET` 来设置集合中想要监视的特殊描述符。例如,如果想要监视描述符 2、7 和 10 的读取事件,在 `readfds` 这里调用三次 `FD_SET`,分别设置 2、7 和 10。
|
||||
|
||||
3. `select` 被调用。
|
||||
|
||||
4. 当 `select` 返回时(现在先不管超时),就是说集合中有多少个文件描述符已经就绪了。它也修改 `readfds` 和 `writefds` 集合,来标记这些准备好的描述符。其它所有的描述符都会被清空。
|
||||
|
||||
5. 这时用户需要遍历 `readfds` 和 `writefds`,找到哪个描述符就绪了(使用 `FD_ISSET`)。
|
||||
|
||||
作为完整的例子,我在并发的服务器程序上使用 `select`,重新实现了我们之前的协议。[完整的代码在这里][18];接下来的是代码中的高亮,还有注释。警告:示例代码非常复杂,因此第一次看的时候,如果没有足够的时间,快速浏览也没有关系。
|
||||
作为完整的例子,我在并发的服务器程序上使用 `select`,重新实现了我们之前的协议。[完整的代码在这里][18];接下来的是代码中的重点部分及注释。警告:示例代码非常复杂,因此第一次看的时候,如果没有足够的时间,快速浏览也没有关系。
|
||||
|
||||
### 使用 select 的并发服务器
|
||||
|
||||
使用 I/O 的多发 API 诸如 `select` 会给我们服务器的设计带来一些限制;这不会马上显现出来,但这值得探讨,因为它们是理解事件驱动编程到底是什么的关键。
|
||||
|
||||
最重要的是,要记住这种方法本质上是单线程的 ^[注5][19]。服务器实际上在 _同一时刻只能做一件事_。因为我们想要同时处理多个客户端请求,我们需要换一种方式重构代码。
|
||||
最重要的是,要记住这种方法本质上是单线程的 ^注5 。服务器实际上在 _同一时刻只能做一件事_。因为我们想要同时处理多个客户端请求,我们需要换一种方式重构代码。
|
||||
|
||||
首先,让我们谈谈主循环。它看起来是什么样的呢?先让我们想象一下服务器有一堆任务,它应该监视哪些东西呢?两种类型的套接字活动:
|
||||
|
||||
1. 新客户端尝试连接。这些客户端应该被 `accept`。
|
||||
|
||||
2. 已连接的客户端发送数据。这个数据要用 [第一节][11] 中所讲到的协议进行传输,有可能会有一些数据要被回送给客户端。
|
||||
|
||||
尽管这两种活动在本质上有所区别,我们还是要把他们放在一个循环里,因为只能有一个主循环。循环会包含 `select` 的调用。这个 `select` 的调用会监视上述的两种活动。
|
||||
尽管这两种活动在本质上有所区别,我们还是要把它们放在一个循环里,因为只能有一个主循环。循环会包含 `select` 的调用。这个 `select` 的调用会监视上述的两种活动。
|
||||
|
||||
这里是部分代码,设置了文件描述符集合,并在主循环里转到被调用的 `select` 部分。
|
||||
|
||||
@ -264,9 +257,7 @@ while (1) {
|
||||
这里的一些要点:
|
||||
|
||||
1. 由于每次调用 `select` 都会重写传递给函数的集合,调用器就得维护一个 “master” 集合,在循环迭代中,保持对所监视的所有活跃的套接字的追踪。
|
||||
|
||||
2. 注意我们所关心的,最开始的唯一那个套接字是怎么变成 `listener_sockfd` 的,这就是最开始的套接字,服务器借此来接收新客户端的连接。
|
||||
|
||||
3. `select` 的返回值,是在作为参数传递的集合中,那些已经就绪的描述符的个数。`select` 修改这个集合,用来标记就绪的描述符。下一步是在这些描述符中进行迭代。
|
||||
|
||||
```
|
||||
@ -298,7 +289,7 @@ for (int fd = 0; fd <= fdset_max && nready > 0; fd++) {
|
||||
}
|
||||
```
|
||||
|
||||
这部分循环检查 _可读的_ 描述符。让我们跳过监听器套接字(要浏览所有内容,[看这个代码][20]) 然后看看当其中一个客户端准备好了之后会发生什么。出现了这种情况后,我们调用一个叫做 `on_peer_ready_recv` 的 _回调_ 函数,传入相应的文件描述符。这个调用意味着客户端连接到套接字上,发送某些数据,并且对套接字上 `recv` 的调用不会被阻塞 ^[注6][21]。这个回调函数返回结构体 `fd_status_t`。
|
||||
这部分循环检查 _可读的_ 描述符。让我们跳过监听器套接字(要浏览所有内容,[看这个代码][20]) 然后看看当其中一个客户端准备好了之后会发生什么。出现了这种情况后,我们调用一个叫做 `on_peer_ready_recv` 的 _回调_ 函数,传入相应的文件描述符。这个调用意味着客户端连接到套接字上,发送某些数据,并且对套接字上 `recv` 的调用不会被阻塞 ^注6 。这个回调函数返回结构体 `fd_status_t`。
|
||||
|
||||
```
|
||||
typedef struct {
|
||||
@ -307,7 +298,7 @@ typedef struct {
|
||||
} fd_status_t;
|
||||
```
|
||||
|
||||
这个结构体告诉主循环,是否应该监视套接字的读取事件,写入事件,或者两者都监视。上述代码展示了 `FD_SET` 和 `FD_CLR` 是怎么在合适的描述符集合中被调用的。对于主循环中某个准备好了写入数据的描述符,代码是类似的,除了它所调用的回调函数,这个回调函数叫做 `on_peer_ready_send`。
|
||||
这个结构体告诉主循环,是否应该监视套接字的读取事件、写入事件,或者两者都监视。上述代码展示了 `FD_SET` 和 `FD_CLR` 是怎么在合适的描述符集合中被调用的。对于主循环中某个准备好了写入数据的描述符,代码是类似的,除了它所调用的回调函数,这个回调函数叫做 `on_peer_ready_send`。
|
||||
|
||||
现在来花点时间看看这个回调:
|
||||
|
||||
@ -464,37 +455,36 @@ INFO:2017-09-26 05:29:18,070:conn0 disconnecting
|
||||
INFO:2017-09-26 05:29:18,070:conn2 disconnecting
|
||||
```
|
||||
|
||||
和线程的情况相似,客户端之间没有延迟,他们被同时处理。而且在 `select-server` 也没有用线程!主循环 _多路_ 处理所有的客户端,通过高效使用 `select` 轮询多个套接字。回想下 [第二节中][22] 顺序的 vs 多线程的客户端处理过程的图片。对于我们的 `select-server`,三个客户端的处理流程像这样:
|
||||
和线程的情况相似,客户端之间没有延迟,它们被同时处理。而且在 `select-server` 也没有用线程!主循环 _多路_ 处理所有的客户端,通过高效使用 `select` 轮询多个套接字。回想下 [第二节中][22] 顺序的 vs 多线程的客户端处理过程的图片。对于我们的 `select-server`,三个客户端的处理流程像这样:
|
||||
|
||||
![多客户端处理流程](https://eli.thegreenplace.net/images/2017/multiplexed-flow.png)
|
||||
|
||||
所有的客户端在同一个线程中同时被处理,通过乘积,做一点这个客户端的任务,然后切换到另一个,再切换到下一个,最后切换回到最开始的那个客户端。注意,这里没有什么循环调度,客户端在它们发送数据的时候被客户端处理,这实际上是受客户端左右的。
|
||||
|
||||
### 同步,异步,事件驱动,回调
|
||||
### 同步、异步、事件驱动、回调
|
||||
|
||||
`select-server` 示例代码为讨论什么是异步编程,它和事件驱动及基于回调的编程有何联系,提供了一个良好的背景。因为这些词汇在并发服务器的(非常矛盾的)讨论中很常见。
|
||||
`select-server` 示例代码为讨论什么是异步编程、它和事件驱动及基于回调的编程有何联系,提供了一个良好的背景。因为这些词汇在并发服务器的(非常矛盾的)讨论中很常见。
|
||||
|
||||
让我们从一段 `select` 的手册页面中引用的一句好开始:
|
||||
让我们从一段 `select` 的手册页面中引用的一句话开始:
|
||||
|
||||
> select,pselect,FD_CLR,FD_ISSET,FD_SET,FD_ZERO - 同步 I/O 处理
|
||||
> select,pselect,FD\_CLR,FD\_ISSET,FD\_SET,FD\_ZERO - 同步 I/O 处理
|
||||
|
||||
因此 `select` 是 _同步_ 处理。但我刚刚演示了大量代码的例子,使用 `select` 作为 _异步_ 处理服务器的例子。有哪些东西?
|
||||
|
||||
答案是:这取决于你的观查角度。同步常用作阻塞处理,并且对 `select` 的调用实际上是阻塞的。和第 1、2 节中讲到的顺序的、多线程的服务器中对 `send` 和 `recv` 是一样的。因此说 `select` 是 _同步的_ API 是有道理的。可是,服务器的设计却可以是 _异步的_,或是 _基于回调的_,或是 _事件驱动的_,尽管其中有对 `select` 的使用。注意这里的 `on_peer_*` 函数是回调函数;它们永远不会阻塞,并且只有网络事件触发的时候才会被调用。它们可以获得部分数据,并能够在调用过程中保持稳定的状态。
|
||||
答案是:这取决于你的观察角度。同步常用作阻塞处理,并且对 `select` 的调用实际上是阻塞的。和第 1、2 节中讲到的顺序的、多线程的服务器中对 `send` 和 `recv` 是一样的。因此说 `select` 是 _同步的_ API 是有道理的。可是,服务器的设计却可以是 _异步的_,或是 _基于回调的_,或是 _事件驱动的_,尽管其中有对 `select` 的使用。注意这里的 `on_peer_*` 函数是回调函数;它们永远不会阻塞,并且只有网络事件触发的时候才会被调用。它们可以获得部分数据,并能够在调用过程中保持稳定的状态。
|
||||
|
||||
如果你曾经做过一些 GUI 编程,这些东西对你来说应该很亲切。有个 “事件循环”,常常完全隐藏在框架里,应用的 “业务逻辑” 建立在回调上,这些回调会在各种事件触发后被调用,用户点击鼠标,选择菜单,定时器到时间,数据到达套接字,等等。曾经最常见的编程模型是客户端的 JavaScript,这里面有一堆回调函数,它们在浏览网页时用户的行为被触发。
|
||||
如果你曾经做过一些 GUI 编程,这些东西对你来说应该很亲切。有个 “事件循环”,常常完全隐藏在框架里,应用的 “业务逻辑” 建立在回调上,这些回调会在各种事件触发后被调用,用户点击鼠标、选择菜单、定时器触发、数据到达套接字等等。曾经最常见的编程模型是客户端的 JavaScript,这里面有一堆回调函数,它们在浏览网页时用户的行为被触发。
|
||||
|
||||
### select 的局限
|
||||
|
||||
使用 `select` 作为第一个异步服务器的例子对于说明这个概念很有用,而且由于 `select` 是很常见,可移植的 API。但是它也有一些严重的缺陷,在监视的文件描述符非常大的时候就会出现。
|
||||
使用 `select` 作为第一个异步服务器的例子对于说明这个概念很有用,而且由于 `select` 是很常见、可移植的 API。但是它也有一些严重的缺陷,在监视的文件描述符非常大的时候就会出现。
|
||||
|
||||
1. 有限的文件描述符的集合大小。
|
||||
|
||||
2. 糟糕的性能。
|
||||
|
||||
从文件描述符的大小开始。`FD_SETSIZE` 是一个编译期常数,在如今的操作系统中,它的值通常是 1024。它被硬编码在 `glibc` 的头文件里,并且不容易修改。它把 `select` 能够监视的文件描述符的数量限制在 1024 以内。曾有些分支想要写出能够处理上万个并发访问的客户端请求的服务器,这个问题很有现实意义。有一些方法,但是不可移植,也很难用。
|
||||
从文件描述符的大小开始。`FD_SETSIZE` 是一个编译期常数,在如今的操作系统中,它的值通常是 1024。它被硬编码在 `glibc` 的头文件里,并且不容易修改。它把 `select` 能够监视的文件描述符的数量限制在 1024 以内。曾有些人想要写出能够处理上万个并发访问的客户端请求的服务器,所以这个问题很有现实意义。有一些方法,但是不可移植,也很难用。
|
||||
|
||||
糟糕的性能问题就好解决的多,但是依然非常严重。注意当 `select` 返回的时候,它向调用者提供的信息是 “就绪的” 描述符的个数,还有被修改过的描述符集合。描述符集映射着描述符 就绪/未就绪”,但是并没有提供什么有效的方法去遍历所有就绪的描述符。如果只有一个描述符是就绪的,最坏的情况是调用者需要遍历 _整个集合_ 来找到那个描述符。这在监视的描述符数量比较少的时候还行,但是如果数量变的很大的时候,这种方法弊端就凸显出了 ^[注7][23]。
|
||||
糟糕的性能问题就好解决的多,但是依然非常严重。注意当 `select` 返回的时候,它向调用者提供的信息是 “就绪的” 描述符的个数,还有被修改过的描述符集合。描述符集映射着描述符“就绪/未就绪”,但是并没有提供什么有效的方法去遍历所有就绪的描述符。如果只有一个描述符是就绪的,最坏的情况是调用者需要遍历 _整个集合_ 来找到那个描述符。这在监视的描述符数量比较少的时候还行,但是如果数量变的很大的时候,这种方法弊端就凸显出了 ^注7 。
|
||||
|
||||
由于这些原因,为了写出高性能的并发服务器, `select` 已经不怎么用了。每一个流行的操作系统有独特的不可移植的 API,允许用户写出非常高效的事件循环;像框架这样的高级结构还有高级语言通常在一个可移植的接口中包含这些 API。
|
||||
|
||||
@ -541,30 +531,23 @@ while (1) {
|
||||
}
|
||||
```
|
||||
|
||||
通过调用 `epoll_ctl` 来配置 `epoll`。这时,配置监听的套接字数量,也就是 `epoll` 监听的描述符的数量。然后分配一个缓冲区,把就绪的事件传给 `epoll` 以供修改。在主循环里对 `epoll_wait` 的调用是魅力所在。它阻塞着,直到某个描述符就绪了(或者超时),返回就绪的描述符数量。但这时,不少盲目地迭代所有监视的集合,我们知道 `epoll_write` 会修改传给它的 `events` 缓冲区,缓冲区中有就绪的事件,从 0 到 `nready-1`,因此我们只需迭代必要的次数。
|
||||
通过调用 `epoll_ctl` 来配置 `epoll`。这时,配置监听的套接字数量,也就是 `epoll` 监听的描述符的数量。然后分配一个缓冲区,把就绪的事件传给 `epoll` 以供修改。在主循环里对 `epoll_wait` 的调用是魅力所在。它阻塞着,直到某个描述符就绪了(或者超时),返回就绪的描述符数量。但这时,不要盲目地迭代所有监视的集合,我们知道 `epoll_wait` 会修改传给它的 `events` 缓冲区,缓冲区中有就绪的事件,从 0 到 `nready-1`,因此我们只需迭代必要的次数。
|
||||
|
||||
要在 `select` 里面重新遍历,有明显的差异:如果在监视着 1000 个描述符,只有两个就绪, `epoll_waits` 返回的是 `nready=2`,然后修改 `events` 缓冲区最前面的两个元素,因此我们只需要“遍历”两个描述符。用 `select` 我们就需要遍历 1000 个描述符,找出哪个是就绪的。因此,在繁忙的服务器上,有许多活跃的套接字时 `epoll` 比 `select` 更加容易扩展。
|
||||
|
||||
剩下的代码很直观,因为我们已经很熟悉 `select 服务器` 了。实际上,`epoll 服务器` 中的所有“业务逻辑”和 `select 服务器` 是一样的,回调构成相同的代码。
|
||||
剩下的代码很直观,因为我们已经很熟悉 “select 服务器” 了。实际上,“epoll 服务器” 中的所有“业务逻辑”和 “select 服务器” 是一样的,回调构成相同的代码。
|
||||
|
||||
这种相似是通过将事件循环抽象分离到一个库/框架中。我将会详述这些内容,因为很多优秀的程序员曾经也是这样做的。相反,下一篇文章里我们会了解 `libuv`,一个最近出现的更加受欢迎的时间循环抽象层。像 `libuv` 这样的库让我们能够写出并发的异步服务器,并且不用考虑系统调用下繁琐的细节。
|
||||
这种相似是通过将事件循环抽象分离到一个库/框架中。我将会详述这些内容,因为很多优秀的程序员曾经也是这样做的。相反,下一篇文章里我们会了解 libuv,一个最近出现的更加受欢迎的时间循环抽象层。像 libuv 这样的库让我们能够写出并发的异步服务器,并且不用考虑系统调用下繁琐的细节。
|
||||
|
||||
* * *
|
||||
|
||||
|
||||
[注1][1] 我试着在两件事的实际差别中突显自己,一件是做一些网络浏览和阅读,但经常做得头疼。有很多不同的选项,从“他们是一样的东西”到“一个是另一个的子集”,再到“他们是完全不同的东西”。在面临这样主观的观点时,最好是完全放弃这个问题,专注特殊的例子和用例。
|
||||
|
||||
[注2][2] POSIX 表示这可以是 `EAGAIN`,也可以是 `EWOULDBLOCK`,可移植应用应该对这两个都进行检查。
|
||||
|
||||
[注3][3] 和这个系列所有的 C 示例类似,代码中用到了某些助手工具来设置监听套接字。这些工具的完整代码在这个 [仓库][4] 的 `utils` 模块里。
|
||||
|
||||
[注4][5] `select` 不是网络/套接字专用的函数,它可以监视任意的文件描述符,有可能是硬盘文件,管道,终端,套接字或者 Unix 系统中用到的任何文件描述符。这篇文章里,我们主要关注它在套接字方面的应用。
|
||||
|
||||
[注5][6] 有多种方式用多线程来实现事件驱动,我会把它放在稍后的文章中进行讨论。
|
||||
|
||||
[注6][7] 由于各种非实验因素,它 _仍然_ 可以阻塞,即使是在 `select` 说它就绪了之后。因此服务器上打开的所有套接字都被设置成非阻塞模式,如果对 `recv` 或 `send` 的调用返回了 `EAGAIN` 或者 `EWOULDBLOCK`,回调函数就装作没有事件发生。阅读示例代码的注释可以了解更多细节。
|
||||
|
||||
[注7][8] 注意这比该文章前面所讲的异步 polling 例子要稍好一点。polling 需要 _一直_ 发生,而 `select` 实际上会阻塞到有一个或多个套接字准备好读取/写入;`select` 会比一直询问浪费少得多的 CPU 时间。
|
||||
- 注1:我试着在做网络浏览和阅读这两件事的实际差别中突显自己,但经常做得头疼。有很多不同的选项,从“它们是一样的东西”到“一个是另一个的子集”,再到“它们是完全不同的东西”。在面临这样主观的观点时,最好是完全放弃这个问题,专注特殊的例子和用例。
|
||||
- 注2:POSIX 表示这可以是 `EAGAIN`,也可以是 `EWOULDBLOCK`,可移植应用应该对这两个都进行检查。
|
||||
- 注3:和这个系列所有的 C 示例类似,代码中用到了某些助手工具来设置监听套接字。这些工具的完整代码在这个 [仓库][4] 的 `utils` 模块里。
|
||||
- 注4:`select` 不是网络/套接字专用的函数,它可以监视任意的文件描述符,有可能是硬盘文件、管道、终端、套接字或者 Unix 系统中用到的任何文件描述符。这篇文章里,我们主要关注它在套接字方面的应用。
|
||||
- 注5:有多种方式用多线程来实现事件驱动,我会把它放在稍后的文章中进行讨论。
|
||||
- 注6:由于各种非实验因素,它 _仍然_ 可以阻塞,即使是在 `select` 说它就绪了之后。因此服务器上打开的所有套接字都被设置成非阻塞模式,如果对 `recv` 或 `send` 的调用返回了 `EAGAIN` 或者 `EWOULDBLOCK`,回调函数就装作没有事件发生。阅读示例代码的注释可以了解更多细节。
|
||||
- 注7:注意这比该文章前面所讲的异步轮询的例子要稍好一点。轮询需要 _一直_ 发生,而 `select` 实际上会阻塞到有一个或多个套接字准备好读取/写入;`select` 会比一直询问浪费少得多的 CPU 时间。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -572,7 +555,7 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
|
||||
|
||||
作者:[Eli Bendersky][a]
|
||||
译者:[GitFuture](https://github.com/GitFuture)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -587,9 +570,9 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
|
||||
[8]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id9
|
||||
[9]:https://eli.thegreenplace.net/tag/concurrency
|
||||
[10]:https://eli.thegreenplace.net/tag/c-c
|
||||
[11]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/
|
||||
[12]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/
|
||||
[13]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/
|
||||
[11]:https://linux.cn/article-8993-1.html
|
||||
[12]:https://linux.cn/article-8993-1.html
|
||||
[13]:https://linux.cn/article-9002-1.html
|
||||
[14]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
|
||||
[15]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id11
|
||||
[16]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id12
|
||||
@ -598,10 +581,10 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
|
||||
[19]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id14
|
||||
[20]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/select-server.c
|
||||
[21]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id15
|
||||
[22]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/
|
||||
[22]:https://linux.cn/article-9002-1.html
|
||||
[23]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id16
|
||||
[24]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/epoll-server.c
|
||||
[25]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
|
||||
[26]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/
|
||||
[27]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/
|
||||
[26]:https://linux.cn/article-8993-1.html
|
||||
[27]:https://linux.cn/article-9002-1.html
|
||||
[28]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id10
|
@ -0,0 +1,80 @@
|
||||
面向初学者的 Linux 网络硬件:软件思维
|
||||
===========================================================
|
||||
|
||||
![island network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/soderskar-island.jpg?itok=wiMaF66b "island network")
|
||||
|
||||
> 没有路由和桥接,我们将会成为孤独的小岛,你将会在这个网络教程中学到更多知识。
|
||||
|
||||
[Commons Zero][3]Pixabay
|
||||
|
||||
上周,我们学习了本地网络硬件知识,本周,我们将学习网络互联技术和在移动网络中的一些很酷的黑客技术。
|
||||
|
||||
### 路由器
|
||||
|
||||
网络路由器就是计算机网络中的一切,因为路由器连接着网络,没有路由器,我们就会成为孤岛。图一展示了一个简单的有线本地网络和一个无线接入点,所有设备都接入到互联网上。本地局域网的计算机连接到一个以太网交换机上,交换机又连接着防火墙或者路由器,防火墙或者路由器再连接到网络服务供应商(ISP)提供的电缆箱、调制解调器、卫星上行系统……不管是什么,它通常就是一个带着不停闪烁的小灯的盒子。当你的网络数据包离开你的局域网,进入广阔的互联网,它们穿过一个又一个路由器直到到达自己的目的地。
|
||||
|
||||
![simple LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_7.png?itok=lsazmf3- "simple LAN")
|
||||
|
||||
*图一:一个简单的有线局域网和一个无线接入点。*
|
||||
|
||||
路由器可以是各种样式:一个只专注于路由的小巧特殊的小盒子,一个提供路由、防火墙、域名服务以及 VPN 网关等功能的大一点的盒子,一台重新利用的台式电脑或者笔记本,一个树莓派或者 Arduino,或者像 PC Engines 这样的小型单板计算机。除了要求苛刻的用途以外,普通的商用硬件都能很好地胜任。高端的路由器使用专门设计的硬件,每秒能够传输最大量的数据包。它们有多路数据总线、多个中央处理器和极快的存储。(可以了解一下 Juniper 和思科的路由器,感受一下高端路由器是什么样子的,以及其内部构造。)
|
||||
|
||||
接入你的局域网的无线接入点要么作为一个以太网网桥,要么作为一个路由器。桥接器扩展了这个网络,所以在这个桥接器上的任意一端口上的主机都连接在同一个网络中。一台路由器连接的是两个不同的网络。
|
||||
|
||||
### 网络拓扑
|
||||
|
||||
有多种设置你的局域网的方式,你可以把所有主机接入到一个单独的<ruby>平面网络<rt>flat network</rt></ruby>,也可以把它们划分为不同的子网。如果你的交换机支持 VLAN 的话,你也可以把它们分配到不同的 VLAN 中。
|
||||
|
||||
平面网络是最简单的网络,只需把每一台设备接入到同一个交换机上即可。如果一台交换机上的端口不够用,可以将多台交换机连接在一起。有些交换机有专门的上行端口,有些则不限制用哪个端口来级联其它交换机,有时还可能需要使用交叉类型的以太网线,所以请查阅你的交换机的说明文档。
|
||||
|
||||
平面网络是最容易管理的,你不需要路由器,也不需要计算子网,但它也有一些缺点。它的扩展性不好,当网络规模越来越大时就会被广播流量所阻塞。将你的局域网分段将会提升安全性,把局域网分成可管理的不同网段也有助于管理更大的网络。图二展示了一个分成两个子网的局域网络:内部的有线和无线主机,和一个托管公开服务的主机。包含面向公众的服务器的子网称作非军事区(DMZ)(你有没有注意到这些术语都来自主要在电脑上打字的男人们?),因为它被隔离开来,无法访问内部网络。
|
||||
|
||||
![LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_4.png?itok=LpXq7bLf "LAN")
|
||||
|
||||
*图二:一个分成两个子网的简单局域网。*
|
||||
|
||||
即使像图二那样的小型网络也可以有不同的配置方法。你可以将防火墙和路由器放置在一台单独的设备上。你可以为你的非军事区域设置一个专用的网络连接,把它完全从你的内部网络隔离,这将引导我们进入下一个主题:一切基于软件。
|
||||
|
||||
### 软件思维
|
||||
|
||||
你可能已经注意到在这个简短的系列中我们所讨论的硬件,只有网络接口、交换机,和线缆是特殊用途的硬件。
|
||||
其它的都是通用的商用硬件,而且都是由软件来定义它的用途。Linux 是一个真正的网络操作系统,它支持大量的网络操作:网关、虚拟专用网关、以太网桥,Web、邮件以及文件等各类服务器,负载均衡、代理、服务质量、多种认证、中继、故障转移……你可以在运行着 Linux 系统的标准硬件上运行你的整个网络。你甚至可以使用 Linux 交换应用(LISA)和 VDE2 协议来模拟以太网交换机。
|
||||
|
||||
有一些用于小型硬件的特殊发行版,如 DD-WRT、OpenWRT,以及树莓派发行版,也不要忘记 BSD 们和它们的特殊衍生用途如 pfSense 防火墙/路由器,和 FreeNAS 网络存储服务器。
|
||||
|
||||
你知道有些人坚持认为硬件防火墙和软件防火墙有区别?其实是没有区别的,就像说硬件计算机和软件计算机一样。
|
||||
|
||||
### 端口聚合和以太网绑定
|
||||
|
||||
聚合和绑定,也称链路聚合,是把两条以太网通道绑定在一起成为一条通道。一些交换机支持端口聚合,就是把两个交换机端口绑定在一起,成为一个是它们原来带宽之和的一条新的连接。对于一台承载很多业务的服务器来说这是一个增加通道带宽的有效的方式。
|
||||
|
||||
你也可以在以太网口进行同样的配置,而且绑定汇聚的驱动是内置在 Linux 内核中的,所以不需要任何其他的专门的硬件。
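下面是一个用 iproute2 手动创建绑定接口的简单示意(接口名 `eth0`、`eth1` 和绑定模式都是假设的,实际环境中通常会把这类配置写进发行版的网络配置文件,而不是手动敲命令):

```
# 加载内核的 bonding 模块并创建 802.3ad(LACP)模式的 bond0
$ sudo modprobe bonding
$ sudo ip link add bond0 type bond mode 802.3ad

# 把两块网卡加入 bond0(加入前需要先将其关闭)
$ sudo ip link set eth0 down
$ sudo ip link set eth0 master bond0
$ sudo ip link set eth1 down
$ sudo ip link set eth1 master bond0

$ sudo ip link set bond0 up
```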
|
||||
|
||||
### 随心所欲选择你的移动宽带
|
||||
|
||||
我期望移动宽带能够迅速增长,从而取代 DSL 和有线网络。我住在一个人口约 25 万的城市附近,但是在城市以外,要想接入互联网就只能靠运气了,即使那里有很大的上网需求。我居住的小角落离城镇只有 20 分钟的路程,但对网络服务供应商来说,他们几乎不会考虑为这个地方提供网络。我唯一的选择就是移动宽带;这里没有拨号、卫星(虽然那很糟糕)、DSL、有线或者光纤接入,但这并不妨碍网络供应商把 Xfinity 以及其它在这个区域根本用不上的高速网络服务的传单塞进我的邮箱。
|
||||
|
||||
我试用了 AT&T、Verizon 和 T-Mobile。Verizon 的信号覆盖范围最广,但是 Verizon 和 AT&T 是最昂贵的。
|
||||
我居住的地方在 T-Mobile 信号覆盖的边缘,但迄今为止他们给出的优惠最大。为了能够有效地使用,我购买了一个 WeBoost 信号放大器和一台中兴的移动热点设备。当然你也可以使用一部手机作为热点,但是专用的热点设备信号更强。如果你正在考虑购买信号放大器,WeBoost 是最好的选择,因为他们的售后支持最棒,而且会尽最大努力帮助你。借助一个小巧的应用 [SignalCheck Pro][8],可以把信号调校到最佳状态。它有一个功能较少的免费版本,但你绝不会后悔花两美元买专业版。
|
||||
|
||||
那个小巧的中兴热点设备能够支持 15 台主机,还拥有基本的防火墙功能。但如果你使用像 Linksys WRT54GL 这样的设备,并刷上 Tomato、OpenWRT 或者 DD-WRT 来替代原厂固件,你就能完全控制你的防火墙规则、路由配置,以及任何其它你想要设置的服务。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-think-software
|
||||
|
||||
作者:[CARLA SCHRODER][a]
|
||||
译者:[FelixYFZ](https://github.com/FelixYFZ)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/cschroder
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:https://www.linux.com/licenses/category/used-permission
|
||||
[3]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[4]:https://www.linux.com/files/images/fig-1png-7
|
||||
[5]:https://www.linux.com/files/images/fig-2png-4
|
||||
[6]:https://www.linux.com/files/images/soderskar-islandjpg
|
||||
[7]:https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-lan-hardware
|
||||
[8]:http://www.bluelinepc.com/signalcheck/
|
@ -1,43 +1,50 @@
|
||||
translated by smartgrids
|
||||
Eclipse 如何助力 IoT 发展
|
||||
============================================================
|
||||
|
||||
### 开源组织的模块发开发方式非常适合物联网。
|
||||
|
||||
> 开源组织的模块化开发方式非常适合物联网。
|
||||
|
||||
![How Eclipse is advancing IoT development](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_BUS_ArchitectureOfParticipation_520x292.png?itok=FA0Uuwzv "How Eclipse is advancing IoT development")
|
||||
|
||||
图片来源: opensource.com
|
||||
|
||||
[Eclipse][3] 可能不是第一个去研究物联网的开源组织。但是,远在 IoT 家喻户晓之前,该基金会在 2001 年左右就开始支持开源软件发展商业化。九月 Eclipse 物联网日和 RedMonk 的 [ThingMonk 2017][4] 一块举行,着重强调了 Eclipse 在 [物联网发展][5] 中的重要作用。它现在已经包含了 28 个项目,覆盖了大部分物联网项目需求。会议过程中,我和负责 Eclipse 市场化运作的 [Ian Skerritt][6] 讨论了 Eclipse 的物联网项目以及如何拓展它。
|
||||
[Eclipse][3] 可能不是第一个去研究物联网的开源组织。但是,远在 IoT 家喻户晓之前,该基金会在 2001 年左右就开始支持开源软件发展商业化。
|
||||
|
||||
九月份的 Eclipse 物联网日和 RedMonk 的 [ThingMonk 2017][4] 一块举行,着重强调了 Eclipse 在 [物联网发展][5] 中的重要作用。它现在已经包含了 28 个项目,覆盖了大部分物联网项目需求。会议过程中,我和负责 Eclipse 市场化运作的 [Ian Skerritt][6] 讨论了 Eclipse 的物联网项目以及如何拓展它。
|
||||
|
||||
### 物联网的最新进展?
|
||||
|
||||
###物联网的最新进展?
|
||||
我问 Ian 物联网同传统工业自动化,也就是前几十年通过传感器和相应工具来实现工厂互联的方式有什么不同。 Ian 指出很多工厂是还没有互联的。
|
||||
另外,他说“ SCADA[监控和数据分析] 系统以及工厂底层技术都是私有、独立性的。我们很难去改变它,也很难去适配它们…… 现在,如果你想运行一套生产系统,你需要设计成百上千的单元。生产线想要的是满足用户需求,使制造过程更灵活,从而可以不断产出。” 这也就是物联网会带给制造业的一个很大的帮助。
|
||||
|
||||
另外,他说 “SCADA [<ruby>监控和数据分析<rt>supervisory control and data analysis</rt></ruby>] 系统以及工厂底层技术都是非常私有的、独立性的。我们很难去改变它,也很难去适配它们 …… 现在,如果你想运行一套生产系统,你需要设计成百上千的单元。生产线想要的是满足用户需求,使制造过程更灵活,从而可以不断产出。” 这也就是物联网会带给制造业的一个很大的帮助。
|
||||
|
||||
###Eclipse 物联网方面的研究
|
||||
Ian 对于 Eclipse 在物联网的研究是这样描述的:“满足任何物联网解决方案的核心基础技术” ,通过使用开源技术,“每个人都可以使用从而可以获得更好的适配性。” 他说,Eclipse 将物联网视为包括三层互联的软件栈。从更高的层面上看,这些软件栈(按照大家常见的说法)将物联网描述为跨越三个层面的网络。特定的观念可能认为含有更多的层面,但是他们一直符合这个三层模型的功能的:
|
||||
### Eclipse 物联网方面的研究
|
||||
|
||||
Ian 对于 Eclipse 在物联网的研究是这样描述的:“满足任何物联网解决方案的核心基础技术” ,通过使用开源技术,“每个人都可以使用,从而可以获得更好的适配性。” 他说,Eclipse 将物联网视为包括三层互联的软件栈。从更高的层面上看,这些软件栈(按照大家常见的说法)将物联网描述为跨越三个层面的网络。特定的实现方式可能含有更多的层,但是它们一般都可以映射到这个三层模型的功能上:
|
||||
|
||||
* 一种可以装载设备(例如设备、终端、微控制器、传感器)用软件的堆栈。
|
||||
* 将不同的传感器采集到的数据信息聚合起来并传输到网上的一类网关。这一层也可能会针对传感器数据检测做出实时反映。
|
||||
* 将不同的传感器采集到的数据信息聚合起来并传输到网上的一类网关。这一层也可能会针对传感器数据检测做出实时反应。
|
||||
* 物联网平台后端的一个软件栈。这个后端云存储数据并能根据采集的数据比如历史趋势、预测分析提供服务。
|
||||
|
||||
这三个软件栈在 Eclipse 的白皮书 “ [The Three Software Stacks Required for IoT Architectures][7] ”中有更详细的描述。
|
||||
这三个软件栈在 Eclipse 的白皮书 “[The Three Software Stacks Required for IoT Architectures][7] ”中有更详细的描述。
|
||||
|
||||
Ian 说在这些架构中开发一种解决方案时,“需要开发一些特殊的东西,但是很多底层的技术是可以借用的,像通信协议、网关服务。需要一种模块化的方式来满足不用的需求场合。” Eclipse 关于物联网方面的研究可以概括为:开发模块化开源组件从而可以被用于开发大量的特定性商业服务和解决方案。
|
||||
Ian 说在这些架构中开发一种解决方案时,“需要开发一些特殊的东西,但是很多底层的技术是可以借用的,像通信协议、网关服务。需要一种模块化的方式来满足不同的需求场合。” Eclipse 关于物联网方面的研究可以概括为:开发模块化开源组件,从而可以被用于开发大量的特定性商业服务和解决方案。
|
||||
|
||||
###Eclipse 的物联网项目
|
||||
### Eclipse 的物联网项目
|
||||
|
||||
在众多一杯应用的 Eclipse 物联网应用中, Ian 举了两个和 [MQTT][8] 有关联的突出应用,一个设备与设备互联(M2M)的物联网协议。 Ian 把它描述成“一个专为重视电源管理工作的油气传输线监控系统的信息发布/订阅协议。MQTT 已经是众多物联网广泛应用标准中很成功的一个。” [Eclipse Mosquitto][9] 是 MQTT 的代理,[Eclipse Paho][10] 是他的客户端。
|
||||
[Eclipse Kura][11] 是一个物联网网关,引用 Ian 的话,“它连接了很多不同的协议间的联系”包括蓝牙、Modbus、CANbus 和 OPC 统一架构协议,以及一直在不断添加的协议。一个优势就是,他说,取代了你自己写你自己的协议, Kura 提供了这个功能并将你通过卫星、网络或其他设备连接到网络。”另外它也提供了防火墙配置、网络延时以及其它功能。Ian 也指出“如果网络不通时,它会存储信息直到网络恢复。”
|
||||
在众多已被应用的 Eclipse 物联网应用中, Ian 举了两个和 [MQTT][8] 有关联的突出应用,一个设备与设备互联(M2M)的物联网协议。 Ian 把它描述成“一个专为重视电源管理工作的油气传输线监控系统的信息发布/订阅协议。MQTT 已经是众多物联网广泛应用标准中很成功的一个。” [Eclipse Mosquitto][9] 是 MQTT 的代理,[Eclipse Paho][10] 是他的客户端。
|
||||
|
||||
[Eclipse Kura][11] 是一个物联网网关,引用 Ian 的话,“它连接了很多不同的协议间的联系”,包括蓝牙、Modbus、CANbus 和 OPC 统一架构协议,以及一直在不断添加的各种协议。他说,一个优势就是,取代了你自己写你自己的协议, Kura 提供了这个功能并将你通过卫星、网络或其他设备连接到网络。”另外它也提供了防火墙配置、网络延时以及其它功能。Ian 也指出“如果网络不通时,它会存储信息直到网络恢复。”
|
||||
|
||||
最新的一个项目中,[Eclipse Kapua][12] 正尝试通过微服务来为物联网云平台提供不同的服务。比如,它集成了通信、汇聚、管理、存储和分析功能。Ian 说“它正在不断前进,虽然还没被完全开发出来,但是 Eurotech 和 RedHat 在这个项目上非常积极。”
|
||||
Ian 说 [Eclipse hawkBit][13] ,软件更新管理的软件,是一项“非常有趣的项目。从安全的角度说,如果你不能更新你的设备,你将会面临巨大的安全漏洞。”很多物联网安全事故都和无法更新的设备有关,他说,“ HawkBit 可以基本负责通过物联网系统来完成扩展性更新的后端管理。”
|
||||
|
||||
物联网设备软件升级的难度一直被看作是难度最高的安全挑战之一。物联网设备不是一直连接的,而且数目众多,再加上首先设备的更新程序很难完全正常。正因为这个原因,关于无赖女王软件升级的项目一直是被当作重要内容往前推进。
|
||||
Ian 说 [Eclipse hawkBit][13] ,一个软件更新管理的软件,是一项“非常有趣的项目。从安全的角度说,如果你不能更新你的设备,你将会面临巨大的安全漏洞。”很多物联网安全事故都和无法更新的设备有关,他说,“HawkBit 可以基本负责通过物联网系统来完成扩展性更新的后端管理。”
|
||||
|
||||
###为什么物联网这么适合 Eclipse
|
||||
物联网设备软件升级的难度一直被看作是难度最高的安全挑战之一。物联网设备不是一直连接的,而且数目众多,再加上首先设备的更新程序很难完全正常。正因为这个原因,关于 IoT 软件升级的项目一直是被当作重要内容往前推进。
|
||||
|
||||
在物联网发展趋势中的一个方面就是关于构建模块来解决商业问题,而不是宽约工业和公司的大物联网平台。 Eclipse 关于物联网的研究放在一系列模块栈、提供特定和大众化需求功能的项目,还有就是指定目标所需的可捆绑式中间件、网关和协议组件上。
|
||||
### 为什么物联网这么适合 Eclipse
|
||||
|
||||
在物联网发展趋势中的一个方面就是关于构建模块来解决商业问题,而不是跨越行业和公司的大物联网平台。 Eclipse 关于物联网的研究放在一系列模块栈、提供特定和大众化需求功能的项目上,还有就是指定目标所需的可捆绑式中间件、网关和协议组件上。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
@ -46,15 +53,15 @@ Ian 说 [Eclipse hawkBit][13] ,软件更新管理的软件,是一项“非
|
||||
|
||||
作者简介:
|
||||
|
||||
Gordon Haff - Gordon Haff 是红帽公司的云营销员,经常在消费者和工业会议上讲话,并且帮助发展红帽全办公云解决方案。他是 计算机前言:云如何如何打开众多出版社未来之门 的作者。在红帽之前, Gordon 写了成百上千的研究报告,经常被引用到公众刊物上,像纽约时报关于 IT 的议题和产品建议等……
|
||||
Gordon Haff - Gordon Haff 是红帽公司的云专家,经常在消费者和行业会议上讲话,并且帮助发展红帽全面云化解决方案。他是《计算机前沿:云如何如何打开众多出版社未来之门》的作者。在红帽之前, Gordon 写了成百上千的研究报告,经常被引用到公众刊物上,像纽约时报关于 IT 的议题和产品建议等……
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
转自: https://opensource.com/article/17/10/eclipse-and-iot
|
||||
via: https://opensource.com/article/17/10/eclipse-and-iot
|
||||
|
||||
作者:[Gordon Haff ][a]
|
||||
作者:[Gordon Haff][a]
|
||||
译者:[smartgrids](https://github.com/smartgrids)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,135 @@
|
||||
如何使用 GPG 加解密文件
|
||||
=================
|
||||
|
||||
目标:使用 GPG 加密文件
|
||||
|
||||
发行版:适用于任何发行版
|
||||
|
||||
要求:安装了 GPG 的 Linux 或者拥有 root 权限来安装它。
|
||||
|
||||
难度:简单
|
||||
|
||||
约定:
|
||||
|
||||
* `#` - 需要使用 root 权限来执行指定命令,可以直接使用 root 用户来执行,也可以使用 `sudo` 命令
|
||||
* `$` - 可以使用普通用户来执行指定命令
|
||||
|
||||
### 介绍
|
||||
|
||||
加密非常重要。它对于保护敏感信息来说是必不可少的。你的私人文件应该要被加密,而 GPG 提供了很好的解决方案。
|
||||
|
||||
### 安装 GPG
|
||||
|
||||
GPG 的使用非常广泛。你在几乎每个发行版的仓库中都能找到它。如果你还没有安装它,那现在就来安装一下吧。
|
||||
|
||||
**Debian/Ubuntu**
|
||||
|
||||
```
|
||||
$ sudo apt install gnupg
|
||||
```
|
||||
|
||||
**Fedora**
|
||||
|
||||
```
|
||||
# dnf install gnupg2
|
||||
```
|
||||
|
||||
**Arch**
|
||||
|
||||
```
|
||||
# pacman -S gnupg
|
||||
```
|
||||
|
||||
**Gentoo**
|
||||
|
||||
```
|
||||
# emerge --ask app-crypt/gnupg
|
||||
```
|
||||
|
||||
### 创建密钥
|
||||
|
||||
你需要一个密钥对来加解密文件。如果你为 SSH 已经生成过了密钥对,那么你可以直接使用它。如果没有,GPG 包含工具来生成密钥对。
|
||||
|
||||
```
|
||||
$ gpg --full-generate-key
|
||||
```
|
||||
|
||||
GPG 有一个命令行程序可以帮你一步一步的生成密钥。它还有一个简单得多的工具,但是这个工具不能让你设置密钥类型,密钥的长度以及过期时间,因此不推荐使用这个工具。
|
||||
|
||||
GPG 首先会询问你密钥的类型。没什么特别的话选择默认值就好。
|
||||
|
||||
下一步需要设置密钥长度。`4096` 是一个不错的选择。
|
||||
|
||||
之后,可以设置过期的日期。 如果希望密钥永不过期则设置为 `0`。
|
||||
|
||||
然后,输入你的名称。
|
||||
|
||||
最后,输入电子邮件地址。
|
||||
|
||||
如果你需要的话,还能添加一个注释。
|
||||
|
||||
所有这些都完成后,GPG 会让你校验一下这些信息。
|
||||
|
||||
GPG 还会问你是否需要为密钥设置密码。这一步是可选的, 但是会增加保护的程度。若需要设置密码,则 GPG 会收集你的操作信息来增加密钥的健壮性。 所有这些都完成后, GPG 会显示密钥相关的信息。
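如果想确认密钥确实已经生成,可以列出钥匙环中的公钥和私钥:

```
$ gpg --list-keys
$ gpg --list-secret-keys
```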
|
||||
|
||||
### 加密的基本方法
|
||||
|
||||
现在你拥有了自己的密钥,加密文件非常简单。 使用下面的命令在 `/tmp` 目录中创建一个空白文本文件。
|
||||
|
||||
```
|
||||
$ touch /tmp/test.txt
|
||||
```
|
||||
|
||||
然后用 GPG 来加密它。这里 `-e` 标志告诉 GPG 你想要加密文件, `-r` 标志指定接收者。
|
||||
|
||||
```
|
||||
$ gpg -e -r "Your Name" /tmp/test.txt
|
||||
```
|
||||
|
||||
GPG 需要知道这个文件的接收者和发送者。由于这个文件给是你的,因此无需指定发送者,而接收者就是你自己。
|
||||
|
||||
### 解密的基本方法
|
||||
|
||||
你收到加密文件后,就需要对它进行解密。 你无需指定解密用的密钥。 这个信息被编码在文件中。 GPG 会尝试用其中的密钥进行解密。
|
||||
|
||||
```
|
||||
$ gpg -d /tmp/test.txt.gpg
|
||||
```
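`gpg -d` 默认会把解密后的内容输出到标准输出;如果想直接还原成文件,可以加上 `-o` 选项指定输出文件(文件名沿用上面的例子):

```
$ gpg -o /tmp/test.txt -d /tmp/test.txt.gpg
```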
|
||||
|
||||
### 发送文件
|
||||
|
||||
假设你需要发送文件给别人。你需要有接收者的公钥。 具体怎么获得密钥由你自己决定。 你可以让他们直接把公钥发送给你, 也可以通过密钥服务器来获取。
|
||||
|
||||
收到对方公钥后,导入公钥到 GPG 中。
|
||||
|
||||
```
|
||||
$ gpg --import yourfriends.key
|
||||
```
|
||||
|
||||
这些公钥与你自己创建的密钥一样,自带了名称和电子邮件地址的信息。 记住,为了让别人能解密你的文件,别人也需要你的公钥。 因此导出公钥并将之发送出去。
|
||||
|
||||
```
|
||||
gpg --export -a "Your Name" > your.key
|
||||
```
|
||||
|
||||
现在可以开始加密要发送的文件了。它跟之前的步骤差不多, 只是需要指定你自己为发送人。
|
||||
|
||||
```
|
||||
$ gpg -e -u "Your Name" -r "Their Name" /tmp/test.txt
|
||||
```
|
||||
|
||||
### 结语
|
||||
|
||||
就这样了。GPG 还有一些高级选项, 不过你在 99% 的时间内都不会用到这些高级选项。 GPG 就是这么易于使用。你也可以使用创建的密钥对来发送和接受加密邮件,其步骤跟上面演示的差不多, 不过大多数的电子邮件客户端在拥有密钥的情况下会自动帮你做这个动作。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://linuxconfig.org/how-to-encrypt-and-decrypt-individual-files-with-gpg
|
||||
|
||||
作者:[Nick Congleton][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://linuxconfig.org
|
@ -1,20 +1,19 @@
|
||||
归档仓库
|
||||
如何归档 GitHub 仓库
|
||||
====================
|
||||
|
||||
|
||||
因为仓库不再活跃开发或者你不想接受额外的贡献并不意味着你想要删除它。现在在 Github 上归档仓库让它变成只读。
|
||||
如果仓库不再活跃开发或者你不想接受额外的贡献,但这并不意味着你想要删除它。现在可以在 Github 上归档仓库让它变成只读。
|
||||
|
||||
[![archived repository banner](https://user-images.githubusercontent.com/7321362/32558403-450458dc-c46a-11e7-96f9-af31d2206acb.png)][1]
|
||||
|
||||
归档一个仓库让它对所有人只读(包括仓库拥有者)。这包括编辑仓库、问题、合并请求、标记、里程碑、维基、发布、提交、标签、分支、反馈和评论。没有人可以在一个归档的仓库上创建新的问题、合并请求或者评论,但是你仍可以 fork 仓库-允许归档的仓库在其他地方继续开发。
|
||||
归档一个仓库会让它对所有人只读(包括仓库拥有者)。这包括对仓库的编辑、<ruby>问题<rt>issue</rt></ruby>、<ruby>合并请求<rt>pull request</rt></ruby>(PR)、标记、里程碑、项目、维基、发布、提交、标签、分支、反馈和评论。谁都不可以在一个归档的仓库上创建新的问题、合并请求或者评论,但是你仍可以 fork 仓库——以允许归档的仓库在其它地方继续开发。
|
||||
|
||||
要归档一个仓库,进入仓库设置页面并点在这个仓库上点击归档。
|
||||
要归档一个仓库,进入仓库设置页面并点在这个仓库上点击“<ruby>归档该仓库<rt>Archive this repository</rt></ruby>”。
|
||||
|
||||
[![archive repository button](https://user-images.githubusercontent.com/125011/32273119-0fc5571e-bef9-11e7-9909-d137268a1d6d.png)][2]
|
||||
|
||||
在归档你的仓库前,确保你已经更改了它的设置并考虑关闭所有的开放问题和合并请求。你还应该更新你的 README 和描述来让它让访问者了解他不再能够贡献。
|
||||
在归档你的仓库前,确保你已经更改了它的设置,并考虑关闭所有开放的问题和合并请求。你还应该更新你的 README 和描述,让访问者了解他们已经不能再对其做出贡献。
|
||||
|
||||
如果你改变了主意想要解除归档你的仓库,在相同的地方点击解除归档。请注意大多数归档仓库的设置是隐藏的,并且你需要解除归档来改变它们。
|
||||
如果你改变了主意想要解除归档你的仓库,在相同的地方点击“<ruby>解除归档该仓库<rt>Unarchive this repository</rt></ruby>”。请注意归档仓库的大多数设置是隐藏的,你需要解除归档才能改变它们。
|
||||
|
||||
[![archived labelled repository](https://user-images.githubusercontent.com/125011/32541128-9d67a064-c466-11e7-857e-3834054ba3c9.png)][3]
|
||||
|
||||
@ -24,9 +23,9 @@
|
||||
|
||||
via: https://github.com/blog/2460-archiving-repositories
|
||||
|
||||
作者:[MikeMcQuaid ][a]
|
||||
作者:[MikeMcQuaid][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,70 @@
|
||||
Glitch:可以让你立即写出有趣的小型网站
|
||||
============================================================
|
||||
|
||||
我刚写了一篇关于 Jupyter Notebooks 的文章,它是一个有趣的交互式写 Python 代码的方式。这让我想起我最近学习了 Glitch,这个我同样喜爱!我构建了一个小程序来[关闭 Twitter 上的转推][2]。因此有了这篇文章!
|
||||
|
||||
[Glitch][3] 是一个简单的构建 Javascript web 程序的方式(javascript 后端、javascript 前端)。
|
||||
|
||||
关于 glitch 有趣的地方有:
|
||||
|
||||
1. 你在他们的网站输入 Javascript 代码
|
||||
2. 只要输入了任何代码,它会自动用你的新代码重载你的网站。你甚至不必保存!它会自动保存。
|
||||
|
||||
所以这就像 Heroku,但更神奇!像这样的编码(你输入代码,代码立即在公共网络上运行)对我而言感觉很**有趣**。
|
||||
|
||||
这有点像用 ssh 登录服务器,编辑服务器上的 PHP/HTML 代码,它立即就可用了,而这也是我所喜爱的方式。虽然现在我们有了“更好的部署实践”,而不是“编辑代码,让它立即出现在互联网上”,但我们并不是在谈论严肃的开发实践,而是在讨论编写微型程序的乐趣。
|
||||
|
||||
### Glitch 有很棒的示例应用程序
|
||||
|
||||
Glitch 似乎是学习编程的好方式!
|
||||
|
||||
比如,这有一个太空侵略者游戏(由 [Mary Rose Cook][4] 编写):[https://space-invaders.glitch.me/][5]。我喜欢的是我只需要点击几下。
|
||||
|
||||
1. 点击 “remix this”
|
||||
2. 开始编辑代码使箱子变成橘色而不是黑色
|
||||
3. 制作我自己的太空侵略者游戏!我的版本在这里:[http://julias-space-invaders.glitch.me/][1]。(我只做了很小的更改使其变成橘色,没什么神奇的)
|
||||
|
||||
他们有大量的示例程序,你可以从中启动 - 例如[机器人][6]、[游戏][7]等等。
|
||||
|
||||
### 实际有用的非常好的程序:tweetstorms
|
||||
|
||||
我学习 Glitch 的方式是从这个程序开始的:[https://tweetstorms.glitch.me/][8],它会向你展示给定用户发布的 tweetstorm(连发的系列推文)。
|
||||
|
||||
比如,你可以在 [https://tweetstorms.glitch.me/sarahmei][10] 看到 [@sarahmei][9] 的 tweetstorm(她发布了很多很棒的 tweetstorm!)。
|
||||
|
||||
### 我的 Glitch 程序: 关闭转推
|
||||
|
||||
当我了解到 Glitch 的时候,我想关闭在 Twitter 上关注的所有人的转推(我知道可以在 Tweetdeck 中做这件事),而且手动做这件事是一件很痛苦的事 - 我一次只能设置一个人。所以我写了一个 Glitch 程序来为我做!
|
||||
|
||||
我喜欢我不必设置一个本地开发环境,我可以直接开始输入然后开始!
|
||||
|
||||
Glitch 只支持 Javascript,我不是非常了解 Javascript(我之前从没写过一个 Node 程序),所以代码不是很好。但是编写它很愉快 - 能够输入并立即看到我的代码运行是令人愉快的。这是我的项目:[https://turn-off-retweets.glitch.me/][11]。
|
||||
|
||||
### 就是这些!
|
||||
|
||||
使用 Glitch 感觉真的很有趣和民主。通常情况下,如果我想 fork 某人的 Web 项目,并做出更改,我不会这样做 - 我必须 fork,找一个托管,设置本地开发环境或者 Heroku 或其他,安装依赖项等。我认为像安装 node.js 依赖关系这样的任务在过去很有趣,就像“我正在学习新东西很酷”,但现在我觉得它们很乏味。
|
||||
|
||||
所以我喜欢只需点击 “remix this!” 并立即在互联网上能有我的版本。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/
|
||||
|
||||
作者:[Julia Evans][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://jvns.ca/
|
||||
[1]:http://julias-space-invaders.glitch.me/
|
||||
[2]:https://turn-off-retweets.glitch.me/
|
||||
[3]:https://glitch.com/
|
||||
[4]:https://maryrosecook.com/
|
||||
[5]:https://space-invaders.glitch.me/
|
||||
[6]:https://glitch.com/handy-bots
|
||||
[7]:https://glitch.com/games
|
||||
[8]:https://tweetstorms.glitch.me/
|
||||
[9]:https://twitter.com/sarahmei
|
||||
[10]:https://tweetstorms.glitch.me/sarahmei
|
||||
[11]:https://turn-off-retweets.glitch.me/
|
74
published/20171120 Mark McIntyre How Do You Fedora.md
Normal file
74
published/20171120 Mark McIntyre How Do You Fedora.md
Normal file
@ -0,0 +1,74 @@
|
||||
Mark McIntyre:与 Fedora 的那些事
|
||||
===========================
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg)
|
||||
|
||||
最近我们采访了 Mark McIntyre,谈了他是如何使用 Fedora 系统的。这也是 Fedora 杂志上[系列文章的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。如果你想成为采访对象,请通过[反馈表][3]与我们联系。
|
||||
|
||||
### Mark McIntyre 是谁?
|
||||
|
||||
Mark McIntyre 为极客而生,以 Linux 为乐趣。他说:“我在 13 岁开始编程,当时自学 BASIC 语言,我体会到其中的乐趣,并在乐趣的引导下,一步步成为专业的码农。” Mark 和他的侄女都是披萨饼的死忠粉。“去年秋天,我和我的侄女开始了一个任务,去尝试诺克斯维尔的许多披萨饼连锁店。点击[这里][4]可以了解我们的进展情况。”Mark 也是一名业余的摄影爱好者,并且在 Flickr 上 [发布自己的作品][5]。
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/11/31456893222_553b3cac4d_k-1024x575.jpg)
|
||||
|
||||
作为一名开发者,Mark 有着丰富的工作背景。他用过 Visual Basic 编写应用程序,用过 LotusScript、 PL/SQL(Oracle)、 Tcl/TK 编写代码,也用过基于 Python 的 Django 框架。他的强项是 Python。这也是目前他作为系统工程师的工作语言。“我经常使用 Python。由于我的工作变得更像是自动化工程师, Python 用得就更频繁了。”
|
||||
|
||||
McIntyre 自称是个书呆子,喜欢科幻电影,但他最喜欢的一部电影却不是科幻片。“尽管我是个书呆子,喜欢看《<ruby>星际迷航<rt>Star Trek</rt></ruby>》、《<ruby>星球大战<rt>Star Wars</rt></ruby>》之类的影片,但《<ruby>光荣战役<rt>Glory</rt></ruby>》或许才是我最喜欢的电影。”他还提到,电影《<ruby>冲出宁静号<rt>Serenity</rt></ruby>》是一个著名电视剧的精彩后续(指《萤火虫》)。
|
||||
|
||||
Mark 比较看重他人的谦逊、知识与和气。他欣赏能够设身处地为他人着想的人。“如果你决定为另一个人服务,那么你会选择自己愿意亲近的人,而不是让自己备受折磨的人。”
|
||||
|
||||
McIntyre 目前在 [Scripps Networks Interactive][6] 工作,这家公司是 HGTV、Food Network、Travel Channel、DIY、GAC 以及其他几个有线电视频道的母公司。“我现在是一名系统工程师,负责非线性视频内容,这是所有媒体要开展线上消费所需要的。”他为一些开发团队提供支持,他们编写应用程序,将线性视频从有线电视发布到线上平台,比如亚马逊、葫芦。这些系统既包含预置系统,也包含云系统。Mark 还开发了一些自动化工具,将这些应用程序主要部署到云基础结构中。
|
||||
|
||||
### Fedora 社区
|
||||
|
||||
Mark 形容 Fedora 社区是一个富有活力的社区,充满着像 Fedora 用户一样热爱生活的人。“从设计师到封包人,这个团体依然非常活跃,生机勃勃。” 他继续说道:“这使我对该操作系统抱有一种信心。”
|
||||
|
||||
2002 年左右,Mark 开始经常使用 IRC 上的 #fedora 频道:“那时候,Wi-Fi 在启用适配器和配置模块功能时,有许多还是靠手工实现的。”为了让他的 Wi-Fi 能够工作,他不得不重新去编译 Fedora 内核。
|
||||
|
||||
McIntyre 鼓励他人参与 Fedora 社区。“这里有许多来自不同领域的机会。前端设计、测试部署、开发、应用程序打包以及新技术实现。”他建议选择一个感兴趣的领域,然后向那个团体提出疑问。“这里有许多机会去奉献自己。”
|
||||
|
||||
对于帮助他起步的社区成员,Mark 赞道:“Ben Williams 非常乐于助人。在我第一次接触 Fedora 时,他帮我搞定了一些 #fedora 支持频道中的安装补丁。” Ben 也鼓励 Mark 去做 Fedora [大使][7]。
|
||||
|
||||
### 什么样的硬件和软件?
|
||||
|
||||
McIntyre 将 Fedora Linux 系统用在他的笔记本和台式机上。在服务器上他选择了 CentOS,因为它有更长的生命周期支持。他现在的台式机是自己组装的,配有 Intel 酷睿 i5 处理器,32GB 的内存和2TB 的硬盘。“我装了个 4K 的显示屏,有足够大的地方来同时查看所有的应用。”他目前工作用的笔记本是戴尔灵越二合一,配备 13 英寸的屏,16 GB 的内存和 525 GB 的 m.2 固态硬盘。
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/11/Screenshot-from-2017-10-26-08-51-41-1024x640.png)
|
||||
|
||||
Mark 现在将 Fedora 26 运行在他过去几个月装配的所有机器中。当一个新版本正式发布的时候,他倾向于避开这个高峰期。“除非在它即将发行的时候,我的工作站中有个正在运行下一代测试版本,通常情况下,一旦它发展成熟,我都会试着去获取最新的版本。”他经常采取就地更新:“这种就地更新方法利用 dnf 系统升级插件,目前表现得非常好。”
|
||||
|
||||
为了搞摄影,McIntyre 用上了 [GIMP][8]、[Darktable][9],以及其他一些照片查看包和快速编辑包。当不用 Web 电子邮件时,Mark 会使用 [Geary][10],还有 [GNOME Calendar][11]。Mark 选用 HexChat 作为 IRC 客户端,[HexChat][12] 与在 Fedora 服务器实例上运行的 [ZNC bouncer][13] 联机。他的部门通过 Slack 进行沟通交流。
|
||||
|
||||
“我从来都不是 IDE 粉,所以大多数的编辑任务都是在 [vim][14] 上完成的。”Mark 偶尔也会打开一个简单的文本编辑器,如 [gedit][15],或者 [xed][16]。他用 [GPaste][17] 做复制和粘贴工作。“对于终端的选择,我已经变成 [Tilix][18] 的忠粉。”McIntyre 通过 [Rhythmbox][19] 来管理他喜欢的播客,并用 [Epiphany][20] 实现快速网络查询。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/mark-mcintyre-fedora/
|
||||
|
||||
作者:[Charles Profitt][a]
|
||||
译者:[zrszrszrs](https://github.com/zrszrszrs)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/cprofitt/
|
||||
[1]:https://fedoramagazine.org/mark-mcintyre-fedora/
|
||||
[2]:https://fedoramagazine.org/tag/how-do-you-fedora/
|
||||
[3]:https://fedoramagazine.org/submit-an-idea-or-tip/
|
||||
[4]:https://knox-pizza-quest.blogspot.com/
|
||||
[5]:https://www.flickr.com/photos/mockgeek/
|
||||
[6]:http://www.scrippsnetworksinteractive.com/
|
||||
[7]:https://fedoraproject.org/wiki/Ambassadors
|
||||
[8]:https://www.gimp.org/
|
||||
[9]:http://www.darktable.org/
|
||||
[10]:https://wiki.gnome.org/Apps/Geary
|
||||
[11]:https://wiki.gnome.org/Apps/Calendar
|
||||
[12]:https://hexchat.github.io/
|
||||
[13]:https://wiki.znc.in/ZNC
|
||||
[14]:http://www.vim.org/
|
||||
[15]:https://wiki.gnome.org/Apps/Gedit
|
||||
[16]:https://github.com/linuxmint/xed
|
||||
[17]:https://github.com/Keruspe/GPaste
|
||||
[18]:https://fedoramagazine.org/try-tilix-new-terminal-emulator-fedora/
|
||||
[19]:https://wiki.gnome.org/Apps/Rhythmbox
|
||||
[20]:https://wiki.gnome.org/Apps/Web
|
@ -1,27 +1,22 @@
|
||||
# LibreOffice 现在在 Flatpak 的 Flathub 应用商店提供
|
||||
LibreOffice 上架 Flathub 应用商店
|
||||
===============
|
||||
|
||||
![LibreOffice on Flathub](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/libroffice-on-flathub-750x250.jpeg)
|
||||
|
||||
LibreOffice 现在可以从集中化的 Flatpak 应用商店 [Flathub][3] 进行安装。
|
||||
> LibreOffice 现在可以从集中化的 Flatpak 应用商店 [Flathub][3] 进行安装。
|
||||
|
||||
它的到来使任何运行现代 Linux 发行版的人都能只点击一两次安装 LibreOffice 的最新稳定版本,而无需搜索 PPA,纠缠 tar 包或等待发行商将其打包。
|
||||
它的到来使任何运行现代 Linux 发行版的人都能只点击一两次即可安装 LibreOffice 的最新稳定版本,而无需搜索 PPA,纠缠于 tar 包或等待发行版将其打包。
|
||||
|
||||
自去年 8 月份以来,[LibreOffice Flatpak][5] 已经可供用户下载和安装 [LibreOffice 5.2][6]。
|
||||
自去年 8 月份 [LibreOffice 5.2][6] 发布以来,[LibreOffice Flatpak][5] 已经可供用户下载和安装。
|
||||
|
||||
这里“新”的是发行方法。文档基金会选择使用 Flathub 而不是专门的服务器来发布更新。
|
||||
这里“新”的是指发行方法。<ruby>文档基金会<rt>Document Foundation</rt></ruby>选择使用 Flathub 而不是专门的服务器来发布更新。
|
||||
|
||||
这对于终端用户来说是一个_很好_的消息,因为这意味着不需要在新安装时担心仓库,但对于 Flatpak 的倡议者来说也是一个好消息:LibreOffice 是开源软件最流行的生产力套件。它对格式和应用商店的支持肯定会受到热烈的欢迎。
|
||||
这对于终端用户来说是一个_很好_的消息,因为这意味着不需要在新安装时担心仓库,但对于 Flatpak 的倡议者来说也是一个好消息:LibreOffice 是开源软件里最流行的生产力套件。它对该格式和应用商店的支持肯定会受到热烈的欢迎。
|
||||
|
||||
在撰写本文时,你可以从 Flathub 安装 LibreOffice 5.4.2。新的稳定版本将在发布时添加。
|
||||
|
||||
### 在 Ubuntu 上启用 Flathub
|
||||
|
||||
![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/flathub-750x495.png)
|
||||
|
||||
Fedora、Arch 和 Linux Mint 18.3 用户已经安装了 Flatpak,随时可以开箱即用。Mint 甚至预启用了 Flathub remote。
|
||||
|
||||
[从 Flathub 安装 LibreOffice][7]
|
||||
|
||||
要在 Ubuntu 上启动并运行 Flatpak,首先必须安装它:
|
||||
|
||||
```
|
||||
@ -34,17 +29,25 @@ sudo apt install flatpak gnome-software-plugin-flatpak
|
||||
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
|
||||
```
|
||||
|
||||
这就行了。只需注销并返回(以便 Ubuntu Software 刷新其缓存),之后你应该能够通过 Ubuntu Software 看到 Flathub 上的任何 Flatpak 程序了。
|
||||
这就行了。只需注销并重新登录(以便 Ubuntu Software 刷新其缓存),之后你应该能够通过 Ubuntu Software 看到 Flathub 上的任何 Flatpak 程序了。
|
||||
|
||||
![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/flathub-750x495.png)
|
||||
|
||||
*Fedora、Arch 和 Linux Mint 18.3 用户已经安装了 Flatpak,随时可以开箱即用。Mint 甚至预启用了 Flathub remote。*
|
||||
|
||||
在本例中,搜索 “LibreOffice” 并在结果中找到下面有 Flathub 提示的结果。(请记住,Ubuntu 已经调整了客户端,来将 Snap 程序显示在最上面,所以你可能需要向下滚动列表来查看它)。
|
||||
|
||||
### 从 Flathub 安装 LibreOffice
|
||||
|
||||
- [从 Flathub 安装 LibreOffice][7]
|
||||
|
||||
从 flatpakref 中[安装 Flatpak 程序有一个 bug][8],所以如果上面的方法不起作用,你也可以使用命令行从 Flathub 中安装 Flathub 程序。
|
||||
|
||||
Flathub 网站列出了安装每个程序所需的命令。切换到“命令行”选项卡来查看它们。
|
||||
|
||||
#### Flathub 上更多的应用
|
||||
### Flathub 上更多的应用
|
||||
|
||||
如果你经常看这个网站,你就会知道我喜欢 Flathub。这是我最喜欢的一些应用(Corebird、Parlatype、GNOME MPV、Peek、Audacity、GIMP 等)的家园。我无需折衷就能获得这些应用程序的最新,稳定版本(加上它们需要的所有依赖)。
|
||||
如果你经常看这个网站,你就会知道我喜欢 Flathub。这是我最喜欢的一些应用(Corebird、Parlatype、GNOME MPV、Peek、Audacity、GIMP 等)的家园。我无需等待就能获得这些应用程序的最新、稳定版本(加上它们需要的所有依赖)。
|
||||
|
||||
而且,在我在 Twitter 上发布之后一周左右,大多数 Flatpak 应用现在看起来都有了很棒的 GTK 主题 - 不再需要[临时方案][9]了!
|
||||
|
||||
@ -52,9 +55,9 @@ Flathub 网站列出了安装每个程序所需的命令。切换到“命令行
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2017/11/libreoffice-now-available-flathub-flatpak-app-store
|
||||
|
||||
作者:[ JOEY SNEDDON ][a]
|
||||
作者:[JOEY SNEDDON][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,76 @@
|
||||
AWS 帮助构建 ONNX 开源 AI 平台
|
||||
============================================================
|
||||
![onnx-open-source-ai-platform](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-onnx-1.jpg)
|
||||
|
||||
|
||||
AWS 最近加入了深度学习社区的<ruby>开放神经网络交换<rt>Open Neural Network Exchange</rt></ruby>(ONNX)协作项目,成为其中最新的一家技术公司。该协作由 Facebook 和微软领头,旨在一个<ruby>无障碍且可互操作<rt>frictionless and interoperable</rt></ruby>的环境中推进先进的人工智能。
|
||||
|
||||
作为该合作的一部分,AWS 开源其深度学习框架 Python 软件包 ONNX-MXNet,该框架提供了跨多种语言的编程接口(API),包括 Python、Scala 和开源统计软件 R。
|
||||
|
||||
AWS 深度学习工程经理 Hagay Lupesko 和软件开发人员 Roshani Nagmote 上周在一篇帖子中写道,ONNX 格式将帮助开发人员构建和训练其它框架的模型,包括 PyTorch、Microsoft Cognitive Toolkit 或 Caffe2。它可以让开发人员将这些模型导入 MXNet,并运行它们进行推理。
|
||||
|
||||
### 对开发者的帮助
|
||||
|
||||
今年夏天,Facebook 和微软推出了 ONNX,以支持共享模式的互操作性,来促进 AI 的发展。微软提交了其 Cognitive Toolkit、Caffe2 和 PyTorch 来支持 ONNX。
|
||||
|
||||
微软表示:Cognitive Toolkit 和其他框架使开发人员更容易构建和运行计算图以表达神经网络。
|
||||
|
||||
[ONNX 代码和文档][4]的初始版本已经放到了 Github。
|
||||
|
||||
AWS 和微软上个月宣布了在 Apache MXNet 上的一个新 Gluon 接口计划,该计划允许开发人员构建和训练深度学习模型。
|
||||
|
||||
[Tractica][5] 的研究总监 Aditya Kaul 观察到:“Gluon 是他们试图与 Google 的 Tensorflow 竞争的合作伙伴关系的延伸”。
|
||||
|
||||
他告诉 LinuxInsider,“谷歌在这点上的疏忽是非常明显的,但也说明了他们在市场上的主导地位。”
|
||||
|
||||
Kaul 说:“甚至 Tensorflow 也是开源的,所以开源在这里并不是什么大事,但这归结到底是其他生态系统联手与谷歌竞争。”
|
||||
|
||||
根据 AWS 的说法,本月早些时候,Apache MXNet 社区推出了 MXNet 的 0.12 版本,它扩展了 Gluon 的功能,以便进行新的尖端研究。它的新功能之一是变分 dropout,它允许开发人员使用 dropout 技术来缓解递归神经网络中的过拟合。
|
||||
|
||||
AWS 指出:卷积 RNN、LSTM 网络和门控循环单元(GRU)允许使用基于时间的序列和空间维度对数据集进行建模。
|
||||
|
||||
### 框架中立方式
|
||||
|
||||
[Tirias Research][6] 的首席分析师 Paul Teich 说:“这看起来像是一个提供推理的好方法,而不管是什么框架生成的模型。”
|
||||
|
||||
他告诉 LinuxInsider:“这基本上是一种框架中立的推理方式。”
|
||||
|
||||
Teich 指出,像 AWS、微软等云提供商在客户的压力下可以在一个网络上进行训练,同时提供另一个网络,以推进人工智能。
|
||||
|
||||
他说:“我认为这是这些供应商检查互操作性的一种基本方式。”
|
||||
|
||||
Tractica 的 Kaul 指出:“框架互操作性是一件好事,这会帮助开发人员确保他们建立在 MXNet 或 Caffe 或 CNTK 上的模型可以互操作。”
|
||||
|
||||
至于这种互操作性如何适用于现实世界,Teich 指出,诸如自然语言翻译或语音识别等技术将要求将 Alexa 的语音识别技术打包并交付给另一个开发人员的嵌入式环境。
|
||||
|
||||
### 感谢开源
|
||||
|
||||
[ThinkStrategies][7] 的总经理 Jeff Kaplan 表示:“尽管存在竞争差异,但这些公司都认识到他们在开源运动所带来的软件开发进步方面所取得的巨大成功。”
|
||||
|
||||
他告诉 LinuxInsider:“开放式神经网络交换(ONNX)致力于在人工智能方面产生类似的优势和创新。”
|
||||
|
||||
越来越多的大型科技公司已经宣布使用开源技术来加快 AI 协作开发的计划,以便创建更加统一的开发和研究平台。
|
||||
|
||||
AT&T 几周前宣布了与 TechMahindra 和 Linux 基金会合作[推出 Acumos 项目][8]的计划。该平台旨在开拓电信、媒体和技术方面的合作。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
|
||||
|
||||
作者:[David Jones][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#searchbyline
|
||||
[1]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#
|
||||
[2]:https://www.linuxinsider.com/perl/mailit/?id=84971
|
||||
[3]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
|
||||
[4]:https://github.com/onnx/onnx
|
||||
[5]:https://www.tractica.com/
|
||||
[6]:http://www.tiriasresearch.com/
|
||||
[7]:http://www.thinkstrategies.com/
|
||||
[8]:https://www.linuxinsider.com/story/84926.html
|
||||
[9]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
|
@ -0,0 +1,149 @@
|
||||
如何判断 Linux 服务器是否被入侵?
|
||||
=========================
|
||||
|
||||
本指南中所谓的服务器被入侵或者说被黑了的意思,是指未经授权的人或程序为了自己的目的登录到服务器上去并使用其计算资源,通常会产生不好的影响。
|
||||
|
||||
免责声明:若你的服务器被类似 NSA 这样的国家机关或者某个犯罪集团入侵,那么你并不会注意到有任何问题,这些技术也无法发觉他们的存在。
|
||||
|
||||
然而,大多数被攻破的服务器都是被类似自动攻击程序这样的程序或者类似“脚本小子”这样的廉价攻击者,以及蠢蛋罪犯所入侵的。
|
||||
|
||||
这类攻击者会在访问服务器的同时滥用服务器资源,并且不怎么会采取措施来隐藏他们正在做的事情。
|
||||
|
||||
### 被入侵服务器的症状
|
||||
|
||||
当服务器被没有经验攻击者或者自动攻击程序入侵了的话,他们往往会消耗 100% 的资源。他们可能消耗 CPU 资源来进行数字货币的采矿或者发送垃圾邮件,也可能消耗带宽来发动 DoS 攻击。
|
||||
|
||||
因此出现问题的第一个表现就是服务器 “变慢了”。这可能表现在网站的页面打开的很慢,或者电子邮件要花很长时间才能发送出去。
|
||||
|
||||
那么你应该查看哪些东西呢?
|
||||
|
||||
#### 检查 1 - 当前都有谁在登录?
|
||||
|
||||
你首先要查看当前都有谁登录在服务器上。发现攻击者登录到服务器上进行操作并不复杂。
|
||||
|
||||
其对应的命令是 `w`。运行 `w` 会输出如下结果:
|
||||
|
||||
```
|
||||
08:32:55 up 98 days, 5:43, 2 users, load average: 0.05, 0.03, 0.00
|
||||
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
|
||||
root pts/0 113.174.161.1 08:26 0.00s 0.03s 0.02s ssh root@coopeaa12
|
||||
root pts/1 78.31.109.1 08:26 0.00s 0.01s 0.00s w
|
||||
```
|
||||
|
||||
其中 113.174.161.1 是越南的 IP,而 78.31.109.1 是英国的 IP。这可不是个好兆头。
|
||||
|
||||
停下来做个深呼吸,不要在恐慌之下只是干掉他们的 SSH 连接。除非你能够防止他们再次进入服务器,否则他们会很快重新进来,并把你踢出去,以防你再次回去。
|
||||
|
||||
请参阅本文最后的“被入侵之后怎么办”这一章节来看找到了被入侵的证据后应该怎么办。
|
||||
|
||||
`whois` 命令可以接一个 IP 地址然后告诉你该 IP 所注册的组织的所有信息,当然就包括所在国家的信息。
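例如,想查询上面输出中出现的某个 IP 的注册信息,可以直接运行(IP 取自上面的示例输出):

```
whois 113.174.161.1
```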
|
||||
|
||||
#### 检查 2 - 谁曾经登录过?
|
||||
|
||||
Linux 服务器会记录下哪些用户,从哪个 IP,在什么时候登录的以及登录了多长时间这些信息。使用 `last` 命令可以查看这些信息。
|
||||
|
||||
输出类似这样:
|
||||
|
||||
```
|
||||
root pts/1 78.31.109.1 Thu Nov 30 08:26 still logged in
|
||||
root pts/0 113.174.161.1 Thu Nov 30 08:26 still logged in
|
||||
root pts/1 78.31.109.1 Thu Nov 30 08:24 - 08:26 (00:01)
|
||||
root pts/0 113.174.161.1 Wed Nov 29 12:34 - 12:52 (00:18)
|
||||
root pts/0 14.176.196.1 Mon Nov 27 13:32 - 13:53 (00:21)
|
||||
```
|
||||
|
||||
这里可以看到英国 IP 和越南 IP 交替出现,而且最上面两个 IP 现在还处于登录状态。如果你看到任何未经授权的 IP,那么请参阅最后章节。
|
||||
|
||||
登录后的历史记录会记录到二进制的 `/var/log/wtmp` 文件中(LCTT 译注:这里作者应该写错了,根据实际情况修改),因此很容易被删除。通常攻击者会直接把这个文件删掉,以掩盖他们的攻击行为。 因此, 若你运行了 `last` 命令却只看得见你的当前登录,那么这就是个不妙的信号。
|
||||
|
||||
如果没有登录历史的话,请一定小心,继续留意入侵的其他线索。
|
||||
|
||||
#### 检查 3 - 回顾命令历史
|
||||
|
||||
这个层次的攻击者通常不会注意掩盖命令的历史记录,因此运行 `history` 命令会显示出他们曾经做过的所有事情。
|
||||
一定留意有没有用 `wget` 或 `curl` 命令来下载类似垃圾邮件机器人或者挖矿程序之类的非常规软件。
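一个快速的检查办法,是直接在命令历史里搜索这类下载命令(仅为示例):

```
history | grep -E 'wget|curl'
```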
|
||||
|
||||
命令历史存储在 `~/.bash_history` 文件中,因此有些攻击者会删除该文件以掩盖他们的所作所为。跟登录历史一样,若你运行 `history` 命令却没有输出任何东西那就表示历史文件被删掉了。这也是个不妙的信号,你需要很小心地检查一下服务器了。(LCTT 译注,如果没有命令历史,也有可能是你的配置错误。)
|
||||
|
||||
#### 检查 4 - 哪些进程在消耗 CPU?
|
||||
|
||||
你常遇到的这类攻击者通常不怎么会去掩盖他们做的事情。他们会运行一些特别消耗 CPU 的进程。这就很容易发现这些进程了。只需要运行 `top` 然后看最前的那几个进程就行了。
|
||||
|
||||
这也能显示出那些未登录进来的攻击者。比如,可能有人在用未受保护的邮件脚本来发送垃圾邮件。
|
||||
|
||||
如果你对最上面的那些进程不了解,那么你可以 Google 一下进程名称,或者通过 `lsof` 和 `strace` 来看看它们在做什么。
|
||||
|
||||
使用这些工具,第一步从 `top` 中拷贝出进程的 PID,然后运行:
|
||||
|
||||
```
|
||||
strace -p PID
|
||||
```
|
||||
|
||||
这会显示出该进程调用的所有系统调用。它产生的内容会很多,但这些信息能告诉你这个进程在做什么。
|
||||
|
||||
```
|
||||
lsof -p PID
|
||||
```
|
||||
|
||||
这个程序会列出该进程打开的文件。通过查看它访问的文件可以很好的理解它在做的事情。
|
||||
|
||||
#### 检查 5 - 检查所有的系统进程
|
||||
|
||||
消耗 CPU 不严重的未授权进程可能不会在 `top` 中显露出来,不过它依然可以通过 `ps` 列出来。命令 `ps auxf` 就能显示足够清晰的信息了。
|
||||
|
||||
你需要检查一下每个不认识的进程。经常运行 `ps` (这是个好习惯)能帮助你发现奇怪的进程。
|
||||
|
||||
#### 检查 6 - 检查进程的网络使用情况
|
||||
|
||||
`iftop` 的功能类似 `top`,它会排列显示收发网络数据的进程以及它们的源地址和目的地址。类似 DoS 攻击或垃圾机器人这样的进程很容易显示在列表的最顶端。
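如果系统里已经装有 `iftop`,可以指定要监控的网卡来运行它(这里的接口名 `eth0` 只是示例,请换成你的实际接口;通常需要 root 权限):

```
iftop -i eth0
```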
|
||||
|
||||
#### 检查 7 - 哪些进程在监听网络连接?
|
||||
|
||||
通常攻击者会安装一个后门程序专门监听网络端口接受指令。该进程等待期间是不会消耗 CPU 和带宽的,因此也就不容易通过 `top` 之类的命令发现。
|
||||
|
||||
`lsof` 和 `netstat` 命令都会列出所有的联网进程。我通常会让它们带上下面这些参数:
|
||||
|
||||
```
|
||||
lsof -i
|
||||
```
|
||||
|
||||
```
|
||||
netstat -plunt
|
||||
```
|
||||
|
||||
你需要留意那些处于 `LISTEN` 和 `ESTABLISHED` 状态的进程,这些进程要么正在等待连接(LISTEN),要么已经连接(ESTABLISHED)。如果遇到不认识的进程,使用 `strace` 和 `lsof` 来看看它们在做什么东西。
|
||||
|
||||
### 被入侵之后该怎么办呢?
|
||||
|
||||
首先,不要紧张,尤其当攻击者正处于登录状态时更不能紧张。**你需要在攻击者警觉到你已经发现他之前夺回机器的控制权。**如果他发现你已经发觉到他了,那么他可能会锁死你不让你登陆服务器,然后开始毁尸灭迹。
|
||||
|
||||
如果你技术不太好那么就直接关机吧。你可以在服务器上运行 `shutdown -h now` 或者 `systemctl poweroff` 这两条命令之一。也可以登录主机提供商的控制面板中关闭服务器。关机后,你就可以开始配置防火墙或者咨询一下供应商的意见。
|
||||
|
||||
如果你对自己颇有自信,而你的主机提供商也有提供上游防火墙,那么你只需要以此创建并启用下面两条规则就行了:
|
||||
|
||||
1. 只允许从你的 IP 地址登录 SSH。
|
||||
2. 封禁除此之外的任何东西,不仅仅是 SSH,还包括任何端口上的任何协议。
|
||||
|
||||
这样会立即关闭攻击者的 SSH 会话,而只留下你可以访问服务器。
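下面是用 iptables 实现上述两条规则的一个最小示意(`203.0.113.10` 是假设的你自己的 IP,端口 22 也只是默认 SSH 端口的示例,请按你的实际环境调整):

```
# 只允许来自你自己 IP 的 SSH 连接
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT
# 保留本机回环接口,避免影响本地服务
iptables -A INPUT -i lo -j ACCEPT
# 其余所有入站流量一律丢弃
iptables -P INPUT DROP
```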
|
||||
|
||||
如果你无法访问上游防火墙,那么你就需要在服务器本身创建并启用这些防火墙策略,然后在防火墙规则起效后使用 `kill` 命令关闭攻击者的 SSH 会话。(LCTT 译注:本地防火墙规则 有可能不会阻止已经建立的 SSH 会话,所以保险起见,你需要手工杀死该会话。)
|
||||
|
||||
最后还有一种方法,如果支持的话,就是通过诸如串行控制台之类的带外连接登录服务器,然后通过 `systemctl stop network.service` 停止网络功能。这会关闭所有服务器上的网络连接,这样你就可以慢慢的配置那些防火墙规则了。
|
||||
|
||||
重夺服务器的控制权后,也不要以为就万事大吉了。
|
||||
|
||||
不要试着修复这台服务器,然后接着用。你永远不知道攻击者做过什么,因此你也永远无法保证这台服务器还是安全的。
|
||||
|
||||
最好的方法就是拷贝出所有的数据,然后重装系统。(LCTT 译注:你的程序这时已经不可信了,但是数据一般来说没问题。)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://bash-prompt.net/guides/server-hacked/
|
||||
|
||||
作者:[Elliot Cooper][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://bash-prompt.net
|
@ -0,0 +1,127 @@
|
||||
Suplemon:带有多光标支持的现代 CLI 文本编辑器
|
||||
======
|
||||
|
||||
Suplemon 是一个 CLI 中的现代文本编辑器,它模拟 [Sublime Text][1] 的多光标行为和其它特性。它是轻量级的,非常易于使用,就像 Nano 一样。
|
||||
|
||||
使用 CLI 编辑器的好处之一是,无论你使用的 Linux 发行版是否有 GUI,你都可以使用它。这种文本编辑器也很简单、快速和强大。
|
||||
|
||||
你可以在其[官方仓库][2]中找到有用的信息和源代码。
|
||||
|
||||
### 功能
|
||||
|
||||
这些是一些它有趣的功能:
|
||||
|
||||
* 多光标支持
|
||||
* 撤销/重做
|
||||
* 复制和粘贴,带有多行支持
|
||||
* 鼠标支持
|
||||
* 扩展
|
||||
* 查找、查找所有、查找下一个
|
||||
* 语法高亮
|
||||
* 自动完成
|
||||
* 自定义键盘快捷键
|
||||
|
||||
### 安装
|
||||
|
||||
首先,确保安装了最新版本的 python3 和 pip3。
|
||||
|
||||
然后在终端输入:
|
||||
|
||||
```
|
||||
$ sudo pip3 install suplemon
|
||||
```
|
||||
|
||||
### 使用
|
||||
|
||||
#### 在当前目录中创建一个新文件
|
||||
|
||||
打开一个终端并输入:
|
||||
|
||||
```
|
||||
$ suplemon
|
||||
```
|
||||
|
||||
你将看到如下:
|
||||
|
||||
![suplemon new file](https://linoxide.com/wp-content/uploads/2017/11/suplemon-new-file.png)
|
||||
|
||||
#### 打开一个或多个文件
|
||||
|
||||
打开一个终端并输入:
|
||||
|
||||
```
|
||||
$ suplemon <filename1> <filename2> ... <filenameN>
|
||||
```
|
||||
|
||||
例如:
|
||||
|
||||
```
|
||||
$ suplemon example1.c example2.c
|
||||
```
|
||||
|
||||
### 主要配置
|
||||
|
||||
你可以在 `~/.config/suplemon/suplemon-config.json` 找到配置文件。
|
||||
|
||||
编辑这个文件很简单,你只需要进入命令模式(进入 suplemon 后)并运行 `config` 命令。你可以通过运行 `config defaults` 来查看默认配置。
|
||||
|
||||
#### 键盘映射配置
|
||||
|
||||
我会展示 suplemon 的默认键映射。如果你想编辑它们,只需运行 `keymap` 命令。运行 `keymap default` 来查看默认的键盘映射文件。
|
||||
|
||||
| 操作 | 快捷键 |
|
||||
| ---- | ---- |
|
||||
| 退出| `Ctrl + Q`|
|
||||
| 复制行到缓冲区|`Ctrl + C`|
|
||||
| 剪切行到缓冲区| `Ctrl + X`|
|
||||
| 插入缓冲区| `Ctrl + V`|
|
||||
| 复制行| `Ctrl + K`|
|
||||
| 跳转| `Ctrl + G`。 你可以跳转到一行或一个文件(只需键入一个文件名的开头)。另外,可以输入类似于 `exam:50` 跳转到 `example.c` 第 `50` 行。|
|
||||
| 用字符串或正则表达式搜索| `Ctrl + F`|
|
||||
| 搜索下一个| `Ctrl + D`|
|
||||
| 去除空格| `Ctrl + T`|
|
||||
| 在箭头方向添加新的光标| `Alt + 方向键`|
|
||||
| 跳转到上一个或下一个单词或行| `Ctrl + 左/右`|
|
||||
| 恢复到单光标/取消输入提示| `Esc`|
|
||||
| 向上/向下移动行| `Page Up` / `Page Down`|
|
||||
| 保存文件|`Ctrl + S`|
|
||||
| 用新名称保存文件|`F1`|
|
||||
| 重新载入当前文件|`F2`|
|
||||
| 打开文件|`Ctrl + O`|
|
||||
| 关闭文件|`Ctrl + W`|
|
||||
| 切换到下一个/上一个文件|`Ctrl + Page Up` / `Ctrl + Page Down`|
|
||||
| 运行一个命令|`Ctrl + E`|
|
||||
| 撤消|`Ctrl + Z`|
|
||||
| 重做|`Ctrl + Y`|
|
||||
| 切换空白字符的显示|`F7`|
|
||||
| 切换鼠标模式|`F8`|
|
||||
| 显示行号|`F9`|
|
||||
| 显示全屏|`F11`|
|
||||
|
||||
|
||||
|
||||
#### 鼠标快捷键
|
||||
|
||||
* 将光标置于指针位置:左键单击
|
||||
* 在指针位置添加一个光标:右键单击
|
||||
* 垂直滚动:向上/向下滚动滚轮
|
||||
|
||||
### 总结
|
||||
|
||||
在尝试 Suplemon 一段时间后,我改变了对 CLI 文本编辑器的看法。我以前曾经尝试过 Nano,是的,我喜欢它的简单性,但是它的现代特征的缺乏使它在日常使用中变得不实用。
|
||||
|
||||
这个工具有 CLI 和 GUI 世界最好的东西……简单性和功能丰富!所以我建议你试试看,并在评论中写下你的想法 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://linoxide.com/tools/suplemon-cli-text-editor-multi-cursor/
|
||||
|
||||
作者:[Ivo Ursino][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://linoxide.com/author/ursinov/
|
||||
[1]:https://linoxide.com/tools/install-sublime-text-editor-linux/
|
||||
[2]:https://github.com/richrd/suplemon/
|
132
published/20171130 Wake up and Shut Down Linux Automatically.md
Normal file
132
published/20171130 Wake up and Shut Down Linux Automatically.md
Normal file
@ -0,0 +1,132 @@
|
||||
如何自动唤醒和关闭 Linux
|
||||
=====================
|
||||
|
||||
![timekeeper](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner.jpg?itok=zItspoSb)
|
||||
|
||||
> 了解如何通过配置 Linux 计算机来根据时间自动唤醒和关闭。
|
||||
|
||||
|
||||
不要成为一个电能浪费者。如果你的电脑不需要开机就请把它们关机。出于方便和计算机宅的考虑,你可以通过配置你的 Linux 计算机实现自动唤醒和关闭。
|
||||
|
||||
### 宝贵的系统运行时间
|
||||
|
||||
有时候有些电脑需要一直处在开机状态,在不超过电脑运行时间的限制下这种情况是被允许的。有些人为他们的计算机可以长时间的正常运行而感到自豪,且现在我们有内核热补丁能够实现只有在硬件发生故障时才需要机器关机。我认为比较实际可行的是,像减少移动部件磨损一样节省电能,且在不需要机器运行的情况下将其关机。比如,你可以在规定的时间内唤醒备份服务器,执行备份,然后关闭它直到它要进行下一次备份。或者,你可以设置你的互联网网关只在特定的时间运行。任何不需要一直运行的东西都可以将其配置成在其需要工作的时候打开,待其完成工作后将其关闭。
|
||||
|
||||
### 系统休眠
|
||||
|
||||
对于不需要一直运行的电脑,使用 root 的 cron 定时任务(即 `/etc/crontab`)可以可靠地关闭电脑。这个例子创建一个 root 定时任务实现每天晚上 11 点 15 分定时关机。
|
||||
|
||||
```
|
||||
# crontab -e -u root
|
||||
# m h dom mon dow command
|
||||
15 23 * * * /sbin/shutdown -h now
|
||||
```
|
||||
|
||||
以下示例仅在周一至周五运行:
|
||||
|
||||
```
|
||||
15 23 * * 1-5 /sbin/shutdown -h now
|
||||
```
|
||||
|
||||
您可以为不同的日期和时间创建多个 cron 作业。 通过命令 `man 5 crontab` 可以了解所有时间和日期的字段。
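例如,下面这组定时任务(仅为示例)会让机器在工作日 23:15 关机、周末凌晨 1:00 关机:

```
# m h dom mon dow command
15 23 * * 1-5 /sbin/shutdown -h now
00 01 * * 6,0 /sbin/shutdown -h now
```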
|
||||
|
||||
一个快速、容易的方式是,使用 `/etc/crontab` 文件。但这样你必须指定用户:
|
||||
|
||||
```
|
||||
15 23 * * 1-5 root shutdown -h now
|
||||
```
|
||||
|
||||
### 自动唤醒
|
||||
|
||||
实现自动唤醒是一件很酷的事情;我大多数 SUSE (SUSE Linux)的同事都在纽伦堡,为了跟同事能有几小时一起工作的时间,我不得不在凌晨五点起床。我的计算机早上 5 点半自动开始工作,而我只需要将自己和咖啡拖到我的桌子上就可以开始工作了。按下电源按钮看起来好像并不是什么大事,但是在每天的那个时候,每件小事都会变得很大。
|
||||
|
||||
唤醒 Linux 计算机可能不如关闭它可靠,因此你可能需要尝试不同的办法。你可以使用远程唤醒(Wake-On-LAN)、RTC 唤醒或者个人电脑的 BIOS 设置预定的唤醒这些方式。这些方式可行的原因是,当你关闭电脑时,这并不是真正关闭了计算机;此时计算机处在极低功耗状态且还可以接受和响应信号。只有在你拔掉电源开关时其才彻底关闭。
|
||||
|
||||
### BIOS 唤醒
|
||||
|
||||
BIOS 唤醒是最可靠的。我的系统主板 BIOS 有一个易于使用的唤醒调度程序 (图 1)。对你来说也是一样的容易。
|
||||
|
||||
![wakeup](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_11.png?itok=8qAeqo1I)
|
||||
|
||||
*图 1:我的系统 BIOS 有个易用的唤醒定时器。*
|
||||
|
||||
### 主机远程唤醒(Wake-On-LAN)
|
||||
|
||||
远程唤醒是仅次于 BIOS 唤醒的又一种可靠的唤醒方法。这需要你从第二台计算机发送信号到所要打开的计算机。可以使用 Arduino 或<ruby>树莓派<rt>Raspberry Pi</rt></ruby>发送给基于 Linux 的路由器或者任何 Linux 计算机的唤醒信号。首先,查看系统主板 BIOS 是否支持 Wake-On-LAN ,要是支持的话,必须先启动它,因为它被默认为禁用。
|
||||
|
||||
然后,需要一个支持 Wake-On-LAN 的网卡;无线网卡并不支持。你需要运行 `ethtool` 命令查看网卡是否支持 Wake-On-LAN :
|
||||
|
||||
```
|
||||
# ethtool eth0 | grep -i wake-on
|
||||
Supports Wake-on: pumbg
|
||||
Wake-on: g
|
||||
```
|
||||
|
||||
这条命令输出的 “Supports Wake-on” 字段会告诉你你的网卡现在开启了哪些功能:
|
||||
|
||||
* d -- 禁用
|
||||
* p -- 物理活动唤醒
|
||||
* u -- 单播消息唤醒
|
||||
* m -- 多播(组播)消息唤醒
|
||||
* b -- 广播消息唤醒
|
||||
* a -- ARP 唤醒
|
||||
* g -- <ruby>特定数据包<rt>magic packet</rt></ruby>唤醒
|
||||
* s -- 设有密码的<ruby>特定数据包<rt>magic packet</rt></ruby>唤醒
|
||||
|
||||
`ethtool` 命令的 man 手册并没说清楚 `p` 选项的作用;这表明任何信号都会导致唤醒。然而,在我的测试中它并没有这么做。想要实现远程唤醒主机,必须支持的功能是 `g` —— <ruby>特定数据包<rt>magic packet</rt></ruby>唤醒,而且下面的“Wake-on” 行显示这个功能已经在启用了。如果它没有被启用,你可以通过 `ethtool` 命令来启用它。
|
||||
|
||||
```
|
||||
# ethtool -s eth0 wol g
|
||||
```
|
||||
|
||||
这条命令可能会在重启后失效,所以为了确保万无一失,你可以创建个 root 用户的定时任务(cron)在每次重启的时候来执行这条命令。
|
||||
|
||||
```
|
||||
@reboot /usr/bin/ethtool -s eth0 wol g
|
||||
```
|
||||
|
||||
另一个选择是最近的<ruby>网络管理器<rt>Network Manager</rt></ruby>版本有一个很好的小复选框来启用 Wake-On-LAN(图 2)。
|
||||
|
||||
![wakeonlan](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_7.png?itok=XQAwmHoQ)
|
||||
|
||||
*图 2:启用 Wake on LAN*
|
||||
|
||||
这里有一个可以用于设置密码的地方,但是如果你的网络接口不支持<ruby>安全开机<rt>Secure On</rt></ruby>密码,它就不起作用。
|
||||
|
||||
现在你需要配置第二台计算机来发送唤醒信号。你并不需要 root 权限,所以你可以为你的普通用户创建 cron 任务。你需要用到的是想要唤醒的机器的网络接口和 MAC 地址信息。
|
||||
|
||||
```
|
||||
30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B
|
||||
```
|
||||
|
||||
### RTC 唤醒
|
||||
|
||||
通过使用实时闹钟来唤醒计算机是最不可靠的方法。对于这个方法,可以参看 [Wake Up Linux With an RTC Alarm Clock][4] ;对于现在的大多数发行版来说这种方法已经有点过时了。
|
||||
|
||||
下周继续了解更多关于使用 RTC 唤醒的方法。
|
||||
|
||||
通过 Linux 基金会和 edX 可以学习更多关于 Linux 的免费 [Linux 入门][5]教程。
|
||||
|
||||
(题图:[The Observatory at Delhi][7])
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via:https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux-automatically
|
||||
|
||||
作者:[Carla Schroder][a]
|
||||
译者:[HardworkFish](https://github.com/HardworkFish)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/cschroder
|
||||
[1]:https://www.linux.com/files/images/bannerjpg
|
||||
[2]:https://www.linux.com/files/images/fig-1png-11
|
||||
[3]:https://www.linux.com/files/images/fig-2png-7
|
||||
[4]:https://www.linux.com/learn/wake-linux-rtc-alarm-clock
|
||||
[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
||||
[6]:https://www.linux.com/licenses/category/creative-commons-attribution
|
||||
[7]:http://www.columbia.edu/itc/mealac/pritchett/00routesdata/1700_1799/jaipur/delhijantarearly/delhijantarearly.html
|
||||
[8]:https://www.linux.com/licenses/category/used-permission
|
||||
[9]:https://www.linux.com/licenses/category/used-permission
|
||||
|
@ -1,13 +1,9 @@
|
||||
如何在 Linux 系统中用用户组来管理用户
|
||||
如何在 Linux 系统中通过用户组来管理用户
|
||||
============================================================
|
||||
|
||||
### [group-of-people-1645356_1920.jpg][1]
|
||||
|
||||
![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/group-of-people-1645356_1920.jpg?itok=rJlAxBSV)
|
||||
|
||||
本教程可以了解如何通过用户组和访问控制表(ACL)来管理用户。
|
||||
|
||||
[创意共享协议][4]
|
||||
> 本教程可以了解如何通过用户组和访问控制表(ACL)来管理用户。
|
||||
|
||||
当你需要管理一台容纳多个用户的 Linux 机器时,比起一些基本的用户管理工具所提供的方法,有时候你需要对这些用户采取更多的用户权限管理方式。特别是当你要管理某些用户的权限时,这个想法尤为重要。比如说,你有一个目录,某个用户组中的用户可以通过读和写的权限访问这个目录,而其他用户组中的用户对这个目录只有读的权限。在 Linux 中,这是完全可以实现的。但前提是你必须先了解如何通过用户组和访问控制表(ACL)来管理用户。
|
||||
|
||||
@ -18,36 +14,32 @@
|
||||
你需要用下面两个用户名新建两个用户:
|
||||
|
||||
* olivia
|
||||
|
||||
* nathan
|
||||
|
||||
你需要新建以下两个用户组:
|
||||
|
||||
* readers
|
||||
|
||||
* editors
|
||||
|
||||
olivia 属于 editors 用户组,而 nathan 属于 readers 用户组。reader 用户组对 ``/DATA`` 目录只有读的权限,而 editors 用户组则对 ``/DATA`` 目录同时有读和写的权限。当然,这是个非常小的任务,但它会给你基本的信息·。你可以扩展这个任务以适应你其他更大的需求。
|
||||
olivia 属于 editors 用户组,而 nathan 属于 readers 用户组。readers 用户组对 `/DATA` 目录只有读的权限,而 editors 用户组则对 `/DATA` 目录同时有读和写的权限。当然,这是个非常小的任务,但它会给你基本的信息,你可以扩展这个任务以适应你其他更大的需求。
|
||||
|
||||
我将在 Ubuntu 16.04 Server 平台上进行演示。这些命令都是通用的,唯一不同的是,要是在你的发行版中不使用 sudo 命令,你必须切换到 root 用户来执行这些命令。
|
||||
我将在 Ubuntu 16.04 Server 平台上进行演示。这些命令都是通用的,唯一不同的是,要是在你的发行版中不使用 `sudo` 命令,你必须切换到 root 用户来执行这些命令。
|
||||
|
||||
### 创建用户
|
||||
|
||||
我们需要做的第一件事是为我们的实验创建两个用户。可以用 ``useradd`` 命令来创建用户,我们不只是简单地创建一个用户,而需要同时创建用户和属于他们的家目录,然后给他们设置密码。
|
||||
我们需要做的第一件事是为我们的实验创建两个用户。可以用 `useradd` 命令来创建用户,我们不只是简单地创建一个用户,而需要同时创建用户和属于他们的家目录,然后给他们设置密码。
|
||||
|
||||
```
|
||||
sudo useradd -m olivia
|
||||
|
||||
sudo useradd -m nathan
|
||||
```
|
||||
|
||||
我们现在创建了两个用户,如果你看看 ``/home`` 目录,你可以发现他们的家目录(因为我们用了 -m 选项,可以帮在创建用户的同时创建他们的家目录。
|
||||
我们现在创建了两个用户,如果你看看 `/home` 目录,你可以发现他们的家目录(因为我们用了 `-m` 选项,可以在创建用户的同时创建他们的家目录)。
|
||||
|
||||
之后,我们可以用以下命令给他们设置密码:
|
||||
|
||||
```
|
||||
sudo passwd olivia
|
||||
|
||||
sudo passwd nathan
|
||||
```
|
||||
|
||||
@ -59,26 +51,21 @@ sudo passwd nathan
|
||||
|
||||
```
|
||||
addgroup readers
|
||||
|
||||
addgroup editors
|
||||
```
|
||||
|
||||
(译者注:当你使用 CentOS 等一些 Linux 发行版时,可能系统没有 addgroup 这个命令,推荐使用 groupadd 命令来替换 addgroup 命令以达到同样的效果)
|
||||
|
||||
|
||||
### [groups_1.jpg][2]
|
||||
(LCTT 译注:当你使用 CentOS 等一些 Linux 发行版时,可能系统没有 `addgroup` 这个命令,推荐使用 `groupadd` 命令来替换 `addgroup` 命令以达到同样的效果)
|
||||
|
||||
![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/groups_1.jpg?itok=BKwL89BB)
|
||||
|
||||
图一:我们可以使用刚创建的新用户组了。
|
||||
|
||||
[Used with permission][5]
|
||||
*图一:我们可以使用刚创建的新用户组了。*
|
||||
|
||||
创建用户组后,我们需要添加我们的用户到这两个用户组。我们用以下命令来将 nathan 用户添加到 readers 用户组:
|
||||
|
||||
```
|
||||
sudo usermod -a -G readers nathan
|
||||
```
|
||||
|
||||
用以下命令将 olivia 添加到 editors 用户组:
|
||||
|
||||
```
|
||||
@ -89,7 +76,7 @@ sudo usermod -a -G editors olivia
|
||||
|
||||
### 给用户组授予目录的权限
|
||||
|
||||
假设你有个目录 ``/READERS`` 且允许 readers 用户组的所有成员访问这个目录。首先,我们执行以下命令来更改目录所属用户组:
|
||||
假设你有个目录 `/READERS` 且允许 readers 用户组的所有成员访问这个目录。首先,我们执行以下命令来更改目录所属用户组:
|
||||
|
||||
```
|
||||
sudo chown -R :readers /READERS
|
||||
@ -107,26 +94,23 @@ sudo chmod -R g-w /READERS
|
||||
sudo chmod -R o-x /READERS
|
||||
```
|
||||
|
||||
这时候,只有目录的所有者(root)和用户组 reader 中的用户可以访问 ``/READES`` 中的文件。
|
||||
这时候,只有目录的所有者(root)和用户组 reader 中的用户可以访问 `/READES` 中的文件。
|
||||
|
||||
假设你有个目录 ``/EDITORS`` ,你需要给用户组 editors 里的成员这个目录的读和写的权限。为了达到这个目的,执行下面的这些命令是必要的:
|
||||
假设你有个目录 `/EDITORS` ,你需要给用户组 editors 里的成员这个目录的读和写的权限。为了达到这个目的,执行下面的这些命令是必要的:
|
||||
|
||||
```
|
||||
sudo chown -R :editors /EDITORS
|
||||
|
||||
sudo chmod -R g+w /EDITORS
|
||||
|
||||
sudo chmod -R o-x /EDITORS
|
||||
```
|
||||
|
||||
此时 editors 用户组的所有成员都可以访问和修改其中的文件。除此之外其他用户(除了 root 之外)无法访问 ``/EDITORS`` 中的任何文件。
|
||||
此时 editors 用户组的所有成员都可以访问和修改其中的文件。除此之外其他用户(除了 root 之外)无法访问 `/EDITORS` 中的任何文件。
|
||||
|
||||
使用这个方法的问题在于,你一次只能操作一个组和一个目录而已。这时候访问控制表(ACL)就可以派得上用场了。
|
||||
|
||||
|
||||
### 使用访问控制表(ACL)
|
||||
|
||||
现在,让我们把这个问题变得棘手一点。假设你有一个目录 ``/DATA`` 并且你想给 readers 用户组的成员读取权限并同时给 editors 用户组的成员读和写的权限。为此,你必须要用到 setfacl 命令。setfacl 命令可以为文件或文件夹设置一个访问控制表(ACL)。
|
||||
现在,让我们把这个问题变得棘手一点。假设你有一个目录 `/DATA` 并且你想给 readers 用户组的成员读取权限,并同时给 editors 用户组的成员读和写的权限。为此,你必须要用到 `setfacl` 命令。`setfacl` 命令可以为文件或文件夹设置一个访问控制表(ACL)。
|
||||
|
||||
这个命令的结构如下:
|
||||
|
||||
@ -134,45 +118,41 @@ sudo chmod -R o-x /EDITORS
|
||||
setfacl OPTION X:NAME:Y /DIRECTORY
|
||||
```
|
||||
|
||||
其中 OPTION 是可选选项,X 可以是 u(用户)或者是 g (用户组),NAME 是用户或者用户组的名字,/DIRECTORY 是要用到的目录。我们将使用 -m 选项进行修改(modify)。因此,我们给 readers 用户组添加读取权限的命令是:
|
||||
其中 OPTION 是可选选项,X 可以是 `u`(用户)或者是 `g` (用户组),NAME 是用户或者用户组的名字,/DIRECTORY 是要用到的目录。我们将使用 `-m` 选项进行修改。因此,我们给 readers 用户组添加读取权限的命令是:
|
||||
|
||||
```
|
||||
sudo setfacl -m g:readers:rx -R /DATA
|
||||
```
|
||||
|
||||
现在 readers 用户组里面的每一个用户都可以读取 /DATA 目录里的文件了,但是他们不能修改里面的内容。
|
||||
现在 readers 用户组里面的每一个用户都可以读取 `/DATA` 目录里的文件了,但是他们不能修改里面的内容。
|
||||
|
||||
为了给 editors 用户组里面的用户读写权限,我们执行了以下命令:
|
||||
|
||||
```
|
||||
sudo setfacl -m g:editors:rwx -R /DATA
|
||||
```
|
||||
|
||||
上述命令将赋予 editors 用户组中的任何成员读和写的权限,同时保留 readers 用户组的只读权限。
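设置完成后,可以用 `getfacl` 检查一下 ACL 是否符合预期(示例):

```
getfacl /DATA
```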
|
||||
|
||||
### 更多的权限控制
|
||||
|
||||
使用访问控制表(ACL),你可以实现你所需的权限控制。你可以添加用户到用户组,并且灵活地控制这些用户组对每个目录的权限以达到你的需求。如果想了解上述工具的更多信息,可以执行下列的命令:
|
||||
|
||||
* man usradd
|
||||
|
||||
* man addgroup
|
||||
|
||||
* man usermod
|
||||
|
||||
* man sefacl
|
||||
|
||||
* man chown
|
||||
|
||||
* man chmod
|
||||
* `man useradd`
|
||||
* `man addgroup`
|
||||
* `man usermod`
|
||||
* `man setfacl`
|
||||
* `man chown`
|
||||
* `man chmod`
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/12/how-manage-users-groups-linux
|
||||
|
||||
作者:[Jack Wallen ]
|
||||
作者:[Jack Wallen]
|
||||
译者:[imquanquan](https://github.com/imquanquan)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,161 @@
|
||||
在 Ubuntu 16.04 下随机化你的 WiFi MAC 地址
|
||||
============================================================
|
||||
|
||||
> 你的设备的 MAC 地址可以在不同的 WiFi 网络中记录你的活动。这些信息能被共享后出售,用于识别特定的个体。但可以用随机生成的伪 MAC 地址来阻止这一行为。
|
||||
|
||||
|
||||
![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif)
|
||||
|
||||
_Image courtesy of [Cloudessa][4]_
|
||||
|
||||
每一个诸如 WiFi 或者以太网卡这样的网络设备,都有一个叫做 MAC 地址的唯一标识符,如:`b4:b6:76:31:8c:ff`。这就是你能上网的原因:每当你连上 WiFi,路由器就会用这一地址来向你接受和发送数据,并且用它来区别你和这一网络的其它设备。
|
||||
|
||||
这一设计的缺陷在于唯一性,不变的 MAC 地址正好可以用来追踪你。连上了星巴克的 WiFi? 好,注意到了。在伦敦的地铁上? 也记录下来。
|
||||
|
||||
如果你曾经在某一个 WiFi 验证页面上输入过你的真实姓名,你就已经把自己和这一 MAC 地址建立了联系。没有仔细阅读许可服务条款、你可以认为,机场的免费 WiFi 正通过出售所谓的 ‘顾客分析数据’(你的个人信息)获利。出售的对象包括酒店,餐饮业,和任何想要了解你的人。
|
||||
|
||||
我不想信息被记录,再出售给多家公司,所以我花了几个小时想出了一个解决方案。
|
||||
|
||||
### MAC 地址不一定总是不变的
|
||||
|
||||
幸运的是,在不断开网络的情况下,是可以随机生成一个伪 MAC 地址的。
|
||||
|
||||
我想随机生成我的 MAC 地址,但是有三个要求:
|
||||
|
||||
1. MAC 地址在不同网络中是不相同的。这意味着,我在星巴克和在伦敦地铁网络中的 MAC 地址是不相同的,这样在不同的服务提供商中就无法将我的活动联系起来。
|
||||
2. MAC 地址需要经常更换,这样在网络上就没人知道我就是去年在这儿经过了 75 次的那个人。
|
||||
3. MAC 地址一天之内应该保持不变。当 MAC 地址更改时,大多数网络都会与你断开连接,然后必须得进入验证页面再次登陆 - 这很烦人。
|
||||
|
||||
### 操作<ruby>网络管理器<rt>NetworkManager</rt></ruby>
|
||||
|
||||
我第一次尝试用一个叫做 `macchanger` 的工具,但是失败了。因为<ruby>网络管理器<rt>NetworkManager</rt></ruby>会根据它自己的设置恢复默认的 MAC 地址。
|
||||
|
||||
我了解到,网络管理器 1.4.1 以上版本可以自动生成随机的 MAC 地址。如果你在使用 Ubuntu 17.04 版本,你可以根据[这一配置文件][7]实现这一目的。但这并不能完全符合我的三个要求(你必须在<ruby>随机<rt>random</rt></ruby>和<ruby>稳定<rt>stable</rt></ruby>这两个选项之中选择一个,但没有一天之内保持不变这一选项)
|
||||
|
||||
因为我使用的是 Ubuntu 16.04,网络管理器版本为 1.2,不能直接使用高版本这一新功能。可能网络管理器有一些随机化方法支持,但我没能成功。所以我编了一个脚本来实现这一目标。
|
||||
|
||||
幸运的是,网络管理器 1.2 允许模拟 MAC 地址。你在已连接的网络中可以看见 ‘编辑连接’ 这一选项:
|
||||
|
||||
![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png)
|
||||
|
||||
网络管理器也支持钩子处理 —— 任何位于 `/etc/NetworkManager/dispatcher.d/pre-up.d/` 的脚本在建立网络连接之前都会被执行。
|
||||
|
||||
|
||||
### 分配随机生成的伪 MAC 地址
|
||||
|
||||
我想根据网络 ID 和日期来生成新的随机 MAC 地址。 我们可以使用网络管理器的命令行工具 nmcli 来显示所有可用网络:
|
||||
|
||||
|
||||
```
|
||||
> nmcli connection
|
||||
NAME UUID TYPE DEVICE
|
||||
Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0
|
||||
DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless --
|
||||
PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless --
|
||||
virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless --
|
||||
```
|
||||
|
||||
因为每个网络都有一个唯一标识符(UUID),为了实现我的计划,我将 UUID 和日期拼接在一起,然后使用 MD5 生成 hash 值:
|
||||
|
||||
```
|
||||
# eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03
|
||||
|
||||
> echo -n "${UUID}-$(date +%F)" | md5sum
|
||||
|
||||
53594de990e92f9b914a723208f22b3f -
|
||||
```
|
||||
|
||||
生成的哈希的开头部分可以用来填充 MAC 地址中除首字节之外的其余五个字节。
|
||||
|
||||
|
||||
值得注意的是,最开始的字节 `02` 代表这个地址是[自行指定][8]的。实际上,真实 MAC 地址的前三个字节是由制造商决定的,例如 `b4:b6:76` 就代表 Intel。
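结合后面完整脚本里的 `sed` 替换方式,上面这个哈希会被拼成一个以 `02` 开头的伪 MAC 地址,效果大致如下(仅作演示):

```
> echo -n 53594de990e92f9b914a723208f22b3f | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/02:\1:\2:\3:\4:\5/'
02:53:59:4d:e9:90
```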
|
||||
|
||||
有可能某些路由器会拒绝自己指定的 MAC 地址,但是我还没有遇到过这种情况。
|
||||
|
||||
每次连接到一个网络,这一脚本都会用 `nmcli` 来指定一个随机生成的伪 MAC 地址:
|
||||
|
||||
![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png)
|
||||
|
||||
最后,我查看了 `ifconfig` 的输出结果,我发现 MAC 地址 `HWaddr` 已经变成了随机生成的地址(模拟 Intel 的),而不是我真实的 MAC 地址。
|
||||
|
||||
|
||||
```
|
||||
> ifconfig
|
||||
wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d
|
||||
inet addr:192.168.0.86 Bcast:192.168.0.255 Mask:255.255.255.0
|
||||
inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link
|
||||
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
|
||||
RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0
|
||||
TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0
|
||||
collisions:0 txqueuelen:1000
|
||||
RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB)
|
||||
|
||||
```
|
||||
|
||||
### 脚本
|
||||
|
||||
完整的脚本也可以[在 Github 上查看][9]。
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
# /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses
|
||||
|
||||
# Configure every saved WiFi connection in NetworkManager with a spoofed MAC
|
||||
# address, seeded from the UUID of the connection and the date eg:
|
||||
# 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20'
|
||||
|
||||
# This makes your MAC impossible(?) to track across WiFi providers, and
|
||||
# for one provider to track across days.
|
||||
|
||||
# For craptive portals that authenticate based on MAC, you might want to
|
||||
# automate logging in :)
|
||||
|
||||
# Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar
|
||||
# automatically.
|
||||
|
||||
export PATH=$PATH:/usr/bin:/bin
|
||||
|
||||
LOG_FILE=/var/log/randomize-mac-addresses
|
||||
|
||||
echo "$(date): $*" > ${LOG_FILE}
|
||||
|
||||
WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3)
|
||||
|
||||
for UUID in ${WIFI_UUIDS}
|
||||
do
|
||||
UUID_DAILY_HASH=$(echo "${UUID}-$(date +%F)" | md5sum)
|
||||
|
||||
RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')"
|
||||
|
||||
CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}"
|
||||
|
||||
echo "$CMD" >> ${LOG_FILE}
|
||||
$CMD &
|
||||
done
|
||||
|
||||
wait
|
||||
```
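如果你想在自己的机器上试用这个脚本,大致的安装步骤如下(假设脚本已保存为本地文件 `randomize-mac-addresses`;dispatcher 脚本需要属于 root 且具有可执行权限,路径请以你的系统为准):

```
sudo cp randomize-mac-addresses /etc/NetworkManager/dispatcher.d/pre-up.d/
sudo chown root:root /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses
sudo chmod 755 /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses
```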
|
||||
|
||||
_更新:[使用自己指定的 MAC 地址][5]可以避免和真正的 intel 地址冲突。感谢 [@_fink][6]_
|
||||
|
||||
---------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/
|
||||
|
||||
作者:[Paul M Furley][a]
|
||||
译者:[wenwensnow](https://github.com/wenwensnow)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.paulfurley.com/
|
||||
[1]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/raw/5f02fc8f6ff7fca5bca6ee4913c63bf6de15abcarandomize-mac-addresses
|
||||
[2]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f#file-randomize-mac-addresses
|
||||
[3]:https://github.com/
|
||||
[4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/
|
||||
[5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef
|
||||
[6]:https://twitter.com/fink_/status/937305600005943296
|
||||
[7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/
|
||||
[8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local
|
||||
[9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f
|
@ -0,0 +1,298 @@
|
||||
2017 年 30 款最好的支持 Linux 的 Steam 游戏
|
||||
============================================================
|
||||
|
||||
说到游戏,人们一般都会推荐使用 Windows 系统。Windows 能提供更好的显卡支持和硬件兼容性,所以对于游戏爱好者来说的确是个更好的选择。但你是否想过[在 Linux 系统上玩游戏][9]?这的确是可以的,也许你以前还曾经考虑过。但在几年之前, [Steam for Linux][10] 上可玩的游戏并不是很吸引人。
|
||||
|
||||
但现在情况完全不一样了。Steam 商店里现在有许多支持 Linux 平台的游戏(包括很多主流大作)。我们在本文中将介绍 Steam 上最好的一些 Linux 游戏。
|
||||
|
||||
在进入正题之前,先介绍一个省钱小窍门。如果你是个狂热的游戏爱好者,在游戏上花费很多时间和金钱的话,我建议你订阅 [<ruby>Humble 每月包<rt>Humble Monthly</rt></ruby>][11]。这是个每月收费的订阅服务,每月只用 12 美元就能获得价值 100 美元的游戏。
|
||||
|
||||
这个游戏包中可能有些游戏不支持 Linux,但除了 Steam 游戏之外,它还会让 [Humble Bundle 网站][12]上所有的游戏和书籍都打九折,所以这依然是个不错的优惠。
|
||||
|
||||
更棒的是,你在 Humble Bundle 上所有的消费都会捐出一部分给慈善机构。所以你在享受游戏的同时还在帮助改变世界。
|
||||
|
||||
### Steam 上最好的 Linux 游戏
|
||||
|
||||
以下排名无先后顺序。
|
||||
|
||||
额外提示:虽然在 Steam 上有很多支持 Linux 的游戏,但你在 Linux 上玩游戏时依然可能会遇到各种问题。你可以阅读我们之前的文章:[每个 Linux 游戏玩家都会遇到的烦人问题][14]
|
||||
|
||||
可以点击以下链接跳转到你喜欢的游戏类型:
|
||||
|
||||
* [动作类游戏][3]
|
||||
* [角色扮演类游戏][4]
|
||||
* [赛车/运动/模拟类游戏][5]
|
||||
* [冒险类游戏][6]
|
||||
* [独立游戏][7]
|
||||
* [策略类游戏][8]
|
||||
|
||||
### Steam 上最佳 Linux 动作类游戏
|
||||
|
||||
#### 1、 《<ruby>反恐精英:全球攻势<rt>Counter-Strike: Global Offensive</rt></ruby>》(多人)
|
||||
|
||||
《CS:GO》毫无疑问是 Steam 上支持 Linux 的最好的 FPS 游戏之一。我觉得这款游戏无需介绍,但如果你没有听说过它,我要告诉你这将会是你玩过的最好玩的多人 FPS 游戏之一。《CS:GO》还是电子竞技中的一个主流项目。想要提升等级的话,你需要在天梯上和其他玩家同台竞技。但你也可以选择更加轻松的休闲模式。
|
||||
|
||||
我本想写《彩虹六号:围攻行动》,但它目前还不支持 Linux 或 Steam OS。
|
||||
|
||||
- [购买《CS: GO》][15]
|
||||
|
||||
#### 2、 《<ruby>求生之路 2<rt>Left 4 Dead 2</rt></ruby>》(多人/单机)
|
||||
|
||||
这是最受欢迎的僵尸主题多人 FPS 游戏之一。在 Steam 优惠时,价格可以低至 1.3 美元。这是个有趣的游戏,能让你体会到你在僵尸游戏中期待的战栗和刺激。游戏中的环境包括了沼泽、城市、墓地等等,让游戏既有趣又吓人。游戏中的枪械并不是非常先进,但作为一个老游戏来说,它已经提供了足够真实的体验。
|
||||
|
||||
- [购买《求生之路 2》][16]
|
||||
|
||||
#### 3、 《<ruby>无主之地 2<rt>Borderlands 2</rt></ruby>》(单机/协作)
|
||||
|
||||
《无主之地 2》是个很有意思的 FPS 游戏。它和你以前玩过的游戏完全不同。画风看上去有些诡异和卡通化,如果你正在寻找一个第一视角的射击游戏,我可以保证,游戏体验可一点也不逊色!
|
||||
|
||||
如果你在寻找一个好玩而且有很多 DLC 的 Linux 游戏,《无主之地 2》绝对是个不错的选择。
|
||||
|
||||
- [购买《无主之地 2》][17]
|
||||
|
||||
#### 4、 《<ruby>叛乱<rt>Insurgency</rt></ruby>》(多人)
|
||||
|
||||
《叛乱》是 Steam 上又一款支持 Linux 的优秀的 FPS 游戏。它剑走偏锋,从屏幕上去掉了 HUD 和弹药数量指示。如同许多评论者所说,这是款注重武器和团队战术的纯粹的射击游戏。这也许不是最好的 FPS 游戏,但如果你想玩和《三角洲部队》类似的多人游戏的话,这绝对是最好的游戏之一。
|
||||
|
||||
- [购买《叛乱》][18]
|
||||
|
||||
#### 5、 《<ruby>生化奇兵:无限<rt>Bioshock: Infinite</rt></ruby>》(单机)
|
||||
|
||||
《生化奇兵:无限》毫无疑问将会作为 PC 平台最好的单机 FPS 游戏之一而载入史册。你可以利用很多强大的能力来杀死你的敌人。同时你的敌人也各个身怀绝技。游戏的剧情也非常丰富。你不容错过!
|
||||
|
||||
- [购买《生化奇兵:无限》][19]
|
||||
|
||||
#### 6、 《<ruby>杀手(年度版)<rt>HITMAN - Game of the Year Edition</rt></ruby>》(单机)
|
||||
|
||||
《杀手》系列无疑是 PC 游戏爱好者们的最爱之一。本系列的最新作开始按章节发布,让很多玩家觉得不满。但现在 Square Enix 撤出了开发,而最新的年度版带着新的内容重返舞台。在游戏中发挥你的想象力暗杀你的目标吧,杀手47!
|
||||
|
||||
- [购买(杀手(年度版))][20]
|
||||
|
||||
#### 7、 《<ruby>传送门 2<rt>Portal 2</rt></ruby>》
|
||||
|
||||
《传送门 2》完美地结合了动作与冒险。这是款解谜类游戏,你可以与其他玩家协作,并开发有趣的谜题。协作模式提供了和单机模式截然不同的游戏内容。
|
||||
|
||||
- [购买《传送门2》][21]
|
||||
|
||||
#### 8、 《<ruby>杀出重围:人类分裂<rt>Deux Ex: Mankind Divided</rt></ruby>》
|
||||
|
||||
如果你在寻找隐蔽类的射击游戏,《杀出重围》是个填充你的 Steam 游戏库的完美选择。这是个非常华丽的游戏,有着最先进的武器和超乎寻常的战斗机制。
|
||||
|
||||
- [购买《杀出重围:人类分裂》][22]
|
||||
|
||||
#### 9、 《<ruby>地铁 2033 重置版<rt>Metro 2033 Redux</rt></ruby>》 / 《<ruby>地铁:最后曙光 重置版<rt>Metro Last Light Redux</rt></ruby>》
|
||||
|
||||
《地铁 2033 重置版》和《地铁:最后曙光 重置版》是经典的《地铁 2033》和《地铁:最后曙光》的最终版本。故事发生在世界末日之后。你需要消灭所有的变种人来保证人类的生存。剩下的就交给你自己去探索了!
|
||||
|
||||
- [购买《地铁 2033 重置版》][23]
|
||||
- [购买《地铁:最后曙光 重置版》][24]
|
||||
|
||||
#### 10、 《<ruby>坦能堡<rt>Tannenberg</rt></ruby>》(多人)
|
||||
|
||||
《坦能堡》是个全新的游戏 - 在本文发表一个月前刚刚发售。游戏背景是第一次世界大战的东线战场(1914-1918)。这款游戏只有多人模式。如果你想要在游戏中体验第一次世界大战,不要错过这款游戏!
|
||||
|
||||
- [购买《坦能堡》][25]
|
||||
|
||||
### Steam 上最佳 Linux 角色扮演类游戏
|
||||
|
||||
#### 11、 《<ruby>中土世界:暗影魔多<rt>Shadow of Mordor</rt></ruby>》
|
||||
|
||||
《中土世界:暗影魔多》 是 Steam 上支持 Linux 的最好的开放式角色扮演类游戏之一。你将扮演一个游侠(塔里昂),和光明领主(凯勒布理鹏)并肩作战击败索隆的军队(并最终和他直接交手)。战斗机制非常出色。这是款不得不玩的游戏!
|
||||
|
||||
- [购买《中土世界:暗影魔多》][26]
|
||||
|
||||
#### 12、 《<ruby>神界:原罪加强版<rt>Divinity: Original Sin – Enhanced Edition</rt></ruby>》
|
||||
|
||||
《神界:原罪》是一款极其优秀的角色扮演类独立游戏。它非常独特而又引人入胜。这或许是评分最高的带有冒险和策略元素的角色扮演游戏。加强版添加了新的游戏模式,并且完全重做了配音、手柄支持、协作任务等等。
|
||||
|
||||
- [购买《神界:原罪加强版》][27]
|
||||
|
||||
#### 13、 《<ruby>废土 2:导演剪辑版<rt>Wasteland 2: Director’s Cut</rt></ruby>》
|
||||
|
||||
《废土 2》是一款出色的 CRPG 游戏。如果《辐射 4》被移植成 CRPG 游戏,大概就是这种感觉。导演剪辑版完全重做了画面,并且增加了一百多名新人物。
|
||||
|
||||
- [购买《废土 2》][28]
|
||||
|
||||
#### 14、 《<ruby>阴暗森林<rt>Darkwood</rt></ruby>》
|
||||
|
||||
一个充满恐怖的俯视角角色扮演类游戏。你将探索世界、搜集材料、制作武器来生存下去。
|
||||
|
||||
- [购买《阴暗森林》][29]
|
||||
|
||||
### 最佳赛车 / 运动 / 模拟类游戏
|
||||
|
||||
#### 15、 《<ruby>火箭联盟<rt>Rocket League</rt></ruby>》
|
||||
|
||||
《火箭联盟》是一款充满刺激的足球游戏。游戏中你将驾驶用火箭助推的战斗赛车。你不仅是要驾车把球带进对方球门,你甚至还可以让你的对手化为灰烬!
|
||||
|
||||
这是款超棒的体育动作类游戏,每个游戏爱好者都值得拥有!
|
||||
|
||||
- [购买《火箭联盟》][30]
|
||||
|
||||
#### 16、 《<ruby>公路救赎<rt>Road Redemption</rt></ruby>》
|
||||
|
||||
想念《暴力摩托》了?作为它精神上的续作,《公路救赎》可以缓解你的饥渴。当然,这并不是真正的《暴力摩托 2》,但它一样有趣。如果你喜欢《暴力摩托》,你也会喜欢这款游戏。
|
||||
|
||||
- [购买《公路救赎》][31]
|
||||
|
||||
#### 17、 《<ruby>尘埃拉力赛<rt>Dirt Rally</rt></ruby>》
|
||||
|
||||
《尘埃拉力赛》是为想要体验公路和越野赛车的玩家准备的。画面非常有魄力,驾驶手感也近乎完美。
|
||||
|
||||
- [购买《尘埃拉力赛》][32]
|
||||
|
||||
#### 18、 《F1 2017》
|
||||
|
||||
《F1 2017》是另一款令人印象深刻的赛车游戏。由《尘埃拉力赛》的开发者 Codemasters & Feral Interactive 制作。游戏中包含了所有标志性的 F1 赛车,值得你去体验。
|
||||
|
||||
- [购买《F1 2017》][33]
|
||||
|
||||
#### 19、 《<ruby>超级房车赛:汽车运动<rt>GRID Autosport</rt></ruby>》
|
||||
|
||||
《超级房车赛》是最被低估的赛车游戏之一。《超级房车赛:汽车运动》是《超级房车赛》的续作。这款游戏的可玩性令人惊艳。游戏中的赛车也比前作更好。推荐所有的 PC 游戏玩家尝试这款赛车游戏。游戏还支持多人模式,你可以和你的朋友组队参赛。
|
||||
|
||||
- [购买《超级房车赛:汽车运动》][34]
|
||||
|
||||
### 最好的冒险游戏
|
||||
|
||||
#### 20、 《<ruby>方舟:生存进化<rt>ARK: Survival Evolved</rt></ruby>》
|
||||
|
||||
《方舟:生存进化》是一款不错的生存游戏,里面有着激动人心的冒险。你发现自己身处一个未知孤岛(方舟岛),为了生存下去并逃离这个孤岛,你必须去驯服恐龙、与其他玩家合作、猎杀其他人来抢夺资源、以及制作物品。
|
||||
|
||||
- [购买《方舟:生存进化》][35]
|
||||
|
||||
#### 21、 《<ruby>这是我的战争<rt>This War of Mine</rt></ruby>》
|
||||
|
||||
一款独特的战争游戏。你不是扮演士兵,而是要作为一个平民来面对战争带来的艰难。你需要在身经百战的敌人手下逃生,并帮助其他的幸存者。
|
||||
|
||||
- [购买《这是我的战争》][36]
|
||||
|
||||
#### 22、 《<ruby>疯狂的麦克斯<rt>Mad Max</rt></ruby>》
|
||||
|
||||
生存和暴力概括了《疯狂的麦克斯》的全部内容。游戏中有性能强大的汽车,开放性的世界,各种武器,以及徒手肉搏。你要不断地探索世界,并注意升级你的汽车来防患于未然。在做决定之前,你要仔细思考并设计好策略。
|
||||
|
||||
- [购买《疯狂的麦克斯》][37]
|
||||
|
||||
### 最佳独立游戏
|
||||
|
||||
#### 23、 《<ruby>泰拉瑞亚<rt>Terraria</rt></ruby>》
|
||||
|
||||
这是款在 Steam 上广受好评的 2D 游戏。你在旅途中需要去挖掘、战斗、探索、建造。游戏地图是自动生成的,而不是固定不变的。也许你刚刚遇到的东西,你的朋友过一会儿才会遇到。你还将体验到富有新意的 2D 动作场景。
|
||||
|
||||
- [购买《泰拉瑞亚》][38]
|
||||
|
||||
#### 24、 《<ruby>王国与城堡<rt>Kingdoms and Castles</rt></ruby>》
|
||||
|
||||
在《王国与城堡》中,你将建造你自己的王国。在管理你的王国的过程中,你需要收税、保护森林、规划城市,并且发展国防来防止别人入侵你的王国。
|
||||
|
||||
这是款比较新的游戏,但在独立游戏中已经相对获得了比较高的人气。
|
||||
|
||||
- [购买《王国与城堡》][39]
|
||||
|
||||
### Steam 上最佳 Linux 策略类游戏
|
||||
|
||||
#### 25、 《<ruby>文明 5<rt>Sid Meier’s Civilization V</rt></ruby>》
|
||||
|
||||
《文明 5》是 PC 上评价最高的策略游戏之一。如果你想的话,你可以去玩《文明 6》。但是依然有许多玩家喜欢《文明 5》,觉得它更有独创性,游戏细节也更富有创造力。
|
||||
|
||||
- [购买《文明 5》][40]
|
||||
|
||||
#### 26、 《<ruby>全面战争:战锤<rt>Total War: Warhammer</rt></ruby>》
|
||||
|
||||
《全面战争:战锤》是 PC 平台上一款非常出色的回合制策略游戏。可惜的是,新作《战锤 2》依然不支持 Linux。但如果你喜欢使用飞龙和魔法来建造与毁灭帝国的话,2016 年的《战锤》依然是个不错的选择。
|
||||
|
||||
- [购买《全面战争:战锤》][41]
|
||||
|
||||
#### 27、 《<ruby>轰炸小队<rt>Bomber Crew</rt></ruby>》
|
||||
|
||||
想要一款充满乐趣的策略游戏?《轰炸小队》就是为你准备的。你需要选择合适的队员并且让你的队伍稳定运转来取得最终的胜利。
|
||||
|
||||
- [购买《轰炸小队》][42]
|
||||
|
||||
#### 28、 《<ruby>奇迹时代 3<rt>Age of Wonders III</rt></ruby>》
|
||||
|
||||
非常流行的策略游戏,包含帝国建造、角色扮演、以及战争元素。这是款精致的回合制策略游戏,请一定要试试!
|
||||
|
||||
- [购买《奇迹时代 3》][43]
|
||||
|
||||
#### 29、 《<ruby>城市:天际线<rt>Cities: Skylines</rt></ruby>》
|
||||
|
||||
一款非常简洁的策略游戏。你要从零开始建造一座城市,并且管理它的全部运作。你将体验建造和管理城市带来的愉悦与困难。我不觉得每个玩家都会喜欢这款游戏——它的用户群体非常明确。
|
||||
|
||||
- [购买《城市:天际线》][44]
|
||||
|
||||
#### 30、 《<ruby>幽浮 2<rt>XCOM 2</rt></ruby>》
|
||||
|
||||
《幽浮 2》是 PC 上最好的回合制策略游戏之一。我在想如果《幽浮 2》能够被制作成 FPS 游戏的话该有多棒。不过它现在已经是一款好评如潮的杰作了。如果你有多余的预算能花在这款游戏上,建议你购买“<ruby>天选之战<rt>War of the Chosen</rt></ruby>“ DLC。
|
||||
|
||||
- [购买《幽浮 2》][45]
|
||||
|
||||
### 总结
|
||||
|
||||
我们从所有支持 Linux 的游戏中挑选了大部分的主流大作以及一些评价很高的新作。
|
||||
|
||||
你觉得我们遗漏了你最喜欢的支持 Linux 的 Steam 游戏么?另外,你还希望哪些 Steam 游戏开始支持 Linux 平台?
|
||||
|
||||
请在下面的回复中告诉我们你的想法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/best-linux-games-steam/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
译者:[yixunx](https://github.com/yixunx)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/ankush/
|
||||
[1]:https://itsfoss.com/author/ankush/
|
||||
[2]:https://itsfoss.com/best-linux-games-steam/#comments
|
||||
[3]:https://itsfoss.com/best-linux-games-steam/#action
|
||||
[4]:https://itsfoss.com/best-linux-games-steam/#rpg
|
||||
[5]:https://itsfoss.com/best-linux-games-steam/#racing
|
||||
[6]:https://itsfoss.com/best-linux-games-steam/#adv
|
||||
[7]:https://itsfoss.com/best-linux-games-steam/#indie
|
||||
[8]:https://itsfoss.com/best-linux-games-steam/#strategy
|
||||
[9]:https://linux.cn/article-7316-1.html
|
||||
[10]:https://itsfoss.com/install-steam-ubuntu-linux/
|
||||
[11]:https://www.humblebundle.com/?partner=itsfoss
|
||||
[12]:https://www.humblebundle.com/store?partner=itsfoss
|
||||
[13]:https://www.humblebundle.com/monthly?partner=itsfoss
|
||||
[14]:https://itsfoss.com/linux-gaming-problems/
|
||||
[15]:http://store.steampowered.com/app/730/CounterStrike_Global_Offensive/
|
||||
[16]:http://store.steampowered.com/app/550/Left_4_Dead_2/
|
||||
[17]:http://store.steampowered.com/app/49520/?snr=1_5_9__205
|
||||
[18]:http://store.steampowered.com/app/222880/?snr=1_5_9__205
|
||||
[19]:http://store.steampowered.com/agecheck/app/8870/
|
||||
[20]:http://store.steampowered.com/app/236870/?snr=1_5_9__205
|
||||
[21]:http://store.steampowered.com/app/620/?snr=1_5_9__205
|
||||
[22]:http://store.steampowered.com/app/337000/?snr=1_5_9__205
|
||||
[23]:http://store.steampowered.com/app/286690/?snr=1_5_9__205
|
||||
[24]:http://store.steampowered.com/app/287390/?snr=1_5_9__205
|
||||
[25]:http://store.steampowered.com/app/633460/?snr=1_5_9__205
|
||||
[26]:http://store.steampowered.com/app/241930/?snr=1_5_9__205
|
||||
[27]:http://store.steampowered.com/app/373420/?snr=1_5_9__205
|
||||
[28]:http://store.steampowered.com/app/240760/?snr=1_5_9__205
|
||||
[29]:http://store.steampowered.com/app/274520/?snr=1_5_9__205
|
||||
[30]:http://store.steampowered.com/app/252950/?snr=1_5_9__205
|
||||
[31]:http://store.steampowered.com/app/300380/?snr=1_5_9__205
|
||||
[32]:http://store.steampowered.com/app/310560/?snr=1_5_9__205
|
||||
[33]:http://store.steampowered.com/app/515220/?snr=1_5_9__205
|
||||
[34]:http://store.steampowered.com/app/255220/?snr=1_5_9__205
|
||||
[35]:http://store.steampowered.com/app/346110/?snr=1_5_9__205
|
||||
[36]:http://store.steampowered.com/app/282070/?snr=1_5_9__205
|
||||
[37]:http://store.steampowered.com/app/234140/?snr=1_5_9__205
|
||||
[38]:http://store.steampowered.com/app/105600/?snr=1_5_9__205
|
||||
[39]:http://store.steampowered.com/app/569480/?snr=1_5_9__205
|
||||
[40]:http://store.steampowered.com/app/8930/?snr=1_5_9__205
|
||||
[41]:http://store.steampowered.com/app/364360/?snr=1_5_9__205
|
||||
[42]:http://store.steampowered.com/app/537800/?snr=1_5_9__205
|
||||
[43]:http://store.steampowered.com/app/226840/?snr=1_5_9__205
|
||||
[44]:http://store.steampowered.com/app/255710/?snr=1_5_9__205
|
||||
[45]:http://store.steampowered.com/app/268500/?snr=1_5_9__205
|
||||
[46]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fbest-linux-games-steam%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[47]:https://twitter.com/share?original_referer=/&text=30+Best+Linux+Games+On+Steam+You+Should+Play+in+2017&url=https://itsfoss.com/best-linux-games-steam/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=ankushdas9
|
||||
[48]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-linux-games-steam%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[49]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-linux-games-steam%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[50]:https://www.reddit.com/submit?url=https://itsfoss.com/best-linux-games-steam/&title=30+Best+Linux+Games+On+Steam+You+Should+Play+in+2017
|
@ -0,0 +1,140 @@
|
||||
如何在执行一个命令或程序之前就了解它会做什么
|
||||
======
|
||||
|
||||
有没有想过在执行一个 Unix 命令前就知道它干些什么呢?并不是每个人都会知道一个特定的命令或者程序将会做什么。当然,你可以用 [Explainshell][2] 来查看它。你可以在 Explainshell 网站中粘贴你的命令,然后它可以让你了解命令的每个部分做了什么。但是,这是没有必要的。现在,我们从终端就可以轻易地在执行一个命令或者程序前就知道它会做什么。 `maybe` ,一个简单的工具,它允许你运行一条命令并可以查看此命令对你的文件做了什么,而实际上这条命令却并未执行!在查看 `maybe` 的输出列表后,你可以决定是否真的想要运行这条命令。
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2017/12/maybe-2-720x340.png)
|
||||
|
||||
### `maybe` 是如何工作的
|
||||
|
||||
根据开发者的介绍:
|
||||
|
||||
> `maybe` 利用 `python-ptrace` 库在 `ptrace` 控制下运行了一个进程。当它截取到一个即将更改文件系统的系统调用时,它会记录该调用,然后修改 CPU 寄存器,将这个调用重定向到一个无效的系统调用 ID(效果上将其变成一个无效操作(no-op)),并将这个无效操作(no-op)的返回值设置为有效操作的返回值。结果,这个进程认为,它所做的一切都发生了,实际上什么都没有改变。
|
||||
|
||||
警告:在生产环境或者任何你所关心的系统里面使用这个工具时都应该小心。它仍然可能造成严重的损失,因为它只能阻止少数系统调用。
|
||||
|
||||
#### 安装 `maybe`
|
||||
|
||||
确保你已经在你的 Linux 系统中已经安装了 `pip` 。如果没有,可以根据您使用的发行版,按照如下指示进行安装。
|
||||
|
||||
在 Arch Linux 及其衍生产品(如 Antergos、Manjaro Linux)上,使用以下命令安装 `pip` :
|
||||
|
||||
```
|
||||
sudo pacman -S python-pip
|
||||
```
|
||||
|
||||
在 RHEL,CentOS 上:
|
||||
|
||||
```
|
||||
sudo yum install epel-release
|
||||
sudo yum install python-pip
|
||||
```
|
||||
|
||||
在 Fedora 上:
|
||||
|
||||
```
|
||||
sudo dnf install epel-release
|
||||
sudo dnf install python-pip
|
||||
```
|
||||
|
||||
在 Debian,Ubuntu,Linux Mint 上:
|
||||
|
||||
```
|
||||
sudo apt-get install python-pip
|
||||
```
|
||||
|
||||
在 SUSE、 openSUSE 上:
|
||||
|
||||
```
|
||||
sudo zypper install python-pip
|
||||
```
|
||||
|
||||
安装 `pip` 后,运行以下命令安装 `maybe` :
|
||||
|
||||
```
|
||||
sudo pip install maybe
|
||||
```
|
||||
|
||||
### 了解一个命令或程序在执行前会做什么
|
||||
|
||||
用法是非常简单的!只要在要执行的命令前加上 `maybe` 即可。
|
||||
|
||||
让我给你看一个例子:
|
||||
|
||||
```
|
||||
$ maybe rm -r ostechnix/
|
||||
```
|
||||
|
||||
如你所看到的,我从我的系统中删除一个名为 `ostechnix` 的文件夹。下面是示例输出:
|
||||
|
||||
```
|
||||
maybe has prevented rm -r ostechnix/ from performing 5 file system operations:
|
||||
|
||||
delete /home/sk/inboxer-0.4.0-x86_64.AppImage
|
||||
delete /home/sk/Docker.pdf
|
||||
delete /home/sk/Idhayathai Oru Nodi.mp3
|
||||
delete /home/sk/dThmLbB334_1398236878432.jpg
|
||||
delete /home/sk/ostechnix
|
||||
|
||||
Do you want to rerun rm -r ostechnix/ and permit these operations? [y/N] y
|
||||
```
|
||||
|
||||
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png)][3]
|
||||
|
||||
`maybe` 阻止了该命令(`rm -r ostechnix/`)执行 5 个文件系统操作,并向我展示了它究竟会做什么。现在我可以决定是否真的要执行这个操作。是不是很酷呢?确实很酷!
|
||||
|
||||
这是另一个例子。我要为 Gmail 安装 Inboxer 桌面客户端。这是我得到的输出:
|
||||
|
||||
```
|
||||
$ maybe ./inboxer-0.4.0-x86_64.AppImage
|
||||
fuse: bad mount point `/tmp/.mount_inboxemDzuGV': No such file or directory
|
||||
squashfuse 0.1.100 (c) 2012 Dave Vasilevsky
|
||||
|
||||
Usage: /home/sk/Downloads/inboxer-0.4.0-x86_64.AppImage [options] ARCHIVE MOUNTPOINT
|
||||
|
||||
FUSE options:
|
||||
-d -o debug enable debug output (implies -f)
|
||||
-f foreground operation
|
||||
-s disable multi-threaded operation
|
||||
|
||||
open dir error: No such file or directory
|
||||
maybe has prevented ./inboxer-0.4.0-x86_64.AppImage from performing 1 file system operations:
|
||||
|
||||
create directory /tmp/.mount_inboxemDzuGV
|
||||
|
||||
Do you want to rerun ./inboxer-0.4.0-x86_64.AppImage and permit these operations? [y/N]
|
||||
```
|
||||
|
||||
如果它没有检测到任何文件系统操作,那么它只会显示如下所示的结果。
|
||||
|
||||
例如,我运行下面这条命令来更新我的 Arch Linux。
|
||||
|
||||
```
|
||||
$ maybe sudo pacman -Syu
|
||||
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
|
||||
maybe has not detected any file system operations from sudo pacman -Syu.
|
||||
```
|
||||
|
||||
看到没?它没有检测到任何文件系统操作,所以没有任何警告。这非常棒,而且正是我所预料到的结果。从现在开始,我甚至可以在执行之前知道一个命令或一个程序将执行什么操作。我希望这对你也会有帮助。
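`maybe` 同样适用于脚本。下面是一个简单的示意(脚本名和内容都是假设的):先写一个会创建和删除文件的小脚本,再用 `maybe` 包着运行它,就能在真正执行之前看到它要做的全部文件系统操作。

```
$ cat demo.sh
#!/bin/sh
mkdir -p /tmp/maybe-demo
touch /tmp/maybe-demo/a.txt
rm -rf /tmp/old-logs

$ maybe sh demo.sh
```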
|
||||
|
||||
Cheers!
|
||||
|
||||
资源:
|
||||
|
||||
* [`maybe` GitHub 主页][1]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/know-command-program-will-exactly-executing/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[imquanquan](https://github.com/imquanquan)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://github.com/p-e-w/maybe
|
||||
[2]:https://www.ostechnix.com/explainshell-find-part-linux-command/
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png
|
||||
[4]:https://www.ostechnix.com/inboxer-unofficial-google-inbox-desktop-client/
|
@ -0,0 +1,137 @@
|
||||
通过示例学习使用 netstat
|
||||
======
|
||||
|
||||
netstat 是一个告诉我们系统中所有 tcp/udp/unix socket 连接状态的命令行工具。它会列出所有已经连接或者处于等待连接状态的连接。该工具在识别某个应用监听哪个端口时特别有用,我们也能用它来判断某个应用是否正常地在监听某个端口。
|
||||
|
||||
netstat 命令还能显示其它各种各样的网络相关信息,例如路由表、网卡统计信息、伪装连接以及多播组成员等。
|
||||
|
||||
本文中,我们会通过几个例子来学习 netstat。
|
||||
|
||||
(推荐阅读: [通过示例学习使用 CURL 命令][1] )
|
||||
|
||||
### 1 - 检查所有的连接
|
||||
|
||||
使用 `a` 选项可以列出系统中的所有连接,
|
||||
|
||||
```shell
|
||||
$ netstat -a
|
||||
```
|
||||
|
||||
这会显示系统所有的 tcp、udp 以及 unix 连接。
|
||||
|
||||
### 2 - 检查所有的 tcp/udp/unix socket 连接
|
||||
|
||||
使用 `t` 选项只列出 tcp 连接,
|
||||
|
||||
```shell
|
||||
$ netstat -at
|
||||
```
|
||||
|
||||
类似的,使用 `u` 选项只列出 udp 连接,
|
||||
|
||||
```shell
|
||||
$ netstat -au
|
||||
```
|
||||
|
||||
使用 `x` 选项只列出 Unix socket 连接,
|
||||
|
||||
```shell
|
||||
$ netstat -ax
|
||||
```
|
||||
|
||||
### 3 - 同时列出进程 ID/进程名称
|
||||
|
||||
使用 `p` 选项可以在列出连接的同时也显示 PID 或者进程名称,而且它还能与其他选项连用,
|
||||
|
||||
```shell
|
||||
$ netstat -ap
|
||||
```
|
||||
|
||||
### 4 - 列出端口号而不是服务名
|
||||
|
||||
使用 `n` 选项可以加快输出,它不会执行任何反向查询(LCTT 译注:这里原文有误),而是直接输出数字。 由于无需查询,因此结果输出会快很多。
|
||||
|
||||
```shell
|
||||
$ netstat -an
|
||||
```
|
||||
|
||||
### 5 - 只输出监听端口
|
||||
|
||||
使用 `l` 选项只输出监听端口。它不能与 `a` 选项连用,因为 `a` 会输出所有端口,
|
||||
|
||||
```shell
|
||||
$ netstat -l
|
||||
```
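上面这些选项可以组合使用。例如,一个很常用的组合是同时列出所有监听中的 TCP/UDP 端口、数字形式的地址以及对应的进程信息:

```shell
# 查看其它用户进程的信息可能需要 root 权限
$ netstat -tulnp
```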
|
||||
|
||||
### 6 - 输出网络状态
|
||||
|
||||
使用 `s` 选项输出每个协议的统计信息,包括接收/发送的包数量,
|
||||
|
||||
```shell
|
||||
$ netstat -s
|
||||
```
|
||||
|
||||
### 7 - 输出网卡状态
|
||||
|
||||
使用 `i` 选项只显示网卡的统计信息,
|
||||
|
||||
```shell
|
||||
$ netstat -i
|
||||
```
|
||||
|
||||
### 8 - 显示<ruby>多播组<rt>multicast group</rt></ruby>信息
|
||||
|
||||
使用 `g` 选项输出 IPV4 以及 IPV6 的多播组信息,
|
||||
|
||||
```shell
|
||||
$ netstat -g
|
||||
```
|
||||
|
||||
### 9 - 显示网络路由信息
|
||||
|
||||
使用 `r` 输出网络路由信息,
|
||||
|
||||
```shell
|
||||
$ netstat -r
|
||||
```
|
||||
|
||||
### 10 - 持续输出
|
||||
|
||||
使用 `c` 选项持续输出结果
|
||||
|
||||
```shell
|
||||
$ netstat -c
|
||||
```
|
||||
|
||||
### 11 - 过滤出某个端口
|
||||
|
||||
与 `grep` 连用来过滤出某个端口的连接,
|
||||
|
||||
```shell
|
||||
$ netstat -anp | grep 3306
|
||||
```
|
||||
|
||||
### 12 - 统计连接个数
|
||||
|
||||
通过与 `wc` 和 `grep` 命令连用,可以统计指定端口的连接数量
|
||||
|
||||
```shell
|
||||
$ netstat -anp | grep 3306 | wc -l
|
||||
```
|
||||
|
||||
这会输出 mysql 服务端口(即 3306)的连接数。
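在排查问题时,还经常需要按连接状态做个汇总。下面是一个借助 `awk` 的简单示意,统计各个 TCP 连接状态(如 ESTABLISHED、TIME_WAIT)的数量:

```shell
# NR>2 用于跳过输出开头的两行表头
$ netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn
```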
|
||||
|
||||
这就是我们简短的案例指南了,希望它带给你的信息量足够。 有任何疑问欢迎提出。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/learn-use-netstat-with-examples/
|
||||
|
||||
作者:[Shusain][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/learn-use-curl-command-examples/
|
@ -0,0 +1,274 @@
|
||||
如何在 Linux 上安装友好的交互式 shell:Fish
|
||||
======
|
||||
|
||||
Fish,<ruby>友好的交互式 shell<rt>Friendly Interactive SHell</rt></ruby> 的缩写,是一个适用于类 Unix 系统的智能而用户友好的 shell。Fish 有着很多重要的功能,比如自动建议、语法高亮、可搜索的历史记录(类似于 bash 中的 `CTRL+r`)、智能搜索功能、极好的 VGA 颜色支持、基于 web 的设置方式、完善的手册页补全和许多开箱即用的功能。只管安装并立即使用它吧,无需更多配置,也不需要安装任何额外的附加组件/插件!
|
||||
|
||||
在这篇教程中,我们讨论如何在 Linux 中安装和使用 fish shell。
|
||||
|
||||
#### 安装 Fish
|
||||
|
||||
尽管 fish 是一个非常用户友好且功能丰富的 shell,但它并没有包括在大多数 Linux 发行版的默认仓库中。只有少数 Linux 发行版(如 Arch Linux、Gentoo、NixOS 和 Ubuntu 等)的官方仓库中提供了它。不过,安装 fish 并不难。
|
||||
|
||||
在 Arch Linux 和它的衍生版上,运行以下命令来安装它。
|
||||
|
||||
```
|
||||
sudo pacman -S fish
|
||||
```
|
||||
|
||||
在 CentOS 7 上以 root 运行以下命令:
|
||||
|
||||
```
|
||||
cd /etc/yum.repos.d/
|
||||
wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_7/shells:fish:release:2.repo
|
||||
yum install fish
|
||||
```
|
||||
|
||||
在 CentOS 6 上以 root 运行以下命令:
|
||||
|
||||
```
|
||||
cd /etc/yum.repos.d/
|
||||
wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_6/shells:fish:release:2.repo
|
||||
yum install fish
|
||||
```
|
||||
|
||||
在 Debian 9 上以 root 运行以下命令:
|
||||
|
||||
```
|
||||
wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_9.0/Release.key -O Release.key
|
||||
apt-key add - < Release.key
|
||||
echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_9.0/ /' > /etc/apt/sources.list.d/fish.list
|
||||
apt-get update
|
||||
apt-get install fish
|
||||
```
|
||||
|
||||
在 Debian 8 上以 root 运行以下命令:
|
||||
|
||||
```
|
||||
wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_8.0/Release.key -O Release.key
|
||||
apt-key add - < Release.key
|
||||
echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_8.0/ /' > /etc/apt/sources.list.d/fish.list
|
||||
apt-get update
|
||||
apt-get install fish
|
||||
```
|
||||
|
||||
在 Fedora 26 上以 root 运行以下命令:
|
||||
|
||||
```
|
||||
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_26/shells:fish:release:2.repo
|
||||
dnf install fish
|
||||
```
|
||||
|
||||
在 Fedora 25 上以 root 运行以下命令:
|
||||
|
||||
```
|
||||
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_25/shells:fish:release:2.repo
|
||||
dnf install fish
|
||||
```
|
||||
|
||||
在 Fedora 24 上以 root 运行以下命令:
|
||||
|
||||
```
|
||||
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_24/shells:fish:release:2.repo
|
||||
dnf install fish
|
||||
```
|
||||
|
||||
在 Fedora 23 上以 root 运行以下命令:
|
||||
|
||||
```
|
||||
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_23/shells:fish:release:2.repo
|
||||
dnf install fish
|
||||
```
|
||||
|
||||
在 openSUSE 上以 root 运行以下命令:
|
||||
|
||||
```
|
||||
zypper install fish
|
||||
```
|
||||
|
||||
在 RHEL 7 上以 root 运行以下命令:
|
||||
|
||||
```
|
||||
cd /etc/yum.repos.d/
|
||||
wget https://download.opensuse.org/repositories/shells:fish:release:2/RHEL_7/shells:fish:release:2.repo
|
||||
yum install fish
|
||||
```
|
||||
|
||||
在 RHEL-6 上以 root 运行以下命令:
|
||||
|
||||
```
|
||||
cd /etc/yum.repos.d/
|
||||
wget https://download.opensuse.org/repositories/shells:fish:release:2/RedHat_RHEL-6/shells:fish:release:2.repo
|
||||
yum install fish
|
||||
```
|
||||
|
||||
在 Ubuntu 和它的衍生版上:
|
||||
|
||||
```
|
||||
sudo apt-get update
|
||||
sudo apt-get install fish
|
||||
```
|
||||
|
||||
就这样了。是时候探索 fish shell 了。
|
||||
|
||||
### 用法
|
||||
|
||||
要从你默认的 shell 切换到 fish,请执行以下操作:
|
||||
|
||||
```
|
||||
$ fish
|
||||
Welcome to fish, the friendly interactive shell
|
||||
```
|
||||
|
||||
你可以在 `~/.config/fish/config.fish` 中找到默认的 fish 配置(类似于 `.bashrc`)。如果它不存在,就创建它吧。
|
||||
|
||||
#### 自动建议
|
||||
|
||||
当我输入一个命令时,它会以浅灰色自动给出建议。我只需要输入一个 Linux 命令的前几个字母,然后按下 `tab` 键,就可以补全这个命令。
|
||||
|
||||
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png)][2]
|
||||
|
||||
如果有更多的可能性,它将会列出它们。你可以使用上/下箭头键从列表中选择列出的命令。在选择你想运行的命令后,只需按下右箭头键,然后按下 `ENTER` 运行它。
|
||||
|
||||
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png)][3]
|
||||
|
||||
无需 `CTRL+r` 了!正如你已知道的,我们通过按 `CTRL+r` 来反向搜索 Bash shell 中的历史命令。但在 fish shell 中是没有必要的。由于它有自动建议功能,只需输入命令的前几个字母,然后从历史记录中选择已经执行的命令。很酷,是吧。
|
||||
|
||||
#### 智能搜索
|
||||
|
||||
我们也可以使用智能搜索来查找一个特定的命令、文件或者目录。例如,我输入一个命令的一部分,然后按向下箭头键进行智能搜索,再次输入一个字母来从列表中选择所需的命令。
|
||||
|
||||
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png)][4]
|
||||
|
||||
#### 语法高亮
|
||||
|
||||
当你输入一个命令时,你会注意到语法高亮。请看下面的截图,对比我在 Bash shell 和 fish shell 中输入相同命令时的区别。
|
||||
|
||||
Bash:
|
||||
|
||||
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png)][5]
|
||||
|
||||
Fish:
|
||||
|
||||
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png)][6]
|
||||
|
||||
正如你所看到的,`sudo` 在 fish shell 中已经被高亮显示。此外,默认情况下它将以红色显示无效命令。
|
||||
|
||||
#### 基于 web 的配置方式
|
||||
|
||||
这是 fish shell 另一个很酷的功能。我们可以在网页上设置颜色、更改 fish 提示符,并查看所有的函数、变量、历史记录和键绑定。
|
||||
|
||||
要启动 web 配置界面,只需输入:
|
||||
|
||||
```
|
||||
fish_config
|
||||
```
|
||||
|
||||
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png)][7]
|
||||
|
||||
#### 手册页补完
|
||||
|
||||
Bash 和其它 shell 也支持可编程的补全,但只有 fish 可以通过解析已安装的手册页来自动生成补全。
|
||||
|
||||
为此,请运行:
|
||||
|
||||
```
|
||||
fish_update_completions
|
||||
```
|
||||
|
||||
实例输出将是:
|
||||
|
||||
```
|
||||
Parsing man pages and writing completions to /home/sk/.local/share/fish/generated_completions/
|
||||
3435 / 3435 : zramctl.8.gz
|
||||
```
|
||||
|
||||
#### 禁用问候语
|
||||
|
||||
默认情况下,fish 在启动时问候你(“Welcome to fish, the friendly interactive shell”)。如果你不想要这个问候消息,可以禁用它。为此,编辑 fish 配置文件:
|
||||
|
||||
```
|
||||
vi ~/.config/fish/config.fish
|
||||
```
|
||||
|
||||
添加以下行:
|
||||
|
||||
```
|
||||
set -g -x fish_greeting ''
|
||||
```
|
||||
|
||||
你也可以设置任意自定义的问候语,而不是禁用 fish 问候。
|
||||
|
||||
```
|
||||
set -g -x fish_greeting 'Welcome to OSTechNix'
|
||||
```
|
||||
|
||||
#### 获得帮助
|
||||
|
||||
这是另一个令我印象深刻的功能。要从终端在默认的 web 浏览器中打开 fish 文档页面,只需输入:
|
||||
|
||||
```
|
||||
help
|
||||
```
|
||||
|
||||
官方文档将会在你的默认浏览器中打开。另外,你可以使用手册页来显示任何命令的帮助部分。
|
||||
|
||||
```
|
||||
man fish
|
||||
```
|
||||
|
||||
#### 设置 fish 为默认 shell
|
||||
|
||||
非常喜欢它?太好了!设置它作为默认 shell 吧。为此,请使用命令 `chsh`:
|
||||
|
||||
```
|
||||
chsh -s /usr/bin/fish
|
||||
```
|
||||
|
||||
在这里,`/usr/bin/fish` 是 fish shell 的路径。如果你不知道正确的路径,以下命令将会帮助你:
|
||||
|
||||
```
|
||||
which fish
|
||||
```
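在某些发行版上,如果 `chsh` 提示该 shell 无效,通常是因为 fish 还没有被列入 `/etc/shells`。可以先把它的路径(以 `which fish` 的输出为准)追加进去,再执行 `chsh`:

```
echo /usr/bin/fish | sudo tee -a /etc/shells
chsh -s /usr/bin/fish
```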
|
||||
|
||||
注销并且重新登录以使用新的默认 shell。
|
||||
|
||||
请记住,为 Bash 编写的许多 shell 脚本可能不完全兼容 fish。
|
||||
|
||||
要切换回 Bash,只需运行:
|
||||
|
||||
```
|
||||
bash
|
||||
```
|
||||
|
||||
如果你想让 Bash 重新成为你的永久默认 shell,运行:
|
||||
|
||||
```
|
||||
chsh -s /bin/bash
|
||||
```
|
||||
|
||||
各位,这就是全部了。到这里,你应该已经对 fish shell 的使用有了基本的了解。如果你正在寻找 Bash 的替代品,fish 可能是一个不错的选择。
|
||||
|
||||
Cheers!
|
||||
|
||||
资源:
|
||||
|
||||
* [fish shell 官网][1]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[kimii](https://github.com/kimii)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://fishshell.com/
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png
|
||||
[7]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png
|
165
published/20171206 How to extract substring in Bash.md
Normal file
165
published/20171206 How to extract substring in Bash.md
Normal file
@ -0,0 +1,165 @@
|
||||
如何在 Bash 中抽取子字符串
|
||||
======
|
||||
|
||||
所谓“子字符串”就是出现在其它字符串内的字符串。 比如 “3382” 就是 “this is a 3382 test” 的子字符串。 我们有多种方法可以从中把数字或指定部分字符串抽取出来。
|
||||
|
||||
[![How to Extract substring in Bash Shell on Linux or Unix](https://www.cyberciti.biz/media/new/faq/2017/12/How-to-Extract-substring-in-Bash-Shell-on-Linux-or-Unix.jpg)][2]
|
||||
|
||||
本文会向你展示在 bash shell 中如何获取或者说查找出子字符串。
|
||||
|
||||
### 在 Bash 中抽取子字符串
|
||||
|
||||
其语法为:
|
||||
|
||||
```shell
|
||||
## 格式 ##
|
||||
${parameter:offset:length}
|
||||
```
|
||||
|
||||
子字符串扩展是 bash 的一项功能。它会扩展成 `parameter` 值中以 `offset` 为开始,长为 `length` 个字符的字符串。 假设, `$u` 定义如下:
|
||||
|
||||
```shell
|
||||
## 定义变量 u ##
|
||||
u="this is a test"
|
||||
```
|
||||
|
||||
那么下面参数的子字符串扩展会抽取出子字符串:
|
||||
|
||||
```shell
|
||||
var="${u:10:4}"
|
||||
echo "${var}"
|
||||
```
|
||||
|
||||
结果为:
|
||||
|
||||
```
|
||||
test
|
||||
```
|
||||
|
||||
其中这些参数分别表示:
|
||||
|
||||
+ 10 : 偏移位置
|
||||
+ 4 : 长度
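顺带一提,偏移量也可以是负数,表示从字符串末尾开始计算。此时冒号后面需要加一个空格或者用括号括起来,以免与 `${parameter:-word}` 这种默认值语法混淆。例如:

```shell
u="this is a test"
echo "${u:0:4}"     # => this
echo "${u: -4}"     # => test(注意冒号后面的空格)
echo "${u:(-4):2}"  # => te
```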
|
||||
|
||||
### 使用 IFS
|
||||
|
||||
根据 bash 的 man 页说明:
|
||||
|
||||
> [IFS (内部字段分隔符)][3]用于在扩展后进行单词分割,并用内建的 read 命令将行分割为词。默认值是<space><tab><newline>。
|
||||
|
||||
另一种 <ruby>POSIX 就绪<rt>POSIX ready</rt></ruby>的方案如下:
|
||||
|
||||
```shell
|
||||
u="this is a test"
|
||||
set -- $u
|
||||
echo "$1"
|
||||
echo "$2"
|
||||
echo "$3"
|
||||
echo "$4"
|
||||
```
|
||||
|
||||
输出为:
|
||||
|
||||
```shell
|
||||
this
|
||||
is
|
||||
a
|
||||
test
|
||||
```
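如果字段不是用空白分隔的,也可以临时把 `IFS` 设为其它分隔符,再配合内建的 `read` 来切分。例如按 `:` 切分一个路径列表:

```shell
path="/usr:/usr/local:/opt"
IFS=':' read -r first second third <<< "$path"
echo "$first"    # => /usr
echo "$second"   # => /usr/local
echo "$third"    # => /opt
```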
|
||||
|
||||
下面是一段 bash 代码,用来从 Cloudflare 缓存中清除指定的 URL 及其对应的主页。
|
||||
|
||||
```shell
|
||||
#!/bin/bash
|
||||
####################################################
|
||||
## Author - Vivek Gite {https://www.cyberciti.biz/}
|
||||
## Purpose - Purge CF cache
|
||||
## License - Under GPL ver 3.x+
|
||||
####################################################
|
||||
## set me first ##
|
||||
zone_id="YOUR_ZONE_ID_HERE"
|
||||
api_key="YOUR_API_KEY_HERE"
|
||||
email_id="YOUR_EMAIL_ID_HERE"
|
||||
|
||||
## hold data ##
|
||||
home_url=""
|
||||
amp_url=""
|
||||
urls="$@"
|
||||
|
||||
## Show usage
|
||||
[ "$urls" == "" ] && { echo "Usage: $0 url1 url2 url3"; exit 1; }
|
||||
|
||||
## Get home page url as we have various sub dirs on domain
|
||||
## /tips/
|
||||
## /faq/
|
||||
|
||||
get_home_url(){
|
||||
local u="$1"
|
||||
IFS='/'
|
||||
set -- $u
|
||||
echo "${1}${IFS}${IFS}${3}${IFS}${4}${IFS}"
|
||||
}
|
||||
|
||||
echo
|
||||
echo "Purging cache from Cloudflare。.。"
|
||||
echo
|
||||
for u in $urls
|
||||
do
|
||||
home_url="$(get_home_url $u)"
|
||||
amp_url="${u}amp/"
|
||||
curl -X DELETE "https://api.cloudflare.com/client/v4/zones/${zone_id}/purge_cache" \
|
||||
-H "X-Auth-Email: ${email_id}" \
|
||||
-H "X-Auth-Key: ${api_key}" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data "{\"files\":[\"${u}\",\"${amp_url}\",\"${home_url}\"]}"
|
||||
echo
|
||||
done
|
||||
echo
|
||||
```
|
||||
|
||||
它的使用方法为:
|
||||
|
||||
```shell
|
||||
~/bin/cf.clear.cache https://www.cyberciti.biz/faq/bash-for-loop/ https://www.cyberciti.biz/tips/linux-security.html
|
||||
```
|
||||
|
||||
### 借助 cut 命令
|
||||
|
||||
可以使用 `cut` 命令从文件的每一行或者变量的值中截取出一部分。它的用法为:
|
||||
|
||||
```shell
|
||||
u="this is a test"
|
||||
echo "$u" | cut -d' ' -f 4
|
||||
echo "$u" | cut --delimiter=' ' --fields=4
|
||||
##########################################
|
||||
## WHERE
|
||||
## -d' ' : Use a whitespace as delimiter
|
||||
## -f 4 : Select only 4th field
|
||||
##########################################
|
||||
var="$(cut -d' ' -f 4 <<< $u)"
|
||||
echo "${var}"
|
||||
```
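除了 `cut`,`awk` 也常被用来按字段抽取子字符串,效果类似:

```shell
u="this is a test"
echo "$u" | awk '{print $4}'    # 输出第 4 个字段:test
```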
|
||||
|
||||
想了解更多请阅读 bash 的 man 页:
|
||||
|
||||
```shell
|
||||
man bash
|
||||
man cut
|
||||
```
|
||||
|
||||
另请参见: [Bash String Comparison: Find Out IF a Variable Contains a Substring][1]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/faq/how-to-extract-substring-in-bash/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz
|
||||
[1]:https://www.cyberciti.biz/faq/bash-find-out-if-variable-contains-substring/
|
||||
[2]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-Extract-substring-in-Bash-Shell-on-Linux-or-Unix.jpg
|
||||
[3]:https://bash.cyberciti.biz/guide/$IFS
|
@ -0,0 +1,401 @@
|
||||
7 个使用 bcc/BPF 的性能分析神器
|
||||
============================================================
|
||||
|
||||
> 使用<ruby>伯克利包过滤器<rt>Berkeley Packet Filter</rt></ruby>(BPF)<ruby>编译器集合<rt>Compiler Collection</rt></ruby>(BCC)工具深度探查你的 linux 代码。
|
||||
|
||||
![7 superpowers for Fedora bcc/BPF performance analysis](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/penguins%20in%20space_0.jpg?itok=umpCTAul)
|
||||
|
||||
在 Linux 中出现的一种新技术能够为系统管理员和开发者提供大量用于性能分析和故障排除的新工具和仪表盘。它被称为<ruby>增强的伯克利数据包过滤器<rt>enhanced Berkeley Packet Filter</rt></ruby>(eBPF,或 BPF)。虽然这些改进并不是在伯克利开发的,但它处理的也不仅仅是数据包,能做的也远不止过滤。我将讨论在 Fedora 和 Red Hat Linux 发行版中使用 BPF 的一种方法,并在 Fedora 26 上演示。
|
||||
|
||||
BPF 可以在内核中运行由用户定义的沙盒程序,从而立即添加新的自定义功能。这就像按需给 Linux 系统添加超能力一般。你可以用它来做的事情包括:
|
||||
|
||||
* **高级性能跟踪工具**:对文件系统操作、TCP 事件、用户级事件等的可编程的低开销检测。
|
||||
* **网络性能**: 尽早丢弃数据包以提高对 DDoS 的恢复能力,或者在内核中重定向数据包以提高性能。
|
||||
* **安全监控**: 7x24 小时的自定义检测和记录内核空间与用户空间内的可疑事件。
|
||||
|
||||
在可能的情况下,BPF 程序必须通过一个内核验证机制来保证它们的安全运行,这比写自定义的内核模块更安全。我在此假设大多数人并不编写自己的 BPF 程序,而是使用别人写好的。在 GitHub 上的 [BPF Compiler Collection (bcc)][12] 项目中,我已发布许多开源代码。bcc 为 BPF 开发提供了不同的前端支持,包括 Python 和 Lua,并且是目前最活跃的 BPF 工具项目。
|
||||
|
||||
### 7 个有用的 bcc/BPF 新工具
|
||||
|
||||
为了了解 bcc/BPF 工具和它们的检测内容,我创建了下面的图表并添加到 bcc 项目中。
|
||||
|
||||
![Linux bcc/BPF 跟踪工具图](https://opensource.com/sites/default/files/u128651/bcc_tracing_tools.png)
|
||||
|
||||
这些是命令行界面(CLI)工具,你可以通过 SSH 使用它们。目前大多数性能分析,包括在我的雇主公司,都是通过 GUI 和仪表盘进行的,SSH 只是最后的手段。但这些命令行工具仍然是预览 BPF 能力的好方法,即使你最终打算通过一个可用的 GUI 来使用它。我已着手向一个开源 GUI 添加 BPF 功能,但那是另一篇文章的主题。现在我想向你分享今天就可以使用的 CLI 工具。
|
||||
|
||||
#### 1、 execsnoop
|
||||
|
||||
从哪儿开始呢?先来看看新进程吧。那些会消耗系统资源但转瞬即逝的进程,甚至不会出现在 `top(1)` 命令或其它工具的显示之中。这些新进程可以使用 [execsnoop][15] 进行检测(或使用行业术语说,可以<ruby>被追踪<rt>traced</rt></ruby>)。在追踪时,我将在另一个窗口中通过 SSH 登录:
|
||||
|
||||
```
|
||||
# /usr/share/bcc/tools/execsnoop
|
||||
PCOMM PID PPID RET ARGS
|
||||
sshd 12234 727 0 /usr/sbin/sshd -D -R
|
||||
unix_chkpwd 12236 12234 0 /usr/sbin/unix_chkpwd root nonull
|
||||
unix_chkpwd 12237 12234 0 /usr/sbin/unix_chkpwd root chkexpiry
|
||||
bash 12239 12238 0 /bin/bash
|
||||
id 12241 12240 0 /usr/bin/id -un
|
||||
hostname 12243 12242 0 /usr/bin/hostname
|
||||
pkg-config 12245 12244 0 /usr/bin/pkg-config --variable=completionsdir bash-completion
|
||||
grepconf.sh 12246 12239 0 /usr/libexec/grepconf.sh -c
|
||||
grep 12247 12246 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
|
||||
tty 12249 12248 0 /usr/bin/tty -s
|
||||
tput 12250 12248 0 /usr/bin/tput colors
|
||||
dircolors 12252 12251 0 /usr/bin/dircolors --sh /etc/DIR_COLORS
|
||||
grep 12253 12239 0 /usr/bin/grep -qi ^COLOR.*none /etc/DIR_COLORS
|
||||
grepconf.sh 12254 12239 0 /usr/libexec/grepconf.sh -c
|
||||
grep 12255 12254 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
|
||||
grepconf.sh 12256 12239 0 /usr/libexec/grepconf.sh -c
|
||||
grep 12257 12256 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
|
||||
```
|
||||
|
||||
哇哦。 那是什么? 什么是 `grepconf.sh`? 什么是 `/etc/GREP_COLORS`? 是 `grep` 在读取它自己的配置文件……由 `grep` 运行的? 这究竟是怎么工作的?
|
||||
|
||||
欢迎来到有趣的系统追踪世界。 你可以学到很多关于系统是如何工作的(或者根本不工作,在有些情况下),并且发现一些简单的优化方法。 `execsnoop` 通过跟踪 `exec()` 系统调用来工作,`exec()` 通常用于在新进程中加载不同的程序代码。
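`execsnoop` 还支持一些过滤选项(具体请以 `-h` 的帮助输出为准)。例如,下面的用法只显示命令名中包含 “grep” 的新进程:

```
# /usr/share/bcc/tools/execsnoop -n grep
```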
|
||||
|
||||
#### 2、 opensnoop
|
||||
|
||||
接着上面继续。`grepconf.sh` 应该是一个 shell 脚本,对吧?我将运行 `file(1)` 来确认,并使用 bcc 工具 [opensnoop][16] 来查看它打开了哪些文件:
|
||||
|
||||
```
|
||||
# /usr/share/bcc/tools/opensnoop
|
||||
PID COMM FD ERR PATH
|
||||
12420 file 3 0 /etc/ld.so.cache
|
||||
12420 file 3 0 /lib64/libmagic.so.1
|
||||
12420 file 3 0 /lib64/libz.so.1
|
||||
12420 file 3 0 /lib64/libc.so.6
|
||||
12420 file 3 0 /usr/lib/locale/locale-archive
|
||||
12420 file -1 2 /etc/magic.mgc
|
||||
12420 file 3 0 /etc/magic
|
||||
12420 file 3 0 /usr/share/misc/magic.mgc
|
||||
12420 file 3 0 /usr/lib64/gconv/gconv-modules.cache
|
||||
12420 file 3 0 /usr/libexec/grepconf.sh
|
||||
1 systemd 16 0 /proc/565/cgroup
|
||||
1 systemd 16 0 /proc/536/cgroup
|
||||
```
|
||||
|
||||
像 `execsnoop` 和 `opensnoop` 这样的工具会将每个事件打印成一行。上面的输出显示了 `file(1)` 命令当前打开(或尝试打开)的文件:对于 `/etc/magic.mgc`,返回的文件描述符(“FD” 列)是 -1,而 “ERR” 列指示它是“文件未找到”。我不了解该文件,也不知道 `file(1)` 正在读取的 `/usr/share/misc/magic.mgc` 文件是什么。我不应该感到惊讶,但是 `file(1)` 在识别文件类型时没有问题:
|
||||
|
||||
```
|
||||
# file /usr/share/misc/magic.mgc /etc/magic
|
||||
/usr/share/misc/magic.mgc: magic binary file for file(1) cmd (version 14) (little endian)
|
||||
/etc/magic: magic text file for file(1) cmd, ASCII text
|
||||
```
|
||||
|
||||
`opensnoop` 通过跟踪 `open()` 系统调用来工作。为什么不使用 `strace -feopen file` 命令呢? 在这种情况下是可以的。然而,`opensnoop` 的一些优点在于它能在系统范围内工作,并且跟踪所有进程的 `open()` 系统调用。注意上例的输出中包括了从 systemd 打开的文件。`opensnoop` 应该系统开销更低:BPF 跟踪已经被优化过,而当前版本的 `strace(1)` 仍然使用较老和较慢的 `ptrace(2)` 接口。
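与 `strace` 相比,`opensnoop` 也更便于按需过滤(选项同样以 `-h` 的输出为准)。例如,只跟踪某个进程,或者只显示失败的 `open()` 调用:

```
# /usr/share/bcc/tools/opensnoop -p 1234     # 只跟踪 PID 为 1234 的进程
# /usr/share/bcc/tools/opensnoop -x          # 只显示打开失败的调用
```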
|
||||
|
||||
#### 3、 xfsslower
|
||||
|
||||
bcc/BPF 不仅仅可以分析系统调用。[xfsslower][17] 工具可以跟踪大于 1 毫秒(参数)延迟的常见 XFS 文件系统操作。
|
||||
|
||||
```
|
||||
# /usr/share/bcc/tools/xfsslower 1
|
||||
Tracing XFS operations slower than 1 ms
|
||||
TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME
|
||||
14:17:34 systemd-journa 530 S 0 0 1.69 system.journal
|
||||
14:17:35 auditd 651 S 0 0 2.43 audit.log
|
||||
14:17:42 cksum 4167 R 52976 0 1.04 at
|
||||
14:17:45 cksum 4168 R 53264 0 1.62 [
|
||||
14:17:45 cksum 4168 R 65536 0 1.01 certutil
|
||||
14:17:45 cksum 4168 R 65536 0 1.01 dir
|
||||
14:17:45 cksum 4168 R 65536 0 1.17 dirmngr-client
|
||||
14:17:46 cksum 4168 R 65536 0 1.06 grub2-file
|
||||
14:17:46 cksum 4168 R 65536 128 1.01 grub2-fstest
|
||||
[...]
|
||||
```
|
||||
|
||||
在上面的输出中,我捕获到了多个延迟超过 1 毫秒的 `cksum(1)` 读取操作(字段 “T” 等于 “R”)。这是 `xfsslower` 工具在运行时通过动态地检测 XFS 的内核函数实现的,并在它结束时解除该检测。这个 bcc 工具也有其它文件系统的版本:`ext4slower`、`btrfsslower`、`zfsslower` 和 `nfsslower`。
|
||||
|
||||
这是个有用的工具,也是 BPF 追踪的重要例子。对文件系统性能的传统分析主要集中在块 I/O 统计信息 —— 通常你看到的是由 `iostat(1)` 工具输出,并由许多性能监视 GUI 绘制的图表。这些统计数据显示的是磁盘如何执行,而不是真正的文件系统如何执行。通常比起磁盘来说,你更关心的是文件系统的性能,因为应用程序是在文件系统中发起请求和等待。并且,文件系统的性能可能与磁盘的性能大为不同!文件系统可以完全从内存缓存中读取数据,也可以通过预读算法和回写缓存来填充缓存。`xfsslower` 显示了文件系统的性能 —— 这是应用程序直接体验到的性能。通常这对于排除整个存储子系统的问题是有用的;如果确实没有文件系统延迟,那么性能问题很可能是在别处。
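举例来说,如果你的系统使用的是 ext4 文件系统,可以换用对应的工具,用法与 `xfsslower` 相同,参数同样是延迟阈值(毫秒):

```
# /usr/share/bcc/tools/ext4slower 10
```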
|
||||
|
||||
#### 4、 biolatency
|
||||
|
||||
虽然文件系统性能对于理解应用程序性能非常重要,但研究磁盘性能也是有好处的。当各种缓存技巧都无法挽救其延迟时,磁盘的低性能终会影响应用程序。 磁盘性能也是容量规划研究的目标。
|
||||
|
||||
`iostat(1)` 工具显示了磁盘 I/O 的平均延迟,但平均值可能会引起误解。以直方图的形式研究 I/O 延迟的分布会更有用,这可以使用 [biolatency][18] 来实现:
|
||||
|
||||
```
|
||||
# /usr/share/bcc/tools/biolatency
|
||||
Tracing block device I/O... Hit Ctrl-C to end.
|
||||
^C
|
||||
usecs : count distribution
|
||||
0 -> 1 : 0 | |
|
||||
2 -> 3 : 0 | |
|
||||
4 -> 7 : 0 | |
|
||||
8 -> 15 : 0 | |
|
||||
16 -> 31 : 0 | |
|
||||
32 -> 63 : 1 | |
|
||||
64 -> 127 : 63 |**** |
|
||||
128 -> 255 : 121 |********* |
|
||||
256 -> 511 : 483 |************************************ |
|
||||
512 -> 1023 : 532 |****************************************|
|
||||
1024 -> 2047 : 117 |******** |
|
||||
2048 -> 4095 : 8 | |
|
||||
```
|
||||
|
||||
这是另一个有用的工具和例子;它使用了名为 map 的 BPF 特性,可以在内核中高效地生成摘要统计。从内核层传递到用户层的数据只有 “count” 一列,其余内容都由用户级程序生成。
|
||||
|
||||
值得注意的是,这种工具大多支持 CLI 选项和参数,如其使用信息所示:
|
||||
|
||||
```
|
||||
# /usr/share/bcc/tools/biolatency -h
|
||||
usage: biolatency [-h] [-T] [-Q] [-m] [-D] [interval] [count]
|
||||
|
||||
Summarize block device I/O latency as a histogram
|
||||
|
||||
positional arguments:
|
||||
interval output interval, in seconds
|
||||
count number of outputs
|
||||
|
||||
optional arguments:
|
||||
-h, --help show this help message and exit
|
||||
-T, --timestamp include timestamp on output
|
||||
-Q, --queued include OS queued time in I/O time
|
||||
-m, --milliseconds millisecond histogram
|
||||
-D, --disks print a histogram per disk device
|
||||
|
||||
examples:
|
||||
./biolatency # summarize block I/O latency as a histogram
|
||||
./biolatency 1 10 # print 1 second summaries, 10 times
|
||||
./biolatency -mT 1 # 1s summaries, milliseconds, and timestamps
|
||||
./biolatency -Q # include OS queued time in I/O time
|
||||
./biolatency -D # show each disk device separately
|
||||
```
|
||||
|
||||
这些工具的行为与其它 Unix 工具保持一致,这样的设计是为了便于大家上手使用。
|
||||
|
||||
#### 5、 tcplife
|
||||
|
||||
另一个有用的工具是 [tcplife][19],它显示 TCP 会话的生命周期和吞吐量统计信息。
|
||||
|
||||
```
|
||||
# /usr/share/bcc/tools/tcplife
|
||||
PID COMM LADDR LPORT RADDR RPORT TX_KB RX_KB MS
|
||||
12759 sshd 192.168.56.101 22 192.168.56.1 60639 2 3 1863.82
|
||||
12783 sshd 192.168.56.101 22 192.168.56.1 60640 3 3 9174.53
|
||||
12844 wget 10.0.2.15 34250 54.204.39.132 443 11 1870 5712.26
|
||||
12851 curl 10.0.2.15 34252 54.204.39.132 443 0 74 505.90
|
||||
```
|
||||
|
||||
在你说 “我不是只用 `tcpdump(8)` 就能输出这个吗?” 之前请注意,运行 `tcpdump(8)` 或任何数据包嗅探器,在数据包速率很高的系统上开销会很大,即使 `tcpdump(8)` 的用户层和内核层机制已经过多年优化(否则开销还会更大)。`tcplife` 并不会检测每个数据包;它只会高效地监视 TCP 会话状态的变化,并由此得到该会话的持续时间。它还使用了内核中已有的吞吐量计数器,以及进程和命令信息(“PID” 和 “COMM” 列),这些是 `tcpdump(8)` 等线上嗅探工具做不到的。
|
||||
|
||||
#### 6、 gethostlatency
|
||||
|
||||
之前的每个例子都涉及到内核跟踪,所以我至少需要一个用户级跟踪的例子。 这就是 [gethostlatency][20],它检测用于名称解析的 `gethostbyname(3)` 和相关的库调用:
|
||||
|
||||
```
|
||||
# /usr/share/bcc/tools/gethostlatency
|
||||
TIME PID COMM LATms HOST
|
||||
06:43:33 12903 curl 188.98 opensource.com
|
||||
06:43:36 12905 curl 8.45 opensource.com
|
||||
06:43:40 12907 curl 6.55 opensource.com
|
||||
06:43:44 12911 curl 9.67 opensource.com
|
||||
06:45:02 12948 curl 19.66 opensource.cats
|
||||
06:45:06 12950 curl 18.37 opensource.cats
|
||||
06:45:07 12952 curl 13.64 opensource.cats
|
||||
06:45:19 13139 curl 13.10 opensource.cats
|
||||
```
|
||||
|
||||
是的,总是有 DNS 请求,所以有一个工具来监视系统范围内的 DNS 请求会很方便(这只有在应用程序使用标准系统库时才有效)。看看我如何跟踪多个对 “opensource.com” 的查找? 第一个是 188.98 毫秒,然后更快,不到 10 毫秒,毫无疑问,这是缓存的作用。它还追踪多个对 “opensource.cats” 的查找,一个不存在的可怜主机名,但我们仍然可以检查第一个和后续查找的延迟。(第二次查找后是否有一些否定缓存的影响?)
|
||||
|
||||
#### 7、 trace
|
||||
|
||||
好的,再举一个例子。[trace][21] 工具由 Sasha Goldshtein 开发,它提供了一些基本的 `printf(1)` 功能和自定义探针。例如:
|
||||
|
||||
```
|
||||
# /usr/share/bcc/tools/trace 'pam:pam_start "%s: %s", arg1, arg2'
|
||||
PID TID COMM FUNC -
|
||||
13266 13266 sshd pam_start sshd: root
|
||||
```
|
||||
|
||||
在这里,我正在跟踪 `libpam` 及其 `pam_start(3)` 函数,并将它的两个参数都打印为字符串。`libpam` 是<ruby>可插拔身份验证模块<rt>Pluggable Authentication Modules</rt></ruby>(PAM)系统所用的库,该输出显示 sshd 为 “root” 用户调用了 `pam_start()`(我当时正在登录)。其使用信息中有更多的例子(`trace -h`),而且所有这些工具在 bcc 版本库中都有手册页和示例文件,例如 `trace_example.txt` 和 `trace.8`。
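`trace` 的探针语法还支持内核函数和条件过滤。bcc 自带的 `trace_example.txt` 中就有类似下面这样的例子(具体语法以 `trace -h` 为准),只打印返回字节数较大的 `read()` 调用:

```
# /usr/share/bcc/tools/trace 'sys_read (arg3 > 20000) "read %d bytes", arg3'
```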
|
||||
|
||||
### 通过包安装 bcc
|
||||
|
||||
安装 bcc 最佳的方法是从 iovisor 软件仓库中安装,按照 bcc 的 [INSTALL.md][22] 进行即可。[IO Visor][23] 是包含了 bcc 的 Linux 基金会项目。这些工具所使用的 BPF 增强功能是在 4.x 系列 Linux 内核中逐步加入的,到 4.9 版本才有了全部支持。这意味着拥有 4.8 内核的 Fedora 25 可以运行这些工具中的大部分,而使用 4.11 内核的 Fedora 26 可以全部运行它们(至少在目前是这样)。
|
||||
|
||||
如果你使用的是 Fedora 25(或者 Fedora 26,而且这个帖子已经在很多个月前发布了 —— 你好,来自遥远的过去!),那么这个通过包安装的方式是可以工作的。 如果您使用的是 Fedora 26,那么请跳至“通过源代码安装”部分,它避免了一个[已修复的][26]的[已知][25]错误。 这个错误修复目前还没有进入 Fedora 26 软件包的依赖关系。 我使用的系统是:
|
||||
|
||||
```
|
||||
# uname -a
|
||||
Linux localhost.localdomain 4.11.8-300.fc26.x86_64 #1 SMP Thu Jun 29 20:09:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
|
||||
# cat /etc/fedora-release
|
||||
Fedora release 26 (Twenty Six)
|
||||
```
|
||||
|
||||
以下是我所遵循的安装步骤,但请参阅 INSTALL.md 获取更新的版本:
|
||||
|
||||
```
|
||||
# echo -e '[iovisor]\nbaseurl=https://repo.iovisor.org/yum/nightly/f25/$basearch\nenabled=1\ngpgcheck=0' | sudo tee /etc/yum.repos.d/iovisor.repo
|
||||
# dnf install bcc-tools
|
||||
[...]
|
||||
Total download size: 37 M
|
||||
Installed size: 143 M
|
||||
Is this ok [y/N]: y
|
||||
```
|
||||
|
||||
安装完成后,您可以在 `/usr/share` 中看到新的工具:
|
||||
|
||||
```
|
||||
# ls /usr/share/bcc/tools/
|
||||
argdist dcsnoop killsnoop softirqs trace
|
||||
bashreadline dcstat llcstat solisten ttysnoop
|
||||
[...]
|
||||
```
|
||||
|
||||
试着运行其中一个:
|
||||
|
||||
```
|
||||
# /usr/share/bcc/tools/opensnoop
|
||||
chdir(/lib/modules/4.11.8-300.fc26.x86_64/build): No such file or directory
|
||||
Traceback (most recent call last):
|
||||
File "/usr/share/bcc/tools/opensnoop", line 126, in
|
||||
b = BPF(text=bpf_text)
|
||||
File "/usr/lib/python3.6/site-packages/bcc/__init__.py", line 284, in __init__
|
||||
raise Exception("Failed to compile BPF module %s" % src_file)
|
||||
Exception: Failed to compile BPF module
|
||||
```
|
||||
|
||||
运行失败,提示 `/lib/modules/4.11.8-300.fc26.x86_64/build` 丢失。如果你也遇到这个问题,那只是因为系统缺少内核头文件。查看一下这个路径(它是一个符号链接)指向何处,再用 `dnf whatprovides` 搜索该路径,它就会告诉你接下来需要安装的软件包。
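对应的查找过程大致如下(内核版本以你自己系统的 `uname -r` 输出为准):

```
# ls -l /lib/modules/4.11.8-300.fc26.x86_64/build
# dnf whatprovides /usr/src/kernels/4.11.8-300.fc26.x86_64
```

对于这个系统,需要安装的软件包是: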
|
||||
|
||||
```
|
||||
# dnf install kernel-devel-4.11.8-300.fc26.x86_64
|
||||
[...]
|
||||
Total download size: 20 M
|
||||
Installed size: 63 M
|
||||
Is this ok [y/N]: y
|
||||
[...]
|
||||
```
|
||||
|
||||
现在:
|
||||
|
||||
```
|
||||
# /usr/share/bcc/tools/opensnoop
|
||||
PID COMM FD ERR PATH
|
||||
11792 ls 3 0 /etc/ld.so.cache
|
||||
11792 ls 3 0 /lib64/libselinux.so.1
|
||||
11792 ls 3 0 /lib64/libcap.so.2
|
||||
11792 ls 3 0 /lib64/libc.so.6
|
||||
[...]
|
||||
```
|
||||
|
||||
运行起来了。 这是捕获自另一个窗口中的 ls 命令活动。 请参阅前面的部分以使用其它有用的命令。
|
||||
|
||||
### 通过源码安装
|
||||
|
||||
如果您需要从源代码安装,您还可以在 [INSTALL.md][27] 中找到文档和更新说明。 我在 Fedora 26 上做了如下的事情:
|
||||
|
||||
```
|
||||
sudo dnf install -y bison cmake ethtool flex git iperf libstdc++-static \
|
||||
python-netaddr python-pip gcc gcc-c++ make zlib-devel \
|
||||
elfutils-libelf-devel
|
||||
sudo dnf install -y luajit luajit-devel # for Lua support
|
||||
sudo dnf install -y \
|
||||
http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm
|
||||
sudo pip install pyroute2
|
||||
sudo dnf install -y clang clang-devel llvm llvm-devel llvm-static ncurses-devel
|
||||
```
|
||||
|
||||
除 `netperf` 外一切妥当,其中有以下错误:
|
||||
|
||||
```
|
||||
Curl error (28): Timeout was reached for http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm [Connection timed out after 120002 milliseconds]
|
||||
```
|
||||
|
||||
不必理会,`netperf` 是可选的,它只是用于测试,而 bcc 没有它也会编译成功。
|
||||
|
||||
以下是余下的 bcc 编译和安装步骤:
|
||||
|
||||
```
|
||||
git clone https://github.com/iovisor/bcc.git
|
||||
mkdir bcc/build; cd bcc/build
|
||||
cmake .. -DCMAKE_INSTALL_PREFIX=/usr
|
||||
make
|
||||
sudo make install
|
||||
```
|
||||
|
||||
现在,命令应该可以工作了:
|
||||
|
||||
```
|
||||
# /usr/share/bcc/tools/opensnoop
|
||||
PID COMM FD ERR PATH
|
||||
4131 date 3 0 /etc/ld.so.cache
|
||||
4131 date 3 0 /lib64/libc.so.6
|
||||
4131 date 3 0 /usr/lib/locale/locale-archive
|
||||
4131 date 3 0 /etc/localtime
|
||||
[...]
|
||||
```
|
||||
|
||||
### 写在最后和其他的前端
|
||||
|
||||
这是一个可以在 Fedora 和 Red Hat 系列操作系统上使用的新 BPF 性能分析强大功能的快速浏览。我演示了 BPF 的流行前端 [bcc][28] ,并包括了其在 Fedora 上的安装说明。bcc 附带了 60 多个用于性能分析的新工具,这将帮助您充分利用 Linux 系统。也许你会直接通过 SSH 使用这些工具,或者一旦 GUI 监控程序支持 BPF 的话,你也可以通过它们来使用相同的功能。
|
||||
|
||||
此外,bcc 并不是正在开发的唯一前端。[ply][29] 和 [bpftrace][30],旨在为快速编写自定义工具提供更高级的语言支持。此外,[SystemTap][31] 刚刚发布[版本3.2][32],包括一个早期的实验性 eBPF 后端。 如果这个继续开发,它将为运行多年来开发的许多 SystemTap 脚本和 tapset(库)提供一个安全和高效的生产级引擎。(随同 eBPF 使用 SystemTap 将是另一篇文章的主题。)
|
||||
|
||||
如果您需要开发自定义工具,那么也可以使用 bcc 来实现,尽管其语言比 SystemTap、ply 或 bpftrace 要冗长得多。我的 bcc 工具可以作为代码示例,另外我还贡献了用 Python 开发 bcc 工具的[教程][33]。我建议先学习 bcc 的 multi-tools(多用途工具),因为在需要编写新工具之前,你可能已经能从它们那里获得很多经验。你可以研究 bcc 存储库中 [funccount][34]、[funclatency][35]、[funcslower][36]、[stackcount][37]、[trace][38] 和 [argdist][39] 这些工具的示例文件。
|
||||
|
||||
感谢 [Opensource.com][40] 进行编辑。
|
||||
|
||||
### 关于作者
|
||||
|
||||
[![Brendan Gregg](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/brendan_face2017_620d.jpg?itok=LIwTJjL9)][43]
|
||||
|
||||
Brendan Gregg 是 Netflix 的一名高级性能架构师,在那里他进行大规模的计算机性能设计、分析和调优。
|
||||
|
||||
(题图:opensource.com)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/11/bccbpf-performance
|
||||
|
||||
作者:[Brendan Gregg][a]
|
||||
译者:[yongshouzhang](https://github.com/yongshouzhang)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/brendang
|
||||
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[6]:https://opensource.com/participate
|
||||
[7]:https://opensource.com/users/brendang
|
||||
[8]:https://opensource.com/users/brendang
|
||||
[9]:https://opensource.com/user/77626/feed
|
||||
[10]:https://opensource.com/article/17/11/bccbpf-performance?rate=r9hnbg3mvjFUC9FiBk9eL_ZLkioSC21SvICoaoJjaSM
|
||||
[11]:https://opensource.com/article/17/11/bccbpf-performance#comments
|
||||
[12]:https://github.com/iovisor/bcc
|
||||
[13]:https://opensource.com/file/376856
|
||||
[14]:https://opensource.com/usr/share/bcc/tools/trace
|
||||
[15]:https://github.com/brendangregg/perf-tools/blob/master/execsnoop
|
||||
[16]:https://github.com/brendangregg/perf-tools/blob/master/opensnoop
|
||||
[17]:https://github.com/iovisor/bcc/blob/master/tools/xfsslower.py
|
||||
[18]:https://github.com/iovisor/bcc/blob/master/tools/biolatency.py
|
||||
[19]:https://github.com/iovisor/bcc/blob/master/tools/tcplife.py
|
||||
[20]:https://github.com/iovisor/bcc/blob/master/tools/gethostlatency.py
|
||||
[21]:https://github.com/iovisor/bcc/blob/master/tools/trace.py
|
||||
[22]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---binary
|
||||
[23]:https://www.iovisor.org/
|
||||
[24]:https://opensource.com/article/17/11/bccbpf-performance#InstallViaSource
|
||||
[25]:https://github.com/iovisor/bcc/issues/1221
|
||||
[26]:https://reviews.llvm.org/rL302055
|
||||
[27]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---source
|
||||
[28]:https://github.com/iovisor/bcc
|
||||
[29]:https://github.com/iovisor/ply
|
||||
[30]:https://github.com/ajor/bpftrace
|
||||
[31]:https://sourceware.org/systemtap/
|
||||
[32]:https://sourceware.org/ml/systemtap/2017-q4/msg00096.html
|
||||
[33]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md
|
||||
[34]:https://github.com/iovisor/bcc/blob/master/tools/funccount_example.txt
|
||||
[35]:https://github.com/iovisor/bcc/blob/master/tools/funclatency_example.txt
|
||||
[36]:https://github.com/iovisor/bcc/blob/master/tools/funcslower_example.txt
|
||||
[37]:https://github.com/iovisor/bcc/blob/master/tools/stackcount_example.txt
|
||||
[38]:https://github.com/iovisor/bcc/blob/master/tools/trace_example.txt
|
||||
[39]:https://github.com/iovisor/bcc/blob/master/tools/argdist_example.txt
|
||||
[40]:http://opensource.com/
|
||||
[41]:https://opensource.com/tags/linux
|
||||
[42]:https://opensource.com/tags/sysadmin
|
||||
[43]:https://opensource.com/users/brendang
|
||||
[44]:https://opensource.com/users/brendang
|
@ -1,63 +0,0 @@
|
||||
Book review: Ours to Hack and to Own
|
||||
============================================================
|
||||
|
||||
![Book review: Ours to Hack and to Own](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDUCATION_colorbooks.png?itok=liB3FyjP "Book review: Ours to Hack and to Own")
|
||||
Image by : opensource.com
|
||||
|
||||
It seems like the age of ownership is over, and I'm not just talking about the devices and software that many of us bring into our homes and our lives. I'm also talking about the platforms and services on which those devices and apps rely.
|
||||
|
||||
While many of the services that we use are free, we don't have any control over them. The firms that do, in essence, control what we see, what we hear, and what we read. Not only that, but many of them are also changing the nature of work. They're using closed platforms to power a shift away from full-time work to the [gig economy][2], one that offers little in the way of security or certainty.
|
||||
|
||||
This move has wide-ranging implications for the Internet and for everyone who uses and relies on it. The vision of the open Internet from just 20-odd years ago is fading and is rapidly being replaced by an impenetrable curtain.
|
||||
|
||||
One remedy that's becoming popular is building [platform cooperatives][3], which are digital platforms that their users own. The idea behind platform cooperatives has many of the same roots as open source, as the book "[Ours to Hack and to Own][4]" explains.
|
||||
|
||||
Scholar Trebor Scholz and writer Nathan Schneider have collected 40 essays discussing the rise of, and the need for, platform cooperatives as tools ordinary people can use to promote openness, and to counter the opaqueness and the restrictions of closed systems.
|
||||
|
||||
### Where open source fits in
|
||||
|
||||
At or near the core of any platform cooperative lies open source; not necessarily open source technologies, but the principles and the ethos that underlie open source—openness, transparency, cooperation, collaboration, and sharing.
|
||||
|
||||
In his introduction to the book, Trebor Scholz points out that:
|
||||
|
||||
> In opposition to the black-box systems of the Snowden-era Internet, these platforms need to distinguish themselves by making their data flows transparent. They need to show where the data about customers and workers are stored, to whom they are sold, and for what purpose.
|
||||
|
||||
It's that transparency, so essential to open source, which helps make platform cooperatives so appealing and a refreshing change from much of what exists now.
|
||||
|
||||
Open source software can definitely play a part in the vision of platform cooperatives that "Ours to Hack and to Own" shares. Open source software can provide a fast, inexpensive way for groups to build the technical infrastructure that can power their cooperatives.
|
||||
|
||||
Mickey Metts illustrates this in the essay, "Meet Your Friendly Neighborhood Tech Co-Op." Metts works for a firm called Agaric, which uses Drupal to build for groups and small business what they otherwise couldn't do for themselves. On top of that, Metts encourages anyone wanting to build and run their own business or co-op to embrace free and open source software. Why? It's high quality, it's inexpensive, you can customize it, and you can connect with large communities of helpful, passionate people.
|
||||
|
||||
### Not always about open source, but open source is always there
|
||||
|
||||
Not all of the essays in this book focus or touch on open source; however, the key elements of the open source way—cooperation, community, open governance, and digital freedom—are always on or just below the surface.
|
||||
|
||||
In fact, as many of the essays in "Ours to Hack and to Own" argue, platform cooperatives can be important building blocks of a more open, commons-based economy and society. That can be, in Douglas Rushkoff's words, organizations like Creative Commons compensating "for the privatization of shared intellectual resources." It can also be what Francesca Bria, Barcelona's CTO, describes as cities running their own "distributed common data infrastructures with systems that ensure the security and privacy and sovereignty of citizens' data."
|
||||
|
||||
### Final thought
|
||||
|
||||
If you're looking for a blueprint for changing the Internet and the way we work, "Ours to Hack and to Own" isn't it. The book is more a manifesto than user guide. Having said that, "Ours to Hack and to Own" offers a glimpse at what we can do if we apply the principles of the open source way to society and to the wider world.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Scott Nesbitt - Writer. Editor. Soldier of fortune. Ocelot wrangler. Husband and father. Blogger. Collector of pottery. Scott is a few of these things. He's also a long-time user of free/open source software who extensively writes and blogs about it. You can find Scott on Twitter, GitHub
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/1/review-book-ours-to-hack-and-own
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/scottnesbitt
|
||||
[1]:https://opensource.com/article/17/1/review-book-ours-to-hack-and-own?rate=dgkFEuCLLeutLMH2N_4TmUupAJDjgNvFpqWqYCbQb-8
|
||||
[2]:https://en.wikipedia.org/wiki/Access_economy
|
||||
[3]:https://en.wikipedia.org/wiki/Platform_cooperative
|
||||
[4]:http://www.orbooks.com/catalog/ours-to-hack-and-to-own/
|
||||
[5]:https://opensource.com/user/14925/feed
|
||||
[6]:https://opensource.com/users/scottnesbitt
|
@ -0,0 +1,159 @@
|
||||
Annoying Experiences Every Linux Gamer Never Wanted!
|
||||
============================================================
|
||||
|
||||
|
||||
[![Linux gamer's problem](https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg)][10]
|
||||
|
||||
[Gaming on Linux][12] has come a long way. There are dedicated [Linux gaming distributions][13] now. But this doesn’t mean that the gaming experience on Linux is as smooth as it is on Windows.
|
||||
|
||||
What are the obstacles that should be thought about to ensure that we enjoy games as much as Windows users do?
|
||||
|
||||
[Wine][14], [PlayOnLinux][15] and other similar tools are not always able to play every popular Windows game. In this article, I would like to discuss various factors that must be dealt with in order to have the best possible Linux gaming experience.
|
||||
|
||||
### #1 SteamOS is Open Source, Steam for Linux is NOT
|
||||
|
||||
As stated on the [SteamOS page][16], even though SteamOS is open source, Steam for Linux continues to be proprietary. Had it also been open source, the amount of support from the open source community would have been tremendous! Since it is not, [the birth of Project Ascension was inevitable][17]:
|
||||
|
||||
[video](https://youtu.be/07UiS5iAknA)
|
||||
|
||||
Project Ascension is an open source game launcher designed to launch games that have been bought and downloaded from anywhere – they can be Steam games, [Origin games][18], Uplay games, games downloaded directly from game developer websites or from DVD/CD-ROMs.
|
||||
|
||||
Here is how it all began: [Sharing The Idea][19] resulted in a very interesting discussion with readers all over from the gaming community pitching in their own opinions and suggestions.
|
||||
|
||||
### #2 Performance compared to Windows
|
||||
|
||||
Getting Windows games to run on Linux is not always an easy task. But thanks to a feature called [CSMT][20] (command stream multi-threading), PlayOnLinux is now better equipped to deal with these performance issues, though it still has a long way to go to achieve Windows-level outcomes.
|
||||
|
||||
Native Linux support for games has not been so good for past releases.
|
||||
|
||||
Last year, it was reported that SteamOS performed [significantly worse][21] than Windows. Tomb Raider was released on SteamOS/Steam for Linux last year. However, benchmark results were [not at par][22] with performance on Windows.
|
||||
|
||||
[video](https://youtu.be/nkWUBRacBNE)
|
||||
|
||||
This was quite obviously due to the fact that the game had been developed with [DirectX][23] in mind and not [OpenGL][24].
|
||||
|
||||
Tomb Raider is the [first Linux game that uses TressFX][25]. This video includes TressFX comparisons:
|
||||
|
||||
[video](https://youtu.be/-IeY5ZS-LlA)
|
||||
|
||||
Here is another interesting comparison which shows Wine+CSMT performing much better than the native Linux version itself on Steam! This is the power of Open Source!
|
||||
|
||||
|
||||
|
||||
[video](https://youtu.be/sCJkC6oJ08A)
|
||||
|
||||
TressFX has been turned off in this case to avoid FPS loss.
|
||||
|
||||
Here is another Linux vs Windows comparison for the recently released “[Life is Strange][27]” on Linux:
|
||||
|
||||
[video](https://youtu.be/Vlflu-pIgIY)
|
||||
|
||||
It’s good to know that [_Steam for Linux_][28] has begun to show better improvements in performance for this new Linux game.
|
||||
|
||||
Before launching any game for Linux, developers should consider optimizing it, especially if it’s a DirectX game that requires OpenGL translation. We really do hope that [Deus Ex: Mankind Divided on Linux][29] gets benchmarked well upon release. As it’s a DirectX game, we hope it’s being ported well for Linux. Here’s [what the Executive Game Director had to say][30].
|
||||
|
||||
### #3 Proprietary NVIDIA Drivers
|
||||
|
||||
[AMD’s support for Open Source][31] is definitely commendable when compared to [NVIDIA][32]. Though [AMD][33] driver support is [pretty good on Linux][34] now due to its better open source driver, NVIDIA graphic card owners will still have to use the proprietary NVIDIA drivers because of the limited capabilities of the open-source version of NVIDIA’s graphics driver called Nouveau.
|
||||
|
||||
In the past, legendary Linus Torvalds has also shared his thoughts about Linux support from NVIDIA to be totally unacceptable:
|
||||
|
||||
[video](https://youtu.be/O0r6Pr_mdio)
|
||||
|
||||
You can watch the complete talk [here][35]. Although NVIDIA responded with [a commitment for better linux support][36], the open source graphics driver still continues to be weak as before.
|
||||
|
||||
### #4 Need for Uplay and Origin DRM support on Linux
|
||||
|
||||
[video](https://youtu.be/rc96NFwyxWU)
|
||||
|
||||
The above video describes how to install the [Uplay][37] DRM on Linux. The uploader also suggests that the use of wine as the main tool of games and applications is not recommended on Linux. Rather, preference to native applications should be encouraged instead.
|
||||
|
||||
The following video is a guide about installing the [Origin][38] DRM on Linux:
|
||||
|
||||
[video](https://youtu.be/ga2lNM72-Kw)
|
||||
|
||||
Digital Rights Management software adds another layer to game execution and hence adds to the already challenging task of making a Windows game run well on Linux. So in addition to making the game execute, W.I.N.E has to take care of running the DRM software such as Uplay or Origin as well. It would have been great if, like Steam, Linux could have got its own native versions of Uplay and Origin.
|
||||
|
||||
|
||||
|
||||
### #5 DirectX 11 support for Linux
|
||||
|
||||
Even though we have tools on Linux to run Windows applications, every game comes with its own set of tweak requirements for it to be playable on Linux. Though there was an announcement about [DirectX 11 support for Linux][40] last year via Code Weavers, it’s still a long way to go before playing newly launched titles on Linux becomes a reality.
|
||||
|
||||
Currently, you can [buy Crossover from Codeweavers][41] to get the best DirectX 11 support available. This [thread][42] on the Arch Linux forums clearly shows how much more effort is required to make this dream a possibility. Here is an interesting [find][43] from a [Reddit thread][44], which mentions Wine getting [DirectX 11 patches from Codeweavers][45]. Now that’s definitely some good news.
|
||||
|
||||
### #6 100% of Steam games are not available for Linux
|
||||
|
||||
This is an important point to ponder as Linux gamers continue to miss out on every major game release since most of them land up on Windows. Here is a guide to [install Steam for Windows on Linux][46].
|
||||
|
||||
### #7 Better Support from video game publishers for OpenGL
|
||||
|
||||
Currently, developers and publishers focus primarily on DirectX for video game development rather than OpenGL. Now as Steam is officially here for Linux, developers should start considering development in OpenGL as well.
|
||||
|
||||
[Direct3D][47] is made solely for the Windows platform. The OpenGL API is an open standard, and implementations exist for not only Windows but a wide variety of other platforms.
|
||||
|
||||
Though quite an old article, [this valuable resource][48] shares a lot of thoughtful information on the realities of OpenGL and DirectX. The points made are truly very sensible and enlightens the reader about the facts based on actual chronological events.
|
||||
|
||||
Publishers who are launching their titles on Linux should definitely not leave out the fact that developing the game on OpenGL would be a much better deal than translating it from DirectX to OpenGL. If conversion has to be done, the translations must be well optimized and carefully looked into. There might be a delay in releasing the games but still it would definitely be worth the wait.
|
||||
|
||||
Have more annoyances to share? Do let us know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/linux-gaming-problems/
|
||||
|
||||
作者:[Avimanyu Bandyopadhyay][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/avimanyu/
|
||||
[1]:https://itsfoss.com/author/avimanyu/
|
||||
[2]:https://itsfoss.com/linux-gaming-problems/#comments
|
||||
[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[4]:https://twitter.com/share?original_referer=/&text=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21&url=https://itsfoss.com/linux-gaming-problems/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=itsfoss2
|
||||
[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/linux-gaming-problems/&title=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21
|
||||
[8]:https://www.reddit.com/submit?url=https://itsfoss.com/linux-gaming-problems/&title=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21
|
||||
[9]:https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg
|
||||
[10]:https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg
|
||||
[11]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg&url=https://itsfoss.com/linux-gaming-problems/&is_video=false&description=Linux%20gamer%27s%20problem
|
||||
[12]:https://itsfoss.com/linux-gaming-guide/
|
||||
[13]:https://itsfoss.com/linux-gaming-distributions/
|
||||
[14]:https://itsfoss.com/use-windows-applications-linux/
|
||||
[15]:https://www.playonlinux.com/en/
|
||||
[16]:http://store.steampowered.com/steamos/
|
||||
[17]:http://www.ibtimes.co.uk/reddit-users-want-replace-steam-open-source-game-launcher-project-ascension-1498999
|
||||
[18]:https://www.origin.com/
|
||||
[19]:https://www.reddit.com/r/pcmasterrace/comments/33xcvm/we_hate_valves_monopoly_over_pc_gaming_why/
|
||||
[20]:https://github.com/wine-compholio/wine-staging/wiki/CSMT
|
||||
[21]:http://arstechnica.com/gaming/2015/11/ars-benchmarks-show-significant-performance-hit-for-steamos-gaming/
|
||||
[22]:https://www.gamingonlinux.com/articles/tomb-raider-benchmark-video-comparison-linux-vs-windows-10.7138
|
||||
[23]:https://en.wikipedia.org/wiki/DirectX
|
||||
[24]:https://en.wikipedia.org/wiki/OpenGL
|
||||
[25]:https://www.gamingonlinux.com/articles/tomb-raider-released-for-linux-video-thoughts-port-report-included-the-first-linux-game-to-use-tresfx.7124
|
||||
[26]:https://itsfoss.com/osu-new-linux/
|
||||
[27]:http://lifeisstrange.com/
|
||||
[28]:https://itsfoss.com/install-steam-ubuntu-linux/
|
||||
[29]:https://itsfoss.com/deus-ex-mankind-divided-linux/
|
||||
[30]:http://wccftech.com/deus-ex-mankind-divided-director-console-ports-on-pc-is-disrespectful/
|
||||
[31]:http://developer.amd.com/tools-and-sdks/open-source/
|
||||
[32]:http://nvidia.com/
|
||||
[33]:http://amd.com/
|
||||
[34]:http://www.makeuseof.com/tag/open-source-amd-graphics-now-awesome-heres-get/
|
||||
[35]:https://youtu.be/MShbP3OpASA
|
||||
[36]:https://itsfoss.com/nvidia-optimus-support-linux/
|
||||
[37]:http://uplay.com/
|
||||
[38]:http://origin.com/
|
||||
[39]:https://itsfoss.com/linux-foundation-head-uses-macos/
|
||||
[40]:http://www.pcworld.com/article/2940470/hey-gamers-directx-11-is-coming-to-linux-thanks-to-codeweavers-and-wine.html
|
||||
[41]:https://itsfoss.com/deal-run-windows-software-and-games-on-linux-with-crossover-15-66-off/
|
||||
[42]:https://bbs.archlinux.org/viewtopic.php?id=214771
|
||||
[43]:https://ghostbin.com/paste/sy3e2
|
||||
[44]:https://www.reddit.com/r/linux_gaming/comments/3ap3uu/directx_11_support_coming_to_codeweavers/
|
||||
[45]:https://www.codeweavers.com/about/blogs/caron/2015/12/10/directx-11-really-james-didnt-lie
|
||||
[46]:https://itsfoss.com/linux-gaming-guide/
|
||||
[47]:https://en.wikipedia.org/wiki/Direct3D
|
||||
[48]:http://blog.wolfire.com/2010/01/Why-you-should-use-OpenGL-and-not-DirectX
|
68
sources/tech/20161216 GitHub Is Building a Coder Paradise.md
Normal file
68
sources/tech/20161216 GitHub Is Building a Coder Paradise.md
Normal file
@ -0,0 +1,68 @@
|
||||
translating by zrszrszrs
|
||||
GitHub Is Building a Coder’s Paradise. It’s Not Coming Cheap
|
||||
============================================================
|
||||
|
||||
The VC-backed unicorn startup lost $66 million in nine months of 2016, financial documents show.
|
||||
|
||||
|
||||
Though the name GitHub is practically unknown outside technology circles, coders around the world have embraced the software. The startup operates a sort of Google Docs for programmers, giving them a place to store, share and collaborate on their work. But GitHub Inc. is losing money through profligate spending and has stood by as new entrants emerged in a software category it essentially gave birth to, according to people familiar with the business and financial paperwork reviewed by Bloomberg.
|
||||
|
||||
The rise of GitHub has captivated venture capitalists. Sequoia Capital led a $250 million investment in mid-2015. But GitHub management may have been a little too eager to spend the new money. The company paid to send employees jetting across the globe to Amsterdam, London, New York and elsewhere. More costly, it doubled headcount to 600 over the course of about 18 months.
|
||||
|
||||
GitHub lost $27 million in the fiscal year that ended in January 2016, according to an income statement seen by Bloomberg. It generated $95 million in revenue during that period, the internal financial document says.
|
||||
|
||||
![Chris Wanstrath, co-founder and chief executive officer at GitHub Inc., speaks during the 2015 Bloomberg Technology Conference in San Francisco, California, U.S., on Tuesday, June 16, 2015\. The conference gathers global business leaders, tech influencers, top investors and entrepreneurs to shine a spotlight on how coders and coding are transforming business and fueling disruption across all industries. Photographer: David Paul Morris/Bloomberg *** Local Caption *** Chris Wanstrath](https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iXpmtRL9Q0C4/v0/400x-1.jpg)
|
||||
GitHub CEO Chris Wanstrath.Photographer: David Paul Morris/Bloomberg
|
||||
|
||||
Sitting in a conference room featuring an abstract art piece on the wall and a Mad Men-style rollaway bar cart in the corner, GitHub’s Chris Wanstrath says the business is running more smoothly now and growing. “What happened to 2015?” says the 31-year-old co-founder and chief executive officer. “Nothing was getting done, maybe? I shouldn’t say that. Strike that.”
|
||||
|
||||
GitHub recently hired Mike Taylor, the former treasurer and vice president of finance at Tesla Motors Inc., to manage spending as chief financial officer. It also hopes to add a seasoned chief operating officer. GitHub has already surpassed last year’s revenue in nine months this year, with $98 million, the financial document shows. “The whole product road map, we have all of our shit together in a way that we’ve never had together. I’m pretty elated right now with the way things are going,” says Wanstrath. “We’ve had a lot of ups and downs, and right now we’re definitely in an up.”
|
||||
|
||||
Also up: expenses. The income statement shows a loss of $66 million in the first three quarters of this year. That’s more than twice as much lost in any nine-month time frame by Twilio Inc., another maker of software tools founded the same year as GitHub. At least a dozen members of GitHub’s leadership team have left since last year, several of whom expressed unhappiness with Wanstrath’s management style. GitHub says the company has flourished under his direction but declined to comment on finances. Wanstrath says: “We raised $250 million last year, and we’re putting it to use. We’re not expecting to be profitable right now.”
|
||||
|
||||
Wanstrath started GitHub with three friends during the recession of 2008 and bootstrapped the business for four years. They encouraged employees to [work remotely][1], which forced the team to adopt GitHub’s tools for their own projects and had the added benefit of saving money on office space. GitHub quickly became essential to the code-writing process at technology companies of all sizes and gave birth to a new generation of programmers by hosting their open-source code for free.
|
||||
|
||||
Peter Levine, a partner at Andreessen Horowitz, courted the founders and eventually convinced them to take their first round of VC money in 2012\. The firm led a $100 million cash infusion, and Levine joined the board. The next year, GitHub signed a seven-year lease worth about $35 million for a headquarters in San Francisco, says a person familiar with the project.
|
||||
|
||||
The new digs gave employees a reason to come into the office. Visitors would enter a lobby modeled after the White House’s Oval Office before making their way to a replica of the Situation Room. The company also erected a statue of its mascot, a cartoon octopus-cat creature known as the Octocat. The 55,000-square-foot space is filled with wooden tables and modern art.
|
||||
|
||||
In GitHub’s cultural hierarchy, the coder is at the top. The company has strived to create the best product possible for software developers and watch them flock to it. In addition to offering its base service for free, GitHub sells more advanced programming tools to companies big and small. But it found that some chief information officers want a human touch and began to consider building out a sales team.
|
||||
|
||||
The issue took on a new sense of urgency in 2014 with the formation of a rival startup with a similar name. GitLab Inc. went after large businesses from the start, offering them a cheaper alternative to GitHub. “The big differentiator for GitLab is that it was designed for the enterprise, and GitHub was not,” says GitLab CEO Sid Sijbrandij. “One of the values is frugality, and this is something very close to our heart. We want to treat our team members really well, but we don’t want to waste any money where it’s not needed. So we don’t have a big fancy office because we can be effective without it.”
|
||||
|
||||
Y Combinator, a Silicon Valley business incubator, welcomed GitLab into the fold last year. GitLab says more than 110,000 organizations, including IBM and Macy’s Inc., use its software. (IBM also uses GitHub.) Atlassian Corp. has taken a similar top-down approach with its own code repository Bitbucket.
|
||||
|
||||
Wanstrath says the competition has helped validate GitHub’s business. “When we started, people made fun of us and said there is no money in developer tools,” he says. “I’ve kind of been waiting for this for a long time—to be proven right, that this is a real market.”
|
||||
|
||||
![GitHub_Office-03](https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iQB5sqXgihdQ/v0/400x-1.jpg)
|
||||
Source: GitHub
|
||||
|
||||
It also spurred GitHub into action. With fresh capital last year valuing the company at $2 billion, it went on a hiring spree. It spent $71 million on salaries and benefits last fiscal year, according to the financial document seen by Bloomberg. This year, those costs rose to $108 million from February to October, with three months still to go in the fiscal year, the document shows. This was the startup’s biggest expense by far.
|
||||
|
||||
The emphasis on sales seemed to be making an impact, but the team missed some of its targets, says a person familiar with the matter. In September 2014, subscription revenue on an annualized basis was about $25 million each from enterprise sales and organizations signing up through the site, according to another financial document. After GitHub staffed up, annual recurring revenue from large clients increased this year to $70 million while the self-service business saw healthy, if less dramatic, growth to $52 million.
|
||||
|
||||
But the uptick in revenue wasn’t keeping pace with the aggressive hiring. GitHub cut about 20 employees in recent weeks. “The unicorn trap is that you’ve sold equity against a plan that you often can’t hit; then what do you do?” says Nick Sturiale, a VC at Ignition Partners.
|
||||
|
||||
Such business shifts are risky, and stumbles aren’t uncommon, says Jason Lemkin, a corporate software VC who’s not an investor in GitHub. “That transition from a self-service product in its early days to being enterprise always has bumps,” he says. GitHub says it has 18 million users, and its Enterprise service is used by half of the world’s 10 highest-grossing companies, including Wal-Mart Stores Inc. and Ford Motor Co.
|
||||
|
||||
Some longtime GitHub fans weren’t happy with the new direction, though. More than 1,800 developers signed an online petition, saying: “Those of us who run some of the most popular projects on GitHub feel completely ignored by you.”
|
||||
|
||||
The backlash was a wake-up call, Wanstrath says. GitHub is now more focused on its original mission of catering to coders, he says. “I want us to be judged on, ‘Are we making developers more productive?’” he says. At GitHub’s developer conference in September, Wanstrath introduced several new features, including an updated process for reviewing code. He says 2016 was a “marquee year.”
|
||||
|
||||
|
||||
At least five senior staffers left in 2015, and turnover among leadership continued this year. Among them was co-founder and CIO Scott Chacon, who says he left to start a new venture. “GitHub was always very good to me, from the first day I started when it was just the four of us,” Chacon says. “They allowed me to travel the world representing them; they supported my teaching and evangelizing Git and remote work culture for a long time.”
|
||||
|
||||
The travel excursions are expected to continue at GitHub, and there’s little evidence it can rein in spending any time soon. The company says about half its staff is remote and that the trips bring together GitHub’s distributed workforce and encourage collaboration. Last week, at least 20 employees on GitHub’s human-resources team convened in Rancho Mirage, California, for a retreat at the Ritz Carlton.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.bloomberg.com/news/articles/2016-12-15/github-is-building-a-coder-s-paradise-it-s-not-coming-cheap
|
||||
|
||||
作者:[Eric Newcomer ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.bloomberg.com/authors/ASFMS16EsvU/eric-newcomer
|
||||
[1]:https://www.bloomberg.com/news/articles/2016-09-06/why-github-finally-abandoned-its-bossless-workplace
|
@ -0,0 +1,104 @@
|
||||
New Year’s resolution: Donate to 1 free software project every month
|
||||
============================================================
|
||||
|
||||
### Donating just a little bit helps ensure the open source software I use remains alive
|
||||
|
||||
Free and open source software is an absolutely critical part of our world—and the future of technology and computing. One problem that consistently plagues many free software projects, though, is the challenge of funding ongoing development (and support and documentation).
|
||||
|
||||
With that in mind, I have finally settled on a New Year’s resolution for 2017: to donate to one free software project (or group) every month—for the whole year. After all, these projects are saving me a boatload of money because I don’t need to buy expensive, proprietary packages to accomplish the same things.
|
||||
|
||||
#### + Also on Network World: [Free Software Foundation shakes up its list of priority projects][19] +
|
||||
|
||||
I’m not setting some crazy goal here—not requiring that I donate beyond my means. Heck, some months I may be able to donate only a few bucks. But every little bit helps, right?
|
||||
|
||||
To help me accomplish that goal, below is a list of free software projects with links to where I can donate to them. Organized by categories, just because. I’m scheduling a monthly calendar item to remind me to bring up this page and donate to one of these projects.
|
||||
|
||||
This isn’t a complete list—not by any measure—but it’s a good starting point. Apologies to the (many) great projects out there that I missed.
|
||||
|
||||
#### Linux distributions
|
||||
|
||||
[elementary OS][20] — In addition to the distribution itself (which is based, in part, on Ubuntu), this team also develops the Pantheon desktop environment.
|
||||
|
||||
[Solus][21] — This is a “from scratch” distro using their own custom-developed desktop environment, “Budgie.”
|
||||
|
||||
[Ubuntu MATE][22] — It’s Ubuntu—with Unity ripped off and replaced with MATE. I like to think of this as “What Ubuntu was like back when I still used Ubuntu.”
|
||||
|
||||
[Debian][23] — If you use Ubuntu or elementary or Mint, you are using a system based on Debian. Personally, I use Debian on my [PocketCHIP][24].
|
||||
|
||||
#### Linux components
|
||||
|
||||
[PulseAudio][25] — PulseAudio is all over the place now. If it stopped being supported and maintained, that would be… highly inconvenient.
|
||||
|
||||
#### Productivity/Creation
|
||||
|
||||
[Gimp][26] — The GNU Image Manipulation Program is one of the most famous free software projects—and the standard for cross-platform raster design tools.
|
||||
|
||||
[FreeCAD][27] — When people talk about difficulty in moving from Windows to Linux, the lack of CAD software often crops up. Supporting projects such as FreeCAD helps to remove that barrier.
|
||||
|
||||
[OpenShot][28] — Video editing on Linux (and other free software desktops) has improved tremendously over the past few years. But there is still work to be done.
|
||||
|
||||
[Blender][29] — What is Blender? A 3D modelling suite? A video editor? A game creation system? All three (and more)? Whatever you use Blender for, it’s amazing.
|
||||
|
||||
[Inkscape][30] — This is the most fantastic vector graphics editing suite on the planet (in my oh-so-humble opinion).
|
||||
|
||||
[LibreOffice / The Document Foundation][31] — I am writing this very document in LibreOffice. Donating to their foundation to help further development seems to be in my best interests.
|
||||
|
||||
#### Software development
|
||||
|
||||
[Python Software Foundation][32] — Python is a great language and is used all over the place.
|
||||
|
||||
#### Free and open source foundations
|
||||
|
||||
[Free Software Foundation][33] — “The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user freedom. We defend the rights of all software users.”
|
||||
|
||||
[Software Freedom Conservancy][34] — “Software Freedom Conservancy helps promote, improve, develop and defend Free, Libre and Open Source Software (FLOSS) projects.”
|
||||
|
||||
Again—this is, by no means, a complete list. Not even close. Luckily many projects provide easy donation mechanisms on their websites.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3160174/linux/new-years-resolution-donate-to-1-free-software-project-every-month.html
|
||||
|
||||
作者:[ Bryan Lunduke][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Bryan-Lunduke/
|
||||
[1]:https://www.networkworld.com/article/3143583/linux/linux-y-things-i-am-thankful-for.html
|
||||
[2]:https://www.networkworld.com/article/3152745/linux/5-rock-solid-linux-distros-for-developers.html
|
||||
[3]:https://www.networkworld.com/article/3130760/open-source-tools/elementary-os-04-review-and-interview-with-the-founder.html
|
||||
[4]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
|
||||
[5]:https://twitter.com/intent/tweet?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&via=networkworld&text=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
|
||||
[6]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
|
||||
[7]:http://www.linkedin.com/shareArticle?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&title=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
|
||||
[8]:https://plus.google.com/share?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
|
||||
[9]:http://reddit.com/submit?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&title=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
|
||||
[10]:http://www.stumbleupon.com/submit?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
|
||||
[11]:https://www.networkworld.com/article/3160174/linux/new-years-resolution-donate-to-1-free-software-project-every-month.html#email
|
||||
[12]:https://www.networkworld.com/article/3143583/linux/linux-y-things-i-am-thankful-for.html
|
||||
[13]:https://www.networkworld.com/article/3152745/linux/5-rock-solid-linux-distros-for-developers.html
|
||||
[14]:https://www.networkworld.com/article/3130760/open-source-tools/elementary-os-04-review-and-interview-with-the-founder.html
|
||||
[15]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
|
||||
[16]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
|
||||
[17]:https://www.facebook.com/NetworkWorld/
|
||||
[18]:https://www.linkedin.com/company/network-world
|
||||
[19]:http://www.networkworld.com/article/3158685/open-source-tools/free-software-foundation-shakes-up-its-list-of-priority-projects.html
|
||||
[20]:https://www.patreon.com/elementary
|
||||
[21]:https://www.patreon.com/solus
|
||||
[22]:https://www.patreon.com/ubuntu_mate
|
||||
[23]:https://www.debian.org/donations
|
||||
[24]:http://www.networkworld.com/article/3157210/linux/review-pocketchipsuper-cheap-linux-terminal-that-fits-in-your-pocket.html
|
||||
[25]:https://www.patreon.com/tanuk
|
||||
[26]:https://www.gimp.org/donating/
|
||||
[27]:https://www.patreon.com/yorikvanhavre
|
||||
[28]:https://www.patreon.com/openshot
|
||||
[29]:https://www.blender.org/foundation/donation-payment/
|
||||
[30]:https://inkscape.org/en/support-us/donate/
|
||||
[31]:https://www.libreoffice.org/donate/
|
||||
[32]:https://www.python.org/psf/donations/
|
||||
[33]:http://www.fsf.org/associate/
|
||||
[34]:https://sfconservancy.org/supporter/
|
@ -1,3 +1,6 @@
|
||||
|
||||
translating by HardworkFish
|
||||
|
||||
INTRODUCING DOCKER SECRETS MANAGEMENT
|
||||
============================================================
|
||||
|
||||
|
@ -0,0 +1,168 @@
|
||||
Which Official Ubuntu Flavor Is Best for You?
|
||||
============================================================
|
||||
|
||||
|
||||
![Ubuntu Budgie](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_budgie.jpg?itok=xpo3Ujfw "Ubuntu Budgie")
|
||||
Ubuntu Budgie is just one of the few officially recognized flavors of Ubuntu. Jack Wallen takes a look at some important differences between them.[Used with permission][7]
|
||||
|
||||
Ubuntu Linux comes in a few officially recognized flavors, as well as several derivative distributions. The recognized flavors are:
|
||||
|
||||
* [Kubuntu][9] - Ubuntu with the KDE desktop
|
||||
|
||||
* [Lubuntu][10] - Ubuntu with the LXDE desktop
|
||||
|
||||
* [Mythbuntu][11] - Ubuntu MythTV
|
||||
|
||||
* [Ubuntu Budgie][12] - Ubuntu with the Budgie desktop
|
||||
|
||||
* [Xubuntu][8] - Ubuntu with Xfce
|
||||
|
||||
Up until recently, the official Ubuntu Linux included the in-house Unity desktop and a sixth recognized flavor existed: Ubuntu GNOME -- Ubuntu with the GNOME desktop environment.
|
||||
|
||||
When Mark Shuttleworth decided to nix Unity, the choice was obvious to Canonical—make GNOME the official desktop of Ubuntu Linux. This begins with Ubuntu 18.04 (so April, 2018) and we’ll be down to the official distribution and four recognized flavors.
|
||||
|
||||
For those already enmeshed in the Linux community, that’s some seriously simple math to do—you know which Linux desktop you like, so making the choice between Ubuntu, Kubuntu, Lubuntu, Mythbuntu, Ubuntu Budgie, and Xubuntu couldn’t be easier. Those that haven’t already been indoctrinated into the way of Linux won’t see that as such a cut-and-dried decision.
|
||||
|
||||
To that end, I thought it might be a good idea to help newer users decide which flavor is best for them. After all, choosing the wrong distribution out of the starting gate can make for a less-than-ideal experience.
|
||||
|
||||
And so, if you’re considering a flavor of Ubuntu, and you want your experience to be as painless as possible, read on.
|
||||
|
||||
### Ubuntu
|
||||
|
||||
I’ll begin with the official flavor of Ubuntu. I am also going to warp time a bit and skip Unity, to launch right into the upcoming GNOME-based distribution. Beyond GNOME being an incredibly stable and easy to use desktop environment, there is one very good reason to select the official flavor—support. The official flavor of Ubuntu is commercially supported by Canonical. For $150.00 per year, you can purchase [official support][20] for the Ubuntu desktop. There is, of course, a 50-desktop minimum for this level of support. For individuals, the best bet for support would be the [Ubuntu Forums][21], the [Ubuntu documentation][22], or the [Community help wiki][23].
|
||||
|
||||
Beyond the commercial support, the reason to choose the official Ubuntu flavor would be if you’re looking for a modern, full-featured desktop that is incredibly reliable and easy to use. GNOME has been designed to serve as a platform perfectly suited for both desktops and laptops (Figure 1). Unlike its predecessor, Unity, GNOME can be far more easily customized to suit your needs—to a point. If you’re not one to tinker with the desktop, fear not, GNOME just works. In fact, the out of the box experience with GNOME might well be one of the finest on the market—even rivaling (or besting) Mac OS X. If tinkering and tweaking is of primary interest, you will find GNOME somewhat limiting. The [GNOME Tweak Tool][24] and [GNOME Shell Extensions][25] will only take you so far, before you find yourself wanting more.
|
||||
|
||||
|
||||
![GNOME desktop](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_a.jpg?itok=Ir6jBKbd "GNOME desktop")
|
||||
|
||||
Figure 1: The GNOME desktop with a Unity-like flavor might be what we see with Ubuntu 18.04.[Used with permission][1]
|
||||
|
||||
### Kubuntu
|
||||
|
||||
The [K Desktop Environment][26] (otherwise known as KDE) has been around as long as GNOME and has, at times, been maligned as a lesser desktop. With the release of KDE Plasma 5, that changed. KDE has become an incredibly powerful, efficient, and stable desktop that can stand toe to toe with the best of them. But why would you select Kubuntu over the official Ubuntu? The answer to that question is quite simple—you’re used to the Windows XP/7 desktop metaphor. Start menu, taskbar, system tray, etc., KDE has those and more, all fashioned in such a way that will make you feel like you’re using the best of the past and current technologies. In fact, if you’re looking for one of the most Windows 7-like official Ubuntu flavors, you won’t find one that better fits the bill.
|
||||
|
||||
One of the nice things about Kubuntu is that you’ll find it a bit more flexible than any Windows iteration you’ve ever used—and equally reliable/user-friendly. And don’t think, because KDE opts to offer a desktop somewhat similar to Windows 7, that it doesn’t have a modern flavor. In fact, Kubuntu takes what worked well with the Windows 7 interface and updates it to meet a more modern aesthetic (Figure 2).
|
||||
|
||||
|
||||
![Kubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_b.jpg?itok=dGpebi4z "Kubuntu")
|
||||
|
||||
Figure 2: Kubuntu offers a modern take on an old UX.[Used with permission][2]
|
||||
|
||||
The official Ubuntu is not the only flavor to offer desktop support. Kubuntu users also can pay for [commercial support][27]. Be warned, it’s not cheap. One hour of support time will cost you $103.88.
|
||||
|
||||
### Lubuntu
|
||||
|
||||
If you’re looking for an easy-to-use desktop that is very fast (so that older hardware will feel like new) and far more flexible than just about any desktop you’ve ever used, Lubuntu is what you want. The only caveat to Lubuntu is that you’re looking at a bit more bare bones on the desktop than you may be accustomed to. Lubuntu makes use of the [LXDE desktop][28] and includes a list of applications that continues the lightweight theme. So if you’re looking for blazing fast speeds on the desktop, Lubuntu might be a good choice.
|
||||
However, there is a caveat with Lubuntu and, for some users, this might be a deal breaker. Along with the small footprint of Lubuntu come pre-installed applications that might not stand up to the task. For example, instead of the full-blown office suite, you’ll find the [AbiWord word processor][29] and the [Gnumeric spreadsheet][30] tool. Don’t get me wrong; both of these are fine tools. However, if you’re looking for software that’s business-ready, you will find them lacking. On the other hand, if you want to install more work-centric tools (e.g., LibreOffice), Lubuntu includes the Synaptic Package Manager to make installation of third-party software simple.
|
||||
|
||||
Even with the limited default software, Lubuntu offers a clean and easy to use desktop (Figure 3), that anyone could start using with little to no learning curve.
|
||||
|
||||
|
||||
![Lubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_c.jpg?itok=nWsJr39r "Lubuntu")
|
||||
|
||||
Figure 3: What Lubuntu lacks in software, it makes up for in speed and simplicity.[Used with permission][3]
|
||||
|
||||
### Mythbuntu
|
||||
|
||||
Mythbuntu is a sort of odd bird here, because it isn’t really a desktop variant. Instead, Mythbuntu is a special flavor of Ubuntu designed to be a multimedia powerhouse. Using Mythbuntu requires TV Tuners and TV Out cards. And, during the installation, there are a number of additional steps that must be taken (choosing how to set up the frontend/backend as well as setting up your IR remotes).
|
||||
|
||||
If you do happen to have the hardware (and the desire to create your own Ubuntu-powered entertainment system), Mythbuntu is the distribution you want. Once you’ve installed Mythbuntu, you will then be prompted to walk through the setup of your Capture cards, recording profiles, video sources, and Input connections (Figure 4).
|
||||
|
||||
|
||||
![Mythbuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_d.jpg?itok=Uk16xUIF "Mythbuntu")
|
||||
|
||||
Figure 4: Getting ready to set up Mythbuntu.[Used with permission][4]
|
||||
|
||||
### Ubuntu Budgie
|
||||
|
||||
Ubuntu Budgie is the new kid on the block to the official flavor list. Sporting the Budgie Desktop, this is a beautiful and modern take on Linux that will please just about any type of user. The goal of Ubuntu Budgie was to create an elegant and simple desktop interface. Mission accomplished. If you’re looking for a beautiful desktop to work on top of the remarkably stable Ubuntu Linux platform, look no further than Ubuntu Budgie.
|
||||
|
||||
Adding this particular spin on Ubuntu to the list of official variants was a smart move on the part of Canonical. With Unity going away, they needed a desktop that would offer the elegance found in Unity. Customization of Budgie is very easy, and the list of included software will get you working and browsing immediately.
|
||||
|
||||
And, unlike the learning curve many users encountered with Unity, the developers/designers of Ubuntu Budgie have done a remarkable job of keeping this take on Ubuntu familiar. Click on the “start” button to reveal a fairly standard menu of applications. Budgie also includes an easy to use Dock (Figure 5) that holds applications launchers for quick access.
|
||||
|
||||
|
||||
![Budgie](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_e.jpg?itok=mwlo4xzm "Budgie")
|
||||
|
||||
Figure 5: This is one beautiful desktop.[Used with permission][5]
|
||||
|
||||
Another really nice feature found in Ubuntu Budgie is a sidebar that can be quickly revealed and hidden. This sidebar holds applets and notifications. With this in play, your desktop can be both incredibly useful, while remaining clutter free.
|
||||
|
||||
In the end, if you’re looking for something a bit different, that happens to also be a very modern take on the desktop—with features and functions not found on other distributions—Ubuntu Budgie is what you’re looking for.
|
||||
|
||||
### Xubuntu
|
||||
|
||||
Another official flavor of Ubuntu that does a nice job of providing a small footprint version of Linux is [Xubuntu][32]. The difference between Xubuntu and Lubuntu is that, where Lubuntu uses the LXDE desktop, Xubuntu makes use of [Xfce][33]. What you get with that difference is a lightweight desktop that is far more configurable (than Lubuntu) as well as one that includes the more business-ready LibreOffice office suite.
|
||||
|
||||
Xubuntu is an out of the box experience that anyone, regardless of experience, can use. But don't think that immediate familiarity means this flavor of Ubuntu is locked out of making it your own. If you're looking for a take on Ubuntu that's somewhat old-school out of the box, but can be heavily tweaked to better resemble a more modern desktop, Xubuntu is what you want.
|
||||
|
||||
One really handy addition to Xubuntu that I've always enjoyed (one that harks back to Enlightenment) is the ability to bring up the "start" menu by right-clicking anywhere on the desktop (Figure 6). This can make for very efficient usage.
|
||||
|
||||
|
||||
![Xubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/xubuntu.jpg?itok=XL8_hLet "Xubuntu")
|
||||
|
||||
Figure 6: Xubuntu lets you bring up the "start" menu by right-clicking anywhere on the desktop.[Used with permission][6]
|
||||
|
||||
### The choice is yours
|
||||
|
||||
There is a flavor of Ubuntu to meet nearly any need—which one you choose is up to you. Ask yourself questions such as:
|
||||
|
||||
* What are your needs?
|
||||
|
||||
* What type of desktop do you prefer to interact with?
|
||||
|
||||
* Is your hardware aging?
|
||||
|
||||
* Do you prefer a Windows XP/7 feel?
|
||||
|
||||
* Are you wanting a multimedia system?
|
||||
|
||||
Your answers to the above questions will go a long way to determining which flavor of Ubuntu is right for you. The good news is that you can’t really go wrong with any of the available options.
|
||||
|
||||
_Learn more about Linux through the free ["Introduction to Linux" ][31]course from The Linux Foundation and edX._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/5/which-official-ubuntu-flavor-best-you
|
||||
|
||||
作者:[ JACK WALLEN][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/jlwallen
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:https://www.linux.com/licenses/category/used-permission
|
||||
[3]:https://www.linux.com/licenses/category/used-permission
|
||||
[4]:https://www.linux.com/licenses/category/used-permission
|
||||
[5]:https://www.linux.com/licenses/category/used-permission
|
||||
[6]:https://www.linux.com/licenses/category/used-permission
|
||||
[7]:https://www.linux.com/licenses/category/used-permission
|
||||
[8]:http://xubuntu.org/
|
||||
[9]:http://www.kubuntu.org/
|
||||
[10]:http://lubuntu.net/
|
||||
[11]:http://www.mythbuntu.org/
|
||||
[12]:https://ubuntubudgie.org/
|
||||
[13]:https://www.linux.com/files/images/ubuntuflavorajpg
|
||||
[14]:https://www.linux.com/files/images/ubuntuflavorbjpg
|
||||
[15]:https://www.linux.com/files/images/ubuntuflavorcjpg
|
||||
[16]:https://www.linux.com/files/images/ubuntuflavordjpg
|
||||
[17]:https://www.linux.com/files/images/ubuntuflavorejpg
|
||||
[18]:https://www.linux.com/files/images/xubuntujpg
|
||||
[19]:https://www.linux.com/files/images/ubuntubudgiejpg
|
||||
[20]:https://buy.ubuntu.com/collections/ubuntu-advantage-for-desktop
|
||||
[21]:https://ubuntuforums.org/
|
||||
[22]:https://help.ubuntu.com/?_ga=2.155705979.1922322560.1494162076-828730842.1481046109
|
||||
[23]:https://help.ubuntu.com/community/CommunityHelpWiki?_ga=2.155705979.1922322560.1494162076-828730842.1481046109
|
||||
[24]:https://apps.ubuntu.com/cat/applications/gnome-tweak-tool/
|
||||
[25]:https://extensions.gnome.org/
|
||||
[26]:https://www.kde.org/
|
||||
[27]:https://kubuntu.emerge-open.com/buy
|
||||
[28]:http://lxde.org/
|
||||
[29]:https://www.abisource.com/
|
||||
[30]:http://www.gnumeric.org/
|
||||
[31]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
||||
[32]:https://xubuntu.org/
|
||||
[33]:https://www.xfce.org/
|
@ -1,3 +1,5 @@
|
||||
Translating by XiatianSummer
|
||||
|
||||
Why Car Companies Are Hiring Computer Security Experts
|
||||
============================================================
|
||||
|
||||
|
@ -1,232 +0,0 @@
|
||||
translating by liuxinyu123
|
||||
|
||||
Containing System Services in Red Hat Enterprise Linux – Part 1
|
||||
============================================================
|
||||
|
||||
|
||||
At the 2017 Red Hat Summit, several people asked me “We normally use full VMs to separate network services like DNS and DHCP, can we use containers instead?”. The answer is yes, and here’s an example of how to create a system container in Red Hat Enterprise Linux 7 today.
|
||||
|
||||
### **THE GOAL**
|
||||
|
||||
#### _Create a network service that can be updated independently of any other services of the system, yet easily managed and updated from the host._
|
||||
|
||||
Let’s explore setting up a BIND server running under systemd in a container. In this part, we’ll look at building our container, as well as managing the BIND configuration and data files.
|
||||
|
||||
In Part Two, we’ll look at how systemd on the host integrates with systemd in the container. We’ll explore managing the service in the container, and enabling it as a service on the host.
|
||||
|
||||
### **CREATING THE BIND CONTAINER**
|
||||
|
||||
To get systemd working inside a container easily, we first need to add two packages on the host: `oci-register-machine` and `oci-systemd-hook`. The `oci-systemd-hook` hook allows us to run systemd in a container without needing to use a privileged container or manually configuring tmpfs and cgroups. The `oci-register-machine` hook allows us to keep track of the container with the systemd tools like `systemctl` and `machinectl`.
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# yum install oci-register-machine oci-systemd-hook
|
||||
```
|
||||
|
||||
On to creating our BIND container. The [Red Hat Enterprise Linux 7 base image][6] includes systemd as an init system. We can install and enable BIND the same way we would on a typical system. You can [download this Dockerfile from the git repository][7] in the Resources.
|
||||
|
||||
```
|
||||
[root@rhel7-host bind]# vi Dockerfile
|
||||
|
||||
# Dockerfile for BIND
|
||||
FROM registry.access.redhat.com/rhel7/rhel
|
||||
ENV container docker
|
||||
RUN yum -y install bind && \
|
||||
yum clean all && \
|
||||
systemctl enable named
|
||||
STOPSIGNAL SIGRTMIN+3
|
||||
EXPOSE 53
|
||||
EXPOSE 53/udp
|
||||
CMD [ "/sbin/init" ]
|
||||
```
|
||||
|
||||
Since we’re starting with an init system as PID 1, we need to change the signal sent by the docker CLI when we tell the container to stop. From the `kill` system call man pages (`man 2 kill`):
|
||||
|
||||
```
|
||||
The only signals that can be sent to process ID 1, the init
|
||||
process, are those for which init has explicitly installed
|
||||
signal handlers. This is done to assure the system is not
|
||||
brought down accidentally.
|
||||
```
|
||||
|
||||
For the systemd signal handlers, `SIGRTMIN+3` is the signal that corresponds to `systemd start halt.target`. We also expose both TCP and UDP ports for BIND, since both protocols could be in use.
|
||||
|
||||
### **MANAGING DATA**
|
||||
|
||||
With a functional BIND service, we need a way to manage the configuration and zone files. Currently those are inside the container, so we _could_ enter the container any time we wanted to update the configs or make a zone file change. This isn’t ideal from a management perspective. We’ll need to rebuild the container when we need to update BIND, so changes in the images would be lost. Having to enter the container any time we need to update a file or restart the service adds steps and time.
|
||||
|
||||
Instead, we’ll extract the configuration and data files from the container and copy them to the host, then mount them at run time. This way we can easily restart or rebuild the container without losing changes. We can also modify configs and zones by using an editor outside of the container. Since this container data looks like “ _site-specific data served by this system_ ”, let’s follow the File System Hierarchy and create `/srv/named` on the local host to maintain administrative separation.
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# mkdir -p /srv/named/etc
|
||||
|
||||
[root@rhel7-host ~]# mkdir -p /srv/named/var/named
|
||||
```
|
||||
|
||||
##### _NOTE: If you are migrating an existing configuration, you can skip the following step and copy it directly to the `/srv/named` directories. You may still want to check the container-assigned GID with a temporary container._
|
||||
|
||||
Let’s build and run a temporary container to examine BIND. With an init process as PID 1, we can’t run the container interactively to get a shell. We’ll exec into it after it launches, and check for important files with `rpm`.
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# docker build -t named .
|
||||
|
||||
[root@rhel7-host ~]# docker exec -it $( docker run -d named ) /bin/bash
|
||||
|
||||
[root@0e77ce00405e /]# rpm -ql bind
|
||||
```
|
||||
|
||||
For this example, we’ll need `/etc/named.conf` and everything under `/var/named/`. We can extract these with `machinectl`. If there’s more than one container registered, we can see what’s running in any machine with `machinectl status`. Once we have the configs we can stop the temporary container.
|
||||
|
||||
_There’s also a [sample `named.conf` and zone files for `example.com` in the Resources][2] if you prefer._
|
||||
|
||||
```
|
||||
[root@rhel7-host bind]# machinectl list
|
||||
|
||||
MACHINE CLASS SERVICE
|
||||
8824c90294d5a36d396c8ab35167937f container docker
|
||||
|
||||
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /etc/named.conf /srv/named/etc/named.conf
|
||||
|
||||
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /var/named /srv/named/var/named
|
||||
|
||||
[root@rhel7-host ~]# docker stop infallible_wescoff
|
||||
```
|
||||
|
||||
### **FINAL CREATION**
|
||||
|
||||
To create and run the final container, add the volume options to mount:
|
||||
|
||||
* file `/srv/named/etc/named.conf` as `/etc/named.conf`
|
||||
|
||||
* directory `/srv/named/var/named` as `/var/named`
|
||||
|
||||
Since this is our final container, we’ll also provide a meaningful name that we can refer to later.
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# docker run -d -p 53:53 -p 53:53/udp -v /srv/named/etc/named.conf:/etc/named.conf:Z -v /srv/named/var/named:/var/named:Z --name named-container named
|
||||
```
|
||||
|
||||
With the final container running, we can modify the local configs to change the behavior of BIND in the container. The BIND server will need to listen on any IP that the container might be assigned. Be sure the GID of any new file matches the rest of the BIND files from the container.
|
||||
|
||||
```
|
||||
[root@rhel7-host bind]# cp named.conf /srv/named/etc/named.conf
|
||||
|
||||
[root@rhel7-host ~]# cp example.com.zone /srv/named/var/named/example.com.zone
|
||||
|
||||
[root@rhel7-host ~]# cp example.com.rr.zone /srv/named/var/named/example.com.rr.zone
|
||||
```
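
To confirm that anything you add or edit keeps the group ownership BIND expects, here is a quick, hypothetical check. The GID `25` below comes from the `ls` output in Sidebar 1; treat it as an example and verify the GID your own container actually assigns.

```
[root@rhel7-host ~]# ls -ln /srv/named/var/named/        # note the numeric GID on the copied zone files
[root@rhel7-host ~]# chgrp -R 25 /srv/named/var/named/   # align any newly added files with that GID
```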
|
||||
|
||||
> [Curious why I didn’t need to change SELinux context on the host directories?][3]
|
||||
|
||||
We’ll reload the config by exec’ing the `rndc` binary provided by the container. We can use `journald` in the same fashion to check the BIND logs. If you run into errors, you can edit the file on the host, and reload the config. Using `host` or `dig` on the host, we can check the responses from the contained service for example.com.
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# docker exec -it named-container rndc reload
|
||||
server reload successful
|
||||
|
||||
[root@rhel7-host ~]# docker exec -it named-container journalctl -u named -n
|
||||
-- Logs begin at Fri 2017-05-12 19:15:18 UTC, end at Fri 2017-05-12 19:29:17 UTC. --
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 9.E.F.IP6.ARPA
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: A.E.F.IP6.ARPA
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: B.E.F.IP6.ARPA
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 8.B.D.0.1.0.0.2.IP6.ARPA
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: reloading configuration succeeded
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: reloading zones succeeded
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: loaded serial 2001062601
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: sending notifies (serial 2001062601)
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: all zones loaded
|
||||
May 12 19:29:17 ac1752c314a7 named[27]: running
|
||||
|
||||
[root@rhel7-host bind]# host www.example.com localhost
|
||||
Using domain server:
|
||||
Name: localhost
|
||||
Address: ::1#53
|
||||
Aliases:
|
||||
www.example.com is an alias for server1.example.com.
|
||||
server1.example.com is an alias for mail
|
||||
```
|
||||
|
||||
> [Did your zone file not update? It might be your editor not the serial number.][4]
|
||||
|
||||
### THE FINISH LINE (?)
|
||||
|
||||
We’ve got what we set out to accomplish. DNS requests and zones are being served from a container. We’ve got a persistent location to manage data and configurations across updates.
|
||||
|
||||
In Part 2 of this series, we’ll see how to treat the container as a normal service on the host.
|
||||
|
||||
* * *
|
||||
|
||||
_[Follow the RHEL Blog][5] to receive updates on Part 2 of this series and other new posts via email._
|
||||
|
||||
* * *
|
||||
|
||||
### _**Additional Resources:**_
|
||||
|
||||
#### GitHub repository for accompanying files: [https://github.com/nzwulfin/named-container][8]
|
||||
|
||||
#### **SIDEBAR 1: ** _SELinux context on local files accessed by a container_
|
||||
|
||||
You may have noticed that when I copied the files from the container to the local host, I didn’t run a `chcon` to change the files on the host to type `svirt_sandbox_file_t`. Why didn’t it break? Copying a file into `/srv` should have made that file label type `var_t`. Did I `setenforce 0`?
|
||||
|
||||
Of course not, that would make Dan Walsh cry. And yes, `machinectl` did indeed set the label type as expected, take a look:
|
||||
|
||||
Before starting the container:
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
|
||||
|
||||
-rw-r-----. unconfined_u:object_r:var_t:s0 /srv/named/etc/named.conf
|
||||
```
|
||||
|
||||
No, I used a volume option in run that makes Dan Walsh happy, `:Z`. This part of the command `-v /srv/named/etc/named.conf:/etc/named.conf:Z` does two things: first it says this needs to be relabeled with a private volume SELinux label, and second it says to mount it read / write.
|
||||
|
||||
After starting the container:
|
||||
|
||||
```
|
||||
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
|
||||
|
||||
-rw-r-----. root 25 system_u:object_r:svirt_sandbox_file_t:s0:c821,c956 /srv/named/etc/named.conf
|
||||
```
|
||||
|
||||
#### **SIDEBAR 2: ** _VIM backup behavior can change inodes_
|
||||
|
||||
If you made the edits to the config file with `vim` on the local host and you aren’t seeing the changes in the container, you may have inadvertently created a new file that the container isn’t aware of. There are three `vim` settings that affect backup copies during editing: backup, writebackup, and backupcopy.
|
||||
|
||||
I’ve snipped out the defaults that apply for RHEL 7 from the official VIM backup_table [http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table]
|
||||
|
||||
```
|
||||
backup writebackup
|
||||
|
||||
off on backup current file, deleted afterwards (default)
|
||||
```
|
||||
|
||||
So we don’t create tilde copies that stick around, but we are creating backups. The other setting is backupcopy, where `auto` is the shipped default:
|
||||
|
||||
```
|
||||
"yes" make a copy of the file and overwrite the original one
|
||||
"no" rename the file and write a new one
|
||||
"auto" one of the previous, what works best
|
||||
```
|
||||
|
||||
This combo means that when you edit a file, unless `vim` sees a reason not to (check the docs for the logic) you will end up with a new file that contains your edits, which will be renamed to the original filename when you save. This means the file gets a new inode. For most situations this isn’t a problem, but here the bind mount into the container *is* sensitive to inode changes. To solve this, you need to change the backupcopy behavior.
|
||||
|
||||
Either in the `vim` session or in your `.vimrc`, add `set backupcopy=yes`. This will make sure the original file gets truncated and overwritten, preserving the inode and propagating the changes into the container.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
|
||||
|
||||
作者:[Matt Micene ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
|
||||
[1]:http://rhelblog.redhat.com/author/mmicenerht/
|
||||
[2]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo
|
||||
[3]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_1
|
||||
[4]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_2
|
||||
[5]:http://redhatstackblog.wordpress.com/feed/
|
||||
[6]:https://access.redhat.com/containers
|
||||
[7]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo
|
||||
[8]:https://github.com/nzwulfin/named-container
|
@ -0,0 +1,263 @@
|
||||
Useful Linux Commands that you should know
|
||||
======
|
||||
If you are a Linux system administrator or just a Linux enthusiast, then you love and use the command line, aka the CLI. Until a few years ago, the majority of Linux work was accomplished using the CLI only, and the GUI still has its limitations. Though plenty of Linux distributions can complete tasks with a GUI, learning the CLI remains a major part of mastering Linux.
|
||||
|
||||
To this end, we present a list of useful Linux commands that you should know.
|
||||
|
||||
**Note:** These commands are in no definite order; all of them are equally important to learn and master in order to excel at Linux administration. One more thing: we have only used some of the options for each command as examples; you can refer to the 'man pages' for the complete list of options for each command.
|
||||
|
||||
### 1- top command
|
||||
|
||||
The 'top' command displays a real-time summary of the system. It also displays the processes and all the threads that are running and being managed by the system kernel.
|
||||
|
||||
Information provided by the top command includes uptime, number of users, load average, running/sleeping/zombie processes, CPU usage percentages for user/system, free and used system memory, swap memory, and so on.
|
||||
|
||||
To use the top command, open a terminal and execute the command:
|
||||
|
||||
**$ top**
|
||||
|
||||
To exit the command, press either 'q' or 'Ctrl+C'.
|
||||
|
||||
### 2- free command
|
||||
|
||||
The 'free' command is specifically used to get information about system memory (RAM). With this command we can get information about physical memory, swap memory, and system buffers. It shows the total, free, and used memory available on the system.
|
||||
|
||||
To use this utility, execute the following command in a terminal:
|
||||
|
||||
**$ free**
|
||||
|
||||
It will present all the data in kilobytes (KB); use the '-m' option for megabytes and '-g' for gigabytes.
|
||||
|
||||
### 3- cp command
|
||||
|
||||
The 'cp' (copy) command is used to copy files between folders. The syntax for the 'cp' command is:
|
||||
|
||||
**$ cp source destination**
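
For example, a couple of typical invocations (the paths here are just hypothetical placeholders):

```
$ cp notes.txt /home/user/backup/           # copy a single file into a directory
$ cp -r /home/user/project /tmp/project     # '-r' copies a directory and everything inside it
```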
|
||||
|
||||
### 4- cd command
|
||||
|
||||
The 'cd' command is used for changing directories. We can switch between directories using the cd command.
|
||||
|
||||
To use it, execute
|
||||
|
||||
**$ cd directory_location**
|
||||
|
||||
### 5- ifconfig
|
||||
|
||||
'ifconfig' is a very important utility for viewing and configuring network information on a Linux machine.
|
||||
|
||||
To use it, execute
|
||||
|
||||
**$ ifconfig**
|
||||
|
||||
This will present the network information of all the networking devices on the system. There are a number of options that can be used with 'ifconfig' for configuration; in fact, there are so many options that we have created a separate article for them ( **Read it here || [IFCONFIG command : Learn with some examples][1]** ).
|
||||
|
||||
### 6- crontab command
|
||||
|
||||
'crontab' is another important utility, used to schedule jobs on a Linux system. With crontab, we can make sure that a command or a script is executed at a pre-defined time. To create a cron job, run:
|
||||
|
||||
**$ crontab -e**
|
||||
|
||||
To display all the created jobs, run
|
||||
|
||||
**$ crontab -l**
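
As an illustration, a crontab entry follows the pattern 'minute hour day-of-month month day-of-week command'. The sketch below (with a hypothetical script path) would run a backup script every day at 2:00 AM:

```
# m  h  dom mon dow  command
  0  2  *   *   *    /home/user/scripts/backup.sh
```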
|
||||
|
||||
You can read our detailed article regarding crontab ( **Read it here ||[
|
||||
Scheduling Important Jobs with Crontab][2]** )
|
||||
|
||||
### 7- cat command
|
||||
|
||||
The 'cat' command has many uses; the most common is displaying the contents of a file:
|
||||
|
||||
**$ cat file.txt**
|
||||
|
||||
But it can also be used to merge two or more files using the syntax below:
|
||||
|
||||
**$ cat file1 file2 file3 file4 > file_new**
|
||||
|
||||
We can also use 'cat' command to clone a whole disk ( **Read it here ||
|
||||
[Cloning Disks using dd & cat commands for Linux systems][3]** )
|
||||
|
||||
### 8- df command
|
||||
|
||||
The 'df' command is used to show the disk utilization of our whole Linux file system. Simply run:
|
||||
|
||||
**$ df**
|
||||
|
||||
and we will be presented with the complete disk utilization of all the partitions on our Linux machine.
|
||||
|
||||
### 9- du command
|
||||
|
||||
The 'du' command shows the amount of disk space being used by the files and directories on our Linux machine. To run it, type:
|
||||
|
||||
**$ du /directory**
|
||||
|
||||
( **Recommended Read :[Use of du & df commands with examples][4]** )
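
In day-to-day use, the human-readable flags make both commands easier to scan; for example (the directory below is just an illustration):

```
$ df -h              # per-filesystem usage reported in KB/MB/GB
$ du -sh /var/log    # a single summary line for one directory
```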
|
||||
|
||||
### 10- mv command
|
||||
|
||||
The 'mv' command is used to move files or folders from one location to another. The command syntax for moving files/folders is:
|
||||
|
||||
**$ mv /source/filename /destination**
|
||||
|
||||
We can also use the 'mv' command to rename a file/folder. The syntax for renaming is:
|
||||
|
||||
**$ mv file_oldname file_newname**
|
||||
|
||||
### 11- rm command
|
||||
|
||||
The 'rm' command is used to remove files/folders from a Linux system. To use it, run:
|
||||
|
||||
**$ rm filename**
|
||||
|
||||
We can also use the '-rf' option with the 'rm' command to recursively and forcibly remove a file/folder from the system, but we must use this option with caution.
|
||||
|
||||
### 12- vi/vim command
|
||||
|
||||
vi (or vim) is one of the most famous and most widely used CLI-based text editors for Linux. It takes some time to master, but it offers a great number of features, which makes it a favorite among Linux users.
|
||||
|
||||
For detailed knowledge of vim, kindly refer to the articles [**Beginner's Guide to LVM (Logical Volume Management)** & **Working with Vi/Vim Editor : Advanced concepts**][5].
|
||||
|
||||
### 13- ssh command
|
||||
|
||||
The SSH utility is used to remotely access another machine from the current Linux machine. To access a machine, execute:
|
||||
|
||||
**$ ssh user@machine_ip OR machine_name**
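
For instance, with a made-up user, address, and port:

```
$ ssh admin@192.168.1.50             # log in as 'admin' on a remote host
$ ssh -p 2222 admin@192.168.1.50     # '-p' connects to a non-default SSH port
```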
|
||||
|
||||
Once we have remote access to the machine, we can work on its CLI as if we were working on the local machine.
|
||||
|
||||
### 14- tar command
|
||||
|
||||
The 'tar' command is used to archive and extract files and folders. To archive files/folders using tar, execute:
|
||||
|
||||
**$ tar -cvf file.tar file_name**
|
||||
|
||||
where 'file.tar' will be the name of the resulting archive and 'file_name' is the name of the source file or folder. To extract an archive, run:
|
||||
|
||||
**$ tar -xvf file.tar**
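
A few more common variations, using made-up file and directory names:

```
$ tar -czvf backup.tar.gz /home/user/docs     # 'z' adds gzip compression
$ tar -tzvf backup.tar.gz                     # list the archive contents without extracting
$ tar -xzvf backup.tar.gz -C /tmp/restore     # '-C' extracts into a specific directory
```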
|
||||
|
||||
For more details on 'tar' command, read [**Tar command : Compress & Decompress
|
||||
the files\directories**][7]
|
||||
|
||||
### 15- locate command
|
||||
|
||||
The 'locate' command is used to locate files and folders on your Linux machine. To use it, run:
|
||||
|
||||
**$ locate file_name**
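
Keep in mind that 'locate' searches a prebuilt file database rather than the live filesystem, so very recent files may not appear until the database is refreshed (this usually requires root privileges):

```
$ sudo updatedb          # rebuild the database that locate consults
$ locate named.conf
```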
|
||||
|
||||
### 16- grep command
|
||||
|
||||
The 'grep' command is another very important command that a Linux administrator should know. It comes in especially handy when we want to search for a keyword or multiple keywords in a file. The syntax for using it is:
|
||||
|
||||
**$ grep 'pattern' file.txt**
|
||||
|
||||
It will search for 'pattern' in the file 'file.txt' and print the matching lines on the screen. We can also redirect the output to another file:
|
||||
|
||||
**$ grep 'pattern' file.txt > newfile.txt**
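
Two options worth remembering, shown here with hypothetical paths:

```
$ grep -i 'error' /var/log/messages        # '-i' ignores case
$ grep -rn 'pattern' /home/user/project    # '-r' searches recursively, '-n' prints line numbers
```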
|
||||
|
||||
### 17- ps command
|
||||
|
||||
The 'ps' command is especially useful for getting the process ID of a running process. To get information about all processes, run:
|
||||
|
||||
**$ ps -ef**
|
||||
|
||||
To get information about a single process, execute:
|
||||
|
||||
**$ ps -ef | grep java**
|
||||
|
||||
### 18- kill command
|
||||
|
||||
The 'kill' command is used to terminate a running process. To kill a process we need its process ID, which we can get using the 'ps' command above. To kill a process, run:
|
||||
|
||||
**$ kill -9 process_id**
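
Putting 'ps' and 'kill' together (the PID below is made up; use the one 'ps' reports on your system):

```
$ ps -ef | grep java     # the second column of the output is the PID
$ kill 4321              # ask the process to terminate cleanly
$ kill -9 4321           # force-kill it only if it refuses to exit
```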
|
||||
|
||||
### 19- ls command
|
||||
|
||||
The 'ls' command is used to list all the files in a directory. To use it, execute:
|
||||
|
||||
**$ ls**
|
||||
|
||||
### 20- mkdir command
|
||||
|
||||
To create a directory on a Linux machine, we use the 'mkdir' command. The syntax for 'mkdir' is:
|
||||
|
||||
**$ mkdir new_dir**
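
The '-p' option creates any missing parent directories in one step (the names below are just examples):

```
$ mkdir -p projects/2017/reports    # also creates 'projects' and '2017' if they don't exist yet
```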
|
||||
|
||||
These were some of the useful Linux commands that every system admin should know. We will soon be sharing another list of important commands that you should know as a Linux lover. You can also leave your suggestions and queries in the comment box below.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------

via: http://linuxtechlab.com/useful-linux-commands-you-should-know/

作者:[][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com
[1]:http://linuxtechlab.com/ifconfig-command-learn-examples/
[2]:http://linuxtechlab.com/scheduling-important-jobs-crontab/
[3]:http://linuxtechlab.com/linux-disk-cloning-using-dd-cat-commands/
[4]:http://linuxtechlab.com/du-df-commands-examples/
[5]:http://linuxtechlab.com/working-vivim-editor-advanced-concepts/
[6]:/cdn-cgi/l/email-protection#bbcec8dec9d5dad6defbf2ebdadfdfc9dec8c8
[7]:http://linuxtechlab.com/tar-command-compress-decompress-files
[8]:https://www.facebook.com/linuxtechlab/
[9]:https://twitter.com/LinuxTechLab
[10]:https://plus.google.com/+linuxtechlab
[11]:http://linuxtechlab.com/contact-us-2/

@ -1,172 +0,0 @@
|
||||
How to answer questions in a helpful way
|
||||
============================================================
|
||||
|
||||
Your coworker asks you a slightly unclear question. How do you answer? I think asking questions is a skill (see [How to ask good questions][1]) and that answering questions in a helpful way is also a skill! Both of them are super useful.
|
||||
|
||||
To start out with – sometimes the people asking you questions don’t respect your time, and that sucks. I’m assuming here throughout that that’s not what’s happening – we’re going to assume that the person asking you questions is a reasonable person who is trying their best to figure something out and that you want to help them out. Everyone I work with is like that and so that’s the world I live in :)
|
||||
|
||||
Here are a few strategies for answering questions in a helpful way!
|
||||
|
||||
### If they’re not asking clearly, help them clarify
|
||||
|
||||
Often beginners don’t ask clear questions, or ask questions that don’t have the necessary information to answer the questions. Here are some strategies you can use to help them clarify.
|
||||
|
||||
* **Rephrase a more specific question** back at them (“Are you asking X?”)
|
||||
|
||||
* **Ask them for more specific information** they didn’t provide (“are you using IPv6?”)
|
||||
|
||||
* **Ask what prompted their question**. For example, sometimes people come into my team’s channel with questions about how our service discovery works. Usually this is because they’re trying to set up/reconfigure a service. In that case it’s helpful to ask “which service are you working with? Can I see the pull request you’re working on?”
|
||||
|
||||
A lot of these strategies come from the [how to ask good questions][2] post. (though I would never say to someone “oh you need to read this Document On How To Ask Good Questions before asking me a question”)
|
||||
|
||||
### Figure out what they know already
|
||||
|
||||
Before answering a question, it’s very useful to know what the person knows already!
|
||||
|
||||
Harold Treen gave me a great example of this:
|
||||
|
||||
> Someone asked me the other day to explain “Redux Sagas”. Rather than dive in and say “They are like worker threads that listen for actions and let you update the store!”
|
||||
> I started figuring out how much they knew about Redux, actions, the store and all these other fundamental concepts. From there it was easier to explain the concept that ties those other concepts together.
|
||||
|
||||
Figuring out what your question-asker knows already is important because they may be confused about fundamental concepts (“What’s Redux?”), or they may be an expert who’s getting at a subtle corner case. An answer building on concepts they don’t know is confusing, and an answer that recaps things they know is tedious.
|
||||
|
||||
One useful trick for asking what people know – instead of “Do you know X?”, maybe try “How familiar are you with X?”.
|
||||
|
||||
### Point them to the documentation
|
||||
|
||||
“RTFM” is the classic unhelpful answer to a question, but pointing someone to a specific piece of documentation can actually be really helpful! When I’m asking a question, I’d honestly rather be pointed to documentation that actually answers my question, because it’s likely to answer other questions I have too.
|
||||
|
||||
I think it’s important here to make sure you’re linking to documentation that actually answers the question, or at least check in afterwards to make sure it helped. Otherwise you can end up with this (pretty common) situation:
|
||||
|
||||
* Ali: How do I do X?
|
||||
|
||||
* Jada: <link to documentation>
|
||||
|
||||
* Ali: That doesn’t actually explain how to X, it only explains Y!
|
||||
|
||||
If the documentation I’m linking to is very long, I like to point out the specific part of the documentation I’m talking about. The [bash man page][3] is 44,000 words (really!), so just saying “it’s in the bash man page” is not that helpful :)
|
||||
|
||||
### Point them to a useful search
|
||||
|
||||
Often I find things at work by searching for some Specific Keyword that I know will find me the answer. That keyword might not be obvious to a beginner! So saying “this is the search I’d use to find the answer to that question” can be useful. Again, check in afterwards to make sure the search actually gets them the answer they need :)
|
||||
|
||||
### Write new documentation
|
||||
|
||||
People often come and ask my team the same questions over and over again. This is obviously not the fault of the people (how should _they_ know that 10 people have asked this already, or what the answer is?). So we’re trying to, instead of answering the questions directly,
|
||||
|
||||
1. Immediately write documentation
|
||||
|
||||
2. Point the person to the new documentation we just wrote
|
||||
|
||||
3. Celebrate!
|
||||
|
||||
Writing documentation sometimes takes more time than just answering the question, but it’s often worth it! Writing documentation is especially worth it if:
|
||||
|
||||
a. It’s a question which is being asked again and again.

b. The answer doesn’t change too much over time (if the answer changes every week or month, the documentation will just get out of date and be frustrating).
|
||||
|
||||
### Explain what you did
|
||||
|
||||
As a beginner to a subject, it’s really frustrating to have an exchange like this:
|
||||
|
||||
* New person: “hey how do you do X?”
|
||||
|
||||
* More Experienced Person: “I did it, it is done.”
|
||||
|
||||
* New person: ….. but what did you DO?!
|
||||
|
||||
If the person asking you is trying to learn how things work, it’s helpful to:
|
||||
|
||||
* Walk them through how to accomplish a task instead of doing it yourself
|
||||
|
||||
* Tell them the steps for how you got the answer you gave them!
|
||||
|
||||
This might take longer than doing it yourself, but it’s a learning opportunity for the person who asked, so that they’ll be better equipped to solve such problems in the future.
|
||||
|
||||
Then you can have WAY better exchanges, like this:
|
||||
|
||||
* New person: “I’m seeing errors on the site, what’s happening?”
|
||||
|
||||
* More Experienced Person: (2 minutes later) “oh that’s because there’s a database failover happening”
|
||||
|
||||
* New person: how did you know that??!?!?
|
||||
|
||||
* More Experienced Person: “Here’s what I did!”:
|
||||
1. Often these errors are due to Service Y being down. I looked at $PLACE and it said Service Y was up. So that wasn’t it.
|
||||
|
||||
2. Then I looked at dashboard X, and this part of that dashboard showed there was a database failover happening.
|
||||
|
||||
3. Then I looked in the logs for the service and it showed errors connecting to the database, here’s what those errors look like.
|
||||
|
||||
If you’re explaining how you debugged a problem, it’s useful both to explain how you found out what the problem was, and how you found out what the problem wasn’t. While it might feel good to look like you knew the answer right off the top of your head, it feels even better to help someone improve at learning and diagnosis, and understand the resources available.
|
||||
|
||||
### Solve the underlying problem
|
||||
|
||||
This one is a bit tricky. Sometimes people think they’ve got the right path to a solution, and they just need one more piece of information to implement that solution. But they might not be quite on the right path! For example:
|
||||
|
||||
* George: I’m doing X, and I got this error, how do I fix it
|
||||
|
||||
* Jasminda: Are you actually trying to do Y? If so, you shouldn’t do X, you should do Z instead
|
||||
|
||||
* George: Oh, you’re right!!! Thank you! I will do Z instead.
|
||||
|
||||
Jasminda didn’t answer George’s question at all! Instead she guessed that George didn’t actually want to be doing X, and she was right. That is helpful!
|
||||
|
||||
It’s possible to come off as condescending here though, like
|
||||
|
||||
* George: I’m doing X, and I got this error, how do I fix it?
|
||||
|
||||
* Jasminda: Don’t do that, you’re trying to do Y and you should do Z to accomplish that instead.
|
||||
|
||||
* George: Well, I am not trying to do Y, I actually want to do X because REASONS. How do I do X?
|
||||
|
||||
So don’t be condescending, and keep in mind that some questioners might be attached to the steps they’ve taken so far! It might be appropriate to answer both the question they asked and the one they should have asked: “Well, if you want to do X then you might try this, but if you’re trying to solve problem Y with that, you might have better luck doing this other thing, and here’s why that’ll work better”.
|
||||
|
||||
### Ask “Did that answer your question?”
|
||||
|
||||
I always like to check in after I _think_ I’ve answered the question and ask “did that answer your question? Do you have more questions?”.
|
||||
|
||||
It’s good to pause and wait after asking this because often people need a minute or two to know whether or not they’ve figured out the answer. I especially find this extra “did this answer your questions?” step helpful after writing documentation! Often when writing documentation about something I know well I’ll leave out something very important without realizing it.
|
||||
|
||||
### Offer to pair program/chat in real life
|
||||
|
||||
I work remote, so many of my conversations at work are text-based. I think of that as the default mode of communication.
|
||||
|
||||
Today, we live in a world of easy video conferencing & screensharing! At work I can at any time click a button and immediately be in a video call/screensharing session with someone. Some problems are easier to talk about using your voices!
|
||||
|
||||
For example, recently someone was asking about capacity planning/autoscaling for their service. I could tell there were a few things we needed to clear up but I wasn’t exactly sure what they were yet. We got on a quick video call and 5 minutes later we’d answered all their questions.
|
||||
|
||||
I think especially if someone is really stuck on how to get started on a task, pair programming for a few minutes can really help, and it can be a lot more efficient than email/instant messaging.
|
||||
|
||||
### Don’t act surprised
|
||||
|
||||
This one’s a rule from the Recurse Center: [no feigning surprise][4]. Here’s a relatively common scenario
|
||||
|
||||
* Human 1: “what’s the Linux kernel?”
|
||||
|
||||
* Human 2: “you don’t know what the LINUX KERNEL is?!!!!?!!!???”
|
||||
|
||||
Human 2’s reaction (regardless of whether they’re _actually_ surprised or not) is not very helpful. It mostly just serves to make Human 1 feel bad that they don’t know what the Linux kernel is.
|
||||
|
||||
I’ve worked on actually pretending not to be surprised even when I actually am a bit surprised the person doesn’t know the thing and it’s awesome.
|
||||
|
||||
### Answering questions well is awesome
|
||||
|
||||
Obviously not all these strategies are appropriate all the time, but hopefully you will find some of them helpful! I find taking the time to answer questions and teach people can be really rewarding.
|
||||
|
||||
Special thanks to Josh Triplett for suggesting this post and making many helpful additions, and to Harold Treen, Vaibhav Sagar, Peter Bhat Harkins, Wesley Aptekar-Cassels, and Paul Gowder for reading/commenting.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/answer-questions-well/
|
||||
|
||||
作者:[ Julia Evans][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://jvns.ca/about
|
||||
[1]:https://jvns.ca/blog/good-questions/
|
||||
[2]:https://jvns.ca/blog/good-questions/
|
||||
[3]:https://linux.die.net/man/1/bash
|
||||
[4]:https://jvns.ca/blog/2017/04/27/no-feigning-surprise/
|
@ -1,4 +1,4 @@
|
||||
fuzheng1998 translating
|
||||
|
||||
A Large-Scale Study of Programming Languages and Code Quality in GitHub
|
||||
============================================================
|
||||
|
||||
@ -36,7 +36,7 @@ Our language and project data was extracted from the _GitHub Archive_ , a data
|
||||
|
||||
**Identifying top languages.** We aggregate projects based on their primary language. Then we select the languages with the most projects for further analysis, as shown in [Table 1][48]. A given project can use many languages; assigning a single language to it is difficult. Github Archive stores information gathered from GitHub Linguist which measures the language distribution of a project repository using the source file extensions. The language with the maximum number of source files is assigned as the _primary language_ of the project.
|
||||
|
||||
[![t1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t1.jpg)][49]
|
||||
[![t1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t1.jpg)][49]
|
||||
**Table 1\. Top 3 projects in each language.**
|
||||
|
||||
**Retrieving popular projects.** For each selected language, we filter the project repositories written primarily in that language by its popularity based on the associated number of _stars._ This number indicates how many people have actively expressed interest in the project, and is a reasonable proxy for its popularity. Thus, the top 3 projects in C are _linux, git_ , and _php-src_ ; and for C++ they are _node-webkit, phantomjs_ , and _mongo_ ; and for `Java` they are _storm, elasticsearch_ , and _ActionBarSherlock._ In total, we select the top 50 projects in each language.
|
||||
@ -47,7 +47,7 @@ To ensure that these projects have a sufficient development history, we drop the
|
||||
|
||||
[Table 2][51] summarizes our data set. Since a project may use multiple languages, the second column of the table shows the total number of projects that use a certain language at some capacity. We further exclude some languages from a project that have fewer than 20 commits in that language, where 20 is the first quartile value of the total number of commits per project per language. For example, we find 220 projects that use more than 20 commits in C. This ensures sufficient activity for each language–project pair.
|
||||
|
||||
[![t2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t2.jpg)][52]
|
||||
[![t2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t2.jpg)][52]
|
||||
**Table 2\. Study subjects.**
|
||||
|
||||
In summary, we study 728 projects developed in 17 languages with 18 years of history. This includes 29,000 different developers, 1.57 million commits, and 564,625 bug fix commits.
|
||||
@ -57,14 +57,14 @@ In summary, we study 728 projects developed in 17 languages with 18 years of his
|
||||
|
||||
We define language classes based on several properties of the language thought to influence language quality,[7][9], [8][10], [12][11] as shown in [Table 3][53]. The _Programming Paradigm_ indicates whether the project is written in an imperative procedural, imperative scripting, or functional language. In the rest of the paper, we use the terms procedural and scripting to indicate imperative procedural and imperative scripting respectively.
|
||||
|
||||
[![t3.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t3.jpg)][54]
|
||||
[![t3.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t3.jpg)][54]
|
||||
**Table 3\. Different types of language classes.**
|
||||
|
||||
_Type Checking_ indicates static or dynamic typing. In statically typed languages, type checking occurs at compile time, and variable names are bound to a value and to a type. In addition, expressions (including variables) are classified by types that correspond to the values they might take on at run-time. In dynamically typed languages, type checking occurs at run-time. Hence, in the latter, it is possible to bind a variable name to objects of different types in the same program.
|
||||
|
||||
_Implicit Type Conversion_ allows access of an operand of type T1 as a different type T2, without an explicit conversion. Such implicit conversion may introduce type-confusion in some cases, especially when it presents an operand of specific type T1, as an instance of a different type T2\. Since not all implicit type conversions are immediately a problem, we operationalize our definition by showing examples of the implicit type confusion that can happen in all the languages we identified as allowing it. For example, in languages like `Perl, JavaScript`, and `CoffeeScript` adding a string to a number is permissible (e.g., "5" + 2 yields "52"). The same operation yields 7 in `Php`. Such an operation is not permitted in languages such as `Java` and `Python` as they do not allow implicit conversion. In C and C++ coercion of data types can result in unintended results, for example, `int x; float y; y=3.5; x=y`; is legal C code, and results in different values for x and y, which, depending on intent, may be a problem downstream.[a][12] In `Objective-C` the data type _id_ is a generic object pointer, which can be used with an object of any data type, regardless of the class.[b][13] The flexibility that such a generic data type provides can lead to implicit type conversion and also have unintended consequences.[c][14]Hence, we classify a language based on whether its compiler _allows_ or _disallows_ the implicit type conversion as above; the latter explicitly detects type confusion and reports it.
|
||||
|
||||
Disallowing implicit type conversion could result from static type inference within a compiler (e.g., with `Java`), using a type-inference algorithm such as Hindley[10][15] and Milner,[17][16] or at run-time using a dynamic type checker. In contrast, a type-confusion can occur silently because it is either undetected or is unreported. Either way, implicitly allowing type conversion provides flexibility but may eventually cause errors that are difficult to localize. To abbreviate, we refer to languages allowing implicit type conversion as _implicit_ and those that disallow it as _explicit._
|
||||
Disallowing implicit type conversion could result from static type inference within a compiler (e.g., with `Java`), using a type-inference algorithm such as Hindley[10][15] and Milner,[17][16] or at run-time using a dynamic type checker. In contrast, a type-confusion can occur silently because it is either undetected or is unreported. Either way, implicitly allowing type conversion provides flexibility but may eventually cause errors that are difficult to localize. To abbreviate, we refer to languages allowing implicit type conversion as _implicit_ and those that disallow it as _explicit._
|
||||
|
||||
_Memory Class_ indicates whether the language requires developers to manage memory. We treat `Objective-C` as unmanaged, in spite of it following a hybrid model, because we observe many memory errors in its codebase, as discussed in RQ4 in Section 3.
|
||||
|
||||
@ -77,7 +77,7 @@ We classify the studied projects into different domains based on their features
|
||||
|
||||
We detect 30 distinct domains, that is, topics, and estimate the probability that each project belonging to each domain. Since these auto-detected domains include several project-specific keywords, for example, facebook, it is difficult to identify the underlying common functions. In order to assign a meaningful name to each domain, we manually inspect each of the 30 domains to identify projectname-independent, domain-identifying keywords. We manually rename all of the 30 auto-detected domains and find that the majority of the projects fall under six domains: Application, Database, CodeAnalyzer, Middleware, Library, and Framework. We also find that some projects do not fall under any of the above domains and so we assign them to a catchall domain labeled as _Other_ . This classification of projects into domains was subsequently checked and confirmed by another member of our research group. [Table 4][57] summarizes the identified domains resulting from this process.
|
||||
|
||||
[![t4.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t4.jpg)][58]
|
||||
[![t4.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t4.jpg)][58]
|
||||
**Table 4\. Characteristics of domains.**
|
||||
|
||||
![*](http://dl.acm.org/images/bullet.gif)
|
||||
@ -87,7 +87,7 @@ While fixing software bugs, developers often leave important information in the
|
||||
|
||||
First, we categorize the bugs based on their _Cause_ and _Impact. Causes_ are further classified into disjoint subcategories of errors: Algorithmic, Concurrency, Memory, generic Programming, and Unknown. The bug _Impact_ is also classified into four disjoint subcategories: Security, Performance, Failure, and Other unknown categories. Thus, each bug-fix commit also has an induced Cause and an Impact type. [Table 5][59] shows the description of each bug category. This classification is performed in two phases:
|
||||
|
||||
[![t5.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t5.jpg)][60]
|
||||
[![t5.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t5.jpg)][60]
|
||||
**Table 5\. Categories of bugs and their distribution in the whole dataset.**
|
||||
|
||||
**(1) Keyword search.** We randomly choose 10% of the bug-fix messages and use a keyword based search technique to automatically categorize them as potential bug types. We use this annotation, separately, for both Cause and Impact types. We chose a restrictive set of keywords and phrases, as shown in [Table 5][61]. Such a restrictive set of keywords and phrases helps reduce false positives.
|
||||
@ -119,7 +119,7 @@ We begin with a straightforward question that directly addresses the core of wha
|
||||
|
||||
We use a regression model to compare the impact of each language on the number of defects with the average impact of all languages, against defect fixing commits (see [Table 6][64]).
|
||||
|
||||
[![t6.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t6.jpg)][65]
|
||||
[![t6.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t6.jpg)][65]
|
||||
**Table 6\. Some languages induce fewer defects than other languages.**
|
||||
|
||||
We include some variables as controls for factors that will clearly influence the response. Project age is included as older projects will generally have a greater number of defect fixes. Trivially, the number of commits to a project will also impact the response. Additionally, the number of developers who touch a project and the raw size of the project are both expected to grow with project activity.
|
||||
@ -128,11 +128,11 @@ The sign and magnitude of the estimated coefficients in the above model relates
|
||||
|
||||
One should take care not to overestimate the impact of language on defects. While the observed relationships are statistically significant, the effects are quite small. Analysis of deviance reveals that language accounts for less than 1% of the total explained deviance.
|
||||
|
||||
[![ut1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut1.jpg)][66]
|
||||
[![ut1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut1.jpg)][66]
|
||||
|
||||
We can read the model coefficients as the expected change in the log of the response for a one unit change in the predictor with all other predictors held constant; that is, for a coefficient _β<sub style="border: 0px; outline: 0px; font-size: smaller; vertical-align: sub; background: transparent;">i</sub>_ , a one unit change in _β<sub style="border: 0px; outline: 0px; font-size: smaller; vertical-align: sub; background: transparent;">i</sub>_ yields an expected change in the response of e _βi_ . For the factor variables, this expected change is compared to the average across all languages. Thus, if, for some number of commits, a particular project developed in an _average_ language had four defective commits, then the choice to use C++ would mean that we should expect one additional defective commit since e0.18 × 4 = 4.79\. For the same project, choosing `Haskell` would mean that we should expect about one fewer defective commit as _e_ −0.26 × 4 = 3.08\. The accuracy of this prediction depends on all other factors remaining the same, a challenging proposition for all but the most trivial of projects. All observational studies face similar limitations; we address this concern in more detail in Section 5.
|
||||
|
||||
**Result 1:** _Some languages have a greater association with defects than other languages, although the effect is small._
|
||||
**Result 1:** _Some languages have a greater association with defects than other languages, although the effect is small._
|
||||
|
||||
In the remainder of this paper we expand on this basic result by considering how different categories of application, defect, and language, lead to further insight into the relationship between languages and defect proneness.
|
||||
|
||||
@ -150,26 +150,26 @@ Rather than considering languages individually, we aggregate them by language cl
|
||||
|
||||
As with language (earlier in [Table 6][67]), we are comparing language _classes_ with the average behavior across all language classes. The model is presented in [Table 7][68]. It is clear that `Script-Dynamic-Explicit-Managed` class has the smallest magnitude coefficient. The coefficient is insignificant, that is, the z-test for the coefficient cannot distinguish the coefficient from zero. Given the magnitude of the standard error, however, we can assume that the behavior of languages in this class is very close to the average across all languages. We confirm this by recoding the coefficient using `Proc-Static-Implicit-Unmanaged` as the base level and employing treatment, or dummy coding that compares each language class with the base level. In this case, `Script-Dynamic-Explicit-Managed` is significantly different with _p_ = 0.00044\. We note here that while choosing different coding methods affects the coefficients and z-scores, the models are identical in all other respects. When we change the coding we are rescaling the coefficients to reflect the comparison that we wish to make.[4][28] Comparing the other language classes to the grand mean, `Proc-Static-Implicit-Unmanaged` languages are more likely to induce defects. This implies that either implicit type conversion or memory management issues contribute to greater defect proneness as compared with other procedural languages.
|
||||
|
||||
[![t7.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t7.jpg)][69]
|
||||
[![t7.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t7.jpg)][69]
|
||||
**Table 7\. Functional languages have a smaller relationship to defects than other language classes whereas procedural languages are greater than or similar to the average.**
|
||||
|
||||
Among scripting languages we observe a similar relationship between languages that allow versus those that do not allow implicit type conversion, providing some evidence that implicit type conversion (vs. explicit) is responsible for this difference as opposed to memory management. We cannot state this conclusively given the correlation between factors. However when compared to the average, as a group, languages that do not allow implicit type conversion are less error-prone while those that do are more error-prone. The contrast between static and dynamic typing is also visible in functional languages.
|
||||
|
||||
The functional languages as a group show a strong difference from the average. Statically typed languages have a substantially smaller coefficient yet both functional language classes have the same standard error. This is strong evidence that functional static languages are less error-prone than functional dynamic languages, however, the z-tests only test whether the coefficients are different from zero. In order to strengthen this assertion, we recode the model as above using treatment coding and observe that the `Functional-Static-Explicit-Managed` language class is significantly less defect-prone than the `Functional-Dynamic-Explicit-Managed`language class with _p_ = 0.034.
|
||||
|
||||
[![ut2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut2.jpg)][70]
|
||||
[![ut2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut2.jpg)][70]
|
||||
|
||||
As with language and defects, the relationship between language class and defects is based on a small effect. The deviance explained is similar, albeit smaller, with language class explaining much less than 1% of the deviance.
|
||||
|
||||
We now revisit the question of application domain. Does domain have an interaction with language class? Does the choice of, for example, a functional language, have an advantage for a particular domain? As above, a Chi-square test for the relationship between these factors and the project domain yields a value of 99.05 and _df_ = 30 with _p_ = 2.622e–09 allowing us to reject the null hypothesis that the factors are independent. Cramer's-V yields a value of 0.133, a weak level of association. Consequently, although there is some relation between domain and language, there is only a weak relationship between domain and language class.
|
||||
|
||||
**Result 2:** _There is a small but significant relationship between language class and defects. Functional languages are associated with fewer defects than either procedural or scripting languages._
|
||||
**Result 2:** _There is a small but significant relationship between language class and defects. Functional languages are associated with fewer defects than either procedural or scripting languages._
|
||||
|
||||
It is somewhat unsatisfying that we do not observe a strong association between language, or language class, and domain within a project. An alternative way to view this same data is to disregard projects and aggregate defects over all languages and domains. Since this does not yield independent samples, we do not attempt to analyze it statistically, rather we take a descriptive, visualization-based approach.
|
||||
|
||||
We define _Defect Proneness_ as the ratio of bug fix commits over total commits per language per domain. [Figure 1][71] illustrates the interaction between domain and language using a heat map, where the defect proneness increases from lighter to darker zone. We investigate which language factors influence defect fixing commits across a collection of projects written across a variety of languages. This leads to the following research question:
|
||||
|
||||
[![f1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/f1.jpg)][72]
|
||||
[![f1.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/f1.jpg)][72]
|
||||
**Figure 1\. Interaction of language's defect proneness with domain. Each cell in the heat map represents defect proneness of a language (row header) for a given domain (column header). The "Overall" column represents defect proneness of a language over all the domains. The cells with white cross mark indicate null value, that is, no commits were made corresponding to that cell.**
|
||||
|
||||
**RQ3\. Does language defect proneness depend on domain?**
|
||||
@ -178,9 +178,9 @@ In order to answer this question we first filtered out projects that would have
|
||||
|
||||
We see only a subdued variation in this heat map which is a result of the inherent defect proneness of the languages as seen in RQ1\. To validate this, we measure the pairwise rank correlation between the language defect proneness for each domain with the overall. For all of the domains except Database, the correlation is positive, and p-values are significant (<0.01). Thus, w.r.t. defect proneness, the language ordering in each domain is strongly correlated with the overall language ordering.
|
||||
|
||||
[![ut3.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut3.jpg)][74]
|
||||
[![ut3.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut3.jpg)][74]
|
||||
|
||||
**Result 3:** _There is no general relationship between application domain and language defect proneness._
|
||||
**Result 3:** _There is no general relationship between application domain and language defect proneness._
|
||||
|
||||
We have shown that different languages induce a larger number of defects and that this relationship is not only related to particular languages but holds for general classes of languages; however, we find that the type of project does not mediate this relationship to a large degree. We now turn our attention to categorization of the response. We want to understand how language relates to specific kinds of defects and how this relationship compares to the more general relationship that we observe. We divide the defects into categories as described in [Table 5][75] and ask the following question:
|
||||
|
||||
@ -188,12 +188,12 @@ We have shown that different languages induce a larger number of defects and tha
|
||||
|
||||
We use an approach similar to RQ3 to understand the relation between languages and bug categories. First, we study the relation between bug categories and language class. A heat map ([Figure 2][76]) shows aggregated defects over language classes and bug types. To understand the interaction between bug categories and languages, we use an NBR regression model for each category. For each model we use the same control factors as RQ1 as well as languages encoded with weighted effects to predict defect fixing commits.
|
||||
|
||||
[![f2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/f2.jpg)][77]
|
||||
[![f2.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/f2.jpg)][77]
|
||||
**Figure 2\. Relation between bug categories and language class. Each cell represents percentage of bug fix commit out of all bug fix commits per language class (row header) per bug category (column header). The values are normalized column wise.**
|
||||
|
||||
The results along with the anova value for language are shown in [Table 8][78]. The overall deviance for each model is substantially smaller and the proportion explained by language for a specific defect type is similar in magnitude for most of the categories. We interpret this relationship to mean that language has a greater impact on specific categories of bugs, than it does on bugs overall. In the next section we expand on these results for the bug categories with significant bug counts as reported in [Table 5][79]. However, our conclusion generalizes for all categories.
|
||||
|
||||
[![t8.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t8.jpg)][80]
|
||||
[![t8.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/t8.jpg)][80]
|
||||
**Table 8\. While the impact of language on defects varies across defect category, language has a greater impact on specific categories than it does on defects in general.**
|
||||
|
||||
**Programming errors.** Generic programming errors account for around 88.53% of all bug fix commits and occur in all the language classes. Consequently, the regression analysis draws a similar conclusion as of RQ1 (see [Table 6][81]). All languages incur programming errors such as faulty error-handling, faulty definitions, typos, etc.
|
||||
@ -202,7 +202,7 @@ The results along with the anova value for language are shown in [Table 8][78].
|
||||
|
||||
**Concurrency errors.** 1.99% of the total bug fix commits are related to concurrency errors. The heat map shows that `Proc-Static-Implicit-Unmanaged` dominates this error type. C and C++ introduce 19.15% and 7.89% of the errors, and they are distributed across the projects.
|
||||
|
||||
[![ut4.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut4.jpg)][84]
|
||||
[![ut4.jpg](http://deliveryimages.acm.org/10.1145/3130000/3126905/ut4.jpg)][84]
|
||||
|
||||
Both of the `Static-Strong-Managed` language classes are in the darker zone in the heat map confirming, in general static languages produce more concurrency errors than others. Among the dynamic languages, only `Erlang` is more prone to concurrency errors, perhaps relating to the greater use of this language for concurrent applications. Likewise, the negative coefficients in [Table 8][85] shows that projects written in dynamic languages like `Ruby` and `Php` have fewer concurrency errors. Note that, certain languages like `JavaScript, CoffeeScript`, and `TypeScript` do not support concurrency, in its traditional form, while `Php` has a limited support depending on its implementations. These languages introduce artificial zeros in the data, and thus the concurrency model coefficients in [Table 8][86] for those languages cannot be interpreted like the other coefficients. Due to these artificial zeros, the average over all languages in this model is smaller, which may affect the sizes of the coefficients, since they are given w.r.t. the average, but it will not affect their relative relationships, which is what we are after.
|
||||
|
||||
@ -210,7 +210,7 @@ A textual analysis based on word-frequency of the bug fix messages suggests that
|
||||
|
||||
**Security and other impact errors.** Around 7.33% of all the bug fix commits are related to Impact errors. Among them `Erlang, C++`, and `Python` associate with more security errors than average ([Table 8][87]). `Clojure` projects associate with fewer security errors ([Figure 2][88]). From the heat map we also see that `Static` languages are in general more prone to failure and performance errors, these are followed by `Functional-Dynamic-Explicit-Managed` languages such as `Erlang`. The analysis of deviance results confirm that language is strongly associated with failure impacts. While security errors are the weakest among the categories, the deviance explained by language is still quite strong when compared with the residual deviance.
|
||||
|
||||
**Result 4:** _Defect types are strongly associated with languages; some defect type like memory errors and concurrency errors also depend on language primitives. Language matters more for specific categories than it does for defects overall._
|
||||
**Result 4:** _Defect types are strongly associated with languages; some defect type like memory errors and concurrency errors also depend on language primitives. Language matters more for specific categories than it does for defects overall._
|
||||
|
||||
[Back to Top][89]
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
**translating by [erlinux](https://github.com/erlinux)**
|
||||
Operating a Kubernetes network
|
||||
============================================================
|
||||
|
||||
|
@ -0,0 +1,189 @@
|
||||
3 Simple, Excellent Linux Network Monitors
|
||||
============================================================
|
||||
|
||||
![network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_3.png?itok=iuPcSN4k "network")
|
||||
Learn more about your network connections with the iftop, Nethogs, and vnstat tools.[Used with permission][3]
|
||||
|
||||
You can learn an amazing amount of information about your network connections with these three glorious Linux networking commands. iftop displays your network connections in a top-style view, Nethogs quickly reveals what is hogging your bandwidth, and vnstat runs as a nice lightweight daemon to record your usage over time.
|
||||
|
||||
### iftop
|
||||
|
||||
The excellent [iftop][8] listens to the network interface that you specify, and displays connections in a top-style interface.
|
||||
|
||||
This is a great little tool for quickly identifying hogs, measuring speed, and also to maintain a running total of your network traffic. It is rather surprising to see how much bandwidth we use, especially for us old people who remember the days of telephone land lines, modems, screaming kilobits of speed, and real live bauds. We abandoned bauds a long time ago in favor of bit rates. Baud measures signal changes, which sometimes were the same as bit rates, but mostly not.
|
||||
|
||||
If you have just one network interface, run iftop with no options. iftop requires root permissions:
|
||||
|
||||
```
|
||||
$ sudo iftop
|
||||
```
|
||||
|
||||
When you have more than one, specify the interface you want to monitor:
|
||||
|
||||
```
|
||||
$ sudo iftop -i wlan0
|
||||
```
|
||||
|
||||
Just like top, you can change the display options while it is running.
|
||||
|
||||
* **h** toggles the help screen.
|
||||
|
||||
* **n** toggles name resolution.
|
||||
|
||||
* **s** toggles source host display, and **d** toggles the destination hosts.
|
||||
|
||||
* **S** toggles port numbers.
|
||||
|
||||
* **N** toggles port resolution; to see all port numbers toggle resolution off.
|
||||
|
||||
* **t** toggles the text interface. The default display requires ncurses. I think the text display is more readable and better-organized (Figure 1).
|
||||
|
||||
* **p** pauses the display.
|
||||
|
||||
* **q** quits the program.
|
||||
|
||||
|
||||
![text display](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-1_8.png?itok=luKHS5ve "text display")
|
||||
Figure 1: The text display is readable and organized.[Used with permission][1]
|
||||
|
||||
When you toggle the display options, iftop continues to measure all traffic. You can also select a single host to monitor. You need the host's IP address and netmask. I was curious how much of a load Pandora put on my sad little meager bandwidth cap, so first I used dig to find their IP address:
|
||||
|
||||
```
|
||||
$ dig A pandora.com
|
||||
[...]
|
||||
;; ANSWER SECTION:
|
||||
pandora.com. 267 IN A 208.85.40.20
|
||||
pandora.com. 267 IN A 208.85.40.50
|
||||
```
|
||||
|
||||
What's the netmask? [ipcalc][9] tells us:
|
||||
|
||||
```
|
||||
$ ipcalc -b 208.85.40.20
|
||||
Address: 208.85.40.20
|
||||
Netmask: 255.255.255.0 = 24
|
||||
Wildcard: 0.0.0.255
|
||||
=>
|
||||
Network: 208.85.40.0/24
|
||||
```
|
||||
|
||||
Now feed the address and netmask to iftop:
|
||||
|
||||
```
|
||||
$ sudo iftop -F 208.85.40.20/24 -i wlan0
|
||||
```
|
||||
|
||||
Is that not seriously groovy? I was surprised to learn that Pandora is easy on my precious bits, using around 500Kb per hour. And, like most streaming services, Pandora's traffic comes in spurts and relies on caching to smooth out the lumps and bumps.
|
||||
|
||||
You can do the same with IPv6 addresses, using the **-G** option. Consult the fine man page to learn the rest of iftop's features, including customizing your default options with a personal configuration file, and applying custom filters (see [PCAP-FILTER][10] for a filter reference).
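
For instance, a hypothetical filter that narrows the display to HTTPS traffic only might look like this (the interface name and port are just examples):

```
# Apply a pcap-style filter so iftop only counts traffic on port 443
$ sudo iftop -i wlan0 -f "port 443"
```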
|
||||
|
||||
### Nethogs
|
||||
|
||||
When you want to quickly learn who is sucking up your bandwidth, Nethogs is fast and easy. Run it as root and specify the interface to listen on. It displays the hoggy application and the process number, so that you may kill it if you so desire:
|
||||
|
||||
```
|
||||
$ sudo nethogs wlan0
|
||||
|
||||
NetHogs version 0.8.1
|
||||
|
||||
PID USER PROGRAM DEV SENT RECEIVED
|
||||
7690 carla /usr/lib/firefox wlan0 12.494 556.580 KB/sec
|
||||
5648 carla .../chromium-browser wlan0 0.052 0.038 KB/sec
|
||||
TOTAL 12.546 556.618 KB/sec
|
||||
```
|
||||
|
||||
Nethogs has few options: cycling between kb/s, kb, b, and mb, sorting by received or sent packets, and adjusting the delay between refreshes. See `man nethogs`, or run `nethogs -h`.
|
||||
|
||||
### vnstat
|
||||
|
||||
[vnstat][11] is the easiest network data collector to use. It is lightweight and does not need root permissions. It runs as a daemon and records your network statistics over time. The `vnstat` command displays the accumulated data:
|
||||
|
||||
```
|
||||
$ vnstat -i wlan0
|
||||
Database updated: Tue Oct 17 08:36:38 2017
|
||||
|
||||
wlan0 since 10/17/2017
|
||||
|
||||
rx: 45.27 MiB tx: 3.77 MiB total: 49.04 MiB
|
||||
|
||||
monthly
|
||||
rx | tx | total | avg. rate
|
||||
------------------------+-------------+-------------+---------------
|
||||
Oct '17 45.27 MiB | 3.77 MiB | 49.04 MiB | 0.28 kbit/s
|
||||
------------------------+-------------+-------------+---------------
|
||||
estimated 85 MiB | 5 MiB | 90 MiB |
|
||||
|
||||
daily
|
||||
rx | tx | total | avg. rate
|
||||
------------------------+-------------+-------------+---------------
|
||||
today 45.27 MiB | 3.77 MiB | 49.04 MiB | 12.96 kbit/s
|
||||
------------------------+-------------+-------------+---------------
|
||||
estimated 125 MiB | 8 MiB | 133 MiB |
|
||||
```
|
||||
|
||||
By default it displays all network interfaces. Use the `-i` option to select a single interface. Merge the data of multiple interfaces this way:
|
||||
|
||||
```
|
||||
$ vnstat -i wlan0+eth0+eth1
|
||||
```
|
||||
|
||||
You can filter the display in several ways (a short sketch follows the list):
|
||||
|
||||
* **-h** displays statistics by hours.
|
||||
|
||||
* **-d** displays statistics by days.
|
||||
|
||||
* **-w** and **-m** displays statistics by weeks and months.
|
||||
|
||||
* Watch live updates with the **-l** option.
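
For example, a few hypothetical invocations combining these switches with an interface name:

```
# Hourly and daily summaries for a single (example) interface
$ vnstat -h -i wlan0
$ vnstat -d -i wlan0

# Live updates, similar to a very lightweight traffic meter
$ vnstat -l -i wlan0
```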
|
||||
|
||||
This command deletes the database for wlan1 and stops watching it:
|
||||
|
||||
```
|
||||
$ vnstat -i wlan1 --delete
|
||||
```
|
||||
|
||||
This command creates an alias for a network interface. This example uses one of the weird interface names from Ubuntu 16.04:
|
||||
|
||||
```
|
||||
$ vnstat -u -i enp0s25 --nick eth0
|
||||
```
|
||||
|
||||
By default vnstat monitors eth0. You can change this in `/etc/vnstat.conf`, or create your own personal configuration file in your home directory. See `man vnstat` for a complete reference.
|
||||
|
||||
You can also install vnstati to create simple, colored graphs (Figure 2):
|
||||
|
||||
```
|
||||
$ vnstati -s -i wlx7cdd90a0a1c2 -o vnstat.png
|
||||
```
|
||||
|
||||
|
||||
![vnstati](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-2_5.png?itok=HsWJMcW0 "vnstati")
|
||||
Figure 2: You can create simple colored graphs with vnstati.[Used with permission][2]
|
||||
|
||||
See `man vnstati` for complete options.
|
||||
|
||||
_Learn more about Linux through the free ["Introduction to Linux" ][7]course from The Linux Foundation and edX._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/10/3-simple-excellent-linux-network-monitors
|
||||
|
||||
作者:[CARLA SCHRODER ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/cschroder
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:https://www.linux.com/licenses/category/used-permission
|
||||
[3]:https://www.linux.com/licenses/category/used-permission
|
||||
[4]:https://www.linux.com/files/images/fig-1png-8
|
||||
[5]:https://www.linux.com/files/images/fig-2png-5
|
||||
[6]:https://www.linux.com/files/images/bannerpng-3
|
||||
[7]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
||||
[8]:http://www.ex-parrot.com/pdw/iftop/
|
||||
[9]:https://www.linux.com/learn/intro-to-linux/2017/8/how-calculate-network-addresses-ipcalc
|
||||
[10]:http://www.tcpdump.org/manpages/pcap-filter.7.html
|
||||
[11]:http://humdi.net/vnstat/
|
@ -1,95 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
GitHub welcomes all CI tools
|
||||
====================
|
||||
|
||||
|
||||
[![GitHub and all CI tools](https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png)][11]
|
||||
|
||||
Continuous Integration ([CI][12]) tools help you stick to your team's quality standards by running tests every time you push a new commit and [reporting the results][13] to a pull request. Combined with continuous delivery ([CD][14]) tools, you can also test your code on multiple configurations, run additional performance tests, and automate every step [until production][15].
|
||||
|
||||
There are several CI and CD tools that [integrate with GitHub][16], some of which you can install in a few clicks from [GitHub Marketplace][17]. With so many options, you can pick the best tool for the job—even if it's not the one that comes pre-integrated with your system.
|
||||
|
||||
The tools that will work best for you depend on many factors, including:
|
||||
|
||||
* Programming language and application architecture
|
||||
|
||||
* Operating system and browsers you plan to support
|
||||
|
||||
* Your team's experience and skills
|
||||
|
||||
* Scaling capabilities and plans for growth
|
||||
|
||||
* Geographic distribution of dependent systems and the people who use them
|
||||
|
||||
* Packaging and delivery goals
|
||||
|
||||
Of course, it isn't possible to optimize your CI tool for all of these scenarios. The people who build them have to choose which use cases to serve best—and when to prioritize complexity over simplicity. For example, if you like to test small applications written in a particular programming language for one platform, you won't need the complexity of a tool that tests embedded software controllers on dozens of platforms with a broad mix of programming languages and frameworks.
|
||||
|
||||
If you need a little inspiration for which CI tool might work best, take a look at [popular GitHub projects][18]. Many show the status of their integrated CI/CD tools as badges in their README.md. We've also analyzed the use of CI tools across more than 50 million repositories in the GitHub community, and found a lot of variety. The following diagram shows the relative percentage of the top 10 CI tools used with GitHub.com, based on the most used [commit status contexts][19] used within our pull requests.
|
||||
|
||||
_Our analysis also showed that many teams use more than one CI tool in their projects, allowing them to emphasize what each tool does best._
|
||||
|
||||
[![Top 10 CI systems used with GitHub.com based on most used commit status contexts](https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png)][20]
|
||||
|
||||
If you'd like to check them out, here are the top 10 tools teams use:
|
||||
|
||||
* [Travis CI][1]
|
||||
|
||||
* [Circle CI][2]
|
||||
|
||||
* [Jenkins][3]
|
||||
|
||||
* [AppVeyor][4]
|
||||
|
||||
* [CodeShip][5]
|
||||
|
||||
* [Drone][6]
|
||||
|
||||
* [Semaphore CI][7]
|
||||
|
||||
* [Buildkite][8]
|
||||
|
||||
* [Wercker][9]
|
||||
|
||||
* [TeamCity][10]
|
||||
|
||||
It's tempting to just pick the default, pre-integrated tool without taking the time to research and choose the best one for the job, but there are plenty of [excellent choices][21] built for your specific use cases. And if you change your mind later, no problem. When you choose the best tool for a specific situation, you're guaranteeing tailored performance and the freedom of interchangeability when it no longer fits.
|
||||
|
||||
Ready to see how CI tools can fit into your workflow?
|
||||
|
||||
[Browse GitHub Marketplace][22]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://github.com/blog/2463-github-welcomes-all-ci-tools
|
||||
|
||||
作者:[jonico ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/jonico
|
||||
[1]:https://travis-ci.org/
|
||||
[2]:https://circleci.com/
|
||||
[3]:https://jenkins.io/
|
||||
[4]:https://www.appveyor.com/
|
||||
[5]:https://codeship.com/
|
||||
[6]:http://try.drone.io/
|
||||
[7]:https://semaphoreci.com/
|
||||
[8]:https://buildkite.com/
|
||||
[9]:http://www.wercker.com/
|
||||
[10]:https://www.jetbrains.com/teamcity/
|
||||
[11]:https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png
|
||||
[12]:https://en.wikipedia.org/wiki/Continuous_integration
|
||||
[13]:https://github.com/blog/2051-protected-branches-and-required-status-checks
|
||||
[14]:https://en.wikipedia.org/wiki/Continuous_delivery
|
||||
[15]:https://developer.github.com/changes/2014-01-09-preview-the-new-deployments-api/
|
||||
[16]:https://github.com/works-with/category/continuous-integration
|
||||
[17]:https://github.com/marketplace/category/continuous-integration
|
||||
[18]:https://github.com/explore?trending=repositories#trending
|
||||
[19]:https://developer.github.com/v3/repos/statuses/
|
||||
[20]:https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png
|
||||
[21]:https://github.com/works-with/category/continuous-integration
|
||||
[22]:https://github.com/marketplace/category/continuous-integration
|
@ -1,3 +1,5 @@
|
||||
yixunx translating
|
||||
|
||||
Love Your Bugs
|
||||
============================================================
|
||||
|
||||
|
@ -1,76 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Glitch: write fun small web projects instantly
|
||||
============================================================
|
||||
|
||||
I just wrote about Jupyter Notebooks which are a fun interactive way to write Python code. That reminded me I learned about Glitch recently, which I also love!! I built a small app to [turn off twitter retweets][2] with it. So!
|
||||
|
||||
[Glitch][3] is an easy way to make Javascript webapps. (javascript backend, javascript frontend)
|
||||
|
||||
The fun thing about glitch is:
|
||||
|
||||
1. you start typing Javascript code into their web interface
|
||||
|
||||
2. as soon as you type something, it automagically reloads the backend of your website with the new code. You don’t even have to save!! It autosaves.
|
||||
|
||||
So it’s like Heroku, but even more magical!! Coding like this (you type, and the code runs on the public internet immediately) just feels really **fun** to me.
|
||||
|
||||
It’s kind of like sshing into a server and editing PHP/HTML code on your server and having it instantly available, which I kind of also loved. Now we have “better deployment practices” than “just edit the code and it is instantly on the internet” but we are not talking about Serious Development Practices, we are talking about writing tiny programs for fun.
|
||||
|
||||
### glitch has awesome example apps
|
||||
|
||||
Glitch seems like a fun, nice way to learn programming!
|
||||
|
||||
For example, there’s a space invaders game (code by [Mary Rose Cook][4]) at [https://space-invaders.glitch.me/][5]. The thing I love about this is that in just a few clicks I can
|
||||
|
||||
1. click “remix this”
|
||||
|
||||
2. start editing the code to make the boxes orange instead of black
|
||||
|
||||
3. have my own space invaders game!! Mine is at [http://julias-space-invaders.glitch.me/][1]. (i just made very tiny edits to make it orange, nothing fancy)
|
||||
|
||||
They have tons of example apps that you can start from – for instance [bots][6], [games][7], and more.
|
||||
|
||||
### awesome actually useful app: tweetstorms
|
||||
|
||||
The way I learned about Glitch was from this app which shows you tweetstorms from a given user: [https://tweetstorms.glitch.me/][8].
|
||||
|
||||
For example, you can see [@sarahmei][9]’s tweetstorms at [https://tweetstorms.glitch.me/sarahmei][10] (she tweets a lot of good tweetstorms!).
|
||||
|
||||
### my glitch app: turn off retweets
|
||||
|
||||
When I learned about Glitch I wanted to turn off retweets for everyone I follow on Twitter (I know you can do it in Tweetdeck!) and doing it manually was a pain – I had to do it one person at a time. So I wrote a tiny Glitch app to do it for me!
|
||||
|
||||
I liked that I didn’t have to set up a local development environment, I could just start typing and go!
|
||||
|
||||
Glitch only supports Javascript and I don’t really know Javascript that well (I think I’ve never written a Node program before), so the code isn’t awesome. But I had a really good time writing it – being able to type and just see my code running instantly was delightful. Here it is: [https://turn-off-retweets.glitch.me/][11].
|
||||
|
||||
### that’s all!
|
||||
|
||||
Using Glitch feels really fun and democratic. Usually if I want to fork someone’s web project and make changes I wouldn’t do it – I’d have to fork it, figure out hosting, set up a local dev environment or Heroku or whatever, install the dependencies, etc. I think tasks like installing node.js dependencies used to be interesting, like “cool i am learning something new” and now I just find them tedious.
|
||||
|
||||
So I love being able to just click “remix this!” and have my version on the internet instantly.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/
|
||||
|
||||
作者:[Julia Evans ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://jvns.ca/
|
||||
[1]:http://julias-space-invaders.glitch.me/
|
||||
[2]:https://turn-off-retweets.glitch.me/
|
||||
[3]:https://glitch.com/
|
||||
[4]:https://maryrosecook.com/
|
||||
[5]:https://space-invaders.glitch.me/
|
||||
[6]:https://glitch.com/handy-bots
|
||||
[7]:https://glitch.com/games
|
||||
[8]:https://tweetstorms.glitch.me/
|
||||
[9]:https://twitter.com/sarahmei
|
||||
[10]:https://tweetstorms.glitch.me/sarahmei
|
||||
[11]:https://turn-off-retweets.glitch.me/
|
@ -1,61 +0,0 @@
|
||||
【翻译中 @haoqixu】Sysadmin 101: Patch Management
|
||||
============================================================
|
||||
|
||||
* [HOW-TOs][1]
|
||||
|
||||
* [Servers][2]
|
||||
|
||||
* [SysAdmin][3]
|
||||
|
||||
|
||||
A few articles ago, I started a Sysadmin 101 series to pass down some fundamental knowledge about systems administration that the current generation of junior sysadmins, DevOps engineers or "full stack" developers might not learn otherwise. I had thought that I was done with the series, but then the WannaCry malware came out and exposed some of the poor patch management practices still in place in Windows networks. I imagine some readers that are still stuck in the Linux versus Windows wars of the 2000s might have even smiled with a sense of superiority when they heard about this outbreak.
|
||||
|
||||
The reason I decided to revive my Sysadmin 101 series so soon is I realized that most Linux system administrators are no different from Windows sysadmins when it comes to patch management. Honestly, in some areas (in particular, uptime pride), some Linux sysadmins are even worse than Windows sysadmins regarding patch management. So in this article, I cover some of the fundamentals of patch management under Linux, including what a good patch management system looks like, the tools you will want to put in place and how the overall patching process should work.
|
||||
|
||||
### What Is Patch Management?
|
||||
|
||||
When I say patch management, I'm referring to the systems you have in place to update software already on a server. I'm not just talking about keeping up with the latest-and-greatest bleeding-edge version of a piece of software. Even more conservative distributions like Debian that stick with a particular version of software for its "stable" release still release frequent updates that patch bugs or security holes.
|
||||
|
||||
Of course, if your organization decided to roll its own version of a particular piece of software, either because developers demanded the latest and greatest, you needed to fork the software to apply a custom change, or you just like giving yourself extra work, you now have a problem. Ideally you have put in a system that automatically packages up the custom version of the software for you in the same continuous integration system you use to build and package any other software, but many sysadmins still rely on the outdated method of packaging the software on their local machine based on (hopefully up to date) documentation on their wiki. In either case, you will need to confirm that your particular version has the security flaw, and if so, make sure that the new patch applies cleanly to your custom version.
|
||||
|
||||
### What Good Patch Management Looks Like
|
||||
|
||||
Patch management starts with knowing that there is a software update to begin with. First, for your core software, you should be subscribed to your Linux distribution's security mailing list, so you're notified immediately when there are security patches. If you use any software that doesn't come from your distribution, you must find out how to be kept up to date on security patches for that software as well. When new security notifications come in, you should review the details so you understand how severe the security flaw is, whether you are affected, and how urgent the patch is.
|
||||
|
||||
Some organizations have a purely manual patch management system. With such a system, when a security patch comes along, the sysadmin figures out which servers are running the software, generally by relying on memory and by logging in to servers and checking. Then the sysadmin uses the server's built-in package management tool to update the software with the latest from the distribution. Then the sysadmin moves on to the next server, and the next, until all of the servers are patched.
|
||||
|
||||
There are many problems with manual patch management. First is the fact that it makes patching a laborious chore. The more work patching is, the more likely a sysadmin will put it off or skip doing it entirely. The second problem is that manual patch management relies too much on the sysadmin's ability to remember and recall all of the servers he or she is responsible for and keep track of which are patched and which aren't. This makes it easy for servers to be forgotten and sit unpatched.
|
||||
|
||||
The faster and easier patch management is, the more likely you are to do it. You should have a system in place that quickly can tell you which servers are running a particular piece of software at which version. Ideally, that system also can push out updates. Personally, I prefer orchestration tools like MCollective for this task, but Red Hat provides Satellite, and Canonical provides Landscape as central tools that let you view software versions across your fleet of servers and apply patches all from a central place.
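If you are not ready for a full orchestration tool yet, even a small script can approximate that central view. The following is a minimal sketch, not the author's setup: it assumes Debian-style hosts, passwordless SSH, and a hypothetical `inventory.txt` file with one hostname per line.

```
#!/bin/sh
# Report which hosts in inventory.txt run a given package, and at what version.
PKG="${1:-openssl}"
while read -r host; do
    printf '%s: ' "$host"
    ssh -n "$host" "dpkg-query -W -f='\${Version}\n' $PKG" 2>/dev/null \
        || echo "not installed"
done < inventory.txt
```

Tools like Satellite or Landscape give you the same information, plus the ability to push the patch, without the shell glue.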
|
||||
|
||||
Patching should be fault-tolerant as well. You should be able to patch a service and restart it without any overall down time. The same idea goes for kernel patches that require a reboot. My approach is to divide my servers into different high availability groups so that lb1, app1, rabbitmq1 and db1 would all be in one group, and lb2, app2, rabbitmq2 and db2 are in another. Then, I know I can patch one group at a time without it causing downtime anywhere else.
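A sketch of that group-at-a-time idea, using the host names from the paragraph above; the package commands and the fixed wait are illustrative assumptions, not a recommended production script:

```
#!/bin/sh
# Patch one high-availability group at a time, so its peers keep serving traffic.
GROUP_A="lb1 app1 rabbitmq1 db1"
GROUP_B="lb2 app2 rabbitmq2 db2"

for group in "$GROUP_A" "$GROUP_B"; do
    for host in $group; do
        echo "Patching $host"
        ssh -n "$host" 'sudo apt-get update -q && sudo apt-get -y upgrade' \
            || { echo "Patching $host failed, stopping"; exit 1; }
        # Restart affected services or reboot here as required.
    done
    # Give the group time to come back and for clusters to sync before moving on.
    sleep 600
done
```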
|
||||
|
||||
So, how fast is fast? Your system should be able to roll out a patch to a minor piece of software that doesn't have an accompanying service (such as bash in the case of the ShellShock vulnerability) within a few minutes to an hour at most. For something like OpenSSL that requires you to restart services, the careful process of patching and restarting services in a fault-tolerant way probably will take more time, but this is where orchestration tools come in handy. I gave examples of how to use MCollective to accomplish this in my recent MCollective articles (see the December 2016 and January 2017 issues), but ideally, you should put a system in place that makes it easy to patch and restart services in a fault-tolerant and automated way.
|
||||
|
||||
When patching requires a reboot, such as in the case of kernel patches, it might take a bit more time, but again, automation and orchestration tools can make this go much faster than you might imagine. I can patch and reboot the servers in an environment in a fault-tolerant way within an hour or two, and it would be much faster than that if I didn't need to wait for clusters to sync back up in between reboots.
|
||||
|
||||
Unfortunately, many sysadmins still hold on to the outdated notion that uptime is a badge of pride—given that serious kernel patches tend to come out at least once a year if not more often, to me, it's proof you don't take security seriously.
|
||||
|
||||
Many organizations also still have that single point of failure server that can never go down, and as a result, it never gets patched or rebooted. If you want to be secure, you need to remove these outdated liabilities and create systems that at least can be rebooted during a late-night maintenance window.
|
||||
|
||||
Ultimately, fast and easy patch management is a sign of a mature and professional sysadmin team. Updating software is something all sysadmins have to do as part of their jobs, and investing time into systems that make that process easy and fast pays dividends far beyond security. For one, it helps identify bad architecture decisions that cause single points of failure. For another, it helps identify stagnant, out-of-date legacy systems in an environment and provides you with an incentive to replace them. Finally, when patching is managed well, it frees up sysadmins' time and turns their attention to the things that truly require their expertise.
|
||||
|
||||
______________________
|
||||
|
||||
Kyle Rankin is senior security and infrastructure architect, the author of many books including Linux Hardening in Hostile Networks, DevOps Troubleshooting and The Official Ubuntu Server Book, and a columnist for Linux Journal. Follow him @kylerankin
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxjournal.com/content/sysadmin-101-patch-management
|
||||
|
||||
作者:[Kyle Rankin ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linuxjournal.com/users/kyle-rankin
|
||||
[1]:https://www.linuxjournal.com/tag/how-tos
|
||||
[2]:https://www.linuxjournal.com/tag/servers
|
||||
[3]:https://www.linuxjournal.com/tag/sysadmin
|
||||
[4]:https://www.linuxjournal.com/users/kyle-rankin
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
Security Jobs Are Hot: Get Trained and Get Noticed
|
||||
============================================================
|
||||
|
||||
|
264
sources/tech/20171119 10 Best LaTeX Editors For Linux.md
Normal file
264
sources/tech/20171119 10 Best LaTeX Editors For Linux.md
Normal file
@ -0,0 +1,264 @@
|
||||
10 Best LaTeX Editors For Linux
|
||||
======
|
||||
**Brief: Once you get over the learning curve, there is nothing like LaTeX.
|
||||
Here are the best LaTeX editors for Linux and other systems.**
|
||||
|
||||
## What is LaTeX?
|
||||
|
||||
[LaTeX][1] is a document preparation system. Unlike a plain text editor, you
|
||||
can't just write plain text using LaTeX editors. Here, you will have to
|
||||
utilize LaTeX commands in order to manage the content of the document.
|
||||
|
||||
![LaTeX Sample][3]
|
||||
|
||||
LaTeX editors are generally used to publish scientific research documents or
|
||||
books for academic purposes. Most importantly, LaTeX editors come in handy while
|
||||
dealing with a document containing complex mathematical notation. Surely,
|
||||
LaTeX editors are fun to use. But they are not that useful unless you have specific
|
||||
needs for a document.
|
||||
|
||||
## Why should you use LaTex?
|
||||
|
||||
Well, just like I previously mentioned, LaTeX editors are meant for specific
|
||||
purposes. You do not need to be a geek head in order to figure out the way to
|
||||
use LaTeX editors, but they are not a productive solution for users whose needs are met by
|
||||
basic text editors.
|
||||
|
||||
If you are looking to craft a document but you are not interested in spending
|
||||
time formatting the text, then LaTeX editors should be the ones you go
|
||||
for. With LaTeX editors, you just have to specify the type of document, and
|
||||
the text font and sizes will be taken care of accordingly. No wonder it is
|
||||
considered one of the [best open source tools for writers][4].
|
||||
|
||||
Do note that it isn't fully automated; you will first have to learn LaTeX
|
||||
commands to let the editor handle the text formatting with precision.
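If you have never seen LaTeX source, a minimal document looks roughly like the sketch below (an illustration, not tied to any particular editor): structure and notation are written as commands, and the compiler takes care of the layout.

```latex
\documentclass{article}

\title{A Minimal Example}
\author{A. Student}

\begin{document}
\maketitle

\section{Introduction}
Plain text is typed as-is, while structure and notation use commands,
for example an inline formula like $e^{i\pi} + 1 = 0$ or a numbered one:
\begin{equation}
  \int_0^1 x^2 \, dx = \frac{1}{3}
\end{equation}
\end{document}
```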
|
||||
|
||||
## 10 Of The Best LaTeX Editors For Linux
|
||||
|
||||
Just for information, the list is not in any specific order. Editor at number
|
||||
three is not better than the editor at number seven.
|
||||
|
||||
### 1\. LyX
|
||||
|
||||
|
||||
|
||||
![][5]
|
||||
|
||||
LyX is an open source LaTeX editor. In other words, it is one of the best
|
||||
document processors available on the web. LyX helps you focus on the structure
|
||||
of the write-up, just as every LaTeX editor should, and lets you forget about
|
||||
the word formatting. LyX manages all of that based on the type of
|
||||
document specified. You get to control a lot of stuff while you have it
|
||||
installed. The margins, headers/footers, spacing/indents, tables, and so on.
|
||||
|
||||
If you are into crafting scientific documents, research thesis, or similar,
|
||||
you will be delighted by LyX's formula editor, which should be a
|
||||
charm to use. LyX also includes a set of tutorials to get started without much
|
||||
of a hassle.
|
||||
|
||||
[LyX][6]
|
||||
|
||||
### 2\. Texmaker
|
||||
|
||||
|
||||
|
||||
![][7]
|
||||
|
||||
Texmaker is considered to be one of the best LaTeX editors for the GNOME desktop
|
||||
environment. It presents a great user interface which results in a good user
|
||||
experience. It is also regarded as one of the most useful LaTeX editors
|
||||
out there. If you perform PDF conversions often, you will find Texmaker to be
|
||||
relatively faster than other LaTeX editors. You can take a look at a preview
|
||||
of what the final document will look like while you write. Also, frequently used
|
||||
symbols are easy to reach when needed.
|
||||
|
||||
Texmaker also offers extensive support for hotkey configuration. Why not
|
||||
give it a try?
|
||||
|
||||
[Texmaker][8]
|
||||
|
||||
### 3\. TeXstudio
|
||||
|
||||
|
||||
|
||||
![][9]
|
||||
|
||||
If you want a LaTeX editor which offers you a decent level of customizability
|
||||
along with an easy-to-use interface, then TeXstudio would be the perfect one
|
||||
to have installed. The UI is surely very simple but not clumsy. TeXstudio offers
|
||||
syntax highlighting, comes with an integrated viewer, lets you check the
|
||||
references and also bundles some other assistant tools.
|
||||
|
||||
It also supports some cool features like auto-completion, link overlay,
|
||||
bookmarks, multi-cursors, and so on - which makes writing a LaTeX document
|
||||
easier than ever before.
|
||||
|
||||
TeXstudio is actively maintained, which makes it a compelling choice for both
|
||||
novice users and advanced writers.
|
||||
|
||||
[TeXstudio][10]
|
||||
|
||||
### 4\. Gummi
|
||||
|
||||
|
||||
|
||||
![][11]
|
||||
|
||||
Gummi is a very simple LaTeX editor based on the GTK+ toolkit. Well, you may
|
||||
not find a lot of fancy options here but if you are just starting out - Gummi
|
||||
will be our recommendation. It supports exporting documents to PDF format,
|
||||
lets you highlight syntax, and helps you with some basic error checking
|
||||
functionality. Though Gummi isn't actively maintained on GitHub, it works
|
||||
just fine.
|
||||
|
||||
[Gummi][12]
|
||||
|
||||
### 5\. TeXpen
|
||||
|
||||
|
||||
|
||||
![][13]
|
||||
|
||||
TeXpen is yet another simplified tool to go with. You get the auto-completion
|
||||
functionality with this LaTeX editor. However, you may not find the user
|
||||
interface impressive. If you do not mind the UI, but want a super easy LaTeX
|
||||
editor, TeXpen could fulfill that wish for you. Also, TeXpen lets you
|
||||
correct/improve the English grammar and expressions used in the document.
|
||||
|
||||
[TeXpen][14]
|
||||
|
||||
### 6\. ShareLaTeX
|
||||
|
||||
|
||||
|
||||
![][15]
|
||||
|
||||
ShareLaTeX is an online LaTeX editor. If you want someone (or a group of
|
||||
people) to collaborate on documents you are working on, this is what you need.
|
||||
|
||||
It offers a free plan along with several paid packages. Even the students of
|
||||
Harvard University & Oxford University utilize this for their projects. With
|
||||
the free plan, you get the ability to add one collaborator.
|
||||
|
||||
The paid packages let you sync the documents on GitHub and Dropbox along with
|
||||
the ability to record the full document history. You can choose to have
|
||||
multiple collaborators as per your plan. For students, there's a separate
|
||||
pricing plan available.
|
||||
|
||||
[ShareLaTeX][16]
|
||||
|
||||
### 7\. Overleaf
|
||||
|
||||
|
||||
|
||||
![][17]
|
||||
|
||||
Overleaf is yet another online LaTeX editor. Similar to ShareLaTeX, it offers
|
||||
separate pricing plans for professionals and students. It also includes a free
|
||||
plan where you can sync with GitHub, check your revision history, and add
|
||||
multiple collaborators.
|
||||
|
||||
There's a limit on the number of files you can create per project - so it
|
||||
could bother you if you are a professional working with LaTeX documents most of
|
||||
the time.
|
||||
|
||||
[Overleaf][18]
|
||||
|
||||
### 8\. Authorea
|
||||
|
||||
|
||||
|
||||
![][19]
|
||||
|
||||
Authorea is a wonderful online LaTeX editor. However, it is not the best out
|
||||
there - when considering the pricing plans. For free, it offers just 100 MB of
|
||||
data upload limit and 1 private document at a time. The paid plans offer you
|
||||
more perks, but it may not be the cheapest of the lot. The only reason you
|
||||
should choose Authorea is the user interface. If you love to work with a tool
|
||||
offering an impressive user interface, there's no looking back.
|
||||
|
||||
[Authorea][20]
|
||||
|
||||
### 9\. Papeeria
|
||||
|
||||
|
||||
|
||||
![][21]
|
||||
|
||||
Papeeria is the cheapest LaTeX editor you can find on the Internet -
|
||||
considering it is as reliable as the others. You do not get private projects
|
||||
if you want to utilize it for free. But, if you prefer public projects it lets
|
||||
you work on an unlimited number of projects with numerous collaborators. It
|
||||
features a pretty simple plot builder and includes Git sync for no additional
|
||||
cost. If you opt for the paid plan, it will give you the ability to
|
||||
work on 10 private projects.
|
||||
|
||||
[Papeeria][22]
|
||||
|
||||
### 10\. Kile
|
||||
|
||||
|
||||
|
||||
![Kile LaTeX editor][23]
|
||||
|
||||
The last entry in our list of the best LaTeX editors is Kile. Some people swear by
|
||||
Kile. Primarily because of the features it provides.
|
||||
|
||||
Kile is more than just an editor. It is an IDE tool like Eclipse that provides
|
||||
a complete environment to work on documents and projects. Apart from quick
|
||||
compilation and preview, you get features like auto-completion of commands,
|
||||
citation insertion, organizing the document into chapters, etc. You really have to use
|
||||
Kile to realize its true potential.
|
||||
|
||||
Kile is available for Linux and Windows.
|
||||
|
||||
[Kile][24]
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
So, there go our recommendations for the LaTeX editors you should utilize on
|
||||
Ubuntu/Linux.
|
||||
|
||||
There are chances that we might have missed some interesting LaTeX editors
|
||||
available for Linux. If you happen to know about any, let us know down in the
|
||||
comments below.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/latex-editors-linux/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
译者:[翻译者ID](https://github.com/翻译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/ankush/
|
||||
[1]:https://www.latex-project.org/
|
||||
|
||||
[3]:https://itsfoss.com/wp-content/uploads/2017/11/latex-sample-example.jpeg
|
||||
[4]:https://itsfoss.com/open-source-tools-writers/
|
||||
[5]:https://itsfoss.com/wp-content/uploads/2017/10/lyx_latex_editor.jpg
|
||||
[6]:https://www.lyx.org/
|
||||
[7]:https://itsfoss.com/wp-content/uploads/2017/10/texmaker_latex_editor.jpg
|
||||
[8]:http://www.xm1math.net/texmaker/
|
||||
[9]:https://itsfoss.com/wp-content/uploads/2017/10/tex_studio_latex_editor.jpg
|
||||
[10]:https://www.texstudio.org/
|
||||
[11]:https://itsfoss.com/wp-content/uploads/2017/10/gummi_latex_editor.jpg
|
||||
[12]:https://github.com/alexandervdm/gummi
|
||||
[13]:https://itsfoss.com/wp-content/uploads/2017/10/texpen_latex_editor.jpg
|
||||
[14]:https://sourceforge.net/projects/texpen/
|
||||
[15]:https://itsfoss.com/wp-content/uploads/2017/10/sharelatex.jpg
|
||||
[16]:https://www.sharelatex.com/
|
||||
[17]:https://itsfoss.com/wp-content/uploads/2017/10/overleaf.jpg
|
||||
[18]:https://www.overleaf.com/
|
||||
[19]:https://itsfoss.com/wp-content/uploads/2017/10/authorea.jpg
|
||||
[20]:https://www.authorea.com/
|
||||
[21]:https://itsfoss.com/wp-content/uploads/2017/10/papeeria_latex_editor.jpg
|
||||
[22]:https://www.papeeria.com/
|
||||
[23]:https://itsfoss.com/wp-content/uploads/2017/11/kile-latex-800x621.png
|
||||
[24]:https://kile.sourceforge.io/
|
@ -1,3 +1,5 @@
|
||||
translating by aiwhj
|
||||
|
||||
Adopting Kubernetes step by step
|
||||
============================================================
|
||||
|
||||
|
@ -1,75 +0,0 @@
|
||||
translating by zrszrszrs
|
||||
# [Mark McIntyre: How Do You Fedora?][1]
|
||||
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg)
|
||||
|
||||
We recently interviewed Mark McIntyre on how he uses Fedora. This is [part of a series][2] on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the [feedback form][3] to express your interest in becoming an interviewee.
|
||||
|
||||
### Who is Mark McIntyre?
|
||||
|
||||
Mark McIntyre is a geek by birth and Linux by choice. “I started coding at the early age of 13 learning BASIC on my own and finding the excitement of programming which led me down a path of becoming a professional coder,” he says. McIntyre and his niece are big fans of pizza. “My niece and I started a quest last fall to try as many of the pizza joints in Knoxville. You can read about our progress at [https://knox-pizza-quest.blogspot.com/][4]” Mark is also an amateur photographer and [publishes his images][5] on Flickr.
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/11/31456893222_553b3cac4d_k-1024x575.jpg)
|
||||
|
||||
Mark has a diverse background as a developer. He has worked with Visual Basic for Applications, LotusScript, Oracle’s PL/SQL, Tcl/Tk and Python with Django as the framework. His strongest skill is Python which he uses in his current job as a systems engineer. “I am using Python on a regular basis. As my job is morphing into more of an automation engineer, that became more frequent.”
|
||||
|
||||
McIntyre is a self-described nerd and loves sci-fi movies, but his favorite movie falls out of that genre. “As much as I am a nerd and love the Star Trek and Star Wars and related movies, the movie Glory is probably my favorite of all time.” He also mentioned that Serenity was a fantastic follow-up to a great TV series.
|
||||
|
||||
Mark values humility, knowledge and graciousness in others. He appreciates people who act based on understanding the situation that other people are in. “If you add a decision to serve another, you have the basis for someone you’d want to be around instead of someone who you have to tolerate.”
|
||||
|
||||
McIntyre works for [Scripps Networks Interactive][6], which is the parent company for HGTV, Food Network, Travel Channel, DIY, GAC, and several other cable channels. “Currently, I function as a systems engineer for the non-linear video content, which is all the media purposed for online consumption.” He supports a few development teams who write applications to publish the linear video from cable TV into the online formats such as Amazon and Hulu. The systems include both on-premise and cloud systems. Mark also develops automation tools for deploying these applications primarily to a cloud infrastructure.
|
||||
|
||||
### The Fedora community
|
||||
|
||||
Mark describes the Fedora community as an active community filled with people who enjoy life as Fedora users. “From designers to packagers, this group is still very active and feels alive.” McIntyre continues, “That gives me a sense of confidence in the operating system.”
|
||||
|
||||
He started frequenting the #fedora channel on IRC around 2002: “Back then, Wi-Fi functionality was still done a lot by hand in starting the adapter and configuring the modules.” In order to get his Wi-Fi working he had to recompile the Fedora kernel. Shortly after, he started helping others in the #fedora channel.
|
||||
|
||||
McIntyre encourages others to get involved in the Fedora Community. “There are many different areas of opportunity in which to be involved. Front-end design, testing deployments, development, packaging of applications, and new technology implementation.” He recommends picking an area of interest and asking questions of that group. “There are many opportunities available to jump in to contribute.”
|
||||
|
||||
He credits a fellow community member with helping him get started: “Ben Williams was very helpful in my first encounters with Fedora, helping me with some of my first installation rough patches in the #fedora support channel.” Ben also encouraged Mark to become an [Ambassador][7].
|
||||
|
||||
### What hardware and software?
|
||||
|
||||
McIntyre uses Fedora Linux on all his laptops and desktops. On servers he chooses CentOS, due to the longer support lifecycle. His current desktop is self-built and equipped with an Intel Core i5 processor, 32 GB of RAM and 2 TB of disk space. “I have a 4K monitor attached which gives me plenty of room for viewing all my applications at once.” His current work laptop is a Dell Inspiron 2-in-1 13-inch laptop with 16 GB RAM and a 525 GB m.2 SSD.
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/11/Screenshot-from-2017-10-26-08-51-41-1024x640.png)
|
||||
|
||||
Mark currently runs Fedora 26 on any box he set up in the past few months. When it comes to new versions, he likes to avoid the rush when the version is officially released. “I usually try to get the latest version as soon as it goes gold, with the exception of one of my workstations running the next version’s beta when it is closer to release.” He usually upgrades in place: “The in-place upgrade using _dnf system-upgrade_ works very well these days.”
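For reference, the documented _dnf system-upgrade_ flow looks roughly like this (the target release number is just an example, and the plugin may already be installed on current Fedora):

```
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=27
sudo dnf system-upgrade reboot
```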
|
||||
|
||||
To handle his photography, McIntyre uses [GIMP][8] and [Darktable][9], along with a few other photo viewing and quick editing packages. When not using web-based email, he uses [Geary][10] along with [GNOME Calendar][11]. Mark’s IRC client of choice is [HexChat][12] connecting to a [ZNC bouncer][13] running on a Fedora Server instance. His department’s communication is handled via Slack.
|
||||
|
||||
“I have never really been a big IDE fan, so I spend time in [vim][14] for most of my editing.” Occasionally, he opens up a simple text editor like [gedit][15] or [xed][16]. Mark uses [GPaste][17] for copying and pasting. “I have become a big fan of [Tilix][18] for my terminal choice.” McIntyre manages the podcasts he likes with [Rhythmbox][19], and uses [Epiphany][20] for quick web lookups.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/mark-mcintyre-fedora/
|
||||
|
||||
作者:[Charles Profitt][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/cprofitt/
|
||||
[1]:https://fedoramagazine.org/mark-mcintyre-fedora/
|
||||
[2]:https://fedoramagazine.org/tag/how-do-you-fedora/
|
||||
[3]:https://fedoramagazine.org/submit-an-idea-or-tip/
|
||||
[4]:https://knox-pizza-quest.blogspot.com/
|
||||
[5]:https://www.flickr.com/photos/mockgeek/
|
||||
[6]:http://www.scrippsnetworksinteractive.com/
|
||||
[7]:https://fedoraproject.org/wiki/Ambassadors
|
||||
[8]:https://www.gimp.org/
|
||||
[9]:http://www.darktable.org/
|
||||
[10]:https://wiki.gnome.org/Apps/Geary
|
||||
[11]:https://wiki.gnome.org/Apps/Calendar
|
||||
[12]:https://hexchat.github.io/
|
||||
[13]:https://wiki.znc.in/ZNC
|
||||
[14]:http://www.vim.org/
|
||||
[15]:https://wiki.gnome.org/Apps/Gedit
|
||||
[16]:https://github.com/linuxmint/xed
|
||||
[17]:https://github.com/Keruspe/GPaste
|
||||
[18]:https://fedoramagazine.org/try-tilix-new-terminal-emulator-fedora/
|
||||
[19]:https://wiki.gnome.org/Apps/Rhythmbox
|
||||
[20]:https://wiki.gnome.org/Apps/Web
|
@ -0,0 +1,77 @@
|
||||
Useful GNOME Shell Keyboard Shortcuts You Might Not Know About
|
||||
======
|
||||
As Ubuntu has moved to GNOME Shell in its 17.10 release, many users may be interested in discovering some of the most useful shortcuts in GNOME, as well as how to create their own shortcuts. This article will explain both.
|
||||
|
||||
If you expect GNOME to ship with hundreds or thousands of shell shortcuts, you will be disappointed to learn this isn't the case. The list of shortcuts isn't miles long, and not all of them will be useful to you, but there are still many keyboard shortcuts you can take advantage of.
|
||||
|
||||
![gnome-shortcuts-01-settings][1]
|
||||
|
||||
|
||||
|
||||
To access the list of shortcuts, go to "Settings -> Devices -> Keyboard." Here are some less popular, yet useful shortcuts.
|
||||
|
||||
* Ctrl + Alt + T - this combination launches the terminal; you can use this from anywhere within GNOME
|
||||
|
||||
|
||||
|
||||
Two shortcuts I personally use quite frequently are:
|
||||
|
||||
* Alt + F4 - close the window on focus
|
||||
* Alt + F8 - resize the window
|
||||
|
||||
|
||||
Most of you know how to switch between open applications (Alt + Tab), but you may not know you can use Alt + Shift + Tab to cycle through applications in reverse direction.
|
||||
|
||||
Another useful combination for switching within the windows of an application is Alt + (key above Tab) (example: Alt + ` on a US keyboard).
|
||||
|
||||
If you want to show the Activities overview, use Alt + F1.
|
||||
|
||||
There are quite a lot of shortcuts related to workspaces. If you are like me and don't use multiple workspaces frequently, these shortcuts are useless to you. Still, some of the ones worth noting are the following:
|
||||
|
||||
* Super + PageUp (or PageDown) moves to the workspace above or below
|
||||
* Ctrl + Alt + Left (or Right) moves to the workspace on the left/right
|
||||
|
||||
If you add Shift to these commands, e.g. Shift + Ctrl + Alt + Left, you move the window one workspace above, below, to the left, or to the right.
|
||||
|
||||
Another favorite keyboard shortcut of mine is in the Accessibility section - Increase/Decrease Text Size. You can use Ctrl + + (and Ctrl + -) to zoom text size quickly. In some cases, this may be disabled by default, so do check it out before you try it.
|
||||
|
||||
The above-mentioned shortcuts are lesser known, yet useful keyboard shortcuts. If you are curious to see what else is available, you can check [the official GNOME shell cheat sheet][2].
|
||||
|
||||
If the default shortcuts are not to your liking, you can change them or create new ones. You do this from the same "Settings -> Devices -> Keyboard" dialog. Just select the entry you want to change, and the following dialog will pop up.
|
||||
|
||||
![gnome-shortcuts-02-change-shortcut][3]
|
||||
|
||||
|
||||
|
||||
Enter the keyboard combination you want.
|
||||
|
||||
![gnome-shortcuts-03-set-shortcut][4]
|
||||
|
||||
|
||||
|
||||
If it is already in use you will get a message. If not, just click Set, and you are done.
|
||||
|
||||
If you want to add new shortcuts rather than change existing ones, scroll down until you see the "Plus" sign, click it, and in the dialog that appears, enter the name and keys of your new keyboard shortcut.
|
||||
|
||||
![gnome-shortcuts-04-add-custom-shortcut][5]
|
||||
|
||||
|
||||
|
||||
GNOME doesn't come with tons of shell shortcuts by default, and the above listed ones are some of the more useful ones. If these shortcuts are not enough for you, you can always create your own. Let us know if this is helpful to you.
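As a closing aside for command-line fans: GNOME stores custom shortcuts under a gsettings/dconf path, so the same thing can be scripted. Here is a sketch; the name, command, and binding are made-up examples, and the schema path is worth double-checking on your GNOME version.

```
KEY=/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/

# Note: this overwrites any existing list of custom shortcuts.
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings "['$KEY']"

gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEY name 'Open browser'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEY command 'firefox'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEY binding '<Super>b'
```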
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/gnome-shell-keyboard-shortcuts/
|
||||
|
||||
作者:[Ada Ivanova][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com/author/adaivanoff/
|
||||
[1]:https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-01-settings.jpg (gnome-shortcuts-01-settings)
|
||||
[2]:https://wiki.gnome.org/Projects/GnomeShell/CheatSheet
|
||||
[3]:https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-02-change-shortcut.png (gnome-shortcuts-02-change-shortcut)
|
||||
[4]:https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-03-set-shortcut.png (gnome-shortcuts-03-set-shortcut)
|
||||
[5]:https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-04-add-custom-shortcut.png (gnome-shortcuts-04-add-custom-shortcut)
|
@ -1,3 +1,5 @@
|
||||
**translating by [erlinux](https://github.com/erlinux)**
|
||||
|
||||
Why microservices are a security issue
|
||||
============================================================
|
||||
|
||||
|
@ -1,78 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
AWS to Help Build ONNX Open Source AI Platform
|
||||
============================================================
|
||||
![onnx-open-source-ai-platform](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-onnx-1.jpg)
|
||||
|
||||
|
||||
Amazon Web Services has become the latest tech firm to join the deep learning community's collaboration on the Open Neural Network Exchange, recently launched to advance artificial intelligence in a frictionless and interoperable environment. Facebook and Microsoft led the effort.
|
||||
|
||||
As part of that collaboration, AWS made its open source Python package, ONNX-MXNet, available as a deep learning framework that offers application programming interfaces across multiple languages, including Python, Scala and the open source statistics software R.
|
||||
|
||||
The ONNX format will help developers build and train models for other frameworks, including PyTorch, Microsoft Cognitive Toolkit or Caffe2, AWS Deep Learning Engineering Manager Hagay Lupesko and Software Developer Roshani Nagmote wrote in an online post last week. It will let developers import those models into MXNet, and run them for inference.
|
||||
|
||||
### Help for Developers
|
||||
|
||||
Facebook and Microsoft this summer launched ONNX to support a shared model of interoperability for the advancement of AI. Microsoft committed its Cognitive Toolkit, Caffe2 and PyTorch to support ONNX.
|
||||
|
||||
Cognitive Toolkit and other frameworks make it easier for developers to construct and run computational graphs that represent neural networks, Microsoft said.
|
||||
|
||||
Initial versions of [ONNX code and documentation][4] were made available on Github.
|
||||
|
||||
AWS and Microsoft last month announced plans for Gluon, a new interface in Apache MXNet that allows developers to build and train deep learning models.
|
||||
|
||||
Gluon "is an extension of their partnership where they are trying to compete with Google's Tensorflow," observed Aditya Kaul, research director at [Tractica][5].
|
||||
|
||||
"Google's omission from this is quite telling but also speaks to their dominance in the market," he told LinuxInsider.
|
||||
|
||||
"Even Tensorflow is open source, and so open source is not the big catch here -- but the rest of the ecosystem teaming up to compete with Google is what this boils down to," Kaul said.
|
||||
|
||||
The Apache MXNet community earlier this month introduced version 0.12 of MXNet, which extends Gluon functionality to allow for new, cutting-edge research, according to AWS. Among its new features are variational dropout, which allows developers to apply the dropout technique for mitigating overfitting to recurrent neural networks.
|
||||
|
||||
Convolutional RNN, Long Short-Term Memory and gated recurrent unit cells allow datasets to be modeled using time-based sequence and spatial dimensions, AWS noted.
|
||||
|
||||
### Framework-Neutral Method
|
||||
|
||||
"This looks like a great way to deliver inference regardless of which framework generated a model," said Paul Teich, principal analyst at [Tirias Research][6].
|
||||
|
||||
"This is basically a framework-neutral way to deliver inference," he told LinuxInsider.
|
||||
|
||||
Cloud providers like AWS, Microsoft and others are under pressure from customers to be able to train on one network while delivering on another, in order to advance AI, Teich pointed out.
|
||||
|
||||
"I see this as kind of a baseline way for these vendors to check the interoperability box," he remarked.
|
||||
|
||||
"Framework interoperability is a good thing, and this will only help developers in making sure that models that they build on MXNet or Caffe or CNTK are interoperable," Tractica's Kaul pointed out.
|
||||
|
||||
As to how this interoperability might apply in the real world, Teich noted that technologies such as natural language translation or speech recognition would require that Alexa's voice recognition technology be packaged and delivered to another developer's embedded environment.
|
||||
|
||||
### Thanks, Open Source
|
||||
|
||||
"Despite their competitive differences, these companies all recognize they owe a significant amount of their success to the software development advancements generated by the open source movement," said Jeff Kaplan, managing director of [ThinkStrategies][7].
|
||||
|
||||
"The Open Neural Network Exchange is committed to producing similar benefits and innovations in AI," he told LinuxInsider.
|
||||
|
||||
A growing number of major technology companies have announced plans to use open source to speed the development of AI collaboration, in order to create more uniform platforms for development and research.
|
||||
|
||||
AT&T just a few weeks ago announced plans [to launch the Acumos Project][8] with TechMahindra and The Linux Foundation. The platform is designed to open up efforts for collaboration in telecommunications, media and technology.
|
||||
![](https://www.ectnews.com/images/end-enn.gif)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
|
||||
|
||||
作者:[ David Jones ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#searchbyline
|
||||
[1]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html#
|
||||
[2]:https://www.linuxinsider.com/perl/mailit/?id=84971
|
||||
[3]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
|
||||
[4]:https://github.com/onnx/onnx
|
||||
[5]:https://www.tractica.com/
|
||||
[6]:http://www.tiriasresearch.com/
|
||||
[7]:http://www.thinkstrategies.com/
|
||||
[8]:https://www.linuxinsider.com/story/84926.html
|
||||
[9]:https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
|
@ -0,0 +1,54 @@
|
||||
translating by liuxinyu123
|
||||
|
||||
Long-term Linux support future clarified
|
||||
============================================================
|
||||
|
||||
Long-term support Linux version 4.4 will get six years of life, but that doesn't mean other LTS editions will last so long.
|
||||
|
||||
[video](http://www.zdnet.com/video/video-torvalds-surprised-by-resilience-of-2-6-kernel-1/)
|
||||
|
||||
_Video: Torvalds surprised by resilience of 2.6 kernel_
|
||||
|
||||
In October 2017, the [Linux kernel team agreed to extend the next version of Linux's Long Term Support (LTS) from two years to six years][5], [Linux 4.14][6]. This helps [Android][7], embedded Linux, and Linux Internet of Things (IoT) developers. But this move did not mean all future Linux LTS versions will have a six-year lifespan.
|
||||
|
||||
As Konstantin Ryabitsev, [The Linux Foundation][8]'s director of IT infrastructure security, explained in a Google+ post, "Despite what various news sites out there may have told you, [kernel 4.14 LTS is not planned to be supported for 6 years][9]. Just because Greg Kroah-Hartman is doing it for 4.4 does not mean that all LTS kernels from now on are going to be maintained for that long."
|
||||
|
||||
So, in short, 4.14 will be supported until January 2020, while the 4.4 Linux kernel, which arrived on Jan. 20, 2016, will be supported until 2022. Therefore, if you're working on a Linux distribution that's meant for the longest possible run, you want to base it on [Linux 4.4][10].
|
||||
|
||||
[Linux LTS versions][11] incorporate back-ported bug fixes for older kernel trees. Not all bug fixes are imported; only important bug fixes are applied to such kernels. They, especially for older trees, don't usually see very frequent releases.
|
||||
|
||||
The other Linux versions are Prepatch or release candidates (RC), Mainline, Stable, and LTS.
|
||||
|
||||
RC must be compiled from source and usually contains bug fixes and new features. These are maintained and released by Linus Torvalds. He also maintains the Mainline tree (this is where all new features are introduced). New mainline kernels are released every few months. When the mainline kernel is released for general use, it is considered "stable." Bug fixes for a stable kernel are back-ported from the mainline tree and applied by a designated stable kernel maintainer. There are usually only a few bug-fix kernel releases until the next mainline kernel becomes available.
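If you want to see where your own machine sits in this scheme, kernel.org publishes the current release lines in machine-readable form. A quick sketch, assuming `curl` and `jq` are installed and that the releases.json feed still has this shape:

```
uname -r                               # the kernel you are running now

# List each current release line (mainline, stable, longterm) and its version.
curl -s https://www.kernel.org/releases.json \
    | jq -r '.releases[] | "\(.moniker)\t\(.version)"'
```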
|
||||
|
||||
As for the latest LTS, Linux 4.14, Ryabitsev said, "It is possible that someone may pick up maintainership of 4.14 after Greg is done with it (it's happened in the past on multiple occasions), but you should emphatically not plan on that."
|
||||
|
||||
Kroah-Hartman simply added to Ryabitsev's post: ["What he said."][12]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.zdnet.com/article/long-term-linux-support-future-clarified/
|
||||
|
||||
作者:[Steven J. Vaughan-Nichols ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
|
||||
[1]:http://www.zdnet.com/article/long-term-linux-support-future-clarified/#comments-eb4f0633-955f-4fec-9e56-734c34ee2bf2
|
||||
[2]:http://www.zdnet.com/article/the-tension-between-iot-and-erp/
|
||||
[3]:http://www.zdnet.com/article/the-tension-between-iot-and-erp/
|
||||
[4]:http://www.zdnet.com/article/the-tension-between-iot-and-erp/
|
||||
[5]:http://www.zdnet.com/article/long-term-support-linux-gets-a-longer-lease-on-life/
|
||||
[6]:http://www.zdnet.com/article/the-new-long-term-linux-kernel-linux-4-14-has-arrived/
|
||||
[7]:https://www.android.com/
|
||||
[8]:https://www.linuxfoundation.org/
|
||||
[9]:https://plus.google.com/u/0/+KonstantinRyabitsev/posts/Lq97ZtL8Xw9
|
||||
[10]:http://www.zdnet.com/article/whats-new-and-nifty-in-linux-4-4/
|
||||
[11]:https://www.kernel.org/releases.html
|
||||
[12]:https://plus.google.com/u/0/+gregkroahhartman/posts/ZUcSz3Sn1Hc
|
||||
[13]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
|
||||
[14]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
|
||||
[15]:http://www.zdnet.com/blog/open-source/
|
||||
[16]:http://www.zdnet.com/topic/enterprise-software/
|
@ -1,95 +0,0 @@
|
||||
translating---aiwhj
|
||||
5 best practices for getting started with DevOps
|
||||
============================================================
|
||||
|
||||
### Are you ready to implement DevOps, but don't know where to begin? Try these five best practices.
|
||||
|
||||
|
||||
![5 best practices for getting started with DevOps](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops-gears.png?itok=rUejbLQX "5 best practices for getting started with DevOps")
|
||||
Image by :
|
||||
|
||||
[Andrew Magill][8]. Modified by Opensource.com. [CC BY 4.0][9]
|
||||
|
||||
DevOps often stymies early adopters with its ambiguity, not to mention its depth and breadth. By the time someone buys into the idea of DevOps, their first questions usually are: "How do I get started?" and "How do I measure success?" These five best practices are a great road map to starting your DevOps journey.
|
||||
|
||||
### 1\. Measure all the things
|
||||
|
||||
You don't know for sure that your efforts are even making things better unless you can quantify the outcomes. Are my features getting out to customers more rapidly? Are fewer defects escaping to them? Are we responding to and recovering more quickly from failure?
|
||||
|
||||
Before you change anything, think about what kinds of outcomes you expect from your DevOps transformation. When you're further into your DevOps journey, you'll enjoy a rich array of near-real-time reports on everything about your service. But consider starting with these two metrics:
|
||||
|
||||
* **Time to market** measures the end-to-end, often customer-facing, business experience. It usually begins when a feature is formally conceived and ends when the customer can consume the feature in production. Time to market is not mainly an engineering team metric; more importantly it shows your business' complete end-to-end efficiency in bringing valuable new features to market and isolates opportunities for system-wide improvement.
|
||||
|
||||
* **Cycle time** measures the engineering team process. Once work on a new feature starts, when does it become available in production? This metric is very useful for understanding the efficiency of the engineering team and isolating opportunities for team-level improvement.
|
||||
|
||||
### 2\. Get your process off the ground
|
||||
|
||||
DevOps success requires an organization to put a regular (and hopefully effective) process in place and relentlessly improve upon it. It doesn't have to start out being effective, but it must be a regular process. Usually it's some flavor of agile methodology like Scrum or Scrumban; sometimes it's a Lean derivative. Whichever way you go, pick a formal process, start using it, and get the basics right.
|
||||
|
||||
Regular inspect-and-adapt behaviors are key to your DevOps success. Make good use of opportunities like the stakeholder demo, team retrospectives, and daily standups to find opportunities to improve your process.
|
||||
|
||||
A lot of your DevOps success hinges on people working effectively together. People on a team need to work from a common process that they are empowered to improve upon. They also need regular opportunities to share what they are learning with other stakeholders, both upstream and downstream, in the process.
|
||||
|
||||
Good process discipline will help your organization consume the other benefits of DevOps at the great speed that comes as your success builds.
|
||||
|
||||
Although it's common for more development-oriented teams to successfully adopt processes like Scrum, operations-focused teams (or others that are more interrupt-driven) may opt for a process with a more near-term commitment horizon, such as Kanban.
|
||||
|
||||
### 3\. Visualize your end-to-end workflow
|
||||
|
||||
There is tremendous power in being able to see who's working on what part of your service at any given time. Visualizing your workflow will help people know what they need to work on next, how much work is in progress, and where the bottlenecks are in the process.
|
||||
|
||||
You can't effectively limit work in process until you can see it and quantify it. Likewise, you can't effectively eliminate bottlenecks until you can clearly see them.
|
||||
|
||||
Visualizing the entire workflow will help people in all parts of the organization understand how their work contributes to the success of the whole. It can catalyze relationship-building across organizational boundaries to help your teams collaborate more effectively towards a shared sense of success.
|
||||
|
||||
### 4\. Continuous all the things
|
||||
|
||||
DevOps promises a dizzying array of compelling automation. But Rome wasn't built in a day. One of the first areas you can focus your efforts on is [continuous integration][10] (CI). But don't stop there; you'll want to follow quickly with [continuous delivery][11] (CD) and eventually continuous deployment.
|
||||
|
||||
Your CD pipeline is your opportunity to inject all manner of automated quality testing into your process. The moment new code is committed, your CD pipeline should run a battery of tests against the code and the successfully built artifact. The artifact that comes out at the end of this gauntlet is what progresses along your process until eventually it's seen by customers in production.
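As a concrete, deliberately oversimplified picture of that gauntlet, a single pipeline stage script might look like the sketch below; the make targets and artifact path are placeholders for whatever your build tooling really provides:

```
#!/bin/sh
# Fail the pipeline at the first broken gate; only a fully tested artifact moves on.
set -e
make lint
make unit-test
make build                                       # e.g. produces build/app.tar.gz
make integration-test ARTIFACT=build/app.tar.gz
make publish ARTIFACT=build/app.tar.gz
```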
|
||||
|
||||
Another "continuous" that doesn't get enough attention is continuous improvement. That's as simple as setting some time aside each day to ask your colleagues: "What small thing can we do today to get better at how we do our work?" These small, daily changes compound over time into more profound results. You'll be pleasantly surprised! But it also gets people thinking all the time about how to improve things.
|
||||
|
||||
### 5\. Gherkinize
|
||||
|
||||
Fostering more effective communication across your organization is crucial to fostering the sort of systems thinking prevalent in successful DevOps journeys. One way to help that along is to use a shared language between the business and the engineers to express the desired acceptance criteria for new features. A good product manager can learn [Gherkin][12] in a day and begin using it to express acceptance criteria in an unambiguous, structured form of plain English. Engineers can use this Gherkinized acceptance criteria to write acceptance tests against the criteria, and then develop their feature code until the tests pass. This is a simplification of [acceptance test-driven development][13](ATDD) that can also help kick start your DevOps culture and engineering practice.
|
||||
|
||||
### Start on your journey
|
||||
|
||||
Don't be discouraged by getting started with your DevOps practice. It's a journey. And hopefully these five ideas give you solid ways to get started.
|
||||
|
||||
|
||||
### About the author
|
||||
|
||||
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot_4.jpg?itok=jntfDCfX)][14]
|
||||
|
||||
Magnus Hedemark - Magnus has been in the IT industry for over 20 years, and a technology enthusiast for most of his life. He's presently Manager of DevOps Engineering at UnitedHealth Group. In his spare time, Magnus enjoys photography and paddling canoes.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/11/5-keys-get-started-devops
|
||||
|
||||
作者:[Magnus Hedemark ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/magnus919
|
||||
[1]:https://opensource.com/tags/devops?src=devops_resource_menu1
|
||||
[2]:https://opensource.com/resources/devops?src=devops_resource_menu2
|
||||
[3]:https://www.openshift.com/promotions/devops-with-openshift.html?intcmp=7016000000127cYAAQ&src=devops_resource_menu3
|
||||
[4]:https://enterprisersproject.com/article/2017/5/9-key-phrases-devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu4
|
||||
[5]:https://www.redhat.com/en/insights/devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu5
|
||||
[6]:https://opensource.com/article/17/11/5-keys-get-started-devops?rate=oEOzMXx1ghbkfl2a5ae6AnvO88iZ3wzkk53K2CzbDWI
|
||||
[7]:https://opensource.com/user/25739/feed
|
||||
[8]:https://ccsearch.creativecommons.org/image/detail/7qRx_yrcN5isTMS0u9iKMA==
|
||||
[9]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[10]:https://martinfowler.com/articles/continuousIntegration.html
|
||||
[11]:https://martinfowler.com/bliki/ContinuousDelivery.html
|
||||
[12]:https://cucumber.io/docs/reference
|
||||
[13]:https://en.wikipedia.org/wiki/Acceptance_test%E2%80%93driven_development
|
||||
[14]:https://opensource.com/users/magnus919
|
||||
[15]:https://opensource.com/users/magnus919
|
||||
[16]:https://opensource.com/users/magnus919
|
||||
[17]:https://opensource.com/tags/devops
|
@ -1,41 +0,0 @@
|
||||
Someone Tries to Bring Back Ubuntu's Unity from the Dead as an Official Spin
|
||||
============================================================
|
||||
|
||||
|
||||
|
||||
> The Ubuntu Unity remix would be supported for nine months
|
||||
|
||||
Canonical's sudden decision of killing its Unity user interface after seven years affected many Ubuntu users, and it looks like someone now tries to bring it back from the dead as an unofficial spin.
|
||||
|
||||
Long-time [Ubuntu][1] member Dale Beaudoin [ran a poll][2] last week on the official Ubuntu forums to take the pulse of the community and see if they are interested in an Ubuntu Unity Remix that would be released alongside Ubuntu 18.04 LTS (Bionic Beaver) next year and be supported for nine months or five years.
|
||||
|
||||
Thirty people voted in the poll, with 67 percent of them opting for an LTS (Long Term Support) release of the so-called Ubuntu Unity Remix, while 33 percent voted for the 9-month supported release. It also looks like this upcoming Ubuntu Unity Spin [looks to become an official flavor][3], yet this means commitment from those developing it.
|
||||
|
||||
"A recent poll voted 2/3rds in favor of Ubuntu Unity to become an LTS distribution. We should try to work this cycle assuming that it will be LTS and an official flavor," said Dale Beaudoin. "We will try and release an updated ISO once every week or 10 days using the current 18.04 daily builds of default Ubuntu Bionic Beaver as a platform."
|
||||
|
||||
### Is Ubuntu Unity making a comeback?
|
||||
|
||||
The last Ubuntu version to ship with Unity by default was Ubuntu 17.04 (Zesty Zapus), which will reach end of life in January 2018. Ubuntu 17.10 (Artful Aardvark), the current stable release of the popular operating system, is the first to use the GNOME desktop environment by default for the main Desktop edition, as Canonical's CEO [announced][4] earlier this year that Unity would no longer be developed.
|
||||
|
||||
However, Canonical is still offering the Unity desktop environment from the official software repositories, so if someone wants to install it, it's one click away. But the bad news is that those packages will only be supported up until the release of Ubuntu 18.04 LTS (Bionic Beaver) in April 2018, so the developers of the Ubuntu Unity Remix would have to keep Unity on life support in a separate repository of their own.
|
||||
|
||||
On the other hand, we don't believe Canonical will change their mind and accept this Ubuntu Unity spin as an official flavor, since that would amount to admitting they failed to continue development of Unity while a handful of people now can. Most probably, if interest in this Ubuntu Unity Remix doesn't fade away soon, it will remain an unofficial spin supported by the nostalgic community.
|
||||
|
||||
The question is, would you be interested in an Ubuntu Unity spin, official or not?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://news.softpedia.com/news/someone-tries-to-bring-back-ubuntu-s-unity-from-the-dead-as-an-unofficial-spin-518778.shtml
|
||||
|
||||
作者:[Marius Nestor ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://news.softpedia.com/editors/browse/marius-nestor
|
||||
[1]:http://linux.softpedia.com/downloadTag/Ubuntu
|
||||
[2]:https://community.ubuntu.com/t/poll-unity-7-distro-9-month-spin-or-lts-for-18-04/2066
|
||||
[3]:https://community.ubuntu.com/t/unity-maintenance-roadmap/2223
|
||||
[4]:http://news.softpedia.com/news/canonical-to-stop-developing-unity-8-ubuntu-18-04-lts-ships-with-gnome-desktop-514604.shtml
|
||||
[5]:http://news.softpedia.com/editors/browse/marius-nestor
|
@ -0,0 +1,117 @@
|
||||
TLDR pages: Simplified Alternative To Linux Man Pages
|
||||
============================================================
|
||||
|
||||
[![](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu-640x360.jpg "tldr page ubuntu")][22]
|
||||
|
||||
Working on the terminal and using various commands to carry out important tasks is an indispensable part of the Linux desktop experience. This open-source operating system possesses an [abundance of commands][23] that makes it impossible for any user to remember all of them. To make things more complex, each command has its own set of options that extend its functionality.
|
||||
|
||||
To solve this problem, [man pages][12], short for manual pages, were created. Written in English, they contain tons of in-depth information about different commands. But sometimes, when you’re looking for just basic information on a command, they can become overwhelming. To solve this issue, [TLDR pages][13] was created.
|
||||
|
||||
_Before going ahead and learning more about it, don’t forget to check out a few more terminal tricks:_
|
||||
|
||||
* _**[Watch Star Wars in terminal ][1]**_
|
||||
|
||||
* _**[Use StackOverflow in terminal][2]**_
|
||||
|
||||
* _**[Get Weather report in terminal][3]**_
|
||||
|
||||
* _**[Access Google through terminal][4]**_
|
||||
|
||||
* [**_Use Wikipedia from command line_**][7]
|
||||
|
||||
* _**[Check Cryptocurrency Prices From Terminal][5]**_
|
||||
|
||||
* _**[Search and download torrent in terminal][6]**_
|
||||
|
||||
### What are TLDR pages?
|
||||
|
||||
The GitHub page of TLDR pages for Linux/Unix describes it as a collection of simplified and community-driven man pages. It’s an effort to make the experience of using man pages simpler with the help of practical examples. For those who don’t know, TLDR is taken from the common internet slang _Too Long; Didn’t Read_.
|
||||
|
||||
In case you wish to compare, let’s take the example of the tar command. Its man page extends over 1,000 lines. tar is an archiving utility that’s often combined with a compression method like bzip2 or gzip. Take a look at its man page:
|
||||
|
||||
[![tar man page](https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg)][14] On the other hand, TLDR pages let you simply take a glance at the command and see how it works. tar’s TLDR page looks like this and comes with some handy examples of the most common tasks you can complete with this utility:
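If you can’t view the screenshot, the entries on such a page are short one-line invocations with a brief description above each. A rough textual illustration follows; these are generic tar commands, not the literal contents of the TLDR page:

```
# Create an archive from files:
tar cf target.tar file1 file2 file3

# Extract an archive into the current directory:
tar xf source.tar

# Create a gzip-compressed archive:
tar czf target.tar.gz file1 file2 file3
```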
|
||||
|
||||
[![tar tldr page](https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg)][15] Let’s take another example and show you what TLDR pages have to offer when it comes to apt:
|
||||
|
||||
[![tldr-page-of-apt](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg)][16] Having shown you how TLDR works and makes your life easier, let’s tell you how to install it on your Linux-based operating system.
|
||||
|
||||
### How to install and use TLDR pages on Linux?
|
||||
|
||||
The most mature TLDR client is based on Node.js, and you can install it easily using the NPM package manager. In case Node.js and NPM are not available on your system, run the following commands:
|
||||
|
||||
```
|
||||
sudo apt-get install nodejs
|
||||
|
||||
sudo apt-get install npm
|
||||
```
|
||||
|
||||
In case you’re using an OS other than Debian, Ubuntu, or Ubuntu’s derivatives, you can use the yum, dnf, or pacman package manager as appropriate for your distribution.
|
||||
|
||||
Now, install the TLDR client on your Linux machine by running the following command in the terminal:
|
||||
|
||||
```
|
||||
sudo npm install -g tldr
|
||||
```
|
||||
|
||||
Once you’ve installed this terminal utility, it would be a good idea to update its cache before trying it out. To do so, run the following command:
|
||||
|
||||
```
|
||||
tldr --update
|
||||
```
|
||||
|
||||
After doing this, feel free to read the TLDR page of any Linux command. To do so, simply type:
|
||||
|
||||
```
|
||||
tldr <commandname>
|
||||
```
|
||||
|
||||
[![tldr kill command](https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg)][17]
|
||||
|
||||
You can also run the client’s help command to see all the parameters that can be used with TLDR to get the desired output. As usual, the help output is accompanied by examples.
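Assuming the Node.js client installed above, the help text is printed with the standard flag:

```
tldr --help
```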
|
||||
|
||||
### TLDR web, Android, and iOS versions
|
||||
|
||||
You would be pleasantly surprised to know that TLDR pages aren’t limited to your Linux desktop. They can also be used in your web browser, which can be accessed from any machine.
|
||||
|
||||
To use TLDR web version, visit [tldr.ostera.io][18] and perform the required search operation.
|
||||
|
||||
Alternatively, you can also download the [iOS][19] and [Android][20] apps and keep learning new commands on the go.
|
||||
|
||||
[![tldr app ios](https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg)][21]
|
||||
|
||||
Did you find this cool Linux terminal trick interesting? Do give it a try and let us know your feedback.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fossbytes.com/tldr-pages-linux-man-pages-alternative/
|
||||
|
||||
作者:[Adarsh Verma ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fossbytes.com/author/adarsh/
|
||||
[1]:https://fossbytes.com/watch-star-wars-command-prompt-via-telnet/
|
||||
[2]:https://fossbytes.com/use-stackoverflow-linux-terminal-mac/
|
||||
[3]:https://fossbytes.com/single-command-curl-wttr-terminal-weather-report/
|
||||
[4]:https://fossbytes.com/how-to-google-search-in-command-line-using-googler/
|
||||
[5]:https://fossbytes.com/check-bitcoin-cryptocurrency-prices-command-line-coinmon/
|
||||
[6]:https://fossbytes.com/review-torrench-download-torrents-using-terminal-linux/
|
||||
[7]:https://fossbytes.com/use-wikipedia-termnianl-wikit/
|
||||
[8]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Ffossbytes.com%2Ftldr-pages-linux-man-pages-alternative%2F
|
||||
[9]:https://twitter.com/intent/tweet?text=TLDR+pages%3A+Simplified+Alternative+To+Linux+Man+Pages&url=https%3A%2F%2Ffossbytes.com%2Ftldr-pages-linux-man-pages-alternative%2F&via=%40fossbytes14
|
||||
[10]:http://plus.google.com/share?url=https://fossbytes.com/tldr-pages-linux-man-pages-alternative/
|
||||
[11]:http://pinterest.com/pin/create/button/?url=https://fossbytes.com/tldr-pages-linux-man-pages-alternative/&media=https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu.jpg
|
||||
[12]:https://fossbytes.com/linux-lexicon-man-pages-navigation/
|
||||
[13]:https://github.com/tldr-pages/tldr
|
||||
[14]:https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg
|
||||
[15]:https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg
|
||||
[16]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg
|
||||
[17]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg
|
||||
[18]:https://tldr.ostera.io/
|
||||
[19]:https://itunes.apple.com/us/app/tldt-pages/id1071725095?ls=1&mt=8
|
||||
[20]:https://play.google.com/store/apps/details?id=io.github.hidroh.tldroid
|
||||
[21]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg
|
||||
[22]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu.jpg
|
||||
[23]:https://fossbytes.com/a-z-list-linux-command-line-reference/
|
@ -0,0 +1,85 @@
|
||||
# [Google launches TensorFlow-based vision recognition kit for RPi Zero W][26]
|
||||
|
||||
|
||||
![](http://linuxgizmos.com/files/google_aiyvisionkit-thm.jpg)
|
||||
Google’s $45 “AIY Vision Kit” for the Raspberry Pi Zero W performs TensorFlow-based vision recognition using a “VisionBonnet” board with a Movidius chip.
|
||||
|
||||
Google’s AIY Vision Kit for on-device neural network acceleration follows an earlier [AIY Projects][7] voice/AI kit for the Raspberry Pi that shipped to MagPi subscribers back in May. Like the voice kit and the older Google Cardboard VR viewer, the new AIY Vision Kit has a cardboard enclosure. The kit differs from the [Cloud Vision API][8], which was demo’d in 2015 with a Raspberry Pi based GoPiGo robot, in that it runs entirely on local processing power rather than requiring a cloud connection. The AIY Vision Kit is available now for pre-order at $45, with shipments due in early December.
|
||||
|
||||
|
||||
[![](http://linuxgizmos.com/files/google_aiyvisionkit-sm.jpg)][9] [![](http://linuxgizmos.com/files/rpi_zerow-sm.jpg)][10]
|
||||
**AIY Vision Kit, fully assembled (left) and Raspberry Pi Zero W**
|
||||
(click images to enlarge)
|
||||
|
||||
|
||||
The kit’s key processing element, aside from the 1GHz ARM11-based Broadcom BCM2835 SoC found on the required [Raspberry Pi Zero W][21] SBC, is Google’s new VisionBonnet RPi accessory board. The VisionBonnet pHAT board uses a Movidius MA2450, a version of the [Movidius Myriad 2 VPU][22] processor. On the VisionBonnet, the processor runs Google’s open source [TensorFlow][23] machine intelligence library for neural networking. The chip enables visual perception processing at up to 30 frames per second.
|
||||
|
||||
The AIY Vision Kit requires a user-supplied RPi Zero W, a [Raspberry Pi Camera v2][11], and a 16GB micro SD card for downloading the Linux-based image. The kit includes the VisionBonnet, an RGB arcade-style button, a piezo speaker, a macro/wide lens kit, and the cardboard enclosure. You also get flex cables, standoffs, a tripod mounting nut, and connecting components.
|
||||
|
||||
|
||||
[![](http://linuxgizmos.com/files/google_aiyvisionkit_pieces-sm.jpg)][12] [![](http://linuxgizmos.com/files/google_visionbonnet-sm.jpg)][13]
|
||||
**AIY Vision Kit kit components (left) and VisonBonnet accessory board**
|
||||
(click images to enlarge)
|
||||
|
||||
|
||||
Three neural network models are available. There’s a general-purpose model that can recognize 1,000 common objects, a facial detection model that can also score facial expression on a “joy scale” that ranges from “sad” to “laughing,” and a model that can identify whether the image contains a dog, cat, or human. The 1,000-object model derives from Google’s open source [MobileNets][24], a family of TensorFlow-based computer vision models designed for the restricted resources of a mobile or embedded device.
|
||||
|
||||
MobileNet models offer low latency and low power consumption, and are parameterized to meet the resource constraints of different use cases. The models can be built for classification, detection, embeddings, and segmentation, says Google. Earlier this month, Google released a developer preview of a mobile-friendly [TensorFlow Lite][14] library for Android and iOS that is compatible with MobileNets and the Android Neural Networks API.
|
||||
|
||||
|
||||
[![](http://linuxgizmos.com/files/google_aiyvisionkit_assembly-sm.jpg)][15]
|
||||
**AIY Vision Kit assembly views**
|
||||
(click image to enlarge)
|
||||
|
||||
|
||||
In addition to providing the three models, the AIY Vision Kit provides basic TensorFlow code and a compiler, so users can develop their own models. Python developers can also write new software to customize the RGB button colors, piezo element sounds, and the four GPIO pins on the VisionBonnet, which can drive additional lights, buttons, or servos. Potential models include recognizing food items, opening a dog door based on visual input, sending a text when your car leaves the driveway, or playing particular music based on facial recognition of a person entering the camera’s viewpoint.
|
||||
|
||||
|
||||
[![](http://linuxgizmos.com/files/movidius_myriad2vpu_block-sm.jpg)][16] [![](http://linuxgizmos.com/files/movidius_myriad2_reference_board-sm.jpg)][17]
|
||||
**Myriad 2 VPU block diagram (left) and reference board**
|
||||
(click image to enlarge)
|
||||
|
||||
|
||||
The Movidius Myriad 2 processor provides TeraFLOPS of performance within a nominal 1 Watt power envelope. The chip appeared on early Project Tango reference platforms, and is built into the Ubuntu-driven [Fathom][25] neural processing USB stick that Movidius debuted in May 2016, prior to being acquired by Intel. According to Movidius, the Myriad 2 is available “in millions of devices on the market today.”
|
||||
|
||||
**Further information**
|
||||
|
||||
The AIY Vision Kit is available for pre-order from Micro Center at $44.99, with shipments due in early December. More information may be found in the AIY Vision Kit [announcement][18], [Google Blog notice][19], and [Micro Center shopping page][20].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
|
||||
|
||||
作者:[ Eric Brown][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
|
||||
[1]:http://twitter.com/share?url=http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/&text=Google%20launches%20TensorFlow-based%20vision%20recognition%20kit%20for%20RPi%20Zero%20W%20
|
||||
[2]:https://plus.google.com/share?url=http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
|
||||
[3]:http://www.facebook.com/sharer.php?u=http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
|
||||
[4]:http://www.linkedin.com/shareArticle?mini=true&url=http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
|
||||
[5]:http://reddit.com/submit?url=http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/&title=Google%20launches%20TensorFlow-based%20vision%20recognition%20kit%20for%20RPi%20Zero%20W
|
||||
[6]:mailto:?subject=Google%20launches%20TensorFlow-based%20vision%20recognition%20kit%20for%20RPi%20Zero%20W&body=%20http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
|
||||
[7]:http://linuxgizmos.com/free-raspberry-pi-voice-kit-taps-google-assistant-sdk/
|
||||
[8]:http://linuxgizmos.com/google-releases-cloud-vision-api-with-demo-for-pi-based-robot/
|
||||
[9]:http://linuxgizmos.com/files/google_aiyvisionkit.jpg
|
||||
[10]:http://linuxgizmos.com/files/rpi_zerow.jpg
|
||||
[11]:http://linuxgizmos.com/raspberry-pi-cameras-jump-to-8mp-keep-25-dollar-price/
|
||||
[12]:http://linuxgizmos.com/files/google_aiyvisionkit_pieces.jpg
|
||||
[13]:http://linuxgizmos.com/files/google_visionbonnet.jpg
|
||||
[14]:https://developers.googleblog.com/2017/11/announcing-tensorflow-lite.html
|
||||
[15]:http://linuxgizmos.com/files/google_aiyvisionkit_assembly.jpg
|
||||
[16]:http://linuxgizmos.com/files/movidius_myriad2vpu_block.jpg
|
||||
[17]:http://linuxgizmos.com/files/movidius_myriad2_reference_board.jpg
|
||||
[18]:https://blog.google/topics/machine-learning/introducing-aiy-vision-kit-make-devices-see/
|
||||
[19]:https://developers.googleblog.com/2017/11/introducing-aiy-vision-kit-add-computer.html
|
||||
[20]:http://www.microcenter.com/site/content/Google_AIY.aspx?ekw=aiy&rd=1
|
||||
[21]:http://linuxgizmos.com/raspberry-pi-zero-w-adds-wifi-and-bluetooth-for-only-5-more/
|
||||
[22]:https://www.movidius.com/solutions/vision-processing-unit
|
||||
[23]:https://www.tensorflow.org/
|
||||
[24]:https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html
|
||||
[25]:http://linuxgizmos.com/usb-stick-brings-neural-computing-functions-to-devices/
|
||||
[26]:http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
|
@ -1,156 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Undistract-me : Get Notification When Long Running Terminal Commands Complete
|
||||
============================================================
|
||||
|
||||
by [sk][2] · November 30, 2017
|
||||
|
||||
![Undistract-me](https://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2-720x340.png)
|
||||
|
||||
A while ago, we published a guide on how to [get a notification when a Terminal activity is done][3]. Today, I found a similar utility called “undistract-me” that notifies you when long-running terminal commands complete. Picture this scenario: you run a command that takes a while to finish. In the meantime, you check Facebook and get absorbed in it. After a while, you remember that you ran a command a few minutes ago. You go back to the Terminal and notice that the command has already finished, but you have no idea when it completed. Have you ever been in this situation? I bet most of you have, many times. This is where “undistract-me” comes in handy. You don’t need to constantly check the terminal to see if a command has completed. The undistract-me utility will notify you when a long-running command completes. It works on Arch Linux, Debian, Ubuntu and other Ubuntu derivatives.
|
||||
|
||||
#### Installing Undistract-me
|
||||
|
||||
Undistract-me is available in the default repositories of Debian and its variants such as Ubuntu. All you have to do is run the following command to install it.
|
||||
|
||||
```
|
||||
sudo apt-get install undistract-me
|
||||
```
|
||||
|
||||
Arch Linux users can install it from the AUR using any AUR helper program.
|
||||
|
||||
Using [Pacaur][4]:
|
||||
|
||||
```
|
||||
pacaur -S undistract-me-git
|
||||
```
|
||||
|
||||
Using [Packer][5]:
|
||||
|
||||
```
|
||||
packer -S undistract-me-git
|
||||
```
|
||||
|
||||
Using [Yaourt][6]:
|
||||
|
||||
```
|
||||
yaourt -S undistract-me-git
|
||||
```
|
||||
|
||||
Then, run the following command to add “undistract-me” to your Bash.
|
||||
|
||||
```
|
||||
echo 'source /etc/profile.d/undistract-me.sh' >> ~/.bashrc
|
||||
```
|
||||
|
||||
Alternatively you can run this command to add it to your Bash:
|
||||
|
||||
```
|
||||
echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .bashrc
|
||||
```
|
||||
|
||||
If you are in Zsh shell, run this command:
|
||||
|
||||
```
|
||||
echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .zshrc
|
||||
```
|
||||
|
||||
Finally update the changes:
|
||||
|
||||
For Bash:
|
||||
|
||||
```
|
||||
source ~/.bashrc
|
||||
```
|
||||
|
||||
For Zsh:
|
||||
|
||||
```
|
||||
source ~/.zshrc
|
||||
```
|
||||
|
||||
#### Configure Undistract-me
|
||||
|
||||
By default, Undistract-me will consider any command that takes more than 10 seconds to complete as a long-running command. You can change this time interval by editing the /usr/share/undistract-me/long-running.bash file.
|
||||
|
||||
```
|
||||
sudo nano /usr/share/undistract-me/long-running.bash
|
||||
```
|
||||
|
||||
Find the “LONG_RUNNING_COMMAND_TIMEOUT” variable and change the default value (10 seconds) to something else of your choice, as shown in the example below.
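For example, to only be notified about commands that take longer than 30 seconds, the edited line would look like this (a sketch of the change, using the variable named above):

```
LONG_RUNNING_COMMAND_TIMEOUT=30
```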
|
||||
|
||||
[![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png)][7]
|
||||
|
||||
Save and close the file. Do not forget to update the changes:
|
||||
|
||||
```
|
||||
source ~/.bashrc
|
||||
```
|
||||
|
||||
Also, you can disable notifications for particular commands. To do so, find the “LONG_RUNNING_IGNORE_LIST” variable and add the commands, space-separated, as in the example below.
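A hypothetical example of that edit, skipping notifications for interactive programs; the command names here are only illustrative, and you should check the exact syntax used in your copy of long-running.bash:

```
LONG_RUNNING_IGNORE_LIST="vim top htop less man"
```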
|
||||
|
||||
By default, the notification will only show if the active window is not the window the command is running in. That means it will notify you only if the command is running in a background Terminal window. If the command is running in the active Terminal window, you will not be notified. If you want undistract-me to send notifications whether the Terminal window is visible or in the background, set IGNORE_WINDOW_CHECK to 1 to skip the window check.
|
||||
|
||||
Another cool feature of Undistract-me is that you can get an audio notification along with the visual notification when a command is done. By default, it only sends a visual notification. You can change this behavior by setting the UDM_PLAY_SOUND variable to a non-zero integer on the command line. However, your Ubuntu system needs the pulseaudio-utils and sound-theme-freedesktop packages installed for this to work.
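For example, to install the required packages and turn on the audio notification for the current shell session (package names as given above; UDM_PLAY_SOUND is the variable just mentioned):

```
sudo apt-get install pulseaudio-utils sound-theme-freedesktop
export UDM_PLAY_SOUND=1
```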
|
||||
|
||||
Please remember that you need to run the following command to update the changes made.
|
||||
|
||||
For Bash:
|
||||
|
||||
```
|
||||
source ~/.bashrc
|
||||
```
|
||||
|
||||
For Zsh:
|
||||
|
||||
```
|
||||
source ~/.zshrc
|
||||
```
|
||||
|
||||
It is time to verify if this really works.
|
||||
|
||||
#### Get Notification When Long Running Terminal Commands Complete
|
||||
|
||||
Now, run any command that takes longer than 10 seconds or the time duration you defined in Undistract-me script.
|
||||
|
||||
I ran the following command on my Arch Linux desktop.
|
||||
|
||||
```
|
||||
sudo pacman -Sy
|
||||
```
|
||||
|
||||
This command took 32 seconds to complete. After the completion of the above command, I got the following notification.
|
||||
|
||||
[![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png)][8]
|
||||
|
||||
Please remember that the Undistract-me script notifies you only if the given command took more than 10 seconds to complete. If the command completes in less than 10 seconds, you will not be notified. Of course, you can change this time interval as described in the configuration section above.
|
||||
|
||||
I find this tool very useful. It helped me get back to business after getting completely lost in other tasks. I hope it will be helpful to you too.
|
||||
|
||||
More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
Resource:
|
||||
|
||||
* [Undistract-me GitHub Repository][1]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/undistract-get-notification-long-running-terminal-commands-complete/
|
||||
|
||||
作者:[sk][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://github.com/jml/undistract-me
|
||||
[2]:https://www.ostechnix.com/author/sk/
|
||||
[3]:https://www.ostechnix.com/get-notification-terminal-task-done/
|
||||
[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
|
||||
[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
|
||||
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
|
||||
[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png
|
@ -1,135 +0,0 @@
|
||||
|
||||
translating by HardworkFish
|
||||
|
||||
Wake up and Shut Down Linux Automatically
|
||||
============================================================
|
||||
|
||||
### [banner.jpg][1]
|
||||
|
||||
![time keeper](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner.jpg?itok=zItspoSb)
|
||||
|
||||
Learn how to configure your Linux computers to watch the time for you, then wake up and shut down automatically.
|
||||
|
||||
[Creative Commons Attribution][6][The Observatory at Delhi][7]
|
||||
|
||||
Don't be a watt-waster. If your computers don't need to be on then shut them down. For convenience and nerd creds, you can configure your Linux computers to wake up and shut down automatically.
|
||||
|
||||
### Precious Uptimes
|
||||
|
||||
Some computers need to be on all the time, which is fine as long as it's not about satisfying an uptime compulsion. Some people are very proud of their lengthy uptimes, and now that we have kernel hot-patching, only hardware failures require shutdowns. I think it's better to be practical: save electricity as well as wear on your moving parts, and shut machines down when they're not needed. For example, you can wake up a backup server at a scheduled time, run your backups, and then shut it down until it's time for the next backup. Or, you can configure your Internet gateway to be on only at certain times. Anything that doesn't need to be on all the time can be configured to turn on, do a job, and then shut down.
|
||||
|
||||
### Sleepies
|
||||
|
||||
For computers that don't need to be on all the time, good old cron will shut them down reliably. Use either root's cron or /etc/crontab. The first example below creates a root cron job to shut down every night at 11:15 p.m.; the second restricts the shutdown to weekdays (Monday through Friday).
|
||||
|
||||
```
|
||||
# crontab -e -u root
|
||||
# m h dom mon dow command
|
||||
15 23 * * * /sbin/shutdown -h now
|
||||
```
|
||||
|
||||
```
|
||||
15 23 * * 1-5 /sbin/shutdown -h now
|
||||
```
|
||||
|
||||
You may also use /etc/crontab, which is fast and easy, and everything is in one file. You have to specify the user:
|
||||
|
||||
```
|
||||
15 23 * * 1-5 root shutdown -h now
|
||||
```
|
||||
|
||||
Auto-wakeups are very cool; most of my SUSE colleagues are in Nuremberg, so I am crawling out of bed at 5 a.m. to have a few hours of overlap with their schedules. My work computer turns itself on at 5:30 a.m., and then all I have to do is drag my coffee and myself to my desk to start work. It might not seem like pressing a power button is a big deal, but at that time of day every little thing looms large.
|
||||
|
||||
Waking up your Linux PC can be less reliable than shutting it down, so you may want to try different methods. You can use wakeonlan, RTC wakeups, or your PC's BIOS to set scheduled wakeups. These all work because, when you power off your computer, it's not really all the way off; it is in an extremely low-power state and can receive and respond to signals. You need to use the power supply switch to turn it off completely.
|
||||
|
||||
### BIOS Wakeup
|
||||
|
||||
A BIOS wakeup is the most reliable. My system BIOS has an easy-to-use wakeup scheduler (Figure 1). Chances are yours does, too. Easy peasy.
|
||||
|
||||
### [fig-1.png][2]
|
||||
|
||||
![wake up](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_11.png?itok=8qAeqo1I)
|
||||
|
||||
Figure 1: My system BIOS has an easy-to-use wakeup scheduler.
|
||||
|
||||
[Used with permission][8]
|
||||
|
||||
### wakeonlan
|
||||
|
||||
wakeonlan is the next most reliable method. This requires sending a signal from a second computer to the computer you want to power on. You could use an Arduino or Raspberry Pi to send the wakeup signal, a Linux-based router, or any Linux PC. First, look in your system BIOS to see if wakeonlan is supported -- which it should be -- and then enable it, as it should be disabled by default.
|
||||
|
||||
Then, you'll need an Ethernet network adapter that supports wakeonlan; wireless adapters won't work. You'll need to verify that your Ethernet card supports wakeonlan:
|
||||
|
||||
```
|
||||
# ethtool eth0 | grep -i wake-on
|
||||
Supports Wake-on: pumbg
|
||||
Wake-on: g
|
||||
```
|
||||
|
||||
* d -- all wake ups disabled
|
||||
|
||||
* p -- wake up on physical activity
|
||||
|
||||
* u -- wake up on unicast messages
|
||||
|
||||
* m -- wake up on multicast messages
|
||||
|
||||
* b -- wake up on broadcast messages
|
||||
|
||||
* a -- wake up on ARP messages
|
||||
|
||||
* g -- wake up on magic packet
|
||||
|
||||
* s -- set the Secure On password for the magic packet
|
||||
|
||||
man ethtool is not clear on what the p switch does; it suggests that any signal will cause a wake up, but in my testing it doesn't. The one that must be enabled is g -- wake up on magic packet -- and the Wake-on line shows that it is already enabled. If it is not, you can use ethtool to enable it, using your own device name, of course. Because this setting may not survive a reboot, the @reboot cron entry in the second snippet below re-applies it at boot:
|
||||
|
||||
```
|
||||
# ethtool -s eth0 wol g
|
||||
```
|
||||
|
||||
```
|
||||
@reboot /usr/bin/ethtool -s eth0 wol g
|
||||
```
|
||||
|
||||
### [fig-2.png][3]
|
||||
|
||||
![wakeonlan](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_7.png?itok=XQAwmHoQ)
|
||||
|
||||
Figure 2: Enable Wake on LAN.
|
||||
|
||||
[Used with permission][9]
|
||||
|
||||
Another option: recent NetworkManager versions have a nice little checkbox to enable Wake on LAN (Figure 2).
|
||||
|
||||
There is a field for setting a password, but if your network interface doesn't support the Secure On password, it won't work.
|
||||
|
||||
Now you need to configure a second PC to send the wakeup signal. You don't need root privileges, so create a cron job for your user. You need the MAC address of the network interface on the machine you're waking up:
|
||||
|
||||
```
|
||||
30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B
|
||||
```
|
||||
|
||||
Using the real-time clock for wakeups is the least reliable method. Check out [Wake Up Linux With an RTC Alarm Clock][4]; this is a bit outdated as most distros use systemd now. Come back next week to learn more about updated ways to use RTC wakeups.
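As a quick taste of the RTC approach, the rtcwake command from util-linux can program the RTC alarm directly. A minimal example that suspends the machine to RAM and wakes it after one hour (adjust the mode and interval for your hardware):

```
# suspend to RAM, wake up after 3600 seconds
sudo rtcwake -m mem -s 3600
```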
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux"][5] course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux-automatically
|
||||
|
||||
作者:[Carla Schroder]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://www.linux.com/files/images/bannerjpg
|
||||
[2]:https://www.linux.com/files/images/fig-1png-11
|
||||
[3]:https://www.linux.com/files/images/fig-2png-7
|
||||
[4]:https://www.linux.com/learn/wake-linux-rtc-alarm-clock
|
||||
[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
||||
[6]:https://www.linux.com/licenses/category/creative-commons-attribution
|
||||
[7]:http://www.columbia.edu/itc/mealac/pritchett/00routesdata/1700_1799/jaipur/delhijantarearly/delhijantarearly.html
|
||||
[8]:https://www.linux.com/licenses/category/used-permission
|
||||
[9]:https://www.linux.com/licenses/category/used-permission
|
@ -0,0 +1,187 @@
|
||||
translating by zrszrszr
|
||||
12 MySQL/MariaDB Security Best Practices for Linux
|
||||
============================================================
|
||||
|
||||
MySQL is the world’s most popular open source database system and MariaDB (a fork of MySQL) is the world’s fastest growing open source database system. After installation, MySQL server is insecure in its default configuration, and securing it is one of the essential tasks in general database management.
|
||||
|
||||
This contributes to hardening and boosting overall Linux server security, as attackers always scan for vulnerabilities in any part of a system, and databases have historically been key targets. A common example is brute-forcing the root password of the MySQL database.
|
||||
|
||||
In this guide, we will explain useful MySQL/MariaDB security best practices for Linux.
|
||||
|
||||
### 1\. Secure MySQL Installation
|
||||
|
||||
Running the mysql_secure_installation script is the first recommended step after installing MySQL server towards securing the database server. It helps improve the security of your MySQL server by asking you to:
|
||||
|
||||
* set a password for the root account, if you didn’t set it during installation.
|
||||
|
||||
* disable remote root user login by removing root accounts that are accessible from outside the local host.
|
||||
|
||||
* remove anonymous-user accounts and test database which by default can be accessed by all users, even anonymous users.
|
||||
|
||||
```
|
||||
# mysql_secure_installation
|
||||
```
|
||||
|
||||
After running it, set the root password and answer the series of questions by entering [Yes/Y] and pressing [Enter].
|
||||
|
||||
[![Secure MySQL Installation](https://www.tecmint.com/wp-content/uploads/2017/12/Secure-MySQL-Installation.png)][2]
|
||||
|
||||
Secure MySQL Installation
|
||||
|
||||
### 2\. Bind Database Server To Loopback Address
|
||||
|
||||
This configuration restricts access from remote machines by telling the MySQL server to accept connections only from localhost. You can set it in the main configuration file.
|
||||
|
||||
```
|
||||
# vi /etc/my.cnf [RHEL/CentOS]
|
||||
# vi /etc/mysql/my.conf [Debian/Ubuntu]
|
||||
OR
|
||||
# vi /etc/mysql/mysql.conf.d/mysqld.cnf [Debian/Ubuntu]
|
||||
```
|
||||
|
||||
Add the following line under the `[mysqld]` section.
|
||||
|
||||
```
|
||||
bind-address = 127.0.0.1
|
||||
```
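After restarting the service, you can verify that MySQL is listening only on the loopback address (assuming the default port 3306):

```
# ss -ltn | grep 3306
```

The listener should show 127.0.0.1:3306 rather than 0.0.0.0:3306.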
|
||||
|
||||
### 3\. Disable LOCAL INFILE in MySQL
|
||||
|
||||
As part of security hardening, you need to disable local_infile to prevent access to the underlying filesystem from within MySQL. Add the following directive under the `[mysqld]` section.
|
||||
|
||||
```
|
||||
local-infile=0
|
||||
```
|
||||
|
||||
### 4\. Change MYSQL Default Port
|
||||
|
||||
The port variable sets the MySQL port number that will be used to listen for TCP/IP connections. The default port number is 3306, but you can change it under the `[mysqld]` section as shown.
|
||||
|
||||
```
|
||||
port=5000
|
||||
```
|
||||
|
||||
### 5\. Enable MySQL Logging
|
||||
|
||||
Logs are one of the best ways to understand what happens on a server; in case of an attack, you can easily see intrusion-related activities in the log files. You can enable MySQL logging by adding the following variable under the `[mysqld]` section.
|
||||
|
||||
```
|
||||
log=/var/log/mysql.log
|
||||
```
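Note that the legacy log option has been removed from recent MySQL releases; on current MySQL and MariaDB versions the general query log is enabled with the following settings under the `[mysqld]` section instead:

```
general_log=1
general_log_file=/var/log/mysql.log
```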
|
||||
|
||||
### 6\. Set Appropriate Permission on MySQL Files
|
||||
|
||||
Ensure that appropriate permissions are set for all MySQL server files and data directories. The /etc/my.cnf file should only be writable by root. This blocks other users from changing the database server configuration.
|
||||
|
||||
```
|
||||
# chmod 644 /etc/my.cnf
|
||||
```
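It is also worth confirming that the file is owned by root; adjust the path if your distribution uses a different configuration file:

```
# chown root:root /etc/my.cnf
# ls -l /etc/my.cnf
```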
|
||||
|
||||
### 7\. Delete MySQL Shell History
|
||||
|
||||
All commands you execute in the MySQL shell are stored by the mysql client in a history file: ~/.mysql_history. This can be dangerous, because for any user accounts you create, all usernames and passwords typed in the shell will be recorded in the history file. Clear it as follows:
|
||||
|
||||
```
|
||||
# cat /dev/null > ~/.mysql_history
|
||||
```
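To keep the history from being written again in future sessions, you can also point the mysql client's history file at /dev/null through the MYSQL_HISTFILE environment variable:

```
# echo 'export MYSQL_HISTFILE=/dev/null' >> ~/.bashrc
```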
|
||||
|
||||
### 8\. Don’t Type MySQL Passwords on the Command Line
|
||||
|
||||
As you already know, all commands you type on the terminal are stored in a history file, depending on the shell you are using (for example ~/.bash_history for bash). An attacker who manages to gain access to this history file can easily see any passwords recorded there.
|
||||
|
||||
It is strongly recommended not to type passwords on the command line, like this:
|
||||
|
||||
```
|
||||
# mysql -u root -ppassword
|
||||
```
|
||||
[![Connect MySQL with Password](https://www.tecmint.com/wp-content/uploads/2017/12/Connect-MySQL-with-Password.png)][3]
|
||||
|
||||
Connect MySQL with Password
|
||||
|
||||
When you check the last section of the command history file, you will see the password typed above.
|
||||
|
||||
```
|
||||
# history
|
||||
```
|
||||
[![Check Command History](https://www.tecmint.com/wp-content/uploads/2017/12/Check-Command-History.png)][4]
|
||||
|
||||
Check Command History
|
||||
|
||||
The appropriate way to connect to MySQL is:
|
||||
|
||||
```
|
||||
# mysql -u root -p
|
||||
Enter password:
|
||||
```
|
||||
|
||||
### 9\. Define Application-Specific Database Users
|
||||
|
||||
For each application running on the server, give access only to a user who is in charge of that application’s database. For example, if you have an Osclass site, create a specific user for the Osclass database as follows.
|
||||
|
||||
```
|
||||
# mysql -u root -p
|
||||
MariaDB [(none)]> CREATE DATABASE osclass_db;
|
||||
MariaDB [(none)]> CREATE USER 'osclassdmin'@'localhost' IDENTIFIED BY 'osclass@dmin%!2';
|
||||
MariaDB [(none)]> GRANT ALL PRIVILEGES ON osclass_db.* TO 'osclassdmin'@'localhost';
|
||||
MariaDB [(none)]> FLUSH PRIVILEGES;
|
||||
MariaDB [(none)]> exit
|
||||
```
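If the application does not actually need every privilege, you can grant only what it uses instead of ALL PRIVILEGES, for example:

```
MariaDB [(none)]> GRANT SELECT, INSERT, UPDATE, DELETE ON osclass_db.* TO 'osclassdmin'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
```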
|
||||
|
||||
And remember to always remove user accounts that no longer manage any application database on the server.
|
||||
|
||||
### 10\. Use Additional Security Plugins and Libraries
|
||||
|
||||
MySQL includes a number of security plugins for authenticating client connection attempts, validating passwords, and securing storage of sensitive information, all of which are available in the free version.
|
||||
|
||||
You can find more here: [https://dev.mysql.com/doc/refman/5.7/en/security-plugins.html][5]
|
||||
|
||||
### 11\. Change MySQL Passwords Regularly
|
||||
|
||||
This is a common piece of information/application/system security advice. How often you do this will depend entirely on your internal security policy. However, it can prevent “snoopers” who might have been tracking your activity over a long period of time from gaining access to your MySQL server.
|
||||
|
||||
```
|
||||
MariaDB [(none)]> USE mysql;
|
||||
MariaDB [(none)]> UPDATE user SET password=PASSWORD('YourPasswordHere') WHERE User='root' AND Host = 'localhost';
|
||||
MariaDB [(none)]> FLUSH PRIVILEGES;
|
||||
```
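Note that on MySQL 5.7.6+ and recent MariaDB versions the preferred way to change a password is ALTER USER rather than updating the mysql.user table directly (substitute your own strong password):

```
MariaDB [(none)]> ALTER USER 'root'@'localhost' IDENTIFIED BY 'YourPasswordHere';
```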
|
||||
|
||||
### 12\. Update MySQL Server Package Regularly
|
||||
|
||||
It is highly recommended to upgrade MySQL/MariaDB packages regularly from the vendor’s repository to keep up with security updates and bug fixes. Packages in default operating system repositories are often outdated.
|
||||
|
||||
```
|
||||
# yum update
|
||||
# apt update
|
||||
```
|
||||
|
||||
After making any changes to the mysql/mariadb server, always restart the service.
|
||||
|
||||
```
|
||||
# systemctl restart mariadb #RHEL/CentOS
|
||||
# systemctl restart mysql #Debian/Ubuntu
|
||||
```
|
||||
|
||||
Read Also: [15 Useful MySQL/MariaDB Performance Tuning and Optimization Tips][6]
|
||||
|
||||
That’s all! We’d love to hear from you via the comment form below. Do share with us any MySQL/MariaDB security tips missing from the above list.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.tecmint.com/mysql-mariadb-security-best-practices-for-linux/
|
||||
|
||||
作者:[ Aaron Kili ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/learn-mysql-mariadb-for-beginners/
|
||||
[2]:https://www.tecmint.com/wp-content/uploads/2017/12/Secure-MySQL-Installation.png
|
||||
[3]:https://www.tecmint.com/wp-content/uploads/2017/12/Connect-MySQL-with-Password.png
|
||||
[4]:https://www.tecmint.com/wp-content/uploads/2017/12/Check-Command-History.png
|
||||
[5]:https://dev.mysql.com/doc/refman/5.7/en/security-plugins.html
|
||||
[6]:https://www.tecmint.com/mysql-mariadb-performance-tuning-and-optimization/
|
||||
[7]:https://www.tecmint.com/author/aaronkili/
|
||||
[8]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[9]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -1,71 +0,0 @@
|
||||
### [Fedora Classroom Session: Ansible 101][2]
|
||||
|
||||
### By Sachin S Kamath
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/07/fedora-classroom-945x400.jpg)
|
||||
|
||||
Fedora Classroom sessions continue this week with an Ansible session. The general schedule for sessions appears [on the wiki][3]. You can also find [resources and recordings from previous sessions][4] there. Here are details about this week’s session on [Thursday, 30th November at 1600 UTC][5]. That link allows you to convert the time to your timezone.
|
||||
|
||||
### Topic: Ansible 101
|
||||
|
||||
As the Ansible [documentation][6] explains, Ansible is an IT automation tool. It’s primarily used to configure systems, deploy software, and orchestrate more advanced IT tasks. Examples include continuous deployments or zero downtime rolling updates.
|
||||
|
||||
This Classroom session covers the topics listed below:
|
||||
|
||||
1. Introduction to SSH
|
||||
|
||||
2. Understanding different terminologies
|
||||
|
||||
3. Introduction to Ansible
|
||||
|
||||
4. Ansible installation and setup
|
||||
|
||||
5. Establishing password-less connection
|
||||
|
||||
6. Ad-hoc commands
|
||||
|
||||
7. Managing inventory
|
||||
|
||||
8. Playbooks examples
|
||||
|
||||
There will also be a follow-up Ansible 102 session later. That session will cover complex playbooks, roles, dynamic inventory files, control flow and Galaxy.
|
||||
|
||||
### Instructors
|
||||
|
||||
We have two experienced instructors handling this session.
|
||||
|
||||
[Geoffrey Marr][7], also known by his IRC name as “coremodule,” is a Red Hat employee and Fedora contributor with a background in Linux and cloud technologies. While working, he spends his time lurking in the [Fedora QA][8] wiki and test pages. Away from work, he enjoys RaspberryPi projects, especially those focusing on software-defined radio.
|
||||
|
||||
[Vipul Siddharth][9] is an intern at Red Hat who also works on Fedora. He loves to contribute to open source and seeks opportunities to spread the word of free and open source software.
|
||||
|
||||
### Joining the session
|
||||
|
||||
This session takes place on [BlueJeans][10]. The following information will help you join the session:
|
||||
|
||||
* URL: [https://bluejeans.com/3466040121][1]
|
||||
|
||||
* Meeting ID (for Desktop App): 3466040121
|
||||
|
||||
We hope you attend, learn from, and enjoy this session! If you have any feedback about the sessions, have ideas for a new one or want to host a session, please feel free to comment on this post or edit the [Classroom wiki page][11].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/fedora-classroom-session-ansible-101/
|
||||
|
||||
作者:[Sachin S Kamath]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://bluejeans.com/3466040121
|
||||
[2]:https://fedoramagazine.org/fedora-classroom-session-ansible-101/
|
||||
[3]:https://fedoraproject.org/wiki/Classroom
|
||||
[4]:https://fedoraproject.org/wiki/Classroom#Previous_Sessions
|
||||
[5]:https://www.timeanddate.com/worldclock/fixedtime.html?msg=Fedora+Classroom+-+Ansible+101&iso=20171130T16&p1=%3A
|
||||
[6]:http://docs.ansible.com/ansible/latest/index.html
|
||||
[7]:https://fedoraproject.org/wiki/User:Coremodule
|
||||
[8]:https://fedoraproject.org/wiki/QA
|
||||
[9]:https://fedoraproject.org/wiki/User:Siddharthvipul1
|
||||
[10]:https://www.bluejeans.com/downloads
|
||||
[11]:https://fedoraproject.org/wiki/Classroom
|
@ -1,3 +1,5 @@
|
||||
|
||||
Translating by FelixYFZ
|
||||
How to find a publisher for your tech book
|
||||
============================================================
|
||||
|
||||
|
@ -0,0 +1,85 @@
|
||||
Launching an Open Source Project: A Free Guide
|
||||
============================================================
|
||||
|
||||
![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/project-launch-1024x645.jpg)
|
||||
|
||||
Launching a project and then rallying community support can be complicated, but the new guide to Starting an Open Source Project can help.
|
||||
|
||||
Increasingly, as open source programs become more pervasive at organizations of all sizes, tech and DevOps workers are choosing to or being asked to launch their own open source projects. From Google to Netflix to Facebook, companies are also releasing their open source creations to the community. It’s become common for open source projects to start from scratch internally, after which they benefit from collaboration involving external developers.
|
||||
|
||||
Launching a project and then rallying community support can be more complicated than you think, however. A little up-front work can help things go smoothly, and that’s exactly where the new guide to [Starting an Open Source Project][1] comes in.
|
||||
|
||||
This free guide was created to help organizations already versed in open source learn how to start their own open source projects. It starts at the beginning of the process, including deciding what to open source, and moves on to budget and legal considerations, and more. The road to creating an open source project may be foreign, but major companies, from Google to Facebook, have opened up resources and provided guidance. In fact, Google has [an extensive online destination][2] dedicated to open source best practices and how to open source projects.
|
||||
|
||||
“No matter how many smart people we hire inside the company, there’s always smarter people on the outside,” notes Jared Smith, Open Source Community Manager at Capital One. “We find it is worth it to us to open source and share our code with the outside world in exchange for getting some great advice from people on the outside who have expertise and are willing to share back with us.”
|
||||
|
||||
In the new guide, noted open source expert Ibrahim Haddad provides five reasons why an organization might open source a new project:
|
||||
|
||||
1. Accelerate an open solution; provide a reference implementation to a standard; share development costs for strategic functions
|
||||
|
||||
2. Commoditize a market; reduce prices of non-strategic software components.
|
||||
|
||||
3. Drive demand by building an ecosystem for your products.
|
||||
|
||||
4. Partner with others; engage customers; strengthen relationships with common goals.
|
||||
|
||||
5. Offer your customers the ability to self-support: the ability to adapt your code without waiting for you.
|
||||
|
||||
The guide notes: “The decision to release or create a new open source project depends on your circumstances. Your company should first achieve a certain level of open source mastery by using open source software and contributing to existing projects. This is because consuming can teach you how to leverage external projects and developers to build your products. And participation can bring more fluency in the conventions and culture of open source communities. (See our guides on [Using Open Source Code][3] and [Participating in Open Source Communities][4]) But once you have achieved open source fluency, the best time to start launching your own open source projects is simply ‘early’ and ‘often.’”
|
||||
|
||||
The guide also notes that planning can keep you and your organization out of legal trouble. Issues pertaining to licensing, distribution, support options, and even branding require thinking ahead if you want your project to flourish.
|
||||
|
||||
“I think it is a crucial thing for a company to be thinking about what they’re hoping to achieve with a new open source project,” said John Mertic, Director of Program Management at The Linux Foundation. “They must think about the value of it to the community and developers out there and what outcomes they’re hoping to get out of it. And then they must understand all the pieces they must have in place to do this the right way, including legal, governance, infrastructure and a starting community. Those are the things I always stress the most when you’re putting an open source project out there.”
|
||||
|
||||
The [Starting an Open Source Project][5] guide can help you with everything from licensing issues to best development practices, and it explores how to seamlessly and safely weave existing open components into your open source projects. It is one of a new collection of free guides from The Linux Foundation and The TODO Group that are all extremely valuable for any organization running an open source program. [The guides are available][6] now to help you run an open source program office where open source is supported, shared, and leveraged. With such an office, organizations can establish and execute on their open source strategies efficiently, with clear terms.
|
||||
|
||||
These free resources were produced based on expertise from open source leaders. [Check out all the guides here][7] and stay tuned for our continuing coverage.
|
||||
|
||||
Also, don’t miss the previous articles in the series:
|
||||
|
||||
[How to Create an Open Source Program][8]
|
||||
|
||||
[Tools for Managing Open Source Programs][9]
|
||||
|
||||
[Measuring Your Open Source Program’s Success][10]
|
||||
|
||||
[Effective Strategies for Recruiting Open Source Developers][11]
|
||||
|
||||
[Participating in Open Source Communities][12]
|
||||
|
||||
[Using Open Source Code][13]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxfoundation.org/blog/launching-open-source-project-free-guide/
|
||||
|
||||
作者:[Sam Dean ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linuxfoundation.org/author/sdean/
|
||||
[1]:https://www.linuxfoundation.org/resources/open-source-guides/starting-open-source-project/
|
||||
[2]:https://www.linux.com/blog/learn/chapter/open-source-management/2017/5/googles-new-home-all-things-open-source-runs-deep
|
||||
[3]:https://www.linuxfoundation.org/using-open-source-code/
|
||||
[4]:https://www.linuxfoundation.org/participating-open-source-communities/
|
||||
[5]:https://www.linuxfoundation.org/resources/open-source-guides/starting-open-source-project/
|
||||
[6]:https://github.com/todogroup/guides
|
||||
[7]:https://github.com/todogroup/guides
|
||||
[8]:https://github.com/todogroup/guides/blob/master/creating-an-open-source-program.md
|
||||
[9]:https://www.linuxfoundation.org/blog/managing-open-source-programs-free-guide/
|
||||
[10]:https://www.linuxfoundation.org/measuring-your-open-source-program-success/
|
||||
[11]:https://www.linuxfoundation.org/blog/effective-strategies-recruiting-open-source-developers/
|
||||
[12]:https://www.linuxfoundation.org/participating-open-source-communities/
|
||||
[13]:https://www.linuxfoundation.org/using-open-source-code/
|
||||
[14]:https://www.linuxfoundation.org/author/sdean/
|
||||
[15]:https://www.linuxfoundation.org/category/audience/attorneys/
|
||||
[16]:https://www.linuxfoundation.org/category/blog/
|
||||
[17]:https://www.linuxfoundation.org/category/audience/c-level/
|
||||
[18]:https://www.linuxfoundation.org/category/audience/developer-influencers/
|
||||
[19]:https://www.linuxfoundation.org/category/audience/entrepreneurs/
|
||||
[20]:https://www.linuxfoundation.org/category/content-placement/lf-brand/
|
||||
[21]:https://www.linuxfoundation.org/category/audience/open-source-developers/
|
||||
[22]:https://www.linuxfoundation.org/category/audience/open-source-professionals/
|
||||
[23]:https://www.linuxfoundation.org/category/audience/open-source-users/
|
@ -1,160 +0,0 @@
|
||||
Randomize your WiFi MAC address on Ubuntu 16.04
|
||||
============================================================
|
||||
|
||||
_Your device’s MAC address can be used to track you across the WiFi networks you connect to. That data can be shared and sold, and often identifies you as an individual. It’s possible to limit this tracking by using pseudo-random MAC addresses._
|
||||
|
||||
![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif)
|
||||
|
||||
_Image courtesy of [Cloudessa][4]_
|
||||
|
||||
Every network device like a WiFi or Ethernet card has a unique identifier called a MAC address, for example `b4:b6:76:31:8c:ff`. It’s how networking works: any time you connect to a WiFi network, the router uses that address to send and receive packets to your machine and distinguish it from other devices in the area.
|
||||
|
||||
The snag with this design is that your unique, unchanging MAC address is just perfect for tracking you. Logged into Starbucks WiFi? Noted. London Underground? Logged.
|
||||
|
||||
If you’ve ever put your real name into one of those Craptive Portals on a WiFi network you’ve now tied your identity to that MAC address. Didn’t read the terms and conditions? You might assume that free airport WiFi is subsidised by flogging ‘customer analytics’ (your personal information) to hotels, restaurant chains and whomever else wants to know about you.
|
||||
|
||||
I don’t subscribe to being tracked and sold by mega-corps, so I spent a few hours hacking a solution.
|
||||
|
||||
### MAC addresses don’t need to stay the same
|
||||
|
||||
Fortunately, it’s possible to spoof your MAC address to a random one without fundamentally breaking networking.
|
||||
|
||||
I wanted to randomize my MAC address, but with three particular caveats:
|
||||
|
||||
1. The MAC should be different across different networks. This means Starbucks WiFi sees a different MAC from London Underground, preventing linking my identity across different providers.
|
||||
|
||||
2. The MAC should change regularly to prevent a network knowing that I’m the same person who walked past 75 times over the last year.
|
||||
|
||||
3. The MAC stays the same throughout each working day. When the MAC address changes, most networks will kick you off, and those with Craptive Portals will usually make you sign in again - annoying.
|
||||
|
||||
### Manipulating NetworkManager
|
||||
|
||||
My first attempt, using the `macchanger` tool, was unsuccessful, as NetworkManager would override the MAC address according to its own configuration.
|
||||
|
||||
I learned that NetworkManager 1.4.1+ can do MAC address randomization right out of the box. If you’re using Ubuntu 17.04 or later, you can get most of the way with [this config file][7]. You can’t quite achieve all three of my requirements (you must choose _random_ or _stable_, but it seems you can’t do _stable-for-one-day_).
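For reference, the NetworkManager 1.4+ approach is a small drop-in configuration file. A sketch of the kind of settings involved is below; the file name is arbitrary, and the config file linked above is the authoritative version:

```
# /etc/NetworkManager/conf.d/99-random-mac.conf (example)
[device]
wifi.scan-rand-mac-address=yes

[connection]
wifi.cloned-mac-address=stable
ethernet.cloned-mac-address=stable
```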
|
||||
|
||||
Since I’m sticking with Ubuntu 16.04 which ships with NetworkManager 1.2, I couldn’t make use of the new functionality. Supposedly there is some randomization support but I failed to actually make it work, so I scripted up a solution instead.
|
||||
|
||||
Fortunately NetworkManager 1.2 does allow for spoofing your MAC address. You can see this in the ‘Edit connections’ dialog for a given network:
|
||||
|
||||
![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png)
|
||||
|
||||
NetworkManager also supports hooks - any script placed in `/etc/NetworkManager/dispatcher.d/pre-up.d/` is run before a connection is brought up.
|
||||
|
||||
### Assigning pseudo-random MAC addresses
|
||||
|
||||
To recap, I wanted to generate random MAC addresses based on the _network_ and the _date_ . We can use the NetworkManager command line, nmcli, to show a full list of networks:
|
||||
|
||||
```
|
||||
> nmcli connection
|
||||
NAME UUID TYPE DEVICE
|
||||
Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0
|
||||
DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless --
|
||||
PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless --
|
||||
virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless --
|
||||
|
||||
```
|
||||
|
||||
Since each network has a unique identifier, to achieve my scheme I just concatenated the UUID with today’s date and hashed the result:
|
||||
|
||||
```
|
||||
|
||||
# eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03
|
||||
|
||||
> echo -n "${UUID}-$(date +%F)" | md5sum
|
||||
|
||||
53594de990e92f9b914a723208f22b3f -
|
||||
|
||||
```
|
||||
|
||||
That produced hex bytes which can be substituted in for the last five octets of the MAC address.
|
||||
|
||||
Note that the first byte `02` signifies the address is [locally administered][8]. Real, burned-in MAC addresses start with 3 bytes designating their manufacturer, for example `b4:b6:76` for Intel.
|
||||
|
||||
It’s possible that some routers may reject locally administered MACs but I haven’t encountered that yet.
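In shell, the construction looks roughly like this (a minimal sketch of what the full script below does; the variable names mirror that script):

```
UUID_DAILY_HASH=$(echo -n "${UUID}-$(date +%F)" | md5sum)

# prefix the locally administered byte 02, then take the first
# five byte pairs of the hash as the remaining octets
RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')"

echo "${RANDOM_MAC}"    # e.g. 02:53:59:4d:e9:90
```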
|
||||
|
||||
On every connection up, the script calls `nmcli` to set the spoofed MAC address for every connection:
|
||||
|
||||
![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png)
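Each of those calls has the form below (taken from the script further down; the UUID and MAC are just the earlier examples):

```
nmcli connection modify 618545ca-d81a-11e7-a2a4-271245e11a45 \
    wifi.cloned-mac-address 02:53:59:4d:e9:90
```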
|
||||
|
||||
As a final check, if I look at `ifconfig` I can see that the `HWaddr` is the spoofed one, not my real MAC address:
|
||||
|
||||
```
|
||||
> ifconfig
|
||||
wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d
|
||||
inet addr:192.168.0.86 Bcast:192.168.0.255 Mask:255.255.255.0
|
||||
inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link
|
||||
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
|
||||
RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0
|
||||
TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0
|
||||
collisions:0 txqueuelen:1000
|
||||
RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB)
|
||||
|
||||
```
|
||||
|
||||
The full script is [available on Github][9].
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
# /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses
|
||||
|
||||
# Configure every saved WiFi connection in NetworkManager with a spoofed MAC
|
||||
# address, seeded from the UUID of the connection and the date eg:
|
||||
# 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20'
|
||||
|
||||
# This makes your MAC impossible(?) to track across WiFi providers, and
|
||||
# for one provider to track across days.
|
||||
|
||||
# For craptive portals that authenticate based on MAC, you might want to
|
||||
# automate logging in :)
|
||||
|
||||
# Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar
|
||||
# automatically.
|
||||
|
||||
export PATH=$PATH:/usr/bin:/bin
|
||||
|
||||
LOG_FILE=/var/log/randomize-mac-addresses
|
||||
|
||||
echo "$(date): $*" > ${LOG_FILE}
|
||||
|
||||
WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3)
|
||||
|
||||
for UUID in ${WIFI_UUIDS}
|
||||
do
|
||||
UUID_DAILY_HASH=$(echo "${UUID}-$(date +%F)" | md5sum)
|
||||
|
||||
RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')"
|
||||
|
||||
CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}"
|
||||
|
||||
echo "$CMD" >> ${LOG_FILE}
|
||||
$CMD &
|
||||
done
|
||||
|
||||
wait
|
||||
```
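A quick way to test it without rebooting is to bounce a connection and check the interface afterwards (assuming your WiFi device is `wlp1s0`, as in the `ifconfig` output above):

```
nmcli connection down "Gladstone Guest" && nmcli connection up "Gladstone Guest"
ip link show wlp1s0 | grep ether    # should now show a spoofed 02:xx:... address
```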
|
||||
Enjoy!
|
||||
|
||||
_Update: [Use locally administered MAC addresses][5] to avoid clashing with real Intel ones. Thanks [@_fink][6]_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/
|
||||
|
||||
作者:[Paul M Furley ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.paulfurley.com/
|
||||
[1]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/raw/5f02fc8f6ff7fca5bca6ee4913c63bf6de15abca/randomize-mac-addresses
|
||||
[2]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f#file-randomize-mac-addresses
|
||||
[3]:https://github.com/
|
||||
[4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/
|
||||
[5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef
|
||||
[6]:https://twitter.com/fink_/status/937305600005943296
|
||||
[7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/
|
||||
[8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local
|
||||
[9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f
|
@ -1,129 +0,0 @@
|
||||
[Being translated by iron0x]
|
||||
|
||||
Use multi-stage builds
|
||||
============================================================
|
||||
|
||||
Multi-stage builds are a new feature requiring Docker 17.05 or higher on the daemon and client. Multistage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain.
|
||||
|
||||
> Acknowledgment: Special thanks to [Alex Ellis][1] for granting permission to use his blog post [Builder pattern vs. Multi-stage builds in Docker][2] as the basis of the examples below.
|
||||
|
||||
### Before multi-stage builds
|
||||
|
||||
One of the most challenging things about building images is keeping the image size down. Each instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you don’t need before moving on to the next layer. To write a really efficient Dockerfile, you have traditionally needed to employ shell tricks and other logic to keep the layers as small as possible and to ensure that each layer has the artifacts it needs from the previous layer and nothing else.
|
||||
|
||||
It was actually very common to have one Dockerfile to use for development (which contained everything needed to build your application), and a slimmed-down one to use for production, which only contained your application and exactly what was needed to run it. This has been referred to as the “builder pattern”. Maintaining two Dockerfiles is not ideal.
|
||||
|
||||
Here’s an example of a `Dockerfile.build` and `Dockerfile` which adhere to the builder pattern above:
|
||||
|
||||
`Dockerfile.build`:
|
||||
|
||||
```
|
||||
FROM golang:1.7.3
|
||||
WORKDIR /go/src/github.com/alexellis/href-counter/
|
||||
RUN go get -d -v golang.org/x/net/html
|
||||
COPY app.go .
|
||||
RUN go get -d -v golang.org/x/net/html \
|
||||
&& CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
|
||||
|
||||
```
|
||||
|
||||
Notice that this example also artificially compresses two `RUN` commands together using the Bash `&&` operator, to avoid creating an additional layer in the image. This is failure-prone and hard to maintain. It’s easy to insert another command and forget to continue the line using the `\` character, for example.
|
||||
|
||||
`Dockerfile`:
|
||||
|
||||
```
|
||||
FROM alpine:latest
|
||||
RUN apk --no-cache add ca-certificates
|
||||
WORKDIR /root/
|
||||
COPY app .
|
||||
CMD ["./app"]
|
||||
|
||||
```
|
||||
|
||||
`build.sh`:
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
echo Building alexellis2/href-counter:build
|
||||
|
||||
docker build --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \
|
||||
-t alexellis2/href-counter:build . -f Dockerfile.build
|
||||
|
||||
docker create --name extract alexellis2/href-counter:build
|
||||
docker cp extract:/go/src/github.com/alexellis/href-counter/app ./app
|
||||
docker rm -f extract
|
||||
|
||||
echo Building alexellis2/href-counter:latest
|
||||
|
||||
docker build --no-cache -t alexellis2/href-counter:latest .
|
||||
rm ./app
|
||||
|
||||
```
|
||||
|
||||
When you run the `build.sh` script, it needs to build the first image, create a container from it in order to copy the artifact out, then build the second image. Both images take up room on your system and you still have the `app` artifact on your local disk as well.
|
||||
|
||||
Multi-stage builds vastly simplify this situation!
|
||||
|
||||
### Use multi-stage builds
|
||||
|
||||
With multi-stage builds, you use multiple `FROM` statements in your Dockerfile. Each `FROM` instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image. To show how this works, let’s adapt the Dockerfile from the previous section to use multi-stage builds.
|
||||
|
||||
`Dockerfile`:
|
||||
|
||||
```
|
||||
FROM golang:1.7.3
|
||||
WORKDIR /go/src/github.com/alexellis/href-counter/
|
||||
RUN go get -d -v golang.org/x/net/html
|
||||
COPY app.go .
|
||||
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
|
||||
|
||||
FROM alpine:latest
|
||||
RUN apk --no-cache add ca-certificates
|
||||
WORKDIR /root/
|
||||
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
|
||||
CMD ["./app"]
|
||||
|
||||
```
|
||||
|
||||
You only need the single Dockerfile. You don’t need a separate build script, either. Just run `docker build`.
|
||||
|
||||
```
|
||||
$ docker build -t alexellis2/href-counter:latest .
|
||||
|
||||
```
|
||||
|
||||
The end result is the same tiny production image as before, with a significant reduction in complexity. You don’t need to create any intermediate images and you don’t need to extract any artifacts to your local system at all.
|
||||
|
||||
How does it work? The second `FROM` instruction starts a new build stage with the `alpine:latest` image as its base. The `COPY --from=0` line copies just the built artifact from the previous stage into this new stage. The Go SDK and any intermediate artifacts are left behind, and not saved in the final image.
|
||||
|
||||
### Name your build stages
|
||||
|
||||
By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first `FROM` instruction. However, you can name your stages, by adding an `as <NAME>` to the `FROM` instruction. This example improves the previous one by naming the stages and using the name in the `COPY` instruction. This means that even if the instructions in your Dockerfile are re-ordered later, the `COPY` won’t break.
|
||||
|
||||
```
|
||||
FROM golang:1.7.3 as builder
|
||||
WORKDIR /go/src/github.com/alexellis/href-counter/
|
||||
RUN go get -d -v golang.org/x/net/html
|
||||
COPY app.go .
|
||||
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
|
||||
|
||||
FROM alpine:latest
|
||||
RUN apk --no-cache add ca-certificates
|
||||
WORKDIR /root/
|
||||
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
|
||||
CMD ["./app"]
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/#name-your-build-stages
|
||||
|
||||
作者:[docker docs ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://docs.docker.com/engine/userguide/eng-image/multistage-build/
|
||||
[1]:https://twitter.com/alexellisuk
|
||||
[2]:http://blog.alexellis.io/mutli-stage-docker-builds/
|
@ -0,0 +1,307 @@
|
||||
Top 20 GNOME Extensions You Should Be Using Right Now
|
||||
============================================================
|
||||
|
||||
_Brief: You can enhance the capacity of your GNOME desktop with extensions. Here, we list the best GNOME extensions to save you the trouble of finding them on your own._
|
||||
|
||||
[GNOME extensions][9] are a major part of the [GNOME][10] experience. These extensions add a lot of value to the ecosystem whether it is to mold the Gnome Desktop Environment (DE) to your workflow, to add more functionality than there is by default, or just simply to freshen up the experience.
|
||||
|
||||
With default [Ubuntu 17.10][11] switching from [Unity to Gnome][12], now is the time to familiarize yourself with the various extensions that the GNOME community has to offer. We already showed you [how to enable and manage GNOME extensions][13]. But finding good extensions could be a daunting task. That’s why I created this list of best GNOME extensions to save you some trouble.
|
||||
|
||||
### Best GNOME Extensions
|
||||
|
||||
![Best GNOME Extensions for Ubuntu](https://itsfoss.com/wp-content/uploads/2017/12/Best-GNOME-Extensions-800x450.jpg)
|
||||
|
||||
The list is in alphabetical order, and there is no ranking involved here. The extension at the number 1 position is not better than the rest of the extensions.
|
||||
|
||||
### 1\. Appfolders Management extensions
|
||||
|
||||
One of the major features that I think GNOME is missing is the ability to organize the default application grid. This is something included by default in [KDE][14]‘s Application Dashboard, in [Elementary OS][15]‘s Slingshot Launcher, and even in macOS, yet as of [GNOME 3.26][16] it isn’t something that comes baked in. Appfolders Management extension changes that.
|
||||
|
||||
This extension gives the user an easy way to organize their applications into various folders with a simple right click > add to folder. Creating folders and adding applications to them is not only simple through this extension, but it feels so natively implemented that you will wonder why this isn’t built into the default GNOME experience.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/folders-300x225.jpg)
|
||||
|
||||
[Appfolders Management extension][17]
|
||||
|
||||
### 2\. Apt Update Indicator
|
||||
|
||||
For distributions that utilize [Apt as their package manager][18], such as Ubuntu or Debian, the Apt Update Indicator extension allows for a more streamlined update experience in GNOME.
|
||||
|
||||
The extension settles into your top bar and notifies the user of updates waiting on their system. It also displays recently added repos, residual config files, files that are auto removable, and allows the user to manually check for updates all in one basic drop-down menu.
|
||||
|
||||
It is a simple extension that adds an immense amount of functionality to any system.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/Apt-Update-300x185.jpg)
|
||||
|
||||
[Apt Update Indicator][19]
|
||||
|
||||
### 3\. Auto Move Windows
|
||||
|
||||
If, like me, you utilize multiple virtual desktops, then this extension will make your workflow much easier. Auto Move Windows allows you to set your applications to automatically open on a virtual desktop of your choosing. It is as simple as adding an application to the list and selecting the desktop you would like that application to open on.
|
||||
|
||||
From then on, every time you open that application, it will open on that desktop. This makes all the difference: as soon as you log in to your computer, all you have to do is open the application, and it immediately opens where you want it, without you having to move it around manually every time before you can get to work.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/auto-move-300x225.jpg)
|
||||
|
||||
[Auto Move Windows][20]
|
||||
|
||||
### 4\. Caffeine
|
||||
|
||||
Caffeine allows the user to keep their computer screen from auto-suspending at the flip of a switch. The coffee mug shaped extension icon embeds itself into the right side of your top bar and with a click shows that your computer is “caffeinated” with a subtle addition of steam to the mug and a notification.
|
||||
|
||||
The same is true to turn off Caffeine, enabling auto suspend and/or the screensaver again. It’s incredibly simple to use and works just as you would expect.
|
||||
|
||||
Caffeine Disabled:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/caffeine-enabled-300x78.jpg)
|
||||
|
||||
Caffeine Enabled:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/caffeine-disabled-300x75.jpg)
|
||||
|
||||
[Caffeine][21]
|
||||
|
||||
### 5\. CPU Power Management [Only for Intel CPUs]
|
||||
|
||||
This is an extension that, at first, I didn’t think would be very useful, but after some time using it I have found that functionality like this should be baked into all computers by default. At least all laptops. CPU Power Management allows you to choose how much of your computer’s resources are being used at any given time.
|
||||
|
||||
Its simple drop-down menu allows the user to change between various preset or user-made profiles that control at what frequency your CPU is to run. For example, you can set your CPU to the “Quiet” preset, which tells your computer to only use a maximum of 30% of its resources in this case.
|
||||
|
||||
On the other hand, you can set it to the “High Performance” preset to allow your computer to run at full potential. This comes in handy if you have loud fans and want to minimize the amount of noise they make or if you just need to save some battery life.
|
||||
|
||||
One thing to note is that _this only works on computers with an Intel CPU_ , so keep that in mind.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/CPU-300x194.jpg)
|
||||
|
||||
[CPU Power Management][22]
|
||||
|
||||
### 6\. Clipboard Indicator
|
||||
|
||||
Clipboard Indicator is a clean and simple clipboard management tool. The extension sits in the top bar and caches your recent clipboard history (things you copy and paste). It will continue to save this information until the user clears the extension’s history.
|
||||
|
||||
If you know that you are about to work with something that you don’t want to be saved in this way, like credit card numbers or any of your personal information, Clipboard Indicator offers a private mode that the user can toggle on and off for such cases.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/clipboard-300x200.jpg)
|
||||
|
||||
[Clipboard Indicator][23]
|
||||
|
||||
### 7\. Extensions
|
||||
|
||||
The Extensions extension allows the user to enable/disable other extensions and to access their settings from one single extension. Extensions either sits next to your other icons and extensions in the panel or in the user drop-down menu.
|
||||
|
||||
Redundancies aside, Extensions is a great way to gain easy access to all your extensions without the need to open up the GNOME Tweak Tool to do so.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/extensions-300x185.jpg)
|
||||
|
||||
[Extensions][24]
|
||||
|
||||
### 8\. Frippery Move Clock
|
||||
|
||||
For those of us who are used to having the clock to the right of the panel in Unity, this extension does the trick. Frippery Move Clock moves the clock from the middle of the top panel to the right side. It takes the calendar and notification window with it but does not migrate the notifications themselves. We have another extension later in this list, Panel OSD, that can bring your notifications over to the right as well.
|
||||
|
||||
Before Frippery:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/before-move-clock-300x19.jpg)
|
||||
|
||||
After Frippery:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/after-move-clock-300x19.jpg)
|
||||
|
||||
[Frippery Move Clock][25]
|
||||
|
||||
### 9\. Gno-Menu
|
||||
|
||||
Gno-Menu brings a more traditional menu to the GNOME DE. Not only does it add an applications menu to the top panel but it also brings a ton of functionality and customization with it. If you are used to using the Applications Menu extension traditionally found in GNOME but don’t want the bugs and issues that Ubuntu 17.10 brought to it, Gno-Menu is an awesome alternative.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/Gno-Menu-300x169.jpg)
|
||||
|
||||
[Gno-Menu][26]
|
||||
|
||||
### 10\. User Themes
|
||||
|
||||
User Themes is a must for anyone looking to customize their GNOME desktop. By default, GNOME Tweaks lets its users change the theme of the applications themselves, icons, and cursors but not the theme of the shell. User Themes fixes that by enabling us to change the theme of GNOME Shell, allowing us to get the most out of our customization experience. Check out our [video][27] or read our article to know how to [install new themes][28].
|
||||
|
||||
User Themes Off:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/user-themes-off-300x141.jpg)
|
||||
User Themes On:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/user-themes-on-300x141.jpg)
|
||||
|
||||
[User Themes][29]
|
||||
|
||||
### 11\. Hide Activities Button
|
||||
|
||||
Hide Activities Button does exactly what you would expect. It hides the Activities button found at the leftmost corner of the top panel. This button traditionally activates the activities overview in GNOME, but plenty of people use the Super key on the keyboard to do this same function.
|
||||
|
||||
Though this disables the button itself, it does not disable the hot corner. Since Ubuntu 17.10 offers the ability to shut off the hot corner in the native settings application, this is not a huge deal for Ubuntu users. For other distributions, there are a plethora of other ways to disable the hot corner if you so desire, which we will not cover in this particular article.
|
||||
|
||||
Before:
![](https://itsfoss.com/wp-content/uploads/2017/11/activies-present-300x15.jpg)

After:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/activities-removed-300x15.jpg)
|
||||
|
||||
[Hide Activities Button][30]
|
||||
|
||||
### 12\. MConnect
|
||||
|
||||
MConnect offers a way to seamlessly integrate the [KDE Connect][31] application within the GNOME desktop. Though KDE Connect offers a way for users to connect their Android handsets with virtually any Linux DE, its indicator lacks a good way to integrate more seamlessly into any DE other than [Plasma][32].
|
||||
|
||||
MConnect fixes that, giving the user a straightforward drop-down menu that allows them to send SMS messages, locate their phones, browse their phone’s file system, and to send files to their phone from the desktop. Though I had to do some tweaking to get MConnect to work just as I would expect it to, I couldn’t be any happier with the extension.
|
||||
|
||||
Do remember that you will need KDE Connect installed alongside MConnect in order to get it to work.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/MConenct-300x174.jpg)
|
||||
|
||||
[MConnect][33]
|
||||
|
||||
### 13\. OpenWeather
|
||||
|
||||
OpenWeather adds an extension to the panel that gives the user weather information at a glance. It is customizable, it lets the user view weather information for whatever location they want to, and it doesn’t rely on the computer’s location services. OpenWeather gives the user the choice between [OpenWeatherMap][34] and [Dark Sky][35] to provide the weather information that is to be displayed.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/OpenWeather-300x147.jpg)
|
||||
|
||||
[OpenWeather][36]
|
||||
|
||||
### 14\. Panel OSD
|
||||
|
||||
This is the extension I mentioned earlier which allows the user to customize the location in which their desktop notifications appear on the screen. Not only does this allow the user to move their notifications over to the right, but Panel OSD gives the user the option to put their notifications literally anywhere they want on the screen. But for us migrating from Unity to GNOME, switching the notifications from the top middle to the top right may make us feel more at home.
|
||||
|
||||
Before:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/osd1-300x40.jpg)
|
||||
|
||||
After:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/osd-300x36.jpg)
|
||||
|
||||
[Panel OSD][37]
|
||||
|
||||
### 15\. Places Status Indicator
|
||||
|
||||
Places Status Indicator has been a recommended extension for as long as people have been recommending extensions. Places adds a drop-down menu to the panel that gives the user quick access to various areas of the file system, from the home directory to servers your computer has access to and anywhere in between.
|
||||
|
||||
The convenience and usefulness of this extension become more apparent as you use it, and it becomes a fundamental way you navigate your system. I couldn’t recommend it highly enough.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/Places-288x300.jpg)
|
||||
|
||||
[Places Status Indicator][38]
|
||||
|
||||
### 16\. Refresh Wifi Connections
|
||||
|
||||
One minor annoyance in GNOME is that the Wi-Fi Networks dialog box does not have a refresh button on it when you are trying to connect to a new Wi-Fi network. Instead, it makes the user wait while the system automatically refreshes the list. Refresh Wifi Connections fixes this. It simply adds that desired refresh button to the dialog box, adding functionality that really should be included out of the box.
|
||||
|
||||
Before:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/refresh-before-292x300.jpg)
|
||||
|
||||
After:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/Refresh-after-280x300.jpg)
|
||||
|
||||
[Refresh Wifi Connections][39]
|
||||
|
||||
### 17\. Remove Dropdown Arrows
|
||||
|
||||
The Remove Dropdown Arrows extension removes the arrows on the panel that signify when an icon has a drop-down menu that you can interact with. This is purely an aesthetic tweak and isn’t always necessary as some themes remove these arrows by default. But themes such as [Numix][40], which happens to be my personal favorite, don’t remove them.
|
||||
|
||||
Remove Dropdown Arrows brings that clean look to the GNOME Shell that removes some unneeded clutter. The only bug I have encountered is that the CPU Management extension I mentioned earlier will randomly “respawn” the drop-down arrow. To turn it back off I have to disable Remove Dropdown Arrows and then enable it again until once more it randomly reappears out of nowhere.
|
||||
|
||||
Before:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/remove-arrows-before-300x17.jpg)
|
||||
|
||||
After:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/remove-arrows-after-300x14.jpg)
|
||||
|
||||
[Remove Dropdown Arrows][41]
|
||||
|
||||
### 18\. Status Area Horizontal Spacing
|
||||
|
||||
This is another extension that is purely aesthetic and is only “necessary” in certain themes. Status Area Horizontal Spacing allows the user to control the amount of space between the icons in the status bar. If you think your status icons are too close or too spaced out, then this extension has you covered. Just select the padding you would like and you’re set.
|
||||
|
||||
Maximum Spacing:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/spacing-2-300x241.jpg)
|
||||
|
||||
Minimum Spacing:
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/spacing-300x237.jpg)
|
||||
|
||||
[Status Area Horizontal Spacing][42]
|
||||
|
||||
### 19\. Steal My Focus
|
||||
|
||||
By default, when you open an application in GNOME it will sometimes stay behind what you have open if a different application has focus. GNOME then notifies you that the application you selected has opened and it is up to you to switch over to it. But, in my experience, this isn’t always consistent. There are certain applications that seem to jump to the front when opened while the rest rely on you to see the notifications to know they opened.
|
||||
|
||||
Steal My Focus changes that by removing the notification and immediately giving the user focus of the application they just opened. Because of this inconsistency, it was difficult for me to get a screenshot so you just have to trust me on this one. ;)
|
||||
|
||||
[Steal My Focus][43]
|
||||
|
||||
### 20\. Workspaces to Dock
|
||||
|
||||
This extension changed the way I use GNOME. Period. It allows me to be more productive and aware of my virtual desktops, making for a much better user experience. Workspaces to Dock allows the user to customize their overview workspaces by turning them into an interactive dock.
|
||||
|
||||
You can customize its look, size, functionality, and even position. It can be used purely for aesthetics, but I think the real gold is using it to make the workspaces more fluid, functional, and consistent with the rest of the UI.
|
||||
|
||||
![](https://itsfoss.com/wp-content/uploads/2017/11/Workspaces-to-dock-300x169.jpg)
|
||||
|
||||
[Workspaces to Dock][44]
|
||||
|
||||
### Honorable Mentions: Dash to Dock and Dash to Panel
|
||||
|
||||
Dash to Dock and Dash to Panel are not included in the official 20 extensions of this article for one main reason: Ubuntu Dock. Both extensions allow the user to turn the GNOME Dash into either a dock or a panel, respectively, and add more customization than comes by default.
|
||||
|
||||
The problem is that to get the full functionality of these two extensions you will need to jump through some hoops to disable Ubuntu Dock, which I won’t outline in this article. We acknowledge that not everyone will be using Ubuntu 17.10, so for those of you who aren’t, this may not apply to you. That being said, both of these extensions are great and are included among some of the most popular GNOME extensions you will find.
|
||||
|
||||
Currently, there is a “bug” in Dash to Dock whereby changing its settings, even with the extension disabled, applies the changes to the Ubuntu Dock as well. I say “bug” because I actually use this myself to customize Ubuntu Dock without the need for the extension to be activated. This may get patched in the future, but until then consider that a free tip.
|
||||
|
||||
### [Dash to Dock][45] [Dash to Panel][46]
|
||||
|
||||
So there you have it, our top 20 GNOME Extensions you should try right now. Which of these extensions do you particularly like? Which do you dislike? Let us know in the comments below and don’t be afraid to say something if there is anything you think we missed.
|
||||
|
||||
### About Phillip Prado
|
||||
|
||||
Phillip Prado is an avid follower of all things tech, culture, and art. Not only is he an all-around geek, he has a BA in cultural communications and considers himself a serial hobbyist. He loves hiking, cycling, poetry, video games, and movies. But no matter what his passions are there is only one thing he loves more than Linux and FOSS: coffee. You can find him (nearly) everywhere on the web as @phillipprado.
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/best-gnome-extensions/
|
||||
|
||||
作者:[ Phillip Prado][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/phillip/
|
||||
[1]:https://itsfoss.com/author/phillip/
|
||||
[2]:https://itsfoss.com/best-gnome-extensions/#comments
|
||||
[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fbest-gnome-extensions%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[4]:https://twitter.com/share?original_referer=/&text=Top+20+GNOME+Extensions+You+Should+Be+Using+Right+Now&url=https://itsfoss.com/best-gnome-extensions/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=phillipprado
|
||||
[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-gnome-extensions%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-gnome-extensions%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/best-gnome-extensions/&title=Top+20+GNOME+Extensions+You+Should+Be+Using+Right+Now
|
||||
[8]:https://www.reddit.com/submit?url=https://itsfoss.com/best-gnome-extensions/&title=Top+20+GNOME+Extensions+You+Should+Be+Using+Right+Now
|
||||
[9]:https://extensions.gnome.org/
|
||||
[10]:https://www.gnome.org/
|
||||
[11]:https://itsfoss.com/ubuntu-17-10-release-features/
|
||||
[12]:https://itsfoss.com/ubuntu-unity-shutdown/
|
||||
[13]:https://itsfoss.com/gnome-shell-extensions/
|
||||
[14]:https://www.kde.org/
|
||||
[15]:https://elementary.io/
|
||||
[16]:https://itsfoss.com/gnome-3-26-released/
|
||||
[17]:https://extensions.gnome.org/extension/1217/appfolders-manager/
|
||||
[18]:https://en.wikipedia.org/wiki/APT_(Debian)
|
||||
[19]:https://extensions.gnome.org/extension/1139/apt-update-indicator/
|
||||
[20]:https://extensions.gnome.org/extension/16/auto-move-windows/
|
||||
[21]:https://extensions.gnome.org/extension/517/caffeine/
|
||||
[22]:https://extensions.gnome.org/extension/945/cpu-power-manager/
|
||||
[23]:https://extensions.gnome.org/extension/779/clipboard-indicator/
|
||||
[24]:https://extensions.gnome.org/extension/1036/extensions/
|
||||
[25]:https://extensions.gnome.org/extension/2/move-clock/
|
||||
[26]:https://extensions.gnome.org/extension/608/gnomenu/
|
||||
[27]:https://youtu.be/9TNvaqtVKLk
|
||||
[28]:https://itsfoss.com/install-themes-ubuntu/
|
||||
[29]:https://extensions.gnome.org/extension/19/user-themes/
|
||||
[30]:https://extensions.gnome.org/extension/744/hide-activities-button/
|
||||
[31]:https://community.kde.org/KDEConnect
|
||||
[32]:https://www.kde.org/plasma-desktop
|
||||
[33]:https://extensions.gnome.org/extension/1272/mconnect/
|
||||
[34]:http://openweathermap.org/
|
||||
[35]:https://darksky.net/forecast/40.7127,-74.0059/us12/en
|
||||
[36]:https://extensions.gnome.org/extension/750/openweather/
|
||||
[37]:https://extensions.gnome.org/extension/708/panel-osd/
|
||||
[38]:https://extensions.gnome.org/extension/8/places-status-indicator/
|
||||
[39]:https://extensions.gnome.org/extension/905/refresh-wifi-connections/
|
||||
[40]:https://numixproject.github.io/
|
||||
[41]:https://extensions.gnome.org/extension/800/remove-dropdown-arrows/
|
||||
[42]:https://extensions.gnome.org/extension/355/status-area-horizontal-spacing/
|
||||
[43]:https://extensions.gnome.org/extension/234/steal-my-focus/
|
||||
[44]:https://extensions.gnome.org/extension/427/workspaces-to-dock/
|
||||
[45]:https://extensions.gnome.org/extension/307/dash-to-dock/
|
||||
[46]:https://extensions.gnome.org/extension/1160/dash-to-panel/
|
@ -1,306 +0,0 @@
|
||||
30 Best Linux Games On Steam You Should Play in 2017
|
||||
============================================================
|
||||
|
||||
When it comes to Gaming, a system running on the Windows platform is what anyone would recommend. It still is a superior choice for gamers with better graphics driver support and perfect hardware compatibility. But, what about the thought of [gaming on a Linux system][9]? Well, yes, of course – it is possible – maybe you thought of it at some point in time, but the collection of Linux games on the [Steam for Linux][10] platform wasn’t appealing at all a few years back.
|
||||
|
||||
However, that’s not true at all for the current scene. The Steam store now has a lot of great games listed for Linux platform (including a lot of major titles). So, in this article, we’ll be taking a look at the best Linux games on Steam.
|
||||
|
||||
But before we do that, let me tell you a money saving trick. If you are an avid gamer who spends plenty of time and money on gaming, you should subscribe to Humble Monthly. This monthly subscription program from [Humble Bundle][11] gives you $100 in games for just $12 each month.
|
||||
|
||||
Not all games might be available on Linux though but it is still a good deal because you get additional 10% discount on any games or books you buy from [Humble Bundle website][12].
|
||||
|
||||
The best thing here is that every purchase you make supports a charity organization. So, you are not just gaming, you are also making a difference to the world.
|
||||
|
||||
### Best Linux games on Steam
|
||||
|
||||
The list of best Linux games on Steam is in no particular ranking order.
|
||||
|
||||
Additional Note: While there’s a lot of games available on Steam for Linux, there are still a lot of problems you would face as a Linux gamer. You can refer to one of our articles to know about the [annoying experiences every Linux gamer encounters][14].
|
||||
|
||||
Jump Directly to your preferred genre of Games:
|
||||
|
||||
* [Action Games][3]
|
||||
|
||||
* [RPG Games][4]
|
||||
|
||||
* [Racing/Sports/Simulation Games][5]
|
||||
|
||||
* [Adventure Games][6]
|
||||
|
||||
* [Indie Games][7]
|
||||
|
||||
* [Strategy Games][8]
|
||||
|
||||
### Best Action Games for Linux On Steam
|
||||
|
||||
### 1\. Counter-Strike: Global Offensive (Multiplayer)
|
||||
|
||||
CS GO is definitely one of the best FPS games for Linux on Steam. I don’t think this game needs an introduction but in case you are unaware of it – I must mention that it is one of the most enjoyable FPS multiplayer games you would ever play. You’ll observe CS GO is one of the games contributing a major part to the e-sports scene. To up your rank – you need to play competitive matches. In either case, you can continue playing casual matches.
|
||||
|
||||
I could have listed Rainbow Six siege instead of Counter-Strike, but we still don’t have it for Linux/Steam OS.
|
||||
|
||||
[CS: GO (Purchase)][15]
|
||||
|
||||
### 2\. Left 4 Dead 2 (Multiplayer/Singleplayer)
|
||||
|
||||
One of the most loved first-person zombie shooter multiplayer games. You may get it for as low as 1.3 USD on a Steam sale. It is an interesting game which gives you the chills and thrills you’d expect from a zombie game. The game features swamps, cities, cemeteries, and a lot more environments to keep things interesting and horrific. The guns aren’t super techy but definitely provide a realistic experience considering it’s an old game.
|
||||
|
||||
[Left 4 Dead 2 (Purchase)][16]
|
||||
|
||||
### 3\. Borderlands 2 (Singleplayer/Co-op)
|
||||
|
||||
Borderlands 2 is an interesting take on FPS games for PC. It isn’t anything like you experienced before. The graphics look sketchy and cartoony but that does not let you miss the real action you always look for in a first-person shooter game. You can trust me on that!
|
||||
|
||||
If you are looking for one of the best Linux games with tons of DLC – Borderlands 2 will definitely suffice.
|
||||
|
||||
[Borderlands 2 (Purchase)][17]
|
||||
|
||||
### 4\. Insurgency (Multiplayer)
|
||||
|
||||
Insurgency is yet another impressive FPS game available on Steam for Linux machines. It takes a different approach by eliminating the HUD or the ammo counter. As most of the reviewers mentioned – pure shooting game focusing on the weapon and the tactics of your team. It may not be the best FPS game – but it surely is one of them if you like – Delta Force kinda shooters along with your squad.
|
||||
|
||||
[Insurgency (Purchase)][18]
|
||||
|
||||
### 5\. Bioshock: Infinite (Singleplayer)
|
||||
|
||||
Bioshock Infinite would definitely remain one of the best singleplayer FPS games ever developed for PC. You get unrealistic powers to kill your enemies. And your enemies have plenty of tricks up their sleeves as well. It is a story-rich FPS game which you should not miss playing on your Linux system!
|
||||
|
||||
[BioShock: Infinite (Purchase)][19]
|
||||
|
||||
### 6\. HITMAN – Game of the Year Edition (Singleplayer)
|
||||
|
||||
The Hitman series is obviously one of the most loved game series for a PC gamer. The recent iteration of HITMAN series saw an episodic release which wasn’t appreciated much but now with Square Enix gone, the GOTY edition announced with a few more additions is back to the spotlight. Make sure to get creative with your assassinations in the game Agent 47!
|
||||
|
||||
[HITMAN (GOTY)][20]
|
||||
|
||||
### 7\. Portal 2
|
||||
|
||||
Portal 2 is the perfect blend of action and adventure. It is a puzzle game which lets you join co-op sessions and create interesting puzzles. The co-op mode features a completely different campaign when compared to the single player mode.
|
||||
|
||||
[Portal 2 (Purchase)][21]
|
||||
|
||||
### 8\. Deux Ex: Mankind Divided
|
||||
|
||||
If you are on the lookout for a shooter game focused on stealth skills – Deux Ex would be the perfect addition to your Steam library. It is indeed a very beautiful game with some state-of-the-art weapons and crazy fighting mechanics.
|
||||
|
||||
[Deus Ex: Mankind Divided (Purchase)][22]
|
||||
|
||||
### 9\. Metro 2033 Redux / Metro Last Light Redux
|
||||
|
||||
Both Metro 2033 Redux and the Last Light are the definitive editions of the classic hit Metro 2033 and Last Light. The game has a post-apocalyptic setting. You need to eliminate all the mutants in order to ensure the survival of mankind. You should explore the rest when you get to play it!
|
||||
|
||||
[Metro 2033 Redux (Purchase)][23]
|
||||
|
||||
[Metro Last Light Redux (Purchase)][24]
|
||||
|
||||
### 10\. Tannenberg (Multiplayer)
|
||||
|
||||
Tannenberg is a brand new game – announced a month before this article was published. The game is based on the Eastern Front (1914-1918) as a part of World War I. It is a multiplayer-only game. So, if you want to experience WWI gameplay experience, look no further!
|
||||
|
||||
[Tannenberg (Purchase)][25]
|
||||
|
||||
### Best RPG Games for Linux on Steam
|
||||
|
||||
### 11\. Shadow of Mordor
|
||||
|
||||
Shadow of Mordor is one of the most exciting open world RPG games you will find listed on Steam for Linux systems. You have to fight as a ranger (Talion) with the Bright Lord (Celebrimbor) to defeat Sauron’s army (and then approach killing him). The fighting mechanics are very impressive. It is a must-try game!
|
||||
|
||||
[SOM (Purchase)][26]
|
||||
|
||||
### 12\. Divinity: Original Sin – Enhanced Edition
|
||||
|
||||
Divinity: Original is a kick-ass Indie-RPG game that’s unique in itself and very much enjoyable. It is probably one of the highest rated RPG games with a mixture of Adventure & Strategy. The enhanced edition includes new game modes and a complete revamp of voice-overs, controller support, co-op sessions, and so much more.
|
||||
|
||||
[Divinity: Original Sin (Purchase)][27]
|
||||
|
||||
### 13\. Wasteland 2: Director’s Cut
|
||||
|
||||
Wasteland 2 is an amazing CRPG game. If Fallout 4 were to be ported down as a CRPG as well – this is what we would have expected it to be. The director’s cut edition includes a complete visual overhaul with a hundred new characters.
|
||||
|
||||
[Wasteland 2 (Purchase)][28]
|
||||
|
||||
### 14\. Darkwood
|
||||
|
||||
A horror-filled top-down view RPG game. You get to explore the world, scavenge materials, and craft weapons to survive.
|
||||
|
||||
[Darkwood (Purchase)][29]
|
||||
|
||||
### Best Racing/Sports/Simulation Games
|
||||
|
||||
### 15\. Rocket League
|
||||
|
||||
Rocket League is an action-packed soccer game conceptualized by rocket-powered battle cars. Not just driving the car and heading to the goal – you can even make your opponents go – kaboom!
|
||||
|
||||
A fantastic sports-action game every gamer must have installed!
|
||||
|
||||
[Rocket League (Purchase)][30]
|
||||
|
||||
### 16\. Road Redemption
|
||||
|
||||
Missing Road Rash? Well, Road Redemption will quench your thirst as a spiritual successor to Road Rash. Of course, it is not officially “Road Rash II” – but it is equally enjoyable. If you loved Road Rash, you’ll like it too.
|
||||
|
||||
[Road Redemption (Purchase)][31]
|
||||
|
||||
### 17\. Dirt Rally
|
||||
|
||||
Dirt Rally is for the gamers who want to experience off-road and on-road racing game. The visuals are breathtaking and the game is enjoyable with near to perfect driving mechanics.
|
||||
|
||||
[Dirt Rally (Purchase)][32]
|
||||
|
||||
### 18\. F1 2017
|
||||
|
||||
F1 2017 is yet another impressive car racing game from the developers of Dirt Rally (Codemasters & Feral Interactive). It features all of the iconic F1 racing cars that you need to experience.
|
||||
|
||||
[F1 2017 (Purchase)][33]
|
||||
|
||||
### 19. GRID Autosport
|
||||
|
||||
GRID is one of the most underrated car racing games available out there. GRID Autosport is the sequel to GRID 2. The gameplay seems stunning to me. With even better cars than GRID 2, GRID Autosport is a recommended racing game for every PC gamer out there. The game also supports a multiplayer mode where you can play with your friends – representing as a team.
|
||||
|
||||
[GRID Autosport (Purchase)][34]
|
||||
|
||||
### Best Adventure Games
|
||||
|
||||
### 20\. ARK: Survival Evolved
|
||||
|
||||
ARK Survival Evolved is a quite decent survival game with exciting adventures following in due course. You find yourself in the middle of nowhere (ARK Island) and have no choice except to train the dinosaurs, team up with other players, hunt for the required resources, and craft items to maximize your chances of surviving and escaping the Island.
|
||||
|
||||
[ARK: Survival Evolved (Purchase)][35]
|
||||
|
||||
### 21\. This War of Mine
|
||||
|
||||
A unique game where you aren’t a soldier but a civilian facing the hardships of wartime. You have to make your way past highly skilled enemies and help out other survivors as well.
|
||||
|
||||
[This War of Mine (Purchase)][36]
|
||||
|
||||
### 22\. Mad Max
|
||||
|
||||
Mad Max is all about survival and brutality. It includes powerful cars, an open-world setting, weapons, and hand-to-hand combat. You need to keep exploring the place and also focus on upgrading your vehicle to prepare for the worst. You need to think carefully and have a strategy before you make a decision.
|
||||
|
||||
[Mad Max (Purchase)][37]
|
||||
|
||||
### Best Indie Games
|
||||
|
||||
### 23\. Terraria
|
||||
|
||||
It is a 2D game which has received overwhelmingly positive reviews on Steam. Dig, fight, explore, and build to keep your journey going. The environments are automatically generated. So, it isn’t anything static. You might encounter something first and your friend might encounter the same after a while. You’ll also get to experience creative 2D action-packed sequences.
|
||||
|
||||
[Terraria (Purchase)][38]
|
||||
|
||||
### 24\. Kingdoms and Castles
|
||||
|
||||
With Kingdoms and Castles, you get to build your own kingdom. You have to manage your kingdom by collecting taxes (as necessary funds) from the people, take care of the forests, handle the city design, and also make sure no one raids your kingdom by implementing proper defences.
|
||||
|
||||
It is a fairly new game but quite trending among the Indie genre of games.
|
||||
|
||||
[Kingdoms and Castles][39]
|
||||
|
||||
### Best Strategy Games on Steam For Linux Machines
|
||||
|
||||
### 25\. Sid Meier’s Civilization V
|
||||
|
||||
Sid Meier’s Civilization V is one of the best-rated strategy game available for PC. You could opt for Civilization VI – if you want. But, the gamers still root for Sid Meier’s Civilization V because of its originality and creative implementation.
|
||||
|
||||
[Civilization V (Purchase)][40]
|
||||
|
||||
### 26\. Total War: Warhammer
|
||||
|
||||
Total War: Warhammer is an incredible turn-based strategy game available for PC. Sadly, the Warhammer II isn’t available for Linux as of yet. But 2016’s Warhammer is still a great choice if you like real-time battles that involve building/destroying empires with flying creatures and magical powers.
|
||||
|
||||
[Warhammer I (Purchase)][41]
|
||||
|
||||
### 27\. Bomber Crew
|
||||
|
||||
Wanted a strategy simulation game that’s equally fun to play? Bomber Crew is the answer to it. You need to choose the right crew and maintain it in order to win it all.
|
||||
|
||||
[Bomber Crew (Purchase)][42]
|
||||
|
||||
### 28\. Age of Wonders III
|
||||
|
||||
A very popular strategy title with a mixture of empire building, role playing, and warfare. A polished turn-based strategy game you must try!
|
||||
|
||||
[Age of Wonders III (Purchase)][43]
|
||||
|
||||
### 29\. Cities: Skylines
|
||||
|
||||
A pretty straightforward strategy game to build a city from scratch and manage everything in it. You’ll experience the thrills and hardships of building and maintaining a city. I wouldn’t expect every gamer to like this game – it has a very specific userbase.
|
||||
|
||||
[Cities: Skylines (Purchase)][44]
|
||||
|
||||
### 30\. XCOM 2
|
||||
|
||||
XCOM 2 is one of the best turn-based strategy game available for PC. I wonder how crazy it could have been to have XCOM 2 as a first person shooter game. However, it’s still a masterpiece with an overwhelming response from almost everyone who bought the game. If you have the budget to spend more on this game, do get the – “War of the Chosen” – DLC.
|
||||
|
||||
[XCOM 2 (Purchase)][45]
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
Among all the games available for Linux, we did include most of the major titles and some of the latest games with an overwhelming response from the gamers.
|
||||
|
||||
Do you think we missed any of your favorite Linux game available on Steam? Also, what are the games that you would like to see on Steam for Linux platform?
|
||||
|
||||
Let us know your thoughts in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/best-linux-games-steam/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/ankush/
|
||||
[1]:https://itsfoss.com/author/ankush/
|
||||
[2]:https://itsfoss.com/best-linux-games-steam/#comments
|
||||
[3]:https://itsfoss.com/best-linux-games-steam/#action
|
||||
[4]:https://itsfoss.com/best-linux-games-steam/#rpg
|
||||
[5]:https://itsfoss.com/best-linux-games-steam/#racing
|
||||
[6]:https://itsfoss.com/best-linux-games-steam/#adv
|
||||
[7]:https://itsfoss.com/best-linux-games-steam/#indie
|
||||
[8]:https://itsfoss.com/best-linux-games-steam/#strategy
|
||||
[9]:https://itsfoss.com/linux-gaming-guide/
|
||||
[10]:https://itsfoss.com/install-steam-ubuntu-linux/
|
||||
[11]:https://www.humblebundle.com/?partner=itsfoss
|
||||
[12]:https://www.humblebundle.com/store?partner=itsfoss
|
||||
[13]:https://www.humblebundle.com/monthly?partner=itsfoss
|
||||
[14]:https://itsfoss.com/linux-gaming-problems/
|
||||
[15]:http://store.steampowered.com/app/730/CounterStrike_Global_Offensive/
|
||||
[16]:http://store.steampowered.com/app/550/Left_4_Dead_2/
|
||||
[17]:http://store.steampowered.com/app/49520/?snr=1_5_9__205
|
||||
[18]:http://store.steampowered.com/app/222880/?snr=1_5_9__205
|
||||
[19]:http://store.steampowered.com/agecheck/app/8870/
|
||||
[20]:http://store.steampowered.com/app/236870/?snr=1_5_9__205
|
||||
[21]:http://store.steampowered.com/app/620/?snr=1_5_9__205
|
||||
[22]:http://store.steampowered.com/app/337000/?snr=1_5_9__205
|
||||
[23]:http://store.steampowered.com/app/286690/?snr=1_5_9__205
|
||||
[24]:http://store.steampowered.com/app/287390/?snr=1_5_9__205
|
||||
[25]:http://store.steampowered.com/app/633460/?snr=1_5_9__205
|
||||
[26]:http://store.steampowered.com/app/241930/?snr=1_5_9__205
|
||||
[27]:http://store.steampowered.com/app/373420/?snr=1_5_9__205
|
||||
[28]:http://store.steampowered.com/app/240760/?snr=1_5_9__205
|
||||
[29]:http://store.steampowered.com/app/274520/?snr=1_5_9__205
|
||||
[30]:http://store.steampowered.com/app/252950/?snr=1_5_9__205
|
||||
[31]:http://store.steampowered.com/app/300380/?snr=1_5_9__205
|
||||
[32]:http://store.steampowered.com/app/310560/?snr=1_5_9__205
|
||||
[33]:http://store.steampowered.com/app/515220/?snr=1_5_9__205
|
||||
[34]:http://store.steampowered.com/app/255220/?snr=1_5_9__205
|
||||
[35]:http://store.steampowered.com/app/346110/?snr=1_5_9__205
|
||||
[36]:http://store.steampowered.com/app/282070/?snr=1_5_9__205
|
||||
[37]:http://store.steampowered.com/app/234140/?snr=1_5_9__205
|
||||
[38]:http://store.steampowered.com/app/105600/?snr=1_5_9__205
|
||||
[39]:http://store.steampowered.com/app/569480/?snr=1_5_9__205
|
||||
[40]:http://store.steampowered.com/app/8930/?snr=1_5_9__205
|
||||
[41]:http://store.steampowered.com/app/364360/?snr=1_5_9__205
|
||||
[42]:http://store.steampowered.com/app/537800/?snr=1_5_9__205
|
||||
[43]:http://store.steampowered.com/app/226840/?snr=1_5_9__205
|
||||
[44]:http://store.steampowered.com/app/255710/?snr=1_5_9__205
|
||||
[45]:http://store.steampowered.com/app/268500/?snr=1_5_9__205
|
||||
[46]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fbest-linux-games-steam%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[47]:https://twitter.com/share?original_referer=/&text=30+Best+Linux+Games+On+Steam+You+Should+Play+in+2017&url=https://itsfoss.com/best-linux-games-steam/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=ankushdas9
|
||||
[48]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-linux-games-steam%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[49]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-linux-games-steam%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
|
||||
[50]:https://www.reddit.com/submit?url=https://itsfoss.com/best-linux-games-steam/&title=30+Best+Linux+Games+On+Steam+You+Should+Play+in+2017
|
@ -0,0 +1,81 @@
|
||||
5 Tips to Improve Technical Writing for an International Audience
|
||||
============================================================
|
||||
|
||||
|
||||
![documentation](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/typewriter-801921_1920.jpg?itok=faTXFNoE "documentation")
|
||||
Writing in English for an international audience takes work; here are some handy tips to remember. [Creative Commons Zero][2]
|
||||
|
||||
Writing in English for an international audience does not necessarily put native English speakers in a better position. On the contrary, they tend to forget that the document's language might not be the first language of the audience. Let's have a look at the following simple sentence as an example: “Encrypt the password using the 'foo bar' command.”
|
||||
|
||||
Grammatically, the sentence is correct. Given that "-ing" forms (gerunds) are frequently used in the English language, most native speakers would probably not hesitate to phrase a sentence like this. However, on closer inspection, the sentence is ambiguous: The word “using” may refer either to the object (“the password”) or to the verb (“encrypt”). Thus, the sentence can be interpreted in two different ways:
|
||||
|
||||
* Encrypt the password that uses the 'foo bar' command.
|
||||
|
||||
* Encrypt the password by using the 'foo bar' command.
|
||||
|
||||
As long as you have previous knowledge about the topic (password encryption or the 'foo bar' command), you can resolve this ambiguity and correctly decide that the second reading is the intended meaning of this sentence. But what if you lack in-depth knowledge of the topic? What if you are not an expert but a translator with only general knowledge of the subject? Or, what if you are a non-native speaker of English who is unfamiliar with advanced grammatical forms?
|
||||
|
||||
### Know Your Audience
|
||||
|
||||
Even native English speakers may need some training to write clear and straightforward technical documentation. Raising awareness of usability and potential problems is the first step. This article, based on my talk at [Open Source Summit EU][5], offers several useful techniques. Most of them are useful not only for technical documentation but also for everyday written communication, such as writing email or reports.
|
||||
|
||||
**1\. Change perspective.** Step into your audience's shoes. Step one is to know your intended audience. If you are a developer writing for end users, view the product from their perspective. The [persona technique][6] can help to focus on the target audience and to provide the right level of detail for your readers.
|
||||
|
||||
**2\. Follow the KISS principle.** Keep it short and simple. The principle can be applied to several levels, like grammar, sentences, or words. Here are some examples:
|
||||
|
||||
_Words:_ Uncommon and long words slow down reading and might be obstacles for non-native speakers. Use simpler alternatives:
|
||||
|
||||
“utilize” → “use”
|
||||
|
||||
“indicate” → “show”, “tell”, “say”
|
||||
|
||||
“prerequisite” → “requirement”
|
||||
|
||||
_Grammar:_ Use the simplest tense that is appropriate. For example, use present tense when mentioning the result of an action: “Click _OK_. The _Printer Options_ dialog appears.”
|
||||
|
||||
_Sentences:_ As a rule of thumb, present one idea in one sentence. However, restricting sentence length to a certain number of words is not useful in my opinion. Short sentences are not automatically easy to understand (especially if they are a cluster of nouns). Sometimes, trimming down sentences to a certain word count can introduce ambiguities, which can, in turn, make sentences even more difficult to understand.
|
||||
|
||||
**3\. Beware of ambiguities.** As authors, we often do not notice ambiguity in a sentence. Having your texts reviewed by others can help identify such problems. If that's not an option, try to look at each sentence from different perspectives: Does the sentence also work for readers without in-depth knowledge of the topic? Does it work for readers with limited language skills? Is the grammatical relationship between all sentence parts clear? If the sentence does not meet these requirements, rephrase it to resolve the ambiguity.
|
||||
|
||||
**4\. Be consistent.** This applies to choice of words, spelling, and punctuation as well as phrases and structure. For lists, use parallel grammatical construction. For example:
|
||||
|
||||
Why white space is important:
|
||||
|
||||
* It focuses attention.
|
||||
|
||||
* It visually separates sections.
|
||||
|
||||
* It splits content into chunks.
|
||||
|
||||
**5\. Remove redundant content.** Keep only information that is relevant for your target audience. On a sentence level, avoid fillers (basically, easily) and unnecessary modifications:
|
||||
|
||||
"already existing" → "existing"
|
||||
|
||||
"completely new" → "new"
|
||||
|
||||
As you might have guessed by now, writing is rewriting. Good writing requires effort and practice. But even if you write only occasionally, you can significantly improve your texts by focusing on the target audience and by using basic writing techniques. The better the readability of a text, the easier it is to process, even for an audience with varying language skills. When it comes to localization especially, good quality of the source text is important: Garbage in, garbage out. If the original text has deficiencies, it will take longer to translate the text, resulting in higher costs. In the worst case, the flaws will be multiplied during translation and need to be corrected in various languages.
|
||||
|
||||
|
||||
![Tanja Roth](https://www.linux.com/sites/lcom/files/styles/floated_images/public/tanja-roth.jpg?itok=eta0fvZC "Tanja Roth")
|
||||
|
||||
Tanja Roth, Technical Documentation Specialist at SUSE Linux GmbH [Used with permission][1]
|
||||
|
||||
_Driven by an interest in both language and technology, Tanja has been working as a technical writer in mechanical engineering, medical technology, and IT for many years. She joined SUSE in 2005 and contributes to a wide range of product and project documentation, including High Availability and Cloud topics._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience?sf175396579=1
|
||||
|
||||
作者:[TANJA ROTH ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/tanja-roth
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[3]:https://www.linux.com/files/images/tanja-rothjpg
|
||||
[4]:https://www.linux.com/files/images/typewriter-8019211920jpg
|
||||
[5]:https://osseu17.sched.com/event/ByIW
|
||||
[6]:https://en.wikipedia.org/wiki/Persona_(user_experience)
|
@ -0,0 +1,82 @@
|
||||
translating---geekpi
|
||||
|
||||
FreeCAD – A 3D Modeling and Design Software for Linux
|
||||
============================================================
|
||||
![FreeCAD 3D Modeling Software](https://www.fossmint.com/wp-content/uploads/2017/12/FreeCAD-3D-Modeling-Software.png)
|
||||
|
||||
[FreeCAD][8] is a cross-platform, OpenCasCade-based mechanical engineering and product design tool. Being a parametric 3D modeler, it works with PLM, CAx, CAE, MCAD, and CAD, and its functionality can be extended using tons of advanced extensions and customization options.
|
||||
|
||||
It features a Qt-based minimalist user interface with toggleable panels, layouts, toolbars, a broad Python API, and an Open Inventor-compliant 3D scene representation model (thanks to the Coin 3D library).
|
||||
|
||||
[![FreeCAD 3D Software](https://www.fossmint.com/wp-content/uploads/2017/12/FreeCAD-3D-Software.png)][9]
|
||||
|
||||
FreeCAD 3D Software
|
||||
|
||||
As listed on the website, FreeCAD has a couple of use cases, namely:
|
||||
|
||||
> * The Home User/Hobbyist: Got yourself a project you want to build, have built, or 3D printed? Model it in FreeCAD. No previous CAD experience required. Our community will help you get the hang of it quickly!
|
||||
>
|
||||
> * The Experienced CAD User: If you use commercial CAD or BIM modeling software at work, you will find similar tools and workflow among the many workbenches of FreeCAD.
|
||||
>
|
||||
> * The Programmer: Almost all of FreeCAD’s functionality is accessible to Python. You can easily extend FreeCAD’s functionality, automatize it with scripts, build your own modules or even embed FreeCAD in your own application.
|
||||
>
|
||||
> * The Educator: Teach your students a free software with no worry about license purchase. They can install the same version at home and continue using it after leaving school.
|
||||
|
||||
#### Features in FreeCAD
|
||||
|
||||
* Freeware: FreeCAD is free for everyone to download and use.
|
||||
|
||||
* Open Source: Contribute to the source code on [GitHub][4].
|
||||
|
||||
* Cross-Platform: All Windows, Linux, and Mac users can enjoy the coolness of FreeCAD.
|
||||
|
||||
* A comprehensive [Online Documentation][5].
|
||||
|
||||
* A free [Online Manual][6] for beginners and pros alike.
|
||||
|
||||
* Annotations support e.g. text and dimensions.
|
||||
|
||||
* A built-in Python console.
|
||||
|
||||
* A fully customizable and scriptable UI.
|
||||
|
||||
* An online community for showcasing projects [here][7].
|
||||
|
||||
* Extendable modules for modeling and designing a variety of objects e.g.
|
||||
|
||||
FreeCAD has a lot more features to offer users than we can list here so feel free to see the rest of them on its website’s [Features page][11].
|
||||
|
||||
There are many 3D modeling tools on the market, but few of them are free. If you are a modeling engineer, architect, or artist looking for an application you can use without necessarily shelling out any cash, then FreeCAD is a beautiful open-source project you should check out.
|
||||
|
||||
Give it a test-drive and see if you don’t like it.
|
||||
|
||||
[Download FreeCAD for Linux][13]
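If you would rather install it from your distribution's repositories instead of the download page, FreeCAD is packaged for most major distributions. As a rough sketch, on a Debian- or Ubuntu-based system (assuming the package is named `freecad`, which is the usual name):

```
sudo apt install freecad
```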
|
||||
|
||||
Are you already a FreeCAD user? Which of its features do you enjoy the most and have you come across any alternatives that may go head to head with its abilities?
|
||||
|
||||
Remember that your comments, suggestions, and constructive criticisms are always welcome in the comments section below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.fossmint.com/freecad-3d-modeling-and-design-software-for-linux/
|
||||
|
||||
作者:[Martins D. Okoi ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.fossmint.com/author/dillivine/
|
||||
[1]:https://www.fossmint.com/author/dillivine/
|
||||
[2]:https://www.fossmint.com/author/dillivine/
|
||||
[3]:https://www.fossmint.com/freecad-3d-modeling-and-design-software-for-linux/#disqus_thread
|
||||
[4]:https://github.com/FreeCAD/FreeCAD
|
||||
[5]:https://www.freecadweb.org/wiki/Main_Page
|
||||
[6]:https://www.freecadweb.org/wiki/Manual
|
||||
[7]:https://forum.freecadweb.org/viewforum.php?f=24
|
||||
[8]:http://www.freecadweb.org/
|
||||
[9]:https://www.fossmint.com/wp-content/uploads/2017/12/FreeCAD-3D-Software.png
|
||||
[10]:https://www.fossmint.com/synfig-an-adobe-animate-alternative-for-gnulinux/
|
||||
[11]:https://www.freecadweb.org/wiki/Feature_list
|
||||
[12]:http://www.tecmint.com/red-hat-rhcsa-rhce-exam-certification-book/
|
||||
[13]:https://www.freecadweb.org/wiki/Download
|
@ -0,0 +1,47 @@
|
||||
translating---geekpi
|
||||
|
||||
# GNOME Boxes Makes It Easier to Test Drive Linux Distros
|
||||
|
||||
![GNOME Boxes Distribution Selection](http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/GNOME-Boxes-INstall-Distros-750x475.jpg)
|
||||
|
||||
Creating Linux virtual machines on the GNOME desktop is about to get a whole lot easier.
|
||||
|
||||
The next major release of [_GNOME Boxes_][5] is able to download popular Linux (and BSD-based) operating systems directly inside the app itself.
|
||||
|
||||
Boxes is free, open-source software. It can be used to access both remote and virtual systems as it is built around [QEMU][6], KVM, and libvirt virtualisation technologies.
|
||||
|
||||
For its new ISO-toting integration _Boxes_ makes use of [libosinfo][7], a database of operating systems that also provides details on any virtualized environment requirements.
|
||||
|
||||
In [this (mis-titled) video][8] from GNOME developer Felipe Borges you can see just how easy the improved ‘Source Selection’ screen makes things, including the ability to download a specific ISO architecture for a given distro:
|
||||
|
||||
[video](https://youtu.be/CGahI05Gbac)
|
||||
|
||||
Despite it being a core GNOME app, I have to confess that I have never used Boxes. It’s not that I don’t hear good things about it (I do); it’s just that I’m more familiar with setting up and configuring virtual machines in VirtualBox.
|
||||
|
||||
> ‘The lazy geek inside me is going to appreciate this integration’
|
||||
|
||||
Admittedly, it’s not exactly _difficult_ to head out and download an ISO using the browser, then point a virtual machine app to it (heck, it’s what most of us have been doing for a decade or so).
|
||||
|
||||
But the lazy geek inside me is really going to appreciate this integration.
|
||||
|
||||
So, thanks to this feature I’ll be unpacking Boxes on my system when GNOME 3.28 is released next March. I will be able to launch _Boxes_, close my eyes, pick a distro from the list at random, and instantly broaden my Tux-shaped horizons.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2017/12/gnome-boxes-install-linux-distros-directly
|
||||
|
||||
作者:[ JOEY SNEDDON ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[2]:http://www.omgubuntu.co.uk/category/dev
|
||||
[3]:http://www.omgubuntu.co.uk/category/video
|
||||
[4]:http://www.omgubuntu.co.uk/2017/12/gnome-boxes-install-linux-distros-directly
|
||||
[5]:https://en.wikipedia.org/wiki/GNOME_Boxes
|
||||
[6]:https://en.wikipedia.org/wiki/QEMU
|
||||
[7]:https://libosinfo.org/
|
||||
[8]:https://blogs.gnome.org/felipeborges/boxes-downloadable-oses/
|
113
sources/tech/20171204 Improve your Bash scripts with Argbash.md
Normal file
113
sources/tech/20171204 Improve your Bash scripts with Argbash.md
Normal file
@ -0,0 +1,113 @@
|
||||
# [Improve your Bash scripts with Argbash][1]
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/11/argbash-1-945x400.png)
|
||||
|
||||
Do you write or maintain non-trivial bash scripts? If so, you probably want them to accept command-line arguments in a standard and robust way. Fedora recently got [a nice addition][2] which can help you produce better scripts. And don’t worry, it won’t cost you much of your time or energy.
|
||||
|
||||
### Why Argbash?
|
||||
|
||||
Bash is an interpreted command-line language with no standard library. Therefore, if you write bash scripts and want command-line interfaces that conform to [POSIX][3] and [GNU CLI][4] standards, you’re used to only two options:
|
||||
|
||||
1. Write the argument-parsing functionality tailored to your script yourself (possibly using the `getopts` builtin).
|
||||
|
||||
2. Use an external bash module.
|
||||
|
||||
The first option looks incredibly silly as implementing the interface properly is not trivial. However, it is suggested as the best choice on various sites ranging from [Stack Overflow][5] to the [Bash Hackers][6] wiki.
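To get a feel for why, here is a minimal sketch of what a hand-rolled version of the line-drawer interface discussed later in this article could look like with `getopts` (short options only; supporting long GNU-style options such as `--length` is exactly the non-trivial part):

```
#!/bin/bash
# Hypothetical hand-rolled parsing: -l LENGTH, -V (verbose),
# plus one optional positional CHARACTER argument.
length=80
verbose=0
while getopts "l:V" opt; do
    case "$opt" in
        l) length="$OPTARG" ;;
        V) verbose=1 ;;
        *) echo "usage: $0 [-l LENGTH] [-V] [CHARACTER]" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
character="${1:--}"
```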
|
||||
|
||||
The second option looks smarter, but using a module has its issues. The biggest is you have to bundle its code with your script. This may mean either:
|
||||
|
||||
* You distribute the library as a separate file, or
|
||||
|
||||
* You include the library code at the beginning of your script.
|
||||
|
||||
Having two files instead of one is awkward. So is polluting your bash scripts with a chunk of complex code over a thousand lines long.
|
||||
|
||||
This was the main reason why the Argbash [project came to life][7]. Argbash is a code generator, so it generates a tailor-made parsing library for your script. Unlike the generic code of other bash modules, it produces minimal code your script needs. Moreover, you can request even simpler code if you don’t need 100% conformance to these CLI standards.
|
||||
|
||||
### Example
|
||||
|
||||
### Analysis
|
||||
|
||||
Let’s say you want to implement a script that [draws a bar][8] across the terminal window. You do that by repeating a single character of your choice multiple times. This means you need to get the following information from the command-line:
|
||||
|
||||
* _The character which is the element of the line. If not specified, use a dash._ On the command-line, this would be a single-valued positional argument _character_ with a default value of -.
|
||||
|
||||
* _Length of the line. If not specified, go for 80._ This is a single-valued optional argument _--length_ with a default of 80.
|
||||
|
||||
* _Verbose mode (for debugging)._ This is a boolean argument _verbose_ , off by default.
|
||||
|
||||
As the body of the script is really simple, this article focuses on getting the input of the user from the command-line to appropriate script variables. Argbash generates code that saves parsing results to the shell variables `_arg_character`, `_arg_length` and `_arg_verbose`.
|
||||
|
||||
### Execution
|
||||
|
||||
In order to proceed, you need the _argbash-init_ and _argbash_ bash scripts that are part of the _argbash_ package. Therefore, run this command:
|
||||
|
||||
```
|
||||
sudo dnf install argbash
|
||||
```
|
||||
|
||||
Then, use _argbash-init_ to generate a template for _argbash_, which generates the executable script. You want three arguments: a positional one called _character_, an optional _length_ and an optional boolean _verbose_. Tell this to _argbash-init_, and then pass the output to _argbash_:
|
||||
|
||||
```
|
||||
argbash-init --pos character --opt length --opt-bool verbose script-template.sh
|
||||
argbash script-template.sh -o script
|
||||
./script
|
||||
```
|
||||
|
||||
See the help message? Looks like the script doesn’t know about the default option for the character argument. So take a look at the [Argbash API][9], and then fix the issue by editing the template section of the script:
|
||||
|
||||
```
|
||||
# ...
|
||||
# ARG_OPTIONAL_SINGLE([length],[l],[Length of the line],[80])
|
||||
# ARG_OPTIONAL_BOOLEAN([verbose],[V],[Debug mode])
|
||||
# ARG_POSITIONAL_SINGLE([character],[The element of the line],[-])
|
||||
# ARG_HELP([The line drawer])
|
||||
# ...
|
||||
```
|
||||
|
||||
Argbash is so smart that it tries to make every generated script a template of itself. This means you don’t have to worry about storing source templates for further use. You just shouldn’t lose your generated bash scripts. Now, try to regenerate the future line drawer to work as expected:
|
||||
|
||||
```
|
||||
argbash script -o script
|
||||
./script
|
||||
```
|
||||
|
||||
As you can see, everything is working all right. The only thing left to do is fill in the line drawing functionality itself.
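For illustration, here is a minimal sketch of that remaining part, appended after the generated parsing section. It assumes the `_arg_character`, `_arg_length` and `_arg_verbose` variables mentioned above, with the boolean reported as `on`/`off`:

```
# Draw the bar using the variables filled in by the generated parsing code.
if [ "$_arg_verbose" = "on" ]; then
    echo "drawing a line of $_arg_length '$_arg_character' characters" >&2
fi
i=0
while [ "$i" -lt "$_arg_length" ]; do
    printf '%s' "$_arg_character"
    i=$((i + 1))
done
printf '\n'
```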
|
||||
|
||||
### Conclusion
|
||||
|
||||
You might find the section containing parsing code quite long, but consider that it allows you to call _./script.sh x -Vl50_ and it will be understood the same way as _./script -V -l 50 x_. It does require some code to get this right.
|
||||
|
||||
However, you can shift the balance between generated code complexity and parsing abilities towards more simple code by calling _argbash-init_ with the argument _--mode_ set to _minimal_. This option reduces the size of the script by about 20 lines, which corresponds to a roughly 25% decrease of the generated parsing code size. On the other hand, the _full_ mode makes the script even smarter.
|
||||
|
||||
If you want to examine the generated code, give _argbash_ the argument _--commented_, which puts comments into the parsing code that reveal the intent behind various sections. Compare that to other argument parsing libraries such as [shflags][10], [argsparse][11] or [bash-modules/arguments][12], and you’ll see the powerful simplicity of Argbash. If something goes horribly wrong and you need to fix a glitch in the parsing functionality quickly, Argbash allows you to do that as well.
|
||||
|
||||
As you’re most likely a Fedora user, you can enjoy the luxury of having command-line Argbash installed from the official repositories. However, there is also an [online parsing code generator][13] at your service. Furthermore, if you’re working on a server with Docker, you can appreciate the [Argbash Docker image][14].
|
||||
|
||||
So enjoy and make sure that your scripts have a command-line interface that pleases your users. Argbash is here to help, with minimal effort required from your side.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/improve-bash-scripts-argbash/
|
||||
|
||||
作者:[Matěj Týč ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/bubla/
|
||||
[1]:https://fedoramagazine.org/improve-bash-scripts-argbash/
|
||||
[2]:https://argbash.readthedocs.io/
|
||||
[3]:http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap12.html
|
||||
[4]:https://www.gnu.org/prep/standards/html_node/Command_002dLine-Interfaces.html
|
||||
[5]:https://stackoverflow.com/questions/192249/how-do-i-parse-command-line-arguments-in-bash
|
||||
[6]:http://wiki.bash-hackers.org/howto/getopts_tutorial
|
||||
[7]:https://argbash.readthedocs.io/
|
||||
[8]:http://wiki.bash-hackers.org/snipplets/print_horizontal_line
|
||||
[9]:http://argbash.readthedocs.io/en/stable/guide.html#argbash-api
|
||||
[10]:https://raw.githubusercontent.com/Anvil/bash-argsparse/master/argsparse.sh
|
||||
[11]:https://raw.githubusercontent.com/Anvil/bash-argsparse/master/argsparse.sh
|
||||
[12]:https://raw.githubusercontent.com/vlisivka/bash-modules/master/main/bash-modules/src/bash-modules/arguments.sh
|
||||
[13]:https://argbash.io/generate
|
||||
[14]:https://hub.docker.com/r/matejak/argbash/
|
@ -0,0 +1,210 @@
|
||||
# Tutorial on how to write basic udev rules in Linux
|
||||
|
||||
Contents
|
||||
|
||||
* * [1. Objective][4]
|
||||
|
||||
* [2. Requirements][5]
|
||||
|
||||
* [3. Difficulty][6]
|
||||
|
||||
* [4. Conventions][7]
|
||||
|
||||
* [5. Introduction][8]
|
||||
|
||||
* [6. How rules are organized][9]
|
||||
|
||||
* [7. The rules syntax][10]
|
||||
|
||||
* [8. A test case][11]
|
||||
|
||||
* [9. Operators][12]
|
||||
* * [9.1.1. == and != operators][1]
|
||||
|
||||
* [9.1.2. The assignment operators: = and :=][2]
|
||||
|
||||
* [9.1.3. The += and -= operators][3]
|
||||
|
||||
* [10. The keys we used][13]
|
||||
|
||||
### Objective
|
||||
|
||||
Understand the basic concepts behind udev and learn how to write simple rules
|
||||
|
||||
### Requirements
|
||||
|
||||
* Root permissions
|
||||
|
||||
### Difficulty
|
||||
|
||||
MEDIUM
|
||||
|
||||
### Conventions
|
||||
|
||||
* **#** - requires given command to be executed with root privileges either directly as a root user or by use of `sudo` command
|
||||
|
||||
* **$** - given command to be executed as a regular non-privileged user
|
||||
|
||||
### Introduction
|
||||
|
||||
In a GNU/Linux system, while low-level device support is handled at the kernel level, the management of events related to those devices is handled in userspace by `udev`, and more precisely by the `udevd` daemon. Learning how to write rules to be applied when those events occur can be really useful for modifying the behavior of the system and adapting it to our needs.
|
||||
|
||||
### How rules are organized
|
||||
|
||||
Udev rules are defined in files with the `.rules` extension. There are two main locations in which those files can be placed: `/usr/lib/udev/rules.d`, the directory used for system-installed rules, and `/etc/udev/rules.d/`, which is reserved for custom-made rules.
|
||||
|
||||
The files in which the rules are defined are conventionally named with a number as prefix (e.g `50-udev-default.rules`) and are processed in lexical order independently of the directory they are in. Files installed in `/etc/udev/rules.d`, however, override those with the same name installed in the system default path.
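For example, to get an idea of what is already there on a typical system (the exact file names will vary between distributions):

```
# System-installed rules shipped by packages:
ls /usr/lib/udev/rules.d/
# Custom rules; a file here overrides a system file with the same name:
ls /etc/udev/rules.d/
```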
|
||||
|
||||
### The rules syntax
|
||||
|
||||
The syntax of udev rules is not very complicated once you understand the logic behind it. A rule is composed of two main sections: the "match" part, in which we define the conditions for the rule to be applied, using a series of keys separated by commas, and the "action" part, in which we perform some kind of action when the conditions are met.
|
||||
|
||||
### A test case
|
||||
|
||||
What better way to explain the possible options than to configure an actual rule? As an example, we are going to define a rule to disable the touchpad when a mouse is connected. Obviously, the attributes provided in the rule definition will reflect my hardware.
|
||||
|
||||
We will write our rule in the `/etc/udev/rules.d/99-togglemouse.rules` file with the help of our favorite text editor. A rule definition can span over multiple lines, but if that's the case, a backslash must be used before the newline character, as a line continuation, just as in shell scripts. Here is our rule:
|
||||
```
|
||||
ACTION=="add" \
|
||||
, ATTRS{idProduct}=="c52f" \
|
||||
, ATTRS{idVendor}=="046d" \
|
||||
, ENV{DISPLAY}=":0" \
|
||||
, ENV{XAUTHORITY}="/run/user/1000/gdm/Xauthority" \
|
||||
, RUN+="/usr/bin/xinput --disable 16"
|
||||
```
|
||||
Let's analyze it.
|
||||
|
||||
### Operators
|
||||
|
||||
First of all, an explanation of the used and possible operators:
|
||||
|
||||
#### == and != operators
|
||||
|
||||
The `==` is the equality operator and the `!=` is the inequality operator. By using them, we establish that, for the rule to be applied, the defined keys must match, or not match, the defined value respectively.
|
||||
|
||||
#### The assignment operators: = and :=
|
||||
|
||||
The `=` assignment operator is used to assign a value to keys that accept one. We use the `:=` operator, instead, when we want to assign a value and want to make sure that it is not overridden by other rules: values assigned with this operator, in fact, cannot be altered.
|
||||
|
||||
#### The += and -= operators
|
||||
|
||||
The `+=` and `-=` operators are used respectively to add or to remove a value from the list of values defined for a specific key.
|
||||
|
||||
### The keys we used
|
||||
|
||||
Let's now analyze the keys we used in the rule. First of all, we have the `ACTION` key: by using it, we specified that our rule is to be applied when a specific event happens for the device. Valid values are `add`, `remove` and `change`.
|
||||
|
||||
We then used the `ATTRS` keyword to specify an attribute to be matched. We can list a device's attributes by using the `udevadm info` command, providing its name or `sysfs` path:
|
||||
```
|
||||
udevadm info -ap /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39
|
||||
|
||||
Udevadm info starts with the device specified by the devpath and then
|
||||
walks up the chain of parent devices. It prints for every device
|
||||
found, all possible attributes in the udev rules key format.
|
||||
A rule to match, can be composed by the attributes of the device
|
||||
and the attributes from one single parent device.
|
||||
|
||||
looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39':
|
||||
KERNEL=="input39"
|
||||
SUBSYSTEM=="input"
|
||||
DRIVER==""
|
||||
ATTR{name}=="Logitech USB Receiver"
|
||||
ATTR{phys}=="usb-0000:00:1d.0-1.2/input1"
|
||||
ATTR{properties}=="0"
|
||||
ATTR{uniq}==""
|
||||
|
||||
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010':
|
||||
KERNELS=="0003:046D:C52F.0010"
|
||||
SUBSYSTEMS=="hid"
|
||||
DRIVERS=="hid-generic"
|
||||
ATTRS{country}=="00"
|
||||
|
||||
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1':
|
||||
KERNELS=="2-1.2:1.1"
|
||||
SUBSYSTEMS=="usb"
|
||||
DRIVERS=="usbhid"
|
||||
ATTRS{authorized}=="1"
|
||||
ATTRS{bAlternateSetting}==" 0"
|
||||
ATTRS{bInterfaceClass}=="03"
|
||||
ATTRS{bInterfaceNumber}=="01"
|
||||
ATTRS{bInterfaceProtocol}=="00"
|
||||
ATTRS{bInterfaceSubClass}=="00"
|
||||
ATTRS{bNumEndpoints}=="01"
|
||||
ATTRS{supports_autosuspend}=="1"
|
||||
|
||||
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2':
|
||||
KERNELS=="2-1.2"
|
||||
SUBSYSTEMS=="usb"
|
||||
DRIVERS=="usb"
|
||||
ATTRS{authorized}=="1"
|
||||
ATTRS{avoid_reset_quirk}=="0"
|
||||
ATTRS{bConfigurationValue}=="1"
|
||||
ATTRS{bDeviceClass}=="00"
|
||||
ATTRS{bDeviceProtocol}=="00"
|
||||
ATTRS{bDeviceSubClass}=="00"
|
||||
ATTRS{bMaxPacketSize0}=="8"
|
||||
ATTRS{bMaxPower}=="98mA"
|
||||
ATTRS{bNumConfigurations}=="1"
|
||||
ATTRS{bNumInterfaces}==" 2"
|
||||
ATTRS{bcdDevice}=="3000"
|
||||
ATTRS{bmAttributes}=="a0"
|
||||
ATTRS{busnum}=="2"
|
||||
ATTRS{configuration}=="RQR30.00_B0009"
|
||||
ATTRS{devnum}=="12"
|
||||
ATTRS{devpath}=="1.2"
|
||||
ATTRS{idProduct}=="c52f"
|
||||
ATTRS{idVendor}=="046d"
|
||||
ATTRS{ltm_capable}=="no"
|
||||
ATTRS{manufacturer}=="Logitech"
|
||||
ATTRS{maxchild}=="0"
|
||||
ATTRS{product}=="USB Receiver"
|
||||
ATTRS{quirks}=="0x0"
|
||||
ATTRS{removable}=="removable"
|
||||
ATTRS{speed}=="12"
|
||||
ATTRS{urbnum}=="1401"
|
||||
ATTRS{version}==" 2.00"
|
||||
|
||||
[...]
|
||||
```
|
||||
Above is the truncated output received after running the command. As you can read from the output itself, `udevadm` starts with the specified path that we provided, and gives us information about all the parent devices. Notice that attributes of the device are reported in singular form (e.g. `KERNEL`), while the parent ones are in plural form (e.g. `KERNELS`). The parent information can be part of a rule, but only one of the parents can be referenced at a time: mixing attributes of different parent devices will not work. In the rule we defined above, we used the attributes of one parent device: `idProduct` and `idVendor`.
|
||||
|
||||
The next thing we have done in our rule is to use the `ENV` keyword: it can be used both to set and to try to match environment variables. We assigned a value to the `DISPLAY` and `XAUTHORITY` ones. Those variables are essential when interacting with the X server programmatically, to set up some needed information: with the `DISPLAY` variable, we specify on what machine the server is running and what display and screen we are referencing, and with `XAUTHORITY` we provide the path to the file which contains the Xorg authentication and authorization information. This file is usually located in the user's home directory.
|
||||
|
||||
Finally, we used the `RUN` keyword: this is used to run external programs. Very important: the command is not executed immediately; instead, the various actions are executed once all the rules have been parsed. In this case we used the `xinput` utility to change the status of the touchpad. I will not explain the syntax of xinput here, as it would be out of context; just notice that `16` is the id of the touchpad.
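The id `16` is specific to my hardware. A quick way to find the right id on your own system is the `xinput` utility itself; run it inside your graphical session and note the `id=` value next to the touchpad entry (the output below is only an example):

```
$ xinput list
⎡ Virtual core pointer                    id=2    [master pointer  (3)]
⎜   ↳ SynPS/2 Synaptics TouchPad          id=16   [slave  pointer  (2)]
...
```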
|
||||
|
||||
Once our rule is set, we can debug it by using the `udevadm test` command. This is useful for debugging but it doesn't really run commands specified using the `RUN` key:
|
||||
```
|
||||
$ udevadm test --action="add" /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39
|
||||
```
|
||||
What we provided to the command is the action to simulate, using the `--action` option, and the sysfs path of the device. If no errors are reported, our rule should be good to go. To run it in the real world, we must reload the rules:
|
||||
```
|
||||
# udevadm control --reload
|
||||
```
|
||||
This command will reload the rule files; however, it will take effect only on newly generated events.
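If you also want the rule to be applied to devices that are already connected, you can additionally ask udev to replay the events for existing devices; for a rule matching `ACTION=="add"` like ours, that would look something like:

```
# udevadm trigger --action=add
```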
|
||||
|
||||
We have seen the basic concepts and logic used to create a udev rule; however, we have only scratched the surface of the many options and possible settings. The udev manpage provides an exhaustive list: please refer to it for more in-depth knowledge.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux
|
||||
|
||||
作者:[Egidio Docile ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://disqus.com/by/egidiodocile/
|
||||
[1]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-1-and-operators
|
||||
[2]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-2-the-assignment-operators-and
|
||||
[3]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-3-the-and-operators
|
||||
[4]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h1-objective
|
||||
[5]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h2-requirements
|
||||
[6]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h3-difficulty
|
||||
[7]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h4-conventions
|
||||
[8]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h5-introduction
|
||||
[9]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h6-how-rules-are-organized
|
||||
[10]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h7-the-rules-syntax
|
||||
[11]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h8-a-test-case
|
||||
[12]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-operators
|
||||
[13]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h10-the-keys-we-used
|
@ -0,0 +1,102 @@
|
||||
ANNOUNCING THE GENERAL AVAILABILITY OF CONTAINERD 1.0, THE INDUSTRY-STANDARD RUNTIME USED BY MILLIONS OF USERS
|
||||
============================================================
|
||||
|
||||
Today, we’re pleased to announce that containerd (pronounced Con-Tay-Ner-D), an industry-standard runtime for building container solutions, has reached its 1.0 milestone. containerd has already been deployed in millions of systems in production today, making it the most widely adopted runtime and an essential upstream component of the Docker platform.
|
||||
|
||||
Built to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes, containerd ensures users have a consistent dev to ops experience. From [Docker’s initial announcement][22] last year that it was spinning out its core runtime to [its donation to the CNCF][23] in March 2017, the containerd project has experienced significant growth and progress over the past 12 months.
|
||||
|
||||
Within both the Docker and Kubernetes communities, there has been a significant uptick in contributions from independents and CNCF member companies alike, including Docker, Google, NTT, IBM, Microsoft, AWS, ZTE, Huawei and ZJU. Similarly, the maintainers have been working to add key functionality to containerd. The initial containerd donation provided everything users need to ensure a seamless container experience, including methods for:
|
||||
|
||||
* transferring container images,
|
||||
|
||||
* container execution and supervision,
|
||||
|
||||
* low-level local storage and network interfaces and
|
||||
|
||||
* the ability to work on both Linux, Windows and other platforms.
|
||||
|
||||
Additional work has been done to add even more powerful capabilities to containerd, including:
|
||||
|
||||
* Complete storage and distribution system that supports both OCI and Docker image formats and
|
||||
|
||||
* Robust events system
|
||||
|
||||
* More sophisticated snapshot model to manage container filesystems
|
||||
|
||||
These changes helped the team build out a smaller interface for the snapshotters, while still fulfilling the requirements needed from things like a builder. It also reduces the amount of code needed, making it much easier to maintain in the long run.
|
||||
|
||||
The containerd 1.0 milestone comes after several months of testing both the alpha and beta versions, which enabled the team to implement many performance improvements. Some of these improvements include the creation of a stress testing system, and improvements in garbage collection and shim memory usage.
|
||||
|
||||
“In 2017, key functionality has been added to containerd to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes,” said Michael Crosby, Maintainer for containerd and engineer at Docker. “Since our announcement in December, we have been progressing the design of the project with the goal of making it easily embeddable in higher level systems to provide core container capabilities. We will continue to work with the community to create a runtime that’s lightweight yet powerful, balancing new functionality with the desire for code that is easy to support and maintain.”
|
||||
|
||||
containerd is already being used by Kubernetes for its [cri-containerd project][24], which enables users to run Kubernetes clusters using containerd as the underlying runtime. containerd is also an essential upstream component of the Docker platform and is currently used by millions of end users. There is also strong alignment with other CNCF projects: containerd exposes an API using [gRPC][25] and exposes metrics in the [Prometheus][26] format. containerd also fully leverages the Open Container Initiative (OCI) runtime, image format specifications and OCI reference implementation ([runC][27]), and will pursue OCI certification when it is available.
|
||||
|
||||
Key Milestones in the progress to 1.0 include:
|
||||
|
||||
![containerd 1.0](https://i2.wp.com/blog.docker.com/wp-content/uploads/4f8d8c4a-6233-4d96-a0a2-77ed345bf42b-5.jpg?resize=720%2C405&ssl=1)
|
||||
|
||||
Notable containerd facts and figures:
|
||||
|
||||
* 1994 GitHub stars, 401 forks
|
||||
|
||||
* 108 contributors
|
||||
|
||||
* 8 maintainers from independents and member companies alike, including Docker, Google, IBM, ZTE and ZJU.
|
||||
|
||||
* 3030+ commits, 26 releases
|
||||
|
||||
Availability and Resources
|
||||
|
||||
To participate in containerd: [github.com/containerd/containerd][28]
|
||||
|
||||
* Getting Started with containerd: [http://mobyproject.org/blog/2017/08/15/containerd-getting-started/][8]
|
||||
|
||||
* Roadmap: [https://github.com/containerd/containerd/blob/master/ROADMAP.md][1]
|
||||
|
||||
* Scope table: [https://github.com/containerd/containerd#scope][2]
|
||||
|
||||
* Architecture document: [https://github.com/containerd/containerd/blob/master/design/architecture.md][3]
|
||||
|
||||
* APIs: [https://github.com/containerd/containerd/tree/master/api/][9].
|
||||
|
||||
* Learn more about containerd at KubeCon by attending Justin Cormack’s [LinuxKit & Kubernetes talk at Austin Docker Meetup][10], Patrick Chanezon’s [Moby session][11], [Phil Estes’ session][12], or the [containerd salon][13]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.docker.com/2017/12/cncf-containerd-1-0-ga-announcement/
|
||||
|
||||
作者:[Patrick Chanezon ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://blog.docker.com/author/chanezon/
|
||||
[1]:https://github.com/docker/containerd/blob/master/ROADMAP.md
|
||||
[2]:https://github.com/docker/containerd#scope
|
||||
[3]:https://github.com/docker/containerd/blob/master/design/architecture.md
|
||||
[4]:http://www.linkedin.com/shareArticle?mini=true&url=http://dockr.ly/2ArQe3G&title=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users&summary=Today,%20we%E2%80%99re%20pleased%20to%20announce%20that%20containerd%20(pronounced%20Con-Tay-Ner-D),%20an%20industry-standard%20runtime%20for%20building%20container%20solutions,%20has%20reached%20its%201.0%20milestone.%20containerd%20has%20already%20been%20deployed%20in%20millions%20of%20systems%20in%20production%20today,%20making%20it%20the%20most%20widely%20adopted%20runtime%20and%20an%20essential%20upstream%20component%20of%20the%20Docker%20platform.%20Built%20...
|
||||
[5]:http://www.reddit.com/submit?url=http://dockr.ly/2ArQe3G&title=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users
|
||||
[6]:https://plus.google.com/share?url=http://dockr.ly/2ArQe3G
|
||||
[7]:http://news.ycombinator.com/submitlink?u=http://dockr.ly/2ArQe3G&t=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users
|
||||
[8]:http://mobyproject.org/blog/2017/08/15/containerd-getting-started/
|
||||
[9]:https://github.com/docker/containerd/tree/master/api/
|
||||
[10]:https://www.meetup.com/Docker-Austin/events/245536895/
|
||||
[11]:http://sched.co/CU6G
|
||||
[12]:https://kccncna17.sched.com/event/CU6g/embedding-the-containerd-runtime-for-fun-and-profit-i-phil-estes-ibm
|
||||
[13]:https://kccncna17.sched.com/event/Cx9k/containerd-salon-hosted-by-derek-mcgowan-docker-lantao-liu-google
|
||||
[14]:https://blog.docker.com/author/chanezon/
|
||||
[15]:https://blog.docker.com/tag/cloud-native-computing-foundation/
|
||||
[16]:https://blog.docker.com/tag/cncf/
|
||||
[17]:https://blog.docker.com/tag/container-runtime/
|
||||
[18]:https://blog.docker.com/tag/containerd/
|
||||
[19]:https://blog.docker.com/tag/cri-containerd/
|
||||
[20]:https://blog.docker.com/tag/grpc/
|
||||
[21]:https://blog.docker.com/tag/kubernetes/
|
||||
[22]:https://blog.docker.com/2016/12/introducing-containerd/
|
||||
[23]:https://blog.docker.com/2017/03/docker-donates-containerd-to-cncf/
|
||||
[24]:http://blog.kubernetes.io/2017/11/containerd-container-runtime-options-kubernetes.html
|
||||
[25]:http://www.grpc.io/
|
||||
[26]:https://prometheus.io/
|
||||
[27]:https://github.com/opencontainers/runc
|
||||
[28]:http://github.com/containerd/containerd
|
@ -0,0 +1,133 @@
|
||||
translating by lujun9972
|
||||
7 rules for avoiding documentation pitfalls
|
||||
======
|
||||
English serves as _lingua franca_ in the open source community. To reduce
|
||||
translation costs, many teams have switched to English as the source language
|
||||
for their documentation. But surprisingly, writing in English for an
|
||||
international audience does not necessarily put native English speakers in a
|
||||
better position. On the contrary, they tend to forget that the document's
|
||||
language might not be the audience's first language.
|
||||
|
||||
Let's have a look at the following simple sentence as an example: "Encrypt the
|
||||
password using the `foo bar` command." Grammatically, the sentence is correct.
|
||||
Given that '-ing' forms (gerunds) are frequently used in the English language
|
||||
and most native speakers consider them an elegant way of expressing things,
|
||||
native speakers usually do not hesitate to phrase a sentence like this. On
|
||||
closer inspection, the sentence is ambiguous because "using" may refer either
|
||||
to the object ("the password") or to the verb ("encrypt"). Thus, the sentence
|
||||
can be interpreted in two different ways:
|
||||
|
||||
* "Encrypt the password that uses the `foo bar` command."
|
||||
* "Encrypt the password by using the `foo bar` command."
|
||||
|
||||
As long as you have previous knowledge about the topic (password encryption or
|
||||
the `foo bar` command), you can resolve this ambiguity and correctly decide
|
||||
that the second version is the intended meaning of this sentence. But what if
|
||||
you lack in-depth knowledge of the topic? What if you are not an expert, but a
|
||||
translator with only general knowledge of the subject? Or if you are a non-
|
||||
native speaker of English, unfamiliar with advanced grammatical forms like
|
||||
gerunds?
|
||||
|
||||
Even native English speakers need training to write clear and straightforward
|
||||
technical documentation. The first step is to raise awareness about the
|
||||
usability of texts and potential problems, so let's look at seven rules that
|
||||
can help avoid common pitfalls.
|
||||
|
||||
### 1. Know your target audience and step into their shoes.
|
||||
|
||||
If you are a developer writing for end users, view the product from their
|
||||
perspective. Does the structure reflect the users' goals? The [persona
|
||||
technique][1] can help you to focus on the target audience and provide the
|
||||
right level of detail for your readers.
|
||||
|
||||
### 2. Follow the KISS principle--keep it short and simple.
|
||||
|
||||
The principle can be applied on several levels, such as grammar, sentences, or
|
||||
words. Here are examples:
|
||||
|
||||
* Use the simplest tense that is appropriate. For example, use present tense when mentioning the result of an action:
|
||||
* " ~~Click 'OK.' The 'Printer Options' dialog will appear.~~" -> "Click 'OK.' The 'Printer Options' dialog appears."
|
||||
* As a rule of thumb, present one idea in one sentence; however, short sentences are not automatically easy to understand (especially if they are an accumulation of nouns). Sometimes, trimming down sentences to a certain word count can introduce ambiguities. In turn, this makes the sentences more difficult to understand.
|
||||
* Uncommon and long words slow reading and might be obstacles for non-native speakers. Use simpler alternatives:
|
||||
|
||||
* " ~~utilize~~ " -> "use"
|
||||
* " ~~indicate~~ " -> "show," "tell," or "say"
|
||||
* " ~~prerequisite~~ " -> "requirement"
|
||||
|
||||
### 3. Avoid disturbing the reading flow.
|
||||
|
||||
Move particles or longer parentheses to the beginning or end of a sentence:
|
||||
|
||||
* " ~~They are not, however, marked as installed.~~ " -> "However, they are not marked as installed."
|
||||
|
||||
Place long commands at the end of a sentence. This also results in better
|
||||
segmentation of sentences for automatic or semi-automatic translations.
|
||||
|
||||
### 4. Discriminate between two basic information types.
|
||||
|
||||
Discriminating between _descriptive information_ and _task-based information_
|
||||
is useful. Typical examples for descriptive information are command-line
|
||||
references, whereas how-tos are task-based information; however, both
|
||||
information types are needed in technical writing. On closer inspection, many
|
||||
texts contain a mixture of both information types. Clearly separating the
|
||||
information types is helpful. For better orientation, label them accordingly.
|
||||
Titles should reflect a section's content and information type. Use noun-based
|
||||
titles for descriptive sections ("Types of Frobnicators") and verbally phrased
|
||||
titles for task-based sections ("Installing Frobnicators"). This helps readers
|
||||
quickly identify the sections they are interested in and allows them to skip
|
||||
the ones they don't need at the moment.
|
||||
|
||||
### 5. Consider different reading situations and modes of text consumption.
|
||||
|
||||
Some of your readers are already frustrated when they turn to the product
|
||||
documentation because they could not achieve a certain goal on their own. They
|
||||
might also work in a noisy environment that makes it hard to focus on reading.
|
||||
Also, do not expect your audience to read cover to cover, as many people skim
|
||||
or browse texts for keywords or look up topics by using tables, indexes, or
|
||||
full-text search. With that in mind, look at your text from different
|
||||
perspectives. Often, compromises are needed to find a text structure that
|
||||
works well for multiple situations.
|
||||
|
||||
### 6. Break down complex information into smaller chunks.
|
||||
|
||||
This makes it easier for the audience to remember and process the information.
|
||||
For example, procedures should not exceed seven to 10 steps (according to
|
||||
[Miller's Law][2] in cognitive psychology). If more steps are required, split
|
||||
the task into separate procedures.
|
||||
|
||||
### 7. Form follows function.
|
||||
|
||||
Examine your text according to the question: What is the _purpose_ (function)
|
||||
of a certain sentence, a paragraph, or a section? For example, is it an
|
||||
instruction? A result? A warning? For instructions, use active voice:
|
||||
"Configure the system." Passive voice may be appropriate for descriptions:
|
||||
"The system is configured automatically." Add warnings _before_ the step or
|
||||
action where danger arises. Focusing on the purpose also helps detect
|
||||
redundant content to help eliminate fillers like "basically" or "easily,"
|
||||
unnecessary modifications like " ~~already~~ existing " or " ~~completely~~
|
||||
new, " or any content that is not relevant for your target audience.
|
||||
|
||||
As you might have guessed by now, writing is re-writing. Good writing requires
|
||||
effort and practice. Even if you write only occasionally, you can
|
||||
significantly improve your texts by focusing on the target audience and
|
||||
following the rules above. The better the readability of a text, the easier it
|
||||
is to process, even for audiences with varying language skills. Especially
|
||||
when it comes to localization, good quality of the source text is important:
|
||||
"Garbage in, garbage out." If the original text has deficiencies, translating
|
||||
the text takes longer, resulting in higher costs. In the worst case, flaws are
|
||||
multiplied during translation and must be corrected in various languages.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/12/7-rules
|
||||
|
||||
作者:[Tanja Roth][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com
|
||||
[1]:https://en.wikipedia.org/wiki/Persona_(user_experience)
|
||||
[2]:https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two
|
154
sources/tech/20171205 Ubuntu 18.04 – New Features.md
Normal file
154
sources/tech/20171205 Ubuntu 18.04 – New Features.md
Normal file
@ -0,0 +1,154 @@
|
||||
Ubuntu 18.04 – New Features, Release Date & More
|
||||
============================================================
|
||||
|
||||
|
||||
We’ve all been waiting for it – the new LTS release of Ubuntu – 18.04. Learn more about new features, the release dates, and more.
|
||||
|
||||
> Note: we’ll frequently update this article with new information, so bookmark this page and check back soon.
|
||||
|
||||
### Basic information about Ubuntu 18.04
|
||||
|
||||
Let’s start with some basic information.
|
||||
|
||||
* It’s a new LTS (Long Term Support) release. So you get 5 years of support for both the desktop and server version.
|
||||
|
||||
* Named “Bionic Beaver”. The founder of Canonical, Mark Shuttleworth, explained the meaning behind the name. The mascot is a Beaver because it’s energetic, industrious, and an awesome engineer – which perfectly describes a typical Ubuntu user, and the new Ubuntu release itself. The “Bionic” adjective is due to the increased number of robots that run on the Ubuntu Core.
|
||||
|
||||
### Ubuntu 18.04 Release Dates & Schedule
|
||||
|
||||
If you’re new to Ubuntu, you may not be familiar with what the version numbers actually mean. It’s the year and month of the official release. So Ubuntu 18.04’s official release will be in the 4th month of the year 2018. Ubuntu 17.10 was released in 2017, in the 10th month of the year.
|
||||
|
||||
To go into further detail, here are the important dates you need to know about Ubuntu 18.04 LTS:
|
||||
|
||||
* November 30th, 2017 – Feature Definition Freeze.
|
||||
|
||||
* January 4th, 2018 – First Alpha release. So if you opted-in to receive new Alpha releases, you’ll get the Alpha 1 update on this date.
|
||||
|
||||
* February 1st, 2018 – Second Alpha release.
|
||||
|
||||
* March 1st, 2018 – Feature Freeze. No new features will be introduced or released. So the development team will only work on improving existing features and fixing bugs. With exceptions, of course. If you’re not a developer or an experienced user, but would still like to try the new Ubuntu ASAP, then I’d personally recommend starting with this release.
|
||||
|
||||
* March 8th, 2018 – First Beta release. If you opted-in for receiving Beta updates, you’ll get your update on this day.
|
||||
|
||||
* March 22nd, 2018 – User Interface Freeze. It means that no further changes or updates will be done to the actual user interface, so if you write documentation, [tutorials][1], and use screenshots, it’s safe to start then.
|
||||
|
||||
* March 29th, 2018 – Documentation String Freeze. There won’t be any edits or new stuff (strings) added to the documentation, so translators can start translating the documentation.
|
||||
|
||||
* April 5th, 2018 – Final Beta release. This is also a good day to start using the new release.
|
||||
|
||||
* April 19th, 2018 – Final Freeze. Everything’s pretty much done now. Images for the release are created and distributed, and will likely not have any changes.
|
||||
|
||||
* April 26th, 2018 – Official, final release of Ubuntu 18.04. Everyone can start using it from this day on, even on production servers. We recommend getting an Ubuntu 18.04 server from [Vultr][2] and testing out the new features. Servers at [Vultr][3] start at $2.5 per month.
|
||||
|
||||
### What’s New in Ubuntu 18.04
|
||||
|
||||
All the new features in Ubuntu 18.04 LTS:
|
||||
|
||||
### Color emojis are now supported
|
||||
|
||||
With previous versions, Ubuntu only supported monochrome (black and white) emojis, which, quite frankly, didn’t look so good. Ubuntu 18.04 will support color emojis by using the [Noto Color Emoji font][7]. With 18.04, you can view and add color emojis with ease everywhere. They are supported natively – so you can use them without using third-party apps or installing/configuring anything extra. You can always disable the color emojis by removing the font.
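If you do want to turn them off, and assuming the font ships in the `fonts-noto-color-emoji` package (the usual package name on recent Ubuntu releases), removing it would look something like:

```
sudo apt remove fonts-noto-color-emoji
```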
|
||||
|
||||
### GNOME desktop environment
|
||||
|
||||
[![ubuntu 17.10 gnome](https://thishosting.rocks/wp-content/uploads/2017/12/ubuntu-17-10-gnome.jpg.webp)][8]
|
||||
|
||||
Ubuntu started using the GNOME desktop environment with Ubuntu 17.10 instead of the default Unity environment. Ubuntu 18.04 will continue using GNOME. This is a major change to Ubuntu.
|
||||
|
||||
### Ubuntu 18.04 Desktop will have a new default theme
|
||||
|
||||
Ubuntu 18.04 is saying Goodbye to the old ‘Ambience’ default theme with a new GTK theme. If you want to help with the new theme, check out some screenshots and more, go [here][9].
|
||||
|
||||
As of now, there is speculation that Suru will be the [new default icon theme][10] for Ubuntu 18.04. Here’s a screenshot:
|
||||
|
||||
[![suru icon theme ubuntu 18.04](https://thishosting.rocks/wp-content/uploads/2017/12/suru-icon-theme-ubuntu-18-04.jpg.webp)][11]
|
||||
|
||||
> Worth noting: all new features in Ubuntu 16.10, 17.04, and 17.10 will roll through to Ubuntu 18.04. So updates like window buttons to the right, a better login screen, improved Bluetooth support, etc. will roll out to Ubuntu 18.04. We won’t include a special section since it’s not really new to Ubuntu 18.04 itself. If you want to learn more about all the changes from 16.04 to 18.04, google it for each version in between.
|
||||
|
||||
### Download Ubuntu 18.04
|
||||
|
||||
First off, if you’re already using Ubuntu, you can just upgrade to Ubuntu 18.04.
|
||||
|
||||
If you need to download Ubuntu 18.04:
|
||||
|
||||
Go to the [official Ubuntu download page][12] after the final release.
|
||||
|
||||
For the daily builds (alpha, beta, and non-final releases), go [here][13].
|
||||
|
||||
### FAQs
|
||||
|
||||
Now for some of the frequently asked questions (with answers) that should give you more information about all of this.
|
||||
|
||||
### When is it safe to switch to Ubuntu 18.04?
|
||||
|
||||
On the official final release date, of course. But if you can’t wait, start using the desktop version on March 1st, 2018, and start testing out the server version on April 5th, 2018. But for you to truly be “safe”, you’ll need to wait for the final release, maybe even longer, so that the third-party services and apps you are using are tested and working well on the new release.
|
||||
|
||||
### How do I upgrade my server to Ubuntu 18.04?
|
||||
|
||||
It’s a fairly simple process but has huge potential risks. We may publish a tutorial sometime in the near future, but you’ll basically need to use ‘do-release-upgrade’. Again, upgrading your server has potential risks, and if you’re on a production server, I’d think twice before upgrading, especially if you’re on 16.04, which has a few years of support left.
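As a rough sketch of what that upgrade usually looks like (back up first, and keep in mind the exact steps may differ once 18.04 is actually out):

```
# Bring the current system fully up to date
sudo apt update && sudo apt upgrade
# Then start the release upgrade (provided by the update-manager-core package)
sudo do-release-upgrade
```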
|
||||
|
||||
### How can I help with Ubuntu 18.04?
|
||||
|
||||
Even if you’re not an experienced developer and Ubuntu user, you can still help by:
|
||||
|
||||
* Spreading the word. Let people know about Ubuntu 18.04. A simple share on social media helps a bit too.
|
||||
|
||||
* Using and testing the release. Start using the release and test it. Again, you don’t have to be a developer. You can still find and report bugs, or send feedback.
|
||||
|
||||
* Translating. Join the translating teams and start translating documentation and/or applications.
|
||||
|
||||
* Helping other people. Join some online Ubuntu communities and help others with issues they’re having with Ubuntu 18.04. Sometimes people need help with simple stuff like “where can I download Ubuntu?”
|
||||
|
||||
### What does Ubuntu 18.04 mean for other distros like Lubuntu?
|
||||
|
||||
All distros that are based on Ubuntu will have similar new features and a similar release schedule. You’ll need to check your distro’s official website for more information.
|
||||
|
||||
### Is Ubuntu 18.04 an LTS release?
|
||||
|
||||
Yes, Ubuntu 18.04 is an LTS (Long Term Support) release, so you’ll get support for 5 years.
|
||||
|
||||
### Can I switch from Windows/OS X to Ubuntu 18.04?
|
||||
|
||||
Of course! You’ll most likely experience a performance boost too. Switching from a different OS to Ubuntu is fairly easy, and there are quite a lot of tutorials for doing that. You can even set up a dual-boot so that you can use both Windows and Ubuntu 18.04.
|
||||
|
||||
### Can I try Ubuntu 18.04 without installing it?
|
||||
|
||||
Sure. You can use something like [VirtualBox][14] to create a “virtual desktop” – install it on your local machine and run Ubuntu 18.04 inside a virtual machine without actually installing it.
|
||||
|
||||
Or you can try an Ubuntu 18.04 server at [Vultr][15] for $2.5 per month. It’s essentially free if you use some [free credits][16].
|
||||
|
||||
### Why can’t I find a 32-bit version of Ubuntu 18.04?
|
||||
|
||||
Because there is no 32-bit version. Ubuntu dropped 32-bit versions with its 17.10 release. If you’re using old hardware, you’re better off using a different [lightweight Linux distro][17] instead of Ubuntu 18.04 anyway.
|
||||
|
||||
### Any other question?
|
||||
|
||||
Leave a comment below! Share your thoughts; we’re super excited, and we’ll update this article as soon as new information comes in. Stay tuned and be patient!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://thishosting.rocks/ubuntu-18-04-new-features-release-date/
|
||||
|
||||
作者:[ thishosting.rocks][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:thishosting.rocks
|
||||
[1]:https://thishosting.rocks/category/knowledgebase/
|
||||
[2]:https://thishosting.rocks/go/vultr/
|
||||
[3]:https://thishosting.rocks/go/vultr/
|
||||
[4]:https://thishosting.rocks/category/knowledgebase/
|
||||
[5]:https://thishosting.rocks/tag/ubuntu/
|
||||
[6]:https://thishosting.rocks/2017/12/05/
|
||||
[7]:https://www.google.com/get/noto/help/emoji/
|
||||
[8]:https://thishosting.rocks/wp-content/uploads/2017/12/ubuntu-17-10-gnome.jpg
|
||||
[9]:https://community.ubuntu.com/t/call-for-participation-an-ubuntu-default-theme-lead-by-the-community/1545
|
||||
[10]:http://www.omgubuntu.co.uk/2017/11/suru-default-icon-theme-ubuntu-18-04-lts
|
||||
[11]:https://thishosting.rocks/wp-content/uploads/2017/12/suru-icon-theme-ubuntu-18-04.jpg
|
||||
[12]:https://www.ubuntu.com/download
|
||||
[13]:http://cdimage.ubuntu.com/daily-live/current/
|
||||
[14]:https://www.virtualbox.org/
|
||||
[15]:https://thishosting.rocks/go/vultr/
|
||||
[16]:https://thishosting.rocks/vultr-coupons-for-2017-free-credits-and-more/
|
||||
[17]:https://thishosting.rocks/best-lightweight-linux-distros/
|
@ -0,0 +1,95 @@
|
||||
Getting started with Turtl, an open source alternative to Evernote
|
||||
======
|
||||
![Using Turtl as an open source alternative to Evernote](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_brainstorm_island_520px.png?itok=6IUPyxkY)
|
||||
|
||||
Just about everyone I know takes notes, and many people use an online note-taking application like Evernote, Simplenote, or Google Keep. Those are all good tools, but you have to wonder about the security and privacy of your information—especially in light of [Evernote's privacy flip-flop of 2016][1]. If you want more control over your notes and your data, you really need to turn to an open source tool.
|
||||
|
||||
Whatever your reasons for moving away from Evernote, there are open source alternatives out there. Let's look at one of those alternatives: Turtl.
|
||||
|
||||
### Getting started
|
||||
|
||||
The developers behind [Turtl][2] want you to think of it as "Evernote with ultimate privacy." To be honest, I can't vouch for the level of privacy that Turtl offers, but it is quite a good note-taking tool.
|
||||
|
||||
To get started with Turtl, [download][3] a desktop client for Linux, Mac OS, or Windows, or grab the [Android app][4]. Install it, then fire up the client or app. You'll be asked for a username and passphrase. Turtl uses the passphrase to generate a cryptographic key that, according to the developers, encrypts your notes before storing them anywhere on your device or on their servers.
|
||||
|
||||
### Using Turtl
|
||||
|
||||
You can create the following types of notes with Turtl:
|
||||
|
||||
* Password
|
||||
|
||||
* File
|
||||
|
||||
* Image
|
||||
|
||||
* Bookmark
|
||||
|
||||
* Text note
|
||||
|
||||
No matter what type of note you choose, you create it in a window that's similar for all types of notes:
|
||||
|
||||
### [turtl-new-note-520.png][5]
|
||||
|
||||
![Create new text note with Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-new-note-520.png)
|
||||
|
||||
Creating a new text note in Turtl
|
||||
|
||||
Add information like the title of the note, some text, and (if you're creating a File or Image note) attach a file or an image. Then click Save.
|
||||
|
||||
You can add formatting to your notes via [Markdown][6]. You need to add the formatting by hand—there are no toolbar shortcuts.
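For example, a note body typed by hand in plain Markdown (this snippet is just an illustration, not output from Turtl) could look like this:

```
# Packing list

Remember to check the **weather** first.

- passport
- charger
- *good* walking shoes
```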
|
||||
|
||||
If you need to organize your notes, you can add them to Boards. Boards are just like notebooks in Evernote. To create a new board, click on the Boards tab, then click the Create a board button. Type a title for the board, then click Create.
|
||||
|
||||
### [turtl-boards-520.png][7]
|
||||
|
||||
![Create new board in Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-boards-520.png)
|
||||
|
||||
Creating a new board in Turtl
|
||||
|
||||
To add a note to a board, create or edit the note, then click the This note is not in any boards link at the bottom of the note. Select one or more boards, then click Done.
|
||||
|
||||
To add tags to a note, click the Tags icon at the bottom of a note, enter one or more keywords separated by commas, and click Done.
|
||||
|
||||
### Syncing your notes across your devices
|
||||
|
||||
If you use Turtl across several computers and an Android device, for example, Turtl will sync your notes whenever you're online. However, I've encountered a small problem with syncing: Every so often, a note I've created on my phone doesn't sync to my laptop. I tried to sync manually by clicking the icon in the top left of the window and then clicking Sync Now, but that doesn't always work. I found that I occasionally need to click that icon, click Your settings, and then click Clear local data. I then need to log back into Turtl, but all the data syncs properly.
|
||||
|
||||
### A question, and a couple of problems
|
||||
|
||||
When I started using Turtl, I was dogged by one question: Where are my notes kept online? It turns out that the developers behind Turtl are based in the U.S., and that's also where their servers are. Although the encryption that Turtl uses is [quite strong][8] and your notes are encrypted on the server, the paranoid part of me says that you shouldn't save anything sensitive in Turtl (or any online note-taking tool, for that matter).
|
||||
|
||||
Turtl displays notes in a tiled view, reminiscent of Google Keep:
|
||||
|
||||
### [turtl-notes-520.png][9]
|
||||
|
||||
![Notes in Turtl](https://opensource.com/sites/default/files/images/life-uploads/turtl-notes-520.png)
|
||||
|
||||
A collection of notes in Turtl
|
||||
|
||||
There's no way to change that to a list view, either on the desktop or on the Android app. This isn't a problem for me, but I've heard some people pan Turtl because it lacks a list view.
|
||||
|
||||
Speaking of the Android app, it's not bad; however, it doesn't integrate with the Android Share menu. If you want to add a note to Turtl based on something you've seen or read in another app, you need to copy and paste it manually.
|
||||
|
||||
I've been using Turtl for several months on a Linux-powered laptop, my [Chromebook running GalliumOS][10], and an Android-powered phone. It's been a pretty seamless experience across all those devices. Although it's not my favorite open source note-taking tool, Turtl does a pretty good job. Give it a try; it might be the simple note-taking tool you're looking for.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/12/using-turtl-open-source-alternative-evernote
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/scottnesbitt
|
||||
[1]:https://blog.evernote.com/blog/2016/12/15/evernote-revisits-privacy-policy/
|
||||
[2]:https://turtlapp.com/
|
||||
[3]:https://turtlapp.com/download/
|
||||
[4]:https://turtlapp.com/download/
|
||||
[5]:https://opensource.com/file/378346
|
||||
[6]:https://en.wikipedia.org/wiki/Markdown
|
||||
[7]:https://opensource.com/file/378351
|
||||
[8]:https://turtlapp.com/docs/security/encryption-specifics/
|
||||
[9]:https://opensource.com/file/378356
|
||||
[10]:https://opensource.com/article/17/4/linux-chromebook-gallium-os
|
@ -0,0 +1,195 @@
|
||||
Cheat – A Collection Of Practical Linux Command Examples
|
||||
======
|
||||
Many of us check **[Man Pages][1]** to learn about a command's switches (options). A man page shows the command syntax, a description, and the available switches, but it doesn't include any practical examples. Because of that, we sometimes struggle to put together the exact command format we need.

Are you facing this problem and looking for a better solution? Then I would advise you to check out the cheat utility.
|
||||
|
||||
#### What Is Cheat
|
||||
|
||||
[Cheat][2] allows you to create and view interactive cheatsheets on the command-line. It was designed to help remind *nix system administrators of options for commands that they use frequently, but not frequently enough to remember.
|
||||
|
||||
#### How to Install Cheat
|
||||
|
||||
The cheat package is written in Python, so install pip in order to install cheat on your system.
|
||||
|
||||
For **`Debian/Ubuntu`**, use the [apt-get command][3] or the [apt command][4] to install pip.
|
||||
|
||||
```
[For Python2]
$ sudo apt install python-pip python-setuptools

[For Python3]
$ sudo apt install python3-pip
```
|
||||
|
||||
pip isn't shipped in the official **`RHEL/CentOS`** repositories, so enable the [EPEL Repository][5] and use the [YUM command][6] to install pip.
|
||||
|
||||
```
$ sudo yum install python-pip python-devel python-setuptools
```
|
||||
|
||||
For **`Fedora`** systems, use the [dnf command][7] to install pip.
|
||||
|
||||
```
[For Python2]
$ sudo dnf install python-pip

[For Python3]
$ sudo dnf install python3
```
|
||||
|
||||
For **`Arch Linux`**-based systems, use the [pacman command][8] to install pip.
|
||||
|
||||
```
[For Python2]
$ sudo pacman -S python2-pip python-setuptools

[For Python3]
$ sudo pacman -S python-pip python3-setuptools
```
|
||||
|
||||
For **`openSUSE`** systems, use the [zypper command][9] to install pip.
|
||||
|
||||
```
[For Python2]
$ sudo zypper install python-pip

[For Python3]
$ sudo zypper install python3-pip
```
|
||||
|
||||
pip is a Python package manager bundled with setuptools, and it's one of the recommended tools for installing Python packages on Linux.
|
||||
|
||||
```
$ sudo pip install cheat
```
|
||||
|
||||
#### How to Use Cheat
|
||||
|
||||
Run `cheat` followed by the corresponding `command` to view its cheatsheet. For demonstration purposes, let's check the `tar` command examples.
|
||||
|
||||
```
$ cheat tar
# To extract an uncompressed archive:
tar -xvf /path/to/foo.tar

# To create an uncompressed archive:
tar -cvf /path/to/foo.tar /path/to/foo/

# To extract a .gz archive:
tar -xzvf /path/to/foo.tgz

# To create a .gz archive:
tar -czvf /path/to/foo.tgz /path/to/foo/

# To list the content of an .gz archive:
tar -ztvf /path/to/foo.tgz

# To extract a .bz2 archive:
tar -xjvf /path/to/foo.tgz

# To create a .bz2 archive:
tar -cjvf /path/to/foo.tgz /path/to/foo/

# To extract a .tar in specified Directory:
tar -xvf /path/to/foo.tar -C /path/to/destination/

# To list the content of an .bz2 archive:
tar -jtvf /path/to/foo.tgz

# To create a .gz archive and exclude all jpg,gif,... from the tgz
tar czvf /path/to/foo.tgz --exclude=\*.{jpg,gif,png,wmv,flv,tar.gz,zip} /path/to/foo/

# To use parallel (multi-threaded) implementation of compression algorithms:
tar -z ... -> tar -Ipigz ...
tar -j ... -> tar -Ipbzip2 ...
tar -J ... -> tar -Ipixz ...
```
|
||||
|
||||
Run the following command to see what cheatsheets are available.
|
||||
|
||||
```
$ cheat -l
```
|
||||
|
||||
See the help page for more details.
|
||||
|
||||
```
$ cheat -h
```
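You can also keep your own cheatsheets. If I remember correctly, the utility opens (or creates) a sheet in your `EDITOR` via the `-e` option; run `cheat -h` on your own install to confirm the exact flag before relying on it:

```
$ export EDITOR=vim
$ cheat -e rsync
```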
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/cheat-a-collection-of-practical-linux-command-examples/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com
|
||||
[1]:https://www.2daygeek.com/linux-color-man-pages-configuration-less-most-command/
|
||||
[2]:https://github.com/chrisallenlane/cheat
|
||||
[3]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[4]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[5]:https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
|
||||
[6]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[7]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[8]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[9]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
@ -0,0 +1,251 @@
|
||||
24 Must Have Essential Linux Applications In 2017
|
||||
======
|
||||
Brief: What are the must-have applications for Linux? The answer is subjective, and it depends on what you use your desktop Linux for. But there are still some essential Linux apps that are more likely to be used by most Linux users. Here we have listed the best such Linux applications that you should have installed on every Linux distribution you use.
|
||||
|
||||
In the world of Linux, everything comes with alternatives. Have to choose a distro? There are several dozen of them. Trying to find a decent music player? There are plenty of alternatives there too.
|
||||
|
||||
But not all of them are built with the same thing in mind – some of them might target minimalism while others might offer tons of features. Finding the right application for your needs can be a confusing and tiresome task. Let’s make that a bit easier.
|
||||
|
||||
### Best free applications for Linux users
|
||||
|
||||
I’m putting together a list of essential free Linux applications I prefer to use in different categories. I’m not saying that they are the best, but I have tried lots of applications in each category and finally liked the listed ones better. So, you are more than welcome to mention your favorite applications in the comment section.
|
||||
|
||||
We have also compiled a nice video of this list. Do subscribe to our YouTube channel for more such educational Linux videos:
|
||||
|
||||
### Web Browser
|
||||
|
||||
![Web Browsers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Web-Browser-1024x512.jpg)
|
||||
Web Browsers
|
||||
|
||||
#### [Google Chrome][12]
|
||||
|
||||
Google Chrome is a powerful and complete web browser. It comes with excellent syncing capabilities and offers a vast collection of extensions. If you are accustomed to the Google ecosystem, Google Chrome is for you without a doubt. If you prefer a more open source solution, you may want to try out [Chromium][13], which is the project Google Chrome is based on.
|
||||
|
||||
#### [Firefox][14]
|
||||
|
||||
If you are not a fan of Google Chrome, you can try out Firefox. It’s been around for a long time and is a very stable and robust web browser.
|
||||
|
||||
#### [Vivaldi][15]
|
||||
|
||||
However, if you want something new and different, you can check out Vivaldi. Vivaldi takes a completely fresh approach to the web browser. It’s from former team members of Opera and built on top of the Chromium project. It’s lightweight and customizable. Though it is still quite new and missing some features, it feels amazingly refreshing and does a really decent job.
|
||||
|
||||
[Suggested read[Review] Otter Browser Brings Hope To Opera Lovers][40]
|
||||
|
||||
### Download Manager
|
||||
|
||||
![Download Managers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Download-Manager-1024x512.jpg)
|
||||
Download Managers
|
||||
|
||||
#### [uGet][16]
|
||||
|
||||
uGet is the best download manager I have come across. It is open source and offers everything you can expect from a download manager. uGet offers advanced settings for managing downloads. It can queue and resume downloads, use multiple connections for downloading large files, download files to different directories according to categories and so on.
|
||||
|
||||
#### [XDM][17]
|
||||
|
||||
Xtreme Download Manager (XDM) is a powerful and open source tool developed with Java. It has all the basic features of a download manager, including – video grabber, smart scheduler and browser integration.
|
||||
|
||||
[Suggested read4 Best Download Managers For Linux][41]
|
||||
|
||||
### BitTorrent Client
|
||||
|
||||
![BitTorrent Clients](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-BitTorrent-Client-1024x512.jpg)
|
||||
BitTorrent Clients
|
||||
|
||||
#### [Deluge][18]
|
||||
|
||||
Deluge is an open source BitTorrent client. It has a beautiful user interface. If you are used to using uTorrent on Windows, the Deluge interface will feel familiar. It has various configuration options as well as plugin support for various tasks.
|
||||
|
||||
#### [Transmission][19]
|
||||
|
||||
Transmission takes the minimal approach. It is an open source BitTorrent client with a minimal user interface. Transmission comes pre-installed with many Linux distributions.
|
||||
|
||||
[Suggested readTop 5 Torrent Clients For Ubuntu Linux][42]
|
||||
|
||||
### Cloud Storage
|
||||
|
||||
![Cloud Storages](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Cloud-Storage-1024x512.jpg)
|
||||
Cloud Storages
|
||||
|
||||
#### [Dropbox][20]
|
||||
|
||||
Dropbox is one of the most popular cloud storage services out there. It gives you 2GB of free storage to start with. Dropbox has a robust and straightforward Linux client.
|
||||
|
||||
#### [MEGA][21]
|
||||
|
||||
MEGA offers 50GB of free storage. But that is not the best thing about it. The best thing about MEGA is that it has end-to-end encryption support for your files. MEGA has a solid Linux client named MEGAsync.
|
||||
|
||||
[Suggested readBest Free Cloud Services For Linux in 2017][43]
|
||||
|
||||
### Communication
|
||||
|
||||
![Communication Apps](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Communication-1024x512.jpg)
|
||||
Communication Apps
|
||||
|
||||
#### [Pidgin][22]
|
||||
|
||||
Pidgin is an open source instant messenger client. It supports many chatting platforms including – Google Talk, Yahoo and even IRC. Pidgin is extensible through third-party plugins, that can provide a lot of additional functionalities to Pidgin.
|
||||
|
||||
You can also use [Franz][23] or [Rambox][24] to use several messaging services in one application.
|
||||
|
||||
#### [Skype][25]
|
||||
|
||||
We all know Skype, it is one of the most popular video chatting platforms. Recently it has [released a brand new desktop client][26] for Linux.
|
||||
|
||||
[Suggested read6 Best Messaging Apps Available For Linux In 2017][44]
|
||||
|
||||
### Office Suite
|
||||
|
||||
![Office Suites](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Office-Suite-1024x512.jpg)
|
||||
Office Suites
|
||||
|
||||
#### [LibreOffice][27]
|
||||
|
||||
LibreOffice is the most actively developed open source office suite for Linux. It has six main modules – Writer, Calc, Impress, Draw, Math, and Base – and every one of them supports a wide range of file formats. LibreOffice also supports third-party extensions. It is the default office suite for many Linux distributions.
|
||||
|
||||
#### [WPS Office][28]
|
||||
|
||||
If you want to try out something other than LibreOffice, WPS Office might be your go-to. The WPS Office suite includes a writer, presentations, and spreadsheet support.
|
||||
|
||||
[Suggested read6 Best Open Source Alternatives to Microsoft Office for Linux][45]
|
||||
|
||||
### Music Player
|
||||
|
||||
![Music Players](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Music-Player-1024x512.jpg)
|
||||
Music Players
|
||||
|
||||
#### [Lollypop][29]
|
||||
|
||||
This is a relatively new music player. Lollypop is open source and has a beautiful yet simple user interface. It offers a nice music organizer, scrobbling support, online radio, and a party mode. Though it is a simple music player without many advanced features, it is worth a try.
|
||||
|
||||
#### [Rhythmbox][30]
|
||||
|
||||
Rhythmbox is the music player mainly developed for the GNOME desktop environment, but it works in other desktop environments as well. It does all the basic tasks of a music player, including CD ripping & burning, scrobbling, etc. It also has support for iPod.
|
||||
|
||||
#### [cmus][31]
|
||||
|
||||
If you want minimalism and love your terminal window, cmus is for you. Personally, I’m a fan and user of this one. cmus is a small, fast and powerful console music player for Unix-like operating systems. It has all the basic music player features. And you can also extend its functionalities with additional extensions and scripts.
|
||||
|
||||
[Suggested readHow To Install Tomahawk Player In Ubuntu 14.04 And Linux Mint 17][46]
|
||||
|
||||
### Video Player
|
||||
|
||||
![Video Player](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Video-Player-1024x512.jpg)
|
||||
Video Players
|
||||
|
||||
#### [VLC][32]
|
||||
|
||||
VLC is an open source media player. It is simple, fast, lightweight, and really powerful. VLC can play almost any media format you throw at it out of the box. It can also stream online media. And it has some nifty extensions for various tasks, like downloading subtitles right from the player.
|
||||
|
||||
#### [Kodi][33]
|
||||
|
||||
Kodi is a full-fledged media center. Kodi is open source and very popular among its user base. It can handle videos, music, pictures, podcasts and even games, from both local and network media storage. You can even record TV with it. The behavior of Kodi can be customized via add-ons and skins.
|
||||
|
||||
[Suggested read4 Format Factory Alternative In Linux][47]
|
||||
|
||||
### Photo Editor
|
||||
|
||||
![Photo Editors](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Photo-Editor-1024x512.jpg)
|
||||
Photo Editors
|
||||
|
||||
#### [GIMP][34]
|
||||
|
||||
GIMP is the Photoshop alternative for Linux. It is open source, full-featured, professional photo editing software. It is packed with a wide range of tools for manipulating images. And on top of that, there are various customization options and third-party plugins for enhancing the experience.
|
||||
|
||||
#### [Krita][35]
|
||||
|
||||
Krita is mainly a painting tool but serves as a photo editing application as well. It is open source and packed with lots of sophisticated and advanced tools.
|
||||
|
||||
[Suggested readBest Photo Applications For Linux][48]
|
||||
|
||||
### Text Editor
|
||||
|
||||
Every Linux distribution comes with its own default text editor. Generally, they are quite simple and without much functionality. But here are some text editors with enhanced capabilities.
|
||||
|
||||
![Text Editors](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Text-Editor-1024x512.jpg)
|
||||
Text Editors
|
||||
|
||||
#### [Atom][36]
|
||||
|
||||
Atom is the modern, hackable text editor maintained by GitHub. It is completely open-source and offers everything you could want from a text editor. You can use it right out of the box, or you can customize and tune it just the way you want. And it has a ton of extensions and themes from the community up for grabs.
|
||||
|
||||
#### [Sublime Text][37]
|
||||
|
||||
Sublime Text is one of the most popular text editors. Though it is not free, it allows you to use the software for evaluation without any time limit. Sublime Text is a feature-rich and sophisticated piece of software. And of course, it has plugins and themes support.
|
||||
|
||||
[Suggested read4 Best Modern Open Source Code Editors For Linux][49]
|
||||
|
||||
### Launcher
|
||||
|
||||
![Launchers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Launcher-1024x512.jpg)
|
||||
Launchers
|
||||
|
||||
#### [Albert][38]
|
||||
|
||||
Albert is inspired by Alfred (a productivity application for Mac, which is totally kickass by-the-way) and still in the development phase. Albert is fast, extensible and customizable. The goal is to “Access everything with virtually zero effort”. It integrates with your Linux distribution nicely and helps you to boost your productivity.
|
||||
|
||||
#### [Synapse][39]
|
||||
|
||||
Synapse has been around for years. It’s a simple launcher that can search and run applications. It can also speed up various workflows like – controlling music, searching files, directories, bookmarks etc., running commands and such.
|
||||
|
||||
As Abhishek advised, we will keep this list of best Linux software updated with our readers’ (i.e. yours) feedback. So, what are your favorite must have Linux applications? Share with us and do suggest more categories of software to add to this list.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/essential-linux-applications/
|
||||
|
||||
作者:[Munif Tanjim][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/munif/
|
||||
[1]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Web-Browser-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Web%20Browsers
|
||||
[2]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Download-Manager-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Download%20Managers
|
||||
[3]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-BitTorrent-Client-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=BitTorrent%20Clients
|
||||
[4]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Cloud-Storage-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Cloud%20Storages
|
||||
[5]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Communication-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Communication%20Apps
|
||||
[6]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Office-Suite-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Office%20Suites
|
||||
[7]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Music-Player-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Music%20Players
|
||||
[8]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Video-Player-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Video%20Player
|
||||
[9]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Photo-Editor-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Photo%20Editors
|
||||
[10]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Text-Editor-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Text%20Editors
|
||||
[11]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Launcher-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Launchers
|
||||
[12]:https://www.google.com/chrome/browser
|
||||
[13]:https://www.chromium.org/Home
|
||||
[14]:https://www.mozilla.org/en-US/firefox
|
||||
[15]:https://vivaldi.com
|
||||
[16]:http://ugetdm.com/
|
||||
[17]:http://xdman.sourceforge.net/
|
||||
[18]:http://deluge-torrent.org/
|
||||
[19]:https://transmissionbt.com/
|
||||
[20]:https://www.dropbox.com
|
||||
[21]:https://mega.nz/
|
||||
[22]:https://www.pidgin.im/
|
||||
[23]:https://itsfoss.com/franz-messaging-app/
|
||||
[24]:http://rambox.pro/
|
||||
[25]:https://www.skype.com
|
||||
[26]:https://itsfoss.com/skpe-alpha-linux/
|
||||
[27]:https://www.libreoffice.org
|
||||
[28]:https://www.wps.com
|
||||
[29]:http://gnumdk.github.io/lollypop-web/
|
||||
[30]:https://wiki.gnome.org/Apps/Rhythmbox
|
||||
[31]:https://cmus.github.io/
|
||||
[32]:http://www.videolan.org
|
||||
[33]:https://kodi.tv
|
||||
[34]:https://www.gimp.org/
|
||||
[35]:https://krita.org/en/
|
||||
[36]:https://atom.io/
|
||||
[37]:http://www.sublimetext.com/
|
||||
[38]:https://github.com/ManuelSchneid3r/albert
|
||||
[39]:https://launchpad.net/synapse-project
|
||||
[40]:https://itsfoss.com/otter-browser-review/
|
||||
[41]:https://itsfoss.com/4-best-download-managers-for-linux/
|
||||
[42]:https://itsfoss.com/best-torrent-ubuntu/
|
||||
[43]:https://itsfoss.com/cloud-services-linux/
|
||||
[44]:https://itsfoss.com/best-messaging-apps-linux/
|
||||
[45]:https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
|
||||
[46]:https://itsfoss.com/install-tomahawk-ubuntu-1404-linux-mint-17/
|
||||
[47]:https://itsfoss.com/format-factory-alternative-linux/
|
||||
[48]:https://itsfoss.com/image-applications-ubuntu-linux/
|
||||
[49]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
|
@ -0,0 +1,77 @@
|
||||
OnionShare - Share Files Anonymously
|
||||
======
|
||||
In this digital world, we share our media, documents, and important files over the Internet using different cloud storage services like Dropbox, MEGA, Google Drive, and many more. But every cloud storage service comes with two major problems: one is size and the other is security. After getting used to BitTorrent, size is not a problem anymore, but security is.
|
||||
|
||||
Even if you send your files through secure cloud services, they will still be known to the company, and if the files are confidential, even the government can get them. To overcome these problems we use OnionShare, which, as per the name, uses the onion network, i.e. Tor, to share files anonymously with anyone.
|
||||
|
||||
### How to Use **OnionShare**?
|
||||
|
||||
* First, download [OnionShare][1] and the [Tor Browser][2]. After downloading, install both of them.
|
||||
|
||||
|
||||
|
||||
[![install onionshare and tor browser][3]][3]
|
||||
|
||||
* Now open OnionShare from the start menu
|
||||
|
||||
|
||||
|
||||
[![onionshare share files anonymously][4]][4]
|
||||
|
||||
* Click on Add and add a File/Folder to share.
|
||||
* Click Start Sharing. It produces a .onion URL; share the URL with your recipient.
|
||||
|
||||
|
||||
|
||||
[![share file with onionshare anonymously][5]][5]
|
||||
|
||||
* To download the file from the URL, copy the URL, open the Tor Browser, and paste it. Open the URL and download the files/folder.
|
||||
|
||||
|
||||
|
||||
[![receive file with onionshare anonymously][6]][6]
|
||||
|
||||
### Start of **OnionShare**
|
||||
|
||||
A few years back, Glenn Greenwald found that some of the NSA documents he had received from Edward Snowden were corrupted. He still needed the documents, and decided to get the files again via USB, which was not successful.
|
||||
|
||||
After reading the book written by Greenwald, Micah Lee, a crypto expert at The Intercept, released OnionShare - simple, free software to share files anonymously and securely. He created the program to share big data dumps via a direct channel, encrypted and protected by the anonymity software Tor, making it hard for eavesdroppers to get the files.
|
||||
|
||||
### How Does **OnionShare** Work?
|
||||
|
||||
OnionShare starts a web server at 127.0.0.1 on a random port to share the file. It picks two random words, called a slug, from a 6800-word wordlist. It then makes the server available as a Tor onion service to send the file. The final URL looks like this:
|
||||
|
||||
`http://qx2d7lctsnqwfdxh.onion/subside-durable`
|
||||
|
||||
OnionShare shuts down after the download completes, which makes the file unavailable on the Internet afterwards. There is also an option to allow the files to be downloaded multiple times.
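In practice you would simply open the URL in the Tor Browser as described above, but since the share is just an onion web service, any client that can talk to a local Tor SOCKS proxy can reach it too. The sketch below is only an illustration; it assumes Tor is already running on the default SOCKS port 9050 and reuses the made-up URL from above:

```
# Fetch the OnionShare landing page through the local Tor SOCKS proxy;
# --socks5-hostname makes curl resolve the .onion address inside Tor
curl --socks5-hostname 127.0.0.1:9050 \
     -o share-page.html \
     http://qx2d7lctsnqwfdxh.onion/subside-durable
```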
|
||||
|
||||
### Advantages of using **OnionShare**
|
||||
|
||||
No other websites or applications have access to your files: the file the sender shares using OnionShare is not stored on any server. It is hosted directly on the sender's system.
|
||||
|
||||
No one can spy on the shared files: the connection between the users is encrypted by the onion service and the Tor Browser. This makes the connection secure and makes it hard for eavesdroppers to get the files.
|
||||
|
||||
Both users are Anonymous: OnionShare and Tor Browser make both sender and recipient anonymous.
|
||||
|
||||
### Conclusion
|
||||
|
||||
In this article, I have explained how to **share your documents and files anonymously**, and how OnionShare works under the hood. I hope you have understood how OnionShare works, and if you still have any doubts, just drop a comment.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.theitstuff.com/onionshare-share-files-anonymously-2
|
||||
|
||||
作者:[Anirudh Rayapeddi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.theitstuff.com
|
||||
[1]:https://onionshare.org/
[2]:https://www.torproject.org/projects/torbrowser.html.en
[3]:http://www.theitstuff.com/wp-content/uploads/2017/12/Icons.png
[4]:http://www.theitstuff.com/wp-content/uploads/2017/12/Onion-Share.png
[5]:http://www.theitstuff.com/wp-content/uploads/2017/12/With-Link.png
[6]:http://www.theitstuff.com/wp-content/uploads/2017/12/Tor.png
|
@ -0,0 +1,93 @@
|
||||
The Biggest Problems With UC Browser
|
||||
======
|
||||
Before we even begin talking about the cons, I want to establish the fact that I have been a devoted UC Browser user for the past 3 years. I really love the download speeds I get, the ultra-sleek user interface, and the eye-catching icons used for tools. I was a Chrome for Android user in the beginning, but I migrated to UC on a friend's recommendation. But in the past year or so, I have seen some changes that have made me rethink my choice, and now I feel like migrating back to Chrome again.
|
||||
|
||||
### The Unwanted **Notifications**
|
||||
|
||||
I am sure I am not the only one who gets these unwanted notifications every few hours. These clickbait articles are a real pain, and the worst part is that they just keep coming.
|
||||
|
||||
[![uc browser's annoying ads notifications][1]][1]
|
||||
|
||||
I tried turning them off in the notification settings, but they still kept appearing, just less frequently.
|
||||
|
||||
### The **News Homepage**
|
||||
|
||||
Another unwanted section that is completely useless. We completely understand that UC Browser is free to download and may require funding, but this is not the way to do it. The homepage features news articles that are extremely distracting and unwanted. Sometimes, when you are in a professional or family environment, some of these clickbait headlines might even cause awkwardness.
|
||||
|
||||
[![uc browser's embarrassing news homepage][2]][2]
|
||||
|
||||
And they even have a setting for that: to turn the **UC News Display ON / OFF**. And guess what, I tried that too. In the image below, you can see my efforts on the left-hand side and the result on the right-hand side.

[![uc browser homepage settings][3]][3]
|
||||
|
||||
And as if the clickbait news wasn't enough, they have started adding some unnecessary features. So let's go through those as well.
|
||||
|
||||
### UC **Music**
|
||||
|
||||
UC Browser integrated a **music player** into the browser to play music. It's just something that works, nothing too fancy. So why even have it? What's the point? Who needs a music player in their browser?
|
||||
|
||||
[![uc browser adds uc music player][4]][4]
|
||||
|
||||
It's not even like it will play audio from the web directly via that player in the background. Instead, it is a music player that plays offline music. So why have it? I mean, it is not even good enough to be used as a primary music player. Even if it was, it doesn't run independently of UC Browser. So why would someone keep their browser running just to use its music player?
|
||||
|
||||
### The **Quick** Access Bar
|
||||
|
||||
I have seen that 9 out of 10 average users have this bar hanging around in their notification area, because it is enabled by default at installation and they don't know how to get rid of it. The settings on the right-hand side get the job done.
|
||||
|
||||
[![uc browser annoying quick access bar][5]][5]
|
||||
|
||||
But I still wanna ask, "Why does it come by default ?". It's a headache for
|
||||
most users. If we want it we will enable it. Why forcing the users though.
|
||||
|
||||
### Conclusion
|
||||
|
||||
UC Browser is still one of the top players in the game. It provides one of the best experiences; however, I am not sure what UC is trying to prove by packing more and more unwanted features into their browser and forcing users to use them.
|
||||
|
||||
I have loved UC for its speed and design, but recent experiences have led me to have second thoughts about my primary browser.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.theitstuff.com/biggest-problems-uc-browser
|
||||
|
||||
作者:[Rishabh Kandari][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.theitstuff.com/author/reevkandari
|
||||
[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-6.png
|
||||
[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1-1.png
|
||||
[3]:http://www.theitstuff.com/wp-content/uploads/2017/12/uceffort.png
|
||||
[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-3-1.png
|
||||
[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-4-1.png
|
||||
|
176
sources/tech/20171212 Internet protocols are changing.md
Normal file
@ -0,0 +1,176 @@
|
||||
Internet protocols are changing
|
||||
============================================================
|
||||
|
||||
|
||||
![](https://blog.apnic.net/wp-content/uploads/2017/12/evolution-555x202.png)
|
||||
|
||||
When the Internet started to become widely used in the 1990s, most traffic used just a few protocols: IPv4 routed packets, TCP turned those packets into connections, SSL (later TLS) encrypted those connections, DNS named hosts to connect to, and HTTP was often the application protocol using it all.
|
||||
|
||||
For many years, there were negligible changes to these core Internet protocols; HTTP added a few new headers and methods, TLS slowly went through minor revisions, TCP adapted congestion control, and DNS introduced features like DNSSEC. The protocols themselves looked about the same ‘on the wire’ for a very long time (excepting IPv6, which already gets its fair amount of attention in the network operator community.)
|
||||
|
||||
As a result, network operators, vendors, and policymakers that want to understand (and sometimes, control) the Internet have adopted a number of practices based upon these protocols’ wire ‘footprint’ — whether intended to debug issues, improve quality of service, or impose policy.
|
||||
|
||||
Now, significant changes to the core Internet protocols are underway. While they are intended to be compatible with the Internet at large (since they won’t get adoption otherwise), they might be disruptive to those who have taken liberties with undocumented aspects of protocols or made an assumption that things won’t change.
|
||||
|
||||
#### Why we need to change the Internet
|
||||
|
||||
There are a number of factors driving these changes.
|
||||
|
||||
First, the limits of the core Internet protocols have become apparent, especially regarding performance. Because of structural problems in the application and transport protocols, the network was not being used as efficiently as it could be, hurting end-user perceived performance (in particular, latency).
|
||||
|
||||
This translates into a strong motivation to evolve or replace those protocols because there is a [large body of experience showing the impact of even small performance gains][14].
|
||||
|
||||
Second, the ability to evolve Internet protocols — at any layer — has become more difficult over time, largely thanks to the unintended uses by networks discussed above. For example, HTTP proxies that tried to compress responses made it more difficult to deploy new compression techniques; TCP optimization in middleboxes made it more difficult to deploy improvements to TCP.
|
||||
|
||||
Finally, [we are in the midst of a shift towards more use of encryption on the Internet][15], first spurred by Edward Snowden’s revelations in 2015. That’s really a separate discussion, but it is relevant here in that encryption is one of the best tools we have to ensure that protocols can evolve.
|
||||
|
||||
Let’s have a look at what’s happened, what’s coming next, how it might impact networks, and how networks impact protocol design.
|
||||
|
||||
#### HTTP/2
|
||||
|
||||
[HTTP/2][16] (based on Google’s SPDY) was the first notable change — standardized in 2015, it multiplexes multiple requests onto one TCP connection, thereby avoiding the need to queue requests on the client without blocking each other. It is now widely deployed, and supported by all major browsers and web servers.
|
||||
|
||||
From a network’s viewpoint, HTTP/2 made a few notable changes. First, it’s a binary protocol, so any device that assumes it’s HTTP/1.1 is going to break.
|
||||
|
||||
That breakage was one of the primary reasons for another big change in HTTP/2; it effectively requires encryption. This gives it a better chance of avoiding interference from intermediaries that assume it’s HTTP/1.1, or do more subtle things like strip headers or block new protocol extensions — both things that had been seen by some of the engineers working on the protocol, causing significant support problems for them.
|
||||
|
||||
[HTTP/2 also requires TLS/1.2 to be used when it is encrypted][17], and [blacklists][18] cipher suites that were judged to be insecure — with the effect of only allowing ephemeral keys. See the TLS 1.3 section for potential impacts here.
|
||||
|
||||
Finally, HTTP/2 allows more than one host’s requests to be [coalesced onto a connection][19], to improve performance by reducing the number of connections (and thereby, congestion control contexts) used for a page load.
|
||||
|
||||
For example, you could have a connection for `www.example.com`, but also use it for requests for `images.example.com`. [Future protocol extensions might also allow additional hosts to be added to the connection][20], even if they weren’t listed in the original TLS certificate used for it. As a result, assuming that the traffic on a connection is limited to the purpose it was initiated for isn’t going to apply.
|
||||
|
||||
Despite these changes, it’s worth noting that HTTP/2 doesn’t appear to suffer from significant interoperability problems or interference from networks.
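If you want to check whether a site negotiates HTTP/2 from your own vantage point, a quick curl probe is usually enough; this assumes your curl build includes HTTP/2 (nghttp2) support, which is not something the article itself covers:

```
# Request the headers over HTTP/2 if the server supports it; the first
# line of the response shows the negotiated version, e.g. "HTTP/2 200"
$ curl -sI --http2 https://www.google.com | head -n 1
```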
|
||||
|
||||
#### TLS 1.3
|
||||
|
||||
[TLS 1.3][21] is just going through the final processes of standardization and is already supported by some implementations.
|
||||
|
||||
Don’t be fooled by its incremental name; this is effectively a new version of TLS, with a much-revamped handshake that allows application data to flow from the start (often called ‘0RTT’). The new design relies upon ephemeral key exchange, thereby ruling out static keys.
|
||||
|
||||
This has caused concern from some network operators and vendors — in particular those who need visibility into what’s happening inside those connections.
|
||||
|
||||
For example, consider the datacentre for a bank that has regulatory requirements for visibility. By sniffing traffic in the network and decrypting it with the static keys of their servers, they can log legitimate traffic and identify harmful traffic, whether it be attackers from the outside or employees trying to leak data from the inside.
|
||||
|
||||
TLS 1.3 doesn’t support that particular technique for intercepting traffic, since it’s also [a form of attack that ephemeral keys protect against][22]. However, since they have regulatory requirements to both use modern encryption protocols and to monitor their networks, this puts those network operators in an awkward spot.
|
||||
|
||||
There’s been much debate about whether regulations require static keys, whether alternative approaches could be just as effective, and whether weakening security for the entire Internet for the benefit of relatively few networks is the right solution. Indeed, it’s still possible to decrypt traffic in TLS 1.3, but you need access to the ephemeral keys to do so, and by design, they aren’t long-lived.
|
||||
|
||||
At this point it doesn’t look like TLS 1.3 will change to accommodate these networks, but there are rumblings about creating another protocol that allows a third party to observe what’s going on — and perhaps more — for these use cases. Whether that gets traction remains to be seen.
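If you want to see which TLS version a given server will negotiate with you, you can probe it from the command line. The sketch below assumes an OpenSSL build new enough to know the `-tls1_3` option (1.1.1 or later, which postdates this article), so treat it as illustrative:

```
# Attempt a TLS 1.3-only handshake; the "Protocol" line in the session
# summary shows what was negotiated, and the handshake simply fails
# if the server cannot speak TLS 1.3
$ openssl s_client -connect www.example.com:443 -tls1_3 </dev/null
```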
|
||||
|
||||
#### QUIC
|
||||
|
||||
During work on HTTP/2, it became evident that TCP has similar inefficiencies. Because TCP is an in-order delivery protocol, the loss of one packet can prevent those in the buffers behind it from being delivered to the application. For a multiplexed protocol, this can make a big difference in performance.
|
||||
|
||||
[QUIC][23] is an attempt to address that by effectively rebuilding TCP semantics (along with some of HTTP/2’s stream model) on top of UDP. Like HTTP/2, it started as a Google effort and is now in the IETF, with an initial use case of HTTP-over-UDP and a goal of becoming a standard in late 2018. However, since Google has already deployed QUIC in the Chrome browser and on its sites, it already accounts for more than 7% of Internet traffic.
|
||||
|
||||
Read [Your questions answered about QUIC][24]
|
||||
|
||||
Besides the shift from TCP to UDP for such a sizable amount of traffic (and all of the adjustments in networks that might imply), both Google QUIC (gQUIC) and IETF QUIC (iQUIC) require encryption to operate at all; there is no unencrypted QUIC.
|
||||
|
||||
iQUIC uses TLS 1.3 to establish keys for a session and then uses them to encrypt each packet. However, since it’s UDP-based, a lot of the session information and metadata that’s exposed in TCP gets encrypted in QUIC.
|
||||
|
||||
In fact, iQUIC’s current [‘short header’][25] — used for all packets except the handshake — only exposes a packet number, an optional connection identifier, and a byte of state for things like the encryption key rotation schedule and the packet type (which might end up encrypted as well).
|
||||
|
||||
Everything else is encrypted — including ACKs, to raise the bar for [traffic analysis][26] attacks.
|
||||
|
||||
However, this means that passively estimating RTT and packet loss by observing connections is no longer possible; there isn’t enough information. This lack of observability has caused a significant amount of concern by some in the operator community, who say that passive measurements like this are critical for debugging and understanding their networks.
|
||||
|
||||
One proposal to meet this need is the ‘[Spin Bit][27]‘ — a bit in the header that flips once a round trip, so that observers can estimate RTT. Since it’s decoupled from the application’s state, it doesn’t appear to leak any information about the endpoints, beyond a rough estimate of location on the network.
|
||||
|
||||
#### DOH
|
||||
|
||||
The newest change on the horizon is DOH — [DNS over HTTP][28]. A [significant amount of research has shown that networks commonly use DNS as a means of imposing policy][29] (whether on behalf of the network operator or a greater authority).
|
||||
|
||||
Circumventing this kind of control with encryption has been [discussed for a while][30], but it has a disadvantage (at least from some standpoints) — it is possible to discriminate it from other traffic; for example, by using its port number to block access.
|
||||
|
||||
DOH addresses that by piggybacking DNS traffic onto an existing HTTP connection, thereby removing any discriminators. A network that wishes to block access to that DNS resolver can only do so by blocking access to the website as well.
|
||||
|
||||
For example, if Google were to deploy its [public DNS service over DOH][31] on `www.google.com` and a user configures their browser to use it, a network that wants (or is required) to stop it would have to effectively block all of Google (thanks to how they host their services).
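As an aside, you can already get a feel for carrying DNS queries over HTTPS with Google’s public resolver, which exposes a JSON lookup endpoint; note that this JSON API is only my illustration of the idea and is not the DOH wire format being standardized:

```
# Resolve a name by sending the query over HTTPS instead of classic port-53 DNS
$ curl -s 'https://dns.google/resolve?name=example.com&type=A'
```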
|
||||
|
||||
DOH has just started its work, but there’s already a fair amount of interest in it, and some rumblings of deployment. How the networks (and governments) that use DNS to impose policy will react remains to be seen.
|
||||
|
||||
Read [IETF 100, Singapore: DNS over HTTP (DOH!)][1]
|
||||
|
||||
#### Ossification and grease
|
||||
|
||||
To return to motivations, one theme throughout this work is how protocol designers are increasingly encountering problems where networks make assumptions about traffic.
|
||||
|
||||
For example, TLS 1.3 has had a number of last-minute issues with middleboxes that assume it’s an older version of the protocol. gQUIC blacklists several networks that throttle UDP traffic, because they think that it’s harmful or low-priority traffic.
|
||||
|
||||
When a protocol can’t evolve because deployments ‘freeze’ its extensibility points, we say it has _ossified_ . TCP itself is a severe example of ossification; so many middleboxes do so many things to TCP — whether it’s blocking packets with TCP options that aren’t recognized, or ‘optimizing’ congestion control.
|
||||
|
||||
It’s necessary to prevent ossification, to ensure that protocols can evolve to meet the needs of the Internet in the future; otherwise, it would be a ‘tragedy of the commons’ where the actions of some individual networks — although well-intended — would affect the health of the Internet overall.
|
||||
|
||||
There are many ways to prevent ossification; if the data in question is encrypted, it cannot be accessed by any party but those that hold the keys, preventing interference. If an extension point is unencrypted but commonly used in a way that would break applications visibly (for example, HTTP headers), it’s less likely to be interfered with.
|
||||
|
||||
Where protocol designers can’t use encryption and an extension point isn’t used often, artificially exercising the extension point can help; we call this _greasing_ it.
|
||||
|
||||
For example, QUIC encourages endpoints to use a range of decoy values in its [version negotiation][32], to avoid implementations assuming that it will never change (as was often encountered in TLS implementations, leading to significant problems).
|
||||
|
||||
#### The network and the user
|
||||
|
||||
Beyond the desire to avoid ossification, these changes also reflect the evolving relationship between networks and their users. While for a long time people assumed that networks were always benevolent — or at least disinterested — parties, this is no longer the case, thanks not only to [pervasive monitoring][33] but also attacks like [Firesheep][34].
|
||||
|
||||
As a result, there is growing tension between the needs of Internet users overall and those of the networks who want to have access to some amount of the data flowing over them. Particularly affected will be networks that want to impose policy upon those users; for example, enterprise networks.
|
||||
|
||||
In some cases, they might be able to meet their goals by installing software (or a CA certificate, or a browser extension) on their users’ machines. However, this isn’t as easy in cases where the network doesn’t own or have access to the computer; for example, BYOD has become common, and IoT devices seldom have the appropriate control interfaces.
|
||||
|
||||
As a result, a lot of discussion surrounding protocol development in the IETF is touching on the sometimes competing needs of enterprises and other ‘leaf’ networks and the good of the Internet overall.
|
||||
|
||||
#### Get involved
|
||||
|
||||
For the Internet to work well in the long run, it needs to provide value to end users, avoid ossification, and allow networks to operate. The changes taking place now need to meet all three goals, but we need more input from network operators.
|
||||
|
||||
If these changes affect your network — or won’t — please leave comments below, or better yet, get involved in the [IETF][35] by attending a meeting, joining a mailing list, or providing feedback on a draft.
|
||||
|
||||
Thanks to Martin Thomson and Brian Trammell for their review.
|
||||
|
||||
_Mark Nottingham is a member of the Internet Architecture Board and co-chairs the IETF’s HTTP and QUIC Working Groups._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.apnic.net/2017/12/12/internet-protocols-changing/
|
||||
|
||||
作者:[ Mark Nottingham ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://blog.apnic.net/author/mark-nottingham/
|
||||
[1]:https://blog.apnic.net/2017/11/17/ietf-100-singapore-dns-http-doh/
|
||||
[2]:https://blog.apnic.net/author/mark-nottingham/
|
||||
[3]:https://blog.apnic.net/category/tech-matters/
|
||||
[4]:https://blog.apnic.net/tag/dns/
|
||||
[5]:https://blog.apnic.net/tag/doh/
|
||||
[6]:https://blog.apnic.net/tag/guest-post/
|
||||
[7]:https://blog.apnic.net/tag/http/
|
||||
[8]:https://blog.apnic.net/tag/ietf/
|
||||
[9]:https://blog.apnic.net/tag/quic/
|
||||
[10]:https://blog.apnic.net/tag/tls/
|
||||
[11]:https://blog.apnic.net/tag/protocol/
|
||||
[12]:https://blog.apnic.net/2017/12/12/internet-protocols-changing/#comments
|
||||
[13]:https://blog.apnic.net/
|
||||
[14]:https://www.smashingmagazine.com/2015/09/why-performance-matters-the-perception-of-time/
|
||||
[15]:https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46197.pdf
|
||||
[16]:https://http2.github.io/
|
||||
[17]:http://httpwg.org/specs/rfc7540.html#TLSUsage
|
||||
[18]:http://httpwg.org/specs/rfc7540.html#BadCipherSuites
|
||||
[19]:http://httpwg.org/specs/rfc7540.html#reuse
|
||||
[20]:https://tools.ietf.org/html/draft-bishop-httpbis-http2-additional-certs
|
||||
[21]:https://datatracker.ietf.org/doc/draft-ietf-tls-tls13/
|
||||
[22]:https://en.wikipedia.org/wiki/Forward_secrecy
|
||||
[23]:https://quicwg.github.io/
|
||||
[24]:https://blog.apnic.net/2016/08/30/questions-answered-quic/
|
||||
[25]:https://quicwg.github.io/base-drafts/draft-ietf-quic-transport.html#short-header
|
||||
[26]:https://www.mjkranch.com/docs/CODASPY17_Kranch_Reed_IdentifyingHTTPSNetflix.pdf
|
||||
[27]:https://tools.ietf.org/html/draft-trammell-quic-spin
|
||||
[28]:https://datatracker.ietf.org/wg/doh/about/
|
||||
[29]:https://datatracker.ietf.org/meeting/99/materials/slides-99-maprg-fingerprint-based-detection-of-dns-hijacks-using-ripe-atlas/
|
||||
[30]:https://datatracker.ietf.org/wg/dprive/about/
|
||||
[31]:https://developers.google.com/speed/public-dns/
|
||||
[32]:https://quicwg.github.io/base-drafts/draft-ietf-quic-transport.html#rfc.section.3.7
|
||||
[33]:https://tools.ietf.org/html/rfc7258
|
||||
[34]:http://codebutler.com/firesheep
|
||||
[35]:https://www.ietf.org/
|
@ -0,0 +1,282 @@
|
||||
Toplip – A Very Strong File Encryption And Decryption CLI Utility
|
||||
======
|
||||
There are numerous file encryption tools available on the market to protect
|
||||
your files. We have already reviewed some encryption tools such as
|
||||
[**Cryptomator**][1], [**Cryptkeeper**][2], [**CryptoGo**][3], [**Cryptr**][4],
|
||||
[**Tomb**][5], and [**GnuPG**][6]. Today, we will be discussing yet
|
||||
another file encryption and decryption command line utility named
|
||||
**"Toplip"**. It is a free and open source encryption utility that uses a very
|
||||
strong encryption method called **[AES256][7]** , along with an **XTS-AES**
|
||||
design to safeguard your confidential data. Also, it uses [**Scrypt**][8], a
|
||||
password-based key derivation function, to protect your passphrases against
|
||||
brute-force attacks.
|
||||
|
||||
### Prominent features
|
||||
|
||||
Compared to other file encryption tools, toplip ships with the following
|
||||
unique and prominent features.
|
||||
|
||||
* Very strong XTS-AES256 based encryption method.
|
||||
* Plausible deniability.
|
||||
* Encrypt files inside images (PNG/JPG).
|
||||
* Multiple passphrase protection.
|
||||
* Simplified brute force recovery protection.
|
||||
* No identifiable output markers.
|
||||
* Open source/GPLv3.
|
||||
|
||||
### Installing Toplip
|
||||
|
||||
There is no installation required. Toplip is a standalone executable binary
|
||||
file. All you have to do is download the latest toplip from the [**official
|
||||
products page**][9] and make it executable. To do so, just run:
|
||||
|
||||
```
|
||||
chmod +x toplip
|
||||
```
|
||||
|
||||
### Usage
|
||||
|
||||
If you run toplip without any arguments, you will see the help section.
|
||||
|
||||
```
|
||||
./toplip
|
||||
```
|
||||
|
||||
[![][10]][11]
|
||||
|
||||
Allow me to show you some examples.
|
||||
|
||||
For the purpose of this guide, I have created two files namely **file1** and
|
||||
**file2**. Also, I have an image file inside which we will hide the files.
|
||||
And finally, I have the **toplip** executable binary. I have kept
|
||||
them all in a directory called **test**.
|
||||
|
||||
[![][12]][13]
|
||||
|
||||
**Encrypt/decrypt a single file**
|
||||
|
||||
Now, let us encrypt **file1**. To do so, run:
|
||||
|
||||
```
|
||||
./toplip file1 > file1.encrypted
|
||||
```
|
||||
|
||||
This command will prompt you to enter a passphrase. Once you have given the
|
||||
passphrase, it will encrypt the contents of **file1** and save them in a file
|
||||
called **file1.encrypted** in your current working directory.
|
||||
|
||||
Sample output of the above command would be:
|
||||
|
||||
```
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip file1 Passphrase #1: generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
To verify if the file is really encrypted, try to open it and you will see
|
||||
some random characters.
|
||||
|
||||
To decrypt the encrypted file, use **-d** flag like below:
|
||||
|
||||
```
|
||||
./toplip -d file1.encrypted
|
||||
```
|
||||
|
||||
This command will decrypt the given file and display the contents in the
|
||||
Terminal window.
|
||||
|
||||
To restore the file instead of writing to stdout, do:
|
||||
|
||||
```
|
||||
./toplip -d file1.encrypted > file1.decrypted
|
||||
```
|
||||
|
||||
Enter the correct passphrase to decrypt the file. All contents of **file1.encrypted** will be restored in a file called **file1.decrypted**.
|
||||
|
||||
Please don't follow this naming scheme in practice; I used it only for the sake of easy understanding. Use names that are hard to predict.
|
||||
|
||||
**Encrypt/decrypt multiple files**
|
||||
|
||||
Now we will encrypt two files with two separate passphrases for each one.
|
||||
|
||||
```
|
||||
./toplip -alt file1 file2 > file3.encrypted
|
||||
```
|
||||
|
||||
You will be asked to enter a passphrase for each file. Use a different
|
||||
passphrase for each one.
|
||||
|
||||
Sample output of the above command will be:
|
||||
|
||||
```
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
**file2 Passphrase #1** : generating keys...Done
|
||||
**file1 Passphrase #1** : generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
What the above command will do is encrypt the contents of two files and save
|
||||
them in a single file called **file3.encrypted**. While restoring, just give
|
||||
the respective passphrase. For example, if you give the passphrase of file1,
|
||||
toplip will restore file1. If you enter the passphrase of file2, toplip will
|
||||
restore file2.
|
||||
|
||||
Each **toplip** encrypted output may contain up to four wholly independent
|
||||
files, and each created with their own separate and unique passphrase. Due to
|
||||
the way the encrypted output is put together, there is no way to easily
|
||||
determine whether or not multiple files actually exist in the first place. By
|
||||
default, even if only one file is encrypted using toplip, random data is added
|
||||
automatically. If more than one file is specified, each with their own
|
||||
passphrase, then you can selectively extract each file independently and thus
|
||||
deny the existence of the other files altogether. This effectively allows a
|
||||
user to open an encrypted bundle with controlled exposure risk, and no
|
||||
computationally inexpensive way for an adversary to conclusively identify that
|
||||
additional confidential data exists. This is called **Plausible deniability**
|
||||
, one of the notable features of toplip.
|
||||
|
||||
To decrypt **file1** from **file3.encrypted** , just enter:
|
||||
|
||||
```
|
||||
./toplip -d file3.encrypted > file1.decrypted
|
||||
```
|
||||
|
||||
You will be prompted to enter the correct passphrase of file1.
|
||||
|
||||
To decrypt **file2** from **file3.encrypted** , enter:
|
||||
|
||||
```
|
||||
./toplip -d file3.encrypted > file2.decrypted
|
||||
```
|
||||
|
||||
Do not forget to enter the correct passphrase of file2.
|
||||
|
||||
**Use multiple passphrase protection**
|
||||
|
||||
This is another cool feature that I admire. We can provide multiple
|
||||
passphrases for a single file when encrypting it. It will protect the
|
||||
passphrases against brute force attempts.
|
||||
|
||||
```
|
||||
./toplip -c 2 file1 > file1.encrypted
|
||||
```
|
||||
|
||||
Here, **-c 2** represents two different passphrases. Sample output of above
|
||||
command would be:
|
||||
|
||||
```
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
**file1 Passphrase #1:** generating keys...Done
|
||||
**file1 Passphrase #2:** generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
As you see in the above example, toplip prompted me to enter two passphrases.
|
||||
Please note that you must **provide two different passphrases** , not a single
|
||||
passphrase twice.
|
||||
|
||||
To decrypt this file, do:
|
||||
|
||||
```
|
||||
$ ./toplip -c 2 -d file1.encrypted > file1.decrypted
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
**file1.encrypted Passphrase #1:** generating keys...Done
|
||||
**file1.encrypted Passphrase #2:** generating keys...Done
|
||||
Decrypting...Done
|
||||
```
|
||||
|
||||
**Hide files inside image**
|
||||
|
||||
The practice of concealing a file, message, image, or video within another
|
||||
file is called **steganography**. Fortunately, this feature exists in toplip
|
||||
by default.
|
||||
|
||||
To hide one or more files inside an image, use the **-m** flag as shown below.
|
||||
|
||||
```
|
||||
$ ./toplip -m image.png file1 > image1.png
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
file1 Passphrase #1: generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
This command conceals the contents of file1 inside an image named image1.png.
|
||||
To decrypt it, run:
|
||||
|
||||
```
|
||||
$ ./toplip -d image1.png > file1.decrypted This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
image1.png Passphrase #1: generating keys...Done
|
||||
Decrypting...Done
|
||||
```
|
||||
|
||||
**Increase password complexity**
|
||||
|
||||
To make things even harder to break, we can increase the password complexity
|
||||
like below.
|
||||
|
||||
```
|
||||
./toplip -c 5 -i 0x8000 -alt file1 -c 10 -i 10 file2 > file3.encrypted
|
||||
```
|
||||
|
||||
The above command will prompt you to enter 5 passphrases for file1 and 10
|
||||
passphrases for file2, and encrypt both of them in a single file called
|
||||
"file3.encrypted". As you may have noticed, we used one additional flag
|
||||
**-i** in this example. This is used to specify key derivation iterations.
|
||||
This option overrides the default iteration count of 1 for scrypt's initial
|
||||
and final PBKDF2 stages. Hexadecimal or decimal values permitted, e.g.
|
||||
**0x8000** , **10** , etc. Please note that this can dramatically increase the
|
||||
calculation times.
|
||||
|
||||
To decrypt file1, use:
|
||||
|
||||
```
|
||||
./toplip -c 5 -i 0x8000 -d file3.encrypted > file1.decrypted
|
||||
```
|
||||
|
||||
To decrypt file2, use:
|
||||
|
||||
```
|
||||
./toplip -c 10 -i 10 -d file3.encrypted > file2.decrypted
|
||||
```
|
||||
|
||||
To know more about the underlying technical information and crypto methods
|
||||
used in toplip, refer to its official website given at the end.
|
||||
|
||||
My personal recommendation to anyone who wants to protect their data: don't
|
||||
rely on a single method. Always use more than one tool or method to encrypt
|
||||
your files. Do not write passphrases/passwords on paper or save them in
|
||||
your local or cloud storage; memorize them and destroy any notes. If
|
||||
you're poor at remembering passwords, consider using a trustworthy password
|
||||
manager.
|
||||
|
||||
And, that's all. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/cryptomator-open-source-client-side-encryption-tool-cloud/
|
||||
[2]:https://www.ostechnix.com/how-to-encrypt-your-personal-foldersdirectories-in-linux-mint-ubuntu-distros/
|
||||
[3]:https://www.ostechnix.com/cryptogo-easy-way-encrypt-password-protect-files/
|
||||
[4]:https://www.ostechnix.com/cryptr-simple-cli-utility-encrypt-decrypt-files/
|
||||
[5]:https://www.ostechnix.com/tomb-file-encryption-tool-protect-secret-files-linux/
|
||||
[6]:https://www.ostechnix.com/an-easy-way-to-encrypt-and-decrypt-files-from-commandline-in-linux/
|
||||
[7]:http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
|
||||
[8]:http://en.wikipedia.org/wiki/Scrypt
|
||||
[9]:https://2ton.com.au/Products/
|
||||
[10]:https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2.png%201366w,%20https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2-300x157.png%20300w,%20https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2-768x403.png%20768w,%20https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2-1024x537.png%201024w
|
||||
[11]:http://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2.png
|
||||
[12]:https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1.png%20779w,%20https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1-300x101.png%20300w,%20https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1-768x257.png%20768w
|
||||
[13]:http://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1.png
|
||||
|
@ -0,0 +1,186 @@
|
||||
Creating a blog with pelican and Github pages
|
||||
======
|
||||
|
||||
Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiar with using Github and creating a Python virtual environment for development. If you aren't, I recommend learning with the [Django Girls tutorial][2], which covers that and more.
|
||||
|
||||
This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).
|
||||
|
||||
The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.
|
||||
|
||||
![Screenshot of Github, the menu to create a new repository is open and a new repo is being created with the name 'rsip22.github.io'][3]
|
||||
|
||||
I recommend that you initialize your repository with a README, with a .gitignore for Python and with a [free software license][4]. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.
|
||||
|
||||
Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:
|
||||
```
|
||||
$ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git
|
||||
|
||||
```
|
||||
|
||||
And change to the new directory:
|
||||
```
|
||||
$ cd YOUR_USERNAME.github.io
|
||||
|
||||
```
|
||||
|
||||
Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. To do that, you must create a new branch called "source":
|
||||
```
|
||||
$ git checkout -b source
|
||||
|
||||
```
|
||||
|
||||
Create the virtualenv with the Python3 version installed on your system.
|
||||
|
||||
On GNU/Linux systems, the command might go as:
|
||||
```
|
||||
$ python3 -m venv venv
|
||||
|
||||
```
|
||||
|
||||
or as
|
||||
```
|
||||
$ virtualenv --python=python3.5 venv
|
||||
|
||||
```
|
||||
|
||||
And activate it:
|
||||
```
|
||||
$ source venv/bin/activate
|
||||
|
||||
```
|
||||
|
||||
Inside the virtualenv, you have to install pelican and its dependencies. You should also install ghp-import (to help us with publishing to Github) and Markdown (for writing your posts using markdown). It goes like this:
|
||||
```
|
||||
(venv)$ pip install pelican markdown ghp-import
|
||||
|
||||
```
|
||||
|
||||
Once that is done, you can start creating your blog using pelican-quickstart:
|
||||
```
|
||||
(venv)$ pelican-quickstart
|
||||
|
||||
```
|
||||
|
||||
This will prompt you with a series of questions. Before answering them, take a look at my answers below:
|
||||
```
|
||||
> Where do you want to create your new web site? [.] ./
|
||||
> What will be the title of this web site? Renata's blog
|
||||
> Who will be the author of this web site? Renata
|
||||
> What will be the default language of this web site? [pt] en
|
||||
> Do you want to specify a URL prefix? e.g., http://example.com (Y/n) n
|
||||
> Do you want to enable article pagination? (Y/n) y
|
||||
> How many articles per page do you want? [10] 10
|
||||
> What is your time zone? [Europe/Paris] America/Sao_Paulo
|
||||
> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
|
||||
> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
|
||||
> Do you want to upload your website using FTP? (y/N) n
|
||||
> Do you want to upload your website using SSH? (y/N) n
|
||||
> Do you want to upload your website using Dropbox? (y/N) n
|
||||
> Do you want to upload your website using S3? (y/N) n
|
||||
> Do you want to upload your website using Rackspace Cloud Files? (y/N) n
|
||||
> Do you want to upload your website using GitHub Pages? (y/N) y
|
||||
> Is this your personal page (username.github.io)? (y/N) y
|
||||
Done. Your new project is available at /home/username/YOUR_USERNAME.github.io
|
||||
|
||||
```
|
||||
|
||||
As for the time zone, it should be specified as a TZ database time zone name (full list here: [List of tz database time zones][5]).
|
||||
|
||||
Now, go ahead and create your first blog post! You might want to open the project folder on your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata which identifies the Title, Date, Category and more from the post before you start with the content, like this:
|
||||
```
|
||||
.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes
|
||||
Title: My first post
|
||||
Date: 2017-11-26 10:01
|
||||
Modified: 2017-11-27 12:30
|
||||
Category: misc
|
||||
Tags: first , misc
|
||||
Slug: My-first-post
|
||||
Authors: Your name
|
||||
Summary: What does your post talk about ? Write here.
|
||||
|
||||
This is the *first post* from my Pelican blog. ** YAY !**
|
||||
```
|
||||
|
||||
Let's see how it looks?
|
||||
|
||||
Go to the terminal, generate the static files and start the server. To do that, use the following command:
|
||||
```
|
||||
(venv)$ make html && make serve
|
||||
```
|
||||
|
||||
While this command is running, you should be able to visit it on your favorite web browser by typing localhost:8000 on the address bar.
|
||||
|
||||
![Screenshot of the blog home. It has a header with the title Renata\\'s blog, the first post on the left, info about the post on the right, links and social on the bottom.][6]
|
||||
|
||||
Pretty neat, right?
|
||||
|
||||
Now, what if you want to put an image in a post? How do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:
|
||||
```
|
||||
.lang="python" # DON'T COPY this line, it exists just for highlighting purposes
|
||||
STATIC_PATHS = ['images']
|
||||
|
||||
```
|
||||
|
||||
Save it. Go to your post and add the image this way:
|
||||
```
|
||||
.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes
|
||||
![Write here a good description for people who can ' t see the image]({filename}/images/IMAGE_NAME.jpg)
|
||||
|
||||
```
|
||||
|
||||
You can interrupt the server at any time by pressing CTRL+C in the terminal. But you should start it again and check whether the image is displayed correctly. Can you remember how?
|
||||
```
|
||||
(venv)$ make html && make serve
|
||||
```
|
||||
|
||||
One last step before your coding is "done": you should make sure anyone can read your posts using ATOM or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:
|
||||
```
|
||||
.lang="python" # DON'T COPY this line, it exists just for highlighting purposes
|
||||
FEED_ALL_ATOM = 'feeds/all.atom.xml'
|
||||
FEED_ALL_RSS = 'feeds/all.rss.xml'
|
||||
AUTHOR_FEED_RSS = 'feeds/%s.rss.xml'
|
||||
RSS_FEED_SUMMARY_ONLY = False
|
||||
```
|
||||
|
||||
Save everything so you can send the code to Github. You can do that by adding all files, committing them with a message ('first commit') and using git push. You will be asked for your Github login and password.
|
||||
```
|
||||
$ git add -A && git commit -a -m 'first commit' && git push --all
|
||||
|
||||
```
|
||||
|
||||
And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:
|
||||
```
|
||||
$ make github
|
||||
|
||||
```
|
||||
|
||||
You will be asked for your Github login and password again. And... voila! Your new blog should be live on https://YOUR_USERNAME.github.io.
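If you are curious about what `make github` actually does under the hood: it is roughly a shortcut that regenerates the site with the publish settings and lets ghp-import rewrite the master branch. Here is a rough sketch of the equivalent commands (the exact targets can vary between pelican-quickstart versions, so check the Makefile it generated for you):

```
# Approximation of the Makefile "github" target -- check your generated Makefile for the real commands
pelican content -o output -s publishconf.py                # build the site with the publish settings
ghp-import -m "Generate Pelican site" -b master output     # commit the output/ directory to the master branch
git push origin master                                     # publish it to GitHub Pages
```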
|
||||
|
||||
If you had an error in any step along the way, please reread this tutorial and try to detect in which part the problem happened, because that is the first step to debugging. Sometimes, even something simple like a typo or, with Python, a wrong indentation, can give us trouble. Shout out and ask for help online or in your community.
|
||||
|
||||
For tips on how to write your posts using Markdown, you should read the [Daring Fireball Markdown guide][7].
|
||||
|
||||
To get other themes, I recommend you visit [Pelican Themes][8].
|
||||
|
||||
This post was adapted from [Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme][9]. I hope it was somewhat useful for you.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://rsip22.github.io/blog/create-a-blog-with-pelican-and-github-pages.html
|
||||
|
||||
作者:[][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://rsip22.github.io
|
||||
[1]https://rsip22.github.io/blog/category/blog.html
|
||||
[2]https://tutorial.djangogirls.org
|
||||
[3]https://rsip22.github.io/blog/img/create_github_repository.png
|
||||
[4]https://www.gnu.org/licenses/license-list.html
|
||||
[5]https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
|
||||
[6]https://rsip22.github.io/blog/img/blog_screenshot.png
|
||||
[7]https://daringfireball.net/projects/markdown/syntax
|
||||
[8]http://www.pelicanthemes.com/
|
||||
[9]https://a-slide.github.io/blog/github-pelican
|
@ -0,0 +1,63 @@
|
||||
书评:《Ours to Hack and to Own》
|
||||
============================================================
|
||||
|
||||
![书评: Ours to Hack and to Own](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDUCATION_colorbooks.png?itok=liB3FyjP "Book review: Ours to Hack and to Own")
|
||||
Image by : opensource.com
|
||||
|
||||
私有制的时代看起来似乎结束了,我将不仅仅讨论那些由我们中的许多人引入到我们的家庭与生活的设备和软件。我也将讨论这些设备与应用依赖的平台与服务。
|
||||
|
||||
尽管我们使用的许多服务是免费的,我们对它们并没有任何控制。本质上讲,这些企业确实控制着我们所看到的,听到的以及阅读到的内容。不仅如此,许多企业还在改变工作的性质。他们正使用封闭的平台来助长由全职工作到[零工经济][2]的转变方式,这种方式提供极少的安全性与确定性。
|
||||
|
||||
这项行动对于网络以及每一个使用与依赖网络的人产生了广泛的影响。仅仅二十多年前的开放网络的想象正在逐渐消逝并迅速地被一块难以穿透的幕帘所取代。
|
||||
|
||||
一种日渐流行的补救办法就是建立[平台合作社][3],即由用户所拥有的数字化平台。正如这本书所阐述的,平台合作社背后的观点与开源有许多相同的根源。
|
||||
|
||||
学者Trebor Scholz和作家Nathan Schneider已经收集了40篇探讨平台合作社作为普通人可使用以提升开放性并对闭源系统的不透明性及各种限制予以还击的工具的增长及需求的论文。
|
||||
|
||||
### 哪里适合开源
|
||||
|
||||
任何平台合作社的核心及接近核心的部分都依赖于开源;不仅开源技术是必要的,构成开源的开放性、透明性、协同合作以及共享的准则与理念同样不可或缺。
|
||||
|
||||
在这本书的介绍中, Trebor Scholz指出:
|
||||
|
||||
> 与网络的黑盒子系统相反,这些平台需要使它们的数据流透明来辨别自身。他们需要展示客户与员工的数据在哪里存储,数据出售给了谁以及数据为了何种目的。
|
||||
|
||||
正是对开源如此重要的透明性,促使平台合作社如此吸引人并在目前大量已存平台之中成为令人耳目一新的变化。
|
||||
|
||||
开源软件在《Ours to Hack and to Own》所分享的平台合作社的构想中必然充当着重要角色。开源软件能够为群体建立助推合作社的技术型公共建设提供快速,不算昂贵的途径。
|
||||
|
||||
Mickey Metts在论文中这样形容, "与你的友好的社区型技术合作社相遇。(原文:Meet Your Friendly Neighborhood Tech Co-Op.)" Metts为一家名为Agaric的企业工作,这家企业使用Drupal为团体及小型企业建立他们不能独自完成的产品。除此以外, Metts还鼓励任何想要建立并运营自己的企业的公司或合作社的人接受免费且开源的软件。为什么呢?因为它是高质量的,不算昂贵的,可定制的,并且你能够与由乐于助人而又热情的人们组成的大型社区产生联系。
|
||||
|
||||
### 不总是开源的,但开源总在
|
||||
|
||||
这本书里不是所有的论文都聚焦或提及开源的;但是,开源方式的关键元素-合作,社区,开放管理以及电子自由化-总是在其表面若隐若现。
|
||||
|
||||
事实上正如《Ours to Hack and to Own》中许多论文所讨论的,建立一个更加开放、基于平常人的经济与社会区块,平台合作社会变得非常重要。用 Douglas Rushkoff 的话讲,那会是类似 Creative Commons 的组织对“共享知识资源的私有化”的补偿。它们也如巴塞罗那的 CTO(首席技术官)Francesca Bria 所描述的那样,是“通过确保市民数据安全性、隐私性和权利的系统”来运营他们自己的“分布式通用数据基础架构”的城市。
|
||||
|
||||
### 最后的思考
|
||||
|
||||
如果你在寻找改变互联网的蓝图以及我们工作的方式,《Ours to Hack and to Own》并不是你要寻找的。这本书与其说是用户指南,不如说是一种宣言。如书中所说,《Ours to Hack and to Own》让我们略微了解如果我们将开源方式准则应用于社会及更加广泛的世界我们能够做的事。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Scott Nesbitt -作家,编辑,雇佣兵,虎猫牛仔(原文:Ocelot wrangle),丈夫与父亲,博客写手,陶器收藏家。Scott正是做这样的一些事情。他还是大量写关于开源软件文章与博客的长期开源用户。你可以在Twitter,Github上找到他。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/1/review-book-ours-to-hack-and-own
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
译者:[darsh8](https://github.com/darsh8)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/scottnesbitt
|
||||
[1]:https://opensource.com/article/17/1/review-book-ours-to-hack-and-own?rate=dgkFEuCLLeutLMH2N_4TmUupAJDjgNvFpqWqYCbQb-8
|
||||
[2]:https://en.wikipedia.org/wiki/Access_economy
|
||||
[3]:https://en.wikipedia.org/wiki/Platform_cooperative
|
||||
[4]:http://www.orbooks.com/catalog/ours-to-hack-and-to-own/
|
||||
[5]:https://opensource.com/user/14925/feed
|
||||
[6]:https://opensource.com/users/scottnesbitt
|
132
translated/tech/20170413 More Unknown Linux Commands.md
Normal file
132
translated/tech/20170413 More Unknown Linux Commands.md
Normal file
@ -0,0 +1,132 @@
|
||||
更多你所不知道的 Linux 命令
|
||||
============================================================
|
||||
|
||||
|
||||
![unknown Linux commands](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/outer-limits-of-linux.jpg?itok=5L5xfj2v "unknown Linux commands")
|
||||
>在这篇文章中和 Carla Schroder 一起探索 Linux 中的一些鲜为人知的强大工具。[CC Zero][2]Pixabay
|
||||
|
||||
本文是一篇关于一些有趣但鲜为人知的工具 `termsaver`、`pv` 和 `calendar` 的文章。`termsaver` 是一个终端 ASCII 锁屏,`pv` 能够测量数据吞吐量并模拟输入。Debian 的 `calendar` 拥有许多不同的日历表,并且你还可以制定你自己的日历表。
|
||||
|
||||
![Linux commands](https://www.linux.com/sites/lcom/files/styles/floated_images/public/linux-commands-fig-1.png?itok=HveXXLLK "Linux commands")
|
||||
|
||||
*图片 1: 星球大战屏保。[使用许可][1]*
|
||||
|
||||
### 终端屏保
|
||||
|
||||
难道只有图形桌面能够拥有有趣的屏保吗?现在,你可以通过安装 `termsaver` 来享受 ASCII 屏保,比如 matrix(LCTT 译注:电影《黑客帝国》中出现的黑客屏保)、时钟、星球大战以及一系列不太安全的屏保。有趣的屏保将会瞬间占据 NSFW 屏幕。
|
||||
|
||||
`termsaver` 可以从 Debian/Ubuntu 的包管理器中直接下载安装,如果你使用别的不包含该软件包的发行版比如 CentOS,那么你可以从 [termsaver.brunobraga.net][7] 下载,然后按照安装指导进行安装。
|
||||
|
||||
运行 `termsaver -h` 来查看一系列屏保:
|
||||
|
||||
```
|
||||
randtxt displays word in random places on screen
|
||||
starwars runs the asciimation Star Wars movie
|
||||
urlfetcher displays url contents with typing animation
|
||||
quotes4all displays recent quotes from quotes4all.net
|
||||
rssfeed displays rss feed information
|
||||
matrix displays a matrix movie alike screensaver
|
||||
clock displays a digital clock on screen
|
||||
rfc randomly displays RFC contents
|
||||
jokes4all displays recent jokes from jokes4all.net (NSFW)
|
||||
asciiartfarts displays ascii images from asciiartfarts.com (NSFW)
|
||||
programmer displays source code in typing animation
|
||||
sysmon displays a graphical system monitor
|
||||
```
|
||||
|
||||
你可以通过运行命令 `termsaver [屏保名]` 来使用屏保,比如 `termsaver matrix` ,然后按 `Ctrl+c` 停止。你也可以通过运行 `termsaver [屏保名] -h` 命令来获取关于某一个特定屏保的信息。图片 1 来自 `startwars` 屏保,它运行的是古老但受人喜爱的 [Asciimation Wars][8] 。
|
||||
|
||||
那些不太安全的屏保通过在线获取资源的方式运行,我并不喜欢它们,但好消息是,由于 `termsaver` 是一些 Python 的脚本文件,因此,你可以很容易的利用它们连接到任何你想要的 RSS 资源。
|
||||
|
||||
### pv
|
||||
|
||||
`pv` 命令是一个非常有趣的小工具但却很实用。它的用途是监测数据复制的进程,比如,当你运行 `rsync` 命令或创建一个 `tar` 归档的时候。当你不带任何选项运行 `pv` 命令时,默认参数为:
|
||||
|
||||
* -p :进程
|
||||
|
||||
* -t :时间,到当前总运行时间
|
||||
|
||||
* -e :预计完成时间,这往往是不准确的,因为 `pv` 通常不知道需要移动的数据的大小
|
||||
|
||||
* -r :速率计数器,或吞吐量
|
||||
|
||||
* -b :字节计数器
|
||||
|
||||
一次 `rsync` 传输看起来像这样:
|
||||
|
||||
```
|
||||
$ rsync -av /home/carla/ /media/carla/backup/ | pv
|
||||
sending incremental file list
|
||||
[...]
|
||||
103GiB 0:02:48 [ 615MiB/s] [ <=>
|
||||
```
|
||||
|
||||
创建一个 tar 归档,就像下面这个例子:
|
||||
|
||||
```
|
||||
$ tar -czf - /file/path| (pv > backup.tgz)
|
||||
885MiB 0:00:30 [28.6MiB/s] [ <=>
|
||||
```
|
||||
|
||||
`pv` 能够监测进程,因此也可以监测 Web 浏览器的最大活动,令人惊讶的是,它产生了如此多的活动:
|
||||
|
||||
```
|
||||
$ pv -d 3095
|
||||
58:/home/carla/.pki/nssdb/key4.db: 0 B 0:00:33
|
||||
[ 0 B/s] [<=> ]
|
||||
78:/home/carla/.config/chromium/Default/Visited Links:
|
||||
256KiB 0:00:33 [ 0 B/s] [<=> ]
|
||||
]
|
||||
85:/home/carla/.con...romium/Default/data_reduction_proxy_leveldb/LOG:
|
||||
298 B 0:00:33 [ 0 B/s] [<=> ]
|
||||
```
|
||||
|
||||
在网上,我偶然发现一个使用 `pv` 最有趣的方式:使用 `pv` 来回显输入的内容:
|
||||
|
||||
```
|
||||
$ echo "typing random stuff to pipe through pv" | pv -qL 8
|
||||
typing random stuff to pipe through pv
|
||||
```
|
||||
|
||||
普通的 `echo` 命令会瞬间打印一整行内容。通过管道传给 `pv` 之后能够让内容像是重新输入一样的显示出来。我不知道这是否有实际的价值,但是我非常喜欢它。`-L` 选项控制回显的速度,即多少字节每秒。
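顺带一提,`-L` 不只是用来玩的,它也可以给普通的数据复制限速。下面是一个简单的示意(文件路径只是假设的例子):

```
# 把 ISO 复制到 U 盘,并把速度限制在大约每秒 1MB(路径为假设示例)
pv -L 1M big.iso > /mnt/usb/big.iso
```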
|
||||
|
||||
`pv` 是一个非常古老且非常有趣的命令,这么多年以来,它拥有了许多的选项,包括有趣的格式化选项,多输出选项,以及传输速度修改器。你可以通过 `man pv` 来查看所有的选项。
|
||||
|
||||
### /usr/bin/calendar
|
||||
|
||||
通过浏览 `/usr/bin` 目录以及其他命令目录和阅读 man 手册,你能够学到很多东西。在 Debian/Ubuntu 上的 `/usr/bin/calendar` 是 BSD 日历的一个变种,但它忽略了月亮历和太阳历。它保留了多个日历包括 `calendar.computer, calendar.discordian, calendar.music` 以及 `calendar.lotr`。在我的系统上,man 手册列出了 `/usr/bin/calendar` 里存在的不同日历。下面这个例子展示了指环王日历接下来的 60 天:
|
||||
|
||||
```
|
||||
$ calendar -f /usr/share/calendar/calendar.lotr -A 60
|
||||
Apr 17 An unexpected party
|
||||
Apr 23 Crowning of King Ellesar
|
||||
May 19 Arwen leaves Lorian to wed King Ellesar
|
||||
Jun 11 Sauron attacks Osgilliath
|
||||
```
|
||||
|
||||
这些日历是纯文本文件,因此,你可以轻松的创建你自己的日历。最简单的方式就是复制已经存在的日历文件的格式。你可以通过 `man calendar` 命令来查看创建个人日历文件的更详细的指导。
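下面是一个极简的个人日历文件示意(文件名和条目都是假设的例子,确切的日期格式和默认查找路径请以 `man calendar` 为准):

```
$ cat my.calendar
01/15	给域名续费
04/01	愚人节,合并请求要多留个心眼
$ calendar -f my.calendar -A 365
```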
|
||||
|
||||
又一次很快走到了尽头。你可以花费一些时间来浏览你的文件系统,挖掘更多有趣的命令。
|
||||
|
||||
_你可以通过来自 Linux 基金会和 edX 的免费课程 ["Introduction to Linux"][5] 来学习更多关于 Linux 的知识_。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/4/more-unknown-linux-commands
|
||||
|
||||
作者:[ CARLA SCHRODER][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/cschroder
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[3]:https://www.linux.com/files/images/linux-commands-fig-1png
|
||||
|
||||
[4]:https://www.linux.com/files/images/outer-limits-linuxjpg
|
||||
[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
||||
[6]:https://www.addtoany.com/share#url=https%3A%2F%2Fwww.linux.com%2Flearn%2Fintro-to-linux%2F2017%2F4%2Fmore-unknown-linux-commands&amp;amp;title=More%20Unknown%20Linux%20Commands
|
||||
[7]:http://termsaver.brunobraga.net/
|
||||
[8]:http://www.asciimation.co.nz/
|
@ -0,0 +1,197 @@
|
||||
|
||||
如何提供有帮助的回答
|
||||
=============================
|
||||
|
||||
如果你的同事问你一个不太清晰的问题,你会怎么回答?我认为提问题是一种技巧(可以看 [如何提出有意义的问题][1]) 同时,合理地回答问题也是一种技巧。他们都是非常实用的。
|
||||
|
||||
一开始 - 有时向你提问的人不尊重你的时间,这很糟糕。
|
||||
|
||||
理想情况下,我们假设问你问题的人是一个理性的人并且正在尽力解决问题而你想帮助他们。和我一起工作的人是这样,我所生活的世界也是这样。当然,现实生活并不是这样。
|
||||
|
||||
下面是有助于回答问题的一些方法!
|
||||
|
||||
|
||||
### 如果他们提问不清楚,帮他们澄清
|
||||
|
||||
通常初学者不会提出很清晰的问题,或者问问题时缺少回答该问题所必要的信息。你可以尝试以下方法来澄清问题:
|
||||
|
||||
* ** 重述为一个更明确的问题 ** 来回复他们(”你是想问 X 吗?“)
|
||||
|
||||
* ** 向他们了解更具体的他们并没有提供的信息 ** (”你使用 IPv6 ?”)
|
||||
|
||||
* ** 问是什么导致了他们的问题 ** 例如,有时有些人会进入我的团队频道,询问我们的服务发现(service discovery )如何工作的。这通常是因为他们试图设置/重新配置服务。在这种情况下,如果问“你正在使用哪种服务?可以给我看看你正在处理的 pull requests 吗?”是有帮助的。
|
||||
|
||||
这些方法很多来自 [如何提出有意义的问题][2]中的要点。(尽管我永远不会对某人说“噢,你得先看完 “如何提出有意义的问题”这篇文章后再来像我提问)
|
||||
|
||||
|
||||
### 弄清楚他们已经知道了什么
|
||||
|
||||
在回答问题之前,知道对方已经知道什么是非常有用的!
|
||||
|
||||
Harold Treen 给了我一个很好的例子:
|
||||
|
||||
> 前几天,有人请我解释 “Redux-Sagas”。与其深入解释,不如说 “它们就像 worker threads,监听行为(actions),让你更新 Redux store”。
|
||||
|
||||
> 我开始搞清楚他们对 Redux 、行为(actions)、store 以及其他基本概念了解多少。将这些概念都联系在一起再来解释会容易得多。
|
||||
|
||||
弄清楚问你问题的人已经知道什么是非常重要的。因为有时他们可能会对基础概念感到疑惑(“ Redux 是什么?“),或者他们可能是专家但是恰巧遇到了微妙的极端情况(corner case)。如果答案建立在他们不知道的概念上会令他们困惑,但如果重述他们已经知道的的又会是乏味的。
|
||||
|
||||
这里有一个很实用的技巧来了解他们已经知道什么 - 比如可以尝试用“你对 X 了解多少?”而不是问“你知道 X 吗?”。
|
||||
|
||||
|
||||
### 给他们一个文档
|
||||
|
||||
“RTFM” (“去读那些他妈的手册”(Read The Fucking Manual))是一个典型的无用的回答,但事实上如果向他们指明一个特定的文档会是非常有用的!当我提问题的时候,我当然很乐意翻看那些能实际解决我的问题的文档,因为它也可能解决其他我想问的问题。
|
||||
|
||||
我认为明确你所给的文档的确能够解决问题是非常重要的,或者至少经过查阅后确认它对解决问题有帮助。否则,你可能将以下面这种情形结束对话(非常常见):
|
||||
|
||||
* Ali:我应该如何处理 X ?
|
||||
|
||||
* Jada:<文档链接>
|
||||
|
||||
* Ali: 这个并有实际解释如何处理 X ,它仅仅解释了如何处理 Y !
|
||||
|
||||
如果我所给的文档特别长,我会指明文档中那个我将会谈及的特定部分。[bash 手册][3] 有44000个字(真的!),所以如果只说“它在 bash 手册中有说明”是没有帮助的:)
|
||||
|
||||
|
||||
### 告诉他们一个有用的搜索
|
||||
|
||||
在工作中,我经常发现我可以利用我所知道的关键字进行搜索找到能够解决我的问题的答案。对于初学者来说,这些关键字往往不是那么明显。所以说“这是我用来寻找这个答案的搜索”可能有用些。再次说明,回答时请经检查后以确保搜索能够得到他们所需要的答案:)
|
||||
|
||||
|
||||
### 写新文档
|
||||
|
||||
人们经常一次又一次地问我的团队同样的问题。很显然这并不是他们的错(他们怎么能够知道在他们之前已经有10个人问了这个问题,且知道答案是什么呢?)因此,我们会尝试写新文档,而不是直接回答问题。
|
||||
|
||||
1. 马上写新文档
|
||||
|
||||
2. 给他们我们刚刚写好的新文档
|
||||
|
||||
3. 公示
|
||||
|
||||
写文档有时往往比回答问题需要花很多时间,但这是值得的。写文档尤其重要,如果:
|
||||
|
||||
a. 这个问题被问了一遍又一遍
|
||||
|
||||
b. 随着时间的推移,这个答案不会变化太大(如果这个答案每一个星期或者一个月就会变化,文档就会过时并且令人受挫)
|
||||
|
||||
|
||||
### 解释你做了什么
|
||||
|
||||
对于一个话题,作为初学者来说,这样的交流真让人沮丧:
|
||||
|
||||
* 新人:“嗨!你如何处理 X ?”
|
||||
|
||||
* 有经验的人:“我已经处理过了,而且它已经完美解决了”
|
||||
|
||||
* 新人:”...... 但是你做了什么?!“
|
||||
|
||||
如果问你问题的人想知道事情是如何进行的,这样是有帮助的:
|
||||
|
||||
* 让他们去完成任务而不是自己做
|
||||
|
||||
* 告诉他们你是如何得到你给他们的答案的。
|
||||
|
||||
这可能比你自己做的时间还要长,但对于被问的人来说这是一个学习机会,因为那样做使得他们将来能够更好地解决问题。
|
||||
|
||||
这样,你可以进行更好的交流,像这:
|
||||
|
||||
* 新人:“这个网站出现了错误,发生了什么?”
|
||||
|
||||
* 有经验的人:(2分钟后)”oh 这是因为发生了数据库故障转移“
|
||||
|
||||
* 新人: ”你是怎么知道的??!?!?“
|
||||
|
||||
* 有经验的人:“以下是我所做的!“:
|
||||
|
||||
1. 通常这些错误是因为服务器 Y 被关闭了。我查看了一下 `$PLACE` 但它表明服务器 Y 开着。所以,并不是这个原因导致的。
|
||||
|
||||
2. 然后我查看 X 的仪表盘 ,仪表盘的这个部分显示这里发生了数据库故障转移。
|
||||
|
||||
3. 然后我在日志中找到了相应服务器,并且它显示连接数据库错误,看起来错误就是这里。
|
||||
|
||||
如果你正在解释你是如何调试一个问题,解释你是如何发现问题,以及如何找出问题的。尽管看起来你好像已经得到正确答案,但感觉更好的是能够帮助他们提高学习和诊断能力,并了解可用的资源。
|
||||
|
||||
|
||||
### 解决根本问题
|
||||
|
||||
这一点有点棘手。有时候人们认为他们依旧找到了解决问题的正确途径,且他们只再多一点信息就可以解决问题。但他们可能并不是走在正确的道路上!比如:
|
||||
|
||||
* George:”我在处理 X 的时候遇到了错误,我该如何修复它?“
|
||||
|
||||
* Jasminda:”你是正在尝试解决 Y 吗?如果是这样,你不应该处理 X ,反而你应该处理 Z 。“
|
||||
|
||||
* George:“噢,你是对的!!!谢谢你!我会反过来处理 Z 的。”
|
||||
|
||||
Jasminda 一点都没有回答 George 的问题!反而,她猜测 George 并不想处理 X ,并且她是猜对了。这是非常有用的!
|
||||
|
||||
如果你这样做可能会产生高高在上的感觉:
|
||||
|
||||
* George:”我在处理 X 的时候遇到了错误,我该如何修复它?“
|
||||
|
||||
* Jasminda:不要这样做,如果你想处理 Y ,你应该反过来完成 Z 。
|
||||
|
||||
* George:“好吧,我并不是想处理 Y 。实际上我想处理 X 因为某些原因(REASONS)。所以我该如何处理 X 。
|
||||
|
||||
所以不要高高在上,且要记住有时有些提问者可能已经偏离根本问题很远了。同时回答提问者提出的问题以及他们本该提出的问题都是合理的:“嗯,如果你想处理 X ,那么你可能需要这么做,但如果你想用这个解决 Y 问题,可能通过处理其他事情你可以更好地解决这个问题,这就是为什么可以做得更好的原因。
|
||||
|
||||
|
||||
### 询问”那个回答可以解决您的问题吗?”
|
||||
|
||||
我总是喜欢在我回答了问题之后核实是否真的已经解决了问题:”这个回答解决了您的问题吗?您还有其他问题吗?“在问完这个之后最好等待一会,因为人们通常需要一两分钟来知道他们是否已经找到了答案。
|
||||
|
||||
我发现尤其是问“这个回答解决了您的问题吗”这个额外的步骤在写完文档后是非常有用的。通常,在写关于我熟悉的东西的文档时,我会忽略掉重要的东西而不会意识到它。
|
||||
|
||||
|
||||
### 结对编程和面对面交谈
|
||||
|
||||
我是远程工作的,所以我的很多对话都是基于文本的。我认为这是沟通的默认方式。
|
||||
|
||||
今天,我们生活在一个方便进行小视频会议和屏幕共享的世界!在工作时候,在任何时间我都可以点击一个按钮并快速加入与他人的视频对话或者屏幕共享的对话中!
|
||||
|
||||
例如,最近有人问如何自动调节他们的服务容量规划。我告诉他们我们有几样东西需要清理,但我还不太确定他们要清理的是什么。然后我们进行了一个简短的视频会话,5 分钟后我们就解决了他们的问题。
|
||||
|
||||
我认为,特别是如果有人真的被困在该如何开始一项任务时,开启视频进行结对编程几分钟真的比电子邮件或者一些即时通信更有效。
|
||||
|
||||
|
||||
### 不要表现得过于惊讶
|
||||
|
||||
这是源自 Recurse Center 的一则法则:[不要故作惊讶][4]。这里有一个常见的情景:
|
||||
|
||||
* 某人1:“什么是 Linux 内核”
|
||||
|
||||
* 某人2:“你竟然不知道什么是 Linux 内核(LINUX KERNEL)?!!!!?!!!????”
|
||||
|
||||
某人2表现(无论他们是否真的如此惊讶)是没有帮助的。这大部分只会让某人1不好受,因为他们确实不知道什么是 Linux 内核。
|
||||
|
||||
即使我事实上确实有点惊讶对方不知道某个东西,我也一直在练习不表现出惊讶,而这样做的效果很棒。
|
||||
|
||||
### 好好回答问题是很棒的事
|
||||
|
||||
显然并不是所有方法都是合适的,但希望你能够发现这里有些是有帮助的!我发现花时间去回答问题并教导人们其实是很有收获的。
|
||||
|
||||
特别感谢 Josh Triplett 的一些建议并做了很多有益的补充,以及感谢 Harold Treen、Vaibhav Sagar、Peter Bhat Hatkins、Wesley Aptekar Cassels 和 Paul Gowder的阅读或评论。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/answer-questions-well/
|
||||
|
||||
作者:[ Julia Evans][a]
|
||||
译者:[HardworkFish](https://github.com/HardworkFish)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://jvns.ca/about
|
||||
[1]:https://jvns.ca/blog/good-questions/
|
||||
[2]:https://jvns.ca/blog/good-questions/
|
||||
[3]:https://linux.die.net/man/1/bash
|
||||
[4]:https://jvns.ca/blog/2017/04/27/no-feigning-surprise/
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,89 +0,0 @@
|
||||
Translating by FelixYFZ
|
||||
|
||||
面向初学者的Linux网络硬件: 软件工程思想
|
||||
============================================================
|
||||
|
||||
![island network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/soderskar-island.jpg?itok=wiMaF66b "island network")
|
||||
没有路由和桥接,我们将会成为孤独的小岛,你将会在这个网络教程中学到更多知识。
|
||||
Commons Zero][3]Pixabay
|
||||
|
||||
上周,我们学习了本地网络硬件知识,本周,我们将学习网络互联技术和在移动网络中的一些很酷的黑客技术。
|
||||
### Routers:路由器
|
||||
|
||||
|
||||
网络路由器就是计算机网络中的一切,因为路由器连接着网络,没有路由器,我们就会成为孤岛,
|
||||
|
||||
图一展示了一个简单的有线本地网络和一个无线接入点,所有设备都接入到Internet上,本地局域网的计算机连接到一个连接着防火墙或者路由器的以太网交换机上,防火墙或者路由器连接到网络服务供应商提供的电缆箱,调制调节器,卫星上行系统...好像一切都在计算中,就像是一个带着不停闪烁的的小灯的盒子,当你的网络数据包离开你的局域网,进入广阔的互联网,它们穿过一个又一个路由器直到到达自己的目的地。
|
||||
|
||||
|
||||
### [fig-1.png][4]
|
||||
|
||||
![simple LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_7.png?itok=lsazmf3- "simple LAN")
|
||||
|
||||
图一:一个简单的有线局域网和一个无线接入点。
|
||||
|
||||
一台路由器能连接一切,一个小巧特殊的小盒子只专注于路由,一个大点的盒子将会提供路由,防火墙,域名服务,以及VPN网关功能,一台重新设计的台式电脑或者笔记本,一个树莓派计算机或者一个小模块,体积臃肿矮小的像PC这样的单板计算机,除了苛刻的用途以外,普通的商品硬件都能良好的工作运行。高端的路由器使用特殊设计的硬件每秒能够传输最大量的数据包。 它们有多路数据总线,多个中央处理器和极快的存储。
|
||||
可以通过查阅Juniper和思科的路由器来感受一下高端路由器书什么样子的,而且能看看里面是什么样的构造。
|
||||
一个接入你的局域网的无线接入点要么作为一个以太网网桥要么作为一个路由器。一个桥接器扩展了这个网络,所以在这个桥接器上的任意一端口上的主机都连接在同一个网络中。
|
||||
一台路由器连接的是两个不同的网络。
|
||||
### Network Topology:网络拓扑
|
||||
|
||||
|
||||
有多种设置你的局域网的方式,你可以把所有主机接入到一个单独的平面网络,如果你的交换机支持的话,你也可以把它们分配到不同的子网中。
|
||||
平面网络是最简单的网络,只需把每一台设备接入到同一个交换机上即可,如果一台交换上的端口不够使用,你可以将更多的交换机连接在一起。
|
||||
有些交换机有特殊的上行端口,有些是没有这种特殊限制的上行端口,你可以连接其中的任意端口,你可能需要使用交叉类型的以太网线,所以你要查阅你的交换机的说明文档来设置。平面网络是最容易管理的,你不需要路由器也不需要计算子网,但它也有一些缺点。他们的伸缩性不好,所以当网络规模变得越来越大的时候就会被广播网络所阻塞。
|
||||
将你的局域网进行分段将会提升安全保障, 把局域网分成可管理的不同网段将有助于管理更大的网络。
|
||||
图2展示了一个分成两个子网的局域网络:内部的有线和无线主机,和非军事区域(从来不知道所所有的工作上的男性术语都是在计算机上键入的?)因为他被阻挡了所有的内部网络的访问。
|
||||
|
||||
|
||||
### [fig-2.png][5]
|
||||
|
||||
![LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_4.png?itok=LpXq7bLf "LAN")
|
||||
|
||||
图2:一个分成两个子网的简单局域网。
|
||||
即使像图2那样的小型网络也可以有不同的配置方法。你可以将防火墙和路由器放置在一台单独的设备上。
|
||||
你可以为你的非军事区域设置一个专用的网络连接,把它完全从你的内部网络隔离,这将引导我们进入下一个主题:一切基于软件。
|
||||
|
||||
|
||||
### Think Software软件思维
|
||||
|
||||
|
||||
你可能已经注意到在这个简短的系列中我们所讨论的硬件,只有网络接口,交换机,和线缆是特殊用途的硬件。
|
||||
其它的都是通用的商用硬件,而且都是软件来定义它的用途。
|
||||
网关,虚拟专用网关,以太网桥,网页,邮箱以及文件等等。
|
||||
服务器,负载均衡,代理,大量的服务,各种各样的认证,中继,故障转移...你可以在运行着Linux系统的标准硬件上运行你的整个网络。
|
||||
你甚至可以使用Linux交换应用和VDE2协议来模拟以太网交换机,像DD-WRT,openWRT 和Rashpberry Pi distros,这些小型的硬件都是有专业的分类的,要记住BSDS和它们的特殊衍生用途如防火墙,路由器,和网络附件存储。
|
||||
你知道有些人坚持认为硬件防火墙和软件防火墙有区别?其实是没有区别的,就像说有一台硬件计算机和一台软件计算机。
|
||||
### Port Trunking and Ethernet Bonding
|
||||
端口聚合和以太网绑定
|
||||
聚合和绑定,也称链路聚合,是把两条以太网通道绑定在一起成为一条通道。一些交换机支持端口聚合,就是把两个交换机端口绑定在一起成为一个是他们原来带宽之和的一条新的连接。对于一台承载很多业务的服务器来说这是一个增加通道带宽的有效的方式。
|
||||
你也可以在以太网口进行同样的配置,而且绑定汇聚的驱动是内置在Linux内核中的,所以不需要任何其他的专门的硬件。
|
||||
|
||||
|
||||
### Bending Mobile Broadband to your Will随心所欲选择你的移动带宽
|
||||
|
||||
我期望移动带宽能够迅速增长来替代DSL和有线网络。我居住在一个有250,000人口的靠近一个城市的地方,但是在城市以外,要想接入互联网就要靠运气了,即使那里有很大的用户上网需求。我居住的小角落离城镇有20分钟的距离,但对于网络服务供应商来说他们几乎不会考虑到为这个地方提供网络。 我唯一的选择就是移动带宽; 这里没有拨号网络,卫星网络(即使它很糟糕)或者是DSL,电缆,光纤,但却没有阻止网络供应商把那些在我这个区域从没看到过的无限制通信个其他高速网络服务的传单塞进我的邮箱。
|
||||
我试用了AT&T,Version,和T-Mobile。Version的信号覆盖范围最广,但是Version和AT&T是最昂贵的。
|
||||
我居住的地方在T-Mobile信号覆盖的边缘,但迄今为止他们给了最大的优惠,为了能够能够有效的使用,我必须购买一个WeBoostDe信号放大器和
|
||||
一台中兴的移动热点设备。当然你也可以使用一部手机作为热点,但是专用的热点设备有着最强的信号。如果你正在考虑购买一台信号放大器,最好的选择就是WeBoost因为他们的服务支持最棒,而且他们会尽最大努力去帮助你。在一个小小的APP的协助下去设置将会精准的增强 你的网络信号,他们有一个功能较少的免费的版本,但你将一点都不会后悔去花两美元使用专业版。
|
||||
那个小巧的中兴热点设备能够支持15台主机而且还有拥有基本的防火墙功能。 但你如果你使用像 Linksys WRT54GL这样的设备,使用Tomato,openWRT,或者DD-WRT来替代普通的固件,这样你就能完全控制你的防护墙规则,路由配置,以及任何其他你想要设置的服务。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-think-software
|
||||
|
||||
作者:[CARLA SCHRODER][a]
|
||||
译者:[FelixYFZ](https://github.com/FelixYFZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/cschroder
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:https://www.linux.com/licenses/category/used-permission
|
||||
[3]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[4]:https://www.linux.com/files/images/fig-1png-7
|
||||
[5]:https://www.linux.com/files/images/fig-2png-4
|
||||
[6]:https://www.linux.com/files/images/soderskar-islandjpg
|
||||
[7]:https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-lan-hardware
|
||||
[8]:http://www.bluelinepc.com/signalcheck/
|
93
translated/tech/20171107 GitHub welcomes all CI tools.md
Normal file
93
translated/tech/20171107 GitHub welcomes all CI tools.md
Normal file
@ -0,0 +1,93 @@
|
||||
GitHub 欢迎所有 CI 工具
|
||||
====================
|
||||
|
||||
|
||||
[![GitHub and all CI tools](https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png)][11]
|
||||
|
||||
持续集成([CI][12])工具可以帮助你在每次提交时执行测试,并将[报告结果][13]提交到合并请求,从而帮助维持团队的质量标准。结合持续交付([CD][14])工具,你还可以在多种配置上测试你的代码,运行额外的性能测试,并自动执行每个步骤[直到产品][15]。
|
||||
|
||||
有几个[与 GitHub 集成][16]的 CI 和 CD 工具,其中一些可以在 [GitHub Marketplace][17] 中点击几下安装。有了这么多的选择,你可以选择最好的工具 - 即使它不是与你的系统预集成的工具。
|
||||
|
||||
最适合你的工具取决于许多因素,其中包括:
|
||||
|
||||
* 编程语言和程序架构
|
||||
|
||||
* 你计划支持的操作系统和浏览器
|
||||
|
||||
* 你团队的经验和技能
|
||||
|
||||
* 扩展能力和增长计划
|
||||
|
||||
* 依赖系统的地理分布和使用的人
|
||||
|
||||
* 打包和交付目标
|
||||
|
||||
当然,无法为所有这些情况优化你的 CI 工具。构建它们的人需要选择哪些情况服务更好,何时优先考虑复杂性而不是简单性。例如,如果你想测试针对一个平台的用特定语言编写的小程序,那么你就不需要那些可在数十个平台上测试,有许多编程语言和框架的,用来测试嵌入软件控制器的复杂工具。
|
||||
|
||||
如果你需要一些灵感来挑选最好使用哪个 CI 工具,那么看一下[ Github 上的流行项目][18]。许多人在他们的 README.md 中将他们的集成的 CI/CD 工具的状态显示为徽章。我们还分析了 GitHub 社区中超过 5000 万个仓库中 CI 工具的使用情况,并发现了很多变化。下图显示了根据我们的 pull 请求中使用最多的[提交状态上下文][19],GitHub.com 使用的前 10 个 CI 工具的相对百分比。
|
||||
|
||||
_我们的分析还显示,许多团队在他们的项目中使用多个 CI 工具,使他们能够发挥它们最擅长的。_
|
||||
|
||||
[![Top 10 CI systems used with GitHub.com based on most used commit status contexts](https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png)][20]
|
||||
|
||||
如果你想查看,下面是团队中使用最多的 10 个工具:
|
||||
|
||||
* [Travis CI][1]
|
||||
|
||||
* [Circle CI][2]
|
||||
|
||||
* [Jenkins][3]
|
||||
|
||||
* [AppVeyor][4]
|
||||
|
||||
* [CodeShip][5]
|
||||
|
||||
* [Drone][6]
|
||||
|
||||
* [Semaphore CI][7]
|
||||
|
||||
* [Buildkite][8]
|
||||
|
||||
* [Wercker][9]
|
||||
|
||||
* [TeamCity][10]
|
||||
|
||||
与其不加研究地直接使用默认的、预先集成的工具,不如花点时间根据任务挑选最合适的工具,对于你的特定情况会有很多[很好的选择][21]。如果你以后改变主意,没问题。当你为特定情况选择最佳工具时,你既能获得量身定制的性能,也保留了在它不再适合时进行替换的自由。
|
||||
|
||||
准备好了解 CI 工具如何适应你的工作流程了么?
|
||||
|
||||
[浏览 GitHub Marketplace][22]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://github.com/blog/2463-github-welcomes-all-ci-tools
|
||||
|
||||
作者:[jonico ][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/jonico
|
||||
[1]:https://travis-ci.org/
|
||||
[2]:https://circleci.com/
|
||||
[3]:https://jenkins.io/
|
||||
[4]:https://www.appveyor.com/
|
||||
[5]:https://codeship.com/
|
||||
[6]:http://try.drone.io/
|
||||
[7]:https://semaphoreci.com/
|
||||
[8]:https://buildkite.com/
|
||||
[9]:http://www.wercker.com/
|
||||
[10]:https://www.jetbrains.com/teamcity/
|
||||
[11]:https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png
|
||||
[12]:https://en.wikipedia.org/wiki/Continuous_integration
|
||||
[13]:https://github.com/blog/2051-protected-branches-and-required-status-checks
|
||||
[14]:https://en.wikipedia.org/wiki/Continuous_delivery
|
||||
[15]:https://developer.github.com/changes/2014-01-09-preview-the-new-deployments-api/
|
||||
[16]:https://github.com/works-with/category/continuous-integration
|
||||
[17]:https://github.com/marketplace/category/continuous-integration
|
||||
[18]:https://github.com/explore?trending=repositories#trending
|
||||
[19]:https://developer.github.com/v3/repos/statuses/
|
||||
[20]:https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png
|
||||
[21]:https://github.com/works-with/category/continuous-integration
|
||||
[22]:https://github.com/marketplace/category/continuous-integration
|
54
translated/tech/20171114 Sysadmin 101 Patch Management.md
Normal file
54
translated/tech/20171114 Sysadmin 101 Patch Management.md
Normal file
@ -0,0 +1,54 @@
|
||||
系统管理 101:补丁管理
|
||||
============================================================
|
||||
|
||||
就在之前几篇文章,我开始了“系统管理 101”系列文章,用来记录现今许多初级系统管理员,DevOps 工程师或者“全栈”开发者可能不曾接触过的一些系统管理方面的基本知识。按照我原本的设想,该系列文章已经是完结了的。然而后来 WannaCry 恶意软件出现并在补丁管理不善的 Windows 主机网络间爆发。我能想象到那些仍然深陷 2000 年代 Linux 与 Windows 争论的读者听到这个消息可能已经面露优越的微笑。
|
||||
|
||||
我之所以这么快就决定再次继续“系统管理 101”文章系列,是因为我意识到在补丁管理方面一些 Linux 系统管理员和 Windows 系统管理员没有差别。实话说,在一些方面甚至做的更差(特别是以运行时间为豪)。所以,这篇文章会涉及 Linux 下补丁管理的基础概念,包括良好的补丁管理该是怎样的,你可能会用到的一些相关工具,以及整个补丁安装过程是如何进行的。
|
||||
|
||||
### 什么是补丁管理?
|
||||
|
||||
我所说的补丁管理,是指你部署用于升级服务器上软件的系统,不仅仅是把软件更新到最新最好的前沿版本。即使是像 Debian 这样为了“稳定性”持续保持某一特定版本软件的保守派发行版,也会时常发布升级补丁用于修补错误和安全漏洞。
|
||||
|
||||
当然,如果你的组织决定自己维护特定软件的版本,不管是因为开发者需要最新最好的版本,还是因为你需要派生软件源码并做出修改,抑或只是因为你喜欢给自己增加额外的工作量,这时你就会遇到问题。理想情况下,你应该已经配置好你的系统,让它在自动构建和打包定制版本软件时使用其它软件所用的同一套持续集成系统。然而,许多系统管理员仍旧在自己的本地主机上按照维基上的文档(但愿是最新的文档)使用过时的方法打包软件。不论使用哪种方法,你都需要明确你所使用的版本有没有安全缺陷,如果有,那必须确保新补丁安装到你定制版本的软件上了。
|
||||
|
||||
### 良好的补丁管理是怎样的
|
||||
|
||||
补丁管理首先要做的是检查软件的升级。首先,对于核心软件,你应该订阅相应 Linux 发行版的安全邮件列表,这样才能第一时间得知软件的安全升级情况。如果你使用的软件有些不是来自发行版的仓库,那么你也必须设法跟踪它们的安全更新。一旦接收到新的安全通知,你必须查阅通知细节,以此明确安全漏洞的严重程度,确定你的系统是否受影响,以及安全补丁的紧急性。
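举个例子,收到安全通告后,你可以先在一台 Debian/Ubuntu 服务器上确认有哪些软件包待升级、受影响的软件包目前装的是什么版本(以下命令仅为示意,其他发行版请换用对应的包管理器):

```
# 刷新软件源并列出可升级的软件包
sudo apt-get update
apt list --upgradable

# 查看某个软件包当前安装的版本,以判断是否受该漏洞影响
dpkg -s openssl | grep '^Version'
```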
|
||||
|
||||
一些组织仍在使用手动方式管理补丁。在这种方式下,当出现一个安全补丁,系统管理员就要凭借记忆,登录到各个服务器上进行检查。在确定了哪些服务器需要升级后,再使用服务器内建的包管理工具从发行版仓库升级这些软件。最后以相同的方式升级剩余的所有服务器。
|
||||
|
||||
手动管理补丁的方式存在很多问题。首先,这么做会使补丁安装成为一个苦力活,安装补丁需要越多人力成本,系统管理员就越可能推迟甚至完全忽略它。其次,手动管理方式依赖系统管理员凭借记忆去跟踪他或她所负责的服务器的升级情况。这非常容易导致有些服务器被遗漏而未能及时升级。
|
||||
|
||||
补丁管理越快速简便,你就越可能把它做好。你应该构建一个系统,用来快速查询哪些服务器运行着特定的软件,以及这些软件的版本号,而且它最好还能够推送各种升级补丁。就个人而言,我倾向于使用 MCollective 这样的编排工具来完成这个任务,但是红帽提供的 Satellite 以及 Canonical 提供的 Landscape 也可以让你在统一的管理接口查看服务器上软件的版本信息,并且安装补丁。
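下面是一个非常简化的示意,假设你已经部署好 MCollective 及其 package 代理,并且为服务器定义了 `role`、`ha_group` 这样的 fact(这些 fact 名称只是假设的例子,具体选项和输出请以你的环境为准):

```
# 查询所有 web 角色服务器上 openssl 的当前版本
mco rpc package status package=openssl -F role=web

# 只向第一个高可用分组推送 openssl 升级
mco rpc package update package=openssl -F ha_group=1
```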
|
||||
|
||||
补丁安装还应该具有容错能力。你应该具备在不下线的情况下为服务安装补丁的能力。这同样适用于需要重启系统的内核补丁。我采用的方法是把我的服务器划分为不同的高可用组,lb1,app1,rabbitmq1 和 db1 在一个组,而lb2,app2,rabbitmq2 和 db2 在另一个组。这样,我就能一次升级一个组,而无须下线服务。
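如果暂时没有编排工具,也可以用一个很粗糙的脚本来表达同样的思路(主机名沿用上文的分组,仅作示意,真实环境请在每一步之间加上健康检查):

```
#!/bin/sh
# 第一组:逐台升级并重启
for host in lb1 app1 rabbitmq1 db1; do
    ssh "$host" 'sudo apt-get update && sudo apt-get -y upgrade && sudo reboot'
done
# 确认第一组全部恢复、集群同步完成后,再对 lb2 app2 rabbitmq2 db2 重复同样的步骤
```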
|
||||
|
||||
所以,多快才能算快呢?对于少数没有附带服务的软件,你的系统最快应该能够在几分钟到一小时内安装好补丁(例如 bash 的 ShellShock 漏洞)。对于像 OpenSSL 这样需要重启服务的软件,以容错的方式安装补丁并重启服务的过程可能会花费稍多的时间,但这就是编排工具派上用场的时候。我在最近的关于 MCollective 的文章中(查看 2016 年 12 月和 2017 年 1 月的工单)给了几个使用 MCollective 实现补丁管理的例子。你最好能够部署一个系统,以具备容错性的自动化方式简化补丁安装和服务重启的过程。
|
||||
|
||||
如果补丁要求重启系统,像内核补丁,那它会花费更多的时间。再次强调,自动化和编排工具能够让这个过程比你想象的还要快。我能够在一到两个小时内在生产环境中以容错方式升级并重启服务器,如果重启之间无须等待集群同步备份,这个过程还能更快。
|
||||
|
||||
不幸的是,许多系统管理员仍坚信过时的观点,把运行时间作为一种骄傲的象征——鉴于紧急内核补丁大约每年一次。对于我来说,这只能说明你没有认真对待系统的安全性。
|
||||
|
||||
很多组织仍然使用无法暂时下线的单点故障的服务器,也因为这个原因,它无法升级或者重启。如果你想让系统更加安全,你需要去除过时的包袱,搭建一个至少能在深夜维护时段重启的系统。
|
||||
|
||||
基本上,快速便捷的补丁管理也是一个成熟专业的系统管理团队所具备的标志。升级软件是所有系统管理员的必要工作之一,花费时间去让这个过程简洁快速,带来的好处远远不止是系统安全性。例如,它能帮助我们找到架构设计中的单点故障。另外,它还帮助鉴定出环境中过时的系统,给我们替换这些部分提供了动机。最后,当补丁管理做得足够好,它会节省系统管理员的时间,让他们把精力放在真正需要专业知识的地方。
|
||||
|
||||
______________________
|
||||
|
||||
Kyle Rankin 是高级安全与基础设施架构师,其著作包括: Linux Hardening in Hostile Networks,DevOps Troubleshooting 以及 The Official Ubuntu Server Book。同时,他还是 Linux Journal 的专栏作家。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxjournal.com/content/sysadmin-101-patch-management
|
||||
|
||||
作者:[Kyle Rankin ][a]
|
||||
译者:[haoqixu](https://github.com/haoqixu)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linuxjournal.com/users/kyle-rankin
|
||||
[1]:https://www.linuxjournal.com/tag/how-tos
|
||||
[2]:https://www.linuxjournal.com/tag/servers
|
||||
[3]:https://www.linuxjournal.com/tag/sysadmin
|
||||
[4]:https://www.linuxjournal.com/users/kyle-rankin
|
@ -1,59 +1,59 @@
|
||||
-最合理的语言工程模式
|
||||
-============================================================
|
||||
-
|
||||
-当你熟练掌握一体化工程技术时,你就会发现它逐渐超过了技术优化的层面。我们制作的每件手工艺品都在一个大环境背景下,在这个环境中,人类的行为逐渐突破了经济意义,社会学意义,达到了奥地利经济学家所称的“人类行为学”,这是目的明确的人类行为所能达到的最大范围。
|
||||
-
|
||||
-对我来说这并不只是抽象理论。当我在开源发展项目中编写时,我的行为就十分符合人类行为学的理论,这行为不是针对任何特定的软件技术或某个客观事物,它指的是在开发科技的过程中人类行为的背景环境。从人类行为学角度对科技进行的解读不断增加,大量的这种解读可以重塑科技框架,带来人类生产力和满足感的极大幅度增长,而这并不是由于我们换了工具,而是在于我们改变了掌握它们的方式。
|
||||
-
|
||||
-在这个背景下,我在第三篇额外的文章中谈到了 C 语言的衰退和正在到来的巨大改变,而我们也确实能够感受到系统编程的新时代的到来,在这个时刻,我决定把我之前有的大体的预感具象化为更加具体的,更实用的点子,它们主要是关于计算机语言设计的分析,例如为什么他们会成功,或为什么他们会失败。
|
||||
-
|
||||
-在我最近的一篇文章中,我写道:所有计算机语言都是对机器资源的成本和程序员工作成本的相对权衡的结果,和对其相对价值的体现。这些都是在一个计算能力成本不断下降但程序员工作成本不减反增的背景下产生的。我还强调了转化成本在使原有交易主张适用于当下环境中的新增角色。在文中我将编程人员描述为一个寻找今后最适方案的探索者。
|
||||
-
|
||||
-现在我要讲一讲最后一点。以现有水平为起点,一个语言工程师有极大可能通过多种方式推动语言设计的发展。通过什么系统呢? GC 还是人工分配?使用何种配置,命令式语言,函数程式语言或是面向对象语言?但是从人类行为学的角度来说,我认为它的形式会更简洁,也许只是选择解决长期问题还是短期问题?
|
||||
-
|
||||
-所谓的“远”“近”之分,是指硬件成本的逐渐降低,软件复杂程度的上升和由现有语言向其他语言转化的成本的增加,根据它们的变化曲线所做出的判断。短期问题指编程人员眼下发现的问题,长期问题指可预见的一系列情况,但它们一段时间内不会到来。针对近期问题所做出的部署需要非常及时且有效,但随着情况的变化,短期解决方案有可能很快就不适用了。而长期的解决方案可能因其过于超前而夭折,或因其代价过高无法被接受。
|
||||
-
|
||||
-在计算机刚刚面世的时候, FORTRAN 是近期亟待解决的问题, LISP 是远期问题。汇编语言是短期解决方案,图解说明非通用语言的分类应用,还有关门电阻不断上涨的成本。随着计算机技术的发展,PHP 和 Javascript逐渐应用于游戏中。至于长期的解决方案? Oberon , Ocaml , ML , XML-Docbook 都可以。 他们形成的激励机制带来了大量具有突破性和原创性的想法,事态蓬勃但未形成体系,那个时候距离专业语言的面世还很远,(值得注意的是这些想法的出现都是人类行为学中的因果,并非由于某种技术)。专业语言会失败,这是显而易见的,它的转入成本高昂,让大部分人望而却步,因此不能没能达到能够让主流群体接受的水平,被孤立,被搁置。这也是 LISP 不为人知的的过去,作为前 LISP 管理层人员,出于对它深深的爱,我为你们讲述了这段历史。
|
||||
-
|
||||
-如果短期解决方案出现故障,它的后果更加惨不忍睹,最好的结果是期待一个相对体面的失败,好转换到另一个设计方案。(通常在转化成本较高时)如果他们执意继续,通常造成众多方案相互之间藕断丝连,形成一个不断扩张的复合体,一直维持到不能运转下去,变成一堆摇摇欲坠的杂物。是的,我说的就是 C++ 语言,还有 Java 描述语言,(唉)还有 Perl,虽然 Larry Wall 的好品味成功地让他维持了很多年,问题一直没有爆发,但在 Perl 6 发行时,他的好品味最终引爆了整个问题。
|
||||
-
|
||||
-这种思考角度激励了编程人员向着两个不同的目的重新塑造语言设计: ①以远近为轴,在自身和预计的未来之间选取一个最适点,然后 ②降低由一种或多种语言转化为自身语言的转入成本,这样你就可以吸纳他们的用户群。接下来我会讲讲 C 语言是怎样占领全世界的。
|
||||
-
|
||||
-在整个计算机发展史中,没有谁能比 C 语言完美地把握最适点的选取了,我要做的只是证明这一点,作为一种实用的主流语言, C 语言有着更长的寿命,它目睹了无数个竞争者的兴衰,但它的地位仍旧不可取代。从淘汰它的第一个竞争者到现在已经过了 35 年,但看起来C语言的终结仍旧不会到来。
|
||||
-
|
||||
-当然,如果你愿意的话,可以把 C 语言的持久存在归功于人类的文化惰性,但那是对“文化惰性”这个词的曲解, C 语言一直得以延续的真正原因是没有人提供足够的转化费用!
|
||||
-
|
||||
-相反的, C 语言低廉的内部转化费用未得到应有的重视,C 语言是如此的千变万化,从它漫长统治时期的初期开始,它就可以适用于多种语言如 FORTRAN , Pascal , 汇编语言和 LISP 的编程习惯。在二十世纪八十年代我就注意到,我可以根据编程人员的编码风格判断出他的母语是什么,这也从另一方面证明了C 语言的魅力能够吸引全世界的人使用它。
|
||||
-
|
||||
-C++ 语言同样胜在它低廉的转化费用。很快,大部分新兴的语言为了降低自身转化费用,纷纷参考 C 语言语法。请注意这给未来的语言设计环境带来了什么影响:它尽可能地提高了 C-like 语言的价值,以此来降低其他语言转化为 C 语言的转化成本。
|
||||
-
|
||||
-另一种降低转入成本的方法十分简单,即使没接触过编程的人都能学会,但这种方法很难完成。我认为唯一使用了这种方法的 Python就是靠这种方法进入了职业比赛。对这个方法我一带而过,是因为它并不是我希望看到的,顺利执行的系统语言战略,虽然我很希望它不是那样的。
|
||||
-
|
||||
-今天我们在2017年年底聚集在这里,下一项我们应该为某些暴躁的团体发声,如 Go 团队,但事实并非如此。 Go 这个项目漏洞百出,我甚至可以想象出它失败的各种可能,Go 团队太过固执独断,即使几乎整个用户群体都认为 Go 需要做出改变了,Go 团队也无动于衷,这是个大问题。 一旦发生故障, GC 发生延迟或者用牺牲生产量来弥补延迟,但无论如何,它都会严重影响到这种语言的应用,大幅缩小这种语言的适用范围。
|
||||
-
|
||||
-即便如此,在 Go 的设计中,还是有一个我颇为认同的远大战略目标,想要理解这个目标,我们需要回想一下如果想要取代 C 语言,要面临的短期问题是什么。同我之前提到的,随着项目计划的不断扩张,故障率也在持续上升,这其中内存管理方面的故障尤其多,而内存管理一直是崩溃漏洞和安全漏洞的高发领域。
|
||||
-
|
||||
-我们现在已经知道了两件十分中重要的紧急任务,要想取代 C 语言,首先要先做到这两点:(1)解决内存管理问题;(2)降低由 C 语言向本语言转化时所需的转入成本。纵观编程语言的历史——从人类行为学的角度来看,作为 C 语言的准替代者,如果不能有效解决转入成本过高这个问题,那他们所做的其他部分做得再好都不算数。相反的,如果他们把转入成本过高这个问题解决地很好,即使他们其他部分做的不是最好的,人们也不会对他们吹毛求疵。
|
||||
-
|
||||
-这正是 Go 的做法,但这个理论并不是完美无瑕的,它也有局限性。目前 GC 延迟限制了它的发展,但 Go 现在选择照搬 Unix 下 C 语言的传染战略,让自身语言变成易于转入,便于传播的语言,其繁殖速度甚至快于替代品。但从长远角度看,这并不是个好办法。
|
||||
-
|
||||
-当然, Rust 语言的不足是个十分明显的问题,我们不应当回避它。而它,正将自己定位为适用于长远计划的选择。在之前的部分中我已经谈到了为什么我觉得它还不完美,Rust 语言在 TIBOE 和PYPL 指数上的成就也证明了我的说法,在 TIBOE 上 Rust 从来没有进过前20名,在 PYPL 指数上它的成就也比 Go 差很多。
|
||||
-
|
||||
-五年后 Rust 能发展的怎样还是个问题,如果他们愿意改变,我建议他们重视转入成本问题。以我个人经历来说,由 C 语言转入 Rust 语言的能量壁垒使人望而却步。如果编码提升工具比如 Corrode 只能把 C 语言映射为不稳定的 Rust 语言,但不能解决能量壁垒的问题;或者如果有更简单的方法能够自动注释所有权或试用期,人们也不再需要它们了——这些问题编译器就能够解决。目前我不知道怎样解决这个问题,但我觉得他们最好找出解决方案。
|
||||
-
|
||||
-在最后我想强调一下,虽然在 Ken Thompson 的设计经历中,他看起来很少解决短期问题,但他对未来有着极大的包容性,并且这种包容性还在不断提升。当然 Unix 也是这样的, 它让我不禁暗自揣测,让我认为 Go 语言中令人不快的地方都其实是他们未来事业的基石(例如缺乏泛型)。如果要确认这件事是真假,我需要比 Ken 还要聪明,但这并不是一件容易让人相信的事情。
|
||||
-
|
||||
---------------------------------------------------------------------------------
|
||||
-
|
||||
-via: http://esr.ibiblio.org/?p=7745
|
||||
-
|
||||
-作者:[Eric Raymond ][a]
|
||||
-译者:[Valoniakim](https://github.com/Valoniakim)
|
||||
-校对:[校对者ID](https://github.com/校对者ID)
|
||||
-
|
||||
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
-
|
||||
-[a]:http://esr.ibiblio.org/?author=2
|
||||
-[1]:http://esr.ibiblio.org/?author=2
|
||||
-[2]:http://esr.ibiblio.org/?p=7711&cpage=1#comment-1913931
|
||||
-[3]:http://esr.ibiblio.org/?p=7745
|
||||
ESR:最合理的语言工程模式
|
||||
============================================================
|
||||
|
||||
当你熟练掌握一体化工程技术时,你就会发现它逐渐超过了技术优化的层面。我们制作的每件手工艺品都在一个大环境背景下,在这个环境中,人类的行为逐渐突破了经济意义、社会学意义,达到了奥地利经济学家所称的“<ruby>人类行为学<rt>praxeology</rt></ruby>”,这是目的明确的人类行为所能达到的最大范围。
|
||||
|
||||
对我来说这并不只是抽象理论。当我在开源开发项目中编写论文时,我的行为就十分符合人类行为学的理论,这行为不是针对任何特定的软件技术或某个客观事物,它指的是在开发科技的过程中人类行为的背景环境。从人类行为学角度对科技进行的解读不断增加,大量的这种解读可以重塑科技框架,带来人类生产力和满足感的极大幅度增长,而这并不是由于我们换了工具,而是在于我们改变了掌握它们的方式。
|
||||
|
||||
在这个背景下,我的计划之外的文章系列的第三篇中谈到了 C 语言的衰退和正在到来的巨大改变,而我们也确实能够感受到系统编程的新时代的到来,在这个时刻,我决定把我之前有的大体的预感具象化为更加具体的、更实用的想法,它们主要是关于计算机语言设计的分析,例如为什么它们会成功,或为什么它们会失败。
|
||||
|
||||
在我最近的一篇文章中,我写道:所有计算机语言都是对机器资源的成本和程序员工作成本的相对权衡的结果,和对其相对价值的体现。这些都是在一个计算能力成本不断下降但程序员工作成本不减反增的背景下产生的。我还强调了转化成本在使原有交易主张适用于当下环境中的新增角色。在文中我将编程人员描述为一个寻找今后最适方案的探索者。
|
||||
|
||||
现在我要讲一讲最后一点。以现有水平为起点,一个语言工程师有极大可能通过多种方式推动语言设计的发展。通过什么系统呢? GC 还是人工分配?使用何种配置,命令式语言、函数程式语言或是面向对象语言?但是从人类行为学的角度来说,我认为它的形式会更简洁,也许只是选择解决长期问题还是短期问题?
|
||||
|
||||
所谓的“远”、“近”之分,是指硬件成本的逐渐降低,软件复杂程度的上升和由现有语言向其他语言转化的成本的增加,根据它们的变化曲线所做出的判断。短期问题指编程人员眼下发现的问题,长期问题指可预见的一系列情况,但它们一段时间内不会到来。针对近期问题所做出的部署需要非常及时且有效,但随着情况的变化,短期解决方案有可能很快就不适用了。而长期的解决方案可能因其过于超前而夭折,或因其代价过高无法被接受。
|
||||
|
||||
在计算机刚刚面世的时候, FORTRAN 是近期亟待解决的问题, LISP 是远期问题,汇编语言是短期解决方案。说明这种分类适用于非通用语言,还有 roff 标记语言。随着计算机技术的发展,PHP 和 Javascript 逐渐参与到这场游戏中。至于长期的解决方案? Oberon、Ocaml、ML、XML-Docbook 都可以。 它们形成的激励机制带来了大量具有突破性和原创性的想法,事态蓬勃但未形成体系,那个时候距离专业语言的面世还很远,(值得注意的是这些想法的出现都是人类行为学中的因果,并非由于某种技术)。专业语言会失败,这是显而易见的,它的转入成本高昂,让大部分人望而却步,因此不能达到能够让主流群体接受的水平,被孤立,被搁置。这也是 LISP 不为人知的过去,作为前 LISP 管理层人员,出于对它深深的爱,我为你们讲述了这段历史。
|
||||
|
||||
如果短期解决方案出现故障,它的后果更加惨不忍睹,最好的结果是期待一个相对体面的失败,好转换到另一个设计方案。(通常在转化成本较高时)如果他们执意继续,通常造成众多方案相互之间藕断丝连,形成一个不断扩张的复合体,一直维持到不能运转下去,变成一堆摇摇欲坠的杂物。是的,我说的就是 C++ 语言,还有 Java 描述语言,(唉)还有 Perl,虽然 Larry Wall 的好品味成功地让他维持了很多年,问题一直没有爆发,但在 Perl 6 发行时,他的好品味最终引爆了整个问题。
|
||||
|
||||
这种思考角度激励了编程人员向着两个不同的目的重新塑造语言设计: (1)以远近为轴,在自身和预计的未来之间选取一个最适点,然后(2)降低由一种或多种语言转化为自身语言的转入成本,这样你就可以吸纳他们的用户群。接下来我会讲讲 C 语言是怎样占领全世界的。
|
||||
|
||||
在整个计算机发展史中,没有谁能比 C 语言完美地把握最适点的选取了,我要做的只是证明这一点,作为一种实用的主流语言, C 语言有着更长的寿命,它目睹了无数个竞争者的兴衰,但它的地位仍旧不可取代。从淘汰它的第一个竞争者到现在已经过了 35 年,但看起来C语言的终结仍旧不会到来。
|
||||
|
||||
当然,如果你愿意的话,可以把 C 语言的持久存在归功于人类的文化惰性,但那是对“文化惰性”这个词的曲解, C 语言一直得以延续的真正原因是没有人提供足够的转化费用!
|
||||
|
||||
相反的, C 语言低廉的内部转化成本未得到应有的重视,C 语言是如此的千变万化,从它漫长统治时期的初期开始,它就可以适用于多种语言如 FORTRAN、Pascal 、汇编语言和 LISP 的编程习惯。在二十世纪八十年代我就注意到,我可以根据编程人员的编码风格判断出他的母语是什么,这也从另一方面证明了C 语言的魅力能够吸引全世界的人使用它。
|
||||
|
||||
C++ 语言同样胜在它低廉的转化成本。很快,大部分新兴的语言为了降低自身转化成本,纷纷参考 C 语言语法。请注意这给未来的语言设计环境带来了什么影响:它尽可能地提高了类 C 语言的价值,以此来降低其他语言转化为 C 语言的转化成本。
|
||||
|
||||
另一种降低转入成本的方法十分简单,即使没接触过编程的人都能学会,但这种方法很难完成。我认为唯一使用了这种方法的 Python 就是靠这种方法进入了职业比赛。对这个方法我一带而过,是因为它并不是我希望看到的,顺利执行的系统语言战略,虽然我很希望它不是那样的。
|
||||
|
||||
今天我们在 2017 年底聚集在这里,下一项我们应该为某些暴躁的团体发声,如 Go 团队,但事实并非如此。 Go 这个项目漏洞百出,我甚至可以想象出它失败的各种可能,Go 团队太过固执独断,即使几乎整个用户群体都认为 Go 需要做出改变了,Go 团队也无动于衷,这是个大问题。 一旦发生故障, GC 发生延迟或者用牺牲生产量来弥补延迟,但无论如何,它都会严重影响到这种语言的应用,大幅缩小这种语言的适用范围。
|
||||
|
||||
即便如此,在 Go 的设计中,还是有一个我颇为认同的远大战略目标,想要理解这个目标,我们需要回想一下如果想要取代 C 语言,要面临的短期问题是什么。同我之前提到的,随着项目计划的不断扩张,故障率也在持续上升,这其中内存管理方面的故障尤其多,而内存管理一直是崩溃漏洞和安全漏洞的高发领域。
|
||||
|
||||
我们现在已经知道了两件十分重要的紧急任务,要想取代 C 语言,首先要先做到这两点:(1)解决内存管理问题;(2)降低由 C 语言向本语言转化时所需的转入成本。纵观编程语言的历史——从人类行为学的角度来看,作为 C 语言的准替代者,如果不能有效解决转入成本过高这个问题,那他们所做的其他部分做得再好都不算数。相反的,如果他们把转入成本过高这个问题解决得很好,即使他们其他部分做得不是最好的,人们也不会对他们吹毛求疵。
|
||||
|
||||
这正是 Go 的做法,但这个理论并不是完美无瑕的,它也有局限性。目前 GC 延迟限制了它的发展,但 Go 现在选择照搬 Unix 下 C 语言的传染战略,让自身语言变成易于转入,便于传播的语言,其繁殖速度甚至快于替代品。但从长远角度看,这并不是个好办法。
|
||||
|
||||
当然, Rust 语言的不足是个十分明显的问题,我们不应当回避它。而它,正将自己定位为适用于长远计划的选择。在之前的部分中我已经谈到了为什么我觉得它还不完美,Rust 语言在 TIOBE 和 PYPL 指数上的成就也证明了我的说法,在 TIOBE 上 Rust 从来没有进过前 20 名,在 PYPL 指数上它的成就也比 Go 差很多。
|
||||
|
||||
五年后 Rust 能发展得怎样还是个问题,如果他们愿意改变,我建议他们重视转入成本问题。以我个人经历来说,由 C 语言转入 Rust 语言的能量壁垒使人望而却步。如果编码提升工具比如 Corrode 只能把 C 语言映射为不稳定的 Rust 语言,但不能解决能量壁垒的问题;或者如果有更简单的方法能够自动注释所有权或生命周期,人们也不再需要它们了——这些问题编译器就能够解决。目前我不知道怎样解决这个问题,但我觉得他们最好找出解决方案。
|
||||
|
||||
在最后我想强调一下,虽然在 Ken Thompson 的设计经历中,他看起来很少解决短期问题,但他对未来有着极大的包容性,并且这种包容性还在不断提升。当然 Unix 也是这样的, 它让我不禁暗自揣测,让我认为 Go 语言中令人不快的地方都其实是他们未来事业的基石(例如缺乏泛型)。如果要确认这件事是真假,我需要比 Ken 还要聪明,但这并不是一件容易让人相信的事情。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://esr.ibiblio.org/?p=7745
|
||||
|
||||
作者:[Eric Raymond][a]
|
||||
译者:[Valoniakim](https://github.com/Valoniakim)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://esr.ibiblio.org/?author=2
|
||||
[1]:http://esr.ibiblio.org/?author=2
|
||||
[2]:http://esr.ibiblio.org/?p=7711&cpage=1#comment-1913931
|
||||
[3]:http://esr.ibiblio.org/?p=7745
|
||||
|
@ -1,163 +0,0 @@
|
||||
如何判断Linux服务器是否被入侵
|
||||
--------------
|
||||
|
||||
本指南中所谓的服务器被入侵或者说被黑了的意思是指未经认证的人或程序为了自己的目的登录到服务器上去并使用其计算资源, 通常会产生不好的影响。
|
||||
|
||||
免责声明: 若你的服务器被类似 NSA 这样的国家机关或者某个犯罪集团入侵,那么你并不会发现有任何问题,这些技术也无法发觉他们的存在。
|
||||
|
||||
然而, 大多数被攻破的服务器都是被类似自动攻击程序这样的程序或者类似“脚本小子”这样的廉价攻击者,以及蠢蛋犯罪所入侵的。
|
||||
|
||||
这类攻击者会在访问服务器的同时滥用服务器资源,并且不怎么会采取措施来隐藏他们正在做的事情。
|
||||
|
||||
### 入侵服务器的症状
|
||||
|
||||
当服务器被没有经验的攻击者或者自动攻击程序入侵了的话,他们往往会消耗 100% 的资源。他们可能消耗 CPU 资源来进行数字货币的挖矿或者发送垃圾邮件,也可能消耗带宽来发动 `DoS` 攻击。
|
||||
|
||||
因此出现问题的第一个表现就是服务器 “变慢了”. 这可能表现在网站的页面打开的很慢, 或者电子邮件要花很长时间才能发送出去。
|
||||
|
||||
那么你应该查看哪些东西呢?
|
||||
|
||||
#### 检查 1 - 当前都有谁在登录?
|
||||
|
||||
你首先要查看当前都有谁登录在服务器上. 发现攻击者登录到服务器上进行操作并不罕见。
|
||||
|
||||
其对应的命令是 `w`. 运行 `w` 会输出如下结果:
|
||||
|
||||
```
|
||||
08:32:55 up 98 days, 5:43, 2 users, load average: 0.05, 0.03, 0.00
|
||||
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
|
||||
root pts/0 113.174.161.1 08:26 0.00s 0.03s 0.02s ssh root@coopeaa12
|
||||
root pts/1 78.31.109.1 08:26 0.00s 0.01s 0.00s w
|
||||
|
||||
```
|
||||
|
||||
第一个IP是英国IP,而第二个IP是越南IP. 这个不是个好兆头。
|
||||
|
||||
停下来做个深呼吸,不要紧张,也不要急着直接杀掉他们的 SSH 连接:除非你能阻止他们再次进入服务器,否则他们很快就会重新连上来,而且很可能把你踢下线,让你无法再登录回去。
|
||||
|
||||
请参阅本文最后的 `入侵之后怎么办` 这一章节来看发现被入侵的证据后应该怎么办。
|
||||
|
||||
`whois` 命令可以接一个IP地址然后告诉你IP注册的组织的所有信息, 当然就包括所在国家的信息。
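例如,可以这样查询上面 `w` 输出中出现的可疑 IP(下面的字段过滤只是一个示意,实际输出的字段名因注册机构而异):

```shell
# 查询可疑 IP 的注册信息,重点关注国家和所属组织
whois 113.174.161.1 | grep -iE 'country|netname|orgname'
```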
|
||||
|
||||
#### 检查 2 - 谁曾经登录过?
|
||||
|
||||
Linux 服务器会记录下哪些用户,从哪个IP,在什么时候登录的以及登陆了多长时间这些信息. 使用 `last` 命令可以查看这些信息。
|
||||
|
||||
输出类似这样:
|
||||
|
||||
```
|
||||
root pts/1 78.31.109.1 Thu Nov 30 08:26 still logged in
|
||||
root pts/0 113.174.161.1 Thu Nov 30 08:26 still logged in
|
||||
root pts/1 78.31.109.1 Thu Nov 30 08:24 - 08:26 (00:01)
|
||||
root pts/0 113.174.161.1 Wed Nov 29 12:34 - 12:52 (00:18)
|
||||
root pts/0 14.176.196.1 Mon Nov 27 13:32 - 13:53 (00:21)
|
||||
|
||||
```
|
||||
|
||||
这里可以看到英国IP和越南IP交替出现, 而且最上面两个IP现在还处于登录状态. 如果你看到任何未经授权的IP,那么请参阅最后章节。
|
||||
|
||||
登录历史记录会以文本格式记录到 `~/.bash_history`(注:这里作者应该写错了)中,因此很容易被删除。
|
||||
通常攻击者会直接把这个文件删掉,以掩盖他们的攻击行为. 因此, 若你运行了 `last` 命令却只看得见你的当前登录,那么这就是个不妙的信号。
|
||||
|
||||
如果没有登录历史的话,请一定小心,继续留意入侵的其他线索。
|
||||
|
||||
#### 检查 3 - 回顾命令历史
|
||||
|
||||
这个层次的攻击者通常不会注意掩盖命令的历史记录,因此运行 `history` 命令会显示出他们曾经做过的所有事情。
|
||||
一定留意有没有用 `wget` 或 `curl` 命令来下载类似垃圾邮件机器人或者挖矿程序之类的软件。
|
||||
|
||||
命令历史存储在 `~/.bash_history` 文件中,因此有些攻击者会删除该文件以掩盖他们的所作所为。
|
||||
跟登录历史一样, 若你运行 `history` 命令却没有输出任何东西那就表示历史文件被删掉了. 这也是个不妙的信号,你需要很小心地检查一下服务器了。
|
||||
|
||||
#### 检查 4 - 哪些进程在消耗CPU?
|
||||
|
||||
你常遇到的这类攻击者通常不怎么会去掩盖他们做的事情。他们会运行一些特别消耗 CPU 的进程,这就很容易发现这些进程了。只需要运行 `top` 然后看最前面的那几个进程就行了。
|
||||
|
||||
这也能显示出那些未登录的攻击者来. 比如,可能有人在用未受保护的邮件脚本来发送垃圾邮件。
|
||||
|
||||
如果你对最上面的进程不了解,那么你可以 google 一下进程名称,或者通过 `lsof` 和 `strace` 来看看它做的事情是什么。
|
||||
|
||||
使用这些工具,第一步从 `top` 中拷贝出进程的 PID,然后运行:
|
||||
|
||||
```shell
|
||||
strace -p PID
|
||||
|
||||
```
|
||||
|
||||
这会显示出进程调用的所有系统调用. 它产生的内容会很多,但这些信息能告诉你这个进程在做什么。
|
||||
|
||||
```
|
||||
lsof -p PID
|
||||
|
||||
```
|
||||
|
||||
这个程序会列出进程打开的文件. 通过查看它访问的文件可以很好的理解它在做的事情。
|
||||
|
||||
#### 检查 5 - 检查所有的系统进程
|
||||
|
||||
消耗CPU不严重的未认证进程可能不会在 `top` 中显露出来,不过它依然可以通过 `ps` 列出来. 命令 `ps auxf` 就能显示足够清晰的信息了。
|
||||
|
||||
你需要检查一下每个不认识的进程. 经常运行 `ps` (这是个好习惯) 能帮助你发现奇怪的进程。
|
||||
|
||||
#### 检查 6 - 检查进程的网络使用情况
|
||||
|
||||
`iftop` 的功能类似 `top`,他会显示一系列收发网络数据的进程以及他们的源地址和目的地址。
|
||||
类似 `DoS` 攻击或垃圾制造器这样的进程很容易显示在列表的最顶端。
|
||||
|
||||
#### 检查 7 - 哪些进程在监听网络连接?
|
||||
|
||||
通常攻击者会安装一个后门程序专门监听网络端口接受指令. 该进程等待期间是不会消耗CPU和带宽的,因此也就不容易通过 `top` 之类的命令发现。
|
||||
|
||||
`lsof` 和 `netstat` 命令都会列出所有的联网进程. 我通常会让他们带上下面这些参数:
|
||||
|
||||
```
|
||||
lsof -i
|
||||
|
||||
```
|
||||
|
||||
```
|
||||
netstat -plunt
|
||||
|
||||
```
|
||||
|
||||
你需要留意那些处于 `LISTEN` 和 `ESTABLISHED` 状态的进程,这些进程要么正在等待连接(LISTEN),要么已经连接(ESTABLISHED)。
|
||||
如果遇到不认识的进程,使用 `strace` 和 `lsof` 来看看它们在做什么东西。
|
||||
|
||||
### 被入侵之后该怎么办呢?
|
||||
|
||||
首先,不要紧张, 尤其当攻击者正处于登陆状态时更不能紧张. 你需要在攻击者警觉到你已经发现他之前夺回机器的控制权。
|
||||
如果他发现你已经发觉到他了,那么他可能会锁死你不让你登陆服务器,然后开始毁尸灭迹。
|
||||
|
||||
如果你技术不太好那么就直接关机吧. 你可以在服务器上运行 `shutdown -h now` 或者 `systemctl poweroff` 这两条命令. 也可以登陆主机提供商的控制面板中关闭服务器。
|
||||
关机后,你就可以开始配置防火墙或者咨询一下供应商的意见。
|
||||
|
||||
如果你对自己颇有自信,而你的主机提供商也有提供上游防火墙,那么你只需要以此创建并启用下面两条规则就行了:
|
||||
|
||||
1. 只允许从你的IP地址登陆SSH
|
||||
|
||||
2. 封禁除此之外的任何东西,不仅仅是SSH,还包括任何端口上的任何协议。
|
||||
|
||||
这样会立即关闭攻击者的SSH会话,而只留下你访问服务器。
|
||||
|
||||
如果你无法访问上游防火墙,那么你就需要在服务器本身创建并启用这些防火墙策略,然后在防火墙规则起效后使用 `kill` 命令关闭攻击者的ssh会话。
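下面是一个在服务器本机上操作的最小示意(其中你的 IP 203.0.113.10 和攻击者占用的终端 pts/0 都是假设,请以实际情况和 `w` 的输出为准):

```shell
# 只放行来自你自己 IP 的 SSH 连接
iptables -I INPUT 1 -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT
# 将入站默认策略改为丢弃,封禁其他所有端口上的所有协议
iptables -P INPUT DROP
# 防火墙生效后,强制结束攻击者所在终端的会话
pkill -9 -t pts/0
```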
|
||||
|
||||
最后还有一种方法, 就是通过诸如串行控制台之类的带外连接登陆服务器,然后通过 `systemctl stop network.service` 停止网络功能。
|
||||
这会关闭所有服务器上的网络连接,这样你就可以慢慢的配置那些防火墙规则了。
|
||||
|
||||
重夺服务器的控制权后,也不要以为就万事大吉了。
|
||||
|
||||
不要试着修复这台服务器,然后接着用。你永远不知道攻击者做过什么,因此你也永远无法保证这台服务器还是安全的。
|
||||
|
||||
最好的方法就是拷贝出所有的资料,然后重装系统。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://bash-prompt.net/guides/server-hacked/
|
||||
|
||||
作者:[Elliot Cooper][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://bash-prompt.net
|
@ -0,0 +1,92 @@
|
||||
5 个最佳实践开始你的 DevOps 之旅
|
||||
============================================================
|
||||
|
||||
### 想要实现 DevOps 但是不知道如何开始吗?试试这 5 个最佳实践吧。
|
||||
|
||||
|
||||
![5 best practices for getting started with DevOps](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops-gears.png?itok=rUejbLQX "5 best practices for getting started with DevOps")
|
||||
|
||||
Image by : [Andrew Magill][8]. Modified by Opensource.com. [CC BY 4.0][9]
|
||||
|
||||
想要采用 DevOps 的人通常会过早的被它的歧义性给吓跑,更不要说更加深入的使用了。当一些人开始使用 DevOps 的时候都会问:“如何开始使用呢?”,”怎么才算使用了呢?“。这 5 个最佳实践是很好的路线图来指导你的 DevOps 之旅。
|
||||
|
||||
### 1\. 衡量所有的事情
|
||||
|
||||
除非你能量化输出结果,否则你并不能确认你的努力能否使事情变得更好。新功能能否快速的输出给客户?有更少的漏洞泄漏给他们吗?出错了能快速应对和恢复吗?
|
||||
|
||||
在你开始做任何修改之前,先想清楚你希望切换到 DevOps 之后得到什么样的结果。随着 DevOps 之旅的推进,你将获得关于服务方方面面的丰富实时报告。不妨先从这两个指标入手:
|
||||
|
||||
* **上架时间** 衡量的是端到端的、通常面向客户的业务体验。它通常从一个功能被正式提出开始,到客户在产品中真正用上这个功能结束。上架时间与其说是团队层面的指标,不如说它反映了你的业务交付一个有价值的新功能的效率,并为系统性改进提供了机会。
|
||||
|
||||
* **时间周期** 衡量的是工程团队的进度。从开始开发一个新功能,到它在生产环境中运行起来需要多久?这个指标对于你理解团队的效率非常有用,并为团队层面的改进提供了机会。
|
||||
|
||||
### 2\. 放飞你的流程
|
||||
|
||||
DevOps 的成功需要团队布置一个定期流程并且持续提升它。这不总是有效的,但是必须是一个定期(希望有效)的流程。通常它有一些敏捷开发的味道,就像 Scrum 或者 Scrumban 一样;一些时候它也像精益开发。不论你用的什么方法,挑选一个正式的流程,开始使用它,并且做好这些基础。
|
||||
|
||||
定期检查和调整流程是 DevOps 成功的关键,抓住相关演示,团队回顾,每日会议的机会来提升你的流程。
|
||||
|
||||
DevOps 的成功取决于大家一起有效的工作。团队的成员需要在一个有权改进的公共流程中工作。他们也需要定期找机会分享从这个流程中上游或下游的其他人那里学到的东西。
|
||||
|
||||
随着你不断取得成功,好的流程规范能帮助你的团队更快地体会到 DevOps 的其他好处。
|
||||
|
||||
尽管更多面向开发的团队采用 Scrum 是常见的,但是以运营为中心的团队(或者其他中断驱动的团队)可能选用一个更短期的流程,例如 Kanban。
|
||||
|
||||
### 3\. 可视化工作流程
|
||||
能够看到谁在什么时间做哪一部分工作,是一种很强大的能力。可视化你的工作流程能帮助大家知道接下来应该做什么、流程中有多少工作正在进行,以及流程中的瓶颈在哪里。
|
||||
|
||||
在你看到和衡量之前你并不能有效的限制流程中的工作。同样的,你也不能有效的排除瓶颈直到你清楚的看到它。
|
||||
|
||||
全部工作可视化能帮助团队中的成员了解他们在整个工作中的贡献。这样可以促进跨组织边界的关系建设,帮助您的团队更有效地协作,实现共同的成就感。
|
||||
|
||||
### 4\. 持续化所有的事情
|
||||
|
||||
DevOps 要求尽可能地自动化。然而罗马不是一日建成的。你首先应该努力实现的是持续集成(CI),但是不要停留在这里;紧接着是持续交付(CD),以及最终的持续部署。
|
||||
|
||||
持续部署的过程中是个注入自动测试的好时机。这个时候新代码刚被提交,你的持续部署应该运行测试代码来测试你的代码和构建成功的加工品。这个加工品经受流程的考验被产出直到最终被客户看到。
|
||||
|
||||
另一个“持续”是不太引人注意的持续改进。一个简单的场景是每天询问你旁边的同事:“今天做些什么能使工作变得更好?”,随着时间的推移,这些日常的小改进融合到一起会引起很大的结果,你将很惊喜!但是这也会让人一直思考着如何改进。
|
||||
|
||||
### 5\. Gherkinize
|
||||
|
||||
促进组织间更有效的沟通对于成功的 DevOps 的系统思想至关重要。在程序员和业务员之间直接使用共享语言来描述新功能的需求文档对于沟通是个好办法。一个好的产品经理能在一天内学会 [Gherkin][12] 然后使用它构造出明确的英语来描述需求文档,工程师会使用 Gherkin 描述的需求文档来写功能测试,之后开发功能代码直到代码通过测试。这是一个简化的 [验收测试驱动开发][13](ATDD),这样就开始了你的 DevOps 文化和开发实践。
|
||||
|
||||
### 开始你的旅程
|
||||
|
||||
不要气馁。希望这五个想法能为你提供坚实的入门方法。
|
||||
|
||||
|
||||
### 关于作者
|
||||
|
||||
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot_4.jpg?itok=jntfDCfX)][14]
|
||||
|
||||
Magnus Hedemark - Magnus 在 IT 行业已有 20 多年,并且一直热衷于技术。他目前是 UnitedHealth Group 的 DevOps 工程师。在业余时间,Magnus 喜欢摄影和划独木舟。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/11/5-keys-get-started-devops
|
||||
|
||||
作者:[Magnus Hedemark ][a]
|
||||
译者:[aiwhj](https://github.com/aiwhj)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/magnus919
|
||||
[1]:https://opensource.com/tags/devops?src=devops_resource_menu1
|
||||
[2]:https://opensource.com/resources/devops?src=devops_resource_menu2
|
||||
[3]:https://www.openshift.com/promotions/devops-with-openshift.html?intcmp=7016000000127cYAAQ&src=devops_resource_menu3
|
||||
[4]:https://enterprisersproject.com/article/2017/5/9-key-phrases-devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu4
|
||||
[5]:https://www.redhat.com/en/insights/devops?intcmp=7016000000127cYAAQ&src=devops_resource_menu5
|
||||
[6]:https://opensource.com/article/17/11/5-keys-get-started-devops?rate=oEOzMXx1ghbkfl2a5ae6AnvO88iZ3wzkk53K2CzbDWI
|
||||
[7]:https://opensource.com/user/25739/feed
|
||||
[8]:https://ccsearch.creativecommons.org/image/detail/7qRx_yrcN5isTMS0u9iKMA==
|
||||
[9]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[10]:https://martinfowler.com/articles/continuousIntegration.html
|
||||
[11]:https://martinfowler.com/bliki/ContinuousDelivery.html
|
||||
[12]:https://cucumber.io/docs/reference
|
||||
[13]:https://en.wikipedia.org/wiki/Acceptance_test%E2%80%93driven_development
|
||||
[14]:https://opensource.com/users/magnus919
|
||||
[15]:https://opensource.com/users/magnus919
|
||||
[16]:https://opensource.com/users/magnus919
|
||||
[17]:https://opensource.com/tags/devops
|
@ -0,0 +1,41 @@
|
||||
有人试图将 Ubuntu Unity 非正式地从死亡带回来
|
||||
============================================================
|
||||
|
||||
|
||||
|
||||
> Ubuntu Unity Remix 将支持九个月
|
||||
|
||||
Canonical 在七年之后突然决定抛弃它的 Unity 用户界面影响了许多 Ubuntu 用户,看起来有人现在试图非正式地把它从死亡中带回来。
|
||||
|
||||
长期 [Ubuntu][1] 成员 Dale Beaudoin 上周在官方的 Ubuntu 论坛上[进行了一项调查][2]来了解社区,看看他们是否对明年发布的 Ubuntu 18.04 LTS(Bionic Beaver)的 Ubuntu Unity Remix 感兴趣,它将支持 9 个月或 5 年。
|
||||
|
||||
有 30 人进行了投票,其中 67% 的人选择了所谓的 Ubuntu Unity Remix 的 LTS(长期支持)版本,33% 的人投票支持 9 个月的支持版本。即将到来的 Ubuntu Unity Spin [看起来会成为官方版本][3],但这并不意味着对其开发做出了承诺。
|
||||
|
||||
Dale Beaudoin 表示:“最近的一项民意调查显示,2/3 的人支持 Ubuntu Unity 成为 LTS 发行版,我们应该尝试这个循环,假设它将是 LTS 和官方的风格。“我们将尝试使用当前默认的 Ubuntu Bionic Beaver 18.04 的每日版本作为平台每周或每 10 天发布一次更新的 ISO。”
|
||||
|
||||
### Ubuntu Unity 是否会卷土重来?
|
||||
|
||||
默认情况下,最后一个带有 Unity 的 Ubuntu 版本是 Ubuntu 17.04(Zesty Zapus),它将在 2018 年 1 月终止支持。当前这个流行操作系统的稳定版本 Ubuntu 17.10(Artful Aardvark),是今年早些时候 Canonical CEO [宣布][4] Unity 将不再开发之后,第一个默认使用 GNOME 桌面环境的版本。
|
||||
|
||||
然而,Canonical 仍然从官方软件仓库提供 Unity 桌面环境,所以如果有人想要安装它,只需点击一下即可。但坏消息是,它们支持到 2018 年 4 月发布 Ubuntu 18.04 LTS(Bionic Beaver)之前,所以 Ubuntu Unity Remix 的开发者们将不得不在独立的仓库中继续支持。
|
||||
|
||||
另一方面,我们不相信 Canonical 会改变主意,接受这个 Ubuntu Unity Spin 成为官方的风格,这意味着他们无法继续开发 Unity,现在只有一小部分人可以做到这一点。最有可能的是,如果对 Ubuntu Unity Remix 的兴趣没有很快消失,那么,这可能会是一个由怀旧社区支持的非官方版本。
|
||||
|
||||
问题是,你会对 Ubuntu Unity Spin 感兴趣么,不管它是官方的还是非官方的?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://news.softpedia.com/news/someone-tries-to-bring-back-ubuntu-s-unity-from-the-dead-as-an-unofficial-spin-518778.shtml
|
||||
|
||||
作者:[Marius Nestor ][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://news.softpedia.com/editors/browse/marius-nestor
|
||||
[1]:http://linux.softpedia.com/downloadTag/Ubuntu
|
||||
[2]:https://community.ubuntu.com/t/poll-unity-7-distro-9-month-spin-or-lts-for-18-04/2066
|
||||
[3]:https://community.ubuntu.com/t/unity-maintenance-roadmap/2223
|
||||
[4]:http://news.softpedia.com/news/canonical-to-stop-developing-unity-8-ubuntu-18-04-lts-ships-with-gnome-desktop-514604.shtml
|
||||
[5]:http://news.softpedia.com/editors/browse/marius-nestor
|
@ -0,0 +1,185 @@
|
||||
如何在 Linux shell 中找出所有包含指定文本的文件
|
||||
------
|
||||
### 目标
|
||||
|
||||
本文提供一些关于如何搜索出指定目录或整个文件系统中那些包含指定单词或字符串的文件。
|
||||
|
||||
### 难度
|
||||
|
||||
容易
|
||||
|
||||
### 约定
|
||||
|
||||
* \# - 需要使用 root 权限来执行指定命令,可以直接使用 root 用户来执行也可以使用 sudo 命令
|
||||
|
||||
* \$ - 可以使用普通用户来执行指定命令
|
||||
|
||||
### 案例
|
||||
|
||||
#### 非递归搜索包含指定字符串的文件
|
||||
|
||||
第一个例子让我们来搜索 `/etc/` 目录下所有包含 `stretch` 字符串的文件,但不去搜索其中的子目录:
|
||||
|
||||
```shell
|
||||
# grep -s stretch /etc/*
|
||||
/etc/os-release:PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
|
||||
/etc/os-release:VERSION="9 (stretch)"
|
||||
```
|
||||
grep 的 `-s` 选项会在遇到不存在或者不能读取的文件时抑制报错信息。结果显示,除了文件名外,包含所请求字符串的行也被一起输出了。
|
||||
|
||||
#### 递归地搜索包含指定字符串的文件
|
||||
|
||||
上面案例中忽略了所有的子目录。所谓递归搜索就是指同时搜索所有的子目录。
|
||||
下面的命令会在 `/etc/` 及其子目录中搜索包含 `stretch` 字符串的文件:
|
||||
|
||||
```shell
|
||||
# grep -R stretch /etc/*
|
||||
/etc/apt/sources.list:# deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main
|
||||
/etc/apt/sources.list:#deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main
|
||||
/etc/apt/sources.list:deb http://ftp.au.debian.org/debian/ stretch main
|
||||
/etc/apt/sources.list:deb-src http://ftp.au.debian.org/debian/ stretch main
|
||||
/etc/apt/sources.list:deb http://security.debian.org/debian-security stretch/updates main
|
||||
/etc/apt/sources.list:deb-src http://security.debian.org/debian-security stretch/updates main
|
||||
/etc/dictionaries-common/words:backstretch
|
||||
/etc/dictionaries-common/words:backstretch's
|
||||
/etc/dictionaries-common/words:backstretches
|
||||
/etc/dictionaries-common/words:homestretch
|
||||
/etc/dictionaries-common/words:homestretch's
|
||||
/etc/dictionaries-common/words:homestretches
|
||||
/etc/dictionaries-common/words:outstretch
|
||||
/etc/dictionaries-common/words:outstretched
|
||||
/etc/dictionaries-common/words:outstretches
|
||||
/etc/dictionaries-common/words:outstretching
|
||||
/etc/dictionaries-common/words:stretch
|
||||
/etc/dictionaries-common/words:stretch's
|
||||
/etc/dictionaries-common/words:stretched
|
||||
/etc/dictionaries-common/words:stretcher
|
||||
/etc/dictionaries-common/words:stretcher's
|
||||
/etc/dictionaries-common/words:stretchers
|
||||
/etc/dictionaries-common/words:stretches
|
||||
/etc/dictionaries-common/words:stretchier
|
||||
/etc/dictionaries-common/words:stretchiest
|
||||
/etc/dictionaries-common/words:stretching
|
||||
/etc/dictionaries-common/words:stretchy
|
||||
/etc/grub.d/00_header:background_image -m stretch `make_system_path_relative_to_its_root "$GRUB_BACKGROUND"`
|
||||
/etc/os-release:PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
|
||||
/etc/os-release:VERSION="9 (stretch)"
|
||||
```
|
||||
|
||||
#### 搜索所有包含特定单词的文件
|
||||
上面 `grep` 命令的案例中列出的是所有包含字符串 `stretch` 的文件。也就是说包含 `stretches` , `stretched` 等内容的行也会被显示。 使用 grep 的 `-w` 选项会只显示包含特定单词的行:
|
||||
|
||||
```shell
|
||||
# grep -Rw stretch /etc/*
|
||||
/etc/apt/sources.list:# deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main
|
||||
/etc/apt/sources.list:#deb cdrom:[Debian GNU/Linux testing _Stretch_ - Official Snapshot amd64 NETINST Binary-1 20170109-05:56]/ stretch main
|
||||
/etc/apt/sources.list:deb http://ftp.au.debian.org/debian/ stretch main
|
||||
/etc/apt/sources.list:deb-src http://ftp.au.debian.org/debian/ stretch main
|
||||
/etc/apt/sources.list:deb http://security.debian.org/debian-security stretch/updates main
|
||||
/etc/apt/sources.list:deb-src http://security.debian.org/debian-security stretch/updates main
|
||||
/etc/dictionaries-common/words:stretch
|
||||
/etc/dictionaries-common/words:stretch's
|
||||
/etc/grub.d/00_header:background_image -m stretch `make_system_path_relative_to_its_root "$GRUB_BACKGROUND"`
|
||||
/etc/os-release:PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
|
||||
/etc/os-release:VERSION="9 (stretch)"
|
||||
```
|
||||
|
||||
#### 显示包含特定文本文件的文件名
|
||||
上面的命令都会产生多余的输出。下一个案例则会递归地搜索 `etc` 目录中包含 `stretch` 的文件并只输出文件名:
|
||||
|
||||
```shell
|
||||
# grep -Rl stretch /etc/*
|
||||
/etc/apt/sources.list
|
||||
/etc/dictionaries-common/words
|
||||
/etc/grub.d/00_header
|
||||
/etc/os-release
|
||||
```
|
||||
|
||||
#### 大小写不敏感的搜索
|
||||
默认情况下搜索是大小写敏感的,也就是说当搜索字符串 `stretch` 时,只会匹配大小写完全一致的内容。
|
||||
通过使用 grep 的 `-i` 选项,grep 命令还会列出所有包含 `Stretch` , `STRETCH` , `StReTcH` 等内容的文件,也就是说进行的是大小写不敏感的搜索。
|
||||
|
||||
```shell
|
||||
# grep -Ril stretch /etc/*
|
||||
/etc/apt/sources.list
|
||||
/etc/dictionaries-common/default.hash
|
||||
/etc/dictionaries-common/words
|
||||
/etc/grub.d/00_header
|
||||
/etc/os-release
|
||||
```
|
||||
|
||||
#### 搜索时包含/排除指定文件
|
||||
`grep` 命令也可以只在指定文件中进行搜索。比如,我们可以只在配置文件(扩展名为`.conf`)中搜索指定的文本/字符串。 下面这个例子就会在 `/etc` 目录中搜索带字符串 `bash` 且所有扩展名为 `.conf` 的文件:
|
||||
|
||||
```shell
|
||||
# grep -Ril bash /etc/*.conf
|
||||
OR
|
||||
# grep -Ril --include=\*.conf bash /etc/*
|
||||
/etc/adduser.conf
|
||||
```
|
||||
|
||||
类似的,也可以使用 `--exclude` 来排除特定的文件:
|
||||
|
||||
```shell
|
||||
# grep -Ril --exclude=\*.conf bash /etc/*
|
||||
/etc/alternatives/view
|
||||
/etc/alternatives/vim
|
||||
/etc/alternatives/vi
|
||||
/etc/alternatives/vimdiff
|
||||
/etc/alternatives/rvim
|
||||
/etc/alternatives/ex
|
||||
/etc/alternatives/rview
|
||||
/etc/bash.bashrc
|
||||
/etc/bash_completion.d/grub
|
||||
/etc/cron.daily/apt-compat
|
||||
/etc/cron.daily/exim4-base
|
||||
/etc/dictionaries-common/default.hash
|
||||
/etc/dictionaries-common/words
|
||||
/etc/inputrc
|
||||
/etc/passwd
|
||||
/etc/passwd-
|
||||
/etc/profile
|
||||
/etc/shells
|
||||
/etc/skel/.profile
|
||||
/etc/skel/.bashrc
|
||||
/etc/skel/.bash_logout
|
||||
```
|
||||
|
||||
#### 搜索时排除指定目录
|
||||
跟文件一样,grep 也能在搜索时排除指定目录。 使用 `--exclude-dir` 选项就行。
|
||||
下面这个例子会搜索 `/etc` 目录中所有包含字符串 `stretch` 的文件,但不包括 `/etc/grub.d` 目录下的文件:
|
||||
|
||||
```shell
|
||||
# grep --exclude-dir=/etc/grub.d -Rwl stretch /etc/*
|
||||
/etc/apt/sources.list
|
||||
/etc/dictionaries-common/words
|
||||
/etc/os-release
|
||||
```
|
||||
|
||||
#### 显示包含搜索字符串的行号
|
||||
`-n` 选项还会显示指定字符串所在行的行号:
|
||||
|
||||
```shell
|
||||
# grep -Rni bash /etc/*.conf
|
||||
/etc/adduser.conf:6:DSHELL=/bin/bash
|
||||
```
|
||||
|
||||
#### 寻找不包含指定字符串的文件
|
||||
最后这个例子使用 `-v` 来列出所有 *不* 包含指定字符串的文件。
|
||||
例如下面命令会搜索 `/etc` 目录中不包含 `stretch` 的所有文件:
|
||||
|
||||
```shell
|
||||
# grep -Rlv stretch /etc/*
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://linuxconfig.org/how-to-find-all-files-with-a-specific-text-using-linux-shell
|
||||
|
||||
作者:[Lubos Rendek][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者 ID](https://github.com/校对者 ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://linuxconfig.org
|
@ -0,0 +1,154 @@
|
||||
Undistract-me:当长时间运行的终端命令完成时获取通知
|
||||
============================================================
|
||||
|
||||
作者:[sk][2],时间:2017.11.30
|
||||
|
||||
![Undistract-me](https://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2-720x340.png)
|
||||
|
||||
前一段时间,我们发表了如何[在终端活动完成时获取通知][3]。今天,我发现了一个叫做 “undistract-me” 的类似工具,它可以在长时间运行的终端命令完成时通知你。想象这个场景。你运行着一个需要一段时间才能完成的命令。与此同时,你查看你的 Facebook,并参与其中。过了一会儿,你记得你几分钟前执行了一个命令。你回到终端,注意到这个命令已经完成了。但是你不知道命令何时完成。你有没有遇到这种情况?我敢打赌,你们大多数人遇到过许多次这种情况。这就是 “undistract-me” 能帮助的了。你不需要经常检查终端,查看命令是否完成。长时间运行的命令完成后,undistract-me 会通知你。它能在 Arch Linux、Debian、Ubuntu 和其他 Ubuntu 衍生版上运行。
|
||||
|
||||
#### 安装 Undistract-me
|
||||
|
||||
Undistract-me 可以在 Debian 及其衍生版(如 Ubuntu)的默认仓库中使用。你要做的就是运行下面的命令来安装它。
|
||||
|
||||
```
|
||||
sudo apt-get install undistract-me
|
||||
```
|
||||
|
||||
Arch Linux 用户可以使用任何帮助程序从 AUR 安装它。
|
||||
|
||||
使用 [Pacaur][4]:
|
||||
|
||||
```
|
||||
pacaur -S undistract-me-git
|
||||
```
|
||||
|
||||
使用 [Packer][5]:
|
||||
|
||||
```
|
||||
packer -S undistract-me-git
|
||||
```
|
||||
|
||||
使用 [Yaourt][6]:
|
||||
|
||||
```
|
||||
yaourt -S undistract-me-git
|
||||
```
|
||||
|
||||
然后,运行以下命令将 “undistract-me” 添加到 Bash 中。
|
||||
|
||||
```
|
||||
echo 'source /etc/profile.d/undistract-me.sh' >> ~/.bashrc
|
||||
```
|
||||
|
||||
或者,你可以运行此命令将其添加到你的 Bash:
|
||||
|
||||
```
|
||||
echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .bashrc
|
||||
```
|
||||
|
||||
如果你在 Zsh shell 中,请运行以下命令:
|
||||
|
||||
```
|
||||
echo "source /usr/share/undistract-me/long-running.bash\nnotify_when_long_running_commands_finish_install" >> .zshrc
|
||||
```
|
||||
|
||||
最后更新更改:
|
||||
|
||||
对于 Bash:
|
||||
|
||||
```
|
||||
source ~/.bashrc
|
||||
```
|
||||
|
||||
对于 Zsh:
|
||||
|
||||
```
|
||||
source ~/.zshrc
|
||||
```
|
||||
|
||||
#### 配置 Undistract-me
|
||||
|
||||
默认情况下,Undistract-me 会将任何超过 10 秒的命令视为长时间运行的命令。你可以通过编辑 /usr/share/undistract-me/long-running.bash 来更改此时间间隔。
|
||||
|
||||
```
|
||||
sudo nano /usr/share/undistract-me/long-running.bash
|
||||
```
|
||||
|
||||
找到 “LONG_RUNNING_COMMAND_TIMEOUT” 变量并将默认值(10 秒)更改为你所选择的其他值。
|
||||
|
||||
[![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png)][7]
|
||||
|
||||
保存并关闭文件。不要忘记更新更改:
|
||||
|
||||
```
|
||||
source ~/.bashrc
|
||||
```
|
||||
|
||||
此外,你可以禁用特定命令的通知。为此,找到 “LONG_RUNNING_IGNORE_LIST” 变量并像下面那样用空格分隔命令。
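比如像下面这样(仅为示意,变量名取自该脚本,要忽略哪些命令请按自己的习惯调整):

```shell
# 这些命令即使运行时间超过阈值也不会触发通知
LONG_RUNNING_IGNORE_LIST="vim less man top htop"
```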
|
||||
|
||||
默认情况下,只有当活动窗口不是命令运行的窗口时才会显示通知。也就是说,只有当命令在后台“终端”窗口中运行时,它才会通知你。如果该命令在活动窗口终端中运行,则不会收到通知。如果你希望无论终端窗口可见还是在后台都发送通知,你可以将 IGNORE_WINDOW_CHECK 设置为 1 以跳过窗口检查。
|
||||
|
||||
Undistract-me 的另一个很酷的功能是当命令完成时,你可以设置音频通知和可视通知。默认情况下,它只会发送一个可视通知。你可以通过在命令行上将变量 UDM_PLAY_SOUND 设置为非零整数来更改此行为。但是,你的 Ubuntu 系统应该安装 pulseaudio-utils 和 sound-theme-freedesktop 程序来启用此功能。
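一个可行的做法(仅为示意,变量名取自上文)是把这两个变量写入 shell 的配置文件:

```shell
# 写入 ~/.bashrc(Zsh 用户写入 ~/.zshrc)
export UDM_PLAY_SOUND=1        # 命令完成时同时播放提示音
export IGNORE_WINDOW_CHECK=1   # 无论终端窗口是否在前台都发送通知
```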
|
||||
|
||||
请记住,你需要运行以下命令来更新所做的更改。
|
||||
|
||||
对于 Bash:
|
||||
|
||||
```
|
||||
source ~/.bashrc
|
||||
```
|
||||
|
||||
对于 Zsh:
|
||||
|
||||
```
|
||||
source ~/.zshrc
|
||||
```
|
||||
|
||||
现在是时候来验证这是否真的有效。
|
||||
|
||||
#### 在长时间运行的终端命令完成时获取通知
|
||||
|
||||
现在,运行任何需要超过 10 秒或者你在 Undistract-me 脚本中定义的时间的命令
|
||||
|
||||
我在 Arch Linux 桌面上运行以下命令。
|
||||
|
||||
```
|
||||
sudo pacman -Sy
|
||||
```
|
||||
|
||||
这个命令花了 32 秒完成。上述命令完成后,我收到以下通知。
|
||||
|
||||
[![](http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png)][8]
|
||||
|
||||
请记住,只有当给定的命令花了超过 10 秒才能完成,Undistract-me 脚本才会通知你。如果命令在 10 秒内完成,你将不会收到通知。当然,你可以按照上面的“配置”部分所述更改此时间间隔设置。
|
||||
|
||||
我发现这个工具非常有用。在我迷失在其他任务上时,它帮助我回到正事。我希望这个工具也能对你有帮助。
|
||||
|
||||
还有更多的工具。保持耐心!
|
||||
|
||||
干杯!
|
||||
|
||||
资源:
|
||||
|
||||
* [Undistract-me GitHub 仓库][1]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/undistract-get-notification-long-running-terminal-commands-complete/
|
||||
|
||||
作者:[sk][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://github.com/jml/undistract-me
|
||||
[2]:https://www.ostechnix.com/author/sk/
|
||||
[3]:https://www.ostechnix.com/get-notification-terminal-task-done/
|
||||
[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
|
||||
[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
|
||||
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
|
||||
[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-1.png
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/undistract-me-2.png
|
@ -0,0 +1,71 @@
|
||||
### [Fedora 课堂会议: Ansible 101][2]
|
||||
|
||||
### By Sachin S Kamath
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/07/fedora-classroom-945x400.jpg)
|
||||
|
||||
Fedora 课堂会议本周继续进行,本周的主题是 Ansible。 会议的时间安排表发布在 [wiki][3] 上。你还可以从那里找到[之前会议的资源和录像][4]。以下是会议的具体时间 [11月30日本周星期四 1600 UTC][5]。该链接可以将这个时间转换为您的时区上的时间。
|
||||
|
||||
### 主题: Ansible 101
|
||||
|
||||
正如 Ansible [文档][6] 所说,Ansible 是一个 IT 自动化工具。它主要用于配置系统,部署软件和编排更高级的 IT 任务。示例包括持续交付与零停机滚动升级。
|
||||
|
||||
本课堂课程涵盖以下主题:
|
||||
|
||||
1. SSH 简介
|
||||
|
||||
2. 了解不同的术语
|
||||
|
||||
3. Ansible 简介
|
||||
|
||||
4. Ansible 安装和设置
|
||||
|
||||
5. 建立无密码连接
|
||||
|
||||
6. Ad-hoc 命令
|
||||
|
||||
7. 管理 inventory
|
||||
|
||||
8. Playbooks 示例
|
||||
|
||||
之后还将有 Ansible 102 的后续会议。该会议将涵盖复杂的 playbooks,playbooks 角色(roles),动态 inventory 文件,流程控制和 Ansible Galaxy 命令行工具.
|
||||
|
||||
### 讲师
|
||||
|
||||
我们有两位经验丰富的讲师进行这次会议。
|
||||
|
||||
[Geoffrey Marr][7],IRC 聊天室中名字叫 coremodule,是 Red Hat 的一名员工和 Fedora 的贡献者,拥有 Linux 和云技术的背景。工作时,他潜心于 [Fedora QA][8] wiki 和测试页面中。业余时间, 他热衷于 RaspberryPi 项目,尤其是专注于那些软件无线电(Software-defined radio)项目。
|
||||
|
||||
[Vipul Siddharth][9] 是Red Hat的实习生,他也在Fedora上工作。他喜欢贡献开源,借此机会传播自由开源软件。
|
||||
|
||||
### 加入会议
|
||||
|
||||
本次会议将在 [BlueJeans][10] 上进行。下面的信息可以帮你加入到会议:
|
||||
|
||||
* 网址: [https://bluejeans.com/3466040121][1]
|
||||
|
||||
* 会议 ID (桌面版): 3466040121
|
||||
|
||||
我们希望您可以参加,学习,并享受这个会议!如果您对会议有任何反馈意见,有什么新的想法或者想要主持一个会议, 可以随时在这篇文章发表评论或者查看[课堂 wiki 页面][11].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/fedora-classroom-session-ansible-101/
|
||||
|
||||
作者:[Sachin S Kamath]
|
||||
译者:[imquanquan](https://github.com/imquanquan)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://bluejeans.com/3466040121
|
||||
[2]:https://fedoramagazine.org/fedora-classroom-session-ansible-101/
|
||||
[3]:https://fedoraproject.org/wiki/Classroom
|
||||
[4]:https://fedoraproject.org/wiki/Classroom#Previous_Sessions
|
||||
[5]:https://www.timeanddate.com/worldclock/fixedtime.html?msg=Fedora+Classroom+-+Ansible+101&iso=20171130T16&p1=%3A
|
||||
[6]:http://docs.ansible.com/ansible/latest/index.html
|
||||
[7]:https://fedoraproject.org/wiki/User:Coremodule
|
||||
[8]:https://fedoraproject.org/wiki/QA
|
||||
[9]:https://fedoraproject.org/wiki/User:Siddharthvipul1
|
||||
[10]:https://www.bluejeans.com/downloads
|
||||
[11]:https://fedoraproject.org/wiki/Classroom
|
163
translated/tech/20171205 How to Use the Date Command in Linux.md
Normal file
163
translated/tech/20171205 How to Use the Date Command in Linux.md
Normal file
@ -0,0 +1,163 @@
|
||||
如何使用 Date 命令
|
||||
======
|
||||
在本文中,我们会通过一些案例来演示如何使用 Linux 中的 date 命令。date 命令可以用来显示/设置系统日期和时间。date 命令很简单,请参见下面的例子和语法。
|
||||
|
||||
默认情况下,当不带任何参数运行 date 命令时,它会输出当前系统日期和时间:
|
||||
|
||||
```shell
|
||||
date
|
||||
```
|
||||
|
||||
```
|
||||
Sat 2 Dec 12:34:12 CST 2017
|
||||
```
|
||||
|
||||
#### 语法
|
||||
|
||||
```
|
||||
Usage: date [OPTION]... [+FORMAT]
|
||||
or: date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]
|
||||
Display the current time in the given FORMAT, or set the system date.
|
||||
|
||||
```
|
||||
|
||||
### 案例
|
||||
|
||||
下面这些案例会向你演示如何使用 date 命令来查看前后一段时间的日期时间.
|
||||
|
||||
#### 1\. 查找5周后的日期
|
||||
|
||||
```shell
|
||||
date -d "5 weeks"
|
||||
Sun Jan 7 19:53:50 CST 2018
|
||||
|
||||
```
|
||||
|
||||
#### 2\. 查找5周后又过4天的日期
|
||||
|
||||
```shell
|
||||
date -d "5 weeks 4 days"
|
||||
Thu Jan 11 19:55:35 CST 2018
|
||||
|
||||
```
|
||||
|
||||
#### 3\. 获取下个月的日期
|
||||
|
||||
```shell
|
||||
date -d "next month"
|
||||
Wed Jan 3 19:57:43 CST 2018
|
||||
```
|
||||
|
||||
#### 4\. 获取上周日的日期
|
||||
|
||||
```shell
|
||||
date -d last-sunday
|
||||
Sun Nov 26 00:00:00 CST 2017
|
||||
```
|
||||
|
||||
date 命令还有很多格式化相关的选项, 下面的例子向你演示如何格式化 date 命令的输出.
|
||||
|
||||
#### 5\. 以 yyyy-mm-dd 的格式显示日期
|
||||
|
||||
```shell
|
||||
date +"%F"
|
||||
2017-12-03
|
||||
```
|
||||
|
||||
#### 6\. 以 mm/dd/yyyy 的格式显示日期
|
||||
|
||||
```shell
|
||||
date +"%m/%d/%Y"
|
||||
12/03/2017
|
||||
|
||||
```
|
||||
|
||||
#### 7\. 只显示时间
|
||||
|
||||
```shell
|
||||
date +"%T"
|
||||
20:07:04
|
||||
|
||||
```
|
||||
|
||||
#### 8\. 显示今天是一年中的第几天
|
||||
|
||||
```shell
|
||||
date +"%j"
|
||||
337
|
||||
|
||||
```
|
||||
|
||||
#### 9\. 与格式化相关的选项
|
||||
|
||||
| 格式 | 说明 |
| --- | --- |
| **%%** | 百分号(“**%**”)。 |
|
||||
| **%a** | 星期的缩写形式 (像这样, **Sun**). |
|
||||
| **%A** | 星期的完整形式 (像这样, **Sunday**). |
|
||||
| **%b** | 缩写的月份 (像这样, **Jan**). |
|
||||
| **%B** | 当前区域的月份全称 (像这样, **January**). |
|
||||
| **%c** | 日期以及时间 (像这样, **Thu Mar 3 23:05:25 2005**). |
|
||||
| **%C** | 本世纪; 类似 **%Y**, 但是会省略最后两位 (像这样, **20**). |
|
||||
| **%d** | 月中的第几日 (像这样, **01**). |
|
||||
| **%D** | 日期; 效果与 **%m/%d/%y** 一样. |
|
||||
| **%e** | 月中的第几日, 会填充空格; 与 **%_d** 一样. |
|
||||
| **%F** | 完整的日期; 跟 **%Y-%m-%d** 一样. |
|
||||
| **%g** | 年份的后两位 (参见 **%G**). |
|
||||
| **%G** | 年份 (参见 **%V**); 通常跟 **%V** 连用. |
|
||||
| **%h** | 同 **%b**. |
|
||||
| **%H** | 小时 (**00**..**23**). |
|
||||
| **%I** | 小时 (**01**..**12**). |
|
||||
| **%j** | 一年中的第几天 (**001**..**366**). |
|
||||
| **%k** | 小时,用空格填充(**0**..**23**);同 **%_H**。 |
|
||||
| **%l** | 小时,用空格填充(**1**..**12**);同 **%_I**。 |
|
||||
| **%m** | 月份 (**01**..**12**). |
|
||||
| **%M** | 分钟 (**00**..**59**). |
|
||||
| **%n** | 换行. |
|
||||
| **%N** | 纳秒 (**000000000**..**999999999**). |
|
||||
| **%p** | 当前区域时间是上午 **AM** 还是下午 **PM**;未知则为空。 |
|
||||
| **%P** | 类似 **%p**,但是用小写字母显示。 |
|
||||
| **%r** | 当前区域的 12 小时制时间表示(像这样,**11:11:04 PM**)。 |
|
||||
| **%R** | 24-小时制的小时和分钟; 同 **%H:%M**. |
|
||||
| **%s** | 从 1970-01-01 00:00:00 UTC 到现在经历的秒数. |
|
||||
| **%S** | 秒数 (**00**..**60**). |
|
||||
| **%t** | tab 制表符. |
|
||||
| **%T** | 时间; 同 **%H:%M:%S**. |
|
||||
| **%u** | 星期 (**1**..**7**); 1 表示 **星期一**. |
|
||||
| **%U** | 一年中的第几个星期, 以周日为一周的开始 (**00**..**53**). |
|
||||
| **%V** | 一年中的第几个星期,以周一为一周的开始 (**01**..**53**). |
|
||||
| **%w** | 用数字表示周几 (**0**..**6**); 0 表示 **周日**. |
|
||||
| **%W** | 一年中的第几个星期, 周一为一周的开始 (**00**..**53**). |
|
||||
| **%x** | 当前区域的日期表示(像这样,**12/31/99**)。 |
|
||||
| **%X** | 当前区域的时间表示(像这样,**23:13:48**)。 |
|
||||
| **%y** | 年份的后面两位 (**00**..**99**). |
|
||||
| **%Y** | 年. |
|
||||
| **%z** | +hhmm 指定数字时区 (像这样, **-0400**). |
|
||||
| **%:z** | +hh:mm 指定数字时区 (像这样, **-04:00**). |
|
||||
| **%::z** | +hh:mm:ss 指定数字时区 (像这样, **-04:00:00**). |
|
||||
| **%:::z** | 指定数字时区, 其中 “**:**” 的个数由你需要的精度来决定 (例如, **-04**, **+05:30**). |
|
||||
| **%Z** | 时区的字符缩写(例如, EDT). |
|
||||
|
||||
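这些格式化选项在脚本中很常用。下面是一个小示意(备份目录和文件名都是假设的),用上面的格式生成带时间戳的备份文件名:

```shell
# 用当前日期和时间拼出备份文件名,例如 backup_2017-12-03_20-07-04.tar.gz
backup_file="backup_$(date +"%F_%H-%M-%S").tar.gz"
# 备份 /etc(可能需要 root 权限才能读取全部文件)
tar -czf "/tmp/$backup_file" /etc
echo "已生成备份:/tmp/$backup_file"
```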
#### 10\. 设置系统时间
|
||||
|
||||
你也可以使用 date 来手工设置系统时间,方法是使用 `--set` 选项, 下面的例子会将系统时间设置成2017年8月30日下午4点22分
|
||||
|
||||
```shell
|
||||
date --set="20170830 16:22"
|
||||
|
||||
```
|
||||
|
||||
当然,如果你使用的是我们的 [VPS Hosting services][1],你随时可以(通过客服电话或者提交工单的方式)联系我们的 Linux 专家管理员,咨询任何与 date 命令有关的问题。他们 24×7 在线,会立即向您提供帮助。
|
||||
|
||||
PS. 如果你喜欢这篇帖子,请点击下面的按钮分享或者留言. 谢谢.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.rosehosting.com/blog/use-the-date-command-in-linux/
|
||||
|
||||
作者:[][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.rosehosting.com
|
||||
[1]:https://www.rosehosting.com/hosting-services.html
|
@ -0,0 +1,229 @@
|
||||
Linux下使用sudo进行赋权
|
||||
======
|
||||
我最近写了一个简短的 Bash 程序,用来将 MP3 文件从一台网络主机的 USB 盘中拷贝到另一台网络主机上去。拷贝出来的文件存放在一台志愿者组织所属服务器的特定目录下,在那里,这些文件可以被下载和播放。
|
||||
|
||||
我的程序还会做些其他事情,比如为了自动在网页上根据日期排序,在拷贝文件之前会先对这些文件重命名。 在验证拷贝完成后,还会删掉 USB 盘中的所有文件。 这个小程序还有一些其他选项,比如 `-h` 会显示帮助, `-t` 进入测试模式等等。
|
||||
|
||||
我的程序需要以 root 身份运行才能发挥作用。然而,这个组织中只有很少的人对管理音频和计算机系统有兴趣,这使得我不得不找一些半吊子的技术人员来,培训他们登录用于传输的计算机并运行这个小程序。
|
||||
|
||||
倒不是说我不能亲自运行这个程序,但由于外出和疾病等等各种原因,我不是时常在场的。即使我在场,作为一名“懒惰的系统管理员”,我也希望别人能替我把事情给做了。因此我写了一些脚本来自动完成这些任务,并通过 sudo 来指定某些人来运行这些脚本。很多 Linux 命令都需要用户以 root 身份来运行。sudo 能够保护系统免遭一时糊涂造成的意外损坏以及恶意用户的故意破坏。
|
||||
|
||||
### Do that sudo that you do so well
|
||||
|
||||
sudo 是一个很方便的工具,它让我一个 root 管理员可以分配所有或者部分管理性的任务给其他用户, 而且还无需告诉他们 root 密码, 从而保证主机的高安全性。
|
||||
|
||||
假设,我给了普通用户 "ruser" 访问我 Bash 程序 "myprog" 的权限, 而这个程序的部分功能需要 root 权限。 那么该用户可以以 ruser 的身份登陆,然后通过以下命令运行 myprog。
|
||||
|
||||
```shell
|
||||
sudo myprog
|
||||
```
|
||||
|
||||
我发现在训练时记录下每个用 sudo 执行的命令会很有帮助。我可以看到谁执行了哪些命令,他们是否输对了。
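sudo 的执行记录默认会写入系统日志。下面是一个查看这些记录的小示意(日志路径因发行版而异,这里假设是基于 Red Hat 的系统):

```shell
# 查看最近 20 条 sudo 相关的日志
sudo grep 'sudo' /var/log/secure | tail -n 20
# 在使用 systemd-journald 的系统上也可以这样查看
sudo journalctl _COMM=sudo --since today
```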
|
||||
|
||||
我委派了权限给自己和另一个人来运行那个程序; 然而,sudo 可以做更多的事情。 它允许系统管理员委派网络管理或特定的服务器权限给某个人或某组人,以此来保护 root 密码的安全性。
|
||||
|
||||
### 配置 sudoers 文件
|
||||
|
||||
作为一名系统管理员,我使用 `/etc/sudoers` 文件来设置某些用户或某些用户组可以访问某个命令,或某组命令,或所有命令。 这种灵活性是使用 sudo 进行委派时能兼顾功能与简易性的关键。
|
||||
|
||||
我一开始对 `sudoers` 文件感到很困惑,因此下面我会拷贝并分解我所使用主机上的完整 `sudoers` 文件。希望在分析的过程中不会让你感到困惑。我意外地发现,基于 Red Hat 的发行版中默认的配置文件都有很多注释以及例子来指导你如何做出修改,这使得修改配置文件变得简单了很多,也不需要在互联网上搜索那么多东西了。
|
||||
|
||||
不要直接用编辑器修改 sudoers 文件,而应该用 `visudo` 命令,因为该命令会在你保存并退出编辑器后就立即让这些变更生效。visudo 也可以使用除了 `Vi` 之外的其他编辑器。
|
||||
|
||||
让我们首先来分析一下文件中的各种别名。
|
||||
|
||||
#### Host aliases(主机别名)
|
||||
|
||||
host aliases 用于创建主机分组,在不同主机上可以设置允许访问不同的命令或命令别名 (command aliases)。它的基本思想是,该文件由组织中的所有主机共同维护,然后拷贝到每台主机中的 `/etc` 中。其中有些主机,例如各种服务器,可以配置成一个组来赋予用户访问特定命令的权限,比如启停 HTTPD、DNS 之类的服务和网络,挂载文件系统等等。
|
||||
|
||||
在设置主机别名时也可以用 IP 地址替代主机名。
|
||||
|
||||
```
|
||||
## Sudoers allows particular users to run various commands as
|
||||
## the root user,without needing the root password。
|
||||
##
|
||||
## Examples are provided at the bottom of the file for collections
|
||||
## of related commands,which can then be delegated out to particular
|
||||
## users or groups。
|
||||
##
|
||||
## This file must be edited with the 'visudo' command。
|
||||
|
||||
## Host Aliases
|
||||
## Groups of machines。You may prefer to use hostnames (perhaps using
|
||||
## wildcards for entire domains) or IP addresses instead。
|
||||
# Host_Alias FILESERVERS = fs1,fs2
|
||||
# Host_Alias MAILSERVERS = smtp,smtp2
|
||||
|
||||
## User Aliases
|
||||
## These aren't often necessary,as you can use regular groups
|
||||
## (ie,from files, LDAP, NIS, etc) in this file - just use %groupname
|
||||
## rather than USERALIAS
|
||||
# User_Alias ADMINS = jsmith,mikem
|
||||
User_Alias AUDIO = dboth, ruser
|
||||
|
||||
## Command Aliases
|
||||
## These are groups of related commands。.。
|
||||
|
||||
## Networking
|
||||
# Cmnd_Alias NETWORKING = /sbin/route,/sbin/ifconfig,
|
||||
/bin/ping,/sbin/dhclient, /usr/bin/net, /sbin/iptables,
|
||||
/usr/bin/rfcomm,/usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool
|
||||
|
||||
## Installation and management of software
|
||||
# Cmnd_Alias SOFTWARE = /bin/rpm,/usr/bin/up2date, /usr/bin/yum
|
||||
|
||||
## Services
|
||||
# Cmnd_Alias SERVICES = /sbin/service,/sbin/chkconfig
|
||||
|
||||
## Updating the locate database
|
||||
# Cmnd_Alias LOCATE = /usr/bin/updatedb
|
||||
|
||||
## Storage
|
||||
# Cmnd_Alias STORAGE = /sbin/fdisk,/sbin/sfdisk, /sbin/parted, /sbin/partprobe, /bin/mount, /bin/umount
|
||||
|
||||
## Delegating permissions
|
||||
# Cmnd_Alias DELEGATING = /usr/sbin/visudo,/bin/chown, /bin/chmod, /bin/chgrp
|
||||
|
||||
## Processes
|
||||
# Cmnd_Alias PROCESSES = /bin/nice,/bin/kill, /usr/bin/kill, /usr/bin/killall
|
||||
|
||||
## Drivers
|
||||
# Cmnd_Alias DRIVERS = /sbin/modprobe
|
||||
|
||||
# Defaults specification
|
||||
|
||||
#
|
||||
# Refuse to run if unable to disable echo on the tty。
|
||||
#
|
||||
Defaults !visiblepw
|
||||
|
||||
Defaults env_reset
|
||||
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS"
|
||||
Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
|
||||
Defaults env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
|
||||
Defaults env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
|
||||
Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
|
||||
|
||||
Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
|
||||
|
||||
## Next comes the main part: which users can run what software on
|
||||
## which machines (the sudoers file can be shared between multiple
|
||||
## systems)。
|
||||
## Syntax:
|
||||
##
|
||||
## user MACHINE=COMMANDS
|
||||
##
|
||||
## The COMMANDS section may have other options added to it。
|
||||
##
|
||||
## Allow root to run any commands anywhere
|
||||
root ALL=(ALL) ALL
|
||||
|
||||
## Allows members of the 'sys' group to run networking,software,
|
||||
## service management apps and more。
|
||||
# %sys ALL = NETWORKING,SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS
|
||||
|
||||
## Allows people in group wheel to run all commands
|
||||
%wheel ALL=(ALL) ALL
|
||||
|
||||
## Same thing without a password
|
||||
# %wheel ALL=(ALL) NOPASSWD: ALL
|
||||
|
||||
## Allows members of the users group to mount and unmount the
|
||||
## cdrom as root
|
||||
# %users ALL=/sbin/mount /mnt/cdrom,/sbin/umount /mnt/cdrom
|
||||
|
||||
## Allows members of the users group to shutdown this system
|
||||
# %users localhost=/sbin/shutdown -h now
|
||||
|
||||
## Read drop-in files from /etc/sudoers.d (the # here does not mean a comment)
|
||||
#includedir /etc/sudoers.d
|
||||
|
||||
################################################################################
|
||||
# Added by David Both,11/04/2017 to provide limited access to myprog #
|
||||
################################################################################
|
||||
#
|
||||
AUDIO guest1=/usr/local/bin/myprog
|
||||
```
|
||||
|
||||
#### User aliases(用户别名)
|
||||
|
||||
user alias 允许 root 将用户整理成组并按组来分配权限。在这部分内容中我加了一行 `User_Alias AUDIO = dboth, ruser`,它定义了一个别名 `AUDIO`,用来指代这两个用户。
|
||||
|
||||
正如 `sudoers` 文件中所阐明的,也可以直接使用 `/etc/group` 中定义的组而不用自己设置别名。如果你定义好的组(假设组名为 "audio")已经能满足要求了,那么在后面分配命令时只需要在组名前加上 `%` 号,像这样:%audio。
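例如,下面这一行(仅为示意,假设 `/etc/group` 中已经存在名为 audio 的组)与前面使用 AUDIO 别名的写法效果相同:

```
%audio guest1=/usr/local/bin/myprog
```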
|
||||
|
||||
#### Command aliases(命令别名)
|
||||
|
||||
再后面是 command aliases 部分。这些别名表示的是一系列相关的命令, 比如网络相关命令,或者 RPM 包管理命令。 这些别名允许系统管理员方便地为一组命令分配权限。
|
||||
|
||||
该部分内容已经设置好了许多别名,这使得分配权限给某类命令变得方便很多。
|
||||
|
||||
#### Environment defaults(环境默认值)
|
||||
|
||||
下部分内容设置默认的环境变量。这部分最值得关注的是 `!visiblepw` 这一行, 它表示当用户环境设置成显示密码时禁止 `sudo` 的运行。 这个安全措施不应该被修改掉。
|
||||
|
||||
#### Command section(命令部分)
|
||||
|
||||
command 部分是 `sudoers` 文件的主体。不使用别名并不会影响你实现想要的效果,它只是让整个配置工作大幅简化而已。
|
||||
|
||||
这部分使用之前定义的别名来告诉 `sudo` 哪些人可以在哪些机器上执行哪些操作。一旦你理解了这部分内容的语法,你会发现这些例子都非常的直观。 下面我们来看看它的语法。
|
||||
|
||||
```
|
||||
ruser ALL=(ALL) ALL
|
||||
```
|
||||
|
||||
这是一条为用户 ruser 做出的配置。行中第一个 `ALL` 表示该条规则在所有主机上生效。 第二个 `ALL` 允许 ruser 以其他用户的身份运行命令。 默认情况下, 命令以 root 用户的身份运行, 但 ruser 可以在 sudo 命令行指定程序以其他用户的身份运行。 最后这个 ALL 表示 ruser 可以运行所有命令而不受限制。 这让 ruser 实际上就变成了 root。
|
||||
|
||||
注意到下面还有一条针对 root 的配置。这允许 root 能通过 sudo 在任何主机上运行任何命令。
|
||||
|
||||
```
|
||||
root ALL=(ALL) ALL
|
||||
```
|
||||
|
||||
为了实验一下效果,我注释掉了这行,然后以 root 的身份试着直接运行 chown。出乎意料的是这样是能成功的。然后我试了下 sudo chown,结果失败了,提示信息 "Root is not in the sudoers file. This incident will be reported"。也就是说 root 可以直接运行任何命令,但当加上 sudo 时则不行。这会阻止 root 像其他用户一样使用 sudo 命令来运行其他命令,但是 root 有太多种方法可以绕过这个约束了。
|
||||
|
||||
下面这行是我新增来控制访问 myprog 的。它指定了只有上面定义的 AUDIO 组中的用户才能在 guest1 这台主机上使用 myprog 这个命令。
|
||||
|
||||
```
|
||||
AUDIO guest1=/usr/local/bin/myprog
|
||||
```
|
||||
|
||||
注意,上面这一行只指定了允许访问的主机名和程序, 而没有说用户可以以其他用户的身份来运行该程序。
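如果还想明确限定运行身份,可以在命令前用括号写明目标用户,比如下面这个示意(假设只允许以 root 身份运行):

```
AUDIO guest1=(root) /usr/local/bin/myprog
```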
|
||||
|
||||
#### 省略密码
|
||||
|
||||
你也可以通过 NOPASSWD 来让 AUDIO 组中的用户无需密码就能运行 myprog,像这样:
|
||||
|
||||
```
|
||||
AUDIO guest1=NOPASSWD: /usr/local/bin/myprog
|
||||
```
|
||||
|
||||
我并没有这样做,因为我觉得使用 sudo 的用户必须要停下来想清楚他们正在做的事情,这对他们有好处。我这里只是举个例子。
|
||||
|
||||
#### wheel
|
||||
|
||||
`sudoers` 文件中命令部分的 `wheel` 说明(如下所示)允许所有在 "wheel" 组中的用户在任何机器上运行任何命令。wheel 组在 `/etc/group` 文件中定义,用户必须先被加入该组,这条配置才会对其生效。组名前面的 % 符号表示 sudo 应该去 `/etc/group` 文件中查找该组。
|
||||
|
||||
```
|
||||
%wheel ALL = (ALL) ALL
|
||||
```
|
||||
|
||||
这种方法很好地实现了为多个用户赋予完全的 root 权限而不用提供 root 密码。只需要把用户加入 wheel 组中,就能给他们提供完整的 root 能力。它也提供了一种通过 sudo 产生的日志来监控他们行为的途径。有些 Linux 发行版,比如 Ubuntu,会自动将用户的 ID 加入 `/etc/group` 中的 wheel 组中,这使得他们能够用 sudo 命令运行所有的特权命令。
|
||||
|
||||
### 结语
|
||||
|
||||
我这里只是小试了一把 sudo — 我只是给一到两个用户以 root 权限运行单个命令的权限。完成这些只添加了两行配置(不考虑注释)。 将某项任务的权限委派给其他非 root 用户非常简单,而且可以节省你大量的时间。 同时它还会产生日志来帮你发现问题。
|
||||
|
||||
`sudoers` 文件还有许多其他的配置和能力。查看 sudo 和 sudoers 的 man 手册可以深入了解详细信息。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/12/using-sudo-delegate
|
||||
|
||||
作者:[David Both][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/dboth
|
@ -0,0 +1,199 @@
|
||||
10 个例子教你学会 ncat (nc) 命令
|
||||
======
|
||||
[![nc-ncat-command-examples-Linux-Systems](https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg)][1]
|
||||
|
||||
ncat 或者说 nc 是一款功能类似 cat 的网络工具。它是一款拥有多种功能的 CLI 工具,可以用来在网络上读,写以及重定向数据。 它被设计成可以被脚本或其他程序调用的可靠后端工具。 同时由于它能创建任意所需的连接,因此也是一个很好的网络调试工具。
|
||||
|
||||
ncat/nc 既是一个端口扫描工具,也是一款安全工具,还能是一款检测工具,甚至可以做一个简单的 TCP 代理。由于有这么多的功能,它被誉为是网络界的瑞士军刀。这是每个系统管理员都应该了解并掌握的工具。
|
||||
|
||||
在大多数 Debian 发行版中,`nc` 是默认可用的,它会在安装系统的过程中自动被安装。 但是在 CentOS 7 / RHEL 7 的最小化安装中,`nc` 并不会默认被安装。 你需要用下列命令手工安装。
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# yum install nmap-ncat -y
|
||||
```
|
||||
|
||||
系统管理员可以用它来审计系统安全,用它来找出开放的端口然后保护这些端口。 管理员还能用它作为客户端来审计 Web 服务器, 远程登陆服务器, 邮件服务器等, 通过 `nc` 我们可以控制发送的每个字符,也可以查看对方的回应。
|
||||
|
||||
我们还可以用它捕获客户端发送的数据,以此来了解这些客户端是做什么的。
|
||||
|
||||
在本文中,我们会通过 10 个例子来学习如何使用 `nc` 命令。
|
||||
|
||||
### 例子: 1) 监听入站连接
|
||||
|
||||
通过 `l` 选项,ncat 可以进入监听模式,使我们可以在指定端口监听入站连接。 完整的命令是这样的
|
||||
|
||||
$ ncat -l port_number
|
||||
|
||||
比如,
|
||||
|
||||
```
|
||||
$ ncat -l 8080
|
||||
```
|
||||
|
||||
服务器就会开始在 8080 端口监听入站连接。
|
||||
|
||||
### 例子: 2) 连接远程系统
|
||||
|
||||
使用下面命令可以用 nc 来连接远程系统,
|
||||
|
||||
$ ncat IP_address port_number
|
||||
|
||||
让我们来看个例子,
|
||||
|
||||
```
|
||||
$ ncat 192.168.1.100 80
|
||||
```
|
||||
|
||||
这会创建一个连接,连接到 IP 为 192.168.1.100 的服务器上的 80 端口,然后我们就可以向服务器发送指令了。 比如我们可以输入下面内容来获取完整的网页内容
|
||||
|
||||
GET / HTTP/1.1
|
||||
|
||||
或者获取页面名称,
|
||||
|
||||
GET / HTTP/1.1
|
||||
|
||||
或者我们可以通过以下方式获得操作系统指纹标识,
|
||||
|
||||
HEAD / HTTP/1.1
|
||||
|
||||
这会告诉我们使用的是什么软件来运行这个 web 服务器的。
|
||||
|
||||
### 例子: 3) 连接 UDP 端口
|
||||
|
||||
默认情况下,nc 创建连接时只会连接 TCP 端口。 不过我们可以使用 `u` 选项来连接到 UDP 端口,
|
||||
|
||||
```
|
||||
$ ncat -l -u 1234
|
||||
```
|
||||
|
||||
现在我们的系统会开始监听 udp 的'1234'端口,我们可以使用下面的 netstat 命令来验证这一点,
|
||||
|
||||
```
|
||||
$ netstat -tunlp | grep 1234
|
||||
udp 0 0 0.0.0.0:1234 0.0.0.0:* 17341/nc
|
||||
udp6 0 0 :::1234 :::* 17341/nc
|
||||
```
|
||||
|
||||
假设我们想发送或者说测试某个远程主机 UDP 端口的连通性,我们可以使用下面命令,
|
||||
|
||||
$ ncat -v -u {host-ip} {udp-port}
|
||||
|
||||
比如:
|
||||
|
||||
```
|
||||
[root@localhost ~]# ncat -v -u 192.168.105.150 53
|
||||
Ncat: Version 6.40 ( http://nmap.org/ncat )
|
||||
Ncat: Connected to 192.168.105.150:53.
|
||||
```
|
||||
|
||||
### 例子: 4) 将 NC 作为聊天工具
|
||||
|
||||
NC 也可以作为聊天工具来用,我们可以配置服务器监听某个端口,然后从远程主机上连接到服务器的这个端口,就可以开始发送消息了。 在服务器这端运行:
|
||||
|
||||
```
|
||||
$ ncat -l 8080
|
||||
```
|
||||
|
||||
在远程客户端主机上运行:
|
||||
|
||||
```
|
||||
$ ncat 192.168.1.100 8080
|
||||
```
|
||||
|
||||
之后开始发送消息,这些消息会在服务器终端上显示出来。
|
||||
|
||||
### 例子: 5) 将 NC 作为代理
|
||||
|
||||
NC 也可以用来做代理。比如下面这个例子,
|
||||
|
||||
```
|
||||
$ ncat -l 8080 | ncat 192.168.1.200 80
|
||||
```
|
||||
|
||||
所有发往我们服务器 8080 端口的连接都会自动转发到 192.168.1.200 上的 80 端口。不过由于我们使用了管道,数据只能被单向传输。要想同时接收返回的数据,我们需要创建一个双向管道。使用下面命令可以做到这点:
|
||||
|
||||
```
|
||||
$ mkfifo 2way
|
||||
$ ncat -l 8080 0<2way | ncat 192.168.1.200 80 1>2way
|
||||
```
|
||||
|
||||
现在你可以通过 nc 代理来收发数据了。
|
||||
|
||||
### 例子: 6) 使用 nc/ncat 拷贝文件
|
||||
|
||||
NC 还能用来在系统间拷贝文件,虽然这么做并不推荐,因为绝大多数系统默认都安装了 ssh/scp。 不过如果你恰好遇见个没有 ssh/scp 的系统的话, 你可以用 nc 来作最后的努力。
|
||||
|
||||
在要接受数据的机器上启动 nc 并让它进入监听模式:
|
||||
|
||||
```
|
||||
$ ncat -l 8080 > file.txt
|
||||
```
|
||||
|
||||
现在去要被拷贝数据的机器上运行下面命令:
|
||||
|
||||
```
|
||||
$ ncat 192.168.1.100 8080 --send-only < data.txt
|
||||
```
|
||||
|
||||
这里,data.txt 是要发送的文件。`--send-only` 选项会在文件拷贝完后立即关闭连接。如果不加该选项,我们需要手工按下 ctrl+c 来关闭连接。
|
||||
|
||||
我们也可以用这种方法拷贝整个磁盘分区,不过请一定要小心。
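下面是这种用法的一个示意(其中的设备名、IP 和端口都是假设的,操作前务必再三确认方向和目标设备,以免覆盖错误的数据):

```shell
# 接收端:监听 8080 端口,把收到的数据写入目标分区
ncat -l 8080 | dd of=/dev/sdb1 bs=4M
# 发送端:读出源分区并通过网络发送出去
dd if=/dev/sda1 bs=4M | ncat 192.168.1.100 8080 --send-only
```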
|
||||
|
||||
### 例子: 7) 通过 nc/ncat 创建后门
|
||||
|
||||
NC 命令还可以用来在系统中创建后门,并且这种技术也确实被黑客大量使用。 为了保护我们的系统,我们需要知道它是怎么做的。 创建后门的命令为:
|
||||
|
||||
```
|
||||
$ ncat -l 10000 -e /bin/bash
|
||||
```
|
||||
|
||||
`-e` 选项将一个 bash 与端口 10000 相连。现在客户端只要连接到服务器上的 10000 端口,就能通过 bash 获取我们系统的完整访问权限:
|
||||
|
||||
```
|
||||
$ ncat 192.168.1.100 10000
|
||||
```
|
||||
|
||||
### 例子: 8) 通过 nc/ncat 进行端口转发
|
||||
|
||||
我们通过选项 `c` 来用 NC 进行端口转发,实现端口转发的语法为:
|
||||
|
||||
```
|
||||
$ ncat -u -l 80 -c 'ncat -u -l 8080'
|
||||
```
|
||||
|
||||
这样,所有连接到 80 端口的连接都会转发到 8080 端口。
|
||||
|
||||
### 例子: 9) 设置连接超时
|
||||
|
||||
ncat 的监听模式会一直运行,直到手工终止。 不过我们可以通过选项 `w` 设置超时时间:
|
||||
|
||||
```
|
||||
$ ncat -w 10 192.168.1.100 8080
|
||||
```
|
||||
|
||||
这会导致连接在 10 秒后终止,不过这个选项只能用于客户端而不是服务端。
|
||||
|
||||
### 例子: 10) 使用 -k 选项强制 ncat 待命
|
||||
|
||||
当客户端从服务端断开连接后,过一段时间服务端也会停止监听。 但通过选项 `k` 我们可以强制服务器保持连接并继续监听端口。 命令如下:
|
||||
|
||||
```
|
||||
$ ncat -l -k 8080
|
||||
```
|
||||
|
||||
现在即使来自客户端的连接断了也依然会处于待命状态。
|
||||
|
||||
自此我们的教程就完了,如有疑问,请在下方留言。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/nc-ncat-command-examples-linux-systems/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linuxtechi.com/author/pradeep/
|
||||
[1]:https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg
|
@ -0,0 +1,72 @@
|
||||
Sessions 与 Cookies – 用户登录的原理是什么?
|
||||
======
|
||||
Facebook, Gmail, Twitter 是我们每天都会用的网站. 它们的共同点在于都需要你登录进去后才能做进一步的操作. 只有你通过认证并登录后才能在 twitter 发推, 在 Facebook 上评论,以及在 Gmail上处理电子邮件.
|
||||
|
||||
[![gmail, facebook login page](http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg)][1]
|
||||
|
||||
那么登录的原理是什么? 网站是如何认证的? 它怎么知道是哪个用户从哪儿登录进来的? 下面我们来对这些问题进行一一解答.
|
||||
|
||||
### 用户登录的原理是什么?
|
||||
|
||||
每次你在网站的登录页面中输入用户名和密码时,这些信息都会发送到服务器。服务器随后会将你输入的密码与保存的密码进行比对。如果两者不匹配,你会得到一个密码错误的提示;如果两者匹配,则成功登录。
|
||||
|
||||
### 登陆时发生了什么?
|
||||
|
||||
登录后, web 服务器会初始化一个 session 并在你的浏览器中设置一个 cookie 变量. 该 cookie 变量用于作为新建 session 的一个引用. 搞晕了? 让我们说的再简单一点.
|
||||
|
||||
### 会话的原理是什么?
|
||||
|
||||
服务器在用户名和密码都正确的情况下会初始化一个 session. Sessions 的定义很复杂,你可以把它理解为 `关系的开始`.
|
||||
|
||||
[![session beginning of a relationship or partnership](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png)][2]
|
||||
|
||||
认证通过后, 服务器就开始跟你展开一段关系了. 由于服务器不能象我们人类一样看东西, 它会在我们的浏览器中设置一个 cookie 来将我们的关系从其他人与服务器的关系标识出来.
|
||||
|
||||
### 什么是 Cookie?
|
||||
|
||||
cookie 是网站在你的浏览器中存储的一小段数据. 你应该已经见过他们了.
|
||||
|
||||
[![theitstuff official facebook page cookies](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png)][3]
|
||||
|
||||
当你登录后,服务器为你创建一段关系或者说一个 session, 然后将唯一标识这个 session 的 session id 以 cookie 的形式存储在你的浏览器中.
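可以用 curl 直观地看到这个过程。下面是一个示意(其中的 URL、字段名和取值都是假设的,实际名称因网站而异):

```shell
# 提交登录表单,并查看响应头中服务器下发的会话 cookie
curl -i -X POST https://example.com/login \
     -d "username=alice&password=secret" | grep -i "set-cookie"
# 可能看到类似这样的响应头:
# Set-Cookie: session_id=9f8b2c7e4a; Path=/; HttpOnly
```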
|
||||
|
||||
### 什么意思?
|
||||
|
||||
所有这些东西存在的原因在于识别出你来,这样当你写评论或者发推时, 服务器能知道是谁在发评论,是谁在发推.
|
||||
|
||||
当你登录后, 会产生一个包含 session id 的 cookie. 这样, 这个 session id 就被赋予了那个输入正确用户名和密码的人了.
|
||||
|
||||
[![facebook cookies in web browser](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png)][4]
|
||||
|
||||
也就是说, session id 被赋予给了拥有这个账户的人了. 之后,所有在网站上产生的行为, 服务器都能通过他们的 session id 来判断是由谁发起的.
|
||||
|
||||
### 如何让我保持登录状态?
|
||||
|
||||
session 有一定的时间限制. 这一点与现实生活中不一样,现实生活中的关系可以在不见面的情况下持续很长一段时间, 而 session 具有时间限制. 你必须要不断地通过一些动作来告诉服务器你还在线. 否则的话,服务器会关掉这个 session,而你会被登出.
|
||||
|
||||
[![websites keep me logged in option](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png)][5]
|
||||
|
||||
不过在某些网站上可以启用 `保持登录(Keep me logged in)`, 这样服务器会将另一个唯一变量以 cookie 的形式保存到我们的浏览器中. 这个唯一变量会通过与服务器上的变量进行对比来实现自动登录. 若有人盗取了这个唯一标识(我们称之为 cookie stealing), 他们就能访问你的账户了.
|
||||
|
||||
### 结论
|
||||
|
||||
我们讨论了登录系统的工作原理以及网站是如何进行认证的. 我们还学到了什么是 sessions 和 cookies,以及它们在登录机制中的作用.
|
||||
|
||||
我们希望你们以及理解了用户登录的工作原理, 如有疑问, 欢迎提问.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.theitstuff.com/sessions-cookies-user-login-work
|
||||
|
||||
作者:[Rishabh Kandari][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.theitstuff.com/author/reevkandari
|
||||
[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg
|
||||
[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png
|
||||
[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png
|
||||
[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png
|
||||
[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png
|
@ -0,0 +1,78 @@
|
||||
什么是僵尸进程以及如何找到并杀掉僵尸进程?
|
||||
======
|
||||
[![What Are Zombie Processes And How To Find & Kill Zombie Processes?](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-are-the-zombie-processes_orig.jpg)][1]
|
||||
|
||||
如果你经常使用 Linux,你应该遇到这个术语 `僵尸进程`。 那么什么是僵尸进程? 它们是怎么产生的? 他们是否对系统有害? 我要怎样杀掉这些进程? 下面将会回答这些问题。
|
||||
|
||||
### 什么是僵尸进程?
|
||||
|
||||
我们都知道进程的工作原理。我们启动一个程序,开始我们的任务,然后等任务结束了,我们就停止这个进程。 进程停止后, 该进程就会从进程表中移除。
|
||||
|
||||
你可以通过 `System-Monitor` 查看当前进程。
|
||||
|
||||
[![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-check-zombie-processes_orig.jpg)][2]
|
||||
|
||||
但是,有时候有些程序即使执行完了也依然留在进程表中。
|
||||
|
||||
那么,这些完成了生命周期但却依然留在进程表中的进程,我们称之为 `僵尸进程`。
|
||||
|
||||
### 他们是如何产生的?
|
||||
|
||||
当你运行一个程序时,它会产生一个父进程以及很多子进程。 所有这些子进程都会消耗内核分配给他们的内存和 CPU 资源。
|
||||
|
||||
这些子进程完成执行后会发送一个 Exit 信号然后死掉。这个 Exit 信号需要被父进程所读取。父进程需要随后调用 `wait` 命令来读取子进程的退出状态并将子进程从进程表中移除。
|
||||
|
||||
若父进程正确地读取了子进程的 Exit 信号,则子进程会从进程表中删掉。
|
||||
|
||||
但若父进程未能读取到子进程的 Exit 信号,则这个子进程虽然完成执行处于死亡的状态,但也不会从进程表中删掉。
|
||||
|
||||
### 僵尸进程对系统有害吗?
|
||||
|
||||
**不会**。由于僵尸进程并不做任何事情, 不会使用任何资源也不会影响其他进程, 因此存在僵尸进程也没什么坏处。 不过由于进程表中的退出状态以及其他一些进程信息也是存储在内存中的,因此存在太多僵尸进程有时也会是一些问题。
|
||||
|
||||
**你可以想象成这样:**
|
||||
|
||||
“你是一家建筑公司的老板。你每天根据工人们的工作量来支付工资。 有一个工人每天来到施工现场,就坐在那里, 你不用付钱, 它也不做任何工作。 他只是每天都来然后呆坐在那,仅此而已!”
|
||||
|
||||
这个工人就是僵尸进程的一个活生生的例子。**但是**, 如果你有很多僵尸工人, 你的建设工地就会很拥堵从而让那些正常的工人难以工作。
|
||||
|
||||
### 那么如何找出僵尸进程呢?
|
||||
|
||||
打开终端并输入下面命令:
|
||||
|
||||
```
|
||||
ps aux | grep Z
|
||||
```
|
||||
|
||||
会列出进程表中所有僵尸进程的详细内容。
|
||||
|
||||
### 如何杀掉僵尸进程?
|
||||
|
||||
正常情况下我们可以用 `SIGKILL` 信号来杀死进程,但是僵尸进程已经死了, 你不能杀死已经死掉的东西。 因此你需要输入的命令应该是
|
||||
|
||||
```
|
||||
kill -s SIGCHLD pid
|
||||
```
|
||||
|
||||
将这里的 pid 替换成父进程的 id,这样父进程就会删除所有已经完成并死掉的子进程了。
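下面是一个完整的小流程示意(其中的 PPID 1234 只是假设,请替换成你实际查到的父进程 PID):

```shell
# 列出僵尸进程及其父进程 PID(STAT 列为 Z 的就是僵尸进程)
ps -eo pid,ppid,stat,cmd | awk '$3 ~ /Z/ {print}'
# 假设上面显示僵尸进程的 PPID 为 1234,向父进程发送 SIGCHLD 让它回收子进程
kill -s SIGCHLD 1234
# 如果父进程始终不回收,最后的手段是结束父进程本身(请谨慎操作)
# kill -9 1234
```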
|
||||
|
||||
**你可以把它想象成:**
|
||||
|
||||
"你在道路中间发现一具尸体,于是你联系了死者的家属,随后他们就会将尸体带离道路了。"
|
||||
|
||||
不过许多程序写的不是那么好,无法删掉这些子僵尸(否则你一开始也见不到这些僵尸了)。 因此确保删除子僵尸的唯一方法就是杀掉它们的父进程。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxandubuntu.com/home/what-are-zombie-processes-and-how-to-find-kill-zombie-processes
|
||||
|
||||
作者:[linuxandubuntu][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxandubuntu.com
|
||||
[1]:http://www.linuxandubuntu.com/home/what-are-zombie-processes-and-how-to-find-kill-zombie-processes
|
||||
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-check-zombie-processes_orig.jpg
|