mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-01-04 22:00:34 +08:00
commit
211b61a557
@ -15,7 +15,7 @@
|
||||
|
||||
![Flexible Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxFlexibleAddressSpaceLayout.png)
|
||||
|
||||
当计算机还是快乐、安全的时代时,在机器中的几乎每个进程上,那些段的起始虚拟地址都是**完全相同**的。这将使远程挖掘安全漏洞变得容易。漏洞利用经常需要去引用绝对内存位置:比如在栈中的一个地址,一个库函数的地址,等等。远程攻击闭着眼睛也会选择这个地址,因为地址空间都是相同的。当攻击者们这样做的时候,人们就会受到伤害。因此,地址空间随机化开始流行起来。Linux 会通过在其起始地址上增加偏移量来随机化[栈][3]、[内存映射段][4]、以及[堆][5]。不幸的是,32 位的地址空间是非常拥挤的,为地址空间随机化留下的空间不多,因此 [妨碍了地址空间随机化的效果][6]。
|
||||
当计算机还是快乐、安全的时代时,在机器中的几乎每个进程上,那些段的起始虚拟地址都是**完全相同**的。这将使远程挖掘安全漏洞变得容易。漏洞利用经常需要去引用绝对内存位置:比如在栈中的一个地址,一个库函数的地址,等等。远程攻击可以闭着眼睛选择这个地址,因为地址空间都是相同的。当攻击者们这样做的时候,人们就会受到伤害。因此,地址空间随机化开始流行起来。Linux 会通过在其起始地址上增加偏移量来随机化[栈][3]、[内存映射段][4]、以及[堆][5]。不幸的是,32 位的地址空间是非常拥挤的,为地址空间随机化留下的空间不多,因此 [妨碍了地址空间随机化的效果][6]。
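LCTT 译注:在 Linux 上可以直接观察到地址空间随机化的效果。下面是一个小演示,输出依具体系统而定,这里假设系统开启了 ASLR:

```shell
# 查看 ASLR 开关:0 为关闭,2 为完全随机化(多数发行版的默认值)
cat /proc/sys/kernel/randomize_va_space

# 连续两次查看进程的栈段起始地址
# 每次 grep 都是一个新进程,开启 ASLR 时两次的地址并不相同
grep '\[stack\]' /proc/self/maps
grep '\[stack\]' /proc/self/maps
```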
|
||||
|
||||
在进程地址空间中最高的段是栈,在大多数编程语言中它存储本地变量和函数参数。调用一个方法或者函数将推送一个新的<ruby>栈帧<rt>stack frame</rt></ruby>到这个栈。当函数返回时这个栈帧被删除。这个简单的设计,可能是因为数据严格遵循 [后进先出(LIFO)][7] 的次序,这意味着跟踪栈内容时不需要复杂的数据结构 —— 一个指向栈顶的简单指针就可以做到。推入和弹出也因此而非常快且准确。也可能是,持续的栈区重用往往会在 [CPU 缓存][8] 中保持活跃的栈内存,这样可以加快访问速度。进程中的每个线程都有它自己的栈。
|
||||
|
||||
@ -25,7 +25,7 @@
|
||||
|
||||
在栈的下面,有内存映射段。在这里,内核将文件内容直接映射到内存。任何应用程序都可以通过 Linux 的 [`mmap()`][12] 系统调用( [代码实现][13])或者 Windows 的 [`CreateFileMapping()`][14] / [`MapViewOfFile()`][15] 来请求一个映射。内存映射是实现文件 I/O 的方便高效的方式。因此,它经常被用于加载动态库。有时候,也被用于去创建一个并不匹配任何文件的匿名内存映射,这种映射经常被用做程序数据的替代。在 Linux 中,如果你通过 [`malloc()`][16] 去请求一个大的内存块,C 库将会创建这样一个匿名映射而不是使用堆内存。这里所谓的“大”表示是超过了`MMAP_THRESHOLD` 设置的字节数,它的缺省值是 128 kB,可以通过 [`mallopt()`][17] 去调整这个设置值。
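LCTT 译注:内存映射段的内容可以在 `/proc/<pid>/maps` 中直接看到。下面的小例子列出当前进程通过 `mmap()` 映射进来的动态库,具体输出依系统而定:

```shell
# /proc/self/maps 记录了当前进程的全部内存映射
# 其中以 .so 结尾的条目就是被 mmap() 映射进来的动态库
grep '\.so' /proc/self/maps | head -n 5
```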
|
||||
|
||||
接下来讲的是“堆”,就在我们接下来的地址空间中,堆提供运行时内存分配,像栈一样,但又不同于栈的是,它分配的数据生存期要长于分配它的函数。大多数编程语言都为程序提供了堆管理支持。因此,满足内存需要是编程语言运行时和内核共同来做的事情。在 C 中,堆分配的接口是 [`malloc()`][18] 一族,然而在垃圾回收式编程语言中,像 C#,这个接口使用 `new` 关键字。
|
||||
接下来讲的是“堆”,就在我们接下来的地址空间中,堆提供运行时内存分配,像栈一样,但又不同于栈的是,它分配的数据生存期要长于分配它的函数。大多数编程语言都为程序提供了堆管理支持。因此,满足内存需要是编程语言运行时和内核共同来做的事情。在 C 中,堆分配的接口是 [`malloc()`][18] 一族,然而在支持垃圾回收的编程语言中,像 C#,这个接口使用 `new` 关键字。
|
||||
|
||||
如果在堆中有足够的空间可以满足内存请求,它可以由编程语言运行时来处理内存分配请求,而无需内核参与。否则将通过 [`brk()`][19] 系统调用([代码实现][20])来扩大堆以满足内存请求所需的大小。堆管理是比较 [复杂的][21],在面对我们程序的混乱分配模式时,它通过复杂的算法,努力在速度和内存使用效率之间取得一种平衡。服务一个堆请求所需要的时间可能是非常可观的。实时系统有一个 [特定用途的分配器][22] 去处理这个问题。堆也会出现 _碎片化_ ,如下图所示:
|
||||
|
||||
@ -51,7 +51,7 @@
|
||||
|
||||
via: http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/
|
||||
|
||||
作者:[gustavo][a]
|
||||
作者:[Gustavo Duarte][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
239
published/20160625 Trying out LXD containers on our Ubuntu.md
Normal file
@ -0,0 +1,239 @@
|
||||
在 Ubuntu 上体验 LXD 容器
|
||||
======
|
||||
|
||||
本文的主角是容器,一种类似虚拟机但更轻量级的构造。你可以轻易地在你的 Ubuntu 桌面系统中创建一堆容器!
|
||||
|
||||
虚拟机会虚拟出整个电脑让你来安装客户机操作系统。**相比之下**,容器**复用**了主机的 Linux 内核,只是简单地 **包容** 了我们选择的根文件系统(也就是运行时环境)。Linux 内核有很多功能可以将运行的 Linux 容器与我们的主机分割开(也就是我们的 Ubuntu 桌面)。
|
||||
|
||||
直接管理 Linux 容器需要一些手工操作。好在,有 LXD(读音为 Lex-deeh),这是一款为我们管理 Linux 容器的服务。
|
||||
|
||||
我们将会看到如何:
|
||||
|
||||
1. 在我们的 Ubuntu 桌面上配置容器,
|
||||
2. 创建容器,
|
||||
3. 安装一台 web 服务器,
|
||||
4. 测试一下这台 web 服务器,以及
|
||||
5. 清理所有的东西。
|
||||
|
||||
### 设置 Ubuntu 容器
|
||||
|
||||
如果你安装的是 Ubuntu 16.04,那么你什么都不用做。只要安装下面所列出的一些额外的包就行了。若你安装的是 Ubuntu 14.04.x 或 Ubuntu 15.10,那么按照 [LXD 2.0 系列(二):安装与配置][1] 来进行一些操作,然后再回来。
|
||||
|
||||
确保已经更新了包列表:
|
||||
|
||||
```
|
||||
sudo apt update
|
||||
sudo apt upgrade
|
||||
```
|
||||
|
||||
安装 `lxd` 包:
|
||||
|
||||
```
|
||||
sudo apt install lxd
|
||||
```
|
||||
|
||||
若你安装的是 Ubuntu 16.04,那么还可以让你的容器文件以 ZFS 文件系统的格式进行存储。Ubuntu 16.04 的 Linux 内核包含了支持 ZFS 必要的内核模块。若要让 LXD 使用 ZFS 进行存储,我们只需要安装 ZFS 工具包。没有 ZFS,容器会在主机文件系统中以单独的文件形式进行存储。有了 ZFS,我们就有了写时复制等功能,可以让某些任务完成得更快一些。
|
||||
|
||||
安装 `zfsutils-linux` 包(若你安装的是 Ubuntu 16.04.x):
|
||||
|
||||
```
|
||||
sudo apt install zfsutils-linux
|
||||
```
|
||||
|
||||
安装好 LXD 后,包安装脚本应该会将你加入 `lxd` 组。该组成员可以使你无需通过 `sudo` 就能直接使用 LXD 管理容器。根据 Linux 的习惯,**你需要先登出桌面会话然后再登录** 才能应用 `lxd` 的组成员关系。(若你是高手,也可以通过在当前 shell 中执行 `newgrp lxd` 命令,就不用重登录了)。
|
||||
|
||||
在开始使用前,LXD 需要初始化存储和网络参数。
|
||||
|
||||
运行下面命令:
|
||||
|
||||
```
|
||||
$ sudo lxd init
|
||||
Name of the storage backend to use (dir or zfs): zfs
|
||||
Create a new ZFS pool (yes/no)? yes
|
||||
Name of the new ZFS pool: lxd-pool
|
||||
Would you like to use an existing block device (yes/no)? no
|
||||
Size in GB of the new loop device (1GB minimum): 30
|
||||
Would you like LXD to be available over the network (yes/no)? no
|
||||
Do you want to configure the LXD bridge (yes/no)? yes
|
||||
> You will be asked about the network bridge configuration. Accept all defaults and continue.
|
||||
Warning: Stopping lxd.service, but it can still be activated by:
|
||||
lxd.socket
|
||||
LXD has been successfully configured.
|
||||
$ _
|
||||
```
|
||||
|
||||
我们在一个(单独的)文件而不是块设备(即分区)中构建了一个文件系统来作为 ZFS 池,因此我们无需进行额外的分区操作。在本例中我指定了 30GB 大小,这个空间取自根(`/`)文件系统。这个文件就是 `/var/lib/lxd/zfs.img`。
|
||||
|
||||
行了!最初的配置完成了。若有问题,或者想了解其他信息,请阅读 https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/ 。
|
||||
|
||||
### 创建第一个容器
|
||||
|
||||
所有 LXD 的管理操作都可以通过 `lxc` 命令来进行。我们通过给 `lxc` 不同参数来管理容器。
|
||||
|
||||
```
|
||||
lxc list
|
||||
```
|
||||
|
||||
可以列出所有已经安装的容器。很明显,这个列表现在是空的,但这表示我们的安装是没问题的。
|
||||
|
||||
```
|
||||
lxc image list
|
||||
```
|
||||
|
||||
列出可以用来启动容器的(已经缓存的)镜像列表。很明显这个列表也是空的,但这也说明我们的安装是没问题的。
|
||||
|
||||
```
|
||||
lxc image list ubuntu:
|
||||
```
|
||||
|
||||
列出可供下载并用来启动容器的远程镜像,这里指定了只显示 Ubuntu 镜像。
|
||||
|
||||
```
|
||||
lxc image list images:
|
||||
```
|
||||
|
||||
列出可以用来启动容器的(已经缓存的)各种发行版的镜像列表。这会列出各种发行版的镜像比如 Alpine、Debian、Gentoo、Opensuse 以及 Fedora。
|
||||
|
||||
让我们启动一个 Ubuntu 16.04 容器,并称之为 `c1`:
|
||||
|
||||
```
|
||||
$ lxc launch ubuntu:x c1
|
||||
Creating c1
|
||||
Starting c1
|
||||
$
|
||||
```
|
||||
|
||||
我们使用 `launch` 动作,然后选择镜像 `ubuntu:x` (`x` 表示 Xenial/16.04 镜像),最后我们使用名字 `c1` 作为容器的名称。
|
||||
|
||||
让我们来看看安装好的首个容器,
|
||||
|
||||
```
|
||||
$ lxc list
|
||||
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
| c1 | RUNNING | 10.173.82.158 (eth0) | | PERSISTENT | 0 |
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
```
|
||||
|
||||
我们的首个容器 c1 已经运行起来了,它还有自己的 IP 地址(可以本地访问)。我们可以开始用它了!
|
||||
|
||||
### 安装 web 服务器
|
||||
|
||||
我们可以在容器中运行命令。运行命令的动作为 `exec`。
|
||||
|
||||
```
|
||||
$ lxc exec c1 -- uptime
|
||||
11:47:25 up 2 min,  0 users,  load average: 0.07, 0.05, 0.04
|
||||
$ _
|
||||
```
|
||||
|
||||
在 `exec` 后面,我们先指定容器,然后输入要在容器中运行的命令。该容器的运行时间只有 2 分钟,这是个新出炉的容器 :-)。
|
||||
|
||||
命令行中的 `--` 跟我们 shell 的参数处理过程有关。若我们的命令没有任何参数,则完全可以省略 `--`。
|
||||
|
||||
```
|
||||
$ lxc exec c1 -- df -h
|
||||
```
|
||||
|
||||
这是一个必须要 `--` 的例子,因为我们的命令使用了参数 `-h`。若省略了 `--`,会报错。
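LCTT 译注:`--` 并非 `lxc` 特有,很多命令行程序都用它表示“选项到此为止,后面全部是参数”。下面用 `grep` 做一个通用的小演示:

```shell
# 想搜索字符串 “-h” 本身时,若不加 --,grep 会把 -h 当成自己的选项
echo '-h means help' | grep -- -h
# 输出:-h means help
```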
|
||||
|
||||
然后我们运行容器中的 shell 来更新包列表。
|
||||
|
||||
```
|
||||
$ lxc exec c1 bash
|
||||
root@c1:~# apt update
|
||||
Ign http://archive.ubuntu.com trusty InRelease
|
||||
Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
|
||||
Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
|
||||
...
|
||||
Hit http://archive.ubuntu.com trusty/universe Translation-en
|
||||
Fetched 11.2 MB in 9s (1228 kB/s)
|
||||
Reading package lists... Done
|
||||
root@c1:~# apt upgrade
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
...
|
||||
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
|
||||
Setting up dpkg (1.17.5ubuntu5.7) ...
|
||||
root@c1:~# _
|
||||
```
|
||||
|
||||
我们使用 nginx 来做 web 服务器。nginx 在某些方面要比 Apache web 服务器更酷一些。
|
||||
|
||||
```
|
||||
root@c1:~# apt install nginx
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
...
|
||||
Setting up nginx-core (1.4.6-1ubuntu3.5) ...
|
||||
Setting up nginx (1.4.6-1ubuntu3.5) ...
|
||||
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...
|
||||
root@c1:~# _
|
||||
```
|
||||
|
||||
让我们用浏览器访问一下这个 web 服务器。记住 IP 地址为 10.173.82.158,因此你需要在浏览器中输入这个 IP。
|
||||
|
||||
[![lxd-nginx][2]][3]
|
||||
|
||||
让我们对页面文字做一些小改动。回到容器中,进入默认 HTML 页面的目录中。
|
||||
|
||||
```
|
||||
root@c1:~# cd /var/www/html/
|
||||
root@c1:/var/www/html# ls -l
|
||||
total 2
|
||||
-rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html
|
||||
root@c1:/var/www/html#
|
||||
```
|
||||
|
||||
使用 nano 编辑文件,然后保存:
|
||||
|
||||
[![lxd-nginx-nano][4]][5]
|
||||
|
||||
之后,再刷一下页面看看,
|
||||
|
||||
[![lxd-nginx-modified][6]][7]
|
||||
|
||||
### 清理
|
||||
|
||||
让我们清理一下这个容器,也就是删掉它。当需要的时候我们可以很方便地创建一个新容器出来。
|
||||
|
||||
```
|
||||
$ lxc list
|
||||
+---------+---------+----------------------+------+------------+-----------+
|
||||
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
|
||||
+---------+---------+----------------------+------+------------+-----------+
|
||||
| c1 | RUNNING | 10.173.82.169 (eth0) | | PERSISTENT | 0 |
|
||||
+---------+---------+----------------------+------+------------+-----------+
|
||||
$ lxc stop c1
|
||||
$ lxc delete c1
|
||||
$ lxc list
|
||||
+---------+---------+----------------------+------+------------+-----------+
|
||||
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
|
||||
+---------+---------+----------------------+------+------------+-----------+
|
||||
+---------+---------+----------------------+------+------------+-----------+
|
||||
```
|
||||
|
||||
我们停止(关闭)这个容器,然后删掉它了。
|
||||
|
||||
本文至此就结束了。关于容器有很多玩法。而这只是配置 Ubuntu 并尝试使用容器的第一步而已。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.simos.info/trying-out-lxd-containers-on-our-ubuntu/
|
||||
|
||||
作者:[Simos Xenitellis][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://blog.simos.info/author/simos/
|
||||
[1]:https://linux.cn/article-7687-1.html
|
||||
[2]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?resize=564%2C269&ssl=1
|
||||
[3]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?ssl=1
|
||||
[4]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-nano.png?resize=750%2C424&ssl=1
|
||||
[5]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-nano.png?ssl=1
|
||||
[6]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-modified.png?resize=595%2C317&ssl=1
|
||||
[7]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-modified.png?ssl=1
|
@ -1,52 +1,47 @@
|
||||
当你在 Linux 上启动一个进程时会发生什么?
|
||||
===========================================================
|
||||
|
||||
|
||||
本文讲的是 fork 和 exec 在 Unix 上是如何工作的。你或许已经知道,也有人还不知道。几年前当我了解到这些时,我惊叹不已。
|
||||
|
||||
我们要做的是启动一个进程。我们已经在博客上讨论了很多关于**系统调用**的问题,每当你启动一个进程或者打开一个文件,这都是一个系统调用。所以你可能会认为有这样的系统调用:
|
||||
|
||||
```
|
||||
start_process(["ls", "-l", "my_cool_directory"])
|
||||
|
||||
```
|
||||
|
||||
这是一个合理的想法,显然这是它在 DOS 或 Windows 中的工作原理。我想说的是,这并不是 Linux 上的工作原理。但是,我查阅了文档,确实有一个 [posix_spawn][2] 的系统调用基本上是这样做的,不过这不在本文的讨论范围内。
|
||||
|
||||
### fork 和 exec
|
||||
|
||||
Linux 上的 `posix_spawn` 是通过两个系统调用实现的,分别是 `fork` 和 `exec`(实际上是 execve),这些都是人们常常使用的。尽管在 OS X 上,人们使用 `posix_spawn`,而 fork 和 exec 是不提倡的,但我们将讨论的是 Linux。
|
||||
Linux 上的 `posix_spawn` 是通过两个系统调用实现的,分别是 `fork` 和 `exec`(实际上是 `execve`),这些都是人们常常使用的。尽管在 OS X 上,人们使用 `posix_spawn`,而 `fork` 和 `exec` 是不提倡的,但我们将讨论的是 Linux。
|
||||
|
||||
Linux 中的每个进程都存在于“进程树”中。你可以通过运行 `pstree` 命令查看进程树。树的根是 `init`,进程号是 1。每个进程(init 除外)都有一个父进程,一个进程都可以有很多子进程。
|
||||
Linux 中的每个进程都存在于“进程树”中。你可以通过运行 `pstree` 命令查看进程树。树的根是 `init`,进程号是 1。每个进程(`init` 除外)都有一个父进程,一个进程都可以有很多子进程。
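LCTT 译注:父子关系可以直接从 `/proc` 中读出来,比如查看当前 shell 的父进程号:

```shell
# 每个进程的 /proc/<pid>/status 中都记录了它的父进程号(PPid)
grep '^PPid' /proc/$$/status
```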
|
||||
|
||||
所以,假设我要启动一个名为 `ls` 的进程来列出一个目录。我是不是只要发起一个进程 `ls` 就好了呢?不是的。
|
||||
|
||||
我要做的是,创建一个子进程,这个子进程是我本身的一个克隆,然后这个子进程的“大脑”被替代,变成 `ls`。
|
||||
我要做的是,创建一个子进程,这个子进程是我(`me`)本身的一个克隆,然后这个子进程的“脑子”被吃掉了,变成 `ls`。
|
||||
|
||||
开始是这样的:
|
||||
|
||||
```
|
||||
my parent
|
||||
|- me
|
||||
|
||||
```
|
||||
|
||||
然后运行 `fork()`,生成一个子进程,是我自己的一份克隆:
|
||||
然后运行 `fork()`,生成一个子进程,是我(`me`)自己的一份克隆:
|
||||
|
||||
```
|
||||
my parent
|
||||
|- me
|
||||
|-- clone of me
|
||||
|
||||
```
|
||||
|
||||
然后我让子进程运行 `exec("ls")`,变成这样:
|
||||
然后我让该子进程运行 `exec("ls")`,变成这样:
|
||||
|
||||
```
|
||||
my parent
|
||||
|- me
|
||||
|-- ls
|
||||
|
||||
```
|
||||
|
||||
当 ls 命令结束后,我几乎又变回了我自己:
|
||||
@ -55,24 +50,22 @@ my parent
|
||||
my parent
|
||||
|- me
|
||||
|-- ls (zombie)
|
||||
|
||||
```
|
||||
|
||||
在这时 ls 其实是一个僵尸进程。这意味着它已经死了,但它还在等我,以防我需要检查它的返回值(使用 `wait` 系统调用)。一旦我获得了它的返回值,我将再次恢复独自一人的状态。
|
||||
在这时 `ls` 其实是一个僵尸进程。这意味着它已经死了,但它还在等我,以防我需要检查它的返回值(使用 `wait` 系统调用)。一旦我获得了它的返回值,我将再次恢复独自一人的状态。
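LCTT 译注:shell 每天都在重复这个流程:用 `&` fork 出后台子进程,再用 `wait` 收集它的返回值。一个小演示:

```shell
# 后台启动一个子进程(fork + exec),$! 是它的进程号
true &
child=$!

# 在被 wait 回收之前,已退出的子进程会短暂地处于僵尸状态
wait "$child"
echo "child $child exited with status $?"
# 状态码为 0,因为 true 总是成功退出
```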
|
||||
|
||||
```
|
||||
my parent
|
||||
|- me
|
||||
|
||||
```
|
||||
|
||||
### fork 和 exec 的代码实现
|
||||
|
||||
如果你要编写一个 shell,这是你必须做的一个练习(这是一个非常有趣和有启发性的项目。Kamal 在 Github 上有一个很棒的研讨会:[https://github.com/kamalmarhubi/shell-workshop][3])
|
||||
如果你要编写一个 shell,这是你必须做的一个练习(这是一个非常有趣和有启发性的项目。Kamal 在 Github 上有一个很棒的研讨会:[https://github.com/kamalmarhubi/shell-workshop][3])。
|
||||
|
||||
事实证明,有了 C 或 Python 的技能,你可以在几个小时内编写一个非常简单的 shell,例如 bash。(至少如果你旁边能有个人多少懂一点,如果没有的话用时会久一点。)我已经完成啦,真的很棒。
|
||||
事实证明,有了 C 或 Python 的技能,你可以在几个小时内编写一个非常简单的 shell,像 bash 一样。(至少如果你旁边能有个人多少懂一点,如果没有的话用时会久一点。)我已经完成啦,真的很棒。
|
||||
|
||||
这就是 fork 和 exec 在程序中的实现。我写了一段 C 的伪代码。请记住,[fork 也可能会失败哦。][4]
|
||||
这就是 `fork` 和 `exec` 在程序中的实现。我写了一段 C 的伪代码。请记住,[fork 也可能会失败哦。][4]
|
||||
|
||||
```
|
||||
int pid = fork();
|
||||
@ -80,7 +73,7 @@ int pid = fork();
|
||||
// “我”是谁呢?可能是子进程也可能是父进程
|
||||
if (pid == 0) {
|
||||
// 我现在是子进程
|
||||
// 我的大脑将被替代,然后变成一个完全不一样的进程“ls”
|
||||
// “ls” 吃掉了我脑子,然后变成一个完全不一样的进程
|
||||
exec(["ls"])
|
||||
} else if (pid == -1) {
|
||||
// 天啊,fork 失败了,简直是灾难!
|
||||
@ -89,59 +82,48 @@ if (pid == 0) {
|
||||
// 继续做一个酷酷的美男子吧
|
||||
// 需要的话,我可以等待子进程结束
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
### 上文提到的“大脑被替代“是什么意思呢?
|
||||
### 上文提到的“脑子被吃掉”是什么意思呢?
|
||||
|
||||
进程有很多属性:
|
||||
|
||||
* 打开的文件(包括打开的网络连接)
|
||||
|
||||
* 环境变量
|
||||
|
||||
* 信号处理程序(当你对程序按下 Ctrl + C 时会发生什么?)
|
||||
|
||||
* 内存(你的“地址空间”)
|
||||
|
||||
* 寄存器
|
||||
|
||||
* 可执行文件(/proc/$pid/exe)
|
||||
|
||||
* 可执行文件(`/proc/$pid/exe`)
|
||||
* cgroups 和命名空间(与 Linux 容器相关)
|
||||
|
||||
* 当前的工作目录
|
||||
|
||||
* 运行程序的用户
|
||||
|
||||
* 其他我还没想到的
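LCTT 译注:其中不少属性都能在 `/proc` 下直接观察到,输出依具体系统而定:

```shell
# 可执行文件和当前工作目录以符号链接形式暴露在 /proc 下
ls -l /proc/$$/exe
ls -l /proc/$$/cwd

# 打开的文件(文件描述符)也在这里
ls /proc/$$/fd
```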
|
||||
|
||||
当你运行 `execve` 并让另一个程序替代你的时候,实际上几乎所有东西都是相同的! 你们有相同的环境变量、信号处理程序和打开的文件等等。
|
||||
当你运行 `execve` 并让另一个程序吃掉你的脑子的时候,实际上几乎所有东西都是相同的! 你们有相同的环境变量、信号处理程序和打开的文件等等。
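LCTT 译注:环境变量的继承很容易在 shell 里验证(`DEMO_VAR` 只是演示用的变量名):

```shell
# 在父进程中导出一个变量
export DEMO_VAR="hello"

# 子进程虽然 exec 成了全新的程序,却继承了父进程的环境变量
sh -c 'echo "child sees: $DEMO_VAR"'
# 输出:child sees: hello
```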
|
||||
|
||||
唯一改变的是,内存、寄存器以及正在运行的程序,这可是件大事。
|
||||
|
||||
### 为何 fork 并非那么耗费资源(写入时复制)
|
||||
|
||||
你可能会问:“如果我有一个使用了 2 GB 内存的进程,这是否意味着每次我启动一个子进程,所有 2 GB 的内存都要被复制一次?这听起来要耗费很多资源!“
|
||||
你可能会问:“如果我有一个使用了 2GB 内存的进程,这是否意味着每次我启动一个子进程,所有 2GB 的内存都要被复制一次?这听起来要耗费很多资源!”
|
||||
|
||||
事实上,Linux 为 fork() 调用实现了写入时复制(copy on write),对于新进程的 2 GB 内存来说,就像是“看看旧的进程就好了,是一样的!”。然后,当如果任一进程试图写入内存,此时系统才真正地复制一个内存的副本给该进程。如果两个进程的内存是相同的,就不需要复制了。
|
||||
事实上,Linux 为 `fork()` 调用实现了<ruby>写时复制<rt>copy on write</rt></ruby>,对于新进程的 2GB 内存来说,就像是“看看旧的进程就好了,是一样的!”。然后,当任一进程试图写入内存时,系统才会真正地复制一份内存副本给该进程。如果两个进程的内存是相同的,就不需要复制了。
|
||||
|
||||
### 为什么你需要知道这么多
|
||||
|
||||
你可能会说,好吧,这些琐事听起来很厉害,但为什么这么重要?关于信号处理程序或环境变量的细节会被继承吗?这对我的日常编程有什么实际影响呢?
|
||||
你可能会说,好吧,这些细节听起来很厉害,但为什么这么重要?关于信号处理程序或环境变量的细节会被继承吗?这对我的日常编程有什么实际影响呢?
|
||||
|
||||
有可能哦!比如说,在 Kamal 的博客上有一个很有意思的 [bug][5]。它讨论了 Python 如何使信号处理程序忽略了 SIGPIPE。也就是说,如果你从 Python 里运行一个程序,默认情况下它会忽略 SIGPIPE!这意味着,程序从 Python 脚本和从 shell 启动的表现会**有所不同**。在这种情况下,它会造成一个奇怪的问题。
|
||||
有可能哦!比如说,在 Kamal 的博客上有一个很有意思的 [bug][5]。它讨论了 Python 如何使信号处理程序忽略了 `SIGPIPE`。也就是说,如果你从 Python 里运行一个程序,默认情况下它会忽略 `SIGPIPE`!这意味着,程序从 Python 脚本和从 shell 启动的表现会**有所不同**。在这种情况下,它会造成一个奇怪的问题。
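LCTT 译注:SIGPIPE 在 shell 的管道中随处可见。比如下面的例子里,`head` 读完一行就关闭了管道,`yes` 随后因收到 SIGPIPE 而退出,这正是它不会永远运行下去的原因:

```shell
# yes 会无限输出 y;head 取走一行后关闭管道,yes 收到 SIGPIPE 而终止
yes | head -n 1
# 输出:y
```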
|
||||
|
||||
所以,你的程序的环境(环境变量、信号处理程序等)可能很重要,都是从父进程继承来的。知道这些,在调试时是很有用的。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2016/10/04/exec-will-eat-your-brain/
|
||||
|
||||
作者:[ Julia Evans][a]
|
||||
作者:[Julia Evans][a]
|
||||
译者:[jessie-pang](https://github.com/jessie-pang)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,19 +1,21 @@
|
||||
让 History 命令显示日期和时间
|
||||
让 history 命令显示日期和时间
|
||||
======
|
||||
我们都对 History 命令很熟悉。它将终端上 bash 执行过的所有命令存储到 `.bash_history` 文件中,来帮助我们复查用户之前执行过的命令。
|
||||
|
||||
默认情况下 history 命令直接显示用户执行的命令而不会输出运行命令时的日期和时间,即使 history 命令记录了这个时间。
|
||||
我们都对 `history` 命令很熟悉。它将终端上 bash 执行过的所有命令存储到 `.bash_history` 文件中,来帮助我们复查用户之前执行过的命令。
|
||||
|
||||
运行 history 命令时,它会检查一个叫做 `HISTTIMEFORMAT` 的环境变量,这个环境变量指明了如何格式化输出 history 命令中记录的这个时间。
|
||||
默认情况下 `history` 命令直接显示用户执行的命令而不会输出运行命令时的日期和时间,即使 `history` 命令记录了这个时间。
|
||||
|
||||
若该值为 null 或者根本没有设置,则它跟大多数系统默认显示的一样,不会现实日期和时间。
|
||||
运行 `history` 命令时,它会检查一个叫做 `HISTTIMEFORMAT` 的环境变量,这个环境变量指明了如何格式化输出 `history` 命令中记录的这个时间。
|
||||
|
||||
`HISTTIMEFORMAT` 使用 strftime 来格式化显示时间 (strftime - 将日期和时间转换为字符串)。history 命令输出日期和时间能够帮你更容易地追踪问题。
|
||||
若该值为 null 或者根本没有设置,则它跟大多数系统默认显示的一样,不会显示日期和时间。
|
||||
|
||||
* **%T:** 替换为时间 ( %H:%M:%S )。
|
||||
* **%F:** 等同于 %Y-%m-%d (ISO 8601:2000 标准日期格式)。
|
||||
`HISTTIMEFORMAT` 使用 `strftime` 来格式化显示时间(`strftime` - 将日期和时间转换为字符串)。`history` 命令输出日期和时间能够帮你更容易地追踪问题。
|
||||
|
||||
* `%T`: 替换为时间(`%H:%M:%S`)。
|
||||
* `%F`: 等同于 `%Y-%m-%d` (ISO 8601:2000 标准日期格式)。
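LCTT 译注:这些正是 `strftime` 的格式符,可以先用 `date` 命令预览它们的效果:

```shell
# date 同样接受 strftime 格式符,输出形如 2017-08-16 15:30:15
date +'%F %T'
```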
|
||||
|
||||
下面是 `history` 命令默认的输出。
|
||||
|
||||
下面是 history 命令默认的输出。
|
||||
```
|
||||
# history
|
||||
1 yum install -y mysql-server mysql-client
|
||||
@ -46,36 +48,36 @@
|
||||
28 sysdig
|
||||
29 yum install httpd mysql
|
||||
30 service httpd start
|
||||
|
||||
```
|
||||
|
||||
根据需求,有三种不同的方法设置环境变量。
|
||||
根据需求,有三种不同的设置环境变量的方法。
|
||||
|
||||
* 临时设置当前用户的环境变量
|
||||
* 永久设置当前/其他用户的环境变量
|
||||
* 永久设置所有用户的环境变量
|
||||
* 临时设置当前用户的环境变量
|
||||
* 永久设置当前/其他用户的环境变量
|
||||
* 永久设置所有用户的环境变量
|
||||
|
||||
**注意:** 不要忘了在最后那个单引号前加上空格,否则输出会很混乱的。
|
||||
|
||||
### 方法 -1:
|
||||
### 方法 1:
|
||||
|
||||
运行下面命令为当前用户临时设置 `HISTTIMEFORMAT` 变量。这会一直生效到下次重启。
|
||||
|
||||
运行下面命令为为当前用户临时设置 HISTTIMEFORMAT 变量。这会一直生效到下次重启。
|
||||
```
|
||||
# export HISTTIMEFORMAT='%F %T '
|
||||
|
||||
```
|
||||
|
||||
### 方法 -2:
|
||||
### 方法 2:
|
||||
|
||||
将 `HISTTIMEFORMAT` 变量加到 `.bashrc` 或 `.bash_profile` 文件中,让它永久生效。
|
||||
|
||||
将 HISTTIMEFORMAT 变量加到 `.bashrc` 或 `.bash_profile` 文件中,让它永久生效。
|
||||
```
|
||||
# echo 'HISTTIMEFORMAT="%F %T "' >> ~/.bashrc
|
||||
或
|
||||
# echo 'HISTTIMEFORMAT="%F %T "' >> ~/.bash_profile
|
||||
|
||||
```
|
||||
|
||||
运行下面命令来让文件中的修改生效。
|
||||
|
||||
```
|
||||
# source ~/.bashrc
|
||||
或
|
||||
@ -83,21 +85,22 @@
|
||||
|
||||
```
|
||||
|
||||
### 方法 -3:
|
||||
### 方法 3:
|
||||
|
||||
将 `HISTTIMEFORMAT` 变量加入 `/etc/profile` 文件中,让它对所有用户永久生效。
|
||||
|
||||
将 HISTTIMEFORMAT 变量加入 `/etc/profile` 文件中,让它对所有用户永久生效。
|
||||
```
|
||||
# echo 'HISTTIMEFORMAT="%F %T "' >> /etc/profile
|
||||
|
||||
```
|
||||
|
||||
运行下面命令来让文件中的修改生效。
|
||||
|
||||
```
|
||||
# source /etc/profile
|
||||
|
||||
```
|
||||
|
||||
输出结果为。
|
||||
输出结果为:
|
||||
|
||||
```
|
||||
# history
|
||||
1 2017-08-16 15:30:15 yum install -y mysql-server mysql-client
|
||||
@ -130,7 +133,6 @@
|
||||
28 2017-08-16 15:30:15 sysdig
|
||||
29 2017-08-16 15:30:15 yum install httpd mysql
|
||||
30 2017-08-16 15:30:15 service httpd start
|
||||
|
||||
```
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -138,7 +140,7 @@ via: https://www.2daygeek.com/display-date-time-linux-bash-history-command/
|
||||
|
||||
作者:[2daygeek][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,8 +1,9 @@
|
||||
如何方便地寻找 GitHub 上超棒的项目和资源
|
||||
如何轻松地寻找 GitHub 上超棒的项目和资源
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2017/09/Awesome-finder-Find-Awesome-Projects-720x340.png)
|
||||
|
||||
在 **GitHub** 网站上每天都会新增上百个项目。由于 GitHub 上有成千上万的项目,要在上面搜索好的项目简直要累死人。好在,有那么一伙人已经创建了一些这样的列表。其中包含的类别五花八门,如编程,数据库,编辑器,游戏,娱乐等。这使得我们寻找在 GitHub 上托管的项目,软件,资源,裤,书籍等其他东西变得容易了很多。有一个 GitHub 用户更进了一步,创建了一个名叫 `Awesome-finder` 的命令行工具,用来在 awesome 系列的仓库中寻找超棒的项目和资源。该工具帮助我们不需要离开终端(当然也就不需要使用浏览器了)的情况下浏览 awesome 列表。
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2017/09/Awesome-finder-Find-Awesome-Projects-720x340.png)
|
||||
|
||||
在 GitHub 网站上每天都会新增上百个项目。由于 GitHub 上有成千上万的项目,要在上面搜索好的项目简直要累死人。好在,有那么一伙人已经创建了一些这样的列表。其中包含的类别五花八门,如编程、数据库、编辑器、游戏、娱乐等。这使得我们寻找在 GitHub 上托管的项目、软件、资源、库、书籍等其他东西变得容易了很多。有一个 GitHub 用户更进了一步,创建了一个名叫 `Awesome-finder` 的命令行工具,用来在 awesome 系列的仓库中寻找超棒的项目和资源。该工具可以让我们不需要离开终端(当然也就不需要使用浏览器了)的情况下浏览 awesome 列表。
|
||||
|
||||
在这篇简单的说明中,我会向你演示如何方便地在类 Unix 系统中浏览 awesome 列表。
|
||||
|
||||
@ -12,12 +13,14 @@
|
||||
|
||||
使用 `pip` 可以很方便地安装该工具,`pip` 是一个用来安装使用 Python 编程语言开发的程序的包管理器。
|
||||
|
||||
在 **Arch Linux** 一起衍生发行版中(比如 **Antergos**,**Manjaro Linux**),你可以使用下面命令安装 `pip`:
|
||||
在 Arch Linux 及其衍生发行版中(比如 Antergos,Manjaro Linux),你可以使用下面命令安装 `pip`:
|
||||
|
||||
```
|
||||
sudo pacman -S python-pip
|
||||
```
|
||||
|
||||
在 **RHEL**,**CentOS** 中:
|
||||
在 RHEL,CentOS 中:
|
||||
|
||||
```
|
||||
sudo yum install epel-release
|
||||
```
|
||||
@ -25,32 +28,33 @@ sudo yum install epel-release
|
||||
sudo yum install python-pip
|
||||
```
|
||||
|
||||
在 **Fedora** 上:
|
||||
在 Fedora 上:
|
||||
|
||||
```
|
||||
sudo dnf install epel-release
|
||||
```
|
||||
```
|
||||
sudo dnf install python-pip
|
||||
```
|
||||
|
||||
在 **Debian**,**Ubuntu**,**Linux Mint** 上:
|
||||
在 Debian,Ubuntu,Linux Mint 上:
|
||||
|
||||
```
|
||||
sudo apt-get install python-pip
|
||||
```
|
||||
|
||||
在 **SUSE**,**openSUSE** 上:
|
||||
在 SUSE,openSUSE 上:
|
||||
```
|
||||
sudo zypper install python-pip
|
||||
```
|
||||
|
||||
PIP 安装好后,用下面命令来安装 'Awesome-finder'。
|
||||
`pip` 安装好后,用下面命令来安装 `awesome-finder`:
|
||||
|
||||
```
|
||||
sudo pip install awesome-finder
|
||||
```
|
||||
|
||||
#### 用法
|
||||
|
||||
Awesome-finder 会列出 GitHub 网站中如下这些主题(其实就是仓库)的内容:
|
||||
Awesome-finder 会列出 GitHub 网站中如下这些主题(其实就是仓库)的内容:
|
||||
|
||||
* awesome
|
||||
* awesome-android
|
||||
@ -66,83 +70,84 @@ Awesome-finder 会列出 GitHub 网站中如下这些主题(其实就是仓库)
|
||||
* awesome-scala
|
||||
* awesome-swift
|
||||
|
||||
|
||||
该列表会定期更新。
|
||||
|
||||
比如,要查看 `awesome-go` 仓库中的列表,只需要输入:
|
||||
|
||||
```
|
||||
awesome go
|
||||
```
|
||||
|
||||
你就能看到用 “Go” 写的所有流行的东西了,而且这些东西按字母顺序进行了排列。
|
||||
|
||||
[![][1]][2]
|
||||
![][2]
|
||||
|
||||
你可以通过 **上/下** 箭头在列表中导航。一旦找到所需要的东西,只需要选中它,然后按下 **回车** 键就会用你默认的 web 浏览器打开相应的链接了。
|
||||
你可以通过 上/下 箭头在列表中导航。一旦找到所需要的东西,只需要选中它,然后按下回车键就会用你默认的 web 浏览器打开相应的链接了。
|
||||
|
||||
类似的,
|
||||
|
||||
* "awesome android" 命令会搜索 **awesome-android** 仓库。
|
||||
* "awesome awesome" 命令会搜索 **awesome** 仓库。
|
||||
* "awesome elixir" 命令会搜索 **awesome-elixir**。
|
||||
* "awesome go" 命令会搜索 **awesome-go**。
|
||||
* "awesome ios" 命令会搜索 **awesome-ios**。
|
||||
* "awesome java" 命令会搜索 **awesome-java**。
|
||||
* "awesome javascript" 命令会搜索 **awesome-javascript**。
|
||||
* "awesome php" 命令会搜索 **awesome-php**。
|
||||
* "awesome python" 命令会搜索 **awesome-python**。
|
||||
* "awesome ruby" 命令会搜索 **awesome-ruby**。
|
||||
* "awesome rust" 命令会搜索 **awesome-rust**。
|
||||
* "awesome scala" 命令会搜索 **awesome-scala**。
|
||||
* "awesome swift" 命令会搜索 **awesome-swift**。
|
||||
* `awesome android` 命令会搜索 awesome-android 仓库。
|
||||
* `awesome awesome` 命令会搜索 awesome 仓库。
|
||||
* `awesome elixir` 命令会搜索 awesome-elixir。
|
||||
* `awesome go` 命令会搜索 awesome-go。
|
||||
* `awesome ios` 命令会搜索 awesome-ios。
|
||||
* `awesome java` 命令会搜索 awesome-java。
|
||||
* `awesome javascript` 命令会搜索 awesome-javascript。
|
||||
* `awesome php` 命令会搜索 awesome-php。
|
||||
* `awesome python` 命令会搜索 awesome-python。
|
||||
* `awesome ruby` 命令会搜索 awesome-ruby。
|
||||
* `awesome rust` 命令会搜索 awesome-rust。
|
||||
* `awesome scala` 命令会搜索 awesome-scala。
|
||||
* `awesome swift` 命令会搜索 awesome-swift。
|
||||
|
||||
而且,它还会随着你在提示符中输入的内容而自动进行筛选。比如,当我输入 "dj" 后,他会显示与 Django 相关的内容。
|
||||
而且,它还会随着你在提示符中输入的内容而自动进行筛选。比如,当我输入 `dj` 后,他会显示与 Django 相关的内容。
|
||||
|
||||
[![][1]][3]
|
||||
![][3]
|
||||
|
||||
若你想从最新的 `awesome-<topic>` 仓库(而不是使用缓存中的数据)中搜索,使用 `-f` 或 `--force` 标志:
|
||||
|
||||
```
|
||||
awesome <topic> -f (--force)
|
||||
|
||||
```
|
||||
|
||||
**像这样:**
|
||||
像这样:
|
||||
|
||||
```
|
||||
awesome python -f
|
||||
```
|
||||
|
||||
或,
|
||||
|
||||
```
|
||||
awesome python --force
|
||||
```
|
||||
|
||||
上面命令会显示 **awesome-python** GitHub 仓库中的列表。
|
||||
上面命令会显示 awesome-python GitHub 仓库中的列表。
|
||||
|
||||
很棒,对吧?
|
||||
|
||||
要退出这个工具的话,按下 **ESC** 键。要显示帮助信息,输入:
|
||||
要退出这个工具的话,按下 ESC 键。要显示帮助信息,输入:
|
||||
|
||||
```
|
||||
awesome -h
|
||||
```
|
||||
|
||||
本文至此就结束了。希望本文能对你有所帮助。如果你觉得我们的文章对你有帮助,请将它们分享到你的社交网络中去,造福大众。我们马上还有其他好东西要来了。敬请期待!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_008-1.png ()
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_009.png ()
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_008-1.png
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_009.png
|
||||
[4]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=reddit (Click to share on Reddit)
|
||||
[5]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=twitter (Click to share on Twitter)
|
||||
[6]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=facebook (Click to share on Facebook)
|
@ -1,57 +1,61 @@
|
||||
微服务和容器:需要去防范的 5 个“坑”
|
||||
======
|
||||
|
||||
> 微服务与容器天生匹配,但是你需要避开一些常见的陷阱。
|
||||
|
||||
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk)
|
||||
|
||||
因为微服务和容器是 [天生的“一对”][1],所以一起来使用它们,似乎也就不会有什么问题。当我们将这对“天作之合”投入到生产系统后,你就会发现,随着你的 IT 基础的提升,等待你的将是大幅上升的成本。是不是这样的?
|
||||
|
||||
(让我们等一下,等人们笑声过去)
|
||||
|
||||
是的,很遗憾,这并不是你所希望的结果。虽然这两种技术的组合是非常强大的,但是,如果没有很好的规划和适配,它们并不能发挥出强大的性能来。在前面的文章中,我们整理了如果你想 [使用它们你应该掌握的知识][2]。但是,那些都是组织在容器中使用微服务时所遇到的常见问题。
|
||||
|
||||
事先了解这些可能出现的问题,可以为你的成功奠定更坚实的基础。
|
||||
事先了解这些可能出现的问题,能够帮你避免这些问题,为你的成功奠定更坚实的基础。
|
||||
|
||||
微服务和容器技术的出现是基于组织的需要、知识、资源等等更多的现实的要求。Mac Browning 说,“他们最常犯的一个 [错误] 是试图一次就想“搞定”一切”,他是 [DigitalOcean][3] 的工程部经理。“而真正需要面对的问题是,你的公司应该采用什么样的容器和微服务。”
|
||||
微服务和容器技术的出现是基于组织的需要、知识、资源等等更多的现实的要求。Mac Browning 说,“他们最常犯的一个 [错误] 是试图一次就想‘搞定’一切”,他是 [DigitalOcean][3] 的工程部经理。“而真正需要面对的问题是,你的公司应该采用什么样的容器和微服务。”
|
||||
|
||||
**[ 努力向你的老板和同事去解释什么是微服务?阅读我们的入门读本[如何简单明了地解释微服务][4]。]**
|
||||
|
||||
Browning 和其他的 IT 专业人员分享了他们遇到的,在组织中使用容器化微服务时的五个陷阱,特别是在他们的生产系统生命周期的早期时候。在你的组织中需要去部署微服务和容器时,了解这些知识,将有助于你去评估微服务和容器化的部署策略。
|
||||
|
||||
### 1. 在部署微服务和容器化上,试图同时从零开始
|
||||
### 1、 在部署微服务和容器化上,试图同时从零开始
|
||||
|
||||
如果你刚开始从完全的实体服务器上开始改变,或者如果你的组织在微服务和容器化上还没有足够的知识储备,那么,请记住:微服务和容器化并不是拴在一起,不可分别部署的。这就意味着,你可以发挥你公司内部专家的技术特长,先从部署其中的一个开始。Kevin McGrath,CTO, [Sungard 服务可用性][5] 资深设计师,他建议,通过首先使用容器化来为你的团队建立知识和技能储备,通过对现有应用或者新应用进行容器化部署,接着再将它们迁移到微服务架构,这样才能在最后的阶段感受到它们的优势所在。
|
||||
如果你刚开始从完全的单体应用开始改变,或者如果你的组织在微服务和容器化上还没有足够的知识储备,那么,请记住:微服务和容器化并不是拴在一起、不可分别部署的。这就意味着,你可以发挥你公司内部专家的技术特长,先从部署其中的一个开始。[Sungard Availability Services][5] 的资深 CTO 架构师 Kevin McGrath 建议,通过首先使用容器化来为你的团队建立知识和技能储备,通过对现有应用或者新应用进行容器化部署,接着再将它们迁移到微服务架构,这样才能最终感受到它们的优势所在。
|
||||
|
||||
McGrath 说,“微服务要想运行的很好,需要公司经过多年的反复迭代,这样才能实现快速部署和迁移”,“如果组织不能实现快速迁移,那么支持微服务将很困难。实现快速迁移,容器化可以帮助你,这样就不用担心业务整体停机”
|
||||
McGrath 说,“微服务要想运行的很好,需要公司经过多年的反复迭代,这样才能实现快速部署和迁移”,“如果组织不能实现快速迁移,那么支持微服务将很困难。实现快速迁移,容器化可以帮助你,这样就不用担心业务整体停机”。
|
||||
|
||||
### 2. 从一个面向客户的或者关键的业务应用开始
|
||||
### 2、 从一个面向客户的或者关键的业务应用开始
|
||||
|
||||
对组织来说,一个相关陷阱恰恰就是引入容器、微服务、或者同时两者都引入的这个开端:在尝试征服一片丛林中的雄狮之前,你应该先去征服处于食物链底端的一些小动物,以取得一些实践经验。
|
||||
对组织来说,一个相关陷阱恰恰就是从容器、微服务、或者两者同时起步:在尝试征服一片丛林中的雄狮之前,你应该先去征服处于食物链底端的一些小动物,以取得一些实践经验。
|
||||
|
||||
在你的学习过程中预期会有一些错误出现 - 你是希望这些错误发生在面向客户的关键业务应用上,还是,仅对 IT 或者其他内部团队可见的低风险应用上?
|
||||
在你的学习过程中可以预期会有一些错误出现 —— 你是希望这些错误发生在面向客户的关键业务应用上,还是,仅对 IT 或者其他内部团队可见的低风险应用上?
|
||||
|
||||
DigitalOcean 的 Browning 说,“如果整个生态系统都是新的,为了获取一些微服务和容器方面的操作经验,那么,将它们先应用到影响面较低的区域,比如像你的持续集成系统或者内部工具,可能是一个低风险的做法。”你获得这方面的经验以后,当然会将这些技术应用到为客户提供服务的生产系统上。而现实情况是,不论你准备的如何周全,都不可避免会遇到问题,因此,需要提前为可能出现的问题制定应对之策。
|
||||
|
||||
### 3. 在没有合适的团队之前引入了太多的复杂性
|
||||
### 3、 在没有合适的团队之前引入了太多的复杂性
|
||||
|
||||
由于微服务架构的弹性,它可能会产生复杂的管理需求。
|
||||
|
||||
作为 [Red Hat][6] 技术的狂热拥护者,[Gordon Haff][7] 最近写道,“一个符合 OCI 标准的容器运行时本身管理单个容器是很擅长的,但是,当你开始使用多个容器和容器化应用时,并将它们分解为成百上千个节点后,管理和编配它们将变得极为复杂。最终,你将回过头来需要将容器分组来提供服务 - 比如,跨容器的网络、安全、测控”
|
||||
作为 [Red Hat][6] 技术的狂热拥护者,[Gordon Haff][7] 最近写道,“一个符合 OCI 标准的容器运行时本身管理单个容器是很擅长的,但是,当你开始使用多个容器和容器化应用时,并将它们分解为成百上千个节点后,管理和编配它们将变得极为复杂。最终,你将需要回过头来将容器分组来提供服务 —— 比如,跨容器的网络、安全、测控”。
|
||||
|
||||
Haff 提示说,“幸运的是,由于容器是可移植的,并且,与之相关的管理栈也是可移植的”。“这时出现的编配技术,比如像 [Kubernetes][8] ,使得这种 IT 需求变得简单化了”(更多内容请查阅 Haff 的文章:[容器化为编写应用带来的 5 个优势][1])
|
||||
Haff 提示说,“幸运的是,由于容器是可移植的,并且,与之相关的管理栈也是可移植的”。“这时出现的编配技术,比如像 [Kubernetes][8] ,使得这种 IT 需求变得简单化了”(更多内容请查阅 Haff 的文章:[容器化为编写应用带来的 5 个优势][1])。
|
||||
|
||||
另外,你需要合适的团队去做这些事情。如果你已经有 [DevOps shop][9],那么,你可能比较适合做这种转换。因为,从一开始你已经聚集了相关技能的人才。
|
||||
|
||||
Mike Kavis 说,“随着时间的推移,会有越来越多的服务得以部署,管理起来会变得很不方便”,他是 [Cloud Technology Partners][10] 的副总裁兼首席云架构设计师。他说,“在 DevOps 的关键过程中,确保各个领域的专家 - 开发、测试、安全、运营等等 - 全部者参与进来,并且在基于容器的微服务中,在构建、部署、运行、安全方面实现协作。”
|
||||
Mike Kavis 说,“随着时间的推移,部署了越来越多的服务,管理起来会变得很不方便”,他是 [Cloud Technology Partners][10] 的副总裁兼首席云架构设计师。他说,“在 DevOps 的关键过程中,确保各个领域的专家 —— 开发、测试、安全、运营等等 —— 全部都参与进来,并且在基于容器的微服务中,在构建、部署、运行、安全方面实现协作。”
|
||||
|
||||
### 4. 忽视重要的需求:自动化
|
||||
### 4、 忽视重要的需求:自动化
|
||||
|
||||
除了具有一个合适的团队之外,那些在基于容器化的微服务部署比较成功的组织都倾向于以“实现尽可能多的自动化”来解决固有的复杂性。
|
||||
|
||||
Carlos Sanchez 说,“实现分布式架构并不容易,一些常见的挑战,像数据持久性、日志、排错等等,在微服务架构中都会变得很复杂”,他是 [CloudBees][11] 的资深软件工程师。根据定义,Sanchez 提到的分布式架构,随着业务的增长,将变成一个巨大无比的繁重的运营任务。“服务和组件的增殖,将使得运营自动化变成一项非常强烈的需求”。Sanchez 警告说。“手动管理将限制服务的规模”
|
||||
Carlos Sanchez 说,“实现分布式架构并不容易,一些常见的挑战,像数据持久性、日志、排错等等,在微服务架构中都会变得很复杂”,他是 [CloudBees][11] 的资深软件工程师。根据定义,Sanchez 提到的分布式架构,随着业务的增长,将变成一个巨大无比的繁重的运营任务。“服务和组件的增殖,将使得运营自动化变成一项非常强烈的需求”。Sanchez 警告说。“手动管理将限制服务的规模”。
|
||||
|
||||
### 5. 随着时间的推移,微服务变得越来越臃肿
|
||||
### 5、 随着时间的推移,微服务变得越来越臃肿
|
||||
|
||||
在一个容器中运行一个服务或者软件组件并不神奇。但是,这样做并不能证明你就一定在使用微服务。Manual Nedbal, [ShieldX Networks][12] 的 CTO,它警告说,IT 专业人员要确保,随着时间的推移,微服务仍然是微服务。
|
||||
在一个容器中运行一个服务或者软件组件并不神奇。但是,这样做并不能证明你就一定在使用微服务。[ShieldX Networks][12] 的 CTO Manuel Nedbal 警告说,IT 专业人员要确保,随着时间的推移,微服务仍然是微服务。
|
||||
|
||||
Nedbal 说,“随着时间的推移,一些软件组件积累了大量的代码和特性,将它们将在一个容器中将会产生并不需要的微服务,也不会带来相同的优势”,也就是说,“随着组件的变大,工程师需要找到合适的时机将它们再次分解”
|
||||
Nedbal 说,“随着时间的推移,一些软件组件积累了大量的代码和特性,将它们放在一个容器中将会产生并不需要的微服务,也不会带来相同的优势”,也就是说,“随着组件的变大,工程师需要找到合适的时机将它们再次分解”。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -59,7 +63,7 @@ via: https://enterprisersproject.com/article/2017/9/using-microservices-containe
|
||||
|
||||
作者:[Kevin Casey][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,17 +1,23 @@

让我们使用 PC 键盘在终端演奏钢琴
======

![](https://www.ostechnix.com/wp-content/uploads/2017/10/Play-Piano-In-Terminal-720x340.jpg)

厌倦了工作?那么来吧,让我们弹弹钢琴!是的,你没有看错,根本不需要真的钢琴。我们可以用 PC 键盘在命令行下就能弹钢琴。向你们介绍一下 `piano-rs` —— 这是一款用 Rust 语言编写的,可以让你用 PC 键盘在终端弹钢琴的简单工具。它自由开源,基于 MIT 协议。你可以在任何支持 Rust 的操作系统中使用它。

### piano-rs:使用 PC 键盘在终端弹钢琴

#### 安装

确保系统已经安装了 Rust 编程语言。若还未安装,运行下面命令来安装它。

```
curl https://sh.rustup.rs -sSf | sh
```

(LCTT 译注:这种直接通过 curl 执行远程 shell 脚本是一种非常危险和不成熟的做法。)

安装程序会问你是要默认安装、自定义安装,还是取消安装。我希望默认安装,因此输入 `1`(数字一)。

```
info: downloading installer

@ -43,7 +49,7 @@ default host triple: x86_64-unknown-linux-gnu

1) Proceed with installation (default)
2) Customize installation
3) Cancel installation
1

info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu'
 223.6 KiB / 223.6 KiB (100 %) 215.1 KiB/s ETA: 0 s

@ -72,9 +78,10 @@ environment variable. Next time you log in this will be done automatically.

To configure your current shell run source $HOME/.cargo/env
```

登出然后重启系统来将 cargo 的 bin 目录纳入 `PATH` 变量中。

校验 Rust 是否正确安装:

```
$ rustc --version
rustc 1.21.0 (3b72af97e 2017-10-09)
```

@ -83,40 +90,44 @@ rustc 1.21.0 (3b72af97e 2017-10-09)

太棒了!Rust 成功安装了。是时候构建 piano-rs 应用了。

使用下面命令克隆 Piano-rs 仓库:

```
git clone https://github.com/ritiek/piano-rs
```

上面命令会在当前工作目录创建一个名为 `piano-rs` 的目录并下载所有内容到其中。进入该目录:

```
cd piano-rs
```

最后,运行下面命令来构建 Piano-rs:

```
cargo build --release
```

编译过程要花上一阵子。

#### 用法

编译完成后,在 `piano-rs` 目录中运行下面命令:

```
./target/release/piano-rs
```

这就是我们在终端上的钢琴键盘了!可以开始弹奏一些音符了。按下按键可以弹奏相应音符。使用 **左/右** 方向键可以在弹奏时调整音调,使用 **上/下** 方向键可以在弹奏时调整音长。

![][2]

Piano-rs 使用与 [multiplayerpiano.com][3] 一样的音符和按键。另外,你可以使用[这些音符][4]来学习弹奏各种流行歌曲。

要查看帮助,输入:

```
$ ./target/release/piano-rs -h
```

```
piano-rs 0.1.0
Ritiek Malhotra <ritiekmalhotra123@gmail.com>
Play piano in the terminal using PC keyboard.

@ -141,19 +152,18 @@ OPTIONS:
```

此致敬礼!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/let-us-play-piano-terminal-using-pc-keyboard/

作者:[SK][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2017/10/Piano.png
[3]:http://www.multiplayerpiano.com/
[4]:https://pastebin.com/CX1ew0uB
@ -0,0 +1,51 @@

autorandr:自动调整屏幕布局
======

像许多笔记本用户一样,我经常将笔记本插入到不同的显示器上(桌面上有多台显示器,演示时有投影机等)。每次都运行 `xrandr` 命令或在图形界面里点来点去非常繁琐,编写脚本也好不到哪里去。

最近,我遇到了 [autorandr][1],它使用 EDID(和其他设置)检测连接的显示器,保存 `xrandr` 配置并恢复它们。它也可以在加载特定配置时运行任意脚本。我已经打包了它,目前仍在 NEW 状态。如果你不能等待,[这是 deb][2],[这是 git 仓库][3]。

要使用它,只需安装软件包,并创建你的初始配置(我这里用的名字是 `undocked`):

```
autorandr --save undocked
```

然后,连接你的笔记本(或者插入你的外部显示器),使用 `xrandr`(或其他任何工具)更改配置,然后保存你的新配置(我这里用的名字是 `workstation`):

```
autorandr --save workstation
```

对你额外的配置(或当你有新的配置)进行重复操作。

`autorandr` 有 `udev`、`systemd` 和 `pm-utils` 钩子,当新的显示器出现时 `autorandr --change` 应该会立即运行。如果需要,也可以手动运行 `autorandr --change` 或 `autorandr --load workstation`。你也可以在加载配置后运行自己的脚本,将其放在 `~/.config/autorandr/$PROFILE/postswitch` 即可。由于我运行 i3,我的工作站配置如下所示:

```
#!/bin/bash

xrandr --dpi 92
xrandr --output DP2-2 --primary
i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;'
i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;'
i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;'
```

它适当地修正了 dpi,设置主屏幕(可能不需要?),并移动 i3 工作区。你也可以在配置目录中添加一个 `block` 钩子,来让某个配置永远不会被运行。

如果你定期更换显示器,请看一下!

--------------------------------------------------------------------------------

via: https://www.donarmstrong.com/posts/autorandr/

作者:[Don Armstrong][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.donarmstrong.com
[1]:https://github.com/phillipberndt/autorandr
[2]:https://www.donarmstrong.com/autorandr_1.2-1_all.deb
[3]:https://git.donarmstrong.com/deb_pkgs/autorandr.git
135
published/20171215 How to find and tar files into a tar ball.md
Normal file

@ -0,0 +1,135 @@

如何找出并打包文件成 tar 包
======

Q:我想找出所有的 `*.doc` 文件并将它们创建成一个 tar 包,然后存储在 `/nfs/backups/docs/file.tar` 中。是否可以在 Linux 或者类 Unix 系统上查找并 tar 打包文件?

`find` 命令用于按照给定条件在目录层次结构中搜索文件。`tar` 命令是用于 Linux 和类 Unix 系统创建 tar 包的归档工具。

[![How to find and tar files on linux unix][1]][1]

让我们看看如何将 `tar` 命令与 `find` 命令结合在一个命令行中创建一个 tar 包。

### Find 命令

语法是:

```
find /path/to/search -name "file-to-search" -options
## 找出所有 Perl(*.pl)文件 ##
find $HOME -name "*.pl" -print
## 找出所有 *.doc 文件 ##
find $HOME -name "*.doc" -print
## 找出所有 *.sh(shell 脚本)并运行 ls -l 命令 ##
find . -iname "*.sh" -exec ls -l {} +
```

最后一个命令的输出示例:

```
-rw-r--r-- 1 vivek vivek 1169 Apr 4 2017 ./backups/ansible/cluster/nginx.build.sh
-rwxr-xr-x 1 vivek vivek 1500 Dec 6 14:36 ./bin/cloudflare.pure.url.sh
lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/cmspostupload.sh -> postupload.sh
lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/cmspreupload.sh -> preupload.sh
lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/cmssuploadimage.sh -> uploadimage.sh
lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/faqpostupload.sh -> postupload.sh
lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/faqpreupload.sh -> preupload.sh
lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/faquploadimage.sh -> uploadimage.sh
-rw-r--r-- 1 vivek vivek 778 Nov 6 14:44 ./bin/mirror.sh
-rwxr-xr-x 1 vivek vivek 136 Apr 25 2015 ./bin/nixcraft.com.301.sh
-rwxr-xr-x 1 vivek vivek 547 Jan 30 2017 ./bin/paypal.sh
-rwxr-xr-x 1 vivek vivek 531 Dec 31 2013 ./bin/postupload.sh
-rwxr-xr-x 1 vivek vivek 437 Dec 31 2013 ./bin/preupload.sh
-rwxr-xr-x 1 vivek vivek 1046 May 18 2017 ./bin/purge.all.cloudflare.domain.sh
lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/tipspostupload.sh -> postupload.sh
lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/tipspreupload.sh -> preupload.sh
lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/tipsuploadimage.sh -> uploadimage.sh
-rwxr-xr-x 1 vivek vivek 1193 Oct 18 2013 ./bin/uploadimage.sh
-rwxr-xr-x 1 vivek vivek 29 Nov 6 14:33 ./.vim/plugged/neomake/tests/fixtures/errors.sh
-rwxr-xr-x 1 vivek vivek 215 Nov 6 14:33 ./.vim/plugged/neomake/tests/helpers/trap.sh
```

### Tar 命令

要[创建 /home/vivek/projects 目录的 tar 包][2],运行:

```
$ tar -cvf /home/vivek/projects.tar /home/vivek/projects
```

### 结合 find 和 tar 命令

语法是:

```
find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} \;
```

或者

```
find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} +
```

例子:

```
find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" \;
```

或者

```
find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" +
```

这里,`find` 命令的选项:

* `-name "*.doc"`:按照给定的模式/标准查找文件。在这里,在 `$HOME` 中查找所有 `*.doc` 文件。
* `-exec tar ...`:对 `find` 命令找到的所有文件执行 `tar` 命令。

这里,`tar` 命令的选项:

* `-r`:将文件追加到归档末尾。参数与 `-c` 选项具有相同的含义。
* `-v`:详细输出。
* `-f out.tar`:将所有文件追加到 `out.tar` 中。

也可以像下面这样将 `find` 命令的输出通过管道输入到 `tar` 命令中:

```
find $HOME -name "*.doc" -print0 | tar -cvf /tmp/file.tar --null -T -
```

传递给 `find` 命令的 `-print0` 选项处理特殊的文件名。`--null` 和 `-T` 选项告诉 `tar` 命令从标准输入/管道读取输入。也可以使用 `xargs` 命令:

```
find $HOME -type f -name "*.sh" | xargs tar cfvz /nfs/x230/my-shell-scripts.tgz
```

有关更多信息,请参阅下面的 man 页面:

```
$ man tar
$ man find
$ man xargs
$ man bash
```

------------------------------

作者简介:

作者是 nixCraft 的创造者,是一名经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本培训师。他曾与全球客户以及 IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 Twitter、Facebook 和 Google+ 上关注他。

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/linux-unix-find-tar-files-into-tarball-command/

作者:[Vivek Gite][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-find-and-tar-files-on-linux-unix.jpg
[2]:https://www.cyberciti.biz/faq/creating-a-tar-file-linux-command-line/
127
published/20171226 How to Configure Linux for Children.md
Normal file

@ -0,0 +1,127 @@

如何配置一个小朋友使用的 Linux
======

![](https://www.maketecheasier.com/assets/uploads/2017/12/keep-kids-safe-online-hero.jpg)

如果你接触电脑有一段时间了,提到 Linux,你应该会联想到一些特定的人群。你觉得哪些人在使用 Linux?别担心,这就告诉你。

Linux 是一个可以深度定制的操作系统。这就赋予了用户高度控制权。事实上,家长们可以针对小朋友设置出一个专门的 Linux 发行版,确保让孩子不会在不经意间接触那些高危地带。相比 Windows,这些设置虽然更费时,但是一劳永逸。Linux 的开源免费,让教室或计算机实验室的系统部署变得容易。

### 小朋友的 Linux 发行版

这些为儿童而简化的 Linux 发行版,界面对儿童十分友好。家长只需要先安装和设置,孩子就可以完全独立地使用计算机了。你将看见多彩的图形界面,丰富的图画,简明的语言。

不过,不幸的是,这类发行版不会经常更新,甚至有些已经不再积极开发了。但也不意味着不能使用,只是故障发生率可能会高一点。

![qimo-gcompris][1]

#### 1、 Edubuntu

[Edubuntu][2] 是 Ubuntu 的一个分支版本,专用于教育事业。它拥有丰富的图形环境和大量教育软件,易于更新维护。它被设计成初高中学生专用的操作系统。

#### 2、 Ubermix

[Ubermix][3] 是根据教育需求而被设计出来的。Ubermix 将学生从复杂的计算机设备中解脱出来,就像手机一样简单易用,而不会牺牲性能和操作系统的全部能力。一键开机、五分钟安装、二十秒钟快速还原机制,以及超过 60 个的免费预装软件,ubermix 就可以让你的硬件变成功能强大的学习设备。

#### 3、 Sugar

[Sugar][4] 是为“每个孩子一台笔记本(OLPC)计划”而设计的操作系统。Sugar 和普通桌面 Linux 大不相同,它更专注于学生课堂使用和教授编程能力。

**注意**:很多为儿童开发的 Linux 发行版我并没有列举,因为它们大都不再积极维护或是被长时间遗弃。

### 为小朋友筛选内容的 Linux

只有你,最能保护孩子拒绝访问少儿不宜的内容,但是你不可能每分每秒都在孩子身边。这时你可以(通过软件)设置内容过滤代理服务器,限制对不良 URL 的访问。这里有两个主要的软件可以帮助你。

![儿童内容过滤 Linux][5]

#### 1、 DansGuardian

[DansGuardian][6],一个开源内容过滤软件,几乎可以工作在任何 Linux 发行版上,灵活而强大,需要你通过命令行设置你的代理。如果你不深究代理服务器的设置,这可能是最强力的选择。

配置 DansGuardian 可不是轻松活儿,但是你可以跟着安装说明按步骤完成。一旦设置完成,它将是过滤不良内容的高效工具。

#### 2、 Parental Control: Family Friendly Filter

[Parental Control: Family Friendly Filter][7] 是 Firefox 的插件,允许家长屏蔽包含色情内容在内的任何少儿不宜的网站。你也可以设置不良网站黑名单,将其一直屏蔽。

![firefox 内容过滤插件][8]

你使用的老版本的 Firefox 可能不支持 [网页插件][9],那么你可以使用 [ProCon Latte 内容过滤器][10]。家长们添加网址到预设的黑名单内,然后设置密码,防止设置被篡改。

#### 3、 Blocksi 网页过滤

[Blocksi 网页过滤][11] 是 Chrome 浏览器插件,能有效过滤网页和 YouTube。它也提供限时服务,这样你可以限制家里小朋友的上网时间。

### 闲趣

![Linux 儿童游戏:tux kart][12]

给孩子们使用的计算机,不管是否用作教育,最好都要有一些游戏。虽然 Linux 没有 Windows 那么好的游戏性,但也在奋力追赶。这里推荐几个有益的游戏,你可以安装到孩子们的计算机上。

* [Super Tux Kart][21](竞速卡丁车)
* [GCompris][22](适合教育的游戏)
* [Secret Maryo Chronicles][23](超级马里奥)
* [Childsplay][24](教育/记忆力游戏)
* [EToys][25](儿童编程)
* [TuxTyping][26](打字游戏)
* [Kalzium][27](元素周期表)
* [Tux of Math Command][28](数学游戏)
* [Pink Pony][29](Tron 风格竞速游戏)
* [KTuberling][30](创造游戏)
* [TuxPaint][31](绘画)
* [Blinken][32]([记忆力][33] 游戏)
* [KTurtle][34](编程指导环境)
* [KStars][35](天文馆)
* [Marble][36](虚拟地球)
* [KHangman][37](猜单词)

### 结论:为什么给孩子使用 Linux?

Linux 以复杂著称。那为什么给孩子使用 Linux?这是为了让孩子从小适应 Linux。在 Linux 上的操作,为孩子了解系统的运行提供了很多机会。当孩子长大,他们就有随自己兴趣探索的机会。得益于 Linux 如此开放的平台,孩子们才能得到这么一个极佳的场所,发现自己对计算机的毕生之恋。

本文于 2010 年 7 月首发,2017 年 12 月更新。

图片来自 [在校学生][13]

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/configure-linux-for-children/

作者:[Alexander Fox][a]
译者:[CYLeft](https://github.com/CYLeft)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com/author/alexfox/
[1]:https://www.maketecheasier.com/assets/uploads/2010/08/qimo-gcompris.jpg (qimo-gcompris)
[2]:http://www.edubuntu.org
[3]:http://www.ubermix.org/
[4]:http://wiki.sugarlabs.org/go/Downloads
[5]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-content-filtering.png (linux-for-children-content-filtering)
[6]:https://help.ubuntu.com/community/DansGuardian
[7]:https://addons.mozilla.org/en-US/firefox/addon/family-friendly-filter/
[8]:https://www.maketecheasier.com/assets/uploads/2017/12/firefox-content-filter-addon.png (firefox-content-filter-addon)
[9]:https://www.maketecheasier.com/best-firefox-web-extensions/
[10]:https://addons.mozilla.org/en-US/firefox/addon/procon-latte/
[11]:https://chrome.google.com/webstore/detail/blocksi-web-filter/pgmjaihnmedpcdkjcgigocogcbffgkbn?hl=en
[12]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-tux-kart-e1513389774535.jpg (linux-for-children-tux-kart)
[13]:https://www.flickr.com/photos/lupuca/8720604364
[21]:http://supertuxkart.sourceforge.net/
[22]:http://gcompris.net/
[23]:http://www.secretmaryo.org/
[24]:http://www.schoolsplay.org/
[25]:http://www.squeakland.org/about/intro/
[26]:http://tux4kids.alioth.debian.org/tuxtype/index.php
[27]:http://edu.kde.org/kalzium/
[28]:http://tux4kids.alioth.debian.org/tuxmath/index.php
[29]:http://code.google.com/p/pink-pony/
[30]:http://games.kde.org/game.php?game=ktuberling
[31]:http://www.tuxpaint.org/
[32]:https://www.kde.org/applications/education/blinken/
[33]:https://www.ebay.com/sch/i.html?_nkw=memory
[34]:https://www.kde.org/applications/education/kturtle/
[35]:https://www.kde.org/applications/education/kstars/
[36]:https://www.kde.org/applications/education/marble/
[37]:https://www.kde.org/applications/education/khangman/
@ -0,0 +1,56 @@

4 artificial intelligence trends to watch
======

![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Mentor.png?itok=K-6s_q2C)

However much your IT operation is using [artificial intelligence][1] today, expect to be doing more with it in 2018. Even if you have never dabbled in AI projects, this may be the year talk turns into action, says David Schatsky, managing director at [Deloitte][2]. "The number of companies doing something with AI is on track to rise," he says.

Check out his AI predictions for the coming year:

### 1. Expect more enterprise AI pilot projects

Many of today's off-the-shelf applications and platforms that companies already routinely use incorporate AI. "But besides that, a growing number of companies are experimenting with machine learning or natural language processing to solve particular problems or help understand their data, or automate internal processes, or improve their own products and services," Schatsky says.

**[ What IT jobs will be hot in the AI age? See our related article, [8 emerging AI jobs for IT pros][3]. ]**

"Beyond that, the intensity with which companies are working with AI will rise," he says. "Companies that are early adopters already mostly have five or fewer projects underway, but we think that number will rise to having 10 or more pilots underway." One reason for this prediction, he says, is that AI technologies are getting better and easier to use.

### 2. AI will help with data science talent crunch

Talent is a huge problem in data science, where most large companies are struggling to hire the data scientists they need. AI can take up some of the load, Schatsky says. "The practice of data science is increasingly automatable with tools offered both by startups and large, established technology vendors," he says. A lot of data science work is repetitive and tedious, and ripe for automation, he explains. "Data scientists aren't going away, but they're going to get much more productive. So a company that can only do a few data science projects without automation will be able to do much more with automation, even if it can't hire any more data scientists."

### 3. Synthetic data models will ease bottlenecks

Before you can train a machine learning model, you have to get the data to train it on, Schatsky notes. That's not always easy. "That's often a business bottleneck, not a production bottleneck," he says. In some cases you can't get the data because of regulations governing things like health records and financial information.

Synthetic data models can take a smaller set of data and use it to generate the larger set that may be needed, he says. "If you used to need 10,000 data points to train a model but could only get 2,000, you can now generate the missing 8,000 and go ahead and train your model."
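
Schatsky doesn't tie this to any particular tool. As a purely illustrative sketch (the function name and numbers below are made up), the simplest form of the idea is to fit a distribution to the small real sample you do have and draw the missing points from that fit:

```python
import random
import statistics

def synthesize(sample, n_extra, seed=42):
    """Fit a Gaussian to the real measurements, then draw extra synthetic
    points from that fitted distribution. Real synthetic-data tools model
    far richer structure (correlations, categories), but the idea is the same."""
    mu = statistics.mean(sample)
    sigma = statistics.stdev(sample)
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    return [rng.gauss(mu, sigma) for _ in range(n_extra)]

# A miniature version of "you need 10,000 points but could only get 2,000":
real = [9.8, 10.1, 10.4, 9.7, 10.0]   # the data you actually have
extra = synthesize(real, 20)          # the points you generate
print(len(real) + len(extra))         # → 25
```

The obvious caveat, which the transparency discussion later touches on, is that synthetic points can only reproduce patterns already present in the seed sample.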

### 4. AI decision-making will become more transparent

One of the business problems with AI is that it often operates as a black box. That is, once you train a model, it will spit out answers that you can't necessarily explain. "Machine learning can automatically discover patterns in data that a human can't see because it's too much data or too complex," Schatsky says. "Having discovered these patterns, it can make predictions about new data it hasn't seen."

The problem is that sometimes you really do need to know the reasons behind an AI finding or prediction. "You feed in a medical image and the model says, based on the data you've given me, there's a 90 percent chance that there's a tumor in this image," Schatsky says. "You say, 'Why do you think so?' and the model says, 'I don't know, that's what the data would suggest.'"

If you follow that data, you're going to have to do exploratory surgery on a patient, Schatsky says. That's a tough call to make when you can't explain why. "There are a lot of situations where even though the model produces very accurate results, if it can't explain how it got there, nobody wants to trust it."

There are also situations where because of regulations, you literally can't use data that you can't explain. "If a bank declines a loan application, it needs to be able to explain why," Schatsky says. "That's a regulation, at least in the U.S. Traditionally, a human underwriter makes that call. A machine learning model could be more accurate, but if it can't explain its answer, it can't be used."

Most algorithms were not designed to explain their reasoning. "So researchers are finding clever ways to get AI to spill its secrets and explain what variables make it more likely that this patient has a tumor," he says. "Once they do that, a human can look at the answers and see why it came to that conclusion."

That means AI findings and decisions can be used in many areas where they can't be today, he says. "That will make these models more trustworthy and more usable in the business world."

--------------------------------------------------------------------------------

via: https://enterprisersproject.com/article/2018/1/4-ai-trends-watch

作者:[Minda Zetlin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://enterprisersproject.com/user/minda-zetlin
[1]:https://enterprisersproject.com/tags/artificial-intelligence
[2]:https://www2.deloitte.com/us/en.html
[3]:https://enterprisersproject.com/article/2017/12/8-emerging-ai-jobs-it-pros?sc_cid=70160000000h0aXAAQ
@ -1,82 +0,0 @@

AI and machine learning bias has dangerous implications
======

translating

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_goodbadugly.png?itok=ZxaimUWU)

Image by : opensource.com

Algorithms are everywhere in our world, and so is bias. From social media news feeds to streaming service recommendations to online shopping, computer algorithms--specifically, machine learning algorithms--have permeated our day-to-day world. As for bias, we need only examine the 2016 American election to understand how deeply--both implicitly and explicitly--it permeates our society as well.

What's often overlooked, however, is the intersection between these two: bias in computer algorithms themselves.

Contrary to what many of us might think, technology is not objective. AI algorithms and their decision-making processes are directly shaped by those who build them--what code they write, what data they use to "[train][1]" the machine learning models, and how they [stress-test][2] the models after they're finished. This means that the programmers' values, biases, and human flaws are reflected in the software. If I fed an image-recognition algorithm the faces of only white researchers in my lab, for instance, it [wouldn't recognize non-white faces as human][3]. Such a conclusion isn't the result of a "stupid" or "unsophisticated" AI, but to a bias in training data: a lack of diverse faces. This has dangerous consequences.

There's no shortage of examples. [State court systems][4] across the country use "black box" algorithms to recommend prison sentences for convicts. [These algorithms are biased][5] against black individuals because of the data that trained them--so they recommend longer sentences as a result, thus perpetuating existing racial disparities in prisons. All this happens under the guise of objective, "scientific" decision-making.

The United States federal government uses machine-learning algorithms to calculate welfare payouts and other types of subsidies. But [information on these algorithms][6], such as their creators and their training data, is extremely difficult to find--which increases the risk of public officials operating under bias and meting out systematically unfair payments.

This list goes on. From Facebook news algorithms to medical care systems to police body cameras, we as a society are at great risk of inserting our biases--racism, sexism, xenophobia, socioeconomic discrimination, confirmation bias, and more--into machines that will be mass-produced and mass-distributed, operating under the veil of perceived technological objectivity.

This must stop.

While we should by no means halt research and development on artificial intelligence, we need to slow its development such that we tread carefully. The danger of algorithmic bias is already too great.

## How can we fight algorithmic bias?

One of the best ways to fight algorithmic bias is by vetting the training data fed into machine learning models themselves. As [researchers at Microsoft][2] point out, this can take many forms.

The data itself might have a skewed distribution--for instance, programmers may have more data about United States-born citizens than immigrants, and about rich men than poor women. Such imbalances will cause an AI to make improper conclusions about how our society is in fact represented--i.e., that most Americans are wealthy white businessmen--simply because of the way machine-learning models make statistical correlations.
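
A cheap first line of defense against this kind of skew is simply to measure it before training anything. The sketch below (the field name and threshold are hypothetical, chosen for illustration) counts how each group is represented in a training set and flags any group far below a uniform share:

```python
from collections import Counter

def representation_report(records, field, tolerance=0.5):
    """Count how often each value of `field` appears in the training records,
    and flag any group whose share falls below `tolerance` times a uniform split."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    uniform = 1 / len(counts)  # the share each group would have if balanced
    return {value: (n / total, n / total < tolerance * uniform)
            for value, n in counts.items()}

# Hypothetical toy training set: one group heavily over-represented.
data = [{"sex": "male"}] * 80 + [{"sex": "female"}] * 20
report = representation_report(data, "sex")
print(report)  # female's share (0.2) is flagged; it is under half of a uniform split
```

A check like this catches only the crudest imbalance (raw counts), which is exactly why the next paragraph matters: balanced counts can still carry prejudiced representations.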

It's also possible, even if men and women are equally represented in training data, that the representations themselves result in prejudiced understandings of humanity. For instance, if all the pictures of "male occupation" are of CEOs and all those of "female occupation" are of secretaries (even if more CEOs are in fact male than female), the AI could conclude that women are inherently not meant to be CEOs.

We can imagine similar issues, for example, with law enforcement AIs that examine representations of criminality in the media, which dozens of studies have shown to be [egregiously slanted][7] towards black and Latino citizens.

Bias in training data can take many other forms as well--unfortunately, more than can be adequately covered here. Nonetheless, training data is just one form of vetting; it's also important that AI models are "stress-tested" after they're completed to seek out prejudice.

If we show an Indian face to our camera, is it appropriately recognized? Is our AI less likely to recommend a job candidate from an inner city than a candidate from the suburbs, even if they're equally qualified? How does our terrorism algorithm respond to intelligence on a white domestic terrorist compared to an Iraqi? Can our ER camera pull up medical records of children?

These are obviously difficult issues to resolve in the data itself, but we can begin to identify and address them through comprehensive testing.
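
As one deliberately tiny, hypothetical example of such a test (the model and data below are invented), a stress-test can score a finished model separately on each demographic group and alarm on large gaps, no matter how good the overall accuracy looks:

```python
def accuracy_by_group(examples, predict):
    """Score a trained model separately per group. `examples` are
    (features, group, label) triples; `predict` is the model under test."""
    correct, seen = {}, {}
    for features, group, label in examples:
        seen[group] = seen.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predict(features) == label)
    accuracy = {g: correct[g] / seen[g] for g in seen}
    gap = max(accuracy.values()) - min(accuracy.values())  # worst disparity
    return accuracy, gap

# Hypothetical model that happens to work well only for group "A".
model = lambda score: score >= 5
cases = [(6, "A", True), (7, "A", True), (2, "A", False),
         (6, "B", True), (1, "B", True), (2, "B", True)]
acc, gap = accuracy_by_group(cases, model)
print(acc, round(gap, 2))  # a large gap between groups is the red flag
```

Tools like the DeepXplore work mentioned below automate far more sophisticated versions of this idea; the per-group breakdown is the minimum any vetting process should report.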

## Why is open source well-suited for this task?

Both open source technology and open source methodologies have extreme potential to help in this fight against algorithmic bias.

Modern artificial intelligence is dominated by open source software, from TensorFlow to IBM Watson to packages like [scikit-learn][8]. The open source community has already proven extremely effective in developing robust and rigorously tested machine-learning tools, so it follows that the same community could effectively build anti-bias tests into that same software.

Debugging tools like [DeepXplore][9], out of Columbia and Lehigh Universities, for example, make the AI stress-testing process extensive yet also easy to navigate. This and other projects, such as work being done at [MIT's Computer Science and Artificial Intelligence Lab][10], develop the agile and rapid prototyping the open source community should adopt.

Open source technology has also proven to be extremely effective for vetting and sorting large sets of data. Nothing should make this more obvious than the domination of open source tools in the data analysis market (Weka, Rapid Miner, etc.). Tools for identifying data bias should be designed by the open source community, and those techniques should also be applied to the plethora of open training data sets already published on sites like [Kaggle][11].

The open source methodology itself is also well-suited for designing processes to fight bias. Making conversations about software open, democratized, and in tune with social good are pivotal to combating an issue that is partly caused by the very opposite--closed conversations, private software development, and undemocratized decision-making. If online communities, corporations, and academics can adopt these open source characteristics when approaching machine learning, fighting algorithmic bias should become easier.

## How can we all get involved?

Education is extremely important. We all know people who may be unaware of algorithmic bias but who care about its implications--for law, social justice, public policy, and more. It's critical to talk to those people and explain both how the bias is formed and why it matters because the only way to get these conversations started is to start them ourselves.

For those of us who work with artificial intelligence in some capacity--as developers, on the policy side, through academic research, or in other capacities--these conversations are even more important. Those who are designing the artificial intelligence of tomorrow need to understand the extreme dangers that bias presents today; clearly, integrating anti-bias processes into software design depends on this very awareness.

Finally, we should all build and strengthen open source community around ethical AI. Whether that means contributing to software tools, stress-testing machine learning models, or sifting through gigabytes of training data, it's time we leverage the power of open source methodology to combat one of the greatest threats of our digital age.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/how-open-source-can-fight-algorithmic-bias

作者:[Justin Sherman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/justinsherman
[1]:https://www.crowdflower.com/what-is-training-data/
[2]:https://medium.com/microsoft-design/how-to-recognize-exclusion-in-ai-ec2d6d89f850
[3]:https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms
[4]:https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/
[5]:https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[6]:https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3012499
[7]:https://www.hivlawandpolicy.org/sites/default/files/Race%20and%20Punishment-%20Racial%20Perceptions%20of%20Crime%20and%20Support%20for%20Punitive%20Policies%20%282014%29.pdf
[8]:http://scikit-learn.org/stable/
[9]:https://arxiv.org/pdf/1705.06640.pdf
[10]:https://www.csail.mit.edu/research/understandable-deep-networks
[11]:https://www.kaggle.com/datasets

sources/talk/20180115 Why DevSecOps matters to IT leaders.md
@ -0,0 +1,86 @@

Why DevSecOps matters to IT leaders
======

![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/TEP_SecurityTraining1_620x414_1014.png?itok=zqxqJGDG)

If [DevOps][1] is ultimately about building better software, that means better-secured software, too.

Enter the term "DevSecOps." Like any IT term, DevSecOps - a descendant of the better-established DevOps - could be susceptible to hype and misappropriation. But the term has real meaning for IT leaders who've embraced a culture of DevOps and the practices and tools that help deliver on its promise.

Speaking of which: What does "DevSecOps" mean?

"DevSecOps is a portmanteau of development, security, and operations," says Robert Reeves, CTO and co-founder at [Datical][2]. "It reminds us that security is just as important to our applications as creating them and deploying them to production."

**[ Want DevOps advice from other CIOs? See our comprehensive resource, [DevOps: The IT Leader's Guide][3]. ]**

One easy way to explain DevSecOps to non-technical people: It bakes security into the development process intentionally and earlier.

"Security teams have historically been isolated from development teams - and each team has developed deep expertise in different areas of IT," [Red Hat][4] security strategist Kirsten Newcomer [told us][5] recently. "It doesn't need to be this way. Enterprises that care deeply about security and also care deeply about their ability to quickly deliver business value through software are finding ways to move security left in their application development lifecycles. They're adopting DevSecOps by integrating security practices, tooling, and automation throughout the CI/CD pipeline."

"To do this well, they're integrating their teams - security professionals are embedded with application development teams from inception (design) through to production deployment," she says. "Both sides are seeing the value - each team expands their skill sets and knowledge base, making them more valuable technologists. DevOps done right - or DevSecOps - improves IT security."

IT teams are tasked with delivering services faster and more frequently than ever before. DevOps can be a great enabler of this, in part because it can remove some of the traditional friction between development and operations teams that commonly surfaced when Ops was left out of the process until deployment time and Dev tossed its code over an invisible wall, never to manage it again, much less have any infrastructure responsibility. That kind of siloed approach causes problems, to put it mildly, in the digital age. According to Reeves, the same holds true if security exists in a silo.

"We have adopted DevOps because it's proven to improve our IT performance by removing the barriers between development and operations," Reeves says. "Much like we shouldn't wait until the end of the deployment cycle to involve operations, we shouldn't wait until the end to involve security."

### Why DevSecOps is here to stay

It may be tempting to see DevSecOps as just another buzzword, but for security-conscious IT leaders, it's a substantive term: Security must be a first-class citizen in the software development pipeline, not something that gets bolted on as a final step before a deploy, or worse, as a team that gets scrambled only after an actual incident occurs.

"DevSecOps is not just a buzzword - it is the current and future state of IT for multiple reasons," says George Gerchow, VP of security and compliance at [Sumo Logic][6]. "The most important benefit is the ability to bake security into development and operational processes to provide guardrails - not barriers - to achieve agility and innovation."

Moreover, the appearance of DevSecOps on the scene might be another sign that DevOps itself is maturing and digging deep roots inside IT.

"The culture of DevOps in the enterprise is here to stay, and that means that developers are delivering features and updates to the production environment at an increasingly higher velocity, especially as the self-organizing teams become more comfortable with both collaboration and measurement of results," says Mike Kail, CTO and co-founder at [CYBRIC][7].

Teams and companies that have kept their old security practices in place while embracing DevOps are likely experiencing an increasing amount of pain managing security risks as they continue to deploy faster and more frequently.

"The current, manual testing approaches of security continue to fall further and further behind, and leveraging both automation and collaboration to shift security testing left into the software development life cycle, thus driving the culture of DevSecOps, is the only way for IT leaders to increase overall resiliency and delivery security assurance," Kail says.

Shifting security testing left (earlier) benefits developers, too: Rather than finding out about a glaring hole in their code right before a new or updated service is set to deploy, they can identify and resolve potential issues during much earlier stages of development - often with little or no intervention from security personnel.

"Done right, DevSecOps can ingrain security into the development lifecycle, empowering developers to more quickly and easily secure their applications without security disruptions," says Brian Wilson, chief information security officer at [SAS][8].

Wilson points to static application security testing (SAST) and source composition analysis (SCA) tools, integrated into a team's continuous delivery pipelines, as useful technologies that help make this possible by giving developers feedback about potential issues in their own code as well as vulnerabilities in third-party dependencies.
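As a toy illustration of the kind of automated feedback an SCA step can provide, the sketch below diffs a project's pinned dependencies against a vulnerability feed. The package names and advisory data are invented for the example; real tools consume live feeds such as CVE databases:

```python
# Toy SCA-style check: flag pinned dependencies that appear in a
# (hypothetical) advisory feed. All names and versions are made up.

# advisory feed: package -> set of known-vulnerable versions
ADVISORIES = {
    "examplelib": {"1.0.2", "1.0.3"},
    "otherpkg": {"2.1.0"},
}

def parse_requirements(lines):
    """Parse 'name==version' lines into (name, version) pairs."""
    pins = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, version = line.partition("==")
        pins.append((name, version))
    return pins

def vulnerable(pins, advisories=ADVISORIES):
    """Return the subset of pins with a known advisory."""
    return [(n, v) for n, v in pins if v in advisories.get(n, ())]

if __name__ == "__main__":
    reqs = ["examplelib==1.0.2", "otherpkg==2.2.0", "# a comment"]
    for name, version in vulnerable(parse_requirements(reqs)):
        print(f"VULNERABLE: {name}=={version}")
```

Wired into a CI pipeline, a non-empty result would fail the build before the vulnerable dependency ever reaches production.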

"As a result, developers can proactively and iteratively mitigate appsec issues and rerun security scans without the need to involve security personnel," Wilson says. He notes, too, that DevSecOps can also help the Dev team streamline updates and patching.

DevSecOps doesn't mean you no longer need security pros, just as DevOps doesn't mean you no longer need infrastructure experts; it just reduces the likelihood of flaws finding their way into production, or of late-caught flaws slowing down deployments.

"We're here if they have questions or need help, but having given developers the tools they need to secure their apps, we're less likely to find a showstopper issue during a penetration test," Wilson says.

### DevSecOps meets Meltdown

Sumo Logic's Gerchow shares a timely example of the DevSecOps culture in action: When the recent [Meltdown and Spectre][9] news hit, the team's DevSecOps approach enabled a rapid response to mitigate its risks without any noticeable disruption to internal or external customers, which Gerchow said was particularly important for the cloud-native, highly regulated company.

The first step: Gerchow's small security team, which he notes also has development skills, was able to work with one of its main cloud vendors via Slack to ensure its infrastructure was completely patched within 24 hours.

"My team then began OS-level fixes immediately with zero downtime to end users without having to open tickets and requests with engineering that would have meant waiting on a long change management process. All the changes were accounted for via automated Jira tickets opened via Slack and monitored through our logs and analytics solution," Gerchow explains.

In essence, it sounds a whole lot like the culture of DevOps, matched with the right mix of people, processes, and tools, but it explicitly includes security as part of that culture and mix.

"In traditional environments, it would have taken weeks or months to do this with downtime because all three functions - development, operations, and security - were siloed," Gerchow says. "With a DevSecOps process and mindset, end users get a seamless experience with easy communication and same-day fixes."

--------------------------------------------------------------------------------

via: https://enterprisersproject.com/article/2018/1/why-devsecops-matters-it-leaders

作者:[Kevin Casey][a]

译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/tags/devops
[2]:https://www.datical.com/
[3]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ
[4]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
[5]:https://enterprisersproject.com/article/2017/10/what-s-next-devops-5-trends-watch
[6]:https://www.sumologic.com/
[7]:https://www.cybric.io/
[8]:https://www.sas.com/en_us/home.html
[9]:https://www.redhat.com/en/blog/what-are-meltdown-and-spectre-heres-what-you-need-know?intcmp=701f2000000tjyaAAA

sources/talk/20180117 How to get into DevOps.md
@ -0,0 +1,143 @@

How to get into DevOps
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E)

I've observed a sharp uptick of developers and systems administrators interested in "getting into DevOps" within the past year or so. This pattern makes sense: In an age in which a single developer can spin up a globally distributed infrastructure for an application with a few dollars and a few API calls, the gap between development and systems administration is narrower than ever. Although I've seen plenty of blog posts and articles about cool DevOps tools and ideas to think about, I've seen less content offering pointers and suggestions for people looking to get into this work.

My goal with this article is to draw what that path looks like. My thoughts are based upon several interviews, chats, late-night discussions on [reddit.com/r/devops][1], and random conversations, likely over beer and delicious food. I'm also interested in hearing feedback from those who have made the jump; if you have, please reach out through [my blog][2], [Twitter][3], or in the comments below. I'd love to hear your thoughts and stories.

### Olde world IT

Understanding history is key to understanding the future, and DevOps is no exception. To understand the pervasiveness and popularity of the DevOps movement, it's helpful to understand what IT was like in the late '90s and most of the '00s. This was my experience.

I started my career in late 2006 as a Windows systems administrator in a large, multi-national financial services firm. In those days, adding new compute involved calling Dell (or, in our case, CDW) and placing a multi-hundred-thousand-dollar order of servers, networking equipment, cables, and software, all destined for your on- and offsite datacenters. Although VMware was still convincing companies that using virtual machines was, indeed, a cost-effective way of hosting their "performance-sensitive" applications, many companies, including mine, pledged allegiance to running applications on their physical hardware.

Our technology department had an entire group dedicated to datacenter engineering and operations, and its job was to negotiate our leasing rates down to some slightly less absurd monthly rate and ensure that our systems were being cooled properly (an exponentially difficult problem if you have enough equipment). If the group was lucky/wealthy enough, the offshore datacenter crew knew enough about all of our server models to not accidentally pull the wrong thing during after-hours trading. Amazon Web Services and Rackspace were slowly beginning to pick up steam, but were far from critical mass.

In those days, we also had teams dedicated to ensuring that the operating systems and software running on top of that hardware worked when they were supposed to. The engineers were responsible for designing reliable architectures for patching, monitoring, and alerting these systems as well as defining what the "gold image" looked like. Most of this work was done with a lot of manual experimentation, and the extent of most tests was writing a runbook describing what you did, and ensuring that what you did actually did what you expected it to do after following said runbook. This was important in a large organization like ours, since most of the level 1 and 2 support was offshore, and the extent of their training ended with those runbooks.

(This is the world that your author lived in for the first three years of his career. My dream back then was to be the one who made the gold standard!)

Software releases were another beast altogether. Admittedly, I didn't gain a lot of experience working on this side of the fence. However, from stories that I've gathered (and recent experience), much of the daily grind for software development during this time went something like this:

* Developers wrote code as specified by the technical and functional requirements laid out by business analysts from meetings they weren't invited to.
* Optionally, developers wrote unit tests for their code to ensure that it didn't do anything obviously crazy, like try to divide by zero without throwing an exception.
* When done, developers would mark their code as "Ready for QA." A quality assurance person would pick up the code and run it in their own environment, which might or might not be like production or even the environment the developer used to test their own code against.
* Failures would get sent back to the developers within "a few days or weeks" depending on other business activities and where priorities fell.

Although sysadmins and developers didn't often see eye to eye, the one thing they shared a common hatred for was "change management." This was a composition of highly regulated (and in the case of my employer at the time), highly necessary rules and procedures governing when and how technical changes happened in a company. Most companies followed [ITIL][4] practices, which, in a nutshell, asked a lot of questions around why, when, where, and how things happened and provided a process for establishing an audit trail of the decisions that led up to those answers.

As you could probably gather from my short history lesson, many, many things were done manually in IT. This led to a lot of mistakes. Lots of mistakes led up to lots of lost revenue. Change management's job was to minimize those lost revenues; this usually came in the form of releases only every two weeks and changes to servers, regardless of their impact or size, queued up to occur between Friday at 4 p.m. and Monday at 5:59 a.m. (Ironically, this batching of work led to even more mistakes, usually more serious ones.)

### DevOps isn't a Tiger Team

You might be thinking "What is Carlos going on about, and when is he going to talk about Ansible playbooks?" I love Ansible tons, but hang on; this is important.

Have you ever been assigned to a project where you had to interact with the "DevOps" team? Or did you have to rely on a "configuration management" or "CI/CD" team to ensure your pipeline was set up properly? Have you had to attend meetings about your release and what it pertains to--weeks after the work was marked "code complete"?

If so, then you're reliving history. All of that comes from all of the above.

[Silos form][5] out of an instinctual draw to working with people like ourselves. Naturally, it's no surprise that this human trait also manifests in the workplace. I even saw this play out at a 250-person startup where I used to work. When I started, developers all worked in common pods and collaborated heavily with each other. As the codebase grew in complexity, developers who worked on common features naturally aligned with each other to try and tackle the complexity within their own feature. Soon afterwards, feature teams were officially formed.

Sysadmins and developers at many of the companies I worked at not only formed natural silos like this, but also fiercely competed with each other. Developers were mad at sysadmins when their environments were broken. Developers were mad at sysadmins when their environments were too locked down. Sysadmins were mad that developers were breaking their environments in arbitrary ways all of the time. Sysadmins were mad at developers for asking for way more computing power than they needed. Neither side understood each other, and worse yet, neither side wanted to.

Most developers were uninterested in the basics of operating systems, kernels, or, in some cases, computer hardware. As well, most sysadmins, even Linux sysadmins, took a 10-foot pole approach to learning how to code. They tried a bit of C in college, hated it and never wanted to touch an IDE again. Consequently, developers threw their environment problems over the wall to sysadmins, sysadmins prioritized them with the hundreds of other things that were thrown over the wall to them, and everyone busy-waited angrily while hating each other. The purpose of DevOps was to put an end to this.

DevOps isn't a team. CI/CD isn't a group in Jira. DevOps is a way of thinking. According to the movement, in an ideal world, developers, sysadmins, and business stakeholders would be working as one team. While they might not know everything about each other's worlds, not only do they all know enough to understand each other and their backlogs, but they can, for the most part, speak the same language.

This is the basis behind having all infrastructure and business logic be in code and subject to the same deployment pipelines as the software that sits on top of it. Everybody is winning because everyone understands each other. This is also the basis behind the rise of other tools like chatbots and easily accessible monitoring and graphing.

[Adam Jacob said][6] it best: "DevOps is the word we will use to describe the operational side of the transition to enterprises being software led."

### What do I need to know to get into DevOps?

I'm commonly asked this question, and the answer, like most open-ended questions like this, is: It depends.

At the moment, the "DevOps engineer" varies from company to company. Smaller companies that have plenty of software developers but fewer folks that understand infrastructure will likely look for people with more experience administrating systems. Other, usually larger and/or older companies that have a solid sysadmin organization will likely optimize for something closer to a [Google site reliability engineer][7], i.e. "a software engineer to design an operations function." This isn't written in stone, however, as, like any technology job, the decision largely depends on the hiring manager sponsoring it.

That said, we typically look for engineers who are interested in learning more about:

* How to administrate and architect secure and scalable cloud platforms (usually on AWS, but Azure, Google Cloud Platform, and PaaS providers like DigitalOcean and Heroku are popular too);
* How to build and optimize deployment pipelines and deployment strategies on popular [CI/CD][8] tools like Jenkins, Go continuous delivery, and cloud-based ones like Travis CI or CircleCI;
* How to monitor, log, and alert on changes in your system with timeseries-based tools like Kibana, Grafana, Splunk, Loggly, or Logstash; and
* How to maintain infrastructure as code with configuration management tools like Chef, Puppet, or Ansible, as well as deploy said infrastructure with tools like Terraform or CloudFormation.
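To make the "infrastructure as code" idea in that last bullet concrete, here is a minimal sketch - not any particular tool's API, and with invented resource names - of the declare-desired-state-and-converge model that tools like Ansible and Terraform are built around:

```python
# Minimal sketch of the convergence loop behind configuration management
# tools: declare a desired state, inspect current state, and compute only
# the actions needed to reconcile the two. Host names are invented.

desired = {
    "web-01": {"pkg": "nginx", "state": "running"},
    "web-02": {"pkg": "nginx", "state": "running"},
}

current = {
    "web-01": {"pkg": "nginx", "state": "stopped"},  # drifted from desired
    # web-02 does not exist yet
}

def plan(desired, current):
    """Return the actions that converge current state onto desired state."""
    actions = []
    for host, want in desired.items():
        have = current.get(host)
        if have is None:
            actions.append(("create", host))
        elif have != want:
            actions.append(("update", host))
        # identical state -> no action: re-running the plan is idempotent
    return actions

print(plan(desired, current))  # -> [('update', 'web-01'), ('create', 'web-02')]
```

Running the same plan twice produces no further actions, which is the idempotency property these tools rely on.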
Containers are becoming increasingly popular as well. Despite the [beef against the status quo][9] surrounding Docker at scale, containers are quickly becoming a great way of achieving an extremely high density of services and applications running on fewer systems while increasing their reliability. (Orchestration tools like Kubernetes or Mesos can spin up new containers in seconds if the host they're being served by fails.) Given this, having knowledge of Docker or rkt and an orchestration platform will go a long way.

If you're a systems administrator that's looking to get into DevOps, you will also need to know how to write code. Python and Ruby are popular languages for this purpose, as they are portable (i.e., can be used on any operating system), fast, and easy to read and learn. They also form the underpinnings of the industry's most popular configuration management tools (Python for Ansible, Ruby for Chef and Puppet) and cloud API clients (Python and Ruby are commonly used for AWS, Azure, and Google Cloud Platform clients).

If you're a developer looking to make this change, I highly recommend learning more about Unix, Windows, and networking fundamentals. Even though the cloud abstracts away many of the complications of administrating a system, debugging slow application performance is aided greatly by knowing how these things work. I've included a few books on this topic in the next section.

If this sounds overwhelming, you aren't alone. Fortunately, there are plenty of small projects to dip your toes into. One such toy project is Gary Stafford's Voter Service, a simple Java-based voting platform. We ask our candidates to take the service from GitHub to production infrastructure through a pipeline. One can combine that with Rob Mile's awesome DevOps Tutorial repository to learn about ways of doing this.

Another great way of becoming familiar with these tools is taking popular services and setting up an infrastructure for them using nothing but AWS and configuration management. Set it up manually first to get a good idea of what to do, then replicate what you just did using nothing but CloudFormation (or Terraform) and Ansible. Surprisingly, this is a large part of the work that we infrastructure devs do for our clients on a daily basis. Our clients find this work to be highly valuable!

### Books to read

If you're looking for other resources on DevOps, here are some theory and technical books that are worth a read.

#### Theory books

* [The Phoenix Project][10] by Gene Kim. This is a great book that covers much of the history I explained earlier (with much more color) and describes the journey to a lean company running on agile and DevOps.
* [Driving Technical Change][11] by Terrance Ryan. Awesome little book on common personalities within most technology organizations and how to deal with them. This helped me out more than I expected.
* [Peopleware][12] by Tom DeMarco and Tim Lister. A classic on managing engineering organizations. A bit dated, but still relevant.
* [Time Management for System Administrators][13] by Tom Limoncelli. While this is heavily geared towards sysadmins, it provides great insight into the life of a systems administrator at most large organizations. If you want to learn more about the war between sysadmins and developers, this book might explain more.
* [The Lean Startup][14] by Eric Ries. Describes how Eric's 3D avatar company, IMVU, discovered how to work lean, fail fast, and find profit faster.
* [Lean Enterprise][15] by Jez Humble and friends. This book is an adaption of The Lean Startup for the enterprise. Both are great reads and do a good job of explaining the business motivation behind DevOps.
* [Infrastructure As Code][16] by Kief Morris. Awesome primer on, well, infrastructure as code! It does a great job of describing why it's essential for any business to adopt this for their infrastructure.
* [Site Reliability Engineering][17] by Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy. A book explaining how Google does SRE, also known as "DevOps before DevOps was a thing." Provides interesting opinions on how to handle uptime, latency, and keeping engineers happy.

#### Technical books

If you're looking for books that'll take you straight to code, you've come to the right section.

* [TCP/IP Illustrated][18] by the late W. Richard Stevens. This is the classic (and, arguably, complete) tome on the fundamental networking protocols, with special emphasis on TCP/IP. If you've heard of Layers 1, 2, 3, and 4 and are interested in learning more, you'll need this book.
* [UNIX and Linux System Administration Handbook][19] by Evi Nemeth, Trent Hein, and Ben Whaley. A great primer into how Linux and Unix work and how to navigate around them.
* [Learn Windows Powershell In A Month of Lunches][20] by Don Jones and Jeffrey Hicks. If you're doing anything automated with Windows, you will need to learn how to use Powershell. This is the book that will help you do that. Don Jones is a well-known MVP in this space.
* Practically anything by [James Turnbull][21]. He puts out great technical primers on popular DevOps-related tools.

From companies deploying everything to bare metal (there are plenty that still do, for good reasons) to trailblazers doing everything serverless, DevOps is likely here to stay for a while. The work is interesting, the results are impactful, and, most important, it helps bridge the gap between technology and business. It's a wonderful thing to see.

Originally published at [Neurons Firing on a Keyboard][22], CC-BY-SA.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/getting-devops

作者:[Carlos Nunez][a]

译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/carlosonunez
[1]:https://www.reddit.com/r/devops/
[2]:https://carlosonunez.wordpress.com/
[3]:https://twitter.com/easiestnameever
[4]:https://en.wikipedia.org/wiki/ITIL
[5]:https://www.psychologytoday.com/blog/time-out/201401/getting-out-your-silo
[6]:https://twitter.com/adamhjk/status/572832185461428224
[7]:https://landing.google.com/sre/interview/ben-treynor.html
[8]:https://en.wikipedia.org/wiki/CI/CD
[9]:https://thehftguy.com/2016/11/01/docker-in-production-an-history-of-failure/
[10]:https://itrevolution.com/book/the-phoenix-project/
[11]:https://pragprog.com/book/trevan/driving-technical-change
[12]:https://en.wikipedia.org/wiki/Peopleware:_Productive_Projects_and_Teams
[13]:http://shop.oreilly.com/product/9780596007836.do
[14]:http://theleanstartup.com/
[15]:https://info.thoughtworks.com/lean-enterprise-book.html
[16]:http://infrastructure-as-code.com/book/
[17]:https://landing.google.com/sre/book.html
[18]:https://en.wikipedia.org/wiki/TCP/IP_Illustrated
[19]:http://www.admin.com/
[20]:https://www.manning.com/books/learn-windows-powershell-in-a-month-of-lunches-third-edition
[21]:https://jamesturnbull.net/
[22]:https://carlosonunez.wordpress.com/2017/03/02/getting-into-devops/

@ -0,0 +1,140 @@

Linux/Unix App For Prevention Of RSI (Repetitive Strain Injury)
======

![workrave-image][1]

[A repetitive strain injury][2] (RSI) is also known as occupational overuse syndrome, non-specific arm pain, or work-related upper limb disorder. RSI is caused by overusing the hands to perform a repetitive task, such as typing, writing, or clicking a mouse. Unfortunately, most people do not understand what RSI is or how dangerous it can be. You can easily help prevent RSI using open source software called Workrave.

### What are the symptoms of RSI?

I'm quoting from this [page][3]. Do you experience:

1. Fatigue or lack of endurance?
2. Weakness in the hands or forearms?
3. Tingling, numbness, or loss of sensation?
4. Heaviness: Do your hands feel like dead weight?
5. Clumsiness: Do you keep dropping things?
6. Lack of strength in your hands? Is it harder to open jars? Cut vegetables?
7. Lack of control or coordination?
8. Chronically cold hands?
9. Heightened awareness? Just being slightly more aware of a body part can be a clue that something is wrong.
10. Hypersensitivity?
11. Frequent self-massage (subconsciously)?
12. Sympathy pains? Do your hands hurt when someone else talks about their hand pain?

### How to reduce your risk of developing RSI

* Take a break every 30 minutes or so when using your computer. Software such as Workrave can help prevent RSI.
* Regular exercise can prevent all sorts of injuries, including RSI.
* Use good posture. Adjust your computer desk and chair to support the muscles necessary for good posture.

### Workrave
Workrave is a free and open source software application intended to prevent computer users from developing RSI or myopia. The software periodically locks the screen while an animated character, "Miss Workrave," walks the user through various stretching exercises and urges them to take a coffee break. The program frequently alerts you to take micro-pauses and rest breaks, and restricts you to your daily limit of computer use. The program works on MS-Windows, Linux, and UNIX-like operating systems.
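The timing logic behind such a tool is simple to sketch. Below is a minimal, hypothetical model - not Workrave's actual implementation, and the intervals are invented - of how micro-pauses, rest breaks, and a daily limit can be derived from elapsed activity:

```python
# Hypothetical sketch of break scheduling in the style of Workrave:
# a micro-pause every few minutes, a longer rest break less often,
# and a hard daily limit. Intervals are in seconds and chosen arbitrarily.

MICRO_PAUSE_EVERY = 3 * 60    # short pause every 3 minutes of activity
REST_BREAK_EVERY = 45 * 60    # longer break every 45 minutes
DAILY_LIMIT = 4 * 60 * 60     # stop after 4 hours of activity

def due_breaks(active_seconds):
    """Return which reminder (if any) is due after active_seconds of activity."""
    due = []
    if active_seconds >= DAILY_LIMIT:
        due.append("daily limit reached")
    elif active_seconds and active_seconds % REST_BREAK_EVERY == 0:
        # a rest break supersedes the micro-pause that coincides with it
        due.append("rest break")
    elif active_seconds and active_seconds % MICRO_PAUSE_EVERY == 0:
        due.append("micro-pause")
    return due

print(due_breaks(3 * 60))        # -> ['micro-pause']
print(due_breaks(45 * 60))       # -> ['rest break']
print(due_breaks(4 * 60 * 60))   # -> ['daily limit reached']
```

A real implementation would additionally track keyboard and mouse idle time so that natural pauses reset the counters.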

#### Install workrave

Type the following [apt command][4]/[apt-get command][5] on Debian / Ubuntu Linux:
`$ sudo apt-get install workrave`
Fedora Linux users should type the following dnf command:
`$ sudo dnf install workrave`
RHEL/CentOS Linux users should enable the EPEL repo and install it using the [yum command][6]:
```
### [ **tested on a CentOS/RHEL 7.x and clones** ] ###
$ sudo yum install epel-release
$ sudo yum install https://rpms.remirepo.net/enterprise/remi-release-7.rpm
$ sudo yum install workrave
```
Arch Linux users can type the following pacman command to install it:
`$ sudo pacman -S workrave`
FreeBSD users can install it using the following pkg command:
`# pkg install workrave`
OpenBSD users can install it using the following pkg_add command:
```
$ doas pkg_add workrave
```

#### How to configure workrave

Workrave works as an applet, a small application whose user interface resides within a panel. You need to add Workrave to a panel to control the behavior and appearance of the software.

##### Adding a New Workrave Object To Panel

* Right-click on a vacant space on a panel to open the panel popup menu.
* Choose Add to Panel.
* The Add to Panel dialog opens. The available panel objects are listed alphabetically, with launchers at the top. Select the Workrave applet and click on the Add button.

![Fig.01: Adding an Object \(Workrave\) to a Panel][7]
Fig.01: Adding an Object (Workrave) to a Panel

##### How Do I Modify Properties Of Workrave Software?
|
||||
|
||||
To modify the properties of an object workrave, perform the following steps:
|
||||
|
||||
* Right-click on the workrave object to open the panel object popup.
|
||||
* Choose Preference. Use the Properties dialog to modify the properties as required.
|
||||
|
||||
![](https://www.cyberciti.biz/media/new/tips/2009/11/linux-gnome-workwave-preferences-.png)
|
||||
Fig.02: Modifying the Properties of The Workrave Software
|
||||
|
||||
#### Workrave in Action
|
||||
|
||||
The main window shows the time remaining until it suggests a pause. The windows can be closed and you will the time remaining on the panel itself:
|
||||
![Fig.03: Time reaming counter ][8]
|
||||
Fig.03: Time reaming counter
|
||||
|
||||
![Fig.04: Miss Workrave - an animated character walks you through various stretching exercises][9]
|
||||
Fig.04: Miss Workrave - an animated character walks you through various stretching exercises
|
||||
|
||||
The break prelude window, bugging you to take a micro-pause:
|
||||
![Fig.05: Time for a micro-pause remainder ][10]
|
||||
Fig.05: Time for a micro-pause remainder
|
||||
|
||||
![Fig.06: You can skip Micro-break ][11]
|
||||
Fig.06: You can skip Micro-break
|
||||
|
||||
##### References:
|
||||
|
||||
1. [Workrave project][12] home page.
|
||||
2. [pokoy][13] lightweight daemon that helps prevent RSI and other computer related stress.
|
||||
3. [A Pomodoro][14] timer for GNOME 3.
|
||||
4. [RSI][2] from the wikipedia.
|
||||
|
||||
|
||||
|
||||
### about the author
|
||||
|
||||
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][15], [Facebook][16], [Google+][17].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/tips/repetitive-strain-injury-prevention-software.html
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/media/new/tips/2009/11/workrave-image.jpg (workrave-image)
|
||||
[2]:https://en.wikipedia.org/wiki/Repetitive_strain_injury
|
||||
[3]:https://web.eecs.umich.edu/~cscott/rsi.html##symptoms
|
||||
[4]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
|
||||
[5]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
|
||||
[6]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
|
||||
[7]:https://www.cyberciti.biz/media/new/tips/2009/11/add-workwave-to-panel.png (Adding an Object (Workrave) to a Gnome Panel)
|
||||
[8]:https://www.cyberciti.biz/media/new/tips/2009/11/screenshot-workrave.png (Workrave main window shows the time remaining until it suggests a pause.)
|
||||
[9]:https://www.cyberciti.biz/media/new/tips/2009/11/miss-workrave.png (Miss Workrave Sofrware character walks you through various RSI stretching exercises )
|
||||
[10]:https://www.cyberciti.biz/media/new/tips/2009/11/time-for-micro-pause.gif (Workrave RSI Software Time for a micro-pause remainder )
|
||||
[11]:https://www.cyberciti.biz/media/new/tips/2009/11/Micro-break.png (Workrave RSI Software Micro-break )
|
||||
[12]:http://www.workrave.org/
|
||||
[13]:https://github.com/ttygde/pokoy
|
||||
[14]:http://gnomepomodoro.org
|
||||
[15]:https://twitter.com/nixcraft
|
||||
[16]:https://facebook.com/nixcraft
|
||||
[17]:https://plus.google.com/+CybercitiBiz
|
@ -1,113 +0,0 @@

Translating by Torival

Three steps to learning GDB
============================================================

Debugging C programs used to scare me a lot. Then I was writing my [operating system][2] and I had so many bugs to debug! I was extremely fortunate to be using the emulator qemu, which lets me attach a debugger to my operating system. The debugger is called `gdb`.

I'm going to explain a couple of small things you can do with `gdb`, because I found it really confusing to get started. We're going to set a breakpoint and examine some memory in a tiny program.

### 1\. Set breakpoints

If you've ever used a debugger before, you've probably set a breakpoint.

Here's the program that we're going to be "debugging" (though there aren't any bugs):

```
#include <stdio.h>
void do_thing() {
    printf("Hi!\n");
}
int main() {
    do_thing();
}
```

Save this as `hello.c`. We can debug it with gdb like this:

```
bork@kiwi ~> gcc -g hello.c -o hello
bork@kiwi ~> gdb ./hello
```

This compiles `hello.c` with debugging symbols (so that gdb can do a better job), and gives us a kind of scary prompt that just says

`(gdb)`

We can then set a breakpoint using the `break` command, and then `run` the program.

```
(gdb) break do_thing
Breakpoint 1 at 0x4004f8
(gdb) run
Starting program: /home/bork/hello

Breakpoint 1, 0x00000000004004f8 in do_thing ()
```

This stops the program at the beginning of `do_thing`.

We can find out where we are in the call stack with `where` (thanks to [@mgedmin][3] for the tip):

```
(gdb) where
#0  do_thing () at hello.c:3
#1  0x08050cdb in main () at hello.c:6
(gdb)
```

### 2\. Look at some assembly code

We can look at the assembly code for our function using the `disassemble` command! This is cool. This is x86 assembly. I don't understand it very well, but the line that says `callq` is what does the `printf` function call.

```
(gdb) disassemble do_thing
Dump of assembler code for function do_thing:
   0x00000000004004f4 <+0>:  push   %rbp
   0x00000000004004f5 <+1>:  mov    %rsp,%rbp
=> 0x00000000004004f8 <+4>:  mov    $0x40060c,%edi
   0x00000000004004fd <+9>:  callq  0x4003f0
   0x0000000000400502 <+14>: pop    %rbp
   0x0000000000400503 <+15>: retq
```

You can also shorten `disassemble` to `disas`.

### 3\. Examine some memory!

The main thing I used `gdb` for when I was debugging my kernel was to examine regions of memory to make sure they were what I thought they were. The command for examining memory is `examine`, or `x` for short. We're going to use `x`.

From looking at that assembly above, it seems like `0x40060c` might be the address of the string we're printing. Let's check!

```
(gdb) x/s 0x40060c
0x40060c:   "Hi!"
```

It is! Neat! Look at that. The `/s` part of `x/s` means "show it to me like it's a string". I could also have said "show me 10 characters" like this:

```
(gdb) x/10c 0x40060c
0x40060c:   72 'H'  105 'i' 33 '!'  0 '\000'    1 '\001'    27 '\033'   3 '\003'    59 ';'
0x400614:   52 '4'  0 '\000'
```

You can see that the first four characters are 'H', 'i', '!', and '\0', and after that there's more unrelated stuff.

I know that gdb does lots of other stuff, but I still don't know it very well, and `x` and `break` got me pretty far. You can read the [documentation for examining memory][4].

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/

作者:[Julia Evans][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://jvns.ca
[1]:https://jvns.ca/categories/spytools
[2]:http://jvns.ca/blog/categories/kernel
[3]:https://twitter.com/mgedmin
[4]:https://ftp.gnu.org/old-gnu/Manuals/gdb-5.1.1/html_chapter/gdb_9.html#SEC56
@ -1,246 +0,0 @@

BriFuture is translating this article.

Let's Build A Simple Interpreter. Part 2.
======

In their amazing book "The 5 Elements of Effective Thinking" the authors Burger and Starbird share a story about how they observed Tony Plog, an internationally acclaimed trumpet virtuoso, conduct a master class for accomplished trumpet players. The students first played complex music phrases, which they played perfectly well. But then they were asked to play very basic, simple notes. When they played the notes, the notes sounded childish compared to the previously played complex phrases. After they finished playing, the master teacher also played the same notes, but when he played them, they did not sound childish. The difference was stunning. Tony explained that mastering the performance of simple notes allows one to play complex pieces with greater control. The lesson was clear - to build true virtuosity one must focus on mastering simple, basic ideas.

The lesson in the story clearly applies not only to music but also to software development. The story is a good reminder to all of us not to lose sight of the importance of deep work on simple, basic ideas, even if it sometimes feels like a step back. While it is important to be proficient with a tool or framework you use, it is also extremely important to know the principles behind them. As Ralph Waldo Emerson said:

> "If you learn only methods, you'll be tied to your methods. But if you learn principles, you can devise your own methods."

On that note, let's dive into interpreters and compilers again.

Today I will show you a new version of the calculator from [Part 1][1] that will be able to:

1. Handle whitespace characters anywhere in the input string
2. Consume multi-digit integers from the input
3. Subtract two integers (currently it can only add integers)

Here is the source code for your new version of the calculator that can do all of the above:

```
# Token types
#
# EOF (end-of-file) token is used to indicate that
# there is no more input left for lexical analysis
INTEGER, PLUS, MINUS, EOF = 'INTEGER', 'PLUS', 'MINUS', 'EOF'


class Token(object):
    def __init__(self, type, value):
        # token type: INTEGER, PLUS, MINUS, or EOF
        self.type = type
        # token value: non-negative integer value, '+', '-', or None
        self.value = value

    def __str__(self):
        """String representation of the class instance.

        Examples:
            Token(INTEGER, 3)
            Token(PLUS, '+')
        """
        return 'Token({type}, {value})'.format(
            type=self.type,
            value=repr(self.value)
        )

    def __repr__(self):
        return self.__str__()


class Interpreter(object):
    def __init__(self, text):
        # client string input, e.g. "3 + 5", "12 - 5", etc
        self.text = text
        # self.pos is an index into self.text
        self.pos = 0
        # current token instance
        self.current_token = None
        self.current_char = self.text[self.pos]

    def error(self):
        raise Exception('Error parsing input')

    def advance(self):
        """Advance the 'pos' pointer and set the 'current_char' variable."""
        self.pos += 1
        if self.pos > len(self.text) - 1:
            self.current_char = None  # Indicates end of input
        else:
            self.current_char = self.text[self.pos]

    def skip_whitespace(self):
        while self.current_char is not None and self.current_char.isspace():
            self.advance()

    def integer(self):
        """Return a (multidigit) integer consumed from the input."""
        result = ''
        while self.current_char is not None and self.current_char.isdigit():
            result += self.current_char
            self.advance()
        return int(result)

    def get_next_token(self):
        """Lexical analyzer (also known as scanner or tokenizer)

        This method is responsible for breaking a sentence
        apart into tokens.
        """
        while self.current_char is not None:

            if self.current_char.isspace():
                self.skip_whitespace()
                continue

            if self.current_char.isdigit():
                return Token(INTEGER, self.integer())

            if self.current_char == '+':
                self.advance()
                return Token(PLUS, '+')

            if self.current_char == '-':
                self.advance()
                return Token(MINUS, '-')

            self.error()

        return Token(EOF, None)

    def eat(self, token_type):
        # compare the current token type with the passed token
        # type and if they match then "eat" the current token
        # and assign the next token to the self.current_token,
        # otherwise raise an exception.
        if self.current_token.type == token_type:
            self.current_token = self.get_next_token()
        else:
            self.error()

    def expr(self):
        """Parser / Interpreter

        expr -> INTEGER PLUS INTEGER
        expr -> INTEGER MINUS INTEGER
        """
        # set current token to the first token taken from the input
        self.current_token = self.get_next_token()

        # we expect the current token to be an integer
        left = self.current_token
        self.eat(INTEGER)

        # we expect the current token to be either a '+' or '-'
        op = self.current_token
        if op.type == PLUS:
            self.eat(PLUS)
        else:
            self.eat(MINUS)

        # we expect the current token to be an integer
        right = self.current_token
        self.eat(INTEGER)
        # after the above call the self.current_token is set to
        # EOF token

        # at this point either the INTEGER PLUS INTEGER or
        # the INTEGER MINUS INTEGER sequence of tokens
        # has been successfully found and the method can just
        # return the result of adding or subtracting two integers,
        # thus effectively interpreting client input
        if op.type == PLUS:
            result = left.value + right.value
        else:
            result = left.value - right.value
        return result


def main():
    while True:
        try:
            # To run under Python3 replace 'raw_input' call
            # with 'input'
            text = raw_input('calc> ')
        except EOFError:
            break
        if not text:
            continue
        interpreter = Interpreter(text)
        result = interpreter.expr()
        print(result)


if __name__ == '__main__':
    main()
```

Save the above code into the calc2.py file or download it directly from [GitHub][2]. Try it out. See for yourself that it works as expected: it can handle whitespace characters anywhere in the input; it can accept multi-digit integers; and it can subtract two integers as well as add two integers.

Here is a sample session that I ran on my laptop:

```
$ python calc2.py
calc> 27 + 3
30
calc> 27 - 7
20
calc>
```

The major code changes compared with the version from [Part 1][1] are:

1. The get_next_token method was refactored a bit. The logic to increment the pos pointer was factored out into a separate method, advance.
2. Two more methods were added: skip_whitespace to ignore whitespace characters and integer to handle multi-digit integers in the input.
3. The expr method was modified to recognize the INTEGER -> MINUS -> INTEGER phrase in addition to the INTEGER -> PLUS -> INTEGER phrase. The method now also interprets both addition and subtraction after having successfully recognized the corresponding phrase.

In [Part 1][1] you learned two important concepts, namely that of a **token** and a **lexical analyzer**. Today I would like to talk a little bit about **lexemes**, **parsing**, and **parsers**.

You already know about tokens. But in order for me to round out the discussion of tokens I need to mention lexemes. What is a lexeme? A **lexeme** is a sequence of characters that form a token. In the following picture you can see some examples of tokens and sample lexemes, and hopefully it will make the relationship between them clear:

![][3]
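
To make the token/lexeme distinction concrete, here is a minimal stand-alone sketch (my own illustration, not the article's code; the `tokenize` helper and its regular expressions are assumptions) that splits an input string into (token type, lexeme) pairs using the same token types as the calculator:

```python
import re

# Illustrative token specification: each token type paired with the
# regular expression that matches its lexemes.
TOKEN_SPEC = [('INTEGER', r'\d+'), ('PLUS', r'\+'), ('MINUS', r'-')]

def tokenize(text):
    """Return a list of (token_type, lexeme) pairs for `text`."""
    tokens = []
    pos = 0
    while pos < len(text):
        if text[pos].isspace():  # whitespace separates lexemes
            pos += 1
            continue
        for token_type, pattern in TOKEN_SPEC:
            match = re.match(pattern, text[pos:])
            if match:
                tokens.append((token_type, match.group(0)))
                pos += len(match.group(0))
                break
        else:
            raise Exception('Error parsing input')
    return tokens

print(tokenize('12 - 5'))
# [('INTEGER', '12'), ('MINUS', '-'), ('INTEGER', '5')]
```

Here the token type INTEGER has two different lexemes, "12" and "5", which is exactly the relationship the picture shows.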

Now, remember our friend, the expr method? I said before that that's where the interpretation of an arithmetic expression actually happens. But before you can interpret an expression you first need to recognize what kind of phrase it is, whether it is addition or subtraction, for example. That's what the expr method essentially does: it finds the structure in the stream of tokens it gets from the get_next_token method and then it interprets the phrase that it has recognized, generating the result of the arithmetic expression.

The process of finding the structure in the stream of tokens, or put differently, the process of recognizing a phrase in the stream of tokens, is called **parsing**. The part of an interpreter or compiler that performs that job is called a **parser**.

So now you know that the expr method is the part of your interpreter where both **parsing** and **interpreting** happen: the expr method first tries to recognize (**parse**) the INTEGER -> PLUS -> INTEGER or the INTEGER -> MINUS -> INTEGER phrase in the stream of tokens, and after it has successfully recognized (**parsed**) one of those phrases, the method interprets it and returns the result of either addition or subtraction of two integers to the caller.

And now it's time for exercises again.

![][4]

1. Extend the calculator to handle multiplication of two integers
2. Extend the calculator to handle division of two integers
3. Modify the code to interpret expressions containing an arbitrary number of additions and subtractions, for example "9 - 5 + 3 + 11"
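
For the third exercise, one possible direction (a sketch of my own, not the article's solution) is to loop over operator tokens instead of expecting exactly one. The compact `eval_expr` below takes a shortcut with a regex lexer, but the parsing idea is the same: INTEGER, then repeated (PLUS|MINUS, INTEGER):

```python
import re

def eval_expr(text):
    # Lex the input into integer and operator lexemes (an illustrative
    # shortcut; the article builds a character-by-character lexer).
    tokens = re.findall(r'\d+|[+-]', text)
    result = int(tokens[0])
    pos = 1
    # Each operator token must be followed by an INTEGER token.
    while pos < len(tokens):
        op, right = tokens[pos], int(tokens[pos + 1])
        if op == '+':
            result += right
        else:
            result -= right
        pos += 2
    return result

print(eval_expr('9 - 5 + 3 + 11'))  # 18
```

Evaluating left to right like this is what makes "9 - 5 + 3 + 11" come out to 18 rather than, say, treating the expression as nested pairs.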

**Check your understanding.**

1. What is a lexeme?
2. What is the name of the process that finds the structure in the stream of tokens, or put differently, the name of the process that recognizes a certain phrase in that stream of tokens?
3. What is the name of the part of the interpreter (compiler) that does the parsing?

I hope you liked today's material. In the next article of the series you will extend your calculator to handle more complex arithmetic expressions. Stay tuned.

--------------------------------------------------------------------------------

via: https://ruslanspivak.com/lsbasi-part2/

作者:[Ruslan Spivak][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ruslanspivak.com
[1]:http://ruslanspivak.com/lsbasi-part1/ (Part 1)
[2]:https://github.com/rspivak/lsbasi/blob/master/part2/calc2.py
[3]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_lexemes.png
[4]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_exercises.png
@ -1,3 +1,5 @@

BriFuture is Translating this article

Let's Build A Simple Interpreter. Part 3.
======

@ -1,242 +0,0 @@

translated by cyleft

Top 10 Command Line Games For Linux
======

Brief: This article lists the **best command line games for Linux**.

Linux has never been the preferred operating system for gaming, though [gaming on Linux][1] has improved a lot lately. You can [download Linux games][2] from a number of resources.

There are dedicated [Linux distributions for gaming][3]. Yes, they do exist. But we are not going to look at the Linux gaming distributions today.

Linux has one added advantage over its Windows counterpart: it has got the mighty Linux terminal. You can do a hell of a lot of things in the terminal, including playing **command line games**.

Yeah, hardcore terminal lovers, gather around. Terminal games are light, fast and a hell of a lot of fun to play. And the best thing of all: you've got a lot of classic retro games in the Linux terminal.

[Suggested read: Gaming On Linux: All You Need To Know][20]

### Best Linux terminal games

So let's crack this list and see what some of the best Linux terminal games are.

### 1. Bastet

Who hasn't spent hours playing [Tetris][4]? Simple, but totally addictive. Bastet is the Tetris of Linux.

![Bastet Linux terminal game][5]

Use the command below to get Bastet:
```
sudo apt install bastet
```

To play the game, run the below command in the terminal:
```
bastet
```

Use the spacebar to rotate the bricks and the arrow keys to guide them.

### 2. Ninvaders

Space Invaders. I remember tussling for the high score with my brother on this. One of the best arcade games out there.

![nInvaders command line game in Linux][6]

Copy and paste the command to install Ninvaders.
```
sudo apt-get install ninvaders
```

To play this game, use the command below:
```
ninvaders
```

Use the arrow keys to move the spaceship and the space bar to shoot at the aliens.

[Suggested read: Top 10 Best Linux Games Released in 2016 That You Can Play Today][21]

### 3. Pacman4console

Yes, the King of the Arcade is here. Pacman4console is the terminal version of the popular arcade hit, Pacman.

![Pacman4console is a command line Pacman game in Linux][7]

Use the command to get pacman4console:
```
sudo apt-get install pacman4console
```

Open a terminal, and I suggest you maximize it. Type the command below to launch the game:
```
pacman4console
```

Use the arrow keys to control the movement.

### 4. nSnake

Remember the snake game on old Nokia phones?

That game kept me hooked to the phone for a really long time. I used to devise various coiling patterns to manage the grown-up snake.

![nsnake : Snake game in Linux terminal][8]

We have the [snake game in the Linux terminal][9] thanks to [nSnake][9]. Use the command below to install it.
```
sudo apt-get install nsnake
```

To play the game, type in the command below to launch it.
```
nsnake
```

Use the arrow keys to move the snake and feed it.

### 5. Greed

Greed is a little like Tron, minus the speed and adrenaline.

Your location is denoted by a blinking '@'. You are surrounded by numbers and you can choose to move in any of the 4 directions.

The direction you choose has a number, and you move exactly that number of steps. Then you repeat the step again. You cannot revisit a spot you have already visited, and the game ends when you cannot make a move.

I made it sound more complicated than it really is.
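
If it helps, the move rule can be sketched in a few lines of Python (a toy model of my own for illustration, not Greed's actual code), using a one-row board where `None` marks cells already consumed:

```python
# Toy one-row Greed board: digits are step counts, None marks
# already-consumed cells (an illustrative model, not the game's code).
def move_right(board, pos):
    steps = board[pos + 1]  # the number in the chosen direction
    if steps is None:       # adjacent cell already consumed
        return None
    target = pos + steps
    path = range(pos + 1, target + 1)
    if target >= len(board) or any(board[i] is None for i in path):
        return None         # illegal: off the board or revisiting cells
    for i in path:
        board[i] = None     # consume every cell you pass over
    return target

board = [0, 2, 7, 3, 1, 4]  # the player starts at index 0
pos = move_right(board, 0)  # the '2' means: move exactly 2 steps
print(pos, board)
# 2 [0, None, None, 3, 1, 4]
```

Once no direction offers a legal move (every reachable path would leave the board or cross a consumed cell), the game is over, which is exactly the losing condition described above.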

![Greed : Tron game in Linux command line][10]

Grab Greed with the command below:
```
sudo apt-get install greed
```

To launch the game, use the command below. Then use the arrow keys to play the game.
```
greed
```

### 6. Air Traffic Controller

What's better than being a pilot? An air traffic controller. You can simulate an entire air traffic system in your terminal. To be honest, managing air traffic from a terminal feels kinda real.

![Air Traffic Controller game in Linux][11]

Install the game using the command below:
```
sudo apt-get install bsdgames
```

Type in the command below to launch the game:
```
atc
```

ATC is not child's play, so read the man page with `man atc` first.

### 7. Backgammon

Whether you have played [Backgammon][12] before or not, you should check this out. The instructions and control manuals are all very friendly. Play it against the computer, or against a friend if you prefer.

![Backgammon terminal game in Linux][13]

Install Backgammon using this command:
```
sudo apt-get install bsdgames
```

Type in the below command to launch the game:
```
backgammon
```

Press 'y' when prompted for the rules of the game.

### 8. Moon Buggy

Jump. Fire. Hours of fun. No more words.

![Moon buggy][14]

Install the game using the command below:
```
sudo apt-get install moon-buggy
```

Use the below command to start the game:
```
moon-buggy
```

Press space to jump, 'a' or 'l' to shoot. Enjoy!

### 9. 2048

Here's something to make your brain flex. [2048][15] is a strategic as well as a highly addictive game. The goal is to create the 2048 tile.

![2048 game in Linux terminal][16]

Copy and paste the commands below one by one to install the game.
```
wget https://raw.githubusercontent.com/mevdschee/2048.c/master/2048.c

gcc -o 2048 2048.c
```

Type the below command to launch the game and use the arrow keys to play.
```
./2048
```

### 10. Tron

How can this list be complete without a brisk action game?

![Tron Linux terminal game][17]

Yes, the snappy Tron is available in the Linux terminal. Get ready for some serious nimble action. No installation or setup hassle; one command will launch the game. All you need is an internet connection.
```
ssh sshtron.zachlatta.com
```

You can even play this game in multiplayer mode if there are other gamers online. Read more about the [Tron game in Linux][18].

### Your pick?

There you have it, people: the top 10 Linux terminal games. I guess it's Ctrl+Alt+T time now. What is your favorite among the list? Or have you got some other fun stuff for the terminal? Do share.

With inputs from [Abhishek Prakash][19].

--------------------------------------------------------------------------------

via: https://itsfoss.com/best-command-line-games-linux/

作者:[Aquil Roshan][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://itsfoss.com/author/aquil/
[1]:https://itsfoss.com/linux-gaming-guide/
[2]:https://itsfoss.com/download-linux-games/
[3]:https://itsfoss.com/manjaro-gaming-linux/
[4]:https://en.wikipedia.org/wiki/Tetris
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/bastet.jpg
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/ninvaders.jpg
[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/pacman.jpg
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/nsnake.jpg
[9]:https://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/greed.jpg
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/atc.jpg
[12]:https://en.wikipedia.org/wiki/Backgammon
[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/backgammon.jpg
[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/moon-buggy.jpg
[15]:https://itsfoss.com/2048-offline-play-ubuntu/
[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/2048.jpg
[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/tron.jpg
[18]:https://itsfoss.com/play-tron-game-linux-terminal/
[19]:https://twitter.com/abhishek_pc
[20]:https://itsfoss.com/linux-gaming-guide/
[21]:https://itsfoss.com/best-linux-games/
@ -0,0 +1,96 @@

How to resolve mount.nfs: Stale file handle error
======
Learn how to resolve the mount.nfs: Stale file handle error on the Linux platform. This Network File System error can be resolved from the client or server end.

_![][1]_

When you are using the Network File System in your environment, you must have seen the `mount.nfs: Stale file handle` error at times. This error denotes that the NFS share is unable to mount since something has changed since the last known good configuration.

Whenever you reboot the NFS server, or some of the NFS processes are not running on the client or server, or the share is not properly exported at the server, these can be reasons for this error. Moreover, it is irritating when this error occurs on a previously mounted NFS share, because that means the configuration part is correct since it was previously mounted. In such a case, one can try the following commands:

Make sure the NFS services are running fine on both client and server.

```
# service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 11993) is running...
nfsd (pid 12009 12008 12007 12006 12005 12004 12003 12002) is running...
rpc.rquotad (pid 11988) is running...
```
If NFS share currently mounted on client, then un-mount it forcefully and try to remount it on NFS client. Check if its properly mounted by `df` command and changing directory inside it.
|
||||
|
||||
```
|
||||
# umount -f /mydata_nfs
|
||||
|
||||
# mount -t nfs server:/nfs_share /mydata_nfs
|
||||
|
||||
#df -k
|
||||
------ output clipped -----
|
||||
server:/nfs_share 41943040 892928 41050112 3% /mydata_nfs
|
||||
```
|
||||
|
||||
In above mount command, server can be IP or [hostname ][4]of NFS server.
|
||||
|
||||
If you get an error like the one below while forcefully unmounting:

```
# umount -f /mydata_nfs
umount2: Device or resource busy
umount: /mydata_nfs: device is busy
umount2: Device or resource busy
umount: /mydata_nfs: device is busy
```
Then you can check which processes or users are using that mount point with the `lsof` command, as shown below:

```
# lsof |grep mydata_nfs
lsof: WARNING: can't stat() nfs file system /mydata_nfs
Output information may be incomplete.
su 3327 root cwd unknown /mydata_nfs/dir (stat: Stale NFS file handle)
bash 3484 grid cwd unknown /mydata_nfs/MYDB (stat: Stale NFS file handle)
bash 20092 oracle11 cwd unknown /mydata_nfs/MPRP (stat: Stale NFS file handle)
bash 25040 oracle11 cwd unknown /mydata_nfs/MUYR (stat: Stale NFS file handle)
```

In the example above, you can see that four PIDs are using files on the said mount point. Try killing them off to free the mount point. Once done, you will be able to unmount it properly.
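As a small sketch (not from the original article), the PIDs in such `lsof` output can be pulled out and killed in one pipeline. The sample output is embedded in a heredoc here so the pipeline can be tried anywhere; in practice you would feed it from `lsof 2>/dev/null | grep /mydata_nfs` instead.

```shell
# Extract the PIDs (second column of lsof output) holding the mount point.
# The heredoc stands in for real lsof output for illustration.
pids=$(cat <<'EOF' | awk '{print $2}' | sort -un | xargs
su 3327 root cwd unknown /mydata_nfs/dir (stat: Stale NFS file handle)
bash 3484 grid cwd unknown /mydata_nfs/MYDB (stat: Stale NFS file handle)
bash 20092 oracle11 cwd unknown /mydata_nfs/MPRP (stat: Stale NFS file handle)
bash 25040 oracle11 cwd unknown /mydata_nfs/MUYR (stat: Stale NFS file handle)
EOF
)
echo "PIDs holding the mount: $pids"
# Uncomment to actually terminate them (needs root):
# kill $pids       # try a graceful SIGTERM first
# kill -9 $pids    # only if they refuse to exit
```

With real `lsof` output, review the list before killing anything, since these may be live user sessions.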
Sometimes the mount command still gives the same error. In that case, try mounting after restarting the NFS service on the client with the command below.

```
# service nfs restart
Shutting down NFS daemon: [ OK ]
Shutting down NFS mountd: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down RPC idmapd: [ OK ]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
```

Also read: [How to restart NFS step by step in HPUX][5]

If even this didn't solve your issue, the final step is to restart services on the NFS server. Caution! This will disconnect all NFS shares exported from the NFS server, and all clients will see the mount points disconnect. This step resolves the issue 99% of the time. If it doesn't, then the [NFS configuration][6] must be checked, provided you changed the configuration and started seeing this error afterwards.

The outputs in this post are from a RHEL 6.3 server. Drop us your comments related to this post.
--------------------------------------------------------------------------------

via: https://kerneltalks.com/troubleshooting/resolve-mount-nfs-stale-file-handle-error/

作者:[KernelTalks][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://kerneltalks.com
[1]:http://kerneltalks.com/wp-content/uploads/2017/01/nfs_error-2-150x150.png
[2]:https://www.clouddesktoponline.com/
[3]:http://www.apps4rent.com
[4]:https://kerneltalks.com/linux/all-you-need-to-know-about-hostname-in-linux/
[5]:http://kerneltalks.com/hpux/restart-nfs-in-hpux/
[6]:http://kerneltalks.com/linux/nfs-configuration-linux-hpux/
@ -1,284 +0,0 @@
Translating by qhwdw

ftrace: trace your kernel functions!
============================================================

Hello! Today we’re going to talk about a debugging tool we haven’t talked about much before on this blog: ftrace. What could be more exciting than a new debugging tool?!

Better yet, ftrace isn’t new! It’s been around since Linux kernel 2.6, or about 2008. [Here’s the earliest documentation I found with some quick Googling][10]. So you might be able to use it even if you’re debugging an older system!

I’ve known that ftrace exists for about 2.5 years now, but hadn’t gotten around to really learning it yet. I’m supposed to run a workshop tomorrow where I talk about ftrace, so today is the day we talk about it!
### what’s ftrace?

ftrace is a Linux kernel feature that lets you trace Linux kernel function calls. Why would you want to do that? Well, suppose you’re debugging a weird problem, and you’ve gotten to the point where you’re staring at the source code for your kernel version and wondering what **exactly** is going on.

I don’t read the kernel source code very often when debugging, but occasionally I do! For example this week at work we had a program that was frozen and stuck spinning inside the kernel. Looking at what functions were being called helped us understand better what was happening in the kernel and what systems were involved (in that case, it was the virtual memory system)!

I think ftrace is a bit of a niche tool (it’s definitely less broadly useful and harder to use than strace) but it’s worth knowing about. So let’s learn about it!
### first steps with ftrace

Unlike strace and perf, ftrace isn’t a **program** exactly – you don’t just run `ftrace my_cool_function`. That would be too easy!

If you read [Debugging the kernel using Ftrace][11] it starts out by telling you to `cd /sys/kernel/debug/tracing` and then do various filesystem manipulations.

For me this is way too annoying – a simple example of using ftrace this way is something like:

```
cd /sys/kernel/debug/tracing
echo function > current_tracer
echo do_page_fault > set_ftrace_filter
cat trace
```

This filesystem interface to the tracing system (“put values in these magic files and things will happen”) seems theoretically possible to use but really not my preference.

Luckily, team ftrace also thought this interface wasn’t that user friendly and so there is an easier-to-use interface called **trace-cmd**!!! trace-cmd is a normal program with command line arguments. We’ll use that! I found an intro to trace-cmd on LWN at [trace-cmd: A front-end for Ftrace][12].
### getting started with trace-cmd: let’s trace just one function

First, I needed to install `trace-cmd` with `sudo apt-get install trace-cmd`. Easy enough.

For this first ftrace demo, I decided I wanted to know when my kernel was handling a page fault. When Linux allocates memory, it often does it lazily (“you weren’t _really_ planning to use that memory, right?“). This means that when an application tries to actually write to memory that it allocated, there’s a page fault and the kernel needs to give the application physical memory to use.

Let’s start `trace-cmd` and make it trace the `do_page_fault` function!

```
$ sudo trace-cmd record -p function -l do_page_fault
plugin 'function'
Hit Ctrl^C to stop recording
```

I ran it for a few seconds and then hit `Ctrl+C`. Awesome! It created a 2.5MB file called `trace.dat`. Let’s see what’s in that file!

```
$ sudo trace-cmd report
chrome-15144 [000] 11446.466121: function: do_page_fault
chrome-15144 [000] 11446.467910: function: do_page_fault
chrome-15144 [000] 11446.469174: function: do_page_fault
chrome-15144 [000] 11446.474225: function: do_page_fault
chrome-15144 [000] 11446.474386: function: do_page_fault
chrome-15144 [000] 11446.478768: function: do_page_fault
CompositorTileW-15154 [001] 11446.480172: function: do_page_fault
chrome-1830 [003] 11446.486696: function: do_page_fault
CompositorTileW-15154 [001] 11446.488983: function: do_page_fault
CompositorTileW-15154 [001] 11446.489034: function: do_page_fault
CompositorTileW-15154 [001] 11446.489045: function: do_page_fault
```

This is neat – it shows me the process name (chrome), process ID (15144), CPU (000), and function that got traced.

By looking at the whole report (`sudo trace-cmd report | grep chrome`) I can see that we traced for about 1.5 seconds and in that time Chrome had about 500 page faults. Cool! We have done our first ftrace!
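A report like the one above can be summarized per process with a quick `awk` pipeline. This is a sketch, not part of trace-cmd itself: the sample report lines are embedded in a heredoc so it runs anywhere, and in practice you would pipe `sudo trace-cmd report` into the same awk/sort stage.

```shell
# Count traced events per process name. The process name is everything before
# the last "-<pid>" in column 1; splitting on "-" and taking the first piece
# works for the names in this sample.
summary=$(cat <<'EOF' | awk '{split($1, a, "-"); c[a[1]]++} END {for (p in c) print c[p], p}' | sort -rn
chrome-15144 [000] 11446.466121: function: do_page_fault
chrome-15144 [000] 11446.467910: function: do_page_fault
CompositorTileW-15154 [001] 11446.480172: function: do_page_fault
chrome-1830 [003] 11446.486696: function: do_page_fault
CompositorTileW-15154 [001] 11446.488983: function: do_page_fault
EOF
)
echo "$summary"
```

For the sample above this prints `3 chrome` followed by `2 CompositorTileW`, which is the same kind of tally as the "about 500 page faults" count in the text.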
### next ftrace trick: let’s trace a process!

Okay, but just seeing one function is kind of boring! Let’s say I want to know everything that’s happening for one program. I use a static site generator called Hugo. What’s the kernel doing for Hugo?

Hugo’s PID on my computer right now is 25314, so I recorded all the kernel functions with:

```
sudo trace-cmd record --help # I read the help!
sudo trace-cmd record -p function -P 25314 # record for PID 25314
```

`sudo trace-cmd report` printed out 18,000 lines of output. If you’re interested, you can see [all 18,000 lines here][13].

18,000 lines is a lot so here are some interesting excerpts.

This looks like what happens when the `clock_gettime` system call runs. Neat!
```
compat_SyS_clock_gettime
SyS_clock_gettime
clockid_to_kclock
posix_clock_realtime_get
getnstimeofday64
__getnstimeofday64
arch_counter_read
__compat_put_timespec
```

This is something related to process scheduling:
```
cpufreq_sched_irq_work
wake_up_process
try_to_wake_up
_raw_spin_lock_irqsave
do_raw_spin_lock
_raw_spin_lock
do_raw_spin_lock
walt_ktime_clock
ktime_get
arch_counter_read
walt_update_task_ravg
exiting_task
```

Being able to see all these function calls is pretty cool, even if I don’t quite understand them.

### “function graph” tracing

There’s another tracing mode called `function_graph`. This is the same as the function tracer except that it instruments both entering _and_ exiting a function. [Here’s the output of that tracer][14]:
```
sudo trace-cmd record -p function_graph -P 25314
```

Again, here’s a snippet (this time from the futex code):

```
            |  futex_wake() {
            |    get_futex_key() {
            |      get_user_pages_fast() {
  1.458 us  |        __get_user_pages_fast();
  4.375 us  |      }
            |      __might_sleep() {
  0.292 us  |        ___might_sleep();
  2.333 us  |      }
  0.584 us  |      get_futex_key_refs();
            |      unlock_page() {
  0.291 us  |        page_waitqueue();
  0.583 us  |        __wake_up_bit();
  5.250 us  |      }
  0.583 us  |      put_page();
+ 24.208 us |    }
```

We see in this example that `get_futex_key` gets called right after `futex_wake`. Is that what really happens in the source code? We can check!! [Here’s the definition of futex_wake in Linux 4.4][15] (my kernel version).

I’ll save you a click: it looks like this:
```
static int
futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
{
	struct futex_hash_bucket *hb;
	struct futex_q *this, *next;
	union futex_key key = FUTEX_KEY_INIT;
	int ret;
	WAKE_Q(wake_q);

	if (!bitset)
		return -EINVAL;

	ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_READ);
```

So the first function called in `futex_wake` really is `get_futex_key`! Neat! Reading the function trace was definitely an easier way to find that out than by reading the kernel code, and it’s nice to see how long all of the functions took.
### How to know what functions you can trace

If you run `sudo trace-cmd list -f` you’ll get a list of all the functions you can trace. That’s pretty simple but it’s important.
### one last thing: events!

So, now we know how to trace functions in the kernel! That’s really cool!

There’s one more class of thing we can trace though! Some events don’t correspond super well to function calls. For example, you might want to know when a program is scheduled on or off the CPU! You might be able to figure that out by peering at function calls, but I sure can’t.

So the kernel also gives you a few events so you can see when a few important things happen. You can see a list of all these events with `sudo cat /sys/kernel/debug/tracing/available_events`.

I looked at all the sched_switch events. I’m not exactly sure what sched_switch is but it’s something to do with scheduling I guess.

```
sudo cat /sys/kernel/debug/tracing/available_events
sudo trace-cmd record -e sched:sched_switch
sudo trace-cmd report
```

The output looks like this:

```
16169.624862: Chrome_ChildIOT:24817 [112] S ==> chrome:15144 [120]
16169.624992: chrome:15144 [120] S ==> swapper/3:0 [120]
16169.625202: swapper/3:0 [120] R ==> Chrome_ChildIOT:24817 [112]
16169.625251: Chrome_ChildIOT:24817 [112] R ==> chrome:1561 [112]
16169.625437: chrome:1561 [112] S ==> chrome:15144 [120]
```

So you can see it switching from PID 24817 -> 15144 -> kernel -> 24817 -> 1561 -> 15144. (All of these events are on the same CPU.)
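That PID chain can be extracted mechanically from sched_switch lines. A sketch (again with the sample lines embedded in a heredoc; in practice you would pipe `sudo trace-cmd report` in): field 6 of each line is the task being switched to, in `name:pid` form, so splitting on `:` and taking the last piece gives the next PID.

```shell
# Pull the "next task" PID out of each sched_switch line.
# split() returns the number of pieces, so a[n] is the text after the last
# colon -- this also handles names containing ":" like "swapper/3:0".
chain=$(cat <<'EOF' | awk '{n = split($6, a, ":"); print a[n]}' | xargs
16169.624862: Chrome_ChildIOT:24817 [112] S ==> chrome:15144 [120]
16169.624992: chrome:15144 [120] S ==> swapper/3:0 [120]
16169.625202: swapper/3:0 [120] R ==> Chrome_ChildIOT:24817 [112]
16169.625251: Chrome_ChildIOT:24817 [112] R ==> chrome:1561 [112]
16169.625437: chrome:1561 [112] S ==> chrome:15144 [120]
EOF
)
echo "switched to: $chain"
```

Here PID 0 is the swapper (idle) task, which is why "kernel" appears in the chain in the text.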
### how does ftrace work?

ftrace is a dynamic tracing system. This means that when I start ftracing a kernel function, the **function’s code gets changed**. So – let’s suppose that I’m tracing that `do_page_fault` function from before. The kernel will insert some extra instructions in the assembly for that function to notify the tracing system every time that function gets called. The reason it can add extra instructions is that Linux compiles a few extra NOP instructions into every function, so there’s space to add tracing code when needed.

This is awesome because it means that when I’m not using ftrace to trace my kernel, it doesn’t affect performance at all. When I do start tracing, the more functions I trace, the more overhead it’ll have.

(Probably some of this is wrong, but this is how I think ftrace works anyway.)
### use ftrace more easily: brendan gregg’s tools & kernelshark

As we’ve seen in this post, you need to think quite a lot about what individual kernel functions / events do to use ftrace directly. This is cool, but it’s also a lot of work!

Brendan Gregg (our linux debugging tools hero) has a repository of tools that use ftrace to give you information about various things like IO latency. They’re all in his [perf-tools][16] repository on GitHub.

The tradeoff here is that they’re easier to use, but you’re limited to things that Brendan Gregg thought of & decided to make a tool for. Which is a lot of things! :)

Another tool for visualizing the output of ftrace better is [kernelshark][17]. I haven’t played with it much yet but it looks useful. You can install it with `sudo apt-get install kernelshark`.

### a new superpower

I’m really happy I took the time to learn a little more about ftrace today! Like any kernel tool, it’ll work differently between different kernel versions, but I hope that you find it useful one day.

### an index of ftrace articles

Finally, here’s a list of a bunch of ftrace articles I found. Many of them are on LWN (Linux Weekly News), which is a pretty great source of writing on Linux. (You can buy a [subscription][18]!)
* [Debugging the kernel using Ftrace - part 1][1] (Dec 2009, Steven Rostedt)
* [Debugging the kernel using Ftrace - part 2][2] (Dec 2009, Steven Rostedt)
* [Secrets of the Linux function tracer][3] (Jan 2010, Steven Rostedt)
* [trace-cmd: A front-end for Ftrace][4] (Oct 2010, Steven Rostedt)
* [Using KernelShark to analyze the real-time scheduler][5] (2011, Steven Rostedt)
* [Ftrace: The hidden light switch][6] (2014, Brendan Gregg)
* the kernel documentation (which is quite useful): [Documentation/ftrace.txt][7]
* documentation on events you can trace: [Documentation/events.txt][8]
* some docs on ftrace design for linux kernel devs (not as useful, but interesting): [Documentation/ftrace-design.txt][9]
--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2017/03/19/getting-started-with-ftrace/

作者:[Julia Evans][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://jvns.ca
[1]:https://lwn.net/Articles/365835/
[2]:https://lwn.net/Articles/366796/
[3]:https://lwn.net/Articles/370423/
[4]:https://lwn.net/Articles/410200/
[5]:https://lwn.net/Articles/425583/
[6]:https://lwn.net/Articles/608497/
[7]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace.txt
[8]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/events.txt
[9]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace-design.txt
[10]:https://lwn.net/Articles/290277/
[11]:https://lwn.net/Articles/365835/
[12]:https://lwn.net/Articles/410200/
[13]:https://gist.githubusercontent.com/jvns/e5c2d640f7ec76ed9ed579be1de3312e/raw/78b8425436dc4bb5bb4fa76a4f85d5809f7d1ef2/trace-cmd-report.txt
[14]:https://gist.githubusercontent.com/jvns/f32e9b06bcd2f1f30998afdd93e4aaa5/raw/8154d9828bb895fd6c9b0ee062275055b3775101/function_graph.txt
[15]:https://github.com/torvalds/linux/blob/v4.4/kernel/futex.c#L1313-L1324
[16]:https://github.com/brendangregg/perf-tools
[17]:https://lwn.net/Articles/425583/
[18]:https://lwn.net/subscribe/Info
@ -0,0 +1,156 @@
Ansible Tutorial: Introduction to simple Ansible commands
======

In our earlier Ansible tutorial, we discussed [**the installation & configuration of Ansible**][1]. Now in this ansible tutorial, we will learn some basic examples of ansible commands that we will use to manage our infrastructure. So let us start by looking at the syntax of a complete ansible command:

```
$ ansible <group> -m <module> -a <arguments>
```

Here, we can also use a single host or `all` in place of <group>, and <arguments> is optional. Now let's look at some basic commands to use with ansible.
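All of these ad-hoc commands resolve <group> against the Ansible inventory. As a reminder of where groups come from, here is a minimal sketch of an inventory file; the group names and host names below are made up for illustration:

```
# /etc/ansible/hosts (default inventory location) -- hypothetical entries
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com ansible_user=dan
```

With this inventory, `ansible webservers -m ping` would target the two web hosts, while `ansible all -m ping` targets every host listed.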
### Check connectivity of hosts

We have used this command in our previous tutorial also. The command to check connectivity of hosts is:

```
$ ansible <group> -m ping
```
### Rebooting hosts

```
$ ansible <group> -a "/sbin/reboot"
```
### Checking a host's system information

Ansible collects the system's information for all the hosts connected to it. To display the information of hosts, run:

```
$ ansible <group> -m setup | less
```

To check a particular piece of the collected information, pass a filter argument:

```
$ ansible <group> -m setup -a "filter=ansible_distribution"
```
### Transferring files

For transferring files, we use the 'copy' module; the complete command is:

```
$ ansible <group> -m copy -a "src=/home/dan dest=/tmp/home"
```
### Managing users

To manage the users on the connected hosts, we use a module named 'user', and the commands to use it are as follows:

#### Creating a new user

```
$ ansible <group> -m user -a "name=testuser password=<encrypted password>"
```

#### Deleting a user

```
$ ansible <group> -m user -a "name=testuser state=absent"
```

**Note:-** To create an encrypted password, use the `mkpasswd --method=sha-512` command.
### Changing permissions & ownership

To change the permissions or ownership of files on connected hosts, we use the module named 'file'; the commands used are:

#### Changing permission of a file

```
$ ansible <group> -m file -a "dest=/home/dan/file1.txt mode=777"
```

#### Changing ownership of a file

```
$ ansible <group> -m file -a "dest=/home/dan/file1.txt mode=777 owner=dan group=dan"
```
### Managing Packages

We can manage the packages installed on all the hosts connected to ansible by using the 'yum' & 'apt' modules; the complete commands used are:

#### Check if a package is installed & update it

```
$ ansible <group> -m yum -a "name=ntp state=latest"
```

#### Check if a package is installed & don't update it

```
$ ansible <group> -m yum -a "name=ntp state=present"
```

#### Check if a package is at a specific version

```
$ ansible <group> -m yum -a "name=ntp-1.8 state=present"
```

#### Check that a package is not installed

```
$ ansible <group> -m yum -a "name=ntp state=absent"
```
### Managing services

To manage services with ansible, we use the 'service' module; the complete commands used are:

#### Starting a service

```
$ ansible <group> -m service -a "name=httpd state=started"
```

#### Stopping a service

```
$ ansible <group> -m service -a "name=httpd state=stopped"
```

#### Restarting a service

```
$ ansible <group> -m service -a "name=httpd state=restarted"
```
So this completes our tutorial of some simple, one-line commands that can be used with ansible. In future tutorials, we will learn to create plays & playbooks that help us manage our hosts more easily & efficiently.

If you think we have helped you or just want to support us, please consider these :-

Connect to us: [Facebook][2] | [Twitter][3] | [Google Plus][4]

Become a Supporter - [Make a contribution via PayPal][5]

Linux TechLab is thankful for your continued support.

--------------------------------------------------------------------------------

via: http://linuxtechlab.com/ansible-tutorial-simple-commands/

作者:[SHUSAIN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/create-first-ansible-server-automation-setup/
[2]:https://www.facebook.com/linuxtechlab/
[3]:https://twitter.com/LinuxTechLab
[4]:https://plus.google.com/+linuxtechlab
[5]:http://linuxtechlab.com/contact-us-2/
@ -1,119 +0,0 @@
translating---geekpi

Working with Vi/Vim Editor : Advanced concepts
======

Earlier we discussed some basics of the VI/VIM editor, but VI & VIM are both very powerful editors and there are many other functionalities that can be used with them. In this tutorial, we are going to learn some advanced uses of the VI/VIM editor.

( **Recommended Read** : [Working with VI editor : The Basics ][1])

## Opening multiple files with VI/VIM editor

To open multiple files, the command is the same as for a single file; we just add the other file names as well.

```
$ vi file1 file2 file3
```
Now to browse to the next file, we can use

```
$ :n
```

or we can also use

```
$ :e filename
```
## Run external commands inside the editor

We can run external Linux/Unix commands from inside the vi editor, i.e. without exiting the editor. To issue a command from the editor, go back to Command Mode if in Insert Mode, then use the bang, i.e. '!', followed by the command to be run. The syntax for running a command is:

```
$ :! command
```

An example for this would be:

```
$ :! df -H
```
## Searching for a pattern

To search for a word or pattern in the text file, we use the following two commands in command mode:

  * the '/' command searches for the pattern in the forward direction

  * the '?' command searches for the pattern in the backward direction

Both of these commands are used for the same purpose, the only difference being the direction in which they search. An example would be:

`$ :/ search pattern` (searches forward from the cursor)

`$ :? search pattern` (searches backward from the cursor)
## Searching & replacing a pattern

We might need to search & replace a word or a pattern in our text files. Rather than finding each occurrence of the word in the whole text file & replacing it by hand, we can issue a command from command mode to replace the word automatically. The syntax for search & replace across the whole file is:

```
$ :%s/pattern_to_be_found/New_pattern/g
```

Suppose we want to find the word "alpha" & replace it with the word "beta" everywhere; the command would be:

```
$ :%s/alpha/beta/g
```

If we want to replace only the first occurrence of the word "alpha" on the current line, then the command would be:

```
$ :s/alpha/beta/
```
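The same substitution semantics can be tried outside the editor with `sed`, which uses an almost identical `s/pattern/replacement/` syntax. This is purely an illustrative sketch, not part of vi itself:

```shell
# vi's :%s/alpha/beta/g replaces every occurrence; without the trailing /g
# flag only the first match on each line is replaced -- sed behaves the same
# way on a single line of input.
line='alpha one alpha two alpha'
all=$(printf '%s\n' "$line" | sed 's/alpha/beta/g')   # like :%s/alpha/beta/g
first=$(printf '%s\n' "$line" | sed 's/alpha/beta/')  # like :s/alpha/beta/
echo "$all"    # beta one beta two beta
echo "$first"  # beta one alpha two alpha
```

Experimenting with `sed` like this is a safe way to test a substitution pattern before running it on a real file inside vi.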
## Using Set commands

We can also customize the behaviour and the look and feel of the vi/vim editor by using the set command. Here is a list of some options that can be used with the set command to modify the behaviour of the vi/vim editor:

`$ :set ic ` ignores case while searching

`$ :set smartcase ` makes searches case-sensitive only when the pattern contains uppercase letters

`$ :set nu ` displays line numbers at the beginning of each line

`$ :set hlsearch ` highlights the matching words

`$ :set ro ` changes the file to read-only

`$ :set term ` prints the terminal type

`$ :set ai ` sets auto-indent

`$ :set noai ` unsets auto-indent

Some other commands to modify the vi editor are:

`$ :colorscheme ` used to change the color scheme of the editor (VIM editor only)

`$ :syntax on ` turns on syntax highlighting for .xml, .html files etc. (VIM editor only)

This completes our tutorial; do mention your queries/questions or suggestions in the comment box below.
--------------------------------------------------------------------------------

via: http://linuxtechlab.com/working-vivim-editor-advanced-concepts/

作者:[Shusain][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/working-vi-editor-basics/
@ -0,0 +1,254 @@
How to use Fio (Flexible I/O Tester) to Measure Disk Performance in Linux
======

![](https://wpmojo.com/wp-content/uploads/2017/08/wpmojo.com-how-to-use-fio-to-measure-disk-performance-in-linux-dotlayer.com-how-to-use-fio-to-measure-disk-performance-in-linux-816x457.jpeg)

Fio, which stands for Flexible I/O Tester, [is a free and open source][1] disk I/O tool developed by Jens Axboe, used both for benchmarking and for stress/hardware verification.

It has support for 19 different types of I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more), I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, and much more. It can work on block devices as well as files.

Fio accepts job descriptions in a simple-to-understand text format. Several example job files are included. Fio displays all sorts of I/O performance information, including complete IO latencies and percentiles.

It is in wide use in many places, for benchmarking, QA, and verification purposes. It supports Linux, FreeBSD, NetBSD, OpenBSD, OS X, OpenSolaris, AIX, HP-UX, Android, and Windows.

In this tutorial, we will be using Ubuntu 16, and you are required to have sudo or root privileges on the computer. We will go over the installation and use of fio.
### Installing fio from Source

We are going to clone the repo from GitHub, install the prerequisites, and then build the packages from the source code. Let's start by making sure we have git installed.

```
sudo apt-get install git
```

CentOS users can use:

```
sudo yum install git
```

Now we change directory to /opt and clone the repo from GitHub:

```
cd /opt
git clone https://github.com/axboe/fio
```

You should see the output below:

```
Cloning into 'fio'...
remote: Counting objects: 24819, done.
remote: Compressing objects: 100% (44/44), done.
remote: Total 24819 (delta 39), reused 62 (delta 32), pack-reused 24743
Receiving objects: 100% (24819/24819), 16.07 MiB | 0 bytes/s, done.
Resolving deltas: 100% (16251/16251), done.
Checking connectivity... done.
```

Now, we change directory into the fio codebase by typing the command below inside the opt folder:

```
cd fio
```

We can finally build fio from source using the `make` build utility with the commands below:

```
# ./configure
# make
# make install
```
### Installing fio on Ubuntu

For Ubuntu and Debian, fio is available in the main repository, so you can easily install fio using the standard package managers such as apt-get and yum.

For Ubuntu and Debian you can simply use:

```
sudo apt-get install fio
```

For CentOS/Red Hat, you might need to install the EPEL repository on your system before you can have access to fio. You can install it by running the following command:

```
sudo yum install epel-release -y
```

You can then install fio using the command below:

```
sudo yum install fio -y
```
### Disk Performance Testing with Fio

With Fio installed on your system, it's time to see how to use it with some examples below. We are going to perform random write, random read, and mixed read/write tests.

### Performing a Random Write Test

Let's start by running the following command. This command will write a total of 1GB [2 jobs x 512 MB = 1GB], running 2 processes at a time:

```
sudo fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=512M --numjobs=2 --runtime=240 --group_reporting
```
```
|
||||
|
||||
...
|
||||
fio-2.2.10
|
||||
Starting 2 processes
|
||||
|
||||
randwrite: (groupid=0, jobs=2): err= 0: pid=7271: Sat Aug 5 13:28:44 2017
|
||||
write: io=1024.0MB, bw=2485.5MB/s, iops=636271, runt= 412msec
|
||||
slat (usec): min=1, max=268, avg= 1.79, stdev= 1.01
|
||||
clat (usec): min=0, max=13, avg= 0.20, stdev= 0.40
|
||||
lat (usec): min=1, max=268, avg= 2.03, stdev= 1.01
|
||||
clat percentiles (usec):
|
||||
| 1.00th=[ 0], 5.00th=[ 0], 10.00th=[ 0], 20.00th=[ 0],
|
||||
| 30.00th=[ 0], 40.00th=[ 0], 50.00th=[ 0], 60.00th=[ 0],
|
||||
| 70.00th=[ 0], 80.00th=[ 1], 90.00th=[ 1], 95.00th=[ 1],
|
||||
| 99.00th=[ 1], 99.50th=[ 1], 99.90th=[ 1], 99.95th=[ 1],
|
||||
| 99.99th=[ 1]
|
||||
lat (usec) : 2=99.99%, 4=0.01%, 10=0.01%, 20=0.01%
|
||||
cpu : usr=15.14%, sys=84.00%, ctx=8, majf=0, minf=26
|
||||
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
|
||||
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
|
||||
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
|
||||
issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
|
||||
latency : target=0, window=0, percentile=100.00%, depth=1
|
||||
|
||||
Run status group 0 (all jobs):
|
||||
WRITE: io=1024.0MB, aggrb=2485.5MB/s, minb=2485.5MB/s, maxb=2485.5MB/s, mint=412msec, maxt=412msec
|
||||
|
||||
Disk stats (read/write):
|
||||
sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
|
||||
|
||||
|
||||
```
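As a side note on sizing: each fio job writes its own `--size` worth of data, so the total amount of I/O is `--size` multiplied by `--numjobs`. A minimal shell sketch of the arithmetic for the run above:

```shell
# Each fio job writes its own --size worth of data,
# so total I/O = size x numjobs.
size_mb=512
numjobs=2
total_mb=$((size_mb * numjobs))
echo "Total data written: ${total_mb} MB"   # 1024 MB = 1 GB
```

This matches the `io=1024.0MB` figure reported on the write line of the output.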
|
||||
|
||||
### Performing a Random Read Test
|
||||
|
||||
We are going to perform a random read test now; we will be reading a total of 2 GB [4 jobs x 512 MB] of random data:
|
||||
```
|
||||
|
||||
sudo fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=0 --size=512M --numjobs=4 --runtime=240 --group_reporting
|
||||
|
||||
|
||||
```
|
||||
|
||||
You should see the output below:
|
||||
```
|
||||
|
||||
...
|
||||
fio-2.2.10
|
||||
Starting 4 processes
|
||||
randread: Laying out IO file(s) (1 file(s) / 512MB)
|
||||
randread: Laying out IO file(s) (1 file(s) / 512MB)
|
||||
randread: Laying out IO file(s) (1 file(s) / 512MB)
|
||||
randread: Laying out IO file(s) (1 file(s) / 512MB)
|
||||
Jobs: 4 (f=4): [r(4)] [100.0% done] [71800KB/0KB/0KB /s] [17.1K/0/0 iops] [eta 00m:00s]
|
||||
randread: (groupid=0, jobs=4): err= 0: pid=7586: Sat Aug 5 13:30:52 2017
|
||||
read : io=2048.0MB, bw=80719KB/s, iops=20179, runt= 25981msec
|
||||
slat (usec): min=72, max=10008, avg=195.79, stdev=94.72
|
||||
clat (usec): min=2, max=28811, avg=2971.96, stdev=760.33
|
||||
lat (usec): min=185, max=29080, avg=3167.96, stdev=798.91
|
||||
clat percentiles (usec):
|
||||
| 1.00th=[ 2192], 5.00th=[ 2448], 10.00th=[ 2576], 20.00th=[ 2736],
|
||||
| 30.00th=[ 2800], 40.00th=[ 2832], 50.00th=[ 2928], 60.00th=[ 3024],
|
||||
| 70.00th=[ 3120], 80.00th=[ 3184], 90.00th=[ 3248], 95.00th=[ 3312],
|
||||
| 99.00th=[ 3536], 99.50th=[ 6304], 99.90th=[15168], 99.95th=[18816],
|
||||
| 99.99th=[22912]
|
||||
bw (KB /s): min=17360, max=25144, per=25.05%, avg=20216.90, stdev=1605.65
|
||||
lat (usec) : 4=0.01%, 10=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
|
||||
lat (usec) : 1000=0.01%
|
||||
lat (msec) : 2=0.01%, 4=99.27%, 10=0.44%, 20=0.24%, 50=0.04%
|
||||
cpu : usr=1.35%, sys=5.18%, ctx=524309, majf=0, minf=98
|
||||
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
|
||||
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
|
||||
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
|
||||
issued : total=r=524288/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
|
||||
latency : target=0, window=0, percentile=100.00%, depth=16
|
||||
|
||||
Run status group 0 (all jobs):
|
||||
READ: io=2048.0MB, aggrb=80718KB/s, minb=80718KB/s, maxb=80718KB/s, mint=25981msec, maxt=25981msec
|
||||
|
||||
Disk stats (read/write):
|
||||
sda: ios=521587/871, merge=0/1142, ticks=96664/612, in_queue=97284, util=99.85%
|
||||
|
||||
|
||||
```
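One way to sanity-check these numbers: the `bw` line reports a per-job share of roughly 25% (`per=25.05%`), which is what you'd expect with 4 equally weighted jobs. A small shell sketch dividing the aggregate bandwidth across the jobs:

```shell
# Aggregate bandwidth reported by the run above, split across 4 jobs
aggrb_kb=80718
numjobs=4
per_job_kb=$((aggrb_kb / numjobs))
echo "~${per_job_kb} KB/s per job"   # ~20179 KB/s, close to fio's per-job avg of 20216.90
```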
|
||||
|
||||
Finally, we want to show a sample mixed read/write test and the kind of output that fio returns.
|
||||
|
||||
### Read Write Performance Test
|
||||
|
||||
The command below will measure random read/write performance of USB Pen drive (/dev/sdc1):
|
||||
```
|
||||
|
||||
sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
|
||||
|
||||
|
||||
```
|
||||
|
||||
Below is the output we get from the command above.
|
||||
```
|
||||
|
||||
fio-2.2.10
|
||||
Starting 1 process
|
||||
Jobs: 1 (f=1): [m(1)] [100.0% done] [217.8MB/74452KB/0KB /s] [55.8K/18.7K/0 iops] [eta 00m:00s]
|
||||
test: (groupid=0, jobs=1): err= 0: pid=8475: Sat Aug 5 13:36:04 2017
|
||||
read : io=3071.7MB, bw=219374KB/s, iops=54843, runt= 14338msec
|
||||
write: io=1024.4MB, bw=73156KB/s, iops=18289, runt= 14338msec
|
||||
cpu : usr=6.78%, sys=20.81%, ctx=1007218, majf=0, minf=9
|
||||
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
|
||||
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
|
||||
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
|
||||
issued : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
|
||||
latency : target=0, window=0, percentile=100.00%, depth=64
|
||||
|
||||
Run status group 0 (all jobs):
|
||||
READ: io=3071.7MB, aggrb=219374KB/s, minb=219374KB/s, maxb=219374KB/s, mint=14338msec, maxt=14338msec
|
||||
WRITE: io=1024.4MB, aggrb=73156KB/s, minb=73156KB/s, maxb=73156KB/s, mint=14338msec, maxt=14338msec
|
||||
|
||||
Disk stats (read/write):
|
||||
sda: ios=774141/258944, merge=1463/899, ticks=748800/150316, in_queue=900720, util=99.35%
|
||||
|
||||
|
||||
```
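The read/write split follows directly from `--rwmixread=75`: 75% of the 4 GB workload goes to reads and the remaining 25% to writes. A quick shell check of that arithmetic:

```shell
# --rwmixread=75: 75% of the 4096 MB workload is reads, the rest writes
size_mb=4096
read_mb=$((size_mb * 75 / 100))
write_mb=$((size_mb - read_mb))
echo "reads: ${read_mb} MB, writes: ${write_mb} MB"   # 3072 MB / 1024 MB
```

This lines up with the `io=3071.7MB` read and `io=1024.4MB` write figures reported in the output above.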
|
||||
|
||||
We hope you enjoyed this tutorial and enjoyed following along. Fio is a very useful tool, and we hope you can use it in your next debugging activity. If you enjoyed reading this post, feel free to leave a comment or questions. Go ahead and clone the fio repo and play around with the code.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://wpmojo.com/how-to-use-fio-to-measure-disk-performance-in-linux/
|
||||
|
||||
作者:[Alex Pearson][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://wpmojo.com/author/wpmojo/
|
||||
[1]:https://github.com/axboe/fio
|
@ -1,102 +0,0 @@
|
||||
3 text editor alternatives to Emacs and Vim
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48)
|
||||
|
||||
Before you start reaching for those implements of mayhem, Emacs and Vim fans, understand that this article isn't about putting the boot to your favorite editor. I'm a professed Emacs guy, but one who also likes Vim. A lot.
|
||||
|
||||
That said, I realize that Emacs and Vim aren't for everyone. It might be that the silliness of the so-called [Editor war][1] has turned some people off. Or maybe they just want an editor that is less demanding and has a more modern sheen.
|
||||
|
||||
If you're looking for an alternative to Emacs or Vim, keep reading. Here are three that might interest you.
|
||||
|
||||
### Geany
|
||||
|
||||
|
||||
![Editing a LaTeX document with Geany][3]
|
||||
|
||||
|
||||
Editing a LaTeX document with Geany
|
||||
|
||||
[Geany][4] is an old favorite from the days when I computed on older hardware running lightweight Linux distributions. Geany started out as my [LaTeX][5] editor, but quickly became the app in which I did all of my text editing.
|
||||
|
||||
Although Geany is billed as a small and fast [IDE][6] (integrated development environment), it's definitely not just a techie's tool. Geany is small and it is fast, even on older hardware or a [Chromebook running Linux][7]. You can use Geany for everything from editing configuration files to maintaining a task list or journal, from writing an article or a book to doing some coding and scripting.
|
||||
|
||||
[Plugins][8] give Geany a bit of extra oomph. Those plugins expand the editor's capabilities, letting you code or work with markup languages more effectively, manipulate text, and even check your spelling.
|
||||
|
||||
### Atom
|
||||
|
||||
|
||||
![Editing a webpage with Atom][10]
|
||||
|
||||
|
||||
Editing a webpage with Atom
|
||||
|
||||
[Atom][11] is a new-ish kid in the text editing neighborhood. In the short time it's been on the scene, though, Atom has gained a dedicated following.
|
||||
|
||||
What makes Atom attractive is that you can customize it. If you're of a more technical bent, you can fiddle with the editor's configuration. If you aren't all that technical, Atom has [a number of themes][12] you can use to change how the editor looks.
|
||||
|
||||
And don't discount Atom's thousands of [packages][13]. They extend the editor in many different ways, enabling you to turn it into the text editing or development environment that's right for you. Atom isn't just for coders. It's a very good [text editor for writers][14], too.
|
||||
|
||||
### Xed
|
||||
|
||||
![Writing this article in Xed][16]
|
||||
|
||||
|
||||
Writing this article in Xed
|
||||
|
||||
Maybe Atom and Geany are a bit heavy for your tastes. Maybe you want a lighter editor, something that's not bare bones but also doesn't have features you'll rarely (if ever) use. In that case, [Xed][17] might be what you're looking for.
|
||||
|
||||
If Xed looks familiar, it's a fork of the Pluma text editor for the MATE desktop environment. I've found that Xed is a bit faster and a bit more responsive than Pluma--your mileage may vary, though.
|
||||
|
||||
Although Xed isn't as rich in features as other editors, it doesn't do too badly. It has solid syntax highlighting, a better-than-average search and replace function, a spelling checker, and a tabbed interface for editing multiple files in a single window.
|
||||
|
||||
### Other editors worth exploring
|
||||
|
||||
I'm not a KDE guy, but when I worked in that environment, [KDevelop][18] was my go-to editor for heavy-duty work. It's a lot like Geany in that KDevelop is powerful and flexible without a lot of bulk.
|
||||
|
||||
Although I've never really felt the love, more than a couple of people I know swear by [Brackets][19]. It is powerful, and I have to admit its [extensions][20] look useful.
|
||||
|
||||
Billed as a "text editor for developers," [Notepadqq][21] is an editor that's reminiscent of [Notepad++][22]. It's in the early stages of development, but Notepadqq does look promising.
|
||||
|
||||
[Gedit][23] and [Kate][24] are excellent for anyone whose text editing needs are simple. They're definitely not bare bones--they pack enough features to do heavy text editing. Both Gedit and Kate balance that by being speedy and easy to use.
|
||||
|
||||
Do you have another favorite text editor that's not Emacs or Vim? Feel free to share by leaving a comment.
|
||||
|
||||
### About The Author
|
||||
Scott Nesbitt - I'm a long-time user of free/open source software, and I write various things for both fun and profit. I don't take myself too seriously and I do all of my own stunts. You can find me at these fine establishments on the web.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/9/3-alternatives-emacs-and-vim
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/scottnesbitt
|
||||
[1]:https://en.wikipedia.org/wiki/Editor_war
|
||||
[2]:/file/370196
|
||||
[3]:https://opensource.com/sites/default/files/u128651/geany.png (Editing a LaTeX document with Geany)
|
||||
[4]:https://www.geany.org/
|
||||
[5]:https://opensource.com/article/17/6/introduction-latex
|
||||
[6]:https://en.wikipedia.org/wiki/Integrated_development_environment
|
||||
[7]:https://opensource.com/article/17/4/linux-chromebook-gallium-os
|
||||
[8]:http://plugins.geany.org/
|
||||
[9]:/file/370191
|
||||
[10]:https://opensource.com/sites/default/files/u128651/atom.png (Editing a webpage with Atom)
|
||||
[11]:https://atom.io
|
||||
[12]:https://atom.io/themes
|
||||
[13]:https://atom.io/packages
|
||||
[14]:https://opensource.com/article/17/5/atom-text-editor-packages-writers
|
||||
[15]:/file/370201
|
||||
[16]:https://opensource.com/sites/default/files/u128651/xed.png (Writing this article in Xed)
|
||||
[17]:https://github.com/linuxmint/xed
|
||||
[18]:https://www.kdevelop.org/
|
||||
[19]:http://brackets.io/
|
||||
[20]:https://registry.brackets.io/
|
||||
[21]:http://notepadqq.altervista.org/s/
|
||||
[22]:https://opensource.com/article/16/12/notepad-text-editor
|
||||
[23]:https://wiki.gnome.org/Apps/Gedit
|
||||
[24]:https://kate-editor.org/
|
@ -1,3 +1,5 @@
|
||||
translate by cy
|
||||
|
||||
Linux directory structure: /lib explained
|
||||
======
|
||||
[![lib folder linux][1]][1]
|
||||
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
Bash Bypass Alias Linux/Unix Command
|
||||
======
|
||||
I defined mount bash shell alias as follows on my Linux system:
|
||||
|
@ -1,131 +0,0 @@
|
||||
Translating by qhwdw 10 layers of Linux container security | Opensource.com
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA)
|
||||
|
||||
Containers provide an easy way to package applications and deliver them seamlessly from development to test to production. This helps ensure consistency across a variety of environments, including physical servers, virtual machines (VMs), or private or public clouds. These benefits are leading organizations to rapidly adopt containers in order to easily develop and manage the applications that add business value.
|
||||
|
||||
Enterprises require strong security, and anyone running essential services in containers will ask, "Are containers secure?" and "Can we trust containers with our applications?"
|
||||
|
||||
Securing containers is a lot like securing any running process. You need to think about security throughout the layers of the solution stack before you deploy and run your container. You also need to think about security throughout the application and container lifecycle.
|
||||
|
||||
Try these 10 key elements to secure different layers of the container solution stack and different stages of the container lifecycle.
|
||||
|
||||
### 1. The container host operating system and multi-tenancy
|
||||
|
||||
Containers make it easier for developers to build and promote an application and its dependencies as a unit and to get the most use of servers by enabling multi-tenant application deployments on a shared host. It's easy to deploy multiple applications on a single host, spinning up and shutting down individual containers as needed. To take full advantage of this packaging and deployment technology, the operations team needs the right environment for running containers. Operations needs an operating system that can secure containers at the boundaries, securing the host kernel from container escapes and securing containers from each other.
|
||||
|
||||
### 2. Container content (use trusted sources)
|
||||
|
||||
Containers are Linux processes with isolation and resource confinement that enable you to run sandboxed applications on a shared host kernel. Your approach to securing containers should be the same as your approach to securing any running process on Linux. Dropping privileges is important and still the best practice. Even better is to create containers with the least privilege possible. Containers should run as user, not root. Next, make use of the multiple levels of security available in Linux. Linux namespaces, Security-Enhanced Linux ( [SELinux][1] ), [cgroups][2] , capabilities, and secure computing mode ( [seccomp][3] ) are five of the security features available for securing containers.
|
||||
|
||||
When it comes to security, what's inside your container matters. For some time now, applications and infrastructures have been composed from readily available components. Many of these are open source packages, such as the Linux operating system, Apache Web Server, Red Hat JBoss Enterprise Application Platform, PostgreSQL, and Node.js. Containerized versions of these packages are now also readily available, so you don't have to build your own. But, as with any code you download from an external source, you need to know where the packages originated, who built them, and whether there's any malicious code inside them.
|
||||
|
||||
### 3. Container registries (secure access to container images)
|
||||
|
||||
Your teams are building containers that layer content on top of downloaded public container images, so it's critical to manage access to and promotion of the downloaded container images and the internally built images in the same way other types of binaries are managed. Many private registries support storage of container images. Select a private registry that helps to automate policies for the use of container images stored in the registry.
|
||||
|
||||
### 4. Security and the build process
|
||||
|
||||
In a containerized environment, the software-build process is the stage in the lifecycle where application code is integrated with needed runtime libraries. Managing this build process is key to securing the software stack. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It's also important to maintain the immutability of your containers--in other words, do not patch running containers; rebuild and redeploy them instead.
|
||||
|
||||
Whether you work in a highly regulated industry or simply want to optimize your team's efforts, design your container image management and build process to take advantage of container layers to implement separation of control, so that the:
|
||||
|
||||
* Operations team manages base images
|
||||
* Architects manage middleware, runtimes, databases, and other such solutions
|
||||
* Developers focus on application layers and just write code
|
||||
|
||||
|
||||
|
||||
Finally, sign your custom-built containers so that you can be sure they are not tampered with between build and deployment.
|
||||
|
||||
### 5. Control what can be deployed within a cluster
|
||||
|
||||
In case anything falls through during the build process, or for situations where a vulnerability is discovered after an image has been deployed, add yet another layer of security in the form of tools for automated, policy-based deployment.
|
||||
|
||||
Let's look at an application that's built using three container image layers: core, middleware, and the application layer. An issue is discovered in the core image and that image is rebuilt. Once the build is complete, the image is pushed to the container platform registry. The platform can detect that the image has changed. For builds that are dependent on this image and have triggers defined, the platform will automatically rebuild the application image, incorporating the fixed libraries.
|
||||
|
||||
|
||||
|
||||
Once the build is complete, the image is pushed to container platform's internal registry. It immediately detects changes to images in its internal registry and, for applications where triggers are defined, automatically deploys the updated image, ensuring that the code running in production is always identical to the most recently updated image. All these capabilities work together to integrate security capabilities into your continuous integration and continuous deployment (CI/CD) process and pipeline.
|
||||
|
||||
### 6. Container orchestration: Securing the container platform
|
||||
|
||||
|
||||
|
||||
Of course, applications are rarely delivered in a single container. Even simple applications typically have a frontend, a backend, and a database. And deploying modern microservices applications in containers means deploying multiple containers, sometimes on the same host and sometimes distributed across multiple hosts or nodes, as shown in this diagram.
|
||||
|
||||
When managing container deployment at scale, you need to consider:
|
||||
|
||||
* Which containers should be deployed to which hosts?
|
||||
* Which host has more capacity?
|
||||
* Which containers need access to each other? How will they discover each other?
|
||||
* How will you control access to--and management of--shared resources, like network and storage?
|
||||
* How will you monitor container health?
|
||||
* How will you automatically scale application capacity to meet demand?
|
||||
* How will you enable developer self-service while also meeting security requirements?
|
||||
|
||||
|
||||
|
||||
Given the wealth of capabilities for both developers and operators, strong role-based access control is a critical element of the container platform. For example, the orchestration management servers are a central point of access and should receive the highest level of security scrutiny. APIs are key to automating container management at scale and used to validate and configure the data for pods, services, and replication controllers; perform project validation on incoming requests; and invoke triggers on other major system components.
|
||||
|
||||
### 7. Network isolation
|
||||
|
||||
Deploying modern microservices applications in containers often means deploying multiple containers distributed across multiple nodes. With network defense in mind, you need a way to isolate applications from one another within a cluster. Typical public cloud container services, like Google Container Engine (GKE), Azure Container Services, or Amazon Web Services (AWS) Container Service, are single-tenant services. They let you run your containers on the VM cluster that you initiate. For secure container multi-tenancy, you want a container platform that allows you to take a single cluster and segment the traffic to isolate different users, teams, applications, and environments within that cluster.
|
||||
|
||||
With network namespaces, each collection of containers (known as a "pod") gets its own IP and port range to bind to, thereby isolating pod networks from each other on the node. Pods from different namespaces (projects) cannot send packets to or receive packets from pods and services of a different project by default, with the exception of options noted below. You can use these features to isolate developer, test, and production environments within a cluster; however, this proliferation of IP addresses and ports makes networking more complicated. In addition, containers are designed to come and go. Invest in tools that handle this complexity for you. The preferred tool is a container platform that uses [software-defined networking][4] (SDN) to provide a unified cluster network that enables communication between containers across the cluster.
|
||||
|
||||
### 8. Storage
|
||||
|
||||
Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Container platforms should provide plugins for multiple flavors of storage, including network file systems (NFS), AWS Elastic Block Stores (EBS), GCE Persistent Disks, GlusterFS, iSCSI, RADOS (Ceph), Cinder, etc.
|
||||
|
||||
A persistent volume (PV) can be mounted on a host in any way supported by the resource provider. Providers will have different capabilities, and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read only. Each PV gets its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.
|
||||
|
||||
### 9. API management, endpoint security, and single sign-on (SSO)
|
||||
|
||||
Securing your applications includes managing application and API authentication and authorization.
|
||||
|
||||
Web SSO capabilities are a key part of modern applications. Container platforms can come with various containerized services for developers to use when building their applications.
|
||||
|
||||
APIs are key to applications composed of microservices. These applications have multiple independent API services, leading to proliferation of service endpoints, which require additional tools for governance. An API management tool is also recommended. All API platforms should offer a variety of standard options for API authentication and security, which can be used alone or in combination, to issue credentials and control access.
|
||||
|
||||
|
||||
|
||||
These options include standard API keys, application ID and key pairs, and OAuth 2.0.
|
||||
|
||||
### 10. Roles and access management in a cluster federation
|
||||
|
||||
|
||||
|
||||
In July 2016, Kubernetes 1.3 introduced [Kubernetes Federated Clusters][5]. This is one of the exciting new features evolving in the Kubernetes upstream, currently in beta in Kubernetes 1.6. Federation is useful for deploying and accessing application services that span multiple clusters running in the public cloud or enterprise datacenters. Multiple clusters can be useful to enable application high availability across multiple availability zones or to enable common management of deployments or migrations across multiple cloud providers, such as AWS, Google Cloud, and Azure.
|
||||
|
||||
When managing federated clusters, you must be sure that your orchestration tools provide the security you need across the different deployment platform instances. As always, authentication and authorization are key--as well as the ability to securely pass data to your applications, wherever they run, and manage application multi-tenancy across clusters. Kubernetes is extending Cluster Federation to include support for Federated Secrets, Federated Namespaces, and Ingress objects.
|
||||
|
||||
### Choosing a container platform
|
||||
|
||||
Of course, it is not just about security. Your container platform needs to provide an experience that works for your developers and your operations team. It needs to offer a secure, enterprise-grade container-based application platform that enables both developers and operators, without compromising the functions needed by each team, while also improving operational efficiency and infrastructure utilization.
|
||||
|
||||
Learn more in Daniel's talk, [Ten Layers of Container Security][6], at [Open Source Summit EU][7], which will be held October 23-26 in Prague.
|
||||
|
||||
### About The Author
|
||||
Daniel Oh - Microservices, Agile, DevOps, Java EE, Containers, OpenShift, JBoss, Evangelism
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/10/10-layers-container-security
|
||||
|
||||
作者:[Daniel Oh][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/daniel-oh
|
||||
[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux
|
||||
[2]:https://en.wikipedia.org/wiki/Cgroups
|
||||
[3]:https://en.wikipedia.org/wiki/Seccomp
|
||||
[4]:https://en.wikipedia.org/wiki/Software-defined_networking
|
||||
[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/
|
||||
[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223
|
||||
[7]:http://events.linuxfoundation.org/events/open-source-summit-europe
|
@ -1,95 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Using the Linux find command with caution
|
||||
======
|
||||
![](https://images.idgesg.net/images/article/2017/10/caution-sign-100738884-large.jpg)
|
||||
A friend recently reminded me of a useful option that can add a little caution to the commands that I run with the Linux find command. It's called -ok and it works like the -exec option except for one important difference -- it makes the find command ask for permission before taking the specified action.
|
||||
|
||||
Here's an example. If you were looking for files that you intended to remove from the system using find, you might run a command like this:
|
||||
```
|
||||
$ find . -name runme -exec rm {} \;
|
||||
|
||||
```
|
||||
|
||||
Anywhere within the current directory and its subdirectories, any files named "runme" would be summarily removed -- provided, of course, you have permission to remove them. Use the -ok option instead, and you'll see something like this. The find command will ask for approval before removing the files. Answering **y** for "yes" would allow the find command to go ahead and remove the files one by one.
|
||||
```
|
||||
$ find . -name runme -ok rm {} \;
|
||||
< rm ... ./bin/runme > ?
|
||||
|
||||
```
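If you want to experiment with -ok safely, you can do it in a throwaway directory. A minimal sketch (the directory layout here is just for illustration): find reads each y/n answer from standard input, so we can script a "no" response and confirm nothing gets deleted:

```shell
# Build a scratch tree containing two files named "runme"
dir=$(mktemp -d)
mkdir -p "$dir/bin"
touch "$dir/runme" "$dir/bin/runme"

# Answer "n" to both prompts, so nothing is actually removed
printf 'n\nn\n' | find "$dir" -name runme -ok rm {} \;

ls "$dir/runme"   # still there, because we declined

rm -rf "$dir"     # clean up
```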
|
||||
|
||||
### The -execdir command is also an option
|
||||
|
||||
Another option that can be used to modify the behavior of the find command and potentially make it more controllable is the -execdir command. Where -exec runs whatever command is specified, -execdir runs the specified command from the directory in which the located file resides rather than from the directory in which the find command is run. Here's an example of how it works:
|
||||
```
|
||||
$ pwd
|
||||
/home/shs
|
||||
$ find . -name runme -execdir pwd \;
|
||||
/home/shs/bin
|
||||
|
||||
```
|
||||
```
|
||||
$ find . -name runme -execdir ls \;
|
||||
ls rm runme
|
||||
|
||||
```
|
||||
|
||||
So far, so good. One important thing to keep in mind, however, is that the -execdir option will also run commands from the directories in which the located files reside. If you run the command shown below and the directory contains a file named "ls", it will run that file and it will run it even if the file does _not_ have execute permissions set. Using **-exec** or **-execdir** is similar to running a command by sourcing it.
|
||||
```
|
||||
$ find . -name runme -execdir ls \;
|
||||
Running the /home/shs/bin/ls file
|
||||
|
||||
```
|
||||
```
|
||||
$ find . -name runme -execdir rm {} \;
|
||||
This is an imposter rm command
|
||||
|
||||
```
|
||||
```
|
||||
$ ls -l bin
|
||||
total 12
|
||||
-r-x------ 1 shs shs 25 Oct 13 18:12 ls
|
||||
-rwxr-x--- 1 shs shs 36 Oct 13 18:29 rm
|
||||
-rw-rw-r-- 1 shs shs 28 Oct 13 18:55 runme
|
||||
|
||||
```
|
||||
```
|
||||
$ cat bin/ls
|
||||
echo Running the $0 file
|
||||
$ cat bin/rm
|
||||
echo This is an imposter rm command
|
||||
|
||||
```
|
||||
|
||||
### The -okdir option also asks for permission
|
||||
|
||||
To be more cautious, you can use the **-okdir** option. Like **-ok**, this option will prompt for permission to run the command.
|
||||
```
|
||||
$ find . -name runme -okdir rm {} \;
|
||||
< rm ... ./bin/runme > ?
|
||||
|
||||
```
You can also be careful to specify the commands you want to run with full paths to avoid any problems with imposter commands like those shown above.

```
$ find . -name runme -execdir /bin/rm {} \;
```
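A quick sandbox check (the directory names are illustrative) confirms that even with an absolute command path, -execdir still runs the command from the matched file's directory:

```shell
# Build a sandbox containing bin/runme
top=$(mktemp -d)
mkdir "$top/bin"
touch "$top/bin/runme"

# /bin/pwd is given by absolute path, yet -execdir runs it from the
# directory holding the match, so it prints a path ending in /bin
find "$top" -name runme -execdir /bin/pwd \;
rm -rf "$top"
```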
The find command has a lot of options besides the default print. Some can make your file searching more precise, but a little caution is always a good idea.

Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3233305/linux/using-the-linux-find-command-with-caution.html

作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world
@ -1,68 +0,0 @@
translating by lujun9972
Run Linux On Android Devices, No Rooting Required!
======
![](https://www.ostechnix.com/wp-content/uploads/2017/10/Termux-720x340.jpg)

The other day I was searching for a simple and easy way to run Linux on Android. My only intention was to use Linux with some basic applications like SSH, Git, awk etc. Not much! I didn't want to root the Android device. I have a Tablet PC that I mostly use for reading ebooks, news, and a few Linux blogs. I don't use it much for other activities. So, I decided to use it for some Linux activities. After spending a few minutes on the Google Play Store, one app immediately caught my attention and I wanted to give it a try. If you've ever wondered how to run Linux on Android devices, this one might help.

### Termux - An Android terminal emulator to run Linux on Android and Chrome OS

**Termux** is an Android terminal emulator and Linux environment app. Unlike many other apps, you don't need to root your device, and no setup is required. It just works out of the box! A minimal base Linux system is installed automatically, and of course you can install other packages with the APT package manager. In short, you can use your Android device like a pocket Linux computer. It's not just for Android; you can install it on Chrome OS too.

Termux offers more significant features than you might think:

  * It allows you to SSH to your remote server via OpenSSH.
  * You can also SSH into your Android devices from any remote system.
  * Sync your smartphone contacts to a remote system using rsync and curl.
  * You can choose any shell, such as BASH, ZSH, or FISH.
  * You can choose different text editors such as Emacs, Nano, and Vim to edit/view files.
  * Install any packages of your choice on your Android device using the APT package manager. Up-to-date versions of Git, Perl, Python, Ruby and Node.js are all available.
  * Connect your Android device to a Bluetooth keyboard, mouse and external display and use it like a convergence device. Termux supports keyboard shortcuts.
  * Termux allows you to run almost all GNU/Linux commands.

It also has some extra features. You can enable them by installing the addons. For instance, the **Termux:API** addon will allow you to access Android and Chrome hardware features. The other useful addons are:

  * Termux:Boot - Run script(s) when your device boots.
  * Termux:Float - Run Termux in a floating window.
  * Termux:Styling - Provides color schemes and powerline-ready fonts to customize the appearance of the Termux terminal.
  * Termux:Task - Provides an easy way to call Termux executables from Tasker and compatible apps.
  * Termux:Widget - Provides an easy way to start small scriptlets from the home screen.

To know more about Termux, open the built-in help section by long-pressing anywhere on the terminal and selecting the Help menu option. The only drawback is that it **requires Android 5.0 and higher versions**. It could be more useful for many users if it supported Android 4.x and older versions. Termux is available in the **Google Play Store** and on **F-Droid**.

To install Termux from the Google Play Store, click the following button.

[![termux][1]][2]

To install it from F-Droid, click the following button.

[![][1]][3]

You now know how to try Linux on your Android devices using Termux. Do you use any other better apps worth trying? Please mention them in the comment section below. I'd love to try them too!

Cheers!

Resource:

[Termux website][4]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/termux-run-linux-android-devices-no-rooting-required/

作者:[SK][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:https://play.google.com/store/apps/details?id=com.termux
[3]:https://f-droid.org/packages/com.termux/
[4]:https://termux.com/
@ -1,3 +1,5 @@
translating---geekpi

Easy guide to secure VNC server with TLS encryption
======
In this tutorial, we will learn to install VNC server & secure VNC server sessions with TLS encryption.
@ -1,97 +0,0 @@
Translating by Drshu

How to bind ntpd to specific IP addresses on Linux/Unix
======

By default, my ntpd/NTP server listens on all interfaces and IP addresses, i.e. 0.0.0.0:123. How do I make sure ntpd only listens on a specific IP address, such as localhost or 192.168.1.1:123, on a Linux or FreeBSD Unix server?

NTP is an acronym for Network Time Protocol. It is used for clock synchronization between computers. The ntpd program is an operating system daemon which sets and maintains the system time of day in synchronism with Internet standard time servers.

[![How to prevent NTPD from listening on 0.0.0.0:123 and binding to specific IP addresses on a Linux/Unix server][1]][1]

NTP is configured using the ntp.conf file located in the /etc/ directory.

## interface directive in /etc/ntp.conf

You can prevent ntpd from listening on 0.0.0.0:123 by setting the interface directive. The syntax is:

```
interface listen IPv4|IPv6|all
interface ignore IPv4|IPv6|all
interface drop IPv4|IPv6|all
```

These directives configure which network addresses ntpd listens on, ignores, or drops without processing any requests. `ignore` prevents opening matching addresses; `drop` causes ntpd to open the address and drop all received packets without examination. For example, to stop listening on all interfaces, add the following to /etc/ntp.conf:

```
interface ignore wildcard
```

To listen only on the 127.0.0.1 and 192.168.1.1 addresses:

```
interface listen 127.0.0.1
interface listen 192.168.1.1
```

Here is my sample /etc/ntp.conf file from a FreeBSD cloud server:

```
$ egrep -v '^#|^$' /etc/ntp.conf
```

Sample outputs:

```
tos minclock 3 maxclock 6
pool 0.freebsd.pool.ntp.org iburst
restrict default limited kod nomodify notrap noquery nopeer
restrict -6 default limited kod nomodify notrap noquery nopeer
restrict source limited kod nomodify notrap noquery
restrict 127.0.0.1
restrict -6 ::1
leapfile "/var/db/ntpd.leap-seconds.list"
interface ignore wildcard
interface listen 172.16.3.1
interface listen 10.105.28.1
```
## Restart ntpd

Reload/restart ntpd on FreeBSD Unix:

```
$ sudo /etc/rc.d/ntpd restart
```

OR [use the following command on Debian/Ubuntu Linux][2]:

```
$ sudo systemctl restart ntp
```

OR [use the following on CentOS/RHEL 7/Fedora Linux][2]:

```
$ sudo systemctl restart ntpd
```

## Verification

Use the netstat or ss command to verify that ntpd is bound to the specific IP addresses only:

```
$ netstat -tulpn | grep :123
```

OR

```
$ ss -tulpn | grep :123
```

Sample outputs:

```
udp    0      0      10.105.28.1:123     0.0.0.0:*      -
udp    0      0      172.16.3.1:123      0.0.0.0:*      -
```
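If you need to check the binding from a script, the local address column can be pulled out with awk. The sketch below hard-codes the two sample output lines shown above instead of querying a live system:

```shell
# Print only the local address:port column (field 4 of netstat/ss output)
printf '%s\n' \
  'udp    0    0    10.105.28.1:123    0.0.0.0:*    -' \
  'udp    0    0    172.16.3.1:123     0.0.0.0:*    -' |
awk '{ print $4 }'
```

On a real system you would feed it `ss -tulpn | grep :123` instead of the `printf`.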
Use [the sockstat command on a FreeBSD Unix server][3]:

```
$ sudo sockstat
$ sudo sockstat -4
$ sudo sockstat -4 | grep :123
```

Sample outputs:

```
root     ntpd       59914 22 udp4   127.0.0.1:123         *:*
root     ntpd       59914 24 udp4   127.0.1.1:123         *:*
```

## Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][4], [Facebook][5], [Google+][6].

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/how-to-bind-ntpd-to-specific-ip-addresses-on-linuxunix/

作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/media/new/faq/2017/10/how-to-prevent-ntpd-to-listen-on-all-interfaces-on-linux-unix-box.jpg
[2]:https://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/
[3]:https://www.cyberciti.biz/faq/freebsd-unix-find-the-process-pid-listening-on-a-certain-port-commands/
[4]:https://twitter.com/nixcraft
[5]:https://facebook.com/nixcraft
[6]:https://plus.google.com/+CybercitiBiz
@ -1,138 +0,0 @@
translating by lujun9972
What is huge pages in Linux?
======
Learn about huge pages in Linux: what they are, how to configure them, how to check their current state and how to disable them.

![Huge Pages in Linux][1]

In this article, we will walk you through the details of huge pages so that you will be able to answer: what are huge pages in Linux? How do you enable/disable huge pages? How do you determine the huge page value? We cover Linux distributions like RHEL6, RHEL7, Ubuntu etc.

Let's start with the huge pages basics.

### What is a huge page in Linux?

Huge pages help with virtual memory management on Linux systems. As the name suggests, they let the kernel manage memory pages much larger than the standard 4 KB page size. Huge pages can be as large as 1 GB.

During system boot, you reserve a portion of memory as huge pages for your application. This portion, i.e. the memory occupied by huge pages, is never swapped out. It stays reserved until you change your configuration. This increases application performance considerably for applications with large memory requirements, such as Oracle databases.

### Why use huge pages?

For virtual memory management, the kernel maintains a table mapping virtual memory addresses to physical addresses. For every page transaction, the kernel needs to load the related mapping. With small pages, more pages are in use, so the kernel has to load more mappings. This decreases performance.

Using huge pages means you need fewer pages. This greatly decreases the number of mappings the kernel has to load, which improves kernel-level performance and ultimately benefits your application.

In short, by enabling huge pages, the system has fewer page tables to deal with and hence less overhead to access/maintain them!

### How to configure huge pages?

Run the command below to check the current huge page details:

```
root@kerneltalks # grep Huge /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
```

In the output above you can see the page size is 2 MB (`Hugepagesize`) and that there are 0 huge pages on the system (`HugePages_Total`). The huge page size can be increased from 2 MB to a maximum of 1 GB.

Run the script below to find out how many huge pages your system currently needs. The script comes from Oracle.

```
#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
#
# Note: This script does calculation for all shared memory
# segments available when the script is run, no matter it
# is an Oracle RDBMS shared memory segment or not.
# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}`
# Start from 1 pages to be on the safe side and guarantee 1 free HugePage
NUM_PG=1
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"`
do
    MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
    if [ $MIN_PG -gt 0 ]; then
        NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
    fi
done
# Finish with results
case $KERN in
    '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
        echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
    '2.6' | '3.8' | '3.10' | '4.1' ) echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    *) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac
# End
```
You can save it as `/tmp/hugepages_settings.sh` and then run it like below:

```
root@kerneltalks # sh /tmp/hugepages_settings.sh
Recommended setting: vm.nr_hugepages = 124
```

Your output will be a similar recommendation.

This means your system needs 124 huge pages of 2 MB each! If you had set 4 MB as the page size, the output would have been 62. You get the point, right?
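The arithmetic behind those numbers is simply the amount of memory to back, divided by the page size. The 253952 kB figure below is an illustrative amount of shared memory, chosen so that the result matches the sample output above:

```shell
seg_kb=253952                 # shared memory to back with huge pages, in kB
echo $(( seg_kb / 2048 ))     # pages needed at a 2 MB page size -> 124
echo $(( seg_kb / 4096 ))     # pages needed at a 4 MB page size -> 62
```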

### Configure hugepages in kernel

The last part is to configure the [kernel parameter][2] stated above and reload it. Add the value below to `/etc/sysctl.conf` and reload the configuration by issuing the `sysctl -p` command.

```
vm.nr_hugepages=126
```

Notice that we added 2 extra pages in the kernel, since we want to keep a couple of spare pages beyond the actual required number.

Now huge pages have been configured in the kernel, but to allow your application to use them you also need to increase the memory limits. The new memory limit should be 126 pages x 2 MB each = 252 MB, i.e. 258048 kB.
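That limit is just the page count multiplied by the page size in kB:

```shell
pages=126         # the vm.nr_hugepages value configured above
page_kb=2048      # Hugepagesize from /proc/meminfo, in kB
echo $(( pages * page_kb ))   # memlock limit in kB -> 258048
```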

You need to edit the settings below in `/etc/security/limits.conf`:

```
soft memlock 258048
hard memlock 258048
```

Sometimes these settings are configured in app-specific files; for Oracle DB that is `/etc/security/limits.d/99-grid-oracle-limits.conf`.

That's it! You might want to restart your application to make use of these new huge pages.

### How to disable hugepages?

Transparent huge pages are generally enabled by default. Use the command below to check their current state:

```
root@kerneltalks# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
```

The `[always]` flag in the output shows that transparent hugepages are enabled on the system.

For RedHat-based systems the file path is `/sys/kernel/mm/redhat_transparent_hugepage/enabled`.

If you want to disable huge pages, add `transparent_hugepage=never` at the end of the `kernel` line in `/etc/grub.conf` and reboot the system.
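For example, on a GRUB legacy system the kernel line would end up looking like the sketch below; the GRUB2 equivalent (an assumption, since the article only mentions `/etc/grub.conf`) is the `GRUB_CMDLINE_LINUX` variable in `/etc/default/grub`, followed by regenerating the GRUB configuration:

```
# /etc/grub.conf (GRUB legacy) - append to the existing kernel line:
kernel /vmlinuz-... ro root=... transparent_hugepage=never

# /etc/default/grub (GRUB2, path assumed) - then regenerate grub.cfg:
GRUB_CMDLINE_LINUX="... transparent_hugepage=never"
```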

--------------------------------------------------------------------------------

via: https://kerneltalks.com/services/what-is-huge-pages-in-linux/

作者:[Shrikant Lavhate][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://kerneltalks.com
[1]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png
[2]:https://kerneltalks.com/linux/how-to-tune-kernel-parameters-in-linux/
@ -1,3 +1,5 @@
translate by cyleft

Command line fun: Insult the user when typing wrong bash command
======
You can configure the sudo command to insult users when they type the wrong password. Now it is also possible to insult users when they enter the wrong command at the shell prompt.
@ -1,3 +1,5 @@
Translating by jessie-pang

How to use special permissions: the setuid, setgid and sticky bits
======
@ -1,3 +1,5 @@
Translating by jessie-pang

Protecting Your Website From Application Layer DOS Attacks With mod
======
There exist many ways of maliciously taking a website offline. The more complicated methods involve technical knowledge of databases and programming. A far simpler method is known as a "Denial Of Service" (DOS) attack, which derives its name from its goal: to deny your regular clients or site visitors normal website service.
@ -1,143 +0,0 @@
How to Configure Linux for Children
======

![](https://www.maketecheasier.com/assets/uploads/2017/12/keep-kids-safe-online-hero.jpg)

If you've been around computers for a while, you might associate Linux with a certain stereotype of computer user. How do you know someone uses Linux? Don't worry, they'll tell you.

But Linux is an exceptionally customizable operating system. This allows users an unprecedented degree of control. In fact, parents can set up a specialized Linux distro for children, ensuring children don't stumble across dangerous content accidentally. While the process is more prolonged than using Windows, it's also more powerful and durable. Linux is also free, which can make it well-suited for classroom or computer lab deployment.

## Linux Distros for Children

These Linux distros for children are built with simplified, kid-friendly interfaces. An adult will need to install and set up the operating system at first, but kids can run the computer entirely alone. You'll find large colorful interfaces, plenty of pictures and simple language.

Unfortunately, none of these distros are regularly updated, and some are no longer in active development. That doesn't mean they won't work, but it does make malfunctions more likely.

![qimo-gcompris][1]

### 1. Edubuntu

[Edubuntu][2] is an education-specific fork of the popular Ubuntu operating system. It has a rich graphical environment and ships with a lot of educational software that's easy to update and maintain. It's designed for children in middle and high school.

### 2. Ubermix

[Ubermix][3] is designed from the ground up with the needs of education in mind. Ubermix takes all the complexity out of student devices by making them as reliable and easy-to-use as a cell phone without sacrificing the power and capabilities of a full operating system. With a turn-key, five-minute installation, a twenty-second quick recovery mechanism, and more than sixty free applications pre-installed, Ubermix turns whatever hardware you have into a powerful device for learning.

### 3. Sugar

[Sugar][4] is the operating system built for the One Laptop Per Child initiative. Sugar is pretty different from normal desktop Linux, with a heavy bias towards classroom use and teaching programming skills.

**Note**: there are several more Linux distros for kids that we didn't include in the list above because they have not been actively developed or were abandoned a long time ago.

## Content Filtering Linux for Children

The best tool for protecting children from accessing inappropriate content is you, but you can't be there all the time. Content filtering via proxy filtering sets up certain URLs as "off limits." There are several tools you can use.

![linux-for-children-content-filtering][5]

### 1. DansGuardian

[DansGuardian][6], an open-source content filter that works on virtually every Linux distro, is flexible and powerful, requiring command-line setup with a proxy of your choice. If you don't mind digging into proxy settings, this is the most powerful choice.

Setting up DansGuardian is not an easy task, and you can follow the installation instructions on its main page. But once it is set up, it is a very effective tool to filter out unwanted content.

### 2. Parental Control: Family Friendly Filter

[Parental Control: Family Friendly Filter][7] is an extension for Firefox that allows parents to block sites containing pornography and any other kind of inappropriate material. You can blacklist particular domains so that bad websites are always blocked.

![firefox-content-filter-addon][8]

If you are still using an older version of Firefox that doesn't support [web extensions][9], then you can check out [ProCon Latte Content Filter][10]. Parents add domains to a pre-loaded blacklist and set a password to keep the extension from being modified.

### 3. Blocksi Web Filter

[Blocksi Web Filter][11] is an extension for Chrome and is useful for Web and YouTube filtering. It also comes with a time-access control so that you can limit the hours your kids can access the Web.
## Fun Stuff

![linux-for-children-tux-kart][12]

Any computer for children had better have some games on it, educational or otherwise. While Linux isn't as gaming-friendly as Windows, it's getting closer all the time. Here are several suggestions for constructive games you might load onto Linux for children:

  * [Super Tux Kart][21] (kart racing game)
  * [GCompris][22] (educational game suite)
  * [Secret Maryo Chronicles][23] (Super Mario clone)
  * [Childsplay][24] (educational/memory games)
  * [EToys][25] (programming for kids)
  * [TuxTyping][26] (typing game)
  * [Kalzium][27] (periodic table guide)
  * [Tux of Math Command][28] (math arcade games)
  * [Pink Pony][29] (Tron-like racing game)
  * [KTuberling][30] (constructor game)
  * [TuxPaint][31] (painting)
  * [Blinken][32] ([memory][33] game)
  * [KTurtle][34] (educational programming environment)
  * [KStars][35] (desktop planetarium)
  * [Marble][36] (virtual globe)
  * [KHangman][37] (hangman guessing game)
## Conclusion: Why Linux for Children?

Linux has a reputation for being needlessly complex. So why use Linux for children? It's about setting kids up to learn. Working with Linux provides many opportunities to learn how the operating system works. As children get older, they'll have opportunities to explore, driven by their own interests and curiosity. Because the Linux platform is so open to users, it's an excellent venue for children to discover a life-long love of computers.

This article was first published in July 2010 and was updated in December 2017.

Image by [Children at school][13]

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/configure-linux-for-children/

作者:[Alexander Fox][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com/author/alexfox/
[1]:https://www.maketecheasier.com/assets/uploads/2010/08/qimo-gcompris.jpg (qimo-gcompris)
[2]:http://www.edubuntu.org
[3]:http://www.ubermix.org/
[4]:http://wiki.sugarlabs.org/go/Downloads
[5]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-content-filtering.png (linux-for-children-content-filtering)
[6]:https://help.ubuntu.com/community/DansGuardian
[7]:https://addons.mozilla.org/en-US/firefox/addon/family-friendly-filter/
[8]:https://www.maketecheasier.com/assets/uploads/2017/12/firefox-content-filter-addon.png (firefox-content-filter-addon)
[9]:https://www.maketecheasier.com/best-firefox-web-extensions/
[10]:https://addons.mozilla.org/en-US/firefox/addon/procon-latte/
[11]:https://chrome.google.com/webstore/detail/blocksi-web-filter/pgmjaihnmedpcdkjcgigocogcbffgkbn?hl=en
[12]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-tux-kart-e1513389774535.jpg (linux-for-children-tux-kart)
[13]:https://www.flickr.com/photos/lupuca/8720604364
[21]:http://supertuxkart.sourceforge.net/
[22]:http://gcompris.net/
[23]:http://www.secretmaryo.org/
[24]:http://www.schoolsplay.org/
[25]:http://www.squeakland.org/about/intro/
[26]:http://tux4kids.alioth.debian.org/tuxtype/index.php
[27]:http://edu.kde.org/kalzium/
[28]:http://tux4kids.alioth.debian.org/tuxmath/index.php
[29]:http://code.google.com/p/pink-pony/
[30]:http://games.kde.org/game.php?game=ktuberling
[31]:http://www.tuxpaint.org/
[32]:https://www.kde.org/applications/education/blinken/
[33]:https://www.ebay.com/sch/i.html?_nkw=memory
[34]:https://www.kde.org/applications/education/kturtle/
[35]:https://www.kde.org/applications/education/kstars/
[36]:https://www.kde.org/applications/education/marble/
[37]:https://www.kde.org/applications/education/khangman/
@ -1,3 +1,4 @@
XYenChi is translating
Why You Should Still Love Telnet
======
Telnet, the protocol and the command line tool, was how system administrators used to log into remote servers. However, because there is no encryption, all communication, including passwords, is sent in plaintext; as a result, Telnet was abandoned in favour of SSH almost as soon as SSH was created.
@ -1,143 +0,0 @@
translating by lujun9972
Linux size Command Tutorial for Beginners (6 Examples)
======

As some of you might already know, an object or executable file in Linux consists of several sections (like txt and data). In case you want to know the size of each section, there exists a command line utility dubbed **size** that provides you this information. In this tutorial, we will discuss the basics of this tool using some easy-to-understand examples.

But before we do that, it's worth mentioning that all examples mentioned in this article have been tested on Ubuntu 16.04 LTS.

## Linux size command

The size command basically lists section sizes as well as the total size for the input object file(s). Here's the syntax for the command:

```
size [-A|-B|--format=compatibility]
     [--help]
     [-d|-o|-x|--radix=number]
     [--common]
     [-t|--totals]
     [--target=bfdname] [-V|--version]
     [objfile...]
```

And here's how the man page describes this utility:

```
The GNU size utility lists the section sizes---and the total size---for each of the object or archive files objfile in its argument list. By default, one line of output is generated for each object file or each module in an archive.

objfile... are the object files to be examined. If none are specified, the file "a.out" will be used.
```

Following are some Q&A-styled examples that'll give you a better idea of how the size command works.

## Q1. How to use size command?

Basic usage of size is very simple. All you have to do is pass the object/executable file name as input to the tool. Following is an example:

```
size apl
```

Following is the output the above command produced on our system:

[![How to use size command][1]][2]

The first three entries are for the text, data, and bss sections, with their corresponding sizes. Then comes the total in decimal and hexadecimal formats. And finally, the last entry is for the filename.
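Since the screenshot can't be copied, here is a made-up output line with the same shape (the numbers are illustrative, not taken from the screenshot); a quick awk check confirms that the dec column is just text + data + bss:

```shell
# Fields: text  data  bss  dec  hex  filename (hypothetical values)
line='1803  448  16  2267  8db  apl'
echo "$line" | awk '{ s = $1 + $2 + $3; print (s == $4 ? "dec column checks out" : "mismatch") }'
```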

## Q2. How to switch between different output formats?

The default output format, the man page for size says, is similar to the Berkeley format. However, if you want, you can go for the System V convention as well. For this, you'll have to use the **--format** option with SysV as its value.

```
size apl --format=SysV
```

Here's the output in this case:

[![How to switch between different output formats][3]][4]

## Q3. How to switch between different size units?

By default, the size of sections is displayed in decimal. However, if you want, you can have this information in octal or hexadecimal instead. For this, use the **-o** and **-x** command line options.

[![How to switch between different size units][5]][6]

Here's what the man page says about these options:

```
-d
-o
-x
--radix=number

Using one of these options, you can control whether the size of each section is given in decimal (-d, or --radix=10); octal (-o, or --radix=8); or hexadecimal (-x, or --radix=16). In --radix=number, only the three values (8, 10, 16) are supported. The total size is always given in two radices; decimal and hexadecimal for -d or -x output, or octal and hexadecimal if you're using -o.
```
## Q4. How to make size command show totals of all object files?
|
||||
|
||||
If you are using size to find out section sizes for multiple files in one go, then if you want, you can also have the tool provide totals of all column values. You can enable this feature using the **-t** command line option.
|
||||
|
||||
```
|
||||
size -t [file1] [file2] ...
|
||||
```
|
||||
|
||||
The following screenshot shows this command line option in action:
|
||||
|
||||
[![How to make size command show totals of all object files][7]][8]
|
||||
|
||||
The last row in the output has been added by the **-t** command line option.
|
||||
|
||||
## Q5. How to make size print total size of common symbols in each file?
|
||||
|
||||
If you are running the size command with multiple input files, and want the command to display common symbols in each file, then you can do this with the **\--common** command line option.
|
||||
|
||||
```
|
||||
size --common [file1] [file2] ...
|
||||
```
|
||||
|
||||
It's also worth mentioning that when using Berkeley format these are included in the bss size.
|
||||
|
||||
## Q6. What are the other available command line options?
|
||||
|
||||
Aside from the ones discussed until now, size also offers some generic command line options like **-v** (for version info) and **-h** (for summary of eligible arguments and options)
|
||||
|
||||
[![What are the other available command line options][9]][10]
|
||||
|
||||
In addition, you can also make size read command-line options from a file. This you can do using the **@file** option. Following are some details related to this option:
|
||||
```
|
||||
The options read are inserted in place of the original @file option. If file does not exist, or
|
||||
cannot be read, then the option will be treated literally, and not removed. Options in file are
|
||||
separated by whitespace. A whitespace character may be included in an option by surrounding the
|
||||
entire option in either single or double quotes. Any character (including a backslash) may be
|
||||
included by prefixing the character to be included with a backslash. The file may itself contain
|
||||
additional @file options; any such options will be processed recursively.
|
||||
```
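A response file keeps long option sets out of your shell history. As a small sketch (the filename `size-opts` is arbitrary, and the final command is left commented since it needs binutils' size installed):

```shell
# Store frequently used options in a response file; per the man page,
# whitespace separates the options inside the file.
printf -- '-t --format=SysV\n' > size-opts
cat size-opts
# size @size-opts /usr/bin/ls   # run where binutils' size is installed
```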

## Conclusion

One thing is clear: the size command isn't for everybody. It's aimed only at those who deal with the structure of object/executable files in Linux. So if you are among the target audience, practice the options we've discussed here, and you should be ready to use the tool on a daily basis. For more information on size, head to its [man page][11].

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/linux-size-command/

作者:[Himanshu Arora][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/images/command-tutorial/size-basic-usage.png
[2]:https://www.howtoforge.com/images/command-tutorial/big/size-basic-usage.png
[3]:https://www.howtoforge.com/images/command-tutorial/size-format-option.png
[4]:https://www.howtoforge.com/images/command-tutorial/big/size-format-option.png
[5]:https://www.howtoforge.com/images/command-tutorial/size-o-x-options.png
[6]:https://www.howtoforge.com/images/command-tutorial/big/size-o-x-options.png
[7]:https://www.howtoforge.com/images/command-tutorial/size-t-option.png
[8]:https://www.howtoforge.com/images/command-tutorial/big/size-t-option.png
[9]:https://www.howtoforge.com/images/command-tutorial/size-v-x1.png
[10]:https://www.howtoforge.com/images/command-tutorial/big/size-v-x1.png
[11]:https://linux.die.net/man/1/size

@ -0,0 +1,261 @@

How to install software applications on Linux
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_)

Image by : Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0

How do you install an application on Linux? As with many operating systems, there isn't just one answer to that question. Applications can come from so many sources--it's nearly impossible to count--and each development team may deliver their software whatever way they feel is best. Knowing how to install what you're given is part of being a true power user of your OS.

### Repositories

For well over a decade, Linux has used software repositories to distribute software. A "repository" in this context is a public server hosting installable software packages. A Linux distribution provides a command, and usually a graphical interface to that command, that pulls the software from the server and installs it onto your computer. It's such a simple concept that it has served as the model for all major cellphone operating systems and, more recently, the "app stores" of the two major closed source computer operating systems.

![Linux repository][2]

Not an app store

Installing from a software repository is the primary method of installing apps on Linux. It should be the first place you look for any application you intend to install.

To install from a software repository, there's usually a command:

```
$ sudo dnf install inkscape
```

The actual command you use depends on what distribution of Linux you use. Fedora uses `dnf`, OpenSUSE uses `zypper`, Debian and Ubuntu use `apt`, Slackware uses `sbopkg`, FreeBSD uses `pkg_add`, and Illumos-based OpenIndiana uses `pkg`. Whatever you use, the incantation usually involves searching for the proper name of what you want to install, because sometimes what you call software is not its official or solitary designation:

```
$ sudo dnf search pyqt
PyQt.x86_64 : Python bindings for Qt3
PyQt4.x86_64 : Python bindings for Qt4
python-qt5.x86_64 : PyQt5 is Python bindings for Qt5
```

Once you have located the name of the package you want to install, use the `install` subcommand to perform the actual download and automated install:

```
$ sudo dnf install python-qt5
```

For specifics on installing from a software repository, see your distribution's documentation.
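The per-distribution commands above can be wrapped in a small helper. This sketch only *prints* the command it would run, and it covers just a few of the package managers named here, not an exhaustive mapping:

```shell
# Print (not execute) the native install command for whichever of a few
# common package managers is present on this system.
pkg_install() {
  if   command -v dnf    >/dev/null 2>&1; then echo "sudo dnf install $*"
  elif command -v zypper >/dev/null 2>&1; then echo "sudo zypper install $*"
  elif command -v apt    >/dev/null 2>&1; then echo "sudo apt install $*"
  else echo "no supported package manager found" >&2; return 1
  fi
}
pkg_install inkscape
```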

The same generally holds true with the graphical tools. Search for what you think you want, and then install it.

![](https://opensource.com/sites/default/files/u128651/apper.png)

Like the underlying command, the name of the graphical installer depends on what distribution you are running. The relevant application is usually tagged with the software or package keywords, so search your launcher or menu for those terms, and you'll find what you need. Since open source is all about user choice, if you don't like the graphical user interface (GUI) that your distribution provides, there may be an alternative that you can install. And now you know how to do that.

#### Extra repositories

Your distribution has its standard repository for software that it packages for you, and there are usually extra repositories common to your distribution. For example, [EPEL][3] serves Red Hat Enterprise Linux and CentOS, [RPMFusion][4] serves Fedora, Ubuntu has various levels of support as well as a Personal Package Archive (PPA) network, [Packman][5] provides extra software for OpenSUSE, and [SlackBuilds.org][6] provides community build scripts for Slackware.

By default, your Linux OS is set to look at just its official repositories, so if you want to use additional software collections, you must add extra repositories yourself. You can usually install a repository as though it were a software package. In fact, when you install certain software, such as [GNU Ring][7] video chat, the [Vivaldi][8] web browser, Google Chrome, and many others, what you are actually installing is access to their private repositories, from which the latest version of their application is installed to your machine.

![Installing a repo][10]

Installing a repo

You can also add the repository manually by editing a text file and adding it to your package manager's configuration directory, or by running a command to install the repository. As usual, the exact command you use depends on the distribution you are running; for example, here is a `dnf` command that adds a repository to the system:

```
$ sudo dnf config-manager --add-repo=http://example.com/pub/centos/7
```

### Installing apps without repositories

The repository model is so popular because it provides a link between the user (you) and the developer. When important updates are released, your system kindly prompts you to accept the updates, and you can accept them all from one centralized location.

Sometimes, though, a package is made available with no repository attached. These installable packages come in several forms.

#### Linux packages

Sometimes, a developer distributes software in a common Linux packaging format, such as RPM, DEB, or the newer but very popular FlatPak or Snap formats. You may not get access to a repository with this download; you might just get the package.

The video editor [Lightworks][11], for example, provides a `.deb` file for APT users and an `.rpm` file for RPM users. When you want to update, you return to the website and download the latest appropriate file.

These one-off packages can be installed with all the same tools used when installing from a repository. If you double-click the package you download, a graphical installer launches and steps you through the install process.

Alternately, you can install from a terminal. The difference here is that a lone package file you've downloaded from the internet isn't coming from a repository. It's a "local" install, meaning your package management software doesn't need to download it to install it. Most package managers handle this transparently:

```
$ sudo dnf install ~/Downloads/lwks-14.0.0-amd64.rpm
```

In some cases, you need to take additional steps to get the application to run, so carefully read the documentation about the software you're installing.

#### Generic install scripts

Some developers release their packages in one of several generic formats. Common extensions include `.run` and `.sh`. NVIDIA graphic card drivers, Foundry visual FX packages like Nuke and Mari, and many DRM-free games from [GOG][12] use this style of installer.

This model of installation relies on the developer to deliver an installation "wizard." Some of the installers are graphical, while others just run in a terminal.

There are two ways to run these types of installers.

1. You can run the installer directly from a terminal:

```
$ sh ./game/gog_warsow_x.y.z.sh
```

2. Alternately, you can run it from your desktop by marking it as executable. To mark an installer executable, right-click on its icon and select **Properties**.

![Giving an installer executable permission][14]

Giving an installer executable permission

Once you've given permission for it to run, double-click the icon to start the install.

![GOG installer][16]

GOG installer

For the rest of the install, just follow the instructions on the screen.
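The terminal route boils down to the same two steps as the desktop route: mark the file executable, then run it. In this sketch a generated stand-in script takes the place of a real vendor installer:

```shell
# Create a stand-in for a downloaded .sh installer, then mark it
# executable and run it -- the same two steps you'd do by hand.
printf '#!/bin/sh\necho "installer running"\n' > demo-installer.sh
chmod +x demo-installer.sh
./demo-installer.sh
```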

#### AppImage portable apps

The AppImage format is relatively new to Linux, although its concept is based on both NeXT and Rox. The idea is simple: everything required to run an application is placed into one directory, and then that directory is treated as an "app." To run the application, you just double-click the icon, and it runs. There's no need or expectation that the application is installed in the traditional sense; it just runs from wherever you have it lying around on your hard drive.

Despite its ability to run as a self-contained app, an AppImage usually offers to do some soft system integration.

![AppImage system integration][18]

AppImage system integration

If you accept this offer, a local `.desktop` file is installed to your home directory. A `.desktop` file is a small configuration file used by the Applications menu and mimetype system of a Linux desktop. Essentially, placing the desktop config file in your home directory's application list "installs" the application without actually installing it. You get all the benefits of having installed something, and the benefits of being able to run something locally, as a "portable app."

#### Application directory

Sometimes, a developer just compiles an application and posts the result as a download, with no install script and no packaging. Usually, this means that you download a TAR file, [extract it][19], and then double-click the executable file (it's usually the one with the name of the software you downloaded).

![Twine downloaded for Linux][21]

Twine downloaded for Linux

When presented with this style of software delivery, you can either leave it where you downloaded it and launch it manually when you need it, or you can do a quick and dirty install yourself. This involves two simple steps:

1. Save the directory to a standard location and launch it manually when you need it.
2. Save the directory to a standard location and create a `.desktop` file to integrate it into your system.

If you're just installing applications for yourself, it's traditional to keep a `bin` directory (short for "binary") in your home directory as a storage location for locally installed applications and scripts. If you have other users on your system who need access to the applications, it's traditional to place the binaries in `/opt`. Ultimately, it's up to you where you store the application.

Downloads often come in directories with versioned names, such as `twine_2.13` or `pcgen-v6.07.04`. Since it's reasonable to assume you'll update the application at some point, it's a good idea to either remove the version number or to create a symlink to the directory. This way, the launcher that you create for the application can remain the same, even though you update the application itself.
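The symlink approach can be sketched like this; a temporary directory stands in for `~/bin`, and `twine_2.13` matches the example name above:

```shell
# Keep a stable, version-free path across upgrades: unpack versioned
# directories side by side and repoint one symlink at the current one.
dir=$(mktemp -d)
mkdir -p "$dir/twine_2.13"
ln -sfn "$dir/twine_2.13" "$dir/twine"   # -n: replace an existing link
readlink "$dir/twine"                    # shows the versioned target
```

When version 2.14 arrives, you unpack it next to 2.13 and rerun the `ln -sfn` line; every launcher pointing at the unversioned path keeps working.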

To create a `.desktop` launcher file, open a text editor and create a file called `twine.desktop`. The [Desktop Entry Specification][22] is defined by [FreeDesktop.org][23]. Here is a simple launcher for a game development IDE called Twine, installed to the system-wide `/opt` directory:

```
[Desktop Entry]
Encoding=UTF-8
Name=Twine
GenericName=Twine
Comment=Twine
Exec=/opt/twine/Twine
Icon=/usr/share/icons/oxygen/64x64/categories/applications-games.png
Terminal=false
Type=Application
Categories=Development;IDE;
```

The tricky line is the `Exec` line. It must contain a valid command to start the application. Usually, it's just the full path to the thing you downloaded, but in some cases, it's something more complex. For example, a Java application might need to be launched as an argument to Java itself:

```
Exec=java -jar /path/to/foo.jar
```

Sometimes, a project includes a wrapper script that you can run so you don't have to figure out the right command:

```
Exec=/opt/foo/foo-launcher.sh
```

In the Twine example, there's no icon bundled with the download, so the example `.desktop` file assigns a generic gaming icon that shipped with the KDE desktop. You can use workarounds like that, but if you're more artistic, you can just create your own icon, or you can search the Internet for a good icon. As long as the `Icon` line points to a valid PNG or SVG file, your application will inherit the icon.

The example script also sets the application category primarily to Development, so in KDE, GNOME, and most other Application menus, Twine appears under the Development category.

To get this example to appear in an Application menu, place the `twine.desktop` file into one of two places:

* Place it in `~/.local/share/applications` if you're storing the application in your own home directory.
* Place it in `/usr/share/applications` if you're storing the application in `/opt` or another system-wide location and want it to appear in all your users' Application menus.

And now the application is installed as it needs to be and integrated with the rest of your system.
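The single-user variant can be scripted in a couple of lines, writing the sample launcher into `~/.local/share/applications` (this assumes a writable home directory, and reuses the entry values from the article):

```shell
# Install the sample launcher where one user's Application menu will
# pick it up, as described above.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/twine.desktop <<'EOF'
[Desktop Entry]
Encoding=UTF-8
Name=Twine
Exec=/opt/twine/Twine
Icon=/usr/share/icons/oxygen/64x64/categories/applications-games.png
Terminal=false
Type=Application
Categories=Development;IDE;
EOF
```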

### Compiling from source

Finally, there's the truly universal install format: source code. Compiling an application from source code is a great way to learn how applications are structured, how they interact with your system, and how they can be customized. It's by no means a push-button process, though. It requires a build environment, it usually involves installing dependency libraries and header files, and sometimes a little bit of debugging.

To learn more about compiling from source code, [read my article][24] on the topic.

### Now you know

Some people think installing software is a magical process that only developers understand, or they think it "activates" an application, as if the binary executable file isn't valid until it has been "installed." Hopefully, learning about the many different methods of installing has shown you that install is really just shorthand for "copying files from one place to the appropriate places on your system." There's nothing mysterious about it. As long as you approach each install without expectations of how it's supposed to happen, and instead look for what the developer has set up as the install process, it's generally easy, even if it is different from what you're used to.

The important thing is that an installer is honest with you. If you come across an installer that attempts to install additional software without your consent (or maybe it asks for consent, but in a confusing or misleading way), or that attempts to run checks on your system for no apparent reason, then don't continue the install.

Good software is flexible, honest, and open. And now you know how to get good software onto your computer.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/how-install-apps-linux

作者:[Seth Kenlon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[1]:/file/382591
[2]:https://opensource.com/sites/default/files/u128651/repo.png (Linux repository)
[3]:https://fedoraproject.org/wiki/EPEL
[4]:http://rpmfusion.org
[5]:http://packman.links2linux.org/
[6]:http://slackbuilds.org
[7]:https://ring.cx/en/download/gnu-linux
[8]:http://vivaldi.com
[9]:/file/382566
[10]:https://opensource.com/sites/default/files/u128651/access.png (Installing a repo)
[11]:https://www.lwks.com/
[12]:http://gog.com
[13]:/file/382581
[14]:https://opensource.com/sites/default/files/u128651/exec.jpg (Giving an installer executable permission)
[15]:/file/382586
[16]:https://opensource.com/sites/default/files/u128651/gog.jpg (GOG installer)
[17]:/file/382576
[18]:https://opensource.com/sites/default/files/u128651/appimage.png (AppImage system integration)
[19]:https://opensource.com/article/17/7/how-unzip-targz-file
[20]:/file/382596
[21]:https://opensource.com/sites/default/files/u128651/twine.jpg (Twine downloaded for Linux)
[22]:https://specifications.freedesktop.org/desktop-entry-spec/desktop-entry-spec-latest.html
[23]:http://freedesktop.org
[24]:https://opensource.com/article/17/10/open-source-cats

@ -0,0 +1,96 @@

8 KDE Plasma Tips and Tricks to Improve Your Productivity
======

![](https://www.maketecheasier.com/assets/uploads/2018/01/kde-plasma-desktop-featured.jpg)

KDE's Plasma is easily one of the most powerful desktop environments available for Linux. It's highly configurable, and it looks pretty good, too. That doesn't amount to a whole lot unless you can actually get things done.

You can easily configure Plasma and make use of a lot of its convenient and time-saving features to boost your productivity and have a desktop that empowers you, rather than getting in your way.

These tips aren't in any particular order, so you don't need to prioritize. Pick the ones that best fit your workflow.

**Related**: [10 of the Best KDE Plasma Applications You Should Try][1]

### 1. Multimedia Controls

This isn't so much of a tip as it is something that's good to keep in mind. Plasma keeps multimedia controls everywhere. You don't need to open your media player every time you need to pause, resume, or skip a song; you can mouse over the minimized window or even control it via the lock screen. There's no need to scramble to log in to change a song or because you forgot to pause one.

### 2. KRunner

![KDE Plasma KRunner][2]

KRunner is an often under-appreciated feature of the Plasma desktop. Most people are used to digging through the application launcher menu to find the program that they're looking to launch. That's not necessary with KRunner.

To use KRunner, make sure that your focus is on the desktop itself. (Click on it instead of a window.) Then, start typing the name of the program that you want. KRunner will automatically drop down from the top of your screen with suggestions. Click or press Enter on the one you're looking for. It's much faster than remembering which category your program is under.

### 3. Jump Lists

![KDE Plasma Jump Lists][3]

Jump lists are a fairly recent addition to the Plasma desktop. They allow you to launch an application directly to a specific section or feature.

So if you have a launcher on a menu bar, you can right-click and get a list of places to jump to. Select where you want to go, and you're off.

### 4. KDE Connect

![KDE Connect Menu Android][4]

[KDE Connect][5] is a massive help if you have an Android phone. It connects the phone to your desktop so you can share things seamlessly between the devices.

With KDE Connect, you can see your [Android device's notifications][6] on your desktop in real time. It also enables you to send and receive text messages from Plasma without ever picking up your phone.

KDE Connect also lets you send files and share web pages between your phone and your computer. You can easily move from one device to the other without a lot of hassle or losing your train of thought.

### 5. Plasma Vaults

![KDE Plasma Vault][7]

Plasma Vaults are another new addition to the Plasma desktop. They are KDE's simple solution to encrypted files and folders. If you don't work with encrypted files, this one won't really save you any time. If you do, though, vaults are a much simpler approach.

Plasma Vaults let you create encrypted directories as a regular user without root and manage them from your task bar. You can mount and unmount the directories on the fly without the need for external programs or additional privileges.

### 6. Pager Widget

![KDE Plasma Pager][8]

Configure your desktop with the pager widget. It allows you to easily access three additional workspaces for even more screen room.

Add the widget to your menu bar, and you can slide between multiple workspaces. These are all the size of your screen, so you gain multiple times the total screen space. That lets you lay out more windows without getting confused by a minimized mess or disorganization.

### 7. Create a Dock

![KDE Plasma Dock][9]

Plasma is known for its flexibility and the room it allows for configuration. Use that to your advantage. If you have programs that you're always using, consider setting up an OS X style dock with your most used applications. You'll be able to get them with a single click rather than going through a menu or typing in their name.

### 8. Add a File Tree to Dolphin

![Plasma Dolphin Directory][10]

It's much easier to navigate folders in a directory tree. Dolphin, Plasma's default file manager, has built-in functionality to display a directory listing in the form of a tree on the side of the folder window.

To enable the directory tree, click on the "Control" tab, then "Configure Dolphin," "View Modes," and "Details." Finally, select "Expandable Folders."

Remember that these tips are just tips. Don't try to force yourself to do something that's getting in your way. You may hate using file trees in Dolphin. You may never use Pager. That's alright. There may even be something that you personally like that's not listed here. Do what works for you. That said, at least a few of these should shave some serious time out of your work day.

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/kde-plasma-tips-tricks-improve-productivity/

作者:[Nick Congleton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com/author/nickcongleton/
[1]:https://www.maketecheasier.com/10-best-kde-plasma-applications/ (10 of the Best KDE Plasma Applications You Should Try)
[2]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-krunner.jpg (KDE Plasma KRunner)
[3]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-jumplist.jpg (KDE Plasma Jump Lists)
[4]:https://www.maketecheasier.com/assets/uploads/2017/05/kde-connect-menu-e1494899929112.jpg (KDE Connect Menu Android)
[5]:https://www.maketecheasier.com/send-receive-sms-linux-kde-connect/
[6]:https://www.maketecheasier.com/android-notifications-ubuntu-kde-connect/
[7]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-vault.jpg (KDE Plasma Vault)
[8]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-pager.jpg (KDE Plasma Pager)
[9]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-dock.jpg (KDE Plasma Dock)
[10]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-dolphin.jpg (Plasma Dolphin Directory)

@ -1,3 +1,5 @@

translating by ypingcn

Top 5 Firefox extensions to install now
======

@ -0,0 +1,111 @@

2 scientific calculators for the Linux desktop
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OpenData_CityNumbers.png?itok=lC03ce76)

Image by : opensource.com

Every Linux desktop environment comes with at least a simple desktop calculator, but most of those simple calculators are just that: a simple tool for simple calculations.

Fortunately, there are exceptions: programs that go far beyond square roots and a couple of trigonometric functions, yet are still easy to use. Here are two powerful calculator tools for Linux, plus a couple of bonus options.

### SpeedCrunch

[SpeedCrunch][1] is a high-precision scientific calculator with a simple Qt5 graphical interface and a strong focus on the keyboard.

![SpeedCrunch graphical interface][3]

SpeedCrunch at work

It supports working with units and comes loaded with all kinds of functions.

For example, by writing:

`2 * 10^6 newton / (meter^2)`

you get:

`= 2000000 pascal`

By default, SpeedCrunch delivers its results in the international unit system, but units can be transformed with the "in" instruction.

For example:

`3*10^8 meter / second in kilo meter / hour`

produces:

`= 1080000000 kilo meter / hour`

With the `F5` key, all results will turn into scientific notation (`1.08e9 kilo meter / hour`), while with `F2` only numbers that are small enough or big enough will change. More options are available on the Configuration menu.
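The speed-of-light conversion above is easy to sanity-check with plain shell arithmetic: multiply by 3600 (seconds per hour) and divide by 1000 (meters per kilometer):

```shell
# 3*10^8 m/s -> km/h: the integer result matches SpeedCrunch's output.
echo $(( 300000000 * 3600 / 1000 ))
```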

The list of available functions is really impressive. It works on Linux, Windows, and MacOS, and it's licensed under GPLv2; you can access its source code on [Bitbucket][4].

### Qalculate!

[Qalculate!][5] (with the exclamation point) has a long and complex history.

The project offers a powerful library that can be used by other programs (the Plasma desktop can use it to perform calculations from KRunner) and a graphical interface built on GTK3. It allows you to work with units, handle physical constants, create graphics, use complex numbers, matrices, and vectors, choose arbitrary precision, and more.

![Qalculate! Interface][7]

Looking for some physical constants on Qalculate!

Its use of units is far more intuitive than SpeedCrunch's, and it understands common prefixes without problem. Have you heard of an exapascal pressure? I hadn't (the Sun's core stops at `~26 PPa`), but Qalculate! has no problem understanding the meaning of `1 EPa`. Also, Qalculate! is more flexible with syntax errors, so you don't need to worry about closing all those parentheses: if there is no ambiguity, Qalculate! will give you the right answer.

After a long period in which the project seemed orphaned, it came back to life in 2016 and has been going strong since, with more than 10 versions in just one year. It's licensed under GPLv2 (with source code on [GitHub][8]) and offers versions for Linux and Windows, as well as a MacOS port.

### Bonus calculators

#### ConvertAll

OK, it's not a "calculator," yet this simple application is incredibly useful.

Most unit converters stop at a long list of basic units and a bunch of common combinations, but not [ConvertAll][9]. Trying to convert from astronomical units per year into inches per second? It doesn't matter if it makes sense or not; if you need to transform a unit of any kind, ConvertAll is the tool for you.

Just write the starting unit and the final unit in the corresponding boxes; if the units are compatible, you'll get the transformation without protest.

The main application is written in PyQt5, but there is also an [online version written in JavaScript][10].

#### (wx)Maxima with the units package

Sometimes (OK, many times) a desktop calculator is not enough and you need more raw power.

[Maxima][11] is a computer algebra system (CAS) with which you can do derivatives, integrals, series, equations, eigenvectors and eigenvalues, Taylor series, Laplace and Fourier transformations, as well as numerical calculations with arbitrary precision, graph in two and three dimensions… we could fill several pages just listing its capabilities.

[wxMaxima][12] is a well-designed graphical frontend for Maxima that simplifies the use of many Maxima options without compromising others. On top of the full power of Maxima, wxMaxima allows you to create "notebooks" on which you write comments, keep your graphics with your math, etc. One of the (wx)Maxima combo's most impressive features is that it works with dimension units.

On the prompt, just type:

`load("unit")`

press Shift+Enter, wait a few seconds, and you'll be ready to work.

By default, the unit package works with the basic MKS units, but if you prefer, for instance, to get `N` instead of `kg*m/s^2`, you just need to type:

`setunits(N)`

Maxima's help (which is also available from wxMaxima's help menu) will give you more information.

Do you use these programs? Do you know another great desktop calculator for scientists and engineers or another related tool? Tell us about them in the comments!

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/scientific-calculators-linux

作者:[Ricardo Berlasso][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/rgb-es
[1]:http://speedcrunch.org/index.html
[2]:/file/382511
[3]:https://opensource.com/sites/default/files/u128651/speedcrunch.png (SpeedCrunch graphical interface)
[4]:https://bitbucket.org/heldercorreia/speedcrunch
[5]:https://qalculate.github.io/
[6]:/file/382506
[7]:https://opensource.com/sites/default/files/u128651/qalculate-600.png (Qalculate! Interface)
[8]:https://github.com/Qalculate
[9]:http://convertall.bellz.org/
[10]:http://convertall.bellz.org/js/
[11]:http://maxima.sourceforge.net/
[12]:https://andrejv.github.io/wxmaxima/
|
61
sources/tech/20180115 How To Boot Into Linux Command Line.md
Normal file
@ -0,0 +1,61 @@
|
||||
How To Boot Into Linux Command Line
|
||||
======
|
||||
![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/how-to-boot-into-linux-command-line_orig.jpg)
|
||||
|
||||
There may be times where you need or want to boot up a [Linux][1] system without using a GUI, that is with no X, but rather opt for the command line. Whatever the reason, fortunately, booting straight into the Linux **command-line** is very simple. It requires a simple change to the boot parameter after the other kernel options. This change specifies the runlevel to boot the system into.
|
||||
|
||||
### Why Do This?
|
||||
|
||||
If your system does not run Xorg because the configuration is invalid, or if the display manager is broken, or whatever may prevent the GUI from starting properly, booting into the command-line will allow you to troubleshoot by logging into a terminal (assuming you know what you’re doing to start with) and do whatever you need to do. Booting into the command-line is also a great way to become more familiar with the terminal, otherwise, you can do it just for fun.
|
||||
|
||||
### Accessing GRUB Menu
|
||||
|
||||
On startup, you will need access to the GRUB boot menu. You may need to hold the SHIFT key down before the system boots if the menu isn’t set to display every time the computer is started. In the menu, the [Linux distribution][2] entry must be selected. Once highlighted, press ‘e’ to edit the boot parameters.
|
||||
|
||||
[![zorin os grub menu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnu-grub_orig.png)][3]
|
||||
|
||||
Older GRUB versions follow a similar mechanism. The boot manager should provide instructions on how to edit the boot parameters.
|
||||
|
||||
### Specify the Runlevel
|
||||
|
||||
An editor will appear and you will see the options that GRUB passes to the kernel. Navigate to the line that starts with ‘linux’ (older GRUB versions may use ‘kernel’; select that and follow the instructions). This line specifies the parameters to pass to the kernel. At the end of that line (it may appear to span multiple lines, depending on resolution), you simply specify the runlevel to boot into, which is 3 (multi-user mode, text-only).
|
||||
|
||||
[![customize grub menu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_orig.png)][4]
|
||||
|
||||
Pressing Ctrl-X or F10 will boot the system using those parameters. Boot-up will continue as normal. The only thing that has changed is the runlevel to boot into.
|
||||
|
||||
|
||||
|
||||
This is what was started up:
|
||||
|
||||
[![boot linux in command line](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_1_orig.png)][5]
|
||||
|
||||
### Runlevels
|
||||
|
||||
You can specify different runlevels to boot into, with runlevel 5 being the default. Runlevel 1 boots into “single-user” mode, which drops into a root shell. Runlevel 3 provides a multi-user, command-line-only system.
|
||||
|
||||
### Switch From Command-Line
|
||||
|
||||
At some point, you may want to run the display manager again to use a GUI, and the quickest way to do that is running this:
|
||||
```
|
||||
$ sudo init 5
|
||||
```
|
||||
|
||||
And it is as simple as that. Personally, I find the command-line much more exciting and hands-on than using GUI tools; however, that’s just my preference.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxandubuntu.com/home/how-to-boot-into-linux-command-line
|
||||
|
||||
作者:[LinuxAndUbuntu][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxandubuntu.com
|
||||
[1]:http://www.linuxandubuntu.com/home/category/linux
|
||||
[2]:http://www.linuxandubuntu.com/home/category/distros
|
||||
[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnu-grub_orig.png
|
||||
[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_orig.png
|
||||
[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_1_orig.png
|
99
sources/tech/20180115 How debuggers really work.md
Normal file
@ -0,0 +1,99 @@
|
||||
How debuggers really work
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/annoyingbugs.png?itok=ywFZ99Gs)
|
||||
|
||||
Image by : opensource.com
|
||||
|
||||
A debugger is one of those pieces of software that most, if not every, developer uses at least once during their software engineering career, but how many of you know how they actually work? During my talk at [linux.conf.au 2018][1] in Sydney, I will be talking about writing a debugger from scratch... in [Rust][2]!
|
||||
|
||||
In this article, the terms debugger and tracer are used interchangeably. "Tracee" refers to the process being traced by the tracer.
|
||||
|
||||
### The ptrace system call
|
||||
|
||||
Most debuggers heavily rely on a system call known as `ptrace(2)`, which has the prototype:
|
||||
```
|
||||
|
||||
|
||||
long ptrace(enum __ptrace_request request, pid_t pid, void *addr, void *data);
|
||||
```
|
||||
|
||||
This is a system call that can manipulate almost all aspects of a process; however, before the debugger can attach to a process, the "tracee" has to call `ptrace` with the request `PTRACE_TRACEME`. This tells Linux that it is legitimate for the parent to attach via `ptrace` to this process. But... how do we coerce a process into calling `ptrace`? Easy-peasy! `fork/execve` provides an easy way of calling `ptrace` after `fork` but before the tracee really starts using `execve`. Conveniently, `fork` will also return the `pid` of the tracee, which is required for using `ptrace` later.
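The fork-and-`PTRACE_TRACEME` handshake can be sketched in a few lines of C. This is only an illustration, not code from any real debugger: the `trace_program()` helper name is made up, and error handling is omitted for brevity.

```c
#include <signal.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that opts in to tracing, then exec a program under it.
 * Returns the signal that first stopped the child (SIGTRAP if the
 * execve stop worked), or -1 on error. */
int trace_program(const char *path)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: PTRACE_TRACEME, then exec; the execve delivers a
         * SIGTRAP that stops us before the first instruction runs. */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execl(path, path, (char *)NULL);
        _exit(127); /* only reached if exec failed */
    }
    int status;
    waitpid(pid, &status, 0);             /* wait for the execve stop */
    if (!WIFSTOPPED(status))
        return -1;
    int sig = WSTOPSIG(status);
    ptrace(PTRACE_CONT, pid, NULL, NULL); /* let the tracee finish */
    waitpid(pid, &status, 0);
    return sig;
}
```

On a typical Linux system, `trace_program("/bin/true")` should return `SIGTRAP`, confirming the tracee stopped at its `execve` before running a single instruction.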
|
||||
|
||||
Now that the tracee can be traced by the debugger, important changes take place:
|
||||
|
||||
* Every time a signal is delivered to the tracee, it stops and a wait-event is delivered to the tracer that can be captured by the `wait` family of system calls.
|
||||
* Each `execve` system call will cause a `SIGTRAP` to be delivered to the tracee. (Combined with the previous item, this means the tracee is stopped before an `execve` can fully take place.)
|
||||
|
||||
|
||||
|
||||
This means that, once we issue the `PTRACE_TRACEME` request and call the `execve` system call to actually start the program in the tracee, the tracee will immediately stop, since `execve` delivers a `SIGTRAP`, and that is caught by a wait-event in the tracer. How do we continue? As one would expect, `ptrace` has a number of requests that can be used for telling the tracee it's fine to continue:
|
||||
|
||||
* `PTRACE_CONT`: This is the simplest. The tracee runs until it receives a signal, at which point a wait-event is delivered to the tracer. This is most commonly used to implement "continue-until-breakpoint" and "continue-forever" options of a real-world debugger. Breakpoints will be covered below.
|
||||
* `PTRACE_SYSCALL`: Very similar to `PTRACE_CONT`, but stops before a system call is entered and also before a system call returns to userspace. It can be used in combination with other requests (which we will cover later in this article) to monitor and modify a system call's arguments or return value. `strace`, the system call tracer, uses this request heavily to figure out what system calls are made by a process.
|
||||
  * `PTRACE_SINGLESTEP`: This one is pretty self-explanatory if you've used a debugger before: this request executes the next instruction, then stops immediately after.
|
||||
|
||||
|
||||
|
||||
We can stop the process with a variety of requests, but how do we get the state of the tracee? The state of a process is mostly captured by its registers, so of course `ptrace` has a request to get (or modify!) the registers:
|
||||
|
||||
* `PTRACE_GETREGS`: This request will give the registers' state as it was when a tracee was stopped.
|
||||
* `PTRACE_SETREGS`: If the tracer has the values of registers from a previous call to `PTRACE_GETREGS`, it can modify the values in that structure and set the registers to the new values via this request.
|
||||
* `PTRACE_PEEKUSER` and `PTRACE_POKEUSER`: These allow reading from the tracee's `USER` area, which holds the registers and other useful information. This can be used to modify a single register, without the more heavyweight `PTRACE_{GET,SET}REGS`.
|
||||
|
||||
|
||||
|
||||
Modifying the registers isn't always sufficient in a debugger. A debugger will sometimes need to read some parts of the memory or even modify it. The GNU Project Debugger (GDB) can use `print` to get the value of a memory location or a variable. `ptrace` has the functionality to implement this:
|
||||
|
||||
* `PTRACE_PEEKTEXT` and `PTRACE_POKETEXT`: These allow reading and writing a word in the address space of the tracee. Of course, the tracee has to be stopped for this to work.
|
||||
|
||||
|
||||
|
||||
Real-world debuggers also have features like breakpoints and watchpoints. In the next section, I'll dive into the architectural details of debugging support. For the purposes of clarity and conciseness, this article will consider x86 only.
|
||||
|
||||
### Architectural support
|
||||
|
||||
`ptrace` is all cool, but how does it work? In the previous section, we've seen that `ptrace` has quite a bit to do with signals: `SIGTRAP` can be delivered during single-stepping, before `execve` and before or after system calls. Signals can be generated a number of ways, but we will look at two specific examples that can be used by debuggers to stop a program (effectively creating a breakpoint!) at a given location:
|
||||
|
||||
* **Undefined instructions:** When a process tries to execute an undefined instruction, an exception is raised by the CPU. This exception is handled via a CPU interrupt, and a handler corresponding to the interrupt in the kernel is called. This will result in a `SIGILL` being sent to the process. This, in turn, causes the process to stop, and the tracer is notified via a wait-event. It can then decide what to do. On x86, an instruction `ud2` is guaranteed to be always undefined.
|
||||
|
||||
* **Debugging interrupt:** The problem with the previous approach is that the `ud2` instruction takes two bytes of machine code. A special instruction exists that takes one byte and raises an interrupt. It's `int $3` and the machine code is `0xCC`. When this interrupt is raised, the kernel sends a `SIGTRAP` to the process and, just as before, the tracer is notified.
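Both mechanisms are easy to observe even without a tracer: the default disposition of `SIGILL` and `SIGTRAP` terminates the process, so a forked child that executes these instructions reports the corresponding signal to its parent. An illustrative sketch (helper names are made up; x86/x86-64 inline assembly assumed):

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run `fn` in a forked child and report the signal that terminated it,
 * or 0 if it exited normally. */
static int signal_from(void (*fn)(void))
{
    pid_t pid = fork();
    if (pid == 0) {
        fn();
        _exit(0); /* never reached: fn() faults */
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFSIGNALED(status) ? WTERMSIG(status) : 0;
}

static void run_ud2(void)  { __asm__ volatile("ud2"); }    /* undefined insn */
static void run_int3(void) { __asm__ volatile("int $3"); } /* one byte: 0xCC */
```

`signal_from(run_ud2)` should report `SIGILL` and `signal_from(run_int3)` should report `SIGTRAP`; under a tracer, these same signals become wait-events instead of terminating the process.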
|
||||
|
||||
|
||||
|
||||
|
||||
This is fine, but how do we coerce the tracee to execute these instructions? Easy: `ptrace` has `PTRACE_POKETEXT`, which can override a word at a memory location. A debugger would read the original word at the location using `PTRACE_PEEKTEXT` and replace it with `0xCC`, remembering the original byte and the fact that it is a breakpoint in its internal state. The next time the tracee executes at the location, it is automatically stopped by virtue of a `SIGTRAP`. The debugger's end user can then decide how to continue (for instance, inspect the registers).
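A minimal sketch of this patching dance, assuming x86-64 Linux and omitting error handling: the hypothetical `patch_and_restore()` helper plants the breakpoint byte at the tracee's current instruction and immediately undoes it (rather than waiting for a hit) so the tracee can still run to completion.

```c
#include <signal.h>
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

/* Plant an int3 (0xCC) at the tracee's current instruction, verify the
 * write with PTRACE_PEEKTEXT, then restore the original word before
 * continuing. Returns 1 if the patched byte read back as 0xCC. */
int patch_and_restore(const char *path)
{
    pid_t pid = fork();
    if (pid == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execl(path, path, (char *)NULL);
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);                     /* execve stop */
    struct user_regs_struct regs;
    ptrace(PTRACE_GETREGS, pid, NULL, &regs);
    void *addr = (void *)(uintptr_t)regs.rip;
    long orig = ptrace(PTRACE_PEEKTEXT, pid, addr, NULL);
    long patched = (orig & ~0xFFL) | 0xCC;        /* breakpoint byte */
    ptrace(PTRACE_POKETEXT, pid, addr, (void *)patched);
    long readback = ptrace(PTRACE_PEEKTEXT, pid, addr, NULL);
    ptrace(PTRACE_POKETEXT, pid, addr, (void *)orig); /* undo the patch */
    ptrace(PTRACE_CONT, pid, NULL, NULL);
    waitpid(pid, &status, 0);
    return (readback & 0xFF) == 0xCC;
}
```

A real debugger would instead leave the `0xCC` in place, resume with `PTRACE_CONT`, and on the resulting `SIGTRAP` rewind `rip` by one, restore the byte, and single-step over it.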
|
||||
|
||||
Okay, we've covered breakpoints, but what about watchpoints? How does a debugger stop a program when a certain memory location is read or written? Surely you wouldn't just overwrite every instruction with `int $3` that could read or write some memory location. Meet debug registers, a set of registers designed to fulfill this goal more efficiently:
|
||||
|
||||
* `DR0` to `DR3`: Each of these registers contains an address (a memory location), where the debugger wants the tracee to stop for some reason. The reason is specified as a bitmask in `DR7`.
|
||||
  * `DR4` and `DR5`: These are obsolete aliases for `DR6` and `DR7`, respectively.
|
||||
* `DR6`: Debug status. Contains information about which `DR0` to `DR3` caused the debugging exception to be raised. This is used by Linux to figure out the information passed along with the `SIGTRAP` to the tracee.
|
||||
* `DR7`: Debug control. Using the bits in these registers, the debugger can control how the addresses specified in `DR0` to `DR3` are interpreted. A bitmask controls the size of the watchpoint (whether 1, 2, 4, or 8 bytes are monitored) and whether to raise an exception on execution, reading, writing, or either of reading and writing.
|
||||
|
||||
|
||||
|
||||
Because the debug registers form part of the `USER` area of a process, the debugger can use `PTRACE_POKEUSER` to write values into them. The debug registers are only relevant to a specific process, so they are saved at preemption and restored before the process regains control of the CPU.
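Here is an illustrative sketch of that, assuming x86-64 Linux (the `watchpoint_signal()` helper is hypothetical, and error handling is omitted): it arms `DR0`/`DR7` in a traced child via `PTRACE_POKEUSER` and observes the `SIGTRAP` delivered when the child writes to the watched variable.

```c
#include <signal.h>
#include <stddef.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile int watched; /* fork gives the child the same address */

/* Install a 4-byte write watchpoint on `watched` via the debug
 * registers and report the signal that stops the child when it
 * writes to it (expected: SIGTRAP). Assumes x86-64 Linux. */
int watchpoint_signal(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        raise(SIGSTOP);   /* let the parent install the watchpoint */
        watched = 42;     /* this store should trigger the watchpoint */
        _exit(0);
    }
    int status;
    waitpid(pid, &status, 0);  /* child stopped by SIGSTOP */
    /* DR0 = address to watch; DR7 = enable slot 0 (bit 0), break on
     * write (RW0 = 01, bits 16-17), 4-byte length (LEN0 = 11, bits 18-19) */
    ptrace(PTRACE_POKEUSER, pid,
           (void *)offsetof(struct user, u_debugreg[0]), (void *)&watched);
    ptrace(PTRACE_POKEUSER, pid,
           (void *)offsetof(struct user, u_debugreg[7]),
           (void *)((1UL << 0) | (1UL << 16) | (3UL << 18)));
    ptrace(PTRACE_CONT, pid, NULL, NULL);
    waitpid(pid, &status, 0);  /* stopped by the debug exception */
    int sig = WIFSTOPPED(status) ? WSTOPSIG(status) : 0;
    ptrace(PTRACE_POKEUSER, pid,
           (void *)offsetof(struct user, u_debugreg[7]), NULL); /* disarm */
    ptrace(PTRACE_CONT, pid, NULL, NULL);
    waitpid(pid, &status, 0);
    return sig;
}
```

Note that x86 data watchpoints are trap-type: they fire after the access completes, so the child's store actually lands before the stop is reported.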
|
||||
|
||||
### Tip of the iceberg
|
||||
|
||||
We've glanced at the tip of the iceberg that is a debugger: we covered `ptrace`, went over some of its functionality, and then looked at how it works under the hood. Some parts of `ptrace` can be implemented in software, but other parts have to be implemented in hardware, otherwise they'd be very expensive or even impossible.
|
||||
|
||||
There's plenty that we didn't cover, of course. Questions, like "how does a debugger know where a variable is in memory?" remain open due to space and time constraints, but I hope you've learned something from this article; if it piqued your interest, there are plenty of resources available online to learn more.
|
||||
|
||||
For more, attend Levente Kurusa's talk, [Let's Write a Debugger!][3], at [linux.conf.au][1], which will be held January 22-26 in Sydney.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/how-debuggers-really-work
|
||||
|
||||
作者:[Levente Kurusa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/lkurusa
|
||||
[1]:https://linux.conf.au/index.html
|
||||
[2]:https://www.rust-lang.org
|
||||
[3]:https://rego.linux.conf.au/schedule/presentation/91/
|
@ -0,0 +1,97 @@
|
||||
Partclone – A Versatile Free Software for Partition Imaging and Cloning
|
||||
======
|
||||
|
||||
![](https://www.fossmint.com/wp-content/uploads/2018/01/Partclone-Backup-Tool-For-Linux.png)
|
||||
|
||||
**[Partclone][1]** is a free and open-source tool for creating and cloning partition images brought to you by the developers of **Clonezilla**. In fact, **Partclone** is one of the tools that **Clonezilla** is based on.
|
||||
|
||||
It provides users with the tools required to back up and restore used partition blocks, along with high compatibility with several file systems, thanks to its ability to use existing libraries such as **e2fslibs** to read and write partitions, e.g. **ext2**.
|
||||
|
||||
Its best stronghold is the variety of formats it supports including ext2, ext3, ext4, hfs+, reiserfs, reiser4, btrfs, vmfs3, vmfs5, xfs, jfs, ufs, ntfs, fat(12/16/32), exfat, f2fs, and nilfs.
|
||||
|
||||
It also has a plethora of available programs including **partclone.ext2** (ext3 & ext4), partclone.ntfs, partclone.exfat, partclone.hfsp, and partclone.vmfs (v3 and v5), among others.
|
||||
|
||||
### Features in Partclone
|
||||
|
||||
* **Freeware:** **Partclone** is free for everyone to download and use.
|
||||
* **Open Source:** **Partclone** is released under the GNU GPL license and is open to contribution on [GitHub][2].
|
||||
  * **Cross-Platform**: Available on Linux, Windows, macOS, and FreeBSD, with support for ESX file system backup/restore.
|
||||
* An online [Documentation page][3] from where you can view help docs and track its GitHub issues.
|
||||
* An online [user manual][4] for beginners and pros alike.
|
||||
* Rescue support.
|
||||
* Clone partitions to image files.
|
||||
* Restore image files to partitions.
|
||||
* Duplicate partitions quickly.
|
||||
* Support for raw clone.
|
||||
* Displays transfer rate and elapsed time.
|
||||
* Supports piping.
|
||||
* Support for crc32.
|
||||
* Supports vmfs for ESX vmware server and ufs for FreeBSD file system.
|
||||
|
||||
|
||||
|
||||
There are a lot more features bundled in **Partclone** and you can see the rest of them [here][5].
|
||||
|
||||
[__Download Partclone for Linux][6]
|
||||
|
||||
### How to Install and Use Partclone
|
||||
|
||||
To install Partclone on Linux:
|
||||
```
|
||||
$ sudo apt install partclone [On Debian/Ubuntu]
|
||||
$ sudo yum install partclone [On CentOS/RHEL/Fedora]
|
||||
|
||||
```
|
||||
|
||||
Clone partition to image.
|
||||
```
|
||||
# partclone.ext4 -d -c -s /dev/sda1 -o sda1.img
|
||||
|
||||
```
|
||||
|
||||
Restore image to partition.
|
||||
```
|
||||
# partclone.ext4 -d -r -s sda1.img -o /dev/sda1
|
||||
|
||||
```
|
||||
|
||||
Partition to partition clone.
|
||||
```
|
||||
# partclone.ext4 -d -b -s /dev/sda1 -o /dev/sdb1
|
||||
|
||||
```
|
||||
|
||||
Display image information.
|
||||
```
|
||||
# partclone.info -s sda1.img
|
||||
|
||||
```
|
||||
|
||||
Check image.
|
||||
```
|
||||
# partclone.chkimg -s sda1.img
|
||||
|
||||
```
|
||||
|
||||
Are you a **Partclone** user? I wrote on [**Deepin Clone**][7] just recently and apparently, there are certain tasks Partclone is better at handling. What has been your experience with other backup and restore utility tools?
|
||||
|
||||
Do share your thoughts and suggestions with us in the comments section below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.fossmint.com/partclone-linux-backup-clone-tool/
|
||||
|
||||
作者:[Martins D. Okoi;View All Posts;Peter Beck;Martins Divine Okoi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[1]:https://partclone.org/
|
||||
[2]:https://github.com/Thomas-Tsai/partclone
|
||||
[3]:https://partclone.org/help/
|
||||
[4]:https://partclone.org/usage/
|
||||
[5]:https://partclone.org/features/
|
||||
[6]:https://partclone.org/download/
|
||||
[7]:https://www.fossmint.com/deepin-clone-system-backup-restore-for-deepin-users/
|
253
sources/tech/20180116 Analyzing the Linux boot process.md
Normal file
@ -0,0 +1,253 @@
|
||||
Translating by jessie-pang
|
||||
|
||||
Analyzing the Linux boot process
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_boot.png?itok=FUesnJQp)
|
||||
|
||||
Image by : Penguin, Boot. Modified by Opensource.com. CC BY-SA 4.0.
|
||||
|
||||
The oldest joke in open source software is the statement that "the code is self-documenting." Experience shows that reading the source is akin to listening to the weather forecast: sensible people still go outside and check the sky. What follows are some tips on how to inspect and observe Linux systems at boot by leveraging knowledge of familiar debugging tools. Analyzing the boot processes of systems that are functioning well prepares users and developers to deal with the inevitable failures.
|
||||
|
||||
In some ways, the boot process is surprisingly simple. The kernel starts up single-threaded and synchronous on a single core and seems almost comprehensible to the pitiful human mind. But how does the kernel itself get started? What functions do the [initial ramdisk][1] and bootloaders perform? And wait, why is the LED on the Ethernet port always on?
|
||||
|
||||
Read on for answers to these and other questions; the [code for the described demos and exercises][2] is also available on GitHub.
|
||||
|
||||
### The beginning of boot: the OFF state
|
||||
|
||||
#### Wake-on-LAN
|
||||
|
||||
The OFF state means that the system has no power, right? The apparent simplicity is deceptive. For example, the Ethernet LED is illuminated because wake-on-LAN (WOL) is enabled on your system. Check whether this is the case by typing:
|
||||
```
|
||||
$# sudo ethtool <interface name>
|
||||
```
|
||||
|
||||
where `<interface name>` might be, for example, `eth0`. (`ethtool` is found in Linux packages of the same name.) If "Wake-on" in the output shows `g`, remote hosts can boot the system by sending a [MagicPacket][3]. If you have no intention of waking up your system remotely and do not wish others to do so, turn WOL off either in the system BIOS menu, or via:
|
||||
```
|
||||
$# sudo ethtool -s <interface name> wol d
|
||||
```
|
||||
|
||||
The processor that responds to the MagicPacket may be part of the network interface or it may be the [Baseboard Management Controller][4] (BMC).
|
||||
|
||||
#### Intel Management Engine, Platform Controller Hub, and Minix
|
||||
|
||||
The BMC is not the only microcontroller (MCU) that may be listening when the system is nominally off. x86_64 systems also include the Intel Management Engine (IME) software suite for remote management of systems. A wide variety of devices, from servers to laptops, includes this technology, [which enables functionality][5] such as KVM Remote Control and Intel Capability Licensing Service. The [IME has unpatched vulnerabilities][6], according to [Intel's own detection tool][7]. The bad news is, it's difficult to disable the IME. Trammell Hudson has created an [me_cleaner project][8] that wipes some of the more egregious IME components, like the embedded web server, but could also brick the system on which it is run.
|
||||
|
||||
The IME firmware and the System Management Mode (SMM) software that follows it at boot are [based on the Minix operating system][9] and run on the separate Platform Controller Hub processor, not the main system CPU. The SMM then launches the Universal Extensible Firmware Interface (UEFI) software, about which much has [already been written][10], on the main processor. The Coreboot group at Google has started a breathtakingly ambitious [Non-Extensible Reduced Firmware][11] (NERF) project that aims to replace not only UEFI but early Linux userspace components such as systemd. While we await the outcome of these new efforts, Linux users may now purchase laptops from Purism, System76, or Dell [with IME disabled][12], plus we can hope for laptops [with ARM 64-bit processors][13].
|
||||
|
||||
#### Bootloaders
|
||||
|
||||
Besides starting buggy spyware, what function does early boot firmware serve? The job of a bootloader is to make available to a newly powered processor the resources it needs to run a general-purpose operating system like Linux. At power-on, there not only is no virtual memory, but no DRAM until its controller is brought up. A bootloader then turns on power supplies and scans buses and interfaces in order to locate the kernel image and the root filesystem. Popular bootloaders like U-Boot and GRUB have support for familiar interfaces like USB, PCI, and NFS, as well as more embedded-specific devices like NOR- and NAND-flash. Bootloaders also interact with hardware security devices like [Trusted Platform Modules][14] (TPMs) to establish a chain of trust from earliest boot.
|
||||
|
||||
![Running the U-boot bootloader][16]
|
||||
|
||||
Running the U-boot bootloader in the sandbox on the build host.
|
||||
|
||||
The open source, widely used [U-Boot][17] bootloader is supported on systems ranging from Raspberry Pi to Nintendo devices to automotive boards to Chromebooks. There is no syslog, and when things go sideways, often not even any console output. To facilitate debugging, the U-Boot team offers a sandbox in which patches can be tested on the build-host, or even in a nightly Continuous Integration system. Playing with U-Boot's sandbox is relatively simple on a system where common development tools like Git and the GNU Compiler Collection (GCC) are installed:
|
||||
```
|
||||
|
||||
|
||||
$# git clone git://git.denx.de/u-boot; cd u-boot
|
||||
|
||||
$# make ARCH=sandbox defconfig
|
||||
|
||||
$# make; ./u-boot
|
||||
|
||||
=> printenv
|
||||
|
||||
=> help
|
||||
```
|
||||
|
||||
That's it: you're running U-Boot on x86_64 and can test tricky features like [mock storage device][2] repartitioning, TPM-based secret-key manipulation, and hotplug of USB devices. The U-Boot sandbox can even be single-stepped under the GDB debugger. Development using the sandbox is 10x faster than testing by reflashing the bootloader onto a board, and a "bricked" sandbox can be recovered with Ctrl+C.
|
||||
|
||||
### Starting up the kernel
|
||||
|
||||
#### Provisioning a booting kernel
|
||||
|
||||
Upon completion of its tasks, the bootloader will execute a jump to kernel code that it has loaded into main memory and begin execution, passing along any command-line options that the user has specified. What kind of program is the kernel? `file /boot/vmlinuz` indicates that it is a bzImage, meaning a big compressed one. The Linux source tree contains an [extract-vmlinux tool][18] that can be used to uncompress the file:
|
||||
```
|
||||
|
||||
|
||||
$# scripts/extract-vmlinux /boot/vmlinuz-$(uname -r) > vmlinux
|
||||
|
||||
$# file vmlinux
|
||||
|
||||
vmlinux: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically
|
||||
|
||||
linked, stripped
|
||||
```
|
||||
|
||||
The kernel is an [Executable and Linking Format][19] (ELF) binary, like Linux userspace programs. That means we can use commands from the `binutils` package like `readelf` to inspect it. Compare the output of, for example:
|
||||
```
|
||||
|
||||
|
||||
$# readelf -S /bin/date
|
||||
|
||||
$# readelf -S vmlinux
|
||||
```
|
||||
|
||||
The list of sections in the binaries is largely the same.
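Underneath, `readelf` is simply parsing the ELF header at the start of the file, something we can sketch ourselves. The `elf_class()` helper below is a hypothetical illustration that validates the magic bytes and reports the ELF class:

```c
#include <elf.h>
#include <stdio.h>
#include <string.h>

/* Check that a file starts with the ELF magic, as both /bin/date and
 * an uncompressed vmlinux do. Returns 64 or 32 for the ELF class, or
 * 0 if the file is missing or not an ELF binary. */
int elf_class(const char *path)
{
    unsigned char ident[EI_NIDENT];
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;
    size_t n = fread(ident, 1, EI_NIDENT, f);
    fclose(f);
    if (n != EI_NIDENT || memcmp(ident, ELFMAG, SELFMAG) != 0)
        return 0;
    return ident[EI_CLASS] == ELFCLASS64 ? 64 : 32;
}
```

The `e_ident` bytes checked here are the same `\x7fELF` signature that `file` reports for both the kernel and userspace binaries.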
|
||||
|
||||
So the kernel must start up something like other Linux ELF binaries ... but how do userspace programs actually start? In the `main()` function, right? Not precisely.
|
||||
|
||||
Before the `main()` function can run, programs need an execution context that includes heap and stack memory plus file descriptors for `stdin`, `stdout`, and `stderr`. Userspace programs obtain these resources from the standard library, which is `glibc` on most Linux systems. Consider the following:
|
||||
```
|
||||
|
||||
|
||||
$# file /bin/date
|
||||
|
||||
/bin/date: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically
|
||||
|
||||
linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32,
|
||||
|
||||
BuildID[sha1]=14e8563676febeb06d701dbee35d225c5a8e565a,
|
||||
|
||||
stripped
|
||||
```
|
||||
|
||||
ELF binaries have an interpreter, just as Bash and Python scripts do, but the interpreter need not be specified with `#!` as in scripts, as ELF is Linux's native format. The ELF interpreter [provisions a binary][20] with the needed resources by calling `_start()`, a function available from the `glibc` source package that can be [inspected via GDB][21]. The kernel obviously has no interpreter and must provision itself, but how?
|
||||
|
||||
Inspecting the kernel's startup with GDB gives the answer. First install the debug package for the kernel that contains an unstripped version of `vmlinux`, for example `apt-get install linux-image-amd64-dbg`, or compile and install your own kernel from source, for example, by following instructions in the excellent [Debian Kernel Handbook][22]. `gdb vmlinux` followed by `info files` shows the ELF section `init.text`. List the start of program execution in `init.text` with `l *(address)`, where `address` is the hexadecimal start of `init.text`. GDB will indicate that the x86_64 kernel starts up in the kernel's file [arch/x86/kernel/head_64.S][23], where we find the assembly function `start_cpu0()` and code that explicitly creates a stack and decompresses the zImage before calling the x86_64 `start_kernel()` function. ARM 32-bit kernels have the similar [arch/arm/kernel/head.S][24]. `start_kernel()` is not architecture-specific, so the function lives in the kernel's [init/main.c][25]. `start_kernel()` is arguably Linux's true `main()` function.
|
||||
|
||||
### From start_kernel() to PID 1
|
||||
|
||||
#### The kernel's hardware manifest: the device-tree and ACPI tables
|
||||
|
||||
At boot, the kernel needs information about the hardware beyond the processor type for which it has been compiled. The instructions in the code are augmented by configuration data that is stored separately. There are two main methods of storing this data: [device-trees][26] and [ACPI tables][27]. The kernel learns what hardware it must run at each boot by reading these files.
|
||||
|
||||
For embedded devices, the device-tree is a manifest of installed hardware. The device-tree is simply a file that is compiled at the same time as kernel source and is typically located in `/boot` alongside `vmlinux`. To see what's in the binary device-tree on an ARM device, just use the `strings` command from the `binutils` package on a file whose name matches `/boot/*.dtb`, as `dtb` refers to a device-tree binary. Clearly the device-tree can be modified simply by editing the JSON-like files that compose it and rerunning the special `dtc` compiler that is provided with the kernel source. While the device-tree is a static file whose file path is typically passed to the kernel by the bootloader on the command line, a [device-tree overlay][28] facility has been added in recent years, where the kernel can dynamically load additional fragments in response to hotplug events after boot.
|
||||
|
||||
x86-family and many enterprise-grade ARM64 devices make use of the alternative Advanced Configuration and Power Interface ([ACPI][27]) mechanism. In contrast to the device-tree, the ACPI information is stored in the `/sys/firmware/acpi/tables` virtual filesystem that is created by the kernel at boot by accessing onboard ROM. The easy way to read the ACPI tables is with the `acpidump` command from the `acpica-tools` package. Here's an example:
|
||||
|
||||
![ACPI tables on Lenovo laptops][30]

ACPI tables on Lenovo laptops are all set for Windows 2001.

Yes, your Linux system is ready for Windows 2001, should you care to install it. ACPI has both methods and data, unlike the device-tree, which is more of a hardware-description language. ACPI methods continue to be active post-boot. For example, starting the command `acpi_listen` (from package `acpid`) and opening and closing the laptop lid will show that ACPI functionality is running all the time. While temporarily and dynamically [overwriting the ACPI tables][31] is possible, permanently changing them involves interacting with the BIOS menu at boot or reflashing the ROM. If you're going to that much trouble, perhaps you should just [install coreboot][32], the open source firmware replacement.

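The tables the kernel exported can also be listed straight from sysfs; a minimal sketch, guarded because the directory exists only on ACPI systems (it is absent on device-tree machines and in most containers):

```shell
#!/bin/sh
# List the ACPI tables the kernel read at boot, or fall back gracefully.
if [ -d /sys/firmware/acpi/tables ]; then
  ls /sys/firmware/acpi/tables
else
  echo "no ACPI tables exported here"
fi
```
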
#### From start_kernel() to userspace

The code in [init/main.c][25] is surprisingly readable and, amusingly, still carries Linus Torvalds' original copyright from 1991-1992. The lines found in `dmesg | head` on a newly booted system originate mostly from this source file. The first CPU is registered with the system, global data structures are initialized, and the scheduler, interrupt handlers (IRQs), timers, and console are brought online one by one, in strict order. Until the function `timekeeping_init()` runs, all timestamps are zero. This part of the kernel initialization is synchronous, meaning that execution occurs in precisely one thread, and no function is executed until the last one completes and returns. As a result, the `dmesg` output will be completely reproducible, even between two systems, as long as they have the same device-tree or ACPI tables. Linux behaves here like one of the RTOSes (real-time operating systems) that run on MCUs, for example QNX or VxWorks. The situation persists into the function `rest_init()`, which is called by `start_kernel()` at its termination.

![Summary of early kernel boot process.][34]

Summary of early kernel boot process.

The rather humbly named `rest_init()` spawns a new thread that runs `kernel_init()`, which invokes `do_initcalls()`. Users can spy on `initcalls` in action by appending `initcall_debug` to the kernel command line, resulting in `dmesg` entries every time an `initcall` function runs. `initcalls` pass through eight sequential levels: early, core, postcore, arch, subsys, fs, device, and late. The most user-visible part of the `initcalls` is the probing and setup of all the processors' peripherals: buses, network, storage, displays, etc., accompanied by the loading of their kernel modules. `rest_init()` also spawns a second thread on the boot processor that begins by running `cpu_idle()` while it waits for the scheduler to assign it work.

`kernel_init()` also [sets up symmetric multiprocessing][35] (SMP). With more recent kernels, find this point in `dmesg` output by looking for "Bringing up secondary CPUs..." SMP proceeds by "hotplugging" CPUs, meaning that it manages their lifecycle with a state machine that is notionally similar to that of devices like hotplugged USB sticks. The kernel's power-management system frequently takes individual cores offline, then wakes them as needed, so that the same CPU hotplug code is called over and over on a machine that is not busy. Observe the power-management system's invocation of CPU hotplug with the [BCC tool][36] called `offcputime.py`.

Note that the code in `init/main.c` is nearly finished executing when `smp_init()` runs: The boot processor has completed most of the one-time initialization that the other cores need not repeat. Nonetheless, the per-CPU threads must be spawned for each core to manage interrupts (IRQs), workqueues, timers, and power events on each. For example, see the per-CPU threads that service softirqs and workqueues in action via the `ps -o psr` command.

```
# ps -o pid,psr,comm $(pgrep ksoftirqd)
  PID PSR COMMAND
    7   0 ksoftirqd/0
   16   1 ksoftirqd/1
   22   2 ksoftirqd/2
   28   3 ksoftirqd/3

# ps -o pid,psr,comm $(pgrep kworker)
  PID PSR COMMAND
    4   0 kworker/0:0H
   18   1 kworker/1:0H
   24   2 kworker/2:0H
   30   3 kworker/3:0H
[ . . . ]
```

where the PSR field stands for "processor." Each core must also host its own timers and `cpuhp` hotplug handlers.

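The same per-CPU threads can be found without `pgrep` by scanning `/proc` directly; a portable sketch (the matched name patterns are typical but vary by kernel version, and the list may legitimately be empty inside a container, where the host's kernel threads are hidden):

```shell
#!/bin/sh
# List per-CPU kernel threads by reading each process's comm file from /proc.
for piddir in /proc/[0-9]*; do
  comm=$(cat "$piddir/comm" 2>/dev/null) || continue
  case "$comm" in
    ksoftirqd/*|kworker/*|migration/*|cpuhp/*)
      printf '%s\t%s\n' "${piddir#/proc/}" "$comm" ;;
  esac
done
```
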
How is it, finally, that userspace starts? Near its end, `kernel_init()` looks for an `initrd` that can execute the `init` process on its behalf. If it finds none, the kernel directly executes `init` itself. Why then might one want an `initrd`?

#### Early userspace: who ordered the initrd?

Besides the device-tree, another file path that is optionally provided to the kernel at boot is that of the `initrd`. The `initrd` often lives in `/boot` alongside the bzImage file `vmlinuz` on x86, or alongside the similar uImage and device-tree for ARM. List the contents of the `initrd` with the `lsinitramfs` tool that is part of the `initramfs-tools-core` package. Distro `initrd` schemes contain minimal `/bin`, `/sbin`, and `/etc` directories along with kernel modules, plus some files in `/scripts`. All of these should look pretty familiar, as the `initrd` for the most part is simply a minimal Linux root filesystem. The apparent similarity is a bit deceptive, as nearly all the executables in `/bin` and `/sbin` inside the ramdisk are symlinks to the [BusyBox binary][37], resulting in `/bin` and `/sbin` directories that are 10x smaller than a glibc-based system's.

Why bother to create an `initrd` if all it does is load some modules and then start `init` on the regular root filesystem? Consider an encrypted root filesystem. The decryption may rely on loading a kernel module that is stored in `/lib/modules` on the root filesystem ... and, unsurprisingly, in the `initrd` as well. The crypto module could be statically compiled into the kernel instead of loaded from a file, but there are various reasons for not wanting to do so. For example, statically compiling the kernel with modules could make it too large to fit on the available storage, or static compilation may violate the terms of a software license. Naturally, storage, network, and human input device (HID) drivers may also be present in the `initrd`: basically any code that is not part of the kernel proper that is needed to mount the root filesystem. The `initrd` is also a place where users can stash their own [custom ACPI][38] table code.

![Rescue shell and a custom <code>initrd</code>.][40]

Having some fun with the rescue shell and a custom `initrd`.

An `initrd` is also great for testing filesystems and data-storage devices themselves. Stash these test tools in the `initrd` and run your tests from memory rather than from the object under test.

At last, when `init` runs, the system is up! Since the secondary processors are now running, the machine has become the asynchronous, preemptible, unpredictable, high-performance creature we know and love. Indeed, `ps -o pid,psr,comm -p 1` is liable to show that userspace's `init` process is no longer running on the boot processor.

### Summary

The Linux boot process sounds forbidding, considering the number of different pieces of software that participate even on simple embedded devices. Looked at differently, the boot process is rather simple, since the bewildering complexity caused by features like preemption, RCU, and race conditions is absent during boot. Focusing on just the kernel and PID 1 overlooks the large amount of work that bootloaders and subsidiary processors may do in preparing the platform for the kernel to run. While the kernel is certainly unique among Linux programs, some insight into its structure can be gleaned by applying to it some of the same tools used to inspect other ELF binaries. Studying the boot process while it's working well arms system maintainers for failures when they come.

To learn more, attend Alison Chaiken's talk, [Linux: The first second][41], at [linux.conf.au][42], which will be held January 22-26 in Sydney.

Thanks to [Akkana Peck][43] for originally suggesting this topic and for many corrections.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/analyzing-linux-boot-process

作者:[Alison Chaiken][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/don-watkins
[1]:https://en.wikipedia.org/wiki/Initial_ramdisk
[2]:https://github.com/chaiken/LCA2018-Demo-Code
[3]:https://en.wikipedia.org/wiki/Wake-on-LAN
[4]:https://lwn.net/Articles/630778/
[5]:https://www.youtube.com/watch?v=iffTJ1vPCSo&index=65&list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk
[6]:https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00086&languageid=en-fr
[7]:https://www.intel.com/content/www/us/en/support/articles/000025619/software.html
[8]:https://github.com/corna/me_cleaner
[9]:https://lwn.net/Articles/738649/
[10]:https://lwn.net/Articles/699551/
[11]:https://trmm.net/NERF
[12]:https://www.extremetech.com/computing/259879-dell-now-shipping-laptops-intels-management-engine-disabled
[13]:https://lwn.net/Articles/733837/
[14]:https://linuxplumbersconf.org/2017/ocw/events/LPC2017/tracks/639
[15]:/file/383501
[16]:https://opensource.com/sites/default/files/u128651/linuxboot_1.png (Running the U-boot bootloader)
[17]:http://www.denx.de/wiki/DULG/Manual
[18]:https://github.com/torvalds/linux/blob/master/scripts/extract-vmlinux
[19]:http://man7.org/linux/man-pages/man5/elf.5.html
[20]:https://0xax.gitbooks.io/linux-insides/content/Misc/program_startup.html
[21]:https://github.com/chaiken/LCA2018-Demo-Code/commit/e543d9812058f2dd65f6aed45b09dda886c5fd4e
[22]:http://kernel-handbook.alioth.debian.org/
[23]:https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/head_64.S
[24]:https://github.com/torvalds/linux/blob/master/arch/arm/boot/compressed/head.S
[25]:https://github.com/torvalds/linux/blob/master/init/main.c
[26]:https://www.youtube.com/watch?v=m_NyYEBxfn8
[27]:http://events.linuxfoundation.org/sites/events/files/slides/x86-platform.pdf
[28]:http://lwn.net/Articles/616859/
[29]:/file/383506
[30]:https://opensource.com/sites/default/files/u128651/linuxboot_2.png (ACPI tables on Lenovo laptops)
[31]:https://www.mjmwired.net/kernel/Documentation/acpi/method-customizing.txt
[32]:https://www.coreboot.org/Supported_Motherboards
[33]:/file/383511
[34]:https://opensource.com/sites/default/files/u128651/linuxboot_3.png (Summary of early kernel boot process.)
[35]:http://free-electrons.com/pub/conferences/2014/elc/clement-smp-bring-up-on-arm-soc
[36]:http://www.brendangregg.com/ebpf.html
[37]:https://www.busybox.net/
[38]:https://www.mjmwired.net/kernel/Documentation/acpi/initrd_table_override.txt
[39]:/file/383516
[40]:https://opensource.com/sites/default/files/u128651/linuxboot_4.png (Rescue shell and a custom <code>initrd</code>.)
[41]:https://rego.linux.conf.au/schedule/presentation/16/
[42]:https://linux.conf.au/index.html
[43]:http://shallowsky.com/

@ -0,0 +1,315 @@
How To Create A Bootable Zorin OS USB Drive
======

![Zorin OS][17]

### Introduction

In this guide I will show you how to create a bootable Zorin OS USB drive.

To be able to follow this guide you will need the following:

* A blank USB drive
* An internet connection

### What Is Zorin OS?

Zorin OS is a Linux-based operating system.

If you are a Windows user you might wonder why you would bother with Zorin OS. If you are a Linux user then you might also wonder why you would use Zorin OS over other distributions such as Linux Mint or Ubuntu.

If you are using an older version of Windows and you can't afford to upgrade to Windows 10, or your computer doesn't have the right specifications for running Windows 10, then Zorin OS provides a free (or cheap, depending how much you choose to donate) upgrade path allowing you to continue to use your computer in a much more secure environment.

If your current operating system is Windows XP or Windows Vista then you might consider using Zorin OS Lite as opposed to Zorin OS Core.

The features of Zorin OS Lite are generally the same as the Zorin OS Core product, but some of the applications installed and the desktop environment used for displaying menus, icons, and other Windowsy features take up much less memory and processing power.

If you are running Windows 7 then your operating system is coming towards the end of its life. You could probably upgrade to Windows 10, but at a hefty price.

Not everybody has the finances to pay for a new Windows license and not everybody has the money to buy a brand new computer.

Zorin OS will help you extend the life of your computer and you will still feel you are using a premium product, and that is because you will be. The product with the highest price doesn't always provide the best value.

Whilst we are talking about value for money, Zorin OS allows you to install the best free and open source software available and comes with a good selection of packages pre-installed.

For the home user, using Zorin OS doesn't have to feel any different to running Windows. You can browse the web using the browser of your choice, and you can listen to music and watch videos. There are mail clients and other productivity tools.

Talking of productivity, there is LibreOffice. LibreOffice has everything the average home user requires from an office suite, with a word processor, spreadsheet and presentations package.

If you want to run Windows software then you can use the pre-installed PlayOnLinux and WINE packages to install and run all manner of packages, including Microsoft Office.

By running Zorin OS you will get the extra security benefits of running a Linux-based operating system.

Are you fed up with Windows updates stalling your productivity? When Windows wants to install updates it requires a reboot and then a long wait whilst it proceeds to install update after update. Sometimes it even forces a reboot whilst you are busy working.

Zorin OS is different. Updates download and install themselves whilst you are using the computer. You won't even need to know it is happening.

Why Zorin over Mint or Ubuntu? Zorin is the happy stepping stone between Windows and Linux. It is Linux, but you don't need to care that it is Linux. If you decide later on to move to something different then so be it, but there really is no need.

### The Zorin OS Website

![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinwebsite1-678x381.png)

You can visit the Zorin OS website by visiting [www.zorinos.com][18].

The homepage of the Zorin OS website tells you everything you need to know.

"Zorin OS is an alternative to Windows and macOS, designed to make your computer faster, more powerful and secure."

There is nothing that tells you that Zorin OS is based on Linux. There is no need for Zorin to tell you that, because even though Windows used to be heavily based on DOS you didn't need to know DOS commands to use it. Likewise you don't necessarily need to know Linux commands to use Zorin.

If you scroll down the page you will see a slide show highlighting the way the desktop looks and feels under Zorin.

The good thing is that you can customise the user interface, so if you prefer a Windows layout you can use a Windows-style layout, but if you prefer a Mac-style layout you can go for that as well.

Zorin OS is based on Ubuntu Linux, and the website uses this fact to highlight that underneath it has a stable base, along with the security benefits provided by Linux.

If you want to see what applications are available for Zorin then there is a link to do that, and Zorin never sells your data and protects your privacy.

### What Are The Different Versions Of Zorin OS

#### Zorin OS Ultimate

The Ultimate edition takes the Core edition and adds other features such as different layouts, more applications pre-installed and extra games.

The Ultimate edition comes at a price of 19 euros, which is a bargain compared to other operating systems.

#### Zorin OS Core

The Core version is the standard edition and comes with everything the average person could need from the outset.

This is the version I will show you how to download and install in this guide.

#### Zorin OS Lite

Zorin OS Lite also has an Ultimate version available and a Core version. Zorin OS Lite is perfect for older computers, and the main difference is the desktop environment used to display menus and handle screen elements such as icons and panels.

Zorin OS Lite is less memory intensive than Zorin OS Core.

#### Zorin OS Business

Zorin OS Business comes with business applications installed as standard, such as finance applications and office applications.

### How To Get Zorin OS

To download Zorin OS visit <https://zorinos.com/download/>.

To get the core version scroll past the Zorin Ultimate section until you get to the Zorin Core section.

You will see a small pay panel which allows you to choose how much you wish to pay for Zorin Core, with a purchase now button underneath.

#### How To Pay For Zorin OS

![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinwebsite1-678x381.png)

You can choose from the three preset amounts or enter an amount of your choice in the "Custom" box.

When you click "Purchase Zorin OS Core" the following window will appear:

![](http://dailylinuxuser.com/wp-content/uploads/2018/01/payforzorin.png)

You can now enter your email and credit card information.

When you click the "pay" button a window will appear with a download link.

#### How To Get Zorin OS For Free

If you don't wish to pay anything at all you can enter zero (0) into the custom box. The button will change and will show the words "Download Zorin OS Core".

#### How To Download Zorin OS

![](http://dailylinuxuser.com/wp-content/uploads/2018/01/downloadzorin.png)

Whether you have bought Zorin or have chosen to download for free, a window will appear with the option to download a 64-bit or 32-bit version of Zorin.

Most modern computers are capable of running 64-bit operating systems, but to check within Windows, click the "start" button and type "system information".

![](http://dailylinuxuser.com/wp-content/uploads/2018/01/systeminfo.png)

Click on the "System Information" desktop app and halfway down the right panel you will see the words "system type". If you see the words "x64 based PC" then the system is capable of running 64-bit operating systems.

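As an aside, if you are checking from a machine that already runs Linux rather than Windows, one command answers the same question:

```shell
# Print the machine hardware name: "x86_64" means the CPU can run a 64-bit
# operating system, while "i686" or "i586" indicates a 32-bit-only machine.
uname -m
```
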
If your computer is capable of running 64-bit operating systems click on the "Download 64 bit" button, otherwise click on "Download 32 bit".

The ISO image file for Zorin will now start to download to your computer.

### How To Verify If The Zorin OS Download Is Valid

It is important to check whether the download is valid for many reasons.

If the file has only partially downloaded, or there were interruptions whilst downloading and you had to resume, then the image might not be perfect and it should be downloaded again.

More importantly, you should check the validity to make sure the version you downloaded is genuine and wasn't uploaded by a hacker.

In order to check the validity of the ISO image you should download a piece of software called QuickHash for Windows from <https://www.quickhash-gui.org/download/quickhash-v2-8-4-for-windows/>.

Click the "download" link and when the file has downloaded double click on it.

Click on the relevant application file within the zip file. If you have a 32-bit system click "Quickhash-v2.8.4-32bit" or for a 64-bit system click "Quickhash-v2.8.4-64bit".

Click on the "Run" button.

![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinhash.png)

Click the SHA256 radio button on the left side of the screen and then click on the file tab.

Click "Select File" and navigate to the downloads folder.

Choose the Zorin ISO image downloaded previously.

A progress bar will now work out the hash value for the ISO image.

To compare this with the valid keys available for Zorin visit <https://zorinos.com/help/install-zorin-os/> and scroll down until you see the list of checksums as follows:

![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinhashcodes.png)

Select the long string of characters next to the version of Zorin OS that you downloaded and press CTRL and C to copy.

Go back to the QuickHash screen and paste the value into the "Expected hash value" box by pressing CTRL and V.

You should see the words "Expected hash matches the computed file hash, OK".

If the values do not match you will see the words "Expected hash DOES NOT match the computed file hash" and you should download the ISO image again.

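If you are verifying from an existing Linux system, QuickHash is unnecessary; the `sha256sum` tool from coreutils performs the same comparison. A self-contained sketch with a stand-in file (in practice the checksum line would come from the zorinos.com help page, and the filenames here are placeholders):

```shell
#!/bin/sh
# Demonstrate checksum verification: record a file's SHA-256, then check it.
printf 'stand-in ISO contents\n' > zorin-demo.iso
sha256sum zorin-demo.iso > SHA256SUMS   # publisher's side: publish the hash
sha256sum -c SHA256SUMS                 # your side: prints "zorin-demo.iso: OK"
rm -f zorin-demo.iso SHA256SUMS
```

A hash mismatch makes `sha256sum -c` report FAILED and exit non-zero, the equivalent of QuickHash's "DOES NOT match" message.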
### How To Create A Bootable Zorin OS USB Drive

In order to be able to install Zorin you will need to install a piece of software called Etcher. You will also need a blank USB drive.

You can download Etcher from <https://etcher.io/>.

![](http://dailylinuxuser.com/wp-content/uploads/2018/01/downloadetcher.png)

If you are using a 64-bit computer click on the "Download for Windows x64" link, otherwise click on the little arrow and choose "Etcher for Windows x86 (32-bit) (Installer)".

Insert the USB drive into your computer and double click on the "Etcher" setup executable file.

![](http://dailylinuxuser.com/wp-content/uploads/2018/01/etcherlicense.png)

When the license screen appears click "I Agree".

Etcher should start automatically after the installation completes, but if it doesn't you can press the Windows key or click the start button and search for "Etcher".

![](http://dailylinuxuser.com/wp-content/uploads/2018/01/etcherscreen.png)

Click on "Select Image" and select the "Zorin" ISO image downloaded previously.

Click "Flash".

Windows will ask for your permission to continue. Click "Yes" to accept.

After a while a window will appear with the words "Flash Complete".

### How To Buy A Zorin OS USB Drive

If the above instructions seem too much like hard work then you can order a Zorin USB drive by clicking one of the following links:

* [Zorin OS Core – 32-bit DVD][1]
* [Zorin OS Core – 64-bit DVD][2]
* [Zorin OS Core – 16 gigabyte USB drive (32-bit)][3]
* [Zorin OS Core – 32 gigabyte USB drive (32-bit)][4]
* [Zorin OS Core – 64 gigabyte USB drive (32-bit)][5]
* [Zorin OS Core – 16 gigabyte USB drive (64-bit)][6]
* [Zorin OS Core – 32 gigabyte USB drive (64-bit)][7]
* [Zorin OS Core – 64 gigabyte USB drive (64-bit)][8]
* [Zorin OS Lite – 32-bit DVD][9]
* [Zorin OS Lite – 64-bit DVD][10]
* [Zorin OS Lite – 16 gigabyte USB drive (32-bit)][11]
* [Zorin OS Lite – 32 gigabyte USB drive (32-bit)][12]
* [Zorin OS Lite – 64 gigabyte USB drive (32-bit)][13]
* [Zorin OS Lite – 16 gigabyte USB drive (64-bit)][14]
* [Zorin OS Lite – 32 gigabyte USB drive (64-bit)][15]
* [Zorin OS Lite – 64 gigabyte USB drive (64-bit)][16]

### How To Boot Into Zorin OS Live

On older computers simply insert the USB drive and restart the computer. The boot menu for Zorin should appear straight away.

On modern computers insert the USB drive, restart the computer, and before Windows loads press the appropriate function key to bring up the boot menu.

The following list shows the key or keys you can press for the most popular computer manufacturers.

* Acer - Escape, F12, F9
* Asus - Escape, F8
* Compaq - Escape, F9
* Dell - F12
* Emachines - F12
* HP - Escape, F9
* Intel - F10
* Lenovo - F8, F10, F12
* Packard Bell - F8
* Samsung - Escape, F12
* Sony - F10, F11
* Toshiba - F12

Check the manufacturer's website to find the key for your computer if it isn't listed, or keep trying different function keys or the escape key.

A screen will appear with the following three options:

1. Try Zorin OS without Installing
2. Install Zorin OS
3. Check disc for defects

Choose "Try Zorin OS without Installing" by pressing enter with that option selected.

### Summary

You can now try Zorin OS without damaging your current operating system.

To get back to your original operating system, reboot and remove the USB drive.

### How To Remove Zorin OS From The USB Drive

If you have decided that Zorin OS is not for you and you want to get the USB drive back into its pre-Zorin state, follow this guide:

[How To Fix A USB Drive After Linux Has Been Installed On It][19]

--------------------------------------------------------------------------------

via: http://dailylinuxuser.com/2018/01/how-to-create-a-bootable-zorin-os-usb-drive.html

作者:[admin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:

[1]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-install-live-dvd-32bit.html?affiliate=everydaylinuxuser
[2]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-install-live-dvd-64bit.html?affiliate=everydaylinuxuser
[3]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-16gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
[4]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-32gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
[5]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-64gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
[6]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-16gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
[7]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-32gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
[8]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-64gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
[9]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-install-live-dvd-32bit.html?affiliate=everydaylinuxuser
[10]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-install-live-dvd-64bit.html?affiliate=everydaylinuxuser
[11]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-16gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
[12]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-32gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
[13]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-64gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser
[14]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-16gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
[15]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-32gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
[16]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-64gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser
[17]:http://dailylinuxuser.com/wp-content/uploads/2018/01/zorindesktop-678x381.png (Zorin OS)
[18]:http://www.zorinos.com
[19]:http://dailylinuxuser.com/2016/04/how-to-fix-usb-drive-after-linux-has.html

@ -0,0 +1,267 @@
How to Install and Optimize Apache on Ubuntu
======

This is the beginning of our LAMP tutorial series: how to install the Apache web server on Ubuntu.

These instructions should work on any Ubuntu-based distro, including Ubuntu 14.04, Ubuntu 16.04, [Ubuntu 18.04][1], and even non-LTS Ubuntu releases like 17.10. They were tested and written for Ubuntu 16.04.

Apache (aka httpd) is the most popular and most widely used web server, so this should be useful for everyone.

### Before we begin installing Apache

Some requirements and notes before we begin:

* Apache may already be installed on your server, so check if it is first. You can do so with the `apachectl -V` command, which outputs the Apache version you're using and some other information.
* You'll need an Ubuntu server. You can buy one from [Vultr][2], they're one of the [best and cheapest cloud hosting providers][3]. Their servers start from $2.5 per month.
* You'll need the root user or a user with sudo access. All commands below are executed by the root user so we didn't have to append 'sudo' to each command.
* You'll need [SSH enabled][4] if you use Ubuntu or an SSH client like [MobaXterm][5] if you use Windows.

That's most of it. Let's move onto the installation.

### Install Apache on Ubuntu

The first thing you always need to do is update Ubuntu before you do anything else. You can do so by running:

```
apt-get update && apt-get upgrade
```

Next, to install Apache, run the following command:

```
apt-get install apache2
```

If you want to, you can also install the Apache documentation and some Apache utilities. You'll need the Apache utilities for some of the modules we'll install later.

```
apt-get install apache2-doc apache2-utils
```

**And that's it. You've successfully installed Apache.**

You'll still need to configure it.

### Configure and Optimize Apache on Ubuntu

There are various configs you can do on Apache, but the main and most common ones are explained below.

#### Check if Apache is running

By default, Apache is configured to start automatically on boot, so you don't have to enable it. You can check if it's running and other relevant information with the following command:

```
systemctl status apache2
```

|
||||
|
||||
[![check if apache is running][6]][6]
|
||||
|
||||
And you can check what version you're using with
|
||||
```
|
||||
apachectl -V
|
||||
```
|
||||
|
||||
A simpler way of checking this is by visiting your server's IP address. If you get the default Apache page, then everything's working fine.
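When scripting, you may want just the version number rather than the full report. A hedged sketch that parses it out of a sample first line of `apachectl -V` output (on a live server you'd pipe the real command in instead of using the sample string):

```shell
# Sample first line of `apachectl -V` output; on a real server use:
#   apachectl -V | head -n1
sample='Server version: Apache/2.4.18 (Ubuntu)'

ver=${sample#*Apache/}   # strip everything through "Apache/"
ver=${ver%% *}           # strip everything after the first space
echo "$ver"              # -> 2.4.18
```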
#### Update your firewall

If you use a firewall (which you should), you'll probably need to update your firewall rules and allow access to the default ports. The most common firewall used on Ubuntu is UFW, so the instructions below are for UFW.

To allow traffic through both port 80 (http) and port 443 (https), run the following command:

```
ufw allow 'Apache Full'
```
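The 'Apache Full' profile covers both ports. If you'd rather open them explicitly, the per-port equivalent looks like this — sketched as a dry run (`echo`) so nothing changes until you remove the echo:

```shell
# Dry run: print the per-port equivalents of `ufw allow 'Apache Full'`.
# Remove the echo to actually apply them on a live firewall.
cmds=$(for port in 80/tcp 443/tcp; do echo "ufw allow $port"; done)
printf '%s\n' "$cmds"
```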
#### Install common Apache modules

Some modules are frequently recommended and you should install them. We'll include instructions for the most common ones:

##### Speed up your website with the PageSpeed module

The PageSpeed module will optimize and speed up your Apache server automatically.

First, go to the [PageSpeed download page][7] and choose the file you need. We're using a 64-bit Ubuntu server and we'll install the latest stable version. Download it using wget:

```
wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb
```

Then, install it with the following commands:

```
dpkg -i mod-pagespeed-stable_current_amd64.deb
apt-get -f install
```

Restart Apache for the changes to take effect:

```
systemctl restart apache2
```
##### Enable rewrites/redirects using the mod_rewrite module

This module is used for rewrites (redirects), as the name suggests. You'll need it if you use WordPress or any other CMS for that matter. To install it, just run:

```
a2enmod rewrite
```

And restart Apache again. You may need some extra configuration depending on what CMS you're using, if any. Google specific instructions for your setup.

##### Secure your Apache with the ModSecurity module

ModSecurity is a module used for security, again, as the name suggests. It basically acts as a firewall that monitors your traffic. To install it, run the following command:

```
apt-get install libapache2-modsecurity
```

And restart Apache again:

```
systemctl restart apache2
```

ModSecurity comes with a default setup that's enough by itself, but if you want to extend it, you can use the [OWASP rule set][8].
##### Block DDoS attacks using the mod_evasive module

You can use the mod_evasive module to block and prevent DDoS attacks on your server, though it's debatable how useful it is in preventing attacks. To install it, use the following command:

```
apt-get install libapache2-mod-evasive
```

By default, mod_evasive is disabled. To enable it, edit the following file:

```
nano /etc/apache2/mods-enabled/evasive.conf
```

Uncomment all the lines (remove the #) and configure it per your requirements. You can leave everything as-is if you don't know what to edit.

[![mod_evasive][9]][9]

And create a log file:

```
mkdir /var/log/mod_evasive
chown -R www-data:www-data /var/log/mod_evasive
```
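The mkdir and chown pair above can also be collapsed into a single `install -d` call. A sketch against `/tmp` and the current user so it can run unprivileged; on the server you'd target `/var/log/mod_evasive` with `www-data:www-data`:

```shell
# install -d creates the directory and sets ownership in one step.
# /tmp/mod_evasive_demo and the current user are stand-ins for
# /var/log/mod_evasive and www-data:www-data.
install -d -o "$(id -un)" -g "$(id -gn)" /tmp/mod_evasive_demo
ls -ld /tmp/mod_evasive_demo
```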
That's it. Now restart Apache for the changes to take effect:

```
systemctl restart apache2
```

There are [additional modules][10] you can install and configure, but it's all up to you and the software you're using. They're usually not required. Even the four modules we included are not required. If a module is required for a specific application, its documentation will probably note that.
#### Optimize Apache with the Apache2Buddy script

Apache2Buddy is a script that will automatically fine-tune your Apache configuration. The only thing you need to do is run the following command and the script does the rest automatically:

```
curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl
```

You may need to install curl if you don't have it already. Use the following command to install curl:

```
apt-get install curl
```

#### Additional configurations

There's some extra stuff you can do with Apache, but we'll leave it for another tutorial: things like enabling HTTP/2 support, turning KeepAlive off (or on), and tuning Apache even further. You don't have to do any of this, but you can find tutorials online if you can't wait for ours.
### Create your first website with Apache

Now that we're done with all the tuning, let's move on to creating an actual website. Follow our instructions to create a simple HTML page and a virtual host that's going to run on Apache.

The first thing you need to do is create a new directory for your website. Run the following command to do so:

```
mkdir -p /var/www/example.com/public_html
```

Of course, replace example.com with your desired domain. You can get a cheap domain name from [Namecheap][11].

Don't forget to replace example.com in all of the commands below.
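If you host more than one site, parameterizing the layout with a variable helps avoid typos. A minimal sketch using `/tmp/www` so it runs without root — substitute `/var/www` on a real server:

```shell
DOMAIN="example.com"   # your domain
WEBROOT="/tmp/www"     # use /var/www on a real server

# Create the document root for this domain.
mkdir -p "$WEBROOT/$DOMAIN/public_html"
echo "created $WEBROOT/$DOMAIN/public_html"
```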
Next, create a simple, static web page. Create the HTML file:

```
nano /var/www/example.com/public_html/index.html
```

And paste this:

```
<html>
<head>
<title>Simple Page</title>
</head>
<body>
<p>If you're seeing this in your browser then everything works.</p>
</body>
</html>
```

Save and close the file.
Configure the permissions of the directory:
|
||||
```
|
||||
chown -R www-data:www-data /var/www/example.com
|
||||
chmod -R og-r /var/www/example.com
|
||||
```
|
||||
|
||||
Create a new virtual host for your site:
|
||||
```
|
||||
nano /etc/apache2/sites-available/example.com.conf
|
||||
```
|
||||
|
||||
And paste the following:
|
||||
```
|
||||
<VirtualHost *:80>
|
||||
ServerAdmin admin@example.com
|
||||
ServerName example.com
|
||||
ServerAlias www.example.com
|
||||
|
||||
DocumentRoot /var/www/example.com/public_html
|
||||
|
||||
ErrorLog ${APACHE_LOG_DIR}/error.log
|
||||
CustomLog ${APACHE_LOG_DIR}/access.log combined
|
||||
</VirtualHost>
|
||||
```
|
||||
|
||||
This is a basic virtual host. You may need a more advanced .conf file depending on your setup.
|
||||
|
||||
Save and close the file after updating everything accordingly.
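If you manage several sites, generating the vhost from a template avoids copy-paste mistakes. A hedged sketch that writes to `/tmp` so it can run without root (the real file belongs in `/etc/apache2/sites-available`):

```shell
DOMAIN="example.com"
CONF="/tmp/$DOMAIN.conf"   # /etc/apache2/sites-available/$DOMAIN.conf on a real server

# Fill the vhost template; \$ keeps the Apache variables literal.
cat > "$CONF" <<EOF
<VirtualHost *:80>
    ServerAdmin admin@$DOMAIN
    ServerName $DOMAIN
    ServerAlias www.$DOMAIN

    DocumentRoot /var/www/$DOMAIN/public_html

    ErrorLog \${APACHE_LOG_DIR}/error.log
    CustomLog \${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
EOF

grep ServerName "$CONF"
```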
Now, enable the virtual host with the following command:

```
a2ensite example.com.conf
```

And finally, restart Apache for the changes to take effect:

```
systemctl restart apache2
```

That's it. You're done. Now you can visit example.com and view your page.
--------------------------------------------------------------------------------

via: https://thishosting.rocks/how-to-install-optimize-apache-ubuntu/

作者:[ThisHosting][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://thishosting.rocks
[1]:https://thishosting.rocks/ubuntu-18-04-new-features-release-date/
[2]:https://thishosting.rocks/go/vultr/
[3]:https://thishosting.rocks/cheap-cloud-hosting-providers-comparison/
[4]:https://thishosting.rocks/how-to-enable-ssh-on-ubuntu/
[5]:https://mobaxterm.mobatek.net/
[6]:https://thishosting.rocks/wp-content/uploads/2018/01/apache-running.jpg
[7]:https://www.modpagespeed.com/doc/download
[8]:https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project
[9]:https://thishosting.rocks/wp-content/uploads/2018/01/mod_evasive.jpg
[10]:https://httpd.apache.org/docs/2.4/mod/
[11]:https://thishosting.rocks/neamcheap-review-cheap-domains-cool-names
[12]:https://thishosting.rocks/wp-content/plugins/patron-button-and-widgets-by-codebard/images/become_a_patron_button.png
[13]:https://www.patreon.com/thishostingrocks
How to Install and Use iostat on Ubuntu 16.04 LTS
======

iostat, also known as input/output statistics, is a popular Linux system monitoring tool that can be used to collect statistics of input and output devices. It allows users to identify performance issues with local disks, remote disks, and the system in general. iostat generates three reports: the CPU Utilization report, the Device Utilization report, and the Network Filesystem report.

In this tutorial, we will learn how to install iostat on Ubuntu 16.04 and how to use it.

### Prerequisites

* Ubuntu 16.04 desktop installed on your system.
* Non-root user with sudo privileges set up on your system.

### Install iostat

By default, iostat is included in the sysstat package on Ubuntu 16.04. You can easily install it by running the following command:

```
sudo apt-get install sysstat -y
```

Once sysstat is installed, you can proceed to the next step.
### iostat Basic Example

Let's start by running the iostat command without any arguments. This displays information about the CPU usage and I/O statistics of your system:

```
iostat
```

You should see the following output:

```
Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
 22.67 0.52 6.99 1.88 0.00 67.94

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 15.15 449.15 119.01 771022 204292

```

In the output above, the first line displays the Linux kernel version and hostname. The next two lines display the CPU statistics: the percentage of CPU time spent in user mode, in nice mode, and in system mode, the percentage of time the CPU was idle waiting for I/O, the percentage of time stolen by a hypervisor, and the percentage of time the CPU was fully idle. The last two lines display the device utilization report: transfers per second (tps), kilobytes read and written per second, and total kilobytes read and written since boot.
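When scripting around iostat, it is often handy to pull a single field out of the report. A hedged sketch that extracts `%idle` with awk — run against a saved sample here, since live iostat output varies; on a real system you'd pipe `iostat` in directly:

```shell
# Extract the %idle value (6th field on the line after "avg-cpu:")
# from a saved sample of iostat output. Live usage would be:
#   iostat | awk '/avg-cpu/{getline; print $6}'
sample='avg-cpu: %user %nice %system %iowait %steal %idle
 22.67 0.52 6.99 1.88 0.00 67.94'

idle=$(printf '%s\n' "$sample" | awk '/avg-cpu/{getline; print $6}')
echo "$idle"   # -> 67.94
```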
By default, iostat shows the date only in the report header. If you want a timestamp printed with the report as well, run the following command:

```
iostat -t
```

You should see the following output:

```
Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)

Saturday 16 December 2017 09:44:55 IST
avg-cpu: %user %nice %system %iowait %steal %idle
 21.37 0.31 6.93 1.28 0.00 70.12

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 9.48 267.80 79.69 771022 229424

```
To check the version of iostat, run the following command:

```
iostat -V
```

Output:

```
sysstat version 10.2.0
(C) Sebastien Godard (sysstat <at> orange.fr)

```

You can list all the options available with the iostat command using the following command:

```
iostat --help
```

Output:

```
Usage: iostat [ options ] [ <interval> [ <count> ] ]
Options are:
[ -c ] [ -d ] [ -h ] [ -k | -m ] [ -N ] [ -t ] [ -V ] [ -x ] [ -y ] [ -z ]
[ -j { ID | LABEL | PATH | UUID | ... } ]
[ [ -T ] -g <group_name> ] [ -p [ <device> [,...] | ALL ] ]
[ <device> [...] | ALL ]

```
### iostat Advanced Usage Examples

If you want to view only the device utilization report, run the following command:

```
iostat -d
```

You should see the following output:

```
Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 12.18 353.66 102.44 771022 223320

```

To display the device report three times, at five-second intervals, run:

```
iostat -d 5 3
```

You should see the following output:

```
Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 11.77 340.71 98.95 771022 223928

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 2.00 0.00 8.00 0 40

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.60 0.00 3.20 0 16

```

Note that the first report shows statistics since boot, while each subsequent report covers only the interval since the previous one.
If you want to view the statistics of a specific device, run the following command:

```
iostat -p sda
```

You should see the following output:

```
Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
 21.69 0.36 6.98 1.44 0.00 69.53

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 11.00 316.91 92.38 771022 224744
sda1 0.07 0.27 0.00 664 0
sda2 0.01 0.05 0.00 128 0
sda3 0.07 0.27 0.00 648 0
sda4 10.56 315.21 92.35 766877 224692
sda5 0.12 0.48 0.02 1165 52
sda6 0.07 0.32 0.00 776 0

```
You can also view the statistics of multiple devices with the following command:

```
iostat -p sda,sdb,sdc
```

If you want to display the device I/O statistics in MB/second, run the following command:

```
iostat -m
```

You should see the following output:

```
Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
 21.39 0.31 6.94 1.30 0.00 70.06

Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 9.67 0.27 0.08 752 223

```
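As a sanity check on the units: the MB totals are simply the kB totals from the earlier reports divided by 1024 (integer division), which you can confirm with shell arithmetic:

```shell
# 771022 kB_read from the earlier reports should match the 752 MB_read
# shown by `iostat -m` (integer division by 1024).
kb=771022
mb=$((kb / 1024))
echo "$mb"   # -> 752
```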
If you want to view extended information for a specific partition (sda4), run the following command:

```
iostat -x sda4
```

You should see the following output:

```
Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
 21.26 0.28 6.87 1.19 0.00 70.39

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda4 0.79 4.65 5.71 2.68 242.76 73.28 75.32 0.35 41.80 43.66 37.84 4.55 3.82

```

If you want to display only the CPU usage statistics, run the following command:

```
iostat -c
```

You should see the following output:

```
Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
 21.45 0.33 6.96 1.34 0.00 69.91

```
--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-install-and-use-iostat-on-ubuntu-1604/

作者:[Hitesh Jethva][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
Monitor your Kubernetes Cluster
======

This article originally appeared on [Kevin Monroe's blog][1]

Keeping an eye on logs and metrics is a necessary evil for cluster admins. The benefits are clear: metrics help you set reasonable performance goals, while log analysis can uncover issues that impact your workloads. The hard part, however, is getting a slew of applications to work together in a useful monitoring solution.

In this post, I'll cover monitoring a Kubernetes cluster with [Graylog][2] (for logging) and [Prometheus][3] (for metrics). Of course, it's not as simple as wiring 3 things together. In fact, it'll end up looking like this:

![][4]

As you know, Kubernetes isn't just one thing -- it's a system of masters, workers, networking bits, etc(d). Similarly, Graylog comes with a supporting cast (apache2, mongodb, etc), as does Prometheus (telegraf, grafana, etc). Connecting the dots in a deployment like this may seem daunting, but the right tools can make all the difference.

I'll walk through this using [conjure-up][5] and the [Canonical Distribution of Kubernetes][6] (CDK). I find the conjure-up interface really helpful for deploying big software, but I know some of you hate GUIs and TUIs and probably other UIs too. For those folks, I'll do the same deployment again from the command line.

Before we jump in, note that Graylog and Prometheus will be deployed alongside Kubernetes and not in the cluster itself. Things like the Kubernetes Dashboard and Heapster are excellent sources of information from within a running cluster, but my objective is to provide a mechanism for log/metric analysis whether the cluster is running or not.
### The Walk Through

First things first, install conjure-up if you don't already have it. On Linux, that's simply:

```
sudo snap install conjure-up --classic
```

There's also a brew package for macOS users:

```
brew install conjure-up
```

You'll need at least version 2.5.2 to take advantage of the recent CDK spell additions, so be sure to `sudo snap refresh conjure-up` or `brew update && brew upgrade conjure-up` if you have an older version installed.
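If you're not sure whether your installed version is new enough, a quick hedged check with `sort -V` works. The `have=` line below is a stand-in for whatever your `conjure-up --version` actually reports:

```shell
need=2.5.2
have=2.5.2   # stand-in; on your machine, parse it from: conjure-up --version

# sort -V orders version strings; if the minimum sorts first (or equal),
# the installed version is new enough.
if [ "$(printf '%s\n' "$need" "$have" | sort -V | head -n1)" = "$need" ]; then
  result="version OK"
else
  result="upgrade required"
fi
echo "$result"
```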
Once installed, run it:

```
conjure-up
```

![][7]

You'll be presented with a list of various spells. Select CDK and press `Enter`.

![][8]

At this point, you'll see additional components that are available for the CDK spell. We're interested in Graylog and Prometheus, so check both of those and hit `Continue`.

You'll be guided through various cloud choices to determine where you want your cluster to live. After that, you'll see options for post-deployment steps, followed by a review screen that lets you see what is about to be deployed:

![][9]

In addition to the typical K8s-related applications (etcd, flannel, load-balancer, master, and workers), you'll see additional applications related to our logging and metric selections.

The Graylog stack includes the following:

* apache2: reverse proxy for the graylog web interface
* elasticsearch: document database for the logs
* filebeat: forwards logs from K8s master/workers to graylog
* graylog: provides an api for log collection and an interface for analysis
* mongodb: database for graylog metadata

The Prometheus stack includes the following:

* grafana: web interface for metric-related dashboards
* prometheus: metric collector and time series database
* telegraf: sends host metrics to prometheus

You can fine-tune the deployment from this review screen, but the defaults will suit our needs. Click `Deploy all Remaining Applications` to get things going.

The deployment will take a few minutes to settle as machines are brought online and applications are configured in your cloud. Once complete, conjure-up will show a summary screen that includes links to various interesting endpoints for you to browse:

![][10]
#### Exploring Logs

Now that Graylog has been deployed and configured, let's take a look at some of the data we're gathering. By default, the filebeat application will send both syslog and container log events to graylog (that's `/var/log/*.log` and `/var/log/containers/*.log` from the kubernetes master and workers).

Grab the apache2 address and graylog admin password as follows:

```
juju status --format yaml apache2/0 | grep public-address
    public-address: <your-apache2-ip>
juju run-action --wait graylog/0 show-admin-password
    admin-password: <your-graylog-password>
```

Browse to `http://<your-apache2-ip>` and login with admin as the username and `<your-graylog-password>` as the password. **Note:** if the interface is not immediately available, please wait as the reverse proxy configuration may take up to 5 minutes to complete.

Once logged in, head to the `Sources` tab to get an overview of the logs collected from our K8s master and workers:

![][11]

Drill into those logs by clicking the `System / Inputs` tab and selecting `Show received messages` for the filebeat input:

![][12]

From here, you may want to play around with various filters or set up Graylog dashboards to help identify the events that are most important to you. Check out the [Graylog Dashboard][13] docs for details on customizing your view.
#### Exploring Metrics

Our deployment exposes two types of metrics through our grafana dashboards: system metrics include things like cpu/memory/disk utilization for the K8s master and worker machines, and cluster metrics include container-level data scraped from the K8s cAdvisor endpoints.

Grab the grafana address and admin password as follows:

```
juju status --format yaml grafana/0 | grep public-address
    public-address: <your-grafana-ip>
juju run-action --wait grafana/0 get-admin-password
    password: <your-grafana-password>
```

Browse to `http://<your-grafana-ip>:3000` and login with admin as the username and `<your-grafana-password>` as the password. Once logged in, check out the cluster metric dashboard by clicking the `Home` drop-down box and selecting `Kubernetes Metrics (via Prometheus)`:

![][14]

We can also check out the system metrics of our K8s host machines by switching the drop-down box to `Node Metrics (via Telegraf)`:

![][15]
### The Other Way

As alluded to in the intro, I prefer the wizard-y feel of conjure-up to guide me through complex software deployments like Kubernetes. Now that we've seen the conjure-up way, some of you may want to see a command line approach to achieve the same results. Still others may have deployed CDK previously and want to extend it with the Graylog/Prometheus components described above. Regardless of why you've read this far, I've got you covered.

The tool that underpins conjure-up is [Juju][16]. Everything that the CDK spell did behind the scenes can be done on the command line with Juju. Let's step through how that works.

**Starting From Scratch**

If you're on Linux, install Juju like this:

```
sudo snap install juju --classic
```

For macOS, Juju is available from brew:

```
brew install juju
```

Now set up a controller for your preferred cloud. You may be prompted for any required cloud credentials:

```
juju bootstrap
```

We then need to deploy the base CDK bundle:

```
juju deploy canonical-kubernetes
```
**Starting From CDK**
|
||||
|
||||
With our Kubernetes cluster deployed, we need to add all the applications required for Graylog and Prometheus:
|
||||
```
|
||||
## deploy graylog-related applications
|
||||
juju deploy xenial/apache2
|
||||
juju deploy xenial/elasticsearch
|
||||
juju deploy xenial/filebeat
|
||||
juju deploy xenial/graylog
|
||||
juju deploy xenial/mongodb
|
||||
```
|
||||
```
|
||||
## deploy prometheus-related applications
|
||||
juju deploy xenial/grafana
|
||||
juju deploy xenial/prometheus
|
||||
juju deploy xenial/telegraf
|
||||
```
|
||||
|
||||
Now that the software is deployed, connect them together so they can communicate:
|
||||
```
|
||||
## relate graylog applications
|
||||
juju relate apache2:reverseproxy graylog:website
|
||||
juju relate graylog:elasticsearch elasticsearch:client
|
||||
juju relate graylog:mongodb mongodb:database
|
||||
juju relate filebeat:beats-host kubernetes-master:juju-info
|
||||
juju relate filebeat:beats-host kubernetes-worker:jujuu-info
|
||||
```
|
||||
```
|
||||
## relate prometheus applications
|
||||
juju relate prometheus:grafana-source grafana:grafana-source
|
||||
juju relate telegraf:prometheus-client prometheus:target
|
||||
juju relate kubernetes-master:juju-info telegraf:juju-info
|
||||
juju relate kubernetes-worker:juju-info telegraf:juju-info
|
||||
```
|
||||
|
||||
At this point, all the applications can communicate with each other, but we have a bit more configuration to do (e.g., setting up the apache2 reverse proxy, telling prometheus how to scrape k8s, importing our grafana dashboards, etc):
```
## configure graylog applications
juju config apache2 enable_modules="headers proxy_html proxy_http"
juju config apache2 vhost_http_template="$(base64 <vhost-tmpl>)"
juju config elasticsearch firewall_enabled="false"
juju config filebeat \
  logpath="/var/log/*.log /var/log/containers/*.log"
juju config filebeat logstash_hosts="<graylog-ip>:5044"
juju config graylog elasticsearch_cluster_name="<es-cluster>"
```

```
## configure prometheus applications
juju config prometheus scrape-jobs="<scraper-yaml>"
juju run-action --wait grafana/0 import-dashboard \
  dashboard="$(base64 <dashboard-json>)"
```

Some of the above steps need values specific to your deployment. You can get these in the same way that conjure-up does:

* <vhost-tmpl>: fetch our sample [template][17] from github
* <graylog-ip>: `juju run --unit graylog/0 'unit-get private-address'`
* <es-cluster>: `juju config elasticsearch cluster-name`
* <scraper-yaml>: fetch our sample [scraper][18] from github; [substitute][19] appropriate values for `[K8S_PASSWORD][20]` and `[K8S_API_ENDPOINT][21]`
* <dashboard-json>: fetch our [host][22] and [k8s][23] dashboards from github
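Several of those config values are passed base64-encoded, as in `"$(base64 <vhost-tmpl>)"` above. A quick hedged round trip shows what that produces — `/tmp/vhost.tmpl` here stands in for the real template fetched from github:

```shell
# Stand-in for the vhost template fetched from github.
printf '<VirtualHost *:80>\n' > /tmp/vhost.tmpl

# Encode it the same way the juju config call does, then decode to verify.
encoded=$(base64 /tmp/vhost.tmpl)
printf '%s\n' "$encoded" | base64 -d
```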
Finally, you'll want to expose the apache2 and grafana applications to make their web interfaces accessible:

```
## expose relevant endpoints
juju expose apache2
juju expose grafana
```

Now that we have everything deployed, related, configured, and exposed, you can login and poke around using the same steps from the **Exploring Logs** and **Exploring Metrics** sections above.

### The Wrap Up

My goal here was to show you how to deploy a Kubernetes cluster with rich monitoring capabilities for logs and metrics. Whether you prefer a guided approach or command line steps, I hope it's clear that monitoring complex deployments doesn't have to be a pipe dream. The trick is to figure out how all the moving parts work, make them work together repeatably, and then break/fix/repeat for a while until everyone can use it.

This is where tools like conjure-up and Juju really shine. Leveraging the expertise of contributors to this ecosystem makes it easy to manage big software. Start with a solid set of apps, customize as needed, and get back to work!

Give these bits a try and let me know how it goes. You can find enthusiasts like me on Freenode IRC in **#conjure-up** and **#juju**. Thanks for reading!

### About the author

Kevin joined Canonical in 2014 with his focus set on modeling complex software. He found his niche on the Juju Big Software team where his mission is to capture operational knowledge of Big Data and Machine Learning applications into repeatable (and reliable!) solutions.
--------------------------------------------------------------------------------

via: https://insights.ubuntu.com/2018/01/16/monitor-your-kubernetes-cluster/

作者:[Kevin Monroe][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://insights.ubuntu.com/author/kwmonroe/
[1]:https://medium.com/@kwmonroe/monitor-your-kubernetes-cluster-a856d2603ec3
[2]:https://www.graylog.org/
[3]:https://prometheus.io/
[4]:https://insights.ubuntu.com/wp-content/uploads/706b/1_TAA57DGVDpe9KHIzOirrBA.png
[5]:https://conjure-up.io/
[6]:https://jujucharms.com/canonical-kubernetes
[7]:https://insights.ubuntu.com/wp-content/uploads/98fd/1_o0UmYzYkFiHIs2sBgj7G9A.png
[8]:https://insights.ubuntu.com/wp-content/uploads/0351/1_pgVaO_ZlalrjvYd5pOMJMA.png
[9]:https://insights.ubuntu.com/wp-content/uploads/9977/1_WXKxMlml2DWA5Kj6wW9oXQ.png
[10]:https://insights.ubuntu.com/wp-content/uploads/8588/1_NWq7u6g6UAzyFxtbM-ipqg.png
[11]:https://insights.ubuntu.com/wp-content/uploads/a1c3/1_hHK5mSrRJQi6A6u0yPSGOA.png
[12]:https://insights.ubuntu.com/wp-content/uploads/937f/1_cP36lpmSwlsPXJyDUpFluQ.png
[13]:http://docs.graylog.org/en/2.3/pages/dashboards.html
[14]:https://insights.ubuntu.com/wp-content/uploads/9256/1_kskust3AOImIh18QxQPgRw.png
[15]:https://insights.ubuntu.com/wp-content/uploads/2037/1_qJpjPOTGMQbjFY5-cZsYrQ.png
[16]:https://jujucharms.com/
[17]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/graylog/steps/01_install-graylog/graylog-vhost.tmpl
[18]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/prometheus-scrape-k8s.yaml
[19]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L25
[20]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L10
[21]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L11
[22]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-telegraf.json
[23]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-k8s.json
Why building a community is worth the extra effort
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_brandbalance.png?itok=XSQ1OU16)
|
||||
|
||||
When we launched [Nethesis][1] in 2003, we were just system integrators. We only used existing open source projects. Our business model was clear: Add multiple forms of value to those projects: know-how, documentation for the Italian market, extra modules, professional support, and training courses. We gave back to upstream projects as well, through upstream code contributions and by participating in their communities.
|
||||
|
||||
Times were different then. We couldn't use the term "open source" too loudly. People associated it with words like "nerdy," "no value," and, worst of all, "free." Not too good for a business.
|
||||
|
||||
On a Saturday in 2010, with pastries and espresso in hand, the Nethesis staff were discussing how to move things forward (hey, we like to eat and drink while we innovate!). In spite of the momentum working against us, we decided not to change course. In fact, we decided to push harder--to make open source, and an open way of working, a successful model for running a business.
|
||||
|
||||
Over the years, we've proven that model's potential. And one thing has been key to our success: community.
|
||||
|
||||
In this three-part series, I'll explain the important role community plays in an open organization's existence. I'll explore why an organization would want to build a community, and discuss how to build one--because I really do believe it's the best way to generate new innovations today.
|
||||
|
||||
### The crazy idea
|
||||
|
||||
Together with the Nethesis guys, we decided to build our own open source project: our own operating system, built on top of CentOS (because we didn't want to reinvent the wheel). We assumed that we had the experience, know-how, and workforce to achieve it. We felt brave.
|
||||
|
||||
And we very much wanted to build an operating system called [NethServer][2] with one mission: making a sysadmin's life easier with open source. We knew we could create a Linux distribution for a server that would be more accessible, easier to adopt, and simpler to understand than anything currently offered.
|
||||
|
||||
Above all, though, we decided to create a real, 100% open project with three primary rules:
|
||||
|
||||
* completely free to download,
|
||||
* openly developed, and
|
||||
* community-driven
|
||||
|
||||
|
||||
|
||||
That last one is important. We were a company; we were able to develop it by ourselves. We would have been more effective (and made quicker decisions) if we'd done the work in-house. It would be so simple, like any other company in Italy.
|
||||
|
||||
But we were so deeply into open source culture that we chose a different path.
|
||||
|
||||
We really wanted as many people as possible around us, around the product, and around the company. We wanted as many perspectives on the work as possible. We realized: Alone, you can go fast--but if you want to go far, you need to go together.
|
||||
|
||||
So we decided to build a community instead.
|
||||
|
||||
### What next?
|
||||
|
||||
We realized that creating a community has so many benefits. For example, if the people who use your product are really involved in the project, they will provide feedback and use cases, write documentation, catch bugs, compare with other products, suggest features, and contribute to development. All of this generates innovations, attracts contributors and customers, and expands your product's user base.
|
||||
|
||||
But quickly the question arose: How can we build a community? We didn't know how to achieve that. We'd participated in many communities, but we'd never built one.
|
||||
|
||||
We were good at code--not with people. And we were a company, an organization with very specific priorities. So how were we going to build a community and foster a good relationship between the company and the community itself?
|
||||
|
||||
We did the first thing you have to do: study. We learned from experts, blogs, and lots of books. We experimented. We failed many times, collected data from the outcomes, and tested them again.
|
||||
|
||||
Eventually we learned the golden rule of community management: There is no golden rule of community management.
|
||||
|
||||
People are too complex and communities are too different to have one rule "to rule them all."
|
||||
|
||||
One thing I can say, however, is that a healthy relationship between a community and a company is always a process of give and take. In my next article, I'll discuss what your organization should expect to give if it wants a flourishing and innovative community.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/18/1/why-build-community-1
|
||||
|
||||
作者:[Alessio Fattorini][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/alefattorini
|
||||
[1]:http://www.nethesis.it/
|
||||
[2]:http://www.nethserver.org/
|
125
sources/tech/20180117 Avoiding Server Disaster.md
Normal file
@ -0,0 +1,125 @@
|
||||
Avoiding Server Disaster
|
||||
======
|
||||
|
||||
Worried that your server will go down? You should be. Here are some disaster-planning tips for server owners.
|
||||
|
||||
If you own a car or a house, you almost certainly have insurance. Insurance seems like a huge waste of money. You pay it every year and make sure that you get the best possible price for the best possible coverage, and then you hope you never need to use the insurance. Insurance seems like a really bad deal—until you have a disaster and realize that had it not been for the insurance, you might have been in financial ruin.
|
||||
|
||||
Unfortunately, disasters and mishaps are a fact of life in the computer industry. And so, just as you pay insurance and hope never to have to use it, you also need to take time to ensure the safety and reliability of your systems—not because you want disasters to happen, or even expect them to occur, but rather because you have to.
|
||||
|
||||
If your website is an online brochure for your company and then goes down for a few hours or even days, it'll be embarrassing and annoying, but not financially painful. But, if your website is your business, when your site goes down, you're losing money. If that's the case, it's crucial to ensure that your server and software are not only unlikely to go down, but also easily recoverable if and when that happens.
|
||||
|
||||
Why am I writing about this subject? Well, let's just say that this particular problem hit close to home for me, just before I started to write this article. After years of helping clients around the world to ensure the reliability of their systems, I made the mistake of not being as thorough with my own. ("The shoemaker's children go barefoot", as the saying goes.) This means that just after launching my new online product for Python developers, a seemingly trivial upgrade turned into a disaster. The precautions I put in place, it turns out, weren't quite enough—and as I write this, I'm still putting my web server together. I'll survive, as will my server and business, but this has been a painful and important lesson—one that I'll do almost anything to avoid repeating in the future.
|
||||
|
||||
So in this article, I describe a number of techniques I've used to keep servers safe and sound through the years, and to reduce the chances of a complete meltdown. You can think of these techniques as insurance for your server, so that even if something does go wrong, you'll be able to recover fairly quickly.
|
||||
|
||||
I should note that most of the advice here assumes no redundancy in your architecture—that is, a single web server and (at most) a single database server. If you can afford to have a bunch of servers of each type, these sorts of problems tend to be much less frequent. However, that doesn't mean they go away entirely. Besides, although people like to talk about heavy-duty web applications that require massive iron in order to run, the fact is that many businesses run on small, one- and two-computer servers. Moreover, those businesses don't need more than that; the ROI (return on investment) they'll get from additional servers cannot be justified. However, the ROI from a good backup and recovery plan is huge, and thus worth the investment.
|
||||
|
||||
### The Parts of a Web Application
|
||||
|
||||
Before I can talk about disaster preparation and recovery, it's important to consider the different parts of a web application and what those various parts mean for your planning.
|
||||
|
||||
For many years, my website was trivially small and simple. Even if it contained some simple programs, those generally were used for sending email or for dynamically displaying different assets to visitors. The entire site consisted of some static HTML, images, JavaScript and CSS. No database or other excitement was necessary.
|
||||
|
||||
At the other end of the spectrum, many people have full-blown web applications, sitting on multiple servers, with one or more databases and caches, as well as HTTP servers with extensively edited configuration files.
|
||||
|
||||
But even when considering those two extremes, you can see that a web application consists of only a few parts:
|
||||
|
||||
* The application software itself.
|
||||
|
||||
* Static assets for that application.
|
||||
|
||||
* Configuration file(s) for the HTTP server(s).
|
||||
|
||||
* Database configuration files.
|
||||
|
||||
* Database schema and contents.
|
||||
|
||||
Assuming that you're using a high-level language, such as Python, Ruby or JavaScript, everything in this list either is a file or can be turned into one. (All databases make it possible to "dump" their contents onto disk, into a format that then can be loaded back into the database server.)
|
||||
|
||||
Consider a site containing only application software, static assets and configuration files. (In other words, no database is involved.) In many cases, such a site can be backed up reliably in Git. Indeed, I prefer to keep my sites in Git, backed up on a commercial hosting service, such as GitHub or Bitbucket, and then deployed using a system like Capistrano.
|
||||
|
||||
In other words, you develop the site on your own development machine. Whenever you are happy with a change that you've made, you commit the change to Git (on your local machine) and then do a git push to your central repository. In order to deploy your application, you then use Capistrano to do a cap deploy, which reads the data from the central repository, puts it into the appropriate place on the server's filesystem, and you're good to go.
|
||||
|
||||
This system keeps you safe in a few different ways. The code itself is located in at least three locations: your development machine, the server and the repository. And those central repositories tend to be fairly reliable, if only because it's in the financial interest of the hosting company to ensure that things are reliable.
|
||||
|
||||
I should add that in such a case, you also should include the HTTP server's configuration files in your Git repository. Those files aren't likely to change very often, but I can tell you from experience, if you're recovering from a crisis, the last thing you want to think about is how your Apache configuration files should look. Copying those files into your Git repository will work just fine.
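As a sketch of that practice (the repository, paths, and config contents below are hypothetical stand-ins for your real setup):

```shell
#!/bin/sh
# Sketch: keep the HTTP server's configuration under version control.
# A scratch repository stands in for your real one.
REPO=$(mktemp -d)
git -C "$REPO" init -q
mkdir -p "$REPO/config"
# In practice you'd copy /etc/apache2/sites-available/*.conf (or similar) here.
echo "ServerName example.com" > "$REPO/config/apache.conf"
git -C "$REPO" add config/apache.conf
git -C "$REPO" -c user.email=you@example.com -c user.name=You \
    commit -qm "Track Apache config alongside the site"
```

Once the file is committed and pushed, recovering from a crisis is a `git clone` plus a copy back into place.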
|
||||
|
||||
### Backing Up Databases
|
||||
|
||||
You could argue that the difference between a "website" and a "web application" is a database. Databases long have powered the back ends of many web applications and for good reason—they allow you to store and retrieve data reliably and flexibly. The power that modern open-source databases provide was unthinkable just a decade or two ago, and there's no reason to think that they'll be any less reliable in the future.
|
||||
|
||||
And yet, just because your database is pretty reliable doesn't mean that it won't have problems. This means you're going to want to keep a snapshot ("dump") of the database's contents around, in case the database server corrupts information, and you need to roll back to a previous version.
|
||||
|
||||
My favorite solution for such a problem is to dump the database on a regular basis, preferably hourly. Here's a shell script I've used, in one form or another, for creating such regular database dumps:
|
||||
|
||||
```
|
||||
|
||||
#!/bin/sh
|
||||
|
||||
BACKUP_ROOT="/home/database-backups/"
|
||||
YEAR=`/bin/date +'%Y'`
|
||||
MONTH=`/bin/date +'%m'`
|
||||
DAY=`/bin/date +'%d'`
|
||||
|
||||
DIRECTORY="$BACKUP_ROOT/$YEAR/$MONTH/$DAY"
|
||||
USERNAME=dbuser
|
||||
DATABASE=dbname
|
||||
HOST=localhost
|
||||
PORT=3306
|
||||
|
||||
/bin/mkdir -p $DIRECTORY
|
||||
|
||||
/usr/bin/mysqldump -h $HOST --databases $DATABASE -u $USERNAME \
    | /bin/gzip --best --verbose > $DIRECTORY/$DATABASE-dump.gz
|
||||
|
||||
```
|
||||
|
||||
The above shell script starts off by defining a bunch of variables, from the directory in which I want to store the backups, to the parts of the date (stored in $YEAR, $MONTH and $DAY). This is so I can have a separate directory for each day of the month. I could, of course, go further and have separate directories for each hour, but I've found that I rarely need more than one backup from a day.
|
||||
|
||||
Once I have defined those variables, I then use the mkdir command to create a new directory. The -p option tells mkdir that if necessary, it should create all of the directories it needs such that the entire path will exist.
|
||||
|
||||
Finally, I then run the database's "dump" command. In this particular case, I'm using MySQL, so I'm using the mysqldump command. The output from this command is a stream of SQL that can be used to re-create the database. I thus take the output from mysqldump and pipe it into gzip, which compresses the output file. Finally, the resulting dumpfile is placed, in compressed form, inside the daily backup directory.
|
||||
|
||||
Depending on the size of your database and the amount of disk space you have on hand, you'll have to decide just how often you want to run dumps and how often you want to clean out old ones. I know from experience that dumping every hour can cause some load problems. On one virtual machine I've used, the overall administration team was unhappy that I was dumping and compressing every hour, which they saw as an unnecessary use of system resources.
|
||||
|
||||
If you're worried your system will run out of disk space, you might well want to run a space-checking program that'll alert you when the filesystem is low on free space. In addition, you can run a cron job that uses find to erase all dumpfiles from before a certain cutoff date. I'm always a bit nervous about programs that automatically erase backups, so I generally prefer not to do this. Rather, I run a program that warns me if the disk usage is going above 85% (which is usually low enough to ensure that I can fix the problem in time, even if I'm on a long flight). Then I can go in and remove the problematic files by hand.
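As a sketch of such a cleanup pass (the paths are hypothetical, and a scratch directory stands in for the real backup tree):

```shell
#!/bin/sh
# Sketch: prune dump files older than 30 days.
BACKUP_ROOT=$(mktemp -d)                            # stand-in for /home/database-backups
touch -d '40 days ago' "$BACKUP_ROOT/old-dump.gz"   # stand-in for an old dump
touch "$BACKUP_ROOT/new-dump.gz"                    # stand-in for a fresh dump
# -mtime +30 matches files modified more than 30 days ago.
# Swap -print for -delete once you trust the selection.
find "$BACKUP_ROOT" -name '*-dump.gz' -mtime +30 -print
```

Running the selection with `-print` first, and only later with `-delete`, matches the caution described above about programs that automatically erase backups.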
|
||||
|
||||
When you back up your database, you should be sure to back up the configuration for that database as well. The database schema and data, which are part of the dumpfile, are certainly important. However, if you find yourself having to re-create your server from scratch, you'll want to know precisely how you configured the database server, with a particular emphasis on the filesystem configuration and memory allocations. I tend to use PostgreSQL for most of my work, and although postgresql.conf is simple to understand and configure, I still like to keep it around with my dumpfiles.
|
||||
|
||||
Another crucial thing to do is to check your database dumps occasionally to be sure that they are working the way you want. It turns out that the backups I thought I was making weren't actually happening, in no small part because I had modified the shell script and hadn't double-checked that it was creating useful backups. Occasionally pulling out one of your dumpfiles and restoring it to a separate (and offline!) database to check its integrity is a good practice, both to ensure that the dump is working and that you remember how to restore it in the case of an emergency.
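A quick integrity check is cheap insurance before a full restore. As a sketch (the dump here is a stand-in generated on the spot; point `DUMP` at one of your real files):

```shell
#!/bin/sh
# Sketch: verify a compressed dump is intact before attempting a restore.
DUMP="$(mktemp -d)/dbname-dump.gz"                  # stand-in dumpfile
echo "CREATE TABLE demo (id INT);" | gzip --best > "$DUMP"
if gzip --test "$DUMP"; then
    echo "dump OK: $DUMP"
else
    echo "dump CORRUPT: $DUMP" >&2
fi
```

Note that `gzip --test` only proves the file decompresses; the periodic restore into a separate, offline database described above is still the real test.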
|
||||
|
||||
### Storing Backups
|
||||
|
||||
But wait. It might be great to have these backups, but what if the server goes down entirely? In the case of the code, I mentioned that you should ensure it's located on more than one machine, protecting its integrity. By contrast, your database dumps are now on the server, such that if the server fails, your database dumps will be inaccessible.
|
||||
|
||||
This means you'll want to have your database dumps stored elsewhere, preferably automatically. How can you do that?
|
||||
|
||||
There are a few relatively easy and inexpensive solutions to this problem. If you have two servers—ideally in separate physical locations—you can use rsync to copy the files from one to the other. Don't rsync the database's actual files, since those might get corrupted in transfer and aren't designed to be copied when the server is running. By contrast, the dumpfiles that you have created are more than able to go elsewhere. Setting up a remote server, with a user specifically for handling these backup transfers, shouldn't be too hard and will go a long way toward ensuring the safety of your data.
|
||||
|
||||
I should note that using rsync in this way basically requires that you set up passwordless SSH, so that you can transfer without having to be physically present to enter the password.
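Setting that up might look like the following sketch (the key path, user, and host are hypothetical):

```shell
#!/bin/sh
# Sketch: create a dedicated key for unattended backup transfers.
KEYDIR=$(mktemp -d)                   # stand-in for ~/.ssh
ssh-keygen -q -t ed25519 -f "$KEYDIR/backup_key" -N ""
# Install the public key on the backup host (run once, interactively):
#   ssh-copy-id -i "$KEYDIR/backup_key.pub" backup@backup.example.com
```

Using a dedicated key (rather than your everyday one) lets you restrict what the backup user may do on the remote side.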
|
||||
|
||||
Another possible solution is Amazon's Simple Storage Service (S3), which offers astonishing amounts of disk space at very low prices. I know that many companies use S3 as a simple (albeit slow) backup system. You can set up a cron job to run a program that copies the contents of a particular database dumpfile directory onto a particular server. The assumption here is that you're not ever going to use these backups, meaning that S3's slow searching and access will not be an issue once you're working on the server.
|
||||
|
||||
Similarly, you might consider using Dropbox. Dropbox is best known for its desktop client, but it has a "headless", text-based client that can be used on Linux servers without a GUI connected. One nice advantage of Dropbox is that you can share a folder with any number of people, which means you can have Dropbox distribute your backup databases everywhere automatically, including to a number of people on your team. The backups arrive in their Dropbox folders, and your data ends up with that much more redundancy.
|
||||
|
||||
Finally, if you're running a WordPress site, you might want to consider VaultPress, a for-pay backup system. I must admit that in the weeks before I took my server down with a database backup error, I kept seeing ads in WordPress for VaultPress. "Who would buy that?", I asked myself, thinking that I'm smart enough to do backups myself. Of course, after disaster occurred and my database was ruined, I realized that $30/year to back up all of my data is cheap, and I should have done it before.
|
||||
|
||||
### Conclusion
|
||||
|
||||
When it comes to your servers, think less like an optimistic programmer and more like an insurance agent. Perhaps disaster won't strike, but if it does, will you be able to recover? Making sure that even if your server is completely unavailable, you'll be able to bring up your program and any associated database is crucial.
|
||||
|
||||
My preferred solution involves combining a Git repository for code and configuration files, distributed across several machines and services. For the databases, however, it's not enough to dump your database; you'll need to get that dump onto a separate machine, and preferably test the backup file on a regular basis. That way, even if things go wrong, you'll be able to get back up in no time.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxjournal.com/content/avoiding-server-disaster
|
||||
|
||||
作者:[Reuven M.Lerner][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxjournal.com/user/1000891
|
@ -0,0 +1,130 @@
|
||||
Linux tee Command Explained for Beginners (6 Examples)
|
||||
======
|
||||
|
||||
There are times when you want to manually track output of a command and also simultaneously make sure the output is being written to a file so that you can refer to it later. If you are looking for a Linux tool which can do this for you, you'll be glad to know there exists a command **tee** that's built for this purpose.
|
||||
|
||||
In this tutorial, we will discuss the basics of the tee command using some easy to understand examples. But before we do that, it's worth mentioning that all examples used in this article have been tested on Ubuntu 16.04 LTS.
|
||||
|
||||
### Linux tee command
|
||||
|
||||
The tee command basically reads from the standard input and writes to standard output and files. Following is the syntax of the command:
|
||||
|
||||
```
|
||||
tee [OPTION]... [FILE]...
|
||||
```
|
||||
|
||||
And here's how the man page explains it:
|
||||
```
|
||||
Copy standard input to each FILE, and also to standard output.
|
||||
```
|
||||
|
||||
The following Q&A-styled examples should give you a better idea on how the command works.
|
||||
|
||||
### Q1. How to use tee command in Linux?
|
||||
|
||||
Suppose you are using the ping command for some reason.
|
||||
|
||||
```
ping google.com
```
|
||||
|
||||
[![How to use tee command in Linux][1]][2]
|
||||
|
||||
And what you want, is that the output should also get written to a file in parallel. Then here's where you can use the tee command.
|
||||
|
||||
```
|
||||
ping google.com | tee output.txt
|
||||
```
|
||||
|
||||
The following screenshot shows the output was written to the 'output.txt' file along with being written on stdout.
|
||||
|
||||
[![tee command output][3]][4]
|
||||
|
||||
So that should make the basic usage of tee clear.
|
||||
|
||||
### Q2. How to make sure tee appends information in files?
|
||||
|
||||
By default, the tee command overwrites information in a file when used again. However, if you want, you can change this behavior by using the -a command line option.
|
||||
|
||||
```
|
||||
[command] | tee -a [file]
|
||||
```
|
||||
|
||||
So basically, the -a option forces tee to append information to the file.
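A quick round trip makes the difference concrete (the file path below is an arbitrary scratch location):

```shell
#!/bin/sh
# Sketch: the default tee overwrites; -a appends.
LOG="$(mktemp -d)/tee-demo.txt"
echo "first run"  | tee "$LOG"    > /dev/null
echo "second run" | tee -a "$LOG" > /dev/null
cat "$LOG"   # both lines survive because the second tee used -a
```

Run the second command without `-a` instead, and only "second run" remains in the file.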
|
||||
|
||||
### Q3. How to make tee write to multiple files?
|
||||
|
||||
That's pretty easy. You just have to mention their names.
|
||||
|
||||
```
|
||||
[command] | tee [file1] [file2] [file3]
|
||||
```
|
||||
|
||||
For example:
|
||||
|
||||
```
|
||||
ping google.com | tee output1.txt output2.txt output3.txt
|
||||
```
|
||||
|
||||
[![How to make tee write to multiple files][5]][6]
|
||||
|
||||
### Q4. How to make tee redirect output of one command to another?
|
||||
|
||||
You can not only use tee to simultaneously write output to files, but also to pass on the output as input to other commands. For example, the following command will not only store the filenames in 'output.txt' but also let you know - through wc - the number of entries in the output.txt file.
|
||||
|
||||
```
|
||||
ls file* | tee output.txt | wc -l
|
||||
```
|
||||
|
||||
[![How to make tee redirect output of one command to another][7]][8]
|
||||
|
||||
### Q5. How to write to a file with elevated privileges using tee?
|
||||
|
||||
Suppose you opened a file in the [Vim editor][9], made a lot of changes, and then when you tried saving those changes, you got an error that made you realize that it's a root-owned file, meaning you need to have sudo privileges to save these changes.
|
||||
|
||||
[![How to write to a file with elevated privileges using tee][10]][11]
|
||||
|
||||
In scenarios like these, you can use tee to elevate privileges on the go.
|
||||
|
||||
```
|
||||
:w !sudo tee %
|
||||
```
|
||||
|
||||
The aforementioned command will ask you for the root password, and then let you save the changes.
|
||||
|
||||
### Q6. How to make tee ignore interrupt?
|
||||
|
||||
The -i command line option enables tee to ignore the interrupt signal (`SIGINT`), which is usually issued when you press the ctrl+c key combination.
|
||||
|
||||
```
|
||||
[command] | tee -i [file]
|
||||
```
|
||||
|
||||
This is useful when you want to kill the command with ctrl+c but want tee to exit gracefully.
|
||||
|
||||
### Conclusion
|
||||
|
||||
You'll likely agree now that tee is an extremely useful command. We've discussed its basic usage as well as the majority of its command line options here. The tool doesn't have a steep learning curve, so just practice all these examples, and you should be good to go. For more information, head to the tool's [man page][12].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/linux-tee-command/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com
|
||||
[1]:https://www.howtoforge.com/images/command-tutorial/ping-example.png
|
||||
[2]:https://www.howtoforge.com/images/command-tutorial/big/ping-example.png
|
||||
[3]:https://www.howtoforge.com/images/command-tutorial/ping-with-tee.png
|
||||
[4]:https://www.howtoforge.com/images/command-tutorial/big/ping-with-tee.png
|
||||
[5]:https://www.howtoforge.com/images/command-tutorial/tee-mult-files1.png
|
||||
[6]:https://www.howtoforge.com/images/command-tutorial/big/tee-mult-files1.png
|
||||
[7]:https://www.howtoforge.com/images/command-tutorial/tee-redirect-output.png
|
||||
[8]:https://www.howtoforge.com/images/command-tutorial/big/tee-redirect-output.png
|
||||
[9]:https://www.howtoforge.com/vim-basics
|
||||
[10]:https://www.howtoforge.com/images/command-tutorial/vim-write-error.png
|
||||
[11]:https://www.howtoforge.com/images/command-tutorial/big/vim-write-error.png
|
||||
[12]:https://linux.die.net/man/1/tee
|
@ -0,0 +1,81 @@
|
||||
AI 和机器中暗含的算法偏见是怎样形成的,我们又能通过开源社区做些什么
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_goodbadugly.png?itok=ZxaimUWU)
|
||||
|
||||
图片来源:opensource.com
|
||||
|
||||
在我们的世界里,算法无处不在,偏见也是一样。从社交媒体新闻的推送,到流媒体服务的推荐,再到线上购物,计算机算法,尤其是机器学习算法,已经渗透到我们日常生活的每一个角落。至于偏见,我们只需要回顾 2016 年美国大选,就可以知道偏见是怎样在明处与暗处影响着我们的社会。
|
||||
|
||||
但我们经常忽略的,是这二者的交集:计算机算法中存在的偏见。
|
||||
|
||||
与我们大多数人所认为的相反,科技并不是客观的。AI 算法和它们的决策程序是由它们的研发者塑造的,他们写入的代码、使用的“[训练][1]”数据,还有他们对算法进行[应力测试][2]的过程,都会影响这些算法今后的选择。这意味着研发者的价值观、偏见和人类缺陷都会反映在软件上。如果我只给实验室中的人脸识别算法提供白人的照片,当遇到不是白人的照片时,它就[不会认为照片中的是人类][3]。这个结论并不意味着 AI 是“愚蠢的”或是“天真的”,它反映的是训练数据的分布偏差:缺乏多样的脸部照片。这会引来非常严重的后果。
|
||||
|
||||
这样的例子并不少。全美范围内的[州法院系统][4]都使用“黑箱”算法对罪犯进行量刑。由于训练数据的问题,[这些算法对黑人有偏见][5],它们会为黑人罪犯选择更长的服刑期,因此监狱中的种族差异会一直存在。而这些都发生在科技的客观性伪装之下,仿佛这是“科学的”选择。
|
||||
|
||||
美国联邦政府使用机器学习算法来计算福利性支出和各类政府补贴。[但这些算法中的信息][6],例如它们的创造者和训练信息,都很难找到。这增加了政府工作人员进行不平等补助金分发操作的几率。
|
||||
|
||||
算法偏见的情况还不止这些。从 Facebook 的新闻算法到医疗系统再到警用相机,我们作为社会的一部分,极有可能向这些算法输入各式各样的偏见:性别歧视、仇外思想、社会经济地位歧视、确认偏误等等。这些被输入了偏见的机器会被大量生产和部署,将种种社会偏见潜藏于科技客观性的面纱之下。
|
||||
|
||||
这种状况绝对不能再继续下去了。
|
||||
|
||||
在我们对人工智能进行不断开发研究的同时,需要降低它的开发速度,小心仔细地开发。算法偏见的危害已经足够大了。
|
||||
|
||||
## 我们能怎样减少算法偏见?
|
||||
|
||||
最好的方式是从算法训练的数据开始审查,根据 [Microsoft 的研究者][2] 所说,这方法很有效。
|
||||
|
||||
数据分布本身就可能带有偏见。编程者手中的美国公民数据分布并不均衡:本地居民的数据多于移民者,富人的数据多于穷人,这是极有可能出现的情况。这种数据的不均衡会使 AI 对我们社会的组成得出错误的结论,例如机器学习算法仅仅通过统计分析,就得出“大多数美国人都是富有的白人”这个结论。
|
||||
|
||||
即使男性和女性的样本在训练数据中等量分布,也可能出现偏见的结果。如果训练数据中所有男性的职业都是 CEO,而所有女性的职业都是秘书(即使现实中男性 CEO 的数量要多于女性),AI 也可能得出女性天生不适合做 CEO 的结论。
|
||||
|
||||
同样的,大量研究表明,用于执法部门的 AI 在检测新闻中出现的罪犯照片时,结果会 [惊人地偏向][7] 黑人及拉丁美洲裔居民。
|
||||
|
||||
在训练数据中存在的偏见还有很多其他形式,不幸的是比这里提到的要多得多。但是训练数据只是审查方式的一种,通过“应力测验”找出人类存在的偏见也同样重要。
|
||||
|
||||
如果提供一张印度人的照片,我们自己的相机能够识别吗?在两名同样水平的应聘者中,我们的 AI 是否会倾向于推荐住在市区的应聘者呢?对于情报中本地白人恐怖分子和伊拉克籍恐怖分子,反恐算法会怎样选择呢?急诊室的相机可以调出儿童的病历吗?
|
||||
|
||||
这些对于 AI 来说是十分复杂的数据,但我们可以通过多项测试对它们进行定义和传达。
|
||||
|
||||
## 为什么开源很适合这项任务?
|
||||
|
||||
开源方法和开源技术都有着极大的潜力改变算法偏见。
|
||||
|
||||
现代人工智能已经被开源软件占领,TensorFlow、IBM Watson 还有 [scikit-learn][8] 这类的程序包都是开源软件。开源社区已经证明它能够开发出强健的,经得住严酷测试的机器学习工具。同样的,我相信,开源社区也能开发出消除偏见的测试程序,并将其应用于这些软件中。
|
||||
|
||||
调试工具如哥伦比亚大学和理海大学推出的 [DeepXplore][9],增强了 AI 应力测试的强度,同时提高了其操控性。还有 [麻省理工学院的计算机科学和人工智能实验室][10]完成的项目,它开发出敏捷快速的样机研究软件,这些应该会被开源社区采纳。
|
||||
|
||||
开源技术也已经证明了其在审查和分类大规模数据方面的能力,最明显的体现是开源工具在数据分析市场的占有率(Weka、RapidMiner 等)。应当由开源社区来设计识别数据偏见的工具,已经在网上发布的大量训练数据集(比如 [Kaggle][11])也应当使用这种技术进行识别筛选。
|
||||
|
||||
开源方法本身十分适合消除偏见程序的设计。内部谈话,私人软件开发及非民主的决策制定引起了很多问题。开源社区能够进行软件公开的谈话,进行大众化,维持好与大众的关系,这对于处理以上问题是十分重要的。如果线上社团,组织和院校能够接受这些开源特质,那么由开源社区进行消除算法偏见的机器设计也会顺利很多。
|
||||
|
||||
## 我们怎样才能够参与其中?
|
||||
|
||||
教育是一个很重要的环节。我们身边有很多人还没意识到算法偏见的存在,但算法偏见在立法、社会公正、政策及更多领域产生的影响与他们息息相关。让这些人知道算法偏见是怎样形成的,以及它会带来哪些重要影响,是很关键的,因为想要改变目前的局面,从我们自身做起是唯一的方法。
|
||||
|
||||
对于我们中间那些与人工智能一起工作的人来说,这种沟通尤其重要。不论是人工智能的研发者,警方或是科研人员,当他们为今后设计人工智能时,应当格外意识到现今这种偏见存在的危险性,很明显,想要消除人工智能中存在的偏见,就要从意识到偏见的存在开始。
|
||||
|
||||
最后,我们需要围绕 AI 伦理化建立并加强开源社区。不论是需要建立应力实验训练模型,软件工具,或是从千兆字节的训练数据中筛选,现在已经到了我们利用开源方法来应对数字化时代最大的威胁的时间了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/how-open-source-can-fight-algorithmic-bias
|
||||
|
||||
作者:[Justin Sherman][a]
|
||||
译者:[Valoniakim](https://github.com/Valoniakim)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/justinsherman
|
||||
[1]:https://www.crowdflower.com/what-is-training-data/
|
||||
[2]:https://medium.com/microsoft-design/how-to-recognize-exclusion-in-ai-ec2d6d89f850
|
||||
[3]:https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms
|
||||
[4]:https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/
|
||||
[5]:https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
|
||||
[6]:https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3012499
|
||||
[7]:https://www.hivlawandpolicy.org/sites/default/files/Race%20and%20Punishment-%20Racial%20Perceptions%20of%20Crime%20and%20Support%20for%20Punitive%20Policies%20%282014%29.pdf
|
||||
[8]:http://scikit-learn.org/stable/
|
||||
[9]:https://arxiv.org/pdf/1705.06640.pdf
|
||||
[10]:https://www.csail.mit.edu/research/understandable-deep-networks
|
||||
[11]:https://www.kaggle.com/datasets
|
108
translated/tech/20140210 Three steps to learning GDB.md
Normal file
@ -0,0 +1,108 @@
|
||||
# 三步上手GDB
|
||||
|
||||
调试 C 程序曾让我很困扰。然而之前在写我的[操作系统][2]时,我有很多 bug 需要调试。我很幸运地用上了 qemu 模拟器,它允许我将调试器附加到我的操作系统上。这个调试器就是 `gdb`。
|
||||
|
||||
我想先讲讲如何用 `gdb` 做一些小事情,因为我初学它的时候觉得真的很混乱。接下来我们会在一个小程序中设置断点、查看内存。
|
||||
|
||||
### 1. 设断点
|
||||
|
||||
如果你曾经使用过调试器,那你可能已经会设置断点了。
|
||||
|
||||
下面是一个我们要调试的程序(虽然没有任何 bug):
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
void do_thing() {
|
||||
printf("Hi!\n");
|
||||
}
|
||||
int main() {
|
||||
do_thing();
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
另存为 `hello.c`。我们可以使用 gdb 调试它,像这样:
|
||||
|
||||
```
|
||||
bork@kiwi ~> gcc -g hello.c -o hello
|
||||
bork@kiwi ~> gdb ./hello
|
||||
```
|
||||
|
||||
上面的命令以带调试信息的方式编译了 `hello.c`(这样 gdb 才能更好地工作),然后 gdb 会给我们一个提示符,就像这样:
|
||||
`(gdb)`
|
||||
|
||||
我们可以使用`break`命令设置断点,然后使用`run`开始调试程序。
|
||||
|
||||
```
|
||||
(gdb) break do_thing
|
||||
Breakpoint 1 at 0x4004f8
|
||||
(gdb) run
|
||||
Starting program: /home/bork/hello
|
||||
|
||||
Breakpoint 1, 0x00000000004004f8 in do_thing ()
|
||||
```
|
||||
程序暂停在了`do_thing`开始的地方。
|
||||
|
||||
我们可以通过`where`查看我们所在的调用栈。
|
||||
```
|
||||
(gdb) where
|
||||
#0 do_thing () at hello.c:3
|
||||
#1 0x08050cdb in main () at hello.c:6
|
||||
(gdb)
|
||||
```
|
||||
|
||||
### 2. 阅读汇编代码
|
||||
|
||||
使用 `disassemble` 命令,我们可以看到这个函数的汇编代码。棒极了。这是 x86 汇编代码。虽然我不是很懂它,但是 `callq` 这一行是对 `printf` 的函数调用。
|
||||
|
||||
```
|
||||
(gdb) disassemble do_thing
|
||||
Dump of assembler code for function do_thing:
|
||||
0x00000000004004f4 <+0>: push %rbp
|
||||
0x00000000004004f5 <+1>: mov %rsp,%rbp
|
||||
=> 0x00000000004004f8 <+4>: mov $0x40060c,%edi
|
||||
0x00000000004004fd <+9>: callq 0x4003f0
|
||||
0x0000000000400502 <+14>: pop %rbp
|
||||
0x0000000000400503 <+15>: retq
|
||||
```
|
||||
|
||||
你也可以使用 `disassemble` 的缩写 `disas`。
|
||||
|
||||
### 3. 查看内存
|
||||
|
||||
调试内核时,我使用 `gdb` 的主要目的就是确保内存布局和我所想的一样。检查内存的命令是 `examine`,或者使用缩写 `x`。下面我们将使用 `x`。
|
||||
|
||||
通过阅读上面的汇编代码,看起来 `0x40060c` 可能是我们要打印的字符串的地址。我们来试一下。
|
||||
|
||||
```
|
||||
(gdb) x/s 0x40060c
|
||||
0x40060c: "Hi!"
|
||||
```
|
||||
|
||||
的确是这样。`x/s` 中的 `/s` 部分,意思是“把它作为字符串展示”。我们也可以“展示 10 个字符”,像这样:
|
||||
|
||||
```
|
||||
(gdb) x/10c 0x40060c
|
||||
0x40060c: 72 'H' 105 'i' 33 '!' 0 '\000' 1 '\001' 27 '\033' 3 '\003' 59 ';'
|
||||
0x400614: 52 '4' 0 '\000'
|
||||
```
|
||||
|
||||
你可以看到,前四个字符是 'H'、'i'、'!' 和 '\0',它们之后则是一些不相关的内容。
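`x/s` 在遇到 NUL 字节时停止,而 `x/10c` 则固定展示 10 个字符。下面用一段 Python 小草图模拟这两种读法的区别(假设性示例,和 gdb 本身无关,只是帮助理解语义):

```python
# 模拟 gdb 的 x/s 与 x/10c 对同一段内存的两种读法(假设性示例)
memory = b'Hi!\x00\x01\x1b\x03;4\x00'  # 和文中 0x40060c 处的字节类似

def x_s(buf):
    """像 x/s 一样:读到第一个 NUL 字节为止,当作字符串"""
    return buf.split(b'\x00', 1)[0].decode()

def x_c(buf, n):
    """像 x/10c 一样:固定读 n 个字节,逐个展示其数值"""
    return list(buf[:n])

print(x_s(memory))      # Hi!
print(x_c(memory, 10))  # [72, 105, 33, 0, 1, 27, 3, 59, 52, 0]
```

可以看到,`x_c` 展示的数值正好对应上面输出里的 72 'H'、105 'i'、33 '!' 等等。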
|
||||
|
||||
我知道 gdb 还能做很多其他的事情,但是我仍然不是很了解它,仅 `x` 和 `break` 这两个命令就已让我受益良多。你还可以阅读 [查看内存的 gdb 文档][4]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/
|
||||
|
||||
作者:[Julia Evans][a]
|
||||
译者:[Torival](https://github.com/Torival)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://jvns.ca
|
||||
[1]:https://jvns.ca/categories/spytools
|
||||
[2]:http://jvns.ca/blog/categories/kernel
|
||||
[3]:https://twitter.com/mgedmin
|
||||
[4]:https://ftp.gnu.org/old-gnu/Manuals/gdb-5.1.1/html_chapter/gdb_9.html#SEC56
|
@ -0,0 +1,234 @@
|
||||
让我们做个简单的解释器(2)
|
||||
======
|
||||
|
||||
在一本叫做《高效思考的 5 要素》的书中,作者 Burger 和 Starbird 讲述了一个故事:他们观察了举世闻名的小号演奏家 Tony Plog 为一些有才华的演奏者开设的大师班。学生们先演奏复杂的乐曲,他们演奏得非常好。然后他们被要求演奏非常基础、简单的乐曲。与之前演奏的曲目相比,这些简单乐曲听起来非常幼稚。在他们结束演奏后,老师也演奏了同样的乐曲,但是听上去毫不幼稚。差别令人震惊。Tony 解释道,精通简单的部分可以让人更好地掌握复杂的部分。这个例子说明:要成为真正的大师,必须先掌握简单基础的思想。
|
||||
|
||||
故事中的道理显然不仅适用于音乐,也适用于软件开发。这个故事提醒我们,不要忽视简单基础的概念的重要性,哪怕钻研它们有时让人感觉是一种倒退。熟练掌握一门工具或者框架固然重要,了解它们背后的原理也同样重要。正如 Ralph Waldo Emerson 所说:
|
||||
|
||||
> “如果你只学习方法,你就会被方法束缚。但如果你知道原理,就可以发明自己的方法。”
|
||||
|
||||
有鉴于此,让我们再次深入了解解释器和编译器。
|
||||
|
||||
今天我会向你们展示一个全新的计算器,与 [第一部分][1] 相比,它可以做到:
|
||||
|
||||
1. 处理输入字符串任意位置的空白符
|
||||
2. 识别输入字符串中的多位整数
|
||||
3. 做两个整数之间的减法(第一版的计算器只能做整数加法)
|
||||
|
||||
|
||||
新版本计算器的源代码在这里,它可以做到上述的所有事情:
|
||||
```
|
||||
# 标记类型
|
||||
# EOF (end-of-file 文件末尾) 标记是用来表示所有输入都解析完成
|
||||
INTEGER, PLUS, MINUS, EOF = 'INTEGER', 'PLUS', 'MINUS', 'EOF'
|
||||
|
||||
|
||||
class Token(object):
|
||||
def __init__(self, type, value):
|
||||
# token 类型: INTEGER, PLUS, MINUS, or EOF
|
||||
self.type = type
|
||||
# token 值: 非负整数值, '+', '-', 或无
|
||||
self.value = value
|
||||
|
||||
def __str__(self):
|
||||
"""String representation of the class instance.
|
||||
|
||||
Examples:
|
||||
Token(INTEGER, 3)
|
||||
Token(PLUS '+')
|
||||
"""
|
||||
return 'Token({type}, {value})'.format(
|
||||
type=self.type,
|
||||
value=repr(self.value)
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
return self.__str__()
|
||||
|
||||
|
||||
class Interpreter(object):
|
||||
def __init__(self, text):
|
||||
# 客户端字符输入, 例如. "3 + 5", "12 - 5",
|
||||
self.text = text
|
||||
# self.pos 是 self.text 的索引
|
||||
self.pos = 0
|
||||
# 当前标记实例
|
||||
self.current_token = None
|
||||
self.current_char = self.text[self.pos]
|
||||
|
||||
def error(self):
|
||||
raise Exception('Error parsing input')
|
||||
|
||||
def advance(self):
|
||||
"""Advance the 'pos' pointer and set the 'current_char' variable."""
|
||||
self.pos += 1
|
||||
if self.pos > len(self.text) - 1:
|
||||
self.current_char = None # Indicates end of input
|
||||
else:
|
||||
self.current_char = self.text[self.pos]
|
||||
|
||||
def skip_whitespace(self):
|
||||
while self.current_char is not None and self.current_char.isspace():
|
||||
self.advance()
|
||||
|
||||
def integer(self):
|
||||
"""Return a (multidigit) integer consumed from the input."""
|
||||
result = ''
|
||||
while self.current_char is not None and self.current_char.isdigit():
|
||||
result += self.current_char
|
||||
self.advance()
|
||||
return int(result)
|
||||
|
||||
def get_next_token(self):
|
||||
"""Lexical analyzer (also known as scanner or tokenizer)
|
||||
|
||||
This method is responsible for breaking a sentence
|
||||
apart into tokens.
|
||||
"""
|
||||
while self.current_char is not None:
|
||||
|
||||
if self.current_char.isspace():
|
||||
self.skip_whitespace()
|
||||
continue
|
||||
|
||||
if self.current_char.isdigit():
|
||||
return Token(INTEGER, self.integer())
|
||||
|
||||
if self.current_char == '+':
|
||||
self.advance()
|
||||
return Token(PLUS, '+')
|
||||
|
||||
if self.current_char == '-':
|
||||
self.advance()
|
||||
return Token(MINUS, '-')
|
||||
|
||||
self.error()
|
||||
|
||||
return Token(EOF, None)
|
||||
|
||||
def eat(self, token_type):
|
||||
# 将当前的标记类型与传入的标记类型作比较,如果他们相匹配,就
|
||||
# “eat” 掉当前的标记并将下一个标记赋给 self.current_token,
|
||||
# 否则抛出一个异常
|
||||
if self.current_token.type == token_type:
|
||||
self.current_token = self.get_next_token()
|
||||
else:
|
||||
self.error()
|
||||
|
||||
def expr(self):
|
||||
"""Parser / Interpreter
|
||||
|
||||
expr -> INTEGER PLUS INTEGER
|
||||
expr -> INTEGER MINUS INTEGER
|
||||
"""
|
||||
# 将输入中的第一个标记设置成当前标记
|
||||
self.current_token = self.get_next_token()
|
||||
|
||||
# 当前标记应该是一个整数
|
||||
left = self.current_token
|
||||
self.eat(INTEGER)
|
||||
|
||||
# 当前标记应该是 ‘+’ 或 ‘-’
|
||||
op = self.current_token
|
||||
if op.type == PLUS:
|
||||
self.eat(PLUS)
|
||||
else:
|
||||
self.eat(MINUS)
|
||||
|
||||
# 当前标记应该是一个整数
|
||||
right = self.current_token
|
||||
self.eat(INTEGER)
|
||||
# 在上述函数调用后,self.current_token 就被设为 EOF 标记
|
||||
|
||||
# 这时要么是成功地找到 INTEGER PLUS INTEGER,要么是 INTEGER MINUS INTEGER
|
||||
# 序列的标记,并且这个方法可以仅仅返回两个整数的加或减的结果,就能高效解释客户端的输入
|
||||
if op.type == PLUS:
|
||||
result = left.value + right.value
|
||||
else:
|
||||
result = left.value - right.value
|
||||
return result
|
||||
|
||||
|
||||
def main():
|
||||
while True:
|
||||
try:
|
||||
# To run under Python3 replace 'raw_input' call
|
||||
# with 'input'
|
||||
text = raw_input('calc> ')
|
||||
except EOFError:
|
||||
break
|
||||
if not text:
|
||||
continue
|
||||
interpreter = Interpreter(text)
|
||||
result = interpreter.expr()
|
||||
print(result)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
```
|
||||
|
||||
把上面的代码保存到 calc2.py 文件中,或者直接从 [GitHub][2] 上下载。试着运行它。看看它是不是正常工作:它应该能够处理输入中任意位置的空白符;能够接受多位的整数,并且能够对两个整数做减法和加法。
|
||||
|
||||
这是我在自己的笔记本上运行的示例:
|
||||
```
|
||||
$ python calc2.py
|
||||
calc> 27 + 3
|
||||
30
|
||||
calc> 27 - 7
|
||||
20
|
||||
calc>
|
||||
```
|
||||
|
||||
与 [第一部分][1] 的版本相比,主要的代码改动有:
|
||||
|
||||
1. get_next_token 方法重构了很多。把 pos 指针后移的逻辑被提取到了一个单独的方法 advance 中。
|
||||
2. 增加了两个方法:skip_whitespace 用于忽略空白字符,integer 用于处理输入中的多位整数。
|
||||
3. expr 方法修改成了可以识别 “整数 -> 减号 -> 整数” 词组和 “整数 -> 加号 -> 整数” 词组。在成功识别相应的词组后,这个方法现在可以解释加法和减法。
|
||||
|
||||
[第一部分][1] 中你学到了两个重要的概念:**标记**(token)和 **词法分析**(lexical analysis)。现在我想谈一谈 **词素**(lexeme)、**解析**(parsing)和 **解析器**(parser)。
|
||||
|
||||
你已经知道了标记。但是为了更细致地讨论标记,我需要先谈一谈词素。词素是什么?**词素** 是构成一个标记的字符序列。下图中你可以看到一些标记与词素的例子,希望它能让两者之间的关系变得清晰:
|
||||
|
||||
![][3]
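上图中词素与标记的对应关系,可以用下面这个极简的词法分析草图来体会(假设性示例,思路与 calc2.py 中的 get_next_token 相同,但做了简化):

```python
# 一个极简的词法分析器草图(假设性示例,非原文代码):
# 展示词素 "12"、"-"、"5" 如何分别对应到一个标记
def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        ch = text[i]
        if ch.isspace():
            i += 1                      # 跳过空白符
        elif ch.isdigit():
            j = i
            while j < len(text) and text[j].isdigit():
                j += 1                  # 吃掉连续的数字字符
            tokens.append(('INTEGER', int(text[i:j])))  # 词素 "12" -> (INTEGER, 12)
            i = j
        elif ch in '+-':
            tokens.append(('PLUS' if ch == '+' else 'MINUS', ch))
            i += 1
        else:
            raise Exception('Error parsing input')
    return tokens

print(tokenize('12 - 5'))  # [('INTEGER', 12), ('MINUS', '-'), ('INTEGER', 5)]
```

每个标记都由一段字符序列(词素)识别而来,这正是图中想表达的关系。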
|
||||
|
||||
现在还记得我们的朋友 expr 方法吗?我之前说过,它是数学表达式实际被解释的地方。但是要解释一个表达式,你首先要识别出它是哪种词组,比如是加法还是减法。expr 方法最重要的工作就是:它从 get_next_token 方法中取得标记流,找出这个标记流的结构,然后解释识别出的词组,产生数学表达式的结果。
|
||||
|
||||
在标记流中找出结构的过程,或者换种说法,识别标记流中的词组的过程就叫 **解析**。解释器或者编译器中执行这个任务的部分就叫做 **解析器**。
|
||||
|
||||
现在你知道了,expr 方法就是你的解释器中 **解析** 和 **解释** 同时发生的部分:expr 方法首先尝试识别(**解析**)标记流里的 “整数 -> 加号 -> 整数” 或者 “整数 -> 减号 -> 整数” 词组,成功识别(**解析**)出其中一个词组后,这个方法就开始解释它,返回两个整数的和或差。
|
||||
|
||||
又到了练习的时间。
|
||||
|
||||
![][4]
|
||||
|
||||
1. 扩展这个计算器,让它能够计算两个整数的乘法
|
||||
2. 扩展这个计算器,让它能够计算两个整数的除法
|
||||
3. 修改代码,让它能够解释包含了任意数量的加法和减法的表达式,比如 “9 - 5 + 3 + 11”
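下面是练习 3 的一种可能思路的极简草图(假设性示例,并非原文给出的答案;建议先自己做完练习再看)。它把 expr 的“识别一个词组”改成循环,从而支持任意数量的加减法:

```python
# 练习 3 的一种参考思路(假设性示例,仅为草图):
# 把 expr 改成循环,就能解释 "9 - 5 + 3 + 11" 这样
# 含任意数量加减法的表达式
import re

def calc(text):
    # 用正则做一个极简的词法分析:多位整数或 +/- 运算符
    tokens = re.findall(r'\d+|[+-]', text)
    result = int(tokens[0])
    i = 1
    while i < len(tokens):
        op, num = tokens[i], int(tokens[i + 1])
        result = result + num if op == '+' else result - num
        i += 2
    return result

print(calc('9 - 5 + 3 + 11'))  # 18
```

真正的练习应当沿用 calc2.py 中 Token/Interpreter 的结构,这里的正则只是为了让草图尽量短。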
|
||||
|
||||
|
||||
|
||||
**检验你的理解:**
|
||||
|
||||
1. 词素是什么?
|
||||
2. 找出标记流结构的过程叫什么,或者换种说法,识别标记流中一个词组的过程叫什么?
|
||||
3. 解释器(编译器)执行解析的部分叫什么?
|
||||
|
||||
|
||||
希望你喜欢今天的内容。在该系列的下一篇文章里你就能扩展计算器从而处理更多复杂的算术表达式。敬请期待。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://ruslanspivak.com/lsbasi-part2/
|
||||
|
||||
作者:[Ruslan Spivak][a]
|
||||
译者:[BriFuture](https://github.com/BriFuture)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://ruslanspivak.com
|
||||
[1]:http://ruslanspivak.com/lsbasi-part1/ (Part 1)
|
||||
[2]:https://github.com/rspivak/lsbasi/blob/master/part2/calc2.py
|
||||
[3]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_lexemes.png
|
||||
[4]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_exercises.png
|
@ -1,223 +0,0 @@
|
||||
在 Ubuntu 上玩玩 LXD 容器
|
||||
======
|
||||
本文的主角是容器,一种类似虚拟机但更轻量级的构造。你可以轻易地在你的 Ubuntu 桌面系统中创建一堆容器!
|
||||
|
||||
虚拟机会虚拟出一整台电脑,让你在上面安装客户机操作系统。**相比之下**,容器 **复用** 了主机的 Linux 内核,只是简单地 **包含** 了我们选择的根文件系统(也就是运行时环境)。Linux 内核有很多功能可以将运行中的 Linux 容器与我们的主机(也就是我们的 Ubuntu 桌面)隔离开。
|
||||
|
||||
直接管理 Linux 容器需要一些手工操作。好在,我们有 LXD(读音为 Lex-deeh),一款为我们管理 Linux 容器的服务。
|
||||
|
||||
我们将会看到如何:
|
||||
|
||||
1. 在我们的 Ubuntu 桌面上配置容器,
|
||||
2. 创建容器,
|
||||
3. 安装一台 web 服务器,
|
||||
4. 测试一下这台 web 服务器,以及
|
||||
5. 清理所有的东西。
|
||||
|
||||
### 设置 Ubuntu 容器
|
||||
|
||||
如果你安装的是 Ubuntu 16.04,那么你什么都不用做。只要安装下面所列出的一些额外的包就行了。若你安装的是 Ubuntu 14.04.x 或 Ubuntu 15.10,那么按照 [LXD 2.0:Installing and configuring LXD [2/12]][1] 来进行一些操作,然后再回来。
|
||||
|
||||
确保已经更新了包列表:
|
||||
```
|
||||
sudo apt update
|
||||
sudo apt upgrade
|
||||
```
|
||||
|
||||
安装 **lxd** 包:
|
||||
```
|
||||
sudo apt install lxd
|
||||
```
|
||||
|
||||
若你安装的是 Ubuntu 16.04,那么还可以让你的容器文件以 ZFS 文件系统的格式进行存储。Ubuntu 16.04 的 Linux 内核包含了支持 ZFS 所必要的内核模块。若要让 LXD 使用 ZFS 进行存储,我们只需要安装 ZFS 工具包。没有 ZFS 时,容器会以单独文件的形式存储在主机文件系统中。有了 ZFS,我们就有了写时复制等功能,可以让某些任务完成得更快一些。
|
||||
|
||||
安装 **zfsutils-linux** 包 (若你安装的是 Ubuntu 16.04.x):
|
||||
```
|
||||
sudo apt install zfsutils-linux
|
||||
```
|
||||
|
||||
安装好 LXD 后,包安装脚本应该会将你加入 **lxd** 组。该组的成员无需通过 sudo 就能直接使用 LXD 管理容器。按照 Linux 的惯例,**你需要先登出桌面会话然后再登录**,才能应用 **lxd** 的组成员关系。(若你是高手,也可以在当前 shell 中执行 newgrp lxd 命令,就不用重新登录了)。
|
||||
|
||||
在开始使用前,LXD 需要初始化存储和网络参数。
|
||||
|
||||
运行下面命令:
|
||||
```
|
||||
$ sudo lxd init
|
||||
Name of the storage backend to use (dir or zfs): zfs
|
||||
Create a new ZFS pool (yes/no)? yes
|
||||
Name of the new ZFS pool: lxd-pool
|
||||
Would you like to use an existing block device (yes/no)? no
|
||||
Size in GB of the new loop device (1GB minimum): 30
|
||||
Would you like LXD to be available over the network (yes/no)? no
|
||||
Do you want to configure the LXD bridge (yes/no)? yes
|
||||
> You will be asked about the network bridge configuration. Accept all defaults and continue.
|
||||
Warning: Stopping lxd.service, but it can still be activated by:
|
||||
lxd.socket
|
||||
LXD has been successfully configured.
|
||||
$ _
|
||||
```
|
||||
|
||||
我们在一个(独立的)文件而不是块设备(即分区)中构建了一个文件系统来作为 ZFS 池,因此无需进行额外的分区操作。在本例中我指定了 30GB 的大小,这个空间取自根(/)文件系统。这个文件就是 `/var/lib/lxd/zfs.img`。
|
||||
|
||||
行了!最初的配置完成了。若有问题,或者想了解其他信息,请阅读 https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/
|
||||
|
||||
### 创建第一个容器
|
||||
|
||||
所有 LXD 的管理操作都可以通过 **lxc** 命令来进行。我们通过给 **lxc** 不同参数来管理容器。
|
||||
```
|
||||
lxc list
|
||||
```
|
||||
可以列出所有已经安装的容器。很明显,这个列表现在是空的,但这表示我们的安装是没问题的。
|
||||
|
||||
```
|
||||
lxc image list
|
||||
```
|
||||
列出可以用来启动容器的(已经缓存)镜像列表。很明显这个列表也是空的,但这也说明我们的安装是没问题的。
|
||||
|
||||
```
|
||||
lxc image list ubuntu:
|
||||
```
|
||||
列出可以下载并启动容器的远程镜像。而且指定了是显示 Ubuntu 镜像。
|
||||
|
||||
```
|
||||
lxc image list images:
|
||||
```
|
||||
列出可以用来启动容器的(已经缓存)各种发行版的镜像列表。这会列出各种发行版的镜像比如 Alpine,Debian,Gentoo,Opensuse 以及 Fedora。
|
||||
|
||||
让我们启动一个 Ubuntu 16.04 容器,并称之为 c1:
|
||||
```
|
||||
$ lxc launch ubuntu:x c1
|
||||
Creating c1
|
||||
Starting c1
|
||||
$
|
||||
```
|
||||
|
||||
我们使用 launch 动作,然后选择镜像 **ubuntu:x** (x 表示 Xenial/16.04 镜像),最后我们使用名字 `c1` 作为容器的名称。
|
||||
|
||||
让我们来看看安装好的首个容器,
|
||||
```
|
||||
$ lxc list
|
||||
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
| c1 | RUNNING | 10.173.82.158 (eth0) | | PERSISTENT | 0 |
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
```
|
||||
|
||||
我们的首个容器 c1 已经运行起来了,它还有自己的 IP 地址(可以本地访问)。我们可以开始用它了!
|
||||
|
||||
### 安装 web 服务器
|
||||
|
||||
我们可以在容器中运行命令。运行命令的动作为 **exec**。
|
||||
```
|
||||
$ lxc exec c1 -- uptime
|
||||
11:47:25 up 2 min, 0 users, load average: 0.07, 0.05, 0.04
|
||||
$ _
|
||||
```
|
||||
|
||||
在 exec 后面,我们指定容器,最后输入要在容器中运行的命令。运行时间只有 2 分钟,这是个新出炉的容器:-)。
|
||||
|
||||
命令行中的 `--` 是告诉 shell:它后面的内容不要当做 `lxc` 的参数来处理。若我们要运行的命令没有任何参数,则完全可以省略 `--`。
|
||||
```
|
||||
$ lxc exec c1 -- df -h
|
||||
```
|
||||
|
||||
这是一个必须要 `--` 的例子,因为我们的命令使用了参数 `-h`。若省略了 `--`,就会报错。
|
||||
|
||||
让我们在容器中运行一个 shell,来更新它的包列表。
|
||||
```
|
||||
$ lxc exec c1 bash
|
||||
root@c1:~# apt update
|
||||
Ign http://archive.ubuntu.com trusty InRelease
|
||||
Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
|
||||
Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
|
||||
...
|
||||
Hit http://archive.ubuntu.com trusty/universe Translation-en
|
||||
Fetched 11.2 MB in 9s (1228 kB/s)
|
||||
Reading package lists... Done
|
||||
root@c1:~# apt upgrade
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
...
|
||||
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
|
||||
Setting up dpkg (1.17.5ubuntu5.7) ...
|
||||
root@c1:~# _
|
||||
```
|
||||
|
||||
我们使用 **nginx** 来做 web 服务器。nginx 在某些方面要比 Apache web 服务器更酷一些。
|
||||
```
|
||||
root@c1:~# apt install nginx
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
...
|
||||
Setting up nginx-core (1.4.6-1ubuntu3.5) ...
|
||||
Setting up nginx (1.4.6-1ubuntu3.5) ...
|
||||
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...
|
||||
root@c1:~# _
|
||||
```
|
||||
|
||||
让我们用浏览器访问一下这个 web 服务器。记住 IP 地址为 10.173.82.158,因此你需要在浏览器中输入这个 IP。
|
||||
|
||||
[![lxd-nginx][2]][3]
|
||||
|
||||
让我们对页面文字做一些小改动。回到容器中,进入默认 HTML 页面的目录中。
|
||||
```
|
||||
root@c1:~# cd /var/www/html/
|
||||
root@c1:/var/www/html# ls -l
|
||||
total 2
|
||||
-rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html
|
||||
root@c1:/var/www/html#
|
||||
```
|
||||
|
||||
使用 nano 编辑这个文件,然后保存:
|
||||
|
||||
[![lxd-nginx-nano][4]][5]
|
||||
|
||||
之后,再刷新一下页面看看:
|
||||
|
||||
[![lxd-nginx-modified][6]][7]
|
||||
|
||||
### 清理
|
||||
|
||||
让我们清理一下这个容器,也就是删掉它。当需要的时候我们可以很方便地创建一个新容器出来。
|
||||
```
|
||||
$ lxc list
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
| c1 | RUNNING | 10.173.82.169 (eth0) | | PERSISTENT | 0 |
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
$ lxc stop c1
|
||||
$ lxc delete c1
|
||||
$ lxc list
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
+---------|---------|----------------------|------|------------|-----------+
|
||||
|
||||
```
|
||||
|
||||
我们停止(关闭)这个容器,然后删掉它了。
|
||||
|
||||
本文至此就结束了。关于容器有很多玩法。而这只是配置 Ubuntu 并尝试使用容器的第一步而已。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.simos.info/trying-out-lxd-containers-on-our-ubuntu/
|
||||
|
||||
作者:[Simos Xenitellis][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://blog.simos.info/author/simos/
|
||||
[1]:https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/
|
||||
[2]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?resize=564%2C269&ssl=1
|
||||
[3]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?ssl=1
|
||||
[4]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-nano.png?resize=750%2C424&ssl=1
|
||||
[5]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-nano.png?ssl=1
|
||||
[6]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-modified.png?resize=595%2C317&ssl=1
|
||||
[7]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-modified.png?ssl=1
|
237
translated/tech/20160808 Top 10 Command Line Games For Linux.md
Normal file
@ -0,0 +1,237 @@
|
||||
Linux 命令行游戏 Top 10
|
||||
======
|
||||
概要:本文列举了 **Linux 中最好的命令行游戏**。
|
||||
|
||||
Linux 从来都不是游戏的首选操作系统,尽管近来 [Linux 上的游戏][1] 已经多了起来,你也可以从许多网站 [下载到 Linux 游戏][2]。
|
||||
|
||||
也有专门 [面向游戏的 Linux 发行版][3],它确实存在。但是今天,我们并不是要来看游戏版的 Linux。
|
||||
|
||||
Linux 有一个超过 Windows 的优势。它拥有一个强大的 Linux 终端。在 Linux 终端上,你可以做很多事情,包括玩 **命令行游戏**。
|
||||
|
||||
当然,对于 Linux 终端的核心爱好者和拥护者来说,终端游戏轻便、快速,而且有趣得要命。而最有意思的是,你可以在 Linux 终端上重温大量经典游戏。
|
||||
|
||||
[推荐阅读:Linux 上游戏,你所需要了解的全部][20]
|
||||
|
||||
### 最好的 Linux 终端游戏
|
||||
|
||||
来揭秘这张榜单,找出 Linux 终端最好的游戏。
|
||||
|
||||
### 1. Bastet
|
||||
|
||||
谁还没花上几个小时玩 [俄罗斯方块][4] ?简单而且容易上瘾。 Bastet 就是 Linux 版的俄罗斯方块。
|
||||
|
||||
![Linux 终端游戏 Bastet][5]
|
||||
|
||||
使用下面的命令获取 Bastet:
|
||||
```
|
||||
sudo apt install bastet
|
||||
```
|
||||
|
||||
运行下列命令,在终端上开始这个游戏:
|
||||
```
|
||||
bastet
|
||||
```
|
||||
|
||||
使用空格键旋转方块,方向键控制方块移动。
|
||||
|
||||
### 2. Ninvaders
|
||||
|
||||
Space Invaders(太空侵略者),我仍记得当年和我兄弟为了这个游戏的高分而争来抢去。这是最好的街机游戏之一。
|
||||
|
||||
![Linux 终端游戏 nInvaders][6]
|
||||
|
||||
复制粘贴这段代码安装 Ninvaders。
|
||||
```
|
||||
sudo apt-get install ninvaders
|
||||
```
|
||||
|
||||
使用下面的命令开始游戏:
|
||||
```
|
||||
ninvaders
|
||||
```
|
||||
|
||||
方向键移动太空飞船,空格键射击外星人。
|
||||
|
||||
[推荐阅读:2016 你可以开始的 Linux 游戏 Top 10][21]
|
||||
|
||||
### 3. Pacman4console
|
||||
|
||||
是的,这个就是街机之王。Pacman4console 是最受欢迎的街机游戏 Pacman(吃豆豆)的终端版。
|
||||
|
||||
![Linux 命令行吃豆豆游戏 Pacman4console][7]
|
||||
|
||||
使用以下命令获取 pacman4console:
|
||||
```
|
||||
sudo apt-get install pacman4console
|
||||
```
|
||||
|
||||
打开终端,建议使用最大的终端界面(29x32)。键入以下命令启动游戏:
|
||||
```
|
||||
pacman4console
|
||||
```
|
||||
|
||||
使用方向键控制移动。
|
||||
|
||||
### 4. nSnake
|
||||
|
||||
记得在老式诺基亚手机里玩的贪吃蛇游戏吗?
|
||||
|
||||
这个游戏曾让我对手机着迷了很长时间。我曾经琢磨出各种路线,好让蛇身变得更长。
|
||||
|
||||
![nsnake : Linux 终端上的贪吃蛇游戏][8]
|
||||
|
||||
多亏了 [nSnake][9],我们才能在 [Linux 终端上玩贪吃蛇游戏][9]。使用下面的命令安装它:
|
||||
```
|
||||
sudo apt-get install nsnake
|
||||
```
|
||||
|
||||
键入下面的命令开始游戏:
|
||||
```
|
||||
nsnake
|
||||
```
|
||||
|
||||
使用方向键控制蛇身,获取豆豆。
|
||||
|
||||
### 5. Greed
|
||||
|
||||
Greed 有点像 Tron,但是没有那样的速度和肾上腺素的刺激(类似贪吃蛇的进化版)。
|
||||
|
||||
你当前的位置由‘@’表示。你被数字包围了,你可以在四个方向任意移动。你选择的移动方向上标识的数字,就是你能移动的步数。走过的路不能再走,如果你无路可走,游戏结束。
|
||||
|
||||
听起来,似乎我让它变得更复杂了。
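其实规则很简单,用几行 Python 就能模拟出来(假设性示例,与游戏的真实实现无关,只是演示上面描述的移动规则):

```python
# 模拟 Greed 的移动规则(假设性示例):朝某方向移动的步数,
# 等于该方向相邻格子上的数字;走过的格子被清空,不能再经过
grid = [
    [1, 3, 1, 2],
    [2, 0, 2, 1],   # 0 是 '@' 当前所在的格子(已走过)
    [1, 2, 3, 1],
]
pos = (1, 1)

def move_right(grid, r, c):
    steps = grid[r][c + 1]          # 右边格子上的数字决定步数
    if c + steps >= len(grid[r]):
        return None                 # 会越界:此方向不可走
    for i in range(1, steps + 1):
        if grid[r][c + i] == 0:
            return None             # 踩到走过的格子:此方向不可走
        grid[r][c + i] = 0          # 清空走过的格子
    return (r, c + steps)

dest = move_right(grid, *pos)
print(dest)  # (1, 3)
```

在这个例子里,右边的格子写着 2,于是向右一口气走了 2 步;如果无路可走(返回 None 的情形),游戏就结束了。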
|
||||
|
||||
![Greed : 命令行上的 Tron][10]
|
||||
|
||||
通过下列命令获取 Greed:
|
||||
```
|
||||
sudo apt-get install greed
|
||||
```
|
||||
|
||||
通过下列命令启动游戏,使用方向键控制游戏。
|
||||
```
|
||||
greed
|
||||
```
|
||||
|
||||
### 6. Air Traffic Controller
|
||||
|
||||
还有什么比做飞行员更有意思的?那就是做空中交通管制员。在你的终端中,你可以模拟一套空中交通系统。说实话,在终端里管理空中交通蛮有意思的。
|
||||
|
||||
![Linux 空中交通管理员][11]
|
||||
|
||||
使用下列命令安装游戏:
|
||||
```
|
||||
sudo apt-get install bsdgames
|
||||
```
|
||||
|
||||
键入下列命令启动游戏:
|
||||
```
|
||||
atc
|
||||
```
|
||||
|
||||
ATC 不是孩子玩的游戏。建议查看官方文档。
|
||||
|
||||
### 7. Backgammon(双陆棋)
|
||||
|
||||
无论之前你有没有玩过 [双陆棋][12],你都应该看看这个。 它的说明书和控制手册都非常友好。如果你喜欢,可以挑战你的电脑或者你的朋友。
|
||||
|
||||
![Linux 终端上的双陆棋][13]
|
||||
|
||||
使用下列命令安装双陆棋:
|
||||
```
|
||||
sudo apt-get install bsdgames
|
||||
```
|
||||
|
||||
键入下列命令启动游戏:
|
||||
```
|
||||
backgammon
|
||||
```
|
||||
|
||||
当你需要提示游戏规则时,回复 ‘y’。
|
||||
|
||||
### 8. Moon Buggy
|
||||
|
||||
跳跃。疯狂。欢乐时光不必多言。
|
||||
|
||||
![Moon buggy][14]
|
||||
|
||||
使用下列命令安装游戏:
|
||||
```
|
||||
sudo apt-get install moon-buggy
|
||||
```
|
||||
|
||||
使用下列命令启动游戏:
|
||||
```
|
||||
moon-buggy
|
||||
```
|
||||
|
||||
空格跳跃,‘a’或者‘l’射击。尽情享受吧。
|
||||
|
||||
### 9. 2048
|
||||
|
||||
2048 可以活跃你的大脑。[2048][15] 是一个策略游戏,很容易上瘾,目标是得到 2048 分。
|
||||
|
||||
![Linux 终端上的 2048][16]
|
||||
|
||||
复制粘贴下面的命令安装游戏:
|
||||
```
|
||||
wget https://raw.githubusercontent.com/mevdschee/2048.c/master/2048.c
|
||||
|
||||
gcc -o 2048 2048.c
|
||||
```
|
||||
|
||||
键入下列命令启动游戏:
|
||||
```
|
||||
./2048
|
||||
```
|
||||
|
||||
### 10. Tron
|
||||
|
||||
没有动作类游戏,这张榜单怎么可能结束?
|
||||
|
||||
![Linux 终端游戏 Tron][17]
|
||||
|
||||
是的,Linux 终端上也能玩 Tron 这种节奏飞快的游戏。为接下来迅捷的反应做好准备吧。无需下载安装,一个命令即可启动游戏,你只需要一个网络连接:
|
||||
```
|
||||
ssh sshtron.zachlatta.com
|
||||
```
|
||||
|
||||
如果有别的玩家在线,你们还可以一起多人游戏。了解更多:[Linux 终端游戏 Tron][18]。
|
||||
|
||||
### 你看上了哪一款?
|
||||
|
||||
朋友,Linux 终端游戏 Top 10 都分享给你了。我猜你现在正准备按下 Ctrl+Alt+T(终端快捷键)了。榜单中哪个是你最喜欢的游戏?或者你知道其他好玩的终端游戏吗?尽情分享吧!
|
||||
|
||||
感谢 [Abhishek Prakash][19] 的补充。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/best-command-line-games-linux/
|
||||
|
||||
作者:[Aquil Roshan][a]
|
||||
译者:[CYLeft](https://github.com/CYleft)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/aquil/
|
||||
[1]:https://itsfoss.com/linux-gaming-guide/
|
||||
[2]:https://itsfoss.com/download-linux-games/
|
||||
[3]:https://itsfoss.com/manjaro-gaming-linux/
|
||||
[4]:https://en.wikipedia.org/wiki/Tetris
|
||||
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/bastet.jpg
|
||||
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/ninvaders.jpg
|
||||
[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/pacman.jpg
|
||||
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/nsnake.jpg
|
||||
[9]:https://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/
|
||||
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/greed.jpg
|
||||
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/atc.jpg
|
||||
[12]:https://en.wikipedia.org/wiki/Backgammon
|
||||
[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/backgammon.jpg
|
||||
[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/moon-buggy.jpg
|
||||
[15]:https://itsfoss.com/2048-offline-play-ubuntu/
|
||||
[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/2048.jpg
|
||||
[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/tron.jpg
|
||||
[18]:https://itsfoss.com/play-tron-game-linux-terminal/
|
||||
[19]:https://twitter.com/abhishek_pc
|
||||
[20]:https://itsfoss.com/linux-gaming-guide/
|
||||
[21]:https://itsfoss.com/best-linux-games/
|
284
translated/tech/20170319 ftrace trace your kernel functions.md
Normal file
@ -0,0 +1,284 @@
|
||||
ftrace:跟踪你的内核函数!
|
||||
============================================================
|
||||
|
||||
大家好!今天我们将去讨论一个调试工具:ftrace,之前我的博客上还没有讨论过它。还有什么能比一个新的调试工具更让人激动呢?
|
||||
|
||||
这个非常棒的 ftrace 并不是个新工具!它早在 Linux 2.6 内核版本中就有了,时间大约是 2008 年。[这里是我用谷歌能找到的一些文档][10]。因此,如果你是一个调试系统的“老手”,可能早就已经在使用它了!
|
||||
|
||||
我知道 ftrace 的存在已经大约两年半了,但是一直没有真正去学习它。假设我明天要召开一个关于 ftrace 的专题研讨会,那我该讲些什么?所以,今天就来讨论一下它!
|
||||
|
||||
### 什么是 ftrace?
|
||||
|
||||
ftrace 是一个 Linux 内核特性,它可以让你去跟踪 Linux 内核的函数调用。为什么要这么做呢?好吧,假设你在调试一个奇怪的问题,已经读到了你的内核版本中相关部分的源代码,现在你想知道那里到底发生了什么。
|
||||
|
||||
每次调试的时候,我并不会经常去读内核源代码,但极个别的情况下会去读它!例如本周在工作中,我有一个程序在内核中卡死了。查看到底调用了什么函数、涉及哪些子系统,能够帮我更好地理解内核中发生了什么!(在我的那个案例中,是虚拟内存系统)
|
||||
|
||||
我认为 ftrace 是一个十分好用的工具(它肯定没有 strace 那样被广泛使用,上手难度也更高一些),但它仍然值得你去学习。因此,让我们开始吧!
|
||||
|
||||
### 使用 ftrace 的第一步
|
||||
|
||||
不像 strace 和 perf,ftrace 并不是真正的 **程序** – 你不能只运行 `ftrace my_cool_function`。那样太容易了!
|
||||
|
||||
如果你去读 [使用 Ftrace 调试内核][11],它会告诉你从 `cd /sys/kernel/debug/tracing` 开始,然后做很多文件系统的操作。
|
||||
|
||||
对于我来说,这种办法太麻烦 – 使用 ftrace 的一个简单例子应该像这样:
|
||||
|
||||
```
|
||||
cd /sys/kernel/debug/tracing
|
||||
echo function > current_tracer
|
||||
echo do_page_fault > set_ftrace_filter
|
||||
cat trace
|
||||
|
||||
```
|
||||
|
||||
这个文件系统到跟踪系统的接口(“给这些神奇的文件赋值,然后该发生的事情就会发生”)理论上看起来似乎可用,但是它不是我的首选方式。
|
||||
|
||||
幸运的是,ftrace 团队也考虑到这个并不友好的用户界面,因此,它有了一个更易于使用的界面,它就是 **trace-cmd**!!!trace-cmd 是一个带命令行参数的普通程序。我们后面将使用它!我在 LWN 上找到了一个 trace-cmd 的使用介绍:[trace-cmd: Ftrace 的一个前端][12]。
|
||||
|
||||
### 开始使用 trace-cmd:让 trace 仅跟踪一个函数
|
||||
|
||||
首先,我需要去使用 `sudo apt-get install trace-cmd` 安装 `trace-cmd`,这一步很容易。
|
||||
|
||||
对于第一个 ftrace 的演示,我决定去了解我的内核如何去处理一个页面故障。当 Linux 分配内存时,它经常偷懒,(“你并不是 _真的_ 计划去使用内存,对吗?”)。这意味着,当一个应用程序尝试去对分配给它的内存进行写入时,就会发生一个页面故障,而这个时候,内核才会真正的为应用程序去分配物理内存。
|
||||
|
||||
我们开始使用 `trace-cmd` 并让它跟踪 `do_page_fault` 函数!
|
||||
|
||||
```
|
||||
$ sudo trace-cmd record -p function -l do_page_fault
|
||||
plugin 'function'
|
||||
Hit Ctrl^C to stop recording
|
||||
|
||||
```
|
||||
|
||||
我将它运行了几秒钟,然后按下了 `Ctrl+C`。 让我大吃一惊的是,它竟然产生了一个 2.5MB 大小的名为 `trace.dat` 的跟踪文件。我们来看一下这个文件的内容!
|
||||
|
||||
```
|
||||
$ sudo trace-cmd report
|
||||
chrome-15144 [000] 11446.466121: function: do_page_fault
|
||||
chrome-15144 [000] 11446.467910: function: do_page_fault
|
||||
chrome-15144 [000] 11446.469174: function: do_page_fault
|
||||
chrome-15144 [000] 11446.474225: function: do_page_fault
|
||||
chrome-15144 [000] 11446.474386: function: do_page_fault
|
||||
chrome-15144 [000] 11446.478768: function: do_page_fault
|
||||
CompositorTileW-15154 [001] 11446.480172: function: do_page_fault
|
||||
chrome-1830 [003] 11446.486696: function: do_page_fault
|
||||
CompositorTileW-15154 [001] 11446.488983: function: do_page_fault
|
||||
CompositorTileW-15154 [001] 11446.489034: function: do_page_fault
|
||||
CompositorTileW-15154 [001] 11446.489045: function: do_page_fault
|
||||
|
||||
```
|
||||
|
||||
看起来很整洁 – 它展示了进程名(chrome)、进程 ID (15144)、CPU(000)、以及它跟踪的函数。
|
||||
|
||||
通过查看整个文件(`sudo trace-cmd report | grep chrome`)可以看到,我们跟踪了大约 1.5 秒,在这段时间内,Chrome 发生了大约 500 次页面故障。真是太酷了!这就是我们的第一次 ftrace 体验!
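trace-cmd report 的输出是很规整的文本,统计起来非常容易。下面是一个小草图(假设性示例,输入取自上面展示的报告片段),按进程名统计 do_page_fault 的次数:

```python
# 统计 trace-cmd report 输出中各进程触发 do_page_fault 的次数
# (假设性示例,输入为文中的报告片段)
from collections import Counter

report = """\
chrome-15144 [000] 11446.466121: function: do_page_fault
chrome-15144 [000] 11446.467910: function: do_page_fault
CompositorTileW-15154 [001] 11446.480172: function: do_page_fault
chrome-1830 [003] 11446.486696: function: do_page_fault
"""

# 第一列形如 "chrome-15144",去掉末尾的 PID 就得到进程名
counts = Counter(line.split()[0].rsplit('-', 1)[0]
                 for line in report.splitlines())
print(counts['chrome'])  # 3
```

把完整的 report 输出喂给它,就能得到文中“约 500 次页面故障”那样的统计。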
|
||||
|
||||
### 下一个 ftrace 技巧:我们来跟踪一个进程!
|
||||
|
||||
好吧,只看一个函数是有点无聊!假如我想知道一个程序中都发生了什么事情。我使用一个名为 Hugo 的静态站点生成器。看看内核为 Hugo 都做了些什么事情?
|
||||
|
||||
在我的电脑上 Hugo 的 PID 现在是 25314,因此,我使用如下的命令去记录所有的内核函数:
|
||||
|
||||
```
|
||||
sudo trace-cmd record --help # I read the help!
|
||||
sudo trace-cmd record -p function -P 25314 # record for PID 25314
|
||||
|
||||
```
|
||||
|
||||
`sudo trace-cmd report` 输出了 18,000 行。如果你对这些感兴趣,你可以看 [这里是所有的 18,000 行的输出][13]。
|
||||
|
||||
18,000 行太多了,因此,在这里仅摘录其中几行。
|
||||
|
||||
当系统调用 `clock_gettime` 运行时,都发生了什么:
|
||||
|
||||
```
|
||||
compat_SyS_clock_gettime
|
||||
SyS_clock_gettime
|
||||
clockid_to_kclock
|
||||
posix_clock_realtime_get
|
||||
getnstimeofday64
|
||||
__getnstimeofday64
|
||||
arch_counter_read
|
||||
__compat_put_timespec
|
||||
|
||||
```
|
||||
|
||||
这是与进程调度相关的一些东西:
|
||||
|
||||
```
|
||||
cpufreq_sched_irq_work
|
||||
wake_up_process
|
||||
try_to_wake_up
|
||||
_raw_spin_lock_irqsave
|
||||
do_raw_spin_lock
|
||||
_raw_spin_lock
|
||||
do_raw_spin_lock
|
||||
walt_ktime_clock
|
||||
ktime_get
|
||||
arch_counter_read
|
||||
walt_update_task_ravg
|
||||
exiting_task
|
||||
|
||||
```
|
||||
|
||||
虽然你可能还不理解它们是做什么的,但是,能够看到所有的这些函数调用也是件很酷的事情。
|
||||
|
||||
### “function graph” 跟踪
|
||||
|
||||
这里有另外一个模式,称为 `function_graph`。它与函数跟踪器基本相同,不同之处是它会同时跟踪每个函数的进入与退出。[这里是那个跟踪器的输出][14]。
|
||||
|
||||
```
|
||||
sudo trace-cmd record -p function_graph -P 25314
|
||||
|
||||
```
|
||||
|
||||
同样,这里只是一个片断(这次来自 futex 代码)
|
||||
|
||||
```
|
||||
| futex_wake() {
|
||||
| get_futex_key() {
|
||||
| get_user_pages_fast() {
|
||||
1.458 us | __get_user_pages_fast();
|
||||
4.375 us | }
|
||||
| __might_sleep() {
|
||||
0.292 us | ___might_sleep();
|
||||
2.333 us | }
|
||||
0.584 us | get_futex_key_refs();
|
||||
| unlock_page() {
|
||||
0.291 us | page_waitqueue();
|
||||
0.583 us | __wake_up_bit();
|
||||
5.250 us | }
|
||||
0.583 us | put_page();
|
||||
+ 24.208 us | }
|
||||
|
||||
```
|
||||
|
||||
我们看到在这个示例中,在 `futex_wake` 后面调用了 `get_futex_key`。这是在源代码中真实发生的事情吗?我们可以检查一下!![这里是在 Linux 4.4 中 futex_wake 的定义][15] (我的内核版本是 4.4)。
|
||||
|
||||
为节省时间我直接贴出来,它的内容如下:
|
||||
|
||||
```
|
||||
static int
|
||||
futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
|
||||
{
|
||||
struct futex_hash_bucket *hb;
|
||||
struct futex_q *this, *next;
|
||||
union futex_key key = FUTEX_KEY_INIT;
|
||||
int ret;
|
||||
WAKE_Q(wake_q);
|
||||
|
||||
if (!bitset)
|
||||
return -EINVAL;
|
||||
|
||||
ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_READ);
|
||||
|
||||
```
|
||||
|
||||
如你所见,`futex_wake` 中的第一个函数调用真的是 `get_futex_key`!太棒了!相比阅读内核代码,阅读函数跟踪的输出显然更容易找到答案,而且更让人高兴的是,还能看到每个函数花费了多长时间。
|
||||
|
||||
### 如何知道哪些函数可以被跟踪
|
||||
|
||||
如果你去运行 `sudo trace-cmd list -f`,你将得到一个你可以跟踪的函数的列表。它很简单但是也很重要。
|
||||
|
||||
### 最后一件事:事件!
|
||||
|
||||
现在,我们已经知道了怎么去跟踪内核中的函数,真是太酷了!
|
||||
|
||||
我们还可以跟踪另一类东西!有些事件并不与某次函数调用相对应。例如,你可能想知道一个程序被调度进入或者离开 CPU 时都发生了什么!你可能想通过“盯着”函数调用把它算出来,但是我告诉你,这不可行!
|
||||
|
||||
因此,除了函数之外,ftrace 也为你提供了一些事件,让你可以看到重要事件发生时的情况。你可以使用 `sudo cat /sys/kernel/debug/tracing/available_events` 来查看这些事件的列表。
|
||||
|
||||
我查看了全部的 sched_switch 事件。我并不完全知道 sched_switch 是什么,但是,我猜测它与调度有关。
|
||||
|
||||
```
|
||||
sudo cat /sys/kernel/debug/tracing/available_events
|
||||
sudo trace-cmd record -e sched:sched_switch
|
||||
sudo trace-cmd report
|
||||
|
||||
```
|
||||
|
||||
输出如下:
|
||||
|
||||
```
|
||||
16169.624862: Chrome_ChildIOT:24817 [112] S ==> chrome:15144 [120]
|
||||
16169.624992: chrome:15144 [120] S ==> swapper/3:0 [120]
|
||||
16169.625202: swapper/3:0 [120] R ==> Chrome_ChildIOT:24817 [112]
|
||||
16169.625251: Chrome_ChildIOT:24817 [112] R ==> chrome:1561 [112]
|
||||
16169.625437: chrome:1561 [112] S ==> chrome:15144 [120]
|
||||
|
||||
```
|
||||
|
||||
现在,可以很清楚地看到这些切换:从 PID 24817 -> 15144 -> 内核 -> 24817 -> 1561 -> 15144。(所有的这些事件都发生在同一个 CPU 上)
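这条切换链同样可以从事件文本中自动提取出来。下面是一个小草图(假设性示例,输入取自上面的事件片段;PID 0 的 swapper 就是内核的 idle 线程):

```python
# 从 sched_switch 事件里提取一条 CPU 上的进程切换链(假设性示例)
import re

events = """\
16169.624862: Chrome_ChildIOT:24817 [112] S ==> chrome:15144 [120]
16169.624992: chrome:15144 [120] S ==> swapper/3:0 [120]
16169.625202: swapper/3:0 [120] R ==> Chrome_ChildIOT:24817 [112]
16169.625251: Chrome_ChildIOT:24817 [112] R ==> chrome:1561 [112]
16169.625437: chrome:1561 [112] S ==> chrome:15144 [120]
"""

chain = []
for line in events.splitlines():
    m = re.search(r'(\S+):(\d+) \[\d+\] \S+ ==> (\S+):(\d+) \[\d+\]', line)
    if not chain:
        chain.append(m.group(2))   # 起点:第一行切换前的 PID
    chain.append(m.group(4))       # 每一行切换后的 PID
print(' -> '.join(chain))  # 24817 -> 15144 -> 0 -> 24817 -> 1561 -> 15144
```

其中的 0 就是正文里写作 kernel 的那一跳(swapper/idle 线程)。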
|
||||
|
||||
### ftrace 是如何工作的?
|
||||
|
||||
ftrace 是一个动态跟踪系统。当启动 ftracing 去跟踪内核函数时,**函数的代码会被改变**。因此 – 我们假设去跟踪 `do_page_fault` 函数。内核将在那个函数的汇编代码中插入一些额外的指令,以便每次该函数被调用时去提示跟踪系统。内核之所以能够添加额外的指令的原因是,Linux 将额外的几个 NOP 指令编译进每个函数中,因此,当需要的时候,这里有添加跟踪代码的地方。
|
||||
|
||||
这种设计很巧妙,因为它意味着当我不用 ftrace 跟踪内核时,它根本不会影响性能。而当我需要跟踪时,跟踪的函数越多,产生的开销就越大。
|
||||
|
||||
(或许有些是不对的,但是,我认为的 ftrace 就是这样工作的)
|
||||
|
||||
### 更容易地使用 ftrace:brendan gregg 的工具 & kernelshark
|
||||
|
||||
正如我们在本文中所讨论的,直接使用 ftrace 时,你需要自己考虑很多关于单个内核函数/事件的细节。能够做到这一点很酷!但也需要做大量的工作!
|
||||
|
||||
Brendan Gregg (我们的 linux 调试工具“大神”)有个工具仓库,它使用 ftrace 去提供关于像 I/O 延迟这样的各种事情的信息。这是它在 GitHub 上全部的 [perf-tools][16] 仓库。
|
||||
|
||||
这里有一个权衡(tradeoff):这些工具易于使用,但仅限于 Brendan Gregg 认为重要到值得做成工具的那些事情。好在这已经涵盖了很多事情!:)
|
||||
|
||||
另一个工具是将 ftrace 的输出可视化,做的比较好的是 [kernelshark][17]。我还没有用过它,但是看起来似乎很有用。你可以使用 `sudo apt-get install kernelshark` 来安装它。
|
||||
|
||||
### 一个新的超能力
|
||||
|
||||
我很高兴能够花一些时间去学习 ftrace!就像任何内核工具一样,它在不同的内核版本上会有不同的表现,我希望有一天你能发现它很有用!
|
||||
|
||||
### ftrace 系列文章的一个索引
|
||||
|
||||
最后,这里是我找到的一些 ftrace 方面的文章。它们大部分在 LWN (Linux 新闻周刊)上,它是 Linux 的一个极好的资源(你可以购买一个 [订阅][18]!)
|
||||
|
||||
* [使用 Ftrace 调试内核 - part 1][1] (Dec 2009, Steven Rostedt)
|
||||
|
||||
* [使用 Ftrace 调试内核 - part 2][2] (Dec 2009, Steven Rostedt)
|
||||
|
||||
* [Linux 函数跟踪器的秘密][3] (Jan 2010, Steven Rostedt)
|
||||
|
||||
* [trace-cmd:Ftrace 的一个前端][4] (Oct 2010, Steven Rostedt)
|
||||
|
||||
* [使用 KernelShark 去分析实时调试器][5] (2011, Steven Rostedt)
|
||||
|
||||
* [Ftrace: 神秘的开关][6] (2014, Brendan Gregg)
|
||||
|
||||
* 内核文档:(它十分有用) [Documentation/ftrace.txt][7]
|
||||
|
||||
* 你能跟踪的事件的文档 [Documentation/events.txt][8]
|
||||
|
||||
* linux 内核开发上的一些 ftrace 设计文档 (不是有用,而是有趣!) [Documentation/ftrace-design.txt][9]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2017/03/19/getting-started-with-ftrace/
|
||||
|
||||
作者:[Julia Evans ][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://jvns.ca
|
||||
[1]:https://lwn.net/Articles/365835/
|
||||
[2]:https://lwn.net/Articles/366796/
|
||||
[3]:https://lwn.net/Articles/370423/
|
||||
[4]:https://lwn.net/Articles/410200/
|
||||
[5]:https://lwn.net/Articles/425583/
|
||||
[6]:https://lwn.net/Articles/608497/
|
||||
[7]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace.txt
|
||||
[8]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/events.txt
|
||||
[9]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace-design.txt
|
||||
[10]:https://lwn.net/Articles/290277/
|
||||
[11]:https://lwn.net/Articles/365835/
|
||||
[12]:https://lwn.net/Articles/410200/
|
||||
[13]:https://gist.githubusercontent.com/jvns/e5c2d640f7ec76ed9ed579be1de3312e/raw/78b8425436dc4bb5bb4fa76a4f85d5809f7d1ef2/trace-cmd-report.txt
|
||||
[14]:https://gist.githubusercontent.com/jvns/f32e9b06bcd2f1f30998afdd93e4aaa5/raw/8154d9828bb895fd6c9b0ee062275055b3775101/function_graph.txt
|
||||
[15]:https://github.com/torvalds/linux/blob/v4.4/kernel/futex.c#L1313-L1324
|
||||
[16]:https://github.com/brendangregg/perf-tools
|
||||
[17]:https://lwn.net/Articles/425583/
|
||||
[18]:https://lwn.net/subscribe/Info
|
@ -0,0 +1,116 @@
|
||||
使用 Vi/Vim 编辑器:高级概念
|
||||
======
|
||||
早些时候我们已经讨论了一些关于 VI/VIM 编辑器的基础知识,但是 VI 和 VIM 都是非常强大的编辑器,还有很多其他的功能可以和编辑器一起使用。在本教程中,我们将学习 VI/VIM 编辑器的一些高级用法。
|
||||
|
||||
(**推荐阅读**:[使用 VI 编辑器:基础知识][1])
|
||||
|
||||
## 使用 VI/VIM 编辑器打开多个文件
|
||||
|
||||
要打开多个文件,命令将与打开单个文件相同,我们只需要在后面依次加上其它文件的名称。
|
||||
|
||||
```
|
||||
$ vi file1 file2 file3
|
||||
```
|
||||
|
||||
要浏览到下一个文件,我们可以使用
|
||||
|
||||
```
|
||||
$ :n
|
||||
```
|
||||
|
||||
或者我们也可以使用
|
||||
|
||||
```
|
||||
$ :e filename
|
||||
```
|
||||
|
||||
## 在编辑器中运行外部命令
|
||||
|
||||
我们可以在 vi 编辑器内部运行外部的 Linux/Unix 命令,也就是说不需要退出编辑器。要在编辑器中运行命令,如果正处于插入模式,先返回到命令模式,然后使用 BANG(也就是 “!”),后面接上需要运行的命令。运行命令的语法是:
|
||||
|
||||
```
|
||||
$ :! command
|
||||
```
|
||||
|
||||
这是一个例子
|
||||
|
||||
```
|
||||
$ :! df -H
|
||||
```
|
||||
|
||||
## 根据模板搜索
|
||||
|
||||
要在文本文件中搜索一个单词或模板,我们在命令模式下使用以下两个命令:
|
||||
|
||||
* 命令 “/” 代表正向搜索模板
|
||||
|
||||
* 命令 “?” 代表反向搜索模板
|
||||
|
||||
|
||||
这两个命令都用于相同的目的,唯一不同的是它们搜索的方向。一个例子是:
|
||||
|
||||
`$ :/ search pattern` (如果在文件的开头)
|
||||
|
||||
`$ :? search pattern` (如果在文件末尾)
|
||||
|
||||
## 搜索并替换一个模板
|
||||
|
||||
我们可能需要搜索和替换文本中的单词或模板。无需从整个文本中逐个找到该单词出现的地方再手动替换,我们可以在命令模式中使用命令来自动完成替换。使用搜索和替换的语法是:
|
||||
|
||||
```
|
||||
$ :%s/pattern_to_be_found/New_pattern/g
|
||||
```
|
||||
|
||||
假设我们想要将单词 “alpha” 用单词 “beta” 代替,命令就是这样:
|
||||
|
||||
```
|
||||
$ :%s/alpha/beta/g
|
||||
```
|
||||
|
||||
如果我们只想替换当前行中第一个出现的 “alpha”,那么命令就是:
|
||||
|
||||
```
|
||||
$ :s/alpha/beta/
|
||||
```
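vi 的 `:s` 替换命令与 `sed` 的 `s` 命令语法同源。如果想在 shell 里直观对比“只替换第一个”和“全部替换”的区别,可以这样验证(演示文件路径是假设的):

```shell
printf 'alpha alpha alpha\n' > /tmp/vi_sub_demo.txt
sed 's/alpha/beta/'  /tmp/vi_sub_demo.txt   # 只替换每行第一个:beta alpha alpha
sed 's/alpha/beta/g' /tmp/vi_sub_demo.txt   # 加 g 替换所有:beta beta beta
```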
|
||||
|
||||
## 使用 set 命令
|
||||
|
||||
我们也可以使用 set 命令自定义 vi/vim 编辑器的行为和外观。下面是一些可以使用 set 命令修改 vi/vim 编辑器行为的选项列表:
|
||||
|
||||
`$ :set ic ` 在搜索时忽略大小写
|
||||
|
||||
`$ :set smartcase ` 当搜索模式中包含大写字母时强制区分大小写(与 ignorecase 配合使用)
|
||||
|
||||
`$ :set nu` 在每行开始显示行号
|
||||
|
||||
`$ :set hlsearch ` 高亮显示匹配的单词
|
||||
|
||||
`$ :set ro ` 将文件设置为只读
|
||||
|
||||
`$ :set term ` 打印终端类型
|
||||
|
||||
`$ :set ai ` 设置自动缩进
|
||||
|
||||
`$ :set noai ` 取消自动缩进
|
||||
|
||||
其他一些修改 vi 编辑器的命令是:
|
||||
|
||||
`$ :colorscheme ` 用来改变编辑器的配色方案 。(仅适用于 VIM 编辑器)
|
||||
|
||||
`$ :syntax on ` 为 .xml、.html 等文件打开语法高亮。(仅适用于 VIM 编辑器)
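如果希望这些 set 选项在每次启动时自动生效,可以把它们集中写进 `~/.vimrc`。下面是一个最小示例(为了演示先写到临时文件;选项组合只是一种常见搭配,并非唯一写法):

```shell
# 演示:生成一个示例配置文件(实际使用时目标应为 ~/.vimrc)
cat > /tmp/vimrc.example <<'EOF'
" 搜索时忽略大小写;但当模式中含大写字母时区分大小写
set ignorecase
set smartcase
" 显示行号并高亮搜索结果
set number
set hlsearch
" 自动缩进
set autoindent
EOF
cat /tmp/vimrc.example
```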
|
||||
|
||||
这篇结束了本系列教程,请在下面的评论栏中提出你的疑问/问题或建议。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/working-vivim-editor-advanced-concepts/
|
||||
|
||||
作者:[Shusain][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/working-vi-editor-basics/
|
@ -0,0 +1,102 @@
|
||||
3 个替代 Emacs 的 Vim 文本编辑器
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48)
|
||||
|
||||
Emacs 和 Vim 的粉丝们,在你们开始编辑器之争之前,请理解,这篇文章并不是要贬低诸位最喜欢的编辑器。我是一个 Emacs 爱好者,但是也很喜欢 Vim。
|
||||
|
||||
就是说,我已经意识到 Emacs 和 Vim 并不适合所有人。也许 [编辑器之争][1] 的幼稚让很多人心生反感。也许他们只是想要一个不太苛刻的、现代化的编辑器。
|
||||
|
||||
如果你正寻找可以替代 Emacs 或者 Vim 的编辑器,请继续阅读下去。这里有三个可能会让你感兴趣的编辑器。
|
||||
|
||||
### Geany
|
||||
|
||||
|
||||
![用 Geany 编辑一个 LaTeX 文档][3]
|
||||
|
||||
|
||||
你可以用 Geany 编辑 LaTeX 文档
|
||||
|
||||
[Geany][4] 是一个老牌的编辑器,当年我还在过时的硬件上运行轻量级 Linux 发行版的时候,它就是一个优秀的编辑器。我最初用 Geany 来编辑 [LaTeX][5] 文档,但它很快就成为我用于所有用途的编辑器了。
|
||||
|
||||
尽管 Geany 号称是轻量且高速的 [IDE][6](集成开发环境),但是它绝不仅仅是一个技术工具。Geany 轻便快捷,即便是在一台过时的机器或是 [运行 Linux 的 Chromebook][7] 上也能轻松运行。无论是编辑配置文件、维护任务列表,还是写文章、写代码或脚本,Geany 都能轻松胜任。
|
||||
|
||||
[插件][8] 给 Geany 带来一些额外的魅力。这些插件拓展了 Geany 的功能,让你编码或是处理一些标记语言变得更高效,帮助你处理文本,甚至做拼写检查。
|
||||
|
||||
### Atom
|
||||
|
||||
|
||||
![使用 Atom 编辑网页][10]
|
||||
|
||||
|
||||
使用 Atom 编辑网页
|
||||
|
||||
在文本编辑器领域,[Atom][11] 后来居上。很短的时间内,Atom 就获得了一批忠实的追随者。
|
||||
|
||||
Atom 的定制功能让其拥有如此的吸引力。如果有一些技术癖好,你完全可以在这个编辑器上随意设置。如果你不仅仅是忠于技术,Atom 也有 [一些主题][12] ,你可以用来更改编辑器外观。
|
||||
|
||||
千万不要低估 Atom 数以千计的 [拓展包][13]。它们能在不同功能上拓展 Atom,能根据你的爱好把 Atom 转化成合适的文本编辑器或是开发环境。Atom 不仅为程序员提供服务。它同样适用于 [作家的文本编辑器][14]。
|
||||
|
||||
### Xed
|
||||
|
||||
![使用 Xed 编辑文章][16]
|
||||
|
||||
|
||||
使用 Xed 编辑文章
|
||||
|
||||
可能对你来说,Atom 和 Geany 略显臃肿。也许你只想要一个轻量级的、不过于简陋、也没有太多你很少会用到的特性的编辑器。如此看来,[Xed][17] 正是你所期待的。
|
||||
|
||||
如果你觉得 Xed 看着眼熟,那是因为它是 MATE 桌面环境的 Pluma 编辑器的一个分支。我发现相比于 Pluma,Xed 速度更快一点,响应更灵敏一点,不过,因人而异吧。
|
||||
|
||||
虽然 Xed 没有那么多的功能,但也不至于太糟。它有扎实的语法高亮,略强于一般的搜索替换和拼写检查功能以及单窗口编辑多文件的选项卡式界面。
|
||||
|
||||
### 其他值得发掘的编辑器
|
||||
|
||||
我不是 KDE 的狂热爱好者,但当我工作在 KDE 环境下时,[KDevelop][18] 一直是我深度工作时的首选。它很强大而且灵活,又没有过大的体积,这一点很像 Geany。
|
||||
|
||||
虽然我自己还没有爱上它,但是我认识的几个人都对 [Brackets][19] 赞不绝口。它很强大,而且不得不承认它的 [拓展][20] 真的很实用。
|
||||
|
||||
被称为 “开发者的编辑器” 的 [Notepadqq][21] ,总让人联想到 [Notepad++][22]。虽然它的发展仍处于早期阶段,但至少它看起来还是很有前景的。
|
||||
|
||||
对于那些只有简单文本编辑需求的人来说,[Gedit][23] 和 [Kate][24] 都是极好的选择。它们绝不是过于原始的编辑器,都有足够丰富的功能去完成大型文本编辑工作。Gedit 和 Kate 都以速度和易上手而闻名。
|
||||
|
||||
你有其他 Emacs 和 Vim 之外的挚爱编辑器么?留言下来,免费分享。
|
||||
|
||||
### 关于作者
|
||||
Scott Nesbitt:长期使用开源软件,撰写关于各种有趣事物的文章。做自己力所能及的事,并不把自己太当回事。你可以在网络上的这些地方找到我。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/9/3-alternatives-emacs-and-vim
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
译者:[CYLeft](https://github.com/CYLeft)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/scottnesbitt
|
||||
[1]:https://en.wikipedia.org/wiki/Editor_war
|
||||
[2]:/file/370196
|
||||
[3]:https://opensource.com/sites/default/files/u128651/geany.png (Editing a LaTeX document with Geany)
|
||||
[4]:https://www.geany.org/
|
||||
[5]:https://opensource.com/article/17/6/introduction-latex
|
||||
[6]:https://en.wikipedia.org/wiki/Integrated_development_environment
|
||||
[7]:https://opensource.com/article/17/4/linux-chromebook-gallium-os
|
||||
[8]:http://plugins.geany.org/
|
||||
[9]:/file/370191
|
||||
[10]:https://opensource.com/sites/default/files/u128651/atom.png (Editing a webpage with Atom)
|
||||
[11]:https://atom.io
|
||||
[12]:https://atom.io/themes
|
||||
[13]:https://atom.io/packages
|
||||
[14]:https://opensource.com/article/17/5/atom-text-editor-packages-writers
|
||||
[15]:/file/370201
|
||||
[16]:https://opensource.com/sites/default/files/u128651/xed.png (Writing this article in Xed)
|
||||
[17]:https://github.com/linuxmint/xed
|
||||
[18]:https://www.kdevelop.org/
|
||||
[19]:http://brackets.io/
|
||||
[20]:https://registry.brackets.io/
|
||||
[21]:http://notepadqq.altervista.org/s/
|
||||
[22]:https://opensource.com/article/16/12/notepad-text-editor
|
||||
[23]:https://wiki.gnome.org/Apps/Gedit
|
||||
[24]:https://kate-editor.org/
|
@ -0,0 +1,131 @@
|
||||
Linux 容器安全的 10 个层面
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA)
|
||||
|
||||
容器提供了打包应用程序的一种简单方法,实现了从开发、测试到投入生产系统的无缝传递。它也有助于确保跨不同环境的一致性,包括物理服务器、虚拟机、以及公有云或私有云。这些好处使得一些组织迅速采用了容器,以便更方便地部署和管理那些能为其提升业务价值的应用程序。
|
||||
|
||||
企业对安全的要求很高,任何在容器中运行核心服务的人都会问:“容器安全吗?”以及“怎么才能相信运行在容器中的我的应用程序是安全的?”
|
||||
|
||||
保护容器的安全,就像是保护任何安全运行的进程。在你部署和运行你的容器之前,你需要考虑整个解决方案栈各个层面的安全。你也需要考虑应用程序和容器整个生命周期的安全。
|
||||
|
||||
尝试从这十个关键的因素去确保容器解决方案栈不同层面、以及容器生命周期的不同阶段的安全。
|
||||
|
||||
### 1. 容器宿主机操作系统和多租户环境
|
||||
|
||||
由于容器将应用程序和它的依赖作为一个单元来处理,使得开发者构建和升级应用程序变得更加容易,并且,容器可以启用多租户技术将许多应用程序和服务部署到一台共享主机上。在一台单独的主机上以容器方式部署多个应用程序、按需启动和关闭单个容器都是很容易的。为完全实现这种打包和部署技术的优势,运营团队需要运行容器的合适环境。运营者需要一个安全的操作系统,它能够在边界上保护容器安全、从容器中保护主机内核、以及保护容器彼此之间的安全。
|
||||
|
||||
### 2. 容器内容(使用可信来源)
|
||||
|
||||
容器是隔离的 Linux 进程,并且在一个共享主机的内核中,容器内使用的资源被限制在仅允许你运行着应用程序的沙箱中。保护容器的方法与保护你的 Linux 中运行的任何进程的方法是一样的。降低权限是非常重要的,也是保护容器安全的最佳实践。甚至是使用尽可能小的权限去创建容器。容器应该以一个普通用户的权限来运行,而不是 root 权限的用户。在 Linux 中可以使用多级安全,Linux 命名空间、安全强化 Linux( [SELinux][1])、[cgroups][2] 、capabilities(译者注:Linux 内核的一个安全特性,它打破了传统的普通用户与 root 用户的概念,在进程级提供更好的安全控制)、以及安全计算模式( [seccomp][3] ),Linux 的这五种安全特性可以用于保护容器的安全。
|
||||
|
||||
在谈到安全时,首先要考虑你的容器里面有什么?例如 ,有些时候,应用程序和基础设施是由很多可用的组件所构成。它们中的一些是开源的包,比如,Linux 操作系统、Apache Web 服务器、Red Hat JBoss 企业应用平台、PostgreSQL、以及Node.js。这些包的容器化版本已经可以使用了,因此,你没有必要自己去构建它们。但是,对于你从一些外部来源下载的任何代码,你需要知道这些包的原始来源,是谁构建的它,以及这些包里面是否包含恶意代码。
|
||||
|
||||
### 3. 容器注册(安全访问容器镜像)
|
||||
|
||||
你的团队所构建的容器是在下载的公共容器镜像之上分层构建的,因此,管理和下载容器镜像以及内部构建镜像,与管理和下载其它类型的二进制文件的方式相同,这一点至关重要。许多私有的注册服务支持容器镜像的保存。选择一个私有的注册服务,它可以帮你对存储在其中的容器镜像自动实施策略。
|
||||
|
||||
### 4. 安全性与构建过程
|
||||
|
||||
在一个容器化环境中,构建过程是软件生命周期的一个阶段,它将所需的运行时库和应用程序代码集成到一起。管理这个构建过程对于软件栈安全来说是很关键的。遵守“一次构建,到处部署”的原则,可以确保构建过程的结果正是生产系统中需要的。保持容器的恒定不变也很重要 — 换句话说就是,不要对正在运行的容器打补丁,而是,重新构建和部署它们。
|
||||
|
||||
不论你是处于一个强监管的行业中,还是只希望优化团队的产出,都应该在设计容器镜像管理和构建过程时,利用容器分层的优势来实现控制分离:
|
||||
|
||||
* 运营团队管理基础镜像
|
||||
* 设计者管理中间件、运行时、数据库、以及其它解决方案
|
||||
* 开发者专注于应用程序层面,并且只写代码
|
||||
|
||||
|
||||
|
||||
最后,标记好你的定制构建容器,这样可以确保在构建和部署时不会搞混乱。
|
||||
|
||||
### 5. 控制好在同一个集群内部署应用
|
||||
|
||||
对于在构建过程中出现的任何问题,或者在镜像被部署之后发现的任何漏洞,都可以通过基于策略的自动化工具添加另外一层安全防护。
|
||||
|
||||
我们来看一下,一个应用程序的构建使用了三个容器镜像层:内核、中间件、以及应用程序。如果在内核镜像中发现了问题,那么只能重新构建镜像。一旦构建完成,镜像就会被发布到容器平台注册中。这个平台可以自动检测到发生变化的镜像。对于基于这个镜像的其它构建将被触发一个预定义的动作,平台将自己重新构建应用镜像,合并进修复库。
|
||||
|
||||
|
||||
|
||||
一旦构建完成,镜像将被发布到容器平台的内部注册中。在它的内部注册中,会立即检测到镜像发生变化,应用程序在这里将会被触发一个预定义的动作,自动部署更新镜像,确保运行在生产系统中的代码总是使用更新后的最新的镜像。所有的这些功能协同工作,将安全功能集成到你的持续集成和持续部署(CI/CD)过程和管道中。
|
||||
|
||||
### 6. 容器编配:保护容器平台
|
||||
|
||||
|
||||
|
||||
当然了,应用程序很少会部署在单一的容器中。甚至,简单的应用程序一般也会有一个前端、一个后端、以及一个数据库。而在容器中以微服务模式部署应用程序,意味着应用程序将部署在多个容器中,有时它们在同一台宿主机上,有时它们分布在多个宿主机或者节点上。
|
||||
|
||||
在大规模的容器部署时,你应该考虑:
|
||||
|
||||
* 哪个容器应该被部署在哪个宿主机上?
|
||||
* 那个宿主机应该有什么样的性能?
|
||||
* 哪个容器需要访问其它容器?它们之间如何发现彼此?
|
||||
* 你如何控制和管理对共享资源的访问,像网络和存储?
|
||||
* 如何监视容器健康状况?
|
||||
* 如何去自动扩展性能以满足应用程序的需要?
|
||||
* 如何在满足安全需求的同时启用开发者的自助服务?
|
||||
|
||||
|
||||
|
||||
考虑到开发者和运营者的能力,提供基于角色的访问控制是容器平台的关键要素。例如,编配管理服务器是中心访问点,应该接受最高级别的安全检查。APIs 是规模化的自动容器平台管理的关键,可以用于为 pods、服务、以及复制控制器去验证和配置数据;在入站请求上执行项目验证;以及调用其它主要系统组件上的触发器。
|
||||
|
||||
### 7. 网络隔离
|
||||
|
||||
在容器中部署现代微服务应用,经常意味着跨多个节点在多个容器上部署。考虑到网络防御,你需要一种在一个集群中的应用之间的相互隔离的方法。一个典型的公有云容器服务,像 Google 容器引擎(GKE)、Azure 容器服务、或者 Amazon Web 服务(AWS)容器服务,是单租户服务。他们让你在你加入的虚拟机集群上运行你的容器。对于多租户容器的安全,你需要容器平台为你启用一个单一集群,并且分割通讯以隔离不同的用户、团队、应用、以及在这个集群中的环境。
|
||||
|
||||
使用网络命名空间,每个容器集合(即大家熟知的 “pod”)都会得到它自己的 IP 和绑定的端口范围,以此在节点层面隔离每个 pod 的网络。默认情况下,来自不同命名空间(项目)的 pod 不能向其它项目的 pod 和服务发送或者接收数据包。你可以使用这些特性在同一个集群内隔离开发者环境、测试环境、以及生产环境。但是,这样会导致 IP 地址和端口数量的激增,使得网络管理更加复杂。另外,由于容器的设计初衷就是可以被反复重建,你应该在能处理这种复杂性的工具上进行投入。在容器平台上比较受欢迎的做法是使用 [软件定义网络][4](SDN)去提供一个统一的集群网络,它允许跨不同集群的容器进行通讯。
|
||||
|
||||
### 8. 存储
|
||||
|
||||
容器即可被用于无状态应用,也可被用于有状态应用。保护附加存储是保护有状态服务的一个关键要素。容器平台对多个受欢迎的存储提供了插件,包括网络文件系统(NFS)、AWS 弹性块存储(EBS)、GCE 持久磁盘、GlusterFS、iSCSI、 RADOS(Ceph)、Cinder、等等。
|
||||
|
||||
一个持久卷(PV)可以通过资源提供者支持的任何方式装载到一个主机上。提供者有不同的性能,而每个 PV 的访问模式是设置为被特定的卷支持的特定模式。例如,NFS 能够支持多路客户端同时读/写,但是,一个特定的 NFS 的 PV 可以在服务器上被发布为只读模式。每个 PV 得到它自己的一组反应特定 PV 性能的访问模式的描述,比如,ReadWriteOnce、ReadOnlyMany、以及 ReadWriteMany。
|
||||
|
||||
### 9. API 管理、终端安全、以及单点登陆(SSO)
|
||||
|
||||
保护你的应用包括管理应用、以及 API 的认证和授权。
|
||||
|
||||
Web SSO 能力是现代应用程序的一个关键部分。在构建它们的应用时,容器平台带来了开发者可以使用的多种容器化服务。
|
||||
|
||||
APIs 是微服务构成的应用程序的关键所在。这些应用程序有多个独立的 API 服务,这导致了终端服务数量的激增,它就需要额外的管理工具。推荐使用 API 管理工具。所有的 API 平台应该提供多种 API 认证和安全所需要的标准选项,这些选项既可以单独使用,也可以组合使用,以用于发布证书或者控制访问。
|
||||
|
||||
|
||||
|
||||
这些选项包括标准的 API keys、应用 ID 和密钥对、 以及 OAuth 2.0。
|
||||
|
||||
### 10. 在一个联合集群中的角色和访问管理
|
||||
|
||||
|
||||
|
||||
在 2016 年 7 月份,Kubernetes 1.3 引入了 [Kubernetes 联合集群][5]。这是一个令人兴奋的新特性之一,它是在 Kubernetes 上游、当前的 Kubernetes 1.6 beta 中引用的。联合是用于部署和访问跨多集群运行在公有云或企业数据中心的应用程序服务的。多个集群能够用于去实现应用程序的高可用性,应用程序可以跨多个可用区域、或者去启用部署公共管理、或者跨不同的供应商进行迁移,比如,AWS、Google Cloud、以及 Azure。
|
||||
|
||||
当管理联合集群时,你必须确保你的编配工具能够提供,你所需要的跨不同部署平台的实例的安全性。一般来说,认证和授权是很关键的 — 不论你的应用程序运行在什么地方,将数据安全可靠地传递给它们,以及管理跨集群的多租户应用程序。Kubernetes 扩展了联合集群,包括对联合的秘密数据、联合的命名空间、以及 Ingress objects 的支持。
|
||||
|
||||
### 选择一个容器平台
|
||||
|
||||
当然,它并不仅关乎安全。你需要提供一个你的开发者团队和运营团队有相关经验的容器平台。他们需要一个安全的、企业级的基于容器的应用平台,它能够同时满足开发者和运营者的需要,而且还能够提高操作效率和基础设施利用率。
|
||||
|
||||
想从 Daniel 在 [欧盟开源峰会][7] 上的 [容器安全的十个层面][6] 的演讲中学习更多知识吗?这个峰会将于10 月 23 - 26 日在 Prague 举行。
|
||||
|
||||
### 关于作者
|
||||
Daniel Oh;Microservives;Agile;Devops;Java Ee;Container;Openshift;Jboss;Evangelism
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/10/10-layers-container-security
|
||||
|
||||
作者:[Daniel Oh][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/daniel-oh
|
||||
[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux
|
||||
[2]:https://en.wikipedia.org/wiki/Cgroups
|
||||
[3]:https://en.wikipedia.org/wiki/Seccomp
|
||||
[4]:https://en.wikipedia.org/wiki/Software-defined_networking
|
||||
[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/
|
||||
[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223
|
||||
[7]:http://events.linuxfoundation.org/events/open-source-summit-europe
|
@ -0,0 +1,93 @@
|
||||
谨慎使用 Linux find 命令
|
||||
======
|
||||
![](https://images.idgesg.net/images/article/2017/10/caution-sign-100738884-large.jpg)
|
||||
最近有朋友提醒我,find 命令有一个可以让它更谨慎运行的有用选项:-ok。除了一个重要的区别之外,它的工作方式与 -exec 相似:它会使 find 命令在执行指定的操作之前先请求确认。
|
||||
|
||||
这有一个例子。如果你使用 find 命令查找文件并删除它们,则可以运行下面的命令:
|
||||
```
|
||||
$ find . -name runme -exec rm {} \;
|
||||
|
||||
```
|
||||
|
||||
在当前目录及其子目录中,任何名为 “runme” 的文件都将被立即删除,当然,前提是你要有权删除它们。改用 -ok 选项,你会看到类似这样的东西,find 命令将在删除文件之前请求确认。回答 **y**(代表 “yes”)将允许 find 命令继续并逐个删除文件。
|
||||
```
|
||||
$ find . -name runme -ok rm {} \;
|
||||
< rm ... ./bin/runme > ?
|
||||
|
||||
```
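在真正执行删除之前,还可以先用 -print 单独跑一遍,预览 find 将要匹配到哪些文件。下面在一个临时目录中演示(目录路径是假设的):

```shell
# 先搭建一个演示目录
demo=/tmp/find_ok_demo
rm -rf "$demo"; mkdir -p "$demo/bin"
touch "$demo/bin/runme" "$demo/runme"

# 只列出匹配项,不做任何修改;确认无误后再换成 -ok rm {} \;
find "$demo" -name runme -print
```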
|
||||
|
||||
### -execdir 也是一个选项
|
||||
|
||||
另一个可以用来修改 find 命令行为、并可能使其更可控的选项是 -execdir。-exec 会在运行 find 命令的目录中运行指定的命令,而 -execdir 则是从匹配文件所在的目录运行指定的命令。这是一个它的例子:
|
||||
```
|
||||
$ pwd
|
||||
/home/shs
|
||||
$ find . -name runme -execdir pwd \;
|
||||
/home/shs/bin
|
||||
|
||||
```
|
||||
```
|
||||
$ find . -name runme -execdir ls \;
|
||||
ls rm runme
|
||||
|
||||
```
|
||||
|
||||
到现在为止还挺好。但要记住的是,-execdir 也会在匹配文件的目录中执行命令。如果运行下面的命令,并且该目录中包含一个名为 “ls” 的文件,那么即使该文件没有执行权限,它也会运行该文件。使用 **-exec** 或 **-execdir** 类似于通过 source 来运行命令。
|
||||
```
|
||||
$ find . -name runme -execdir ls \;
|
||||
Running the /home/shs/bin/ls file
|
||||
|
||||
```
|
||||
```
|
||||
$ find . -name runme -execdir rm {} \;
|
||||
This is an imposter rm command
|
||||
|
||||
```
|
||||
```
|
||||
$ ls -l bin
|
||||
total 12
|
||||
-r-x------ 1 shs shs 25 Oct 13 18:12 ls
|
||||
-rwxr-x--- 1 shs shs 36 Oct 13 18:29 rm
|
||||
-rw-rw-r-- 1 shs shs 28 Oct 13 18:55 runme
|
||||
|
||||
```
|
||||
```
|
||||
$ cat bin/ls
|
||||
echo Running the $0 file
|
||||
$ cat bin/rm
|
||||
echo This is an imposter rm command
|
||||
|
||||
```
|
||||
|
||||
### -okdir 选项也会请求权限
|
||||
|
||||
要更谨慎,可以使用 **-okdir** 选项。类似 **-ok**,该选项将在运行命令之前请求确认。
|
||||
```
|
||||
$ find . -name runme -okdir rm {} \;
|
||||
< rm ... ./bin/runme > ?
|
||||
|
||||
```
|
||||
|
||||
你也可以小心地指定你想用的命令的完整路径,以避免像上面那样的冒牌命令出现的任何问题。
|
||||
```
|
||||
$ find . -name runme -execdir /bin/rm {} \;
|
||||
|
||||
```
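下面在一个临时目录里演示这种做法:用 /bin/rm 的完整路径执行删除,这样即使匹配文件所在的目录里有冒牌的 rm,也不会被执行(目录路径是假设的):

```shell
# 搭建演示目录并放入一个待删除的 runme 文件
demo=/tmp/find_fullpath_demo
rm -rf "$demo"; mkdir -p "$demo/bin"
touch "$demo/bin/runme"

# 用完整路径调用 rm,绕开 PATH 解析
find "$demo" -name runme -execdir /bin/rm {} \;
ls "$demo/bin"   # runme 已被删除,没有任何输出
```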
|
||||
|
||||
find 命令除了默认打印之外还有很多选项,有些可以使你的文件搜索更精确,但谨慎一点总是好的。
|
||||
|
||||
在 [Facebook][1] 和 [LinkedIn][2] 上加入网络世界社区来进行评论。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3233305/linux/using-the-linux-find-command-with-caution.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[Locez](https://github.com/locez)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[1]:https://www.facebook.com/NetworkWorld/
|
||||
[2]:https://www.linkedin.com/company/network-world
|
@ -0,0 +1,65 @@
|
||||
无需 Root 实现在 Android 设备上运行 Linux
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2017/10/Termux-720x340.jpg)
|
||||
|
||||
曾经,我尝试过搜索一种简单的可以在 Android 上运行 Linux 的方法。我当时唯一的意图只是想使用 Linux 以及一些基本的应用程序,比如 SSH,Git,awk 等。要求的并不多!我并不想 root Android 设备。我有一台平板电脑,主要用于阅读电子书,新闻和少数 Linux 博客。除此之外也不怎么用它了。因此我决定用它来实现一些 Linux 的功能。在 Google Play 商店上浏览了几分钟后,一个应用程序瞬间引起了我的注意,勾起了我实验的欲望。如果你也想在 Android 设备上运行 Linux,这个应用可能会有所帮助。
|
||||
|
||||
### Termux - 在 Android 和 Chrome OS 上运行的 Android 终端模拟器
|
||||
|
||||
**Termux** 是一个 Android 终端模拟器以及提供 Linux 环境的应用程序。跟许多其他应用程序不同,你无需 root 设备也无需进行设置。它是开箱即用的!它会自动安装好一个最基本的 Linux 系统,当然你也可以使用 APT 软件包管理器来安装其他软件包。总之,你可以让你的 Android 设备变成一台袖珍的 Linux 电脑。它不仅适用于 Android,你还能在 Chrome OS 上安装它。
|
||||
|
||||
![](http://www.ostechnix.com/wp-content/uploads/2017/10/termux.png)
|
||||
|
||||
Termux 提供了许多重要的功能,比您想象的要多。
|
||||
|
||||
* 它允许你通过 openSSH 登陆远程服务器
|
||||
* 你还能够从远程系统 SSH 到 Android 设备中。
|
||||
* 使用 rsync 和 curl 将您的智能手机通讯录同步到远程系统。
|
||||
* 支持不同的 shell,比如 BASH,ZSH,以及 FISH 等等。
|
||||
* 可以选择不同的文本编辑器来编辑/查看文件,支持 Emacs,Nano 和 Vim。
|
||||
* 使用 APT 软件包管理器在 Android 设备上安装你想要的软件包。支持 Git,Perl,Python,Ruby 和 Node.js 的最新版本。
|
||||
* 可以将 Android 设备与蓝牙键盘,鼠标和外置显示器连接起来,就像是整合在一起的设备一样。Termux 支持键盘快捷键。
|
||||
* Termux 支持几乎所有 GNU/Linux 命令。
|
||||
|
||||
此外通过安装插件可以启用其他一些功能。例如,**Termux:API** 插件允许你访问 Android 和 Chrome 的硬件功能。其他有用的插件包括:
|
||||
|
||||
* Termux:Boot - 设备启动时运行脚本
|
||||
* Termux:Float - 在浮动窗口中运行 Termux
|
||||
* Termux:Styling - 提供配色方案和支持 powerline 的字体来定制 Termux 终端的外观。
|
||||
* Termux:Task - 提供一种从任务栏类的应用中调用 Termux 可执行文件的简易方法。
|
||||
* Termux:Widget - 提供一种从主屏幕启动小脚本的简易方法。
|
||||
|
||||
要了解更多有关 Termux 的信息,请长按终端上的任意位置并选择“帮助”菜单选项来打开内置的帮助部分。它唯一的缺点就是**需要 Android 5.0 及更高版本**。如果它支持 Android 4.x 和更旧的版本的话,将会更有用。你可以在 **Google Play 商店** 和 **F-Droid** 中找到并安装 Termux。
|
||||
|
||||
要在 Google Play 商店中安装 Termux,点击下面按钮。
|
||||
|
||||
[![termux][1]][2]
|
||||
|
||||
若要在 F-Droid 中安装,则点击下面按钮。
|
||||
|
||||
[![][1]][3]
|
||||
|
||||
你现在知道如何使用 Termux 在 Android 设备上使用 Linux 了。你有用过其他更好的应用吗?请在下面留言框中留言,我很乐意也去尝试它们!
|
||||
|
||||
此致敬礼!
|
||||
|
||||
相关资源:
|
||||
|
||||
+ [Termux 官网][4]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/termux-run-linux-android-devices-no-rooting-required/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]:https://play.google.com/store/apps/details?id=com.termux
|
||||
[3]:https://f-droid.org/packages/com.termux/
|
||||
[4]:https://termux.com/
|
@ -0,0 +1,123 @@
|
||||
如何在 Linux/Unix 之上绑定 ntpd 到特定的 IP 地址
|
||||
======
|
||||
|
||||
默认的情况下,我们的 ntpd/NTP 服务器会监听所有的接口和 IP 地址,也就是 0.0.0.0:123。怎样才能在一个 Linux 或是 FreeBSD Unix 服务器上,确保它只监听特定的 IP 地址,比如 localhost 或者是 192.168.1.1:123?
|
||||
|
||||
NTP 是网络时间协议的首字母简写,这是一个用来同步两台电脑之间时间的协议。ntpd 是一个操作系统守护进程,可以设置并且保证系统的时间与互联网标准时间服务器同步。
|
||||
|
||||
[![如何在Linux和Unix服务器,防止 NTPD 监听0.0.0.0:123 并将其绑定到特定的 IP 地址][1]][1]
|
||||
|
||||
NTP 使用 `/etc/` 目录之下的 `ntp.conf` 作为配置文件。
|
||||
|
||||
|
||||
|
||||
## /etc/ntp.conf 之中的 interface 指令
|
||||
|
||||
你可以通过设置 `interface` 指令来防止 ntpd 监听 0.0.0.0:123,语法如下:
|
||||
|
||||
```
|
||||
interface listen IPv4|IPv6|all
|
||||
interface ignore IPv4|IPv6|all
|
||||
interface drop IPv4|IPv6|all
|
||||
```
|
||||
|
||||
上面的配置决定了 ntpd 监听哪些网络地址,或者在不处理任何请求的情况下忽略/丢弃它们。**ignore 会阻止打开匹配的地址;drop 则会让 ntpd 打开该地址,但对收到的所有数据包不做检查、直接丢弃。** 举个例子,如果要忽略所有接口上的监听,将下面的语句加入 `/etc/ntp.conf`:
|
||||
|
||||
`interface ignore wildcard`
|
||||
|
||||
如果只监听 127.0.0.1 和 192.168.1.1 则是这样:
|
||||
|
||||
```
|
||||
interface listen 127.0.0.1
|
||||
interface listen 192.168.1.1
|
||||
```
|
||||
|
||||
这是我 FreeBSD 云服务器上的样例 /etc/ntp.conf 文件:
|
||||
|
||||
`$ egrep -v '^#|^$' /etc/ntp.conf`
|
||||
|
||||
样例输出为:
|
||||
|
||||
```
|
||||
tos minclock 3 maxclock 6
|
||||
pool 0.freebsd.pool.ntp.org iburst
|
||||
restrict default limited kod nomodify notrap noquery nopeer
|
||||
restrict -6 default limited kod nomodify notrap noquery nopeer
|
||||
restrict source limited kod nomodify notrap noquery
|
||||
restrict 127.0.0.1
|
||||
restrict -6 ::1
|
||||
leapfile "/var/db/ntpd.leap-seconds.list"
|
||||
interface ignore wildcard
|
||||
interface listen 172.16.3.1
|
||||
interface listen 10.105.28.1
|
||||
```
|
||||
|
||||
|
||||
## 重启 ntpd
|
||||
|
||||
在 FreeBSD Unix 之上重新加载/重启 ntpd
|
||||
|
||||
`$ sudo /etc/rc.d/ntpd restart`
|
||||
或者 [在 Debian 和 Ubuntu Linux 之上使用下面的命令][2]:
|
||||
`$ sudo systemctl restart ntp`
|
||||
或者 [在 CentOS/RHEL 7/Fedora Linux 之上使用下面的命令][2]:
|
||||
`$ sudo systemctl restart ntpd`
|
||||
|
||||
## 校验
|
||||
|
||||
使用 `netstat` 和 `ss` 命令来检查 ntpd 只绑定到了特定的 IP 地址:
|
||||
|
||||
`$ netstat -tulpn | grep :123`
|
||||
或是
|
||||
`$ ss -tulpn | grep :123`
|
||||
样例输出:
|
||||
|
||||
```
|
||||
udp 0 0 10.105.28.1:123 0.0.0.0:* -
|
||||
udp 0 0 172.16.3.1:123 0.0.0.0:* -
|
||||
```
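你也可以用 awk 自动判断输出里是否还有监听在 0.0.0.0:123 上的条目。下面以上文的样例输出为数据做一个小演示(文件路径是假设的):

```shell
# 构造与上文一致的样例输出
cat > /tmp/ss_sample.txt <<'EOF'
udp 0 0 10.105.28.1:123 0.0.0.0:* -
udp 0 0 172.16.3.1:123 0.0.0.0:* -
EOF

# 第 4 列是本地地址;若出现 0.0.0.0:123 则说明仍在监听所有地址
if awk '$4 == "0.0.0.0:123" { f=1 } END { exit !f }' /tmp/ss_sample.txt
then echo "仍监听在所有地址上"
else echo "只绑定到了特定 IP"   # 样例数据会走到这个分支
fi
```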
|
||||
在 FreeBSD Unix 服务器上,使用 [sockstat 命令][3]:
|
||||
|
||||
```
|
||||
$ sudo sockstat
|
||||
$ sudo sockstat -4
|
||||
$ sudo sockstat -4 | grep :123
|
||||
```
|
||||
|
||||
|
||||
样例输出:
|
||||
|
||||
```
|
||||
root ntpd 59914 22 udp4 127.0.0.1:123 *:*
|
||||
root ntpd 59914 24 udp4 127.0.1.1:123 *:*
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Vivek Gite 投稿
|
||||
|
||||
本文作者是 nixCraft 的创始人,也是一位经验丰富的系统管理员,还是一名 Linux 操作系统和 Unix shell 脚本的培训师。他为全球不同行业,包括 IT、教育业、安全防护、空间研究和非营利性组织的客户工作。关注他的 [Twitter][4]、[Facebook][5]、[Google+][6]。
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/faq/how-to-bind-ntpd-to-specific-ip-addresses-on-linuxunix/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[Drshu](https://github.com/Drshu)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz
|
||||
[1]:https://www.cyberciti.biz/media/new/faq/2017/10/how-to-prevent-ntpd-to-listen-on-all-interfaces-on-linux-unix-box.jpg
|
||||
[2]:https://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/
|
||||
[3]:https://www.cyberciti.biz/faq/freebsd-unix-find-the-process-pid-listening-on-a-certain-port-commands/
|
||||
[4]:https://twitter.com/nixcraft
|
||||
[5]:https://facebook.com/nixcraft
|
||||
[6]:https://plus.google.com/+CybercitiBiz
|
137
translated/tech/20171102 What is huge pages in Linux.md
Normal file
@ -0,0 +1,137 @@
|
||||
Linux 中的 huge pages 是个什么玩意?
|
||||
======
|
||||
学习 Linux 中的 huge pages(巨大页)。理解什么是 hugepages,如何进行配置,如何查看当前状态以及如何禁用它。
|
||||
|
||||
![Huge Pages in Linux][1]
|
||||
|
||||
本文,我们会详细介绍 huge page,让你能够回答:Linux 中的 huge page 是什么玩意?在 RHEL6,RHEL7,Ubuntu 等 Linux 中,如何启用/禁用 huge pages?如何查看 huge page 的当前值?
|
||||
|
||||
首先让我们从 Huge page 的基础知识开始讲起。
|
||||
|
||||
### Linux 中的 Huge page 是个什么玩意?
|
||||
|
||||
Huge pages 有助于 Linux 系统进行虚拟内存管理。顾名思义,除了标准的 4KB 大小的页面外,它们还能帮助管理内存中的巨大页面。使用 huge pages,你最大可以定义 1GB 的页面大小。
|
||||
|
||||
在系统启动期间,huge pages 会为应用程序预留一部分内存。这部分被 huge pages 占用的内存永远不会被交换出去,除非你修改配置,它会一直保留在内存中。这会极大地提高像 Oracle 数据库这样需要海量内存的应用程序的性能。
|
||||
|
||||
### 为什么使用巨大的页?
|
||||
|
||||
在虚拟内存管理中,内核维护一个将虚拟内存地址映射到物理地址的表,对于每个页面操作,内核都需要加载相关的映射表。如果你的内存页很小,那么你需要加载的页就会很多,导致内核需要加载更多的映射表。而这会降低性能。
|
||||
|
||||
使用巨大的页,意味着所需要的页变少了。从而大大减少由内核加载的映射表的数量。这提高了内核级别的性能最终有利于应用程序的性能。
|
||||
|
||||
简而言之,通过启用 huge pages,系统只需要处理较少的页面映射表,从而减少访问/维护它们的开销!
|
||||
|
||||
### 如何配置 huge pages?
|
||||
|
||||
运行下面命令来查看当前 huge pages 的详细内容。
|
||||
|
||||
```
|
||||
root@kerneltalks # grep Huge /proc/meminfo
|
||||
AnonHugePages: 0 kB
|
||||
HugePages_Total: 0
|
||||
HugePages_Free: 0
|
||||
HugePages_Rsvd: 0
|
||||
HugePages_Surp: 0
|
||||
Hugepagesize: 2048 kB
|
||||
```
|
||||
|
||||
从上面输出可以看到,每个页的大小为 2MB(`Hugepagesize`) 并且系统中目前有 0 个页 (`HugePages_Total`)。这里巨大页的大小可以从 2MB 增加到 1GB。
|
||||
|
||||
运行下面的脚本可以获取系统当前需要多少个巨大页。该脚本取之于 Oracle。
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
#
|
||||
# hugepages_settings.sh
|
||||
#
|
||||
# Linux bash script to compute values for the
|
||||
# recommended HugePages/HugeTLB configuration
|
||||
#
|
||||
# Note: This script does calculation for all shared memory
|
||||
# segments available when the script is run, no matter it
|
||||
# is an Oracle RDBMS shared memory segment or not.
|
||||
# Check for the kernel version
|
||||
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
|
||||
# Find out the HugePage size
|
||||
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}`
|
||||
# Start from 1 pages to be on the safe side and guarantee 1 free HugePage
|
||||
NUM_PG=1
|
||||
# Cumulative number of pages required to handle the running shared memory segments
|
||||
for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"`
|
||||
do
|
||||
MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
|
||||
if [ $MIN_PG -gt 0 ]; then
|
||||
NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
|
||||
fi
|
||||
done
|
||||
# Finish with results
|
||||
case $KERN in
|
||||
'2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
|
||||
echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
|
||||
'2.6' | '3.8' | '3.10' | '4.1' ) echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
|
||||
*) echo "Unrecognized kernel version $KERN. Exiting." ;;
|
||||
esac
|
||||
# End
|
||||
```
|
||||
将它以 `hugepages_settings.sh` 为名保存到 `/tmp` 中,然后运行之:
|
||||
```
|
||||
root@kerneltalks # sh /tmp/hugepages_settings.sh
|
||||
Recommended setting: vm.nr_hugepages = 124
|
||||
```
|
||||
|
||||
输出如上结果,只是数字会有一些出入。
|
||||
|
||||
这意味着,你系统需要 124 个每个 2MB 的巨大页!若你设置页面大小为 4MB,则结果就变成了 62。你明白了吧?
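脚本的核心计算其实很简单:所需页数 = 所需内存 ÷ 单页大小。可以手工验证一下(这里假设共需要 252 MB 内存、页大小为 2048 kB):

```shell
need_kb=$(( 252 * 1024 ))   # 假设共需要 252 MB 内存,换算成 kB
page_kb=2048                # Hugepagesize = 2048 kB
echo $(( need_kb / page_kb ))   # 输出 126
```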
|
||||
|
||||
### 配置内核中的 hugepages
|
||||
|
||||
本文最后一部分内容是配置上面提到的 [内核参数 ][2] 然后重新加载。将下面内容添加到 `/etc/sysctl.conf` 中,然后输入 `sysctl -p` 命令重新加载配置。
|
||||
|
||||
```
|
||||
vm.nr_hugepages=126
|
||||
```
|
||||
|
||||
注意我们这里多加了两个额外的页,因为我们希望在实际需要的页面数量外多一些额外的空闲页。
|
||||
|
||||
现在,内核已经配置好了,但是要让应用能够使用这些巨大页,还需要提高内存锁定(memlock)的限额。新的限额应该为 126 个页 x 每个页 2 MB = 252 MB,也就是 258048 KB。
|
||||
|
||||
你需要编辑 `/etc/security/limits.conf` 中的如下配置
|
||||
|
||||
```
|
||||
* soft memlock 258048
|
||||
* hard memlock 258048
|
||||
```
|
||||
|
||||
某些情况下,这些设置是在指定应用的文件中配置的,比如 Oracle DB 就是在 `/etc/security/limits.d/99-grid-oracle-limits.conf` 中配置的。
|
||||
|
||||
这就完成了!你可能还需要重启应用来让应用来使用这些新的巨大页。
|
||||
|
||||
### 如何禁用 hugepages?
|
||||
|
||||
HugePages 默认是开启的。使用下面命令来查看 hugepages 的当前状态。
|
||||
|
||||
```
|
||||
root@kerneltalks# cat /sys/kernel/mm/transparent_hugepage/enabled
|
||||
[always] madvise never
|
||||
```
|
||||
|
||||
输出中的 `[always]` 标志说明系统启用了 hugepages。
|
||||
|
||||
若使用的是基于 RedHat 的系统,则应该要查看的文件路径为 `/sys/kernel/mm/redhat_transparent_hugepage/enabled`。
|
||||
|
||||
若想禁用巨大页,则在 `/etc/grub.conf` 中的 `kernel` 行后面加上 `transparent_hugepage=never`,然后重启系统。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://kerneltalks.com/services/what-is-huge-pages-in-linux/
|
||||
|
||||
作者:[Shrikant Lavhate][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://kerneltalks.com
|
||||
[1]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png
|
||||
[2]:https://kerneltalks.com/linux/how-to-tune-kernel-parameters-in-linux/
|
@ -1,50 +0,0 @@
|
||||
Autorandr:自动调整屏幕布局
|
||||
======
|
||||
像许多笔记本用户一样,我经常将笔记本插入到不同的显示器上(桌面上有多台显示器,演示时有投影机等)。运行 xrandr 命令或点击界面非常繁琐,编写脚本也不是很好。
|
||||
|
||||
最近,我遇到了 [autorandr][1],它使用 EDID(和其他设置)检测连接的显示器,保存 xrandr 配置并恢复它们。它也可以在加载特定配置时运行任意脚本。我已经打包了它,目前仍在 NEW 状态。如果你不能等待,[这是 deb][2],[这是 git 仓库][3]。
|
||||
|
||||
要使用它,只需安装软件包,并创建你的初始配置(我这里是 undocked):
|
||||
```
|
||||
autorandr --save undocked
|
||||
|
||||
```
|
||||
|
||||
然后,连接你的笔记本(或者插入你的外部显示器),使用 xrandr(或其他任何)更改配置,然后保存你的新配置(我这里是 workstation):
|
||||
```
|
||||
autorandr --save workstation
|
||||
|
||||
```
|
||||
|
||||
对你额外的配置(或当你有新的配置)进行重复操作。
Autorandr 有 `udev`、`systemd` 和 `pm-utils` 钩子,当新的显示器出现时 `autorandr --change` 应该会立即运行。如果需要,也可以手动运行 `autorandr --change` 或 `autorandr --load workstation`。你也可以把自己的脚本放在 `~/.config/autorandr/$PROFILE/postswitch` 中,在加载配置后运行。由于我运行 i3,我的 workstation 配置如下所示:

```
#!/bin/bash

xrandr --dpi 92
xrandr --output DP2-2 --primary
i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;'
i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;'
i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;'
```

它适当地修正了 dpi,设置主屏幕(可能不需要?),并移动 i3 工作区。你可以通过在配置文件目录中添加一个 `block` 钩子来让某个配置永远不会被自动加载。
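文中提到的 `block` 钩子用起来很简单:只要配置目录下存在名为 `block` 的可执行文件,autorandr 就不会自动加载该配置。下面是一个创建示例(`docked` 是为演示虚构的配置名,请换成你自己的配置名):

```shell
# 为假设的配置 docked 创建 block 钩子,阻止其被自动加载
mkdir -p ~/.config/autorandr/docked
printf '#!/bin/sh\nexit 0\n' > ~/.config/autorandr/docked/block
chmod +x ~/.config/autorandr/docked/block
```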
如果你定期更换显示器,请看一下!

--------------------------------------------------------------------------------

via: https://www.donarmstrong.com/posts/autorandr/

作者:[Don Armstrong][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.donarmstrong.com
[1]:https://github.com/phillipberndt/autorandr
[2]:https://www.donarmstrong.com/autorandr_1.2-1_all.deb
[3]:https://git.donarmstrong.com/deb_pkgs/autorandr.git
@ -7,7 +7,7 @@

### 在 Arch Linux 中设置日语环境

首先,安装必要的日语字体,以正确查看日语 ASCII 格式:

首先,为了正确查看日语 ASCII 格式,先安装必要的日语字体:

```
sudo pacman -S adobe-source-han-sans-jp-fonts otf-ipafont
```

@ -27,7 +27,7 @@ pacaur -S ttf-monapo

sudo pacman -S ibus ibus-anthy
```

在 **~/.xprofile** 中添加以下行(如果不存在,创建一个):

在 **~/.xprofile** 中添加以下几行(如果不存在,创建一个):

```
# Settings for Japanese input
export GTK_IM_MODULE='ibus'

@ -38,7 +38,7 @@ export XMODIFIERS=@im='ibus'

ibus-daemon -drx
```

~/.xprofile 允许我们在窗口管理器启动之前在 X 用户会话开始时执行命令。

~/.xprofile 允许我们在 X 用户会话开始时且在窗口管理器启动之前执行命令。

保存并关闭文件。重启 Arch Linux 系统以使更改生效。
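把补丁片段中散落的几行合起来,一个完整的 `~/.xprofile` 草稿大致如下。其中 `QT_IM_MODULE` 一行是按 ibus 的常见配置补充的假设,原文片段中未完整展示;`command -v` 的防护判断也是为便于演示而加的:

```shell
# Settings for Japanese input
export GTK_IM_MODULE='ibus'
export QT_IM_MODULE='ibus'    # 假设:ibus 的常见配套设置,原文片段未完整展示
export XMODIFIERS=@im='ibus'
# -d 以守护进程方式运行,-r 替换已有实例,-x 同时提供 XIM 服务
if command -v ibus-daemon >/dev/null 2>&1; then
    ibus-daemon -drx
fi
```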
@ -72,9 +72,9 @@ ibus-setup

[![][2]][8]

你还可以在键盘绑定中编辑默认的快捷键。完成所有更改后,单击应用并确定。就是这样。从任务栏中的 iBus 图标中选择日语,或者按下**Command/Window 键+空格键**来在日语和英语(或者系统中的其他默认语言)之间切换。你可以从 iBus 首选项窗口更改键盘快捷键。

你还可以在键盘绑定中编辑默认的快捷键。完成所有更改后,点击应用并确定。就是这样。从任务栏中的 iBus 图标中选择日语,或者按下**SUPER 键+空格键**(LCTT 译注:SUPER 键通常为 Command/Window 键)来在日语和英语(或者系统中的其他默认语言)之间切换。你可以从 iBus 首选项窗口更改键盘快捷键。

你现在知道如何在 Arch Linux 及其衍生版中使用日语了。如果你发现我们的指南很有用,那么请您在社交、专业网络上分享,并支持 OSTechNix。

现在你知道如何在 Arch Linux 及其衍生版中使用日语了。如果你发现我们的指南很有用,那么请您在社交、专业网络上分享,并支持 OSTechNix。

@ -84,7 +84,7 @@ via: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/

作者:[][a]
译者:[geekpi](https://github.com/geekpi)

校对:[校对者ID](https://github.com/校对者ID)

校对:[Locez](https://github.com/locez)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -3,9 +3,9 @@

### 目标

学习使用 "pass" 密码管理器来组织你的密码

学习在 Linux 上使用 "pass" 密码管理器来管理你的密码

### 需求

### 条件

* 需要 root 权限来安装需要的包

@ -16,15 +16,15 @@

### 约定

* **#** - 执行指定命令需要 root 权限,可以是直接使用 root 用户来执行或者使用 `sudo` 命令来执行
* **$** - 使用非特权普通用户执行指定命令

* **$** - 使用普通的非特权用户执行指定命令

### 介绍

如果你有根据目的不同设置不同密码的好习惯,你可能已经感受到要一个密码管理器的必要性了。在 Linux 上有很多选择,可以是专利软件(如果你敢的话)也可以是开源软件。如果你跟我一样喜欢简洁的话,你可能会对 `pass` 感兴趣。

如果你有根据不同的意图设置不同密码的好习惯,你可能已经感受到需要一个密码管理器的必要性了。在 Linux 上有很多选择,可以是专利软件(如果你敢的话)也可以是开源软件。如果你跟我一样喜欢简洁的话,你可能会对 `pass` 感兴趣。

### First steps

Pass 作为一个密码管理器,其实际上是对类似 `gpg` 和 `git` 等可信赖的实用工具的一种封装。虽然它也有图形界面,但它专门设计能成在命令行下工作的:因此它也可以在 headless machines 上工作 (LCTT 注:根据 wikipedia 的说法,所谓 headless machines 是指没有显示器、键盘和鼠标的机器,一般通过网络链接来控制)。

Pass 作为一个密码管理器,其实际上是一些你可能早已每天使用的、可信赖且实用的工具的一种封装,比如 `gpg` 和 `git`。虽然它也有图形界面,但它专门设计成能在命令行下工作:因此它也可以在 headless machines 上工作(LCTT 注:根据 wikipedia 的说法,所谓 headless machines 是指没有显示器、键盘和鼠标的机器,一般通过网络链接来控制)。

### 步骤 1 - 安装

@ -42,7 +42,7 @@ Pass 不在官方仓库中,但你可以从 `epel` 中获取到它。要在 Cen

# yum install epel-release
```

然而在 Red Hat 企业版的 Linux 上,这个额外的源是不可用的;你需要从官方的 EPEL 网站上下载它。

然而在 Red Hat 企业版的 Linux 上,这个额外的源是不可用的;你需要从 EPEL 官方网站上下载它。
#### Debian and Ubuntu

```

@ -95,12 +95,12 @@ Password Store

pass mysite
```

然而更好的方法是使用 `-c` 选项让 pass 将密码直接拷贝道粘帖板上:

然而更好的方法是使用 `-c` 选项让 pass 将密码直接拷贝到剪切板上:

```
pass -c mysite
```

这种情况下粘帖板中的内容会在 `45` 秒后自动清除。两种方法都会要求你输入 gpg 密码。

这种情况下剪切板中的内容会在 `45` 秒后自动清除。两种方法都会要求你输入 gpg 密码。

### 生成密码

@ -109,11 +109,11 @@ Pass 也可以为我们自动生成(并自动存储)安全密码。假设我们

pass generate mysite 15
```

若希望密码只包含字母和数字则可以是使用 `--no-symbols` 选项。生成的密码会显示在屏幕上。也可以通过 `--clip` 或 `-c` 选项让 pass 吧密码直接拷贝到粘帖板中。通过使用 `-q` 或 `--qrcode` 选项来生成二维码:

若希望密码只包含字母和数字则可以使用 `--no-symbols` 选项。生成的密码会显示在屏幕上。也可以通过 `--clip` 或 `-c` 选项让 pass 把密码直接拷贝到剪切板中。通过使用 `-q` 或 `--qrcode` 选项来生成二维码:

![qrcode][1]

从上面的截屏中尅看出,生成了一个二维码,不过由于 `mysite` 的密码以及存在了,pass 会提示我们确认是否要覆盖原密码。

从上面的截屏中可以看出,生成了一个二维码,不过由于 `mysite` 的密码已经存在了,pass 会提示我们确认是否要覆盖原密码。

Pass 使用 `/dev/urandom` 设备作为(伪)随机数据生成器来生成密码,同时它使用 `xclip` 工具来将密码拷贝到剪切板中,并使用 `qrencode` 来将密码以二维码的形式显示出来。在我看来,这种模块化的设计正是它最大的优势:它并不重复造轮子,而只是将常用的工具包装起来完成任务。

@ -131,9 +131,9 @@

pass git init
pass git remote add <name> <url>
```

我们可以把这个仓库当成普通密码仓库来用。唯一的不同点在于每次我们新增或修改一个密码,`pass` 都会自动将该文件加入索引并创建一个提交。

我们可以把这个密码仓库当成普通仓库来用。唯一的不同点在于每次我们新增或修改一个密码,`pass` 都会自动将该文件加入索引并创建一个提交。
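结合上面几节的命令,可以把日常操作整理成一个小小的备忘脚本。下面只是一个示意草稿:`mysite` 是假设的条目名,实际运行前需要已安装 pass、完成 gpg 密钥与 `pass init` 的初始化,并按上文配置好 git 远端:

```shell
# 将 pass 的日常操作整理为备忘脚本(示意;mysite 为假设的条目名)
cat > pass-demo.sh <<'EOF'
#!/bin/sh
pass generate --no-symbols mysite 15  # 生成 15 位仅含字母数字的密码并存储
pass -c mysite                        # 拷贝到剪切板,45 秒后自动清除
pass git push                         # 将自动生成的提交推送到远端
EOF
chmod +x pass-demo.sh
```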
`pass` 有一个叫做 `qtpass` 的图形界面,而且 `pass` 也支持 Windows 和 MacOs。通过使用 `PassFF` 插件,它还能获取 firefox 中存储的密码。在它的项目网站上可以查看更多详细信息。试一下 `pass` 吧,你不会失望的!

`pass` 有一个叫做 `qtpass` 的图形界面,而且也支持 Windows 和 macOS。通过使用 `PassFF` 插件,它还能获取 firefox 中存储的密码。在它的项目网站上可以查看更多详细信息。试一下 `pass` 吧,你不会失望的!

--------------------------------------------------------------------------------

@ -142,7 +142,7 @@ via: https://linuxconfig.org/how-to-organize-your-passwords-using-pass-password-

作者:[Egidio Docile][a]
译者:[lujun9972](https://github.com/lujun9972)

校对:[校对者ID](https://github.com/校对者ID)

校对:[Locez](https://github.com/locez)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,120 +0,0 @@

如何找出并打包文件成 tar 包
======

我想找出所有的 \*.doc 文件并将它们创建成一个 tar 包,然后存储在 /nfs/backups/docs/file.tar 中。是否可以在 Linux 或者类 Unix 系统上查找并 tar 打包文件?

find 命令用于按照给定条件在目录层次结构中搜索文件。tar 命令是用于 Linux 和类 Unix 系统创建 tar 包的归档工具。

[![How to find and tar files on linux unix][1]][1]

让我们看看如何将 tar 命令与 find 命令结合在一个命令行中创建一个 tar 包。

## Find 命令

语法是:

```
find /path/to/search -name "file-to-search" -options
## 找出所有 Perl(*.pl)文件 ##
find $HOME -name "*.pl" -print
## 找出所有 \*.doc 文件 ##
find $HOME -name "*.doc" -print
## 找出所有 *.sh(shell 脚本)并运行 ls -l 命令 ##
find . -iname "*.sh" -exec ls -l {} +
```

最后一个命令的输出示例:

```
-rw-r--r-- 1 vivek vivek 1169 Apr 4 2017 ./backups/ansible/cluster/nginx.build.sh
-rwxr-xr-x 1 vivek vivek 1500 Dec 6 14:36 ./bin/cloudflare.pure.url.sh
lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/cmspostupload.sh -> postupload.sh
lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/cmspreupload.sh -> preupload.sh
lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/cmssuploadimage.sh -> uploadimage.sh
lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/faqpostupload.sh -> postupload.sh
lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/faqpreupload.sh -> preupload.sh
lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/faquploadimage.sh -> uploadimage.sh
-rw-r--r-- 1 vivek vivek 778 Nov 6 14:44 ./bin/mirror.sh
-rwxr-xr-x 1 vivek vivek 136 Apr 25 2015 ./bin/nixcraft.com.301.sh
-rwxr-xr-x 1 vivek vivek 547 Jan 30 2017 ./bin/paypal.sh
-rwxr-xr-x 1 vivek vivek 531 Dec 31 2013 ./bin/postupload.sh
-rwxr-xr-x 1 vivek vivek 437 Dec 31 2013 ./bin/preupload.sh
-rwxr-xr-x 1 vivek vivek 1046 May 18 2017 ./bin/purge.all.cloudflare.domain.sh
lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/tipspostupload.sh -> postupload.sh
lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/tipspreupload.sh -> preupload.sh
lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/tipsuploadimage.sh -> uploadimage.sh
-rwxr-xr-x 1 vivek vivek 1193 Oct 18 2013 ./bin/uploadimage.sh
-rwxr-xr-x 1 vivek vivek 29 Nov 6 14:33 ./.vim/plugged/neomake/tests/fixtures/errors.sh
-rwxr-xr-x 1 vivek vivek 215 Nov 6 14:33 ./.vim/plugged/neomake/tests/helpers/trap.sh
```
## Tar 命令

要[创建 /home/vivek/projects 目录的 tar 包][2],运行:

```
$ tar -cvf /home/vivek/projects.tar /home/vivek/projects
```

## 结合 find 和 tar 命令

语法是:

```
find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} \;
```

或者

```
find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} +
```

例子:

```
find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" \;
```

或者

```
find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" +
```

这里,find 命令的选项:

* **-name "*.doc"** : 按照给定的模式/标准查找文件。在这里,在 $HOME 中查找所有 \*.doc 文件。
* **-exec tar ...** : 对 find 命令找到的所有文件执行 tar 命令。

这里,tar 命令的选项:

* **-r** : 将文件追加到归档末尾。其余参数的含义与 -c 选项相同。
* **-v** : 详细输出。
* **-f out.tar** : 指定归档文件名为 out.tar,所有文件都会被追加到其中。

也可以像下面这样将 find 命令的输出通过管道输入到 tar 命令中:

```
find $HOME -name "*.doc" -print0 | tar -cvf /tmp/file.tar --null -T -
```

传递给 find 命令的 -print0 选项用于正确处理含特殊字符的文件名。tar 的 --null 和 -T 选项告诉它从标准输入/管道按 NUL 分隔读取文件列表。也可以使用 xargs 命令:
```
find $HOME -type f -name "*.sh" | xargs tar cfvz /nfs/x230/my-shell-scripts.tgz
```
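上面的管道写法可以直接动手验证。下面的演示脚本在临时目录中自建几个示例文件(文件名是虚构的,其中一个特意带空格),展示 `-print0` 与 `--null -T -` 如何正确处理这类文件名:

```shell
# 在临时目录中模拟 find + tar 的组合
workdir=$(mktemp -d)
cd "$workdir"
mkdir docs
echo a > docs/a.doc
echo b > "docs/report 1.doc"   # 带空格的文件名
echo c > docs/ignore.txt       # 不匹配 *.doc,不应入包

# find 以 NUL 分隔输出文件名;tar 的 --null -T - 按 NUL 从标准输入读取文件列表
find docs -name "*.doc" -print0 | tar -cvf all-doc-files.tar --null -T -

# 列出归档内容确认:只应包含两个 .doc 文件
tar -tf all-doc-files.tar
```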
有关更多信息,请参阅下面的 man 页面:

```
$ man tar
$ man find
$ man xargs
$ man bash
```

------------------------------

作者简介:

作者是 nixCraft 的创造者,是一名经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本培训师。他曾与全球客户以及 IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 Twitter、Facebook 和 Google+ 上关注他。

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/linux-unix-find-tar-files-into-tarball-command/

作者:[Vivek Gite][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-find-and-tar-files-on-linux-unix.jpg
[2]:https://www.cyberciti.biz/faq/creating-a-tar-file-linux-command-line/
@ -0,0 +1,137 @@

六个例子带你入门 size 命令
======

正如你所知道的那样,Linux 中的目标文件或者说可执行文件由多个段组成(比如 text 和 data)。若你想知道每个段的大小,那么确实存在这么一个命令行工具 - 那就是 `size`。在本教程中,我们将会用几个简单易懂的案例来讲解该工具的基本用法。

在我们开始前,有必要先声明一下,本文的所有案例都在 Ubuntu 16.04 LTS 中测试过。

## Linux size 命令

size 命令基本上就是输出指定目标文件各段及其总和的大小。下面是该命令的语法:

```
size [-A|-B|--format=compatibility]
     [--help]
     [-d|-o|-x|--radix=number]
     [--common]
     [-t|--totals]
     [--target=bfdname] [-V|--version]
     [objfile...]
```

man 页是这样描述它的:

```
GNU 的 size 程序列出参数列表 objfile 中,各目标文件(object)或存档库文件(archive)的段节(section)大小,以及总大小。默认情况下,对每个目标文件或存档库中的每个模块都会产生一行输出。

objfile... 是待检查的目标文件(object)。如果没有指定,则默认为文件 "a.out"。
```

下面是一些问答方式的案例,希望能让你对 size 命令有所了解。

## Q1。如何使用 size 命令?

size 的基本用法很简单。你只需要将目标文件/可执行文件名称作为输入就行了。下面是一个例子:

```
size apl
```

该命令在我的系统中的输出如下:

[![How to use size command][1]][2]

前三列的内容是 text、data 和 bss 段及其相应的大小。然后是十进制格式和十六进制格式的总大小。最后是文件名。
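如果手头没有现成的目标文件,可以自己编译一个最小的 C 程序来观察各段。下面是一个示意脚本(假设系统装有 C 编译器和 binutils 的 size;若没有则仅打印提示):

```shell
# 构造一个最小的 C 程序,观察 text/data/bss 各段
cat > hello.c <<'EOF'
int global_var = 42;          /* 已初始化的全局变量,计入 data 段 */
int uninit_var;               /* 未初始化的全局变量,计入 bss 段 */
int main(void) { return 0; }  /* 代码本身计入 text 段 */
EOF
if command -v cc >/dev/null 2>&1 && command -v size >/dev/null 2>&1; then
    cc -o hello hello.c
    size hello
else
    echo "需要安装 C 编译器与 binutils(size)"
fi
```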
## Q2。如何切换不同的输出格式?

根据 man 页的说法,size 的默认输出格式类似于 Berkeley 的格式。然而,如果你想的话,你也可以使用 System V 规范。要做到这一点,你可以使用 `--format` 选项加上 `SysV` 值。

```
size apl --format=SysV
```

下面是它的输出:

[![How to switch between different output formats][3]][4]

## Q3。如何切换使用其他的单位?

默认情况下,段的大小是以十进制的方式来展示。然而,如果你想的话,也可以使用八进制或十六进制来表示。对应的命令行参数分别为 `-o` 和 `-x`。

[![How to switch between different size units][5]][6]

关于这些参数,man 页是这么说的:

```
-d
-o
-x
--radix=number

使用这几个选项,你可以让各个段节的大小以十进制(`-d',或 `--radix 10')、八进制(`-o',或 `--radix 8')或十六进制(`-x',或 `--radix 16')数字的格式显示。`--radix number' 只支持三个数值参数(8、10、16)。总大小以两种进制给出:`-d' 或 `-x' 的十进制和十六进制输出,或 `-o' 的八进制和十六进制输出。
```

## Q4。如何让 size 命令显示所有对象文件的总大小?

如果你用 size 一次性查找多个文件的段大小,则通过使用 `-t` 选项还可以让它显示各列值的总和。

```
size -t [file1] [file2] ...
```

下面是该命令执行的截屏:

[![How to make size command show totals of all object files][7]][8]

`-t` 选项让它多加了最后那一行。
## Q5。如何让 size 输出每个文件中公共符号的总大小?

若你为 size 提供多个输入文件作为参数,而且想让它显示每个文件中公共符号(指 common segment 中的 symbol)的大小,则你可以带上 `--common` 选项。

```
size --common [file1] [file2] ...
```

另外需要指出的是,当使用 Berkeley 格式时,这些公共符号的大小被纳入了 bss 大小中。

## Q6。还有什么其他的选项?

除了刚才提到的那些选项外,size 还有一些一般性的命令行选项,比如 `-v`(显示版本信息)和 `-h`(显示参数和选项的摘要)。

[![What are the other available command line options][9]][10]

除此之外,你也可以使用 `@file` 选项来让 size 从文件中读取命令行选项。下面是详细的相关说明:

```
读出来的选项会插入并替代原来的 @file 选项。若文件不存在或者无法读取,则该选项不会被删除,而是会以字面意义来解释该选项。

文件中的选项以空格分隔。当选项中要包含空格时需要用单引号或双引号将整个选项包起来。
通过在字符前面添加一个反斜杠可以将任何字符(包括反斜杠本身)纳入到选项中。
文件本身也能包含其他的 @file 选项;任何这样的选项都会被递归处理。
```

## 结论

很明显,size 命令并不适用于所有人。它的目标群体是那些需要处理 Linux 中目标文件/可执行文件结构的人。因此,如果你刚好是目标受众,那么多试试我们这里提到的那些选项,你应该做好每天都使用这个工具的准备。想了解关于 size 的更多信息,请阅读它的 [man 页][11]。

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/linux-size-command/

作者:[Himanshu Arora][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/images/command-tutorial/size-basic-usage.png
[2]:https://www.howtoforge.com/images/command-tutorial/big/size-basic-usage.png
[3]:https://www.howtoforge.com/images/command-tutorial/size-format-option.png
[4]:https://www.howtoforge.com/images/command-tutorial/big/size-format-option.png
[5]:https://www.howtoforge.com/images/command-tutorial/size-o-x-options.png
[6]:https://www.howtoforge.com/images/command-tutorial/big/size-o-x-options.png
[7]:https://www.howtoforge.com/images/command-tutorial/size-t-option.png
[8]:https://www.howtoforge.com/images/command-tutorial/big/size-t-option.png
[9]:https://www.howtoforge.com/images/command-tutorial/size-v-x1.png
[10]:https://www.howtoforge.com/images/command-tutorial/big/size-v-x1.png
[11]:https://linux.die.net/man/1/size