mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-02-25 00:50:15 +08:00

Merge remote-tracking branch 'LCTT/master'

This commit is contained in: commit 070ba0fc5e

@@ -577,40 +577,40 @@ sed < inputfile -En -e '

As we have just seen, Sed has buffering capabilities thanks to the hold space. It also has test and branch instructions. These features make Sed a [Turing-complete][30] language. As silly as it may sound, this means you can write any program in Sed. You can achieve any purpose, but that does not mean it will be easy, nor that the result will be particularly efficient.

But don't worry. In this article, we will use the simplest examples able to demonstrate the test and branch features. Those features may seem limited at first sight, but keep in mind some people have written <http://www.catonmat.net/ftp/sed/dc.sed> [calculators], <http://www.catonmat.net/ftp/sed/sedtris.sed> [Tetris], and many other kinds of applications in Sed!

#### Labels and branching

In some ways, you can see Sed as a limited assembly language. So you won't find the "for" or "while" loops or the "if ... else" statements common in higher-level languages, but you can achieve the same results using branches.

![The Sed branch command][31]

If you looked at the flowchart describing the Sed execution model at the beginning of this article, you know Sed automatically increments the program counter (PC), so that commands run in the order of the program. But using the branch (`b`) instruction, you can change that sequential flow by selecting any command of the program to run next. The jump destination is explicitly defined using a label (`:`).

![The Sed label command][32]

Here is an example:

```
echo hello | sed -ne '
:start     # put the "start" label on that line of the program
p          # print the pattern space content
b start    # continue execution at the :start label
' | less
```

That Sed program behaves much like the `yes` command: it takes a string and produces an infinite stream containing that string.

Branching to a label bypasses Sed's automatic behaviors: it does not read any input, it does not print anything, and it does not update any buffer. It just jumps to an instruction other than the next one in the order of the source program.

It is worth mentioning that if no label is given as an argument to the branch command (`b`), the branch goes directly to the end of the program. So Sed starts a new cycle. This feature can be used to skip some instructions, and can therefore serve as an alternative to blocks:

```
cat -n inputfile | sed -ne '
/usb/!b
/daemon/!b
p
'
```

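For comparison, here is a sketch of the same filter written with nested blocks instead of branches (assuming the same `inputfile`):

```
cat -n inputfile | sed -ne '
/usb/{
    /daemon/{
        p
    }
}
'
```

Both versions print only the lines containing `usb` and `daemon`; the branch-based version simply avoids the nesting.
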
#### Conditional branching

@@ -619,93 +619,90 @@ p

But what we have seen so far is, in the traditional sense, an unconditional branch: when it runs, it always jumps to a specific destination. A conditional branch, on the other hand, may or may not jump to a specific instruction, depending on the current state of the system.

Sed has only one conditional instruction, the test (`t`) command. It jumps to a different instruction only if a substitution was made since the start of the current cycle or since the previous conditional branch. In other words, the test command branches only if the substitution flag is set.

![The Sed test command][33]

Using the test instruction, you can easily implement a loop in a Sed program. As a practical example, you can use it to pad lines to a given length (something you cannot do using only regular expressions):

```
# Center text
cut -d: -f1 inputfile | sed -Ee '
:start
s/^(.{,19})$/ \1 /    # pad lines shorter than 20 characters with one space
                      # at the start and another one at the end
t start               # if we added a space, go back to the :start label
s/(.{20}).*/| \1 |/   # keep only the first 20 characters of the line
                      # to fix the off-by-one error caused by odd-length lines
'
```

If you read the previous example carefully, you may have noticed that I cheated a little by pre-processing the data with the `cut` command before feeding it to Sed.

We can, however, perform the same task using only Sed, at the price of a few small changes to the program:

```
cat inputfile | sed -Ee '
s/:.*//               # keep only the first field
t start
:start
s/^(.{,19})$/ \1 /    # pad lines shorter than 20 characters with one space
                      # at the start and another one at the end
t start               # if we added a space, go back to the :start label
s/(.{20}).*/| \1 |/   # keep only the first 20 characters of the line
                      # to fix the off-by-one error caused by odd-length lines
'
```

In the example above, you may have been surprised by the following construct:

```
t start
:start
```

At first sight, the branch here seems useless, since it just jumps to the instruction that would have run anyway. But if you read the definition of the test command carefully, you will see that it branches only if a substitution was made since the start of the current cycle or since the previous test command. In other words, the test instruction has the side effect of clearing the substitution flag. That is exactly the purpose of the code fragment above. It is a common trick in Sed programs containing conditional branches, used to avoid false positives when several substitution commands are involved.

I agree that clearing the substitution flag was not absolutely mandatory here, since the particular substitution command I used to pad the string to the correct length is idempotent: an extra iteration would not change the result. But let's now look again at the second example:

```
# Classify user accounts based on their login program
cat inputfile | sed -Ene '
s/^/login=/
/nologin/s/^/type=SERV /
/false/s/^/type=SERV /
t print
s/^/type=USER /
:print
s/:.*//p
'
```

My intent here was to tag user accounts with "SERV" or "USER" depending on their default login program. If you run it, you will see the "SERV" tags as expected. However, there is no trace of the "USER" tags in the output. Why? Because the `t print` instruction always branches, whatever the line content is: the substitution flag is always set by the very first substitution command of the program. And once set, the flag remains set until the next line is read or until the next test command. Here is a solution to fix that program:

```
# Classify user accounts based on their login program
cat inputfile | sed -Ene '
s/^/login=/

t classify   # clear the "substitution flag"
:classify

/nologin/s/^/type=SERV /
/false/s/^/type=SERV /
t print
s/^/type=USER /
:print
s/:.*//p
'
```

### Handling literal text

Sed is a non-interactive text editor. A non-interactive one, but a text editor nevertheless. And it wouldn't be a complete text editor without some way of inserting literal text into the output. I'm not a big fan of its text-editing features, because I find their syntax awkward (even by Sed standards), but sometimes you just can't avoid them.

In strict POSIX syntax, the three commands that handle literal text destined for the output, namely change (`c`), insert (`i`), and append (`a`), all follow the same peculiar syntax: the command letter is followed by a backslash, and the text to insert starts on the next line of the script:

```
head -5 inputfile | sed '
1i\
```

@@ -713,10 +710,10 @@ head -5 inputfile | sed '

```
$a\
# end
'
```

To insert multiple lines of text, you must end each of them with a backslash:

```
head -5 inputfile | sed '
1i\
```

@@ -728,45 +725,46 @@ $a\

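The middle of that example is elided by the diff; here is a minimal self-contained sketch of a multi-line insertion (the inserted header text is my own illustration, assuming the same `inputfile`):

```
head -5 inputfile | sed '
1i\
# List of user accounts\
# (first five entries only)
'
```
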
Some Sed implementations, like GNU Sed, make the newline after the initial backslash optional, even in `--posix` mode. I did not find anything in the standard authorizing that alternate syntax (if I simply failed to find that provision in the standard, please tell me in the comments section!). So, if portability matters to you, use it at your own risk:

```
# Non-POSIX syntax:
head -5 inputfile | sed -e '
1i\# List of user accounts
$a\# end
'
```

Some Sed implementations even make the initial backslash completely optional. This is undoubtedly a vendor-specific extension to the POSIX standard, so check the manual of your Sed version before relying on that syntax.

After this quick overview, let's now review these commands in more detail, starting with the change command, which I haven't introduced yet.

#### The change command

The change command (`c\`) deletes the pattern space and starts a new cycle, just like the `d` command does. The only difference is that the user-supplied text is written to the output when the command runs.

![The Sed change command][34]

```
cat -n inputfile | sed -e '
/systemd/c\
# :REMOVED:
s/:.*// # This will NOT be applied to the "changed" text
'
```

If the change command is associated with an address range, the text is output only once, when the end of the range is reached. This is somewhat an exception to the rule that a Sed command is applied repeatedly to all the lines of an address range:

```
cat -n inputfile | sed -e '
19,22c\
# :REMOVED:
s/:.*// # This will NOT be applied to the "changed" text
'
```

So, if you want the change command to be applied repeatedly to all the lines of an address range, you have no choice but to wrap it in a block:

```
cat -n inputfile | sed -e '
19,22{c\
```

@@ -774,14 +772,14 @@ cat -n inputfile | sed -e '

```
}
s/:.*// # This will NOT be applied to the "changed" text
'
```

#### The insert command

The insert command (`i\`) writes the user-supplied text to the output immediately. It does not modify the program flow or the buffer contents in any way.

![The Sed insert command][35]

```
# display the first five user names with a title on the first row
sed < inputfile -e '
```

@@ -790,16 +788,16 @@ USER NAME

```
s/:.*//
5q
'
```

#### The append command

The append command (`a\`) queues some text for display when the next line of input is read. The text is output at the end of the current cycle (including when the program ends) or when a new line is read from the input using the `n` or `N` commands.

![The Sed append command][36]

The same example as above, but this time the text is inserted at the bottom instead of at the top:

```
sed < inputfile -e '
5a\
```

@@ -807,80 +805,81 @@ USER NAME

```
s/:.*//
5q
'
```

#### The read command

This is the fourth command for inserting text into the output stream: the read command (`r`). It works exactly like the append command, except that instead of taking the text hard-coded in the Sed script, it writes the content of a file to the output.

The read command only schedules the file to be read: the file is actually read when the append queue is flushed, not when the read command runs. This can have consequences if there are concurrent accesses to that file, if it is not a regular file (say, a character device or a named pipe), or if it is modified in the meantime.

As an illustration, if you use the write command (described in detail in the next section) in tandem with the read command to write to and re-read from a temporary file, you can produce some creative results (a French version of the [Shiritori][37] game, for instance):

```
printf "%s\n" "Trois p'tits chats" "Chapeau d' paille" "Paillasson" |
sed -ne '
r temp
a\
----
w temp
'
```

This closes the list of Sed commands dedicated to inserting text into the output stream. My last example was purely for fun, but since I mentioned the write command, it is the perfect transition to the next section, where we will see how to write data from Sed to external files.

### Alternate outputs

Sed is designed so that all text transformations are written to the process's standard output. However, Sed has a couple of features allowing data to be sent to alternate destinations. You have two ways of doing that: using the dedicated write command (`w`), or adding a write flag to a substitution command (`s`).

#### The write command

The write command (`w`) appends the content of the pattern space to the given destination file. POSIX requires the destination file to be created by Sed before it starts processing any data. If the destination file already exists, it is overwritten.

![The Sed write command][38]

So, the file is created even if you never actually write to it. For example, the following Sed program will create/overwrite the `output` file, even though the write command never runs:

```
echo | sed -ne '
q        # quit immediately
w output # this command never runs
'
```

You can point several write commands at the same destination file. All the write commands targeting the same file append to that file (the behavior is close to that of the shell redirection operator `>>`):

```
sed < inputfile -ne '
/:\/bin\/false$/w server
/:\/usr\/sbin\/nologin$/w server
w output
'
cat server
```

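For comparison, the `server` file produced above could roughly be recreated with a plain shell redirection (a sketch using standard tools; it does not reproduce the `output` file written by the third command):

```
grep -E ':/bin/false$|:/usr/sbin/nologin$' inputfile >> server
```
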
#### The write flag of the substitution command

Earlier, we saw that the substitution command (`s`) has a `p` option to print the pattern space content after a substitution. It provides an analogous `w` option to write the pattern space content to a file after a substitution:

```
sed < inputfile -ne '
s/:.*\/nologin$//w server
s/:.*\/false$//w server
'
cat server
```

### Comments

I have used them countless times without ever taking the time to introduce them formally, so let's do it now: as in most programming languages, comments are a way to add free-form text that the software does not interpret. Sed's syntax is so terse that I cannot overstate the need to comment your scripts wherever necessary. Otherwise, they will be nearly unintelligible to anyone but their author.

![The Sed comment command][39]

However, as with many other parts of Sed, comments have their own subtleties. First and foremost, comments are not a syntactical construct: they are genuine Sed commands. A comment is a "do nothing" command, but it is a command nevertheless. At least, that is how POSIX defines them. So, strictly speaking, they are only allowed where other commands are allowed.

Most Sed implementations relax that requirement by allowing inline comments, as I have used them throughout this article.

Before closing that topic, a word about the special case of the `#n` comment (a `#` immediately followed by the letter `n`, without any space). If this exact comment is found on the first line of a script, Sed switches to quiet mode (i.e., it clears the auto-print flag), just as if the `-n` option had been given on the command line.

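A quick way to see the effect (a minimal sketch; without the `#n` on the first line, each input line would be printed twice):

```
printf 'a\nb\n' | sed '#n
p
'
```
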
### Rarely used commands

@@ -888,64 +887,63 @@ cat server

#### The line number command

The `=` command writes to standard output the number of the line currently being read by Sed, that is, the content of the line counter (`LC`). There is no way to capture that number in any of the Sed buffers, nor to format the output. Those two limitations greatly reduce the usefulness of this command.

![The Sed line number command][40]

Remember that in strict POSIX-compliance mode, Sed does not reset that counter when several input files are given on the command line: it keeps incrementing it, as if all the input files were concatenated. Some Sed implementations, like GNU Sed, have an option to reset the counter after each input file.

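A short sketch of typical uses, assuming the same `inputfile`:

```
sed '=' inputfile | head    # print each line preceded by its number (on its own line)
sed -n '$=' inputfile       # emulate `wc -l` by printing only the last line number
```
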
#### The unambiguous print command

The `l` (lowercase letter `l`) command is similar to the print command (`p`), except it writes the content of the pattern space in an unambiguous form. Quoting the [POSIX standard][12]:

> The characters listed in the XBD escape-sequences table (`\\`, `\a`, `\b`, `\f`, `\r`, `\t`, `\v`) shall be written as the corresponding escape sequence; the `\n` in that table is not applicable. Non-printable characters not in that table shall be written as one three-digit octal number (with a preceding backslash `\`) for each byte in the character (most significant byte first). Long lines shall be folded, with the point of folding indicated by writing a backslash followed by a newline; the length at which folding occurs is unspecified, but should be appropriate for the output device. The end of each line shall be marked with a `$`.

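In practice, that gives output like this (a small sketch; the exact folding length may vary between implementations):

```
printf 'hello\tworld\n' | sed -n 'l'
hello\tworld$
```
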
![The Sed unambiguous print command][41]

I suspect this command was intended for exchanging data over non-[8-bit-clean channels][42]. As far as I am concerned, I have never used it except for debugging purposes.

#### The transliterate command

The transliterate (`y`) command allows mapping characters of the pattern space from a source set to a destination set. It is quite similar to the `tr` command, but more limited.

![The Sed transliterate command][43]

```
# The `y` c0mm4nd 1s for h4x0rz only
sed < inputfile -e '
s/:.*//
y/abcegio/48<3610/
'
```

Although the syntax of the transliterate command bears some resemblance to that of the substitution command, it does not accept any option after the replacement string. The transliteration is always global.

Note that the transliterate command requires a one-to-one mapping between the source and destination sets. This means the following Sed program probably does not do what you might think at first sight:

```
# BEWARE: this doesn't do what you may think!
sed < inputfile -e '
s/:.*//
y/[a-z]/[A-Z]/
'
```

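The brackets and the dash are matched literally here, so only `[`, `a`, `-`, `z`, and `]` are mapped. To really uppercase a line with `y`, both sets have to be spelled out in full; a sketch:

```
sed < inputfile -e '
s/:.*//
y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/
'
```
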
### Final words

```
# What does this do?
# Hint: the answer is not far away...
sed -E '
s/.*\W(.*)/\1/
h
${ x; p; }
d' < inputfile
```

We have covered all the Sed commands. I can't believe we made it! If you have reached this point, you deserve congratulations, especially if you took the time to try the various examples on your own system!

As you have seen, Sed is complex, not only because of its terse syntax, but also because of the many corner cases and subtle differences in command behavior. No doubt we can blame history for that. Despite those drawbacks, Sed remains a very powerful tool, and even today it is still one of the few heavily used commands in the Unix toolbox. It is time to wrap up this article, and I can't do it without a little support from you: please share your favorite or most creative Sed script with us. If I collect enough of them, I will publish a compilation of these Sed gems!

--------------------------------------------------------------------------------

@@ -954,7 +952,7 @@ via: https://linuxhandbook.com/sed-reference-guide/

Author: [Sylvain Leroux][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

41 sources/talk/20181109 7 reasons I love open source.md (Normal file)

@@ -0,0 +1,41 @@

7 reasons I love open source
======

Being a part of the open source community is a huge win for many reasons.



Here's why I spend so much of my time—including evenings and weekends—[on GitHub][1], as an active member of the open source community.

I've worked on everything from solo projects to small collaborative group efforts to projects with hundreds of contributors. With each project, I've learned something new. That said, here are seven reasons why I contribute to open source:



  * **It keeps my skills fresh.** As someone in a management position at a consultancy, I sometimes feel like I am becoming more and more distant from the physical process of creating software. Working on open source projects allows me to get back to what I love best: writing code. It also allows me to experiment with new technologies, learn new techniques and languages—and keep up with the cool kids!
  * **It teaches me about people.** Working on an open source project with a group of people you've never met teaches you a lot about how to interact with people. You quickly discover that everyone has their own pressures, their own commitments, and differing timescales. Learning how to work collaboratively with a group of strangers is a great life skill.
  * **It makes me a better communicator.** Maintainers of open source projects have a limited amount of time. You quickly learn that to successfully contribute, you must be able to communicate clearly and concisely what you are changing, adding, or fixing, and most importantly, why you are doing it.
  * **It makes me a better developer.** There is nothing quite like having hundreds—or thousands—of other developers depend on your code. It motivates you to pay a lot more attention to software design, testing, and documentation.
  * **It makes my own creations better.** Possibly the most powerful concept behind open source is that it allows you to harness a global network of creative, intelligent, and knowledgeable individuals. I know I have my limits, and I don't know everything, but engaging with the open source community helps me improve my creations.
  * **It teaches me the value of small things.** If the documentation for a project is unclear or incomplete, I don't hesitate to make it better. One small update or fix might save a developer only a few minutes, but multiplied across all the users, your one small change can have a significant impact.
  * **It makes me better at marketing.** Ok, this is an odd one. There are so many great open source projects out there that it can feel like a struggle to get noticed. Working in open source has taught me a lot about the value of marketing your creations. This isn't about spin or creating a flashy website. It is about clearly communicating what you have created, how it is used, and the benefits it brings.

I could go on about how open source helps you build partnerships, connections, and friends, but you get the idea. There are a great many reasons why I thoroughly enjoy being part of the open source community.

You might be wondering how all this applies to the IT strategy for large financial services organizations. Simple: Who wouldn't want a team of developers who are great at communicating and working with people, have cutting-edge skills, and are able to market their creations?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/11/reasons-love-open-source

Author: [Colin Eberhardt][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/colineberhardt
[b]: https://github.com/lujun9972
[1]: https://github.com/ColinEberhardt/

@@ -0,0 +1,57 @@

A Free Guide for Setting Your Open Source Strategy
======



The majority of companies using open source understand its business value, but they may lack the tools to strategically implement an open source program and reap the full rewards. According to a recent survey from [The New Stack][1], "the top three benefits of open source programs are 1) increased awareness of open source, 2) more speed and agility in the development cycle, and 3) better license compliance."

Running an open source program office involves creating a strategy to help you define and implement your approach as well as measure your progress. The [Open Source Guides to the Enterprise][2], developed by The Linux Foundation in partnership with the TODO Group, offer open source expertise based on years of experience and practice.

The most recent guide, [Setting an Open Source Strategy][3], details the essential steps in creating a strategy and setting you on the path to success. According to the guide, "your open source strategy connects the plans for managing, participating in, and creating open source software with the business objectives that the plans serve. This can open up many opportunities and catalyze innovation." The guide covers the following topics:

  1. Why create a strategy?
  2. Your strategy document
  3. Approaches to strategy
  4. Key considerations
  5. Other components
  6. Determine ROI
  7. Where to invest

The critical first step here is creating and documenting your open source strategy, which will "help you maximize the benefits your organization gets from open source." At the same time, your detailed strategy can help you avoid difficulties that may arise from mistakes such as choosing the wrong license or improperly maintaining code. According to the guide, this document can also:

  * Get leaders excited and involved
  * Help obtain buy-in within the company
  * Facilitate decision-making in diffuse, multi-departmental organizations
  * Help build a healthy community
  * Explain your company's approach to open source and support of its use
  * Clarify where your company invests in community-driven, external R&D and where your company will focus on its value added differentiation

"At Salesforce, we have internal documents that we circulate to our engineering team, providing strategic guidance and encouragement around open source. These encourage the creation and use of open source, letting them know in no uncertain terms that the strategic leaders at the company are fully behind it. Additionally, if there are certain kinds of licenses we don't want engineers using, or other open source guidelines for them, our internal documents need to be explicit," said Ian Varley, Software Architect at Salesforce and contributor to the guide.

Open source programs help promote an enterprise culture that can make companies more productive, and, according to the guide, a strong strategy document can "help your team understand the business objectives behind your open source program, ensure better decision-making, and minimize risks."

Learn how to align your goals for managing and creating open source software with your organization's business objectives using the tips and proven practices in the new guide to [Setting an Open Source Strategy][3]. And, check out all 12 [Open Source Guides for the Enterprise][2] for more information on achieving success with open source.

This article originally appeared on [The Linux Foundation][4].

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/11/free-guide-setting-your-open-source-strategy

Author: [Amber Ankerholz][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.linux.com/users/aankerholz
[b]: https://github.com/lujun9972
[1]: https://thenewstack.io/open-source-culture-starts-with-programs-and-policies/
[2]: https://www.linuxfoundation.org/resources/open-source-guides/
[3]: https://www.linuxfoundation.org/resources/open-source-guides/setting-an-open-source-strategy/
[4]: https://www.linuxfoundation.org/blog/2018/11/a-free-guide-for-setting-your-open-source-strategy/

@@ -1,164 +0,0 @@

translating by GraveAccent

Test containers with Python and Conu
======



More and more developers are using containers to develop and deploy their applications. This means that easily testing containers is also becoming important. [Conu][1] (short for container utilities) is a Python library that makes it easy to write tests for your containers. This article shows you how to use it to test your containers.

### Getting started

First you need a container application to test. For that, the following commands create a new directory with a container Dockerfile, and a Flask application to be served by the container.

```
$ mkdir container_test
$ cd container_test
$ touch Dockerfile
$ touch app.py
```

|
||||
Copy the following code inside the app.py file. This is the customary basic Flask application that returns the string “Hello Container World!”
|
||||
```
|
||||
from flask import Flask
|
||||
app = Flask(__name__)
|
||||
|
||||
@app.route('/')
|
||||
def hello_world():
|
||||
return 'Hello Container World!'
|
||||
|
||||
if __name__ == '__main__':
|
||||
app.run(debug=True,host='0.0.0.0')
|
||||
|
||||
```
|
||||
|
||||
### Create and Build a Test Container

To build the test container, add the following instructions to the Dockerfile.

```
FROM registry.fedoraproject.org/fedora-minimal:latest
RUN microdnf -y install python3-flask && microdnf clean all
ADD ./app.py /srv
CMD ["python3", "/srv/app.py"]
```

Then build the container using the Docker CLI tool.

```
$ sudo dnf -y install docker
$ sudo systemctl start docker
$ sudo docker build . -t flaskapp_container
```

Note: The first two commands are only needed if Docker is not installed on your system.

|
||||
After the build use the following command to run the container.
|
||||
```
|
||||
$ sudo docker run -p 5000:5000 --rm flaskapp_container
|
||||
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
|
||||
* Restarting with stat
|
||||
* Debugger is active!
|
||||
* Debugger PIN: 473-505-51
|
||||
|
||||
```
|
||||
|
||||
Finally, use curl to check that the Flask application is correctly running inside the container:
|
||||
```
|
||||
$ curl http://127.0.0.1:5000
|
||||
Hello Container World!
|
||||
|
||||
```
|
||||
|
||||
With the flaskapp_container now running and ready for testing, you can stop it using **Ctrl+C**.
|
||||
|
||||
### Create a test script

Before you write the test script, you must install conu. Inside the previously created container_test directory run the following commands.

```
$ python3 -m venv .venv
$ source .venv/bin/activate
(.venv)$ pip install --upgrade pip
(.venv)$ pip install conu

$ touch test_container.py
```

Then copy and save the following script in the test_container.py file.

```
import conu

PORT = 5000

with conu.DockerBackend() as backend:
    image = backend.ImageClass("flaskapp_container")
    options = ["-p", "5000:5000"]
    container = image.run_via_binary(additional_opts=options)

    try:
        # Check that the container is running and wait for the flask application to start.
        assert container.is_running()
        container.wait_for_port(PORT)

        # Run a GET request on / port 5000.
        http_response = container.http_request(path="/", port=PORT)

        # Check the response status code is 200
        assert http_response.ok

        # Get the response content
        response_content = http_response.content.decode("utf-8")

        # Check that the "Hello Container World!" string is served.
        assert "Hello Container World!" in response_content

        # Get the logs from the container
        logs = [line for line in container.logs()]
        # Check that the Flask application saw the GET request.
        assert b'"GET / HTTP/1.1" 200 -' in logs[-1]

    finally:
        container.stop()
        container.delete()
```

#### Test Setup

The script starts by setting conu to use Docker as a backend to run the container. Then it sets the container image to use the flaskapp_container you built in the first part of this tutorial.

The next step is to configure the options needed to run the container. In this example, the Flask application serves the content on port 5000. Therefore you need to expose this port and map it to the same port on the host.

Finally, the script starts the container, and it's now ready to be tested.

#### Testing methods

Before testing a container, check that the container is running and ready. The example script is using container.is_running and container.wait_for_port. These methods ensure the container is running and the service is available on the expected port.

The container.http_request is a wrapper around the [requests][2] library which makes it convenient to send HTTP requests during the tests. This method returns a [requests.Response][3] object, so it's easy to access the content of the response for testing.

Conu also gives access to the container logs. Once again, this can be useful during testing. In the example above, the container.logs method returns the container logs. You can use them to assert that a specific log was printed, or for example that no exceptions were raised during testing.

Conu provides many other useful methods to interface with containers. A full list of the APIs is available in the [documentation][4]. You can also consult the examples available on [GitHub][5].

All the code and files needed to run this tutorial are available on [GitHub][6] as well. For readers who want to take this example further, you can look at using [pytest][7] to run the tests and build a container test suite.

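A run of the finished test might look like this (a sketch; it assumes the image was built as shown earlier, and that your user has the privileges needed to talk to the Docker daemon):

```
$ source .venv/bin/activate
(.venv)$ python3 test_container.py
```
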
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/test-containers-python-conu/

Author: [Clément Verna][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://fedoramagazine.org/author/cverna/
[1]: https://github.com/user-cont/conu
[2]: http://docs.python-requests.org/en/master/
[3]: http://docs.python-requests.org/en/master/api/#requests.Response
[4]: https://conu.readthedocs.io/en/latest/index.html
[5]: https://github.com/user-cont/conu/tree/master/docs/source/examples
[6]: https://github.com/cverna/container_test_script
[7]: https://docs.pytest.org/en/latest/

@@ -1,202 +0,0 @@

Translating by qhwdw

6.828 lab tools guide
======

### 6.828 lab tools guide

Familiarity with your environment is crucial for productive development and debugging. This page gives a brief overview of the JOS environment and useful GDB and QEMU commands. Don't take our word for it, though. Read the GDB and QEMU manuals. These are powerful tools that are worth knowing how to use.

#### Debugging tips

##### Kernel

GDB is your friend. Use the qemu-gdb target (or its `qemu-gdb-nox` variant) to make QEMU wait for GDB to attach. See the GDB reference below for some commands that are useful when debugging kernels.

If you're getting unexpected interrupts, exceptions, or triple faults, you can ask QEMU to generate a detailed log of interrupts using the -d argument.

To debug virtual memory issues, try the QEMU monitor commands info mem (for a high-level overview) or info pg (for lots of detail). Note that these commands only display the _current_ page table.

(Lab 4+) To debug multiple CPUs, use GDB's thread-related commands like thread and info threads.

##### User environments (lab 3+)

GDB also lets you debug user environments, but there are a few things you need to watch out for, since GDB doesn't know that there's a distinction between multiple user environments, or between user and kernel.

You can start JOS with a specific user environment using make run- _name_ (or you can edit `kern/init.c` directly). To make QEMU wait for GDB to attach, use the run- _name_ -gdb variant.

You can symbolically debug user code, just like you can kernel code, but you have to tell GDB which symbol table to use with the symbol-file command, since it can only use one symbol table at a time. The provided `.gdbinit` loads the kernel symbol table, `obj/kern/kernel`. The symbol table for a user environment is in its ELF binary, so you can load it using symbol-file obj/user/ _name_. _Don't_ load symbols from any `.o` files, as those haven't been relocated by the linker (libraries are statically linked into JOS user binaries, so those symbols are already included in each user binary). Make sure you get the _right_ user binary; library functions will be linked at different EIPs in different binaries and GDB won't know any better!

(Lab 4+) Since GDB is attached to the virtual machine as a whole, it sees clock interrupts as just another control transfer. This makes it basically impossible to step through user code because a clock interrupt is virtually guaranteed the moment you let the VM run again. The stepi command works because it suppresses interrupts, but it only steps one assembly instruction. Breakpoints generally work, but watch out because you can hit the same EIP in a different environment (indeed, a different binary altogether!).

#### Reference

##### JOS makefile

The JOS GNUmakefile includes a number of phony targets for running JOS in various ways. All of these targets configure QEMU to listen for GDB connections (the `*-gdb` targets also wait for this connection). To start once QEMU is running, simply run gdb from your lab directory. We provide a `.gdbinit` file that automatically points GDB at QEMU, loads the kernel symbol file, and switches between 16-bit and 32-bit mode. Exiting GDB will shut down QEMU.

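A minimal sketch of that workflow, from the lab directory:

```
$ make qemu-gdb    # terminal 1: build, then start QEMU waiting for GDB
$ gdb              # terminal 2: attaches to QEMU via the provided .gdbinit
```
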
  * make qemu
    Build everything and start QEMU with the VGA console in a new window and the serial console in your terminal. To exit, either close the VGA window or press `Ctrl-c` or `Ctrl-a x` in your terminal.
  * make qemu-nox
    Like `make qemu`, but run with only the serial console. To exit, press `Ctrl-a x`. This is particularly useful over SSH connections to Athena dialups because the VGA window consumes a lot of bandwidth.
  * make qemu-gdb
    Like `make qemu`, but rather than passively accepting GDB connections at any time, this pauses at the first machine instruction and waits for a GDB connection.
  * make qemu-nox-gdb
    A combination of the `qemu-nox` and `qemu-gdb` targets.
  * make run- _name_
    (Lab 3+) Run user program _name_. For example, `make run-hello` runs `user/hello.c`.
  * make run- _name_ -nox, run- _name_ -gdb, run- _name_ -gdb-nox
    (Lab 3+) Variants of `run-name` that correspond to the variants of the `qemu` target.

The makefile also accepts a few useful variables:

  * make V=1 ...
    Verbose mode. Print out every command being executed, including arguments.
  * make V=1 grade
    Stop after any failed grade test and leave the QEMU output in `jos.out` for inspection.
  * make QEMUEXTRA=' _args_ ' ...
    Specify additional arguments to pass to QEMU.

##### JOS obj/

When building JOS, the makefile also produces some additional output files that may prove useful while debugging:

  * `obj/boot/boot.asm`, `obj/kern/kernel.asm`, `obj/user/hello.asm`, etc.
    Assembly code listings for the bootloader, kernel, and user programs.
  * `obj/kern/kernel.sym`, `obj/user/hello.sym`, etc.
    Symbol tables for the kernel and user programs.
  * `obj/boot/boot.out`, `obj/kern/kernel`, `obj/user/hello`, etc.
    Linked ELF images of the kernel and user programs. These contain symbol information that can be used by GDB.

##### GDB

See the [GDB manual][1] for a full guide to GDB commands. Here are some particularly useful commands for 6.828, some of which don't typically come up outside of OS development.

  * Ctrl-c
    Halt the machine and break in to GDB at the current instruction. If QEMU has multiple virtual CPUs, this halts all of them.
  * c (or continue)
    Continue execution until the next breakpoint or `Ctrl-c`.
  * si (or stepi)
    Execute one machine instruction.
  * b function or b file:line (or breakpoint)
    Set a breakpoint at the given function or line.
  * b * _addr_ (or breakpoint)
    Set a breakpoint at the EIP _addr_.
  * set print pretty
    Enable pretty-printing of arrays and structs.
  * info registers
    Print the general purpose registers, `eip`, `eflags`, and the segment selectors. For a much more thorough dump of the machine register state, see QEMU's own `info registers` command.
  * x/ _N_ x _addr_
    Display a hex dump of _N_ words starting at virtual address _addr_. If _N_ is omitted, it defaults to 1. _addr_ can be any expression.
  * x/ _N_ i _addr_
    Display the _N_ assembly instructions starting at _addr_. Using `$eip` as _addr_ will display the instructions at the current instruction pointer.
  * symbol-file _file_
    (Lab 3+) Switch to symbol file _file_. When GDB attaches to QEMU, it has no notion of the process boundaries within the virtual machine, so we have to tell it which symbols to use. By default, we configure GDB to use the kernel symbol file, `obj/kern/kernel`. If the machine is running user code, say `hello.c`, you can switch to the hello symbol file using `symbol-file obj/user/hello`.

QEMU represents each virtual CPU as a thread in GDB, so you can use all of GDB's thread-related commands to view or manipulate QEMU's virtual CPUs.

  * thread _n_
    GDB focuses on one thread (i.e., CPU) at a time. This command switches that focus to thread _n_, numbered from zero.
  * info threads
    List all threads (i.e., CPUs), including their state (active or halted) and what function they're in.

|
||||
|
||||
QEMU includes a built-in monitor that can inspect and modify the machine state in useful ways. To enter the monitor, press Ctrl-a c in the terminal running QEMU. Press Ctrl-a c again to switch back to the serial console.
|
||||
|
||||
For a complete reference to the monitor commands, see the [QEMU manual][2]. Here are some particularly useful commands:
|
||||
|
||||
  * xp/ _N_ x _paddr_
    Display a hex dump of _N_ words starting at _physical_ address _paddr_. If _N_ is omitted, it defaults to 1. This is the physical memory analogue of GDB's `x` command.
  * info registers
    Display a full dump of the machine's internal register state. In particular, this includes the machine's _hidden_ segment state for the segment selectors and the local, global, and interrupt descriptor tables, plus the task register. This hidden state is the information the virtual CPU read from the GDT/LDT when the segment selector was loaded. Here's the CS when running in the JOS kernel in lab 1 and the meaning of each field:
```
CS =0008 10000000 ffffffff 10cf9a00 DPL=0 CS32 [-R-]
```
    * `CS =0008`
      The visible part of the code selector. We're using segment 0x8. This also tells us we're referring to the global descriptor table (0x8&4=0), and our CPL (current privilege level) is 0x8&3=0.
    * `10000000`
      The base of this segment. Linear address = logical address + 0x10000000.
    * `ffffffff`
      The limit of this segment. Linear addresses above 0xffffffff will result in segment violation exceptions.
    * `10cf9a00`
      The raw flags of this segment, which QEMU helpfully decodes for us in the next few fields.
    * `DPL=0`
      The privilege level of this segment. Only code running with privilege level 0 can load this segment.
    * `CS32`
      This is a 32-bit code segment. Other values include `DS` for data segments (not to be confused with the DS register), and `LDT` for local descriptor tables.
    * `[-R-]`
      This segment is read-only.
  * info mem
    (Lab 2+) Display mapped virtual memory and permissions. For example,
```
ef7c0000-ef800000 00040000 urw
efbf8000-efc00000 00008000 -rw
```
    tells us that the 0x00040000 bytes of memory from 0xef7c0000 to 0xef800000 are mapped read/write and user-accessible, while the memory from 0xefbf8000 to 0xefc00000 is mapped read/write, but only kernel-accessible.
  * info pg
    (Lab 2+) Display the current page table structure. The output is similar to `info mem`, but distinguishes page directory entries and page table entries and gives the permissions for each separately. Repeated PTE's and entire page tables are folded up into a single line. For example,
```
VPN range      Entry         Flags      Physical page
[00000-003ff]  PDE[000]      -------UWP
 [00200-00233]  PTE[200-233] -------U-P 00380 0037e 0037d 0037c 0037b 0037a ..
[00800-00bff]  PDE[002]      ----A--UWP
 [00800-00801]  PTE[000-001] ----A--U-P 0034b 00349
 [00802-00802]  PTE[002]     -------U-P 00348
```
    This shows two page directory entries, spanning virtual addresses 0x00000000 to 0x003fffff and 0x00800000 to 0x00bfffff, respectively. Both PDE's are present, writable, and user and the second PDE is also accessed. The second of these page tables maps three pages, spanning virtual addresses 0x00800000 through 0x00802fff, of which the first two are present, user, and accessed and the third is only present and user. The first of these PTE's maps physical page 0x34b.

QEMU also takes some useful command line arguments, which can be passed into the JOS makefile using the QEMUEXTRA variable:

  * make QEMUEXTRA='-d int' ...
    Log all interrupts, along with a full register dump, to `qemu.log`. You can ignore the first two log entries, "SMM: enter" and "SMM: after RMS", as these are generated before entering the boot loader. After this, log entries look like
```
4: v=30 e=0000 i=1 cpl=3 IP=001b:00800e2e pc=00800e2e SP=0023:eebfdf28 EAX=00000005
EAX=00000005 EBX=00001002 ECX=00200000 EDX=00000000
ESI=00000805 EDI=00200000 EBP=eebfdf60 ESP=eebfdf28
...
```
    The first line describes the interrupt. The `4:` is just a log record counter. `v` gives the vector number in hex. `e` gives the error code. `i=1` indicates that this was produced by an `int` instruction (versus a hardware interrupt). The rest of the line should be self-explanatory. See info registers for a description of the register dump that follows.

    Note: If you're running a pre-0.15 version of QEMU, the log will be written to `/tmp` instead of the current directory.

--------------------------------------------------------------------------------

via: https://pdos.csail.mit.edu/6.828/2018/labguide.html

Author: [csail.mit][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: http://sourceware.org/gdb/current/onlinedocs/gdb/
[2]: http://wiki.qemu.org/download/qemu-doc.html#pcsys_005fmonitor

@@ -0,0 +1,324 @@

Top 30 OpenStack Interview Questions and Answers
======

Nowadays most firms are trying to migrate their IT infrastructure and telco infrastructure into a private cloud, i.e., OpenStack. If you are planning to interview for an OpenStack admin profile, then the list of interview questions below might help you crack the interview.



### Q:1 Define OpenStack and its key components?

Ans: It is a bundle of open-source software which, combined, forms a private cloud platform known as OpenStack. OpenStack is known as a stack of open-source software or projects.

Following are the key components of OpenStack:

  * **Nova** – It handles the virtual machines at the compute level and performs other computing tasks at the compute or hypervisor level.
  * **Neutron** – It provides the networking functionality to VMs, compute and controller nodes.
  * **Keystone** – It provides the identity service for all cloud users and OpenStack services. In other words, Keystone is a method to provide access to cloud users and services.
  * **Horizon** – It provides a GUI (Graphical User Interface); using the GUI, admins can perform day-to-day operations tasks with ease.
  * **Cinder** – It provides the block storage functionality; generally, in OpenStack, Cinder is integrated with Ceph or ScaleIO to serve block storage to compute and controller nodes.
  * **Swift** – It provides the object storage functionality. Generally, Glance images are kept on object storage. External storage like ScaleIO can work as object storage too and can easily be integrated with the Glance service.
  * **Glance** – It provides the cloud image service; using Glance, admins upload and download cloud images.
  * **Heat** – It provides the orchestration service or functionality. Using Heat, admins can easily deploy VMs as a stack, and based on requirements, VMs in the stack can be scaled in and scaled out.
  * **Ceilometer** – It provides the telemetry and billing services.

### Q:2 What services generally run on a controller node?

Ans: The following services run on a controller node:

  * Identity service (Keystone)
  * Image service (Glance)
  * Nova services like Nova API, Nova scheduler & Nova DB
  * Block & object services
  * Ceilometer service
  * MariaDB / MySQL and RabbitMQ services
  * Management services of networking (Neutron) and networking agents
  * Orchestration service (Heat)

### Q:3 What services generally run on a compute node?

Ans: The following services run on a compute node:

  * Nova-compute
  * Networking services like OVS

### Q:4 What is the default location of VMs on compute nodes?

Ans: VMs on a compute node are stored at "**/var/lib/nova/instances**".

### Q:5 What is the default location of Glance images?

Ans: As the Glance service runs on a controller node, all the Glance images are stored under the folder "**/var/lib/glance/images**" on a controller node.

Read More: [**How to Create and Delete Virtual Machine(VM) from Command line in OpenStack**][1]

### Q:6 What is the command to spin up a VM from the command line?

Ans: We can easily spin up a new VM using the following openstack command:

```
# openstack server create --flavor {flavor-name} --image {Image-Name-Or-Image-ID} --nic net-id={Network-ID} --security-group {Security_Group_ID} --key-name {Keypair-Name} <VM_Name>
```

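For instance, with the placeholders filled in (all values below are illustrative only; the network ID is the one used elsewhere in this article):

```
# openstack server create --flavor m1.small --image cirros --nic net-id=e0be93b8-728b-4d4d-a272-7d672b2560a6 --security-group default --key-name mykey demo_vm
```
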
### Q:7 How to list the network namespaces of a tenant in OpenStack?

Ans: The network namespaces of a tenant can be listed using the "ip netns list" command:

```
~# ip netns list
qdhcp-a51635b1-d023-419a-93b5-39de47755d2d
haproxy
vrouter
```

### Q:8 How to execute a command inside a network namespace in OpenStack?

Ans: Let's assume we want to execute the "ifconfig" command inside the network namespace "qdhcp-a51635b1-d023-419a-93b5-39de47755d2d"; then run the command beneath.

Syntax: ip netns exec {network-space} <command>

```
~# ip netns exec qdhcp-a51635b1-d023-419a-93b5-39de47755d2d "ifconfig"
```

### Q:9 How to upload and download a cloud image in Glance from the command line?

Ans: A cloud image can be uploaded to Glance from the command line using the openstack command beneath:

```
~# openstack image create --disk-format qcow2 --container-format bare --public --file {Name-Cloud-Image}.qcow2 <Cloud-Image-Name>
```

Use the openstack command below to download a cloud image from the command line:

```
~# glance image-download --file <Cloud-Image-Name> --progress <Image-ID>
```

### Q:10 How to reset the error state of a VM to active in an OpenStack environment?

Ans: There are some scenarios where a VM goes into an error state, and this error state can be changed to an active state using the command below:

```
~# nova reset-state --active {Instance_id}
```

### Q:11 How to get the list of available floating IPs from the command line?

Ans: Available floating IPs can be listed using the command below:

```
~]# openstack ip floating list | grep None | head -10
```

### Q:12 How to provision a virtual machine in a specific availability zone and on a specific compute host?

Ans: Let's assume we want to provision a VM in the availability zone NonProduction on compute-02; use the command beneath to accomplish this:

```
~]# openstack server create --flavor m1.tiny --image cirros --nic net-id=e0be93b8-728b-4d4d-a272-7d672b2560a6 --security-group NonProd_SG --key-name linuxtec --availability-zone NonProduction:compute-02 nonprod_testvm
```

### Q:13 How to get the list of VMs provisioned on a specific compute node?

Ans: Let's assume we want to list the VMs provisioned on compute-0-19; use the command below.

Syntax: openstack server list --all-projects --long -c Name -c Host | grep -i {Compute-Node-Name}

```
~# openstack server list --all-projects --long -c Name -c Host | grep -i compute-0-19
```

### Q:14 How to view the console log of an OpenStack instance from the command line?

Ans: The console log of an instance can be viewed from the command line using the following commands.

First get the ID of the instance, then use the command below:

```
~# openstack console log show {Instance-id}
```

### Q:15 How to get the console URL of an OpenStack instance?

Ans: The console URL of an instance can be retrieved from the command line using the openstack command below:

```
~# openstack console url show {Instance-id}
```

### Q:16 How to create a bootable Cinder / block storage volume from the command line?

Ans: To create a bootable Cinder or block storage volume (of, say, 8 GB), refer to the steps below:

  * Get the image list using the command below:

```
~# openstack image list | grep -i cirros
| 89254d46-a54b-4bc8-8e4d-658287c7ee92 | cirros | active |
```

  * Create a bootable volume of size 8 GB using the cirros image:

```
~# cinder create --image-id 89254d46-a54b-4bc8-8e4d-658287c7ee92 --display-name cirros-bootable-vol 8
```

### Q:17 How to list all projects or tenants that have been created in your OpenStack?

Ans: The list of projects or tenants can be retrieved from the command line using the openstack command below:

```
~# openstack project list --long
```

### Q:18 How to list the endpoints of OpenStack services?

Ans: OpenStack service endpoints are classified into three categories:

  * Public endpoint
  * Internal endpoint
  * Admin endpoint

Use the openstack command below to view the endpoints of each OpenStack service:

```
~# openstack catalog list
```

To list the endpoints of a specific service like Keystone, use:

```
~# openstack catalog show keystone
```

Read More: [**Step by Step Instance Creation Flow in OpenStack**][2]

### Q:19 In which order we should restart nova services on a controller node?

Ans: The following order should be followed to restart the nova services on an openstack controller node,

* service nova-api restart
* service nova-cert restart
* service nova-conductor restart
* service nova-consoleauth restart
* service nova-scheduler restart
### Q:20 Let’s assume DPDK ports are configured on the compute node for data traffic; now how will you check the status of the DPDK ports?

Ans: As DPDK ports are configured via OpenvSwitch (OVS), their status can be checked with the OVS command-line tools, as sketched below,
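
For example (a sketch; the dpdk0 interface name is hypothetical), standard OVS tooling reports the bridge layout and per-port link state:

```
~# ovs-vsctl show
~# ovs-vsctl list interface dpdk0 | grep -E "name|type|link_state"
```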
### Q:21 How to add new rules to the existing SG (Security Group) from command line in openstack?

Ans: New rules can be added to an existing SG in openstack using the neutron command,

```
~# neutron security-group-rule-create --protocol <tcp or udp> --port-range-min <port-number> --port-range-max <port-number> --direction <ingress or egress> --remote-ip-prefix <IP-address-or-range> Security-Group-Name
```
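
For example (illustrative values; the security group name reuses the one from Q:12), to allow inbound SSH from anywhere:

```
~# neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 0.0.0.0/0 NonProd_SG
```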
### Q:22 How to view the OVS bridges configured on Controller and Compute Nodes?

Ans: OVS bridges on Controller and Compute nodes can be viewed using the below command,

```
~]# ovs-vsctl show
```
### Q:23 What is the role of Integration Bridge (br-int) on the Compute Node?

Ans: The integration bridge (br-int) performs VLAN tagging and untagging for the traffic coming from and to the instance running on the compute node.

Packets leaving the network interface of an instance go through the Linux bridge (qbr) and reach the integration bridge over the veth pair qvb/qvo: the qvb interface is connected to the Linux bridge, and the qvo interface is connected to the integration bridge (br-int). The qvo port on the integration bridge has an internal VLAN tag that gets appended to the packet header when a packet reaches the integration bridge.
### Q:24 What is the role of Tunnel Bridge (br-tun) on the compute node?

Ans: The tunnel bridge (br-tun) translates the VLAN tagged traffic from the integration bridge to tunnel ids using OpenFlow rules.

br-tun (the tunnel bridge) allows communication between instances on different networks. Tunneling helps to encapsulate the traffic travelling over insecure networks; br-tun supports two overlay networks, i.e., GRE and VXLAN.
### Q:25 What is the role of external OVS bridge (br-ex)?

Ans: As the name suggests, this bridge forwards the traffic coming to and from the network to allow external access to instances. br-ex connects to a physical interface like eth2, so that floating IP traffic for tenant networks is received from the physical network and routed to the tenant network ports.
### Q:26 What is function of OpenFlow rules in OpenStack Networking?

Ans: OpenFlow rules are a mechanism that defines how a packet will reach its destination starting from its source. OpenFlow rules reside in flow tables, and the flow tables are part of an OpenFlow switch.

When a packet arrives at a switch, it is processed by the first flow table; if it doesn’t match any flow entries in the table, then the packet is dropped or forwarded to another table.
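
As an illustration (a sketch; the priority and match fields are hypothetical), a flow rule can be added and inspected by hand with the OVS OpenFlow tools:

```
~# ovs-ofctl add-flow br-int "priority=100,in_port=1,actions=normal"
~# ovs-ofctl dump-flows br-int
```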
### Q:27 How to display the information about an OpenFlow switch (like ports, no. of tables, no. of buffers)?

Ans: Let’s assume we want to display the information about the OpenFlow switch br-int; run the following command,

```
root@compute-0-15# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000fe981785c443
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(patch-tun): addr:3a:c6:4f:bd:3e:3b
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(qvob35d2d65-f3): addr:b2:83:c4:0b:42:3a
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
...
```
### Q:28 How to display the entries for all the flows in a switch?

Ans: The flow entries of a switch can be displayed using the command **ovs-ofctl dump-flows**.

Let’s assume we want to display the flow entries of the OVS integration bridge (br-int); the command is shown below,
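
For the br-int example, that is:

```
~# ovs-ofctl dump-flows br-int
```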
### Q:29 What are Neutron Agents and how to list all neutron agents?

Ans: The OpenStack neutron server acts as the centralized controller; the actual network configurations are executed on the compute and network nodes. Neutron agents are software entities that carry out configuration changes on compute or network nodes. Neutron agents communicate with the main neutron service via the Neutron API and a message queue.

Neutron agents can be listed using the following command,

```
~# openstack network agent list -c 'Agent type' -c Host -c Alive -c State
```
### Q:30 What is CPU pinning?

Ans: CPU pinning refers to reserving physical cores for a specific virtual machine. It is also known as CPU isolation or processor affinity. The configuration is in two parts:

* it ensures that the virtual machine can only run on dedicated cores
* it also ensures that common host processes don’t run on those cores

In other words, we can say pinning is a one-to-one mapping of a physical core to a guest vCPU.
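
As a sketch (the flavor name is illustrative, and the compute hosts must already be configured for pinning), dedicated pinning is typically requested through a flavor property:

```
~# openstack flavor set m1.large --property hw:cpu_policy=dedicated
```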
--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/openstack-interview-questions-answers/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/create-delete-virtual-machine-command-line-openstack/
[2]: https://www.linuxtechi.com/step-by-step-instance-creation-flow-in-openstack/
@ -1,3 +1,5 @@

HankChow translating

The Difference Between more, less And most Commands
======

@ -0,0 +1,137 @@

A Free, Secure And Cross-platform Password Manager
======



In this modern Internet era, you will surely have multiple accounts on a lot of websites. It could be a personal or official mail account, a social or professional network account, a GitHub account, an ecommerce account, etc. So you should have several different passwords for different accounts. I am sure that you are already aware that setting the same password for multiple accounts is a crazy and dangerous practice. If an attacker manages to breach one of your accounts, it’s highly likely he/she will try to access the other accounts you have with the same password. So, it is **highly recommended to set different passwords** for different accounts.

However, remembering several passwords might be difficult. You can write them down on paper, but that is not an efficient method either, and you might lose them over a period of time. This is where password managers come in. Password managers are like a repository where you can store all your passwords for different accounts and lock them down with a master password. This way, all you need to remember is just the master password. We have already reviewed an open source password manager named [**KeeWeb**][1]. Today, we are going to see yet another password manager called **Buttercup**.

### About Buttercup

Buttercup is a free, open source, secure and cross-platform password manager written using **NodeJS**. It helps you to store the login credentials of your different accounts in an encrypted archive, which can be stored on your local system or on remote services like Dropbox, ownCloud, NextCloud and WebDAV-based services. It uses the strong **256-bit AES encryption** method to protect your sensitive data with a master password, so no one can access your login details except those who have the master password. Buttercup currently supports Linux, Mac OS and Windows. It is also available as a browser extension and a mobile app, so you can access the same archive you use on the desktop application and browser extension from your Android or iOS devices as well.

### Installing Buttercup Password Manager

Buttercup is currently available as **.deb** and **.rpm** packages, a portable AppImage and tar archives for the Linux platform. Head over to the [**releases page**][2] and download and install the version you want to use.

The Buttercup desktop application is also available in [**AUR**][3], so you can install it on Arch-based systems using AUR helper programs, such as [**Yay**][4], as shown below:

```
$ yay -S buttercup-desktop
```

If you have downloaded the portable AppImage file, make it executable using the command:

```
$ chmod +x buttercup-desktop-1.11.0-x86_64.AppImage
```

Then, launch it using the command:

```
$ ./buttercup-desktop-1.11.0-x86_64.AppImage
```

Once you run this command, it will prompt you whether you would like to integrate the Buttercup AppImage with your system. If you choose ‘Yes’, this will add it to your applications menu and install icons. If you don’t do this, you can still launch the application by double-clicking on the AppImage or using the above command from the Terminal.
### Add archives

When you launch it for the first time, you will see the following welcome screen:



We haven’t added any archives yet, so let us add one. To do so, click on the “New Archive File” button, type a name for the archive file, and choose the location to save it.



You can name it as you wish. I named mine “mypass”. Archives have the extension **.bcup** and are saved in the location of your choice.

If you have already created one, simply choose it by clicking on “Open Archive File”.

Next, Buttercup will prompt you to enter a master password for the newly created archive. It is recommended to provide a strong password to protect the archive from unauthorized access.



We have now created an archive and secured it with a master password. Similarly, you can create any number of archives and protect them with a password.

Let us go ahead and add account details to the archive.

### Adding entries (login credentials) in the archives

Once you have created or opened an archive, you will see the following screen.



It is like a vault where we are going to save our login credentials for different online accounts. As you can see, we haven’t added any entries yet. Let us add some.

To add a new entry, click the “ADD ENTRY” button in the lower right corner and enter the account information you want to save.



If you want to add any extra detail, there is an “ADD NEW FIELD” option right under each entry. Just click on it and add as many fields as you want to include in the entry.

Once you have added all entries, you will see them on the right pane of the Buttercup interface.

![][6]
### Creating new groups

You can also group login details under different names for easy recognition. For example, you can group all your mail accounts under a distinct group named “my_mails”. By default, your login details will be saved under the “General” group. To create a new group, click the “NEW GROUP” button and provide a name for the group. When creating new entries inside a new group, just click on the group name and start adding the entries as shown above.

### Manage and access login details

The data stored in the archives can be edited, moved to different groups, or entirely deleted at any time. For instance, if you want to copy the username or password to the clipboard, right-click on the entry and choose the “Copy to Clipboard” option.

![][7]

To edit/modify the data in the future, just click the “Edit” button under the selected entry.

### Save archives on a remote location

By default, Buttercup will save your data on the local system. However, you can save it on different remote services, such as Dropbox, ownCloud/NextCloud, and WebDAV-based services.

To connect to these services, go to **File -> Connect Cloud Sources**.



And, choose the service you want to connect to and authorize it to save your data.

![][8]

You can also connect those services from the Buttercup welcome screen while adding archives.

### Import/Export

Buttercup allows you to import or export data to or from other password managers, such as 1Password, Lastpass and KeePass. You can also export your data and access it from another system or device, for example on your Android phone. You can export Buttercup vaults to CSV format as well.

![][9]

Buttercup is a simple, yet mature and fully functional password manager. It has been actively developed for years. If you are ever in need of a password manager, Buttercup might be a good choice. For more details, refer to the project website and GitHub page.

And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!

Cheers!
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/buttercup-a-free-secure-and-cross-platform-password-manager/

作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/
[2]: https://github.com/buttercup/buttercup-desktop/releases/latest
[3]: https://aur.archlinux.org/packages/buttercup-desktop/
[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-6.png
[7]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-7.png
[8]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-9.png
[9]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-10.png
205
sources/tech/20181112 Behind the scenes with Linux containers.md
Normal file
@ -0,0 +1,205 @@

Behind the scenes with Linux containers
======
Become a better container troubleshooter by using LXC to understand how they work.


Can you have Linux containers without [Docker][1]? Without [OpenShift][2]? Without [Kubernetes][3]?

Yes, you can. Years before Docker made containers a household term (if you live in a data center, that is), the [LXC][4] project developed the concept of running a kind of virtual operating system, sharing the same kernel, but contained within defined groups of processes.

Docker built on LXC, and today there are plenty of platforms that leverage the work of LXC both directly and indirectly. Most of these platforms make creating and maintaining containers sublimely simple, and for large deployments, it makes sense to use such specialized services. However, not everyone's managing a large deployment or has access to big services to learn about containerization. The good news is that you can create, use, and learn containers with nothing more than a PC running Linux and this article. This article will help you understand containers by looking at LXC, how it works, why it works, and how to troubleshoot when something goes wrong.

### Sidestepping the simplicity

If you're looking for a quick-start guide to LXC, refer to the excellent [Linux Containers][5] website.

### Installing LXC

If it's not already installed, you can install [LXC][6] with your package manager.

On Fedora or similar, enter:

```
$ sudo dnf install lxc lxc-templates lxc-doc
```

On Debian, Ubuntu, and similar, enter:

```
$ sudo apt install lxc
```
### Creating a network bridge

Most containers assume a network will be available, and most container tools expect the user to be able to create virtual network devices. The most basic unit required for containers is the network bridge, which is more or less the software equivalent of a network switch. A network switch is a little like a smart Y-adapter used to split a headphone jack so two people can hear the same thing with separate headsets, except instead of an audio signal, a network switch bridges network data.

You can create your own software network bridge so your host computer and your container OS can both send and receive different network data over a single network device (either your Ethernet port or your wireless card). This is an important concept that often gets lost once you graduate from manually generating containers, because no matter the size of your deployment, it's highly unlikely you have a dedicated physical network card for each container you run. It's vital to understand that containers talk to virtual network devices, so you know where to start troubleshooting if a container loses its network connection.

To create a network bridge on your machine, you must have the appropriate permissions. For this article, use the **sudo** command to operate with root privileges. (However, LXC docs provide a configuration to grant users permission to do this without using **sudo**.)

```
$ sudo ip link add br0 type bridge
```

Verify that the imaginary network interface has been created:

```
$ sudo ip addr show br0
7: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc
noop state DOWN group default qlen 1000
link/ether 26:fa:21:5f:cf:99 brd ff:ff:ff:ff:ff:ff
```

Since **br0** is seen as a network interface, it requires its own IP address. Choose a valid local IP address that doesn't conflict with any existing IP address on your network and assign it to the **br0** device (the /24 prefix keeps the bridge on the same subnet as the container address configured later):

```
$ sudo ip addr add 192.168.168.168/24 dev br0
```

And finally, ensure that **br0** is up and running:

```
$ sudo ip link set br0 up
```
### Setting the container config

The config file for an LXC container can be as complex as it needs to be to define a container's place in your network and the host system, but for this example the config is simple. Create a file in your favorite text editor and define a name for the container and the network's required settings:

```
lxc.utsname = opensourcedotcom
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:bd
lxc.network.ipv4 = 192.168.168.1/24
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596
```

Save this file in your home directory as **mycontainer.conf**.

The **lxc.utsname** is arbitrary. You can call your container whatever you like; it's the name you'll use when starting and stopping it.

The network type is set to **veth**, which is a kind of virtual Ethernet patch cable. The idea is that the **veth** connection goes from the container to the bridge device, which is defined by the **lxc.network.link** property, set to **br0**. The IP address for the container is in the same network as the bridge device but unique to avoid collisions.

With the exception of the **veth** network type and the **up** network flag, you invent all the values in the config file. The list of properties is available from **man lxc.container.conf**. (If it's missing on your system, check your package manager for separate LXC documentation packages.) There are several example config files in **/usr/share/doc/lxc/examples**, which you should review later.
### Launching a container shell

At this point, you're two-thirds of the way to an operable container: you have the network infrastructure, and you've installed the imaginary network cards in an imaginary PC. All you need now is to install an operating system.

However, even at this stage, you can see LXC at work by launching a shell within a container space.

```
$ sudo lxc-execute --name basic \
--rcfile ~/mycontainer.conf /bin/bash \
--logfile mycontainer.log
#
```

In this very bare container, look at your network configuration. It should look familiar, yet unique, to you.

```
# /usr/sbin/ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state [...]
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    [...]
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> [...] qlen 1000
    link/ether 4a:49:43:49:79:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.168.167/24 brd 192.168.168.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2003:db8:1:0:214:1234:fe0b:3596/64 scope global
       valid_lft forever preferred_lft forever
[...]
```

Your container is aware of its fake network infrastructure and of a familiar-yet-unique kernel.

```
# uname -av
Linux opensourcedotcom 4.18.13-100.fc27.x86_64 #1 SMP Wed Oct 10 18:34:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
```

Use the **exit** command to leave the container:

```
# exit
```
### Installing the container operating system

Building out a fully containerized environment is a lot more complex than the networking and config steps, so you can borrow a container template from LXC. If you don't have any templates, look for a separate LXC template package in your software repository.

The default LXC templates are available in **/usr/share/lxc/templates**.

```
$ ls -m /usr/share/lxc/templates/
lxc-alpine, lxc-altlinux, lxc-archlinux, lxc-busybox, lxc-centos, lxc-cirros, lxc-debian, lxc-download, lxc-fedora, lxc-gentoo, lxc-openmandriva, lxc-opensuse, lxc-oracle, lxc-plamo, lxc-slackware, lxc-sparclinux, lxc-sshd, lxc-ubuntu, lxc-ubuntu-cloud
```

Pick your favorite, then create the container. This example uses Slackware.

```
$ sudo lxc-create --name slackware --template slackware
```

Watching a template being executed is almost as educational as building one from scratch; it's very verbose, and you can see that **lxc-create** sets the "root" of the container to **/var/lib/lxc/slackware/rootfs** and that several packages are downloaded and installed to that directory.

Reading through the template files gives you an even better idea of what's involved: LXC sets up a minimal device tree, common spool files, a file systems table (fstab), init files, and so on. It also prevents some services that make no sense in a container (like udev for hardware detection) from starting. Since the templates cover a wide spectrum of typical Linux configurations, if you intend to design your own, it's wise to base your work on a template closest to what you want to set up; otherwise, you're sure to make errors of omission (if nothing else) that the LXC project has already stumbled over and accounted for.

Once you've installed the minimal operating system environment, you can start your container.

```
$ sudo lxc-start --name slackware \
--rcfile ~/mycontainer.conf
```

You have started the container, but you have not attached to it. (Unlike the previous basic example, you're not just running a shell this time, but a containerized operating system.) Attach to it by name.

```
$ sudo lxc-attach --name slackware
#
```

Check that the IP address of your environment matches the one in your config file.

```
# /usr/sbin/ip addr show | grep eth
34: eth0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 [...] 1000
    link/ether 4a:49:43:49:79:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.168.167/24 brd 192.168.168.255 scope global eth0
```

Exit the container, and shut it down.

```
# exit
$ sudo lxc-stop --name slackware
```
### Running real-world containers with LXC

In real life, LXC makes it easy to create and run safe and secure containers. Containers have come a long way since the introduction of LXC in 2008, so use its developers' expertise to your advantage.

While the LXC instructions on [linuxcontainers.org][5] make the process simple, this tour of the manual side of things should help you understand what's going on behind the scenes.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/11/behind-scenes-linux-containers

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/resources/what-docker
[2]: https://opensource.com/sitewide-search?search_api_views_fulltext=openshift
[3]: https://opensource.com/resources/what-is-kubernetes
[4]: https://linuxcontainers.org
[5]: https://linuxcontainers.org/lxc/getting-started
[6]: https://github.com/lxc/lxc
94
sources/tech/20181112 The Source History of Cat.md
Normal file
@ -0,0 +1,94 @@

The Source History of Cat
======
I once had a debate with members of my extended family about whether a computer science degree is a degree worth pursuing. I was in college at the time and trying to decide whether I should major in computer science. My aunt and a cousin of mine believed that I shouldn’t. They conceded that knowing how to program is of course a useful and lucrative thing, but they argued that the field of computer science advances so quickly that everything I learned would almost immediately be outdated. Better to pick up programming on the side and instead major in a field like economics or physics where the basic principles would be applicable throughout my lifetime.

I knew that my aunt and cousin were wrong and decided to major in computer science. (Sorry, aunt and cousin!) It is easy to see why the average person might believe that a field like computer science, or a profession like software engineering, completely reinvents itself every few years. We had personal computers, then the web, then phones, then machine learning… technology is always changing, so surely all the underlying principles and techniques change too. Of course, the amazing thing is how little actually changes. Most people, I’m sure, would be stunned to know just how old some of the important software on their computer really is. I’m not talking about flashy application software, admittedly—my copy of Firefox, the program I probably use the most on my computer, is not even two weeks old. But, if you pull up the manual page for something like `grep`, you will see that it has not been updated since 2010 (at least on MacOS). And the original version of `grep` was written in 1974, which in the computing world was back when dinosaurs roamed Silicon Valley. People (and programs) still depend on `grep` every day.

My aunt and cousin thought of computer technology as a series of increasingly elaborate sand castles supplanting one another after each high tide clears the beach. The reality, at least in many areas, is that we steadily accumulate programs that have solved problems. We might have to occasionally modify these programs to avoid software rot, but otherwise they can be left alone. `grep` is a simple program that solves a still-relevant problem, so it survives. Most application programming is done at a very high level, atop a pyramid of much older code solving much older problems. The ideas and concepts of 30 or 40 years ago, far from being obsolete today, have in many cases been embodied in software that you can still find installed on your laptop.

I thought it would be interesting to take a look at one such old program and see how much it had changed since it was first written. `cat` is maybe the simplest of all the Unix utilities, so I’m going to use it as my example. Ken Thompson wrote the original implementation of `cat` in 1969. If I were to tell somebody that I have a program on my computer from 1969, would that be accurate? How much has `cat` really evolved over the decades? How old is the software on our computers?

Thanks to repositories like [this one][1], we can see exactly how `cat` has evolved since 1969. I’m going to focus on implementations of `cat` that are ancestors of the implementation I have on my Macbook. You will see, as we trace `cat` from the first versions of Unix down to the `cat` in MacOS today, that the program has been rewritten more times than you might expect—but it ultimately works more or less the same way it did fifty years ago.

### Research Unix

Ken Thompson and Dennis Ritchie began writing Unix on a PDP 7. This was in 1969, before C, so all of the early Unix software was written in PDP 7 assembly. The exact flavor of assembly they used was unique to Unix, since Ken Thompson wrote his own assembler that added some features on top of the assembler provided by DEC, the PDP 7’s manufacturer. Thompson’s changes are all documented in [the original Unix Programmer’s Manual][2] under the entry for `as`, the assembler.

[The first implementation][3] of `cat` is thus in PDP 7 assembly. I’ve added comments that try to explain what each instruction is doing, but the program is still difficult to follow unless you understand some of the extensions Thompson made while writing his assembler. There are two important ones. First, the `;` character can be used to separate multiple statements on the same line. It appears that this was used most often to put system call arguments on the same line as the `sys` instruction. Second, Thompson added support for “temporary labels” using the digits 0 through 9. These are labels that can be reused throughout a program, thus being, according to the Unix Programmer’s Manual, “less taxing both on the imagination of the programmer and on the symbol space of the assembler.” From any given instruction, you can refer to the next or most recent temporary label `n` using `nf` and `nb` respectively. For example, if you have some code in a block labeled `1:`, you can jump back to that block from further down by using the instruction `jmp 1b`. (But you cannot jump forward to that block from above without using `jmp 1f` instead.)

The most interesting thing about this first version of `cat` is that it contains two names we should recognize. There is a block of instructions labeled `getc` and a block of instructions labeled `putc`, demonstrating that these names are older than the C standard library. The first version of `cat` actually contained implementations of both functions. The implementations buffered input so that reads and writes were not done a character at a time.

The first version of `cat` did not last long. Ken Thompson and Dennis Ritchie were able to persuade Bell Labs to buy them a PDP 11 so that they could continue to expand and improve Unix. The PDP 11 had a different instruction set, so `cat` had to be rewritten. I’ve marked up [this second version][4] of `cat` with comments as well. It uses new assembler mnemonics for the new instruction set and takes advantage of the PDP 11’s various [addressing modes][5]. (If you are confused by the parentheses and dollar signs in the source code, those are used to indicate different addressing modes.) But it also leverages the `;` character and temporary labels just like the first version of `cat`, meaning that these features must have been retained when `as` was adapted for the PDP 11.

The second version of `cat` is significantly simpler than the first. It is also more “Unix-y” in that it doesn’t just expect a list of filename arguments—it will, when given no arguments, read from `stdin`, which is what `cat` still does today. You can also give this version of `cat` an argument of `-` to indicate that it should read from `stdin`.

In 1973, in preparation for the release of the Fourth Edition of Unix, much of Unix was rewritten in C. But `cat` does not seem to have been rewritten in C until a while after that. [The first C implementation][6] of `cat` only shows up in the Seventh Edition of Unix. This implementation is really fun to look through because it is so simple. Of all the implementations to follow, this one most resembles the idealized `cat` used as a pedagogic demonstration in K&R C. The heart of the program is the classic two-liner:

```
while ((c = getc(fi)) != EOF)
        putchar(c);
```

There is of course quite a bit more code than that, but the extra code is mostly there to ensure that you aren’t reading and writing to the same file. The other interesting thing to note is that this implementation of `cat` only recognized one flag, `-u`. The `-u` flag could be used to avoid buffering input and output, which `cat` would otherwise do in blocks of 512 bytes.
### BSD

After the Seventh Edition, Unix spawned all sorts of derivatives and offshoots. MacOS is built on top of Darwin, which in turn is derived from the Berkeley Software Distribution (BSD), so BSD is the Unix offshoot we are most interested in. BSD was originally just a collection of useful programs and add-ons for Unix, but it eventually became a complete operating system. BSD seems to have relied on the original `cat` implementation up until the fourth BSD release, known as 4BSD, when support was added for a whole slew of new flags. [The 4BSD implementation][7] of `cat` is clearly derived from the original implementation, though it adds a new function to implement the behavior triggered by the new flags. The naming conventions already used in the file were adhered to—the `fflg` variable, used to mark whether input was being read from `stdin` or a file, was joined by `nflg`, `bflg`, `vflg`, `sflg`, `eflg`, and `tflg`, all there to record whether or not each new flag was supplied in the invocation of the program. These were the last command-line flags added to `cat`; the man page for `cat` today lists these flags and no others, at least on Mac OS. 4BSD was released in 1980, so this set of flags is 38 years old.

`cat` would be entirely rewritten a final time for BSD Net/2, which was, among other things, an attempt to avoid licensing issues by replacing all AT&T Unix-derived code with new code. BSD Net/2 was released in 1991. This final rewrite of `cat` was done by Kevin Fall, who graduated from Berkeley in 1988 and spent the next year working as a staff member at the Computer Systems Research Group (CSRG). Fall told me that a list of Unix utilities still implemented using AT&T code was put up on a wall at CSRG and staff were told to pick the utilities they wanted to reimplement. Fall picked `cat` and `mknod`. The `cat` implementation bundled with MacOS today is built from a source file that still bears his name at the very top. His version of `cat`, even though it is a relatively trivial program, is today used by millions.

[Fall’s original implementation][8] of `cat` is much longer than anything we have seen so far. Other than support for a `-?` help flag, it adds nothing in the way of new functionality. Conceptually, it is very similar to the 4BSD implementation. It is only longer because Fall separates the implementation into a “raw” mode and a “cooked” mode. The “raw” mode is `cat` classic; it prints a file character for character. The “cooked” mode is `cat` with all the 4BSD command-line options. The distinction makes sense but it also pads out the implementation so that it seems more complex at first glance than it actually is. There is also a fancy error handling function at the end of the file that further adds to its length.

### MacOS

In 2001, Apple launched Mac OS X. The launch was an important one for Apple, because Apple had spent many years trying and failing to replace its existing operating system (classic Mac OS), which had long been showing its age. There were two previous attempts to create a new operating system internally, but both went nowhere; in the end, Apple bought NeXT, Steve Jobs’ company, which had developed an operating system and object-oriented programming framework called NeXTSTEP. Apple took NeXTSTEP and used it as a basis for Mac OS X. NeXTSTEP was in part built on BSD, so using NeXTSTEP as a starting point for Mac OS X brought BSD-derived code right into the center of the Apple universe.

The very first release of Mac OS X thus includes [an implementation][9] of `cat` pulled from the NetBSD project. NetBSD, which remains in development today, began as a fork of 386BSD, which in turn was based directly on BSD Net/2. So the first Mac OS X implementation of `cat` is Kevin Fall’s `cat`. The only thing that had changed over the intervening decade was that Fall’s error-handling function `err()` was removed and the `err()` function made available by `err.h` was used in its place. `err.h` is a BSD extension to the C standard library.

The NetBSD implementation of `cat` was later swapped out for FreeBSD’s implementation of `cat`. [According to Wikipedia][10], Apple began using FreeBSD instead of NetBSD in Mac OS X 10.3 (Panther). But the Mac OS X implementation of `cat`, according to Apple’s own open source releases, was not replaced until Mac OS X 10.5 (Leopard) was released in 2007. The [FreeBSD implementation][11] that Apple swapped in for the Leopard release is the same implementation on Apple computers today. As of 2018, the implementation has not been updated or changed at all since 2007.

So the Mac OS `cat` is old. As it happens, it is actually two years older than its 2007 appearance in MacOS X would suggest. [This 2005 change][12], which is visible in FreeBSD’s Github mirror, was the last change made to FreeBSD’s `cat` before Apple pulled it into Mac OS X. So the Mac OS X `cat` implementation, which has not been kept in sync with FreeBSD’s `cat` implementation, is officially 13 years old. There’s a larger debate to be had about how much software can change before it really counts as the same software; in this case, the source file has not changed at all since 2005.

The `cat` implementation used by Mac OS today is not that different from the implementation that Fall wrote for the 1991 BSD Net/2 release. The biggest difference is that a whole new function was added to provide Unix domain socket support. At some point, a FreeBSD developer also seems to have decided that Fall’s `raw_args()` and `cook_args()` functions should be combined into a single function called `scanfiles()`. Otherwise, the heart of the program is still Fall’s code.

I asked Fall how he felt about having written the `cat` implementation now used by millions of Apple users, either directly or indirectly through some program that relies on `cat` being present. Fall, who is now a consultant and a co-author of the most recent editions of TCP/IP Illustrated, says that he is surprised when people get such a thrill out of learning about his work on `cat`. Fall has had a long career in computing and has worked on many high-profile projects, but it seems that many people still get most excited about the six months of work he put into rewriting `cat` in 1989.

### The Hundred-Year-Old Program

In the grand scheme of things, computers are not an old invention. We’re used to hundred-year-old photographs or even hundred-year-old camera footage. But computer programs are in a different category—they’re high-tech and new. At least, they are now. As the computing industry matures, will we someday find ourselves using programs that approach the hundred-year-old mark?

Computer hardware will presumably change enough that we won’t be able to take an executable compiled today and run it on hardware a century from now. Perhaps advances in programming language design will also mean that nobody will understand C in the future and `cat` will have long since been rewritten in another language. (Though C has already been around for fifty years, and it doesn’t look like it is about to be replaced any time soon.) But barring all that, why not just keep using the `cat` we have forever?

I think the history of `cat` shows that some ideas in computer science are very durable indeed. Indeed, with `cat`, both the idea and the program itself are old. It may not be accurate to say that the `cat` on my computer is from 1969. But I could make a case for saying that the `cat` on my computer is from 1989, when Fall wrote his implementation of `cat`. Lots of other software is just as ancient. So maybe we shouldn’t think of computer science and software development primarily as fields that disrupt the status quo and invent new things. Our computer systems are built out of historical artifacts. At some point, we may all spend more time trying to understand and maintain those historical artifacts than we spend writing new code.

If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][13] on Twitter or subscribe to the [RSS feed][14] to make sure you know when a new post is out.
--------------------------------------------------------------------------------

via: https://twobithistory.org/2018/11/12/cat.html

作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://github.com/dspinellis/unix-history-repo
[2]: https://www.bell-labs.com/usr/dmr/www/man11.pdf
[3]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-1-cat-pdp7-s
[4]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-2-cat-pdp11-s
[5]: https://en.wikipedia.org/wiki/PDP-11_architecture#Addressing_modes
[6]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-3-cat-v7-c
[7]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-4-cat-bsd4-c
[8]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-5-cat-net2-c
[9]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-6-cat-macosx-c
[10]: https://en.wikipedia.org/wiki/Darwin_(operating_system)
[11]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-7-cat-macos-10-13-c
[12]: https://github.com/freebsd/freebsd/commit/a76898b84970888a6fd015e15721f65815ea119a#diff-6e405d5ab5b47ca2a131ac7955e5a16b
[13]: https://twitter.com/TwoBitHistory
[14]: https://twobithistory.org/feed.xml
[15]: https://twitter.com/TwoBitHistory/status/1051826516844322821?ref_src=twsrc%5Etfw
75
sources/tech/20181113 4 tips for learning Golang.md
Normal file
@ -0,0 +1,75 @@

4 tips for learning Golang
======
Arriving in Golang land: A senior developer's journey.


In the summer of 2014...

> IBM: "We need you to go figure out this Docker thing."
> Me: "OK."
> IBM: "Start contributing and just get involved."
> Me: "OK." (internal voice): "This is written in Go. What's that?" (Googles) "Oh, a programming language. I've learned a few of those in my career. Can't be that hard."

My university's freshman programming class was taught using VAX assembler. In data structures class, we used Pascal—loaded via diskette on tired, old PCs in the library's computer center. In one upper-level course, I had a professor that loved to show all examples in ADA. I learned a bit of C via playing with various Unix utilities' source code on our Sun workstations. At IBM we used C—and some x86 assembler—for the OS/2 source code, and we heavily used C++'s object-oriented features for a joint project with Apple. I learned shell scripting soon after, starting with csh, but moving to Bash after finding Linux in the mid-'90s. I was thrust into learning m4 (arguably more of a macro-processor than a programming language) while working on the just-in-time (JIT) compiler in IBM's custom JVM code when porting it to Linux in the late '90s.

Fast-forward 20 years... I'd never been nervous about learning a new programming language. But [Go][1] felt different. I was going to contribute publicly, upstream on GitHub, visible to anyone interested enough to look! I didn't want to be the laughingstock, the Go newbie as a 40-something-year-old senior developer! We all know that programmer pride that doesn't like to get bruised, no matter your experience level.

My early investigations revealed that Go seemed more committed to its "idiomatic-ness" than some languages. It wasn't just about getting the code to compile; I needed to be able to write code "the Go way."

Now that I'm four years and several hundred pull requests into my personal Go journey, I don't claim to be an expert, but I do feel a lot more comfortable contributing and writing Go code than I did in 2014. So, how do you teach an old guy new tricks—or at least a new programming language? Here are four steps that were valuable in my own journey to Golang land.

### 1. Don't skip the fundamentals

While you might be able to get by with copying code and hunting and pecking your way through early learnings (who has time to read the manual?!?), Go has a very readable [language spec][2] that was clearly written to be read and understood, even if you don't have a master's in language or compiler theory. Given that Go made some unique decisions about the order of the **parameter:type** constructs and has interesting language features like channels and goroutines, it is important to get grounded in these new concepts. Reading this document alongside [Effective Go][3], another great resource from the Golang creators, will give you a huge boost in readiness to use the language effectively and properly.

### 2. Learn from the best

There are many valuable resources for digging in and taking your Go knowledge to the next level. All the talks from any recent [GopherCon][4] can be found online, like this exhaustive list from [GopherCon US in 2018][5]. Talks range in expertise and skill level, but you can easily find something you didn't know about Go by watching the talks. [Francesc Campoy][6] created a Go programming video series called [JustForFunc][7] that has an ever-increasing number of episodes to expand your Go knowledge and understanding. A quick search on "Golang" reveals many other video and online resources for those who want to learn more.

Want to look at code? Many of the most popular cloud-native projects on GitHub are written in Go: [Docker/Moby][8], [Kubernetes][9], [Istio][10], [containerd][11], [CoreDNS][12], and many others. Language purists might rate some projects better than others regarding idiomatic-ness, but these are all good starting points to see how large codebases are using Go in highly active projects.

### 3. Use good language tools

You will learn quickly about the value of [gofmt][13]. One of the beautiful aspects of Go is that there is no arguing about code formatting guidelines per project—**gofmt** is built into the language runtime, and it formats Go code according to a set of stable, well-understood language rules. I don't know of any Golang-based project that doesn't insist on checking with **gofmt** for pull requests as part of continuous integration.

Beyond the wide, valuable array of useful tools built directly into the runtime/SDK, I strongly recommend using an editor or IDE with good Golang support features. Since I find myself much more often at a command line, I rely on Vim plus the great [vim-go][14] plugin. I also like what Microsoft has offered with [VS Code][15], especially with its [Go language][16] plugins.

Looking for a debugger? The [Delve][17] project has been improving and maturing and is a strong contender for doing [gdb][18]-like debugging on Go binaries.

### 4. Jump in and write some Go!

You'll never get better at writing Go unless you start trying. Find a project that has some "help needed" issues flagged and make a contribution. If you are already using an open source project written in Go, find out if there are some bugs that have beginner-level solutions and make your first pull request. As with most things in life, the only real way to improve is through practice, so get going.

And, as it turns out, apparently you can teach an old senior developer new tricks—or languages at least.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/11/learning-golang

作者:[Phill Estes][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/estesp
[b]: https://github.com/lujun9972
[1]: https://golang.org/
[2]: https://golang.org/ref/spec
[3]: https://golang.org/doc/effective_go.html
[4]: https://www.gophercon.com/
[5]: https://tqdev.com/2018-gophercon-2018-videos-online
[6]: https://twitter.com/francesc
[7]: https://www.youtube.com/channel/UC_BzFbxG2za3bp5NRRRXJSw
[8]: https://github.com/moby/moby
[9]: https://github.com/kubernetes/kubernetes
[10]: https://github.com/istio/istio
[11]: https://github.com/containerd/containerd
[12]: https://github.com/coredns/coredns
[13]: https://blog.golang.org/go-fmt-your-code
[14]: https://github.com/fatih/vim-go
[15]: https://code.visualstudio.com/
[16]: https://code.visualstudio.com/docs/languages/go
[17]: https://github.com/derekparker/delve
[18]: https://www.gnu.org/software/gdb/
@ -0,0 +1,228 @@

An introduction to Udev: The Linux subsystem for managing device events
======
Create a script that triggers your computer to do a specific action when a specific device is plugged in.


Udev is the Linux subsystem that supplies your computer with device events. In plain English, that means it's the code that detects when you have things plugged into your computer, like a network card, external hard drives (including USB thumb drives), mice, keyboards, joysticks and gamepads, DVD-ROM drives, and so on. That makes it a potentially useful utility, and it's well-enough exposed that a standard user can manually script it to do things like performing certain tasks when a certain hard drive is plugged in.

This article teaches you how to create a [udev][1] script triggered by some udev event, such as plugging in a specific thumb drive. Once you understand the process for working with udev, you can use it to do all manner of things, like loading a specific driver when a gamepad is attached, or performing an automatic backup when you attach your backup drive.

### A basic script

The best way to work with udev is in small chunks. Don't write the entire script upfront, but instead start with something that simply confirms that udev triggers some custom event.

Depending on your goal for your script, you can't guarantee you will ever see the results of a script with your own eyes, so make sure your script logs that it was successfully triggered. The usual place for log files is in the **/var** directory, but that's mostly the root user's domain. For testing, use **/tmp**, which is accessible by normal users and usually gets cleaned out with a reboot.

Open your favorite text editor and enter this simple script:

```
#!/usr/bin/bash

date > /tmp/udev.log
```

Place this in **/usr/local/bin** or some such place in the default executable path. Call it **trigger.sh** and, of course, make it executable with **chmod +x**.

```
$ sudo mv trigger.sh /usr/local/bin
$ sudo chmod +x /usr/local/bin/trigger.sh
```

This script has nothing to do with udev. When it executes, the script places a timestamp in the file **/tmp/udev.log**. Test the script yourself:

```
$ /usr/local/bin/trigger.sh
$ cat /tmp/udev.log
Tue Oct 31 01:05:28 NZDT 2035
```

The next step is to make udev trigger the script.
### Unique device identification
|
||||
|
||||
In order for your script to be triggered by a device event, udev must know under what conditions it should call the script. In real life, you can identify a thumb drive by its color, the manufacturer, and the fact that you just plugged it into your computer. Your computer, however, needs a different set of criteria.
|
||||
|
||||
Udev identifies devices by serial numbers, manufacturers, and even vendor ID and product ID numbers. Since this is early in your udev script's lifespan, be as broad, non-specific, and all-inclusive as possible. In other words, you want first to catch nearly any valid udev event to trigger your script.
|
||||
|
||||
With the **udevadm monitor** command, you can tap into udev in real time and see what it sees when you plug in different devices. Become root and try it.
|
||||
|
||||
```
|
||||
$ su
|
||||
# udevadm monitor
|
||||
```
|
||||
|
||||
The monitor function prints received events for:
|
||||
|
||||
* UDEV: the event udev sends out after rule processing
|
||||
* KERNEL: the kernel uevent
|
||||
|
||||
|
||||
|
||||
With **udevadm monitor** running, plug in a thumb drive and watch as all kinds of information is spewed out onto your screen. Notice that the type of event is an **ADD** event. That's a good way to identify what type of event you want.
|
||||
|
||||
The **udevadm monitor** command provides a lot of good info, but you can see it with prettier formatting with the command **udevadm info**, assuming you know where your thumb drive is currently located in your **/dev** tree. If not, unplug and plug your thumb drive back in, then immediately issue this command:

```
$ su -c 'dmesg | tail | grep -i sd'
```

If that command returned **sdb: sdb1**, for instance, you know the kernel has assigned your thumb drive the **sdb** label.

Alternately, you can use the **lsblk** command to see all drives attached to your system, including their sizes and partitions.

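For example, on a system with one internal disk and one thumb drive, lsblk output might look something like this (the names and sizes are illustrative):

```
$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 238.5G  0 disk
└─sda1   8:1    0 238.5G  0 part /
sdb      8:16   1  59.9G  0 disk
└─sdb1   8:17   1  59.9G  0 part
```
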
Now that you have established where your drive is located in your filesystem, you can view udev information about that device with this command:

```
# udevadm info -a -n /dev/sdb | less
```

This returns a lot of information. Focus on the first block of info for now.

Your job is to pick out parts of udev's report about a device that are most unique to that device, then tell udev to trigger your script when those unique attributes are detected.

The **udevadm info** process reports on a device (specified by the device path), then "walks" up the chain of parent devices. For every device found, it prints all possible attributes using a key-value format. You can compose a rule to match according to the attributes of a device plus attributes from one single parent device.

```
looking at device '/devices/000:000/blah/blah//block/sdb':
    KERNEL=="sdb"
    SUBSYSTEM=="block"
    DRIVER==""
    ATTR{ro}=="0"
    ATTR{size}=="125722368"
    ATTR{stat}==" 2765 1537 5393"
    ATTR{range}=="16"
    ATTR{discard_alignment}=="0"
    ATTR{removable}=="1"
    ATTR{blah}=="blah"
```

A udev rule can match the attributes of the device itself plus attributes from one single parent device, but not attributes drawn from several different parents.

Parent attributes are things that describe a device at its most basic level, such as the fact that it is plugged into a physical port, that it has a certain size, or that it is removable.

Since the KERNEL label of **sdb** can change depending upon how many other drives were plugged in before you plugged that thumb drive in, that's not the optimal parent attribute for a udev rule. However, it works for a proof of concept, so you could use it. An even better candidate is the SUBSYSTEM attribute, which identifies that this is a "block" system device (which is why the **lsblk** command lists the device).

Open a file called **80-local.rules** in **/etc/udev/rules.d** and enter this code:

```
SUBSYSTEM=="block", ACTION=="add", RUN+="/usr/local/bin/trigger.sh"
```

Save the file, unplug your test thumb drive, and reboot.

Wait, reboot on a Linux machine?

Theoretically, you can just issue **udevadm control --reload**, which should load all rules, but at this stage in the game, it's best to eliminate all variables. Udev is complex enough, and you don't want to be lying in bed all night wondering if that rule didn't work because of a syntax error or if you just should have rebooted. So reboot regardless of what your POSIX pride tells you.

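For the record, here is what that reload looks like, which you may prefer once your rule is known to be good (run it as root):

```
# udevadm control --reload
```
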
When your system is back online, switch to a text console (with Ctrl+Alt+F3 or similar) and plug in your thumb drive. If you are running a recent kernel, you will probably see a bunch of output in your console when you plug in the drive. If you see an error message such as "Could not execute /usr/local/bin/trigger.sh", you probably forgot to make the script executable. Otherwise, hopefully all you see is that a device was plugged in, that it got some kind of kernel device assignment, and so on.

Now, the moment of truth:

```
$ cat /tmp/udev.log
Tue Oct 31 01:35:28 NZDT 2035
```

If you see a very recent date and time returned from **/tmp/udev.log**, udev has successfully triggered your script.

### Refining the rule into something useful

The problem with this rule is that it's very generic. Plugging in a mouse, a thumb drive, or someone else's thumb drive will indiscriminately trigger your script. Now is the time to start focusing on the exact thumb drive you want to trigger your script.

One way to do this is with the vendor ID and product ID. To get these numbers, you can use the **lsusb** command.

```
$ lsusb
Bus 001 Device 002: ID 8087:0024 Slacker Corp. Hub
Bus 002 Device 002: ID 8087:0024 Slacker Corp. Hub
Bus 003 Device 005: ID 03f0:3307 TyCoon Corp.
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 hub
Bus 001 Device 003: ID 13d3:5165 SBo Networks
```

In this example, the **03f0:3307** before **TyCoon Corp.** denotes the idVendor and idProduct attributes. You can also see these numbers in the output of **udevadm info -a -n /dev/sdb | grep vendor**, but I find the output of **lsusb** a little easier on the eyes.

You can now include these attributes in your rule.

```
SUBSYSTEM=="block", ATTRS{idVendor}=="03f0", ACTION=="add", RUN+="/usr/local/bin/trigger.sh"
```

Test this (yes, you should still reboot, just to make sure you're getting fresh reactions from udev), and it should work the same as before, only now if you plug in, say, a thumb drive manufactured by a different company (therefore with a different idVendor) or a mouse or a printer, the script won't be triggered.

Keep adding new attributes to further focus in on that one unique thumb drive you want to trigger your script. Using **udevadm info -a -n /dev/sdb**, you can find out things like the vendor name, sometimes a serial number, or the product name, and so on.

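A more tightly focused rule might end up looking something like this; the idProduct and serial values below are placeholders, so substitute whatever udevadm reports for your own drive:

```
SUBSYSTEM=="block", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="3307", ATTRS{serial}=="123456789ABC", ACTION=="add", RUN+="/usr/local/bin/trigger.sh"
```
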
For your own sanity, be sure to add only one new attribute at a time. Most of the mistakes I have made (and have seen other people online make) come from throwing a bunch of attributes into a udev rule and wondering why the thing no longer works. Testing attributes one by one is the safest way to ensure udev can identify your device successfully.

### Security

This brings up the security concerns of writing udev rules to automatically do something when a drive is plugged in. On my machines, I don't even have auto-mount turned on, and yet this article proposes scripts and rules that execute commands just by having something plugged in.

Two things to bear in mind here.

  1. Focus your udev rules once you have them working so they trigger scripts only when you really want them to. Executing a script that blindly copies data to or from your computer is a bad idea, in case someone who happens to be carrying the same brand of thumb drive plugs it into your box.
  2. Do not write your udev rule and scripts and then forget about them. I know which computers have my udev rules on them; those boxes are most often my personal computers, not the ones I take around to conferences or have in my office at work. The more "social" a computer is, the less likely it is to get a udev rule on it, because a rule could potentially result in my data ending up on someone else's device, or someone else's data or malware on my device.

In other words, as with so much of the power provided by a GNU system, it is your job to be mindful of how you are wielding that power. If you abuse it or fail to treat it with respect, it very well could go horribly wrong.

### Udev in the real world

Now that you can confirm that your script is triggered by udev, you can turn your attention to the function of the script. Right now, it is useless, doing nothing more than logging the fact that it has been executed.

I use udev to trigger [automated backups][2] of my thumb drives. The idea is that the master copies of my active documents are on my thumb drive (since it goes everywhere I go and could be worked on at any moment), and those master documents get backed up to my computer each time I plug the drive into that machine. In other words, my computer is the backup drive and my production data is mobile. The source code is available, so feel free to look at the code of attachup for further examples of constraining your udev tests.

Since that's what I use udev for the most, it's the example I'll use here, but udev can grab lots of other things, like gamepads (this is useful on systems that aren't set to load the xboxdrv module when a gamepad is attached) and cameras and microphones (useful to set inputs when a specific mic is attached), so realize that it's good for a lot more than this one example.

A simple version of my backup system consists of just two udev rules:

```
SUBSYSTEM=="block", ATTRS{idVendor}=="03f0", ACTION=="add", SYMLINK+="safety%n"
SUBSYSTEM=="block", ATTRS{idVendor}=="03f0", ACTION=="add", RUN+="/usr/local/bin/trigger.sh"
```

The first line detects my thumb drive with the attributes already discussed, then assigns the thumb drive a symlink within the device tree. The symlink it assigns is **safety%n**. The **%n** is a udev macro that resolves to whatever number the kernel gives to the device, such as sdb1, sdb2, sdb3, and so on. So **%n** would be the 1 or the 2 or the 3.

This creates a symlink in the dev tree, so it does not interfere with the normal process of plugging in a device. This means that if you use a desktop environment that likes to auto-mount devices, you won't be causing problems for it.

The second line runs the script.

My backup script looks like this:

```
#!/usr/bin/bash

# mount the first partition of the drive by its stable symlink
mount /dev/safety1 /mnt/hd
# give the mount a moment to settle
sleep 2
# back up the drive, then unmount it on success
rsync -az /mnt/hd/ /home/seth/backups/ && umount /dev/safety1
```

The script uses the symlink, which avoids the possibility of udev naming the drive something unexpected (for instance, if I have a thumb drive called DISK plugged into my computer already, and I plug in my other thumb drive also called DISK, the second one will be labeled DISK_, which would foil my script). It mounts **safety1** (the first partition of the drive) at my preferred mount point of **/mnt/hd**.

Once safely mounted, it uses [rsync][3] to back up the drive to my backup folder (my actual script uses rdiff-backup, and yours can use whatever automated backup solution you prefer).

### Udev is your dev

Udev is a very flexible system and enables you to define rules and functions in ways that few other systems dare provide users. Learn it and use it, and enjoy the power of POSIX.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/11/udev

Author: [Seth Kenlon][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://linux.die.net/man/8/udev
[2]: https://gitlab.com/slackermedia/attachup
[3]: https://opensource.com/article/17/1/rsync-backup-linux
[4]: http://slackermedia.info/handbook/doku.php?id=backup
[5]: http://www.gnu.org/licenses/fdl-1.3.html

@ -0,0 +1,154 @@

The alias And unalias Commands Explained With Examples
======

You may forget complex and lengthy Linux commands after a certain period of time unless you're a heavy command line user. Sure, there are a few ways to [**recall the forgotten commands**][1]. You could simply [**save the frequently used commands**][2] and use them on demand. Also, you can [**bookmark the important commands**][3] in your Terminal and use them whenever you want. And, of course, there is already a built-in **"history"** command available to help you remember the commands. Another easy way to remember such long commands is to simply create an alias (shortcut) for them. Not just long commands; you can create an alias for any frequently used Linux command for easier repeated invocation. With this approach, you don't need to memorize those commands anymore. In this guide, we are going to learn about the **alias** and **unalias** commands with examples in Linux.

### The alias command

The **alias** command lets you run any command or set of commands (including options and arguments) using a user-defined string. The string can be a simple name or abbreviation for the commands, regardless of how complex the original commands are. You can use aliases the same way you use normal Linux commands. The alias command comes built into shells, including Bash, Csh, Ksh, and Zsh.

The general syntax of the alias command is:

```
alias [alias-name[=string]...]
```

Let us go ahead and see some examples.

**List aliases**

You might already have aliases in your system. Some applications may create aliases automatically when you install them. To view the list of existing aliases, run:

```
$ alias
```

or,

```
$ alias -p
```

I have the following aliases in my Arch Linux system.

```
alias betty='/home/sk/betty/main.rb'
alias ls='ls --color=auto'
alias pbcopy='xclip -selection clipboard'
alias pbpaste='xclip -selection clipboard -o'
alias update='newsbeuter -r && sudo pacman -Syu'
```

**Create a new alias**

Like I already said, you don't need to memorize lengthy and complex commands, and you don't need to run long commands over and over. Just create an alias for the command with an easily recognizable name, and run it whenever you want. Let us say you want to use this command often:

```
$ du -h --max-depth=1 | sort -hr
```

This command shows how much disk space each sub-directory consumes in the current working directory. The command is a bit long, so instead of remembering the whole thing, we can easily create an alias like below:

```
$ alias du='du -h --max-depth=1 | sort -hr'
```

Here, **du** is the alias name. You can use any name for the alias, to make it easy to remember later.

You can use either single or double quotes when creating an alias. For a simple command like this one, it makes no difference; just be aware that with double quotes, variables and command substitutions are expanded once, when the alias is defined, while with single quotes they are expanded each time the alias is run.

Now you can just run the alias (i.e. **du** in our case) instead of the full command. Both will produce the same result.

Aliases expire with the current shell session; they will be gone once you log out of the current session. In order to make aliases permanent, you need to add them to your shell's configuration file.

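A quick way to see that quoting difference for yourself; the alias names here are just for illustration:

```
$ alias now_single='echo $(date)'   # $(date) runs each time the alias is used
$ alias now_double="echo $(date)"   # $(date) ran once, when the alias was defined
```
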
On the Bash shell, edit the **~/.bashrc** file:

```
$ nano ~/.bashrc
```

Add the aliases one by one:

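For example, reusing the aliases from this article, the end of your **~/.bashrc** might end up looking something like this:

```
alias du='du -h --max-depth=1 | sort -hr'
alias update='newsbeuter -r && sudo pacman -Syu'
```
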
Save and quit the file. Then, update the changes by running the following command:

```
$ source ~/.bashrc
```

Now, the aliases are persistent across sessions.

On Zsh, you need to add the aliases to the **~/.zshrc** file. Similarly, add your aliases to the **~/.config/fish/config.fish** file if you use the Fish shell.

**Viewing a specific aliased command**

As I mentioned earlier, you can view the list of all aliases in your system using the alias command. If you want to view the command associated with a given alias, for example 'du', just run:

```
$ alias du
alias du='du -h --max-depth=1 | sort -hr'
```

As you can see, the above command displays the command associated with the word 'du'.

For more details about the alias command, refer to the man pages:

```
$ man alias
```

### The unalias command

As the name says, the **unalias** command removes an alias from your system. The typical syntax of the unalias command is:

```
unalias <alias-name>
```

To remove an aliased command, for example the 'du' alias we created earlier, simply run:

```
$ unalias du
```

The unalias command removes the alias from the current session. If the alias is also defined in your shell's configuration file, remove it from there as well; otherwise, it will be recreated in your next session.

Another way to override an alias is to create a new alias with the same name; the new definition simply replaces the old one.

To remove all aliases from the current session, use the **-a** flag:

```
$ unalias -a
```

For more details, refer to the man pages.

```
$ man unalias
```

Creating aliases for complex and lengthy commands will save you some time if you run those commands over and over. Now it is your turn to create aliases for your frequently used commands.

And, that's all for now. Hope this helps. More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/

Author: [SK][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/easily-recall-forgotten-linux-commands/
[2]: https://www.ostechnix.com/save-commands-terminal-use-demand/
[3]: https://www.ostechnix.com/bookmark-linux-commands-easier-repeated-invocation/

translated/tech/20180831 Test containers with Python and Conu.md
@ -0,0 +1,155 @@

Test containers with Python and Conu
======



More and more developers use containers to develop and deploy their applications. That means being able to easily test containers is becoming important. [Conu][1] (short for container utilities) is a Python library that makes it easy to write tests for your containers. This article shows you how to use it to test containers.

### Getting started

First, you need a container application to test. For that, the following commands create a folder containing a container Dockerfile and a Flask application to be served by the container.

```bash
$ mkdir container_test
$ cd container_test
$ touch Dockerfile
$ touch app.py
```

Copy the following code into the app.py file. It is the customary basic Flask application, returning the string "Hello Container World!".

```python
from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello Container World!'

if __name__ == '__main__':
    app.run(debug=True,host='0.0.0.0')
```

### Create and build the test container

To build the test container, add the following instructions to the Dockerfile.

```dockerfile
FROM registry.fedoraproject.org/fedora-minimal:latest
RUN microdnf -y install python3-flask && microdnf clean all
ADD ./app.py /srv
CMD ["python3", "/srv/app.py"]
```

Then build the container using the Docker CLI tool.

```bash
$ sudo dnf -y install docker
$ sudo systemctl start docker
$ sudo docker build . -t flaskapp_container
```

Tip: the first two commands are only needed if Docker is not yet installed on the system.

After the build, run the container with the following command.

```bash
$ sudo docker run -p 5000:5000 --rm flaskapp_container
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 473-505-51
```

Finally, use curl to check that the Flask application is running correctly inside the container:

```bash
$ curl http://127.0.0.1:5000
Hello Container World!
```

Now that flaskapp_container is running and ready for testing, you can stop it using Ctrl+C.

### Create the test script

Before you write the test script, you must install conu. In the container_test directory created earlier, run the following commands.

```bash
$ python3 -m venv .venv
$ source .venv/bin/activate
(.venv)$ pip install --upgrade pip
(.venv)$ pip install conu
$ touch test_container.py
```

Then copy and save the following script in the test_container.py file.

```python
import conu

PORT = 5000

with conu.DockerBackend() as backend:
    image = backend.ImageClass("flaskapp_container")
    options = ["-p", "5000:5000"]
    container = image.run_via_binary(additional_opts=options)

    try:
        # Check that the container is running and wait for the flask application to start.
        assert container.is_running()
        container.wait_for_port(PORT)

        # Run a GET request on / port 5000.
        http_response = container.http_request(path="/", port=PORT)

        # Check the response status code is 200
        assert http_response.ok

        # Get the response content
        response_content = http_response.content.decode("utf-8")

        # Check that the "Hello Container World!" string is served.
        assert "Hello Container World!" in response_content

        # Get the logs from the container
        logs = [line for line in container.logs()]
        # Check that the Flask application saw the GET request.
        assert b'"GET / HTTP/1.1" 200 -' in logs[-1]

    finally:
        container.stop()
        container.delete()
```

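With the image built and conu installed, you can then execute the test script directly (depending on your setup, the script may need permission to talk to the Docker daemon, for example via sudo). If every assertion passes, it exits quietly; a failure raises an AssertionError:

```bash
(.venv)$ python3 test_container.py
```
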
#### Test setup

The script starts by setting up conu to use Docker as a backend to run the container. Then it sets the container image to the flaskapp_container you built in the first part of this tutorial.

The next step is to configure the options needed to run the container. In this example, the Flask application serves its content on port 5000, so you need to expose that port and map it to the same port on the host.

Finally, the script starts the container with these options, and it is now ready for testing.

#### Testing methods

Before testing a container, check that it is running and ready. The example script uses container.is_running and container.wait_for_port. These methods ensure that the container is running and that the service is available on the expected port.

container.http_request is a wrapper around the [requests][2] library that makes it convenient to send HTTP requests during tests. This method returns a [requests.Response][3] object, so it is easy to access the content of the response for testing.

Conu also provides access to the container logs. Once again, this can be useful during testing. In the example above, the container.logs method returns the container logs. You can use them to assert that a specific log was printed or, for example, that no exceptions were raised during the test.

Conu provides many other useful methods for interfacing with containers. A full list of the APIs is available in the [documentation][4]. You can also consult the examples available on [GitHub][5].

All the code and files needed to run this tutorial are available on [GitHub][6] as well. For readers who want to take this example further, you can look at using [pytest][7] to run the tests and build a container test suite.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/test-containers-python-conu/

Author: [Clément Verna][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [GraveAccent](https://github.com/GraveAccent)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://fedoramagazine.org/author/cverna/
[1]: https://github.com/user-cont/conu
[2]: http://docs.python-requests.org/en/master/
[3]: http://docs.python-requests.org/en/master/api/#requests.Response
[4]: https://conu.readthedocs.io/en/latest/index.html
[5]: https://github.com/user-cont/conu/tree/master/docs/source/examples
[6]: https://github.com/cverna/container_test_script
[7]: https://docs.pytest.org/en/latest/

translated/tech/20180907 6.828 lab tools guide.md
@ -0,0 +1,200 @@

6.828 Lab Tools Guide
======

### 6.828 lab tools guide

Familiarity with your environment is crucial for productive development and debugging. This article gives a brief overview of the JOS environment and some especially useful GDB and QEMU commands. Having said that, you should still read the GDB and QEMU manuals to understand how to use these powerful tools thoroughly.

#### Debugging tips

##### Kernel

GDB is your friend. Use the `qemu-gdb` target (or its `qemu-gdb-nox` variant) to make QEMU wait for GDB to attach. See the GDB reference below for some commands that are useful when debugging the kernel.

If you run into unexpected interrupts, exceptions, or triple faults, you can ask QEMU to generate a detailed log of interrupts using the `-d` argument.

To debug virtual memory issues, try the QEMU monitor commands `info mem` (for a high-level overview of memory) or `info pg` (for much more detail). Note that these commands only display the **current** page table.

(After lab 4) To debug multiple CPUs, use GDB's thread-related commands, such as `thread` and `info threads`.

##### User environments (after lab 3)

GDB can also debug user environments, but there are a few things to watch out for, since GDB cannot tell multiple user environments, or a user environment and the kernel, apart.

You can specify which user environment JOS starts with `make run-name` (or by editing `kern/init.c`). To make QEMU wait for GDB to attach, use the `run-name-gdb` variant.

You can symbolically debug user code, just like kernel code, but you have to tell GDB which symbol table to use with the `symbol-file` command, since it can only use one symbol table at a time. The provided `.gdbinit` loads the kernel symbol table, `obj/kern/kernel`. For a user environment, the symbol table is in its ELF binary, so you can load it using `symbol-file obj/user/name`. Don't load symbols from any `.o` files, as those haven't been relocated by the linker (the libraries are statically linked into the JOS user binaries, so those symbols are already included in each user binary). Make sure you get the right user binary; library functions are linked at different EIPs in different binaries, and GDB doesn't know any better!

(After lab 4) Since GDB is attached to the virtual machine as a whole, it sees clock interrupts as just another control transfer. This makes it basically impossible to step through user code at the low level, because a clock interrupt is virtually guaranteed the moment you let the VM run again. The `stepi` command works because it suppresses interrupts, but it only steps one assembly instruction. Breakpoints generally work, but watch out because you can hit the same EIP in a different environment (indeed, a different binary altogether).

#### Reference

##### JOS makefile

The JOS GNUmakefile includes a number of phony targets for running JOS in various ways. All of these targets configure QEMU to listen for GDB connections (the `*-gdb` targets also wait for this connection). To start GDB against a running QEMU, just run `gdb` from your lab directory. We provide a `.gdbinit` that automatically points GDB at QEMU, loads the kernel symbol file, and switches between 16-bit and 32-bit mode. Exiting GDB will shut down QEMU.

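Under these defaults, a typical session looks something like this sketch, using two terminals in the same lab directory:

```
$ make qemu-gdb    # terminal 1: QEMU starts and waits for GDB to attach
$ gdb              # terminal 2: the provided .gdbinit connects to QEMU
```
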
  * `make qemu`
    Build everything and start QEMU with the VGA console in a new window and the serial console in your terminal. To exit, either close the VGA window or press `Ctrl-c` or `Ctrl-a x` in your terminal.
  * `make qemu-nox`
    Like `make qemu`, but run with only the serial console. To exit, press `Ctrl-a x`. This is particularly useful over dialup SSH connections to Athena, because the VGA window consumes a lot of bandwidth.
  * `make qemu-gdb`
    Like `make qemu`, but rather than passively accepting GDB connections at any time, it pauses at the first machine instruction and waits for a GDB connection.
  * `make qemu-nox-gdb`
    A combination of the `qemu-nox` and `qemu-gdb` targets.
  * `make run-name`
    (After lab 3) Run user program _name_. For example, `make run-hello` runs `user/hello.c`.
  * `make run-name-nox`, `run-name-gdb`, `run-name-gdb-nox`
    (After lab 3) Variants of `run-name` that correspond to the `qemu` target variants.

The makefile also accepts a few useful variables:

  * `make V=1 …`
    Verbose mode. Print out every command being run, including arguments.
  * `make V=1 grade`
    Stop after any failed grade test and leave QEMU's output in `jos.out` for inspection.
  * `make QEMUEXTRA=' _args_ ' …`
    Specify additional arguments to pass to QEMU.

##### JOS obj/

When building JOS, the makefile also produces some additional output files that may prove useful while debugging:

  * `obj/boot/boot.asm`, `obj/kern/kernel.asm`, `obj/user/hello.asm`, etc.
    Assembly listings for the bootloader, kernel, and user programs.
  * `obj/kern/kernel.sym`, `obj/user/hello.sym`, etc.
    Symbol tables for the kernel and user programs.
  * `obj/boot/boot.out`, `obj/kern/kernel`, `obj/user/hello`, etc.
    Linked ELF images of the kernel and user programs. They contain the symbol information that GDB uses.

##### GDB

See the [GDB manual][1] for a full guide to GDB commands. Here are some particularly useful commands for 6.828, some of which don't typically come up outside of OS development.

  * `Ctrl-c`
    Halt the machine and break into GDB at the current instruction. If QEMU has multiple virtual CPUs, this halts all of them.
  * `c` (or `continue`)
    Continue execution until the next breakpoint or `Ctrl-c`.
  * `si` (or `stepi`)
    Execute one machine instruction.
  * `b function` or `b file:line` (or `breakpoint`)
    Set a breakpoint at the given function or line.
  * `b *addr` (or `breakpoint`)
    Set a breakpoint at EIP addr.
  * `set print pretty`
    Enable pretty-printing of arrays and structs.
  * `info registers`
    Print the general-purpose registers, `eip`, `eflags`, and the segment selectors. For a much more thorough dump of the machine register state, see QEMU's own `info registers` command.
  * `x/Nx addr`
    Display a hex dump of N words starting at virtual address addr. If N is omitted, it defaults to 1. addr can be any expression.
  * `x/Ni addr`
    Display N assembly instructions starting at addr. Using `$eip` as addr will display the instructions at the current instruction pointer.
  * `symbol-file file`
    (After lab 3) Switch to symbol file file. When GDB attaches to QEMU, it has no notion of the process boundaries within the virtual machine, so we have to tell it which symbols to use. By default, we configure GDB to use the kernel symbol file, `obj/kern/kernel`. If the machine is running user code, say `hello.c`, you can switch to the hello symbol file using `symbol-file obj/user/hello`.

QEMU represents each virtual CPU as a thread in GDB, so you can use all of GDB's thread-related commands to view or manipulate QEMU's virtual CPUs (a minimal sketch follows the list).

  * `thread n`
    GDB focuses on one thread (i.e., CPU) at a time. This command switches that focus to thread n, numbered from zero.
  * `info threads`
    List all threads (i.e., CPUs), including their state (running or halted) and what function they're in.

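At the GDB prompt, that might look like this (the thread numbers depend on how many virtual CPUs QEMU was started with):

```
(gdb) info threads
(gdb) thread 1
```
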
##### QEMU

QEMU includes a built-in monitor that can inspect and modify machine state in useful ways. To enter the monitor, press `Ctrl-a c` in the terminal running QEMU. Press `Ctrl-a c` again to switch back to the serial console.

For a complete reference to the monitor commands, see the [QEMU manual][2]. Here are some particularly useful commands for 6.828:

  * `xp/Nx paddr`
    Display a hex dump of N words starting at physical address paddr. If N is omitted, it defaults to 1. This is the physical-memory analogue of GDB's `x` command.

  * `info registers`
    Display a full dump of the machine's internal register state. In particular, this includes the machine's _hidden_ segment state for the segment selectors and the local, global, and interrupt descriptor tables, plus the task register. This hidden state is the information the virtual CPU read from the GDT/LDT when the segment selector was loaded. Here's the CS when running in the JOS kernel in lab 1, and the meaning of each field:
```c
CS =0008 10000000 ffffffff 10cf9a00 DPL=0 CS32 [-R-]
```

    * `CS =0008`
      The visible part of the code selector. We're using segment 0x8. This also tells us we're referring to the global descriptor table (0x8 & 4 = 0), and our CPL (current privilege level) is 0x8 & 3 = 0.
    * `10000000`
      The base of the segment. Linear address = logical address + 0x10000000.
    * `ffffffff`
      The limit of the segment. Linear addresses above 0xffffffff will result in segment violation exceptions.
    * `10cf9a00`
      The raw flags of the segment, which QEMU decodes for us in the next few fields.
    * `DPL=0`
      The privilege level of the segment. Only code running with privilege level 0 can load this segment.
    * `CS32`
      This is a 32-bit code segment. Other values include `DS` for data segments (not to be confused with the DS register) and `LDT` for local descriptor tables.
    * `[-R-]`
      This segment is read-only.

  * `info mem`
    (After lab 2) Display mapped virtual memory and permissions. For example:
```
ef7c0000-ef800000 00040000 urw
efbf8000-efc00000 00008000 -rw
```

    This tells us that the 0x00040000 bytes of memory from 0xef7c0000 to 0xef800000 are mapped read/write and user-accessible, while the memory from 0xefbf8000 to 0xefc00000 is mapped read/write, but only kernel-accessible.

  * `info pg`
    (After lab 2) Display the current page table structure. The output is similar to `info mem`, but distinguishes page directory entries from page table entries and gives the permissions of each separately. Repeated PTEs and entire page tables are folded up into a single line. For example:
```
VPN range      Entry         Flags       Physical page
[00000-003ff]  PDE[000]      -------UWP
 [00200-00233]  PTE[200-233] -------U-P  00380 0037e 0037d 0037c 0037b 0037a ..
[00800-00bff]  PDE[002]      ----A--UWP
 [00800-00801]  PTE[000-001] ----A--U-P  0034b 00349
 [00802-00802]  PTE[002]     -------U-P  00348
```

    This shows two page directory entries, spanning virtual addresses 0x00000000 to 0x003fffff and 0x00800000 to 0x00bfffff, respectively. Both PDEs are present, writable, and user-accessible, and the second PDE is also marked accessed. The second of these page tables maps three pages, spanning virtual addresses 0x00800000 through 0x00802fff, of which the first two are present, writable, and user-accessible, while the third is only present and user-accessible. The first of these PTEs maps physical page 0x34b.

QEMU also takes some useful command-line arguments, which can be passed into the JOS makefile using the `QEMUEXTRA` variable.

  * `make QEMUEXTRA='-d int' ...`
    Log all interrupts, along with a full register dump, to `qemu.log`. You can ignore the first two log entries, "SMM: enter" and "SMM: after RSM", as these are generated before entering the bootloader. After that, log entries look like:
```
4: v=30 e=0000 i=1 cpl=3 IP=001b:00800e2e pc=00800e2e SP=0023:eebfdf28 EAX=00000005
EAX=00000005 EBX=00001002 ECX=00200000 EDX=00000000
ESI=00000805 EDI=00200000 EBP=eebfdf60 ESP=eebfdf28
...
```

    The first line describes the interrupt. The `4:` is just a log record counter. `v` gives the vector number in hex. `e` gives the error code. `i=1` indicates that this was produced by an `int` instruction (versus a hardware interrupt). The rest of the line should be self-explanatory. The register dump that follows is the same information printed by info registers.

    Note: if you're running a pre-0.15 version of QEMU, the log will be written to `/tmp` instead of the current directory.

--------------------------------------------------------------------------------

via: https://pdos.csail.mit.edu/6.828/2018/labguide.html

Author: [csail.mit][a]
Topic selection: [lujun9972][b]
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: http://sourceware.org/gdb/current/onlinedocs/gdb/
[2]: http://wiki.qemu.org/download/qemu-doc.html#pcsys_005fmonitor

@ -1,4 +1,4 @@

Use Steam Play and Proton to play Windows games on Fedora
======

