mirror of https://github.com/LCTT/TranslateProject.git(synced 2025-01-31 23:30:11 +08:00,commit 044456ceb3)

# 用 350 行代码从零开始,将 Lisp 编译成 JavaScript

我们将会在本篇文章中看到从零开始实现的编译器,将简单的类 LISP 计算语言编译成 JavaScript。完整的源代码在[这里][7]。

我们将会:

1. 自定义语言,并用它编写一个简单的程序
2. 实现一个简单的解析器组合器
3. 为该语言实现一个解析器
4. 为该语言实现一个美观的打印器
5. 为我们的用途定义 JavaScript 的一个子集
6. 实现代码转译器,将代码转译成我们定义的 JavaScript 子集
7. 把所有东西整合在一起

开始吧!

### 1、定义语言

Lisp 族语言最迷人的地方在于,它们的语法就是树状表示的,这就是这门语言很容易解析的原因。我们很快就能接触到它。但首先让我们把自己的语言定义好。我们语言的语法的巴科斯范式(BNF)描述如下:

```
program ::= expr
expr ::= <integer> | <name> | ([<expr>])
```

该语言中,我们保留一些内建的特殊形式,这样我们就能做一些更有意思的事情:

* `let` 表达式使我们可以在它的 `body` 环境中引入新的变量。语法如下:

```
let ::= (let ([<letarg>]) <body>)
letargs ::= (<name> <expr>)
body ::= <expr>
```

* `lambda` 表达式:也就是匿名函数定义。语法如下:

```
lambda ::= (lambda ([<name>]) <body>)
```
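
下面给出一个用这门语言编写的小程序作为语法示例(示意;其中假设存在内建的 `add` 和 `print` 特殊形式,完整的内建列表见后文的 `builtins`):

```
(let ((double (lambda (x) (add x x))))
  (print (double 21)))
```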

另一件你可能想做的事情是在语法中添加一些注释信息。比如定位:`Expr` 来自哪个文件,具体到这个文件的哪一行哪一列。你可以在后面的阶段中使用这一特性打印出错误定位,即使错误并不是发生在解析阶段。

* _练习 1_:添加一个 `Program` 数据类型,可以按顺序包含多个 `Expr`。
* _练习 2_:向语法树中添加一个定位注解。

### 2、实现一个简单的解析器组合库

我们要做的第一件事情是定义一个<ruby>嵌入式领域专用语言<rt>Embedded Domain Specific Language</rt></ruby>(EDSL),我们会用它来定义我们的语言解析器。这常常被称为解析器组合库。我们做这件事完全是出于学习的目的,Haskell 里有很好的解析库,在实际构建软件或者进行实验时,你应该使用它们。[megaparsec][8] 就是这样的一个库。
首先我们来谈谈解析库的实现的思路。本质上,我们的解析器就是一个函数,接受一些输入,可能会读取输入的一些或全部内容,然后返回解析出来的值和无法解析的输入部分,或者在解析失败时抛出异常。我们把它写出来。
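
为了具体说明这个思路,下面用 Python 给出一个极简的示意(并非原文的 Haskell 实现,`char` 这个组合器名字也是为演示而假设的):

```python
# 解析器被表示为一个函数:接收 (位置, 剩余输入) 组成的状态,
# 成功时返回 (解析出的值, 新状态),失败时返回 None。
def char(c):
    """返回一个只匹配单个字符 c 的解析器。"""
    def parser(state):
        pos, text = state
        if text and text[0] == c:
            return (c, (pos + 1, text[1:]))  # 消耗一个字符
        return None                          # 解析失败
    return parser

p = char('a')
print(p((0, "abc")))  # ('a', (1, 'bc'))
print(p((0, "xyz")))  # None
```

真实的解析器组合库(包括我们接下来要写的)还会携带源的名字、位置和错误信息,但核心思想与此相同。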

```
data ParseError
  = ParseError ParseString Error

type Error = String
```
这里我们定义了三个主要的新类型。

第二个,`ParseString` 是我们的输入或携带的状态。它有三个重要的部分:

* `Name`:这是源的名字
* `(Int, Int)`:这是源的当前位置
* `String`:这是等待解析的字符串
第三个,`ParseError` 包含了解析器的当前状态和一个错误信息。

```
instance Monad Parser where
  -- (此处省略了定义的前半部分)
      Right (rs, rest) ->
        case f rs of
          Parser parser -> parser rest
```

接下来,让我们定义运行解析器的方式,以及处理失败的辅助函数:

```
runParser :: String -> String -> Parser a -> Either ParseError (a, ParseString)
runParser name str (Parser parser) = parser $ ParseString name (0,0) str
```

```
many1 :: Parser a -> Parser [a]
many1 parser =
  (:) <$> parser <*> many parser
```
下面这些是用我们定义的组合器实现的一些专用解析器:

```
sepBy sep parser = do
  frst <- optional parser
  rest <- many (sep *> parser)
  pure $ maybe rest (:rest) frst
```

现在,为这门语言定义解析器所需要的所有东西都有了。

* _练习_:实现一个 EOF(end of file/input,即文件或输入终止符)解析器组合器。

### 3、为我们的语言实现解析器
我们会用自顶向下的方法定义解析器。

```
parseAtom = parseSymbol <|> parseInt

parseSymbol :: Parser Atom
parseSymbol = fmap Symbol parseName
```
注意,这四个函数是对我们这门语言的高阶描述。这解释了为什么用 Haskell 做解析工作这么棒。在定义完这些高层部分之后,我们还需要定义底层的 `parseName` 和 `parseInt`。

```
parseName = do
  -- (此处省略了前面的几行)
  pure (c:cs)
```

整数是一系列数字,数字前面可能有负号 `-`:

```
parseInt :: Parser Atom
-- (实现在此省略)
```

* _练习 1_:为第一节中定义的 `Program` 类型编写一个解析器。
* _练习 2_:用 Applicative 的形式重写 `parseName`。
* _练习 3_:`parseInt` 可能出现溢出情况,找到处理它的方法,不要用 `read`。

### 4、为这门语言实现一个更好看的输出器
我们还想做一件事,将我们的程序以源代码的形式打印出来。这对完善错误信息很有用。

```
indent tabs e = concat (replicate tabs " ") ++ e
```
好,目前为止我们写了近 200 行代码,这些代码一般叫做编译器的前端。我们还要写大概 150 行代码,用来执行三个额外的任务:我们需要根据需求定义一个 JS 的子集,定义一个将我们的语言转译成这个子集的转译器,最后把所有东西整合在一起。开始吧。

### 5、根据需求定义 JavaScript 的子集
首先,我们要定义将要使用的 JavaScript 的子集:

```
printJSExpr doindent tabs = \case
  -- (各分支在此省略)
```

* _练习 1_:添加 `JSProgram` 类型,它可以包含多个 `JSExpr`,然后创建一个叫做 `printJSExprProgram` 的函数来生成代码。
* _练习 2_:添加 `JSExpr` 的新类型:`JSIf`,并为其生成代码。

### 6、实现到我们定义的 JavaScript 子集的代码转译器
我们快做完了。这一节将会创建函数,将 `Expr` 转译成 `JSExpr`。

```
translateList = \case
  -- (前面的分支在此省略)
  f:xs ->
    JSFunCall <$> translateToJS f <*> traverse translateToJS xs
```
`builtins` 是一系列要转译的特例,就像 `lambda` 和 `let`。每一种特例都会接受一个参数列表,验证它是否合乎语法规范,然后将其转译成等效的 `JSExpr`。

```
builtins =
  [ -- (前面的条目在此省略)
   ("div", transBinOp "div" "/")
  ,("print", transPrint)
  ]
```
在我们的实现中,内建的特殊形式会被当作特殊的、非第一类的东西对待,因此不可能把它们当作第一类函数使用。

```
fromSymbol :: Expr -> Either String Name
fromSymbol (ATOM (Symbol s)) = Right s
fromSymbol e = Left $ "cannot bind value to non symbol type: " ++ show e
```

我们会将 `let` 转译成带有相关名字参数的函数定义,然后带上参数调用该函数,这样就在这一作用域中引入了变量:

```
transLet :: [Expr] -> Either TransError JSExpr
```

```
transPrint :: [Expr] -> Either TransError JSExpr
transPrint [expr] = JSFunCall (JSSymbol "console.log") . (:[]) <$> translateToJS expr
transPrint xs = Left $ "Syntax error. print expected 1 arguments, got: " ++ show (length xs)
```
注意,如果我们将这些代码当作 `Expr` 的特例进行解析,那我们就可能会跳过语法验证。

* _练习 1_:将 `Program` 转译成 `JSProgram`。
* _练习 2_:为 `if Expr Expr Expr` 添加一个特例,并将它转译成你在上一个练习中实现的 `JSIf` 条件语句。

### 7、把所有东西整合到一起
最终,我们将会把所有东西整合到一起。我们会:

1. 读取文件
2. 将文件解析成 `Expr`
3. 将文件转译成 `JSExpr`
4. 将 JavaScript 代码发送到标准输出流

我们还会启用一些用于测试的标志位:

* `--e` 将进行解析,并打印出表达式的抽象表示(`Expr`)
* `--pp` 将进行解析,并进行美化输出
* `--jse` 将进行解析、转译,并打印出生成的 JS 表达式(`JSExpr`)的抽象表示
* `--ppc` 将进行解析、美化输出,并进行编译
via: https://gilmi.me/blog/post/2016/10/14/lisp-to-js

作者:[Gil Mizrahi][a]
选题:[oska874][b]
译者:[BriFuture](https://github.com/BriFuture)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

![](https://www.ostechnix.com/wp-content/uploads/2018/06/Rename-Multiple-Files-720x340.png)

你可能已经知道,我们使用 `mv` 命令在类 Unix 操作系统中重命名或者移动文件和目录。但是,`mv` 命令不支持一次重命名多个文件。不用担心。在本教程中,我们将学习使用 Linux 中的 `mmv` 命令一次重命名多个文件。此命令用于在类 Unix 操作系统中使用标准通配符批量移动、复制、追加和重命名文件。

### 在 Linux 中一次重命名多个文件

`mmv` 程序可在基于 Debian 的系统的默认仓库中使用。要想在 Debian、Ubuntu、Linux Mint 上安装它,请运行以下命令:

```
$ sudo apt-get install mmv
```

```
$ ls
a1.txt a2.txt a3.txt
```

现在,你想要将所有以字母 “a” 开头的文件重命名为以 “b” 开头的。当然,你可以在几秒钟内手动执行此操作。但是想想你是否有数百个文件想要重命名?这是一个非常耗时的过程。这时候 `mmv` 命令就很有帮助了。
要将所有以字母 “a” 开头的文件重命名为以字母 “b” 开头的,只需要运行:

```
$ mmv a\* b\#1
```

```
$ ls
b1.txt b2.txt b3.txt
```

如你所见,所有以字母 “a” 开头的文件(即 `a1.txt`、`a2.txt`、`a3.txt`)都重命名为了 `b1.txt`、`b2.txt`、`b3.txt`。
**解释**

在上面的例子中,第一个参数(`a\*`)是 “from” 模式,第二个参数是 “to” 模式(`b\#1`)。根据上面的例子,`mmv` 将查找任何以字母 “a” 开头的文件名,并根据第二个参数,即 “to” 模式,重命名匹配的文件。我们可以使用通配符,例如用 `*`、`?` 和 `[]` 来匹配一个或多个任意字符。请注意,你必须转义通配符,否则它们将被 shell 扩展,`mmv` 将无法理解。
“to” 模式中的 `#1` 是通配符索引。它匹配 “from” 模式中的第一个通配符。 “to” 模式中的 `#2` 将匹配第二个通配符(如果有的话),依此类推。在我们的例子中,我们只有一个通配符(星号),所以我们写了一个 `#1`。并且,`#` 符号也应该被转义。此外,你也可以用引号括起模式。
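
如果想脱离 `mmv` 本身理解这种 “from/to” 模式替换的逻辑,可以用 Python 的正则表达式做一个粗略的类比(仅为示意,并非 `mmv` 的实现方式;`rename_like_mmv` 是为演示而假设的名字):

```python
import re

# 类比 mmv a\* b\#1:
# “from” 模式中的 * 对应正则中的一个捕获组,
# 替换串中的 \1 就相当于 “to” 模式中的 #1。
def rename_like_mmv(name):
    return re.sub(r'^a(.*)$', r'b\1', name)

for f in ['a1.txt', 'a2.txt', 'a3.txt']:
    print(f, '->', rename_like_mmv(f))  # a1.txt -> b1.txt,依此类推
```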

你甚至可以将具有特定扩展名的所有文件重命名为其他扩展名。例如,要将当前目录中的所有 `.txt` 文件重命名为 `.doc` 文件格式,只需运行:

```
$ mmv \*.txt \#1.doc
```

这是另一个例子。我们假设你有以下文件。

```
$ ls
abcd1.txt abcd2.txt abcd3.txt
```

你希望在当前目录下的所有文件名中将第一次出现的 “abc” 替换为 “xyz”。你会怎么做呢?
很简单。

```
$ mmv '*abc*' '#1xyz#2'
```

请注意,在上面的示例中,模式被单引号括起来了。

```
$ ls
xyzd1.txt xyzd2.txt xyzd3.txt
```

看到没?文件 `abcd1.txt`、`abcd2.txt` 和 `abcd3.txt` 已经重命名为了 `xyzd1.txt`、`xyzd2.txt` 和 `xyzd3.txt`。

`mmv` 命令的另一个值得注意的功能是,你可以使用 `-n` 选项打印输出而不实际重命名文件,如下所示。

```
$ mmv -n a\* b\#1
a1.txt -> b1.txt
a2.txt -> b2.txt
a3.txt -> b3.txt
```

这样,你可以在实际重命名文件之前,验证 `mmv` 命令将会执行的操作。
有关更多详细信息,请参阅 man 页面。

```
$ man mmv
```

### 更新:Thunar 文件管理器

**Thunar 文件管理器**默认具有内置的**批量重命名**选项。如果你正在使用 Thunar,那么重命名文件要比使用 `mmv` 命令容易得多。

Thunar 在大多数 Linux 发行版的默认仓库中都可用。

要在基于 Arch 的系统上安装它,请运行:

```
$ sudo pacman -S thunar
```

在 RHEL、CentOS 上:

```
$ sudo yum install thunar
```

在 Fedora 上:

```
$ sudo dnf install thunar
```

在 openSUSE 上:

```
$ sudo zypper install thunar
```

在 Debian、Ubuntu、Linux Mint 上:

```
$ sudo apt-get install thunar
```
安装后,你可以从菜单或应用程序启动器中启动批量重命名程序。 要从终端启动它,请使用以下命令:

```
$ thunar -B
```

批量重命名方式如下。
![][1]

单击 “+”,然后选择要重命名的文件列表。批量重命名可以重命名文件的名称、文件的后缀,或者同时重命名文件的名称和后缀。Thunar 目前支持以下批量重命名:

- 插入日期或时间
- 插入或覆盖
![][2]

选择条件后,单击“重命名文件”选项来重命名文件。

你还可以通过选择两个或更多文件,从 Thunar 中打开批量重命名器。选择文件后,按 F2 或右键单击并选择“重命名”。

嗯,这就是本次的所有内容了。希望有所帮助。更多干货即将到来。敬请关注!

via: https://www.ostechnix.com/how-to-rename-multiple-files-at-once-in-linux/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

重启和关闭 Linux 系统的 6 个终端命令
======

在 Linux 管理员的日程当中,有很多需要执行的任务,其中就有系统的重启和关闭。

对于 Linux 管理员来说,重启和关闭系统是其诸多风险操作中的一例。有时候,由于某些原因,这些操作可能无法挽回,他们需要更多的时间来排查问题。

在 Linux 命令行模式下我们可以执行这些任务。很多时候,由于熟悉命令行,Linux 管理员更倾向于在命令行下完成这些任务。

重启和关闭系统的 Linux 命令并不多,用户需要根据需要,选择合适的命令来完成任务。

以下所有命令都有其自身特点,Linux 管理员可以按需选用。
**建议阅读:**

- [查看系统/服务器正常运行时间的 11 个方法][1]
- [Tuptime:一款为 Linux 系统保存历史记录、统计运行时间的工具][2]
系统重启和关闭之始,会通知所有已登录的用户和进程。当然,如果使用了时间参数,系统将拒绝新的用户登入。

执行此类操作之前,我建议您再三复查,因为您只能得到很少的提示来确保这一切顺利。

下面陈列了一些步骤:
* 确保您拥有一个可以处理故障的控制台,以防之后可能会发生的问题。VMware 可以访问虚拟机,而 IPMI、iLO 和 iDRAC 可以访问物理服务器。
* 您需要通过公司的流程提出变更或故障处理申请,并得到执行许可。
* 为安全着想,备份重要的配置文件,并保存到其他服务器上。
* 验证日志文件(提前检查)。
* 和相关团队交流,比如数据库管理团队、应用团队等。
* 通知数据库和应用服务人员关闭服务,并得到确定答复。
* 使用适当的命令复盘操作,验证工作。
* 最后,重启系统。
* 验证日志文件,如果一切顺利,执行下一步操作;如果发现任何问题,对症排查。
* 无论是回退版本还是运行程序,通知相关团队提出申请。
* 对操作保持适当的观察,并将一切符合预期的结果反馈给团队。
使用下列命令执行这项任务。

* `shutdown`、`halt`、`poweroff`、`reboot` 命令:用来停机、重启或切断电源。
* `init` 命令:是 “initialization” 的简称,是系统启动的第一个进程。
* `systemctl` 命令:systemd 是 Linux 系统和服务的管理程序。
### 方案 1:如何使用 shutdown 命令关闭和重启 Linux 系统
`shutdown` 命令用于断电或重启本地和远程的 Linux 机器。它为高效完成作业提供多个选项。如果使用了时间参数,系统关闭的 5 分钟之前,会创建 `/run/nologin` 文件,以确保后续的登录会被拒绝。

通用语法如下:

```
# shutdown [OPTION] [TIME] [MESSAGE]
```
运行下面的命令来立即关闭 Linux 机器。它会立刻杀死所有进程,并关闭系统。

```
# shutdown -h now
```

* `-h`:除非指定了 `--halt` 选项,否则等价于 `--poweroff` 选项。

另外我们可以使用带有 `--halt` 选项的 `shutdown` 命令来立即关闭设备。

```
# shutdown --halt now
或者
# shutdown -H now
```
* `-H, --halt`:停止设备运行

另外我们可以使用带有 `--poweroff` 选项的 `shutdown` 命令来立即关闭设备。

```
# shutdown --poweroff now
或者
# shutdown -P now
```

* `-P, --poweroff`:切断电源(默认行为)。

如果您运行下面的命令时没有指定时间参数,它将会在一分钟后执行关机操作。

```
# shutdown -h
Shutdown scheduled for Mon 2018-10-08 06:42:31 EDT, use 'shutdown -c' to cancel.

root@2daygeek.com#
Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:41:31 EDT):

The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT!
```

其他已登录的用户都能在终端中看到如下的广播消息:

```
[daygeek@2daygeek.com ~]$
Broadcast message from root@2daygeek.com (Mon 2018-10-08 06:41:31 EDT):

The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT!
```

对于使用了 `--halt` 选项的情况:

```
# shutdown -H
Shutdown scheduled for Mon 2018-10-08 06:37:53 EDT, use 'shutdown -c' to cancel.

root@2daygeek.com#
Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:36:53 EDT):

The system is going down for system halt at Mon 2018-10-08 06:37:53 EDT!
```

对于使用了 `--poweroff` 选项的情况:

```
# shutdown -P
Shutdown scheduled for Mon 2018-10-08 06:40:07 EDT, use 'shutdown -c' to cancel.

root@2daygeek.com#
Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:39:07 EDT):

The system is going down for power-off at Mon 2018-10-08 06:40:07 EDT!
```

可以在终端上执行 `shutdown -c` 命令来取消计划中的关机操作。

```
# shutdown -c

Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:39:09 EDT):

The system shutdown has been cancelled at Mon 2018-10-08 06:40:09 EDT!
```

其他已登录的用户都能在终端中看到如下的广播消息:

```
[daygeek@2daygeek.com ~]$
Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:41:35 EDT):

The system shutdown has been cancelled at Mon 2018-10-08 06:42:35 EDT!
```

如果你想在 N 分钟之后执行关闭或重启操作,可以添加时间参数。此时,您还可以为所有登录用户添加自定义的广播消息。例如,我们将在五分钟后重启设备。

```
# shutdown -r +5 "To activate the latest Kernel"
Shutdown scheduled for Mon 2018-10-08 07:13:16 EDT, use 'shutdown -c' to cancel.

[root@vps138235 ~]#
Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 07:08:16 EDT):

To activate the latest Kernel
The system is going down for reboot at Mon 2018-10-08 07:13:16 EDT!
```

运行下面的命令立即重启 Linux 机器。它会立即杀死所有进程并且重新启动系统。

```
# shutdown -r now
```

* `-r, --reboot`:重启设备。
### 方案 2:如何通过 reboot 命令关闭和重启 Linux 系统

`reboot` 命令用于关闭和重启本地或远程设备。`reboot` 命令拥有两个实用的选项。

它能够优雅地关闭和重启设备(就好像在系统菜单中点击重启选项一样简单)。

执行不带任何参数的 `reboot` 命令来重启 Linux 机器:

```
# reboot
```

执行带 `-p` 参数的 `reboot` 命令来关闭 Linux 机器电源:

```
# reboot -p
```
* `-p, --poweroff`:调用 `halt` 或 `poweroff` 命令,切断设备电源。

执行带 `-f` 参数的 `reboot` 命令来强制重启 Linux 设备(这类似于按下机器上的电源键):

```
# reboot -f
```

* `-f, --force`:立刻强制中断、切断电源或重启。
### 方案 3:如何通过 init 命令关闭和重启 Linux 系统

`init`(“initialization” 的简写)是系统启动的第一个进程。

它会检查 `/etc/inittab` 文件并决定 Linux 的运行级别;同时,它也允许用户在 Linux 设备上执行关机或重启操作。这里存在从 `0` 到 `6` 的七个运行级别。
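
下面的对照表给出了典型的 SysV 运行级别含义(示意;不同发行版对 2~5 级的划分可能略有差异):

```
0 - 停机(关闭系统)
1 - 单用户模式
2 - 多用户模式(无网络服务)
3 - 多用户模式(有网络服务)
4 - 保留,未使用
5 - 多用户图形模式
6 - 重启
```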
**建议阅读:**

- [如何检查 Linux 上所有运行的服务][3]
执行以下 `init` 命令关闭系统。

```
# init 0
```

* `0`:停机,关闭系统。
运行下面的 `init` 命令重启设备:

```
# init 6
```

* `6`:重启设备。
### 方案 4:如何通过 halt 命令关闭和重启 Linux 系统

`halt` 命令用来切断电源,或关闭远程 Linux 机器或本地主机。它会中断所有进程并关闭 CPU。

```
# halt
```
### 方案 5:如何通过 poweroff 命令关闭和重启 Linux 系统

`poweroff` 命令用来切断电源,或关闭远程 Linux 机器或本地主机。`poweroff` 很像 `halt`,但是它还可以关闭设备本身的硬件(指示灯以及 PC 上的其它部件)。它会向主板发送 ACPI 指令,主板再发信号给电源,切断供电。

```
# poweroff
```
### 方案 6:如何通过 systemctl 命令关闭和重启 Linux 系统

systemd 是一款适用于所有主流 Linux 发行版的全新 init 系统和系统管理器,用来替代传统的 SysV init 系统。

systemd 兼容 SysV 和 LSB 初始化脚本,可以替代 SysV init 系统。systemd 是内核启动的第一个进程,其 PID 为 1。
**建议阅读:**

- [chkservice:一款在终端下管理 systemd 单元的工具][4]

它是一切进程的父进程。Fedora 15 是第一个采用 systemd(替代了 upstart)的发行版。

`systemctl` 是命令行下管理 systemd 守护进程和服务的主要工具(如 `start`、`restart`、`stop`、`enable`、`disable`、`reload` 和 `status`)。

systemd 使用 `.service` 文件而不是 SysV init 使用的 bash 脚本。systemd 将所有守护进程归入自身的 Linux cgroup 下,您可以浏览 `/cgroup/systemd` 文件查看该系统层次结构。

```
# systemctl halt
# systemctl poweroff
# systemctl reboot
# systemctl suspend
# systemctl hibernate
```

--------------------------------------------------------------------------------
via: https://www.2daygeek.com/6-commands-to-shutdown-halt-poweroff-reboot-the-linux-system/

作者:[Prakash Subramanian][a]
选题:[lujun9972][b]
译者:[cyleft](https://github.com/cyleft)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/prakash/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/
[2]: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/
[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/
[4]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/

MidnightBSD 发布 1.0!
======

几天前,Lucas Holt 宣布发布 MidnightBSD 1.0。让我们快速看一下这个新版本中包含的内容。

### 什么是 MidnightBSD?
### MidnightBSD 1.0 中有什么?

根据[发布说明][3]([视频](https://www.youtube.com/embed/-rlk2wFsjJ4)),1.0 中的大部分工作是更新基础系统、改进包管理器和更新工具。新版本与 FreeBSD 10-Stable 兼容。

Mports(MidnightBSD 的包管理系统)已经升级,支持使用一条命令安装多个包。`mport upgrade` 命令已经修复。Mports 现在会跟踪已弃用和过期的包。它还引入了新的包格式。
其他变化包括:

* 现在支持 [ZFS][4] 作为启动文件系统。以前,ZFS 只能用于附加存储。
* 支持 NVMe SSD。
* AMD Ryzen 和 Radeon 的支持得到了改善。
* Intel、Broadcom 和其他驱动程序已更新。
* 删除了 sudo,并用 OpenBSD 中的 [doas][5] 替换。
* 增加了对 Microsoft Hyper-V 的支持。
### 升级之前
如果你当前是 MidnightBSD 的用户或正在考虑尝试新版本,那么还是再等一会。Lucas 目前正在重建软件包以支持新的软件包格式和工具。他还计划在未来几个月内升级软件包和移植桌面环境。他目前正致力于移植 Firefox 52 ESR,因为它是最后一个不需要 Rust 的版本。他还希望将更新版本的 Chromium 移植到 MidnightBSD。我建议关注 MidnightBSD 的 [Twitter][6]。

### 0.9 怎么回事?

你可能注意到 MidnightBSD 的上一个版本是 0.8.6,你现在可能想知道“为什么跳到 1.0”?根据 Lucas 的说法,他在开发 0.9 时遇到了几个问题,事实上,他重试了好几次。他最终采用了与 0.9 分支不同的方式,并将其变成了 1.0。有些软件包在 0.* 系列的版本编号下也有问题。
### 需要帮助

via: https://itsfoss.com/midnightbsd-1-0-release/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

理解 Linux 链接(一)
======
> 链接是可以将文件和目录放在你希望它们放在的位置的另一种方式。

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-link-498708.jpg?itok=DyVEcEsc)

除了我们在[本系列的前一部分][1]中详细讨论过的 `cp` 和 `mv` 之外,链接是可以将文件和目录放到你希望它们所在位置的另一种方式。它的优点是可以让你同时在多个位置显示一个文件或目录。

如前所述,在物理磁盘这个级别上,文件和目录之类的东西并不真正存在。文件系统是为了方便人类使用,将它们虚构出来的。但在磁盘级别上,有一个名为<ruby>分区表<rt>partition table</rt></ruby>的东西,它位于每个分区的开头,然后数据分散在磁盘的其余部分。

虽然有不同类型的分区表,但是位于分区开头的那个表包含的数据将映射每个目录和文件的开始和结束位置。分区表就像一个索引:当从磁盘加载文件时,操作系统会查找表中的条目,分区表会告诉文件在磁盘上的起始位置和结束位置。然后磁盘头移动到起点,读取数据,直到它到达终点。您看:这就是你的文件。
### 硬链接
硬链接只是分区表中的一个条目,它指向磁盘上的某个区域,表示该区域**已经被分配给文件**。换句话说,硬链接指向已经被另一个条目索引的数据。让我们看看它是如何工作的。
打开终端,创建一个实验目录并进入:

```
mkdir test_dir
cd test_dir
```
使用 [touch][1] 创建一个文件:

```
touch test.txt
```

为了让它更有意思一点,在文本编辑器中打开 `test.txt` 并添加一些单词。
现在通过执行以下命令来建立硬链接:

```
ln test.txt hardlink_test.txt
```

运行 `ls`,你会看到你的目录现在包含两个文件,或者看起来如此。正如你之前读到的那样,你真正看到的是同一个文件的两个名称:`hardlink_test.txt` 包含相同的内容,没有占用磁盘中更多的空间(可以尝试使用大文件来测试),并与 `test.txt` 使用相同的 inode:

```
$ ls -li *test*
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt
```
`ls` 的 `-i` 选项显示一个文件的 “inode 数值”。“inode” 是分区表中的信息块,它包含磁盘上文件或目录的位置、上次修改的时间以及其它数据。如果两个文件使用相同的 inode,那么无论它们在目录树中的位置如何,它们在实际上都是相同的文件。
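
可以用一小段 Python 脚本验证“硬链接与原文件共用同一个 inode”这一点(示意;脚本在临时目录中操作,不会影响你现有的文件):

```python
import os
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "test.txt")
with open(src, "w") as f:
    f.write("hello hardlink")

# 相当于:ln test.txt hardlink_test.txt
dst = os.path.join(d, "hardlink_test.txt")
os.link(src, dst)

# 两个名字指向同一个 inode,硬链接计数变为 2
print(os.stat(src).st_ino == os.stat(dst).st_ino)  # True
print(os.stat(src).st_nlink)                       # 2
```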

### 软链接

软链接,也称为<ruby>符号链接<rt>symlink</rt></ruby>,它与硬链接不同:软链接实际上是一个独立的文件,它有自己的 inode,在磁盘上占有自己的一小块地方。但它只包含一小段数据,将操作系统指向另一个文件或目录。
你可以使用 `ln` 的 `-s` 选项来创建一个软链接:

```
ln -s test.txt softlink_test.txt
```

这将在当前目录中创建软链接 `softlink_test.txt`,它指向 `test.txt`。
再次执行 `ls -li`,你可以看到两种链接的不同之处:

```
$ ls -li
total 8
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt
```
`hardlink_test.txt` 和 `test.txt` 包含一些文本并且*字面上*占据相同的空间。它们使用相同的 inode 数值。与此同时,`softlink_test.txt` 占用少得多,并且具有不同的 inode 数值,将其标记为完全不同的文件。使用 `ls` 的 `-l` 选项还会显示软链接指向的文件或目录。
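
同样可以用 Python 验证软链接是一个拥有自己 inode 的独立文件(示意;同样在临时目录中操作):

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "test.txt")
with open(target, "w") as f:
    f.write("hello symlink")

# 相当于:ln -s test.txt softlink_test.txt
link = os.path.join(d, "softlink_test.txt")
os.symlink(target, link)

print(os.path.islink(link))                             # True
# lstat 查看的是链接本身:它的 inode 与目标文件不同
print(os.lstat(link).st_ino != os.stat(target).st_ino)  # True
# 通过链接读取,得到的则是目标文件的内容
with open(link) as f:
    print(f.read())                                     # hello symlink
```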

### 为什么要用链接?
它们适用于**带有自己环境的应用程序**。你的 Linux 发行版通常不会附带你需要的某个应用程序的最新版本。以优秀的 [Blender 3D][2] 设计软件为例,Blender 允许你创建 3D 静态图像以及动画电影,人人都想在自己的机器上拥有它。问题是,当前版本的 Blender 至少比任何发行版中自带的高一个版本。

幸运的是,[Blender 提供了可以开箱即用的下载][3]。除了程序本身之外,这些软件包还包含了 Blender 运行所需要的复杂的库和依赖框架。所有这些数据和块都放在它们自己的目录层次中。
每次你想运行 Blender,你都可以 `cd` 到你下载它的文件夹并运行:

```
./blender
```
但这很不方便。如果你可以从文件系统的任何地方,比如桌面命令启动器中运行 `blender` 命令会更好。

这样做的方法是将 `blender` 可执行文件链接到 `bin/` 目录。在许多系统上,你可以通过像下面这样将其链接到文件系统中的某个位置,来使 `blender` 命令可用。

```
ln -s /path/to/blender_directory/blender /home/<username>/bin
```

你需要链接的另一个情况是**软件需要过时的库**。如果你用 `ls -l` 列出你的 `/usr/lib` 目录,你会看到许多软链接文件一闪而过。仔细看看,你会看到软链接通常与它们链接到的原始文件具有相似的名称。你可能会看到 `libblah` 链接到 `libblah.so.2`,你甚至可能会注意到 `libblah.so.2` 相应地链接到原始文件 `libblah.so.2.1.0`。

这是因为应用程序通常需要比已安装版本更老的库。问题是,即使新版本(通常)仍然与旧版本兼容,如果程序找不到它正在寻找的版本,程序就会出现问题。为了解决这个问题,发行版通常会创建链接,以便挑剔的应用程序**相信**它找到了旧版本,而实际上它只找到了一个链接,最终使用的是更新的库版本。

还有一些情况和**你自己从源代码编译的程序**相关。你自己编译的程序通常最终安装在 `/usr/local` 下:程序本身最终在 `/usr/local/bin` 中,并且它会在 `/usr/local/lib` 目录中查找它需要的库。但假设你的新程序需要 `libblah`,而 `libblah` 在 `/usr/lib` 中,这是所有其它程序寻找库的地方。你可以通过执行以下操作将其链接到 `/usr/local/lib`:

```
ln -s /usr/lib/libblah /usr/local/lib
```

或者如果你愿意,可以 `cd` 到 `/usr/local/lib`:

```
cd /usr/local/lib
```

然后使用链接:

```
ln -s ../lib/libblah
```

还有几十个案例可以证明软链接的用处,当你更熟练地使用 Linux 时,你肯定会发现它们,但这些是最常见的。下一次,我们将看看一些你需要注意的链接的怪异之处。

通过 Linux 基金会和 edX 的免费 [“Linux 简介”][4]课程了解有关 Linux 的更多信息。
--------------------------------------------------------------------------------

via: https://www.linux.com/blog/intro-to-linux/2018/10/linux-links-part-1

作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

Translating by Felix
|
||||
20 questions DevOps job candidates should be prepared to answer
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hire-job-career.png?itok=SrZo0QJ3)
|
||||
Hiring the wrong person is [expensive][1]. Recruiting, hiring, and onboarding a new employee can cost a company as much as $240,000, according to Jörgen Sundberg, CEO of Link Humans. When you make the wrong hire:
|
||||
|
||||
* You lose what they know.
|
||||
* You lose who they know.
|
||||
* Your team could go into the [storming][2] phase of group development.
|
||||
* Your company risks disorganization.
|
||||
|
||||
|
||||
|
||||
When you lose an employee, you lose a piece of the fabric of the company. It's also worth mentioning the pain on the other end. The person hired into the wrong job may experience stress, feelings of overall dissatisfaction, and even health issues.
|
||||
|
||||
On the other hand, when you get it right, your new hire will:

* Enhance the existing culture, making your organization an even better place to work. Studies show that a positive work culture helps [drive long-term financial performance][3] and that if you work in a happy environment, you're more likely to do better in life.
* Love working with your organization. When people love what they do, they tend to do it well.

Hiring to fit or enhance your existing culture is essential in DevOps and agile teams. That means hiring someone who can encourage effective collaboration so that individual contributors from varying backgrounds, and teams with different goals and working styles, can work together productively. Your new hire should help teams collaborate to maximize their value while also increasing employee satisfaction and balancing conflicting organizational goals. He or she should be able to choose tools and workflows wisely to complement your organization. Culture is everything.

As a follow-up to our November 2017 post, [20 questions DevOps hiring managers should be prepared to answer][4], this article will focus on how to hire for the best mutual fit.

### Why hiring goes wrong

The typical hiring strategy many companies use today is based on a talent surplus:

* Post on job boards.
* Focus on candidates with the skills they need.
* Find as many candidates as possible.
* Interview to weed out the weak.
* Conduct formal interviews to do more weeding.
* Assess, vote, and select.
* Close on compensation.

![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/hiring_graphic.png?itok=1udGbkhB)

Job boards were invented during the Great Depression, when millions of people were out of work and there was a talent surplus. There is no talent surplus in today's job market, yet we're still using a hiring strategy that's based on one.

![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/732px-unemployed_men_queued_outside_a_depression_soup_kitchen_opened_in_chicago_by_al_capone_02-1931_-_nara_-_541927.jpg?itok=HSs4NjCN)

### Hire for mutual fit: Use culture and emotions

The idea behind the talent surplus hiring strategy is to design jobs and then slot people into them.

Instead, do the opposite: Find talented people who will positively add to your business culture, then find the best fit for them in a job they'll love. To do this, you must be open to creating jobs around their passions.

**Who is looking for a job?** According to a 2016 survey of more than 50,000 U.S. developers, [85.7% of respondents][5] were either not interested in new opportunities or were not actively looking for them. And of those who were looking, a whopping [28.3% of job discoveries][5] came from referrals by friends. If you're searching only for people who are looking for jobs, you're missing out on top talent.

**Use your team to find and vet potential recruits.** For example, if Diane is a developer on your team, chances are she has [been coding for years][6] and has met fellow developers along the way who also love what they do. Wouldn't you think her chances of vetting potential recruits for skills, knowledge, and intelligence would be higher than having someone from HR find and vet potential recruits? And before asking Diane to share her knowledge of fellow recruits, inform her of the upcoming mission, explain your desire to hire a diverse team of passionate explorers, and describe some of the areas where help will be needed in the future.

**What do employees want?** A comprehensive study comparing the wants and needs of Millennials, GenX'ers, and Baby Boomers shows that within two percentage points, we all [want the same things][7]:

1. To make a positive impact on the organization
2. To help solve social and/or environmental challenges
3. To work with a diverse group of people

### The interview challenge

The interview should be a two-way conversation for finding a mutual fit between the person hiring and the person interviewing. Focus your interview on CQ ([Cultural Quotient][7]) and EQ ([Emotional Quotient][8]): Will this person reinforce and add to your culture and love working with you? Can you help make them successful at their job?

**For the hiring manager:** Every interview is an opportunity to learn how your organization could become more irresistible to prospective team members, and every positive interview can be your best opportunity to find talent, even if you don't hire that person. Everyone remembers being interviewed if it is a positive experience. Even if they don't get hired, they will talk about the experience with their friends, and you may get a referral as a result. There is a big upside to this: If you're not attracting this talent, you have the opportunity to learn the reason and fix it.

**For the interviewee:** Each interview experience is an opportunity to unlock your passions.

### 20 questions to help you unlock the passions of potential hires

1. What are you passionate about?
2. What makes you think, "I can't wait to get to work this morning!"
3. What is the most fun you've ever had?
4. What is your favorite example of a problem you've solved, and how did you solve it?
5. How do you feel about paired learning?
6. What's at the top of your mind when you arrive at, and leave, the office?
7. If you could have changed one thing in your previous/current job, what would it be?
8. What are you excited to learn while working here?
9. What do you aspire to in life, and how are you pursuing it?
10. What do you want, or feel you need, to learn to achieve these aspirations?
11. What values do you hold?
12. How do you live those values?
13. What does balance mean in your life?
14. What work interactions are you most proud of? Why?
15. What type of environment do you like to create?
16. How do you like to be treated?
17. What do you trust vs. verify?
18. Tell me about a recent learning you had when working on a project.
19. What else should we know about you?
20. If you were hiring me, what questions would you ask me?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/questions-devops-employees-should-answer

作者:[Catherine Louis][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/catherinelouis
[1]:https://www.shrm.org/resourcesandtools/hr-topics/employee-relations/pages/cost-of-bad-hires.aspx
[2]:https://en.wikipedia.org/wiki/Tuckman%27s_stages_of_group_development
[3]:http://www.forbes.com/sites/johnkotter/2011/02/10/does-corporate-culture-drive-financial-performance/
[4]:https://opensource.com/article/17/11/inclusive-workforce-takes-work
[5]:https://insights.stackoverflow.com/survey/2016#work-job-discovery
[6]:https://research.hackerrank.com/developer-skills/2018/
[7]:http://www-935.ibm.com/services/us/gbs/thoughtleadership/millennialworkplace/
[8]:https://en.wikipedia.org/wiki/Emotional_intelligence

@ -1,3 +1,5 @@

translating by belitex

What breaks our systems: A taxonomy of black swans
======

124 sources/talk/20181031 3 scary sysadmin stories.md Normal file
@ -0,0 +1,124 @@

3 scary sysadmin stories
======

Terrifying ghosts are hanging around every data center, just waiting to haunt the unsuspecting sysadmin.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/spooky_halloween_haunted_house.jpg?itok=UkRBeItZ)

> "It's all just a bunch of hocus pocus!" — Max in [Hocus Pocus][1]

Over my many years as a system administrator, I've heard many horror stories about the different ghosts that have haunted new admins due to their inexperience.

Here are three of the stories that stand out to me the most in helping build my character as a good sysadmin.

### The ghost of the failed restore

In a well-known data center (whose name I do not want to remember), one cold October night we had a production outage in which thousands of web servers stopped responding due to downtime in the main database. The database administrator asked me, the rookie sysadmin, to recover the database's last full backup and restore it to bring the service back online.

But, at the end of the process, the database was still broken. I didn't worry, because there were other full backup files in stock. However, even after doing the process several times, the result didn't change.

With great fear, I asked the senior sysadmin what to do to fix this behavior.

"You remember when I showed you, a few days ago, how the full backup script was running? Something about how important it was to validate the backup?" responded the sysadmin.

"Of course! You told me that I had to stay a couple of extra hours to perform that task," I answered.

"Exactly! But you preferred to leave early without finishing that task," he said.

"Oh my! I thought it was optional!" I exclaimed.

"It was, it was…"

**Moral of the story:** Even with the best solution that promises to make the most thorough backups, the ghost of the failed restoration can appear, darkening our job skills, if we don't make a habit of validating the backup every time.
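
Turning that moral into a habit takes only a few lines of shell. Below is a minimal sketch, not a production script: `data/` and `db_backup.tar.gz` are hypothetical stand-ins for a real database dump, and a fuller validation would also restore the dump into a scratch database and run sanity queries.

```shell
#!/bin/sh
# Hedged sketch of backup validation. Paths are hypothetical stand-ins.
set -e

mkdir -p data
echo "CREATE TABLE t (id INT);" > data/schema.sql   # stand-in for a real dump

# 1. Take the backup and record its checksum alongside it.
tar -czf db_backup.tar.gz data
sha256sum db_backup.tar.gz > db_backup.tar.gz.sha256

# 2. Validate: the checksum must match, the archive must decompress,
#    and its file listing must be readable. Do this *every* time.
sha256sum -c db_backup.tar.gz.sha256
gzip -t db_backup.tar.gz
tar -tzf db_backup.tar.gz > /dev/null

echo "backup validated"
```

Even this much catches truncated or corrupted archives before the night you actually need them.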
|
||||
|
||||
### The dark window

Once upon a night watch, reflecting I was, lonely and tired,
Looking at the file window on my screen.
Clicking randomly, nearly napping, suddenly came a beeping
From some server, sounding gently, sounding on my pager.
"It's just a warning," I muttered, "sounding on my pager—
Only this and nothing more."

Soon again I heard a beeping somewhat louder than before.
Opening my pager with great disdain,
There was the message from a server of the saintly days of yore:
"The legacy application, it's down, doesn't respond," and nothing more.

There were many stories of this server,
Incredibly, almost terrified,
I went down to the data center to review it.
I sat engaged in guessing, what would be the console to restart it
Without keyboard, mouse, or monitor?
"The task level up"—I think—"only this and nothing more."

Then, thinking, "In another rack, I saw a similar server,
I'll take its monitor and keyboard, nothing bad."
Suddenly, this server shut down, and my pager beeped again:
"The legacy application, it's down, doesn't respond," and nothing more.

Bemused, I sat down to call my sysadmin mentor:
"I wanted to use the console of another server, and now both are out."
"Did you follow my advice? Don't use the graphics console, the terminal is better."
Of course, I remember, it was last December;
I felt fear, a horror that I had never felt before;
"It is a tool of the past and nothing more."

With great shame I understood my mistake:
"Master," I said, "truly, your forgiveness I implore;
but the fact is I thought it was not used anymore.
A dark window and nothing more."

"Learn it well, little kid," he spoke.
"In the terminal you can trust, it's your friend and much, much more."
Step by step, my master showed me to connect with the terminal,
And restarting each one
With infinite patience, he taught me
That from that dark window I should not separate
Never, nevermore.

**Moral of the story:** Fluency in the command-line terminal is a skill often abandoned and considered archaic by newer generations, but it improves your flexibility and productivity as a sysadmin in obvious and subtle ways.

|
||||
|
||||
### Troll bridge

I'd been a sysadmin for three or four years when one of my old mentors was removed from work. The older man was known for making fun of the new guys in the group—the ones who brought from the university the desire to improve processes with the newly released community operating system. My manager assigned me the older man's office, a small space under the access stairs to the data center—"Troll Bridge," they called it—and the few legacy servers he still managed.

While reviewing those legacy servers, I realized most of them had many scripts that did practically all the work. I just had to check that they did not go offline due to an electrical failure. I started using those methods, adapting them so my own servers would work the same way, making my tasks more efficient and, at the same time, requiring less of my time to complete them. My day soon became surfing the internet, watching funny videos, and even participating in internet forums.

A couple of years went by, and I maintained my work in the same way. When a new server arrived, I automated its tasks so I could free myself and continue with my usual participation in internet forums. One day, when I shared one of my scripts in the internet forum, a new admin told me I could simplify it using one novelty language, a new trend that was becoming popular among the new folks.

"I am a sysadmin, not a programmer," I answered. "They will never be the same."

From that day on, I dedicated myself to ridiculing the kids who told me I should program in the new languages.

"You do not know, newbie," I answered every time, "this job will never change."

A few years later, my responsibilities increased, and my manager wanted me to modify the code of the applications hosted on my server.

"That's what the job is about now," said my manager. "Development and operations are joining; if you're not willing to do it, we'll bring in some guy who does."

"I will never do it, it's not my role," I said.

"Well then…" he said, looking at me harshly.

I've been here ever since. Hiding. Waiting. Under my bridge.

I watch from the shadows as the people pass: up the stairs, muttering, or talking about the things the new applications do. Sometimes people pause beneath my bridge, to talk, or share code, or make plans. And I watch them, but they don't see me.

I'm just going to stay here, in the darkness under the bridge. I can hear you all out there, everything you say.

Oh yes, I can hear you.
But I'm not coming out.

**Moral of the story:** "The lazy sysadmin is the best sysadmin" is a well-known phrase that means if we are proactive enough to automate all our processes properly, we will have a lot of free time. The best sysadmins never seem to be very busy; they prefer to be relaxed and let the system do the work for them. "Work smarter, not harder." However, if we don't use this free time productively, we can fall into obsolescence and become something we do not want. The best sysadmins reinvent themselves constantly; they are always researching and learning.

Following these stories' morals—and continually learning from my mistakes—helped me improve my management skills and create the good habits necessary for the sysadmin job.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/3-scary-sysadmin-stories

作者:[Alex Callejas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/darkaxl
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Hocus_Pocus_(1993_film)

@ -0,0 +1,84 @@

How open source hardware increases security
======

Want to boost cybersecurity at your organization? Switch to open source hardware.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/esp8266_board_hardware.jpg?itok=OTmNpKV1)

Hardware hacks are particularly scary because they trump any software security safeguards—for example, they can render all accounts on a server password-less.

Fortunately, we can benefit from what the software industry has learned from decades of fighting prolific software hackers: Using open source techniques can, perhaps counterintuitively, [make a system more secure][1]. Open source hardware and distributed manufacturing can provide protection from future attacks.

### Trust—but verify

Imagine you are a 007 agent holding classified documents. Would you feel more secure locking them in a safe whose manufacturer keeps the workings of the locks secret, or in a safe whose design is published openly so that everyone (including thieves) can judge its quality—so that its protection rests on technical quality rather than secrecy?

The former approach might be perfectly secure—you simply don't know. But why would you trust any manufacturer that could be compromised now or in the future? In contrast, the open system is almost certain to be secure, especially if enough time has passed for it to be tested by multiple companies, governments, and individuals.

To a large degree, the software world has seen the benefits of moving to free and open source software. That's why open source runs on all [supercomputers][2], [90% of the cloud, 82% of the smartphone market, and 62% of the embedded systems market][3]. Open source appears poised to dominate the future, with over [70% of the IoT][4].

In fact, security is one of the core benefits of [open source][5]. While open source is not inherently more secure, it allows you to verify security yourself (or pay someone more qualified to do so). With closed source programs, you must trust, without verification, that a program works properly. To quote President Reagan: "Trust—but verify." The bottom line is that open source allows users to make more informed choices about the security of a system—choices that are based on their own independent judgment.

### Open source hardware

This concept also holds true for electronic devices. Most electronics customers have no idea what is in their products, and even technically sophisticated companies like Amazon may not know exactly what is in the hardware that runs their servers, because they use proprietary products made by other companies.

In one widely reported incident, Chinese spies allegedly used a tiny microchip, not much bigger than a grain of rice, to infiltrate hardware made by SuperMicro (the Microsoft of the hardware world). These chips enabled outside infiltrators to access the core server functions of some of America's leading companies and government operations, including DOD data centers, CIA drone operations, and the onboard networks of Navy warships. Operatives from the People's Liberation Army or similar groups could have reverse-engineered or made identical or disguised modules (in this case, the chips looked like signal-conditioning couplers, a common motherboard component, rather than the spy devices they were).

Having the source available helps customers much more than hackers, as most customers do not have the resources to reverse-engineer the electronics they buy. Without the device's source, or design, it's difficult to determine whether or not hardware has been hacked.

Enter [open source hardware][6]: hardware design that is publicly available so that anyone can study, modify, test, distribute, make, or sell it, or hardware based on it. The hardware's source is available to everyone.

### Distributed manufacturing for cybersecurity

Open source hardware and distributed manufacturing could have prevented the Chinese hack that rightfully terrified the security world. Organizations that require tight security, such as military groups, could then check the product's code and bring production in-house if necessary.

This open source future may not be far off. Recently I co-authored, with Shane Oberloier, an [article][7] that discusses a low-cost open source benchtop device that enables anyone to make a wide range of open source electronic products. The number of open source electronics designs is proliferating on websites like [Hackaday][8], [Open Electronics][9], and the [Open Circuit Institute][10], as are communities based on specific products like [Arduino][11] and around companies like [Adafruit Industries][12] and [SparkFun Electronics][13].

Every level of manufacturing that users can do themselves increases the security of the device. Not long ago, you had to be an expert to make even a simple breadboard design. Now, with open source mills for boards and electronics repositories, small companies and even individuals can make reasonably sophisticated electronic devices. While most builders are still using black-box chips on their devices, this is also changing as [open source chips gain traction][14].

![](https://opensource.com/sites/default/files/uploads/800px-oscircuitmill.png)

Creating electronics that are open source all the way down to the chip is certainly possible—and the more besieged we are by hardware hacks, perhaps it is even inevitable. Companies, governments, and other organizations that care about cybersecurity should strongly consider moving toward open source—perhaps first by establishing purchasing policies for software and hardware that make the code accessible so they can test for security weaknesses.

Although every customer and every manufacturer of an open source hardware product will have different standards of quality and security, this does not necessarily mean weaker security. Customers should choose whatever version of an open source product best meets their needs, just as users can choose their flavor of Linux. For example, do you run [Fedora][15] for free, or do you, like [90% of Fortune Global 500 companies][16], pay Red Hat for its version and support?

Red Hat makes billions of dollars a year for the service it provides, on top of a product that can ostensibly be downloaded for free. Open source hardware can follow the [same business model][17]; it is just a less mature field, lagging [open source software by about 15 years][18].

The core source code for hardware devices would be controlled by their manufacturer, following the "[benevolent dictator for life][19]" model. Code of any kind (infected or not) is screened before it becomes part of the root. This is true for hardware, too. For example, Aleph Objects manufactures the popular [open source LulzBot brand of 3D printer][20], a commercial 3D printer that's essentially designed to be hacked. Users have made [dozens of modifications][21] (mods) to the printer, and while they are available, Aleph uses only the ones that meet its QC standards in each subsequent version of the printer. Sure, downloading a mod could mess up your own machine, but infecting the source code of the next LulzBot that way would be nearly impossible. Customers are also able to more easily check the security of the machines themselves.

While [challenges certainly remain for the security of open source products][22], the open hardware model can help enhance cybersecurity—from the Pentagon to your living room.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/cybersecurity-demands-rapid-switch-open-source-hardware

作者:[Joshua Pearce][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jmpearce
[b]: https://github.com/lujun9972
[1]: https://dl.acm.org/citation.cfm?id=1188921
[2]: https://www.zdnet.com/article/supercomputers-all-linux-all-the-time/
[3]: https://www.serverwatch.com/server-news/linux-foundation-on-track-for-best-year-ever-as-open-source-dominates.html
[4]: https://www.itprotoday.com/iot/survey-shows-linux-top-operating-system-internet-things-devices
[5]: https://www.infoworld.com/article/2985242/linux/why-is-open-source-software-more-secure.html
[6]: https://www.oshwa.org/definition/
[7]: https://www.mdpi.com/2411-5134/3/3/64/htm
[8]: https://hackaday.io/
[9]: https://www.open-electronics.org/
[10]: http://opencircuitinstitute.org/
[11]: https://www.arduino.cc/
[12]: http://www.adafruit.com/
[13]: https://www.sparkfun.com/
[14]: https://www.wired.com/story/using-open-source-designs-to-create-more-specialized-chips/
[15]: https://getfedora.org/
[16]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[17]: https://openhardware.metajnl.com/articles/10.5334/joh.4/
[18]: https://www.mdpi.com/2411-5134/3/3/44/htm
[19]: https://www.theatlantic.com/technology/archive/2014/01/on-the-reign-of-benevolent-dictators-for-life-in-software/283139/
[20]: https://www.lulzbot.com/
[21]: https://forum.lulzbot.com/viewtopic.php?t=2378
[22]: https://ieeexplore.ieee.org/abstract/document/8250205

@ -1,3 +1,4 @@

Translating by DavidChenLiang
Python
============================================================

@ -1,3 +1,5 @@

translating---geekpi

Joplin: Encrypted Open Source Note Taking And To-Do Application
======

**[Joplin][1] is a free and open source note taking and to-do application available for Linux, Windows, macOS, Android and iOS. Its key features include end-to-end encryption, Markdown support, and synchronization via third-party services like NextCloud, Dropbox, OneDrive or WebDAV.**

@ -1,4 +1,3 @@

Translating by z52527
Publishing Markdown to HTML with MDwiki
======

@ -1,69 +0,0 @@

translating---geekpi

6 open source tools for writing a book
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-austen-writing-code.png?itok=XPxRMtQ4)

I first used and contributed to free and open source software in 1993, and since then I've been an open source software developer and evangelist. I've written or contributed to dozens of open source software projects, although the one that I'll be remembered for is the [FreeDOS Project][1], an open source implementation of the DOS operating system.

I recently wrote a book about FreeDOS. [_Using FreeDOS_][2] is my celebration of the 24th anniversary of FreeDOS. It is a collection of how-tos about installing and using FreeDOS, essays about my favorite DOS applications, and quick-reference guides to the DOS command line and DOS batch programming. I've been working on this book for the last few months, with the help of a great professional editor.

_Using FreeDOS_ is available under the Creative Commons Attribution (cc-by) International Public License. You can download the EPUB and PDF versions at no charge from the [FreeDOS e-books][2] website. (I'm also planning a print version, for those who prefer a bound copy.)

The book was produced almost entirely with open source software. I'd like to share a brief insight into the tools I used to create, edit, and produce _Using FreeDOS_.

### Google Docs

[Google Docs][3] is the only tool I used that isn't open source software. I uploaded my first drafts to Google Docs so my editor and I could collaborate. I'm sure there are open source collaboration tools, but Google Docs' ability to let two people edit the same document at the same time, make comments, suggest edits, and track changes—not to mention its use of paragraph styles and the ability to download the finished document—made it a valuable part of the editing process.

### LibreOffice

I started on [LibreOffice][4] 6.0, but I finished the book using LibreOffice 6.1. I love LibreOffice's rich support of styles. Paragraph styles made it easy to apply a style for titles, headers, body text, sample code, and other text. Character styles let me modify the appearance of text within a paragraph, such as inline sample code or a different style to indicate a filename. Graphics styles let me apply certain styling to screenshots and other images. And page styles allowed me to easily modify the layout and appearance of the page.

### GIMP

My book includes a lot of DOS program screenshots, website screenshots, and FreeDOS logos. I used [GIMP][5] to modify these images for the book. Usually, this was simply cropping or resizing an image, but as I prepare the print edition of the book, I'm using GIMP to create a few images that will be simpler for print layout.

### Inkscape

Most of the FreeDOS logos and fish mascots are in SVG format, and I used [Inkscape][6] for any image tweaking here. In preparing the PDF version of the ebook, I wanted a simple blue banner at the top of the page, with the FreeDOS logo in the corner. After some experimenting, I found it easier to create an SVG image in Inkscape that looked like the banner I wanted, and I pasted that into the header.

### ImageMagick

While it's great to use GIMP for fine work, sometimes it's faster to run an [ImageMagick][7] command over a set of images, such as converting them to PNG format or resizing them.

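As an illustration of that kind of batch step, here is a hedged sketch (not from the book): it generates two tiny stand-in "screenshots" so it is self-contained, then uses ImageMagick's `mogrify` to convert every listed BMP to PNG, shrinking only images larger than 800x600. The filenames are hypothetical, and ImageMagick must be installed.

```shell
#!/bin/sh
# Hedged sketch: batch-convert hypothetical BMP screenshots to PNG.
set -e

# Create two tiny stand-in "screenshots" so the example runs anywhere
# ImageMagick is available.
convert -size 16x16 xc:gray  shot1.bmp
convert -size 16x16 xc:white shot2.bmp

# Convert to PNG; the '>' geometry flag resizes only images that are
# larger than 800x600, leaving smaller ones untouched.
mogrify -format png -resize '800x600>' shot1.bmp shot2.bmp
```

The same one-liner pattern (`mogrify -format png *.bmp`) scales to a whole directory of screenshots, which is exactly where a GUI editor becomes tedious.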
### Sigil

LibreOffice can export directly to EPUB format, but it wasn't a great transfer. I haven't tried creating an EPUB with LibreOffice 6.1, but LibreOffice 6.0 didn't include my images. It also added styles in a weird way. I used [Sigil][8] to tweak the EPUB file and make everything look right. Sigil even has a preview function so you can see what the EPUB will look like.

### QEMU

Because this book is about installing and running FreeDOS, I needed to actually run FreeDOS. You can boot FreeDOS inside any PC emulator, including VirtualBox, QEMU, GNOME Boxes, PCem, and Bochs. But I like the simplicity of [QEMU][9]. And the QEMU console lets you issue a screen dump in PPM format, which is ideal for grabbing screenshots to include in the book.

Of course, I have to mention running [GNOME][10] on [Linux][11]. I use the [Fedora][12] distribution of Linux.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/writing-book-open-source-tools

作者:[Jim Hall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[1]: http://www.freedos.org/
[2]: http://www.freedos.org/ebook/
[3]: https://www.google.com/docs/about/
[4]: https://www.libreoffice.org/
[5]: https://www.gimp.org/
[6]: https://inkscape.org/
[7]: https://www.imagemagick.org/
[8]: https://sigil-ebook.com/
[9]: https://www.qemu.org/
[10]: https://www.gnome.org/
[11]: https://www.kernel.org/
[12]: https://getfedora.org/

@ -1,3 +1,5 @@

Translating by jlztan

KeeWeb – An Open Source, Cross Platform Password Manager
======

@ -1,596 +0,0 @@

Translating by qhwdw
Lab 4: Preemptive Multitasking
======

### Lab 4: Preemptive Multitasking

**Part A due Thursday, October 18, 2018
Part B due Thursday, October 25, 2018
Part C due Thursday, November 1, 2018**

#### Introduction
|
||||
|
||||
In this lab you will implement preemptive multitasking among multiple simultaneously active user-mode environments.
|
||||
|
||||
In part A you will add multiprocessor support to JOS, implement round-robin scheduling, and add basic environment management system calls (calls that create and destroy environments, and allocate/map memory).
|
||||
|
||||
In part B, you will implement a Unix-like `fork()`, which allows a user-mode environment to create copies of itself.
|
||||
|
||||
Finally, in part C you will add support for inter-process communication (IPC), allowing different user-mode environments to communicate and synchronize with each other explicitly. You will also add support for hardware clock interrupts and preemption.
|
||||
|
||||
##### Getting Started
|
||||
|
||||
Use Git to commit your Lab 3 source, fetch the latest version of the course repository, and then create a local branch called `lab4` based on our lab4 branch, `origin/lab4`:
|
||||
|
||||
```
|
||||
athena% cd ~/6.828/lab
|
||||
athena% add git
|
||||
athena% git pull
|
||||
Already up-to-date.
|
||||
athena% git checkout -b lab4 origin/lab4
|
||||
Branch lab4 set up to track remote branch refs/remotes/origin/lab4.
|
||||
Switched to a new branch "lab4"
|
||||
athena% git merge lab3
|
||||
Merge made by recursive.
|
||||
...
|
||||
athena%
|
||||
```
|
||||
|
||||
Lab 4 contains a number of new source files, some of which you should browse before you start:
|
||||
| kern/cpu.h | Kernel-private definitions for multiprocessor support |
|
||||
| kern/mpconfig.c | Code to read the multiprocessor configuration |
|
||||
| kern/lapic.c | Kernel code driving the local APIC unit in each processor |
|
||||
| kern/mpentry.S | Assembly-language entry code for non-boot CPUs |
|
||||
| kern/spinlock.h | Kernel-private definitions for spin locks, including the big kernel lock |
|
||||
| kern/spinlock.c | Kernel code implementing spin locks |
|
||||
| kern/sched.c | Code skeleton of the scheduler that you are about to implement |
|
||||
|
||||
##### Lab Requirements
|
||||
|
||||
This lab is divided into three parts, A, B, and C. We have allocated one week in the schedule for each part.
|
||||
|
||||
As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. (You do not need to do one challenge problem per part, just one for the whole lab.) Additionally, you will need to write up a brief description of the challenge problem that you implemented. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab4.txt` in the top level of your `lab` directory before handing in your work.
|
||||
|
||||
#### Part A: Multiprocessor Support and Cooperative Multitasking
|
||||
|
||||
In the first part of this lab, you will first extend JOS to run on a multiprocessor system, and then implement some new JOS kernel system calls to allow user-level environments to create additional new environments. You will also implement _cooperative_ round-robin scheduling, allowing the kernel to switch from one environment to another when the current environment voluntarily relinquishes the CPU (or exits). Later in part C you will implement _preemptive_ scheduling, which allows the kernel to re-take control of the CPU from an environment after a certain time has passed even if the environment does not cooperate.
|
||||
|
||||
##### Multiprocessor Support
|
||||
|
||||
We are going to make JOS support "symmetric multiprocessing" (SMP), a multiprocessor model in which all CPUs have equivalent access to system resources such as memory and I/O buses. While all CPUs are functionally identical in SMP, during the boot process they can be classified into two types: the bootstrap processor (BSP) is responsible for initializing the system and for booting the operating system; and the application processors (APs) are activated by the BSP only after the operating system is up and running. Which processor is the BSP is determined by the hardware and the BIOS. Up to this point, all your existing JOS code has been running on the BSP.
|
||||
|
||||
In an SMP system, each CPU has an accompanying local APIC (LAPIC) unit. The LAPIC units are responsible for delivering interrupts throughout the system. The LAPIC also provides its connected CPU with a unique identifier. In this lab, we make use of the following basic functionality of the LAPIC unit (in `kern/lapic.c`):
|
||||
|
||||
* Reading the LAPIC identifier (APIC ID) to tell which CPU our code is currently running on (see `cpunum()`).
|
||||
* Sending the `STARTUP` interprocessor interrupt (IPI) from the BSP to the APs to bring up other CPUs (see `lapic_startap()`).
|
||||
* In part C, we program LAPIC's built-in timer to trigger clock interrupts to support preemptive multitasking (see `apic_init()`).
|
||||
|
||||
|
||||
|
||||
A processor accesses its LAPIC using memory-mapped I/O (MMIO). In MMIO, a portion of _physical_ memory is hardwired to the registers of some I/O devices, so the same load/store instructions typically used to access memory can be used to access device registers. You've already seen one IO hole at physical address `0xA0000` (we use this to write to the VGA display buffer). The LAPIC lives in a hole starting at physical address `0xFE000000` (32MB short of 4GB), so it's too high for us to access using our usual direct map at KERNBASE. The JOS virtual memory map leaves a 4MB gap at `MMIOBASE` so we have a place to map devices like this. Since later labs introduce more MMIO regions, you'll write a simple function to allocate space from this region and map device memory to it.

```
Exercise 1. Implement `mmio_map_region` in `kern/pmap.c`. To see how this is used, look at the beginning of `lapic_init` in `kern/lapic.c`. You'll have to do the next exercise, too, before the tests for `mmio_map_region` will run.
```

###### Application Processor Bootstrap

Before booting up APs, the BSP should first collect information about the multiprocessor system, such as the total number of CPUs, their APIC IDs and the MMIO address of the LAPIC unit. The `mp_init()` function in `kern/mpconfig.c` retrieves this information by reading the MP configuration table that resides in the BIOS's region of memory.

The `boot_aps()` function (in `kern/init.c`) drives the AP bootstrap process. APs start in real mode, much like how the bootloader started in `boot/boot.S`, so `boot_aps()` copies the AP entry code (`kern/mpentry.S`) to a memory location that is addressable in real mode. Unlike with the bootloader, we have some control over where the AP will start executing code; we copy the entry code to `0x7000` (`MPENTRY_PADDR`), but any unused, page-aligned physical address below 640KB would work.

After that, `boot_aps()` activates APs one after another, by sending `STARTUP` IPIs to the LAPIC unit of the corresponding AP, along with an initial `CS:IP` address at which the AP should start running its entry code (`MPENTRY_PADDR` in our case). The entry code in `kern/mpentry.S` is quite similar to that of `boot/boot.S`. After some brief setup, it puts the AP into protected mode with paging enabled, and then calls the C setup routine `mp_main()` (also in `kern/init.c`). `boot_aps()` waits for the AP to signal a `CPU_STARTED` flag in `cpu_status` field of its `struct CpuInfo` before going on to wake up the next one.

```
Exercise 2. Read `boot_aps()` and `mp_main()` in `kern/init.c`, and the assembly code in `kern/mpentry.S`. Make sure you understand the control flow transfer during the bootstrap of APs. Then modify your implementation of `page_init()` in `kern/pmap.c` to avoid adding the page at `MPENTRY_PADDR` to the free list, so that we can safely copy and run AP bootstrap code at that physical address. Your code should pass the updated `check_page_free_list()` test (but might fail the updated `check_kern_pgdir()` test, which we will fix soon).
```

```
Question

1. Compare `kern/mpentry.S` side by side with `boot/boot.S`. Bearing in mind that `kern/mpentry.S` is compiled and linked to run above `KERNBASE` just like everything else in the kernel, what is the purpose of macro `MPBOOTPHYS`? Why is it necessary in `kern/mpentry.S` but not in `boot/boot.S`? In other words, what could go wrong if it were omitted in `kern/mpentry.S`?
Hint: recall the differences between the link address and the load address that we have discussed in Lab 1.
```

###### Per-CPU State and Initialization

When writing a multiprocessor OS, it is important to distinguish between per-CPU state that is private to each processor, and global state that the whole system shares. `kern/cpu.h` defines most of the per-CPU state, including `struct CpuInfo`, which stores per-CPU variables. `cpunum()` always returns the ID of the CPU that calls it, which can be used as an index into arrays like `cpus`. Alternatively, the macro `thiscpu` is shorthand for the current CPU's `struct CpuInfo`.

Here is the per-CPU state you should be aware of:

* **Per-CPU kernel stack**.
Because multiple CPUs can trap into the kernel simultaneously, we need a separate kernel stack for each processor to prevent them from interfering with each other's execution. The array `percpu_kstacks[NCPU][KSTKSIZE]` reserves space for NCPU's worth of kernel stacks.

In Lab 2, you mapped the physical memory that `bootstack` refers to as the BSP's kernel stack just below `KSTACKTOP`. Similarly, in this lab, you will map each CPU's kernel stack into this region with guard pages acting as a buffer between them. CPU 0's stack will still grow down from `KSTACKTOP`; CPU 1's stack will start `KSTKGAP` bytes below the bottom of CPU 0's stack, and so on. `inc/memlayout.h` shows the mapping layout.

* **Per-CPU TSS and TSS descriptor**.
A per-CPU task state segment (TSS) is also needed in order to specify where each CPU's kernel stack lives. The TSS for CPU _i_ is stored in `cpus[i].cpu_ts`, and the corresponding TSS descriptor is defined in the GDT entry `gdt[(GD_TSS0 >> 3) + i]`. The global `ts` variable defined in `kern/trap.c` will no longer be useful.

* **Per-CPU current environment pointer**.
Since each CPU can run a different user process simultaneously, we redefined the symbol `curenv` to refer to `cpus[cpunum()].cpu_env` (or `thiscpu->cpu_env`), which points to the environment _currently_ executing on the _current_ CPU (the CPU on which the code is running).

* **Per-CPU system registers**.
All registers, including system registers, are private to a CPU. Therefore, instructions that initialize these registers, such as `lcr3()`, `ltr()`, `lgdt()`, `lidt()`, etc., must be executed once on each CPU. Functions `env_init_percpu()` and `trap_init_percpu()` are defined for this purpose.

```
Exercise 3. Modify `mem_init_mp()` (in `kern/pmap.c`) to map per-CPU stacks starting at `KSTACKTOP`, as shown in `inc/memlayout.h`. The size of each stack is `KSTKSIZE` bytes plus `KSTKGAP` bytes of unmapped guard pages. Your code should pass the new check in `check_kern_pgdir()`.
```

```
Exercise 4. The code in `trap_init_percpu()` (`kern/trap.c`) initializes the TSS and TSS descriptor for the BSP. It worked in Lab 3, but is incorrect when running on other CPUs. Change the code so that it can work on all CPUs. (Note: your new code should not use the global `ts` variable any more.)
```

When you finish the above exercises, run JOS in QEMU with 4 CPUs using make qemu CPUS=4 (or make qemu-nox CPUS=4); you should see output like this:

```
...
Physical memory: 66556K available, base = 640K, extended = 65532K
check_page_alloc() succeeded!
check_page() succeeded!
check_kern_pgdir() succeeded!
check_page_installed_pgdir() succeeded!
SMP: CPU 0 found 4 CPU(s)
enabled interrupts: 1 2
SMP: CPU 1 starting
SMP: CPU 2 starting
SMP: CPU 3 starting
```

###### Locking

Our current code spins after initializing the AP in `mp_main()`. Before letting the AP get any further, we need to first address race conditions when multiple CPUs run kernel code simultaneously. The simplest way to achieve this is to use a _big kernel lock_. The big kernel lock is a single global lock that is held whenever an environment enters kernel mode, and is released when the environment returns to user mode. In this model, environments in user mode can run concurrently on any available CPUs, but no more than one environment can run in kernel mode; any other environments that try to enter kernel mode are forced to wait.

`kern/spinlock.h` declares the big kernel lock, namely `kernel_lock`. It also provides `lock_kernel()` and `unlock_kernel()`, shortcuts to acquire and release the lock. You should apply the big kernel lock at four locations:

* In `i386_init()`, acquire the lock before the BSP wakes up the other CPUs.
* In `mp_main()`, acquire the lock after initializing the AP, and then call `sched_yield()` to start running environments on this AP.
* In `trap()`, acquire the lock when trapped from user mode. To determine whether a trap happened in user mode or in kernel mode, check the low bits of the `tf_cs`.
* In `env_run()`, release the lock _right before_ switching to user mode. Do not do that too early or too late, otherwise you will experience races or deadlocks.

```
Exercise 5. Apply the big kernel lock as described above, by calling `lock_kernel()` and `unlock_kernel()` at the proper locations.
```

How to test if your locking is correct? You can't at this moment! But you will be able to after you implement the scheduler in the next exercise.

```
Question

2. It seems that using the big kernel lock guarantees that only one CPU can run the kernel code at a time. Why do we still need separate kernel stacks for each CPU? Describe a scenario in which using a shared kernel stack will go wrong, even with the protection of the big kernel lock.
```

```
Challenge! The big kernel lock is simple and easy to use. Nevertheless, it eliminates all concurrency in kernel mode. Most modern operating systems use different locks to protect different parts of their shared state, an approach called _fine-grained locking_. Fine-grained locking can increase performance significantly, but is more difficult to implement and error-prone. If you are brave enough, drop the big kernel lock and embrace concurrency in JOS!

It is up to you to decide the locking granularity (the amount of data that a lock protects). As a hint, you may consider using spin locks to ensure exclusive access to these shared components in the JOS kernel:

* The page allocator.
* The console driver.
* The scheduler.
* The inter-process communication (IPC) state that you will implement in part C.
```

##### Round-Robin Scheduling

Your next task in this lab is to change the JOS kernel so that it can alternate between multiple environments in "round-robin" fashion. Round-robin scheduling in JOS works as follows:

* The function `sched_yield()` in the new `kern/sched.c` is responsible for selecting a new environment to run. It searches sequentially through the `envs[]` array in circular fashion, starting just after the previously running environment (or at the beginning of the array if there was no previously running environment), picks the first environment it finds with a status of `ENV_RUNNABLE` (see `inc/env.h`), and calls `env_run()` to jump into that environment.
* `sched_yield()` must never run the same environment on two CPUs at the same time. It can tell that an environment is currently running on some CPU (possibly the current CPU) because that environment's status will be `ENV_RUNNING`.
* We have implemented a new system call for you, `sys_yield()`, which user environments can call to invoke the kernel's `sched_yield()` function and thereby voluntarily give up the CPU to a different environment.

```
Exercise 6. Implement round-robin scheduling in `sched_yield()` as described above. Don't forget to modify `syscall()` to dispatch `sys_yield()`.

Make sure to invoke `sched_yield()` in `mp_main`.

Modify `kern/init.c` to create three (or more!) environments that all run the program `user/yield.c`.

Run make qemu. You should see the environments switch back and forth between each other five times before terminating, like below.

Test also with several CPUs: make qemu CPUS=2.

...
Hello, I am environment 00001000.
Hello, I am environment 00001001.
Hello, I am environment 00001002.
Back in environment 00001000, iteration 0.
Back in environment 00001001, iteration 0.
Back in environment 00001002, iteration 0.
Back in environment 00001000, iteration 1.
Back in environment 00001001, iteration 1.
Back in environment 00001002, iteration 1.
...

After the `yield` programs exit, there will be no runnable environment in the system, and the scheduler should invoke the JOS kernel monitor. If any of this does not happen, then fix your code before proceeding.
```

```
Question

3. In your implementation of `env_run()` you should have called `lcr3()`. Before and after the call to `lcr3()`, your code makes references (at least it should) to the variable `e`, the argument to `env_run`. Upon loading the `%cr3` register, the addressing context used by the MMU is instantly changed. But a virtual address (namely `e`) has meaning relative to a given address context--the address context specifies the physical address to which the virtual address maps. Why can the pointer `e` be dereferenced both before and after the addressing switch?
4. Whenever the kernel switches from one environment to another, it must ensure the old environment's registers are saved so they can be restored properly later. Why? Where does this happen?
```

```
Challenge! Add a less trivial scheduling policy to the kernel, such as a fixed-priority scheduler that allows each environment to be assigned a priority and ensures that higher-priority environments are always chosen in preference to lower-priority environments. If you're feeling really adventurous, try implementing a Unix-style adjustable-priority scheduler or even a lottery or stride scheduler. (Look up "lottery scheduling" and "stride scheduling" in Google.)

Write a test program or two that verifies that your scheduling algorithm is working correctly (i.e., the right environments get run in the right order). It may be easier to write these test programs once you have implemented `fork()` and IPC in parts B and C of this lab.
```

```
Challenge! The JOS kernel currently does not allow applications to use the x86 processor's x87 floating-point unit (FPU), MMX instructions, or Streaming SIMD Extensions (SSE). Extend the `Env` structure to provide a save area for the processor's floating point state, and extend the context switching code to save and restore this state properly when switching from one environment to another. The `FXSAVE` and `FXRSTOR` instructions may be useful, but note that these are not in the old i386 user's manual because they were introduced in more recent processors. Write a user-level test program that does something cool with floating-point.
```

##### System Calls for Environment Creation

Although your kernel is now capable of running and switching between multiple user-level environments, it is still limited to running environments that the _kernel_ initially set up. You will now implement the necessary JOS system calls to allow _user_ environments to create and start other new user environments.

Unix provides the `fork()` system call as its process creation primitive. Unix `fork()` copies the entire address space of the calling process (the parent) to create a new process (the child). The only differences between the two observable from user space are their process IDs and parent process IDs (as returned by `getpid` and `getppid`). In the parent, `fork()` returns the child's process ID, while in the child, `fork()` returns 0. By default, each process gets its own private address space, and neither process's modifications to memory are visible to the other.

You will provide a different, more primitive set of JOS system calls for creating new user-mode environments. With these system calls you will be able to implement a Unix-like `fork()` entirely in user space, in addition to other styles of environment creation. The new system calls you will write for JOS are as follows:

* `sys_exofork`:
This system call creates a new environment with an almost blank slate: nothing is mapped in the user portion of its address space, and it is not runnable. The new environment will have the same register state as the parent environment at the time of the `sys_exofork` call. In the parent, `sys_exofork` will return the `envid_t` of the newly created environment (or a negative error code if the environment allocation failed). In the child, however, it will return 0. (Since the child starts out marked as not runnable, `sys_exofork` will not actually return in the child until the parent has explicitly allowed this by marking the child runnable using....)
* `sys_env_set_status`:
Sets the status of a specified environment to `ENV_RUNNABLE` or `ENV_NOT_RUNNABLE`. This system call is typically used to mark a new environment ready to run, once its address space and register state has been fully initialized.
* `sys_page_alloc`:
Allocates a page of physical memory and maps it at a given virtual address in a given environment's address space.
* `sys_page_map`:
Copy a page mapping ( _not_ the contents of a page!) from one environment's address space to another, leaving a memory sharing arrangement in place so that the new and the old mappings both refer to the same page of physical memory.
* `sys_page_unmap`:
Unmap a page mapped at a given virtual address in a given environment.

For all of the system calls above that accept environment IDs, the JOS kernel supports the convention that a value of 0 means "the current environment." This convention is implemented by `envid2env()` in `kern/env.c`.

We have provided a very primitive implementation of a Unix-like `fork()` in the test program `user/dumbfork.c`. This test program uses the above system calls to create and run a child environment with a copy of its own address space. The two environments then switch back and forth using `sys_yield` as in the previous exercise. The parent exits after 10 iterations, whereas the child exits after 20.

```
Exercise 7. Implement the system calls described above in `kern/syscall.c` and make sure `syscall()` calls them. You will need to use various functions in `kern/pmap.c` and `kern/env.c`, particularly `envid2env()`. For now, whenever you call `envid2env()`, pass 1 in the `checkperm` parameter. Be sure you check for any invalid system call arguments, returning `-E_INVAL` in that case. Test your JOS kernel with `user/dumbfork` and make sure it works before proceeding.
```

```
Challenge! Add the additional system calls necessary to _read_ all of the vital state of an existing environment as well as set it up. Then implement a user mode program that forks off a child environment, runs it for a while (e.g., a few iterations of `sys_yield()`), then takes a complete snapshot or _checkpoint_ of the child environment, runs the child for a while longer, and finally restores the child environment to the state it was in at the checkpoint and continues it from there. Thus, you are effectively "replaying" the execution of the child environment from an intermediate state. Make the child environment perform some interaction with the user using `sys_cgetc()` or `readline()` so that the user can view and mutate its internal state, and verify that with your checkpoint/restart you can give the child environment a case of selective amnesia, making it "forget" everything that happened beyond a certain point.
```

This completes Part A of the lab; make sure it passes all of the Part A tests when you run make grade, and hand it in using make handin as usual. If you are trying to figure out why a particular test case is failing, run ./grade-lab4 -v, which will show you the output of the kernel builds and QEMU runs for each test, until a test fails. When a test fails, the script will stop, and then you can inspect `jos.out` to see what the kernel actually printed.

#### Part B: Copy-on-Write Fork

As mentioned earlier, Unix provides the `fork()` system call as its primary process creation primitive. The `fork()` system call copies the address space of the calling process (the parent) to create a new process (the child).

xv6 Unix implements `fork()` by copying all data from the parent's pages into new pages allocated for the child. This is essentially the same approach that `dumbfork()` takes. The copying of the parent's address space into the child is the most expensive part of the `fork()` operation.

However, a call to `fork()` is frequently followed almost immediately by a call to `exec()` in the child process, which replaces the child's memory with a new program. This is what the shell typically does, for example. In this case, the time spent copying the parent's address space is largely wasted, because the child process will use very little of its memory before calling `exec()`.

For this reason, later versions of Unix took advantage of virtual memory hardware to allow the parent and child to _share_ the memory mapped into their respective address spaces until one of the processes actually modifies it. This technique is known as _copy-on-write_. To do this, on `fork()` the kernel would copy the address space _mappings_ from the parent to the child instead of the contents of the mapped pages, and at the same time mark the now-shared pages read-only. When one of the two processes tries to write to one of these shared pages, the process takes a page fault. At this point, the Unix kernel realizes that the page was really a "virtual" or "copy-on-write" copy, and so it makes a new, private, writable copy of the page for the faulting process. In this way, the contents of individual pages aren't actually copied until they are actually written to. This optimization makes a `fork()` followed by an `exec()` in the child much cheaper: the child will probably only need to copy one page (the current page of its stack) before it calls `exec()`.

In the next piece of this lab, you will implement a "proper" Unix-like `fork()` with copy-on-write, as a user space library routine. Implementing `fork()` and copy-on-write support in user space has the benefit that the kernel remains much simpler and thus more likely to be correct. It also lets individual user-mode programs define their own semantics for `fork()`. A program that wants a slightly different implementation (for example, the expensive always-copy version like `dumbfork()`, or one in which the parent and child actually share memory afterward) can easily provide its own.

##### User-level page fault handling

A user-level copy-on-write `fork()` needs to know about page faults on write-protected pages, so that's what you'll implement first. Copy-on-write is only one of many possible uses for user-level page fault handling.

It's common to set up an address space so that page faults indicate when some action needs to take place. For example, most Unix kernels initially map only a single page in a new process's stack region, and allocate and map additional stack pages later "on demand" as the process's stack consumption increases and causes page faults on stack addresses that are not yet mapped. A typical Unix kernel must keep track of what action to take when a page fault occurs in each region of a process's space. For example, a fault in the stack region will typically allocate and map a new page of physical memory. A fault in the program's BSS region will typically allocate a new page, fill it with zeroes, and map it. In systems with demand-paged executables, a fault in the text region will read the corresponding page of the binary off of disk and then map it.
|
||||
|
||||
This is a lot of information for the kernel to keep track of. Instead of taking the traditional Unix approach, you will decide what to do about each page fault in user space, where bugs are less damaging. This design has the added benefit of allowing programs great flexibility in defining their memory regions; you'll use user-level page fault handling later for mapping and accessing files on a disk-based file system.
|
||||
|
||||
###### Setting the Page Fault Handler
|
||||
|
||||
In order to handle its own page faults, a user environment will need to register a _page fault handler entrypoint_ with the JOS kernel. The user environment registers its page fault entrypoint via the new `sys_env_set_pgfault_upcall` system call. We have added a new member to the `Env` structure, `env_pgfault_upcall`, to record this information.
|
||||
|
||||
```
|
||||
Exercise 8. Implement the `sys_env_set_pgfault_upcall` system call. Be sure to enable permission checking when looking up the environment ID of the target environment, since this is a "dangerous" system call.
|
||||
```
|
||||
|
||||
###### Normal and Exception Stacks in User Environments

During normal execution, a user environment in JOS will run on the _normal_ user stack: its `ESP` register starts out pointing at `USTACKTOP`, and the stack data it pushes resides on the page between `USTACKTOP-PGSIZE` and `USTACKTOP-1` inclusive. When a page fault occurs in user mode, however, the kernel will restart the user environment running a designated user-level page fault handler on a different stack, namely the _user exception_ stack. In essence, we will make the JOS kernel implement automatic "stack switching" on behalf of the user environment, in much the same way that the x86 _processor_ already implements stack switching on behalf of JOS when transferring from user mode to kernel mode!

The JOS user exception stack is also one page in size, and its top is defined to be at virtual address `UXSTACKTOP`, so the valid bytes of the user exception stack are from `UXSTACKTOP-PGSIZE` through `UXSTACKTOP-1` inclusive. While running on this exception stack, the user-level page fault handler can use JOS's regular system calls to map new pages or adjust mappings so as to fix whatever problem originally caused the page fault. Then the user-level page fault handler returns, via an assembly language stub, to the faulting code on the original stack.

Each user environment that wants to support user-level page fault handling will need to allocate memory for its own exception stack, using the `sys_page_alloc()` system call introduced in part A.
###### Invoking the User Page Fault Handler

You will now need to change the page fault handling code in `kern/trap.c` to handle page faults from user mode as follows. We will call the state of the user environment at the time of the fault the _trap-time_ state.

If there is no page fault handler registered, the JOS kernel destroys the user environment with a message as before. Otherwise, the kernel sets up a trap frame on the exception stack that looks like a `struct UTrapframe` from `inc/trap.h`:

```
                    <-- UXSTACKTOP
trap-time esp
trap-time eflags
trap-time eip
trap-time eax       start of struct PushRegs
trap-time ecx
trap-time edx
trap-time ebx
trap-time esp
trap-time ebp
trap-time esi
trap-time edi       end of struct PushRegs
tf_err (error code)
fault_va            <-- %esp when handler is run
```

The kernel then arranges for the user environment to resume execution with the page fault handler running on the exception stack with this stack frame; you must figure out how to make this happen. The `fault_va` is the virtual address that caused the page fault.

If the user environment is _already_ running on the user exception stack when an exception occurs, then the page fault handler itself has faulted. In this case, you should start the new stack frame just under the current `tf->tf_esp` rather than at `UXSTACKTOP`. You should first push an empty 32-bit word, then a `struct UTrapframe`.

To test whether `tf->tf_esp` is already on the user exception stack, check whether it is in the range between `UXSTACKTOP-PGSIZE` and `UXSTACKTOP-1`, inclusive.

```
Exercise 9. Implement the code in `page_fault_handler` in `kern/trap.c` required to dispatch page faults to the user-mode handler. Be sure to take appropriate precautions when writing into the exception stack. (What happens if the user environment runs out of space on the exception stack?)
```
###### User-mode Page Fault Entrypoint

Next, you need to implement the assembly routine that will take care of calling the C page fault handler and resume execution at the original faulting instruction. This assembly routine is the handler that will be registered with the kernel using `sys_env_set_pgfault_upcall()`.

```
Exercise 10. Implement the `_pgfault_upcall` routine in `lib/pfentry.S`. The interesting part is returning to the original point in the user code that caused the page fault. You'll return directly there, without going back through the kernel. The hard part is simultaneously switching stacks and re-loading the EIP.
```

Finally, you need to implement the C user library side of the user-level page fault handling mechanism.

```
Exercise 11. Finish `set_pgfault_handler()` in `lib/pgfault.c`.
```
###### Testing

Run `user/faultread` (`make run-faultread`). You should see:

```
...
[00000000] new env 00001000
[00001000] user fault va 00000000 ip 0080003a
TRAP frame ...
[00001000] free env 00001000
```

Run `user/faultdie`. You should see:

```
...
[00000000] new env 00001000
i faulted at va deadbeef, err 6
[00001000] exiting gracefully
[00001000] free env 00001000
```

Run `user/faultalloc`. You should see:

```
...
[00000000] new env 00001000
fault deadbeef
this string was faulted in at deadbeef
fault cafebffe
fault cafec000
this string was faulted in at cafebffe
[00001000] exiting gracefully
[00001000] free env 00001000
```

If you see only the first "this string" line, it means you are not handling recursive page faults properly.

Run `user/faultallocbad`. You should see:

```
...
[00000000] new env 00001000
[00001000] user_mem_check assertion failure for va deadbeef
[00001000] free env 00001000
```

Make sure you understand why `user/faultalloc` and `user/faultallocbad` behave differently.

```
Challenge! Extend your kernel so that not only page faults, but _all_ types of processor exceptions that code running in user space can generate, can be redirected to a user-mode exception handler. Write user-mode test programs to test user-mode handling of various exceptions such as divide-by-zero, general protection fault, and illegal opcode.
```
##### Implementing Copy-on-Write Fork

You now have the kernel facilities to implement copy-on-write `fork()` entirely in user space.

We have provided a skeleton for your `fork()` in `lib/fork.c`. Like `dumbfork()`, `fork()` should create a new environment, then scan through the parent environment's entire address space and set up corresponding page mappings in the child. The key difference is that, while `dumbfork()` copied _pages_, `fork()` will initially only copy page _mappings_. `fork()` will copy each page only when one of the environments tries to write it.

The basic control flow for `fork()` is as follows:

1. The parent installs `pgfault()` as the C-level page fault handler, using the `set_pgfault_handler()` function you implemented above.

2. The parent calls `sys_exofork()` to create a child environment.

3. For each writable or copy-on-write page in its address space below `UTOP`, the parent calls `duppage`, which should map the page copy-on-write into the address space of the child and then _remap_ the page copy-on-write in its own address space. [ Note: The ordering here (i.e., marking a page as COW in the child before marking it in the parent) actually matters! Can you see why? Try to think of a specific case where reversing the order could cause trouble. ] `duppage` sets both PTEs so that the page is not writeable, and to contain `PTE_COW` in the "avail" field to distinguish copy-on-write pages from genuine read-only pages.

   The exception stack is _not_ remapped this way, however. Instead you need to allocate a fresh page in the child for the exception stack. Since the page fault handler will be doing the actual copying and the page fault handler runs on the exception stack, the exception stack cannot be made copy-on-write: who would copy it?

   `fork()` also needs to handle pages that are present, but not writable or copy-on-write.

4. The parent sets the user page fault entrypoint for the child to look like its own.

5. The child is now ready to run, so the parent marks it runnable.
Each time one of the environments writes a copy-on-write page that it hasn't yet written, it will take a page fault. Here's the control flow for the user page fault handler:

1. The kernel propagates the page fault to `_pgfault_upcall`, which calls `fork()`'s `pgfault()` handler.
2. `pgfault()` checks that the fault is a write (check for `FEC_WR` in the error code) and that the PTE for the page is marked `PTE_COW`. If not, panic.
3. `pgfault()` allocates a new page mapped at a temporary location and copies the contents of the faulting page into it. Then the fault handler maps the new page at the appropriate address with read/write permissions, in place of the old read-only mapping.
The user-level `lib/fork.c` code must consult the environment's page tables for several of the operations above (e.g., checking that the PTE for a page is marked `PTE_COW`). The kernel maps the environment's page tables at `UVPT` exactly for this purpose. It uses a [clever mapping trick][1] to make it easy to look up PTEs for user code. `lib/entry.S` sets up `uvpt` and `uvpd` so that you can easily look up page-table information in `lib/fork.c`.
```
Exercise 12. Implement `fork`, `duppage` and `pgfault` in `lib/fork.c`.

Test your code with the `forktree` program. It should produce the following messages, with interspersed 'new env', 'free env', and 'exiting gracefully' messages. The messages may not appear in this order, and the environment IDs may be different.

1000: I am ''
1001: I am '0'
2000: I am '00'
2001: I am '000'
1002: I am '1'
3000: I am '11'
3001: I am '10'
4000: I am '100'
1003: I am '01'
5000: I am '010'
4001: I am '011'
2002: I am '110'
1004: I am '001'
1005: I am '111'
1006: I am '101'
```
```
Challenge! Implement a shared-memory `fork()` called `sfork()`. This version should have the parent and child _share_ all their memory pages (so writes in one environment appear in the other) except for pages in the stack area, which should be treated in the usual copy-on-write manner. Modify `user/forktree.c` to use `sfork()` instead of regular `fork()`. Also, once you have finished implementing IPC in part C, use your `sfork()` to run `user/pingpongs`. You will have to find a new way to provide the functionality of the global `thisenv` pointer.
```

```
Challenge! Your implementation of `fork` makes a huge number of system calls. On the x86, switching into the kernel using interrupts has non-trivial cost. Augment the system call interface so that it is possible to send a batch of system calls at once. Then change `fork` to use this interface.

How much faster is your new `fork`?

You can answer this (roughly) by using analytical arguments to estimate how much of an improvement batching system calls will make to the performance of your `fork`: How expensive is an `int 0x30` instruction? How many times do you execute `int 0x30` in your `fork`? Is accessing the `TSS` stack switch also expensive? And so on...

Alternatively, you can boot your kernel on real hardware and _really_ benchmark your code. See the `RDTSC` (read time-stamp counter) instruction, defined in the IA32 manual, which counts the number of clock cycles that have elapsed since the last processor reset. QEMU doesn't emulate this instruction faithfully (it can either count the number of virtual instructions executed or use the host TSC, neither of which reflects the number of cycles a real CPU would require).
```
This ends part B. Make sure you pass all of the Part B tests when you run `make grade`. As usual, you can hand in your submission with `make handin`.

#### Part C: Preemptive Multitasking and Inter-Process communication (IPC)

In the final part of lab 4 you will modify the kernel to preempt uncooperative environments and to allow environments to pass messages to each other explicitly.

##### Clock Interrupts and Preemption
Run the `user/spin` test program. This test program forks off a child environment, which simply spins forever in a tight loop once it receives control of the CPU. Neither the parent environment nor the kernel ever regains the CPU. This is obviously not an ideal situation in terms of protecting the system from bugs or malicious code in user-mode environments, because any user-mode environment can bring the whole system to a halt simply by getting into an infinite loop and never giving back the CPU. In order to allow the kernel to _preempt_ a running environment, forcefully retaking control of the CPU from it, we must extend the JOS kernel to support external hardware interrupts from the clock hardware.

###### Interrupt discipline

External interrupts (i.e., device interrupts) are referred to as IRQs. There are 16 possible IRQs, numbered 0 through 15. The mapping from IRQ number to IDT entry is not fixed. `pic_init` in `picirq.c` maps IRQs 0-15 to IDT entries `IRQ_OFFSET` through `IRQ_OFFSET+15`.

In `inc/trap.h`, `IRQ_OFFSET` is defined to be decimal 32. Thus the IDT entries 32-47 correspond to the IRQs 0-15. For example, the clock interrupt is IRQ 0. Thus, `IDT[IRQ_OFFSET+0]` (i.e., `IDT[32]`) contains the address of the clock's interrupt handler routine in the kernel. This `IRQ_OFFSET` is chosen so that the device interrupts do not overlap with the processor exceptions, which could obviously cause confusion. (In fact, in the early days of PCs running MS-DOS, the `IRQ_OFFSET` effectively _was_ zero, which indeed caused massive confusion between handling hardware interrupts and handling processor exceptions!)
In JOS, we make a key simplification compared to xv6 Unix. External device interrupts are _always_ disabled when in the kernel (and, like xv6, enabled when in user space). External interrupts are controlled by the `FL_IF` flag bit of the `%eflags` register (see `inc/mmu.h`). When this bit is set, external interrupts are enabled. While the bit can be modified in several ways, because of our simplification, we will handle it solely through the process of saving and restoring the `%eflags` register as we enter and leave user mode.

You will have to ensure that the `FL_IF` flag is set in user environments when they run so that when an interrupt arrives, it gets passed through to the processor and handled by your interrupt code. Otherwise, interrupts are _masked_, or ignored until interrupts are re-enabled. We masked interrupts with the very first instruction of the bootloader, and so far we have never gotten around to re-enabling them.
```
Exercise 13. Modify `kern/trapentry.S` and `kern/trap.c` to initialize the appropriate entries in the IDT and provide handlers for IRQs 0 through 15. Then modify the code in `env_alloc()` in `kern/env.c` to ensure that user environments are always run with interrupts enabled.

Also uncomment the `sti` instruction in `sched_halt()` so that idle CPUs unmask interrupts.

The processor never pushes an error code when invoking a hardware interrupt handler. You might want to re-read section 9.2 of the [80386 Reference Manual][2], or section 5.8 of the [IA-32 Intel Architecture Software Developer's Manual, Volume 3][3], at this time.

After doing this exercise, if you run your kernel with any test program that runs for a non-trivial length of time (e.g., `spin`), you should see the kernel print trap frames for hardware interrupts. While interrupts are now enabled in the processor, JOS isn't yet handling them, so you should see it misattribute each interrupt to the currently running user environment and destroy it. Eventually it should run out of environments to destroy and drop into the monitor.
```
###### Handling Clock Interrupts

In the `user/spin` program, after the child environment was first run, it just spun in a loop, and the kernel never got control back. We need to program the hardware to generate clock interrupts periodically, which will force control back to the kernel where we can switch control to a different user environment.

The calls to `lapic_init` and `pic_init` (from `i386_init` in `init.c`), which we have written for you, set up the clock and the interrupt controller to generate interrupts. You now need to write the code to handle these interrupts.

```
Exercise 14. Modify the kernel's `trap_dispatch()` function so that it calls `sched_yield()` to find and run a different environment whenever a clock interrupt takes place.

You should now be able to get the `user/spin` test to work: the parent environment should fork off the child, `sys_yield()` to it a couple of times but in each case regain control of the CPU after one time slice, and finally kill the child environment and terminate gracefully.
```
This is a great time to do some _regression testing_. Make sure that you haven't broken any earlier part of the lab that used to work (e.g. `forktree`) by enabling interrupts. Also, try running with multiple CPUs using `make CPUS=2` _target_. You should also be able to pass `stresssched` now. Run `make grade` to see for sure. You should now get a total score of 65/80 points on this lab.

##### Inter-Process communication (IPC)

(Technically in JOS this is "inter-environment communication" or "IEC", but everyone else calls it IPC, so we'll use the standard term.)

We've been focusing on the isolation aspects of the operating system, the ways it provides the illusion that each program has a machine all to itself. Another important service of an operating system is to allow programs to communicate with each other when they want to. It can be quite powerful to let programs interact with other programs. The Unix pipe model is the canonical example.

There are many models for interprocess communication. Even today there are still debates about which models are best. We won't get into that debate. Instead, we'll implement a simple IPC mechanism and then try it out.
###### IPC in JOS

You will implement a few additional JOS kernel system calls that collectively provide a simple interprocess communication mechanism. You will implement two system calls, `sys_ipc_recv` and `sys_ipc_try_send`. Then you will implement two library wrappers `ipc_recv` and `ipc_send`.

The "messages" that user environments can send to each other using JOS's IPC mechanism consist of two components: a single 32-bit value, and optionally a single page mapping. Allowing environments to pass page mappings in messages provides an efficient way to transfer more data than will fit into a single 32-bit integer, and also allows environments to set up shared memory arrangements easily.
###### Sending and Receiving Messages

To receive a message, an environment calls `sys_ipc_recv`. This system call de-schedules the current environment and does not run it again until a message has been received. When an environment is waiting to receive a message, _any_ other environment can send it a message - not just a particular environment, and not just environments that have a parent/child arrangement with the receiving environment. In other words, the permission checking that you implemented in Part A will not apply to IPC, because the IPC system calls are carefully designed so as to be "safe": an environment cannot cause another environment to malfunction simply by sending it messages (unless the target environment is also buggy).

To try to send a value, an environment calls `sys_ipc_try_send` with both the receiver's environment id and the value to be sent. If the named environment is actually receiving (it has called `sys_ipc_recv` and not gotten a value yet), then the send delivers the message and returns 0. Otherwise the send returns `-E_IPC_NOT_RECV` to indicate that the target environment is not currently expecting to receive a value.

A library function `ipc_recv` in user space will take care of calling `sys_ipc_recv` and then looking up the information about the received values in the current environment's `struct Env`.

Similarly, a library function `ipc_send` will take care of repeatedly calling `sys_ipc_try_send` until the send succeeds.
###### Transferring Pages

When an environment calls `sys_ipc_recv` with a valid `dstva` parameter (below `UTOP`), the environment is stating that it is willing to receive a page mapping. If the sender sends a page, then that page should be mapped at `dstva` in the receiver's address space. If the receiver already had a page mapped at `dstva`, then that previous page is unmapped.

When an environment calls `sys_ipc_try_send` with a valid `srcva` (below `UTOP`), it means the sender wants to send the page currently mapped at `srcva` to the receiver, with permissions `perm`. After a successful IPC, the sender keeps its original mapping for the page at `srcva` in its address space, but the receiver also obtains a mapping for this same physical page at the `dstva` originally specified by the receiver, in the receiver's address space. As a result this page becomes shared between the sender and receiver.

If either the sender or the receiver does not indicate that a page should be transferred, then no page is transferred. After any IPC the kernel sets the new field `env_ipc_perm` in the receiver's `Env` structure to the permissions of the page received, or zero if no page was received.
###### Implementing IPC

```
Exercise 15. Implement `sys_ipc_recv` and `sys_ipc_try_send` in `kern/syscall.c`. Read the comments on both before implementing them, since they have to work together. When you call `envid2env` in these routines, you should set the `checkperm` flag to 0, meaning that any environment is allowed to send IPC messages to any other environment, and the kernel does no special permission checking other than verifying that the target envid is valid.

Then implement the `ipc_recv` and `ipc_send` functions in `lib/ipc.c`.

Use the `user/pingpong` and `user/primes` programs to test your IPC mechanism. `user/primes` will generate for each prime number a new environment until JOS runs out of environments. You might find it interesting to read `user/primes.c` to see all the forking and IPC going on behind the scenes.
```

```
Challenge! Why does `ipc_send` have to loop? Change the system call interface so it doesn't have to. Make sure you can handle multiple environments trying to send to one environment at the same time.
```

```
Challenge! The prime sieve is only one neat use of message passing between a large number of concurrent programs. Read C. A. R. Hoare, "Communicating Sequential Processes," _Communications of the ACM_ 21(8) (August 1978), 666-677, and implement the matrix multiplication example.
```

```
Challenge! One of the most impressive examples of the power of message passing is Doug McIlroy's power series calculator, described in [M. Douglas McIlroy, "Squinting at Power Series," _Software: Practice and Experience_, 20(7) (July 1990), 661-683][4]. Implement his power series calculator and compute the power series for _sin_(_x_ + _x_^3).
```

```
Challenge! Make JOS's IPC mechanism more efficient by applying some of the techniques from Liedtke's paper, [Improving IPC by Kernel Design][5], or any other tricks you may think of. Feel free to modify the kernel's system call API for this purpose, as long as your code is backwards compatible with what our grading scripts expect.
```
**This ends part C.** Make sure you pass all of the `make grade` tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab4.txt`.

Before handing in, use `git status` and `git diff` to examine your changes and don't forget to `git add answers-lab4.txt`. When you're ready, commit your changes with `git commit -am 'my solutions to lab 4'`, then `make handin` and follow the directions.
--------------------------------------------------------------------------------

via: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/

作者:[csail.mit][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/uvpt.html
[2]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/i386/toc.htm
[3]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/ia32/IA32-3A.pdf
[4]: https://swtch.com/~rsc/thread/squint.pdf
[5]: http://dl.acm.org/citation.cfm?id=168633
How to Check HP iLO Firmware Version from Linux Command Line
======

There are many utilities available in Linux to get [hardware information][1].

Each tool has its own unique features that help us gather the required information.

We have already written many articles about this; the hardware tools include Dmidecode, hwinfo, lshw, inxi, lspci, lsscsi, lsusb, lsblk, Neofetch, ScreenFetch, etc.

Today we are going to discuss the same topic: how to check the HP iLO firmware version from the Linux command line.

Also read the following articles related to Linux hardware.
**Suggested Read :**
**(#)** [LSHW (Hardware Lister) – A Nifty Tool To Get A Hardware Information On Linux][2]
**(#)** [inxi – A Great Tool to Check Hardware Information on Linux][3]
**(#)** [Dmidecode – Easy Way To Get Linux System Hardware Information][4]
**(#)** [Neofetch – Shows Linux System Information With ASCII Distribution Logo][5]
**(#)** [ScreenFetch – Fetch Linux System Information on Terminal with Distribution ASCII art logo][6]
**(#)** [16 Methods To Check If A Linux System Is Physical or Virtual Machine][7]
**(#)** [hwinfo (Hardware Info) – A Nifty Tool To Detect System Hardware Information On Linux][8]
**(#)** [How To Find WWN, WWNN and WWPN Number Of HBA Card In Linux][9]
**(#)** [How To Check System Hardware Manufacturer, Model And Serial Number In Linux][1]
**(#)** [How To Use lspci, lsscsi, lsusb, And lsblk To Get Linux System Devices Information][10]
### What is iLO?

iLO (Integrated Lights-Out) is a proprietary embedded server management technology by Hewlett-Packard that provides out-of-band management facilities.

In simple terms, it's a dedicated device management channel that allows users to manage and monitor a device remotely, regardless of whether the machine is powered on or whether an operating system is installed or functional.

It allows a system administrator to monitor devices such as the CPU, RAM, hardware RAID, fan speed, power voltages, chassis intrusion, and firmware (BIOS or UEFI), and to manage remote consoles (KVM over IP), remote reboot, shutdown, power-on, etc.

Below is a list of lights-out management (LOM) technologies offered by various vendors:
* **`iLO:`** Integrated Lights-Out by HP
* **`IMM:`** Integrated Management Module by IBM
* **`iDRAC:`** Integrated Dell Remote Access Controller by Dell
* **`IPMI:`** Intelligent Platform Management Interface – a general standard, used on Supermicro hardware
* **`AMT:`** Intel Active Management Technology by Intel
* **`CIMC:`** Cisco Integrated Management Controller by Cisco
The following list gives details of iLO versions and the hardware they support:

* **`iLO:`** ProLiant G2, G3, G4, and G6 servers, model numbers under 300
* **`iLO 2:`** ProLiant G5 and G6 servers, model numbers 300 and higher
* **`iLO 3:`** ProLiant G7 servers
* **`iLO 4:`** ProLiant Gen8 and Gen9 servers
* **`iLO 5:`** ProLiant Gen10 servers
There are three easy ways to check the HP iLO firmware version in Linux. We will show them one by one.
### Method-1: Using the Dmidecode Command

[Dmidecode][4] is a tool that reads a computer's DMI table (DMI stands for Desktop Management Interface; some say SMBIOS, for System Management BIOS) and displays system hardware information in a human-readable format.

This table contains a description of the system's hardware components, as well as other useful information such as serial number, manufacturer information, release date, and BIOS revision.

The DMI table doesn't only describe what the system is currently made of; it can also report possible evolutions (such as the fastest supported CPU or the maximum amount of memory supported). This helps you analyze your hardware's capabilities, for example whether it supports the latest version of an application.

When you run it, dmidecode will try to locate the DMI table. If it succeeds, it will parse the table and display the records you ask for.

Here, we simply grep for the firmware revision:
```
# dmidecode | grep "Firmware Revision"
 Firmware Revision: 2.40
```
### Method-2: Using the HPONCFG Utility

HPONCFG is an online configuration tool used to set up and reconfigure iLO without requiring a reboot of the server operating system. The utility runs in command-line mode and must be executed from an operating system command line on the local server. HPONCFG enables you to initially configure features exposed through the RBSU or iLO.

Before using HPONCFG, the iLO Management Interface Driver must be loaded on the server. HPONCFG displays a warning if the driver is not installed.

To install it, visit the [HP website][11] and get the latest hponcfg package by searching for the appropriate keyword (a sample search keyword for iLO 4 is "HPE Integrated Lights-Out 4 (iLO 4)"). Then click "HP Lights-Out Online Configuration Utility for Linux (AMD64/EM64T)" and download the package.
```
# rpm -ivh /tmp/hponcfg-5.3.0-0.x86_64.rpm
```

Use the hponcfg command to get the information.

```
# hponcfg | grep Firmware
Firmware Revision = 2.40 Device type = iLO 4 Driver name = hpilo
```
### Method-3: Using the cURL Command

We can use the cURL command to get some of this information in XML format, for HP iLO, iLO 2, iLO 3, iLO 4, and iLO 5.

Using cURL we can get the iLO firmware version without logging in to the server or its console.

Make sure you use your own iLO management IP address to get the details. I have removed all the unnecessary details from the output below for clarity.
```
# curl -k https://10.2.0.101/xmldata?item=All

ProLiant DL380p G8
Integrated Lights-Out 4 (iLO 4)
2.40
```
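If you want just the version string out of that response, you can filter the XML with standard tools. The snippet below is a sketch that parses a trimmed, hard-coded stand-in for the response (so it runs without network access); the `<FWRI>` element name follows iLO's RIMP XML output, and in real use you would pipe the `curl` output into the same `sed` command with your own iLO address.

```shell
#!/bin/sh
# Stand-in for: curl -k https://<your-ilo-ip>/xmldata?item=All
xml='<RIMP><HSI><SPN>ProLiant DL380p G8</SPN></HSI>
<MP><PN>Integrated Lights-Out 4 (iLO 4)</PN><FWRI>2.40</FWRI></MP></RIMP>'

# Extract the firmware revision from the <FWRI> element.
fw=$(printf '%s\n' "$xml" | sed -n 's/.*<FWRI>\([^<]*\)<\/FWRI>.*/\1/p')
echo "iLO firmware version: $fw"
```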
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-check-hp-ilo-firmware-version-from-linux-command-line/

作者:[Prakash Subramanian][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/prakash/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/how-to-check-system-hardware-manufacturer-model-and-serial-number-in-linux/
[2]: https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/
[3]: https://www.2daygeek.com/inxi-system-hardware-information-on-linux/
[4]: https://www.2daygeek.com/dmidecode-get-print-display-check-linux-system-hardware-information/
[5]: https://www.2daygeek.com/neofetch-display-linux-systems-information-ascii-distribution-logo-terminal/
[6]: https://www.2daygeek.com/install-screenfetch-to-fetch-linux-system-information-on-terminal-with-distribution-ascii-art-logo/
[7]: https://www.2daygeek.com/check-linux-system-physical-virtual-machine-virtualization-technology/
[8]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
[9]: https://www.2daygeek.com/how-to-find-wwn-wwnn-and-wwpn-number-of-hba-card-in-linux/
[10]: https://www.2daygeek.com/check-system-hardware-devices-bus-information-lspci-lsscsi-lsusb-lsblk-linux/
[11]: https://support.hpe.com/hpesc/public/home
@ -1,3 +1,5 @@
translating---geekpi

4 cool new projects to try in COPR for October 2018
======

@ -1,87 +0,0 @@
translating---geekpi

Get organized at the Linux command line with Calcurse
======

Keep up with your calendar and to-do list with Calcurse.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT)

Do you need complex, feature-packed graphical or web applications to get and stay organized? I don't think so. The right command line tool can do the job and do it well.

Of course, uttering the words command and line together can strike fear into the hearts of some Linux users. The command line, to them, is terra incognita.

Organizing yourself at the command line is easy with [Calcurse][1]. Calcurse brings a graphical look and feel to a text-based interface. You get the simplicity and focus of the command line married to ease of use and navigation.

Let's take a closer look at Calcurse, which is open sourced under the BSD License.

### Getting the software

If compiling code is your thing (it's not mine, generally), you can grab the source code from the [Calcurse website][1]. Otherwise, get the [binary installer][2] for your Linux distribution. You might even be able to get Calcurse from your Linux distro's package manager. It never hurts to check.

Compile or install Calcurse (neither takes all that long), and you're ready to go.

### Using Calcurse

Crack open a terminal window and type **calcurse**.

![](https://opensource.com/sites/default/files/uploads/calcurse-main.png)

Calcurse's interface consists of three panels:

* Appointments (the left side of the screen)
* Calendar (the top right)
* To-do list (the bottom right)

Move between the panels by pressing the Tab key on your keyboard. To add a new item to a panel, press **a**. Calcurse walks you through what you need to do to add the item.

One interesting quirk is that the Appointment and Calendar panels work together. You add an appointment by tabbing to the Calendar panel. There, you choose the date for your appointment. Once you do that, you tab back to the Appointments panel. I know …

Press **a** to set a start time, a duration (in minutes), and a description of the appointment. The start time and duration are optional. Calcurse displays appointments on the day they're due.

![](https://opensource.com/sites/default/files/uploads/calcurse-appointment.png)

Here's what a day's appointments look like:

![](https://opensource.com/sites/default/files/uploads/calcurse-appt-list.png)

The to-do list works on its own. Tab to the ToDo panel and (again) press **a**. Type a description of the task, then set a priority (1 is the highest and 9 is the lowest). Calcurse lists your uncompleted tasks in the ToDo panel.

![](https://opensource.com/sites/default/files/uploads/calcurse-todo.png)

If your task has a long description, Calcurse truncates it. You can view long descriptions by navigating to the task using the up or down arrow keys on your keyboard, then pressing **v**.

![](https://opensource.com/sites/default/files/uploads/calcurse-view-todo.png)

Calcurse saves its information in text files in a hidden folder called **.calcurse** in your home directory—for example, **/home/scott/.calcurse**. If Calcurse stops working, it's easy to find your information.

### Other useful features

Other Calcurse features include the ability to set recurring appointments. To do that, find the appointment you want to repeat and press **r** in the Appointments panel. You'll be asked to set the frequency (for example, daily or weekly) and how long you want the appointment to repeat.

You can also import calendars in [ICAL][3] format or export your data in either ICAL or [PCAL][4] format. With ICAL, you can share your data with other calendar applications. With PCAL, you can generate a Postscript version of your calendar.

There are also a number of command line arguments you can pass to Calcurse. You can read about them [in the documentation][5].

While simple, Calcurse does a solid job of helping you keep organized. You'll need to be a bit more mindful of your tasks and appointments, but you'll be able to focus better on what you need to do and where you need to be.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/calcurse

作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: http://www.calcurse.org/
[2]: http://www.calcurse.org/downloads/#packages
[3]: https://tools.ietf.org/html/rfc2445
[4]: http://pcal.sourceforge.net/
[5]: http://www.calcurse.org/files/manual.chunked/ar01s04.html#_invocation
@ -0,0 +1,69 @@
Create animated, scalable vector graphic images with MacSVG
======

Open source SVG: The writing is on the wall

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_design_paper_plane_2_0.jpg?itok=xKdP-GWE)

The Neo-Babylonian regent [Belshazzar][1] did not heed [the writing on the wall][2] that magically appeared during his great feast. However, if he had had a laptop and a good internet connection in 539 BC, he might have staved off those pesky Persians by reading the SVG on the browser.

Animating text and objects on web pages is a great way to build user interest and engagement. There are several ways to achieve this, such as a video embed, an animated GIF, or a slideshow—but you can also use [scalable vector graphics][3] (SVG).

An SVG image is different from, say, a JPG, because it is scalable without losing its resolution. A vector image is created by points, not dots, so no matter how large it gets, it will not lose its resolution or pixelate. An example of a good use of scalable, static images would be logos on websites.

### Move it, move it

You can create SVG images with several drawing programs, including open source [Inkscape][4] and Adobe Illustrator. Getting your images to “do something” requires a bit more effort. Fortunately, there are open source solutions that would get even Belshazzar’s attention.

[MacSVG][5] is one tool that will get your images moving. You can find the source code on [GitHub][6].

Developed by Douglas Ward of Conway, Arkansas, MacSVG is an “open source Mac OS app for designing HTML5 SVG art and animation,” according to its [website][5].

I was interested in using MacSVG to create an animated signature. I admit that I found the process a bit confusing and failed at my first attempts to create an actual animated SVG image.

![](https://opensource.com/sites/default/files/uploads/macsvg-screen.png)

It is important to first learn what makes “the writing on the wall” actually write.

The attribute behind the animated writing is [stroke-dasharray][7]. Breaking the term into three words helps explain what is happening: Stroke refers to the line or stroke you would make with a pen, whether physical or digital. Dash means breaking the stroke down into a series of dashes. Array means producing the whole thing into an array. That’s a simple overview, but it helped me understand what was supposed to happen and why.
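
A minimal, hand-written sketch of the same idea (not produced by MacSVG; the path data here is just an example): animating stroke-dasharray from “no dash, all gap” to “all dash, no gap” makes the stroke appear to draw itself.

```
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100" viewBox="0 0 200 100">
  <!-- pathLength="100" normalizes the path length, so the dash values
       below don't need to match the path's true geometric length -->
  <path d="M10,80 C40,10 80,10 110,80 S160,150 190,20"
        fill="none" stroke="#004d40" stroke-width="4" pathLength="100">
    <!-- Animate from "0 dash, 100 gap" to "100 dash, 0 gap":
         the stroke appears to be drawn over three seconds -->
    <animate attributeName="stroke-dasharray"
             values="0,100;100,0" dur="3s" fill="freeze" />
  </path>
</svg>
```

Save this as an `.svg` file and open it in a browser to watch the curve draw itself once.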
With MacSVG, you can import a graphic (.PNG) and use the pen tool to trace the path of the writing. I used a cursive representation of my first name. Then it was just a matter of applying the attributes to animate the writing, increase and decrease the thickness of the stroke, change its color, and so on. Once completed, the animated writing was exported as an .SVG file and was ready for use on the web. MacSVG can be used for many different types of SVG animation in addition to handwriting.

### The writing is on the WordPress

I was ready to upload and share my SVG example on my [WordPress][8] site, but I discovered that WordPress does not allow for SVG media imports. Fortunately, I found a handy plugin: Benbodhi’s [SVG Support][9] allowed a quick, easy import of my SVG the same way I would import a JPG to my Media Library. I was able to showcase my [writing on the wall][10] to Babylonians everywhere.

I opened the source code of my SVG in [Brackets][11], and here are the results:

```
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://web.resource.org/cc/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" height="360px" style="zoom: 1;" cursor="default" id="svg_document" width="480px" baseProfile="full" version="1.1" preserveAspectRatio="xMidYMid meet" viewBox="0 0 480 360"><title id="svg_document_title">Path animation with stroke-dasharray</title><desc id="desc1">This example demonstrates the use of a path element, an animate element, and the stroke-dasharray attribute to simulate drawing.</desc><defs id="svg_document_defs"></defs><g id="main_group"></g><path stroke="#004d40" id="path2" stroke-width="9px" d="M86,75 C86,75 75,72 72,61 C69,50 66,37 71,34 C76,31 86,21 92,35 C98,49 95,73 94,82 C93,91 87,105 83,110 C79,115 70,124 71,113 C72,102 67,105 75,97 C83,89 111,74 111,74 C111,74 119,64 119,63 C119,62 110,57 109,58 C108,59 102,65 102,66 C102,67 101,75 107,79 C113,83 118,85 122,81 C126,77 133,78 136,64 C139,50 147,45 146,33 C145,21 136,15 132,24 C128,33 123,40 123,49 C123,58 135,87 135,96 C135,105 139,117 133,120 C127,123 116,127 120,116 C124,105 144,82 144,81 C144,80 158,66 159,58 C160,50 159,48 161,43 C163,38 172,23 166,22 C160,21 155,12 153,23 C151,34 161,68 160,78 C159,88 164,108 163,113 C162,118 165,126 157,128 C149,130 152,109 152,109 C152,109 185,64 185,64 " fill="none" transform=""><animate values="0,1739;1739,0;" attributeType="XML" begin="0; animate1.end+5s" id="animateSig1" repeatCount="indefinite" attributeName="stroke-dasharray" fill="freeze" dur="2"></animate></path></svg>
```

What would you use MacSVG for?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/macsvg-open-source-tool-animation

作者:[Jeff Macharyas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rikki-endsley
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Belshazzar
[2]: https://en.wikipedia.org/wiki/Belshazzar%27s_feast
[3]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics
[4]: https://inkscape.org/
[5]: https://macsvg.org/
[6]: https://github.com/dsward2/macSVG
[7]: https://gist.github.com/mbostock/5649592
[8]: https://macharyas.com/
[9]: https://wordpress.org/plugins/svg-support/
[10]: https://macharyas.com/index.php/2018/10/14/open-source-svg/
[11]: http://brackets.io/
@ -0,0 +1,100 @@
FSSlc translating

How To Analyze And Explore The Contents Of Docker Images
======

![](https://www.ostechnix.com/wp-content/uploads/2018/10/dive-tool-720x340.png)

As you may already know, a Docker container image is a lightweight, standalone, executable package of software that has everything required to run an application. That’s why container images are often used by developers for building and distributing applications. If you’re curious to know what is in a Docker image, this brief guide might help you. Today, we are going to learn to analyze and explore the contents of Docker images layer by layer using a tool named **“Dive”**. By analyzing a Docker image, we can discover possible duplicate files across the layers and remove them to reduce the size of the Docker image. The Dive utility is not just a Docker image analyzer; it also helps us to build one.

### Installing Dive

Get the latest version from the [**releases page**][1] and install it as shown below, depending upon the distribution you use.

If you’re on **Debian** or **Ubuntu**, run the following commands to download and install it.

```
$ wget https://github.com/wagoodman/dive/releases/download/v0.0.8/dive_0.0.8_linux_amd64.deb

$ sudo apt install ./dive_0.0.8_linux_amd64.deb
```

**On RHEL/CentOS:**

```
$ wget https://github.com/wagoodman/dive/releases/download/v0.0.8/dive_0.0.8_linux_amd64.rpm

$ sudo rpm -i dive_0.0.8_linux_amd64.rpm
```

Dive can also be installed using the [**Linuxbrew**][2] package manager.

```
$ brew tap wagoodman/dive

$ brew install dive
```

For other installation methods, refer to the project’s GitHub page given at the end of this guide.

### Analyze And Explore The Contents Of Docker Images

To analyze a Docker image, simply run the dive command with a Docker image ID. You can get your Docker images’ IDs using the “sudo docker images” command.

```
$ sudo dive ea4c82dcd15a
```

Here, **ea4c82dcd15a** is the Docker image ID.

The Dive command will quickly analyze the given Docker image and display its contents in the Terminal.

![](https://www.ostechnix.com/wp-content/uploads/2018/10/Dive-1.png)

As you can see in the above screenshot, the layers of the given Docker image, their details, and the wasted space are shown in the left pane. The right pane shows the contents of each layer in the given Docker image. You can switch between the left and right panes using the **Ctrl+SPACEBAR** key and navigate through the directory tree using the **UP/DOWN** arrow keys.

Here is the list of keyboard shortcuts for using “Dive”:

* **Ctrl+Spacebar** – Switch between left and right panes,
* **Spacebar** – Expand/Collapse directory tree,
* **Ctrl+A** – Show/hide added files,
* **Ctrl+R** – Show/hide removed files,
* **Ctrl+M** – Show/hide modified files,
* **Ctrl+U** – Show/hide unmodified files,
* **Ctrl+L** – Show layer changes,
* **Ctrl+/** – Filter files,
* **Ctrl+C** – Exit.

In the above example, I used “sudo” permission, because my Docker images are stored in the **/var/lib/docker/** directory. If they are in your $HOME directory, or anywhere not owned by the “root” user, you don’t need to use “sudo”.

You can also build a Docker image and do an immediate analysis with one command:

```
$ dive build -t <some-tag> .
```

The Dive tool is still in beta stage, so there will be bugs. If you come across any, report them on the project’s GitHub page.

And, that’s all for today. You now know how to explore and analyze the contents of a Docker container image, and how to build one, using the Dive tool. Hope this helps.

More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-analyze-and-explore-the-contents-of-docker-images/

作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://github.com/wagoodman/dive/releases
[2]: https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
@ -0,0 +1,130 @@
Podman: A more secure way to run containers
======

Podman uses a traditional fork/exec model (vs. a client/server model) for running containers.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-cloud-safe.png?itok=yj2TFPzq)

Before I get into the main topic of this article, [Podman][1] and containers, I need to get a little technical about the Linux audit feature.

### What is audit?

The Linux kernel has an interesting security feature called **audit**. It allows administrators to watch for security events on a system and have them logged to the audit.log, which can be stored locally or remotely on another machine to prevent a hacker from trying to cover his tracks.

The **/etc/shadow** file is a common security file to watch, since adding a record to it could allow an attacker to gain return access to the system. Administrators want to know if any process modified the file. You can do this by executing the command:

```
# auditctl -w /etc/shadow
```

Now let's see what happens if I modify the /etc/shadow file:

```
# touch /etc/shadow
# ausearch -f /etc/shadow -i -ts recent

type=PROCTITLE msg=audit(10/10/2018 09:46:03.042:4108) : proctitle=touch /etc/shadow type=SYSCALL msg=audit(10/10/2018 09:46:03.042:4108) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb17f6704 a2=O_WRONLY|O_CREAT|O_NOCTTY| O_NONBLOCK a3=0x1b6 items=2 ppid=2712 pid=3727 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts1 ses=3 comm=touch exe=/usr/bin/touch subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
```

There's a lot of information in the audit record, but I highlighted that it recorded that root modified the /etc/shadow file and that the audit UID (**auid**) of the process owner was **dwalsh**.

How did the kernel do that?

#### Tracking the login UID

There is a field called **loginuid**, stored in **/proc/self/loginuid**, that is part of the proc struct of every process on the system. This field can be set only once; after it is set, the kernel will not allow any process to reset it.

When I log into the system, the login program sets the loginuid field for my login process.

My UID, dwalsh, is 3267.

```
$ cat /proc/self/loginuid
3267
```

Now, even if I become root, my login UID stays the same.

```
$ sudo cat /proc/self/loginuid
3267
```

Note that every process that's forked and executed from the initial login process automatically inherits the loginuid. This is how the kernel knew that the person who logged in was dwalsh.
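
You can see this inheritance for yourself on any Linux box; a quick sketch (the value printed varies per system, and may be the unset value 4294967295 on a machine without an audited login session):

```
# The parent shell, a subshell, and a freshly exec'd child process
# all report the same loginuid, because the field is inherited
cat /proc/self/loginuid
( cat /proc/self/loginuid )          # subshell: same value
sh -c 'cat /proc/self/loginuid'      # new process: same value
```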
### Containers

Now let's look at containers.

```
sudo podman run fedora cat /proc/self/loginuid
3267
```

Even the container process retains my loginuid. Now let's try with Docker.

```
sudo docker run fedora cat /proc/self/loginuid
4294967295
```

### Why the difference?

Podman uses a traditional fork/exec model for the container, so the container process is an offspring of the Podman process. Docker uses a client/server model. The **docker** command I executed is the Docker client tool, and it communicates with the Docker daemon via a client/server operation. Then the Docker daemon creates the container and handles communications of stdin/stdout back to the Docker client tool.

The default loginuid of processes (before their loginuid is set) is 4294967295. Since the container is an offspring of the Docker daemon and the Docker daemon is a child of the init system, we see that systemd, Docker daemon, and the container processes all have the same loginuid, 4294967295, which audit refers to as the unset audit UID.

```
cat /proc/1/loginuid
4294967295
```

### How can this be abused?

Let's look at what would happen if a container process launched by Docker modifies the /etc/shadow file.

```
$ sudo docker run --privileged -v /:/host fedora touch /host/etc/shadow
$ sudo ausearch -f /etc/shadow -i

type=PROCTITLE msg=audit(10/10/2018 10:27:20.055:4569) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow type=SYSCALL msg=audit(10/10/2018 10:27:20.055:4569) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb6973f50 a2=O_WRONLY|O_CREAT|O_NOCTTY| O_NONBLOCK a3=0x1b6 items=2 ppid=11863 pid=11882 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=touch exe=/usr/bin/coreutils subj=system_u:system_r:spc_t:s0 key=(null)
```

In the Docker case, the auid is unset (4294967295); this means the security officer might know that a process modified the /etc/shadow file, but the identity was lost.

If that attacker then removed the Docker container, there would be no trace on the system of who modified the /etc/shadow file.

Now let's look at the exact same scenario with Podman.

```
$ sudo podman run --privileged -v /:/host fedora touch /host/etc/shadow
$ sudo ausearch -f /etc/shadow -i

type=PROCTITLE msg=audit(10/10/2018 10:23:41.659:4530) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow type=SYSCALL msg=audit(10/10/2018 10:23:41.659:4530) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7fffdffd0f34 a2=O_WRONLY|O_CREAT|O_NOCTTY| O_NONBLOCK a3=0x1b6 items=2 ppid=11671 pid=11683 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=3 comm=touch exe=/usr/bin/coreutils subj=unconfined_u:system_r:spc_t:s0 key=(null)
```

Everything is recorded correctly with Podman since it uses a traditional fork/exec model.

This was just a simple example of watching the /etc/shadow file, but the auditing system is very powerful for watching what processes do on a system. Using a fork/exec container runtime for launching containers (instead of a client/server container runtime) allows you to maintain better security through audit logging.

### Final thoughts

There are many other nice features of the fork/exec model versus the client/server model when launching containers. For example, systemd features include:

* **SD_NOTIFY:** If you put a Podman command into a systemd unit file, the container process can return notice up the stack through Podman that the service is ready to receive tasks. This is something that can't be done in client/server mode.
* **Socket activation:** You can pass down connected sockets from systemd to Podman and onto the container process to use them. This is impossible in the client/server model.

The nicest feature, in my opinion, is **running Podman and containers as a non-root user**. This means you never have to give a user root privileges on the host, while in the client/server model (like the one Docker employs), you must open a socket to a privileged daemon running as root to launch the containers. There you are at the mercy of the security mechanisms implemented in the daemon versus the security mechanisms implemented in the host operating system—a dangerous proposition.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/podman-more-secure-way-run-containers

作者:[Daniel J Walsh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rhatdan
[b]: https://github.com/lujun9972
[1]: https://podman.io
@ -0,0 +1,60 @@
8 creepy commands that haunt the terminal
======

Welcome to the spookier side of Linux.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/halloween_bag_bat_diy.jpg?itok=24M0lX25)

It’s that time of year again: The weather gets chilly, the leaves change colors, and kids everywhere transform into tiny ghosts, goblins, and zombies. But did you know that Unix (and Linux) and its various offshoots are also chock-full of creepy crawly things? Let’s take a quick look at some of the spookier aspects of the operating system we all know and love.

### daemon

Unix just wouldn’t be the same without all the various daemons that haunt the system. A `daemon` is a process that runs in the background and provides useful services to both the user and the operating system itself. Think SSH, FTP, HTTP, etc.

### zombie

Every now and then a zombie, a process that has been killed but refuses to go away, shows up. When this happens, you have no choice but to dispatch it using whatever tools you have available. A zombie usually indicates that something is wrong with the process that spawned it.

### kill

Not only can you use the `kill` command to dispatch a zombie, but you can also use it to kill any process that’s adversely affecting your system. Have a process that’s using too much RAM or CPU cycles? Dispatch it with the `kill` command.
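
For example, you can lay a misbehaving (here, merely sleeping) process to rest like this:

```
# Start a long-running process in the background and grab its PID
sleep 300 &
pid=$!

# Dispatch it (SIGTERM by default; kill -9 sends the unblockable SIGKILL)
kill "$pid"
wait "$pid" 2>/dev/null || true

# Confirm it is gone
kill -0 "$pid" 2>/dev/null || echo "process $pid is no more"
```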
### cat

The `cat` command has nothing to do with felines and everything to do with combining files: `cat` is short for "concatenate." You can even use this handy command to view the contents of a file.
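
A quick demonstration of both uses:

```
# Create two small files
printf 'double,\n' > /tmp/part1.txt
printf 'toil and trouble\n' > /tmp/part2.txt

# Concatenate them into one file, then view the result
cat /tmp/part1.txt /tmp/part2.txt > /tmp/spell.txt
cat /tmp/spell.txt    # prints both lines, in order
```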
### tail

The `tail` command is useful when you want to see the last *n* lines of a file. It’s also great when you want to monitor a file.
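
For instance:

```
# Make a 100-line file, then look at just the end of it
seq 1 100 > /tmp/lines.txt
tail -n 3 /tmp/lines.txt     # prints 98, 99, 100

# To watch a growing file (such as a log) in real time, you would use:
#   tail -f /path/to/logfile
```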
### which

No, not that kind of witch, but the command that prints the location of the files associated with any command passed to it. `which python`, for example, will print the location of the `python` executable found in your path (pass the `-a` flag to list every match).

### crypt

The `crypt` command, known these days as `mcrypt`, is handy when you want to scramble (encrypt) the contents of a file so that no one but you can read it. Like most Unix commands, you can use `crypt` standalone or within a system script.

### shred

The `shred` command is handy when you not only want to delete a file but also want to ensure that no one will ever be able to recover it. Using the `rm` command to delete a file isn’t enough; you also need to overwrite the space that the file previously occupied. That’s where `shred` comes in.
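
A short example (note that on journaling or copy-on-write filesystems, shred's overwrite guarantee is weaker, as its man page warns):

```
# Create a file with something in it
printf 'my secret formula\n' > /tmp/secret.txt

# Overwrite it 3 times, then truncate and remove it
shred -u -n 3 /tmp/secret.txt

# The file is gone, and its old blocks were overwritten first
test ! -e /tmp/secret.txt && echo "no trace left"
```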
These are just a few of the spooky things you’ll find hiding inside Unix. Do you know more creepy commands? Feel free to let me know.

Happy Halloween!

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/spookier-side-unix-linux

作者:[Patrick H.Mullins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/pmullins
[b]: https://github.com/lujun9972
@ -0,0 +1,302 @@
Working with data streams on the Linux command line
======

Learn to connect data streams from one utility to another using STDIO.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pipe-pipeline-grid.png?itok=kkpzKxKg)

**Author’s note:** Much of the content in this article is excerpted, with some significant edits to fit the Opensource.com article format, from Chapter 3: Data Streams, of my new book, [The Linux Philosophy for SysAdmins][1].

Everything in Linux revolves around streams of data—particularly text streams. Data streams are the raw materials upon which the [GNU Utilities][2], the Linux core utilities, and many other command-line tools perform their work.

As its name implies, a data stream is a stream of data—especially text data—being passed from one file, device, or program to another using STDIO. This chapter introduces the use of pipes to connect streams of data from one utility program to another using STDIO. You will learn that the function of these programs is to transform the data in some manner. You will also learn about the use of redirection to redirect the data to a file.

I use the term “transform” in conjunction with these programs because the primary task of each is to transform the incoming data from STDIO in a specific way as intended by the sysadmin and to send the transformed data to STDOUT for possible use by another transformer program or redirection to a file.

The standard term, “filters,” implies something with which I don’t agree. By definition, a filter is a device or a tool that removes something, such as an air filter removes airborne contaminants so that the internal combustion engine of your automobile does not grind itself to death on those particulates. In my high school and college chemistry classes, filter paper was used to remove particulates from a liquid. The air filter in my home HVAC system removes particulates that I don’t want to breathe.

Although they do sometimes filter out unwanted data from a stream, I much prefer the term “transformers” because these utilities do so much more. They can add data to a stream, modify the data in some amazing ways, sort it, rearrange the data in each line, perform operations based on the contents of the data stream, and so much more. Feel free to use whichever term you prefer, but I prefer transformers. I expect that I am alone in this.

Data streams can be manipulated by inserting transformers into the stream using pipes. Each transformer program is used by the sysadmin to perform some operation on the data in the stream, thus changing its contents in some manner. Redirection can then be used at the end of the pipeline to direct the data stream to a file. As mentioned, that file could be an actual data file on the hard drive, or a device file such as a drive partition, a printer, a terminal, a pseudo-terminal, or any other device connected to a computer.
The ability to manipulate these data streams using these small yet powerful transformer programs is central to the power of the Linux command-line interface. Many of the core utilities are transformer programs and use STDIO.
|
||||
|
||||
In the Unix and Linux worlds, a stream is a flow of text data that originates at some source; the stream may flow to one or more programs that transform it in some way, and then it may be stored in a file or displayed in a terminal session. As a sysadmin, your job is intimately associated with manipulating the creation and flow of these data streams. In this post, we will explore data streams—what they are, how to create them, and a little bit about how to use them.
|
||||
|
||||
### Text streams—a universal interface
|
||||
|
||||
The use of Standard Input/Output (STDIO) for program input and output is a key foundation of the Linux way of doing things. STDIO was first developed for Unix and has found its way into most other operating systems since then, including DOS, Windows, and Linux.
|
||||
|
||||
> “This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.”
|
||||
>
|
||||
> — Doug McIlroy, Basics of the Unix Philosophy
|
||||
|
||||
### STDIO

STDIO was developed by Ken Thompson as a part of the infrastructure required to implement pipes on early versions of Unix. Programs that implement STDIO use standardized file handles for input and output rather than files that are stored on a disk or other recording media. STDIO is best described as a buffered data stream, and its primary function is to stream data from the output of one program, file, or device to the input of another program, file, or device.

There are three STDIO data streams, each of which is automatically opened as a file at the startup of a program—well, of those programs that use STDIO. Each STDIO data stream is associated with a file handle, which is just a set of metadata that describes the attributes of the file. File handles 0, 1, and 2 are explicitly defined by convention and long practice as STDIN, STDOUT, and STDERR, respectively.

**STDIN, File handle 0**, is standard input, which is usually input from the keyboard. STDIN can be redirected from any file, including device files, instead of the keyboard. It is not common to need to redirect STDIN, but it can be done.

**STDOUT, File handle 1**, is standard output, which sends the data stream to the display by default. It is common to redirect STDOUT to a file or to pipe it to another program for further processing.

**STDERR, File handle 2**, is standard error. The data stream for STDERR is also usually sent to the display.

If STDOUT is redirected to a file, STDERR continues to be displayed on the screen. This ensures that even when the data stream itself is not displayed on the terminal, STDERR is, so the user will see any errors resulting from execution of the program. STDERR can also be redirected to the same file as STDOUT or passed on to the next transformer program in a pipeline.

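The relationship between the three streams is easy to see with a short, safe experiment. This sketch is not from the book; the file names under **/tmp** are arbitrary. `2>` redirects file handle 2 (STDERR), and `2>&1` merges it into STDOUT:

```shell
# Scratch files under /tmp; the names are arbitrary.
touch /tmp/exists.txt
rm -f /tmp/missing.txt
# Split the streams: the listing goes to out.log, the error line to err.log.
# (|| true keeps scripts running, since ls exits nonzero here.)
ls /tmp/exists.txt /tmp/missing.txt > /tmp/out.log 2> /tmp/err.log || true
# Merge STDERR into STDOUT with 2>&1 so both lines land in one file.
ls /tmp/exists.txt /tmp/missing.txt > /tmp/both.log 2>&1 || true
```

After running this, **out.log** holds only the successful listing, **err.log** holds only the error message, and **both.log** holds both.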
STDIO is implemented as a C library, **stdio.h**, which can be included in the source code of programs so that it can be compiled into the resulting executable.

### Simple streams

You can perform the following experiments safely in the **/tmp** directory of your Linux host. As the root user, make **/tmp** the PWD, create a test directory, and then make the new directory the PWD.

```
# cd /tmp ; mkdir test ; cd test
```

Enter and run the following command-line program to create some files with content on the drive. We use the `dmesg` command simply to provide data for the files to contain. The contents don’t matter as much as the fact that each file has some content.

```
# for I in 0 1 2 3 4 5 6 7 8 9 ; do dmesg > file$I.txt ; done
```

Verify that there are now at least 10 files in **/tmp/test** with the names **file0.txt** through **file9.txt**.

```
# ll
total 1320
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file0.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file1.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file2.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file3.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file4.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file5.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file6.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file7.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file8.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file9.txt
```

We have generated data streams using the `dmesg` command, which was redirected to a series of files. Most of the core utilities use STDIO as their output stream, and those that generate data streams, rather than acting to transform the data stream in some way, can be used to create the data streams that we will use for our experiments. Data streams can be as short as one line or even a single character, and as long as needed.

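To see just how short a stream can be, here is a trivial illustration (not from the book; the file names are arbitrary) that measures a one-line stream and a single-character stream with `wc`:

```shell
# A one-line data stream generated by echo, measured by wc -c (byte count).
echo "Hello world" | wc -c > /tmp/line_len.txt
# A stream can be as short as a single character.
printf "x" | wc -c > /tmp/char_len.txt
cat /tmp/line_len.txt /tmp/char_len.txt
```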
### Exploring the hard drive

It is now time to do a little exploring. In this experiment, we will look at some of the filesystem structures.

Let’s start with something simple. You should be at least somewhat familiar with the `dd` command. Officially known as “disk dump,” many sysadmins call it “disk destroyer” for good reason. Many of us have inadvertently destroyed the contents of an entire hard drive or partition using the `dd` command. That is why we will hang out in the **/tmp/test** directory to perform some of these experiments.

Despite its reputation, `dd` can be quite useful in exploring various types of storage media, hard drives, and partitions. We will also use it as a tool to explore other aspects of Linux.

Log into a terminal session as root if you are not already. We first need to determine the device special file for your hard drive using the `lsblk` command.

```
[root@studentvm1 test]# lsblk -i
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 60G 0 disk
|-sda1 8:1 0 1G 0 part /boot
`-sda2 8:2 0 59G 0 part
|-fedora_studentvm1-pool00_tmeta 253:0 0 4M 0 lvm
| `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
| |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
| `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
|-fedora_studentvm1-pool00_tdata 253:1 0 2G 0 lvm
| `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
| |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
| `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
|-fedora_studentvm1-swap 253:4 0 10G 0 lvm [SWAP]
|-fedora_studentvm1-usr 253:5 0 15G 0 lvm /usr
|-fedora_studentvm1-home 253:7 0 2G 0 lvm /home
|-fedora_studentvm1-var 253:8 0 10G 0 lvm /var
`-fedora_studentvm1-tmp 253:9 0 5G 0 lvm /tmp
sr0 11:0 1 1024M 0 rom
```

We can see from this that there is only one hard drive on this host, that the device special file associated with it is **/dev/sda**, and that it has two partitions. The **/dev/sda1** partition is the boot partition, and the **/dev/sda2** partition contains a volume group on which the rest of the host’s logical volumes have been created.

As root in the terminal session, use the `dd` command to view the boot record of the hard drive, assuming it is assigned to the **/dev/sda** device. The `bs=` argument is not what you might think; it simply specifies the block size, and the `count=` argument specifies the number of blocks to dump to STDIO. The `if=` argument specifies the source of the data stream, in this case, the **/dev/sda** device. Notice that we are not looking at the first block of a partition; we are looking at the very first block of the hard drive.

```
[root@studentvm1 test]# dd if=/dev/sda bs=512 count=1
[512 bytes of mostly unprintable binary data, including the strings
"GRUB GeomHard DiskRead Error"]
1+0 records in
1+0 records out
512 bytes copied, 4.3856e-05 s, 11.7 MB/s
```

This prints the text of the boot record, which is the first block on the disk—any disk. In this case, there is information about the filesystem and, although it is unreadable because it is stored in binary format, the partition table. If this were a bootable device, stage 1 of GRUB or some other boot loader would be located in this sector. The last three lines contain data about the number of records and bytes processed.

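Since the raw boot record is mostly unreadable binary, one way to make sense of it is to pipe the `dd` output into a hex-dump transformer such as `od`. The sketch below builds a small scratch image so it is safe to run anywhere; on real hardware you would use `if=/dev/sda` instead (as root), at your own risk:

```shell
# Create a tiny scratch "disk image": four 512-byte blocks of random data.
dd if=/dev/urandom of=/tmp/disk.img bs=512 count=4 2>/dev/null
# Dump only the first block and render it as a readable hex listing.
dd if=/tmp/disk.img bs=512 count=1 2>/dev/null | od -A d -t x1z > /tmp/block0.txt
head -n 5 /tmp/block0.txt
```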
Starting with the beginning of **/dev/sda1**, let’s look at a few blocks of data at a time to find what we want. The command is similar to the previous one, except that we have specified a few more blocks of data to view. You may have to specify fewer blocks if your terminal is not large enough to display all of the data at one time, or you can pipe the data through the `less` utility and use that to page through the data—either way works. Remember, we are doing all of this as the root user because non-root users do not have the required permissions.

Enter the same command as you did in the previous experiment, but increase the block count to be displayed to 100, as shown below, in order to show more data.

```
[root@studentvm1 test]# dd if=/dev/sda1 bs=512 count=100
[100 blocks of mostly unprintable binary data]
100+0 records in
100+0 records out
51200 bytes (51 kB, 50 KiB) copied, 0.00117615 s, 43.5 MB/s
```

Now try this command. I won’t reproduce the entire data stream here because it would take up huge amounts of space. Use **Ctrl-C** to break out and stop the stream of data.

```
[root@studentvm1 test]# dd if=/dev/sda
```

This command produces a stream of data that is the complete content of the hard drive, **/dev/sda**, including the boot record, the partition table, and all of the partitions and their content. This data could be redirected to a file for use as a complete backup from which a bare-metal recovery can be performed. It could also be sent directly to another hard drive to clone the first. But do not perform this particular experiment.

```
[root@studentvm1 test]# dd if=/dev/sda of=/dev/sdx
```

You can see that the `dd` command can be very useful for exploring the structures of various types of filesystems, locating data on a defective storage device, and much more. It also produces a stream of data that we can view or modify using the transformer utilities.

The real point here is that `dd`, like so many Linux commands, produces a stream of data as its output. That data stream can be searched and manipulated in many ways using other tools. It can even be used for ghost-like backups or disk duplication.

### Randomness

It turns out that randomness is a desirable thing in computers—who knew? There are a number of reasons that sysadmins might want to generate a stream of random data. A stream of random data is sometimes useful to overwrite the contents of a complete partition, such as **/dev/sda1**, or even an entire hard drive, as in **/dev/sda**.

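A sketch of such an overwrite follows, with a scratch file standing in for the target so it is safe to run. Pointing `of=` at a device file such as **/dev/sda1** (as root) would irreversibly wipe that partition, which is exactly the point on a real decommissioning job:

```shell
# The target here is a throwaway file; on a real wipe it would be a device file.
TARGET=/tmp/wipe-demo.bin
# Stream 8 blocks of 4 KiB of random data over the target.
dd if=/dev/urandom of="$TARGET" bs=4096 count=8 2>/dev/null
wc -c "$TARGET"
```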
Perform this experiment as a non-root user. Enter this command to print an unending stream of random data to STDIO.

```
[student@studentvm1 ~]$ cat /dev/urandom
```

Use **Ctrl-C** to break out and stop the stream of data. You may need to use **Ctrl-C** multiple times.

Random data is also used as the input seed to programs that generate random passwords and random data and numbers for use in scientific and statistical calculations. I will cover randomness and other interesting data sources in a bit more detail in Chapter 24: Everything is a file.

### Pipe dreams

Pipes are critical to our ability to do the amazing things on the command line, so much so that I think it is important to recognize that they were invented by Douglas McIlroy during the early days of Unix (thanks, Doug!). The Princeton University website has a fragment of an [interview][3] with McIlroy in which he discusses the creation of the pipe and the beginnings of the Unix philosophy.

Notice the use of pipes in the simple command-line program shown next, which lists each logged-in user a single time, no matter how many logins they have active. Perform this experiment as the student user. Enter the command shown below:

```
[student@studentvm1 ~]$ w | tail -n +3 | awk '{print $1}' | sort | uniq
root
student
[student@studentvm1 ~]$
```

The results from this command produce two lines of data that show that the users root and student are both logged in. It does not show how many times each user is logged in. Your results will almost certainly differ from mine.

Pipes—represented by the vertical bar (`|`)—are the syntactical glue, the operator, that connects these command-line utilities together. Pipes allow the Standard Output of one command to be “piped,” i.e., streamed, to the Standard Input of the next command.

The `|&` operator can be used to pipe STDERR along with STDOUT to STDIN of the next command. This is not always desirable, but it does offer flexibility in the ability to record the STDERR data stream for the purposes of problem determination.

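A quick, safe way to see the merged stream in action (the path name here is invented): `cmd |& next` is bash shorthand for `cmd 2>&1 | next`, and the portable form is used below so that the error line from `ls` is what `wc` counts:

```shell
# The error message from ls travels down the pipe along with the (empty) STDOUT.
ls /no-such-path-example 2>&1 | wc -l > /tmp/count.txt
cat /tmp/count.txt
```

Without the `2>&1`, the error would bypass the pipe and print straight to the terminal, and the count would be zero.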
A string of programs connected with pipes is called a pipeline, and the programs that use STDIO are referred to officially as filters, but I prefer the term “transformers.”

Think about how this program would have to work if we could not pipe the data stream from one command to the next. The first command would perform its task on the data, and then the output from that command would need to be saved in a file. The next command would have to read the stream of data from the intermediate file and perform its modification of the data stream, sending its own output to a new, temporary data file. The third command would have to take its data from the second temporary data file and perform its own manipulation of the data stream and then store the resulting data stream in yet another temporary file. At each step, the data file names would have to be transferred from one command to the next in some way.

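To make the contrast concrete, here is roughly what that no-pipes version of the user-listing job would look like. This is a sketch, with `printf` standing in for the `w` output so it runs anywhere:

```shell
# Simulated 'w' output: user name is the first field of each line.
printf 'root pts/0\nstudent pts/1\nroot pts/2\n' > /tmp/who.txt
# Every stage needs its own intermediate file...
awk '{print $1}' /tmp/who.txt > /tmp/names.txt
sort /tmp/names.txt > /tmp/sorted.txt
uniq /tmp/sorted.txt > /tmp/users.txt
cat /tmp/users.txt   # same result as the one-line pipeline
```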
I cannot even stand to think about that because it is so complex. Remember: Simplicity rocks!

### Building pipelines

When I am doing something new, solving a new problem, I usually do not just type in a complete Bash command pipeline from scratch off the top of my head. I usually start with just one or two commands in the pipeline and build from there by adding more commands to further process the data stream. This allows me to view the state of the data stream after each of the commands in the pipeline and make corrections as they are needed.

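For instance, a deduplication pipeline might grow one stage at a time, checking the stream after each addition. The sample data here is invented so the sketch runs anywhere:

```shell
# Invented sample data: one login per line, user name first.
printf 'carol 2\nbob 1\nalice 3\nbob 4\n' > /tmp/logins.txt
awk '{print $1}' /tmp/logins.txt                                    # stage 1: names only
awk '{print $1}' /tmp/logins.txt | sort                             # stage 2: sorted
awk '{print $1}' /tmp/logins.txt | sort | uniq > /tmp/unique.txt    # stage 3: deduplicated
cat /tmp/unique.txt
```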
It is possible to build up very complex pipelines that can transform the data stream using many different utilities that work with STDIO.

### Redirection

Redirection is the capability to redirect the STDOUT data stream of a program to a file instead of to the default target of the display. The “greater than” character (`>`), aka “gt”, is the syntactical symbol for redirection of STDOUT.

Redirecting the STDOUT of a command can be used to create a file containing the results from that command.

```
[student@studentvm1 ~]$ df -h > diskusage.txt
```

There is no output to the terminal from this command unless there is an error. This is because the STDOUT data stream is redirected to the file while STDERR is still directed to the display. You can view the contents of the file you just created using this next command:

```
[student@studentvm1 test]# cat diskusage.txt
Filesystem Size Used Avail Use% Mounted on
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 1.2M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/fedora_studentvm1-root 2.0G 50M 1.8G 3% /
/dev/mapper/fedora_studentvm1-usr 15G 4.5G 9.5G 33% /usr
/dev/mapper/fedora_studentvm1-var 9.8G 1.1G 8.2G 12% /var
/dev/mapper/fedora_studentvm1-tmp 4.9G 21M 4.6G 1% /tmp
/dev/mapper/fedora_studentvm1-home 2.0G 7.2M 1.8G 1% /home
/dev/sda1 976M 221M 689M 25% /boot
tmpfs 395M 0 395M 0% /run/user/0
tmpfs 395M 12K 395M 1% /run/user/1000
```

When using the `>` symbol to redirect the data stream, the specified file is created if it does not already exist. If it does exist, the contents are overwritten by the data stream from the command. You can use double greater-than symbols, `>>`, to append the new data stream to any existing content in the file.

```
[student@studentvm1 ~]$ df -h >> diskusage.txt
```

You can use `cat` and/or `less` to view the **diskusage.txt** file in order to verify that the new data was appended to the end of the file.

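The difference between truncating and appending is easy to verify with a throwaway file (the file name is arbitrary):

```shell
printf 'one\n' > /tmp/append-demo.txt    # '>' creates the file
printf 'two\n' > /tmp/append-demo.txt    # '>' again: previous contents overwritten
printf 'three\n' >> /tmp/append-demo.txt # '>>' appends: file now has two lines
cat /tmp/append-demo.txt
```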
The `<` (less than) symbol redirects data to the STDIN of the program. You might want to use this method to input data from a file to the STDIN of a command that does not take a filename as an argument but that does use STDIN. Although input sources can be redirected to STDIN, such as a file used as input to `grep`, it is generally not necessary, because `grep` also takes a filename as an argument to specify the input source. Most other commands also take a filename as an argument for their input source.

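The `tr` utility is a good example of a command that reads only STDIN, so `<` is the natural way to feed it a file. A minimal sketch (file names arbitrary):

```shell
printf 'stdin demo\n' > /tmp/upper-in.txt
# tr takes no filename argument; '<' feeds it the file on STDIN.
tr 'a-z' 'A-Z' < /tmp/upper-in.txt > /tmp/upper-out.txt
cat /tmp/upper-out.txt
```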
### Just grep’ing around

The `grep` command is used to select lines that match a specified pattern from a stream of data. `grep` is one of the most commonly used transformer utilities and can be used in some very creative and interesting ways. The `grep` command is one of the few that can correctly be called a filter because it does filter out all the lines of the data stream that you do not want; it leaves only the lines that you do want in the remaining data stream.

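In its simplest form, `grep` passes through only the lines that match a pattern. A tiny, self-contained illustration (invented data, arbitrary file names):

```shell
printf 'alpha\nbeta\nalphabet\ngamma\n' > /tmp/words.txt
# Only lines containing 'alpha' survive the filter.
grep alpha /tmp/words.txt > /tmp/matches.txt
cat /tmp/matches.txt
```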
If the PWD is not the **/tmp/test** directory, make it so. Let’s first create a stream of random data to store in a file. In this case, we want somewhat less random data that is limited to printable characters. A good password generator program can do this. The following program (you may have to install `pwgen` if it is not already installed) creates a file that contains 50,000 passwords that are 80 characters long, using every printable character. Try it without redirecting to the **random.txt** file first to see what that looks like, and then do it once redirecting the output data stream to the file.

```
$ pwgen -sy 80 50000 > random.txt
```

Considering that there are so many passwords, it is very likely that some character strings in them are the same. First, `cat` the **random.txt** file, then use the `grep` command to locate some short, randomly selected strings from the last ten passwords on the screen. I saw the word “see” in one of those ten passwords, so my command looked like this: `grep see random.txt`, and you can try that, but you should also pick some strings of your own to check. Short strings of two to four characters work best.

```
$ grep see random.txt
R=p)'s/~0}wr~2(OqaL.S7DNyxlmO69`"12u]h@rp[D2%3}1b87+>Vk,;4a0hX]d7see;1%9|wMp6Yl.
bSM_mt_hPy|YZ1<TY/Hu5{g#mQ<u_(@8B5Vt?w%i-&C>NU@[;zV2-see)>(BSK~n5mmb9~h)yx{a&$_e
cjR1QWZwEgl48[3i-(^x9D=v)seeYT2R#M:>wDh?Tn$]HZU7}j!7bIiIr^cI.DI)W0D"'vZU@.Kxd1E1
z=tXcjVv^G\nW`,y=bED]d|7%s6iYT^a^Bvsee:v\UmWT02|P|nq%A*;+Ng[$S%*s)-ls"dUfo|0P5+n
```

### Summary

It is the use of pipes and redirection that allows many of the amazing and powerful tasks that can be performed with data streams on the Linux command line. It is pipes that transport STDIO data streams from one program or file to another. The ability to pipe streams of data through one or more transformer programs supports powerful and flexible manipulation of data in those streams.

Each of the programs in the pipelines demonstrated in the experiments is small, and each does one thing well. They are also transformers; that is, they take Standard Input, process it in some way, and then send the result to Standard Output. Implementation of these programs as transformers to send processed data streams from their own Standard Output to the Standard Input of the other programs is complementary to, and necessary for, the implementation of pipes as a Linux tool.

STDIO is nothing more than streams of data. This data can be almost anything, from the output of a command to list the files in a directory, to an unending stream of data from a special device like **/dev/urandom**, or even a stream that contains all of the raw data from a hard drive or a partition.

Any device on a Linux computer can be treated like a data stream. You can use ordinary tools like `dd` and `cat` to dump data from a device into a STDIO data stream that can be processed using other ordinary Linux tools.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/linux-data-streams

作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://www.apress.com/us/book/9781484237298
[2]: https://www.gnu.org/software/coreutils/coreutils.html
[3]: https://www.princeton.edu/~hos/mike/transcripts/mcilroy.htm

@ -0,0 +1,92 @@
DevOps 应聘者应该准备回答的 20 个问题
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hire-job-career.png?itok=SrZo0QJ3)

聘请一个不合适的人代价是很高的。据 Link Humans 首席执行官 Jörgen Sundberg 统计,招聘并雇佣一名新员工将花费公司多达 24 万美元。而当你进行了一次不合适的招聘:

* 你失去了他们所知道的。
* 你失去了他们认识的人。
* 你的团队可能会进入一个组织发展的震荡阶段。
* 你的公司将面临组织破裂的风险。

当你失去一名员工的时候,你就像丢失了公司图谱中的一块。同样值得一提的是处于另一端的痛苦:被安排到错误工作岗位的员工会感受到很大的压力和不满,甚至出现健康问题。

另外一方面,当你招聘到合适的人时,新的员工将会:

* 丰富公司现有的文化,使你的组织成为一个更好的工作场所。研究表明,积极的工作文化有助于推动更长久的财务业绩,而且如果你在一个愉快的环境中工作,你更有可能在生活中做得更好。
* 热爱和你的组织在一起工作。当人们热爱他们所做的事情时,他们会趋向于做得更好。

在 DevOps 和敏捷团队中,招聘适合并能加强现有文化的人是必不可少的。也就是说,要雇佣能够鼓励积极合作的人,以便来自不同背景、有着不同目标和工作方式的团队成员能够在一起有效地工作。你新雇佣的员工应该能够帮助团队充分发挥合作的价值,同时提高员工的满意度,并平衡相互冲突的组织目标。他或者她应该能够通过明智地选择工具和工作流来促进你的组织。文化就是一切。

继我们 2017 年 11 月发布的文章《[DevOps 的招聘经理应该准备回答的 20 个问题][4]》之后,本文将会重点关注如何招聘到最合适的人。

### 为什么招聘走错了方向

很多公司现在使用的典型的雇佣策略,是建立在人才过剩的假设之上的:

* 发布职位公告。
* 关注和所需技能相符的应聘者。
* 尽可能多地寻找候选者。
* 通过面试淘汰较弱的人选。
* 通过正式的面试淘汰更多较弱的人选。
* 评估、投票、选择。
* 商定薪酬。

![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/hiring_graphic.png?itok=1udGbkhB)

职位公告栏发明于经济大萧条时期,当时有成千上万的失业者,人才过剩。而在今天的求职市场上已经不存在人才过剩了,然而我们仍然在使用基于人才过剩的招聘策略。

![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/732px-unemployed_men_queued_outside_a_depression_soup_kitchen_opened_in_chicago_by_al_capone_02-1931_-_nara_-_541927.jpg?itok=HSs4NjCN)

### 雇佣最合适的人员:运用文化和情感

人才过剩的雇佣策略背后的思想,是先设计好工作岗位,然后将人安排进去。

相反,应该做相反的事情:寻找能够积极融入你的企业文化的人才,然后为他们寻找他们热爱的、最合适的岗位。要做到这一点,你必须能够围绕他们的热情为他们创造工作岗位。

**谁正在寻找一份工作?** 根据 2016 年一份对美国 50,000 名开发者的调查,[85.7% 的受访对象][5]要么对新的机会不感兴趣,要么没有寻找新工作的积极性。而在寻找工作的人群中,有将近 [28.3% 的求职者][5]来自朋友的推荐。如果你只在那些正在找工作的人中寻找人才,你将会错过顶尖的人才。

**运用团队的力量去发现和寻找有潜力的雇员。** 例如,假设戴安娜是你团队中的一名开发者,她已经从事编程很多年,期间结识了很多热爱自己工作的人。难道你不认为她所推荐的潜在员工,在技能、知识和智慧上要比 HR 找来的更优秀吗?在请戴安娜推荐她的同伴之前,先告知她即将开展的任务,向她说明你要雇佣有探索精神的团队成员,并描述将来需要的知识领域。

**雇员想要什么?** 一份对比千禧一代与婴儿潮一代雇员的综合研究显示,两代人中都有 20% 的人想要的东西是相同的:

1. 对组织产生积极的影响
2. 帮助解决社会或者环境上的挑战
3. 和一群有动力的人一起工作

### 面试的挑战

面试应该是招聘者和应聘者双方为了寻找最合适的岗位而进行的一次对话。将面试聚焦在企业文化和情感上的两个问题:这个应聘者会丰富你的企业文化并且热爱和你一起工作吗?你能够帮助他们在工作中取得成功吗?

**对于招聘经理来说:** 每一次面试都是一次学习的机会,让你了解如何使自己的组织对未来的团队成员更有吸引力;而且每次积极的面试都可能是你发现人才的机会,即使你最终不会雇佣对方。每个人都会记得一次积极有效的面试经历。即使没有被雇佣,他们也会和朋友谈论这次经历,你就会得到更多被推荐的人选。这有很大的好处:即使你无法吸引到这个人才,你也可以从中吸取经验并加以改善。

**对于面试者来说:** 每次面试都是你释放激情的机会。

### 助你释放潜在雇员激情的 20 个问题

1. 你热爱什么?
2. 对于“今天早晨我迫不及待地要去工作”这句话,你怎么看?
3. 你曾经最快乐的是什么时候?
4. 你最典型的一次解决问题的经历是什么?你是如何解决的?
5. 你如何看待结对学习?
6. 你到达办公室和离开办公室时,心里最先想到的是什么?
7. 如果你有机会改变你之前或者现在工作中的一件事,那将会是什么?
8. 在这里工作时,你最希望学习什么?
9. 你的梦想是什么?你要如何实现它?
10. 在学习实现你的追求时,你想要或者需要什么?
11. 你的价值观是什么?
12. 你是如何坚守自己的价值观的?
13. 平衡在你的生活中意味着什么?
14. 你最引以为傲的工作交流方式是什么?为什么?
15. 你最喜欢营造什么样的环境?
16. 你喜欢别人怎样对待你?
17. 你信任我们什么?你会如何验证?
18. 告诉我们你在最近的一个项目中学习到了什么。
19. 我们还能了解你的哪些其他方面?
20. 如果你正在面试我,你将会问我什么问题?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/questions-devops-employees-should-answer

作者:[Catherine Louis][a]
译者:[FelixYFZ](https://github.com/FelixYFZ)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/catherinelouis
[1]:https://www.shrm.org/resourcesandtools/hr-topics/employee-relations/pages/cost-of-bad-hires.aspx
[2]:https://en.wikipedia.org/wiki/Tuckman%27s_stages_of_group_development
[3]:http://www.forbes.com/sites/johnkotter/2011/02/10/does-corporate-culture-drive-financial-performance/
[4]:https://opensource.com/article/17/11/inclusive-workforce-takes-work
[5]:https://insights.stackoverflow.com/survey/2016#work-job-discovery
[6]:https://research.hackerrank.com/developer-skills/2018/
[7]:http://www-935.ibm.com/services/us/gbs/thoughtleadership/millennialworkplace/
[8]:https://en.wikipedia.org/wiki/Emotional_intelligence

@ -0,0 +1,67 @@
6 个用于写书的开源工具
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-austen-writing-code.png?itok=XPxRMtQ4)

我在 1993 年首次使用自由和开源软件并为其做出贡献,从那时起我一直是一名开源软件开发人员和传播者。尽管我最为人所知的项目是 [FreeDOS 项目][1](一个 DOS 操作系统的开源实现),但我编写或者贡献过数十个开源软件项目。

我最近写了一本关于 FreeDOS 的书。[_使用 FreeDOS_][2] 是我对 FreeDOS 诞生 24 周年的庆祝。它是一部文集,内容包括安装和使用 FreeDOS、介绍我最喜欢的 DOS 程序,以及 DOS 命令行和 DOS 批处理编程的快速参考指南。在一位出色的专业编辑的帮助下,我在过去的几个月里一直在编写这本书。

_使用 FreeDOS_ 以知识共享署名(cc-by)国际公共许可证发布。你可以从 [FreeDOS 电子书][2]网站免费下载 EPUB 和 PDF 版本。(我也计划为那些喜欢纸质书的人提供印刷版本。)

这本书几乎完全是用开源软件制作的。我想分享一下我对用来创建、编辑和生成 _使用 FreeDOS_ 的这些工具的看法。

### Google 文档

[Google 文档][3]是我使用的工具中唯一不是开源软件的。我将我的第一份草稿上传到 Google 文档,这样我就能与编辑协作。我确信有开源的协作工具,但 Google 文档能够让两个人同时编辑同一个文档、发表评论、提出修改建议以及跟踪更改,更不用说它支持段落样式、能够下载完成的文档,这使其成为编辑过程中很有价值的一部分。

### LibreOffice

我开始使用的是 [LibreOffice][4] 6.0,但我最终是用 LibreOffice 6.1 完成了这本书的。我喜欢 LibreOffice 对样式的丰富支持。段落样式可以轻松地为标题、页眉、正文、示例代码和其他文本应用样式。字符样式允许我修改段落中文本的外观,例如内联的示例代码,或者用不同的样式表示文件名。图形样式让我可以将某些样式应用于截图和其他图像。页面样式则允许我轻松修改页面的布局和外观。

### GIMP

我的书中包括很多 DOS 程序截图、网站截图和 FreeDOS 的 logo。我用 [GIMP][5] 修改了这本书中的图像。通常只是裁剪或调整图像大小,但在我准备本书的印刷版时,我使用 GIMP 创建了一些更适合打印布局的图像。

### Inkscape

大多数 FreeDOS 的 logo 和小鱼吉祥物都是 SVG 格式的,我使用 [Inkscape][6] 来调整它们。在准备电子书的 PDF 版本时,我想在页面顶部放置一个简单的蓝色横幅,并在角落里放上 FreeDOS 的 logo。经过试验后,我发现在 Inkscape 中直接创建一个我想要的横幅 SVG 图案更容易,然后我将其粘贴到了页眉中。

### ImageMagick

虽然使用 GIMP 来完成这些工作也很好,但有时在一组图像上运行 [ImageMagick][7] 命令会更快,例如转换为 PNG 格式或调整图像大小。

### Sigil

LibreOffice 可以直接导出为 EPUB 格式,但它不是一个好的转换器。我没有尝试使用 LibreOffice 6.1 创建 EPUB,但 LibreOffice 6.0 没有包含我的图像,而且还以奇怪的方式添加了样式。我使用 [Sigil][8] 来调整 EPUB,使一切看起来正常。Sigil 甚至还有预览功能,因此你可以看到 EPUB 的样子。

### QEMU

因为本书是关于安装和运行 FreeDOS 的,所以我需要实际运行 FreeDOS。你可以在任何 PC 模拟器中启动 FreeDOS,包括 VirtualBox、QEMU、GNOME Boxes、PCem 和 Bochs,但我喜欢 [QEMU][9] 的简单。QEMU 控制台允许你将屏幕以 PPM 格式转储,这非常适合抓取截图放到书中。

当然,我不得不提到在 [Linux][11] 上运行的 [GNOME][10]。我使用的是 Linux 的 [Fedora][12] 发行版。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/writing-book-open-source-tools

作者:[Jim Hall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[1]: http://www.freedos.org/
[2]: http://www.freedos.org/ebook/
[3]: https://www.google.com/docs/about/
[4]: https://www.libreoffice.org/
[5]: https://www.gimp.org/
[6]: https://inkscape.org/
[7]: https://www.imagemagick.org/
[8]: https://sigil-ebook.com/
[9]: https://www.qemu.org/
[10]: https://www.gnome.org/
[11]: https://www.kernel.org/
[12]: https://getfedora.org/

@ -1,307 +0,0 @@

重启和关闭 Linux 系统的 6 个终端命令
======

在 Linux 管理员的日常工作当中,有很多需要执行的任务,系统的重启和关闭就包含在其中。

对于 Linux 管理员来说,重启和关闭系统是其诸多风险操作中的一例。有时候,由于某些原因,这些操作可能无法挽回,他们需要更多的时间来排查问题。

在 Linux 命令行模式下我们可以执行这些任务。很多时候,由于熟悉命令行,Linux 管理员更倾向于在命令行下完成这些任务。

重启和关闭系统的 Linux 命令并不多,用户需要根据需要,选择合适的命令来完成任务。

以下所有命令都有其自身特点,Linux 管理员可以按需选用。

**建议阅读:**

**(#)** [查看系统/服务器正常运行时间的 11 个方法][1]

**(#)** [Tuptime:一款为 Linux 系统保存历史记录、统计运行时间的工具][2]

系统重启和关闭之始,会通知所有已登录的用户和已注册的进程。当然,如果设置了时间参数,系统在此期间不会允许新的用户登入。

执行此类操作之前,我建议您再三复查,因为您只能得到很少的提示来确保这一切顺利。

下面陈列了一些步骤。

  * 确保您拥有可以排查故障的控制台访问手段,以防之后出现问题:虚拟机可以使用 VMWare 控制台,物理服务器可以使用 IPMI、iLO 或 iDRAC。
  * 您需要按照公司的变更流程提出申请,并得到执行这次操作的许可。
  * 为安全着想,备份重要的配置文件,并保存到其他服务器上。
  * 验证日志文件(提前检查)。
  * 与相关团队沟通,比如数据库管理团队、应用团队等。
  * 通知数据库和应用服务人员关闭服务,并得到确认。
  * 使用适当的命令复查操作,验证工作。
  * 最后,重启系统。
  * 验证日志文件。如果一切顺利,执行下一步操作;如果发现任何问题,对症排查。
  * 无论是回退版本还是运行程序,都要通知相关团队。
  * 对操作做适当守候,并将一切正常的结果反馈给团队。

使用下列命令执行这项任务。

  * **`shutdown 命令:`** shutdown 命令用于中止系统、重启或切断电源。
  * **`halt 命令:`** halt 命令用于中止系统、重启或切断电源。
  * **`poweroff 命令:`** poweroff 命令用于中止系统、重启或切断电源。
  * **`reboot 命令:`** reboot 命令用于中止系统、重启或切断电源。
  * **`init 命令:`** init(initialization 的简称)是系统启动的第一个进程。
  * **`systemctl 命令:`** systemd 是 Linux 系统和服务的管理程序。

### 方案 - 1:如何使用 shutdown 命令关闭和重启 Linux 系统

shutdown 命令用于关闭或重启本地和远程的 Linux 设备。它提供了多个选项,可以高效地完成作业。如果使用了时间参数,系统关闭的 5 分钟之前,会创建 /run/nologin 文件,以确保后续的登录会被拒绝。

通用语法如下:

```
# shutdown [OPTION] [TIME] [MESSAGE]
```

运行下面的命令来立即关闭 Linux 设备。它会立刻杀死所有进程,并关闭系统。

```
# shutdown -h now
```

  * **`-h:`** 如果不与 --halt 选项一起使用,就等价于 --poweroff 选项。

另外我们可以使用带有 `halt` 选项的 `shutdown` 命令来立即中止设备。

```
# shutdown --halt now
或者
# shutdown -H now
```

  * **`-H, --halt:`** 中止设备运行。

另外我们可以使用带有 `poweroff` 选项的 `shutdown` 命令来立即关闭设备。

```
# shutdown --poweroff now
或者
# shutdown -P now
```

  * **`-P, --poweroff:`** 切断电源(默认)。

运行以下命令立即关闭 Linux 设备。它将会立即杀死所有的进程并关闭系统。

```
# shutdown -h now
```

  * **`-h:`** 如果不与 --halt 选项一起使用,就等价于 --poweroff 选项。

如果您运行下面这个不带时间参数的命令,它将会在一分钟后执行给出的操作。

```
# shutdown -h
Shutdown scheduled for Mon 2018-10-08 06:42:31 EDT, use 'shutdown -c' to cancel.

[email protected]#
Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT):

The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT!
```

其他的已登录用户都能在终端中看到如下的广播消息:

```
[[email protected] ~]$
Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT):

The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT!
```

对于使用了 halt 选项的情况:

```
# shutdown -H
Shutdown scheduled for Mon 2018-10-08 06:37:53 EDT, use 'shutdown -c' to cancel.

[email protected]#
Broadcast message from [email protected] (Mon 2018-10-08 06:36:53 EDT):

The system is going down for system halt at Mon 2018-10-08 06:37:53 EDT!
```

对于使用了 poweroff 选项的情况:

```
# shutdown -P
Shutdown scheduled for Mon 2018-10-08 06:40:07 EDT, use 'shutdown -c' to cancel.

[email protected]#
Broadcast message from [email protected] (Mon 2018-10-08 06:39:07 EDT):

The system is going down for power-off at Mon 2018-10-08 06:40:07 EDT!
```

可以在您的终端上执行 `shutdown -c` 来取消操作。

```
# shutdown -c

Broadcast message from [email protected] (Mon 2018-10-08 06:39:09 EDT):

The system shutdown has been cancelled at Mon 2018-10-08 06:40:09 EDT!
```

其他的已登录用户都能在终端中看到如下的广播消息:

```
[[email protected] ~]$
Broadcast message from [email protected] (Mon 2018-10-08 06:41:35 EDT):

The system shutdown has been cancelled at Mon 2018-10-08 06:42:35 EDT!
```

如果你想在 `N` 分钟之后执行关闭或重启操作,可以添加时间参数。这里,您还可以为所有已登录用户添加自定义的广播消息。例如,我们将在五分钟后重启设备。

```
# shutdown -r +5 "To activate the latest Kernel"
Shutdown scheduled for Mon 2018-10-08 07:13:16 EDT, use 'shutdown -c' to cancel.

[[email protected] ~]#
Broadcast message from [email protected] (Mon 2018-10-08 07:08:16 EDT):

To activate the latest Kernel
The system is going down for reboot at Mon 2018-10-08 07:13:16 EDT!
```

运行下面的命令立即重启 Linux 设备。它会立即杀死所有进程并且重新启动系统。

```
# shutdown -r now
```

  * **`-r, --reboot:`** 重启设备。

### 方案 - 2:如何通过 reboot 命令关闭和重启 Linux 系统

reboot 命令用于关闭和重启本地或远程设备。reboot 命令拥有两个实用的选项。

它能够优雅地关闭和重启设备(就好像在系统菜单中点击重启选项一样简单)。

执行不带任何参数的 `reboot` 命令来重启 Linux 设备:

```
# reboot
```

执行带 `-p` 参数的 `reboot` 命令来关闭 Linux 设备或切断电源:

```
# reboot -p
```

  * **`-p, --poweroff:`** 调用 halt 或 poweroff 命令,切断设备电源。

执行带 `-f` 参数的 `reboot` 命令来强制重启 Linux 设备(这类似于按下主机上的电源键):

```
# reboot -f
```

  * **`-f, --force:`** 立刻强制中止、切断电源或重启。

### 方案 - 3:如何通过 init 命令关闭和重启 Linux 系统

init(initialization 的简写)是系统启动的第一个进程。

它将会检查 /etc/inittab 文件并决定 Linux 的运行级别。同时,它也允许特权用户在 Linux 设备上执行关机或重启操作。这里存在从 0 到 6 的七个运行级别。

**建议阅读:**
**(#)** [如何检查 Linux 上所有正在运行的服务][3]

执行以下 init 命令关闭系统:

```
# init 0
```

  * **`0:`** 中止 – 关闭系统。

运行下面的 init 命令重启设备:

```
# init 6
```

  * **`6:`** 重启 – 重启设备。

### 方案 - 4:如何通过 halt 命令关闭和重启 Linux 系统

halt 命令用于切断电源,或关闭远程 Linux 设备或本地主机。

它会中止所有进程并关闭 CPU:

```
# halt
```

### 方案 - 5:如何通过 poweroff 命令关闭和重启 Linux 系统

poweroff 命令用于切断电源,或关闭远程 Linux 设备或本地主机。poweroff 很像 halt,但是它还可以关闭设备自身的部件(指示灯以及系统上的其他部件)。它会向 PSU 发送 ACPI 指令,切断电源。

```
# poweroff
```

### 方案 - 6:如何通过 systemctl 命令关闭和重启 Linux 系统

systemd 是一款适用于所有主流 Linux 发行版的全新 init 系统和系统管理器,可以取代传统的 SysV init 系统。

systemd 兼容 SysV 和 LSB 初始化脚本。它能够替代 sysvinit 系统。systemd 是内核启动的第一个进程,并持有序号为 1 的进程 PID。

**建议阅读:**
**(#)** [chkservice – 一款终端下的 systemd 单元管理工具][4]

它是一切进程的父进程,Fedora 15 是第一个用 systemd 取代 upstart 的发行版。

systemctl 是命令行下管理 systemd 守护进程和服务(如 start、restart、stop、enable、disable、reload 和 status)的主要工具。

systemd 使用 .service 文件而不是 bash 脚本(SysV init 使用的方式)。systemd 将所有守护进程归入它们自己的 Linux cgroups 中,您可以浏览 /cgroup/systemd 文件查看系统层次结构。

```
# systemctl halt
# systemctl poweroff
# systemctl reboot
# systemctl suspend
# systemctl hibernate
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/6-commands-to-shutdown-halt-poweroff-reboot-the-linux-system/

作者:[Prakash Subramanian][a]
选题:[lujun9972][b]
译者:[cyleft](https://github.com/cyleft)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/prakash/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/
[2]: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/
[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/
[4]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/
590
translated/tech/20181016 Lab 4- Preemptive Multitasking.md
Normal file
@ -0,0 +1,590 @@

实验 4:抢占式多任务处理
======

### 实验 4:抢占式多任务处理

#### 简介

在本实验中,你将在多个同时活动的用户模式环境之间实现抢占式多任务处理。

在 Part A 中,你将在 JOS 中添加对多处理器的支持,实现循环调度,并且添加基本的环境管理方面的系统调用(创建和销毁环境,以及分配/映射内存的系统调用)。

在 Part B 中,你将要实现一个类 Unix 的 `fork()`,它将允许一个用户模式环境去创建一个它自已的副本。

最后,在 Part C 中,你将在 JOS 中添加对进程间通讯(IPC)的支持,以允许不同的用户模式环境之间进行显式通讯和同步。你也将要添加对硬件时钟中断和抢占的支持。

##### 预备知识

使用 git 提交你的实验 3 的源代码,并获取课程仓库的最新版本,然后创建一个名为 `lab4` 的本地分支,让它跟踪我们的远程分支 `origin/lab4`:

```markdown
athena% cd ~/6.828/lab
athena% add git
athena% git pull
Already up-to-date.
athena% git checkout -b lab4 origin/lab4
Branch lab4 set up to track remote branch refs/remotes/origin/lab4.
Switched to a new branch "lab4"
athena% git merge lab3
Merge made by recursive.
...
athena%
```

实验 4 包含了一些新的源文件,在开始之前你应该去浏览一遍:

```markdown
kern/cpu.h Kernel-private definitions for multiprocessor support
kern/mpconfig.c Code to read the multiprocessor configuration
kern/lapic.c Kernel code driving the local APIC unit in each processor
kern/mpentry.S Assembly-language entry code for non-boot CPUs
kern/spinlock.h Kernel-private definitions for spin locks, including the big kernel lock
kern/spinlock.c Kernel code implementing spin locks
kern/sched.c Code skeleton of the scheduler that you are about to implement
```

##### 实验要求

本实验分为三部分:Part A、Part B 和 Part C。我们计划为每个部分分配一周的时间。

和以前一样,你需要完成实验中出现的所有常规练习和至少一个挑战问题。(不是每个部分做一个挑战问题,是整个实验做一个挑战问题即可。)另外,你还要写出你所实现的挑战问题的详细描述。如果你实现了多个挑战问题,你只需写出其中一个即可,虽然我们的课程欢迎你完成越多的挑战越好。在交卷之前,请将你的挑战问题的答案写在一个名为 `answers-lab4.txt` 的文件中,并把它放在你的 `lab` 目录的根下。

#### Part A:多处理器支持和协作式多任务处理

在本实验的第一部分,你将扩展你的 JOS 内核,使它能够在一个多处理器的系统上运行,并且要在 JOS 内核中实现一些新的系统调用,允许用户级环境创建新的环境。你还要实现协作式的循环调度,在当前的环境自愿放弃 CPU(或退出)时,允许内核将一个环境切换到另一个环境。稍后在 Part C 中,你将要实现抢占式调度,它允许内核在环境占有 CPU 一段时间后,从这个环境上重新取回对 CPU 的控制权,哪怕是在那个环境不配合的情况下。

##### 多处理器支持

我们将让 JOS 支持“对称多处理器”(SMP),这是一种多处理器模型,其中所有 CPU 都有平等访问系统资源(如内存和 I/O 总线)的权利。虽然在 SMP 中所有 CPU 的功能都相同,但是在引导过程中,它们被分成两种类型:引导处理器(BSP)负责初始化系统和引导操作系统;而在操作系统启动并正常运行后,应用处理器(AP)才被 BSP 激活。哪个处理器是 BSP 由硬件和 BIOS 决定。到目前为止,你已有的所有 JOS 代码都是运行在 BSP 上的。

在一个 SMP 系统上,每个 CPU 都伴有一个本地 APIC(LAPIC)单元。LAPIC 单元负责传递整个系统中的中断,它还为它所连接的 CPU 提供一个唯一的标识符。在本实验中,我们将使用 LAPIC 单元(代码在 `kern/lapic.c` 中)的下列基本功能:

  * 读取 LAPIC 标识符(APIC ID),以知道我们的代码当前正运行在哪个 CPU 上(查看 `cpunum()`)。
  * 从 BSP 向 AP 发送处理器间中断(IPI)`STARTUP`,以启动其它 CPU(查看 `lapic_startap()`)。
  * 在 Part C 中,我们设置 LAPIC 的内置定时器去触发时钟中断,以支持抢占式多任务处理(查看 `apic_init()`)。

一个处理器使用内存映射 I/O(MMIO)来访问它的 LAPIC。在 MMIO 中,一部分物理内存被硬连线到一些 I/O 设备的寄存器上,因此,通常用来访问内存的 `load/store` 指令也可以用于访问设备寄存器。你已经见过物理地址 `0xA0000` 处的一个 IO 空洞(我们用它来写入 VGA 显示缓冲区)。LAPIC 所在的空洞从物理地址 `0xFE000000` 处(4GB 减去 32MB 处)开始,这个地址太高了,我们无法通过 KERNBASE 处的直接映射来访问它。JOS 的虚拟内存映射在 `MMIOBASE` 处留下了一个 4MB 的空隙,以便于我们有一个地方去映射像这样的设备。由于在后面的实验中,我们将介绍更多的 MMIO 区域,你将要写一个简单的函数,从这个区域中分配空间,并将设备的内存映射到那里。

```markdown
练习 1、实现 `kern/pmap.c` 中的 `mmio_map_region`。去看一下它是如何使用的,从 `kern/lapic.c` 中的 `lapic_init` 开始看起。在 `mmio_map_region` 的测试运行之前,你还要做下一个练习。
```

###### 引导应用程序处理器

在引导应用处理器之前,引导处理器应该会首先收集关于多处理器系统的信息,比如 CPU 总数、它们的 APIC ID 以及 LAPIC 单元的 MMIO 地址。`kern/mpconfig.c` 中的 `mp_init()` 函数,通过读取内存中位于 BIOS 区域里的 MP 配置表来获得这些信息。

`boot_aps()` 函数(在 `kern/init.c` 中)驱动 AP 的引导过程。AP 们从实模式中开始,与 `boot/boot.S` 中引导加载程序的启动过程非常相似。因此,`boot_aps()` 将 AP 入口代码(`kern/mpentry.S`)复制到实模式下可寻址的一个内存地址上。与引导加载程序不同的是,我们可以控制 AP 将从哪里开始运行代码;我们把入口代码复制到 `0x7000`(`MPENTRY_PADDR`)处,但实际上复制到任何低于 640KB 的、未使用的、页对齐的物理地址上都是可以运行的。

在那之后,`boot_aps()` 通过将 IPI `STARTUP` 发送到相应 AP 的 LAPIC 单元,并附上 AP 开始运行其入口代码的初始 `CS:IP` 地址(在我们的案例中是 `MPENTRY_PADDR`),一个接一个地激活 AP。`kern/mpentry.S` 中的入口代码与 `boot/boot.S` 非常类似。在一些简短的设置之后,它使 AP 进入开启分页的保护模式,然后调用 C 设置程序 `mp_main()`(它也在 `kern/init.c` 中)。在继续唤醒下一个 AP 之前,`boot_aps()` 会等待这个 AP 在其 `struct CpuInfo` 的 `cpu_status` 字段中发出 `CPU_STARTED` 标志。

```markdown
练习 2、阅读 `kern/init.c` 中的 `boot_aps()` 和 `mp_main()`,以及 `kern/mpentry.S` 中的汇编代码。确保你理解了在 AP 引导过程中的控制流转移。然后修改你自己在 `kern/pmap.c` 中的 `page_init()`,避免将 `MPENTRY_PADDR` 处的页加入空闲列表,以便于我们能够在那个物理地址上安全地复制和运行 AP 引导程序代码。你的代码应该会通过更新后的 `check_page_free_list()` 的测试(但可能会在更新后的 `check_kern_pgdir()` 测试上失败,我们在后面会修复它)。
```

```markdown
问题
1、比较 `kern/mpentry.S` 和 `boot/boot.S`。记住,`kern/mpentry.S` 和内核中的其它程序一样,是被编译和链接到 `KERNBASE` 之上运行的。宏 `MPBOOTPHYS` 的作用是什么?为什么它在 `kern/mpentry.S` 中是必需的,而在 `boot/boot.S` 中不需要?换句话说,如果在 `kern/mpentry.S` 中删掉它,会发生什么错误?
提示:回顾链接地址和加载地址的区别,我们在实验 1 中讨论过它们。
```

###### 每个 CPU 的状态和初始化

当写一个多处理器操作系统时,区分每个 CPU 私有的状态和整个系统共享的全局状态是非常重要的。`kern/cpu.h` 定义了大部分每 CPU 状态,包括保存每 CPU 变量的 `struct CpuInfo`。`cpunum()` 总是返回调用它的那个 CPU 的 ID,它可以用作 `cpus` 这样的数组的索引。或者,宏 `thiscpu` 是当前 CPU 的 `struct CpuInfo` 的简写。

下面是你应该知道的每 CPU 状态:

  * **每个 CPU 的内核栈**
因为多个 CPU 能够同时陷入内核,因此,我们需要为每个 CPU 准备一个单独的内核栈,以防止它们相互干扰各自的运行。数组 `percpu_kstacks[NCPU][KSTKSIZE]` 为 NCPU 个内核栈保留了空间。

在实验 2 中,你将 `bootstack` 所引用的物理内存映射为 `KSTACKTOP` 之下的 BSP 内核栈。同样,在本实验中,你将把每个 CPU 的内核栈都映射到这个区域,并使用保护页作为它们之间的缓冲区。CPU 0 的栈将从 `KSTACKTOP` 处向下增长;CPU 1 的栈将从 CPU 0 的栈底部再往下 `KSTKGAP` 字节处开始,依此类推。`inc/memlayout.h` 中展示了这个映射布局。

  * **每个 CPU 的 TSS 和 TSS 描述符**
为了指定每个 CPU 的内核栈在哪里,还需要一个每 CPU 的任务状态段(TSS)。CPU _i_ 的任务状态段保存在 `cpus[i].cpu_ts` 中,而对应的 TSS 描述符定义在 GDT 条目 `gdt[(GD_TSS0 >> 3) + i]` 中。`kern/trap.c` 中定义的全局变量 `ts` 将不再有用。

  * **每个 CPU 当前的环境指针**
由于每个 CPU 都能同时运行不同的用户进程,所以我们重新定义了符号 `curenv`,让它指向 `cpus[cpunum()].cpu_env`(或 `thiscpu->cpu_env`),即当前 CPU(代码正在运行的那个 CPU)上当前正在运行的环境。

  * **每个 CPU 的系统寄存器**
所有的寄存器,包括系统寄存器,都是一个 CPU 私有的。所以,初始化这些寄存器的指令,比如 `lcr3()`、`ltr()`、`lgdt()`、`lidt()` 等等,必须在每个 CPU 上执行一次。函数 `env_init_percpu()` 和 `trap_init_percpu()` 就是为此目的而定义的。
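
上面第一条描述的每 CPU 内核栈布局,可以用下面的小脚本演算出来(常量数值取 JOS 的 `inc/memlayout.h` 中的典型取值,此处仅为示意):

```shell
#!/bin/sh
# 第 i 个 CPU 的内核栈顶:KSTACKTOP - i * (KSTKSIZE + KSTKGAP)
# 假设 KSTACKTOP=0xf0000000,栈和保护页各为 8 页,仅作示意
PGSIZE=4096
KSTACKTOP=$((0xf0000000))
KSTKSIZE=$((8 * PGSIZE))
KSTKGAP=$((8 * PGSIZE))
kstacktop_i() {
    echo $(( KSTACKTOP - $1 * (KSTKSIZE + KSTKGAP) ))
}
```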

```markdown
练习 3、修改 `mem_init_mp()`(在 `kern/pmap.c` 中),将每个 CPU 的栈按 `inc/memlayout.h` 中展示的那样,映射在从 `KSTACKTOP` 开始向下的区域。每个栈的大小是 `KSTKSIZE` 字节,加上未映射的 `KSTKGAP` 字节的保护页。你的代码应该会通过 `check_kern_pgdir()` 中新增的检查。
```

```markdown
练习 4、`trap_init_percpu()`(在 `kern/trap.c` 文件中)中的代码为 BSP 初始化了 TSS 和 TSS 描述符。在实验 3 中它能正常工作,但是当它运行在其它的 CPU 上时就不正确了。修改这些代码,使它能在所有 CPU 上都正常运行。(注意:你的新代码不应该再使用全局变量 `ts`。)
```

在你完成上述练习后,在 QEMU 中使用 4 个 CPU(使用 `make qemu CPUS=4` 或 `make qemu-nox CPUS=4`)来运行 JOS,你应该看到类似下面的输出:

```c
...
Physical memory: 66556K available, base = 640K, extended = 65532K
check_page_alloc() succeeded!
check_page() succeeded!
check_kern_pgdir() succeeded!
check_page_installed_pgdir() succeeded!
SMP: CPU 0 found 4 CPU(s)
enabled interrupts: 1 2
SMP: CPU 1 starting
SMP: CPU 2 starting
SMP: CPU 3 starting
```
###### 锁定

我们目前的代码在 `mp_main()` 中初始化 AP 之后就进入自旋等待。在让 AP 更进一步工作之前,我们需要首先处理多个 CPU 同时运行内核代码时的竞争状况。达到这一目标的最简单的方法是使用一个大内核锁。大内核锁是一个单一的全局锁,当一个环境进入内核模式时加锁,而当这个环境返回到用户模式时释放锁。在这种模型中,用户模式中的环境可以并发地运行在任何可用的 CPU 上,但是只有一个环境能够运行在内核模式中;任何尝试进入内核模式的其它环境都被迫等待。

`kern/spinlock.h` 中声明了大内核锁,即 `kernel_lock`。它也提供了 `lock_kernel()` 和 `unlock_kernel()`,用于快捷地获取和释放锁。你应该在以下的四个位置应用大内核锁:

  * 在 `i386_init()` 中,在 BSP 唤醒其它 CPU 之前获取锁。
  * 在 `mp_main()` 中,在初始化 AP 之后获取锁,然后调用 `sched_yield()` 在这个 AP 上开始运行环境。
  * 在 `trap()` 中,当从用户模式中捕获一个<ruby>陷阱<rt>trap</rt></ruby>时获取锁。可以通过检查 `tf_cs` 的低位比特,来确定一个陷阱是发生在用户模式还是内核模式。
  * 在 `env_run()` 中,在切换回用户模式之前释放锁。不能太早也不能太晚,否则你将可能会产生竞争或死锁。

```markdown
练习 5、按上面所描述的,在合适的位置调用 `lock_kernel()` 和 `unlock_kernel()`,应用大内核锁。
```

如果你的锁定是正确的,如何去测试它?实际上,到目前为止,还无法测试!但是在下一个练习中,你实现了调度之后,就可以测试了。

```
问题
2、看上去使用一个大内核锁,可以保证在同一时间只有一个 CPU 能够运行内核代码。为什么每个 CPU 仍然需要单独的内核栈?请描述一个场景:即便使用了大内核锁保护,共享内核栈仍然会导致错误。
```

```
小挑战!大内核锁很简单,也易于使用。尽管如此,它消除了内核模式的所有并发。大多数现代操作系统使用不同的锁,一种称之为细粒度锁定的方法,去保护它们的共享状态的不同部分。细粒度锁能够大幅提升性能,但是实现起来更困难并且易出错。如果你有足够的勇气,在 JOS 中删除大内核锁,去拥抱并发吧!

由你来决定锁的粒度(一个锁保护的数据量)。给你一个提示,你可以考虑使用自旋锁,去确保对 JOS 内核中这些共享组件的独占访问:

  * 页分配器
  * 控制台驱动
  * 调度器
  * 你将在 Part C 中实现的进程间通讯(IPC)的状态
```
##### 循环调度

本实验中,你的下一个任务是修改 JOS 内核,使它能够在多个环境之间以“循环”的方式交替运行。JOS 中的循环调度工作方式如下:

  * 新的 `kern/sched.c` 中的 `sched_yield()` 函数负责选择一个新环境来运行。它从上一个运行的环境之后开始(如果之前没有运行过环境,就从数组起点开始),按顺序循环搜索 `envs[]` 数组,选择第一个状态为 `ENV_RUNNABLE` 的环境(查看 `inc/env.h`),并调用 `env_run()` 跳转到那个环境。
  * `sched_yield()` 必须做到,同一时间绝对不能在两个 CPU 上运行同一个环境。它可以判断出一个环境正运行在某个 CPU(可能是当前 CPU)上,因为那个正在运行的环境的状态是 `ENV_RUNNING`。
  * 我们已经为你实现了一个新的系统调用 `sys_yield()`,用户环境可以调用它去调用内核的 `sched_yield()` 函数,并因此自愿把对 CPU 的控制权禅让给另外的一个环境。

```c
练习 6、像上面描述的那样,在 `sched_yield()` 中实现循环调度。不要忘了修改 `syscall()` 以派发 `sys_yield()`。

确保在 `mp_main` 中调用了 `sched_yield()`。

修改 `kern/init.c`,创建三个(或更多个!)运行程序 `user/yield.c` 的环境。

运行 `make qemu`。在它终止之前,你应该会看到像下面这样,环境之间来回切换了五次。

也可以使用几个 CPU 来测试:make qemu CPUS=2。

...
Hello, I am environment 00001000.
Hello, I am environment 00001001.
Hello, I am environment 00001002.
Back in environment 00001000, iteration 0.
Back in environment 00001001, iteration 0.
Back in environment 00001002, iteration 0.
Back in environment 00001000, iteration 1.
Back in environment 00001001, iteration 1.
Back in environment 00001002, iteration 1.
...

在程序 `yield` 退出之后,系统中将没有可运行的环境,调度器应该会调用 JOS 内核监视器。如果上述现象没有发生,那么你应该在继续之前修复你的代码。
```

```c
问题
3、在你实现的 `env_run()` 中,你应该会调用 `lcr3()`。在调用 `lcr3()` 的之前和之后,你的代码都引用(至少应该会引用)变量 `e`,即 `env_run` 的参数。在加载 `%cr3` 寄存器时,MMU 使用的地址上下文将立即改变。但一个虚拟地址(即 `e`)只有相对于一个给定的地址上下文才有意义:地址上下文指定了虚拟地址到物理地址的映射。为什么指针 `e` 在地址切换之前和之后都能被正确地解引用?
4、无论何时,内核从一个环境切换到另一个环境,它必须要确保旧环境的寄存器内容已经被保存,以便于它们稍后能够正确地还原。为什么?这种保存发生在什么地方?
```

```c
小挑战!给内核添加一个更复杂的调度策略,比如一个固定优先级的调度器,它给每个环境分配一个优先级,并且在执行中,较高优先级的环境总是比低优先级的环境优先被选定。如果你想去冒险一下,尝试实现一个类 Unix 的、优先级可调整的调度器,或者甚至是一个彩票调度器或跨步调度器。(可以在 Google 中查找“彩票调度”和“跨步调度”的相关资料)

写一个或两个测试程序,去测试你的调度算法是否工作正常(即,正确的环境能够按正确的次序运行)。如果你实现了本实验 Part B 和 Part C 部分的 `fork()` 和 IPC,写这些测试程序可能会更容易。
```

```markdown
小挑战!目前的 JOS 内核还不能应用到使用了 x87 协处理器、MMX 指令集、或流式 SIMD 扩展(SSE)的 x86 处理器上。扩展数据结构 `Env`,提供一个保存处理器浮点状态的地方,并且扩展上下文切换代码,使得从一个环境切换到另一个环境时,能够保存和还原正确的状态。`FXSAVE` 和 `FXRSTOR` 指令或许对你有帮助,但是需要注意的是,这些指令在旧的 x86 用户手册上没有,因为它们是在较新的处理器上引入的。写一个用户级的测试程序,让它使用浮点做一些很酷的事情。
```

##### 创建环境的系统调用

虽然你的内核现在已经有了在多个用户级环境之间切换的能力,但是它仍然只能运行内核最初设置好的那些环境。现在,你需要去实现必需的 JOS 系统调用,以允许用户环境去创建和启动其它的新用户环境。

Unix 提供了 `fork()` 系统调用作为它的进程创建原语。Unix 的 `fork()` 通过复制调用进程(父进程)的整个地址空间去创建一个新进程(子进程)。从用户空间中能够观察到的它们之间仅有的两个差别是,它们的进程 ID 和父进程 ID(由 `getpid` 和 `getppid` 返回)不同。在父进程中,`fork()` 返回子进程 ID,而在子进程中,`fork()` 返回 0。默认情况下,每个进程得到它自己的私有地址空间,一个进程对内存的修改对另一个进程是不可见的。

为了创建用户模式下的新环境,你将要提供一组不同的、更原始的 JOS 系统调用。使用这些系统调用,除了其它风格的环境创建方式之外,你还可以在用户空间中实现一个完整的类 Unix 的 `fork()`。你将要为 JOS 编写的新的系统调用如下:

  * `sys_exofork`:
这个系统调用创建一个新的空白环境:它的地址空间的用户部分没有映射任何东西,并且它是不可运行的。这个新的环境与调用 `sys_exofork` 的父环境有着完全相同的寄存器状态。在父进程中,`sys_exofork` 将返回新创建环境的 `envid_t`(如果环境分配失败的话,返回一个负的错误代码)。在子进程中,它将返回 0。(因为子进程从一开始就被标记为不可运行,`sys_exofork` 并不会真的在子进程中返回,直到它的父进程使用 .... 显式地将子进程标记为可运行为止。)
  * `sys_env_set_status`:
设置指定环境的状态为 `ENV_RUNNABLE` 或 `ENV_NOT_RUNNABLE`。这个系统调用一般用于,在一个新环境的地址空间和寄存器状态已经完全初始化完成之后,标记这个新环境准备去运行。
  * `sys_page_alloc`:
分配一个物理内存页,并将它映射到给定环境地址空间中给定的虚拟地址上。
  * `sys_page_map`:
从一个环境的地址空间中复制一个页映射(不是页内容!)到另一个环境的地址空间中,形成内存共享,使新旧映射共同指向同一个物理内存页。
  * `sys_page_unmap`:
取消映射给定环境中给定虚拟地址上的映射。

上面所有的系统调用都接受环境 ID 作为参数,JOS 内核支持一个约定,那就是用值 “0” 来表示“当前环境”。这个约定是在 `kern/env.c` 的 `envid2env()` 中实现的。

在我们的测试程序 `user/dumbfork.c` 里,提供了一个类 Unix 的 `fork()` 的非常原始的实现。这个测试程序使用了上面的系统调用,去创建和运行一个复制了它自己地址空间的子环境。然后,这两个环境像前面的练习那样使用 `sys_yield` 来回切换,父进程在迭代 10 次后退出,而子进程在迭代 20 次后退出。

```c
练习 7、实现 `kern/syscall.c` 中上面描述的系统调用,并确保 `syscall()` 能调用它们。你将需要使用 `kern/pmap.c` 和 `kern/env.c` 中的多个函数,尤其是要用到 `envid2env()`。目前,每当你调用 `envid2env()` 时,在 `checkperm` 参数上传递 1。你务必检查任何无效的系统调用参数,遇到那种情况时返回 `-E_INVAL`。使用 `user/dumbfork` 测试你的 JOS 内核,并在继续之前确保它运行正常。
```

```c
小挑战!添加另外的系统调用,使你能够读取已存在环境的所有重要状态,以及设置这些状态。然后实现一个能够 fork 出子环境的用户模式程序:先运行子环境一小会(即,迭代几次 `sys_yield()`),然后取得子环境的几个快照或检查点,接着运行子环境一段时间,最后把子环境还原到检查点时的状态,从那里继续开始。这样,你就可以有效地从一个中间状态“回放”子环境的运行。确保子环境使用 `sys_cgetc()` 或 `readline()` 与用户进行一些交互,这样,用户就能够查看和修改它的内部状态;并且你可以通过让子环境“遗忘”某些点之前发生的事情,来验证你的检查点/重启动的有效性。
```

到此为止,已经完成了本实验的 Part A 部分;在你运行 `make grade` 之前确保它通过了所有的 Part A 的测试,并且和以往一样,使用 `make handin` 去提交它。如果你想尝试找出为什么一些特定的测试是失败的,可以运行 `run ./grade-lab4 -v`,它将向你展示内核构建的输出,和测试失败时的 QEMU 运行情况。当测试失败时,这个脚本将停止运行,然后你可以去检查 `jos.out` 的内容,去查看内核真实的输出内容。

#### Part B:写时复制 Fork

正如在前面提到过的,Unix 提供 `fork()` 系统调用作为它主要的进程创建原语。`fork()` 系统调用通过复制调用进程(父进程)的地址空间来创建一个新进程(子进程)。

xv6 Unix 的 `fork()` 将父进程所有页上的数据复制到为子进程分配的新页中。从本质上看,它与 `dumbfork()` 所采取的方法是相同的。复制父进程的地址空间到子进程,是 `fork()` 操作中代价最高的部分。

但是,调用 `fork()` 之后,子进程往往会几乎立即调用 `exec()`,用一个新程序来替换子进程的内存。例如,shell 通常就是这么做的。在这种情况下,花费在复制父进程地址空间上的时间被大量浪费了,因为在调用 `exec()` 之前,子进程使用的内存非常少。

基于这个原因,Unix 的最新版本利用了虚拟内存硬件的优势,允许父进程和子进程共享映射到它们各自地址空间上的内存,直到其中一个进程真正修改了内存为止。这个技术就是众所周知的“写时复制”。为实现这一点,在 `fork()` 时,内核只复制从父进程到子进程的地址空间映射,而不是所映射页的内容,并且同时把正在共享的页标记为只读。当两个进程中的一个尝试写入某个共享页时,该进程会产生一个页故障。此时,Unix 内核才意识到那个页实际上是“虚拟的”或“写时复制”的副本,于是它为发生页故障的进程生成一个新的、私有的、可写的页副本。在这种方式下,各个页的内容直到真正被写入时才会被复制。这种优化使得子进程在 `fork()` 之后紧接着执行 `exec()` 的代价变得很低:子进程在调用 `exec()` 时可能仅需要复制一个页(它的栈的当前页)。

在本实验的下一部分中,你将实现一个带有“写时复制”的“真正的”类 Unix 的 `fork()`,并且是作为一个常规的用户空间库来实现的。在用户空间中实现 `fork()` 和写时复制的好处是,内核可以保持简单,因此更不易出错。它也让各个用户模式程序可以为 `fork()` 定义自己的语义。想使用略有不同实现的程序(例如,代价昂贵的、总是复制的 `dumbfork()` 版本,或者父子进程之后真正共享内存的版本),可以很容易地自行提供。

##### 用户级页故障处理

一个用户级写时复制 `fork()` 需要知道写保护页上发生的页故障的相关信息,因此,这是你首先需要去实现的东西。对用户级页故障处理来说,写时复制仅是众多可能的用途之一。

通常的做法是配置好地址空间,使得页故障能够指示出某些动作需要发生。例如,主流的 Unix 内核在一个新进程的栈区域中,最初只映射单个页,之后随着进程栈消费的增加,在尚未映射的栈地址上发生页故障时,才“按需”分配和映射额外的栈页。典型的 Unix 内核必须对进程空间每个区域上发生页故障时应采取的动作保持跟踪。例如,栈区域中的页故障,一般会分配和映射一个新的物理内存页;程序 BSS 区域中的页故障,一般会分配一个新页,用 0 填充并映射它;在按需加载可执行文件的系统中,文本区域中的页故障则会从磁盘上读取相应的二进制页并映射它。

要跟踪这些信息,内核需要做大量的簿记。与传统的 Unix 方法不同,你将在用户空间中决定每个页故障应该做什么,这样 bug 造成的危害会更小。这种设计还带来了额外的好处:允许程序员在定义它们的内存区域时拥有很大的灵活性;稍后,你还将使用用户级页故障处理,来映射和访问基于磁盘的文件系统上的文件。

###### 设置页故障服务程序

为了处理它自己的页故障,一个用户环境需要在 JOS 内核中注册一个页故障服务程序入口。用户环境通过新的 `sys_env_set_pgfault_upcall` 系统调用来注册它的页故障入口。我们给结构 `Env` 增加了一个新的成员 `env_pgfault_upcall`,用于记录这个信息。

```markdown
练习 8、实现 `sys_env_set_pgfault_upcall` 系统调用。当查找目标环境的环境 ID 时,一定要确认启用了权限检查,因为这是一个“危险的”系统调用。
```

###### 在用户环境中的正常栈和异常栈

在正常运行期间,JOS 中的用户环境运行在正常的用户栈上:它的 `ESP` 寄存器开始时指向 `USTACKTOP`,而它所压入的栈数据驻留在 `USTACKTOP-PGSIZE` 和 `USTACKTOP-1`(含)之间的页上。但是,当在用户模式中发生页故障时,内核将让用户环境在一个不同的栈,即用户异常栈上,运行指定的用户级页故障服务程序。本质上,我们将让 JOS 内核替用户环境实现自动的“栈切换”,这与 x86 处理器在从用户模式转换到内核模式时替 JOS 实现栈切换的方式非常相似。

JOS 的用户异常栈也是一个页的大小,它的顶部被定义在虚拟地址 `UXSTACKTOP` 处,因此用户异常栈的有效字节范围是从 `UXSTACKTOP-PGSIZE` 到 `UXSTACKTOP-1`(含)。当运行在这个异常栈上时,用户页故障服务程序能够使用 JOS 的普通系统调用去映射新页或调整映射,以修复最初导致页故障发生的问题。然后,用户级页故障服务程序通过一个汇编语言 `stub`,返回到原始栈上发生故障的代码处。

每个想支持用户级页故障处理的用户环境,都需要使用在 Part A 中介绍的 `sys_page_alloc()` 系统调用,为它自己的异常栈分配内存。

###### 调用用户页故障服务程序

现在,你需要去修改 `kern/trap.c` 中的页故障处理代码,以便能够按下面的方式处理用户模式中发生的页故障。我们把故障发生时用户环境的状态称之为捕获时(trap-time)状态。

如果没有注册页故障服务程序,JOS 内核就像以前那样,用一条消息销毁用户环境。否则,内核将在异常栈上设置一个栈帧,它看起来就像 `inc/trap.h` 中的一个 `struct UTrapframe`:

```assembly
                <-- UXSTACKTOP
trap-time esp
trap-time eflags
trap-time eip
trap-time eax    start of struct PushRegs
trap-time ecx
trap-time edx
trap-time ebx
trap-time esp
trap-time ebp
trap-time esi
trap-time edi    end of struct PushRegs
tf_err (error code)
fault_va         <-- %esp when handler is run
```

然后,内核安排这个用户环境恢复执行,让它使用这个栈帧在异常栈上运行页故障服务程序;你必须搞清楚如何做到这一点。`fault_va` 是引发页故障的虚拟地址。

如果在异常发生时,用户环境已经在用户异常栈上运行了,那就说明页故障服务程序自身发生了故障。在这种情况下,你应该在当前的 `tf->tf_esp` 下面,而不是在 `UXSTACKTOP` 下面,启动一个新的栈帧。

要测试 `tf->tf_esp` 是否已经在用户异常栈上,可以检查它是否在 `UXSTACKTOP-PGSIZE` 和 `UXSTACKTOP-1`(含)的范围内。
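
这个范围检查本身只是一点区间算术,可以像下面这样示意(其中 `UXSTACKTOP` 的数值只是一个演示用的假设取值,实际以 JOS 的 `inc/memlayout.h` 为准):

```shell
#!/bin/sh
# 判断一个地址是否落在用户异常栈 [UXSTACKTOP-PGSIZE, UXSTACKTOP-1] 区间内
PGSIZE=4096
UXSTACKTOP=$((0xeec00000))   # 假设的数值,仅作演示
on_uxstack() {
    [ "$1" -ge $(( UXSTACKTOP - PGSIZE )) ] && [ "$1" -lt "$UXSTACKTOP" ]
}
```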

```markdown
练习 9、实现 `kern/trap.c` 中的 `page_fault_handler` 的代码,使之能把页故障派发到用户模式故障服务程序上。在写入异常栈时,一定要采取适当的预防措施。(如果用户环境的异常栈空间用完了,会发生什么事情?)
```

###### 用户模式页故障入口点

接下来,你需要去实现一段汇编程序,它负责调用 C 语言的页故障服务程序,并在原先发生故障的指令处恢复执行。这段汇编程序就是会由内核通过 `sys_env_set_pgfault_upcall()` 注册的那个服务程序。

```markdown
练习 10、实现 `lib/pfentry.S` 中的 `_pgfault_upcall` 程序。最有趣的部分是返回到用户代码中产生页故障的原始位置。你将要直接返回到那里,而不通过内核。最难的部分是同时切换栈和重新加载 EIP。
```

最后,你需要去实现用户级页故障处理机制的 C 用户库部分。

```c
练习 11、完成 `lib/pgfault.c` 中的 `set_pgfault_handler()`。
```

###### 测试

运行 `user/faultread`(make run-faultread),你应该会看到:

```c
...
[00000000] new env 00001000
[00001000] user fault va 00000000 ip 0080003a
TRAP frame ...
[00001000] free env 00001000
```

运行 `user/faultdie`,你应该会看到:

```c
...
[00000000] new env 00001000
i faulted at va deadbeef, err 6
[00001000] exiting gracefully
[00001000] free env 00001000
```

运行 `user/faultalloc`,你应该会看到:

```c
...
[00000000] new env 00001000
fault deadbeef
this string was faulted in at deadbeef
fault cafebffe
fault cafec000
this string was faulted in at cafebffe
[00001000] exiting gracefully
[00001000] free env 00001000
```

如果你只看到第一行 "this string",意味着你没有正确地处理递归页故障。

运行 `user/faultallocbad`,你应该会看到:

```c
...
[00000000] new env 00001000
[00001000] user_mem_check assertion failure for va deadbeef
[00001000] free env 00001000
```

确保你理解了为什么 `user/faultalloc` 和 `user/faultallocbad` 的行为是不一样的。

```markdown
小挑战!扩展你的内核,使得不仅是页故障,用户空间中运行的代码能够产生的所有类型的处理器异常,都能够被重定向到一个用户模式中的异常服务程序上。写出用户模式测试程序,去测试各种各样的用户模式异常处理,比如除零错误、一般保护故障、以及非法操作码。
```

##### 实现写时复制 Fork

现在,你已经拥有了在用户空间中完整实现写时复制 `fork()` 所需要的全部内核功能。

我们在 `lib/fork.c` 中为你的 `fork()` 提供了一个框架。像 `dumbfork()` 一样,`fork()` 应该会创建一个新环境,然后扫描父环境的整个地址空间,并在子环境中设置相应的页映射。重要的差别在于,`dumbfork()` 复制了页,而 `fork()` 最初只复制页映射。`fork()` 仅在其中一个环境尝试写入某个页时才复制该页。

`fork()` 的基本控制流如下:

1. 父环境使用你在上面实现的 `set_pgfault_handler()` 函数,安装 `pgfault()` 作为 C 级页故障服务程序。

2. 父环境调用 `sys_exofork()` 去创建一个子环境。

3. 对于它的地址空间中低于 UTOP 的每个可写页或写时复制页,父环境调用 `duppage`,它应该会将该页以写时复制的方式映射到子环境的地址空间中,然后在它自己的地址空间中以写时复制的方式重新映射该页。[ 注意:这里的顺序很重要(即,先在子环境中把该页标记为 COW,再在父环境中标记)!你能明白是为什么吗?尝试去想一个具体的案例,将顺序颠倒一下会发生什么样的问题。] `duppage` 把两个 PTE 都设置为不可写,并且在 "avail" 字段中包含 `PTE_COW`,以便把写时复制页与真正的只读页区分开来。

然而,异常栈不能按这种方式重新映射。对于异常栈,你需要在子环境中分配一个新页。因为页故障服务程序要负责完成真正的复制,而页故障服务程序又运行在异常栈上,所以异常栈不能进行写时复制:否则谁来复制它呢?

`fork()` 也需要处理那些存在、但不可写也不是写时复制的页。

4. 父环境为子环境设置用户页故障入口点,使它看起来像自己的一样。

5. 现在,子环境准备去运行,所以父环境把它标记为可运行。

每当其中一个环境写一个还没有真正复制的写时复制页时,它将产生一个页故障。下面是用户页故障服务程序的控制流:

1. 内核把页故障传递到 `_pgfault_upcall`,它调用 `fork()` 的 `pgfault()` 服务程序。
2. `pgfault()` 检查故障是一个写入(在错误代码中检查 `FEC_WR`),并且该页的 PTE 被标记为 `PTE_COW`。如果不是,则崩溃。
3. `pgfault()` 分配一个新页,将其映射到一个临时位置,并将故障页的内容复制进去。然后,故障服务程序以读/写权限把新页映射到发生故障的地址上,替换旧的只读映射。

对于上面的几个操作,用户级 `lib/fork.c` 代码必须查询环境的页表(即,检查某个页的 PTE 是否标记为 `PTE_COW`)。为此,内核在 `UVPT` 位置精确地映射了环境的页表。它使用一个 [聪明的映射技巧][1],使得用户代码很容易查找 PTE。`lib/entry.S` 设置了 `uvpt` 和 `uvpd`,以便于你能够在 `lib/fork.c` 中轻松查找页表信息。

```c
练习 12、在 `lib/fork.c` 中实现 `fork`、`duppage` 和 `pgfault`。

使用 `forktree` 程序测试你的代码。它应该会产生下列的信息,其中还会夹杂 'new env'、'free env' 和 'exiting gracefully' 这样的信息。这些信息可能不是按如下的顺序出现的,并且环境 ID 也可能不一样。

1000: I am ''
1001: I am '0'
2000: I am '00'
2001: I am '000'
1002: I am '1'
3000: I am '11'
3001: I am '10'
4000: I am '100'
1003: I am '01'
5000: I am '010'
4001: I am '011'
2002: I am '110'
1004: I am '001'
1005: I am '111'
1006: I am '101'
```

```c
小挑战!实现一个名为 `sfork()` 的共享内存的 `fork()`。在这个版本中,父子环境共享所有的内存页(因此,一个环境中对内存的写入,在另一个环境中也能看到),除了栈区域中的页以外,栈区域仍然应该使用写时复制的方式来处理。修改 `user/forktree.c`,使用 `sfork()` 而不是常规的 `fork()`。另外,在你完成了 Part C 中的 IPC 之后,使用你的 `sfork()` 去运行 `user/pingpongs`。你将需要找到一种新的方式来提供全局指针 `thisenv` 的功能。
```

```markdown
小挑战!你实现的 `fork` 将产生大量的系统调用。在 x86 上,使用中断切换到内核模式的代价较高。扩展系统调用接口,使它能够一次发送一批系统调用。然后修改 `fork` 去使用这个接口。

你的新的 `fork` 有多快?

你可以用一个粗略的分析来论证批量系统调用给你的 `fork` 带来的性能改变,以此来回答这个问题:使用一条 `int 0x30` 指令的代价有多高?在你的 `fork` 中执行了多少次 `int 0x30` 指令?访问 `TSS` 栈切换的代价高吗?等等……

或者,你可以在真实的硬件上引导你的内核,并且真实地对你的代码做基准测试。查看 `RDTSC`(读取时间戳计数器)指令,它的定义在 IA32 手册中,它计数自上一次处理器重置以来流逝的时钟周期数。QEMU 并不能真实地模拟这个指令(它要么计数执行的虚拟指令数量,要么使用主机的 TSC,但是这两种方式都不能反映真实的 CPU 周期数)。
```

到此为止,Part B 部分结束了。在你运行 `make grade` 之前,确保你通过了所有的 Part B 部分的测试。和以前一样,你可以使用 `make handin` 去提交你的实验。

#### Part C:抢占式多任务处理和进程间通讯(IPC)

在实验 4 的最后部分,你将修改内核,使之能抢占不配合的环境,并允许环境之间显式地传递消息。

##### 时钟中断和抢占

运行测试程序 `user/spin`。这个测试程序 fork 出一个子环境,子环境获得 CPU 的控制权之后,就进入死循环,永不释放。无论是父环境还是内核都无法再取回对 CPU 的控制权。从保护系统免受用户模式环境中的 bug 或恶意代码攻击的角度来看,这显然不是个理想的状态,因为任何用户模式环境都能够通过一个简单的无限循环,永不归还 CPU 控制权,从而让整个系统停止运转。为了允许内核抢占一个正在运行的环境,从它那里强制取回对 CPU 的控制权,我们必须扩展 JOS 内核,以支持来自硬件时钟的外部硬件中断。

###### 中断规则

外部中断(即设备中断)被称为 IRQ。现在有 16 个可能的 IRQ,编号为 0 到 15。从 IRQ 号到 IDT 条目的映射是不固定的。`picirq.c` 中的 `pic_init` 将 IRQ 0 - 15 映射到 IDT 条目 `IRQ_OFFSET` 到 `IRQ_OFFSET+15`。

在 `inc/trap.h` 中,`IRQ_OFFSET` 被定义为十进制的 32。所以,IDT 条目 32 - 47 对应 IRQ 0 - 15。例如,时钟中断是 IRQ 0,所以 IDT[IRQ_OFFSET+0](即 IDT[32])包含了内核中时钟中断服务程序的地址。选择这个 `IRQ_OFFSET` 是为了让设备中断不会与处理器异常重叠,否则会引起显而易见的混淆。(事实上,在早期运行 MS-DOS 的 PC 上,`IRQ_OFFSET` 就是 0,这确实导致了硬件中断服务程序和处理器异常处理之间的大量混淆!)
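
IRQ 号与 IDT 条目之间的对应关系只是一个固定偏移,可以这样验证一下(`IRQ_OFFSET` 取文中所说的 32):

```shell
#!/bin/sh
# IRQ n 对应 IDT 条目 IRQ_OFFSET + n(IRQ_OFFSET = 32,见 inc/trap.h)
IRQ_OFFSET=32
idt_entry_for_irq() {
    echo $(( IRQ_OFFSET + $1 ))
}
# 时钟中断是 IRQ 0,应落在 IDT[32];最后一个 IRQ 15 落在 IDT[47]
```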
|
||||
|
||||
在 JOS 中,相比 xv6 Unix 我们做了一个重要的简化。当处于内核模式时,外部设备中断总是被关闭(并且,像 xv6 一样,当处于用户空间时,再打开外部设备的中断)。外部中断由 `%eflags` 寄存器的 `FL_IF` 标志位来控制(查看 `inc/mmu.h`)。当这个标志位被设置时,外部中断被打开。虽然这个标志位可以使用几种方式来修改,但是为了简化,我们只通过进程所保存和恢复的 `%eflags` 寄存器值,作为我们进入和离开用户模式的方法。
|
||||
|
||||
处于用户环境中时,你将要确保 `FL_IF` 标志被设置,以便于出现一个中断时,它能够通过处理器来传递,让你的中断代码来处理。否则,中断将被屏蔽或忽略,直到中断被重新打开后。我们使用引导加载程序的第一个指令去屏蔽中断,并且到目前为止,还没有去重新打开它们。

```markdown
Exercise 13. Modify `kern/trapentry.S` and `kern/trap.c` to initialize the appropriate entries in the IDT and provide handlers for IRQs 0 through 15. Then modify the code in `env_alloc()` in `kern/env.c` to ensure that user environments are always run with interrupts enabled.

Also uncomment the `sti` instruction in `sched_halt()` so that idle CPUs unmask interrupts.

The processor never pushes an error code when invoking a hardware interrupt handler. You might want to re-read section 9.2 of the [80386 Reference Manual][2], or section 5.8 of the [IA-32 Intel Architecture Software Developer's Manual, Volume 3][3], at this time.

After doing this exercise, if you run your kernel with any test program that runs for a non-trivial length of time (e.g., `spin`), you should see the kernel print trap frames for hardware interrupts. While interrupts are now enabled in the processor, JOS isn't yet handling them, so you should see it misattribute each interrupt to the currently running user environment and destroy it. Eventually it should run out of environments to destroy and drop into the monitor.
```

###### Handling Clock Interrupts

In the `user/spin` program, once the child environment was first run, it just spun in a tight loop, and the kernel never got control of the CPU back. We need to program the hardware to generate clock interrupts periodically, which will force control back to the kernel, where we can switch control to a different user environment.

The calls to `lapic_init` and `pic_init` (from `i386_init` in `init.c`), which we have written for you, set up the clock and the interrupt controller to generate interrupts. You now need to write the code to handle these interrupts.

```markdown
Exercise 14. Modify the kernel's `trap_dispatch()` function so that it calls `sched_yield()` to find and run a different environment whenever a clock interrupt takes place.

You should now be able to get the `user/spin` test to work: the parent environment should fork off the child, `sys_yield()` to it a couple of times, but in each case regain control of the CPU after one time slice, and finally kill the child environment and terminate gracefully.
```

This is a great time to do some regression testing. Make sure that you haven't broken any earlier part of the lab now that interrupts are enabled (e.g., run `forktree`). Also, try running with multiple CPUs using `make CPUS=2 target`. You should also be able to pass the `stresssched` test now. Run `make grade` to see for sure. You should now get a total score of 65 points (out of 80) on this lab.

##### Inter-Process Communication (IPC)

(Technically in JOS this is "inter-environment communication" or "IEC", but everyone else calls it IPC, so we'll use the standard term.)

We've been focusing on the isolation aspects of the operating system, the ways it provides the illusion that each program has a machine all to itself. Another important service of an operating system is to allow programs to communicate with each other when they want to. It can be quite powerful to let programs interact with other programs. The Unix pipe model is the canonical example.

There are many models of interprocess communication, and the debate over which model is best has never ended. We won't get into that debate. Instead, we'll implement a simple IPC mechanism and then try it out.

###### IPC in JOS

You will implement a few additional JOS kernel system calls that collectively provide a simple interprocess communication mechanism. You will implement two system calls, `sys_ipc_recv` and `sys_ipc_try_send`. Then you will implement two library wrappers, `ipc_recv` and `ipc_send`.

The "messages" that user environments can send to each other using JOS's IPC mechanism consist of two components: a single 32-bit value, and optionally a single page mapping. Allowing environments to pass page mappings in messages provides an efficient way to transfer more data than will fit into a single 32-bit integer, and also allows environments to set up shared memory arrangements easily.

###### Sending and Receiving Messages

To receive a message, an environment calls `sys_ipc_recv`. This system call de-schedules the current environment and does not run it again until a message has been received. When an environment is waiting to receive a message, any other environment can send it a message — not just a particular environment, and not just environments that have a parent/child relationship with the receiving environment. In other words, the permission checking that you implemented in Part A will not apply to IPC, because the IPC system calls are carefully designed so as to be "safe": an environment cannot cause another environment to malfunction simply by sending it messages (unless the target environment is also buggy).

To try to send a value, an environment calls `sys_ipc_try_send` with both the receiver's environment id and the value to be sent. If the named environment is actually receiving (it has called `sys_ipc_recv` and not gotten a value yet), then the send delivers the message and returns 0. Otherwise the send returns `-E_IPC_NOT_RECV` to indicate that the target environment is not currently expecting to receive a value.

A library function `ipc_recv` in user space will take care of calling `sys_ipc_recv` and then looking up the information about the received value in the current environment's `struct Env`.

Similarly, a library function `ipc_send` will take care of repeatedly calling `sys_ipc_try_send` until the send succeeds.

###### Transferring Pages

When an environment calls `sys_ipc_recv` with a valid `dstva` parameter (below `UTOP`), the environment is stating that it is willing to receive a page mapping. If the sender sends a page, then that page should be mapped at `dstva` in the receiver's address space. If the receiver already had a page mapped at `dstva`, then that previous page is unmapped.

When an environment calls `sys_ipc_try_send` with a valid `srcva` (below `UTOP`), it means the sender wants to send the page currently mapped at `srcva` to the receiver, with permissions `perm`. After a successful IPC, the sender keeps its original mapping for the page at `srcva` in its address space, but the receiver also obtains a mapping for this same physical page, at the `dstva` it originally specified, in its own address space. As a result, this page becomes shared between the sender and the receiver.

If either the sender or the receiver does not indicate that a page should be transferred, then no page is transferred. After any IPC, the kernel sets the new `env_ipc_perm` field in the receiver's `Env` structure to the permissions of the page received, or zero if no page was received.

###### Implementing IPC

```markdown
Exercise 15. Implement `sys_ipc_recv` and `sys_ipc_try_send` in `kern/syscall.c`. Read the comments on both before implementing them, since they have to work together. When you call `envid2env` in these routines, you should set the `checkperm` flag to 0, meaning that any environment is allowed to send IPC messages to any other environment, and the kernel does no special permission checking other than verifying that the target envid is valid.

Then implement the `ipc_recv` and `ipc_send` functions in `lib/ipc.c`.

Use the `user/pingpong` and `user/primes` programs to test your IPC mechanism. `user/primes` will generate a new environment for each prime number until JOS runs out of environments. You might find it interesting to read `user/primes.c` to see all of the forking and IPC going on behind the scenes.
```

```
Challenge! Why does `ipc_send` have to loop? Change the system call interface so it doesn't have to. Make sure you can handle multiple environments trying to send to one environment at the same time.
```

```markdown
Challenge! The prime sieve is only one neat use of message passing between a large number of concurrent programs. Read C. A. R. Hoare, "Communicating Sequential Processes", _Communications of the ACM_ 21(8) (August 1978), 666-677, and implement the matrix multiplication example.
```

```markdown
Challenge! One of the most impressive examples of the power of message passing is Doug McIlroy's power series calculator, described in [M. Douglas McIlroy, "Squinting at Power Series", _Software--Practice and Experience_, 20(7) (July 1990), 661-683][4]. Implement his power series calculator and compute the power series for _sin_(_x_ + _x_^3).
```

```markdown
Challenge! Make JOS's IPC mechanism more efficient by applying some of the techniques from Liedtke's paper, [Improving IPC by Kernel Design][5], or any other tricks you can think of. Feel free to modify the kernel's system call API for this purpose, as long as your code is backwards compatible with our grading scripts.
```

**This ends Part C.** Make sure you pass all of the grading tests, and don't forget to write up your answers to the challenge problems in `answers-lab4.txt`.

Before handing in, use `git status` and `git diff` to examine your changes, and don't forget to `git add answers-lab4.txt`. When you're ready, commit your changes with `git commit -am 'my solutions to lab 4'`, then `make handin` and follow the directions.

--------------------------------------------------------------------------------

via: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/

Author: [csail.mit][a]
Topic selection: [lujun9972][b]
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/uvpt.html
[2]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/i386/toc.htm
[3]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/ia32/IA32-3A.pdf
[4]: https://swtch.com/~rsc/thread/squint.pdf
[5]: http://dl.acm.org/citation.cfm?id=168633
Organizing Tasks on the Linux Command Line with Calcurse
======

Keep up with your calendar and to-do list with Calcurse.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT)

Do you need complex, feature-packed graphical or web applications to stay organized? I don't think so. The right command-line tool can get the job done, and do it well.

Of course, uttering the words command line may strike fear into the hearts of some Linux users. For them, the command line is unknown territory.

Staying organized at the command line is easy with [Calcurse][1]. Calcurse gives a text-based interface a graphical look and feel: you get the simplicity of the command line combined with ease of use and easy navigation.

Let's take a closer look at Calcurse, which is open source under the BSD License.

### Getting the software

If you like to compile code (I usually don't), you can grab the source code from the [Calcurse website][1]. Otherwise, get a [binary installer][2] for your Linux distribution. You might even be able to install Calcurse from your distro's package manager. It never hurts to check.

Once Calcurse is compiled or installed (neither takes long), you're ready to go.

### Using Calcurse

Open a terminal and type **calcurse**.

![](https://opensource.com/sites/default/files/uploads/calcurse-main.png)

Calcurse's interface consists of three panels:

* Appointments (the left side of the screen)
* Calendar (the top right)
* To-do list (the bottom right)

Move between the panels by pressing the Tab key on your keyboard. To add a new item to a panel, press **a**. Calcurse walks you through the steps needed to add the item.

One interesting quirk is that the appointment and calendar panels work together. You select the calendar panel to add an appointment: there, you choose the date on which the appointment takes place. Once you've done that, you go back to the appointment panel. I know...

Press **a** to set a start time, a duration (in minutes), and a description of the appointment. The start time and duration are optional. Calcurse displays appointments on the day they're due.

![](https://opensource.com/sites/default/files/uploads/calcurse-appointment.png)

Here's what a day's appointments look like:

![](https://opensource.com/sites/default/files/uploads/calcurse-appt-list.png)

The to-do list works on its own. Select the to-do panel and press **a** (again). Type a description of the task, then set a priority (1 is the highest and 9 is the lowest). Calcurse lists your uncompleted tasks in the to-do panel.

![](https://opensource.com/sites/default/files/uploads/calcurse-todo.png)

If your task has a long description, Calcurse truncates it. You can navigate to the task using the up or down arrow keys on your keyboard, then press **v** to view the description.

![](https://opensource.com/sites/default/files/uploads/calcurse-view-todo.png)

Calcurse saves its information as plain text in a hidden folder called **.calcurse** in your home directory, for example, **/home/scott/.calcurse**. If Calcurse stops working, it's easy to find your information.

### Other useful features

Other Calcurse features include the ability to set recurring appointments. To do that, find the appointment you want to repeat and press **r** in the appointment panel. You'll be asked to set the frequency (for example, daily or weekly) and how long you want the appointment to repeat.

You can also import calendars in [ICAL][3] format, or export your data in either ICAL or [PCAL][4] format. With ICAL, you can share your data with other calendar applications. With PCAL, you can generate a Postscript version of your calendar.

There are also a number of command-line arguments you can pass to Calcurse. You can read about them [in the documentation][5].

While simple, Calcurse does a solid job of helping you stay organized. You'll need to be a bit more attentive to your tasks and appointments, but you'll be able to focus better on what you need to do and where you need to be.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/calcurse

Author: [Scott Nesbitt][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: http://www.calcurse.org/
[2]: http://www.calcurse.org/downloads/#packages
[3]: https://tools.ietf.org/html/rfc2445
[4]: http://pcal.sourceforge.net/
[5]: http://www.calcurse.org/files/manual.chunked/ar01s04.html#_invocation
Four Open Source Android Email Clients
======

Email isn't dead yet, and most of it is now read on mobile devices.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_mail_box_envelope_send_blue.jpg?itok=6Epj47H6)

These days, some young people deride email as "communication for old people". The truth, however, is that email is definitely not dead. While [collaboration tools][1], social media, and text messages are widely used, they aren't ready to replace email as an essential business (and social) communication tool.

Given that email isn't going away, and that (as many studies suggest) people read their email on mobile devices, having a good mobile email client is critical. If you're an Android user who wants to use open source software, things get a bit tricky.

Here are four open source Android email clients to choose from. Two of them are available through [Google Play][2], Android's official app store. You can also find them in open source Android app repositories such as [Fossdroid][3] or [F-Droid][4]. (Download details for each app are listed below.)

### K-9 Mail

[K-9 Mail][5] has been around almost as long as Android itself — it grew out of a patch to the Android 1.0 email client. It supports IMAP and WebDAV, multiple accounts, attachments, emojis, and other classic email client features. Its [user documentation][6] offers help with installation, startup, security, reading and sending email, and more.

K-9 is open source under the [Apache 2.0][7] license, and the [source code][8] is available on GitHub. The app can be downloaded from [Google Play][9], [Amazon][10], and [F-Droid][11].

### p≡p

As its full name, "Pretty Easy Privacy", suggests, [p≡p][12] is all about private and secure communication. It provides automatic, end-to-end encryption of your emails and attachments (provided your recipients can also decrypt the encrypted mail — otherwise, p≡p warns you that your message will go out unencrypted).

You can get the [source code][13] (licensed under [GPLv3][14]) from GitLab, and you can find the [documentation][15] on the app's website. The app is available for free on [Fossdroid][16], or for a nominal fee on [Google Play][17].

### InboxPager

[InboxPager][18] lets you send and receive email over SSL/TLS, which means that if your email provider (such as Gmail) doesn't turn this on by default, you may have to do some setup. (Fortunately, InboxPager provides a [setup tutorial][19] for Gmail.) It also supports OpenPGP encryption via the OpenKeychain app.

InboxPager is licensed under [GPLv3][20], its source code is available on GitHub, and the app can be downloaded from [F-Droid][21].

### FairEmail

[FairEmail][22] is a minimalist email client whose feature set focuses on reading and writing messages, without any extras that might slow the client down. It supports multiple accounts, message threading, encryption, and more.

It's open source under [GPLv3][23], and the [source code][24] is available on GitHub. You can download FairEmail on [Fossdroid][25]; anyone interested in the Google Play version can get the app by [testing the software][26].

There are undoubtedly more open source Android email clients (or beefed-up versions of the ones above) out there — active developers are worth keeping an eye on. If you know of any good ones, share them with us in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/open-source-android-email-clients

Author: [Opensource.com][a]
Topic selection: [lujun9972][b]
Translator: [zianglei][c]
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com
[b]: https://github.com/lujun9972
[c]: https://github.com/zianglei
[1]: https://opensource.com/alternatives/trello
[2]: https://play.google.com/store
[3]: https://fossdroid.com/
[4]: https://f-droid.org/
[5]: https://k9mail.github.io/
[6]: https://k9mail.github.io/documentation.html
[7]: http://www.apache.org/licenses/LICENSE-2.0
[8]: https://github.com/k9mail/k-9
[9]: https://play.google.com/store/apps/details?id=com.fsck.k9
[10]: https://www.amazon.com/K-9-Dog-Walkers-Mail/dp/B004JK61K0/
[11]: https://f-droid.org/packages/com.fsck.k9/
[12]: https://www.pep.security/android.html.en
[13]: https://pep-security.lu/gitlab/android/pep
[14]: https://pep-security.lu/gitlab/android/pep/blob/feature/material/LICENSE
[15]: https://www.pep.security/docs/
[16]: https://fossdroid.com/a/p%E2%89%A1p.html
[17]: https://play.google.com/store/apps/details?id=security.pEp
[18]: https://github.com/itprojects/InboxPager
[19]: https://github.com/itprojects/InboxPager/blob/HEAD/README.md#gmail-configuration
[20]: https://github.com/itprojects/InboxPager/blob/c5641a6d644d001bd4cec520b5a96d7e588cb6ad/LICENSE
[21]: https://f-droid.org/en/packages/net.inbox.pager/
[22]: https://email.faircode.eu/
[23]: https://github.com/M66B/open-source-email/blob/master/LICENSE
[24]: https://github.com/M66B/open-source-email
[25]: https://fossdroid.com/a/fairemail.html
[26]: https://play.google.com/apps/testing/eu.faircode.email