diff --git a/published/20171202 Simulating the Altair.md b/published/20171202 Simulating the Altair.md new file mode 100644 index 0000000000..e59c3c913c --- /dev/null +++ b/published/20171202 Simulating the Altair.md @@ -0,0 +1,69 @@ +模拟 Altair 8800 计算机 +====== + +[Altair 8800][1] 是 1975 年发布的自建家用电脑套件。Altair 基本上是第一台个人电脑(PC),虽然“PC”这个名词要到好几年之后才出现。对 Dell、HP 或者 Macbook 而言,它是它们的亚当(或者夏娃)。 + +有人好心地为 Z80(一款与 Altair 的 Intel 8080 密切相关的处理器)编写了仿真器,还觉得它应该配上一个模拟 Altair 的控制面板。所以,如果你想知道 1975 年使用电脑是什么感觉,你可以在你的 Macbook 上运行 Altair: + +![Altair 8800][2] + +### 安装它 + +你可以从[这里][3]的 FTP 服务器下载 Z80 包。你要查找最新的 Z80 包版本,例如 `z80pack-1.26.tgz`。 + +首先解压文件: + +``` +$ tar -xvf z80pack-1.26.tgz +``` + +进入解压目录: + +``` +$ cd z80pack-1.26 +``` + +控制面板模拟基于名为 `frontpanel` 的库。你必须先编译该库。如果你进入 `frontpanel` 目录,你会发现 `README` 文件列出了这个库自己的依赖项。你在这里的经历几乎肯定会与我的不同,但也许我的折腾可以作为参考。我安装了这些依赖项,不过是通过 [Homebrew][4] 安装的。为了让库能够编译,我必须确保在 `Makefile.osx` 中将 `/usr/local/include` 添加到 Clang 的 include 路径中。 + +如果你确认依赖没有问题,那么你应该就能编译这个库(我们现在位于 `z80pack-1.26/frontpanel`): + +``` +$ make -f Makefile.osx ... +$ make -f Makefile.osx clean +``` + +你应该会得到 `libfrontpanel.so`。我把它拷贝到了 `/usr/local/lib`。 + +Altair 模拟器位于 `z80pack-1.26/altairsim` 下。你现在需要编译模拟器本身。进入 `z80pack-1.26/altairsim/srcsim` 并再次运行 `make`: + +``` +$ make -f Makefile.osx ... +$ make -f Makefile.osx clean +``` + +该过程将在 `z80pack-1.26/altairsim` 中创建一个名为 `altairsim` 的可执行文件。运行该可执行文件,你应该会看到标志性的 Altair 控制面板!
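为了方便复现,下面把上述构建步骤串成一个最小的示意脚本(编者补充的示例,仅作演示:假设在 macOS 上构建 `z80pack-1.26`,且与文中一样把库拷贝到 `/usr/local/lib`,请按实际环境调整):

```
#!/bin/sh
# 示意脚本:按本文步骤构建 z80pack 的 Altair 模拟器(假设 macOS 环境)
set -e

tar -xvf z80pack-1.26.tgz
cd z80pack-1.26

# 1. 先编译控制面板库 frontpanel
cd frontpanel
make -f Makefile.osx
make -f Makefile.osx clean
sudo cp libfrontpanel.so /usr/local/lib/   # 假设安装到 /usr/local/lib

# 2. 再编译 Altair 模拟器本身
cd ../altairsim/srcsim
make -f Makefile.osx
make -f Makefile.osx clean

# 3. 运行生成在 altairsim 目录下的可执行文件
cd ..
./altairsim
```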
+ +如果你想要探究,请阅读原始的 [Altair 手册][5]。 + +如果你喜欢这篇文章,我们每两周更新一次!在 Twitter 上关注 [@TwoBitHistory][6] 或订阅 [RSS 源][7]了解什么时候有新文章。 + +-------------------------------------------------------------------------------- + +via: https://twobithistory.org/2017/12/02/simulating-the-altair.html + +作者:[Two-Bit History][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twobithistory.org +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Altair_8800 +[2]: https://www.autometer.de/unix4fun/z80pack/altair.png +[3]: http://www.autometer.de/unix4fun/z80pack/ftp/ +[4]: http://brew.sh/ +[5]: http://www.classiccmp.org/dunfield/altair/d/88opman.pdf +[6]: https://twitter.com/TwoBitHistory +[7]: https://twobithistory.org/feed.xml diff --git a/translated/tech/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md b/published/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md similarity index 69% rename from translated/tech/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md rename to published/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md index 898955242a..a34c575261 100644 --- a/translated/tech/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md +++ b/published/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md @@ -1,4 +1,4 @@ -Flameshot – 一个简洁但功能丰富的截图工具 +Flameshot:一个简洁但功能丰富的截图工具 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/09/Flameshot-720x340.png) @@ -10,11 +10,13 @@ Flameshot – 一个简洁但功能丰富的截图工具 **在 Arch Linux 上:** Flameshot 可以从 Arch Linux 的 [community] 仓库中获取。确保你已经启用了 community 仓库,然后就可以像下面展示的那样使用 pacman 来安装 Flameshot: + ``` $ sudo pacman -S flameshot ``` 它也可以从 [**AUR**][1] 中获取,所以你还可以使用任意一个 AUR 帮助程序(例如 [**Yay**][2])来在基于 Arch 的系统中安装它: + ``` $ yay -S flameshot-git ``` @@ -26,6 +28,7 @@ $ sudo dnf install flameshot ``` 在 **Debian 10+** 和 **Ubuntu 18.04+** 中,可以使用 APT 包管理器来安装它: + ``` $ sudo apt install flameshot ``` @@ -35,97 +38,105 @@ $ sudo apt install flameshot ``` $ sudo zypper install flameshot ``` + 在其他的 Linux 发行版中,可以从源代码编译并安装它。编译过程中需要 **Qt version 5.3** 以及 **GCC 4.9.2** 或者它们的更高版本。 ### 使用 -可以从菜单或者应用启动器中启动 Flameshot。在 MATE 桌面环境,它通常可以在 **Applications - > Graphics** 下找到。 +可以从菜单或者应用启动器中启动 Flameshot。在 MATE 桌面环境,它通常可以在 “Applications -> Graphics” 下找到。 一旦打开了它,你就可以在系统面板中看到 Flameshot 的托盘图标。 **注意:** -假如你使用 Gnome 桌面环境,为了能够看到系统托盘图标,你需要安装 [TopIcons][3] 扩展。 +假如你使用 Gnome 桌面环境,为了能够看到系统托盘图标,你需要安装 [TopIcons][3] 扩展。 在 Flameshot 托盘图标上右击,你便会看到几个菜单项,例如打开配置窗口、信息窗口以及退出该应用。 -要进行截图,只需要点击托盘图标就可以了。接着你将看到如何使用 Flameshot 的帮助窗口。选择一个截图区域,然后敲 **ENTER** 键便可以截屏了,点击右键便可以看到颜色拾取器,再敲空格键便可以查看屏幕侧边的面板。你可以使用鼠标的滚轮来增加或者减少指针的宽度。 +要进行截图,只需要点击托盘图标就可以了。接着你将看到如何使用 Flameshot 的帮助窗口。选择一个截图区域,然后敲回车键便可以截屏了,点击右键便可以看到颜色拾取器,再敲空格键便可以查看屏幕侧边的面板。你可以使用鼠标的滚轮来增加或者减少指针的宽度。 Flameshot 自带一系列非常好的功能,例如: - * 可以进行手写 - * 可以划直线 - * 可以画长方形或者圆形框 - * 可以进行长方形区域选择 - * 可以画箭头 - * 可以对要点进行标注 - * 可以添加文本 - * 可以对图片或者文字进行模糊处理 - * 可以展示图片的尺寸大小 - * 在编辑图片是可以进行撤销和重做操作 - * 可以将选择的东西复制到剪贴板 - * 可以保存选择 - * 可以离开截屏 - * 可以选择另一个 app 来打开图片 - * 可以上传图片到 imgur 网站 - * 可以将图片固定到桌面上 +* 可以进行手写 +* 可以划直线 +* 可以画长方形或者圆形框 +* 可以进行长方形区域选择 +* 可以画箭头 +* 可以对要点进行标注 +* 可以添加文本 +* 可以对图片或者文字进行模糊处理 +* 可以展示图片的尺寸大小 +* 在编辑图片时可以进行撤销和重做操作 +* 可以将选择的东西复制到剪贴板 +* 可以保存选区 +* 可以退出截屏 +* 可以选择另一个 app 来打开图片 +* 可以上传图片到 imgur 网站 +* 可以将图片固定到桌面上 下面是一个示例的视频: -**快捷键** +### 快捷键 -Frameshot 也支持快捷键。在 Flameshot 的托盘图标上右击并点击 **Information** 窗口便可以看到在 GUI 模式下所有可用的快捷键。下面是在 GUI 模式下可用的快捷键清单: +Flameshot 也支持快捷键。在 Flameshot 的托盘图标上右击并点击 “Information” 窗口便可以看到在 GUI 模式下所有可用的快捷键。下面是在 GUI 模式下可用的快捷键清单: | 快捷键 | 描述 | |------------------------|------------------------------| -| ←, ↓, ↑, → | 移动选择区域 1px | -| Shift + ←, ↓, ↑, → | 将选择区域大小更改 1px | -| Esc | 退出截图 | -| Ctrl + C | 复制到粘贴板 | -| Ctrl + S | 将选择区域保存为文件 | -| Ctrl + Z | 撤销最近的一次操作 | -| Right Click | 展示颜色拾取器 | -| Mouse Wheel | 改变工具的宽度 | +| `←`、`↓`、`↑`、`→` | 移动选择区域 1px | +| `Shift` + `←`、`↓`、`↑`、`→` | 将选择区域大小更改 1px | +| `Esc` | 退出截图 | +| `Ctrl` + `C` | 复制到剪贴板 | +| `Ctrl` + `S` | 将选择区域保存为文件 | +| `Ctrl` + `Z` | 撤销最近的一次操作 | +| 鼠标右键 | 展示颜色拾取器 | +| 鼠标滚轮 | 改变工具的宽度 | -边按住 Shift 键并拖动选择区域的其中一个控制点将会对它相反方向的控制点做类似的拖放操作。 +边按住 `Shift` 键并拖动选择区域的其中一个控制点将会对它相反方向的控制点做类似的拖放操作。 -**命令行选项** +### 命令行选项 Flameshot 也支持一系列的命令行选项来延时截图和保存图片到自定义的路径。 要使用 Flameshot GUI 模式,运行: + ``` $ flameshot gui ``` 要使用 GUI 模式截屏并将你选取的区域保存到一个自定义的路径,运行: + ``` $ flameshot gui -p ~/myStuff/captures ``` 要延时 2 秒后打开 GUI 模式可以使用: + ``` $ flameshot gui -d 2000 ``` 要延时 2 秒并将截图保存到一个自定义的路径(无 GUI)可以使用: + ``` $ flameshot full -p ~/myStuff/captures -d 2000 ``` 要截图全屏并保存到自定义的路径和剪贴板中使用: + ``` $ flameshot full -c -p ~/myStuff/captures ``` -要在截屏中包含鼠标并将图片保存为 **PNG** 格式可以使用: +要在截屏中包含鼠标并将图片保存为 PNG 格式可以使用: + ``` $ flameshot screen -r ``` 要对屏幕 1 进行截屏并将截屏复制到剪贴板中可以运行: + ``` $ flameshot screen -n 1 -c ``` @@ -143,7 +154,7 @@ via: https://www.ostechnix.com/flameshot-a-simple-yet-powerful-feature-rich-scre 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20140607 Five things that make Go fast.md b/published/201810/20140607 Five things that make Go fast.md similarity index 100% rename from published/20140607 Five things that make Go fast.md rename to published/201810/20140607 Five things that make Go fast.md diff --git a/translated/tech/20161014 Compiling Lisp to JavaScript From Scratch in
350 LOC.md similarity index 92% rename from translated/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md rename to published/201810/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md index 2b3a558191..0667575e63 100644 --- a/translated/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md +++ b/published/201810/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md @@ -1,28 +1,23 @@ -# 用 350 行代码从零开始,将 Lisp 编译成 JavaScript +用 350 行代码从零开始,将 Lisp 编译成 JavaScript +====== -我们将会在本篇文章中看到从零开始实现的编译器,将简单的类 LISP 计算语言编译成 JavaScript。完整的源代码在 [这里][7]. +我们将会在本篇文章中看到从零开始实现的编译器,将简单的类 LISP 计算语言编译成 JavaScript。完整的源代码在 [这里][7]。 我们将会: 1. 自定义语言,并用它编写一个简单的程序 - 2. 实现一个简单的解析器组合器 - 3. 为该语言实现一个解析器 - 4. 为该语言实现一个美观的打印器 - -5. 为我们的需求定义 JavaScript 的一个子集 - +5. 为我们的用途定义 JavaScript 的一个子集 6. 实现代码转译器,将代码转译成我们定义的 JavaScript 子集 - 7. 把所有东西整合在一起 开始吧! -### 1. 定义语言 +### 1、定义语言 -lisps 最迷人的地方在于,它们的语法就是树状表示的,这就是这门语言很容易解析的原因。我们很快就能接触到它。但首先让我们把自己的语言定义好。关于我们语言的语法的范式(BNF)描述如下: +Lisp 族语言最迷人的地方在于,它们的语法就是树状表示的,这就是这门语言很容易解析的原因。我们很快就能接触到它。但首先让我们把自己的语言定义好。关于我们语言的语法的范式(BNF)描述如下: ``` program ::= expr @@ -35,17 +30,17 @@ expr ::= | | ([]) 该语言中,我们保留一些内建的特殊形式,这样我们就能做一些更有意思的事情: -* let 表达式使我们可以在它的 body 环境中引入新的变量。语法如下: +* `let` 表达式使我们可以在它的 `body` 环境中引入新的变量。语法如下: -``` + ``` let ::= (let ([]) ) letargs ::= ( ) body ::= ``` -* lambda 表达式:也就是匿名函数定义。语法如下: +* `lambda` 表达式:也就是匿名函数定义。语法如下: -``` + ``` lambda ::= (lambda ([]) ) ``` @@ -94,12 +89,11 @@ data Atom 另一件你想做的事情可能是在语法中添加一些注释信息。比如定位:`Expr` 是来自哪个文件的,具体到这个文件的哪一行哪一列。你可以在后面的阶段中使用这一特性,打印出错误定位,即使它们不是处于解析阶段。 * _练习 1_:添加一个 `Program` 数据类型,可以按顺序包含多个 `Expr` - * _练习 2_:向语法树中添加一个定位注解。 -### 2. 实现一个简单的解析器组合库 +### 2、实现一个简单的解析器组合库 -我们要做的第一件事情是定义一个嵌入式领域专用语言(Embedded Domain Specific Language 或者 EDSL),我们会用它来定义我们的语言解析器。这常常被称为解析器组合库。我们做这件事完全是出于学习的目的,Haskell 里有很好的解析库,在实际构建软件或者进行实验时,你应该使用它们。[megaparsec][8] 就是这样的一个库。 +我们要做的第一件事情是定义一个嵌入式领域专用语言Embedded Domain Specific Language(EDSL),我们会用它来定义我们的语言解析器。这常常被称为解析器组合库。我们做这件事完全是出于学习的目的,Haskell 里有很好的解析库,在实际构建软件或者进行实验时,你应该使用它们。[megaparsec][8] 就是这样的一个库。 首先我们来谈谈解析库的实现的思路。本质上,我们的解析器就是一个函数,接受一些输入,可能会读取输入的一些或全部内容,然后返回解析出来的值和无法解析的输入部分,或者在解析失败时抛出异常。我们把它写出来。 @@ -114,7 +108,6 @@ data ParseError = ParseError ParseString Error type Error = String - ``` 这里我们定义了三个主要的新类型。 @@ -124,9 +117,7 @@ type Error = String 第二个,`ParseString` 是我们的输入或携带的状态。它有三个重要的部分: * `Name`: 这是源的名字 - * `(Int, Int)`: 这是源的当前位置 - * `String`: 这是等待解析的字符串 第三个,`ParseError` 包含了解析器的当前状态和一个错误信息。 @@ -180,13 +171,11 @@ instance Monad Parser where Right (rs, rest) -> case f rs of Parser parser -> parser rest - ``` 接下来,让我们定义一种的方式,用于运行解析器和防止失败的助手函数: ``` - runParser :: String -> String -> Parser a -> Either ParseError (a, ParseString) runParser name str (Parser parser) = parser $ ParseString name (0,0) str @@ -237,7 +226,6 @@ many parser = go [] many1 :: Parser a -> Parser [a] many1 parser = (:) <$> parser <*> many parser - ``` 下面的这些解析器通过我们定义的组合器来实现一些特殊的解析器: @@ -273,14 +261,13 @@ sepBy sep parser = do frst <- optional parser rest <- many (sep *> parser) pure $ maybe rest (:rest) frst - ``` 现在为该门语言定义解析器所需要的所有东西都有了。 -* _练习_ :实现一个 EOF(end of file/input,即文件或输入终止符)解析器组合器。 +* _练习_ :实现一个 EOF(end of file/input,即文件或输入终止符)解析器组合器。 -### 3. 
为我们的语言实现解析器 +### 3、为我们的语言实现解析器 我们会用自顶而下的方法定义解析器。 @@ -296,7 +283,6 @@ parseAtom = parseSymbol <|> parseInt parseSymbol :: Parser Atom parseSymbol = fmap Symbol parseName - ``` 注意到这四个函数是在我们这门语言中属于高阶描述。这解释了为什么 Haskell 执行解析工作这么棒。在定义完高级部分后,我们还需要定义低级别的 `parseName` 和 `parseInt`。 @@ -311,7 +297,7 @@ parseName = do pure (c:cs) ``` -整数是一系列数字,数字前面可能有负号 ‘-’: +整数是一系列数字,数字前面可能有负号 `-`: ``` parseInt :: Parser Atom @@ -333,12 +319,10 @@ runExprParser name str = ``` * _练习 1_ :为第一节中定义的 `Program` 类型编写一个解析器 - * _练习 2_ :用 Applicative 的形式重写 `parseName` - * _练习 3_ :`parseInt` 可能出现溢出情况,找到处理它的方法,不要用 `read`。 -### 4. 为这门语言实现一个更好看的输出器 +### 4、为这门语言实现一个更好看的输出器 我们还想做一件事,将我们的程序以源代码的形式打印出来。这对完善错误信息很有用。 @@ -372,7 +356,7 @@ indent tabs e = concat (replicate tabs " ") ++ e 好,目前为止我们写了近 200 行代码,这些代码一般叫做编译器的前端。我们还要写大概 150 行代码,用来执行三个额外的任务:我们需要根据需求定义一个 JS 的子集,定义一个将我们的语言转译成这个子集的转译器,最后把所有东西整合在一起。开始吧。 -### 5. 根据需求定义 JavaScript 的子集 +### 5、根据需求定义 JavaScript 的子集 首先,我们要定义将要使用的 JavaScript 的子集: @@ -411,10 +395,9 @@ printJSExpr doindent tabs = \case ``` * _练习 1_ :添加 `JSProgram` 类型,它可以包含多个 `JSExpr` ,然后创建一个叫做 `printJSExprProgram` 的函数来生成代码。 - * _练习 2_ :添加 `JSExpr` 的新类型:`JSIf`,并为其生成代码。 -### 6. 实现到我们定义的 JavaScript 子集的代码转译器 +### 6、实现到我们定义的 JavaScript 子集的代码转译器 我们快做完了。这一节将会创建函数,将 `Expr` 转译成 `JSExpr`。 @@ -437,7 +420,6 @@ translateList = \case f xs f:xs -> JSFunCall <$> translateToJS f <*> traverse translateToJS xs - ``` `builtins` 是一系列要转译的特例,就像 `lambada` 和 `let`。每一种情况都可以获得一系列参数,验证它是否合乎语法规范,然后将其转译成等效的 `JSExpr`。 @@ -456,7 +438,6 @@ builtins = ,("div", transBinOp "div" "/") ,("print", transPrint) ] - ``` 我们这种情况,会将内建的特殊形式当作特殊的、非第一类的进行对待,因此不可能将它们当作第一类函数。 @@ -480,10 +461,9 @@ transLambda = \case fromSymbol :: Expr -> Either String Name fromSymbol (ATOM (Symbol s)) = Right s fromSymbol e = Left $ "cannot bind value to non symbol type: " ++ show e - ``` -我们会将 let 转译成带有相关名字参数的函数定义,然后带上参数调用函数,因此会在这一作用域中引入变量: +我们会将 `let` 转译成带有相关名字参数的函数定义,然后带上参数调用函数,因此会在这一作用域中引入变量: ``` transLet :: [Expr] -> Either TransError JSExpr @@ -522,35 +502,27 @@ transBinOp _ f list = foldl1 (JSBinOp f) <$> traverse translateToJS list transPrint :: [Expr] -> Either TransError JSExpr transPrint [expr] = JSFunCall (JSSymbol "console.log") . (:[]) <$> translateToJS expr transPrint xs = Left $ "Syntax error. print expected 1 arguments, got: " ++ show (length xs) - ``` 注意,如果我们将这些代码当作 `Expr` 的特例进行解析,那我们就可能会跳过语法验证。 * _练习 1_ :将 `Program` 转译成 `JSProgram` - * _练习 2_ :为 `if Expr Expr Expr` 添加一个特例,并将它转译成你在上一次练习中实现的 `JSIf` 条件语句。 -### 7. 把所有东西整合到一起 +### 7、把所有东西整合到一起 最终,我们将会把所有东西整合到一起。我们会: 1. 读取文件 - 2. 将文件解析成 `Expr` - 3. 将文件转译成 `JSExpr` - 4. 
将 JavaScript 代码发送到标准输出流 我们还会启用一些用于测试的标志位: * `--e` 将进行解析并打印出表达式的抽象表示(`Expr`) - * `--pp` 将进行解析,美化输出 - * `--jse` 将进行解析、转译、并打印出生成的 JS 表达式(`JSExpr`)的抽象表示 - * `--ppc` 将进行解析,美化输出并进行编译 ``` @@ -616,10 +588,10 @@ undefined via: https://gilmi.me/blog/post/2016/10/14/lisp-to-js -作者:[ Gil Mizrahi ][a] +作者:[Gil Mizrahi][a] 选题:[oska874][b] 译者:[BriFuture](https://github.com/BriFuture) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20170810 How we built our first full-stack JavaScript web app in three weeks.md b/published/201810/20170810 How we built our first full-stack JavaScript web app in three weeks.md similarity index 100% rename from published/20170810 How we built our first full-stack JavaScript web app in three weeks.md rename to published/201810/20170810 How we built our first full-stack JavaScript web app in three weeks.md diff --git a/published/20170926 Managing users on Linux systems.md b/published/201810/20170926 Managing users on Linux systems.md similarity index 100% rename from published/20170926 Managing users on Linux systems.md rename to published/201810/20170926 Managing users on Linux systems.md diff --git a/published/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md b/published/201810/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md similarity index 100% rename from published/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md rename to published/201810/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md diff --git a/published/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md b/published/201810/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md similarity index 100% rename from published/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md rename to published/201810/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md diff --git a/published/20171204 Improve your Bash scripts with Argbash.md b/published/201810/20171204 Improve your Bash scripts with Argbash.md similarity index 100% rename from published/20171204 Improve your Bash scripts with Argbash.md rename to published/201810/20171204 Improve your Bash scripts with Argbash.md diff --git a/published/20171208 24 Must Have Essential Linux Applications In 2017.md b/published/201810/20171208 24 Must Have Essential Linux Applications In 2017.md similarity index 100% rename from published/20171208 24 Must Have Essential Linux Applications In 2017.md rename to published/201810/20171208 24 Must Have Essential Linux Applications In 2017.md diff --git a/published/20171214 Peeking into your Linux packages.md b/published/201810/20171214 Peeking into your Linux packages.md similarity index 100% rename from published/20171214 Peeking into your Linux packages.md rename to published/201810/20171214 Peeking into your Linux packages.md diff --git a/published/20180105 The Best Linux Distributions for 2018.md b/published/201810/20180105 The Best Linux Distributions for 2018.md similarity index 100% rename from published/20180105 The Best Linux Distributions for 2018.md rename to published/201810/20180105 The Best Linux Distributions for 2018.md diff --git a/published/20180117 How to get into DevOps.md b/published/201810/20180117 How to get into DevOps.md similarity index 100% rename from published/20180117 
How to get into DevOps.md rename to published/201810/20180117 How to get into DevOps.md diff --git a/published/20180123 Moving to Linux from dated Windows machines.md b/published/201810/20180123 Moving to Linux from dated Windows machines.md similarity index 100% rename from published/20180123 Moving to Linux from dated Windows machines.md rename to published/201810/20180123 Moving to Linux from dated Windows machines.md diff --git a/published/20180201 Conditional Rendering in React using Ternaries and.md b/published/201810/20180201 Conditional Rendering in React using Ternaries and.md similarity index 100% rename from published/20180201 Conditional Rendering in React using Ternaries and.md rename to published/201810/20180201 Conditional Rendering in React using Ternaries and.md diff --git a/published/20180201 Rock Solid React.js Foundations A Beginners Guide.md b/published/201810/20180201 Rock Solid React.js Foundations A Beginners Guide.md similarity index 100% rename from published/20180201 Rock Solid React.js Foundations A Beginners Guide.md rename to published/201810/20180201 Rock Solid React.js Foundations A Beginners Guide.md diff --git a/published/20180329 How to configure multiple websites with Apache web server.md b/published/201810/20180329 How to configure multiple websites with Apache web server.md similarity index 100% rename from published/20180329 How to configure multiple websites with Apache web server.md rename to published/201810/20180329 How to configure multiple websites with Apache web server.md diff --git a/published/20180412 A Desktop GUI Application For NPM.md b/published/201810/20180412 A Desktop GUI Application For NPM.md similarity index 100% rename from published/20180412 A Desktop GUI Application For NPM.md rename to published/201810/20180412 A Desktop GUI Application For NPM.md diff --git a/published/20180413 The df Command Tutorial With Examples For Beginners.md b/published/201810/20180413 The df Command Tutorial With Examples For Beginners.md similarity index 100% rename from published/20180413 The df Command Tutorial With Examples For Beginners.md rename to published/201810/20180413 The df Command Tutorial With Examples For Beginners.md diff --git a/published/20180522 Free Resources for Securing Your Open Source Code.md b/published/201810/20180522 Free Resources for Securing Your Open Source Code.md similarity index 100% rename from published/20180522 Free Resources for Securing Your Open Source Code.md rename to published/201810/20180522 Free Resources for Securing Your Open Source Code.md diff --git a/published/20180528 What is behavior-driven Python.md b/published/201810/20180528 What is behavior-driven Python.md similarity index 100% rename from published/20180528 What is behavior-driven Python.md rename to published/201810/20180528 What is behavior-driven Python.md diff --git a/published/20180531 How to create shortcuts in vi.md b/published/201810/20180531 How to create shortcuts in vi.md similarity index 100% rename from published/20180531 How to create shortcuts in vi.md rename to published/201810/20180531 How to create shortcuts in vi.md diff --git a/published/20180601 Download an OS with GNOME Boxes.md b/published/201810/20180601 Download an OS with GNOME Boxes.md similarity index 100% rename from published/20180601 Download an OS with GNOME Boxes.md rename to published/201810/20180601 Download an OS with GNOME Boxes.md diff --git a/translated/tech/20180615 How To Rename Multiple Files At Once In Linux.md b/published/201810/20180615 How To Rename 
Multiple Files At Once In Linux.md similarity index 51% rename from translated/tech/20180615 How To Rename Multiple Files At Once In Linux.md rename to published/201810/20180615 How To Rename Multiple Files At Once In Linux.md index 14f16b3eb6..05916fb914 100644 --- a/translated/tech/20180615 How To Rename Multiple Files At Once In Linux.md +++ b/published/201810/20180615 How To Rename Multiple Files At Once In Linux.md @@ -3,11 +3,11 @@ ![](https://www.ostechnix.com/wp-content/uploads/2018/06/Rename-Multiple-Files-720x340.png) -你可能已经知道,我们使用 mv 命令在类 Unix 操作系统中重命名或者移动文件和目录。 但是,mv 命令不支持一次重命名多个文件。 不用担心。 在本教程中,我们将学习使用 Linux 中的 “mmv” 命令一次重命名多个文件。 此命令用于在类 Unix 操作系统中使用标准通配符批量移动,复制,追加和重命名文件。 +你可能已经知道,我们使用 `mv` 命令在类 Unix 操作系统中重命名或者移动文件和目录。 但是,`mv` 命令不支持一次重命名多个文件。 不用担心。 在本教程中,我们将学习使用 Linux 中的 `mmv` 命令一次重命名多个文件。 此命令用于在类 Unix 操作系统中使用标准通配符批量移动、复制、追加和重命名文件。 ### 在 Linux 中一次重命名多个文件 -mmv 程序可在基于 Debian 的系统的默认仓库中使用。 要想在 Debian,Ubuntu,Linux Mint 上安装它,请运行以下命令: +`mmv` 程序可在基于 Debian 的系统的默认仓库中使用。 要想在 Debian、Ubuntu、Linux Mint 上安装它,请运行以下命令: ``` $ sudo apt-get install mmv @@ -20,7 +20,7 @@ $ ls a1.txt a2.txt a3.txt ``` -现在,你想要将所有以字母 “a” 开头的文件重命名为以 “b” 开头的。 当然,你可以在几秒钟内手动执行此操作。 但是想想你是否有数百个文件想要重命名? 这是一个非常耗时的过程。 这时候 **mmv** 命令就很有帮助了。 +现在,你想要将所有以字母 “a” 开头的文件重命名为以 “b” 开头的。 当然,你可以在几秒钟内手动执行此操作。 但是想想你是否有数百个文件想要重命名? 这是一个非常耗时的过程。 这时候 `mmv` 命令就很有帮助了。 要将所有以字母 “a” 开头的文件重命名为以字母 “b” 开头的,只需要运行: @@ -33,22 +33,20 @@ $ mmv a\* b\#1 ``` $ ls b1.txt b2.txt b3.txt - ``` -如你所见,所有以字母 “a” 开头的文件(即 a1.txt,a2.txt,a3.txt)都重命名为 b1.txt,b2.txt,b3.txt。 +如你所见,所有以字母 “a” 开头的文件(即 `a1.txt`、`a2.txt`、`a3.txt`)都重命名为 `b1.txt`、`b2.txt`、`b3.txt`。 **解释** -在上面的例子中,第一个参数(a\\*)是 'from' 模式,第二个参数是 'to' 模式(b\\#1)。根据上面的例子,mmv 将查找任何以字母 'a' 开头的文件名,并根据第二个参数重命名匹配的文件,即 'to' 模式。我们使用通配符,例如用 '*','?' 和 '[]' 来匹配一个或多个任意字符。请注意,你必须避免使用通配符,否则它们将被 shell 扩展,mmv 将无法理解。 +在上面的例子中,第一个参数(`a\*`)是 “from” 模式,第二个参数是 “to” 模式(`b\#1`)。根据上面的例子,`mmv` 将查找任何以字母 “a” 开头的文件名,并根据第二个参数重命名匹配的文件,即 “to” 模式。我们可以使用通配符,例如用 `*`、`?` 和 `[]` 来匹配一个或多个任意字符。请注意,你必须转义使用通配符,否则它们将被 shell 扩展,`mmv` 将无法理解。 -'to' 模式中的 '#1' 是通配符索引。它匹配 'from' 模式中的第一个通配符。 'to' 模式中的 '#2' 将匹配第二个通配符,依此类推。在我们的例子中,我们只有一个通配符(星号),所以我们写了一个 #1。并且,哈希标志也应该被转义。此外,你也可以用引号括起模式。 +“to” 模式中的 `#1` 是通配符索引。它匹配 “from” 模式中的第一个通配符。 “to” 模式中的 `#2` 将匹配第二个通配符(如果有的话),依此类推。在我们的例子中,我们只有一个通配符(星号),所以我们写了一个 `#1`。并且,`#` 符号也应该被转义。此外,你也可以用引号括起模式。 -你甚至可以将具有特定扩展名的所有文件重命名为其他扩展名。例如,要将当前目录中的所有 **.txt** 文件重命名为 **.doc** 文件格式,只需运行: +你甚至可以将具有特定扩展名的所有文件重命名为其他扩展名。例如,要将当前目录中的所有 `.txt` 文件重命名为 `.doc` 文件格式,只需运行: ``` $ mmv \*.txt \#1.doc - ``` 这是另一个例子。 我们假设你有以下文件。 @@ -56,16 +54,14 @@ $ mmv \*.txt \#1.doc ``` $ ls abcd1.txt abcd2.txt abcd3.txt - ``` -你希望在当前目录下的所有文件中将第一次出现的 **abc** 替换为 **xyz**。 你会怎么做呢? +你希望在当前目录下的所有文件中将第一次出现的 “abc” 替换为 “xyz”。 你会怎么做呢? 很简单。 ``` $ mmv '*abc*' '#1xyz#2' - ``` 请注意,在上面的示例中,模式被单引号括起来了。 @@ -75,77 +71,74 @@ $ mmv '*abc*' '#1xyz#2' ``` $ ls xyzd1.txt xyzd2.txt xyzd3.txt - ``` -看到没? 文件 **abcd1.txt**,**abcd2.txt** 和 **abcd3.txt** 已经重命名为 **xyzd1.txt**,**xyzd2.txt** 和 **xyzd3.txt**。 +看到没? 
文件 `abcd1.txt`、`abcd2.txt` 和 `abcd3.txt` 已经重命名为 `xyzd1.txt`、`xyzd2.txt` 和 `xyzd3.txt`。 -mmv 命令的另一个值得注意的功能是你可以使用 **-n** 选项打印输出而不是重命名文件,如下所示。 +`mmv` 命令的另一个值得注意的功能是你可以使用 `-n` 选项打印输出而不是重命名文件,如下所示。 ``` $ mmv -n a\* b\#1 a1.txt -> b1.txt a2.txt -> b2.txt a3.txt -> b3.txt - ``` -这样,你可以在重命名文件之前简单地验证 mmv 命令实际执行的操作。 +这样,你可以在重命名文件之前简单地验证 `mmv` 命令实际执行的操作。 有关更多详细信息,请参阅 man 页面。 ``` $ man mmv - ``` -**更新:** +### 更新:Thunar 文件管理器 -**Thunar 文件管理器**默认具有内置**批量重命名**选项。 如果你正在使用thunar,那么重命名文件要比使用mmv命令容易得多。 +**Thunar 文件管理器**默认具有内置**批量重命名**选项。 如果你正在使用 Thunar,那么重命名文件要比使用 `mmv` 命令容易得多。 -Thunar在大多数Linux发行版的默认仓库库中都可用。 +Thunar 在大多数 Linux 发行版的默认仓库库中都可用。 -要在基于Arch的系统上安装它,请运行: +要在基于 Arch 的系统上安装它,请运行: ``` $ sudo pacman -S thunar ``` -在 RHEL,CentOS 上: +在 RHEL、CentOS 上: + ``` $ sudo yum install thunar ``` 在 Fedora 上: + ``` $ sudo dnf install thunar - ``` 在 openSUSE 上: + ``` $ sudo zypper install thunar - ``` -在 Debian,Ubuntu,Linux Mint 上: +在 Debian、Ubuntu、Linux Mint 上: + ``` $ sudo apt-get install thunar - ``` 安装后,你可以从菜单或应用程序启动器中启动批量重命名程序。 要从终端启动它,请使用以下命令: ``` $ thunar -B - ``` -批量重命名就是这么回事。 +批量重命名方式如下。 ![][1] -单击加号,然后选择要重命名的文件列表。 批量重命名可以重命名文件的名称,文件的后缀或者同事重命名文件的名称和后缀。 Thunar 目前支持以下批量重命名: +单击“+”,然后选择要重命名的文件列表。 批量重命名可以重命名文件的名称、文件的后缀或者同时重命名文件的名称和后缀。 Thunar 目前支持以下批量重命名: - 插入日期或时间 - 插入或覆盖 @@ -158,9 +151,9 @@ $ thunar -B ![][2] -选择条件后,单击**重命名文件**选项来重命名文件。 +选择条件后,单击“重命名文件”选项来重命名文件。 -你还可以通过选择两个或更多文件从 Thunar 中打开批量重命名器。 选择文件后,按F2或右键单击并选择**重命名**。 +你还可以通过选择两个或更多文件从 Thunar 中打开批量重命名器。 选择文件后,按 F2 或右键单击并选择“重命名”。 嗯,这就是本次的所有内容了。希望有所帮助。更多干货即将到来。敬请关注! @@ -173,10 +166,10 @@ via: https://www.ostechnix.com/how-to-rename-multiple-files-at-once-in-linux/ 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[Flowsnow](https://github.com/Flowsnow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://www.ostechnix.com/author/sk/ [1]: http://www.ostechnix.com/wp-content/uploads/2018/06/bulk-rename.png -[2]: http://www.ostechnix.com/wp-content/uploads/2018/06/bulk-rename-1.png \ No newline at end of file +[2]: http://www.ostechnix.com/wp-content/uploads/2018/06/bulk-rename-1.png diff --git a/published/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md b/published/201810/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md similarity index 100% rename from published/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md rename to published/201810/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md diff --git a/published/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md b/published/201810/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md similarity index 100% rename from published/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md rename to published/201810/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md diff --git a/published/20180709 How To Configure SSH Key-based Authentication In Linux.md b/published/201810/20180709 How To Configure SSH Key-based Authentication In Linux.md similarity index 100% rename from published/20180709 How To Configure SSH Key-based Authentication In Linux.md rename to published/201810/20180709 How To Configure SSH Key-based Authentication In Linux.md diff --git a/published/20180715 Why is Python so slow.md b/published/201810/20180715 Why is Python so slow.md similarity index 100% rename from 
published/20180715 Why is Python so slow.md rename to published/201810/20180715 Why is Python so slow.md diff --git a/published/20180724 75 Most Used Essential Linux Applications of 2018.md b/published/201810/20180724 75 Most Used Essential Linux Applications of 2018.md similarity index 100% rename from published/20180724 75 Most Used Essential Linux Applications of 2018.md rename to published/201810/20180724 75 Most Used Essential Linux Applications of 2018.md diff --git a/published/20180724 Building a network attached storage device with a Raspberry Pi.md b/published/201810/20180724 Building a network attached storage device with a Raspberry Pi.md similarity index 100% rename from published/20180724 Building a network attached storage device with a Raspberry Pi.md rename to published/201810/20180724 Building a network attached storage device with a Raspberry Pi.md diff --git a/published/20180803 5 Essential Tools for Linux Development.md b/published/201810/20180803 5 Essential Tools for Linux Development.md similarity index 100% rename from published/20180803 5 Essential Tools for Linux Development.md rename to published/201810/20180803 5 Essential Tools for Linux Development.md diff --git a/published/20180810 How To Remove Or Disable Ubuntu Dock.md b/published/201810/20180810 How To Remove Or Disable Ubuntu Dock.md similarity index 100% rename from published/20180810 How To Remove Or Disable Ubuntu Dock.md rename to published/201810/20180810 How To Remove Or Disable Ubuntu Dock.md diff --git a/published/20180813 5 of the Best Linux Educational Software and Games for Kids.md b/published/201810/20180813 5 of the Best Linux Educational Software and Games for Kids.md similarity index 100% rename from published/20180813 5 of the Best Linux Educational Software and Games for Kids.md rename to published/201810/20180813 5 of the Best Linux Educational Software and Games for Kids.md diff --git a/published/20180814 Automating backups on a Raspberry Pi NAS.md b/published/201810/20180814 Automating backups on a Raspberry Pi NAS.md similarity index 100% rename from published/20180814 Automating backups on a Raspberry Pi NAS.md rename to published/201810/20180814 Automating backups on a Raspberry Pi NAS.md diff --git a/published/20180815 How to Create M3U Playlists in Linux [Quick Tip].md b/published/201810/20180815 How to Create M3U Playlists in Linux [Quick Tip].md similarity index 100% rename from published/20180815 How to Create M3U Playlists in Linux [Quick Tip].md rename to published/201810/20180815 How to Create M3U Playlists in Linux [Quick Tip].md diff --git a/published/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md b/published/201810/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md similarity index 100% rename from published/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md rename to published/201810/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md diff --git a/published/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md b/published/201810/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md similarity index 100% rename from published/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md rename to published/201810/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md diff --git 
a/published/20180821 A checklist for submitting your first Linux kernel patch.md b/published/201810/20180821 A checklist for submitting your first Linux kernel patch.md similarity index 100% rename from published/20180821 A checklist for submitting your first Linux kernel patch.md rename to published/201810/20180821 A checklist for submitting your first Linux kernel patch.md diff --git a/published/20180823 CLI- improved.md b/published/201810/20180823 CLI- improved.md similarity index 100% rename from published/20180823 CLI- improved.md rename to published/201810/20180823 CLI- improved.md diff --git a/published/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md b/published/201810/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md similarity index 100% rename from published/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md rename to published/201810/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md diff --git a/published/20180824 5 cool music player apps.md b/published/201810/20180824 5 cool music player apps.md similarity index 100% rename from published/20180824 5 cool music player apps.md rename to published/201810/20180824 5 cool music player apps.md diff --git a/published/20180824 What Stable Kernel Should I Use.md b/published/201810/20180824 What Stable Kernel Should I Use.md similarity index 100% rename from published/20180824 What Stable Kernel Should I Use.md rename to published/201810/20180824 What Stable Kernel Should I Use.md diff --git a/published/20180827 4 tips for better tmux sessions.md b/published/201810/20180827 4 tips for better tmux sessions.md similarity index 100% rename from published/20180827 4 tips for better tmux sessions.md rename to published/201810/20180827 4 tips for better tmux sessions.md diff --git a/published/20180827 A sysadmin-s guide to containers.md b/published/201810/20180827 A sysadmin-s guide to containers.md similarity index 100% rename from published/20180827 A sysadmin-s guide to containers.md rename to published/201810/20180827 A sysadmin-s guide to containers.md diff --git a/published/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md b/published/201810/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md similarity index 100% rename from published/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md rename to published/201810/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md diff --git a/published/20180830 6 places to host your git repository.md b/published/201810/20180830 6 places to host your git repository.md similarity index 100% rename from published/20180830 6 places to host your git repository.md rename to published/201810/20180830 6 places to host your git repository.md diff --git a/published/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md b/published/201810/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md similarity index 100% rename from published/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md rename to published/201810/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md diff --git a/published/201810/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md b/published/201810/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md new file mode 100644 index 0000000000..a34c575261 --- /dev/null +++ b/published/201810/20180901 Flameshot - A 
Simple, Yet Powerful Feature-rich Screenshot Tool.md @@ -0,0 +1,164 @@ +Flameshot:一个简洁但功能丰富的截图工具 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/Flameshot-720x340.png) + +截图是我工作的一部分,我先前使用深度截图工具来截图,深度截图是一个简单、轻量级且非常简洁的截图工具。它自带许多功能,例如窗口识别、快捷键支持、图片编辑、延时截图、社交分享、智能存储以及图片清晰度调整等。今天我碰巧发现了另一个具备多种功能的截图工具,它就是 **Flameshot** ,一个简单但功能丰富的针对类 Unix 系统的截图工具。它简单易用,可定制并且有选项可以支持上传截图到在线图片分享网站 **imgur** 上。同时 Flameshot 有一个 CLI 版本,所以你也可以从命令行来进行截图。Flameshot 是一个完全免费且开源的工具。在本教程中,我们将看到如何安装 Flameshot 以及如何使用它来截图。 + +### 安装 Flameshot + +**在 Arch Linux 上:** + +Flameshot 可以从 Arch Linux 的 [community] 仓库中获取。确保你已经启用了 community 仓库,然后就可以像下面展示的那样使用 pacman 来安装 Flameshot: + +``` +$ sudo pacman -S flameshot +``` + +它也可以从 [**AUR**][1] 中获取,所以你还可以使用任意一个 AUR 帮助程序(例如 [**Yay**][2])来在基于 Arch 的系统中安装它: + +``` +$ yay -S flameshot-git +``` + +**在 Fedora 中:** + +``` +$ sudo dnf install flameshot +``` + +在 **Debian 10+** 和 **Ubuntu 18.04+** 中,可以使用 APT 包管理器来安装它: + +``` +$ sudo apt install flameshot +``` + +**在 openSUSE 上:** + +``` +$ sudo zypper install flameshot +``` + +在其他的 Linux 发行版中,可以从源代码编译并安装它。编译过程中需要 **Qt version 5.3** 以及 **GCC 4.9.2** 或者它们的更高版本。 + +### 使用 + +可以从菜单或者应用启动器中启动 Flameshot。在 MATE 桌面环境,它通常可以在 “Applications -> Graphics” 下找到。 + +一旦打开了它,你就可以在系统面板中看到 Flameshot 的托盘图标。 + +**注意:** + +假如你使用 Gnome 桌面环境,为了能够看到系统托盘图标,你需要安装 [TopIcons][3] 扩展。 + +在 Flameshot 托盘图标上右击,你便会看到几个菜单项,例如打开配置窗口、信息窗口以及退出该应用。 + +要进行截图,只需要点击托盘图标就可以了。接着你将看到如何使用 Flameshot 的帮助窗口。选择一个截图区域,然后敲回车键便可以截屏了,点击右键便可以看到颜色拾取器,再敲空格键便可以查看屏幕侧边的面板。你可以使用鼠标的滚轮来增加或者减少指针的宽度。 + +Flameshot 自带一系列非常好的功能,例如: + +* 可以进行手写 +* 可以划直线 +* 可以画长方形或者圆形框 +* 可以进行长方形区域选择 +* 可以画箭头 +* 可以对要点进行标注 +* 可以添加文本 +* 可以对图片或者文字进行模糊处理 +* 可以展示图片的尺寸大小 +* 在编辑图片时可以进行撤销和重做操作 +* 可以将选择的东西复制到剪贴板 +* 可以保存选区 +* 可以退出截屏 +* 可以选择另一个 app 来打开图片 +* 可以上传图片到 imgur 网站 +* 可以将图片固定到桌面上 + +下面是一个示例的视频: + + + +### 快捷键 + +Flameshot 也支持快捷键。在 Flameshot 的托盘图标上右击并点击 “Information” 窗口便可以看到在 GUI 模式下所有可用的快捷键。下面是在 GUI 模式下可用的快捷键清单: + +| 快捷键 | 描述 | |------------------------|------------------------------| | `←`、`↓`、`↑`、`→` | 移动选择区域 1px | | `Shift` + `←`、`↓`、`↑`、`→` | 将选择区域大小更改 1px | | `Esc` | 退出截图 | | `Ctrl` + `C` | 复制到剪贴板 | | `Ctrl` + `S` | 将选择区域保存为文件 | | `Ctrl` + `Z` | 撤销最近的一次操作 | | 鼠标右键 | 展示颜色拾取器 | | 鼠标滚轮 | 改变工具的宽度 | + +边按住 `Shift` 键并拖动选择区域的其中一个控制点将会对它相反方向的控制点做类似的拖放操作。 + +### 命令行选项 + +Flameshot 也支持一系列的命令行选项来延时截图和保存图片到自定义的路径。 + +要使用 Flameshot GUI 模式,运行: + +``` +$ flameshot gui +``` + +要使用 GUI 模式截屏并将你选取的区域保存到一个自定义的路径,运行: + +``` +$ flameshot gui -p ~/myStuff/captures +``` + +要延时 2 秒后打开 GUI 模式可以使用: + +``` +$ flameshot gui -d 2000 +``` + +要延时 2 秒并将截图保存到一个自定义的路径(无 GUI)可以使用: + +``` +$ flameshot full -p ~/myStuff/captures -d 2000 +``` + +要截图全屏并保存到自定义的路径和剪贴板中使用: + +``` +$ flameshot full -c -p ~/myStuff/captures +``` + +要在截屏中包含鼠标并将图片保存为 PNG 格式可以使用: + +``` +$ flameshot screen -r +``` + +要对屏幕 1 进行截屏并将截屏复制到剪贴板中可以运行: + +``` +$ flameshot screen -n 1 -c +```
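这些选项也可以组合进简单的脚本里。下面是编者补充的一个示意脚本(保存目录为假设值,可按需修改),延时 3 秒对全屏截图,保存到指定目录并同时复制到剪贴板:

```
#!/bin/sh
# 示意:延时 3 秒全屏截图,保存到指定目录,并复制到剪贴板
# (仅组合本文介绍过的 full、-c、-p、-d 选项;目录路径为假设值)
DIR="$HOME/Pictures/captures"
mkdir -p "$DIR"
flameshot full -c -p "$DIR" -d 3000
```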
+ +你还需要什么功能呢?Flameshot 拥有几乎截屏的所有功能:添加注释、编辑图片、模糊处理或者对要点做高亮等等功能。我想,在我找到它的最佳替代品之前,我将一直使用 Flameshot 来作为我当前的截图工具。请尝试一下它,你不会失望的。 + +好了,这就是今天的全部内容了。后续将有更多精彩内容,请保持关注! + +Cheers! + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/flameshot-a-simple-yet-powerful-feature-rich-screenshot-tool/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[FSSlc](https://github.com/FSSlc) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://aur.archlinux.org/packages/flameshot-git +[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[3]: https://extensions.gnome.org/extension/1031/topicons/ diff --git a/published/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md b/published/201810/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md similarity index 100% rename from published/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md rename to published/201810/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md diff --git a/published/20180907 How to Use the Netplan Network Configuration Tool on Linux.md b/published/201810/20180907 How to Use the Netplan Network Configuration Tool on Linux.md similarity index 100% rename from published/20180907 How to Use the Netplan Network Configuration Tool on Linux.md rename to published/201810/20180907 How to Use the Netplan Network Configuration Tool on Linux.md diff --git a/published/20180910 How To List An Available Package Groups In Linux.md b/published/201810/20180910 How To List An Available Package Groups In Linux.md similarity index 100% rename from published/20180910 How To List An Available Package Groups In Linux.md rename to published/201810/20180910 How To List An Available Package Groups In Linux.md diff --git a/published/20180912 How to build rpm packages.md b/published/201810/20180912 How to build rpm packages.md similarity index 100% rename from published/20180912 How to build rpm packages.md rename to published/201810/20180912 How to build rpm packages.md diff --git a/published/20180913 ScreenCloud- The Screenshot-- App.md b/published/201810/20180913 ScreenCloud- The Screenshot-- App.md similarity index 100% rename from published/20180913 ScreenCloud- The Screenshot-- App.md rename to published/201810/20180913 ScreenCloud- The Screenshot-- App.md diff --git a/published/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md b/published/201810/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md similarity index 100% rename from published/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md rename to published/201810/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md diff --git a/published/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md b/published/201810/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md similarity index 100% rename from published/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md rename to published/201810/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md diff --git a/published/20180917 4 scanning tools for the Linux desktop.md b/published/201810/20180917 4 scanning tools for the Linux desktop.md similarity index 100% rename from published/20180917 4 scanning tools for the Linux desktop.md rename to published/201810/20180917 4 scanning tools for the Linux desktop.md diff --git
a/published/20180917 Getting started with openmediavault- A home NAS solution.md b/published/201810/20180917 Getting started with openmediavault- A home NAS solution.md similarity index 100% rename from published/20180917 Getting started with openmediavault- A home NAS solution.md rename to published/201810/20180917 Getting started with openmediavault- A home NAS solution.md diff --git a/published/20180918 Linux firewalls- What you need to know about iptables and firewalld.md b/published/201810/20180918 Linux firewalls- What you need to know about iptables and firewalld.md similarity index 100% rename from published/20180918 Linux firewalls- What you need to know about iptables and firewalld.md rename to published/201810/20180918 Linux firewalls- What you need to know about iptables and firewalld.md diff --git a/published/20180918 Top 3 Python libraries for data science.md b/published/201810/20180918 Top 3 Python libraries for data science.md similarity index 100% rename from published/20180918 Top 3 Python libraries for data science.md rename to published/201810/20180918 Top 3 Python libraries for data science.md diff --git a/published/20180919 Host your own cloud with Raspberry Pi NAS.md b/published/201810/20180919 Host your own cloud with Raspberry Pi NAS.md similarity index 100% rename from published/20180919 Host your own cloud with Raspberry Pi NAS.md rename to published/201810/20180919 Host your own cloud with Raspberry Pi NAS.md diff --git a/published/20180919 How Writing Can Expand Your Skills and Grow Your Career.md b/published/201810/20180919 How Writing Can Expand Your Skills and Grow Your Career.md similarity index 100% rename from published/20180919 How Writing Can Expand Your Skills and Grow Your Career.md rename to published/201810/20180919 How Writing Can Expand Your Skills and Grow Your Career.md diff --git a/published/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md b/published/201810/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md similarity index 100% rename from published/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md rename to published/201810/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md diff --git a/published/20180920 8 Python packages that will simplify your life with Django.md b/published/201810/20180920 8 Python packages that will simplify your life with Django.md similarity index 100% rename from published/20180920 8 Python packages that will simplify your life with Django.md rename to published/201810/20180920 8 Python packages that will simplify your life with Django.md diff --git a/published/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md b/published/201810/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md similarity index 100% rename from published/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md rename to published/201810/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md diff --git a/published/20180921 Clinews - Read News And Latest Headlines From Commandline.md b/published/201810/20180921 Clinews - Read News And Latest Headlines From Commandline.md similarity index 100% rename from published/20180921 Clinews - Read News And Latest Headlines From Commandline.md rename to published/201810/20180921 Clinews - Read News And Latest Headlines From Commandline.md diff --git a/published/20180921 Control your data with Syncthing- An open source 
synchronization tool.md b/published/201810/20180921 Control your data with Syncthing- An open source synchronization tool.md similarity index 100% rename from published/20180921 Control your data with Syncthing- An open source synchronization tool.md rename to published/201810/20180921 Control your data with Syncthing- An open source synchronization tool.md diff --git a/published/20180924 A Simple, Beautiful And Cross-platform Podcast App.md b/published/201810/20180924 A Simple, Beautiful And Cross-platform Podcast App.md similarity index 100% rename from published/20180924 A Simple, Beautiful And Cross-platform Podcast App.md rename to published/201810/20180924 A Simple, Beautiful And Cross-platform Podcast App.md diff --git a/published/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md b/published/201810/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md similarity index 100% rename from published/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md rename to published/201810/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md diff --git a/published/20180924 Why Linux users should try Rust.md b/published/201810/20180924 Why Linux users should try Rust.md similarity index 100% rename from published/20180924 Why Linux users should try Rust.md rename to published/201810/20180924 Why Linux users should try Rust.md diff --git a/published/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/published/201810/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md similarity index 100% rename from published/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md rename to published/201810/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md diff --git a/published/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md b/published/201810/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md similarity index 100% rename from published/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md rename to published/201810/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md diff --git a/published/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md b/published/201810/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md similarity index 100% rename from published/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md rename to published/201810/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md diff --git a/published/20180926 3 open source distributed tracing tools.md b/published/201810/20180926 3 open source distributed tracing tools.md similarity index 100% rename from published/20180926 3 open source distributed tracing tools.md rename to published/201810/20180926 3 open source distributed tracing tools.md diff --git a/published/20180926 An introduction to swap space on Linux systems.md b/published/201810/20180926 An introduction to swap space on Linux systems.md similarity index 100% rename from published/20180926 An introduction to swap space on Linux systems.md rename to published/201810/20180926 An introduction to swap space on Linux systems.md diff --git a/published/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md 
b/published/201810/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md similarity index 100% rename from published/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md rename to published/201810/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md diff --git a/published/20180926 How to use the Scikit-learn Python library for data science projects.md b/published/201810/20180926 How to use the Scikit-learn Python library for data science projects.md similarity index 100% rename from published/20180926 How to use the Scikit-learn Python library for data science projects.md rename to published/201810/20180926 How to use the Scikit-learn Python library for data science projects.md diff --git a/published/20180927 5 cool tiling window managers.md b/published/201810/20180927 5 cool tiling window managers.md similarity index 100% rename from published/20180927 5 cool tiling window managers.md rename to published/201810/20180927 5 cool tiling window managers.md diff --git a/published/20180927 How To Find And Delete Duplicate Files In Linux.md b/published/201810/20180927 How To Find And Delete Duplicate Files In Linux.md similarity index 100% rename from published/20180927 How To Find And Delete Duplicate Files In Linux.md rename to published/201810/20180927 How To Find And Delete Duplicate Files In Linux.md diff --git a/published/20180927 How to Use RAR files in Ubuntu Linux.md b/published/201810/20180927 How to Use RAR files in Ubuntu Linux.md similarity index 100% rename from published/20180927 How to Use RAR files in Ubuntu Linux.md rename to published/201810/20180927 How to Use RAR files in Ubuntu Linux.md diff --git a/published/20180928 10 handy Bash aliases for Linux.md b/published/201810/20180928 10 handy Bash aliases for Linux.md similarity index 100% rename from published/20180928 10 handy Bash aliases for Linux.md rename to published/201810/20180928 10 handy Bash aliases for Linux.md diff --git a/published/20180928 A Free And Secure Online PDF Conversion Suite.md b/published/201810/20180928 A Free And Secure Online PDF Conversion Suite.md similarity index 100% rename from published/20180928 A Free And Secure Online PDF Conversion Suite.md rename to published/201810/20180928 A Free And Secure Online PDF Conversion Suite.md diff --git a/published/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md b/published/201810/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md similarity index 100% rename from published/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md rename to published/201810/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md diff --git a/published/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md b/published/201810/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md similarity index 100% rename from published/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md rename to published/201810/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md diff --git a/published/20181001 16 iptables tips and tricks for sysadmins.md b/published/201810/20181001 16 iptables tips and tricks for sysadmins.md similarity index 100% rename from published/20181001 16 iptables tips and tricks for sysadmins.md rename to published/201810/20181001 16 iptables tips and tricks for sysadmins.md diff --git 
a/published/20181001 How to Install Pip on Ubuntu.md b/published/201810/20181001 How to Install Pip on Ubuntu.md similarity index 100% rename from published/20181001 How to Install Pip on Ubuntu.md rename to published/201810/20181001 How to Install Pip on Ubuntu.md diff --git a/published/20181002 How use SSH and SFTP protocols on your home network.md b/published/201810/20181002 How use SSH and SFTP protocols on your home network.md similarity index 100% rename from published/20181002 How use SSH and SFTP protocols on your home network.md rename to published/201810/20181002 How use SSH and SFTP protocols on your home network.md diff --git a/published/20181003 Introducing Swift on Fedora.md b/published/201810/20181003 Introducing Swift on Fedora.md similarity index 100% rename from published/20181003 Introducing Swift on Fedora.md rename to published/201810/20181003 Introducing Swift on Fedora.md diff --git a/published/20181003 Tips for listing files with ls at the Linux command line.md b/published/201810/20181003 Tips for listing files with ls at the Linux command line.md similarity index 100% rename from published/20181003 Tips for listing files with ls at the Linux command line.md rename to published/201810/20181003 Tips for listing files with ls at the Linux command line.md diff --git a/published/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md b/published/201810/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md similarity index 100% rename from published/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md rename to published/201810/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md diff --git a/published/20181005 Open Source Logging Tools for Linux.md b/published/201810/20181005 Open Source Logging Tools for Linux.md similarity index 100% rename from published/20181005 Open Source Logging Tools for Linux.md rename to published/201810/20181005 Open Source Logging Tools for Linux.md diff --git a/published/20181008 Python at the pump- A script for filling your gas tank.md b/published/201810/20181008 Python at the pump- A script for filling your gas tank.md similarity index 100% rename from published/20181008 Python at the pump- A script for filling your gas tank.md rename to published/201810/20181008 Python at the pump- A script for filling your gas tank.md diff --git a/published/201810/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md b/published/201810/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md new file mode 100644 index 0000000000..b14c45ded7 --- /dev/null +++ b/published/201810/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md @@ -0,0 +1,277 @@ +重启和关闭 Linux 系统的 6 个终端命令 +====== + +在 Linux 管理员的日程当中,有很多需要执行的任务,其中就有系统的重启和关闭。 + +对于 Linux 管理员来说,重启和关闭系统是其诸多风险操作中的一例,有时候,由于某些原因,这些操作可能无法挽回,他们需要更多的时间来排查问题。 + +在 Linux 命令行模式下我们可以执行这些任务。很多时候,由于熟悉命令行,Linux 管理员更倾向于在命令行下完成这些任务。 + +重启和关闭系统的 Linux 命令并不多,用户需要根据需要,选择合适的命令来完成任务。 + +以下所有命令都有其自身特点,并允许被 Linux 管理员使用. + +**建议阅读:** + +- [查看系统/服务器正常运行时间的 11 个方法][1] +- [Tuptime 一款为 Linux 系统保存历史记录、统计运行时间工具][2] + +系统重启和关闭之始,会通知所有已登录的用户和进程。当然,如果使用了时间参数,系统将拒绝新的用户登入。 + +执行此类操作之前,我建议您坚持复查,因为您只能得到很少的提示来确保这一切顺利。 + +下面陈列了一些步骤: + +* 确保您拥有一个可以处理故障的控制台,以防之后可能会发生的问题。 VMWare 可以访问虚拟机,而 IPMI、iLO 和 iDRAC 可以访问物理服务器。 +* 您需要通过公司的流程,申请修改或故障的执行权直到得到许可。 +* 为安全着想,备份重要的配置文件,并保存到其他服务器上. 
+* 验证日志文件(提前检查) +* 和相关团队交流,比如数据库管理团队、应用团队等。 +* 通知数据库和应用服务人员关闭服务,并得到确认答复。 +* 使用适当的命令复查操作,验证工作。 +* 最后,重启系统。 +* 验证日志文件,如果一切顺利,执行下一步操作,如果发现任何问题,对症排查。 +* 无论是回退版本还是运行程序,通知相关团队提出申请。 +* 对操作过程做适当的观察,并将一切符合预期的结果反馈给团队。 + +使用下列命令执行这项任务。 + +* `shutdown`、`halt`、`poweroff`、`reboot` 命令:用来停机、重启或切断电源 +* `init` 命令:是 “initialization” 的简称,是系统启动的第一个进程。 +* `systemctl` 命令:用于管理 systemd,而 systemd 是 Linux 系统和服务的管理程序。 + +### 方案 1:如何使用 shutdown 命令关闭和重启 Linux 系统 + +`shutdown` 命令用于断电或重启本地和远程的 Linux 机器。它为高效完成作业提供多个选项。如果使用了时间参数,系统关闭的 5 分钟之前,会创建 `/run/nologin` 文件,以确保后续的登录会被拒绝。 + +通用语法如下: + +``` +# shutdown [OPTION] [TIME] [MESSAGE] +``` + +运行下面的命令来立即关闭 Linux 机器。它会立刻杀死所有进程,并关闭系统。 + +``` +# shutdown -h now +``` + +* `-h`:如果没有另外指定 `--halt` 选项,它就等价于 `--poweroff` 选项。 + +另外我们可以使用带有 `--halt` 选项的 `shutdown` 命令来立即关闭设备。 + +``` +# shutdown --halt now +或者 +# shutdown -H now +``` + +* `-H, --halt`:停止设备运行 + +另外我们可以使用带有 `--poweroff` 选项的 `shutdown` 命令来立即关闭设备。 + +``` +# shutdown --poweroff now +或者 +# shutdown -P now +``` + +* `-P, --poweroff`:切断电源(默认)。 + +如果您没有使用时间选项运行下面的命令,它将会在一分钟后执行给出的命令。 + +``` +# shutdown -h +Shutdown scheduled for Mon 2018-10-08 06:42:31 EDT, use 'shutdown -c' to cancel. + +root@2daygeek.com# +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:41:31 EDT): + +The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT! +``` + +其他的登录用户都能在终端中看到如下的广播消息: + +``` +[daygeek@2daygeek.com ~]$ +Broadcast message from root@2daygeek.com (Mon 2018-10-08 06:41:31 EDT): + +The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT! +``` + +对于使用了 `--halt` 选项: + +``` +# shutdown -H +Shutdown scheduled for Mon 2018-10-08 06:37:53 EDT, use 'shutdown -c' to cancel. + +root@2daygeek.com# +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:36:53 EDT): + +The system is going down for system halt at Mon 2018-10-08 06:37:53 EDT! +``` + +对于使用了 `--poweroff` 选项: + +``` +# shutdown -P +Shutdown scheduled for Mon 2018-10-08 06:40:07 EDT, use 'shutdown -c' to cancel. + +root@2daygeek.com# +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:39:07 EDT): + +The system is going down for power-off at Mon 2018-10-08 06:40:07 EDT! +``` + +可以在您的终端上运行 `shutdown -c` 命令取消上述操作。 + +``` +# shutdown -c + +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:39:09 EDT): + +The system shutdown has been cancelled at Mon 2018-10-08 06:40:09 EDT! +``` + +其他的登录用户都能在终端中看到如下的广播消息: + +``` +[daygeek@2daygeek.com ~]$ +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:41:35 EDT): + +The system shutdown has been cancelled at Mon 2018-10-08 06:42:35 EDT! +``` + +如果你想在 `N` 分钟之后再执行关闭或重启操作,可以添加时间参数。这里,您可以为所有登录用户添加自定义广播消息。例如,我们将在五分钟后重启设备。 + +``` +# shutdown -r +5 "To activate the latest Kernel" +Shutdown scheduled for Mon 2018-10-08 07:13:16 EDT, use 'shutdown -c' to cancel. + +[root@vps138235 ~]# +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 07:08:16 EDT): + +To activate the latest Kernel +The system is going down for reboot at Mon 2018-10-08 07:13:16 EDT! +``` + +运行下面的命令立即重启 Linux 机器。它会立即杀死所有进程并且重新启动系统。 + +``` +# shutdown -r now +``` + +* `-r, --reboot`:重启设备。
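把上面的要点串起来,下面是编者补充的一个计划性维护重启的示意脚本(仅作演示,假设以 root 身份运行;延时长度与提示语均为假设值,可按需调整):

```
#!/bin/sh
# 示意:计划一次 10 分钟后的维护重启,并向所有已登录用户广播原因
# (假设以 root 身份运行;+10 的单位是分钟)
shutdown -r +10 "系统将在 10 分钟后重启以加载新内核,请及时保存工作"

# 若需要取消,可在倒计时结束前运行:
# shutdown -c
```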
+
+运行下面的命令立即重启 Linux 机器。它会立即杀死所有进程并且重新启动系统。
+
+```
+# shutdown -r now
+```
+
+* `-r, --reboot`:重启设备。
+
+### 方案 2:如何通过 reboot 命令关闭和重启 Linux 系统
+
+`reboot` 命令用于关闭和重启本地或远程设备。`reboot` 命令拥有两个实用的选项。
+
+它能够优雅地关闭和重启设备(就好像在系统菜单中点击重启选项一样简单)。
+
+执行不带任何参数的 `reboot` 命令来重启 Linux 机器。
+
+```
+# reboot
+```
+
+执行带 `-p` 参数的 `reboot` 命令来关闭 Linux 机器电源。
+
+```
+# reboot -p
+```
+
+* `-p, --poweroff`:调用 `halt` 或 `poweroff` 命令,切断设备电源。
+
+执行带 `-f` 参数的 `reboot` 命令来强制重启 Linux 设备(这类似按压机器上的电源键)。
+
+```
+# reboot -f
+```
+
+* `-f, --force`:立刻强制中断,切断电源或重启。
+
+### 方案 3:如何通过 init 命令关闭和重启 Linux 系统
+
+`init`(“initialization” 的简写)是系统启动的第一个进程。
+
+它会检查 `/etc/inittab` 文件并决定 Linux 运行级别。同时,它允许用户在 Linux 设备上执行关机或重启操作。运行级别从 `0` 到 `6`,共七个。
+
+**建议阅读:**
+
+- [如何检查 Linux 上所有运行的服务][3]
+
+执行以下 `init` 命令关闭系统。
+
+```
+# init 0
+```
+
+* `0`:停机 – 关闭系统。
+
+运行下面的 `init` 命令重启设备:
+
+```
+# init 6
+```
+
+* `6`:重启 – 重启设备。
+
+### 方案 4:如何通过 halt 命令关闭和重启 Linux 系统
+
+`halt` 命令用来切断电源,或关闭远程 Linux 机器或本地主机。它会中断所有进程并关闭 CPU。
+
+```
+# halt
+```
+
+### 方案 5:如何通过 poweroff 命令关闭和重启 Linux 系统
+
+`poweroff` 命令用来切断电源,或关闭远程 Linux 机器或本地主机。`poweroff` 很像 `halt`,但是它还可以关闭设备本身的硬件(指示灯以及 PC 上的其它部件)。它会向主板发送 ACPI 指令,再由主板向电源发出信号,切断供电。
+
+```
+# poweroff
+```
+
+### 方案 6:如何通过 systemctl 命令关闭和重启 Linux 系统
+
+systemd 是一款适用于所有主流 Linux 发行版的全新 init 系统和系统管理器,用以取代传统的 SysV init 系统。
+
+systemd 兼容 SysV 和 LSB 初始化脚本,能够替代 SysV init 系统。systemd 是内核启动的第一个进程,其进程号(PID)为 1。
+
+**建议阅读:**
+
+- [chkservice – 一款终端下系统单元管理工具][4]
+
+它是一切进程的父进程。Fedora 15 是第一个采用 systemd(替代了 upstart)的发行版。
+
+`systemctl` 是命令行下管理 systemd 守护进程和服务的主要工具(如 `start`、`restart`、`stop`、`enable`、`disable`、`reload` 和 `status`)。
+
+systemd 使用 `.service` 文件,而不是 SysV init 所使用的 bash 脚本。systemd 将所有守护进程归入各自的 Linux cgroup(控制组)之下,您可以浏览 `/cgroup/systemd` 文件来查看该系统的层次结构。
+
+```
+# systemctl halt
+# systemctl poweroff
+# systemctl reboot
+# systemctl suspend
+# systemctl hibernate
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/6-commands-to-shutdown-halt-poweroff-reboot-the-linux-system/
+
+作者:[Prakash Subramanian][a]
+选题:[lujun9972][b]
+译者:[cyleft](https://github.com/cyleft)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/prakash/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/
+[2]: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/
+[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/
+[4]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/
diff --git a/published/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md b/published/201810/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md
similarity index 100%
rename from published/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md
rename to published/201810/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md
diff --git a/published/20181009 How To Create And Maintain Your Own Man Pages.md b/published/201810/20181009 How To Create And Maintain Your Own Man Pages.md
similarity index 100%
rename from published/20181009 How To Create And Maintain Your Own Man Pages.md
rename to published/201810/20181009 How To Create And Maintain Your Own Man Pages.md
diff --git
a/published/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/published/201810/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md similarity index 100% rename from published/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md rename to published/201810/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md diff --git a/published/20181010 Design faster web pages, part 1- Image compression.md b/published/201810/20181010 Design faster web pages, part 1- Image compression.md similarity index 100% rename from published/20181010 Design faster web pages, part 1- Image compression.md rename to published/201810/20181010 Design faster web pages, part 1- Image compression.md diff --git a/published/20181010 How To List The Enabled-Active Repositories In Linux.md b/published/201810/20181010 How To List The Enabled-Active Repositories In Linux.md similarity index 100% rename from published/20181010 How To List The Enabled-Active Repositories In Linux.md rename to published/201810/20181010 How To List The Enabled-Active Repositories In Linux.md diff --git a/published/20181011 A Front-end For Popular Package Managers.md b/published/201810/20181011 A Front-end For Popular Package Managers.md similarity index 100% rename from published/20181011 A Front-end For Popular Package Managers.md rename to published/201810/20181011 A Front-end For Popular Package Managers.md diff --git a/published/20181011 Getting started with Minikube- Kubernetes on your laptop.md b/published/201810/20181011 Getting started with Minikube- Kubernetes on your laptop.md similarity index 100% rename from published/20181011 Getting started with Minikube- Kubernetes on your laptop.md rename to published/201810/20181011 Getting started with Minikube- Kubernetes on your laptop.md diff --git a/published/20181012 Command line quick tips- Reading files different ways.md b/published/201810/20181012 Command line quick tips- Reading files different ways.md similarity index 100% rename from published/20181012 Command line quick tips- Reading files different ways.md rename to published/201810/20181012 Command line quick tips- Reading files different ways.md diff --git a/published/20181012 Happy birthday, KDE- 11 applications you never knew existed.md b/published/201810/20181012 Happy birthday, KDE- 11 applications you never knew existed.md similarity index 100% rename from published/20181012 Happy birthday, KDE- 11 applications you never knew existed.md rename to published/201810/20181012 Happy birthday, KDE- 11 applications you never knew existed.md diff --git a/published/20181012 How To Lock Virtual Console Sessions On Linux.md b/published/201810/20181012 How To Lock Virtual Console Sessions On Linux.md similarity index 100% rename from published/20181012 How To Lock Virtual Console Sessions On Linux.md rename to published/201810/20181012 How To Lock Virtual Console Sessions On Linux.md diff --git a/published/20181013 How to Install GRUB on Arch Linux (UEFI).md b/published/201810/20181013 How to Install GRUB on Arch Linux (UEFI).md similarity index 100% rename from published/20181013 How to Install GRUB on Arch Linux (UEFI).md rename to published/201810/20181013 How to Install GRUB on Arch Linux (UEFI).md diff --git a/published/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md b/published/201810/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md similarity index 100% rename from 
published/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md rename to published/201810/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md diff --git a/published/20181015 Running Linux containers as a non-root with Podman.md b/published/201810/20181015 Running Linux containers as a non-root with Podman.md similarity index 100% rename from published/20181015 Running Linux containers as a non-root with Podman.md rename to published/201810/20181015 Running Linux containers as a non-root with Podman.md diff --git a/published/20181016 Turn Your Old PC into a Retrogaming Console with Lakka Linux.md b/published/201810/20181016 Turn Your Old PC into a Retrogaming Console with Lakka Linux.md similarity index 100% rename from published/20181016 Turn Your Old PC into a Retrogaming Console with Lakka Linux.md rename to published/201810/20181016 Turn Your Old PC into a Retrogaming Console with Lakka Linux.md diff --git a/translated/tech/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md b/published/201810/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md similarity index 89% rename from translated/tech/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md rename to published/201810/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md index 2d88b21581..4ff1d767e4 100644 --- a/translated/tech/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md +++ b/published/201810/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md @@ -1,5 +1,6 @@ -MidnightBSD 发布 1.0!看看有哪些新的东西 +MidnightBSD 发布 1.0! ====== + 几天前,Lucas Holt 宣布发布 MidnightBSD 1.0。让我们快速看一下这个新版本中包含的内容。 ### 什么是 MidnightBSD? @@ -10,15 +11,13 @@ MidnightBSD 发布 1.0!看看有哪些新的东西 ### MidnightBSD 1.0 中有什么? -根据[发布说明][3],1.0 中的大部分工作都是更新基础系统,改进包管理器和更新工具。新版本与 FreeBSD 10-Stable 兼容。 +根据[发布说明][3]([视频](https://www.youtube.com/embed/-rlk2wFsjJ4)),1.0 中的大部分工作都是更新基础系统,改进包管理器和更新工具。新版本与 FreeBSD 10-Stable 兼容。 Mports(MidnightBSD 的包管理系统)已经升级支持使用一个命令安装多个包。`mport upgrade` 命令已经修复。Mports 现在会跟踪已弃用和过期的包。它还引入了新的包格式。 - - 其他变化包括: - * 现在支持 [ZFS][4] 作为启动文件系统。以前,ZFS 只能用于额外存储。 + * 现在支持 [ZFS][4] 作为启动文件系统。以前,ZFS 只能用于附加存储。   * 支持 NVME SSD。   * AMD Ryzen 和 Radeon 的支持得到了改善。   * Intel、Broadcom 和其他驱动程序已更新。 @@ -27,15 +26,13 @@ Mports(MidnightBSD 的包管理系统)已经升级支持使用一个命令   * 删除了 Sudo 并用 OpenBSD 中的 [doas][5] 替换。   * 增加了对 Microsoft hyper-v 的支持。 - - ### 升级之前 如果你当前是 MidnightBSD 的用户或正在考虑尝试新版本,那么还是再等一会。Lucas 目前正在重建软件包以支持新的软件包格式和工具。他还计划在未来几个月内升级软件包和移植桌面环境。他目前正致力于移植 Firefox 52 ESR,因为它是最后一个不需要 Rust 的版本。他还希望将更新版本的 Chromium 移植到 MidnightBSD。我建议关注 MidnightBSD 的 [Twitter][6]。 -### 0.9怎么回事? +### 0.9 怎么回事? 
-你可能注意到 MidnightBSD 的先前版本是 0.8.6。你现在可能想知道“为什么跳到 1.0”?根据 Lucas 的说法,他在开发 0.9 时遇到了几个问题。事实上,他重试好几次。他最终采用与 0.9 分支不同的方式,并变成了 1.0。有些软件包也存在 0.* 编号系统的问题。 +你可能注意到 MidnightBSD 的先前版本是 0.8.6。你现在可能想知道“为什么跳到 1.0”?根据 Lucas 的说法,他在开发 0.9 时遇到了几个问题。事实上,他重试好几次。他最终采用与 0.9 分支不同的方式,并变成了 1.0。有些软件包在 0.* 系列也有问题。 ### 需要帮助 @@ -58,7 +55,7 @@ via: https://itsfoss.com/midnightbsd-1-0-release/ 作者:[John Paul][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181018 Understanding Linux Links- Part 1.md b/published/201810/20181018 Understanding Linux Links- Part 1.md similarity index 50% rename from translated/tech/20181018 Understanding Linux Links- Part 1.md rename to published/201810/20181018 Understanding Linux Links- Part 1.md index ab2433484e..ecfb777cd9 100644 --- a/translated/tech/20181018 Understanding Linux Links- Part 1.md +++ b/published/201810/20181018 Understanding Linux Links- Part 1.md @@ -1,57 +1,64 @@ -理解 Linux 链接:第一部分 +理解 Linux 链接(一) ====== +> 链接是可以将文件和目录放在你希望它们放在的位置的另一种方式。 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-link-498708.jpg?itok=DyVEcEsc) -除了 `cp` 和 `mv` 这两个我们在[本系列的前一部分][1]中详细讨论过的,链接是另一种方式可以将文件和目录放在你希它们放在的位置。它的优点是可以让你同时在多个位置显示一个文件或目录。 +除了 `cp` 和 `mv` 这两个我们在[本系列的前一部分][1]中详细讨论过的,链接是可以将文件和目录放在你希望它们放在的位置的另一种方式。它的优点是可以让你同时在多个位置显示一个文件或目录。 -如前所述,在物理磁盘这个级别上,文件和目录之类的东西并不真正存在。文件系统为了方便人类使用,将它们虚构出来。但在磁盘级别上,有一个名为 _partition table_(分区表)的东西,它位于每个分区的开头,然后数据分散在磁盘的其余部分。 +如前所述,在物理磁盘这个级别上,文件和目录之类的东西并不真正存在。文件系统是为了方便人类使用,将它们虚构出来。但在磁盘级别上,有一个名为分区表partition table的东西,它位于每个分区的开头,然后数据分散在磁盘的其余部分。 -虽然有不同类型的分区表,但是在分区开头的表包含的数据将映射每个目录和文件的开始和结束位置。分区表的就像一个索引:当从磁盘加载文件时,操作系统会查找表中的条目,分区表会告诉文件在磁盘上的起始位置和结束位置。然后磁盘头移动到起点,读取数据,直到它到达终点,最后告诉 presto:这就是你的文件。 +虽然有不同类型的分区表,但是在分区开头的那个表包含的数据将映射每个目录和文件的开始和结束位置。分区表的就像一个索引:当从磁盘加载文件时,操作系统会查找表中的条目,分区表会告诉文件在磁盘上的起始位置和结束位置。然后磁盘头移动到起点,读取数据,直到它到达终点,您看:这就是你的文件。 ### 硬链接 硬链接只是分区表中的一个条目,它指向磁盘上的某个区域,表示该区域**已经被分配给文件**。换句话说,硬链接指向已经被另一个条目索引的数据。让我们看看它是如何工作的。 打开终端,创建一个实验目录并进入: + ``` mkdir test_dir cd test_dir ``` 使用 [touch][1] 创建一个文件: + ``` touch test.txt ``` -为了获得更多的体验(?),在文本编辑器中打开 _test.txt_ 并添加一些单词。 +为了获得更多的体验(?),在文本编辑器中打开 `test.txt` 并添加一些单词。 现在通过执行以下命令来建立硬链接: + ``` ln test.txt hardlink_test.txt ``` -运行 `ls`,你会看到你的目录现在包含两个文件,或者看起来如此。正如你之前读到的那样,你真正看到的是完全相同的文件的两个名称: _hardlink\_test.txt_ 包含相同的内容,没有填充磁盘中的任何更多空间(尝试使用大文件来测试),并与 _test.txt_ 使用相同的 inode: +运行 `ls`,你会看到你的目录现在包含两个文件,或者看起来如此。正如你之前读到的那样,你真正看到的是完全相同的文件的两个名称: `hardlink_test.txt` 包含相同的内容,没有填充磁盘中的任何更多空间(可以尝试使用大文件来测试),并与 `test.txt` 使用相同的 inode: + ``` $ ls -li *test* 16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt 16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt ``` -_ls_ 的 `-i` 选项显示一个文件的 _inode 数值_。_inode_ 是分区表中的信息块,它包含磁盘上文件或目录的位置,上次修改的时间以及其它数据。如果两个文件使用相同的 inode,那么无论它们在目录树中的位置如何,它们在实际效果上都是相同的文件。 +`ls` 的 `-i` 选项显示一个文件的 “inode 数值”。“inode” 是分区表中的信息块,它包含磁盘上文件或目录的位置、上次修改的时间以及其它数据。如果两个文件使用相同的 inode,那么无论它们在目录树中的位置如何,它们在实际上都是相同的文件。 ### 软链接 -软链接,也称为 _symlinks_(系统链接),它是不同的:软链接实际上是一个独立的文件,它有自己的 inode 和它自己在磁盘上的小插槽。但它只包含一小段数据,将操作系统指向另一个文件或目录。 +软链接,也称为符号链接symlink,它与硬链接是不同的:软链接实际上是一个独立的文件,它有自己的 inode 和它自己在磁盘上的小块地方。但它只包含一小段数据,将操作系统指向另一个文件或目录。 你可以使用 `ln` 的 `-s` 选项来创建一个软链接: + ``` ln -s test.txt softlink_test.txt ``` -这将在当前目录中创建软链接 _softlink\_test.txt_,它指向 _test.txt_。 +这将在当前目录中创建软链接 `softlink_test.txt`,它指向 `test.txt`。 再次执行 `ls -li`,你可以看到两种链接的不同之处: + ``` $ ls -li total 8 @@ -60,48 +67,53 @@ total 8 16515846 -rw-r--r-- 2 paul 
paul 14 oct 12 09:50 test.txt ``` -_hardlink\_test.txt_ 和 _test.txt_ 包含一些文本并占据相同的空格*字面*。它们使用相同的 inode 数值。与此同时,_softlink\_test.txt_ 占用少得多,并且具有不同的 inode 数值,将其标记为完全不同的文件。使用 _ls_ 的 `-l` 选项还会显示软链接指向的文件或目录。 +`hardlink_test.txt` 和 `test.txt` 包含一些文本并且*字面上*占据相同的空间。它们使用相同的 inode 数值。与此同时,`softlink_test.txt` 占用少得多,并且具有不同的 inode 数值,将其标记为完全不同的文件。使用 `ls` 的 `-l` 选项还会显示软链接指向的文件或目录。 ### 为什么要用链接? 它们适用于**带有自己环境的应用程序**。你的 Linux 发行版通常不会附带你需要应用程序的最新版本。以优秀的 [Blender 3D][2] 设计软件为例,Blender 允许你创建 3D 静态图像以及动画电影,人人都想在自己的机器上拥有它。问题是,当前版本的 Blender 至少比任何发行版中的自带的高一个版本。 -幸运的是,[Blender 提供下载][3]开箱即用。除了程序本身之外,这些软件包还包含了 Blender 需要运行的复杂的库和依赖框架。所有这些数据和块都在它们自己的目录层次中。 +幸运的是,[Blender 提供可以开箱即用的下载][3]。除了程序本身之外,这些软件包还包含了 Blender 需要运行的复杂的库和依赖框架。所有这些数据和块都在它们自己的目录层次中。 每次你想运行 Blender,你都可以 `cd` 到你下载它的文件夹并运行: + ``` ./blender ``` 但这很不方便。如果你可以从文件系统的任何地方,比如桌面命令启动器中运行 `blender` 命令会更好。 -这样做的方法是将 _blender_ 可执行文件链接到 _bin/_ 目录。在许多系统上,你可以通过将其链接到文件系统中的任何位置来使 `blender` 命令可用,就像这样。 +这样做的方法是将 `blender` 可执行文件链接到 `bin/` 目录。在许多系统上,你可以通过将其链接到文件系统中的任何位置来使 `blender` 命令可用,就像这样。 + ``` ln -s /path/to/blender_directory/blender /home//bin ``` -你需要链接的另一个情况是**软件需要过时的库**。如果你用 `ls -l` 列出你的 _/usr/lib_ 目录,你会看到许多软链接文件飞过。仔细看看,你会看到软链接通常与它们链接到的原始文件具有相似的名称。你可能会看到 _libblah_ 链接到 _libblah.so.2_,你甚至可能会注意到 _libblah.so.2_ 依次链接到原始文件 _libblah.so.2.1.0_。 +你需要链接的另一个情况是**软件需要过时的库**。如果你用 `ls -l` 列出你的 `/usr/lib` 目录,你会看到许多软链接文件一闪而过。仔细看看,你会看到软链接通常与它们链接到的原始文件具有相似的名称。你可能会看到 `libblah` 链接到 `libblah.so.2`,你甚至可能会注意到 `libblah.so.2` 相应链接到原始文件 `libblah.so.2.1.0`。 -这是因为应用程序通常需要安装比已安装版本更老的库。问题是,即使新版本仍然与旧版本(通常是)兼容,如果程序找不到它正在寻找的版本,程序将会出现问题。为了解决这个问题,发行版通常会创建链接,以便挑剔的应用程序相信它找到了旧版本,实际上它只找到了一个链接并最终使用了更新的库版本。 +这是因为应用程序通常需要安装比已安装版本更老的库。问题是,即使新版本仍然与旧版本(通常是)兼容,如果程序找不到它正在寻找的版本,程序将会出现问题。为了解决这个问题,发行版通常会创建链接,以便挑剔的应用程序**相信**它找到了旧版本,实际上它只找到了一个链接并最终使用了更新的库版本。 + +有些是和**你自己从源代码编译的程序**相关。你自己编译的程序通常最终安装在 `/usr/local` 下,程序本身最终在 `/usr/local/bin` 中,它在 `/usr/local/bin` 目录中查找它需要的库。但假设你的新程序需要 `libblah`,但 `libblah` 在 `/usr/lib` 中,这就是所有其它程序都会寻找到它的地方。你可以通过执行以下操作将其链接到 `/usr/local/lib`: -有些是和**你自己从源代码编译的程序**相关。你自己编译的程序通常最终安装在 _/usr/local_ 下,程序本身最终在 _/usr/local/bin_ 中,它在 _/usr/local/bin_ 目录中查找它需要的库。但假设你的新程序需要 _libblah_,但 _libblah_ 在 _/usr/lib_ 中,这就是所有其它程序都会寻找到它的地方。你可以通过执行以下操作将其链接到 _/usr/local/lib_: ``` ln -s /usr/lib/libblah /usr/local/lib ``` -或者如果你愿意,可以 `cd` 到 _/usr/local/lib_: +或者如果你愿意,可以 `cd` 到 `/usr/local/lib`: + ``` cd /usr/local/lib ``` 然后使用链接: + ``` ln -s ../lib/libblah ``` 还有几十个案例证明软链接是有用的,当你使用 Linux 更熟练时,你肯定会发现它们,但这些是最常见的。下一次,我们将看一些你需要注意的链接怪异。 -通过 Linux 基金会和 edX 的免费 ["Linux 简介"][4]课程了解有关 Linux 的更多信息。 +通过 Linux 基金会和 edX 的免费 [“Linux 简介”][4]课程了解有关 Linux 的更多信息。 -------------------------------------------------------------------------------- @@ -111,7 +123,7 @@ via: https://www.linux.com/blog/intro-to-linux/2018/10/linux-links-part-1 作者:[Paul Brown][a] 选题:[lujun9972][b] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181019 How to use Pandoc to produce a research paper.md b/published/201810/20181019 How to use Pandoc to produce a research paper.md similarity index 51% rename from translated/tech/20181019 How to use Pandoc to produce a research paper.md rename to published/201810/20181019 How to use Pandoc to produce a research paper.md index 516ab8ba37..3ccbc8df1c 100644 --- a/translated/tech/20181019 How to use Pandoc to produce a research paper.md +++ b/published/201810/20181019 How to use Pandoc to produce a research paper.md @@ -1,19 +1,21 @@ 
-用 Pandoc 做一篇调研论文 +用 Pandoc 生成一篇调研论文 ====== -学习如何用 Markdown 管理引用、图像、表格、以及更多。 + +> 学习如何用 Markdown 管理章节引用、图像、表格以及更多。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T) -这篇文章对于使用 [Markdown][1] 语法做一篇调研论文进行了一个深度体验。覆盖了如何创建和引用、图像(用 Markdown 和 [LaTeX][2])和参考书目。我们也讨论了一些棘手的案例和为什么使用 LaTex 是一个正确的做法。 +这篇文章对于使用 [Markdown][1] 语法做一篇调研论文进行了一个深度体验。覆盖了如何创建和引用章节、图像(用 Markdown 和 [LaTeX][2])和参考书目。我们也讨论了一些棘手的案例和为什么使用 LaTex 是一个正确的做法。 -### 调查 +### 调研 -调研论文一般包括引用、图像、表格和参考书目。[Pandoc][3] 本身并不能交叉引用这些,但是但是它能够利用 [pandoc-crossref][4] 过滤来完成自动编号和章节、图像、表格的交叉引用。 +调研论文一般包括对章节、图像、表格和参考书目的引用。[Pandoc][3] 本身并不能交叉引用这些,但是它能够利用 [pandoc-crossref][4] 过滤器来完成自动编号和章节、图像、表格的交叉引用。 -让我们开始正常的使用 LaTax 重写 [一个教育调研报告的例子][5],然后用 Markdown(和一些 LaTax)、Pandoc 和 Pandoc-crossref 再重写。 +让我们从重写原本以 LaTax 撰写的 [一个教育调研报告的例子][5] 开始,然后用 Markdown(和一些 LaTax)、Pandoc 和 Pandoc-crossref 重写。 #### 添加并引用章节 -要想章节被自动编号,必须使用 Markdown 标题 H1 编写。子章节使用子标题 H2-H4 编写(通常不需要更多的东西)。例如一个章节的标题是 “履行”,写作 `# 履行 {#sec: 履行}`,然后 Pandoc 会把它转化为 `3. 履行`(或者转换为相应的章节标号)。`履行` 这个标题使用了 H1 并且声明了一个 `{#sec: 履行}` 的标签,这是作者引用了该章节的标签。要想引用一个章节,在对应章节后面输入 `@` 符号并使用方括号括起来即可: `[@sec:履行]` +要想章节被自动编号,必须使用 Markdown H1 标题编写。子章节使用 H2-H4 子标题编写(通常不需要更多级别了)。例如一个章节的标题是 “Implementation”,写作 `# Implementation {#sec: implementation}`,然后 Pandoc 会把它转化为 `3. Implementation `(或者转换为相应的章节编号)。`Implementation` 这个标题使用了 H1 并且声明了一个 `{#sec: implementation}` 的标签,这是作者用于引用该章节的标签。要想引用一个章节,输入 `@` 符号并跟上对应章节标签,使用方括号括起来即可: `[@ sec:implementation]` [在这篇论文中][5], 我们发现了下面这个例子: @@ -27,16 +29,17 @@ Pandoc 转换: we lack experience (consistency between TAs, Section 4). ``` -章节被自动(这包含在文章最后的 `Makefile` 当中)标号。要创建无标号的章节,输入章节的标题并在最后添加 `{-}`。例如:`### 设计一个可维护的游戏 {-}` 就以标题 “设计一个可维护的游戏”,创建了一个无标号的章节。 +章节被自动编号(这在本文最后的 `Makefile` 当中说明)。要创建无编号的章节,输入章节的标题并在最后添加 `{-}`。例如:`### Designing a game for maintainability {-}` 就以标题 “Designing a game for maintainability”,创建了一个无标号的章节。 #### 添加并引用图像 -添加并引用一个图像,跟添加并引用一个章节和添加一个 Markdown 图片很相似: +添加并引用一个图像,跟添加并引用一个章节和添加一个 Markdown 图片很相似: ``` ![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix} ``` -上面这一行是告诉 Pandoc,有一个标有 Scatterplot matrix 的图像以及这张图片路径是 `data/scatterplots/RScatterplotMatrix2.png`。`{#fig:scatter-matrix}` 表明了应该引用的图像的名字。 + +上面这一行是告诉 Pandoc,有一个标有 Scatterplot matrix 的图像以及这张图片路径是 `data/scatterplots/RScatterplotMatrix2.png`。`{#fig:scatter-matrix}` 表明了用于引用该图像的名字。 这里是从一篇论文中进行图像引用的例子: @@ -51,46 +54,47 @@ The boxes "Enjoy", "Grade" and "Motivation" (Fig. 1) ... ``` #### 添加及引用参考书目 -大多数调研报告都把引用放在一个 BibTeX 的数据库文件中。在这个例子中,该文件被命名为 [biblio.bib][6],它包含了论文中所有的引用。下面是这个文件的样子: + +大多数调研报告都把引用放在一个 BibTeX 的数据库文件中。在这个例子中,该文件被命名为 [biblio.bib][6],它包含了论文中所有的引用。下面是这个文件的样子: ``` @inproceedings{wrigstad2017mastery, -    Author =       {Wrigstad, Tobias and Castegren, Elias}, -    Booktitle =    {SPLASH-E}, -    Title =        {Mastery Learning-Like Teaching with Achievements}, -    Year =         2017 + Author = {Wrigstad, Tobias and Castegren, Elias}, + Booktitle = {SPLASH-E}, + Title = {Mastery Learning-Like Teaching with Achievements}, + Year = 2017 } @inproceedings{review-gamification-framework, -  Author =       {A. Mora and D. Riera and C. Gonzalez and J. 
Arnedo-Moreno}, -  Publisher =    {IEEE}, -  Booktitle =    {2015 7th International Conference on Games and Virtual Worlds -                  for Serious Applications (VS-Games)}, -  Doi =          {10.1109/VS-GAMES.2015.7295760}, -  Keywords =     {formal specification;serious games (computing);design -                  framework;formal design process;game components;game design -                  elements;gamification design frameworks;gamification-based -                  solutions;Bibliographies;Context;Design -                  methodology;Ethics;Games;Proposals}, -  Month =        {Sept}, -  Pages =        {1-8}, -  Title =        {A Literature Review of Gamification Design Frameworks}, -  Year =         2015, -  Bdsk-Url-1 =   {http://dx.doi.org/10.1109/VS-GAMES.2015.7295760} + Author = {A. Mora and D. Riera and C. Gonzalez and J. Arnedo-Moreno}, + Publisher = {IEEE}, + Booktitle = {2015 7th International Conference on Games and Virtual Worlds + for Serious Applications (VS-Games)}, + Doi = {10.1109/VS-GAMES.2015.7295760}, + Keywords = {formal specification;serious games (computing);design + framework;formal design process;game components;game design + elements;gamification design frameworks;gamification-based + solutions;Bibliographies;Context;Design + methodology;Ethics;Games;Proposals}, + Month = {Sept}, + Pages = {1-8}, + Title = {A Literature Review of Gamification Design Frameworks}, + Year = 2015, + Bdsk-Url-1 = {http://dx.doi.org/10.1109/VS-GAMES.2015.7295760} } ... ``` -第一行的 `@inproceedings{wrigstad2017mastery,` 表明了出版物 (`inproceedings`) 的类型,以及用来指向那篇论文 (`wrigstad2017mastery`) 的标签。 +第一行的 `@inproceedings{wrigstad2017mastery,` 表明了出版物 的类型(`inproceedings`),以及用来指向那篇论文的标签(`wrigstad2017mastery`)。 -引用这篇题为 “Mastery Learning-Like Teaching with Achievements” 的论文, 输入: +引用这篇题为 “Mastery Learning-Like Teaching with Achievements” 的论文, 输入: ``` the achievement-driven learning methodology [@wrigstad2017mastery] ``` -Pandoc 将会输出: +Pandoc 将会输出: ``` the achievement- driven learning methodology [30] @@ -100,25 +104,23 @@ the achievement- driven learning methodology [30] ![](https://opensource.com/sites/default/files/uploads/bibliography-example_0.png) -引用文章的集合也很容易:只要引用使用分号 `;` 分隔开被标记的参考文献就可以了。如果一个引用有两个标签 —— 例如: `SEABORN201514` 和 `gamification-leaderboard-benefits`—— 像下面这样把它们放在一起引用: +引用文章的集合也很容易:只要引用使用分号 `;` 分隔开被标记的参考文献就可以了。如果一个引用有两个标签 —— 例如: `SEABORN201514` 和 `gamification-leaderboard-benefits`—— 像下面这样把它们放在一起引用: ``` Thus, the most important benefit is its potential to increase students' motivation - and engagement [@SEABORN201514;@gamification-leaderboard-benefits]. 
``` -Pandoc 将会产生: +Pandoc 将会产生: ``` Thus, the most important benefit is its potential to increase students’ motivation - and engagement [26, 28] ``` ### 问题案例 -一个常见的问题是项目与页面不匹配。不匹配的部分会自动移动到它们认为合适的地方,即便这些位置并不是读者期望看到的位置。因此在图像或者表格接近于它们被提及的地方时,我们需要调节一下它们在此处的元素组合,使得他们更加易于阅读。为了达到这个效果,我建议使用 `figure` 这个 LaTeX 环境参数,它可以让用户控制图像的位置。 +一个常见的问题是所需项目与页面不匹配。不匹配的部分会自动移动到它们认为合适的地方,即便这些位置并不是读者期望看到的位置。因此在图像或者表格接近于它们被提及的地方时,我们需要调节一下那些元素放置的位置,使得它们更加易于阅读。为了达到这个效果,我建议使用 `figure` 这个 LaTeX 环境参数,它可以让用户控制图像的位置。 我们看一个上面提到的图像的例子: @@ -126,7 +128,7 @@ and engagement [26, 28] ![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix} ``` -然后使用 LaTeX 重写: +然后使用 LaTeX 重写: ``` \begin{figure}[t] @@ -139,17 +141,17 @@ and engagement [26, 28] ### 产生一篇论文 -到目前为止,我们讲了如何添加和引用(子)章节、图像和参考书目,现在让我们重温一下如何生产一篇 PDF 格式的论文,生成 PDF,我们将使用 Pandoc 生成一篇可以被构建成最终 PDF 的 LaTeX 文件。我们还会讨论如何以 LaTeX,使用一套自定义的模板和元信息文件生成一篇调研论文,以及如何构建 LaTeX 文档为最终的 PDF 格式。 +到目前为止,我们讲了如何添加和引用(子)章节、图像和参考书目,现在让我们重温一下如何生成一篇 PDF 格式的论文。要生成 PDF,我们将使用 Pandoc 生成一篇可以被构建成最终 PDF 的 LaTeX 文件。我们还会讨论如何以 LaTeX,使用一套自定义的模板和元信息文件生成一篇调研论文,以及如何将 LaTeX 文档编译为最终的 PDF 格式。 -很多会议都提供了一个 **.cls** 文件或者一套论文该有样子的模板; 例如,他们是否应该使用两列的格式以及其他的设计风格。在我们的例子中,会议提供了一个名为 **acmart.cls** 的文件。 +很多会议都提供了一个 .cls 文件或者一套论文应有样式的模板;例如,它们是否应该使用两列的格式以及其它的设计风格。在我们的例子中,会议提供了一个名为 `acmart.cls` 的文件。 -作者通常想要在他们的论文中包含他们所属的机构,然而,这个选项并没有包含在默认的 Pandoc 的 LaTeX 模板(注意,可以通过输入 `pandoc -D latex` 来查看 Pandoc 模板)当中。要包含这个内容,找一个 Pandoc 默认的 LaTeX 模板,并添加一些新的内容。将这个模板像下面这样复制进一个名为 `mytemplate.tex` 的文件中: +作者通常想要在他们的论文中包含他们所属的机构,然而,这个选项并没有包含在默认的 Pandoc 的 LaTeX 模板(注意,可以通过输入 `pandoc -D latex` 来查看 Pandoc 模板)当中。要包含这个内容,找一个 Pandoc 默认的 LaTeX 模板,并添加一些新的内容。将这个模板像下面这样复制进一个名为 `mytemplate.tex` 的文件中: ``` pandoc -D latex > mytemplate.tex ``` -默认的模板包含以下代码: +默认的模板包含以下代码: ``` $if(author)$ @@ -161,32 +163,30 @@ $if(institute)$ $endif$ ``` -因为这个模板应该包含作者的联系方式和电子邮件地址,在其他一些选项之间,我们可以添加以下内容(我们还做了一些其他的更改,但是因为文件的长度,就没有包含在此处)更新这个模板 +因为这个模板应该包含作者的联系方式和电子邮件地址,在其他一些选项之间,我们更新这个模板以添加以下内容(我们还做了一些其他的更改,但是因为文件的长度,就没有包含在此处): ``` latex $for(author)$ -    $if(author.name)$ -        \author{$author.name$} -        $if(author.affiliation)$ -            \affiliation{\institution{$author.affiliation$}} -        $endif$ -        $if(author.email)$ -            \email{$author.email$} -        $endif$ -    $else$ -        $author$ -    $endif$ + $if(author.name)$ + \author{$author.name$} + $if(author.affiliation)$ + \affiliation{\institution{$author.affiliation$}} + $endif$ + $if(author.email)$ + \email{$author.email$} + $endif$ + $else$ + $author$ + $endif$ $endfor$ ``` 要让这些更改起作用,我们还应该有下面的文件: - * `main.md` 包含调研论文 - * `biblio.bib` 包含参考书目数据库 - * `acmart.cls` 我们使用的文档的集合 - * `mytemplate.tex` 是我们使用的模板文件(代替默认的) - - +* `main.md` 包含调研论文 +* `biblio.bib` 包含参考书目数据库 +* `acmart.cls` 我们使用的文档的集合 +* `mytemplate.tex` 是我们使用的模板文件(代替默认的) 让我们添加论文的元信息到一个 `meta.yaml` 文件: @@ -211,7 +211,7 @@ abstract: |   An achievement-driven methodology strives to give students more control over their learning with enough flexibility to engage them in deeper learning. (more stuff continues) include-before: | -   \```{=latex} +   \` ``{=latex}   \copyrightyear{2018}   \acmYear{2018}   \setcopyright{acmlicensed} @@ -234,7 +234,7 @@ include-before: |   \ccsdesc[500]{Applied computing~Education}   \keywords{gamification, education, software design, UML} -   \``` +   \` `` figPrefix:   - "Fig."   - "Figs." 
@@ -246,23 +246,21 @@ secPrefix: 这个元信息文件使用 LaTeX 设置下列参数: - * `template` 指向使用的模板(’mytemplate.tex‘) - * `documentclass` 指向使用的 LaTeX 文档集合 (`acmart`) - * `classoption` 是在 `sigconf` 的案例中,指向这个类的选项 - * `title` 指定论文的标题 - * `author` 是一个包含例如 `name`, `affiliation`, 和 `email` 的地方 - * `bibliography` 指向包含参考书目的文件 (biblio.bib) - * `abstract` 包含论文的摘要 - * `include-before` 是这篇论文的真实内容之前应该被包含的信息;在 LaTeX 中被称为 [前言][8]。我在这里包含它去展示如何产生一篇计算机科学的论文,但是你可以选择跳过 - * `figPrefix` 指向如何引用文档中的图像,例如:当引用图像的 `[@fig:scatter-matrix]` 时应该显示什么。例如,当前的 `figPrefix` 在这个例子 `The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix])`中,产生了这样的输出:`The boxes "Enjoy", "Grade" and "Motivation" (Fig. 3)`。如果这里有很多图像,目前的设置表明它应该在图像号码旁边显示 `Figs.`。 - * `secPrefix` 指定如何引用文档中其他地方提到的部分(类似之前的图像和概览) - - +* `template` 指向使用的模板(`mytemplate.tex`) +* `documentclass` 指向使用的 LaTeX 文档集合(`acmart`) +* `classoption` 是在 `sigconf` 的案例中,指向这个类的选项 +* `title` 指定论文的标题 +* `author` 是一个包含例如 `name`、`affiliation` 和 `email` 的地方 +* `bibliography` 指向包含参考书目的文件(`biblio.bib`) +* `abstract` 包含论文的摘要 +* `include-before` 是这篇论文的具体内容之前应该被包含的信息;在 LaTeX 中被称为 [前言][8]。我在这里包含它去展示如何产生一篇计算机科学的论文,但是你可以选择跳过 +* `figPrefix` 指向如何引用文档中的图像,例如:当引用图像的 `[@fig:scatter-matrix]` 时应该显示什么。例如,当前的 `figPrefix` 在这个例子 `The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix])`中,产生了这样的输出:`The boxes "Enjoy", "Grade" and "Motivation" (Fig. 3)`。如果这里有很多图像,目前的设置表明它应该在图像号码旁边显示 `Figs.` +* `secPrefix` 指定如何引用文档中其他地方提到的部分(类似之前的图像和概览) 现在已经设置好了元信息,让我们来创建一个 `Makefile`,它会产生你想要的输出。`Makefile` 使用 Pandoc 产生 LaTeX 文件,`pandoc-crossref` 产生交叉引用,`pdflatex` 构建 LaTeX 为 PDF,`bibtex ` 处理引用。 -`Makefile` 已经展示如下: +`Makefile` 已经展示如下: ``` all: paper @@ -281,18 +279,16 @@ clean: .PHONY: all clean paper ``` -Pandoc 使用下面的标记: +Pandoc 使用下面的标记: - * `-s` 创建一个独立的 LaTeX 文档 - * `-F pandoc-crossref` 利用 `pandoc-crossref` 进行过滤 - * `--natbib` 用 `natbib` (你也可以选择 `--biblatex`)对参考书目进行渲染 - * `--template` 设置使用的模板文件 - * `-N` 为章节的标题编号 - * `-f` 和 `-t` 指定从哪个格式转换到哪个格式。`-t` 通常包含格式和 Pandoc 使用的扩展。在这个例子中,我们标明的 `raw_tex+tex_math_dollars+citations` 允许在 Markdown 中使用 `raw_tex` LaTeX。 `tex_math_dollars` 让我们能够像在 LaTeX 中一样输入数学符号,`citations` 让我们可以使用 [这个扩展][9]. 
+* `-s` 创建一个独立的 LaTeX 文档 +* `-F pandoc-crossref` 利用 `pandoc-crossref` 进行过滤 +* `--natbib` 用 `natbib` (你也可以选择 `--biblatex`)对参考书目进行渲染 +* `--template` 设置使用的模板文件 +* `-N` 为章节的标题编号 +* `-f` 和 `-t` 指定从哪个格式转换到哪个格式。`-t` 通常包含格式和 Pandoc 使用的扩展。在这个例子中,我们标明的 `raw_tex+tex_math_dollars+citations` 允许在 Markdown 中使用 `raw_tex` LaTeX。 `tex_math_dollars` 让我们能够像在 LaTeX 中一样输入数学符号,`citations` 让我们可以使用 [这个扩展][9]。 - - -由 LaTeX 产生 PDF,接着引导行 [从 bibtex][10] 处理参考书目: +要从 LaTeX 产生 PDF,按 [来自bibtex][10] 的指导处理参考书目: ``` @pdflatex main.tex &> /dev/null @@ -301,7 +297,7 @@ Pandoc 使用下面的标记: @pdflatex main.tex &> /dev/null ``` -脚本用 `@` 忽略输出,并且重定向标准输出和错误到 `/dev/null` ,因此我们在使用这些命令的可执行文件时不会看到任何的输出。 +脚本用 `@` 忽略输出,并且重定向标准输出和错误到 `/dev/null` ,因此我们在使用这些命令的可执行文件时不会看到任何的输出。 最终的结果展示如下。这篇文章的库可以在 [GitHub][11] 找到: @@ -309,9 +305,9 @@ Pandoc 使用下面的标记: ### 结论 -在我看来,研究的重点是协作,思想的传播,以及在任何一个恰好存在的领域中改进现有的技术。许多计算机科学家和工程师使用 LaTeX 文档系统来写论文,它对数学提供了完美的支持。来自社会科学的调查员似乎更喜欢 DOCX 文档。 +在我看来,研究的重点是协作、思想的传播,以及在任何一个恰好存在的领域中改进现有的技术。许多计算机科学家和工程师使用 LaTeX 文档系统来写论文,它对数学提供了完美的支持。来自社会科学的研究人员似乎更喜欢 DOCX 文档。 -当身处不同社区的调查员一同写一篇论文时,他们首先应该讨论一下他们将要使用哪种格式。然而如果包含太多的数学符号,DOCX 对于工程师来说不会是最简便的选择,LaTeX 对于缺乏编程经验的调查员来说也有一些问题。就像这篇文章中展示的,Markdown 是一门工程师和社会科学家都很轻易能够使用的语言。 +当身处不同社区的研究人员一同写一篇论文时,他们首先应该讨论一下他们将要使用哪种格式。然而如果包含太多的数学符号,DOCX 对于工程师来说不会是最简便的选择,LaTeX 对于缺乏编程经验的研究人员来说也有一些问题。就像这篇文章中展示的,Markdown 是一门工程师和社会科学家都很轻易能够使用的语言。 -------------------------------------------------------------------------------- @@ -320,7 +316,7 @@ via: https://opensource.com/article/18/9/pandoc-research-paper 作者:[Kiko Fernandez-Reyes][a] 选题:[lujun9972][b] 译者:[dianbanjiu](https://github.com/dianbanjiu) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181019 How to use Pandoc to produce a research paper.md b/published/20181019 How to use Pandoc to produce a research paper.md new file mode 100644 index 0000000000..3ccbc8df1c --- /dev/null +++ b/published/20181019 How to use Pandoc to produce a research paper.md @@ -0,0 +1,335 @@ +用 Pandoc 生成一篇调研论文 +====== + +> 学习如何用 Markdown 管理章节引用、图像、表格以及更多。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T) + +这篇文章对于使用 [Markdown][1] 语法做一篇调研论文进行了一个深度体验。覆盖了如何创建和引用章节、图像(用 Markdown 和 [LaTeX][2])和参考书目。我们也讨论了一些棘手的案例和为什么使用 LaTex 是一个正确的做法。 + +### 调研 + +调研论文一般包括对章节、图像、表格和参考书目的引用。[Pandoc][3] 本身并不能交叉引用这些,但是它能够利用 [pandoc-crossref][4] 过滤器来完成自动编号和章节、图像、表格的交叉引用。 + +让我们从重写原本以 LaTax 撰写的 [一个教育调研报告的例子][5] 开始,然后用 Markdown(和一些 LaTax)、Pandoc 和 Pandoc-crossref 重写。 + +#### 添加并引用章节 + +要想章节被自动编号,必须使用 Markdown H1 标题编写。子章节使用 H2-H4 子标题编写(通常不需要更多级别了)。例如一个章节的标题是 “Implementation”,写作 `# Implementation {#sec: implementation}`,然后 Pandoc 会把它转化为 `3. Implementation `(或者转换为相应的章节编号)。`Implementation` 这个标题使用了 H1 并且声明了一个 `{#sec: implementation}` 的标签,这是作者用于引用该章节的标签。要想引用一个章节,输入 `@` 符号并跟上对应章节标签,使用方括号括起来即可: `[@ sec:implementation]` + +[在这篇论文中][5], 我们发现了下面这个例子: + +``` +we lack experience (consistency between TAs, [@sec:implementation]). +``` + +Pandoc 转换: + +``` +we lack experience (consistency between TAs, Section 4). 
+``` + +章节被自动编号(这在本文最后的 `Makefile` 当中说明)。要创建无编号的章节,输入章节的标题并在最后添加 `{-}`。例如:`### Designing a game for maintainability {-}` 就以标题 “Designing a game for maintainability”,创建了一个无标号的章节。 + +#### 添加并引用图像 + +添加并引用一个图像,跟添加并引用一个章节和添加一个 Markdown 图片很相似: + +``` +![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix} +``` + +上面这一行是告诉 Pandoc,有一个标有 Scatterplot matrix 的图像以及这张图片路径是 `data/scatterplots/RScatterplotMatrix2.png`。`{#fig:scatter-matrix}` 表明了用于引用该图像的名字。 + +这里是从一篇论文中进行图像引用的例子: + +``` +The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix]) ... +``` + +Pandoc 产生如下输出: + +``` +The boxes "Enjoy", "Grade" and "Motivation" (Fig. 1) ... +``` + +#### 添加及引用参考书目 + +大多数调研报告都把引用放在一个 BibTeX 的数据库文件中。在这个例子中,该文件被命名为 [biblio.bib][6],它包含了论文中所有的引用。下面是这个文件的样子: + +``` +@inproceedings{wrigstad2017mastery, + Author = {Wrigstad, Tobias and Castegren, Elias}, + Booktitle = {SPLASH-E}, + Title = {Mastery Learning-Like Teaching with Achievements}, + Year = 2017 +} + +@inproceedings{review-gamification-framework, + Author = {A. Mora and D. Riera and C. Gonzalez and J. Arnedo-Moreno}, + Publisher = {IEEE}, + Booktitle = {2015 7th International Conference on Games and Virtual Worlds + for Serious Applications (VS-Games)}, + Doi = {10.1109/VS-GAMES.2015.7295760}, + Keywords = {formal specification;serious games (computing);design + framework;formal design process;game components;game design + elements;gamification design frameworks;gamification-based + solutions;Bibliographies;Context;Design + methodology;Ethics;Games;Proposals}, + Month = {Sept}, + Pages = {1-8}, + Title = {A Literature Review of Gamification Design Frameworks}, + Year = 2015, + Bdsk-Url-1 = {http://dx.doi.org/10.1109/VS-GAMES.2015.7295760} +} + +... +``` + +第一行的 `@inproceedings{wrigstad2017mastery,` 表明了出版物 的类型(`inproceedings`),以及用来指向那篇论文的标签(`wrigstad2017mastery`)。 + +引用这篇题为 “Mastery Learning-Like Teaching with Achievements” 的论文, 输入: + +``` +the achievement-driven learning methodology [@wrigstad2017mastery] +``` + +Pandoc 将会输出: + +``` +the achievement- driven learning methodology [30] +``` + +这篇论文将会产生像下面这样被标号的参考书目: + +![](https://opensource.com/sites/default/files/uploads/bibliography-example_0.png) + +引用文章的集合也很容易:只要引用使用分号 `;` 分隔开被标记的参考文献就可以了。如果一个引用有两个标签 —— 例如: `SEABORN201514` 和 `gamification-leaderboard-benefits`—— 像下面这样把它们放在一起引用: + +``` +Thus, the most important benefit is its potential to increase students' motivation +and engagement [@SEABORN201514;@gamification-leaderboard-benefits]. 
+``` + +Pandoc 将会产生: + +``` +Thus, the most important benefit is its potential to increase students’ motivation +and engagement [26, 28] +``` + +### 问题案例 + +一个常见的问题是所需项目与页面不匹配。不匹配的部分会自动移动到它们认为合适的地方,即便这些位置并不是读者期望看到的位置。因此在图像或者表格接近于它们被提及的地方时,我们需要调节一下那些元素放置的位置,使得它们更加易于阅读。为了达到这个效果,我建议使用 `figure` 这个 LaTeX 环境参数,它可以让用户控制图像的位置。 + +我们看一个上面提到的图像的例子: + +``` +![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix} +``` + +然后使用 LaTeX 重写: + +``` +\begin{figure}[t] +\includegraphics{data/scatterplots/RScatterplotMatrix2.png} +\caption{\label{fig:matrix}Scatterplot matrix} +\end{figure} +``` + +在 LaTeX 中,`figure` 环境参数中的 `[t]` 选项表示这张图用该位于该页的最顶部。有关更多选项,参阅 [LaTex/Floats, Figures, and Captions][7] 这篇 Wikibooks 的文章。 + +### 产生一篇论文 + +到目前为止,我们讲了如何添加和引用(子)章节、图像和参考书目,现在让我们重温一下如何生成一篇 PDF 格式的论文。要生成 PDF,我们将使用 Pandoc 生成一篇可以被构建成最终 PDF 的 LaTeX 文件。我们还会讨论如何以 LaTeX,使用一套自定义的模板和元信息文件生成一篇调研论文,以及如何将 LaTeX 文档编译为最终的 PDF 格式。 + +很多会议都提供了一个 .cls 文件或者一套论文应有样式的模板;例如,它们是否应该使用两列的格式以及其它的设计风格。在我们的例子中,会议提供了一个名为 `acmart.cls` 的文件。 + +作者通常想要在他们的论文中包含他们所属的机构,然而,这个选项并没有包含在默认的 Pandoc 的 LaTeX 模板(注意,可以通过输入 `pandoc -D latex` 来查看 Pandoc 模板)当中。要包含这个内容,找一个 Pandoc 默认的 LaTeX 模板,并添加一些新的内容。将这个模板像下面这样复制进一个名为 `mytemplate.tex` 的文件中: + +``` +pandoc -D latex > mytemplate.tex +``` + +默认的模板包含以下代码: + +``` +$if(author)$ +\author{$for(author)$$author$$sep$ \and $endfor$} +$endif$ +$if(institute)$ +\providecommand{\institute}[1]{} +\institute{$for(institute)$$institute$$sep$ \and $endfor$} +$endif$ +``` + +因为这个模板应该包含作者的联系方式和电子邮件地址,在其他一些选项之间,我们更新这个模板以添加以下内容(我们还做了一些其他的更改,但是因为文件的长度,就没有包含在此处): + +``` +latex +$for(author)$ + $if(author.name)$ + \author{$author.name$} + $if(author.affiliation)$ + \affiliation{\institution{$author.affiliation$}} + $endif$ + $if(author.email)$ + \email{$author.email$} + $endif$ + $else$ + $author$ + $endif$ +$endfor$ +``` +要让这些更改起作用,我们还应该有下面的文件: + +* `main.md` 包含调研论文 +* `biblio.bib` 包含参考书目数据库 +* `acmart.cls` 我们使用的文档的集合 +* `mytemplate.tex` 是我们使用的模板文件(代替默认的) + +让我们添加论文的元信息到一个 `meta.yaml` 文件: + +``` +--- +template: 'mytemplate.tex' +documentclass: acmart +classoption: sigconf +title: The impact of opt-in gamification on `\\`{=latex} students' grades in a software design course +author: +- name: Kiko Fernandez-Reyes +  affiliation: Uppsala University +  email: kiko.fernandez@it.uu.se +- name: Dave Clarke +  affiliation: Uppsala University +  email: dave.clarke@it.uu.se +- name: Janina Hornbach +  affiliation: Uppsala University +  email: janina.hornbach@fek.uu.se +bibliography: biblio.bib +abstract: | +  An achievement-driven methodology strives to give students more control over their learning with enough flexibility to engage them in deeper learning. (more stuff continues) + +include-before: | +   \` ``{=latex} +   \copyrightyear{2018} +   \acmYear{2018} +   \setcopyright{acmlicensed} +   \acmConference[MODELS '18 Companion]{ACM/IEEE 21th International Conference on Model Driven Engineering Languages and Systems}{October 14--19, 2018}{Copenhagen, Denmark} +   \acmBooktitle{ACM/IEEE 21th International Conference on Model Driven Engineering Languages and Systems (MODELS '18 Companion), October 14--19, 2018, Copenhagen, Denmark} +   \acmPrice{XX.XX} +   \acmDOI{10.1145/3270112.3270118} +   \acmISBN{978-1-4503-5965-8/18/10} + +   \begin{CCSXML} +   +   +   10010405.10010489 +   Applied computing~Education +   500 +   +   +   \end{CCSXML} + +   \ccsdesc[500]{Applied computing~Education} + +   \keywords{gamification, education, software design, UML} +   \` `` +figPrefix: +  - "Fig." +  - "Figs." 
+secPrefix: +  - "Section" +  - "Sections" +... +``` + +这个元信息文件使用 LaTeX 设置下列参数: + +* `template` 指向使用的模板(`mytemplate.tex`) +* `documentclass` 指向使用的 LaTeX 文档集合(`acmart`) +* `classoption` 是在 `sigconf` 的案例中,指向这个类的选项 +* `title` 指定论文的标题 +* `author` 是一个包含例如 `name`、`affiliation` 和 `email` 的地方 +* `bibliography` 指向包含参考书目的文件(`biblio.bib`) +* `abstract` 包含论文的摘要 +* `include-before` 是这篇论文的具体内容之前应该被包含的信息;在 LaTeX 中被称为 [前言][8]。我在这里包含它去展示如何产生一篇计算机科学的论文,但是你可以选择跳过 +* `figPrefix` 指向如何引用文档中的图像,例如:当引用图像的 `[@fig:scatter-matrix]` 时应该显示什么。例如,当前的 `figPrefix` 在这个例子 `The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix])`中,产生了这样的输出:`The boxes "Enjoy", "Grade" and "Motivation" (Fig. 3)`。如果这里有很多图像,目前的设置表明它应该在图像号码旁边显示 `Figs.` +* `secPrefix` 指定如何引用文档中其他地方提到的部分(类似之前的图像和概览) + +现在已经设置好了元信息,让我们来创建一个 `Makefile`,它会产生你想要的输出。`Makefile` 使用 Pandoc 产生 LaTeX 文件,`pandoc-crossref` 产生交叉引用,`pdflatex` 构建 LaTeX 为 PDF,`bibtex ` 处理引用。 + + +`Makefile` 已经展示如下: + +``` +all: paper + +paper: +        @pandoc -s -F pandoc-crossref --natbib meta.yaml --template=mytemplate.tex -N \ +         -f markdown -t latex+raw_tex+tex_math_dollars+citations -o main.tex main.md +        @pdflatex main.tex &> /dev/null +        @bibtex main &> /dev/null +        @pdflatex main.tex &> /dev/null +        @pdflatex main.tex &> /dev/null + +clean: +        rm main.aux main.tex main.log main.bbl main.blg main.out + +.PHONY: all clean paper +``` + +Pandoc 使用下面的标记: + +* `-s` 创建一个独立的 LaTeX 文档 +* `-F pandoc-crossref` 利用 `pandoc-crossref` 进行过滤 +* `--natbib` 用 `natbib` (你也可以选择 `--biblatex`)对参考书目进行渲染 +* `--template` 设置使用的模板文件 +* `-N` 为章节的标题编号 +* `-f` 和 `-t` 指定从哪个格式转换到哪个格式。`-t` 通常包含格式和 Pandoc 使用的扩展。在这个例子中,我们标明的 `raw_tex+tex_math_dollars+citations` 允许在 Markdown 中使用 `raw_tex` LaTeX。 `tex_math_dollars` 让我们能够像在 LaTeX 中一样输入数学符号,`citations` 让我们可以使用 [这个扩展][9]。 + +要从 LaTeX 产生 PDF,按 [来自bibtex][10] 的指导处理参考书目: + +``` +@pdflatex main.tex &> /dev/null +@bibtex main &> /dev/null +@pdflatex main.tex &> /dev/null +@pdflatex main.tex &> /dev/null +``` + +脚本用 `@` 忽略输出,并且重定向标准输出和错误到 `/dev/null` ,因此我们在使用这些命令的可执行文件时不会看到任何的输出。 + +最终的结果展示如下。这篇文章的库可以在 [GitHub][11] 找到: + +![](https://opensource.com/sites/default/files/uploads/abstract-image.png) + +### 结论 + +在我看来,研究的重点是协作、思想的传播,以及在任何一个恰好存在的领域中改进现有的技术。许多计算机科学家和工程师使用 LaTeX 文档系统来写论文,它对数学提供了完美的支持。来自社会科学的研究人员似乎更喜欢 DOCX 文档。 + +当身处不同社区的研究人员一同写一篇论文时,他们首先应该讨论一下他们将要使用哪种格式。然而如果包含太多的数学符号,DOCX 对于工程师来说不会是最简便的选择,LaTeX 对于缺乏编程经验的研究人员来说也有一些问题。就像这篇文章中展示的,Markdown 是一门工程师和社会科学家都很轻易能够使用的语言。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/pandoc-research-paper + +作者:[Kiko Fernandez-Reyes][a] +选题:[lujun9972][b] +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/kikofernandez +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Markdown +[2]: https://www.latex-project.org/ +[3]: https://pandoc.org/ +[4]: http://lierdakil.github.io/pandoc-crossref/ +[5]: https://dl.acm.org/citation.cfm?id=3270118 +[6]: https://github.com/kikofernandez/pandoc-examples/blob/master/research-paper/biblio.bib +[7]: https://en.wikibooks.org/wiki/LaTeX/Floats,_Figures_and_Captions#Figures +[8]: https://www.sharelatex.com/learn/latex/Creating_a_document_in_LaTeX#The_preamble_of_a_document +[9]: http://pandoc.org/MANUAL.html#citations +[10]: http://www.bibtex.org/Using/ +[11]: 
https://github.com/kikofernandez/pandoc-examples/tree/master/research-paper diff --git a/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md b/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md deleted file mode 100644 index b5f220ef24..0000000000 --- a/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md +++ /dev/null @@ -1,135 +0,0 @@ -Translating by Felix -20 questions DevOps job candidates should be prepared to answer -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hire-job-career.png?itok=SrZo0QJ3) -Hiring the wrong person is [expensive][1]. Recruiting, hiring, and onboarding a new employee can cost a company as much as $240,000, according to Jörgen Sundberg, CEO of Link Humans. When you make the wrong hire: - - * You lose what they know. - * You lose who they know. - * Your team could go into the [storming][2] phase of group development. - * Your company risks disorganization. - - - -When you lose an employee, you lose a piece of the fabric of the company. It's also worth mentioning the pain on the other end. The person hired into the wrong job may experience stress, feelings of overall dissatisfaction, and even health issues. - -On the other hand, when you get it right, your new hire will: - - * Enhance the existing culture, making your organization an even a better place to work. Studies show that a positive work culture helps [drive long-term financial performance][3] and that if you work in a happy environment, you’re more likely to do better in life. - * Love working with your organization. When people love what they do, they tend to do it well. - - - -Hiring to fit or enhance your existing culture is essential in DevOps and agile teams. That means hiring someone who can encourage effective collaboration so that individual contributors from varying backgrounds, and teams with different goals and working styles, can work together productively. Your new hire should help teams collaborate to maximize their value while also increasing employee satisfaction and balancing conflicting organizational goals. He or she should be able to choose tools and workflows wisely to complement your organization. Culture is everything. - -As a follow-up to our November 2017 post, [20 questions DevOps hiring managers should be prepared to answer][4], this article will focus on how to hire for the best mutual fit. - -### Why hiring goes wrong - -The typical hiring strategy many companies use today is based on a talent surplus: - - * Post on job boards. - * Focus on candidates with the skills they need. - * Find as many candidates as possible. - * Interview to weed out the weak. - * Conduct formal interviews to do more weeding. - * Assess, vote, and select. - * Close on compensation. - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/hiring_graphic.png?itok=1udGbkhB) - -Job boards were invented during the Great Depression when millions of people were out of work and there was a talent surplus. There is no talent surplus in today's job market, yet we’re still using a hiring strategy that's based on one. 
- -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/732px-unemployed_men_queued_outside_a_depression_soup_kitchen_opened_in_chicago_by_al_capone_02-1931_-_nara_-_541927.jpg?itok=HSs4NjCN) - -### Hire for mutual fit: Use culture and emotions - -The idea behind the talent surplus hiring strategy is to design jobs and then slot people into them. - -Instead, do the opposite: Find talented people who will positively add to your business culture, then find the best fit for them in a job they’ll love. To do this, you must be open to creating jobs around their passions. - -**Who is looking for a job?** According to a 2016 survey of more than 50,000 U.S. developers, [85.7% of respondents][5] were either not interested in new opportunities or were not actively looking for them. And of those who were looking, a whopping [28.3% of job discoveries][5] came from referrals by friends. If you’re searching only for people who are looking for jobs, you’re missing out on top talent. - -**Use your team to find and vet potential recruits**. For example, if Diane is a developer on your team, chances are she has [been coding for years][6] and has met fellow developers along the way who also love what they do. Wouldn’t you think her chances of vetting potential recruits for skills, knowledge, and intelligence would be higher than having someone from HR find and vet potential recruits? And before asking Diane to share her knowledge of fellow recruits, inform her of the upcoming mission, explain your desire to hire a diverse team of passionate explorers, and describe some of the areas where help will be needed in the future. - -**What do employees want?** A comprehensive study comparing the wants and needs of Millennials, GenX’ers, and Baby Boomers shows that within two percentage points, we all [want the same things][7]: - - 1. To make a positive impact on the organization - 2. To help solve social and/or environmental challenges - 3. To work with a diverse group of people - - - -### The interview challenge - -The interview should be a two-way conversation for finding a mutual fit between the person hiring and the person interviewing. Focus your interview on CQ ([Cultural Quotient][7]) and EQ ([Emotional Quotient][8]): Will this person reinforce and add to your culture and love working with you? Can you help make them successful at their job? - -**For the hiring manager:** Every interview is an opportunity to learn how your organization could become more irresistible to prospective team members, and every positive interview can be your best opportunity to finding talent, even if you don’t hire that person. Everyone remembers being interviewed if it is a positive experience. Even if they don’t get hired, they will talk about the experience with their friends, and you may get a referral as a result. There is a big upside to this: If you’re not attracting this talent, you have the opportunity to learn the reason and fix it. - -**For the interviewee** : Each interview experience is an opportunity to unlock your passions. - -### 20 questions to help you unlock the passions of potential hires - - 1. What are you passionate about? - - 2. What makes you think, "I can't wait to get to work this morning!” - - 3. What is the most fun you’ve ever had? - - 4. What is your favorite example of a problem you’ve solved, and how did you solve it? - - 5. How do you feel about paired learning? - - 6. What’s at the top of your mind when you arrive at, and leave, the office? 
- - 7. If you could have changed one thing in your previous/current job, what would it be? - - 8. What are you excited to learn while working here? - - 9. What do you aspire to in life, and how are you pursuing it? - - 10. What do you want, or feel you need, to learn to achieve these aspirations? - - 11. What values do you hold? - - 12. How do you live those values? - - 13. What does balance mean in your life? - - 14. What work interactions are you are most proud of? Why? - - 15. What type of environment do you like to create? - - 16. How do you like to be treated? - - 17. What do you trust vs. verify? - - 18. Tell me about a recent learning you had when working on a project. - - 19. What else should we know about you? - - 20. If you were hiring me, what questions would you ask me? - - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/questions-devops-employees-should-answer - -作者:[Catherine Louis][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/catherinelouis -[1]:https://www.shrm.org/resourcesandtools/hr-topics/employee-relations/pages/cost-of-bad-hires.aspx -[2]:https://en.wikipedia.org/wiki/Tuckman%27s_stages_of_group_development -[3]:http://www.forbes.com/sites/johnkotter/2011/02/10/does-corporate-culture-drive-financial-performance/ -[4]:https://opensource.com/article/17/11/inclusive-workforce-takes-work -[5]:https://insights.stackoverflow.com/survey/2016#work-job-discovery -[6]:https://research.hackerrank.com/developer-skills/2018/ -[7]:http://www-935.ibm.com/services/us/gbs/thoughtleadership/millennialworkplace/ -[8]:https://en.wikipedia.org/wiki/Emotional_intelligence diff --git a/sources/talk/20181019 What is an SRE and how does it relate to DevOps.md b/sources/talk/20181019 What is an SRE and how does it relate to DevOps.md deleted file mode 100644 index 7093b36cd5..0000000000 --- a/sources/talk/20181019 What is an SRE and how does it relate to DevOps.md +++ /dev/null @@ -1,71 +0,0 @@ -translating by belitex - -What is an SRE and how does it relate to DevOps? -====== -The SRE role is common in large enterprises, but smaller businesses need it, too. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP) - -Even though the site reliability engineer (SRE) role has become prevalent in recent years, many people—even in the software industry—don't know what it is or does. This article aims to clear that up by explaining what an SRE is, how it relates to DevOps, and how an SRE works when your entire engineering organization can fit in a coffee shop. - -### What is site reliability engineering? - -[Site Reliability Engineering: How Google Runs Production Systems][1], written by a group of Google engineers, is considered the definitive book on site reliability engineering. Google vice president of engineering Ben Treynor Sloss [coined the term][2] back in the early 2000s. He defined it as: "It's what happens when you ask a software engineer to design an operations function." - -Sysadmins have been writing code for a long time, but for many of those years, a team of sysadmins managed many machines manually. 
Back then, "many" may have been dozens or hundreds, but when you scale to thousands or hundreds of thousands of hosts, you simply can't continue to throw people at the problem. When the number of machines gets that large, the obvious solution is to use code to manage hosts (and the software that runs on them). - -Also, until fairly recently, the operations team was completely separate from the developers. The skillsets for each job were considered completely different. The SRE role tries to bring both jobs together. - -Before we dig deeper into what makes an SRE and how SREs work with the development team, we need to understand how site reliability engineering works within the DevOps paradigm. - -### Site reliability engineering and DevOps - -At its core, site reliability engineering is an implementation of the DevOps paradigm. There seems to be a wide array of ways to [define DevOps][3]. The traditional model, where the development ("devs") and operations ("ops") teams were separated, led to the team that writes the code not being responsible for how it works when customers start using it. The development team would "throw the code over the wall" to the operations team to install and support. - -This situation can lead to a significant amount of dysfunction. The goals of the dev and ops teams are constantly at odds—a developer wants customers to use the "latest and greatest" piece of code, but the operations team wants a steady system with as little change as possible. Their premise is that any change can introduce instability, while a system with no changes should continue to behave in the same manner. (Noting that minimizing change on the software side is not the only factor in preventing instability is important. For example, if your web application stays exactly the same, but the number of customers grows by 10x, your application may break in many different ways.) - -The premise of DevOps is that by merging these two distinct jobs into one, you eliminate contention. If the "dev" wants to deploy new code all the time, they have to deal with any fallout the new code creates. As Amazon's [Werner Vogels said][4], "you build it, you run it" (in production). But developers already have a lot to worry about. They are continually pushed to develop new features for their employer's products. Asking them to understand the infrastructure, including how to deploy, configure, and monitor their service, may be asking a little too much from them. This is where an SRE steps in. - -When a web application is developed, there are often many people that contribute. There are user interface designers, graphic designers, frontend engineers, backend engineers, and a whole host of other specialties (depending on the technologies used). Requirements include how the code gets managed (e.g., deployed, configured, monitored)—which are the SRE's areas of specialty. But, just as an engineer developing a nice look and feel for an application benefits from knowledge of the backend-engineer's job (e.g., how data is fetched from a database), the SRE understands how the deployment system works and how to adapt it to the specific needs of that particular codebase or project. - -So, an SRE is not just "an ops person who codes." Rather, the SRE is another member of the development team with a different set of skills particularly around deployment, configuration management, monitoring, metrics, etc. 
But, just as an engineer developing a nice look and feel for an application must know how data is fetched from a data store, an SRE is not singly responsible for these areas. The entire team works together to deliver a product that can be easily updated, managed, and monitored. - -The need for an SRE naturally comes about when a team is implementing DevOps but realizes they are asking too much of the developers and need a specialist for what the ops team used to handle. - -### How the SRE works at a startup - -This is great when there are hundreds of employees (let alone when you are the size of Google or Facebook). Large companies have SRE teams that are split up and embedded into each development team. But a startup doesn't have those economies of scale, and engineers often wear many hats. So, where does the "SRE hat" sit in a small company? One approach is to fully adopt DevOps and have the developers be responsible for the typical tasks an SRE would perform at a larger company. On the other side of the spectrum, you hire specialists — a.k.a., SREs. - -The most obvious advantage of trying to put the SRE hat on a developer's head is it scales well as your team grows. Also, the developer will understand all the quirks of the application. But many startups use a wide variety of SaaS products to power their infrastructure. The most obvious is the infrastructure platform itself. Then you add in metrics systems, site monitoring, log analysis, containers, and more. While these technologies solve some problems, they create an additional complexity cost. The developer would need to understand all those technologies and services in addition to the core technologies (e.g., languages) the application uses. In the end, keeping on top of all of that technology can be overwhelming. - -The other option is to hire a specialist to handle the SRE job. Their responsibility would be to focus on deployment, configuration, monitoring, and metrics, freeing up the developer's time to write the application. The disadvantage is that the SRE would have to split their time between multiple, different applications (i.e., the SRE needs to support the breadth of applications throughout engineering). This likely means they may not have the time to gain any depth of knowledge of any of the applications; however, they would be in a position to see how all the different pieces fit together. This "30,000-foot view" can help prioritize the weak spots to fix in the system as a whole. - -There is one key piece of information I am ignoring: your other engineers. They may have a deep desire to understand how deployment works and how to use the metrics system to the best of their ability. Also, hiring an SRE is not an easy task. You are looking for a mix of sysadmin skills and software engineering skills. (I am specific about software engineers, vs. just "being able to code," because software engineering involves more than just writing code [e.g., writing good tests or documentation].) - -Therefore, in some cases, it may make more sense for the "SRE hat" to live on a developer's head. If so, keep an eye on the amount of complexity in both the code and the infrastructure (SaaS or internal). At some point, the complexity on either end will likely push toward more specialization. - -### Conclusion - -An SRE team is one of the most efficient ways to implement the DevOps paradigm in a startup. 
I have seen a couple of different approaches, but I believe that hiring a dedicated SRE (pretty early) at your startup will free up time for the developers to focus on their specific challenges. The SRE can focus on improving the tools (and processes) that make the developers more productive. Also, an SRE will focus on making sure your customers have a product that is reliable and secure. - -Craig Sebenik will present [SRE (and DevOps) at a Startup][5] at [LISA18][6], October 29-31 in Nashville, Tennessee. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/sre-startup - -作者:[Craig Sebenik][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/craig5 -[b]: https://github.com/lujun9972 -[1]: http://shop.oreilly.com/product/0636920041528.do -[2]: https://landing.google.com/sre/interview/ben-treynor.html -[3]: https://opensource.com/resources/devops -[4]: https://queue.acm.org/detail.cfm?id=1142065 -[5]: https://www.usenix.org/conference/lisa18/presentation/sebenik -[6]: https://www.usenix.org/conference/lisa18 diff --git a/sources/talk/20181025 What breaks our systems- A taxonomy of black swans.md b/sources/talk/20181025 What breaks our systems- A taxonomy of black swans.md index d2d5217ade..376809b08b 100644 --- a/sources/talk/20181025 What breaks our systems- A taxonomy of black swans.md +++ b/sources/talk/20181025 What breaks our systems- A taxonomy of black swans.md @@ -1,3 +1,5 @@ +translating by belitex + What breaks our systems: A taxonomy of black swans ====== diff --git a/sources/talk/20181026 Directing traffic- Demystifying internet-scale load balancing.md b/sources/talk/20181026 Directing traffic- Demystifying internet-scale load balancing.md new file mode 100644 index 0000000000..6ebcba69e3 --- /dev/null +++ b/sources/talk/20181026 Directing traffic- Demystifying internet-scale load balancing.md @@ -0,0 +1,108 @@ +Directing traffic: Demystifying internet-scale load balancing +====== +Common techniques used to balance network traffic come with advantages and trade-offs. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/traffic-light-go.png?itok=nC_851ys) +Large, multi-site, internet-facing systems, including content-delivery networks (CDNs) and cloud providers, have several options for balancing traffic coming onto their networks. In this article, we'll describe common traffic-balancing designs, including techniques and trade-offs. + +If you were an early cloud computing provider, you could take a single customer web server, assign it an IP address, configure a domain name system (DNS) record to associate it with a human-readable name, and advertise the IP address via the border gateway protocol (BGP), the standard way of exchanging routing information between networks. + +It wasn't load balancing per se, but there probably was load distribution across redundant network paths and networking technologies to increase availability by routing around unavailable infrastructure (giving rise to phenomena like [asymmetric routing][1]). + +### Doing simple DNS load balancing + +As traffic to your customer's service grows, the business' owners want higher availability. 
You add a second web server with its own publicly accessible IP address and update the DNS record to direct users to both web servers (hopefully somewhat evenly). This is OK for a while until one web server unexpectedly goes offline. Assuming you detect the failure quickly, you can update the DNS configuration (either manually or with software) to stop referencing the broken server. + +Unfortunately, because DNS records are cached, around 50% of requests to the service will likely fail until the record expires from the client caches and those of other nameservers in the DNS hierarchy. DNS records generally have a time to live (TTL) of several minutes or more, so this can create a significant impact on your system's availability. + +Worse, some proportion of clients ignore TTL entirely, so some requests will be directed to your offline web server for some time. Setting very short DNS TTLs is not a great idea either; it means higher load on DNS services plus increased latency because clients will have to perform DNS lookups more often. If your DNS service is unavailable for any reason, access to your service will degrade more quickly with a shorter TTL because fewer clients will have your service's IP address cached. + +### Adding network load balancing + +To work around this problem, you can add a redundant pair of [Layer 4][2] (L4) network load balancers that serve the same virtual IP (VIP) address. They could be hardware appliances or software balancers like [HAProxy][3]. This means the DNS record points only at the VIP and no longer does load balancing. + +![Layer 4 load balancers balance connections across webservers.][5] + +Layer 4 load balancers balance connections from users across two webservers. + +The L4 balancers load-balance traffic from the internet to the backend servers. This is generally done based on a hash (a mathematical function) of each IP packet's 5-tuple: the source and destination IP address and port plus the protocol (such as TCP or UDP). This is fast and efficient (and still maintains essential properties of TCP) and doesn't require the balancers to maintain state per connection. (For more information, [Google's paper on Maglev][6] discusses implementation of a software L4 balancer in significant detail.) + +The L4 balancers can do health-checking and send traffic only to web servers that pass checks. Unlike in DNS balancing, there is minimal delay in redirecting traffic to another web server if one crashes, although existing connections will be reset. + +L4 balancers can do weighted balancing, dealing with backends with varying capacity. L4 balancing gives significant power and flexibility to operators while being relatively inexpensive in terms of computing power. + +### Going multi-site + +The system continues to grow. Your customers want to stay up even if your data center goes down. You build a new data center with its own set of service backends and another cluster of L4 balancers, which serve the same VIP as before. The DNS setup doesn't change. + +The edge routers in both sites advertise address space, including the service VIP. Requests sent to that VIP can reach either site, depending on how each network between the end user and the system is connected and how their routing policies are configured. This is known as anycast. Most of the time, this works fine. If one site isn't operating, you can stop advertising the VIP for the service via BGP, and traffic will quickly move to the alternative site. 
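Before moving on, it may help to make the L4 hashing described above concrete. The following is a minimal sketch in C; the struct layout, the FNV-1a hash, and the final modulo step are assumptions made for this illustration, not a description of how any particular balancer is implemented:

```c
#include <stdint.h>
#include <stddef.h>

/* The 5-tuple that identifies a flow. Field names are illustrative. */
struct five_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;              /* e.g., 6 = TCP, 17 = UDP */
};

/* 32-bit FNV-1a over a byte buffer; any decent hash would do here. */
static uint32_t fnv1a(const void *buf, size_t len, uint32_t h)
{
    const unsigned char *p = buf;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

/* Pick a backend index (n_backends must be > 0). Each field is hashed
 * separately to avoid struct padding bytes. Every packet of a given TCP
 * connection hashes identically, so the connection keeps landing on the
 * same backend without the balancer keeping any per-connection state. */
unsigned pick_backend(const struct five_tuple *t, unsigned n_backends)
{
    uint32_t h = 2166136261u;    /* FNV offset basis */
    h = fnv1a(&t->src_ip,   sizeof t->src_ip,   h);
    h = fnv1a(&t->dst_ip,   sizeof t->dst_ip,   h);
    h = fnv1a(&t->src_port, sizeof t->src_port, h);
    h = fnv1a(&t->dst_port, sizeof t->dst_port, h);
    h = fnv1a(&t->proto,    sizeof t->proto,    h);
    return h % n_backends;
}
```

One weakness of the plain modulo step is worth noting: if a backend is added or removed, `n_backends` changes and most flows suddenly map to different servers, resetting their connections. That is one reason production systems such as Google's Maglev use consistent-hashing schemes instead.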
+ +![Serving from multiple sites using anycast][8] + +Serving from multiple sites using anycast. + +This setup has several problems. Its worst failing is that you can't control where traffic flows or limit how much traffic is sent to a given site. You also don't have an explicit way to route users to the nearest site (in terms of network latency), but the network protocols and configurations that determine the routes should, in most cases, route requests to the nearest site. + +### Controlling inbound requests in a multi-site system + +To maintain stability, you need to be able to control how much traffic is served to each site. You can get that control by assigning a different VIP to each site and using DNS to balance them using simple or weighted [round-robin][9]. + +![Serving from multiple sites using a primary VIP][11] + +Serving from multiple sites using a primary VIP per site, backed up by secondary sites, with geo-aware DNS. + +You now have two new problems. + +First, using DNS balancing means you have cached records, which is not good if you need to redirect traffic quickly. + +Second, whenever users do a fresh DNS lookup, a VIP connects them to the service at an arbitrary site, which may not be the closest site to them. If your service runs on widely separated sites, individual users will experience wide variations in your system's responsiveness, depending upon the network latency between them and the instance of your service they are using. + +You can solve the first problem by having each site constantly advertise and serve the VIPs for all the other sites (and consequently the VIP for any faulty site). Networking tricks (such as advertising less-specific routes from the backups) can ensure that each VIP's primary site is preferred, as long as it is available. This is done via BGP, so we should see traffic move within a minute or two of updating BGP. + +There isn't an elegant solution to the problem of serving users from sites other than the nearest healthy site with capacity. Many large internet-facing services use DNS services that attempt to return different results to users in different locations, with some degree of success. This approach is always somewhat [complex and error-prone][12], given that internet-addressing schemes are not organized geographically, blocks of addresses can change locations (e.g., when a company reorganizes its network), and many end users can be served from a single caching nameserver. + +### Adding Layer 7 load balancing + +Over time, your customers begin to ask for more advanced features. + +While L4 load balancers can efficiently distribute load among multiple web servers, they operate only on source and destination IP addresses, protocol, and ports. They don't know anything about the content of a request, so you can't implement many advanced features in an L4 balancer. Layer 7 (L7) load balancers are aware of the structure and contents of requests and can do far more. + +Some things that can be implemented in L7 load balancers are caching, rate limiting, fault injection, and cost-aware load balancing (some requests require much more server time to process). + +They can also balance based on a request's attributes (e.g., HTTP cookies), terminate SSL connections, and help defend against application layer denial-of-service (DoS) attacks. The downside of L7 balancers at scale is cost—they do more computation to process requests, and each active request consumes some system resources.
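To give a flavor of the request awareness described above, here is a toy routing function in C over fields an L7 balancer would already have parsed out of an HTTP request. The pool names, the `/static/` rule, and the cookie-pinning rule are all invented for this sketch; real L7 balancers such as HAProxy express this kind of policy in configuration, not hard-coded logic:

```c
#include <stdint.h>
#include <string.h>

enum pool { POOL_STATIC_CACHE, POOL_SESSION, POOL_DEFAULT };

/* Fields the balancer has already parsed from the request. */
struct http_request {
    const char *path;            /* e.g., "/static/logo.png" */
    const char *session_cookie;  /* NULL if no session cookie */
};

/* djb2 string hash, used here to pin a key to one server. */
static uint32_t hash_str(const char *s)
{
    uint32_t h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

/* Content-aware routing: static assets go to a caching pool, requests
 * carrying a session cookie stick to one server in the session pool,
 * and everything else is spread round-robin across the default pool. */
unsigned route(const struct http_request *r,
               const unsigned pool_sizes[3], unsigned *rr_counter,
               enum pool *chosen_pool)
{
    if (strncmp(r->path, "/static/", 8) == 0) {
        *chosen_pool = POOL_STATIC_CACHE;
        return hash_str(r->path) % pool_sizes[POOL_STATIC_CACHE];
    }
    if (r->session_cookie) {
        *chosen_pool = POOL_SESSION;
        return hash_str(r->session_cookie) % pool_sizes[POOL_SESSION];
    }
    *chosen_pool = POOL_DEFAULT;
    return (*rr_counter)++ % pool_sizes[POOL_DEFAULT];
}
```

None of these decisions are possible at L4, because each one depends on the contents of the request rather than on the packet headers alone.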
Running L4 balancers in front of one or more pools of L7 balancers can help with scaling. + +### Conclusion + +Load balancing is a difficult and complex problem. In addition to the strategies described in this article, there are different [load-balancing algorithms][13], high-availability techniques used to implement load balancers, client load-balancing techniques, and the recent rise of service meshes. + +Core load-balancing patterns have evolved alongside the growth of cloud computing, and they will continue to evolve as large web services work to improve the control and flexibility that load-balancing techniques offer. + +Laura Nolan and Murali Suriar will present [Keeping the Balance: Load Balancing Demystified][14] at [LISA18][15], October 29-31 in Nashville, Tennessee, USA. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/internet-scale-load-balancing + +作者:[Laura Nolan][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lauranolan +[b]: https://github.com/lujun9972 +[1]: https://www.noction.com/blog/bgp-and-asymmetric-routing +[2]: https://en.wikipedia.org/wiki/Transport_layer +[3]: https://www.haproxy.com/blog/failover-and-worst-case-management-with-haproxy/ +[4]: /file/412596 +[5]: https://opensource.com/sites/default/files/uploads/loadbalancing1_l4-network-loadbalancing.png (Layer 4 load balancers balance connections across webservers.) +[6]: https://ai.google/research/pubs/pub44824 +[7]: /file/412601 +[8]: https://opensource.com/sites/default/files/uploads/loadbalancing2_going-multisite.png (Serving from multiple sites using anycast) +[9]: https://en.wikipedia.org/wiki/Round-robin_scheduling +[10]: /file/412606 +[11]: https://opensource.com/sites/default/files/uploads/loadbalancing3_controlling-inbound-requests.png (Serving from multiple sites using a primary VIP) +[12]: https://landing.google.com/sre/book/chapters/load-balancing-frontend.html +[13]: https://medium.com/netflix-techblog/netflix-edge-load-balancing-695308b5548c +[14]: https://www.usenix.org/conference/lisa18/presentation/suriar +[15]: https://www.usenix.org/conference/lisa18 diff --git a/sources/talk/20181031 3 scary sysadmin stories.md b/sources/talk/20181031 3 scary sysadmin stories.md new file mode 100644 index 0000000000..6810012f57 --- /dev/null +++ b/sources/talk/20181031 3 scary sysadmin stories.md @@ -0,0 +1,124 @@ +3 scary sysadmin stories +====== + +Terrifying ghosts are hanging around every data center, just waiting to haunt the unsuspecting sysadmin. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/spooky_halloween_haunted_house.jpg?itok=UkRBeItZ) + +> "It's all just a bunch of hocus pocus!" — Max in [Hocus Pocus][1] + +Over my many years as a system administrator, I've heard many horror stories about the different ghosts that have haunted new admins due to their inexperience. + +Here are three of the stories that stand out to me the most in helping build my character as a good sysadmin. + +### The ghost of the failed restore + +In a well-known data center (whose name I do not want to remember), one cold October night we had a production outage in which thousands of web servers stopped responding due to downtime in the main database.
The database administrator asked me, the rookie sysadmin, to recover the database's last full backup and restore it to bring the service back online. + +But, at the end of the process, the database was still broken. I didn't worry, because there were other full backup files in stock. However, even after doing the process several times, the result didn't change. + +With great fear, I asked the senior sysadmin what to do to fix this behavior. + +"You remember when I showed you, a few days ago, how the full backup script was running? Something about how important it was to validate the backup?" responded the sysadmin. + +"Of course! You told me that I had to stay a couple of extra hours to perform that task," I answered. + +"Exactly! But you preferred to leave early without finishing that task," he said. + +"Oh my! I thought it was optional!" I exclaimed. + +"It was, it was…" + +**Moral of the story:** Even with the best solution that promises to make the most thorough backups, the ghost of the failed restoration can appear, darkening our job skills, if we don't make a habit of validating the backup every time. + +### The dark window + +Once upon a night watch, reflecting I was, lonely and tired, +Looking at the file window on my screen. +Clicking randomly, nearly napping, suddenly came a beeping +From some server, sounding gently, sounding on my pager. +"It's just a warning," I muttered, "sounding on my pager— +Only this and nothing more." +Soon again I heard a beeping somewhat louder than before. +Opening my pager with great disdain, +There was the message from a server of the saintly days of yore: +"The legacy application, it's down, doesn't respond," and nothing more. +There were many stories of this server, +Incredibly, almost terrified, +I went down to the data center to review it. +I sat engaged in guessing, what would be the console to restart it +Without keyboard, mouse, or monitor? +"The task level up"—I think—"only this and nothing more." +Then, thinking, "In another rack, I saw a similar server, +I'll take its monitor and keyboard, nothing bad." +Suddenly, this server shut down, and my pager beeped again: +"The legacy application, it's down, doesn't respond", and nothing more. +Bemused, I sat down to call my sysadmin mentor: +"I wanted to use the console of another server, and now both are out." +"Did you follow my advice? Don't use the graphics console, the terminal is better." +Of course, I remember, it was last December; +I felt fear, a horror that I had never felt before; +"It is a tool of the past and nothing more." +With great shame I understood my mistake: +"Master," I said, "truly, your forgiveness I implore; +but the fact is I thought it was not used anymore. +A dark window and nothing more." +"Learn it well, little kid," he spoke. +"In the terminal you can trust, it's your friend and much, much more." +Step by step, my master showed me to connect with the terminal, +And restarting each one +With infinite patience, he taught me +That from that dark window I should not separate +Never, nevermore. + +**Moral of the story:** Fluency in the command-line terminal is a skill often abandoned and considered archaic by newer generations, but it improves your flexibility and productivity as a sysadmin in obvious and subtle ways. + +### Troll bridge + +I'd been a sysadmin for three or four years when one of my old mentors was removed from work. 
The older man was known for making fun of the new guys in the group—the ones who brought from the university the desire to improve processes with the newly released community operating system. My manager assigned me the older man's office, a small space under the access stairs to the data center—"Troll Bridge," they called it—and the few legacy servers he still managed. + +While reviewing those legacy servers, I realized most of them had many scripts that did practically all the work. I just had to check that they did not go offline due to an electrical failure. I started using those methods, adapting them so my own servers would work the same way, making my tasks more efficient and, at the same time, requiring less of my time to complete them. My day soon became surfing the internet, watching funny videos, and even participating in internet forums. + +A couple of years went by, and I maintained my work in the same way. When a new server arrived, I automated its tasks so I could free myself and continue with my usual participation in internet forums. One day, when I shared one of my scripts in the internet forum, a new admin told me I could simplify it using one novelty language, a new trend that was becoming popular among the new folks. + +"I am a sysadmin, not a programmer," I answered. "They will never be the same." + +From that day on, I dedicated myself to ridiculing the kids who told me I should program in the new languages. + +"You do not know, newbie," I answered every time, "this job will never change." + +A few years later, my responsibilities increased, and my manager wanted me to modify the code of the applications hosted on my server. + +"That's what the job is about now," said my manager. "Development and operations are joining; if you're not willing to do it, we'll bring in some guy who does." + +"I will never do it, it's not my role," I said. + +"Well then…" he said, looking at me harshly. + +I've been here ever since. Hiding. Waiting. Under my bridge. + +I watch from the shadows as the people pass: up the stairs, muttering, or talking about the things the new applications do. Sometimes people pause beneath my bridge, to talk, or share code, or make plans. And I watch them, but they don't see me. + +I'm just going to stay here, in the darkness under the bridge. I can hear you all out there, everything you say. + +Oh yes, I can hear you. +But I'm not coming out. + +**Moral of the story:** "The lazy sysadmin is the best sysadmin" is a well-known phrase that means if we are proactive enough to automate all our processes properly, we will have a lot of free time. The best sysadmins never seem to be very busy; they prefer to be relaxed and let the system do the work for them. "Work smarter not harder." However, if we don't use this free time productively, we can fall into obsoleteness and become something we do not want. The best sysadmins reinvent themselves constantly; they are always researching and learning. + +Following these stories' morals—and continually learning from my mistakes—helped me improve my management skills and create the good habits necessary for the sysadmin job. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/3-scary-sysadmin-stories + +作者:[Alex Callejas][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/darkaxl +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Hocus_Pocus_(1993_film) diff --git a/sources/talk/20181031 How open source hardware increases security.md b/sources/talk/20181031 How open source hardware increases security.md new file mode 100644 index 0000000000..9e823436cf --- /dev/null +++ b/sources/talk/20181031 How open source hardware increases security.md @@ -0,0 +1,84 @@ +How open source hardware increases security +====== +Want to boost cybersecurity at your organization? Switch to open source hardware. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/esp8266_board_hardware.jpg?itok=OTmNpKV1) + +Hardware hacks are particularly scary because they trump any software security safeguards—for example, they can render all accounts on a server password-less. + +Fortunately, we can benefit from what the software industry has learned from decades of fighting prolific software hackers: Using open source techniques can, perhaps counterintuitively, [make a system more secure][1]. Open source hardware and distributed manufacturing can provide protection from future attacks. + +### Trust—but verify + +Imagine you are a 007 agent holding classified documents. Would you feel more secure locking them in a safe whose manufacturer keeps the workings of the locks secret, or in a safe whose design is published openly so that everyone (including thieves) can judge its quality—thus enabling you to rely exclusively on technical complexity for protection? + +The former approach might be perfectly secure—you simply don’t know. But why would you trust any manufacturer that could be compromised now or in the future? In contrast, the open system is almost certain to be secure, especially if enough time has passed for it to be tested by multiple companies, governments, and individuals. + +To a large degree, the software world has seen the benefits of moving to free and open source software. That's why open source is run on all [supercomputers][2], [90% of the cloud, 82% of the smartphone market, and 62% of the embedded systems market][3]. Open source appears poised to dominate the future, with over [70% of the IoT][4]. + +In fact, security is one of the core benefits of [open source][5]. While open source is not inherently more secure, it allows you to verify security yourself (or pay someone more qualified to do so). With closed source programs, you must trust, without verification, that a program works properly. To quote President Reagan: "Trust—but verify." The bottom line is that open source allows users to make more informed choices about the security of a system—choices that are based on their own independent judgment. + +### Open source hardware + +This concept also holds true for electronic devices. Most electronics customers have no idea what is in their products, and even technically sophisticated companies like Amazon may not know exactly what is in the hardware that runs their servers because they use proprietary products that are made by other companies. 
+ +In one recent, widely reported incident, Chinese spies allegedly used a tiny microchip, not much bigger than a grain of rice, to infiltrate hardware made by SuperMicro (the Microsoft of the hardware world). These chips enabled outside infiltrators to access the core server functions of some of America’s leading companies and government operations, including DOD data centers, CIA drone operations, and the onboard networks of Navy warships. Operatives from the People’s Liberation Army or similar groups could have reverse-engineered or made identical or disguised modules (in this case, the chips looked like signal-conditioning couplers, a common motherboard component, rather than the spy devices they were). + +Having the source available helps customers much more than hackers, as most customers do not have the resources to reverse-engineer the electronics they buy. Without the device's source, or design, it's difficult to determine whether or not hardware has been hacked. + +Enter [open source hardware][6]: hardware design that is publicly available so that anyone can study, modify, test, distribute, make, or sell it, or hardware based on it. The hardware’s source is available to everyone. + +### Distributed manufacturing for cybersecurity + +Open source hardware and distributed manufacturing could have prevented the Chinese hack that rightfully terrified the security world. Organizations that require tight security, such as military groups, could then check the product's code and bring production in-house if necessary. + +This open source future may not be far off. Recently I co-authored, with Shane Oberloier, an [article][7] that discusses a low-cost open source benchtop device that enables anyone to make a wide range of open source electronic products. The number of open source electronics designs is proliferating on websites like [Hackaday][8], [Open Electronics][9], and the [Open Circuit Institute][10], as are communities based on specific products like [Arduino][11] and around companies like [Adafruit Industries][12] and [SparkFun Electronics][13]. + +Every level of manufacturing that users can do themselves increases the security of the device. Not long ago, you had to be an expert to make even a simple breadboard design. Now, with open source mills for boards and electronics repositories, small companies and even individuals can make reasonably sophisticated electronic devices. While most builders are still using black-box chips on their devices, this is also changing as [open source chips gain traction][14]. + +![](https://opensource.com/sites/default/files/uploads/800px-oscircuitmill.png) + +Creating electronics that are open source all the way down to the chip is certainly possible—and the more besieged we are by hardware hacks, perhaps it is even inevitable. Companies, governments, and other organizations that care about cybersecurity should strongly consider moving toward open source—perhaps first by establishing purchasing policies for software and hardware that make the code accessible so they can test for security weaknesses. + +Although every customer and every manufacturer of an open source hardware product will have different standards of quality and security, this does not necessarily mean weaker security. Customers should choose whatever version of an open source product best meets their needs, just as users can choose their flavor of Linux.
For example, do you run [Fedora][15] for free, or do you, like [90% of Fortune Global 500 companies][16], pay Red Hat for its version and support? + +Red Hat makes billions of dollars a year for the service it provides, on top of a product that can ostensibly be downloaded for free. Open source hardware can follow the [same business model][17]; it is just a less mature field, lagging [open source software by about 15 years][18]. + +The core source code for hardware devices would be controlled by their manufacturer, following the "[benevolent dictator for life][19]" model. Code of any kind (infected or not) is screened before it becomes part of the root. This is true for hardware, too. For example, Aleph Objects manufactures the popular [open source LulzBot brand of 3D printer][20], a commercial 3D printer that's essentially designed to be hacked. Users have made [dozens of modifications][21] (mods) to the printer, and while they are available, Aleph uses only the ones that meet its QC standards in each subsequent version of the printer. Sure, downloading a mod could mess up your own machine, but infecting the source code of the next LulzBot that way would be nearly impossible. Customers are also able to more easily check the security of the machines themselves. + +While [challenges certainly remain for the security of open source products][22], the open hardware model can help enhance cybersecurity—from the Pentagon to your living room. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/cybersecurity-demands-rapid-switch-open-source-hardware + +作者:[Joshua Pearce][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jmpearce +[b]: https://github.com/lujun9972 +[1]: https://dl.acm.org/citation.cfm?id=1188921 +[2]: https://www.zdnet.com/article/supercomputers-all-linux-all-the-time/ +[3]: https://www.serverwatch.com/server-news/linux-foundation-on-track-for-best-year-ever-as-open-source-dominates.html +[4]: https://www.itprotoday.com/iot/survey-shows-linux-top-operating-system-internet-things-devices +[5]: https://www.infoworld.com/article/2985242/linux/why-is-open-source-software-more-secure.html +[6]: https://www.oshwa.org/definition/ +[7]: https://www.mdpi.com/2411-5134/3/3/64/htm +[8]: https://hackaday.io/ +[9]: https://www.open-electronics.org/ +[10]: http://opencircuitinstitute.org/ +[11]: https://www.arduino.cc/ +[12]: http://www.adafruit.com/ +[13]: https://www.sparkfun.com/ +[14]: https://www.wired.com/story/using-open-source-designs-to-create-more-specialized-chips/ +[15]: https://getfedora.org/ +[16]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux +[17]: https://openhardware.metajnl.com/articles/10.5334/joh.4/ +[18]: https://www.mdpi.com/2411-5134/3/3/44/htm +[19]: https://www.theatlantic.com/technology/archive/2014/01/on-the-reign-of-benevolent-dictators-for-life-in-software/283139/ +[20]: https://www.lulzbot.com/ +[21]: https://forum.lulzbot.com/viewtopic.php?t=2378 +[22]: https://ieeexplore.ieee.org/abstract/document/8250205 diff --git a/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md b/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md index dbdebf63e3..80975288e4
100644 --- a/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md +++ b/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md @@ -1,3 +1,4 @@ +Translating by DavidChenLiang Python ============================================================ diff --git a/sources/tech/20171202 Simulating the Altair.md b/sources/tech/20171202 Simulating the Altair.md deleted file mode 100644 index 3de613cb9a..0000000000 --- a/sources/tech/20171202 Simulating the Altair.md +++ /dev/null @@ -1,70 +0,0 @@ -translating---geekpi - -Simulating the Altair -====== -The [Altair 8800][1] was a build-it-yourself home computer kit released in 1975. The Altair was basically the first personal computer, though it predated the advent of that term by several years. It is Adam (or Eve) to every Dell, HP, or Macbook out there. - -Some people thought it’d be awesome to write an emulator for the Z80—a processor closely related to the Altair’s Intel 8080—and then thought it needed a simulation of the Altair’s control panel on top of it. So if you’ve ever wondered what it was like to use a computer in 1975, you can run the Altair on your Macbook: - -![Altair 8800][2] - -### Installing it - -You can download Z80 pack from the FTP server available [here][3]. You’re looking for the latest Z80 pack release, something like `z80pack-1.26.tgz`. - -First unpack the file: - -``` -$ tar -xvf z80pack-1.26.tgz -``` - -Move into the unpacked directory: - -``` -$ cd z80pack-1.26 -``` - -The control panel simulation is based on a library called `frontpanel`. You’ll have to compile that library first. If you move into the `frontpanel` directory, you will find a `README` file listing the libraries own dependencies. Your experience here will almost certainly differ from mine, but perhaps my travails will be illustrative. I had the dependencies installed, but via [Homebrew][4]. To get the library to compile I just had to make sure that `/usr/local/include` was added to Clang’s include path in `Makefile.osx`. - -If you’ve satisfied the dependencies, you should be able to compile the library (we’re now in `z80pack-1.26/frontpanel`: - -``` -$ make -f Makefile.osx ... -$ make -f Makefile.osx clean -``` - -You should end up with `libfrontpanel.so`. I copied this to `/usr/local/lib`. - -The Altair simulator is under `z80pack-1.26/altairsim`. You now need to compile the simulator itself. Move into `z80pack-1.26/altairsim/srcsim` and run `make` once more: - -``` -$ make -f Makefile.osx ... -$ make -f Makefile.osx clean -``` - -That process will create an executable called `altairsim` one level up in `z80pack-1.26/altairsim`. Run that executable and you should see that iconic Altair control panel! - -And if you really want to nerd out, read through the original [Altair manual][5]. - -If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][6] on Twitter or subscribe to the [RSS feed][7] to make sure you know when a new post is out. 
- -------------------------------------------------------------------------------- - -via: https://twobithistory.org/2017/12/02/simulating-the-altair.html - -作者:[Two-Bit History][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twobithistory.org -[b]: https://github.com/lujun9972 -[1]: https://en.wikipedia.org/wiki/Altair_8800 -[2]: https://www.autometer.de/unix4fun/z80pack/altair.png -[3]: http://www.autometer.de/unix4fun/z80pack/ftp/ -[4]: http://brew.sh/ -[5]: http://www.classiccmp.org/dunfield/altair/d/88opman.pdf -[6]: https://twitter.com/TwoBitHistory -[7]: https://twobithistory.org/feed.xml diff --git a/sources/tech/20180810 How To Quickly Serve Files And Folders Over HTTP In Linux.md b/sources/tech/20180810 How To Quickly Serve Files And Folders Over HTTP In Linux.md index c4adc3ac07..025199d93c 100644 --- a/sources/tech/20180810 How To Quickly Serve Files And Folders Over HTTP In Linux.md +++ b/sources/tech/20180810 How To Quickly Serve Files And Folders Over HTTP In Linux.md @@ -1,3 +1,5 @@ +translating---geekpi + How To Quickly Serve Files And Folders Over HTTP In Linux ====== diff --git a/sources/tech/20180820 How To Disable Ads In Terminal Welcome Message In Ubuntu Server.md b/sources/tech/20180820 How To Disable Ads In Terminal Welcome Message In Ubuntu Server.md deleted file mode 100644 index 8f6ef80dbe..0000000000 --- a/sources/tech/20180820 How To Disable Ads In Terminal Welcome Message In Ubuntu Server.md +++ /dev/null @@ -1,120 +0,0 @@ -How To Disable Ads In Terminal Welcome Message In Ubuntu Server -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/disable-ads-in-Terminal-welcome-message-in-Ubuntu-720x340.jpg) - -If you’re using a recent Ubuntu server edition, you might have noticed some promotional links in the welcome message which are not relevant to the Ubuntu server platform. As you might already know, **MOTD**, an abbreviation of **M**essage **O**f **T**he **D**ay, displays a welcome message at every login in Linux systems. Usually, the welcome message contains the version of your OS, basic system information, an official documentation link, and links to read about the latest security updates, etc. This is what we usually see every time we log in, either via SSH or on the local machine. However, some additional links have started to appear in the terminal welcome message lately. I have noticed these links a few times, but I didn’t pay much attention to them and never clicked them. Here is the Terminal welcome message shown in my Ubuntu 18.04 LTS server. - -![](http://www.ostechnix.com/wp-content/uploads/2018/08/Ubuntu-Terminal-welcome-message.png) - -As you can see in the above screenshot, there is also a bit.ly link and an Ubuntu wiki link in the welcome message. Some of you may be surprised and wonder what they are. There is nothing to worry about with the links in the welcome message. They may look sort of ad-like, but they are not really commercial ads. The links actually point to [**Ubuntu official blog**][1] and [**Ubuntu wiki**][2]. As I said earlier, one of the links is not relevant and doesn’t have any details related to Ubuntu server. That’s why I called them ads in the first place. - -Even though most of us won’t visit bit.ly links, some people may visit them out of curiosity, only to end up disappointed when they realize that the link simply points to an external page.
You can use any URL unshortener service, such as unshorten.it, to see where they lead before visiting the actual link. Alternatively, you can just type a plus sign ( **+** ) at the end of the bit.ly link to see where it leads and some statistics about the link. - -![](http://www.ostechnix.com/wp-content/uploads/2018/08/shortlink.png) - -### What is MOTD and how does it work? - -Back in 2009, **Dustin Kirkland** from Canonical introduced the concept of MOTD in Ubuntu. It’s a flexible framework that enables administrators or distro packages to add executable scripts in the /etc/update-motd.d/* location to generate informative, interesting messages displayed at login. It was originally implemented for Landscape (a commercial service from Canonical); however, other distribution maintainers found it useful and adopted this feature in their own distributions as well. - -If you look in the **/etc/update-motd.d/** location in your Ubuntu system, you’ll see a set of scripts. One prints the generic “welcome” banner. The next one prints 3 links showing where to find help for the OS. Another one counts and displays the number of package updates available for the local system. Yet another tells you if a reboot is required, and so on. - -From Ubuntu 17.04 onwards, the developers have added **/etc/update-motd.d/50-motd-news**, a script to include some additional information in the welcome message. The additional information includes: - - 1. Important critical information, such as ShellShock, Heartbleed, etc. - - 2. End-of-Life (EOL) messages, new feature availability, etc. - - 3. Some fun and informative posts published on the Ubuntu official blog and other news about Ubuntu. - -Asynchronously, about 60 seconds after boot, a systemd timer runs the “/etc/update-motd.d/50-motd-news --force” script. It sources 3 config variables defined in the /etc/default/motd-news script. The default values are: ENABLED=1, URLS="https://motd.ubuntu.com", WAIT="5". - -Here is the content of the /etc/default/motd-news file: -``` -$ cat /etc/default/motd-news -# Enable/disable the dynamic MOTD news service -# This is a useful way to provide dynamic, informative -# information pertinent to the users and administrators -# of the local system -ENABLED=1 - -# Configure the source of dynamic MOTD news -# White space separated list of 0 to many news services -# For security reasons, these must be https -# and have a valid certificate -# Canonical runs a service at motd.ubuntu.com, and you -# can easily run one too -URLS="https://motd.ubuntu.com" - -# Specify the time in seconds, you're willing to wait for -# dynamic MOTD news -# Note that news messages are fetched in the background by -# a systemd timer, so this should never block boot or login -WAIT=5 - -``` - -The good thing is that MOTD is fully customizable: you can disable it entirely (ENABLED=0), change or add scripts as you wish, and change the wait time in seconds. - -If MOTD is enabled, that systemd timer job will loop over each of the URLS, trim them to 80 characters per line and a maximum of 10 lines, and concatenate them to a cache file in /var/cache/motd-news. This systemd timer job will re-run and update /var/cache/motd-news every 12 hours. Upon user login, the contents of /var/cache/motd-news are just printed to the screen. This is how MOTD works. - -Also, a custom user-agent string is included in the **/etc/update-motd.d/50-motd-news** file to report information about your computer. If you look into the **/etc/update-motd.d/50-motd-news** file, you will see the following code.
-``` -# Piece together the user agent -USER_AGENT="curl/$curl_ver $lsb $platform $cpu $uptime" - -``` - -That means the MOTD retriever reports your **operating system release**, **hardware platform**, **CPU type**, and **uptime** to Canonical. - -Hopefully, you now have a basic idea of MOTD. - -Let us now get back to the topic. Say you don’t want this feature; how do you disable it? If the promotional links in the welcome message still bother you and you want to disable them permanently, here is a quick way to do so. - -### Disable Ads In Terminal Welcome Message In Ubuntu Server - -To disable these ads, edit the file: -``` -$ sudo vi /etc/default/motd-news - -``` - -Find the following line and set its value to 0 (zero). -``` -[...] -ENABLED=0 -[...] - -``` - -Save and close the file. Now, reboot your system and see if the welcome message still shows the links from the Ubuntu blog. - -![](https://www.ostechnix.com/wp-content/uploads/2018/08/Ubuntu-Terminal-welcome-message-1.png) - -See? There are no links from the Ubuntu blog or Ubuntu wiki now. - -And, that’s all for now. Hope this helps. More good stuff to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-disable-ads-in-terminal-welcome-message-in-ubuntu-server/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://blog.ubuntu.com/ -[2]:https://wiki.ubuntu.com/ diff --git a/sources/tech/20180824 Joplin- Encrypted Open Source Note Taking And To-Do Application.md b/sources/tech/20180824 Joplin- Encrypted Open Source Note Taking And To-Do Application.md index 5c520c8021..ae6a1f32d9 100644 --- a/sources/tech/20180824 Joplin- Encrypted Open Source Note Taking And To-Do Application.md +++ b/sources/tech/20180824 Joplin- Encrypted Open Source Note Taking And To-Do Application.md @@ -1,3 +1,5 @@ +translating---geekpi + Joplin: Encrypted Open Source Note Taking And To-Do Application ====== **[Joplin][1] is a free and open source note taking and to-do application available for Linux, Windows, macOS, Android and iOS.
Its key features include end-to-end encryption, Markdown support, and synchronization via third-party services like NextCloud, Dropbox, OneDrive or WebDAV.** diff --git a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md index 769f9ba420..c25239b7ba 100644 --- a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md +++ b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md @@ -1,4 +1,3 @@ -Translating by z52527 Publishing Markdown to HTML with MDwiki ====== diff --git a/sources/tech/20180831 Test containers with Python and Conu.md b/sources/tech/20180831 Test containers with Python and Conu.md index e28ca4674e..9911901d51 100644 --- a/sources/tech/20180831 Test containers with Python and Conu.md +++ b/sources/tech/20180831 Test containers with Python and Conu.md @@ -1,4 +1,4 @@ -Test containers with Python and Conu +translating by GraveAccent Test containers with Python and Conu ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/08/conu-816x345.jpg) diff --git a/sources/tech/20180905 How To Run MS-DOS Games And Programs In Linux.md b/sources/tech/20180905 How To Run MS-DOS Games And Programs In Linux.md index 0552fb3d09..ffcdf9f47d 100644 --- a/sources/tech/20180905 How To Run MS-DOS Games And Programs In Linux.md +++ b/sources/tech/20180905 How To Run MS-DOS Games And Programs In Linux.md @@ -1,3 +1,5 @@ +Translating by way-ww + How To Run MS-DOS Games And Programs In Linux ====== diff --git a/sources/tech/20180907 6 open source tools for writing a book.md b/sources/tech/20180907 6 open source tools for writing a book.md deleted file mode 100644 index 52115b1c45..0000000000 --- a/sources/tech/20180907 6 open source tools for writing a book.md +++ /dev/null @@ -1,69 +0,0 @@ -translating---geekpi - -6 open source tools for writing a book -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-austen-writing-code.png?itok=XPxRMtQ4) - -I first used and contributed to free and open source software in 1993, and since then I've been an open source software developer and evangelist. I've written or contributed to dozens of open source software projects, although the one that I'll be remembered for is the [FreeDOS Project][1], an open source implementation of the DOS operating system. - -I recently wrote a book about FreeDOS. [_Using FreeDOS_][2] is my celebration of the 24th anniversary of FreeDOS. It is a collection of how-to's about installing and using FreeDOS, essays about my favorite DOS applications, and quick-reference guides to the DOS command line and DOS batch programming. I've been working on this book for the last few months, with the help of a great professional editor. - -_Using FreeDOS_ is available under the Creative Commons Attribution (cc-by) International Public License. You can download the EPUB and PDF versions at no charge from the [FreeDOS e-books][2] website. (I'm also planning a print version, for those who prefer a bound copy.) - -The book was produced almost entirely with open source software. I'd like to share a brief insight into the tools I used to create, edit, and produce _Using FreeDOS_. - -### Google Docs - -[Google Docs][3] is the only tool I used that isn't open source software. I uploaded my first drafts to Google Docs so my editor and I could collaborate. 
I'm sure there are open source collaboration tools, but Google Doc's ability to let two people edit the same document at the same time, make comments, edit suggestions, and change tracking—not to mention its use of paragraph styles and the ability to download the finished document—made it a valuable part of the editing process. - -### LibreOffice - -I started on [LibreOffice][4] 6.0 but I finished the book using LibreOffice 6.1. I love LibreOffice's rich support of styles. Paragraph styles made it easy to apply a style for titles, headers, body text, sample code, and other text. Character styles let me modify the appearance of text within a paragraph, such as inline sample code or a different style to indicate a filename. Graphics styles let me apply certain styling to screenshots and other images. And page styles allowed me to easily modify the layout and appearance of the page. - -### GIMP - -My book includes a lot of DOS program screenshots, website screenshots, and FreeDOS logos. I used [GIMP][5] to modify these images for the book. Usually, this was simply cropping or resizing an image, but as I prepare the print edition of the book, I'm using GIMP to create a few images that will be simpler for print layout. - -### Inkscape - -Most of the FreeDOS logos and fish mascots are in SVG format, and I used [Inkscape][6] for any image tweaking here. And in preparing the PDF version of the ebook, I wanted a simple blue banner at top of the page, with the FreeDOS logo in the corner. After some experimenting, I found it easier to create an SVG image in Inkscape that looked like the banner I wanted, and I pasted that into the header. - -### ImageMagick - -While it's great to use GIMP to do the fine work, sometimes it's faster to run an [ImageMagick][7] command over a set of images, such as to convert into PNG format or to resize images. - -### Sigil - -LibreOffice can export directly to EPUB format, but it wasn't a great transfer. I haven't tried creating an EPUB with LibreOffice 6.1, but LibreOffice 6.0 didn't include my images. It also added styles in a weird way. I used [Sigil][8] to tweak the EPUB file and make everything look right. Sigil even has a preview function so you can see what the EPUB will look like. - -### QEMU - -Because this book is about installing and running FreeDOS, I needed to actually run FreeDOS. You can boot FreeDOS inside any PC emulator, including VirtualBox, QEMU, GNOME Boxes, PCem, and Bochs. But I like the simplicity of [QEMU][9]. And the QEMU console lets you issue a screen dump in PPM format, which is ideal for grabbing screenshots to include in the book. - -Of course, I have to mention running [GNOME][10] on [Linux][11]. I use the [Fedora][12] distribution of Linux. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/writing-book-open-source-tools - -作者:[Jim Hall][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[1]: http://www.freedos.org/ -[2]: http://www.freedos.org/ebook/ -[3]: https://www.google.com/docs/about/ -[4]: https://www.libreoffice.org/ -[5]: https://www.gimp.org/ -[6]: https://inkscape.org/ -[7]: https://www.imagemagick.org/ -[8]: https://sigil-ebook.com/ -[9]: https://www.qemu.org/ -[10]: https://www.gnome.org/ -[11]: https://www.kernel.org/ -[12]: https://getfedora.org/ diff --git a/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md b/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md index a9b20ac54d..443627f702 100644 --- a/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md +++ b/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md @@ -1,3 +1,5 @@ +Translating by jlztan + KeeWeb – An Open Source, Cross Platform Password Manager ====== diff --git a/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md index 27616a9f6e..71adf0112b 100644 --- a/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md +++ b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md @@ -1,3 +1,5 @@ +translating by cyleft + Taking notes with Laverna, a web-based information organizer ====== diff --git a/sources/tech/20181016 Lab 4- Preemptive Multitasking.md b/sources/tech/20181016 Lab 4- Preemptive Multitasking.md deleted file mode 100644 index de68cd7f39..0000000000 --- a/sources/tech/20181016 Lab 4- Preemptive Multitasking.md +++ /dev/null @@ -1,596 +0,0 @@ -Translating by qhwdw -Lab 4: Preemptive Multitasking -====== -### Lab 4: Preemptive Multitasking - -**Part A due Thursday, October 18, 2018 -Part B due Thursday, October 25, 2018 -Part C due Thursday, November 1, 2018** - -#### Introduction - -In this lab you will implement preemptive multitasking among multiple simultaneously active user-mode environments. - -In part A you will add multiprocessor support to JOS, implement round-robin scheduling, and add basic environment management system calls (calls that create and destroy environments, and allocate/map memory). - -In part B, you will implement a Unix-like `fork()`, which allows a user-mode environment to create copies of itself. - -Finally, in part C you will add support for inter-process communication (IPC), allowing different user-mode environments to communicate and synchronize with each other explicitly. You will also add support for hardware clock interrupts and preemption. - -##### Getting Started - -Use Git to commit your Lab 3 source, fetch the latest version of the course repository, and then create a local branch called `lab4` based on our lab4 branch, `origin/lab4`: - -``` - athena% cd ~/6.828/lab - athena% add git - athena% git pull - Already up-to-date. - athena% git checkout -b lab4 origin/lab4 - Branch lab4 set up to track remote branch refs/remotes/origin/lab4. - Switched to a new branch "lab4" - athena% git merge lab3 - Merge made by recursive. 
- ... - athena% -``` - -Lab 4 contains a number of new source files, some of which you should browse before you start: -| kern/cpu.h | Kernel-private definitions for multiprocessor support | -| kern/mpconfig.c | Code to read the multiprocessor configuration | -| kern/lapic.c | Kernel code driving the local APIC unit in each processor | -| kern/mpentry.S | Assembly-language entry code for non-boot CPUs | -| kern/spinlock.h | Kernel-private definitions for spin locks, including the big kernel lock | -| kern/spinlock.c | Kernel code implementing spin locks | -| kern/sched.c | Code skeleton of the scheduler that you are about to implement | - -##### Lab Requirements - -This lab is divided into three parts, A, B, and C. We have allocated one week in the schedule for each part. - -As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. (You do not need to do one challenge problem per part, just one for the whole lab.) Additionally, you will need to write up a brief description of the challenge problem that you implemented. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab4.txt` in the top level of your `lab` directory before handing in your work. - -#### Part A: Multiprocessor Support and Cooperative Multitasking - -In the first part of this lab, you will first extend JOS to run on a multiprocessor system, and then implement some new JOS kernel system calls to allow user-level environments to create additional new environments. You will also implement _cooperative_ round-robin scheduling, allowing the kernel to switch from one environment to another when the current environment voluntarily relinquishes the CPU (or exits). Later in part C you will implement _preemptive_ scheduling, which allows the kernel to re-take control of the CPU from an environment after a certain time has passed even if the environment does not cooperate. - -##### Multiprocessor Support - -We are going to make JOS support "symmetric multiprocessing" (SMP), a multiprocessor model in which all CPUs have equivalent access to system resources such as memory and I/O buses. While all CPUs are functionally identical in SMP, during the boot process they can be classified into two types: the bootstrap processor (BSP) is responsible for initializing the system and for booting the operating system; and the application processors (APs) are activated by the BSP only after the operating system is up and running. Which processor is the BSP is determined by the hardware and the BIOS. Up to this point, all your existing JOS code has been running on the BSP. - -In an SMP system, each CPU has an accompanying local APIC (LAPIC) unit. The LAPIC units are responsible for delivering interrupts throughout the system. The LAPIC also provides its connected CPU with a unique identifier. In this lab, we make use of the following basic functionality of the LAPIC unit (in `kern/lapic.c`): - - * Reading the LAPIC identifier (APIC ID) to tell which CPU our code is currently running on (see `cpunum()`). - * Sending the `STARTUP` interprocessor interrupt (IPI) from the BSP to the APs to bring up other CPUs (see `lapic_startap()`). - * In part C, we program LAPIC's built-in timer to trigger clock interrupts to support preemptive multitasking (see `apic_init()`). - - - -A processor accesses its LAPIC using memory-mapped I/O (MMIO). 
In MMIO, a portion of _physical_ memory is hardwired to the registers of some I/O devices, so the same load/store instructions typically used to access memory can be used to access device registers. You've already seen one IO hole at physical address `0xA0000` (we use this to write to the VGA display buffer). The LAPIC lives in a hole starting at physical address `0xFE000000` (32MB short of 4GB), so it's too high for us to access using our usual direct map at KERNBASE. The JOS virtual memory map leaves a 4MB gap at `MMIOBASE` so we have a place to map devices like this. Since later labs introduce more MMIO regions, you'll write a simple function to allocate space from this region and map device memory to it. - -``` -Exercise 1. Implement `mmio_map_region` in `kern/pmap.c`. To see how this is used, look at the beginning of `lapic_init` in `kern/lapic.c`. You'll have to do the next exercise, too, before the tests for `mmio_map_region` will run. -``` - -###### Application Processor Bootstrap - -Before booting up APs, the BSP should first collect information about the multiprocessor system, such as the total number of CPUs, their APIC IDs and the MMIO address of the LAPIC unit. The `mp_init()` function in `kern/mpconfig.c` retrieves this information by reading the MP configuration table that resides in the BIOS's region of memory. - -The `boot_aps()` function (in `kern/init.c`) drives the AP bootstrap process. APs start in real mode, much like how the bootloader started in `boot/boot.S`, so `boot_aps()` copies the AP entry code (`kern/mpentry.S`) to a memory location that is addressable in the real mode. Unlike with the bootloader, we have some control over where the AP will start executing code; we copy the entry code to `0x7000` (`MPENTRY_PADDR`), but any unused, page-aligned physical address below 640KB would work. - -After that, `boot_aps()` activates APs one after another, by sending `STARTUP` IPIs to the LAPIC unit of the corresponding AP, along with an initial `CS:IP` address at which the AP should start running its entry code (`MPENTRY_PADDR` in our case). The entry code in `kern/mpentry.S` is quite similar to that of `boot/boot.S`. After some brief setup, it puts the AP into protected mode with paging enabled, and then calls the C setup routine `mp_main()` (also in `kern/init.c`). `boot_aps()` waits for the AP to signal a `CPU_STARTED` flag in `cpu_status` field of its `struct CpuInfo` before going on to wake up the next one. - -``` -Exercise 2. Read `boot_aps()` and `mp_main()` in `kern/init.c`, and the assembly code in `kern/mpentry.S`. Make sure you understand the control flow transfer during the bootstrap of APs. Then modify your implementation of `page_init()` in `kern/pmap.c` to avoid adding the page at `MPENTRY_PADDR` to the free list, so that we can safely copy and run AP bootstrap code at that physical address. Your code should pass the updated `check_page_free_list()` test (but might fail the updated `check_kern_pgdir()` test, which we will fix soon). -``` - -``` -Question - - 1. Compare `kern/mpentry.S` side by side with `boot/boot.S`. Bearing in mind that `kern/mpentry.S` is compiled and linked to run above `KERNBASE` just like everything else in the kernel, what is the purpose of macro `MPBOOTPHYS`? Why is it necessary in `kern/mpentry.S` but not in `boot/boot.S`? In other words, what could go wrong if it were omitted in `kern/mpentry.S`? -Hint: recall the differences between the link address and the load address that we have discussed in Lab 1. 
-```
-
-
-###### Per-CPU State and Initialization
-
-When writing a multiprocessor OS, it is important to distinguish between per-CPU state that is private to each processor, and global state that the whole system shares. `kern/cpu.h` defines most of the per-CPU state, including `struct CpuInfo`, which stores per-CPU variables. `cpunum()` always returns the ID of the CPU that calls it, which can be used as an index into arrays like `cpus`. Alternatively, the macro `thiscpu` is shorthand for the current CPU's `struct CpuInfo`.
-
-Here is the per-CPU state you should be aware of:
-
- * **Per-CPU kernel stack**.
-Because multiple CPUs can trap into the kernel simultaneously, we need a separate kernel stack for each processor to prevent them from interfering with each other's execution. The array `percpu_kstacks[NCPU][KSTKSIZE]` reserves space for NCPU's worth of kernel stacks.
-
-In Lab 2, you mapped the physical memory that `bootstack` refers to as the BSP's kernel stack just below `KSTACKTOP`. Similarly, in this lab, you will map each CPU's kernel stack into this region with guard pages acting as a buffer between them. CPU 0's stack will still grow down from `KSTACKTOP`; CPU 1's stack will start `KSTKGAP` bytes below the bottom of CPU 0's stack, and so on. `inc/memlayout.h` shows the mapping layout.
-
- * **Per-CPU TSS and TSS descriptor**.
-A per-CPU task state segment (TSS) is also needed in order to specify where each CPU's kernel stack lives. The TSS for CPU _i_ is stored in `cpus[i].cpu_ts`, and the corresponding TSS descriptor is defined in the GDT entry `gdt[(GD_TSS0 >> 3) + i]`. The global `ts` variable defined in `kern/trap.c` will no longer be useful.
-
- * **Per-CPU current environment pointer**.
-Since each CPU can run a different user process simultaneously, we redefined the symbol `curenv` to refer to `cpus[cpunum()].cpu_env` (or `thiscpu->cpu_env`), which points to the environment _currently_ executing on the _current_ CPU (the CPU on which the code is running).
-
- * **Per-CPU system registers**.
-All registers, including system registers, are private to a CPU. Therefore, instructions that initialize these registers, such as `lcr3()`, `ltr()`, `lgdt()`, `lidt()`, etc., must be executed once on each CPU. Functions `env_init_percpu()` and `trap_init_percpu()` are defined for this purpose.
-
-
-
-```
-Exercise 3. Modify `mem_init_mp()` (in `kern/pmap.c`) to map per-CPU stacks starting at `KSTACKTOP`, as shown in `inc/memlayout.h`. The size of each stack is `KSTKSIZE` bytes plus `KSTKGAP` bytes of unmapped guard pages. Your code should pass the new check in `check_kern_pgdir()`.
-```
-
-```
-Exercise 4. The code in `trap_init_percpu()` (`kern/trap.c`) initializes the TSS and TSS descriptor for the BSP. It worked in Lab 3, but is incorrect when running on other CPUs. Change the code so that it can work on all CPUs. (Note: your new code should not use the global `ts` variable any more.)
-```
-
-When you finish the above exercises, run JOS in QEMU with 4 CPUs using make qemu CPUS=4 (or make qemu-nox CPUS=4); you should see output like this:
-
-```
- ...
- Physical memory: 66556K available, base = 640K, extended = 65532K
- check_page_alloc() succeeded!
- check_page() succeeded!
- check_kern_pgdir() succeeded!
- check_page_installed_pgdir() succeeded!
- SMP: CPU 0 found 4 CPU(s)
- enabled interrupts: 1 2
- SMP: CPU 1 starting
- SMP: CPU 2 starting
- SMP: CPU 3 starting
-```
-
-###### Locking
-
-Our current code spins after initializing the AP in `mp_main()`.
Before letting the AP get any further, we need to first address race conditions when multiple CPUs run kernel code simultaneously. The simplest way to achieve this is to use a _big kernel lock_. The big kernel lock is a single global lock that is held whenever an environment enters kernel mode, and is released when the environment returns to user mode. In this model, environments in user mode can run concurrently on any available CPUs, but no more than one environment can run in kernel mode; any other environments that try to enter kernel mode are forced to wait. - -`kern/spinlock.h` declares the big kernel lock, namely `kernel_lock`. It also provides `lock_kernel()` and `unlock_kernel()`, shortcuts to acquire and release the lock. You should apply the big kernel lock at four locations: - - * In `i386_init()`, acquire the lock before the BSP wakes up the other CPUs. - * In `mp_main()`, acquire the lock after initializing the AP, and then call `sched_yield()` to start running environments on this AP. - * In `trap()`, acquire the lock when trapped from user mode. To determine whether a trap happened in user mode or in kernel mode, check the low bits of the `tf_cs`. - * In `env_run()`, release the lock _right before_ switching to user mode. Do not do that too early or too late, otherwise you will experience races or deadlocks. - - -``` -Exercise 5. Apply the big kernel lock as described above, by calling `lock_kernel()` and `unlock_kernel()` at the proper locations. -``` - -How to test if your locking is correct? You can't at this moment! But you will be able to after you implement the scheduler in the next exercise. - -``` -Question - - 2. It seems that using the big kernel lock guarantees that only one CPU can run the kernel code at a time. Why do we still need separate kernel stacks for each CPU? Describe a scenario in which using a shared kernel stack will go wrong, even with the protection of the big kernel lock. -``` - -``` -Challenge! The big kernel lock is simple and easy to use. Nevertheless, it eliminates all concurrency in kernel mode. Most modern operating systems use different locks to protect different parts of their shared state, an approach called _fine-grained locking_. Fine-grained locking can increase performance significantly, but is more difficult to implement and error-prone. If you are brave enough, drop the big kernel lock and embrace concurrency in JOS! - -It is up to you to decide the locking granularity (the amount of data that a lock protects). As a hint, you may consider using spin locks to ensure exclusive access to these shared components in the JOS kernel: - - * The page allocator. - * The console driver. - * The scheduler. - * The inter-process communication (IPC) state that you will implement in the part C. -``` - - -##### Round-Robin Scheduling - -Your next task in this lab is to change the JOS kernel so that it can alternate between multiple environments in "round-robin" fashion. Round-robin scheduling in JOS works as follows: - - * The function `sched_yield()` in the new `kern/sched.c` is responsible for selecting a new environment to run. It searches sequentially through the `envs[]` array in circular fashion, starting just after the previously running environment (or at the beginning of the array if there was no previously running environment), picks the first environment it finds with a status of `ENV_RUNNABLE` (see `inc/env.h`), and calls `env_run()` to jump into that environment. 
- * `sched_yield()` must never run the same environment on two CPUs at the same time. It can tell that an environment is currently running on some CPU (possibly the current CPU) because that environment's status will be `ENV_RUNNING`.
- * We have implemented a new system call for you, `sys_yield()`, which user environments can call to invoke the kernel's `sched_yield()` function and thereby voluntarily give up the CPU to a different environment.
-
-
-
-```
-Exercise 6. Implement round-robin scheduling in `sched_yield()` as described above. Don't forget to modify `syscall()` to dispatch `sys_yield()`.
-
-Make sure to invoke `sched_yield()` in `mp_main`.
-
-Modify `kern/init.c` to create three (or more!) environments that all run the program `user/yield.c`.
-
-Run make qemu. You should see the environments switch back and forth between each other five times before terminating, like below.
-
-Test also with several CPUs: make qemu CPUS=2.
-
- ...
- Hello, I am environment 00001000.
- Hello, I am environment 00001001.
- Hello, I am environment 00001002.
- Back in environment 00001000, iteration 0.
- Back in environment 00001001, iteration 0.
- Back in environment 00001002, iteration 0.
- Back in environment 00001000, iteration 1.
- Back in environment 00001001, iteration 1.
- Back in environment 00001002, iteration 1.
- ...
-
-After the `yield` programs exit, there will be no runnable environment in the system, and the scheduler should invoke the JOS kernel monitor. If any of this does not happen, then fix your code before proceeding.
-```
-
-```
-Question
-
- 3. In your implementation of `env_run()` you should have called `lcr3()`. Before and after the call to `lcr3()`, your code makes references (at least it should) to the variable `e`, the argument to `env_run`. Upon loading the `%cr3` register, the addressing context used by the MMU is instantly changed. But a virtual address (namely `e`) has meaning relative to a given address context--the address context specifies the physical address to which the virtual address maps. Why can the pointer `e` be dereferenced both before and after the addressing switch?
- 4. Whenever the kernel switches from one environment to another, it must ensure the old environment's registers are saved so they can be restored properly later. Why? Where does this happen?
-```
-
-```
-Challenge! Add a less trivial scheduling policy to the kernel, such as a fixed-priority scheduler that allows each environment to be assigned a priority and ensures that higher-priority environments are always chosen in preference to lower-priority environments. If you're feeling really adventurous, try implementing a Unix-style adjustable-priority scheduler or even a lottery or stride scheduler. (Look up "lottery scheduling" and "stride scheduling" in Google.)
-
-Write a test program or two that verifies that your scheduling algorithm is working correctly (i.e., the right environments get run in the right order). It may be easier to write these test programs once you have implemented `fork()` and IPC in parts B and C of this lab.
-```
-
-```
-Challenge! The JOS kernel currently does not allow applications to use the x86 processor's x87 floating-point unit (FPU), MMX instructions, or Streaming SIMD Extensions (SSE). Extend the `Env` structure to provide a save area for the processor's floating point state, and extend the context switching code to save and restore this state properly when switching from one environment to another.
The `FXSAVE` and `FXRSTOR` instructions may be useful, but note that these are not in the old i386 user's manual because they were introduced in more recent processors. Write a user-level test program that does something cool with floating-point. -``` - -##### System Calls for Environment Creation - -Although your kernel is now capable of running and switching between multiple user-level environments, it is still limited to running environments that the _kernel_ initially set up. You will now implement the necessary JOS system calls to allow _user_ environments to create and start other new user environments. - -Unix provides the `fork()` system call as its process creation primitive. Unix `fork()` copies the entire address space of calling process (the parent) to create a new process (the child). The only differences between the two observable from user space are their process IDs and parent process IDs (as returned by `getpid` and `getppid`). In the parent, `fork()` returns the child's process ID, while in the child, `fork()` returns 0. By default, each process gets its own private address space, and neither process's modifications to memory are visible to the other. - -You will provide a different, more primitive set of JOS system calls for creating new user-mode environments. With these system calls you will be able to implement a Unix-like `fork()` entirely in user space, in addition to other styles of environment creation. The new system calls you will write for JOS are as follows: - - * `sys_exofork`: -This system call creates a new environment with an almost blank slate: nothing is mapped in the user portion of its address space, and it is not runnable. The new environment will have the same register state as the parent environment at the time of the `sys_exofork` call. In the parent, `sys_exofork` will return the `envid_t` of the newly created environment (or a negative error code if the environment allocation failed). In the child, however, it will return 0. (Since the child starts out marked as not runnable, `sys_exofork` will not actually return in the child until the parent has explicitly allowed this by marking the child runnable using....) - * `sys_env_set_status`: -Sets the status of a specified environment to `ENV_RUNNABLE` or `ENV_NOT_RUNNABLE`. This system call is typically used to mark a new environment ready to run, once its address space and register state has been fully initialized. - * `sys_page_alloc`: -Allocates a page of physical memory and maps it at a given virtual address in a given environment's address space. - * `sys_page_map`: -Copy a page mapping ( _not_ the contents of a page!) from one environment's address space to another, leaving a memory sharing arrangement in place so that the new and the old mappings both refer to the same page of physical memory. - * `sys_page_unmap`: -Unmap a page mapped at a given virtual address in a given environment. - - - -For all of the system calls above that accept environment IDs, the JOS kernel supports the convention that a value of 0 means "the current environment." This convention is implemented by `envid2env()` in `kern/env.c`. - -We have provided a very primitive implementation of a Unix-like `fork()` in the test program `user/dumbfork.c`. This test program uses the above system calls to create and run a child environment with a copy of its own address space. The two environments then switch back and forth using `sys_yield` as in the previous exercise. 
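-Before turning to `dumbfork` itself, here is a rough sketch, mine rather than the lab's, of how the five calls above compose. The helper name is made up, error checking on the page calls is elided, and the use of `UTEMP` as a scratch mapping is an assumption borrowed from the JOS memory layout:
-
-```
-// Hedged sketch: wiring the five system calls above into the skeleton
-// of a fork-like routine. As described above, an envid of 0 means
-// "the current environment".
-envid_t
-minimal_fork_skeleton(void)
-{
-	envid_t child = sys_exofork();
-
-	if (child < 0)
-		return child;	// no free environment slots
-	if (child == 0)
-		return 0;	// we are the child; we run once marked runnable
-
-	// Parent: allocate one page in the child, map the same physical
-	// page into our own address space at UTEMP so we can fill it in,
-	// then drop our temporary mapping.
-	sys_page_alloc(child, (void *) UTEMP, PTE_P | PTE_U | PTE_W);
-	sys_page_map(child, (void *) UTEMP, 0, (void *) UTEMP, PTE_P | PTE_U | PTE_W);
-	// ... initialize the page through UTEMP here ...
-	sys_page_unmap(0, (void *) UTEMP);
-
-	// Finally let the child run.
-	sys_env_set_status(child, ENV_RUNNABLE);
-	return child;
-}
-```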
-In `user/dumbfork.c`, the parent exits after 10 iterations, whereas the child exits after 20.
-
-```
-Exercise 7. Implement the system calls described above in `kern/syscall.c` and make sure `syscall()` calls them. You will need to use various functions in `kern/pmap.c` and `kern/env.c`, particularly `envid2env()`. For now, whenever you call `envid2env()`, pass 1 in the `checkperm` parameter. Be sure you check for any invalid system call arguments, returning `-E_INVAL` in that case. Test your JOS kernel with `user/dumbfork` and make sure it works before proceeding.
-```
-
-```
-Challenge! Add the additional system calls necessary to _read_ all of the vital state of an existing environment as well as set it up. Then implement a user mode program that forks off a child environment, runs it for a while (e.g., a few iterations of `sys_yield()`), then takes a complete snapshot or _checkpoint_ of the child environment, runs the child for a while longer, and finally restores the child environment to the state it was in at the checkpoint and continues it from there. Thus, you are effectively "replaying" the execution of the child environment from an intermediate state. Make the child environment perform some interaction with the user using `sys_cgetc()` or `readline()` so that the user can view and mutate its internal state, and verify that with your checkpoint/restart you can give the child environment a case of selective amnesia, making it "forget" everything that happened beyond a certain point.
-```
-
-This completes Part A of the lab; make sure it passes all of the Part A tests when you run make grade, and hand it in using make handin as usual. If you are trying to figure out why a particular test case is failing, run ./grade-lab4 -v, which will show you the output of the kernel builds and QEMU runs for each test, until a test fails. When a test fails, the script will stop, and then you can inspect `jos.out` to see what the kernel actually printed.
-
-#### Part B: Copy-on-Write Fork
-
-As mentioned earlier, Unix provides the `fork()` system call as its primary process creation primitive. The `fork()` system call copies the address space of the calling process (the parent) to create a new process (the child).
-
-xv6 Unix implements `fork()` by copying all data from the parent's pages into new pages allocated for the child. This is essentially the same approach that `dumbfork()` takes. The copying of the parent's address space into the child is the most expensive part of the `fork()` operation.
-
-However, a call to `fork()` is frequently followed almost immediately by a call to `exec()` in the child process, which replaces the child's memory with a new program. This is what the shell typically does, for example. In this case, the time spent copying the parent's address space is largely wasted, because the child process will use very little of its memory before calling `exec()`.
-
-For this reason, later versions of Unix took advantage of virtual memory hardware to allow the parent and child to _share_ the memory mapped into their respective address spaces until one of the processes actually modifies it. This technique is known as _copy-on-write_. To do this, on `fork()` the kernel would copy the address space _mappings_ from the parent to the child instead of the contents of the mapped pages, and at the same time mark the now-shared pages read-only. When one of the two processes tries to write to one of these shared pages, the process takes a page fault. At this point, the Unix kernel realizes that the page was really a "virtual" or "copy-on-write" copy, and so it makes a new, private, writable copy of the page for the faulting process. In this way, the contents of individual pages aren't actually copied until they are actually written to. This optimization makes a `fork()` followed by an `exec()` in the child much cheaper: the child will probably only need to copy one page (the current page of its stack) before it calls `exec()`.
-
-In the next piece of this lab, you will implement a "proper" Unix-like `fork()` with copy-on-write, as a user space library routine. Implementing `fork()` and copy-on-write support in user space has the benefit that the kernel remains much simpler and thus more likely to be correct. It also lets individual user-mode programs define their own semantics for `fork()`. A program that wants a slightly different implementation (for example, the expensive always-copy version like `dumbfork()`, or one in which the parent and child actually share memory afterward) can easily provide its own.
-
-##### User-level page fault handling
-
-A user-level copy-on-write `fork()` needs to know about page faults on write-protected pages, so that's what you'll implement first. Copy-on-write is only one of many possible uses for user-level page fault handling.
-
-It's common to set up an address space so that page faults indicate when some action needs to take place. For example, most Unix kernels initially map only a single page in a new process's stack region, and allocate and map additional stack pages later "on demand" as the process's stack consumption increases and causes page faults on stack addresses that are not yet mapped. A typical Unix kernel must keep track of what action to take when a page fault occurs in each region of a process's space. For example, a fault in the stack region will typically allocate and map a new page of physical memory. A fault in the program's BSS region will typically allocate a new page, fill it with zeroes, and map it. In systems with demand-paged executables, a fault in the text region will read the corresponding page of the binary off of disk and then map it.
-
-This is a lot of information for the kernel to keep track of. Instead of taking the traditional Unix approach, you will decide what to do about each page fault in user space, where bugs are less damaging. This design has the added benefit of allowing programs great flexibility in defining their memory regions; you'll use user-level page fault handling later for mapping and accessing files on a disk-based file system.
-
-###### Setting the Page Fault Handler
-
-In order to handle its own page faults, a user environment will need to register a _page fault handler entrypoint_ with the JOS kernel. The user environment registers its page fault entrypoint via the new `sys_env_set_pgfault_upcall` system call. We have added a new member to the `Env` structure, `env_pgfault_upcall`, to record this information.
-
-```
-Exercise 8. Implement the `sys_env_set_pgfault_upcall` system call. Be sure to enable permission checking when looking up the environment ID of the target environment, since this is a "dangerous" system call.
-```
-
-###### Normal and Exception Stacks in User Environments
-
-During normal execution, a user environment in JOS will run on the _normal_ user stack: its `ESP` register starts out pointing at `USTACKTOP`, and the stack data it pushes resides on the page between `USTACKTOP-PGSIZE` and `USTACKTOP-1` inclusive.
When a page fault occurs in user mode, however, the kernel will restart the user environment running a designated user-level page fault handler on a different stack, namely the _user exception_ stack. In essence, we will make the JOS kernel implement automatic "stack switching" on behalf of the user environment, in much the same way that the x86 _processor_ already implements stack switching on behalf of JOS when transferring from user mode to kernel mode! - -The JOS user exception stack is also one page in size, and its top is defined to be at virtual address `UXSTACKTOP`, so the valid bytes of the user exception stack are from `UXSTACKTOP-PGSIZE` through `UXSTACKTOP-1` inclusive. While running on this exception stack, the user-level page fault handler can use JOS's regular system calls to map new pages or adjust mappings so as to fix whatever problem originally caused the page fault. Then the user-level page fault handler returns, via an assembly language stub, to the faulting code on the original stack. - -Each user environment that wants to support user-level page fault handling will need to allocate memory for its own exception stack, using the `sys_page_alloc()` system call introduced in part A. - -###### Invoking the User Page Fault Handler - -You will now need to change the page fault handling code in `kern/trap.c` to handle page faults from user mode as follows. We will call the state of the user environment at the time of the fault the _trap-time_ state. - -If there is no page fault handler registered, the JOS kernel destroys the user environment with a message as before. Otherwise, the kernel sets up a trap frame on the exception stack that looks like a `struct UTrapframe` from `inc/trap.h`: - -``` - <-- UXSTACKTOP - trap-time esp - trap-time eflags - trap-time eip - trap-time eax start of struct PushRegs - trap-time ecx - trap-time edx - trap-time ebx - trap-time esp - trap-time ebp - trap-time esi - trap-time edi end of struct PushRegs - tf_err (error code) - fault_va <-- %esp when handler is run - -``` - -The kernel then arranges for the user environment to resume execution with the page fault handler running on the exception stack with this stack frame; you must figure out how to make this happen. The `fault_va` is the virtual address that caused the page fault. - -If the user environment is _already_ running on the user exception stack when an exception occurs, then the page fault handler itself has faulted. In this case, you should start the new stack frame just under the current `tf->tf_esp` rather than at `UXSTACKTOP`. You should first push an empty 32-bit word, then a `struct UTrapframe`. - -To test whether `tf->tf_esp` is already on the user exception stack, check whether it is in the range between `UXSTACKTOP-PGSIZE` and `UXSTACKTOP-1`, inclusive. - -``` -Exercise 9. Implement the code in `page_fault_handler` in `kern/trap.c` required to dispatch page faults to the user-mode handler. Be sure to take appropriate precautions when writing into the exception stack. (What happens if the user environment runs out of space on the exception stack?) -``` - -###### User-mode Page Fault Entrypoint - -Next, you need to implement the assembly routine that will take care of calling the C page fault handler and resume execution at the original faulting instruction. This assembly routine is the handler that will be registered with the kernel using `sys_env_set_pgfault_upcall()`. - -``` -Exercise 10. Implement the `_pgfault_upcall` routine in `lib/pfentry.S`. 
The interesting part is returning to the original point in the user code that caused the page fault. You'll return directly there, without going back through the kernel. The hard part is simultaneously switching stacks and re-loading the EIP. -``` - -Finally, you need to implement the C user library side of the user-level page fault handling mechanism. - -``` -Exercise 11. Finish `set_pgfault_handler()` in `lib/pgfault.c`. -``` - -###### Testing - -Run `user/faultread` (make run-faultread). You should see: - -``` - ... - [00000000] new env 00001000 - [00001000] user fault va 00000000 ip 0080003a - TRAP frame ... - [00001000] free env 00001000 -``` - -Run `user/faultdie`. You should see: - -``` - ... - [00000000] new env 00001000 - i faulted at va deadbeef, err 6 - [00001000] exiting gracefully - [00001000] free env 00001000 -``` - -Run `user/faultalloc`. You should see: - -``` - ... - [00000000] new env 00001000 - fault deadbeef - this string was faulted in at deadbeef - fault cafebffe - fault cafec000 - this string was faulted in at cafebffe - [00001000] exiting gracefully - [00001000] free env 00001000 -``` - -If you see only the first "this string" line, it means you are not handling recursive page faults properly. - -Run `user/faultallocbad`. You should see: - -``` - ... - [00000000] new env 00001000 - [00001000] user_mem_check assertion failure for va deadbeef - [00001000] free env 00001000 -``` - -Make sure you understand why `user/faultalloc` and `user/faultallocbad` behave differently. - -``` -Challenge! Extend your kernel so that not only page faults, but _all_ types of processor exceptions that code running in user space can generate, can be redirected to a user-mode exception handler. Write user-mode test programs to test user-mode handling of various exceptions such as divide-by-zero, general protection fault, and illegal opcode. -``` - -##### Implementing Copy-on-Write Fork - -You now have the kernel facilities to implement copy-on-write `fork()` entirely in user space. - -We have provided a skeleton for your `fork()` in `lib/fork.c`. Like `dumbfork()`, `fork()` should create a new environment, then scan through the parent environment's entire address space and set up corresponding page mappings in the child. The key difference is that, while `dumbfork()` copied _pages_ , `fork()` will initially only copy page _mappings_. `fork()` will copy each page only when one of the environments tries to write it. - -The basic control flow for `fork()` is as follows: - - 1. The parent installs `pgfault()` as the C-level page fault handler, using the `set_pgfault_handler()` function you implemented above. - - 2. The parent calls `sys_exofork()` to create a child environment. - - 3. For each writable or copy-on-write page in its address space below UTOP, the parent calls `duppage`, which should map the page copy-on-write into the address space of the child and then _remap_ the page copy-on-write in its own address space. [ Note: The ordering here (i.e., marking a page as COW in the child before marking it in the parent) actually matters! Can you see why? Try to think of a specific case where reversing the order could cause trouble. ] `duppage` sets both PTEs so that the page is not writeable, and to contain `PTE_COW` in the "avail" field to distinguish copy-on-write pages from genuine read-only pages. - -The exception stack is _not_ remapped this way, however. Instead you need to allocate a fresh page in the child for the exception stack. 
Since the page fault handler will be doing the actual copying and the page fault handler runs on the exception stack, the exception stack cannot be made copy-on-write: who would copy it?
-
-`fork()` also needs to handle pages that are present, but not writable or copy-on-write.
-
- 4. The parent sets the user page fault entrypoint for the child to look like its own.
-
- 5. The child is now ready to run, so the parent marks it runnable.
-
-
-
-Each time one of the environments writes a copy-on-write page that it hasn't yet written, it will take a page fault. Here's the control flow for the user page fault handler:
-
- 1. The kernel propagates the page fault to `_pgfault_upcall`, which calls `fork()`'s `pgfault()` handler.
- 2. `pgfault()` checks that the fault is a write (check for `FEC_WR` in the error code) and that the PTE for the page is marked `PTE_COW`. If not, panic.
- 3. `pgfault()` allocates a new page mapped at a temporary location and copies the contents of the faulting page into it. Then the fault handler maps the new page at the appropriate address with read/write permissions, in place of the old read-only mapping.
-
-
-
-The user-level `lib/fork.c` code must consult the environment's page tables for several of the operations above (e.g., to check that the PTE for a page is marked `PTE_COW`). The kernel maps the environment's page tables at `UVPT` exactly for this purpose. It uses a [clever mapping trick][1] to make it easy to look up PTEs for user code. `lib/entry.S` sets up `uvpt` and `uvpd` so that you can easily look up page-table information in `lib/fork.c`.
-
-```
-Exercise 12. Implement `fork`, `duppage` and `pgfault` in `lib/fork.c`.
-
-Test your code with the `forktree` program. It should produce the following messages, with interspersed 'new env', 'free env', and 'exiting gracefully' messages. The messages may not appear in this order, and the environment IDs may be different.
-
-	1000: I am ''
-	1001: I am '0'
-	2000: I am '00'
-	2001: I am '000'
-	1002: I am '1'
-	3000: I am '11'
-	3001: I am '10'
-	4000: I am '100'
-	1003: I am '01'
-	5000: I am '010'
-	4001: I am '011'
-	2002: I am '110'
-	1004: I am '001'
-	1005: I am '111'
-	1006: I am '101'
-```
-
-```
-Challenge! Implement a shared-memory `fork()` called `sfork()`. This version should have the parent and child _share_ all their memory pages (so writes in one environment appear in the other) except for pages in the stack area, which should be treated in the usual copy-on-write manner. Modify `user/forktree.c` to use `sfork()` instead of regular `fork()`. Also, once you have finished implementing IPC in part C, use your `sfork()` to run `user/pingpongs`. You will have to find a new way to provide the functionality of the global `thisenv` pointer.
-```
-
-```
-Challenge! Your implementation of `fork` makes a huge number of system calls. On the x86, switching into the kernel using interrupts has non-trivial cost. Augment the system call interface so that it is possible to send a batch of system calls at once. Then change `fork` to use this interface.
-
-How much faster is your new `fork`?
-
-You can answer this (roughly) by using analytical arguments to estimate how much of an improvement batching system calls will make to the performance of your `fork`: How expensive is an `int 0x30` instruction? How many times do you execute `int 0x30` in your `fork`? Is accessing the `TSS` stack switch also expensive? And so on...
-
-Alternatively, you can boot your kernel on real hardware and _really_ benchmark your code.
See the `RDTSC` (read time-stamp counter) instruction, defined in the IA32 manual, which counts the number of clock cycles that have elapsed since the last processor reset. QEMU doesn't emulate this instruction faithfully (it can either count the number of virtual instructions executed or use the host TSC, neither of which reflects the number of cycles a real CPU would require). -``` - -This ends part B. Make sure you pass all of the Part B tests when you run make grade. As usual, you can hand in your submission with make handin. - -#### Part C: Preemptive Multitasking and Inter-Process communication (IPC) - -In the final part of lab 4 you will modify the kernel to preempt uncooperative environments and to allow environments to pass messages to each other explicitly. - -##### Clock Interrupts and Preemption - -Run the `user/spin` test program. This test program forks off a child environment, which simply spins forever in a tight loop once it receives control of the CPU. Neither the parent environment nor the kernel ever regains the CPU. This is obviously not an ideal situation in terms of protecting the system from bugs or malicious code in user-mode environments, because any user-mode environment can bring the whole system to a halt simply by getting into an infinite loop and never giving back the CPU. In order to allow the kernel to _preempt_ a running environment, forcefully retaking control of the CPU from it, we must extend the JOS kernel to support external hardware interrupts from the clock hardware. - -###### Interrupt discipline - -External interrupts (i.e., device interrupts) are referred to as IRQs. There are 16 possible IRQs, numbered 0 through 15. The mapping from IRQ number to IDT entry is not fixed. `pic_init` in `picirq.c` maps IRQs 0-15 to IDT entries `IRQ_OFFSET` through `IRQ_OFFSET+15`. - -In `inc/trap.h`, `IRQ_OFFSET` is defined to be decimal 32. Thus the IDT entries 32-47 correspond to the IRQs 0-15. For example, the clock interrupt is IRQ 0. Thus, IDT[IRQ_OFFSET+0] (i.e., IDT[32]) contains the address of the clock's interrupt handler routine in the kernel. This `IRQ_OFFSET` is chosen so that the device interrupts do not overlap with the processor exceptions, which could obviously cause confusion. (In fact, in the early days of PCs running MS-DOS, the `IRQ_OFFSET` effectively _was_ zero, which indeed caused massive confusion between handling hardware interrupts and handling processor exceptions!) - -In JOS, we make a key simplification compared to xv6 Unix. External device interrupts are _always_ disabled when in the kernel (and, like xv6, enabled when in user space). External interrupts are controlled by the `FL_IF` flag bit of the `%eflags` register (see `inc/mmu.h`). When this bit is set, external interrupts are enabled. While the bit can be modified in several ways, because of our simplification, we will handle it solely through the process of saving and restoring `%eflags` register as we enter and leave user mode. - -You will have to ensure that the `FL_IF` flag is set in user environments when they run so that when an interrupt arrives, it gets passed through to the processor and handled by your interrupt code. Otherwise, interrupts are _masked_ , or ignored until interrupts are re-enabled. We masked interrupts with the very first instruction of the bootloader, and so far we have never gotten around to re-enabling them. - -``` -Exercise 13. 
Modify `kern/trapentry.S` and `kern/trap.c` to initialize the appropriate entries in the IDT and provide handlers for IRQs 0 through 15. Then modify the code in `env_alloc()` in `kern/env.c` to ensure that user environments are always run with interrupts enabled. - -Also uncomment the `sti` instruction in `sched_halt()` so that idle CPUs unmask interrupts. - -The processor never pushes an error code when invoking a hardware interrupt handler. You might want to re-read section 9.2 of the [80386 Reference Manual][2], or section 5.8 of the [IA-32 Intel Architecture Software Developer's Manual, Volume 3][3], at this time. - -After doing this exercise, if you run your kernel with any test program that runs for a non-trivial length of time (e.g., `spin`), you should see the kernel print trap frames for hardware interrupts. While interrupts are now enabled in the processor, JOS isn't yet handling them, so you should see it misattribute each interrupt to the currently running user environment and destroy it. Eventually it should run out of environments to destroy and drop into the monitor. -``` - -###### Handling Clock Interrupts - -In the `user/spin` program, after the child environment was first run, it just spun in a loop, and the kernel never got control back. We need to program the hardware to generate clock interrupts periodically, which will force control back to the kernel where we can switch control to a different user environment. - -The calls to `lapic_init` and `pic_init` (from `i386_init` in `init.c`), which we have written for you, set up the clock and the interrupt controller to generate interrupts. You now need to write the code to handle these interrupts. - -``` -Exercise 14. Modify the kernel's `trap_dispatch()` function so that it calls `sched_yield()` to find and run a different environment whenever a clock interrupt takes place. - -You should now be able to get the `user/spin` test to work: the parent environment should fork off the child, `sys_yield()` to it a couple times but in each case regain control of the CPU after one time slice, and finally kill the child environment and terminate gracefully. -``` - -This is a great time to do some _regression testing_. Make sure that you haven't broken any earlier part of that lab that used to work (e.g. `forktree`) by enabling interrupts. Also, try running with multiple CPUs using make CPUS=2 _target_. You should also be able to pass `stresssched` now. Run make grade to see for sure. You should now get a total score of 65/80 points on this lab. - -##### Inter-Process communication (IPC) - -(Technically in JOS this is "inter-environment communication" or "IEC", but everyone else calls it IPC, so we'll use the standard term.) - -We've been focusing on the isolation aspects of the operating system, the ways it provides the illusion that each program has a machine all to itself. Another important service of an operating system is to allow programs to communicate with each other when they want to. It can be quite powerful to let programs interact with other programs. The Unix pipe model is the canonical example. - -There are many models for interprocess communication. Even today there are still debates about which models are best. We won't get into that debate. Instead, we'll implement a simple IPC mechanism and then try it out. - -###### IPC in JOS - -You will implement a few additional JOS kernel system calls that collectively provide a simple interprocess communication mechanism. 
You will implement two system calls, `sys_ipc_recv` and `sys_ipc_try_send`. Then you will implement two library wrappers `ipc_recv` and `ipc_send`. - -The "messages" that user environments can send to each other using JOS's IPC mechanism consist of two components: a single 32-bit value, and optionally a single page mapping. Allowing environments to pass page mappings in messages provides an efficient way to transfer more data than will fit into a single 32-bit integer, and also allows environments to set up shared memory arrangements easily. - -###### Sending and Receiving Messages - -To receive a message, an environment calls `sys_ipc_recv`. This system call de-schedules the current environment and does not run it again until a message has been received. When an environment is waiting to receive a message, _any_ other environment can send it a message - not just a particular environment, and not just environments that have a parent/child arrangement with the receiving environment. In other words, the permission checking that you implemented in Part A will not apply to IPC, because the IPC system calls are carefully designed so as to be "safe": an environment cannot cause another environment to malfunction simply by sending it messages (unless the target environment is also buggy). - -To try to send a value, an environment calls `sys_ipc_try_send` with both the receiver's environment id and the value to be sent. If the named environment is actually receiving (it has called `sys_ipc_recv` and not gotten a value yet), then the send delivers the message and returns 0. Otherwise the send returns `-E_IPC_NOT_RECV` to indicate that the target environment is not currently expecting to receive a value. - -A library function `ipc_recv` in user space will take care of calling `sys_ipc_recv` and then looking up the information about the received values in the current environment's `struct Env`. - -Similarly, a library function `ipc_send` will take care of repeatedly calling `sys_ipc_try_send` until the send succeeds. - -###### Transferring Pages - -When an environment calls `sys_ipc_recv` with a valid `dstva` parameter (below `UTOP`), the environment is stating that it is willing to receive a page mapping. If the sender sends a page, then that page should be mapped at `dstva` in the receiver's address space. If the receiver already had a page mapped at `dstva`, then that previous page is unmapped. - -When an environment calls `sys_ipc_try_send` with a valid `srcva` (below `UTOP`), it means the sender wants to send the page currently mapped at `srcva` to the receiver, with permissions `perm`. After a successful IPC, the sender keeps its original mapping for the page at `srcva` in its address space, but the receiver also obtains a mapping for this same physical page at the `dstva` originally specified by the receiver, in the receiver's address space. As a result this page becomes shared between the sender and receiver. - -If either the sender or the receiver does not indicate that a page should be transferred, then no page is transferred. After any IPC the kernel sets the new field `env_ipc_perm` in the receiver's `Env` structure to the permissions of the page received, or zero if no page was received. - -###### Implementing IPC - -``` -Exercise 15. Implement `sys_ipc_recv` and `sys_ipc_try_send` in `kern/syscall.c`. Read the comments on both before implementing them, since they have to work together. 
When you call `envid2env` in these routines, you should set the `checkperm` flag to 0, meaning that any environment is allowed to send IPC messages to any other environment, and the kernel does no special permission checking other than verifying that the target envid is valid. - -Then implement the `ipc_recv` and `ipc_send` functions in `lib/ipc.c`. - -Use the `user/pingpong` and `user/primes` functions to test your IPC mechanism. `user/primes` will generate for each prime number a new environment until JOS runs out of environments. You might find it interesting to read `user/primes.c` to see all the forking and IPC going on behind the scenes. -``` - -``` -Challenge! Why does `ipc_send` have to loop? Change the system call interface so it doesn't have to. Make sure you can handle multiple environments trying to send to one environment at the same time. -``` - -``` -Challenge! The prime sieve is only one neat use of message passing between a large number of concurrent programs. Read C. A. R. Hoare, ``Communicating Sequential Processes,'' _Communications of the ACM_ 21(8) (August 1978), 666-667, and implement the matrix multiplication example. -``` - -``` -Challenge! One of the most impressive examples of the power of message passing is Doug McIlroy's power series calculator, described in [M. Douglas McIlroy, ``Squinting at Power Series,'' _Software--Practice and Experience_ , 20(7) (July 1990), 661-683][4]. Implement his power series calculator and compute the power series for _sin_ ( _x_ + _x_ ^3). -``` - -``` -Challenge! Make JOS's IPC mechanism more efficient by applying some of the techniques from Liedtke's paper, [Improving IPC by Kernel Design][5], or any other tricks you may think of. Feel free to modify the kernel's system call API for this purpose, as long as your code is backwards compatible with what our grading scripts expect. -``` - -**This ends part C.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab4.txt`. - -Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab4.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 4', then make handin and follow the directions. 
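-As a final reference before you hand in, here is one plausible shape for the `ipc_send` wrapper described above. This is a hedged sketch, not the official solution; the exact error handling and the use of an address at or above `UTOP` to mean "no page" are assumptions modeled on the conventions this lab describes:
-
-```
-// ipc_send keeps retrying until the receiver is actually blocked in
-// sys_ipc_recv; sys_ipc_try_send returns -E_IPC_NOT_RECV until then.
-void
-ipc_send(envid_t to_env, uint32_t val, void *pg, int perm)
-{
-	int r;
-
-	if (pg == NULL)
-		pg = (void *) UTOP;	// assumed convention: transfer no page
-	while ((r = sys_ipc_try_send(to_env, val, pg, perm)) < 0) {
-		if (r != -E_IPC_NOT_RECV)
-			panic("ipc_send: unexpected error %e", r);
-		sys_yield();	// give the receiver a chance to run
-	}
-}
-```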
-
--------------------------------------------------------------------------------
-
-via: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/
-
-作者:[csail.mit][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://pdos.csail.mit.edu
-[b]: https://github.com/lujun9972
-[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/uvpt.html
-[2]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/i386/toc.htm
-[3]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/ia32/IA32-3A.pdf
-[4]: https://swtch.com/~rsc/thread/squint.pdf
-[5]: http://dl.acm.org/citation.cfm?id=168633
diff --git a/sources/tech/20181017 Design faster web pages, part 2- Image replacement.md b/sources/tech/20181017 Design faster web pages, part 2- Image replacement.md
deleted file mode 100644
index 119646347d..0000000000
--- a/sources/tech/20181017 Design faster web pages, part 2- Image replacement.md
+++ /dev/null
@@ -1,178 +0,0 @@
-Translating by StdioA
-
-Design faster web pages, part 2: Image replacement
-======
-![](https://fedoramagazine.org/wp-content/uploads/2018/03/fasterwebsites2-816x345.jpg)
-
-Welcome back to this series on building faster web pages. The last [article][1] talked about what you can achieve just through image compression. The example started with 1.2MB of browser fat and slimmed down to a weight of 488.9KB. That's still not fast enough! This article continues the browser diet to lose more fat. You might think that partway through this process things are a bit crazy, but once finished, you'll understand why.
-
-### Preparation
-
-Once again this article starts with an analysis of the web pages. Use the built-in screenshot function of Firefox to make a screenshot of the entire page. You'll also want to install Inkscape [using sudo][2]:
-
-```
-$ sudo dnf install inkscape
-```
-
-If you want to know how to use Inkscape, there are already several [articles][3] in Fedora Magazine. This article will only explain some basic tasks for optimizing an SVG for web use.
-
-### Analysis
-
-Once again, this example uses the [getfedora.org][4] web page.
-
-![Getfedora page with graphics marked][5]
-
-This analysis is better done graphically, which is why it starts with a screenshot. The screenshot above marks all graphical elements of the page. In two cases, or rather four, the Fedora websites team has already taken measures to replace images. The icons for social media are glyphs from a font and the language selector is an SVG.
-
-There are several options for replacing them:
-
-
-+ CSS3
-+ Fonts
-+ SVG
-+ HTML5 Canvas
-
-
-#### HTML5 Canvas
-
-Briefly, HTML5 Canvas is an HTML element that allows you to draw with the help of scripts, mostly JavaScript, although it's not widely used yet. As you draw with the help of scripts, the element can also be animated. Some examples of what you can achieve with HTML Canvas include this [triangle pattern,][6] [animated wave][7], and [text animation][8]. In this case, though, it seems not to be the right choice.
-
-#### CSS3
-
-With Cascading Style Sheets you can draw shapes and even animate them. CSS is often used for drawing elements like buttons. However, more complicated graphics via CSS are usually only seen in technical demonstration pages. This is because graphics are still better created visually than in code.
-
-#### Fonts
-
-Using fonts to style web pages is another option, and [Fontawesome][9] is quite popular. For instance, you could replace the Flavor and the Spin icons with a font in this example. There is a negative side to using this method, which will be covered in the next part of this series, but it can be done easily.
-
-#### SVG
-
-This graphics format has existed for a long time and was always supposed to be used in the browser. For a long time not all browsers supported it, but that's history. So the best way to replace pictures in this example is with SVG.
-
-### Optimizing SVG for the web
-
-Optimizing an SVG for web use requires several steps.
-
-SVG is an XML dialect. Components like circle, rectangle, or text paths are described with nodes. Each node is an XML element. To keep the code clean, an SVG should use as few nodes as possible.
-
-The SVG example is a circular icon with a coffee mug on it. You have three options to describe it in SVG.
-
-#### Circle element with the mug on top
-
-```
-
-```
-
-#### Circular path with the mug on top
-
-```
-
-```
-
-#### Single path
-
-```
-
-```
-
-You can probably see that the code becomes more complex and needs more characters to describe it. More characters in a file result, of course, in a larger size.
-
-#### Node cleaning
-
-If you open an example SVG in Inkscape and press F2, that activates the Node tool. You should see something like this:
-
-![Inkscape - Node tool activated][10]
-
-There are 5 nodes that aren't necessary in this example — the ones in the middle of the lines. To remove them, select them one by one with the activated Node tool and press the **Del** key. After this, select the nodes that define these lines and make them corners again using the toolbar tool.
-
-![Inkscape - Node tool make node a corner][11]
-
-Without fixing the corners, handles that define the curve are used; they get saved and increase the file size. You have to do this node cleaning by hand, as it can't be effectively automated. Now you're ready for the next stage.
-
-Use the Save As function and choose Optimized SVG. A dialogue window opens where you can select what to remove or keep.
-
-![Inkscape - Dialog window for save as optimized SVG][12]
-
-Even the little SVG in this example got down from 3.2 KB to 920 bytes, less than a third of its original size.
-
-Back to the getfedora page: The grey Voronoi pattern used in the background of the main section, after our optimization from Part 1 of this series, is down to 164.1 KB versus the original 211.12 KB size.
-
-The original SVG it was exported from is 1.9 MB in size. After these SVG optimization steps, it's only 500.4KB. Too big? Well, the current blue background is 564.98 KB in size. But there's only a small difference between the SVG and the PNG.
-
-#### Compressed files
-
-```
-$ ls -lh
-insgesamt 928K
--rw-r--r--. 1 user user 161K 19. Feb 19:44 grey-pattern.png
--rw-rw-r--. 1 user user 160K 18. Feb 12:23 grey-pattern.png.gz
--rw-r--r--. 1 user user 489K 19. Feb 19:43 greyscale-pattern-opti.svg
--rw-rw-r--. 1 user user 112K 19. Feb 19:05 greyscale-pattern-opti.svg.gz
-```
-
-This is the output of a small test I did to visualize this topic. You can see that the raster graphic — the PNG — is already compressed and can't be compressed any further. The opposite is true of the SVG, an XML file. This is just text, and can be compressed to less than a fourth of its size. As a result it is now around 50 KB smaller in size than the PNG.
-
-Modern browsers can handle compressed files natively. Therefore, a lot of web servers have switched on mod_deflate (Apache) and gzip (nginx).
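-If you want to see this effect for yourself without configuring a web server, a tiny C program using zlib makes the point. This is a sketch of my own, not from the article; it assumes zlib and its headers are installed (build with gcc svgtest.c -lz), and that level 9 roughly matches what mod_deflate and gzip do for text:
-
-```
-/* Compress a file in memory and report the size difference. Run it on a
- * .svg and on a .png: the XML text shrinks dramatically, the PNG barely. */
-#include <stdio.h>
-#include <stdlib.h>
-#include <zlib.h>
-
-int main(int argc, char **argv)
-{
-    if (argc != 2) {
-        fprintf(stderr, "usage: %s FILE\n", argv[0]);
-        return 1;
-    }
-    FILE *f = fopen(argv[1], "rb");
-    if (!f) { perror("fopen"); return 1; }
-    fseek(f, 0, SEEK_END);
-    long n = ftell(f);
-    rewind(f);
-
-    unsigned char *in = malloc(n);
-    if (!in || fread(in, 1, n, f) != (size_t) n) { fclose(f); return 1; }
-    fclose(f);
-
-    uLongf outlen = compressBound(n);          /* worst-case output size */
-    unsigned char *out = malloc(outlen);
-    if (!out || compress2(out, &outlen, in, n, 9) != Z_OK) return 1;
-
-    printf("%s: %ld bytes -> %lu bytes compressed\n",
-           argv[1], n, (unsigned long) outlen);
-    free(in);
-    free(out);
-    return 0;
-}
-```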
-That's how we save space during delivery. You can check whether it's enabled on your server [here][13].
-
-### Tooling for production
-
-First of all, nobody wants to optimize every SVG by hand in Inkscape. You can run Inkscape without a GUI in batch mode, but there's no option to convert from Inkscape SVG to optimized SVG. You can only export raster graphics this way. But there are alternatives:
-
- * SVGO (which seems not actively developed)
- * Scour
-
-
-
-This example will use scour for optimization. To install it:
-
-```
-$ sudo dnf install scour
-```
-
-To automatically optimize an SVG file, run scour similarly to this:
-
-```
-[user@localhost ]$ scour INPUT.svg OUTPUT.svg -p 3 --create-groups --renderer-workaround --strip-xml-prolog --remove-descriptive-elements --enable-comment-stripping --disable-embed-rasters --no-line-breaks --enable-id-stripping --shorten-ids
-```
-
-This is the end of part two, in which you learned how to replace raster images with SVGs and how to optimize them for the web. Stay tuned to the Fedora Magazine for part three, coming soon.
-
-
--------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/design-faster-web-pages-part-2-image-replacement/
-
-作者:[Sirko Kemter][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org/author/gnokii/
-[b]: https://github.com/lujun9972
-[1]: https://wp.me/p3XX0v-5fJ
-[2]: https://fedoramagazine.org/howto-use-sudo/
-[3]: https://fedoramagazine.org/?s=Inkscape
-[4]: https://getfedora.org
-[5]: https://fedoramagazine.org/wp-content/uploads/2018/02/getfedora_mag.png
-[6]: https://codepen.io/Cthulahoop/pen/umcvo
-[7]: https://codepen.io/jackrugile/pen/BvLHg
-[8]: https://codepen.io/tholman/pen/lDLhk
-[9]: https://fontawesome.com/
-[10]: https://fedoramagazine.org/wp-content/uploads/2018/02/svg-optimization-nodes.png
-[11]: https://fedoramagazine.org/wp-content/uploads/2018/02/node_cleaning.png
-[12]: https://fedoramagazine.org/wp-content/uploads/2018/02/svg-optimizing-dialog.png
-[13]: https://checkgzipcompression.com/?url=http%3A%2F%2Fgetfedora.org
diff --git a/sources/tech/20181023 How to Check HP iLO Firmware Version from Linux Command Line.md b/sources/tech/20181023 How to Check HP iLO Firmware Version from Linux Command Line.md
new file mode 100644
index 0000000000..5dc19ed73c
--- /dev/null
+++ b/sources/tech/20181023 How to Check HP iLO Firmware Version from Linux Command Line.md
@@ -0,0 +1,131 @@
+How to Check HP iLO Firmware Version from Linux Command Line
+======
+There are many utilities available in Linux for getting [hardware information][1].
+
+Each tool has its own unique features that help us gather the required information.
+
+We have already written many articles on this topic; the hardware tools include Dmidecode, hwinfo, lshw, inxi, lspci, lsscsi, lsusb, lsblk, neofetch, screenfetch, etc.
+
+Today we are going to discuss the same topic: how to check the HP iLO firmware version from the Linux command line.
+
+Also read the following articles related to Linux hardware.
+
+**Suggested Read:**
+**(#)** [LSHW (Hardware Lister) – A Nifty Tool To Get A Hardware Information On Linux][2]
+**(#)** [inxi – A Great Tool to Check Hardware Information on Linux][3]
+**(#)** [Dmidecode – Easy Way To Get Linux System Hardware Information][4]
+**(#)** [Neofetch – Shows Linux System Information With ASCII Distribution Logo][5]
+**(#)** [ScreenFetch – Fetch Linux System Information on Terminal with Distribution ASCII art logo][6]
+**(#)** [16 Methods To Check If A Linux System Is Physical or Virtual Machine][7]
+**(#)** [hwinfo (Hardware Info) – A Nifty Tool To Detect System Hardware Information On Linux][8]
+**(#)** [How To Find WWN, WWNN and WWPN Number Of HBA Card In Linux][9]
+**(#)** [How To Check System Hardware Manufacturer, Model And Serial Number In Linux][1]
+**(#)** [How To Use lspci, lsscsi, lsusb, And lsblk To Get Linux System Devices Information][10]
+
+### What is iLO?
+
+iLO, which stands for Integrated Lights-Out, is a proprietary embedded server management technology from Hewlett-Packard that provides out-of-band management facilities.
+
+In simple terms, it's a dedicated device management channel that allows users to manage and monitor a server remotely, regardless of whether the machine is powered on or whether an operating system is installed or functional.
+
+It allows a system administrator to monitor all devices such as CPU, RAM, hardware RAID, fan speed, power voltages, chassis intrusion, and firmware (BIOS or UEFI), as well as manage remote terminals (KVM over IP), remote reboot, shutdown, power-on, etc.
+
+Below is a list of the lights-out management (LOM) technologies offered by other vendors.
+
+ * **`iLO:`** Integrated Lights-Out by HP
+ * **`IMM:`** Integrated Management Module by IBM
+ * **`iDRAC:`** Integrated Dell Remote Access Controllers by Dell
+ * **`IPMI:`** Intelligent Platform Management Interface – a general standard, used on Supermicro hardware
+ * **`AMT:`** Intel Active Management Technology by Intel
+ * **`CIMC:`** Cisco Integrated Management Controller by Cisco
+
+
+
+The list below details each iLO version and the hardware it supports.
+
+ * **`iLO:`** ProLiant G2, G3, G4, and G6 servers, model numbers under 300
+ * **`iLO 2:`** ProLiant G5 and G6 servers, model numbers 300 and higher
+ * **`iLO 3:`** ProLiant G7 servers
+ * **`iLO 4:`** ProLiant Gen8 and Gen9 servers
+ * **`iLO 5:`** ProLiant Gen10 servers
+
+
+
+There are three easy ways to check the HP iLO firmware version in Linux. Here we are going to show them one by one.
+
+### Method-1: Using Dmidecode Command
+
+[Dmidecode][4] is a tool that reads a computer's DMI (Desktop Management Interface; some say SMBIOS, for System Management BIOS) table contents and displays system hardware information in a human-readable format.
+
+This table contains a description of the system's hardware components, as well as other useful information such as the serial number, manufacturer information, release date, and BIOS revision.
+
+The DMI table doesn't only describe what the system is currently made of; it can also report the possible evolutions (such as the fastest supported CPU or the maximal amount of memory supported). This will help you analyze your hardware's capabilities, for example whether it supports the latest version of an application.
+
+When you run it, dmidecode tries to locate the DMI table. If it succeeds, it parses the table and displays the records you are looking for.
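+The quickest check is the dmidecode one-liner shown right below. If you need the same scan from C instead, say inside a small monitoring agent, a wrapper like the following can work. This is my own illustration, not from HP; it assumes dmidecode is installed and that the program runs with root privileges:
+
+```
+/* Run dmidecode and print the first "Firmware Revision" line it emits. */
+#include <stdio.h>
+#include <string.h>
+
+int main(void)
+{
+    FILE *p = popen("dmidecode", "r");
+    char line[256];
+
+    if (!p) { perror("popen"); return 1; }
+    while (fgets(line, sizeof line, p)) {
+        if (strstr(line, "Firmware Revision")) {
+            fputs(line, stdout);  /* e.g. "  Firmware Revision: 2.40" */
+            break;
+        }
+    }
+    pclose(p);
+    return 0;
+}
+```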
+
+Alternatively, you can simply search the full dmidecode output for the firmware revision:
+
+```
+# dmidecode | grep "Firmware Revision"
+ Firmware Revision: 2.40
+```
+
+### Method-2: Using the HPONCFG Utility
+
+HPONCFG is an online configuration tool used to set up and reconfigure iLO without requiring a reboot of the server operating system. The utility runs in command-line mode and must be executed from an operating system command line on the local server. HPONCFG enables you to initially configure features exposed through the RBSU or iLO.
+
+Before using HPONCFG, the iLO Management Interface Driver must be loaded on the server. HPONCFG displays a warning if the driver is not installed.
+
+To install it, visit the [HP website][11] and get the latest hponcfg package by searching for the appropriate keyword (a sample search keyword for iLO 4 is “HPE Integrated Lights-Out 4 (iLO 4)”). In the results, click “HP Lights-Out Online Configuration Utility for Linux (AMD64/EM64T)” and download the package.
+
+```
+# rpm -ivh /tmp/hponcfg-5.3.0-0.x86_64.rpm
+```
+
+Use the hponcfg command to get the information:
+
+```
+# hponcfg | grep Firmware
+Firmware Revision = 2.40 Device type = iLO 4 Driver name = hpilo
+```
+
+### Method-3: Using the cURL Command
+
+We can use the cURL command to get some of this information in XML format, for HP iLO, iLO 2, iLO 3, iLO 4, and iLO 5.
+
+Using cURL, we can get the iLO firmware version without logging in to the server or its console.
+
+Make sure to use the right iLO management IP address (instead of ours) to get the details. I have removed all the unnecessary details from the output below for clarity.
+
+```
+# curl -k https://10.2.0.101/xmldata?item=All
+
+ProLiant DL380p G8
+Integrated Lights-Out 4 (iLO 4)
+2.40
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-check-hp-ilo-firmware-version-from-linux-command-line/
+
+作者:[Prakash Subramanian][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/prakash/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/how-to-check-system-hardware-manufacturer-model-and-serial-number-in-linux/
+[2]: https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/
+[3]: https://www.2daygeek.com/inxi-system-hardware-information-on-linux/
+[4]: https://www.2daygeek.com/dmidecode-get-print-display-check-linux-system-hardware-information/
+[5]: https://www.2daygeek.com/neofetch-display-linux-systems-information-ascii-distribution-logo-terminal/
+[6]: https://www.2daygeek.com/install-screenfetch-to-fetch-linux-system-information-on-terminal-with-distribution-ascii-art-logo/
+[7]: https://www.2daygeek.com/check-linux-system-physical-virtual-machine-virtualization-technology/
+[8]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
+[9]: https://www.2daygeek.com/how-to-find-wwn-wwnn-and-wwpn-number-of-hba-card-in-linux/
+[10]: https://www.2daygeek.com/check-system-hardware-devices-bus-information-lspci-lsscsi-lsusb-lsblk-linux/
+[11]: https://support.hpe.com/hpesc/public/home
diff --git a/sources/tech/20181024 4 cool new projects to try in COPR for October 2018.md b/sources/tech/20181024 4 cool new projects to try in COPR for October 2018.md
index 465c6b2f50..25a1c29f68 100644
---
a/sources/tech/20181024 4 cool new projects to try in COPR for October 2018.md +++ b/sources/tech/20181024 4 cool new projects to try in COPR for October 2018.md @@ -1,3 +1,5 @@ +translating---geekpi + 4 cool new projects to try in COPR for October 2018 ====== diff --git a/sources/tech/20181024 Get organized at the Linux command line with Calcurse.md b/sources/tech/20181024 Get organized at the Linux command line with Calcurse.md deleted file mode 100644 index 9f67503f2e..0000000000 --- a/sources/tech/20181024 Get organized at the Linux command line with Calcurse.md +++ /dev/null @@ -1,87 +0,0 @@ -translating---geekpi - -Get organized at the Linux command line with Calcurse -====== - -Keep up with your calendar and to-do list with Calcurse. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT) - -Do you need complex, feature-packed graphical or web applications to get and stay organized? I don't think so. The right command line tool can do the job and do it well. - -Of course, uttering the words command and line together can strike fear into the hearts of some Linux users. The command line, to them, is terra incognita. - -Organizing yourself at the command line is easy with [Calcurse][1]. Calcurse brings a graphical look and feel to a text-based interface. You get the simplicity and focus of the command line married to ease of use and navigation. - -Let's take a closer look at Calcurse, which is open sourced under the BSD License. - -### Getting the software - -If compiling code is your thing (it's not mine, generally), you can grab the source code from the [Calcurse website][1]. Otherwise, get the [binary installer][2] for your Linux distribution. You might even be able to get Calcurse from your Linux distro's package manager. It never hurts to check. - -Compile or install Calcurse (neither takes all that long), and you're ready to go. - -### Using Calcurse - -Crack open a terminal window and type **calcurse**. - -![](https://opensource.com/sites/default/files/uploads/calcurse-main.png) - -Calcurse's interface consists of three panels: - - * Appointments (the left side of the screen) - * Calendar (the top right) - * To-do list (the bottom right) - - - -Move between the panels by pressing the Tab key on your keyboard. To add a new item to a panel, press **a**. Calcurse walks you through what you need to do to add the item. - -One interesting quirk is that the Appointment and Calendar panels work together. You add an appointment by tabbing to the Calendar panel. There, you choose the date for your appointment. Once you do that, you tab back to the Appointments panel. I know … - -Press **a** to set a start time, a duration (in minutes), and a description of the appointment. The start time and duration are optional. Calcurse displays appointments on the day they're due. - -![](https://opensource.com/sites/default/files/uploads/calcurse-appointment.png) - -Here's what a day's appointments look like: - -![](https://opensource.com/sites/default/files/uploads/calcurse-appt-list.png) - -The to-do list works on its own. Tab to the ToDo panel and (again) press **a**. Type a description of the task, then set a priority (1 is the highest and 9 is the lowest). Calcurse lists your uncompleted tasks in the ToDo panel. - -![](https://opensource.com/sites/default/files/uploads/calcurse-todo.png) - -If your task has a long description, Calcurse truncates it. 
You can view long descriptions by navigating to the task using the up or down arrow keys on your keyboard, then pressing **v**. - -![](https://opensource.com/sites/default/files/uploads/calcurse-view-todo.png) - -Calcurse saves its information in text files in a hidden folder called **.calcurse** in your home directory—for example, **/home/scott/.calcurse**. If Calcurse stops working, it's easy to find your information. - -### Other useful features - -Other Calcurse features include the ability to set recurring appointments. To do that, find the appointment you want to repeat and press **r** in the Appointments panel. You'll be asked to set the frequency (for example, daily or weekly) and how long you want the appointment to repeat. - -You can also import calendars in [ICAL][3] format or export your data in either ICAL or [PCAL][4] format. With ICAL, you can share your data with other calendar applications. With PCAL, you can generate a Postscript version of your calendar. - -There are also a number of command line arguments you can pass to Calcurse. You can read about them [in the documentation][5]. - -While simple, Calcurse does a solid job of helping you keep organized. You'll need to be a bit more mindful of your tasks and appointments, but you'll be able to focus better on what you need to do and where you need to be. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/calcurse - -作者:[Scott Nesbitt][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/scottnesbitt -[b]: https://github.com/lujun9972 -[1]: http://www.calcurse.org/ -[2]: http://www.calcurse.org/downloads/#packages -[3]: https://tools.ietf.org/html/rfc2445 -[4]: http://pcal.sourceforge.net/ -[5]: http://www.calcurse.org/files/manual.chunked/ar01s04.html#_invocation diff --git a/sources/tech/20181025 How to write your favorite R functions in Python.md b/sources/tech/20181025 How to write your favorite R functions in Python.md new file mode 100644 index 0000000000..a06d3557b9 --- /dev/null +++ b/sources/tech/20181025 How to write your favorite R functions in Python.md @@ -0,0 +1,153 @@ +How to write your favorite R functions in Python +====== +R or Python? This Python script mimics convenient R-style functions for doing statistics nice and easy. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0) + +One of the great modern battles of data science and machine learning is "Python vs. R." There is no doubt that both have gained enormous ground in recent years to become top programming languages for data science, predictive analytics, and machine learning. In fact, according to a recent IEEE article, Python overtook C++ as the [top programming language][1] and R firmly secured its spot in the top 10. + +However, there are some fundamental differences between these two. [R was developed primarily][2] as a tool for statistical analysis and quick prototyping of a data analysis problem. Python, on the other hand, was developed as a general purpose, modern object-oriented language in the same vein as C++ or Java but with a simpler learning curve and more flexible demeanor. 
Consequently, R continues to be extremely popular among statisticians, quantitative biologists, physicists, and economists, whereas Python has slowly emerged as the top language for day-to-day scripting, automation, backend web development, analytics, and general machine learning frameworks and has an extensive support base and open source development community work. + +### Mimicking functional programming in a Python environment + +[R's nature as a functional programming language][3] provides users with an extremely simple and compact interface for quick calculations of probabilities and essential descriptive/inferential statistics for a data analysis problem. For example, wouldn't it be great to be able to solve the following problems with just a single, compact function call? + + * How to calculate the mean/median/mode of a data vector. + * How to calculate the cumulative probability of some event following a normal distribution. What if the distribution is Poisson? + * How to calculate the inter-quartile range of a series of data points. + * How to generate a few random numbers following a Student's t-distribution. + + + +The R programming environment can do all of these. + +On the other hand, Python's scripting ability allows analysts to use those statistics in a wide variety of analytics pipelines with limitless sophistication and creativity. + +To combine the advantages of both worlds, you just need a simple Python-based wrapper library that contains the most commonly used functions pertaining to probability distributions and descriptive statistics defined in R-style. This enables you to call those functions really fast without having to go to the proper Python statistical libraries and figure out the whole list of methods and arguments. + +### Python wrapper script for most convenient R-functions + +[I wrote a Python script][4] to define the most convenient and widely used R-functions in simple, statistical analysis—in Python. After importing this script, you will be able to use those R-functions naturally, just like in an R programming environment. + +The goal of this script is to provide simple Python subroutines mimicking R-style statistical functions for quickly calculating density/point estimates, cumulative distributions, and quantiles and generating random variates for important probability distributions. + +To maintain the spirit of R styling, the script uses no class hierarchy and only raw functions are defined in the file. Therefore, a user can import this one Python script and use all the functions whenever they're needed with a single name call. + +Note that I use the word mimic. Under no circumstance am I claiming to emulate R's true functional programming paradigm, which consists of a deep environmental setup and complex relationships between those environments and objects. This script allows me (and I hope countless other Python users) to quickly fire up a Python program or Jupyter notebook, import the script, and start doing simple descriptive statistics in no time. That's the goal, nothing more, nothing less. + +If you've coded in R (maybe in grad school) and are just starting to learn and use Python for data analysis, you will be happy to see and use some of the same well-known functions in your Jupyter notebook in a manner similar to how you use them in your R environment. + +Whatever your reason, using this script is fun. + +### Simple examples + +To start, just import the script and start working with lists of numbers as if they were data vectors in R. 
+
+```
+from R_functions import *
+lst=[20,12,16,32,27,65,44,45,22,18]
+```
+
+Say you want to calculate the [Tukey five-number][5] summary from a vector of data points. You just call one simple function, **fivenum**, and pass on the vector. It will return the five-number summary in a NumPy array.
+
+```
+lst=[20,12,16,32,27,65,44,45,22,18]
+fivenum(lst)
+> array([12. , 18.5, 24.5, 41. , 65. ])
+```
+
+Maybe you want to know the answer to the following question:
+
+Suppose a machine outputs 10 finished goods per hour on average with a standard deviation of 2. The output pattern follows a near normal distribution. What is the probability that the machine will output at least 7 but no more than 12 units in the next hour?
+
+The answer is essentially this:
+
+![](https://opensource.com/sites/default/files/uploads/r-functions-in-python_1.png)
+
+You can obtain the answer with just one line of code using **pnorm**:
+
+```
+pnorm(12,10,2)-pnorm(7,10,2)
+> 0.7745375447996848
+```
+
+Or maybe you need to answer the following:
+
+Suppose you have a loaded coin that comes up heads 60% of the time you toss it. You are playing a game of 10 tosses. How do you plot and map out the chances of all the possible numbers of wins (from 0 to 10) with this coin?
+
+You can obtain a nice bar chart with just a few lines of code using just one function, **dbinom**:
+
+```
+probs=[]
+import matplotlib.pyplot as plt
+for i in range(11):
+    probs.append(dbinom(i,10,0.6))
+plt.bar(range(11),height=probs)
+plt.grid(True)
+plt.show()
+```
+
+![](https://opensource.com/sites/default/files/uploads/r-functions-in-python_2.png)
+
+### Simple interface for probability calculations
+
+R offers an extremely simple and intuitive interface for quick calculations from essential probability distributions. The interface goes like this:
+
+ * **d**{distribution} gives the density function value at a point **x**
+ * **p**{distribution} gives the cumulative value at a point **x**
+ * **q**{distribution} gives the quantile function value at a probability **p**
+ * **r**{distribution} generates one or multiple random variates
+
+In our implementation, we stick to this interface and its associated argument list so you can execute these functions exactly like you would in an R environment.
+
+### Currently implemented functions
+
+The following R-style functions are implemented in the script for fast calling.
+
+ * Mean, median, variance, standard deviation
+ * Tukey five-number summary, IQR
+ * Covariance of a matrix or between two vectors
+ * Density, cumulative probability, quantile function, and random variate generation for the following distributions: normal, uniform, binomial, Poisson, F, Student’s t, Chi-square, beta, and gamma.
+
+### Work in progress
+
+Obviously, this is a work in progress, and I plan to add some other convenient R-functions to this script. For example, in R, a single command, **lm**, can get you an ordinary least-squares fitted model to a numerical dataset, with all the necessary inferential statistics (P-values, standard error, etc.). This is powerfully brief and compact! On the other hand, standard linear regression problems in Python are often tackled using [Scikit-learn][6], which needs a bit more scripting for this use, so I plan to incorporate this single-function linear model fitting feature using Python’s [statsmodels][7] backend.
+
+If you like and use this script in your work, please help others find it by starring or forking its [GitHub repository][8].
Also, you can check my other [GitHub repos][9] for fun code snippets in Python, R, or MATLAB and some machine learning resources.
+
+If you have any questions or ideas to share, please contact me at [tirthajyoti[AT]gmail.com][10]. If you are, like me, passionate about machine learning and data science, please [add me on LinkedIn][11] or [follow me on Twitter][12].
+
+Originally published on [Towards Data Science][13]. Reposted under [CC BY-SA 4.0][14].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/write-favorite-r-functions-python
+
+作者:[Tirthajyoti Sarkar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/tirthajyoti
+[b]: https://github.com/lujun9972
+[1]: https://spectrum.ieee.org/at-work/innovation/the-2018-top-programming-languages
+[2]: https://www.coursera.org/lecture/r-programming/overview-and-history-of-r-pAbaE
+[3]: http://adv-r.had.co.nz/Functional-programming.html
+[4]: https://github.com/tirthajyoti/StatsUsingPython/blob/master/R_Functions.py
+[5]: https://en.wikipedia.org/wiki/Five-number_summary
+[6]: http://scikit-learn.org/stable/
+[7]: https://www.statsmodels.org/stable/index.html
+[8]: https://github.com/tirthajyoti/StatsUsingPython
+[9]: https://github.com/tirthajyoti?tab=repositories
+[10]: mailto:tirthajyoti@gmail.com
+[11]: https://www.linkedin.com/in/tirthajyoti-sarkar-2127aa7/
+[12]: https://twitter.com/tirthajyotiS
+[13]: https://towardsdatascience.com/how-to-write-your-favorite-r-functions-in-python-11e1e9c29089
+[14]: https://creativecommons.org/licenses/by-sa/4.0/
diff --git a/sources/tech/20181025 Understanding Linux Links- Part 2.md b/sources/tech/20181025 Understanding Linux Links- Part 2.md
new file mode 100644
index 0000000000..925138f038
--- /dev/null
+++ b/sources/tech/20181025 Understanding Linux Links- Part 2.md
@@ -0,0 +1,98 @@
+Understanding Linux Links: Part 2
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/links-fikri-rasyid-7853.jpg?itok=0jBT_1M2)
+
+In the [first part of this series][1], we looked at hard links and soft links and discussed some of the various ways that linking can be useful. Linking may seem straightforward, but there are some non-obvious quirks you have to be aware of. That’s what we’ll be looking at here. Consider, for example, the way we created the link to _libblah_ in the previous article. Notice how we linked from within the destination folder:
+
+```
+cd /usr/local/lib
+
+ln -s /usr/lib/libblah
+```
+
+That will work. But this:
+
+```
+cd /usr/lib
+
+ln -s libblah /usr/local/lib
+```
+
+that is, linking from within the original folder to the destination folder, will not work.
+
+The reason is that _ln_ will think you are linking from inside _/usr/local/lib_ to _/usr/local/lib_ and will create a linked file from _libblah_ in _/usr/local/lib_ to _libblah_ also in _/usr/local/lib_. This is because all the link file gets is the name of the file (_libblah_), not the path to the file. The end result is a very broken link.
+
+However, this:
+
+```
+cd /usr/lib
+
+ln -s /usr/lib/libblah /usr/local/lib
+```
+
+will work. Then again, it would work regardless of where in the filesystem you executed the instruction.
Using absolute paths, that is, spelling out the whole path from root (/) down to the file or directory itself, is just best practice.
+
+Another thing to note is that, as long as both _/usr/lib_ and _/usr/local/lib_ are on the same partition, making a hard link like this:
+
+```
+cd /usr/lib
+
+ln libblah /usr/local/lib
+```
+
+will also work, because hard links don't rely on a path within the filesystem; they point straight at the file's data.
+
+Where hard links will not work is if you want to link across partitions. Say you have _fileA_ on partition A and the partition is mounted at _/path/to/partitionA/directory_. If you want to link _fileA_ to _/path/to/partitionB/directory_ that is on partition B, this will not work:
+
+```
+ln /path/to/partitionA/directory/fileA /path/to/partitionB/directory
+```
+
+As we saw previously, hard links are entries in a partition's own table that point to data on the *same partition*. You can't have an entry in the table of one partition pointing to data on another partition. Your only choice here would be to use a soft link:
+
+```
+ln -s /path/to/partitionA/directory/fileA /path/to/partitionB/directory
+```
+
+Another thing that soft links can do and hard links cannot is link to whole directories:
+
+```
+ln -s /path/to/some/directory /path/to/some/other/directory
+```
+
+will create a link to _/path/to/some/directory_ within _/path/to/some/other/directory_ without a hitch.
+
+Trying to do the same by hard linking will show you an error saying that you are not allowed to do that. And the reason for that is unending recursiveness: if you have directory B inside directory A and then you link A inside B, you have a situation, because then A contains B within A inside B that incorporates A that encloses B, and so on ad infinitum.
+
+You can create such recursion using soft links, but why would you do that to yourself?
+
+### Should I use a hard or a soft link?
+
+In general, you can use soft links everywhere and for everything. In fact, there are situations in which you can only use soft links. That said, hard links are slightly more efficient: they take up less space on disk and are faster to access. On most machines you will not notice the difference, though: the difference in space and speed will be negligible given today's massive and speedy hard disks. However, if you are using Linux on an embedded system with small storage and a low-powered processor, you may want to give hard links some consideration.
+
+Another reason to use hard links is that a hard link is much more difficult to break. If you have a soft link and you accidentally move or delete the file it is pointing to, your soft link will be broken and point to... nothing. There is no danger of this happening with a hard link, since the hard link points directly to the data on the disk. Indeed, the space on the disk will not be flagged as free until the last hard link pointing to it is erased from the file system.
+
+Soft links, on the other hand, can do more than hard links and point to anything, be it a file or a directory. They can also point to items that are on different partitions. These two things alone often make them the only choice.
+
+### Next Time
+
+Now that we have covered files and directories and the basic tools to manipulate them, you are ready to move on to the tools that let you explore the directory hierarchy, find data within files, and examine the contents. That's what we'll be dealing with in the next installment. See you then!
+
+Learn more about Linux through the free ["Introduction to Linux"][2] course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2018/10/understanding-linux-links-part-2
+
+作者:[Paul Brown][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/bro66
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/blog/intro-to-linux/2018/10/linux-links-part-1
+[2]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md b/sources/tech/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md
new file mode 100644
index 0000000000..fda7de542e
--- /dev/null
+++ b/sources/tech/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md
@@ -0,0 +1,84 @@
+Ultimate Plumber – Writing Linux Pipes With Instant Live Preview
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Ultimate-Plumber-720x340.jpg)
+
+As you may already know, a **pipe** is used to send the output of one command/program/process to another command/program/process for further processing in Unix-like operating systems. Using pipes, we can combine two or more commands and redirect the standard output of one command to the standard input of another easily and quickly. A pipe is represented by a vertical bar character (**|**) between two or more Linux commands. The general syntax of a pipeline is given below.
+
+```
+Command-1 | Command-2 | Command-3 | …| Command-N
+```
+
+If you use pipes often, I have good news for you. Now you can preview the results of your Linux pipes instantly while writing them. Say hello to **“Ultimate Plumber”**, or **UP** for short, a command-line tool for writing Linux pipes with an instant live preview. It helps you build complex pipelines quickly and easily, with an instant, scrollable preview of the command results. The UP tool is quite handy if you often need to rerun piped commands to get the desired result.
+
+In this brief guide, I will show you how to install UP and use it to build complex Linux pipelines easily.
+
+**Important warning:**
+
+Please be careful when using this tool in production! It could be dangerous, and you might inadvertently delete important data. Be particularly careful when using the “rm” or “dd” commands with the UP tool. You have been warned!
+
+### Writing Linux Pipes With Instant Live Preview Using Ultimate Plumber
+
+Here is a simple example to understand the underlying concept of UP. Let us pipe the output of the **lshw** command into UP. To do so, type the following command in your Terminal and press ENTER:
+
+```
+$ lshw |& up
+```
+
+You will see an input box at the top of the screen, as shown in the screenshot below.
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Ultimate-Plumber.png)
+In the input box, start typing a pipeline and press the ENTER key to execute it. The Ultimate Plumber utility will immediately show you the output of the pipeline in the **scrollable window** below. You can browse through the results using the **PgUp/PgDn** or **Ctrl+** keys.
+
+Once you’re satisfied with the result, press **Ctrl-X** to exit UP.
The Linux pipe command you just built will be saved in a file named **up1.sh** in the current working directory. If this file already exists, an additional file named **up2.sh** will be created to save the result. This goes on for up to 1000 files. If you don’t want to save the output, just press **Ctrl-C**.
+
+You can view the contents of the upX.sh files with the cat command. Here is the output of my **up2.sh** file:
+
+```
+$ cat up2.sh
+#!/bin/bash
+grep network -A5 | grep : | cut -d: -f2- | paste - -
+```
+
+If the command you piped into UP is long-running, you will see a **~** (tilde) character in the top-left corner of the window. It means that UP is still waiting for input. In such cases, you can temporarily freeze UP’s input buffer by pressing **Ctrl-S**. To unfreeze it, simply press **Ctrl-Q**. The current input buffer limit of Ultimate Plumber is **40 MB**. Once you reach this limit, you will see a **+** (plus) sign in the top-left corner of the screen.
+
+Here is a short demo of the UP tool in action:
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/up.gif)
+
+### Installing Ultimate Plumber
+
+Liked it? Great! Go ahead and install it on your Linux system and start using it. Installing UP is quite easy! All you have to do is open your Terminal and run the following two commands.
+
+Download the latest Ultimate Plumber binary from the [**releases page**][1] and put it in your path, for example **/usr/local/bin/**:
+
+```
+$ sudo wget -O /usr/local/bin/up https://github.com/akavel/up/releases/download/v0.2.1/up
+```
+
+Then, make the UP binary executable:
+
+```
+$ sudo chmod a+x /usr/local/bin/up
+```
+
+Done! Start building Linux pipelines as described above!
+
+And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
+
+Cheers!
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/ultimate-plumber-writing-linux-pipes-with-instant-live-preview/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://github.com/akavel/up/releases
diff --git a/sources/tech/20181029 Create animated, scalable vector graphic images with MacSVG.md b/sources/tech/20181029 Create animated, scalable vector graphic images with MacSVG.md
new file mode 100644
index 0000000000..df990db3bc
--- /dev/null
+++ b/sources/tech/20181029 Create animated, scalable vector graphic images with MacSVG.md
@@ -0,0 +1,69 @@
+Create animated, scalable vector graphic images with MacSVG
+======
+
+Open source SVG: The writing is on the wall
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_design_paper_plane_2_0.jpg?itok=xKdP-GWE)
+
+The Neo-Babylonian regent [Belshazzar][1] did not heed [the writing on the wall][2] that magically appeared during his great feast. However, if he had had a laptop and a good internet connection in 539 BC, he might have staved off those pesky Persians by reading the SVG on the browser.
+
+Animating text and objects on web pages is a great way to build user interest and engagement. There are several ways to achieve this, such as a video embed, an animated GIF, or a slideshow—but you can also use [scalable vector graphics][3] (SVG).
+
+An SVG image is different from, say, a JPG, because it is scalable without losing its resolution. A vector image is created from points, not dots, so no matter how large it gets, it will not lose its resolution or pixelate. An example of a good use of scalable, static images would be logos on websites.
+
+### Move it, move it
+
+You can create SVG images with several drawing programs, including the open source [Inkscape][4] and Adobe Illustrator. Getting your images to “do something” requires a bit more effort. Fortunately, there are open source solutions that would get even Belshazzar’s attention.
+
+[MacSVG][5] is one tool that will get your images moving. You can find the source code on [GitHub][6].
+
+Developed by Douglas Ward of Conway, Arkansas, MacSVG is an “open source Mac OS app for designing HTML5 SVG art and animation,” according to its [website][5].
+
+I was interested in using MacSVG to create an animated signature. I admit that I found the process a bit confusing and failed at my first attempts to create an actual animated SVG image.
+
+![](https://opensource.com/sites/default/files/uploads/macsvg-screen.png)
+
+It is important to first learn what makes “the writing on the wall” actually write.
+
+The attribute behind the animated writing is [stroke-dasharray][7]. Breaking the term into three words helps explain what is happening: Stroke refers to the line or stroke you would make with a pen, whether physical or digital. Dash means breaking the stroke down into a series of dashes. Array means arranging the whole series of dashes into an array. That’s a simple overview, but it helped me understand what was supposed to happen and why.
+
+With MacSVG, you can import a graphic (.PNG) and use the pen tool to trace the path of the writing. I used a cursive representation of my first name. Then it was just a matter of applying the attributes to animate the writing, increase and decrease the thickness of the stroke, change its color, and so on. Once completed, the animated writing was exported as an .SVG file and was ready for use on the web. MacSVG can be used for many different types of SVG animation in addition to handwriting.
+
+### The writing is on the WordPress
+
+I was ready to upload and share my SVG example on my [WordPress][8] site, but I discovered that WordPress does not allow for SVG media imports. Fortunately, I found a handy plugin: Benbodhi’s [SVG Support][9] allowed a quick, easy import of my SVG the same way I would import a JPG to my Media Library. I was able to showcase my [writing on the wall][10] to Babylonians everywhere.
+
+I opened the source code of my SVG in [Brackets][11]. The essentials look like this (the markup below is a minimal, illustrative reconstruction; the path data for a real traced signature is far longer):
+
+```
+<svg xmlns="http://www.w3.org/2000/svg" width="400" height="200">
+  <title>Path animation with stroke-dasharray</title>
+  <desc>This example demonstrates the use of a path element, an animate
+  element, and the stroke-dasharray attribute to simulate drawing.</desc>
+  <!-- Illustrative path; a traced signature would have many more points. -->
+  <path d="M20,100 C80,20 160,180 220,100" fill="none" stroke="black"
+        stroke-width="3" stroke-dasharray="400" stroke-dashoffset="400">
+    <!-- Winding the dash offset back to zero "draws" the stroke. -->
+    <animate attributeName="stroke-dashoffset" from="400" to="0"
+             dur="4s" fill="freeze"/>
+  </path>
+</svg>
+```
+
+What would you use MacSVG for?
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/macsvg-open-source-tool-animation
+
+作者:[Jeff Macharyas][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/rikki-endsley
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Belshazzar
+[2]: https://en.wikipedia.org/wiki/Belshazzar%27s_feast
+[3]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics
+[4]: https://inkscape.org/
+[5]: https://macsvg.org/
+[6]: https://github.com/dsward2/macSVG
+[7]: https://gist.github.com/mbostock/5649592
+[8]: https://macharyas.com/
+[9]: https://wordpress.org/plugins/svg-support/
+[10]: https://macharyas.com/index.php/2018/10/14/open-source-svg/
+[11]: http://brackets.io/
diff --git a/sources/tech/20181029 DF-SHOW - A Terminal File Manager Based On An Old DOS Application.md b/sources/tech/20181029 DF-SHOW - A Terminal File Manager Based On An Old DOS Application.md
new file mode 100644
index 0000000000..f250cca056
--- /dev/null
+++ b/sources/tech/20181029 DF-SHOW - A Terminal File Manager Based On An Old DOS Application.md
@@ -0,0 +1,162 @@
+DF-SHOW – A Terminal File Manager Based On An Old DOS Application
+======
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-720x340.png)
+
+If you have worked on good old MS-DOS, you might have used or heard about **DF-EDIT**. DF-EDIT, which stands for **D**irectory **F**ile **Edit**or, is an obscure DOS file manager originally written by **Larry Kroeker** for MS-DOS and PC-DOS systems. It is used to display the contents of a given directory or file on those systems. Today, I stumbled upon a similar utility named **DF-SHOW** (**D**irectory **F**ile **S**how), a terminal file manager for Unix-like operating systems. It is a Unix rewrite of the obscure DF-EDIT file manager, based on the DF-EDIT 2.3d release from 1986. DF-SHOW is completely free, open source, and released under GPLv3.
+
+With DF-SHOW, you can:
+
+ * List the contents of a directory,
+ * View files,
+ * Edit files using your default file editor,
+ * Copy files to/from different locations,
+ * Rename files,
+ * Delete files,
+ * Create new directories from within the DF-SHOW interface,
+ * Update file permissions, owners, and groups,
+ * Search for files matching a search term,
+ * Launch executable files.
+
+### DF-SHOW Usage
+
+DF-SHOW consists of two programs, namely **“show”** and **“sf”**.
+
+**Show command**
+
+The “show” program (similar to the `ls` command) is used to display the contents of a directory, create new directories, rename and delete files/folders, update permissions, search for files, and so on.
+
+To view the list of contents in a directory, use the following command:
+
+```
+$ show [directory]
+```
+
+Example:
+
+```
+$ show dfshow
+```
+
+Here, dfshow is a directory. If you invoke the “show” command without specifying a directory path, it will display the contents of the current directory.
+
+Here is how the DF-SHOW default interface looks.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-1.png)
+
+As you can see, the DF-SHOW interface is self-explanatory.
+
+On the top bar, you see the list of available options, such as Copy, Delete, Edit, Modify, etc.
+
+The complete list of available options is given below:
+
+ * **C**opy,
+ * **D**elete,
+ * **E**dit,
+ * **H**idden,
+ * **M**odify,
+ * **Q**uit,
+ * **R**ename,
+ * **S**how,
+ * h**U**nt,
+ * e**X**ec,
+ * **R**un command,
+ * **E**dit file,
+ * **H**elp,
+ * **M**ake dir,
+ * **Q**uit,
+ * **S**how dir
+
+In each option, one letter is capitalized and marked in bold. Just press the capitalized letter to perform the respective operation. For example, to rename a file, just press **R**, type the new name, and hit ENTER to rename the selected item.
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-2.png)
+
+To display all options or cancel an operation, just press the **ESC** key.
+
+Also, you will see a bunch of function keys at the bottom of the DF-SHOW interface for navigating through the contents of a directory.
+
+ * **UP/DOWN** arrows or **F1/F2** – Move up and down (one line at a time),
+ * **PgUp/PgDn** – Move one page at a time,
+ * **F3/F4** – Instantly go to the top or bottom of the list,
+ * **F5** – Refresh,
+ * **F6** – Mark/unmark files (marked files are indicated with an ***** in front of them),
+ * **F7/F8** – Mark/unmark all files at once,
+ * **F9** – Sort the list by date & time, name, or size.
+
+Press **h** to learn more about the **show** command and its options.
+
+To exit DF-SHOW, simply press **q**.
+
+**SF Command**
+
+The “sf” (show files) program is used to display the contents of a file:
+
+```
+$ sf [file]
+```
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-3.png)
+
+Press **h** to learn more about the “sf” command and its options. To quit, press **q**.
+
+Want to give it a try? Great! Go ahead and install DF-SHOW on your Linux system as described below.
+
+### Installing DF-SHOW
+
+DF-SHOW is available in the [**AUR**][1], so you can install it on any Arch-based system using an AUR helper such as [**Yay**][2]:
+
+```
+$ yay -S dfshow
+```
+
+On Ubuntu and its derivatives:
+
+```
+$ sudo add-apt-repository ppa:ian-hawdon/dfshow
+
+$ sudo apt-get update
+
+$ sudo apt-get install dfshow
+```
+
+On other Linux distributions, you can compile and build it from source as shown below:
+
+```
+$ git clone https://github.com/roberthawdon/dfshow
+$ cd dfshow
+$ ./bootstrap
+$ ./configure
+$ make
+$ sudo make install
+```
+
+The author of the DF-SHOW project has so far rewritten only some of the applications of the DF-EDIT utility. Since the source code is freely available on GitHub, you can add more features, improve the code, and submit or fix bugs (if there are any). It is still in the alpha stage, but fully functional.
+
+Have you tried it already? If so, how did it go? Tell us about your experience in the comments section below.
+
+And, that’s all for now. Hope this was useful. More good stuff to come.
+
+Stay tuned!
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/df-show-a-terminal-file-manager-based-on-an-old-dos-application/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://aur.archlinux.org/packages/dfshow/ +[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ diff --git a/sources/tech/20181029 Machine learning with Python- Essential hacks and tricks.md b/sources/tech/20181029 Machine learning with Python- Essential hacks and tricks.md new file mode 100644 index 0000000000..a3896df3f0 --- /dev/null +++ b/sources/tech/20181029 Machine learning with Python- Essential hacks and tricks.md @@ -0,0 +1,112 @@ +Machine learning with Python: Essential hacks and tricks +====== +Master machine learning, AI, and deep learning with Python. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S) + +It's never been easier to get started with machine learning. In addition to structured massive open online courses (MOOCs), there are a huge number of incredible, free resources available around the web. Here are a few that have helped me. + + 2. Learn to clearly differentiate between the buzzwords—for example, machine learning, artificial intelligence, deep learning, data science, computer vision, and robotics. Read or listen to talks by experts on each of them. Watch this [amazing video by Brandon Rohrer][1], an influential data scientist. Or this video about the [clear differences between various roles][2] associated with data science. + + + 3. Clearly set a goal for what you want to learn. Then go and take [that Coursera course][3]. Or take the one [from the University of Washington][4], which is pretty good too. + + + 5. If you are enthusiastic about taking online courses, check out this article for guidance on [choosing the right MOOC][5]. + + + 6. Most of all, develop a feel for it. Join some good social forums, but resist the temptation to latch onto sensationalized headlines and news. Do your own reading to understand what it is and what it is not, where it might go, and what possibilities it can open up. Then sit back and think about how you can apply machine learning or imbue data science principles into your daily work. Build a simple regression model to predict the cost of your next lunch or download your electricity usage data from your energy provider and do a simple time-series plot in Excel to discover some pattern of usage. And after you are thoroughly enamored with machine learning, you can watch this video. + + + +### Is Python a good language for machine learning/AI? + +Familiarity and moderate expertise in at least one high-level programming language is useful for beginners in machine learning. Unless you are a Ph.D. researcher working on a purely theoretical proof of some complex algorithm, you are expected to mostly use the existing machine learning algorithms and apply them in solving novel problems. This requires you to put on a programming hat. + +There's a lot of talk about the best language for data science. While the debate rages, grab a coffee and read this insightful FreeCodeCamp article to learn about [data science languages][6] . 
Or, check out this post on KDnuggets to dive directly into the [Python vs. R debate][7].
+
+For now, it's widely believed that Python helps developers be more productive from development to deployment and maintenance. Python's syntax is simpler and at a higher level when compared to Java, C, and C++. It has a vibrant community, an open source culture, hundreds of high-quality libraries focused on machine learning, and a huge support base from big names in the industry (e.g., Google, Dropbox, Airbnb, etc.).
+
+### Fundamental Python libraries
+
+Assuming you go with the widespread opinion that Python is the best language for machine learning, there are a few core Python packages and libraries you need to master.
+
+#### NumPy
+
+Short for [Numerical Python][8], NumPy is the fundamental package required for high-performance scientific computing and data analysis in the Python ecosystem. It's the foundation on which nearly all of the higher-level tools, such as [Pandas][9] and [scikit-learn][10], are built. [TensorFlow][11] uses NumPy arrays as the fundamental building blocks underpinning Tensor objects and graphflow for deep learning tasks. Many NumPy operations are implemented in C, making them super fast. For data science and modern machine learning tasks, this is an invaluable advantage.
+
+![](https://opensource.com/sites/default/files/uploads/machine-learning-python_numpy-cheat-sheet.jpeg)
+
+#### Pandas
+
+Pandas is the most popular library in the scientific Python ecosystem for doing general-purpose data analysis. Pandas is built upon NumPy arrays, thereby preserving fast execution speed and offering many data engineering features, including:
+
+ * Reading/writing many different data formats
+ * Selecting subsets of data
+ * Calculating across rows and down columns
+ * Finding and filling missing data
+ * Applying operations to independent groups within the data
+ * Reshaping data into different forms
+ * Combining multiple datasets together
+ * Advanced time-series functionality
+ * Visualization through Matplotlib and Seaborn
+
+![](https://opensource.com/sites/default/files/uploads/pandas_cheat_sheet_github.png)
+
+#### Matplotlib and Seaborn
+
+Data visualization and storytelling with data are essential skills for every data scientist because it's critical to be able to communicate insights from analyses to any audience effectively. This is an equally critical part of your machine learning pipeline, as you often have to perform an exploratory analysis of a dataset before deciding to apply a particular machine learning algorithm.
+
+[Matplotlib][12] is the most widely used 2D Python visualization library. It's equipped with a dazzling array of commands and interfaces for producing publication-quality graphics from your data. This amazingly detailed and rich article will help you [get started with Matplotlib][13].
+
+![](https://opensource.com/sites/default/files/uploads/matplotlib_gallery_-1.png)
+[Seaborn][14] is another great visualization library focused on statistical plotting. It provides an API (with flexible choices for plot style and color defaults) on top of Matplotlib, defines simple high-level functions for common statistical plot types, and integrates with functionality provided by Pandas. You can start with this great tutorial on [Seaborn for beginners][15].
+
+![](https://opensource.com/sites/default/files/uploads/machine-learning-python_seaborn.png)
+
+#### Scikit-learn
+
+Scikit-learn is the most important general machine learning Python package to master.
It features various [classification][16], [regression][17], and [clustering][18] algorithms, including [support vector machines][19], [random forests][20], [gradient boosting][21], [k-means][22], and [DBSCAN][23], and is designed to interoperate with the Python numerical and scientific libraries NumPy and [SciPy][24]. It provides a range of supervised and unsupervised learning algorithms via a consistent interface. The library has a level of robustness and support required for use in production systems. This means it has a deep focus on concerns such as ease of use, code quality, collaboration, documentation, and performance. Look at this [gentle introduction to machine learning vocabulary][25] used in the Scikit-learn universe or this article demonstrating [a simple machine learning pipeline][26] method using Scikit-learn. + +This article was originally published on [Heartbeat][27] under [CC BY-SA 4.0][28]. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/machine-learning-python-essential-hacks-and-tricks + +作者:[Tirthajyoti Sarkar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/tirthajyoti +[b]: https://github.com/lujun9972 +[1]: https://www.youtube.com/watch?v=tKa0zDDDaQk +[2]: https://www.youtube.com/watch?v=Ura_ioOcpQI +[3]: https://www.coursera.org/learn/machine-learning +[4]: https://www.coursera.org/specializations/machine-learning +[5]: https://towardsdatascience.com/how-to-choose-effective-moocs-for-machine-learning-and-data-science-8681700ed83f +[6]: https://medium.freecodecamp.org/which-languages-should-you-learn-for-data-science-e806ba55a81f +[7]: https://www.kdnuggets.com/2017/09/python-vs-r-data-science-machine-learning.html +[8]: http://numpy.org/ +[9]: https://pandas.pydata.org/ +[10]: http://scikit-learn.org/ +[11]: https://www.tensorflow.org/ +[12]: https://matplotlib.org/ +[13]: https://realpython.com/python-matplotlib-guide/ +[14]: https://seaborn.pydata.org/ +[15]: https://www.datacamp.com/community/tutorials/seaborn-python-tutorial +[16]: https://en.wikipedia.org/wiki/Statistical_classification +[17]: https://en.wikipedia.org/wiki/Regression_analysis +[18]: https://en.wikipedia.org/wiki/Cluster_analysis +[19]: https://en.wikipedia.org/wiki/Support_vector_machine +[20]: https://en.wikipedia.org/wiki/Random_forests +[21]: https://en.wikipedia.org/wiki/Gradient_boosting +[22]: https://en.wikipedia.org/wiki/K-means_clustering +[23]: https://en.wikipedia.org/wiki/DBSCAN +[24]: https://en.wikipedia.org/wiki/SciPy +[25]: http://scikit-learn.org/stable/tutorial/basic/tutorial.html +[26]: https://towardsdatascience.com/machine-learning-with-python-easy-and-robust-method-to-fit-nonlinear-data-19e8a1ddbd49 +[27]: https://heartbeat.fritz.ai/some-essential-hacks-and-tricks-for-machine-learning-with-python-5478bc6593f2 +[28]: https://creativecommons.org/licenses/by-sa/4.0/ diff --git a/sources/tech/20181030 Podman- A more secure way to run containers.md b/sources/tech/20181030 Podman- A more secure way to run containers.md new file mode 100644 index 0000000000..a6252d87cc --- /dev/null +++ b/sources/tech/20181030 Podman- A more secure way to run containers.md @@ -0,0 +1,130 @@ +Podman: A more secure way to run containers +====== +Podman uses a traditional fork/exec model (vs. a client/server model) for running containers. 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-cloud-safe.png?itok=yj2TFPzq)
+
+Before I get into the main topic of this article, [Podman][1] and containers, I need to get a little technical about the Linux audit feature.
+
+### What is audit?
+
+The Linux kernel has an interesting security feature called **audit**. It allows administrators to watch for security events on a system and have them logged to the audit.log, which can be stored locally or remotely on another machine to prevent a hacker from trying to cover his tracks.
+
+The **/etc/shadow** file is a common security file to watch, since adding a record to it could allow an attacker to get return access to the system. Administrators want to know if any process modified the file. You can set up such a watch by executing the command:
+
+```
+# auditctl -w /etc/shadow
+```
+
+Now let's see what happens if I modify the /etc/shadow file:
+
+```
+# touch /etc/shadow
+# ausearch -f /etc/shadow -i -ts recent
+
+type=PROCTITLE msg=audit(10/10/2018 09:46:03.042:4108) : proctitle=touch /etc/shadow type=SYSCALL msg=audit(10/10/2018 09:46:03.042:4108) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb17f6704 a2=O_WRONLY|O_CREAT|O_NOCTTY| O_NONBLOCK a3=0x1b6 items=2 ppid=2712 pid=3727 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts1 ses=3 comm=touch exe=/usr/bin/touch subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
+```
+
+There's a lot of information in the audit record, but I highlighted that it recorded that root modified the /etc/shadow file and that the audit UID (**auid**) of the owner of the process was **dwalsh**.
+
+How did the kernel do that?
+
+#### Tracking the login UID
+
+There is a field called **loginuid**, stored in **/proc/self/loginuid**, that is part of the proc struct of every process on the system. This field can be set only once; after it is set, the kernel will not allow any process to reset it.
+
+When I log into the system, the login program sets the loginuid field for my login process.
+
+My UID, dwalsh, is 3267.
+
+```
+$ cat /proc/self/loginuid
+3267
+```
+
+Now, even if I become root, my login UID stays the same.
+
+```
+$ sudo cat /proc/self/loginuid
+3267
+```
+
+Note that every process that's forked and executed from the initial login process automatically inherits the loginuid. This is how the kernel knew that the person who logged in was dwalsh.
+
+### Containers
+
+Now let's look at containers.
+
+```
+sudo podman run fedora cat /proc/self/loginuid
+3267
+```
+
+Even the container process retains my loginuid. Now let's try with Docker.
+
+```
+sudo docker run fedora cat /proc/self/loginuid
+4294967295
+```
+
+### Why the difference?
+
+Podman uses a traditional fork/exec model for the container, so the container process is an offspring of the Podman process. Docker uses a client/server model. The **docker** command I executed is the Docker client tool, and it communicates with the Docker daemon via a client/server operation. Then the Docker daemon creates the container and handles communications of stdin/stdout back to the Docker client tool.
+
+The default loginuid of processes (before their loginuid is set) is 4294967295.
Since the container is an offspring of the Docker daemon and the Docker daemon is a child of the init system, we see that systemd, Docker daemon, and the container processes all have the same loginuid, 4294967295, which audit refers to as the unset audit UID. + +``` +cat /proc/1/loginuid +4294967295 +``` + +### How can this be abused? + +Let's look at what would happen if a container process launched by Docker modifies the /etc/shadow file. + +``` +$ sudo docker run --privileged -v /:/host fedora touch /host/etc/shadow +$ sudo ausearch -f /etc/shadow -i type=PROCTITLE msg=audit(10/10/2018 10:27:20.055:4569) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow type=SYSCALL msg=audit(10/10/2018 10:27:20.055:4569) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb6973f50 a2=O_WRONLY|O_CREAT|O_NOCTTY| O_NONBLOCK a3=0x1b6 items=2 ppid=11863 pid=11882 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=touch exe=/usr/bin/coreutils subj=system_u:system_r:spc_t:s0 key=(null) +``` + +In the Docker case, the auid is unset (4294967295); this means the security officer might know that a process modified the /etc/shadow file but the identity was lost. + +If that attacker then removed the Docker container, there would be no trace on the system of who modified the /etc/shadow file. + +Now let's look at the exact same scenario with Podman. + +``` +$ sudo podman run --privileged -v /:/host fedora touch /host/etc/shadow +$ sudo ausearch -f /etc/shadow -i type=PROCTITLE msg=audit(10/10/2018 10:23:41.659:4530) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow type=SYSCALL msg=audit(10/10/2018 10:23:41.659:4530) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7fffdffd0f34 a2=O_WRONLY|O_CREAT|O_NOCTTY| O_NONBLOCK a3=0x1b6 items=2 ppid=11671 pid=11683 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=3 comm=touch exe=/usr/bin/coreutils subj=unconfined_u:system_r:spc_t:s0 key=(null) +``` + +Everything is recorded correctly with Podman since it uses traditional fork/exec. + +This was just a simple example of watching the /etc/shadow file, but the auditing system is very powerful for watching what processes do on a system. Using a fork/exec container runtime for launching containers (instead of a client/server container runtime) allows you to maintain better security through audit logging. + +### Final thoughts + +There are many other nice features about the fork/exec model versus the client/server model when launching containers. For example, systemd features include: + + * **SD_NOTIFY:** If you put a Podman command into a systemd unit file, the container process can return notice up the stack through Podman that the service is ready to receive tasks. This is something that can't be done in client/server mode. + * **Socket activation:** You can pass down connected sockets from systemd to Podman and onto the container process to use them. This is impossible in the client/server model. + + + +The nicest feature, in my opinion, is **running Podman and containers as a non-root user**. This means you never have give a user root privileges on the host, while in the client/server model (like Docker employs), you must open a socket to a privileged daemon running as root to launch the containers. 
There you are at the mercy of the security mechanisms implemented in the daemon versus the security mechanisms implemented in the host operating systems—a dangerous proposition.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/podman-more-secure-way-run-containers
+
+作者:[Daniel J Walsh][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/rhatdan
+[b]: https://github.com/lujun9972
+[1]: https://podman.io
diff --git a/sources/tech/20181031 8 creepy commands that haunt the terminal - Opensource.com.md b/sources/tech/20181031 8 creepy commands that haunt the terminal - Opensource.com.md
new file mode 100644
index 0000000000..a2e9f1aa2b
--- /dev/null
+++ b/sources/tech/20181031 8 creepy commands that haunt the terminal - Opensource.com.md
@@ -0,0 +1,60 @@
+8 creepy commands that haunt the terminal | Opensource.com
+======
+
+Welcome to the spookier side of Linux.
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/halloween_bag_bat_diy.jpg?itok=24M0lX25)
+
+It’s that time of year again: The weather gets chilly, the leaves change colors, and kids everywhere transform into tiny ghosts, goblins, and zombies. But did you know that Unix (and Linux) and its various offshoots are also chock-full of creepy crawly things? Let’s take a quick look at some of the spookier aspects of the operating system we all know and love.
+
+### daemon
+
+Unix just wouldn’t be the same without all the various daemons that haunt the system. A `daemon` is a process that runs in the background and provides useful services to both the user and the operating system itself. Think SSH, FTP, HTTP, etc.
+
+### zombie
+
+Every now and then a zombie, a process that has been killed but refuses to go away, shows up. When this happens, you have no choice but to dispatch it using whatever tools you have available. A zombie usually indicates that something is wrong with the process that spawned it.
+
+### kill
+
+Not only can you use the `kill` command to dispatch a zombie, but you can also use it to kill any process that’s adversely affecting your system. Have a process that’s using too much RAM or CPU cycles? Dispatch it with the `kill` command.
+
+### cat
+
+The `cat` command has nothing to do with felines and everything to do with combining files: `cat` is short for "concatenate." You can even use this handy command to view the contents of a file.
+
+### tail
+
+The `tail` command is useful when you want to see the last *n* lines of a file. It’s also great when you want to monitor a file.
+
+### which
+
+No, not that kind of witch, but the command that prints the location of the files associated with any command passed to it. `which python`, for example, will print the location of the first `python` executable in your path; add the `-a` option to list every version of Python on your system.
+
+### crypt
+
+The `crypt` command, known these days as `mcrypt`, is handy when you want to scramble (encrypt) the contents of a file so that no one but you can read it. Like most Unix commands, you can use `crypt` standalone or within a system script.
+
+### shred
+
+The `shred` command is handy when you not only want to delete a file but you also want to ensure that no one will ever be able to recover it. Using the `rm` command to delete a file isn’t enough.
You also need to overwrite the space that the file previously occupied. That’s where `shred` comes in. + +These are just a few of the spooky things you’ll find hiding inside Unix. Do you know more creepy commands? Feel free to let me know. + +Happy Halloween! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/spookier-side-unix-linux + +作者:[Patrick H.Mullins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/pmullins +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20181031 Working with data streams on the Linux command line.md b/sources/tech/20181031 Working with data streams on the Linux command line.md new file mode 100644 index 0000000000..87403558d7 --- /dev/null +++ b/sources/tech/20181031 Working with data streams on the Linux command line.md @@ -0,0 +1,302 @@ +Working with data streams on the Linux command line +====== +Learn to connect data streams from one utility to another using STDIO. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pipe-pipeline-grid.png?itok=kkpzKxKg) + +**Author’s note:** Much of the content in this article is excerpted, with some significant edits to fit the Opensource.com article format, from Chapter 3: Data Streams, of my new book, [The Linux Philosophy for SysAdmins][1]. + +Everything in Linux revolves around streams of data—particularly text streams. Data streams are the raw materials upon which the [GNU Utilities][2], the Linux core utilities, and many other command-line tools perform their work. + +As its name implies, a data stream is a stream of data—especially text data—being passed from one file, device, or program to another using STDIO. This chapter introduces the use of pipes to connect streams of data from one utility program to another using STDIO. You will learn that the function of these programs is to transform the data in some manner. You will also learn about the use of redirection to redirect the data to a file. + +I use the term “transform” in conjunction with these programs because the primary task of each is to transform the incoming data from STDIO in a specific way as intended by the sysadmin and to send the transformed data to STDOUT for possible use by another transformer program or redirection to a file. + +The standard term, “filters,” implies something with which I don’t agree. By definition, a filter is a device or a tool that removes something, such as an air filter removes airborne contaminants so that the internal combustion engine of your automobile does not grind itself to death on those particulates. In my high school and college chemistry classes, filter paper was used to remove particulates from a liquid. The air filter in my home HVAC system removes particulates that I don’t want to breathe. + +Although they do sometimes filter out unwanted data from a stream, I much prefer the term “transformers” because these utilities do so much more. They can add data to a stream, modify the data in some amazing ways, sort it, rearrange the data in each line, perform operations based on the contents of the data stream, and so much more. Feel free to use whichever term you prefer, but I prefer transformers. I expect that I am alone in this. 
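+To make "transformer" concrete before we dig into the details, here is a small sketch of the kind of chain the rest of this article builds toward; each program reads a stream on its input, transforms it, and passes the result along (the exact output depends entirely on your system):
+
+```
+# dmesg | tr ' ' '\n' | sort | uniq -c | sort -rn | head -3
+```
+
+Here `dmesg` generates the stream, `tr` splits it into one word per line, `sort` and `uniq -c` count repeated words, and `head` trims the result. Nothing here is "filtered out" so much as reshaped, counted, and reordered.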
+ +Data streams can be manipulated by inserting transformers into the stream using pipes. Each transformer program is used by the sysadmin to perform some operation on the data in the stream, thus changing its contents in some manner. Redirection can then be used at the end of the pipeline to direct the data stream to a file. As mentioned, that file could be an actual data file on the hard drive, or a device file such as a drive partition, a printer, a terminal, a pseudo-terminal, or any other device connected to a computer. + +The ability to manipulate these data streams using these small yet powerful transformer programs is central to the power of the Linux command-line interface. Many of the core utilities are transformer programs and use STDIO. + +In the Unix and Linux worlds, a stream is a flow of text data that originates at some source; the stream may flow to one or more programs that transform it in some way, and then it may be stored in a file or displayed in a terminal session. As a sysadmin, your job is intimately associated with manipulating the creation and flow of these data streams. In this post, we will explore data streams—what they are, how to create them, and a little bit about how to use them. + +### Text streams—a universal interface + +The use of Standard Input/Output (STDIO) for program input and output is a key foundation of the Linux way of doing things. STDIO was first developed for Unix and has found its way into most other operating systems since then, including DOS, Windows, and Linux. + +> “This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.” +> +> — Doug McIlroy, Basics of the Unix Philosophy + +### STDIO + +STDIO was developed by Ken Thompson as a part of the infrastructure required to implement pipes on early versions of Unix. Programs that implement STDIO use standardized file handles for input and output rather than files that are stored on a disk or other recording media. STDIO is best described as a buffered data stream, and its primary function is to stream data from the output of one program, file, or device to the input of another program, file, or device. + +There are three STDIO data streams, each of which is automatically opened as a file at the startup of a program—well, those programs that use STDIO. Each STDIO data stream is associated with a file handle, which is just a set of metadata that describes the attributes of the file. File handles 0, 1, and 2 are explicitly defined by convention and long practice as STDIN, STDOUT, and STDERR, respectively. + +**STDIN, File handle 0** , is standard input which is usually input from the keyboard. STDIN can be redirected from any file, including device files, instead of the keyboard. It is not common to need to redirect STDIN, but it can be done. + +**STDOUT, File handle 1** , is standard output which sends the data stream to the display by default. It is common to redirect STDOUT to a file or to pipe it to another program for further processing. + +**STDERR, File handle 2**. The data stream for STDERR is also usually sent to the display. + +If STDOUT is redirected to a file, STDERR continues to be displayed on the screen. This ensures that when the data stream itself is not displayed on the terminal, that STDERR is, thus ensuring that the user will see any errors resulting from execution of the program. 
STDERR can also be redirected to the same destination or passed on to the next transformer program in a pipeline.
+
+STDIO is implemented as a C library, **stdio.h**, which can be included in the source code of programs so that it can be compiled into the resulting executable.
+
+### Simple streams
+
+You can perform the following experiments safely in the **/tmp** directory of your Linux host. As the root user, make **/tmp** the PWD, create a test directory, and then make the new directory the PWD.
+
+```
+# cd /tmp ; mkdir test ; cd test
+```
+
+Enter and run the following command line program to create some files with content on the drive. We use the `dmesg` command simply to provide data for the files to contain. The contents don’t matter as much as just the fact that each file has some content.
+
+```
+# for I in 0 1 2 3 4 5 6 7 8 9 ; do dmesg > file$I.txt ; done
+```
+
+Verify that there are now at least 10 files in **/tmp/test** with the names **file0.txt** through **file9.txt**.
+
+```
+# ll
+total 1320
+-rw-r--r-- 1 root root 131402 Oct 17 15:50 file0.txt
+-rw-r--r-- 1 root root 131402 Oct 17 15:50 file1.txt
+-rw-r--r-- 1 root root 131402 Oct 17 15:50 file2.txt
+-rw-r--r-- 1 root root 131402 Oct 17 15:50 file3.txt
+-rw-r--r-- 1 root root 131402 Oct 17 15:50 file4.txt
+-rw-r--r-- 1 root root 131402 Oct 17 15:50 file5.txt
+-rw-r--r-- 1 root root 131402 Oct 17 15:50 file6.txt
+-rw-r--r-- 1 root root 131402 Oct 17 15:50 file7.txt
+-rw-r--r-- 1 root root 131402 Oct 17 15:50 file8.txt
+-rw-r--r-- 1 root root 131402 Oct 17 15:50 file9.txt
+```
+
+We have generated data streams using the `dmesg` command, which was redirected to a series of files. Most of the core utilities use STDIO as their output stream and those that generate data streams, rather than acting to transform the data stream in some way, can be used to create the data streams that we will use for our experiments. Data streams can be as short as one line or even a single character, and as long as needed.
+
+### Exploring the hard drive
+
+It is now time to do a little exploring. In this experiment, we will look at some of the filesystem structures.
+
+Let’s start with something simple. You should be at least somewhat familiar with the `dd` command. Officially known as “disk dump,” many sysadmins call it “disk destroyer” for good reason. Many of us have inadvertently destroyed the contents of an entire hard drive or partition using the `dd` command. That is why we will hang out in the **/tmp/test** directory to perform some of these experiments.
+
+Despite its reputation, `dd` can be quite useful in exploring various types of storage media, hard drives, and partitions. We will also use it as a tool to explore other aspects of Linux.
+
+Log into a terminal session as root if you are not already. We first need to determine the device special file for your hard drive using the `lsblk` command.
+
+```
+[root@studentvm1 test]# lsblk -i
+NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
+sda                                    8:0    0   60G  0 disk
+|-sda1                                 8:1    0    1G  0 part /boot
+`-sda2                                 8:2    0   59G  0 part
+  |-fedora_studentvm1-pool00_tmeta   253:0    0    4M  0 lvm
+  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm
+  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  /
+  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm
+  |-fedora_studentvm1-pool00_tdata   253:1    0    2G  0 lvm
+  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm
+  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  /
+  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm
+  |-fedora_studentvm1-swap           253:4    0   10G  0 lvm  [SWAP]
+  |-fedora_studentvm1-usr            253:5    0   15G  0 lvm  /usr
+  |-fedora_studentvm1-home           253:7    0    2G  0 lvm  /home
+  |-fedora_studentvm1-var            253:8    0   10G  0 lvm  /var
+  `-fedora_studentvm1-tmp            253:9    0    5G  0 lvm  /tmp
+sr0                                   11:0    1 1024M  0 rom
+```
+
+We can see from this that there is only one hard drive on this host, that the device special file associated with it is **/dev/sda**, and that it has two partitions. The **/dev/sda1** partition is the boot partition, and the **/dev/sda2** partition contains a volume group on which the rest of the host’s logical volumes have been created.
+
+As root in the terminal session, use the `dd` command to view the boot record of the hard drive, assuming it is assigned to the **/dev/sda** device. The `bs=` argument is not what you might think; it simply specifies the block size, and the `count=` argument specifies the number of blocks to dump to STDIO. The `if=` argument specifies the source of the data stream, in this case, the **/dev/sda** device. Notice that we are not looking at the first block of the partition, we are looking at the very first block of the hard drive.
+
+```
+[root@studentvm1 test]# dd if=/dev/sda bs=512 count=1
+�c�#�м���؎���|�#�#���!#��8#u
+                            ��#���u��#�#�#�|���t#�L#�#�|���#�����€t��pt#���y|1��؎м ��d|<�t#��R�|1��D#@�D��D#�##f�#\|f�f�#`|f�\
+                                      �D#p�B�#r�p�#�K`#�#��1��������#a`���#f��u#����f1�f�TCPAf�#f�#a�&Z|�#}�#�.}�4�3}�.�#��GRUB GeomHard DiskRead Error
+�#��#�
+```
+
+### Redirection
+
+The greater-than (`>`) character, aka “gt”, is the syntactical symbol for redirection of STDOUT.
+
+Redirecting the STDOUT of a command can be used to create a file containing the results from that command.
+
+```
+[student@studentvm1 ~]$ df -h > diskusage.txt
+```
+
+There is no output to the terminal from this command unless there is an error. This is because the STDOUT data stream is redirected to the file and STDERR is still directed to the STDOUT device, which is the display.
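+You can see the split for yourself with a quick, hypothetical session; **/nosuchfile** is simply a name that does not exist, so `ls` writes an error to STDERR while the listing of the real file goes into the redirected file:
+
+```
+[student@studentvm1 ~]$ ls -l diskusage.txt /nosuchfile > listing.txt
+ls: cannot access '/nosuchfile': No such file or directory
+```
+
+Only the error reached the display; the rest of the output landed in **listing.txt**.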
You can view the contents of the file you just created using this next command: + +``` +[student@studentvm1 test]# cat diskusage.txt +Filesystem                          Size  Used Avail Use% Mounted on +devtmpfs                            2.0G     0  2.0G   0% /dev +tmpfs                               2.0G     0  2.0G   0% /dev/shm +tmpfs                               2.0G  1.2M  2.0G   1% /run +tmpfs                               2.0G     0  2.0G   0% /sys/fs/cgroup +/dev/mapper/fedora_studentvm1-root  2.0G   50M  1.8G   3% / +/dev/mapper/fedora_studentvm1-usr    15G  4.5G  9.5G  33% /usr +/dev/mapper/fedora_studentvm1-var   9.8G  1.1G  8.2G  12% /var +/dev/mapper/fedora_studentvm1-tmp   4.9G   21M  4.6G   1% /tmp +/dev/mapper/fedora_studentvm1-home  2.0G  7.2M  1.8G   1% /home +/dev/sda1                           976M  221M  689M  25% /boot +tmpfs                               395M     0  395M   0% /run/user/0 +tmpfs                               395M   12K  395M   1% /run/user/1000 +``` + +When using the > symbol to redirect the data stream, the specified file is created if it does not already exist. If it does exist, the contents are overwritten by the data stream from the command. You can use double greater-than symbols, >>, to append the new data stream to any existing content in the file. + +``` +[student@studentvm1 ~]$ df -h >> diskusage.txt +``` + +You can use `cat` and/or `less` to view the **diskusage.txt** file in order to verify that the new data was appended to the end of the file. + +The < (less than) symbol redirects data to the STDIN of the program. You might want to use this method to input data from a file to STDIN of a command that does not take a filename as an argument but that does use STDIN. Although input sources can be redirected to STDIN, such as a file that is used as input to grep, it is generally not necessary as grep also takes a filename as an argument to specify the input source. Most other commands also take a filename as an argument for their input source. + +### Just grep’ing around + +The `grep` command is used to select lines that match a specified pattern from a stream of data. `grep` is one of the most commonly used transformer utilities and can be used in some very creative and interesting ways. The `grep` command is one of the few that can correctly be called a filter because it does filter out all the lines of the data stream that you do not want; it leaves only the lines that you do want in the remaining data stream. + +If the PWD is not the **/tmp/test** directory, make it so. Let’s first create a stream of random data to store in a file. In this case, we want somewhat less random data that would be limited to printable characters. A good password generator program can do this. The following program (you may have to install `pwgen` if it is not already) creates a file that contains 50,000 passwords that are 80 characters long using every printable character. Try it without redirecting to the **random.txt** file first to see what that looks like, and then do it once redirecting the output data stream to the file. + +``` +$ pwgen -sy 80 50000 > random.txt +``` + +Considering that there are so many passwords, it is very likely that some character strings in them are the same. First, `cat` the **random.txt** file, then use the `grep` command to locate some short, randomly selected strings from the last ten passwords on the screen. 
I saw the word “see” in one of those ten passwords, so my command looked like this: `grep see random.txt`, and you can try that, but you should also pick some strings of your own to check. Short strings of two to four characters work best. + +``` +$ grep see random.txt +        R=p)'s/~0}wr~2(OqaL.S7DNyxlmO69`"12u]h@rp[D2%3}1b87+>Vk,;4a0hX]d7see;1%9|wMp6Yl. +        bSM_mt_hPy|YZ1NU@[;zV2-see)>(BSK~n5mmb9~h)yx{a&$_e +        cjR1QWZwEgl48[3i-(^x9D=v)seeYT2R#M:>wDh?Tn$]HZU7}j!7bIiIr^cI.DI)W0D"'vZU@.Kxd1E1 +        z=tXcjVv^G\nW`,y=bED]d|7%s6iYT^a^Bvsee:v\UmWT02|P|nq%A*;+Ng[$S%*s)-ls"dUfo|0P5+n +``` + +### Summary + +It is the use of pipes and redirection that allows many of the amazing and powerful tasks that can be performed with data streams on the Linux command line. It is pipes that transport STDIO data streams from one program or file to another. The ability to pipe streams of data through one or more transformer programs supports powerful and flexible manipulation of data in those streams. + +Each of the programs in the pipelines demonstrated in the experiments is small, and each does one thing well. They are also transformers; that is, they take Standard Input, process it in some way, and then send the result to Standard Output. Implementation of these programs as transformers to send processed data streams from their own Standard Output to the Standard Input of the other programs is complementary to, and necessary for, the implementation of pipes as a Linux tool. + +STDIO is nothing more than streams of data. This data can be almost anything from the output of a command to list the files in a directory, or an unending stream of data from a special device like **/dev/urandom** , or even a stream that contains all of the raw data from a hard drive or a partition. + +Any device on a Linux computer can be treated like a data stream. You can use ordinary tools like `dd` and `cat` to dump data from a device into a STDIO data stream that can be processed using other ordinary Linux tools. 
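+As a final sketch that ties these ideas together, here the boot record experiment from earlier is revisited as a pipeline. This assumes your drive is **/dev/sda**, as in the `lsblk` output above; `strings` keeps only the printable text from the binary stream, and `status=none` merely suppresses the `dd` statistics:
+
+```
+[root@studentvm1 test]# dd if=/dev/sda bs=512 count=1 status=none | strings | head
+```
+
+The device is the source of the data stream, and two ordinary transformers reduce it to something readable instead of the screenful of binary noise we saw before.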
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/linux-data-streams
+
+作者:[David Both][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dboth
+[b]: https://github.com/lujun9972
+[1]: https://www.apress.com/us/book/9781484237298
+[2]: https://www.gnu.org/software/coreutils/coreutils.html
+[3]: https://www.princeton.edu/~hos/mike/transcripts/mcilroy.htm
diff --git a/translated/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md b/translated/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md
new file mode 100644
index 0000000000..c236b5fef4
--- /dev/null
+++ b/translated/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md
@@ -0,0 +1,92 @@
+DevOps 应聘者应该准备回答的 20 个问题
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hire-job-career.png?itok=SrZo0QJ3)
+
+聘请一个不合适的人代价是很高的。根据 Link 人力资源的首席执行官 Jörgen Sundberg 的统计,招聘并雇佣一名新员工会花费公司多达 240,000 美元。而当你进行了一次不合适的招聘:
+ * 你失去了他们所知道的。
+ * 你失去了他们认识的人。
+ * 你的团队将可能进入一个组织发展的震荡阶段。
+ * 你的公司将会面临组织破裂的风险。
+
+当你失去一名员工的时候,你就像丢失了公司图谱中的一块。同样值得一提的是处于另一端的痛苦:被安排到一个不合适岗位的员工会感受到很大的压力以及身心的不满,甚至出现健康问题。
+另外一方面,当你招聘到合适的人时,新的员工将会:
+ * 丰富公司现有的文化,使你的组织成为一个更好的工作场所。研究表明,积极的工作文化有助于带来更长久的良好财务业绩,而且如果你在一个欢快的环境中工作,你更有可能在生活中做得更好。
+ * 热爱和你的组织在一起工作。当人们热爱他们所做的事情,他们会趋向于做得更好。
+
+招聘能融入现有文化或者能加强现有文化的人,在 DevOps 和敏捷团队中是必不可少的。也就是说,要雇佣能够鼓励积极合作的人,以便来自不同背景、有着不同目标和工作方式的团队成员能够在一起有效地工作。你新雇佣的员工应该能够帮助团队通力合作,充分放大彼此的价值,同时也能够提升员工的满意度,平衡组织目标的冲突。他或者她应该能够通过明智地选择工具和工作流来促进你的组织。文化就是一切。
+
+作为我们 2017 年 11 月发布的《[DevOps 的招聘经理应该准备回答的 20 个问题][4]》一文的姊妹篇,这篇文章将会重点关注如何招聘最适合的人。
+### 为什么招聘走错了方向
+很多公司现在在用的典型的雇佣策略是建立在人才过剩的假设之上的:
+
+ * 在职位公告栏发布招聘信息。
+ * 关注具备所需技能的应聘者。
+ * 尽可能多地寻找候选者。
+ * 通过电话面试淘汰较弱的人。
+ * 通过正式面试淘汰更多较弱的人。
+ * 评估、投票、选择。
+ * 商定薪酬。
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/hiring_graphic.png?itok=1udGbkhB)
+
+职位公告栏是在有着成千上万失业者、人才过剩的经济大萧条时期发明的。今天的求职市场上已经不存在人才过剩了,然而我们仍然在使用基于人才过剩的招聘策略。
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/732px-unemployed_men_queued_outside_a_depression_soup_kitchen_opened_in_chicago_by_al_capone_02-1931_-_nara_-_541927.jpg?itok=HSs4NjCN)
+
+### 雇佣最合适的人员:运用文化和情感
+人才过剩雇佣策略背后的思想是先设计好工作岗位,然后将人员安排进去。
+你应该反其道而行之:寻找将会积极融入你的商业文化的人才,然后为他们寻找他们热爱的最合适的岗位。要想实现这一点,你必须能够围绕他们的热情为他们创造工作岗位。
+**谁正在寻找一份工作?** 一份 2016 年针对美国 50,000 名开发者的调查显示,[85.7% 的受访对象][5]要么对新的机会不感兴趣,要么对寻找新工作没有积极性。而在寻找工作的人中,有将近 [28.3% 的求职者][5]来自于朋友的推荐。如果你只是在那些正在找工作的人中寻找人才,你将会错过顶尖的人才。
+**运用团队的力量去发现和寻找有潜力的雇员。** 例如,戴安娜是你团队中的一名开发者,她很可能已经从事编程很多年,并且在此期间结识了很多热爱自己所做工作的人。难道你不认为她所推荐的潜在员工在技能、知识和智慧上会比 HR 找来的更优秀吗?在要求戴安娜分享她的同伴之前,告知她即将到来的使命任务,向她阐明你要雇佣有探索精神的团队,并描述将来会需要哪些领域的知识。
+**雇员想要什么?** 一份对比千禧一代和婴儿潮一代的综合性研究显示,他们之中有 20% 的人想要的东西是相同的:
+ 1. 对组织产生积极的影响
+ 2. 帮助解决社会或者环境上的挑战
+ 3. 和一群有动力的人一起工作
+
+### 面试的挑战
+面试应该是招聘者和应聘者为了寻找最合适的人选而进行的一次双方之间的对话。将面试聚焦在企业文化和情感这两个问题上:这个应聘者将会丰富你的企业文化并且热爱和你在一起工作吗?你能够帮助他们在工作中取得成功吗?
+**对于招聘经理来说:** 每一次面试都是一个机会,让你学习如何使自己的组织对未来的团队成员更有吸引力,并且每次积极的面试都可能为你赢得发现人才(即使你不会雇佣)的机会。每个人都会记得积极有效的面试经历。即使他们不会被雇佣,他们也会和他们的朋友谈论这次经历,你就会得到一个被推荐的机会。这有很大的好处:即使你无法吸引到这个人才,你也将会从中吸取经验并且做出改善。
+**对面试者来说:** 每次面试都是你释放激情的机会。
+
+### 助你释放潜在雇员激情的 20 个问题
+ 1. 你热爱什么?
+ 2. “今天早晨我已经迫不及待地要去工作。”你怎么看待这句话?
+ 3. 你曾经最快乐的是什么时候?
+ 4. 你曾经解决问题的最典型的例子是什么?你是如何解决的?
+ 5. 你如何看待结对学习?
+ 6. 你到达办公室和离开办公室时心里最先想到的是什么?
+ 7. 如果你有一次机会改变你之前或者现在工作中的一件事,那会是什么事?
+ 8. 当你在这工作的时候,你最兴奋去学习什么? + 9. 你的梦想是什么,你如何去实现? + 10. 你在学会如何去实现你的追求的时候想要或者需要什么? + 11. 你的价值观是什么? + 12. 你是如何坚守自己的价值观的? + 13. 平衡在你的生活中意味着什么? + 14. 你最引以为傲的工作交流能力是什么?为什么? + 15. 你最喜欢营造什么样的环境? + 16. 你喜欢别人怎样对待你? + 17. 你信任我们什么,如何验证? + 18. 告诉我们你在最近的一个项目中学习到什么? + 19. 我们还能知道你的其他方面的什么? + 20. 如果你正在雇佣我,你将会问我什么问题? + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/3/questions-devops-employees-should-answer + +作者:[Catherine Louis][a] +译者:[FelixYFZ](https://github.com/FelixYFZ) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/catherinelouis +[1]:https://www.shrm.org/resourcesandtools/hr-topics/employee-relations/pages/cost-of-bad-hires.aspx +[2]:https://en.wikipedia.org/wiki/Tuckman%27s_stages_of_group_development +[3]:http://www.forbes.com/sites/johnkotter/2011/02/10/does-corporate-culture-drive-financial-performance/ +[4]:https://opensource.com/article/17/11/inclusive-workforce-takes-work +[5]:https://insights.stackoverflow.com/survey/2016#work-job-discovery +[6]:https://research.hackerrank.com/developer-skills/2018/ +[7]:http://www-935.ibm.com/services/us/gbs/thoughtleadership/millennialworkplace/ +[8]:https://en.wikipedia.org/wiki/Emotional_intelligence diff --git a/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md b/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md new file mode 100644 index 0000000000..80700d6fb9 --- /dev/null +++ b/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md @@ -0,0 +1,68 @@ +什么是 SRE?它和 DevOps 是怎么关联的? +===== + +大型企业里 SRE 角色比较常见,不过小公司也需要 SRE。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP) + +虽然站点可靠性工程师(SRE)角色在近几年变得流行起来,但是很多人 —— 甚至是软件行业里的 —— 还不知道 SRE 是什么或者 SRE 都干些什么。为了搞清楚这些问题,这篇文章解释了 SRE 的含义,还有 SRE 怎样关联 DevOps,以及在工程师团队规模不大的组织里 SRE 该如何工作。 + +### 什么是站点可靠性工程? 
+ +谷歌的几个工程师写的《 [SRE:谷歌运维解密][1]》被认为是站点可靠性工程的权威书籍。谷歌的工程副总裁 Ben Treynor Sloss 在二十一世纪初[创造了这个术语][2]。他是这样定义的:“当你让软件工程师设计运维功能时,SRE 就产生了。” + +虽然系统管理员从很久之前就在写代码,但是过去的很多时候系统管理团队是手动管理机器的。当时他们管理的机器可能有几十台或者上百台,不过当这个数字涨到了几千甚至几十万的时候,就不能简单的靠人去解决问题了。规模如此大的情况下,很明显应该用代码去管理机器(以及机器上运行的软件)。 + +另外,一直到近几年,运维团队和开发团队都还是完全独立的。两个岗位的技能要求也被认为是完全不同的。SRE 的角色想尝试把这两份工作结合起来。 + +在深入探讨什么是 SRE 以及 SRE 如何和开发团队协作之前,我们需要先了解一下 SRE 在 DevOps 范例中是怎么工作的。 + +### SRE 和 DevOps + +站点可靠性工程的核心,就是对 DevOps 范例的实践。[DevOps 的定义][3]有很多种方式。开发团队(“devs”)和运维(“ops”)团队相互分离的传统模式下,写代码的团队在服务交付给用户使用之后就不再对服务状态负责了。开发团队“把代码扔到墙那边”让运维团队去部署和支持。 + +这种情况会导致大量失衡。开发和运维的目标总是不一致 —— 开发希望用户体验到“最新最棒”的代码,但是运维想要的是变更尽量少的稳定系统。运维是这样假定的,任何变更都可能引发不稳定,而不做任何变更的系统可以一直保持稳定。(减少软件的变更次数并不是避免故障的唯一因素,认识到这一点很重要。例如,虽然你的 web 应用保持不变,但是当用户数量涨到十倍时,服务可能就会以各种方式出问题。) + +DevOps 理念认为通过合并这两个岗位就能够消灭争论。如果开发团队时刻都想把新代码部署上线,那么他们也必须对新代码引起的故障负责。就像亚马逊的 [Werner Vogels 说的][4]那样,“谁开发,谁运维”(生产环境)。但是开发人员已经有一大堆问题了。他们不断的被推动着去开发老板要的产品功能。再让他们去了解基础设施,包括如何部署、配置还有监控服务,这对他们的要求有点太多了。所以就需要 SRE 了。 + +开发一个 web 应用的时候经常是很多人一起参与。有用户界面设计师,图形设计师,前端工程师,后端工程师,还有许多其他工种(视技术选型的具体情况而定)。如何管理写好的代码也是需求之一(例如部署,配置,监控)—— 这是 SRE 的专业领域。但是,就像前端工程师受益于后端领域的知识一样(例如从数据库获取数据的方法),SRE 理解部署系统的工作原理,知道如何满足特定的代码或者项目的具体需求。 + +所以 SRE 不仅仅是“写代码的运维工程师”。相反,SRE 是开发团队的成员,他们有着不同的技能,特别是在发布部署、配置管理、监控、指标等方面。但是,就像前端工程师必须知道如何从数据库中获取数据一样,SRE 也不是只负责这些领域。为了提供更容易升级、管理和监控的产品,整个团队共同努力。 + +当一个团队在做 DevOps 实践,但是他们意识到对开发的要求太多了,过去由运维团队做的事情,现在需要一个专家来专门处理。这个时候,对 SRE 的需求很自然地就出现了。 + +### SRE 在初创公司怎么工作 + +如果你们公司有好几百位员工,那是非常好的(如果到了 Google 和 Facebook 的规模就更不用说了)。大公司的 SRE 团队分散在各个开发团队里。但是一个初创公司没有这种规模经济,工程师经常身兼数职。那么小公司该让谁做 SRE 呢?其中一种方案是完全践行 DevOps,那些大公司里属于 SRE 的典型任务,在小公司就让开发者去负责。另一种方案,则是聘请专家 —— 也就是 SRE。 + +让开发人员做 SRE 最显著的优点是,团队规模变大的时候也能很好的扩展。而且,开发人员将会全面地了解应用的特性。但是,许多初创公司的基础设施包含了各种各样的 SaaS 产品,这种多样性在基础设施上体现的最明显,因为连基础设施本身也是多种多样。然后你们在某个基础设施上引入指标系统、站点监控、日志分析、容器等等。这些技术解决了一部分问题,也增加了复杂度。开发人员除了要了解应用程序的核心技术(比如开发语言),还要了解上述所有技术和服务。最终,掌握所有的这些技术让人无法承受。 + +另一种方案是聘请专家专职做 SRE。他们专注于发布部署、配置管理、监控和指标,可以节省开发人员的时间。这种方案的缺点是,SRE 的时间必须分配给多个不同的应用(就是说 SRE 需要贯穿整个工程部门)。 这可能意味着 SRE 没时间对任何应用深入学习,然而他们可以站在一个能看到服务全貌的高度,知道各个部分是怎么组合在一起的。 这个“ 三万英尺高的视角”可以帮助 SRE 从系统整体上考虑,哪些薄弱环节需要优先修复。 + +有一个关键信息我还没提到:其他的工程师。他们可能很渴望了解发布部署的原理,也很想尽全力学会使用指标系统。而且,雇一个 SRE 可不是一件简单的事儿。因为你要找的是一个既懂系统管理又懂软件工程的人。(我之所以明确地说软件工程而不是说“能写代码”,是因为除了写代码之外软件工程还包括很多东西,比如编写良好的测试或文档。) + +因此,在某些情况下让开发人员做 SRE 可能更合理一些。如果这样做了,得同时关注代码和基础设施(购买 SaaS 或内部自建)的复杂程度。这两边的复杂性,有时候能促进专业化。 + +### 总结 + +在初创公司做 DevOps 实践最有效的方式是组建 SRE 小组。我见过一些不同的方案,但是我相信初创公司(尽早)招聘专职 SRE 可以解放开发人员,让开发人员专注于特定的挑战。SRE 可以把精力放在改善工具(流程)上,以提高开发人员的生产力。不仅如此,SRE 还专注于确保交付给客户的产品是可靠且安全的。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/sre-startup + +作者:[Craig Sebenik][a] +选题:[lujun9972][b] +译者:[BeliteX](https://github.com/belitex) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/craig5 +[b]: https://github.com/lujun9972 +[1]: http://shop.oreilly.com/product/0636920041528.do +[2]: https://landing.google.com/sre/interview/ben-treynor.html +[3]: https://opensource.com/resources/devops +[4]: https://queue.acm.org/detail.cfm?id=1142065 +[5]: https://www.usenix.org/conference/lisa18/presentation/sebenik +[6]: https://www.usenix.org/conference/lisa18 diff --git a/translated/tech/20180820 How To Disable Ads In Terminal Welcome Message In Ubuntu Server.md b/translated/tech/20180820 How To Disable Ads In Terminal Welcome Message In Ubuntu Server.md new file mode 100644 index 0000000000..48a556d29a --- /dev/null +++ 
b/translated/tech/20180820 How To Disable Ads In Terminal Welcome Message In Ubuntu Server.md
@@ -0,0 +1,110 @@
+如何在 Ubuntu 服务器中禁用终端欢迎消息中的广告
+======
+
+如果你正在使用最新的 Ubuntu 服务器版本,你可能已经注意到欢迎消息中有一些与 Ubuntu 服务器平台无关的促销链接。你可能已经知道 **MOTD**,即 **M**essage **O**f **T**he **D**ay(每日消息)的首字母缩写,在 Linux 系统每次登录时都会显示欢迎信息。通常,欢迎消息包含操作系统版本、基本的系统信息、官方文档链接以及有关最新安全更新等的链接。这些是我们每次通过 SSH 或本地登录时通常会看到的内容。但是,最近在终端欢迎消息中出现了一些其他链接。我已经几次注意到这些链接,但我并不在意,也从未点击过。以下是我的 Ubuntu 18.04 LTS 服务器上显示的终端欢迎消息。
+
+![](http://www.ostechnix.com/wp-content/uploads/2018/08/Ubuntu-Terminal-welcome-message.png)
+
+正如你在上面截图中所看到的,欢迎消息中有一个 bit.ly 链接和一个 Ubuntu wiki 链接。有些人可能会感到惊讶,并想知道这是什么。其实欢迎信息中的链接无需担心。它可能看起来像广告,但并不是商业广告。链接实际上指向的是 [**Ubuntu 官方博客**][1] 和 [**Ubuntu wiki**][2]。正如我之前所说,其中的一个链接是不相关的,没有任何与 Ubuntu 服务器相关的细节,这就是为什么我在开头称它们为广告。
+
+虽然我们大多数人都不会访问 bit.ly 链接,但是有些人可能出于好奇去访问这些链接,结果失望地发现它只是指向一个外部链接。你可以使用任何短网址还原服务,例如 unshorten.it,在访问真正的链接之前,查看它会指向哪里。或者,你只需在 bit.ly 链接的末尾输入加号(**+**)即可查看它们的实际位置以及有关该链接的一些统计信息。
+
+![](http://www.ostechnix.com/wp-content/uploads/2018/08/shortlink.png)
+
+### 什么是 MOTD 以及它是如何工作的?
+
+2009 年,来自 Canonical 的 **Dustin Kirkland** 在 Ubuntu 中引入了 MOTD 的概念。它是一个灵活的框架,使管理员或发行版打包者能够在 /etc/update-motd.d/* 位置添加可执行脚本,以便在登录时生成有益的、有趣的消息。它最初是为 Landscape(Canonical 的商业服务)实现的,但是其它发行版维护者发现它很有用,并且在他们自己的发行版中也采用了这个特性。
+
+如果你查看 Ubuntu 系统中的 **/etc/update-motd.d/**,你会看到一组脚本。一个是打印通用的 “Welcome” 横幅。下一个打印 3 个链接,显示在哪里可以找到操作系统的帮助。另一个计算并显示本地系统可以更新的软件包数量。还有一个脚本告诉你是否需要重新启动,等等。
+
+从 Ubuntu 17.04 起,开发人员添加了 **/etc/update-motd.d/50-motd-news**,这个脚本用来在欢迎消息中包含一些附加信息。这些附加信息是:
+
+ 1. 重要的关键信息,例如 ShellShock、Heartbleed 等
+
+ 2. 生命周期(EOL)消息、新功能可用性等
+
+ 3. 在 Ubuntu 官方博客和其他有关 Ubuntu 的新闻中发布的一些有趣且有益的帖子
+
+另一个特点是它是异步执行的:启动后约 60 秒,systemd 计时器会运行 “/etc/update-motd.d/50-motd-news --force” 脚本。它会用到 /etc/default/motd-news 文件中定义的 3 个配置变量,默认值为:ENABLED=1、URLS="https://motd.ubuntu.com"、WAIT=5。
+
+以下是 /etc/default/motd-news 文件的内容:
+
+```
+$ cat /etc/default/motd-news
+# Enable/disable the dynamic MOTD news service
+# This is a useful way to provide dynamic, informative
+# information pertinent to the users and administrators
+# of the local system
+ENABLED=1
+
+# Configure the source of dynamic MOTD news
+# White space separated list of 0 to many news services
+# For security reasons, these must be https
+# and have a valid certificate
+# Canonical runs a service at motd.ubuntu.com, and you
+# can easily run one too
+URLS="https://motd.ubuntu.com"
+
+# Specify the time in seconds, you're willing to wait for
+# dynamic MOTD news
+# Note that news messages are fetched in the background by
+# a systemd timer, so this should never block boot or login
+WAIT=5
+
+```
+
+好消息是,MOTD 是完全可定制的,所以你可以彻底禁用它(ENABLED=0)、根据你的意愿更改或添加脚本,并以秒为单位更改等待时间。
+
+如果启用了 MOTD,那么这个 systemd 计时器作业将循环遍历每个 URL,将它们的内容缩减到每行 80 个字符、最多 10 行,并将它们连接到 /var/cache/motd-news 中的一个缓存文件。这个 systemd 计时器作业将每隔 12 小时运行并更新 /var/cache/motd-news。用户登录后,/var/cache/motd-news 的内容会打印到屏幕上。这就是 MOTD 的工作原理。
+
+此外,**/etc/update-motd.d/50-motd-news** 文件中包含自定义的用户代理字符串,用来报告有关计算机的信息。如果你查看 **/etc/update-motd.d/50-motd-news** 文件,你会看到:
+
+```
+# Piece together the user agent
+USER_AGENT="curl/$curl_ver $lsb $platform $cpu $uptime"
+```
+
+这意味着,MOTD 检索程序将向 Canonical 报告你的**操作系统版本**、**硬件平台**、**CPU 类型**和**正常运行时间**。
+
+到这里,希望你对 MOTD 有了一个基本的了解。
+
+现在让我们回到正题。我不想要这个功能。我该如何禁用它?如果欢迎消息中的促销链接仍然困扰着你,并且你想永久禁用它们,则可以通过以下方法快速禁用它。
+
+### 在 Ubuntu 服务器中禁用终端欢迎消息中的广告
+
+要禁用这些广告,编辑文件:
+
+```
+$ sudo vi /etc/default/motd-news
+```
+
+找到以下行并将其值设置为 0(零)。
+
+```
+[...]
+ENABLED=0
+[...]
+```
+
+保存并关闭文件。现在,重新启动系统,看看欢迎消息中是否仍然显示来自 Ubuntu 博客的链接。
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/08/Ubuntu-Terminal-welcome-message-1.png)
+
+看到没?现在没有来自 Ubuntu 博客和 Ubuntu wiki 的链接了。
+
+这就是全部内容了。希望这对你有所帮助。更多好东西要来了,敬请关注!
+
+顺祝时祺!
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-disable-ads-in-terminal-welcome-message-in-ubuntu-server/
+
+作者:[SK][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://blog.ubuntu.com/
+[2]:https://wiki.ubuntu.com/
diff --git a/translated/tech/20180907 6 open source tools for writing a book.md b/translated/tech/20180907 6 open source tools for writing a book.md
new file mode 100644
index 0000000000..ef1edd8cff
--- /dev/null
+++ b/translated/tech/20180907 6 open source tools for writing a book.md
@@ -0,0 +1,67 @@
+6 个用于写书的开源工具
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-austen-writing-code.png?itok=XPxRMtQ4)
+
+我从 1993 年开始使用并贡献自由开源软件,从那时起我一直是一名开源软件的开发人员和布道者。尽管我最为人所知的项目是 [FreeDOS 项目][1](一个 DOS 操作系统的开源实现),但我编写过或贡献过数十个开源软件项目。
+
+我最近写了一本关于 FreeDOS 的书。《[使用 FreeDOS][2]》是为庆祝 FreeDOS 诞生 24 周年而写的。它是一部文集,内容包括安装和使用 FreeDOS、介绍我最喜欢的 DOS 程序的文章,以及 DOS 命令行和 DOS 批处理编程的快速参考指南。在一位出色的专业编辑的帮助下,我在过去的几个月里一直在编写这本书。
+
+《使用 FreeDOS》以知识共享署名(cc-by)国际公共许可证发布。你可以从 [FreeDOS 电子书][2]网站免费下载 EPUB 和 PDF 版本。(我也计划为那些喜欢纸质书的人提供印刷版本。)
+
+这本书几乎完全是用开源软件制作的。我想分享一下我对用来创建、编辑和生成《使用 FreeDOS》的工具的看法。
+
+### Google 文档
+
+[Google 文档][3]是我使用的工具里唯一不是开源软件的。我将我的第一份草稿上传到 Google 文档,这样我就能与编辑进行协作。我确信有开源的协作工具,但 Google 文档能够让两个人同时编辑同一个文档、发表评论、建议修改以及跟踪更改,更不用说它还支持段落样式和下载完成的文档,这使其成为编辑过程中很有价值的一部分。
+
+### LibreOffice
+
+我开始使用的是 [LibreOffice][4] 6.0,但我最终使用 LibreOffice 6.1 完成了这本书。我喜欢 LibreOffice 对样式的丰富支持。段落样式可以轻松地为标题、页眉、正文、示例代码和其他文本应用样式。字符样式允许我修改段落中文本的外观,例如内联的示例代码,或用不同的样式表示文件名。图形样式让我可以将某些样式应用于截图和其他图像。页面样式则允许我轻松修改页面的布局和外观。
+
+### GIMP
+
+我的书中包括很多 DOS 程序截图、网站截图和 FreeDOS 的 logo。我用 [GIMP][5] 修改了这本书中的这些图像。通常只是裁剪或调整图像大小,但在我准备本书的印刷版时,我使用 GIMP 创建了一些更适合打印布局的图像。
+
+### Inkscape
+
+大多数 FreeDOS 的 logo 和小鱼吉祥物都是 SVG 格式,我使用 [Inkscape][6] 来调整它们。在准备电子书的 PDF 版本时,我想在页面顶部放置一个简单的蓝色横幅,角落里有 FreeDOS 的 logo。经过实验后,我发现在 Inkscape 中直接创建一个我想要的横幅 SVG 图像更容易,然后我将其粘贴到了页眉中。
+
+### ImageMagick
+
+虽然使用 GIMP 来完成这项工作也很好,但有时在一组图像上运行 [ImageMagick][7] 命令会更快,例如转换为 PNG 格式或调整图像大小。
+
+### Sigil
+
+LibreOffice 可以直接导出到 EPUB 格式,但它不是个好的转换器。我没有尝试使用 LibreOffice 6.1 创建 EPUB,但 LibreOffice 6.0 没有包含我的图像,它还以奇怪的方式添加了样式。我使用 [Sigil][8] 来调整 EPUB,并使一切看起来正常。Sigil 甚至还有预览功能,因此你可以看到 EPUB 的最终效果。
+
+### QEMU
+
+因为本书是关于安装和运行 FreeDOS 的,所以我需要实际运行 FreeDOS。你可以在任何 PC 模拟器中启动 FreeDOS,包括 VirtualBox、QEMU、GNOME Boxes、PCem 和 Bochs,但我喜欢 [QEMU][9] 的简单。QEMU 控制台允许你以 PPM 格式转储屏幕,这非常适合抓取截图放到书中。
+
+当然,我不得不提到在 [Linux][11] 上运行的 [GNOME][10]。我使用的是 Linux 的 [Fedora][12] 发行版。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/9/writing-book-open-source-tools
+
+作者:[Jim Hall][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jim-hall
+[1]: http://www.freedos.org/
+[2]: http://www.freedos.org/ebook/
+[3]: https://www.google.com/docs/about/
+[4]: https://www.libreoffice.org/
+[5]: 
https://www.gimp.org/ +[6]: https://inkscape.org/ +[7]: https://www.imagemagick.org/ +[8]: https://sigil-ebook.com/ +[9]: https://www.qemu.org/ +[10]: https://www.gnome.org/ +[11]: https://www.kernel.org/ +[12]: https://getfedora.org/ diff --git a/translated/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md b/translated/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md deleted file mode 100644 index fe273fa69e..0000000000 --- a/translated/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md +++ /dev/null @@ -1,307 +0,0 @@ -重启和关闭 Linux 系统的 6 个终端命令 -====== -在 Linux 管理员的日程当中, 有很多需要执行的任务, 系统的重启和关闭就被包含其中. - -对于 Linux 管理员来说, 重启和关闭系统是其诸多风险操作中的一例, 有时候, 由于某些原因, 这些操作可能无法挽回, 他们需要更多的时间来排查问题. - -在 Linux 命令行模式下我们可以执行这些任务. 很多时候, 由于熟悉命令行, Linux 管理员更倾向于在命令行下完成这些任务. - -重启和关闭系统的 Linux 命令并不多, 用户需要根据需要, 选择合适的命令来完成任务. - -以下所有命令都有其自身特点, 并允许被 Linux 管理员使用. - -**建议阅读 :** - -**(#)** [查看系统/服务器正常运行时间的 11 个方法][1] - -**(#)** [Tuptime 一款为 Linux 系统保存历史记录, 统计运行时间工具][2] - -系统重启和关闭之始, 会通知所有已登录的用户和已注册的进程. 当然, 如果会造成冲突, 系统不会允许新的用户登入. - -执行此类操作之前, 我建议您坚持复查, 因为您只能得到很少的提示来确保这一切顺利. - -下面陈列了一些步骤. - - * 确保您拥有一个可以处理故障的终端, 以防之后可能会发生的问题. VMWare 可以访问物理服务器的虚拟机, IPMI, iLO 和 iDRAC. - * 您需要通过公司的流程, 申请修改或故障的执行权直到得到许可. - * 为安全着想, 备份重要的配置文件, 并保存到其他服务器上. - * 验证日志文件(提前检查) - * 和相关团队交流, 比如数据库管理团队, 应用团队等. - * 通知数据库和应用服务人员关闭服务, 并得到确定. - * 使用适当的命令复盘操作, 验证工作. - * 最后, 重启系统 - * 验证日志文件, 如果一切顺利, 执行下一步操作, 如果发现任何问题, 对症排查. - * 无论是回退版本还是运行程序, 通知相关团队提出申请. - * 对操作做适当守候, 并将预期的一切正常的反馈给团队 - -使用下列命令执行这项任务. - - * **`shutdown 命令:`** shutdown 命令用来为中止, 重启或切断电源 - * **`halt 命令:`** halt 命令用来为中止, 重启或切断电源 - * **`poweroff 命令:`** poweroff 命令用来为中止, 重启或切断电源 - * **`reboot 命令:`** reboot 命令用来为中止, 重启或切断电源 - * **`init 命令:`** init(initialization 的简称) 是系统启动的第一个进程. - * **`systemctl 命令:`** systemd 是 Linux 系统和服务器的管理程序. - - -### 方案 - 1: 如何使用 Shutdown 命令关闭和重启 Linux 系统 - -shutdown 命令用户关闭或重启本地和远程的 Linux 设备. 它为高效完成作业提供多个选项. 如果使用了 time 参数, 系统关闭的 5 分钟之前, /run/nologin 文件会被创建, 以确保后续的登录会被拒绝. - -通用语法如下 - -``` -# shutdown [OPTION] [TIME] [MESSAGE] - -``` - -运行下面的命令来立即关闭 Linux 设备. 它会立刻杀死所有进程, 并关闭系统. - -``` -# shutdown -h now - -``` - - * **`-h:`** 如果不特指 -halt 选项, 这等价于 -poweroff 选项. - -另外我们可以使用带有 `poweroff` 选项的 `shutdown` 命令来立即关闭设备. - -``` -# shutdown --halt now -或者 -# shutdown -H now - -``` - - * **`-H, --halt:`** 停止设备运行 - -另外我们可以使用带有 `poweroff` 选项的 `shutdown` 命令来立即关闭设备. - -``` -# shutdown --poweroff now -或者 -# shutdown -P now - -``` - - * **`-P, --poweroff:`** 切断电源 (默认). - -运行以下命令立即关闭 Linux 设备. 它将会立即杀死所有的进程并关闭系统. - -``` -# shutdown -h now - -``` - - * **`-h:`** 如果不特指 -halt 选项, 这等价于 -poweroff 选项. - -如果您没有使用 time 选项运行下面的命令, 它将会在一分钟后执行给出的命令 - -``` -# shutdown -h -Shutdown scheduled for Mon 2018-10-08 06:42:31 EDT, use 'shutdown -c' to cancel. - -[email protected]# -Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT): - -The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT! - -``` - -其他的登录用户都能在中断中看到如下的广播消息: - -``` -[[email protected] ~]$ -Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT): - -The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT! - -``` - -对于使用了 Halt 选项. - -``` -# shutdown -H -Shutdown scheduled for Mon 2018-10-08 06:37:53 EDT, use 'shutdown -c' to cancel. - -[email protected]# -Broadcast message from [email protected] (Mon 2018-10-08 06:36:53 EDT): - -The system is going down for system halt at Mon 2018-10-08 06:37:53 EDT! - -``` - -对于使用了 Poweroff 选项. 
- -``` -# shutdown -P -Shutdown scheduled for Mon 2018-10-08 06:40:07 EDT, use 'shutdown -c' to cancel. - -[email protected]# -Broadcast message from [email protected] (Mon 2018-10-08 06:39:07 EDT): - -The system is going down for power-off at Mon 2018-10-08 06:40:07 EDT! - -``` - -可以在您的终端上敲击 `Shutdown -c` 选项取消操作. - -``` -# shutdown -c - -Broadcast message from [email protected] (Mon 2018-10-08 06:39:09 EDT): - -The system shutdown has been cancelled at Mon 2018-10-08 06:40:09 EDT! - -``` - -其他的登录用户都能在中断中看到如下的广播消息: - -``` -[[email protected] ~]$ -Broadcast message from [email protected] (Mon 2018-10-08 06:41:35 EDT): - -The system shutdown has been cancelled at Mon 2018-10-08 06:42:35 EDT! - -``` - -添加 time 参数, 如果你想在 `N` 秒之后执行关闭或重启操作. 这里, 您可以为所有登录用户添加自定义广播消息. 例如, 我们将在五分钟后重启设备. - -``` -# shutdown -r +5 "To activate the latest Kernel" -Shutdown scheduled for Mon 2018-10-08 07:13:16 EDT, use 'shutdown -c' to cancel. - -[[email protected] ~]# -Broadcast message from [email protected] (Mon 2018-10-08 07:08:16 EDT): - -To activate the latest Kernel -The system is going down for reboot at Mon 2018-10-08 07:13:16 EDT! - -``` - -运行下面的命令立即重启 Linux 设备. 它会立即杀死所有进程并且重新启动系统. - -``` -# shutdown -r now - -``` - - * **`-r, --reboot:`** 重启设备. - -### 方案 - 2: 如何通过 reboot 命令关闭和重启 Linux 系统 - -reboot 命令用于关闭和重启本地或远程设备. Reboot 命令拥有两个实用的选项. - -它能够优雅的关闭和重启设备(就好像在系统菜单中惦记重启选项一样简单). - -执行不带任何参数的 `reboot` 命令来重启 Linux 设备 - -``` -# reboot - -``` - -执行带 `-p` 参数的 `reboot` 命令来关闭 Linux 设备或切断电源 - -``` -# reboot -p - -``` - - * **`-p, --poweroff:`** 调用 halt 或 poweroff 命令, 切断设备电源. - - -执行带 `-f` 参数的 `reboot` 命令来强制重启 Linux 设备(这类似按压 CPU 上的电源键) - -``` -# reboot -f - -``` - - * **`-f, --force:`** 立刻强制中断, 切断电源或重启 - -### 方案 - 3: 如何通过 init 命令关闭和重启 Linux 系统 - -init(initialization 的简写) 是系统启动的第一个进程. - -他将会检查 /etc/inittab 文件并决定 linux 运行级别. 同时, 授权用户在 Linux 设备上执行关机或重启 操作. 这里存在从 0 到 6 的七个运行等级. - -**建议阅读 :** -**(#)** [如何检查 Linux 上所有运行的服务][3] - -执行一下 init 命令关闭系统. -``` -# init 0 - -``` - - * **`0:`** 中断 – 关闭系统. - -运行下面的 init 命令重启设备 -``` -# init 6 - -``` - - * **`6:`** 重启 – 重启设备. - -### 方案 - 4: 如何通过 halt 命令关闭和重启 Linux 系统 - -halt 命令用来切断电源或关闭远程 Linux 设备或本地主机. -中断所有进程并关闭 cpu -``` -# halt - -``` - -### 方案 - 5: 如何通过 poweroff 命令关闭和重启 Linux 系统 - -poweroff 命令用来切断电源或关闭远程 Linux 设备或本地主机. Poweroff 很像 halt, 但是它可以关闭设备自身的单元(等和其他 PC 上的任何事物). 它会为 PSU 发送 ACPI 指令, 切断电源. - -``` -# poweroff - -``` - -### 方案 - 6: 如何通过 systemctl 命令关闭和重启 Linux 系统 - -Systemd 是一款适用于所有主流 Linux 发型版的全新 init 系统和系统管理器, 而不是传统的 SysV init 系统. - -systemd 兼容与 SysV 和 LSB 脚本. 它能够替代 sysvinit 系统. systemd 是内核启动的第一个进程, 并持有序号为 1 的进程 PID. - -**建议阅读 :** -**(#)** [chkservice – 一款终端下系统单元管理工具][4] - -它是一切进程的父进程, Fedora 15 是第一个适配安装 systemd 的发行版. -It’s a parent process for everything and Fedora 15 is the first distribution which was adapted systemd instead of upstart. - -systemctl 是命令行下管理系统, 守护进程, 开启服务(如 start, restart, stop, enable, disable, reload & status)的主要工具. - -systemd 使用 .service 文件而不是 bash 脚本(SysVinit 用户使用的). 
systemd 将所有守护进程归与自身的 Linux cgroups 用户组下, 您可以浏览 /cgroup/systemd 文件查看系统层次等级
-
-```
-# systemctl halt
-# systemctl poweroff
-# systemctl reboot
-# systemctl suspend
-# systemctl hibernate
-
-```
-
--------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/6-commands-to-shutdown-halt-poweroff-reboot-the-linux-system/
-
-作者:[Prakash Subramanian][a]
-选题:[lujun9972][b]
-译者:[cyleft](https://github.com/cyleft)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/prakash/
-[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/
-[2]: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/
-[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/
-[4]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/
diff --git a/translated/tech/20181016 Lab 4- Preemptive Multitasking.md b/translated/tech/20181016 Lab 4- Preemptive Multitasking.md
new file mode 100644
index 0000000000..9302b7288a
--- /dev/null
+++ b/translated/tech/20181016 Lab 4- Preemptive Multitasking.md
@@ -0,0 +1,590 @@
+实验 4:抢占式多任务处理
+====== 
+### 实验 4:抢占式多任务处理

+#### 简介

+在本实验中,你将在多个同时活动的用户模式环境之间实现抢占式多任务处理。

+在 Part A 中,你将在 JOS 中添加对多处理器的支持,实现循环调度,并且添加基本的环境管理方面的系统调用(用于创建和销毁环境、以及分配/映射内存的系统调用)。

+在 Part B 中,你将要实现一个类 Unix 的 `fork()`,它允许一个用户模式环境创建一个它自己的副本。

+最后,在 Part C 中,你将在 JOS 中添加对进程间通讯(IPC)的支持,以允许不同的用户模式环境之间进行显式的通讯和同步。你也将要添加对硬件时钟中断和优先权的支持。

+##### 预备知识

+使用 git 去提交你的实验 3 的源代码,并获取课程仓库的最新版本,然后创建一个名为 `lab4` 的本地分支,它跟踪我们的名为 `origin/lab4` 的远程 `lab4` 分支:

+```markdown
+ athena% cd ~/6.828/lab
+ athena% add git
+ athena% git pull
+ Already up-to-date.
+ athena% git checkout -b lab4 origin/lab4
+ Branch lab4 set up to track remote branch refs/remotes/origin/lab4.
+ Switched to a new branch "lab4"
+ athena% git merge lab3
+ Merge made by recursive.
+ ...
+ athena% +``` + +实验 4 包含了一些新的源文件,在开始之前你应该去浏览一遍: +```markdown +kern/cpu.h Kernel-private definitions for multiprocessor support +kern/mpconfig.c Code to read the multiprocessor configuration +kern/lapic.c Kernel code driving the local APIC unit in each processor +kern/mpentry.S Assembly-language entry code for non-boot CPUs +kern/spinlock.h Kernel-private definitions for spin locks, including the big kernel lock +kern/spinlock.c Kernel code implementing spin locks +kern/sched.c Code skeleton of the scheduler that you are about to implement +``` + +##### 实验要求 + +本实验分为三部分:Part A、Part B、和 Part C。我们计划为每个部分分配一周的时间。 + +和以前一样,你需要完成实验中出现的、所有常规练习和至少一个挑战问题。(不是每个部分做一个挑战问题,是整个实验做一个挑战问题即可。)另外,你还要写出你实现的挑战问题的详细描述。如果你实现了多个挑战问题,你只需写出其中一个即可,虽然我们的课程欢迎你完成越多的挑战越好。在动手实验之前,请将你的挑战问题的答案写在一个名为 `answers-lab4.txt` 的文件中,并把它放在你的 `lab` 目录的根下。 + +#### Part A:多处理器支持和协调多任务处理 + +在本实验的第一部分,将去扩展你的 JOS 内核,以便于它能够在一个多处理器的系统上运行,并且要在 JOS 内核中实现一些新的系统调用,以便于它允许用户级环境创建附加的新环境。你也要去实现协调的循环调度,在当前的环境自愿放弃 CPU(或退出)时,允许内核将一个环境切换到另一个环境。稍后在 Part C 中,你将要实现抢占调度,它允许内核在环境占有 CPU 一段时间后,从这个环境上重新取回对 CPU 的控制,那怕是在那个环境不配合的情况下。 + +##### 多处理器支持 + +我们继续去让 JOS 支持 “对称多处理器”(SMP),在一个多处理器的模型中,所有 CPU 们都有平等访问系统资源(如内存和 I/O 总线)的权利。虽然在 SMP 中所有 CPU 们都有相同的功能,但是在引导进程的过程中,它们被分成两种类型:引导程序处理器(BSP)负责初始化系统和引导操作系统;而在操作系统启动并正常运行后,应用程序处理器(AP)将被 BSP 激活。哪个处理器做 BSP 是由硬件和 BIOS 来决定的。到目前为止,你所有的已存在的 JOS 代码都是运行在 BSP 上的。 + +在一个 SMP 系统上,每个 CPU 都伴有一个本地 APIC(LAPIC)单元。这个 LAPIC 单元负责传递系统中的中断。LAPIC 还为它所连接的 CPU 提供一个唯一的标识符。在本实验中,我们将使用 LAPIC 单元(它在 `kern/lapic.c` 中)中的下列基本功能: + + * 读取 LAPIC 标识符(APIC ID),去告诉那个 CPU 现在我们的代码正在它上面运行(查看 `cpunum()`)。 + * 从 BSP 到 AP 之间发送处理器间中断(IPI) `STARTUP`,以启动其它 CPU(查看 `lapic_startap()`)。 + * 在 Part C 中,我们设置 LAPIC 的内置定时器去触发时钟中断,以便于支持抢占式多任务处理(查看 `apic_init()`)。 + + + +一个处理器使用内存映射的 I/O(MMIO)来访问它的 LAPIC。在 MMIO 中,一部分物理内存是硬编码到一些 I/O 设备的寄存器中,因此,访问内存时一般可以使用相同的 `load/store` 指令去访问设备的寄存器。正如你所看到的,在物理地址 `0xA0000` 处就是一个 IO 入口(就是我们写入 VGA 缓冲区的入口)。LAPIC 就在那里,它从物理地址 `0xFE000000` 处(4GB 减去 32MB 处)开始,这个地址对于我们在 KERNBASE 处使用直接映射访问来说太高了。JOS 虚拟内存映射在 `MMIOBASE` 处,留下一个 4MB 的空隙,以便于我们有一个地方,能像这样去映射设备。由于在后面的实验中,我们将介绍更多的 MMIO 区域,你将要写一个简单的函数,从这个区域中去分配空间,并将设备的内存映射到那里。 + +```markdown +练习 1、实现 `kern/pmap.c` 中的 `mmio_map_region`。去看一下它是如何使用的,从 `kern/lapic.c` 中的 `lapic_init` 开始看起。在 `mmio_map_region` 的测试运行之前,你还要做下一个练习。 +``` + +###### 引导应用程序处理器 + +在引导应用程序处理器之前,引导程序处理器应该会首先去收集关于多处理器系统的信息,比如总的 CPU 数、它们的 APIC ID 以及 LAPIC 单元的 MMIO 地址。在 `kern/mpconfig.c` 中的 `mp_init()` 函数,通过读取内存中位于 BIOS 区域里的 MP 配置表来获得这些信息。 + +`boot_aps()` 函数(在 `kern/init.c` 中)驱动 AP 的引导过程。AP 们在实模式中开始,与 `boot/boot.S` 中启动引导加载程序非常相似。因此,`boot_aps()` 将 AP 入口代码(`kern/mpentry.S`)复制到实模式中的那个可寻址内存地址上。不像使用引导加载程序那样,我们可以控制 AP 将从哪里开始运行代码;我们复制入口代码到 `0x7000`(`MPENTRY_PADDR`)处,但是复制到任何低于 640KB 的、未使用的、页对齐的物理地址上都是可以运行的。 + +在那之后,通过发送 IPI `STARTUP` 到相关 AP 的 LAPIC 单元,以及一个初始的 `CS:IP` 地址(AP 将从那儿开始运行它的入口代码,在我们的案例中是 `MPENTRY_PADDR` ),`boot_aps()` 将一个接一个地激活 AP。在 `kern/mpentry.S` 中的入口代码非常类似于 `boot/boot.S`。在一些简短的设置之后,它启用分页,使 AP 进入保护模式,然后调用 C 设置程序 `mp_main()`(它也在 `kern/init.c` 中)。在继续唤醒下一个 AP 之前, `boot_aps()` 将等待这个 AP 去传递一个 `CPU_STARTED` 标志到它的 `struct CpuInfo` 中的 `cpu_status` 字段中。 + +```markdown +练习 2、阅读 `kern/init.c` 中的 `boot_aps()` 和 `mp_main()`,以及在 `kern/mpentry.S` 中的汇编代码。确保你理解了在 AP 引导过程中的控制流转移。然后修改在 `kern/pmap.c` 中的、你自己的 `page_init()`,实现避免在 `MPENTRY_PADDR` 处添加页到空闲列表上,以便于我们能够在物理地址上安全地复制和运行 AP 引导程序代码。你的代码应该会通过更新后的 `check_page_free_list()` 的测试(但可能会在更新后的 `check_kern_pgdir()` 上测试失败,我们在后面会修复它)。 +``` + +```markdown +问题 + 1、比较 `kern/mpentry.S` 和 `boot/boot.S`。记住,那个 `kern/mpentry.S` 是编译和链接后的,运行在 `KERNBASE` 上面的,就像内核中的其它程序一样,宏 `MPBOOTPHYS` 的作用是什么?为什么它需要在 `kern/mpentry.S` 中,而不是在 
`boot/boot.S` 中?换句话说,如果在 `kern/mpentry.S` 中删掉它,会发生什么错误? +提示:回顾链接地址和加载地址的区别,我们在实验 1 中讨论过它们。 +``` + + +###### 每个 CPU 的状态和初始化 + +当写一个多处理器操作系统时,区分每个 CPU 的状态是非常重要的,而每个 CPU 的状态对其它处理器是不公开的,而全局状态是整个系统共享的。`kern/cpu.h` 定义了大部分每个 CPU 的状态,包括 `struct CpuInfo`,它保存了每个 CPU 的变量。`cpunum()` 总是返回调用它的那个 CPU 的 ID,它可以被用作是数组的索引,比如 `cpus`。或者,宏 `thiscpu` 是当前 CPU 的 `struct CpuInfo` 缩略表示。 + +下面是你应该知道的每个 CPU 的状态: + + * **每个 CPU 的内核栈** +因为内核能够同时捕获多个 CPU,因此,我们需要为每个 CPU 准备一个单独的内核栈,以防止它们运行的程序之间产生相互干扰。数组 `percpu_kstacks[NCPU][KSTKSIZE]` 为 NCPU 的内核栈资产保留了空间。 + +在实验 2 中,你映射的 `bootstack` 所引用的物理内存,就作为 `KSTACKTOP` 以下的 BSP 的内核栈。同样,在本实验中,你将每个 CPU 的内核栈映射到这个区域,而使用保护页做为它们之间的缓冲区。CPU 0 的栈将从 `KSTACKTOP` 处向下增长;CPU 1 的栈将从 CPU 0 的栈底部的 `KSTKGAP` 字节处开始,依次类推。在 `inc/memlayout.h` 中展示了这个映射布局。 + + * **每个 CPU 的 TSS 和 TSS 描述符** +为了指定每个 CPU 的内核栈在哪里,也需要有一个每个 CPU 的任务状态描述符(TSS)。CPU _i_ 的任务状态描述符是保存在 `cpus[i].cpu_ts` 中,而对应的 TSS 描述符是定义在 GDT 条目 `gdt[(GD_TSS0 >> 3) + i]` 中。在 `kern/trap.c` 中定义的全局变量 `ts` 将不再被使用。 + + * **每个 CPU 当前的环境指针** +由于每个 CPU 都能同时运行不同的用户进程,所以我们重新定义了符号 `curenv`,让它指向到 `cpus[cpunum()].cpu_env`(或 `thiscpu->cpu_env`),它指向到当前 CPU(代码正在运行的那个 CPU)上当前正在运行的环境上。 + + * **每个 CPU 的系统寄存器** +所有的寄存器,包括系统寄存器,都是一个 CPU 私有的。所以,初始化这些寄存器的指令,比如 `lcr3()`、`ltr()`、`lgdt()`、`lidt()`、等待,必须在每个 CPU 上运行一次。函数 `env_init_percpu()` 和 `trap_init_percpu()` 就是为此目的而定义的。 + + + +```markdown +练习 3、修改 `mem_init_mp()`(在 `kern/pmap.c` 中)去映射每个 CPU 的栈从 `KSTACKTOP` 处开始,就像在 `inc/memlayout.h` 中展示的那样。每个栈的大小是 `KSTKSIZE` 字节加上未映射的保护页 `KSTKGAP` 的字节。你的代码应该会通过在 `check_kern_pgdir()` 中的新的检查。 +``` + +```markdown +练习 4、在 `trap_init_percpu()`(在 `kern/trap.c` 文件中)的代码为 BSP 初始化 TSS 和 TSS 描述符。在实验 3 中它就运行过,但是当它运行在其它的 CPU 上就会出错。修改这些代码以便它能在所有 CPU 上都正常运行。(注意:你的新代码应该还不能使用全局变量 `ts`) +``` + +在你完成上述练习后,在 QEMU 中使用 4 个 CPU(使用 `make qemu CPUS=4` 或 `make qemu-nox CPUS=4`)来运行 JOS,你应该看到类似下面的输出: + +```c + ... + Physical memory: 66556K available, base = 640K, extended = 65532K + check_page_alloc() succeeded! + check_page() succeeded! + check_kern_pgdir() succeeded! + check_page_installed_pgdir() succeeded! + SMP: CPU 0 found 4 CPU(s) + enabled interrupts: 1 2 + SMP: CPU 1 starting + SMP: CPU 2 starting + SMP: CPU 3 starting +``` + +###### 锁定 + +在 `mp_main()` 中初始化 AP 后我们的代码快速运行起来。在你更进一步增强 AP 之前,我们需要首先去处理多个 CPU 同时运行内核代码的争用状况。达到这一目标的最简单的方法是使用大内核锁。大内核锁是一个单个的全局锁,当一个环境进入内核模式时,它将被加锁,而这个环境返回到用户模式时它将释放锁。在这种模型中,在用户模式中运行的环境可以同时运行在任何可用的 CPU 上,但是只有一个环境能够运行在内核模式中;而任何尝试进入内核模式的其它环境都被强制等待。 + +`kern/spinlock.h` 中声明大内核锁,即 `kernel_lock`。它也提供 `lock_kernel()` 和 `unlock_kernel()`,快捷地去获取/释放锁。你应该在以下的四个位置应用大内核锁: + + * 在 `i386_init()` 时,在 BSP 唤醒其它 CPU 之前获取锁。 + * 在 `mp_main()` 时,在初始化 AP 之后获取锁,然后调用 `sched_yield()` 在这个 AP 上开始运行环境。 + * 在 `trap()` 时,当从用户模式中捕获一个陷阱trap时获取锁。在检查 `tf_cs` 的低位比特,以确定一个陷阱是发生在用户模式还是内核模式时。 + * 在 `env_run()` 中,在切换到用户模式之前释放锁。不能太早也不能太晚,否则你将可能会产生争用或死锁。 + + +```markdown +练习 5、在上面所描述的情况中,通过在合适的位置调用 `lock_kernel()` 和 `unlock_kernel()` 应用大内核锁。 +``` + +如果你的锁定是正确的,如何去测试它?实际上,到目前为止,还无法测试!但是在下一个练习中,你实现了调度之后,就可以测试了。 + +``` +问题 + 2、看上去使用一个大内核锁,可以保证在一个时间中只有一个 CPU 能够运行内核代码。为什么每个 CPU 仍然需要单独的内核栈?描述一下使用一个共享内核栈出现错误的场景,即便是在它使用了大内核锁保护的情况下。 +``` + +``` +小挑战!大内核锁很简单,也易于使用。尽管如此,它消除了内核模式的所有并发。大多数现代操作系统使用不同的锁,一种称之为细粒度锁定的方法,去保护它们的共享的栈的不同部分。细粒度锁能够大幅提升性能,但是实现起来更困难并且易出错。如果你有足够的勇气,在 JOS 中删除大内核锁,去拥抱并发吧! 
+ +由你来决定锁的粒度(一个锁保护的数据量)。给你一个提示,你可以考虑在 JOS 内核中使用一个自旋锁去确保你独占访问这些共享的组件: + + * 页分配器 + * 控制台驱动 + * 调度器 + * 你将在 Part C 中实现的进程间通讯(IPC)的状态 +``` + + +##### 循环调度 + +本实验中,你的下一个任务是去修改 JOS 内核,以使它能够在多个环境之间以“循环”的方式去交替。JOS 中的循环调度工作方式如下: + + * 在新的 `kern/sched.c` 中的 `sched_yield()` 函数负责去选择一个新环境来运行。它按顺序以循环的方式在数组 `envs[]` 中进行搜索,在前一个运行的环境之后开始(或如果之前没有运行的环境,就从数组起点开始),选择状态为 `ENV_RUNNABLE` 的第一个环境(查看 `inc/env.h`),并调用 `env_run()` 去跳转到那个环境。 + * `sched_yield()` 必须做到,同一个时间在两个 CPU 上绝对不能运行相同的环境。它可以判断出一个环境正运行在一些 CPU(可能是当前 CPU)上,因为,那个正在运行的环境的状态将是 `ENV_RUNNING`。 + * 我们已经为你实现了一个新的系统调用 `sys_yield()`,用户环境调用它去调用内核的 `sched_yield()` 函数,并因此将自愿把对 CPU 的控制禅让给另外的一个环境。 + + + +```c +练习 6、像上面描述的那样,在 `sched_yield()` 中实现循环调度。不要忘了去修改 `syscall()` 以派发 `sys_yield()`。 + +确保在 `mp_main` 中调用了 `sched_yield()`。 + +修改 `kern/init.c` 去创建三个(或更多个!)运行程序 `user/yield.c`的环境。 + +运行 `make qemu`。在它终止之前,你应该会看到像下面这样,在环境之间来回切换了五次。 + +也可以使用几个 CPU 来测试:make qemu CPUS=2。 + + ... + Hello, I am environment 00001000. + Hello, I am environment 00001001. + Hello, I am environment 00001002. + Back in environment 00001000, iteration 0. + Back in environment 00001001, iteration 0. + Back in environment 00001002, iteration 0. + Back in environment 00001000, iteration 1. + Back in environment 00001001, iteration 1. + Back in environment 00001002, iteration 1. + ... + +在程序 `yield` 退出之后,系统中将没有可运行的环境,调度器应该会调用 JOS 内核监视器。如果它什么也没有发生,那么你应该在继续之前修复你的代码。 +``` + +```c +问题 + 3、在你实现的 `env_run()` 中,你应该会调用 `lcr3()`。在调用 `lcr3()` 的之前和之后,你的代码引用(至少它应该会)变量 `e`,它是 `env_run` 的参数。在加载 `%cr3` 寄存器时,MMU 使用的地址上下文将马上被改变。但一个虚拟地址(即 `e`)相对一个给定的地址上下文是有意义的 —— 地址上下文指定了物理地址到那个虚拟地址的映射。为什么指针 `e` 在地址切换之前和之后被解除引用? + 4、无论何时,内核从一个环境切换到另一个环境,它必须要确保旧环境的寄存器内容已经被保存,以便于它们稍后能够正确地还原。为什么?这种事件发生在什么地方? +``` + +```c +小挑战!给内核添加一个小小的调度策略,比如一个固定优先级的调度器,它将会给每个环境分配一个优先级,并且在执行中,较高优先级的环境总是比低优先级的环境优先被选定。如果你想去冒险一下,尝试实现一个类 Unix 的、优先级可调整的调度器,或者甚至是一个彩票调度器或跨步调度器。(可以在 Google 中查找“彩票调度”和“跨步调度”的相关资料) + +写一个或两个测试程序,去测试你的调度算法是否工作正常(即,正确的算法能够按正确的次序运行)。如果你实现了本实验的 Part B 和 Part C 部分的 `fork()` 和 IPC,写这些测试程序可能会更容易。 +``` + +```markdown +小挑战!目前的 JOS 内核还不能应用到使用了 x87 协处理器、MMX 指令集、或流式 SIMD 扩展(SSE)的 x86 处理器上。扩展数据结构 `Env` 去提供一个能够保存处理器的浮点状态的地方,并且扩展上下文切换代码,当从一个环境切换到另一个环境时,能够保存和还原正确的状态。`FXSAVE` 和 `FXRSTOR` 指令或许对你有帮助,但是需要注意的是,这些指令在旧的 x86 用户手册上没有,因为它是在较新的处理器上引入的。写一个用户级的测试程序,让它使用浮点做一些很酷的事情。 +``` + +##### 创建环境的系统调用 + +虽然你的内核现在已经有了在多个用户级环境之间切换的功能,但是由于内核初始化设置的原因,它在运行环境时仍然是受限的。现在,你需要去实现必需的 JOS 系统调用,以允许用户环境去创建和启动其它的新用户环境。 + +Unix 提供了 `fork()` 系统调用作为它的进程创建原语。Unix 的 `fork()` 通过复制调用进程(父进程)的整个地址空间去创建一个新进程(子进程)。从用户空间中能够观察到它们之间的仅有的两个差别是,它们的进程 ID 和父进程 ID(由 `getpid` 和 `getppid` 返回)。在父进程中,`fork()` 返回子进程 ID,而在子进程中,`fork()` 返回 0。默认情况下,每个进程得到它自己的私有地址空间,一个进程对内存的修改对另一个进程都是不可见的。 + +为创建一个用户模式下的新的环境,你将要提供一个不同的、更原始的 JOS 系统调用集。使用这些系统调用,除了其它类型的环境创建之外,你可以在用户空间中实现一个完整的类 Unix 的 `fork()`。你将要为 JOS 编写的新的系统调用如下: + + * `sys_exofork`: +这个系统调用创建一个新的空白的环境:在它的地址空间的用户部分什么都没有映射,并且它也不能运行。这个新的环境与 `sys_exofork` 调用时创建它的父环境的寄存器状态完全相同。在父进程中,`sys_exofork` 将返回新创建进程的 `envid_t`(如果环境分配失败的话,返回的是一个负的错误代码)。在子进程中,它将返回 0。(因为子进程从一开始就被标记为不可运行,在子进程中,`sys_exofork` 将并不真的返回,直到它的父进程使用 .... 
显式地将子进程标记为可运行之前。)
+ * `sys_env_set_status`:
+设置指定的环境状态为 `ENV_RUNNABLE` 或 `ENV_NOT_RUNNABLE`。这个系统调用一般用于在一个新环境的地址空间和寄存器状态完全初始化完成之后,标记这个准备就绪的新环境可以运行。
+ * `sys_page_alloc`:
+分配一个物理内存页,并映射它到一个给定的环境地址空间中、给定的一个虚拟地址上。
+ * `sys_page_map`:
+从一个环境的地址空间中复制一个页映射(不是页内容!)到另一个环境的地址空间中,保持内存共享,以便于新的和旧的映射共同指向同一个物理内存页。
+ * `sys_page_unmap`:
+在一个给定的环境中,取消映射一个给定的、已映射的虚拟地址。

+上面所有的系统调用都接受环境 ID 作为参数,JOS 内核支持一个约定,那就是用值 “0” 来表示“当前环境”。这个约定是在 `kern/env.c` 中的 `envid2env()` 中实现的。

+在我们的 `user/dumbfork.c` 的测试程序里,提供了一个类 Unix 的 `fork()` 的非常原始的实现。这个测试程序使用了上面的系统调用,去创建和运行一个复制了它自己地址空间的子环境。然后,这两个环境像前面的练习那样使用 `sys_yield` 来回切换,父进程在迭代 10 次后退出,而子进程在迭代 20 次后退出。

+```c
+练习 7、在 `kern/syscall.c` 中实现上面描述的系统调用,并确保 `syscall()` 能调用它们。你将需要使用 `kern/pmap.c` 和 `kern/env.c` 中的多个函数,尤其是要用到 `envid2env()`。目前,每当你调用 `envid2env()` 时,在 `checkperm` 参数上传递 1。你务必要检查任何无效的系统调用参数,遇到无效参数时返回 `-E_INVAL`。使用 `user/dumbfork` 测试你的 JOS 内核,并在继续之前确保它运行正常。
+```

+```c
+小挑战!添加必要的额外系统调用,以便能够读取一个已存在环境的所有重要状态,以及设置这些状态。然后实现一个能够 fork 出子环境的用户模式程序,运行它一小会(即,迭代几次 `sys_yield()`),然后取得子环境的几张屏幕截图或检查点,接着运行子环境一段时间,然后还原子环境到检查点时的状态,从这里继续运行。这样,你就可以有效地从一个中间状态“回放”子环境的运行。确保子环境与用户使用 `sys_cgetc()` 或 `readline()` 执行了一些交互,这样,用户就能够查看和修改它的内部状态;并且你可以通过让子环境“选择性失忆”来验证你的检查点/重启动机制,使它“遗忘”某个时间点之前发生的事情。
+```

+到此为止,已经完成了本实验的 Part A 部分;在你运行 `make grade` 之前确保它通过了所有的 Part A 的测试,并且和以往一样,使用 `make handin` 去提交它。如果你想尝试找出为什么一些特定的测试是失败的,可以运行 `./grade-lab4 -v`,它将向你展示内核构建的输出,和测试失败时的 QEMU 运行情况。当测试失败时,这个脚本将停止运行,然后你可以去检查 `jos.out` 的内容,查看内核真实的输出内容。

+#### Part B:写时复制 Fork

+正如在前面提到过的,Unix 提供 `fork()` 系统调用作为它主要的进程创建原语。`fork()` 系统调用通过复制调用进程(父进程)的地址空间来创建一个新进程(子进程)。

+xv6 Unix 的 `fork()` 将父进程所有页上的数据复制到为子进程分配的新页中。从本质上看,它与 `dumbfork()` 所采取的方法是相同的。复制父进程的地址空间到子进程,是 `fork()` 操作中代价最高的部分。

+但是,对 `fork()` 的调用之后,经常紧接着几乎立即就是子进程中的一个 `exec()` 调用,它使用一个新程序来替换子进程的内存。这是 shell 默认的做法。在这种情况下,花费在复制父进程地址空间上的时间是非常浪费的,因为在调用 `exec()` 之前,子进程使用的内存非常少。

+基于这个原因,Unix 的最新版本利用了虚拟内存硬件的优势,允许父进程和子进程共享映射到它们各自地址空间上的内存,直到其中一个进程真正修改了它们为止。这个技术就是众所周知的“写时复制”。为实现这一点,在 `fork()` 时,内核从父进程复制到子进程的是地址空间的映射,而不是所映射页的内容,并且同时将正在共享中的页标记为只读。当两个进程中的其中一个尝试写入到它们共享的页上时,这个进程将产生一个页故障。在这时,Unix 内核才意识到那个页实际上是“虚拟的”或“写时复制”的副本,然后它为发生页故障的进程生成一个新的、私有的、可写的页副本。在这种方式中,各个页的内容直到真正被写入时才会被真正地复制。这种优化使得子进程中 `fork()` 后跟一个 `exec()` 的代价变得很低:子进程在调用 `exec()` 时可能仅需要复制一个页(它的栈的当前页)。

+在本实验的下一段中,你将实现一个带有“写时复制”的“真正的”类 Unix 的 `fork()`,并将它实现为一个常规的用户空间库。在用户空间中实现 `fork()` 和写时复制有一个好处就是,让内核始终保持简单,并且因此更不易出错。它也允许各个用户模式程序为 `fork()` 定义它们自己的语义。一个想要稍微不同实现的程序(例如,像 `dumbfork()` 那样代价昂贵的总是复制的版本,或者父子进程之后真正共享内存的版本),可以很容易地自己提供。

+##### 用户级页故障处理

+一个用户级写时复制 `fork()` 需要知道在写保护页上发生的页故障,因此,这是你首先需要实现的东西。对用户级页故障处理来说,写时复制仅是众多可能的用途之一。

+通常的做法是这样配置地址空间:用页故障来指示何时需要执行某些动作。例如,主流的 Unix 内核在一个新进程的栈区域中,最初仅映射单个页,之后栈页按“需求”分配和映射,因此,进程的栈消费是逐渐增加的,并因此导致在尚未映射的栈地址上发生页故障。对于进程空间中每个区域上发生的页故障,典型的 Unix 内核都必须记录应该采取什么动作。例如,在栈区域中的一个页故障,一般情况下将分配和映射新的物理内存页。一个在程序的 BSS 区域中的页故障,一般情况下将分配一个新页,然后用 0 填充它并映射它。在一个按需分页的系统中,在文本区域中的页故障将从磁盘上读取相应的二进制页并映射它。

+与内核要跟踪大量信息的传统 Unix 方法不同,你将在用户空间中决定对每个页故障做什么,在用户空间中 bug 的危害较小。这种设计带来了额外的好处,那就是允许程序员在定义它们的内存区域时拥有很好的灵活性;稍后,你还将使用用户级页故障处理来映射和访问基于磁盘的文件系统上的文件。

+###### 设置页故障服务程序

+为了处理它自己的页故障,一个用户环境将需要在 JOS 内核上注册一个页故障服务程序入口。用户环境通过新的 `sys_env_set_pgfault_upcall` 系统调用来注册它的页故障入口。我们给结构 `Env` 增加了一个新的成员 `env_pgfault_upcall`,让它去记录这个信息。

+```markdown
+练习 8、实现 `sys_env_set_pgfault_upcall` 系统调用。当查找目标环境的环境 ID 时,一定要确认启用了权限检查,因为这是一个“危险的”系统调用。
+```

+###### 在用户环境中的正常和异常栈

+在正常运行期间,JOS 中的一个用户环境运行在正常的用户栈上:它的 `ESP` 寄存器开始指向到 `USTACKTOP`,而它所推送的栈数据将驻留在 `USTACKTOP-PGSIZE` 和
`USTACKTOP-1`(含)之间的页上。但是,当在用户模式中发生页故障时,内核将让用户环境在一个不同的栈(即用户异常栈)上重新开始运行,去执行一个指定的用户级页故障服务程序。实质上,我们是让 JOS 内核替用户环境实现了自动的“栈切换”,这与从用户模式转换到内核模式时,x86 处理器替 JOS 实现栈切换的方式大致相同。
+
+JOS 用户异常栈也是一个页的大小,并且它的顶部被定义在虚拟地址 `UXSTACKTOP` 处,因此用户异常栈的有效字节范围是从 `UXSTACKTOP-PGSIZE` 到 `UXSTACKTOP-1`(含)。尽管运行在异常栈上,用户页故障服务程序仍能够使用 JOS 的普通系统调用去映射新页或调整映射,以修复最初导致页故障发生的各种问题。然后,用户级页故障服务程序通过一个汇编语言 `stub`,返回到原始栈上发生故障的代码处。
+
+每个想要支持用户级页故障处理的用户环境,都需要使用在 Part A 中介绍的 `sys_page_alloc()` 系统调用,为它自己的异常栈分配内存。
+
+###### 调用用户页故障服务程序
+
+现在,你需要去修改 `kern/trap.c` 中的页故障处理代码,以便能够处理接下来在用户模式中发生的页故障。我们将故障发生时用户环境的状态称为“捕获时”状态。
+
+如果没有注册页故障服务程序,JOS 内核就像以前那样,用一条消息销毁该用户环境。否则,内核将在异常栈上设置一个陷阱帧,它看起来就像 `inc/trap.h` 文件中的一个 `struct UTrapframe`:
+
+```assembly
+                    <-- UXSTACKTOP
+trap-time esp
+trap-time eflags
+trap-time eip
+trap-time eax       start of struct PushRegs
+trap-time ecx
+trap-time edx
+trap-time ebx
+trap-time esp
+trap-time ebp
+trap-time esi
+trap-time edi       end of struct PushRegs
+tf_err (error code)
+fault_va            <-- %esp when handler is run
+
+```
+
+然后,内核安排这个用户环境恢复运行,在异常栈上以这个栈帧去执行页故障服务程序;你必须搞清楚如何才能让这一切发生。`fault_va` 是引发页故障的虚拟地址。
+
+如果异常发生时,用户环境已经运行在用户异常栈上了,那就说明页故障服务程序自身出了故障。在这种情况下,你应该在当前的 `tf->tf_esp` 之下,而不是在 `UXSTACKTOP` 之下,开始设置新的栈帧。
+
+要测试 `tf->tf_esp` 是否已经位于用户异常栈上,可以检查它是否在 `UXSTACKTOP-PGSIZE` 和 `UXSTACKTOP-1`(含)的范围内。
+
+```markdown
+练习 9、实现 `kern/trap.c` 中 `page_fault_handler` 的代码,把页故障派发到用户模式故障服务程序上。在写入异常栈时,一定要采取适当的预防措施。(如果用户环境用完了异常栈的空间,会发生什么事情?)
+```
+
+###### 用户模式页故障入口点
+
+接下来,你需要去实现一个汇编程序,它负责调用 C 语言的页故障服务程序,并在原始的故障指令处恢复运行。这个汇编程序就是由内核使用 `sys_env_set_pgfault_upcall()` 注册的那个服务程序。
+
+```markdown
+练习 10、实现 `lib/pfentry.S` 中的 `_pgfault_upcall` 程序。最有趣的部分是返回到用户代码中产生页故障的原始位置。你将要直接返回到那里,不能经过内核。最难的部分是同时切换栈和重新加载 EIP。
+```
+
+最后,你需要去实现用户级页故障处理机制的 C 用户库部分。
+
+```c
+练习 11、完成 `lib/pgfault.c` 中的 `set_pgfault_handler()`。
+```
+
+###### 测试
+
+运行 `user/faultread`(`make run-faultread`),你应该会看到:
+
+```c
+    ...
+    [00000000] new env 00001000
+    [00001000] user fault va 00000000 ip 0080003a
+    TRAP frame ...
+    [00001000] free env 00001000
+```
+
+运行 `user/faultdie`,你应该会看到:
+
+```c
+    ...
+    [00000000] new env 00001000
+    i faulted at va deadbeef, err 6
+    [00001000] exiting gracefully
+    [00001000] free env 00001000
+```
+
+运行 `user/faultalloc`,你应该会看到:
+
+```c
+    ...
+    [00000000] new env 00001000
+    fault deadbeef
+    this string was faulted in at deadbeef
+    fault cafebffe
+    fault cafec000
+    this string was faulted in at cafebffe
+    [00001000] exiting gracefully
+    [00001000] free env 00001000
+```
+
+如果你只看到第一行 “this string”,意味着你没有正确地处理递归页故障。
+
+运行 `user/faultallocbad`,你应该会看到:
+
+```c
+    ...
+    [00000000] new env 00001000
+    [00001000] user_mem_check assertion failure for va deadbeef
+    [00001000] free env 00001000
+```
+
+确保你理解了为什么 `user/faultalloc` 和 `user/faultallocbad` 的行为是不一样的。
+
+```markdown
+小挑战!扩展你的内核,使得不仅是页故障,而是用户空间中运行的代码能够产生的所有类型的处理器异常,都能够被重定向到一个用户模式的异常服务程序上。写出用户模式测试程序,去测试各种用户模式异常处理,比如除零错误、一般保护故障、以及非法操作码。
+```
+
+##### 实现写时复制 Fork
+
+现在,你已经具备了在用户空间中完整地实现写时复制 `fork()` 所需的全部内核功能。
+
+我们在 `lib/fork.c` 中为你的 `fork()` 提供了一个框架。像 `dumbfork()` 一样,`fork()` 应该创建一个新环境,然后扫描父环境的整个地址空间,并在子环境中设置相应的页映射。重要的差别在于,`dumbfork()` 复制的是页本身,而 `fork()` 一开始只复制页映射。`fork()` 只在其中一个环境尝试写入某个页时,才去复制那个页。
+
+`fork()` 的基本控制流如下:
+
+ 1. 父环境使用你在上面实现的 `set_pgfault_handler()` 函数,安装 `pgfault()` 作为 C 级页故障服务程序。
+
+ 2. 父环境调用 `sys_exofork()` 去创建一个子环境。
+
 3. 对于父环境地址空间中低于 `UTOP` 的每个可写页或写时复制页,父环境都调用一次 `duppage`,它应该把该页以写时复制的方式映射到子环境的地址空间中,然后在父环境自己的地址空间中重新以写时复制方式映射该页。[注意:这里的顺序(即,先在子环境中把页标记为 COW,再在父环境中标记)很重要!你能明白是为什么吗?尝试想一个具体的案例,看看把顺序颠倒过来会出什么问题。] `duppage` 把两个 PTE 都设置为不可写入,并在 “avail” 字段中包含 `PTE_COW`,以便把写时复制页与真正的只读页区分开。
+
+然而,异常栈是不能以这种方式重映射的。对于异常栈,你需要在子环境中分配一个新页。因为真正的复制工作是由页故障服务程序来做的,而页故障服务程序又运行在异常栈上,所以异常栈不能做成写时复制的:不然,谁来复制它呢?
+
+`fork()` 还需要处理那些已经存在、但既不可写也不是写时复制的页。
+
+ 4. 父环境为子环境设置用户页故障入口点,让它看起来和自己的一样。
+
+ 5. 现在,子环境已经准备好运行,所以由父环境把它标记为可运行。
+
+每当其中一个环境写入一个它还没有写过的写时复制页时,就会产生一个页故障。下面是用户页故障服务程序的控制流:
+
+ 1. 内核把页故障传递给 `_pgfault_upcall`,它会调用 `fork()` 的 `pgfault()` 服务程序。
+ 2. `pgfault()` 检查这个故障是一个写入(检查错误代码中的 `FEC_WR`),并且该页的 PTE 被标记为 `PTE_COW`;如果不是,则崩溃(panic)。
+ 3. `pgfault()` 分配一个映射在临时位置的新页,并把故障页的内容复制进去。然后,故障服务程序以可读写权限把新页映射到合适的地址,替换掉旧的只读映射。
+
+对于上面的几个操作,用户级的 `lib/fork.c` 代码必须查询环境的页表(即,检查某个页的 PTE 是否标记为 `PTE_COW`)。正是为了这个目的,内核才把环境的页表映射在 `UVPT` 位置。它使用了一个[聪明的映射技巧][1],让用户代码可以很容易地查找 PTE。`lib/entry.S` 设置了 `uvpt` 和 `uvpd`,以便你能够在 `lib/fork.c` 中轻松查找页表信息。
+
+```c
+练习 12、在 `lib/fork.c` 中实现 `fork`、`duppage` 和 `pgfault`。
+
+使用 `forktree` 程序测试你的代码。它应该会产生下列的信息,其中会夹杂着 'new env'、'free env'、和 'exiting gracefully' 这样的信息。这些信息可能不是按下面的顺序出现的,环境 ID 也可能不一样。
+
+	1000: I am ''
+	1001: I am '0'
+	2000: I am '00'
+	2001: I am '000'
+	1002: I am '1'
+	3000: I am '11'
+	3001: I am '10'
+	4000: I am '100'
+	1003: I am '01'
+	5000: I am '010'
+	4001: I am '011'
+	2002: I am '110'
+	1004: I am '001'
+	1005: I am '111'
+	1006: I am '101'
+```
+
+```c
+小挑战!实现一个名为 `sfork()` 的、共享内存的 `fork()`。在这个版本中,父子环境共享所有的内存页(因此,一个环境对内存的写入,在另一个环境中也能看到),只有栈区域中的页例外,它们应该用写时复制来处理。修改 `user/forktree.c`,让它使用 `sfork()` 而不是常规的 `fork()`。另外,在你完成 Part C 的 IPC 之后,用你的 `sfork()` 去运行 `user/pingpongs`。你需要找到一种新的方式来提供全局指针 `thisenv` 的功能。
+```
+
+```markdown
+小挑战!你实现的 `fork` 会产生大量的系统调用。在 x86 上,使用中断切换到内核模式的代价比较高。请扩充系统调用接口,让它能够一次发送一批系统调用,然后修改 `fork` 去使用这个接口。
+
+你的新 `fork` 快了多少?
+
+你可以用一个大致的分析来论证批量系统调用给你的 `fork` 带来的性能改变,以此来(粗略地)回答这个问题:使用一条 `int 0x30` 指令的代价有多高?在你的 `fork` 中执行了多少次 `int 0x30` 指令?访问 `TSS` 做栈切换的代价高吗?等等。
+
+或者,你可以在真实的硬件上引导你的内核,真正地对你的代码做基准测试。查看 `RDTSC`(读取时间戳计数器)指令,它的定义在 IA32 手册中,它计数自上一次处理器重置以来流逝的时钟周期数。QEMU 并不能真实地模拟这条指令(它要么计数运行的虚拟指令数量,要么使用主机的 TSC,但这两种方式都不能反映真实的 CPU 周期数)。
+```
+
+到此为止,Part B 部分结束了。在你运行 `make grade` 之前,确保你通过了所有的 Part B 部分的测试。和以前一样,你可以使用 `make handin` 去提交你的实验。
+
+#### Part C:抢占式多任务处理和进程间通讯(IPC)
+
+在实验 4 的最后部分,你将修改内核去抢占不配合的环境,并允许环境之间显式地传递消息。
+
+##### 时钟中断和抢占
+
+运行测试程序 `user/spin`。这个测试程序 fork 出一个子环境,子环境获得 CPU 的控制权之后,就进入死循环,永不停歇地运转。无论是父环境还是内核都无法再拿回对 CPU 的控制。从保护系统免受用户模式环境中的 bug 或恶意代码侵害的角度来看,这显然不是个理想的状态,因为任何用户模式环境都能够通过一个简单的无限循环,永不归还 CPU 控制权,让整个系统陷入停顿。为了允许内核去抢占一个正在运行的环境,从它那里夺回对 CPU 的控制权,我们必须扩展 JOS 内核,以支持来自硬件时钟的外部硬件中断。
+
+###### 中断规则
+
+外部中断(即设备中断)被称为 IRQ。共有 16 个可能的 IRQ,编号 0 到 15。从 IRQ 号到 IDT 条目的映射不是固定不变的。`picirq.c` 中的 `pic_init` 把 IRQ 0 - 15 映射到 IDT 条目 `IRQ_OFFSET` 到 `IRQ_OFFSET+15`。
+
+在 `inc/trap.h` 中,`IRQ_OFFSET` 被定义为十进制的 32。所以,IDT 条目 32 - 47 对应 IRQ 0 - 15。例如,时钟中断是 IRQ 0,所以 IDT[IRQ_OFFSET+0](即 IDT[32])包含了内核中时钟中断服务程序的地址。之所以这样选择 `IRQ_OFFSET`,是为了让设备中断不与处理器异常重叠,不然显然会引起混淆。(事实上,在运行 MS-DOS 的早期 PC 上,`IRQ_OFFSET` 就是 0,这确实在处理硬件中断和处理器异常之间造成过大量混淆!)
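+
+(LCTT 译注:为了更直观地展示上面这个“IRQ 号加上 `IRQ_OFFSET` 等于 IDT 条目号”的映射,下面给出一段示意性的 C 代码草稿。它只用到了 JOS 头文件中公开的 `SETGATE`(`inc/mmu.h`)、`IRQ_OFFSET`(`inc/trap.h`)和 `GD_KT`(`inc/memlayout.h`);其中 `irq_entries` 是为演示而假设的名字,代表你在 `kern/trapentry.S` 中为各个 IRQ 生成的入口地址表。它不是练习 13 的参考答案,只是帮助理解的示意。)
+
+```c
+#include <inc/types.h>
+#include <inc/mmu.h>
+#include <inc/trap.h>
+#include <inc/memlayout.h>
+
+extern struct Gatedesc idt[256];   /* kern/trap.c 中定义的 IDT */
+extern uint32_t irq_entries[16];   /* 假设的名字:各 IRQ 入口的地址表 */
+
+static void
+irq_idt_init_sketch(void)
+{
+	int i;
+
+	for (i = 0; i < 16; i++) {
+		/* IRQ i 对应 IDT[IRQ_OFFSET + i],即 IDT[32 + i]。
+		 * istrap = 0:使用中断门,进入内核时硬件自动清除 FL_IF;
+		 * dpl = 0:不允许用户代码用 int 指令直接触发这些向量。 */
+		SETGATE(idt[IRQ_OFFSET + i], 0, GD_KT, irq_entries[i], 0);
+	}
+}
+```
+
+(按这种映射,时钟中断 IRQ 0 正好落在 IDT[32] 上,与上文的描述一致。)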
+ +在 JOS 中,相比 xv6 Unix 我们做了一个重要的简化。当处于内核模式时,外部设备中断总是被关闭(并且,像 xv6 一样,当处于用户空间时,再打开外部设备的中断)。外部中断由 `%eflags` 寄存器的 `FL_IF` 标志位来控制(查看 `inc/mmu.h`)。当这个标志位被设置时,外部中断被打开。虽然这个标志位可以使用几种方式来修改,但是为了简化,我们只通过进程所保存和恢复的 `%eflags` 寄存器值,作为我们进入和离开用户模式的方法。 + +处于用户环境中时,你将要确保 `FL_IF` 标志被设置,以便于出现一个中断时,它能够通过处理器来传递,让你的中断代码来处理。否则,中断将被屏蔽或忽略,直到中断被重新打开后。我们使用引导加载程序的第一个指令去屏蔽中断,并且到目前为止,还没有去重新打开它们。 + +```markdown +练习 13、修改 `kern/trapentry.S` 和 `kern/trap.c` 去初始化 IDT 中的相关条目,并为 IRQ 0 到 15 提供服务程序。然后修改 `kern/env.c` 中的 `env_alloc()` 的代码,以确保在用户环境中,中断总是打开的。 + +另外,在 `sched_halt()` 中取消注释 `sti` 指令,以便于空闲的 CPU 取消屏蔽中断。 + +当调用一个硬件中断服务程序时,处理器不会推送一个错误代码。在这个时候,你可能需要重新阅读 [80386 参考手册][2] 的 9.2 节,或 [IA-32 Intel 架构软件开发者手册 卷 3][3] 的 5.8 节。 + +在完成这个练习后,如果你在你的内核上使用任意的测试程序去持续运行(即:`spin`),你应该会看到内核输出中捕获的硬件中断的捕获帧。虽然在处理器上已经打开了中断,但是 JOS 并不能处理它们,因此,你应该会看到在当前运行的用户环境中每个中断的错误属性并被销毁,最终环境会被销毁并进入到监视器中。 +``` + +###### 处理时钟中断 + +在 `user/spin` 程序中,子环境首先运行之后,它只是进入一个高速循环中,并且内核再无法取得 CPU 控制权。我们需要对硬件编程,定期产生时钟中断,它将强制将 CPU 控制权返还给内核,在内核中,我们就能够将控制权切换到另外的用户环境中。 + +我们已经为你写好了对 `lapic_init` 和 `pic_init`(来自 `init.c` 中的 `i386_init`)的调用,它将设置时钟和中断控制器去产生中断。现在,你需要去写代码来处理这些中断。 + +```markdown +练习 14、修改内核的 `trap_dispatch()` 函数,以便于在时钟中断发生时,它能够调用 `sched_yield()` 去查找和运行一个另外的环境。 + +现在,你应该能够用 `user/spin` 去做测试了:父环境应该会 fork 出子环境,`sys_yield()` 到它许多次,但每次切换之后,将重新获得对 CPU 的控制权,最后杀死子环境后优雅地终止。 +``` + +这是做回归测试的好机会。确保你没有弄坏本实验的前面部分,确保打开中断能够正常工作(即: `forktree`)。另外,尝试使用 ` make CPUS=2 target` 在多个 CPU 上运行它。现在,你应该能够通过 `stresssched` 测试。可以运行 `make grade` 去确认。现在,你的得分应该是 65 分了(总分为 80)。 + +##### 进程间通讯(IPC) + +(严格来说,在 JOS 中这是“环境间通讯” 或 “IEC”,但所有人都称它为 IPC,因此我们使用标准的术语。) + +我们一直专注于操作系统的隔离部分,这就产生了一种错觉,好像每个程序都有一个机器完整地为它服务。一个操作系统的另一个重要服务是,当它们需要时,允许程序之间相互通讯。让程序与其它程序交互可以让它的功能更加强大。Unix 的管道模型就是一个权威的示例。 + +进程间通讯有许多模型。关于哪个模型最好的争论从来没有停止过。我们不去参与这种争论。相反,我们将要实现一个简单的 IPC 机制,然后尝试使用它。 + +###### JOS 中的 IPC + +你将要去实现另外几个 JOS 内核的系统调用,由它们共同来提供一个简单的进程间通讯机制。你将要实现两个系统调用,`sys_ipc_recv` 和 `sys_ipc_try_send`。然后你将要实现两个库去封装 `ipc_recv` 和 `ipc_send`。 + +用户环境可以使用 JOS 的 IPC 机制相互之间发送 “消息” 到每个其它环境,这些消息有两部分组成:一个单个的 32 位值,和可选的一个单个页映射。允许环境在消息中传递页映射,提供了一个高效的方式,传输比一个仅适合单个的 32 位整数更多的数据,并且也允许环境去轻松地设置安排共享内存。 + +###### 发送和接收消息 + +一个环境通过调用 `sys_ipc_recv` 去接收消息。这个系统调用将取消对当前环境的调度,并且不会再次去运行它,直到消息被接收为止。当一个环境正在等待接收一个消息时,任何其它环境都能够给它发送一个消息 — 而不仅是一个特定的环境,而且不仅是与接收环境有父子关系的环境。换句话说,你在 Part A 中实现的权限检查将不会应用到 IPC 上,因为 IPC 系统调用是经过慎重设计的,因此可以认为它是“安全的”:一个环境并不能通过给它发送消息导致另一个环境发生故障(除非目标环境也存在 Bug)。 + +尝试去发送一个值时,一个环境使用接收者的 ID 和要发送的值去调用 `sys_ipc_try_send` 来发送。如果指定的环境正在接收(它调用了 `sys_ipc_recv`,但尚未收到值),那么这个环境将去发送消息并返回 0。否则将返回 `-E_IPC_NOT_RECV` 来表示目标环境当前不希望来接收值。 + +在用户空间中的一个库函数 `ipc_recv` 将去调用 `sys_ipc_recv`,然后,在当前环境的 `struct Env` 中查找关于接收到的值的相关信息。 + +同样,一个库函数 `ipc_send` 将去不停地调用 `sys_ipc_try_send` 来发送消息,直到发送成功为止。 + +###### 转移页 + +当一个环境使用一个有效的 `dstva` 参数(低于 `UTOP`)去调用 `sys_ipc_recv` 时,环境将声明愿意去接收一个页映射。如果发送方发送一个页,那么那个页应该会被映射到接收者地址空间的 `dstva` 处。如果接收者在 `dstva` 已经有了一个页映射,那么已存在的那个页映射将被取消映射。 + +当一个环境使用一个有效的 `srcva` 参数(低于 `UTOP`)去调用 `sys_ipc_try_send` 时,意味着发送方希望使用 `perm` 权限去发送当前映射在 `srcva` 处的页给接收方。在 IPC 成功之后,发送方在它的地址空间中,保留了它最初映射到 `srcva` 位置的页。而接收方也获得了最初由它指定的、在它的地址空间中的 `dstva` 处的、映射到相同物理页的映射。最后的结果是,这个页成为发送方和接收方共享的页。 + +如果发送方和接收方都没有表示要转移这个页,那么就不会有页被转移。在任何 IPC 之后,内核将在接收方的 `Env` 结构上设置新的 `env_ipc_perm` 字段,以允许接收页,或者将它设置为 0,表示不再接收。 + +###### 实现 IPC + +```markdown +练习 15、实现 `kern/syscall.c` 中的 `sys_ipc_recv` 和 `sys_ipc_try_send`。在实现它们之前一起阅读它们的注释信息,因为它们要一起工作。当你在这些程序中调用 `envid2env` 时,你应该去设置 `checkperm` 的标志为 0,这意味着允许任何环境去发送 IPC 消息到另外的环境,并且内核除了验证目标 envid 是否有效外,不做特别的权限检查。 + +接着实现 `lib/ipc.c` 中的 `ipc_recv` 和 `ipc_send` 函数。 + +使用 `user/pingpong` 和 `user/primes` 函数去测试你的 IPC 机制。`user/primes` 将为每个质数生成一个新环境,直到 JOS 
耗尽环境为止。你可能会发现,阅读 `user/primes.c` 会非常有趣,你可以看到所有的 fork 和 IPC 都是在幕后进行的。
+```
+
+```
+小挑战!为什么 `ipc_send` 必须用循环来重试?修改系统调用接口,让它不再需要循环。确保你能处理多个环境同时尝试向同一个环境发送消息的情况。
+```
+
+```markdown
+小挑战!质数筛是消息传递在大规模并发程序中的一个巧妙用法。阅读 C. A. R. Hoare 的《Communicating Sequential Processes》(《Communications of the ACM》21(8),1978 年 8 月,666-677 页),并实现其中的矩阵乘法示例。
+```
+
+```markdown
+小挑战!消息传递的控制能力最令人印象深刻的例子之一,是 Doug McIlroy 的幂级数计算器,它在 [M. Douglas McIlroy,《Squinting at Power Series》,Software--Practice and Experience, 20(7) (July 1990),661-683][4] 中做了详细描述。实现他的幂级数计算器,并用它计算 sin(x + x^3) 的幂级数。
+```
+
+```markdown
+小挑战!应用 Liedtke 的论文([通过内核设计改善 IPC 性能][5])中的一些技术,或者你能想到的其它技巧,让 JOS 的 IPC 机制更高效。为此,你可以随意修改内核的系统调用 API,只要你的代码保持与我们的评分脚本向后兼容就行。
+```
+
+**Part C 到此结束了。**确保你通过了所有的评分测试,并且不要忘了把你的小挑战的答案写入 `answers-lab4.txt` 中。
+
+在提交实验之前,使用 `git status` 和 `git diff` 检查你的更改,并且不要忘了用 `git add answers-lab4.txt` 添加你的小挑战答案。全部完成后,使用 `git commit -am 'my solutions to lab 4'` 提交你的更改,然后运行 `make handin`,并按照提示操作。
+
+--------------------------------------------------------------------------------
+
+via: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/
+
+作者:[csail.mit][a]
+选题:[lujun9972][b]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://pdos.csail.mit.edu
+[b]: https://github.com/lujun9972
+[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/uvpt.html
+[2]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/i386/toc.htm
+[3]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/ia32/IA32-3A.pdf
+[4]: https://swtch.com/~rsc/thread/squint.pdf
+[5]: http://dl.acm.org/citation.cfm?id=168633
\ No newline at end of file
diff --git a/translated/tech/20181017 Design faster web pages, part 2- Image replacement.md b/translated/tech/20181017 Design faster web pages, part 2- Image replacement.md
new file mode 100644
index 0000000000..55631b4713
--- /dev/null
+++ b/translated/tech/20181017 Design faster web pages, part 2- Image replacement.md
@@ -0,0 +1,177 @@
+设计更快的网页(二):图片替换
+======
+![](https://fedoramagazine.org/wp-content/uploads/2018/03/fasterwebsites2-816x345.jpg)
+
+
+欢迎回到我们为构建更快网页所写的系列文章。上一篇[文章][1]讨论了只通过图片压缩来实现这个目标的方法。那个例子一开始有 1.2MB 的“浏览器脂肪”,最后减轻到了 488.9KB。但这还不够快!本文将继续给浏览器“减肥”。你在这个过程中可能会觉得我们做的事情有点疯狂,但一旦完成,你就会明白为什么要这么做了。
+
+### 准备工作
+
+本文再次从对网页的分析开始。使用 Firefox 内置的截图功能对整个页面进行截图。你还需要[用 sudo][2] 安装 Inkscape:
+
+```
+$ sudo dnf install inkscape
+```
+
+如果你想了解 Inkscape 的用法,Fedora 杂志上有几篇现成的[文章][3]。本文仅会介绍一些供 Web 使用的基本的 SVG 优化方法。
+
+### 分析
+
+我们再次以 [getfedora.org][4] 的网页举例。
+
+![Getfedora 的页面,对其中的图片做了标记][5]
+
+这次分析以图形的方式来做会更好,这也是它从屏幕截图开始的原因。上面的截图标记了页面中的所有图形元素。Fedora 网站团队已经在两处(也有可能是四处,那样更好)采取了替换图像的措施:社交媒体的图标变成了字体的字形,而语言选择器变成了 SVG。
+
+我们有几种可以替换的选择:
+
++ CSS3
++ 字体
++ SVG
++ HTML5 Canvas
+
+#### HTML5 Canvas
+
+简单来说,HTML5 Canvas 是一种 HTML 元素,它允许你借助脚本语言(通常是 JavaScript)在上面绘图,不过它现在还没有被广泛使用。因为可以用脚本语言来绘制,所以这个元素也可以用来做动画。这里有一些用 HTML Canvas 实现的实例,比如[三角形模式][6]、[动态波浪][7]和[字体动画][8]。不过,在本例的场景下,它似乎不是最好的选择。
+
+#### CSS3
+
+使用层叠样式表,你可以绘制图形,甚至可以让它们动起来。CSS 常被用来绘制按钮之类的元素。然而,用 CSS 绘制的更复杂的图形,通常只出现在技术演示页面中。这是因为用可视化工具制作图形,依然要比用代码来得更快一些。
+
+#### 字体
+
+另外一种方式是使用字体来装饰网页,[Fontawesome][9] 在这方面很流行。比如,在这个例子中,你可以用字体来替换“风味”和“旋转”的图标。这种方法有一个负面影响,但解决起来很容易,我们会在本系列的下一部分中介绍。
+
+#### SVG
+
+这种图形格式已经存在了很长时间,而且它一直可以在浏览器中使用。有很长一段时间,并非所有浏览器都支持它,不过现在这已经成为历史了。所以,本例中替换图形的最佳方法就是使用 SVG。
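+
+(LCTT 译注:为了让没接触过 SVG 的读者有个直观印象,下面给出一个“圆形底、上面一个简化咖啡杯”的内联 SVG 示意。其中的坐标、半径和颜色都是为演示而假设的数值,并不是 getfedora.org 页面上图标的真实代码。)
+
+```
+<svg xmlns="http://www.w3.org/2000/svg" width="64" height="64" viewBox="0 0 64 64">
+  <!-- 圆形底 -->
+  <circle cx="32" cy="32" r="30" fill="#3c6eb4"/>
+  <!-- 简化的杯身:两条直边加一段半圆弧 -->
+  <path d="M20 24 h18 v10 a 9 9 0 0 1 -18 0 z" fill="#fff"/>
+  <!-- 杯柄 -->
+  <path d="M38 26 h4 a 5 5 0 0 1 0 10 h-4" fill="none" stroke="#fff" stroke-width="3"/>
+</svg>
+```
+
+(整幅图形完全由文本描述,这也正是后文的节点清理和 gzip 压缩能大幅奏效的原因。)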
+ +### 为网页优化 SVG + +优化 SVG 以供互联网使用,需要几个步骤。 + +SVG 是一种 XML 方言。它用节点来描述圆形、矩形或文本路径等组件。每个节点都是一个 XML 元素。为了保证代码简洁,SVG 应该包含尽可能少的元素。 + +我们选用的 SVG 实例是带有一个咖啡杯的圆形图标。你有三种选项来用 SVG 描述它。 + +#### 一个圆形元素,上面有一个咖啡杯 + +``` + +``` + +#### 一个圆形路径,上面有一个咖啡杯 + +``` + +``` + +#### 单一路径 + +``` + +``` + +你应该可以看出,代码变得越来越复杂,需要更多的字符来描述它。当然,文件中包含更多的字符,就会导致更大的尺寸。 + +#### 节点清理 + +如果你在 Inkscape 中打开了实例 SVG 按下 F2,就会激活一个节点工具。你应该看到这样的界面: + +![Inkscape - 激活节点工具][10] + +这个例子中有五个不必要的节点——就是直线中间的那些。要删除它们,你可以使用已激活的节点工具依次选中它们,并按下 **Del** 键。然后,选中这条线的定义节点,并使用工具栏的工具把它们重新做成角。 + +![Inkscape - 将节点变成角的工具][11] + +如果不修复这些角,我们还有方法可以定义这条曲线,这条曲线会被保存,也就会增加文件体积。你可以手动清理这些节点,因为它无法有效的自动完成。现在,你已经为下一阶段做好了准备。 + +使用_另存为_功能,并选择_优化的 SVG_。这会弹出一个窗口,你可以在里面选择移除或保留哪些成分。 + +![Inkscape - “另存为”“优化的 SVG”][12] + +虽然这个 SVG 实例很小,但它还是从 3.2KB 减小到了 920 字节,不到原有的三分之一。 + +回到 getfedora 的页面:页面主要部分的背景中的灰色沃罗诺伊图,在经过本系列第一篇文章中的优化处理之后,从原先的 211.12 KB 减小到了 164.1 KB. + +页面中导出的原始 SVG 有 1.9 MB 大小。经过这些 SVG 优化步骤后,它只有 500.4 KB 了。太大了?好吧,现在的蓝色背景的体积是 564.98 KB。SVG 和 PNG 之间只有很小的差别。 + +#### 压缩文件 + +``` +$ ls -lh +insgesamt 928K +-rw-r--r--. 1 user user 161K 19. Feb 19:44 grey-pattern.png +-rw-rw-r--. 1 user user 160K 18. Feb 12:23 grey-pattern.png.gz +-rw-r--r--. 1 user user 489K 19. Feb 19:43 greyscale-pattern-opti.svg +-rw-rw-r--. 1 user user 112K 19. Feb 19:05 greyscale-pattern-opti.svg.gz +``` + +这是我为可视化这个主题所做的一个小测试的输出。你可能应该看到光栅图形——PNG——已经被压缩,不能再被压缩了。而 SVG,一个 XML 文件正相反。它是文本文件,所以可被压缩至原来的四分之一不到。因此,现在它的体积要比 PNG 小 50 KB 左右。 + +现代浏览器可以以原生方式处理压缩文件。所以,许多 Web 服务器都打开了 mod_deflate (Apache) 和 gzip (Nginx) 模式。这样我们就可以在传输过程中节省空间。你可以在[这儿][13]看看你的服务器是不是启用了它。 + +### 生产工具 + +首先,没有人希望每次都要用 Inkscape 来优化 SVG. 你可以在命令行中脱离 GUI 来运行 Inkscape,但你找不到选项来将 Inkscape SVG 转换成优化的 SVG. 用这种方式只能导出光栅图像。但是我们替代品: + + * SVGO (看起来开发过程已经不活跃了) + * Scour + + + +本例中我们使用 scour 来进行优化。先来安装它: + +``` +$ sudo dnf install scour +``` + +要想自动优化 SVG 文件,请运行 scour,就像这样: + +``` +[user@localhost ]$ scour INPUT.svg OUTPUT.svg -p 3 --create-groups --renderer-workaround --strip-xml-prolog --remove-descriptive-elements --enable-comment-stripping --disable-embed-rasters --no-line-breaks --enable-id-stripping --shorten-ids +``` + +这就是第二部分的结尾了。在这部分中你应该学会了如何将光栅图像替换成 SVG,并对它进行优化以供使用。请继续关注 Feroda 杂志,第三篇即将出炉。 + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/design-faster-web-pages-part-2-image-replacement/ + +作者:[Sirko Kemter][a] +选题:[lujun9972][b] +译者:[StdioA](https://github.com/StdioA) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/gnokii/ +[b]: https://github.com/lujun9972 +[1]: https://wp.me/p3XX0v-5fJ +[2]: https://fedoramagazine.org/howto-use-sudo/ +[3]: https://fedoramagazine.org/?s=Inkscape +[4]: https://getfedora.org +[5]: https://fedoramagazine.org/wp-content/uploads/2018/02/getfedora_mag.png +[6]: https://codepen.io/Cthulahoop/pen/umcvo +[7]: https://codepen.io/jackrugile/pen/BvLHg +[8]: https://codepen.io/tholman/pen/lDLhk +[9]: https://fontawesome.com/ +[10]: https://fedoramagazine.org/wp-content/uploads/2018/02/svg-optimization-nodes.png +[11]: https://fedoramagazine.org/wp-content/uploads/2018/02/node_cleaning.png +[12]: https://fedoramagazine.org/wp-content/uploads/2018/02/svg-optimizing-dialog.png +[13]: https://checkgzipcompression.com/?url=http%3A%2F%2Fgetfedora.org diff --git a/translated/tech/20181024 Get organized at the Linux command line with Calcurse.md b/translated/tech/20181024 Get organized at the Linux command line with Calcurse.md new 
file mode 100644
index 0000000000..6b6622dc5a
--- /dev/null
+++ b/translated/tech/20181024 Get organized at the Linux command line with Calcurse.md
@@ -0,0 +1,86 @@
+使用 Calcurse 在 Linux 命令行中组织任务
+======
+
+使用 Calcurse 掌握你的日历和待办事项。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT)
+
+你是否需要复杂、功能丰富的图形界面或 Web 程序,才能保持井井有条?我不这么认为。合适的命令行工具一样可以完成工作,并且做得很好。
+
+当然,说出“命令行”这个词可能会让一些 Linux 用户感到害怕。对他们来说,命令行是未知领域。
+
+使用 [Calcurse][1] 可以轻松地在命令行中组织任务。Calcurse 给基于文本的界面带来了图形化的外观。你既得到了命令行的简单和易用,又得到了方便的导航。
+
+让我们仔细看看 Calcurse,它是在 BSD 许可证下开源的。
+
+### 获取软件
+
+如果你喜欢编译代码(我通常不喜欢),你可以从 [Calcurse 网站][1] 获取源码。否则,可以根据你的 Linux 发行版获取对应的[二进制安装程序][2]。你甚至可以从 Linux 发行版的软件包管理器中获取 Calcurse。检查一下总没有坏处。
+
+编译或安装 Calcurse 之后(两者都用不了太长时间),你就可以开始使用了。
+
+### 使用 Calcurse
+
+打开终端并输入 **calcurse**。
+
+![](https://opensource.com/sites/default/files/uploads/calcurse-main.png)
+
+Calcurse 的界面由三个面板组成:
+
+ * 预约(屏幕左侧)
+ * 日历(右上角)
+ * 待办事项清单(右下角)
+
+按键盘上的 Tab 键可以在面板之间移动。要在某个面板中添加新项目,请按下 **a**。Calcurse 将引导你完成添加项目所需的操作。
+
+一个有趣的地方是,预约面板和日历面板是配合工作的。你切换到日历面板,添加一个预约;在那里,你选择预约发生的日期。选好之后,再切换回预约面板。我知道,是有点绕……
+
+按下 **a**,设置开始时间、持续时间(以分钟为单位)和预约说明。开始时间和持续时间都是可选的。Calcurse 会在预约到期的那天显示它们。
+
+![](https://opensource.com/sites/default/files/uploads/calcurse-appointment.png)
+
+某一天的预约看起来像这样:
+
+![](https://opensource.com/sites/default/files/uploads/calcurse-appt-list.png)
+
+待办事项列表是独立运作的。切换到待办面板并(再次)按下 **a**,输入任务的描述,然后设置优先级(1 表示最高,9 表示最低)。Calcurse 会在待办事项面板中列出未完成的任务。
+
+![](https://opensource.com/sites/default/files/uploads/calcurse-todo.png)
+
+如果你的任务描述很长,Calcurse 会把它截断。你可以使用键盘上的上下箭头键选中任务,然后按下 **v** 查看完整描述。
+
+![](https://opensource.com/sites/default/files/uploads/calcurse-view-todo.png)
+
+Calcurse 把它的信息以文本形式保存在你的主目录下一个名为 **.calcurse** 的隐藏文件夹中,例如 **/home/scott/.calcurse**。即使 Calcurse 无法工作了,也很容易找到你的信息。
+
+### 其他有用的功能
+
+Calcurse 的其他功能还包括设置重复预约。要做到这一点,先找到要重复的预约,然后在预约面板中按下 **r**。系统会要求你设置频率(例如,每天或每周),以及你希望这个预约重复多长时间。
+
+你还可以导入 [ICAL][3] 格式的日历,或者以 ICAL 或 [PCAL][4] 格式导出数据。使用 ICAL,你可以与其他日历程序共享数据;使用 PCAL,你可以生成日历的 Postscript 版本。
+
+你还可以把许多命令行参数传递给 Calcurse,具体可以[在文档中][5]阅读。
+
+Calcurse 虽然简单,却可以帮助你保持井井有条。你确实需要对自己的任务和预约更上心一些,但你将能够更好地专注于要做的事,以及要去的地方。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/calcurse
+
+作者:[Scott Nesbitt][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/scottnesbitt
+[b]: https://github.com/lujun9972
+[1]: http://www.calcurse.org/
+[2]: http://www.calcurse.org/downloads/#packages
+[3]: https://tools.ietf.org/html/rfc2445
+[4]: http://pcal.sourceforge.net/
+[5]: http://www.calcurse.org/files/manual.chunked/ar01s04.html#_invocation
diff --git a/translated/tech/20181029 4 open source Android email clients.md b/translated/tech/20181029 4 open source Android email clients.md
new file mode 100644
index 0000000000..285b472234
--- /dev/null
+++ b/translated/tech/20181029 4 open source Android email clients.md
@@ -0,0 +1,77 @@
+四个开源的 Android 邮件客户端
+======
+Email 还没有绝迹,而且如今大部分邮件都是在移动设备上阅读的。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_mail_box_envelope_send_blue.jpg?itok=6Epj47H6)
+
+现在一些年轻人把邮件称为“老年人的交流方式”,然而事实是,邮件绝对还没有消亡。虽然[协作工具][1]、社交媒体和短信很常用,但它们还无法取代邮件这种必不可少的商业(和社交)通信工具。
+
+考虑到邮件还没有消失,并且(很多研究表明)人们都是在移动设备上阅读邮件的,拥有一个好的移动邮件客户端就变得很关键。如果你是一个想使用开源邮件客户端的 Android 用户,事情就变得有点棘手了。
+
+我们提供了四个开源的 Android 邮件客户端供选择。其中两个可以通过
Android 官方应用商店 [Google Play][2] 下载。你也可以在 [Fossdroid][3] 或者 [F-Droid][4] 这些开源 Android 应用库中找到它们。(下方有每个应用的具体下载方式。)
+
+### K-9 Mail
+
+[K-9 Mail][5] 拥有几乎和 Android 一样长的历史:它起源于 Android 1.0 邮件客户端的一个补丁。它支持 IMAP 和 WebDAV、多账户、附件、emoji 和其他经典的邮件客户端功能。它的[用户文档][6]提供了关于安装、启动、安全、阅读和发送邮件等方面的帮助。
+
+K-9 基于 [Apache 2.0][7] 协议开源,[源码][8]可以从 GitHub 上获得。应用可以从 [Google Play][9]、[Amazon][10] 和 [F-Droid][11] 上下载。
+
+### p≡p
+
+正如它的全称 “Pretty Easy Privacy” 所说的那样,[p≡p][12] 主要关注隐私和安全通信。它提供自动的、端到端的邮件和附件加密(但要求你的收件人也能够加密邮件;否则,p≡p 会警告你,这封邮件将以未加密的方式发出)。
+
+你可以从 GitLab 获得[源码][13](基于 [GPLv3][14] 协议),并且可以在应用的官网上找到相应的[文档][15]。应用可以在 [Fossdroid][16] 上免费下载,或者在 [Google Play][17] 上支付一点儿象征性的费用下载。
+
+### InboxPager
+
+[InboxPager][18] 允许你通过 SSL/TLS 协议收发邮件,这也意味着如果你的邮件提供商(比如 Gmail)没有默认开启这个功能,你可能需要做一些设置。(幸运的是,InboxPager 提供了 Gmail 的[设置教程][19]。)它同时也支持通过 OpenKeychain 应用进行 OpenPGP 加密。
+
+InboxPager 基于 [GPLv3][20] 协议,其源码可从 GitHub 获得,应用可以从 [F-Droid][21] 下载。
+
+### FairEmail
+
+[FairEmail][22] 是一个极简的邮件客户端,它的功能集中在读写邮件上,没有任何多余的、可能拖慢客户端的功能。它支持多个帐号和用户、消息会话、加密等等。
+
+它基于 [GPLv3][23] 协议开源,[源码][24]可以从 GitHub 上获得。你可以在 [Fossdroid][25] 上下载 FairEmail;对 Google Play 版本感兴趣的人可以通过[参与软件测试][26]获得应用。
+
+肯定还有更多的开源 Android 邮件客户端(或者上述软件的加强版本)值得关注。如果你知道还有哪些优秀的应用,可以在评论里和我们分享。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/open-source-android-email-clients
+
+作者:[Opensource.com][a]
+选题:[lujun9972][b]
+译者:[zianglei][c]
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com
+[b]: https://github.com/lujun9972
+[c]: https://github.com/zianglei
+[1]: https://opensource.com/alternatives/trello
+[2]: https://play.google.com/store
+[3]: https://fossdroid.com/
+[4]: https://f-droid.org/
+[5]: https://k9mail.github.io/
+[6]: https://k9mail.github.io/documentation.html
+[7]: http://www.apache.org/licenses/LICENSE-2.0
+[8]: https://github.com/k9mail/k-9
+[9]: https://play.google.com/store/apps/details?id=com.fsck.k9
+[10]: https://www.amazon.com/K-9-Dog-Walkers-Mail/dp/B004JK61K0/
+[11]: https://f-droid.org/packages/com.fsck.k9/
+[12]: https://www.pep.security/android.html.en
+[13]: https://pep-security.lu/gitlab/android/pep
+[14]: https://pep-security.lu/gitlab/android/pep/blob/feature/material/LICENSE
+[15]: https://www.pep.security/docs/
+[16]: https://fossdroid.com/a/p%E2%89%A1p.html
+[17]: https://play.google.com/store/apps/details?id=security.pEp
+[18]: https://github.com/itprojects/InboxPager
+[19]: https://github.com/itprojects/InboxPager/blob/HEAD/README.md#gmail-configuration
+[20]: https://github.com/itprojects/InboxPager/blob/c5641a6d644d001bd4cec520b5a96d7e588cb6ad/LICENSE
+[21]: https://f-droid.org/en/packages/net.inbox.pager/
+[22]: https://email.faircode.eu/
+[23]: https://github.com/M66B/open-source-email/blob/master/LICENSE
+[24]: https://github.com/M66B/open-source-email
+[25]: https://fossdroid.com/a/fairemail.html
+[26]: https://play.google.com/apps/testing/eu.faircode.email
diff --git a/translated/tech/20181030 How To Analyze And Explore The Contents Of Docker Images.md b/translated/tech/20181030 How To Analyze And Explore The Contents Of Docker Images.md
new file mode 100644
index 0000000000..8b0021bf26
--- /dev/null
+++ b/translated/tech/20181030 How To Analyze And Explore The Contents Of Docker Images.md
@@ -0,0 +1,94 @@
+如何分析并探索 Docker 容器镜像的内容
+======
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/dive-tool-720x340.png)
+
+或许你已经了解到 Docker
容器镜像是一个轻量、独立、含有运行某个应用所需全部软件的可执行包,这也是为什么容器镜像经常被开发者用来构建和分发应用。假如你很好奇一个 Docker 镜像里面都包含了什么东西,那么这篇简要的指南或许会帮助到你。今天,我们将学习使用一个名为 **Dive** 的工具来分析和探索 Docker 镜像每一层的内容。通过分析 Docker 镜像,我们可以发现各层之间可能重复的文件,并通过移除它们来减小镜像的大小。Dive 不仅仅是一个 Docker 镜像分析工具,它还可以帮助我们构建镜像。Dive 是一个用 Go 编程语言编写的免费开源工具。
+
+### 安装 Dive
+
+首先从该项目的 [**发布页**][1] 下载最新版本,然后像下面展示的那样,根据你所使用的发行版来安装它。
+
+假如你正在使用 **Debian** 或者 **Ubuntu**,那么可以运行下面的命令来下载并安装它:
+
+```
+$ wget https://github.com/wagoodman/dive/releases/download/v0.0.8/dive_0.0.8_linux_amd64.deb
+```
+
+```
+$ sudo apt install ./dive_0.0.8_linux_amd64.deb
+```
+
+**在 RHEL 或 CentOS 系统中:**
+
+```
+$ wget https://github.com/wagoodman/dive/releases/download/v0.0.8/dive_0.0.8_linux_amd64.rpm
+```
+
+```
+$ sudo rpm -i dive_0.0.8_linux_amd64.rpm
+```
+
+Dive 也可以使用 [**Linuxbrew**][2] 包管理器来安装:
+
+```
+$ brew tap wagoodman/dive
+```
+
+```
+$ brew install dive
+```
+
+至于其他的安装方法,请参考 [Dive 项目的 GitHub 网页][3]。
+
+### 分析并探索 Docker 镜像的内容
+
+要分析一个 Docker 镜像,只需要在运行 `dive` 命令时加上镜像的 ID 就可以了。你可以使用 `sudo docker images` 命令来得到 Docker 镜像的 ID。
+
+```
+$ sudo dive ea4c82dcd15a
+```
+
+上面命令中的 **ea4c82dcd15a** 就是某个镜像的 ID。
+
+然后,`dive` 命令将快速地分析给定 Docker 镜像的内容,并在终端中展示出来。
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Dive-1.png)
+
+正如你在上面的截图中看到的那样,终端的左边一栏列出了给定 Docker 镜像的各个层及其详细内容、浪费的空间大小等信息;右边一栏则给出了该镜像每一层的具体内容。你可以使用 **Ctrl+空格** 在左右栏之间切换,使用**上下方向键**在目录树中浏览。
+
+下面是 `dive` 的快捷键列表:
+
+ * **Ctrl+Spacebar** – 在左右栏之间切换
+ * **Spacebar** – 展开或收起目录树
+ * **Ctrl+A** – 文件树视图:展示或隐藏增加的文件
+ * **Ctrl+R** – 文件树视图:展示或隐藏被移除的文件
+ * **Ctrl+M** – 文件树视图:展示或隐藏被修改的文件
+ * **Ctrl+U** – 文件树视图:展示或隐藏未修改的文件
+ * **Ctrl+L** – 层视图:展示当前层的变化
+ * **Ctrl+A** – 层视图:展示总的变化
+ * **Ctrl+/** – 筛选文件
+ * **Ctrl+C** – 退出
+
+在上面的例子中,我使用了 `sudo` 权限,这是因为我的 Docker 镜像存储在 **/var/lib/docker/** 目录中。假如你的镜像保存在你的家目录 `$HOME`,或者其他不属于 `root` 用户的目录中,你就没有必要使用 `sudo` 命令。
+
+你还可以使用下面这一条命令来构建一个 Docker 镜像并立刻分析它(其中 `<some-tag>` 是你要为镜像打上的标签):
+
+```
+$ dive build -t <some-tag> .
+```
+
+Dive 工具仍处于 beta 阶段,所以可能会存在 bug。假如你遇到了 bug,请到该项目的 GitHub 主页上报告。
+
+好了,这就是今天的全部内容。现在你知道了如何使用 Dive 工具来探索和分析 Docker 容器镜像的内容,以及如何利用它来构建镜像。希望本文对你有所帮助。
+
+更多精彩内容即将呈现,请保持关注!
+
+干杯!
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-analyze-and-explore-the-contents-of-docker-images/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[FSSlc](https://github.com/FSSlc)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://github.com/wagoodman/dive/releases
+[2]: https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
+[3]: https://github.com/wagoodman/dive
\ No newline at end of file